qemu/hw/virtio/virtio-mem.c

/*
* Virtio MEM device
*
* Copyright (C) 2020 Red Hat, Inc.
*
* Authors:
* David Hildenbrand <david@redhat.com>
*
* This work is licensed under the terms of the GNU GPL, version 2.
* See the COPYING file in the top-level directory.
*/
#include "qemu/osdep.h"
#include "qemu/iov.h"
#include "qemu/cutils.h"
#include "qemu/error-report.h"
#include "qemu/units.h"
#include "sysemu/numa.h"
#include "sysemu/sysemu.h"
#include "sysemu/reset.h"
virtio-mem: Support "x-ignore-shared" migration To achieve desired "x-ignore-shared" functionality, we should not discard all RAM when realizing the device and not mess with preallocation/postcopy when loading device state. In essence, we should not touch RAM content. As "x-ignore-shared" gets set after realizing the device, we cannot rely on that. Let's simply skip discarding of RAM on incoming migration. Note that virtio_mem_post_load() will call virtio_mem_restore_unplugged() -- unless "x-ignore-shared" is set. So once migration finished we'll have a consistent state. The initial system reset will also not discard any RAM, because virtio_mem_unplug_all() will not call virtio_mem_unplug_all() when no memory is plugged (which is the case before loading the device state). Note that something like VM templating -- see commit b17fbbe55cba ("migration: allow private destination ram with x-ignore-shared") -- is currently incompatible with virtio-mem and ram_block_discard_range() will warn in case a private file mapping is supplied by virtio-mem. For VM templating with virtio-mem, it makes more sense to either (a) Create the template without the virtio-mem device and hotplug a virtio-mem device to the new VM instances using proper own memory backend. (b) Use a virtio-mem device that doesn't provide any memory in the template (requested-size=0) and use private anonymous memory. Message-ID: <20230706075612.67404-5-david@redhat.com> Tested-by: Mario Casquero <mcasquer@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: David Hildenbrand <david@redhat.com>
2023-07-06 10:56:09 +03:00
#include "sysemu/runstate.h"
#include "hw/virtio/virtio.h"
#include "hw/virtio/virtio-bus.h"
#include "hw/virtio/virtio-mem.h"
#include "qapi/error.h"
#include "qapi/visitor.h"
#include "exec/ram_addr.h"
#include "migration/misc.h"
#include "hw/boards.h"
#include "hw/qdev-properties.h"
#include CONFIG_DEVICES
#include "trace.h"
static const VMStateDescription vmstate_virtio_mem_device_early;
/*
* We only had legacy x86 guests that did not support
* VIRTIO_MEM_F_UNPLUGGED_INACCESSIBLE. Other targets don't have legacy guests.
*/
#if defined(TARGET_X86_64) || defined(TARGET_I386)
#define VIRTIO_MEM_HAS_LEGACY_GUESTS
#endif
/*
* Let's not allow blocks smaller than 1 MiB, for example, to keep the tracking
* bitmap small.
*/
#define VIRTIO_MEM_MIN_BLOCK_SIZE ((uint32_t)(1 * MiB))
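/*
 * For illustration: with the minimum 1 MiB block size, tracking a 1 TiB device
 * region takes 2^20 bitmap bits, i.e., 128 KiB of bitmap.
 */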
static uint32_t virtio_mem_default_thp_size(void)
{
    uint32_t default_thp_size = VIRTIO_MEM_MIN_BLOCK_SIZE;

#if defined(__x86_64__) || defined(__arm__) || defined(__powerpc64__)
    default_thp_size = 2 * MiB;
#elif defined(__aarch64__)
    if (qemu_real_host_page_size() == 4 * KiB) {
        default_thp_size = 2 * MiB;
    } else if (qemu_real_host_page_size() == 16 * KiB) {
        default_thp_size = 32 * MiB;
    } else if (qemu_real_host_page_size() == 64 * KiB) {
        default_thp_size = 512 * MiB;
    }
#endif

    return default_thp_size;
}
/*
* The minimum memslot size depends on this setting ("sane default"), the
* device block size, and the memory backend page size. The last (or single)
* memslot might be smaller than this constant.
*/
#define VIRTIO_MEM_MIN_MEMSLOT_SIZE (1 * GiB)
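/*
 * For example, if the computed memslot size is 1 GiB, a 3.5 GiB device region
 * ends up with three 1 GiB memslots plus a final 512 MiB memslot.
 */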
/*
* We want to have a reasonable default block size such that
* 1. We avoid splitting THPs when unplugging memory, which degrades
* performance.
* 2. We avoid placing THPs for plugged blocks that also cover unplugged
* blocks.
*
* The actual THP size might differ between Linux kernels, so we try to probe
* it. In the future (if we ever run into issues regarding 2.), we might want
* to disable THP in case we fail to properly probe the THP size, or if the
* block size is configured smaller than the THP size.
*/
static uint32_t thp_size;
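/* sysfs file through which Linux exposes the PMD-level THP size in bytes */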
#define HPAGE_PMD_SIZE_PATH "/sys/kernel/mm/transparent_hugepage/hpage_pmd_size"
static uint32_t virtio_mem_thp_size(void)
{
    gchar *content = NULL;
    const char *endptr;
    uint64_t tmp;

    if (thp_size) {
        return thp_size;
    }

    /*
     * Try to probe the actual THP size, fall back to (sane but possibly
     * incorrect) default sizes.
     */
    if (g_file_get_contents(HPAGE_PMD_SIZE_PATH, &content, NULL, NULL) &&
        !qemu_strtou64(content, &endptr, 0, &tmp) &&
        (!endptr || *endptr == '\n')) {
        /* Sanity-check the value and fall back to something reasonable. */
        if (!tmp || !is_power_of_2(tmp)) {
            warn_report("Read unsupported THP size: %" PRIx64, tmp);
        } else {
            thp_size = tmp;
        }
    }

    if (!thp_size) {
        thp_size = virtio_mem_default_thp_size();
        warn_report("Could not detect THP size, falling back to %" PRIx64
                    " MiB.", thp_size / MiB);
    }

    g_free(content);
    return thp_size;
}
static uint64_t virtio_mem_default_block_size(RAMBlock *rb)
{
    const uint64_t page_size = qemu_ram_pagesize(rb);

    /* We can have hugetlbfs with a page size smaller than the THP size. */
    if (page_size == qemu_real_host_page_size()) {
        return MAX(page_size, virtio_mem_thp_size());
    }
    return MAX(page_size, VIRTIO_MEM_MIN_BLOCK_SIZE);
}
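/*
 * Examples: with an ordinary 4 KiB page size backend, this yields the THP size
 * (typically 2 MiB); with 2 MiB or 1 GiB hugetlbfs backends, it yields the
 * hugetlb page size itself.
 */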
#if defined(VIRTIO_MEM_HAS_LEGACY_GUESTS)
static bool virtio_mem_has_shared_zeropage(RAMBlock *rb)
{
    /*
     * We only have a guaranteed shared zeropage on ordinary MAP_PRIVATE
     * anonymous RAM. In any other case, reading unplugged memory *can*
     * populate a fresh page, consuming actual memory.
     */
    return !qemu_ram_is_shared(rb) && qemu_ram_get_fd(rb) < 0 &&
           qemu_ram_pagesize(rb) == qemu_real_host_page_size();
}
#endif /* VIRTIO_MEM_HAS_LEGACY_GUESTS */
/*
* Size the usable region bigger than the requested size if possible. Esp.
* Linux guests will only add (aligned) memory blocks in case they fully
* fit into the usable region, but plug+online only a subset of the pages.
* The memory block size corresponds mostly to the section size.
*
* This allows e.g., to add 20MB with a section size of 128MB on x86_64, and
* a section size of 512MB on arm64 (as long as the start address is properly
* aligned, similar to ordinary DIMMs).
*
* We can change this at any time and maybe even make it configurable if
* necessary (as the section size can change). But it's more likely that the
* section size will rather get smaller and not bigger over time.
*/
#if defined(TARGET_X86_64) || defined(TARGET_I386)
#define VIRTIO_MEM_USABLE_EXTENT (2 * (128 * MiB))
#elif defined(TARGET_ARM)
#define VIRTIO_MEM_USABLE_EXTENT (2 * (512 * MiB))
#else
#error VIRTIO_MEM_USABLE_EXTENT not defined
#endif
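/*
 * Sketch of the effect of the defines above: on x86-64 the usable region may
 * extend up to 2 * 128 MiB = 256 MiB beyond the requested size (on arm64 up to
 * 1 GiB), never exceeding the size of the backing memory region.
 */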
static bool virtio_mem_is_busy(void)
{
    /*
     * Postcopy cannot handle concurrent discards and we don't want to migrate
     * pages on-demand with stale content when plugging new blocks.
     *
     * For precopy, we don't want unplugged blocks in our migration stream, and
     * when plugging new blocks, the page content might differ between source
     * and destination (observable by the guest when not initializing pages
     * after plugging them) until we're running on the destination (as we didn't
     * migrate these blocks when they were unplugged).
     */
    return migration_in_incoming_postcopy() || !migration_is_idle();
}
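
/*
 * Range callback: invoked with the byte offset into the device memory region
 * and the size of one contiguous run of blocks; a nonzero return value stops
 * the iteration.
 */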
typedef int (*virtio_mem_range_cb)(VirtIOMEM *vmem, void *arg,
                                   uint64_t offset, uint64_t size);
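
/* Invoke the callback once for each contiguous range of unplugged blocks. */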
static int virtio_mem_for_each_unplugged_range(VirtIOMEM *vmem, void *arg,
                                               virtio_mem_range_cb cb)
{
    unsigned long first_zero_bit, last_zero_bit;
    uint64_t offset, size;
    int ret = 0;

    first_zero_bit = find_first_zero_bit(vmem->bitmap, vmem->bitmap_size);
    while (first_zero_bit < vmem->bitmap_size) {
        offset = first_zero_bit * vmem->block_size;
        last_zero_bit = find_next_bit(vmem->bitmap, vmem->bitmap_size,
                                      first_zero_bit + 1) - 1;
        size = (last_zero_bit - first_zero_bit + 1) * vmem->block_size;

        ret = cb(vmem, arg, offset, size);
        if (ret) {
            break;
        }
        first_zero_bit = find_next_zero_bit(vmem->bitmap, vmem->bitmap_size,
                                            last_zero_bit + 2);
    }
    return ret;
}
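
/* Invoke the callback once for each contiguous range of plugged blocks. */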
static int virtio_mem_for_each_plugged_range(VirtIOMEM *vmem, void *arg,
                                             virtio_mem_range_cb cb)
{
    unsigned long first_bit, last_bit;
    uint64_t offset, size;
    int ret = 0;

    first_bit = find_first_bit(vmem->bitmap, vmem->bitmap_size);
    while (first_bit < vmem->bitmap_size) {
        offset = first_bit * vmem->block_size;
        last_bit = find_next_zero_bit(vmem->bitmap, vmem->bitmap_size,
                                      first_bit + 1) - 1;
        size = (last_bit - first_bit + 1) * vmem->block_size;

        ret = cb(vmem, arg, offset, size);
        if (ret) {
            break;
        }
        first_bit = find_next_bit(vmem->bitmap, vmem->bitmap_size,
                                  last_bit + 2);
    }
    return ret;
}

/*
 * Adjust the memory section to cover the intersection with the given range.
 *
 * Returns false if the intersection is empty, otherwise returns true.
 */
static bool virtio_mem_intersect_memory_section(MemoryRegionSection *s,
                                                uint64_t offset, uint64_t size)
{
    uint64_t start = MAX(s->offset_within_region, offset);
    uint64_t end = MIN(s->offset_within_region + int128_get64(s->size),
                       offset + size);

    if (end <= start) {
        return false;
    }

    s->offset_within_address_space += start - s->offset_within_region;
    s->offset_within_region = start;
    s->size = int128_make64(end - start);
    return true;
}

typedef int (*virtio_mem_section_cb)(MemoryRegionSection *s, void *arg);
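
/*
 * Invoke the callback for each plugged range intersecting the given memory
 * section, passing a temporary copy of the section clipped to that range.
 */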
static int virtio_mem_for_each_plugged_section(const VirtIOMEM *vmem,
                                               MemoryRegionSection *s,
                                               void *arg,
                                               virtio_mem_section_cb cb)
{
    unsigned long first_bit, last_bit;
    uint64_t offset, size;
    int ret = 0;

    first_bit = s->offset_within_region / vmem->block_size;
    first_bit = find_next_bit(vmem->bitmap, vmem->bitmap_size, first_bit);
    while (first_bit < vmem->bitmap_size) {
        MemoryRegionSection tmp = *s;

        offset = first_bit * vmem->block_size;
        last_bit = find_next_zero_bit(vmem->bitmap, vmem->bitmap_size,
                                      first_bit + 1) - 1;
        size = (last_bit - first_bit + 1) * vmem->block_size;

        if (!virtio_mem_intersect_memory_section(&tmp, offset, size)) {
            break;
        }
        ret = cb(&tmp, arg);
        if (ret) {
            break;
        }
        first_bit = find_next_bit(vmem->bitmap, vmem->bitmap_size,
                                  last_bit + 2);
    }
    return ret;
}
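
/*
 * Invoke the callback for each unplugged range intersecting the given memory
 * section, passing a temporary copy of the section clipped to that range.
 */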
static int virtio_mem_for_each_unplugged_section(const VirtIOMEM *vmem,
                                                 MemoryRegionSection *s,
                                                 void *arg,
                                                 virtio_mem_section_cb cb)
{
    unsigned long first_bit, last_bit;
    uint64_t offset, size;
    int ret = 0;

    first_bit = s->offset_within_region / vmem->block_size;
    first_bit = find_next_zero_bit(vmem->bitmap, vmem->bitmap_size, first_bit);
    while (first_bit < vmem->bitmap_size) {
        MemoryRegionSection tmp = *s;

        offset = first_bit * vmem->block_size;
        last_bit = find_next_bit(vmem->bitmap, vmem->bitmap_size,
                                 first_bit + 1) - 1;
        size = (last_bit - first_bit + 1) * vmem->block_size;

        if (!virtio_mem_intersect_memory_section(&tmp, offset, size)) {
            break;
        }
        ret = cb(&tmp, arg);
        if (ret) {
            break;
        }
        first_bit = find_next_zero_bit(vmem->bitmap, vmem->bitmap_size,
                                       last_bit + 2);
    }
    return ret;
}
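
/* Adapters that forward a memory section to a RamDiscardListener. */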
static int virtio_mem_notify_populate_cb(MemoryRegionSection *s, void *arg)
{
    RamDiscardListener *rdl = arg;

    return rdl->notify_populate(rdl, s);
}

static int virtio_mem_notify_discard_cb(MemoryRegionSection *s, void *arg)
{
    RamDiscardListener *rdl = arg;

    rdl->notify_discard(rdl, s);
    return 0;
}
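
/* Notify all registered listeners about a range that just got unplugged. */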
static void virtio_mem_notify_unplug(VirtIOMEM *vmem, uint64_t offset,
                                     uint64_t size)
{
    RamDiscardListener *rdl;

    QLIST_FOREACH(rdl, &vmem->rdl_list, next) {
        MemoryRegionSection tmp = *rdl->section;

        if (!virtio_mem_intersect_memory_section(&tmp, offset, size)) {
            continue;
        }
        rdl->notify_discard(rdl, &tmp);
    }
}
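
/*
 * Notify all registered listeners that a range is getting plugged (populated).
 * If any listener fails, already-notified listeners get a discard
 * notification to roll back.
 */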
static int virtio_mem_notify_plug(VirtIOMEM *vmem, uint64_t offset,
                                  uint64_t size)
{
    RamDiscardListener *rdl, *rdl2;
    int ret = 0;

    QLIST_FOREACH(rdl, &vmem->rdl_list, next) {
        MemoryRegionSection tmp = *rdl->section;

        if (!virtio_mem_intersect_memory_section(&tmp, offset, size)) {
            continue;
        }
        ret = rdl->notify_populate(rdl, &tmp);
        if (ret) {
            break;
        }
    }

    if (ret) {
        /* Notify all already-notified listeners. */
        QLIST_FOREACH(rdl2, &vmem->rdl_list, next) {
            MemoryRegionSection tmp = *rdl2->section;

            if (rdl2 == rdl) {
                break;
            }
            if (!virtio_mem_intersect_memory_section(&tmp, offset, size)) {
                continue;
            }
            rdl2->notify_discard(rdl2, &tmp);
        }
    }
    return ret;
}
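
/* Notify all listeners that all currently plugged memory gets discarded. */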
static void virtio_mem_notify_unplug_all(VirtIOMEM *vmem)
{
    RamDiscardListener *rdl;

    if (!vmem->size) {
        return;
    }

    QLIST_FOREACH(rdl, &vmem->rdl_list, next) {
        if (rdl->double_discard_supported) {
            rdl->notify_discard(rdl, rdl->section);
        } else {
            virtio_mem_for_each_plugged_section(vmem, rdl->section, rdl,
                                                virtio_mem_notify_discard_cb);
        }
    }
}
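
/* Check whether all blocks in the given GPA range are plugged. */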
static bool virtio_mem_is_range_plugged(const VirtIOMEM *vmem,
                                        uint64_t start_gpa, uint64_t size)
{
    const unsigned long first_bit = (start_gpa - vmem->addr) / vmem->block_size;
    const unsigned long last_bit = first_bit + (size / vmem->block_size) - 1;
    unsigned long found_bit;

    /* We fake a shorter bitmap to avoid searching too far. */
    found_bit = find_next_zero_bit(vmem->bitmap, last_bit + 1, first_bit);
    return found_bit > last_bit;
}
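
/* Check whether all blocks in the given GPA range are unplugged. */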
static bool virtio_mem_is_range_unplugged(const VirtIOMEM *vmem,
                                          uint64_t start_gpa, uint64_t size)
{
    const unsigned long first_bit = (start_gpa - vmem->addr) / vmem->block_size;
    const unsigned long last_bit = first_bit + (size / vmem->block_size) - 1;
    unsigned long found_bit;

    /* We fake a shorter bitmap to avoid searching too far. */
    found_bit = find_next_bit(vmem->bitmap, last_bit + 1, first_bit);
    return found_bit > last_bit;
}
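
/* Mark all blocks in the given GPA range as plugged in the bitmap. */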
static void virtio_mem_set_range_plugged(VirtIOMEM *vmem, uint64_t start_gpa,
                                         uint64_t size)
{
    const unsigned long bit = (start_gpa - vmem->addr) / vmem->block_size;
    const unsigned long nbits = size / vmem->block_size;

    bitmap_set(vmem->bitmap, bit, nbits);
}
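
/* Mark all blocks in the given GPA range as unplugged in the bitmap. */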
static void virtio_mem_set_range_unplugged(VirtIOMEM *vmem, uint64_t start_gpa,
                                           uint64_t size)
{
    const unsigned long bit = (start_gpa - vmem->addr) / vmem->block_size;
    const unsigned long nbits = size / vmem->block_size;

    bitmap_clear(vmem->bitmap, bit, nbits);
}
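
/*
 * Copy the response into the request's in-buffer, push it onto the virtqueue
 * and notify the guest.
 */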
static void virtio_mem_send_response(VirtIOMEM *vmem, VirtQueueElement *elem,
                                     struct virtio_mem_resp *resp)
{
    VirtIODevice *vdev = VIRTIO_DEVICE(vmem);
    VirtQueue *vq = vmem->vq;

    trace_virtio_mem_send_response(le16_to_cpu(resp->type));
    iov_from_buf(elem->in_sg, elem->in_num, 0, resp, sizeof(*resp));

    virtqueue_push(vq, elem, sizeof(*resp));
    virtio_notify(vdev, vq);
}

static void virtio_mem_send_response_simple(VirtIOMEM *vmem,
                                            VirtQueueElement *elem,
                                            uint16_t type)
{
    struct virtio_mem_resp resp = {
        .type = cpu_to_le16(type),
    };

    virtio_mem_send_response(vmem, elem, &resp);
}
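
/*
 * Check that the requested range is block-aligned, does not overflow and lies
 * completely within the usable device region.
 */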
static bool virtio_mem_valid_range(const VirtIOMEM *vmem, uint64_t gpa,
                                   uint64_t size)
{
    if (!QEMU_IS_ALIGNED(gpa, vmem->block_size)) {
        return false;
    }
    if (gpa + size < gpa || !size) {
        return false;
    }
    if (gpa < vmem->addr || gpa >= vmem->addr + vmem->usable_region_size) {
        return false;
    }
    if (gpa + size > vmem->addr + vmem->usable_region_size) {
        return false;
    }
    return true;
}
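
/* Map the memslot into the device container, unless it is already mapped. */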
static void virtio_mem_activate_memslot(VirtIOMEM *vmem, unsigned int idx)
{
    const uint64_t memslot_offset = idx * vmem->memslot_size;

    assert(vmem->memslots);

    /*
     * Instead of enabling/disabling memslots, we add/remove them. This should
     * make address space updates faster, because we don't have to loop over
     * many disabled subregions.
     */
    if (memory_region_is_mapped(&vmem->memslots[idx])) {
        return;
    }
    memory_region_add_subregion(vmem->mr, memslot_offset, &vmem->memslots[idx]);
}

static void virtio_mem_deactivate_memslot(VirtIOMEM *vmem, unsigned int idx)
{
    assert(vmem->memslots);

    if (!memory_region_is_mapped(&vmem->memslots[idx])) {
        return;
    }
    memory_region_del_subregion(vmem->mr, &vmem->memslots[idx]);
}
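
/*
 * Activate all memslots covering the given range in a single transaction, so
 * the blocks that are about to get plugged become accessible to the guest.
 */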
static void virtio_mem_activate_memslots_to_plug(VirtIOMEM *vmem,
                                                 uint64_t offset, uint64_t size)
{
    const unsigned int start_idx = offset / vmem->memslot_size;
    const unsigned int end_idx = (offset + size + vmem->memslot_size - 1) /
                                 vmem->memslot_size;
    unsigned int idx;

    assert(vmem->dynamic_memslots);

    /* Activate all involved memslots in a single transaction. */
    memory_region_transaction_begin();
    for (idx = start_idx; idx < end_idx; idx++) {
        virtio_mem_activate_memslot(vmem, idx);
    }
    memory_region_transaction_commit();
}
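
/*
 * Deactivate all memslots covering the given unplugged range; memslots that
 * are only partially covered remain active unless they are completely
 * unplugged.
 */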
static void virtio_mem_deactivate_unplugged_memslots(VirtIOMEM *vmem,
                                                     uint64_t offset,
                                                     uint64_t size)
{
    const uint64_t region_size = memory_region_size(&vmem->memdev->mr);
    const unsigned int start_idx = offset / vmem->memslot_size;
    const unsigned int end_idx = (offset + size + vmem->memslot_size - 1) /
                                 vmem->memslot_size;
    unsigned int idx;

    assert(vmem->dynamic_memslots);

    /* Deactivate all memslots with unplugged blocks in a single transaction. */
    memory_region_transaction_begin();
    for (idx = start_idx; idx < end_idx; idx++) {
        const uint64_t memslot_offset = idx * vmem->memslot_size;
        uint64_t memslot_size = vmem->memslot_size;

        /* The size of the last memslot might be smaller. */
        if (idx == vmem->nb_memslots - 1) {
            memslot_size = region_size - memslot_offset;
        }

        /*
         * Partially covered memslots might still have some blocks plugged and
         * have to remain active if that's the case.
         */
        if (offset > memslot_offset ||
            offset + size < memslot_offset + memslot_size) {
            const uint64_t gpa = vmem->addr + memslot_offset;

            if (!virtio_mem_is_range_unplugged(vmem, gpa, memslot_size)) {
                continue;
            }
        }
        virtio_mem_deactivate_memslot(vmem, idx);
    }
    memory_region_transaction_commit();
}
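
/*
 * Plug or unplug all blocks in the given range, discarding the backing RAM
 * when unplugging.
 */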
static int virtio_mem_set_block_state(VirtIOMEM *vmem, uint64_t start_gpa,
                                      uint64_t size, bool plug)
{
    const uint64_t offset = start_gpa - vmem->addr;
    RAMBlock *rb = vmem->memdev->mr.ram_block;
    int ret = 0;
if (virtio_mem_is_busy()) {
return -EBUSY;
}
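/*
 * Unplug: discard the backing memory of the range, notify listeners and
 * mark the blocks unplugged in the bitmap.
 */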
if (!plug) {
if (ram_block_discard_range(rb, offset, size)) {
return -EBUSY;
}
virtio_mem_notify_unplug(vmem, offset, size);
virtio_mem_set_range_unplugged(vmem, start_gpa, size);
/* Deactivate completely unplugged memslots after updating the state. */
if (vmem->dynamic_memslots) {
virtio_mem_deactivate_unplugged_memslots(vmem, offset, size);
}
return 0;
}
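/*
 * Plug: with "prealloc=on", preallocate the backing memory first, so that
 * plugging can fail gracefully instead of the VM crashing later on access
 * (e.g., when huge pages are not actually available).
 */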
if (vmem->prealloc) {
void *area = memory_region_get_ram_ptr(&vmem->memdev->mr) + offset;
int fd = memory_region_get_fd(&vmem->memdev->mr);
Error *local_err = NULL;
qemu_prealloc_mem(fd, area, size, 1, NULL, &local_err);
if (local_err) {
static bool warned;
/*
* Warn only once; we don't want to fill the log with these
* warnings.
*/
if (!warned) {
warn_report_err(local_err);
warned = true;
} else {
error_free(local_err);
}
ret = -EBUSY;
}
}
if (!ret) {
/*
* Activate before notifying and rollback in case of any errors.
*
* When activating a yet inactive memslot, memory notifiers will get
* notified about the added memory region and can register with the
* RamDiscardManager; this will traverse all plugged blocks and skip the
* blocks we are plugging here. The following notification will inform
* registered listeners about the blocks we're plugging.
*/
if (vmem->dynamic_memslots) {
virtio_mem_activate_memslots_to_plug(vmem, offset, size);
}
ret = virtio_mem_notify_plug(vmem, offset, size);
if (ret && vmem->dynamic_memslots) {
virtio_mem_deactivate_unplugged_memslots(vmem, offset, size);
}
}
if (ret) {
/* Could be a preallocation failure or memory populated by a notifier. */
ram_block_discard_range(vmem->memdev->mr.ram_block, offset, size);
return -EBUSY;
}
virtio_mem_set_range_plugged(vmem, start_gpa, size);
return 0;
}
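/*
 * Validate a guest plug/unplug request for the given range, perform the
 * state change and update the exposed device size accordingly.
 */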
static int virtio_mem_state_change_request(VirtIOMEM *vmem, uint64_t gpa,
uint16_t nb_blocks, bool plug)
{
const uint64_t size = nb_blocks * vmem->block_size;
int ret;
if (!virtio_mem_valid_range(vmem, gpa, size)) {
return VIRTIO_MEM_RESP_ERROR;
}
if (plug && (vmem->size + size > vmem->requested_size)) {
return VIRTIO_MEM_RESP_NACK;
}
/* Test that all blocks are really in the opposite state. */
if ((plug && !virtio_mem_is_range_unplugged(vmem, gpa, size)) ||
(!plug && !virtio_mem_is_range_plugged(vmem, gpa, size))) {
return VIRTIO_MEM_RESP_ERROR;
}
ret = virtio_mem_set_block_state(vmem, gpa, size, plug);
if (ret) {
return VIRTIO_MEM_RESP_BUSY;
}
if (plug) {
vmem->size += size;
} else {
vmem->size -= size;
}
notifier_list_notify(&vmem->size_change_notifiers, &vmem->size);
return VIRTIO_MEM_RESP_ACK;
}
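/* Handle a guest request to plug a number of consecutive blocks at gpa. */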
static void virtio_mem_plug_request(VirtIOMEM *vmem, VirtQueueElement *elem,
struct virtio_mem_req *req)
{
const uint64_t gpa = le64_to_cpu(req->u.plug.addr);
const uint16_t nb_blocks = le16_to_cpu(req->u.plug.nb_blocks);
uint16_t type;
trace_virtio_mem_plug_request(gpa, nb_blocks);
type = virtio_mem_state_change_request(vmem, gpa, nb_blocks, true);
virtio_mem_send_response_simple(vmem, elem, type);
}
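/* Handle a guest request to unplug a number of consecutive blocks at gpa. */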
static void virtio_mem_unplug_request(VirtIOMEM *vmem, VirtQueueElement *elem,
struct virtio_mem_req *req)
{
const uint64_t gpa = le64_to_cpu(req->u.unplug.addr);
const uint16_t nb_blocks = le16_to_cpu(req->u.unplug.nb_blocks);
uint16_t type;
trace_virtio_mem_unplug_request(gpa, nb_blocks);
type = virtio_mem_state_change_request(vmem, gpa, nb_blocks, false);
virtio_mem_send_response_simple(vmem, elem, type);
}
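/*
 * Grow (or shrink, if allowed to) the usable region to the requested size
 * plus some extra extent, aligned up to the block size and capped at the
 * size of the memory backend.
 */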
static void virtio_mem_resize_usable_region(VirtIOMEM *vmem,
uint64_t requested_size,
bool can_shrink)
{
uint64_t newsize = MIN(memory_region_size(&vmem->memdev->mr),
requested_size + VIRTIO_MEM_USABLE_EXTENT);
/* The usable region size always has to be a multiple of the block size. */
newsize = QEMU_ALIGN_UP(newsize, vmem->block_size);
if (!requested_size) {
newsize = 0;
}
if (newsize < vmem->usable_region_size && !can_shrink) {
return;
}
trace_virtio_mem_resized_usable_region(vmem->usable_region_size, newsize);
vmem->usable_region_size = newsize;
}
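/* Unplug all plugged memory blocks of this virtio-mem device. */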
static int virtio_mem_unplug_all(VirtIOMEM *vmem)
{
    const uint64_t region_size = memory_region_size(&vmem->memdev->mr);
    RAMBlock *rb = vmem->memdev->mr.ram_block;

    if (vmem->size) {
        if (virtio_mem_is_busy()) {
            return -EBUSY;
        }

        if (ram_block_discard_range(rb, 0, qemu_ram_get_used_length(rb))) {
            return -EBUSY;
        }
        virtio_mem_notify_unplug_all(vmem);

        bitmap_clear(vmem->bitmap, 0, vmem->bitmap_size);
        vmem->size = 0;
        notifier_list_notify(&vmem->size_change_notifiers, &vmem->size);
        /* Deactivate all memslots after updating the state. */
        if (vmem->dynamic_memslots) {
            virtio_mem_deactivate_unplugged_memslots(vmem, 0, region_size);
        }
    }

    trace_virtio_mem_unplugged_all();
    virtio_mem_resize_usable_region(vmem, vmem->requested_size, true);
    return 0;
}
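
/* Handle a VIRTIO_MEM_REQ_UNPLUG_ALL request from the guest driver. */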
static void virtio_mem_unplug_all_request(VirtIOMEM *vmem,
                                          VirtQueueElement *elem)
{
    trace_virtio_mem_unplug_all_request();
    if (virtio_mem_unplug_all(vmem)) {
        virtio_mem_send_response_simple(vmem, elem, VIRTIO_MEM_RESP_BUSY);
    } else {
        virtio_mem_send_response_simple(vmem, elem, VIRTIO_MEM_RESP_ACK);
    }
}
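
/*
 * Handle a VIRTIO_MEM_REQ_STATE request: report whether the requested
 * range of blocks is currently plugged, unplugged, or mixed.
 */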
static void virtio_mem_state_request(VirtIOMEM *vmem, VirtQueueElement *elem,
                                     struct virtio_mem_req *req)
{
    const uint16_t nb_blocks = le16_to_cpu(req->u.state.nb_blocks);
    const uint64_t gpa = le64_to_cpu(req->u.state.addr);
    const uint64_t size = nb_blocks * vmem->block_size;
    struct virtio_mem_resp resp = {
        .type = cpu_to_le16(VIRTIO_MEM_RESP_ACK),
    };

    trace_virtio_mem_state_request(gpa, nb_blocks);
    if (!virtio_mem_valid_range(vmem, gpa, size)) {
        virtio_mem_send_response_simple(vmem, elem, VIRTIO_MEM_RESP_ERROR);
        return;
    }

    if (virtio_mem_is_range_plugged(vmem, gpa, size)) {
        resp.u.state.state = cpu_to_le16(VIRTIO_MEM_STATE_PLUGGED);
    } else if (virtio_mem_is_range_unplugged(vmem, gpa, size)) {
        resp.u.state.state = cpu_to_le16(VIRTIO_MEM_STATE_UNPLUGGED);
    } else {
        resp.u.state.state = cpu_to_le16(VIRTIO_MEM_STATE_MIXED);
    }
    trace_virtio_mem_state_response(le16_to_cpu(resp.u.state.state));
    virtio_mem_send_response(vmem, elem, &resp);
}
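
/*
 * Virtqueue handler: pop guest requests, validate the request and response
 * buffer sizes, and dispatch to the matching request handler based on the
 * request type.
 */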
static void virtio_mem_handle_request(VirtIODevice *vdev, VirtQueue *vq)
{
    const int len = sizeof(struct virtio_mem_req);
    VirtIOMEM *vmem = VIRTIO_MEM(vdev);
    VirtQueueElement *elem;
    struct virtio_mem_req req;
    uint16_t type;

    while (true) {
        elem = virtqueue_pop(vq, sizeof(VirtQueueElement));
        if (!elem) {
            return;
        }

        if (iov_to_buf(elem->out_sg, elem->out_num, 0, &req, len) < len) {
            virtio_error(vdev, "virtio-mem protocol violation: invalid request"
                         " size: %d", len);
            virtqueue_detach_element(vq, elem, 0);
            g_free(elem);
            return;
        }

        if (iov_size(elem->in_sg, elem->in_num) <
            sizeof(struct virtio_mem_resp)) {
            virtio_error(vdev, "virtio-mem protocol violation: not enough space"
                         " for response: %zu",
                         iov_size(elem->in_sg, elem->in_num));
            virtqueue_detach_element(vq, elem, 0);
            g_free(elem);
            return;
        }

        type = le16_to_cpu(req.type);
        switch (type) {
        case VIRTIO_MEM_REQ_PLUG:
            virtio_mem_plug_request(vmem, elem, &req);
            break;
        case VIRTIO_MEM_REQ_UNPLUG:
            virtio_mem_unplug_request(vmem, elem, &req);
            break;
        case VIRTIO_MEM_REQ_UNPLUG_ALL:
            virtio_mem_unplug_all_request(vmem, elem);
            break;
        case VIRTIO_MEM_REQ_STATE:
            virtio_mem_state_request(vmem, elem, &req);
            break;
        default:
            virtio_error(vdev, "virtio-mem protocol violation: unknown request"
                         " type: %d", type);
            virtqueue_detach_element(vq, elem, 0);
            g_free(elem);
            return;
        }

        g_free(elem);
    }
}
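
/* Fill in the guest-visible device configuration (little-endian fields). */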
static void virtio_mem_get_config(VirtIODevice *vdev, uint8_t *config_data)
{
    VirtIOMEM *vmem = VIRTIO_MEM(vdev);
    struct virtio_mem_config *config = (void *) config_data;

    config->block_size = cpu_to_le64(vmem->block_size);
    config->node_id = cpu_to_le16(vmem->node);
    config->requested_size = cpu_to_le64(vmem->requested_size);
    config->plugged_size = cpu_to_le64(vmem->size);
    config->addr = cpu_to_le64(vmem->addr);
    config->region_size = cpu_to_le64(memory_region_size(&vmem->memdev->mr));
    config->usable_region_size = cpu_to_le64(vmem->usable_region_size);
}
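
/* Report the feature bits this device offers to the guest. */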
static uint64_t virtio_mem_get_features(VirtIODevice *vdev, uint64_t features,
                                        Error **errp)
{
    MachineState *ms = MACHINE(qdev_get_machine());
    VirtIOMEM *vmem = VIRTIO_MEM(vdev);

    if (ms->numa_state) {
#if defined(CONFIG_ACPI)
        virtio_add_feature(&features, VIRTIO_MEM_F_ACPI_PXM);
#endif
    }
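    /*
     * The "unplugged-inaccessible" property is expected to already be
     * resolved to ON or OFF at this point (see the assertion below); only
     * offer VIRTIO_MEM_F_UNPLUGGED_INACCESSIBLE when it is enabled.
     */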
assert(vmem->unplugged_inaccessible != ON_OFF_AUTO_AUTO);
if (vmem->unplugged_inaccessible == ON_OFF_AUTO_ON) {
virtio_add_feature(&features, VIRTIO_MEM_F_UNPLUGGED_INACCESSIBLE);
}
return features;
}
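/*
 * Added note: if we offered VIRTIO_MEM_F_UNPLUGGED_INACCESSIBLE, refuse
 * guests that do not accept it -- such guests would assume unplugged memory
 * remains readable.
 */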
static int virtio_mem_validate_features(VirtIODevice *vdev)
{
if (virtio_host_has_feature(vdev, VIRTIO_MEM_F_UNPLUGGED_INACCESSIBLE) &&
!virtio_vdev_has_feature(vdev, VIRTIO_MEM_F_UNPLUGGED_INACCESSIBLE)) {
return -EFAULT;
}
return 0;
}
static void virtio_mem_system_reset(void *opaque)
{
VirtIOMEM *vmem = VIRTIO_MEM(opaque);
/*
* During usual resets, we will unplug all memory and shrink the usable
* region size. This is, however, not possible in all scenarios. Then,
* the guest has to deal with this manually (VIRTIO_MEM_REQ_UNPLUG_ALL).
*/
virtio_mem_unplug_all(vmem);
}
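/*
 * Added note: with "dynamic-memslots=on" the device exposes a container
 * region sized like the memdev; individual memslot aliases are mapped into
 * it on demand instead of mapping the whole backend through one big memslot.
 */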
static void virtio_mem_prepare_mr(VirtIOMEM *vmem)
{
const uint64_t region_size = memory_region_size(&vmem->memdev->mr);
assert(!vmem->mr && vmem->dynamic_memslots);
vmem->mr = g_new0(MemoryRegion, 1);
memory_region_init(vmem->mr, OBJECT(vmem), "virtio-mem",
region_size);
vmem->mr->align = memory_region_get_alignment(&vmem->memdev->mr);
}
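/*
 * Added note: split the memdev into nb_memslots aliases of memslot_size each
 * (the last one may be smaller). The aliases are only created here; they get
 * mapped into the container once they are actually needed.
 */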
static void virtio_mem_prepare_memslots(VirtIOMEM *vmem)
{
const uint64_t region_size = memory_region_size(&vmem->memdev->mr);
unsigned int idx;
g_assert(!vmem->memslots && vmem->nb_memslots && vmem->dynamic_memslots);
vmem->memslots = g_new0(MemoryRegion, vmem->nb_memslots);
/* Initialize our memslots, but don't map them yet. */
for (idx = 0; idx < vmem->nb_memslots; idx++) {
const uint64_t memslot_offset = idx * vmem->memslot_size;
uint64_t memslot_size = vmem->memslot_size;
char name[20];
/* The size of the last memslot might be smaller. */
if (idx == vmem->nb_memslots - 1) {
memslot_size = region_size - memslot_offset;
}
snprintf(name, sizeof(name), "memslot-%u", idx);
memory_region_init_alias(&vmem->memslots[idx], OBJECT(vmem), name,
&vmem->memdev->mr, memslot_offset,
memslot_size);
/*
* We want to be able to atomically and efficiently activate/deactivate
* individual memslots without affecting adjacent memslots in memory
* notifiers.
*/
memory_region_set_unmergeable(&vmem->memslots[idx], true);
}
}
static void virtio_mem_device_realize(DeviceState *dev, Error **errp)
{
MachineState *ms = MACHINE(qdev_get_machine());
int nb_numa_nodes = ms->numa_state ? ms->numa_state->num_nodes : 0;
VirtIODevice *vdev = VIRTIO_DEVICE(dev);
VirtIOMEM *vmem = VIRTIO_MEM(dev);
uint64_t page_size;
RAMBlock *rb;
int ret;
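    /*
     * Added note: the memdev must be plain RAM, not already in use, and not
     * preallocated by the backend (preallocation belongs on the device).
     */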
if (!vmem->memdev) {
error_setg(errp, "'%s' property is not set", VIRTIO_MEM_MEMDEV_PROP);
return;
} else if (host_memory_backend_is_mapped(vmem->memdev)) {
error_setg(errp, "'%s' property specifies a busy memdev: %s",
VIRTIO_MEM_MEMDEV_PROP,
object_get_canonical_path_component(OBJECT(vmem->memdev)));
return;
} else if (!memory_region_is_ram(&vmem->memdev->mr) ||
memory_region_is_rom(&vmem->memdev->mr) ||
!vmem->memdev->mr.ram_block) {
error_setg(errp, "'%s' property specifies an unsupported memdev",
VIRTIO_MEM_MEMDEV_PROP);
return;
} else if (vmem->memdev->prealloc) {
error_setg(errp, "'%s' property specifies a memdev with preallocation"
" enabled: %s. Instead, specify 'prealloc=on' for the"
" virtio-mem device.", VIRTIO_MEM_MEMDEV_PROP,
object_get_canonical_path_component(OBJECT(vmem->memdev)));
return;
}
if ((nb_numa_nodes && vmem->node >= nb_numa_nodes) ||
(!nb_numa_nodes && vmem->node)) {
error_setg(errp, "'%s' property has value '%" PRIu32 "', which exceeds"
"the number of numa nodes: %d", VIRTIO_MEM_NODE_PROP,
vmem->node, nb_numa_nodes ? nb_numa_nodes : 1);
return;
}
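    /*
     * Added note: unplugging relies on discarding memory, which is at odds
     * with locking all guest RAM into memory.
     */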
if (enable_mlock) {
error_setg(errp, "Incompatible with mlock");
return;
}
rb = vmem->memdev->mr.ram_block;
page_size = qemu_ram_pagesize(rb);
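    /*
     * Added note: resolve the "unplugged-inaccessible" tristate. On targets
     * that still support legacy guests, AUTO only stays OFF (unplugged memory
     * readable) when the backend provides the shared zeropage.
     */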
#if defined(VIRTIO_MEM_HAS_LEGACY_GUESTS)
switch (vmem->unplugged_inaccessible) {
case ON_OFF_AUTO_AUTO:
if (virtio_mem_has_shared_zeropage(rb)) {
vmem->unplugged_inaccessible = ON_OFF_AUTO_OFF;
} else {
vmem->unplugged_inaccessible = ON_OFF_AUTO_ON;
}
break;
case ON_OFF_AUTO_OFF:
if (!virtio_mem_has_shared_zeropage(rb)) {
warn_report("'%s' property set to 'off' with a memdev that does"
" not support the shared zeropage.",
VIRTIO_MEM_UNPLUGGED_INACCESSIBLE_PROP);
}
break;
default:
break;
}
#else /* VIRTIO_MEM_HAS_LEGACY_GUESTS */
vmem->unplugged_inaccessible = ON_OFF_AUTO_ON;
#endif /* VIRTIO_MEM_HAS_LEGACY_GUESTS */
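    /*
     * Added note: dynamic memslots leave unplugged parts of the device region
     * without a mapped memslot, so the guest must never try to access
     * unplugged memory.
     */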
if (vmem->dynamic_memslots &&
vmem->unplugged_inaccessible != ON_OFF_AUTO_ON) {
error_setg(errp, "'%s' property set to 'on' requires '%s' to be 'on'",
VIRTIO_MEM_DYNAMIC_MEMSLOTS_PROP,
VIRTIO_MEM_UNPLUGGED_INACCESSIBLE_PROP);
return;
}
/*
* If the block size wasn't configured by the user, use a sane default. This
* allows using hugetlbfs backends of any page size without manual
* intervention.
*/
if (!vmem->block_size) {
vmem->block_size = virtio_mem_default_block_size(rb);
}
if (vmem->block_size < page_size) {
error_setg(errp, "'%s' property has to be at least the page size (0x%"
PRIx64 ")", VIRTIO_MEM_BLOCK_SIZE_PROP, page_size);
return;
} else if (vmem->block_size < virtio_mem_default_block_size(rb)) {
warn_report("'%s' property is smaller than the default block size (%"
PRIx64 " MiB)", VIRTIO_MEM_BLOCK_SIZE_PROP,
virtio_mem_default_block_size(rb) / MiB);
}
if (!QEMU_IS_ALIGNED(vmem->requested_size, vmem->block_size)) {
error_setg(errp, "'%s' property has to be multiples of '%s' (0x%" PRIx64
")", VIRTIO_MEM_REQUESTED_SIZE_PROP,
VIRTIO_MEM_BLOCK_SIZE_PROP, vmem->block_size);
return;
} else if (!QEMU_IS_ALIGNED(vmem->addr, vmem->block_size)) {
error_setg(errp, "'%s' property has to be multiples of '%s' (0x%" PRIx64
")", VIRTIO_MEM_ADDR_PROP, VIRTIO_MEM_BLOCK_SIZE_PROP,
vmem->block_size);
return;
} else if (!QEMU_IS_ALIGNED(memory_region_size(&vmem->memdev->mr),
vmem->block_size)) {
error_setg(errp, "'%s' property memdev size has to be multiples of"
"'%s' (0x%" PRIx64 ")", VIRTIO_MEM_MEMDEV_PROP,
VIRTIO_MEM_BLOCK_SIZE_PROP, vmem->block_size);
return;
}
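    /*
     * Added note: unplug discards RAM; bail out if RAM discards have been
     * disabled, for example by another user of the memory that cannot
     * coordinate such discards.
     */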
if (ram_block_coordinated_discard_require(true)) {
error_setg(errp, "Discarding RAM is disabled");
return;
}
virtio-mem: Support "x-ignore-shared" migration To achieve desired "x-ignore-shared" functionality, we should not discard all RAM when realizing the device and not mess with preallocation/postcopy when loading device state. In essence, we should not touch RAM content. As "x-ignore-shared" gets set after realizing the device, we cannot rely on that. Let's simply skip discarding of RAM on incoming migration. Note that virtio_mem_post_load() will call virtio_mem_restore_unplugged() -- unless "x-ignore-shared" is set. So once migration finished we'll have a consistent state. The initial system reset will also not discard any RAM, because virtio_mem_unplug_all() will not call virtio_mem_unplug_all() when no memory is plugged (which is the case before loading the device state). Note that something like VM templating -- see commit b17fbbe55cba ("migration: allow private destination ram with x-ignore-shared") -- is currently incompatible with virtio-mem and ram_block_discard_range() will warn in case a private file mapping is supplied by virtio-mem. For VM templating with virtio-mem, it makes more sense to either (a) Create the template without the virtio-mem device and hotplug a virtio-mem device to the new VM instances using proper own memory backend. (b) Use a virtio-mem device that doesn't provide any memory in the template (requested-size=0) and use private anonymous memory. Message-ID: <20230706075612.67404-5-david@redhat.com> Tested-by: Mario Casquero <mcasquer@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: David Hildenbrand <david@redhat.com>
2023-07-06 10:56:09 +03:00
/*
* We don't know at this point whether shared RAM is migrated using
* QEMU or migrated using the file content. "x-ignore-shared" will be
* configured after realizing the device. So in case we have an
* incoming migration, simply always skip the discard step.
*
* Otherwise, make sure that we start with a clean slate: either the
* memory backend might get reused or the shared file might still have
* memory allocated.
*/
if (!runstate_check(RUN_STATE_INMIGRATE)) {
ret = ram_block_discard_range(rb, 0, qemu_ram_get_used_length(rb));
if (ret) {
error_setg_errno(errp, -ret, "Unexpected error discarding RAM");
ram_block_coordinated_discard_require(false);
return;
}
}
virtio_mem_resize_usable_region(vmem, vmem->requested_size, true);
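    /* Allocate one bit of plug state per block of the backing memory region. */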
vmem->bitmap_size = memory_region_size(&vmem->memdev->mr) /
vmem->block_size;
vmem->bitmap = bitmap_new(vmem->bitmap_size);
virtio_init(vdev, VIRTIO_ID_MEM, sizeof(struct virtio_mem_config));
vmem->vq = virtio_add_queue(vdev, 128, virtio_mem_handle_request);
/*
* With "dynamic-memslots=off" (old behavior) we always map the whole
* RAM memory region directly.
*/
if (vmem->dynamic_memslots) {
if (!vmem->mr) {
virtio_mem_prepare_mr(vmem);
}
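        /* Default to a single memslot spanning the whole memory region if nothing else was decided. */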
if (vmem->nb_memslots <= 1) {
vmem->nb_memslots = 1;
vmem->memslot_size = memory_region_size(&vmem->memdev->mr);
}
if (!vmem->memslots) {
virtio_mem_prepare_memslots(vmem);
}
} else {
assert(!vmem->mr && !vmem->nb_memslots && !vmem->memslots);
}
host_memory_backend_set_mapped(vmem->memdev, true);
vmstate_register_ram(&vmem->memdev->mr, DEVICE(vmem));
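    /* With early migration enabled, additionally register the early device vmstate. */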
if (vmem->early_migration) {
vmstate_register_any(VMSTATE_IF(vmem),
&vmstate_virtio_mem_device_early, vmem);
}
qemu_register_reset(virtio_mem_system_reset, vmem);
/*
* Set ourselves as RamDiscardManager before the plug handler maps the
* memory region and exposes it via an address space.
*/
memory_region_set_ram_discard_manager(&vmem->memdev->mr,
RAM_DISCARD_MANAGER(vmem));
}
static void virtio_mem_device_unrealize(DeviceState *dev)
{
VirtIODevice *vdev = VIRTIO_DEVICE(dev);
VirtIOMEM *vmem = VIRTIO_MEM(dev);
/*
* The unplug handler unmapped the memory region, it cannot be
* found via an address space anymore. Unset ourselves.
*/
memory_region_set_ram_discard_manager(&vmem->memdev->mr, NULL);
qemu_unregister_reset(virtio_mem_system_reset, vmem);
if (vmem->early_migration) {
vmstate_unregister(VMSTATE_IF(vmem), &vmstate_virtio_mem_device_early,
vmem);
}
vmstate_unregister_ram(&vmem->memdev->mr, DEVICE(vmem));
host_memory_backend_set_mapped(vmem->memdev, false);
virtio_del_queue(vdev, 0);
virtio_cleanup(vdev);
g_free(vmem->bitmap);
ram_block_coordinated_discard_require(false);
}
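/* Discard one unplugged range of the backing RAM block (see virtio_mem_restore_unplugged()). */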
static int virtio_mem_discard_range_cb(VirtIOMEM *vmem, void *arg,
uint64_t offset, uint64_t size)
{
RAMBlock *rb = vmem->memdev->mr.ram_block;
return ram_block_discard_range(rb, offset, size) ? -EINVAL : 0;
}
static int virtio_mem_restore_unplugged(VirtIOMEM *vmem)
{
/* Make sure all memory is really discarded after migration. */
return virtio_mem_for_each_unplugged_range(vmem, NULL,
virtio_mem_discard_range_cb);
}
static int virtio_mem_activate_memslot_range_cb(VirtIOMEM *vmem, void *arg,
uint64_t offset, uint64_t size)
{
virtio_mem_activate_memslots_to_plug(vmem, offset, size);
return 0;
}
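/* Bring listeners and (dynamic) memslots back in sync with the plug bitmap, e.g., after it was loaded from the migration stream. */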
static int virtio_mem_post_load_bitmap(VirtIOMEM *vmem)
{
RamDiscardListener *rdl;
int ret;
/*
* We restored the bitmap and updated the requested size; activate all
* memslots (so listeners register) before notifying about plugged blocks.
*/
if (vmem->dynamic_memslots) {
/*
* We don't expect any active memslots at this point to deactivate: no
* memory was plugged on the migration destination.
*/
virtio_mem_for_each_plugged_range(vmem, NULL,
virtio_mem_activate_memslot_range_cb);
}
/*
* We started out with all memory discarded and our memory region is mapped
* into an address space. Replay, now that we updated the bitmap.
*/
QLIST_FOREACH(rdl, &vmem->rdl_list, next) {
ret = virtio_mem_for_each_plugged_section(vmem, rdl->section, rdl,
virtio_mem_notify_populate_cb);
if (ret) {
return ret;
}
}
return 0;
}
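/*
* Post-load handler for the main device state: restore state from the
* migrated bitmap (unless that already happened during early migration),
* and discard all unplugged memory again -- except when RAM is migrated
* via the file contents ("x-ignore-shared") or during incoming postcopy.
*/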
static int virtio_mem_post_load(void *opaque, int version_id)
{
VirtIOMEM *vmem = VIRTIO_MEM(opaque);
int ret;
if (!vmem->early_migration) {
ret = virtio_mem_post_load_bitmap(vmem);
if (ret) {
return ret;
}
}
/*
* If shared RAM is migrated using the file content and not using QEMU,
* don't mess with preallocation and postcopy.
*/
if (migrate_ram_is_ignored(vmem->memdev->mr.ram_block)) {
return 0;
}
if (vmem->prealloc && !vmem->early_migration) {
warn_report("Proper preallocation with migration requires a newer QEMU machine");
}
if (migration_in_incoming_postcopy()) {
return 0;
}
return virtio_mem_restore_unplugged(vmem);
}
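/*
* Preallocate backend memory for one plugged range; used as a
* virtio_mem_for_each_plugged_range() callback during early post-load.
*/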
static int virtio_mem_prealloc_range_cb(VirtIOMEM *vmem, void *arg,
uint64_t offset, uint64_t size)
{
void *area = memory_region_get_ram_ptr(&vmem->memdev->mr) + offset;
int fd = memory_region_get_fd(&vmem->memdev->mr);
Error *local_err = NULL;
qemu_prealloc_mem(fd, area, size, 1, NULL, &local_err);
if (local_err) {
error_report_err(local_err);
return -ENOMEM;
}
return 0;
}
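/*
* Post-load handler for the early device state, processed before any RAM
* content is migrated: with preallocation enabled (and RAM not migrated via
* the file contents), preallocate all plugged blocks so allocation failures
* are reported gracefully, discard that memory again if postcopy was already
* advised, and finally sync the remaining state with the restored bitmap.
*/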
static int virtio_mem_post_load_early(void *opaque, int version_id)
{
VirtIOMEM *vmem = VIRTIO_MEM(opaque);
RAMBlock *rb = vmem->memdev->mr.ram_block;
int ret;
if (!vmem->prealloc) {
goto post_load_bitmap;
}
/*
* If shared RAM is migrated using the file content and not using QEMU,
* don't mess with preallocation and postcopy.
*/
if (migrate_ram_is_ignored(rb)) {
goto post_load_bitmap;
}
/*
* We restored the bitmap and verified that the basic properties
* match on source and destination, so we can go ahead and preallocate
* memory for all plugged memory blocks, before actual RAM migration starts
* touching this memory.
*/
ret = virtio_mem_for_each_plugged_range(vmem, NULL,
virtio_mem_prealloc_range_cb);
if (ret) {
return ret;
}
/*
* This is tricky: postcopy wants to start with a clean slate. On
* POSTCOPY_INCOMING_ADVISE, postcopy code discards all (ordinarily
* preallocated) RAM such that postcopy will work as expected later.
*
* However, we run after POSTCOPY_INCOMING_ADVISE -- but before actual
* RAM migration. So let's discard all memory again. This looks like an
* expensive NOP, but actually serves a purpose: we made sure that we
* were able to allocate all required backend memory once. We cannot
* guarantee that the backend memory we will free will remain free
* until we need it during postcopy, but at least we can catch the
* obvious setup issues this way.
*/
if (migration_incoming_postcopy_advised()) {
if (ram_block_discard_range(rb, 0, qemu_ram_get_used_length(rb))) {
return -EBUSY;
}
}
post_load_bitmap:
/* Finally, update any other state to be consistent with the new bitmap. */
return virtio_mem_post_load_bitmap(vmem);
}
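/* Temporary state for cross-checking immutable properties during migration. */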
typedef struct VirtIOMEMMigSanityChecks {
VirtIOMEM *parent;
uint64_t addr;
uint64_t region_size;
uint64_t block_size;
uint32_t node;
} VirtIOMEMMigSanityChecks;
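/* Snapshot the relevant device properties on the migration source. */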
static int virtio_mem_mig_sanity_checks_pre_save(void *opaque)
{
VirtIOMEMMigSanityChecks *tmp = opaque;
VirtIOMEM *vmem = tmp->parent;
tmp->addr = vmem->addr;
tmp->region_size = memory_region_size(&vmem->memdev->mr);
tmp->block_size = vmem->block_size;
tmp->node = vmem->node;
return 0;
}
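/*
* Verify on the migration destination that the address, region size, block
* size and NUMA node match the source configuration; otherwise, fail
* migration with a descriptive error.
*/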
static int virtio_mem_mig_sanity_checks_post_load(void *opaque, int version_id)
{
VirtIOMEMMigSanityChecks *tmp = opaque;
VirtIOMEM *vmem = tmp->parent;
const uint64_t new_region_size = memory_region_size(&vmem->memdev->mr);
if (tmp->addr != vmem->addr) {
error_report("Property '%s' changed from 0x%" PRIx64 " to 0x%" PRIx64,
VIRTIO_MEM_ADDR_PROP, tmp->addr, vmem->addr);
return -EINVAL;
}
/*
* Note: Preparation for resizable memory regions. The maximum size
* of the memory region must not change during migration.
*/
if (tmp->region_size != new_region_size) {
error_report("Property '%s' size changed from 0x%" PRIx64 " to 0x%"
PRIx64, VIRTIO_MEM_MEMDEV_PROP, tmp->region_size,
new_region_size);
return -EINVAL;
}
if (tmp->block_size != vmem->block_size) {
error_report("Property '%s' changed from 0x%" PRIx64 " to 0x%" PRIx64,
VIRTIO_MEM_BLOCK_SIZE_PROP, tmp->block_size,
vmem->block_size);
return -EINVAL;
}
if (tmp->node != vmem->node) {
error_report("Property '%s' changed from %" PRIu32 " to %" PRIu32,
VIRTIO_MEM_NODE_PROP, tmp->node, vmem->node);
return -EINVAL;
}
return 0;
}
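/* Sanity-check state migrated as temporary state (VMSTATE_WITH_TMP*). */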
static const VMStateDescription vmstate_virtio_mem_sanity_checks = {
.name = "virtio-mem-device/sanity-checks",
.pre_save = virtio_mem_mig_sanity_checks_pre_save,
.post_load = virtio_mem_mig_sanity_checks_post_load,
.fields = (VMStateField[]) {
VMSTATE_UINT64(addr, VirtIOMEMMigSanityChecks),
VMSTATE_UINT64(region_size, VirtIOMEMMigSanityChecks),
VMSTATE_UINT64(block_size, VirtIOMEMMigSanityChecks),
VMSTATE_UINT32(node, VirtIOMEMMigSanityChecks),
VMSTATE_END_OF_LIST(),
},
};
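/* Test whether a field still gets migrated via the late device vmstate. */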
static bool virtio_mem_vmstate_field_exists(void *opaque, int version_id)
{
const VirtIOMEM *vmem = VIRTIO_MEM(opaque);
/* With early migration, these fields were already migrated. */
return !vmem->early_migration;
}
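/* Device state migrated via the regular (late) part of the migration stream. */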
static const VMStateDescription vmstate_virtio_mem_device = {
.name = "virtio-mem-device",
.minimum_version_id = 1,
.version_id = 1,
.priority = MIG_PRI_VIRTIO_MEM,
.post_load = virtio_mem_post_load,
.fields = (VMStateField[]) {
VMSTATE_WITH_TMP_TEST(VirtIOMEM, virtio_mem_vmstate_field_exists,
VirtIOMEMMigSanityChecks,
vmstate_virtio_mem_sanity_checks),
VMSTATE_UINT64(usable_region_size, VirtIOMEM),
VMSTATE_UINT64_TEST(size, VirtIOMEM, virtio_mem_vmstate_field_exists),
VMSTATE_UINT64(requested_size, VirtIOMEM),
VMSTATE_BITMAP_TEST(bitmap, VirtIOMEM, virtio_mem_vmstate_field_exists,
0, bitmap_size),
VMSTATE_END_OF_LIST()
},
};
/*
* Transfer properties that are immutable while migration is active early,
* such that we have this information around before migrating any RAM
* content.
*
* Note that virtio_mem_is_busy() makes sure these properties can no longer
* change on the migration source until migration has completed.
*
* With QEMU compat machines, we transmit these properties later, via
* vmstate_virtio_mem_device instead -- see virtio_mem_vmstate_field_exists().
*/
static const VMStateDescription vmstate_virtio_mem_device_early = {
.name = "virtio-mem-device-early",
.minimum_version_id = 1,
.version_id = 1,
.early_setup = true,
.post_load = virtio_mem_post_load_early,
.fields = (VMStateField[]) {
VMSTATE_WITH_TMP(VirtIOMEM, VirtIOMEMMigSanityChecks,
vmstate_virtio_mem_sanity_checks),
VMSTATE_UINT64(size, VirtIOMEM),
VMSTATE_BITMAP(bitmap, VirtIOMEM, 0, bitmap_size),
VMSTATE_END_OF_LIST()
},
};
static const VMStateDescription vmstate_virtio_mem = {
.name = "virtio-mem",
.minimum_version_id = 1,
.version_id = 1,
.fields = (VMStateField[]) {
VMSTATE_VIRTIO_DEVICE,
VMSTATE_END_OF_LIST()
},
};
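/*
 * Fill the given VirtioMEMDeviceInfo with the current device state: base
 * address, NUMA node, requested size, current size, maximum size (memory
 * backend size), block size and the canonical memdev path.
 */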
static void virtio_mem_fill_device_info(const VirtIOMEM *vmem,
VirtioMEMDeviceInfo *vi)
{
vi->memaddr = vmem->addr;
vi->node = vmem->node;
vi->requested_size = vmem->requested_size;
vi->size = vmem->size;
vi->max_size = memory_region_size(&vmem->memdev->mr);
vi->block_size = vmem->block_size;
vi->memdev = object_get_canonical_path(OBJECT(vmem->memdev));
}
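/*
 * Return the memory region to map into the guest: with dynamic memslots,
 * the lazily prepared device region (vmem->mr); otherwise the memory
 * backend's RAM region directly.
 */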
static MemoryRegion *virtio_mem_get_memory_region(VirtIOMEM *vmem, Error **errp)
{
if (!vmem->memdev) {
error_setg(errp, "'%s' property must be set", VIRTIO_MEM_MEMDEV_PROP);
return NULL;
} else if (vmem->dynamic_memslots) {
if (!vmem->mr) {
virtio_mem_prepare_mr(vmem);
}
return vmem->mr;
}
return &vmem->memdev->mr;
}
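/*
 * Decide how many memslots to use for exposing the device memory region,
 * based on the provided memslot limit, the region size and the device
 * block size. Expected to be called exactly once, before realizing the
 * device.
 */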
static void virtio_mem_decide_memslots(VirtIOMEM *vmem, unsigned int limit)
{
uint64_t region_size, memslot_size, min_memslot_size;
unsigned int memslots;
RAMBlock *rb;
if (!vmem->dynamic_memslots) {
return;
}
/* We're called exactly once, before realizing the device. */
assert(!vmem->nb_memslots);
/* If realizing the device will fail, just assume a single memslot. */
if (limit <= 1 || !vmem->memdev || !vmem->memdev->mr.ram_block) {
vmem->nb_memslots = 1;
return;
}
rb = vmem->memdev->mr.ram_block;
region_size = memory_region_size(&vmem->memdev->mr);
/*
* Determine the default block size now, to determine the minimum memslot
* size. We want the minimum slot size to be at least the device block size.
*/
if (!vmem->block_size) {
vmem->block_size = virtio_mem_default_block_size(rb);
}
/* If realizing the device will fail, just assume a single memslot. */
if (vmem->block_size < qemu_ram_pagesize(rb) ||
!QEMU_IS_ALIGNED(region_size, vmem->block_size)) {
vmem->nb_memslots = 1;
return;
}
/*
* All memslots except the last one have a reasonable minimum size, and
* all memslot sizes are aligned to the device block size.
*/
memslot_size = QEMU_ALIGN_UP(region_size / limit, vmem->block_size);
min_memslot_size = MAX(vmem->block_size, VIRTIO_MEM_MIN_MEMSLOT_SIZE);
memslot_size = MAX(memslot_size, min_memslot_size);
memslots = QEMU_ALIGN_UP(region_size, memslot_size) / memslot_size;
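/*
 * Illustrative example (assuming VIRTIO_MEM_MIN_MEMSLOT_SIZE does not
 * exceed 4 GiB): a 1 TiB region with a limit of 256 and a 2 MiB block
 * size yields memslot_size = 4 GiB and 256 memslots.
 */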
if (memslots != 1) {
vmem->memslot_size = memslot_size;
}
vmem->nb_memslots = memslots;
}
static unsigned int virtio_mem_get_memslots(VirtIOMEM *vmem)
{
if (!vmem->dynamic_memslots) {
/* Exactly one static RAM memory region. */
return 1;
}
/* We're called after being instructed to make a decision. */
g_assert(vmem->nb_memslots);
return vmem->nb_memslots;
}
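/* Notifiers on this list are notified about changes of the device size. */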
static void virtio_mem_add_size_change_notifier(VirtIOMEM *vmem,
Notifier *notifier)
{
notifier_list_add(&vmem->size_change_notifiers, notifier);
}
static void virtio_mem_remove_size_change_notifier(VirtIOMEM *vmem,
Notifier *notifier)
{
notifier_remove(notifier);
}
static void virtio_mem_get_size(Object *obj, Visitor *v, const char *name,
void *opaque, Error **errp)
{
const VirtIOMEM *vmem = VIRTIO_MEM(obj);
uint64_t value = vmem->size;
visit_type_size(v, name, &value, errp);
}
static void virtio_mem_get_requested_size(Object *obj, Visitor *v,
const char *name, void *opaque,
Error **errp)
{
const VirtIOMEM *vmem = VIRTIO_MEM(obj);
uint64_t value = vmem->requested_size;
visit_type_size(v, name, &value, errp);
}
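/*
 * Setter for the "requested-size" property. Once the device is realized,
 * the value has to be aligned to the block size and must not exceed the
 * memory backend size; updating it resizes the usable region and notifies
 * the guest via a config update.
 */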
static void virtio_mem_set_requested_size(Object *obj, Visitor *v,
const char *name, void *opaque,
Error **errp)
{
VirtIOMEM *vmem = VIRTIO_MEM(obj);
uint64_t value;
if (!visit_type_size(v, name, &value, errp)) {
return;
}
/*
* The block size and memory backend are not fixed until the device is
* realized. realize() will verify these properties then.
*/
if (DEVICE(obj)->realized) {
if (!QEMU_IS_ALIGNED(value, vmem->block_size)) {
error_setg(errp, "'%s' has to be multiples of '%s' (0x%" PRIx64
")", name, VIRTIO_MEM_BLOCK_SIZE_PROP,
vmem->block_size);
return;
} else if (value > memory_region_size(&vmem->memdev->mr)) {
error_setg(errp, "'%s' cannot exceed the memory backend size"
"(0x%" PRIx64 ")", name,
memory_region_size(&vmem->memdev->mr));
return;
}
if (value != vmem->requested_size) {
virtio_mem_resize_usable_region(vmem, value, false);
vmem->requested_size = value;
}
/*
* Trigger a config update so the guest gets notified. We trigger
* even if the size didn't change (especially helpful for debugging).
*/
virtio_notify_config(VIRTIO_DEVICE(vmem));
} else {
vmem->requested_size = value;
}
}
static void virtio_mem_get_block_size(Object *obj, Visitor *v, const char *name,
void *opaque, Error **errp)
{
const VirtIOMEM *vmem = VIRTIO_MEM(obj);
uint64_t value = vmem->block_size;
/*
* If not configured by the user (and we're not realized yet), use the
* default block size we would use with the current memory backend.
*/
if (!value) {
if (vmem->memdev && memory_region_is_ram(&vmem->memdev->mr)) {
value = virtio_mem_default_block_size(vmem->memdev->mr.ram_block);
} else {
value = virtio_mem_thp_size();
}
}
visit_type_size(v, name, &value, errp);
}
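/*
 * Setter for the "block-size" property: only allowed before the device is
 * realized; the value has to be a power of two of at least
 * VIRTIO_MEM_MIN_BLOCK_SIZE.
 */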
static void virtio_mem_set_block_size(Object *obj, Visitor *v, const char *name,
void *opaque, Error **errp)
{
VirtIOMEM *vmem = VIRTIO_MEM(obj);
uint64_t value;
if (DEVICE(obj)->realized) {
error_setg(errp, "'%s' cannot be changed", name);
return;
}
if (!visit_type_size(v, name, &value, errp)) {
return;
}
if (value < VIRTIO_MEM_MIN_BLOCK_SIZE) {
error_setg(errp, "'%s' property has to be at least 0x%" PRIx32, name,
VIRTIO_MEM_MIN_BLOCK_SIZE);
return;
} else if (!is_power_of_2(value)) {
error_setg(errp, "'%s' property has to be a power of two", name);
return;
}
vmem->block_size = value;
}
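/*
 * Instance init: initialize the size-change notifier list and the
 * RamDiscardListener list, and register the "size", "requested-size" and
 * "block-size" properties.
 */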
static void virtio_mem_instance_init(Object *obj)
{
VirtIOMEM *vmem = VIRTIO_MEM(obj);
notifier_list_init(&vmem->size_change_notifiers);
QLIST_INIT(&vmem->rdl_list);
object_property_add(obj, VIRTIO_MEM_SIZE_PROP, "size", virtio_mem_get_size,
NULL, NULL, NULL);
object_property_add(obj, VIRTIO_MEM_REQUESTED_SIZE_PROP, "size",
virtio_mem_get_requested_size,
virtio_mem_set_requested_size, NULL, NULL);
object_property_add(obj, VIRTIO_MEM_BLOCK_SIZE_PROP, "size",
virtio_mem_get_block_size, virtio_mem_set_block_size,
NULL, NULL);
}
static void virtio_mem_instance_finalize(Object *obj)
{
VirtIOMEM *vmem = VIRTIO_MEM(obj);
/*
* Note: the core already dropped the references on all memory regions
* (it's passed as the owner to memory_region_init_*()) and finalized
* these objects. We can simply free the memory.
*/
g_free(vmem->memslots);
vmem->memslots = NULL;
g_free(vmem->mr);
vmem->mr = NULL;
}
static Property virtio_mem_properties[] = {
DEFINE_PROP_UINT64(VIRTIO_MEM_ADDR_PROP, VirtIOMEM, addr, 0),
DEFINE_PROP_UINT32(VIRTIO_MEM_NODE_PROP, VirtIOMEM, node, 0),
virtio-mem: Support "prealloc=on" option For scarce memory resources, such as hugetlb, we want to be able to prealloc such memory resources in order to not crash later on access. On simple user errors we could otherwise easily run out of memory resources an crash the VM -- pretty much undesired. For ordinary memory devices, such as DIMMs, we preallocate memory via the memory backend for such use cases; however, with virtio-mem we're dealing with sparse memory backends; preallocating the whole memory backend destroys the whole purpose of virtio-mem. Instead, we want to preallocate memory when actually exposing memory to the VM dynamically, and fail plugging memory gracefully + warn the user in case preallocation fails. A common use case for hugetlb will be using "reserve=off,prealloc=off" for the memory backend and "prealloc=on" for the virtio-mem device. This way, no huge pages will be reserved for the process, but we can recover if there are no actual huge pages when plugging memory. Libvirt is already prepared for this. Note that preallocation cannot protect from the OOM killer -- which holds true for any kind of preallocation in QEMU. It's primarily useful only for scarce memory resources such as hugetlb, or shared file-backed memory. It's of little use for ordinary anonymous memory that can be swapped, KSM merged, ... but we won't forbid it. Reviewed-by: Michal Privoznik <mprivozn@redhat.com> Signed-off-by: David Hildenbrand <david@redhat.com> Message-Id: <20211217134611.31172-9-david@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
2021-12-17 16:46:11 +03:00
DEFINE_PROP_BOOL(VIRTIO_MEM_PREALLOC_PROP, VirtIOMEM, prealloc, false),
DEFINE_PROP_LINK(VIRTIO_MEM_MEMDEV_PROP, VirtIOMEM, memdev,
TYPE_MEMORY_BACKEND, HostMemoryBackend *),
#if defined(VIRTIO_MEM_HAS_LEGACY_GUESTS)
DEFINE_PROP_ON_OFF_AUTO(VIRTIO_MEM_UNPLUGGED_INACCESSIBLE_PROP, VirtIOMEM,
unplugged_inaccessible, ON_OFF_AUTO_ON),
#endif
    DEFINE_PROP_BOOL(VIRTIO_MEM_EARLY_MIGRATION_PROP, VirtIOMEM,
                     early_migration, true),
virtio-mem: Expose device memory dynamically via multiple memslots if enabled

Having large virtio-mem devices that only expose little memory to a VM is currently a problem: we map the whole sparse memory region into the guest using a single memslot, resulting in one gigantic memslot in KVM. KVM allocates metadata for the whole memslot, which can result in quite some memory waste.

Assuming we have a 1 TiB virtio-mem device and only expose little (e.g., 1 GiB) memory, we would create a single 1 TiB memslot and KVM has to allocate metadata for that 1 TiB memslot: on x86, this implies allocating a significant amount of memory for metadata:

(1) RMAP: 8 bytes per 4 KiB, 8 bytes per 2 MiB, 8 bytes per 1 GiB
    -> For 1 TiB: 2147483648 + 4194304 + 8192 = ~2 GiB (0.2 %)
    With the TDP MMU (cat /sys/module/kvm/parameters/tdp_mmu) this gets
    allocated lazily when required for nested VMs
(2) gfn_track: 2 bytes per 4 KiB
    -> For 1 TiB: 536870912 = ~512 MiB (0.05 %)
(3) lpage_info: 4 bytes per 2 MiB, 4 bytes per 1 GiB
    -> For 1 TiB: 2097152 + 4096 = ~2 MiB (0.0002 %)
(4) 2x dirty bitmaps for tracking: 2x 1 bit per 4 KiB page
    -> For 1 TiB: 536870912 = 64 MiB (0.006 %)

So we primarily care about (1) and (2). The bad thing is that the memory consumption *doubles* once SMM is enabled, because we create the memslot once for !SMM and once for SMM.

Having a 1 TiB memslot without the TDP MMU consumes around:
* With SMM: 5 GiB
* Without SMM: 2.5 GiB
Having a 1 TiB memslot with the TDP MMU consumes around:
* With SMM: 1 GiB
* Without SMM: 512 MiB

... and that's really something we want to optimize, to be able to just start a VM with small boot memory (e.g., 4 GiB) and a virtio-mem device that can grow very large (e.g., 1 TiB). Consequently, using multiple memslots and only mapping the memslots we really need can significantly reduce memory waste and speed up memslot-related operations.

Let's expose the sparse RAM memory region using multiple memslots, mapping only the memslots we currently need into our device memory region container. The feature can be enabled using "dynamic-memslots=on" and requires "unplugged-inaccessible=on", which is nowadays the default.

Once enabled, we'll auto-detect the number of memslots to use based on the memslot limit provided by the core. We'll use at most 1 memslot per gigabyte. Note that our global limit of memslots across all memory devices is currently set to 256: even with multiple large virtio-mem devices, we'd still have a sane limit on the number of memslots used.

The default is to not dynamically map memslots for now ("dynamic-memslots=off"). The optimization must be enabled manually, because some vhost setups (e.g., hotplug of vhost-user devices) might be problematic until we support more memslots, especially in vhost-user backends.

Note that "dynamic-memslots=on" is just a hint that multiple memslots *may* be used for internal optimizations, not that multiple memslots *must* be used. The actual number of memslots that are used is an internal detail: for example, once memslot metadata is no longer an issue, we could simply stop optimizing for that.

Migration source and destination can differ on the setting of "dynamic-memslots".

Message-ID: <20230926185738.277351-17-david@redhat.com>
Reviewed-by: Maciej S. Szmigiero <maciej.szmigiero@oracle.com>
Reviewed-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
2023-09-26 21:57:36 +03:00
    DEFINE_PROP_BOOL(VIRTIO_MEM_DYNAMIC_MEMSLOTS_PROP, VirtIOMEM,
                     dynamic_memslots, false),
    DEFINE_PROP_END_OF_LIST(),
};
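
For reference, the "unplugged-inaccessible" and "dynamic-memslots" properties
described in the annotations above are regular device properties. A minimal
command-line sketch (the memory backend, IDs and sizes are illustrative, and a
virtio-mem-pci frontend is assumed):

$ qemu-system-x86_64 [...] \
  -object memory-backend-ram,id=mem0,size=1T \
  -device virtio-mem-pci,id=vm0,memdev=mem0,requested-size=0,unplugged-inaccessible=on,dynamic-memslots=on

As noted above, "dynamic-memslots=on" requires "unplugged-inaccessible=on"
(the default since the 8.1 machine types on x86-64) and only permits QEMU to
spread the device memory across multiple memslots; the number actually used
remains an internal detail.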

static uint64_t virtio_mem_rdm_get_min_granularity(const RamDiscardManager *rdm,
                                                    const MemoryRegion *mr)
{
    const VirtIOMEM *vmem = VIRTIO_MEM(rdm);

    g_assert(mr == &vmem->memdev->mr);
    return vmem->block_size;
}
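
/*
 * Check whether the given section, expanded to the device block size, lies
 * within the device-managed range and consists only of plugged blocks.
 */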
static bool virtio_mem_rdm_is_populated(const RamDiscardManager *rdm,
                                        const MemoryRegionSection *s)
{
    const VirtIOMEM *vmem = VIRTIO_MEM(rdm);
    uint64_t start_gpa = vmem->addr + s->offset_within_region;
    uint64_t end_gpa = start_gpa + int128_get64(s->size);

    g_assert(s->mr == &vmem->memdev->mr);

    start_gpa = QEMU_ALIGN_DOWN(start_gpa, vmem->block_size);
    end_gpa = QEMU_ALIGN_UP(end_gpa, vmem->block_size);

    if (!virtio_mem_valid_range(vmem, start_gpa, end_gpa - start_gpa)) {
        return false;
    }

    return virtio_mem_is_range_plugged(vmem, start_gpa, end_gpa - start_gpa);
}
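
/*
 * Replay helpers: walk either the plugged (populated) or the unplugged
 * (discarded) parts of a section and invoke the caller-provided callback
 * on each part.
 */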
struct VirtIOMEMReplayData {
    void *fn;
    void *opaque;
};

static int virtio_mem_rdm_replay_populated_cb(MemoryRegionSection *s, void *arg)
{
    struct VirtIOMEMReplayData *data = arg;

    return ((ReplayRamPopulate)data->fn)(s, data->opaque);
}

static int virtio_mem_rdm_replay_populated(const RamDiscardManager *rdm,
                                           MemoryRegionSection *s,
                                           ReplayRamPopulate replay_fn,
                                           void *opaque)
{
    const VirtIOMEM *vmem = VIRTIO_MEM(rdm);
    struct VirtIOMEMReplayData data = {
        .fn = replay_fn,
        .opaque = opaque,
    };

    g_assert(s->mr == &vmem->memdev->mr);
    return virtio_mem_for_each_plugged_section(vmem, s, &data,
                                               virtio_mem_rdm_replay_populated_cb);
}

static int virtio_mem_rdm_replay_discarded_cb(MemoryRegionSection *s,
                                              void *arg)
{
    struct VirtIOMEMReplayData *data = arg;

    ((ReplayRamDiscard)data->fn)(s, data->opaque);
    return 0;
}

static void virtio_mem_rdm_replay_discarded(const RamDiscardManager *rdm,
                                            MemoryRegionSection *s,
                                            ReplayRamDiscard replay_fn,
                                            void *opaque)
{
    const VirtIOMEM *vmem = VIRTIO_MEM(rdm);
    struct VirtIOMEMReplayData data = {
        .fn = replay_fn,
        .opaque = opaque,
    };

    g_assert(s->mr == &vmem->memdev->mr);
    virtio_mem_for_each_unplugged_section(vmem, s, &data,
                                          virtio_mem_rdm_replay_discarded_cb);
}
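
/*
 * Register a RamDiscardListener and immediately replay all currently plugged
 * ranges, so the new listener starts out in sync with the device state.
 */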
static void virtio_mem_rdm_register_listener(RamDiscardManager *rdm,
                                             RamDiscardListener *rdl,
                                             MemoryRegionSection *s)
{
    VirtIOMEM *vmem = VIRTIO_MEM(rdm);
    int ret;

    g_assert(s->mr == &vmem->memdev->mr);
    rdl->section = memory_region_section_new_copy(s);

    QLIST_INSERT_HEAD(&vmem->rdl_list, rdl, next);
    ret = virtio_mem_for_each_plugged_section(vmem, rdl->section, rdl,
                                              virtio_mem_notify_populate_cb);
    if (ret) {
        error_report("%s: Replaying plugged ranges failed: %s", __func__,
                     strerror(-ret));
    }
}
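
/*
 * Unregister a RamDiscardListener, first notifying it about all ranges that
 * are still plugged (or discarding its whole section at once if the listener
 * supports double discards).
 */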
static void virtio_mem_rdm_unregister_listener(RamDiscardManager *rdm,
                                               RamDiscardListener *rdl)
{
    VirtIOMEM *vmem = VIRTIO_MEM(rdm);

    g_assert(rdl->section->mr == &vmem->memdev->mr);
    if (vmem->size) {
        if (rdl->double_discard_supported) {
            rdl->notify_discard(rdl, rdl->section);
        } else {
            virtio_mem_for_each_plugged_section(vmem, rdl->section, rdl,
                                                virtio_mem_notify_discard_cb);
        }
    }
    memory_region_section_free_copy(rdl->section);
    rdl->section = NULL;
    QLIST_REMOVE(rdl, next);
}
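
/*
 * Hot-unplugging the device is only possible once unplugged memory is
 * inaccessible and the device neither exposes ('size') nor is asked to
 * expose ('requested-size') any memory.
 */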
static void virtio_mem_unplug_request_check(VirtIOMEM *vmem, Error **errp)
{
    if (vmem->unplugged_inaccessible == ON_OFF_AUTO_OFF) {
        /*
         * We could allow it with a usable region size of 0, but let's just
         * not care about that legacy setting.
         */
        error_setg(errp, "virtio-mem device cannot get unplugged while"
                   " '" VIRTIO_MEM_UNPLUGGED_INACCESSIBLE_PROP "' != 'on'");
        return;
    }

    if (vmem->size) {
        error_setg(errp, "virtio-mem device cannot get unplugged while"
                   " '" VIRTIO_MEM_SIZE_PROP "' != '0'");
        return;
    }
    if (vmem->requested_size) {
        error_setg(errp, "virtio-mem device cannot get unplugged while"
                   " '" VIRTIO_MEM_REQUESTED_SIZE_PROP "' != '0'");
        return;
    }
}

static void virtio_mem_class_init(ObjectClass *klass, void *data)
{
    DeviceClass *dc = DEVICE_CLASS(klass);
    VirtioDeviceClass *vdc = VIRTIO_DEVICE_CLASS(klass);
    VirtIOMEMClass *vmc = VIRTIO_MEM_CLASS(klass);
    RamDiscardManagerClass *rdmc = RAM_DISCARD_MANAGER_CLASS(klass);

    device_class_set_props(dc, virtio_mem_properties);
    dc->vmsd = &vmstate_virtio_mem;
    set_bit(DEVICE_CATEGORY_MISC, dc->categories);
    vdc->realize = virtio_mem_device_realize;
    vdc->unrealize = virtio_mem_device_unrealize;
    vdc->get_config = virtio_mem_get_config;
    vdc->get_features = virtio_mem_get_features;
    vdc->validate_features = virtio_mem_validate_features;
    vdc->vmsd = &vmstate_virtio_mem_device;
    vmc->fill_device_info = virtio_mem_fill_device_info;
    vmc->get_memory_region = virtio_mem_get_memory_region;
    vmc->decide_memslots = virtio_mem_decide_memslots;
    vmc->get_memslots = virtio_mem_get_memslots;
    vmc->add_size_change_notifier = virtio_mem_add_size_change_notifier;
    vmc->remove_size_change_notifier = virtio_mem_remove_size_change_notifier;
    vmc->unplug_request_check = virtio_mem_unplug_request_check;
    rdmc->get_min_granularity = virtio_mem_rdm_get_min_granularity;
    rdmc->is_populated = virtio_mem_rdm_is_populated;
    rdmc->replay_populated = virtio_mem_rdm_replay_populated;
    rdmc->replay_discarded = virtio_mem_rdm_replay_discarded;
    rdmc->register_listener = virtio_mem_rdm_register_listener;
    rdmc->unregister_listener = virtio_mem_rdm_unregister_listener;
}

static const TypeInfo virtio_mem_info = {
    .name = TYPE_VIRTIO_MEM,
    .parent = TYPE_VIRTIO_DEVICE,
    .instance_size = sizeof(VirtIOMEM),
    .instance_init = virtio_mem_instance_init,
    .instance_finalize = virtio_mem_instance_finalize,
    .class_init = virtio_mem_class_init,
    .class_size = sizeof(VirtIOMEMClass),
    .interfaces = (InterfaceInfo[]) {
        { TYPE_RAM_DISCARD_MANAGER },
        { }
    },
};

static void virtio_register_types(void)
{
    type_register_static(&virtio_mem_info);
}

type_init(virtio_register_types)