b01fec3659
The current logic for calculating 'maxdomain' makes it a sum of
numa_state->num_nodes with spapr->gpu_numa_id. spapr->gpu_numa_id is
used as an index to determine the next available NUMA id that a given
NVGPU can use.

The problem is that the initial value of gpu_numa_id, for any topology
that has more than one NUMA node, is equal to numa_state->num_nodes.
This means that our maxdomain will always be, at least, twice the
amount of existing NUMA nodes. A guest with 4 NUMA nodes will end up
with the following max-associativity-domains:

rtas/ibm,max-associativity-domains
                 00000004 00000008 00000008 00000008 00000008

This overtuning of maxdomains doesn't go unnoticed in the guest, being
detected in SLUB during boot:

 dmesg | grep SLUB
[    0.000000] SLUB: HWalign=128, Order=0-3, MinObjects=0, CPUs=4, Nodes=8

SLUB is detecting 8 total nodes, with 4 nodes being online.

This patch fixes ibm,max-associativity-domains by considering the
number of NVGPU NUMA nodes presented in the guest, instead of just
spapr->gpu_numa_id.

Reported-by: Cédric Le Goater <clg@kaod.org>
Tested-by: Cédric Le Goater <clg@kaod.org>
Signed-off-by: Daniel Henrique Barboza <danielhb413@gmail.com>
Message-Id: <20210128174213.1349181-4-danielhb413@gmail.com>
Reviewed-by: Greg Kurz <groug@kaod.org>
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
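A minimal standalone sketch of the arithmetic, not the actual QEMU
patch: the names (num_nodes, gpu_numa_id, initial_nvgpu_numa_id,
nvgpus_nodes) are illustrative stand-ins for the fields discussed
above, and the starting value of gpu_numa_id follows the behavior
described in the message (equal to num_nodes for topologies with more
than one node):

    #include <stdio.h>

    int main(void)
    {
        /* Example guest: 4 NUMA nodes, no NVGPUs plugged. */
        unsigned int num_nodes = 4;

        /*
         * Per the message above, gpu_numa_id starts at
         * numa_state->num_nodes for topologies with more than one
         * node; it is bumped once for each NVGPU that is added.
         */
        unsigned int initial_nvgpu_numa_id = num_nodes;
        unsigned int gpu_numa_id = initial_nvgpu_numa_id;

        /* Old calculation: the initial offset is counted as nodes. */
        unsigned int old_maxdomain = num_nodes + gpu_numa_id;

        /* Fixed calculation: count only ids actually used by NVGPUs. */
        unsigned int nvgpus_nodes = gpu_numa_id - initial_nvgpu_numa_id;
        unsigned int new_maxdomain = num_nodes + nvgpus_nodes;

        /* Prints: old maxdomain = 8, fixed maxdomain = 4 */
        printf("old maxdomain = %u, fixed maxdomain = %u\n",
               old_maxdomain, new_maxdomain);
        return 0;
    }

Run as-is, this prints 8 for the old formula and 4 for the fixed one,
matching the Nodes=8 reported by SLUB versus the 4 online nodes above.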