author      Long Li <longli@microsoft.com>                    2018-11-02 18:02:48 +0000
committer   Greg Kroah-Hartman <gregkh@linuxfoundation.org>   2019-02-12 19:46:57 +0100
commit      46ed4f4fa1cf98b3da433f76ca4c7ac33f45d423
tree        8db17f6988d0705a70a357b624468aa39ad2493b /kernel/irq
parent      2198c2c15eeeefe5369d7fa56a9d42a19950d4fb
genirq/affinity: Spread IRQs to all available NUMA nodes
[ Upstream commit b82592199032bf7c778f861b936287e37ebc9f62 ]

If the number of NUMA nodes exceeds the number of MSI/MSI-X interrupts
which are allocated for a device, the interrupt affinity spreading code
fails to spread them across all nodes.

The reason is that the spreading code starts from node 0 and continues up
to the number of interrupts requested for allocation. This leaves the
nodes past the last interrupt unused.

This results in interrupt concentration on the first nodes which violates
the assumption of the block layer that all nodes are covered evenly. As a
consequence the NUMA nodes above the number of interrupts are all assigned
to hardware queue 0 and therefore NUMA node 0, which results in bad
performance and has CPU hotplug implications, because queue 0 gets shut
down when the last CPU of node 0 is offlined.

Go over all NUMA nodes and assign them round-robin to all requested
interrupts to solve this.

[ tglx: Massaged changelog ]

Signed-off-by: Long Li <longli@microsoft.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ming Lei <ming.lei@redhat.com>
Cc: Michael Kelley <mikelley@microsoft.com>
Link: https://lkml.kernel.org/r/20181102180248.13583-1-longli@linuxonhyperv.com
Signed-off-by: Sasha Levin <sashal@kernel.org>
Diffstat (limited to 'kernel/irq')
-rw-r--r--  kernel/irq/affinity.c  5
1 file changed, 2 insertions, 3 deletions
diff --git a/kernel/irq/affinity.c b/kernel/irq/affinity.c
index f4f29b9d90ee..e12cdf637c71 100644
--- a/kernel/irq/affinity.c
+++ b/kernel/irq/affinity.c
@@ -117,12 +117,11 @@ static int irq_build_affinity_masks(const struct irq_affinity *affd,
 	 */
 	if (numvecs <= nodes) {
 		for_each_node_mask(n, nodemsk) {
-			cpumask_copy(masks + curvec, node_to_cpumask[n]);
-			if (++done == numvecs)
-				break;
+			cpumask_or(masks + curvec, masks + curvec, node_to_cpumask[n]);
 			if (++curvec == last_affv)
 				curvec = affd->pre_vectors;
 		}
+		done = numvecs;
 		goto out;
 	}
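
For illustration, here is a minimal standalone sketch of the round-robin wrap this patch introduces. This is not the kernel code: NUM_NODES and NUM_VECS are invented example values, and wrapping the vector index back to 0 stands in for wrapping back to affd->pre_vectors. With the old loop, 8 nodes and 3 vectors would stop after node 2 and leave nodes 3..7 unassigned; visiting every node and wrapping the vector index covers all of them.

/*
 * Standalone sketch (not the kernel implementation): demonstrates the
 * round-robin node-to-vector assignment. NUM_NODES and NUM_VECS are
 * made-up values; wrapping to 0 models wrapping to affd->pre_vectors.
 */
#include <stdio.h>

#define NUM_NODES 8	/* hypothetical NUMA node count */
#define NUM_VECS  3	/* hypothetical number of interrupt vectors */

int main(void)
{
	int curvec = 0;

	/*
	 * Visit every node, as the patched loop does. The old code broke
	 * out after NUM_VECS iterations, so nodes 3..7 were never spread.
	 */
	for (int n = 0; n < NUM_NODES; n++) {
		printf("node %d -> vector %d\n", n, curvec);
		if (++curvec == NUM_VECS)
			curvec = 0;	/* wrap: keep spreading round-robin */
	}
	return 0;
}

Run standalone, this maps nodes 0..7 to vectors 0,1,2,0,1,2,0,1. In the kernel the switch from cpumask_copy() to cpumask_or() matters for the same reason: once a vector can receive more than one node, each wrapped node's CPUs must be accumulated into the already-assigned mask rather than overwriting it.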