author    Paul Jackson <pj@sgi.com>          2005-08-23 01:04:27 -0700
committer Linus Torvalds <torvalds@g5.osdl.org>  2005-08-23 20:02:52 -0700
commit    d10689b68aff7b48e3de1a3f7fcd6567bd2905af (patch)
tree      c81c261274011d301dfbcfd1a3e13480b93c167e
parent    ae75784bc576a1af70509c2f3ba2b70bb65a0c58 (diff)
[PATCH] cpu_exclusive sched domains on partial nodes temp fix
This keeps the kernel/cpuset.c routine update_cpu_domains() from invoking the sched.c routine partition_sched_domains() if the cpuset in question doesn't fall on node boundaries.

I have boot tested this on an SN2 and, with the help of a couple of ad hoc printk's, determined that it does indeed avoid calling the partition_sched_domains() routine on partial nodes. I did not directly verify that this avoids setting up bogus sched domains or avoids the oops that Hawkes saw.

This patch imposes a silent artificial constraint on which cpusets can be used to define dynamic sched domains.

This patch should allow proceeding with this new feature in 2.6.13 for the configurations in which it is useful (node-aligned sched domains) while avoiding attempts to set up sched domains in the less useful cases that can cause kernel corruption and an oops.

Signed-off-by: Paul Jackson <pj@sgi.com>
Acked-by: Ingo Molnar <mingo@elte.hu>
Acked-by: Dinakar Guniguntala <dino@in.ibm.com>
Acked-by: John Hawkes <hawkes@sgi.com>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
 kernel/cpuset.c | 17 +++++++++++++++++
 1 file changed, 17 insertions(+), 0 deletions(-)
diff --git a/kernel/cpuset.c b/kernel/cpuset.c
index 21a4e3b2cbda..e0d296c5b302 100644
--- a/kernel/cpuset.c
+++ b/kernel/cpuset.c
@@ -636,6 +636,23 @@ static void update_cpu_domains(struct cpuset *cur)
                 return;
 
         /*
+         * Hack to avoid 2.6.13 partial node dynamic sched domain bug.
+         * Require the 'cpu_exclusive' cpuset to include all (or none)
+         * of the CPUs on each node, or return w/o changing sched domains.
+         * Remove this hack when dynamic sched domains fixed.
+         */
+        {
+                int i, j;
+
+                for_each_cpu_mask(i, cur->cpus_allowed) {
+                        for_each_cpu_mask(j, node_to_cpumask(cpu_to_node(i))) {
+                                if (!cpu_isset(j, cur->cpus_allowed))
+                                        return;
+                        }
+                }
+        }
+
+        /*
          * Get all cpus from parent's cpus_allowed not part of exclusive
          * children
          */
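
To make the "silent artificial constraint" concrete, here is a minimal userspace sketch (plain C) of the alignment rule the hack enforces: for every CPU in the cpuset's cpus_allowed, every other CPU on the same node must also be present, otherwise update_cpu_domains() returns without touching the sched domains. The node_of[] table below is a made-up two-node example topology, not anything taken from the patch.

/*
 * Illustrative sketch only; mirrors the nested for_each_cpu_mask()
 * loops in the patch, using plain bitmasks instead of cpumask_t.
 */
#include <stdio.h>
#include <stdbool.h>

#define NR_CPUS 8

/* Assumed example topology: CPUs 0-3 on node 0, CPUs 4-7 on node 1. */
static const int node_of[NR_CPUS] = { 0, 0, 0, 0, 1, 1, 1, 1 };

/* Return true only if 'cpus' covers each node completely or not at all. */
static bool node_aligned(unsigned int cpus)
{
        for (int i = 0; i < NR_CPUS; i++) {
                if (!(cpus & (1u << i)))
                        continue;
                for (int j = 0; j < NR_CPUS; j++) {
                        /* CPU j shares a node with CPU i but is missing: partial node. */
                        if (node_of[j] == node_of[i] && !(cpus & (1u << j)))
                                return false;
                }
        }
        return true;
}

int main(void)
{
        printf("cpus 0-3 -> %s\n", node_aligned(0x0f) ? "aligned" : "partial");
        printf("cpus 0-5 -> %s\n", node_aligned(0x3f) ? "aligned" : "partial");
        printf("cpus 0-7 -> %s\n", node_aligned(0xff) ? "aligned" : "partial");
        return 0;
}

With this example topology, a mask covering CPUs 0-3 or 0-7 is accepted, while 0-5 (all of node 0 plus half of node 1) is rejected. The rejected case is exactly the one the kernel code now skips silently instead of handing a partial node to partition_sched_domains().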