author		Mel Gorman <mgorman@suse.de>	2013-10-07 11:29:02 +0100
committer	Ingo Molnar <mingo@kernel.org>	2013-10-09 12:40:28 +0200
commit		e6628d5b0a2979f3e0ee6f7783ede5df50cb9ede (patch)
tree		824c34aa911095ab568e12797db9150949068b8e /kernel/sched/core.c
parent		7a0f308337d11fd5caa9f845c6d08cc5d6067988 (diff)
sched/numa: Reschedule task on preferred NUMA node once selected
A preferred node is selected based on the node on which the most NUMA hinting
faults were incurred. There is no guarantee that the task is running on that
node at the time, so this patch reschedules the task to run on the most idle
CPU of the preferred node once it has been selected. This avoids waiting for
the load balancer to make a decision.
Signed-off-by: Mel Gorman <mgorman@suse.de>
Reviewed-by: Rik van Riel <riel@redhat.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1381141781-10992-25-git-send-email-mgorman@suse.de
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'kernel/sched/core.c')
-rw-r--r--	kernel/sched/core.c	19
1 file changed, 19 insertions(+), 0 deletions(-)
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index b7e6b6f9c5f6..66b878e94554 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4348,6 +4348,25 @@ fail:
 	return ret;
 }
 
+#ifdef CONFIG_NUMA_BALANCING
+/* Migrate current task p to target_cpu */
+int migrate_task_to(struct task_struct *p, int target_cpu)
+{
+	struct migration_arg arg = { p, target_cpu };
+	int curr_cpu = task_cpu(p);
+
+	if (curr_cpu == target_cpu)
+		return 0;
+
+	if (!cpumask_test_cpu(target_cpu, tsk_cpus_allowed(p)))
+		return -EINVAL;
+
+	/* TODO: This is not properly updating schedstats */
+
+	return stop_one_cpu(curr_cpu, migration_cpu_stop, &arg);
+}
+#endif
+
 /*
  * migration_cpu_stop - this will be executed by a highprio stopper thread
  * and performs thread migration by bumping thread off CPU then
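
The code that actually calls migrate_task_to() lives in kernel/sched/fair.c and
is outside this diffstat-limited view. The following is a minimal sketch of how
the NUMA placement path is expected to use the new helper once a preferred node
has been chosen; the wrapper name task_numa_place_on() and the idlest-CPU
helper find_idlest_cpu_node() are illustrative assumptions, not taken from this
hunk.

/*
 * Sketch only -- not part of this patch.  Illustrates the expected caller
 * in kernel/sched/fair.c once a preferred node has been picked from the
 * per-node NUMA hinting fault counts.  task_numa_place_on() and
 * find_idlest_cpu_node() are illustrative names.
 */
static void task_numa_place_on(struct task_struct *p, int preferred_nid)
{
	int cpu = task_cpu(p);

	/* Nothing to do if the task already runs on the preferred node. */
	if (cpu_to_node(cpu) == preferred_nid)
		return;

	/* Pick the most idle CPU on the preferred node... */
	cpu = find_idlest_cpu_node(cpu, preferred_nid);

	/*
	 * ...and push the task there now rather than waiting for the load
	 * balancer.  A failure (e.g. -EINVAL when the target CPU is not in
	 * the task's allowed mask) is simply ignored here.
	 */
	migrate_task_to(p, cpu);
}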