author | Christoph Lameter <clameter@sgi.com> | 2007-10-16 01:25:35 -0700
committer | Linus Torvalds <torvalds@woody.linux-foundation.org> | 2007-10-16 09:42:58 -0700
commit | 56bbd65df0e92a4a8eb70c5f2b416ae2b6c5fb31 (patch)
tree | 714154b7b16d2e08c60d49b925aa0e789f0f0be0 /mm/mempolicy.c
parent | 4199cfa02b982f4c739e8a6a304d6a40e1935d25 (diff)
Memoryless nodes: Update memory policy and page migration
Online nodes may now have no memory. The checks and the initialization code must
therefore be changed to no longer rely on the node-online functions.
This correctly initializes the interleave set at bootup to target only nodes with
memory, and it makes sys_move_pages() return an error when a page is to be moved
to a memoryless node. Similarly, we now get an error if MPOL_BIND or
MPOL_INTERLEAVE is used with a memoryless node.
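
To make the new failure mode concrete, here is a minimal user-space sketch (not
part of this patch). The choice of node 1 as the memoryless node is a placeholder
for whatever the test machine's topology provides, and the exact errno values
(EINVAL for the policy call and ENODEV for move_pages() are typical) are an
assumption of this example, not something the commit guarantees. Build with
"gcc demo.c -lnuma".

	/*
	 * Hedged demo: with this change, binding memory to a memoryless
	 * node and moving pages to a memoryless node are expected to fail
	 * instead of being silently accepted or behaving strangely.
	 */
	#include <errno.h>
	#include <numaif.h>		/* mbind(), move_pages(), MPOL_* */
	#include <stdio.h>
	#include <string.h>
	#include <stdlib.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		long page_size = sysconf(_SC_PAGESIZE);
		void *buf = mmap(NULL, page_size, PROT_READ | PROT_WRITE,
				 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (buf == MAP_FAILED) {
			perror("mmap");
			return 1;
		}

		/* Hypothetical memoryless node; adjust for your topology. */
		int memoryless_node = 1;
		unsigned long nodemask = 1UL << memoryless_node;

		/* MPOL_BIND to a memoryless node: expected to fail now. */
		if (mbind(buf, page_size, MPOL_BIND, &nodemask,
			  sizeof(nodemask) * 8, 0) == -1)
			printf("mbind(MPOL_BIND, node %d) failed: %s\n",
			       memoryless_node, strerror(errno));
		else
			printf("mbind unexpectedly succeeded\n");

		/* move_pages() to a memoryless node: also expected to fail. */
		memset(buf, 0, page_size);	/* fault the page in first */
		void *pages[1] = { buf };
		int nodes[1] = { memoryless_node };
		int status[1];
		if (move_pages(0, 1, pages, nodes, status, MPOL_MF_MOVE) == -1)
			printf("move_pages(node %d) failed: %s\n",
			       memoryless_node, strerror(errno));
		else
			printf("move_pages status: %d\n", status[0]);

		munmap(buf, page_size);
		return 0;
	}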
These are somewhat new semantics. Until now one could specify memoryless nodes
and the kernel would maybe do the right thing and simply ignore the node (or do
something strange, as with MPOL_INTERLEAVE). If we wanted to keep allowing the
specification of memoryless nodes via memory policies, we would have to continue
checking against the online-node map instead.
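
Under these semantics the burden shifts to the caller: a policy mask should only
contain nodes that actually have memory. The sketch below (not part of this patch)
uses libnuma to build such a mask; treating a numa_node_size64() result of zero or
less as "memoryless" is an assumption of this example, not something the commit
defines. Build with "gcc mask.c -lnuma".

	#include <numa.h>
	#include <stdio.h>

	int main(void)
	{
		if (numa_available() < 0) {
			fprintf(stderr, "NUMA is not available on this system\n");
			return 1;
		}

		struct bitmask *mask = numa_allocate_nodemask();
		int nodes_with_mem = 0;

		for (int nid = 0; nid <= numa_max_node(); nid++) {
			long long free_bytes;
			long long total = numa_node_size64(nid, &free_bytes);

			/* Skip memoryless (or absent) nodes. */
			if (total > 0) {
				numa_bitmask_setbit(mask, nid);
				nodes_with_mem++;
			}
		}

		/* Interleave only across nodes that actually have memory. */
		if (nodes_with_mem > 0)
			numa_set_interleave_mask(mask);
		printf("interleaving across %d node(s) with memory\n",
		       nodes_with_mem);

		numa_bitmask_free(mask);
		return 0;
	}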
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Acked-by: Nishanth Aravamudan <nacc@us.ibm.com>
Tested-by: Lee Schermerhorn <lee.schermerhorn@hp.com>
Acked-by: Bob Picco <bob.picco@hp.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Mel Gorman <mel@skynet.ie>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'mm/mempolicy.c')
-rw-r--r-- | mm/mempolicy.c | 10
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 5daf63bd97e7..0d70fb7d83be 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -494,9 +494,9 @@ static void get_zonemask(struct mempolicy *p, nodemask_t *nodes)
 		*nodes = p->v.nodes;
 		break;
 	case MPOL_PREFERRED:
-		/* or use current node instead of online map? */
+		/* or use current node instead of memory_map? */
 		if (p->v.preferred_node < 0)
-			*nodes = node_online_map;
+			*nodes = node_states[N_HIGH_MEMORY];
 		else
 			node_set(p->v.preferred_node, *nodes);
 		break;
@@ -1687,7 +1687,7 @@ void __init numa_policy_init(void)
 	 * fall back to the largest node if they're all smaller.
 	 */
 	nodes_clear(interleave_nodes);
-	for_each_online_node(nid) {
+	for_each_node_state(nid, N_HIGH_MEMORY) {
 		unsigned long total_pages = node_present_pages(nid);
 
 		/* Preserve the largest node */
@@ -1973,7 +1973,7 @@ int show_numa_map(struct seq_file *m, void *v)
 		seq_printf(m, " huge");
 	} else {
 		check_pgd_range(vma, vma->vm_start, vma->vm_end,
-			&node_online_map, MPOL_MF_STATS, md);
+			&node_states[N_HIGH_MEMORY], MPOL_MF_STATS, md);
 	}
 
 	if (!md->pages)
@@ -2000,7 +2000,7 @@ int show_numa_map(struct seq_file *m, void *v)
 	if (md->writeback)
 		seq_printf(m," writeback=%lu", md->writeback);
 
-	for_each_online_node(n)
+	for_each_node_state(n, N_HIGH_MEMORY)
 		if (md->node[n])
 			seq_printf(m, " N%d=%lu", n, md->node[n]);
 out: