author		KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>	2008-12-09 13:14:16 -0800
committer	Linus Torvalds <torvalds@linux-foundation.org>		2008-12-10 08:01:53 -0800
commit		6841c8e26357904ef462650273f5d5015f7bb370
tree		17621c8b63393b725f42dad26ac671dd1926c9c1
parent		02d211688727ad02bb4555b1aa8ae2de16b21b39
mm: remove UP version of lru_add_drain_all()
Currently, lru_add_drain_all() has two versions (sketched below):
(1) one that uses schedule_on_each_cpu()
(2) one that doesn't use schedule_on_each_cpu()
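
For orientation, here is a condensed sketch of the two variants as they stood before this patch; it is reconstructed from the diff at the bottom of this page and is not verbatim mm/swap.c source.

/* Condensed, pre-patch picture of mm/swap.c (reconstructed from the diff below). */
#if defined(CONFIG_NUMA) || defined(CONFIG_UNEVICTABLE_LRU)
/* (1) run lru_add_drain() on every online cpu via schedule_on_each_cpu() */
int lru_add_drain_all(void)
{
	return schedule_on_each_cpu(lru_add_drain_per_cpu);
}
#else
/* (2) drain only the calling cpu's LRU pagevecs */
int lru_add_drain_all(void)
{
	lru_add_drain();
	return 0;
}
#endif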
Gerald Schaefer reported that it doesn't work well on an SMP (non-NUMA) S390
machine.
offline_pages() calls lru_add_drain_all() followed by drain_all_pages().
While drain_all_pages() works on each cpu, lru_add_drain_all() only runs
on the current cpu for architectures w/o CONFIG_NUMA. This let us run
into the BUG_ON(!PageBuddy(page)) in __offline_isolated_pages() during a
memory hotplug stress test on s390. The page in question was still on the
pcp list, because of a race with lru_add_drain_all() and drain_all_pages()
on different cpus.
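
A simplified sketch of the sequence described above may make the race easier to follow; the call ordering inside offline_pages() is taken from this report, everything else is illustrative and not verbatim kernel source.

/* Illustrative only: ordering inside offline_pages() as described in the report above. */
lru_add_drain_all();	/* pre-patch UP variant: drains only the current cpu's LRU pagevecs */
...
drain_all_pages();	/* flushes the per-cpu (pcp) free lists on every cpu */
/*
 * A page still held in another cpu's LRU pagevec can be released to that
 * cpu's pcp list after this point, so __offline_isolated_pages() later
 * trips BUG_ON(!PageBuddy(page)).
 */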
Actually, almost every machine has CONFIG_UNEVICTABLE_LRU=y, so almost every
machine uses the (1) version of lru_add_drain_all() even when it is UP.
That makes the ifdef not valuable; simply removing it is better.
Signed-off-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: Lee Schermerhorn <Lee.Schermerhorn@hp.com>
Acked-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Cc: Dave Hansen <dave@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-rw-r--r--	mm/swap.c	13
1 file changed, 0 insertions, 13 deletions
diff --git a/mm/swap.c b/mm/swap.c
index 2881987603eb..b135ec90cdeb 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -299,7 +299,6 @@ void lru_add_drain(void)
 	put_cpu();
 }
 
-#if defined(CONFIG_NUMA) || defined(CONFIG_UNEVICTABLE_LRU)
 static void lru_add_drain_per_cpu(struct work_struct *dummy)
 {
 	lru_add_drain();
@@ -313,18 +312,6 @@ int lru_add_drain_all(void)
 	return schedule_on_each_cpu(lru_add_drain_per_cpu);
 }
 
-#else
-
-/*
- * Returns 0 for success
- */
-int lru_add_drain_all(void)
-{
-	lru_add_drain();
-	return 0;
-}
-#endif
-
 /*
  * Batched page_cache_release().  Decrement the reference count on all the
  * passed pages.  If it fell to zero then remove the page from the LRU and