author     Nick Piggin <npiggin@suse.de>            2006-09-25 23:31:24 -0700
committer  Linus Torvalds <torvalds@g5.osdl.org>    2006-09-26 08:48:48 -0700
commit     db37648cd6ce9b828abd6d49aa3d269926ee7b7d (patch)
tree       a0155c7897f4706386d10c8718f98687bc357c82 /mm
parent     28e4d965e6131ace1e813e93aebca89ac6b82dc1 (diff)
download   linux-db37648cd6ce9b828abd6d49aa3d269926ee7b7d.tar.gz
           linux-db37648cd6ce9b828abd6d49aa3d269926ee7b7d.tar.bz2
           linux-db37648cd6ce9b828abd6d49aa3d269926ee7b7d.zip
[PATCH] mm: non syncing lock_page()
lock_page() requires the caller to hold a reference on the page->mapping inode,
because the sync_page() path dereferences the mapping; so set_page_dirty_lock(),
whose callers are not required to hold such a reference, is buggy according to
its own comments.
Solve it by introducing a new lock_page_nosync() which does not call sync_page().
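The diff below is limited to mm/, so the header side of the change is not shown
here. A minimal sketch of what the include/linux/pagemap.h wrapper presumably
looks like in this era (fast path inline, slow path in the new
__lock_page_nosync()):

/*
 * Sketch only; the real declaration lives in include/linux/pagemap.h and is
 * outside this mm-only diff. It is assumed to mirror lock_page(): take the
 * bit lock inline if it is free, otherwise fall into the non-syncing slow path.
 */
static inline void lock_page_nosync(struct page *page)
{
	might_sleep();
	if (TestSetPageLocked(page))
		__lock_page_nosync(page);
}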
akpm: unpleasant solution to an unpleasant problem. If it goes wrong it could
cause great slowdowns while the lock_page() caller waits for kblockd to
perform the unplug. And if a filesystem has special sync_page() requirements
(none presently do), permanent hangs are possible.
otoh, set_page_dirty_lock() is usually (always?) called against userspace
pages. They are always up-to-date, so there shouldn't be any pending read I/O
against these pages.
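For context on why the plain lock_page() path needs the mapping pinned: its wait
callback is sync_page(), which follows page_mapping(page) into the address_space
operations to unplug the request queue before sleeping. Roughly, assuming the
2.6.18-era mm/filemap.c (shown only as background, not part of this patch):

static int sync_page(void *word)
{
	struct address_space *mapping;
	struct page *page;

	page = container_of((unsigned long *)word, struct page, flags);

	/* Make sure we see a stable ->mapping before dereferencing it. */
	smp_mb();
	mapping = page_mapping(page);
	if (mapping && mapping->a_ops && mapping->a_ops->sync_page)
		mapping->a_ops->sync_page(page);	/* usually unplugs the block queue */
	io_schedule();
	return 0;
}

__sleep_on_page_lock() in the diff below is exactly this minus the mapping
dereference, which is what makes it safe for set_page_dirty_lock().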
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
Diffstat (limited to 'mm')
-rw-r--r--   mm/filemap.c        | 17 ++++++++++++++++
-rw-r--r--   mm/page-writeback.c |  2 +-
2 files changed, 18 insertions(+), 1 deletion(-)
diff --git a/mm/filemap.c b/mm/filemap.c
index b9a60c43b61a..d5af1cab4268 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -488,6 +488,12 @@ struct page *page_cache_alloc_cold(struct address_space *x)
 EXPORT_SYMBOL(page_cache_alloc_cold);
 #endif
 
+static int __sleep_on_page_lock(void *word)
+{
+	io_schedule();
+	return 0;
+}
+
 /*
  * In order to wait for pages to become available there must be
  * waitqueues associated with pages. By using a hash table of
@@ -577,6 +583,17 @@ void fastcall __lock_page(struct page *page)
 }
 EXPORT_SYMBOL(__lock_page);
 
+/*
+ * Variant of lock_page that does not require the caller to hold a reference
+ * on the page's mapping.
+ */
+void fastcall __lock_page_nosync(struct page *page)
+{
+	DEFINE_WAIT_BIT(wait, &page->flags, PG_locked);
+	__wait_on_bit_lock(page_waitqueue(page), &wait, __sleep_on_page_lock,
+							TASK_UNINTERRUPTIBLE);
+}
+
 /**
  * find_get_page - find and get a page reference
  * @mapping: the address_space to search
diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index b9f4c6f1be86..555752907dc3 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -701,7 +701,7 @@ int set_page_dirty_lock(struct page *page)
 {
 	int ret;
 
-	lock_page(page);
+	lock_page_nosync(page);
 	ret = set_page_dirty(page);
 	unlock_page(page);
 	return ret;
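A usage note, not part of the patch: set_page_dirty_lock() is typically called
by drivers and direct-I/O paths on user pages pinned with get_user_pages(),
where the caller holds page references but nothing pinning the backing inode.
A hypothetical cleanup path of that shape, with illustrative names only:

/*
 * Hypothetical example, not from this patch: release user pages that a driver
 * pinned with get_user_pages() and wrote into via DMA. Only the calling
 * pattern around set_page_dirty_lock() and put_page() is the point here.
 */
static void example_release_user_pages(struct page **pages, int npages)
{
	int i;

	for (i = 0; i < npages; i++) {
		/* Safe without an inode reference thanks to lock_page_nosync(). */
		set_page_dirty_lock(pages[i]);
		put_page(pages[i]);
	}
}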