author	Ken Chen <kenchen@google.com>	2007-02-08 14:20:27 -0800
committer	Linus Torvalds <torvalds@woody.linux-foundation.org>	2007-02-09 09:25:46 -0800
commit	6649a3863232eb2e2f15ea6c622bd8ceacf96d76 (patch)
tree	c3b77d20afd1e7215186244375f6cdcaad94d4b2
parent	f336953bfdee8d5e7f69cb8e080704546541f04b (diff)
[PATCH] hugetlb: preserve hugetlb pte dirty state
__unmap_hugepage_range() is buggy in that it does not preserve the dirty state of the huge_pte when unmapping a hugepage range.  This causes data corruption when drop_caches is used by the sysadmin.  For example, an application creates a hugetlb file, modifies its pages, and then unmaps it.  While the hugetlb file is still alive, the sysadmin comes along and does "echo 3 > /proc/sys/vm/drop_caches".  drop_pagecache_sb() will happily free all pages that aren't marked dirty if there is no active mapping.  Later, when the application maps the hugetlb file back in, all of its data is gone, which is catastrophic for the application.  Not only that, the internal resv_huge_pages count also gets messed up.

Fix it up by marking the pages dirty as appropriate.

Signed-off-by: Ken Chen <kenchen@google.com>
Cc: "Nish Aravamudan" <nish.aravamudan@gmail.com>
Cc: Adam Litke <agl@us.ibm.com>
Cc: David Gibson <david@gibson.dropbear.id.au>
Cc: William Lee Irwin III <wli@holomorphy.com>
Cc: <stable@kernel.org>
Cc: Hugh Dickins <hugh@veritas.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-rw-r--r--	fs/hugetlbfs/inode.c	| 5
-rw-r--r--	mm/hugetlb.c	| 2
2 files changed, 6 insertions(+), 1 deletion(-)
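To make the failure mode concrete, the following is a minimal userspace sketch of the scenario described in the commit message above. It is illustrative only and not part of the patch: the hugetlbfs mount point /mnt/huge, the file name, and the 2 MB huge page size are assumptions for the example.

/* Hypothetical reproduction sketch for the drop_caches data-loss scenario.
 * Assumes hugetlbfs is mounted at /mnt/huge with 2 MB huge pages. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define HPAGE_SIZE (2UL * 1024 * 1024)	/* assumed huge page size */

int main(void)
{
	int fd = open("/mnt/huge/testfile", O_CREAT | O_RDWR, 0600);
	if (fd < 0) { perror("open"); return 1; }

	/* Map one huge page, dirty it, then unmap while keeping the file. */
	char *p = mmap(NULL, HPAGE_SIZE, PROT_READ | PROT_WRITE,
		       MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) { perror("mmap"); return 1; }
	memset(p, 0xaa, HPAGE_SIZE);
	munmap(p, HPAGE_SIZE);

	/* At this point the sysadmin runs:
	 *	echo 3 > /proc/sys/vm/drop_caches
	 * On an unpatched kernel the huge page was never marked dirty on
	 * unmap, so it is dropped even though it still holds the data. */

	/* Remap and check: without the fix the data reads back as zeroes. */
	p = mmap(NULL, HPAGE_SIZE, PROT_READ, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) { perror("mmap"); return 1; }
	printf("first byte after remap: %#x\n", (unsigned char)p[0]);

	munmap(p, HPAGE_SIZE);
	close(fd);
	return 0;
}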
diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
index 4f4cd132b571..e6bd553fdc4c 100644
--- a/fs/hugetlbfs/inode.c
+++ b/fs/hugetlbfs/inode.c
@@ -449,10 +449,13 @@ static int hugetlbfs_symlink(struct inode *dir,
 }
 /*
- * For direct-IO reads into hugetlb pages
+ * mark the head page dirty
  */
 static int hugetlbfs_set_page_dirty(struct page *page)
 {
+	struct page *head = (struct page *)page_private(page);
+
+	SetPageDirty(head);
 	return 0;
 }
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index cb362f761f17..36db012b38dd 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -389,6 +389,8 @@ void __unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
 			continue;
 		page = pte_page(pte);
+		if (pte_dirty(pte))
+			set_page_dirty(page);
 		list_add(&page->lru, &page_list);
 	}
 	spin_unlock(&mm->page_table_lock);