author	Kirill A. Shutemov <kirill.shutemov@linux.intel.com>	2015-02-27 15:52:12 -0800
committer	Linus Torvalds <torvalds@linux-foundation.org>	2015-02-28 09:57:51 -0800
commit	c07af4f1ce413d009ea76ee69099f06f73ce75b2 (patch)
tree	7f907957d8499acb11b4f8d861519588048ee84e /arch/parisc
parent	cc87317726f851531ae8422e0c2d3d6e2d7b1955 (diff)
mm: add missing __PAGETABLE_{PUD,PMD}_FOLDED defines
Core mm expects __PAGETABLE_{PUD,PMD}_FOLDED to be defined if these page table levels are folded. Usually, these defines are provided by <asm-generic/pgtable-nopmd.h> and <asm-generic/pgtable-nopud.h>. But some architectures fold page table levels in a custom way and need to define these macros themselves. This patch adds the missing defines.

The patch fixes an mm->nr_pmds underflow and eliminates dead __pmd_alloc() and __pud_alloc() on architectures without these page table levels.

Signed-off-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Aaro Koskinen <aaro.koskinen@iki.fi>
Cc: David Howells <dhowells@redhat.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Helge Deller <deller@gmx.de>
Cc: "James E.J. Bottomley" <jejb@parisc-linux.org>
Cc: Koichi Yasutake <yasutake.koichi@jp.panasonic.com>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
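For context, a rough sketch of how core mm keys off __PAGETABLE_PMD_FOLDED, simplified from the mm->nr_pmds accounting helpers in include/linux/mm.h of this era (illustrative, not the exact kernel text):

	/* Sketch: folded PMD level means no PMD tables to account for. */
	#ifdef __PAGETABLE_PMD_FOLDED
	static inline void mm_inc_nr_pmds(struct mm_struct *mm) {}
	static inline void mm_dec_nr_pmds(struct mm_struct *mm) {}
	#else
	static inline void mm_inc_nr_pmds(struct mm_struct *mm)
	{
		atomic_long_inc(&mm->nr_pmds);
	}

	static inline void mm_dec_nr_pmds(struct mm_struct *mm)
	{
		atomic_long_dec(&mm->nr_pmds);
	}
	#endif

On an architecture that folds the PMD level without defining the macro, the non-stub helpers are used, so frees can be counted against allocations that never happened; that imbalance is the nr_pmds underflow the commit message refers to.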
Diffstat (limited to 'arch/parisc')
-rw-r--r--  arch/parisc/include/asm/pgtable.h | 1 +
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/arch/parisc/include/asm/pgtable.h b/arch/parisc/include/asm/pgtable.h
index 8c966b2270aa..15207b9362bf 100644
--- a/arch/parisc/include/asm/pgtable.h
+++ b/arch/parisc/include/asm/pgtable.h
@@ -96,6 +96,7 @@ extern void purge_tlb_entries(struct mm_struct *, unsigned long);
 #if PT_NLEVELS == 3
 #define BITS_PER_PMD (PAGE_SHIFT + PMD_ORDER - BITS_PER_PMD_ENTRY)
 #else
+#define __PAGETABLE_PMD_FOLDED
 #define BITS_PER_PMD 0
 #endif
 #define PTRS_PER_PMD (1UL << BITS_PER_PMD)
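On the parisc side, PT_NLEVELS != 3 means the PMD is folded (BITS_PER_PMD is 0, so PTRS_PER_PMD is 1), and the new define lets generic code drop the now-dead PMD allocation path. A hedged sketch of that generic guard, paraphrased from include/linux/mm.h and mm/memory.c rather than quoted verbatim:

	/* Sketch: the folded case stubs out __pmd_alloc() entirely. */
	#ifdef __PAGETABLE_PMD_FOLDED
	static inline int __pmd_alloc(struct mm_struct *mm, pud_t *pud,
				      unsigned long address)
	{
		return 0;	/* nothing to allocate for a folded level */
	}
	#else
	int __pmd_alloc(struct mm_struct *mm, pud_t *pud, unsigned long address);
	#endif

	/*
	 * pmd_alloc() only falls back to __pmd_alloc() when the pud entry is
	 * empty; with the stub above, the out-of-line __pmd_alloc() in
	 * mm/memory.c (itself guarded by #ifndef __PAGETABLE_PMD_FOLDED) is
	 * compiled out, which is the dead code the commit message mentions.
	 */
	static inline pmd_t *pmd_alloc(struct mm_struct *mm, pud_t *pud,
				       unsigned long address)
	{
		return (unlikely(pud_none(*pud)) && __pmd_alloc(mm, pud, address)) ?
			NULL : pmd_offset(pud, address);
	}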