author     Adam Litke <agl@us.ibm.com>        2006-08-18 11:22:21 -0700
committer  Paul Mackerras <paulus@samba.org>  2006-08-24 10:07:23 +1000
commit     c9169f8747bb282cbe518132bf7d49755a00b6c1
tree       1357eb203b7e3c80d6ea2036df664e1a3a401555 /include
parent     d55c4a76f26160482158cd43788dcfc96a320a4f
[POWERPC] hugepage BUG fix
On Tue, 2006-08-15 at 08:22 -0700, Dave Hansen wrote:
> kernel BUG in cache_free_debugcheck at mm/slab.c:2748!
Alright, this one is only triggered when slab debugging is enabled. The
slabs are assumed to be aligned on a HUGEPTE_TABLE_SIZE boundary. The free
path makes use of this assumption and uses the lowest nibble to pass around
an index into an array of kmem_cache pointers. With slab debugging turned
on, the slab is still aligned, but the "working" object pointer is not.
This breaks the assumption above that a full nibble is available for
PGF_CACHENUM_MASK.
The following patch reduces PGF_CACHENUM_MASK to cover only the two least
significant bits, which is enough for the current four pgtable cache
types, and then uses this constant to mask out the appropriate part of
the huge pte pointer.
Signed-off-by: Adam Litke <agl@us.ibm.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Diffstat (limited to 'include')
 include/asm-powerpc/pgalloc.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/include/asm-powerpc/pgalloc.h b/include/asm-powerpc/pgalloc.h
index 9f0917c68659..ae63db7b3e7d 100644
--- a/include/asm-powerpc/pgalloc.h
+++ b/include/asm-powerpc/pgalloc.h
@@ -117,7 +117,7 @@ static inline void pte_free(struct page *ptepage)
 	pte_free_kernel(page_address(ptepage));
 }
 
-#define PGF_CACHENUM_MASK	0xf
+#define PGF_CACHENUM_MASK	0x3
 
 typedef struct pgtable_free {
 	unsigned long val;