path: root/include
* [PATCH] page migration reorg (Christoph Lameter, 2006-03-22; 2 files, -19/+51)

  Centralize the page migration functions in anticipation of additional
  tinkering. Creates a new file mm/migrate.c

  1. Extract buffer_migrate_page() from fs/buffer.c
  2. Extract central migration code from vmscan.c
  3. Extract some components from mempolicy.c
  4. Export pageout() and remove_from_swap() from vmscan.c
  5. Make it possible to configure NUMA systems without page migration
     and non-NUMA systems with page migration.

  I had to do some #ifdeffing in mempolicy.c that may need a cleanup.

  Signed-off-by: Christoph Lameter <clameter@sgi.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] hugepage: is_aligned_hugepage_range() cleanup (David Gibson, 2006-03-22; 2 files, -4/+13)

  Quite a long time back, prepare_hugepage_range() replaced
  is_aligned_hugepage_range() as the callback from mm/mmap.c to arch code
  to verify if an address range is suitable for a hugepage mapping.

  is_aligned_hugepage_range() stuck around, but only to implement
  prepare_hugepage_range() on archs which didn't implement their own.
  Most archs (everything except ia64 and powerpc) used the same
  implementation of is_aligned_hugepage_range(). On powerpc, which
  implements its own prepare_hugepage_range(), the custom version was
  never used.

  In addition, "is_aligned_hugepage_range()" was a bad name, because it
  suggests it returns true iff the given range is a good hugepage range,
  whereas in fact it returns 0-or-error (so the sense is reversed).

  This patch cleans up by abolishing is_aligned_hugepage_range().
  Instead prepare_hugepage_range() is defined directly. Most archs use
  the default version, which simply checks the given region is aligned
  to the size of a hugepage. ia64 and powerpc define custom versions.
  The ia64 one simply checks that the range is in the correct address
  space region in addition to being suitably aligned. The powerpc
  version (just as previously) checks for suitable addresses, and if
  necessary performs low-level MMU frobbing to set up new areas for use
  by hugepages.

  No libhugetlbfs testsuite regressions on ppc64 (POWER5 LPAR).

  Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
  Signed-off-by: Zhang Yanmin <yanmin.zhang@intel.com>
  Cc: "David S. Miller" <davem@davemloft.net>
  Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Cc: Paul Mackerras <paulus@samba.org>
  Cc: William Lee Irwin III <wli@holomorphy.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
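  For reference, the default prepare_hugepage_range() described above
  amounts to a pure alignment check; a minimal sketch, assuming the arch
  provides HPAGE_MASK:

      #ifndef ARCH_HAS_PREPARE_HUGEPAGE_RANGE
      /* Accept the range iff both its start and length are aligned to
       * the hugepage size; returns 0-or-error, matching the new name. */
      static inline int prepare_hugepage_range(unsigned long addr,
                                               unsigned long len)
      {
              if (len & ~HPAGE_MASK)
                      return -EINVAL;
              if (addr & ~HPAGE_MASK)
                      return -EINVAL;
              return 0;
      }
      #endif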
* [PATCH] hugepage: Move hugetlb_free_pgd_range() prototype to hugetlb.h (David Gibson, 2006-03-22; 2 files, -3/+4)

  The optional hugepage callback, hugetlb_free_pgd_range(), is presently
  implemented non-trivially only on ia64 (but I plan to add one for
  powerpc shortly). It has its own prototype for the function in
  asm-ia64/pgtable.h. However, since the function is called from generic
  code, it makes sense for its prototype to be in the generic hugetlb.h
  header file, as the prototypes of the other arch callbacks already are
  (prepare_hugepage_range(), set_huge_pte_at(), etc.). This patch makes
  it so.

  Signed-off-by: David Gibson <dwg@au1.ibm.com>
  Cc: William Lee Irwin III <wli@holomorphy.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] hugepage: Fix hugepage logic in free_pgtables() (David Gibson, 2006-03-22; 3 files, -9/+6)

  free_pgtables() has special logic to call hugetlb_free_pgd_range()
  instead of the normal free_pgd_range() on hugepage VMAs. However, the
  test it uses to do so is incorrect: it calls is_hugepage_only_range()
  on a hugepage-sized range at the start of the vma.
  is_hugepage_only_range() will return true if the given range has any
  intersection with a hugepage address region, and in this case the
  given region need not be hugepage aligned. So, for example, this test
  can return true if called on, say, a 4k VMA immediately preceding a
  (nicely aligned) hugepage VMA.

  At present we get away with this because the powerpc version of
  hugetlb_free_pgd_range() is just a call to free_pgd_range(). On ia64
  (the only other arch with a non-trivial is_hugepage_only_range()) we
  get away with it for a different reason; the hugepage area is not
  contiguous with the rest of the user address space, and VMAs are not
  permitted in between, so the test can't return a false positive there.

  Nonetheless this should be fixed. We do that in the patch below by
  replacing the is_hugepage_only_range() test with an explicit test of
  the VMA using is_vm_hugetlb_page().

  This in turn changes behaviour for platforms where
  is_hugepage_only_range() returns false always (everything except
  powerpc and ia64). We address this by ensuring that
  hugetlb_free_pgd_range() is defined to be identical to free_pgd_range()
  (instead of a no-op) on everything except ia64. Even so, it will
  prevent some otherwise possible coalescing of calls down to
  free_pgd_range(). Since this only happens for hugepage VMAs, removing
  this small optimization seems unlikely to cause any trouble.

  This patch causes no regressions on the libhugetlbfs testsuite - ppc64
  POWER5 (8-way), ppc64 G5 (2-way) and i386 Pentium M (UP).

  Signed-off-by: David Gibson <dwg@au1.ibm.com>
  Cc: William Lee Irwin III <wli@holomorphy.com>
  Acked-by: Hugh Dickins <hugh@veritas.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
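  The shape of the fixed test in free_pgtables() is roughly the
  following (a sketch, not the exact hunk; variable names follow the
  usual free_pgtables() locals):

      /* Dispatch on the VMA type itself rather than probing the
       * address range with is_hugepage_only_range(). */
      if (is_vm_hugetlb_page(vma))
              hugetlb_free_pgd_range(tlb, addr, vma->vm_end,
                                     floor, next ? next->vm_start : ceiling);
      else
              free_pgd_range(tlb, addr, vma->vm_end,
                             floor, next ? next->vm_start : ceiling);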
* [PATCH] hugepage: Make {alloc,free}_huge_page() local (David Gibson, 2006-03-22; 1 file, -4/+0)

  Originally, mm/hugetlb.c just handled the hugepage physical allocation
  path and its {alloc,free}_huge_page() functions were used from the
  arch-specific hugepage code. These days those functions are only used
  within mm/hugetlb.c itself. Therefore, this patch makes them static
  and removes their prototypes from hugetlb.h. This requires a small
  rearrangement of code in mm/hugetlb.c to avoid a forward declaration.

  This patch causes no regressions on the libhugetlbfs testsuite (ppc64,
  POWER5).

  Signed-off-by: David Gibson <dwg@au1.ibm.com>
  Cc: William Lee Irwin III <wli@holomorphy.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] hugepage: Strict page reservation for hugepage inodes (David Gibson, 2006-03-22; 1 file, -2/+6)

  These days, hugepages are demand-allocated at first fault time.
  There's a somewhat dubious (and racy) heuristic when making a new
  mmap() to check if there are enough available hugepages to fully
  satisfy that mapping.

  A particularly obvious case where the heuristic breaks down is where a
  process maps its hugepages not as a single chunk, but as a bunch of
  individually mmap()ed (or shmat()ed) blocks without touching and
  instantiating the pages in between allocations. In this case the size
  of each block is compared against the total number of available
  hugepages. It's thus easy for the process to become overcommitted,
  because each block mapping will succeed, although the total number of
  hugepages required by all blocks exceeds the number available. In
  particular, this defeats a program which detects a mapping failure and
  adjusts its hugepage usage downward accordingly.

  The patch below addresses this problem by strictly reserving a number
  of physical hugepages for hugepage inodes which have been mapped, but
  not instantiated. MAP_SHARED mappings are thus "safe" - they will fail
  on mmap(), not later with an OOM SIGKILL. MAP_PRIVATE mappings can
  still trigger an OOM. (Actually SHARED mappings can technically still
  OOM, but only if the sysadmin explicitly reduces the hugepage pool
  between mapping and instantiation.)

  This patch appears to address the problem at hand - it allows DB2 to
  start correctly, for instance, which previously suffered the failure
  described above.

  This patch causes no regressions on the libhugetlbfs testsuite, and
  makes a test (designed to catch this problem) pass which previously
  failed (ppc64, POWER5).

  Signed-off-by: David Gibson <dwg@au1.ibm.com>
  Cc: William Lee Irwin III <wli@holomorphy.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
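  A hedged sketch of the reservation idea (identifiers invented for
  illustration; the real patch accounts reservations against the
  hugepage pool when a hugetlbfs inode is mapped):

      static long reserved_huge_pages;        /* hypothetical counter */

      /* At mmap() time: fail up front if the static pool cannot cover
       * the whole mapping, instead of OOM-killing at fault time. */
      static int hugetlb_reserve(unsigned long npages)
      {
              int ret = 0;

              spin_lock(&hugetlb_lock);
              if (npages > free_huge_pages - reserved_huge_pages)
                      ret = -ENOMEM;
              else
                      reserved_huge_pages += npages;
              spin_unlock(&hugetlb_lock);
              return ret;
      }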
* [PATCH] Enable mprotect on huge pages (Zhang, Yanmin, 2006-03-22; 4 files, -6/+9)

  2.6.16-rc3 uses hugetlb on-demand paging, but it doesn't support
  hugetlb mprotect.

  From: David Gibson <david@gibson.dropbear.id.au>

  Remove a test from the mprotect() path which checks that the
  mprotect()ed range on a hugepage VMA is hugepage aligned (yes, really,
  the sense of is_aligned_hugepage_range() is the opposite of what you'd
  guess :-/).

  In fact, we don't need this test. If the given addresses match the
  beginning/end of a hugepage VMA they must already be suitably aligned.
  If they don't, then mprotect_fixup() will attempt to split the VMA.
  The very first test in split_vma() will check for a badly aligned
  address on a hugepage VMA and return -EINVAL if necessary.

  From: "Chen, Kenneth W" <kenneth.w.chen@intel.com>

  On i386 and x86-64, the pte flag _PAGE_PSE collides with
  _PAGE_PROTNONE. The identity of a hugetlb pte is therefore lost when
  its page protection is changed via mprotect, and a page fault that
  occurs later will trigger a bug check in huge_pte_alloc().

  The fix is to always make a new pte a hugetlb pte, and also to clean
  up legacy code where _PAGE_PRESENT was forced on in the days before
  demand faulting.

  Signed-off-by: Zhang Yanmin <yanmin.zhang@intel.com>
  Cc: David Gibson <david@gibson.dropbear.id.au>
  Cc: "David S. Miller" <davem@davemloft.net>
  Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Cc: Paul Mackerras <paulus@samba.org>
  Cc: William Lee Irwin III <wli@holomorphy.com>
  Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
  Signed-off-by: Nishanth Aravamudan <nacc@us.ibm.com>
  Cc: Andi Kleen <ak@muc.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
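  The i386 side of the fix can be sketched as follows (illustrative
  only; the real patch touches several arch headers):

      /* _PAGE_PSE and _PAGE_PROTNONE are the same pte bit, so a
       * hugepage pte that goes through mprotect() must be explicitly
       * re-marked as huge rather than trusting the old flag bits: */
      static inline pte_t pte_mkhuge(pte_t pte)
      {
              (pte).pte_low |= _PAGE_PRESENT | _PAGE_PSE;
              return pte;
      }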
* [PATCH] mm: optimise page_count (Nick Piggin, 2006-03-22; 1 file, -1/+1)

  Optimise the page_count compound page test and make it consistent with
  similar functions.

  Signed-off-by: Nick Piggin <npiggin@suse.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
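  The optimised test reads roughly as below (a sketch): reroute the read
  through the head page only when PageCompound() fires, which is the
  unlikely case:

      static inline int page_count(struct page *page)
      {
              if (unlikely(PageCompound(page)))
                      page = (struct page *)page_private(page);
              return atomic_read(&page->_count);
      }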
* [PATCH] remove set_page_count() outside mm/ (Nick Piggin, 2006-03-22; 1 file, -2/+9)

  set_page_count() usage outside mm/ is limited to setting the refcount
  to 1. Remove set_page_count() from outside mm/, and replace those
  users with init_page_count() and set_page_refcounted(). This allows
  more debug checking, and tighter control on how code is allowed to
  play around with page->_count.

  Signed-off-by: Nick Piggin <npiggin@suse.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
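  Both replacements are thin, deliberately-named wrappers; a sketch
  consistent with the description above:

      /* For code outside mm/: set up the initial refcount of a fresh
       * page. Restricting callers to this helper (rather than a bare
       * set_page_count()) is what enables the extra debug checking. */
      static inline void init_page_count(struct page *page)
      {
              atomic_set(&page->_count, 1);
      }

  set_page_refcounted() plays the same role for core VM code and lives
  out of drivers' reach in mm/internal.h.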
* [PATCH] mm: nommu use compound pages (Nick Piggin, 2006-03-22; 1 file, -4/+0)

  Now that compound page handling is properly fixed in the VM, move
  nommu over to using compound pages rather than rolling their own
  refcounting.

  nommu vm page refcounting is broken anyway, but there is no need to
  have divergent code in the core VM now, nor when it gets fixed.

  Signed-off-by: Nick Piggin <npiggin@suse.de>
  Cc: David Howells <dhowells@redhat.com>
  (Needs testing, please.)
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] mm: make __put_page internal (Nick Piggin, 2006-03-22; 1 file, -1/+0)

  Remove __put_page() from outside the core mm/. It is dangerous because
  it does not handle compound pages nicely, and misses 1->0 transitions.
  If a user later appears that really needs the extra speed we can
  reevaluate.

  Signed-off-by: Nick Piggin <npiggin@suse.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] vmscan: use unsigned longs (Andrew Morton, 2006-03-22; 2 files, -5/+5)

  Turn basically everything in vmscan.c into `unsigned long'. This is to
  avoid the possibility that some piece of code in there might decide to
  operate upon more than 4G (or even 2G) of pages in one hit. This might
  be silly, but we'll need it one day.

  Cc: Christoph Lameter <clameter@sgi.com>
  Cc: Nick Piggin <nickpiggin@yahoo.com.au>
  Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] on_each_cpu(): disable local interrupts (Andrew Morton, 2006-03-22; 1 file, -14/+9)

  When on_each_cpu() runs the callback on other CPUs, it runs with local
  interrupts disabled. So we should run the function with local
  interrupts disabled on this CPU, too.

  And do the same for UP, so the callback is run in the same environment
  on both UP and SMP. (Strictly it should do preempt_disable() too, but
  I think local_irq_disable is sufficiently equivalent.)

  Also uninline on_each_cpu(). softirq.c was the most appropriate file I
  could find, but it doesn't seem to justify creating a new file.

  Oh, and fix up that comment over (under?) x86's smp_call_function().
  It drives me nuts.

  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
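  The uninlined helper then reads roughly like this (a sketch of the
  kernel/softirq.c definition described above):

      int on_each_cpu(void (*func)(void *), void *info, int retry, int wait)
      {
              int ret;

              preempt_disable();
              ret = smp_call_function(func, info, retry, wait);
              /* Run the local callback in the same environment the
               * IPI'd CPUs see: with local interrupts disabled. */
              local_irq_disable();
              func(info);
              local_irq_enable();
              preempt_enable();
              return ret;
      }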
* [PATCH] slab: Remove SLAB_NO_REAP option (Christoph Lameter, 2006-03-22; 1 file, -1/+0)

  SLAB_NO_REAP is documented as an option that will cause this slab not
  to be reaped under memory pressure. However, that is not what happens.
  The only thing that SLAB_NO_REAP controls at the moment is the reclaim
  of the unused slab elements that were allocated in batch in
  cache_reap(). cache_reap() is run every few seconds independently of
  memory pressure.

  Could we remove the whole thing? It's only used by three slabs anyway,
  and I cannot find a reason for having this option.

  There is an additional problem with SLAB_NO_REAP. If set, then the
  recovery of objects from alien caches is switched off. Objects not
  freed on the same node where they were initially allocated will only
  be reused if a certain amount of objects accumulates from one alien
  node (not very likely) or if the cache is explicitly shrunk.
  (Strangely, __cache_shrink() does not check for SLAB_NO_REAP.)

  Getting rid of SLAB_NO_REAP fixes the problems with alien cache
  freeing.

  Signed-off-by: Christoph Lameter <clameter@sgi.com>
  Cc: Pekka Enberg <penberg@cs.helsinki.fi>
  Cc: Manfred Spraul <manfred@colorfullife.com>
  Cc: Mark Fasheh <mark.fasheh@oracle.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] kcalloc(): INT_MAX -> ULONG_MAX (Adrian Bunk, 2006-03-22; 1 file, -1/+1)

  Since size_t has the same size as a long on all architectures, it's
  enough for overflow checks to check against ULONG_MAX.

  This change could allow a compiler better optimization (especially in
  the n=1 case). The practical effect seems to be positive, but quite
  small:

        text    data     bss      dec     hex filename
    21762380 5859870 1848928 29471178 1c1b1ca vmlinux-old
    21762211 5859870 1848928 29471009 1c1b121 vmlinux-patched

  Signed-off-by: Adrian Bunk <bunk@stusta.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
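  The changed guard, sketched:

      void *kcalloc(size_t n, size_t size, gfp_t flags)
      {
              if (n != 0 && size > ULONG_MAX / n)     /* was INT_MAX / n */
                      return NULL;
              return kzalloc(n * size, flags);
      }

  With size_t as wide as unsigned long, ULONG_MAX is the true overflow
  bound, and for the common n == 1 case the comparison becomes provably
  false, so the compiler can drop it entirely.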
* [PATCH] mm: page_state comment more (Nick Piggin, 2006-03-22; 1 file, -2/+3)

  Clarify that preemption needs to be guarded against with the
  __xxx_page_state functions.

  Signed-off-by: Nick Piggin <npiggin@suse.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] mm: split highorder pages (Nick Piggin, 2006-03-22; 1 file, -0/+6)

  Have an explicit mm call to split higher-order pages into individual
  pages. Should help to avoid bugs and be more explicit about the code's
  intention.

  Signed-off-by: Nick Piggin <npiggin@suse.de>
  Cc: Russell King <rmk@arm.linux.org.uk>
  Cc: David Howells <dhowells@redhat.com>
  Cc: Ralf Baechle <ralf@linux-mips.org>
  Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Cc: Paul Mundt <lethal@linux-sh.org>
  Cc: "David S. Miller" <davem@davemloft.net>
  Cc: Chris Zankel <chris@zankel.net>
  Signed-off-by: Yoichi Yuasa <yoichi_yuasa@tripeaks.co.jp>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
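  A sketch of the new interface:

      /* Split a non-compound higher-order page into order-0 pages,
       * giving each tail page its own reference count so the pieces
       * can be freed individually. */
      void split_page(struct page *page, unsigned int order)
      {
              int i;

              BUG_ON(PageCompound(page));
              BUG_ON(!page_count(page));
              for (i = 1; i < (1 << order); i++)
                      set_page_refcounted(page + i);
      }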
* [PATCH] mm: de-skew page refcounting (Nick Piggin, 2006-03-22; 1 file, -14/+5)

  atomic_add_unless (atomic_inc_not_zero) no longer requires an offset
  refcount to function correctly.

  Signed-off-by: Nick Piggin <npiggin@suse.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] mm: simplify vmscan vs release refcounting (Nick Piggin, 2006-03-22; 1 file, -8/+11)

  The VM has an interesting race where a page refcount can drop to zero,
  but it is still on the LRU lists for a short time. This was solved by
  testing a 0->1 refcount transition when picking up pages from the LRU,
  and dropping the refcount in that case.

  Instead, use atomic_add_unless to ensure we never pick up a 0-refcount
  page from the LRU; thus a 0-refcount page will never have its refcount
  elevated until it is allocated again.

  Signed-off-by: Nick Piggin <npiggin@suse.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
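  The pickup now hinges on a conditional increment; roughly:

      /* Never resurrect a page whose refcount has already reached
       * zero: the increment succeeds only if the count was non-zero
       * to begin with. */
      static inline int get_page_unless_zero(struct page *page)
      {
              return atomic_inc_not_zero(&page->_count);
      }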
* [PATCH] mm: slab less atomics (Nick Piggin, 2006-03-22; 1 file, -4/+2)

  Atomic operation removal from slab.

  Signed-off-by: Nick Piggin <npiggin@suse.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] mm: page_alloc less atomics (Nick Piggin, 2006-03-22; 1 file, -2/+2)

  More atomic operation removal from the page allocator.

  Signed-off-by: Nick Piggin <npiggin@suse.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] mm: less atomic ops (Nick Piggin, 2006-03-22; 2 files, -1/+3)

  In the page release paths, we can be sure that nobody will mess with
  our page->flags because the refcount has dropped to 0. So no need for
  atomic operations here.

  Signed-off-by: Nick Piggin <npiggin@suse.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] mm: PageActive no testset (Nick Piggin, 2006-03-22; 1 file, -2/+0)

  PG_active is protected by zone->lru_lock; it does not need
  TestSet/TestClear operations.

  Signed-off-by: Nick Piggin <npiggin@suse.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] mm: PageLRU no testset (Nick Piggin, 2006-03-22; 1 file, -3/+2)

  PG_lru is protected by zone->lru_lock; it does not need
  TestSet/TestClear operations.

  Signed-off-by: Nick Piggin <npiggin@suse.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
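  For both PG_active and PG_lru the conversion has the same shape
  (an illustrative before/after, not a literal hunk):

      /* Before: the atomic read-modify-write doubled as a sanity check. */
      if (TestSetPageLRU(page))
              BUG();

      /* After: check and set separately; safe because every writer of
       * PG_lru already holds zone->lru_lock. */
      BUG_ON(PageLRU(page));
      SetPageLRU(page);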
* [PATCH] mm: remove set_pgdir leftovers (Christoph Hellwig, 2006-03-22; 2 files, -23/+0)

  set_pgdir hasn't been needed for a very long time. Remove the leftover
  implementation on sh64 and the stub on s390.

  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
  Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
  Cc: Paul Mundt <lethal@linux-sh.org>
  Cc: Richard Curnow <rc@rc0.org.uk>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] rtc.h broke strace(1) builds (Joe Korty, 2006-03-22; 1 file, -2/+2)

  Git patch 52dfa9a64cfb3dd01fa1ee1150d589481e54e28e ("[PATCH] move
  rtc_interrupt() prototype to rtc.h") broke strace(1) builds. The below
  moves the kernel-only additions lower, under the already provided
  #ifdef __KERNEL__ statement.

  Cc: <stable@kernel.org>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
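  Schematically, the header ends up ordered like this (abridged sketch;
  only the placement is the point, and the exact prototype may differ):

      /* include/linux/rtc.h */
      struct rtc_time {
              int tm_sec;
              int tm_min;
              /* ... shared with user space ... */
      };

      #ifdef __KERNEL__
      /* Kernel-only additions, e.g. the rtc_interrupt() prototype, now
       * sit below the guard so user-space builds never see them. */
      extern irqreturn_t rtc_interrupt(int irq, void *dev_id,
                                       struct pt_regs *regs);
      #endif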
* [PATCH] don't call check_acpi_pci() on x86 with ACPI disabled (Herbert Poetzl, 2006-03-22; 1 file, -4/+6)

  check_acpi_pci() is called from arch/i386/kernel/setup.c even if
  CONFIG_ACPI is not defined, but the code in include/asm/acpi.h doesn't
  provide it in this case.

  Signed-off-by: Herbert Pötzl <herbert@13thfloor.at>
  Cc: "Brown, Len" <len.brown@intel.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
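  A sketch of the stub arrangement that fixes this (in
  include/asm-i386/acpi.h; the CONFIG_ACPI branch is abridged):

      #ifdef CONFIG_ACPI
      static inline void check_acpi_pci(void)
      {
              /* ... real probing, only meaningful with ACPI built in ... */
      }
      #else
      /* Empty stub so setup.c can keep calling it unconditionally. */
      static inline void check_acpi_pci(void) { }
      #endif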
* Merge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6 (Linus Torvalds, 2006-03-21; 13 files, -211/+181)

  * 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux-2.6:
    [IA64-SGI] SN2-XP reduce kmalloc wrapper inlining
    [IA64] MCA: remove obsolete ifdef
    [IA64] MCA: update MCA comm field for user space tasks
    [IA64] MCA: print messages in MCA handler
    [IA64-SGI] - Eliminate SN pio_phys_xxx macros. Move to assembly
    [IA64] use icc defined constant
    [IA64] add __builtin_trap definition for icc build
    [IA64] clean up asm/intel_intrin.h
    [IA64] map ia64_hint definition to intel compiler intrinsic
    [IA64] hooks to wait for mmio writes to drain when migrating processes
    [IA64-SGI] driver bugfixes and hardware workarounds for CE1.0 asic
    [IA64-SGI] Handle SC env. powerdown events
    [IA64] Delete MCA/INIT sigdelayed code
    [IA64-SGI] sem2mutex ioc4.c
    [IA64] implement ia64 specific mutex primitives
    [IA64] Fix UP build with BSP removal support.
    [IA64] support for cpu0 removal
  * Pull sn2-reduce-kmalloc-wrap into release branch (Tony Luck, 2006-03-21; 1 file, -22/+0)
    * [IA64-SGI] SN2-XP reduce kmalloc wrapper inlining (Jes Sorensen, 2006-02-27; 1 file, -22/+0)

      Take advantage of kzalloc() as well as reduce the size of code
      generated for the error returns in xpc_setup_infrastructure().

      Signed-off-by: Jes Sorensen <jes@sgi.com>
      Acked-by: Dean Nelson <dcn@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
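      kzalloc() folds the usual allocate-then-clear pair into a single
      call; a typical conversion (variable name illustrative):

          /* Before: */
          buf = kmalloc(sizeof(*buf), GFP_KERNEL);
          if (buf)
                  memset(buf, 0, sizeof(*buf));

          /* After: */
          buf = kzalloc(sizeof(*buf), GFP_KERNEL);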
  * Pull icc-cleanup into release branch (Tony Luck, 2006-03-21; 2 files, -168/+22)
    * [IA64-SGI] - Eliminate SN pio_phys_xxx macros. Move to assembly (Jack Steiner, 2006-02-07; 1 file, -51/+5)

      Rewrite the SN pio_phys_xxx macros in assembly language. This
      avoids issues with the Intel icc compiler. Function call overhead
      is not an issue - the functions reference PIOs and take hundreds
      of nanoseconds to complete.

      In addition, the functions should likely be in assembly language
      anyway - they reference memory using physical addressing mode.
      One function executes with psr.ic disabled.

      Signed-off-by: Jack Steiner <steiner@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
    * [IA64] use icc defined constant (Chen, Kenneth W, 2006-02-07; 1 file, -10/+10)

      Use an icc-defined constant instead of a magic number.

      Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
    * [IA64] add __builtin_trap definition for icc build (Chen, Kenneth W, 2006-02-07; 1 file, -0/+2)

      Map the __builtin_trap function to a break 0 instruction.

      Signed-off-by: HJ Lu <hongjiu.lu@intel.com>
      Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
    * [IA64] clean up asm/intel_intrin.h (Chen, Kenneth W, 2006-02-07; 1 file, -106/+3)

      Include the intrinsic header file from the icc compiler. Remove
      the duplicate definitions from the kernel source.

      Signed-off-by: HJ Lu <hongjiu.lu@intel.com>
      Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
    * [IA64] map ia64_hint definition to intel compiler intrinsic (Chen, Kenneth W, 2006-02-07; 1 file, -1/+2)

      Map ia64_hint() to the internal Intel compiler intrinsic.

      Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
  * Pull sn2-mmio-writes into release branch (Tony Luck, 2006-03-21; 5 files, -2/+26)

    Hand-fixed conflicts: include/asm-ia64/machvec_sn2.h

    Signed-off-by: Tony Luck <tony.luck@intel.com>
    * [IA64] hooks to wait for mmio writes to drain when migrating processes (Brent Casavant, 2006-01-26; 5 files, -2/+26)

      On SN2, MMIO writes which are issued from separate processors are
      not guaranteed to arrive in any particular order at the IO
      hardware. When performing such writes from the kernel this is not
      a problem, as a kernel thread will not migrate to another CPU
      during execution, and mmiowb() calls can guarantee write ordering
      when control of the IO resource is allowed to move between
      threads.

      However, when MMIO writes can be performed from user space (e.g.
      DRM) there are no such guarantees and mechanisms, as the process
      may context-switch at any time, and may migrate to a different CPU
      as part of the switch. For such programs/hardware to operate
      correctly, it is required that the MMIO writes from the old CPU be
      accepted by the IO hardware before subsequent writes from the new
      CPU can be issued.

      The following patch implements this behavior on SN2 by waiting for
      a Shub register to indicate that these writes have been accepted.
      This is placed in the context switch-in path, and only performs
      the wait when the newly scheduled task changes CPUs.

      Signed-off-by: Prarit Bhargava <prarit@sgi.com>
      Signed-off-by: Brent Casavant <bcasavan@sgi.com>
  * Pull altix-ce1.0-asic into release branch (Tony Luck, 2006-03-21; 2 files, -2/+42)
    * [IA64-SGI] driver bugfixes and hardware workarounds for CE1.0 asic (Mark Maule, 2006-01-26; 2 files, -2/+42)

      Various bugfixes and hardware bug workarounds necessary for the
      rev 1.0 version of the altix TIO CE asic.

      Signed-off-by: Mark Maule <maule@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
  * Pull delete-sigdelayed into release branch (Tony Luck, 2006-03-21; 2 files, -12/+1)
    * [IA64] Delete MCA/INIT sigdelayed code (Keith Owens, 2006-01-26; 2 files, -12/+1)

      The only user of the MCA/INIT sigdelayed code (SGI's I/O probing)
      has moved from the kernel into SAL. Delete the MCA/INIT sigdelayed
      code.

      Signed-off-by: Keith Owens <kaos@sgi.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
  * Pull ia64-mutex-primitives into release branch (Tony Luck, 2006-03-21; 1 file, -5/+88)
    * [IA64] implement ia64 specific mutex primitives (Chen, Kenneth W, 2006-01-26; 1 file, -5/+88)

      Implement ia64-optimized mutex primitives. They properly use
      acquire/release memory ordering semantics in the lock/unlock
      paths. 2nd version, making them all static inline functions.

      Signed-off-by: Ken Chen <kenneth.w.chen@intel.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
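      A hedged sketch of the fastpath shape this describes (in
      include/asm-ia64/mutex.h; the exact intrinsics and casts may
      differ from the merged code):

          static inline void
          __mutex_fastpath_lock(atomic_t *count, void (*fail_fn)(atomic_t *))
          {
                  /* Decrement with acquire semantics; anything other
                   * than a 1->0 transition means contention. */
                  if (unlikely(ia64_fetchadd4_acq((unsigned int *)count, -1) != 1))
                          fail_fn(count);
          }

          static inline void
          __mutex_fastpath_unlock(atomic_t *count, void (*fail_fn)(atomic_t *))
          {
                  /* Increment with release semantics; a non-positive
                   * old value means there are waiters to wake. */
                  if (unlikely(ia64_fetchadd4_rel((unsigned int *)count, 1) <= 0))
                          fail_fn(count);
          }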
  * Pull bsp-removal into release branch (Tony Luck, 2006-03-21; 1 file, -0/+2)
    * [IA64] support for cpu0 removal (Ashok Raj, 2006-01-05; 1 file, -0/+2)

      Here is the BSP removal support for IA64. It's pretty much the
      same thing that was released a while back, but has your feedback
      incorporated.

      - Removed CONFIG_BSP_REMOVE_WORKAROUND and associated cmdline param
      - Fixed compile issue with sn2/zx1 due to an undefined fix_b0_for_bsp
      - Some formatting nits (whitespace etc.)

      This has been tested on tiger, and a while back by Alex on HP
      systems as well.

      Signed-off-by: Ashok Raj <ashok.raj@intel.com>
      Signed-off-by: Tony Luck <tony.luck@intel.com>
* Merge master.kernel.org:/pub/scm/linux/kernel/git/herbert/crypto-2.6 (Linus Torvalds, 2006-03-21; 1 file, -1/+9)

  * master.kernel.org:/pub/scm/linux/kernel/git/herbert/crypto-2.6:
    [CRYPTO] aes: Fixed array boundary violation
    [CRYPTO] tcrypt: Fix key alignment
    [CRYPTO] all: Add missing cra_alignmask
    [CRYPTO] all: Use kzalloc where possible
    [CRYPTO] api: Align tfm context as wide as possible
    [CRYPTO] twofish: Use rol32/ror32 where appropriate
  * [CRYPTO] api: Align tfm context as wide as possible (Herbert Xu, 2006-03-21; 1 file, -1/+9)

    Since tfm contexts can contain arbitrary types we should provide at
    least natural alignment (__attribute__ ((__aligned__))) for them. In
    particular, this is needed on the Xscale which is a 32-bit
    architecture with a u64 type that requires 64-bit alignment. This
    problem was reported by Ronen Shitrit.

    The crypto_tfm structure's size was 44 bytes on 32-bit architectures
    and 80 bytes on 64-bit architectures. So adding this requirement
    only means that we have to add an extra 4 bytes on 32-bit
    architectures.

    On i386 the natural alignment is 16 bytes which also benefits the
    VIA Padlock as it no longer has to manually align its context
    structure to 128 bits.

    Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
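    The mechanism is an alignment attribute on the context member of
    struct crypto_tfm; schematically (surrounding fields elided):

        struct crypto_tfm {
                u32 crt_flags;
                /* ... per-type operation unions ... */

                /* No argument to the attribute requests the largest
                 * useful ("natural") alignment for the target. */
                void *__crt_ctx[] __attribute__ ((__aligned__));
        };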
* Merge master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6 (Linus Torvalds, 2006-03-21; 56 files, -464/+890)

  * master.kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6: (235 commits)
    [NETFILTER]: Add H.323 conntrack/NAT helper
    [TG3]: Don't mark tg3_test_registers() as returning const.
    [IPV6]: Cleanups for net/ipv6/addrconf.c (kzalloc, early exit) v2
    [IPV6]: Nearly complete kzalloc cleanup for net/ipv6
    [IPV6]: Cleanup of net/ipv6/reassambly.c
    [BRIDGE]: Remove duplicate const from is_link_local() argument type.
    [DECNET]: net/decnet/dn_route.c: fix inconsequent NULL checking
    [TG3]: make drivers/net/tg3.c:tg3_request_irq() static
    [BRIDGE]: use LLC to send STP
    [LLC]: llc_mac_hdr_init const arguments
    [BRIDGE]: allow show/store of group multicast address
    [BRIDGE]: use llc for receiving STP packets
    [BRIDGE]: stp timer to jiffies cleanup
    [BRIDGE]: forwarding remove unneeded preempt and bh diasables
    [BRIDGE]: netfilter inline cleanup
    [BRIDGE]: netfilter VLAN macro cleanup
    [BRIDGE]: netfilter dont use __constant_htons
    [BRIDGE]: netfilter whitespace
    [BRIDGE]: optimize frame pass up
    [BRIDGE]: use kzalloc
    ...
  * [NETFILTER]: Add H.323 conntrack/NAT helper (Jing Min Zhao, 2006-03-20; 2 files, -0/+32)

    Signed-off-by: Jing Min Zhao <zhaojignmin@hotmail.com>
    Signed-off-by: Patrick McHardy <kaber@trash.net>
    Signed-off-by: David S. Miller <davem@davemloft.net>