path: root/include/asm-x86_64
Commit log (each entry: commit message, author, date, files changed, lines -/+)
* [PATCH] Kill L1_CACHE_SHIFT_MAX (Ravikiran G Thirumalai, 2006-01-08; 1 file, -1/+0)
  Since L1_CACHE_SHIFT_MAX is not used anymore with the introduction of
  INTERNODE_CACHE, kill L1_CACHE_SHIFT_MAX from all arches.
  Signed-off-by: Ravikiran Thirumalai <kiran@scalex86.org>
  Signed-off-by: Shai Fultheim <shai@scalex86.org>
  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] Swap Migration V5: sys_migrate_pages interface (Christoph Lameter, 2006-01-08; 2 files, -2/+5)
  sys_migrate_pages implementation using swap-based page migration.

  This is the original API proposed by Ray Bryant in his posts during the
  first half of 2005 on linux-mm@kvack.org and linux-kernel@vger.kernel.org.

  The intent of sys_migrate_pages is to migrate the memory of a process. A
  process may have migrated to another node, and its memory was allocated
  optimally for the prior context; sys_migrate_pages allows the memory to be
  shifted to the new node.

  sys_migrate_pages is also useful if a process's available memory nodes have
  changed through cpuset operations and its memory needs to be moved manually.
  Paul Jackson is working on an automated mechanism that will migrate memory
  automatically when the cpuset of a process is changed; however, a user may
  still decide to control the migration manually.

  This implementation is put into the policy layer since it uses concepts and
  functions that are also needed for mbind and friends. The patch also
  provides a do_migrate_pages function that may be useful for cpusets to
  automatically move memory. sys_migrate_pages does not modify policies, in
  contrast to Ray's implementation.

  The current code is based on the swap-based page migration capability and
  thus is not able to preserve the physical layout relative to the containing
  nodeset (which may be a cpuset). When direct page migration becomes
  available, the implementation needs to be changed to do an isomorphic move
  of pages between different nodesets. The current implementation simply
  evicts all pages in the source nodeset that are not in the target nodeset.

  The patch supports ia64, i386 and x86_64.
  Signed-off-by: Christoph Lameter <clameter@sgi.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
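In rough userspace terms, driving this interface looks like the sketch below. The syscall number has to come from the installed kernel headers, and the node numbers, maxnode value and error handling are purely illustrative.

    /* Sketch: ask the kernel to move a process's pages from node 0 to
     * node 1 via sys_migrate_pages.  Node numbers are illustrative. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(int argc, char **argv)
    {
            pid_t pid = (argc > 1) ? (pid_t)atoi(argv[1]) : getpid();
            unsigned long old_nodes = 1UL << 0;   /* source nodeset: node 0 */
            unsigned long new_nodes = 1UL << 1;   /* target nodeset: node 1 */

            /* maxnode: how many node bits the kernel should examine */
            long ret = syscall(__NR_migrate_pages, pid,
                               8 * sizeof(unsigned long),
                               &old_nodes, &new_nodes);
            if (ret < 0)
                    perror("migrate_pages");
            else
                    printf("%ld pages could not be moved\n", ret);
            return 0;
    }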
* [PATCH] cpu hotplug/x86_64: disable interrupt in play_dead (Shaohua Li, 2006-01-06; 1 file, -0/+2)
  With physical CPU hotplug, the CPU is hot-removed and should not receive
  any interrupts. Disabling interrupts is much safer. This is basically what
  we do in ia64 and x86.
  Signed-off-by: Shaohua Li <shaohua.li@intel.com>
  Cc: Andi Kleen <ak@muc.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] mpspec: remove unneeded packed attribute (Brian Gerst, 2006-01-06; 1 file, -1/+1)
  GCC 4.1 gives the following warning:
    include/asm/mpspec.h:79: warning: `packed' attribute ignored for field of type `unsigned char'
  The packed attribute isn't really necessary anyway, so just remove it.
  Signed-off-by: Brian Gerst <bgerst@didntduck.org>
  Acked-by: Dave Jones <davej@codemonkey.org.uk>
  Cc: Andi Kleen <ak@muc.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] x86/x86_64: mark rodata section read-only: x86-64 support (Arjan van de Ven, 2006-01-06; 1 file, -0/+4)
  x86-64 specific parts to make the .rodata section read only.
  Signed-off-by: Arjan van de Ven <arjan@infradead.org>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Cc: Andi Kleen <ak@muc.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] x86/x86_64: mark rodata section read only: generic x86-64 bugfix (Arjan van de Ven, 2006-01-06; 1 file, -0/+2)
  Bug fix required for the .rodata work on x86-64: when change_page_attr()
  and friends need to break up a 2Mb page into 4Kb pages, they always set
  the NX bit on the PMD, which causes the CPU to consider the entire 2Mb
  region to be NX regardless of the actual PTE permissions. This is fine in
  general, with one big exception: the 2Mb page that covers the last part of
  the kernel .text! The fix is to not invent a new permission for the new
  PMD entry, but to just inherit the existing one minus the PSE bit.
  Signed-off-by: Arjan van de Ven <arjan@infradead.org>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
  Cc: Andi Kleen <ak@muc.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] atomic_long_t & include/asm-generic/atomic.h V2 (Christoph Lameter, 2006-01-06; 1 file, -0/+1)
  Several counters already need 64-bit atomic variables on 64-bit platforms
  (see mm_counter_t in sched.h), and we have to do ugly ifdefs to fall back
  to 32-bit atomics on 32-bit platforms. The VM statistics patch that I am
  working on will also make more extensive use of atomic64.

  This patch introduces a new type, atomic_long_t, by providing definitions
  in asm-generic/atomic.h that work like the C "long" type: it is 32 bits
  on 32-bit platforms and 64 bits on 64-bit platforms. It also cleans up the
  determination of mm_counter_t in sched.h.
  Signed-off-by: Christoph Lameter <clameter@sgi.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
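The idea behind the header amounts to something like the following sketch, which maps atomic_long_t onto the native atomic type by word size; the in-tree header is spelled differently (it uses inline wrappers rather than bare macros).

    /* Sketch of the asm-generic/atomic.h approach described above. */
    #if BITS_PER_LONG == 64

    typedef atomic64_t atomic_long_t;
    #define ATOMIC_LONG_INIT(i)     ATOMIC64_INIT(i)
    #define atomic_long_read(v)     atomic64_read(v)
    #define atomic_long_set(v, i)   atomic64_set((v), (i))
    #define atomic_long_inc(v)      atomic64_inc(v)
    #define atomic_long_dec(v)      atomic64_dec(v)

    #else  /* BITS_PER_LONG == 32 */

    typedef atomic_t atomic_long_t;
    #define ATOMIC_LONG_INIT(i)     ATOMIC_INIT(i)
    #define atomic_long_read(v)     atomic_read(v)
    #define atomic_long_set(v, i)   atomic_set((v), (i))
    #define atomic_long_inc(v)      atomic_inc(v)
    #define atomic_long_dec(v)      atomic_dec(v)

    #endif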
* [PATCH] madvise(MADV_REMOVE): remove pages from tmpfs shm backing store (Badari Pulavarty, 2006-01-06; 1 file, -0/+1)
  Here is the patch to implement madvise(MADV_REMOVE), which frees up a given
  range of pages and its associated backing store. The current implementation
  supports only shmfs/tmpfs; other filesystems return -ENOSYS.

  "Some app allocates large tmpfs files, then when some task quits and some
  client disconnects, some memory can be released. However the only way to
  release tmpfs-swap is to MADV_REMOVE." - Andrea Arcangeli

  Databases want to use this feature to drop a section of their bufferpool
  (shared memory segments) without writing it back to disk/swap space. This
  feature is also useful for supporting hot-plug memory on UML.

  Concerns raised by Andrew Morton:
  - "We have no plan for holepunching! If we _do_ have such a plan (or might
    in the future) then what would the API look like? I think
    sys_holepunch(fd, start, len), so we should start out with that."
  - Using madvise is very weird, because people will ask "why do I need to
    mmap my file before I can stick a hole in it?"
  - None of the other madvise operations call into the filesystem in this
    manner. A broad question is: is this capability an MM operation or a
    filesystem operation? truncate, for example, is a filesystem operation
    which sometimes has MM side-effects. madvise is an MM operation, and with
    this patch it gains FS side-effects, only they're really, really
    significant ones.

  Comments:
  - Andrea suggested the fs operation too, but it is more efficient to have
    it as an mm operation with fs side effects, because userspace does not
    immediately know the fd and physical offset of the range. It is possible
    to fix that up in userland and use the fs operation, but it is more
    expensive; the vmas are already in the kernel and we can use them.

  Short term plan & future direction:
  - We seem to need this interface only for shmfs/tmpfs files in the short
    term. We have to add hooks into the filesystem for correctness and
    completeness; this is what this patch does.
  - In the future the plan is to support both fs and mmap APIs as well. This
    also involves (other) filesystem-specific functions to be implemented.
  - The current patch doesn't support VM_NONLINEAR, which can be addressed
    in the future.
  Signed-off-by: Badari Pulavarty <pbadari@us.ibm.com>
  Cc: Hugh Dickins <hugh@veritas.com>
  Cc: Andrea Arcangeli <andrea@suse.de>
  Cc: Michael Kerrisk <mtk-manpages@gmx.net>
  Cc: Ulrich Drepper <drepper@redhat.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
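From userspace, punching a hole in a mapped tmpfs file would look roughly like the sketch below; the path and sizes are illustrative, MADV_REMOVE has to be exposed by the installed headers, and the call fails with ENOSYS on filesystems without the hook.

    /* Sketch: free the backing store for the middle pages of a tmpfs
     * file via madvise(MADV_REMOVE).  Path and sizes are illustrative. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
            size_t len = 16 * 4096;
            int fd = open("/dev/shm/scratch", O_RDWR | O_CREAT, 0600);

            if (fd < 0 || ftruncate(fd, len) < 0)
                    return 1;
            char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
            if (p == MAP_FAILED)
                    return 1;
            /* drop the backing pages for the middle 8 pages only */
            if (madvise(p + 4 * 4096, 8 * 4096, MADV_REMOVE) != 0)
                    perror("madvise(MADV_REMOVE)");  /* ENOSYS on non-tmpfs */
            munmap(p, len);
            close(fd);
            return 0;
    }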
* [FLS64]: x86_64 version (Stephen Hemminger, 2006-01-03; 1 file, -1/+27)
  Signed-off-by: Stephen Hemminger <shemminger@osdl.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* [FLS64]: generic version (Stephen Hemminger, 2006-01-03; 1 file, -0/+1)
  Signed-off-by: Stephen Hemminger <shemminger@osdl.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
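The generic fls64 can be built on top of the existing 32-bit fls(); a sketch along those lines, not necessarily identical to the in-tree header:

    /* Sketch: generic fls64() on top of fls().  Returns 0 for x == 0,
     * otherwise the 1-based position of the most significant set bit. */
    static inline int fls64(unsigned long long x)
    {
            unsigned int high = x >> 32;

            if (high)
                    return fls(high) + 32;
            return fls((unsigned int)x);
    }

The x86_64 version can instead use the hardware bit-scan instruction directly, which is what makes an arch-specific variant worthwhile.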
* [PATCH] Avoid namespace pollution in <asm/param.h> (Dag-Erling Smørgrav, 2006-01-02; 1 file, -2/+1)
  In commit 59121003721a8fad11ee72e646fd9d3076b5679c, the x86 and x86-64
  <asm/param.h> were changed to include <linux/config.h> for the configurable
  timer frequency. However, asm/param.h is sometimes used in userland (it is
  included indirectly from <sys/param.h>), so that commit pollutes the
  userland namespace with tons of CONFIG_FOO macros. This greatly confuses
  software packages (such as BusyBox) which use CONFIG_FOO macros themselves
  to control the inclusion of optional features.

  After a short exchange, Christoph approved this patch.
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] Fix typo in x86_64 __build_write_lock_const assembly (Ben Collins, 2005-12-24; 1 file, -1/+1)
  Based on __build_read_lock_const, this looked like a bug.

  [ Indeed. Maybe nobody uses this version? Worth fixing up anyway ]
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] x86_64/ia64: Fix compilation error for node_to_first_cpu (Ravikiran G Thirumalai, 2005-12-24; 1 file, -1/+1)
  Fixes a compiler error in node_to_first_cpu: __ffs expects an unsigned long
  as a parameter, but a cpumask_t was being passed. The node_to_first_cpu
  macro was not yet used in the x86_64 and ia64 arches, so we never hit this.
  This patch replaces __ffs with the first_cpu macro, similar to other arches.
  Signed-off-by: Alok N Kataria <alokk@calsoftinc.com>
  Signed-off-by: Ravikiran G Thirumalai <kiran@scalex86.org>
  Signed-off-by: Shai Fultheim <shai@scalex86.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* [PATCH] mm: fill arch atomic64 gaps (Hugh Dickins, 2005-11-23; 1 file, -13/+38)
  alpha, sparc64 and x86_64 are each missing some primitives from their
  atomic64 support: fill in the gaps I've noticed by extrapolating asm,
  following the groupings in each file. But powerpc and parisc still lack
  atomic64 entirely.
  Signed-off-by: Hugh Dickins <hugh@veritas.com>
  Cc: Richard Henderson <rth@twiddle.net>
  Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
  Cc: "David S. Miller" <davem@davemloft.net>
  Cc: Andi Kleen <ak@muc.de>
  Cc: Nick Piggin <nickpiggin@yahoo.com.au>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
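As an illustration of the family of primitives being filled in, an x86-64 atomic64_add_return can be written around a lock-prefixed XADD; this is a sketch, and the in-tree version is spelled slightly differently.

    /* Sketch only: the typedef mirrors the kernel's atomic64_t on x86-64. */
    typedef struct { volatile long counter; } atomic64_t;

    static inline long atomic64_add_return(long i, atomic64_t *v)
    {
            long inc = i;

            /* XADD leaves the counter's previous value in "i" */
            asm volatile("lock; xaddq %0, %1"
                         : "+r" (i), "+m" (v->counter)
                         : : "memory");
            return i + inc;   /* previous value + increment = new value */
    }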
* [PATCH] Fix x86_64/msr.h interface to agree with i386/msr.h (Jacob.Shin@amd.com, 2005-11-20; 1 file, -1/+1)
  Ever since we removed msr.c from the x86_64 branch and started grabbing it
  from i386, the msr device (read functionality) has been broken for us, due
  to the differences between the asm-i386/msr.h and asm-x86_64/msr.h
  interfaces. Here is a patch to our side to fix this. Thankfully, as of the
  current (2.6.15-rc1-git6) tree, arch/i386/kernel/msr.c is the only file
  that uses the rdmsr_safe macro.
  Signed-off-by: Jacob Shin <jacob.shin@amd.com>
  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* Merge x86-64 update from Andi (Linus Torvalds, 2005-11-14; 22 files, -323/+86)
|\
| * [PATCH] x86_64: Increase the maximum number of local APICs to the maximum (Andi Kleen, 2005-11-14; 1 file, -2/+2)
  This is needed for large multinode IBM systems which have a sparse APIC
  space in clustered mode, fully covering the available 8 bits. The previous
  kernels would limit the local APIC number to 127, which caused them to
  reject some of the CPUs at boot. I increased the maximum and shrunk the
  apic_version array a bit to make up for that (the version is only 8 bits,
  so we don't need a full int to store it).
  Cc: Chris McDermott <lcm@us.ibm.com>
  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
| * [PATCH] x86_64: Use common sys_time64 (Paolo 'Blaisorblade' Giarrusso, 2005-11-14; 1 file, -1/+2)
  Keeping this function does not make sense: it is a (buggy) copy of
  sys_time. The only difference is that now.tv_sec (which is a time_t, i.e.
  a 64-bit long) is copied (and truncated) into an int (32-bit). The
  prototype is the same (they both take a long __user *), so let's drop this
  and redirect it to sys_time (and make sure it exists by defining
  __ARCH_WANT_SYS_TIME).

  The only disadvantage is that the sys_stime definition is also compiled.
  This may be fixed, if needed, by adding a separate __ARCH_WANT_SYS_STIME
  macro and defining it for all arches defining __ARCH_WANT_SYS_TIME except
  x86_64.
  Acked-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
| * [PATCH] x86_64: Set ____cacheline_maxaligned_in_smp alignment to 128 bytes (Paolo 'Blaisorblade' Giarrusso, 2005-11-14; 1 file, -1/+1)
  The current value was correct before the introduction of Intel EM64T
  support, but now L1_CACHE_SHIFT_MAX can be less than L1_CACHE_SHIFT, which
  _is_ funny! Among the few users of ____cacheline_maxaligned_in_smp we also
  have (for example) rcu_ctrlblk and struct zone, with zone->{lru_,}lock,
  i.e. we get a lot of excess cacheline bouncing on them. No correctness
  issues, obviously. So this could even be merged for 2.6.14 (I'm not a fan
  of this idea, though).
  CC: Andi Kleen <ak@suse.de>
  Signed-off-by: Paolo 'Blaisorblade' Giarrusso <blaisorblade@yahoo.it>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
| * [PATCH] x86_64: Remove asm-x86_64/rwsem.h (Andi Kleen, 2005-11-14; 1 file, -283/+0)
  Not needed since x86-64 always uses the spinlock-based rwsems.
  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
| * [PATCH] x86-64/i386: Intel HT, Multi core detection fixes (Siddha, Suresh B, 2005-11-14; 1 file, -1/+3)
  Fields obtained through cpuid vector 0x1 (ebx[16:23]) and vector 0x4
  (eax[14:25], eax[26:31]) indicate the maximum values and might not always
  be the same as what is available and what the OS sees. So make sure the
  "siblings" and "cpu cores" values in /proc/cpuinfo reflect the values as
  seen by the OS instead of what the cpuid instruction says. This will also
  fix the buggy BIOS cases (for example where cpuid on a single-core CPU says
  there are "2" siblings even when HT is disabled in the BIOS:
  http://bugzilla.kernel.org/show_bug.cgi?id=4359).
  Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
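The raw values being second-guessed here come straight from cpuid. A userspace sketch of reading the same field, using GCC's <cpuid.h> helper (the field meanings are as described in the entry above):

    /* Sketch: read cpuid leaf 1, EBX bits 23:16 -- the "maximum logical
     * processors" field that the patch above stops trusting blindly. */
    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
            unsigned int eax, ebx, ecx, edx;

            if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
                    return 1;
            printf("cpuid.1:EBX[23:16] max logical CPUs per package: %u\n",
                   (ebx >> 16) & 0xff);
            printf("HT feature flag (EDX bit 28): %s\n",
                   (edx & (1u << 28)) ? "set" : "clear");
            return 0;
    }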
| * [PATCH] x86_64: Fix NUMA node lookup debug code which had bitrotted (Andi Kleen, 2005-11-14; 1 file, -3/+2)
  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
| * [PATCH] x86_64: Formatting fixes for arch/x86_64/kernel/process.c (Andi Kleen, 2005-11-14; 1 file, -1/+1)
  No functional changes.
  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
| * [PATCH] x86_64: Allow modular build of ia32 aout loader (Andi Kleen, 2005-11-14; 1 file, -0/+5)
  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
| * [PATCH] x86_64: New heuristics to find out hotpluggable CPUs (Andi Kleen, 2005-11-14; 1 file, -0/+2)
  With a NR_CPUS==128 kernel with CPU hotplug enabled we would waste 4MB on
  per-CPU data of all possible CPUs. The reason was that with HOTPLUG the
  possible map was always set up for NR_CPUS cpus, and we then need to
  allocate that much (each per-CPU data area is roughly ~32k now).

  The underlying problem is that ACPI didn't tell us how many hotplug CPUs
  the platform supports, so the old code just assumed all of them, which led
  to this memory wastage.

  This implements some new heuristics (see the sketch after this entry):
  - If the BIOS specified disabled CPUs in the ACPI/mptables, assume they can
    be enabled later (this is bending the ACPI specification a bit, but seems
    like an obvious extension).
  - The user can override it with a new additional_cpus=NUM option.
  - Otherwise use half of the available CPUs, or 2, whichever is more.
  Cc: ashok.raj@intel.com
  Cc: len.brown@intel.com
  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
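In plain C the heuristic amounts to something like the sketch below; the function name and parameters are hypothetical, not the in-tree code.

    /* Sketch of the possible-CPU sizing heuristic described above. */
    static unsigned int sketch_possible_cpus(unsigned int present,
                                             unsigned int bios_disabled,
                                             int additional_cpus_param)
    {
            unsigned int extra;

            if (additional_cpus_param >= 0)        /* explicit override */
                    extra = additional_cpus_param;
            else if (bios_disabled)                /* BIOS-disabled CPUs */
                    extra = bios_disabled;
            else                                   /* half of present, min 2 */
                    extra = present / 2 > 2 ? present / 2 : 2;

            return present + extra;
    }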
| * [PATCH] x86_64: Use int operations in spinlocks to support more than 128 CPUs spinning (Andi Kleen, 2005-11-14; 1 file, -6/+6)
  Pointed out by Eric Dumazet.
  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
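The limit comes from the width of the lock word: with a byte-wide counter, enough simultaneous decrements wrap a signed char back to a positive value and a waiter can mistake the lock for free. The program below is only an illustration of that wrap-around, not the spinlock code itself.

    /* Illustration: a signed-char lock counter wraps once roughly 128
     * waiters have each decremented it, so it can look unlocked again. */
    #include <stdio.h>

    int main(void)
    {
            signed char lock = 1;            /* 1 == unlocked */

            lock--;                          /* owner takes the lock: 0 */
            for (int waiters = 1; waiters <= 130; waiters++) {
                    lock--;                  /* each spinning CPU decrements */
                    if (lock > 0)
                            printf("after %d waiters the byte wrapped to %d "
                                   "and looks unlocked\n", waiters, lock);
            }
            return 0;
    }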
| * [PATCH] x86_64: Don't apply __PHYSICAL_MASK to page frame numbers (Andi Kleen, 2005-11-14; 2 files, -3/+3)
  It is for physical addresses, not for PFNs. Pointed out by Tejun Heo.
  Cc: htejun@gmail.com
  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
| * [PATCH] x86_64: Unmap NULL during early bootup (Siddha, Suresh B, 2005-11-14; 3 files, -1/+3)
  We should zap the low mappings as soon as possible, so that we can catch
  kernel bugs more effectively. Previously, early boot had NULL mapped and
  didn't trap on NULL references.

  This patch introduces boot_level4_pgt, which will always have the low
  identity addresses mapped. During boot, all the processors will use this
  as their level4 pgt. On the BP we switch to init_level4_pgt as soon as we
  enter C code, and zap the low mappings as soon as we are done using the
  identity low-mapped addresses. On the APs we zap the low mappings as soon
  as we jump to C code.
  Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
  Signed-off-by: Ashok Raj <ashok.raj@intel.com>
  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
| * [PATCH] x86_64: Speed up numa_node_id by putting it directly into the PDA (Andi Kleen, 2005-11-14; 3 files, -0/+5)
  Instead of going from the CPU number through a mapping array. The node
  number is now often used in fast paths. This also adds a generic
  numa_node_id to all the topology includes.
  Suggested by Eric Dumazet
  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
| * [PATCH] x86_64: Fix up outdated pfn_to_page comment (Andi Kleen, 2005-11-14; 1 file, -3/+1)
  pfn_to_page really requires pfn_valid to be true now, no question. Some
  people stumbled over it, but it was misleading and wrong.
  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
| * [PATCH] i386/x86-64: Share interrupt vectors when there is a large number of interrupt sources (James Cleverdon, 2005-11-14; 2 files, -1/+5)
  Here's a patch that builds on Natalie Protasevich's IRQ compression patch
  and tries to work for MPS boots as well as ACPI. It is meant for a 4-node
  IBM x460 NUMA box, which was dying because it had interrupt pins with GSI
  numbers > NR_IRQS and thus overflowed irq_desc.

  The problem is that this system has 270 GSIs (which are 1:1 mapped with
  I/O APIC RTEs), and an 8-node box would have 540. This is much bigger than
  NR_IRQS (224 for both i386 and x86_64). Also, there aren't enough vectors
  to go around. There are about 190 usable vectors, not counting the reserved
  ones and the unused vectors at 0x20 to 0x2F. So, my patch attempts to
  compress the GSI range and share vectors by sharing IRQs.
  Cc: "Protasevich, Natalie" <Natalie.Protasevich@unisys.com>
  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
| * [PATCH] x86_64: Support for AMD specific MCE Threshold (Jacob Shin, 2005-11-14; 3 files, -1/+13)
  MC4_MISC - DRAM Errors Threshold Register, realized under AMD K8 Rev F.
  This register is used to count correctable and uncorrectable ECC errors
  that occur during DRAM read operations. The user may interface through
  sysfs files in order to change the threshold configuration:

    bank%d/error_count      - reads current error count, write to clear
    bank%d/interrupt_enable - set/clear interrupt enable
    bank%d/threshold_limit  - read/write the threshold limit

  APIC vector 0xF9 in hw_irq.h. 5 software-defined bank ids in mce.h. New
  apic.c function to set up the threshold APIC LVT. Defaults to interrupt
  off, count enabled, and threshold limit max. The sysfs interface is
  created under /sys/devices/system/threshold.

  AK: added some ifdefs to make it compile on UP
  Signed-off-by: Jacob Shin <jacob.shin@amd.com>
  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
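Reading one bank from userspace then reduces to opening the attribute files listed above. The sketch takes the bank directory as an argument because the exact hierarchy under /sys/devices/system/threshold is kernel-version dependent.

    /* Sketch: print the attributes of one MCE threshold bank directory,
     * e.g. a bank%d directory under /sys/devices/system/threshold. */
    #include <stdio.h>

    static void show(const char *dir, const char *file)
    {
            char path[256], buf[64];
            FILE *f;

            snprintf(path, sizeof(path), "%s/%s", dir, file);
            f = fopen(path, "r");
            if (!f) {
                    perror(path);
                    return;
            }
            if (fgets(buf, sizeof(buf), f))
                    printf("%-20s %s", file, buf);
            fclose(f);
    }

    int main(int argc, char **argv)
    {
            if (argc != 2) {
                    fprintf(stderr, "usage: %s <bank-directory>\n", argv[0]);
                    return 1;
            }
            show(argv[1], "error_count");
            show(argv[1], "interrupt_enable");
            show(argv[1], "threshold_limit");
            return 0;
    }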
| * [PATCH] x86_64: Adjust, correct, and complete the HPET definitions for x86-64 (Jan Beulich, 2005-11-14; 1 file, -14/+21)
  Signed-off-by: Jan Beulich <jbeulich@novell.com>
  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
| * [PATCH] x86_64: Add 4GB DMA32 zone (Andi Kleen, 2005-11-14; 2 files, -2/+11)
  Add a new 4GB GFP_DMA32 zone between the GFP_DMA and GFP_NORMAL zones.

  As a bit of historical background: when the x86-64 port was originally
  designed we had some discussion about whether we should use a 16MB DMA
  zone like i386, a 4GB DMA zone like IA64, or both. Both was ruled out at
  that point because it was in early 2.4, when the VM was still quite shaky
  and had bad trouble even dealing with one DMA zone. We settled on the 16MB
  DMA zone mainly because we worried about older soundcards and the floppy.

  But this has always caused problems since then, because device drivers had
  trouble getting enough DMA-able memory. These days the VM works much
  better and the wide use of NUMA has proven it can deal with many zones
  successfully.

  So this patch adds both zones.

  This helps drivers which need a lot of memory below 4GB because their
  hardware cannot access more (graphics drivers - proprietary and free ones,
  video frame buffer drivers, sound drivers etc.). Previously they could
  only use IOMMU+16MB GFP_DMA, which was not enough memory.

  Another common problem is hardware that has full memory addressing for
  >4GB but misses it for some control structures in memory (like transmit
  rings or other metadata). It tended to allocate memory in the 16MB GFP_DMA
  zone or the IOMMU/swiotlb using pci_alloc_consistent, but that can tie up
  a lot of precious 16MB GFP_DMA/IOMMU/swiotlb memory (even on AMD systems
  the IOMMU tends to be quite small), especially if you have many devices.
  With the new zone, pci_alloc_consistent can just put this stuff into
  memory below 4GB, which works better.

  One argument was still whether the zone should be 4GB or 2GB. The main
  motivation for 2GB would be an unnamed, not so unpopular hardware RAID
  controller (mostly found in older machines from a particular four-letter
  company) which has a strange 2GB restriction in firmware. But that one
  works ok with swiotlb/IOMMU anyway, so it doesn't really need GFP_DMA32.
  I chose 4GB to be compatible with IA64 and because it seems to be the
  most common restriction.

  The new zone is so far added only for x86-64. For other architectures
  that don't set up this new zone nothing changes. Architectures can set a
  compatibility define in Kconfig, CONFIG_DMA_IS_DMA32, that will define
  GFP_DMA32 as GFP_DMA. Otherwise it's a nop, because on 32-bit
  architectures it's normally not needed since GFP_NORMAL (=0) is DMA-able
  enough.

  One remaining problem is that GFP_DMA means different things on different
  architectures, e.g. some drivers used to have
    #ifdef ia64      - use GFP_DMA (trusting it to be 4GB)
    #elif __x86_64__ - use other hacks like the swiotlb because 16MB is not enough
  This was quite ugly and is now obsolete. These should now be converted to
  use GFP_DMA32 unconditionally. I haven't done this yet. Or, best, only use
  pci_alloc_consistent/dma_alloc_coherent, which will use GFP_DMA32
  transparently.
  Signed-off-by: Andi Kleen <ak@suse.de>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
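For a driver, the pattern hinted at in the last paragraph above looks roughly like this kernel-context sketch; the function, device pointer and sizes are hypothetical, and the dma_alloc_coherent path is the preferred one because it selects GFP_DMA32 transparently from the device's DMA mask.

    /* Sketch (kernel context): two ways to end up with memory below 4GB. */
    #include <linux/gfp.h>
    #include <linux/dma-mapping.h>
    #include <linux/pci.h>

    static void *alloc_below_4g(struct pci_dev *pdev, dma_addr_t *bus)
    {
            /* explicit use of the new zone for raw pages */
            struct page *page = alloc_page(GFP_DMA32);

            if (page)
                    __free_page(page);   /* demonstration only */

            /* preferred: let the DMA API pick the zone from the DMA mask */
            return dma_alloc_coherent(&pdev->dev, 64 * 1024, bus, GFP_KERNEL);
    }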
* | [PATCH] atomic: inc_not_zero (Nick Piggin, 2005-11-13; 1 file, -0/+19)
  Introduce an atomic_inc_not_zero operation. Make this a special case of
  atomic_add_unless, because lockless pagecache actually wants
  atomic_inc_not_negativeone due to its offset refcount.
  Signed-off-by: Nick Piggin <npiggin@suse.de>
  Cc: "Paul E. McKenney" <paulmck@us.ibm.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
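The relationship between the two operations can be sketched as a cmpxchg loop, close to but not necessarily identical with the in-tree definitions:

    /* Sketch: atomic_add_unless() as a cmpxchg loop, with
     * atomic_inc_not_zero() as the (v, 1, 0) special case.
     * Returns non-zero if the add was performed. */
    static inline int atomic_add_unless(atomic_t *v, int a, int u)
    {
            int c, old;

            c = atomic_read(v);
            for (;;) {
                    if (c == u)
                            break;                  /* hit the excluded value */
                    old = atomic_cmpxchg(v, c, c + a);
                    if (old == c)
                            break;                  /* swap succeeded */
                    c = old;                        /* raced; retry */
            }
            return c != u;
    }

    #define atomic_inc_not_zero(v)  atomic_add_unless((v), 1, 0)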
* | [PATCH] atomic: cmpxchg (Nick Piggin, 2005-11-13; 1 file, -0/+2)
  Introduce an atomic_cmpxchg operation.
  Signed-off-by: Nick Piggin <npiggin@suse.de>
  Cc: "Paul E. McKenney" <paulmck@us.ibm.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* | [PATCH] x86_64: fix tss limit (Siddha, Suresh B, 2005-11-13; 1 file, -3/+10)
  Fix the x86_64 TSS limit in the TSS descriptor.
  Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
  Acked-by: Andi Kleen <ak@muc.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* | [PATCH] PCI: Change MSI to use physical delivery mode always (Ashok Raj, 2005-11-10; 2 files, -3/+7)
  MSI hardcoded the delivery mode to logical delivery mode. Recently x86_64
  moved to physical mode addressing to support physflat mode. With this mode
  enabled, I noticed that my MSI-using ethernet devices weren't working.

  msi_address_init() was hardcoded to use logical mode for i386 and x86_64,
  so when we switched to physical mode, things stopped working. Since we
  don't use lowest-priority delivery with MSI anyway, it is always directed
  to just a single CPU. It is safer and simpler to use physical mode always,
  even when we use logical delivery mode for IPIs or other I/O APIC RTEs.
  Signed-off-by: Ashok Raj <ashok.raj@intel.com>
  Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
* | [PATCH] Kprobes: Track kprobe on a per_cpu basis - x86_64 changes (Ananth N Mavinakayanahalli, 2005-11-07; 1 file, -0/+19)
  x86_64 changes to track kprobe execution on a per-cpu basis. We now track
  the kprobe state machine independently on each cpu, using an arch-specific
  kprobe control block.
  Signed-off-by: Ananth N Mavinakayanahalli <ananth@in.ibm.com>
  Signed-off-by: Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* | [PATCH] fix remaining missing includes (Tim Schmielau, 2005-11-07; 2 files, -0/+4)
|/
  Fix more include file problems that surfaced since I submitted the
  previous fix-missing-includes.patch. This should now make it possible not
  to include sched.h from module.h, which is done by a follow-up patch.
  Signed-off-by: Tim Schmielau <tim@physik3.uni-rostock.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* manual update from upstream: (Tony Luck, 2005-10-31; 8 files, -12/+43)
|\
  Applied Al's change 06a544971fad0992fe8b92c5647538d573089dd4 to the new
  location of swiotlb.c.
  Signed-off-by: Tony Luck <tony.luck@intel.com>
| * [PATCH] semaphore: Remove __MUTEX_INITIALIZER() (Arthur Othieno, 2005-10-30; 1 file, -3/+0)
  __MUTEX_INITIALIZER() has no users, and equates to the more commonly used
  DECLARE_MUTEX(), thus making it pretty much redundant. Remove it for good.
  Signed-off-by: Arthur Othieno <a.othieno@bluewin.ch>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
| * [PATCH] vm: remove unused/broken page_pte[_prot] macros (Tejun Heo, 2005-10-30; 1 file, -2/+0)
  This patch removes the page_pte_prot and page_pte macros from all
  architectures. Some architectures define both, some only page_pte
  (broken), and others none. These macros are not used anywhere.

  page_pte_prot(page, prot) is identical to mk_pte(page, prot), and
  page_pte(page) is identical to page_pte_prot(page, __pgprot(0)).

    * The following architectures define both page_pte_prot and page_pte:
      arm, arm26, ia64, sh64, sparc, sparc64
    * The following architectures define only page_pte (broken):
      frv, i386, m32r, mips, sh, x86-64
    * All other architectures define neither.
  Signed-off-by: Tejun Heo <htejun@gmail.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
| * [PATCH] unify sys_ptrace prototype (Christoph Hellwig, 2005-10-30; 1 file, -2/+0)
  Make sure we always return, as all syscalls should. Also move the common
  prototype to <linux/syscalls.h>.
  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Signed-off-by: Miklos Szeredi <miklos@szeredi.hu>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
| * [PATCH] Clean up mtrr compat ioctl code (Brian Gerst, 2005-10-30; 1 file, -0/+33)
  Handle 32-bit mtrr ioctls in the mtrr driver instead of the ia32
  compatibility layer.
  Signed-off-by: Brian Gerst <bgerst@didntduck.org>
  Cc: Andi Kleen <ak@muc.de>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
| * [PATCH] add sem_is_read/write_locked() (Rik Van Riel, 2005-10-29; 1 file, -0/+5)
  Add sem_is_read/write_locked functions to the read/write semaphores, along
  the same lines as the *_is_locked spinlock functions. The swap token
  tuning patch uses sem_is_read_locked; sem_is_write_locked is added for
  completeness.
  Signed-off-by: Rik van Riel <riel@redhat.com>
  Signed-off-by: Andrew Morton <akpm@osdl.org>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
| * [PATCH] gfp_t: dma-mapping (amd64) (Al Viro, 2005-10-28; 1 file, -1/+1)
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
| * [PATCH] gfp_t: dma-mapping (ia64) (Al Viro, 2005-10-28; 1 file, -1/+1)
  ... and related annotations for amd64 - the swiotlb code is shared, but
  the prototypes are not.
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
| * Revert "x86-64: Avoid unnecessary double bouncing for swiotlb" (Linus Torvalds, 2005-10-27; 1 file, -3/+3)
  Commit id 6142891a0c0209c91aa4a98f725de0d6e2ed4918

  Andi Kleen reports that it seems to break things for some people, and
  since it's purely a small optimization, revert it for now.
  Signed-off-by: Linus Torvalds <torvalds@osdl.org>
* | Update from upstream with manual merge of Yasunori Goto's changes to swiotlb.c (Tony Luck, 2005-10-20; 4 files, -1/+5)
|\|
  Merges the changes made in commit 281dd25cdc0d6903929b79183816d151ea626341,
  since this file has been moved from arch/ia64/lib/swiotlb.c to
  lib/swiotlb.c.
  Signed-off-by: Tony Luck <tony.luck@intel.com>