| Commit message | Author | Age | Files | Lines |
|
We should check iommu_dummy() _first_, because if it returns true the
device is attached to an iommu that we've just disabled completely. At
the moment, we might instead try to put the device into the identity
mapping domain.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
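Roughly, the intended ordering looks like this; a sketch only, where
iommu_should_identity_map() and domain_add_dev_info() stand in for
whatever the identity-map setup path actually calls:

    /* Bail out for devices behind a disabled ("dummy") IOMMU before
     * even considering the identity-map path. */
    if (iommu_dummy(pdev))
            return 0;       /* translation is off for this device */

    if (iommu_should_identity_map(pdev, 0))
            return domain_add_dev_info(si_domain, pdev);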
|
The aligned_nrpages() function rounds up to the next VM page, but
returns its result as a number of DMA pages.
Purely theoretical except on IA64, which doesn't boot with VT-d right
now anyway.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
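The distinction only matters where the VM page is bigger than the 4KiB
VT-d page, as on IA64. A sketch of the intended behaviour, assuming the
usual PAGE_MASK/VTD_PAGE_SHIFT definitions (not necessarily the literal
patch):

    static inline unsigned long aligned_nrpages(unsigned long host_addr,
                                                size_t size)
    {
            /* Round the buffer up to whole VM pages, then express that
             * length as a count of (smaller) DMA pages. */
            host_addr &= ~PAGE_MASK;
            return PAGE_ALIGN(host_addr + size) >> VTD_PAGE_SHIFT;
    }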
|
* git://git.infradead.org/iommu-2.6: (38 commits)
intel-iommu: Don't keep freeing page zero in dma_pte_free_pagetable()
intel-iommu: Introduce first_pte_in_page() to simplify PTE-setting loops
intel-iommu: Use cmpxchg64_local() for setting PTEs
intel-iommu: Warn about unmatched unmap requests
intel-iommu: Kill superfluous mapping_lock
intel-iommu: Ensure that PTE writes are 64-bit atomic, even on i386
intel-iommu: Make iommu=pt work on i386 too
intel-iommu: Performance improvement for dma_pte_free_pagetable()
intel-iommu: Don't free too much in dma_pte_free_pagetable()
intel-iommu: dump mappings but don't die on pte already set
intel-iommu: Combine domain_pfn_mapping() and domain_sg_mapping()
intel-iommu: Introduce domain_sg_mapping() to speed up intel_map_sg()
intel-iommu: Simplify __intel_alloc_iova()
intel-iommu: Performance improvement for domain_pfn_mapping()
intel-iommu: Performance improvement for dma_pte_clear_range()
intel-iommu: Clean up iommu_domain_identity_map()
intel-iommu: Remove last use of PHYSICAL_PAGE_MASK, for reserving PCI BARs
intel-iommu: Make iommu_flush_iotlb_psi() take pfn as argument
intel-iommu: Change aligned_size() to aligned_nrpages()
intel-iommu: Clean up intel_map_sg(), remove domain_page_mapping()
...
|
Check dma_pte_present() and only free the page if there _is_ one.
Kind of surprising that there was no warning about this.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
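In other words, the freeing loop only touches the page table when a
present PTE actually points at one; a sketch using the existing
dma_pte_* helpers:

    if (dma_pte_present(pte)) {
            free_pgtable_page(phys_to_virt(dma_pte_addr(pte)));
            dma_clear_pte(pte);
    }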
|
On Wed, 2009-07-01 at 16:59 -0700, Linus Torvalds wrote:
> I also _really_ hate how you do
>
> (unsigned long)pte >> VTD_PAGE_SHIFT ==
> (unsigned long)first_pte >> VTD_PAGE_SHIFT
Kill this, in favour of just looking to see if the incremented pte
pointer has 'wrapped' onto the next page. Which means we have to check
it _after_ incrementing it, not before.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
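Since a page table occupies exactly one VT-d page, a PTE pointer that
has just become page-aligned has wrapped onto the next table. A sketch
of the helper, assuming the usual VTD_PAGE_MASK definition:

    static inline int first_pte_in_page(struct dma_pte *pte)
    {
            return !((unsigned long)pte & ~VTD_PAGE_MASK);
    }

The PTE-setting loops then increment the pointer and test
first_pte_in_page() to decide when to flush the current table's worth
of entries and look up the next table.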
|
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
This would have found the bug in i386 pci_unmap_addr() a long time ago.
We shouldn't just silently return without doing anything.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
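A sketch of the idea in the unmap path; the exact message text and
surrounding code may differ:

    iova = find_iova(&domain->iovad, IOVA_PFN(dev_addr));
    if (WARN_ONCE(!iova, "Driver unmaps unmatched page at PFN %llx\n",
                  (unsigned long long)IOVA_PFN(dev_addr)))
            return;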
|
Since we're using cmpxchg64() anyway (because that's the only way to do
an atomic 64-bit store on i386), we might as well ditch the extra
locking and just use cmpxchg64() to ensure that we don't add the page
twice.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
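The page-table installation in pfn_to_dma_pte() then looks roughly like
this (a sketch, not the literal diff):

    tmp_page = alloc_pgtable_page();
    if (!tmp_page)
            return NULL;

    domain_flush_cache(domain, tmp_page, VTD_PAGE_SIZE);
    pteval = ((u64)virt_to_dma_pfn(tmp_page) << VTD_PAGE_SHIFT)
             | DMA_PTE_READ | DMA_PTE_WRITE;
    if (cmpxchg64(&pte->val, 0ULL, pteval)) {
            /* Someone else raced with us and installed the page table
             * first; throw ours away and use theirs. */
            free_pgtable_page(tmp_page);
    } else {
            domain_flush_cache(domain, pte, sizeof(*pte));
    }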
|
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
As with other functions, batch the CPU data cache flushes and don't keep
recalculating PTE addresses.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
The loop condition was wrong -- we should free a PMD only if its
_entire_ range is within the range we're intending to clear. The
early-termination condition was right, but not the loop.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
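In code form, the freeing condition needs this shape, where
pt_start_pfn/pt_end_pfn are illustrative names for the range the
page-table page covers:

    /* Free the page-table page only when every pfn it maps lies inside
     * [start_pfn, last_pfn], the range being cleared. */
    if (start_pfn <= pt_start_pfn && pt_end_pfn <= last_pfn) {
            free_pgtable_page(phys_to_virt(dma_pte_addr(pte)));
            dma_clear_pte(pte);
    }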
|
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
Instead of calling domain_pfn_mapping() repeatedly with single or
small numbers of pages, just pass the sglist in. It can optimise the
number of cache flushes like domain_pfn_mapping() does, and gives a huge
speedup for large scatterlists.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
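The new helper takes the whole scatterlist, roughly with this shape
(the signature is illustrative):

    static int domain_sg_mapping(struct dmar_domain *domain,
                                 unsigned long iov_pfn,
                                 struct scatterlist *sg,
                                 unsigned long nr_pages, int prot);

intel_map_sg() then makes one call per request instead of one per
scatterlist element.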
|
There's no need for the separate iommu_alloc_iova() function, and
certainly not for it to be global. Remove the underscores while we're at
it.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
As with dma_pte_clear_range(), don't keep flushing a single PTE at a
time. And also micro-optimise the setting of PTE values rather than
using the helper functions to do all the masking.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
It's a bit silly to repeatedly call domain_flush_cache() for each PTE
individually, as we clear it. Instead, batch them up and flush a whole
range at a time. We might as well refrain from recalculating the PTE
address from scratch each time round the loop too.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
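The inner loop of dma_pte_clear_range() then ends up along these lines
(a sketch of the pattern rather than the literal diff):

    first_pte = pte = dma_pfn_level_pte(domain, start_pfn, 1);
    while (start_pfn <= last_pfn && !first_pte_in_page(pte)) {
            dma_clear_pte(pte);
            start_pfn++;
            pte++;
    }
    /* One CPU cache flush for the whole run of cleared PTEs,
     * rather than one per PTE. */
    domain_flush_cache(domain, first_pte, (void *)pte - (void *)first_pte);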
|
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
This is fairly broken anyway -- it doesn't take hotplug into account.
We should probably be checking page_is_ram() instead.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
Most of its callers are having to shift for themselves anyway, so we might
as well do it in iommu_flush_iotlb_psi().
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
... and use it in the trivial cases; the other callers want individual
(and bisectable) attention, since I screwed them up the first time...
Make the BUG_ON() happen on too-large virtual address rather than
physical address, too. That's the one we care about.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
No more masking and alignment; just use pfns.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
Use unaligned address for domain->max_addr. That algorithm isn't ideal
anyway -- we should probably just look at the last iova in the tree.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
With some cleanup of intel_unmap_page(), intel_unmap_sg() and
vm_domain_exit() to no longer play with 64-bit addresses.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
Noting that this is now an _inclusive_ range.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
We're shifting the inputs for now, but that'll change...
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
Add some helpers for converting between VT-d and normal system pfns,
since system pages can be larger than VT-d pages.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
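With 4KiB VT-d pages the conversions are plain shifts by the difference
of the two page shifts; a sketch of the helpers:

    static inline unsigned long mm_to_dma_pfn(unsigned long mm_pfn)
    {
            return mm_pfn << (PAGE_SHIFT - VTD_PAGE_SHIFT);
    }

    static inline unsigned long dma_to_mm_pfn(unsigned long dma_pfn)
    {
            return dma_pfn >> (PAGE_SHIFT - VTD_PAGE_SHIFT);
    }

    static inline unsigned long page_to_dma_pfn(struct page *pg)
    {
            return mm_to_dma_pfn(page_to_pfn(pg));
    }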
|
There's no need for the GFX workaround now that we have 'iommu=pt' for
the cases where people really care about performance. There's no need
for a special case for just one type of device.
This also speeds up the iommu=pt path and reduces memory usage by
setting up the si_domain _once_ and then using it for all devices,
rather than giving each device its own private page tables.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
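For reference, the pass-through path is selected from the kernel
command line, for example:

    intel_iommu=on iommu=pt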
|
We'll want to do this to a _domain_ (the si_domain) rather than a PCI device.
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
In caching mode, domain ID 0 is reserved for non-present to present
mapping flush. Device IOTLB doesn't need to be flushed in this case.
Previously we were avoiding the flush for domain zero, even if the IOMMU
wasn't in caching mode and domain zero wasn't special.
Signed-off-by: Yu Zhao <yu.zhao@intel.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
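So the device-IOTLB flush ends up guarded roughly like this, where
'did' is the domain ID being flushed (a sketch, not the literal diff):

    /* In caching mode, domain ID 0 only covers non-present-to-present
     * flushes, so there is nothing in any device IOTLB to invalidate. */
    if (!cap_caching_mode(iommu->cap) || did)
            iommu_flush_dev_iotlb(iommu->domains[did], addr, mask);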
|
* git://git.kernel.org/pub/scm/linux/kernel/git/lethal/sh-2.6:
sh: LCDC dcache flush for deferred io
sh: Fix compiler error and include the definition of IS_ERR_VALUE
sh: re-add LCDC fbdev support to the Migo-R defconfig
sh: fix se7724 ceu names
sh: ms7724se: Enable sh_eth in defconfig.
arch/sh/boards/mach-se/7206/io.c: Remove unnecessary semicolons
sh: ms7724se: Add sh_eth support
nommu: provide follow_pfn().
sh: Kill off unused DEBUG_BOOTMEM symbol.
perf_counter tools: add cpu_relax()/rmb() definitions for sh.
sh64: Hook up page fault events for software perf counters.
sh: Hook up page fault events for software perf counters.
sh: make set_perf_counter_pending() static inline.
clocksource: sh_tmu: Make undefined TCOR behaviour less undefined.
|
Since writenotify on uncached vmas is unsupported in 2.6.31,
live with cached framebuffer memory in the deferred io
case for now and flush the dcache before forcing refresh.
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
Acked-by: Magnus damm <damm@igel.co.jp>
|
Avoid undocumented, vague TMU behaviour when a zero value is written to
TCOR. This primarily fixes up issues encountered under qemu with a
zero-length period; the hardware itself is fairly ambivalent either
way.
Signed-off-by: Shin-ichiro KAWASAKI <kawasaki@juno.dti.ne.jp>
Acked-by: Magnus Damm <damm@igel.co.jp>
Signed-off-by: Paul Mundt <lethal@linux-sh.org>
|
* git://git.infradead.org/mtd-2.6:
mtd: nand: fix build failure and incorrect return from omap_wait()
mtd: Use BLOCK_NIL consistently in NFTL/INFTL
mtd: m25p80 timeout too short for worst-case m25p16 devices
mtd: atmel_nand: Fix typo s/parititions/partitions/
mtd: cmdlineparts: Use 64-bit format when printing a debug message.
mtd: maps: Remove BUS_ID_SIZE from integrator_flash
jffs2: fix another potential leak on error path in scan.c
|
We need to include jiffies.h manually in some cases, and the status
returned from omap_wait() was broken in two separate ways.
Also add cond_resched() to the loop.
Signed-off-by: Vimal Singh <vimalsingh@ti.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
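The resulting wait loop is roughly of this shape; read_nand_status()
stands in for the driver's actual status read, and the timeout value is
illustrative:

    /* Needs #include <linux/jiffies.h> for jiffies/time_before(). */
    unsigned long timeo = jiffies + msecs_to_jiffies(400);
    int status;

    do {
            status = read_nand_status(mtd);    /* illustrative helper */
            if (status & NAND_STATUS_READY)
                    break;
            cond_resched();            /* don't spin; let others run */
    } while (time_before(jiffies, timeo));

    return status;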
|
Use BLOCK_NIL consistently rather than sometimes 0xffff and sometimes
BLOCK_NIL.
The semantic patch that finds this issue is below
(http://www.emn.fr/x-info/coccinelle/). On the other hand, the changes
were made by hand, in part because drivers/mtd/inftlcore.c contains dead
code that causes spatch to ignore a relevant function. Specifically, the
function INFTL_findwriteunit contains a do-while loop, but always takes a
return that leaves the loop on the first iteration.
// <smpl>
@r exists@
identifier f,C;
@@
f(...) { ... return C; }
@s@
identifier r.C;
expression E;
@@
@@
identifier r.f,r.C,I;
expression s.E;
@@
f(...) {
<...
(
I
|
- E
+ C
)
...>
}
// </smpl>
Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
The m25p16 data sheet from Numonyx lists the worst-case bulk erase time
(tBE) as 40 seconds.
Signed-off-by: Steven A. Falco <sfalco@harris.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
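The ready-wait timeout therefore has to cover at least that worst case,
e.g. roughly (the constant name follows the driver's existing style and
is illustrative):

    /* Worst-case bulk erase (tBE) on the M25P16 is 40 seconds. */
    #define MAX_READY_WAIT_JIFFIES  (40 * HZ)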
|
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@holoscopio.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
|
Commit 69423d99fc182a81f3c5db3eb5c140acc6fc64be ("[MTD] update internal
API to support 64-bit device size") changed some structure fields to
64-bit but did not update this debug message, since it isn't built by
default.
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@holoscopio.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
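The usual fix is to print such fields with a long-long format and an
explicit cast; the field and label names here are illustrative:

    printk(KERN_DEBUG "0x%08llx-0x%08llx : \"%s\"\n",
           (unsigned long long)part->offset,
           (unsigned long long)(part->offset + part->size),
           part->name);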
|
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Tested-by: Catalin Marinas <catalin.marinas@arm.com>
|