author     FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>   2008-10-23 23:14:29 +0900
committer  Ingo Molnar <mingo@elte.hu>                       2008-10-23 21:54:40 +0200
commit     03967c5267b0e7312d1d55dc814d94cf190ca573
tree       71db8a764b7422d8c21b33fb70dc310c3659984d /arch
parent     75bebb7f0c2a709812cccb4d3151a21b012c5cad
x86: restore the old swiotlb alloc_coherent behavior
This restores the old swiotlb alloc_coherent behavior (before the
alloc_coherent rewrite):
http://lkml.org/lkml/2008/8/12/200
The old alloc_coherent first tries an allocation without GFP_DMA; if the
resulting address does not fit the device's coherent_dma_mask,
dma_alloc_coherent retries with GFP_DMA. Only if that also fails does
alloc_coherent call swiotlb_alloc_coherent (in short, swiotlb_alloc_coherent
was rarely used).
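
Roughly, that old ordering looks like the sketch below. This is only an
illustration of the behavior described above, not the historical source: the
helper name and the simplified mask check are made up for the example.

#include <linux/dma-mapping.h>
#include <linux/gfp.h>
#include <linux/io.h>
#include <linux/swiotlb.h>

/* Illustrative sketch only: the pre-rewrite fallback ordering. */
static void *old_ordering_sketch(struct device *dev, size_t size,
				 dma_addr_t *dma_handle, gfp_t gfp)
{
	int order = get_order(size);
	void *vaddr;

	/* 1) First try without GFP_DMA, i.e. from any zone. */
	vaddr = (void *)__get_free_pages(gfp, order);
	if (vaddr) {
		*dma_handle = virt_to_phys(vaddr);
		if (*dma_handle + size - 1 <= dev->coherent_dma_mask)
			return vaddr;
		free_pages((unsigned long)vaddr, order);
	}

	/* 2) Address did not fit coherent_dma_mask: retry from ZONE_DMA. */
	vaddr = (void *)__get_free_pages(gfp | GFP_DMA, order);
	if (vaddr) {
		*dma_handle = virt_to_phys(vaddr);
		if (*dma_handle + size - 1 <= dev->coherent_dma_mask)
			return vaddr;
		free_pages((unsigned long)vaddr, order);
	}

	/* 3) Only now fall back to the swiotlb bounce pool. */
	return swiotlb_alloc_coherent(dev, size, dma_handle, gfp);
}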
After the alloc_coherent rewrite, dma_alloc_coherent
(include/asm-x86/dma-mapping.h) calls swiotlb_alloc_coherent directly. This
means we can no longer reliably handle a device whose DMA mask is wider than
24 bits but narrower than 32 bits, since swiotlb_alloc_coherent lacks the
GFP_DMA retry mechanism described above.
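
For concreteness, a hypothetical driver hitting that window might look like
the snippet below; the device and probe function are invented, only the
30-bit coherent mask is the point.

#include <linux/pci.h>
#include <linux/dma-mapping.h>

/*
 * Hypothetical example: a device limited to the low 1GB has a 30-bit
 * coherent mask -- wider than ZONE_DMA's 24 bits, narrower than 32 bits.
 */
static int example_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	dma_addr_t handle;
	void *buf;

	if (pci_set_consistent_dma_mask(pdev, DMA_BIT_MASK(30)))
		return -EIO;

	/*
	 * With the rewritten path (before this fix) this goes straight to
	 * swiotlb_alloc_coherent(); if the allocation lands above 1GB there
	 * is no GFP_DMA retry to fall back on.
	 */
	buf = pci_alloc_consistent(pdev, 4096, &handle);
	if (!buf)
		return -ENOMEM;

	return 0;
}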
This patch changes x86's swiotlb alloc_coherent to use the GFP_DMA retry
mechanism that dma_generic_alloc_coherent() now provides (pci-nommu.c and
the GART IOMMU driver also use dma_generic_alloc_coherent).
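
That retry mechanism is not visible in the diff below; roughly, it has the
following shape. This is a simplified sketch, not the actual
dma_generic_alloc_coherent() source.

#include <linux/dma-mapping.h>
#include <linux/gfp.h>
#include <linux/mm.h>

/*
 * Simplified sketch of the GFP_DMA retry that dma_generic_alloc_coherent()
 * provides; the real implementation differs in detail.
 */
static void *generic_retry_sketch(struct device *dev, size_t size,
				  dma_addr_t *dma_handle, gfp_t flags)
{
	u64 mask = dev->coherent_dma_mask ? : DMA_BIT_MASK(32);
	int order = get_order(size);
	struct page *page;

again:
	page = alloc_pages(flags | __GFP_ZERO, order);
	if (!page)
		return NULL;

	*dma_handle = page_to_phys(page);
	if (*dma_handle + size - 1 > mask) {
		__free_pages(page, order);
		/* Retry once from ZONE_DMA before giving up. */
		if (!(flags & GFP_DMA)) {
			flags |= GFP_DMA;
			goto again;
		}
		return NULL;
	}

	return page_address(page);
}

The new x86_swiotlb_alloc_coherent() in the diff simply tries this generic
path first and only falls back to swiotlb_alloc_coherent() when it returns
NULL.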
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Diffstat (limited to 'arch')
-rw-r--r--  arch/x86/kernel/pci-swiotlb_64.c | 14 +++++++++++++-
1 file changed, 13 insertions(+), 1 deletion(-)
diff --git a/arch/x86/kernel/pci-swiotlb_64.c b/arch/x86/kernel/pci-swiotlb_64.c
index c4ce0332759e..3c539d111abb 100644
--- a/arch/x86/kernel/pci-swiotlb_64.c
+++ b/arch/x86/kernel/pci-swiotlb_64.c
@@ -18,9 +18,21 @@ swiotlb_map_single_phys(struct device *hwdev, phys_addr_t paddr, size_t size,
 	return swiotlb_map_single(hwdev, phys_to_virt(paddr), size, direction);
 }
 
+static void *x86_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
+					dma_addr_t *dma_handle, gfp_t flags)
+{
+	void *vaddr;
+
+	vaddr = dma_generic_alloc_coherent(hwdev, size, dma_handle, flags);
+	if (vaddr)
+		return vaddr;
+
+	return swiotlb_alloc_coherent(hwdev, size, dma_handle, flags);
+}
+
 struct dma_mapping_ops swiotlb_dma_ops = {
 	.mapping_error = swiotlb_dma_mapping_error,
-	.alloc_coherent = swiotlb_alloc_coherent,
+	.alloc_coherent = x86_swiotlb_alloc_coherent,
 	.free_coherent = swiotlb_free_coherent,
 	.map_single = swiotlb_map_single_phys,
 	.unmap_single = swiotlb_unmap_single,