From 56b146d36db933844011d5026c6f55593037c7b8 Mon Sep 17 00:00:00 2001
From: Tobias Klauser
Date: Mon, 10 Apr 2006 22:54:21 -0700
Subject: [PATCH] Last DMA_xBIT_MASK cleanups

These are the last conversions of pci_set_dma_mask(),
pci_set_consistent_dma_mask() and pci_dma_supported() to use
DMA_xBIT_MASK constants from linux/dma-mapping.h

Signed-off-by: Tobias Klauser
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---
 Documentation/DMA-mapping.txt | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

(limited to 'Documentation/DMA-mapping.txt')

diff --git a/Documentation/DMA-mapping.txt b/Documentation/DMA-mapping.txt
index ee4bb73683cd..10bf4deb96aa 100644
--- a/Documentation/DMA-mapping.txt
+++ b/Documentation/DMA-mapping.txt
@@ -194,7 +194,7 @@ document for how to handle this case.
 Finally, if your device can only drive the low 24-bits of
 address during PCI bus mastering you might do something like:
 
-	if (pci_set_dma_mask(pdev, 0x00ffffff)) {
+	if (pci_set_dma_mask(pdev, DMA_24BIT_MASK)) {
 		printk(KERN_WARNING
 		       "mydev: 24-bit DMA addressing not available.\n");
 		goto ignore_this_device;
@@ -212,7 +212,7 @@ functions (for example a sound card provides playback and record
 functions) and the various different functions have _different_
 DMA addressing limitations, you may wish to probe each mask and
 only provide the functionality which the machine can handle.  It
-is important that the last call to pci_set_dma_mask() be for the 
+is important that the last call to pci_set_dma_mask() be for the
 most specific mask.
 
 Here is pseudo-code showing how this might be done:
-- cgit v1.2.3
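The pseudo-code the hunk above leads into probes one mask per device
function and leaves the most specific mask in effect last.  A minimal
sketch of that pattern using the DMA_xBIT_MASK constants this patch
introduces; the mydev_* names and struct mydev_card are hypothetical
illustrations, not part of the patched document:

	#include <linux/errno.h>
	#include <linux/pci.h>
	#include <linux/dma-mapping.h>

	/* Hypothetical card whose playback engine can address 32 bits
	 * of DMA but whose record engine can only address 24 bits. */
	struct mydev_card {
		int playback_enabled;
		int record_enabled;
	};

	static int mydev_probe_masks(struct pci_dev *pdev,
				     struct mydev_card *card)
	{
		/* Probe the wider mask first... */
		card->playback_enabled =
			(pci_set_dma_mask(pdev, DMA_32BIT_MASK) == 0);

		/* ...and make the last call use the most specific
		 * mask, as the text above requires, so the mask left
		 * in effect satisfies every enabled function. */
		card->record_enabled =
			(pci_set_dma_mask(pdev, DMA_24BIT_MASK) == 0);

		if (!card->playback_enabled && !card->record_enabled)
			return -EIO;	/* no usable DMA addressing */
		return 0;
	}

A failed pci_set_dma_mask() call leaves the previous mask in place,
which is why disabling the corresponding function is enough here.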
From 21440d313358043b0ce5e43b00ff3c9b35a8616c Mon Sep 17 00:00:00 2001
From: David Brownell
Date: Sat, 1 Apr 2006 10:21:52 -0800
Subject: [PATCH] dma doc updates

This updates the DMA API documentation to address a few issues:

- The dma_map_sg() call results are used like pci_map_sg() results:
  using sg_dma_address() and sg_dma_len().  That's not wholly obvious
  to folk reading _only_ the "new" DMA-API.txt writeup.

- Buffers allocated by dma_alloc_coherent() may not be completely
  free of coherency concerns ... some CPUs also have write buffers
  that may need to be flushed.

- Cacheline coherence issues are now mentioned as being among the
  issues which affect DMA buffers, and which complicate or prevent
  the use of static and (especially) stack-based buffers with the
  DMA calls.

I don't think many drivers currently need to worry about flushing
write buffers, but I did hit it with one SOC using external SDRAM
for DMA descriptors: without explicit write-buffer flushing, the
on-chip DMA controller accessed the descriptors before the CPU had
completed the writes.

Signed-off-by: David Brownell
Signed-off-by: Greg Kroah-Hartman
---
 Documentation/DMA-mapping.txt | 22 +++++++++++++++++-----
 1 file changed, 17 insertions(+), 5 deletions(-)

(limited to 'Documentation/DMA-mapping.txt')

diff --git a/Documentation/DMA-mapping.txt b/Documentation/DMA-mapping.txt
index 10bf4deb96aa..7c717699032c 100644
--- a/Documentation/DMA-mapping.txt
+++ b/Documentation/DMA-mapping.txt
@@ -58,11 +58,15 @@ translating each of those pages back to a kernel address using
 something like __va().  [ EDIT: Update this when we integrate
 Gerd Knorr's generic code which does this. ]
 
-This rule also means that you may not use kernel image addresses
-(ie. items in the kernel's data/text/bss segment, or your driver's)
-nor may you use kernel stack addresses for DMA.  Both of these items
-might be mapped somewhere entirely different than the rest of physical
-memory.
+This rule also means that you may use neither kernel image addresses
+(items in data/text/bss segments), nor module image addresses, nor
+stack addresses for DMA.  These could all be mapped somewhere entirely
+different than the rest of physical memory.  Even if those classes of
+memory could physically work with DMA, you'd need to ensure the I/O
+buffers were cacheline-aligned.  Without that, you'd see cacheline
+sharing problems (data corruption) on CPUs with DMA-incoherent caches.
+(The CPU could write to one word, DMA would write to a different one
+in the same cache line, and one of them could be overwritten.)
 
 Also, this means that you cannot take the return of a kmap()
 call and DMA to/from that.  This is similar to vmalloc().
@@ -284,6 +288,11 @@ There are two types of DMA mappings:
              in order to get correct behavior on all platforms.
 
+             Also, on some platforms your driver may need to flush CPU write
+             buffers in much the same way as it needs to flush write buffers
+             found in PCI bridges (such as by reading a register's value
+             after writing it).
+
 - Streaming DMA mappings which are usually mapped for one DMA transfer,
   unmapped right after it (unless you use pci_dma_sync_* below) and for which
   hardware can optimize for sequential accesses.
@@ -303,6 +312,9 @@ There are two types of DMA mappings:
 Neither type of DMA mapping has alignment restrictions that come
 from PCI, although some devices may have such restrictions.
 
+Also, systems with caches that aren't DMA-coherent will work better
+when the underlying buffers don't share cache lines with other data.
+
 
 		 Using Consistent DMA mappings.
-- cgit v1.2.3
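The first point in the changelog above, that dma_map_sg() results are
consumed through sg_dma_address() and sg_dma_len() exactly as with
pci_map_sg(), can be made concrete with a short sketch against the
generic DMA API of this era (before for_each_sg() existed, the loop
walks the array directly); mydev_program_segment() is a hypothetical
device-specific helper:

	#include <linux/errno.h>
	#include <linux/device.h>
	#include <linux/dma-mapping.h>
	#include <linux/scatterlist.h>

	/* Hypothetical helper that writes one DMA segment into the
	 * device's descriptor ring. */
	extern void mydev_program_segment(dma_addr_t addr,
					  unsigned int len);

	static int mydev_map_and_program(struct device *dev,
					 struct scatterlist *sg,
					 int nents)
	{
		int i, count;

		count = dma_map_sg(dev, sg, nents, DMA_TO_DEVICE);
		if (count == 0)
			return -ENOMEM;

		/* dma_map_sg() may coalesce entries, so loop over its
		 * return value (not nents) and read each mapped
		 * address and length with sg_dma_address() and
		 * sg_dma_len(), never the original page/offset/length
		 * fields. */
		for (i = 0; i < count; i++, sg++)
			mydev_program_segment(sg_dma_address(sg),
					      sg_dma_len(sg));
		return 0;
	}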
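The write-buffer point can be illustrated the same way: even memory
from dma_alloc_coherent() can sit behind a CPU write buffer, so a
driver should make sure descriptor writes have drained before telling
the device to look at them.  A hedged sketch with a made-up descriptor
layout and doorbell register; on kernels of this vintage wmb() comes
from <asm/system.h>, and the sketch assumes a 32-bit dma_addr_t:

	#include <linux/types.h>
	#include <linux/dma-mapping.h>
	#include <asm/byteorder.h>
	#include <asm/io.h>
	#include <asm/system.h>	/* wmb() on 2006-era kernels */

	/* Hypothetical two-word descriptor living in memory obtained
	 * from dma_alloc_coherent(). */
	struct mydev_desc {
		__le32 addr;
		__le32 len;
	};

	static void mydev_post_descriptor(struct mydev_desc *desc,
					  void __iomem *doorbell,
					  dma_addr_t buf, u32 len)
	{
		desc->addr = cpu_to_le32(buf);	/* 32-bit dma_addr_t */
		desc->len  = cpu_to_le32(len);

		/* Flush CPU write buffers so the descriptor reaches
		 * memory before the device is kicked -- the race the
		 * changelog above describes hitting on one SOC. */
		wmb();

		writel(1, doorbell);	/* made-up "go" register */
	}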