path: root/drivers/xen/swiotlb-xen.c
Commit log (newest first); each entry shows subject, author, date, and diffstat.

* xen-swiotlb: remove xen_swiotlb_set_dma_mask
  Christoph Hellwig, 2017-06-28 (1 file, -12/+0)

    This just duplicates the generic implementation.

    Signed-off-by: Christoph Hellwig <hch@lst.de>

* xen-swiotlb: implement ->mapping_error
  Christoph Hellwig, 2017-06-20 (1 file, -2/+10)

    DMA_ERROR_CODE is going to go away, so don't rely on it.

    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

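  The shape of such a callback, as a minimal sketch (the sentinel name and
  value here are assumptions, not the verbatim diff): the map routines hand
  back a driver-private error value and the new op reports it to the DMA core.

      #define XEN_SWIOTLB_ERROR_CODE (~(dma_addr_t)0x0)

      static int xen_swiotlb_mapping_error(struct device *dev,
                                           dma_addr_t dma_addr)
      {
              /* The DMA core calls this to ask whether a map failed. */
              return dma_addr == XEN_SWIOTLB_ERROR_CODE;
      }
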
* xen-swiotlb: consolidate xen_swiotlb_dma_ops
  Christoph Hellwig, 2017-06-20 (1 file, -45/+48)

    ARM and x86 had duplicated versions of the dma_ops structure, the only
    difference being that x86 hadn't wired up the set_dma_mask, mmap, and
    get_sgtable ops yet. On x86 all of them are identical to the generic
    version, so they aren't needed but are harmless. All the symbols used
    only for xen_swiotlb_dma_ops can now be marked static as well.

    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

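  A sketch of the consolidated structure; the callback names match symbols
  this file defines, but the exact field list of the ~4.12-era
  struct dma_map_ops is assumed rather than copied from the diff:

      static const struct dma_map_ops xen_swiotlb_dma_ops = {
              .alloc = xen_swiotlb_alloc_coherent,
              .free = xen_swiotlb_free_coherent,
              .sync_single_for_cpu = xen_swiotlb_sync_single_for_cpu,
              .sync_single_for_device = xen_swiotlb_sync_single_for_device,
              .sync_sg_for_cpu = xen_swiotlb_sync_sg_for_cpu,
              .sync_sg_for_device = xen_swiotlb_sync_sg_for_device,
              .map_sg = xen_swiotlb_map_sg_attrs,
              .unmap_sg = xen_swiotlb_unmap_sg_attrs,
              .map_page = xen_swiotlb_map_page,
              .unmap_page = xen_swiotlb_unmap_page,
              .dma_supported = xen_swiotlb_dma_supported,
              .mmap = xen_swiotlb_dma_mmap,
              .get_sgtable = xen_swiotlb_get_sgtable,
              .mapping_error = xen_swiotlb_mapping_error,
      };
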
* xen/arm,arm64: rename __generic_dma_ops to xen_get_dma_ops
  Stefano Stabellini, 2017-05-02 (1 file, -4/+4)

    Now that __generic_dma_ops is a xen specific function, rename it to
    xen_get_dma_ops. Change all the call sites appropriately.

    Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
    Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
    Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    CC: linux@armlinux.org.uk
    CC: catalin.marinas@arm.com
    CC: will.deacon@arm.com
    CC: boris.ostrovsky@oracle.com
    CC: jgross@suse.com
    CC: Julien Grall <julien.grall@arm.com>

* Merge branch 'stable/for-linus-4.11' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb
  Linus Torvalds, 2017-03-07 (1 file, -0/+47)

    Pull swiotlb updates from Konrad Rzeszutek Wilk:
     "Two tiny implementations of the DMA API for callback in ARM (for Xen)"

    * 'stable/for-linus-4.11' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb:
      swiotlb-xen: implement xen_swiotlb_get_sgtable callback
      swiotlb-xen: implement xen_swiotlb_dma_mmap callback

| * swiotlb-xen: implement xen_swiotlb_get_sgtable callback
    Andrii Anisov, 2017-02-13 (1 file, -0/+28)

    Signed-off-by: Andrii Anisov <andrii_anisov@epam.com>
    Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad@kernel.org>

| * swiotlb-xen: implement xen_swiotlb_dma_mmap callback
    Stefano Stabellini, 2017-02-13 (1 file, -0/+19)

    This function creates a userspace mapping for the DMA-coherent memory.

    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Signed-off-by: Oleksandr Dmytryshyn <oleksandr.dmytryshyn@globallogic.com>
    Signed-off-by: Andrii Anisov <andrii_anisov@epam.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad@kernel.org>

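  A sketch of what the +19 lines plausibly look like (assumed, not the
  verbatim diff): defer to the native ops on ARM when they provide mmap,
  otherwise fall back to the common helper.

      static int
      xen_swiotlb_dma_mmap(struct device *dev, struct vm_area_struct *vma,
                           void *cpu_addr, dma_addr_t dma_addr, size_t size,
                           unsigned long attrs)
      {
      #if defined(CONFIG_ARM) || defined(CONFIG_ARM64)
              if (__generic_dma_ops(dev)->mmap)
                      return __generic_dma_ops(dev)->mmap(dev, vma, cpu_addr,
                                                          dma_addr, size, attrs);
      #endif
              return dma_common_mmap(dev, vma, cpu_addr, dma_addr, size);
      }
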
* | swiotlb-xen: update dev_addr after swapping pages
    Stefano Stabellini, 2017-01-20 (1 file, -2/+3)

    In xen_swiotlb_map_page and xen_swiotlb_map_sg_attrs, if the original
    page is not suitable, we swap it for another page from the swiotlb
    pool. In these cases, we don't update the previously calculated dma
    address for the page before calling xen_dma_map_page. Thus, we end up
    calling xen_dma_map_page passing the wrong dev_addr, resulting in
    xen_dma_map_page mistakenly assuming that the page is foreign when it
    is local.

    Fix the bug by updating dev_addr appropriately.

    This change has no effect on x86, because xen_dma_map_page is a stub
    there.

    Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
    Signed-off-by: Pooya Keshavarzi <Pooya.Keshavarzi@de.bosch.com>
    Tested-by: Pooya Keshavarzi <Pooya.Keshavarzi@de.bosch.com>
    Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

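  The essence of the fix in the bounce path, as a sketch (the surrounding
  code is assumed from the file of that era):

      map = swiotlb_tbl_map_single(dev, start_dma_addr, phys, size,
                                   dir, attrs);
      if (map == SWIOTLB_MAP_ERROR)
              return DMA_ERROR_CODE;

      /* The page was swapped for a bounce page; "map" is its new
       * physical address, so recompute the dma address before
       * xen_dma_map_page() inspects it. */
      dev_addr = xen_phys_to_bus(map);
      xen_dma_map_page(dev, pfn_to_page(map >> PAGE_SHIFT),
                       dev_addr, map & ~PAGE_MASK, size, dir, attrs);
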
* swiotlb: Export swiotlb_max_segment to users
  Konrad Rzeszutek Wilk, 2017-01-06 (1 file, -0/+4)

    So they can figure out the optimal number of pages that can be
    contiguously stitched together without fear of the bounce buffer.

    We also expose a mechanism for sub-users of the SWIOTLB API, such as
    Xen-SWIOTLB, to set the max segment value. And lastly, if
    swiotlb=force is set (which mandates we bounce buffer everything), we
    set max_segment so at least we can bounce buffer one 4K page instead
    of a giant 512KB one for which we may not have space.

    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Reported-and-Tested-by: Juergen Gross <jgross@suse.com>

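  A sketch of the consumer side (driver_max_segment is a hypothetical
  helper; only swiotlb_max_segment() itself comes from this commit):

      static unsigned int driver_max_segment(unsigned int want)
      {
              unsigned int cap = swiotlb_max_segment();

              /* cap is non-zero only when the swiotlb may bounce;
               * keep every scatterlist segment within that limit. */
              return cap ? min(want, cap) : want;
      }
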
* swiotlb: Convert swiotlb_force from int to enum
  Geert Uytterhoeven, 2016-12-19 (1 file, -2/+2)

    Convert the flag swiotlb_force from an int to an enum, to prepare for
    the advent of more possible values.

    Suggested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

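  What the enum grew into, as a sketch (this commit introduced the first
  two values; SWIOTLB_NO_FORCE is assumed from the follow-up the message
  anticipates):

      enum swiotlb_force {
              SWIOTLB_NORMAL,   /* default, depending on HW DMA mask etc. */
              SWIOTLB_FORCE,    /* swiotlb=force */
              SWIOTLB_NO_FORCE, /* swiotlb=noforce */
      };
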
* swiotlb: Minor fix-ups for DMA_ATTR_SKIP_CPU_SYNC support
  Alexander Duyck, 2016-11-10 (1 file, -2/+2)

    I am updating the paths so that instead of trying to pass
    "attr | DMA_ATTR_SKIP_CPU_SYNC" we just OR the value into attr and
    then pass it, since attr will not be used after we make the unmap
    call.

    I realized there was one spot I had missed when I was applying the
    DMA attribute to the DMA mapping exception handling. This change
    corrects that.

    Finally, it looks like there is a stray blank line at the end of the
    swiotlb_unmap_sg_attrs function that can be dropped.

    Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad@kernel.org>

* swiotlb: Add support for DMA_ATTR_SKIP_CPU_SYNC
  Alexander Duyck, 2016-11-07 (1 file, -4/+7)

    As a first step to making DMA_ATTR_SKIP_CPU_SYNC apply to
    architectures beyond just ARM, I need to make it so that the swiotlb
    will respect the flag. In order to do that I also need to update
    swiotlb-xen, since it makes heavy use of the functionality.

    Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad@kernel.org>

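  The pattern this commit and the fix-up above converge on, sketched
  (function and flag names are real; the surrounding unmap path is
  assumed):

      /* attrs is dead after the unmap, so OR the flag in once ... */
      attrs |= DMA_ATTR_SKIP_CPU_SYNC;
      swiotlb_tbl_unmap_single(hwdev, paddr, size, dir, attrs);

      /* ... instead of composing it at every call site:
       * swiotlb_tbl_unmap_single(hwdev, paddr, size, dir,
       *                          attrs | DMA_ATTR_SKIP_CPU_SYNC); */
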
* swiotlb-xen: Enforce return of DMA_ERROR_CODE in mapping function
  Alexander Duyck, 2016-11-07 (1 file, -12/+6)

    The mapping function should always return DMA_ERROR_CODE when a
    mapping has failed, as this is what the DMA API expects when a DMA
    error has occurred. The current function for mapping a page in Xen
    was returning either DMA_ERROR_CODE or 0 depending on where it
    failed. On x86 DMA_ERROR_CODE is 0, but on other architectures such
    as ARM it is ~0. We need to make sure we return the same error value
    if either the mapping failed or the device is not capable of
    accessing the mapping.

    If we are returning DMA_ERROR_CODE as our error value we can drop the
    function for checking the error code, as the default is to compare
    the return value against DMA_ERROR_CODE if no function is defined.

    Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad@kernel.org>

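  Both failure exits now returning the same sentinel, sketched (the shape
  of the error handling is assumed from the message, not copied from the
  diff):

      if (map == SWIOTLB_MAP_ERROR)
              return DMA_ERROR_CODE;  /* bounce allocation failed */

      dev_addr = xen_phys_to_bus(map);
      if (unlikely(!dma_capable(dev, dev_addr, size))) {
              swiotlb_tbl_unmap_single(dev, map, size, dir,
                                       attrs | DMA_ATTR_SKIP_CPU_SYNC);
              /* previously one of these paths returned 0 instead */
              return DMA_ERROR_CODE;
      }
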
* dma-mapping: use unsigned long for dma_attrs
  Krzysztof Kozlowski, 2016-08-04 (1 file, -7/+7)

    The dma-mapping core and the implementations do not change the DMA
    attributes passed by pointer. Thus the pointer can point to const
    data. However the attributes do not have to be a bitfield. Instead
    unsigned long will do fine:

    1. This is just simpler. Both in terms of reading the code and
       setting attributes. Instead of initializing local attributes on
       the stack and passing a pointer to it to dma_set_attr(), just set
       the bits.

    2. It brings safeness and checking for const correctness because the
       attributes are passed by value.

    Semantic patches for this change (at least most of them):

        virtual patch
        virtual context

        @r@
        identifier f, attrs;
        @@
        f(...,
        - struct dma_attrs *attrs
        + unsigned long attrs
        , ...)
        {
        ...
        }

        @@
        identifier r.f;
        @@
        f(...,
        - NULL
        + 0
        )

    and

        // Options: --all-includes
        virtual patch
        virtual context

        @r@
        identifier f, attrs;
        type t;
        @@
        t f(..., struct dma_attrs *attrs);

        @@
        identifier r.f;
        @@
        f(...,
        - NULL
        + 0
        )

    Link: http://lkml.kernel.org/r/1468399300-5399-2-git-send-email-k.kozlowski@samsung.com
    Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
    Acked-by: Vineet Gupta <vgupta@synopsys.com>
    Acked-by: Robin Murphy <robin.murphy@arm.com>
    Acked-by: Hans-Christian Noren Egtvedt <egtvedt@samfundet.no>
    Acked-by: Mark Salter <msalter@redhat.com> [c6x]
    Acked-by: Jesper Nilsson <jesper.nilsson@axis.com> [cris]
    Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch> [drm]
    Reviewed-by: Bart Van Assche <bart.vanassche@sandisk.com>
    Acked-by: Joerg Roedel <jroedel@suse.de> [iommu]
    Acked-by: Fabien Dessenne <fabien.dessenne@st.com> [bdisp]
    Reviewed-by: Marek Szyprowski <m.szyprowski@samsung.com> [vb2-core]
    Acked-by: David Vrabel <david.vrabel@citrix.com> [xen]
    Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> [xen swiotlb]
    Acked-by: Joerg Roedel <jroedel@suse.de> [iommu]
    Acked-by: Richard Kuo <rkuo@codeaurora.org> [hexagon]
    Acked-by: Geert Uytterhoeven <geert@linux-m68k.org> [m68k]
    Acked-by: Gerald Schaefer <gerald.schaefer@de.ibm.com> [s390]
    Acked-by: Bjorn Andersson <bjorn.andersson@linaro.org>
    Acked-by: Hans-Christian Noren Egtvedt <egtvedt@samfundet.no> [avr32]
    Acked-by: Vineet Gupta <vgupta@synopsys.com> [arc]
    Acked-by: Robin Murphy <robin.murphy@arm.com> [arm64 and dma-iommu]
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

* xen/swiotlb: Add support for 64KB page granularity
  Julien Grall, 2015-10-23 (1 file, -20/+19)

    Swiotlb is used on ARM64 to support DMA on platforms where devices
    are not protected by an SMMU. Furthermore it's only enabled for DOM0.

    While Xen always uses 4KB page granularity in the stage-2 page table,
    Linux ARM64 may use either 4KB or 64KB. This means that a Linux page
    can span multiple Xen pages.

    The swiotlb code has to validate that the buffer used for DMA is
    physically contiguous in memory. As a Linux page can't be shared
    between local memory and a foreign page by design (the balloon code
    always removes a Linux page in its entirety), the changes in the code
    are very minimal because we only need to check the first Xen PFN.

    Note that it may be possible to optimize the function
    check_page_physically_contiguous to avoid looping over every Xen PFN
    for local memory, but I will leave that optimization for a follow-up.

    Signed-off-by: Julien Grall <julien.grall@citrix.com>
    Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Signed-off-by: David Vrabel <david.vrabel@citrix.com>

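  The contiguity check this relies on, sketched at Xen-page granularity
  (helper names follow the XEN_PAGE_* convention of that series; the exact
  loop is assumed, not the verbatim diff):

      static int range_straddles_page_boundary(phys_addr_t p, size_t size)
      {
              unsigned long xen_pfn = XEN_PFN_DOWN(p);
              unsigned long next_bfn = pfn_to_bfn(xen_pfn);
              unsigned int i, nr_pages = XEN_PFN_UP(xen_offset_in_page(p) + size);

              /* A 64KB Linux page covers sixteen 4KB Xen pages; DMA is
               * only safe if their machine frames are consecutive. */
              for (i = 1; i < nr_pages; i++)
                      if (pfn_to_bfn(++xen_pfn) != ++next_bfn)
                              return 1;
              return 0;
      }
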
* xen/swiotlb: Pass addresses rather than frame numbers to xen_arch_need_swiotlb
  Julien Grall, 2015-10-23 (1 file, -2/+2)

    With 64KB page granularity support, the frame number will be
    different. It will be easier to modify the behavior in a single place
    rather than in each caller.

    Signed-off-by: Julien Grall <julien.grall@citrix.com>
    Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Signed-off-by: David Vrabel <david.vrabel@citrix.com>

* Merge branch 'akpm' (patches from Andrew)
  Linus Torvalds, 2015-09-10 (1 file, -6/+0)

    Merge third patch-bomb from Andrew Morton:

     - even more of the rest of MM
     - lib/ updates
     - checkpatch updates
     - small changes to a few scruffy filesystems
     - kmod fixes/cleanups
     - kexec updates
     - a dma-mapping cleanup series from hch

    * emailed patches from Andrew Morton <akpm@linux-foundation.org>: (81 commits)
      dma-mapping: consolidate dma_set_mask
      dma-mapping: consolidate dma_supported
      dma-mapping: cosolidate dma_mapping_error
      dma-mapping: consolidate dma_{alloc,free}_noncoherent
      dma-mapping: consolidate dma_{alloc,free}_{attrs,coherent}
      mm: use vma_is_anonymous() in create_huge_pmd() and wp_huge_pmd()
      mm: make sure all file VMAs have ->vm_ops set
      mm, mpx: add "vm_flags_t vm_flags" arg to do_mmap_pgoff()
      mm: mark most vm_operations_struct const
      namei: fix warning while make xmldocs caused by namei.c
      ipc: convert invalid scenarios to use WARN_ON
      zlib_deflate/deftree: remove bi_reverse()
      lib/decompress_unlzma: Do a NULL check for pointer
      lib/decompressors: use real out buf size for gunzip with kernel
      fs/affs: make root lookup from blkdev logical size
      sysctl: fix int -> unsigned long assignments in INT_MIN case
      kexec: export KERNEL_IMAGE_SIZE to vmcoreinfo
      kexec: align crash_notes allocation to make it be inside one physical page
      kexec: remove unnecessary test in kimage_alloc_crash_control_pages()
      kexec: split kexec_load syscall from kexec core code
      ...

| * dma-mapping: consolidate dma_{alloc,free}_{attrs,coherent}
    Christoph Hellwig, 2015-09-10 (1 file, -6/+0)

    Since 2009 we have a nice asm-generic header implementing lots of DMA
    API functions for architectures using struct dma_map_ops, but
    unfortunately it's still missing a lot of APIs that all architectures
    still have to duplicate. This series consolidates the remaining
    functions, although we still need arch opt outs for two of them as a
    few architectures have very non-standard implementations.

    This patch (of 5):

    The coherent DMA allocator works the same over all architectures
    supporting dma_map operations.

    This patch consolidates them and converges the minor differences:

    - the debug_dma helpers are now called from all architectures,
      including those that were previously missing them

    - dma_alloc_from_coherent and dma_release_from_coherent are now
      always called from the generic alloc/free routines instead of the
      ops. dma-mapping-common.h always includes dma-coherent.h to get the
      definitions for them, or the stubs if the architecture doesn't
      support this feature

    - checks for ->alloc / ->free presence are removed. There is only one
      instance of dma_map_ops without them (mic_dma_ops) and that one is
      x86 only anyway.

    Besides that only x86 needs special treatment to replace a default
    device if none is passed and tweak the gfp_flags. An optional arch
    hook is provided for that.

    [linux@roeck-us.net: fix build]
    [jcmvbkbc@gmail.com: fix xtensa]
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Cc: Arnd Bergmann <arnd@arndb.de>
    Cc: Russell King <linux@arm.linux.org.uk>
    Cc: Catalin Marinas <catalin.marinas@arm.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Cc: Yoshinori Sato <ysato@users.sourceforge.jp>
    Cc: Michal Simek <monstr@monstr.eu>
    Cc: Jonas Bonn <jonas@southpole.se>
    Cc: Chris Metcalf <cmetcalf@ezchip.com>
    Cc: Guan Xuetao <gxt@mprc.pku.edu.cn>
    Cc: Ralf Baechle <ralf@linux-mips.org>
    Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
    Cc: Ingo Molnar <mingo@elte.hu>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: "H. Peter Anvin" <hpa@zytor.com>
    Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
    Signed-off-by: Guenter Roeck <linux@roeck-us.net>
    Signed-off-by: Max Filippov <jcmvbkbc@gmail.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

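  The consolidated allocator, sketched and simplified (the exact checks
  and the arch hook behavior are assumed from the description above):

      static inline void *dma_alloc_attrs(struct device *dev, size_t size,
                                          dma_addr_t *dma_handle, gfp_t flag,
                                          struct dma_attrs *attrs)
      {
              struct dma_map_ops *ops = get_dma_ops(dev);
              void *cpu_addr;

              BUG_ON(!ops);

              /* Per-device coherent pools are now tried generically. */
              if (dma_alloc_from_coherent(dev, size, dma_handle, &cpu_addr))
                      return cpu_addr;

              /* x86's hook may substitute a default device / tweak flags. */
              if (!arch_dma_alloc_attrs(&dev, &flag))
                      return NULL;

              cpu_addr = ops->alloc(dev, size, dma_handle, flag, attrs);
              debug_dma_alloc_coherent(dev, size, *dma_handle, cpu_addr);
              return cpu_addr;
      }
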
* | xen: Make clear that swiotlb and biomerge are dealing with DMA address
    Julien Grall, 2015-09-08 (1 file, -8/+8)

    The swiotlb is required when programming a DMA address on ARM when a
    device is not protected by an IOMMU. In this case, the DMA address
    should always be equal to the machine address. For DOM0 memory, Xen
    ensures this by having an identity mapping between the guest address
    and the host address. However, when mapping a foreign grant reference,
    the 1:1 model doesn't work.

    For an ARM guest, most of the callers of pfn_to_mfn expect to get a
    GFN (Guest Frame Number), i.e. a PFN (Page Frame Number) from the
    Linux point of view, given that all ARM guests are auto-translated.

    Even though the name pfn_to_mfn is misleading, we need to ensure that
    those callers get a GFN and not, by mistake, an MFN. In practice I
    haven't seen errors related to this, but we should fix it for the sake
    of correctness.

    In order to fix the implementation of pfn_to_mfn on ARM in a follow-up
    patch, we have to introduce new helpers to return the DMA address from
    a PFN and the inverse. On x86, the new helpers will be aliases of
    pfn_to_mfn and mfn_to_pfn.

    The helpers will be used in swiotlb and xen_biovec_phys_mergeable.
    This is necessary in the latter because we have to ensure that the
    biovec code will not try to merge a biovec using a foreign page and
    another using Linux memory.

    Lastly, the helper mfn_to_local_pfn has been renamed to
    bfn_to_local_pfn, given that its only usage was in swiotlb.

    Signed-off-by: Julien Grall <julien.grall@citrix.com>
    Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Signed-off-by: David Vrabel <david.vrabel@citrix.com>

* xen: Add __GFP_DMA flag when xen_swiotlb_init gets free pages on ARM
  Stefano Stabellini, 2015-05-06 (1 file, -1/+1)

    Make sure that xen_swiotlb_init allocates buffers that are DMA capable
    when at least one memblock is available below 4G. Otherwise we assume
    that all devices on the SoC can cope with >4G addresses. We do this on
    ARM and ARM64, where dom0 is mapped 1:1, so pfn == mfn in this case.

    No functional changes on x86.

    From: Chen Baozi <baozich@gmail.com>
    Signed-off-by: Chen Baozi <baozich@gmail.com>
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Tested-by: Chen Baozi <baozich@gmail.com>
    Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Signed-off-by: David Vrabel <david.vrabel@citrix.com>

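  A sketch of the ARM-side allocation helper this implies (the memblock
  test and the helper's exact shape are assumptions based on the
  description):

      static unsigned long xen_get_swiotlb_free_pages(unsigned int order)
      {
              gfp_t flags = __GFP_NOWARN;

              /* Only ask for DMA-capable pages if some RAM actually
               * lives below 4G; otherwise plain pages are fine. */
              if (memblock_start_of_DRAM() < DMA_BIT_MASK(32))
                      flags |= __GFP_DMA;

              return __get_free_pages(flags, order);
      }
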
* Revert "swiotlb-xen: pass dev_addr to swiotlb_tbl_unmap_single"David Vrabel2014-12-101-1/+1
| | | | | | | | | | | This reverts commit 2c3fc8d26dd09b9d7069687eead849ee81c78e46. This commit broke on x86 PV because entries in the generic SWIOTLB are indexed using (pseudo-)physical address not DMA address and these are not the same in a x86 PV guest. Signed-off-by: David Vrabel <david.vrabel@citrix.com> Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
* swiotlb-xen: pass dev_addr to swiotlb_tbl_unmap_single
  Stefano Stabellini, 2014-12-04 (1 file, -1/+1)

    We need to pass the pointer within the swiotlb internal buffer to the
    swiotlb library; in the case of xen_unmap_single that is dev_addr, not
    paddr.

    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    CC: stable@vger.kernel.org

* swiotlb-xen: call xen_dma_sync_single_for_device when appropriate
  Stefano Stabellini, 2014-12-04 (1 file, -1/+1)

    In xen_swiotlb_sync_single we always call xen_dma_sync_single_for_cpu,
    even when we should call xen_dma_sync_single_for_device. Fix that.

    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    CC: stable@vger.kernel.org

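  The corrected dispatch, sketched (the surrounding function shape is
  assumed from the swiotlb-xen code of that era):

      /* CPU is taking ownership: maintain caches before the CPU reads. */
      if (target == SYNC_FOR_CPU)
              xen_dma_sync_single_for_cpu(hwdev, dev_addr, size, dir);

      if (is_xen_swiotlb_buffer(dev_addr))
              swiotlb_tbl_sync_single(hwdev, paddr, size, dir, target);

      /* Device is taking ownership: previously this arm also (wrongly)
       * called xen_dma_sync_single_for_cpu. */
      if (target == SYNC_FOR_DEVICE)
              xen_dma_sync_single_for_device(hwdev, dev_addr, size, dir);
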
* swiotlb-xen: remove BUG_ON in xen_bus_to_phys
  Stefano Stabellini, 2014-12-04 (1 file, -2/+0)

    On x86 truncation cannot occur because config XEN depends on
    X86_64 || (X86_32 && X86_PAE).

    On ARM truncation can occur without CONFIG_ARM_LPAE, when the dma
    operation involves foreign grants. However in that case the physical
    address returned by xen_bus_to_phys is actually invalid (there is no
    mfn-to-pfn tracking for foreign grants on ARM) and it is not used.

    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    CC: stable@vger.kernel.org

* swiotlb-xen: pass dev_addr to xen_dma_unmap_page and xen_dma_sync_single_for_cpu
  Stefano Stabellini, 2014-12-04 (1 file, -3/+3)

    xen_dma_unmap_page and xen_dma_sync_single_for_cpu take a dma_addr_t
    handle as argument, not a physical address.

    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    CC: stable@vger.kernel.org

* xen/arm/arm64: introduce xen_arch_need_swiotlb
  Stefano Stabellini, 2014-12-04 (1 file, -1/+4)

    Introduce an arch specific function to find out whether a particular
    dma mapping operation needs to bounce on the swiotlb buffer.

    On ARM and ARM64, if the page involved is a foreign page and the
    device is not coherent, we need to bounce because at unmap time we
    cannot execute any required cache maintenance operations (we don't
    know how to find the pfn from the mfn).

    No change of behaviour for x86.

    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Reviewed-by: David Vrabel <david.vrabel@citrix.com>
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Acked-by: Ian Campbell <ian.campbell@citrix.com>
    Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

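  The ARM-side policy described above, as a sketch (the exact guard list
  in the real hook is assumed); the x86 stub simply returns false:

      bool xen_arch_need_swiotlb(struct device *dev, unsigned long pfn,
                                 unsigned long mfn)
      {
              /* Foreign page (pfn != mfn) on a non-coherent device: we
               * could not do cache maintenance at unmap time, so bounce. */
              return pfn != mfn && !is_device_dma_coherent(dev);
      }
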
* xen: add a dma_addr_t dev_addr argument to xen_dma_map_page
  Stefano Stabellini, 2014-12-04 (1 file, -2/+4)

    dev_addr is the machine address of the page.

    The new parameter can be used by the ARM and ARM64 implementations of
    xen_dma_map_page to find out if the page is a local page (pfn == mfn)
    or a foreign page (pfn != mfn).

    dev_addr could be retrieved again from the physical address, using
    pfn_to_mfn, but it requires accessing an rbtree. Since we already have
    the dev_addr in our hands at the call site there is no need to get the
    mfn twice.

    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
    Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

* xen: swiotlb: handle sizeof(dma_addr_t) != sizeof(phys_addr_t)
  Ian Campbell, 2014-01-30 (1 file, -2/+20)

    The use of phys_to_machine and machine_to_phys in the phys<=>bus
    conversions causes us to lose the top bits of the DMA address if the
    size of a DMA address is not the same as the size of the physical
    address.

    This can happen in practice on ARM where foreign pages can be above
    4GB even though the local kernel does not have LPAE page tables
    enabled (which is totally reasonable if the guest does not itself have
    >4GB of RAM). In this case the kernel still maps the foreign pages at
    a phys addr below 4G (as it must) but the resulting DMA address
    (returned by the grant map operation) is much higher.

    This is analogous to a hardware device which has its view of RAM
    mapped up high for some reason.

    This patch makes I/O to foreign pages (specifically blkif) work on
    32-bit ARM systems with more than 4GB of RAM.

    Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

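  The widened conversion, sketched (helper names from the 2014-era file;
  the exact arithmetic is assumed): the frame number is promoted to
  dma_addr_t before the shift so the top bits survive on 32-bit non-LPAE
  kernels.

      static inline dma_addr_t xen_phys_to_bus(phys_addr_t paddr)
      {
              unsigned long mfn = pfn_to_mfn(PFN_DOWN(paddr));

              /* Without the cast, mfn << PAGE_SHIFT is a 32-bit shift on
               * non-LPAE ARM and the high DMA bits would be lost. */
              dma_addr_t dma = (dma_addr_t)mfn << PAGE_SHIFT;

              dma |= paddr & ~PAGE_MASK;
              return dma;
      }
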
* swiotlb-xen: add missing xen_dma_map_page call
  Stefano Stabellini, 2013-11-15 (1 file, -0/+5)

    swiotlb-xen is missing a xen_dma_map_page call in
    xen_swiotlb_map_sg_attrs, in the bounce buffer path.

    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

* Merge remote-tracking branch 'stefano/swiotlb-xen-9.1' into stable/for-linus-3.13
  Konrad Rzeszutek Wilk, 2013-11-08 (1 file, -26/+89)

    * stefano/swiotlb-xen-9.1:
      swiotlb-xen: fix error code returned by xen_swiotlb_map_sg_attrs
      swiotlb-xen: static inline xen_phys_to_bus, xen_bus_to_phys, xen_virt_to_bus and range_straddles_page_boundary
      grant-table: call set_phys_to_machine after mapping grant refs
      arm,arm64: do not always merge biovec if we are running on Xen
      swiotlb: print a warning when the swiotlb is full
      swiotlb-xen: use xen_dma_map/unmap_page, xen_dma_sync_single_for_cpu/device
      xen: introduce xen_dma_map/unmap_page and xen_dma_sync_single_for_cpu/device
      swiotlb-xen: use xen_alloc/free_coherent_pages
      xen: introduce xen_alloc/free_coherent_pages
      arm64/xen: get_dma_ops: return xen_dma_ops if we are running as xen_initial_domain
      arm/xen: get_dma_ops: return xen_dma_ops if we are running as xen_initial_domain
      swiotlb-xen: introduce xen_swiotlb_set_dma_mask
      xen/arm,arm64: enable SWIOTLB_XEN
      xen: make xen_create_contiguous_region return the dma address
      xen/x86: allow __set_phys_to_machine for autotranslate guests
      arm/xen,arm64/xen: introduce p2m
      arm64: define DMA_ERROR_CODE
      arm: make SWIOTLB available

    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

    Conflicts:
      arch/arm/include/asm/dma-mapping.h
      drivers/xen/swiotlb-xen.c

    [Conflicts arose b/c "arm: make SWIOTLB available" v8 was in Stefano's
    branch, while I had v9 + Ack from Russell. I also fixed up white-space
    issues]

| * swiotlb-xen: fix error code returned by xen_swiotlb_map_sg_attrs
    Stefano Stabellini, 2013-10-29 (1 file, -1/+1)

    map_sg returns the number of elements mapped, not a dma_addr_t.
    In case of error return 0, not DMA_ERROR_CODE.

    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

| * swiotlb-xen: static inline xen_phys_to_bus, xen_bus_to_phys, xen_virt_to_bus and range_straddles_page_boundary
    Stefano Stabellini, 2013-10-25 (1 file, -4/+4)

    These functions are small and called frequently. Static inline them.

    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

| * swiotlb: print a warning when the swiotlb is full
    Stefano Stabellini, 2013-10-25 (1 file, -0/+1)

    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

    Changes in v7:
    - use dev_warn instead of pr_warn.

| * swiotlb-xen: use xen_dma_map/unmap_page, xen_dma_sync_single_for_cpu/device
    Stefano Stabellini, 2013-10-25 (1 file, -8/+31)

    Call xen_dma_map_page, xen_dma_unmap_page,
    xen_dma_sync_single_for_cpu, xen_dma_sync_single_for_device from
    swiotlb-xen to ensure cpu/device coherency of the pages used for DMA,
    including the ones belonging to the swiotlb buffer.

    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

| * swiotlb-xen: use xen_alloc/free_coherent_pages
    Stefano Stabellini, 2013-10-10 (1 file, -10/+21)

    Use xen_alloc_coherent_pages and xen_free_coherent_pages to allocate
    or free coherent pages.

    We need to be careful handling the pointer returned by
    xen_alloc_coherent_pages, because on ARM the pointer is not equal to
    phys_to_virt(*dma_handle). In fact virt_to_phys only works for kernel
    direct mapped RAM memory. In the ARM case the pointer could be an
    ioremap address, therefore passing it to virt_to_phys would give you
    another physical address that doesn't correspond to it.

    Make xen_create_contiguous_region take a phys_addr_t as start
    parameter to avoid the virt_to_phys calls which would be incorrect.

    Changes in v6:
    - remove extra spaces.

    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

| * swiotlb-xen: introduce xen_swiotlb_set_dma_mask
    Stefano Stabellini, 2013-10-09 (1 file, -0/+12)

    Implement xen_swiotlb_set_dma_mask, use it for set_dma_mask on arm.

    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

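  A sketch of the helper (assumed to mirror the generic set_dma_mask
  logic, which is why the first entry in this log could later remove it
  as a duplicate):

      int xen_swiotlb_set_dma_mask(struct device *dev, u64 dma_mask)
      {
              if (!dev->dma_mask || !xen_swiotlb_dma_supported(dev, dma_mask))
                      return -EIO;

              *dev->dma_mask = dma_mask;
              return 0;
      }
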
| * xen/arm,arm64: enable SWIOTLB_XEN
    Stefano Stabellini, 2013-10-10 (1 file, -0/+15)

    Xen on arm and arm64 needs SWIOTLB_XEN: when running on Xen we need to
    program the hardware with mfns rather than pfns for dma addresses.
    Remove the SWIOTLB_XEN dependency on X86 and PCI and make XEN select
    SWIOTLB_XEN on arm and arm64.

    At the moment we always rely on swiotlb-xen, but when Xen starts
    supporting hardware IOMMUs we'll be able to avoid it conditionally on
    the presence of an IOMMU on the platform.

    Implement xen_create_contiguous_region on arm and arm64: for the
    moment we assume that dom0 has been mapped 1:1 (physical addresses ==
    machine addresses), therefore we don't need to call XENMEM_exchange.
    Simply return the physical address as dma address.

    Initialize the xen-swiotlb from xen_early_init (before the native
    dma_ops are initialized), set xen_dma_ops to &xen_swiotlb_dma_ops.

    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>

    Changes in v8:
    - assume dom0 is mapped 1:1, no need to call XENMEM_exchange.

    Changes in v7:
    - call __set_phys_to_machine_multi from xen_create_contiguous_region
      and xen_destroy_contiguous_region to update the P2M;
    - don't call XENMEM_unpin, it has been removed;
    - call XENMEM_exchange instead of XENMEM_exchange_and_pin;
    - set nr_exchanged to 0 before calling the hypercall.

    Changes in v6:
    - introduce and export xen_dma_ops;
    - call xen_mm_init as an arch_initcall.

    Changes in v4:
    - remove redefinition of DMA_ERROR_CODE;
    - update the code to use XENMEM_exchange_and_pin and XENMEM_unpin;
    - add a note about hardware IOMMU in the commit message.

    Changes in v3:
    - code style changes;
    - warn on XENMEM_put_dma_buf failures.

| * xen: make xen_create_contiguous_region return the dma address
    Stefano Stabellini, 2013-10-09 (1 file, -3/+3)

    Modify xen_create_contiguous_region to return the dma address of the
    newly contiguous buffer.

    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Acked-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Reviewed-by: David Vrabel <david.vrabel@citrix.com>

    Changes in v4:
    - use virt_to_machine instead of virt_to_bus.

* | tracing/events: Fix swiotlb tracepoint creation
    Thierry Reding, 2013-10-24 (1 file, -1/+0)

    Tracepoints are only created when Xen support is enabled, but they are
    also referenced within lib/swiotlb.c. So unless Xen support is enabled
    the tracepoints will be missing, therefore causing builds to fail. Fix
    this by moving the tracepoint creation to lib/swiotlb.c, which works
    nicely because the Xen swiotlb support selects the generic swiotlb
    support.

    Signed-off-by: Thierry Reding <treding@nvidia.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

* | tracing/events: Add bounce tracing to swiotlb
    Zoltan Kiss, 2013-10-02 (1 file, -0/+5)

    Ftrace is currently not able to detect when SWIOTLB has to do double
    buffering. Under Xen you can only see it indirectly in function_graph,
    when xen_swiotlb_map_page() doesn't stop after
    range_straddles_page_boundary(), but calls spinlock functions,
    memcpy() and xen_phys_to_bus() as well.

    This patch introduces the swiotlb:swiotlb_bounced event, which also
    prints out the following information to help you find out why bouncing
    happened:

      dev_name: 0000:08:00.0 dma_mask=ffffffffffffffff dev_addr=9149f000 size=32768 swiotlb_force=0

    If you use Xen, and (dev_addr + size + 1) > dma_mask, the buffer is
    out of the device's DMA range. If swiotlb_force == 1, you should
    really change the kernel parameters. Otherwise, the buffer is not
    contiguous in mfn space.

    Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
    [v1: Don't print 'swiotlb_force=X', just print swiotlb_force if it is enabled]
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

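  The instrumentation point itself is a one-line hook in the bounce path
  (its exact placement in the map routine is assumed):

      /* Emitted just before falling back to the bounce buffer. */
      trace_swiotlb_bounced(hwdev, dev_addr, size, swiotlb_force);
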
* swiotlb-xen: replace dma_length with sg_dma_len() macro
  Stefano Stabellini, 2013-08-09 (1 file, -4/+4)

    swiotlb-xen has an implicit dependency on CONFIG_NEED_SG_DMA_LENGTH.
    Remove it by replacing dma_length with sg_dma_len.

    Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

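  The substitution, sketched: sg_dma_len() expands to sg->dma_length or
  sg->length depending on CONFIG_NEED_SG_DMA_LENGTH, so the accessor hides
  the config dependency.

      sg_dma_len(sg) = sg->length;  /* was: sg->dma_length = sg->length; */
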
* xen: Convert printks to pr_<level>
  Joe Perches, 2013-06-28 (1 file, -5/+7)

    Convert printks to pr_<level> (excluding printk(KERN_DEBUG...)) to be
    more consistent throughout the xen subsystem.

    - Add pr_fmt with KBUILD_MODNAME or "xen:" KBUILD_MODNAME
    - Coalesce formats and add missing word spaces
    - Add missing newlines
    - Align arguments and reflow to 80 columns
    - Remove DRV_NAME from formats as pr_fmt adds the same content

    This does change some of the prefixes of these messages but it also
    does make them more consistent.

    Signed-off-by: Joe Perches <joe@perches.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

* x86: Don't panic if can not alloc buffer for swiotlb
  Yinghai Lu, 2013-01-29 (1 file, -1/+3)

    Normal boot path on a system with iommu support: the swiotlb buffer is
    allocated early at first, and then we try to initialize the iommu; if
    the iommu for intel or AMD could be set up properly, the swiotlb
    buffer is freed.

    The early allocation is done with bootmem and could panic when we try
    to use kdump with the buffer above 4G only, or with memmap to limit
    mem under 4G. For example: memmap=4095M$1M to remove memory under 4G.

    According to Eric, add a _nopanic version and no_iotlb_memory to fail
    map single later if swiotlb is still needed.

    -v2: don't pass nopanic, and use -ENOMEM return value according to
         Eric. panic early instead of using swiotlb_full to panic...
         according to Eric/Konrad.
    -v3: make swiotlb_init not panic, but this will affect: arm64, ia64,
         powerpc, tile, unicore32, x86.
    -v4: cleanup swiotlb_init by removing swiotlb_init_with_default_size.

    Suggested-by: Eric W. Biederman <ebiederm@xmission.com>
    Signed-off-by: Yinghai Lu <yinghai@kernel.org>
    Link: http://lkml.kernel.org/r/1359058816-7615-36-git-send-email-yinghai@kernel.org
    Reviewed-and-tested-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
    Cc: Joerg Roedel <joro@8bytes.org>
    Cc: Ralf Baechle <ralf@linux-mips.org>
    Cc: Jeremy Fitzhardinge <jeremy@goop.org>
    Cc: Kyungmin Park <kyungmin.park@samsung.com>
    Cc: Marek Szyprowski <m.szyprowski@samsung.com>
    Cc: Arnd Bergmann <arnd@arndb.de>
    Cc: Andrzej Pietrasiewicz <andrzej.p@samsung.com>
    Cc: linux-mips@linux-mips.org
    Cc: xen-devel@lists.xensource.com
    Cc: virtualization@lists.linux-foundation.org
    Cc: Shuah Khan <shuahkhan@gmail.com>
    Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>

* swiotlb: Use physical addresses instead of virtual in swiotlb_tbl_sync_single
  Alexander Duyck, 2012-10-30 (1 file, -2/+1)

    This change makes it so that the sync functionality also uses physical
    addresses. This helps to further reduce the use of the virt_to_phys
    and phys_to_virt functions.

    In order to clarify things, since we now have 2 physical addresses in
    use inside of swiotlb_tbl_sync_single, I am renaming phys to orig_addr
    and dma_addr to tlb_addr. This way it should be clear that orig_addr
    is contained within io_orig_addr and tlb_addr is an address within the
    io_tlb_addr buffer.

    Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

* swiotlb: Use physical addresses for swiotlb_tbl_unmap_single
  Alexander Duyck, 2012-10-30 (1 file, -2/+2)

    This change makes it so that the unmap functionality also uses
    physical addresses. This helps to further reduce the use of the
    virt_to_phys and phys_to_virt functions.

    In order to clarify things, since we now have 2 physical addresses in
    use inside of swiotlb_tbl_unmap_single, I am renaming phys to
    orig_addr and dma_addr to tlb_addr. This way it should be clear that
    orig_addr is contained within io_orig_addr and tlb_addr is an address
    within the io_tlb_addr buffer.

    Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

* swiotlb: Return physical addresses when calling swiotlb_tbl_map_single
  Alexander Duyck, 2012-10-30 (1 file, -11/+11)

    This change makes it so that swiotlb_tbl_map_single will return a
    physical address instead of a virtual address when called. The
    advantage to this, once again, is that we are avoiding a number of
    virt_to_phys and phys_to_virt translations by working with everything
    as a physical address.

    One change I had to make in order to support using physical addresses
    is that I could no longer trust 0 to be an invalid physical address on
    all platforms. So instead I made it so that ~0 is returned on error.
    This should never be a valid return value as it implies that only one
    byte would be available for use.

    In order to clarify things, since we now have 2 physical addresses in
    use inside of swiotlb_tbl_map_single, I am renaming phys to orig_addr
    and dma_addr to tlb_addr. This way it should be clear that orig_addr
    is contained within io_orig_addr and tlb_addr is an address within the
    io_tlb_addr buffer.

    Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

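  The changed contract from the caller's side, sketched (the caller-side
  shape is assumed; the ~0 sentinel comes from the message above):

      phys_addr_t map;

      map = swiotlb_tbl_map_single(hwdev, tbl_dma_addr, orig_addr,
                                   size, dir);
      if (map == SWIOTLB_MAP_ERROR)  /* ~0: 0 can be a valid phys addr */
              return NULL;

      dev_addr = phys_to_dma(hwdev, map);
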
* Merge branch 'stable/late-swiotlb.v3.3' into stable/for-linus-3.7
  Konrad Rzeszutek Wilk, 2012-09-22 (1 file, -26/+75)

    * stable/late-swiotlb.v3.3:
      xen/swiotlb: Fix compile warnings when using plain integer instead of NULL pointer.
      xen/swiotlb: Remove functions not needed anymore.
      xen/pcifront: Use Xen-SWIOTLB when initting if required.
      xen/swiotlb: For early initialization, return zero on success.
      xen/swiotlb: Use the swiotlb_late_init_with_tbl to init Xen-SWIOTLB late when PV PCI is used.
      xen/swiotlb: Move the error strings to its own function.
      xen/swiotlb: Move the nr_tbl determination in its own function.
      swiotlb: add the late swiotlb initialization function with iotlb memory
      xen/swiotlb: With more than 4GB on 64-bit, disable the native SWIOTLB.
      xen/swiotlb: Simplify the logic.

    Conflicts:
      arch/x86/xen/pci-swiotlb-xen.c

    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

| * xen/swiotlb: Remove functions not needed anymore.
    Konrad Rzeszutek Wilk, 2012-09-17 (1 file, -16/+0)

    Sparse warns us off:

      drivers/xen/swiotlb-xen.c:506:1: warning: symbol 'xen_swiotlb_map_sg' was not declared. Should it be static?
      drivers/xen/swiotlb-xen.c:534:1: warning: symbol 'xen_swiotlb_unmap_sg' was not declared. Should it be static?

    and it looks like we do not need these functions at all.

    Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

| * xen/swiotlb: For early initialization, return zero on success.
    Konrad Rzeszutek Wilk, 2012-09-17 (1 file, -2/+3)

    If everything is set up properly we would still return -ENOMEM, since
    rc by default is set to that value. Let's not do that, and return a
    proper return code instead.

    Note: the reason the early code needs this special treatment is that
    the SWIOTLB library call does not return anything (had it failed it
    would call panic()) - but our function does.

    Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

| * xen/swiotlb: Use the swiotlb_late_init_with_tbl to init Xen-SWIOTLB late when PV PCI is used.
    Konrad Rzeszutek Wilk, 2012-09-17 (1 file, -10/+38)

    With this patch we provide the functionality to initialize the
    Xen-SWIOTLB late in the bootup cycle - specifically for the Xen
    PCI-frontend. We still will work if the user had supplied
    'iommu=soft' on the Linux command line.

    Note: we cannot depend on after_bootmem to automatically determine
    whether this is early or not. This is because when PCI IOMMUs are
    initialized it is after after_bootmem but before a lot of "other"
    subsystems are initialized.

    CC: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
    [v1: Fix smatch warnings]
    [v2: Added check for xen_swiotlb]
    [v3: Rebased with new xen-swiotlb changes]
    [v4: squashed xen/swiotlb: Depending on after_bootmem is not correct in]
    Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com>
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>