path: root/drivers/iommu/io-pgtable-arm.c
Commit history (newest first; each entry lists author, date, and diffstat):
* Merge branch 'for-next/iommu/misc' into for-next/iommu/core (Will Deacon, 2020-12-08; 1 file, -9/+9)

    Miscellaneous IOMMU changes for 5.11. Largely cosmetic, apart from a
    change to the way in which identity-mapped domains are configured so
    that the requests are now batched and can potentially use larger pages
    for the mapping.

    * for-next/iommu/misc:
      iommu/io-pgtable-arm: Remove unused 'level' parameter from iopte_type() macro
      iommu: Defer the early return in arm_(v7s/lpae)_map
      iommu: Improve the performance for direct_mapping
      iommu: return error code when it can't get group
      iommu: Modify the description of iommu_sva_unbind_device

| * iommu/io-pgtable-arm: Remove unused 'level' parameter from iopte_type() macro (Kunkun Jiang, 2020-12-08; 1 file, -5/+5)

    The 'level' parameter to the iopte_type() macro is unused, so remove it.

    Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
    Link: https://lore.kernel.org/r/20201207120150.1891-1-jiangkunkun@huawei.com
    Signed-off-by: Will Deacon <will@kernel.org>

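    A minimal before/after sketch of the change (the shift/mask names follow
    the driver; treat the exact macro body as illustrative):

        /* before: the second argument is never used */
        #define iopte_type(pte, l)                                      \
                (((pte) >> ARM_LPAE_PTE_TYPE_SHIFT) & ARM_LPAE_PTE_TYPE_MASK)

        /* after: drop it at the definition and at every call site */
        #define iopte_type(pte)                                         \
                (((pte) >> ARM_LPAE_PTE_TYPE_SHIFT) & ARM_LPAE_PTE_TYPE_MASK)
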
| * iommu: Defer the early return in arm_(v7s/lpae)_map (Keqian Zhu, 2020-12-08; 1 file, -4/+4)

    Although handling a mapping request with no permissions is a trivial
    no-op, defer the early return until after the size/range checks so
    that we are consistent with other mapping requests.

    Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
    Link: https://lore.kernel.org/r/20201207115758.9400-1-zhukeqian1@huawei.com
    Signed-off-by: Will Deacon <will@kernel.org>

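    A condensed sketch of the reordering in arm_lpae_map() (the validation
    details vary by quirk; this is illustrative, not the verbatim diff):

        /* after: range-check the request first ... */
        if (WARN_ON(iova >> data->iop.cfg.ias || paddr >> data->iop.cfg.oas))
                return -ERANGE;

        /* ... and only then treat a permission-less request as a no-op */
        if (!(iommu_prot & (IOMMU_READ | IOMMU_WRITE)))
                return 0;
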
* | iommu/io-pgtable-arm: Add support to use system cache (Sai Prakash Ranjan, 2020-11-25; 1 file, -2/+8)

    Add a quirk IO_PGTABLE_QUIRK_ARM_OUTER_WBWA to override the
    outer-cacheability attributes set in the TCR for a non-coherent page
    table walker when using system cache.

    Signed-off-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
    Link: https://lore.kernel.org/r/f818676b4a2a9ad1edb92721947d47db41ed6a7c.1606287059.git.saiprakash.ranjan@codeaurora.org
    Signed-off-by: Will Deacon <will@kernel.org>

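    A sketch of how such a quirk might be applied when building the TCR for
    a non-coherent walker (field and constant names follow the driver's TCR
    handling; the exact placement is assumed):

        if (!cfg->coherent_walk) {
                tcr->sh = ARM_LPAE_TCR_SH_OS;
                tcr->irgn = ARM_LPAE_TCR_RGN_NC;
                /* let outer walks allocate in the system cache if asked */
                if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_OUTER_WBWA)
                        tcr->orgn = ARM_LPAE_TCR_RGN_WBWA;
                else
                        tcr->orgn = ARM_LPAE_TCR_RGN_NC;
        }
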
* Merge tag 'dma-mapping-5.10' of git://git.infradead.org/users/hch/dma-mapping (Linus Torvalds, 2020-10-15; 1 file, -5/+0)

    Pull dma-mapping updates from Christoph Hellwig:

     - rework the non-coherent DMA allocator
     - move private definitions out of <linux/dma-mapping.h>
     - lower CMA_ALIGNMENT (Paul Cercueil)
     - remove the omap1 dma address translation in favor of the common code
     - make dma-direct aware of multiple dma offset ranges (Jim Quinlan)
     - support per-node DMA CMA areas (Barry Song)
     - increase the default seg boundary limit (Nicolin Chen)
     - misc fixes (Robin Murphy, Thomas Tai, Xu Wang)
     - various cleanups

    * tag 'dma-mapping-5.10' of git://git.infradead.org/users/hch/dma-mapping: (63 commits)
      ARM/ixp4xx: add a missing include of dma-map-ops.h
      dma-direct: simplify the DMA_ATTR_NO_KERNEL_MAPPING handling
      dma-direct: factor out a dma_direct_alloc_from_pool helper
      dma-direct check for highmem pages in dma_direct_alloc_pages
      dma-mapping: merge <linux/dma-noncoherent.h> into <linux/dma-map-ops.h>
      dma-mapping: move large parts of <linux/dma-direct.h> to kernel/dma
      dma-mapping: move dma-debug.h to kernel/dma/
      dma-mapping: remove <asm/dma-contiguous.h>
      dma-mapping: merge <linux/dma-contiguous.h> into <linux/dma-map-ops.h>
      dma-contiguous: remove dma_contiguous_set_default
      dma-contiguous: remove dev_set_cma_area
      dma-contiguous: remove dma_declare_contiguous
      dma-mapping: split <linux/dma-mapping.h>
      cma: decrease CMA_ALIGNMENT lower limit to 2
      firewire-ohci: use dma_alloc_pages
      dma-iommu: implement ->alloc_noncoherent
      dma-mapping: add new {alloc,free}_noncoherent dma_map_ops methods
      dma-mapping: add a new dma_alloc_pages API
      dma-mapping: remove dma_cache_sync
      53c700: convert to dma_alloc_noncoherent
      ...

| * iommu/io-pgtable-arm: Clean up faulty sanity check (Robin Murphy, 2020-09-21; 1 file, -5/+0)

    Checking for a nonzero dma_pfn_offset was a quick shortcut to validate
    whether the DMA == phys assumption could hold at all. Checking for a
    non-NULL dma_range_map is not quite equivalent, since a map may be
    present to describe a limited DMA window even without an offset, and
    thus this check can now yield false positives. However, it only ever
    served to short-circuit going all the way through to
    __arm_lpae_alloc_pages(), failing the canonical test there, and having
    a bit more to clean up. As such, we can simply remove it without loss
    of correctness.

    Reported-by: Naresh Kamboju <naresh.kamboju@linaro.org>
    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Christoph Hellwig <hch@lst.de>

| * dma-mapping: introduce DMA range map, supplanting dma_pfn_offset (Jim Quinlan, 2020-09-17; 1 file, -1/+1)

    The new field 'dma_range_map' in struct device is used to facilitate
    the use of single or multiple offsets between mapping regions of cpu
    addrs and dma addrs. It subsumes the role of "dev->dma_pfn_offset",
    which was only capable of holding a single uniform offset and had no
    region bounds checking.

    The function of_dma_get_range() has been modified so that it takes a
    single argument -- the device node -- and returns a map, NULL, or an
    error code. The map is an array that holds the information regarding
    the DMA regions. Each range entry contains the address offset, the
    cpu_start address, the dma_start address, and the size of the region.

    of_dma_configure() is the typical manner to set range offsets but
    there are a number of ad hoc assignments to "dev->dma_pfn_offset" in
    the kernel driver code. These cases now invoke the function
    dma_direct_set_offset(dev, cpu_addr, dma_addr, size).

    Signed-off-by: Jim Quinlan <james.quinlan@broadcom.com>
    [hch: various interface cleanups]
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Reviewed-by: Mathieu Poirier <mathieu.poirier@linaro.org>
    Tested-by: Mathieu Poirier <mathieu.poirier@linaro.org>
    Tested-by: Nathan Chancellor <natechancellor@gmail.com>

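    A sketch of one entry in the new map, matching the fields described
    above (layout as of this era; treat it as illustrative rather than the
    definitive header):

        struct bus_dma_region {
                phys_addr_t     cpu_start;      /* CPU-side start of region */
                dma_addr_t      dma_start;      /* device-side start */
                u64             size;           /* size of the region */
                u64             offset;         /* cpu_start - dma_start */
        };
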
* | iommu/io-pgtable-arm: Move some definitions to a header (Jean-Philippe Brucker, 2020-09-28; 1 file, -25/+2)

    Extract some of the most generic TCR defines, so they can be reused by
    the page table sharing code.

    Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
    Reviewed-by: Jonathan Cameron <Jonathan.Cameron@huawei.com>
    Acked-by: Will Deacon <will@kernel.org>
    Link: https://lore.kernel.org/r/20200918101852.582559-6-jean-philippe@linaro.org
    Signed-off-by: Will Deacon <will@kernel.org>

*-. Merge branches 'arm/renesas', 'arm/qcom', 'arm/mediatek', 'arm/omap', 'arm/exynos', 'arm/smmu', 'ppc/pamu', 'x86/vt-d', 'x86/amd' and 'core' into next (Joerg Roedel, 2020-07-29; 1 file, -12/+9)

| | * iommu: Add gfp parameter to io_pgtable_ops->map() (Baolin Wang, 2020-07-24; 1 file, -9/+9)

    The ARM page tables are currently always allocated with GFP_ATOMIC,
    but iommu_ops->map() gained a gfp_t parameter in commit 781ca2de89ba
    ("iommu: Add gfp parameter to iommu_ops::map"). io_pgtable_ops->map()
    should therefore use the gfp value passed down from iommu_ops->map()
    when allocating page-table pages, which avoids draining the memory
    allocator's atomic pools in non-atomic contexts.

    Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
    Acked-by: Will Deacon <will@kernel.org>
    Link: https://lore.kernel.org/r/3093df4cb95497aaf713fca623ce4ecebb197c2e.1591930156.git.baolin.wang@linux.alibaba.com
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

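    A sketch of the resulting callback signature (this matches the shape
    described above; neighbouring members elided):

        struct io_pgtable_ops {
                int (*map)(struct io_pgtable_ops *ops, unsigned long iova,
                           phys_addr_t paddr, size_t size, int prot,
                           gfp_t gfp);
                /* ... unmap, iova_to_phys, ... */
        };
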
| * iommu: Remove unused IOMMU_SYS_CACHE_ONLY flag (Will Deacon, 2020-07-08; 1 file, -3/+0)

    The IOMMU_SYS_CACHE_ONLY flag was never exposed via the DMA API and
    has no in-tree users. Remove it.

    Cc: Robin Murphy <robin.murphy@arm.com>
    Cc: "Isaac J. Manjarres" <isaacm@codeaurora.org>
    Cc: Joerg Roedel <joro@8bytes.org>
    Cc: Rob Clark <robdclark@gmail.com>
    Reviewed-by: Christoph Hellwig <hch@lst.de>
    Reviewed-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
    Signed-off-by: Will Deacon <will@kernel.org>

* iommu/io-pgtable-arm: Fix IOVA validation for 32-bit (Robin Murphy, 2020-03-02; 1 file, -2/+2)

    Since we only support the TTBR1 quirk for AArch64 contexts, and
    consequently only for 64-bit builds, the sign-extension aspect of the
    "are all bits above IAS consistent?" check should implicitly only
    apply to 64-bit IOVAs. Change the type of the cast to ensure that
    32-bit longs don't inadvertently get sign-extended, and thus
    considered invalid, if they happen to be above 2GB in the TTB0 region.

    Reported-by: Stephan Gerhold <stephan@gerhold.net>
    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Acked-by: Will Deacon <will@kernel.org>
    Fixes: db6903010aa5 ("iommu/io-pgtable-arm: Prepare for TTBR1 usage")
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

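    A sketch of the cast change being described (assuming the validation
    introduced by the TTBR1 patch below):

        /* before: on 32-bit builds, (long)iova sign-extends IOVAs >= 2GB */
        long iaext = (long)iova >> cfg->ias;

        /* after: do the shift at 64 bits so only genuine upper bits count */
        long iaext = (s64)iova >> cfg->ias;
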
* iommu/io-pgtable-arm: Prepare for TTBR1 usage (Robin Murphy, 2020-01-10; 1 file, -6/+19)

    Now that we can correctly extract top-level indices without relying on
    the remaining upper bits being zero, the only remaining impediments to
    using a given table for TTBR1 are the address validation on map/unmap
    and the awkward TCR translation granule format. Add a quirk so that we
    can do the right thing at those points.

    Tested-by: Jordan Crouse <jcrouse@codeaurora.org>
    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Will Deacon <will@kernel.org>

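    A sketch of how the quirk flips the map/unmap address validation (shown
    with the later 32-bit fix above already applied; for a TTBR1 table the
    bits above IAS must be all ones rather than all zeros):

        long iaext = (s64)iova >> cfg->ias;

        if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_TTBR1)
                iaext = ~iaext; /* upper-half IOVAs: all-ones above IAS */
        if (WARN_ON(iaext || paddr >> cfg->oas))
                return -ERANGE;
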
* iommu/io-pgtable-arm: Rationalise VTCR handling (Will Deacon, 2020-01-10; 1 file, -36/+21)

    Commit 05a648cd2dd7 ("iommu/io-pgtable-arm: Rationalise TCR handling")
    reworked the way in which the TCR register value is returned from the
    io-pgtable code when targeting the Arm long-descriptor format, in
    preparation for allowing page-tables to target TTBR1. As it turns out,
    the new interface is a lot nicer to use, so do the same conversion for
    the VTCR register even though there is only a single base register for
    stage-2 translation.

    Cc: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Will Deacon <will@kernel.org>

* iommu/io-pgtable-arm: Rationalise TCR handling (Robin Murphy, 2020-01-10; 1 file, -58/+40)

    Although it's conceptually nice for the io_pgtable_cfg to provide a
    standard VMSA TCR value, the reality is that no VMSA-compliant IOMMU
    looks exactly like an Arm CPU, and they all have various other TCR
    controls which io-pgtable can't be expected to understand. Thus, since
    there is an expectation that drivers will have to add to the given TCR
    value anyway, let's strip it down to just the essentials that are
    directly relevant to io-pgtable's inner workings - namely the various
    sizes and the walk attributes.

    Tested-by: Jordan Crouse <jcrouse@codeaurora.org>
    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    [will: Add missing include of bitfield.h]
    Signed-off-by: Will Deacon <will@kernel.org>

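    A sketch of the stripped-down stage-1 config as it might look after
    this change (bitfield names follow the driver's TCR handling; the exact
    widths and ordering are assumptions):

        struct {
                u64     ttbr;
                struct {
                        u32     ips:3;          /* intermediate phys size */
                        u32     tg:2;           /* translation granule */
                        u32     sh:2;           /* walk shareability */
                        u32     orgn:2;         /* outer walk cacheability */
                        u32     irgn:2;         /* inner walk cacheability */
                        u32     tsz:6;          /* region size: 64 - IAS */
                } tcr;
                u64     mair;
        } arm_lpae_s1_cfg;
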
* iommu/io-pgtable-arm: Ensure ARM_64_LPAE_S2_TCR_RES1 is unsigned (Will Deacon, 2020-01-10; 1 file, -1/+1)

    ARM_64_LPAE_S2_TCR_RES1 is intended to map to bit 31 of the VTCR
    register, which is required to be set to 1 by the architecture.
    Unfortunately, we accidentally treat this as a signed quantity which
    means we also set the upper 32 bits of the VTCR to one, and they are
    required to be zero.

    Treat ARM_64_LPAE_S2_TCR_RES1 as unsigned to avoid the unwanted
    sign-extension up to 64 bits.

    Cc: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Will Deacon <will@kernel.org>

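    The underlying C pitfall, sketched (the constant name is from the
    message; the exact definition is assumed):

        /* before: (1 << 31) is a negative int, so widening it into a
         * 64-bit VTCR value sign-extends and sets bits 63:31 */
        #define ARM_64_LPAE_S2_TCR_RES1         (1 << 31)

        /* after: an unsigned literal keeps it to bit 31 only */
        #define ARM_64_LPAE_S2_TCR_RES1         (1U << 31)
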
* iommu/io-pgtable-arm: Improve attribute handling (Robin Murphy, 2020-01-10; 1 file, -6/+11)

    By VMSA rules, using Normal Non-Cacheable type with a shareability
    attribute of anything other than Outer Shareable is liable to lead
    into unpredictable territory:

    | Overlaying the shareability attribute (B3-1377, ARM DDI 0406C.c)
    |
    | A memory region with a resultant memory type attribute of Normal,
    | and a resultant cacheability attribute of Inner Non-cacheable, Outer
    | Non-cacheable, must have a resultant shareability attribute of Outer
    | Shareable, otherwise shareability is UNPREDICTABLE

    Although the SMMU architectures seem to give some slightly stronger
    guarantees of Non-Cacheable output types becoming implicitly Outer
    Shareable in most cases, we may as well be explicit and not take any
    chances. It's also weird that LPAE attribute handling is currently
    split between prot_to_pte() and init_pte() given that it can all be
    statically determined up-front. Thus, collect *all* the LPAE
    attributes into prot_to_pte() in order to logically pick the
    shareability based on the incoming IOMMU API prot value, and tweak the
    short-descriptor code to stop setting TTBR0.NOS for Non-Cacheable
    walks.

    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Will Deacon <will@kernel.org>

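    The shareability selection once collected into prot_to_pte(), sketched
    (matching the rule quoted above):

        if (prot & IOMMU_CACHE)
                pte |= ARM_LPAE_PTE_SH_IS;      /* cacheable: Inner Shareable */
        else
                pte |= ARM_LPAE_PTE_SH_OS;      /* Normal-NC must be Outer Shareable */
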
* iommu/io-pgtable-arm: Support non-coherent stage-2 page tables (Will Deacon, 2020-01-10; 1 file, -4/+10)

    Commit 9e6ea59f3ff3 ("iommu/io-pgtable: Support non-coherent page
    tables") added support for non-coherent page-table walks to the Arm
    IOMMU page-table backends. Unfortunately, it left the stage-2
    allocator unchanged, so let's hook that up in the same way.

    Cc: Bjorn Andersson <bjorn.andersson@linaro.org>
    Cc: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Will Deacon <will@kernel.org>

* iommu/io-pgtable-arm: Rationalise TTBRn handling (Robin Murphy, 2020-01-10; 1 file, -3/+2)

    TTBR1 values have so far been redundant since no users implement any
    support for split address spaces. Crucially, though, one of the main
    reasons for wanting to do so is to be able to manage each half
    entirely independently, e.g. context-switching one set of mappings
    without disturbing the other. Thus it seems unlikely that tying two
    tables together in a single io_pgtable_cfg would ever be particularly
    desirable or useful.

    Streamline the configs to just a single conceptual TTBR value
    representing the allocated table. This paves the way for future users
    to support split address spaces by simply allocating a table and
    dealing with the detailed TTBRn logistics themselves.

    Tested-by: Jordan Crouse <jcrouse@codeaurora.org>
    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    [will: Drop change to ttbr value]
    Signed-off-by: Will Deacon <will@kernel.org>

* iommu/io-pgtable-arm: Rename IOMMU_QCOM_SYS_CACHE and improve doc (Will Deacon, 2019-11-07; 1 file, -1/+1)

    The 'IOMMU_QCOM_SYS_CACHE' IOMMU protection flag is exposed to all
    users of the IOMMU API. Despite its name, the idea behind it isn't
    especially tied to Qualcomm implementations and could conceivably be
    used by other systems. Rename it to 'IOMMU_SYS_CACHE_ONLY' and update
    the comment to describe a bit better the idea behind it.

    Cc: Robin Murphy <robin.murphy@arm.com>
    Cc: "Isaac J. Manjarres" <isaacm@codeaurora.org>
    Signed-off-by: Will Deacon <will@kernel.org>

* iommu/io-pgtable-arm: Rationalise MAIR handling (Robin Murphy, 2019-11-04; 1 file, -2/+1)

    Between VMSAv8-64 and the various 32-bit formats, there is either one
    64-bit MAIR or a pair of 32-bit MAIR0/MAIR1 or NMRR/PRRR registers. As
    such, keeping two 64-bit values in io_pgtable_cfg has always been
    overkill.

    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Will Deacon <will@kernel.org>

* iommu/io-pgtable-arm: Simplify level indexing (Robin Murphy, 2019-11-04; 1 file, -16/+13)

    The nature of the LPAE format means that data->pg_shift is always
    redundant with data->bits_per_level, since they represent the size of
    a page and the number of PTEs per page respectively, and the size of a
    PTE is constant. Thus it works out more efficient to only store the
    latter, and derive the former via a trivial addition where necessary.

    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    [will: Reworked granule check in iopte_to_paddr()]
    Signed-off-by: Will Deacon <will@kernel.org>

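    The relationship being exploited, sketched as the driver's macros might
    express it (a page holds 2^bits_per_level eight-byte PTEs, so the two
    values differ by the constant log2(sizeof(pte)) = 3):

        /* granule (page) size derived from bits_per_level */
        #define ARM_LPAE_GRANULE(d)                                     \
                (sizeof(arm_lpae_iopte) << (d)->bits_per_level)
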
* iommu/io-pgtable-arm: Simplify PGD size handling (Robin Murphy, 2019-11-04; 1 file, -16/+17)

    We use data->pgd_size directly for the one-off allocation and freeing
    of the top-level table, but otherwise it serves for ARM_LPAE_PGD_IDX()
    to repeatedly re-calculate the effective number of top-level address
    bits it represents. Flip this around so we store the form we most
    commonly need, and derive the lesser-used one instead. This cuts a
    whole bunch of code out of the map/unmap/iova_to_phys fast-paths.

    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Will Deacon <will@kernel.org>

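    A sketch of the inversion (store pgd_bits, derive the size only for the
    one-off alloc/free; the macro shapes are assumed from the description):

        /* lesser-used form, derived on demand */
        #define ARM_LPAE_PGD_SIZE(d)                                    \
                (sizeof(arm_lpae_iopte) << (d)->pgd_bits)

        /* fast-path form, read straight from the stored field */
        #define ARM_LPAE_PGD_IDX(l, d)                                  \
                ((l) == (d)->start_level ?                              \
                        (d)->pgd_bits - (d)->bits_per_level : 0)
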
* iommu/io-pgtable-arm: Simplify start level lookup (Robin Murphy, 2019-11-04; 1 file, -25/+20)

    Beyond a couple of allocation-time calculations, data->levels is only
    ever used to derive the start level. Storing the start level directly
    leads to a small reduction in object code, which should help eke out a
    little more efficiency, and slightly more readable source to boot.

    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Will Deacon <will@kernel.org>

* iommu/io-pgtable-arm: Simplify bounds checks (Robin Murphy, 2019-11-04; 1 file, -3/+2)

    We're merely checking that the relevant upper bits of each address are
    all zero, so there are cheaper ways to achieve that.

    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Will Deacon <will@kernel.org>

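    The cheaper form, sketched (both tests reject any set bit above IAS;
    the shift avoids materialising a 64-bit mask):

        /* before: build the valid range's limit and compare */
        if (WARN_ON(iova >= (1ULL << data->iop.cfg.ias)))
                return -ERANGE;

        /* after: a single right shift leaves exactly the offending bits */
        if (WARN_ON(iova >> data->iop.cfg.ias))
                return -ERANGE;
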
* iommu/io-pgtable-arm: Rationalise size check (Robin Murphy, 2019-11-04; 1 file, -1/+9)

    It makes little sense to only validate the requested size after we
    think we've found a matching block size - making the check up-front is
    simple, and far more logical than waiting to walk off the bottom of
    the table to infer that we must have been passed a bogus size to start
    with. We're missing an equivalent check on the unmap path, so add that
    as well for consistency.

    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Will Deacon <will@kernel.org>

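    The up-front validation, sketched (the size must be exactly one of the
    supported page/block sizes; map rejects with an error, unmap with 0):

        if (WARN_ON(!size || (size & cfg->pgsize_bitmap) != size))
                return -EINVAL;
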
* iommu/io-pgtable: Make selftest gubbins consistently __init (Robin Murphy, 2019-11-04; 1 file, -6/+7)

    The selftests run as an initcall, but the annotation of the various
    callbacks and data seems to be somewhat arbitrary. Add it consistently
    for everything related to the selftests.

    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Will Deacon <will@kernel.org>

* Merge branch 'for-joerg/arm-smmu/fixes' into for-joerg/arm-smmu/updates (Will Deacon, 2019-11-04; 1 file, -13/+45)

    Merge in ARM SMMU fixes to avoid conflicts in the ARM io-pgtable code.

    * for-joerg/arm-smmu/fixes:
      iommu/io-pgtable-arm: Support all Mali configurations
      iommu/io-pgtable-arm: Correct Mali attributes
      iommu/arm-smmu: Free context bitmap in the err path of arm_smmu_init_domain_context

| * iommu/io-pgtable-arm: Support all Mali configurations (Robin Murphy, 2019-10-01; 1 file, -1/+6)

    In principle, Midgard GPUs supporting smaller VA sizes should only
    require 3-level pagetables, since level 0 only resolves bits 48:40 of
    the address. However, the kbase driver does not appear to have any
    notion of a variable start level, and empirically T720 and T820
    rapidly blow up with translation faults unless given a full 4-level
    table, despite only supporting a 33-bit VA size.

    The 'real' IAS value is still valuable in terms of validating
    addresses on map/unmap, so tweak the allocator to allow smaller values
    while still forcing the resultant tables to the full 4 levels. As far
    as I can test, this should make all known Midgard variants happy.

    Fixes: d08d42de6432 ("iommu: io-pgtable: Add ARM Mali midgard MMU page table format")
    Tested-by: Neil Armstrong <narmstrong@baylibre.com>
    Reviewed-by: Steven Price <steven.price@arm.com>
    Reviewed-by: Rob Herring <robh@kernel.org>
    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Will Deacon <will@kernel.org>

| * iommu/io-pgtable-arm: Correct Mali attributes (Robin Murphy, 2019-10-01; 1 file, -13/+40)

    Whilst Midgard's MEMATTR follows a similar principle to the VMSA MAIR,
    the actual attribute values differ, so although it currently appears
    to work to some degree, we probably shouldn't be using our standard
    stage 1 MAIR for that. Instead, generate a reasonable MEMATTR with
    attribute values borrowed from the kbase driver; at this point we'll
    be overriding or ignoring pretty much all of the LPAE config, so just
    implement these Mali details in a dedicated allocator instead of
    pretending to subclass the standard VMSA format.

    Fixes: d08d42de6432 ("iommu: io-pgtable: Add ARM Mali midgard MMU page table format")
    Tested-by: Neil Armstrong <narmstrong@baylibre.com>
    Reviewed-by: Steven Price <steven.price@arm.com>
    Reviewed-by: Rob Herring <robh@kernel.org>
    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Will Deacon <will@kernel.org>

* | iommu/io-pgtable: Move some initialization data to .init.rodata (Christophe JAILLET, 2019-10-01; 1 file, -3/+3)

    The memory used by '__init' functions can be freed once the
    initialization phase has been performed.

    Mark some 'static const' arrays defined and used within some '__init'
    functions as '__initconst', so that the corresponding data can also be
    discarded.

    Without '__initconst', the data are put in the .rodata section. With
    the qualifier, they are put in the .init.rodata section.

    With gcc 8.3.0, the following changes have been measured:

    Without '__initconst':
       section        size
       .rodata        00000720
       .init.rodata   00000018

    With '__initconst':
       section        size
       .rodata        00000660
       .init.rodata   00000058

    Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
    Signed-off-by: Will Deacon <will@kernel.org>

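    A sketch of the pattern inside the io-pgtable selftest (an __init
    function whose lookup table can now be discarded with the rest of init
    memory):

        static int __init arm_lpae_do_selftests(void)
        {
                /* __initconst moves this into .init.rodata, freed after boot */
                static const enum io_pgtable_fmt fmts[] __initconst = {
                        ARM_64_LPAE_S1,
                        ARM_64_LPAE_S2,
                };
                /* ... run the selftests for each format ... */
                return 0;
        }
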
* iommu/io-pgtable: Pass struct iommu_iotlb_gather to ->tlb_add_page() (Will Deacon, 2019-07-29; 1 file, -8/+14)

    With all the pieces in place, we can finally propagate the
    iommu_iotlb_gather structure from the call to unmap() down to the
    IOMMU drivers' implementation of ->tlb_add_page(). Currently everybody
    ignores it, but the machinery is now there to defer invalidation.

    Signed-off-by: Will Deacon <will@kernel.org>

* iommu/io-pgtable: Pass struct iommu_iotlb_gather to ->unmap() (Will Deacon, 2019-07-29; 1 file, -4/+3)

    Update the io-pgtable ->unmap() function to take an iommu_iotlb_gather
    pointer as an argument, and update the callers as appropriate.

    Signed-off-by: Will Deacon <will@kernel.org>

* iommu/io-pgtable: Remove unused ->tlb_sync() callback (Will Deacon, 2019-07-29; 1 file, -6/+0)

    The ->tlb_sync() callback is no longer used, so it can be removed.

    Signed-off-by: Will Deacon <will@kernel.org>

* iommu/io-pgtable: Replace ->tlb_add_flush() with ->tlb_add_page() (Will Deacon, 2019-07-29; 1 file, -6/+5)

    The ->tlb_add_flush() callback in the io-pgtable API now looks a bit
    silly:

      - It takes a size and a granule, which are always the same
      - It takes a 'bool leaf', which is always true
      - It only ever flushes a single page

    With that in mind, replace it with an optional ->tlb_add_page()
    callback that drops the useless parameters.

    Signed-off-by: Will Deacon <will@kernel.org>

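    A sketch of the flush-ops interface as it looks once this series lands
    (member order and the gather argument are assumed from the surrounding
    commits):

        struct iommu_flush_ops {
                void (*tlb_flush_all)(void *cookie);
                void (*tlb_flush_walk)(unsigned long iova, size_t size,
                                       size_t granule, void *cookie);
                void (*tlb_flush_leaf)(unsigned long iova, size_t size,
                                       size_t granule, void *cookie);
                /* optional: queue a single leaf invalidation for later sync */
                void (*tlb_add_page)(struct iommu_iotlb_gather *gather,
                                     unsigned long iova, size_t granule,
                                     void *cookie);
        };
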
* iommu/io-pgtable-arm: Call ->tlb_flush_walk() and ->tlb_flush_leaf() (Will Deacon, 2019-07-29; 1 file, -5/+12)

    Now that all IOMMU drivers using the io-pgtable API implement the
    ->tlb_flush_walk() and ->tlb_flush_leaf() callbacks, we can use them
    in the io-pgtable code instead of ->tlb_add_flush() immediately
    followed by ->tlb_sync().

    Signed-off-by: Will Deacon <will@kernel.org>

* iommu/io-pgtable: Rename iommu_gather_ops to iommu_flush_ops (Will Deacon, 2019-07-24; 1 file, -1/+1)

    In preparation for TLB flush gathering in the IOMMU API, rename the
    iommu_gather_ops structure in io-pgtable to iommu_flush_ops, which
    better describes its purpose and avoids the potential for confusion
    between different levels of the API.

    $ find linux/ -type f -name '*.[ch]' | xargs sed -i 's/gather_ops/flush_ops/g'

    Signed-off-by: Will Deacon <will@kernel.org>

* iommu/io-pgtable-arm: Remove redundant call to io_pgtable_tlb_sync() (Will Deacon, 2019-07-24; 1 file, -1/+0)

    Commit b6b65ca20bc9 ("iommu/io-pgtable-arm: Add support for non-strict
    mode") added an unconditional call to io_pgtable_tlb_sync()
    immediately after the case where we replace a block entry with a table
    entry during an unmap() call. This is redundant, since the IOMMU API
    will call iommu_tlb_sync() on this path and the patch in question
    mentions this:

    | To save having to reason about it too much, make sure the invalidation
    | in arm_lpae_split_blk_unmap() just performs its own unconditional sync
    | to minimise the window in which we're technically violating the break-
    | before-make requirement on a live mapping. This might work out redundant
    | with an outer-level sync for strict unmaps, but we'll never be splitting
    | blocks on a DMA fastpath anyway.

    However, this sync gets in the way of deferred TLB invalidation for
    leaf entries and is at best a questionable, unproven hack. Remove it.

    Signed-off-by: Will Deacon <will@kernel.org>

* Merge branch 'for-joerg/arm-smmu/updates' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux into arm/smmu (Joerg Roedel, 2019-07-01; 1 file, -15/+25)

| * iommu/io-pgtable: Support non-coherent page tables (Bjorn Andersson, 2019-06-25; 1 file, -3/+9)

    Describe the memory related to page table walks as non-cacheable for
    iommu instances that are not DMA coherent.

    Signed-off-by: Bjorn Andersson <bjorn.andersson@linaro.org>
    [will: Use cfg->coherent_walk, fix arm-v7s, ensure outer-shareable for NC]
    Signed-off-by: Will Deacon <will@kernel.org>

| * iommu/io-pgtable: Replace IO_PGTABLE_QUIRK_NO_DMA with specific flag (Will Deacon, 2019-06-25; 1 file, -11/+8)

    IO_PGTABLE_QUIRK_NO_DMA is a bit of a misnomer, since it's really just
    an indication of whether or not the page-table walker for the IOMMU is
    coherent with the CPU caches. Since cache coherency is more than just
    a quirk, replace the flag with its own field in the io_pgtable_cfg
    structure.

    Cc: Bjorn Andersson <bjorn.andersson@linaro.org>
    Signed-off-by: Will Deacon <will@kernel.org>

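    A sketch of the new field's home (neighbouring members elided and
    assumed):

        struct io_pgtable_cfg {
                unsigned long   quirks;
                /* ... sizes, TLB callbacks ... */
                bool            coherent_walk;  /* walker snoops CPU caches */
                /* ... format-specific config unions ... */
        };
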
| * iommu/io-pgtable-arm: Add support to use system cache (Vivek Gautam, 2019-06-18; 1 file, -1/+8)

    A few Qualcomm platforms, such as sdm845, have an additional outer
    cache called the system cache, aka last-level cache (LLC), that allows
    non-coherent devices to upgrade to using caching. This cache sits
    right before the DDR and is tightly coupled with the memory
    controller. The clients using this cache request their slices from it,
    make them active, and can then start using it.

    There is a fundamental assumption that non-coherent devices can't
    access caches. This change adds an exception where they *can* use some
    level of cache despite still being non-coherent overall. The coherent
    devices that use cacheable memory, and the CPU, make use of this
    system cache by default.

    Looking at memory types, we have the following:

    a) Normal uncached:   MAIR 0x44, inner non-cacheable,
                          outer non-cacheable;
    b) Normal cached:     MAIR 0xff, inner read write-back non-transient,
                          outer read write-back non-transient;
                          attribute setting for coherent I/O devices.

    And for non-coherent I/O devices that can allocate in the system
    cache, another type gets added:

    c) Normal sys-cached: MAIR 0xf4, inner non-cacheable,
                          outer read write-back non-transient.

    Coherent I/O devices use the system cache by marking the memory as
    normal cached. Non-coherent I/O devices should mark the memory as
    normal sys-cached in page tables to use the system cache.

    Acked-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Vivek Gautam <vivek.gautam@codeaurora.org>
    Signed-off-by: Will Deacon <will.deacon@arm.com>

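    The three attribute encodings above, sketched as MAIR attribute bytes
    (the macro names are illustrative, not the driver's):

        #define MAIR_ATTR_NORMAL_NC     0x44    /* inner NC, outer NC */
        #define MAIR_ATTR_NORMAL_WB     0xff    /* inner WB-RWA, outer WB-RWA */
        #define MAIR_ATTR_SYS_CACHED    0xf4    /* inner NC, outer WB-RWA */
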
* | treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 234 (Thomas Gleixner, 2019-06-19; 1 file, -12/+1)

    Based on 1 normalized pattern(s):

      this program is free software you can redistribute it and or modify
      it under the terms of the gnu general public license version 2 as
      published by the free software foundation this program is
      distributed in the hope that it will be useful but without any
      warranty without even the implied warranty of merchantability or
      fitness for a particular purpose see the gnu general public license
      for more details you should have received a copy of the gnu general
      public license along with this program if not see http www gnu org
      licenses

    Extracted by the scancode license scanner, the SPDX license identifier

      GPL-2.0-only

    has been chosen to replace the boilerplate/reference in 503 file(s).

    Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
    Reviewed-by: Alexios Zavras <alexios.zavras@intel.com>
    Reviewed-by: Allison Randal <allison@lohutok.net>
    Reviewed-by: Enrico Weigelt <info@metux.net>
    Cc: linux-spdx@vger.kernel.org
    Link: https://lkml.kernel.org/r/20190602204653.811534538@linutronix.de
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

* iommu: io-pgtable: Add ARM Mali midgard MMU page table format (Rob Herring, 2019-04-12; 1 file, -22/+69)

    The ARM Mali midgard GPU uses page tables similar to standard 64-bit
    stage 1 page tables, but with a few differences. Add a new format type
    to represent it. The input address size is 48 bits and the output
    address size is 40 bits (and possibly less?). Note that the later
    Bifrost GPUs follow the standard 64-bit stage 1 format.

    The differences compared to the 64-bit stage 1 format are:

      - The level-3 page entry type bits are 0x1 instead of 0x3.
      - The access flags are not read-only and unprivileged, but read and
        write. This is similar to stage 2 entries, but the memory
        attributes field matches stage 1, being an index.
      - The nG bit is not set by the vendor driver. This one didn't seem
        to matter, but we'll keep it aligned with the vendor driver.

    Cc: Will Deacon <will.deacon@arm.com>
    Acked-by: Robin Murphy <robin.murphy@arm.com>
    Cc: linux-arm-kernel@lists.infradead.org
    Cc: iommu@lists.linux-foundation.org
    Acked-by: Alyssa Rosenzweig <alyssa@rosenzweig.io>
    Acked-by: Joerg Roedel <jroedel@suse.de>
    Signed-off-by: Rob Herring <robh@kernel.org>
    Link: https://patchwork.freedesktop.org/patch/msgid/20190409205427.6943-2-robh@kernel.org

* iommu: Allow io-pgtable to be used outside of drivers/iommu/ (Rob Herring, 2019-02-11; 1 file, -2/+1)

    Move io-pgtable.h to include/linux/ and export alloc_io_pgtable_ops
    and free_io_pgtable_ops. This enables drivers outside drivers/iommu/
    to use the page table library. Specifically, some ARM Mali GPUs use
    the ARM page table formats.

    Cc: Will Deacon <will.deacon@arm.com>
    Cc: Robin Murphy <robin.murphy@arm.com>
    Cc: Joerg Roedel <joro@8bytes.org>
    Cc: Matthias Brugger <matthias.bgg@gmail.com>
    Cc: Rob Clark <robdclark@gmail.com>
    Cc: linux-arm-kernel@lists.infradead.org
    Cc: iommu@lists.linux-foundation.org
    Cc: linux-mediatek@lists.infradead.org
    Cc: linux-arm-msm@vger.kernel.org
    Signed-off-by: Rob Herring <robh@kernel.org>
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

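    How an out-of-drivers/iommu user might consume the now-exported API (a
    minimal sketch; the cfg fields shown are a subset, and my_flush_ops,
    my_cookie and dev are caller-provided assumptions):

        #include <linux/io-pgtable.h>

        struct io_pgtable_ops *ops;
        struct io_pgtable_cfg cfg = {
                .pgsize_bitmap  = SZ_4K | SZ_2M | SZ_1G,
                .ias            = 48,
                .oas            = 40,
                .tlb            = &my_flush_ops,        /* TLB callbacks */
                .iommu_dev      = dev,
        };

        ops = alloc_io_pgtable_ops(ARM_64_LPAE_S1, &cfg, my_cookie);
        if (!ops)
                return -ENOMEM;
        /* ... ops->map() / ops->unmap() as needed ... */
        free_io_pgtable_ops(ops);
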
* iommu/io-pgtable-arm: Add support for non-strict mode (Zhen Lei, 2018-10-01; 1 file, -2/+12)

    Non-strict mode is simply a case of skipping 'regular' leaf TLBIs,
    since the sync is already factored out into ops->iotlb_sync at the
    core API level. Non-leaf invalidations where we change the page table
    structure itself still have to be issued synchronously in order to
    maintain walk caches correctly.

    To save having to reason about it too much, make sure the invalidation
    in arm_lpae_split_blk_unmap() just performs its own unconditional sync
    to minimise the window in which we're technically violating the
    break-before-make requirement on a live mapping. This might work out
    redundant with an outer-level sync for strict unmaps, but we'll never
    be splitting blocks on a DMA fastpath anyway.

    Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
    [rm: tweak comment, commit message, split_blk_unmap logic and barriers]
    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>

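    Roughly how the unmap path reads with this quirk, sketched (the real
    control flow around these calls is more involved):

        if (!iopte_leaf(pte, lvl)) {
                /* table structure changed: flush walk caches right away */
                io_pgtable_tlb_add_flush(iop, iova, size, size, false);
        } else if (!(iop->cfg.quirks & IO_PGTABLE_QUIRK_NON_STRICT)) {
                /* strict mode flushes the leaf now; non-strict defers it */
                io_pgtable_tlb_add_flush(iop, iova, size, size, true);
        }
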
* iommu/io-pgtable-arm: Fix race handling in split_blk_unmap() (Robin Murphy, 2018-10-01; 1 file, -5/+4)

    In removing the pagetable-wide lock, we gained the possibility of the
    vanishingly unlikely case where we have a race between two concurrent
    unmappers splitting the same block entry. The logic to handle this is
    fairly straightforward - whoever loses the race frees their partial
    next-level table and instead dereferences the winner's newly-installed
    entry in order to fall back to a regular unmap, which intentionally
    echoes the pre-existing case of recursively splitting a 1GB block down
    to 4KB pages by installing a full table of 2MB blocks first.

    Unfortunately, the chump who implemented that logic failed to update
    the condition check for that fallback, meaning that if said race
    occurs at the last level (where the loser's unmap_idx is valid) then
    the unmap won't actually happen. Fix that to properly account for
    both the race and recursive cases.

    Fixes: 2c3d273eabe8 ("iommu/io-pgtable-arm: Support lockless operation")
    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    [will: re-jig control flow to avoid duplicate cmpxchg test]
    Signed-off-by: Will Deacon <will.deacon@arm.com>

* iommu/io-pgtable-arm: Fix pgtable allocation in selftest (Jean-Philippe Brucker, 2018-07-26; 1 file, -1/+2)

    Commit 4b123757eeaa ("iommu/io-pgtable-arm: Make allocations
    NUMA-aware") added a NUMA hint to page table allocation, but the
    pgtable selftest doesn't provide an SMMU device parameter. Since
    dev_to_node doesn't accept a NULL argument, add a special case for
    selftest.

    Signed-off-by: Jean-Philippe Brucker <jean-philippe.brucker@arm.com>
    Signed-off-by: Will Deacon <will.deacon@arm.com>

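    The shape of the fix, sketched:

        /* the selftest passes a NULL device; fall back to no preference */
        int node = dev ? dev_to_node(dev) : NUMA_NO_NODE;

        p = alloc_pages_node(node, gfp | __GFP_ZERO, order);
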
* iommu/io-pgtable-arm: Make allocations NUMA-aware (Robin Murphy, 2018-05-29; 1 file, -4/+9)

    We would generally expect pagetables to be read by the IOMMU more than
    written by the CPU, so in NUMA systems it makes sense to locate them
    close to the former and avoid cross-node pagetable walks if at all
    possible. As it turns out, we already have a handle on the IOMMU
    device for the sake of coherency management, so it's trivial to grab
    the appropriate NUMA node when allocating new pagetable pages.

    Note that we drop the semantics of alloc_pages_exact(), but that's
    fine since they have never been necessary: the only time we're
    allocating more than one page is for stage 2 top-level concatenation,
    but since that is based on the number of IPA bits, the size is always
    some exact power of two anyway.

    Acked-by: Will Deacon <will.deacon@arm.com>
    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

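    A condensed sketch of the allocator after this change (error handling
    and the DMA-mapping step for non-coherent walkers elided):

        static void *__arm_lpae_alloc_pages(size_t size, gfp_t gfp,
                                            struct io_pgtable_cfg *cfg)
        {
                struct device *dev = cfg->iommu_dev;
                int order = get_order(size);
                struct page *p;

                /* allocate on the node of the IOMMU that walks the table */
                p = alloc_pages_node(dev_to_node(dev), gfp | __GFP_ZERO,
                                     order);
                if (!p)
                        return NULL;
                /* ... dma_map_single() for non-coherent walkers ... */
                return page_address(p);
        }
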
* iommu/io-pgtable-arm: Use for_each_set_bit to simplify code (YueHaibing, 2018-05-03; 1 file, -4/+1)

    We can use for_each_set_bit() to simplify code slightly in the ARM
    io-pgtable self tests while unmapping.

    Signed-off-by: YueHaibing <yuehaibing@huawei.com>
    Signed-off-by: Joerg Roedel <jroedel@suse.de>

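    The general shape of the simplification (variable names illustrative):

        /* before: open-coded scan over the supported page-size bitmap */
        for (i = 0; i < BITS_PER_LONG; i++) {
                if (!(cfg->pgsize_bitmap & (1UL << i)))
                        continue;
                /* unmap at size (1UL << i) ... */
        }

        /* after: let the bitmap iterator do the work */
        for_each_set_bit(i, &cfg->pgsize_bitmap, BITS_PER_LONG) {
                /* unmap at size (1UL << i) ... */
        }
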