path: root/drivers/iommu
Commit message (Author, Date, Files, Lines)
* Merge tag 'iommu-updates-v5.11' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux (Linus Torvalds, 2020-12-16, 29 files, -1193/+1527)

    Pull IOMMU updates from Will Deacon:
     "There's a good mixture of improvements to the core code and driver
      changes across the board. One thing worth pointing out is that this
      includes a quirk to work around behaviour in the i915 driver (see
      65f746e8285f ("iommu: Add quirk for Intel graphic devices in map_sg")),
      which otherwise interacts badly with the conversion of the intel IOMMU
      driver over to the DMA-IOMMU API, but is being fixed properly in the
      DRM tree. We'll revert the quirk later this cycle once we've confirmed
      that things don't fall apart without it.

      Summary:

       - IOVA allocation optimisations and removal of unused code

       - Introduction of DOMAIN_ATTR_IO_PGTABLE_CFG for parameterising the
         page-table of an IOMMU domain

       - Support for changing the default domain type in sysfs

       - Optimisation to the way in which identity-mapped regions are created

       - Driver updates:
         * Arm SMMU updates, including continued work on Shared Virtual Memory
         * Tegra SMMU updates, including support for PCI devices
         * Intel VT-D updates, including conversion to the IOMMU-DMA API

       - Cleanup, kerneldoc and minor refactoring"

    * tag 'iommu-updates-v5.11' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux: (50 commits)
      iommu/amd: Add sanity check for interrupt remapping table length macros
      dma-iommu: remove __iommu_dma_mmap
      iommu/io-pgtable: Remove tlb_flush_leaf
      iommu: Stop exporting free_iova_mem()
      iommu: Stop exporting alloc_iova_mem()
      iommu: Delete split_and_remove_iova()
      iommu/io-pgtable-arm: Remove unused 'level' parameter from iopte_type() macro
      iommu: Defer the early return in arm_(v7s/lpae)_map
      iommu: Improve the performance for direct_mapping
      iommu: avoid taking iova_rbtree_lock twice
      iommu/vt-d: Avoid GFP_ATOMIC where it is not needed
      iommu/vt-d: Remove set but not used variable
      iommu: return error code when it can't get group
      iommu: Fix htmldocs warnings in sysfs-kernel-iommu_groups
      iommu: arm-smmu-impl: Add a space before open parenthesis
      iommu: arm-smmu-impl: Use table to list QCOM implementations
      iommu/arm-smmu: Move non-strict mode to use io_pgtable_domain_attr
      iommu/arm-smmu: Add support for pagetable config domain attribute
      iommu: Document usage of "/sys/kernel/iommu_groups/<grp_id>/type" file
      iommu: Take lock before reading iommu group default domain type
      ...
| * iommu/amd: Add sanity check for interrupt remapping table length macros (Suravee Suthikulpanit, 2020-12-11, 3 files, -13/+14)

    Currently, macros related to the interrupt remapping table length are
    defined separately. This has resulted in an oversight in which one of
    the macros was missed when changing the length. To prevent this,
    redefine the macros to add a built-in sanity check.

    Also, rename the macros to use the name of the DTE[IntTabLen] field as
    specified in the AMD IOMMU specification. There is no functional change.

    Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
    Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
    Signed-off-by: Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
    Cc: Will Deacon <will@kernel.org>
    Cc: Jerry Snitselaar <jsnitsel@redhat.com>
    Cc: Joerg Roedel <joro@8bytes.org>
    Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com>
    Link: https://lore.kernel.org/r/20201210162436.126321-1-suravee.suthikulpanit@amd.com
    Signed-off-by: Will Deacon <will@kernel.org>
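    A minimal sketch of the idea (names follow the DTE[IntTabLen] convention
    described above; the exact values come from the related "Set
    DTE[IntTabLen] to represent 512 IRTEs" fix and are illustrative):

        /* One definition drives both the DTE field encoding and the table
         * size, so the two can no longer drift apart.
         */
        #define DTE_INTTABLEN_VALUE     9ULL
        #define DTE_INTTABLEN           (DTE_INTTABLEN_VALUE << 1)
        #define MAX_IRQS_PER_TABLE      (1 << DTE_INTTABLEN_VALUE) /* 512 IRTEs */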
| * dma-iommu: remove __iommu_dma_mmap (Christoph Hellwig, 2020-12-09, 1 file, -16/+1)

    The function has a single caller, so open code it there and take
    advantage of the precalculated page count variable.

    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Link: https://lore.kernel.org/r/20201209112019.2625029-1-hch@lst.de
    Signed-off-by: Will Deacon <will@kernel.org>
| * iommu/io-pgtable: Remove tlb_flush_leaf (Robin Murphy, 2020-12-08, 8 files, -49/+4)

    The only user of tlb_flush_leaf is a particularly hairy corner of the
    Arm short-descriptor code, which wants a synchronous invalidation to
    minimise the races inherent in trying to split a large page mapping.
    This is already far enough into "here be dragons" territory that no
    sensible caller should ever hit it, and thus it really doesn't need
    optimising. Although using tlb_flush_walk there may technically be more
    heavyweight than needed, it does the job and saves everyone else having
    to carry around useless baggage.

    Signed-off-by: Robin Murphy <robin.murphy@arm.com>
    Reviewed-by: Steven Price <steven.price@arm.com>
    Link: https://lore.kernel.org/r/9844ab0c5cb3da8b2f89c6c2da16941910702b41.1606324115.git.robin.murphy@arm.com
    Signed-off-by: Will Deacon <will@kernel.org>
| * Merge branch 'for-next/iommu/fixes' into for-next/iommu/core (Will Deacon, 2020-12-08, 6 files, -16/+60)

    Merge in IOMMU fixes for 5.10 in order to resolve conflicts against the
    queue for 5.11.

    * for-next/iommu/fixes:
      iommu/amd: Set DTE[IntTabLen] to represent 512 IRTEs
      iommu/vt-d: Don't read VCCAP register unless it exists
      x86/tboot: Don't disable swiotlb when iommu is forced on
      iommu: Check return of __iommu_attach_device()
      arm-smmu-qcom: Ensure the qcom_scm driver has finished probing
      iommu/amd: Enforce 4k mapping for certain IOMMU data structures
      MAINTAINERS: Temporarily add myself to the IOMMU entry
      iommu/vt-d: Fix compile error with CONFIG_PCI_ATS not set
      iommu/vt-d: Avoid panic if iommu init fails in tboot system
      iommu/vt-d: Cure VF irqdomain hickup
      x86/platform/uv: Fix copied UV5 output archtype
      x86/platform/uv: Drop last traces of uv_flush_tlb_others
| * Merge branch 'for-next/iommu/vt-d' into for-next/iommu/core (Will Deacon, 2020-12-08, 3 files, -809/+327)

    Intel VT-D updates for 5.11. The main thing here is converting the code
    over to the iommu-dma API, which required some improvements to the core
    code to preserve existing functionality.

    * for-next/iommu/vt-d:
      iommu/vt-d: Avoid GFP_ATOMIC where it is not needed
      iommu/vt-d: Remove set but not used variable
      iommu/vt-d: Cleanup after converting to dma-iommu ops
      iommu/vt-d: Convert intel iommu driver to the iommu ops
      iommu/vt-d: Update domain geometry in iommu_ops.at(de)tach_dev
      iommu: Add quirk for Intel graphic devices in map_sg
      iommu: Allow the dma-iommu api to use bounce buffers
      iommu: Add iommu_dma_free_cpu_cached_iovas()
      iommu: Handle freelists when using deferred flushing in iommu drivers
      iommu/vt-d: include conditionally on CONFIG_INTEL_IOMMU_SVM
| | * iommu/vt-d: Avoid GFP_ATOMIC where it is not needed (Christophe JAILLET, 2020-12-01, 1 file, -1/+1)

    There is no reason to use GFP_ATOMIC in a 'suspend' function. Use
    GFP_KERNEL instead to give more opportunities to allocate the requested
    memory.

    Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
    Link: https://lore.kernel.org/r/20201030182630.5154-1-christophe.jaillet@wanadoo.fr
    Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
    Link: https://lore.kernel.org/r/20201201013149.2466272-2-baolu.lu@linux.intel.com
    Signed-off-by: Will Deacon <will@kernel.org>
| | * iommu/vt-d: Remove set but not used variable (Lu Baolu, 2020-11-27, 1 file, -2/+1)

    Fixes gcc '-Wunused-but-set-variable' warning:

      drivers/iommu/intel/iommu.c:5643:27: warning: variable 'last_pfn' set but not used [-Wunused-but-set-variable]
       5643 |         unsigned long start_pfn, last_pfn;
            |                                  ^~~~~~~~

    This variable is never used, so remove it.

    Fixes: 2a2b8eaa5b25 ("iommu: Handle freelists when using deferred flushing in iommu drivers")
    Reported-by: kernel test robot <lkp@intel.com>
    Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
    Link: https://lore.kernel.org/r/20201127013308.1833610-1-baolu.lu@linux.intel.com
    Signed-off-by: Will Deacon <will@kernel.org>
| | * iommu/vt-d: Cleanup after converting to dma-iommu ops (Lu Baolu, 2020-11-25, 1 file, -62/+28)

    Some cleanups after converting the driver to use dma-iommu ops.

    - Remove the nobounce option;
    - Cleanup and simplify the path in domain mapping.

    Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
    Tested-by: Logan Gunthorpe <logang@deltatee.com>
    Link: https://lore.kernel.org/r/20201124082057.2614359-8-baolu.lu@linux.intel.com
    Signed-off-by: Will Deacon <will@kernel.org>
| | * iommu/vt-d: Convert intel iommu driver to the iommu ops (Tom Murphy, 2020-11-25, 2 files, -703/+43)

    Convert the intel iommu driver to the dma-iommu api. Remove the iova
    handling and reserve region code from the intel iommu driver.

    Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
    Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
    Tested-by: Logan Gunthorpe <logang@deltatee.com>
    Link: https://lore.kernel.org/r/20201124082057.2614359-7-baolu.lu@linux.intel.com
    Signed-off-by: Will Deacon <will@kernel.org>
| | * iommu/vt-d: Update domain geometry in iommu_ops.at(de)tach_dev (Lu Baolu, 2020-11-25, 1 file, -2/+14)

    The iommu-dma layer constrains IOVA allocation based on the domain
    geometry that the driver reports. Update the domain geometry every time
    a domain is attached to or detached from a device.

    Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
    Tested-by: Logan Gunthorpe <logang@deltatee.com>
    Link: https://lore.kernel.org/r/20201124082057.2614359-6-baolu.lu@linux.intel.com
    Signed-off-by: Will Deacon <will@kernel.org>
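    A minimal sketch of attach-time geometry reporting in the VT-d driver
    (the geometry fields are the real struct iommu_domain members; the
    __DOMAIN_MAX_ADDR() helper and gaw field are VT-d internals, shown for
    illustration with the existing attach logic elided):

        static int intel_iommu_attach_device(struct iommu_domain *domain,
                                             struct device *dev)
        {
                struct dmar_domain *dmar_domain = to_dmar_domain(domain);

                /* ... existing attach logic ... */

                /* Tell iommu-dma which IOVA range this domain can translate */
                domain->geometry.aperture_start = 0;
                domain->geometry.aperture_end   = __DOMAIN_MAX_ADDR(dmar_domain->gaw);
                domain->geometry.force_aperture = true;

                return 0;
        }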
| | * iommu: Add quirk for Intel graphic devices in map_sg (Lu Baolu, 2020-11-25, 1 file, -0/+27)

    Combining the sg segments exposes a bug in the Intel i915 driver which
    causes visual artifacts and the screen to freeze. This is most likely
    because of how the i915 handles the returned list. It probably doesn't
    respect the returned value specifying the number of elements in the list
    and instead depends on the previous behaviour of the Intel iommu driver
    which would return the same number of elements in the output list as in
    the input list.

    [ This has been fixed in the i915 tree, but we agreed to carry this fix
      temporarily in the iommu tree and revert it before 5.11 is released:

      https://lore.kernel.org/linux-iommu/20201103105442.GD22888@8bytes.org/

      -- Will ]

    Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
    Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
    Tested-by: Logan Gunthorpe <logang@deltatee.com>
    Link: https://lore.kernel.org/r/20201124082057.2614359-5-baolu.lu@linux.intel.com
    Signed-off-by: Will Deacon <will@kernel.org>
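    A sketch of the kind of check such a quirk relies on (the helper name
    dev_is_intel_gfx() is hypothetical; the PCI constants are real). The
    idea is to detect Intel display-class devices and skip segment
    combining for them:

        #include <linux/pci.h>

        /* Hypothetical helper: is this an Intel display-class PCI device? */
        static bool dev_is_intel_gfx(struct device *dev)
        {
                struct pci_dev *pdev;

                if (!dev_is_pci(dev))
                        return false;

                pdev = to_pci_dev(dev);
                return pdev->vendor == PCI_VENDOR_ID_INTEL &&
                       (pdev->class >> 16) == PCI_BASE_CLASS_DISPLAY;
        }

    When this returns true, map_sg would hand back one output segment per
    input segment instead of merging physically contiguous ranges.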
| | * iommu: Allow the dma-iommu api to use bounce buffers (Tom Murphy, 2020-11-25, 1 file, -13/+149)

    Allow the dma-iommu api to use bounce buffers for untrusted devices.
    This is a copy of the intel bounce buffer code.

    Co-developed-by: Lu Baolu <baolu.lu@linux.intel.com>
    Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
    Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
    Tested-by: Logan Gunthorpe <logang@deltatee.com>
    Link: https://lore.kernel.org/r/20201124082057.2614359-4-baolu.lu@linux.intel.com
    Signed-off-by: Will Deacon <will@kernel.org>
| | * iommu: Add iommu_dma_free_cpu_cached_iovas() (Tom Murphy, 2020-11-25, 1 file, -0/+9)

    Add an iommu_dma_free_cpu_cached_iovas() function to allow drivers
    which use the dma-iommu ops to free cached cpu iovas.

    Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
    Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
    Tested-by: Logan Gunthorpe <logang@deltatee.com>
    Link: https://lore.kernel.org/r/20201124082057.2614359-3-baolu.lu@linux.intel.com
    Signed-off-by: Will Deacon <will@kernel.org>
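    The function is plausibly just a thin wrapper over the existing per-CPU
    rcache helper in the IOVA allocator; a sketch, assuming the dma-iommu
    cookie layout of this series:

        void iommu_dma_free_cpu_cached_iovas(unsigned int cpu,
                                             struct iommu_domain *domain)
        {
                struct iommu_dma_cookie *cookie = domain->iova_cookie;
                struct iova_domain *iovad = &cookie->iovad;

                /* Drop this CPU's cached IOVA ranges back to the rbtree */
                free_cpu_cached_iovas(cpu, iovad);
        }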
| | * iommu: Handle freelists when using deferred flushing in iommu drivers (Tom Murphy, 2020-11-25, 2 files, -27/+57)

    Allow iommu_unmap_fast() to return newly freed page table pages, and
    pass the freelist to queue_iova in the dma-iommu ops path. This is
    useful for iommu drivers (in this case the intel iommu driver) which
    need to wait for the ioTLB to be flushed before newly freed/unmapped
    page table pages can be freed. This way we can still batch ioTLB free
    operations and handle the freelists.

    Signed-off-by: Tom Murphy <murphyt7@tcd.ie>
    Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
    Tested-by: Logan Gunthorpe <logang@deltatee.com>
    Link: https://lore.kernel.org/r/20201124082057.2614359-2-baolu.lu@linux.intel.com
    Signed-off-by: Will Deacon <will@kernel.org>
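    A sketch of the unmap path this describes (assuming an
    iommu_iotlb_gather with a freelist member, as added by this series):

        static void __iommu_dma_unmap(struct device *dev, dma_addr_t dma_addr,
                                      size_t size)
        {
                struct iommu_domain *domain = iommu_get_dma_domain(dev);
                struct iommu_dma_cookie *cookie = domain->iova_cookie;
                struct iova_domain *iovad = &cookie->iovad;
                struct iommu_iotlb_gather iotlb_gather;

                iommu_iotlb_gather_init(&iotlb_gather);
                iommu_unmap_fast(domain, dma_addr, size, &iotlb_gather);

                /* Defer the IOTLB flush; the freed page-table pages travel
                 * with the IOVA and are only released once the flush runs.
                 */
                queue_iova(iovad, iova_pfn(iovad, dma_addr),
                           size >> iova_shift(iovad),
                           (unsigned long)iotlb_gather.freelist);
        }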
| | * Merge branch 'stable/for-linus-5.10-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb into for-next/iommu/vt-d (Will Deacon, 2020-11-23, 1 file, -3/+2)

    Merge swiotlb updates from Konrad, as we depend on the updated function
    prototype for swiotlb_tbl_map_single(), which dropped the 'tbl_dma_addr'
    argument in -rc4.

    * 'stable/for-linus-5.10-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/swiotlb:
      swiotlb: remove the tbl_dma_addr argument to swiotlb_tbl_map_single
      swiotlb: fix "x86: Don't panic if can not alloc buffer for swiotlb"
| | * iommu/vt-d: include conditionally on CONFIG_INTEL_IOMMU_SVM (Lukas Bulwahn, 2020-11-17, 1 file, -1/+1)

    Commit 6ee1b77ba3ac ("iommu/vt-d: Add svm/sva invalidate function")
    introduced intel_iommu_sva_invalidate() when CONFIG_INTEL_IOMMU_SVM.
    This function uses the dedicated static variable inv_type_granu_table
    and functions to_vtd_granularity() and to_vtd_size().

    These parts are unused when !CONFIG_INTEL_IOMMU_SVM, and hence,
    make CC=clang W=1 warns with an -Wunused-function warning.

    Include these parts conditionally on CONFIG_INTEL_IOMMU_SVM.

    Fixes: 6ee1b77ba3ac ("iommu/vt-d: Add svm/sva invalidate function")
    Signed-off-by: Lukas Bulwahn <lukas.bulwahn@gmail.com>
    Acked-by: Lu Baolu <baolu.lu@linux.intel.com>
    Link: https://lore.kernel.org/r/20201115205951.20698-1-lukas.bulwahn@gmail.com
    Signed-off-by: Will Deacon <will@kernel.org>
| * Merge branch 'for-next/iommu/tegra-smmu' into for-next/iommu/core (Will Deacon, 2020-12-08, 1 file, -152/+88)

    Tegra SMMU updates for 5.11: a complete redesign of the probing logic,
    support for PCI devices and cleanup work.

    * for-next/iommu/tegra-smmu:
      iommu/tegra-smmu: Add PCI support
      iommu/tegra-smmu: Rework tegra_smmu_probe_device()
      iommu/tegra-smmu: Use fwspec in tegra_smmu_(de)attach_dev
      iommu/tegra-smmu: Expand mutex protection range
      iommu/tegra-smmu: Unwrap tegra_smmu_group_get
| | * iommu/tegra-smmu: Add PCI support (Nicolin Chen, 2020-11-25, 1 file, -10/+25)

    This patch simply adds support for PCI devices.

    Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com>
    Tested-by: Dmitry Osipenko <digetx@gmail.com>
    Reviewed-by: Dmitry Osipenko <digetx@gmail.com>
    Acked-by: Thierry Reding <treding@nvidia.com>
    Link: https://lore.kernel.org/r/20201125101013.14953-6-nicoleotsuka@gmail.com
    Signed-off-by: Will Deacon <will@kernel.org>
| | * iommu/tegra-smmu: Rework tegra_smmu_probe_device() (Nicolin Chen, 2020-11-25, 1 file, -81/+15)

    The bus_set_iommu() in tegra_smmu_probe() enumerates all clients to call
    in tegra_smmu_probe_device(), where each client searches its DT node for
    the smmu pointer and swgroup ID so as to configure an fwspec. But this
    requires a valid smmu pointer even before the mc and smmu drivers are
    probed. So in tegra_smmu_probe() we added a line of code to fill
    mc->smmu, marking it "a bit of a hack".

    This works for most of the clients in the DTB; however, it doesn't work
    for a client that doesn't exist in the DTB, a PCI device for example.

    Actually, if we return ERR_PTR(-ENODEV) in ->probe_device() when it's
    called from bus_set_iommu(), the iommu core will let everything carry
    on. Then, when a client gets probed, of_iommu_configure() in the iommu
    core will search the DTB for the swgroup ID and call ->of_xlate() to
    prepare an fwspec, similar to tegra_smmu_probe_device() and
    tegra_smmu_configure(). Then it'll call tegra_smmu_probe_device() again,
    and this time we shall return the smmu->iommu pointer properly.

    So we can get rid of tegra_smmu_find() and tegra_smmu_configure() along
    with the DT polling code by letting the iommu core handle everything,
    except for one problem: we search the iommus property in the DTB not
    only for the swgroup ID but also for the mc node, in order to get the
    mc->smmu pointer to call dev_iommu_priv_set() and return the smmu->iommu
    pointer. So we'll need another way to get the smmu pointer.

    Referencing the implementation of the sun50i-iommu driver, of_xlate()
    has the client's dev pointer, the mc node and the swgroup ID. This means
    that we can call dev_iommu_priv_set() in of_xlate() instead, so we can
    simply get the smmu pointer in ->probe_device().

    This patch reworks tegra_smmu_probe_device() by:
    1) Removing the mc->smmu hack in tegra_smmu_probe() so as to return
       ERR_PTR(-ENODEV) in tegra_smmu_probe_device() during the
       tegra_smmu_probe()/tegra_mc_probe() stage.
    2) Moving dev_iommu_priv_set() to of_xlate() so we can get the smmu
       pointer in tegra_smmu_probe_device() to replace the DTB polling.
    3) Removing tegra_smmu_configure() accordingly, since the iommu core
       takes care of it.

    This also fixes a problem where previously we could add clients to iommu
    groups before the iommu core initialized its default domain:

      ubuntu@jetson:~$ dmesg | grep iommu
      platform 50000000.host1x: Adding to iommu group 1
      platform 57000000.gpu: Adding to iommu group 2
      iommu: Default domain type: Translated
      platform 54200000.dc: Adding to iommu group 3
      platform 54240000.dc: Adding to iommu group 3
      platform 54340000.vic: Adding to iommu group 4

    Though this works fine with IOMMU_DOMAIN_UNMANAGED, it will produce
    warnings if switching to IOMMU_DOMAIN_DMA:

      iommu: Failed to allocate default IOMMU domain of type 0 for group (null) - Falling back to IOMMU_DOMAIN_DMA
      iommu: Failed to allocate default IOMMU domain of type 0 for group (null) - Falling back to IOMMU_DOMAIN_DMA

    Now, bypassing the first probe_device() call from bus_set_iommu() fixes
    the sequence:

      ubuntu@jetson:~$ dmesg | grep iommu
      iommu: Default domain type: Translated
      tegra-host1x 50000000.host1x: Adding to iommu group 0
      tegra-dc 54200000.dc: Adding to iommu group 1
      tegra-dc 54240000.dc: Adding to iommu group 1
      tegra-vic 54340000.vic: Adding to iommu group 2
      nouveau 57000000.gpu: Adding to iommu group 3

    Note that the dmesg log above is from testing with
    IOMMU_DOMAIN_UNMANAGED.

    Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com>
    Tested-by: Dmitry Osipenko <digetx@gmail.com>
    Reviewed-by: Dmitry Osipenko <digetx@gmail.com>
    Acked-by: Thierry Reding <treding@nvidia.com>
    Link: https://lore.kernel.org/r/20201125101013.14953-5-nicoleotsuka@gmail.com
    Signed-off-by: Will Deacon <will@kernel.org>
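    A sketch of the ->of_xlate() approach described above (close to the
    sun50i-iommu pattern the commit references; error handling trimmed):

        static int tegra_smmu_of_xlate(struct device *dev,
                                       struct of_phandle_args *args)
        {
                struct platform_device *iommu_pdev =
                        of_find_device_by_node(args->np);
                struct tegra_mc *mc = platform_get_drvdata(iommu_pdev);
                u32 id = args->args[0];

                /* Stash the SMMU so ->probe_device() can simply read it
                 * back instead of re-walking the DT.
                 */
                dev_iommu_priv_set(dev, mc->smmu);

                return iommu_fwspec_add_ids(dev, &id, 1);
        }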
| | * iommu/tegra-smmu: Use fwspec in tegra_smmu_(de)attach_dev (Nicolin Chen, 2020-11-25, 1 file, -33/+23)

    In the tegra_smmu_(de)attach_dev() functions, we poll the DTB for each
    client's iommus property to get the swgroup ID in order to prepare the
    "as" and enable the smmu. Actually tegra_smmu_configure() has already
    prepared an fwspec for each client and added to the fwspec all the
    swgroup IDs of the client's DT node.

    So this patch uses the fwspec in tegra_smmu_(de)attach_dev() to replace
    the redundant DT polling code.

    Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com>
    Tested-by: Dmitry Osipenko <digetx@gmail.com>
    Reviewed-by: Dmitry Osipenko <digetx@gmail.com>
    Acked-by: Thierry Reding <treding@nvidia.com>
    Link: https://lore.kernel.org/r/20201125101013.14953-4-nicoleotsuka@gmail.com
    Signed-off-by: Will Deacon <will@kernel.org>
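    A sketch of the fwspec-based attach loop (to_smmu_as() and the
    prepare/enable helpers are tegra-smmu internals; error unwinding is
    omitted for brevity):

        static int tegra_smmu_attach_dev(struct iommu_domain *domain,
                                         struct device *dev)
        {
                struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);
                struct tegra_smmu *smmu = dev_iommu_priv_get(dev);
                struct tegra_smmu_as *as = to_smmu_as(domain);
                unsigned int index;
                int err;

                if (!fwspec)
                        return -ENOENT;

                for (index = 0; index < fwspec->num_ids; index++) {
                        err = tegra_smmu_as_prepare(smmu, as);
                        if (err)
                                return err;

                        /* Each fwspec ID is a swgroup ID from 'iommus' */
                        tegra_smmu_enable(smmu, fwspec->ids[index], as);
                }

                return 0;
        }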
| | * iommu/tegra-smmu: Expand mutex protection range (Nicolin Chen, 2020-11-25, 1 file, -13/+21)

    This is used to protect a potential race condition on use_count, since
    probes of client drivers, which call attach_dev(), may run concurrently.

    Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com>
    Tested-by: Dmitry Osipenko <digetx@gmail.com>
    Reviewed-by: Dmitry Osipenko <digetx@gmail.com>
    Acked-by: Thierry Reding <treding@nvidia.com>
    Link: https://lore.kernel.org/r/20201125101013.14953-3-nicoleotsuka@gmail.com
    Signed-off-by: Will Deacon <will@kernel.org>
| | * iommu/tegra-smmu: Unwrap tegra_smmu_group_get (Nicolin Chen, 2020-11-25, 1 file, -15/+4)

    tegra_smmu_group_get() was added to group devices in different SWGROUPs,
    and it would return a NULL group pointer upon a mismatch at
    tegra_smmu_find_group(), so for most clients/devices it would very
    likely mismatch and need a fallback to generic_device_group().

    But now tegra_smmu_group_get() handles devices in the same SWGROUP too,
    which means that it will allocate a group for every new SWGROUP or
    directly return an existing one upon matching a SWGROUP, i.e. every
    device goes through this function.

    So the only way to get a NULL group pointer in device_group() is upon a
    failure of either devm_kzalloc() or iommu_group_alloc(). In either case,
    calling generic_device_group() no longer makes sense. Especially in the
    devm_kzalloc() failure case, it would cause a problem if it failed at
    devm_kzalloc() yet succeeded at the fallback generic_device_group(),
    because it would not create a group->list for other devices to match.

    This patch simply unwraps the function to clean it up.

    Signed-off-by: Nicolin Chen <nicoleotsuka@gmail.com>
    Tested-by: Dmitry Osipenko <digetx@gmail.com>
    Reviewed-by: Dmitry Osipenko <digetx@gmail.com>
    Acked-by: Thierry Reding <treding@nvidia.com>
    Link: https://lore.kernel.org/r/20201125101013.14953-2-nicoleotsuka@gmail.com
    Signed-off-by: Will Deacon <will@kernel.org>
| * Merge branch 'for-next/iommu/svm' into for-next/iommu/core (Will Deacon, 2020-12-08, 10 files, -21/+460)

    More steps along the way to Shared Virtual {Addressing, Memory} support
    for Arm's SMMUv3, including the addition of a helper library that can be
    shared amongst other IOMMU implementations wishing to support this
    feature.

    * for-next/iommu/svm:
      iommu/arm-smmu-v3: Hook up ATC invalidation to mm ops
      iommu/arm-smmu-v3: Implement iommu_sva_bind/unbind()
      iommu/sva: Add PASID helpers
      iommu/ioasid: Add ioasid references
| | * iommu/arm-smmu-v3: Hook up ATC invalidation to mm ops (Jean-Philippe Brucker, 2020-11-23, 3 files, -3/+33)

    The invalidate_range() notifier is called for any change to the address
    space. Perform the required ATC invalidations.

    Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
    Link: https://lore.kernel.org/r/20201106155048.997886-5-jean-philippe@linaro.org
    Signed-off-by: Will Deacon <will@kernel.org>
| | * iommu/arm-smmu-v3: Implement iommu_sva_bind/unbind() (Jean-Philippe Brucker, 2020-11-23, 4 files, -10/+282)

    The sva_bind() function allows devices to access process address spaces
    using a PASID (aka SSID).

    (1) bind() allocates or gets an existing MMU notifier tied to the
        (domain, mm) pair. Each mm gets one PASID.

    (2) Any change to the address space calls invalidate_range() which sends
        ATC invalidations (in a subsequent patch).

    (3) When the process address space dies, the release() notifier disables
        the CD to allow reclaiming the page tables. Since release() has to
        be light we do not instruct device drivers to stop DMA here, we just
        ignore incoming page faults from this point onwards.

        To avoid any event 0x0a print (C_BAD_CD) we disable translation
        without clearing CD.V. PCIe Translation Requests and Page Requests
        are silently denied. Don't clear the R bit because the S bit can't
        be cleared when STALL_MODEL==0b10 (forced), and clearing R without
        clearing S is useless. Faulting transactions will stall and will be
        aborted by the IOPF handler.

    (4) After stopping DMA, the device driver releases the bond by calling
        unbind(). We release the MMU notifier, free the PASID and the bond.

    Three structures keep track of bonds:
    * arm_smmu_bond: one per {device, mm} pair, the handle returned to the
      device driver for a bind() request.
    * arm_smmu_mmu_notifier: one per {domain, mm} pair, deals with ATS/TLB
      invalidations and clearing the context descriptor on mm exit.
    * arm_smmu_ctx_desc: one per mm, holds the pinned ASID and pgd.

    Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
    Link: https://lore.kernel.org/r/20201106155048.997886-4-jean-philippe@linaro.org
    Signed-off-by: Will Deacon <will@kernel.org>
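    A sketch of the driver-facing side of this API (iommu_sva_bind_device(),
    iommu_sva_get_pasid() and iommu_sva_unbind_device() are the real core
    entry points; the surrounding driver code is illustrative):

        struct iommu_sva *handle;
        u32 pasid;

        /* (1) Bind the process address space; one PASID per mm */
        handle = iommu_sva_bind_device(dev, current->mm, NULL);
        if (IS_ERR(handle))
                return PTR_ERR(handle);

        pasid = iommu_sva_get_pasid(handle);
        /* ... program 'pasid' into the device and start DMA ... */

        /* (4) After quiescing DMA for this PASID, drop the bond */
        iommu_sva_unbind_device(handle);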
| | * iommu/sva: Add PASID helpers (Jean-Philippe Brucker, 2020-11-23, 4 files, -0/+107)

    Let IOMMU drivers allocate a single PASID per mm. Store the mm in the
    IOASID set to allow refcounting and searching mm by PASID, when handling
    an I/O page fault.

    Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
    Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
    Link: https://lore.kernel.org/r/20201106155048.997886-3-jean-philippe@linaro.org
    Signed-off-by: Will Deacon <will@kernel.org>
| | * iommu/ioasid: Add ioasid references (Jean-Philippe Brucker, 2020-11-23, 3 files, -9/+39)

    Let IOASID users take references to existing ioasids with ioasid_get().
    ioasid_put() drops a reference and only frees the ioasid when its
    reference count is zero. It returns true if the ioasid was freed. For
    drivers that don't call ioasid_get(), ioasid_put() is the same as
    ioasid_free().

    Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org>
    Reviewed-by: Eric Auger <eric.auger@redhat.com>
    Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com>
    Link: https://lore.kernel.org/r/20201106155048.997886-2-jean-philippe@linaro.org
    Signed-off-by: Will Deacon <will@kernel.org>
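    A usage sketch under the semantics described above (ioasid_alloc(),
    ioasid_get() and ioasid_put() are the real interfaces; the set, bounds
    and private data are placeholders):

        ioasid_t pasid = ioasid_alloc(&my_set, 1, max_pasid, private_data);

        /* A second user pins the ioasid; refcount goes from 1 to 2 */
        ioasid_get(pasid);

        /* ... both users finish ... */
        ioasid_put(pasid);              /* refcount 2 -> 1, not freed */
        if (ioasid_put(pasid))          /* refcount 1 -> 0, now freed */
                pr_debug("pasid %u released\n", pasid);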
| * Merge branch 'for-next/iommu/misc' into for-next/iommu/core (Will Deacon, 2020-12-08, 3 files, -21/+33)

    Miscellaneous IOMMU changes for 5.11. Largely cosmetic, apart from a
    change to the way in which identity-mapped domains are configured so
    that the requests are now batched and can potentially use larger pages
    for the mapping.

    * for-next/iommu/misc:
      iommu/io-pgtable-arm: Remove unused 'level' parameter from iopte_type() macro
      iommu: Defer the early return in arm_(v7s/lpae)_map
      iommu: Improve the performance for direct_mapping
      iommu: return error code when it can't get group
      iommu: Modify the description of iommu_sva_unbind_device
| | * iommu/io-pgtable-arm: Remove unused 'level' parameter from iopte_type() macro (Kunkun Jiang, 2020-12-08, 1 file, -5/+5)

    The 'level' parameter to the iopte_type() macro is unused, so remove it.

    Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
    Link: https://lore.kernel.org/r/20201207120150.1891-1-jiangkunkun@huawei.com
    Signed-off-by: Will Deacon <will@kernel.org>
| | * iommu: Defer the early return in arm_(v7s/lpae)_map (Keqian Zhu, 2020-12-08, 2 files, -8/+8)

    Although handling a mapping request with no permissions is a trivial
    no-op, defer the early return until after the size/range checks so that
    we are consistent with other mapping requests.

    Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
    Link: https://lore.kernel.org/r/20201207115758.9400-1-zhukeqian1@huawei.com
    Signed-off-by: Will Deacon <will@kernel.org>
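    A sketch of the resulting ordering in the LPAE map path (condensed from
    the shape of arm_lpae_map(); the validation details are illustrative):

        static int arm_lpae_map(struct io_pgtable_ops *ops, unsigned long iova,
                                phys_addr_t paddr, size_t size, int iommu_prot)
        {
                struct arm_lpae_io_pgtable *data = io_pgtable_ops_to_data(ops);

                /* Validate the size and address range first ... */
                if (WARN_ON(iova >= (1ULL << data->iop.cfg.ias) ||
                            paddr >= (1ULL << data->iop.cfg.oas)))
                        return -ERANGE;

                /* ... and only then take the no-permission early return */
                if (!(iommu_prot & (IOMMU_READ | IOMMU_WRITE)))
                        return 0;

                /* ... proceed with the actual mapping ... */
        }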
| | * iommu: Improve the performance for direct_mapping (Yong Wu, 2020-12-07, 1 file, -5/+17)

    Currently direct_mapping always uses the smallest pgsize, normally
    SZ_4K, for mapping. This is unnecessary: we can gather the size and then
    call iommu_map(), which can decide how best to map it with a larger
    pgsize.

    From the original comment, we should take care of overlap; otherwise,
    iommu_map() may return -EEXIST. In this overlap case, we map the region
    preceding the overlap first, then map the remaining part.

    Each iommu device calls direct_mapping when its iommu initializes. This
    patch is effective in improving the boot/initialization time, especially
    when only level 1 mappings are needed.

    Signed-off-by: Anan Sun <anan.sun@mediatek.com>
    Signed-off-by: Yong Wu <yong.wu@mediatek.com>
    Link: https://lore.kernel.org/r/20201207093553.8635-1-yong.wu@mediatek.com
    Signed-off-by: Will Deacon <will@kernel.org>
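    A sketch of the batching loop this describes (the surrounding
    iommu_create_device_direct_mappings() context, 'entry' and the error
    handling are simplified):

        size_t map_size = 0;
        unsigned long addr;
        int ret;

        for (addr = start; addr <= end; addr += pg_size) {
                phys_addr_t phys_addr = 0;

                if (addr < end)
                        phys_addr = iommu_iova_to_phys(domain, addr);

                /* Unmapped page: grow the pending run and keep going */
                if (!phys_addr && addr < end) {
                        map_size += pg_size;
                        continue;
                }

                /* Hit an existing mapping (or the end): flush the gathered
                 * run in one iommu_map() call so larger pages can be used.
                 */
                if (map_size) {
                        ret = iommu_map(domain, addr - map_size,
                                        addr - map_size, map_size,
                                        entry->prot);
                        if (ret)
                                break;
                        map_size = 0;
                }
        }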
| | * iommu: return error code when it can't get group (Yang Yingliang, 2020-11-26, 1 file, -1/+3)

    Although iommu_group_get() in iommu_probe_device() will always succeed
    thanks to __iommu_probe_device() creating the group if it's not present,
    it's still worth initialising 'ret' to -ENODEV in case this path is
    reachable in the future. For now, this patch results in no functional
    change.

    Reported-by: Hulk Robot <hulkci@huawei.com>
    Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
    Link: https://lore.kernel.org/r/20201126133825.3643852-1-yangyingliang@huawei.com
    Signed-off-by: Will Deacon <will@kernel.org>
| | * iommu: Modify the description of iommu_sva_unbind_device (Chen Jun, 2020-11-17, 1 file, -2/+0)

    iommu_sva_unbind_device() has no return value. Remove the description of
    the return value of the function.

    Signed-off-by: Chen Jun <c00424029@huawei.com>
    Link: https://lore.kernel.org/r/20201023064827.74794-1-chenjun102@huawei.com
    Signed-off-by: Will Deacon <will@kernel.org>
| * Merge branch 'for-next/iommu/iova' into for-next/iommu/core (Will Deacon, 2020-12-08, 1 file, -53/+47)

    IOVA allocator updates for 5.11, including removal of unused symbols and
    functions as well as some optimisations to improve allocation behaviour
    in the face of fragmentation.

    * for-next/iommu/iova:
      iommu: Stop exporting free_iova_mem()
      iommu: Stop exporting alloc_iova_mem()
      iommu: Delete split_and_remove_iova()
      iommu: avoid taking iova_rbtree_lock twice
      iommu/iova: Free global iova rcache on iova alloc failure
      iommu/iova: Retry from last rb tree node if iova search fails
| | * iommu: Stop exporting free_iova_mem() (John Garry, 2020-12-08, 1 file, -2/+1)

    It has no user outside iova.c.

    Signed-off-by: John Garry <john.garry@huawei.com>
    Link: https://lore.kernel.org/r/1607020492-189471-4-git-send-email-john.garry@huawei.com
    Signed-off-by: Will Deacon <will@kernel.org>
| | * iommu: Stop exporting alloc_iova_mem() (John Garry, 2020-12-08, 1 file, -2/+1)

    It is not used outside iova.c.

    Signed-off-by: John Garry <john.garry@huawei.com>
    Link: https://lore.kernel.org/r/1607020492-189471-3-git-send-email-john.garry@huawei.com
    Signed-off-by: Will Deacon <will@kernel.org>
| | * iommu: Delete split_and_remove_iova() (John Garry, 2020-12-08, 1 file, -41/+0)

    Function split_and_remove_iova() has not been referenced since commit
    e70b081c6f37 ("iommu/vt-d: Remove IOVA handling code from the
    non-dma_ops path"), so delete it.

    Signed-off-by: John Garry <john.garry@huawei.com>
    Link: https://lore.kernel.org/r/1607020492-189471-2-git-send-email-john.garry@huawei.com
    Signed-off-by: Will Deacon <will@kernel.org>
| | * iommu: avoid taking iova_rbtree_lock twice (Cong Wang, 2020-12-01, 1 file, -2/+6)

    Both find_iova() and __free_iova() take iova_rbtree_lock; there is no
    reason to take and release it twice inside free_iova(). Fold them into
    one critical section by calling the unlocked versions instead.

    Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
    Reviewed-by: Robin Murphy <robin.murphy@arm.com>
    Signed-off-by: John Garry <john.garry@huawei.com>
    Link: https://lore.kernel.org/r/1605608734-84416-5-git-send-email-john.garry@huawei.com
    Signed-off-by: Will Deacon <will@kernel.org>
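    The resulting free_iova() plausibly looks like this (private_find_iova()
    and private_free_iova() are the lock-free helpers that the locked
    wrappers are built on):

        void free_iova(struct iova_domain *iovad, unsigned long pfn)
        {
                unsigned long flags;
                struct iova *iova;

                /* One lock round-trip instead of two */
                spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
                iova = private_find_iova(iovad, pfn);
                if (iova)
                        private_free_iova(iovad, iova);
                spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
        }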
| | * iommu/iova: Free global iova rcache on iova alloc failure (Vijayanand Jitta, 2020-11-17, 1 file, -0/+22)

    Whenever an iova alloc request fails, we free the iova ranges present in
    the percpu iova rcaches and then retry, but the global iova rcache is
    not freed. As a result, we could still see iova alloc failures even
    after the retry, as the global rcache is holding the iovas, which can
    cause fragmentation.

    So, free the global iova rcache as well and then go for the retry.

    Signed-off-by: Vijayanand Jitta <vjitta@codeaurora.org>
    Reviewed-by: Robin Murphy <robin.murphy@arm.com>
    Acked-by: John Garry <john.garry@huawei.com>
    Link: https://lore.kernel.org/r/1601451864-5956-2-git-send-email-vjitta@codeaurora.org
    Signed-off-by: Will Deacon <will@kernel.org>
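    A sketch of the retry logic in alloc_iova_fast() with the extra global
    flush (free_global_cached_iovas() is the helper this change introduces,
    per the description above; treat the exact shape as illustrative):

        unsigned long alloc_iova_fast(struct iova_domain *iovad,
                                      unsigned long size,
                                      unsigned long limit_pfn,
                                      bool flush_rcache)
        {
                unsigned long iova_pfn;
                struct iova *new_iova;

                iova_pfn = iova_rcache_get(iovad, size, limit_pfn + 1);
                if (iova_pfn)
                        return iova_pfn;

        retry:
                new_iova = alloc_iova(iovad, size, limit_pfn, true);
                if (!new_iova) {
                        unsigned int cpu;

                        if (!flush_rcache)
                                return 0;

                        /* Try replenishing IOVAs by flushing the rcaches:
                         * per-CPU first, and now the global depot too.
                         */
                        flush_rcache = false;
                        for_each_online_cpu(cpu)
                                free_cpu_cached_iovas(cpu, iovad);
                        free_global_cached_iovas(iovad);
                        goto retry;
                }

                return new_iova->pfn_lo;
        }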
| | * iommu/iova: Retry from last rb tree node if iova search fails (Vijayanand Jitta, 2020-11-17, 1 file, -6/+17)

    Whenever a new iova alloc request comes in, the iova is always searched
    from the cached node and the nodes which are previous to the cached
    node. So, even if there is free iova space available in the nodes which
    are next to the cached node, iova allocation can still fail because of
    this approach.

    Consider the following sequence of iova allocs and frees on 1GB of iova
    space:

    1) alloc - 500MB
    2) alloc - 12MB
    3) alloc - 499MB
    4) free - 12MB which was allocated in step 2
    5) alloc - 13MB

    After the above sequence we will have 12MB of free iova space, and the
    cached node will be pointing to the iova pfn of the last alloc of 13MB,
    which will be the lowest iova pfn of that iova space. Now if we get an
    alloc request of 2MB, we just search from the cached node and then look
    for lower iova pfns for free iova, and as there aren't any, the iova
    alloc fails, though there is 12MB of free iova space.

    To avoid such iova search failures, do a retry from the last rb tree
    node when the iova search fails; this will search the entire tree and
    get an iova if it is available.

    Signed-off-by: Vijayanand Jitta <vjitta@codeaurora.org>
    Reviewed-by: Robin Murphy <robin.murphy@arm.com>
    Link: https://lore.kernel.org/r/1601451864-5956-1-git-send-email-vjitta@codeaurora.org
    Signed-off-by: Will Deacon <will@kernel.org>
| * Merge branch 'for-next/iommu/default-domains' into for-next/iommu/core (Will Deacon, 2020-12-08, 2 files, -17/+238)

    Support for changing the default domain type for singleton IOMMU groups
    via sysfs when the constituent device is not already bound to a device
    driver.

    * for-next/iommu/default-domains:
      iommu: Fix htmldocs warnings in sysfs-kernel-iommu_groups
      iommu: Document usage of "/sys/kernel/iommu_groups/<grp_id>/type" file
      iommu: Take lock before reading iommu group default domain type
      iommu: Add support to change default domain of an iommu group
      iommu: Move def_domain type check for untrusted device into core
| | * iommu: Take lock before reading iommu group default domain type (Sai Praneeth Prakhya, 2020-11-25, 1 file, -0/+2)

    The "/sys/kernel/iommu_groups/<grp_id>/type" file could be read to find
    out the default domain type of an iommu group. The default domain of an
    iommu group didn't change after booting and hence could be read
    directly. But, after adding support to dynamically change the iommu
    group default domain, the above assumption no longer stays valid.

    The iommu group default domain type could be changed at any time by
    writing to "/sys/kernel/iommu_groups/<grp_id>/type". So, take the group
    mutex before reading the iommu group default domain type, so that the
    user wouldn't see stale values and iommu_group_show_type() doesn't try
    to dereference stale pointers.

    Signed-off-by: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
    Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
    Cc: Christoph Hellwig <hch@lst.de>
    Cc: Joerg Roedel <joro@8bytes.org>
    Cc: Ashok Raj <ashok.raj@intel.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Cc: Sohil Mehta <sohil.mehta@intel.com>
    Cc: Robin Murphy <robin.murphy@arm.com>
    Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
    Link: https://lore.kernel.org/r/20201124130604.2912899-4-baolu.lu@linux.intel.com
    Signed-off-by: Will Deacon <will@kernel.org>
| | * iommu: Add support to change default domain of an iommu group (Sai Praneeth Prakhya, 2020-11-25, 1 file, -1/+229)

    Presently, the default domain of an iommu group is allocated during boot
    time and it cannot be changed later. So, the device would typically be
    either in identity (also known as pass_through) mode or in DMA mode as
    long as the machine is up and running. There is no way to change the
    default domain type dynamically, i.e. after booting, a device cannot
    switch between identity mode and DMA mode.

    But, assume a use case wherein the user trusts the device and believes
    that the OS is secure enough, and hence wants *only* this device to
    bypass the IOMMU (so that it could be high performing), whereas all the
    other devices go through the IOMMU (so that the system is protected).
    Presently, this use case is not supported.

    It will be helpful if there is some way to change the default domain of
    an iommu group dynamically. Hence, add such support.

    A privileged user could request the kernel to change the default domain
    type of an iommu group by writing to the
    "/sys/kernel/iommu_groups/<grp_id>/type" file. Presently, only three
    values are supported:
    1. identity: all the DMA transactions from the device in this group are
       *not* translated by the iommu
    2. DMA: all the DMA transactions from the device in this group are
       translated by the iommu
    3. auto: change to the type the device was booted with

    Note:
    1. The default domain of an iommu group with two or more devices cannot
       be changed.
    2. The device in the iommu group shouldn't be bound to any driver.
    3. The device shouldn't be assigned to a user for direct access.
    4. The change request will fail if any device in the group has a
       mandatory default domain type and the requested one conflicts with
       that.

    Please see "Documentation/ABI/testing/sysfs-kernel-iommu_groups" for
    more information.

    Signed-off-by: Sai Praneeth Prakhya <sai.praneeth.prakhya@intel.com>
    Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
    Cc: Christoph Hellwig <hch@lst.de>
    Cc: Joerg Roedel <joro@8bytes.org>
    Cc: Ashok Raj <ashok.raj@intel.com>
    Cc: Will Deacon <will.deacon@arm.com>
    Cc: Sohil Mehta <sohil.mehta@intel.com>
    Cc: Robin Murphy <robin.murphy@arm.com>
    Cc: Jacob Pan <jacob.jun.pan@linux.intel.com>
    Link: https://lore.kernel.org/r/20201124130604.2912899-3-baolu.lu@linux.intel.com
    Signed-off-by: Will Deacon <will@kernel.org>
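    An illustrative session (the PCI address and group number are made up;
    per note 2 above, the device must first be unbound from its driver):

        # echo 0000:00:14.0 > /sys/bus/pci/devices/0000:00:14.0/driver/unbind
        # echo identity > /sys/kernel/iommu_groups/7/type
        # cat /sys/kernel/iommu_groups/7/type
        identity
        # echo auto > /sys/kernel/iommu_groups/7/type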
| | * iommu: Move def_domain type check for untrusted device into core (Lu Baolu, 2020-11-25, 2 files, -16/+7)

    So that the vendor iommu drivers are no longer required to provide the
    def_domain_type callback to always isolate untrusted devices.

    Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
    Cc: Shameerali Kolothum Thodi <shameerali.kolothum.thodi@huawei.com>
    Link: https://lore.kernel.org/linux-iommu/243ce89c33fe4b9da4c56ba35acebf81@huawei.com/
    Link: https://lore.kernel.org/r/20201124130604.2912899-2-baolu.lu@linux.intel.com
    Signed-off-by: Will Deacon <will@kernel.org>
| * Merge branch 'for-next/iommu/arm-smmu' into for-next/iommu/core (Will Deacon, 2020-12-08, 7 files, -45/+323)

    Arm SMMU updates for 5.11, including support for the SMMU integrated
    into the Adreno GPU as well as workarounds for the broken firmware
    implementation in the DB845c SoC from Qualcomm.

    * for-next/iommu/arm-smmu:
      iommu: arm-smmu-impl: Add a space before open parenthesis
      iommu: arm-smmu-impl: Use table to list QCOM implementations
      iommu/arm-smmu: Move non-strict mode to use io_pgtable_domain_attr
      iommu/arm-smmu: Add support for pagetable config domain attribute
      iommu/io-pgtable-arm: Add support to use system cache
      iommu/io-pgtable: Add a domain attribute for pagetable configuration
      dt-bindings: arm-smmu: Add compatible string for Adreno GPU SMMU
      iommu/arm-smmu: Add a way for implementations to influence SCTLR
      iommu/arm-smmu-qcom: Add implementation for the adreno GPU SMMU
      iommu/arm-smmu-v3: Assign boolean values to a bool variable
      iommu/arm-smmu: Use new devm_krealloc()
      iommu/arm-smmu-qcom: Implement S2CR quirk
      iommu/arm-smmu-qcom: Read back stream mappings
      iommu/arm-smmu: Allow implementation specific write_s2cr
| | * iommu: arm-smmu-impl: Add a space before open parenthesis (Sai Prakash Ranjan, 2020-11-25, 1 file, -1/+1)

    Fix the checkpatch warning for a space required before the open
    parenthesis.

    Signed-off-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
    Acked-by: Will Deacon <will@kernel.org>
    Link: https://lore.kernel.org/r/0b4c3718a87992f11340a1cdd99fd746c647e485.1606287059.git.saiprakash.ranjan@codeaurora.org
    Signed-off-by: Will Deacon <will@kernel.org>
| | * iommu: arm-smmu-impl: Use table to list QCOM implementations (Sai Prakash Ranjan, 2020-11-25, 3 files, -14/+17)

    Use a table and of_match_node() to match the qcom implementation instead
    of multiple of_device_is_compatible() calls for each QCOM SMMU
    implementation.

    Signed-off-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
    Acked-by: Will Deacon <will@kernel.org>
    Link: https://lore.kernel.org/r/4e11899bc02102a6e6155db215911e8b5aaba950.1606287059.git.saiprakash.ranjan@codeaurora.org
    Signed-off-by: Will Deacon <will@kernel.org>
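    A sketch of the table-based matching (the compatible strings shown are a
    plausible subset; qcom_smmu_create() is the existing creation helper in
    arm-smmu-qcom):

        static const struct of_device_id __maybe_unused qcom_smmu_impl_of_match[] = {
                { .compatible = "qcom,sc7180-smmu-500" },
                { .compatible = "qcom,sdm845-smmu-500" },
                { }
        };

        struct arm_smmu_device *qcom_smmu_impl_init(struct arm_smmu_device *smmu)
        {
                /* One table lookup replaces a chain of
                 * of_device_is_compatible() calls.
                 */
                if (of_match_node(qcom_smmu_impl_of_match, smmu->dev->of_node))
                        return qcom_smmu_create(smmu, &qcom_smmu_impl);

                return smmu;
        }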
| | * iommu/arm-smmu: Move non-strict mode to use io_pgtable_domain_attr (Sai Prakash Ranjan, 2020-11-25, 2 files, -7/+9)

    Now that we have a struct io_pgtable_domain_attr with quirks, use that
    for non_strict mode as well, thereby removing the need for more members
    of arm_smmu_domain in the future.

    Signed-off-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
    Link: https://lore.kernel.org/r/c191265f3db1f6b3e136d4057ca917666680a066.1606287059.git.saiprakash.ranjan@codeaurora.org
    Signed-off-by: Will Deacon <will@kernel.org>
| | * iommu/arm-smmu: Add support for pagetable config domain attribute (Sai Prakash Ranjan, 2020-11-25, 2 files, -0/+21)

    Add support for the domain attribute DOMAIN_ATTR_IO_PGTABLE_CFG to
    get/set pagetable configuration data, which initially will be used to
    set quirks and later can be extended to include other pagetable
    configuration data.

    Signed-off-by: Sai Prakash Ranjan <saiprakash.ranjan@codeaurora.org>
    Link: https://lore.kernel.org/r/2ab52ced2f853115c32461259a075a2877feffa6.1606287059.git.saiprakash.ranjan@codeaurora.org
    Signed-off-by: Will Deacon <will@kernel.org>
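    A sketch of how a caller would use the new attribute (the quirk flag
    IO_PGTABLE_QUIRK_ARM_OUTER_WBWA is the system-cache quirk from this
    series; setting the attribute is assumed to happen before the domain is
    attached):

        struct io_pgtable_domain_attr pgtbl_cfg;
        int ret;

        ret = iommu_domain_get_attr(domain, DOMAIN_ATTR_IO_PGTABLE_CFG,
                                    &pgtbl_cfg);
        if (ret)
                return ret;

        /* Request outer write-back/write-allocate walks (system cache) */
        pgtbl_cfg.quirks |= IO_PGTABLE_QUIRK_ARM_OUTER_WBWA;

        return iommu_domain_set_attr(domain, DOMAIN_ATTR_IO_PGTABLE_CFG,
                                     &pgtbl_cfg);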