Merge tag 'iommu-updates-v4.8' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu
Pull IOMMU updates from Joerg Roedel:
- big-endian support and preparation for deferred probing for the Exynos
IOMMU driver
- simplifications in iommu-group id handling
- support for Mediatek generation one IOMMU hardware
- conversion of the AMD IOMMU driver to use the generic IOVA allocator.
This driver now also benefits from the recent scalability
improvements in the IOVA code.
- preparations to use generic DMA mapping code in the Rockchip IOMMU
driver
- device tree adaption and conversion to use generic page-table code
for the MSM IOMMU driver
- an iova_to_phys optimization in the ARM-SMMU driver to greatly
improve page-table teardown performance with VFIO
- various other small fixes and conversions
* tag 'iommu-updates-v4.8' of git://git.kernel.org/pub/scm/linux/kernel/git/joro/iommu: (59 commits)
iommu/amd: Initialize dma-ops domains with 3-level page-table
iommu/amd: Update Alias-DTE in update_device_table()
iommu/vt-d: Return error code in domain_context_mapping_one()
iommu/amd: Use container_of to get dma_ops_domain
iommu/amd: Flush iova queue before releasing dma_ops_domain
iommu/amd: Handle IOMMU_DOMAIN_DMA in ops->domain_free call-back
iommu/amd: Use dev_data->domain in get_domain()
iommu/amd: Optimize map_sg and unmap_sg
iommu/amd: Introduce dir2prot() helper
iommu/amd: Implement timeout to flush unmap queues
iommu/amd: Implement flush queue
iommu/amd: Allow NULL pointer parameter for domain_flush_complete()
iommu/amd: Set up data structures for flush queue
iommu/amd: Remove align-parameter from __map_single()
iommu/amd: Remove other remains of old address allocator
iommu/amd: Make use of the generic IOVA allocator
iommu/amd: Remove special mapping code for dma_ops path
iommu/amd: Pass gfp-flags to iommu_map_page()
iommu/amd: Implement apply_dm_region call-back
iommu/amd: Create a list of reserved iova addresses
...
'arm/msm', 'arm/rockchip', 'arm/smmu' and 'core' into next
IDA handling can be simplified considerably by using the ida_simple_*() functions.
This change also fixes a bug: the previous checking of errors returned by
ida_get_new() was incomplete, because ida_get_new() can return errors other
than -EAGAIN (e.g. -ENOSPC), and that case wasn't handled.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
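As a hedged sketch of the simplification (illustrative names, not the patch itself), ida_simple_get() folds allocation, locking and retry handling into a single call that returns either a valid id or a negative errno, so errors such as -ENOSPC are covered automatically:

#include <linux/gfp.h>
#include <linux/idr.h>

static DEFINE_IDA(example_ida);    /* illustrative name */

static int example_alloc_id(void)
{
    /* Returns a new id >= 0 or a negative errno (e.g. -ENOMEM, -ENOSPC),
     * so no special-casing of -EAGAIN is needed any more. */
    return ida_simple_get(&example_ida, 0, 0, GFP_KERNEL);
}

static void example_free_id(int id)
{
    ida_simple_remove(&example_ida, id);
}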
iommu_group_ida and iommu_group_mutex can be initialized statically.
There's no need to do this dynamically in the init function.
Signed-off-by: Heiner Kallweit <hkallweit1@gmail.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
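A minimal sketch of what the static initialization looks like here (the driver's actual declarations may differ in detail):

#include <linux/idr.h>
#include <linux/mutex.h>

static DEFINE_IDA(iommu_group_ida);
static DEFINE_MUTEX(iommu_group_mutex);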
git://git.kernel.org/pub/scm/linux/kernel/git/will/linux into arm/smmu
Use devm_request_irq() to simplify the error handling path when probing
the SMMU device. Also use devm_{request,free}_irq() when initializing or
destroying a domain context.
Signed-off-by: Peng Fan <van.freenix@gmail.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
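A minimal sketch of the devm pattern (device and handler names are placeholders, not the arm-smmu code):

#include <linux/interrupt.h>
#include <linux/platform_device.h>

static irqreturn_t example_smmu_irq(int irq, void *dev_id)
{
    return IRQ_HANDLED;
}

static int example_probe(struct platform_device *pdev)
{
    int irq = platform_get_irq(pdev, 0);

    if (irq < 0)
        return irq;

    /* Freed automatically on probe failure or driver detach, so the
     * error path needs no explicit free_irq() calls. */
    return devm_request_irq(&pdev->dev, irq, example_smmu_irq, 0,
                            dev_name(&pdev->dev), pdev);
}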
The implementation of iova_to_phys for the long-descriptor ARM
io-pgtable code always masks with the granule size when inserting the
low virtual address bits into the physical address determined from the
page tables. In cases where the leaf entry is found before the final
level of table (i.e. due to a block mapping), this results in rounding
down to the bottom page of the block mapping. Consequently, the physical
address range batching in vfio_unmap_unpin() is defeated and we end
up taking the long way home.
This patch fixes the problem by masking the virtual address with the
appropriate mask for the level at which the leaf descriptor is located.
The short-descriptor code already gets this right, so no change is
needed there.
Cc: <stable@vger.kernel.org>
Reported-by: Robin Murphy <robin.murphy@arm.com>
Tested-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
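A hedged sketch of the idea with an illustrative helper (not the io-pgtable-arm code):

#include <linux/types.h>

/*
 * Combine the output address from the leaf descriptor with the low IOVA
 * bits. leaf_entry_size is the region covered by one entry at the level
 * where the leaf was found (e.g. 2MB for a level-2 block with a 4K granule).
 * Masking with the granule instead would round a block mapping down to its
 * first 4K page, which is the bug described above.
 */
static u64 leaf_iova_to_phys(u64 leaf_paddr, u64 iova, u64 leaf_entry_size)
{
    return leaf_paddr | (iova & (leaf_entry_size - 1));
}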
The PCIe ACS capability affects the layout of IOMMU groups.
Generally speaking, if the path from the root port to a PCIe device
is ACS enabled, the IOMMU will create a single IOMMU group for this
PCIe device. If all PCIe devices on the path are ACS enabled, then
Linux can determine that the path is ACS enabled.
Linux uses two PCIe configuration registers to determine the ACS
status of PCIe devices:
the ACS Capability Register and the ACS Control Register.
The first register is used to check whether a PCIe device implements
the ACS function, the second to check whether the ACS function is
enabled. If a PCIe device has both implemented and enabled the ACS
function, Linux considers ACS enabled for that device.
From Chapter 6.12 of the PCI Express Base Specification Revision 3.1a,
we can see that when a PCIe device implements the ACS function, the enable
status is disabled by default and can be enabled by ACS-aware software.
ACS affects the IOMMU group topology, so the IOMMU driver is such
ACS-aware software. This patch adds a call to pci_request_acs() to the
arm-smmu driver to enable the ACS function in PCIe devices that support
it, when they get probed.
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
Signed-off-by: Wei Chen <Wei.Chen@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
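A hedged sketch of how a driver opts in (the placement and guard shown here are illustrative):

#include <linux/pci.h>

static void example_enable_acs(void)
{
#ifdef CONFIG_PCI
    /* Ask the PCI core to enable ACS (where implemented) as devices are
     * enumerated, so peer-to-peer traffic is redirected upstream and the
     * IOMMU can keep groups small. */
    pci_request_acs();
#endif
}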
Set geometry for allocated domains and fix .domain_alloc() callback to
work with IOMMU_DOMAIN_DMA domain type, which is used for implicit
domains on ARM64.
Signed-off-by: Shunqian Zheng <zhengsq@rock-chips.com>
Signed-off-by: Tomasz Figa <tfiga@chromium.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
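A hedged sketch of the two pieces involved, with an illustrative structure and aperture values rather than the Rockchip driver's:

#include <linux/dma-mapping.h>
#include <linux/iommu.h>
#include <linux/slab.h>

struct example_domain {
    struct iommu_domain domain;
    /* driver-private page-table state would live here */
};

static struct iommu_domain *example_domain_alloc(unsigned type)
{
    struct example_domain *dom;

    /* Accept both explicitly managed and DMA-API (implicit) domains. */
    if (type != IOMMU_DOMAIN_UNMANAGED && type != IOMMU_DOMAIN_DMA)
        return NULL;

    dom = kzalloc(sizeof(*dom), GFP_KERNEL);
    if (!dom)
        return NULL;

    /* Advertise the IOVA range the hardware can actually translate. */
    dom->domain.geometry.aperture_start = 0;
    dom->domain.geometry.aperture_end   = DMA_BIT_MASK(32);
    dom->domain.geometry.force_aperture = true;

    return &dom->domain;
}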
Use DMA API instead of architecture internal functions like
__cpuc_flush_dcache_area() etc.
The biggest difficulty here is that dma_map and _sync calls require some
struct device, while there is no real 1:1 relation between an IOMMU
domain and some device. To overcome this, a simple platform device is
registered for each allocated IOMMU domain.
With this patch, this driver can be used on both ARM and ARM64
platforms, such as RK3288 and RK3399 respectively.
Signed-off-by: Shunqian Zheng <zhengsq@rock-chips.com>
Signed-off-by: Tomasz Figa <tfiga@chromium.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
In .probe(), devm_kzalloc() is called with size == 0 and works only
by luck, due to the internal behavior of the allocator and the fact
that the proper allocation size is small. Let's use the proper value
for calculating the size.
Fixes: cd6438c5f844 ("iommu/rockchip: Reconstruct to support multi slaves")
Signed-off-by: Shunqian Zheng <zhengsq@rock-chips.com>
Signed-off-by: Tomasz Figa <tfiga@chromium.org>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
iommu_dma_alloc() in iommu/dma-iommu.c calls iommu_map_sg(),
which requires the iommu_ops callback .map_sg(). Add
default_iommu_map_sg() to the Rockchip IOMMU driver accordingly.
Signed-off-by: Simon Xue <xxm@rock-chips.com>
Signed-off-by: Shunqian Zheng <xxm@rock-chips.com>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
Signed-off-by: Tomasz Figa <tfiga@chromium.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
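A hedged sketch of wiring this up in the ops table (other callbacks elided; default_iommu_map_sg() is the generic helper available in this kernel generation):

#include <linux/iommu.h>

/*
 * default_iommu_map_sg() walks the scatterlist and calls the driver's
 * .map() for each segment, which is all iommu_dma_alloc() needs.
 */
static const struct iommu_ops example_iommu_ops = {
    .map_sg = default_iommu_map_sg,
    /* .domain_alloc, .map, .unmap, .iova_to_phys, ... */
};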
Even though the IOMMU shares its IRQ with its master, the struct device
passed to {request,free}_irq is supposed to represent the device that is
signalling the interrupt. This patch makes the driver use the IOMMU device
instead of the master's device to make things clear.
Signed-off-by: Simon Xue <xxm@rock-chips.com>
Signed-off-by: Shunqian Zheng <zhengsq@rock-chips.com>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
Signed-off-by: Tomasz Figa <tfiga@chromium.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Now that the driver is DT adapted, bus_set_iommu() gets called only
on compatible matching, so the driver should no longer break multiplatform
builds. Remove the BROKEN config dependency.
Signed-off-by: Sricharan R <sricharan@codeaurora.org>
Tested-by: Archit Taneja <architt@codeaurora.org>
Tested-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
This IOMMU uses the ARMv7 short-descriptor format, so use the
generic ARMV7S pagetable ops instead of reimplementing the same
code in the driver.
Signed-off-by: Sricharan R <sricharan@codeaurora.org>
Tested-by: Archit Taneja <architt@codeaurora.org>
Tested-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
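A hedged sketch of handing page-table management to the generic code (the config contents are the caller's responsibility and are not shown):

#include "io-pgtable.h"    /* private drivers/iommu/ header in this kernel generation */

/*
 * The caller fills cfg with ias/oas, the supported page sizes and its TLB
 * maintenance callbacks; ARM_V7S selects the generic ARMv7 short-descriptor
 * implementation.
 */
static struct io_pgtable_ops *example_alloc_v7s(struct io_pgtable_cfg *cfg,
                                                void *cookie)
{
    return alloc_io_pgtable_ops(ARM_V7S, cfg, cookie);
}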
This adds the of_xlate callback, which gets invoked during
device registration from DT. The master devices get added
through this.
Signed-off-by: Sricharan R <sricharan@codeaurora.org>
Tested-by: Archit Taneja <architt@codeaurora.org>
Tested-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
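A hedged sketch of what such an of_xlate callback typically does; the bookkeeping helper is hypothetical, not the MSM driver's:

#include <linux/device.h>
#include <linux/iommu.h>
#include <linux/of.h>

/* Placeholder for the driver-private bookkeeping (hypothetical helper). */
static int example_record_master(struct device *dev, struct device_node *np,
                                 const u32 *ids, int nids)
{
    return 0;
}

static int example_iommu_of_xlate(struct device *dev,
                                  struct of_phandle_args *spec)
{
    /*
     * spec->np is the IOMMU node referenced by the master's "iommus"
     * property and spec->args[] are its cells (e.g. context ids). The
     * driver records the association so the master can later be attached
     * to the right IOMMU instances and contexts.
     */
    return example_record_master(dev, spec->np, spec->args, spec->args_count);
}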
There are only two functions left in msm_iommu_dev.c. Move them to
msm_iommu.c and delete the file.
Signed-off-by: Sricharan R <sricharan@codeaurora.org>
Tested-by: Archit Taneja <architt@codeaurora.org>
Tested-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
The driver currently works based on platform data. Remove this
and add support for DT. A single master can have multiple ports
connected to more than one iommu:

                 master
                   |
                   |
                   |
        ------------------------
        |                      |
      IOMMU0                IOMMU1
        |                      |
    ctx0   ctx1            ctx0   ctx1

This association of master and iommus/contexts was previously
represented by platform data parent/child device details. The client
drivers were responsible for programming all of the iommus/contexts
for the device. Now, while adapting to the generic DT bindings, we maintain
the list of iommus and contexts that each master domain is connected to and
program all of them on attach/detach.
Signed-off-by: Sricharan R <sricharan@codeaurora.org>
Tested-by: Archit Taneja <architt@codeaurora.org>
Tested-by: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
The symbol exists elsewhere already, so the build fails to
link if the symbol is non-static.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Mediatek SoC's M4U has two generations of HW architecture. Generation one
uses a flat, one-level pagetable and was shipped with the ARM architecture;
it only supports 4K page mappings. The MT2701 SoC uses this generation-one
m4u HW. Generation two uses the ARM short-descriptor translation table
format for address translation and was shipped with the ARM64 architecture;
MT8173 uses this generation-two m4u HW. Both generations of the IOMMU HW
have only one iommu domain, and all iommu clients share the same
iova address space.
The two generations of m4u HW have slightly different register groups and
register offsets, but most register names are the same. This patch adds iommu
support for the mediatek SoC mt2701.
Signed-off-by: Honghui Zhang <honghui.zhang@mediatek.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Move the struct definitions of the mtk iommu into a new header file for
common use.
Signed-off-by: Honghui Zhang <honghui.zhang@mediatek.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
The device_node will be released in of_iommu_configure(); it may be
released twice if of_node_put() is also called in mtk_iommu_of_xlate().
Signed-off-by: Honghui Zhang <honghui.zhang@mediatek.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Add initial support for big endian by always writing the pte
in le32. Note: revisit if the hardware is capable of doing big-endian
fetches.
Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Register iommu_ops at the end of successful probe instead of doing that
unconditionally. This makes Exynos IOMMU driver ready for deferred probe
caused by not-yet-available clocks.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Do clock preparation together with clk_enable(). This way inactive
SYSMMU controllers will not keep clocks prepared all the time.
This change allows more fine-grained power management in the future.
All the code assumes that clock management doesn't fail, so guard
clk_prepare_enable() with BUG_ON().
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
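A minimal sketch of the described pattern (function and variable names are illustrative):

#include <linux/bug.h>
#include <linux/clk.h>

static void example_sysmmu_clock_on(struct clk *sysmmu_clk)
{
    /* Prepare and enable together, only while the controller is in use;
     * clock operations are assumed not to fail, hence the BUG_ON(). */
    BUG_ON(clk_prepare_enable(sysmmu_clk));
}

static void example_sysmmu_clock_off(struct clk *sysmmu_clk)
{
    clk_disable_unprepare(sysmmu_clk);
}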
If the SYSMMU controller is not active, there is no point in enabling the
master's clock just for updating the internal state. This patch moves enabling
that clock to the block which actually does the register access.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
This patch reworks the driver probe code to propagate error codes from
the clk_get() operation. This will allow deferred probe to be handled
properly in the future.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
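A hedged sketch of the error propagation (the clock id is illustrative):

#include <linux/clk.h>
#include <linux/err.h>
#include <linux/platform_device.h>

static int example_sysmmu_probe(struct platform_device *pdev)
{
    struct clk *clk;

    clk = devm_clk_get(&pdev->dev, "sysmmu");    /* clock id is illustrative */
    if (IS_ERR(clk))
        /* May be -EPROBE_DEFER when the clock provider is not ready yet;
         * returning it lets the driver core retry the probe later. */
        return PTR_ERR(clk);

    return 0;
}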
Removal of the IOMMU driver cannot be done reliably, so the Exynos IOMMU
driver doesn't support this operation. It is essential for system operation,
so it makes sense to prevent unbinding by disabling the bind/unbind sysfs
feature for the SYSMMU controller driver, to avoid kernel oopses or memory
trashing caused by such an operation.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
CC: stable@vger.kernel.org # v4.2+
Reviewed-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
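A hedged sketch of how a platform driver suppresses the bind/unbind attributes (driver name and probe stub are illustrative):

#include <linux/platform_device.h>

static int example_sysmmu_probe(struct platform_device *pdev)
{
    return 0;
}

static struct platform_driver example_sysmmu_driver = {
    .probe  = example_sysmmu_probe,
    .driver = {
        .name                = "example-sysmmu",
        /* No bind/unbind files in sysfs: userspace cannot unbind the
         * driver from a live SYSMMU controller. */
        .suppress_bind_attrs = true,
    },
};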
In commit 55d940430ab9 ("iommu/vt-d: Get rid of domain->iommu_lock"),
the error handling path was changed a little, which makes the function
always return 0.
This patch fixes this.
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Fixes: 55d940430ab9 ('iommu/vt-d: Get rid of domain->iommu_lock')
Cc: stable@vger.kernel.org # v4.3+
Signed-off-by: Joerg Roedel <jroedel@suse.de>
According to the manual: "Hardware access to ... invalidation queue ...
are always coherent."
Remove the unnecessary clflushes accordingly.
Signed-off-by: Nadav Amit <namit@vmware.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
On a system with an Intel PCIe port configured as an NTB device, iommu
initialization fails with
DMAR: Device scope type does not match for 0000:80:03.0
This is because the DMAR table reports this device as having scope 2
(ACPI_DMAR_SCOPE_TYPE_BRIDGE):
[0A0h 0160 1] Device Scope Entry Type : 02
[0A1h 0161 1] Entry Length : 08
[0A2h 0162 2] Reserved : 0000
[0A4h 0164 1] Enumeration ID : 00
[0A5h 0165 1] PCI Bus Number : 80
[0A6h 0166 2] PCI Path : 03,00
but the device has a type 0 PCI header:
80:03.0 Bridge [0680]: Intel Corporation Device [8086:2f0d] (rev 02)
00: 86 80 0d 2f 00 00 10 00 02 00 80 06 10 00 80 00
10: 0c 00 c0 00 c0 38 00 00 0c 00 00 00 80 38 00 00
20: 00 00 00 c8 00 00 10 c8 00 00 00 00 86 80 00 00
30: 00 00 00 00 60 00 00 00 00 00 00 00 ff 01 00 00
VT-d works perfectly on this system, so there's no reason to bail out
on initialization due to this apparent scope mismatch. Use the class
0x0680 ("Other bridge device") as a heuristic for allowing DMAR
initialization for non-bridge PCI devices listed with scope bridge.
Signed-off-by: Roland Dreier <roland@purestorage.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
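A hedged sketch of the kind of check involved (not the exact dmar.c logic):

#include <linux/pci.h>
#include <linux/pci_ids.h>

/*
 * Accept a device listed with bridge scope if it either really is a bridge
 * or carries class 0x0680 ("Other bridge device"), as Intel NTB ports do.
 */
static bool example_scope_bridge_ok(struct pci_dev *pdev)
{
    return pdev->hdr_type == PCI_HEADER_TYPE_BRIDGE ||
           (pdev->class >> 8) == PCI_CLASS_BRIDGE_OTHER;
}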
In commit 8bf478163e69 ("iommu/vt-d: Split up iommu->domains array"),
iommu->domains was split into two levels, where each first-level entry
points to 256 second-level entries. With the current implementation, when
ndomains is an exact multiple of 256, one extra first-level entry is
allocated.
This patch refines the calculation to drop the extra first-level entry.
Signed-off-by: Wei Yang <richard.weiyang@gmail.com>
Signed-off-by: Joerg Roedel <jroedel@suse.de>
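A hedged arithmetic sketch of the off-by-one, using DIV_ROUND_UP() as one way to express the corrected calculation:

#include <linux/kernel.h>

/*
 * Worked example with ndomains = 512 (an exact multiple of 256):
 *   old:  (512 >> 8) + 1         = 3 first-level entries (one wasted)
 *   new:  DIV_ROUND_UP(512, 256) = 2 first-level entries
 * For any ndomains that is not a multiple of 256, both give the same result.
 */
static unsigned long example_first_level_entries(unsigned long ndomains)
{
    return DIV_ROUND_UP(ndomains, 256);
}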
A two-level page-table can map up to 1GB of address space.
With the IOVA allocator now in use, the allocated addresses are
often much closer to 4G, which requires the address space to be
increased much more often. Avoid that by using a three-level
page-table by default.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Not doing so might cause IO-Page-Faults when a device uses
an alias request-id and the alias-dte is left in a lower
page-mode which does not cover the address allocated from
the iova-allocator.
Fixes: 492667dacc0a ('x86/amd-iommu: Remove amd_iommu_pd_table')
Cc: stable@vger.kernel.org
Signed-off-by: Joerg Roedel <jroedel@suse.de>
This is better than storing an extra pointer in struct
protection_domain, because this pointer can now be removed
from the struct.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
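A hedged sketch of the helper this enables (struct layouts abbreviated for illustration; only the embedding matters):

#include <linux/kernel.h>

struct protection_domain {
    int id;
    /* ... */
};

struct dma_ops_domain {
    struct protection_domain domain;    /* embedded, not a pointer */
    /* ... dma-ops specific state ... */
};

static struct dma_ops_domain *to_dma_ops_domain(struct protection_domain *domain)
{
    return container_of(domain, struct dma_ops_domain, domain);
}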
Before a dma_ops_domain can be freed, we need to make sure it
is no longer referenced by the flush queue. So empty the queue
before a dma_ops_domain is freed.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
This domain type is not yet handled in the
iommu_ops->domain_free() call-back. Fix that.
Fixes: 0bb6e243d7fb ('iommu/amd: Support IOMMU_DOMAIN_DMA type allocation')
Cc: stable@vger.kernel.org # v4.2+
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Using the cached value is much more efficient than calling
into the IOMMU core code.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Optimize these functions so that they need only one call
into the address allocator. This also saves a couple of
io-tlb flushes in the unmap_sg path.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
This function converts dma_data_direction to iommu-protection
flags. This will be needed in multiple places in the code, so
having a helper saves some code.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
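A hedged sketch of such a helper; the mapping follows the usual DMA direction semantics:

#include <linux/dma-direction.h>
#include <linux/iommu.h>

static int example_dir2prot(enum dma_data_direction dir)
{
    if (dir == DMA_TO_DEVICE)
        return IOMMU_READ;                  /* device reads from memory */
    else if (dir == DMA_FROM_DEVICE)
        return IOMMU_WRITE;                 /* device writes to memory */
    else if (dir == DMA_BIDIRECTIONAL)
        return IOMMU_READ | IOMMU_WRITE;
    else
        return 0;                           /* DMA_NONE */
}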
In case the queue doesn't fill up, we flush the TLB at least
10ms after the unmap happened to make sure that the TLB is
cleaned up.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
With the flush queue the IOMMU TLBs will not be flushed at
every dma-ops unmap operation. The unmapped ranges will be
queued and flushed at once, when the queue is full. This
makes unmapping operations a lot faster (on average) and
restores the performance of the old address allocator.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
If domain == NULL is passed to the function, it will queue a
completion-wait command on all IOMMUs in the system.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
The flush queue is the equivalent of the deferred flushing in the
Intel VT-d driver. This patch sets up the data structures
needed for this.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
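A hedged sketch of the kind of data structure involved (field names and queue size are illustrative, not the driver's):

#include <linux/spinlock.h>
#include <linux/types.h>

#define EXAMPLE_FLUSH_QUEUE_SIZE 256        /* size is illustrative */

struct example_flush_entry {
    unsigned long iova_pfn;
    unsigned long pages;
};

/* Typically instantiated per CPU to keep the unmap fast path contention
 * free; the entries are drained with a single IOTLB flush when the queue
 * fills up (or, per the later patch, after a timeout). */
struct example_flush_queue {
    spinlock_t lock;
    unsigned int next;
    struct example_flush_entry entries[EXAMPLE_FLUSH_QUEUE_SIZE];
};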
This parameter is not required anymore because the
iova-allocations are always aligned to its size.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
There are other remains of the old address allocator in the code.
Remove them all.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Remove the old address allocation code and make use of the
generic IOVA allocator that is also used by other dma-ops
implementations.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Use the iommu-api map/unmap functions instead. This will be
required anyway when IOVA code is used for address
allocation.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Make this function ready to be used in the DMA-API path.
Reorder parameters a bit while at it.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
It is used to reserve the dm-regions in the iova-tree.
Signed-off-by: Joerg Roedel <jroedel@suse.de>
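A hedged sketch of reserving such a region in the IOVA tree (helper name and region handling are illustrative):

#include <linux/iova.h>
#include <linux/mm.h>

static void example_reserve_dm_region(struct iova_domain *iovad,
                                      dma_addr_t start, dma_addr_t end)
{
    /* Mark the direct-mapped range as allocated so the IOVA allocator
     * never hands out addresses that overlap it. */
    reserve_iova(iovad, start >> PAGE_SHIFT, end >> PAGE_SHIFT);
}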