author    Sebastian Ott <sebott@linux.vnet.ibm.com>    2016-09-08 13:25:01 +0200
committer Martin Schwidefsky <schwidefsky@de.ibm.com>  2016-09-22 13:42:33 +0200
commit    13954fd6913acff8f8b8c21612074b57051ba457 (patch)
tree      5f2b8e7c89df80c25923fcfcdf252e24f9434ad0 /arch/alpha
parent    1f166e9e5c7cd5d1fe2a5da7c97c1688d4c93fbb (diff)
s390/pci_dma: improve lazy flush for unmap
Lazy unmap (deferring the TLB flush after unmap until a DMA address is reused) can greatly reduce the number of RPCIT instructions in the best case. In reality we are often far from that best case because our implementation suffers from the following problem:

To create DMA addresses we maintain an iommu bitmap and a pointer into that bitmap marking the start of the next search. That pointer moves from the start to the end of the bitmap, and we issue a global TLB flush once it wraps around. To prevent address reuse before the TLB flush has been issued, we even have to move the next pointer during unmaps - whenever a bit > next is cleared. This can lead to a situation where only the rear part of the bitmap is used and more TLB flushes are issued than expected.

To fix this, we no longer clear bits during unmap but maintain a second bitmap in which we mark addresses that can't be reused until we issue the global TLB flush after wrap-around.

Signed-off-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Reviewed-by: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
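As an illustration of the scheme described above, here is a minimal, self-contained C sketch of the two-bitmap lazy-flush idea. All names in it (alloc_dma_index, free_dma_index, tlb_flush_global, NR_PAGES) are hypothetical stand-ins chosen for this sketch, not the kernel's identifiers; the actual implementation lives in arch/s390/pci/pci_dma.c and uses the kernel bitmap API, proper locking, and the RPCIT instruction for the flush.

/*
 * Hypothetical sketch of the two-bitmap scheme (not the kernel code).
 * iommu_bitmap: a set bit means the address is in use, or was freed
 * but may still be stale in the TLB. lazy_bitmap: a set bit means the
 * address was freed and becomes reusable after the next global flush.
 */
#include <stdbool.h>
#include <stdio.h>

#define NR_PAGES 16

static bool iommu_bitmap[NR_PAGES];
static bool lazy_bitmap[NR_PAGES];
static unsigned int next_bit;	/* start of the next search */

static void tlb_flush_global(void)
{
	/* stand-in for the global RPCIT flush */
	printf("global tlb flush\n");
}

static int find_free(unsigned int start)
{
	for (unsigned int i = start; i < NR_PAGES; i++)
		if (!iommu_bitmap[i])
			return (int)i;
	return -1;
}

static int alloc_dma_index(void)
{
	int i = find_free(next_bit);

	if (i < 0) {
		/*
		 * Wrap-around: flush once, then return all lazily
		 * freed addresses to the allocator in one go.
		 */
		tlb_flush_global();
		for (unsigned int j = 0; j < NR_PAGES; j++) {
			if (lazy_bitmap[j]) {
				iommu_bitmap[j] = false;
				lazy_bitmap[j] = false;
			}
		}
		i = find_free(0);
		if (i < 0)
			return -1;	/* address space exhausted */
	}
	iommu_bitmap[i] = true;
	next_bit = (unsigned int)i + 1;
	return i;
}

static void free_dma_index(int i)
{
	/*
	 * Unlike the old scheme, do not clear iommu_bitmap and do not
	 * touch next_bit: the address stays blocked until the flush
	 * at wrap-around makes it safe to reuse.
	 */
	lazy_bitmap[i] = true;
}

int main(void)
{
	/* repeated map/unmap: one global flush per pass, at wrap-around */
	for (int n = 0; n < 2 * NR_PAGES; n++) {
		int i = alloc_dma_index();
		if (i < 0)
			return 1;
		free_dma_index(i);
	}
	return 0;
}

In this sketch, next_bit only ever advances through allocations, so a burst of unmaps near the end of the bitmap can no longer pin the allocator to the rear part, and each full pass over the address space costs exactly one global flush.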
Diffstat (limited to 'arch/alpha')
0 files changed, 0 insertions, 0 deletions