author | Alexey Kardashevskiy <aik@ozlabs.ru> | 2017-03-22 15:21:54 +1100
---|---|---
committer | Paul Mackerras <paulus@ozlabs.org> | 2017-04-20 11:39:16 +1000
commit | da6f59e19233efdda58f196afbae8e05f6030d7f (patch) |
tree | 61dbbdafde5e5fcad4f2d7ddbaf3b3e98b060755 /arch/powerpc/kernel/iommu.c |
parent | 503bfcbe18576a79be0bc5173b23b530845e704a (diff) |
KVM: PPC: Use preregistered memory API to access TCE list
VFIO on sPAPR already implements guest memory pre-registration, in which
the entire guest RAM gets pinned. This can be used to translate the
physical address of a guest page containing the TCE list from
H_PUT_TCE_INDIRECT.
This makes use of the pre-registered memory API to access TCE list
pages, avoiding unnecessary locking on the KVM memory reverse map:
since all of guest memory is pinned, we have a flat array mapping GPA
to HPA, and indexing into that array (even with the kernel page table
lookup in vmalloc_to_phys) is simpler and quicker than finding the
memslot, locking the rmap entry, looking up the user page tables, and
unlocking the rmap entry. Note that the rmap pointer is initialized to
NULL where declared (not in this patch).
If a requested chunk of memory has not been pre-registered, this falls
back to the non-preregistered case and locks the rmap.
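The flat-array translation described above can be sketched as follows.
This is a minimal illustration, not the kernel's actual code: the
struct and helper names (prereg_region, prereg_gpa_to_hpa) are
hypothetical, standing in for the pre-registered memory API's lookup.

```c
#include <stdint.h>

/* Hypothetical sketch: a pre-registered region keeps a flat array
 * mapping guest page frames to host physical addresses, so that
 * translation is a bounds check plus an array index, with no memslot
 * search and no rmap locking. */
struct prereg_region {
	uint64_t gpa_start;	/* guest physical base of the region */
	uint64_t npages;	/* number of pages covered */
	uint64_t *hpas;		/* flat GPA->HPA table, one entry per page */
};

#define PG_SHIFT 12
#define PG_MASK  ((1ULL << PG_SHIFT) - 1)

/* Fast path: translate a guest physical address via the flat array.
 * Returns 0 on success, -1 if the GPA is not covered, in which case
 * the caller falls back to the rmap-locking slow path. */
static int prereg_gpa_to_hpa(const struct prereg_region *r,
			     uint64_t gpa, uint64_t *hpa)
{
	uint64_t idx;

	if (gpa < r->gpa_start)
		return -1;
	idx = (gpa - r->gpa_start) >> PG_SHIFT;
	if (idx >= r->npages)
		return -1;
	*hpa = r->hpas[idx] | (gpa & PG_MASK);
	return 0;
}
```

A GPA inside the region resolves in O(1); a GPA outside it returns -1,
mirroring the fallback to the non-preregistered (rmap-locking) case.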
Signed-off-by: Alexey Kardashevskiy <aik@ozlabs.ru>
Reviewed-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Paul Mackerras <paulus@ozlabs.org>
Diffstat (limited to 'arch/powerpc/kernel/iommu.c')
0 files changed, 0 insertions, 0 deletions