| author | Lai Jiangshan <laijs@linux.alibaba.com> | 2020-12-17 23:41:18 +0800 |
|---|---|---|
| committer | Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 2021-02-23 13:58:14 +0100 |
| commit | 4230401d22d6365338c6cf8003d7d32f06d4680b (patch) | |
| tree | d01367461a9e7c45950dfe706a1bd42008dfc524 /crypto | |
| parent | 795d776b71c42aca3c616920da02368f646c5ad5 (diff) | |
kvm: check tlbs_dirty directly
commit 88bf56d04bc3564542049ec4ec168a8b60d0b48c upstream
In kvm_mmu_notifier_invalidate_range_start(), tlbs_dirty is used as:

	need_tlb_flush |= kvm->tlbs_dirty;

with need_tlb_flush being an int and tlbs_dirty being a long. The OR
therefore truncates tlbs_dirty to an int, and the upper 32 bits are
silently discarded. This change checks tlbs_dirty directly instead of
propagating it into need_tlb_flush.
Note: it is _extremely_ unlikely that dropping the upper 32 bits causes
problems in practice. It would require tlbs_dirty to land exactly on a
4-billion-count boundary, and KVM would have to be using shadow paging
or be running a nested guest.
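A minimal standalone sketch of the truncation (not the kernel code itself; the variable names are borrowed for illustration, and it assumes an LP64 target such as x86-64 Linux, where long is 64 bits and int is 32):

```c
#include <stdio.h>

static void flush_decision(long tlbs_dirty)
{
	int need_tlb_flush = 0;	/* 32-bit accumulator, as in the original code */

	/* Old pattern: the long is truncated to int before the OR,
	 * so a value with only the upper 32 bits set is lost. */
	need_tlb_flush |= tlbs_dirty;
	printf("old check: flush = %d\n", need_tlb_flush != 0);

	/* New pattern: test the long value directly, no truncation. */
	printf("new check: flush = %d\n", need_tlb_flush || tlbs_dirty != 0);
}

int main(void)
{
	/* A dirty count sitting exactly on a 2^32 (4 billion) boundary:
	 * non-zero as a long, but zero once truncated to an int. */
	flush_decision(1L << 32);
	return 0;
}
```

On such a target the old check reports that no flush is needed, while the direct check correctly sees the non-zero dirty count.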
Cc: stable@vger.kernel.org
Fixes: a4ee1ca4a36e ("KVM: MMU: delay flush all tlbs on sync_page path")
Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>
Message-Id: <20201217154118.16497-1-jiangshanlai@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
[sudip: adjust context]
Signed-off-by: Sudip Mukherjee <sudipm.mukherjee@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>