author	Song Liu <songliubraving@fb.com>	2020-01-23 10:11:46 -0800
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>	2020-02-11 04:34:19 -0800
commit	a3623db43a3c06538591370db955d85b80657e17 (patch)
tree	c3ef1a249747ac666daf3bd34cd4ebe8efd5a640 /kernel/events
parent	6284d30e96ede11d9d434eebfacbe4b4625b6c87 (diff)
perf/core: Fix mlock accounting in perf_mmap()
commit 003461559ef7a9bd0239bae35a22ad8924d6e9ad upstream.

Decreasing sysctl_perf_event_mlock between two consecutive perf_mmap()s
of a perf ring buffer may lead to an integer underflow in locked memory
accounting. This may lead to undesired behaviors, such as failures in
BPF map creation.

Address this by adjusting the accounting logic to take into account the
possibility that the amount of already locked memory may exceed the
current limit.

Fixes: c4b75479741c ("perf/core: Make the mlock accounting simple again")
Suggested-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: <stable@vger.kernel.org>
Acked-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Link: https://lkml.kernel.org/r/20200123181146.2238074-1-songliubraving@fb.com
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
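To see the arithmetic being corrected, here is a minimal userspace sketch of
the accounting (illustrative only: plain variables stand in for
user->locked_vm and user_lock_limit, the page counts are made up, and the
kernel's surrounding bookkeeping is elided):

/* underflow-demo.c -- userspace model of the perf_mmap() mlock
 * arithmetic; illustrative only, not kernel code.
 * Build: cc -Wall -o underflow-demo underflow-demo.c
 */
#include <stdio.h>

int main(void)
{
	/* models user->locked_vm, in pages, across two mmap() calls */
	unsigned long locked_vm = 0;

	/* step 1: mmap a 512-page ring buffer (+1 control page) while
	 * the per-user limit is comfortably large: all 513 pages are
	 * charged to locked_vm
	 */
	unsigned long user_extra = 513;
	unsigned long limit = 1024;	/* models user_lock_limit */
	locked_vm += user_extra;

	/* step 2: sysctl_perf_event_mlock is lowered, so the recomputed
	 * limit now sits below what this user already has locked
	 */
	limit = 129;

	/* step 3: mmap a second, identical ring buffer */
	unsigned long user_locked, extra;

	/* pre-patch formula: stale locked_vm inflates extra past user_extra */
	user_locked = locked_vm + user_extra;			/* 1026 */
	extra = user_locked - limit;				/* 897  */
	printf("old: extra=%lu > user_extra=%lu\n", extra, user_extra);

	/* patched formula: clamp locked_vm to the limit before adding */
	user_locked = (locked_vm > limit ? limit : locked_vm) + user_extra;
	extra = user_locked - limit;				/* 513  */
	printf("new: extra=%lu <= user_extra=%lu\n", extra, user_extra);
	return 0;
}

In the code being patched (the Fixes target, c4b75479741c), perf_mmap() goes
on to subtract extra from user_extra so that pages charged to mm->pinned_vm
are not also charged to user->locked_vm; once extra can exceed user_extra,
that subtraction underflows and the locked-memory counters are left
corrupted, which is how unrelated users of the counter, such as BPF map
creation, start failing. The clamp guarantees extra <= user_extra.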
Diffstat (limited to 'kernel/events')
-rw-r--r--	kernel/events/core.c	10
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 16af86ab24c4..8c70ee23fbe9 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -5709,7 +5709,15 @@ accounting:
 	 */
 	user_lock_limit *= num_online_cpus();
 
-	user_locked = atomic_long_read(&user->locked_vm) + user_extra;
+	user_locked = atomic_long_read(&user->locked_vm);
+
+	/*
+	 * sysctl_perf_event_mlock may have changed, so that
+	 *     user->locked_vm > user_lock_limit
+	 */
+	if (user_locked > user_lock_limit)
+		user_locked = user_lock_limit;
+	user_locked += user_extra;
 
 	if (user_locked > user_lock_limit)
 		extra = user_locked - user_lock_limit;
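The same clamp, condensed into a standalone helper for readability (a
hypothetical perf_mlock_extra(), not something the patch adds; the patch
open-codes this sequence inside perf_mmap()):

/* Sketch: pages to charge to mm->pinned_vm rather than user->locked_vm. */
static unsigned long perf_mlock_extra(unsigned long locked_vm,
				      unsigned long user_extra,
				      unsigned long user_lock_limit)
{
	/* sysctl_perf_event_mlock may have dropped since locked_vm grew */
	if (locked_vm > user_lock_limit)
		locked_vm = user_lock_limit;
	locked_vm += user_extra;

	/* the result never exceeds user_extra, so the follow-on
	 * accounting that deducts it cannot underflow
	 */
	return locked_vm > user_lock_limit ? locked_vm - user_lock_limit : 0;
}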