author     Andi Kleen <ak@linux.intel.com>                    2013-08-16 14:17:19 -0700
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org>    2015-10-22 14:37:53 -0700
commit     f7c7bb9f06eb2af9c03b29e019d45ad4e943f900 (patch)
tree       5039b989cbe038a8c2c2b6228e648c5cf1426e78 /fs/compat_ioctl.c
parent     8938c10543801cbb9d85efd3f317184e405b45bc (diff)
x86: Add 1/2/4/8 byte optimization to 64bit __copy_{from,to}_user_inatomic
commit ff47ab4ff3cddfa7bc1b25b990e24abe2ae474ff upstream.
The 64bit __copy_{from,to}_user_inatomic always called
copy_from_user_generic, but skipped the special optimizations for 1/2/4/8
byte accesses.
This especially hurts the futex call, which accesses the 4 byte futex
user value with a complicated fast string operation in a function call,
instead of a single movl.
Use __copy_{from,to}_user for _inatomic instead to get the same
optimizations. The only problem was the might_fault() in those functions.
So move that to a new wrapper and call __copy_{f,t}_user_nocheck()
from *_inatomic directly.
32bit already did this correctly by duplicating the code.
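
As a rough illustration of the restructuring described above, the sketch below models the wrapper split in plain C: the optimized copy lives in a *_nocheck helper, the checked __copy_from_user wrapper adds the fault check, and the _inatomic variant calls the helper directly so it keeps the 1/2/4/8 byte fast path without might_fault(). The *_model names and the might_fault_model() stub are illustrative stand-ins, not the actual definitions from arch/x86 uaccess code.

#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for the kernel's might_fault() debug check (illustrative only). */
static void might_fault_model(void)
{
	/* In the kernel this may sleep/warn; it must not run in atomic context. */
}

/* Stand-in for copy_user_generic(): the "fast string" bulk copy. */
static unsigned long copy_user_generic_model(void *dst, const void *src, unsigned size)
{
	memcpy(dst, src, size);
	return 0;	/* bytes not copied */
}

/*
 * Inner helper: constant 1/2/4/8 byte sizes become a single move,
 * everything else falls back to the generic copy. No fault check here.
 */
static unsigned long __copy_from_user_nocheck_model(void *dst, const void *src, unsigned size)
{
	switch (size) {
	case 1: *(unsigned char *)dst  = *(const unsigned char *)src;  return 0;
	case 2: *(unsigned short *)dst = *(const unsigned short *)src; return 0;
	case 4: *(unsigned int *)dst   = *(const unsigned int *)src;   return 0;
	case 8: *(unsigned long *)dst  = *(const unsigned long *)src;  return 0;
	default:
		return copy_user_generic_model(dst, src, size);
	}
}

/* Checked wrapper: adds the fault check, then uses the same inner helper. */
static unsigned long __copy_from_user_model(void *dst, const void *src, unsigned size)
{
	might_fault_model();
	return __copy_from_user_nocheck_model(dst, src, size);
}

/* Atomic-context variant: calls the inner helper directly, no might_fault(). */
static unsigned long __copy_from_user_inatomic_model(void *dst, const void *src, unsigned size)
{
	return __copy_from_user_nocheck_model(dst, src, size);
}

int main(void)
{
	unsigned int futex_val = 42, copy = 0;

	/* A 4 byte access like the futex read now boils down to a single move. */
	__copy_from_user_inatomic_model(&copy, &futex_val, sizeof(copy));
	printf("%u\n", copy);
	(void)__copy_from_user_model(&copy, &futex_val, sizeof(copy));
	return 0;
}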
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Link: http://lkml.kernel.org/r/1376687844-19857-2-git-send-email-andi@firstfloor.org
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>