author      Ingo Molnar <mingo@kernel.org>    2015-04-22 11:52:13 +0200
committer   Ingo Molnar <mingo@kernel.org>    2015-05-19 15:47:17 +0200
commit      81683cc8277e79decff4d0cf82ae0e17d2fe465f
tree        6b5d271fe62cd0016becc0129db4edcc19fc580a /arch/x86/kernel
parent      11ad19277e025f914518bc2943a240cdd37cf844
x86/fpu: Factor out fpu__flush_thread() from flush_thread()
flush_thread() open-codes a lot of FPU internals - factor that code out
into a separate function, fpu__flush_thread(), in fpu/core.c.
It turns out that this does not hurt performance - the generated kernel image is unchanged:

    text     data      bss       dec     hex  filename
11843039  1884440  1130496  14857975   e2b6f7  vmlinux.before
11843039  1884440  1130496  14857975   e2b6f7  vmlinux.after

and since this is a slow path, clarity comes first anyway.
We can reconsider inlining decisions after the FPU code has been cleaned up.
Reviewed-by: Borislav Petkov <bp@alien8.de>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Fenghua Yu <fenghua.yu@intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Oleg Nesterov <oleg@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'arch/x86/kernel')
-rw-r--r--  arch/x86/kernel/fpu/core.c | 17
-rw-r--r--  arch/x86/kernel/process.c  | 14
2 files changed, 18 insertions(+), 13 deletions(-)
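Note that the diffstat above is limited to arch/x86/kernel, so the prototype that lets process.c call the new function is not shown here; the complete commit adds it to the x86 FPU internal header. A minimal sketch of that declaration (the exact header path is an assumption):

/* arch/x86/include/asm/fpu-internal.h (assumed location at this point in the series) */
extern void fpu__flush_thread(struct task_struct *tsk);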
diff --git a/arch/x86/kernel/fpu/core.c b/arch/x86/kernel/fpu/core.c
index 9211582f5d3f..787bf57b8422 100644
--- a/arch/x86/kernel/fpu/core.c
+++ b/arch/x86/kernel/fpu/core.c
@@ -227,6 +227,23 @@ static int fpu__unlazy_stopped(struct task_struct *child)
 	return 0;
 }
 
+void fpu__flush_thread(struct task_struct *tsk)
+{
+	if (!use_eager_fpu()) {
+		/* FPU state will be reallocated lazily at the first use. */
+		drop_fpu(tsk);
+		fpstate_free(&tsk->thread.fpu);
+	} else {
+		if (!tsk_used_math(tsk)) {
+			/* kthread execs. TODO: cleanup this horror. */
+			if (WARN_ON(fpstate_alloc_init(tsk)))
+				force_sig(SIGKILL, tsk);
+			user_fpu_begin();
+		}
+		restore_init_xstate();
+	}
+}
+
 /*
  * The xstateregs_active() routine is the same as the fpregs_active() routine,
  * as the "regset->n" for the xstate regset will be updated based on the feature
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 6ab180f40a7e..52fd8f6f44c7 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -146,19 +146,7 @@ void flush_thread(void)
 	flush_ptrace_hw_breakpoint(tsk);
 	memset(tsk->thread.tls_array, 0, sizeof(tsk->thread.tls_array));
 
-	if (!use_eager_fpu()) {
-		/* FPU state will be reallocated lazily at the first use. */
-		drop_fpu(tsk);
-		fpstate_free(&tsk->thread.fpu);
-	} else {
-		if (!tsk_used_math(tsk)) {
-			/* kthread execs. TODO: cleanup this horror. */
-			if (WARN_ON(fpstate_alloc_init(tsk)))
-				force_sig(SIGKILL, tsk);
-			user_fpu_begin();
-		}
-		restore_init_xstate();
-	}
+	fpu__flush_thread(tsk);
 }
 
 static void hard_disable_TSC(void)
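For reference, this is how flush_thread() in process.c reads once the hunk above is applied. The tsk initialization is not visible in the hunk context and is assumed to be the usual assignment from current:

void flush_thread(void)
{
	struct task_struct *tsk = current;	/* assumed: outside the hunk context shown above */

	flush_ptrace_hw_breakpoint(tsk);
	memset(tsk->thread.tls_array, 0, sizeof(tsk->thread.tls_array));

	fpu__flush_thread(tsk);
}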