path: root/arch/alpha/lib
Commit message | Author | Date | Files | Lines
* alpha: fix lazy-FPU mis(merged/applied/whatnot)  (Al Viro, 2023-03-06, 1 file, -2/+2)

      Looks like a braino that used to be fixed in e.g. #next.alpha had gotten into alpha.git cherry-picked version of that patch. Sure, alpha has no preempt, but preempt_enable() in place of preempt_disable() is actively confusing the readers... Other than that, the cherry-picked variant matches what I have.

      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* alpha: lazy FPU switching  (Al Viro, 2023-02-24, 1 file, -6/+37)

      On each context switch we save the FPU registers on stack of old process and restore FPU registers from the stack of new one. That allows us to avoid doing that each time we enter/leave the kernel mode; however, that can get suboptimal in some cases.

      For one thing, we don't need to bother saving anything for kernel threads. For another, if between entering and leaving the kernel a thread gives CPU up more than once, it will do useless work, saving the same values every time, only to discard the saved copy as soon as it returns from switch_to().

      Alternative solution:

      * move the array we save into from switch_stack to thread_info
      * have a (thread-synchronous) flag set when we save them
      * have another flag set when they should be restored on return to userland.
      * do *NOT* save/restore them in do_switch_stack()/undo_switch_stack().
      * restore on the exit to user mode if the restore flag had been set. Clear both flags.
      * on context switch, entry to fork/clone/vfork, before entry into do_signal() and on entry into straced syscall save the registers and set the 'saved' flag unless it had been already set.
      * on context switch set the 'restore' flag as well.
      * have copy_thread() set both flags for child, so the registers would be restored once the child returns to userland.
      * use the saved data in setup_sigcontext(); have restore_sigcontext() set both flags and copy from sigframe to save area.
      * teach ptrace to look for FPU registers in thread_info instead of switch_stack.
      * teach isolated accesses to FPU registers (rdfpcr, wrfpcr, etc.) to check the 'saved' flag (under preempt_disable()) and work with the save area if it's been set; if 'saved' flag is found upon write access, set 'restore' flag as well.

      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Matt Turner <mattst88@gmail.com>
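      [A minimal standalone sketch of the two-flag scheme described above; the names below (fpu_area, saved, restore, fpu_save_once, ...) are illustrative, not the actual alpha identifiers.]

          #include <stdbool.h>

          /* Illustrative model only -- not the kernel's data layout. */
          struct fpu_area {
                  unsigned long fpr[32];
                  bool saved;     /* FP regs have been dumped into fpr[]      */
                  bool restore;   /* fpr[] must be reloaded on return to user */
          };

          /* Called on context switch, fork/clone entry, signal delivery, ptrace stop. */
          static void fpu_save_once(struct fpu_area *a)
          {
                  if (!a->saved) {
                          /* ... store the hardware FP registers into a->fpr ... */
                          a->saved = true;
                  }
          }

          /* Called on the exit path back to user mode. */
          static void fpu_restore_if_needed(struct fpu_area *a)
          {
                  if (a->restore) {
                          /* ... reload the hardware FP registers from a->fpr ... */
                          a->saved = a->restore = false;
                  }
          }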
* alpha: Implement "current_stack_pointer"  (Kees Cook, 2023-02-14, 1 file, -1/+1)

      To follow the existing per-arch conventions replace open-coded use of asm "$30" as "current_stack_pointer". This will let it be used in non-arch places (like HARDENED_USERCOPY).

      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Geert Uytterhoeven <geert@linux-m68k.org>
      Cc: Andrew Morton <akpm@linux-foundation.org>
      Cc: Mike Rapoport <rppt@kernel.org>
      Cc: Mark Rutland <mark.rutland@arm.com>
      Cc: "Peter Zijlstra (Intel)" <peterz@infradead.org>
      Cc: Kefeng Wang <wangkefeng.wang@huawei.com>
      Cc: "Alexander A. Klimov" <grandmaster@al2klimov.de>
      Cc: linux-alpha@vger.kernel.org
      Signed-off-by: Kees Cook <keescook@chromium.org>
      Signed-off-by: Matt Turner <mattst88@gmail.com>
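      [On other architectures this convention boils down to a single global register variable; assuming alpha follows the same pattern, the declaration would look roughly like:]

          register unsigned long current_stack_pointer __asm__("$30");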
* net: unexport csum_and_copy_{from,to}_user  (Christoph Hellwig, 2022-04-29, 1 file, -1/+0)

      csum_and_copy_from_user and csum_and_copy_to_user are exported by a few architectures, but not actually used in modular code. Drop the exports.

      Link: https://lkml.kernel.org/r/20220421070440.1282704-1-hch@lst.de
      Signed-off-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Jakub Kicinski <kuba@kernel.org>
      Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Acked-by: Arnd Bergmann <arnd@arndb.de>
      Acked-by: Michael Ellerman <mpe@ellerman.id.au> (powerpc)
      Cc: David Miller <davem@davemloft.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
* alpha: move __udiv_qrnnd library function to arch/alpha/lib/  (Linus Torvalds, 2021-09-18, 2 files, -0/+166)

      We already had the implementation for __udiv_qrnnd (unsigned divide for multi-precision arithmetic) as part of the alpha math emulation code.

      But you can disable the math emulation code - even if you shouldn't - and then the MPI code that actually wants this functionality (and is needed by various crypto functions) will fail to build.

      So move the extended-precision divide code to be a regular library function, just like all the regular division code is. That way it is available regardless of math emulation.

      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* alpha: csum_partial_copy.c: add function prototypes from <net/checksum.h>  (Randy Dunlap, 2021-05-06, 1 file, -0/+1)

      Fix "no previous prototype" W=1 warnings from the kernel test robot:

        arch/alpha/lib/csum_partial_copy.c:349:1: error: no previous prototype for 'csum_and_copy_from_user' [-Werror=missing-prototypes]
          349 | csum_and_copy_from_user(const void __user *src, void *dst, int len)
              | ^~~~~~~~~~~~~~~~~~~~~~~
        arch/alpha/lib/csum_partial_copy.c:358:1: error: no previous prototype for 'csum_partial_copy_nocheck' [-Werror=missing-prototypes]
          358 | csum_partial_copy_nocheck(const void *src, void *dst, int len)
              | ^~~~~~~~~~~~~~~~~~~~~~~~~

      Link: https://lkml.kernel.org/r/20210425235749.19113-1-rdunlap@infradead.org
      Fixes: 808b49da54e6 ("alpha: turn csum_partial_copy_from_user() into csum_and_copy_from_user()")
      Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
      Reported-by: kernel test robot <lkp@intel.com>
      Cc: Al Viro <viro@zeniv.linux.org.uk>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: Matt Turner <mattst88@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* alpha: propagate the calling convention changes down to csum_partial_copy.c helpers  (Al Viro, 2020-08-20, 1 file, -88/+69)

      get rid of set_fs() in csum_partial_copy_nocheck(), while we are at it - just take the part of csum_and_copy_from_user() sans the access_ok() check into a helper function and have csum_partial_copy_nocheck() call that.

      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* saner calling conventions for csum_and_copy_..._user()  (Al Viro, 2020-08-20, 1 file, -14/+11)

      All callers of these primitives will
      * discard anything we might've copied in case of error
      * ignore the csum value in case of error
      * always pass 0xffffffff as the initial sum, so the resulting csum value (in case of success, that is) will never be 0.

      That suggests the following calling conventions:
      * don't pass err_ptr - just return 0 on error.
      * don't bother with zeroing destination, etc. in case of error
      * don't pass the initial sum - just use 0xffffffff.

      This commit does the minimal conversion in the instances of csum_and_copy_...(); the changes of actual asm code behind them are done later in the series. Note that this asm code is often shared with csum_partial_copy_nocheck(); the difference is that csum_partial_copy_nocheck() passes 0 for initial sum while csum_and_copy_..._user() pass 0xffffffff. Fortunately, we are free to pass 0xffffffff in all cases and subsequent patches will use that freedom without any special comments.

      A part that could be split off: parisc and uml/i386 claimed to have csum_and_copy_to_user() instances of their own, but those were identical to the generic one, so we simply drop them. Not sure if it's worth a separate commit...

      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
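      [For reference, the calling-convention change described above, sketched as prototypes; treat the exact spelling as illustrative of the interface, not a quote of the patch.]

          /* before: error reported through err_ptr, caller supplies the initial sum */
          __wsum csum_and_copy_from_user(const void __user *src, void *dst,
                                         int len, __wsum sum, int *err_ptr);

          /* after: initial sum is fixed at 0xffffffff internally, 0 is returned on error */
          __wsum csum_and_copy_from_user(const void __user *src, void *dst, int len);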
* csum_partial_copy_nocheck(): drop the last argument  (Al Viro, 2020-08-20, 1 file, -2/+2)

      It's always 0. Note that we theoretically could use ~0U as well - result will be the same modulo 0xffff, _if_ the damn thing did the right thing for any value of initial sum; later we'll make use of that when convenient.

      However, unlike csum_and_copy_..._user(), there are instances that did not work for arbitrary initial sums; c6x is one such.

      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* alpha: turn csum_partial_copy_from_user() into csum_and_copy_from_user()  (Al Viro, 2020-05-29, 1 file, -3/+3)

      It's already doing the right thing - it does access_ok() and the wrapper in net/checksum.h is pointless here. Just rename it and be done with that...

      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* Remove 'type' argument from access_ok() function  (Linus Torvalds, 2019-01-03, 1 file, -1/+1)

      Nobody has actually used the type (VERIFY_READ vs VERIFY_WRITE) argument of the user address range verification function since we got rid of the old racy i386-only code to walk page tables by hand.

      It existed because the original 80386 would not honor the write protect bit when in kernel mode, so you had to do COW by hand before doing any user access. But we haven't supported that in a long time, and these days the 'type' argument is a purely historical artifact.

      A discussion about extending 'user_access_begin()' to do the range checking resulted in this patch, because there is no way we're going to move the old VERIFY_xyz interface to that model. And it's best done at the end of the merge window when I've done most of my merges, so let's just get this done once and for all.

      This patch was mostly done with a sed-script, with manual fix-ups for the cases that weren't of the trivial 'access_ok(VERIFY_xyz' form.

      There were a couple of notable cases:

       - csky still had the old "verify_area()" name as an alias.
       - the iter_iov code had magical hardcoded knowledge of the actual values of VERIFY_{READ,WRITE} (not that they mattered, since nothing really used it)
       - microblaze used the type argument for a debug printout

      but other than those oddities this should be a total no-op patch.

      I tried to fix up all architectures, did fairly extensive grepping for access_ok() uses, and the changes are trivial, but I may have missed something. Any missed conversion should be trivially fixable, though.

      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
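      [The mechanical change at call sites, for reference:]

          /* before */
          if (!access_ok(VERIFY_WRITE, buf, len))
                  return -EFAULT;

          /* after */
          if (!access_ok(buf, len))
                  return -EFAULT;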
* alpha: Remove custom dec_and_lock() implementation  (Sebastian Andrzej Siewior, 2018-06-12, 2 files, -46/+0)

      Alpha provides a custom implementation of dec_and_lock(). The function is split into two parts:
      - atomic_add_unless() + return 0 (fast path in assembly)
      - remaining part including locking (slow path in C)

      Comparing the result of the alpha implementation with the generic implementation compiled by gcc it looks like the fast path is optimized by avoiding a stack frame (and reloading the GP), register store and all this. This is only done in the slowpath.

      After marking the slowpath (atomic_dec_and_lock_1()) as "noinline" and doing the slowpath in C (the atomic_add_unless(atomic, -1, 1) part) I noticed differences in the resulting assembly:
      - the GP is still reloaded
      - atomic_add_unless() adds more memory barriers compared to the custom assembly
      - the custom assembly here does "load, sub, beq" while atomic_add_unless() does "load, cmpeq, add, bne". This is okay because it compares against zero after subtraction while the generic code compares against 1 before.

      I'm not sure if avoiding the stack frame (and GP reloading) brings a lot in terms of performance. Regarding the different barriers, Peter Zijlstra says:

      |refcount decrement needs to be a RELEASE operation, such that all the
      |load/stores to the object happen before we decrement the refcount.
      |
      |Otherwise things like:
      |
      |        obj->foo = 5;
      |        refcnt_dec(&obj->ref);
      |
      |can be re-ordered, which then allows fun scenarios like:
      |
      |        CPU0                            CPU1
      |
      |        refcnt_dec(&obj->ref);
      |                                        if (dec_and_test(&obj->ref))
      |                                                free(obj);
      |        obj->foo = 5; // oops UaF
      |
      |This means (for alpha) that there should be a memory barrier _before_
      |the decrement, however the dec_and_lock asm thing only has one _after_,
      |which, per the above, is too late.
      |
      |The generic version using add_unless will result in memory barrier
      |before and after (because that is the rule for atomic ops with a return
      |value) which is strictly too many barriers for the refcount story, but
      |who knows what other ordering requirements code has.

      Remove the custom alpha implementation of dec_and_lock() and if it is an issue (performance wise) then the fast path could still be inlined.

      Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
      Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
      Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: linux-alpha@vger.kernel.org
      Link: https://lkml.kernel.org/r/20180606115918.GG12198@hirez.programming.kicks-ass.net
      Link: https://lkml.kernel.org/r20180612161621.22645-2-bigeasy@linutronix.de
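      [For context, the generic fallback being switched to is roughly the following (as found in lib/dec_and_lock.c, minor details aside):]

          int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
          {
                  /* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */
                  if (atomic_add_unless(atomic, -1, 1))
                          return 0;

                  /* Otherwise do it the slow way */
                  spin_lock(lock);
                  if (atomic_dec_and_test(atomic))
                          return 1;
                  spin_unlock(lock);
                  return 0;
          }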
* alpha: extend memset16 to EV6 optimised routines  (Michael Cree, 2018-01-16, 1 file, -6/+6)

      Commit 92ce4c3ea7c4, "alpha: add support for memset16", renamed the function memsetw() to be memset16() but neglected to do this for the EV6 optimised version, thus when building a kernel optimised for EV6 (or later) link errors result. This extends the memset16 support to EV6.

      Signed-off-by: Michael Cree <mcree@orcon.net.nz>
      Signed-off-by: Matt Turner <mattst88@gmail.com>
* License cleanup: add SPDX GPL-2.0 license identifier to files with no license  (Greg Kroah-Hartman, 2017-11-02, 48 files, -0/+48)

      Many source files in the tree are missing licensing information, which makes it harder for compliance tools to determine the correct license.

      By default all files without license information are under the default license of the kernel, which is GPL version 2.

      Update the files which contain no license information with the 'GPL-2.0' SPDX license identifier. The SPDX identifier is a legally binding shorthand, which can be used instead of the full boiler plate text.

      This patch is based on work done by Thomas Gleixner and Kate Stewart and Philippe Ombredanne.

      How this work was done:

      Patches were generated and checked against linux-4.14-rc6 for a subset of the use cases:
       - file had no licensing information in it.
       - file was a */uapi/* one with no licensing information in it,
       - file was a */uapi/* one with existing licensing information,

      Further patches will be generated in subsequent months to fix up cases where non-standard license headers were used, and references to license had to be inferred by heuristics based on keywords.

      The analysis to determine which SPDX License Identifier to be applied to a file was done in a spreadsheet of side by side results from of the output of two independent scanners (ScanCode & Windriver) producing SPDX tag:value files created by Philippe Ombredanne. Philippe prepared the base worksheet, and did an initial spot review of a few 1000 files.

      The 4.13 kernel was the starting point of the analysis with 60,537 files assessed. Kate Stewart did a file by file comparison of the scanner results in the spreadsheet to determine which SPDX license identifier(s) to be applied to the file. She confirmed any determination that was not immediately clear with lawyers working with the Linux Foundation.

      Criteria used to select files for SPDX license identifier tagging was:
       - Files considered eligible had to be source code files.
       - Make and config files were included as candidates if they contained >5 lines of source
       - File already had some variant of a license header in it (even if <5 lines).

      All documentation files were explicitly excluded.

      The following heuristics were used to determine which SPDX license identifiers to apply.

       - when both scanners couldn't find any license traces, file was considered to have no license information in it, and the top level COPYING file license applied. For non */uapi/* files that summary was:

           SPDX license identifier                              # files
           ---------------------------------------------------|--------
           GPL-2.0                                                11139

         and resulted in the first patch in this series.

         If that file was a */uapi/* path one, it was "GPL-2.0 WITH Linux-syscall-note" otherwise it was "GPL-2.0". Results of that was:

           SPDX license identifier                              # files
           ---------------------------------------------------|--------
           GPL-2.0 WITH Linux-syscall-note                          930

         and resulted in the second patch in this series.

       - if a file had some form of licensing information in it, and was one of the */uapi/* ones, it was denoted with the Linux-syscall-note if any GPL family license was found in the file or had no licensing in it (per prior point). Results summary:

           SPDX license identifier                              # files
           ---------------------------------------------------|--------
           GPL-2.0 WITH Linux-syscall-note                          270
           GPL-2.0+ WITH Linux-syscall-note                         169
           ((GPL-2.0 WITH Linux-syscall-note) OR BSD-2-Clause)       21
           ((GPL-2.0 WITH Linux-syscall-note) OR BSD-3-Clause)       17
           LGPL-2.1+ WITH Linux-syscall-note                         15
           GPL-1.0+ WITH Linux-syscall-note                          14
           ((GPL-2.0+ WITH Linux-syscall-note) OR BSD-3-Clause)       5
           LGPL-2.0+ WITH Linux-syscall-note                          4
           LGPL-2.1 WITH Linux-syscall-note                           3
           ((GPL-2.0 WITH Linux-syscall-note) OR MIT)                 3
           ((GPL-2.0 WITH Linux-syscall-note) AND MIT)                1

         and that resulted in the third patch in this series.

       - when the two scanners agreed on the detected license(s), that became the concluded license(s).

       - when there was disagreement between the two scanners (one detected a license but the other didn't, or they both detected different licenses) a manual inspection of the file occurred.

       - In most cases a manual inspection of the information in the file resulted in a clear resolution of the license that should apply (and which scanner probably needed to revisit its heuristics).

       - When it was not immediately clear, the license identifier was confirmed with lawyers working with the Linux Foundation.

       - If there was any question as to the appropriate license identifier, the file was flagged for further research and to be revisited later in time.

      In total, over 70 hours of logged manual review was done on the spreadsheet to determine the SPDX license identifiers to apply to the source files by Kate, Philippe, Thomas and, in some cases, confirmation by lawyers working with the Linux Foundation.

      Kate also obtained a third independent scan of the 4.13 code base from FOSSology, and compared selected files where the other two scanners disagreed against that SPDX file, to see if there was new insights. The Windriver scanner is based on an older version of FOSSology in part, so they are related.

      Thomas did random spot checks in about 500 files from the spreadsheets for the uapi headers and agreed with SPDX license identifier in the files he inspected. For the non-uapi files Thomas did random spot checks in about 15000 files.

      In initial set of patches against 4.14-rc6, 3 files were found to have copy/paste license identifier errors, and have been fixed to reflect the correct identifier.

      Additionally Philippe spent 10 hours this week doing a detailed manual inspection and review of the 12,461 patched files from the initial patch version early this week with:
       - a full scancode scan run, collecting the matched texts, detected license ids and scores
       - reviewing anything where there was a license detected (about 500+ files) to ensure that the applied SPDX license was correct
       - reviewing anything where there was no detection but the patch license was not GPL-2.0 WITH Linux-syscall-note to ensure that the applied SPDX license was correct

      This produced a worksheet with 20 files needing minor correction. This worksheet was then exported into 3 different .csv files for the different types of files to be modified.

      These .csv files were then reviewed by Greg. Thomas wrote a script to parse the csv files and add the proper SPDX tag to the file, in the format that the file expected. This script was further refined by Greg based on the output to detect more types of files automatically and to distinguish between header and source .c files (which need different comment types.) Finally Greg ran the script using the .csv files to generate the patches.

      Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
      Reviewed-by: Philippe Ombredanne <pombredanne@nexb.com>
      Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* alpha: add support for memset16  (Matthew Wilcox, 2017-09-08, 1 file, -5/+5)

      Alpha already had an optimised fill-memory-with-16-bit-quantity assembler routine called memsetw(). It has a slightly different calling convention from memset16() in that it takes a byte count, not a count of words. That's the same convention used by ARM's __memset routines, so rename Alpha's routine to match and add a memset16() wrapper around it. Then convert Alpha's scr_memsetw() to call memset16() instead of memsetw().

      Link: http://lkml.kernel.org/r/20170720184539.31609-6-willy@infradead.org
      Signed-off-by: Matthew Wilcox <mawilcox@microsoft.com>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: "H. Peter Anvin" <hpa@zytor.com>
      Cc: "James E.J. Bottomley" <jejb@linux.vnet.ibm.com>
      Cc: "Martin K. Petersen" <martin.petersen@oracle.com>
      Cc: David Miller <davem@davemloft.net>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: Michael Ellerman <mpe@ellerman.id.au>
      Cc: Minchan Kim <minchan@kernel.org>
      Cc: Ralf Baechle <ralf@linux-mips.org>
      Cc: Russell King <rmk+kernel@armlinux.org.uk>
      Cc: Sam Ravnborg <sam@ravnborg.org>
      Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
      Cc: Thomas Gleixner <tglx@linutronix.de>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
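      [A sketch of the wrapper described above; symbol names are illustrative, the key point being that the underlying routine takes a byte count while memset16() takes a count of 16-bit words.]

          #include <stdint.h>
          #include <stddef.h>

          /* assembler routine: fills byte_count bytes with the 16-bit value c */
          extern void *__memset16(void *dest, unsigned short c, size_t byte_count);

          static inline void *memset16(uint16_t *p, uint16_t v, size_t count)
          {
                  return __memset16(p, v, count * 2);   /* words -> bytes */
          }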
* alpha: Fix typo in ev6-copy_user.S  (Richard Henderson, 2017-08-29, 2 files, -4/+5)

      Patch 8525023121de4848b5f0a7d867ffeadbc477774d introduced a typo. That said, the identity AND insns added by that patch are more clearly written as MOV. At the same time, re-schedule the ev6 version so that the first dispatch can execute in parallel.

      Signed-off-by: Richard Henderson <rth@twiddle.net>
      Signed-off-by: Matt Turner <mattst88@gmail.com>
* alpha: Package string routines together  (Richard Henderson, 2017-08-29, 1 file, -6/+16)

      There are direct branches between {str*cpy,str*cat} and stx*cpy. Ensure the branches are within range by merging these objects.

      Signed-off-by: Richard Henderson <rth@twiddle.net>
      Signed-off-by: Matt Turner <mattst88@gmail.com>
*   Merge tag 'kbuild-v4.12' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild  (Linus Torvalds, 2017-05-10, 1 file, -8/+3)
|\
      Pull Kbuild updates from Masahiro Yamada:

       - improve Clang support
       - clean up various Makefiles
       - improve build log visibility (objtool, alpha, ia64)
       - improve compiler flag evaluation for better build performance
       - fix GCC version-dependent warning
       - fix genksyms

      * tag 'kbuild-v4.12' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild: (23 commits)
        kbuild: dtbinst: remove unnecessary __dtbs_install_prep target
        ia64: beatify build log for gate.so and gate-syms.o
        alpha: make short build log available for division routines
        alpha: merge build rules of division routines
        alpha: add $(src)/ rather than $(obj)/ to make source file path
        Makefile: evaluate LDFLAGS_BUILD_ID only once
        objtool: make it visible in make V=1 output
        kbuild: clang: add -no-integrated-as to KBUILD_[AC]FLAGS
        kbuild: Add support to generate LLVM assembly files
        kbuild: Add better clang cross build support
        kbuild: drop -Wno-unknown-warning-option from clang options
        kbuild: fix asm-offset generation to work with clang
        kbuild: consolidate redundant sed script ASM offset generation
        frv: Use OFFSET macro in DEF_*REG()
        kbuild: avoid conflict between -ffunction-sections and -pg on gcc-4.7
        kbuild: Consolidate header generation from ASM offset information
        kbuild: use -Oz instead of -Os when using clang
        kbuild, LLVMLinux: Add -Werror to cc-option to support clang
        Kbuild: make designated_init attribute fatal
        kbuild: drop unneeded patterns '.*.orig' and '.*.rej' from distclean
        ...
| * alpha: make short build log available for division routines  (Masahiro Yamada, 2017-05-03, 1 file, -2/+2)

      This enables the Kbuild standard log style as follows:

        AS      arch/alpha/lib/__divlu.o
        AS      arch/alpha/lib/__divqu.o
        AS      arch/alpha/lib/__remlu.o
        AS      arch/alpha/lib/__remqu.o

      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
| * alpha: merge build rules of division routines  (Masahiro Yamada, 2017-05-03, 1 file, -7/+2)

      These four objects are generated by the same build rule, with different compile options. The build rules can be merged.

      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
| * alpha: add $(src)/ rather than $(obj)/ to make source file path  (Masahiro Yamada, 2017-05-03, 1 file, -4/+4)

      $(ev6-y)divide.S is a source file, not a build-time generated file. So, it should be prefixed with $(src)/ rather than $(obj)/.

      Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
* | alpha: add a helper for emitting exception table entries  (Al Viro, 2017-03-28, 1 file, -8/+2)

      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* | alpha: switch __copy_user() and __do_clean_user() to normal calling conventions  (Al Viro, 2017-03-28, 4 files, -196/+140)
|/
      They used to need odd calling conventions due to old exception handling mechanism, the last remnants of which had disappeared back in 2002.

      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* Replace <asm/uaccess.h> with <linux/uaccess.h> globally  (Linus Torvalds, 2016-12-24, 1 file, -1/+1)

      This was entirely automated, using the script by Al:

        PATT='^[[:blank:]]*#[[:blank:]]*include[[:blank:]]*<asm/uaccess.h>'
        sed -i -e "s!$PATT!#include <linux/uaccess.h>!" \
                $(git grep -l "$PATT"|grep -v ^include/linux/uaccess.h)

      to do the replacement at the end of the merge window.

      Requested-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
*   Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs  (Linus Torvalds, 2016-10-14, 2 files, -37/+2)
|\
      Pull more misc uaccess and vfs updates from Al Viro:
      "The rest of the stuff from -next (more uaccess work) + assorted fixes"

      * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
        score: traps: Add missing include file to fix build error
        fs/super.c: don't fool lockdep in freeze_super() and thaw_super() paths
        fs/super.c: fix race between freeze_super() and thaw_super()
        overlayfs: Fix setting IOP_XATTR flag
        iov_iter: kernel-doc import_iovec() and rw_copy_check_uvector()
        blackfin: no access_ok() for __copy_{to,from}_user()
        arm64: don't zero in __copy_from_user{,_inatomic}
        arm: don't zero in __copy_from_user_inatomic()/__copy_from_user()
        arc: don't leak bits of kernel stack into coredump
        alpha: get rid of tail-zeroing in __copy_user()
| * alpha: get rid of tail-zeroing in __copy_user()  (Al Viro, 2016-09-15, 2 files, -37/+2)

      ... and adjust copy_from_user() accordingly

      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* | alpha: move exports to actual definitions  (Al Viro, 2016-08-07, 36 files, -26/+92)
|/
      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
* ipv4: Update parameters for csum_tcpudp_magic to their original types  (Alexander Duyck, 2016-03-13, 1 file, -6/+2)

      This patch updates all instances of csum_tcpudp_magic and csum_tcpudp_nofold to reflect the types that are usually used as the source inputs. For example the protocol field is populated based on nexthdr which is actually an unsigned 8 bit value. The length is usually populated based on skb->len which is an unsigned integer.

      This addresses an issue in which the IPv6 function csum_ipv6_magic was generating a checksum using the full 32b of skb->len while csum_tcpudp_magic was only using the lower 16 bits. As a result we could run into issues when attempting to adjust the checksum as there was no protocol agnostic way to update it.

      With this change the value is still truncated as many architectures use "(len + proto) << 8", however this truncation only occurs for values greater than 16776960 in length and as such is unlikely to occur as we stop the inner headers at ~64K in size.

      I did have to make a few minor changes in the arm, mn10300, nios2, and score versions of the function in order to support these changes as they were either using things such as an OR to combine the protocol and length, or were using ntohs to convert the length which would have truncated the value.

      I also updated a few spots in terms of whitespace and type differences for the addresses. Most of this was just to make sure all of the definitions were in sync going forward.

      Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
      Signed-off-by: David S. Miller <davem@davemloft.net>
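      [For reference, the resulting prototypes look roughly like this: the length is widened to __u32 and the protocol narrowed to __u8, matching skb->len and nexthdr.]

          __wsum  csum_tcpudp_nofold(__be32 saddr, __be32 daddr,
                                     __u32 len, __u8 proto, __wsum sum);
          __sum16 csum_tcpudp_magic(__be32 saddr, __be32 daddr,
                                    __u32 len, __u8 proto, __wsum sum);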
* alpha: lib: export __delay  (Sudip Mukherjee, 2015-09-17, 1 file, -0/+1)

      __delay was not exported; as a result, while building with allmodconfig we were getting a build error of undefined symbol. __delay is being used by: drivers/net/phy/mdio-octeon.c

      Signed-off-by: Sudip Mukherjee <sudip@vectorindia.org>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: Matt Turner <mattst88@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
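      [Given the one-line diffstat, the fix presumably amounts to the usual export next to the definition, along these lines:]

          EXPORT_SYMBOL(__delay);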
* alpha: fix broken network checksum  (Mikulas Patocka, 2014-01-31, 1 file, -2/+7)

      The patch 3ddc5b46a8e90f3c9251338b60191d0a804b0d92 breaks networking on alpha (there is a follow-up fix 5cfe8f1ba5eebe6f4b6e5858cdb1a5be4f3272a6, but networking is still broken even with the second patch).

      The patch 3ddc5b46a8e90f3c9251338b60191d0a804b0d92 makes csum_partial_copy_from_user check the pointer with access_ok. However, csum_partial_copy_from_user is called also from csum_partial_copy_nocheck and csum_partial_copy_nocheck is called on kernel pointers and it is supposed not to check pointer validity.

      This bug results in ssh session hangs if the system is loaded and bulk data are printed to ssh terminal.

      This patch fixes csum_partial_copy_nocheck to call set_fs(KERNEL_DS), so that access_ok in csum_partial_copy_from_user accepts kernel-space addresses.

      Cc: stable@vger.kernel.org
      Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
      Signed-off-by: Matt Turner <mattst88@gmail.com>
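      [A rough sketch of the shape of that fix, using the set_fs()/get_fs() interface of that era; argument lists are simplified and illustrative.]

          __wsum
          csum_partial_copy_nocheck(const void *src, void *dst, int len, __wsum sum)
          {
                  mm_segment_t oldfs = get_fs();
                  __wsum checksum;

                  set_fs(KERNEL_DS);      /* make access_ok() accept kernel addresses */
                  checksum = csum_partial_copy_from_user((__force const void __user *)src,
                                                         dst, len, sum, NULL);
                  set_fs(oldfs);
                  return checksum;
          }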
* alpha: Prevent a NULL ptr dereference in csum_partial_copy.  (Jay Estabrook, 2013-11-16, 1 file, -5/+5)

      Introduced by 3ddc5b46a8e90f3c92 ("kernel-wide: fix missing validations on __get/__put/__copy_to/__copy_from_user()").

      Also fix some other places which could be problematic in a similar way, although they hadn't been proved so, as far as I can tell.

      Cc: Michael Cree <mcree@orcon.net.nz>
      Signed-off-by: Matt Turner <mattst88@gmail.com>
* alpha: Eliminate compiler warning from memset macro  (Richard Henderson, 2013-11-16, 2 files, -9/+14)

      Compiling with GCC 4.8 yields several instances of

        crypto/vmac.c: In function ‘vmac_final’:
        crypto/vmac.c:616:9: warning: value computed is not used [-Wunused-value]
          memset(&mac, 0, sizeof(vmac_t));
          ^
        arch/alpha/include/asm/string.h:31:25: note: in definition of macro ‘memset’
           ? __builtin_memset((s),0,(n)) \
                                ^

      Converting the macro to an inline function eliminates this problem. However, doing only that causes problems with the GCC 3.x series. The inline function cannot be named "memset", as otherwise we wind up with recursion via __builtin_memset. Solve this by adjusting the symbols such that __memset is the inline, and ___memset is the real function.

      Signed-off-by: Richard Henderson <rth@twiddle.net>
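      [A simplified sketch of the rename-and-inline scheme described above; the real header also keeps its constant-zero fast path, which is omitted here.]

          extern void *___memset(void *s, int c, size_t n);   /* the real routine */

          static inline void *__memset(void *s, int c, size_t n)
          {
                  /* a call expression with a discarded value does not warn */
                  return ___memset(s, c, n);
          }

          #define memset(s, c, n) __memset((s), (c), (n))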
* kernel-wide: fix missing validations on __get/__put/__copy_to/__copy_from_user()  (Mathieu Desnoyers, 2013-09-11, 1 file, -0/+5)

      I found the following pattern that leads in to interesting findings:

        grep -r "ret.*|=.*__put_user" *
        grep -r "ret.*|=.*__get_user" *
        grep -r "ret.*|=.*__copy" *

      The __put_user() calls in compat_ioctl.c, ptrace compat, signal compat, since those appear in compat code, we could probably expect the kernel addresses not to be reachable in the lower 32-bit range, so I think they might not be exploitable.

      For the "__get_user" cases, I don't think those are exploitable: the worse that can happen is that the kernel will copy kernel memory into in-kernel buffers, and will fail immediately afterward.

      The alpha csum_partial_copy_from_user() seems to be missing the access_ok() check entirely. The fix is inspired from x86. This could lead to information leak on alpha.

      I also noticed that many architectures map csum_partial_copy_from_user() to csum_partial_copy_generic(), but I wonder if the latter is performing the access checks on every architectures.

      Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: Matt Turner <mattst88@gmail.com>
      Cc: Jens Axboe <axboe@kernel.dk>
      Cc: Oleg Nesterov <oleg@redhat.com>
      Cc: David Miller <davem@davemloft.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* alpha: Use new generic strncpy_from_user() and strnlen_user()  (Michael Cree, 2012-08-19, 5 files, -963/+0)

      Similar to x86/sparc/powerpc implementations except:

      1) we implement an extremely efficient has_zero()/find_zero() sequence with both prep_zero_mask() and create_zero_mask() no-operations.

      2) Our output from prep_zero_mask() differs in that only the lowest eight bits are used to represent the zero bytes nevertheless it can be safely ORed with other similar masks from prep_zero_mask() and forms input to create_zero_mask(), the two fundamental properties prep_zero_mask() must satisfy.

      Tests on EV67 and EV68 CPUs revealed that the generic code is essentially as fast (to within 0.5% of CPU cycles) of the old Alpha specific code for large quadword-aligned strings, despite the 30% extra CPU instructions executed. In contrast, the generic code for unaligned strings is substantially slower (by more than a factor of 3) than the old Alpha specific code.

      Signed-off-by: Michael Cree <mcree@orcon.net.nz>
      Acked-by: Matt Turner <mattst88@gmail.com>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
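      [The efficiency comes from alpha's cmpbge instruction, which yields one result bit per byte; a conceptual sketch of the has_zero() hook, not the literal kernel code (which also threads a struct word_at_a_time argument through):]

          /* bit i of the result is set iff byte i of 'val' is zero (0 >= byte) */
          static inline unsigned long has_zero(unsigned long val, unsigned long *bits)
          {
                  *bits = __builtin_alpha_cmpbge(0, val);
                  return *bits;
          }

          /* prep_zero_mask()/create_zero_mask() can then be (near) no-ops, since the
           * low eight bits already encode the zero-byte positions. */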
* Disintegrate asm/system.h for Alpha  (David Howells, 2012-03-28, 1 file, -1/+0)

      Disintegrate asm/system.h for Alpha.

      Signed-off-by: David Howells <dhowells@redhat.com>
      cc: linux-alpha@vger.kernel.org
* atomic: use <linux/atomic.h>  (Arun Sharma, 2011-07-26, 1 file, -1/+1)

      This allows us to move duplicated code in <asm/atomic.h> (atomic_inc_not_zero() for now) to <linux/atomic.h>

      Signed-off-by: Arun Sharma <asharma@fb.com>
      Reviewed-by: Eric Dumazet <eric.dumazet@gmail.com>
      Cc: Ingo Molnar <mingo@elte.hu>
      Cc: David Miller <davem@davemloft.net>
      Cc: Eric Dumazet <eric.dumazet@gmail.com>
      Acked-by: Mike Frysinger <vapier@gentoo.org>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* Fix common misspellings  (Lucas De Marchi, 2011-03-31, 3 files, -3/+3)

      Fixes generated by 'codespell' and manually reviewed.

      Signed-off-by: Lucas De Marchi <lucas.demarchi@profusion.mobi>
* alpha: change to new Makefile flag variables  (matt mooney, 2011-01-17, 1 file, -2/+2)

      Signed-off-by: matt mooney <mfm@muteddisk.com>
      Signed-off-by: Matt Turner <mattst88@gmail.com>
* remove __attribute_used__  (Adrian Bunk, 2008-01-28, 1 file, -2/+1)

      Remove the deprecated __attribute_used__.

      [Introduce __section in a few places to silence checkpatch /sam]

      Signed-off-by: Adrian Bunk <bunk@kernel.org>
      Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
* alpha: strncpy/strncat fixes  (Ivan Kokshaysky, 2007-12-17, 3 files, -15/+15)

      First of all, thanks to Bob Tracy <rct@frus.com> and Michael Cree <mcree@orcon.net.nz> for testing. Especially to Bob, as he has done titanic multi-day git-bisect work that finally helped to reproduce and nail down the bug (http://bugzilla.kernel.org/show_bug.cgi?id=9457).

      [ev6-]stxncpy.S: it's t12, not t2 register that is supposed to contain the last byte offset upon return. As a result of wrong register use (which was my fault back in 2003, IIRC), under some circumstances extra terminating zero bytes were added to destination string. This particularly led to incorrect DEVPATH strings generated in uevent and therefore to udev problems.

      strncpy.S: unrelated bug I found while testing the above fix - destination is not properly zero-padded when a byte count exceeds source length. Actually this is addition to strncpy fix from last year.

      Signed-off-by: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Bob Tracy <rct@frus.com>
      Cc: Michael Cree <mcree@orcon.net.nz>
      Cc: Kay Sievers <kay.sievers@vrfy.org>
      Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* spelling fixes: arch/alpha/  (Simon Arlott, 2007-10-20, 2 files, -2/+2)

      Spelling fixes in arch/alpha/.

      Signed-off-by: Simon Arlott <simon@fire.lp0.eu>
      Signed-off-by: Adrian Bunk <bunk@kernel.org>
* remove asm/bitops.h includes  (Jiri Slaby, 2007-10-19, 1 file, -1/+1)

      Including asm/bitops.h directly may cause compile errors. Don't include it; include linux/bitops.h instead. The next patch will deny including the asm header directly.

      Cc: Adrian Bunk <bunk@kernel.org>
      Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* kbuild: enable 'make CFLAGS=...' to add additional options to CC  (Sam Ravnborg, 2007-10-14, 1 file, -1/+1)

      The variable CFLAGS is a well-known variable and its usage by kbuild may result in unexpected behaviour. On top of that, several people over time have asked for a way to pass in additional flags to gcc.

      This patch replaces use of CFLAGS with KBUILD_CFLAGS all over the tree, enabling one to use:

        make CFLAGS=...

      to specify additional gcc command-line options. One use case is when trying to find gcc bugs, but other use cases have been requested too.

      Patch was tested on the following architectures: alpha, arm, i386, x86_64, mips, sparc, sparc64, ia64, m68k

      The test was simply to do a defconfig build, apply the patch and check that nothing got rebuilt.

      Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
* missing exports of csum_...  (Al Viro, 2007-07-17, 1 file, -0/+1)

      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Acked-by: David S. Miller <davem@davemloft.net>
      Acked-by: Geert Uytterhoeven <geert@linux-m68k.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* alpha: fix alignment problem in csum_ipv6_magic()  (Ivan Kokshaysky, 2007-06-24, 2 files, -23/+70)

      Hopefully this fixes http://bugzilla.kernel.org/show_bug.cgi?id=8635

      The struct in6_addr passed to csum_ipv6_magic() is 4 byte aligned, so we can't use the regular 64-bit loads. Since the cost of handling of 4 byte and 1 byte aligned 64-bit data is roughly the same, this code can cope with any src/dst [mis]alignment.

      Signed-off-by: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
      Cc: Richard Henderson <rth@twiddle.net>
      Cc: Dustin Marquess <jailbird@alcatraz.fdf.net>
      Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* alpha: cleanup in bitops.h  (Richard Henderson, 2007-05-30, 2 files, -1/+40)

      Remove 2 functions private to the alpha implementation, in favor of similar functions in <linux/log2.h>. Provide a more efficient version of the fls64 function for pre-ev67 alphas.

      Signed-off-by: Richard Henderson <rth@twiddle.net>
      Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* [STRING]: Move strcasecmp/strncasecmp to lib/string.c  (David S. Miller, 2007-04-26, 2 files, -27/+0)

      We have several platforms using local copies of identical code.

      Signed-off-by: David S. Miller <davem@davemloft.net>
* [NET]: Alpha checksum annotations and cleanups.  (Al Viro, 2006-12-02, 2 files, -42/+26)

      * sanitize prototypes and annotate
      * kill useless access_ok() in csum_partial_copy_from_user() (the only caller checks it already).
      * do_csum_partial_copy_from_user() is not needed now
      * replace htons(len) with len << 8 - they are the same wrt checksums on little-endian.

      Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
      Signed-off-by: David S. Miller <davem@davemloft.net>
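      [Why htons(len) and len << 8 are interchangeable here: a ones'-complement checksum is taken modulo 0xffff, where 0x10000 == 1, so both values fold to the same 16-bit result on a little-endian machine. A quick standalone check of that claim (assumes a little-endian host):]

          #include <assert.h>
          #include <arpa/inet.h>

          /* end-around-carry fold to 16 bits, as checksum code does */
          static unsigned fold(unsigned long x)
          {
                  while (x >> 16)
                          x = (x & 0xffff) + (x >> 16);
                  return x;
          }

          int main(void)
          {
                  for (unsigned len = 0; len <= 0xffff; len++)
                          assert(fold(htons(len)) == fold((unsigned long)len << 8));
                  return 0;
          }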
* fix file specification in comments  (Uwe Zeisberger, 2006-10-03, 3 files, -3/+3)

      Many files include the filename at the beginning; several used a wrong one.

      Signed-off-by: Uwe Zeisberger <Uwe_Zeisberger@digi.com>
      Signed-off-by: Adrian Bunk <bunk@stusta.de>
* Remove obsolete #include <linux/config.h>  (Jörn Engel, 2006-06-30, 2 files, -2/+0)

      Signed-off-by: Jörn Engel <joern@wohnheim.fh-wedel.de>
      Signed-off-by: Adrian Bunk <bunk@stusta.de>