path: root/include/linux/llist.h
Commit log, newest first. Each entry: subject (Author, Date; files changed, lines -deleted/+added)
* include/linux/llist.h: replace kernel.h with the necessary inclusions (Andy Shevchenko, 2021-11-09; 1 file, -1/+3)
  When kernel.h is used in headers it adds a lot to dependency hell,
  especially when circular dependencies are involved. Replace the kernel.h
  inclusion with the list of what is really being used.

  Link: https://lkml.kernel.org/r/20211013170417.87909-6-andriy.shevchenko@linux.intel.com
  Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Cc: Boqun Feng <boqun.feng@gmail.com>
  Cc: Brendan Higgins <brendanhiggins@google.com>
  Cc: Ingo Molnar <mingo@redhat.com>
  Cc: Jonathan Cameron <jic23@kernel.org>
  Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
  Cc: Mauro Carvalho Chehab <mchehab@kernel.org>
  Cc: Miguel Ojeda <miguel.ojeda.sandonis@gmail.com>
  Cc: Peter Zijlstra <peterz@infradead.org>
  Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk>
  Cc: Sakari Ailus <sakari.ailus@linux.intel.com>
  Cc: Thomas Gleixner <tglx@linutronix.de>
  Cc: Thorsten Leemhuis <regressions@leemhuis.info>
  Cc: Waiman Long <longman@redhat.com>
  Cc: Will Deacon <will@kernel.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
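  The shape of the change, as a sketch: the exact replacement headers below
  are an assumption inferred from the -1/+3 diffstat and what llist.h uses
  (container_of, offsetof/NULL, bool), not copied from the commit.

      -#include <linux/kernel.h>
      +#include <linux/container_of.h>
      +#include <linux/stddef.h>
      +#include <linux/types.h>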
* llist: Add nonatomic __llist_add() and __llist_del_all() (Peter Zijlstra, 2020-10-12; 1 file, -0/+23)
  We'll use these in the new, lockless kretprobes code.

  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
  Signed-off-by: Ingo Molnar <mingo@kernel.org>
  Link: https://lore.kernel.org/r/159870619318.1229682.13027387548510028721.stgit@devnote2
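  The nonatomic variants drop the cmpxchg/xchg and update the head with
  plain stores, for callers that already hold a lock or otherwise exclude
  concurrency. A sketch (the bodies are the obvious single-threaded
  versions, not verbatim from the commit):

      static inline bool __llist_add(struct llist_node *new,
                                     struct llist_head *head)
      {
              new->next = head->first;
              head->first = new;
              return new->next == NULL;       /* true if the list was empty */
      }

      static inline struct llist_node *__llist_del_all(struct llist_head *head)
      {
              struct llist_node *first = head->first;

              head->first = NULL;
              return first;
      }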
* treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 333 (Thomas Gleixner, 2019-06-05; 1 file, -13/+1)
  Based on 1 normalized pattern(s):

      this program is free software you can redistribute it and or modify
      it under the terms of the gnu general public license version 2 as
      published by the free software foundation this program is distributed
      in the hope that it will be useful but without any warranty without
      even the implied warranty of merchantability or fitness for a
      particular purpose see the gnu general public license for more
      details you should have received a copy of the gnu general public
      license along with this program if not write to the free software
      foundation inc 59 temple place suite 330 boston ma 02111 1307 usa

  extracted by the scancode license scanner the SPDX license identifier

      GPL-2.0-only

  has been chosen to replace the boilerplate/reference in 136 file(s).

  Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
  Reviewed-by: Alexios Zavras <alexios.zavras@intel.com>
  Reviewed-by: Allison Randal <allison@lohutok.net>
  Cc: linux-spdx@vger.kernel.org
  Link: https://lkml.kernel.org/r/20190530000436.384967451@linutronix.de
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
* locking/atomics: COCCINELLE/treewide: Convert trivial ACCESS_ONCE() patterns to READ_ONCE()/WRITE_ONCE() (Mark Rutland, 2017-10-25; 1 file, -1/+1)
  Please do not apply this to mainline directly; instead please re-run the
  coccinelle script shown below and apply its output.

  For several reasons, it is desirable to use {READ,WRITE}_ONCE() in
  preference to ACCESS_ONCE(), and new code is expected to use one of the
  former. So far, there's been no reason to change most existing uses of
  ACCESS_ONCE(), as these aren't harmful, and changing them results in
  churn.

  However, for some features the read/write distinction is critical to
  correct operation. To distinguish these cases, separate read/write
  accessors must be used. This patch migrates (most) remaining
  ACCESS_ONCE() instances to {READ,WRITE}_ONCE(), using the following
  coccinelle script:

  ----
  // Convert trivial ACCESS_ONCE() uses to equivalent READ_ONCE() and
  // WRITE_ONCE()

  // $ make coccicheck COCCI=/home/mark/once.cocci SPFLAGS="--include-headers" MODE=patch

  virtual patch

  @ depends on patch @
  expression E1, E2;
  @@

  - ACCESS_ONCE(E1) = E2
  + WRITE_ONCE(E1, E2)

  @ depends on patch @
  expression E;
  @@

  - ACCESS_ONCE(E)
  + READ_ONCE(E)
  ----

  Signed-off-by: Mark Rutland <mark.rutland@arm.com>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Cc: Linus Torvalds <torvalds@linux-foundation.org>
  Cc: Peter Zijlstra <peterz@infradead.org>
  Cc: Thomas Gleixner <tglx@linutronix.de>
  Cc: davem@davemloft.net
  Cc: linux-arch@vger.kernel.org
  Cc: mpe@ellerman.id.au
  Cc: shuah@kernel.org
  Cc: snitzer@redhat.com
  Cc: thor.thayer@linux.intel.com
  Cc: tj@kernel.org
  Cc: viro@zeniv.linux.org.uk
  Cc: will.deacon@arm.com
  Link: http://lkml.kernel.org/r/1508792849-3115-19-git-send-email-paulmck@linux.vnet.ibm.com
  Signed-off-by: Ingo Molnar <mingo@kernel.org>
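  For this header the conversion is a single read (-1/+1); plausibly the
  lone ACCESS_ONCE() in llist_empty(), which under the script's second rule
  becomes (an inferred example, not the verbatim hunk):

      static inline bool llist_empty(const struct llist_head *head)
      {
      -       return ACCESS_ONCE(head->first) == NULL;
      +       return READ_ONCE(head->first) == NULL;
      }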
* llist: clang: introduce member_address_is_nonnull() (Alexander Potapenko, 2017-07-19; 1 file, -2/+19)
  Currently llist_for_each_entry() and llist_for_each_entry_safe() iterate
  until &pos->member != NULL. But when building the kernel with Clang, the
  compiler assumes &pos->member cannot be NULL if the member's offset is
  greater than 0 (which would be equivalent to the object being
  non-contiguous in memory). Therefore the loop condition is always true,
  and the loops become infinite.

  To work around this, introduce the member_address_is_nonnull() macro,
  which casts the object pointer to uintptr_t, thus allowing the member
  pointer to be NULL.

  Signed-off-by: Alexander Potapenko <glider@google.com>
  Tested-by: Sodagudi Prasad <psodagud@codeaurora.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
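  The macro computes the member address in integer arithmetic, which the
  compiler cannot reason away. As it appears in llist.h it is essentially:

      #define member_address_is_nonnull(ptr, member) \
              ((uintptr_t)(ptr) + offsetof(typeof(*(ptr)), member) != 0)

  The iterators then use member_address_is_nonnull(pos, member) as the loop
  condition instead of testing &pos->member directly.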
* llist: Provide a safe version for llist_for_each() (Byungchul Park, 2017-05-23; 1 file, -0/+19)
  Sometimes we have to dereference the next field of an llist node before
  entering the loop, because the node might be deleted or the next field
  might be modified within the loop. So this adds the safe version of
  llist_for_each(), that is, llist_for_each_safe().

  Signed-off-by: Byungchul Park <byungchul.park@lge.com>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Reviewed-by: Huang, Ying <ying.huang@intel.com>
  Cc: <kernel-team@lge.com>
  Cc: Linus Torvalds <torvalds@linux-foundation.org>
  Cc: Peter Zijlstra <peterz@infradead.org>
  Cc: Thomas Gleixner <tglx@linutronix.de>
  Link: http://lkml.kernel.org/r/1494549416-10539-1-git-send-email-byungchul.park@lge.com
  Signed-off-by: Ingo Molnar <mingo@kernel.org>
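  A sketch of the iterator, close to its in-tree form: the next pointer is
  saved into n before the loop body runs, so pos may be freed or re-linked
  safely.

      #define llist_for_each_safe(pos, n, node)                       \
              for ((pos) = (node);                                    \
                   (pos) && ((n) = (pos)->next, true);                \
                   (pos) = (n))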
* llist: Clarify comments about when locking is needed (Joel Fernandes, 2017-01-23; 1 file, -16/+21)
  llist.h comments are confusing about when locking is needed versus when
  it isn't. Clarify these comments by being more descriptive about why
  locking is needed for llist_del_first.

  Cc: Ingo Molnar <mingo@kernel.org>
  Cc: Will Deacon <will.deacon@arm.com>
  Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
  Acked-by: Huang Ying <ying.huang@intel.com>
  Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
  Signed-off-by: Joel Fernandes <joelaf@google.com>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Reviewed-by: Josh Triplett <josh@joshtriplett.org>
* locking, include/llist: Use linux/atomic.h instead of asm/cmpxchg.h (Will Deacon, 2015-08-12; 1 file, -1/+1)
  Including an asm/ header directly is best avoided, so use linux/atomic.h
  instead of asm/cmpxchg.h in linux/llist.h.

  Signed-off-by: Will Deacon <will.deacon@arm.com>
  Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
  Cc: Linus Torvalds <torvalds@linux-foundation.org>
  Cc: Peter Zijlstra <peterz@infradead.org>
  Cc: Thomas Gleixner <tglx@linutronix.de>
  Cc: Waiman.Long@hp.com
  Cc: paulmck@linux.vnet.ibm.com
  Link: http://lkml.kernel.org/r/1438880084-18856-8-git-send-email-will.deacon@arm.com
  Signed-off-by: Ingo Molnar <mingo@kernel.org>
* llists: move llist_reverse_order from raid5 to llist.c (Christoph Hellwig, 2013-11-15; 1 file, -0/+2)
  Make this useful helper available for other users.

  Signed-off-by: Christoph Hellwig <hch@lst.de>
  Reviewed-by: Jan Kara <jack@suse.cz>
  Cc: Jens Axboe <axboe@kernel.dk>
  Cc: Neil Brown <neilb@suse.de>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
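  The helper, as it lives in lib/llist.c, is a straightforward in-place
  reversal; roughly:

      struct llist_node *llist_reverse_order(struct llist_node *head)
      {
              struct llist_node *new_head = NULL;

              while (head) {
                      struct llist_node *tmp = head;

                      head = head->next;
                      tmp->next = new_head;   /* re-link in reverse */
                      new_head = tmp;
              }

              return new_head;
      }

  This pairs naturally with llist_del_all(), which hands back entries
  newest-first.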
* tty: Use lockless flip buffer free list (Peter Hurley, 2013-07-23; 1 file, -0/+23)
  In preparation for lockless flip buffers, make the flip buffer free list
  lockless.

  NB: using llist is not the optimal solution, as the driver and buffer
  work may contend over the llist head unnecessarily. However, test
  measurements indicate this contention is low.

  Signed-off-by: Peter Hurley <peter@hurleysoftware.com>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
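  A hypothetical sketch of the pattern (the field and variable names below
  are illustrative, not the actual tty_buffer layout): producers push freed
  buffers onto the llist from any context, and a single consumer drains it.

      /* free path: push the buffer onto the lockless free list */
      llist_add(&buf->free, &port->free_list);

      /* alloc path (single consumer): take back everything at once */
      struct llist_node *freed = llist_del_all(&port->free_list);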
* llist: llist_add() can use llist_add_batch() (Oleg Nesterov, 2013-07-13; 1 file, -10/+4)
  llist_add(new, head) can simply use llist_add_batch(new, new, head), no
  need to duplicate the code.

  This obviously uninlines llist_add() and to me this is a win. But we can
  make llist_add_batch() inline if this is desirable; in this case gcc can
  notice that new_first == new_last if the caller is llist_add().

  Signed-off-by: Oleg Nesterov <oleg@redhat.com>
  Cc: Al Viro <viro@zeniv.linux.org.uk>
  Cc: Andrey Vagin <avagin@openvz.org>
  Cc: "Eric W. Biederman" <ebiederm@xmission.com>
  Cc: David Howells <dhowells@redhat.com>
  Cc: Huang Ying <ying.huang@intel.com>
  Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
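  After the change, the header wrapper reduces to a one-liner (sketch of
  the resulting form):

      static inline bool llist_add(struct llist_node *new,
                                   struct llist_head *head)
      {
              return llist_add_batch(new, new, head);
      }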
* llist: fix/simplify llist_add() and llist_add_batch() (Oleg Nesterov, 2013-07-13; 1 file, -12/+7)
  1. This is mostly theoretical, but llist_add*() need ACCESS_ONCE().
     Otherwise it is not guaranteed that the first cmpxchg() uses the same
     value for old_entry and new_last->next.

  2. These helpers cache the result of cmpxchg() and read the initial
     value of head->first before the main loop. I do not think this makes
     sense. In the likely case cmpxchg() succeeds, otherwise it doesn't
     hurt to reload head->first. I think it would be better to simplify
     the code and simply read ->first before cmpxchg().

  Signed-off-by: Oleg Nesterov <oleg@redhat.com>
  Cc: Al Viro <viro@zeniv.linux.org.uk>
  Cc: Andrey Vagin <avagin@openvz.org>
  Cc: "Eric W. Biederman" <ebiederm@xmission.com>
  Cc: David Howells <dhowells@redhat.com>
  Cc: Huang Ying <ying.huang@intel.com>
  Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
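  The simplified loop reads head->first fresh on every iteration and
  publishes through a single cmpxchg(); roughly the resulting code:

      bool llist_add_batch(struct llist_node *new_first,
                           struct llist_node *new_last,
                           struct llist_head *head)
      {
              struct llist_node *first;

              do {
                      new_last->next = first = ACCESS_ONCE(head->first);
              } while (cmpxchg(&head->first, first, new_first) != first);

              return !first;          /* true if the list was empty */
      }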
* Add #includes needed to permit the removal of asm/system.h (David Howells, 2012-03-28; 1 file, -2/+1)
  asm/system.h is a cause of circular dependency problems because it
  contains commonly used primitive stuff like barrier definitions and
  uncommonly used stuff like switch_to() that might require MMU
  definitions.

  asm/system.h has been disintegrated by this point on all arches into the
  following common segments:

   (1) asm/barrier.h    Moved memory barrier definitions here.
   (2) asm/cmpxchg.h    Moved xchg() and cmpxchg() here. #included in asm/atomic.h.
   (3) asm/bug.h        Moved die() and similar here.
   (4) asm/exec.h       Moved arch_align_stack() here.
   (5) asm/elf.h        Moved AT_VECTOR_SIZE_ARCH here.
   (6) asm/switch_to.h  Moved switch_to() here.

  Signed-off-by: David Howells <dhowells@redhat.com>
* llist-return-whether-list-is-empty-before-adding-in-llist_add-fix (Andrew Morton, 2011-10-31; 1 file, -1/+1)
  clarify comment

  Cc: Huang Ying <ying.huang@intel.com>
  Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
  Cc: Peter Zijlstra <peterz@infradead.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* llist: Add back llist_add_batch() and llist_del_first() prototypes (Stephen Rothwell, 2011-10-11; 1 file, -0/+6)
  Commit 1230db8e1543 ("llist: Make some llist functions inline") has
  deleted the definitions, causing problems for (not upstream yet) code
  that tries to make use of them.

  Signed-off-by: Stephen Rothwell <sfr@canb.auug.org.au>
  Acked-by: Peter Zijlstra <peterz@infradead.org>
  Cc: Huang Ying <ying.huang@intel.com>
  Cc: David Miller <davem@davemloft.net>
  Link: http://lkml.kernel.org/r/20111005172528.0d0a8afc65acef7ace22a24e@canb.auug.org.au
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
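  The restored declarations are presumably along these lines (the
  out-of-line definitions live in lib/llist.c):

      extern bool llist_add_batch(struct llist_node *new_first,
                                  struct llist_node *new_last,
                                  struct llist_head *head);
      extern struct llist_node *llist_del_first(struct llist_head *head);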
* llist: Remove cpu_relax() usage in cmpxchg loops (Peter Zijlstra, 2011-10-04; 1 file, -1/+0)
  Initial benchmarks show they're a net loss:

      $ for i in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor ; do echo performance > $i; done
      $ echo 4096 32000 64 128 > /proc/sys/kernel/sem
      $ ./sembench -t 2048 -w 1900 -o 0

  Pre:

      run time 30 seconds 778936 worker burns per second
      run time 30 seconds 912190 worker burns per second
      run time 30 seconds 817506 worker burns per second
      run time 30 seconds 830870 worker burns per second
      run time 30 seconds 845056 worker burns per second

  Post:

      run time 30 seconds 905920 worker burns per second
      run time 30 seconds 849046 worker burns per second
      run time 30 seconds 886286 worker burns per second
      run time 30 seconds 822320 worker burns per second
      run time 30 seconds 900283 worker burns per second

  So about 4% faster. (!)

  cpu_relax() stalls the pipeline; therefore, when used in a tight loop it
  has the following benefits:

   - allows SMT siblings to have a go;
   - reduces pressure on the CPU interconnect.

  However, cmpxchg loops are unfair and thus have unbounded completion
  time, therefore we should avoid getting in such heavily contended
  situations where the above benefits make any difference. A typical
  cmpxchg loop should not go round more than a handful of times at worst,
  therefore adding extra delays just slows things down.

  Since the llist primitives are new, there aren't any bad users yet, and
  we should avoid growing them. Heavily contended sites should generally
  be better off using the ticket locks for serialization since they
  provide bounded completion times (fifo-fair over the cpus).

  Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
  Cc: Huang Ying <ying.huang@intel.com>
  Cc: Andrew Morton <akpm@linux-foundation.org>
  Link: http://lkml.kernel.org/r/1315836358.26517.43.camel@twins
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* llist: Add llist_next() (Peter Zijlstra, 2011-10-04; 1 file, -0/+5)
  So we don't have to expose the struct llist_node member.

  Cc: Huang Ying <ying.huang@intel.com>
  Cc: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
  Link: http://lkml.kernel.org/r/1315836348.26517.41.camel@twins
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
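  The accessor is trivial (a sketch matching its header form):

      static inline struct llist_node *llist_next(struct llist_node *node)
      {
              return node->next;
      }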
* llist: Return whether list is empty before adding in llist_add() (Huang Ying, 2011-10-04; 1 file, -1/+5)
  Extend the llist_add*() functions to return a success indicator; this
  allows us in the scheduler code to send an IPI if the queue was empty.

  (There's no effect on existing users, because the llist_add*() functions
  are inline, thus this will be optimized out by the compiler if not used
  by callers.)

  Signed-off-by: Huang Ying <ying.huang@intel.com>
  Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
  Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
  Link: http://lkml.kernel.org/r/1315461646-1379-5-git-send-email-ying.huang@intel.com
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
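  The intended caller pattern, as a sketch of the scheduler's remote-wakeup
  idea (names here are illustrative, not the exact scheduler code):

      /* Only kick the remote CPU when we were first to queue; it will
       * drain the whole list once it handles the IPI. */
      if (llist_add(&p->wake_entry, &rq->wake_list))
              smp_send_reschedule(cpu);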
* llist: Move cpu_relax() to after the cmpxchg() (Huang Ying, 2011-10-04; 1 file, -2/+5)
  If in llist_add()/etc. functions the first cmpxchg() call succeeds, it
  is not necessary to use cpu_relax() before the cmpxchg(). So cpu_relax()
  in a busy loop involving cmpxchg() should go after the cmpxchg() rather
  than before it. This patch fixes this for all involved llist functions.

  Signed-off-by: Huang Ying <ying.huang@intel.com>
  Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
  Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
  Link: http://lkml.kernel.org/r/1315461646-1379-4-git-send-email-ying.huang@intel.com
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
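  The effect on the loop shape, roughly (a sketch of the era's llist_add()
  body, not the verbatim diff):

      /* before: pay the cpu_relax() stall even on an uncontended add */
      entry = head->first;
      do {
              old_entry = entry;
              new->next = entry;
              cpu_relax();
      } while ((entry = cmpxchg(&head->first, old_entry, new)) != old_entry);

      /* after: only relax on the retry path */
      entry = head->first;
      for (;;) {
              old_entry = entry;
              new->next = entry;
              entry = cmpxchg(&head->first, old_entry, new);
              if (entry == old_entry)
                      break;
              cpu_relax();
      }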
* llist: Remove the platform-dependent NMI checks (Ingo Molnar, 2011-10-04; 1 file, -10/+2)
  Remove the in_nmi() checks spread around the code. in_nmi() is not
  available on every architecture and it's a pretty obscure and ugly check
  in any case.

  Cc: Huang Ying <ying.huang@intel.com>
  Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
  Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
  Link: http://lkml.kernel.org/r/1315461646-1379-3-git-send-email-ying.huang@intel.com
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* llist: Make some llist functions inline (Huang Ying, 2011-10-04; 1 file, -6/+58)
  Because llist code will be used in a performance-critical scheduler code
  path, make llist_add() and llist_del_all() inline to avoid function call
  overhead and related 'glue' overhead.

  Signed-off-by: Huang Ying <ying.huang@intel.com>
  Acked-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
  Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
  Link: http://lkml.kernel.org/r/1315461646-1379-2-git-send-email-ying.huang@intel.com
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
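  Inlined, llist_del_all() collapses to a single atomic exchange (a sketch
  of the header form):

      static inline struct llist_node *llist_del_all(struct llist_head *head)
      {
              return xchg(&head->first, NULL);
      }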
* lib, Add lock-less NULL terminated single list (Huang Ying, 2011-08-03; 1 file, -0/+126)
  Cmpxchg is used to implement adding new entries to the list, deleting
  all entries from the list, deleting the first entry of the list and some
  other operations. Because this is a singly linked list, the tail cannot
  be accessed in O(1).

  If there are multiple producers and multiple consumers, llist_add can be
  used in producers and llist_del_all can be used in consumers. They can
  work simultaneously without a lock. But llist_del_first cannot be used
  here, because llist_del_first depends on list->first->next not changing
  while list->first is unchanged during its operation; a llist_del_first,
  llist_add, llist_add (or llist_del_all, llist_add, llist_add) sequence
  in another consumer may violate that.

  If there are multiple producers and one consumer, llist_add can be used
  in the producers and llist_del_all or llist_del_first can be used in the
  consumer.

  This can be summarized as follows:

                |  add  | del_first | del_all
      add       |   -   |     -     |    -
      del_first |       |     L     |    L
      del_all   |       |           |    -

  where "-" means no lock is needed and "L" means a lock is needed.

  The list entries deleted via llist_del_all can be traversed with a
  traversing function such as llist_for_each etc. But the list entries
  cannot be traversed safely before they are deleted from the list. The
  order of deleted entries is from the newest to the oldest added one. If
  you want to traverse from the oldest to the newest, you must reverse the
  order yourself before traversing.

  The basic atomic operation of this list is cmpxchg on a long. On
  architectures that don't have an NMI-safe cmpxchg implementation, the
  list can NOT be used in an NMI handler. So code that uses the list in an
  NMI handler should depend on CONFIG_ARCH_HAVE_NMI_SAFE_CMPXCHG.

  Signed-off-by: Huang Ying <ying.huang@intel.com>
  Reviewed-by: Andi Kleen <ak@linux.intel.com>
  Reviewed-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
  Cc: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Len Brown <len.brown@intel.com>
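  A minimal sketch of the data structures and the lock-less
  multi-producer/single-drain pattern described above (the item type and
  process() are illustrative, not part of the API):

      #include <linux/llist.h>

      struct item {
              struct llist_node node;
              int payload;
      };

      LLIST_HEAD(my_list);

      /* producers, any context (NMI included where cmpxchg is NMI-safe) */
      void produce(struct item *it)
      {
              llist_add(&it->node, &my_list);
      }

      /* consumer: detach everything in one atomic operation, then walk
       * the now-private batch; entries come back newest-first */
      void consume(void)
      {
              struct llist_node *batch = llist_del_all(&my_list);
              struct item *it;

              llist_for_each_entry(it, batch, node)
                      process(it->payload);
      }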