path: root/lib/percpu_ida.c

* scsi: Remove percpu_ida (Matthew Wilcox, 2018-06-19, 1 file, -370/+0)

    With its one user gone, remove the library code.

    Signed-off-by: Matthew Wilcox <willy@infradead.org>
    Reviewed-by: Jens Axboe <axboe@kernel.dk>
    Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>

* lib/percpu_ida.c: use _irqsave() instead of local_irq_save() + spin_lock (Sebastian Andrzej Siewior, 2018-06-07, 1 file, -42/+21)

    percpu_ida() decouples disabling interrupts from the locking operations.
    This breaks some assumptions if the locking operations are replaced like
    they are under -RT. The same locking can be achieved by avoiding
    local_irq_save() and using spin_lock_irqsave() instead.

    percpu_ida_alloc() gains one more preemption point because after
    unlocking the fastpath and before the pool lock is acquired, the
    interrupts are briefly enabled.

    Link: http://lkml.kernel.org/r/20180504153218.7301-1-bigeasy@linutronix.de
    Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
    Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Nicholas Bellinger <nab@linux-iscsi.org>
    Cc: Shaohua Li <shli@fb.com>
    Cc: Kent Overstreet <kent.overstreet@gmail.com>
    Cc: Matthew Wilcox <willy@infradead.org>
    Cc: Jens Axboe <axboe@kernel.dk>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

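The locking pattern this commit moves to can be shown with a minimal sketch (not the actual percpu_ida code; the lock name is a placeholder). Under -RT, spinlocks become sleeping locks that do not actually disable interrupts, which is why the decoupled local_irq_save() form breaks there:

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(example_lock);   /* placeholder, not percpu_ida's pool lock */

    static void decoupled_form(void)
    {
            unsigned long flags;

            local_irq_save(flags);           /* irq disabling decoupled from locking */
            spin_lock(&example_lock);
            /* ... critical section ... */
            spin_unlock(&example_lock);
            local_irq_restore(flags);
    }

    static void combined_form(void)
    {
            unsigned long flags;

            spin_lock_irqsave(&example_lock, flags);   /* lock + irq-off in one operation */
            /* ... critical section ... */
            spin_unlock_irqrestore(&example_lock, flags);
    }
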
* sched/headers: Prepare to remove the <linux/gfp.h> include from <linux/sched.h> (Ingo Molnar, 2017-03-02, 1 file, -0/+1)

    <linux/topology.h> is still needed - also update other headers and .c
    files that depend on sched.h including gfp.h (and its sub-headers) for
    them.

    Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Mike Galbraith <efault@gmx.de>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Ingo Molnar <mingo@kernel.org>

* sched/headers: Prepare to move signal wakeup & sigpending methods from <linux/sched.h> into <linux/sched/signal.h> (Ingo Molnar, 2017-03-02, 1 file, -1/+1)

    Fix up affected files that include this signal functionality via sched.h.

    Acked-by: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Mike Galbraith <efault@gmx.de>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: linux-kernel@vger.kernel.org
    Signed-off-by: Ingo Molnar <mingo@kernel.org>

* mm, page_alloc: rename __GFP_WAIT to __GFP_RECLAIM (Mel Gorman, 2015-11-06, 1 file, -1/+1)

    __GFP_WAIT was used to signal that the caller was in atomic context and
    could not sleep. Now it is possible to distinguish between true atomic
    context and callers that are not willing to sleep. The latter should
    clear __GFP_DIRECT_RECLAIM so kswapd will still wake. As clearing
    __GFP_WAIT behaves differently, there is a risk that people will clear
    the wrong flags. This patch renames __GFP_WAIT to __GFP_RECLAIM to
    clearly indicate what it does -- setting it allows all reclaim activity,
    clearing it prevents it.

    [akpm@linux-foundation.org: fix build]
    [akpm@linux-foundation.org: coding-style fixes]
    Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
    Acked-by: Michal Hocko <mhocko@suse.com>
    Acked-by: Vlastimil Babka <vbabka@suse.cz>
    Acked-by: Johannes Weiner <hannes@cmpxchg.org>
    Cc: Christoph Lameter <cl@linux.com>
    Acked-by: David Rientjes <rientjes@google.com>
    Cc: Vitaly Wool <vitalywool@gmail.com>
    Cc: Rik van Riel <riel@redhat.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

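A small sketch of how the renamed flags are meant to be used; the allocation helper is hypothetical, while __GFP_RECLAIM and __GFP_DIRECT_RECLAIM are the flag names introduced by this series:

    #include <linux/gfp.h>
    #include <linux/slab.h>

    static void *alloc_example(bool can_sleep)
    {
            /*
             * __GFP_RECLAIM (the renamed __GFP_WAIT) allows all reclaim
             * activity.  A caller that must not enter direct reclaim, but
             * still wants kswapd woken, clears only __GFP_DIRECT_RECLAIM
             * instead of the whole mask.
             */
            gfp_t gfp = can_sleep ? GFP_KERNEL
                                  : GFP_KERNEL & ~__GFP_DIRECT_RECLAIM;

            return kmalloc(4096, gfp);
    }
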
* lib/percpu_ida.c: remove redundant includes (Rasmus Villemoes, 2015-02-12, 1 file, -3/+0)

    These three #includes seem to be completely redundant: removing them
    yields identical objdump -d output for each of {allyes,allno,def}config,
    and none of the included files ends up in the generated dependency file
    through some recursive include. In total, about 50 lines are eliminated
    from .percpu.o.cmd.

    Signed-off-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

* Merge branch 'for-linus' of git://git.kernel.dk/linux-block (Linus Torvalds, 2014-02-14, 1 file, -5/+2)

    Pull block IO fixes from Jens Axboe:
     "Second round of updates and fixes for 3.14-rc2. Most of this stuff
      has been queued up for a while. The notable exception is the blk-mq
      changes, which are naturally a bit more in flux still.

      The pull request contains:

       - Two bug fixes for the new immutable vecs, causing crashes with raid
         or swap. From Kent.

       - Various blk-mq tweaks and fixes from Christoph. A fix for integrity
         bio's from Nic.

       - A few bcache fixes from Kent and Darrick Wong.

       - xen-blk{front,back} fixes from David Vrabel, Matt Rushton, Nicolas
         Swenson, and Roger Pau Monne.

       - Fix for a vec miscount with integrity vectors from Martin.

       - Minor annotations or fixes from Masanari Iida and Rashika Kheria.

       - Tweak to null_blk to do more normal FIFO processing of requests from
         Shlomo Pongratz.

       - Elevator switching bypass fix from Tejun.

       - Softlockup in blkdev_issue_discard() fix when !CONFIG_PREEMPT from me"

    * 'for-linus' of git://git.kernel.dk/linux-block: (31 commits)
      block: add cond_resched() to potentially long running ioctl discard loop
      xen-blkback: init persistent_purge_work work_struct
      blk-mq: pair blk_mq_start_request / blk_mq_requeue_request
      blk-mq: dont assume rq->errors is set when returning an error from ->queue_rq
      block: Fix cloning of discard/write same bios
      block: Fix type mismatch in ssize_t_blk_mq_tag_sysfs_show
      blk-mq: rework flush sequencing logic
      null_blk: use blk_complete_request and blk_mq_complete_request
      virtio_blk: use blk_mq_complete_request
      blk-mq: rework I/O completions
      fs: Add prototype declaration to appropriate header file include/linux/bio.h
      fs: Mark function as static in fs/bio-integrity.c
      block/null_blk: Fix completion processing from LIFO to FIFO
      block: Explicitly handle discard/write same segments
      block: Fix nr_vecs for inline integrity vectors
      blk-mq: Add bio_integrity setup to blk_mq_make_request
      blk-mq: initialize sg_reserved_size
      blk-mq: handle dma_drain_size
      blk-mq: divert __blk_put_request for MQ ops
      blk-mq: support at_head inserations for blk_execute_rq
      ...

| * percpu_ida: fix a live lock (Shaohua Li, 2014-01-30, 1 file, -5/+2)

    steal_tags only happens when the number of free tags is more than half
    of the total tags. This is too strict and can cause a live lock. I found
    that if one cpu has free tags, but other cpus can't steal (threads are
    bound to specific cpus), threads which want to allocate tags are always
    sleeping. I found this when I ran the next patch, but I think it could
    happen without it.

    I also did a performance test with null_blk. In both cases (each cpu has
    enough percpu tags, or total tags are limited), no performance changes
    were observed.

    Signed-off-by: Shaohua Li <shli@fusionio.com>
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

* | iscsi-target: Fix connection reset hang with percpu_ida_alloc (Nicholas Bellinger, 2014-01-25, 1 file, -2/+7)

    This patch addresses a bug where connection reset would hang
    indefinitely once percpu_ida_alloc() was starved for tags, due to the
    fact that it always assumed uninterruptible sleep mode.

    So now make percpu_ida_alloc() check signal_pending_state() to make
    interruptible sleep optional, and convert iscsit_allocate_cmd() to set
    TASK_INTERRUPTIBLE for GFP_KERNEL, or TASK_RUNNING for GFP_ATOMIC.

    Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Kent Overstreet <kmo@daterainc.com>
    Cc: <stable@vger.kernel.org> #3.12+
    Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>

* | percpu_ida: Make percpu_ida_alloc + callers accept task state bitmask (Kent Overstreet, 2014-01-23, 1 file, -7/+9)

    This patch changes percpu_ida_alloc() + callers to accept a task state
    bitmask for prepare_to_wait(), for code like target/iscsi that needs it
    for interruptible sleep, that is provided in a subsequent patch.

    It now expects TASK_UNINTERRUPTIBLE when the caller is able to sleep
    waiting for a new tag, or TASK_RUNNING when the caller cannot sleep, and
    is forced to return a negative value when no tags are available.

    v2 changes:
      - Include blk-mq + tcm_fc + vhost/scsi + target/iscsi changes
      - Drop signal_pending_state() call

    v3 changes:
      - Only call prepare_to_wait() + finish_wait() when != TASK_RUNNING
        (PeterZ)

    Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Jens Axboe <axboe@kernel.dk>
    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
    Cc: <stable@vger.kernel.org> #3.12+
    Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>

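As a rough illustration of the calling convention this introduces: a sketch only, in which the wrapper function and its error handling are invented, while percpu_ida_alloc() and the task states come from the change itself:

    #include <linux/percpu_ida.h>
    #include <linux/sched.h>

    /* 'pool' is assumed to be an already-initialised struct percpu_ida. */
    static int get_tag(struct percpu_ida *pool, bool can_sleep)
    {
            int tag;

            if (can_sleep)
                    /* May sleep in prepare_to_wait() until a tag is freed. */
                    tag = percpu_ida_alloc(pool, TASK_UNINTERRUPTIBLE);
            else
                    /* Never sleeps; forced to fail when no tags are available. */
                    tag = percpu_ida_alloc(pool, TASK_RUNNING);

            return tag;     /* >= 0 on success, negative when none available */
    }
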
* Merge branch 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/nab/target-pending (Linus Torvalds, 2013-11-22, 1 file, -3/+2)

    Pull SCSI target updates from Nicholas Bellinger:
     "Things have been quiet this round with mostly bugfixes, percpu
      conversions, and other minor iscsi-target conformance testing changes.

      The highlights include:

       - Add demo_mode_discovery attribute for iscsi-target (Thomas)
       - Convert tcm_fc(FCoE) to use percpu-ida pre-allocation
       - Add send completion interrupt coalescing for ib_isert
       - Convert target-core to use percpu-refcounting for se_lun
       - Fix mutex_trylock usage bug in iscsit_increment_maxcmdsn
       - tcm_loop updates (Hannes)
       - target-core ALUA cleanups + prep for v3.14 SCSI Referrals support
         (Hannes)

      v3.14 is currently shaping to be a busy development cycle in target
      land, with initial support for T10 Referrals and T10 DIF currently on
      the roadmap"

    * 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/nab/target-pending: (40 commits)
      iscsi-target: chap auth shouldn't match username with trailing garbage
      iscsi-target: fix extract_param to handle buffer length corner case
      iscsi-target: Expose default_erl as TPG attribute
      target_core_configfs: split up ALUA supported states
      target_core_alua: Make supported states configurable
      target_core_alua: Store supported ALUA states
      target_core_alua: Rename ALUA_ACCESS_STATE_OPTIMIZED
      target_core_alua: spellcheck
      target core: rename (ex,im)plict -> (ex,im)plicit
      percpu-refcount: Add percpu-refcount.o to obj-y
      iscsi-target: Do not reject non-immediate CmdSNs exceeding MaxCmdSN
      iscsi-target: Convert iscsi_session statistics to atomic_long_t
      target: Convert se_device statistics to atomic_long_t
      target: Fix delayed Task Aborted Status (TAS) handling bug
      iscsi-target: Reject unsupported multi PDU text command sequence
      ib_isert: Avoid duplicate iscsit_increment_maxcmdsn call
      iscsi-target: Fix mutex_trylock usage in iscsit_increment_maxcmdsn
      target: Core does not need blkdev.h
      target: Pass through I/O topology for block backstores
      iser-target: Avoid using FRMR for single dma entry requests
      ...

| * percpu_ida: Removing unused arguement from alloc_local_tag (Nick Swenson, 2013-10-03, 1 file, -3/+2)

    Remove the unused struct percpu_ida *pool from the arguments of
    alloc_local_tag, and change its one use in percpu_ida.c.

    (nab: Fixed reference of idr.c -> percpu_ida.c)

    Signed-Off-By: Nick Swenson <nks@daterainc.com>
    Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>

* | percpu_ida: add an API to return free tags (Shaohua Li, 2013-10-25, 1 file, -0/+17)

    Add an API to return the number of free tags; blk-mq-tag will use it.
    Note that this just returns a snapshot of the free tag count. blk-mq-tag
    has two uses for it: one is info output for diagnosis, the other is to
    quickly check if there are free tags for request dispatch. Neither
    requires a precise count.

    Cc: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Shaohua Li <shli@fusionio.com>
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

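A hedged sketch of how such a snapshot might be consumed; percpu_ida_free_tags() and its per-cpu argument follow the API this commit adds, while the summing wrapper is invented for illustration:

    #include <linux/cpumask.h>
    #include <linux/percpu_ida.h>

    /*
     * Illustrative only: sum the per-cpu snapshots to estimate how many
     * tags are currently free.  The result may already be stale when it
     * is used, which is fine for diagnostics or a cheap "any tags left?"
     * check before dispatch.
     */
    static unsigned approx_free_tags(struct percpu_ida *pool)
    {
            unsigned total = 0;
            int cpu;

            for_each_possible_cpu(cpu)
                    total += percpu_ida_free_tags(pool, cpu);

            return total;
    }
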
* | percpu_ida: add percpu_ida_for_each_free (Shaohua Li, 2013-10-25, 1 file, -0/+44)

    Add a new API to iterate free ids; blk-mq-tag will use it. Note that
    this doesn't guarantee to iterate all free ids strictly; the caller
    should be aware of this. blk-mq uses it for a sanity check on timed-out
    requests, so it can tolerate the limitation.

    Cc: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Shaohua Li <shli@fusionio.com>
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

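A sketch of using the iterator; the callback shape of int (*)(unsigned, void *) is assumed from the percpu_ida header of this era, and the counting callback is invented:

    #include <linux/percpu_ida.h>

    /* Called for each free tag observed; returning non-zero would stop
     * the walk early.  Here we just count. */
    static int count_free_cb(unsigned tag, void *data)
    {
            unsigned *count = data;

            (*count)++;
            return 0;
    }

    static unsigned count_free_tags(struct percpu_ida *pool)
    {
            unsigned count = 0;

            /* Best effort: tags allocated or freed concurrently may be
             * missed, as the commit message warns. */
            percpu_ida_for_each_free(pool, count_free_cb, &count);
            return count;
    }
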
* | percpu_ida: make percpu_ida percpu size/batch configurable (Shaohua Li, 2013-10-25, 1 file, -17/+11)

    Make percpu_ida percpu size/batch configurable; blk-mq-tag will use it.
    After blk-mq uses percpu_ida to manage tags, performance is improved.

    My test was done on a 2-socket machine with 12 processes spread across
    the 2 sockets, so any lock contention or IPI should be stressed heavily.
    Testing was done with null_blk:

      hw_queue_depth   nopatch iops   patch iops
      64               ~800k/s        ~1470k/s
      2048             ~4470k/s       ~4340k/s

    Cc: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Shaohua Li <shli@fusionio.com>
    Signed-off-by: Jens Axboe <axboe@kernel.dk>

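For orientation, a minimal initialisation sketch; percpu_ida_init() is the plain entry point, while the extended initialiser named in the comment is an assumption about the interface this commit exposes for blk-mq-tag:

    #include <linux/percpu_ida.h>

    static struct percpu_ida tag_pool;      /* hypothetical pool */

    static int setup_tag_pool(unsigned long nr_tags)
    {
            /*
             * percpu_ida_init() keeps the default per-cpu cache size and
             * batch.  The extended initialiser (assumed here to be
             * __percpu_ida_init(pool, nr_tags, max_size, batch_size))
             * lets callers such as blk-mq-tag tune how many tags each cpu
             * caches and how many move between the global and per-cpu
             * freelists at a time.
             */
            return percpu_ida_init(&tag_pool, nr_tags);
    }
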
* idr: Percpu ida (Kent Overstreet, 2013-09-09, 1 file, -0/+335)

    Percpu frontend for allocating ids. With percpu allocation (that works),
    it's impossible to guarantee it will always be possible to allocate all
    nr_tags - typically, some will be stuck on a remote percpu freelist
    where the current job can't get to them.

    We do guarantee that it will always be possible to allocate at least
    (nr_tags / 2) tags - this is done by keeping track of which and how many
    cpus have tags on their percpu freelists. On allocation failure if
    enough cpus have tags that there could potentially be (nr_tags / 2) tags
    stuck on remote percpu freelists, we then pick a remote cpu at random to
    steal from.

    Note that there's no cpu hotplug notifier - we don't care, because
    steal_tags() will eventually get the down cpu's tags. We _could_ satisfy
    more allocations if we had a notifier - but we'll still meet our
    guarantees and it's absolutely not a correctness issue, so I don't think
    it's worth the extra code.

    From akpm: "It looks OK to me (that's as close as I get to an ack :))"

    v6 changes:
      - Add #include <linux/cpumask.h> to include/linux/percpu_ida.h to make
        alpha/arc builds happy (Fengguang)
      - Move second (cpu >= nr_cpu_ids) check inside of first check scope in
        steal_tags() (akpm + nab)

    v5 changes:
      - Change percpu_ida->cpus_have_tags to cpumask_t (kmo + akpm)
      - Add comment for percpu_ida_cpu->lock + ->nr_free (kmo + akpm)
      - Convert steal_tags() to use cpumask_weight() + cpumask_next() +
        cpumask_first() + cpumask_clear_cpu() (kmo + akpm)
      - Add comment for alloc_global_tags() (kmo + akpm)
      - Convert percpu_ida_alloc() to use cpumask_set_cpu() (kmo + akpm)
      - Convert percpu_ida_free() to use cpumask_set_cpu() (kmo + akpm)
      - Drop percpu_ida->cpus_have_tags allocation in percpu_ida_init()
        (kmo + akpm)
      - Drop percpu_ida->cpus_have_tags kfree in percpu_ida_destroy()
        (kmo + akpm)
      - Add comment for percpu_ida_alloc @ gfp (kmo + akpm)
      - Move to percpu_ida.c + percpu_ida.h (kmo + akpm + nab)

    v4 changes:
      - Fix tags.c reference in percpu_ida_init (akpm)

    Signed-off-by: Kent Overstreet <kmo@daterainc.com>
    Cc: Tejun Heo <tj@kernel.org>
    Cc: Oleg Nesterov <oleg@redhat.com>
    Cc: Christoph Lameter <cl@linux-foundation.org>
    Cc: Ingo Molnar <mingo@redhat.com>
    Cc: Andi Kleen <andi@firstfloor.org>
    Cc: Jens Axboe <axboe@kernel.dk>
    Cc: "Nicholas A. Bellinger" <nab@linux-iscsi.org>
    Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>

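To make the design described above concrete, here is a rough sketch of the two-level layout; field and struct names follow the commit's own terminology, but the real definitions in percpu_ida.h differ in detail, so treat this as an illustration rather than the actual structures:

    #include <linux/cpumask.h>
    #include <linux/percpu.h>
    #include <linux/spinlock.h>

    /* Per-cpu cache of free tags: allocation tries this first. */
    struct percpu_ida_cpu_sketch {
            spinlock_t      lock;           /* protects nr_free/freelist */
            unsigned        nr_free;
            unsigned        freelist[];
    };

    struct percpu_ida_sketch {
            unsigned        nr_tags;        /* total ids managed by the pool */

            struct percpu_ida_cpu_sketch __percpu *tag_cpu;

            /*
             * Which cpus currently hold tags on their percpu freelists;
             * steal_tags() consults this when both the local and the
             * global freelists are empty, so at least nr_tags / 2
             * allocations can always be satisfied.
             */
            cpumask_t       cpus_have_tags;

            spinlock_t      lock;           /* protects the global freelist */
            unsigned        nr_free;
            unsigned        *freelist;
    };
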