path: root/include
Commit message (Author, Age, Files, Lines changed)
* ARM: pxa: change gpio to platform device (Haojian Zhuang, 2011-11-15, 1 file, -0/+16)

    Remove most gpio macros and change the gpio driver to a platform driver.

    Signed-off-by: Haojian Zhuang <haojian.zhuang@marvell.com>
* Merge branch 'upstream/jump-label-noearly' of git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen (Linus Torvalds, 2011-11-06, 1 file, -9/+14)

    * 'upstream/jump-label-noearly' of git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen:
      jump-label: initialize jump-label subsystem much earlier
      x86/jump_label: add arch_jump_label_transform_static()
      s390/jump-label: add arch_jump_label_transform_static()
      jump_label: add arch_jump_label_transform_static() to optimise non-live code updates
      sparc/jump_label: drop arch_jump_label_text_poke_early()
      x86/jump_label: drop arch_jump_label_text_poke_early()
      jump_label: if a key has already been initialized, don't nop it out
      stop_machine: make stop_machine safe and efficient to call early
      jump_label: use proper atomic_t initializer

    Conflicts:
    - arch/x86/kernel/jump_label.c
      Added __init_or_module to arch_jump_label_text_poke_early vs removal of
      that function entirely
    - kernel/stop_machine.c
      same patch ("stop_machine: make stop_machine safe and efficient to call
      early") merged twice, with whitespace fix in one version
| * jump-label: initialize jump-label subsystem much earlier (Jeremy Fitzhardinge, 2011-10-25, 1 file, -5/+9)

    Initialize jump_labels much, much earlier, so they're available for use
    during system setup.

    Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
    Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
| * jump_label: add arch_jump_label_transform_static() to optimise non-live code updates (Jeremy Fitzhardinge, 2011-10-25, 1 file, -0/+2)

    When updating a newly loaded module, the code is definitely not yet
    executing on any processor, so it can be updated with no need for any
    heavyweight synchronization.

    This patch adds arch_jump_label_transform_static(), which is implemented
    as arch_jump_label_transform() by default, but architectures can override
    it if it avoids, say, a call to stop_machine().

    Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
    Acked-by: Jason Baron <jbaron@redhat.com>
    Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
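    A minimal sketch of the default fallback described above (the weak-symbol
    form and exact signature are approximated here, not guaranteed to match
    the upstream code):

        /* Default: non-live updates just reuse the regular transform.
         * Architectures override this when they can skip stop_machine(). */
        void __weak arch_jump_label_transform_static(struct jump_entry *entry,
                                                     enum jump_label_type type)
        {
                arch_jump_label_transform(entry, type);
        }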
| * jump_label: if a key has already been initialized, don't nop it out (Jeremy Fitzhardinge, 2011-10-25, 1 file, -2/+1)

    If a key has been enabled before jump_label_init() is called, don't nop
    it out.

    This removes arch_jump_label_text_poke_early() (which can only nop out a
    site) and uses arch_jump_label_transform() instead.

    Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
    Acked-by: Jason Baron <jbaron@redhat.com>
    Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
| * jump_label: use proper atomic_t initializer (Jeremy Fitzhardinge, 2011-10-25, 1 file, -2/+2)

    ATOMIC_INIT() is the proper thing to use.

    Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
    Acked-by: Jason Baron <jbaron@redhat.com>
    Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
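    For illustration only (the variable name is hypothetical, not the actual
    jump_label definition):

        /* before: static atomic_t key_enabled = { 1 };  (open-codes the layout) */
        static atomic_t key_enabled = ATOMIC_INIT(1);    /* proper initializer */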
* | Merge branch 'upstream/xen-settime' of git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen (Linus Torvalds, 2011-11-06, 2 files, -0/+321)

    * 'upstream/xen-settime' of git://git.kernel.org/pub/scm/linux/kernel/git/jeremy/xen:
      xen/dom0: set wallclock time in Xen
      xen: add dom0_op hypercall
      xen/acpi: Domain0 acpi parser related platform hypercall
| * | xen/acpi: Domain0 acpi parser related platform hypercall (Yu Ke, 2011-09-26, 2 files, -0/+321)

    This patch implements the xen_platform_op hypercall, to pass the parsed
    ACPI info to the hypervisor.

    Signed-off-by: Yu Ke <ke.yu@intel.com>
    Signed-off-by: Tian Kevin <kevin.tian@intel.com>
    Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
    [v1: Added DEFINE_GUEST.. in appropriate headers]
    [v2: Ripped out typedefs]
    Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
* | | Merge branch 'modsplit-Oct31_2011' of git://git.kernel.org/pub/scm/linux/kernel/git/paulg/linux (Linus Torvalds, 2011-11-06, 55 files, -215/+231)

    * 'modsplit-Oct31_2011' of git://git.kernel.org/pub/scm/linux/kernel/git/paulg/linux: (230 commits)
      Revert "tracing: Include module.h in define_trace.h"
      irq: don't put module.h into irq.h for tracking irqgen modules.
      bluetooth: macroize two small inlines to avoid module.h
      ip_vs.h: fix implicit use of module_get/module_put from module.h
      nf_conntrack.h: fix up fallout from implicit moduleparam.h presence
      include: replace linux/module.h with "struct module" wherever possible
      include: convert various register fcns to macros to avoid include chaining
      crypto.h: remove unused crypto_tfm_alg_modname() inline
      uwb.h: fix implicit use of asm/page.h for PAGE_SIZE
      pm_runtime.h: explicitly requires notifier.h
      linux/dmaengine.h: fix implicit use of bitmap.h and asm/page.h
      miscdevice.h: fix up implicit use of lists and types
      stop_machine.h: fix implicit use of smp.h for smp_processor_id
      of: fix implicit use of errno.h in include/linux/of.h
      of_platform.h: delete needless include <linux/module.h>
      acpi: remove module.h include from platform/aclinux.h
      miscdevice.h: delete unnecessary inclusion of module.h
      device_cgroup.h: delete needless include <linux/module.h>
      net: sch_generic remove redundant use of <linux/module.h>
      net: inet_timewait_sock doesnt need <linux/module.h>
      ...

    Fix up trivial conflicts (other header files, and removal of the ab3550
    mfd driver) in
    - drivers/media/dvb/frontends/dibx000_common.c
    - drivers/media/video/{mt9m111.c,ov6650.c}
    - drivers/mfd/ab3550-core.c
    - include/linux/dmaengine.h
| * | | Revert "tracing: Include module.h in define_trace.h"Paul Gortmaker2011-10-311-10/+0
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | This reverts commit 3a9f987b3141f086de27832514aad9f50a53f754. With all the files that are real modules now having module.h explicitly called out for inclusion, and no reliance on any implicit presence of module.h assumed, we should no longer need this workaround. Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
| * | | irq: don't put module.h into irq.h for tracking irqgen modules. (Paul Gortmaker, 2011-10-31, 2 files, -20/+13)

    Recent commit "irq: Track the owner of irq descriptor" in commit ID
    b6873807a7143b7 placed module.h into linux/irq.h, but we are trying to
    limit module.h inclusion to just C files that really need it, due to its
    size and number of children includes. This targets just reversing that
    include.

    Add in the basic "struct module" since that is all we really need to
    ensure things compile. In theory, b687380 should have added the module.h
    include to the irqdesc.h header as well, but the implicit module.h
    everywhere presence masked this from showing up. So give it the
    "struct module" as well.

    As for the C files, irqdesc.c is only using THIS_MODULE, so it does not
    need module.h -- give it export.h instead. The C file irq/manage.c is now
    (as of b687380) using try_module_get and module_put and so it needs
    module.h (which it already has).

    Also convert the irq_alloc_descs variants to macros, since all they
    really do is call the __irq_alloc_descs primitive. This avoids including
    export.h and no debug info is lost.

    Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
| * | | bluetooth: macroize two small inlines to avoid module.h (Paul Gortmaker, 2011-10-31, 1 file, -11/+13)

    These two small inlines make calls to try_module_get() and module_put(),
    which would force us to keep module.h present within yet another common
    include header. We can avoid this by turning them into macros.

    The hci_dev_hold construct is patterned off of raw_spin_trylock_irqsave()
    in spinlock.h.

    Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
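    A minimal sketch of the macroization pattern described above (struct and
    field names are illustrative only, not the exact bluetooth code):

        /* before: an inline forces module.h on every includer of the header */
        static inline struct hci_dev *hci_dev_hold(struct hci_dev *d)
        {
                return try_module_get(d->owner) ? d : NULL;
        }

        /* after: a statement-expression macro keeps the header lightweight,
         * patterned off raw_spin_trylock_irqsave() in spinlock.h */
        #define hci_dev_hold(d)                                 \
        ({                                                      \
                struct hci_dev *__d = (d);                      \
                try_module_get(__d->owner) ? __d : NULL;        \
        })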
| * | | ip_vs.h: fix implicit use of module_get/module_put from module.h (Paul Gortmaker, 2011-10-31, 1 file, -8/+7)

    This file was using the module get/put functions in two simple inline
    functions. But module_get/put were only within scope because of the
    implicit presence of module.h being everywhere.

    Rather than add module.h to another file in include/ -- which is exactly
    the thing we are trying to avoid -- simply convert these one-line
    functions into a define, as per what was done for the
    device_schedule_callback() in commit 523ded71de0c5e669733.

    Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
| * | | nf_conntrack.h: fix up fallout from implicit moduleparam.h presence (Paul Gortmaker, 2011-10-31, 1 file, -0/+2)

    The implicit presence of module.h everywhere meant that this header also
    was getting moduleparam.h, which defines struct kernel_param. Since it
    only needs to know that kernel_param is a struct, call that out instead
    of adding an include of moduleparam.h -- to get rid of this:

      include/net/netfilter/nf_conntrack.h:316: warning: 'struct kernel_param' declared inside parameter list
      include/net/netfilter/nf_conntrack.h:316: warning: its scope is only this definition or declaration, which is probably not what you want

    Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
| * | | include: replace linux/module.h with "struct module" wherever possible (Paul Gortmaker, 2011-10-31, 22 files, -22/+34)

    The <linux/module.h> pretty much brings in the kitchen sink along with
    it, so it should be avoided wherever reasonably possible in terms of
    being included from other commonly used <linux/something.h> files, as it
    results in a measurable increase in compile times.

    The worst culprit was probably device.h, since it is used everywhere.
    This file also had an implicit dependency/usage of mutex.h which was
    masked by module.h, and is also fixed here at the same time.

    There are over a dozen other headers that simply declare the struct
    instead of pulling in the whole file, so follow their lead and simply
    make it a few more.

    Most of the implicit dependencies on module.h being present by these
    headers pulling it in have now been weeded out, so we can finally make
    this change with hopefully minimal breakage.

    Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
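    A minimal sketch of the forward-declaration pattern described above (the
    foo_driver structure is hypothetical, purely for illustration):

        /* Instead of:  #include <linux/module.h>  */
        struct module;                  /* declaration suffices for pointers */

        struct foo_driver {
                const char    *name;
                struct module *owner;   /* only used as a pointer here */
        };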
| * | | include: convert various register fcns to macros to avoid include chaining (Paul Gortmaker, 2011-10-31, 11 files, -58/+57)

    The original implementations reference THIS_MODULE in an inline. We could
    include <linux/export.h>, but it is better to avoid chaining.

    Fortunately someone else already thought of this, and made a similar
    inline into a #define in <linux/device.h> for device_schedule_callback()
    [see commit 523ded71de0], so follow that precedent here.

    Also bubble up any __must_check that were used on the prev. wrapper
    inline functions up one to the real __register functions, to preserve any
    prev. sanity checks that were used in those instances.

    Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
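    A minimal sketch of the conversion described above, using hypothetical
    foo_register names rather than any real subsystem:

        /* before: the inline expands THIS_MODULE inside the header, so every
         * includer ends up needing export.h/module.h */
        static inline int foo_register(struct foo_driver *drv)
        {
                return __foo_register(drv, THIS_MODULE);
        }

        /* after: the macro expands THIS_MODULE in the caller's own C file,
         * which already includes module.h if it is modular */
        #define foo_register(drv)  __foo_register(drv, THIS_MODULE)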
| * | | crypto.h: remove unused crypto_tfm_alg_modname() inline (Paul Gortmaker, 2011-10-31, 1 file, -6/+0)

    The <linux/crypto.h> (which is in turn in common headers like tcp.h)
    wants to use module_name() in an inline fcn. But having all of
    <linux/module.h> along for the ride is overkill and slows down compiles
    by a measurable amount, since it in turn includes lots of headers.

    Since the inline is never used anywhere in the kernel[1], we can just
    remove it, and then also remove the module.h include as well.

    In all the many crypto modules, there were some relying on crypto.h
    including module.h -- for them we now explicitly call out module.h for
    inclusion.

    [1] git grep shows some staging drivers also define the same static
    inline, but they also never ever use it.

    Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
| * | | uwb.h: fix implicit use of asm/page.h for PAGE_SIZE (Paul Gortmaker, 2011-10-31, 1 file, -0/+1)

    Once we clean up the implicit presence of module.h (and all its
    sub-includes), we'll see an implicit dependency on page.h for the
    PAGE_SIZE define. So fix it in advance.

    Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
| * | | pm_runtime.h: explicitly requires notifier.h (Paul Gortmaker, 2011-10-31, 1 file, -0/+1)

    This file was getting notifier.h via device.h --> module.h, but the
    module.h inclusion is going away, so add notifier.h directly.

    Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
| * | | linux/dmaengine.h: fix implicit use of bitmap.h and asm/page.h (Paul Gortmaker, 2011-10-31, 1 file, -0/+2)

    The implicit presence of module.h and all its sub-includes was masking
    these implicit header usages:

      include/linux/dmaengine.h:684: warning: 'struct page' declared inside parameter list
      include/linux/dmaengine.h:684: warning: its scope is only this definition or declaration, which is probably not what you want
      include/linux/dmaengine.h:687: warning: 'struct page' declared inside parameter list
      include/linux/dmaengine.h:736:2: error: implicit declaration of function 'bitmap_zero'

    With input from Stephen Rothwell <sfr@canb.auug.org.au>

    Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
| * | | miscdevice.h: fix up implicit use of lists and types (Paul Gortmaker, 2011-10-31, 1 file, -0/+2)

    By removing the implicit presence of module.h from this file, we will see
    things like:

      In file included from fs/dlm/user.c:9:
      include/linux/miscdevice.h:50: error: field ‘list’ has incomplete type
      include/linux/miscdevice.h:54: error: expected specifier-qualifier-list before ‘mode_t’

    Call out lists.h and types.h for inclusion to fix each of the above
    respectively.

    Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
| * | | stop_machine.h: fix implicit use of smp.h for smp_processor_id (Paul Gortmaker, 2011-10-31, 1 file, -0/+1)

    This will show up on MIPS when we fix all the implicit header presences
    that are because of module.h being everywhere.

      In file included from kernel/trace/ftrace.c:16:
      include/linux/stop_machine.h: In function 'stop_one_cpu':
      include/linux/stop_machine.h:50: error: implicit declaration of function 'smp_processor_id'
      include/linux/stop_machine.h: In function 'stop_cpus':
      include/linux/stop_machine.h:80: error: implicit declaration of function 'raw_smp_processor_id'

    Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
| * | | of: fix implicit use of errno.h in include/linux/of.h (Paul Gortmaker, 2011-10-31, 1 file, -0/+1)

    It shows up as a build failure on MIPS, as it is used in three
    of_property function stubs.

      include/linux/of.h:275: error: 'ENOSYS' undeclared (first use in this function)
      include/linux/of.h:282: error: 'ENOSYS' undeclared (first use in this function)
      include/linux/of.h:295: error: 'ENOSYS' undeclared (first use in this function)

    Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
| * | | of_platform.h: delete needless include <linux/module.h> (Paul Gortmaker, 2011-10-31, 1 file, -1/+0)

    There is nothing modular in this file, and no reason to drag in all the
    357 headers that module.h brings with it, since it just slows down
    compiles.

    Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
| * | | acpi: remove module.h include from platform/aclinux.h (Paul Gortmaker, 2011-10-31, 1 file, -1/+0)

    This file had an include of module.h which was probably added in relation
    to this line:

      #define ACPI_EXPORT_SYMBOL(symbol) EXPORT_SYMBOL(symbol);

    However, we really expect symbol exporters to grab export.h themselves,
    and since this is only a define, we can remove the module.h include
    without aclinux.h itself causing any compile issues.

    Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
| * | | miscdevice.h: delete unnecessary inclusion of module.h (Paul Gortmaker, 2011-10-31, 1 file, -1/+0)

    This file has a define MODULE_ALIAS_MISCDEV which in turn will use the
    MODULE_ALIAS define, but only if the former is explicitly used by modular
    device driver code (and such code should be already including module.h).

    Delete the include, since module.h is such a giant thing that we don't
    want it implicitly sneaking into compiles where it isn't specifically
    required.

    Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
| * | | device_cgroup.h: delete needless include <linux/module.h> (Paul Gortmaker, 2011-10-31, 1 file, -1/+0)

    There is nothing modular in this file, and no reason to drag in all the
    extra headers that module.h brings with it, since it just slows down
    compiles.

    Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
| * | | net: sch_generic remove redundant use of <linux/module.h> (Paul Gortmaker, 2011-10-31, 1 file, -1/+0)

    This file has modular references, but they are limited to those which are
    covered by the simple "struct module;" declaration used in dozens of
    other places. In fact that declaration is already there (just outside of
    the context of this commit), so simply remove the include line.

    Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
| * | | net: inet_timewait_sock doesnt need <linux/module.h> (Paul Gortmaker, 2011-10-31, 1 file, -1/+0)

    There is nothing module specific in this header, and removing it doesn't
    seem to uncover any implicit dependencies either. Must be simply a
    vestige of an ancient legacy.

    Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
| * | | sysdev.h: dont include <linux/module.h> for no reason (Paul Gortmaker, 2011-10-31, 1 file, -1/+0)

    The <linux/module.h> pretty much brings in the kitchen sink along with
    it, so it should be avoided wherever reasonably possible in terms of
    being included from other commonly used <linux/something.h> files, as it
    results in a measurable increase in compile times.

    There doesn't appear to be any module specifics in this file. The obvious
    people who were relying on the presence of the vast amount of stuff
    module.h sucked in have been fixed. If other files are implicitly relying
    on it, then let's see who they are and fix them too.

    Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
| * | | vermagic: delete unused include of <linux/module.h> (Paul Gortmaker, 2011-10-31, 1 file, -1/+0)

    This file consists of nothing other than things like:

      #ifdef CONFIG_FOO
      #define ....

    There is no reason for it to require module.h.

    Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
| * | | regulator: Fix implicit use of notifier.h by driver.h (Paul Gortmaker, 2011-10-31, 1 file, -0/+1)

    This was implicitly appearing by way of module.h -- but when we fix that,
    we'll get this:

      In file included from drivers/regulator/dummy.c:21:
      include/linux/regulator/driver.h:197: error: field 'notifier' has incomplete type
      make[3]: *** [drivers/regulator/dummy.o] Error 1

    Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
| * | | xen: Add export.h for THIS_MODULE/EXPORT_SYMBOL to various xen users. (Paul Gortmaker, 2011-10-31, 1 file, -0/+1)

    Things like THIS_MODULE and EXPORT_SYMBOL were simply everywhere because
    module.h was also everywhere. But we are fixing the latter. So we need to
    call out the real users in advance.

    Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
| * | | module.h: relocate MODULE_PARM_DESC into moduleparam.h (Paul Gortmaker, 2011-10-31, 2 files, -5/+5)

    There are files which use module_param and MODULE_PARM_DESC back to back.
    They only include moduleparam.h, which makes sense, but the implicit
    presence of module.h everywhere hid the fact that MODULE_PARM_DESC wasn't
    in moduleparam.h at all.

    Relocate the macro to moduleparam.h so that the moduleparam
    infrastructure can be used independently of module.h.

    Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
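    A short usage sketch of the case described above; the "debug" parameter
    is purely hypothetical:

        #include <linux/moduleparam.h>   /* now sufficient on its own */

        static int debug;
        module_param(debug, int, 0644);
        MODULE_PARM_DESC(debug, "Enable debug output");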
| * | | module.h: split out the EXPORT_SYMBOL into export.h (Paul Gortmaker, 2011-10-31, 2 files, -67/+90)

    A lot of files pull in module.h when all they are really looking for is
    the basic EXPORT_SYMBOL functionality. The recent data from Ingo[1] shows
    that this is one of several instances that has a significant impact on
    compile times, and it should be targeted for factoring out (as done
    here).

    Note that several commonly used header files in include/* directly
    include <linux/module.h> themselves (some 34 of them!) The most commonly
    used ones of these will have to be made independent of module.h before
    the full benefit of this change can be realized.

    We also transition THIS_MODULE from module.h to export.h, since there are
    lots of files with subsystem structs that in turn will have a struct
    module *owner and only be doing:

      .owner = THIS_MODULE;

    and absolutely nothing else modular. So, we also want to have the
    THIS_MODULE definition present in the lightweight header.

    [1] https://lkml.org/lkml/2011/5/23/76

    Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
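    A minimal sketch of what a symbol-exporting C file can look like after
    the split (foo_do_work is a hypothetical helper, not a real kernel
    symbol):

        #include <linux/export.h>   /* EXPORT_SYMBOL and THIS_MODULE live here */

        int foo_do_work(void)
        {
                return 0;
        }
        EXPORT_SYMBOL(foo_do_work);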
* | | | Merge branch 'writeback-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/wfg/linux (Linus Torvalds, 2011-11-06, 4 files, -37/+178)

    * 'writeback-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/wfg/linux:
      writeback: Add a 'reason' to wb_writeback_work
      writeback: send work item to queue_io, move_expired_inodes
      writeback: trace event balance_dirty_pages
      writeback: trace event bdi_dirty_ratelimit
      writeback: fix ppc compile warnings on do_div(long long, unsigned long)
      writeback: per-bdi background threshold
      writeback: dirty position control - bdi reserve area
      writeback: control dirty pause time
      writeback: limit max dirty pause time
      writeback: IO-less balance_dirty_pages()
      writeback: per task dirty rate limit
      writeback: stabilize bdi->dirty_ratelimit
      writeback: dirty rate control
      writeback: add bg_threshold parameter to __bdi_update_bandwidth()
      writeback: dirty position control
      writeback: account per-bdi accumulated dirtied pages
| * | | | writeback: Add a 'reason' to wb_writeback_work (Curt Wohlgemuth, 2011-10-31, 3 files, -11/+38)

    This creates a new 'reason' field in a wb_writeback_work structure, which
    unambiguously identifies who initiates writeback activity. A 'wb_reason'
    enumeration has been added to writeback.h, to enumerate the possible
    reasons.

    The 'writeback_work_class' and tracepoint event class and
    'writeback_queue_io' tracepoints are updated to include the symbolic
    'reason' in all trace events.

    And the 'writeback_inodes_sbXXX' family of routines has had a wb_stats
    parameter added to them, so callers can specify why writeback is being
    started.

    Acked-by: Jan Kara <jack@suse.cz>
    Signed-off-by: Curt Wohlgemuth <curtw@google.com>
    Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
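    An illustrative sketch of the shape this describes (enumerator and field
    names here are examples, not necessarily the exact upstream definitions):

        enum wb_reason {
                WB_REASON_BACKGROUND,
                WB_REASON_SYNC,
                WB_REASON_PERIODIC,
        };

        struct wb_writeback_work {
                long           nr_pages;
                enum wb_reason reason;   /* who initiated this writeback */
        };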
| * | | | writeback: send work item to queue_io, move_expired_inodes (Curt Wohlgemuth, 2011-10-31, 1 file, -2/+3)

    Instead of sending ->older_than_this to queue_io() and
    move_expired_inodes(), send the entire wb_writeback_work structure. There
    are other fields of a work item that are useful in these routines and in
    tracepoints.

    Acked-by: Jan Kara <jack@suse.cz>
    Signed-off-by: Curt Wohlgemuth <curtw@google.com>
    Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
| * | | | writeback: trace event balance_dirty_pages (Wu Fengguang, 2011-10-31, 1 file, -0/+73)

    Useful for analyzing the dynamics of the throttling algorithms and
    debugging user reported problems.

    Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
| * | | | writeback: trace event bdi_dirty_ratelimit (Wu Fengguang, 2011-10-31, 1 file, -0/+45)

    It helps understand how various throttle bandwidths are updated.

    Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
| * | | | writeback: IO-less balance_dirty_pages() (Wu Fengguang, 2011-10-03, 1 file, -24/+0)

    As proposed by Chris, Dave and Jan, don't start foreground writeback IO
    inside balance_dirty_pages(). Instead, simply let it idle sleep for some
    time to throttle the dirtying task. In the mean while, kick off the
    per-bdi flusher thread to do background writeback IO.

    RATIONALS
    =========

    - disk seeks on concurrent writeback of multiple inodes (Dave Chinner)

      If every thread doing writes and being throttled start foreground
      writeback, it leads to N IO submitters from at least N different inodes
      at the same time, end up with N different sets of IO being issued with
      potentially zero locality to each other, resulting in much lower
      elevator sort/merge efficiency and hence we seek the disk all over the
      place to service the different sets of IO. OTOH, if there is only one
      submission thread, it doesn't jump between inodes in the same way when
      congestion clears - it keeps writing to the same inode, resulting in
      large related chunks of sequential IOs being issued to the disk. This
      is more efficient than the above foreground writeback because the
      elevator works better and the disk seeks less.

    - lock contention and cache bouncing on concurrent IO submitters (Dave Chinner)

      With this patchset, the fs_mark benchmark on a 12-drive software RAID0
      goes from CPU bound to IO bound, freeing "3-4 CPUs worth of spinlock
      contention".

      * "CPU usage has dropped by ~55%", "it certainly appears that most of
        the CPU time saving comes from the removal of contention on the
        inode_wb_list_lock" (IMHO at least 10% comes from the reduction of
        cacheline bouncing, because the new code is able to call much less
        frequently into balance_dirty_pages() and hence access the global
        page states)

      * the user space "App overhead" is reduced by 20%, by avoiding the
        cacheline pollution by the complex writeback code path

      * "for a ~5% throughput reduction", "the number of write IOs have
        dropped by ~25%", and the elapsed time reduced from 41:42.17 to
        40:53.23.

      * On a simple test of 100 dd, it reduces the CPU %system time from 30%
        to 3%, and improves IO throughput from 38MB/s to 42MB/s.

    - IO size too small for fast arrays and too large for slow USB sticks

      The write_chunk used by current balance_dirty_pages() cannot be
      directly set to some large value (eg. 128MB) for better IO efficiency.
      Because it could lead to more than 1 second user perceivable stalls.
      Even the current 4MB write size may be too large for slow USB sticks.
      The fact that balance_dirty_pages() starts IO on itself couples the IO
      size to wait time, which makes it hard to do suitable IO size while
      keeping the wait time under control.

      Now it's possible to increase writeback chunk size proportional to the
      disk bandwidth. In a simple test of 50 dd's on XFS, 1-HDD, 3GB ram, the
      larger writeback size dramatically reduces the seek count to 1/10 (far
      beyond my expectation) and improves the write throughput by 24%.

    - long block time in balance_dirty_pages() hurts desktop responsiveness

      Many of us may have the experience: it often takes a couple of seconds
      or even long time to stop a heavy writing dd/cp/tar command with Ctrl-C
      or "kill -9".

    - IO pipeline broken by bumpy write() progress

      There are a broad class of "loop {read(buf); write(buf);}" applications
      whose read() pipeline will be under-utilized or even come to a stop if
      the write()s have long latencies _or_ don't progress in a constant
      rate. The current threshold based throttling inherently transfers the
      large low level IO completion fluctuations to bumpy application
      write()s, and further deteriorates with increasing number of dirtiers
      and/or bdi's.

      For example, when doing 50 dd's + 1 remote rsync to an XFS partition,
      the rsync progresses very bumpy in legacy kernel, and throughput is
      improved by 67% by this patchset. (plus the larger write chunk size, it
      will be 93% speedup).

      The new rate based throttling can support 1000+ dd's with excellent
      smoothness, low latency and low overheads.

    For the above reasons, it's much better to do IO-less and low latency
    pauses in balance_dirty_pages().

    Jan Kara, Dave Chinner and me explored the scheme to let
    balance_dirty_pages() wait for enough writeback IO completions to
    safeguard the dirty limit. However it's found to have two problems:

    - in large NUMA systems, the per-cpu counters may have big accounting
      errors, leading to big throttle wait time and jitters.

    - NFS may kill large amount of unstable pages with one single COMMIT.
      Because NFS server serves COMMIT with expensive fsync() IOs, it is
      desirable to delay and reduce the number of COMMITs. So it's not likely
      to optimize away such kind of bursty IO completions, and the resulted
      large (and tiny) stall times in IO completion based throttling.

    So here is a pause time oriented approach, which tries to control the
    pause time in each balance_dirty_pages() invocations, by controlling the
    number of pages dirtied before calling balance_dirty_pages(), for smooth
    and efficient dirty throttling:

    - avoid useless (eg. zero pause time) balance_dirty_pages() calls
    - avoid too small pause time (less than 4ms, which burns CPU power)
    - avoid too large pause time (more than 200ms, which hurts responsiveness)
    - avoid big fluctuations of pause times

    It can control pause times at will. The default policy (in a followup
    patch) will be to do ~10ms pauses in 1-dd case, and increase to ~100ms in
    1000-dd case.

    BEHAVIOR CHANGE
    ===============

    (1) dirty threshold

    Users will notice that the applications will get throttled once crossing
    the global (background + dirty)/2=15% threshold, and then balanced around
    17.5%. Before patch, the behavior is to just throttle it at 20% dirtyable
    memory in 1-dd case.

    Since the task will be soft throttled earlier than before, it may be
    perceived by end users as performance "slow down" if his application
    happens to dirty more than 15% dirtyable memory.

    (2) smoothness/responsiveness

    Users will notice a more responsive system during heavy writeback.
    "killall dd" will take effect instantly.

    Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
| * | | | writeback: per task dirty rate limit (Wu Fengguang, 2011-10-03, 1 file, -0/+7)

    Add two fields to task_struct.

    1) account dirtied pages in the individual tasks, for accuracy
    2) per-task balance_dirty_pages() call intervals, for flexibility

    The balance_dirty_pages() call interval (ie. nr_dirtied_pause) will scale
    near-sqrt to the safety gap between dirty pages and threshold.

    The main problem of per-task nr_dirtied is, if 1k+ tasks start dirtying
    pages at exactly the same time, each task will be assigned a large
    initial nr_dirtied_pause, so that the dirty threshold will be exceeded
    long before each task reached its nr_dirtied_pause and hence call
    balance_dirty_pages().

    The solution is to watch for the number of pages dirtied on each CPU in
    between the calls into balance_dirty_pages(). If it exceeds
    ratelimit_pages (3% dirty threshold), force call balance_dirty_pages()
    for a chance to set bdi->dirty_exceeded. In normal situations, this
    safeguarding condition is not expected to trigger at all.

    On the sqrt in dirty_poll_interval():

    It will serve as an initial guess when dirty pages are still in the
    freerun area.

    When dirty pages are floating inside the dirty control scope [freerun,
    limit], a followup patch will use some refined dirty poll interval to
    get the desired pause time.

      thresh-dirty (MB)    sqrt
                      1      16
                      2      22
                      4      32
                      8      45
                     16      64
                     32      90
                     64     128
                    128     181
                    256     256
                    512     362
                   1024     512

    The above table means, given 1MB (or 1GB) gap and the dd tasks polling
    balance_dirty_pages() on every 16 (or 512) pages, the dirty limit won't
    be exceeded as long as there are less than 16 (or 512) concurrent dd's.

    So sqrt naturally leads to less overheads and more safe concurrent tasks
    for large memory servers, which have large (thresh-freerun) gaps.

    peter: keep the per-CPU ratelimit for safeguarding the 1k+ tasks case

    CC: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Reviewed-by: Andrea Righi <andrea@betterlinux.com>
    Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
| * | | | writeback: stabilize bdi->dirty_ratelimit (Wu Fengguang, 2011-10-03, 1 file, -0/+3)

    There are some imperfections in balanced_dirty_ratelimit.

    1) large fluctuations

    The dirty_rate used for computing balanced_dirty_ratelimit is merely
    averaged in the past 200ms (very small comparing to the 3s estimation
    period for write_bw), which makes rather dispersed distribution of
    balanced_dirty_ratelimit.

    It's pretty hard to average out the singular points by increasing the
    estimation period. Considering that the averaging technique will
    introduce very undesirable time lags, I give it up totally. (btw, the 3s
    write_bw averaging time lag is much more acceptable because its impact
    is one-way and therefore won't lead to oscillations.)

    The more practical way is filtering -- most singular
    balanced_dirty_ratelimit points can be filtered out by remembering some
    prev_balanced_rate and prev_prev_balanced_rate. However the more
    reliable way is to guard balanced_dirty_ratelimit with task_ratelimit.

    2) due to truncates and fs redirties, the (write_bw <=> dirty_rate) match
    could become unbalanced, which may lead to large systematical errors in
    balanced_dirty_ratelimit. The truncates, due to its possibly bumpy
    nature, can hardly be compensated smoothly. So let's face it. When some
    over-estimated balanced_dirty_ratelimit brings dirty_ratelimit high,
    dirty pages will go higher than the setpoint. task_ratelimit will in
    turn become lower than dirty_ratelimit. So if we consider both
    balanced_dirty_ratelimit and task_ratelimit and update dirty_ratelimit
    only when they are on the same side of dirty_ratelimit, the systematical
    errors in balanced_dirty_ratelimit won't be able to bring
    dirty_ratelimit far away.

    The balanced_dirty_ratelimit estimation may also be inaccurate near
    @limit or @freerun, however is less an issue.

    3) since we ultimately want to

    - keep the fluctuations of task ratelimit as small as possible
    - keep the dirty pages around the setpoint as long time as possible

    the update policy used for (2) also serves the above goals nicely: if
    for some reason the dirty pages are high (task_ratelimit <
    dirty_ratelimit), and dirty_ratelimit is low (dirty_ratelimit <
    balanced_dirty_ratelimit), there is no point to bring up dirty_ratelimit
    in a hurry only to hurt both the above two goals.

    So, we make use of task_ratelimit to limit the update of dirty_ratelimit
    in two ways:

    1) avoid changing dirty rate when it's against the position control
       target (the adjusted rate will slow down the progress of dirty pages
       going back to setpoint).

    2) limit the step size. task_ratelimit is changing values step by step,
       leaving a consistent trace comparing to the randomly jumping
       balanced_dirty_ratelimit. task_ratelimit also has the nice smaller
       errors in stable state and typically larger errors when there are big
       errors in rate. So it's a pretty good limiting factor for the step
       size of dirty_ratelimit.

    Note that bdi->dirty_ratelimit is always tracking
    balanced_dirty_ratelimit. task_ratelimit is merely used as a limiting
    factor.

    Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
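    An illustrative sketch of the "same side" update rule described above
    (simplified pseudocode, not the exact upstream implementation):

        /* Only move dirty_ratelimit when task_ratelimit and
         * balanced_dirty_ratelimit agree on the direction. */
        if (dirty_ratelimit < balanced_rate && dirty_ratelimit < task_ratelimit)
                dirty_ratelimit += step;        /* both say "too low"  */
        else if (dirty_ratelimit > balanced_rate && dirty_ratelimit > task_ratelimit)
                dirty_ratelimit -= step;        /* both say "too high" */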
| * | | | writeback: dirty rate control (Wu Fengguang, 2011-10-03, 1 file, -0/+7)

    It's all about bdi->dirty_ratelimit, which aims to be (write_bw / N) when
    there are N dd tasks.

    On write() syscall, use bdi->dirty_ratelimit
    ============================================

      balance_dirty_pages(pages_dirtied)
      {
          task_ratelimit = bdi->dirty_ratelimit * bdi_position_ratio();
          pause = pages_dirtied / task_ratelimit;
          sleep(pause);
      }

    On every 200ms, update bdi->dirty_ratelimit
    ===========================================

      bdi_update_dirty_ratelimit()
      {
          task_ratelimit = bdi->dirty_ratelimit * bdi_position_ratio();
          balanced_dirty_ratelimit = task_ratelimit * write_bw / dirty_rate;
          bdi->dirty_ratelimit = balanced_dirty_ratelimit;
      }

    Estimation of balanced bdi->dirty_ratelimit
    ===========================================

    balanced task_ratelimit
    -----------------------

    balance_dirty_pages() needs to throttle tasks dirtying pages such that
    the total amount of dirty pages stays below the specified dirty limit in
    order to avoid memory deadlocks. Furthermore we desire fairness in that
    tasks get throttled proportionally to the amount of pages they dirty.

    IOW we want to throttle tasks such that we match the dirty rate to the
    writeout bandwidth; this yields a stable amount of dirty pages:

      dirty_rate == write_bw                                             (1)

    The fairness requirement gives us:

      task_ratelimit = balanced_dirty_ratelimit == write_bw / N          (2)

    where N is the number of dd tasks. We don't know N beforehand, but still
    can estimate balanced_dirty_ratelimit within 200ms.

    Start by throttling each dd task at rate

      task_ratelimit = task_ratelimit_0                                  (3)
      (any non-zero initial value is OK)

    After 200ms, we measured

      dirty_rate = # of pages dirtied by all dd's / 200ms
      write_bw   = # of pages written to the disk / 200ms

    For the aggressive dd dirtiers, the equality holds

      dirty_rate == N * task_rate == N * task_ratelimit_0                (4)

    Or

      task_ratelimit_0 == dirty_rate / N                                 (5)

    Now we conclude that the balanced task ratelimit can be estimated by

      balanced_dirty_ratelimit = task_ratelimit_0 * (write_bw / dirty_rate) (6)

    Because with (4) and (5) we can get the desired equality (1):

      balanced_dirty_ratelimit == (dirty_rate / N) * (write_bw / dirty_rate)
                               == write_bw / N

    Then using the balanced task ratelimit we can compute task pause times
    like:

      task_pause = task->nr_dirtied / task_ratelimit

    task_ratelimit with position control
    ------------------------------------

    However, while the above gives us means of matching the dirty rate to
    the writeout bandwidth, it at best provides us with a stable dirty page
    count (assuming a static system). In order to control the dirty page
    count such that it is high enough to provide performance, but does not
    exceed the specified limit, we need another control.

    The dirty position control works by extending (2) to

      task_ratelimit = balanced_dirty_ratelimit * pos_ratio              (7)

    where pos_ratio is a negative feedback function that subjects to

      1) f(setpoint) = 1.0
      2) df/dx < 0

    That is, if the dirty pages are ABOVE the setpoint, we throttle each task
    a bit more HEAVY than balanced_dirty_ratelimit, so that the dirty pages
    are created less fast than they are cleaned, thus DROP to the setpoints
    (and the reverse).

    Based on (7) and the assumption that both dirty_ratelimit and pos_ratio
    remain CONSTANT for the past 200ms, we get

      task_ratelimit_0 = balanced_dirty_ratelimit * pos_ratio            (8)

    Putting (8) into (6), we get the formula used in
    bdi_update_dirty_ratelimit():

      balanced_dirty_ratelimit *= pos_ratio * (write_bw / dirty_rate)    (9)

    Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
| * | | | writeback: add bg_threshold parameter to __bdi_update_bandwidth() (Wu Fengguang, 2011-10-03, 1 file, -0/+1)

    No behavior change.

    Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
| * | | | writeback: account per-bdi accumulated dirtied pages (Wu Fengguang, 2011-10-03, 1 file, -0/+1)

    Introduce the BDI_DIRTIED counter. It will be used for estimating the
    bdi's dirty bandwidth.

    CC: Jan Kara <jack@suse.cz>
    CC: Michael Rubin <mrubin@google.com>
    CC: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Signed-off-by: Wu Fengguang <fengguang.wu@intel.com>
* | | | | Merge branch 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/nab/target-pending (Linus Torvalds, 2011-11-06, 4 files, -8/+25)

    * 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/nab/target-pending:
      target: use ->exectute_task for all CDB emulation
      target: remove SCF_EMULATE_CDB_ASYNC
      target: refactor transport_emulate_control_cdb
      target: pass the se_task to the CDB emulation callback
      target: split core_scsi3_emulate_pr
      target: split core_scsi2_emulate_crh
      target: Add generic active I/O shutdown logic
      target: add back error handling in transport_complete_task
      target/pscsi: blk_make_request() returns an ERR_PTR()
      target: Remove core TRANSPORT_FREE_CMD_INTR usage
      target: Make TFO->check_stop_free return free status
      iscsi-target: Fix non-immediate TMR handling
      iscsi-target: Add missing CMDSN_LOWER_THAN_EXP check in iscsit_handle_scsi_cmd
      target: Avoid double list_del for aborted se_tmr_req
      target: Minor cleanups to core_tmr_drain_tmr_list
      target: Fix wrong se_tmr being added to drain_tmr_list
      target: Fix incorrect se_cmd assignment in core_tmr_drain_tmr_list
      target: Check -ENOMEM to signal QUEUE_FULL from fabric callbacks
      tcm_loop: Add explict read buffer memset for SCF_SCSI_CONTROL_SG_IO_CDB
      target: Fix compile warning w/ missing module.h include
| * | | | | target: remove SCF_EMULATE_CDB_ASYNC (Christoph Hellwig, 2011-11-04, 1 file, -1/+0)

    All ->execute_task instances now need to complete the I/O explicitly,
    which can either happen synchronously or asynchronously.

    Note that a lot of the CDB emulations appear to return success even if
    some lowlevel operations failed. Given that this is an existing issue
    this patch doesn't change that fact.

    (nab: Adding missing switch breaks in PR-IN + PR_OUT)

    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
| * | | | | target: pass the se_task to the CDB emulation callback (Christoph Hellwig, 2011-11-04, 2 files, -2/+2)

    We want to be able to handle all CDBs through it and remove hacks like
    always using the first task in a CDB in target_report_luns. Also rename
    the callback to ->execute_task to better describe its use.

    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Nicholas Bellinger <nab@linux-iscsi.org>
| * | | | | target: Add generic active I/O shutdown logic (Nicholas Bellinger, 2011-11-04, 3 files, -1/+18)

    This patch adds the initial pieces of generic active I/O shutdown logic.
    This is intended to be an 'opt-in' feature for fabric modules that
    includes the following functions to provide a mechanism for fabric
    modules to track se_cmd via se_session->sess_cmd_list:

    *) target_get_sess_cmd() - Add se_cmd to sess->sess_cmd_list, called from
       fabric module incoming I/O path.
    *) target_put_sess_cmd() - Check for completion or drop se_cmd from
       ->sess_cmd_list
    *) target_splice_sess_cmd_list() - Splice active I/O list from
       ->sess_cmd_list to ->sess_wait_list, can be called with HW fabric lock
       held.
    *) target_wait_for_sess_cmds() - Walk ->sess_wait_list waiting on
       individual ->cmd_wait_comp. Optional transport_wait_for_tasks() call.

    target_splice_sess_cmd_list() is allowed to be called under HW fabric
    lock, and performs the splice into se_sess->sess_wait_list and sets
    se_cmd->cmd_wait_set. Then target_wait_for_sess_cmds() walks the list
    waiting for individual target_put_sess_cmd() fabric callbacks to
    complete.

    It also adds TFO->check_release_cmd() to split the completion and memory
    release calls, where a fabric module uses target_put_sess_cmd() to check
    for I/O completion during session shutdown. This is currently pushed out
    into fabric modules as current fabric code may sleep here waiting for
    TFO->check_stop_free() to complete in the main response path, and because
    target_wait_for_sess_cmds() calls TFO->release_cmd() to free fabric
    descriptor memory directly.

    Cc: Christoph Hellwig <hch@lst.de>
    Cc: Roland Dreier <roland@purestorage.com>
    Signed-off-by: Nicholas A. Bellinger <nab@linux-iscsi.org>