path: root/kernel/rcutorture.c
Commit log for kernel/rcutorture.c, most recent first. Each entry shows: subject (author, date, files changed, lines -/+).
* rcu: Prevent initialization race in rcutorture kthreads (Paul E. McKenney, 2012-09-23, 1 file, -4/+6)

  When you do something like "t = kthread_run(...)", it is possible that the kthread
  will start running before the assignment to "t" happens. If the child kthread expects
  to find a pointer to its task_struct in "t", it will then be fatally disappointed.
  This commit therefore switches such cases to kthread_create() followed by
  wake_up_process(), guaranteeing that the assignment happens before the child kthread
  starts running.

  Reported-by: Fengguang Wu <fengguang.wu@intel.com>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
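  A minimal sketch of the safe pattern described above (the thread function and names
  are illustrative, not the actual rcutorture code):

      /* Racy: the new kthread may run before "t" is assigned. */
      t = kthread_run(my_thread_fn, NULL, "my_kthread");

      /* Safe: the assignment completes before the kthread is awakened. */
      t = kthread_create(my_thread_fn, NULL, "my_kthread");
      if (!IS_ERR(t))
              wake_up_process(t);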
* rcu: Switch rcutorture to pr_alert() and friends (Paul E. McKenney, 2012-09-23, 1 file, -50/+50)

  Drop a few characters by switching kernel/rcutorture.c from "printk(KERN_ALERT" to
  "pr_alert(".

  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
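  The conversion is a mechanical one-for-one replacement, for example (message text
  illustrative):

      printk(KERN_ALERT "rcu-torture: some message\n");   /* before */
      pr_alert("rcu-torture: some message\n");             /* after  */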
* rcu: Track CPU-hotplug duration statistics (Paul E. McKenney, 2012-09-23, 1 file, -5/+37)

  Many rcutorture runs include CPU-hotplug operations in their stress testing. This
  commit accumulates statistics on the durations of these operations in deference to
  the recent concern about the overhead and latency of these operations.

  Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Reviewed-by: Josh Triplett <josh@joshtriplett.org>
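  A hedged sketch of how such duration accumulation can look (the counter names are
  illustrative, not the actual rcutorture statistics):

      static unsigned long n_offline_successes, sum_offline, max_offline;
      unsigned long starttime = jiffies, delta;

      if (cpu_down(cpu) == 0) {               /* offline attempt succeeded */
              delta = jiffies - starttime;
              sum_offline += delta;
              if (delta > max_offline)
                      max_offline = delta;
              n_offline_successes++;
      }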
* rcu: Update rcutorture defaults (Paul E. McKenney, 2012-09-23, 1 file, -3/+4)

  A number of new features have been added to rcutorture over the years, but the
  defaults have not been updated to include them. This commit therefore turns on a
  couple of them that have proven helpful and trustworthy, namely periodic progress
  reports and testing of NO_HZ.

  Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Reviewed-by: Josh Triplett <josh@joshtriplett.org>
* rcu: Fix broken strings in RCU's source code (Paul E. McKenney, 2012-07-06, 1 file, -19/+14)

  Although the C language allows you to break strings across lines, doing this makes it
  hard for people to find the Linux kernel code corresponding to a given console
  message. This commit therefore fixes broken strings throughout RCU's source code.

  Suggested-by: Josh Triplett <josh@joshtriplett.org>
  Suggested-by: Ingo Molnar <mingo@kernel.org>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
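  For example (message text illustrative):

      /* Hard to grep for: the console message is split across source lines. */
      pr_alert("rcu-torture: Free-Block "
               "Circulation: ");

      /* Easy to grep for: the user-visible string stays on one line. */
      pr_alert("rcu-torture: Free-Block Circulation: ");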
* rcu: Fix code-style issues involving "else" (Paul E. McKenney, 2012-07-06, 1 file, -1/+2)

  The Linux kernel coding style says that single-statement blocks should omit curly
  braces unless the other leg of the "if" statement has multiple statements, in which
  case the curly braces should be included. This commit fixes RCU's violations of this
  rule.

  Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Reviewed-by: Josh Triplett <josh@joshtriplett.org>
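  The rule in miniature (function bodies illustrative):

      if (cond)
              do_this();              /* both legs single statements: no braces */
      else
              do_that();

      if (cond) {
              do_this();              /* braces here too ...                    */
      } else {
              do_that();              /* ... because this leg has more than     */
              do_that_cleanup();      /* one statement                          */
      }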
* rcu: Make rcutorture fakewriters invoke rcu_barrier() (Paul E. McKenney, 2012-07-02, 1 file, -1/+5)

  The current rcutorture rcu_barrier() testing never intentionally runs more than one
  instance of rcu_barrier() at a given time. This fails to test the shiny new
  concurrency features of rcu_barrier(). This commit therefore modifies the rcutorture
  fakewriter kthread to randomly invoke rcu_barrier() rather than the usual
  synchronize_rcu().

  Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
* rcu: Fix diagnostic-printk typo in rcutorture (Paul E. McKenney, 2012-07-02, 1 file, -1/+1)

  The rcu_torture_barrier() function has a copy-and-paste typo in the string passed to
  rcutorture_shutdown_absorb(), which this commit fixes.

  Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Reviewed-by: Josh Triplett <josh@joshtriplett.org>
* rcu: Fix bug in rcu_barrier() torture test (Paul E. McKenney, 2012-07-02, 1 file, -2/+7)

  The child threads in rcu_torture_barrier_cbs() are improperly synchronized, which can
  cause the rcu_barrier() tests to hang. The failure mode is as follows:

  1. CPU 0 running in rcu_torture_barrier() sets barrier_cbs_count to n_barrier_cbs.
  2. CPU 1 running in rcu_torture_barrier_cbs() wakes up, posts its RCU callback, and
     atomically decrements barrier_cbs_count. Because barrier_cbs_count is not zero,
     it does not do the wake_up().
  3. CPU 2 running in rcu_torture_barrier_cbs() wakes up, but finds that
     barrier_cbs_count is not equal to n_barrier_cbs, and so returns to sleep.
  4. The value of barrier_cbs_count therefore never reaches zero, which causes the
     test to hang.

  This commit therefore uses a phase variable to coordinate the test, preventing this
  scenario from occurring.

  Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
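  A hedged, simplified sketch of phase-based coordination (the wait queues, the callback
  helper, and the exact structure are assumptions, not the actual rcutorture code):

      static DECLARE_WAIT_QUEUE_HEAD(barrier_cbs_wq);
      static DECLARE_WAIT_QUEUE_HEAD(barrier_wq);
      static bool barrier_phase;              /* flipped once per test cycle   */
      static atomic_t barrier_cbs_count;

      /* Child kthread: wait for a new phase rather than for a counter value. */
      bool lastphase = false;
      while (!kthread_should_stop()) {
              wait_event(barrier_cbs_wq, barrier_phase != lastphase ||
                                         kthread_should_stop());
              if (kthread_should_stop())
                      break;
              lastphase = barrier_phase;      /* remember the phase just seen  */
              post_one_rcu_callback();
              if (atomic_dec_and_test(&barrier_cbs_count))
                      wake_up(&barrier_wq);   /* last child wakes coordinator  */
      }

      /* Coordinator: arm the count, flip the phase, wait for all children. */
      atomic_set(&barrier_cbs_count, n_barrier_cbs);
      barrier_phase = !barrier_phase;
      wake_up_all(&barrier_cbs_wq);
      wait_event(barrier_wq, atomic_read(&barrier_cbs_count) == 0);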
* rcu: Test srcu_barrier() from rcutorture test suite (Paul E. McKenney, 2012-07-02, 1 file, -2/+13)

  SRCU now has a call_srcu() and an srcu_barrier(), but rcutorture does not test them.
  This commit adds the machinery to allow rcutorture's existing tests for call_rcu()
  and rcu_barrier() to apply to the SRCU equivalents.

  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Reviewed-by: Josh Triplett <josh@joshtriplett.org>
* rcu: Rationalize ordering of torture_ops list (Paul E. McKenney, 2012-07-02, 1 file, -2/+2)

  Move the raw SRCU interfaces out of the middle of the normal SRCU interfaces.

  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Reviewed-by: Josh Triplett <josh@joshtriplett.org>
* rcu: Add rcutorture test for call_srcu() (Lai Jiangshan, 2012-04-30, 1 file, -4/+40)

  Add srcu_torture_deferred_free() for srcu_ops so as to test the new call_srcu().
  Rename the original srcu_ops to srcu_sync_ops.

  Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
* rcu: Direct algorithmic SRCU implementation (Paul E. McKenney, 2012-04-30, 1 file, -1/+1)

  The current implementation of synchronize_srcu_expedited() can cause severe OS jitter
  due to its use of synchronize_sched(), which in turn invokes try_stop_cpus(), which
  causes each CPU to be sent an IPI. This can result in severe performance degradation
  for real-time workloads and especially for short-iteration-length HPC workloads.
  Furthermore, because only one instance of try_stop_cpus() can be making forward
  progress at a given time, only one instance of synchronize_srcu_expedited() can make
  forward progress at a time, even if they are all operating on distinct srcu_struct
  structures.

  This commit, inspired by an earlier implementation by Peter Zijlstra
  (https://lkml.org/lkml/2012/1/31/211) and by further offline discussions, takes a
  strictly algorithmic bits-in-memory approach. This has the disadvantage of requiring
  one explicit memory-barrier instruction in each of srcu_read_lock() and
  srcu_read_unlock(), but on the other hand completely dispenses with OS jitter and
  furthermore allows SRCU to be used freely by CPUs that RCU believes to be idle or
  offline.

  The update-side implementation handles the single read-side memory barrier by
  rechecking the per-CPU counters after summing them and by running through the
  update-side state machine twice.

  This implementation has passed moderate rcutorture testing on both x86 and Power.
  Also updated to use this_cpu_ptr() instead of per_cpu_ptr(), as suggested by
  Peter Zijlstra.

  Reported-by: Peter Zijlstra <peterz@infradead.org>
  Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
  Reviewed-by: Lai Jiangshan <laijs@cn.fujitsu.com>
* rcu: Introduce rcutorture testing for rcu_barrier() (Paul E. McKenney, 2012-04-30, 1 file, -8/+186)

  Although rcutorture does invoke rcu_barrier() and friends, it cannot really be called
  a torture test given that it invokes them only once at the end of the test. This
  commit therefore introduces heavy-duty rcutorture testing for rcu_barrier(), which
  may be carried out concurrently with normal rcutorture testing.

  Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
* rcu: Fixes to rcutorture error handling and cleanup (Paul E. McKenney, 2012-04-24, 1 file, -3/+16)

  The rcutorture initialization code ignored the error returns from
  rcu_torture_onoff_init() and rcu_torture_stall_init(). The rcutorture cleanup code
  failed to NULL out a number of pointers. These bugs will normally have no effect,
  but this commit fixes them nevertheless.

  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
* PTR_ERR should be called before its argument is cleared (Julia Lawall, 2012-02-21, 1 file, -1/+4)

  The semantic match that finds this problem is as follows (http://coccinelle.lip6.fr/):

      // <smpl>
      @@
      expression e,e1;
      constant c;
      @@

      *e = c
      ... when != e = e1
          when != &e
          when != true IS_ERR(e)
      *PTR_ERR(e)
      // </smpl>

  Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
  Reported-by: Josh Triplett <josh@joshtriplett.org>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
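  The bug pattern being flagged, in miniature (variable names illustrative):

      /* Correct ordering: read the encoded error before clearing the pointer. */
      if (IS_ERR(t)) {
              firsterr = PTR_ERR(t);
              t = NULL;
      }

      /* Broken ordering: PTR_ERR(NULL) no longer yields the original error. */
      if (IS_ERR(t)) {
              t = NULL;
              firsterr = PTR_ERR(t);
      }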
* rcu: Trace only after NULL-pointer check (Paul E. McKenney, 2012-02-21, 1 file, -2/+2)

  Fix a bonehead error introduced when adding event tracing to rcutorture. Move the
  traces to follow the NULL-pointer checks.

  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
* rcu: Add CPU-stall capability to rcutorture (Paul E. McKenney, 2012-02-21, 1 file, -0/+66)

  Add module parameters to rcutorture that induce a CPU stall. The stall_cpu parameter
  specifies how long to stall in seconds, defaulting to zero, which indicates no
  stalling is to be undertaken. The stall_cpu_holdoff parameter specifies how many
  seconds after insmod (or boot, if rcutorture is built into the kernel) that this
  stall is to start. The default value for stall_cpu_holdoff is ten seconds.

  Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
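  As a usage illustration only (the parameter names are from the commit; the values are
  examples), the stall can be requested at module-load time or, for a built-in
  rcutorture, on the kernel command line:

      modprobe rcutorture stall_cpu=30 stall_cpu_holdoff=60
      rcutorture.stall_cpu=30 rcutorture.stall_cpu_holdoff=60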
* rcutorture: Permit holding off CPU-hotplug operations during boot (Paul E. McKenney, 2012-02-21, 1 file, -2/+10)

  When rcutorture is started automatically at boot time, it might well also start
  CPU-hotplug operations at that time, which might not be desirable. This commit
  therefore adds an rcutorture parameter that allows CPU-hotplug operations to be held
  off for the specified number of seconds after the start of boot.

  Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
* rcu: Make rcutorture flag online/offline failures (Paul E. McKenney, 2012-02-21, 1 file, -0/+4)

  Make rcutorture check for CPU-hotplug failures and complain if there were any.

  Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
* rcu: Add missing __cpuinit annotation in rcutorture code (Heiko Carstens, 2012-01-16, 1 file, -2/+2)

  "rcu: Add rcutorture CPU-hotplug capability" adds cpu hotplug operations to the
  rcutorture code but produces a false positive warning about section mismatches:

      WARNING: vmlinux.o(.text+0x1e420c): Section mismatch in reference from the
      function rcu_torture_onoff() to the function .cpuinit.text:cpu_up()
      The function rcu_torture_onoff() references the function __cpuinit cpu_up().
      This is often because rcu_torture_onoff lacks a __cpuinit annotation or the
      annotation of cpu_up is wrong.

  This commit therefore adds a __cpuinit annotation so the warning goes away.

  Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
* rcu: Make rcutorture bool parameters really bool (core code) (Rusty Russell, 2012-01-16, 1 file, -2/+2)

  module_param(bool) used to counter-intuitively take an int. In fddd5201 (mid-2009) we
  allowed bool or int/unsigned int using a messy trick. It's time to remove the
  int/unsigned int option. For this version it'll simply give a warning, but it'll
  break the next kernel version. This commit makes this change to rcutorture.

  Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
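  Illustrative before/after (the parameter name is chosen for illustration):

      /* Old, now-warned style: a bool parameter backed by an int. */
      static int verbose;
      module_param(verbose, bool, 0444);

      /* New style: the backing variable is a real bool. */
      static bool verbose;
      module_param(verbose, bool, 0444);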
* rcu: Add rcutorture tests for srcu_read_lock_raw() (Paul E. McKenney, 2011-12-11, 1 file, -1/+25)

  This commit adds simple rcutorture tests for srcu_read_lock_raw() and
  srcu_read_unlock_raw(). It does not test doing srcu_read_lock_raw() in an exception
  handler and releasing it in the corresponding process context.

  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
* rcu: Make rcutorture test for hotpluggability before offlining CPUs (Paul E. McKenney, 2011-12-11, 1 file, -2/+2)

  The rcutorture test now can automatically exercise CPU hotplug and collect success
  statistics, which can be correlated with other rcutorture activity. This permits
  rcutorture to completely exercise RCU regardless of what sort of userspace and
  filesystem layout is in use. Unfortunately, rcutorture is happy to attempt to offline
  CPUs that cannot be offlined, for example, CPU 0 in both the x86 and ARM
  architectures. Although this allows rcutorture testing to proceed normally, it
  confounds attempts at error analysis due to the resulting flood of spurious
  CPU-hotplug errors.

  Therefore, this commit uses the new cpu_is_hotpluggable() function to avoid attempting
  to offline CPUs that are not hotpluggable, which in turn avoids spurious CPU-hotplug
  errors.

  Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
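  A hedged sketch of the check (the counter name is illustrative):

      if (cpu_online(cpu) && cpu_is_hotpluggable(cpu)) {
              if (cpu_down(cpu))
                      n_offline_failures++;   /* a real failure, worth flagging */
      }
      /* Non-hotpluggable CPUs are simply skipped rather than counted as errors. */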
* rcu: Add rcutorture CPU-hotplug capability (Paul E. McKenney, 2011-12-11, 1 file, -5/+112)

  Running CPU-hotplug operations concurrently with rcutorture has historically been a
  good way to find bugs in both RCU and CPU hotplug. This commit therefore adds an
  rcutorture module parameter called "onoff_interval" that causes a randomly selected
  CPU-hotplug operation to be executed at the specified interval, in seconds. The
  default value of "onoff_interval" is zero, which disables rcutorture-instigated
  CPU-hotplug operations.

  Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
* rcu: Control rcutorture startup from kernel boot parameters (Paul E. McKenney, 2011-12-11, 1 file, -0/+2)

  Currently, if rcutorture is built into the kernel, it must be manually started or
  started from an init script. This is inconvenient for automated KVM testing, where it
  is good to be able to fully control rcutorture execution from the kernel parameters.
  This patch therefore adds a module parameter named "rcutorture_runnable" that defaults
  to zero ("don't start automatically"), but which can be set to one to cause rcutorture
  to start up immediately during boot.

  Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
* rcu: Add rcutorture system-shutdown capability (Paul E. McKenney, 2011-12-11, 1 file, -4/+64)

  Although it is easy to run rcutorture tests under KVM, there is currently no nice way
  to run such a test for a fixed time period, collect all of the rcutorture data, and
  then shut the system down cleanly. This commit therefore adds an rcutorture module
  parameter named "shutdown_secs" that specifies the run duration in seconds, after
  which rcutorture terminates the test and powers the system down. The default value
  for "shutdown_secs" is zero, which disables shutdown.

  Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
* rcu: Add failure tracing to rcutorture (Paul E. McKenney, 2011-12-11, 1 file, -0/+18)

  Trace the rcutorture RCU accesses and dump the trace buffer when the first failure
  is detected.

  Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Reviewed-by: Josh Triplett <josh@joshtriplett.org>
* rcu: Make rcu_torture_boost() exit loops at end of test (Paul E. McKenney, 2011-09-28, 1 file, -1/+2)

  One of the loops in rcu_torture_boost() fails to check kthread_should_stop(), and thus
  might be slowing or even stopping completion of rcutorture tests at rmmod time. This
  commit adds the kthread_should_stop() check to the offending loop.

  Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
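  The shape of the fix, sketched (the loop body and exit flag are illustrative):

      /* Before: the loop could keep running after rmmod requests shutdown. */
      do {
              do_boost_work();
      } while (!done);

      /* After: rmmod-time stop requests are honored as well. */
      do {
              do_boost_work();
      } while (!done && !kthread_should_stop());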
* rcu: Make rcu_torture_fqs() exit loops at end of test (Paul E. McKenney, 2011-09-28, 1 file, -4/+6)

  The rcu_torture_fqs() function can prevent the rcutorture tests from completing,
  resulting in a hang. This commit therefore ensures that rcu_torture_fqs() will exit
  its inner loops at the end of the test, and also applies the newish ULONG_CMP_LT()
  macro to time comparisons.

  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
* rcu: Allow rcutorture's stat_interval parameter to be changed at runtime (Paul E. McKenney, 2011-09-28, 1 file, -1/+1)

  When rcutorture is compiled directly into the kernel (instead of separately as a
  module), it is necessary to specify rcutorture.stat_interval as a kernel command-line
  parameter; otherwise, the rcu_torture_stats kthread is never started. However, when
  working with the system after it has booted, it is convenient to be able to change
  the time between statistic printing, particularly when logged into the console. This
  commit therefore allows the stat_interval parameter to be changed at runtime.

  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
* rcu: Don't destroy rcu_torture_boost() callback until it is done (Paul E. McKenney, 2011-09-28, 1 file, -1/+1)

  The rcu_torture_boost() cleanup code destroyed debug-objects state before waiting for
  the last RCU callback to be invoked, resulting in rare but very real debug-objects
  warnings. Move the destruction to after the waiting to fix this problem.

  Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
* rcu: Catch rcutorture up to new RCU API additions (Paul E. McKenney, 2011-09-28, 1 file, -34/+21)

  Now that the RCU API contains synchronize_rcu_bh(), synchronize_sched(),
  call_rcu_sched(), and rcu_bh_expedited()...

  Make rcutorture test synchronize_rcu_bh(), getting rid of the old
  rcu_bh_torture_synchronize() workaround. Similarly, make rcutorture test
  synchronize_sched(), getting rid of the old sched_torture_synchronize() workaround.
  Make rcutorture test call_rcu_sched() instead of wrappering synchronize_sched().
  Also add testing of rcu_bh_expedited().

  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
* rcu: Use kthread_create_on_node() (Eric Dumazet, 2011-09-28, 1 file, -2/+3)

  Commit a26ac2455ffc (move TREE_RCU from softirq to kthread) added per-CPU kthreads.
  However, kthread creation uses kthread_create(), which can put the kthread's stack
  and task struct on the wrong NUMA node. Therefore, use kthread_create_on_node()
  instead of kthread_create() so that the stacks and task structs are placed on the
  correct NUMA node.

  A similar change was carried out in commit 94dcf29a11b3 (kthread: use
  kthread_create_on_node()).

  Also change rcutorture's priority-boost-test kthread creation.

  Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
  CC: Tejun Heo <tj@kernel.org>
  CC: Rusty Russell <rusty@rustcorp.com.au>
  CC: Andrew Morton <akpm@linux-foundation.org>
  CC: Andi Kleen <ak@linux.intel.com>
  CC: Ingo Molnar <mingo@elte.hu>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
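  Illustrative conversion (the thread function, name, and per-CPU context are assumed):

      /* May place the stack and task_struct on the wrong NUMA node: */
      t = kthread_create(my_thread_fn, NULL, "my_kthread/%d", cpu);

      /* Places them on the node of the CPU that will run the kthread: */
      t = kthread_create_on_node(my_thread_fn, NULL,
                                 cpu_to_node(cpu), "my_kthread/%d", cpu);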
* atomic: use <linux/atomic.h> (Arun Sharma, 2011-07-26, 1 file, -1/+1)

  This allows us to move duplicated code in <asm/atomic.h> (atomic_inc_not_zero() for
  now) to <linux/atomic.h>.

  Signed-off-by: Arun Sharma <asharma@fb.com>
  Reviewed-by: Eric Dumazet <eric.dumazet@gmail.com>
  Cc: Ingo Molnar <mingo@elte.hu>
  Cc: David Miller <davem@davemloft.net>
  Cc: Eric Dumazet <eric.dumazet@gmail.com>
  Acked-by: Mike Frysinger <vapier@gentoo.org>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
  Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* rcu: treewide: Do not use rcu_read_lock_held when calling rcu_dereference_check (Michal Hocko, 2011-07-08, 1 file, -2/+0)

  Since ca5ecddf (rcu: define __rcu address space modifier for sparse),
  rcu_dereference_check() uses rcu_read_lock_held() as a part of the condition
  automatically, so callers do not have to do that as well.

  Signed-off-by: Michal Hocko <mhocko@suse.cz>
  Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Signed-off-by: Jiri Kosina <jkosina@suse.cz>
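  Illustrative before/after (the pointer and lock names are assumptions):

      /* Before ca5ecddf the read-lock check had to be spelled out: */
      p = rcu_dereference_check(gp, rcu_read_lock_held() ||
                                    lockdep_is_held(&my_lock));

      /* Afterwards, rcu_read_lock_held() is implied by rcu_dereference_check(): */
      p = rcu_dereference_check(gp, lockdep_is_held(&my_lock));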
* rcu: mark rcutorture boosting callback as being on-stack (Paul E. McKenney, 2011-05-05, 1 file, -0/+2)

  The CONFIG_DEBUG_OBJECTS_RCU_HEAD facility requires that on-stack RCU callbacks be
  flagged explicitly to debug-objects using the init_rcu_head_on_stack() and
  destroy_rcu_head_on_stack() functions. This commit applies those functions to the
  rcutorture code that tests RCU priority boosting.

  Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Reviewed-by: Josh Triplett <josh@joshtriplett.org>
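  The pattern, sketched (the callback and the waiting step are illustrative; the
  callback must have been invoked before the on-stack head goes out of scope):

      struct rcu_head on_stack_head;

      init_rcu_head_on_stack(&on_stack_head);
      call_rcu(&on_stack_head, my_callback);
      /* ... wait here until my_callback() has run ... */
      destroy_rcu_head_on_stack(&on_stack_head);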
* rcu: make rcutorture version numbers available through debugfs (Paul E. McKenney, 2011-05-05, 1 file, -3/+5)

  It is not possible to accurately correlate rcutorture output with that of debugfs.
  This patch therefore adds a debugfs file that prints out the rcutorture version
  number, permitting easy correlation.

  Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Reviewed-by: Josh Triplett <josh@joshtriplett.org>
* rcu: eliminate unused boosting statistics (Paul E. McKenney, 2011-05-05, 1 file, -9/+1)

  The n_rcu_torture_boost_allocerror and n_rcu_torture_boost_afferror statistics are
  not actually incremented anymore, so eliminate them.

  Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Reviewed-by: Josh Triplett <josh@joshtriplett.org>
* rcu: avoid hammering sched with yet another bound RT kthread (Paul E. McKenney, 2011-05-05, 1 file, -3/+3)

  The scheduler does not appear to take kindly to having multiple real-time threads
  bound to a CPU that is going offline. So this commit is a temporary hack-around to
  avoid that happening.

  Signed-off-by: Paul E. McKenney <paul.mckenney@linaro.org>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
* rcutorture: Get rid of duplicate sched.h include (Jesper Juhl, 2011-03-04, 1 file, -1/+0)

  linux/sched.h is included twice in kernel/rcutorture.c - once is enough.

  Signed-off-by: Jesper Juhl <jj@chaosbits.net>
  Reviewed-by: Josh Triplett <josh@joshtriplett.org>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
* rcu: add priority-inversion testing to rcutorture (Paul E. McKenney, 2010-10-07, 1 file, -11/+259)

  Add an optional test to force long-term preemption of RCU read-side critical sections,
  controlled by new test_boost, test_boost_interval, and test_boost_duration module
  parameters. This is to be used to test RCU priority boosting.

  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
* rcu: fix sparse errors in rcutorture.c (Paul E. McKenney, 2010-09-23, 1 file, -4/+7)

  Add the sparse __rcu address-space identifier and make a couple of variables static.

  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
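  The kind of change involved, illustrated on a hypothetical RCU-protected pointer:

      /* Before: sparse complains about the RCU address space; the symbol is global. */
      struct rcu_torture *rcu_torture_current_example;

      /* After: annotated as RCU-protected and limited to file scope. */
      static struct rcu_torture __rcu *rcu_torture_current_example;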
* rcutorture: add random preemption (Lai Jiangshan, 2010-08-19, 1 file, -0/+6)

  Add random preemption to help us torture preemptible RCU. srcu_read_delay() also
  calls rcu_read_delay() for shorter delays.

  Added a comment to the preempt_schedule() call indicating that no quiescent states
  happen if preemption is disabled.

  Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Reviewed-by: Josh Triplett <josh@joshtriplett.org>
* sched_clock: Add local_clock() API and improve documentation (Peter Zijlstra, 2010-06-09, 1 file, -2/+1)

  For people who otherwise get to write: cpu_clock(smp_processor_id()), there is now:
  local_clock().

  Also, as per suggestion from Andrew, provide some documentation on the various clock
  interfaces, and minimize the unsigned long long vs u64 mess.

  Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
  Cc: Andrew Morton <akpm@linux-foundation.org>
  Cc: Linus Torvalds <torvalds@linux-foundation.org>
  Cc: Jens Axboe <jaxboe@fusionio.com>
  LKML-Reference: <1275052414.1645.52.camel@laptop>
  Signed-off-by: Ingo Molnar <mingo@elte.hu>
* Merge branch 'sched-core-for-linus' of
  git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip (Linus Torvalds, 2010-05-18, 1 file, -1/+1)

  * 'sched-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (49 commits)
    stop_machine: Move local variable closer to the usage site in cpu_stop_cpu_callback()
    sched, wait: Use wrapper functions
    sched: Remove a stale comment
    ondemand: Make the iowait-is-busy time a sysfs tunable
    ondemand: Solve a big performance issue by counting IOWAIT time as busy
    sched: Intoduce get_cpu_iowait_time_us()
    sched: Eliminate the ts->idle_lastupdate field
    sched: Fold updating of the last_update_time_info into update_ts_time_stats()
    sched: Update the idle statistics in get_cpu_idle_time_us()
    sched: Introduce a function to update the idle statistics
    sched: Add a comment to get_cpu_idle_time_us()
    cpu_stop: add dummy implementation for UP
    sched: Remove rq argument to the tracepoints
    rcu: need barrier() in UP synchronize_sched_expedited()
    sched: correctly place paranioa memory barriers in synchronize_sched_expedited()
    sched: kill paranoia check in synchronize_sched_expedited()
    sched: replace migration_thread with cpu_stop
    stop_machine: reimplement using cpu_stop
    cpu_stop: implement stop_cpu[s]()
    sched: Fix select_idle_sibling() logic in select_task_rq_fair()
    ...
  * sched: replace migration_thread with cpu_stop (Tejun Heo, 2010-05-06, 1 file, -1/+1)

    Currently migration_thread is serving three purposes - migration pusher, context to
    execute active_load_balance() and forced context switcher for expedited RCU
    synchronize_sched. All three roles are hardcoded into migration_thread() and
    determining which job is scheduled is slightly messy.

    This patch kills migration_thread and replaces all three uses with cpu_stop. The
    three different roles of migration_thread() are split into three separate cpu_stop
    callbacks - migration_cpu_stop(), active_load_balance_cpu_stop() and
    synchronize_sched_expedited_cpu_stop() - and each use case now simply asks cpu_stop
    to execute the callback as necessary.

    synchronize_sched_expedited() was implemented with private preallocated resources
    and custom multi-cpu queueing and waiting logic, both of which are provided by
    cpu_stop. synchronize_sched_expedited_count is made atomic and all other shared
    resources along with the mutex are dropped.

    synchronize_sched_expedited() also implemented a check to detect cases where not
    all the callbacks got executed on their assigned cpus and fall back to
    synchronize_sched(). If called with cpu hotplug blocked, cpu_stop already
    guarantees that and the condition cannot happen; otherwise, stop_machine() would
    break. However, this patch preserves the paranoid check using a cpumask to record
    on which cpus the stopper ran so that it can serve as a bisection point if
    something actually goes wrong there.

    Because the internal execution state is no longer visible,
    rcu_expedited_torture_stats() is removed.

    This patch also renames cpu_stop threads from "stopper/%d" to "migration/%d". The
    names of these threads ultimately don't matter and there's no reason to make
    unnecessary userland-visible changes.

    With this patch applied, stop_machine() and sched now share the same resources.
    stop_machine() is faster without wasting any resources and sched migration users
    are much cleaner.

    Signed-off-by: Tejun Heo <tj@kernel.org>
    Acked-by: Peter Zijlstra <peterz@infradead.org>
    Cc: Ingo Molnar <mingo@elte.hu>
    Cc: Dipankar Sarma <dipankar@in.ibm.com>
    Cc: Josh Triplett <josh@freedesktop.org>
    Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
    Cc: Oleg Nesterov <oleg@redhat.com>
    Cc: Dimitri Sivanich <sivanich@sgi.com>
* rcu: remove all rcu head initializations, except on_stack initializations (Paul E. McKenney, 2010-05-11, 1 file, -0/+2)

  Remove all rcu head inits. We don't care about the RCU head state before passing it
  to call_rcu() anyway. Only leave the "on_stack" variants so debugobjects can keep
  track of objects on stack.

  Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
  Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
* Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu (Linus Torvalds, 2010-03-03, 1 file, -4/+4)

  * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/percpu:
    percpu: add __percpu sparse annotations to what's left
    percpu: add __percpu sparse annotations to fs
    percpu: add __percpu sparse annotations to core kernel subsystems
    local_t: Remove leftover local.h
    this_cpu: Remove pageset_notifier
    this_cpu: Page allocator conversion
    percpu, x86: Generic inc / dec percpu instructions
    local_t: Move local.h include to ringbuffer.c and ring_buffer_benchmark.c
    module: Use this_cpu_xx to dynamically allocate counters
    local_t: Remove cpu_local_xx macros
    percpu: refactor the code in pcpu_[de]populate_chunk()
    percpu: remove compile warnings caused by __verify_pcpu_ptr()
    percpu: make accessors check for percpu pointer in sparse
    percpu: add __percpu for sparse.
    percpu: make access macros universal
    percpu: remove per_cpu__ prefix.
  * Merge branch 'master' into percpu (Tejun Heo, 2010-01-05, 1 file, -16/+53)

    Conflicts:
      arch/powerpc/platforms/pseries/hvCall.S
      include/linux/percpu.h