author     Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>    2006-06-21 15:18:34 -0700
committer  Dave Jones <davej@redhat.com>                          2006-06-21 18:30:26 -0400
commit     4ec223d02f4d5f5a3129edc0e3d22550d6ac8a32 (patch)
tree       753cec643fa59ccda64a95fa5436956e481c1137 /drivers/cpufreq/cpufreq_conservative.c
parent     9ed059e1551bf36092215b965838502ac21f42e4 (diff)
[CPUFREQ] Fix ondemand vs suspend deadlock
Root-caused the bug to a deadlock between cpufreq and ondemand, caused by the
lack of a consistent ordering between the cpu_hotplug lock and dbs_mutex.
Basically it is a race condition between cpu_down() and do_dbs_timer().
cpu_down() flow:
* cpu_down() call for CPU 1
* Takes the cpu_hotplug lock
* Calls the pre-down notifier
* cpufreq notifier handler calls cpufreq_driver_target(), which takes the
  cpu_hotplug lock again. This is OK, as the cpu_hotplug lock is recursive in
  the same process context
* CPU 1 goes down
* Calls the post-down notifier
* cpufreq notifier handler calls the ondemand event stop, which takes dbs_mutex
So, the cpu_hotplug lock is taken before dbs_mutex in this flow.
do_dbs_timer() is triggered by a periodic timer event.
It first takes dbs_mutex and then takes the cpu_hotplug lock in
cpufreq_driver_target().
Note the reverse order here compared to the flow above. So, if this timer event
fires at the right moment during cpu_down(), the system will deadlock.
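To make the inversion concrete, here is a minimal, self-contained userspace
sketch (an illustration only, not kernel code). The pthread mutexes
"hotplug_lock" and "dbs_mutex" are hypothetical stand-ins for the kernel locks
named above; cpu_down_path() takes them in the cpu_down() order, timer_path()
in the do_dbs_timer() order, and the two threads can hang in exactly the
AB-BA deadlock described here.

/*
 * Illustration only: lock names are stand-ins, not the kernel objects.
 * Build with: cc demo.c -lpthread
 * (The kernel's cpu_hotplug lock is recursive for the same process; that
 * property is not exercised here, so plain mutexes are enough.)
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t hotplug_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t dbs_mutex = PTHREAD_MUTEX_INITIALIZER;

static void *cpu_down_path(void *unused)
{
	pthread_mutex_lock(&hotplug_lock);	/* hotplug lock first... */
	usleep(1000);				/* widen the race window */
	pthread_mutex_lock(&dbs_mutex);		/* ...then dbs_mutex */
	printf("cpu_down path finished\n");
	pthread_mutex_unlock(&dbs_mutex);
	pthread_mutex_unlock(&hotplug_lock);
	return NULL;
}

static void *timer_path(void *unused)
{
	pthread_mutex_lock(&dbs_mutex);		/* dbs_mutex first... */
	usleep(1000);
	pthread_mutex_lock(&hotplug_lock);	/* ...then hotplug lock: reverse order */
	printf("timer path finished\n");
	pthread_mutex_unlock(&hotplug_lock);
	pthread_mutex_unlock(&dbs_mutex);
	return NULL;
}

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, cpu_down_path, NULL);
	pthread_create(&b, NULL, timer_path, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);
	return 0;
}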
The attached patch fixes the issue for both the ondemand and conservative governors.
Signed-off-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: Dave Jones <davej@redhat.com>
Diffstat (limited to 'drivers/cpufreq/cpufreq_conservative.c')
-rw-r--r--  drivers/cpufreq/cpufreq_conservative.c | 12
1 file changed, 12 insertions, 0 deletions
diff --git a/drivers/cpufreq/cpufreq_conservative.c b/drivers/cpufreq/cpufreq_conservative.c
index e07a35487bde..8878a154ed43 100644
--- a/drivers/cpufreq/cpufreq_conservative.c
+++ b/drivers/cpufreq/cpufreq_conservative.c
@@ -72,6 +72,14 @@ static DEFINE_PER_CPU(struct cpu_dbs_info_s, cpu_dbs_info);
 
 static unsigned int dbs_enable;	/* number of CPUs using this policy */
 
+/*
+ * DEADLOCK ALERT! There is a ordering requirement between cpu_hotplug
+ * lock and dbs_mutex. cpu_hotplug lock should always be held before
+ * dbs_mutex. If any function that can potentially take cpu_hotplug lock
+ * (like __cpufreq_driver_target()) is being called with dbs_mutex taken, then
+ * cpu_hotplug lock should be taken before that. Note that cpu_hotplug lock
+ * is recursive for the same process. -Venki
+ */
 static DEFINE_MUTEX (dbs_mutex);
 static DECLARE_WORK (dbs_work, do_dbs_timer, NULL);
 
@@ -414,12 +422,14 @@ static void dbs_check_cpu(int cpu)
 static void do_dbs_timer(void *data)
 {
 	int i;
+	lock_cpu_hotplug();
 	mutex_lock(&dbs_mutex);
 	for_each_online_cpu(i)
 		dbs_check_cpu(i);
 	schedule_delayed_work(&dbs_work,
 			usecs_to_jiffies(dbs_tuners_ins.sampling_rate));
 	mutex_unlock(&dbs_mutex);
+	unlock_cpu_hotplug();
 }
 
 static inline void dbs_timer_init(void)
@@ -514,6 +524,7 @@ static int cpufreq_governor_dbs(struct cpufreq_policy *policy,
 		break;
 
 	case CPUFREQ_GOV_LIMITS:
+		lock_cpu_hotplug();
 		mutex_lock(&dbs_mutex);
 		if (policy->max < this_dbs_info->cur_policy->cur)
 			__cpufreq_driver_target(
@@ -524,6 +535,7 @@ static int cpufreq_governor_dbs(struct cpufreq_policy *policy,
 					this_dbs_info->cur_policy,
 					policy->min, CPUFREQ_RELATION_L);
 		mutex_unlock(&dbs_mutex);
+		unlock_cpu_hotplug();
 		break;
 	}
 	return 0;
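In terms of the userspace sketch shown earlier, the patch's approach is simply
to make the timer path take the locks in the same order as the cpu_down() path,
which is what the added lock_cpu_hotplug()/unlock_cpu_hotplug() calls do around
the dbs_mutex critical sections. A drop-in replacement for timer_path() in that
sketch (same hypothetical stand-in names, not kernel code) would look like:

static void *timer_path_fixed(void *unused)
{
	/* take the hotplug stand-in first, matching cpu_down_path() */
	pthread_mutex_lock(&hotplug_lock);
	pthread_mutex_lock(&dbs_mutex);
	printf("timer path finished, consistent lock order\n");
	pthread_mutex_unlock(&dbs_mutex);
	pthread_mutex_unlock(&hotplug_lock);
	return NULL;
}

With both paths honoring the same order, the circular wait can no longer form;
this is the ordering rule that the new DEADLOCK ALERT comment in the patch documents.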