author	Rafael J. Wysocki <rafael.j.wysocki@intel.com>	2018-05-09 11:44:56 +0200
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>	2018-05-16 10:10:29 +0200
commit	99e9acc27033495b0b4f6dd38125b217b611afdc (patch)
tree	844d1c8ada0ec377dc7f852ea3197f440049b191 /kernel
parent	64a03d3b240f1d3217ff426cbc7645a37c30f6cf (diff)
cpufreq: schedutil: Avoid using invalid next_freq
commit 97739501f207efe33145b918817f305b822987f8 upstream.

If the next_freq field of struct sugov_policy is set to UINT_MAX, it
shouldn't be used for updating the CPU frequency (this is a special
"invalid" value), but after commit b7eaf1aab9f8 (cpufreq: schedutil:
Avoid reducing frequency of busy CPUs prematurely) it may be passed as
the new frequency to sugov_update_commit() in sugov_update_single().

Fix that by adding an extra check for the special UINT_MAX value of
next_freq to sugov_update_single().

Fixes: b7eaf1aab9f8 (cpufreq: schedutil: Avoid reducing frequency of busy CPUs prematurely)
Reported-by: Viresh Kumar <viresh.kumar@linaro.org>
Cc: 4.12+ <stable@vger.kernel.org> # 4.12+
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
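To make the intent of the fix concrete, the following is a minimal, self-contained sketch of the decision logic described above, not the kernel code itself: struct policy and pick_freq are hypothetical names used only for illustration. UINT_MAX marks next_freq as "no valid cached frequency", so the busy-CPU shortcut must not reuse it as the new frequency.

/*
 * Simplified illustration (hypothetical names, not the kernel code):
 * UINT_MAX means the cached next_freq is invalid and must not be
 * handed back as the frequency to commit.
 */
#include <limits.h>
#include <stdbool.h>
#include <stdio.h>

struct policy {
	unsigned int next_freq;	/* UINT_MAX == "invalid" sentinel */
};

static unsigned int pick_freq(struct policy *p, bool busy, unsigned int next_f)
{
	/*
	 * Keep a busy CPU at its previous frequency instead of reducing
	 * it, but only when that previous frequency is valid; the extra
	 * UINT_MAX check is what the patch adds.
	 */
	if (busy && next_f < p->next_freq && p->next_freq != UINT_MAX)
		next_f = p->next_freq;

	return next_f;
}

int main(void)
{
	struct policy p = { .next_freq = UINT_MAX };

	/* Without the UINT_MAX check this would print 4294967295. */
	printf("%u\n", pick_freq(&p, true, 1200000));
	return 0;
}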
Diffstat (limited to 'kernel')
-rw-r--r--	kernel/sched/cpufreq_schedutil.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/cpufreq_schedutil.c b/kernel/sched/cpufreq_schedutil.c
index d6717a3331a1..81eb7899c7c8 100644
--- a/kernel/sched/cpufreq_schedutil.c
+++ b/kernel/sched/cpufreq_schedutil.c
@@ -282,7 +282,8 @@ static void sugov_update_single(struct update_util_data *hook, u64 time,
 		 * Do not reduce the frequency if the CPU has not been idle
 		 * recently, as the reduction is likely to be premature then.
 		 */
-		if (busy && next_f < sg_policy->next_freq) {
+		if (busy && next_f < sg_policy->next_freq &&
+		    sg_policy->next_freq != UINT_MAX) {
 			next_f = sg_policy->next_freq;
 
 			/* Reset cached freq as next_freq has changed */