path: root/include/linux/cpuidle.h
Commit message  [Author, Date, Files changed, Lines -/+]
* cpuidle: menu: Fix wakeup statistics updates for polling state  [Rafael J. Wysocki, 2019-11-24, 1 file, -0/+1]

    [ Upstream commit 5f26bdceb9c0a5e6c696aa2899d077cd3ae93413 ]

    If the CPU exits the "polling" state due to the time limit in the loop in
    poll_idle(), this is not a real wakeup and it just means that the "polling"
    state selection was not adequate. The governor mispredicted short idle
    duration, but had a more suitable state been selected, the CPU might have
    spent more time in it. In fact, there is no reason to expect that there
    would have been a wakeup event earlier than the next timer in that case.

    Handling such cases as regular wakeups in menu_update() may cause the menu
    governor to make suboptimal decisions going forward, but ignoring them
    altogether would not be correct either, because every time menu_select() is
    invoked, it makes a separate new attempt to predict the idle duration taking
    distinct time to the closest timer event as input and the outcomes of all
    those attempts should be recorded.

    For this reason, make menu_update() always assume that if the "polling"
    state was exited due to the time limit, the next proper wakeup event for the
    CPU would be the next timer event (not including the tick).

    Fixes: a37b969a61c1 "cpuidle: poll_state: Add time limit to poll_idle()"
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Reviewed-by: Daniel Lezcano <daniel.lezcano@linaro.org>
    Signed-off-by: Sasha Levin <sashal@kernel.org>
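    The shape of the fix, roughly (a sketch; the field and variable names follow the description above and may not match the patch exactly):

        /* include/linux/cpuidle.h: the one added line, a flag on the device */
        struct cpuidle_device {
                /* ... */
                unsigned int            poll_time_limit:1;
                /* ... */
        };

        /* drivers/cpuidle/poll_state.c: poll_idle() sets it when the loop gives up */
        if (local_clock() - time_start > limit) {
                dev->poll_time_limit = true;
                break;
        }

        /* drivers/cpuidle/governors/menu.c: menu_update() then treats the exit
         * as a wakeup by the next timer, not as a real wakeup */
        if (dev->poll_time_limit) {
                dev->poll_time_limit = false;
                measured_us = data->next_timer_us;
        }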
* cpuidle: governors: Consolidate PM QoS handling  [Rafael J. Wysocki, 2018-05-30, 1 file, -0/+1]

    There is some code duplication related to the PM QoS handling between the
    existing cpuidle governors, so move that code to a common helper function
    and call that from the governors.

    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* cpuidle: Return nohz hint from cpuidle_select()  [Rafael J. Wysocki, 2018-04-06, 1 file, -3/+5]

    Add a new pointer argument to cpuidle_select() and to the ->select cpuidle
    governor callback to allow a boolean value indicating whether or not the
    tick should be stopped before entering the selected state to be returned
    from there.

    Make the ladder governor ignore that pointer (to preserve its current
    behavior) and make the menu governor return "false" through it if:
     (1) the idle exit latency is constrained at 0, or
     (2) the selected state is a polling one, or
     (3) the expected idle period duration is within the tick period range.

    In addition to that, the correction factor computations in the menu governor
    need to take the possibility that the tick may not be stopped into account
    to avoid artificially small correction factor values. To that end, add a
    mechanism to record tick wakeups, as suggested by Peter Zijlstra, and use it
    to modify the menu_update() behavior when tick wakeup occurs. Namely, if the
    CPU is woken up by the tick and the return value of
    tick_nohz_get_sleep_length() is not within the tick boundary, the predicted
    idle duration is likely too short, so make menu_update() try to compensate
    for that by updating the governor statistics as though the CPU was idle for
    a long time.

    Since the value returned through the new argument pointer of
    cpuidle_select() is not used by its caller yet, this change by itself is not
    expected to alter the functionality of the code.

    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
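    The reworked prototypes look roughly like this (sketch; only the relevant members are shown):

        /* include/linux/cpuidle.h */
        extern int cpuidle_select(struct cpuidle_driver *drv,
                                  struct cpuidle_device *dev,
                                  bool *stop_tick);

        struct cpuidle_governor {
                /* ... */
                int (*select)(struct cpuidle_driver *drv,
                              struct cpuidle_device *dev,
                              bool *stop_tick);
                /* ... */
        };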
* PM: cpuidle/suspend: Add s2idle usage and time state attributes  [Rafael J. Wysocki, 2018-03-29, 1 file, -0/+4]

    Add a new attribute group called "s2idle" under the sysfs directory of each
    cpuidle state that supports the ->enter_s2idle callback and put two new
    attributes, "usage" and "time", into that group to represent the number of
    times the given state was requested for suspend-to-idle and the total time
    spent in suspend-to-idle after requesting that state, respectively.

    That will allow diagnostic information related to suspend-to-idle to be
    collected without enabling advanced debug features and analyzing dmesg
    output.

    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* Merge branches 'pm-cpuidle' and 'pm-opp'  [Rafael J. Wysocki, 2018-02-15, 1 file, -1/+1]

    * pm-cpuidle:
        PM: cpuidle: Fix cpuidle_poll_state_init() prototype
        Documentation/ABI: update cpuidle sysfs documentation

    * pm-opp:
        opp: cpu: Replace GFP_ATOMIC with GFP_KERNEL in dev_pm_opp_init_cpufreq_table
| * PM: cpuidle: Fix cpuidle_poll_state_init() prototype  [Rafael J. Wysocki, 2018-02-12, 1 file, -1/+1]

    Commit f85942207516 (x86: PM: Make APM idle driver initialize polling state)
    made apm_init() call cpuidle_poll_state_init(), but that is only defined
    when CONFIG_CPU_IDLE is set, so make the empty stub of it available for
    CONFIG_CPU_IDLE unset too to fix the resulting build issue.

    Fixes: f85942207516 (x86: PM: Make APM idle driver initialize polling state)
    Cc: 4.14+ <stable@vger.kernel.org> # 4.14+
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
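    After the fix, the guard around the helper in cpuidle.h looks roughly like this (sketch):

        #if defined(CONFIG_CPU_IDLE) && defined(CONFIG_ARCH_HAS_CPU_RELAX)
        extern void cpuidle_poll_state_init(struct cpuidle_driver *drv);
        #else
        static inline void cpuidle_poll_state_init(struct cpuidle_driver *drv) {}
        #endif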
* | cpuidle: Add new macro to enter a retention idle state  [Prashanth Prakash, 2018-01-02, 1 file, -16/+24]

    If a CPU is entering a low power idle state where it doesn't lose any
    context, then there is no need to call cpu_pm_enter()/cpu_pm_exit(). Add a
    new macro (CPU_PM_CPU_IDLE_ENTER_RETENTION) to be used by cpuidle drivers
    when they are entering a retention state. By not calling cpu_pm_enter() and
    cpu_pm_exit() we reduce the latency involved in entering and exiting the
    retention idle states.

    CPU_PM_CPU_IDLE_ENTER_RETENTION assumes that no state is lost and hence CPU
    PM notifiers will not be called. We may need a broader change if we need to
    support partial retention states efficiently.

    On an ARM64 based Qualcomm server platform we measured the following
    overhead for calling cpu_pm_enter and cpu_pm_exit for retention states.

        workload: stress --hdd #CPUs --hdd-bytes 32M -t 30
        Average overhead of cpu_pm_enter - 1.2us
        Average overhead of cpu_pm_exit  - 3.1us

    Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    Acked-by: Sudeep Holla <sudeep.holla@arm.com>
    Signed-off-by: Prashanth Prakash <pprakash@codeaurora.org>
    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
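    A driver's ->enter callback for a retention state would then look roughly like this (sketch; arm_cpuidle_suspend() stands in for whatever low-level entry helper the platform provides):

        static int my_enter_retention(struct cpuidle_device *dev,
                                      struct cpuidle_driver *drv, int idx)
        {
                /* No context is lost, so the cpu_pm_enter()/cpu_pm_exit()
                 * notifier calls are skipped entirely. */
                return CPU_PM_CPU_IDLE_ENTER_RETENTION(arm_cpuidle_suspend, idx);
        }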
* Merge branch 'pm-sleep'  [Rafael J. Wysocki, 2017-09-04, 1 file, -4/+4]

    * pm-sleep:
        ACPI / PM: Check low power idle constraints for debug only
        PM / s2idle: Rename platform operations structure
        PM / s2idle: Rename ->enter_freeze to ->enter_s2idle
        PM / s2idle: Rename freeze_state enum and related items
        PM / s2idle: Rename PM_SUSPEND_FREEZE to PM_SUSPEND_TO_IDLE
        ACPI / PM: Prefer suspend-to-idle over S3 on some systems
        platform/x86: intel-hid: Wake up Dell Latitude 7275 from suspend-to-idle
        PM / suspend: Define pr_fmt() in suspend.c
        PM / suspend: Use mem_sleep_labels[] strings in messages
        PM / sleep: Put pm_test under CONFIG_PM_SLEEP_DEBUG
        PM / sleep: Check pm_wakeup_pending() in __device_suspend_noirq()
        PM / core: Add error argument to dpm_show_time()
        PM / core: Split dpm_suspend_noirq() and dpm_resume_noirq()
        PM / s2idle: Rearrange the main suspend-to-idle loop
        PM / timekeeping: Print debug messages when requested
        PM / sleep: Mark suspend/hibernation start and finish
        PM / sleep: Do not print debug messages by default
        PM / suspend: Export pm_suspend_target_state
| * PM / s2idle: Rename ->enter_freeze to ->enter_s2idle  [Rafael J. Wysocki, 2017-08-11, 1 file, -4/+4]

    Rename the ->enter_freeze cpuidle driver callback to ->enter_s2idle to make
    it clear that it is used for entering suspend-to-idle and rename the related
    functions, variables and so on accordingly.

    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* | cpuidle: Make drivers initialize polling state  [Rafael J. Wysocki, 2017-08-30, 1 file, -2/+2]

    Make the drivers that want to include the polling state into their states
    table initialize it explicitly and drop the initialization of it (which in
    fact is conditional, but that is not obvious from the code) from the core.

    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    Tested-by: Sudeep Holla <sudeep.holla@arm.com>
    Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
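    A driver that wants state 0 to be the polling state now does the initialization itself, roughly (sketch; my_cpuidle_driver and the init function are illustrative):

        static int __init my_cpuidle_init(void)
        {
                struct cpuidle_driver *drv = &my_cpuidle_driver;

                /* Fill drv->states[0] with the generic polling state. */
                cpuidle_poll_state_init(drv);

                /* drv->states[1..] describe the real hardware idle states. */
                return cpuidle_register(drv, NULL);
        }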
* | cpuidle: Move polling state initialization code to separate file  [Rafael J. Wysocki, 2017-08-30, 1 file, -0/+6]

    Move the polling state initialization code to a separate file built
    conditionally on CONFIG_ARCH_HAS_CPU_RELAX to get rid of the #ifdef in
    driver.c.

    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    Tested-by: Sudeep Holla <sudeep.holla@arm.com>
    Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
* | cpuidle: Eliminate the CPUIDLE_DRIVER_STATE_START symbol  [Rafael J. Wysocki, 2017-08-30, 1 file, -6/+1]

    On some architectures the first (index 0) idle state is a polling one and it
    doesn't really save energy, so there is the CPUIDLE_DRIVER_STATE_START
    symbol allowing some pieces of cpuidle code to avoid using that state.
    However, this makes the code rather hard to follow.

    It is better to explicitly avoid the polling state, so add a new cpuidle
    state flag CPUIDLE_FLAG_POLLING to mark it and make the relevant code check
    that flag for the first state instead of using the
    CPUIDLE_DRIVER_STATE_START symbol.

    In the ACPI processor driver, which cannot always rely on the state flags
    (e.g. before the states table has been set up), define a new internal symbol
    ACPI_IDLE_STATE_START equivalent to the CPUIDLE_DRIVER_STATE_START one and
    drop the latter.

    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    Tested-by: Sudeep Holla <sudeep.holla@arm.com>
    Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
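    Code that used to skip the first state by index can now check the flag instead, roughly (sketch):

        static int first_usable_state(struct cpuidle_driver *drv)
        {
                /* Skip index 0 only if it really is the polling state. */
                return (drv->states[0].flags & CPUIDLE_FLAG_POLLING) ? 1 : 0;
        }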
* cpuidle:powernv: Add helper function to populate powernv idle states.  [Gautham R. Shenoy, 2017-01-31, 1 file, -0/+1]

    In the current code for powernv_add_idle_states, there is a lot of code
    duplication while initializing an idle state in powernv_states table.

    Add an inline helper function to populate the powernv_states[] table for a
    given idle state. Invoke this for populating the "Nap", "Fastsleep" and the
    stop states in powernv_add_idle_states.

    Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com>
    Acked-by: Balbir Singh <bsingharora@gmail.com>
    Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
* cpuidle: Allow enforcing deepest idle state selection  [Jacob Pan, 2016-11-29, 1 file, -1/+6]

    When idle injection is used to cap power, we need to override the governor's
    choice of idle states. For this reason, make it possible for the deepest
    idle state selection to be enforced by setting a flag on a given CPU, to
    achieve the maximum potential power draw reduction.

    Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
    [ rjw: Subject & changelog ]
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
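    Idle-injection code can then flip the per-CPU flag around a forced-idle period, roughly (sketch; the helper name follows the description above and the surrounding idle-injection machinery is omitted):

        static void forced_idle_begin(void)
        {
                cpuidle_use_deepest_state(true);   /* override the governor's choice */
        }

        static void forced_idle_end(void)
        {
                cpuidle_use_deepest_state(false);  /* back to normal governor selection */
        }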
* cpuidle: governors: Remove remaining old module code  [Daniel Lezcano, 2016-10-21, 1 file, -2/+0]

    The governors' code uses try_module_get() and module_put() to refcount the
    governor's module. But the governors are not compiled as modules, so the
    refcounting does not prevent the governor from being switched or a module
    from being unloaded. The code is pointless, so remove it.

    Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* cpuidle: introduce CPU_PM_CPU_IDLE_ENTER macro for ARM{32, 64}  [Sudeep Holla, 2016-07-21, 1 file, -0/+18]

    The function arm_enter_idle_state() is exactly the same in both the generic
    ARM{32,64} CPUIdle drivers and will be the same even in the ARM64 backend
    for the ACPI processor idle driver. So we can unify it and move it to a
    common place by introducing a CPU_PM_CPU_IDLE_ENTER macro that can be used
    in all those places, avoiding duplication.

    This is in preparation for reuse of the generic cpuidle entry function for
    ACPI LPI support on ARM64.

    Suggested-by: Rafael J. Wysocki <rjw@rjwysocki.net>
    Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
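    The generic ->enter callback built on the new macro looks roughly like this (sketch, modeled on the ARM cpuidle driver; arm_cpuidle_suspend() is the platform's low-level entry helper):

        static int arm_enter_idle_state(struct cpuidle_device *dev,
                                        struct cpuidle_driver *drv, int idx)
        {
                /* CPU_PM_CPU_IDLE_ENTER() wraps the low-level entry function
                 * with cpu_pm_enter()/cpu_pm_exit() for states losing context. */
                return CPU_PM_CPU_IDLE_ENTER(arm_cpuidle_suspend, idx);
        }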
* cpuidle: Do not access cpuidle_devices when !CONFIG_CPU_IDLE  [Catalin Marinas, 2016-06-02, 1 file, -0/+3]

    The cpuidle_devices per-CPU variable is only defined when CPU_IDLE is
    enabled. Commit c8cc7d4de7a4 ("sched/idle: Reorganize the idle loop")
    removed the #ifdef CONFIG_CPU_IDLE around cpuidle_idle_call() with the
    compiler optimising away __this_cpu_read(cpuidle_devices). However, with
    CONFIG_UBSAN && !CONFIG_CPU_IDLE, this optimisation no longer happens and
    the kernel fails to link since cpuidle_devices is not defined.

    This patch introduces an accessor function for the current CPU cpuidle
    device (returning NULL when !CONFIG_CPU_IDLE) and uses it in
    cpuidle_idle_call().

    Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
    Cc: 4.5+ <stable@vger.kernel.org> # 4.5+
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
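    The accessor pattern, roughly (sketch):

        #ifdef CONFIG_CPU_IDLE
        static inline struct cpuidle_device *cpuidle_get_device(void)
        {
                return __this_cpu_read(cpuidle_devices);
        }
        #else
        static inline struct cpuidle_device *cpuidle_get_device(void)
        {
                return NULL;
        }
        #endif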
* cpuidle/coupled: Remove cpuidle_device::safe_state_index  [Xunlei Pang, 2015-08-28, 1 file, -1/+0]

    cpuidle_device::safe_state_index needs to be initialized before use, and it
    should be the same as cpuidle_driver::safe_state_index. Tackle this issue by
    removing safe_state_index from the cpuidle_device structure and using the
    one in the cpuidle_driver structure instead.

    Suggested-by: Daniel Lezcano <daniel.lezcano@linaro.org>
    Signed-off-by: Xunlei Pang <pang.xunlei@linaro.org>
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
*-. Merge branches 'pm-sleep' and 'pm-runtime'  [Rafael J. Wysocki, 2015-06-19, 1 file, -6/+10]

    * pm-sleep:
        PM / sleep: trace_device_pm_callback coverage in dpm_prepare/complete
        PM / wakeup: add a dummy wakeup_source to record statistics
        PM / sleep: Make suspend-to-idle-specific code depend on CONFIG_SUSPEND
        PM / sleep: Return -EBUSY from suspend_enter() on wakeup detection
        PM / tick: Add tracepoints for suspend-to-idle diagnostics
        PM / sleep: Fix symbol name in a comment in kernel/power/main.c
        leds / PM: fix hibernation on arm when gpio-led used with CPU led trigger
        ARM: omap-device: use SET_NOIRQ_SYSTEM_SLEEP_PM_OPS
        bus: omap_l3_noc: add missed callbacks for suspend-to-disk
        PM / sleep: Add macro to define common noirq system PM callbacks
        PM / sleep: Refine diagnostic messages in enter_state()
        PM / wakeup: validate wakeup source before activating it.

    * pm-runtime:
        PM / Runtime: Update last_busy in rpm_resume
        PM / runtime: add note about re-calling in during device probe()
| * | PM / sleep: Make suspend-to-idle-specific code depend on CONFIG_SUSPEND  [Rafael J. Wysocki, 2015-05-19, 1 file, -6/+10]

    Since idle_should_freeze() is defined to always return 'false' for
    CONFIG_SUSPEND unset, all of the code depending on it in cpuidle_idle_call()
    is not necessary in that case. Make that code depend on CONFIG_SUSPEND too
    to avoid building it when it is not going to be used.

    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    Acked-by: Thomas Gleixner <tglx@linutronix.de>
* | sched / idle: Call default_idle_call() from cpuidle_enter_state()  [Rafael J. Wysocki, 2015-05-14, 1 file, -0/+1]

    The check of the cpuidle_enter() return value against -EBUSY made in
    call_cpuidle() will not be necessary any more if cpuidle_enter_state() calls
    default_idle_call() directly when it is about to return -EBUSY, so make that
    happen and eliminate the check.

    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    Reviewed-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
    Tested-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
    Tested-by: Sudeep Holla <sudeep.holla@arm.com>
    Acked-by: Kevin Hilman <khilman@linaro.org>
* | sched / idle: Call idle_set_state() from cpuidle_enter_state()  [Rafael J. Wysocki, 2015-05-14, 1 file, -0/+3]

    Introduce a wrapper function around idle_set_state() called
    sched_idle_set_state() that will pass this_rq() to it as the first argument
    and make cpuidle_enter_state() call the new function before and after
    entering the target state. At the same time, remove direct invocations of
    idle_set_state() from call_cpuidle().

    This will allow the invocation of default_idle_call() to be moved from
    call_cpuidle() to cpuidle_enter_state() safely and call_cpuidle() to be
    simplified a bit as a result.

    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    Reviewed-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
    Tested-by: Preeti U Murthy <preeti@linux.vnet.ibm.com>
    Tested-by: Sudeep Holla <sudeep.holla@arm.com>
    Acked-by: Kevin Hilman <khilman@linaro.org>
* cpuidle: remove state_count field from struct cpuidle_device  [Bartlomiej Zolnierkiewicz, 2015-04-03, 1 file, -1/+0]

    Thomas Schlichter reports the following issue on his Samsung NC20:

     "The C-states C1 and C2 are exposed to the OS when connected to AC, and
      additionally the C3 C-state is provided when disconnected from AC.
      However, the number of C-states shown in sysfs is fixed to the number of
      C-states present at boot. If I boot with AC connected, I always only see
      the C-states up to C2 even if I disconnect AC.

      The reason is commit 130a5f692425 (ACPI / cpuidle: remove dev->state_count
      setting). It removes the update of dev->state_count, but sysfs uses
      exactly this variable to show the C-states. The fix is to use
      drv->state_count in sysfs. As this is currently the last user of
      dev->state_count, this variable can be completely removed."

    Remove dev->state_count as per the above.

    Reported-by: Thomas Schlichter <thomas.schlichter@web.de>
    Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
    Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com>
    Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
    Cc: 3.14+ <stable@vger.kernel.org> # 3.14+
    [ rjw: Changelog ]
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* cpuidle / sleep: Use broadcast timer for states that stop local timer  [Rafael J. Wysocki, 2015-03-05, 1 file, -2/+15]

    Commit 381063133246 (PM / sleep: Re-implement suspend-to-idle handling)
    overlooked the fact that entering some sufficiently deep idle states by CPUs
    may cause their local timers to stop and in those cases it is necessary to
    switch over to a broadcast timer prior to entering the idle state.

    If the cpuidle driver in use does not provide the new ->enter_freeze
    callback for any of the idle states, that problem affects suspend-to-idle
    too, but it is not taken into account after the changes made by commit
    381063133246.

    Fix that by changing the definition of cpuidle_enter_freeze() and
    re-arranging the code in cpuidle_idle_call(), so the former does not call
    cpuidle_enter() any more and the fallback case is handled by
    cpuidle_idle_call() directly.

    Fixes: 381063133246 (PM / sleep: Re-implement suspend-to-idle handling)
    Reported-and-tested-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
* PM / sleep: Make it possible to quiesce timers during suspend-to-idle  [Rafael J. Wysocki, 2015-02-15, 1 file, -0/+9]

    The efficiency of suspend-to-idle depends on being able to keep CPUs in the
    deepest available idle states for as much time as possible. Ideally, they
    should only be brought out of idle by system wakeup interrupts.

    However, timer interrupts occurring periodically prevent that from happening
    and it is not practical to chase all of the "misbehaving" timers in a
    whack-a-mole fashion. A much more effective approach is to suspend the local
    ticks for all CPUs and the entire timekeeping along the lines of what is
    done during full suspend, which also helps to keep suspend-to-idle and full
    suspend reasonably similar.

    The idea is to suspend the local tick on each CPU executing
    cpuidle_enter_freeze() and to make the last of them suspend the entire
    timekeeping. That should prevent timer interrupts from triggering until an
    IO interrupt wakes up one of the CPUs. It needs to be done with interrupts
    disabled on all of the CPUs, though, because otherwise the suspended
    clocksource might be accessed by an interrupt handler which might lead to
    fatal consequences.

    Unfortunately, the existing ->enter callbacks provided by cpuidle drivers
    generally cannot be used for implementing that, because some of them
    re-enable interrupts temporarily and some idle entry methods cause
    interrupts to be re-enabled automatically on exit. Also some of these
    callbacks manipulate local clock event devices of the CPUs which really
    shouldn't be done after suspending their ticks.

    To overcome that difficulty, introduce a new cpuidle state callback,
    ->enter_freeze, that will be guaranteed (1) to keep interrupts disabled all
    the time (and return with interrupts disabled) and (2) not to touch the CPU
    timer devices. Modify cpuidle_enter_freeze() to look for the deepest
    available idle state with ->enter_freeze present and to make the CPU execute
    that callback with suspended tick (and the last of the online CPUs to
    execute it with suspended timekeeping).

    Suggested-by: Thomas Gleixner <tglx@linutronix.de>
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
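    The relevant part of struct cpuidle_state then looks roughly like this (sketch; only the two callbacks are shown):

        struct cpuidle_state {
                /* ... */
                int (*enter)(struct cpuidle_device *dev,
                             struct cpuidle_driver *drv, int index);

                /* Like ->enter, but guaranteed to keep interrupts disabled and
                 * not to touch the CPU's timer devices; used only for
                 * suspend-to-idle. */
                void (*enter_freeze)(struct cpuidle_device *dev,
                                     struct cpuidle_driver *drv, int index);
        };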
* PM / sleep: Re-implement suspend-to-idle handling  [Rafael J. Wysocki, 2015-02-13, 1 file, -2/+2]

    In preparation for adding support for quiescing timers in the final stage of
    suspend-to-idle transitions, rework the freeze_enter() function making the
    system wait on a wakeup event, the freeze_wake() function terminating the
    suspend-to-idle loop and the mechanism by which deep idle states are entered
    during suspend-to-idle.

    First of all, introduce a simple state machine for suspend-to-idle and make
    the code in question use it.

    Second, prevent freeze_enter() from losing wakeup events due to race
    conditions and ensure that the number of online CPUs won't change while it
    is being executed. In addition to that, make it force all of the CPUs
    re-enter the idle loop in case they are in idle states already (so they can
    enter deeper idle states if possible).

    Next, drop cpuidle_use_deepest_state() and replace use_deepest_state checks
    in cpuidle_select() and cpuidle_reflect() with a single suspend-to-idle
    state check in cpuidle_idle_call().

    Finally, introduce cpuidle_enter_freeze() that will simply find the deepest
    idle state available to the given CPU and enter it using cpuidle_enter().

    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
* cpuidle / ACPI: remove unused CPUIDLE_FLAG_TIME_INVALID  [Len Brown, 2014-12-17, 1 file, -3/+0]

    CPUIDLE_FLAG_TIME_INVALID is no longer checked by menu or ladder cpuidle
    governors, so don't bother setting or defining it.

    It was originally invented to account for the fact that acpi_safe_halt()
    enables interrupts to invoke HLT. That would allow interrupt service
    routines to be included in the last_idle duration measurements made in
    cpuidle_enter_state(), potentially returning a duration much larger than
    reality.

    But menu and ladder can gracefully handle erroneously large duration
    intervals without checking for CPUIDLE_FLAG_TIME_INVALID. Further, if they
    don't check CPUIDLE_FLAG_TIME_INVALID, they can also benefit from the
    instances when the duration interval is not erroneously large.

    Signed-off-by: Len Brown <len.brown@intel.com>
    Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* cpuidle: Invert CPUIDLE_FLAG_TIME_VALID logic  [Daniel Lezcano, 2014-11-12, 1 file, -2/+2]

    The only place where the time is invalid is when the ACPI_CSTATE_FFH entry
    method is not set. Otherwise, for all the drivers, the time can be correctly
    measured.

    Instead of duplicating the CPUIDLE_FLAG_TIME_VALID flag in all the drivers
    for all the states, just invert the logic by replacing it with the flag
    CPUIDLE_FLAG_TIME_INVALID; hence we can set this flag only for the acpi idle
    driver, remove the former flag from all the drivers and invert the logic
    with this flag in the different governors.

    Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* Merge branch 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus  [Linus Torvalds, 2014-06-09, 1 file, -0/+1]

    Pull MIPS updates from Ralf Baechle:
     - three fixes for 3.15 that didn't make it in time
     - limited Octeon 3 support
     - paravirtualization support
     - improvement to platform support for Netlogic SOCs
     - add support for powering down the Malta eval board in software
     - add many instructions to the in-kernel microassembler
     - add support for the BPF JIT
     - minor cleanups of the BCM47xx code
     - large cleanup of math emu code resulting in significant code size
       reduction, better readability of the code and more accurate emulation
     - improvements to the MIPS CPS code
     - support C3 power status for the R4k count/compare clock device
     - improvements to the GIO support for older SGI workstations
     - increase number of supported CPUs to 256; this can be reached on certain
       embedded multithreaded ccNUMA configurations
     - various small cleanups, updates and fixes

    * 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus: (173 commits)
        MIPS: IP22/IP28: Improve GIO support
        MIPS: Octeon: Add twsi interrupt initialization for OCTEON 3XXX, 5XXX, 63XX
        DEC: Document the R4k MB ASIC mini interrupt controller
        DEC: Add self as the maintainer
        MIPS: Add microMIPS MSA support.
        MIPS: Replace calls to obsolete strict_strto call with kstrto* equivalents.
        MIPS: Replace obsolete strict_strto call with kstrto
        MIPS: BFP: Simplify code slightly.
        MIPS: Call find_vma with the mmap_sem held
        MIPS: Fix 'write_msa_##' inline macro.
        MIPS: Fix MSA toolchain support detection.
        mips: Update the email address of Geert Uytterhoeven
        MIPS: Add minimal defconfig for mips_paravirt
        MIPS: Enable build for new system 'paravirt'
        MIPS: paravirt: Add pci controller for virtio
        MIPS: Add code for new system 'paravirt'
        MIPS: Add functions for hypervisor call
        MIPS: OCTEON: Add OCTEON3 to __get_cpu_type
        MIPS: Add function get_ebase_cpunum
        MIPS: Add minimal support for OCTEON3 to c-r4k.c
        ...
| * cpuidle: declare cpuidle_dev in cpuidle.h  [Paul Burton, 2014-05-28, 1 file, -0/+1]

    Declaring this allows drivers which need to initialise each struct
    cpuidle_device at initialisation time to make use of the structures already
    defined in cpuidle.c, rather than having to wastefully define their own.

    Signed-off-by: Paul Burton <paul.burton@imgtec.com>
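    The added declaration and its intended use, roughly (sketch; my_init_one_cpu() is an illustrative driver helper, not from the patch):

        /* include/linux/cpuidle.h */
        DECLARE_PER_CPU(struct cpuidle_device, cpuidle_dev);

        /* a driver can then reuse the core's per-CPU devices */
        static void my_init_one_cpu(int cpu)
        {
                struct cpuidle_device *dev = &per_cpu(cpuidle_dev, cpu);

                dev->cpu = cpu;
                /* driver-specific per-CPU setup ... */
        }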
* | PM / suspend: Always use deepest C-state in the "freeze" sleep state  [Rafael J. Wysocki, 2014-05-07, 1 file, -0/+2]

    If freeze_enter() is called, we want to bypass the current cpuidle governor
    and always use the deepest available (that is, not disabled) C-state,
    because we want to save as much energy as reasonably possible then and
    runtime latency constraints don't matter at that point, since the system is
    in a sleep state anyway.

    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
    Tested-by: Aubrey Li <aubrey.li@linux.intel.com>
* | cpuidle: Combine cpuidle_enabled() with cpuidle_select()  [Rafael J. Wysocki, 2014-05-01, 1 file, -5/+0]

    Since both cpuidle_enabled() and cpuidle_select() are only called by
    cpuidle_idle_call(), it is not really useful to keep them separate and
    combining them will help to avoid complicating cpuidle_idle_call() even
    further if governors are changed to return error codes sometimes.

    This code modification shouldn't lead to any functional changes.

    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* sched/idle: Reorganize the idle loop  [Daniel Lezcano, 2014-03-11, 1 file, -0/+2]

    Now that we have the main cpuidle function in idle.c, move some code from
    the idle main loop to this function for the sake of clarity. That removes
    the if-then-else indentation that was difficult to follow when looking at
    the code. This patch does not change the current behavior.

    Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
    Acked-by: Nicolas Pitre <nico@linaro.org>
    Signed-off-by: Peter Zijlstra <peterz@infradead.org>
    Cc: tglx@linutronix.de
    Cc: rjw@rjwysocki.net
    Cc: preeti@linux.vnet.ibm.com
    Link: http://lkml.kernel.org/r/1393832934-11625-3-git-send-email-daniel.lezcano@linaro.org
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
* cpuidle/idle: Move the cpuidle_idle_call function to idle.c  [Daniel Lezcano, 2014-03-11, 1 file, -2/+0]

    The cpuidle_idle_call() function does nothing more than call the three
    individual functions and is no longer used by any arch specific code, but
    only in the cpuidle framework code. We can move this function into the idle
    task code to ensure better proximity to the scheduler code.

    Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
    Acked-by: Nicolas Pitre <nicolas.pitre@linaro.org>
    Signed-off-by: Peter Zijlstra <peterz@infradead.org>
    Cc: rjw@rjwysocki.net
    Cc: preeti@linux.vnet.ibm.com
    Link: http://lkml.kernel.org/r/1393832934-11625-2-git-send-email-daniel.lezcano@linaro.org
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
* idle/cpuidle: Split cpuidle_idle_call main function into smaller functions  [Daniel Lezcano, 2014-03-11, 1 file, -0/+19]

    In order to allow better integration between the cpuidle framework and the
    scheduler, reducing the distance between these two sub-components will
    facilitate this integration by moving part of the cpuidle code in the idle
    task file and, because idle.c is in the sched directory, we have access to
    the scheduler's private structures.

    This patch splits the cpuidle_idle_call main entry function into 3 calls to
    a newly added API:

     1. select the idle state
     2. enter the idle state
     3. reflect the idle state

    The cpuidle_idle_call calls these three functions to implement the main idle
    entry function.

    Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
    Acked-by: Nicolas Pitre <nico@linaro.org>
    Signed-off-by: Peter Zijlstra <peterz@infradead.org>
    Cc: rjw@rjwysocki.net
    Cc: preeti@linux.vnet.ibm.com
    Link: http://lkml.kernel.org/r/1393832934-11625-1-git-send-email-daniel.lezcano@linaro.org
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
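    The resulting shape of the idle path, roughly (sketch, simplified, no error handling; these are the original signatures, before cpuidle_select() grew the stop_tick argument described in the 2018 entry above):

        static void do_one_idle_cycle(struct cpuidle_driver *drv,
                                      struct cpuidle_device *dev)
        {
                int next_state, entered_state;

                next_state = cpuidle_select(drv, dev);                /* 1. select  */
                entered_state = cpuidle_enter(drv, dev, next_state);  /* 2. enter   */
                cpuidle_reflect(dev, entered_state);                  /* 3. reflect */
        }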
* cpuidle: remove cpuidle_unregister_governor()  [Viresh Kumar, 2013-10-30, 1 file, -6/+0]

    cpuidle_unregister_governor() and cpuidle_replace_governor() aren't used
    anymore and can be removed. They were used by cpuidle governors earlier, but
    since the governors can't be compiled as modules any more, these two
    functions aren't necessary.

    Suggested-by: Daniel Lezcano <daniel.lezcano@linaro.org>
    Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* cpuidle: fix indentation of cpumask  [Viresh Kumar, 2013-10-30, 1 file, -1/+1]

    Use tabs for cpumask indentation in struct cpuidle_driver.

    [rjw: Changelog]
    Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* cpuidle: Add missing forward declarations of structures  [Daniel Lezcano, 2013-07-15, 1 file, -0/+2]

    Add missing forward declarations of struct cpuidle_state_kobj and struct
    cpuidle_driver_kobj in cpuidle.h.

    [rjw: Changelog]
    Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* cpuidle: Make cpuidle's sysfs directory dynamically allocated  [Daniel Lezcano, 2013-07-15, 1 file, -4/+3]

    The cpuidle sysfs code is designed to have a single instance of per CPU
    cpuidle directory. It is not possible to remove the sysfs entry and create
    it again. This is not a problem with the current code but future changes
    will add CPU hotplug support to enable/disable the device, so it will need
    to remove the sysfs entry like other subsystems do. That won't be possible
    without this change, because the kobj is a static object which can't be
    reused for kobj_init_and_add().

    Add cpuidle_device_kobj to be allocated dynamically when adding/removing a
    sysfs entry which is consistent with the other cpuidle's sysfs entries. An
    added benefit is that the sysfs code is now more self-contained and the
    includes needed for sysfs can be moved from cpuidle.h directly into sysfs.c
    so as to reduce the total number of headers dragged along with cpuidle.h.

    [rjw: Changelog]
    Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* cpuidle: simplify multiple driver support  [Daniel Lezcano, 2013-06-11, 1 file, -3/+3]

    Commit bf4d1b5 (cpuidle: support multiple drivers) introduced support for
    using multiple cpuidle drivers at the same time. It added a couple of new
    APIs to register the driver per CPU, but that led to some unnecessary code
    complexity related to the kernel config options deciding whether or not the
    multiple driver support is enabled. The code has to work as it did before
    when the multiple driver support is not enabled and the multiple driver
    support has to be compatible with the previously existing API.

    Remove the new API, not used by any driver in the tree yet (but needed for
    the HMP cpuidle drivers that will be submitted soon), and add a new cpumask
    pointer to the cpuidle driver structure that will point to the mask of CPUs
    handled by the given driver. That will allow the cpuidle_[un]register_driver()
    API to be used for the multiple driver support along with the
    cpuidle_[un]register() functions added recently.

    [rjw: Changelog]
    Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* Merge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux  [Linus Torvalds, 2013-05-11, 1 file, -1/+1]

    Pull idle update from Len Brown:
     "Add support for new Haswell-ULT CPU idle power states"

    * 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux:
        intel_idle: initial C8, C9, C10 support
        tools/power turbostat: display C8, C9, C10 residency
| * intel_idle: initial C8, C9, C10 support  [Len Brown, 2013-04-17, 1 file, -1/+1]

    Allow intel_idle and cpuidle to utilize C8, C9, C10 when they are present
    on... "Fourth Generation Intel(R) Core(TM) Processors", which are based on
    Intel(R) microarchitecture code name Haswell.

    Signed-off-by: Len Brown <len.brown@intel.com>
* | cpuidle: make a single register function for all  [Daniel Lezcano, 2013-04-23, 1 file, -2/+7]

    The usual scheme to initialize a cpuidle driver on an SMP system is:

        cpuidle_register_driver(drv);
        for_each_possible_cpu(cpu) {
                device = &per_cpu(cpuidle_dev, cpu);
                cpuidle_register_device(device);
        }

    This code is duplicated in each cpuidle driver.

    On UP systems, it is done this way:

        cpuidle_register_driver(drv);
        device = &per_cpu(cpuidle_dev, cpu);
        cpuidle_register_device(device);

    On UP, the macro 'for_each_cpu' does one iteration:

        #define for_each_cpu(cpu, mask) \
                for ((cpu) = 0; (cpu) < 1; (cpu)++, (void)mask)

    Hence, the initialization loop is the same for UP as for SMP.

    Besides, we saw different bugs, mis-initializations and unchecked return
    codes in the different drivers; the code is duplicated, including the bugs.
    After fixing all these, it appears the initialization pattern is the same
    for everyone.

    Please note, some drivers are doing dev->state_count = drv->state_count.
    This is not necessary because it is done by the cpuidle_enable_device
    function in the cpuidle framework. This is true as long as you have the same
    states for all your devices. Otherwise, the 'low level' API should be used
    instead, with the specific initialization for the driver.

    Let's add a wrapper function doing this initialization with a cpumask
    parameter for the coupled idle states and use it for all the drivers. That
    will save a lot of LOC, consolidate the code, and the modifications in the
    future could be done in a single place. Another benefit is the consolidation
    of the cpuidle_device variable which is now in the cpuidle framework and no
    longer spread across the different arch specific drivers.

    Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
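    With the wrapper in place, a driver's registration reduces to roughly this (sketch; my_cpuidle_driver and the init function are illustrative):

        static int __init my_cpuidle_init(void)
        {
                /* Registers the driver and one cpuidle_device per CPU;
                 * a cpumask is passed only for coupled idle states. */
                return cpuidle_register(&my_cpuidle_driver, NULL);
        }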
* | cpuidle: remove en_core_tk_irqen flag  [Daniel Lezcano, 2013-04-23, 1 file, -11/+0]

    The en_core_tk_irqen flag is set in all the cpuidle drivers, which means it
    is not necessary to specify this flag. Remove the flag and the code related
    to it.

    Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
    Acked-by: Kevin Hilman <khilman@linaro.org> # for mach-omap2/*
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* | cpuidle: initialize the broadcast timer framework  [Daniel Lezcano, 2013-04-01, 1 file, -0/+2]

    Commit 89878baa73f0f1c679355006bd8632e5d78f96c2 introduced the
    CPUIDLE_FLAG_TIMER_STOP flag, with which we specify that a specific idle
    state stops the local timer.

    Now use this flag to check at init time if one state will need the broadcast
    timer and, in this case, set up the broadcast timer framework. That prevents
    the same code from being duplicated in multiple drivers.

    Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* | cpuidle : handle clockevent notify from the cpuidle framework  [Daniel Lezcano, 2013-04-01, 1 file, -0/+1]

    When a cpu enters a deep idle state, the local timers are stopped and the
    time framework falls back to the timer device used as a broadcast timer. The
    different cpuidle drivers are calling clockevents_notify ENTER/EXIT when the
    idle state stops the local timer.

    Add a new flag CPUIDLE_FLAG_TIMER_STOP which can be set by the cpuidle
    drivers. If the flag is set, the cpuidle core code takes care of the
    notification on behalf of the driver to avoid pointless code duplication.

    Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
    Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
    Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
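    A driver marks an affected state roughly like this (sketch; the callbacks, names and latency numbers are illustrative, not from the patch):

        static struct cpuidle_driver my_cpuidle_driver = {
                .name  = "my_cpuidle",
                .owner = THIS_MODULE,
                .states[0] = {
                        .enter = my_wfi_enter,          /* local timer keeps running */
                        .name  = "WFI",
                },
                .states[1] = {
                        .enter            = my_deep_enter,
                        .exit_latency     = 300,        /* us */
                        .target_residency = 1000,       /* us */
                        /* Core issues the clockevents broadcast ENTER/EXIT
                         * notifications for this state on our behalf. */
                        .flags            = CPUIDLE_FLAG_TIMER_STOP,
                        .name             = "DEEP",
                },
                .state_count = 2,
        };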
* cpuidle: remove vestage definition of cpuidle_state_usage.driver_data  [Len Brown, 2013-02-11, 1 file, -22/+0]

    This field is no longer used.

    Signed-off-by: Len Brown <len.brown@intel.com>
    Acked-by: Daniel Lezcano <daniel.lezcano@linaro.org>
* cpuidle: remove the power_specified field in the driver  [Daniel Lezcano, 2013-01-15, 1 file, -1/+1]

    We realized that the power usage field is never filled and, when it is
    filled for tegra, the power_specified flag is not set, causing all of these
    values to be reset when the driver is initialized with set_power_state().

    However, the power_specified flag can be simply removed under the assumption
    that the states are always backward sorted, which is the case with the
    current code. This change allows the menu governor select function and
    cpuidle_play_dead() to be simplified. Moreover, the set_power_states()
    function can be removed as it does not make sense any more.

    Drop the power_specified flag from struct cpuidle_driver and make the
    related changes as described above. As a consequence, this also fixes the
    bug where, on systems with dynamic C-states, the power fields are not
    initialized.

    [rjw: Changelog]
    References: https://bugzilla.kernel.org/show_bug.cgi?id=42870
    References: https://bugzilla.kernel.org/show_bug.cgi?id=43349
    References: https://lkml.org/lkml/2012/10/16/518
    Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* cpuidle: support multiple drivers  [Daniel Lezcano, 2012-11-15, 1 file, -2/+5]

    With the tegra3 and the big.LITTLE [1] new architectures, several CPUs with
    different characteristics (latencies and states) can co-exist in the system.
    The cpuidle framework has the limitation of handling only identical CPUs.

    This patch removes this limitation by introducing multiple driver support
    for cpuidle. This option is configurable at compile time and should be
    enabled for the architectures mentioned above. So there is no impact for the
    other platforms if the option is disabled. The option defaults to 'n'. Note
    the multiple driver support is also compatible with the existing drivers;
    even if just one driver is needed, all the CPUs will be tied to this driver
    using an extra small chunk of processor memory.

    The multiple driver support uses a per-cpu driver pointer instead of a
    global variable, and the accessors to this variable are called from a cpu
    context. In order to keep the compatibility with the existing drivers, the
    functions 'cpuidle_register_driver' and 'cpuidle_unregister_driver' will
    register the specified driver for all the CPUs.

    The semantics of the output of /sys/devices/system/cpu/cpuidle/current_driver
    remain the same, except that the driver name will be related to the current
    cpu. The /sys/devices/system/cpu/cpu[0-9]/cpuidle/driver/name files are
    added, allowing the per cpu driver name to be read.

    [1] http://lwn.net/Articles/481055/

    Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
    Acked-by: Peter De Schrijver <pdeschrijver@nvidia.com>
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
* cpuidle: move driver's refcount to cpuidle  [Daniel Lezcano, 2012-11-15, 1 file, -0/+1]

    We want to support different cpuidle drivers co-existing together. In this
    case we should move the refcount to the cpuidle_driver structure to handle
    several drivers at a time.

    Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
    Acked-by: Peter De Schrijver <pdeschrijver@nvidia.com>
    Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>