author     Linus Torvalds <torvalds@linux-foundation.org>  2024-05-14 13:19:15 -0700
committer  Linus Torvalds <torvalds@linux-foundation.org>  2024-05-14 13:19:15 -0700
commit     0c181b1d97dc4deaa902da46740e412c0d0bf9fb (patch)
tree       c90a951c11520931d4dee3bdd55b4a0023477757 /drivers/cpufreq
parent     f952b6c863090464c148066df9f46cb3edd603da (diff)
parent     de1c2722e07819c7ea65bb4bf37a2cfe2556095b (diff)
Merge tag 'pm-6.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
Pull power management updates from Rafael Wysocki:
"These are mostly cpufreq updates, including a significant intel-pstate
driver update and several amd-pstate improvements plus some updates of
ARM cpufreq drivers, general fixes and cleanups.
Also included are changes related to system sleep, power capping
updates adding support for a new platform and a new hardware feature
(among other things), a Samsung exynos-asv driver update allowing it
to change its Energy Model after adjusting voltage, minor cpuidle and
devfreq updates and a small documentation cleanup.
Specifics:
- Rework the handling of disabled turbo in the intel_pstate driver
and make it update the maximum CPU frequency consistently
regardless of the reason on top of a number of cleanups (Rafael
Wysocki)
- Add missing checks for NULL .exit() cpufreq driver callback to the
cpufreq core (Viresh Kumar)
- Prevent pulicy->max from going above the frequency QoS maximum
value when cpufreq_frequency_table_verify() is used (Xuewen Yan)
- Prevent a negative CPU number or frequency value from being printed
if they are really large (Joshua Yeong)
- Update MAINTAINERS entry for amd-pstate to add two new
submaintainers and a designated reviewer (Huang Rui)
- Clean up the amd-pstate driver and update its documentation
(Gautham Shenoy)
- Fix the highest frequency issue in the amd-pstate driver which
limits performance (Perry Yuan)
- Enable CPPC v2 for certain processors in the family 17H, as
requested by TR40 processor users who expect improved performance
and lower system temperature (Perry Yuan)
- Read latency and delay values from platform firmware first for
more accurate timing (Perry Yuan) (a fallback sketch follows this
list)
- Introduce a new quirk to support amd-pstate on legacy processors
which either lack CPPC capability, or only have CPPC v2
capability (Perry Yuan) (a DMI quirk sketch follows this list)
- Sun50i cpufreq: Add support for opp_supported_hw, H616 platform and
general cleanups (Andre Przywara, Martin Botka, Brandon Cheo Fusi,
Dan Carpenter, Viresh Kumar) (an OPP config sketch follows the
diff at the end)
- CPPC cpufreq: Fix possible null pointer dereference (Aleksandr
Mishin) (a reference-counting sketch follows the commit list)
- Eliminate uses of of_node_put() from cpufreq (Javier Carrasco,
Shivani Gupta) (a scope-based cleanup sketch follows the diffstat)
- brcmstb-avs: ISO C90 forbids mixed declarations (Portia Stephens)
- mediatek cpufreq: Add support for MT7988A (Sam Shih)
- cpufreq-qcom-hw: Add SM4450 compatibles in DT bindings (Tengfei
Fan)
- Fix struct cpudata::epp_cached kernel-doc in the intel_pstate
cpufreq driver (Jeff Johnson)
- Fix kerneldoc description of ladder_do_selection() (Jeff Johnson)
- Convert the cpuidle kirkwood driver to platform remove callback
returning void (Yangtao Li)
- Replace deprecated strncpy() with strscpy() in the hibernation core
code (Justin Stitt)
- Use %ps to simplify debug output in the core system-wide suspend
and resume code (Len Brown)
- Remove unnecessary else from device_init_wakeup() and make
device_wakeup_disable() return void (Dhruva Gole)
- Enable PMU support in the Intel TPMI RAPL driver (Zhang Rui)
- Add support for ArrowLake-H platform to the Intel RAPL driver
(Zhang Rui)
- Avoid explicit cpumask allocation on stack in DTPM (Dawei Li)
- Make the Samsung exynos-asv driver update the Energy Model after
adjusting voltage on top of some preliminary changes of the OPP and
Energy Model generic code (Lukasz Luba)
- Remove a reference to a function that has been dropped from the
power management documentation (Bjorn Helgaas)
- Convert the platform remove callback to .remove_new for the
exynos-nocp, exynos-ppmu, mtk-cci-devfreq, sun8i-a33-mbus, and
rk3399_dmc devfreq drivers (Uwe Kleine-König)
- Use DEFINE_SIMPLE_DEV_PM_OPS for the exynos-bus.c driver (Anand Moon)"
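
The firmware-timing change above boils down to one pattern: ask CPPC for
the value and fall back to the driver's hardcoded default when firmware
reports CPUFREQ_ETERNAL. A minimal sketch in C, assuming the real
cppc_get_transition_latency() helper; the my_ helper name and fallback
constant are illustrative, not the exact amd-pstate code:

    #include <acpi/cppc_acpi.h>     /* cppc_get_transition_latency() */
    #include <linux/cpufreq.h>      /* CPUFREQ_ETERNAL */

    #define MY_DEFAULT_TRANSITION_LATENCY   20000   /* ns, driver fallback */

    /* Prefer the ACPI-provided latency; fall back to a hardcoded
     * default when firmware reports CPUFREQ_ETERNAL (no value). */
    static u32 my_get_transition_latency(unsigned int cpu)
    {
            u32 latency = cppc_get_transition_latency(cpu);

            if (latency == CPUFREQ_ETERNAL)
                    return MY_DEFAULT_TRANSITION_LATENCY;

            return latency;
    }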
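The legacy-processor quirk mentioned above rides on the kernel's standard
DMI matching. A minimal sketch: struct dmi_system_id, DMI_MATCH() and
dmi_check_system() are the real APIs, while everything prefixed my_ (and
the example override values) is hypothetical:

    #include <linux/dmi.h>
    #include <linux/printk.h>

    struct my_quirk {                       /* hypothetical override data */
            u32 nominal_freq;
            u32 lowest_freq;
    };

    static struct my_quirk *active_quirk;

    static struct my_quirk my_broken_bios_quirk = {
            .nominal_freq = 2600,           /* example values only */
            .lowest_freq = 550,
    };

    static int __init my_quirk_cb(const struct dmi_system_id *dmi)
    {
            active_quirk = dmi->driver_data;        /* remember the override */
            pr_info("applying firmware quirk for %s\n", dmi->ident);
            return 1;                               /* stop after first match */
    }

    static const struct dmi_system_id my_quirks_table[] __initconst = {
            {
                    .callback = my_quirk_cb,
                    .ident = "Hypothetical broken BIOS",
                    .matches = {
                            DMI_MATCH(DMI_BIOS_VERSION, "5.14"),
                            DMI_MATCH(DMI_BIOS_RELEASE, "12/12/2019"),
                    },
                    .driver_data = &my_broken_bios_quirk,
            },
            {}
    };

    /* at driver init: dmi_check_system(my_quirks_table); */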
* tag 'pm-6.10-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm: (68 commits)
PM / devfreq: exynos: Use DEFINE_SIMPLE_DEV_PM_OPS for PM functions
PM / devfreq: rk3399_dmc: Convert to platform remove callback returning void
PM / devfreq: sun8i-a33-mbus: Convert to platform remove callback returning void
PM / devfreq: mtk-cci: Convert to platform remove callback returning void
PM / devfreq: exynos-ppmu: Convert to platform remove callback returning void
PM / devfreq: exynos-nocp: Convert to platform remove callback returning void
cpufreq: amd-pstate: fix the highest frequency issue which limits performance
cpufreq: intel_pstate: fix struct cpudata::epp_cached kernel-doc
cpuidle: ladder: fix ladder_do_selection() kernel-doc
powercap: intel_rapl_tpmi: Enable PMU support
powercap: intel_rapl: Introduce APIs for PMU support
PM: hibernate: replace deprecated strncpy() with strscpy()
cpufreq: Fix up printing large CPU numbers and frequency values
MAINTAINERS: cpufreq: amd-pstate: Add co-maintainers and reviewer
cpufreq: amd-pstate: remove unused variable lowest_nonlinear_freq
cpufreq: amd-pstate: fix code format problems
cpufreq: amd-pstate: Add quirk for the pstate CPPC capabilities missing
cppc_acpi: print error message if CPPC is unsupported
cpufreq: amd-pstate: get transition delay and latency value from ACPI tables
cpufreq: amd-pstate: Bail out if min/max/nominal_freq is 0
...
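
Several fixes in this series (the CPPC and brcmstb-avs ->get() callbacks)
share one pattern: cpufreq_cpu_get() can return NULL, so check the policy
before dereferencing policy->driver_data, and drop the reference with
cpufreq_cpu_put(). A minimal sketch of that pattern; the my_ type and
cached field are illustrative:

    #include <linux/cpufreq.h>

    struct my_data { unsigned int cached_khz; };    /* hypothetical */

    static unsigned int my_cpufreq_get_rate(unsigned int cpu)
    {
            struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
            struct my_data *data;

            if (!policy)            /* CPU offline or driver unregistered */
                    return 0;

            data = policy->driver_data;
            cpufreq_cpu_put(policy);        /* release reference from _get() */

            return data->cached_khz;
    }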
Diffstat (limited to 'drivers/cpufreq')
-rw-r--r--  drivers/cpufreq/amd-pstate.c            | 280
-rw-r--r--  drivers/cpufreq/brcmstb-avs-cpufreq.c   |   5
-rw-r--r--  drivers/cpufreq/cppc_cpufreq.c          |  14
-rw-r--r--  drivers/cpufreq/cpufreq-dt-platdev.c    |  10
-rw-r--r--  drivers/cpufreq/cpufreq-dt.c            |  21
-rw-r--r--  drivers/cpufreq/cpufreq.c               |  11
-rw-r--r--  drivers/cpufreq/freq_table.c            |  12
-rw-r--r--  drivers/cpufreq/intel_pstate.c          | 173
-rw-r--r--  drivers/cpufreq/mediatek-cpufreq.c      |  10
-rw-r--r--  drivers/cpufreq/sun50i-cpufreq-nvmem.c  | 209
-rw-r--r--  drivers/cpufreq/tegra124-cpufreq.c      |  19
-rw-r--r--  drivers/cpufreq/ti-cpufreq.c            |   4
12 files changed, 469 insertions(+), 299 deletions(-)
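
Many of the OF-related diffs below replace manual of_node_put() calls with
the scope-based cleanup annotation from <linux/cleanup.h> (available since
v6.5; the device_node hookup lives in <linux/of.h>). A minimal sketch with
a hypothetical helper name:

    #include <linux/of.h>

    /* np is of_node_put() automatically when it goes out of scope,
     * so every return path below is leak-free. */
    static bool my_root_has_model(void)
    {
            struct device_node *np __free(device_node) = of_find_node_by_path("/");

            if (!np)
                    return false;

            return of_property_present(np, "model");
    }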
diff --git a/drivers/cpufreq/amd-pstate.c b/drivers/cpufreq/amd-pstate.c
index 2015c9fcc3c9..6a342b0c0140 100644
--- a/drivers/cpufreq/amd-pstate.c
+++ b/drivers/cpufreq/amd-pstate.c
@@ -50,7 +50,8 @@
 #define AMD_PSTATE_TRANSITION_LATENCY	20000
 #define AMD_PSTATE_TRANSITION_DELAY	1000
-#define AMD_PSTATE_PREFCORE_THRESHOLD	166
+#define CPPC_HIGHEST_PERF_PERFORMANCE	196
+#define CPPC_HIGHEST_PERF_DEFAULT	166
 
 /*
  * TODO: We need more time to fine tune processors with shared memory solution
@@ -67,6 +68,7 @@ static struct cpufreq_driver amd_pstate_epp_driver;
 static int cppc_state = AMD_PSTATE_UNDEFINED;
 static bool cppc_enabled;
 static bool amd_pstate_prefcore = true;
+static struct quirk_entry *quirks;
 
 /*
  * AMD Energy Preference Performance (EPP)
@@ -111,6 +113,41 @@ static unsigned int epp_values[] = {
 
 typedef int (*cppc_mode_transition_fn)(int);
 
+static struct quirk_entry quirk_amd_7k62 = {
+	.nominal_freq = 2600,
+	.lowest_freq = 550,
+};
+
+static int __init dmi_matched_7k62_bios_bug(const struct dmi_system_id *dmi)
+{
+	/**
+	 * match the broken bios for family 17h processor support CPPC V2
+	 * broken BIOS lack of nominal_freq and lowest_freq capabilities
+	 * definition in ACPI tables
+	 */
+	if (boot_cpu_has(X86_FEATURE_ZEN2)) {
+		quirks = dmi->driver_data;
+		pr_info("Overriding nominal and lowest frequencies for %s\n", dmi->ident);
+		return 1;
+	}
+
+	return 0;
+}
+
+static const struct dmi_system_id amd_pstate_quirks_table[] __initconst = {
+	{
+		.callback = dmi_matched_7k62_bios_bug,
+		.ident = "AMD EPYC 7K62",
+		.matches = {
+			DMI_MATCH(DMI_BIOS_VERSION, "5.14"),
+			DMI_MATCH(DMI_BIOS_RELEASE, "12/12/2019"),
+		},
+		.driver_data = &quirk_amd_7k62,
+	},
+	{}
+};
+MODULE_DEVICE_TABLE(dmi, amd_pstate_quirks_table);
+
 static inline int get_mode_idx_from_str(const char *str, size_t size)
 {
 	int i;
@@ -290,6 +327,21 @@ static inline int amd_pstate_enable(bool enable)
 	return static_call(amd_pstate_enable)(enable);
 }
 
+static u32 amd_pstate_highest_perf_set(struct amd_cpudata *cpudata)
+{
+	struct cpuinfo_x86 *c = &cpu_data(0);
+
+	/*
+	 * For AMD CPUs with Family ID 19H and Model ID range 0x70 to 0x7f,
+	 * the highest performance level is set to 196.
+	 * https://bugzilla.kernel.org/show_bug.cgi?id=218759
+	 */
+	if (c->x86 == 0x19 && (c->x86_model >= 0x70 && c->x86_model <= 0x7f))
+		return CPPC_HIGHEST_PERF_PERFORMANCE;
+
+	return CPPC_HIGHEST_PERF_DEFAULT;
+}
+
 static int pstate_init_perf(struct amd_cpudata *cpudata)
 {
 	u64 cap1;
@@ -306,7 +358,7 @@ static int pstate_init_perf(struct amd_cpudata *cpudata)
 	 * the default max perf.
 	 */
 	if (cpudata->hw_prefcore)
-		highest_perf = AMD_PSTATE_PREFCORE_THRESHOLD;
+		highest_perf = amd_pstate_highest_perf_set(cpudata);
 	else
 		highest_perf = AMD_CPPC_HIGHEST_PERF(cap1);
 
@@ -330,7 +382,7 @@ static int cppc_init_perf(struct amd_cpudata *cpudata)
 		return ret;
 
 	if (cpudata->hw_prefcore)
-		highest_perf = AMD_PSTATE_PREFCORE_THRESHOLD;
+		highest_perf = amd_pstate_highest_perf_set(cpudata);
 	else
 		highest_perf = cppc_perf.highest_perf;
 
@@ -604,78 +656,6 @@ static void amd_pstate_adjust_perf(unsigned int cpu,
 	cpufreq_cpu_put(policy);
 }
 
-static int amd_get_min_freq(struct amd_cpudata *cpudata)
-{
-	struct cppc_perf_caps cppc_perf;
-
-	int ret = cppc_get_perf_caps(cpudata->cpu, &cppc_perf);
-	if (ret)
-		return ret;
-
-	/* Switch to khz */
-	return cppc_perf.lowest_freq * 1000;
-}
-
-static int amd_get_max_freq(struct amd_cpudata *cpudata)
-{
-	struct cppc_perf_caps cppc_perf;
-	u32 max_perf, max_freq, nominal_freq, nominal_perf;
-	u64 boost_ratio;
-
-	int ret = cppc_get_perf_caps(cpudata->cpu, &cppc_perf);
-	if (ret)
-		return ret;
-
-	nominal_freq = cppc_perf.nominal_freq;
-	nominal_perf = READ_ONCE(cpudata->nominal_perf);
-	max_perf = READ_ONCE(cpudata->highest_perf);
-
-	boost_ratio = div_u64(max_perf << SCHED_CAPACITY_SHIFT,
-			      nominal_perf);
-
-	max_freq = nominal_freq * boost_ratio >> SCHED_CAPACITY_SHIFT;
-
-	/* Switch to khz */
-	return max_freq * 1000;
-}
-
-static int amd_get_nominal_freq(struct amd_cpudata *cpudata)
-{
-	struct cppc_perf_caps cppc_perf;
-
-	int ret = cppc_get_perf_caps(cpudata->cpu, &cppc_perf);
-	if (ret)
-		return ret;
-
-	/* Switch to khz */
-	return cppc_perf.nominal_freq * 1000;
-}
-
-static int amd_get_lowest_nonlinear_freq(struct amd_cpudata *cpudata)
-{
-	struct cppc_perf_caps cppc_perf;
-	u32 lowest_nonlinear_freq, lowest_nonlinear_perf,
-	    nominal_freq, nominal_perf;
-	u64 lowest_nonlinear_ratio;
-
-	int ret = cppc_get_perf_caps(cpudata->cpu, &cppc_perf);
-	if (ret)
-		return ret;
-
-	nominal_freq = cppc_perf.nominal_freq;
-	nominal_perf = READ_ONCE(cpudata->nominal_perf);
-
-	lowest_nonlinear_perf = cppc_perf.lowest_nonlinear_perf;
-
-	lowest_nonlinear_ratio = div_u64(lowest_nonlinear_perf << SCHED_CAPACITY_SHIFT,
-					 nominal_perf);
-
-	lowest_nonlinear_freq = nominal_freq * lowest_nonlinear_ratio >> SCHED_CAPACITY_SHIFT;
-
-	/* Switch to khz */
-	return lowest_nonlinear_freq * 1000;
-}
-
 static int amd_pstate_set_boost(struct cpufreq_policy *policy, int state)
 {
 	struct amd_cpudata *cpudata = policy->driver_data;
@@ -828,9 +808,93 @@ free_cpufreq_put:
 	mutex_unlock(&amd_pstate_driver_lock);
 }
 
+/*
+ * Get pstate transition delay time from ACPI tables that firmware set
+ * instead of using hardcode value directly.
+ */
+static u32 amd_pstate_get_transition_delay_us(unsigned int cpu)
+{
+	u32 transition_delay_ns;
+
+	transition_delay_ns = cppc_get_transition_latency(cpu);
+	if (transition_delay_ns == CPUFREQ_ETERNAL)
+		return AMD_PSTATE_TRANSITION_DELAY;
+
+	return transition_delay_ns / NSEC_PER_USEC;
+}
+
+/*
+ * Get pstate transition latency value from ACPI tables that firmware
+ * set instead of using hardcode value directly.
+ */
+static u32 amd_pstate_get_transition_latency(unsigned int cpu)
+{
+	u32 transition_latency;
+
+	transition_latency = cppc_get_transition_latency(cpu);
+	if (transition_latency == CPUFREQ_ETERNAL)
+		return AMD_PSTATE_TRANSITION_LATENCY;
+
+	return transition_latency;
+}
+
+/*
+ * amd_pstate_init_freq: Initialize the max_freq, min_freq,
+ *                       nominal_freq and lowest_nonlinear_freq for
+ *                       the @cpudata object.
+ *
+ * Requires: highest_perf, lowest_perf, nominal_perf and
+ *           lowest_nonlinear_perf members of @cpudata to be
+ *           initialized.
+ *
+ * Returns 0 on success, non-zero value on failure.
+ */
+static int amd_pstate_init_freq(struct amd_cpudata *cpudata)
+{
+	int ret;
+	u32 min_freq;
+	u32 highest_perf, max_freq;
+	u32 nominal_perf, nominal_freq;
+	u32 lowest_nonlinear_perf, lowest_nonlinear_freq;
+	u32 boost_ratio, lowest_nonlinear_ratio;
+	struct cppc_perf_caps cppc_perf;
+
+	ret = cppc_get_perf_caps(cpudata->cpu, &cppc_perf);
+	if (ret)
+		return ret;
+
+	if (quirks && quirks->lowest_freq)
+		min_freq = quirks->lowest_freq * 1000;
+	else
+		min_freq = cppc_perf.lowest_freq * 1000;
+
+	if (quirks && quirks->nominal_freq)
+		nominal_freq = quirks->nominal_freq;
+	else
+		nominal_freq = cppc_perf.nominal_freq;
+
+	nominal_perf = READ_ONCE(cpudata->nominal_perf);
+
+	highest_perf = READ_ONCE(cpudata->highest_perf);
+	boost_ratio = div_u64(highest_perf << SCHED_CAPACITY_SHIFT, nominal_perf);
+	max_freq = (nominal_freq * boost_ratio >> SCHED_CAPACITY_SHIFT) * 1000;
+
+	lowest_nonlinear_perf = READ_ONCE(cpudata->lowest_nonlinear_perf);
+	lowest_nonlinear_ratio = div_u64(lowest_nonlinear_perf << SCHED_CAPACITY_SHIFT,
+					 nominal_perf);
+	lowest_nonlinear_freq = (nominal_freq * lowest_nonlinear_ratio >> SCHED_CAPACITY_SHIFT) * 1000;
+
+	WRITE_ONCE(cpudata->min_freq, min_freq);
+	WRITE_ONCE(cpudata->lowest_nonlinear_freq, lowest_nonlinear_freq);
+	WRITE_ONCE(cpudata->nominal_freq, nominal_freq);
+	WRITE_ONCE(cpudata->max_freq, max_freq);
+
+	return 0;
+}
+
 static int amd_pstate_cpu_init(struct cpufreq_policy *policy)
 {
-	int min_freq, max_freq, nominal_freq, lowest_nonlinear_freq, ret;
+	int min_freq, max_freq, nominal_freq, ret;
 	struct device *dev;
 	struct amd_cpudata *cpudata;
 
@@ -855,20 +919,25 @@ static int amd_pstate_cpu_init(struct cpufreq_policy *policy)
 	if (ret)
 		goto free_cpudata1;
 
-	min_freq = amd_get_min_freq(cpudata);
-	max_freq = amd_get_max_freq(cpudata);
-	nominal_freq = amd_get_nominal_freq(cpudata);
-	lowest_nonlinear_freq = amd_get_lowest_nonlinear_freq(cpudata);
+	ret = amd_pstate_init_freq(cpudata);
+	if (ret)
+		goto free_cpudata1;
+
+	min_freq = READ_ONCE(cpudata->min_freq);
+	max_freq = READ_ONCE(cpudata->max_freq);
+	nominal_freq = READ_ONCE(cpudata->nominal_freq);
 
-	if (min_freq < 0 || max_freq < 0 || min_freq > max_freq) {
-		dev_err(dev, "min_freq(%d) or max_freq(%d) value is incorrect\n",
-			min_freq, max_freq);
+	if (min_freq <= 0 || max_freq <= 0 ||
+	    nominal_freq <= 0 || min_freq > max_freq) {
+		dev_err(dev,
+			"min_freq(%d) or max_freq(%d) or nominal_freq (%d) value is incorrect, check _CPC in ACPI tables\n",
+			min_freq, max_freq, nominal_freq);
 		ret = -EINVAL;
 		goto free_cpudata1;
 	}
 
-	policy->cpuinfo.transition_latency = AMD_PSTATE_TRANSITION_LATENCY;
-	policy->transition_delay_us = AMD_PSTATE_TRANSITION_DELAY;
+	policy->cpuinfo.transition_latency = amd_pstate_get_transition_latency(policy->cpu);
+	policy->transition_delay_us = amd_pstate_get_transition_delay_us(policy->cpu);
 
 	policy->min = min_freq;
 	policy->max = max_freq;
@@ -896,13 +965,8 @@ static int amd_pstate_cpu_init(struct cpufreq_policy *policy)
 		goto free_cpudata2;
 	}
 
-	/* Initial processor data capability frequencies */
-	cpudata->max_freq = max_freq;
-	cpudata->min_freq = min_freq;
 	cpudata->max_limit_freq = max_freq;
 	cpudata->min_limit_freq = min_freq;
-	cpudata->nominal_freq = nominal_freq;
-	cpudata->lowest_nonlinear_freq = lowest_nonlinear_freq;
 
 	policy->driver_data = cpudata;
 
@@ -966,7 +1030,7 @@ static ssize_t show_amd_pstate_max_freq(struct cpufreq_policy *policy,
 	int max_freq;
 	struct amd_cpudata *cpudata = policy->driver_data;
 
-	max_freq = amd_get_max_freq(cpudata);
+	max_freq = READ_ONCE(cpudata->max_freq);
 	if (max_freq < 0)
 		return max_freq;
 
@@ -979,7 +1043,7 @@ static ssize_t show_amd_pstate_lowest_nonlinear_freq(struct cpufreq_policy *poli
 	int freq;
 	struct amd_cpudata *cpudata = policy->driver_data;
 
-	freq = amd_get_lowest_nonlinear_freq(cpudata);
+	freq = READ_ONCE(cpudata->lowest_nonlinear_freq);
 	if (freq < 0)
 		return freq;
 
@@ -1290,7 +1354,7 @@ static bool amd_pstate_acpi_pm_profile_undefined(void)
 
 static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy)
 {
-	int min_freq, max_freq, nominal_freq, lowest_nonlinear_freq, ret;
+	int min_freq, max_freq, nominal_freq, ret;
 	struct amd_cpudata *cpudata;
 	struct device *dev;
 	u64 value;
@@ -1317,13 +1381,18 @@ static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy)
 	if (ret)
 		goto free_cpudata1;
 
-	min_freq = amd_get_min_freq(cpudata);
-	max_freq = amd_get_max_freq(cpudata);
-	nominal_freq = amd_get_nominal_freq(cpudata);
-	lowest_nonlinear_freq = amd_get_lowest_nonlinear_freq(cpudata);
-	if (min_freq < 0 || max_freq < 0 || min_freq > max_freq) {
-		dev_err(dev, "min_freq(%d) or max_freq(%d) value is incorrect\n",
-			min_freq, max_freq);
+	ret = amd_pstate_init_freq(cpudata);
+	if (ret)
+		goto free_cpudata1;
+
+	min_freq = READ_ONCE(cpudata->min_freq);
+	max_freq = READ_ONCE(cpudata->max_freq);
+	nominal_freq = READ_ONCE(cpudata->nominal_freq);
+	if (min_freq <= 0 || max_freq <= 0 ||
+	    nominal_freq <= 0 || min_freq > max_freq) {
+		dev_err(dev,
+			"min_freq(%d) or max_freq(%d) or nominal_freq(%d) value is incorrect, check _CPC in ACPI tables\n",
+			min_freq, max_freq, nominal_freq);
 		ret = -EINVAL;
 		goto free_cpudata1;
 	}
@@ -1333,12 +1402,6 @@ static int amd_pstate_epp_cpu_init(struct cpufreq_policy *policy)
 	/* It will be updated by governor */
 	policy->cur = policy->cpuinfo.min_freq;
 
-	/* Initial processor data capability frequencies */
-	cpudata->max_freq = max_freq;
-	cpudata->min_freq = min_freq;
-	cpudata->nominal_freq = nominal_freq;
-	cpudata->lowest_nonlinear_freq = lowest_nonlinear_freq;
-
 	policy->driver_data = cpudata;
 
 	cpudata->epp_cached = amd_pstate_get_epp(cpudata, 0);
@@ -1656,6 +1719,11 @@ static int __init amd_pstate_init(void)
 	if (cpufreq_get_current_driver())
 		return -EEXIST;
 
+	quirks = NULL;
+
+	/* check if this machine need CPPC quirks */
+	dmi_check_system(amd_pstate_quirks_table);
+
 	switch (cppc_state) {
 	case AMD_PSTATE_UNDEFINED:
 		/* Disable on the following configs by default:
diff --git a/drivers/cpufreq/brcmstb-avs-cpufreq.c b/drivers/cpufreq/brcmstb-avs-cpufreq.c
index 1a1857b0a6f4..ea8438550b49 100644
--- a/drivers/cpufreq/brcmstb-avs-cpufreq.c
+++ b/drivers/cpufreq/brcmstb-avs-cpufreq.c
@@ -481,9 +481,12 @@ static bool brcm_avs_is_firmware_loaded(struct private_data *priv)
 static unsigned int brcm_avs_cpufreq_get(unsigned int cpu)
 {
 	struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
+	struct private_data *priv;
+
 	if (!policy)
 		return 0;
-	struct private_data *priv = policy->driver_data;
+
+	priv = policy->driver_data;
 
 	cpufreq_cpu_put(policy);
 
diff --git a/drivers/cpufreq/cppc_cpufreq.c b/drivers/cpufreq/cppc_cpufreq.c
index 64420d9cfd1e..15f1d41920a3 100644
--- a/drivers/cpufreq/cppc_cpufreq.c
+++ b/drivers/cpufreq/cppc_cpufreq.c
@@ -741,10 +741,15 @@ static unsigned int cppc_cpufreq_get_rate(unsigned int cpu)
 {
 	struct cppc_perf_fb_ctrs fb_ctrs_t0 = {0}, fb_ctrs_t1 = {0};
 	struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
-	struct cppc_cpudata *cpu_data = policy->driver_data;
+	struct cppc_cpudata *cpu_data;
 	u64 delivered_perf;
 	int ret;
 
+	if (!policy)
+		return -ENODEV;
+
+	cpu_data = policy->driver_data;
+
 	cpufreq_cpu_put(policy);
 
 	ret = cppc_get_perf_ctrs(cpu, &fb_ctrs_t0);
@@ -822,10 +827,15 @@ static struct cpufreq_driver cppc_cpufreq_driver = {
 static unsigned int hisi_cppc_cpufreq_get_rate(unsigned int cpu)
 {
 	struct cpufreq_policy *policy = cpufreq_cpu_get(cpu);
-	struct cppc_cpudata *cpu_data = policy->driver_data;
+	struct cppc_cpudata *cpu_data;
 	u64 desired_perf;
 	int ret;
 
+	if (!policy)
+		return -ENODEV;
+
+	cpu_data = policy->driver_data;
+
 	cpufreq_cpu_put(policy);
 
 	ret = cppc_get_desired_perf(cpu, &desired_perf);
diff --git a/drivers/cpufreq/cpufreq-dt-platdev.c b/drivers/cpufreq/cpufreq-dt-platdev.c
index b993a498084b..c74dd1e01e0d 100644
--- a/drivers/cpufreq/cpufreq-dt-platdev.c
+++ b/drivers/cpufreq/cpufreq-dt-platdev.c
@@ -104,6 +104,9 @@ static const struct of_device_id allowlist[] __initconst = {
  */
 static const struct of_device_id blocklist[] __initconst = {
 	{ .compatible = "allwinner,sun50i-h6", },
+	{ .compatible = "allwinner,sun50i-h616", },
+	{ .compatible = "allwinner,sun50i-h618", },
+	{ .compatible = "allwinner,sun50i-h700", },
 
 	{ .compatible = "apple,arm-platform", },
 
@@ -195,19 +198,18 @@ static const struct of_device_id blocklist[] __initconst = {
 
 static bool __init cpu0_node_has_opp_v2_prop(void)
 {
-	struct device_node *np = of_cpu_device_node_get(0);
+	struct device_node *np __free(device_node) = of_cpu_device_node_get(0);
 	bool ret = false;
 
 	if (of_property_present(np, "operating-points-v2"))
 		ret = true;
 
-	of_node_put(np);
 	return ret;
 }
 
 static int __init cpufreq_dt_platdev_init(void)
 {
-	struct device_node *np = of_find_node_by_path("/");
+	struct device_node *np __free(device_node) = of_find_node_by_path("/");
 	const struct of_device_id *match;
 	const void *data = NULL;
 
@@ -223,11 +225,9 @@ static int __init cpufreq_dt_platdev_init(void)
 	if (cpu0_node_has_opp_v2_prop() && !of_match_node(blocklist, np))
 		goto create_pdev;
 
-	of_node_put(np);
 	return -ENODEV;
 
 create_pdev:
-	of_node_put(np);
 	return PTR_ERR_OR_ZERO(platform_device_register_data(NULL, "cpufreq-dt",
 			       -1, data, sizeof(struct cpufreq_dt_platform_data)));
diff --git a/drivers/cpufreq/cpufreq-dt.c b/drivers/cpufreq/cpufreq-dt.c
index 2d83bbc65dd0..907e22632fda 100644
--- a/drivers/cpufreq/cpufreq-dt.c
+++ b/drivers/cpufreq/cpufreq-dt.c
@@ -68,12 +68,9 @@ static int set_target(struct cpufreq_policy *policy, unsigned int index)
  */
 static const char *find_supply_name(struct device *dev)
 {
-	struct device_node *np;
+	struct device_node *np __free(device_node) = of_node_get(dev->of_node);
 	struct property *pp;
 	int cpu = dev->id;
-	const char *name = NULL;
-
-	np = of_node_get(dev->of_node);
 
 	/* This must be valid for sure */
 	if (WARN_ON(!np))
@@ -82,22 +79,16 @@ static const char *find_supply_name(struct device *dev)
 	/* Try "cpu0" for older DTs */
 	if (!cpu) {
 		pp = of_find_property(np, "cpu0-supply", NULL);
-		if (pp) {
-			name = "cpu0";
-			goto node_put;
-		}
+		if (pp)
+			return "cpu0";
 	}
 
 	pp = of_find_property(np, "cpu-supply", NULL);
-	if (pp) {
-		name = "cpu";
-		goto node_put;
-	}
+	if (pp)
+		return "cpu";
 
 	dev_dbg(dev, "no regulator for cpu%d\n", cpu);
-node_put:
-	of_node_put(np);
-	return name;
+	return NULL;
 }
 
 static int cpufreq_init(struct cpufreq_policy *policy)
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index 1de8bd105934..a45aac17c20f 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -1679,10 +1679,13 @@ static void __cpufreq_offline(unsigned int cpu, struct cpufreq_policy *policy)
 	 */
 	if (cpufreq_driver->offline) {
 		cpufreq_driver->offline(policy);
-	} else if (cpufreq_driver->exit) {
-		cpufreq_driver->exit(policy);
-		policy->freq_table = NULL;
+		return;
 	}
+
+	if (cpufreq_driver->exit)
+		cpufreq_driver->exit(policy);
+
+	policy->freq_table = NULL;
 }
 
 static int cpufreq_offline(unsigned int cpu)
@@ -1740,7 +1743,7 @@ static void cpufreq_remove_dev(struct device *dev, struct subsys_interface *sif)
 	}
 
 	/* We did light-weight exit earlier, do full tear down now */
-	if (cpufreq_driver->offline)
+	if (cpufreq_driver->offline && cpufreq_driver->exit)
 		cpufreq_driver->exit(policy);
 
 	up_write(&policy->rwsem);
diff --git a/drivers/cpufreq/freq_table.c b/drivers/cpufreq/freq_table.c
index c17dc51a5a02..10e80d912b8d 100644
--- a/drivers/cpufreq/freq_table.c
+++ b/drivers/cpufreq/freq_table.c
@@ -70,7 +70,7 @@ int cpufreq_frequency_table_verify(struct cpufreq_policy_data *policy,
 				   struct cpufreq_frequency_table *table)
 {
 	struct cpufreq_frequency_table *pos;
-	unsigned int freq, next_larger = ~0;
+	unsigned int freq, prev_smaller = 0;
 	bool found = false;
 
 	pr_debug("request for verification of policy (%u - %u kHz) for cpu %u\n",
@@ -86,12 +86,12 @@ int cpufreq_frequency_table_verify(struct cpufreq_policy_data *policy,
 			break;
 		}
 
-		if ((next_larger > freq) && (freq > policy->max))
-			next_larger = freq;
+		if ((prev_smaller < freq) && (freq <= policy->max))
+			prev_smaller = freq;
 	}
 
 	if (!found) {
-		policy->max = next_larger;
+		policy->max = prev_smaller;
 		cpufreq_verify_within_cpu_limits(policy);
 	}
 
@@ -194,7 +194,7 @@ int cpufreq_table_index_unsorted(struct cpufreq_policy *policy,
 	}
 	if (optimal.driver_data > i) {
 		if (suboptimal.driver_data > i) {
-			WARN(1, "Invalid frequency table: %d\n", policy->cpu);
+			WARN(1, "Invalid frequency table: %u\n", policy->cpu);
 			return 0;
 		}
 
@@ -254,7 +254,7 @@ static ssize_t show_available_freqs(struct cpufreq_policy *policy, char *buf,
 		if (show_boost ^ (pos->flags & CPUFREQ_BOOST_FREQ))
 			continue;
 
-		count += sprintf(&buf[count], "%d ", pos->frequency);
+		count += sprintf(&buf[count], "%u ", pos->frequency);
 	}
 
 	count += sprintf(&buf[count], "\n");
diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
index dbbf299f4219..4b986c044741 100644
--- a/drivers/cpufreq/intel_pstate.c
+++ b/drivers/cpufreq/intel_pstate.c
@@ -173,7 +173,6 @@ struct vid_data {
  *			based on the MSR_IA32_MISC_ENABLE value and whether or
  *			not the maximum reported turbo P-state is different from
  *			the maximum reported non-turbo one.
- * @turbo_disabled_mf:	The @turbo_disabled value reflected by cpuinfo.max_freq.
  * @min_perf_pct:	Minimum capacity limit in percent of the maximum turbo
  *			P-state capacity.
  * @max_perf_pct:	Maximum capacity limit in percent of the maximum turbo
@@ -182,7 +181,6 @@ struct vid_data {
 struct global_params {
 	bool no_turbo;
 	bool turbo_disabled;
-	bool turbo_disabled_mf;
 	int max_perf_pct;
 	int min_perf_pct;
 };
@@ -213,7 +211,7 @@ struct global_params {
 * @epp_policy:		Last saved policy used to set EPP/EPB
 * @epp_default:	Power on default HWP energy performance
 *			preference/bias
- * @epp_cached		Cached HWP energy-performance preference value
+ * @epp_cached:		Cached HWP energy-performance preference value
 * @hwp_req_cached:	Cached value of the last HWP Request MSR
 * @hwp_cap_cached:	Cached value of the last HWP Capabilities MSR
 * @last_io_update:	Last time when IO wake flag was set
@@ -292,11 +290,11 @@ struct pstate_funcs {
 
 static struct pstate_funcs pstate_funcs __read_mostly;
 
-static int hwp_active __read_mostly;
-static int hwp_mode_bdw __read_mostly;
-static bool per_cpu_limits __read_mostly;
+static bool hwp_active __ro_after_init;
+static int hwp_mode_bdw __ro_after_init;
+static bool per_cpu_limits __ro_after_init;
+static bool hwp_forced __ro_after_init;
 static bool hwp_boost __read_mostly;
-static bool hwp_forced __read_mostly;
 
 static struct cpufreq_driver *intel_pstate_driver __read_mostly;
 
@@ -594,12 +592,13 @@ static void intel_pstate_hybrid_hwp_adjust(struct cpudata *cpu)
 	cpu->pstate.min_pstate = intel_pstate_freq_to_hwp(cpu, freq);
 }
 
-static inline void update_turbo_state(void)
+static bool turbo_is_disabled(void)
 {
 	u64 misc_en;
 
 	rdmsrl(MSR_IA32_MISC_ENABLE, misc_en);
-	global.turbo_disabled = misc_en & MSR_IA32_MISC_ENABLE_TURBO_DISABLE;
+
+	return !!(misc_en & MSR_IA32_MISC_ENABLE_TURBO_DISABLE);
 }
 
 static int min_perf_pct_min(void)
@@ -1154,12 +1153,15 @@ static void intel_pstate_update_policies(void)
 static void __intel_pstate_update_max_freq(struct cpudata *cpudata,
 					   struct cpufreq_policy *policy)
 {
-	policy->cpuinfo.max_freq = global.turbo_disabled_mf ?
+	intel_pstate_get_hwp_cap(cpudata);
+
+	policy->cpuinfo.max_freq = READ_ONCE(global.no_turbo) ?
 			cpudata->pstate.max_freq : cpudata->pstate.turbo_freq;
+
 	refresh_frequency_limits(policy);
 }
 
-static void intel_pstate_update_max_freq(unsigned int cpu)
+static void intel_pstate_update_limits(unsigned int cpu)
 {
 	struct cpufreq_policy *policy = cpufreq_cpu_acquire(cpu);
 
@@ -1171,25 +1173,12 @@ static void intel_pstate_update_max_freq(unsigned int cpu)
 	cpufreq_cpu_release(policy);
 }
 
-static void intel_pstate_update_limits(unsigned int cpu)
+static void intel_pstate_update_limits_for_all(void)
 {
-	mutex_lock(&intel_pstate_driver_lock);
-
-	update_turbo_state();
-	/*
-	 * If turbo has been turned on or off globally, policy limits for
-	 * all CPUs need to be updated to reflect that.
-	 */
-	if (global.turbo_disabled_mf != global.turbo_disabled) {
-		global.turbo_disabled_mf = global.turbo_disabled;
-		arch_set_max_freq_ratio(global.turbo_disabled);
-		for_each_possible_cpu(cpu)
-			intel_pstate_update_max_freq(cpu);
-	} else {
-		cpufreq_update_policy(cpu);
-	}
+	int cpu;
 
-	mutex_unlock(&intel_pstate_driver_lock);
+	for_each_possible_cpu(cpu)
+		intel_pstate_update_limits(cpu);
 }
 
 /************************** sysfs begin ************************/
@@ -1287,11 +1276,7 @@ static ssize_t show_no_turbo(struct kobject *kobj,
 		return -EAGAIN;
 	}
 
-	update_turbo_state();
-	if (global.turbo_disabled)
-		ret = sprintf(buf, "%u\n", global.turbo_disabled);
-	else
-		ret = sprintf(buf, "%u\n", global.no_turbo);
+	ret = sprintf(buf, "%u\n", global.no_turbo);
 
 	mutex_unlock(&intel_pstate_driver_lock);
 
@@ -1302,32 +1287,34 @@ static ssize_t store_no_turbo(struct kobject *a, struct kobj_attribute *b,
 			      const char *buf, size_t count)
 {
 	unsigned int input;
-	int ret;
+	bool no_turbo;
 
-	ret = sscanf(buf, "%u", &input);
-	if (ret != 1)
+	if (sscanf(buf, "%u", &input) != 1)
 		return -EINVAL;
 
 	mutex_lock(&intel_pstate_driver_lock);
 
 	if (!intel_pstate_driver) {
-		mutex_unlock(&intel_pstate_driver_lock);
-		return -EAGAIN;
+		count = -EAGAIN;
+		goto unlock_driver;
 	}
 
-	mutex_lock(&intel_pstate_limits_lock);
+	no_turbo = !!clamp_t(int, input, 0, 1);
+
+	if (no_turbo == global.no_turbo)
+		goto unlock_driver;
 
-	update_turbo_state();
 	if (global.turbo_disabled) {
 		pr_notice_once("Turbo disabled by BIOS or unavailable on processor\n");
-		mutex_unlock(&intel_pstate_limits_lock);
-		mutex_unlock(&intel_pstate_driver_lock);
-		return -EPERM;
+		count = -EPERM;
+		goto unlock_driver;
 	}
 
-	global.no_turbo = clamp_t(int, input, 0, 1);
+	WRITE_ONCE(global.no_turbo, no_turbo);
 
-	if (global.no_turbo) {
+	mutex_lock(&intel_pstate_limits_lock);
+
+	if (no_turbo) {
 		struct cpudata *cpu = all_cpu_data[0];
 		int pct = cpu->pstate.max_pstate * 100 / cpu->pstate.turbo_pstate;
 
@@ -1338,9 +1325,10 @@ static ssize_t store_no_turbo(struct kobject *a, struct kobj_attribute *b,
 
 	mutex_unlock(&intel_pstate_limits_lock);
 
-	intel_pstate_update_policies();
-	arch_set_max_freq_ratio(global.no_turbo);
+	intel_pstate_update_limits_for_all();
+	arch_set_max_freq_ratio(no_turbo);
 
+unlock_driver:
 	mutex_unlock(&intel_pstate_driver_lock);
 
 	return count;
@@ -1621,7 +1609,6 @@ static void intel_pstate_notify_work(struct work_struct *work)
 	struct cpufreq_policy *policy = cpufreq_cpu_acquire(cpudata->cpu);
 
 	if (policy) {
-		intel_pstate_get_hwp_cap(cpudata);
 		__intel_pstate_update_max_freq(cpudata, policy);
 
 		cpufreq_cpu_release(policy);
@@ -1636,11 +1623,10 @@ static cpumask_t hwp_intr_enable_mask;
 void notify_hwp_interrupt(void)
 {
 	unsigned int this_cpu = smp_processor_id();
-	struct cpudata *cpudata;
 	unsigned long flags;
 	u64 value;
 
-	if (!READ_ONCE(hwp_active) || !boot_cpu_has(X86_FEATURE_HWP_NOTIFY))
+	if (!hwp_active || !boot_cpu_has(X86_FEATURE_HWP_NOTIFY))
 		return;
 
 	rdmsrl_safe(MSR_HWP_STATUS, &value);
@@ -1652,24 +1638,8 @@ void notify_hwp_interrupt(void)
 	if (!cpumask_test_cpu(this_cpu, &hwp_intr_enable_mask))
 		goto ack_intr;
 
-	/*
-	 * Currently we never free all_cpu_data. And we can't reach here
-	 * without this allocated. But for safety for future changes, added
-	 * check.
-	 */
-	if (unlikely(!READ_ONCE(all_cpu_data)))
-		goto ack_intr;
-
-	/*
-	 * The free is done during cleanup, when cpufreq registry is failed.
-	 * We wouldn't be here if it fails on init or switch status. But for
-	 * future changes, added check.
-	 */
-	cpudata = READ_ONCE(all_cpu_data[this_cpu]);
-	if (unlikely(!cpudata))
-		goto ack_intr;
-
-	schedule_delayed_work(&cpudata->hwp_notify_work, msecs_to_jiffies(10));
+	schedule_delayed_work(&all_cpu_data[this_cpu]->hwp_notify_work,
+			      msecs_to_jiffies(10));
 
 	spin_unlock_irqrestore(&hwp_notify_lock, flags);
 
@@ -1682,7 +1652,7 @@ ack_intr:
 
 static void intel_pstate_disable_hwp_interrupt(struct cpudata *cpudata)
 {
-	unsigned long flags;
+	bool cancel_work;
 
 	if (!boot_cpu_has(X86_FEATURE_HWP_NOTIFY))
 		return;
@@ -1690,22 +1660,22 @@ static void intel_pstate_disable_hwp_interrupt(struct cpudata *cpudata)
 	/* wrmsrl_on_cpu has to be outside spinlock as this can result in IPC */
 	wrmsrl_on_cpu(cpudata->cpu, MSR_HWP_INTERRUPT, 0x00);
 
-	spin_lock_irqsave(&hwp_notify_lock, flags);
-	if (cpumask_test_and_clear_cpu(cpudata->cpu, &hwp_intr_enable_mask))
-		cancel_delayed_work(&cpudata->hwp_notify_work);
-	spin_unlock_irqrestore(&hwp_notify_lock, flags);
+	spin_lock_irq(&hwp_notify_lock);
+	cancel_work = cpumask_test_and_clear_cpu(cpudata->cpu, &hwp_intr_enable_mask);
+	spin_unlock_irq(&hwp_notify_lock);
+
+	if (cancel_work)
+		cancel_delayed_work_sync(&cpudata->hwp_notify_work);
 }
 
 static void intel_pstate_enable_hwp_interrupt(struct cpudata *cpudata)
 {
 	/* Enable HWP notification interrupt for guaranteed performance change */
 	if (boot_cpu_has(X86_FEATURE_HWP_NOTIFY)) {
-		unsigned long flags;
-
-		spin_lock_irqsave(&hwp_notify_lock, flags);
+		spin_lock_irq(&hwp_notify_lock);
 		INIT_DELAYED_WORK(&cpudata->hwp_notify_work, intel_pstate_notify_work);
 		cpumask_set_cpu(cpudata->cpu, &hwp_intr_enable_mask);
-		spin_unlock_irqrestore(&hwp_notify_lock, flags);
+		spin_unlock_irq(&hwp_notify_lock);
 
 		/* wrmsrl_on_cpu has to be outside spinlock as this can result in IPC */
 		wrmsrl_on_cpu(cpudata->cpu, MSR_HWP_INTERRUPT, 0x01);
@@ -1791,7 +1761,7 @@ static u64 atom_get_val(struct cpudata *cpudata, int pstate)
 	u32 vid;
 
 	val = (u64)pstate << 8;
-	if (global.no_turbo && !global.turbo_disabled)
+	if (READ_ONCE(global.no_turbo) && !global.turbo_disabled)
 		val |= (u64)1 << 32;
 
 	vid_fp = cpudata->vid.min + mul_fp(
@@ -1956,7 +1926,7 @@ static u64 core_get_val(struct cpudata *cpudata, int pstate)
 	u64 val;
 
 	val = (u64)pstate << 8;
-	if (global.no_turbo && !global.turbo_disabled)
+	if (READ_ONCE(global.no_turbo) && !global.turbo_disabled)
 		val |= (u64)1 << 32;
 
 	return val;
@@ -2029,14 +1999,6 @@ static void intel_pstate_set_min_pstate(struct cpudata *cpu)
 	intel_pstate_set_pstate(cpu, cpu->pstate.min_pstate);
 }
 
-static void intel_pstate_max_within_limits(struct cpudata *cpu)
-{
-	int pstate = max(cpu->pstate.min_pstate, cpu->max_perf_ratio);
-
-	update_turbo_state();
-	intel_pstate_set_pstate(cpu, pstate);
-}
-
 static void intel_pstate_get_cpu_pstates(struct cpudata *cpu)
 {
 	int perf_ctl_max_phys = pstate_funcs.get_max_physical(cpu->cpu);
@@ -2262,7 +2224,7 @@ static inline int32_t get_target_pstate(struct cpudata *cpu)
 
 	sample->busy_scaled = busy_frac * 100;
 
-	target = global.no_turbo || global.turbo_disabled ?
+	target = READ_ONCE(global.no_turbo) ?
 		cpu->pstate.max_pstate : cpu->pstate.turbo_pstate;
 	target += target >> 2;
 	target = mul_fp(target, busy_frac);
@@ -2306,8 +2268,6 @@ static void intel_pstate_adjust_pstate(struct cpudata *cpu)
 	struct sample *sample;
 	int target_pstate;
 
-	update_turbo_state();
-
 	target_pstate = get_target_pstate(cpu);
 	target_pstate = intel_pstate_prepare_request(cpu, target_pstate);
 	trace_cpu_frequency(target_pstate * cpu->pstate.scaling, cpu->cpu);
@@ -2437,6 +2397,7 @@ static const struct x86_cpu_id intel_pstate_cpu_ids[] = {
 };
 MODULE_DEVICE_TABLE(x86cpu, intel_pstate_cpu_ids);
 
+#ifdef CONFIG_ACPI
 static const struct x86_cpu_id intel_pstate_cpu_oob_ids[] __initconst = {
 	X86_MATCH(BROADWELL_D, core_funcs),
 	X86_MATCH(BROADWELL_X, core_funcs),
@@ -2445,6 +2406,7 @@ static const struct x86_cpu_id intel_pstate_cpu_oob_ids[] __initconst = {
 	X86_MATCH(SAPPHIRERAPIDS_X, core_funcs),
 	{}
 };
+#endif
 
 static const struct x86_cpu_id intel_pstate_cpu_ee_disable_ids[] = {
 	X86_MATCH(KABYLAKE, core_funcs),
@@ -2526,7 +2488,7 @@ static void intel_pstate_clear_update_util_hook(unsigned int cpu)
 
 static int intel_pstate_get_max_freq(struct cpudata *cpu)
 {
-	return global.turbo_disabled || global.no_turbo ?
+	return READ_ONCE(global.no_turbo) ?
 			cpu->pstate.max_freq : cpu->pstate.turbo_freq;
 }
 
@@ -2611,12 +2573,14 @@ static int intel_pstate_set_policy(struct cpufreq_policy *policy)
 	intel_pstate_update_perf_limits(cpu, policy->min, policy->max);
 
 	if (cpu->policy == CPUFREQ_POLICY_PERFORMANCE) {
+		int pstate = max(cpu->pstate.min_pstate, cpu->max_perf_ratio);
+
 		/*
 		 * NOHZ_FULL CPUs need this as the governor callback may not
 		 * be invoked on them.
 		 */
 		intel_pstate_clear_update_util_hook(policy->cpu);
-		intel_pstate_max_within_limits(cpu);
+		intel_pstate_set_pstate(cpu, pstate);
 	} else {
 		intel_pstate_set_update_util_hook(policy->cpu);
 	}
@@ -2659,10 +2623,9 @@ static void intel_pstate_verify_cpu_policy(struct cpudata *cpu,
 {
 	int max_freq;
 
-	update_turbo_state();
 	if (hwp_active) {
 		intel_pstate_get_hwp_cap(cpu);
-		max_freq = global.no_turbo || global.turbo_disabled ?
+		max_freq = READ_ONCE(global.no_turbo) ?
 				cpu->pstate.max_freq : cpu->pstate.turbo_freq;
 	} else {
 		max_freq = intel_pstate_get_max_freq(cpu);
@@ -2756,9 +2719,7 @@ static int __intel_pstate_cpu_init(struct cpufreq_policy *policy)
 
 	/* cpuinfo and default policy values */
 	policy->cpuinfo.min_freq = cpu->pstate.min_freq;
-	update_turbo_state();
-	global.turbo_disabled_mf = global.turbo_disabled;
-	policy->cpuinfo.max_freq = global.turbo_disabled ?
+	policy->cpuinfo.max_freq = READ_ONCE(global.no_turbo) ?
 			cpu->pstate.max_freq : cpu->pstate.turbo_freq;
 
 	policy->min = policy->cpuinfo.min_freq;
@@ -2923,8 +2884,6 @@ static int intel_cpufreq_target(struct cpufreq_policy *policy,
 	struct cpufreq_freqs freqs;
 	int target_pstate;
 
-	update_turbo_state();
-
 	freqs.old = policy->cur;
 	freqs.new = target_freq;
 
@@ -2946,8 +2905,6 @@ static unsigned int intel_cpufreq_fast_switch(struct cpufreq_policy *policy,
 	struct cpudata *cpu = all_cpu_data[policy->cpu];
 	int target_pstate;
 
-	update_turbo_state();
-
 	target_pstate = intel_pstate_freq_to_hwp(cpu, target_freq);
 
 	target_pstate = intel_cpufreq_update_pstate(policy, target_pstate, true);
@@ -2965,9 +2922,9 @@ static void intel_cpufreq_adjust_perf(unsigned int cpunum,
 	int old_pstate = cpu->pstate.current_pstate;
 	int cap_pstate, min_pstate, max_pstate, target_pstate;
 
-	update_turbo_state();
-	cap_pstate = global.turbo_disabled ? HWP_GUARANTEED_PERF(hwp_cap) :
-					     HWP_HIGHEST_PERF(hwp_cap);
+	cap_pstate = READ_ONCE(global.no_turbo) ?
+					HWP_GUARANTEED_PERF(hwp_cap) :
+					HWP_HIGHEST_PERF(hwp_cap);
 
 	/* Optimization: Avoid unnecessary divisions. */
 
@@ -3135,10 +3092,8 @@ static void intel_pstate_driver_cleanup(void)
 			if (intel_pstate_driver == &intel_pstate)
 				intel_pstate_clear_update_util_hook(cpu);
 
-			spin_lock(&hwp_notify_lock);
 			kfree(all_cpu_data[cpu]);
 			WRITE_ONCE(all_cpu_data[cpu], NULL);
-			spin_unlock(&hwp_notify_lock);
 		}
 	}
 	cpus_read_unlock();
@@ -3155,6 +3110,10 @@ static int intel_pstate_register_driver(struct cpufreq_driver *driver)
 
 	memset(&global, 0, sizeof(global));
 	global.max_perf_pct = 100;
+	global.turbo_disabled = turbo_is_disabled();
+	global.no_turbo = global.turbo_disabled;
+
+	arch_set_max_freq_ratio(global.turbo_disabled);
 
 	intel_pstate_driver = driver;
 	ret = cpufreq_register_driver(intel_pstate_driver);
@@ -3466,7 +3425,7 @@ static int __init intel_pstate_init(void)
 	 * deal with it.
 	 */
 	if ((!no_hwp && boot_cpu_has(X86_FEATURE_HWP_EPP)) || hwp_forced) {
-		WRITE_ONCE(hwp_active, 1);
+		hwp_active = true;
 		hwp_mode_bdw = id->driver_data;
 		intel_pstate.attr = hwp_cpufreq_attrs;
 		intel_cpufreq.attr = hwp_cpufreq_attrs;
diff --git a/drivers/cpufreq/mediatek-cpufreq.c b/drivers/cpufreq/mediatek-cpufreq.c
index a0a61919bc4c..518606adf14e 100644
--- a/drivers/cpufreq/mediatek-cpufreq.c
+++ b/drivers/cpufreq/mediatek-cpufreq.c
@@ -707,6 +707,15 @@ static const struct mtk_cpufreq_platform_data mt7623_platform_data = {
 	.ccifreq_supported = false,
 };
 
+static const struct mtk_cpufreq_platform_data mt7988_platform_data = {
+	.min_volt_shift = 100000,
+	.max_volt_shift = 200000,
+	.proc_max_volt = 900000,
+	.sram_min_volt = 0,
+	.sram_max_volt = 1150000,
+	.ccifreq_supported = true,
+};
+
 static const struct mtk_cpufreq_platform_data mt8183_platform_data = {
 	.min_volt_shift = 100000,
 	.max_volt_shift = 200000,
@@ -740,6 +749,7 @@ static const struct of_device_id mtk_cpufreq_machines[] __initconst = {
 	{ .compatible = "mediatek,mt2712", .data = &mt2701_platform_data },
 	{ .compatible = "mediatek,mt7622", .data = &mt7622_platform_data },
 	{ .compatible = "mediatek,mt7623", .data = &mt7623_platform_data },
+	{ .compatible = "mediatek,mt7988a", .data = &mt7988_platform_data },
 	{ .compatible = "mediatek,mt8167", .data = &mt8516_platform_data },
 	{ .compatible = "mediatek,mt817x", .data = &mt2701_platform_data },
 	{ .compatible = "mediatek,mt8173", .data = &mt2701_platform_data },
diff --git a/drivers/cpufreq/sun50i-cpufreq-nvmem.c b/drivers/cpufreq/sun50i-cpufreq-nvmem.c
index 32a9c88f8ff6..0b882765cd66 100644
--- a/drivers/cpufreq/sun50i-cpufreq-nvmem.c
+++ b/drivers/cpufreq/sun50i-cpufreq-nvmem.c
@@ -10,6 +10,7 @@
 
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
 
+#include <linux/arm-smccc.h>
 #include <linux/cpu.h>
 #include <linux/module.h>
 #include <linux/nvmem-consumer.h>
@@ -18,26 +19,155 @@
 #include <linux/pm_opp.h>
 #include <linux/slab.h>
 
-#define MAX_NAME_LEN 7
-
 #define NVMEM_MASK	0x7
 #define NVMEM_SHIFT	5
 
 static struct platform_device *cpufreq_dt_pdev, *sun50i_cpufreq_pdev;
 
+struct sunxi_cpufreq_data {
+	u32 (*efuse_xlate)(u32 speedbin);
+};
+
+static u32 sun50i_h6_efuse_xlate(u32 speedbin)
+{
+	u32 efuse_value;
+
+	efuse_value = (speedbin >> NVMEM_SHIFT) & NVMEM_MASK;
+
+	/*
+	 * We treat unexpected efuse values as if the SoC was from
+	 * the slowest bin. Expected efuse values are 1-3, slowest
+	 * to fastest.
+	 */
+	if (efuse_value >= 1 && efuse_value <= 3)
+		return efuse_value - 1;
+	else
+		return 0;
+}
+
+static int get_soc_id_revision(void)
+{
+#ifdef CONFIG_HAVE_ARM_SMCCC_DISCOVERY
+	return arm_smccc_get_soc_id_revision();
+#else
+	return SMCCC_RET_NOT_SUPPORTED;
+#endif
+}
+
+/*
+ * Judging by the OPP tables in the vendor BSP, the quality order of the
+ * returned speedbin index is 4 -> 0/2 -> 3 -> 1, from worst to best.
+ * 0 and 2 seem identical from the OPP tables' point of view.
+ */
+static u32 sun50i_h616_efuse_xlate(u32 speedbin)
+{
+	int ver_bits = get_soc_id_revision();
+	u32 value = 0;
+
+	switch (speedbin & 0xffff) {
+	case 0x2000:
+		value = 0;
+		break;
+	case 0x2400:
+	case 0x7400:
+	case 0x2c00:
+	case 0x7c00:
+		if (ver_bits != SMCCC_RET_NOT_SUPPORTED && ver_bits <= 1) {
+			/* ic version A/B */
+			value = 1;
+		} else {
+			/* ic version C and later version */
+			value = 2;
+		}
+		break;
+	case 0x5000:
+	case 0x5400:
+	case 0x6000:
+		value = 3;
+		break;
+	case 0x5c00:
+		value = 4;
+		break;
+	case 0x5d00:
+		value = 0;
+		break;
+	default:
+		pr_warn("sun50i-cpufreq-nvmem: unknown speed bin 0x%x, using default bin 0\n",
+			speedbin & 0xffff);
+		value = 0;
+		break;
+	}
+
+	return value;
+}
+
+static struct sunxi_cpufreq_data sun50i_h6_cpufreq_data = {
+	.efuse_xlate = sun50i_h6_efuse_xlate,
+};
+
+static struct sunxi_cpufreq_data sun50i_h616_cpufreq_data = {
+	.efuse_xlate = sun50i_h616_efuse_xlate,
+};
+
+static const struct of_device_id cpu_opp_match_list[] = {
+	{ .compatible = "allwinner,sun50i-h6-operating-points",
+	  .data = &sun50i_h6_cpufreq_data,
+	},
+	{ .compatible = "allwinner,sun50i-h616-operating-points",
+	  .data = &sun50i_h616_cpufreq_data,
+	},
+	{}
+};
+
+/**
+ * dt_has_supported_hw() - Check if any OPPs use opp-supported-hw
+ *
+ * If we ask the cpufreq framework to use the opp-supported-hw feature, it
+ * will ignore every OPP node without that DT property. If none of the OPPs
+ * have it, the driver will fail probing, due to the lack of OPPs.
+ *
+ * Returns true if we have at least one OPP with the opp-supported-hw property.
+ */
+static bool dt_has_supported_hw(void)
+{
+	bool has_opp_supported_hw = false;
+	struct device_node *np, *opp;
+	struct device *cpu_dev;
+
+	cpu_dev = get_cpu_device(0);
+	if (!cpu_dev)
+		return false;
+
+	np = dev_pm_opp_of_get_opp_desc_node(cpu_dev);
+	if (!np)
+		return false;
+
+	for_each_child_of_node(np, opp) {
+		if (of_find_property(opp, "opp-supported-hw", NULL)) {
+			has_opp_supported_hw = true;
+			break;
+		}
+	}
+
+	of_node_put(np);
+
+	return has_opp_supported_hw;
+}
+
 /**
  * sun50i_cpufreq_get_efuse() - Determine speed grade from efuse value
- * @versions: Set to the value parsed from efuse
  *
- * Returns 0 if success.
+ * Returns non-negative speed bin index on success, a negative error
+ * value otherwise.
  */
-static int sun50i_cpufreq_get_efuse(u32 *versions)
+static int sun50i_cpufreq_get_efuse(void)
 {
+	const struct sunxi_cpufreq_data *opp_data;
 	struct nvmem_cell *speedbin_nvmem;
+	const struct of_device_id *match;
 	struct device_node *np;
 	struct device *cpu_dev;
-	u32 *speedbin, efuse_value;
-	size_t len;
+	u32 *speedbin;
 	int ret;
 
 	cpu_dev = get_cpu_device(0);
@@ -48,12 +178,12 @@ static int sun50i_cpufreq_get_efuse(u32 *versions)
 	if (!np)
 		return -ENOENT;
 
-	ret = of_device_is_compatible(np,
-				      "allwinner,sun50i-h6-operating-points");
-	if (!ret) {
+	match = of_match_node(cpu_opp_match_list, np);
+	if (!match) {
 		of_node_put(np);
 		return -ENOENT;
 	}
+	opp_data = match->data;
 
 	speedbin_nvmem = of_nvmem_cell_get(np, NULL);
 	of_node_put(np);
@@ -61,33 +191,25 @@ static int sun50i_cpufreq_get_efuse(u32 *versions)
 		return dev_err_probe(cpu_dev, PTR_ERR(speedbin_nvmem),
 				     "Could not get nvmem cell\n");
 
-	speedbin = nvmem_cell_read(speedbin_nvmem, &len);
+	speedbin = nvmem_cell_read(speedbin_nvmem, NULL);
 	nvmem_cell_put(speedbin_nvmem);
 	if (IS_ERR(speedbin))
 		return PTR_ERR(speedbin);
 
-	efuse_value = (*speedbin >> NVMEM_SHIFT) & NVMEM_MASK;
-
-	/*
-	 * We treat unexpected efuse values as if the SoC was from
-	 * the slowest bin. Expected efuse values are 1-3, slowest
-	 * to fastest.
-	 */
-	if (efuse_value >= 1 && efuse_value <= 3)
-		*versions = efuse_value - 1;
-	else
-		*versions = 0;
+	ret = opp_data->efuse_xlate(*speedbin);
 
 	kfree(speedbin);
-	return 0;
+
+	return ret;
 };
 
 static int sun50i_cpufreq_nvmem_probe(struct platform_device *pdev)
 {
 	int *opp_tokens;
-	char name[MAX_NAME_LEN];
-	unsigned int cpu;
-	u32 speed = 0;
+	char name[] = "speedXXXXXXXXXXX"; /* Integers can take 11 chars max */
+	unsigned int cpu, supported_hw;
+	struct dev_pm_opp_config config = {};
+	int speed;
 	int ret;
 
 	opp_tokens = kcalloc(num_possible_cpus(), sizeof(*opp_tokens),
@@ -95,13 +217,24 @@ static int sun50i_cpufreq_nvmem_probe(struct platform_device *pdev)
 	if (!opp_tokens)
 		return -ENOMEM;
 
-	ret = sun50i_cpufreq_get_efuse(&speed);
-	if (ret) {
+	speed = sun50i_cpufreq_get_efuse();
+	if (speed < 0) {
 		kfree(opp_tokens);
-		return ret;
+		return speed;
 	}
 
-	snprintf(name, MAX_NAME_LEN, "speed%d", speed);
+	/*
+	 * We need at least one OPP with the "opp-supported-hw" property,
+	 * or else the upper layers will ignore every OPP and will bail out.
+	 */
+	if (dt_has_supported_hw()) {
+		supported_hw = 1U << speed;
+		config.supported_hw = &supported_hw;
+		config.supported_hw_count = 1;
+	}
+
+	snprintf(name, sizeof(name), "speed%d", speed);
+	config.prop_name = name;
 
 	for_each_possible_cpu(cpu) {
 		struct device *cpu_dev = get_cpu_device(cpu);
@@ -111,12 +244,11 @@ static int sun50i_cpufreq_nvmem_probe(struct platform_device *pdev)
 			goto free_opp;
 		}
 
-		opp_tokens[cpu] = dev_pm_opp_set_prop_name(cpu_dev, name);
-		if (opp_tokens[cpu] < 0) {
-			ret = opp_tokens[cpu];
-			pr_err("Failed to set prop name\n");
+		ret = dev_pm_opp_set_config(cpu_dev, &config);
+		if (ret < 0)
 			goto free_opp;
-		}
+
+		opp_tokens[cpu] = ret;
 	}
 
 	cpufreq_dt_pdev = platform_device_register_simple("cpufreq-dt", -1,
@@ -131,7 +263,7 @@ static int sun50i_cpufreq_nvmem_probe(struct platform_device *pdev)
 
 free_opp:
 	for_each_possible_cpu(cpu)
-		dev_pm_opp_put_prop_name(opp_tokens[cpu]);
+		dev_pm_opp_clear_config(opp_tokens[cpu]);
 	kfree(opp_tokens);
 
 	return ret;
@@ -145,7 +277,7 @@ static void sun50i_cpufreq_nvmem_remove(struct platform_device *pdev)
 	platform_device_unregister(cpufreq_dt_pdev);
 
 	for_each_possible_cpu(cpu)
-		dev_pm_opp_put_prop_name(opp_tokens[cpu]);
+		dev_pm_opp_clear_config(opp_tokens[cpu]);
 	kfree(opp_tokens);
 }
 
@@ -160,6 +292,9 @@ static struct platform_driver sun50i_cpufreq_driver = {
 
 static const struct of_device_id sun50i_cpufreq_match_list[] = {
 	{ .compatible = "allwinner,sun50i-h6" },
+	{ .compatible = "allwinner,sun50i-h616" },
+	{ .compatible = "allwinner,sun50i-h618" },
+	{ .compatible = "allwinner,sun50i-h700" },
 	{}
 };
 MODULE_DEVICE_TABLE(of, sun50i_cpufreq_match_list);
diff --git a/drivers/cpufreq/tegra124-cpufreq.c b/drivers/cpufreq/tegra124-cpufreq.c
index aae951d4e77c..514146d98bca 100644
--- a/drivers/cpufreq/tegra124-cpufreq.c
+++ b/drivers/cpufreq/tegra124-cpufreq.c
@@ -52,12 +52,15 @@ out:
 
 static int tegra124_cpufreq_probe(struct platform_device *pdev)
 {
+	struct device_node *np __free(device_node) = of_cpu_device_node_get(0);
 	struct tegra124_cpufreq_priv *priv;
-	struct device_node *np;
 	struct device *cpu_dev;
 	struct platform_device_info cpufreq_dt_devinfo = {};
 	int ret;
 
+	if (!np)
+		return -ENODEV;
+
 	priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
 	if (!priv)
 		return -ENOMEM;
@@ -66,15 +69,9 @@ static int tegra124_cpufreq_probe(struct platform_device *pdev)
 	if (!cpu_dev)
 		return -ENODEV;
 
-	np = of_cpu_device_node_get(0);
-	if (!np)
-		return -ENODEV;
-
 	priv->cpu_clk = of_clk_get_by_name(np, "cpu_g");
-	if (IS_ERR(priv->cpu_clk)) {
-		ret = PTR_ERR(priv->cpu_clk);
-		goto out_put_np;
-	}
+	if (IS_ERR(priv->cpu_clk))
+		return PTR_ERR(priv->cpu_clk);
 
 	priv->dfll_clk = of_clk_get_by_name(np, "dfll");
 	if (IS_ERR(priv->dfll_clk)) {
@@ -110,8 +107,6 @@ static int tegra124_cpufreq_probe(struct platform_device *pdev)
 
 	platform_set_drvdata(pdev, priv);
 
-	of_node_put(np);
-
 	return 0;
 
 out_put_pllp_clk:
@@ -122,8 +117,6 @@ out_put_dfll_clk:
 	clk_put(priv->dfll_clk);
 out_put_cpu_clk:
 	clk_put(priv->cpu_clk);
-out_put_np:
-	of_node_put(np);
 
 	return ret;
 }
diff --git a/drivers/cpufreq/ti-cpufreq.c b/drivers/cpufreq/ti-cpufreq.c
index 46c41e2ca727..714ed53753fa 100644
--- a/drivers/cpufreq/ti-cpufreq.c
+++ b/drivers/cpufreq/ti-cpufreq.c
@@ -347,12 +347,10 @@ static const struct of_device_id ti_cpufreq_of_match[] = {
 
 static const struct of_device_id *ti_cpufreq_match_node(void)
 {
-	struct device_node *np;
+	struct device_node *np __free(device_node) = of_find_node_by_path("/");
 	const struct of_device_id *match;
 
-	np = of_find_node_by_path("/");
 	match = of_match_node(ti_cpufreq_of_match, np);
-	of_node_put(np);
 
 	return match;
 }
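
The sun50i conversion above replaces dev_pm_opp_set_prop_name() with the
consolidated dev_pm_opp_set_config() API; the returned token is later
handed to dev_pm_opp_clear_config(). A hedged usage sketch, with an
illustrative speed-bin value and helper name:

    #include <linux/bits.h>
    #include <linux/cpu.h>
    #include <linux/pm_opp.h>

    static int my_apply_opp_config(void)
    {
            struct dev_pm_opp_config config = {};
            struct device *cpu_dev = get_cpu_device(0);
            u32 supported_hw = BIT(1);      /* illustrative hardware bin */
            int token;

            if (!cpu_dev)
                    return -ENODEV;

            config.prop_name = "speed1";    /* pick "speed1" OPP variants */
            config.supported_hw = &supported_hw;
            config.supported_hw_count = 1;

            token = dev_pm_opp_set_config(cpu_dev, &config);
            if (token < 0)
                    return token;

            /* undo later with dev_pm_opp_clear_config(token); */
            return 0;
    }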