author     Vikas Shivappa <vikas.shivappa@linux.intel.com>   2018-04-20 15:36:21 -0700
committer  Thomas Gleixner <tglx@linutronix.de>              2018-05-19 13:16:44 +0200
commit     de73f38f768021610bd305cf74ef3702fcf6a1eb (patch)
tree       e72107bd73f857e91041e41bebd4b446a93fde06 /arch/x86/kernel/cpu/intel_rdt.c
parent     ba0f26d8529c2dfc9aa6d9e8a338180737f8c1be (diff)
x86/intel_rdt/mba_sc: Feedback loop to dynamically update mem bandwidth
mba_sc is a feedback loop where we periodically read the MBM counters and try to restrict the bandwidth below a maximum value so that the following is always true:

  "current bandwidth (cur_bw) < user-specified bandwidth (user_bw)"

The frequency of these checks is currently 1s; the updates simply tag along with the MBM overflow timer. Doing it once per second also keeps the bandwidth calculation simple. The step of each increase or decrease in bandwidth is the minimum granularity specified by the hardware.

Although MBA's goal is to restrict the bandwidth below a maximum, there may also be a need to increase the bandwidth. Since MBA controls the L2 external bandwidth whereas MBM measures the L3 external bandwidth, some rdtgroups may end up being restricted unnecessarily. This can happen in a sequence where an rdtgroup (a set of jobs) has high "L3 <-> memory" traffic in its initial phases -> mba_sc kicks in and reduces the bandwidth percentage values -> but after some time the group has mostly "L2 <-> L3" traffic. In this scenario mba_sc increases the bandwidth percentage when there is less memory traffic.

Signed-off-by: Vikas Shivappa <vikas.shivappa@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: ravi.v.shankar@intel.com
Cc: tony.luck@intel.com
Cc: fenghua.yu@intel.com
Cc: vikas.shivappa@intel.com
Cc: ak@linux.intel.com
Cc: hpa@zytor.com
Link: https://lkml.kernel.org/r/1524263781-14267-7-git-send-email-vikas.shivappa@linux.intel.com
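For illustration, a minimal sketch of one control step of this loop follows. The function name, parameters and bounds handling are assumptions made for the example, not the kernel's actual helpers; per the changelog, the real update is driven from the MBM overflow timer once per second.

/*
 * Hedged sketch only, not the kernel implementation: one mba_sc control
 * step. Compare the bandwidth measured via MBM against the user-specified
 * limit and move the MBA percentage by one step of the hardware
 * granularity. All names and parameters below are illustrative.
 */
static unsigned int mba_sc_next_pct(unsigned int cur_bw,  /* measured bandwidth (MB/s)   */
				    unsigned int user_bw, /* user-specified limit (MB/s) */
				    unsigned int cur_pct, /* current MBA percentage      */
				    unsigned int gran,    /* hardware granularity (%)    */
				    unsigned int min_pct) /* hardware minimum (%)        */
{
	if (cur_bw > user_bw && cur_pct >= min_pct + gran)
		return cur_pct - gran;	/* over the limit: throttle harder        */
	if (cur_bw < user_bw && cur_pct + gran <= 100)
		return cur_pct + gran;	/* traffic is mostly L2 <-> L3 now: relax */
	return cur_pct;			/* keep the current percentage            */
}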
Diffstat (limited to 'arch/x86/kernel/cpu/intel_rdt.c')
-rw-r--r--  arch/x86/kernel/cpu/intel_rdt.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/arch/x86/kernel/cpu/intel_rdt.c b/arch/x86/kernel/cpu/intel_rdt.c
index ad03d975883e..24bfa63e86cf 100644
--- a/arch/x86/kernel/cpu/intel_rdt.c
+++ b/arch/x86/kernel/cpu/intel_rdt.c
@@ -33,7 +33,6 @@
 #include <asm/intel_rdt_sched.h>
 #include "intel_rdt.h"

-#define MAX_MBA_BW 100u
 #define MBA_IS_LINEAR 0x4
 #define MBA_MAX_MBPS U32_MAX

@@ -350,7 +349,7 @@ static int get_cache_id(int cpu, int level)
  * that can be written to QOS_MSRs.
  * There are currently no SKUs which support non linear delay values.
  */
-static u32 delay_bw_map(unsigned long bw, struct rdt_resource *r)
+u32 delay_bw_map(unsigned long bw, struct rdt_resource *r)
 {
 	if (r->membw.delay_linear)
 		return MAX_MBA_BW - bw;
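With "static" dropped, delay_bw_map() becomes callable from the mba_sc code outside this file, and MAX_MBA_BW leaves this file as well (it is still used in the hunk above, so it presumably moves to intel_rdt.h). On linear-delay SKUs the mapping is a simple complement against 100%. The standalone snippet below mirrors that linear branch purely for illustration; linear_delay is a made-up name, not a kernel symbol.

#include <stdio.h>

#define MAX_MBA_BW 100u	/* 100% means no throttling */

/* Illustration of the linear branch of delay_bw_map() above: the QOS MSR
 * takes a delay value, not a bandwidth percentage. */
static unsigned int linear_delay(unsigned int bw_pct)
{
	return MAX_MBA_BW - bw_pct;
}

int main(void)
{
	/* A request for 70% memory bandwidth becomes a delay value of 30. */
	printf("bw 70%% -> delay %u\n", linear_delay(70));
	return 0;
}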