author     Ville Syrjälä <ville.syrjala@linux.intel.com>   2014-04-13 12:45:03 +0300
committer  Ingo Molnar <mingo@kernel.org>                  2014-04-14 08:50:56 +0200
commit     86e587623a0ca8426267dad8d3eaebd6fc2d00f1
tree       9e383ae14c3ab264719ca02f4cd14a1dd0ca6fba /arch
parent     f96364041ccda63ff4bed96fd06b267d8d841dc0
x86/gpu: Fix sign extension issue in Intel graphics stolen memory quirks
Have the KB(), MB() and GB() macros produce unsigned longs to avoid
unintended sign extension issues with the gen2 memory size detection.
What happens is that the uint8_t returned by read_pci_config_byte()
first gets promoted to an int, which then gets multiplied by another
int produced by the MB() macro, and the result finally gets sign
extended to size_t.

This shouldn't be a problem in practice, as all affected gen2
platforms are 32-bit AFAIK, so size_t will be 32 bits there.
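For illustration, a minimal user-space sketch of that promotion chain
(the 0x40 value, the *_OLD macro names and the main() harness are
hypothetical stand-ins, not the kernel code; the signed overflow is
technically undefined behaviour, though it wraps on the usual
compilers):

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

#define KB_OLD(x) ((x) * 1024)     /* old: plain int arithmetic */
#define MB_OLD(x) (KB_OLD (KB_OLD (x)))

#define KB(x) ((x) * 1024UL)       /* fixed: unsigned long arithmetic */
#define MB(x) (KB (KB (x)))

int main(void)
{
	uint8_t val = 0x40;  /* stands in for read_pci_config_byte() */

	/*
	 * val is promoted to int and multiplied by the int produced by
	 * MB_OLD(): 0x40 * MB_OLD(32) == 2^31 overflows a 32-bit int,
	 * typically wrapping negative, and the negative int then gets
	 * sign extended when converted to a 64-bit size_t.
	 */
	size_t bad  = val * MB_OLD(32);
	size_t good = val * MB(32);

	printf("bad  = %#zx\n", bad);   /* 0xffffffff80000000 on LP64 */
	printf("good = %#zx\n", good);  /* 0x80000000 */
	return 0;
}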
Reported-by: Bjorn Helgaas <bhelgaas@google.com>
Suggested-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: linux-kernel@vger.kernel.org
Link: http://lkml.kernel.org/r/1397382303-17525-1-git-send-email-ville.syrjala@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Diffstat (limited to 'arch')
-rw-r--r--  arch/x86/kernel/early-quirks.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kernel/early-quirks.c b/arch/x86/kernel/early-quirks.c
index b0cc3809723d..6e2537c32190 100644
--- a/arch/x86/kernel/early-quirks.c
+++ b/arch/x86/kernel/early-quirks.c
@@ -240,7 +240,7 @@ static u32 __init intel_stolen_base(int num, int slot, int func, size_t stolen_s
 	return base;
 }
 
-#define KB(x) ((x) * 1024)
+#define KB(x) ((x) * 1024UL)
 #define MB(x) (KB (KB (x)))
 #define GB(x) (MB (KB (x)))
 
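Note that only KB() needed the UL suffix: MB() and GB() are defined by
nesting KB(), so the unsigned long type propagates outward through the
whole chain. A compile-time spot check (hypothetical, C11; not part of
the commit) could be:

#define KB(x) ((x) * 1024UL)
#define MB(x) (KB (KB (x)))
#define GB(x) (MB (KB (x)))

/* On LP64 this fails with the old int-based KB() and passes with 1024UL. */
_Static_assert(sizeof(GB(1)) == sizeof(unsigned long),
	       "GB() arithmetic should be unsigned long");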