| author | Will Deacon <will.deacon@arm.com> | 2014-04-23 17:52:52 +0100 |
|---|---|---|
| committer | Linus Torvalds <torvalds@linux-foundation.org> | 2014-04-27 15:20:05 -0700 |
| commit | ec6931b281797b69e6cf109f9cc94d5a2bf994e0 | |
| tree | 9e8ab9ff709939a2ca13a7a5556b436689e2908b /arch | |
| parent | ac6c9e2bed093c4b60e313674fb7aec4f264c3d4 | |
word-at-a-time: avoid undefined behaviour in zero_bytemask macro
The asm-generic, big-endian version of zero_bytemask creates a mask of
bytes preceding the first zero-byte by left shifting ~0ul based on the
position of the first zero byte.
Unfortunately, if the first (top) byte is zero, the output of
prep_zero_mask has only the top bit set, resulting in undefined C
behaviour as we shift left by an amount equal to the width of the type.
As it happens, GCC doesn't manage to spot this through the call to fls(),
but the issue remains if architectures choose to implement their shift
instructions differently.
An example would be arch/arm/ (AArch32), where LSL Rd, Rn, #32 results
in Rd == 0x0, whilst on arch/arm64 (AArch64) LSL Xd, Xn, #64 results in
Xd == Xn.
Rather than check explicitly for the problematic shift, this patch adds
an extra shift by 1, replacing fls with __fls. Since zero_bytemask is
never called with a zero argument (has_zero() is used to check the data
first), we don't need to worry about calling __fls(0), which is
undefined.
Cc: <stable@vger.kernel.org>
Cc: Victor Kamensky <victor.kamensky@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Diffstat (limited to 'arch'; the change is in the asm-generic header, so no files under arch/ are touched)
0 files changed, 0 insertions, 0 deletions