author     Jason A. Donenfeld <Jason@zx2c4.com>	2022-05-14 13:59:30 +0200
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org>	2022-06-22 14:11:17 +0200
commit     1aeedbe02b5c730a6fbc2370f6bc110c2d47f6a8 (patch)
tree       49e4574ed08034d71f47b4d5a76a8aa5a271d657 /mm
parent     ceaf1feefe6ea8122c9cf2244ab9f373db298ec7 (diff)
random: move randomize_page() into mm where it belongs
commit 5ad7dd882e45d7fe432c32e896e2aaa0b21746ea upstream.
randomize_page is an mm function. It is documented like one. It contains
the history of one. It has the naming convention of one. It looks
just like another very similar function in mm, randomize_stack_top().
And it has always been maintained and updated by mm people. There is no
need for it to be in random.c. In the "which shape does not look like
the other ones" test, pointing to randomize_page() is correct.
So move randomize_page() into mm/util.c, right next to the similar
randomize_stack_top() function.
This commit contains no actual code changes.
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Diffstat (limited to 'mm')
-rw-r--r--	mm/util.c	32
1 file changed, 32 insertions, 0 deletions
diff --git a/mm/util.c b/mm/util.c
index ab358c64bbd3..04ebc76588aa 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -320,6 +320,38 @@ unsigned long randomize_stack_top(unsigned long stack_top)
 #endif
 }
 
+/**
+ * randomize_page - Generate a random, page aligned address
+ * @start:	The smallest acceptable address the caller will take.
+ * @range:	The size of the area, starting at @start, within which the
+ *		random address must fall.
+ *
+ * If @start + @range would overflow, @range is capped.
+ *
+ * NOTE: Historical use of randomize_range, which this replaces, presumed that
+ * @start was already page aligned. We now align it regardless.
+ *
+ * Return: A page aligned address within [start, start + range). On error,
+ * @start is returned.
+ */
+unsigned long randomize_page(unsigned long start, unsigned long range)
+{
+	if (!PAGE_ALIGNED(start)) {
+		range -= PAGE_ALIGN(start) - start;
+		start = PAGE_ALIGN(start);
+	}
+
+	if (start > ULONG_MAX - range)
+		range = ULONG_MAX - start;
+
+	range >>= PAGE_SHIFT;
+
+	if (range == 0)
+		return start;
+
+	return start + (get_random_long() % range << PAGE_SHIFT);
+}
+
 #ifdef CONFIG_ARCH_WANT_DEFAULT_TOPDOWN_MMAP_LAYOUT
 unsigned long arch_randomize_brk(struct mm_struct *mm)
 {
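For context, a minimal sketch of how a caller might use the relocated function. This is not part of the patch: the arch_randomize_brk() shape and the SZ_32M range are assumptions modeled loosely on how some architectures randomize the heap start, shown only to illustrate the [start, start + range) contract documented in the kernel-doc above.

```c
/*
 * Illustrative sketch only (kernel build context assumed): an
 * architecture's arch_randomize_brk() could randomize the heap start
 * with randomize_page(). The 32 MiB range is an example value, not
 * something taken from this patch.
 */
#include <linux/mm.h>
#include <linux/mm_types.h>
#include <linux/sizes.h>

unsigned long arch_randomize_brk(struct mm_struct *mm)
{
	/* Pick a page-aligned address in [mm->brk, mm->brk + SZ_32M). */
	return randomize_page(mm->brk, SZ_32M);
}
```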