author     Chris Wilson <chris@chris-wilson.co.uk>    2016-12-16 07:46:41 +0000
committer  Daniel Vetter <daniel.vetter@ffwll.ch>     2016-12-16 14:31:06 +0100
commit     ad579002c8ec429930721c5bb8bd763e6c0c6286 (patch)
tree       27c789dc98b2b341ea0c48336c460133a34be228 /include
parent     56e3d1cd05cc7b24cfcae8714b0661bf607aaba3 (diff)
drm: Add drm_mm_for_each_node_safe()
A complement to drm_mm_for_each_node(): it wraps list_for_each_entry_safe()
so the list of nodes can be walked safely while entries are removed.
Note from Joonas:
"Most of the diff is about __drm_mm_nodes(mm), which could be split into
own patch and keep the R-b's."
But I don't feel like insisting on the resend.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
[danvet: Add note.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/20161216074718.32500-4-chris@chris-wilson.co.uk
Diffstat (limited to 'include')
-rw-r--r-- | include/drm/drm_mm.h | 19
1 file changed, 16 insertions, 3 deletions
diff --git a/include/drm/drm_mm.h b/include/drm/drm_mm.h
index 0b8371795aeb..0cc1b78c9ec2 100644
--- a/include/drm/drm_mm.h
+++ b/include/drm/drm_mm.h
@@ -179,6 +179,8 @@ static inline u64 drm_mm_hole_node_end(struct drm_mm_node *hole_node)
 	return __drm_mm_hole_node_end(hole_node);
 }
 
+#define __drm_mm_nodes(mm) (&(mm)->head_node.node_list)
+
 /**
  * drm_mm_for_each_node - iterator to walk over all allocated nodes
  * @entry: drm_mm_node structure to assign to in each iteration step
@@ -187,9 +189,20 @@ static inline u64 drm_mm_hole_node_end(struct drm_mm_node *hole_node)
  * This iterator walks over all nodes in the range allocator. It is implemented
  * with list_for_each, so not save against removal of elements.
  */
-#define drm_mm_for_each_node(entry, mm) list_for_each_entry(entry, \
-						&(mm)->head_node.node_list, \
-						node_list)
+#define drm_mm_for_each_node(entry, mm) \
+	list_for_each_entry(entry, __drm_mm_nodes(mm), node_list)
+
+/**
+ * drm_mm_for_each_node_safe - iterator to walk over all allocated nodes
+ * @entry: drm_mm_node structure to assign to in each iteration step
+ * @next: drm_mm_node structure to store the next step
+ * @mm: drm_mm allocator to walk
+ *
+ * This iterator walks over all nodes in the range allocator. It is implemented
+ * with list_for_each_safe, so save against removal of elements.
+ */
+#define drm_mm_for_each_node_safe(entry, next, mm) \
+	list_for_each_entry_safe(entry, next, __drm_mm_nodes(mm), node_list)
 
 #define __drm_mm_for_each_hole(entry, mm, hole_start, hole_end, backwards) \
 	for (entry = list_entry((backwards) ? (mm)->hole_stack.prev : (mm)->hole_stack.next, struct drm_mm_node, hole_stack); \
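
For context, the practical difference between the two iterators shows up whenever nodes are unlinked mid-walk: drm_mm_for_each_node() expands to list_for_each_entry(), so removing the current node breaks the traversal, while drm_mm_for_each_node_safe() caches the next entry before the loop body runs. The snippet below is a hypothetical driver teardown helper, not part of this patch; the function name, the assumption that nodes were kmalloc'ed by the driver, and the surrounding (omitted) locking are all illustrative only.

#include <linux/slab.h>
#include <drm/drm_mm.h>

/* Hypothetical helper: drop every allocation before tearing down the mm. */
static void example_release_all(struct drm_mm *mm)
{
	struct drm_mm_node *node, *next;

	/*
	 * The _safe variant reads the next pointer before the body runs,
	 * so drm_mm_remove_node() (which unlinks @node from the list)
	 * does not corrupt the walk. Plain drm_mm_for_each_node() would
	 * follow a stale pointer here.
	 */
	drm_mm_for_each_node_safe(node, next, mm) {
		drm_mm_remove_node(node);
		kfree(node);		/* assumes driver-owned, kmalloc'ed nodes */
	}

	drm_mm_takedown(mm);		/* legal now: the allocator is empty */
}

The same pattern is what makes the new macro useful in eviction or unload paths, where the walk itself is what empties the allocator.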