author    Erico Nunes <nunes.erico@gmail.com>  2024-01-24 03:59:40 +0100
committer Qiang Yu <yuq825@gmail.com>          2024-02-12 16:26:31 +0800
commit    b5b345ea9b3e68c41aeb4163c0a221bf7405b8d8
tree      74e9a7bbb7b31d28b40bee7c13a40896f3a70e06
parent    7edd06233958d9086a9e3eb723a8768d3c5a9ce1
drm/lima: reset async_reset on pp hard reset
Lima pp jobs use an async reset to avoid having to wait for the soft
reset right after a job. The soft reset is done at the end of a job and
a reset_complete flag is expected to be set at the next job.

However, in case the user runs into a job timeout from any application,
a hard reset is issued to the hardware. This hard reset clears the
reset_complete flag, which causes an error message to show up before
the next job.

This is probably harmless for the execution but can be very confusing
to debug, as it blames a reset timeout on the next application to
submit a job.

Reset the async_reset flag when doing the hard reset so that we don't
get that message.

Signed-off-by: Erico Nunes <nunes.erico@gmail.com>
Reviewed-by: Vasily Khoruzhick <anarsoul@gmail.com>
Signed-off-by: Qiang Yu <yuq825@gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240124025947.2110659-2-nunes.erico@gmail.com
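For context, a condensed sketch of the async-reset pattern the message
describes. The function name lima_pp_soft_reset_async, the pp_write()
accessor and the async_reset flag follow the driver code in the diff
below; the body here is trimmed to the reset trigger and the flag
handling, so treat it as an illustration rather than verbatim driver
code:

/*
 * Condensed illustration: queue a soft reset at the end of a job
 * without waiting for it to complete. The async_reset flag records
 * that a reset is in flight so the next job knows it must wait.
 */
static void lima_pp_soft_reset_async(struct lima_ip *ip)
{
	/* A soft reset is already queued; nothing to do. */
	if (ip->data.async_reset)
		return;

	/* Trigger the soft reset; completion is checked before the next job. */
	pp_write(LIMA_PP_CTRL, LIMA_PP_CTRL_SOFT_RESET);
	ip->data.async_reset = true;
}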
 drivers/gpu/drm/lima/lima_pp.c | 7 +++++++
 1 file changed, 7 insertions(+), 0 deletions(-)
diff --git a/drivers/gpu/drm/lima/lima_pp.c b/drivers/gpu/drm/lima/lima_pp.c
index a5c95bed08c0..a8f8f63b8295 100644
--- a/drivers/gpu/drm/lima/lima_pp.c
+++ b/drivers/gpu/drm/lima/lima_pp.c
@@ -191,6 +191,13 @@ static int lima_pp_hard_reset(struct lima_ip *ip)
 	pp_write(LIMA_PP_PERF_CNT_0_LIMIT, 0);
 	pp_write(LIMA_PP_INT_CLEAR, LIMA_PP_IRQ_MASK_ALL);
 	pp_write(LIMA_PP_INT_MASK, LIMA_PP_IRQ_MASK_USED);
+
+	/*
+	 * if there was an async soft reset queued,
+	 * don't wait for it in the next job
+	 */
+	ip->data.async_reset = false;
+
 	return 0;
 }
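And the wait side, again condensed: before this patch, a hard reset
triggered by a job timeout wiped the hardware's reset-completed state
but left async_reset set, so the next job's completion wait timed out
and printed an error blaming the wrong application. Note that
lima_pp_reset_complete() below is a hypothetical stand-in for the
driver's actual timeout-based polling helper:

/*
 * Condensed illustration of the wait side. With a stale async_reset
 * flag left over from a hard reset, this poll would time out and emit
 * a misleading "reset time out" error before the next job.
 */
static int lima_pp_soft_reset_async_wait(struct lima_ip *ip)
{
	/* No soft reset queued, so there is nothing to wait for. */
	if (!ip->data.async_reset)
		return 0;

	/*
	 * lima_pp_reset_complete() is a hypothetical stand-in for the
	 * driver's timeout-based completion poll.
	 */
	if (!lima_pp_reset_complete(ip)) {
		dev_err(ip->dev->dev, "pp reset time out\n");
		return -ETIMEDOUT;
	}

	ip->data.async_reset = false;
	return 0;
}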