Export msm_gem_pin_pages_locked() and acquire the reservation lock
directly in the GEM pin callback. Same for unpin. Prepares for further
changes.

Dma-buf locking semantics require callers to hold the buffer's
reservation lock when invoking the pin and unpin callbacks. Prepare
msm accordingly by pushing locking out of the implementation. A
follow-up patch will fix locking for all GEM code at once.

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Reviewed-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
Tested-by: Dmitry Osipenko <dmitry.osipenko@collabora.com> # virtio-gpu
Acked-by: Christian König <christian.koenig@amd.com>
Acked-by: Zack Rusin <zack.rusin@broadcom.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240227113853.8464-5-tzimmermann@suse.de

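A rough sketch of the interim state this creates (the callback name and
error handling here are illustrative, not the exact msm code):

    /* pin callback takes the reservation lock itself, until the
     * follow-up locking rework moves that burden to the callers */
    static int msm_gem_prime_pin(struct drm_gem_object *obj)
    {
            int ret;

            ret = dma_resv_lock_interruptible(obj->resv, NULL);
            if (ret)
                    return ret;

            ret = msm_gem_pin_pages_locked(obj);
            dma_resv_unlock(obj->resv);

            return ret;
    }
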
Replace the ww_mutex locking dance with the drm_exec helper.

v2: Error path fixes, move drm_exec_fini so we only call it once (and
    only if we have called drm_exec_init())

Signed-off-by: Rob Clark <robdclark@chromium.org>
Patchwork: https://patchwork.freedesktop.org/patch/568342/

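For context, drm_exec replaces the open-coded ww_mutex retry loop with
roughly this pattern (a hedged sketch: the flag choice and submit fields
are assumptions, and on older kernels drm_exec_init() takes only two
arguments, without the object-count hint):

    #include <drm/drm_exec.h>

    struct drm_exec exec;
    unsigned int i;
    int ret;

    drm_exec_init(&exec, DRM_EXEC_INTERRUPTIBLE_WAIT, submit->nr_bos);
    drm_exec_until_all_locked(&exec) {
            for (i = 0; i < submit->nr_bos; i++) {
                    /* on contention the helper drops every held lock
                     * and restarts this loop for us */
                    ret = drm_exec_lock_obj(&exec, submit->bos[i].obj);
                    drm_exec_retry_on_contention(&exec);
                    if (ret)
                            goto out;
            }
    }

    /* ... pin and submit ... */
    out:
    drm_exec_fini(&exec);   /* called exactly once, per the v2 note */
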
Untangle unpinning from the unlock/unref loop. The unpin only happens
in error paths, so it is easier to decouple from the normal unlock path.

Since we never have an intermediate state where a subset of buffers
are pinned (ie. we never bail out of the pin or unpin loops), we can
replace the bo state flag bit with a global flag in the submit.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Patchwork: https://patchwork.freedesktop.org/patch/568335/

This was a small optimization for pre-soft-pin userspace. But mesa
switched to soft-pin nearly 5 years ago. So let's drop the optimization
and simplify the code.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Patchwork: https://patchwork.freedesktop.org/patch/568328/

The EXT_external_objects extension is a bit awkward as it doesn't pass
explicit modifiers, leaving the importer to guess with incomplete
information. In the case of vk (turnip) exporting and gl (freedreno)
importing, the "OPTIMAL_TILING_EXT" layout depends on VkImageCreateInfo
flags (among other things) which the importer does not know,
unfortunately leaving us with the need for a metadata back-channel.

The contents of the metadata are defined by userspace. The
EXT_external_objects extension is only required to work between
compatible versions of gl and vk drivers, as defined by device and
driver UUIDs.

v2: add missing metadata kfree
v3: Rework to move copy_from/to_user out from under gem obj lock
    to avoid angering lockdep about deadlocks against fs-reclaim

Signed-off-by: Rob Clark <robdclark@chromium.org>
Patchwork: https://patchwork.freedesktop.org/patch/566157/

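The v3 rework follows the usual pattern of staging userspace data
through a kernel buffer, so that no copy_from_user() (which can fault
and recurse into fs-reclaim) runs under the gem obj lock. A hedged
sketch of the set-metadata side (field names are illustrative):

    /* copy from userspace *before* taking the gem obj lock */
    buf = memdup_user(u64_to_user_ptr(args->value), args->len);
    if (IS_ERR(buf))
            return PTR_ERR(buf);

    msm_gem_lock(obj);
    kfree(msm_obj->metadata);
    msm_obj->metadata = buf;
    msm_obj->metadata_size = args->len;
    msm_gem_unlock(obj);
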
This was not strictly necessary, as page unpinning (ie. shrinker) only
cares about the resv. It did give us some extra sanity checking for
userspace-controlled iova, and was useful to catch issues on the kernel
and userspace side when enabling userspace iova. But if userspace screws
this up, it just corrupts its own gpu buffers and/or gets iova faults.
So we can just let userspace shoot itself in the foot and drop the extra
per-buffer SUBMIT overhead.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Patchwork: https://patchwork.freedesktop.org/patch/551023/

Split out pin_count incrementing and lru updating into a separate loop
so we can take the lru lock only once for all objs. Since we are still
holding the obj lock, it is safe to split this up.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Patchwork: https://patchwork.freedesktop.org/patch/551025/

Basically everywhere wants the base ptr type. So store that instead of
msm_gem_object.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Patchwork: https://patchwork.freedesktop.org/patch/551021/

Now that everything that controls which LRU an obj lives in *except* the
backing pages is protected by the LRU lock, add a special path to unpin
in the job_run() path, where we are assured that we already have backing
pages and will not be racing against eviction (because the GEM object's
dma_resv contains the fence that will be signaled when the submit/job
completes).

Signed-off-by: Rob Clark <robdclark@chromium.org>
Patchwork: https://patchwork.freedesktop.org/patch/527845/
Link: https://lore.kernel.org/r/20230320144356.803762-10-robdclark@gmail.com

Since the LRU lock is already acquired when moving an obj between LRUs,
we can use it to protect pin_count and madv without any significant
change in locking (ie. it just expands the scope of the lock by a
handful of instructions). This prepares the way to decrement the
pin_count in the job_run() path without needing to hold the obj lock,
to avoid a potential deadlock (or rather stall) caused by the
fence-signaling path (job_run()) blocking on shrinker/reclaim. (It is
only a stall because the wait_for_idle() wait for fence signaling is
not infinite.)

Signed-off-by: Rob Clark <robdclark@chromium.org>
Patchwork: https://patchwork.freedesktop.org/patch/527843/
Link: https://lore.kernel.org/r/20230320144356.803762-9-robdclark@gmail.com

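A hedged sketch of the narrower-scope unpin this enables (helper and
field names are approximate):

    /* job_run()/retire path: drop the pin count without the obj lock,
     * relying on the LRU lock that already serializes LRU moves */
    static void msm_gem_unpin_active(struct drm_gem_object *obj)
    {
            struct msm_gem_object *msm_obj = to_msm_bo(obj);
            struct msm_drm_private *priv = obj->dev->dev_private;

            mutex_lock(&priv->lru.lock);
            msm_obj->pin_count--;
            GEM_WARN_ON(msm_obj->pin_count < 0);
            update_lru_locked(obj);         /* illustrative helper */
            mutex_unlock(&priv->lru.lock);
    }
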
We need to use the inuse count to track that a BO is pinned until
we have the hw_fence. But we want to remove the obj lock from the
job_run() path, as this could deadlock against reclaim/shrinker
(because it is blocking the hw_fence from eventually being signaled).
So split that tracking out into a per-vma lock with narrower scope.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Patchwork: https://patchwork.freedesktop.org/patch/527839/
Link: https://lore.kernel.org/r/20230320144356.803762-5-robdclark@gmail.com

Stop open coding VMA construction, which will be needed in the next
commit. And since the VMA already has a ptr to the address space, stop
passing that around everywhere. (Also, an aspace always has an mmu, so
we can drop a couple pointless NULL checks.)

Signed-off-by: Rob Clark <robdclark@chromium.org>
Patchwork: https://patchwork.freedesktop.org/patch/527833/
Link: https://lore.kernel.org/r/20230320144356.803762-4-robdclark@gmail.com

Utilize the power of lockdep for our GEM locking-related sanity
checking.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Patchwork: https://patchwork.freedesktop.org/patch/496139/
Link: https://lore.kernel.org/r/20220802155152.1727594-16-robdclark@gmail.com

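With lockdep on the hook, the lock check can collapse to the standard
reservation assert, something like (a sketch; the msm helper may differ
in detail):

    static inline void
    msm_gem_assert_locked(struct drm_gem_object *obj)
    {
            /* lockdep-backed assert that the bo's resv is held */
            dma_resv_assert_held(obj->resv);
    }
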
All use of msm_gem_is_locked() is just for WARN_ON()s, so extract it
out into an msm_gem_assert_locked() helper.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Patchwork: https://patchwork.freedesktop.org/patch/496136/
Link: https://lore.kernel.org/r/20220802155152.1727594-15-robdclark@gmail.com

This converts over to use the shared GEM LRU/shrinker helpers. Note
that it means we are no longer tracking purgeable or willneed buffers
that are active separately. But the most recently pinned buffers should
be at the tail of the various LRUs, and the shrinker is already prepared
to encounter objects which are still active.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Patchwork: https://patchwork.freedesktop.org/patch/496131/
Link: https://lore.kernel.org/r/20220802155152.1727594-11-robdclark@gmail.com

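The shared helpers used here look roughly like this (a sketch against
the drm_gem_lru API as introduced alongside this series; exact
signatures have shifted in later kernels):

    /* one mutex can back several LRUs, so moves between them are atomic */
    drm_gem_lru_init(&priv->lru.dontneed, &priv->lru.lock);

    /* objects migrate to an LRU tail as their state changes */
    drm_gem_lru_move_tail(&priv->lru.dontneed, obj);

    /* shrinker scan walks an LRU, calling back to purge or evict */
    freed = drm_gem_lru_scan(&priv->lru.dontneed, sc->nr_to_scan, purge);

with purge being a bool (*)(struct drm_gem_object *) callback supplied
by the driver.
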
At this point the pinned refcnt is sufficient, and the shrinker is
already prepared to encounter objects which are still active according
to fences attached to the resv.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Patchwork: https://patchwork.freedesktop.org/patch/496122/
Link: https://lore.kernel.org/r/20220802155152.1727594-9-robdclark@gmail.com

Since that is what these functions actually do: they are getting
*pinned* pages (as opposed to cases where we need pages, but don't need
them pinned, like CPU mappings).

Signed-off-by: Rob Clark <robdclark@chromium.org>
Patchwork: https://patchwork.freedesktop.org/patch/496121/
Link: https://lore.kernel.org/r/20220802155152.1727594-7-robdclark@gmail.com

Currently in our shrinker path we shouldn't be encountering anything
that is active, but this will change in subsequent patches. So check
if there are unsignaled fences.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Patchwork: https://patchwork.freedesktop.org/patch/496117/
Link: https://lore.kernel.org/r/20220802155152.1727594-5-robdclark@gmail.com

The only reason we grabbed the lock was to satisfy a bunch of places
that WARN_ON() if called without the lock held. But this angers lockdep,
which doesn't realize no one else can be holding the lock by the time we
end up destroying the object (and sees what would otherwise be a locking
inversion between reservation_ww_class_mutex and fs_reclaim).

Closes: https://gitlab.freedesktop.org/drm/msm/-/issues/14
Signed-off-by: Rob Clark <robdclark@chromium.org>
Patchwork: https://patchwork.freedesktop.org/patch/489364/
Link: https://lore.kernel.org/r/20220613205032.2652374-1-robdclark@gmail.com

Misc small cleanup I noticed. Not called from another object file since
commit 3c9edd9c85f5 ("drm/msm: Introduce GEM object funcs").

Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Abhinav Kumar <quic_abhinavk@quicinc.com>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Patchwork: https://patchwork.freedesktop.org/patch/489362/
Link: https://lore.kernel.org/r/20220613204910.2651747-1-robdclark@gmail.com
Signed-off-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>

Previously the BO_PINNED state in the submit was tracking two related
but different things: (1) that the buffer object was pinned, and (2)
that the vma (mapping within a set of pagetables) was pinned. But with
fenced vma unpin (needed so that userspace couldn't race with the retire
path for releasing a vma) these two were decoupled. The fact that the
BO_PINNED flag was already cleared meant that we leaked the bo pin count
which should have been dropped when the submit was retired.

So split this state into BO_OBJ_PINNED and BO_VMA_PINNED, so they can be
dropped independently.

Fixes: 95d1deb02a9c ("drm/msm/gem: Add fenced vma unpin")
Signed-off-by: Rob Clark <robdclark@chromium.org>
Patchwork: https://patchwork.freedesktop.org/patch/487559/
Link: https://lore.kernel.org/r/20220527172341.2151005-1-robdclark@gmail.com

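The split amounts to two independent bits in the per-bo submit flags,
so each pin can be dropped on its own (a sketch; the exact values are
illustrative):

    /* per-bo state bits in the submit (sketch) */
    #define BO_LOCKED       0x4000  /* obj lock is held */
    #define BO_OBJ_PINNED   0x2000  /* obj (pages) is pinned */
    #define BO_VMA_PINNED   0x1000  /* vma (iova) is pinned */
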
The motivation at this point is mainly a native userspace mesa driver
in a VM guest. The one remaining synchronous "hotpath" is buffer
allocation, because the guest needs to wait to know the bo's iova before
it can start emitting cmdstream/state that references the new bo. By
allocating the iova in guest userspace, we no longer need to wait for a
response from the host, but can just rely on the allocation request
being processed before the cmdstream submission. Allocation failures
(OOM, etc) would just be treated as context-lost (ie.
GL_GUILTY_CONTEXT_RESET), or subsequent allocations (or readpix, etc)
can raise GL_OUT_OF_MEMORY.

v2: Fix inuse check
v3: Change mismatched iova case to -EBUSY

Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Reviewed-by: Dmitry Osipenko <dmitry.osipenko@collabora.com>
Link: https://lore.kernel.org/r/20220411215849.297838-11-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>

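From the userspace side the flow is roughly the following (a hedged
sketch using the MSM_INFO_SET_IOVA gem-info param; error handling and
the userspace iova allocator are elided):

    struct drm_msm_gem_info req = {
            .handle = bo_handle,
            .info   = MSM_INFO_SET_IOVA,    /* iova chosen by userspace */
            .value  = iova,
    };

    /* no round trip needed: the kernel processes this request before
     * any following cmdstream submit that references the bo */
    ret = drmCommandWriteRead(fd, DRM_MSM_GEM_INFO, &req, sizeof(req));
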
With userspace allocated iova (next patch), we can have a race condition
where userspace observes the fence completion and deletes the vma before
retire_submit() gets around to unpinning the vma. To handle this, add a
fenced unpin which drops the refcount but tracks the fence, and update
msm_gem_vma_inuse() to check any previously unsignaled fences.

v2: Fix inuse underflow (duplicate unpin)
v3: Fix msm_job_run() vs submit_cleanup() race condition

Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20220411215849.297838-10-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>

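The inuse check then looks conceptually like this (a simplified sketch
holding a single dma_fence pointer; the real code tracks per-ring fence
seqnos instead):

    bool msm_gem_vma_inuse(struct msm_gem_vma *vma)
    {
            if (vma->inuse > 0)
                    return true;

            /* a fenced unpin only really completes once its fence signals */
            if (vma->fence && !dma_fence_is_signaled(vma->fence))
                    return true;

            return false;
    }
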
This way we only look up the vma once per object per submit, for both
the submit and retire paths.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20220411215849.297838-9-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>

There was only a single user, which could just as easily stash the iova
when pinning.

v2: fix prepare->prepare->cleanup->cleanup sequences

Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Link: https://lore.kernel.org/r/20220411215849.297838-7-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>

Get rid of all the unnecessary conversion between address/size and page
offsets. It just confuses things.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Link: https://lore.kernel.org/r/20220411215849.297838-6-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>

Prep for a following patch, where it gets a bit more complicated.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Link: https://lore.kernel.org/r/20220411215849.297838-5-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>

These belong more cleanly in the gem header.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Reviewed-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
Link: https://lore.kernel.org/r/20220411215849.297838-2-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>

Noticed this was unused and never set.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20220225202614.225197-2-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>

Other processes don't need to know about faults that they are isolated
from by virtue of address space isolation. They are only interested in
whether some of their state might have been corrupted.

But to be safe, also track unattributed faults. This case should really
never happen unless there is a kernel bug (and that would never happen,
right?).

v2: Instead of adding a new param, just change the behavior of the
    existing param to match what userspace actually wants [anholt]

Closes: https://gitlab.freedesktop.org/mesa/mesa/-/issues/5934
Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20220201161618.778455-3-robdclark@gmail.com
Reviewed-by: Emma Anholt <emma@anholt.net>
Signed-off-by: Rob Clark <robdclark@chromium.org>

Kickstart new drm-misc-next cycle.

Signed-off-by: Maxime Ripard <maxime@cerno.tech>

Moving the driver-specific mmap code into a GEM object function allows
for using DRM helpers for various mmap callbacks.

The respective msm functions are being removed. The file_operations
structure fops is now being created by the helper macro
DEFINE_DRM_GEM_FOPS().

v2:
	* rebase onto latest upstream
	* remove declaration of msm_gem_mmap_obj() from msm_fbdev.c

Signed-off-by: Thomas Zimmermann <tzimmermann@suse.de>
Link: https://lore.kernel.org/r/20210706084753.8194-1-tzimmermann@suse.de
[squash in missing VM_DONTEXPAND flag]
Signed-off-by: Rob Clark <robdclark@chromium.org>

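DEFINE_DRM_GEM_FOPS() expands to the file_operations boilerplate the
driver previously spelled out by hand, roughly used like this (sketch):

    #include <drm/drm_gem.h>

    DEFINE_DRM_GEM_FOPS(fops);

    static const struct drm_driver msm_driver = {
            /* ... */
            .fops = &fops,
    };

    /* drm_gem_mmap() then dispatches to the per-object
     * drm_gem_object_funcs.mmap callback */
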
drm_sched_job_init is already at the right place, so this boils down
to deleting code.

Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Rob Clark <robdclark@gmail.com>
Reviewed-by: Rob Clark <robdclark@gmail.com>
Cc: Sean Paul <sean@poorly.run>
Cc: Sumit Semwal <sumit.semwal@linaro.org>
Cc: "Christian König" <christian.koenig@amd.com>
Cc: linux-arm-msm@vger.kernel.org
Cc: freedreno@lists.freedesktop.org
Cc: linux-media@vger.kernel.org
Cc: linaro-mm-sig@lists.linaro.org
Link: https://patchwork.freedesktop.org/patch/msgid/20210805104705.862416-13-daniel.vetter@ffwll.ch

This was only used to detect userspace including the same bo multiple
times in a submit. But ww_mutex can already tell us this.

When we drop struct_mutex around the submit ioctl, we'd otherwise need
to lock the bo before adding it to the bo_list. But since ww_mutex can
already tell us this, it is simpler just to remove the bo_list.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20210728010632.2633470-11-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>

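The duplicate detection falls out of the ww_mutex acquire context:
locking the same reservation twice with one ticket returns -EALREADY,
along these lines (a sketch):

    ret = dma_resv_lock_interruptible(obj->resv, &submit->ticket);
    if (ret == -EALREADY) {
            /* userspace listed the same bo twice in this submit */
            ret = -EINVAL;
    }
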
For existing adrenos, there are one or more ringbuffers, depending on
whether preemption is supported. When preemption is supported, each
ringbuffer has its own priority. A submitqueue (which maps to a gl
context or vk queue in userspace) is mapped to a specific ringbuffer at
creation time, based on the submitqueue's priority.

Each ringbuffer has its own drm_gpu_scheduler. Each submitqueue maps to
a drm_sched_entity. And each submit maps to a drm_sched_job.

Closes: https://gitlab.freedesktop.org/drm/msm/-/issues/4
Signed-off-by: Rob Clark <robdclark@chromium.org>
Acked-by: Christian König <christian.koenig@amd.com>
Link: https://lore.kernel.org/r/20210728010632.2633470-10-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>

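The mapping in struct form, as a sketch of the relationships rather
than the literal msm declarations:

    struct msm_ringbuffer {
            struct drm_gpu_scheduler sched; /* one scheduler per ring */
            /* ... */
    };

    struct msm_gpu_submitqueue {
            struct drm_sched_entity *entity;        /* one entity per queue */
            /* ... */
    };

    struct msm_gem_submit {
            struct drm_sched_job base;      /* one job per submit */
            /* ... */
    };
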
Previously the (non-fd) fence returned from the submit ioctl was a raw
seqno, which is scoped to the ring. But from a UABI standpoint, the
ioctls related to seqno fences all specify a submitqueue. We can take
advantage of that to replace the seqno fences with a cyclic idr handle.

This is in preparation for moving to the drm scheduler, at which point
the submit ioctl will return after queuing the submit job to the
scheduler, but before the submit is written into the ring (and therefore
before a ring seqno has been assigned). Which means we need to replace
the dma_fence that userspace may need to wait on with a scheduler fence.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Acked-by: Christian König <christian.koenig@amd.com>
Link: https://lore.kernel.org/r/20210728010632.2633470-8-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>

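The handle allocation itself is the stock cyclic idr pattern, scoped
per submitqueue (field names approximate):

    /* submitqueue-scoped fence id, returned from the submit ioctl */
    mutex_lock(&queue->lock);
    submit->fence_id = idr_alloc_cyclic(&queue->fence_idr,
                                        submit->user_fence,
                                        1, INT_MAX, GFP_KERNEL);
    mutex_unlock(&queue->lock);
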
Move all the locked/active/pinned state handling to msm_gem_submit.c.
In particular, for drm/scheduler, we'll need to do all this before
pushing the submit job to the scheduler. But while we're at it we can
get rid of the duplicate pin and refcnt.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Acked-by: Christian König <christian.koenig@amd.com>
Link: https://lore.kernel.org/r/20210728010632.2633470-7-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>

No idea why we were still using this. It certainly hasn't been needed
for some time. So drop the pointless twin codepaths.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Acked-by: Christian König <christian.koenig@amd.com>
Link: https://lore.kernel.org/r/20210728010632.2633470-4-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>

Fix a couple of incorrect or misspelt comments, and add a submitqueue
doc comment.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Acked-by: Christian König <christian.koenig@amd.com>
Link: https://lore.kernel.org/r/20210728010632.2633470-2-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>

Wire up support to stall the SMMU on iova fault, and collect a
devcoredump snapshot for easier debugging of faults.

Currently this is a6xx-only, but mostly only because so far it is the
only one using adreno-smmu-priv.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Acked-by: Jordan Crouse <jordan@cosmicpenguin.net>
Link: https://lore.kernel.org/r/20210610214431.539029-6-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>

Our initial logic for excluding dma-bufs was not quite right. In
particular we want the msm_gem_get/put_pages() path used for exported
dma-bufs to increment/decrement the pin-count.

Also, in case the importer is vmap'ing the dma-buf, we need to be
sure to update the object's status, because it is now no longer
potentially evictable.

Fixes: 63f17ef83428 ("drm/msm: Support evicting GEM objects to swap")
Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20210426235326.1230125-1-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>

Objects that are potential candidates for swapping out are (1) willneed
(ie. if they are purgeable/MADV_DONTNEED we can just free the pages
without them having to land in swap), (2) not on an active list, (3) not
dma-buf imported or exported, and (4) not vmap'd. This repurposes the
purged list for objects that do not have backing pages (either because
they have not been pinned for the first time yet, or, in a later patch,
because they have been unpinned/evicted).

Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20210405174532.1441497-7-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>

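Condensed into a predicate, the four conditions read roughly as follows
(a hypothetical helper; the real checks live inline in the shrinker
path, and is_active() here is illustrative):

    static bool is_swappable(struct msm_gem_object *msm_obj)
    {
            return (msm_obj->madv == MSM_MADV_WILLNEED) &&  /* (1) */
                   !is_active(msm_obj) &&                   /* (2) */
                   !msm_obj->base.import_attach &&          /* (3) not imported */
                   !msm_obj->base.dma_buf &&                /*     ...or exported */
                   (msm_obj->vmap_count == 0);              /* (4) not vmap'd */
    }
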
Currently nearly everything, other than newly allocated objects which
are not yet backed by pages, is pinned and resident in RAM. But it will
be nice to have some stats on what is unpinned once that is supported.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20210405174532.1441497-6-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>

If you mess something up, you don't really need to see the same WARN_ON
splat 4000 times pumped out of a slow debug UART port...

Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20210405174532.1441497-2-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>

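That is, the GEM sanity checks switch to the kernel's ratelimited WARN
variant, roughly (the exact msm macro may differ slightly):

    #include <linux/ratelimit.h>
    #include <linux/stringify.h>

    /* fires a bounded number of times per interval, instead of once
     * per object per submit */
    #define GEM_WARN_ON(x)  WARN_RATELIMIT(x, "%s", __stringify(x))
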
The previous patch fixes the user-visible spelling. This one fixes the
code. Oops.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Link: https://lore.kernel.org/r/20210406151816.1515329-1-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>

The last patch lost the breakdown of active vs inactive GEM objects in
$debugfs/gem. But we can add some better stats to summarize not just
active vs inactive, but also purgeable/purged, to make up for that.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Tested-by: Douglas Anderson <dianders@chromium.org>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
Link: https://lore.kernel.org/r/20210401012722.527712-5-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>

In normal cases the gem obj lock is acquired first before mm_lock. The
exception is iterating the various object lists. In the shrinker path,
deadlock is avoided by using msm_gem_trylock() and skipping over objects
that cannot be locked. But for debugfs the straightforward thing is to
split things out into a separate list of all objects protected by its
own lock.

Fixes: d984457b31c4 ("drm/msm: Add priv->mm_lock to protect active/inactive lists")
Signed-off-by: Rob Clark <robdclark@chromium.org>
Tested-by: Douglas Anderson <dianders@chromium.org>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
Link: https://lore.kernel.org/r/20210401012722.527712-4-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>

When the system is under heavy memory pressure, we can end up with lots
of concurrent calls into the shrinker. Keeping a running tab on what we
can shrink avoids grabbing a lock in shrinker->count(), and avoids
shrinker->scan() getting called when not profitable.

Also, we can keep purged objects in their own list to avoid re-traversing
them to help cut down time in the critical section further.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Tested-by: Douglas Anderson <dianders@chromium.org>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
Link: https://lore.kernel.org/r/20210401012722.527712-3-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>

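In other words, shrinker->count() becomes a read of a maintained counter
instead of a list walk (a sketch; the counter field name is assumed):

    static unsigned long
    msm_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc)
    {
            struct msm_drm_private *priv =
                    container_of(shrinker, struct msm_drm_private, shrinker);

            /* running tab, updated as objects become (un)shrinkable,
             * so no lock or list traversal needed here */
            return priv->shrinkable_count;
    }
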
Unused since commit c951a9b284b9 ("drm/msm: Remove msm_gem_free_work").

Signed-off-by: Rob Clark <robdclark@chromium.org>
Tested-by: Douglas Anderson <dianders@chromium.org>
Reviewed-by: Douglas Anderson <dianders@chromium.org>
Link: https://lore.kernel.org/r/20210401012722.527712-2-robdclark@gmail.com
Signed-off-by: Rob Clark <robdclark@chromium.org>

https://gitlab.freedesktop.org/drm/msm into drm-next

* Shutdown hook for GPU (to ensure GPU is idle before iommu goes away)
* GPU cooling device support
* DSI 7nm and 10nm phy/pll updates
* Additional sm8150/sm8250 DPU support (merge_3d and DSPP color
  processing)
* Various DP fixes
* A whole bunch of W=1 fixes from Lee Jones
* GEM locking re-work (no more trylock_recursive in shrinker!)
* LLCC (system cache) support
* Various other fixes/cleanups

Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Rob Clark <robdclark@gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/CAF6AEGt0G=H3_RbF_GAQv838z5uujSmFd+7fYhL6Yg=23LwZ=g@mail.gmail.com