| author | Jens Axboe <axboe@kernel.dk> | 2018-06-21 09:49:37 -0600 |
|---|---|---|
| committer | Christoph Hellwig <hch@lst.de> | 2018-06-21 18:59:46 +0200 |
| commit | 943e942e6266f22babee5efeb00f8f672fbff5bd (patch) | |
| tree | 9122de26af304afdf313020e689e9e4008de375c /drivers/nvme/host/nvme.h | |
| parent | 9f9cafc14016f23f982d3ce18f9057923bd3037a (diff) | |
nvme-pci: limit max IO size and segments to avoid high order allocations
nvme requires an sg table allocation for each request. If the request
is large, then the allocation can become quite large. For instance,
with our default software settings of 1280KB IO size, we'll need
10248 bytes of sg table. That turns into a 2nd order allocation,
which we can't always guarantee. If we fail the allocation, blk-mq
will retry it later. But there's no guarantee that we'll EVER be
able to allocate that much contiguous memory.
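To make the numbers concrete, here is a rough back-of-the-envelope calculation (plain C, user space) of why a 1280KB request ends up as a 2nd order allocation. The 32-byte scatterlist entry size and 4KB page size are assumptions for a typical x86-64 build without CONFIG_DEBUG_SG; the exact 10248-byte figure quoted above also includes a little per-request bookkeeping.

```c
/*
 * Illustration only, not kernel code: estimate the sg table size and
 * the page-allocation order for a 1280KB request.
 */
#include <stdio.h>

int main(void)
{
	const unsigned int io_size   = 1280 * 1024;	/* 1280KB request */
	const unsigned int page_size = 4096;		/* assumed 4KB pages */
	const unsigned int sg_entry  = 32;		/* assumed sizeof(struct scatterlist) */

	unsigned int nsegs = io_size / page_size;	/* 320 segments */
	unsigned int table = nsegs * sg_entry;		/* ~10KB of sg table */

	/* Pages needed, then the smallest power-of-two order that covers them. */
	unsigned int pages = (table + page_size - 1) / page_size;	/* 3 pages */
	unsigned int order = 0;
	while ((1u << order) < pages)
		order++;

	printf("segments=%u table=%u bytes pages=%u order=%u\n",
	       nsegs, table, pages, order);	/* order 2: four contiguous pages */
	return 0;
}
```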
Limit the IO size such that we never need more than a single page
of memory. That's a lot faster and more reliable. Then back that
allocation with a mempool, so that we know the allocation will
always succeed eventually.
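A minimal sketch of that mempool fallback, using the kernel's mempool API; the names iod_mempool and iod_alloc_size are illustrative and not necessarily the identifiers used in the actual nvme-pci change.

```c
#include <linux/errno.h>
#include <linux/mempool.h>
#include <linux/slab.h>

static mempool_t *iod_mempool;

static int iod_mempool_init(size_t iod_alloc_size, int numa_node)
{
	/*
	 * Reserve one element of the worst-case per-request size so an
	 * allocation can always complete eventually, even when a plain
	 * kmalloc() of that size fails under memory pressure.
	 */
	iod_mempool = mempool_create_node(1, mempool_kmalloc, mempool_kfree,
					  (void *)iod_alloc_size,
					  GFP_KERNEL, numa_node);
	return iod_mempool ? 0 : -ENOMEM;
}
```

mempool_alloc() tries the backing allocator first and only dips into the reserved element when that fails, so steady-state behaviour is unchanged.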
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Acked-by: Keith Busch <keith.busch@intel.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Diffstat (limited to 'drivers/nvme/host/nvme.h')
| -rw-r--r-- | drivers/nvme/host/nvme.h | 1 |
|---|---|---|

1 file changed, 1 insertion, 0 deletions
```diff
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 231807cbc849..0c4a33df3b2f 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -170,6 +170,7 @@ struct nvme_ctrl {
 	u64 cap;
 	u32 page_size;
 	u32 max_hw_sectors;
+	u32 max_segments;
 	u16 oncs;
 	u16 oacs;
 	u16 nssa;
```
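The new max_segments field only matters once it is applied to the namespace request queue. The sketch below shows one way the core could combine it with max_hw_sectors when setting queue limits; the helper name and exact arithmetic are assumptions, not the literal patch.

```c
#include <linux/blkdev.h>
#include <linux/kernel.h>

#include "nvme.h"

static void nvme_apply_queue_limits(struct nvme_ctrl *ctrl,
				    struct request_queue *q)
{
	/* Segments implied by the transfer-size cap, plus one for misalignment. */
	u32 max_segments =
		(ctrl->max_hw_sectors / (ctrl->page_size >> 9)) + 1;

	/* Honour a driver-imposed cap, e.g. "sg table must fit in one page". */
	max_segments = min_not_zero(max_segments, ctrl->max_segments);

	blk_queue_max_hw_sectors(q, ctrl->max_hw_sectors);
	blk_queue_max_segments(q, min_t(u32, max_segments, USHRT_MAX));
}
```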