If the last element in the PRP list fits at the end of the current page,
there's no need to allocate an extra page just to hold that single
element.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
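To make the arithmetic concrete, here is a minimal userspace sketch of the PRP-list page count, assuming 4 KiB pages and 8-byte entries; the helper names and the naive/improved split are illustrative, not the driver's code:

    #include <stdio.h>

    #define PAGE_SIZE     4096u                        /* illustrative page size */
    #define PRP_ENTSIZE      8u
    #define ENTS_PER_PAGE (PAGE_SIZE / PRP_ENTSIZE)    /* 512 entries per list page */

    /* Naive count: reserve the last slot of every list page for a chain
     * pointer, even on the final page. */
    static unsigned naive_list_pages(unsigned nprps)
    {
        return (nprps + ENTS_PER_PAGE - 2) / (ENTS_PER_PAGE - 1);
    }

    /* Improved count: the final page may use its last slot for a real PRP
     * entry, so a list ending exactly on a page boundary needs no extra
     * page.  Assumes nprps >= 1. */
    static unsigned list_pages(unsigned nprps)
    {
        unsigned pages = 1;

        while (nprps > ENTS_PER_PAGE) {
            nprps -= ENTS_PER_PAGE - 1;    /* one slot used by the chain pointer */
            pages++;
        }
        return pages;
    }

    int main(void)
    {
        unsigned n = 512;    /* exactly one full page of entries */

        printf("nprps=%u naive=%u improved=%u\n",
               n, naive_list_pages(n), list_pages(n));    /* naive=2, improved=1 */
        return 0;
    }

With 512 entries, the final element lands in the last slot of the first list page, so the second page the naive count would allocate is unnecessary.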
The spec says this is a 0's based value. We don't need to handle the
maximum value because it's reserved to mean "every namespace".
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
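For reference, a tiny hypothetical helper showing the "0's based" convention (a raw field value of N means N + 1 items); this is illustrative only:

    #include <stdio.h>

    /* NVMe "0's based" fields store one less than the real count, so a raw
     * value of 0 means one item (illustrative helper, not driver code). */
    static unsigned from_zeroes_based(unsigned raw)
    {
        return raw + 1;
    }

    int main(void)
    {
        printf("raw 0 -> %u, raw 63 -> %u\n",
               from_zeroes_based(0), from_zeroes_based(63));    /* 1 and 64 */
        return 0;
    }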
The head can never overrun the tail since we won't allocate enough command
IDs to let that happen. The status codes are in sync with the spec.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Reported-by: Randy Dunlap <rdunlap@xenotime.net>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Reported-by: Randy Dunlap <rdunlap@xenotime.net>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
The spec says we're not allowed to completely fill the submission queue.
Solve this by reducing the number of allocatable cmdids by 1.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
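The reason the queue may never be completely filled is the usual ring-buffer ambiguity: head == tail already means "empty", so a completely full queue would be indistinguishable from an empty one. A standalone sketch, with an invented queue depth and field names:

    #include <stdbool.h>
    #include <stdio.h>

    #define QUEUE_DEPTH 16u    /* illustrative depth */

    struct sq {
        unsigned head;    /* last position consumed by the controller */
        unsigned tail;    /* next free slot, advanced by the driver */
    };

    /* head == tail means "empty", so a full queue must leave one slot
     * unused; otherwise full and empty would look identical. */
    static bool sq_full(const struct sq *q)
    {
        return (q->tail + 1) % QUEUE_DEPTH == q->head;
    }

    int main(void)
    {
        struct sq q = { .head = 0, .tail = 0 };
        unsigned submitted = 0;

        while (!sq_full(&q)) {
            q.tail = (q.tail + 1) % QUEUE_DEPTH;
            submitted++;
        }
        printf("submitted %u of %u slots\n", submitted, QUEUE_DEPTH);    /* 15 of 16 */
        return 0;
    }

Leaving one slot unused is equivalent to handing out one fewer cmdid than the queue depth.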
When we submit subsequent portions of the I/O, we need to access the
updated block, not start reading again from the original position.
This was showing up as miscompares in the XFS randholes testcase.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
NVMe scatterlists must be virtually contiguous, like almost all I/Os.
However, when the filesystem lays out files with a hole, adjacent LBAs
may map to non-adjacent virtual addresses. Handle this by submitting
one NVMe command at a time for each virtually discontiguous range.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
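A rough userspace model of the split decision, using an invented (address, length) segment array in place of the driver's bio and scatterlist types:

    #include <stdio.h>

    struct seg {
        unsigned long vaddr;    /* virtual address of this piece of the I/O */
        unsigned      len;
    };

    /* Count the commands needed when each command may only cover a
     * virtually contiguous run of segments (illustrative stand-in for a
     * scatterlist walk). */
    static unsigned count_commands(const struct seg *s, unsigned n)
    {
        unsigned cmds = n ? 1 : 0;

        for (unsigned i = 1; i < n; i++)
            if (s[i].vaddr != s[i - 1].vaddr + s[i - 1].len)
                cmds++;    /* discontiguity: start a new command here */
        return cmds;
    }

    int main(void)
    {
        struct seg segs[] = {
            { 0x10000, 4096 },
            { 0x11000, 4096 },    /* contiguous with the previous segment */
            { 0x20000, 4096 },    /* not contiguous: a hole in the file */
        };

        printf("%u commands\n", count_commands(segs, 3));    /* prints 2 */
        return 0;
    }

Each virtually contiguous run becomes one command; the gap before the third segment forces a second command.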
Linux implements Flush as a bit in the bio. That means there may also be
data associated with the flush; if so, the flush should be sent before the
data. To avoid completing the bio twice, I add CMD_CTX_FLUSH to indicate
the completion routine should do nothing.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
The value written to the doorbell needs to be the first free index in
the queue, not the most recently used index in the queue.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
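A toy illustration of the off-by-one, with a plain variable standing in for the memory-mapped submission queue tail doorbell (names invented):

    #include <stdio.h>

    #define QUEUE_DEPTH 16u    /* illustrative depth */

    /* After copying a command into slot `tail`, advance tail and write the
     * new value (the first free index) to the doorbell, not the index of
     * the slot that was just filled. */
    static unsigned submit(unsigned tail, volatile unsigned *doorbell)
    {
        unsigned next = (tail + 1) % QUEUE_DEPTH;

        *doorbell = next;    /* correct: first free index */
        /* *doorbell = tail;    wrong: most recently used index */
        return next;
    }

    int main(void)
    {
        unsigned fake_doorbell = 0;
        unsigned tail = 0;

        tail = submit(tail, &fake_doorbell);
        printf("tail=%u doorbell=%u\n", tail, fake_doorbell);    /* both 1 */
        return 0;
    }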
If interrupts are misconfigured, the kthread will be needed to process
admin queue completions.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
I got confused about whether this included the admin queue or not, and
had to resort to reading the spec. It doesn't include the admin queue,
so make that clear in the name.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
This was the data transfer bit until spec rev 0.92.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Instead of trying to resubmit I/Os in the I/O completion path (in
interrupt context), wake up a kthread which will resubmit I/O from
user context. This allows mke2fs to run to completion.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Return -EBUSY if the queue is full or -ENOMEM if we failed to allocate
memory (or map a scatterlist). Also use GFP_ATOMIC to allocate the
nvme_bio and move the locking to the callers of nvme_submit_bio_queue().
In nvme_make_request(), don't permit an I/O to jump the queue -- if the
congestion list already has an entry, just add to the tail, rather than
trying to submit.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
In order to not overrun the sg array, we have to merge physically
contiguous pages into a single sg entry.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
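A simplified sketch of the merge, operating on a plain array of (physical address, length) pairs rather than a real scatterlist; names are invented:

    #include <stdio.h>

    struct seg {
        unsigned long paddr;    /* physical address (illustrative stand-in for an sg entry) */
        unsigned      len;
    };

    /* Merge physically adjacent segments in place so the result never
     * needs more entries than the source had.  Returns the new count. */
    static unsigned merge_contiguous(struct seg *s, unsigned n)
    {
        unsigned out = 0;

        for (unsigned i = 0; i < n; i++) {
            if (out && s[out - 1].paddr + s[out - 1].len == s[i].paddr)
                s[out - 1].len += s[i].len;    /* extend the previous entry */
            else
                s[out++] = s[i];               /* start a new entry */
        }
        return out;
    }

    int main(void)
    {
        struct seg segs[] = {
            { 0x1000, 4096 },
            { 0x2000, 4096 },    /* adjacent: merged into the first entry */
            { 0x8000, 4096 },    /* gap: stays separate */
        };

        printf("%u entries\n", merge_contiguous(segs, 3));    /* prints 2 */
        return 0;
    }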
If dma_map_sg returns 0 (failure), we need to fail the I/O.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
We were passing the nvme_queue to access the q_dmadev for the
dma_alloc_coherent calls, but since we moved to the dma pool API,
we really only need the nvme_dev.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Add a second memory pool for smaller I/Os. We can pack 16 of these on a
single page instead of using an entire page for each one.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
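The size arithmetic behind the pool choice, as an illustrative sketch: a 256-byte allocation (16 per 4 KiB page) holds up to 32 eight-byte PRP entries, so small I/Os never need a whole page. The constants and names below are assumptions, not the driver's code:

    #include <stdio.h>

    #define PAGE_SIZE     4096u    /* illustrative */
    #define SMALL_POOL_SZ  256u    /* 16 allocations per page */
    #define PRP_ENTSIZE      8u

    /* Use the small pool whenever the request's PRP entries fit in a
     * 256-byte allocation (i.e. at most 32 entries). */
    static const char *pick_pool(unsigned nprps)
    {
        return (nprps * PRP_ENTSIZE <= SMALL_POOL_SZ) ? "small" : "page";
    }

    int main(void)
    {
        printf("8 entries   -> %s pool\n", pick_pool(8));      /* small */
        printf("100 entries -> %s pool\n", pick_pool(100));    /* page  */
        return 0;
    }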
Calling dma_free_coherent from interrupt context causes warnings.
Using the DMA pools delays freeing until pool destruction, so avoids
the problem.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
There are too many things called 'info' in this driver. This data
structure is auxiliary information for a struct bio, so call it nvme_bio,
or nbio when used as a variable.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Add a pointer to the nvme_req_info to hold a new data structure
(nvme_prps) which contains a list of the pages allocated to this
particular request for holding PRP list entries. nvme_setup_prps()
now returns this pointer.

To allocate and free the memory used for PRP lists, we need a struct
device, so we need to pass the nvme_queue pointer to many functions
which previously didn't need it.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
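A minimal userspace sketch of the bookkeeping idea: a per-request structure remembering every page allocated for PRP list entries so they can all be freed when the request completes. The names and the plain malloc()/free() calls are stand-ins for the driver's DMA allocations:

    #include <stdio.h>
    #include <stdlib.h>

    /* Per-request record of the pages backing a PRP list, so they can be
     * released when the request completes (illustrative only). */
    struct nvme_prps {
        int   npages;
        void *pages[];    /* one entry per PRP-list page */
    };

    static struct nvme_prps *prps_alloc(int npages)
    {
        struct nvme_prps *prps = malloc(sizeof(*prps) + npages * sizeof(void *));

        if (!prps)
            return NULL;
        prps->npages = npages;
        for (int i = 0; i < npages; i++)
            prps->pages[i] = malloc(4096);    /* stand-in for a DMA-pool page */
        return prps;
    }

    static void prps_free(struct nvme_prps *prps)
    {
        if (!prps)
            return;
        for (int i = 0; i < prps->npages; i++)
            free(prps->pages[i]);
        free(prps);
    }

    int main(void)
    {
        struct nvme_prps *prps = prps_alloc(2);

        prps_free(prps);
        puts("allocated and freed a two-page PRP list record");
        return 0;
    }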
For multipage BIOs, we were always using sg[0] instead of advancing
through the list. Oops :-)
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
If POISON_POINTER_DELTA isn't defined, ensure these special context
values fall in page 0, which should never be mapped.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
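A self-contained sketch of the sentinel scheme: the special completion contexts are small values that can never collide with a real pointer, so the completion handler can tell them apart from genuine per-command data. The constants and handler below are invented for illustration; the driver derives its values from POISON_POINTER_DELTA:

    #include <stdio.h>

    /* Sentinels live in a low range that is never a valid pointer (page 0
     * here, by assumption), so a completion handler can tell a real
     * per-command context apart from a special marker. */
    #define CTX_BASE       0x300UL
    #define CTX_CANCELLED  ((void *)(CTX_BASE + 0x00))
    #define CTX_COMPLETED  ((void *)(CTX_BASE + 0x08))
    #define CTX_INVALID    ((void *)(CTX_BASE + 0x10))
    #define CTX_FLUSH      ((void *)(CTX_BASE + 0x18))

    static void handle_completion(void *ctx)
    {
        if (ctx == CTX_FLUSH)
            return;    /* nothing to do: the data bio completes the request */
        if (ctx == CTX_CANCELLED)
            return;    /* the submitter gave up; just drop the completion */
        if (ctx == CTX_COMPLETED || ctx == CTX_INVALID) {
            fprintf(stderr, "unexpected completion %p\n", ctx);
            return;
        }
        printf("completing real request %p\n", ctx);
    }

    int main(void)
    {
        int real_request = 0;    /* stands in for a per-command context */

        handle_completion(&real_request);
        handle_completion(CTX_FLUSH);
        return 0;
    }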
In the bio completion handler, check for bios on the congestion list
for this NVM queue. Also, lock the congestion list in the make_request
function as the queue may end up being shared between multiple CPUs.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
In addition to recording the completion data for each command, record
the anticipated completion time. Choose a timeout of 5 seconds for
normal I/Os and 60 seconds for admin I/Os.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
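A userspace approximation of that bookkeeping, using wall-clock time() where the driver would use jiffies; the structure and names are invented:

    #include <stdbool.h>
    #include <stdio.h>
    #include <time.h>

    #define IO_TIMEOUT_SECS     5
    #define ADMIN_TIMEOUT_SECS 60

    struct cmd_info {
        void  *ctx;        /* completion context */
        time_t timeout;    /* absolute deadline, recorded at submission */
    };

    static void record_cmd(struct cmd_info *info, void *ctx, bool admin)
    {
        info->ctx = ctx;
        info->timeout = time(NULL) +
                        (admin ? ADMIN_TIMEOUT_SECS : IO_TIMEOUT_SECS);
    }

    static bool cmd_timed_out(const struct cmd_info *info)
    {
        return time(NULL) > info->timeout;
    }

    int main(void)
    {
        struct cmd_info info;
        int ctx = 0;

        record_cmd(&info, &ctx, false);
        printf("timed out yet? %s\n", cmd_timed_out(&info) ? "yes" : "no");
        return 0;
    }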
If we're sharing a queue between multiple CPUs and we cancel a sync I/O,
we must have the queue locked to avoid corrupting the stack of the thread
that submitted the I/O. It turns out this is the same locking that's needed
for the threaded irq handler, so share that code.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
If the adapter completes a command ID that is outside the bounds of
the array, return CMD_CTX_INVALID instead of random data, and print a
message in the sync_completion handler (which is rapidly becoming the
misc completion handler :-)
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Set the context value to CMD_CTX_COMPLETED, and print a message in the
sync_completion handler if we see it.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
I have plans for other special values in sync_completion. Plus, this
is more self-documenting, and lets us detect bogus usages.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
We're currently calling bio_endio from hard interrupt context. This is
not a good idea for preemptible kernels as it will cause longer latencies.
Using a threaded interrupt will run the entire queue processing mechanism
(including bio_endio) in a thread, which can be preempted. Unfortunately,
it also adds about 7us of latency to the single-I/O case, so make it a
module parameter for the moment.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
We can't have preemption disabled when we call schedule(). Accept the
possibility that we'll get preempted, and it'll cost us some cacheline
bounces.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
If the user sends a fatal signal, sleeping in the TASK_KILLABLE state
permits the task to be aborted. The only wrinkle is making sure that
if/when the command completes later, it doesn't upset anything.
Handle this by setting the data pointer to 0, and having the sync
completion path check that the value isn't NULL before using it.
Eventually, bios can be cancelled through this path too. Note that the
cmdid isn't freed, to prevent reuse. We should also abort the command
in the future, but this is a good start.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Because I wasn't setting driverfs_dev, the devices were showing up under
/sys/devices/virtual/block. Now they appear underneath the PCI device
which they belong to.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
In case the card has been left in a partially-configured state,
write 0 to the Enable bit.
Signed-off-by: Shane Michael Matthews <shane.matthews@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Calling pci_request_selected_regions() reserves these regions for our use.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Need to call dma_set_coherent_mask() to allow queues to be allocated
above 4GB.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Need to call pci_set_master() to enable device DMA.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Call pci_enable_device_mem() at initialisation and pci_disable_device
at exit.
Signed-off-by: Shane Michael Matthews <shane.matthews@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
It can return NULL, so handle that.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
We don't keep a list of nvme_dev any more.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Allow userspace to submit synchronous I/O like the SCSI sg interface does.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
These are generalisations of the code that was in
nvme_submit_user_admin_command().
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>
Factor out most of nvme_identify() into a new nvme_submit_user_admin_command()
function. Change nvme_get_range_type() to call it and change nvme_ioctl to
realise that it's getting back all 64 ranges.
Signed-off-by: Matthew Wilcox <matthew.r.wilcox@intel.com>