| |
This is a static checker warning, not something I'm desperately
concerned about. But snprintf() returns the number of bytes that
would have been copied if there were space. We really care about the
number of bytes that actually were copied so we should use scnprintf()
instead.
It probably won't overrun, and in that case we may as well just use
sprintf() but these sorts of things make static checkers and code
reviewers happier.
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
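A minimal sketch of the difference, with a hypothetical helper and buffer (not the driver's actual code):

    #include <linux/kernel.h>       /* snprintf(), scnprintf() */
    #include <linux/types.h>

    static void fill_status(char *buf, size_t len, u64 db, u32 spad)
    {
            size_t off = 0;

            /* scnprintf() returns the bytes actually written (excluding
             * the NUL), so 'off' can never move past 'len'. */
            off += scnprintf(buf + off, len - off, "db %#llx ",
                             (unsigned long long)db);
            off += scnprintf(buf + off, len - off, "spad %u\n", spad);

            /* With snprintf(), a truncated first call could push 'off'
             * beyond 'len', and 'len - off' would then underflow for the
             * next call. */
    }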
| |
The peer_addr member of intel_ntb_dev is not set; therefore, when
acquiring ntb_peer_db and ntb_peer_spad we only get the offset rather
than the actual physical address. Add a fix to correct that.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Acked-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
| |
schedule_timeout_* takes a timeout in jiffies, but the code is currently
passing in a constant, which makes this timeout HZ-dependent. Pass it
through msecs_to_jiffies() to fix this up.
Signed-off-by: Nicholas Mc Guire <hofrat@osadl.org>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
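For reference, a hedged before/after sketch of the pattern (the constant 100 and the wrapper name are only examples):

    #include <linux/jiffies.h>
    #include <linux/sched.h>

    static void wait_a_bit(void)
    {
            /* Before: 100 is interpreted as jiffies, so the real delay
             * depends on CONFIG_HZ (100 ms at HZ=1000, 1 s at HZ=100). */
            schedule_timeout_interruptible(100);

            /* After: roughly 100 ms regardless of HZ. */
            schedule_timeout_interruptible(msecs_to_jiffies(100));
    }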
| |
schedule_timeout_* takes a timeout in jiffies, but the code is currently
passing in a constant, which makes this timeout HZ-dependent. Pass it
through msecs_to_jiffies() to fix this up.
Signed-off-by: Nicholas Mc Guire <hofrat@osadl.org>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
| |
Fix typo in module parameter descriptions.
Signed-off-by: Wei Yongjun <weiyj.lk@gmail.com>
Acked-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
| |
Fix 'db_init' parameter description.
Signed-off-by: Wei Yongjun <weiyj.lk@gmail.com>
Acked-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
| |
Add support on the rx DMA path to allow recovery when the DMA engine
responds with an error status and aborts all the subsequent ops.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Acked-by: Allen Hubbe <Allen.Hubbe@emc.com>
Cc: Jon Mason <jdmason@kudzu.us>
Cc: linux-ntb@googlegroups.com
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| |
Add support on the tx DMA path to allow recovery when the DMA engine
responds with an error status and aborts all the subsequent ops.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Acked-by: Allen Hubbe <Allen.Hubbe@emc.com>
Cc: Jon Mason <jdmason@kudzu.us>
Cc: linux-ntb@googlegroups.com
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
| |
Clean up duplicated expression by replacing it with the equivalent local
variable pdev.
Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
| |
It will be useful to know the hardware configured BAR size to diagnose
issues with NTB memory windows.
Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
| |
When the link went down, the link_is_up flag did not return to
false. This could have caused some subtle corner-case bugs
when the link goes up and down quickly.
Once that was fixed, a race was found when the link was brought
down and then immediately back up: the link_cleanup work would
occasionally be scheduled after the next link-up event. This would
cancel the link_work that was supposed to occur and leave ntb_perf
in an unusable state.
To fix this, we get rid of the link_cleanup work and put its actions
directly in the link_down event.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
| |
This commit adds a debugfs 'count' file to ntb_pingpong. This is so
testing with ntb_pingpong can be automated beyond just checking the
logs for pong messages.
The count file returns a number which increments every pong. The
counter can be cleared by writing a zero.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Acked-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
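A hypothetical sketch of such a counter (names invented; the actual file implementation may differ):

    #include <linux/atomic.h>
    #include <linux/debugfs.h>

    static atomic_t pp_count = ATOMIC_INIT(0);

    /* Expose a pong counter that a test script can poll; writing 0 to the
     * file clears it. The counter would be incremented from the
     * doorbell/pong handler. */
    static void pp_setup_debugfs(struct dentry *dir)
    {
            debugfs_create_atomic_t("count", 0600, dir, &pp_count);
    }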
| |
In order to script with ntb_tool more reliably, it's useful to
have a link file to check the link status so that the script
doesn't use the other files until the link is up.
This commit adds a 'link' file to the debugfs directory which reads
as a boolean (Y or N) depending on the link status. Writing to the file
changes the link state using ntb_link_enable or ntb_link_disable.
A 'link_event' file is also provided so an application can block until
the link changes to the desired state. If the user writes a 1, it will
block until the link is up. If the user writes a 0, it will block until
the link is down.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Acked-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
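A hedged sketch of how the blocking wait could look (names and details are assumptions, not the actual ntb_tool code):

    #include <linux/wait.h>

    static DECLARE_WAIT_QUEUE_HEAD(tool_link_wq);
    static bool tool_link_is_up;

    /* Block until the link reaches the requested state (1 = up, 0 = down),
     * or until the caller is interrupted by a signal. */
    static int tool_wait_link(bool want_up)
    {
            return wait_event_interruptible(tool_link_wq,
                                            tool_link_is_up == want_up);
    }

    /* The NTB link-event callback would update tool_link_is_up and then
     * call wake_up(&tool_link_wq). */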
| |
In order to make the interface closer to the raw NTB API, this commit
changes memory windows so they are not initialized on link up.
Instead, the 'peer_trans*' debugfs files are introduced. When read,
they return information provided by ntb_mw_get_range. When written,
they create a buffer and initialize the memory window. The
value written is taken as the requested size of the buffer (which
is then rounded for alignment). Writing a value of zero frees the buffer
and tears down the memory window translation. The 'peer_mw*' file is
only created once the memory window translation is set up by the user.
Additionally, it was noticed that the read and write functions for the
'peer_mw*' files should have checked for a NULL pointer.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Acked-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
| |
Instead of returning immediately with an error when the link is
down, wait for the link to come up (or the user sends a SIGINT).
This is to make scripting ntb_perf easier.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
| |
Instead of having to watch logs, allow the results to be retrieved
by reading back the run file. This file will return "running" when
the test is running and nothing if no tests have been run yet.
It returns 1 line per thread, and will display an error message if the
corresponding thread returns an error.
With the above change, the pr_info calls that returned the results are
then changed to pr_debug calls.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
| |
This commit accomplishes a few things:
1) Properly prevent multiple sets of threads from running at once using
a mutex. Lots of race issues existed with the thread_cleanup.
2) The mutex allows us to ensure that threads are finished before
tearing down the device or module.
3) Don't use kthread_stop when the threads can exit by themselves, as
this is counter-indicated by the kthread_create documentation. Threads
now wait for kthread_stop to occur.
4) Writing to the run file now blocks until the threads are complete.
The test can then be safely interrupted by a SIGINT.
Also, while I was at it:
5) debugfs_run_write shouldn't return 0 in the early check cases as this
could cause debugfs_run_write to loop undesirably.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
| |
When debugging performance problems, if some issue causes the ntb
hardware to be significantly slower than expected, ntb_perf will
hang, requiring a reboot, because it only schedules once every 4GB copied.
Instead, schedule based on jiffies so it will not hang the CPU if
the transfer is slow.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
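A hypothetical sketch of time-based yielding in a copy loop (names, the HZ/4 interval, and the chunking are illustrative only; total is assumed to be a multiple of chunk):

    #include <linux/io.h>
    #include <linux/jiffies.h>
    #include <linux/sched.h>

    static void perf_copy_loop(void __iomem *dst, const void *src,
                               size_t chunk, size_t total)
    {
            unsigned long last = jiffies;
            size_t done;

            for (done = 0; done < total; done += chunk) {
                    memcpy_toio(dst + done, src + done, chunk);

                    /* Yield based on elapsed time, not bytes copied, so a
                     * slow link cannot monopolize the CPU. */
                    if (time_after(jiffies, last + HZ / 4)) {
                            schedule();
                            last = jiffies;
                    }
            }
    }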
| |
I'm working on hardware that currently has a limited number of
scratchpad registers and ntb_ndev fails with no clue as to why. I
feel it is better to fail early and provide a reasonable error message
than to fail later on.
The same is done to ntb_perf, but it doesn't currently require enough
spads to actually fail. I've also removed the unused SPAD_MSG and
SPAD_ACK enums so that MAX_SPAD accurately reflects the number of
spads used.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
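A hedged sketch of what such an early check might look like in a client's probe path ('need' standing in for the driver's MAX_SPAD):

    #include <linux/device.h>
    #include <linux/errno.h>
    #include <linux/ntb.h>

    static int tool_check_spads(struct ntb_dev *ntb, int need)
    {
            /* Fail early with a clear message instead of misbehaving later. */
            if (ntb_spad_count(ntb) < need) {
                    dev_err(&ntb->dev, "only %d scratchpads, need %d\n",
                            ntb_spad_count(ntb), need);
                    return -EINVAL;
            }
            return 0;
    }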
| |
We allocate some memory window buffers when the link comes up, then we
provide debugfs files to read/write each side of the link.
This is useful for debugging the mapping when writing new drivers.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Acked-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
| |
On my system, dma_alloc_coherent won't produce memory anywhere
near the size of the BAR. So I needed a way to limit this.
It's pretty much copied straight from ntb_transport.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
| |
Currently we only allocate a fixed default number of descriptors for the tx
and rx sides. We should dynamically resize it to the number of descriptors
residing in the transport rings. We should know the number of transmit
descriptors at initialization. We will allocate the default number of
descriptors for the receive side and allocate additional ones when we know
the actual max entries for receive.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Acked-by: Allen Hubbe <allen.hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
| |
On hardware with 32 scratchpad registers, the spad field in ntb_tool
could get chopped off at the end. The maximum buffer size is increased
from 256 to 15 times the number of scratchpads.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Acked-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
| |
If you tried to write two spads in one line, as per the example:
root@peer# echo '0 0x01010101 1 0x7f7f7f7f' > $DBG_DIR/peer_spad
then the CPU would freeze in an infinite loop.
This wasn't immediately obvious, but 'pos' was not advancing through the
buffer, so after reading the second pair of values 'pos' would once
again be 3 and it would re-read the second pair of values ad infinitum.
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Acked-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
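A hedged sketch of a loop that does advance through the buffer (the helper name and the destination array are hypothetical; the real file writes the scratchpad registers instead):

    #include <linux/kernel.h>
    #include <linux/types.h>

    static void parse_spad_pairs(const char *buf, u32 *spads, int count)
    {
            int idx, consumed, pos = 0;
            u32 val;

            /* %n records how many bytes this sscanf() consumed; adding it
             * to 'pos' steps past the pair just read, which is exactly
             * what the broken loop failed to do. */
            while (sscanf(buf + pos, "%d %x%n", &idx, &val, &consumed) == 2) {
                    if (idx >= 0 && idx < count)
                            spads[idx] = val;
                    pos += consumed;
            }
    }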
| |
Kernel zero day testing warned about address space confusion. A virtual
iomem address was used where a physical address is expected. The
offending functions implement an optional part of the api, so they are
removed. They can be added later, after testing.
Fixes: a1b3695820aa490e58915d720a1438069813008b
Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Acked-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
| |
The cleanup routine that runs when we fail to allocate a kthread was not
cleaning up all the threads, only the same one over and over again.
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Acked-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
| |
kthread_create_on_node() returns error pointers, never NULL. Fix the check
so it handles errors correctly.
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
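A minimal sketch of the corrected pattern (the wrapper name is invented):

    #include <linux/err.h>
    #include <linux/kthread.h>
    #include <linux/sched.h>

    static int start_worker(int (*fn)(void *), void *data, int node)
    {
            struct task_struct *task;

            /* kthread_create_on_node() reports failure as ERR_PTR(-errno),
             * never NULL, so the check must use IS_ERR(). */
            task = kthread_create_on_node(fn, data, node, "ntb_worker");
            if (IS_ERR(task))
                    return PTR_ERR(task);

            wake_up_process(task);
            return 0;
    }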
| |
kmalloc can fail and we should check for NULL before using the pointer
returned by kmalloc.
Signed-off-by: Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
| |
The perf tool is missing the setup of the translation window. Add a call
to set up the translation window for the backing memory.
Signed-off-by: John Kading <john.kading@gd-ms.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
| |
Instead of repeatedly trying to go through the init routine when we aren't
able to allocate memory, we should just stop and go down.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
| |
We can leave a tasklet spinning forever if we disable it during qp
shutdown while the tasklets are still being kicked off. This should
hopefully avoid that race condition.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reported-by: Alex Depoutovitch <alex@pernixdata.com>
Tested-by: Alex Depoutovitch <alex@pernixdata.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
| |
The ntb driver assigns between pointers and __iomem tokens, and
also casts them to 64-bit integers, which results in compiler
warnings on 32-bit systems:
drivers/ntb/test/ntb_perf.c: In function 'perf_copy':
drivers/ntb/test/ntb_perf.c:213:10: error: cast from pointer to integer of different size [-Werror=pointer-to-int-cast]
vbase = (u64)(u64 *)mw->vbase;
^
drivers/ntb/test/ntb_perf.c:214:14: error: cast from pointer to integer of different size [-Werror=pointer-to-int-cast]
dst_vaddr = (u64)(u64 *)dst;
^
This adds __iomem annotations where needed and changes the temporary
variables to iomem pointers to avoid casting them to u64. I did not
see the problem in linux-next earlier, but it show showed up in
4.5-rc1.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Fixes: 8a7b6a778a85 ("ntb: ntb perf tool")
Signed-off-by: Jon Mason <jdmason@kudzu.us>
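A brief sketch of the portable pattern (the function name is illustrative):

    #include <linux/io.h>
    #include <linux/types.h>

    /* Keep MMIO pointers typed as void __iomem * and use the io accessors
     * rather than casting the pointer through u64, which truncates on
     * 32-bit builds and trips sparse/compiler warnings. */
    static void perf_copy_chunk(void __iomem *dst, const void *src, size_t len)
    {
            memcpy_toio(dst, src, len);
    }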
| |
The parameter given to the macro is replaced throughout the macro as
it is evaluated. The intent is that the macro parameter should replace
only the first parameter to container_of(). However, the way the
macro was written, it would also inadvertently replace a structure field
name. If an argument of any other name is given to the macro, it will
fail to compile if the structure does not contain a field of the same
name. At worst, it will compile and hide improper access of an
unintended field in the structure.
Change the macro parameter name, so it does not conflict with the
structure field name.
Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Acked-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
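A generic illustration of the hazard, using an invented struct foo rather than the actual NTB macro:

    #include <linux/kernel.h>    /* container_of() */
    #include <linux/list.h>

    struct foo {
            struct list_head entry;
    };

    /* Broken: the parameter shares its name with the field, so the field
     * reference is rewritten too. Calling broken_to_foo(entry) happens to
     * work; calling it with any other argument name fails to compile (or,
     * worse, silently computes the offset of a different field with that
     * name). */
    #define broken_to_foo(entry) container_of(entry, struct foo, entry)

    /* Fixed: a parameter name that cannot collide with a field name. */
    #define to_foo(__entry) container_of(__entry, struct foo, entry)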
| |
This adds support for AMD's PCI-Express Non-Transparent Bridge
(NTB) device on the Zeppelin platform. The driver connects to the
standard NTB sub-system interface, with modifications to add hooks
for power management in a separate patch. The AMD NTB device has 3
memory windows, 16 doorbells, and 16 scratch-pad registers, and supports
up to 16 PCIe lanes running at Gen3 speeds.
Signed-off-by: Xiangliang Yu <Xiangliang.Yu@amd.com>
Reviewed-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
| |
Provide raw performance data via a tool that directly accesses data from
the NTB without any software overhead. This allows measurement of the
hardware performance limit. In revision one we are only doing
single-direction CPU and DMA writes. Eventually we will provide
bi-directional writes.
Measuring NTB performance with the DMA engine does not measure the raw
performance of the DMA engine over NTB due to software overhead, but it
should show the peak performance achievable through the Linux DMA driver.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Tested-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
| |
The transport right now does not handle the case where we run out of DMA
descriptors. We just fail when we do not succeed. Add code to retry the
DMA engine for a while instead of instantly falling back to a CPU
copy.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Reviewed-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
| |
The lower bits read from a BAR register will contain property bits
that we do not care about. Clear those so that we can use the BAR
values for limit and xlat registers.
Reported-by: Conrad Meyer <cem@freebsd.org>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
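A hedged sketch of the masking (the mask below covers the low property bits of a memory BAR; the driver's actual register handling may differ):

    #include <linux/types.h>

    static u64 bar_base_only(u64 bar)
    {
            /* The low bits of a memory BAR encode type/prefetchable flags,
             * not address bits; clear them before programming limit/xlat
             * registers. */
            return bar & ~(u64)0xf;
    }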
| |
The transmit overrun avoidance error path in ntb_process_tx accidentally
swapped the first two values being passed to the tx_handler client.
This could result in crashes in the ntb_netdev (or other out-of-tree NTB
clients).
Reported-by: Alex Depoutovitch <alex@pernixdata.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
| |
resource_size_t may be 32-bit wide on some architectures, which causes
this warning when building the NTB code:
drivers/ntb/ntb_transport.c: In function 'ntb_transport_link_work':
drivers/ntb/ntb_transport.c:828:46: warning: right shift count >= width of type [-Wshift-count-overflow]
The warning is harmless but can be avoided by using the upper_32_bits()
macro.
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Fixes: e26a5843f7f5 ("NTB: Split ntb_hw_intel and ntb_transport drivers")
Signed-off-by: Jon Mason <jdmason@kudzu.us>
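A small sketch of the portable split (the helper name is invented):

    #include <linux/kernel.h>
    #include <linux/types.h>

    /* resource_size_t may be only 32 bits wide, so an explicit ">> 32"
     * triggers -Wshift-count-overflow; upper_32_bits()/lower_32_bits()
     * are safe for either width. */
    static void split_mw_size(resource_size_t size, u32 *hi, u32 *lo)
    {
            *hi = upper_32_bits(size);
            *lo = lower_32_bits(size);
    }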
| |
There is no need for the upstream and downstream addresses to be different
for the NTB configs. Go to using a single set of addresses. It is still
possible to configure them differently using the module parameter override,
however.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Acked and Tested-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
| |
An order-of-operations issue with the QP num and MW count would
result in the receive buffer pointer being invalid if there is more
than 1 MW. Corrected with parentheses to enforce the proper order of
operations.
Reported-by: John I. Kading <John.Kading@gd-ms.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
| |
These variables were not used anywhere, so remove them.
Signed-off-by: Sudip Mukherjee <sudip@vectorindia.org>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
| |
We were accessing nt->mw_vec after freeing it. Fix the error path so
that we free nt->mw_vec after we have finished using it.
Signed-off-by: Sudip Mukherjee <sudip@vectorindia.org>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
| |
smatch detected an issue in the function ntb_transport_max_size() where
we could be dereferencing a dma channel pointer when it is NULL.
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
| |
The range check must exclude the upper bound.
Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
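For illustration, the shape of the corrected check (names are hypothetical):

    #include <linux/errno.h>

    static int check_mw_idx(int idx, int count)
    {
            /* Valid indexes are 0 <= idx < count; a check written as
             * "idx > count" would wrongly accept idx == count, one past
             * the end. */
            if (idx < 0 || idx >= count)
                    return -EINVAL;
            return 0;
    }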
| |
Check that b2b_mw_idx is in range of the number of memory windows when
initializing the device. The workaround is considered to be in effect
only if the device b2b_idx is exactly UINT_MAX, instead of any index
past the last memory window.
Only print B2B MW workaround information in debugfs if the workaround is
in effect.
Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
| |
Allocate two DMA channels, one for TX operation and one for RX
operation, instead of having one DMA channel for everything. This
provides slightly better performance, and also will make error handling
cleaner later on.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
| |
The dma_sync_wait can hurt the performance of workloads mixed with both
large and small frames. Large frames will be copied using the dma
engine. Small frames will be copied by the cpu. The dma_sync_wait
prevents the cpu and dma engine copying in parallel.
In the period where the cpu is copying, the dma engine is stopped. The
dma engine is not doing any useful work to copy large frames during that
time, and there is additional time to restart the dma engine for the next
large frame. This will decrease the throughput for the portion of a
workload with large frames.
In the period where the dma engine is copying, the cpu is held up
waiting for dma to complete. The small frames processing will be
delayed until the dma is complete. The RX frames are completed
in-order, and the processing of small frames takes very little time, so
dma_sync_wait may have an insignificant impact on the response time of
frames. The more significant impact is to the system, because the delay
in dma_sync_wait is implemented as busy non-blocking wait. This can
prevent the delayed core from doing any useful work, even if it could be
processing work for other drivers, unrelated to transport RX processing.
After applying the earlier patch to fix out-of-order RX acknowledgement,
the dma_sync_wait is no longer necessary. Remove it, so that cpu memcpy
will proceed immediately for small frames, in parallel with ongoing dma
for large frames. Do not hold up the cpu from doing work while dma is
in progress. The prior fix will continue to ensure in-order completion
of the RX frames to the upper layer, and in-order delivery of the RX
acknowledgement.
Signed-off-by: Allen Hubbe <Allen.Hubbe@emc.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
| |
Make QP stats info more readable for debugging purposes. Also add an
entry to indicate whether DMA is being used.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
| |
The list should be appended at the bottom and not the top, in order to
ensure transports are provided to clients in the same order as ntb
devices are discovered.
Signed-off-by: Dave Jiang <dave.jiang@intel.com>
Signed-off-by: Jon Mason <jdmason@kudzu.us>
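A small sketch of the difference (the list and entry names are placeholders):

    #include <linux/list.h>

    static LIST_HEAD(transport_list);

    static void transport_register(struct list_head *entry)
    {
            /* list_add() inserts at the head, so the most recently
             * discovered device would be handed to clients first;
             * list_add_tail() preserves discovery order. */
            list_add_tail(entry, &transport_list);
    }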