| author | Zach Brown <zach.brown@oracle.com> | 2010-05-18 15:44:50 -0700 |
|---|---|---|
| committer | Andy Grover <andy.grover@oracle.com> | 2010-09-08 18:15:16 -0700 |
| commit | 89bf9d4158b5a1b6bd00960eb2e47601ec8cc138 | |
| tree | e11c5ea0b69fb1bc53a03f83570e160dbe3b005f /net/rds/ib_send.c | |
| parent | a46ca94e7fb2c93a59e08b42fd77d8c478fda5fc | |
RDS/IB: get the xmit max_sge from the RDS IB device on the connection
rds_ib_xmit_rdma() was calling ib_get_client_data() to reach the rds_ib_device just to read the max_sge for the transmit. This patch instead reads it directly off the rds_ibdev that is stored on the connection.
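In short, the per-send device lookup becomes a plain pointer dereference (condensed from the diff below):

```c
/* Before: ask the IB core for the rds_ib_device on every RDMA transmit. */
rds_ibdev = ib_get_client_data(ic->i_cm_id->device, &rds_ib_client);
i = ceil(op->op_count, rds_ibdev->max_sge);

/* After: read max_sge from the rds_ib_device already cached on the connection. */
u32 max_sge = ic->rds_ibdev->max_sge;
i = ceil(op->op_count, max_sge);
```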
The current code won't free the rds_ibdev until all the IB connections that use
it are freed. So it's safe to reference the rds_ibdev this way. This also makes it easier to support proper reference counting of the rds_ibdev struct in the future.
As an additional bonus, this gets rid of the performance hit of calling into the IB stack to look up the rds_ibdev. The current implementation in the IB stack acquires an interrupt-blocking spinlock to protect the registration of client callback data.
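For context, the lookup the removed call performs looks roughly like the sketch below (a simplified approximation of the 2010-era IB core client-data lookup; the exact field and lock names are assumptions, not taken from this patch):

```c
/* Sketch only: approximates how ib_get_client_data() resolves client data.
 * It walks the device's per-client data list under a spinlock taken with
 * interrupts disabled, which is what makes calling it on every RDMA
 * transmit needlessly expensive.
 */
void *ib_get_client_data(struct ib_device *device, struct ib_client *client)
{
	struct ib_client_data *context;
	void *ret = NULL;
	unsigned long flags;

	spin_lock_irqsave(&device->client_data_lock, flags);
	list_for_each_entry(context, &device->client_data_list, list)
		if (context->client == client) {
			ret = context->data;
			break;
		}
	spin_unlock_irqrestore(&device->client_data_lock, flags);

	return ret;
}
```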
Signed-off-by: Zach Brown <zach.brown@oracle.com>
Diffstat (limited to 'net/rds/ib_send.c')
-rw-r--r-- | net/rds/ib_send.c | 12 |
1 file changed, 5 insertions, 7 deletions
```diff
diff --git a/net/rds/ib_send.c b/net/rds/ib_send.c
index 209dbc6d159d..3f91e794eae9 100644
--- a/net/rds/ib_send.c
+++ b/net/rds/ib_send.c
@@ -806,10 +806,10 @@ int rds_ib_xmit_rdma(struct rds_connection *conn, struct rm_rdma_op *op)
 	struct rds_ib_send_work *first;
 	struct rds_ib_send_work *prev;
 	struct ib_send_wr *failed_wr;
-	struct rds_ib_device *rds_ibdev;
 	struct scatterlist *scat;
 	unsigned long len;
 	u64 remote_addr = op->op_remote_addr;
+	u32 max_sge = ic->rds_ibdev->max_sge;
 	u32 pos;
 	u32 work_alloc;
 	u32 i;
@@ -818,8 +818,6 @@ int rds_ib_xmit_rdma(struct rds_connection *conn, struct rm_rdma_op *op)
 	int ret;
 	int num_sge;
 
-	rds_ibdev = ib_get_client_data(ic->i_cm_id->device, &rds_ib_client);
-
 	/* map the op the first time we see it */
 	if (!op->op_mapped) {
 		op->op_count = ib_dma_map_sg(ic->i_cm_id->device,
@@ -839,7 +837,7 @@ int rds_ib_xmit_rdma(struct rds_connection *conn, struct rm_rdma_op *op)
 	 * Instead of knowing how to return a partial rdma read/write we insist that there
 	 * be enough work requests to send the entire message.
 	 */
-	i = ceil(op->op_count, rds_ibdev->max_sge);
+	i = ceil(op->op_count, max_sge);
 
 	work_alloc = rds_ib_ring_alloc(&ic->i_send_ring, i, &pos);
 	if (work_alloc != i) {
@@ -867,9 +865,9 @@ int rds_ib_xmit_rdma(struct rds_connection *conn, struct rm_rdma_op *op)
 		send->s_wr.wr.rdma.remote_addr = remote_addr;
 		send->s_wr.wr.rdma.rkey = op->op_rkey;
 
-		if (num_sge > rds_ibdev->max_sge) {
-			send->s_wr.num_sge = rds_ibdev->max_sge;
-			num_sge -= rds_ibdev->max_sge;
+		if (num_sge > max_sge) {
+			send->s_wr.num_sge = max_sge;
+			num_sge -= max_sge;
 		} else {
 			send->s_wr.num_sge = num_sge;
 		}
```
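To make the work-request sizing above concrete: ceil() is RDS's round-up-division helper, so with, say, 10 mapped scatterlist entries and a device max_sge of 4 (hypothetical numbers, not from this patch), three work requests are allocated and the SGEs are split 4/4/2. A standalone sketch of that arithmetic:

```c
#include <stdio.h>

/* Round-up division, equivalent in effect to RDS's ceil() helper. */
static unsigned int ceil_div(unsigned int x, unsigned int y)
{
	return (x + y - 1) / y;
}

int main(void)
{
	unsigned int op_count = 10;	/* hypothetical mapped scatterlist entries */
	unsigned int max_sge = 4;	/* hypothetical device SGE limit */
	unsigned int num_sge = op_count;
	unsigned int wrs = ceil_div(op_count, max_sge);
	unsigned int i;

	printf("%u work requests needed\n", wrs);

	/* Mirror the per-WR SGE split done in rds_ib_xmit_rdma(). */
	for (i = 0; i < wrs; i++) {
		unsigned int this_wr = num_sge > max_sge ? max_sge : num_sge;

		printf("WR %u carries %u SGEs\n", i, this_wr);
		num_sge -= this_wr;
	}
	return 0;
}
```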