not needed anymore
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
|
shouldn't be needed now
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
|
turn op->waitq into struct completion...
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
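
A minimal sketch of that conversion, assuming a simplified stand-in for the op structure (struct op_sketch and the helpers around it are illustrative, not the actual orangefs code):

#include <linux/completion.h>

struct op_sketch {
	/* before: wait_queue_head_t waitq, woken via wake_up_interruptible() */
	struct completion waitq;	/* after: a one-shot completion */
};

static void op_init(struct op_sketch *op)
{
	init_completion(&op->waitq);
}

/* daemon side: replaces wake_up_interruptible(&op->waitq) */
static void op_mark_serviced(struct op_sketch *op)
{
	complete(&op->waitq);
}

/* submitter side: replaces an open-coded wait_event loop */
static int op_wait(struct op_sketch *op)
{
	return wait_for_completion_interruptible(&op->waitq);
}

A completion records that it has already been completed, so a wakeup that fires before the waiter arrives is not lost; that is the usual motivation for this conversion.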
|
Make cancels reuse the aborted read/write op, to make sure they do not
fail on lack of memory.
Don't issue a cancel unless the daemon has seen our read/write, has not
replied and isn't being shut down.
If cancel *is* issued, don't wait for it to complete; stash the slot
in there and just have it freed when cancel is finally replied to or
purged (and delay dropping the reference until then, obviously).
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
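
A hedged sketch of that policy; every helper named here (op_seen_by_daemon(), op_replied(), daemon_is_down(), reinit_op_as_cancel(), submit_op_nowait()) is a hypothetical stand-in, not the real orangefs API:

/* all helpers below are illustrative names, not real orangefs calls */
static void cancel_aborted_op(struct op_sketch *op)
{
	/* no cancel unless the daemon has seen the read/write,
	 * has not replied, and is not being shut down */
	if (!op_seen_by_daemon(op) || op_replied(op) || daemon_is_down())
		return;

	/* reuse the aborted read/write op itself, so the cancel
	 * cannot fail on lack of memory */
	reinit_op_as_cancel(op);

	/* fire and forget: the buffer slot and the op reference stay
	 * stashed in the op, to be released only when the cancel is
	 * finally replied to or purged */
	submit_op_nowait(op);
}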
|
A couple of caches were no longer needed:
- iov_iter improvements to orangefs_devreq_write_iter eliminated
the need for the dev_req_cache.
- removal (months ago) of the old AIO code eliminated the need
for the kiocb_cache.
Also, deobfuscate the use of GFP_KERNEL when calling
kmem_cache_(z)alloc for the remaining caches.
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
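
The "deobfuscation" amounts to writing GFP_KERNEL at the call site instead of hiding it behind a wrapper macro; a sketch, with an invented macro name standing in for the old indirection:

#include <linux/slab.h>

/* before (macro name hypothetical):
 *	#define OP_ALLOC_FLAGS GFP_KERNEL
 *	op = kmem_cache_alloc(op_cache, OP_ALLOC_FLAGS);
 *
 * after: the allocation context is visible right where the call is made
 */
static struct op_sketch *op_alloc(struct kmem_cache *op_cache)
{
	return kmem_cache_zalloc(op_cache, GFP_KERNEL);
}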
|
fold orangefs_op_initialize() in there, don't bother locking something
nobody else could've seen yet, use kmem_cache_zalloc() instead of
explicit memset()...
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
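
A sketch of the resulting allocator shape, reusing the op_sketch stand-in from above and assuming it also carries a spinlock (the real structure has many more fields):

#include <linux/completion.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

static struct op_sketch *op_alloc(struct kmem_cache *cache)
{
	struct op_sketch *op;

	/* kmem_cache_zalloc() replaces kmem_cache_alloc() followed by
	 * an explicit memset(op, 0, sizeof(*op)) */
	op = kmem_cache_zalloc(cache, GFP_KERNEL);
	if (!op)
		return NULL;

	/* former orangefs_op_initialize() body, folded in; no locking
	 * here, since nobody else can see the object yet */
	spin_lock_init(&op->lock);
	init_completion(&op->waitq);
	return op;
}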
|
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
|
* create with refcount 1 (see the kref sketch after this entry)
* make op_release() decrement and free if zero (i.e. the old put_op()
  has become that)
* mark when the submitter has given up waiting; from that point nobody
  else can move the op between the lists, change its state, etc.
* have the daemon's read/write_iter grab a reference when picking an
  op and *always* give it up in the end
* don't put an op into the hash until we know it has been successfully
  passed to the daemon
* move op->lock _lower_ than htab_in_progress_lock (and make sure
  to take it in purge_inprogress_ops())
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
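
The rules above map onto the usual kref pattern; a sketch with illustrative names (the real code may open-code the atomic count):

#include <linux/kref.h>
#include <linux/slab.h>

struct op_ref_sketch {
	struct kref ref;
	bool waiter_gave_up;	/* submitter stopped waiting; op is frozen */
};

static struct op_ref_sketch *op_alloc(void)
{
	struct op_ref_sketch *op = kzalloc(sizeof(*op), GFP_KERNEL);

	if (op)
		kref_init(&op->ref);	/* created with refcount 1 */
	return op;
}

static void op_free(struct kref *ref)
{
	kfree(container_of(ref, struct op_ref_sketch, ref));
}

/* what the old put_op() has become: decrement, free on zero */
static void op_release(struct op_ref_sketch *op)
{
	kref_put(&op->ref, op_free);
}

/* daemon read/write_iter: pin the op while working on it and always
 * drop that reference at the end, whatever the outcome */
static void daemon_handle(struct op_ref_sketch *op)
{
	kref_get(&op->ref);
	/* ... copy the op to/from userspace here ... */
	op_release(op);
}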
|
The op structure's ref_count member hasn't got anything to do with
asynchronous I/O.
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
|
There were previously MAX_ALIGNED_DEV_REQ_(UP|DOWN)SIZE macros, which
evaluated to MAX_DEV_REQ_(UP|DOWN)SIZE+8. As it is unclear what the
extra 8 bytes were for, other than creating a situation where we accept
more data than we can parse, the macros are removed.
Signed-off-by: Mike Marshall <hubcap@omnibond.com>
Signed-off-by: Martin Brandenburg <martin@omnibond.com>
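
For illustration, the removed pattern looked like this (the base value here is made up):

/* kept: the size the parser can actually handle */
#define MAX_DEV_REQ_UPSIZE		1024	/* hypothetical value */

/* removed: accepted 8 bytes more than could be parsed, for no
 * discernible reason */
#define MAX_ALIGNED_DEV_REQ_UPSIZE	(MAX_DEV_REQ_UPSIZE + 8)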
|
Also changed references within source files that referred to
header files whose names had changed.
Signed-off-by: Mike Marshall <hubcap@omnibond.com>