path: root/include/trace/events/io_uring.h
* io_uring: trace local task work run (Dylan Yudaken, 2022-09-21, 1 file, -0/+29)
    Add tracing for io_run_local_task_work.

    Signed-off-by: Dylan Yudaken <dylany@fb.com>
    Link: https://lore.kernel.org/r/20220830125013.570060-8-dylany@fb.com
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
* io_uring: Add tracepoint for short writes (Stefan Roesch, 2022-07-24, 1 file, -0/+25)
    This adds the io_uring_short_write tracepoint to io_uring. A short write is
    issued if not all pages that are required for a write are in the page cache
    and the async buffered writes have to return EAGAIN.

    Signed-off-by: Stefan Roesch <shr@fb.com>
    Link: https://lore.kernel.org/r/20220616212221.2024518-13-shr@fb.com
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
* io_uring: fix io_uring_cqe_overflow trace format (Dylan Yudaken, 2022-07-24, 1 file, -1/+1)
    Make the trace format consistent with io_uring_complete for cflags.

    Signed-off-by: Dylan Yudaken <dylany@fb.com>
    Link: https://lore.kernel.org/r/20220630091231.1456789-12-dylany@fb.com
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
* io_uring: add trace event for running task work (Dylan Yudaken, 2022-07-24, 1 file, -0/+30)
    This is useful for investigating if task_work is batching.

    Signed-off-by: Dylan Yudaken <dylany@fb.com>
    Link: https://lore.kernel.org/r/20220622134028.2013417-8-dylany@fb.com
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
* io_uring: clean up tracing events (Pavel Begunkov, 2022-07-24, 1 file, -85/+57)
    We have lots of trace events accepting an io_uring request and wanting to
    print some of its fields like user_data, opcode, flags and so on. However,
    as trace points were unaware of io_uring structures, we had to pass all the
    fields as arguments. Teach trace/events/io_uring.h about struct io_kiocb
    and stop the misery of passing a horde of arguments to trace helpers.

    Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
    Link: https://lore.kernel.org/r/40ff72f92798114e56d400f2b003beb6cde6ef53.1655384063.git.asml.silence@gmail.com
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
* io_uring: move io_uring_get_opcode out of TP_printk (Dylan Yudaken, 2022-06-23, 1 file, -7/+35)
    The TP_printk macros are not supposed to use custom code ([1]) or else tools
    such as perf cannot use these events. Convert the opcode string
    representation to use the __string wiring that the event framework
    provides ([2]).

    [1]: https://lwn.net/Articles/379903/
    [2]: https://lwn.net/Articles/381064/

    Fixes: 033b87d24f72 ("io_uring: use the text representation of ops in trace")
    Signed-off-by: Dylan Yudaken <dylany@fb.com>
    Link: https://lore.kernel.org/r/20220623083743.2648321-1-dylany@fb.com
    [axboe: fixup spurious removal of sq_thread assignment]
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
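    The "__string wiring" referred to here resolves the string once when the
    event is recorded, so TP_printk() is left with plain formatting that perf
    can parse. A rough sketch of the pattern, using an illustrative event name
    rather than the header's real events:

        /* Sketch only: hypothetical event name; the helpers __string(),
         * __assign_str() and __get_str() are the trace framework's own. */
        TRACE_EVENT(io_uring_example,

                TP_PROTO(u8 opcode),

                TP_ARGS(opcode),

                TP_STRUCT__entry(
                        __field(u8, opcode)
                        __string(op_str, io_uring_get_opcode(opcode))
                ),

                TP_fast_assign(
                        __entry->opcode = opcode;
                        /* resolve the text here, at record time... */
                        __assign_str(op_str, io_uring_get_opcode(opcode));
                ),

                /* ...so the printk format contains no custom code */
                TP_printk("opcode %d (%s)", __entry->opcode, __get_str(op_str))
        );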
* Merge tag 'for-5.19/io_uring-passthrough-2022-05-22' of git://git.kernel.dk/linux-block (Linus Torvalds, 2022-05-23, 1 file, -4/+14)
    Pull io_uring NVMe command passthrough from Jens Axboe:
    "On top of everything else, this adds support for passthrough for io_uring.
    The initial feature for this is NVMe passthrough support, which allows
    non-filesystem based IO commands and admin commands.

    To support this, io_uring grows support for SQE and CQE members that are
    twice as big, allowing to pass in a full NVMe command without having to
    copy data around. And to complete with more than just a single 32-bit value
    as the output"

    * tag 'for-5.19/io_uring-passthrough-2022-05-22' of git://git.kernel.dk/linux-block: (22 commits)
      io_uring: cleanup handling of the two task_work lists
      nvme: enable uring-passthrough for admin commands
      nvme: helper for uring-passthrough checks
      blk-mq: fix passthrough plugging
      nvme: add vectored-io support for uring-cmd
      nvme: wire-up uring-cmd support for io-passthru on char-device.
      nvme: refactor nvme_submit_user_cmd()
      block: wire-up support for passthrough plugging
      fs,io_uring: add infrastructure for uring-cmd
      io_uring: support CQE32 for nop operation
      io_uring: enable CQE32
      io_uring: support CQE32 in /proc info
      io_uring: add tracing for additional CQE32 fields
      io_uring: overflow processing for CQE32
      io_uring: flush completions for CQE32
      io_uring: modify io_get_cqe for CQE32
      io_uring: add CQE32 completion processing
      io_uring: add CQE32 setup processing
      io_uring: change ring size calculation for CQE32
      io_uring: store add. return values for CQE32
      ...
| * io_uring: add tracing for additional CQE32 fields (Stefan Roesch, 2022-05-09, 1 file, -4/+14)
    This adds tracing for the extra1 and extra2 fields.

    Co-developed-by: Jens Axboe <axboe@kernel.dk>
    Signed-off-by: Stefan Roesch <shr@fb.com>
    Reviewed-by: Kanchan Joshi <joshi.k@samsung.com>
    Link: https://lore.kernel.org/r/20220426182134.136504-10-shr@fb.com
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
* | Merge tag 'for-5.19/io_uring-socket-2022-05-22' of git://git.kernel.dk/linux-block (Linus Torvalds, 2022-05-23, 1 file, -15/+21)
    Pull io_uring socket() support from Jens Axboe:
    "This adds support for socket(2) for io_uring. This is handy when using
    direct / registered file descriptors with io_uring.

    Outside of those two patches, a small series from Dylan on top that
    improves the tracing by providing a text representation of the opcode
    rather than needing to decode this by reading the header file every time.
    That sits in this branch as it was the last opcode added (until it
    wasn't...)"

    * tag 'for-5.19/io_uring-socket-2022-05-22' of git://git.kernel.dk/linux-block:
      io_uring: use the text representation of ops in trace
      io_uring: rename op -> opcode
      io_uring: add io_uring_get_opcode
      io_uring: add type to op enum
      io_uring: add socket(2) support
      net: add __sys_socket_file()
| * io_uring: use the text representation of ops in trace (Dylan Yudaken, 2022-04-28, 1 file, -15/+21)
    It is annoying to translate opcodes to text when tracing io_uring. Use the
    io_uring_get_opcode function instead to get the text representation.

    A downside here might have been that if the opcode is invalid it will not
    be obvious; however the opcode is already overridden in these cases to
    0 (NOP) in io_init_req(), so this is a non-issue.

    Signed-off-by: Dylan Yudaken <dylany@fb.com>
    Link: https://lore.kernel.org/r/20220426082907.3600028-5-dylany@fb.com
    [axboe: don't include register, those are not req opcodes]
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
| * io_uring: rename op -> opcode (Dylan Yudaken, 2022-04-26, 1 file, -1/+1)
    Do this for consistency with the other trace messages.

    Signed-off-by: Dylan Yudaken <dylany@fb.com>
    Link: https://lore.kernel.org/r/20220426082907.3600028-4-dylany@fb.com
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
* | Merge tag 'for-5.19/io_uring-xattr-2022-05-22' of git://git.kernel.dk/linux-block (Linus Torvalds, 2022-05-23, 1 file, -4/+5)
    Pull io_uring xattr support from Jens Axboe:
    "Support for the xattr variants"

    * tag 'for-5.19/io_uring-xattr-2022-05-22' of git://git.kernel.dk/linux-block:
      io_uring: cleanup error-handling around io_req_complete
      io_uring: fix trace for reduced sqe padding
      io_uring: add fgetxattr and getxattr support
      io_uring: add fsetxattr and setxattr support
      fs: split off do_getxattr from getxattr
      fs: split off setxattr_copy and do_setxattr function from setxattr
| * io_uring: fix trace for reduced sqe padding (Jens Axboe, 2022-04-24, 1 file, -4/+5)
    __pad2 is only 1 u64 now, the other one is addr3. Adjust the trace so that
    it matches up.

    Fixes: a56834e0fafe ("io_uring: add fgetxattr and getxattr support")
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
* | Merge tag 'for-5.19/io_uring-2022-05-22' of git://git.kernel.dk/linux-block (Linus Torvalds, 2022-05-23, 1 file, -2/+42)
    Pull io_uring updates from Jens Axboe:
    "Here are the main io_uring changes for 5.19. This contains:

     - Fixes for sparse type warnings (Christoph, Vasily)
     - Support for multi-shot accept (Hao)
     - Support for io_uring managed fixed files, rather than always needing
       the application to manage the indices (me)
     - Fix for a spurious poll wakeup (Dylan)
     - CQE overflow fixes (Dylan)
     - Support more types of cancelations (me)
     - Support for co-operative task_work signaling, rather than always
       forcing an IPI (me)
     - Support for doing poll first when appropriate, rather than always
       attempting a transfer first (me)
     - Provided buffer cleanups and support for mapped buffers (me)
     - Improve how io_uring handles inflight SCM files (Pavel)
     - Speedups for registered files (Pavel, me)
     - Organize the completion data in a struct in io_kiocb rather than keep
       it in separate spots (Pavel)
     - task_work improvements (Pavel)
     - Cleanup and optimize the submission path, in general and for handling
       links (Pavel)
     - Speedups for registered resource handling (Pavel)
     - Support sparse buffers and file maps (Pavel, me)
     - Various fixes and cleanups (Almog, Pavel, me)"

    * tag 'for-5.19/io_uring-2022-05-22' of git://git.kernel.dk/linux-block: (111 commits)
      io_uring: fix incorrect __kernel_rwf_t cast
      io_uring: disallow mixed provided buffer group registrations
      io_uring: initialize io_buffer_list head when shared ring is unregistered
      io_uring: add fully sparse buffer registration
      io_uring: use rcu_dereference in io_close
      io_uring: consistently use the EPOLL* defines
      io_uring: make apoll_events a __poll_t
      io_uring: drop a spurious inline on a forward declaration
      io_uring: don't use ERR_PTR for user pointers
      io_uring: use a rwf_t for io_rw.flags
      io_uring: add support for ring mapped supplied buffers
      io_uring: add io_pin_pages() helper
      io_uring: add buffer selection support to IORING_OP_NOP
      io_uring: fix locking state for empty buffer group
      io_uring: implement multishot mode for accept
      io_uring: let fast poll support multishot
      io_uring: add REQ_F_APOLL_MULTISHOT for requests
      io_uring: add IORING_ACCEPT_MULTISHOT for accept
      io_uring: only wake when the correct events are set
      io_uring: avoid io-wq -EAGAIN looping for !IOPOLL
      ...
| * | io_uring: fix incorrect __kernel_rwf_t cast (Vasily Averin, 2022-05-19, 1 file, -1/+1)
    Currently 'make C=1 fs/io_uring.o' generates a sparse warning:

      CHECK fs/io_uring.c
      fs/io_uring.c: note: in included file (through include/trace/trace_events.h,
        include/trace/define_trace.h, include/trace/events/io_uring.h):
      ./include/trace/events/io_uring.h:488:1: warning: incorrect type in assignment (different base types)
        expected unsigned int [usertype] op_flags
        got restricted __kernel_rwf_t const [usertype] rw_flags

    This happens on the cast of sqe->rw_flags, which is defined as
    __kernel_rwf_t; this type is bitwise and requires a __force attribute for
    any casts. However rw_flags is a member of a union, and its access can
    safely be replaced by using one of its neighbours, so let's use
    poll32_events to fix the sparse warning.

    Signed-off-by: Vasily Averin <vvs@openvz.org>
    Link: https://lore.kernel.org/r/6f009241-a63f-ae43-a04b-62841aaef293@openvz.org
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
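    For context, __kernel_rwf_t is one of the kernel's sparse "bitwise" types,
    which only mix with plain integers through an explicit __force cast; the
    patch sidesteps the cast entirely by reading poll32_events, a plain-typed
    union neighbour. A small standalone sketch of the warning and the cast
    (the annotation macros normally come from the kernel's compiler headers
    and are stubbed here only so the example stands alone):

        /* Sparse-only annotations, stubbed for a standalone sketch. */
        #ifdef __CHECKER__
        #define __bitwise __attribute__((bitwise))
        #define __force   __attribute__((force))
        #else
        #define __bitwise
        #define __force
        #endif

        typedef int __bitwise my_rwf_t;   /* stands in for __kernel_rwf_t */

        unsigned int store_op_flags(my_rwf_t rw_flags)
        {
                unsigned int op_flags;

                /* op_flags = rw_flags;  <-- sparse: incorrect type in
                 * assignment (different base types) */
                op_flags = (__force unsigned int)rw_flags;  /* explicit cast */
                return op_flags;
        }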
| * io_uring: add trace support for CQE overflow (Dylan Yudaken, 2022-04-24, 1 file, -1/+41)
    Add trace function for overflowing CQ ring.

    Signed-off-by: Dylan Yudaken <dylany@fb.com>
    Link: https://lore.kernel.org/r/20220421091345.2115755-2-dylany@fb.com
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
* | io_uring: fix ordering of args in io_uring_queue_async_work (Dylan Yudaken, 2022-05-12, 1 file, -1/+1)
    Fix arg ordering in the TP_ARGS macro, which fixes the output.

    Fixes: 502c87d65564c ("io-uring: Make tracepoints consistent.")
    Signed-off-by: Dylan Yudaken <dylany@fb.com>
    Link: https://lore.kernel.org/r/20220512091834.728610-2-dylany@fb.com
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
* io_uring: make tracing format consistent (Dylan Yudaken, 2022-03-16, 1 file, -12/+12)
    Make the tracing formatting for user_data and flags consistent. Having
    consistent formatting allows one, for example, to grep for a specific
    user_data/flags and be able to trace a single sqe through easily.

    Change user_data to 0x%llx and flags to 0x%x everywhere. The '0x' is useful
    to disambiguate, for example, "user_data 100". Additionally remove the '='
    for flags in io_uring_req_failed, again for consistency.

    Signed-off-by: Dylan Yudaken <dylany@fb.com>
    Link: https://lore.kernel.org/r/20220316095204.2191498-1-dylany@fb.com
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
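    In the trace header this boils down to the TP_printk() format strings. An
    illustrative before/after fragment, with field names assumed for the
    sketch rather than copied from the header:

        /* Before (sketch): decimal, easy to misread and hard to grep */
        TP_printk("ring %p, user_data %llu, flags %u",
                  __entry->ctx, __entry->user_data, __entry->flags)

        /* After (sketch): hex with an explicit 0x prefix, so "user_data 100"
         * cannot be confused with 0x100 and the same value greps across events */
        TP_printk("ring %p, user_data 0x%llx, flags 0x%x",
                  __entry->ctx, __entry->user_data, __entry->flags)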
* io-uring: Make tracepoints consistent. (Stefan Roesch, 2022-03-10, 1 file, -165/+153)
    This makes the io-uring tracepoints consistent. Where it makes sense the
    tracepoints start with the following four fields:
    - context (ring)
    - request
    - user_data
    - opcode

    Signed-off-by: Stefan Roesch <shr@fb.com>
    Link: https://lore.kernel.org/r/20220214180430.70572-3-shr@fb.com
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
* io_uring: remove trace for eventfd (Usama Arif, 2022-03-10, 1 file, -8/+5)
    The information on whether eventfd is registered is not very useful and
    would result in the tracepoint being enclosed in an rcu_read_lock() in a
    later patch that tries to avoid ring quiesce for registering eventfd.

    Suggested-by: Jens Axboe <axboe@kernel.dk>
    Signed-off-by: Usama Arif <usama.arif@bytedance.com>
    Link: https://lore.kernel.org/r/20220204145117.1186568-2-usama.arif@bytedance.com
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
* io_uring: dump sqe contents if issue fails (Jens Axboe, 2021-10-19, 1 file, -0/+61)
    I recently had to look at a production problem where a request ended up
    getting the dreaded -EINVAL error on submit. The most used and hence
    useless of error codes, as it just tells you that something was wrong with
    your request, but not more than that.

    Let's dump the full sqe contents if we run into an issue failure, that'll
    allow easier diagnosing of a wide variety of issues.

    Signed-off-by: Jens Axboe <axboe@kernel.dk>
* io_uring: io_uring_complete() trace should take an integer (Jens Axboe, 2021-09-03, 1 file, -3/+3)
    It currently takes a long, and while that's normally OK, the io_uring limit
    is an int. Internally in io_uring it's an int, but sometimes it's passed as
    a long. That can yield confusing results where a completion seems to
    generate a huge result:

      ou-sqp-1297-1298 [001] ...1 788.056371: io_uring_complete: ring 000000000e98e046, user_data 0x0, result 4294967171, cflags 0

    which is due to -ECANCELED being stored in an unsigned, and then passed in
    as a long. Using the right int type, the trace looks correct:

      iou-sqp-338-339 [002] ...1 15.633098: io_uring_complete: ring 00000000e0ac60cf, user_data 0x0, result -125, cflags 0

    Cc: stable@vger.kernel.org
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
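    The confusing number in the first trace line is simply -ECANCELED (-125)
    stored in an unsigned 32-bit field and then widened to a long without sign
    extension. A small standalone illustration of that arithmetic:

        #include <stdio.h>

        int main(void)
        {
                int res = -125;               /* -ECANCELED */
                unsigned int stored = res;    /* unsigned field: 4294967171 */
                long widened = stored;        /* zero-extended, sign is lost */

                printf("widened: %ld\n", widened);     /* 4294967171, the bogus trace value */
                printf("as int:  %d\n", (int)stored);  /* -125, what the trace should show */
                return 0;
        }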
* io_uring: minor clean up in trace events definition (Olivier Langlois, 2021-06-16, 1 file, -18/+17)
    Fix tabulation to make nice columns.

    Reviewed-by: Pavel Begunkov <asml.silence@gmail.com>
    Signed-off-by: Olivier Langlois <olivier@trillion01.com>
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
* io_uring: Add to traces the req pointer when available (Olivier Langlois, 2021-06-16, 1 file, -18/+53)
    The req pointer uniquely identifies a specific request. Having it in traces
    can provide valuable insights that are not possible to have if the calling
    process is reusing the same user_data value.

    Reviewed-by: Pavel Begunkov <asml.silence@gmail.com>
    Signed-off-by: Olivier Langlois <olivier@trillion01.com>
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
* Merge tag 'trace-v5.13' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace (Linus Torvalds, 2021-05-03, 1 file, -1/+1)
    Pull tracing updates from Steven Rostedt:
    "New feature:

     - A new "func-no-repeats" option in tracefs/options directory. When set
       the function tracer will detect if the current function being traced is
       the same as the previous one, and instead of recording it, it will keep
       track of the number of times that the function is repeated in a row.
       And when another function is recorded, it will write a new event that
       shows the function that repeated, the number of times it repeated and
       the time stamp of when the last repeated function occurred.

    Enhancements:

     - In order to implement the above "func-no-repeats" option, the ring
       buffer timestamp can now give the accurate timestamp of the event as it
       is being recorded, instead of having to record an absolute timestamp
       for all events. This helps the histogram code which no longer needs to
       waste ring buffer space.

     - New validation logic to make sure all trace events that access
       dereferenced pointers do so in a safe way, and will warn otherwise.

    Fixes:

     - No longer limit the PIDs of tasks that are recorded for
       "saved_cmdlines" to PID_MAX_DEFAULT (32768), as systemd now allows for
       a much larger range. This caused the mapping of PIDs to the task names
       to be dropped for all tasks with a PID greater than 32768.

     - Change trace_clock_global() to never block. This caused a deadlock.

    Clean ups:

     - Typos, prototype fixes, and removing of duplicate or unused code.

     - Better management of ftrace_page allocations"

    * tag 'trace-v5.13' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace: (32 commits)
      tracing: Restructure trace_clock_global() to never block
      tracing: Map all PIDs to command lines
      ftrace: Reuse the output of the function tracer for func_repeats
      tracing: Add "func_no_repeats" option for function tracing
      tracing: Unify the logic for function tracing options
      tracing: Add method for recording "func_repeats" events
      tracing: Add "last_func_repeats" to struct trace_array
      tracing: Define new ftrace event "func_repeats"
      tracing: Define static void trace_print_time()
      ftrace: Simplify the calculation of page number for ftrace_page->records some more
      ftrace: Store the order of pages allocated in ftrace_page
      tracing: Remove unused argument from "ring_buffer_time_stamp()
      tracing: Remove duplicate struct declaration in trace_events.h
      tracing: Update create_system_filter() kernel-doc comment
      tracing: A minor cleanup for create_system_filter()
      kernel: trace: Mundane typo fixes in the file trace_events_filter.c
      tracing: Fix various typos in comments
      scripts/recordmcount.pl: Make vim and emacs indent the same
      scripts/recordmcount.pl: Make indent spacing consistent
      tracing: Add a verifier to check string pointers for trace events
      ...
| * tracing: Fix various typos in comments (Ingo Molnar, 2021-03-23, 1 file, -1/+1)
    Fix ~59 single-word typos in the tracing code comments, and fix the grammar
    in a handful of places.

    Link: https://lore.kernel.org/r/20210322224546.GA1981273@gmail.com
    Link: https://lkml.kernel.org/r/20210323174935.GA4176821@gmail.com
    Reviewed-by: Randy Dunlap <rdunlap@infradead.org>
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: Steven Rostedt (VMware) <rostedt@goodmis.org>
* | io_uring: include cflags in completion trace event (Jens Axboe, 2021-04-11, 1 file, -4/+7)
    We should be including the completion flags for better introspection on
    exactly what completion event was logged.

    Signed-off-by: Jens Axboe <axboe@kernel.dk>
* io_uring: use poll driven retry for files that support it (Jens Axboe, 2020-03-02, 1 file, -0/+103)
    Currently io_uring tries any request in a non-blocking manner, if it can,
    and then retries from a worker thread if we get -EAGAIN. Now that we have a
    new and fancy poll based retry backend, use that to retry requests if the
    file supports it.

    This means that, for example, an IORING_OP_RECVMSG on a socket no longer
    requires an async thread to complete the IO. If we get -EAGAIN reading from
    the socket in a non-blocking manner, we arm a poll handler for notification
    on when the socket becomes readable. When it does, the pending read is
    executed directly by the task again, through the io_uring task work
    handlers. Not only is this faster and more efficient, it also means we're
    not generating potentially tons of async threads that just sit and block,
    waiting for the IO to complete.

    The feature is marked with IORING_FEAT_FAST_POLL, meaning that async
    pollable IO is fast, and that poll<link>other_op is fast as well.

    Signed-off-by: Jens Axboe <axboe@kernel.dk>
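    In outline, the issue path described above behaves roughly like the sketch
    below. Every helper name here is hypothetical; this only mirrors the
    decision flow from the commit message, not actual io_uring code:

        /* Hypothetical names throughout; a sketch of the decision flow only. */
        static void issue_request(struct io_request *req)
        {
                int ret = try_nonblocking_issue(req);   /* first attempt, never blocks */

                if (ret == -EAGAIN && file_supports_poll(req)) {
                        /* Fast poll: no worker thread blocks; the task itself
                         * re-runs the request via task work once the fd is ready. */
                        arm_poll_handler(req);
                        return;
                }

                if (ret == -EAGAIN)
                        punt_to_async_worker(req);      /* legacy fallback path */
                else
                        post_completion(req, ret);      /* done on first attempt */
        }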
* io_uring: add opcode to issue trace event (Jens Axboe, 2020-01-20, 1 file, -4/+9)
    For some test apps at least, user_data is just zeroes. So it's not a good
    way to tell what the command actually is. Add the opcode to the issue trace
    point.

    Signed-off-by: Jens Axboe <axboe@kernel.dk>
* io_uring: improve trace_io_uring_defer() trace point (Jens Axboe, 2019-11-25, 1 file, -8/+8)
    We don't have shadow requests anymore, so get rid of the shadow argument.
    Add the user_data argument, as that's often useful to easily match up
    requests, instead of having to look at request pointers.

    Signed-off-by: Jens Axboe <axboe@kernel.dk>
* io_uring: add completion trace event (Jens Axboe, 2019-11-04, 1 file, -9/+45)
    We currently don't have a completion event trace, add one of those. And to
    better be able to match up submissions and completions, add user_data to
    the submission trace as well.

    Signed-off-by: Jens Axboe <axboe@kernel.dk>
* io_uring: remove io_uring_add_to_prev() trace event (Jens Axboe, 2019-11-01, 1 file, -29/+0)
    This internal logic was killed with the conversion to io-wq, so we no
    longer have a need for this particular trace. Kill it.

    Signed-off-by: Jens Axboe <axboe@kernel.dk>
* io_uring: replace workqueue usage with io-wq (Jens Axboe, 2019-10-29, 1 file, -5/+7)
    Drop various work-arounds we have for workqueues:

    - We no longer need the async_list for tracking sequential IO.

    - We don't have to maintain our own mm tracking/setting.

    - We don't need a separate workqueue for buffered writes. This didn't even
      work that well to begin with, as it was suboptimal for multiple buffered
      writers on multiple files.

    - We can properly cancel pending interruptible work. This fixes deadlocks
      with particularly socket IO, where we cannot cancel them when the
      io_uring is closed. Hence the ring will wait forever for these requests
      to complete, which may never happen. This is different from disk IO
      where we know requests will complete in a finite amount of time.

    - Due to being able to cancel interruptible work that is already running,
      we can implement file table support for work. We need that for
      supporting system calls that add to a process file table.

    - It gets us one step closer to adding async support for any system call.

    Signed-off-by: Jens Axboe <axboe@kernel.dk>
* io_uring: add set of tracing events (Dmitrii Dolgov, 2019-10-29, 1 file, -0/+349)
    To trace io_uring activity one can get information from workqueue and io
    trace events, but it looks like some parts could be hard to identify via
    this approach. Making what happens inside io_uring more transparent is
    important to be able to reason about many aspects of it, hence introduce
    the set of tracing events.

    All such events could be roughly divided into two categories:

    * those that help to understand correctness (from both the kernel and an
      application point of view). E.g. a ring creation, file registration, or
      waiting for available CQE. The proposed approach is to get a pointer to
      an original structure of interest (ring context, or request), and then
      find relevant events. io_uring_queue_async_work also exposes a pointer
      to work_struct, to be able to track down corresponding workqueue events.

    * those that provide performance related information. Mostly it's about
      events that change the flow of requests, e.g. whether an async work was
      queued, or delayed due to some dependencies. Another important case is
      how io_uring optimizations (e.g. registered files) are utilized.

    Signed-off-by: Dmitrii Dolgov <9erthalion6@gmail.com>
    Signed-off-by: Jens Axboe <axboe@kernel.dk>