- Jun 13, 2020
Jerome Marchand authored
The current default value of interval (99999999999) in dbstat is too high to be used in the sleep() function in Python 3. I couldn't find an authoritative source on the issue, but the maximum value seems to be 2^63/10^9 (9223372036). In any case, 99999999 is the de facto standard for a very big number here, so just use that; it's over 3 years, which should be enough. For consistency, also change a couple of values in klockstat, even though they didn't overflow.

It fixes the following error:

    $ dbstat mysql
    Tracing database queries for pids slower than 0 ms...
    Traceback (most recent call last):
      File "./dbstat", line 112, in <module>
        sleep(args.interval)
    OverflowError: timestamp too large to convert to C _PyTime_t
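For context, a minimal sketch of the overflow, assuming CPython 3 on a 64-bit host; OLD_DEFAULT and SAFE_MAX are our illustrative names, not dbstat's:

```
import time

# CPython 3 stores the sleep deadline as nanoseconds in a signed 64-bit
# integer (_PyTime_t), so anything above roughly 2**63 / 10**9 seconds
# (9223372036) overflows before the sleep even starts.
OLD_DEFAULT = 99999999999   # the old dbstat interval default
SAFE_MAX = 99999999         # bcc's de facto "forever" (~3.17 years)

try:
    time.sleep(OLD_DEFAULT)
except OverflowError as e:
    print("old default fails:", e)

print("safe cap fits:", SAFE_MAX < 2**63 // 10**9)  # True
```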
- May 21, 2020
Mauricio Vásquez authored
The KFUNC_PROBE macro uses "void" as its return type, which breaks tools whose generated filtering code returns 0.

Reproducer (note that it requires BTF support):

```
$ python opensnoop.py --pid 5
/virtual/main.c:33:21: error: void function '____kretfunc__do_sys_open' should not return a value [-Wreturn-type]
 if (pid != 5) { return 0; }
                 ^      ~
1 error generated.
...
```

Signed-off-by: Mauricio Vásquez <mauricio@kinvolk.io>
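A hedged sketch of the kind of probe the reproducer generates: the injected `return 0` filter only compiles once the macro expands to a function returning int rather than void. The do_sys_open argument list is assumed from the kernel prototype of that era, and PID 5 mirrors the reproducer:

```
from bcc import BPF

# Requires a kernel with BTF and fentry/fexit (kfunc) support.
text = r"""
KRETFUNC_PROBE(do_sys_open, int dfd, const char __user *filename,
               int flags, int mode, int ret)
{
    u32 pid = bpf_get_current_pid_tgid() >> 32;
    if (pid != 5) { return 0; }     /* the filter from the reproducer */
    bpf_trace_printk("do_sys_open ret=%d\n", ret);
    return 0;
}
"""
b = BPF(text=text)   # kfunc probes are attached automatically on load
b.trace_print()
```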
- Feb 27, 2020
Jiri Olsa authored
Adding kfunc return trampoline probes, if available, instead of kprobe/kretprobe probes. The kfunc trampolines are faster and less intrusive.

Benchmark without the klockstat.py script running in the background:

    $ perf bench sched messaging -l 50000
    # Running 'sched/messaging' benchmark:
    # 20 sender and receiver processes per group
    # 10 groups == 400 processes run

         Total time: 18.571 [sec]

With kprobe tracing:

    $ perf bench sched messaging -l 50000
    # Running 'sched/messaging' benchmark:
    # 20 sender and receiver processes per group
    # 10 groups == 400 processes run

         Total time: 183.395 [sec]

With kfunc tracing:

    $ perf bench sched messaging -l 50000
    # Running 'sched/messaging' benchmark:
    # 20 sender and receiver processes per group
    # 10 groups == 400 processes run

         Total time: 39.773 [sec]

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
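Roughly, the approach is to prefer kfunc trampolines when the running kernel supports them and keep kprobes as the fallback. A minimal sketch with illustrative probe bodies (BPF.support_kfunc() and KFUNC_PROBE are real bcc facilities; everything else here is ours):

```
from bcc import BPF

# Prefer the faster, less intrusive kfunc trampolines when available;
# otherwise fall back to classic kprobes.
if BPF.support_kfunc():
    b = BPF(text="""
    KFUNC_PROBE(mutex_lock, void *lock)
    {
        /* entry via trampoline */
        return 0;
    }
    """)
    # kfunc probes attach automatically when the program is loaded
else:
    b = BPF(text="""
    #include <uapi/linux/ptrace.h>

    int trace_mutex_lock(struct pt_regs *ctx)
    {
        /* entry via kprobe */
        return 0;
    }
    """)
    b.attach_kprobe(event="mutex_lock", fn_name="trace_mutex_lock")
```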
- Jan 31, 2020
Jiri Olsa authored
Currently we display the caller symbol in the stack as well, which ends up with the output below when stack display is enabled:

    Caller                     Avg Hold  Count  Max hold  Total hold
    b'flush_to_ldisc+0x22'        56112      2    103914      112225
        b'flush_to_ldisc+0x22'
        b'process_one_work+0x1b0'
        b'worker_thread+0x50'
        b'kthread+0xfb'

Skipping one more symbol in the stack fixes that:

    Caller                     Avg Hold  Count  Max hold  Total hold
    b'flush_to_ldisc+0x22'         1893      2      2765        3787
        b'process_one_work+0x1b0'
        b'worker_thread+0x50'
        b'kthread+0xfb'

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
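A hedged sketch of the skip itself, using illustrative names rather than klockstat.py's exact ones: capture a stack, print the caller from its first frame, and render the rest of the stack without that frame:

```
import time
from bcc import BPF

# Record kernel stacks seen at mutex_lock; when printing, skip the
# first frame because it duplicates the caller column.
b = BPF(text="""
#include <uapi/linux/ptrace.h>

BPF_STACK_TRACE(stack_traces, 1024);
BPF_HASH(seen, int, u64);

int trace_mutex_lock(struct pt_regs *ctx)
{
    int stack_id = stack_traces.get_stackid(ctx, 0);
    if (stack_id < 0)
        return 0;
    u64 one = 1;
    seen.update(&stack_id, &one);
    return 0;
}
""")
b.attach_kprobe(event="mutex_lock", fn_name="trace_mutex_lock")
time.sleep(1)

for stack_id, _ in b["seen"].items():
    frames = list(b["stack_traces"].walk(stack_id.value))
    print("caller: %s" % b.ksym(frames[0], show_offset=True))
    for addr in frames[1:]:     # skip the frame that duplicates the caller
        print("\t%s" % b.ksym(addr, show_offset=True))
```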
- Jan 09, 2020
Michael Prokop authored
Fix a bunch of typos in man pages, docs, tools, tests, src and examples.
- Dec 18, 2019
Marko Myllynen authored
- args.caller needs to be bytes, not string (see the illustration below)
- clarify the man page a bit
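A minimal, self-contained illustration of the bytes-vs-str pitfall behind the first item; the variable names are ours:

```
# Under Python 3, symbols resolved by bcc (e.g. via b.ksym()) come back
# as bytes, while argparse yields str, so the caller filter must be
# encoded before it can be compared against symbol names.
caller_arg = "flush_to_ldisc"        # str, as argparse delivers it
symbol = b"flush_to_ldisc+0x22"      # bytes, as bcc resolves it

try:
    symbol.startswith(caller_arg)    # str vs bytes: TypeError on Python 3
except TypeError as e:
    print("broken:", e)

print("fixed:", symbol.startswith(caller_arg.encode()))  # True
```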
- Dec 05, 2019
Yonghong Song authored
The current lockstat.py is tracing three kernel functions, mutex_lock_enter(), mutex_unlock_enter() and mutex_lock_return(), for kernel locking statistics.

There are other efforts trying to get user lock stats by tracing e.g. pthread locking primitives. For example, Sasha Goldshtein's linux-tracing-workshop (https://github.com/goldshtn/linux-tracing-workshop), which is referenced in bcc/docs/tutorial_bcc_python_developer.md, has a tool called lockstat.py that traces pthread_mutex_init to collect some lock statistics for userspace locks. In the past, bcc also had an effort to gather userspace lock statistics under the same name lockstat.py: https://github.com/iovisor/bcc/pull/1268

In the future, bcc could have a lockstat tool tracing userspace locks, so let us rename the current lockstat.py to klockstat.py to clearly express its scope.
- Nov 29, 2019
uppinggoer authored
Fix a bug in lockstat.py: keep the stack in the map keys instead of the map value, so the tool can run on older kernels.
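A hedged sketch of what keying on the stack looks like; the struct layout and names are illustrative, not the exact lockstat.py ones:

```
from bcc import BPF

# Keying the per-lock stats on the captured stack id, rather than
# carrying it in the value, keeps the aggregation in the map key path,
# which older kernels handle fine.
b = BPF(text="""
#include <uapi/linux/ptrace.h>

BPF_STACK_TRACE(stacks, 16384);

struct key_t {
    int stack_id;            /* stack captured at mutex_lock entry */
};

struct val_t {
    u64 count;
};

BPF_HASH(lock_stats, struct key_t, struct val_t);

int trace_mutex_lock(struct pt_regs *ctx)
{
    struct key_t key = {};
    key.stack_id = stacks.get_stackid(ctx, 0);
    if (key.stack_id < 0)
        return 0;

    struct val_t zero = {}, *val;
    val = lock_stats.lookup_or_try_init(&key, &zero);
    if (val)
        val->count++;
    return 0;
}
""")
b.attach_kprobe(event="mutex_lock", fn_name="trace_mutex_lock")
```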
- Nov 18, 2019
Jiri Olsa authored
Adding the lockstat tool to trace kernel mutex lock events and display lock statistics. It displays the following data:

    Caller                          Avg Spin  Count  Max spin  Total spin
    psi_avgs_work+0x2e                  3675      5      5468       18379
    flush_to_ldisc+0x22                 2833      2      4210        5667
    n_tty_write+0x30c                   3914      1      3914        3914
    isig+0x5d                           2390      1      2390        2390
    tty_buffer_flush+0x2a               1604      1      1604        1604
    commit_echoes+0x22                  1400      1      1400        1400
    n_tty_receive_buf_common+0x3b9      1399      1      1399        1399

    Caller                          Avg Hold  Count  Max hold  Total hold
    flush_to_ldisc+0x22                42558      2     76135       85116
    psi_avgs_work+0x2e                 14821      5     20446       74106
    n_tty_receive_buf_common+0x3b9     12300      1     12300       12300
    n_tty_write+0x30c                  10712      1     10712       10712
    isig+0x5d                           3362      1      3362        3362
    tty_buffer_flush+0x2a               3078      1      3078        3078
    commit_echoes+0x22                  3017      1      3017        3017

Every caller using a kernel mutex is displayed on its own line.

The first portion of lines shows the lock acquisition data, i.e. the amount of time it took to acquire a given lock:

    'Caller'     - symbol acquiring the mutex
    'Avg Spin'   - average time to acquire the mutex
    'Count'      - number of times the mutex was acquired
    'Max spin'   - maximum time to acquire the mutex
    'Total spin' - total time spent acquiring the mutex

The second portion of lines shows the lock holding data, i.e. the amount of time a given lock was held:

    'Caller'     - symbol holding the mutex
    'Avg Hold'   - average time the mutex was held
    'Count'      - number of times the mutex was held
    'Max hold'   - maximum time the mutex was held
    'Total hold' - total time spent holding the mutex

This works by tracing mutex_lock/unlock kprobes, updating the lock stats in maps and processing them in the Python part; a sketch of that scheme follows after the examples.

Examples:

    lockstats                         # trace system wide
    lockstats -d 5                    # trace for 5 seconds only
    lockstats -i 5                    # display stats every 5 seconds
    lockstats -p 123                  # trace locks for PID 123
    lockstats -t 321                  # trace locks for TID 321
    lockstats -c pipe_                # display stats only for lock callers with 'pipe_' substring
    lockstats -S acq_count            # sort lock acquisition results by acquire count
    lockstats -S hld_total            # sort lock hold results by total hold time
    lockstats -S acq_count,hld_total  # combination of the above
    lockstats -n 3                    # display 3 locks
    lockstats -s 3                    # display 3 levels of stack

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
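A hedged sketch of that kprobe scheme: spin time is measured from mutex_lock entry to mutex_lock return, and hold time from mutex_lock return to mutex_unlock entry. Map and function names are illustrative, and keying by thread id is a simplification (the real tool also tracks the caller and stack):

```
from bcc import BPF

b = BPF(text="""
#include <uapi/linux/ptrace.h>

BPF_HASH(lock_start, u32, u64);   /* tid -> mutex_lock entry timestamp */
BPF_HASH(hold_start, u32, u64);   /* tid -> mutex_lock return timestamp */

int mutex_lock_enter(struct pt_regs *ctx)
{
    u32 tid = bpf_get_current_pid_tgid();
    u64 ts = bpf_ktime_get_ns();
    lock_start.update(&tid, &ts);
    return 0;
}

int mutex_lock_return(struct pt_regs *ctx)
{
    u32 tid = bpf_get_current_pid_tgid();
    u64 *tsp = lock_start.lookup(&tid);
    if (!tsp)
        return 0;
    u64 now = bpf_ktime_get_ns();
    u64 spin_ns = now - *tsp;                 /* time spent acquiring */
    lock_start.delete(&tid);
    hold_start.update(&tid, &now);
    bpf_trace_printk("spin %llu ns\\n", spin_ns);
    return 0;
}

int mutex_unlock_enter(struct pt_regs *ctx)
{
    u32 tid = bpf_get_current_pid_tgid();
    u64 *tsp = hold_start.lookup(&tid);
    if (!tsp)
        return 0;
    u64 hold_ns = bpf_ktime_get_ns() - *tsp;  /* time the lock was held */
    hold_start.delete(&tid);
    bpf_trace_printk("hold %llu ns\\n", hold_ns);
    return 0;
}
""")
b.attach_kprobe(event="mutex_lock", fn_name="mutex_lock_enter")
b.attach_kretprobe(event="mutex_lock", fn_name="mutex_lock_return")
b.attach_kprobe(event="mutex_unlock", fn_name="mutex_unlock_enter")
b.trace_print()
```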