FYI, we noticed a -7.6% change in interrupts.CAL:Function_call_interrupts due to commit:
commit f1967d7f70350b62f087671c106c3968ef8eac98 ("sched/debug: decouple
'sched_stat_*' tracepoints' from CONFIG_SCHEDSTATS")
https://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git sched/tracepoints
in testcase: fio-basic
on test machine: 16 threads Intel(R) Xeon(R) CPU D-1541 @ 2.10GHz with 8G memory
with the following parameters:
runtime: 300s
nr_task: 8
disk: 1SSD
fs: xfs
rw: write
bs: 4k
ioengine: sync
test_size: 400g
cpufreq_governor: performance
Fio is a tool that will spawn a number of threads or processes doing a particular type of
I/O action as specified by the user.
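For reference, a rough standalone sketch of the workload implied by the parameters above (the real job file is generated by lkp from the attached job.yaml; the device name, mountpoint, and the even per-job split of test_size below are assumptions):

# Hypothetical approximation of the fio-basic job above; the lkp-generated
# job.yaml may differ in details such as target path and size handling.
cpupower frequency-set -g performance   # cpufreq_governor: performance
mkfs.xfs -f /dev/sdX1                   # disk: 1SSD, fs: xfs (device is a placeholder)
mount /dev/sdX1 /mnt/test
# nr_task: 8 -> numjobs; test_size: 400g -> ~50g per job (assumed split); runtime: 300s
fio --name=fio-basic-write --directory=/mnt/test \
    --rw=write --bs=4k --ioengine=sync \
    --numjobs=8 --size=50g --runtime=300 --time_based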
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
bs/compiler/cpufreq_governor/disk/fs/ioengine/kconfig/nr_task/rootfs/runtime/rw/tbox_group/test_size/testcase:
4k/gcc-6/performance/1SSD/xfs/sync/x86_64-rhel-7.2/8/debian-x86_64-2016-08-31.cgz/300s/write/lkp-bdw-de1/400g/fio-basic
commit:
25c49d6ace ("Merge branch 'tip/sched/core'")
f1967d7f70 ("sched/debug: decouple 'sched_stat_*' tracepoints' from
CONFIG_SCHEDSTATS")
25c49d6ace2da4e4 f1967d7f70350b62f087671c10
---------------- --------------------------
       fail:runs  %reproduction    fail:runs
           |             |             |
         %stddev     %change         %stddev
             \          |                \
55374 ± 0% -7.6% 51191 ± 1% interrupts.CAL:Function_call_interrupts
1.229e+11 ± 0% +2.1% 1.254e+11 ± 2% perf-stat.branch-instructions
86.43 ± 0% +2.7% 88.81 ± 0% perf-stat.iTLB-load-miss-rate%
1.626e+08 ± 1% +25.8% 2.046e+08 ± 2% perf-stat.iTLB-load-misses
6.471e+11 ± 0% +1.8% 6.588e+11 ± 2% perf-stat.instructions
3979 ± 1% -19.1% 3221 ± 2% perf-stat.instructions-per-iTLB-miss
0.00 ± -1% +Inf% 63990 ±124% latency_stats.avg.call_rwsem_down_write_failed.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 63892 ±124% latency_stats.avg.rpc_wait_bit_killable.__rpc_wait_for_completion_task._nfs4_proc_open_confirm.[nfsv4].nfs4_do_open.[nfsv4].nfs4_atomic_open.[nfsv4].nfs_atomic_open.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 63990 ±124% latency_stats.max.call_rwsem_down_write_failed.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 63892 ±124% latency_stats.max.rpc_wait_bit_killable.__rpc_wait_for_completion_task._nfs4_proc_open_confirm.[nfsv4].nfs4_do_open.[nfsv4].nfs4_atomic_open.[nfsv4].nfs_atomic_open.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 63990 ±124% latency_stats.sum.call_rwsem_down_write_failed.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
1422 ± 5% +726.9% 11759 ± 58% latency_stats.sum.pipe_wait.wait_for_partner.fifo_open.do_dentry_open.vfs_open.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 63892 ±124% latency_stats.sum.rpc_wait_bit_killable.__rpc_wait_for_completion_task._nfs4_proc_open_confirm.[nfsv4].nfs4_do_open.[nfsv4].nfs4_atomic_open.[nfsv4].nfs_atomic_open.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
4988 ± 1% -100.0% 0.00 ± -1% sched_debug.cfs_rq:/.exec_clock.avg
13988 ± 0% -100.0% 0.00 ± -1% sched_debug.cfs_rq:/.exec_clock.max
1841 ± 16% -100.0% 0.00 ± -1% sched_debug.cfs_rq:/.exec_clock.min
2721 ± 2% -100.0% 0.00 ± -1% sched_debug.cfs_rq:/.exec_clock.stddev
0.84 ± 3% -100.0% 0.00 ± -1% sched_debug.cfs_rq:/.nr_spread_over.avg
1.00 ± 49% -100.0% 0.00 ± -1% sched_debug.cfs_rq:/.nr_spread_over.max
0.83 ± 0% -100.0% 0.00 ± -1% sched_debug.cfs_rq:/.nr_spread_over.min
24212 ± 0% -100.0% 0.00 ± -1% sched_debug.cpu.sched_count.avg
35103 ± 7% -100.0% 0.00 ± -1% sched_debug.cpu.sched_count.max
16449 ± 8% -100.0% 0.00 ± -1% sched_debug.cpu.sched_count.min
5084 ± 5% -100.0% 0.00 ± -1% sched_debug.cpu.sched_count.stddev
11290 ± 0% -100.0% 0.00 ± -1% sched_debug.cpu.sched_goidle.avg
16427 ± 4% -100.0% 0.00 ± -1% sched_debug.cpu.sched_goidle.max
6782 ± 12% -100.0% 0.00 ± -1% sched_debug.cpu.sched_goidle.min
2487 ± 7% -100.0% 0.00 ± -1% sched_debug.cpu.sched_goidle.stddev
11839 ± 0% -100.0% 0.00 ± -1% sched_debug.cpu.ttwu_count.avg
19828 ± 9% -100.0% 0.00 ± -1% sched_debug.cpu.ttwu_count.max
7753 ± 6% -100.0% 0.00 ± -1% sched_debug.cpu.ttwu_count.min
3206 ± 14% -100.0% 0.00 ± -1% sched_debug.cpu.ttwu_count.stddev
5720 ± 0% -100.0% 0.00 ± -1% sched_debug.cpu.ttwu_local.avg
7580 ± 4% -100.0% 0.00 ± -1% sched_debug.cpu.ttwu_local.max
4000 ± 10% -100.0% 0.00 ± -1% sched_debug.cpu.ttwu_local.min
933.86 ± 15% -100.0% 0.00 ± -1% sched_debug.cpu.ttwu_local.stddev
Thanks,
Xiaolong