[mm] 6471384af2: hackbench.throughput -4.9% regression
by kernel test robot
Greetings,
FYI, we noticed a -4.9% regression of hackbench.throughput due to commit:
commit: 6471384af2a6530696fc0203bafe4de41a23c9ef ("mm: security: introduce init_on_alloc=1 and init_on_free=1 boot options")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: hackbench
on test machine: 16 threads Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz with 32G memory
with following parameters:
nr_threads: 100%
mode: process
ipc: socket
cpufreq_governor: performance
ucode: 0xb8
test-description: Hackbench is both a benchmark and a stress test for the Linux kernel scheduler.
test-url: https://github.com/linux-test-project/ltp/blob/master/testcases/kernel/sc...
In addition, the commit also has a significant impact on the following tests:
+------------------+-------------------------------------------------------------------+
| testcase: change | hackbench: hackbench.throughput -4.3% regression |
| test machine | 16 threads Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz with 32G memory |
| test parameters | cpufreq_governor=performance |
| | mode=process |
| | nr_threads=100% |
| | ucode=0xb8 |
+------------------+-------------------------------------------------------------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <oliver.sang@intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
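Since the commit under test gates the extra zeroing behind boot options, the regression can also be confirmed or sidestepped by toggling those options on the kernel command line and re-running the job. This is a sketch based only on the option names in the commit title (the build-time defaults additionally depend on the CONFIG_INIT_ON_ALLOC_DEFAULT_ON / CONFIG_INIT_ON_FREE_DEFAULT_ON Kconfig settings that the same commit introduces; the robot's own job files do not set these):

```shell
# Kernel command-line fragments (boot options added by commit 6471384af2):
#
#   init_on_alloc=1   # zero-fill pages and slab objects at allocation time
#   init_on_free=1    # zero-fill memory when it is freed
#
# To check whether the allocation-time memset (memset_erms in the
# perf-profile section of this report) accounts for the throughput drop,
# boot the same kernel with both options disabled:
#
#   init_on_alloc=0 init_on_free=0
#
# and then re-run the attached job:
bin/lkp run job.yaml
```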
=========================================================================================
compiler/cpufreq_governor/ipc/kconfig/mode/nr_threads/rootfs/tbox_group/testcase/ucode:
gcc-7/performance/socket/x86_64-rhel-7.2-clear_lck_7595/process/100%/clear-ota-25590-x86_64-2018-10-18.cgz/lkp-cfl-e1/hackbench/0xb8
commit:
ba5c5e4a5d ("arm64: move jump_label_init() before parse_early_param()")
6471384af2 ("mm: security: introduce init_on_alloc=1 and init_on_free=1 boot options")
ba5c5e4a5da443e8 6471384af2a6530696fc0203baf
---------------- ---------------------------
%stddev %change %stddev
\ | \
90709 -4.9% 86276 hackbench.throughput
108209 +42.0% 153632 hackbench.time.involuntary_context_switches
825777 -3.9% 793179 hackbench.time.minor_page_faults
712.02 -3.9% 684.38 hackbench.time.user_time
5.616e+08 -4.3% 5.376e+08 hackbench.workload
0.84 +0.1 0.95 ± 2% mpstat.cpu.-1.irq%
1347 ± 3% +12.7% 1518 ± 2% slabinfo.kmalloc-rcl-64.num_objs
3.49 +4.7% 3.65 turbostat.RAMWatt
396367 -1.3% 391171 vmstat.system.cs
14755342 -3.8% 14194624 proc-vmstat.numa_hit
14755342 -3.8% 14194624 proc-vmstat.numa_local
14815709 -3.7% 14261066 proc-vmstat.pgalloc_normal
14779482 -3.8% 14224922 proc-vmstat.pgfree
8627 ±172% +682.5% 67514 ± 65% sched_debug.cfs_rq:/.MIN_vruntime.avg
137821 ±172% +585.5% 944775 ± 59% sched_debug.cfs_rq:/.MIN_vruntime.max
33372 ±172% +620.8% 240555 ± 61% sched_debug.cfs_rq:/.MIN_vruntime.stddev
8627 ±172% +682.5% 67514 ± 65% sched_debug.cfs_rq:/.max_vruntime.avg
137821 ±172% +585.5% 944775 ± 59% sched_debug.cfs_rq:/.max_vruntime.max
33372 ±172% +620.8% 240555 ± 61% sched_debug.cfs_rq:/.max_vruntime.stddev
556.14 ± 4% -9.3% 504.21 ± 5% sched_debug.cpu.clock_task.stddev
1883 ± 36% -48.0% 979.75 ± 26% interrupts.133:IR-PCI-MSI.2097153-edge.eth1-TxRx-0
46373 ± 4% -10.1% 41692 ± 4% interrupts.CPU0.RES:Rescheduling_interrupts
1883 ± 36% -48.0% 979.75 ± 26% interrupts.CPU1.133:IR-PCI-MSI.2097153-edge.eth1-TxRx-0
45230 ± 5% -11.7% 39943 ± 4% interrupts.CPU1.RES:Rescheduling_interrupts
57716 ± 3% -12.3% 50601 interrupts.CPU12.RES:Rescheduling_interrupts
53566 ± 3% -11.5% 47399 interrupts.CPU13.RES:Rescheduling_interrupts
52007 ± 4% -14.0% 44710 ± 5% interrupts.CPU15.RES:Rescheduling_interrupts
46887 ± 3% -12.3% 41140 ± 6% interrupts.CPU2.RES:Rescheduling_interrupts
52269 ± 5% -12.1% 45958 ± 5% interrupts.CPU3.RES:Rescheduling_interrupts
57120 ± 7% -12.0% 50281 ± 4% interrupts.CPU4.RES:Rescheduling_interrupts
49253 ± 3% -11.5% 43586 ± 2% interrupts.CPU7.RES:Rescheduling_interrupts
820390 -10.2% 736882 interrupts.RES:Rescheduling_interrupts
44.93 -2.4% 43.85 perf-stat.i.MPKI
9.635e+09 -3.5% 9.3e+09 perf-stat.i.branch-instructions
1.106e+08 -4.2% 1.06e+08 perf-stat.i.branch-misses
13.44 +0.3 13.72 perf-stat.i.cache-miss-rate%
2.912e+08 -4.3% 2.786e+08 perf-stat.i.cache-misses
2.171e+09 -6.3% 2.034e+09 perf-stat.i.cache-references
397614 -1.4% 391925 perf-stat.i.context-switches
1.26 +4.2% 1.31 perf-stat.i.cpi
62650 -3.9% 60222 perf-stat.i.cpu-migrations
209.19 +4.6% 218.81 perf-stat.i.cycles-between-cache-misses
1.459e+10 -4.0% 1.401e+10 perf-stat.i.dTLB-loads
1.025e+10 -4.2% 9.817e+09 perf-stat.i.dTLB-stores
86435026 -2.5% 84314445 perf-stat.i.iTLB-load-misses
4.833e+10 -4.0% 4.639e+10 perf-stat.i.instructions
562.26 -1.5% 553.60 perf-stat.i.instructions-per-iTLB-miss
0.79 -4.0% 0.76 perf-stat.i.ipc
0.00 ± 35% +0.0 0.00 ± 49% perf-stat.i.node-load-miss-rate%
0.64 ± 41% +40.2% 0.90 ± 32% perf-stat.i.node-load-misses
21652012 -2.2% 21184023 perf-stat.i.node-loads
0.57 ± 7% +76.9% 1.00 ± 33% perf-stat.i.node-store-misses
26105714 +79.9% 46970933 perf-stat.i.node-stores
44.93 -2.4% 43.85 perf-stat.overall.MPKI
13.41 +0.3 13.69 perf-stat.overall.cache-miss-rate%
1.26 +4.2% 1.31 perf-stat.overall.cpi
208.98 +4.6% 218.59 perf-stat.overall.cycles-between-cache-misses
559.11 -1.6% 550.27 perf-stat.overall.instructions-per-iTLB-miss
0.79 -4.0% 0.76 perf-stat.overall.ipc
0.00 ± 42% +0.0 0.00 ± 33% perf-stat.overall.node-load-miss-rate%
9.619e+09 -3.5% 9.285e+09 perf-stat.ps.branch-instructions
1.104e+08 -4.2% 1.058e+08 perf-stat.ps.branch-misses
2.907e+08 -4.3% 2.781e+08 perf-stat.ps.cache-misses
2.168e+09 -6.3% 2.031e+09 perf-stat.ps.cache-references
396957 -1.4% 391281 perf-stat.ps.context-switches
62546 -3.9% 60123 perf-stat.ps.cpu-migrations
1.457e+10 -4.0% 1.399e+10 perf-stat.ps.dTLB-loads
1.023e+10 -4.2% 9.801e+09 perf-stat.ps.dTLB-stores
86292583 -2.5% 84176193 perf-stat.ps.iTLB-load-misses
4.825e+10 -4.0% 4.632e+10 perf-stat.ps.instructions
0.64 ± 41% +40.2% 0.90 ± 32% perf-stat.ps.node-load-misses
21616330 -2.2% 21149342 perf-stat.ps.node-loads
0.56 ± 7% +76.9% 1.00 ± 33% perf-stat.ps.node-store-misses
26062702 +79.9% 46894016 perf-stat.ps.node-stores
2.929e+13 -3.5% 2.827e+13 perf-stat.total.instructions
39.46 -1.3 38.13 perf-profile.calltrace.cycles-pp.__x64_sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
39.37 -1.3 38.05 perf-profile.calltrace.cycles-pp.ksys_read.__x64_sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
38.18 -1.3 36.91 perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.__x64_sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
35.37 -1.2 34.19 perf-profile.calltrace.cycles-pp.__vfs_read.vfs_read.ksys_read.__x64_sys_read.do_syscall_64
34.98 -1.2 33.81 perf-profile.calltrace.cycles-pp.new_sync_read.__vfs_read.vfs_read.ksys_read.__x64_sys_read
34.12 -1.1 33.00 perf-profile.calltrace.cycles-pp.sock_read_iter.new_sync_read.__vfs_read.vfs_read.ksys_read
32.88 -1.0 31.84 perf-profile.calltrace.cycles-pp.sock_recvmsg.sock_read_iter.new_sync_read.__vfs_read.vfs_read
32.46 -1.0 31.43 perf-profile.calltrace.cycles-pp.unix_stream_recvmsg.sock_recvmsg.sock_read_iter.new_sync_read.__vfs_read
32.04 -1.0 31.03 perf-profile.calltrace.cycles-pp.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg.sock_read_iter.new_sync_read
2.70 ± 2% -0.8 1.93 perf-profile.calltrace.cycles-pp._raw_spin_lock.unix_stream_sendmsg.sock_sendmsg.sock_write_iter.new_sync_write
2.16 ± 2% -0.8 1.41 perf-profile.calltrace.cycles-pp.refcount_add_checked.skb_set_owner_w.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg
2.70 -0.7 1.95 perf-profile.calltrace.cycles-pp.skb_set_owner_w.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg.sock_write_iter
2.16 ± 2% -0.7 1.41 perf-profile.calltrace.cycles-pp.refcount_add_not_zero_checked.refcount_add_checked.skb_set_owner_w.sock_alloc_send_pskb.unix_stream_sendmsg
11.94 -0.4 11.50 perf-profile.calltrace.cycles-pp.consume_skb.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg.sock_read_iter
9.59 -0.4 9.18 perf-profile.calltrace.cycles-pp.unix_stream_read_actor.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg.sock_read_iter
9.49 -0.4 9.09 perf-profile.calltrace.cycles-pp.skb_copy_datagram_iter.unix_stream_read_actor.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg
9.20 -0.4 8.85 perf-profile.calltrace.cycles-pp.__skb_datagram_iter.skb_copy_datagram_iter.unix_stream_read_actor.unix_stream_read_generic.unix_stream_recvmsg
8.85 -0.3 8.54 perf-profile.calltrace.cycles-pp.simple_copy_to_iter.__skb_datagram_iter.skb_copy_datagram_iter.unix_stream_read_actor.unix_stream_read_generic
6.51 -0.3 6.21 perf-profile.calltrace.cycles-pp._copy_to_iter.simple_copy_to_iter.__skb_datagram_iter.skb_copy_datagram_iter.unix_stream_read_actor
7.29 -0.3 7.00 perf-profile.calltrace.cycles-pp.skb_release_all.consume_skb.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg
5.44 -0.2 5.20 perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string._copy_to_iter.simple_copy_to_iter.__skb_datagram_iter.skb_copy_datagram_iter
3.99 -0.2 3.79 perf-profile.calltrace.cycles-pp.syscall_slow_exit_work.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.44 ± 2% -0.2 2.24 ± 4% perf-profile.calltrace.cycles-pp.___cache_free.kfree.skb_free_head.skb_release_data.skb_release_all
3.23 -0.2 3.05 perf-profile.calltrace.cycles-pp.skb_release_head_state.skb_release_all.consume_skb.unix_stream_read_generic.unix_stream_recvmsg
3.28 -0.2 3.10 perf-profile.calltrace.cycles-pp.__audit_syscall_exit.syscall_slow_exit_work.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.74 -0.2 2.56 perf-profile.calltrace.cycles-pp.skb_queue_tail.unix_stream_sendmsg.sock_sendmsg.sock_write_iter.new_sync_write
3.05 -0.2 2.88 perf-profile.calltrace.cycles-pp.unix_destruct_scm.skb_release_head_state.skb_release_all.consume_skb.unix_stream_read_generic
1.53 ± 3% -0.2 1.36 perf-profile.calltrace.cycles-pp.syscall_return_via_sysret
4.44 -0.2 4.27 perf-profile.calltrace.cycles-pp.skb_copy_datagram_from_iter.unix_stream_sendmsg.sock_sendmsg.sock_write_iter.new_sync_write
2.62 -0.2 2.46 perf-profile.calltrace.cycles-pp.sock_wfree.unix_destruct_scm.skb_release_head_state.skb_release_all.consume_skb
1.95 -0.2 1.79 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.skb_queue_tail.unix_stream_sendmsg.sock_sendmsg.sock_write_iter
1.11 -0.1 0.97 ± 3% perf-profile.calltrace.cycles-pp._raw_spin_lock.___cache_free.kfree.skb_free_head.skb_release_data
1.53 ± 2% -0.1 1.39 ± 2% perf-profile.calltrace.cycles-pp.cache_alloc_refill.kmem_cache_alloc_node_trace.__kmalloc_node_track_caller.__kmalloc_reserve.__alloc_skb
1.02 ± 2% -0.1 0.89 ± 3% perf-profile.calltrace.cycles-pp.queued_spin_lock_slowpath._raw_spin_lock.___cache_free.kfree.skb_free_head
3.69 -0.1 3.56 perf-profile.calltrace.cycles-pp.kmem_cache_free.kfree_skbmem.consume_skb.unix_stream_read_generic.unix_stream_recvmsg
3.83 -0.1 3.70 perf-profile.calltrace.cycles-pp.kfree_skbmem.consume_skb.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg
1.60 -0.1 1.47 ± 2% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64
1.85 -0.1 1.73 perf-profile.calltrace.cycles-pp.syscall_trace_enter.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.60 -0.1 1.48 ± 2% perf-profile.calltrace.cycles-pp.skb_unlink.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg.sock_read_iter
2.70 -0.1 2.59 perf-profile.calltrace.cycles-pp._copy_from_iter.skb_copy_datagram_from_iter.unix_stream_sendmsg.sock_sendmsg.sock_write_iter
0.69 ± 3% -0.1 0.58 perf-profile.calltrace.cycles-pp.queued_spin_lock_slowpath._raw_spin_lock.cache_alloc_refill.kmem_cache_alloc_node_trace.__kmalloc_node_track_caller
1.77 -0.1 1.66 perf-profile.calltrace.cycles-pp.___cache_free.kmem_cache_free.kfree_skbmem.consume_skb.unix_stream_read_generic
0.74 ± 2% -0.1 0.64 perf-profile.calltrace.cycles-pp._raw_spin_lock.cache_alloc_refill.kmem_cache_alloc_node_trace.__kmalloc_node_track_caller.__kmalloc_reserve
1.82 -0.1 1.72 perf-profile.calltrace.cycles-pp.__fget_light.__fdget_pos.ksys_write.__x64_sys_write.do_syscall_64
1.90 -0.1 1.81 perf-profile.calltrace.cycles-pp.__fdget_pos.ksys_write.__x64_sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.98 ± 2% -0.1 0.89 ± 3% perf-profile.calltrace.cycles-pp.unroll_tree_refs.__audit_syscall_exit.syscall_slow_exit_work.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.05 -0.1 0.99 ± 3% perf-profile.calltrace.cycles-pp.unix_write_space.sock_wfree.unix_destruct_scm.skb_release_head_state.skb_release_all
0.97 ± 2% -0.1 0.92 ± 2% perf-profile.calltrace.cycles-pp.__audit_syscall_entry.syscall_trace_enter.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.65 -0.1 0.59 ± 3% perf-profile.calltrace.cycles-pp._raw_spin_lock.___cache_free.kmem_cache_free.kfree_skbmem.consume_skb
0.60 -0.1 0.55 ± 3% perf-profile.calltrace.cycles-pp.queued_spin_lock_slowpath._raw_spin_lock.___cache_free.kmem_cache_free.kfree_skbmem
0.70 -0.1 0.64 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.skb_unlink.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg
0.86 -0.1 0.81 perf-profile.calltrace.cycles-pp.free_block.___cache_free.kmem_cache_free.kfree_skbmem.consume_skb
2.03 -0.1 1.98 perf-profile.calltrace.cycles-pp.schedule_timeout.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg.sock_read_iter
0.66 ± 2% -0.1 0.60 perf-profile.calltrace.cycles-pp.refcount_inc_not_zero_checked.refcount_inc_checked.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg
1.98 -0.0 1.94 perf-profile.calltrace.cycles-pp.schedule.schedule_timeout.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg
0.69 ± 3% -0.0 0.64 ± 2% perf-profile.calltrace.cycles-pp.refcount_inc_checked.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg.sock_read_iter
1.94 -0.0 1.89 perf-profile.calltrace.cycles-pp.__sched_text_start.schedule.schedule_timeout.unix_stream_read_generic.unix_stream_recvmsg
0.75 ± 3% -0.0 0.71 perf-profile.calltrace.cycles-pp.deactivate_task.__sched_text_start.schedule.schedule_timeout.unix_stream_read_generic
0.70 ± 2% -0.0 0.67 perf-profile.calltrace.cycles-pp.cache_alloc_refill.kmem_cache_alloc_node.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb
1.86 ± 2% +0.2 2.08 perf-profile.calltrace.cycles-pp.refcount_inc_not_zero_checked.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg.sock_read_iter
92.65 +0.4 93.01 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
90.21 +0.6 90.79 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.0 1.00 ± 4% perf-profile.calltrace.cycles-pp.memset_erms.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb.unix_stream_sendmsg
0.00 +1.1 1.06 perf-profile.calltrace.cycles-pp.memset_erms.__kmalloc_node_track_caller.__kmalloc_reserve.__alloc_skb.alloc_skb_with_frags
4.41 +1.7 6.13 perf-profile.calltrace.cycles-pp.kmem_cache_alloc_node_trace.__kmalloc_node_track_caller.__kmalloc_reserve.__alloc_skb.alloc_skb_with_frags
42.73 +2.2 44.95 perf-profile.calltrace.cycles-pp.__x64_sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
42.64 +2.2 44.86 perf-profile.calltrace.cycles-pp.ksys_write.__x64_sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
40.20 +2.4 42.55 perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.__x64_sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
37.99 +2.4 40.41 perf-profile.calltrace.cycles-pp.__vfs_write.vfs_write.ksys_write.__x64_sys_write.do_syscall_64
37.73 +2.4 40.18 perf-profile.calltrace.cycles-pp.new_sync_write.__vfs_write.vfs_write.ksys_write.__x64_sys_write
36.93 +2.5 39.41 perf-profile.calltrace.cycles-pp.sock_write_iter.new_sync_write.__vfs_write.vfs_write.ksys_write
35.68 +2.5 38.19 perf-profile.calltrace.cycles-pp.sock_sendmsg.sock_write_iter.new_sync_write.__vfs_write.vfs_write
34.85 +2.5 37.36 perf-profile.calltrace.cycles-pp.unix_stream_sendmsg.sock_sendmsg.sock_write_iter.new_sync_write.__vfs_write
4.99 +2.8 7.79 perf-profile.calltrace.cycles-pp.__kmalloc_node_track_caller.__kmalloc_reserve.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb
5.22 +3.0 8.21 perf-profile.calltrace.cycles-pp.__kmalloc_reserve.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb.unix_stream_sendmsg
16.62 +4.0 20.60 perf-profile.calltrace.cycles-pp.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg.sock_write_iter.new_sync_write
12.22 +4.7 16.95 perf-profile.calltrace.cycles-pp.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg
12.55 +4.7 17.29 perf-profile.calltrace.cycles-pp.alloc_skb_with_frags.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg.sock_write_iter
39.76 -1.3 38.41 perf-profile.children.cycles-pp.__x64_sys_read
39.48 -1.3 38.16 perf-profile.children.cycles-pp.ksys_read
38.25 -1.3 36.99 perf-profile.children.cycles-pp.vfs_read
35.48 -1.2 34.28 perf-profile.children.cycles-pp.__vfs_read
35.06 -1.2 33.89 perf-profile.children.cycles-pp.new_sync_read
34.37 -1.1 33.23 perf-profile.children.cycles-pp.sock_read_iter
6.75 -1.1 5.62 perf-profile.children.cycles-pp._raw_spin_lock
32.74 -1.0 31.70 perf-profile.children.cycles-pp.unix_stream_recvmsg
33.00 -1.0 31.95 perf-profile.children.cycles-pp.sock_recvmsg
32.16 -1.0 31.14 perf-profile.children.cycles-pp.unix_stream_read_generic
2.74 -0.8 1.98 perf-profile.children.cycles-pp.skb_set_owner_w
2.21 ± 2% -0.7 1.46 perf-profile.children.cycles-pp.refcount_add_checked
2.33 -0.7 1.59 perf-profile.children.cycles-pp.refcount_add_not_zero_checked
12.06 -0.4 11.61 perf-profile.children.cycles-pp.consume_skb
9.63 -0.4 9.22 perf-profile.children.cycles-pp.unix_stream_read_actor
9.56 -0.4 9.15 perf-profile.children.cycles-pp.skb_copy_datagram_iter
9.26 -0.4 8.90 perf-profile.children.cycles-pp.__skb_datagram_iter
8.99 -0.4 8.63 perf-profile.children.cycles-pp.simple_copy_to_iter
4.35 -0.4 3.99 perf-profile.children.cycles-pp.queued_spin_lock_slowpath
7.37 -0.3 7.06 perf-profile.children.cycles-pp.skb_release_all
4.26 ± 2% -0.3 3.95 perf-profile.children.cycles-pp.___cache_free
6.56 -0.3 6.26 perf-profile.children.cycles-pp._copy_to_iter
7.13 -0.3 6.83 perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
4.06 -0.3 3.79 perf-profile.children.cycles-pp._raw_spin_lock_irqsave
3.06 -0.3 2.80 perf-profile.children.cycles-pp.syscall_return_via_sysret
4.12 -0.2 3.91 perf-profile.children.cycles-pp.syscall_slow_exit_work
2.78 -0.2 2.59 perf-profile.children.cycles-pp.skb_queue_tail
2.46 ± 2% -0.2 2.27 perf-profile.children.cycles-pp.__might_sleep
3.28 -0.2 3.10 perf-profile.children.cycles-pp.skb_release_head_state
2.74 -0.2 2.57 perf-profile.children.cycles-pp.sock_wfree
3.31 -0.2 3.14 ± 2% perf-profile.children.cycles-pp.unix_destruct_scm
4.49 -0.2 4.32 perf-profile.children.cycles-pp.skb_copy_datagram_from_iter
3.48 -0.2 3.31 perf-profile.children.cycles-pp.__audit_syscall_exit
2.24 -0.2 2.08 perf-profile.children.cycles-pp.cache_alloc_refill
3.79 -0.2 3.64 perf-profile.children.cycles-pp.kmem_cache_free
3.90 -0.1 3.77 perf-profile.children.cycles-pp.kfree_skbmem
1.62 -0.1 1.49 perf-profile.children.cycles-pp.entry_SYSCALL_64
2.77 -0.1 2.65 perf-profile.children.cycles-pp._copy_from_iter
1.94 -0.1 1.82 perf-profile.children.cycles-pp.syscall_trace_enter
2.52 -0.1 2.40 perf-profile.children.cycles-pp.__fget_light
3.14 ± 2% -0.1 3.02 perf-profile.children.cycles-pp.__check_object_size
1.58 ± 3% -0.1 1.46 perf-profile.children.cycles-pp.___might_sleep
1.61 -0.1 1.50 ± 2% perf-profile.children.cycles-pp.skb_unlink
2.67 -0.1 2.56 perf-profile.children.cycles-pp.__fdget_pos
1.77 ± 2% -0.1 1.67 perf-profile.children.cycles-pp.free_block
1.10 ± 3% -0.1 1.01 ± 3% perf-profile.children.cycles-pp.__audit_syscall_entry
0.57 ± 5% -0.1 0.48 ± 9% perf-profile.children.cycles-pp.memcg_kmem_put_cache
1.08 ± 2% -0.1 1.00 ± 2% perf-profile.children.cycles-pp.unroll_tree_refs
0.93 ± 4% -0.1 0.84 perf-profile.children.cycles-pp.__might_fault
0.18 ± 7% -0.1 0.11 ± 4% perf-profile.children.cycles-pp.should_failslab
0.56 ± 6% -0.1 0.50 perf-profile.children.cycles-pp.rcu_all_qs
2.70 -0.1 2.64 perf-profile.children.cycles-pp.schedule
2.75 -0.1 2.69 perf-profile.children.cycles-pp.schedule_timeout
2.83 -0.1 2.78 perf-profile.children.cycles-pp.__sched_text_start
1.07 -0.1 1.02 perf-profile.children.cycles-pp.__list_del_entry_valid
0.70 ± 3% -0.1 0.65 ± 2% perf-profile.children.cycles-pp.refcount_inc_checked
0.30 -0.0 0.25 ± 6% perf-profile.children.cycles-pp.copyin
0.67 -0.0 0.62 perf-profile.children.cycles-pp._cond_resched
1.19 -0.0 1.15 ± 2% perf-profile.children.cycles-pp.security_file_permission
0.74 ± 2% -0.0 0.70 ± 2% perf-profile.children.cycles-pp.wait_for_unix_gc
1.14 -0.0 1.10 ± 2% perf-profile.children.cycles-pp.unix_write_space
0.32 ± 2% -0.0 0.29 ± 3% perf-profile.children.cycles-pp.__x86_indirect_thunk_rax
0.98 -0.0 0.94 perf-profile.children.cycles-pp.deactivate_task
0.22 -0.0 0.20 ± 5% perf-profile.children.cycles-pp.update_rq_clock
0.73 -0.0 0.71 ± 2% perf-profile.children.cycles-pp.dequeue_task_fair
0.16 ± 2% -0.0 0.15 perf-profile.children.cycles-pp.set_next_entity
0.08 ± 8% +0.0 0.10 ± 4% perf-profile.children.cycles-pp.cache_grow_begin
0.09 ± 4% +0.0 0.12 ± 11% perf-profile.children.cycles-pp.maybe_add_creds
0.00 +0.1 0.08 ± 6% perf-profile.children.cycles-pp.memset
0.97 ± 5% +0.2 1.13 ± 3% perf-profile.children.cycles-pp.__virt_addr_valid
2.52 ± 2% +0.2 2.69 perf-profile.children.cycles-pp.refcount_inc_not_zero_checked
92.69 +0.4 93.05 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
90.39 +0.6 90.97 perf-profile.children.cycles-pp.do_syscall_64
4.64 +1.8 6.49 perf-profile.children.cycles-pp.kmem_cache_alloc_node_trace
0.00 +2.1 2.11 ± 2% perf-profile.children.cycles-pp.memset_erms
43.03 +2.2 45.23 perf-profile.children.cycles-pp.__x64_sys_write
42.76 +2.2 44.96 perf-profile.children.cycles-pp.ksys_write
40.25 +2.3 42.59 perf-profile.children.cycles-pp.vfs_write
38.06 +2.4 40.48 perf-profile.children.cycles-pp.__vfs_write
37.80 +2.4 40.24 perf-profile.children.cycles-pp.new_sync_write
37.11 +2.5 39.57 perf-profile.children.cycles-pp.sock_write_iter
35.75 +2.5 38.24 perf-profile.children.cycles-pp.sock_sendmsg
35.18 +2.5 37.70 perf-profile.children.cycles-pp.unix_stream_sendmsg
5.05 +2.8 7.89 perf-profile.children.cycles-pp.__kmalloc_node_track_caller
5.26 +3.0 8.30 perf-profile.children.cycles-pp.__kmalloc_reserve
16.66 +4.0 20.62 perf-profile.children.cycles-pp.sock_alloc_send_pskb
12.30 +4.7 17.02 perf-profile.children.cycles-pp.__alloc_skb
12.59 +4.7 17.33 perf-profile.children.cycles-pp.alloc_skb_with_frags
3.43 ± 2% -0.8 2.61 perf-profile.self.cycles-pp._raw_spin_lock
2.31 -0.7 1.57 perf-profile.self.cycles-pp.refcount_add_not_zero_checked
4.33 -0.3 3.98 perf-profile.self.cycles-pp.queued_spin_lock_slowpath
7.10 -0.3 6.80 perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
3.03 -0.3 2.77 perf-profile.self.cycles-pp.syscall_return_via_sysret
1.87 -0.2 1.62 ± 2% perf-profile.self.cycles-pp.__check_object_size
3.01 -0.2 2.80 perf-profile.self.cycles-pp._raw_spin_lock_irqsave
1.76 -0.2 1.59 perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
1.37 ± 2% -0.1 1.24 ± 2% perf-profile.self.cycles-pp.sock_def_readable
1.62 -0.1 1.49 perf-profile.self.cycles-pp.entry_SYSCALL_64
2.46 -0.1 2.34 perf-profile.self.cycles-pp.__fget_light
1.80 -0.1 1.69 perf-profile.self.cycles-pp.unix_stream_read_generic
1.49 ± 2% -0.1 1.38 ± 2% perf-profile.self.cycles-pp.___might_sleep
0.90 ± 3% -0.1 0.81 ± 2% perf-profile.self.cycles-pp.__audit_syscall_entry
1.07 ± 2% -0.1 0.98 ± 3% perf-profile.self.cycles-pp.unroll_tree_refs
1.14 ± 2% -0.1 1.06 perf-profile.self.cycles-pp.sock_read_iter
0.66 ± 3% -0.1 0.58 ± 2% perf-profile.self.cycles-pp.sock_wfree
1.04 ± 3% -0.1 0.98 perf-profile.self.cycles-pp.__might_sleep
1.04 -0.1 0.98 perf-profile.self.cycles-pp.__list_del_entry_valid
1.86 -0.1 1.80 perf-profile.self.cycles-pp.__audit_syscall_exit
0.46 ± 8% -0.1 0.40 ± 3% perf-profile.self.cycles-pp.rcu_all_qs
0.98 ± 2% -0.1 0.93 ± 4% perf-profile.self.cycles-pp.free_block
0.74 ± 2% -0.1 0.69 ± 2% perf-profile.self.cycles-pp.vfs_read
0.58 -0.0 0.54 ± 2% perf-profile.self.cycles-pp.syscall_slow_exit_work
0.11 ± 9% -0.0 0.07 ± 5% perf-profile.self.cycles-pp.should_failslab
0.74 -0.0 0.70 perf-profile.self.cycles-pp.new_sync_read
0.37 ± 3% -0.0 0.34 ± 2% perf-profile.self.cycles-pp._cond_resched
0.64 -0.0 0.60 perf-profile.self.cycles-pp._copy_from_iter
0.18 ± 2% -0.0 0.15 ± 5% perf-profile.self.cycles-pp.copyin
0.30 ± 5% -0.0 0.27 ± 5% perf-profile.self.cycles-pp.__skb_datagram_iter
0.25 ± 4% -0.0 0.23 ± 6% perf-profile.self.cycles-pp.simple_copy_to_iter
0.23 -0.0 0.21 ± 5% perf-profile.self.cycles-pp.sock_sendmsg
0.14 ± 5% -0.0 0.12 perf-profile.self.cycles-pp.__list_add_valid
0.18 -0.0 0.16 ± 4% perf-profile.self.cycles-pp.__x86_indirect_thunk_rax
0.12 ± 4% -0.0 0.11 ± 6% perf-profile.self.cycles-pp.security_socket_sendmsg
0.09 ± 4% +0.0 0.11 ± 9% perf-profile.self.cycles-pp.maybe_add_creds
0.11 ± 21% +0.0 0.15 ± 24% perf-profile.self.cycles-pp.kmalloc_slab
0.19 ± 5% +0.1 0.26 ± 12% perf-profile.self.cycles-pp.__kmalloc_reserve
1.11 ± 2% +0.1 1.19 perf-profile.self.cycles-pp.kfree
0.13 ± 11% +0.1 0.22 ± 8% perf-profile.self.cycles-pp.__kmalloc_node_track_caller
0.94 ± 5% +0.1 1.09 ± 3% perf-profile.self.cycles-pp.__virt_addr_valid
2.51 ± 2% +0.2 2.68 perf-profile.self.cycles-pp.refcount_inc_not_zero_checked
1.66 ± 5% +0.8 2.42 perf-profile.self.cycles-pp.__alloc_skb
1.26 +1.9 3.15 ± 2% perf-profile.self.cycles-pp.kmem_cache_alloc_node_trace
0.00 +2.1 2.07 ± 2% perf-profile.self.cycles-pp.memset_erms
hackbench.throughput
100000 +-+----------------------------------------------------------------+
90000 +-+++ +.++ +.++.++.+ + ++.+ ++ +.++ +.++ +.+ ++ +.+ + +|
|O OO OO O: O O O O OO :O O OO O O: OO O O: OO :O O: O OO :|
80000 +-+ : : : : : : : : :: : : : : : : :: : : : :|
70000 +-+ : : : : : : : : :: : : : : : : :: : : : :|
|: : : : : : :: : : : : : : : : : : : : : : :: : |
60000 +-+ : : : : : :: : : : : : : : : : : : : : : :: : |
50000 +-+ : : : : : :: : : : : : : : : : : : : : : :: : |
40000 +-+ :: :: :: :: :: :: :: :: :: :: :: :: |
| :: :: :: :: :: :: :: :: :: :: :: :: |
30000 +-+ :: :: :: :: :: :: :: :: :: :: :: :: |
20000 +-+ : : : : : : : : : : : : |
| : : : : : : : : : : : : |
10000 +-+ : : : : : : : : : : : : |
0 O-+-------O-O---O-O---O----O---O----O---O----O---O----O---O--O-----+
hackbench.time.involuntary_context_switches
180000 +-+----------------------------------------------------------------+
| O O O OO O |
160000 +O+ O O O O O O O OO O O O O O O O OO |
140000 +-+O O OO |
| + |
120000 +-+ + +.++.++.+ + : +. + |
100000 +-+++ +.+: : : : ++.+ ++ +.++ +.+: : + :+ +.+ + +|
|: : : : : : : : : :: : : : : : : :: : : : :|
80000 +-+ : : : : : :: : : : : : : : : : : : : : : : :|
60000 +-+ : : : : : :: : : : : : : : : : : : : : : :: : |
|: : : :: :: :: :: : : : : :: :: : : :: :: |
40000 +-+ :: :: :: :: :: :: :: :: :: :: :: :: |
20000 +-+ : : : :: :: : : : : : :: :: |
| : : : : : : : : : : : : |
0 O-+-------O-O---O-O---O----O---O----O---O----O---O----O---O--O-----+
[*] bisect-good sample
[O] bisect-bad sample
***************************************************************************************************
lkp-cfl-e1: 16 threads Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz with 32G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_threads/rootfs/tbox_group/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.2-clear_lck_7595/process/100%/clear-ota-25590-x86_64-2018-10-18.cgz/lkp-cfl-e1/hackbench/0xb8
commit:
ba5c5e4a5d ("arm64: move jump_label_init() before parse_early_param()")
6471384af2 ("mm: security: introduce init_on_alloc=1 and init_on_free=1 boot options")
ba5c5e4a5da443e8 6471384af2a6530696fc0203baf
---------------- ---------------------------
%stddev %change %stddev
\ | \
88494 -4.3% 84657 hackbench.throughput
138438 ± 4% +31.9% 182656 ± 7% hackbench.time.involuntary_context_switches
694.79 -5.9% 653.85 hackbench.time.user_time
5.568e+08 -4.3% 5.328e+08 hackbench.workload
8544005 ± 38% -45.2% 4685694 ± 6% cpuidle.C1E.time
5.31 ± 11% +29.2% 6.86 ± 16% sched_debug.cpu.nr_uninterruptible.stddev
3.48 +4.6% 3.64 turbostat.RAMWatt
0.84 ± 2% +0.1 0.95 mpstat.cpu.-1.irq%
0.05 ± 3% +0.0 0.06 ± 10% mpstat.cpu.-1.soft%
15632269 -4.1% 14984464 ± 3% proc-vmstat.numa_hit
15632269 -4.1% 14984464 ± 3% proc-vmstat.numa_local
15715101 -4.2% 15056704 ± 3% proc-vmstat.pgalloc_normal
15686075 -4.2% 15022294 ± 3% proc-vmstat.pgfree
45.36 -2.9% 44.03 perf-stat.i.MPKI
9.663e+09 -2.7% 9.402e+09 perf-stat.i.branch-instructions
1.088e+08 -2.7% 1.058e+08 perf-stat.i.branch-misses
13.34 +0.3 13.62 perf-stat.i.cache-miss-rate%
2.908e+08 -4.1% 2.787e+08 perf-stat.i.cache-misses
2.179e+09 -6.0% 2.048e+09 perf-stat.i.cache-references
1.26 +3.4% 1.31 perf-stat.i.cpi
61720 -2.9% 59924 perf-stat.i.cpu-migrations
210.15 +4.0% 218.46 perf-stat.i.cycles-between-cache-misses
1.449e+10 -3.3% 1.401e+10 perf-stat.i.dTLB-loads
1.015e+10 -3.5% 9.79e+09 perf-stat.i.dTLB-stores
4.808e+10 -3.3% 4.651e+10 perf-stat.i.instructions
0.79 -3.3% 0.77 perf-stat.i.ipc
20904568 -2.2% 20436868 perf-stat.i.node-loads
25841390 +79.8% 46459814 perf-stat.i.node-stores
45.31 -2.8% 44.03 perf-stat.overall.MPKI
13.35 +0.3 13.61 perf-stat.overall.cache-miss-rate%
1.26 +3.5% 1.31 perf-stat.overall.cpi
208.67 +4.4% 217.94 perf-stat.overall.cycles-between-cache-misses
0.79 -3.4% 0.77 perf-stat.overall.ipc
9.647e+09 -2.7% 9.387e+09 perf-stat.ps.branch-instructions
1.086e+08 -2.7% 1.056e+08 perf-stat.ps.branch-misses
2.903e+08 -4.1% 2.783e+08 perf-stat.ps.cache-misses
2.175e+09 -6.0% 2.045e+09 perf-stat.ps.cache-references
61620 -2.9% 59827 perf-stat.ps.cpu-migrations
1.447e+10 -3.3% 1.399e+10 perf-stat.ps.dTLB-loads
1.013e+10 -3.5% 9.774e+09 perf-stat.ps.dTLB-stores
4.8e+10 -3.3% 4.643e+10 perf-stat.ps.instructions
20870674 -2.2% 20403728 perf-stat.ps.node-loads
25799504 +79.8% 46384429 perf-stat.ps.node-stores
2.964e+13 -3.4% 2.863e+13 perf-stat.total.instructions
38.02 -1.3 36.70 perf-profile.calltrace.cycles-pp.ksys_read.__x64_sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
38.11 -1.3 36.80 perf-profile.calltrace.cycles-pp.__x64_sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
36.86 -1.3 35.58 perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.__x64_sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
34.13 -1.2 32.97 perf-profile.calltrace.cycles-pp.__vfs_read.vfs_read.ksys_read.__x64_sys_read.do_syscall_64
33.76 -1.1 32.61 perf-profile.calltrace.cycles-pp.new_sync_read.__vfs_read.vfs_read.ksys_read.__x64_sys_read
32.94 -1.1 31.84 perf-profile.calltrace.cycles-pp.sock_read_iter.new_sync_read.__vfs_read.vfs_read.ksys_read
31.73 -1.0 30.70 perf-profile.calltrace.cycles-pp.sock_recvmsg.sock_read_iter.new_sync_read.__vfs_read.vfs_read
31.32 -1.0 30.30 perf-profile.calltrace.cycles-pp.unix_stream_recvmsg.sock_recvmsg.sock_read_iter.new_sync_read.__vfs_read
30.92 -1.0 29.92 perf-profile.calltrace.cycles-pp.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg.sock_read_iter.new_sync_read
2.61 ± 3% -0.6 1.97 perf-profile.calltrace.cycles-pp.skb_set_owner_w.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg.sock_write_iter
2.02 ± 4% -0.6 1.39 perf-profile.calltrace.cycles-pp.refcount_add_not_zero_checked.refcount_add_checked.skb_set_owner_w.sock_alloc_send_pskb.unix_stream_sendmsg
2.03 ± 4% -0.6 1.39 perf-profile.calltrace.cycles-pp.refcount_add_checked.skb_set_owner_w.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg
2.68 -0.6 2.05 ± 2% perf-profile.calltrace.cycles-pp._raw_spin_lock.unix_stream_sendmsg.sock_sendmsg.sock_write_iter.new_sync_write
6.64 ± 3% -0.4 6.25 ± 2% perf-profile.calltrace.cycles-pp.sock_def_readable.unix_stream_sendmsg.sock_sendmsg.sock_write_iter.new_sync_write
9.06 -0.4 8.68 ± 2% perf-profile.calltrace.cycles-pp.unix_stream_read_actor.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg.sock_read_iter
8.98 -0.4 8.59 ± 2% perf-profile.calltrace.cycles-pp.skb_copy_datagram_iter.unix_stream_read_actor.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg
8.74 -0.4 8.36 ± 2% perf-profile.calltrace.cycles-pp.__skb_datagram_iter.skb_copy_datagram_iter.unix_stream_read_actor.unix_stream_read_generic.unix_stream_recvmsg
8.41 -0.4 8.05 ± 2% perf-profile.calltrace.cycles-pp.simple_copy_to_iter.__skb_datagram_iter.skb_copy_datagram_iter.unix_stream_read_actor.unix_stream_read_generic
11.82 -0.3 11.47 perf-profile.calltrace.cycles-pp.consume_skb.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg.sock_read_iter
6.11 -0.3 5.79 perf-profile.calltrace.cycles-pp._copy_to_iter.simple_copy_to_iter.__skb_datagram_iter.skb_copy_datagram_iter.unix_stream_read_actor
5.04 -0.3 4.79 ± 2% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string._copy_to_iter.simple_copy_to_iter.__skb_datagram_iter.skb_copy_datagram_iter
3.55 ± 3% -0.2 3.36 ± 2% perf-profile.calltrace.cycles-pp.__wake_up_common.__wake_up_common_lock.__wake_up_sync_key.sock_def_readable.unix_stream_sendmsg
3.57 -0.2 3.38 perf-profile.calltrace.cycles-pp.kmem_cache_free.kfree_skbmem.consume_skb.unix_stream_read_generic.unix_stream_recvmsg
7.34 -0.2 7.16 perf-profile.calltrace.cycles-pp.skb_release_all.consume_skb.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg
3.67 -0.2 3.50 perf-profile.calltrace.cycles-pp.kfree_skbmem.consume_skb.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg
3.47 -0.2 3.31 perf-profile.calltrace.cycles-pp.skb_release_head_state.skb_release_all.consume_skb.unix_stream_read_generic.unix_stream_recvmsg
2.90 -0.2 2.75 perf-profile.calltrace.cycles-pp.sock_wfree.unix_destruct_scm.skb_release_head_state.skb_release_all.consume_skb
1.85 ± 3% -0.1 1.70 ± 4% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__wake_up_common_lock.__wake_up_sync_key.sock_def_readable.unix_stream_sendmsg
1.53 ± 3% -0.1 1.39 ± 4% perf-profile.calltrace.cycles-pp.syscall_return_via_sysret
2.32 ± 2% -0.1 2.18 perf-profile.calltrace.cycles-pp.___cache_free.kfree.skb_free_head.skb_release_data.skb_release_all
4.39 -0.1 4.25 perf-profile.calltrace.cycles-pp.skb_copy_datagram_from_iter.unix_stream_sendmsg.sock_sendmsg.sock_write_iter.new_sync_write
1.77 ± 3% -0.1 1.63 ± 4% perf-profile.calltrace.cycles-pp.queued_spin_lock_slowpath._raw_spin_lock_irqsave.__wake_up_common_lock.__wake_up_sync_key.sock_def_readable
3.28 -0.1 3.15 perf-profile.calltrace.cycles-pp.unix_destruct_scm.skb_release_head_state.skb_release_all.consume_skb.unix_stream_read_generic
2.04 ± 2% -0.1 1.92 ± 2% perf-profile.calltrace.cycles-pp.schedule_timeout.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg.sock_read_iter
2.67 -0.1 2.55 perf-profile.calltrace.cycles-pp.skb_queue_tail.unix_stream_sendmsg.sock_sendmsg.sock_write_iter.new_sync_write
1.96 ± 2% -0.1 1.84 ± 2% perf-profile.calltrace.cycles-pp.__sched_text_start.schedule.schedule_timeout.unix_stream_read_generic.unix_stream_recvmsg
2.00 ± 2% -0.1 1.88 ± 2% perf-profile.calltrace.cycles-pp.schedule.schedule_timeout.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg
2.68 ± 2% -0.1 2.57 ± 2% perf-profile.calltrace.cycles-pp._copy_from_iter.skb_copy_datagram_from_iter.unix_stream_sendmsg.sock_sendmsg.sock_write_iter
1.43 ± 2% -0.1 1.33 perf-profile.calltrace.cycles-pp.cache_alloc_refill.kmem_cache_alloc_node_trace.__kmalloc_node_track_caller.__kmalloc_reserve.__alloc_skb
1.01 ± 3% -0.1 0.91 ± 2% perf-profile.calltrace.cycles-pp._raw_spin_lock.___cache_free.kfree.skb_free_head.skb_release_data
3.91 -0.1 3.81 perf-profile.calltrace.cycles-pp.syscall_slow_exit_work.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.40 ± 4% -0.1 1.29 ± 2% perf-profile.calltrace.cycles-pp.skb_unlink.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg.sock_read_iter
1.71 ± 2% -0.1 1.61 perf-profile.calltrace.cycles-pp.___cache_free.kmem_cache_free.kfree_skbmem.consume_skb.unix_stream_read_generic
0.93 ± 3% -0.1 0.83 ± 2% perf-profile.calltrace.cycles-pp.queued_spin_lock_slowpath._raw_spin_lock.___cache_free.kfree.skb_free_head
3.24 -0.1 3.15 perf-profile.calltrace.cycles-pp.__audit_syscall_exit.syscall_slow_exit_work.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.38 ± 2% -0.1 1.29 perf-profile.calltrace.cycles-pp.rw_verify_area.vfs_read.ksys_read.__x64_sys_read.do_syscall_64
3.74 -0.1 3.66 perf-profile.calltrace.cycles-pp.kmem_cache_alloc_node.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb.unix_stream_sendmsg
0.96 ± 4% -0.1 0.89 ± 2% perf-profile.calltrace.cycles-pp.stack_trace_save_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.activate_task
0.67 ± 3% -0.1 0.60 perf-profile.calltrace.cycles-pp._raw_spin_lock.cache_alloc_refill.kmem_cache_alloc_node_trace.__kmalloc_node_track_caller.__kmalloc_reserve
0.97 -0.1 0.90 perf-profile.calltrace.cycles-pp.security_file_permission.rw_verify_area.vfs_read.ksys_read.__x64_sys_read
0.61 ± 3% -0.1 0.55 perf-profile.calltrace.cycles-pp.queued_spin_lock_slowpath._raw_spin_lock.cache_alloc_refill.kmem_cache_alloc_node_trace.__kmalloc_node_track_caller
0.81 ± 4% -0.1 0.75 perf-profile.calltrace.cycles-pp.arch_stack_walk.stack_trace_save_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair
0.57 ± 5% -0.0 0.52 ± 2% perf-profile.calltrace.cycles-pp._raw_spin_unlock_irqrestore.skb_queue_tail.unix_stream_sendmsg.sock_sendmsg.sock_write_iter
0.61 ± 2% -0.0 0.57 ± 3% perf-profile.calltrace.cycles-pp._raw_spin_lock.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg.sock_read_iter
0.66 -0.0 0.61 ± 3% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.skb_unlink.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg
0.65 -0.0 0.61 ± 4% perf-profile.calltrace.cycles-pp.wait_for_unix_gc.unix_stream_sendmsg.sock_sendmsg.sock_write_iter.new_sync_write
1.13 ± 2% +0.1 1.23 perf-profile.calltrace.cycles-pp.ksize.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb.unix_stream_sendmsg
1.10 +0.1 1.21 perf-profile.calltrace.cycles-pp.__ksize.ksize.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb
0.38 ± 57% +0.1 0.53 perf-profile.calltrace.cycles-pp.refcount_dec_and_test_checked.consume_skb.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg
1.80 ± 2% +0.2 1.96 perf-profile.calltrace.cycles-pp.refcount_inc_not_zero_checked.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg.sock_read_iter
92.56 +0.4 92.96 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
90.14 +0.7 90.81 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.9 0.86 ± 6% perf-profile.calltrace.cycles-pp.memset_erms.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb.unix_stream_sendmsg
0.00 +0.9 0.89 ± 8% perf-profile.calltrace.cycles-pp.memset_erms.__kmalloc_node_track_caller.__kmalloc_reserve.__alloc_skb.alloc_skb_with_frags
4.35 +1.5 5.85 perf-profile.calltrace.cycles-pp.kmem_cache_alloc_node_trace.__kmalloc_node_track_caller.__kmalloc_reserve.__alloc_skb.alloc_skb_with_frags
44.05 +2.2 46.24 perf-profile.calltrace.cycles-pp.ksys_write.__x64_sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
44.14 +2.2 46.34 perf-profile.calltrace.cycles-pp.__x64_sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
41.66 +2.3 43.94 perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.__x64_sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
39.46 +2.4 41.82 perf-profile.calltrace.cycles-pp.__vfs_write.vfs_write.ksys_write.__x64_sys_write.do_syscall_64
39.21 +2.4 41.58 perf-profile.calltrace.cycles-pp.new_sync_write.__vfs_write.vfs_write.ksys_write.__x64_sys_write
38.46 +2.4 40.85 perf-profile.calltrace.cycles-pp.sock_write_iter.new_sync_write.__vfs_write.vfs_write.ksys_write
4.91 +2.4 7.30 perf-profile.calltrace.cycles-pp.__kmalloc_node_track_caller.__kmalloc_reserve.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb
37.20 +2.4 39.63 perf-profile.calltrace.cycles-pp.sock_sendmsg.sock_write_iter.new_sync_write.__vfs_write.vfs_write
36.36 +2.4 38.80 perf-profile.calltrace.cycles-pp.unix_stream_sendmsg.sock_sendmsg.sock_write_iter.new_sync_write.__vfs_write
5.15 +2.6 7.74 perf-profile.calltrace.cycles-pp.__kmalloc_reserve.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb.unix_stream_sendmsg
16.15 +3.9 20.05 perf-profile.calltrace.cycles-pp.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg.sock_write_iter.new_sync_write
11.73 +4.5 16.21 perf-profile.calltrace.cycles-pp.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg
12.06 +4.5 16.55 perf-profile.calltrace.cycles-pp.alloc_skb_with_frags.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg.sock_write_iter
38.39 -1.3 37.06 perf-profile.children.cycles-pp.__x64_sys_read
38.13 -1.3 36.81 perf-profile.children.cycles-pp.ksys_read
36.94 -1.3 35.66 perf-profile.children.cycles-pp.vfs_read
34.24 -1.2 33.07 perf-profile.children.cycles-pp.__vfs_read
33.84 -1.2 32.68 perf-profile.children.cycles-pp.new_sync_read
33.17 -1.1 32.06 perf-profile.children.cycles-pp.sock_read_iter
31.60 -1.0 30.55 perf-profile.children.cycles-pp.unix_stream_recvmsg
31.85 -1.0 30.81 perf-profile.children.cycles-pp.sock_recvmsg
31.02 -1.0 30.02 perf-profile.children.cycles-pp.unix_stream_read_generic
6.85 -1.0 5.88 perf-profile.children.cycles-pp._raw_spin_lock
2.65 ± 3% -0.6 2.00 perf-profile.children.cycles-pp.skb_set_owner_w
2.08 ± 4% -0.6 1.44 perf-profile.children.cycles-pp.refcount_add_checked
2.23 ± 4% -0.6 1.62 perf-profile.children.cycles-pp.refcount_add_not_zero_checked
6.85 ± 2% -0.4 6.44 ± 2% perf-profile.children.cycles-pp.sock_def_readable
9.10 -0.4 8.70 ± 2% perf-profile.children.cycles-pp.unix_stream_read_actor
9.04 -0.4 8.65 ± 2% perf-profile.children.cycles-pp.skb_copy_datagram_iter
8.79 -0.4 8.41 ± 2% perf-profile.children.cycles-pp.__skb_datagram_iter
8.50 -0.4 8.14 ± 2% perf-profile.children.cycles-pp.simple_copy_to_iter
11.93 -0.3 11.59 perf-profile.children.cycles-pp.consume_skb
5.62 -0.3 5.29 perf-profile.children.cycles-pp.queued_spin_lock_slowpath
6.17 -0.3 5.84 perf-profile.children.cycles-pp._copy_to_iter
6.71 -0.3 6.42 perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
4.08 -0.2 3.84 perf-profile.children.cycles-pp.___cache_free
3.02 -0.2 2.80 ± 2% perf-profile.children.cycles-pp.syscall_return_via_sysret
5.15 -0.2 4.93 perf-profile.children.cycles-pp._raw_spin_lock_irqsave
3.66 -0.2 3.46 perf-profile.children.cycles-pp.kmem_cache_free
7.41 -0.2 7.22 perf-profile.children.cycles-pp.skb_release_all
3.74 -0.2 3.58 perf-profile.children.cycles-pp.kfree_skbmem
3.01 -0.2 2.85 perf-profile.children.cycles-pp.sock_wfree
3.51 -0.2 3.36 perf-profile.children.cycles-pp.skb_release_head_state
2.38 ± 2% -0.1 2.23 perf-profile.children.cycles-pp.__might_sleep
4.44 -0.1 4.30 perf-profile.children.cycles-pp.skb_copy_datagram_from_iter
3.53 -0.1 3.40 perf-profile.children.cycles-pp.unix_destruct_scm
1.55 ± 3% -0.1 1.42 ± 2% perf-profile.children.cycles-pp.___might_sleep
2.10 ± 2% -0.1 1.97 perf-profile.children.cycles-pp.cache_alloc_refill
2.70 -0.1 2.58 perf-profile.children.cycles-pp.skb_queue_tail
2.75 ± 2% -0.1 2.64 ± 2% perf-profile.children.cycles-pp._copy_from_iter
1.84 -0.1 1.74 perf-profile.children.cycles-pp.rw_verify_area
4.03 -0.1 3.93 perf-profile.children.cycles-pp.syscall_slow_exit_work
1.06 ± 4% -0.1 0.96 ± 3% perf-profile.children.cycles-pp.unwind_next_frame
1.41 ± 4% -0.1 1.31 ± 2% perf-profile.children.cycles-pp.skb_unlink
2.71 -0.1 2.61 perf-profile.children.cycles-pp.schedule_timeout
2.83 -0.1 2.73 perf-profile.children.cycles-pp.__sched_text_start
3.43 -0.1 3.34 perf-profile.children.cycles-pp.__audit_syscall_exit
2.65 -0.1 2.57 perf-profile.children.cycles-pp.schedule
1.22 ± 4% -0.1 1.14 perf-profile.children.cycles-pp.stack_trace_save_tsk
1.15 -0.1 1.07 perf-profile.children.cycles-pp.security_file_permission
0.58 ± 5% -0.1 0.50 ± 3% perf-profile.children.cycles-pp.memcg_kmem_put_cache
2.65 -0.1 2.58 ± 2% perf-profile.children.cycles-pp.__fdget_pos
1.06 ± 3% -0.1 0.99 perf-profile.children.cycles-pp.arch_stack_walk
0.90 ± 3% -0.1 0.83 ± 3% perf-profile.children.cycles-pp.__might_fault
1.42 ± 2% -0.1 1.37 perf-profile.children.cycles-pp.__fsnotify_parent
0.17 ± 7% -0.1 0.12 ± 10% perf-profile.children.cycles-pp.should_failslab
1.03 -0.1 0.98 perf-profile.children.cycles-pp.__list_del_entry_valid
0.89 ± 2% -0.0 0.85 ± 2% perf-profile.children.cycles-pp.deactivate_task
0.30 -0.0 0.26 ± 4% perf-profile.children.cycles-pp.copyin
1.07 ± 2% -0.0 1.04 perf-profile.children.cycles-pp.unroll_tree_refs
1.01 -0.0 0.98 perf-profile.children.cycles-pp.mutex_lock
0.35 ± 2% -0.0 0.32 ± 5% perf-profile.children.cycles-pp.dequeue_entity
0.36 ± 3% -0.0 0.34 ± 2% perf-profile.children.cycles-pp.select_idle_sibling
0.25 ± 2% -0.0 0.23 ± 2% perf-profile.children.cycles-pp.copyout
0.33 ± 3% -0.0 0.31 ± 3% perf-profile.children.cycles-pp.path_put
0.56 +0.0 0.59 ± 2% perf-profile.children.cycles-pp.refcount_dec_and_test_checked
0.08 ± 10% +0.0 0.11 ± 7% perf-profile.children.cycles-pp.cache_grow_begin
0.11 ± 6% +0.0 0.15 ± 14% perf-profile.children.cycles-pp.kmalloc_slab
0.00 +0.1 0.08 ± 10% perf-profile.children.cycles-pp.memset
1.15 +0.1 1.26 perf-profile.children.cycles-pp.__ksize
1.18 ± 2% +0.1 1.30 perf-profile.children.cycles-pp.ksize
2.52 ± 2% +0.2 2.69 perf-profile.children.cycles-pp.refcount_inc_not_zero_checked
92.60 +0.4 93.00 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
90.32 +0.7 90.98 perf-profile.children.cycles-pp.do_syscall_64
4.57 +1.6 6.21 perf-profile.children.cycles-pp.kmem_cache_alloc_node_trace
0.00 +1.8 1.77 ± 8% perf-profile.children.cycles-pp.memset_erms
44.45 +2.2 46.62 perf-profile.children.cycles-pp.__x64_sys_write
44.16 +2.2 46.36 perf-profile.children.cycles-pp.ksys_write
41.70 +2.3 43.98 perf-profile.children.cycles-pp.vfs_write
39.54 +2.4 41.89 perf-profile.children.cycles-pp.__vfs_write
39.29 +2.4 41.66 perf-profile.children.cycles-pp.new_sync_write
38.63 +2.4 41.02 perf-profile.children.cycles-pp.sock_write_iter
4.98 +2.4 7.41 perf-profile.children.cycles-pp.__kmalloc_node_track_caller
37.26 +2.4 39.70 perf-profile.children.cycles-pp.sock_sendmsg
36.68 +2.4 39.13 perf-profile.children.cycles-pp.unix_stream_sendmsg
5.18 +2.6 7.80 perf-profile.children.cycles-pp.__kmalloc_reserve
16.19 +3.9 20.09 perf-profile.children.cycles-pp.sock_alloc_send_pskb
11.81 +4.5 16.28 perf-profile.children.cycles-pp.__alloc_skb
12.11 +4.5 16.59 perf-profile.children.cycles-pp.alloc_skb_with_frags
3.37 -0.7 2.63 perf-profile.self.cycles-pp._raw_spin_lock
2.21 ± 3% -0.6 1.60 perf-profile.self.cycles-pp.refcount_add_not_zero_checked
5.60 -0.3 5.27 perf-profile.self.cycles-pp.queued_spin_lock_slowpath
6.68 -0.3 6.39 perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
1.77 -0.2 1.55 perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
2.98 -0.2 2.77 ± 2% perf-profile.self.cycles-pp.syscall_return_via_sysret
1.83 ± 4% -0.2 1.65 ± 6% perf-profile.self.cycles-pp.__check_object_size
1.45 ± 3% -0.1 1.34 ± 2% perf-profile.self.cycles-pp.___might_sleep
2.98 -0.1 2.88 perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.65 -0.1 0.55 ± 2% perf-profile.self.cycles-pp.sock_wfree
1.71 -0.1 1.61 ± 2% perf-profile.self.cycles-pp.unix_stream_read_generic
2.94 -0.1 2.85 perf-profile.self.cycles-pp.unix_stream_sendmsg
1.85 -0.1 1.78 perf-profile.self.cycles-pp.kmem_cache_free
0.92 ± 3% -0.1 0.85 ± 4% perf-profile.self.cycles-pp.__audit_syscall_entry
1.00 ± 2% -0.1 0.94 perf-profile.self.cycles-pp.__list_del_entry_valid
1.10 ± 3% -0.1 1.05 ± 2% perf-profile.self.cycles-pp.sock_read_iter
0.50 ± 4% -0.1 0.45 ± 4% perf-profile.self.cycles-pp.memcg_kmem_put_cache
0.71 ± 3% -0.1 0.66 perf-profile.self.cycles-pp.new_sync_read
1.00 ± 2% -0.0 0.95 ± 3% perf-profile.self.cycles-pp.__might_sleep
1.83 -0.0 1.79 perf-profile.self.cycles-pp.__audit_syscall_exit
0.34 ± 5% -0.0 0.30 ± 4% perf-profile.self.cycles-pp.skb_copy_datagram_from_iter
0.18 ± 4% -0.0 0.14 ± 5% perf-profile.self.cycles-pp.copyin
0.40 ± 2% -0.0 0.37 ± 5% perf-profile.self.cycles-pp.skb_set_owner_w
0.10 ± 8% -0.0 0.08 ± 5% perf-profile.self.cycles-pp.should_failslab
0.23 ± 3% -0.0 0.21 ± 3% perf-profile.self.cycles-pp.skb_copy_datagram_iter
0.13 ± 6% -0.0 0.11 ± 7% perf-profile.self.cycles-pp.update_rq_clock
0.14 ± 6% -0.0 0.12 ± 3% perf-profile.self.cycles-pp.stack_trace_consume_entry_nosched
0.11 ± 4% -0.0 0.10 ± 4% perf-profile.self.cycles-pp.__update_load_avg_se
0.11 +0.0 0.12 ± 4% perf-profile.self.cycles-pp.kfree_skbmem
0.31 ± 2% +0.0 0.34 ± 2% perf-profile.self.cycles-pp.skb_release_data
0.10 ± 11% +0.0 0.13 ± 14% perf-profile.self.cycles-pp.kmalloc_slab
0.18 ± 2% +0.0 0.23 ± 4% perf-profile.self.cycles-pp.__kmalloc_reserve
0.15 ± 3% +0.1 0.21 ± 2% perf-profile.self.cycles-pp.__kmalloc_node_track_caller
1.07 ± 2% +0.1 1.18 ± 2% perf-profile.self.cycles-pp.kfree
1.12 +0.1 1.24 perf-profile.self.cycles-pp.__ksize
2.51 ± 2% +0.2 2.67 perf-profile.self.cycles-pp.refcount_inc_not_zero_checked
1.38 ± 10% +1.0 2.39 perf-profile.self.cycles-pp.__alloc_skb
0.00 +1.7 1.73 ± 8% perf-profile.self.cycles-pp.memset_erms
1.26 +1.8 3.08 perf-profile.self.cycles-pp.kmem_cache_alloc_node_trace
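The largest gains in the profile above are memset_erms and kmem_cache_alloc_node_trace inside the sock_alloc_send_pskb/__alloc_skb allocation path, which is exactly what init_on_alloc=1 adds: every heap allocation is zeroed before it is handed out. A minimal userspace sketch of that trade-off (illustrative names only, not kernel code):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Models the policy the commit introduces: with "init on alloc"
 * enabled, every buffer is zeroed before use, trading allocation
 * throughput for protection against uninitialized-memory bugs. */
static int init_on_alloc = 1;   /* stands in for the init_on_alloc=1 boot option */

static void *sketch_alloc(size_t n)
{
    void *p = malloc(n);
    if (p && init_on_alloc)
        memset(p, 0, n);        /* the extra work showing up as memset_erms above */
    return p;
}
```

The zeroing is O(n) per allocation, which is consistent with the regression concentrating in hackbench's hottest allocation site (skb allocation on the unix-socket send path) rather than being spread evenly across the profile.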
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Oliver Sang
Re: [LTP] [xfs] 73e5fff98b: kmsg.dev/zero:Can't_open_blockdev
by Jan Stancek
----- Original Message -----
> > > > # mount -t xfs /dev/zero /mnt/xfs
> >
> > Assuming that is what is being done ...
>
> Arrrh, of course, a difference between get_tree_bdev() and
> mount_bdev() is that get_tree_bdev() prints this message when
> blkdev_get_by_path() fails whereas mount_bdev() doesn't.
>
> Both however do return an error in this case so the behaviour
> is the same.
>
> So I'm calling this not a problem with the subject patch.
>
> What needs to be done to resolve this in ltp, I don't know.
I think that's a question for the kernel test robot, which has this extra
check built on top. ltp itself doesn't treat this extra message as FAIL.
Jan
[net] 19f92a030c: apachebench.requests_per_second -37.9% regression
by kernel test robot
Greeting,
FYI, we noticed a -37.9% regression of apachebench.requests_per_second due to commit:
commit: 19f92a030ca6d772ab44b22ee6a01378a8cb32d4 ("net: increase SOMAXCONN to 4096")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: apachebench
on test machine: 16 threads Intel(R) Xeon(R) CPU D-1541 @ 2.10GHz with 48G memory
with following parameters:
runtime: 300s
concurrency: 4000
cluster: cs-localhost
cpufreq_governor: performance
ucode: 0x7000019
test-description: apachebench is a tool for benchmarking your Apache Hypertext Transfer Protocol (HTTP) server.
test-url: https://httpd.apache.org/docs/2.4/programs/ab.html
In addition to that, the commit also has significant impact on the following tests:
+------------------+------------------------------------------------------------------+
| testcase: change | apachebench: apachebench.requests_per_second -37.5% regression |
| test machine | 16 threads Intel(R) Xeon(R) CPU D-1541 @ 2.10GHz with 48G memory |
| test parameters | cluster=cs-localhost |
| | concurrency=8000 |
| | cpufreq_governor=performance |
| | runtime=300s |
| | ucode=0x7000019 |
+------------------+------------------------------------------------------------------+
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
cluster/compiler/concurrency/cpufreq_governor/kconfig/rootfs/runtime/tbox_group/testcase/ucode:
cs-localhost/gcc-7/4000/performance/x86_64-rhel-7.6/debian-x86_64-2019-09-23.cgz/300s/lkp-bdw-de1/apachebench/0x7000019
commit:
6d6f0383b6 ("netdevsim: Fix use-after-free during device dismantle")
19f92a030c ("net: increase SOMAXCONN to 4096")
6d6f0383b697f004 19f92a030ca6d772ab44b22ee6a
---------------- ---------------------------
%stddev %change %stddev
\ | \
22640 ± 4% +71.1% 38734 apachebench.connection_time.processing.max
24701 +60.9% 39743 apachebench.connection_time.total.max
22639 ± 4% +71.1% 38734 apachebench.connection_time.waiting.max
24701 +15042.0 39743 apachebench.max_latency.100%
40454 -37.9% 25128 apachebench.requests_per_second
25.69 +58.8% 40.79 apachebench.time.elapsed_time
25.69 +58.8% 40.79 apachebench.time.elapsed_time.max
79.00 -37.0% 49.75 apachebench.time.percent_of_cpu_this_job_got
98.88 +61.0% 159.18 apachebench.time_per_request
434631 -37.9% 269889 apachebench.transfer_rate
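SOMAXCONN caps the backlog an application passes to listen(): at listen() time the kernel clamps the requested backlog to net.core.somaxconn. Raising the default from 128 to 4096 means a server that asks for a large backlog (Apache's ListenBacklog is typically 511) now gets a much deeper accept queue, so under overload connections wait in the queue instead of being dropped and retried, one plausible reading of the higher max connection times and elapsed time above. A sketch of the clamp (userspace model, not the kernel code):

```c
#include <assert.h>

/* Models the clamp the kernel applies when listen() is called:
 * the effective accept-queue depth is
 *     min(requested backlog, net.core.somaxconn).
 * With the default raised from 128 to 4096, an application asking
 * for a 511-deep queue previously got 128 and now gets all 511. */
static int effective_backlog(int requested, int somaxconn)
{
    return requested < somaxconn ? requested : somaxconn;
}
```

The slabinfo deltas further down (request_sock_TCP +299%, tw_sock_TCPv6 +1700%) are consistent with this: far more pending-connection objects are alive at once when the queue is deeper.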
1.5e+08 ± 18% +109.5% 3.141e+08 ± 27% cpuidle.C3.time
578957 ± 7% +64.1% 949934 ± 12% cpuidle.C3.usage
79085 ± 4% +24.8% 98720 meminfo.AnonHugePages
41176 +14.2% 47013 meminfo.PageTables
69429 -34.9% 45222 meminfo.max_used_kB
63.48 +12.7 76.15 mpstat.cpu.all.idle%
2.42 ± 2% -0.9 1.56 mpstat.cpu.all.soft%
15.30 -5.2 10.13 mpstat.cpu.all.sys%
18.80 -6.6 12.16 mpstat.cpu.all.usr%
65.00 +17.7% 76.50 vmstat.cpu.id
17.00 -35.3% 11.00 vmstat.cpu.us
7.00 ± 24% -50.0% 3.50 ± 14% vmstat.procs.r
62957 -33.3% 42012 vmstat.system.cs
33174 -1.4% 32693 vmstat.system.in
5394 ± 5% +16.3% 6272 ± 6% sched_debug.cfs_rq:/.min_vruntime.stddev
5396 ± 5% +16.3% 6275 ± 6% sched_debug.cfs_rq:/.spread0.stddev
33982 ± 48% -83.3% 5676 ± 47% sched_debug.cpu.avg_idle.min
26.75 ± 77% +169.8% 72.17 ± 41% sched_debug.cpu.sched_count.avg
212.00 ± 90% +168.2% 568.50 ± 50% sched_debug.cpu.sched_count.max
52.30 ± 89% +182.5% 147.73 ± 48% sched_debug.cpu.sched_count.stddev
11.33 ± 80% +193.9% 33.30 ± 42% sched_debug.cpu.sched_goidle.avg
104.50 ± 92% +170.6% 282.75 ± 50% sched_debug.cpu.sched_goidle.max
26.18 ± 90% +183.9% 74.31 ± 48% sched_debug.cpu.sched_goidle.stddev
959.00 -32.0% 652.00 turbostat.Avg_MHz
39.01 -11.2 27.79 turbostat.Busy%
1.46 ± 7% -0.5 0.96 ± 5% turbostat.C1%
9.58 ± 4% -3.2 6.38 turbostat.C1E%
578646 ± 7% +64.1% 949626 ± 12% turbostat.C3
940073 +51.1% 1420298 turbostat.IRQ
2.20 ± 22% +159.7% 5.71 ± 12% turbostat.Pkg%pc2
31.22 -17.2% 25.86 turbostat.PkgWatt
4.74 -7.5% 4.39 turbostat.RAMWatt
93184 -1.6% 91678 proc-vmstat.nr_active_anon
92970 -1.8% 91314 proc-vmstat.nr_anon_pages
288405 +1.0% 291286 proc-vmstat.nr_file_pages
8307 +6.3% 8831 proc-vmstat.nr_kernel_stack
10315 +14.2% 11783 proc-vmstat.nr_page_table_pages
21499 +6.0% 22798 proc-vmstat.nr_slab_unreclaimable
284131 +1.0% 286977 proc-vmstat.nr_unevictable
93184 -1.6% 91678 proc-vmstat.nr_zone_active_anon
284131 +1.0% 286977 proc-vmstat.nr_zone_unevictable
198874 ± 2% +43.7% 285772 ± 16% proc-vmstat.numa_hit
198874 ± 2% +43.7% 285772 ± 16% proc-vmstat.numa_local
249594 ± 3% +59.6% 398267 ± 13% proc-vmstat.pgalloc_normal
1216885 +12.7% 1371283 ± 3% proc-vmstat.pgfault
179705 ± 16% +82.9% 328634 ± 21% proc-vmstat.pgfree
346.25 ± 5% +133.5% 808.50 ± 2% slabinfo.TCPv6.active_objs
346.25 ± 5% +134.4% 811.75 ± 2% slabinfo.TCPv6.num_objs
22966 +15.6% 26559 slabinfo.anon_vma.active_objs
23091 +15.5% 26664 slabinfo.anon_vma.num_objs
69747 +16.1% 81011 slabinfo.anon_vma_chain.active_objs
1094 +15.9% 1269 slabinfo.anon_vma_chain.active_slabs
70092 +15.9% 81259 slabinfo.anon_vma_chain.num_objs
1094 +15.9% 1269 slabinfo.anon_vma_chain.num_slabs
1649 +12.9% 1861 slabinfo.cred_jar.active_objs
1649 +12.9% 1861 slabinfo.cred_jar.num_objs
4924 +20.0% 5907 slabinfo.pid.active_objs
4931 +19.9% 5912 slabinfo.pid.num_objs
266.50 ± 3% +299.2% 1063 slabinfo.request_sock_TCP.active_objs
266.50 ± 3% +299.2% 1063 slabinfo.request_sock_TCP.num_objs
11.50 ± 4% +1700.0% 207.00 ± 4% slabinfo.tw_sock_TCPv6.active_objs
11.50 ± 4% +1700.0% 207.00 ± 4% slabinfo.tw_sock_TCPv6.num_objs
41682 +16.0% 48360 slabinfo.vm_area_struct.active_objs
1046 +15.7% 1211 slabinfo.vm_area_struct.active_slabs
41879 +15.7% 48468 slabinfo.vm_area_struct.num_objs
1046 +15.7% 1211 slabinfo.vm_area_struct.num_slabs
4276 ± 2% +10.0% 4705 ± 2% slabinfo.vmap_area.num_objs
21.25 ± 27% +3438.8% 752.00 ± 83% interrupts.36:IR-PCI-MSI.2621443-edge.eth0-TxRx-2
21.25 ± 20% +1777.6% 399.00 ±155% interrupts.38:IR-PCI-MSI.2621445-edge.eth0-TxRx-4
54333 +54.3% 83826 interrupts.CPU0.LOC:Local_timer_interrupts
54370 +54.6% 84072 interrupts.CPU1.LOC:Local_timer_interrupts
21.25 ± 27% +3438.8% 752.00 ± 83% interrupts.CPU10.36:IR-PCI-MSI.2621443-edge.eth0-TxRx-2
54236 +54.7% 83925 interrupts.CPU10.LOC:Local_timer_interrupts
54223 +54.3% 83655 interrupts.CPU11.LOC:Local_timer_interrupts
377.75 ± 21% +27.1% 480.25 ± 10% interrupts.CPU11.RES:Rescheduling_interrupts
21.25 ± 20% +1777.6% 399.00 ±155% interrupts.CPU12.38:IR-PCI-MSI.2621445-edge.eth0-TxRx-4
54279 +54.1% 83646 interrupts.CPU12.LOC:Local_timer_interrupts
53683 +55.3% 83365 interrupts.CPU13.LOC:Local_timer_interrupts
53887 +55.7% 83903 interrupts.CPU14.LOC:Local_timer_interrupts
54156 +54.7% 83803 interrupts.CPU15.LOC:Local_timer_interrupts
54041 +55.1% 83806 interrupts.CPU2.LOC:Local_timer_interrupts
54042 +55.4% 83991 interrupts.CPU3.LOC:Local_timer_interrupts
54081 +55.2% 83938 interrupts.CPU4.LOC:Local_timer_interrupts
54322 +54.9% 84166 interrupts.CPU5.LOC:Local_timer_interrupts
53586 +56.5% 83849 interrupts.CPU6.LOC:Local_timer_interrupts
54049 +55.2% 83892 interrupts.CPU7.LOC:Local_timer_interrupts
54056 +54.9% 83751 interrupts.CPU8.LOC:Local_timer_interrupts
53862 +54.7% 83331 interrupts.CPU9.LOC:Local_timer_interrupts
865212 +55.0% 1340925 interrupts.LOC:Local_timer_interrupts
16477 ± 4% +32.2% 21779 ± 8% softirqs.CPU0.TIMER
18508 ± 15% +39.9% 25891 ± 17% softirqs.CPU1.TIMER
16625 ± 8% +21.3% 20166 ± 7% softirqs.CPU10.TIMER
5906 ± 21% +62.5% 9597 ± 13% softirqs.CPU12.SCHED
17474 ± 12% +29.4% 22610 ± 7% softirqs.CPU12.TIMER
7680 ± 11% +20.4% 9246 ± 14% softirqs.CPU13.SCHED
45558 ± 36% -37.8% 28320 ± 25% softirqs.CPU14.NET_RX
15365 ± 4% +40.7% 21622 ± 5% softirqs.CPU14.TIMER
8084 ± 4% +18.7% 9599 ± 12% softirqs.CPU15.RCU
16433 ± 4% +41.2% 23203 ± 14% softirqs.CPU2.TIMER
8436 ± 7% +19.9% 10117 ± 10% softirqs.CPU3.RCU
15992 ± 3% +48.5% 23742 ± 18% softirqs.CPU3.TIMER
17389 ± 14% +38.7% 24116 ± 11% softirqs.CPU4.TIMER
17749 ± 13% +42.2% 25235 ± 15% softirqs.CPU5.TIMER
16528 ± 9% +28.3% 21200 ± 2% softirqs.CPU6.TIMER
8321 ± 8% +31.3% 10929 ± 5% softirqs.CPU7.RCU
18024 ± 8% +28.8% 23212 ± 5% softirqs.CPU7.TIMER
15717 ± 5% +27.1% 19983 ± 7% softirqs.CPU8.TIMER
7383 ± 11% +30.5% 9632 ± 9% softirqs.CPU9.SCHED
16342 ± 18% +41.0% 23037 ± 10% softirqs.CPU9.TIMER
148013 +10.2% 163086 ± 2% softirqs.RCU
112139 +28.0% 143569 softirqs.SCHED
276690 ± 3% +30.4% 360747 ± 3% softirqs.TIMER
1.453e+09 -36.2% 9.273e+08 perf-stat.i.branch-instructions
67671486 -35.9% 43396843 ± 2% perf-stat.i.branch-misses
5.188e+08 -35.3% 3.357e+08 perf-stat.i.cache-misses
5.188e+08 -35.3% 3.357e+08 perf-stat.i.cache-references
71149 -36.0% 45536 perf-stat.i.context-switches
2.92 ± 6% +38.4% 4.04 ± 5% perf-stat.i.cpi
1.581e+10 -33.8% 1.046e+10 perf-stat.i.cpu-cycles
1957 ± 6% -35.4% 1264 ± 5% perf-stat.i.cpu-migrations
40.76 ± 2% +33.0% 54.21 perf-stat.i.cycles-between-cache-misses
0.64 ± 2% +0.2 0.86 ± 2% perf-stat.i.dTLB-load-miss-rate%
10720126 ± 2% -34.0% 7071299 perf-stat.i.dTLB-load-misses
2.05e+09 -35.8% 1.315e+09 perf-stat.i.dTLB-loads
0.16 +0.0 0.18 ± 5% perf-stat.i.dTLB-store-miss-rate%
1688635 -31.7% 1153595 perf-stat.i.dTLB-store-misses
1.17e+09 -33.5% 7.777e+08 perf-stat.i.dTLB-stores
61.00 ± 6% +8.7 69.70 ± 2% perf-stat.i.iTLB-load-miss-rate%
13112146 ± 10% -38.5% 8061589 ± 13% perf-stat.i.iTLB-load-misses
9827689 ± 3% -37.1% 6184316 perf-stat.i.iTLB-loads
7e+09 -36.2% 4.469e+09 perf-stat.i.instructions
0.42 ± 2% -22.2% 0.33 ± 3% perf-stat.i.ipc
45909 -32.4% 31036 perf-stat.i.minor-faults
45909 -32.4% 31036 perf-stat.i.page-faults
2.26 +3.6% 2.34 perf-stat.overall.cpi
30.47 +2.3% 31.16 perf-stat.overall.cycles-between-cache-misses
0.44 -3.5% 0.43 perf-stat.overall.ipc
1.398e+09 -35.3% 9.049e+08 perf-stat.ps.branch-instructions
65124290 -35.0% 42353120 ± 2% perf-stat.ps.branch-misses
4.993e+08 -34.4% 3.276e+08 perf-stat.ps.cache-misses
4.993e+08 -34.4% 3.276e+08 perf-stat.ps.cache-references
68469 -35.1% 44440 perf-stat.ps.context-switches
1.521e+10 -32.9% 1.021e+10 perf-stat.ps.cpu-cycles
1884 ± 6% -34.5% 1234 ± 5% perf-stat.ps.cpu-migrations
10314734 ± 2% -33.1% 6899548 perf-stat.ps.dTLB-load-misses
1.973e+09 -34.9% 1.283e+09 perf-stat.ps.dTLB-loads
1624883 -30.7% 1125668 perf-stat.ps.dTLB-store-misses
1.126e+09 -32.6% 7.589e+08 perf-stat.ps.dTLB-stores
12617676 ± 10% -37.7% 7866427 ± 13% perf-stat.ps.iTLB-load-misses
9458687 ± 3% -36.2% 6036594 perf-stat.ps.iTLB-loads
6.736e+09 -35.3% 4.361e+09 perf-stat.ps.instructions
44179 -31.4% 30288 perf-stat.ps.minor-faults
30791 +1.4% 31223 perf-stat.ps.msec
44179 -31.4% 30288 perf-stat.ps.page-faults
21.96 ± 69% -22.0 0.00 perf-profile.calltrace.cycles-pp.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe
21.96 ± 69% -22.0 0.00 perf-profile.calltrace.cycles-pp.__do_execve_file.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe
18.63 ± 89% -18.6 0.00 perf-profile.calltrace.cycles-pp.search_binary_handler.__do_execve_file.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe
18.63 ± 89% -18.6 0.00 perf-profile.calltrace.cycles-pp.load_elf_binary.search_binary_handler.__do_execve_file.__x64_sys_execve.do_syscall_64
13.15 ± 67% -8.2 5.00 ±173% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
8.81 ±133% -8.0 0.83 ±173% perf-profile.calltrace.cycles-pp.ret_from_fork
13.15 ± 67% -7.3 5.83 ±173% perf-profile.calltrace.cycles-pp.secondary_startup_64
13.15 ± 67% -7.3 5.83 ±173% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
13.15 ± 67% -7.3 5.83 ±173% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
13.15 ± 67% -7.3 5.83 ±173% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
13.15 ± 67% -7.3 5.83 ±173% perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
13.15 ± 67% -7.3 5.83 ±173% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
6.70 ±100% -6.7 0.00 perf-profile.calltrace.cycles-pp.__clear_user.load_elf_binary.search_binary_handler.__do_execve_file.__x64_sys_execve
4.79 ±108% -4.8 0.00 perf-profile.calltrace.cycles-pp.free_pgtables.exit_mmap.mmput.do_exit.do_group_exit
4.79 ±108% -4.8 0.00 perf-profile.calltrace.cycles-pp.unlink_file_vma.free_pgtables.exit_mmap.mmput.do_exit
4.79 ±108% -4.0 0.83 ±173% perf-profile.calltrace.cycles-pp.mmput.do_exit.do_group_exit.get_signal.do_signal
4.79 ±108% -4.0 0.83 ±173% perf-profile.calltrace.cycles-pp.exit_mmap.mmput.do_exit.do_group_exit.get_signal
5.24 ±112% -0.2 5.00 ±173% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.ioctl.perf_evsel__enable.evsel__enable.evlist__enable
5.24 ±112% -0.2 5.00 ±173% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.ioctl.perf_evsel__enable.evsel__enable
5.24 ±112% -0.2 5.00 ±173% perf-profile.calltrace.cycles-pp.event_function_call.perf_event_for_each_child._perf_ioctl.perf_ioctl.do_vfs_ioctl
5.24 ±112% -0.2 5.00 ±173% perf-profile.calltrace.cycles-pp.smp_call_function_single.event_function_call.perf_event_for_each_child._perf_ioctl.perf_ioctl
5.24 ±112% -0.2 5.00 ±173% perf-profile.calltrace.cycles-pp.__libc_start_main
5.24 ±112% -0.2 5.00 ±173% perf-profile.calltrace.cycles-pp.main.__libc_start_main
5.24 ±112% -0.2 5.00 ±173% perf-profile.calltrace.cycles-pp.run_builtin.main.__libc_start_main
5.24 ±112% -0.2 5.00 ±173% perf-profile.calltrace.cycles-pp.cmd_record.run_builtin.main.__libc_start_main
5.24 ±112% -0.2 5.00 ±173% perf-profile.calltrace.cycles-pp.evlist__enable.cmd_record.run_builtin.main.__libc_start_main
5.24 ±112% -0.2 5.00 ±173% perf-profile.calltrace.cycles-pp.evsel__enable.evlist__enable.cmd_record.run_builtin.main
5.24 ±112% -0.2 5.00 ±173% perf-profile.calltrace.cycles-pp.perf_evsel__enable.evsel__enable.evlist__enable.cmd_record.run_builtin
5.24 ±112% -0.2 5.00 ±173% perf-profile.calltrace.cycles-pp.ioctl.perf_evsel__enable.evsel__enable.evlist__enable.cmd_record
5.24 ±112% -0.2 5.00 ±173% perf-profile.calltrace.cycles-pp.__x64_sys_ioctl.do_syscall_64.entry_SYSCALL_64_after_hwframe.ioctl.perf_evsel__enable
5.24 ±112% -0.2 5.00 ±173% perf-profile.calltrace.cycles-pp.ksys_ioctl.__x64_sys_ioctl.do_syscall_64.entry_SYSCALL_64_after_hwframe.ioctl
5.24 ±112% -0.2 5.00 ±173% perf-profile.calltrace.cycles-pp.do_vfs_ioctl.ksys_ioctl.__x64_sys_ioctl.do_syscall_64.entry_SYSCALL_64_after_hwframe
5.24 ±112% -0.2 5.00 ±173% perf-profile.calltrace.cycles-pp.perf_ioctl.do_vfs_ioctl.ksys_ioctl.__x64_sys_ioctl.do_syscall_64
5.24 ±112% -0.2 5.00 ±173% perf-profile.calltrace.cycles-pp._perf_ioctl.perf_ioctl.do_vfs_ioctl.ksys_ioctl.__x64_sys_ioctl
5.24 ±112% -0.2 5.00 ±173% perf-profile.calltrace.cycles-pp.perf_event_for_each_child._perf_ioctl.perf_ioctl.do_vfs_ioctl.ksys_ioctl
4.79 ±108% +8.5 13.33 ±173% perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.79 ±108% +8.5 13.33 ±173% perf-profile.calltrace.cycles-pp.do_signal.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.79 ±108% +8.5 13.33 ±173% perf-profile.calltrace.cycles-pp.get_signal.do_signal.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.79 ±108% +8.5 13.33 ±173% perf-profile.calltrace.cycles-pp.do_group_exit.get_signal.do_signal.exit_to_usermode_loop.do_syscall_64
4.79 ±108% +8.5 13.33 ±173% perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.get_signal.do_signal.exit_to_usermode_loop
12.92 ±100% +12.1 25.00 ±173% perf-profile.calltrace.cycles-pp.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
12.92 ±100% +12.1 25.00 ±173% perf-profile.calltrace.cycles-pp.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
12.92 ±100% +12.1 25.00 ±173% perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
11.25 ±101% +13.8 25.00 ±173% perf-profile.calltrace.cycles-pp.mmput.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
9.58 ±108% +15.4 25.00 ±173% perf-profile.calltrace.cycles-pp.exit_mmap.mmput.do_exit.do_group_exit.__x64_sys_exit_group
21.96 ± 69% -22.0 0.00 perf-profile.children.cycles-pp.__x64_sys_execve
21.96 ± 69% -22.0 0.00 perf-profile.children.cycles-pp.__do_execve_file
18.63 ± 89% -18.6 0.00 perf-profile.children.cycles-pp.search_binary_handler
18.63 ± 89% -18.6 0.00 perf-profile.children.cycles-pp.load_elf_binary
13.15 ± 67% -8.2 5.00 ±173% perf-profile.children.cycles-pp.intel_idle
8.81 ±133% -8.0 0.83 ±173% perf-profile.children.cycles-pp.ret_from_fork
13.15 ± 67% -7.3 5.83 ±173% perf-profile.children.cycles-pp.secondary_startup_64
13.15 ± 67% -7.3 5.83 ±173% perf-profile.children.cycles-pp.start_secondary
13.15 ± 67% -7.3 5.83 ±173% perf-profile.children.cycles-pp.cpu_startup_entry
13.15 ± 67% -7.3 5.83 ±173% perf-profile.children.cycles-pp.do_idle
13.15 ± 67% -7.3 5.83 ±173% perf-profile.children.cycles-pp.cpuidle_enter
13.15 ± 67% -7.3 5.83 ±173% perf-profile.children.cycles-pp.cpuidle_enter_state
6.70 ±100% -6.7 0.00 perf-profile.children.cycles-pp.__clear_user
6.46 ±100% -6.5 0.00 perf-profile.children.cycles-pp.handle_mm_fault
4.79 ±108% -4.8 0.00 perf-profile.children.cycles-pp.page_fault
4.79 ±108% -4.8 0.00 perf-profile.children.cycles-pp.do_page_fault
4.79 ±108% -4.8 0.00 perf-profile.children.cycles-pp.__do_page_fault
4.79 ±108% -4.8 0.00 perf-profile.children.cycles-pp.free_pgtables
4.79 ±108% -4.8 0.00 perf-profile.children.cycles-pp.unlink_file_vma
5.24 ±112% -0.2 5.00 ±173% perf-profile.children.cycles-pp.__libc_start_main
5.24 ±112% -0.2 5.00 ±173% perf-profile.children.cycles-pp.main
5.24 ±112% -0.2 5.00 ±173% perf-profile.children.cycles-pp.run_builtin
5.24 ±112% -0.2 5.00 ±173% perf-profile.children.cycles-pp.cmd_record
5.24 ±112% -0.2 5.00 ±173% perf-profile.children.cycles-pp.evlist__enable
5.24 ±112% -0.2 5.00 ±173% perf-profile.children.cycles-pp.evsel__enable
5.24 ±112% -0.2 5.00 ±173% perf-profile.children.cycles-pp.perf_evsel__enable
5.24 ±112% -0.2 5.00 ±173% perf-profile.children.cycles-pp.ioctl
5.24 ±112% -0.2 5.00 ±173% perf-profile.children.cycles-pp.__x64_sys_ioctl
5.24 ±112% -0.2 5.00 ±173% perf-profile.children.cycles-pp.ksys_ioctl
5.24 ±112% -0.2 5.00 ±173% perf-profile.children.cycles-pp.do_vfs_ioctl
5.24 ±112% -0.2 5.00 ±173% perf-profile.children.cycles-pp.perf_ioctl
5.24 ±112% -0.2 5.00 ±173% perf-profile.children.cycles-pp._perf_ioctl
5.24 ±112% -0.2 5.00 ±173% perf-profile.children.cycles-pp.perf_event_for_each_child
4.79 ±108% +8.5 13.33 ±173% perf-profile.children.cycles-pp.exit_to_usermode_loop
4.79 ±108% +8.5 13.33 ±173% perf-profile.children.cycles-pp.do_signal
4.79 ±108% +8.5 13.33 ±173% perf-profile.children.cycles-pp.get_signal
5.24 ±112% +8.9 14.17 ±173% perf-profile.children.cycles-pp.event_function_call
5.24 ±112% +8.9 14.17 ±173% perf-profile.children.cycles-pp.smp_call_function_single
12.92 ±100% +12.1 25.00 ±173% perf-profile.children.cycles-pp.__x64_sys_exit_group
13.15 ± 67% -8.2 5.00 ±173% perf-profile.self.cycles-pp.intel_idle
6.46 ±100% -6.5 0.00 perf-profile.self.cycles-pp.unmap_page_range
5.24 ±112% +8.9 14.17 ±173% perf-profile.self.cycles-pp.smp_call_function_single
[Per-sample ASCII trend charts omitted; they plotted the following metrics across bisect runs:]
apachebench.time.percent_of_cpu_this_job_got
apachebench.time.elapsed_time
apachebench.time.elapsed_time.max
apachebench.requests_per_second
apachebench.time_per_request
apachebench.transfer_rate
apachebench.connection_time.processing.max
apachebench.connection_time.waiting.max
apachebench.connection_time.total.max
apachebench.max_latency.100_
[*] bisect-good sample
[O] bisect-bad sample
***************************************************************************************************
lkp-bdw-de1: 16 threads Intel(R) Xeon(R) CPU D-1541 @ 2.10GHz with 48G memory
=========================================================================================
cluster/compiler/concurrency/cpufreq_governor/kconfig/rootfs/runtime/tbox_group/testcase/ucode:
cs-localhost/gcc-7/8000/performance/x86_64-rhel-7.6/debian-x86_64-2019-09-23.cgz/300s/lkp-bdw-de1/apachebench/0x7000019
commit:
6d6f0383b6 ("netdevsim: Fix use-after-free during device dismantle")
19f92a030c ("net: increase SOMAXCONN to 4096")
6d6f0383b697f004 19f92a030ca6d772ab44b22ee6a
---------------- ---------------------------
%stddev %change %stddev
\ | \
23369 ± 3% +66.1% 38819 apachebench.connection_time.processing.max
24915 +59.9% 39829 apachebench.connection_time.total.max
23369 ± 3% +66.1% 38819 apachebench.connection_time.waiting.max
24915 +14914.2 39829 apachebench.max_latency.100%
40108 -37.5% 25067 apachebench.requests_per_second
25.93 +57.6% 40.88 apachebench.time.elapsed_time
25.93 +57.6% 40.88 apachebench.time.elapsed_time.max
78.75 -36.8% 49.75 apachebench.time.percent_of_cpu_this_job_got
199.49 +60.0% 319.15 apachebench.time_per_request
430914 -37.5% 269230 apachebench.transfer_rate
41537 +15.4% 47913 ± 2% meminfo.PageTables
70135 ± 2% -33.1% 46888 meminfo.max_used_kB
82132073 ± 72% +311.1% 3.377e+08 ± 21% cpuidle.C6.time
124196 ± 56% +219.9% 397318 ± 16% cpuidle.C6.usage
21928 ± 12% +18.1% 25902 ± 5% cpuidle.POLL.usage
64.46 ± 2% +12.0 76.51 mpstat.cpu.all.idle%
2.37 ± 3% -0.8 1.57 ± 3% mpstat.cpu.all.soft%
14.98 ± 2% -5.0 10.01 ± 2% mpstat.cpu.all.sys%
18.20 ± 6% -6.3 11.92 mpstat.cpu.all.usr%
21948 ± 24% -55.5% 9766 ± 35% sched_debug.cfs_rq:/.MIN_vruntime.max
5370 ± 25% -52.0% 2576 ± 35% sched_debug.cfs_rq:/.MIN_vruntime.stddev
21948 ± 24% -55.4% 9798 ± 35% sched_debug.cfs_rq:/.max_vruntime.max
5372 ± 25% -51.9% 2583 ± 35% sched_debug.cfs_rq:/.max_vruntime.stddev
4960 ± 4% -9.1% 4510 ± 7% sched_debug.cfs_rq:/.min_vruntime.min
66.25 ± 2% +16.2% 77.00 vmstat.cpu.id
16.50 ± 5% -33.3% 11.00 vmstat.cpu.us
5.00 -30.0% 3.50 ± 14% vmstat.procs.r
62040 ± 2% -33.0% 41568 vmstat.system.cs
32363 ± 4% -15.6% 27301 ± 11% vmstat.system.in
288455 +1.0% 291410 proc-vmstat.nr_file_pages
8275 +6.7% 8833 proc-vmstat.nr_kernel_stack
10389 +15.5% 12005 ± 2% proc-vmstat.nr_page_table_pages
13525 +0.7% 13625 proc-vmstat.nr_slab_reclaimable
25255 +7.0% 27029 proc-vmstat.nr_slab_unreclaimable
284157 +1.0% 287108 proc-vmstat.nr_unevictable
284157 +1.0% 287108 proc-vmstat.nr_zone_unevictable
213180 ± 3% +37.4% 292925 ± 16% proc-vmstat.numa_hit
213180 ± 3% +37.4% 292925 ± 16% proc-vmstat.numa_local
273233 +30.8% 357492 ± 13% proc-vmstat.pgalloc_normal
1219028 +8.2% 1318974 ± 4% proc-vmstat.pgfault
933.00 ± 3% -31.0% 643.75 ± 2% turbostat.Avg_MHz
38.04 ± 3% -10.5 27.51 ± 2% turbostat.Busy%
9.68 ± 5% -1.9 7.74 ± 14% turbostat.C1E%
33.12 ± 40% -18.8 14.30 ± 75% turbostat.C3%
123287 ± 57% +221.7% 396655 ± 16% turbostat.C6
17.93 ± 72% +31.7 49.59 ± 22% turbostat.C6%
49.46 ± 5% -17.8% 40.65 ± 2% turbostat.CPU%c1
6.14 ±104% +317.2% 25.62 ± 38% turbostat.CPU%c6
947698 ± 4% +26.7% 1200588 ± 13% turbostat.IRQ
2.16 ± 27% +68.6% 3.64 ± 10% turbostat.Pkg%pc2
30.80 -20.1% 24.61 turbostat.PkgWatt
4.69 -7.6% 4.33 turbostat.RAMWatt
9476 ± 21% +44.9% 13734 softirqs.CPU0.SCHED
17135 ± 10% +36.9% 23455 ± 8% softirqs.CPU10.TIMER
6756 ± 8% +44.0% 9730 ± 11% softirqs.CPU12.SCHED
16484 ± 10% +37.2% 22623 ± 10% softirqs.CPU12.TIMER
6497 ± 4% +51.4% 9838 ± 11% softirqs.CPU13.SCHED
17804 ± 8% +38.1% 24590 ± 12% softirqs.CPU13.TIMER
17458 ± 11% +30.5% 22791 ± 13% softirqs.CPU15.TIMER
16726 ± 5% +38.9% 23239 ± 6% softirqs.CPU2.TIMER
6636 ± 11% +29.6% 8598 ± 12% softirqs.CPU3.SCHED
16694 ± 5% +63.8% 27337 ± 12% softirqs.CPU3.TIMER
188590 ± 78% -86.1% 26256 ± 34% softirqs.CPU4.NET_RX
19263 ± 15% +22.3% 23553 ± 9% softirqs.CPU4.TIMER
16189 ± 4% +43.5% 23234 ± 14% softirqs.CPU5.TIMER
15942 ± 8% +33.6% 21299 ± 9% softirqs.CPU6.TIMER
16372 ± 3% +32.0% 21603 ± 8% softirqs.CPU7.TIMER
9531 ± 7% +18.7% 11313 ± 6% softirqs.CPU8.RCU
7629 ± 13% +35.1% 10307 ± 6% softirqs.CPU9.SCHED
113585 +26.5% 143672 softirqs.SCHED
281115 +28.4% 360965 ± 2% softirqs.TIMER
54612 ± 5% +28.9% 70407 ± 14% interrupts.CPU0.LOC:Local_timer_interrupts
54567 ± 4% +28.5% 70099 ± 14% interrupts.CPU1.LOC:Local_timer_interrupts
478.25 ± 17% -31.6% 327.25 ± 10% interrupts.CPU1.RES:Rescheduling_interrupts
54440 ± 5% +28.4% 69905 ± 14% interrupts.CPU10.LOC:Local_timer_interrupts
455.00 ± 15% +24.1% 564.50 ± 7% interrupts.CPU10.RES:Rescheduling_interrupts
54429 ± 4% +28.9% 70143 ± 14% interrupts.CPU11.LOC:Local_timer_interrupts
54543 ± 5% +29.0% 70357 ± 14% interrupts.CPU12.LOC:Local_timer_interrupts
54167 ± 4% +29.0% 69861 ± 14% interrupts.CPU13.LOC:Local_timer_interrupts
54488 ± 6% +28.3% 69886 ± 14% interrupts.CPU14.LOC:Local_timer_interrupts
54716 ± 4% +28.5% 70305 ± 13% interrupts.CPU15.LOC:Local_timer_interrupts
54478 ± 5% +28.6% 70065 ± 14% interrupts.CPU2.LOC:Local_timer_interrupts
54570 ± 5% +28.5% 70121 ± 14% interrupts.CPU3.LOC:Local_timer_interrupts
54202 ± 4% +29.4% 70129 ± 14% interrupts.CPU4.LOC:Local_timer_interrupts
54387 ± 5% +29.0% 70153 ± 14% interrupts.CPU5.LOC:Local_timer_interrupts
54135 ± 5% +29.5% 70108 ± 14% interrupts.CPU6.LOC:Local_timer_interrupts
54331 ± 5% +28.8% 69985 ± 13% interrupts.CPU7.LOC:Local_timer_interrupts
54274 ± 5% +29.3% 70186 ± 13% interrupts.CPU8.LOC:Local_timer_interrupts
54347 ± 5% +28.9% 70035 ± 14% interrupts.CPU9.LOC:Local_timer_interrupts
448.75 ± 17% +27.7% 573.25 ± 5% interrupts.CPU9.RES:Rescheduling_interrupts
870692 ± 5% +28.8% 1121753 ± 14% interrupts.LOC:Local_timer_interrupts
421.75 ± 27% +34.4% 567.00 ± 20% interrupts.TLB:TLB_shootdowns
342.50 ± 3% +137.4% 813.25 slabinfo.TCPv6.active_objs
342.50 ± 3% +137.7% 814.25 slabinfo.TCPv6.num_objs
23040 +14.8% 26457 slabinfo.anon_vma.active_objs
23194 +14.5% 26569 slabinfo.anon_vma.num_objs
69731 +16.3% 81074 slabinfo.anon_vma_chain.active_objs
1095 +15.9% 1269 slabinfo.anon_vma_chain.active_slabs
70163 +15.8% 81268 slabinfo.anon_vma_chain.num_objs
1095 +15.9% 1269 slabinfo.anon_vma_chain.num_slabs
1184 ± 4% +18.9% 1408 ± 6% slabinfo.avc_xperms_data.active_objs
1184 ± 4% +18.9% 1408 ± 6% slabinfo.avc_xperms_data.num_objs
1650 +11.2% 1835 ± 3% slabinfo.cred_jar.active_objs
1650 +11.2% 1835 ± 3% slabinfo.cred_jar.num_objs
8356 ± 2% +14.1% 9531 ± 3% slabinfo.pid.active_objs
8366 +14.0% 9533 ± 3% slabinfo.pid.num_objs
294.00 +260.9% 1061 slabinfo.request_sock_TCP.active_objs
294.00 +260.9% 1061 slabinfo.request_sock_TCP.num_objs
2529 ± 2% +15.3% 2917 ± 10% slabinfo.skbuff_head_cache.num_objs
15.25 ± 43% +1260.7% 207.50 ± 9% slabinfo.tw_sock_TCPv6.active_objs
15.25 ± 43% +1260.7% 207.50 ± 9% slabinfo.tw_sock_TCPv6.num_objs
42187 +14.5% 48308 slabinfo.vm_area_struct.active_objs
1060 +14.1% 1209 slabinfo.vm_area_struct.active_slabs
42427 +14.1% 48400 slabinfo.vm_area_struct.num_objs
1060 +14.1% 1209 slabinfo.vm_area_struct.num_slabs
1.374e+09 ± 6% -33.3% 9.157e+08 perf-stat.i.branch-instructions
63009132 ± 10% -32.2% 42719501 perf-stat.i.branch-misses
5.02e+08 ± 2% -33.8% 3.325e+08 ± 2% perf-stat.i.cache-misses
5.02e+08 ± 2% -33.8% 3.325e+08 ± 2% perf-stat.i.cache-references
68561 ± 2% -34.3% 45061 perf-stat.i.context-switches
2.95 ± 4% +27.8% 3.77 ± 11% perf-stat.i.cpi
1.518e+10 ± 3% -32.4% 1.026e+10 perf-stat.i.cpu-cycles
2009 ± 10% -38.6% 1234 ± 8% perf-stat.i.cpu-migrations
43.69 ± 5% +17.7% 51.40 ± 7% perf-stat.i.cycles-between-cache-misses
10322885 ± 3% -33.0% 6920202 ± 2% perf-stat.i.dTLB-load-misses
1.954e+09 ± 4% -33.3% 1.303e+09 perf-stat.i.dTLB-loads
1648818 -33.1% 1103637 ± 4% perf-stat.i.dTLB-store-misses
1.124e+09 ± 3% -32.5% 7.585e+08 perf-stat.i.dTLB-stores
13218907 ± 4% -32.9% 8868341 ± 7% perf-stat.i.iTLB-load-misses
9268876 ± 5% -33.3% 6185290 ± 2% perf-stat.i.iTLB-loads
6.634e+09 ± 6% -33.5% 4.413e+09 ± 2% perf-stat.i.instructions
0.43 ± 3% -18.9% 0.34 ± 9% perf-stat.i.ipc
44269 ± 2% -28.3% 31730 ± 2% perf-stat.i.minor-faults
44269 ± 2% -28.3% 31730 ± 2% perf-stat.i.page-faults
30.22 +2.1% 30.85 perf-stat.overall.cycles-between-cache-misses
1.324e+09 ± 6% -32.5% 8.938e+08 perf-stat.ps.branch-instructions
60732087 ± 10% -31.3% 41694538 perf-stat.ps.branch-misses
4.839e+08 ± 2% -32.9% 3.245e+08 ± 2% perf-stat.ps.cache-misses
4.839e+08 ± 2% -32.9% 3.245e+08 ± 2% perf-stat.ps.cache-references
66084 ± 2% -33.4% 43980 perf-stat.ps.context-switches
1.463e+10 ± 3% -31.6% 1.001e+10 perf-stat.ps.cpu-cycles
1937 ± 10% -37.8% 1205 ± 8% perf-stat.ps.cpu-migrations
9949524 ± 3% -32.1% 6753873 ± 2% perf-stat.ps.dTLB-load-misses
1.884e+09 ± 4% -32.5% 1.271e+09 perf-stat.ps.dTLB-loads
1589327 -32.2% 1077121 ± 4% perf-stat.ps.dTLB-store-misses
1.084e+09 ± 3% -31.7% 7.403e+08 perf-stat.ps.dTLB-stores
12742546 ± 4% -32.1% 8655302 ± 7% perf-stat.ps.iTLB-load-misses
8934307 ± 5% -32.4% 6037143 ± 2% perf-stat.ps.iTLB-loads
6.395e+09 ± 6% -32.6% 4.307e+09 ± 2% perf-stat.ps.instructions
42675 ± 2% -27.4% 30969 ± 3% perf-stat.ps.minor-faults
30845 +1.3% 31232 perf-stat.ps.msec
42675 ± 2% -27.4% 30969 ± 3% perf-stat.ps.page-faults
13.67 ± 65% -12.0 1.67 ±173% perf-profile.calltrace.cycles-pp.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
13.67 ± 65% -12.0 1.67 ±173% perf-profile.calltrace.cycles-pp.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
13.67 ± 65% -12.0 1.67 ±173% perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
11.88 ± 78% -11.9 0.00 perf-profile.calltrace.cycles-pp.mmput.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
11.88 ± 78% -11.9 0.00 perf-profile.calltrace.cycles-pp.exit_mmap.mmput.do_exit.do_group_exit.__x64_sys_exit_group
15.45 ± 61% -10.5 5.00 ±173% perf-profile.calltrace.cycles-pp.secondary_startup_64
15.45 ± 61% -10.5 5.00 ±173% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
15.45 ± 61% -10.5 5.00 ±173% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
15.45 ± 61% -10.5 5.00 ±173% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
13.67 ± 65% -10.3 3.33 ±173% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
13.67 ± 65% -8.7 5.00 ±173% perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
13.67 ± 65% -8.7 5.00 ±173% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
7.42 ±100% -7.4 0.00 perf-profile.calltrace.cycles-pp.release_pages.tlb_flush_mmu.tlb_finish_mmu.exit_mmap.mmput
7.42 ±100% -7.4 0.00 perf-profile.calltrace.cycles-pp.tlb_finish_mmu.exit_mmap.mmput.do_exit.do_group_exit
7.42 ±100% -7.4 0.00 perf-profile.calltrace.cycles-pp.tlb_flush_mmu.tlb_finish_mmu.exit_mmap.mmput.do_exit
5.63 ±112% -5.6 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read
5.63 ±112% -5.6 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
5.63 ±112% -5.6 0.00 perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
5.63 ±112% -5.6 0.00 perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
5.63 ±112% -5.6 0.00 perf-profile.calltrace.cycles-pp.read
5.05 ±105% -5.0 0.00 perf-profile.calltrace.cycles-pp.page_remove_rmap.unmap_page_range.unmap_vmas.exit_mmap.mmput
5.05 ±105% -5.0 0.00 perf-profile.calltrace.cycles-pp.elf_map.load_elf_binary.search_binary_handler.__do_execve_file.__x64_sys_execve
5.05 ±105% -5.0 0.00 perf-profile.calltrace.cycles-pp.do_mmap.vm_mmap_pgoff.elf_map.load_elf_binary.search_binary_handler
5.05 ±105% -5.0 0.00 perf-profile.calltrace.cycles-pp.vm_mmap_pgoff.elf_map.load_elf_binary.search_binary_handler.__do_execve_file
5.05 ±105% -5.0 0.00 perf-profile.calltrace.cycles-pp.mmap_region.do_mmap.vm_mmap_pgoff.elf_map.load_elf_binary
4.91 ±107% -4.9 0.00 perf-profile.calltrace.cycles-pp.mmput.flush_old_exec.load_elf_binary.search_binary_handler.__do_execve_file
4.91 ±107% -4.9 0.00 perf-profile.calltrace.cycles-pp.exit_mmap.mmput.flush_old_exec.load_elf_binary.search_binary_handler
4.91 ±107% -4.9 0.00 perf-profile.calltrace.cycles-pp.flush_old_exec.load_elf_binary.search_binary_handler.__do_execve_file.__x64_sys_execve
4.91 ±107% -4.9 0.00 perf-profile.calltrace.cycles-pp.free_pgtables.exit_mmap.mmput.flush_old_exec.load_elf_binary
3.71 ±100% -3.7 0.00 perf-profile.calltrace.cycles-pp.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64
3.71 ±100% -3.7 0.00 perf-profile.calltrace.cycles-pp.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.71 ±100% -3.7 0.00 perf-profile.calltrace.cycles-pp.__queue_work.queue_work_on.put_task_stack.finish_task_switch.__schedule
3.71 ±100% -3.7 0.00 perf-profile.calltrace.cycles-pp.proc_reg_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.71 ±100% -3.7 0.00 perf-profile.calltrace.cycles-pp.seq_read.proc_reg_read.vfs_read.ksys_read.do_syscall_64
3.71 ±100% -3.7 0.00 perf-profile.calltrace.cycles-pp.show_interrupts.seq_read.proc_reg_read.vfs_read.ksys_read
3.71 ±100% -3.7 0.00 perf-profile.calltrace.cycles-pp.seq_printf.show_interrupts.seq_read.proc_reg_read.vfs_read
3.71 ±100% -3.7 0.00 perf-profile.calltrace.cycles-pp.seq_vprintf.seq_printf.show_interrupts.seq_read.proc_reg_read
3.71 ±100% -3.7 0.00 perf-profile.calltrace.cycles-pp.try_to_wake_up.__queue_work.queue_work_on.put_task_stack.finish_task_switch
3.71 ±100% -3.7 0.00 perf-profile.calltrace.cycles-pp.vsnprintf.seq_vprintf.seq_printf.show_interrupts.seq_read
3.71 ±100% -2.0 1.67 ±173% perf-profile.calltrace.cycles-pp.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.71 ±100% -2.0 1.67 ±173% perf-profile.calltrace.cycles-pp.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
9.96 ± 92% -1.6 8.33 ±173% perf-profile.calltrace.cycles-pp.load_elf_binary.search_binary_handler.__do_execve_file.__x64_sys_execve.do_syscall_64
9.96 ± 92% -1.6 8.33 ±173% perf-profile.calltrace.cycles-pp.search_binary_handler.__do_execve_file.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe
8.17 ±124% +0.2 8.33 ±173% perf-profile.calltrace.cycles-pp.page_fault.perf_mmap__push.record__mmap_read_evlist.cmd_record.run_builtin
8.17 ±124% +0.2 8.33 ±173% perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.exit_mmap.mmput.do_exit
8.17 ±124% +0.2 8.33 ±173% perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.mmput.do_exit.do_group_exit
6.83 ± 65% +1.5 8.33 ±173% perf-profile.calltrace.cycles-pp.record__mmap_read_evlist.cmd_record.run_builtin.main.__libc_start_main
5.50 ±108% +2.8 8.33 ±173% perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
3.71 ±100% +4.6 8.33 ±173% perf-profile.calltrace.cycles-pp.perf_mmap__push.record__mmap_read_evlist.cmd_record.run_builtin.main
13.67 ± 65% -12.0 1.67 ±173% perf-profile.children.cycles-pp.__x64_sys_exit_group
15.45 ± 61% -10.5 5.00 ±173% perf-profile.children.cycles-pp.secondary_startup_64
15.45 ± 61% -10.5 5.00 ±173% perf-profile.children.cycles-pp.start_secondary
15.45 ± 61% -10.5 5.00 ±173% perf-profile.children.cycles-pp.cpu_startup_entry
15.45 ± 61% -10.5 5.00 ±173% perf-profile.children.cycles-pp.do_idle
13.67 ± 65% -10.3 3.33 ±173% perf-profile.children.cycles-pp.intel_idle
8.76 ± 66% -8.8 0.00 perf-profile.children.cycles-pp.do_mmap
8.76 ± 66% -8.8 0.00 perf-profile.children.cycles-pp.mmap_region
13.67 ± 65% -8.7 5.00 ±173% perf-profile.children.cycles-pp.cpuidle_enter
13.67 ± 65% -8.7 5.00 ±173% perf-profile.children.cycles-pp.cpuidle_enter_state
7.42 ±100% -7.4 0.00 perf-profile.children.cycles-pp.release_pages
7.42 ±100% -7.4 0.00 perf-profile.children.cycles-pp.tlb_finish_mmu
7.42 ±100% -7.4 0.00 perf-profile.children.cycles-pp.tlb_flush_mmu
8.76 ± 66% -7.1 1.67 ±173% perf-profile.children.cycles-pp.vm_mmap_pgoff
5.63 ±112% -5.6 0.00 perf-profile.children.cycles-pp.ksys_read
5.63 ±112% -5.6 0.00 perf-profile.children.cycles-pp.vfs_read
5.63 ±112% -5.6 0.00 perf-profile.children.cycles-pp.read
5.05 ±105% -5.0 0.00 perf-profile.children.cycles-pp.page_remove_rmap
5.05 ±105% -5.0 0.00 perf-profile.children.cycles-pp.elf_map
5.05 ±105% -5.0 0.00 perf-profile.children.cycles-pp.memcpy_erms
4.91 ±107% -4.9 0.00 perf-profile.children.cycles-pp.flush_old_exec
3.71 ±100% -3.7 0.00 perf-profile.children.cycles-pp._raw_spin_lock
3.71 ±100% -3.7 0.00 perf-profile.children.cycles-pp.__schedule
3.71 ±100% -3.7 0.00 perf-profile.children.cycles-pp.__queue_work
3.71 ±100% -3.7 0.00 perf-profile.children.cycles-pp.finish_task_switch
3.71 ±100% -3.7 0.00 perf-profile.children.cycles-pp.proc_reg_read
3.71 ±100% -3.7 0.00 perf-profile.children.cycles-pp.put_task_stack
3.71 ±100% -3.7 0.00 perf-profile.children.cycles-pp.queue_work_on
3.71 ±100% -3.7 0.00 perf-profile.children.cycles-pp.seq_read
3.71 ±100% -3.7 0.00 perf-profile.children.cycles-pp.show_interrupts
3.71 ±100% -3.7 0.00 perf-profile.children.cycles-pp.seq_printf
3.71 ±100% -3.7 0.00 perf-profile.children.cycles-pp.seq_vprintf
3.71 ±100% -3.7 0.00 perf-profile.children.cycles-pp.try_to_wake_up
3.71 ±100% -3.7 0.00 perf-profile.children.cycles-pp.vsnprintf
3.71 ±100% -2.0 1.67 ±173% perf-profile.children.cycles-pp.ksys_mmap_pgoff
9.96 ± 92% -1.6 8.33 ±173% perf-profile.children.cycles-pp.load_elf_binary
9.96 ± 92% -1.6 8.33 ±173% perf-profile.children.cycles-pp.search_binary_handler
8.62 ± 64% -0.3 8.33 ±173% perf-profile.children.cycles-pp.page_fault
8.17 ±124% +0.2 8.33 ±173% perf-profile.children.cycles-pp.unmap_page_range
8.17 ±124% +0.2 8.33 ±173% perf-profile.children.cycles-pp.unmap_vmas
6.83 ± 65% +1.5 8.33 ±173% perf-profile.children.cycles-pp.record__mmap_read_evlist
6.83 ± 65% +1.5 8.33 ±173% perf-profile.children.cycles-pp.perf_mmap__push
5.50 ±108% +2.8 8.33 ±173% perf-profile.children.cycles-pp.do_page_fault
5.50 ±108% +2.8 8.33 ±173% perf-profile.children.cycles-pp.__do_page_fault
5.50 ±108% +2.8 8.33 ±173% perf-profile.children.cycles-pp.__handle_mm_fault
5.50 ±108% +2.8 8.33 ±173% perf-profile.children.cycles-pp.handle_mm_fault
4.91 ±107% +3.4 8.33 ±173% perf-profile.children.cycles-pp.free_pgtables
13.67 ± 65% -10.3 3.33 ±173% perf-profile.self.cycles-pp.intel_idle
5.50 ±108% -5.5 0.00 perf-profile.self.cycles-pp.release_pages
5.05 ±105% -5.0 0.00 perf-profile.self.cycles-pp.page_remove_rmap
5.05 ±105% -5.0 0.00 perf-profile.self.cycles-pp.memcpy_erms
3.71 ±100% -3.7 0.00 perf-profile.self.cycles-pp._raw_spin_lock
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[compiler.h] 642a312d47: stderr.from/usr/src/perf_selftests-x86_64-rhel-#/tools/bpf/bpf_dbg.c
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 642a312d47ceb54603630d9d04f5052f3b46d9a3 ("compiler.h: Split {READ,WRITE}_ONCE definitions out into rwonce.h")
https://git.kernel.org/cgit/linux/kernel/git/will/linux.git lto
in testcase: bpf_offload
with following parameters:
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
KERNEL SELFTESTS: linux_headers_dir is /usr/src/linux-headers-x86_64-rhel-7.6-642a312d47ceb54603630d9d04f5052f3b46d9a3
2019-11-12 15:37:58 cd /
2019-11-12 15:38:49 cd /usr/src/linux-selftests-x86_64-rhel-7.6-642a312d47ceb54603630d9d04f5052f3b46d9a3/tools/testing/selftests
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.6-642a312d47ceb54603630d9d04f5052f3b46d9a3/tools/bpf'
Auto-detecting system features:
... libbfd: [ on ]
... disassembler-four-args: [ OFF ]
CC bpf_jit_disasm.o
LINK bpf_jit_disasm
CC bpf_dbg.o
Makefile:61: recipe for target 'bpf_dbg.o' failed
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.6-642a312d47ceb54603630d9d04f5052f3b46d9a3/tools/bpf'
To reproduce:
# build kernel
cd linux
cp config-5.4.0-rc6-00001-g642a312d47ceb .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen
[tcp] f859a44847: stderr.fastopen-client.pkt:#:runtime_error_in_sendto_call:Expected_result#but_got-#with_errno#(Operation_now_in_progress)
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: f859a448470304135f7a1af0083b99e188873bb4 ("tcp: allow zerocopy with fastopen")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: packetdrill
with following parameters:
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
Running packetdrill/tests/bsd/fast_retransmit/fr-4pkt-sack-bsd.pkt ...
2019-11-08 16:55:24 packetdrill/packetdrill --tolerance_usecs=40000 packetdrill/tests/bsd/fast_retransmit/fr-4pkt-sack-bsd.pkt
packetdrill/tests/bsd/fast_retransmit/fr-4pkt-sack-bsd.pkt:25: error handling packet: live packet payload: expected 1000 bytes vs actual 2000 bytes
packetdrill/tests/bsd/fast_retransmit/fr-4pkt-sack-bsd.pkt failed
Running packetdrill/tests/linux/fast_retransmit/fr-4pkt-sack-linux.pkt ...
2019-11-08 16:55:25 packetdrill/packetdrill --tolerance_usecs=40000 packetdrill/tests/linux/fast_retransmit/fr-4pkt-sack-linux.pkt
packetdrill/tests/linux/fast_retransmit/fr-4pkt-sack-linux.pkt pass
Running packetdrill/tests/linux/packetdrill/socket_err.pkt ...
2019-11-08 16:55:25 packetdrill/packetdrill --tolerance_usecs=40000 packetdrill/tests/linux/packetdrill/socket_err.pkt
packetdrill/tests/linux/packetdrill/socket_err.pkt:6: runtime error in socket call: Expected non-negative result but got -1 with errno 93 (Protocol not supported)
packetdrill/tests/linux/packetdrill/socket_err.pkt failed
Running packetdrill/tests/linux/packetdrill/socket_wrong_err.pkt ...
2019-11-08 16:55:26 packetdrill/packetdrill --tolerance_usecs=40000 packetdrill/tests/linux/packetdrill/socket_wrong_err.pkt
packetdrill/tests/linux/packetdrill/socket_wrong_err.pkt:4: runtime error in socket call: Expected result -99 but got -1 with errno 93 (Protocol not supported)
packetdrill/tests/linux/packetdrill/socket_wrong_err.pkt failed
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/blocking/blocking-accept.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/blocking/blocking-accept.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/blocking/blocking-connect.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/blocking/blocking-read.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/blocking/blocking-read.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/blocking/blocking-write.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/close/close-local-close-then-remote-fin.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/close/close-local-close-then-remote-fin.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/close/close-on-syn-sent.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/close/close-remote-fin-then-close.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/close/close-remote-fin-then-close.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/cwnd_moderation/cwnd-moderation-disorder-no-moderation.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/cwnd_moderation/cwnd-moderation-ecn-enter-cwr-no-moderation-700.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/cwnd_moderation/cwnd-moderation-ecn-enter-cwr-no-moderation-700.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/eor/no-coalesce-large.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/eor/no-coalesce-retrans.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/eor/no-coalesce-retrans.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/eor/no-coalesce-small.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/eor/no-coalesce-subsequent.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/eor/no-coalesce-subsequent.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/epoll/epoll_in_edge.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/epoll/epoll_out_edge.pkt (ipv4)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/epoll/epoll_out_edge.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/epoll/epoll_out_edge_default_notsent_lowat.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/epoll/epoll_out_edge_notsent_lowat.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/epoll/epoll_out_edge_notsent_lowat.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/gro/gro-mss-option.pkt (ipv4-mapped-v6)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/inq/client.pkt (ipv4)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/inq/client.pkt (ipv6)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/inq/server.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/ioctl/ioctl-siocinq-fin.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/ioctl/ioctl-siocinq-fin.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/limited_transmit/limited-transmit-no-sack.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/limited_transmit/limited-transmit-sack.pkt (ipv4)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/limited_transmit/limited-transmit-sack.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/md5/md5-only-on-client-ack.pkt (ipv4-mapped-v6)]
stdout:
/proc/net/tcp6: 1: 0000000000000000FFFF0000D5FFA8C0:1F90 00000000000000000000000000000000:0000 0A 00000000:00000000 00:00000000 00000000 0 0 47675 2 (____ptrval____) 100 0 0 10 1
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/mss/mss-getsockopt-tcp_maxseg-server.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/mss/mss-getsockopt-tcp_maxseg-server.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/mss/mss-setsockopt-tcp_maxseg-client.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/mss/mss-setsockopt-tcp_maxseg-server.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/mss/mss-setsockopt-tcp_maxseg-server.pkt (ipv6)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/mtu_probe/basic-v4.pkt (ipv4-mapped-v6)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/nagle/https_client.pkt (ipv4)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/nagle/https_client.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/nagle/sendmsg_msg_more.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/nagle/sockopt_cork_nodelay.pkt (ipv4)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/nagle/sockopt_cork_nodelay.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/notsent_lowat/notsent-lowat-default.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/notsent_lowat/notsent-lowat-setsockopt.pkt (ipv4)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/notsent_lowat/notsent-lowat-setsockopt.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/notsent_lowat/notsent-lowat-sysctl.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/sack/sack-route-refresh-ip-tos.pkt (ipv4)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/sack/sack-route-refresh-ip-tos.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/sack/sack-shift-sacked-2-6-8-3-9-nofack.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/sack/sack-shift-sacked-7-3-4-8-9-fack.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/sack/sack-shift-sacked-7-3-4-8-9-fack.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/sack/sack-shift-sacked-7-5-6-8-9-fack.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/sendfile/sendfile-simple.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/sendfile/sendfile-simple.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/shutdown/shutdown-rd-close.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/shutdown/shutdown-rd-wr-close.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/shutdown/shutdown-rd-wr-close.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/shutdown/shutdown-rdwr-close.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/shutdown/shutdown-rdwr-send-queue-ack-close.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/shutdown/shutdown-rdwr-send-queue-ack-close.pkt (ipv6)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/shutdown/shutdown-rdwr-write-queue-close.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/shutdown/shutdown-wr-close.pkt (ipv4)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/shutdown/shutdown-wr-close.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/splice/tcp_splice_loop_test.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/syscall_bad_arg/fastopen-invalid-buf-ptr.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/syscall_bad_arg/fastopen-invalid-buf-ptr.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/syscall_bad_arg/sendmsg-empty-iov.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/syscall_bad_arg/syscall-invalid-buf-ptr.pkt (ipv4)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/syscall_bad_arg/syscall-invalid-buf-ptr.pkt (ipv6)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/tcp_info/tcp-info-last_data_recv.pkt (ipv4-mapped-v6)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/tcp_info/tcp-info-rwnd-limited.pkt (ipv4)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/tcp_info/tcp-info-sndbuf-limited.pkt (ipv4)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/tcp_info/tcp-info-sndbuf-limited.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/timestamping/client-only-last-byte.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/timestamping/partial.pkt (ipv4)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/timestamping/partial.pkt (ipv6)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/timestamping/server.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/ts_recent/fin_tsval.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/ts_recent/invalid_ack.pkt (ipv4-mapped-v6)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/ts_recent/reset_tsval.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/validate/validate-established-no-flags.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/zerocopy/basic.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/zerocopy/batch.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/zerocopy/client.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/blocking/blocking-accept.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/blocking/blocking-connect.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/blocking/blocking-write.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/close/close-local-close-then-remote-fin.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/close/close-on-syn-sent.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/cwnd_moderation/cwnd-moderation-disorder-no-moderation.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/cwnd_moderation/cwnd-moderation-ecn-enter-cwr-no-moderation-700.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/eor/no-coalesce-large.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/eor/no-coalesce-small.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/eor/no-coalesce-subsequent.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/epoll/epoll_in_edge.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/epoll/epoll_out_edge_default_notsent_lowat.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/epoll/epoll_out_edge_notsent_lowat.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/gro/gro-mss-option.pkt (ipv6)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/inq/server.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/ioctl/ioctl-siocinq-fin.pkt (ipv4-mapped-v6)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/limited_transmit/limited-transmit-no-sack.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/md5/md5-only-on-client-ack.pkt (ipv4)]
stdout:
/proc/net/tcp: 0: 5054A8C0:1F90 00000000:0000 0A 00000000:00000000 00:00000000 00000000 0 0 47290 2 (____ptrval____) 100 0 0 10 1
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/mss/mss-getsockopt-tcp_maxseg-server.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/mss/mss-setsockopt-tcp_maxseg-client.pkt (ipv6)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/mtu_probe/basic-v4.pkt (ipv4)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/nagle/https_client.pkt (ipv4-mapped-v6)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/nagle/sendmsg_msg_more.pkt (ipv6)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/notsent_lowat/notsent-lowat-default.pkt (ipv4)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/notsent_lowat/notsent-lowat-setsockopt.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/notsent_lowat/notsent-lowat-sysctl.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/sack/sack-shift-sacked-2-6-8-3-9-nofack.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/sack/sack-shift-sacked-7-3-4-8-9-fack.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/sack/sack-shift-sacked-7-5-6-8-9-fack.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/shutdown/shutdown-rd-close.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/shutdown/shutdown-rd-wr-close.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/shutdown/shutdown-rdwr-close.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/shutdown/shutdown-rdwr-write-queue-close.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/shutdown/shutdown-wr-close.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/splice/tcp_splice_loop_test.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/syscall_bad_arg/sendmsg-empty-iov.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/syscall_bad_arg/syscall-invalid-buf-ptr.pkt (ipv4-mapped-v6)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/tcp_info/tcp-info-last_data_recv.pkt (ipv6)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/tcp_info/tcp-info-rwnd-limited.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/timestamping/client-only-last-byte.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/timestamping/partial.pkt (ipv4-mapped-v6)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/timestamping/server.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/ts_recent/invalid_ack.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/ts_recent/reset_tsval.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/user_timeout/user-timeout-probe.pkt (ipv4)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/user_timeout/user_timeout.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/validate/validate-established-no-flags.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/zerocopy/batch.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/zerocopy/client.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/zerocopy/epoll_exclusive.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/blocking/blocking-connect.pkt (ipv4)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/blocking/blocking-write.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/close/close-remote-fin-then-close.pkt (ipv4-mapped-v6)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/eor/no-coalesce-large.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/eor/no-coalesce-small.pkt (ipv6)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/epoll/epoll_out_edge.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/gro/gro-mss-option.pkt (ipv4)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/inq/server.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/limited_transmit/limited-transmit-sack.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/mss/mss-setsockopt-tcp_maxseg-client.pkt (ipv4)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/mtu_probe/basic-v6.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/nagle/sockopt_cork_nodelay.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/notsent_lowat/notsent-lowat-sysctl.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/sack/sack-shift-sacked-2-6-8-3-9-nofack.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/sendfile/sendfile-simple.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/shutdown/shutdown-rdwr-close.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/shutdown/shutdown-rdwr-write-queue-close.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/syscall_bad_arg/fastopen-invalid-buf-ptr.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/tcp_info/tcp-info-last_data_recv.pkt (ipv4)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/tcp_info/tcp-info-sndbuf-limited.pkt (ipv4-mapped-v6)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/timestamping/server.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/ts_recent/fin_tsval.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/ts_recent/reset_tsval.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/validate/validate-established-no-flags.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/zerocopy/batch.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/zerocopy/closed.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/zerocopy/closed.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/zerocopy/epoll_exclusive.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/zerocopy/epoll_oneshot.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/blocking/blocking-read.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/cwnd_moderation/cwnd-moderation-disorder-no-moderation.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/epoll/epoll_in_edge.pkt (ipv4)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/inq/client.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/md5/md5-only-on-client-ack.pkt (ipv6)]
stdout:
/proc/net/tcp6: 0: 7BFA3DFD00007DD10000000036638C83:1F90 00000000000000000000000000000000:0000 0A 00000000:00000000 00:00000000 00000000 0 0 45971 2 (____ptrval____) 100 0 0 10 1
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/nagle/sendmsg_msg_more.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/sack/sack-route-refresh-ip-tos.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/shutdown/shutdown-rd-close.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/splice/tcp_splice_loop_test.pkt (ipv4)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/tcp_info/tcp-info-rwnd-limited.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/ts_recent/fin_tsval.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/user_timeout/user-timeout-probe.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/user_timeout/user_timeout.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/zerocopy/basic.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/zerocopy/closed.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/zerocopy/epoll_edge.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/zerocopy/epoll_exclusive.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/zerocopy/epoll_oneshot.pkt (ipv6)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/zerocopy/fastopen-client.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/zerocopy/fastopen-server.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/close/close-on-syn-sent.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/epoll/epoll_out_edge_default_notsent_lowat.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/mss/mss-setsockopt-tcp_maxseg-server.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/sack/sack-shift-sacked-7-5-6-8-9-fack.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/syscall_bad_arg/sendmsg-empty-iov.pkt (ipv6)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/ts_recent/invalid_ack.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/user_timeout/user_timeout.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/zerocopy/client.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/zerocopy/epoll_edge.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/zerocopy/fastopen-client.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/zerocopy/fastopen-server.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/zerocopy/small.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/eor/no-coalesce-retrans.pkt (ipv4-mapped-v6)]
stdout:
stderr:
FAIL [/lkp/benchmarks/packetdrill/gtests/net/tcp/notsent_lowat/notsent-lowat-default.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/timestamping/client-only-last-byte.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/zerocopy/basic.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/zerocopy/epoll_oneshot.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/zerocopy/fastopen-server.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/zerocopy/small.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/limited_transmit/limited-transmit-no-sack.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/user_timeout/user-timeout-probe.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/zerocopy/fastopen-client.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/zerocopy/maxfrags.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/zerocopy/small.pkt (ipv6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/shutdown/shutdown-rdwr-send-queue-ack-close.pkt (ipv4-mapped-v6)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/zerocopy/maxfrags.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/zerocopy/epoll_edge.pkt (ipv4)]
stdout:
stderr:
OK [/lkp/benchmarks/packetdrill/gtests/net/tcp/zerocopy/maxfrags.pkt (ipv6)]
stdout:
stderr:
Ran 216 tests: 172 passing, 44 failing, 0 timed out (29.60 sec): tcp
To reproduce:
# build kernel
cd linux
cp config-5.0.0-rc3-00332-gf859a44847030 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen
[perf stat] 1d2620eedc: perf-sanity-tests.'import_perf'_in_python.fail
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 1d2620eedc3ea61969983df63709ef8cb3494ade ("perf stat: Use affinity for closing file descriptors")
https://git.kernel.org/cgit/linux/kernel/git/ak/linux-misc.git perf/stat-scale-8
in testcase: perf-sanity-tests
with following parameters:
perf_compiler: gcc
ucode: 0x27
on test machine: 8 threads Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz with 8G memory
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
2019-11-09 06:45:28 sudo /usr/src/perf_selftests-x86_64-rhel-7.6-1d2620eedc3ea61969983df63709ef8cb3494ade/tools/perf/perf test 10
10: DSO data read : Ok
2019-11-09 06:45:28 sudo /usr/src/perf_selftests-x86_64-rhel-7.6-1d2620eedc3ea61969983df63709ef8cb3494ade/tools/perf/perf test 11
11: DSO data cache : Ok
2019-11-09 06:45:28 sudo /usr/src/perf_selftests-x86_64-rhel-7.6-1d2620eedc3ea61969983df63709ef8cb3494ade/tools/perf/perf test 12
12: DSO data reopen : Ok
2019-11-09 06:45:28 sudo /usr/src/perf_selftests-x86_64-rhel-7.6-1d2620eedc3ea61969983df63709ef8cb3494ade/tools/perf/perf test 13
13: Roundtrip evsel->name : Ok
2019-11-09 06:45:28 sudo /usr/src/perf_selftests-x86_64-rhel-7.6-1d2620eedc3ea61969983df63709ef8cb3494ade/tools/perf/perf test 14
14: Parse sched tracepoints fields : Ok
2019-11-09 06:45:28 sudo /usr/src/perf_selftests-x86_64-rhel-7.6-1d2620eedc3ea61969983df63709ef8cb3494ade/tools/perf/perf test 15
15: syscalls:sys_enter_openat event fields : Ok
2019-11-09 06:45:28 sudo /usr/src/perf_selftests-x86_64-rhel-7.6-1d2620eedc3ea61969983df63709ef8cb3494ade/tools/perf/perf test 16
16: Setup struct perf_event_attr : Ok
2019-11-09 06:45:34 sudo /usr/src/perf_selftests-x86_64-rhel-7.6-1d2620eedc3ea61969983df63709ef8cb3494ade/tools/perf/perf test 17
17: Match and link multiple hists : Ok
2019-11-09 06:45:34 sudo /usr/src/perf_selftests-x86_64-rhel-7.6-1d2620eedc3ea61969983df63709ef8cb3494ade/tools/perf/perf test 18
18: 'import perf' in python : FAILED!
There's a fix patch now: https://www.spinics.net/lists/kernel/msg3313909.html
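Test 18 only checks that the python extension built from tools/perf/python can be imported. A minimal manual check of the same thing can be sketched as below (an illustration only: using `python3` as the interpreter is an assumption; the module name `perf` and the Ok/FAILED! verdict come from the log above).

```shell
# Attempt the same import the perf sanity test performs and print the
# same style of verdict the test harness reports.
if python3 -c "import perf" 2>/dev/null; then
    echo "Ok"
else
    echo "FAILED!"
fi
```

On a machine where the perf python module was not built or installed, this prints FAILED!, matching test 18 in the log.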
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Rong Chen
Re: [LTP] [xfs] 73e5fff98b: kmsg.dev/zero:Can't_open_blockdev
by Jan Stancek
----- Original Message -----
> >
> > If you fix the issue, kindly add following tag
> > Reported-by: kernel test robot <rong.a.chen(a)intel.com>
> >
> > [ 135.976643] LTP: starting fs_fill
> > [ 135.993912] /dev/zero: Can't open blockdev
>
> I've looked at the github source but I don't see how to find out what
> command was used when this error occurred so I don't know even how to
> start to work out what might have caused this.
>
> Can you help me to find the test script source please.
https://github.com/linux-test-project/ltp/blob/master/testcases/kernel/fs...
The setup of that test tries different file system types and looks at the errno of "mount -t $fs /dev/zero /mnt/$fs".
The test still PASSes. This report appears to be only about an extra message in dmesg,
which wasn't there before:
# mount -t xfs /dev/zero /mnt/xfs
# dmesg -c
[ 897.177168] /dev/zero: Can't open blockdev
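The probing described above can be sketched as a small loop (an illustration only, not the LTP source; the filesystem list and mount points are assumptions):

```shell
# For each filesystem type, attempt the same mount of /dev/zero that the
# fs_fill setup performs, and report failure; the setup uses the mount
# errno to decide which filesystem types to skip.
for fs in ext4 xfs btrfs; do
    if err=$(mount -t "$fs" /dev/zero "/mnt/$fs" 2>&1); then
        echo "$fs: mounted (unexpected for /dev/zero)"
        umount "/mnt/$fs"
    else
        echo "$fs: mount failed: $err"
    fi
done
```

Every iteration is expected to fail, since /dev/zero is a character device, not a block device; the point of the report is only that newer kernels additionally log "Can't open blockdev" to dmesg while doing so.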
Regards,
Jan
[xfs] 646a9fb7fb: aim7.jobs-per-min -33.0% regression
by kernel test robot
Greeting,
FYI, we noticed a -33.0% regression of aim7.jobs-per-min due to commit:
commit: 646a9fb7fb983a7d53da1892c7f1225929de5bd1 ("xfs: deferred inode inactivation")
https://git.kernel.org/cgit/linux/kernel/git/djwong/xfs-linux.git repair-hard-problems
in testcase: aim7
on test machine: 72 threads Intel(R) Xeon(R) Gold 6139 CPU @ 2.30GHz with 128G memory
with following parameters:
disk: 4BRD_12G
md: RAID0
fs: xfs
test: disk_wrt
load: 3000
cpufreq_governor: performance
ucode: 0x2000064
test-description: AIM7 is a traditional UNIX system-level benchmark suite which is used to test and measure the performance of a multiuser system.
test-url: https://sourceforge.net/projects/aimbench/files/aim-suite7/
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/load/md/rootfs/tbox_group/test/testcase/ucode:
gcc-7/performance/4BRD_12G/xfs/x86_64-rhel-7.6/3000/RAID0/debian-x86_64-2019-09-23.cgz/lkp-skl-2sp7/disk_wrt/aim7/0x2000064
commit:
109ace4e35 ("xfs: pass around xfs_inode_ag_walk iget/irele helper functions")
646a9fb7fb ("xfs: deferred inode inactivation")
109ace4e35577294 646a9fb7fb983a7d53da1892c7f
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
1:4 -25% :4 dmesg.WARNING:at#for_ip_interrupt_entry/0x
:4 100% 4:4 kmsg.XFS(md#):xlog_verify_grant_tail:space>BBTOB(tail_blocks)
%stddev %change %stddev
\ | \
438138 -33.0% 293397 ± 2% aim7.jobs-per-min
41.26 +49.2% 61.56 ± 2% aim7.time.elapsed_time
41.26 +49.2% 61.56 ± 2% aim7.time.elapsed_time.max
1229 ± 5% +677.3% 9553 ± 4% aim7.time.involuntary_context_switches
429.70 +363.3% 1990 ± 2% aim7.time.system_time
84.68 +6.5% 90.16 aim7.time.user_time
924587 -52.2% 442059 ± 4% aim7.time.voluntary_context_switches
8217 ±114% +99.3% 16377 ± 42% numa-numastat.node1.other_node
26110006 ± 88% -95.6% 1144783 ± 20% cpuidle.C1.time
160209 ± 34% -76.5% 37711 ± 18% cpuidle.C1.usage
4453545 ±119% -77.6% 995813 ± 15% cpuidle.POLL.time
82.85 -27.5 55.31 ± 5% mpstat.cpu.all.idle%
0.00 ± 55% -0.0 0.00 mpstat.cpu.all.iowait%
14.24 ± 2% +28.4 42.68 ± 7% mpstat.cpu.all.sys%
2.89 -0.9 2.01 ± 4% mpstat.cpu.all.usr%
83.25 -32.7% 56.00 ± 5% vmstat.cpu.id
3086 ± 5% +127.4% 7020 ± 7% vmstat.io.bo
11.25 ± 3% +173.3% 30.75 ± 9% vmstat.procs.r
43674 -62.1% 16550 ± 2% vmstat.system.cs
83.45 -32.6% 56.26 ± 5% iostat.cpu.idle
13.74 ± 2% +203.9% 41.76 ± 7% iostat.cpu.system
2.80 -29.4% 1.98 ± 4% iostat.cpu.user
92.59 +205.5% 282.91 ± 7% iostat.md0.w/s
3231 ± 5% +124.5% 7252 ± 7% iostat.md0.wkB/s
566.50 ± 2% +145.4% 1390 ± 7% turbostat.Avg_MHz
19.01 ± 4% +26.9 45.89 ± 6% turbostat.Busy%
156911 ± 34% -77.3% 35696 ± 14% turbostat.C1
0.84 ± 87% -0.8 0.03 ± 19% turbostat.C1%
77.66 ± 5% -34.0% 51.24 ± 10% turbostat.CPU%c1
1.33 ±173% +3.9 5.20 ± 23% turbostat.PKG_%
3347 ± 14% +21.3% 4060 ± 10% numa-meminfo.node0.Dirty
3514 ± 27% +59.8% 5617 ± 10% numa-meminfo.node0.Inactive(file)
51128 ± 18% +44.4% 73837 ± 20% numa-meminfo.node0.KReclaimable
1273839 ± 17% +16.7% 1486631 ± 15% numa-meminfo.node0.MemUsed
51128 ± 18% +44.4% 73837 ± 20% numa-meminfo.node0.SReclaimable
110431 ± 36% +46.5% 161821 ± 25% numa-meminfo.node0.SUnreclaim
161560 ± 30% +45.9% 235659 ± 22% numa-meminfo.node0.Slab
5501 ± 39% +115.0% 11829 ± 54% numa-meminfo.node1.Inactive
1514 ± 66% +346.2% 6755 ±103% numa-meminfo.node1.Inactive(anon)
844.75 ± 14% +24.8% 1054 ± 11% numa-vmstat.node0.nr_dirty
882.00 ± 26% +64.5% 1451 ± 10% numa-vmstat.node0.nr_inactive_file
12779 ± 18% +44.4% 18455 ± 20% numa-vmstat.node0.nr_slab_reclaimable
27602 ± 36% +46.5% 40444 ± 25% numa-vmstat.node0.nr_slab_unreclaimable
889.25 ± 27% +62.7% 1446 ± 10% numa-vmstat.node0.nr_zone_inactive_file
779.00 ± 21% +33.5% 1039 ± 12% numa-vmstat.node1.nr_dirty
378.00 ± 66% +346.7% 1688 ±103% numa-vmstat.node1.nr_inactive_anon
1577 ± 40% +92.4% 3034 ± 15% numa-vmstat.node1.nr_written
378.00 ± 66% +346.7% 1688 ±103% numa-vmstat.node1.nr_zone_inactive_anon
47945 ± 3% +32.1% 63332 ± 9% meminfo.AnonHugePages
25623 ± 2% +17.5% 30095 ± 2% meminfo.Inactive
18131 +8.8% 19719 meminfo.Inactive(anon)
7491 ± 8% +38.5% 10375 ± 8% meminfo.Inactive(file)
98549 +37.2% 135204 meminfo.KReclaimable
98549 +37.2% 135204 meminfo.SReclaimable
244175 +9.9% 268384 meminfo.SUnreclaim
31605 ± 2% +57.8% 49875 ± 11% meminfo.Shmem
342725 +17.8% 403589 meminfo.Slab
63159 -30.5% 43885 ± 5% meminfo.max_used_kB
16.00 +346.1% 71.38 ±116% sched_debug.cfs_rq:/.runnable_load_avg.max
613880 ± 4% +13.9% 698933 ± 8% sched_debug.cpu.avg_idle.avg
2219 ± 51% +6020.3% 135810 ± 70% sched_debug.cpu.avg_idle.min
258921 ± 3% -22.1% 201779 ± 12% sched_debug.cpu.avg_idle.stddev
30262 ± 2% +75.0% 52944 ± 24% sched_debug.cpu.clock.avg
30265 ± 2% +75.0% 52950 ± 24% sched_debug.cpu.clock.max
30258 ± 2% +74.9% 52936 ± 24% sched_debug.cpu.clock.min
1.73 ± 7% +104.3% 3.54 ± 64% sched_debug.cpu.clock.stddev
30262 ± 2% +75.0% 52944 ± 24% sched_debug.cpu.clock_task.avg
30265 ± 2% +75.0% 52950 ± 24% sched_debug.cpu.clock_task.max
30258 ± 2% +74.9% 52936 ± 24% sched_debug.cpu.clock_task.min
1.73 ± 7% +104.3% 3.54 ± 64% sched_debug.cpu.clock_task.stddev
0.01 ±173% +47900.0% 3.33 ±109% sched_debug.cpu.nr_uninterruptible.avg
-18.25 +98.6% -36.25 sched_debug.cpu.nr_uninterruptible.min
30259 ± 2% +74.9% 52938 ± 24% sched_debug.cpu_clk
27548 ± 2% +82.3% 50227 ± 25% sched_debug.ktime
30597 ± 2% +74.1% 53282 ± 24% sched_debug.sched_clk
90738 +2.7% 93156 proc-vmstat.nr_active_anon
1651 ± 9% +24.4% 2054 ± 10% proc-vmstat.nr_dirty
281450 +1.7% 286223 proc-vmstat.nr_file_pages
4532 +8.8% 4929 proc-vmstat.nr_inactive_anon
1886 ± 8% +43.0% 2698 ± 9% proc-vmstat.nr_inactive_file
57113 -6.6% 53362 ± 5% proc-vmstat.nr_kernel_stack
7081 +6.1% 7517 ± 2% proc-vmstat.nr_mapped
7901 ± 2% +57.8% 12469 ± 11% proc-vmstat.nr_shmem
24642 +37.1% 33789 proc-vmstat.nr_slab_reclaimable
61028 +9.9% 67087 proc-vmstat.nr_slab_unreclaimable
8606 ± 25% +70.9% 14710 ± 15% proc-vmstat.nr_written
90738 +2.7% 93156 proc-vmstat.nr_zone_active_anon
4532 +8.8% 4929 proc-vmstat.nr_zone_inactive_anon
1886 ± 8% +43.0% 2698 ± 9% proc-vmstat.nr_zone_inactive_file
615.25 ±139% +561.8% 4071 ± 62% proc-vmstat.numa_pages_migrated
289105 ± 2% +23.6% 357466 ± 2% proc-vmstat.pgfault
615.25 ±139% +561.8% 4071 ± 62% proc-vmstat.pgmigrate_success
136493 ± 5% +242.5% 467494 ± 3% proc-vmstat.pgpgout
1970 ± 5% +18.1% 2326 ± 3% slabinfo.blkdev_ioc.active_objs
1970 ± 5% +18.1% 2326 ± 3% slabinfo.blkdev_ioc.num_objs
3708 ± 3% +639.3% 27420 ± 5% slabinfo.dmaengine-unmap-16.active_objs
93.50 ± 2% +803.7% 845.00 slabinfo.dmaengine-unmap-16.active_slabs
3952 ± 2% +798.3% 35504 slabinfo.dmaengine-unmap-16.num_objs
93.50 ± 2% +803.7% 845.00 slabinfo.dmaengine-unmap-16.num_slabs
2324 +12.8% 2622 slabinfo.ip6-frags.active_objs
2324 +12.8% 2622 slabinfo.ip6-frags.num_objs
3633 +36.6% 4962 ± 2% slabinfo.kmalloc-128.active_objs
3633 +36.6% 4962 ± 2% slabinfo.kmalloc-128.num_objs
27732 +330.8% 119465 ± 5% slabinfo.kmalloc-16.active_objs
108.75 +331.3% 469.00 ± 5% slabinfo.kmalloc-16.active_slabs
27840 +331.9% 120245 ± 5% slabinfo.kmalloc-16.num_objs
108.75 +331.3% 469.00 ± 5% slabinfo.kmalloc-16.num_slabs
1210 +19.1% 1441 ± 3% slabinfo.kmalloc-4k.active_objs
1225 +18.5% 1452 ± 3% slabinfo.kmalloc-4k.num_objs
9390 +250.5% 32916 ± 4% slabinfo.kmalloc-512.active_objs
305.50 +296.2% 1210 ± 2% slabinfo.kmalloc-512.active_slabs
9791 +295.7% 38742 ± 2% slabinfo.kmalloc-512.num_objs
305.50 +296.2% 1210 ± 2% slabinfo.kmalloc-512.num_slabs
1316 +23.8% 1629 slabinfo.kmalloc-8k.active_objs
1322 +28.1% 1694 slabinfo.kmalloc-8k.num_objs
26510 ± 2% +22.0% 32331 ± 9% slabinfo.radix_tree_node.active_objs
473.50 ± 2% +23.0% 582.50 ± 9% slabinfo.radix_tree_node.active_slabs
26551 ± 2% +23.0% 32654 ± 9% slabinfo.radix_tree_node.num_objs
473.50 ± 2% +23.0% 582.50 ± 9% slabinfo.radix_tree_node.num_slabs
1902 +12.8% 2145 slabinfo.xfs_btree_cur.active_objs
1902 +12.8% 2145 slabinfo.xfs_btree_cur.num_objs
2088 ± 6% +25.3% 2616 slabinfo.xfs_buf.active_objs
2088 ± 6% +25.3% 2616 slabinfo.xfs_buf.num_objs
3141 ± 4% +22.4% 3844 slabinfo.xfs_buf_item.active_objs
3141 ± 4% +22.4% 3844 slabinfo.xfs_buf_item.num_objs
1760 +12.6% 1983 slabinfo.xfs_da_state.active_objs
1760 +12.6% 1983 slabinfo.xfs_da_state.num_objs
1636 ± 3% +68.9% 2765 ± 2% slabinfo.xfs_efd_item.active_objs
1636 ± 3% +68.9% 2765 ± 2% slabinfo.xfs_efd_item.num_objs
2704 ± 3% +876.7% 26413 ± 6% slabinfo.xfs_inode.active_objs
97.75 ± 4% +985.4% 1061 slabinfo.xfs_inode.active_slabs
3142 ± 4% +981.0% 33967 slabinfo.xfs_inode.num_objs
97.75 ± 4% +985.4% 1061 slabinfo.xfs_inode.num_slabs
6.732e+09 -11.8% 5.938e+09 ± 5% perf-stat.i.branch-instructions
79001438 ± 3% -31.3% 54255443 ± 4% perf-stat.i.branch-misses
21293936 ± 7% -33.2% 14225969 ± 3% perf-stat.i.cache-misses
94432275 ± 12% -40.5% 56195402 ± 12% perf-stat.i.cache-references
45374 -62.9% 16826 ± 2% perf-stat.i.context-switches
1.72 ± 17% +77.6% 3.05 ± 8% perf-stat.i.cpi
4.018e+10 ± 2% +149.8% 1.003e+11 ± 6% perf-stat.i.cpu-cycles
1252 ± 31% -45.3% 685.42 ± 8% perf-stat.i.cpu-migrations
2504 ± 9% +136.5% 5924 ± 5% perf-stat.i.cycles-between-cache-misses
10043892 ± 3% -41.3% 5896357 ± 7% perf-stat.i.dTLB-load-misses
9.848e+09 -8.8% 8.977e+09 ± 5% perf-stat.i.dTLB-loads
181346 ± 13% -49.7% 91182 ± 31% perf-stat.i.dTLB-store-misses
5.606e+09 -31.8% 3.826e+09 ± 4% perf-stat.i.dTLB-stores
8571284 -39.5% 5185323 ± 4% perf-stat.i.iTLB-load-misses
9289772 ± 5% -38.9% 5675605 ± 13% perf-stat.i.iTLB-loads
3.39e+10 -10.5% 3.034e+10 ± 5% perf-stat.i.instructions
0.74 ± 2% -46.7% 0.40 ± 3% perf-stat.i.ipc
6021 -14.5% 5149 ± 2% perf-stat.i.minor-faults
3332988 ± 14% -31.9% 2270988 ± 4% perf-stat.i.node-load-misses
1658009 ± 6% -40.3% 989211 ± 5% perf-stat.i.node-loads
26.86 ± 9% +10.2 37.10 ± 2% perf-stat.i.node-store-miss-rate%
4189562 ± 2% -39.5% 2532889 ± 4% perf-stat.i.node-stores
6021 -14.5% 5149 ± 2% perf-stat.i.page-faults
2.78 ± 11% -33.4% 1.85 ± 12% perf-stat.overall.MPKI
1.17 ± 2% -0.3 0.91 perf-stat.overall.branch-miss-rate%
1.18 +178.8% 3.30 perf-stat.overall.cpi
1900 ± 9% +270.7% 7045 ± 3% perf-stat.overall.cycles-between-cache-misses
0.10 ± 2% -0.0 0.07 ± 5% perf-stat.overall.dTLB-load-miss-rate%
3955 +47.9% 5852 perf-stat.overall.instructions-per-iTLB-miss
0.84 -64.1% 0.30 perf-stat.overall.ipc
17.47 ± 16% +6.6 24.11 perf-stat.overall.node-store-miss-rate%
6.565e+09 -11.0% 5.843e+09 ± 5% perf-stat.ps.branch-instructions
77051640 ± 3% -30.7% 53394071 ± 4% perf-stat.ps.branch-misses
20767108 ± 7% -32.6% 13999643 ± 3% perf-stat.ps.cache-misses
92101166 ± 12% -40.0% 55304912 ± 12% perf-stat.ps.cache-references
44251 -62.6% 16558 ± 2% perf-stat.ps.context-switches
3.918e+10 ± 2% +152.0% 9.874e+10 ± 6% perf-stat.ps.cpu-cycles
1221 ± 31% -44.8% 674.55 ± 8% perf-stat.ps.cpu-migrations
9795401 ± 3% -40.8% 5802426 ± 7% perf-stat.ps.dTLB-load-misses
9.604e+09 -8.0% 8.834e+09 ± 5% perf-stat.ps.dTLB-loads
176901 ± 13% -49.3% 89755 ± 31% perf-stat.ps.dTLB-store-misses
5.467e+09 -31.1% 3.765e+09 ± 4% perf-stat.ps.dTLB-stores
8359064 -39.0% 5102649 ± 4% perf-stat.ps.iTLB-load-misses
9060035 ± 5% -38.3% 5585619 ± 13% perf-stat.ps.iTLB-loads
3.306e+10 -9.7% 2.986e+10 ± 5% perf-stat.ps.instructions
5878 -13.7% 5071 ± 2% perf-stat.ps.minor-faults
3250486 ± 14% -31.2% 2234779 ± 4% perf-stat.ps.node-load-misses
1616982 ± 6% -39.8% 973433 ± 4% perf-stat.ps.node-loads
4085833 ± 2% -39.0% 2492523 ± 4% perf-stat.ps.node-stores
5878 -13.7% 5071 ± 2% perf-stat.ps.page-faults
1.369e+12 +40.8% 1.928e+12 perf-stat.total.instructions
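Each comparison line above follows the same lkp-tests layout: base value, optional "± N%" stddev, change (percent or absolute delta), compare value, optional stddev, metric name. As a minimal sketch (not part of the report, and not an official lkp-tests tool), such lines can be pulled apart with a regular expression; the function name and regex below are illustrative assumptions, and lines with negative base values or header art simply return None:

```python
import re

# Hypothetical parser for lkp-tests comparison lines such as:
#   "438138   -33.0%   293397 ± 2%   aim7.jobs-per-min"
# Layout assumed: base [± stddev] change compare [± stddev] metric
LINE_RE = re.compile(
    r"^\s*([\d.e+]+)\s*(?:±\s*\d+%)?\s+"   # base value, optional "± N%"
    r"([+-][\d.]+%?)\s+"                   # change: "-33.0%" or absolute "+28.4"
    r"([\d.e+]+)\s*(?:±\s*\d+%)?\s+"       # compare value, optional "± N%"
    r"(\S+)\s*$"                           # metric name, e.g. aim7.jobs-per-min
)

def parse_comparison(line):
    """Return (base, change, compare, metric) or None for headers/art."""
    m = LINE_RE.match(line)
    if not m:
        return None
    base, change, compare, metric = m.groups()
    return float(base), change, float(compare), metric
```

This deliberately keeps the change field as a string, since lkp mixes percentage changes (throughput-like metrics) with absolute deltas (metrics that are themselves percentages, e.g. mpstat.cpu.all.idle%).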
10189 ± 4% -21.0% 8053 ± 9% softirqs.CPU0.RCU
20822 ± 7% +39.9% 29137 ± 2% softirqs.CPU0.TIMER
9974 ± 4% -23.7% 7606 ± 8% softirqs.CPU1.RCU
19331 ± 2% +43.8% 27788 softirqs.CPU1.TIMER
20495 ± 6% +49.8% 30696 ± 20% softirqs.CPU10.TIMER
9678 ± 4% -21.8% 7567 ± 8% softirqs.CPU11.RCU
19183 ± 4% +44.0% 27627 ± 2% softirqs.CPU11.TIMER
9717 ± 2% -20.3% 7744 ± 8% softirqs.CPU12.RCU
19610 ± 2% +41.8% 27816 ± 3% softirqs.CPU12.TIMER
9975 ± 4% -23.7% 7615 ± 9% softirqs.CPU13.RCU
19641 ± 3% +43.9% 28262 ± 3% softirqs.CPU13.TIMER
9934 ± 3% -18.9% 8061 ± 16% softirqs.CPU14.RCU
19792 ± 4% +37.8% 27277 ± 3% softirqs.CPU14.TIMER
19248 +45.0% 27911 ± 2% softirqs.CPU15.TIMER
18812 ± 4% +46.4% 27541 softirqs.CPU16.TIMER
9643 ± 7% -21.7% 7550 ± 15% softirqs.CPU17.RCU
21982 ± 14% +38.7% 30500 ± 16% softirqs.CPU17.TIMER
11100 ± 3% -23.5% 8488 ± 12% softirqs.CPU18.RCU
21432 ± 10% +40.8% 30174 ± 12% softirqs.CPU18.TIMER
10254 ± 2% -24.1% 7786 ± 12% softirqs.CPU19.RCU
20127 ± 4% +37.2% 27610 ± 7% softirqs.CPU19.TIMER
19067 ± 4% +55.2% 29600 ± 2% softirqs.CPU2.TIMER
10238 ± 2% -24.3% 7749 ± 19% softirqs.CPU20.RCU
20680 ± 7% +44.2% 29819 ± 12% softirqs.CPU20.TIMER
11414 ± 11% -34.9% 7429 ± 12% softirqs.CPU21.RCU
20495 ± 9% +33.5% 27370 ± 5% softirqs.CPU21.TIMER
10724 ± 9% -29.1% 7602 ± 10% softirqs.CPU22.RCU
22190 ± 16% +26.3% 28035 ± 6% softirqs.CPU22.TIMER
20468 ± 7% +50.4% 30776 ± 15% softirqs.CPU23.TIMER
9837 ± 2% -22.4% 7630 ± 12% softirqs.CPU24.RCU
19337 ± 5% +46.7% 28366 ± 4% softirqs.CPU24.TIMER
10048 -24.3% 7606 ± 14% softirqs.CPU25.RCU
19879 ± 4% +39.0% 27629 ± 4% softirqs.CPU25.TIMER
10113 -25.6% 7523 ± 12% softirqs.CPU26.RCU
19752 ± 4% +41.4% 27929 ± 5% softirqs.CPU26.TIMER
9885 ± 2% -25.2% 7393 ± 12% softirqs.CPU27.RCU
19471 ± 4% +42.8% 27802 ± 4% softirqs.CPU27.TIMER
10655 ± 9% -24.8% 8016 ± 13% softirqs.CPU28.RCU
19580 ± 6% +47.0% 28775 ± 6% softirqs.CPU28.TIMER
9955 ± 3% -23.0% 7664 ± 14% softirqs.CPU29.RCU
19909 ± 3% +40.5% 27979 ± 4% softirqs.CPU29.TIMER
11278 ± 17% -22.8% 8705 ± 12% softirqs.CPU3.RCU
20144 ± 4% +41.1% 28431 ± 2% softirqs.CPU3.TIMER
19379 ± 5% +43.7% 27839 ± 5% softirqs.CPU30.TIMER
10566 ± 10% -33.6% 7018 ± 13% softirqs.CPU31.RCU
19802 ± 4% +39.2% 27560 ± 5% softirqs.CPU31.TIMER
9621 ± 6% -28.3% 6897 ± 11% softirqs.CPU32.RCU
9602 ± 4% -27.3% 6978 ± 13% softirqs.CPU33.RCU
19288 ± 5% +45.5% 28056 ± 7% softirqs.CPU33.TIMER
9711 ± 4% -26.2% 7163 ± 11% softirqs.CPU34.RCU
19495 ± 4% +46.3% 28528 ± 3% softirqs.CPU34.TIMER
19688 ± 5% +47.7% 29082 ± 3% softirqs.CPU35.TIMER
10457 ± 2% -25.5% 7791 ± 8% softirqs.CPU36.RCU
19860 ± 8% +37.6% 27321 softirqs.CPU36.TIMER
10690 ± 6% -28.5% 7647 ± 9% softirqs.CPU37.RCU
20270 ± 11% +36.2% 27614 ± 2% softirqs.CPU37.TIMER
10147 -25.1% 7597 ± 10% softirqs.CPU38.RCU
18995 ± 2% +42.4% 27043 softirqs.CPU38.TIMER
10330 -23.5% 7903 ± 8% softirqs.CPU39.RCU
19115 +44.2% 27561 softirqs.CPU39.TIMER
12080 ± 17% -34.5% 7913 ± 6% softirqs.CPU4.RCU
19952 ± 5% +40.6% 28061 softirqs.CPU4.TIMER
10233 ± 2% -23.5% 7824 ± 9% softirqs.CPU40.RCU
18201 ± 9% +50.6% 27413 softirqs.CPU40.TIMER
10309 ± 4% -28.4% 7381 ± 10% softirqs.CPU41.RCU
19269 ± 4% +41.4% 27252 softirqs.CPU41.TIMER
18614 ± 2% +43.9% 26794 ± 2% softirqs.CPU42.TIMER
10415 ± 7% -27.1% 7598 ± 10% softirqs.CPU43.RCU
18598 ± 2% +46.2% 27199 ± 2% softirqs.CPU43.TIMER
10246 ± 2% -27.9% 7392 ± 12% softirqs.CPU44.RCU
18630 ± 4% +44.4% 26904 softirqs.CPU44.TIMER
10857 ± 14% -31.0% 7487 ± 10% softirqs.CPU45.RCU
10197 ± 2% -29.0% 7242 ± 5% softirqs.CPU46.RCU
19247 ± 3% +44.0% 27723 ± 3% softirqs.CPU46.TIMER
10174 ± 3% -26.4% 7488 ± 11% softirqs.CPU47.RCU
18969 ± 6% +43.5% 27222 softirqs.CPU47.TIMER
10377 ± 5% -28.6% 7411 ± 11% softirqs.CPU48.RCU
19583 ± 7% +37.4% 26902 ± 3% softirqs.CPU48.TIMER
10084 -24.1% 7653 ± 11% softirqs.CPU49.RCU
18508 ± 4% +47.1% 27224 softirqs.CPU49.TIMER
10078 ± 3% -22.2% 7842 ± 7% softirqs.CPU5.RCU
19160 ± 3% +46.4% 28054 softirqs.CPU5.TIMER
10208 ± 3% -26.8% 7473 ± 8% softirqs.CPU50.RCU
18450 ± 2% +47.9% 27292 softirqs.CPU50.TIMER
10212 ± 2% -27.9% 7367 ± 12% softirqs.CPU51.RCU
18613 ± 3% +43.5% 26706 ± 3% softirqs.CPU51.TIMER
10012 -25.3% 7478 ± 10% softirqs.CPU52.RCU
18440 ± 3% +45.8% 26888 softirqs.CPU52.TIMER
9782 ± 6% -23.7% 7467 ± 13% softirqs.CPU53.RCU
19168 ± 3% +43.9% 27588 ± 2% softirqs.CPU53.TIMER
9896 ± 5% -27.4% 7183 ± 10% softirqs.CPU54.RCU
20511 ± 11% +39.6% 28632 ± 10% softirqs.CPU54.TIMER
9689 ± 5% -30.5% 6732 ± 15% softirqs.CPU55.RCU
19413 ± 4% +42.4% 27642 ± 8% softirqs.CPU55.TIMER
9719 ± 3% -24.9% 7303 ± 19% softirqs.CPU56.RCU
19312 ± 4% +43.8% 27773 ± 8% softirqs.CPU56.TIMER
20063 ± 15% +36.1% 27306 ± 6% softirqs.CPU57.TIMER
9789 ± 14% -28.6% 6990 ± 11% softirqs.CPU58.RCU
18743 ± 7% +43.4% 26870 ± 6% softirqs.CPU58.TIMER
9592 ± 4% -28.5% 6857 ± 18% softirqs.CPU59.RCU
19654 ± 5% +40.1% 27534 ± 4% softirqs.CPU59.TIMER
11114 ± 23% -29.7% 7818 ± 10% softirqs.CPU6.RCU
19544 ± 8% +40.4% 27442 ± 5% softirqs.CPU6.TIMER
19208 ± 5% +46.0% 28048 ± 8% softirqs.CPU60.TIMER
19390 ± 5% +39.5% 27051 ± 5% softirqs.CPU61.TIMER
9604 ± 9% -29.5% 6767 ± 14% softirqs.CPU62.RCU
19032 ± 5% +40.1% 26660 ± 5% softirqs.CPU62.TIMER
19398 ± 5% +39.1% 26985 ± 7% softirqs.CPU63.TIMER
18919 ± 5% +48.7% 28127 ± 6% softirqs.CPU64.TIMER
19285 ± 7% +41.8% 27352 ± 5% softirqs.CPU65.TIMER
18880 ± 6% +46.7% 27689 ± 4% softirqs.CPU66.TIMER
18997 ± 6% +39.9% 26579 ± 5% softirqs.CPU67.TIMER
18619 ± 6% +46.3% 27247 ± 6% softirqs.CPU68.TIMER
18851 ± 5% +46.1% 27546 ± 5% softirqs.CPU69.TIMER
19457 ± 2% +43.3% 27888 softirqs.CPU7.TIMER
18964 ± 5% +43.5% 27211 ± 7% softirqs.CPU70.TIMER
18996 ± 5% +58.8% 30160 ± 12% softirqs.CPU71.TIMER
19593 ± 3% +54.6% 30290 ± 8% softirqs.CPU8.TIMER
18531 ± 2% +46.4% 27131 ± 2% softirqs.CPU9.TIMER
715216 -24.4% 540757 ± 10% softirqs.RCU
557516 -11.4% 494067 ± 7% softirqs.SCHED
1405899 ± 3% +42.9% 2008869 ± 3% softirqs.TIMER
86.00 +53.5% 132.00 ± 5% interrupts.9:IO-APIC.9-fasteoi.acpi
52948 ± 2% +28.4% 67994 ± 4% interrupts.CAL:Function_call_interrupts
749.25 +28.5% 962.75 ± 3% interrupts.CPU0.CAL:Function_call_interrupts
86.00 +53.5% 132.00 ± 5% interrupts.CPU1.9:IO-APIC.9-fasteoi.acpi
677.00 ± 19% +43.3% 970.00 interrupts.CPU1.CAL:Function_call_interrupts
97.50 ± 14% +253.1% 344.25 ± 28% interrupts.CPU10.RES:Rescheduling_interrupts
746.75 ± 2% +24.9% 932.75 interrupts.CPU11.CAL:Function_call_interrupts
3026 ± 17% +66.9% 5052 ± 2% interrupts.CPU11.NMI:Non-maskable_interrupts
3026 ± 17% +66.9% 5052 ± 2% interrupts.CPU11.PMI:Performance_monitoring_interrupts
101.00 ± 11% +220.8% 324.00 ± 14% interrupts.CPU11.RES:Rescheduling_interrupts
746.75 ± 2% +25.1% 934.50 ± 6% interrupts.CPU12.CAL:Function_call_interrupts
101.75 ± 17% +309.6% 416.75 ± 49% interrupts.CPU12.RES:Rescheduling_interrupts
744.50 ± 2% +27.7% 951.00 ± 3% interrupts.CPU13.CAL:Function_call_interrupts
2787 ± 33% +80.9% 5044 ± 4% interrupts.CPU13.NMI:Non-maskable_interrupts
2787 ± 33% +80.9% 5044 ± 4% interrupts.CPU13.PMI:Performance_monitoring_interrupts
109.75 ± 30% +194.1% 322.75 ± 22% interrupts.CPU13.RES:Rescheduling_interrupts
731.00 ± 4% +31.3% 959.50 ± 4% interrupts.CPU14.CAL:Function_call_interrupts
98.25 ± 20% +201.3% 296.00 ± 5% interrupts.CPU14.RES:Rescheduling_interrupts
748.25 +27.0% 950.50 ± 4% interrupts.CPU15.CAL:Function_call_interrupts
113.50 ± 16% +191.0% 330.25 ± 12% interrupts.CPU15.RES:Rescheduling_interrupts
3052 ± 17% +62.5% 4959 ± 4% interrupts.CPU16.NMI:Non-maskable_interrupts
3052 ± 17% +62.5% 4959 ± 4% interrupts.CPU16.PMI:Performance_monitoring_interrupts
96.25 ± 26% +193.2% 282.25 ± 6% interrupts.CPU16.RES:Rescheduling_interrupts
748.00 +27.3% 952.25 ± 5% interrupts.CPU17.CAL:Function_call_interrupts
107.75 ± 20% +539.0% 688.50 ± 96% interrupts.CPU17.RES:Rescheduling_interrupts
668.75 ± 19% +42.1% 950.50 ± 4% interrupts.CPU18.CAL:Function_call_interrupts
82.75 ± 24% +456.2% 460.25 ± 63% interrupts.CPU18.RES:Rescheduling_interrupts
740.25 ± 2% +25.3% 927.75 ± 8% interrupts.CPU19.CAL:Function_call_interrupts
163.00 ± 77% +210.6% 506.25 ± 44% interrupts.CPU19.RES:Rescheduling_interrupts
743.75 ± 2% +24.6% 926.50 ± 9% interrupts.CPU2.CAL:Function_call_interrupts
2641 ± 16% +90.4% 5027 ± 4% interrupts.CPU2.NMI:Non-maskable_interrupts
2641 ± 16% +90.4% 5027 ± 4% interrupts.CPU2.PMI:Performance_monitoring_interrupts
157.25 ± 41% +96.2% 308.50 ± 7% interrupts.CPU2.RES:Rescheduling_interrupts
734.50 ± 3% +30.2% 956.25 ± 4% interrupts.CPU20.CAL:Function_call_interrupts
84.75 ± 19% +231.0% 280.50 ± 11% interrupts.CPU20.RES:Rescheduling_interrupts
701.25 ± 6% +36.5% 957.50 ± 4% interrupts.CPU21.CAL:Function_call_interrupts
85.50 ± 18% +235.1% 286.50 ± 19% interrupts.CPU21.RES:Rescheduling_interrupts
737.50 ± 2% +29.8% 957.25 ± 4% interrupts.CPU22.CAL:Function_call_interrupts
84.25 ± 7% +322.0% 355.50 ± 13% interrupts.CPU22.RES:Rescheduling_interrupts
82.00 ± 19% +265.2% 299.50 ± 8% interrupts.CPU23.RES:Rescheduling_interrupts
742.00 ± 2% +24.3% 922.25 ± 9% interrupts.CPU24.CAL:Function_call_interrupts
99.50 ± 18% +169.8% 268.50 ± 9% interrupts.CPU24.RES:Rescheduling_interrupts
735.00 ± 3% +30.4% 958.25 ± 4% interrupts.CPU25.CAL:Function_call_interrupts
89.00 ± 6% +241.6% 304.00 ± 6% interrupts.CPU25.RES:Rescheduling_interrupts
739.75 +28.8% 953.00 ± 4% interrupts.CPU26.CAL:Function_call_interrupts
85.25 ± 8% +225.5% 277.50 ± 13% interrupts.CPU26.RES:Rescheduling_interrupts
80.50 ± 19% +244.7% 277.50 ± 12% interrupts.CPU27.RES:Rescheduling_interrupts
743.00 +28.7% 956.25 ± 4% interrupts.CPU28.CAL:Function_call_interrupts
97.50 ± 11% +179.5% 272.50 ± 3% interrupts.CPU28.RES:Rescheduling_interrupts
718.00 ± 7% +33.2% 956.50 ± 4% interrupts.CPU29.CAL:Function_call_interrupts
134.25 ± 46% +87.5% 251.75 ± 12% interrupts.CPU29.RES:Rescheduling_interrupts
95.75 ± 17% +227.2% 313.25 ± 6% interrupts.CPU3.RES:Rescheduling_interrupts
740.75 ± 2% +29.2% 957.25 ± 4% interrupts.CPU30.CAL:Function_call_interrupts
87.75 ± 10% +204.3% 267.00 ± 12% interrupts.CPU30.RES:Rescheduling_interrupts
659.75 ± 22% +44.6% 953.75 ± 4% interrupts.CPU31.CAL:Function_call_interrupts
86.75 ± 18% +229.1% 285.50 ± 16% interrupts.CPU31.RES:Rescheduling_interrupts
718.75 ± 6% +33.7% 960.75 ± 5% interrupts.CPU32.CAL:Function_call_interrupts
90.75 ± 21% +251.0% 318.50 ± 8% interrupts.CPU32.RES:Rescheduling_interrupts
764.00 ± 6% +25.4% 957.75 ± 4% interrupts.CPU33.CAL:Function_call_interrupts
78.00 ± 15% +264.1% 284.00 ± 12% interrupts.CPU33.RES:Rescheduling_interrupts
670.00 ± 16% +41.6% 949.00 ± 4% interrupts.CPU34.CAL:Function_call_interrupts
82.50 ± 12% +245.2% 284.75 ± 11% interrupts.CPU34.RES:Rescheduling_interrupts
692.00 ± 9% +35.2% 935.25 ± 4% interrupts.CPU35.CAL:Function_call_interrupts
95.75 ± 18% +268.9% 353.25 ± 33% interrupts.CPU35.RES:Rescheduling_interrupts
762.75 ± 3% +28.8% 982.50 ± 4% interrupts.CPU36.CAL:Function_call_interrupts
90.25 ± 14% +265.1% 329.50 ± 12% interrupts.CPU36.RES:Rescheduling_interrupts
743.25 ± 2% +28.7% 956.25 ± 4% interrupts.CPU37.CAL:Function_call_interrupts
96.25 ± 22% +228.3% 316.00 ± 8% interrupts.CPU37.RES:Rescheduling_interrupts
743.50 ± 2% +28.1% 952.50 ± 4% interrupts.CPU38.CAL:Function_call_interrupts
108.50 ± 26% +166.8% 289.50 ± 5% interrupts.CPU38.RES:Rescheduling_interrupts
744.75 ± 2% +28.3% 955.50 ± 4% interrupts.CPU39.CAL:Function_call_interrupts
102.75 ± 15% +180.5% 288.25 ± 12% interrupts.CPU39.RES:Rescheduling_interrupts
754.75 +27.0% 958.50 ± 4% interrupts.CPU4.CAL:Function_call_interrupts
744.00 ± 2% +20.0% 893.00 ± 9% interrupts.CPU40.CAL:Function_call_interrupts
114.00 ± 8% +160.3% 296.75 ± 6% interrupts.CPU40.RES:Rescheduling_interrupts
743.00 ± 2% +27.9% 950.50 ± 4% interrupts.CPU41.CAL:Function_call_interrupts
110.00 ± 19% +192.0% 321.25 ± 15% interrupts.CPU41.RES:Rescheduling_interrupts
742.50 ± 2% +28.4% 953.25 ± 4% interrupts.CPU42.CAL:Function_call_interrupts
102.50 ± 12% +193.4% 300.75 ± 7% interrupts.CPU42.RES:Rescheduling_interrupts
741.50 +28.6% 953.50 ± 4% interrupts.CPU43.CAL:Function_call_interrupts
96.75 ± 21% +196.4% 286.75 ± 7% interrupts.CPU43.RES:Rescheduling_interrupts
742.75 ± 2% +22.9% 912.50 ± 4% interrupts.CPU44.CAL:Function_call_interrupts
94.00 ± 27% +221.0% 301.75 ± 13% interrupts.CPU44.RES:Rescheduling_interrupts
740.75 ± 2% +22.3% 905.75 ± 5% interrupts.CPU45.CAL:Function_call_interrupts
95.50 ± 23% +215.4% 301.25 ± 6% interrupts.CPU45.RES:Rescheduling_interrupts
757.75 ± 3% +25.8% 953.00 ± 4% interrupts.CPU46.CAL:Function_call_interrupts
106.75 ± 14% +154.1% 271.25 ± 13% interrupts.CPU46.RES:Rescheduling_interrupts
744.00 ± 2% +28.2% 953.50 ± 4% interrupts.CPU47.CAL:Function_call_interrupts
106.25 ± 17% +174.1% 291.25 ± 12% interrupts.CPU47.RES:Rescheduling_interrupts
742.50 +21.7% 903.75 ± 11% interrupts.CPU48.CAL:Function_call_interrupts
108.75 ± 14% +163.9% 287.00 ± 8% interrupts.CPU48.RES:Rescheduling_interrupts
746.50 ± 2% +27.4% 951.00 ± 4% interrupts.CPU49.CAL:Function_call_interrupts
97.50 ± 20% +223.6% 315.50 ± 7% interrupts.CPU49.RES:Rescheduling_interrupts
733.00 ± 4% +31.1% 961.00 ± 4% interrupts.CPU5.CAL:Function_call_interrupts
102.50 ± 17% +197.3% 304.75 ± 7% interrupts.CPU5.RES:Rescheduling_interrupts
744.25 ± 2% +27.3% 947.75 ± 3% interrupts.CPU50.CAL:Function_call_interrupts
102.50 ± 17% +241.0% 349.50 ± 26% interrupts.CPU50.RES:Rescheduling_interrupts
743.25 ± 2% +28.1% 951.75 ± 4% interrupts.CPU51.CAL:Function_call_interrupts
100.00 ± 23% +203.8% 303.75 ± 6% interrupts.CPU51.RES:Rescheduling_interrupts
746.00 ± 2% +28.0% 955.25 ± 4% interrupts.CPU52.CAL:Function_call_interrupts
3028 ± 18% +64.1% 4969 ± 4% interrupts.CPU52.NMI:Non-maskable_interrupts
3028 ± 18% +64.1% 4969 ± 4% interrupts.CPU52.PMI:Performance_monitoring_interrupts
89.50 ± 20% +231.3% 296.50 ± 9% interrupts.CPU52.RES:Rescheduling_interrupts
3105 ± 18% +61.7% 5022 ± 3% interrupts.CPU53.NMI:Non-maskable_interrupts
3105 ± 18% +61.7% 5022 ± 3% interrupts.CPU53.PMI:Performance_monitoring_interrupts
97.50 ± 15% +194.1% 286.75 ± 5% interrupts.CPU53.RES:Rescheduling_interrupts
743.00 ± 2% +28.7% 956.25 ± 4% interrupts.CPU54.CAL:Function_call_interrupts
2719 ± 24% +80.0% 4895 ± 3% interrupts.CPU54.NMI:Non-maskable_interrupts
2719 ± 24% +80.0% 4895 ± 3% interrupts.CPU54.PMI:Performance_monitoring_interrupts
88.50 ± 16% +208.5% 273.00 ± 7% interrupts.CPU54.RES:Rescheduling_interrupts
741.00 ± 2% +29.0% 955.75 ± 3% interrupts.CPU55.CAL:Function_call_interrupts
83.75 ± 20% +253.4% 296.00 ± 14% interrupts.CPU55.RES:Rescheduling_interrupts
743.25 ± 2% +28.3% 953.50 ± 4% interrupts.CPU56.CAL:Function_call_interrupts
2693 ± 23% +84.6% 4972 ± 3% interrupts.CPU56.NMI:Non-maskable_interrupts
2693 ± 23% +84.6% 4972 ± 3% interrupts.CPU56.PMI:Performance_monitoring_interrupts
80.75 ± 24% +227.2% 264.25 ± 12% interrupts.CPU56.RES:Rescheduling_interrupts
740.50 ± 2% +28.0% 947.75 ± 3% interrupts.CPU57.CAL:Function_call_interrupts
2697 ± 25% +84.8% 4986 ± 4% interrupts.CPU57.NMI:Non-maskable_interrupts
2697 ± 25% +84.8% 4986 ± 4% interrupts.CPU57.PMI:Performance_monitoring_interrupts
77.25 ± 20% +284.8% 297.25 ± 15% interrupts.CPU57.RES:Rescheduling_interrupts
741.75 +28.4% 952.50 ± 4% interrupts.CPU58.CAL:Function_call_interrupts
2653 ± 24% +88.1% 4991 ± 2% interrupts.CPU58.NMI:Non-maskable_interrupts
2653 ± 24% +88.1% 4991 ± 2% interrupts.CPU58.PMI:Performance_monitoring_interrupts
81.50 ± 27% +229.4% 268.50 ± 12% interrupts.CPU58.RES:Rescheduling_interrupts
729.50 ± 4% +30.7% 953.25 ± 4% interrupts.CPU59.CAL:Function_call_interrupts
2732 ± 27% +84.6% 5044 interrupts.CPU59.NMI:Non-maskable_interrupts
2732 ± 27% +84.6% 5044 interrupts.CPU59.PMI:Performance_monitoring_interrupts
75.75 ± 16% +281.2% 288.75 ± 10% interrupts.CPU59.RES:Rescheduling_interrupts
729.25 ± 5% +31.2% 956.75 ± 4% interrupts.CPU6.CAL:Function_call_interrupts
113.25 ± 38% +165.3% 300.50 ± 10% interrupts.CPU6.RES:Rescheduling_interrupts
740.25 +28.7% 952.75 ± 4% interrupts.CPU60.CAL:Function_call_interrupts
2729 ± 24% +82.2% 4972 ± 4% interrupts.CPU60.NMI:Non-maskable_interrupts
2729 ± 24% +82.2% 4972 ± 4% interrupts.CPU60.PMI:Performance_monitoring_interrupts
74.25 ± 11% +338.0% 325.25 ± 29% interrupts.CPU60.RES:Rescheduling_interrupts
728.25 ± 3% +30.9% 953.00 ± 3% interrupts.CPU61.CAL:Function_call_interrupts
2606 ± 25% +93.6% 5045 ± 3% interrupts.CPU61.NMI:Non-maskable_interrupts
2606 ± 25% +93.6% 5045 ± 3% interrupts.CPU61.PMI:Performance_monitoring_interrupts
82.25 ± 16% +228.0% 269.75 ± 9% interrupts.CPU61.RES:Rescheduling_interrupts
735.75 ± 2% +30.2% 958.00 ± 3% interrupts.CPU62.CAL:Function_call_interrupts
81.25 ± 23% +262.2% 294.25 ± 15% interrupts.CPU62.RES:Rescheduling_interrupts
743.50 ± 2% +28.9% 958.00 ± 3% interrupts.CPU63.CAL:Function_call_interrupts
743.00 ± 2% +28.9% 958.00 ± 3% interrupts.CPU64.CAL:Function_call_interrupts
77.75 ± 28% +266.2% 284.75 ± 10% interrupts.CPU64.RES:Rescheduling_interrupts
725.25 ± 2% +32.0% 957.50 ± 3% interrupts.CPU65.CAL:Function_call_interrupts
76.25 ± 19% +283.3% 292.25 ± 18% interrupts.CPU65.RES:Rescheduling_interrupts
705.75 ± 10% +36.5% 963.00 ± 4% interrupts.CPU66.CAL:Function_call_interrupts
71.50 ± 17% +282.5% 273.50 ± 14% interrupts.CPU66.RES:Rescheduling_interrupts
728.75 +31.8% 960.50 ± 4% interrupts.CPU67.CAL:Function_call_interrupts
81.50 ± 16% +231.3% 270.00 ± 12% interrupts.CPU67.RES:Rescheduling_interrupts
729.00 +31.4% 958.00 ± 4% interrupts.CPU68.CAL:Function_call_interrupts
76.75 ± 12% +272.6% 286.00 ± 15% interrupts.CPU68.RES:Rescheduling_interrupts
732.50 +31.0% 959.75 ± 4% interrupts.CPU69.CAL:Function_call_interrupts
86.75 ± 11% +238.9% 294.00 ± 13% interrupts.CPU69.RES:Rescheduling_interrupts
752.75 ± 3% +29.7% 976.50 ± 7% interrupts.CPU7.CAL:Function_call_interrupts
95.00 ± 13% +204.7% 289.50 ± 9% interrupts.CPU7.RES:Rescheduling_interrupts
739.75 ± 2% +29.5% 958.25 ± 4% interrupts.CPU70.CAL:Function_call_interrupts
69.00 ± 17% +290.2% 269.25 ± 11% interrupts.CPU70.RES:Rescheduling_interrupts
742.75 ± 2% +29.2% 959.75 ± 4% interrupts.CPU71.CAL:Function_call_interrupts
80.25 ± 27% +291.6% 314.25 ± 11% interrupts.CPU71.RES:Rescheduling_interrupts
747.00 +27.8% 955.00 ± 3% interrupts.CPU8.CAL:Function_call_interrupts
701.75 ± 9% +31.6% 923.50 ± 3% interrupts.CPU9.CAL:Function_call_interrupts
112.00 ± 17% +574.8% 755.75 ± 63% interrupts.CPU9.RES:Rescheduling_interrupts
9539 ± 13% +147.7% 23630 ± 6% interrupts.RES:Rescheduling_interrupts
52.54 ± 6% -31.5 20.99 ± 4% perf-profile.calltrace.cycles-pp.write
34.34 ± 11% -28.3 6.00 ± 10% perf-profile.calltrace.cycles-pp.secondary_startup_64
33.91 ± 11% -28.0 5.94 ± 10% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
33.91 ± 11% -28.0 5.94 ± 10% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
33.90 ± 11% -28.0 5.94 ± 10% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
33.03 ± 11% -27.3 5.68 ± 11% perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
32.93 ± 11% -27.3 5.64 ± 11% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
44.64 ± 6% -26.4 18.27 ± 4% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
31.31 ± 13% -26.3 5.04 ± 13% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
44.31 ± 6% -26.2 18.16 ± 4% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
37.67 ± 6% -22.1 15.52 ± 3% perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
36.53 ± 6% -21.4 15.10 ± 3% perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
32.77 ± 6% -19.1 13.66 ± 3% perf-profile.calltrace.cycles-pp.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
31.25 ± 6% -18.2 13.06 ± 3% perf-profile.calltrace.cycles-pp.xfs_file_buffered_aio_write.new_sync_write.vfs_write.ksys_write.do_syscall_64
26.95 ± 6% -16.1 10.89 ± 4% perf-profile.calltrace.cycles-pp.iomap_file_buffered_write.xfs_file_buffered_aio_write.new_sync_write.vfs_write.ksys_write
26.73 ± 6% -15.9 10.82 ± 4% perf-profile.calltrace.cycles-pp.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_aio_write.new_sync_write.vfs_write
20.45 ± 6% -12.1 8.38 ± 4% perf-profile.calltrace.cycles-pp.iomap_write_actor.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_aio_write.new_sync_write
10.73 ± 7% -6.0 4.70 ± 4% perf-profile.calltrace.cycles-pp.iomap_write_begin.iomap_write_actor.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_aio_write
6.12 ± 7% -3.6 2.50 ± 2% perf-profile.calltrace.cycles-pp.grab_cache_page_write_begin.iomap_write_begin.iomap_write_actor.iomap_apply.iomap_file_buffered_write
5.95 ± 7% -3.5 2.44 ± 2% perf-profile.calltrace.cycles-pp.pagecache_get_page.grab_cache_page_write_begin.iomap_write_begin.iomap_write_actor.iomap_apply
4.77 ± 7% -3.4 1.35 ± 3% perf-profile.calltrace.cycles-pp.close
4.69 ± 7% -3.4 1.32 ± 3% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.close
4.69 ± 7% -3.4 1.33 ± 3% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.close
4.67 ± 7% -3.4 1.32 ± 3% perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe.close
4.67 ± 7% -3.4 1.32 ± 3% perf-profile.calltrace.cycles-pp.task_work_run.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe.close
4.67 ± 7% -3.4 1.31 ± 3% perf-profile.calltrace.cycles-pp.__fput.task_work_run.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.65 ± 7% -3.3 1.31 ± 3% perf-profile.calltrace.cycles-pp.dput.__fput.task_work_run.exit_to_usermode_loop.do_syscall_64
4.63 ± 7% -3.3 1.30 ± 3% perf-profile.calltrace.cycles-pp.__dentry_kill.dput.__fput.task_work_run.exit_to_usermode_loop
4.88 ± 5% -3.0 1.85 ± 4% perf-profile.calltrace.cycles-pp.iomap_write_end.iomap_write_actor.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_aio_write
4.20 ± 12% -2.8 1.44 ± 4% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64.write
4.42 ± 5% -2.7 1.75 ± 5% perf-profile.calltrace.cycles-pp.xfs_file_iomap_begin.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_aio_write.new_sync_write
3.77 ± 4% -2.3 1.47 ± 6% perf-profile.calltrace.cycles-pp.xfs_file_iomap_begin_delay.xfs_file_iomap_begin.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_aio_write
3.44 ± 7% -2.2 1.27 ± 3% perf-profile.calltrace.cycles-pp.evict.__dentry_kill.dput.__fput.task_work_run
3.42 ± 7% -2.2 1.26 ± 3% perf-profile.calltrace.cycles-pp.truncate_inode_pages_range.evict.__dentry_kill.dput.__fput
3.30 ± 5% -2.1 1.16 ± 5% perf-profile.calltrace.cycles-pp.syscall_return_via_sysret.write
4.06 ± 6% -2.1 1.97 ± 5% perf-profile.calltrace.cycles-pp.iomap_read_page_sync.iomap_write_begin.iomap_write_actor.iomap_apply.iomap_file_buffered_write
3.16 ± 5% -2.0 1.17 ± 6% perf-profile.calltrace.cycles-pp.iomap_set_range_uptodate.iomap_read_page_sync.iomap_write_begin.iomap_write_actor.iomap_apply
2.53 ± 6% -1.6 0.95 ± 4% perf-profile.calltrace.cycles-pp.xfs_file_aio_write_checks.xfs_file_buffered_aio_write.new_sync_write.vfs_write.ksys_write
2.65 ± 8% -1.5 1.18 ± 2% perf-profile.calltrace.cycles-pp.add_to_page_cache_lru.pagecache_get_page.grab_cache_page_write_begin.iomap_write_begin.iomap_write_actor
2.04 ± 6% -1.3 0.76 ± 4% perf-profile.calltrace.cycles-pp.iomap_set_page_dirty.iomap_write_end.iomap_write_actor.iomap_apply.iomap_file_buffered_write
2.07 ± 5% -1.3 0.79 ± 4% perf-profile.calltrace.cycles-pp.iomap_set_range_uptodate.iomap_write_end.iomap_write_actor.iomap_apply.iomap_file_buffered_write
2.15 ± 6% -1.3 0.88 ± 4% perf-profile.calltrace.cycles-pp.iov_iter_copy_from_user_atomic.iomap_write_actor.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_aio_write
1.53 ± 15% -1.2 0.30 ±100% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
1.43 ± 16% -1.1 0.28 ±100% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle
1.26 ± 4% -1.0 0.26 ±100% perf-profile.calltrace.cycles-pp.selinux_file_permission.security_file_permission.vfs_write.ksys_write.do_syscall_64
1.69 ± 7% -1.0 0.73 ± 4% perf-profile.calltrace.cycles-pp.copyin.iov_iter_copy_from_user_atomic.iomap_write_actor.iomap_apply.iomap_file_buffered_write
1.37 ± 4% -1.0 0.42 ± 57% perf-profile.calltrace.cycles-pp.security_file_permission.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.53 ± 7% -0.9 0.67 ± 4% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyin.iov_iter_copy_from_user_atomic.iomap_write_actor.iomap_apply
1.37 ± 4% -0.8 0.58 ± 6% perf-profile.calltrace.cycles-pp.vfs_unlink.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
1.33 ± 4% -0.8 0.57 ± 6% perf-profile.calltrace.cycles-pp.xfs_remove.xfs_vn_unlink.vfs_unlink.do_unlinkat.do_syscall_64
1.33 ± 4% -0.8 0.58 ± 6% perf-profile.calltrace.cycles-pp.xfs_vn_unlink.vfs_unlink.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.30 ± 7% -0.6 0.69 ± 2% perf-profile.calltrace.cycles-pp.__add_to_page_cache_locked.add_to_page_cache_lru.pagecache_get_page.grab_cache_page_write_begin.iomap_write_begin
1.34 ± 2% -0.6 0.76 ± 2% perf-profile.calltrace.cycles-pp.xfs_generic_create.path_openat.do_filp_open.do_sys_open.do_syscall_64
1.31 ± 2% -0.6 0.76 ± 3% perf-profile.calltrace.cycles-pp.xfs_create.xfs_generic_create.path_openat.do_filp_open.do_sys_open
1.06 ± 4% -0.4 0.67 ± 2% perf-profile.calltrace.cycles-pp.rwsem_spin_on_owner.rwsem_optimistic_spin.rwsem_down_write_slowpath.path_openat.do_filp_open
1.08 ± 4% -0.4 0.70 ± 2% perf-profile.calltrace.cycles-pp.rwsem_spin_on_owner.rwsem_optimistic_spin.rwsem_down_write_slowpath.do_unlinkat.do_syscall_64
4.07 ± 3% +31.3 35.37 ± 2% perf-profile.calltrace.cycles-pp.unlink
3.98 ± 3% +31.4 35.33 ± 2% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.unlink
3.98 ± 3% +31.4 35.33 ± 2% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
3.94 ± 3% +31.4 35.32 ± 2% perf-profile.calltrace.cycles-pp.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
3.42 ± 3% +32.2 35.67 ± 2% perf-profile.calltrace.cycles-pp.creat
3.38 ± 3% +32.3 35.65 ± 2% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat
3.38 ± 3% +32.3 35.65 ± 2% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.creat
3.36 ± 3% +32.3 35.65 ± 2% perf-profile.calltrace.cycles-pp.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat
2.41 ± 3% +32.3 34.69 ± 2% perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe.unlink
3.33 ± 3% +32.3 35.64 ± 2% perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.33 ± 3% +32.3 35.64 ± 2% perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat
1.85 ± 4% +32.8 34.64 ± 2% perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_write_slowpath.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.76 ± 3% +33.2 33.92 ± 2% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_optimistic_spin.rwsem_down_write_slowpath.do_unlinkat.do_syscall_64
1.56 ± 3% +33.2 34.72 ± 2% perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.path_openat.do_filp_open.do_sys_open.do_syscall_64
1.21 ± 4% +33.5 34.67 ± 2% perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_write_slowpath.path_openat.do_filp_open.do_sys_open
0.00 +34.0 33.98 ± 2% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_optimistic_spin.rwsem_down_write_slowpath.path_openat.do_filp_open
52.91 ± 6% -31.7 21.24 ± 4% perf-profile.children.cycles-pp.write
34.34 ± 11% -28.3 6.00 ± 10% perf-profile.children.cycles-pp.do_idle
34.34 ± 11% -28.3 6.00 ± 10% perf-profile.children.cycles-pp.secondary_startup_64
34.34 ± 11% -28.3 6.00 ± 10% perf-profile.children.cycles-pp.cpu_startup_entry
33.91 ± 11% -28.0 5.94 ± 10% perf-profile.children.cycles-pp.start_secondary
33.45 ± 11% -27.7 5.73 ± 11% perf-profile.children.cycles-pp.cpuidle_enter
33.45 ± 11% -27.7 5.73 ± 11% perf-profile.children.cycles-pp.cpuidle_enter_state
31.56 ± 12% -26.5 5.08 ± 13% perf-profile.children.cycles-pp.intel_idle
37.68 ± 6% -22.1 15.53 ± 3% perf-profile.children.cycles-pp.ksys_write
36.57 ± 6% -21.4 15.12 ± 3% perf-profile.children.cycles-pp.vfs_write
32.78 ± 6% -19.1 13.66 ± 3% perf-profile.children.cycles-pp.new_sync_write
31.30 ± 6% -18.2 13.08 ± 3% perf-profile.children.cycles-pp.xfs_file_buffered_aio_write
26.95 ± 6% -16.1 10.90 ± 4% perf-profile.children.cycles-pp.iomap_file_buffered_write
26.78 ± 6% -15.9 10.84 ± 4% perf-profile.children.cycles-pp.iomap_apply
20.52 ± 6% -12.1 8.41 ± 4% perf-profile.children.cycles-pp.iomap_write_actor
10.74 ± 7% -6.0 4.70 ± 3% perf-profile.children.cycles-pp.iomap_write_begin
6.17 ± 7% -3.6 2.52 ± 2% perf-profile.children.cycles-pp.grab_cache_page_write_begin
5.99 ± 7% -3.5 2.46 ± 3% perf-profile.children.cycles-pp.pagecache_get_page
4.77 ± 7% -3.4 1.35 ± 3% perf-profile.children.cycles-pp.close
4.67 ± 7% -3.4 1.31 ± 3% perf-profile.children.cycles-pp.dput
4.67 ± 7% -3.4 1.31 ± 3% perf-profile.children.cycles-pp.__fput
4.67 ± 7% -3.4 1.32 ± 3% perf-profile.children.cycles-pp.task_work_run
4.67 ± 7% -3.3 1.32 ± 3% perf-profile.children.cycles-pp.exit_to_usermode_loop
4.63 ± 7% -3.3 1.30 ± 3% perf-profile.children.cycles-pp.__dentry_kill
5.23 ± 5% -3.3 1.96 ± 5% perf-profile.children.cycles-pp.iomap_set_range_uptodate
4.90 ± 5% -3.0 1.85 ± 4% perf-profile.children.cycles-pp.iomap_write_end
4.48 ± 5% -2.7 1.77 ± 5% perf-profile.children.cycles-pp.xfs_file_iomap_begin
3.96 ± 6% -2.5 1.45 ± 4% perf-profile.children.cycles-pp.entry_SYSCALL_64
3.84 ± 5% -2.5 1.34 ± 5% perf-profile.children.cycles-pp.syscall_return_via_sysret
3.82 ± 4% -2.3 1.48 ± 6% perf-profile.children.cycles-pp.xfs_file_iomap_begin_delay
3.44 ± 7% -2.2 1.27 ± 3% perf-profile.children.cycles-pp.evict
3.42 ± 7% -2.2 1.27 ± 3% perf-profile.children.cycles-pp.truncate_inode_pages_range
4.07 ± 6% -2.1 1.99 ± 5% perf-profile.children.cycles-pp.iomap_read_page_sync
2.58 ± 6% -1.6 0.96 ± 3% perf-profile.children.cycles-pp.xfs_file_aio_write_checks
2.92 ± 3% -1.5 1.46 ± 2% perf-profile.children.cycles-pp.rwsem_spin_on_owner
2.65 ± 8% -1.5 1.19 ± 2% perf-profile.children.cycles-pp.add_to_page_cache_lru
2.08 ± 6% -1.3 0.78 ± 4% perf-profile.children.cycles-pp.iomap_set_page_dirty
2.15 ± 6% -1.3 0.88 ± 5% perf-profile.children.cycles-pp.iov_iter_copy_from_user_atomic
1.64 ± 6% -1.0 0.62 ± 6% perf-profile.children.cycles-pp.xfs_ilock
1.71 ± 6% -1.0 0.74 ± 4% perf-profile.children.cycles-pp.copyin
1.63 ± 7% -0.9 0.70 ± 4% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
1.97 ± 15% -0.9 1.04 ± 8% perf-profile.children.cycles-pp.apic_timer_interrupt
1.83 ± 15% -0.9 0.97 ± 10% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
1.37 ± 6% -0.8 0.54 ± 5% perf-profile.children.cycles-pp.down_write
1.38 ± 4% -0.8 0.55 ± 5% perf-profile.children.cycles-pp.security_file_permission
1.30 ± 8% -0.8 0.48 ± 5% perf-profile.children.cycles-pp.find_get_entry
1.27 ± 10% -0.8 0.46 ± 6% perf-profile.children.cycles-pp.__lru_cache_add
1.37 ± 4% -0.8 0.58 ± 6% perf-profile.children.cycles-pp.vfs_unlink
1.26 ± 3% -0.8 0.50 ± 5% perf-profile.children.cycles-pp.selinux_file_permission
1.21 ± 12% -0.8 0.45 ± 3% perf-profile.children.cycles-pp.release_pages
1.33 ± 4% -0.8 0.57 ± 6% perf-profile.children.cycles-pp.xfs_remove
1.33 ± 4% -0.8 0.58 ± 6% perf-profile.children.cycles-pp.xfs_vn_unlink
1.18 ± 10% -0.7 0.44 ± 7% perf-profile.children.cycles-pp.pagevec_lru_move_fn
1.19 ± 4% -0.7 0.45 ± 7% perf-profile.children.cycles-pp.___might_sleep
1.15 ± 13% -0.7 0.43 ± 4% perf-profile.children.cycles-pp.__pagevec_release
1.11 ± 9% -0.7 0.40 ± 8% perf-profile.children.cycles-pp.xfs_file_iomap_end
1.30 ± 5% -0.7 0.59 ± 3% perf-profile.children.cycles-pp.__xfs_trans_commit
1.16 ± 8% -0.7 0.46 ± 5% perf-profile.children.cycles-pp.__alloc_pages_nodemask
1.26 ± 6% -0.7 0.57 ± 2% perf-profile.children.cycles-pp.xfs_log_commit_cil
1.09 ± 9% -0.7 0.41 ± 3% perf-profile.children.cycles-pp.__set_page_dirty
1.08 ± 6% -0.7 0.42 ± 8% perf-profile.children.cycles-pp.xfs_file_write_iter
1.01 ± 6% -0.6 0.36 ± 5% perf-profile.children.cycles-pp.iov_iter_fault_in_readable
1.33 ± 7% -0.6 0.69 ± 2% perf-profile.children.cycles-pp.__add_to_page_cache_locked
0.99 ± 6% -0.6 0.35 ± 5% perf-profile.children.cycles-pp.xfs_iunlock
0.97 ± 5% -0.6 0.37 ± 5% perf-profile.children.cycles-pp.__sb_start_write
0.89 ± 4% -0.6 0.30 ± 5% perf-profile.children.cycles-pp.truncate_cleanup_page
1.34 ± 2% -0.6 0.76 ± 2% perf-profile.children.cycles-pp.xfs_generic_create
0.92 ± 4% -0.6 0.36 ± 7% perf-profile.children.cycles-pp.xfs_dir_removename
1.31 ± 2% -0.6 0.76 ± 3% perf-profile.children.cycles-pp.xfs_create
0.90 ± 6% -0.6 0.35 ± 2% perf-profile.children.cycles-pp.delete_from_page_cache_batch
0.92 ± 8% -0.5 0.37 ± 4% perf-profile.children.cycles-pp.file_update_time
0.90 ± 7% -0.5 0.35 ± 7% perf-profile.children.cycles-pp.get_page_from_freelist
0.90 ± 5% -0.5 0.35 ± 6% perf-profile.children.cycles-pp.xfs_dir2_node_removename
0.85 ± 8% -0.5 0.32 ± 4% perf-profile.children.cycles-pp.xas_load
0.80 ± 12% -0.5 0.29 ± 3% perf-profile.children.cycles-pp.account_page_dirtied
0.75 ± 3% -0.5 0.26 ± 5% perf-profile.children.cycles-pp.__cancel_dirty_page
0.79 ± 11% -0.5 0.29 ± 6% perf-profile.children.cycles-pp.__fdget_pos
0.77 ± 19% -0.5 0.29 ± 8% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.79 ± 5% -0.5 0.30 ± 9% perf-profile.children.cycles-pp.__pagevec_lru_add_fn
1.12 ± 6% -0.5 0.65 ± 12% perf-profile.children.cycles-pp.xfs_inactive
0.71 ± 5% -0.5 0.25 ± 7% perf-profile.children.cycles-pp.up_write
0.66 ± 5% -0.4 0.22 ± 3% perf-profile.children.cycles-pp.xfs_break_layouts
0.68 ± 6% -0.4 0.25 ± 6% perf-profile.children.cycles-pp.__might_sleep
1.12 ± 13% -0.4 0.70 ± 10% perf-profile.children.cycles-pp.hrtimer_interrupt
0.68 ± 4% -0.4 0.27 ± 3% perf-profile.children.cycles-pp.xfs_da3_node_lookup_int
0.68 ± 7% -0.4 0.27 ± 4% perf-profile.children.cycles-pp._raw_spin_lock
0.61 ± 3% -0.4 0.20 ± 5% perf-profile.children.cycles-pp.account_page_cleaned
0.66 ± 6% -0.4 0.26 ± 7% perf-profile.children.cycles-pp.xfs_dir3_data_check
0.66 ± 6% -0.4 0.26 ± 7% perf-profile.children.cycles-pp.__xfs_dir3_data_check
0.64 ± 6% -0.4 0.24 ± 4% perf-profile.children.cycles-pp.__mod_lruvec_state
0.62 ± 12% -0.4 0.24 ± 7% perf-profile.children.cycles-pp.__fget_light
0.98 ± 6% -0.4 0.61 ± 14% perf-profile.children.cycles-pp.xfs_inactive_ifree
0.43 ± 27% -0.4 0.06 ± 20% perf-profile.children.cycles-pp.start_kernel
0.78 ± 8% -0.4 0.42 ± 3% perf-profile.children.cycles-pp.xas_store
0.53 ± 26% -0.4 0.16 ± 7% perf-profile.children.cycles-pp.balance_dirty_pages_ratelimited
0.55 ± 20% -0.3 0.22 ± 14% perf-profile.children.cycles-pp.irq_exit
0.52 ± 3% -0.3 0.20 ± 4% perf-profile.children.cycles-pp.xfs_trans_read_buf_map
0.40 ± 3% -0.3 0.08 ± 8% perf-profile.children.cycles-pp.ttwu_do_activate
0.40 ± 3% -0.3 0.08 ± 8% perf-profile.children.cycles-pp.activate_task
0.40 ± 3% -0.3 0.08 ± 8% perf-profile.children.cycles-pp.enqueue_task_fair
0.51 ± 6% -0.3 0.19 ± 6% perf-profile.children.cycles-pp._cond_resched
0.49 ± 4% -0.3 0.18 ± 6% perf-profile.children.cycles-pp.fsnotify
0.51 ± 26% -0.3 0.19 ± 11% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
0.49 ± 6% -0.3 0.18 ± 7% perf-profile.children.cycles-pp.xfs_errortag_test
0.47 ± 2% -0.3 0.17 ± 6% perf-profile.children.cycles-pp.__inode_security_revalidate
0.38 ± 4% -0.3 0.07 ± 5% perf-profile.children.cycles-pp.enqueue_entity
0.48 ± 13% -0.3 0.18 ± 2% perf-profile.children.cycles-pp.percpu_counter_add_batch
0.46 ± 5% -0.3 0.18 ± 6% perf-profile.children.cycles-pp.xfs_dir2_leafn_lookup_for_entry
0.35 ± 14% -0.3 0.08 ± 5% perf-profile.children.cycles-pp.try_to_wake_up
0.41 ± 4% -0.3 0.14 ± 3% perf-profile.children.cycles-pp.xfs_break_leased_layouts
0.51 ± 5% -0.3 0.24 ± 13% perf-profile.children.cycles-pp.xfs_bmbt_to_iomap
0.41 ± 6% -0.3 0.15 ± 7% perf-profile.children.cycles-pp.xfs_iext_lookup_extent
0.43 ± 22% -0.3 0.17 ± 20% perf-profile.children.cycles-pp.__softirqentry_text_start
0.39 ± 4% -0.3 0.13 ± 6% perf-profile.children.cycles-pp.__x64_sys_write
0.31 ± 3% -0.2 0.06 ± 11% perf-profile.children.cycles-pp.__account_scheduler_latency
0.45 ± 18% -0.2 0.20 ± 21% perf-profile.children.cycles-pp.ktime_get
0.41 ± 16% -0.2 0.16 ± 9% perf-profile.children.cycles-pp.xfs_vn_update_time
0.37 ± 2% -0.2 0.13 ± 8% perf-profile.children.cycles-pp.xfs_buf_read_map
0.43 ± 4% -0.2 0.20 ± 4% perf-profile.children.cycles-pp.__x86_indirect_thunk_rax
0.36 ± 2% -0.2 0.13 ± 5% perf-profile.children.cycles-pp.xfs_buf_get_map
0.35 ± 6% -0.2 0.12 ± 4% perf-profile.children.cycles-pp.unlock_page
0.30 ± 4% -0.2 0.08 ± 10% perf-profile.children.cycles-pp.__schedule
0.37 ± 16% -0.2 0.16 ± 11% perf-profile.children.cycles-pp.clockevents_program_event
0.34 ± 6% -0.2 0.12 ± 8% perf-profile.children.cycles-pp.__mod_memcg_state
0.32 ± 7% -0.2 0.11 ± 3% perf-profile.children.cycles-pp.xfs_dir_createname
0.33 ± 3% -0.2 0.11 ± 7% perf-profile.children.cycles-pp.xfs_buf_find
0.65 ± 5% -0.2 0.45 ± 17% perf-profile.children.cycles-pp.xfs_ifree
0.26 ± 5% -0.2 0.05 ± 8% perf-profile.children.cycles-pp.stack_trace_save_tsk
0.24 ± 5% -0.2 0.04 ± 57% perf-profile.children.cycles-pp.arch_stack_walk
0.31 ± 7% -0.2 0.11 ± 4% perf-profile.children.cycles-pp.xfs_dir2_node_addname
0.32 ± 3% -0.2 0.13 ± 6% perf-profile.children.cycles-pp.__list_del_entry_valid
0.35 ± 18% -0.2 0.15 ± 27% perf-profile.children.cycles-pp.menu_select
0.31 ± 4% -0.2 0.12 ± 5% perf-profile.children.cycles-pp.page_mapping
0.32 ± 7% -0.2 0.13 ± 10% perf-profile.children.cycles-pp.current_time
0.29 ± 12% -0.2 0.11 ± 13% perf-profile.children.cycles-pp.xas_start
0.28 ± 4% -0.2 0.10 ± 7% perf-profile.children.cycles-pp.__fsnotify_parent
0.24 ± 4% -0.2 0.06 ± 6% perf-profile.children.cycles-pp.xfs_read_agi
0.29 ± 3% -0.2 0.11 ± 3% perf-profile.children.cycles-pp.__mod_node_page_state
0.33 ± 8% -0.2 0.15 ± 5% perf-profile.children.cycles-pp.generic_write_checks
0.62 ± 14% -0.2 0.44 ± 13% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.29 ± 14% -0.2 0.11 ± 3% perf-profile.children.cycles-pp.node_dirty_ok
0.28 ± 7% -0.2 0.10 ± 7% perf-profile.children.cycles-pp.rcu_all_qs
0.24 ± 15% -0.2 0.06 ± 6% perf-profile.children.cycles-pp.xfs_buf_item_release
0.29 ± 5% -0.2 0.11 ± 9% perf-profile.children.cycles-pp.mem_cgroup_commit_charge
0.22 ± 16% -0.2 0.06 ± 7% perf-profile.children.cycles-pp.xfs_buf_unlock
0.21 ± 17% -0.2 0.05 perf-profile.children.cycles-pp.up
0.26 ± 11% -0.2 0.10 ± 12% perf-profile.children.cycles-pp.pagevec_lookup_entries
0.26 ± 11% -0.2 0.10 ± 12% perf-profile.children.cycles-pp.find_get_entries
0.57 ± 3% -0.2 0.42 ± 5% perf-profile.children.cycles-pp.xfs_dir_ialloc
0.57 ± 4% -0.2 0.41 ± 6% perf-profile.children.cycles-pp.xfs_ialloc
0.21 ± 5% -0.1 0.07 perf-profile.children.cycles-pp.iov_iter_advance
0.23 ± 3% -0.1 0.10 ± 5% perf-profile.children.cycles-pp.xfs_da_read_buf
0.23 ± 9% -0.1 0.09 ± 8% perf-profile.children.cycles-pp.free_unref_page_list
0.18 ± 5% -0.1 0.06 ± 14% perf-profile.children.cycles-pp.schedule
0.19 ± 11% -0.1 0.07 ± 10% perf-profile.children.cycles-pp.xfs_dir3_data_entsize
0.18 ± 24% -0.1 0.07 ± 10% perf-profile.children.cycles-pp.rcu_core
0.14 ± 9% -0.1 0.03 ±100% perf-profile.children.cycles-pp.file_modified
0.20 ± 10% -0.1 0.09 ± 5% perf-profile.children.cycles-pp.xfs_btree_lookup
0.50 ± 3% -0.1 0.39 ± 6% perf-profile.children.cycles-pp.xfs_dialloc
0.17 ± 7% -0.1 0.06 ± 11% perf-profile.children.cycles-pp.lock_page_memcg
0.19 ± 3% -0.1 0.08 ± 8% perf-profile.children.cycles-pp.xfs_dir_ino_validate
0.18 ± 3% -0.1 0.07 ± 5% perf-profile.children.cycles-pp.mem_cgroup_charge_statistics
0.17 ± 4% -0.1 0.07 ± 7% perf-profile.children.cycles-pp.xfs_vn_lookup
0.20 ± 6% -0.1 0.09 ± 4% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
0.17 ± 6% -0.1 0.06 perf-profile.children.cycles-pp.xfs_lookup
0.17 ± 9% -0.1 0.06 ± 11% perf-profile.children.cycles-pp.xfs_dir3_leaf_check_int
0.25 ± 13% -0.1 0.15 ± 24% perf-profile.children.cycles-pp.mem_cgroup_try_charge
0.43 ± 20% -0.1 0.33 ± 4% perf-profile.children.cycles-pp.tick_sched_timer
0.20 ± 8% -0.1 0.10 ± 5% perf-profile.children.cycles-pp.workingset_update_node
0.18 ± 10% -0.1 0.08 ± 12% perf-profile.children.cycles-pp.xfs_iunlink_remove
0.22 ± 9% -0.1 0.12 ± 20% perf-profile.children.cycles-pp.xfs_find_bdev_for_inode
0.15 ± 6% -0.1 0.05 perf-profile.children.cycles-pp._raw_spin_lock_irq
0.17 ± 9% -0.1 0.07 ± 10% perf-profile.children.cycles-pp.xas_init_marks
0.16 ± 4% -0.1 0.06 perf-profile.children.cycles-pp.xfs_dir_lookup
0.12 ± 4% -0.1 0.03 ±100% perf-profile.children.cycles-pp.__mark_inode_dirty
0.18 ± 6% -0.1 0.09 ± 17% perf-profile.children.cycles-pp.xfs_trans_alloc
0.15 ± 5% -0.1 0.06 ± 7% perf-profile.children.cycles-pp.mem_cgroup_uncharge_list
0.46 ± 4% -0.1 0.36 ± 18% perf-profile.children.cycles-pp.xfs_difree
0.15 ± 8% -0.1 0.06 perf-profile.children.cycles-pp.unaccount_page_cache_page
0.13 ± 12% -0.1 0.04 ± 57% perf-profile.children.cycles-pp.xas_clear_mark
0.15 ± 5% -0.1 0.06 ± 7% perf-profile.children.cycles-pp.xfs_da3_node_read
0.17 ± 13% -0.1 0.08 ± 5% perf-profile.children.cycles-pp.generic_write_check_limits
0.14 ± 6% -0.1 0.05 perf-profile.children.cycles-pp.xfs_dir2_node_lookup
0.12 ± 6% -0.1 0.04 ± 57% perf-profile.children.cycles-pp.xfs_trans_unreserve_and_mod_sb
0.15 ± 10% -0.1 0.07 ± 7% perf-profile.children.cycles-pp.__xa_set_mark
0.15 ± 5% -0.1 0.07 ± 7% perf-profile.children.cycles-pp.__sb_end_write
0.37 ± 5% -0.1 0.29 ± 18% perf-profile.children.cycles-pp.xfs_difree_inobt
0.13 ± 3% -0.1 0.06 ± 15% perf-profile.children.cycles-pp.timestamp_truncate
0.36 ± 22% -0.1 0.28 ± 4% perf-profile.children.cycles-pp.tick_sched_handle
0.13 ± 6% -0.1 0.05 ± 8% perf-profile.children.cycles-pp.alloc_pages_current
0.11 ± 7% -0.1 0.04 ± 58% perf-profile.children.cycles-pp.native_write_msr
0.13 ± 5% -0.1 0.06 ± 7% perf-profile.children.cycles-pp.xfs_verify_ino
0.10 ± 5% -0.1 0.03 ±100% perf-profile.children.cycles-pp.xfs_btree_read_buf_block
0.43 ± 4% -0.1 0.36 ± 6% perf-profile.children.cycles-pp.xfs_dialloc_ag
0.70 ± 4% -0.1 0.64 ± 8% perf-profile.children.cycles-pp.xfs_check_agi_freecount
0.12 ± 5% -0.1 0.05 ± 8% perf-profile.children.cycles-pp._xfs_trans_bjoin
0.13 ± 6% -0.1 0.07 ± 14% perf-profile.children.cycles-pp.xfs_trans_reserve
0.12 ± 6% -0.1 0.06 ± 16% perf-profile.children.cycles-pp.xfs_log_reserve
0.12 ± 12% -0.1 0.05 ± 9% perf-profile.children.cycles-pp.xfs_btree_lookup_get_block
0.09 ± 9% -0.1 0.03 ±100% perf-profile.children.cycles-pp.native_irq_return_iret
0.14 ± 6% -0.1 0.08 ± 12% perf-profile.children.cycles-pp.xfs_agino_range
0.10 ± 4% -0.1 0.04 ± 58% perf-profile.children.cycles-pp.lapic_next_deadline
0.20 ± 5% -0.1 0.14 ± 10% perf-profile.children.cycles-pp.xfs_buf_item_format
0.10 ± 19% -0.0 0.06 ± 11% perf-profile.children.cycles-pp.update_load_avg
0.15 ± 2% -0.0 0.11 ± 6% perf-profile.children.cycles-pp.memcpy_erms
0.16 ± 9% -0.0 0.12 ± 21% perf-profile.children.cycles-pp.xfs_bmapi_reserve_delalloc
0.08 ± 5% -0.0 0.04 ± 58% perf-profile.children.cycles-pp.xfs_iunlink
0.09 ± 24% +0.1 0.14 ± 5% perf-profile.children.cycles-pp.task_tick_fair
0.00 +0.1 0.10 ± 51% perf-profile.children.cycles-pp.xfs_inactive_worker
0.06 ± 17% +0.1 0.17 ± 29% perf-profile.children.cycles-pp.worker_thread
0.06 ± 9% +0.1 0.17 ± 29% perf-profile.children.cycles-pp.process_one_work
0.08 ± 19% +0.1 0.21 ± 22% perf-profile.children.cycles-pp.ret_from_fork
0.07 ± 14% +0.1 0.21 ± 22% perf-profile.children.cycles-pp.kthread
0.00 +0.6 0.58 ± 24% perf-profile.children.cycles-pp.xfs_inode_free_blocks
0.00 +0.7 0.65 ± 13% perf-profile.children.cycles-pp.xfs_inactive_inode
0.00 +0.7 0.67 ± 14% perf-profile.children.cycles-pp.xfs_ici_walk_fns
0.00 +0.7 0.67 ± 14% perf-profile.children.cycles-pp.xfs_ici_walk_ag
4.07 ± 3% +31.3 35.37 ± 2% perf-profile.children.cycles-pp.unlink
3.94 ± 3% +31.4 35.32 ± 2% perf-profile.children.cycles-pp.do_unlinkat
3.43 ± 3% +32.2 35.67 ± 2% perf-profile.children.cycles-pp.creat
3.37 ± 3% +32.3 35.65 ± 2% perf-profile.children.cycles-pp.do_sys_open
3.33 ± 3% +32.3 35.64 ± 2% perf-profile.children.cycles-pp.do_filp_open
3.33 ± 3% +32.3 35.64 ± 2% perf-profile.children.cycles-pp.path_openat
56.86 ± 5% +33.8 90.66 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
56.54 ± 5% +34.0 90.56 perf-profile.children.cycles-pp.do_syscall_64
3.97 ± 3% +65.5 69.42 ± 2% perf-profile.children.cycles-pp.rwsem_down_write_slowpath
3.07 ± 4% +66.2 69.31 ± 2% perf-profile.children.cycles-pp.rwsem_optimistic_spin
0.84 ± 4% +67.1 67.93 ± 2% perf-profile.children.cycles-pp.osq_lock
31.55 ± 12% -26.5 5.08 ± 13% perf-profile.self.cycles-pp.intel_idle
6.23 ± 7% -3.7 2.48 ± 5% perf-profile.self.cycles-pp.do_syscall_64
5.17 ± 5% -3.2 1.93 ± 5% perf-profile.self.cycles-pp.iomap_set_range_uptodate
3.83 ± 5% -2.5 1.33 ± 6% perf-profile.self.cycles-pp.syscall_return_via_sysret
3.44 ± 6% -2.2 1.25 ± 5% perf-profile.self.cycles-pp.entry_SYSCALL_64
2.90 ± 3% -1.4 1.45 ± 2% perf-profile.self.cycles-pp.rwsem_spin_on_owner
1.60 ± 7% -0.9 0.70 ± 4% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
1.13 ± 6% -0.7 0.39 ± 7% perf-profile.self.cycles-pp.xfs_file_iomap_begin_delay
1.15 ± 4% -0.7 0.43 ± 7% perf-profile.self.cycles-pp.___might_sleep
1.05 ± 6% -0.6 0.40 ± 8% perf-profile.self.cycles-pp.xfs_file_write_iter
0.99 ± 6% -0.6 0.35 ± 5% perf-profile.self.cycles-pp.iov_iter_fault_in_readable
0.84 ± 8% -0.5 0.32 ± 10% perf-profile.self.cycles-pp.xfs_file_iomap_end
0.68 ± 6% -0.4 0.24 ± 7% perf-profile.self.cycles-pp.up_write
0.77 ± 7% -0.4 0.33 ± 5% perf-profile.self.cycles-pp.selinux_file_permission
0.69 ± 8% -0.4 0.25 ± 5% perf-profile.self.cycles-pp.down_write
0.70 ± 8% -0.4 0.28 ± 3% perf-profile.self.cycles-pp.write
0.66 ± 7% -0.4 0.26 ± 7% perf-profile.self.cycles-pp.iomap_apply
0.66 ± 7% -0.4 0.26 ± 10% perf-profile.self.cycles-pp.iomap_write_actor
0.64 ± 7% -0.4 0.24 ± 8% perf-profile.self.cycles-pp.find_get_entry
0.62 ± 5% -0.4 0.23 ± 5% perf-profile.self.cycles-pp.__might_sleep
0.60 ± 12% -0.4 0.22 ± 6% perf-profile.self.cycles-pp.__fget_light
0.58 ± 10% -0.4 0.23 ± 8% perf-profile.self.cycles-pp.xfs_file_buffered_aio_write
0.56 ± 8% -0.3 0.21 ± 2% perf-profile.self.cycles-pp.xas_load
0.48 ± 29% -0.3 0.14 ± 6% perf-profile.self.cycles-pp.balance_dirty_pages_ratelimited
0.58 ± 6% -0.3 0.25 ± 3% perf-profile.self.cycles-pp.xfs_file_iomap_begin
0.50 ± 6% -0.3 0.18 ± 10% perf-profile.self.cycles-pp.__pagevec_lru_add_fn
0.51 ± 26% -0.3 0.19 ± 11% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.48 ± 4% -0.3 0.18 ± 6% perf-profile.self.cycles-pp.fsnotify
0.52 ± 4% -0.3 0.21 ± 4% perf-profile.self.cycles-pp._raw_spin_lock
0.47 ± 6% -0.3 0.18 ± 6% perf-profile.self.cycles-pp.iomap_write_end
0.46 ± 6% -0.3 0.16 ± 9% perf-profile.self.cycles-pp.xfs_errortag_test
0.45 ± 4% -0.3 0.16 ± 2% perf-profile.self.cycles-pp.pagecache_get_page
0.43 ± 4% -0.3 0.15 ± 7% perf-profile.self.cycles-pp.iov_iter_copy_from_user_atomic
0.47 ± 6% -0.3 0.19 ± 5% perf-profile.self.cycles-pp.__sb_start_write
0.45 ± 8% -0.3 0.17 ± 2% perf-profile.self.cycles-pp.vfs_write
0.41 ± 6% -0.3 0.15 ± 5% perf-profile.self.cycles-pp.iomap_write_begin
0.41 ± 3% -0.3 0.14 ± 5% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.41 ± 5% -0.3 0.15 ± 7% perf-profile.self.cycles-pp.xfs_iext_lookup_extent
0.39 ± 4% -0.3 0.14 ± 3% perf-profile.self.cycles-pp.xfs_break_leased_layouts
0.40 ± 13% -0.2 0.15 ± 5% perf-profile.self.cycles-pp.percpu_counter_add_batch
0.36 ± 4% -0.2 0.12 ± 6% perf-profile.self.cycles-pp.__x64_sys_write
0.35 ± 6% -0.2 0.12 ± 4% perf-profile.self.cycles-pp.unlock_page
0.33 ± 4% -0.2 0.11 ± 15% perf-profile.self.cycles-pp.ksys_write
0.39 ± 20% -0.2 0.17 ± 23% perf-profile.self.cycles-pp.ktime_get
0.32 ± 7% -0.2 0.10 ± 7% perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.35 ± 7% -0.2 0.14 ± 6% perf-profile.self.cycles-pp.release_pages
0.33 ± 7% -0.2 0.12 ± 10% perf-profile.self.cycles-pp.iomap_set_page_dirty
0.34 ± 6% -0.2 0.14 ± 10% perf-profile.self.cycles-pp.get_page_from_freelist
0.32 ± 6% -0.2 0.12 ± 7% perf-profile.self.cycles-pp.__mod_memcg_state
0.38 ± 5% -0.2 0.18 ± 5% perf-profile.self.cycles-pp.__x86_indirect_thunk_rax
0.32 ± 3% -0.2 0.13 ± 6% perf-profile.self.cycles-pp.__list_del_entry_valid
0.30 ± 4% -0.2 0.11 ± 7% perf-profile.self.cycles-pp.new_sync_write
0.29 ± 7% -0.2 0.11 ± 4% perf-profile.self.cycles-pp.xfs_iunlock
0.28 ± 6% -0.2 0.10 ± 8% perf-profile.self.cycles-pp.xfs_ilock
0.29 ± 4% -0.2 0.11 ± 7% perf-profile.self.cycles-pp.page_mapping
0.27 ± 4% -0.2 0.10 ± 9% perf-profile.self.cycles-pp.__fsnotify_parent
0.26 ± 9% -0.2 0.08 ± 10% perf-profile.self.cycles-pp.xfs_file_aio_write_checks
0.28 ± 3% -0.2 0.11 ± 3% perf-profile.self.cycles-pp.__mod_node_page_state
0.24 ± 20% -0.2 0.07 ± 5% perf-profile.self.cycles-pp.account_page_dirtied
0.27 ± 12% -0.2 0.10 ± 14% perf-profile.self.cycles-pp.xas_start
0.23 ± 3% -0.2 0.08 ± 5% perf-profile.self.cycles-pp.account_page_cleaned
0.24 ± 7% -0.2 0.09 ± 7% perf-profile.self.cycles-pp._cond_resched
0.20 ± 15% -0.2 0.05 ± 61% perf-profile.self.cycles-pp.menu_select
0.25 ± 3% -0.1 0.11 ± 12% perf-profile.self.cycles-pp.xfs_bmbt_to_iomap
0.20 ± 6% -0.1 0.06 ± 6% perf-profile.self.cycles-pp.iov_iter_advance
0.22 ± 10% -0.1 0.08 ± 10% perf-profile.self.cycles-pp.find_get_entries
0.19 ± 6% -0.1 0.06 ± 7% perf-profile.self.cycles-pp.xfs_break_layouts
0.20 ± 9% -0.1 0.08 ± 6% perf-profile.self.cycles-pp.__mod_lruvec_state
0.31 ± 7% -0.1 0.19 ± 5% perf-profile.self.cycles-pp.xas_store
0.24 ± 6% -0.1 0.12 ± 10% perf-profile.self.cycles-pp.__add_to_page_cache_locked
0.19 ± 13% -0.1 0.07 ± 5% perf-profile.self.cycles-pp.node_dirty_ok
0.14 ± 9% -0.1 0.03 ±100% perf-profile.self.cycles-pp._raw_spin_lock_irq
0.19 ± 6% -0.1 0.08 ± 6% perf-profile.self.cycles-pp.rcu_all_qs
0.20 ± 4% -0.1 0.08 ± 5% perf-profile.self.cycles-pp.file_update_time
0.16 ± 6% -0.1 0.05 ± 9% perf-profile.self.cycles-pp.iomap_file_buffered_write
0.13 ± 9% -0.1 0.03 ±100% perf-profile.self.cycles-pp.cpuidle_enter_state
0.16 ± 5% -0.1 0.06 ± 11% perf-profile.self.cycles-pp.lock_page_memcg
0.16 ± 7% -0.1 0.06 ± 7% perf-profile.self.cycles-pp.xfs_dir3_leaf_check_int
0.21 ± 10% -0.1 0.11 ± 21% perf-profile.self.cycles-pp.xfs_find_bdev_for_inode
0.17 ± 9% -0.1 0.07 ± 6% perf-profile.self.cycles-pp.__fdget_pos
0.16 ± 9% -0.1 0.06 ± 11% perf-profile.self.cycles-pp.xfs_dir3_data_entsize
0.15 ± 8% -0.1 0.06 ± 9% perf-profile.self.cycles-pp.truncate_inode_pages_range
0.14 ± 3% -0.1 0.04 ± 58% perf-profile.self.cycles-pp.__inode_security_revalidate
0.17 ± 8% -0.1 0.08 ± 6% perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
0.15 ± 10% -0.1 0.06 ± 6% perf-profile.self.cycles-pp.xfs_log_commit_cil
0.17 ± 15% -0.1 0.08 ± 5% perf-profile.self.cycles-pp.generic_write_check_limits
0.12 ± 7% -0.1 0.03 ±100% perf-profile.self.cycles-pp.timestamp_truncate
0.11 ± 9% -0.1 0.03 ±100% perf-profile.self.cycles-pp.security_file_permission
0.17 ± 11% -0.1 0.09 ± 4% perf-profile.self.cycles-pp.workingset_update_node
0.12 ± 10% -0.1 0.04 ± 57% perf-profile.self.cycles-pp.xas_clear_mark
0.15 ± 3% -0.1 0.06 ± 6% perf-profile.self.cycles-pp.generic_write_checks
0.15 ± 8% -0.1 0.07 ± 10% perf-profile.self.cycles-pp.__alloc_pages_nodemask
0.12 ± 5% -0.1 0.04 ± 58% perf-profile.self.cycles-pp.__xfs_dir3_data_check
0.14 ± 5% -0.1 0.07 ± 7% perf-profile.self.cycles-pp.__sb_end_write
0.11 ± 7% -0.1 0.04 ± 58% perf-profile.self.cycles-pp.native_write_msr
0.12 ± 5% -0.1 0.05 ± 8% perf-profile.self.cycles-pp.current_time
0.09 ± 7% -0.1 0.03 ±100% perf-profile.self.cycles-pp.__percpu_counter_sum
0.09 ± 9% -0.1 0.03 ±100% perf-profile.self.cycles-pp.native_irq_return_iret
0.11 ± 7% -0.0 0.06 ± 6% perf-profile.self.cycles-pp.xfs_agino_range
0.15 ± 4% -0.0 0.11 ± 7% perf-profile.self.cycles-pp.memcpy_erms
0.84 ± 4% +66.7 67.58 ± 2% perf-profile.self.cycles-pp.osq_lock
aim7.jobs-per-min
460000 +-+----------------------------------------------------------------+
| .+.. .+. .+.. .+. .+. .+. .+.+..+.+. .+.|
440000 +-+ + + +.+.+.+. + +..+ + +.+..+.+.+.+..+ |
420000 +-+ |
| |
400000 +-+ O O |
380000 +-+ O O |
| O O O O O O |
360000 O-+ O O O O O O O O O O O |
340000 +-O O O |
| |
320000 +-+ |
300000 +-+ O |
| O O |
280000 +-+-----------------------------------------------------O----------+
aim7.time.system_time
2200 +-+------------------------------------------------------------------+
| O O |
2000 +-+ O O |
1800 +-+ |
| O |
1600 O-+ O O O O O O |
1400 +-+ O O O O O O O O O O O |
| O O O O O O |
1200 +-+ |
1000 +-+ |
| |
800 +-+ |
600 +-+ |
| |
400 +-+------------------------------------------------------------------+
aim7.time.elapsed_time
65 +-+--------------------------------------------------------------------+
| O O |
| O |
60 +-+ O |
| |
| |
55 +-+ |
| O O O |
50 O-+ O O O O O O O O O O O |
| O O O O O O |
| O O O O |
45 +-+ |
| |
|.+.. .+. .+. .+.+.+..+. .+.. .+..+. .+.. .+.+.+..+.+..+.+.+.. .|
40 +-+--------------------------------------------------------------------+
aim7.time.elapsed_time.max
65 +-+--------------------------------------------------------------------+
| O O |
| O |
60 +-+ O |
| |
| |
55 +-+ |
| O O O |
50 O-+ O O O O O O O O O O O |
| O O O O O O |
| O O O O |
45 +-+ |
| |
|.+.. .+. .+. .+.+.+..+. .+.. .+..+. .+.. .+.+.+..+.+..+.+.+.. .|
40 +-+--------------------------------------------------------------------+
aim7.time.voluntary_context_switches
1e+06 +-+----------------------------------------------------------------+
| |
900000 +-+.+..+.+.+.+..+.+.+.+..+.+.+.+..+.+.+.+.+..+.+.+.+..+.+.+.+..+.+.|
| |
| |
800000 +-+ |
| |
700000 +-+ |
| |
600000 +-+ |
| |
| O O O O O O |
500000 O-+ O O O O O O O O O O O O O O O O O |
| O O O O |
400000 +-+---------------------------------------------------------O------+
aim7.time.involuntary_context_switches
10000 +-+------------------------------------------------------O-----O----+
| O |
9000 +-+ O |
8000 +-O O O |
O O O O O O |
7000 +-+ O O O O O O O |
6000 +-+ O O O O O O |
| O O O |
5000 +-+ |
4000 +-+ |
| |
3000 +-+ |
2000 +-+ |
|.+.. .+.. .+. .+..+. .+.+. .+. .+. |
1000 +-+-----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[cpuidle] 331b89c842: unixbench.score 5.6% improvement
by kernel test robot
Greetings,
FYI, we noticed a 5.6% improvement of unixbench.score due to commit:
commit: 331b89c8420810efbf395f3bba69b075b56a4c11 ("cpuidle: Use nanoseconds as the unit of time")
https://github.com/0day-ci/linux UPDATE-20191109-152231/Rafael-J-Wysocki/cpuidle-Use-nanoseconds-as-the-unit-of-time/20191109-002344
in testcase: unixbench
on test machine: 160 threads Intel(R) Xeon(R) CPU E7-8890 v4 @ 2.20GHz with 256G memory
with the following parameters:
runtime: 300s
nr_task: 1
test: shell8
cpufreq_governor: performance
ucode: 0xb000038
test-description: UnixBench is the original BYTE UNIX benchmark suite, which aims to test the performance of Unix-like systems.
test-url: https://github.com/kdlucas/byte-unixbench
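For context on the commit subject, switching cpuidle bookkeeping from microseconds to nanoseconds lets a governor compare measured ktime values directly instead of dividing each one down to microseconds first. The snippet below is a generic illustration of that unit change under assumed names (residency_met_us/residency_met_ns are hypothetical), not the actual cpuidle code:

```c
#include <assert.h>
#include <stdint.h>

/* A minimal sketch, assuming hypothetical helper names: idle-state
 * parameters kept in nanoseconds make residency checks plain integer
 * compares, where microsecond parameters force a division per check. */
typedef int64_t s64;

#define NSEC_PER_USEC 1000LL

/* before: target residency kept in us, every check pays a division */
static int residency_met_us(s64 measured_ns, s64 target_residency_us)
{
    return measured_ns / NSEC_PER_USEC >= target_residency_us;
}

/* after: residency converted to ns once at init, checks compare directly */
static int residency_met_ns(s64 measured_ns, s64 target_residency_ns)
{
    return measured_ns >= target_residency_ns;
}
```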
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/1/debian-x86_64-2019-09-23.cgz/300s/lkp-bdw-ex2/shell8/unixbench/0xb000038
commit:
737d3c9826 ("Merge branch 'acpi-mm' into linux-next")
331b89c842 ("cpuidle: Use nanoseconds as the unit of time")
737d3c982616413a 331b89c8420810efbf395f3bba6
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
1:4 -25% :4 dmesg.WARNING:at#for_ip_swapgs_restore_regs_and_return_to_usermode/0x
1:4 -25% :4 dmesg.WARNING:stack_recursion
0:4 9% 1:4 perf-profile.children.cycles-pp.error_entry
0:4 9% 1:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
7937 +5.6% 8378 unixbench.score
3748 ± 7% +14.5% 4292 ± 2% unixbench.time.involuntary_context_switches
35796981 +5.6% 37795787 unixbench.time.minor_page_faults
187.02 +1.8% 190.34 unixbench.time.user_time
1073178 +7.2% 1150878 unixbench.time.voluntary_context_switches
300020 +5.6% 316708 unixbench.workload
42454 +7.9% 45813 vmstat.system.cs
7042224 ±171% +214.7% 22164286 ± 57% numa-numastat.node3.local_node
7065516 ±170% +214.0% 22187613 ± 57% numa-numastat.node3.numa_hit
859.50 ± 8% +29.2% 1110 ± 7% slabinfo.skbuff_fclone_cache.active_objs
859.50 ± 8% +29.2% 1110 ± 7% slabinfo.skbuff_fclone_cache.num_objs
247.29 ± 2% +7.3% 265.24 ± 4% sched_debug.cfs_rq:/.util_avg.stddev
1.13 ± 92% +512.9% 6.93 ± 32% sched_debug.cpu.ttwu_count.avg
79.25 ±114% +745.1% 669.75 ± 36% sched_debug.cpu.ttwu_count.max
7.61 ±103% +747.2% 64.48 ± 37% sched_debug.cpu.ttwu_count.stddev
6453045 ±116% +98465.0% 6.36e+09 ± 17% cpuidle.C1.time
499294 ±134% +2931.8% 15137735 ± 2% cpuidle.C1.usage
4.141e+08 ± 84% +917.5% 4.213e+09 ± 14% cpuidle.C1E.time
8.01e+09 ± 38% -93.9% 4.866e+08 ± 2% cpuidle.C6.time
9305687 ± 34% -94.5% 508125 cpuidle.C6.usage
64076299 ±166% +463.1% 3.608e+08 ± 14% cpuidle.POLL.time
175398 ±152% +467.5% 995322 ± 11% cpuidle.POLL.usage
1738 ± 4% -6.7% 1622 ± 2% proc-vmstat.nr_page_table_pages
28099906 +5.4% 29615389 proc-vmstat.numa_hit
28006746 +5.4% 29522237 proc-vmstat.numa_local
29724990 +5.5% 31345142 proc-vmstat.pgalloc_normal
36048821 +5.6% 38050470 proc-vmstat.pgfault
29626806 +5.5% 31246186 proc-vmstat.pgfree
533606 +5.6% 563332 proc-vmstat.unevictable_pgs_culled
8807 ±112% +176.9% 24384 ± 50% numa-vmstat.node0.nr_active_anon
8743 ±113% +177.9% 24299 ± 50% numa-vmstat.node0.nr_anon_pages
8807 ±112% +176.9% 24384 ± 50% numa-vmstat.node0.nr_zone_active_anon
1661 ± 91% -88.6% 189.25 ± 95% numa-vmstat.node2.nr_inactive_anon
2192 ± 30% -30.0% 1534 ± 18% numa-vmstat.node2.nr_mapped
1773 ± 86% -88.8% 198.75 ± 88% numa-vmstat.node2.nr_shmem
7638 ± 22% -45.5% 4164 ± 16% numa-vmstat.node2.nr_slab_reclaimable
17941 ± 7% -31.0% 12382 ± 21% numa-vmstat.node2.nr_slab_unreclaimable
1661 ± 91% -88.6% 189.25 ± 95% numa-vmstat.node2.nr_zone_inactive_anon
22.00 ±104% +244.3% 75.75 ± 37% numa-vmstat.node3.nr_inactive_file
1717 ± 20% +54.7% 2656 ± 15% numa-vmstat.node3.nr_mapped
22.00 ±104% +244.3% 75.75 ± 37% numa-vmstat.node3.nr_zone_inactive_file
3504467 ±163% +208.4% 10806584 ± 56% numa-vmstat.node3.numa_local
154.75 ± 11% +26.3% 195.50 ± 5% turbostat.Avg_MHz
2094 ± 3% +24.0% 2596 turbostat.Bzy_MHz
497563 ±135% +2945.6% 15153724 ± 3% turbostat.C1
0.05 ±116% +50.7 50.73 ± 17% turbostat.C1%
3.30 ± 84% +30.3 33.62 ± 14% turbostat.C1E%
9262728 ± 34% -95.2% 444422 ± 2% turbostat.C6
63.79 ± 38% -60.0 3.78 ± 2% turbostat.C6%
30.71 ± 14% +191.4% 89.50 turbostat.CPU%c1
19.40 ± 97% -97.3% 0.53 ±120% turbostat.CPU%c3
42.51 ± 52% -94.3% 2.42 ± 3% turbostat.CPU%c6
47.25 ± 5% -99.1% 0.41 ± 30% turbostat.Pkg%pc2
291.09 ± 2% +32.0% 384.37 turbostat.PkgWatt
21.14 ± 8% +54.4% 32.64 turbostat.RAMWatt
35273 ±111% +176.5% 97535 ± 50% numa-meminfo.node0.Active
35227 ±112% +176.9% 97535 ± 50% numa-meminfo.node0.Active(anon)
34969 ±113% +178.0% 97200 ± 50% numa-meminfo.node0.AnonPages
107825 ± 8% +16.9% 126002 ± 7% numa-meminfo.node0.Slab
6890 ± 87% -87.6% 853.50 ± 76% numa-meminfo.node2.Inactive
6645 ± 91% -88.6% 757.75 ± 95% numa-meminfo.node2.Inactive(anon)
30550 ± 22% -45.5% 16660 ± 16% numa-meminfo.node2.KReclaimable
8670 ± 27% -28.8% 6169 ± 20% numa-meminfo.node2.Mapped
30550 ± 22% -45.5% 16660 ± 16% numa-meminfo.node2.SReclaimable
71835 ± 7% -30.9% 49656 ± 22% numa-meminfo.node2.SUnreclaim
7093 ± 86% -88.8% 795.00 ± 88% numa-meminfo.node2.Shmem
102385 ± 8% -35.2% 66317 ± 20% numa-meminfo.node2.Slab
6634 ± 17% +55.3% 10301 ± 14% numa-meminfo.node3.Mapped
571465 ± 15% +19.3% 681780 ± 15% numa-meminfo.node3.MemUsed
12.14 ± 29% -6.0 6.13 ± 15% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
12.14 ± 29% -6.0 6.13 ± 15% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.61 ± 4% -0.6 1.04 ± 29% perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64
1.61 ± 4% -0.4 1.21 ± 18% perf-profile.calltrace.cycles-pp.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64
52.16 ± 27% +22.9 75.03 perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
52.16 ± 27% +22.9 75.03 perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
52.16 ± 27% +22.9 75.03 perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
51.30 ± 27% +22.9 74.24 perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
52.26 ± 27% +23.0 75.28 perf-profile.calltrace.cycles-pp.secondary_startup_64
51.00 ± 27% +23.1 74.11 perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
35.87 ± 46% -18.8 17.11 ± 12% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
35.73 ± 47% -18.6 17.11 ± 12% perf-profile.children.cycles-pp.do_syscall_64
9.93 ± 34% -5.6 4.32 ± 32% perf-profile.children.cycles-pp.__fput
10.03 ± 34% -5.5 4.57 ± 26% perf-profile.children.cycles-pp.task_work_run
10.14 ± 35% -5.4 4.74 ± 22% perf-profile.children.cycles-pp.exit_to_usermode_loop
52.16 ± 27% +22.9 75.03 perf-profile.children.cycles-pp.start_secondary
52.26 ± 27% +23.0 75.28 perf-profile.children.cycles-pp.secondary_startup_64
52.26 ± 27% +23.0 75.28 perf-profile.children.cycles-pp.cpu_startup_entry
52.26 ± 27% +23.0 75.28 perf-profile.children.cycles-pp.do_idle
51.40 ± 27% +23.1 74.48 perf-profile.children.cycles-pp.cpuidle_enter
51.40 ± 27% +23.1 74.48 perf-profile.children.cycles-pp.cpuidle_enter_state
88.00 ± 14% -17.3% 72.75 ± 8% interrupts.51:PCI-MSI.1572864-edge.eth0-TxRx-0
297179 +7.5% 319568 interrupts.CAL:Function_call_interrupts
88.00 ± 14% -17.3% 72.75 ± 8% interrupts.CPU0.51:PCI-MSI.1572864-edge.eth0-TxRx-0
5.25 ± 89% +1390.5% 78.25 ± 30% interrupts.CPU0.TLB:TLB_shootdowns
1535 +11.6% 1714 ± 4% interrupts.CPU16.CAL:Function_call_interrupts
1521 ± 3% +10.4% 1679 ± 2% interrupts.CPU164.CAL:Function_call_interrupts
176.00 ± 98% -60.5% 69.50 ±173% interrupts.CPU167.RES:Rescheduling_interrupts
79.00 ±173% +255.1% 280.50 ± 62% interrupts.CPU169.RES:Rescheduling_interrupts
1522 ± 2% +10.3% 1679 ± 2% interrupts.CPU177.CAL:Function_call_interrupts
1536 +9.8% 1687 ± 2% interrupts.CPU179.CAL:Function_call_interrupts
1542 +9.7% 1691 ± 2% interrupts.CPU180.CAL:Function_call_interrupts
1547 +9.3% 1691 ± 2% interrupts.CPU181.CAL:Function_call_interrupts
1554 +9.8% 1707 ± 4% interrupts.CPU23.CAL:Function_call_interrupts
4.75 ± 34% +1673.7% 84.25 ± 30% interrupts.CPU24.TLB:TLB_shootdowns
177.75 ± 93% -63.9% 64.25 ±170% interrupts.CPU48.RES:Rescheduling_interrupts
163.50 ± 96% -61.0% 63.75 ±168% interrupts.CPU49.RES:Rescheduling_interrupts
185.75 ± 99% -55.2% 83.25 ±173% interrupts.CPU70.RES:Rescheduling_interrupts
1487 ± 8% +15.3% 1715 ± 2% interrupts.CPU72.CAL:Function_call_interrupts
86.50 ±170% +257.2% 309.00 ± 62% interrupts.CPU73.RES:Rescheduling_interrupts
98.75 ±172% +210.4% 306.50 ± 54% interrupts.CPU74.RES:Rescheduling_interrupts
88.00 ±173% +206.5% 269.75 ± 59% interrupts.CPU76.RES:Rescheduling_interrupts
1545 +12.0% 1730 ± 5% interrupts.CPU83.CAL:Function_call_interrupts
129.50 ±131% +164.1% 342.00 ± 49% interrupts.CPU95.RES:Rescheduling_interrupts
64320 ± 13% +15.2% 74070 ± 4% interrupts.RES:Rescheduling_interrupts
241.00 ± 5% +489.6% 1421 ± 4% interrupts.TLB:TLB_shootdowns
24.14 ± 4% -45.7% 13.10 ± 2% perf-stat.i.MPKI
3.758e+09 ± 8% +34.5% 5.053e+09 ± 3% perf-stat.i.branch-instructions
2.97 ± 8% -1.2 1.76 ± 3% perf-stat.i.branch-miss-rate%
1.068e+08 ± 6% -18.7% 86810106 perf-stat.i.branch-misses
0.38 ± 5% +0.2 0.62 ± 11% perf-stat.i.cache-miss-rate%
4.194e+08 ± 2% -27.4% 3.047e+08 perf-stat.i.cache-references
44539 +6.9% 47630 perf-stat.i.context-switches
1.81 ± 4% -11.3% 1.61 ± 2% perf-stat.i.cpi
0.40 ± 4% -0.2 0.22 perf-stat.i.dTLB-load-miss-rate%
18597280 ± 4% -28.3% 13337508 ± 4% perf-stat.i.dTLB-load-misses
4.772e+09 ± 5% +27.9% 6.104e+09 ± 4% perf-stat.i.dTLB-loads
0.13 ± 2% -0.0 0.09 perf-stat.i.dTLB-store-miss-rate%
4559225 ± 3% -36.3% 2902725 ± 10% perf-stat.i.dTLB-store-misses
61.28 ± 4% -10.9 50.41 ± 2% perf-stat.i.iTLB-load-miss-rate%
7316157 ± 4% -12.0% 6441005 ± 5% perf-stat.i.iTLB-load-misses
4711889 ± 9% +31.0% 6173969 ± 3% perf-stat.i.iTLB-loads
1.841e+10 ± 5% +26.3% 2.325e+10 ± 2% perf-stat.i.instructions
2501 ± 7% +61.3% 4034 ± 17% perf-stat.i.instructions-per-iTLB-miss
558522 +5.0% 586595 perf-stat.i.minor-faults
558460 +5.0% 586582 perf-stat.i.page-faults
22.84 ± 4% -42.6% 13.11 ± 2% perf-stat.overall.MPKI
2.85 ± 8% -1.1 1.72 ± 2% perf-stat.overall.branch-miss-rate%
0.36 ± 4% +0.2 0.53 ± 4% perf-stat.overall.cache-miss-rate%
0.39 ± 4% -0.2 0.22 perf-stat.overall.dTLB-load-miss-rate%
0.13 ± 2% -0.0 0.09 perf-stat.overall.dTLB-store-miss-rate%
60.88 ± 4% -9.9 51.02 perf-stat.overall.iTLB-load-miss-rate%
2522 ± 7% +43.6% 3621 ± 5% perf-stat.overall.instructions-per-iTLB-miss
3846863 ± 6% +20.4% 4632272 ± 2% perf-stat.overall.path-length
3.693e+09 ± 8% +34.5% 4.966e+09 ± 3% perf-stat.ps.branch-instructions
1.05e+08 ± 6% -18.7% 85353443 perf-stat.ps.branch-misses
4.123e+08 ± 2% -27.4% 2.994e+08 perf-stat.ps.cache-references
43756 +6.9% 46791 perf-stat.ps.context-switches
18276268 ± 4% -28.3% 13106079 ± 4% perf-stat.ps.dTLB-load-misses
4.689e+09 ± 5% +27.9% 5.999e+09 ± 4% perf-stat.ps.dTLB-loads
4481504 ± 3% -36.3% 2853056 ± 10% perf-stat.ps.dTLB-store-misses
7189784 ± 4% -12.0% 6328997 ± 5% perf-stat.ps.iTLB-load-misses
4629950 ± 9% +31.1% 6067676 ± 3% perf-stat.ps.iTLB-loads
1.809e+10 ± 5% +26.3% 2.285e+10 ± 2% perf-stat.ps.instructions
548686 +5.0% 576249 perf-stat.ps.minor-faults
548625 +5.0% 576236 perf-stat.ps.page-faults
1.154e+12 ± 5% +27.2% 1.467e+12 ± 2% perf-stat.total.instructions
11329 ± 11% +34.1% 15189 ± 27% softirqs.CPU1.SCHED
17634 ± 8% +10.0% 19395 ± 7% softirqs.CPU11.RCU
30187 ± 7% -22.3% 23445 softirqs.CPU112.TIMER
15975 ± 4% +9.6% 17513 softirqs.CPU129.RCU
8458 +31.6% 11129 ± 26% softirqs.CPU14.SCHED
8802 ± 7% +12.0% 9862 ± 3% softirqs.CPU168.SCHED
8805 +12.9% 9939 softirqs.CPU169.SCHED
16049 ± 10% +19.9% 19236 ± 6% softirqs.CPU171.RCU
8868 ± 2% +12.4% 9971 ± 3% softirqs.CPU171.SCHED
28474 ± 12% -14.3% 24400 ± 2% softirqs.CPU172.TIMER
16350 ± 13% +18.6% 19394 ± 5% softirqs.CPU173.RCU
8940 ± 4% +9.8% 9817 ± 3% softirqs.CPU173.SCHED
16281 ± 15% +25.4% 20423 ± 10% softirqs.CPU177.RCU
8679 ± 2% +18.3% 10269 ± 10% softirqs.CPU177.SCHED
8861 ± 2% +12.0% 9926 ± 5% softirqs.CPU182.SCHED
8998 ± 3% +8.3% 9744 ± 4% softirqs.CPU183.SCHED
8820 +12.8% 9947 ± 3% softirqs.CPU185.SCHED
8811 ± 2% +12.7% 9933 ± 4% softirqs.CPU186.SCHED
8748 ± 3% +12.7% 9855 ± 5% softirqs.CPU187.SCHED
8653 ± 9% +13.9% 9859 ± 4% softirqs.CPU189.SCHED
15813 ± 4% +18.7% 18764 softirqs.CPU22.RCU
9725 ± 3% +14.9% 11177 ± 12% softirqs.CPU24.SCHED
16568 ± 7% +13.5% 18799 ± 7% softirqs.CPU25.RCU
8785 ± 5% +16.2% 10211 ± 15% softirqs.CPU30.SCHED
28116 ± 5% -13.1% 24419 ± 2% softirqs.CPU31.TIMER
8617 ± 4% +16.7% 10058 ± 10% softirqs.CPU33.SCHED
8949 ± 6% +13.5% 10158 ± 9% softirqs.CPU60.SCHED
9187 ± 6% +11.7% 10259 ± 5% softirqs.CPU72.SCHED
9053 ± 3% +13.4% 10265 ± 2% softirqs.CPU73.SCHED
16876 ± 11% +13.3% 19127 ± 7% softirqs.CPU75.RCU
16897 ± 11% +17.2% 19797 ± 7% softirqs.CPU76.RCU
16703 ± 13% +19.6% 19985 ± 6% softirqs.CPU77.RCU
8777 ± 4% +14.2% 10022 ± 3% softirqs.CPU77.SCHED
8630 ± 6% +15.4% 9955 ± 3% softirqs.CPU78.SCHED
9162 +12.1% 10269 ± 4% softirqs.CPU80.SCHED
8905 ± 3% +11.5% 9929 ± 4% softirqs.CPU84.SCHED
9097 +11.2% 10116 ± 3% softirqs.CPU85.SCHED
8757 ± 3% +16.5% 10199 ± 7% softirqs.CPU86.SCHED
8699 ± 2% +16.9% 10165 ± 3% softirqs.CPU87.SCHED
8655 +16.4% 10070 ± 2% softirqs.CPU88.SCHED
8727 ± 4% +15.1% 10040 ± 3% softirqs.CPU89.SCHED
16459 ± 5% +16.2% 19134 softirqs.CPU9.RCU
8680 ± 3% +17.7% 10213 ± 17% softirqs.CPU9.SCHED
8853 ± 3% +13.9% 10083 ± 2% softirqs.CPU90.SCHED
8855 ± 3% +13.1% 10017 ± 3% softirqs.CPU91.SCHED
8924 ± 4% +16.3% 10383 ± 14% softirqs.CPU97.SCHED
unixbench.score
9000 +-+------------------------------------------------------------------+
O O.O.OO O O O O O OO O O O O O OO.+.+. .++. .+ |
8000 +-+ +.+ +.+.+.+.++.+.+.+.+.+.++ +.+.+ +.+.+.+.+.+ +.+.+.|
7000 +-+ : : |
| : : |
6000 +-+ : : |
5000 +-+ : : |
| : : |
4000 +-+ : : |
3000 +-+ : : |
| : : |
2000 +-+ :: |
1000 +-+ : |
| : |
0 +-+--------------O---------------------------------------------------+
unixbench.time.minor_page_faults
4e+07 +-+---------------------------------------------------------------+
O.O.OO.O.O OO O.+.O.OO.O O O OO O.O.++.+. .+.++.+.+.+. .+.+.++.+.|
3.5e+07 +-+ + +.+ + +.+.+.++.+ + ++ |
3e+07 +-+ : : |
| : : |
2.5e+07 +-+ : : |
| : : |
2e+07 +-+ : : |
| : : |
1.5e+07 +-+ :: |
1e+07 +-+ :: |
| : |
5e+06 +-+ : |
| : |
0 +-+-------------O-------------------------------------------------+
unixbench.time.voluntary_context_switches
1.2e+06 +-+---------------------------------------------------------------+
O O.OO.O.O OO O.+.O.OO.O O O OO O O.++.+. .++.+.+.+. .+.+.++.+.|
1e+06 +-+ : +.+ + +.+.+.++.+.+ +.+ ++ |
| : : |
| : : |
800000 +-+ : : |
| : : |
600000 +-+ : : |
| : : |
400000 +-+ :: |
| :: |
| : |
200000 +-+ : |
| : |
0 +-+-------------O-------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Thanks,
Rong Chen
[tracepoints] 110ce06f36: BUG:unable_to_handle_page_fault_for_address
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 110ce06f3698063ac517a8c0daec10ddc1d62657 ("tracepoints: Use static_call")
https://git.kernel.org/cgit/linux/kernel/git/peterz/queue.git x86/static_call
in testcase: blktests
with the following parameters:
disk: 1SSD
test: block-group1
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+------------------------------------------------+------------+------------+
| | 9b3a76eb31 | 110ce06f36 |
+------------------------------------------------+------------+------------+
| boot_successes | 0 | 0 |
| boot_failures | 10 | 16 |
| IP-Config:Auto-configuration_of_network_failed | 10 | |
| BUG:unable_to_handle_page_fault_for_address | 0 | 15 |
| Oops:#[##] | 0 | 16 |
| RIP:static_call_site_swap | 0 | 16 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 16 |
+------------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <lkp@intel.com>
[ 8.784285] BUG: unable to handle page fault for address: ffffffffc02775f5
[ 8.785728] #PF: supervisor write access in kernel mode
[ 8.786808] #PF: error_code(0x0003) - permissions violation
[ 8.787942] PGD 19dc0f067 P4D 19dc0f067 PUD 19dc11067 PMD 19e934067 PTE 80000001cfb5f061
[ 8.789674] Oops: 0003 [#1] SMP PTI
[ 8.790470] CPU: 0 PID: 228 Comm: systemd-udevd Not tainted 5.4.0-rc5-00305-g110ce06f36980 #1
[ 8.792217] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 8.794038] RIP: 0010:static_call_site_swap+0x18/0x30
[ 8.795180] Code: ff ff c3 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 66 66 66 66 90 44 8b 06 48 89 f8 8b 0f 48 29 f0 8b 57 04 41 29 c0 01 c1 <44> 89 07 44 8b 46 04 41 29 c0 01 d0 44 89 47 04 89 0e 89 46 04 c3
[ 8.798790] RSP: 0018:ffffa41c402b7c20 EFLAGS: 00010283
[ 8.799866] RAX: fffffffffffffff0 RBX: ffffffffc02775ed RCX: 00000000fffdcaf2
[ 8.801335] RDX: 000000000000cea7 RSI: ffffffffc0277605 RDI: ffffffffc02775f5
[ 8.802785] RBP: 0000000000000000 R08: 00000000fffdcdd5 R09: ffffffff8e5efdd0
[ 8.804178] R10: ffff93b4ffd851e0 R11: 00000000000303c0 R12: ffffffffc0277605
[ 8.805657] R13: 0000000000000008 R14: 0000000000000008 R15: 0000000000000008
[ 8.807050] FS: 00007fa9263b48c0(0000) GS:ffff93b4ffc00000(0000) knlGS:0000000000000000
[ 8.808721] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 8.809954] CR2: ffffffffc02775f5 CR3: 000000019db84000 CR4: 00000000000406f0
[ 8.811348] Call Trace:
[ 8.811979] sort_r+0x15a/0x1c0
[ 8.812718] ? bpf_fd_reuseport_array_update_elem+0x1a0/0x1a0
[ 8.813918] ? static_call_site_cmp+0x40/0x40
[ 8.814853] __static_call_init+0x3d/0x120
[ 8.815847] static_call_module_notify+0x6c/0xb0
[ 8.816829] notifier_call_chain_robust+0x56/0xc0
[ 8.817857] blocking_notifier_call_chain_robust+0x3e/0x60
[ 8.818981] load_module+0x18e2/0x2070
[ 8.819820] ? ima_post_read_file+0xe2/0x120
[ 8.820744] ? __do_sys_finit_module+0xe9/0x110
[ 8.821799] __do_sys_finit_module+0xe9/0x110
[ 8.822792] do_syscall_64+0x5b/0x1d0
[ 8.823665] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 8.824777] RIP: 0033:0x7fa926b2ef59
[ 8.825622] Code: 00 c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 07 6f 0c 00 f7 d8 64 89 01 48
[ 8.829174] RSP: 002b:00007fffb37b4ab8 EFLAGS: 00000246 ORIG_RAX: 0000000000000139
[ 8.830764] RAX: ffffffffffffffda RBX: 0000560d2d14deb0 RCX: 00007fa926b2ef59
[ 8.832159] RDX: 0000000000000000 RSI: 00007fa926e62265 RDI: 0000000000000007
[ 8.833629] RBP: 00007fa926e62265 R08: 0000000000000000 R09: 0000000000000000
[ 8.835024] R10: 0000000000000007 R11: 0000000000000246 R12: 0000000000000000
[ 8.836500] R13: 0000560d2d14bf70 R14: 0000000000020000 R15: 0000560d2b85ccbc
[ 8.837932] Modules linked in: soundcore pcspkr libata(+) joydev serio_raw virtio_scsi i2c_piix4 parport_pc(+) floppy parport ip_tables
[ 8.840348] CR2: ffffffffc02775f5
[ 8.841148] ---[ end trace db06f9011e8e3626 ]---
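The trace shows static_call_site_swap() taking a supervisor write fault ("#PF: error_code(0x0003) - permissions violation") while sort_r() sorts a module's static-call site table from __static_call_init() at module load, consistent with the table being mapped read-only at that point. The standalone sketch below illustrates why the sort must write to the table at all: with 32-bit self-relative offsets, a swap has to re-bias both entries for their new slots. The struct layout and helpers here are an assumed illustration, not the kernel source:

```c
#include <assert.h>
#include <stdint.h>

/* Assumed mirror of a static-call site entry: each field stores its
 * target as a 32-bit offset relative to the field's own address, so an
 * entry's meaning depends on where it sits in the array. */
struct site {
    int32_t addr;   /* target address, relative to &this->addr */
    int32_t key;    /* key address,    relative to &this->key  */
};

/* Swap two entries, re-biasing the self-relative offsets for their new
 * slots.  These stores are where a read-only table mapping would fault,
 * matching the write #PF in the oops above. */
static void site_swap(struct site *a, struct site *b)
{
    int32_t delta = (int32_t)((intptr_t)b - (intptr_t)a);
    int32_t addr = a->addr - delta;  /* a's offsets re-biased for b's slot */
    int32_t key  = a->key  - delta;

    a->addr = b->addr + delta;       /* b's offsets re-biased for a's slot */
    a->key  = b->key  + delta;
    b->addr = addr;
    b->key  = key;
}

/* Resolve an entry back to the absolute target address it encodes. */
static intptr_t site_addr(const struct site *s)
{
    return (intptr_t)&s->addr + s->addr;
}
```

Because the swap cannot be done by a plain memcpy of the entries, any sort over such a table needs the backing pages writable for its duration.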
To reproduce:
# build kernel
cd linux
cp config-5.4.0-rc5-00305-g110ce06f36980 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
lkp