[sched/fair] 3c29e651e1: hackbench.throughput -15.2% regression
by kernel test robot
Greetings,
FYI, we noticed a -15.2% regression of hackbench.throughput due to commit:
commit: 3c29e651e16dd3b3179cfb2d055ee9538e37515c ("sched/fair: Fall back to sched-idle CPU if idle CPU isn't found")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: hackbench
on test machine: 16 threads Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz with 32G memory
with the following parameters:
nr_threads: 100%
mode: threads
ipc: pipe
cpufreq_governor: performance
ucode: 0xca
test-description: Hackbench is both a benchmark and a stress test for the Linux kernel scheduler.
test-url: https://github.com/linux-test-project/ltp/blob/master/testcases/kernel/sc...
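For context, the hot path in the profile data below is the pipe write -> wakeup -> read cycle that hackbench drives with these parameters (ipc=pipe, mode=threads). The minimal userspace sketch below shows that pattern for illustration only; the real benchmark spawns many sender/receiver groups, and the message size and loop count here are assumptions, not hackbench's actual values:

/*
 * Toy sketch of the hackbench pipe ping-pong pattern: one writer thread
 * wakes one reader thread per small message, exercising pipe_read()/
 * pipe_write() and the wakeup path (__wake_up_common_lock ->
 * try_to_wake_up -> select_task_rq_fair) that dominates the profiles
 * below.  Build with: gcc -O2 -pthread sketch.c
 */
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define MSG_SIZE 100		/* illustrative small-message size */
#define NR_MSGS  100000		/* illustrative loop count */

static int pipefd[2];

static void *receiver(void *arg)
{
	char buf[MSG_SIZE];

	for (int i = 0; i < NR_MSGS; i++) {
		if (read(pipefd[0], buf, sizeof(buf)) <= 0) {
			perror("read");
			break;
		}
	}
	return NULL;
}

int main(void)
{
	char msg[MSG_SIZE];
	pthread_t rx;

	memset(msg, 'x', sizeof(msg));
	if (pipe(pipefd)) {
		perror("pipe");
		return 1;
	}
	pthread_create(&rx, NULL, receiver, NULL);

	/* every write may have to wake the sleeping reader */
	for (int i = 0; i < NR_MSGS; i++)
		if (write(pipefd[1], msg, sizeof(msg)) != sizeof(msg))
			perror("write");

	pthread_join(rx, NULL);
	return 0;
}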
In addition to that, the commit also has a significant impact on the following tests:
+------------------+-----------------------------------------------------------------------+
| testcase: change | hackbench: hackbench.throughput -11.3% regression |
| test machine | 72 threads Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz with 256G memory |
| test parameters | cpufreq_governor=performance |
| | ipc=pipe |
| | iterations=18 |
| | mode=threads |
| | nr_threads=1600% |
| | ucode=0x43 |
+------------------+-----------------------------------------------------------------------+
| testcase: change | hackbench: hackbench.throughput -10.6% regression |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters | cpufreq_governor=performance |
| | ipc=pipe |
| | mode=process |
| | nr_threads=50% |
| | ucode=0xb000038 |
+------------------+-----------------------------------------------------------------------+
| testcase: change | hackbench: hackbench.throughput -6.8% regression |
| test machine | 16 threads Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz with 32G memory |
| test parameters | cpufreq_governor=performance |
| | ipc=pipe |
| | mode=threads |
| | nr_threads=100% |
| | ucode=0xb8 |
+------------------+-----------------------------------------------------------------------+
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <rong.a.chen@intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/ipc/kconfig/mode/nr_threads/rootfs/tbox_group/testcase/ucode:
gcc-7/performance/pipe/x86_64-rhel-7.6/threads/100%/debian-x86_64-20191114.cgz/lkp-cfl-e1/hackbench/0xca
commit:
43e9f7f231 ("sched/fair: Start tracking SCHED_IDLE tasks count in cfs_rq")
3c29e651e1 ("sched/fair: Fall back to sched-idle CPU if idle CPU isn't found")
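For context, the patched commit makes the wakeup path accept a CPU that is running only SCHED_IDLE tasks as a fallback target when no fully idle CPU is found. The toy userspace model below illustrates just that selection policy; the cpu_state array, function name and scan order are invented for illustration, and the real logic lives in select_idle_sibling()/select_idle_cpu() in kernel/sched/fair.c:

/*
 * Toy model of the "fall back to sched-idle CPU" policy: a fully idle CPU
 * is still preferred, but a CPU whose runnable tasks are all SCHED_IDLE is
 * remembered instead of being skipped.  The extra per-CPU check during the
 * scan is one plausible source of the select_idle_sibling()/
 * available_idle_cpu() cost increase seen in the profiles below.
 */
#include <stdio.h>

enum cpu_state { CPU_IDLE, CPU_SCHED_IDLE_ONLY, CPU_BUSY };

/* returns preferred wakeup CPU, or -1 if no idle or sched-idle CPU exists */
static int pick_wakeup_cpu(const enum cpu_state *cpus, int nr_cpus)
{
	int si_cpu = -1;

	for (int cpu = 0; cpu < nr_cpus; cpu++) {
		if (cpus[cpu] == CPU_IDLE)
			return cpu;		/* truly idle: take it immediately */
		if (si_cpu == -1 && cpus[cpu] == CPU_SCHED_IDLE_ONLY)
			si_cpu = cpu;		/* remember first sched-idle fallback */
	}
	return si_cpu;				/* new behaviour: sched-idle CPU as fallback, -1 if none */
}

int main(void)
{
	/* no fully idle CPU, but CPU 2 runs only SCHED_IDLE tasks */
	enum cpu_state cpus[] = { CPU_BUSY, CPU_BUSY, CPU_SCHED_IDLE_ONLY, CPU_BUSY };

	printf("wakeup target: CPU %d\n", pick_wakeup_cpu(cpus, 4));
	return 0;
}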
43e9f7f231e40e45 3c29e651e16dd3b3179cfb2d055
---------------- ---------------------------
%stddev %change %stddev
\ | \
69492 ± 3% -15.2% 58899 hackbench.throughput
7.727e+08 ± 7% +38.3% 1.069e+09 ± 2% hackbench.time.involuntary_context_switches
131708 ± 4% -15.4% 111382 hackbench.time.minor_page_faults
684.98 ± 2% -2.5% 667.72 hackbench.time.user_time
1.755e+09 ± 2% +11.2% 1.953e+09 hackbench.time.voluntary_context_switches
4.32e+08 ± 4% -15.6% 3.648e+08 hackbench.workload
186.50 ± 12% -16.6% 155.50 ± 11% interrupts.TLB:TLB_shootdowns
4147892 ± 5% +19.6% 4961643 vmstat.system.cs
0.72 ± 10% -0.1 0.58 ± 17% mpstat.cpu.all.idle%
0.00 ±156% +0.0 0.01 ± 62% mpstat.cpu.all.soft%
1038 ± 9% +18.5% 1230 ± 4% slabinfo.avc_xperms_data.active_objs
1038 ± 9% +18.5% 1230 ± 4% slabinfo.avc_xperms_data.num_objs
40647036 ± 10% -34.6% 26589504 proc-vmstat.numa_hit
40647036 ± 10% -34.6% 26589504 proc-vmstat.numa_local
40724029 ± 10% -34.5% 26661546 proc-vmstat.pgalloc_normal
763906 ± 2% -7.5% 706781 proc-vmstat.pgfault
40701242 ± 10% -34.6% 26637947 proc-vmstat.pgfree
1636058 ± 11% -45.0% 899629 ± 11% turbostat.C1
0.10 ± 7% -0.0 0.07 ± 7% turbostat.C1%
15201 ± 25% +113.3% 32422 ± 8% turbostat.C1E
0.01 ± 57% +0.0 0.03 ± 13% turbostat.C1E%
9093 ± 11% +217.2% 28841 ± 11% turbostat.C3
0.00 ±173% +0.0 0.03 ± 13% turbostat.C3%
16977 ± 2% -92.8% 1222 ± 98% turbostat.C8
0.17 ± 4% -0.2 0.01 ±110% turbostat.C8%
0.13 -100.0% 0.00 turbostat.CPU%c7
9623106 ± 5% -35.1% 6241651 ± 8% cpuidle.C1.time
1636668 ± 11% -45.0% 899926 ± 11% cpuidle.C1.usage
899494 ± 33% +272.6% 3351412 ± 5% cpuidle.C1E.time
15327 ± 25% +112.6% 32592 ± 8% cpuidle.C1E.usage
435674 ± 20% +643.5% 3239351 ± 14% cpuidle.C3.time
9179 ± 11% +215.2% 28933 ± 11% cpuidle.C3.usage
16850924 ± 2% -94.2% 979516 ± 86% cpuidle.C8.time
17150 ± 2% -92.0% 1378 ± 88% cpuidle.C8.usage
6022328 ± 18% -57.1% 2584773 ± 7% cpuidle.POLL.time
5339703 ± 19% -60.1% 2130089 ± 7% cpuidle.POLL.usage
14.06 ± 25% +64.0% 23.07 ± 13% sched_debug.cfs_rq:/.runnable_load_avg.min
25.10 ± 5% -15.8% 21.12 ± 7% sched_debug.cfs_rq:/.runnable_load_avg.stddev
13339 ± 24% +45.7% 19433 ± 7% sched_debug.cfs_rq:/.runnable_weight.min
166.51 ± 24% +48.9% 247.86 ± 7% sched_debug.cfs_rq:/.util_est_enqueued.min
253.56 ± 2% -12.3% 222.30 ± 9% sched_debug.cfs_rq:/.util_est_enqueued.stddev
159602 ± 23% +55.6% 248411 ± 6% sched_debug.cpu.avg_idle.max
51567 ± 24% +49.1% 76905 ± 11% sched_debug.cpu.avg_idle.stddev
11576 ± 5% -9.5% 10478 sched_debug.cpu.curr->pid.avg
9976 ± 8% -16.8% 8305 ± 9% sched_debug.cpu.curr->pid.min
4.81 ± 15% +43.3% 6.89 ± 8% sched_debug.cpu.nr_running.min
6.68 ± 4% -17.0% 5.54 ± 8% sched_debug.cpu.nr_running.stddev
76003928 ± 2% +21.0% 91975904 sched_debug.cpu.nr_switches.avg
78115404 ± 2% +23.4% 96397551 sched_debug.cpu.nr_switches.max
74275755 ± 2% +19.0% 88359903 sched_debug.cpu.nr_switches.min
1043054 ± 15% +83.9% 1917752 ± 10% sched_debug.cpu.nr_switches.stddev
-685.26 +64.8% -1129 sched_debug.cpu.nr_uninterruptible.min
15470 ± 9% -31.0% 10674 ± 9% softirqs.CPU0.SCHED
13756 ± 10% -40.8% 8140 ± 4% softirqs.CPU1.SCHED
13645 ± 11% -39.7% 8227 ± 3% softirqs.CPU10.SCHED
14065 ± 12% -38.8% 8609 ± 2% softirqs.CPU11.SCHED
13948 ± 11% -39.9% 8381 ± 6% softirqs.CPU12.SCHED
13823 ± 12% -39.7% 8332 ± 9% softirqs.CPU13.SCHED
73323 +13.4% 83165 ± 11% softirqs.CPU14.RCU
13872 ± 12% -36.5% 8809 ± 7% softirqs.CPU14.SCHED
13726 ± 12% -40.4% 8175 ± 4% softirqs.CPU15.SCHED
13780 ± 11% -34.8% 8987 ± 12% softirqs.CPU2.SCHED
13895 ± 10% -40.5% 8266 ± 6% softirqs.CPU3.SCHED
221879 +13.4% 251589 ± 14% softirqs.CPU3.TIMER
13873 ± 10% -40.4% 8262 ± 6% softirqs.CPU4.SCHED
72625 +12.0% 81355 ± 10% softirqs.CPU5.RCU
13876 ± 13% -41.4% 8137 ± 3% softirqs.CPU5.SCHED
14118 ± 12% -40.8% 8354 ± 3% softirqs.CPU6.SCHED
13503 ± 12% -40.0% 8099 softirqs.CPU7.SCHED
13549 ± 11% -38.2% 8374 ± 8% softirqs.CPU8.SCHED
14273 ± 10% -40.1% 8548 ± 6% softirqs.CPU9.SCHED
223184 ± 11% -38.9% 136383 ± 5% softirqs.SCHED
55.80 +9.9% 61.34 perf-stat.i.MPKI
1.32 +0.1 1.39 perf-stat.i.branch-miss-rate%
1.066e+08 +3.7% 1.105e+08 perf-stat.i.branch-misses
0.15 ± 8% -0.0 0.11 ± 3% perf-stat.i.cache-miss-rate%
2521584 ± 9% -26.5% 1854353 ± 3% perf-stat.i.cache-misses
2.27e+09 +8.0% 2.451e+09 perf-stat.i.cache-references
4162783 ± 4% +19.7% 4981708 perf-stat.i.context-switches
1.52 +2.0% 1.55 perf-stat.i.cpi
227633 ± 14% +75.1% 398694 ± 3% perf-stat.i.cpu-migrations
33931 ± 12% +42.7% 48428 ± 2% perf-stat.i.cycles-between-cache-misses
0.01 -0.0 0.01 ± 4% perf-stat.i.dTLB-load-miss-rate%
1408442 ± 2% -16.4% 1177727 ± 4% perf-stat.i.dTLB-load-misses
1.229e+10 -2.1% 1.203e+10 perf-stat.i.dTLB-loads
0.00 ± 2% -0.0 0.00 ± 4% perf-stat.i.dTLB-store-miss-rate%
35757 ± 2% -16.0% 30032 ± 3% perf-stat.i.dTLB-store-misses
7.449e+09 -2.3% 7.279e+09 perf-stat.i.dTLB-stores
39246970 -5.5% 37079604 perf-stat.i.iTLB-load-misses
76813 ± 8% -26.2% 56724 ± 6% perf-stat.i.iTLB-loads
4.096e+10 -1.9% 4.02e+10 perf-stat.i.instructions
1062 +4.1% 1105 perf-stat.i.instructions-per-iTLB-miss
0.66 -1.9% 0.65 perf-stat.i.ipc
1210 -7.1% 1124 perf-stat.i.minor-faults
105889 ± 10% -24.8% 79596 ± 5% perf-stat.i.node-loads
174508 ± 7% -24.4% 131915 perf-stat.i.node-stores
1210 -7.1% 1124 perf-stat.i.page-faults
55.42 ± 2% +10.0% 60.98 perf-stat.overall.MPKI
1.31 +0.1 1.38 perf-stat.overall.branch-miss-rate%
0.11 ± 11% -0.0 0.08 ± 3% perf-stat.overall.cache-miss-rate%
1.51 +1.9% 1.54 perf-stat.overall.cpi
24740 ± 9% +34.9% 33377 ± 3% perf-stat.overall.cycles-between-cache-misses
0.01 -0.0 0.01 ± 4% perf-stat.overall.dTLB-load-miss-rate%
0.00 ± 2% -0.0 0.00 ± 4% perf-stat.overall.dTLB-store-miss-rate%
1043 +3.9% 1084 perf-stat.overall.instructions-per-iTLB-miss
0.66 -1.9% 0.65 perf-stat.overall.ipc
57826 ± 3% +15.6% 66874 perf-stat.overall.path-length
1.064e+08 +3.7% 1.104e+08 perf-stat.ps.branch-misses
2517524 ± 9% -26.5% 1851335 ± 3% perf-stat.ps.cache-misses
2.266e+09 +8.0% 2.447e+09 perf-stat.ps.cache-references
4155898 ± 4% +19.7% 4973458 perf-stat.ps.context-switches
227258 ± 14% +75.1% 398041 ± 3% perf-stat.ps.cpu-migrations
1406119 ± 2% -16.4% 1175777 ± 4% perf-stat.ps.dTLB-load-misses
1.227e+10 -2.1% 1.201e+10 perf-stat.ps.dTLB-loads
35699 ± 2% -16.0% 29983 ± 3% perf-stat.ps.dTLB-store-misses
7.437e+09 -2.3% 7.267e+09 perf-stat.ps.dTLB-stores
39182154 -5.5% 37018206 perf-stat.ps.iTLB-load-misses
76695 ± 8% -26.2% 56638 ± 6% perf-stat.ps.iTLB-loads
4.09e+10 -1.9% 4.013e+10 perf-stat.ps.instructions
1208 -7.1% 1122 perf-stat.ps.minor-faults
105716 ± 10% -24.8% 79465 ± 5% perf-stat.ps.node-loads
174234 ± 7% -24.4% 131703 perf-stat.ps.node-stores
1208 -7.1% 1122 perf-stat.ps.page-faults
29.39 ±104% -29.4 0.00 perf-profile.calltrace.cycles-pp.start_thread
15.86 ±105% -15.9 0.00 perf-profile.calltrace.cycles-pp.__GI___libc_write.start_thread
14.88 ±105% -14.9 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__GI___libc_write.start_thread
14.75 ±105% -14.8 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_write.start_thread
13.53 ±104% -13.5 0.00 perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_write.start_thread
12.51 ±104% -12.5 0.00 perf-profile.calltrace.cycles-pp.__GI___libc_read.start_thread
12.45 ±105% -12.4 0.00 perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_write
11.21 ±105% -11.2 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__GI___libc_read.start_thread
11.08 ±105% -11.1 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_read.start_thread
10.47 ±105% -10.5 0.00 perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_read.start_thread
9.73 ±105% -9.7 0.00 perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_read
3.09 ± 12% -0.6 2.50 ± 2% perf-profile.calltrace.cycles-pp.copy_page_to_iter.pipe_read.new_sync_read.vfs_read.ksys_read
2.45 ± 14% -0.5 1.95 ± 2% perf-profile.calltrace.cycles-pp.copy_page_from_iter.pipe_write.new_sync_write.vfs_write.ksys_write
1.15 ± 13% -0.2 0.95 ± 4% perf-profile.calltrace.cycles-pp.copyin.copy_page_from_iter.pipe_write.new_sync_write.vfs_write
0.99 ± 13% -0.2 0.83 ± 5% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyin.copy_page_from_iter.pipe_write.new_sync_write
0.92 ± 15% +0.2 1.10 perf-profile.calltrace.cycles-pp.__fget_light.__fdget_pos.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.84 ± 23% +0.3 1.10 ± 2% perf-profile.calltrace.cycles-pp.set_next_entity.pick_next_task_fair.__schedule.schedule.pipe_wait
0.99 ± 19% +0.3 1.26 ± 3% perf-profile.calltrace.cycles-pp.update_curr.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate
0.97 ± 23% +0.3 1.26 perf-profile.calltrace.cycles-pp.__enqueue_entity.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate
0.29 ±100% +0.3 0.60 ± 5% perf-profile.calltrace.cycles-pp.reschedule_interrupt.__lock_text_start.__wake_up_common_lock.pipe_write.new_sync_write
0.55 ± 62% +0.3 0.89 ± 7% perf-profile.calltrace.cycles-pp.update_curr.reweight_entity.dequeue_task_fair.__schedule.schedule
0.28 ±100% +0.4 0.63 ± 7% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.__schedule.schedule.exit_to_usermode_loop
0.33 ±100% +0.4 0.69 ± 2% perf-profile.calltrace.cycles-pp.prepare_to_wait.pipe_wait.pipe_read.new_sync_read.vfs_read
0.29 ±100% +0.4 0.65 ± 4% perf-profile.calltrace.cycles-pp.___perf_sw_event.__schedule.schedule.pipe_wait.pipe_read
1.00 ± 27% +0.4 1.37 ± 2% perf-profile.calltrace.cycles-pp.check_preempt_wakeup.check_preempt_curr.ttwu_do_wakeup.try_to_wake_up.autoremove_wake_function
0.57 ± 63% +0.4 0.95 ± 5% perf-profile.calltrace.cycles-pp.update_curr.reweight_entity.enqueue_task_fair.activate_task.ttwu_do_activate
1.46 ± 31% +0.4 1.86 ± 2% perf-profile.calltrace.cycles-pp.__fdget_pos.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.72 ± 60% +0.4 1.14 ± 2% perf-profile.calltrace.cycles-pp.__fdget_pos.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.29 ±100% +0.4 0.70 ± 6% perf-profile.calltrace.cycles-pp.pick_next_entity.pick_next_task_fair.__schedule.schedule.pipe_wait
0.34 ±100% +0.4 0.77 ± 7% perf-profile.calltrace.cycles-pp.__update_load_avg_cfs_rq.update_load_avg.enqueue_entity.enqueue_task_fair.activate_task
1.26 ± 26% +0.5 1.72 perf-profile.calltrace.cycles-pp.check_preempt_curr.ttwu_do_wakeup.try_to_wake_up.autoremove_wake_function.__wake_up_common
1.13 ± 26% +0.5 1.59 perf-profile.calltrace.cycles-pp.update_load_avg.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate
0.35 ±100% +0.5 0.83 ± 8% perf-profile.calltrace.cycles-pp._raw_spin_lock.__schedule.schedule.exit_to_usermode_loop.do_syscall_64
1.28 ± 28% +0.5 1.77 ± 3% perf-profile.calltrace.cycles-pp.reweight_entity.dequeue_task_fair.__schedule.schedule.pipe_wait
1.40 ± 26% +0.5 1.89 perf-profile.calltrace.cycles-pp.ttwu_do_wakeup.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
0.44 ±100% +0.5 0.95 ± 7% perf-profile.calltrace.cycles-pp.put_prev_entity.pick_next_task_fair.__schedule.schedule.exit_to_usermode_loop
2.33 ± 17% +0.5 2.88 ± 2% perf-profile.calltrace.cycles-pp.pick_next_task_fair.__schedule.schedule.pipe_wait.pipe_read
1.33 ± 28% +0.6 1.89 ± 2% perf-profile.calltrace.cycles-pp.reweight_entity.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up
1.40 ± 24% +0.6 1.97 ± 4% perf-profile.calltrace.cycles-pp.available_idle_cpu.select_idle_sibling.select_task_rq_fair.try_to_wake_up.autoremove_wake_function
0.83 ± 66% +0.6 1.40 ± 5% perf-profile.calltrace.cycles-pp.native_write_msr
5.01 ± 8% +0.8 5.81 ± 2% perf-profile.calltrace.cycles-pp.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up
1.75 ± 39% +0.9 2.61 ± 6% perf-profile.calltrace.cycles-pp.pick_next_task_fair.__schedule.schedule.exit_to_usermode_loop.do_syscall_64
1.33 ± 67% +1.0 2.28 ± 6% perf-profile.calltrace.cycles-pp.switch_fpu_return.do_syscall_64.entry_SYSCALL_64_after_hwframe
6.56 ± 12% +1.2 7.78 perf-profile.calltrace.cycles-pp.dequeue_task_fair.__schedule.schedule.pipe_wait.pipe_read
8.96 ± 9% +1.6 10.56 perf-profile.calltrace.cycles-pp.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up.autoremove_wake_function
9.23 ± 10% +1.6 10.87 perf-profile.calltrace.cycles-pp.ttwu_do_activate.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
9.17 ± 10% +1.6 10.81 perf-profile.calltrace.cycles-pp.activate_task.ttwu_do_activate.try_to_wake_up.autoremove_wake_function.__wake_up_common
3.72 ± 31% +1.7 5.44 ± 6% perf-profile.calltrace.cycles-pp.__schedule.schedule.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.37 ± 8% +1.9 5.32 ± 4% perf-profile.calltrace.cycles-pp.select_idle_sibling.select_task_rq_fair.try_to_wake_up.autoremove_wake_function.__wake_up_common
4.37 ± 10% +2.3 6.65 ± 5% perf-profile.calltrace.cycles-pp.select_task_rq_fair.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
15.27 ± 10% +2.3 17.59 perf-profile.calltrace.cycles-pp.__schedule.schedule.pipe_wait.pipe_read.new_sync_read
15.59 ± 10% +2.4 17.96 perf-profile.calltrace.cycles-pp.schedule.pipe_wait.pipe_read.new_sync_read.vfs_read
3.00 ± 71% +2.6 5.60 ± 6% perf-profile.calltrace.cycles-pp.schedule.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.10 ± 71% +2.7 5.77 ± 6% perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
18.05 ± 10% +2.7 20.73 perf-profile.calltrace.cycles-pp.pipe_wait.pipe_read.new_sync_read.vfs_read.ksys_read
28.21 ± 5% +3.7 31.87 perf-profile.calltrace.cycles-pp.__wake_up_common_lock.pipe_write.new_sync_write.vfs_write.ksys_write
24.29 ± 8% +4.7 28.98 perf-profile.calltrace.cycles-pp.__wake_up_common.__wake_up_common_lock.pipe_write.new_sync_write.vfs_write
23.38 ± 8% +4.7 28.08 perf-profile.calltrace.cycles-pp.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.pipe_write
23.72 ± 8% +4.7 28.43 perf-profile.calltrace.cycles-pp.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.pipe_write.new_sync_write
31.18 ± 41% +12.7 43.86 perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
33.44 ± 40% +13.2 46.61 perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
64.81 ± 43% +27.5 92.34 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
65.30 ± 43% +27.7 92.99 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
29.39 ±104% -29.4 0.00 perf-profile.children.cycles-pp.start_thread
16.09 ±105% -16.1 0.00 perf-profile.children.cycles-pp.__GI___libc_write
12.73 ±104% -12.7 0.00 perf-profile.children.cycles-pp.__GI___libc_read
5.80 ± 10% -1.2 4.65 ± 3% perf-profile.children.cycles-pp.security_file_permission
4.10 ± 16% -1.0 3.10 ± 6% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
3.21 ± 9% -0.7 2.55 ± 3% perf-profile.children.cycles-pp.copy_page_to_iter
1.45 ± 32% -0.6 0.83 perf-profile.children.cycles-pp.entry_SYSCALL_64
3.04 ± 8% -0.6 2.44 ± 3% perf-profile.children.cycles-pp.selinux_file_permission
2.56 ± 11% -0.6 1.99 ± 2% perf-profile.children.cycles-pp.copy_page_from_iter
2.80 ± 9% -0.5 2.25 ± 2% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
3.38 ± 7% -0.5 2.87 ± 2% perf-profile.children.cycles-pp.mutex_lock
1.62 ± 12% -0.4 1.23 ± 3% perf-profile.children.cycles-pp.___might_sleep
1.82 ± 9% -0.4 1.43 ± 4% perf-profile.children.cycles-pp.syscall_return_via_sysret
2.09 ± 12% -0.4 1.72 ± 6% perf-profile.children.cycles-pp.file_has_perm
0.90 ± 20% -0.3 0.57 ± 9% perf-profile.children.cycles-pp.__mutex_lock
1.14 ± 12% -0.3 0.82 ± 3% perf-profile.children.cycles-pp.__inode_security_revalidate
1.26 ± 10% -0.3 0.97 ± 4% perf-profile.children.cycles-pp.copyin
1.70 ± 8% -0.3 1.42 ± 3% perf-profile.children.cycles-pp.copyout
1.25 ± 11% -0.3 0.98 ± 3% perf-profile.children.cycles-pp.fsnotify
1.04 ± 9% -0.3 0.79 ± 3% perf-profile.children.cycles-pp._cond_resched
1.72 ± 7% -0.2 1.48 ± 5% perf-profile.children.cycles-pp.fput_many
1.16 ± 10% -0.2 0.92 ± 4% perf-profile.children.cycles-pp.__might_sleep
1.09 ± 8% -0.2 0.85 ± 3% perf-profile.children.cycles-pp.__fsnotify_parent
0.83 ± 13% -0.2 0.63 ± 3% perf-profile.children.cycles-pp.__might_fault
0.69 ± 11% -0.1 0.56 perf-profile.children.cycles-pp.current_time
0.99 ± 4% -0.1 0.89 ± 4% perf-profile.children.cycles-pp.touch_atime
0.65 ± 6% -0.1 0.55 ± 6% perf-profile.children.cycles-pp.atime_needs_update
0.39 ± 4% -0.1 0.29 perf-profile.children.cycles-pp.rcu_all_qs
0.13 ± 39% -0.1 0.04 ± 59% perf-profile.children.cycles-pp.mutex_spin_on_owner
0.34 ± 11% -0.1 0.26 ± 9% perf-profile.children.cycles-pp.preempt_schedule_common
0.49 ± 5% -0.1 0.41 ± 2% perf-profile.children.cycles-pp.wake_up_q
0.24 ± 2% -0.1 0.17 ± 4% perf-profile.children.cycles-pp.fpregs_assert_state_consistent
0.30 ± 11% -0.1 0.23 ± 3% perf-profile.children.cycles-pp.inode_has_perm
0.23 ± 8% -0.1 0.18 ± 6% perf-profile.children.cycles-pp.__sb_end_write
0.21 ± 9% -0.1 0.16 ± 9% perf-profile.children.cycles-pp.__x64_sys_read
0.29 ± 8% -0.0 0.24 ± 2% perf-profile.children.cycles-pp.timespec64_trunc
0.21 ± 8% -0.0 0.17 ± 8% perf-profile.children.cycles-pp.__x64_sys_write
0.22 ± 7% -0.0 0.19 ± 2% perf-profile.children.cycles-pp.__x86_indirect_thunk_rax
0.08 ± 16% -0.0 0.06 ± 7% perf-profile.children.cycles-pp.bpf_fd_pass
0.10 ± 4% +0.0 0.11 ± 3% perf-profile.children.cycles-pp.perf_exclude_event
0.08 ± 24% +0.0 0.13 ± 5% perf-profile.children.cycles-pp.rcu_note_context_switch
0.17 ± 19% +0.1 0.24 ± 8% perf-profile.children.cycles-pp.wakeup_preempt_entity
0.23 ± 19% +0.1 0.32 ± 3% perf-profile.children.cycles-pp.resched_curr
0.61 ± 8% +0.1 0.72 ± 6% perf-profile.children.cycles-pp.sched_clock
0.67 ± 8% +0.1 0.78 ± 6% perf-profile.children.cycles-pp.sched_clock_cpu
0.50 ± 9% +0.1 0.62 ± 3% perf-profile.children.cycles-pp.account_entity_enqueue
0.61 ± 10% +0.2 0.79 perf-profile.children.cycles-pp.account_entity_dequeue
0.43 ± 19% +0.2 0.62 ± 7% perf-profile.children.cycles-pp.clear_buddies
1.42 ± 14% +0.2 1.66 ± 4% perf-profile.children.cycles-pp.native_write_msr
1.17 ± 13% +0.3 1.42 ± 2% perf-profile.children.cycles-pp.check_preempt_wakeup
0.85 ± 23% +0.3 1.12 ± 7% perf-profile.children.cycles-pp.put_prev_entity
0.94 ± 14% +0.3 1.22 ± 5% perf-profile.children.cycles-pp.___perf_sw_event
1.45 ± 11% +0.3 1.73 perf-profile.children.cycles-pp.check_preempt_curr
1.45 ± 12% +0.3 1.76 perf-profile.children.cycles-pp.__enqueue_entity
1.60 ± 10% +0.3 1.91 perf-profile.children.cycles-pp.ttwu_do_wakeup
1.12 ± 9% +0.3 1.43 ± 3% perf-profile.children.cycles-pp.__update_load_avg_se
1.64 ± 8% +0.4 2.00 ± 4% perf-profile.children.cycles-pp.available_idle_cpu
0.96 ± 19% +0.4 1.34 ± 7% perf-profile.children.cycles-pp.pick_next_entity
1.85 ± 16% +0.4 2.28 ± 6% perf-profile.children.cycles-pp.switch_fpu_return
1.90 ± 9% +0.6 2.50 ± 3% perf-profile.children.cycles-pp.__update_load_avg_cfs_rq
3.22 ± 13% +0.6 3.87 perf-profile.children.cycles-pp.update_load_avg
4.03 ± 10% +0.6 4.68 ± 2% perf-profile.children.cycles-pp.update_curr
3.01 ± 14% +0.7 3.72 ± 2% perf-profile.children.cycles-pp.reweight_entity
5.10 ± 8% +0.8 5.87 ± 2% perf-profile.children.cycles-pp.enqueue_entity
4.71 ± 15% +1.1 5.77 ± 4% perf-profile.children.cycles-pp.pick_next_task_fair
6.73 ± 11% +1.1 7.87 perf-profile.children.cycles-pp.dequeue_task_fair
9.07 ± 9% +1.5 10.61 perf-profile.children.cycles-pp.enqueue_task_fair
9.32 ± 9% +1.6 10.89 perf-profile.children.cycles-pp.ttwu_do_activate
91.46 +1.6 93.03 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
9.27 ± 9% +1.6 10.84 perf-profile.children.cycles-pp.activate_task
4.16 ± 26% +1.6 5.78 ± 6% perf-profile.children.cycles-pp.exit_to_usermode_loop
90.79 +1.7 92.47 perf-profile.children.cycles-pp.do_syscall_64
3.48 ± 8% +1.9 5.36 ± 4% perf-profile.children.cycles-pp.select_idle_sibling
4.44 ± 10% +2.2 6.67 ± 5% perf-profile.children.cycles-pp.select_task_rq_fair
18.46 ± 9% +2.3 20.81 perf-profile.children.cycles-pp.pipe_wait
19.93 ± 11% +3.5 23.46 ± 2% perf-profile.children.cycles-pp.__schedule
28.82 ± 5% +3.6 32.45 perf-profile.children.cycles-pp.__wake_up_common_lock
19.89 ± 12% +3.7 23.62 ± 2% perf-profile.children.cycles-pp.schedule
23.96 ± 8% +4.5 28.45 perf-profile.children.cycles-pp.try_to_wake_up
24.53 ± 8% +4.5 29.06 perf-profile.children.cycles-pp.__wake_up_common
23.91 ± 8% +4.6 28.46 perf-profile.children.cycles-pp.autoremove_wake_function
2.76 ± 9% -0.5 2.23 ± 2% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
1.46 ± 18% -0.4 1.06 ± 4% perf-profile.self.cycles-pp.pipe_write
1.81 ± 9% -0.4 1.43 ± 4% perf-profile.self.cycles-pp.syscall_return_via_sysret
1.56 ± 12% -0.4 1.19 ± 3% perf-profile.self.cycles-pp.___might_sleep
2.09 ± 7% -0.3 1.81 ± 2% perf-profile.self.cycles-pp.mutex_lock
1.86 ± 8% -0.3 1.59 ± 3% perf-profile.self.cycles-pp.selinux_file_permission
1.21 ± 11% -0.3 0.96 ± 2% perf-profile.self.cycles-pp.fsnotify
1.70 ± 7% -0.2 1.46 ± 5% perf-profile.self.cycles-pp.fput_many
1.04 ± 11% -0.2 0.82 ± 3% perf-profile.self.cycles-pp.__might_sleep
1.01 ± 8% -0.2 0.80 ± 3% perf-profile.self.cycles-pp.__fsnotify_parent
1.04 ± 10% -0.2 0.83 perf-profile.self.cycles-pp.entry_SYSCALL_64
0.67 ± 10% -0.2 0.51 ± 5% perf-profile.self.cycles-pp.new_sync_write
0.64 ± 12% -0.2 0.48 ± 5% perf-profile.self.cycles-pp.vfs_write
1.67 -0.2 1.51 ± 2% perf-profile.self.cycles-pp.pipe_read
0.58 ± 17% -0.1 0.44 ± 7% perf-profile.self.cycles-pp.file_has_perm
0.44 ± 18% -0.1 0.31 ± 7% perf-profile.self.cycles-pp.__mutex_lock
0.49 ± 13% -0.1 0.35 ± 3% perf-profile.self.cycles-pp.copy_page_from_iter
0.64 ± 8% -0.1 0.52 ± 4% perf-profile.self.cycles-pp.copy_page_to_iter
0.67 ± 7% -0.1 0.55 ± 3% perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.30 ± 5% -0.1 0.20 ± 4% perf-profile.self.cycles-pp.ksys_write
0.40 ± 12% -0.1 0.30 ± 7% perf-profile.self.cycles-pp.security_file_permission
0.13 ± 38% -0.1 0.04 ± 59% perf-profile.self.cycles-pp.mutex_spin_on_owner
0.36 ± 11% -0.1 0.28 ± 3% perf-profile.self.cycles-pp.__inode_security_revalidate
0.76 ± 5% -0.1 0.68 ± 5% perf-profile.self.cycles-pp.do_syscall_64
0.21 ± 2% -0.1 0.14 ± 8% perf-profile.self.cycles-pp.fpregs_assert_state_consistent
0.26 ± 12% -0.1 0.20 ± 4% perf-profile.self.cycles-pp.inode_has_perm
0.29 ± 11% -0.1 0.23 ± 6% perf-profile.self.cycles-pp.current_time
0.28 ± 4% -0.1 0.21 ± 2% perf-profile.self.cycles-pp.rcu_all_qs
0.22 ± 7% -0.1 0.17 ± 7% perf-profile.self.cycles-pp.__sb_end_write
0.31 -0.1 0.26 ± 7% perf-profile.self.cycles-pp.ksys_read
0.20 ± 11% -0.0 0.16 ± 9% perf-profile.self.cycles-pp.__x64_sys_read
0.26 ± 8% -0.0 0.22 perf-profile.self.cycles-pp.timespec64_trunc
0.20 ± 5% -0.0 0.16 ± 9% perf-profile.self.cycles-pp.__x64_sys_write
0.24 ± 9% -0.0 0.21 ± 6% perf-profile.self.cycles-pp.__might_fault
0.09 ± 9% -0.0 0.06 perf-profile.self.cycles-pp.copyin
0.14 ± 5% -0.0 0.12 ± 3% perf-profile.self.cycles-pp.__x86_indirect_thunk_rax
0.07 ± 10% +0.0 0.09 ± 4% perf-profile.self.cycles-pp.sched_clock
0.16 ± 5% +0.0 0.18 ± 2% perf-profile.self.cycles-pp.ttwu_do_wakeup
0.20 ± 12% +0.0 0.24 ± 5% perf-profile.self.cycles-pp.__list_add_valid
0.10 ± 26% +0.0 0.14 ± 10% perf-profile.self.cycles-pp.update_cfs_rq_h_load
0.15 ± 16% +0.1 0.21 ± 11% perf-profile.self.cycles-pp.wakeup_preempt_entity
0.35 ± 7% +0.1 0.43 ± 4% perf-profile.self.cycles-pp.prepare_to_wait
0.55 ± 2% +0.1 0.63 ± 3% perf-profile.self.cycles-pp.__fget_light
0.22 ± 18% +0.1 0.30 ± 3% perf-profile.self.cycles-pp.resched_curr
0.48 ± 11% +0.1 0.58 ± 3% perf-profile.self.cycles-pp.check_preempt_wakeup
0.39 ± 11% +0.1 0.50 perf-profile.self.cycles-pp.account_entity_enqueue
0.85 ± 7% +0.1 0.97 ± 4% perf-profile.self.cycles-pp.enqueue_entity
0.43 ± 16% +0.1 0.55 ± 7% perf-profile.self.cycles-pp.pick_next_entity
0.86 ± 10% +0.1 0.99 perf-profile.self.cycles-pp.dequeue_task_fair
0.37 ± 21% +0.2 0.55 ± 6% perf-profile.self.cycles-pp.clear_buddies
0.45 ± 14% +0.2 0.65 perf-profile.self.cycles-pp.account_entity_dequeue
1.41 ± 14% +0.2 1.65 ± 4% perf-profile.self.cycles-pp.native_write_msr
0.80 ± 16% +0.3 1.06 ± 4% perf-profile.self.cycles-pp.___perf_sw_event
1.09 ± 9% +0.3 1.39 ± 2% perf-profile.self.cycles-pp.__update_load_avg_se
0.84 ± 16% +0.3 1.14 ± 7% perf-profile.self.cycles-pp.select_task_rq_fair
1.45 ± 12% +0.3 1.75 perf-profile.self.cycles-pp.__enqueue_entity
1.62 ± 8% +0.4 1.99 ± 4% perf-profile.self.cycles-pp.available_idle_cpu
1.83 ± 16% +0.4 2.25 ± 6% perf-profile.self.cycles-pp.switch_fpu_return
2.15 ± 11% +0.5 2.60 ± 5% perf-profile.self.cycles-pp.update_curr
1.88 ± 9% +0.6 2.45 ± 3% perf-profile.self.cycles-pp.__update_load_avg_cfs_rq
1.21 ± 9% +0.7 1.92 ± 3% perf-profile.self.cycles-pp._raw_spin_lock
0.80 ± 11% +1.4 2.17 ± 3% perf-profile.self.cycles-pp.select_idle_sibling
hackbench.throughput
74000 +-+-----------------------------------------------------------------+
72000 +-+.+ .+ |
|.+ : .+. .+. + |
70000 +-+ : + + +. : |
68000 +-+ + + : |
| :: |
66000 +-+ + |
64000 +-+ |
62000 +-+ |
| |
60000 +-+ O O OO O O O O O O O OO |
58000 +-+ O O O O O O O O O O O
| O O OO O O |
56000 O-+ O O O O O O |
54000 +-O-----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
***************************************************************************************************
lkp-hsw-ep4: 72 threads Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz with 256G memory
=========================================================================================
compiler/cpufreq_governor/ipc/iterations/kconfig/mode/nr_threads/rootfs/tbox_group/testcase/ucode:
gcc-7/performance/pipe/18/x86_64-rhel-7.6/threads/1600%/debian-x86_64-2019-11-14.cgz/lkp-hsw-ep4/hackbench/0x43
commit:
43e9f7f231 ("sched/fair: Start tracking SCHED_IDLE tasks count in cfs_rq")
3c29e651e1 ("sched/fair: Fall back to sched-idle CPU if idle CPU isn't found")
43e9f7f231e40e45 3c29e651e16dd3b3179cfb2d055
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:4 25% 1:4 dmesg.WARNING:at#for_ip_interrupt_entry/0x
%stddev %change %stddev
\ | \
176075 ± 2% -11.3% 156152 hackbench.throughput
891.79 +12.3% 1001 hackbench.time.elapsed_time
891.79 +12.3% 1001 hackbench.time.elapsed_time.max
1.889e+09 ± 9% +33.1% 2.514e+09 ± 3% hackbench.time.involuntary_context_switches
42939 ± 3% +17.0% 50220 hackbench.time.system_time
18978 +3.2% 19592 hackbench.time.user_time
2.558e+09 ± 8% +33.5% 3.415e+09 ± 2% hackbench.time.voluntary_context_switches
43133704 ± 3% +39.2% 60035429 ± 33% cpuidle.C1.time
48661 +17.7% 57274 ± 5% meminfo.Shmem
19043 ± 2% -14.0% 16377 meminfo.max_used_kB
2502 +1.3% 2535 turbostat.Avg_MHz
3.368e+08 ± 5% +30.6% 4.398e+08 turbostat.IRQ
66.25 +4.5% 69.25 vmstat.cpu.sy
29.25 ± 2% -8.5% 26.75 vmstat.cpu.us
5000443 ± 7% +18.6% 5931772 vmstat.system.cs
370173 ± 3% +17.1% 433616 vmstat.system.in
98786923 ± 7% +74.5% 1.724e+08 numa-numastat.node0.local_node
98791606 ± 7% +74.5% 1.724e+08 numa-numastat.node0.numa_hit
4686 ± 99% +297.8% 18642 ± 24% numa-numastat.node0.other_node
1.818e+08 -51.4% 88447082 ± 3% numa-numastat.node1.local_node
1.819e+08 -51.4% 88451833 ± 3% numa-numastat.node1.numa_hit
18694 ± 25% -74.6% 4753 ± 97% numa-numastat.node1.other_node
155316 +1.5% 157632 proc-vmstat.nr_active_anon
99.50 +5.3% 104.75 ± 3% proc-vmstat.nr_dirtied
4440 +4.9% 4656 proc-vmstat.nr_inactive_anon
6283 +1.9% 6399 proc-vmstat.nr_mapped
12248 ± 2% +16.6% 14276 ± 5% proc-vmstat.nr_shmem
98.75 +5.3% 104.00 proc-vmstat.nr_written
155316 +1.5% 157632 proc-vmstat.nr_zone_active_anon
4440 +4.9% 4656 proc-vmstat.nr_zone_inactive_anon
2.807e+08 -7.1% 2.609e+08 proc-vmstat.numa_hit
2.807e+08 -7.1% 2.609e+08 proc-vmstat.numa_local
2.83e+08 -7.0% 2.632e+08 proc-vmstat.pgalloc_normal
3212744 +2.3% 3286078 proc-vmstat.pgfault
2.828e+08 -7.0% 2.631e+08 proc-vmstat.pgfree
39216 ± 35% +153.3% 99342 ± 4% numa-vmstat.node0.nr_active_anon
31864 ± 44% +211.0% 99111 ± 4% numa-vmstat.node0.nr_anon_pages
11183 ± 3% +4212.8% 482321 ± 2% numa-vmstat.node0.nr_kernel_stack
499.50 ± 23% +66.9% 833.75 ± 8% numa-vmstat.node0.nr_page_table_pages
11417 -63.5% 4168 ± 5% numa-vmstat.node0.nr_shmem
15905 ± 4% -12.5% 13919 ± 2% numa-vmstat.node0.nr_slab_reclaimable
17996 ± 3% +587.4% 123708 ± 2% numa-vmstat.node0.nr_slab_unreclaimable
39216 ± 35% +153.3% 99342 ± 4% numa-vmstat.node0.nr_zone_active_anon
52471566 ± 6% +73.3% 90915199 numa-vmstat.node0.numa_hit
52466577 ± 6% +73.2% 90896366 numa-vmstat.node0.numa_local
4990 ± 92% +277.5% 18836 ± 24% numa-vmstat.node0.numa_other
114712 ± 12% -50.1% 57234 ± 7% numa-vmstat.node1.nr_active_anon
114545 ± 12% -58.2% 47874 ± 9% numa-vmstat.node1.nr_anon_pages
462075 -96.6% 15717 ± 67% numa-vmstat.node1.nr_kernel_stack
871.50 ± 13% -37.9% 541.00 ± 13% numa-vmstat.node1.nr_page_table_pages
691.25 ± 22% +1359.2% 10086 ± 8% numa-vmstat.node1.nr_shmem
11357 ± 5% +18.1% 13414 ± 4% numa-vmstat.node1.nr_slab_reclaimable
118594 -85.5% 17243 ± 16% numa-vmstat.node1.nr_slab_unreclaimable
114712 ± 12% -50.1% 57234 ± 7% numa-vmstat.node1.nr_zone_active_anon
93308245 ± 2% -49.7% 46921467 ± 4% numa-vmstat.node1.numa_hit
93128224 ± 2% -49.8% 46755145 ± 4% numa-vmstat.node1.numa_local
157762 ± 35% +154.4% 401333 ± 4% numa-meminfo.node0.Active
157721 ± 35% +154.4% 401249 ± 4% numa-meminfo.node0.Active(anon)
128232 ± 44% +212.2% 400337 ± 4% numa-meminfo.node0.AnonPages
63487 ± 4% -12.0% 55850 ± 2% numa-meminfo.node0.KReclaimable
11691 ± 3% +4087.3% 489553 numa-meminfo.node0.KernelStack
1170596 ± 4% +104.2% 2390602 numa-meminfo.node0.MemUsed
2003 ± 23% +67.3% 3351 ± 8% numa-meminfo.node0.PageTables
63487 ± 4% -12.0% 55850 ± 2% numa-meminfo.node0.SReclaimable
72301 ± 3% +591.9% 500285 numa-meminfo.node0.SUnreclaim
45749 -63.5% 16679 ± 5% numa-meminfo.node0.Shmem
135789 ± 3% +309.6% 556136 numa-meminfo.node0.Slab
460025 ± 11% -50.2% 228882 ± 7% numa-meminfo.node1.Active
459902 ± 11% -50.2% 228802 ± 7% numa-meminfo.node1.Active(anon)
459237 ± 11% -58.3% 191354 ± 9% numa-meminfo.node1.AnonPages
45568 ± 5% +17.8% 53657 ± 4% numa-meminfo.node1.KReclaimable
468955 -96.7% 15461 ± 67% numa-meminfo.node1.KernelStack
2241450 -51.9% 1078199 ± 3% numa-meminfo.node1.MemUsed
3493 ± 13% -38.0% 2167 ± 13% numa-meminfo.node1.PageTables
45568 ± 5% +17.8% 53657 ± 4% numa-meminfo.node1.SReclaimable
479010 -85.6% 68750 ± 16% numa-meminfo.node1.SUnreclaim
2779 ± 23% +1354.1% 40419 ± 8% numa-meminfo.node1.Shmem
524580 -76.7% 122408 ± 8% numa-meminfo.node1.Slab
250988 ± 19% -52.0% 120407 ± 33% sched_debug.cfs_rq:/.MIN_vruntime.avg
1469922 ± 14% -32.1% 998119 ± 33% sched_debug.cfs_rq:/.MIN_vruntime.stddev
20766 ± 9% -25.3% 15519 ± 10% sched_debug.cfs_rq:/.load.avg
43820 ± 19% -40.6% 26044 ± 36% sched_debug.cfs_rq:/.load.stddev
250988 ± 19% -52.0% 120407 ± 33% sched_debug.cfs_rq:/.max_vruntime.avg
1469922 ± 14% -32.1% 998119 ± 33% sched_debug.cfs_rq:/.max_vruntime.stddev
10990567 ± 9% +22.9% 13509735 ± 7% sched_debug.cfs_rq:/.min_vruntime.stddev
102.40 -33.3% 68.27 ± 57% sched_debug.cfs_rq:/.removed.load_avg.max
4711 -33.4% 3136 ± 57% sched_debug.cfs_rq:/.removed.runnable_sum.max
19930 ± 10% -25.1% 14917 ± 10% sched_debug.cfs_rq:/.runnable_weight.avg
-5713498 -349.0% 14226386 ± 8% sched_debug.cfs_rq:/.spread0.avg
8106715 ± 13% +283.1% 31057182 ± 6% sched_debug.cfs_rq:/.spread0.max
-18025645 -99.7% -45393 sched_debug.cfs_rq:/.spread0.min
11027552 ± 9% +21.5% 13398318 ± 7% sched_debug.cfs_rq:/.spread0.stddev
570.46 ± 5% +109.2% 1193 ± 21% sched_debug.cfs_rq:/.util_est_enqueued.avg
2898 ± 11% +90.0% 5507 ± 10% sched_debug.cfs_rq:/.util_est_enqueued.max
563.27 ± 13% +149.3% 1404 ± 18% sched_debug.cfs_rq:/.util_est_enqueued.stddev
155578 ± 22% +84.1% 286457 ± 28% sched_debug.cpu.avg_idle.min
1678 ± 43% +105.9% 3457 ± 31% sched_debug.cpu.clock.stddev
1678 ± 43% +105.9% 3457 ± 31% sched_debug.cpu.clock_task.stddev
50207 -12.3% 44017 ± 7% sched_debug.cpu.curr->pid.avg
82673 -19.5% 66514 ± 5% sched_debug.cpu.curr->pid.max
746.05 ± 11% +554.3% 4881 ± 8% sched_debug.cpu.curr->pid.min
30728 ± 2% -40.8% 18187 ± 10% sched_debug.cpu.curr->pid.stddev
0.00 ± 43% +106.0% 0.00 ± 31% sched_debug.cpu.next_balance.stddev
20.49 ± 6% +149.0% 51.03 ± 23% sched_debug.cpu.nr_running.avg
157.83 ± 9% +77.0% 279.37 ± 8% sched_debug.cpu.nr_running.max
29.95 ± 11% +137.1% 70.99 ± 16% sched_debug.cpu.nr_running.stddev
29803727 ± 9% +29.6% 38623870 ± 3% sched_debug.cpu.nr_switches.avg
51549476 ± 10% +22.5% 63171598 ± 3% sched_debug.cpu.nr_switches.max
12792570 ± 3% +43.5% 18362678 ± 4% sched_debug.cpu.nr_switches.min
2.03 ± 80% +430.9% 10.76 ± 38% sched_debug.cpu.nr_uninterruptible.avg
6.99 ± 4% +48.7% 10.39 ± 5% perf-stat.i.MPKI
7.49 ± 5% -1.2 6.29 ± 5% perf-stat.i.cache-miss-rate%
27202880 ± 2% +46.9% 39966416 ± 2% perf-stat.i.cache-misses
4.662e+08 ± 5% +71.5% 7.998e+08 ± 5% perf-stat.i.cache-references
5989366 ± 4% +12.9% 6760958 ± 5% perf-stat.i.context-switches
197319 ± 12% -19.2% 159504 ± 3% perf-stat.i.cpu-migrations
7415 ± 2% -23.3% 5686 ± 3% perf-stat.i.cycles-between-cache-misses
0.61 ± 35% -0.3 0.35 ± 4% perf-stat.i.dTLB-load-miss-rate%
1.506e+08 ± 38% -44.9% 82967539 ± 5% perf-stat.i.dTLB-load-misses
0.22 ± 4% -0.0 0.20 ± 4% perf-stat.i.dTLB-store-miss-rate%
35974976 ± 5% -13.4% 31146077 ± 6% perf-stat.i.dTLB-store-misses
1.627e+10 -3.7% 1.566e+10 perf-stat.i.dTLB-stores
46665620 ± 3% -14.6% 39838962 ± 5% perf-stat.i.iTLB-load-misses
2191 ± 5% +21.4% 2660 ± 7% perf-stat.i.instructions-per-iTLB-miss
0.47 -2.5% 0.46 perf-stat.i.ipc
6660 ± 8% -19.1% 5391 ± 8% perf-stat.i.minor-faults
62.93 ± 2% +11.1 74.06 perf-stat.i.node-load-miss-rate%
12537290 ± 4% +73.5% 21750457 ± 4% perf-stat.i.node-load-misses
5459308 ± 3% -7.4% 5057404 ± 3% perf-stat.i.node-loads
29.39 ± 2% -4.1 25.29 ± 2% perf-stat.i.node-store-miss-rate%
2863317 +13.6% 3252632 ± 2% perf-stat.i.node-store-misses
6381925 ± 2% +53.5% 9796918 ± 4% perf-stat.i.node-stores
6638 ± 8% -19.4% 5348 ± 8% perf-stat.i.page-faults
4.89 ± 4% +78.4% 8.73 perf-stat.overall.MPKI
1.29 +0.0 1.31 perf-stat.overall.branch-miss-rate%
6.13 ± 4% -1.3 4.85 ± 3% perf-stat.overall.cache-miss-rate%
2.15 +2.0% 2.19 perf-stat.overall.cpi
7187 -27.9% 5182 ± 3% perf-stat.overall.cycles-between-cache-misses
0.63 ± 39% -0.3 0.36 ± 5% perf-stat.overall.dTLB-load-miss-rate%
0.23 ± 5% -0.0 0.21 ± 3% perf-stat.overall.dTLB-store-miss-rate%
75.65 -1.1 74.59 perf-stat.overall.iTLB-load-miss-rate%
1604 ± 3% +15.9% 1859 perf-stat.overall.instructions-per-iTLB-miss
0.47 -2.0% 0.46 perf-stat.overall.ipc
63.14 ± 2% +13.6 76.70 perf-stat.overall.node-load-miss-rate%
30.74 ± 2% -5.1 25.62 ± 2% perf-stat.overall.node-store-miss-rate%
47754 ± 2% +11.7% 53321 perf-stat.overall.path-length
24893507 +40.6% 34993733 ± 2% perf-stat.ps.cache-misses
4.072e+08 ± 5% +77.1% 7.212e+08 perf-stat.ps.cache-references
4968881 ± 7% +19.0% 5911450 perf-stat.ps.context-switches
1.789e+11 +1.3% 1.812e+11 perf-stat.ps.cpu-cycles
1.59e+08 ± 39% -44.1% 88801351 ± 6% perf-stat.ps.dTLB-load-misses
2.502e+10 -2.0% 2.453e+10 perf-stat.ps.dTLB-loads
39138578 ± 5% -12.6% 34190129 ± 3% perf-stat.ps.dTLB-store-misses
1.675e+10 -3.7% 1.613e+10 perf-stat.ps.dTLB-stores
51901139 ± 2% -14.3% 44461197 perf-stat.ps.iTLB-load-misses
16711264 ± 2% -9.3% 15153636 ± 2% perf-stat.ps.iTLB-loads
3573 -9.1% 3247 perf-stat.ps.minor-faults
10427704 ± 3% +73.2% 18055799 ± 3% perf-stat.ps.node-load-misses
6085553 ± 4% -10.0% 5479559 ± 2% perf-stat.ps.node-loads
2566511 +13.4% 2911367 ± 2% perf-stat.ps.node-store-misses
5784179 ± 2% +46.2% 8454210 ± 3% perf-stat.ps.node-stores
3573 -9.1% 3247 perf-stat.ps.page-faults
7.427e+13 ± 2% +11.7% 8.293e+13 perf-stat.total.instructions
127394 ± 5% +19.1% 151665 softirqs.CPU0.RCU
321761 ± 3% +12.8% 363107 ± 3% softirqs.CPU0.TIMER
124526 ± 5% +20.2% 149697 ± 3% softirqs.CPU1.RCU
123768 ± 4% +24.7% 154332 ± 2% softirqs.CPU10.RCU
130315 ± 3% +22.1% 159111 softirqs.CPU11.RCU
126684 ± 4% +26.1% 159755 ± 2% softirqs.CPU12.RCU
124649 ± 9% +27.8% 159298 ± 3% softirqs.CPU13.RCU
124528 ± 4% +23.2% 153372 ± 2% softirqs.CPU14.RCU
326150 ± 2% +9.8% 357991 ± 4% softirqs.CPU14.TIMER
121923 ± 6% +17.4% 143119 softirqs.CPU15.RCU
122114 ± 5% +16.7% 142560 softirqs.CPU16.RCU
121425 ± 6% +14.4% 138861 ± 3% softirqs.CPU17.RCU
330948 ± 2% +22.2% 404570 ± 16% softirqs.CPU17.TIMER
312317 ± 4% +18.2% 369091 ± 2% softirqs.CPU18.TIMER
9626 ± 2% +15.0% 11067 ± 3% softirqs.CPU19.SCHED
322308 ± 3% +21.1% 390454 ± 2% softirqs.CPU19.TIMER
124281 ± 5% +22.8% 152670 ± 2% softirqs.CPU2.RCU
9452 ± 2% +15.6% 10927 ± 3% softirqs.CPU20.SCHED
318575 ± 6% +22.3% 389741 ± 3% softirqs.CPU20.TIMER
9327 ± 3% +16.7% 10886 ± 3% softirqs.CPU21.SCHED
306628 ± 4% +23.3% 377962 ± 3% softirqs.CPU21.TIMER
9424 ± 2% +17.6% 11084 ± 3% softirqs.CPU22.SCHED
302796 ± 4% +24.2% 376126 ± 2% softirqs.CPU22.TIMER
9798 ± 2% +11.4% 10918 ± 4% softirqs.CPU23.SCHED
312009 ± 4% +23.6% 385626 ± 2% softirqs.CPU23.TIMER
9559 ± 3% +13.2% 10822 softirqs.CPU24.SCHED
311772 ± 4% +23.0% 383504 ± 2% softirqs.CPU24.TIMER
9514 ± 4% +13.2% 10766 ± 3% softirqs.CPU25.SCHED
307640 ± 4% +22.4% 376479 ± 2% softirqs.CPU25.TIMER
9573 ± 3% +13.8% 10897 ± 3% softirqs.CPU26.SCHED
314607 ± 9% +21.1% 380918 ± 2% softirqs.CPU26.TIMER
9385 +13.7% 10668 ± 2% softirqs.CPU27.SCHED
301949 ± 4% +25.2% 378140 ± 3% softirqs.CPU27.TIMER
9483 ± 2% +14.7% 10874 ± 2% softirqs.CPU28.SCHED
299462 ± 3% +24.9% 373896 ± 2% softirqs.CPU28.TIMER
9505 ± 3% +13.3% 10769 ± 3% softirqs.CPU29.SCHED
307087 ± 4% +23.8% 380066 ± 3% softirqs.CPU29.TIMER
124992 ± 4% +19.4% 149253 ± 3% softirqs.CPU3.RCU
9630 ± 4% +13.6% 10942 ± 3% softirqs.CPU30.SCHED
306382 ± 4% +25.4% 384355 ± 2% softirqs.CPU30.TIMER
9496 ± 6% +12.6% 10689 ± 2% softirqs.CPU31.SCHED
308689 ± 4% +24.5% 384419 ± 2% softirqs.CPU31.TIMER
9714 +12.6% 10937 ± 4% softirqs.CPU32.SCHED
304137 ± 3% +35.9% 413240 ± 13% softirqs.CPU32.TIMER
9598 ± 2% +13.4% 10882 ± 5% softirqs.CPU33.SCHED
9622 +13.8% 10949 ± 3% softirqs.CPU34.SCHED
305882 ± 4% +27.3% 389315 ± 4% softirqs.CPU34.TIMER
9409 ± 4% +15.3% 10853 ± 4% softirqs.CPU35.SCHED
306215 ± 4% +23.8% 379222 ± 2% softirqs.CPU35.TIMER
321617 ± 4% +12.9% 362966 ± 3% softirqs.CPU36.TIMER
134843 ± 6% +19.4% 161053 ± 2% softirqs.CPU37.RCU
130954 ± 6% +20.4% 157716 softirqs.CPU38.RCU
129884 ± 5% +17.8% 152978 softirqs.CPU39.RCU
124744 ± 4% +20.1% 149790 ± 3% softirqs.CPU4.RCU
130593 ± 5% +17.2% 153092 ± 2% softirqs.CPU40.RCU
134185 ± 5% +19.9% 160871 softirqs.CPU41.RCU
133336 ± 5% +18.1% 157409 softirqs.CPU42.RCU
134486 ± 4% +18.7% 159613 softirqs.CPU43.RCU
129857 ± 6% +18.5% 153887 softirqs.CPU44.RCU
131816 ± 5% +20.0% 158144 softirqs.CPU45.RCU
132691 ± 5% +22.4% 162412 softirqs.CPU46.RCU
138736 ± 3% +18.7% 164668 softirqs.CPU47.RCU
132971 ± 6% +24.2% 165142 softirqs.CPU48.RCU
136142 ± 3% +20.8% 164493 softirqs.CPU49.RCU
125011 ± 4% +23.6% 154462 ± 2% softirqs.CPU5.RCU
133660 ± 4% +19.5% 159753 ± 2% softirqs.CPU50.RCU
140405 ± 4% +19.5% 167772 softirqs.CPU51.RCU
140523 ± 4% +18.5% 166576 softirqs.CPU52.RCU
138849 ± 5% +14.6% 159157 ± 4% softirqs.CPU53.RCU
332171 ± 2% +21.4% 403260 ± 15% softirqs.CPU53.TIMER
133037 ± 4% +14.0% 151688 ± 2% softirqs.CPU54.RCU
9584 +14.3% 10959 ± 2% softirqs.CPU54.SCHED
310603 ± 4% +18.0% 366472 ± 3% softirqs.CPU54.TIMER
9565 ± 2% +13.8% 10889 ± 5% softirqs.CPU55.SCHED
321231 ± 3% +21.3% 389753 ± 2% softirqs.CPU55.TIMER
136633 ± 4% +12.6% 153854 softirqs.CPU56.RCU
9430 ± 2% +15.6% 10898 ± 5% softirqs.CPU56.SCHED
319477 ± 6% +22.1% 390107 ± 2% softirqs.CPU56.TIMER
128672 ± 4% +14.1% 146791 ± 3% softirqs.CPU57.RCU
9493 ± 2% +13.1% 10740 ± 3% softirqs.CPU57.SCHED
305670 ± 4% +23.5% 377597 ± 2% softirqs.CPU57.TIMER
127129 ± 4% +15.6% 146993 ± 2% softirqs.CPU58.RCU
9529 ± 3% +13.6% 10824 ± 4% softirqs.CPU58.SCHED
302317 ± 4% +23.8% 374371 ± 3% softirqs.CPU58.TIMER
9427 ± 2% +14.0% 10743 ± 2% softirqs.CPU59.SCHED
310985 ± 4% +23.5% 384119 ± 3% softirqs.CPU59.TIMER
124937 ± 4% +21.6% 151962 ± 3% softirqs.CPU6.RCU
9435 ± 2% +14.1% 10762 ± 3% softirqs.CPU60.SCHED
311646 ± 4% +22.8% 382731 ± 3% softirqs.CPU60.TIMER
9585 +13.5% 10877 ± 2% softirqs.CPU61.SCHED
306857 ± 4% +22.8% 376944 ± 2% softirqs.CPU61.TIMER
9483 ± 2% +12.5% 10671 ± 5% softirqs.CPU62.SCHED
9620 ± 4% +9.7% 10556 ± 4% softirqs.CPU63.SCHED
301085 ± 3% +24.7% 375397 ± 3% softirqs.CPU63.TIMER
300443 ± 4% +23.7% 371620 ± 2% softirqs.CPU64.TIMER
9667 +11.2% 10751 ± 4% softirqs.CPU65.SCHED
307194 ± 4% +23.4% 379020 ± 3% softirqs.CPU65.TIMER
9493 ± 3% +14.9% 10909 ± 3% softirqs.CPU66.SCHED
306226 ± 4% +25.1% 383140 ± 3% softirqs.CPU66.TIMER
9674 ± 3% +13.8% 11010 ± 3% softirqs.CPU67.SCHED
307870 ± 4% +25.0% 384793 ± 2% softirqs.CPU67.TIMER
9451 ± 2% +14.2% 10792 ± 4% softirqs.CPU68.SCHED
301873 ± 3% +26.1% 380748 softirqs.CPU68.TIMER
9542 +13.8% 10863 ± 3% softirqs.CPU69.SCHED
316206 ± 8% +20.5% 380893 ± 2% softirqs.CPU69.TIMER
128158 ± 4% +20.4% 154294 ± 2% softirqs.CPU7.RCU
9456 ± 3% +13.5% 10734 ± 3% softirqs.CPU70.SCHED
305361 ± 4% +36.4% 416546 ± 15% softirqs.CPU70.TIMER
9520 ± 3% +15.4% 10985 ± 4% softirqs.CPU71.SCHED
305605 ± 4% +24.6% 380653 ± 2% softirqs.CPU71.TIMER
122831 ± 4% +21.6% 149424 ± 2% softirqs.CPU8.RCU
123870 ± 4% +23.0% 152396 ± 2% softirqs.CPU9.RCU
9357957 ± 5% +15.4% 10794953 softirqs.RCU
23209193 ± 3% +15.5% 26806870 ± 3% softirqs.TIMER
4.79 ± 4% -3.1 1.71 ± 16% perf-profile.calltrace.cycles-pp.security_file_permission.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.85 ± 4% -2.4 1.41 ± 16% perf-profile.calltrace.cycles-pp.security_file_permission.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.39 ± 4% -2.0 1.35 ± 16% perf-profile.calltrace.cycles-pp.copy_page_to_iter.pipe_read.new_sync_read.vfs_read.ksys_read
2.93 ± 5% -1.9 0.99 ± 18% perf-profile.calltrace.cycles-pp.copy_page_from_iter.pipe_write.new_sync_write.vfs_write.ksys_write
3.25 ± 4% -1.9 1.31 ± 10% perf-profile.calltrace.cycles-pp.mutex_unlock.pipe_write.new_sync_write.vfs_write.ksys_write
2.14 ± 4% -1.2 0.91 ± 14% perf-profile.calltrace.cycles-pp.selinux_file_permission.security_file_permission.vfs_read.ksys_read.do_syscall_64
2.06 ± 4% -1.2 0.85 ± 13% perf-profile.calltrace.cycles-pp.selinux_file_permission.security_file_permission.vfs_write.ksys_write.do_syscall_64
1.52 ± 3% -1.0 0.49 ± 59% perf-profile.calltrace.cycles-pp.copyout.copy_page_to_iter.pipe_read.new_sync_read.vfs_read
1.46 ± 3% -1.0 0.46 ± 59% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyout.copy_page_to_iter.pipe_read.new_sync_read
1.53 ± 8% -0.8 0.68 ± 12% perf-profile.calltrace.cycles-pp.touch_atime.pipe_read.new_sync_read.vfs_read.ksys_read
1.65 ± 4% -0.8 0.81 ± 10% perf-profile.calltrace.cycles-pp.__fget_light.__fdget_pos.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.45 ± 5% -0.7 0.74 ± 10% perf-profile.calltrace.cycles-pp.__fget_light.__fdget_pos.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.35 ± 5% -0.7 0.68 ± 10% perf-profile.calltrace.cycles-pp.__fget.__fget_light.__fdget_pos.ksys_write.do_syscall_64
1.28 ± 8% -0.7 0.62 ± 12% perf-profile.calltrace.cycles-pp.file_update_time.pipe_write.new_sync_write.vfs_write.ksys_write
1.29 ± 3% -0.6 0.67 ± 9% perf-profile.calltrace.cycles-pp.mutex_lock.pipe_write.new_sync_write.vfs_write.ksys_write
0.64 ± 11% +0.3 0.96 ± 7% perf-profile.calltrace.cycles-pp.__lock_text_start.__wake_up_common_lock.pipe_write.new_sync_write.vfs_write
0.00 +0.5 0.55 ± 3% perf-profile.calltrace.cycles-pp.pick_next_entity.pick_next_task_fair.__schedule.schedule.exit_to_usermode_loop
0.00 +0.6 0.55 ± 5% perf-profile.calltrace.cycles-pp.pick_next_entity.pick_next_task_fair.__schedule.schedule.pipe_wait
0.00 +0.6 0.57 ± 6% perf-profile.calltrace.cycles-pp.update_curr.reweight_entity.enqueue_task_fair.activate_task.ttwu_do_activate
0.00 +0.6 0.61 ± 7% perf-profile.calltrace.cycles-pp.update_curr.reweight_entity.dequeue_task_fair.__schedule.schedule
0.00 +0.7 0.69 ± 8% perf-profile.calltrace.cycles-pp.finish_task_switch.__schedule.schedule.pipe_wait.pipe_read
0.00 +0.7 0.70 ± 6% perf-profile.calltrace.cycles-pp.set_next_entity.pick_next_task_fair.__schedule.schedule.pipe_wait
0.00 +0.7 0.71 ± 11% perf-profile.calltrace.cycles-pp.__enqueue_entity.put_prev_entity.pick_next_task_fair.__schedule.schedule
0.00 +0.7 0.72 ± 7% perf-profile.calltrace.cycles-pp.update_load_avg.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate
0.00 +0.7 0.72 ± 6% perf-profile.calltrace.cycles-pp.__update_load_avg_cfs_rq.dequeue_task_fair.__schedule.schedule.pipe_wait
0.00 +0.8 0.76 ± 10% perf-profile.calltrace.cycles-pp.__enqueue_entity.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate
0.00 +0.8 0.77 ± 2% perf-profile.calltrace.cycles-pp.update_curr.dequeue_entity.dequeue_task_fair.__schedule.schedule
0.00 +0.8 0.77 ± 5% perf-profile.calltrace.cycles-pp.__switch_to
0.00 +0.8 0.77 ± 6% perf-profile.calltrace.cycles-pp.___perf_sw_event.__schedule.schedule.pipe_wait.pipe_read
0.00 +0.8 0.79 ± 5% perf-profile.calltrace.cycles-pp.update_load_avg.dequeue_entity.dequeue_task_fair.__schedule.schedule
0.00 +0.8 0.81 ± 4% perf-profile.calltrace.cycles-pp.mutex_lock.pipe_wait.pipe_read.new_sync_read.vfs_read
0.00 +0.8 0.82 ± 6% perf-profile.calltrace.cycles-pp.update_rq_clock.__schedule.schedule.pipe_wait.pipe_read
0.00 +0.8 0.82 ± 6% perf-profile.calltrace.cycles-pp.__update_load_avg_cfs_rq.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up
0.00 +0.9 0.90 ± 3% perf-profile.calltrace.cycles-pp.find_next_bit.cpumask_next_wrap.select_idle_sibling.select_task_rq_fair.try_to_wake_up
0.00 +0.9 0.91 ± 4% perf-profile.calltrace.cycles-pp.update_rq_clock.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
0.00 +0.9 0.91 ± 6% perf-profile.calltrace.cycles-pp.check_preempt_wakeup.check_preempt_curr.ttwu_do_wakeup.try_to_wake_up.autoremove_wake_function
0.00 +0.9 0.92 ± 2% perf-profile.calltrace.cycles-pp.native_write_msr
0.00 +1.1 1.05 ± 5% perf-profile.calltrace.cycles-pp.check_preempt_curr.ttwu_do_wakeup.try_to_wake_up.autoremove_wake_function.__wake_up_common
0.00 +1.1 1.08 ± 8% perf-profile.calltrace.cycles-pp.put_prev_entity.pick_next_task_fair.__schedule.schedule.exit_to_usermode_loop
0.00 +1.1 1.13 ± 5% perf-profile.calltrace.cycles-pp.ttwu_do_wakeup.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
0.00 +1.2 1.17 ± 9% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.try_to_wake_up.autoremove_wake_function.__wake_up_common
0.00 +1.2 1.18 ± 2% perf-profile.calltrace.cycles-pp.__switch_to_asm
0.00 +1.2 1.20 ± 9% perf-profile.calltrace.cycles-pp._raw_spin_lock.__schedule.schedule.pipe_wait.pipe_read
0.00 +1.3 1.30 ± 5% perf-profile.calltrace.cycles-pp.reweight_entity.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up
0.00 +1.3 1.34 ± 6% perf-profile.calltrace.cycles-pp.reweight_entity.dequeue_task_fair.__schedule.schedule.pipe_wait
0.00 +1.6 1.62 ± 4% perf-profile.calltrace.cycles-pp.switch_fpu_return.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.9 1.88 ± 5% perf-profile.calltrace.cycles-pp.pick_next_task_fair.__schedule.schedule.pipe_wait.pipe_read
0.00 +1.9 1.93 ± 8% perf-profile.calltrace.cycles-pp._raw_spin_lock.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
0.13 ±173% +2.0 2.11 ± 5% perf-profile.calltrace.cycles-pp.cpumask_next_wrap.select_idle_sibling.select_task_rq_fair.try_to_wake_up.autoremove_wake_function
0.00 +2.1 2.06 ± 4% perf-profile.calltrace.cycles-pp.dequeue_entity.dequeue_task_fair.__schedule.schedule.pipe_wait
0.00 +2.4 2.38 ± 4% perf-profile.calltrace.cycles-pp.pick_next_task_fair.__schedule.schedule.exit_to_usermode_loop.do_syscall_64
0.00 +2.7 2.65 ± 4% perf-profile.calltrace.cycles-pp.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up
0.35 ±103% +4.3 4.67 ± 4% perf-profile.calltrace.cycles-pp.__schedule.schedule.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.22 ±173% +4.6 4.81 ± 4% perf-profile.calltrace.cycles-pp.schedule.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.23 ±173% +4.7 4.97 ± 4% perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.72 ± 23% +5.0 5.72 ± 5% perf-profile.calltrace.cycles-pp.dequeue_task_fair.__schedule.schedule.pipe_wait.pipe_read
0.69 ± 17% +5.6 6.25 ± 4% perf-profile.calltrace.cycles-pp.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up.autoremove_wake_function
0.74 ± 17% +5.8 6.50 ± 4% perf-profile.calltrace.cycles-pp.activate_task.ttwu_do_activate.try_to_wake_up.autoremove_wake_function.__wake_up_common
0.74 ± 18% +5.8 6.55 ± 5% perf-profile.calltrace.cycles-pp.ttwu_do_activate.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
15.99 ± 54% +7.5 23.51 ± 6% perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
13.13 +7.7 20.86 ± 3% perf-profile.calltrace.cycles-pp.new_sync_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
11.49 +8.7 20.19 ± 3% perf-profile.calltrace.cycles-pp.pipe_read.new_sync_read.vfs_read.ksys_read.do_syscall_64
1.31 ± 27% +8.9 10.21 ± 7% perf-profile.calltrace.cycles-pp.available_idle_cpu.select_idle_sibling.select_task_rq_fair.try_to_wake_up.autoremove_wake_function
1.81 ± 22% +11.2 12.98 ± 5% perf-profile.calltrace.cycles-pp.__schedule.schedule.pipe_wait.pipe_read.new_sync_read
1.89 ± 22% +11.4 13.28 ± 5% perf-profile.calltrace.cycles-pp.schedule.pipe_wait.pipe_read.new_sync_read.vfs_read
2.18 ± 21% +13.1 15.28 ± 4% perf-profile.calltrace.cycles-pp.pipe_wait.pipe_read.new_sync_read.vfs_read.ksys_read
2.01 ± 25% +19.1 21.10 ± 7% perf-profile.calltrace.cycles-pp.select_idle_sibling.select_task_rq_fair.try_to_wake_up.autoremove_wake_function.__wake_up_common
2.38 ± 23% +20.0 22.40 ± 7% perf-profile.calltrace.cycles-pp.select_task_rq_fair.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
19.01 ± 3% +23.4 42.38 ± 4% perf-profile.calltrace.cycles-pp.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
21.13 ± 53% +23.8 44.92 ± 5% perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
17.85 ± 4% +23.9 41.76 ± 4% perf-profile.calltrace.cycles-pp.pipe_write.new_sync_write.vfs_write.ksys_write.do_syscall_64
19.16 ± 53% +24.6 43.74 ± 5% perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.79 ± 22% +30.7 34.45 ± 6% perf-profile.calltrace.cycles-pp.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.pipe_write
3.84 ± 23% +30.9 34.69 ± 6% perf-profile.calltrace.cycles-pp.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.pipe_write.new_sync_write
4.13 ± 21% +31.0 35.10 ± 6% perf-profile.calltrace.cycles-pp.__wake_up_common.__wake_up_common_lock.pipe_write.new_sync_write.vfs_write
5.77 ± 14% +31.0 36.76 ± 6% perf-profile.calltrace.cycles-pp.__wake_up_common_lock.pipe_write.new_sync_write.vfs_write.ksys_write
9.20 ± 6% -6.0 3.16 ± 12% perf-profile.children.cycles-pp.syscall_return_via_sysret
8.88 ± 3% -5.8 3.03 ± 12% perf-profile.children.cycles-pp.entry_SYSCALL_64
8.72 ± 4% -5.3 3.39 ± 11% perf-profile.children.cycles-pp.security_file_permission
4.27 ± 3% -2.4 1.89 ± 9% perf-profile.children.cycles-pp.selinux_file_permission
3.15 ± 5% -2.1 1.07 ± 12% perf-profile.children.cycles-pp.file_has_perm
3.48 ± 3% -2.0 1.46 ± 10% perf-profile.children.cycles-pp.copy_page_to_iter
3.80 ± 3% -2.0 1.79 ± 6% perf-profile.children.cycles-pp.mutex_unlock
3.02 ± 5% -1.9 1.11 ± 11% perf-profile.children.cycles-pp.copy_page_from_iter
3.31 ± 4% -1.6 1.75 ± 6% perf-profile.children.cycles-pp.__fdget_pos
2.55 ± 3% -1.5 1.02 ± 10% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
3.17 ± 4% -1.5 1.70 ± 6% perf-profile.children.cycles-pp.__fget_light
2.10 ± 7% -1.4 0.71 ± 12% perf-profile.children.cycles-pp.fsnotify
2.52 ± 6% -1.3 1.23 ± 8% perf-profile.children.cycles-pp.__fget
1.77 ± 6% -1.1 0.66 ± 9% perf-profile.children.cycles-pp.avc_has_perm
1.89 ± 5% -1.1 0.78 ± 10% perf-profile.children.cycles-pp.___might_sleep
1.44 ± 4% -1.0 0.48 ± 12% perf-profile.children.cycles-pp.__inode_security_revalidate
1.33 ± 10% -0.9 0.40 ± 16% perf-profile.children.cycles-pp.current_time
1.56 ± 3% -0.9 0.66 ± 9% perf-profile.children.cycles-pp.copyout
1.56 ± 8% -0.8 0.73 ± 7% perf-profile.children.cycles-pp.touch_atime
1.33 ± 4% -0.8 0.53 ± 10% perf-profile.children.cycles-pp.__might_sleep
1.15 ± 9% -0.8 0.36 ± 12% perf-profile.children.cycles-pp.atime_needs_update
1.17 ± 4% -0.7 0.43 ± 13% perf-profile.children.cycles-pp.copyin
1.23 ± 5% -0.7 0.54 ± 9% perf-profile.children.cycles-pp.__fsnotify_parent
1.32 ± 7% -0.6 0.68 ± 6% perf-profile.children.cycles-pp.file_update_time
1.02 ± 3% -0.6 0.38 ± 12% perf-profile.children.cycles-pp.__might_fault
2.54 ± 2% -0.5 2.03 perf-profile.children.cycles-pp.mutex_lock
0.78 ± 4% -0.4 0.36 ± 10% perf-profile.children.cycles-pp.fput_many
0.53 ± 4% -0.4 0.15 ± 21% perf-profile.children.cycles-pp.inode_has_perm
0.44 ± 12% -0.3 0.11 ± 22% perf-profile.children.cycles-pp.iov_iter_init
0.76 ± 4% -0.3 0.44 ± 8% perf-profile.children.cycles-pp.__x86_indirect_thunk_rax
0.53 ± 3% -0.3 0.21 ± 6% perf-profile.children.cycles-pp.rcu_all_qs
0.41 ± 9% -0.3 0.12 ± 15% perf-profile.children.cycles-pp.timespec64_trunc
0.97 -0.3 0.71 ± 3% perf-profile.children.cycles-pp._cond_resched
0.47 ± 7% -0.3 0.21 ± 11% perf-profile.children.cycles-pp.__x64_sys_read
0.45 ± 7% -0.3 0.20 ± 10% perf-profile.children.cycles-pp.__x64_sys_write
0.37 ± 4% -0.2 0.17 ± 16% perf-profile.children.cycles-pp.generic_pipe_buf_confirm
0.44 ± 12% -0.2 0.24 ± 3% perf-profile.children.cycles-pp.__sb_start_write
0.31 ± 6% -0.2 0.12 ± 13% perf-profile.children.cycles-pp.__sb_end_write
0.36 ± 7% -0.2 0.18 ± 6% perf-profile.children.cycles-pp.rw_verify_area
0.26 ± 14% -0.2 0.09 ± 14% perf-profile.children.cycles-pp.ktime_get_coarse_real_ts64
0.29 ± 8% -0.2 0.14 ± 16% perf-profile.children.cycles-pp.fpregs_assert_state_consistent
0.22 ± 39% -0.2 0.06 ± 11% perf-profile.children.cycles-pp.__vfs_read
0.21 ± 19% -0.1 0.11 ± 9% perf-profile.children.cycles-pp.__mutex_unlock_slowpath
0.12 ± 9% -0.1 0.03 ±100% perf-profile.children.cycles-pp.__vfs_write
0.12 ± 17% -0.0 0.08 ± 11% perf-profile.children.cycles-pp.wake_up_q
0.08 ± 10% -0.0 0.06 ± 7% perf-profile.children.cycles-pp.kill_fasync
0.11 ± 7% -0.0 0.08 ± 5% perf-profile.children.cycles-pp.swapgs_restore_regs_and_return_to_usermode
0.09 ± 5% -0.0 0.07 ± 7% perf-profile.children.cycles-pp.prepare_exit_to_usermode
0.01 ±173% +0.1 0.06 ± 6% perf-profile.children.cycles-pp.__mnt_want_write
0.07 ± 25% +0.1 0.12 ± 27% perf-profile.children.cycles-pp.__softirqentry_text_start
0.08 ± 30% +0.1 0.14 ± 26% perf-profile.children.cycles-pp.irq_exit
0.06 ± 6% +0.1 0.12 ± 10% perf-profile.children.cycles-pp.__mark_inode_dirty
0.00 +0.1 0.06 perf-profile.children.cycles-pp.preempt_schedule_common
0.00 +0.1 0.07 ± 6% perf-profile.children.cycles-pp.is_cpu_allowed
0.03 ±100% +0.1 0.10 ± 8% perf-profile.children.cycles-pp.native_apic_msr_eoi_write
0.00 +0.1 0.07 ± 5% perf-profile.children.cycles-pp.__x2apic_send_IPI_dest
0.01 ±173% +0.1 0.09 ± 32% perf-profile.children.cycles-pp.migrate_task_rq_fair
0.00 +0.1 0.07 ± 14% perf-profile.children.cycles-pp.update_cfs_rq_h_load
0.05 ± 61% +0.1 0.13 ± 5% perf-profile.children.cycles-pp.native_irq_return_iret
0.00 +0.1 0.08 ± 8% perf-profile.children.cycles-pp.rcu_note_context_switch
0.00 +0.1 0.08 ± 13% perf-profile.children.cycles-pp.smp_reschedule_interrupt
0.00 +0.1 0.09 ± 4% perf-profile.children.cycles-pp.native_load_tls
0.03 ±102% +0.1 0.12 ± 30% perf-profile.children.cycles-pp.set_task_cpu
0.00 +0.1 0.09 ± 11% perf-profile.children.cycles-pp.check_cfs_rq_runtime
0.00 +0.1 0.10 ± 5% perf-profile.children.cycles-pp.rb_next
0.10 ± 10% +0.1 0.20 ± 9% perf-profile.children.cycles-pp.generic_update_time
0.00 +0.1 0.10 ± 29% perf-profile.children.cycles-pp.__x64_sys_exit
0.00 +0.1 0.10 ± 28% perf-profile.children.cycles-pp.do_exit
0.00 +0.1 0.11 ± 8% perf-profile.children.cycles-pp.rb_insert_color
0.00 +0.1 0.11 ± 7% perf-profile.children.cycles-pp.__list_add_valid
0.00 +0.1 0.12 ± 7% perf-profile.children.cycles-pp.resched_curr
0.00 +0.1 0.12 ± 6% perf-profile.children.cycles-pp.switch_mm_irqs_off
0.13 ± 10% +0.1 0.26 ± 4% perf-profile.children.cycles-pp.__list_del_entry_valid
0.00 +0.1 0.15 ± 4% perf-profile.children.cycles-pp.set_next_buddy
0.00 +0.2 0.16 ± 5% perf-profile.children.cycles-pp.deactivate_task
0.05 ± 9% +0.2 0.22 ± 7% perf-profile.children.cycles-pp.rb_erase
0.06 ± 13% +0.2 0.23 ± 9% perf-profile.children.cycles-pp.finish_wait
0.17 ± 12% +0.2 0.35 ± 2% perf-profile.children.cycles-pp.anon_pipe_buf_release
0.01 ±173% +0.2 0.20 ± 8% perf-profile.children.cycles-pp.wakeup_preempt_entity
0.00 +0.2 0.21 ± 4% perf-profile.children.cycles-pp.cpumask_next
0.07 ± 22% +0.3 0.40 ± 2% perf-profile.children.cycles-pp.prepare_to_wait
0.03 ±100% +0.4 0.39 ± 5% perf-profile.children.cycles-pp.update_min_vruntime
0.03 ±100% +0.4 0.39 ± 12% perf-profile.children.cycles-pp.cpus_share_cache
0.06 ± 14% +0.4 0.43 perf-profile.children.cycles-pp.clear_buddies
0.11 ± 15% +0.4 0.50 ± 3% perf-profile.children.cycles-pp.cpuacct_charge
0.10 ± 26% +0.4 0.49 ± 4% perf-profile.children.cycles-pp.native_sched_clock
0.32 ± 31% +0.4 0.75 ± 14% perf-profile.children.cycles-pp.sched_ttwu_pending
0.11 ± 26% +0.4 0.56 ± 3% perf-profile.children.cycles-pp.sched_clock
0.34 ± 32% +0.5 0.80 ± 15% perf-profile.children.cycles-pp.scheduler_ipi
0.08 ± 12% +0.5 0.56 ± 7% perf-profile.children.cycles-pp.account_entity_enqueue
0.13 ± 26% +0.5 0.62 ± 3% perf-profile.children.cycles-pp.sched_clock_cpu
0.14 ± 26% +0.6 0.77 ± 4% perf-profile.children.cycles-pp.__calc_delta
0.08 ± 16% +0.7 0.76 ± 3% perf-profile.children.cycles-pp.account_entity_dequeue
0.84 ± 5% +0.7 1.54 ± 5% perf-profile.children.cycles-pp.__lock_text_start
1.61 ± 16% +0.7 2.31 ± 9% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.42 ± 31% +0.7 1.12 ± 13% perf-profile.children.cycles-pp.reschedule_interrupt
0.24 ± 28% +0.9 1.11 perf-profile.children.cycles-pp.native_write_msr
0.14 ± 22% +0.9 1.01 ± 5% perf-profile.children.cycles-pp.check_preempt_wakeup
0.24 ± 32% +0.9 1.12 ± 7% perf-profile.children.cycles-pp.finish_task_switch
0.22 ± 22% +0.9 1.10 ± 3% perf-profile.children.cycles-pp.find_next_bit
0.22 ± 23% +0.9 1.12 ± 3% perf-profile.children.cycles-pp.set_next_entity
0.21 ± 32% +0.9 1.14 ± 3% perf-profile.children.cycles-pp.update_cfs_group
0.16 ± 23% +1.0 1.15 ± 4% perf-profile.children.cycles-pp.check_preempt_curr
0.21 ± 30% +1.0 1.21 ± 2% perf-profile.children.cycles-pp.__switch_to
0.23 ± 21% +1.0 1.24 ± 2% perf-profile.children.cycles-pp.__switch_to_asm
0.16 ± 38% +1.0 1.20 ± 4% perf-profile.children.cycles-pp.___perf_sw_event
0.15 ± 26% +1.0 1.20 ± 3% perf-profile.children.cycles-pp.pick_next_entity
0.15 ± 20% +1.1 1.21 ± 7% perf-profile.children.cycles-pp.put_prev_entity
0.17 ± 22% +1.1 1.23 ± 4% perf-profile.children.cycles-pp.ttwu_do_wakeup
0.19 ± 10% +1.3 1.52 ± 10% perf-profile.children.cycles-pp.__enqueue_entity
0.24 ± 23% +1.3 1.59 ± 6% perf-profile.children.cycles-pp.__update_load_avg_se
0.28 ± 23% +1.4 1.64 ± 3% perf-profile.children.cycles-pp.switch_fpu_return
0.31 ± 29% +1.6 1.88 ± 5% perf-profile.children.cycles-pp.update_rq_clock
0.44 ± 21% +1.8 2.24 ± 4% perf-profile.children.cycles-pp.cpumask_next_wrap
0.33 ± 20% +1.8 2.15 ± 4% perf-profile.children.cycles-pp.dequeue_entity
23.28 ± 2% +2.0 25.27 ± 2% perf-profile.children.cycles-pp.ksys_read
0.46 ± 25% +2.1 2.57 ± 5% perf-profile.children.cycles-pp.update_load_avg
0.20 ± 21% +2.1 2.34 ± 4% perf-profile.children.cycles-pp.__update_load_avg_cfs_rq
0.91 ± 29% +2.2 3.12 ± 11% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
0.38 ± 23% +2.4 2.80 ± 4% perf-profile.children.cycles-pp.reweight_entity
0.52 ± 18% +2.4 2.96 ± 3% perf-profile.children.cycles-pp.enqueue_entity
0.49 ± 23% +2.6 3.10 ± 4% perf-profile.children.cycles-pp.update_curr
20.92 ± 2% +3.2 24.12 ± 2% perf-profile.children.cycles-pp.vfs_read
0.21 ± 23% +3.4 3.65 ± 7% perf-profile.children.cycles-pp._raw_spin_lock
0.76 ± 24% +3.9 4.67 ± 3% perf-profile.children.cycles-pp.pick_next_task_fair
0.66 ± 34% +4.4 5.06 ± 2% perf-profile.children.cycles-pp.exit_to_usermode_loop
0.81 ± 22% +5.0 5.83 ± 4% perf-profile.children.cycles-pp.dequeue_task_fair
1.01 ± 21% +5.8 6.86 ± 4% perf-profile.children.cycles-pp.enqueue_task_fair
1.06 ± 21% +6.0 7.10 ± 4% perf-profile.children.cycles-pp.activate_task
1.07 ± 21% +6.1 7.15 ± 4% perf-profile.children.cycles-pp.ttwu_do_activate
13.19 +7.8 20.94 ± 3% perf-profile.children.cycles-pp.new_sync_read
11.61 +8.7 20.29 ± 3% perf-profile.children.cycles-pp.pipe_read
1.45 ± 25% +8.9 10.35 ± 7% perf-profile.children.cycles-pp.available_idle_cpu
78.85 ± 2% +9.8 88.66 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
77.90 ± 2% +10.1 88.00 perf-profile.children.cycles-pp.do_syscall_64
2.42 ± 21% +12.9 15.36 ± 4% perf-profile.children.cycles-pp.pipe_wait
2.71 ± 26% +15.4 18.12 ± 4% perf-profile.children.cycles-pp.__schedule
2.76 ± 25% +15.6 18.37 ± 4% perf-profile.children.cycles-pp.schedule
27.59 ± 2% +18.6 46.17 ± 3% perf-profile.children.cycles-pp.ksys_write
2.26 ± 23% +19.0 21.30 ± 7% perf-profile.children.cycles-pp.select_idle_sibling
24.98 ± 2% +19.9 44.88 ± 3% perf-profile.children.cycles-pp.vfs_write
2.48 ± 23% +20.0 22.46 ± 7% perf-profile.children.cycles-pp.select_task_rq_fair
19.06 ± 3% +23.3 42.41 ± 4% perf-profile.children.cycles-pp.new_sync_write
17.98 ± 4% +23.9 41.93 ± 4% perf-profile.children.cycles-pp.pipe_write
4.03 ± 21% +30.6 34.60 ± 6% perf-profile.children.cycles-pp.try_to_wake_up
3.97 ± 22% +30.8 34.74 ± 6% perf-profile.children.cycles-pp.autoremove_wake_function
4.30 ± 20% +30.9 35.19 ± 6% perf-profile.children.cycles-pp.__wake_up_common
6.06 ± 14% +31.2 37.21 ± 5% perf-profile.children.cycles-pp.__wake_up_common_lock
24.41 ± 6% -15.5 8.96 ± 10% perf-profile.self.cycles-pp.do_syscall_64
9.18 ± 6% -6.0 3.14 ± 12% perf-profile.self.cycles-pp.syscall_return_via_sysret
8.63 ± 5% -5.6 3.00 ± 12% perf-profile.self.cycles-pp.entry_SYSCALL_64
3.71 ± 3% -2.0 1.73 ± 6% perf-profile.self.cycles-pp.mutex_unlock
2.47 ± 3% -1.5 0.96 ± 10% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
2.76 ± 3% -1.4 1.35 ± 8% perf-profile.self.cycles-pp.selinux_file_permission
2.02 ± 7% -1.3 0.68 ± 12% perf-profile.self.cycles-pp.fsnotify
2.46 ± 6% -1.3 1.20 ± 8% perf-profile.self.cycles-pp.__fget
1.75 ± 6% -1.1 0.64 ± 10% perf-profile.self.cycles-pp.avc_has_perm
1.83 ± 5% -1.1 0.75 ± 10% perf-profile.self.cycles-pp.___might_sleep
1.75 -1.0 0.77 ± 7% perf-profile.self.cycles-pp.pipe_write
1.20 ± 4% -0.7 0.46 ± 10% perf-profile.self.cycles-pp.__might_sleep
1.29 ± 11% -0.7 0.58 ± 10% perf-profile.self.cycles-pp.new_sync_read
1.63 ± 4% -0.7 0.97 ± 5% perf-profile.self.cycles-pp.pipe_read
1.15 ± 5% -0.7 0.49 ± 8% perf-profile.self.cycles-pp.__fsnotify_parent
0.89 ± 8% -0.6 0.27 ± 15% perf-profile.self.cycles-pp.copy_page_from_iter
0.89 ± 4% -0.6 0.29 ± 14% perf-profile.self.cycles-pp.security_file_permission
0.83 ± 5% -0.6 0.24 ± 21% perf-profile.self.cycles-pp.file_has_perm
0.90 ± 2% -0.5 0.41 ± 6% perf-profile.self.cycles-pp.new_sync_write
0.67 ± 10% -0.5 0.20 ± 15% perf-profile.self.cycles-pp.current_time
0.83 ± 4% -0.4 0.41 ± 10% perf-profile.self.cycles-pp.copy_page_to_iter
0.72 ± 7% -0.4 0.31 ± 9% perf-profile.self.cycles-pp.vfs_write
0.75 ± 4% -0.4 0.35 ± 9% perf-profile.self.cycles-pp.fput_many
1.01 ± 4% -0.3 0.69 ± 5% perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.45 ± 4% -0.3 0.14 ± 19% perf-profile.self.cycles-pp.inode_has_perm
0.54 ± 8% -0.3 0.23 ± 9% perf-profile.self.cycles-pp.file_update_time
0.41 ± 10% -0.3 0.11 ± 20% perf-profile.self.cycles-pp.iov_iter_init
0.49 ± 7% -0.3 0.19 ± 8% perf-profile.self.cycles-pp.atime_needs_update
0.44 ± 7% -0.3 0.15 ± 14% perf-profile.self.cycles-pp.ksys_read
0.44 ± 7% -0.3 0.17 ± 9% perf-profile.self.cycles-pp.__inode_security_revalidate
0.41 ± 2% -0.3 0.15 ± 8% perf-profile.self.cycles-pp.rcu_all_qs
0.43 ± 8% -0.3 0.17 ± 11% perf-profile.self.cycles-pp.ksys_write
0.37 ± 9% -0.3 0.11 ± 17% perf-profile.self.cycles-pp.timespec64_trunc
0.42 ± 14% -0.2 0.18 ± 8% perf-profile.self.cycles-pp.__sb_start_write
0.39 ± 7% -0.2 0.17 ± 11% perf-profile.self.cycles-pp.__x64_sys_write
0.40 ± 8% -0.2 0.17 ± 8% perf-profile.self.cycles-pp.__x64_sys_read
0.41 ± 7% -0.2 0.20 ± 11% perf-profile.self.cycles-pp.__wake_up_common_lock
0.60 ± 4% -0.2 0.39 ± 7% perf-profile.self.cycles-pp.__x86_indirect_thunk_rax
1.25 -0.2 1.04 ± 2% perf-profile.self.cycles-pp.mutex_lock
0.30 ± 6% -0.2 0.12 ± 15% perf-profile.self.cycles-pp.__sb_end_write
0.35 ± 5% -0.2 0.18 ± 3% perf-profile.self.cycles-pp.rw_verify_area
0.34 ± 7% -0.2 0.18 ± 8% perf-profile.self.cycles-pp.touch_atime
0.32 ± 4% -0.2 0.15 ± 14% perf-profile.self.cycles-pp.generic_pipe_buf_confirm
0.23 ± 16% -0.2 0.08 ± 11% perf-profile.self.cycles-pp.ktime_get_coarse_real_ts64
0.74 ± 3% -0.2 0.59 ± 5% perf-profile.self.cycles-pp.vfs_read
0.60 ± 3% -0.2 0.45 ± 4% perf-profile.self.cycles-pp.__fget_light
0.27 ± 8% -0.1 0.12 ± 15% perf-profile.self.cycles-pp.fpregs_assert_state_consistent
0.18 ± 45% -0.1 0.04 ± 58% perf-profile.self.cycles-pp.__vfs_read
0.18 ± 5% -0.1 0.06 ± 17% perf-profile.self.cycles-pp.__fdget_pos
0.25 ± 5% -0.1 0.14 ± 11% perf-profile.self.cycles-pp.__might_fault
0.07 ± 7% -0.0 0.04 ± 57% perf-profile.self.cycles-pp.kill_fasync
0.01 ±173% +0.1 0.06 ± 17% perf-profile.self.cycles-pp.sched_ttwu_pending
0.00 +0.1 0.05 ± 8% perf-profile.self.cycles-pp.scheduler_ipi
0.00 +0.1 0.05 ± 8% perf-profile.self.cycles-pp.rcu_note_context_switch
0.01 ±173% +0.1 0.07 ± 12% perf-profile.self.cycles-pp.generic_update_time
0.00 +0.1 0.06 ± 7% perf-profile.self.cycles-pp.cpumask_next
0.00 +0.1 0.06 ± 11% perf-profile.self.cycles-pp.ttwu_do_activate
0.00 +0.1 0.06 perf-profile.self.cycles-pp.__mnt_want_write
0.06 ± 14% +0.1 0.12 ± 10% perf-profile.self.cycles-pp.__mark_inode_dirty
0.00 +0.1 0.06 ± 13% perf-profile.self.cycles-pp.is_cpu_allowed
0.00 +0.1 0.07 ± 13% perf-profile.self.cycles-pp.sched_clock_cpu
0.00 +0.1 0.07 ± 6% perf-profile.self.cycles-pp.sched_clock
0.03 ±100% +0.1 0.10 ± 11% perf-profile.self.cycles-pp.native_apic_msr_eoi_write
0.00 +0.1 0.07 ± 14% perf-profile.self.cycles-pp.update_cfs_rq_h_load
0.00 +0.1 0.08 perf-profile.self.cycles-pp.native_load_tls
0.05 ± 60% +0.1 0.13 ± 5% perf-profile.self.cycles-pp.native_irq_return_iret
0.00 +0.1 0.09 ± 9% perf-profile.self.cycles-pp.ttwu_do_wakeup
0.00 +0.1 0.09 ± 11% perf-profile.self.cycles-pp.rb_next
0.00 +0.1 0.10 ± 8% perf-profile.self.cycles-pp.__list_add_valid
0.00 +0.1 0.10 ± 10% perf-profile.self.cycles-pp.rb_insert_color
0.00 +0.1 0.11 ± 9% perf-profile.self.cycles-pp.resched_curr
0.00 +0.1 0.11 ± 7% perf-profile.self.cycles-pp.switch_mm_irqs_off
0.32 ± 7% +0.1 0.45 ± 5% perf-profile.self.cycles-pp.__wake_up_common
0.13 ± 10% +0.1 0.25 ± 5% perf-profile.self.cycles-pp.__list_del_entry_valid
0.00 +0.1 0.13 ± 5% perf-profile.self.cycles-pp.autoremove_wake_function
0.00 +0.1 0.13 ± 8% perf-profile.self.cycles-pp.check_preempt_curr
0.04 ± 58% +0.1 0.18 ± 10% perf-profile.self.cycles-pp.finish_wait
0.00 +0.1 0.14 ± 5% perf-profile.self.cycles-pp.set_next_buddy
0.00 +0.1 0.15 ± 3% perf-profile.self.cycles-pp.put_prev_entity
0.00 +0.2 0.15 ± 5% perf-profile.self.cycles-pp.deactivate_task
0.05 ± 9% +0.2 0.21 ± 5% perf-profile.self.cycles-pp.rb_erase
0.00 +0.2 0.15 ± 10% perf-profile.self.cycles-pp.exit_to_usermode_loop
0.00 +0.2 0.18 ± 6% perf-profile.self.cycles-pp.wakeup_preempt_entity
0.15 ± 7% +0.2 0.34 ± 2% perf-profile.self.cycles-pp.anon_pipe_buf_release
0.03 ±100% +0.2 0.21 ± 8% perf-profile.self.cycles-pp.pipe_wait
0.03 ±102% +0.2 0.24 ± 3% perf-profile.self.cycles-pp.activate_task
0.01 ±173% +0.2 0.23 ± 3% perf-profile.self.cycles-pp.prepare_to_wait
0.03 ±100% +0.2 0.25 ± 4% perf-profile.self.cycles-pp.set_next_entity
0.02 ±173% +0.2 0.26 ± 9% perf-profile.self.cycles-pp.dequeue_entity
0.04 ±100% +0.3 0.29 ± 5% perf-profile.self.cycles-pp.enqueue_entity
0.08 ± 31% +0.3 0.34 ± 2% perf-profile.self.cycles-pp.schedule
0.78 ± 3% +0.3 1.09 perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.09 ± 24% +0.3 0.42 ± 7% perf-profile.self.cycles-pp.finish_task_switch
0.03 ±100% +0.3 0.37 ± 6% perf-profile.self.cycles-pp.update_min_vruntime
0.03 ±102% +0.4 0.38 perf-profile.self.cycles-pp.clear_buddies
0.03 ±100% +0.4 0.39 ± 12% perf-profile.self.cycles-pp.cpus_share_cache
0.07 ± 22% +0.4 0.44 ± 7% perf-profile.self.cycles-pp.check_preempt_wakeup
0.09 ± 27% +0.4 0.47 ± 4% perf-profile.self.cycles-pp.native_sched_clock
0.07 ± 26% +0.4 0.45 ± 4% perf-profile.self.cycles-pp.pick_next_entity
0.11 ± 15% +0.4 0.50 ± 3% perf-profile.self.cycles-pp.cpuacct_charge
0.07 ± 14% +0.5 0.53 ± 8% perf-profile.self.cycles-pp.account_entity_enqueue
0.10 ± 21% +0.5 0.56 ± 8% perf-profile.self.cycles-pp.try_to_wake_up
0.10 ± 18% +0.5 0.62 ± 6% perf-profile.self.cycles-pp.dequeue_task_fair
0.14 ± 26% +0.6 0.75 ± 4% perf-profile.self.cycles-pp.__calc_delta
0.06 ± 13% +0.6 0.68 ± 4% perf-profile.self.cycles-pp.account_entity_dequeue
0.15 ± 27% +0.6 0.79 ± 6% perf-profile.self.cycles-pp.enqueue_task_fair
0.17 ± 30% +0.7 0.89 ± 4% perf-profile.self.cycles-pp.update_load_avg
0.15 ± 20% +0.7 0.88 ± 4% perf-profile.self.cycles-pp.reweight_entity
0.15 ± 23% +0.8 0.96 ± 3% perf-profile.self.cycles-pp.pick_next_task_fair
0.24 ± 30% +0.9 1.11 perf-profile.self.cycles-pp.native_write_msr
0.20 ± 23% +0.9 1.07 ± 5% perf-profile.self.cycles-pp.select_task_rq_fair
0.20 ± 22% +0.9 1.07 ± 3% perf-profile.self.cycles-pp.find_next_bit
0.20 ± 31% +0.9 1.11 ± 3% perf-profile.self.cycles-pp.update_cfs_group
0.20 ± 28% +0.9 1.14 ± 2% perf-profile.self.cycles-pp.__switch_to
0.14 ± 37% +1.0 1.13 ± 5% perf-profile.self.cycles-pp.___perf_sw_event
0.21 ± 24% +1.0 1.22 ± 2% perf-profile.self.cycles-pp.__switch_to_asm
0.24 ± 23% +1.1 1.31 ± 5% perf-profile.self.cycles-pp.cpumask_next_wrap
0.23 ± 32% +1.3 1.49 ± 5% perf-profile.self.cycles-pp.update_rq_clock
0.19 ± 28% +1.3 1.45 ± 6% perf-profile.self.cycles-pp.update_curr
0.23 ± 23% +1.3 1.55 ± 6% perf-profile.self.cycles-pp.__update_load_avg_se
0.19 ± 10% +1.3 1.51 ± 10% perf-profile.self.cycles-pp.__enqueue_entity
0.28 ± 22% +1.3 1.63 ± 3% perf-profile.self.cycles-pp.switch_fpu_return
0.11 ± 19% +1.6 1.72 ± 5% perf-profile.self.cycles-pp._raw_spin_lock
0.42 ± 26% +1.7 2.12 ± 2% perf-profile.self.cycles-pp.__schedule
0.19 ± 20% +2.0 2.20 ± 4% perf-profile.self.cycles-pp.__update_load_avg_cfs_rq
0.91 ± 29% +2.2 3.11 ± 12% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.32 ± 19% +7.7 8.05 ± 7% perf-profile.self.cycles-pp.select_idle_sibling
1.44 ± 25% +8.8 10.23 ± 7% perf-profile.self.cycles-pp.available_idle_cpu
439.00 +11.7% 490.50 interrupts.100:IR-PCI-MSI.1572923-edge.eth0-TxRx-59
436.25 +13.1% 493.25 interrupts.101:IR-PCI-MSI.1572924-edge.eth0-TxRx-60
436.25 +12.4% 490.50 interrupts.102:IR-PCI-MSI.1572925-edge.eth0-TxRx-61
436.25 +12.4% 490.50 interrupts.103:IR-PCI-MSI.1572926-edge.eth0-TxRx-62
509.75 ± 3% +545.1% 3288 ±120% interrupts.42:IR-PCI-MSI.1572867-edge.eth0-TxRx-3
474.00 ± 3% +49.7% 709.75 ± 24% interrupts.46:IR-PCI-MSI.1572871-edge.eth0-TxRx-7
588.00 ± 35% +196.0% 1740 ± 50% interrupts.51:IR-PCI-MSI.1572876-edge.eth0-TxRx-12
436.75 ± 2% +12.8% 492.50 interrupts.55:IR-PCI-MSI.1572880-edge.eth0-TxRx-16
438.00 +39.5% 611.00 ± 33% interrupts.57:IR-PCI-MSI.1572882-edge.eth0-TxRx-18
436.25 +13.8% 496.25 interrupts.58:IR-PCI-MSI.1572883-edge.eth0-TxRx-19
436.25 +14.3% 498.50 interrupts.59:IR-PCI-MSI.1572884-edge.eth0-TxRx-20
437.00 +12.2% 490.50 interrupts.60:IR-PCI-MSI.1572885-edge.eth0-TxRx-21
436.25 +12.4% 490.50 interrupts.61:IR-PCI-MSI.1572886-edge.eth0-TxRx-22
436.25 +12.4% 490.50 interrupts.62:IR-PCI-MSI.1572887-edge.eth0-TxRx-23
441.00 +11.8% 493.00 interrupts.63:IR-PCI-MSI.1572888-edge.eth0-TxRx-24
437.25 +13.3% 495.25 interrupts.64:IR-PCI-MSI.1572889-edge.eth0-TxRx-25
436.25 +12.4% 490.50 interrupts.65:IR-PCI-MSI.1572890-edge.eth0-TxRx-26
436.25 +12.4% 490.50 interrupts.68:IR-PCI-MSI.1572893-edge.eth0-TxRx-29
436.25 +12.4% 490.50 interrupts.69:IR-PCI-MSI.1572894-edge.eth0-TxRx-30
436.25 +12.4% 490.50 interrupts.71:IR-PCI-MSI.1572896-edge.eth0-TxRx-32
437.25 ± 2% +12.2% 490.50 interrupts.74:IR-PCI-MSI.1572897-edge.eth0-TxRx-33
436.25 +12.6% 491.25 interrupts.75:IR-PCI-MSI.1572898-edge.eth0-TxRx-34
440.75 +12.4% 495.25 ± 2% interrupts.76:IR-PCI-MSI.1572899-edge.eth0-TxRx-35
436.25 +12.6% 491.00 interrupts.77:IR-PCI-MSI.1572900-edge.eth0-TxRx-36
436.75 +13.1% 493.75 interrupts.78:IR-PCI-MSI.1572901-edge.eth0-TxRx-37
436.25 +12.8% 492.25 interrupts.79:IR-PCI-MSI.1572902-edge.eth0-TxRx-38
436.25 +12.8% 492.25 interrupts.80:IR-PCI-MSI.1572903-edge.eth0-TxRx-39
436.25 +12.4% 490.50 interrupts.81:IR-PCI-MSI.1572904-edge.eth0-TxRx-40
436.25 +12.4% 490.50 interrupts.82:IR-PCI-MSI.1572905-edge.eth0-TxRx-41
436.25 +12.4% 490.50 interrupts.83:IR-PCI-MSI.1572906-edge.eth0-TxRx-42
436.25 +14.6% 499.75 ± 2% interrupts.84:IR-PCI-MSI.1572907-edge.eth0-TxRx-43
436.25 +12.4% 490.50 interrupts.85:IR-PCI-MSI.1572908-edge.eth0-TxRx-44
436.50 +12.4% 490.50 interrupts.86:IR-PCI-MSI.1572909-edge.eth0-TxRx-45
436.25 +12.4% 490.50 interrupts.87:IR-PCI-MSI.1572910-edge.eth0-TxRx-46
436.25 +12.4% 490.50 interrupts.88:IR-PCI-MSI.1572911-edge.eth0-TxRx-47
436.25 +12.6% 491.00 interrupts.89:IR-PCI-MSI.1572912-edge.eth0-TxRx-48
441.25 ± 3% +14.1% 503.25 ± 3% interrupts.90:IR-PCI-MSI.1572913-edge.eth0-TxRx-49
436.25 +12.4% 490.50 interrupts.91:IR-PCI-MSI.1572914-edge.eth0-TxRx-50
436.25 +14.4% 499.25 ± 3% interrupts.92:IR-PCI-MSI.1572915-edge.eth0-TxRx-51
439.25 ± 2% +11.7% 490.50 interrupts.94:IR-PCI-MSI.1572917-edge.eth0-TxRx-53
437.75 +12.1% 490.50 interrupts.95:IR-PCI-MSI.1572918-edge.eth0-TxRx-54
436.50 +12.4% 490.50 interrupts.96:IR-PCI-MSI.1572919-edge.eth0-TxRx-55
436.25 +12.4% 490.50 interrupts.97:IR-PCI-MSI.1572920-edge.eth0-TxRx-56
439.00 ± 2% +11.7% 490.50 interrupts.98:IR-PCI-MSI.1572921-edge.eth0-TxRx-57
442.25 ± 3% +10.9% 490.50 interrupts.99:IR-PCI-MSI.1572922-edge.eth0-TxRx-58
1989 +12.1% 2230 interrupts.9:IR-IO-APIC.9-fasteoi.acpi
6596036 ± 10% -21.1% 5206443 ± 6% interrupts.CAL:Function_call_interrupts
86097 ± 13% -18.4% 70229 ± 7% interrupts.CPU0.CAL:Function_call_interrupts
1785369 +12.4% 2006294 interrupts.CPU0.LOC:Local_timer_interrupts
102823 ± 12% -18.3% 84031 ± 5% interrupts.CPU0.TLB:TLB_shootdowns
388425 ± 12% +27.8% 496548 ± 4% interrupts.CPU0.TRM:Thermal_event_interrupts
1989 +12.1% 2230 interrupts.CPU1.9:IR-IO-APIC.9-fasteoi.acpi
90370 ± 8% -21.4% 70995 ± 6% interrupts.CPU1.CAL:Function_call_interrupts
1784834 +12.4% 2006046 interrupts.CPU1.LOC:Local_timer_interrupts
3088391 ± 15% -20.9% 2441882 ± 7% interrupts.CPU1.RES:Rescheduling_interrupts
105391 ± 8% -18.6% 85793 ± 6% interrupts.CPU1.TLB:TLB_shootdowns
388373 ± 12% +27.9% 496558 ± 4% interrupts.CPU1.TRM:Thermal_event_interrupts
88236 ± 11% -19.5% 71001 ± 7% interrupts.CPU10.CAL:Function_call_interrupts
1785471 +12.3% 2004740 interrupts.CPU10.LOC:Local_timer_interrupts
3040774 ± 4% -21.1% 2398396 ± 8% interrupts.CPU10.RES:Rescheduling_interrupts
104483 ± 10% -18.6% 85025 ± 5% interrupts.CPU10.TLB:TLB_shootdowns
388417 ± 12% +27.8% 496560 ± 4% interrupts.CPU10.TRM:Thermal_event_interrupts
1785039 +12.3% 2005274 interrupts.CPU11.LOC:Local_timer_interrupts
925.75 ±173% +461.3% 5196 ± 58% interrupts.CPU11.NMI:Non-maskable_interrupts
925.75 ±173% +461.3% 5196 ± 58% interrupts.CPU11.PMI:Performance_monitoring_interrupts
3094193 ± 14% -21.5% 2430228 ± 8% interrupts.CPU11.RES:Rescheduling_interrupts
102398 ± 10% -16.1% 85884 ± 5% interrupts.CPU11.TLB:TLB_shootdowns
388413 ± 12% +27.8% 496474 ± 4% interrupts.CPU11.TRM:Thermal_event_interrupts
588.00 ± 35% +196.0% 1740 ± 50% interrupts.CPU12.51:IR-PCI-MSI.1572876-edge.eth0-TxRx-12
87438 ± 9% -18.1% 71580 ± 7% interrupts.CPU12.CAL:Function_call_interrupts
1785376 +12.3% 2005068 interrupts.CPU12.LOC:Local_timer_interrupts
103057 ± 10% -16.4% 86186 ± 6% interrupts.CPU12.TLB:TLB_shootdowns
388416 ± 12% +27.8% 496490 ± 4% interrupts.CPU12.TRM:Thermal_event_interrupts
88735 ± 9% -19.4% 71496 ± 9% interrupts.CPU13.CAL:Function_call_interrupts
1785460 +12.3% 2005445 interrupts.CPU13.LOC:Local_timer_interrupts
104230 ± 9% -17.5% 86038 ± 6% interrupts.CPU13.TLB:TLB_shootdowns
388422 ± 12% +27.8% 496563 ± 4% interrupts.CPU13.TRM:Thermal_event_interrupts
88327 ± 12% -19.0% 71507 ± 7% interrupts.CPU14.CAL:Function_call_interrupts
1785094 +12.3% 2005441 interrupts.CPU14.LOC:Local_timer_interrupts
103317 ± 11% -18.1% 84581 ± 6% interrupts.CPU14.TLB:TLB_shootdowns
388412 ± 12% +27.8% 496564 ± 4% interrupts.CPU14.TRM:Thermal_event_interrupts
86972 ± 10% -17.3% 71898 ± 7% interrupts.CPU15.CAL:Function_call_interrupts
1784668 +12.4% 2005375 interrupts.CPU15.LOC:Local_timer_interrupts
3039799 ± 9% -21.1% 2397505 ± 9% interrupts.CPU15.RES:Rescheduling_interrupts
103367 ± 12% -16.9% 85892 ± 6% interrupts.CPU15.TLB:TLB_shootdowns
388342 ± 12% +27.9% 496559 ± 4% interrupts.CPU15.TRM:Thermal_event_interrupts
436.75 ± 2% +12.8% 492.50 interrupts.CPU16.55:IR-PCI-MSI.1572880-edge.eth0-TxRx-16
1784750 +12.4% 2005330 interrupts.CPU16.LOC:Local_timer_interrupts
55.25 ±171% +7696.8% 4307 ± 64% interrupts.CPU16.NMI:Non-maskable_interrupts
55.25 ±171% +7696.8% 4307 ± 64% interrupts.CPU16.PMI:Performance_monitoring_interrupts
2969996 ± 10% -18.7% 2413141 ± 9% interrupts.CPU16.RES:Rescheduling_interrupts
102367 ± 11% -13.7% 88305 ± 3% interrupts.CPU16.TLB:TLB_shootdowns
388423 ± 12% +27.8% 496563 ± 4% interrupts.CPU16.TRM:Thermal_event_interrupts
1785363 +12.3% 2005849 interrupts.CPU17.LOC:Local_timer_interrupts
388418 ± 12% +27.8% 496560 ± 4% interrupts.CPU17.TRM:Thermal_event_interrupts
438.00 +39.5% 611.00 ± 33% interrupts.CPU18.57:IR-PCI-MSI.1572882-edge.eth0-TxRx-18
88992 ± 12% -22.3% 69145 ± 6% interrupts.CPU18.CAL:Function_call_interrupts
1785398 +12.1% 2001316 interrupts.CPU18.LOC:Local_timer_interrupts
2199105 ± 7% +115.2% 4731679 ± 7% interrupts.CPU18.RES:Rescheduling_interrupts
108465 ± 8% -23.0% 83467 ± 5% interrupts.CPU18.TLB:TLB_shootdowns
329294 ± 3% +15.4% 380131 ± 2% interrupts.CPU18.TRM:Thermal_event_interrupts
436.25 +13.8% 496.25 interrupts.CPU19.58:IR-PCI-MSI.1572883-edge.eth0-TxRx-19
92059 ± 6% -24.2% 69776 ± 5% interrupts.CPU19.CAL:Function_call_interrupts
1784665 +12.2% 2001579 interrupts.CPU19.LOC:Local_timer_interrupts
1284622 ± 7% +225.8% 4185208 ± 4% interrupts.CPU19.RES:Rescheduling_interrupts
108492 ± 7% -23.4% 83059 ± 4% interrupts.CPU19.TLB:TLB_shootdowns
329166 ± 3% +15.5% 380141 ± 2% interrupts.CPU19.TRM:Thermal_event_interrupts
1784512 +12.4% 2005572 interrupts.CPU2.LOC:Local_timer_interrupts
3091132 ± 8% -22.3% 2401342 ± 4% interrupts.CPU2.RES:Rescheduling_interrupts
102562 ± 9% -16.5% 85643 ± 5% interrupts.CPU2.TLB:TLB_shootdowns
388380 ± 12% +27.8% 496544 ± 4% interrupts.CPU2.TRM:Thermal_event_interrupts
436.25 +14.3% 498.50 interrupts.CPU20.59:IR-PCI-MSI.1572884-edge.eth0-TxRx-20
91785 ± 9% -23.9% 69853 ± 4% interrupts.CPU20.CAL:Function_call_interrupts
1785284 +12.1% 2001952 interrupts.CPU20.LOC:Local_timer_interrupts
1281621 ± 12% +203.6% 3890516 ± 7% interrupts.CPU20.RES:Rescheduling_interrupts
106297 ± 9% -19.9% 85180 ± 3% interrupts.CPU20.TLB:TLB_shootdowns
329293 ± 3% +15.4% 380131 ± 2% interrupts.CPU20.TRM:Thermal_event_interrupts
437.00 +12.2% 490.50 interrupts.CPU21.60:IR-PCI-MSI.1572885-edge.eth0-TxRx-21
92635 ± 7% -23.9% 70512 ± 6% interrupts.CPU21.CAL:Function_call_interrupts
1785480 +12.1% 2001723 interrupts.CPU21.LOC:Local_timer_interrupts
1595447 ± 6% +187.8% 4591004 interrupts.CPU21.RES:Rescheduling_interrupts
107575 ± 7% -21.6% 84296 ± 5% interrupts.CPU21.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380086 ± 2% interrupts.CPU21.TRM:Thermal_event_interrupts
436.25 +12.4% 490.50 interrupts.CPU22.61:IR-PCI-MSI.1572886-edge.eth0-TxRx-22
92674 ± 10% -24.4% 70047 ± 6% interrupts.CPU22.CAL:Function_call_interrupts
1784881 +12.1% 2001482 interrupts.CPU22.LOC:Local_timer_interrupts
1605143 ± 6% +179.5% 4486945 ± 5% interrupts.CPU22.RES:Rescheduling_interrupts
108749 ± 10% -23.3% 83409 ± 5% interrupts.CPU22.TLB:TLB_shootdowns
329295 ± 3% +15.4% 380136 ± 2% interrupts.CPU22.TRM:Thermal_event_interrupts
436.25 +12.4% 490.50 interrupts.CPU23.62:IR-PCI-MSI.1572887-edge.eth0-TxRx-23
90975 ± 11% -23.6% 69492 ± 5% interrupts.CPU23.CAL:Function_call_interrupts
1785236 +12.1% 2001811 interrupts.CPU23.LOC:Local_timer_interrupts
1600988 ± 2% +180.2% 4486667 ± 5% interrupts.CPU23.RES:Rescheduling_interrupts
106819 ± 10% -22.3% 83007 ± 4% interrupts.CPU23.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380146 ± 2% interrupts.CPU23.TRM:Thermal_event_interrupts
441.00 +11.8% 493.00 interrupts.CPU24.63:IR-PCI-MSI.1572888-edge.eth0-TxRx-24
91919 ± 11% -23.1% 70689 ± 5% interrupts.CPU24.CAL:Function_call_interrupts
1784996 +12.1% 2000922 interrupts.CPU24.LOC:Local_timer_interrupts
1612237 ± 5% +188.4% 4649702 ± 2% interrupts.CPU24.RES:Rescheduling_interrupts
108187 ± 8% -22.5% 83831 ± 3% interrupts.CPU24.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380122 ± 2% interrupts.CPU24.TRM:Thermal_event_interrupts
437.25 +13.3% 495.25 interrupts.CPU25.64:IR-PCI-MSI.1572889-edge.eth0-TxRx-25
88417 ± 12% -20.8% 70029 ± 4% interrupts.CPU25.CAL:Function_call_interrupts
1785404 +12.0% 1999622 interrupts.CPU25.LOC:Local_timer_interrupts
1612074 ± 2% +182.4% 4551699 interrupts.CPU25.RES:Rescheduling_interrupts
105324 ± 10% -20.1% 84201 ± 4% interrupts.CPU25.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380141 ± 2% interrupts.CPU25.TRM:Thermal_event_interrupts
436.25 +12.4% 490.50 interrupts.CPU26.65:IR-PCI-MSI.1572890-edge.eth0-TxRx-26
92632 ± 12% -23.5% 70907 ± 5% interrupts.CPU26.CAL:Function_call_interrupts
1784730 +12.1% 2000264 interrupts.CPU26.LOC:Local_timer_interrupts
1639148 ± 6% +188.0% 4720149 ± 4% interrupts.CPU26.RES:Rescheduling_interrupts
108099 ± 11% -21.6% 84727 ± 4% interrupts.CPU26.TLB:TLB_shootdowns
329271 ± 3% +15.4% 379984 ± 2% interrupts.CPU26.TRM:Thermal_event_interrupts
91312 ± 12% -23.4% 69963 ± 6% interrupts.CPU27.CAL:Function_call_interrupts
1785171 +12.1% 2001743 interrupts.CPU27.LOC:Local_timer_interrupts
1704847 ± 11% +179.4% 4764019 ± 4% interrupts.CPU27.RES:Rescheduling_interrupts
106515 ± 11% -21.6% 83491 ± 5% interrupts.CPU27.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380080 ± 2% interrupts.CPU27.TRM:Thermal_event_interrupts
89178 ± 8% -22.0% 69565 ± 6% interrupts.CPU28.CAL:Function_call_interrupts
1785194 +12.1% 2001836 interrupts.CPU28.LOC:Local_timer_interrupts
1593025 ± 8% +185.7% 4551782 ± 5% interrupts.CPU28.RES:Rescheduling_interrupts
104408 ± 10% -19.2% 84365 ± 6% interrupts.CPU28.TLB:TLB_shootdowns
329294 ± 3% +15.4% 380142 ± 2% interrupts.CPU28.TRM:Thermal_event_interrupts
436.25 +12.4% 490.50 interrupts.CPU29.68:IR-PCI-MSI.1572893-edge.eth0-TxRx-29
90850 ± 12% -21.9% 70993 ± 6% interrupts.CPU29.CAL:Function_call_interrupts
1785398 +12.1% 2000545 interrupts.CPU29.LOC:Local_timer_interrupts
1657508 ± 5% +179.4% 4631109 ± 3% interrupts.CPU29.RES:Rescheduling_interrupts
106383 ± 13% -20.8% 84284 ± 4% interrupts.CPU29.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380133 ± 2% interrupts.CPU29.TRM:Thermal_event_interrupts
509.75 ± 3% +545.1% 3288 ±120% interrupts.CPU3.42:IR-PCI-MSI.1572867-edge.eth0-TxRx-3
87008 ± 11% -19.9% 69730 ± 5% interrupts.CPU3.CAL:Function_call_interrupts
1784446 +12.4% 2005368 interrupts.CPU3.LOC:Local_timer_interrupts
3169674 ± 9% -20.4% 2521963 ± 8% interrupts.CPU3.RES:Rescheduling_interrupts
103703 ± 12% -17.1% 85958 ± 7% interrupts.CPU3.TLB:TLB_shootdowns
388356 ± 12% +27.9% 496551 ± 4% interrupts.CPU3.TRM:Thermal_event_interrupts
436.25 +12.4% 490.50 interrupts.CPU30.69:IR-PCI-MSI.1572894-edge.eth0-TxRx-30
92043 ± 12% -24.2% 69765 ± 5% interrupts.CPU30.CAL:Function_call_interrupts
1785175 +12.1% 2001248 interrupts.CPU30.LOC:Local_timer_interrupts
1637465 ± 6% +191.3% 4769701 ± 3% interrupts.CPU30.RES:Rescheduling_interrupts
106836 ± 11% -22.5% 82810 ± 5% interrupts.CPU30.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380130 ± 2% interrupts.CPU30.TRM:Thermal_event_interrupts
1785301 +12.0% 1999270 interrupts.CPU31.LOC:Local_timer_interrupts
62.75 ±173% +4261.4% 2736 ±113% interrupts.CPU31.NMI:Non-maskable_interrupts
62.75 ±173% +4261.4% 2736 ±113% interrupts.CPU31.PMI:Performance_monitoring_interrupts
1589500 ± 5% +184.1% 4515325 interrupts.CPU31.RES:Rescheduling_interrupts
106576 ± 11% -21.5% 83698 ± 5% interrupts.CPU31.TLB:TLB_shootdowns
329293 ± 3% +15.4% 380131 ± 2% interrupts.CPU31.TRM:Thermal_event_interrupts
436.25 +12.4% 490.50 interrupts.CPU32.71:IR-PCI-MSI.1572896-edge.eth0-TxRx-32
93348 ± 12% -24.4% 70595 ± 8% interrupts.CPU32.CAL:Function_call_interrupts
1785245 +12.1% 2001145 interrupts.CPU32.LOC:Local_timer_interrupts
1626883 ± 6% +176.3% 4494456 ± 5% interrupts.CPU32.RES:Rescheduling_interrupts
107151 ± 12% -21.4% 84236 ± 5% interrupts.CPU32.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380131 ± 2% interrupts.CPU32.TRM:Thermal_event_interrupts
437.25 ± 2% +12.2% 490.50 interrupts.CPU33.74:IR-PCI-MSI.1572897-edge.eth0-TxRx-33
91487 ± 12% -22.7% 70758 ± 3% interrupts.CPU33.CAL:Function_call_interrupts
1785371 +12.0% 1999987 interrupts.CPU33.LOC:Local_timer_interrupts
1578761 ± 6% +201.2% 4754762 ± 6% interrupts.CPU33.RES:Rescheduling_interrupts
106071 ± 12% -21.3% 83430 ± 2% interrupts.CPU33.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380130 ± 2% interrupts.CPU33.TRM:Thermal_event_interrupts
436.25 +12.6% 491.25 interrupts.CPU34.75:IR-PCI-MSI.1572898-edge.eth0-TxRx-34
91116 ± 11% -24.3% 68943 ± 7% interrupts.CPU34.CAL:Function_call_interrupts
1785353 +12.1% 2001525 interrupts.CPU34.LOC:Local_timer_interrupts
1553828 ± 4% +186.0% 4444035 ± 3% interrupts.CPU34.RES:Rescheduling_interrupts
105575 ± 10% -20.7% 83762 ± 4% interrupts.CPU34.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380139 ± 2% interrupts.CPU34.TRM:Thermal_event_interrupts
440.75 +12.4% 495.25 ± 2% interrupts.CPU35.76:IR-PCI-MSI.1572899-edge.eth0-TxRx-35
91396 ± 11% -25.6% 67986 ± 11% interrupts.CPU35.CAL:Function_call_interrupts
1785237 +12.0% 2000288 interrupts.CPU35.LOC:Local_timer_interrupts
1602616 ± 10% +189.7% 4642115 ± 3% interrupts.CPU35.RES:Rescheduling_interrupts
107669 ± 12% -21.3% 84736 ± 4% interrupts.CPU35.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380145 ± 2% interrupts.CPU35.TRM:Thermal_event_interrupts
436.25 +12.6% 491.00 interrupts.CPU36.77:IR-PCI-MSI.1572900-edge.eth0-TxRx-36
93553 ± 8% -19.8% 75044 ± 6% interrupts.CPU36.CAL:Function_call_interrupts
1784636 +12.4% 2005389 interrupts.CPU36.LOC:Local_timer_interrupts
2588984 ± 6% +25.8% 3256348 ± 6% interrupts.CPU36.RES:Rescheduling_interrupts
107053 ± 7% -19.0% 86754 ± 5% interrupts.CPU36.TLB:TLB_shootdowns
388423 ± 12% +27.8% 496560 ± 4% interrupts.CPU36.TRM:Thermal_event_interrupts
436.75 +13.1% 493.75 interrupts.CPU37.78:IR-PCI-MSI.1572901-edge.eth0-TxRx-37
89518 ± 10% -18.9% 72619 ± 5% interrupts.CPU37.CAL:Function_call_interrupts
1784648 +12.4% 2005651 interrupts.CPU37.LOC:Local_timer_interrupts
3112476 ± 10% -20.8% 2464103 ± 5% interrupts.CPU37.RES:Rescheduling_interrupts
102576 ± 10% -14.4% 87794 ± 6% interrupts.CPU37.TLB:TLB_shootdowns
388412 ± 12% +27.8% 496563 ± 4% interrupts.CPU37.TRM:Thermal_event_interrupts
436.25 +12.8% 492.25 interrupts.CPU38.79:IR-PCI-MSI.1572902-edge.eth0-TxRx-38
92566 ± 10% -20.6% 73466 ± 6% interrupts.CPU38.CAL:Function_call_interrupts
1785132 +12.4% 2005883 interrupts.CPU38.LOC:Local_timer_interrupts
3163520 ± 10% -26.0% 2340820 ± 11% interrupts.CPU38.RES:Rescheduling_interrupts
105823 ± 10% -18.0% 86785 ± 4% interrupts.CPU38.TLB:TLB_shootdowns
388401 ± 12% +27.8% 496554 ± 4% interrupts.CPU38.TRM:Thermal_event_interrupts
436.25 +12.8% 492.25 interrupts.CPU39.80:IR-PCI-MSI.1572903-edge.eth0-TxRx-39
93197 ± 11% -21.8% 72854 ± 7% interrupts.CPU39.CAL:Function_call_interrupts
1784624 +12.4% 2005934 interrupts.CPU39.LOC:Local_timer_interrupts
105953 ± 11% -20.8% 83925 ± 6% interrupts.CPU39.TLB:TLB_shootdowns
388418 ± 12% +27.8% 496548 ± 4% interrupts.CPU39.TRM:Thermal_event_interrupts
86883 ± 13% -17.8% 71449 ± 7% interrupts.CPU4.CAL:Function_call_interrupts
1784433 +12.4% 2005462 interrupts.CPU4.LOC:Local_timer_interrupts
3119739 ± 9% -21.1% 2461192 ± 4% interrupts.CPU4.RES:Rescheduling_interrupts
104829 ± 11% -19.1% 84802 ± 5% interrupts.CPU4.TLB:TLB_shootdowns
388417 ± 12% +27.8% 496563 ± 4% interrupts.CPU4.TRM:Thermal_event_interrupts
436.25 +12.4% 490.50 interrupts.CPU40.81:IR-PCI-MSI.1572904-edge.eth0-TxRx-40
91881 ± 12% -21.4% 72188 ± 8% interrupts.CPU40.CAL:Function_call_interrupts
1784831 +12.4% 2005991 interrupts.CPU40.LOC:Local_timer_interrupts
3101288 ± 10% -17.2% 2569221 ± 7% interrupts.CPU40.RES:Rescheduling_interrupts
105884 ± 10% -19.0% 85772 ± 5% interrupts.CPU40.TLB:TLB_shootdowns
388415 ± 12% +27.8% 496564 ± 4% interrupts.CPU40.TRM:Thermal_event_interrupts
436.25 +12.4% 490.50 interrupts.CPU41.82:IR-PCI-MSI.1572905-edge.eth0-TxRx-41
91739 ± 12% -19.9% 73475 ± 7% interrupts.CPU41.CAL:Function_call_interrupts
1785231 +12.3% 2005398 interrupts.CPU41.LOC:Local_timer_interrupts
3223405 ± 11% -27.9% 2325498 ± 5% interrupts.CPU41.RES:Rescheduling_interrupts
104410 ± 11% -18.7% 84846 ± 6% interrupts.CPU41.TLB:TLB_shootdowns
388408 ± 12% +27.8% 496557 ± 4% interrupts.CPU41.TRM:Thermal_event_interrupts
436.25 +12.4% 490.50 interrupts.CPU42.83:IR-PCI-MSI.1572906-edge.eth0-TxRx-42
1785354 +12.4% 2006034 interrupts.CPU42.LOC:Local_timer_interrupts
3068304 ± 9% -22.1% 2390347 ± 6% interrupts.CPU42.RES:Rescheduling_interrupts
107102 ± 9% -20.7% 84890 ± 4% interrupts.CPU42.TLB:TLB_shootdowns
388423 ± 12% +27.8% 496564 ± 4% interrupts.CPU42.TRM:Thermal_event_interrupts
436.25 +14.6% 499.75 ± 2% interrupts.CPU43.84:IR-PCI-MSI.1572907-edge.eth0-TxRx-43
92178 ± 9% -18.3% 75306 ± 7% interrupts.CPU43.CAL:Function_call_interrupts
1784759 +12.4% 2005986 interrupts.CPU43.LOC:Local_timer_interrupts
6434 ± 7% -64.3% 2297 ±106% interrupts.CPU43.NMI:Non-maskable_interrupts
6434 ± 7% -64.3% 2297 ±106% interrupts.CPU43.PMI:Performance_monitoring_interrupts
104773 ± 9% -17.6% 86369 ± 6% interrupts.CPU43.TLB:TLB_shootdowns
388425 ± 12% +27.8% 496556 ± 4% interrupts.CPU43.TRM:Thermal_event_interrupts
436.25 +12.4% 490.50 interrupts.CPU44.85:IR-PCI-MSI.1572908-edge.eth0-TxRx-44
92387 ± 10% -21.9% 72174 ± 9% interrupts.CPU44.CAL:Function_call_interrupts
1784603 +12.4% 2006064 interrupts.CPU44.LOC:Local_timer_interrupts
106179 ± 9% -18.6% 86461 ± 6% interrupts.CPU44.TLB:TLB_shootdowns
388426 ± 12% +27.8% 496562 ± 4% interrupts.CPU44.TRM:Thermal_event_interrupts
436.50 +12.4% 490.50 interrupts.CPU45.86:IR-PCI-MSI.1572909-edge.eth0-TxRx-45
93641 ± 7% -21.9% 73155 ± 6% interrupts.CPU45.CAL:Function_call_interrupts
1784811 +12.4% 2005654 interrupts.CPU45.LOC:Local_timer_interrupts
3093195 ± 11% -15.6% 2609354 ± 4% interrupts.CPU45.RES:Rescheduling_interrupts
106463 ± 7% -19.6% 85593 ± 5% interrupts.CPU45.TLB:TLB_shootdowns
388408 ± 12% +27.8% 496564 ± 4% interrupts.CPU45.TRM:Thermal_event_interrupts
436.25 +12.4% 490.50 interrupts.CPU46.87:IR-PCI-MSI.1572910-edge.eth0-TxRx-46
91739 ± 10% -19.0% 74279 ± 8% interrupts.CPU46.CAL:Function_call_interrupts
1784609 +12.4% 2005522 interrupts.CPU46.LOC:Local_timer_interrupts
2970471 ± 9% -21.0% 2345439 ± 8% interrupts.CPU46.RES:Rescheduling_interrupts
108158 ± 10% -19.8% 86743 ± 4% interrupts.CPU46.TLB:TLB_shootdowns
388368 ± 12% +27.8% 496469 ± 4% interrupts.CPU46.TRM:Thermal_event_interrupts
436.25 +12.4% 490.50 interrupts.CPU47.88:IR-PCI-MSI.1572911-edge.eth0-TxRx-47
92856 ± 9% -20.0% 74327 ± 8% interrupts.CPU47.CAL:Function_call_interrupts
1784835 +12.4% 2005700 interrupts.CPU47.LOC:Local_timer_interrupts
3116242 ± 10% -20.4% 2480398 ± 5% interrupts.CPU47.RES:Rescheduling_interrupts
104927 ± 9% -18.6% 85393 ± 6% interrupts.CPU47.TLB:TLB_shootdowns
388403 ± 12% +27.8% 496554 ± 4% interrupts.CPU47.TRM:Thermal_event_interrupts
436.25 +12.6% 491.00 interrupts.CPU48.89:IR-PCI-MSI.1572912-edge.eth0-TxRx-48
92889 ± 11% -21.5% 72890 ± 8% interrupts.CPU48.CAL:Function_call_interrupts
1784772 +12.4% 2006205 interrupts.CPU48.LOC:Local_timer_interrupts
3175210 ± 12% -19.7% 2548339 ± 13% interrupts.CPU48.RES:Rescheduling_interrupts
105183 ± 11% -18.6% 85618 ± 6% interrupts.CPU48.TLB:TLB_shootdowns
388426 ± 12% +27.8% 496563 ± 4% interrupts.CPU48.TRM:Thermal_event_interrupts
441.25 ± 3% +14.1% 503.25 ± 3% interrupts.CPU49.90:IR-PCI-MSI.1572913-edge.eth0-TxRx-49
90134 ± 10% -18.3% 73603 ± 6% interrupts.CPU49.CAL:Function_call_interrupts
1785145 +12.4% 2005932 interrupts.CPU49.LOC:Local_timer_interrupts
3185609 ± 11% -17.6% 2626502 ± 4% interrupts.CPU49.RES:Rescheduling_interrupts
102737 ± 10% -17.3% 84932 ± 5% interrupts.CPU49.TLB:TLB_shootdowns
388425 ± 12% +27.8% 496560 ± 4% interrupts.CPU49.TRM:Thermal_event_interrupts
92114 ± 9% -23.2% 70721 ± 6% interrupts.CPU5.CAL:Function_call_interrupts
1785135 +12.3% 2005424 interrupts.CPU5.LOC:Local_timer_interrupts
3210646 ± 12% -24.8% 2415935 ± 4% interrupts.CPU5.RES:Rescheduling_interrupts
107349 ± 9% -18.9% 87034 ± 6% interrupts.CPU5.TLB:TLB_shootdowns
388418 ± 12% +27.8% 496564 ± 4% interrupts.CPU5.TRM:Thermal_event_interrupts
436.25 +12.4% 490.50 interrupts.CPU50.91:IR-PCI-MSI.1572914-edge.eth0-TxRx-50
92406 ± 10% -20.2% 73771 ± 8% interrupts.CPU50.CAL:Function_call_interrupts
1785200 +12.4% 2006488 interrupts.CPU50.LOC:Local_timer_interrupts
106635 ± 9% -19.9% 85431 ± 6% interrupts.CPU50.TLB:TLB_shootdowns
388419 ± 12% +27.8% 496564 ± 4% interrupts.CPU50.TRM:Thermal_event_interrupts
436.25 +14.4% 499.25 ± 3% interrupts.CPU51.92:IR-PCI-MSI.1572915-edge.eth0-TxRx-51
91491 ± 9% -20.8% 72473 ± 6% interrupts.CPU51.CAL:Function_call_interrupts
1784683 +12.4% 2005527 interrupts.CPU51.LOC:Local_timer_interrupts
3051494 ± 8% -24.6% 2300032 ± 10% interrupts.CPU51.RES:Rescheduling_interrupts
104954 ± 10% -20.2% 83775 ± 5% interrupts.CPU51.TLB:TLB_shootdowns
388264 ± 12% +27.9% 496462 ± 4% interrupts.CPU51.TRM:Thermal_event_interrupts
92938 ± 11% -19.7% 74594 ± 7% interrupts.CPU52.CAL:Function_call_interrupts
1785451 +12.4% 2006171 interrupts.CPU52.LOC:Local_timer_interrupts
3025681 ± 11% -17.1% 2506800 ± 10% interrupts.CPU52.RES:Rescheduling_interrupts
107100 ± 11% -19.6% 86156 ± 6% interrupts.CPU52.TLB:TLB_shootdowns
388422 ± 12% +27.8% 496563 ± 4% interrupts.CPU52.TRM:Thermal_event_interrupts
439.25 ± 2% +11.7% 490.50 interrupts.CPU53.94:IR-PCI-MSI.1572917-edge.eth0-TxRx-53
93000 ± 12% -20.9% 73549 ± 7% interrupts.CPU53.CAL:Function_call_interrupts
1785290 +12.3% 2005441 interrupts.CPU53.LOC:Local_timer_interrupts
3212623 ± 7% -18.2% 2629478 ± 10% interrupts.CPU53.RES:Rescheduling_interrupts
107303 ± 11% -17.5% 88567 ± 10% interrupts.CPU53.TLB:TLB_shootdowns
388411 ± 12% +27.8% 496557 ± 4% interrupts.CPU53.TRM:Thermal_event_interrupts
437.75 +12.1% 490.50 interrupts.CPU54.95:IR-PCI-MSI.1572918-edge.eth0-TxRx-54
1785170 +12.0% 2000090 interrupts.CPU54.LOC:Local_timer_interrupts
5351 ± 26% -64.5% 1900 ± 99% interrupts.CPU54.NMI:Non-maskable_interrupts
5351 ± 26% -64.5% 1900 ± 99% interrupts.CPU54.PMI:Performance_monitoring_interrupts
2066106 ± 9% +92.7% 3982033 ± 6% interrupts.CPU54.RES:Rescheduling_interrupts
108150 ± 12% -21.2% 85173 ± 5% interrupts.CPU54.TLB:TLB_shootdowns
329294 ± 3% +15.4% 380146 ± 2% interrupts.CPU54.TRM:Thermal_event_interrupts
436.50 +12.4% 490.50 interrupts.CPU55.96:IR-PCI-MSI.1572919-edge.eth0-TxRx-55
91164 ± 12% -19.9% 73014 ± 6% interrupts.CPU55.CAL:Function_call_interrupts
1785472 +12.0% 1999115 interrupts.CPU55.LOC:Local_timer_interrupts
1286654 ± 6% +228.2% 4223034 ± 5% interrupts.CPU55.RES:Rescheduling_interrupts
104943 ± 11% -19.5% 84466 ± 5% interrupts.CPU55.TLB:TLB_shootdowns
329292 ± 3% +15.4% 379918 ± 2% interrupts.CPU55.TRM:Thermal_event_interrupts
436.25 +12.4% 490.50 interrupts.CPU56.97:IR-PCI-MSI.1572920-edge.eth0-TxRx-56
96934 ± 13% -23.4% 74215 ± 6% interrupts.CPU56.CAL:Function_call_interrupts
1785010 +12.1% 2000169 interrupts.CPU56.LOC:Local_timer_interrupts
1312758 ± 17% +194.5% 3866522 ± 3% interrupts.CPU56.RES:Rescheduling_interrupts
109307 ± 12% -22.1% 85139 ± 4% interrupts.CPU56.TLB:TLB_shootdowns
329273 ± 3% +15.4% 380131 ± 2% interrupts.CPU56.TRM:Thermal_event_interrupts
439.00 ± 2% +11.7% 490.50 interrupts.CPU57.98:IR-PCI-MSI.1572921-edge.eth0-TxRx-57
94960 ± 11% -22.9% 73246 ± 5% interrupts.CPU57.CAL:Function_call_interrupts
1785297 +12.0% 2000019 interrupts.CPU57.LOC:Local_timer_interrupts
1666475 ± 10% +169.6% 4492937 interrupts.CPU57.RES:Rescheduling_interrupts
107534 ± 10% -22.0% 83871 ± 5% interrupts.CPU57.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380136 ± 2% interrupts.CPU57.TRM:Thermal_event_interrupts
442.25 ± 3% +10.9% 490.50 interrupts.CPU58.99:IR-PCI-MSI.1572922-edge.eth0-TxRx-58
92818 ± 12% -21.4% 72925 ± 5% interrupts.CPU58.CAL:Function_call_interrupts
1785429 +12.0% 1999528 interrupts.CPU58.LOC:Local_timer_interrupts
1606064 ± 5% +176.8% 4446179 ± 6% interrupts.CPU58.RES:Rescheduling_interrupts
103696 ± 11% -19.5% 83458 ± 4% interrupts.CPU58.TLB:TLB_shootdowns
329293 ± 3% +15.4% 380022 ± 2% interrupts.CPU58.TRM:Thermal_event_interrupts
439.00 +11.7% 490.50 interrupts.CPU59.100:IR-PCI-MSI.1572923-edge.eth0-TxRx-59
93876 ± 13% -20.5% 74669 ± 6% interrupts.CPU59.CAL:Function_call_interrupts
1785085 +12.0% 1999992 interrupts.CPU59.LOC:Local_timer_interrupts
1634812 ± 5% +174.5% 4487501 ± 4% interrupts.CPU59.RES:Rescheduling_interrupts
106929 ± 13% -21.0% 84522 ± 5% interrupts.CPU59.TLB:TLB_shootdowns
329291 ± 3% +15.4% 380111 ± 2% interrupts.CPU59.TRM:Thermal_event_interrupts
89930 ± 9% -20.5% 71520 ± 7% interrupts.CPU6.CAL:Function_call_interrupts
1785220 +12.3% 2005468 interrupts.CPU6.LOC:Local_timer_interrupts
106307 ± 7% -19.9% 85162 ± 6% interrupts.CPU6.TLB:TLB_shootdowns
388421 ± 12% +27.8% 496564 ± 4% interrupts.CPU6.TRM:Thermal_event_interrupts
436.25 +13.1% 493.25 interrupts.CPU60.101:IR-PCI-MSI.1572924-edge.eth0-TxRx-60
98595 ± 11% -25.6% 73379 ± 5% interrupts.CPU60.CAL:Function_call_interrupts
1785116 +12.1% 2000324 interrupts.CPU60.LOC:Local_timer_interrupts
1604314 ± 8% +186.8% 4601614 ± 5% interrupts.CPU60.RES:Rescheduling_interrupts
110442 ± 10% -24.0% 83932 ± 5% interrupts.CPU60.TLB:TLB_shootdowns
329256 ± 3% +15.5% 380140 ± 2% interrupts.CPU60.TRM:Thermal_event_interrupts
436.25 +12.4% 490.50 interrupts.CPU61.102:IR-PCI-MSI.1572925-edge.eth0-TxRx-61
93035 ± 11% -21.4% 73124 ± 5% interrupts.CPU61.CAL:Function_call_interrupts
1785176 +12.1% 2000960 interrupts.CPU61.LOC:Local_timer_interrupts
4567 ± 67% -79.2% 952.00 ±171% interrupts.CPU61.NMI:Non-maskable_interrupts
4567 ± 67% -79.2% 952.00 ±171% interrupts.CPU61.PMI:Performance_monitoring_interrupts
1575186 ± 6% +194.9% 4644760 ± 4% interrupts.CPU61.RES:Rescheduling_interrupts
103665 ± 11% -18.7% 84259 ± 5% interrupts.CPU61.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380127 ± 2% interrupts.CPU61.TRM:Thermal_event_interrupts
436.25 +12.4% 490.50 interrupts.CPU62.103:IR-PCI-MSI.1572926-edge.eth0-TxRx-62
95188 ± 10% -21.9% 74376 ± 6% interrupts.CPU62.CAL:Function_call_interrupts
1785028 +12.0% 1999262 interrupts.CPU62.LOC:Local_timer_interrupts
1613834 ± 8% +184.7% 4594499 ± 3% interrupts.CPU62.RES:Rescheduling_interrupts
106856 ± 10% -21.4% 83987 ± 5% interrupts.CPU62.TLB:TLB_shootdowns
329270 ± 3% +15.4% 380120 ± 2% interrupts.CPU62.TRM:Thermal_event_interrupts
94411 ± 11% -21.3% 74269 ± 5% interrupts.CPU63.CAL:Function_call_interrupts
1785322 +12.0% 1999237 interrupts.CPU63.LOC:Local_timer_interrupts
1669218 ± 6% +187.2% 4794684 ± 3% interrupts.CPU63.RES:Rescheduling_interrupts
105402 ± 10% -18.3% 86076 ± 8% interrupts.CPU63.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380141 ± 2% interrupts.CPU63.TRM:Thermal_event_interrupts
96060 ± 11% -22.6% 74364 ± 6% interrupts.CPU64.CAL:Function_call_interrupts
1785304 +12.0% 1999571 interrupts.CPU64.LOC:Local_timer_interrupts
1681501 ± 7% +164.0% 4438647 ± 3% interrupts.CPU64.RES:Rescheduling_interrupts
108534 ± 10% -22.8% 83813 ± 5% interrupts.CPU64.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380142 ± 2% interrupts.CPU64.TRM:Thermal_event_interrupts
94178 ± 9% -22.7% 72758 ± 8% interrupts.CPU65.CAL:Function_call_interrupts
1785338 +12.0% 1999408 interrupts.CPU65.LOC:Local_timer_interrupts
1674108 ± 4% +182.2% 4724262 ± 3% interrupts.CPU65.RES:Rescheduling_interrupts
107519 ± 10% -22.6% 83259 ± 5% interrupts.CPU65.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380128 ± 2% interrupts.CPU65.TRM:Thermal_event_interrupts
95135 ± 8% -22.4% 73795 ± 5% interrupts.CPU66.CAL:Function_call_interrupts
1785413 +12.0% 2000352 interrupts.CPU66.LOC:Local_timer_interrupts
616.75 ±171% +902.4% 6182 ± 28% interrupts.CPU66.NMI:Non-maskable_interrupts
616.75 ±171% +902.4% 6182 ± 28% interrupts.CPU66.PMI:Performance_monitoring_interrupts
1646166 ± 6% +169.3% 4433267 ± 6% interrupts.CPU66.RES:Rescheduling_interrupts
107190 ± 7% -22.5% 83034 ± 4% interrupts.CPU66.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380118 ± 2% interrupts.CPU66.TRM:Thermal_event_interrupts
95627 ± 10% -22.4% 74249 ± 5% interrupts.CPU67.CAL:Function_call_interrupts
1785314 +12.1% 2001128 interrupts.CPU67.LOC:Local_timer_interrupts
1611146 ± 4% +183.8% 4572204 ± 7% interrupts.CPU67.RES:Rescheduling_interrupts
109283 ± 11% -22.5% 84676 ± 4% interrupts.CPU67.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380142 ± 2% interrupts.CPU67.TRM:Thermal_event_interrupts
95479 ± 11% -20.6% 75791 ± 5% interrupts.CPU68.CAL:Function_call_interrupts
1785192 +12.0% 1999411 interrupts.CPU68.LOC:Local_timer_interrupts
1575394 ± 7% +189.9% 4567277 ± 4% interrupts.CPU68.RES:Rescheduling_interrupts
106972 ± 12% -18.6% 87039 ± 5% interrupts.CPU68.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380129 ± 2% interrupts.CPU68.TRM:Thermal_event_interrupts
93067 ± 14% -19.9% 74538 ± 4% interrupts.CPU69.CAL:Function_call_interrupts
1785257 +12.1% 2000636 interrupts.CPU69.LOC:Local_timer_interrupts
1578663 ± 9% +192.9% 4624518 ± 3% interrupts.CPU69.RES:Rescheduling_interrupts
104271 ± 12% -18.5% 84989 ± 2% interrupts.CPU69.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380066 ± 2% interrupts.CPU69.TRM:Thermal_event_interrupts
474.00 ± 3% +49.7% 709.75 ± 24% interrupts.CPU7.46:IR-PCI-MSI.1572871-edge.eth0-TxRx-7
92062 ± 9% -19.7% 73884 ± 5% interrupts.CPU7.CAL:Function_call_interrupts
1784266 +12.4% 2004980 interrupts.CPU7.LOC:Local_timer_interrupts
2505703 ± 11% -19.0% 2029187 ± 5% interrupts.CPU7.RES:Rescheduling_interrupts
107698 ± 8% -18.4% 87903 ± 6% interrupts.CPU7.TLB:TLB_shootdowns
388281 ± 12% +27.8% 496347 ± 4% interrupts.CPU7.TRM:Thermal_event_interrupts
92337 ± 12% -17.5% 76148 ± 2% interrupts.CPU70.CAL:Function_call_interrupts
1785161 +12.0% 1999757 interrupts.CPU70.LOC:Local_timer_interrupts
1562360 ± 4% +170.8% 4231208 ± 9% interrupts.CPU70.RES:Rescheduling_interrupts
103862 ± 10% -18.8% 84371 ± 3% interrupts.CPU70.TLB:TLB_shootdowns
329296 ± 3% +15.4% 380141 ± 2% interrupts.CPU70.TRM:Thermal_event_interrupts
95829 ± 12% -21.2% 75509 ± 5% interrupts.CPU71.CAL:Function_call_interrupts
1784894 +12.1% 2001287 interrupts.CPU71.LOC:Local_timer_interrupts
1533567 ± 7% +196.9% 4553440 interrupts.CPU71.RES:Rescheduling_interrupts
105883 ± 11% -20.4% 84248 ± 6% interrupts.CPU71.TLB:TLB_shootdowns
329247 ± 3% +15.5% 380146 ± 2% interrupts.CPU71.TRM:Thermal_event_interrupts
88372 ± 11% -20.3% 70389 ± 7% interrupts.CPU8.CAL:Function_call_interrupts
1784452 +12.4% 2005305 interrupts.CPU8.LOC:Local_timer_interrupts
103992 ± 10% -17.4% 85887 ± 5% interrupts.CPU8.TLB:TLB_shootdowns
388422 ± 12% +27.8% 496557 ± 4% interrupts.CPU8.TRM:Thermal_event_interrupts
89290 ± 10% -19.1% 72221 ± 6% interrupts.CPU9.CAL:Function_call_interrupts
1784716 +12.4% 2005710 interrupts.CPU9.LOC:Local_timer_interrupts
104340 ± 10% -16.0% 87647 ± 3% interrupts.CPU9.TLB:TLB_shootdowns
388408 ± 12% +27.8% 496557 ± 4% interrupts.CPU9.TRM:Thermal_event_interrupts
1.285e+08 +12.2% 1.442e+08 interrupts.LOC:Local_timer_interrupts
144.00 +50.0% 216.00 interrupts.MCP:Machine_check_polls
1.677e+08 ± 8% +50.4% 2.522e+08 ± 2% interrupts.RES:Rescheduling_interrupts
7624145 ± 10% -19.7% 6125835 ± 5% interrupts.TLB:TLB_shootdowns
25836833 ± 8% +22.2% 31559779 ± 2% interrupts.TRM:Thermal_event_interrupts
***************************************************************************************************
lkp-bdw-ep6: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
=========================================================================================
compiler/cpufreq_governor/ipc/kconfig/mode/nr_threads/rootfs/tbox_group/testcase/ucode:
gcc-7/performance/pipe/x86_64-rhel-7.6/process/50%/debian-x86_64-2019-11-14.cgz/lkp-bdw-ep6/hackbench/0xb000038
commit:
43e9f7f231 ("sched/fair: Start tracking SCHED_IDLE tasks count in cfs_rq")
3c29e651e1 ("sched/fair: Fall back to sched-idle CPU if idle CPU isn't found")
43e9f7f231e40e45 3c29e651e16dd3b3179cfb2d055
---------------- ---------------------------
       fail:runs  %reproduction  fail:runs
           |            |            |
          1:4         -25%          :4     dmesg.WARNING:at#for_ip_interrupt_entry/0x
%stddev %change %stddev
\ | \
134091 -10.6% 119937 hackbench.throughput
3.352e+09 ± 2% +16.1% 3.892e+09 ± 5% hackbench.time.involuntary_context_switches
535571 ± 2% +10.6% 592420 ± 4% numa-meminfo.node0.FilePages
12614 ± 18% -12.2% 11078 ± 18% numa-meminfo.node1.Mapped
3472 ± 6% +18.0% 4096 slabinfo.skbuff_head_cache.active_objs
3568 ± 6% +15.7% 4128 slabinfo.skbuff_head_cache.num_objs
730128 ± 5% -52.2% 349073 ± 25% turbostat.C3
14.77 +5.9% 15.64 turbostat.RAMWatt
24613783 ± 5% -32.6% 16595193 ± 23% numa-numastat.node0.local_node
24632941 ± 5% -32.6% 16605181 ± 23% numa-numastat.node0.numa_hit
25827961 -29.9% 18097150 ± 19% numa-numastat.node1.local_node
25836948 -29.9% 18115676 ± 19% numa-numastat.node1.numa_hit
50467170 ± 2% -31.2% 34733456 ± 10% proc-vmstat.numa_hit
50439016 ± 2% -31.2% 34704933 ± 10% proc-vmstat.numa_local
50622650 ± 2% -31.1% 34896854 ± 10% proc-vmstat.pgalloc_normal
50548381 ± 2% -31.1% 34834391 ± 10% proc-vmstat.pgfree
79.00 +2.2% 80.75 vmstat.cpu.sy
19.00 -7.9% 17.50 ± 2% vmstat.cpu.us
13450755 +4.6% 14066265 vmstat.system.cs
1122772 -12.1% 986681 vmstat.system.in
133891 ± 2% +10.6% 148112 ± 4% numa-vmstat.node0.nr_file_pages
12301995 ± 4% -30.2% 8590153 ± 22% numa-vmstat.node0.numa_hit
12282910 ± 4% -30.1% 8580139 ± 22% numa-vmstat.node0.numa_local
12964354 ± 5% -31.2% 8914408 ± 18% numa-vmstat.node1.numa_hit
12806275 ± 5% -31.7% 8746928 ± 19% numa-vmstat.node1.numa_local
33816695 ± 10% +17.8% 39824104 ± 7% cpuidle.C1.time
21015951 ± 3% +19.6% 25127666 ± 7% cpuidle.C1E.time
730991 ± 5% -52.1% 350128 ± 25% cpuidle.C3.usage
1.497e+08 ± 28% +41.6% 2.119e+08 ± 15% cpuidle.C6.time
6771631 ± 4% -45.2% 3713804 ± 15% cpuidle.POLL.time
5209842 ± 4% -44.1% 2913245 ± 15% cpuidle.POLL.usage
26965506 +5.5% 28444381 ± 4% sched_debug.cfs_rq:/.min_vruntime.avg
28257967 +6.6% 30126596 ± 4% sched_debug.cfs_rq:/.min_vruntime.max
582203 ± 9% +37.0% 797353 ± 8% sched_debug.cfs_rq:/.min_vruntime.stddev
4.65 ± 8% -10.7% 4.16 ± 6% sched_debug.cfs_rq:/.runnable_load_avg.stddev
4620 ± 6% -10.7% 4125 ± 4% sched_debug.cfs_rq:/.runnable_weight.stddev
582831 ± 9% +36.6% 796226 ± 7% sched_debug.cfs_rq:/.spread0.stddev
154.73 ± 2% -14.7% 132.00 ± 3% sched_debug.cfs_rq:/.util_avg.stddev
120.93 ± 25% +43.5% 173.50 ± 19% sched_debug.cfs_rq:/.util_est_enqueued.min
184.93 ± 2% -11.0% 164.54 ± 3% sched_debug.cfs_rq:/.util_est_enqueued.stddev
182135 ± 18% +57.1% 286098 ± 6% sched_debug.cpu.avg_idle.avg
185926 ± 19% +49.0% 276991 ± 9% sched_debug.cpu.avg_idle.stddev
3.32 ± 4% -10.8% 2.96 ± 3% sched_debug.cpu.nr_running.stddev
46710201 +9.0% 50934866 ± 5% sched_debug.cpu.nr_switches.avg
50053735 +16.9% 58491253 ± 7% sched_debug.cpu.nr_switches.max
43751407 +6.7% 46698664 ± 4% sched_debug.cpu.nr_switches.min
1344975 ± 10% +78.9% 2406504 ± 16% sched_debug.cpu.nr_switches.stddev
1.63 ± 3% -32.8% 1.09 ± 11% sched_debug.cpu.nr_uninterruptible.avg
1017 ± 27% -59.5% 411.75 ± 16% interrupts.40:IR-PCI-MSI.1572871-edge.eth0-TxRx-6
6547829 -14.4% 5603910 ± 5% interrupts.CPU16.RES:Rescheduling_interrupts
1017 ± 27% -59.5% 411.75 ± 16% interrupts.CPU19.40:IR-PCI-MSI.1572871-edge.eth0-TxRx-6
6801598 -14.7% 5801847 ± 5% interrupts.CPU19.RES:Rescheduling_interrupts
6609756 -13.7% 5703084 ± 8% interrupts.CPU20.RES:Rescheduling_interrupts
6446878 ± 2% -11.8% 5689145 ± 7% interrupts.CPU21.RES:Rescheduling_interrupts
6634825 -9.4% 6008203 ± 6% interrupts.CPU24.RES:Rescheduling_interrupts
6801182 -9.8% 6137528 ± 3% interrupts.CPU25.RES:Rescheduling_interrupts
6687252 -12.9% 5821967 ± 6% interrupts.CPU28.RES:Rescheduling_interrupts
6583387 -10.9% 5868659 ± 6% interrupts.CPU29.RES:Rescheduling_interrupts
6585631 -16.6% 5490726 ± 9% interrupts.CPU37.RES:Rescheduling_interrupts
6759973 -13.2% 5867800 ± 7% interrupts.CPU57.RES:Rescheduling_interrupts
6459616 ± 2% -10.5% 5783047 ± 6% interrupts.CPU58.RES:Rescheduling_interrupts
6580818 ± 2% -16.9% 5468945 ± 3% interrupts.CPU60.RES:Rescheduling_interrupts
6723178 ± 2% -12.4% 5887778 ± 3% interrupts.CPU63.RES:Rescheduling_interrupts
6480129 -16.2% 5427168 ± 7% interrupts.CPU64.RES:Rescheduling_interrupts
6439558 ± 2% -11.6% 5691160 ± 5% interrupts.CPU65.RES:Rescheduling_interrupts
6662822 -12.0% 5866390 ± 7% interrupts.CPU68.RES:Rescheduling_interrupts
6602178 -12.0% 5809760 ± 7% interrupts.CPU7.RES:Rescheduling_interrupts
6616172 -14.1% 5685889 ± 7% interrupts.CPU73.RES:Rescheduling_interrupts
6591477 -13.8% 5682655 ± 8% interrupts.CPU81.RES:Rescheduling_interrupts
13.97 +15.8% 16.18 perf-stat.i.MPKI
1.70 -0.1 1.65 perf-stat.i.branch-miss-rate%
3.607e+08 -1.6% 3.549e+08 perf-stat.i.branch-misses
1.22 ± 6% +0.8 2.01 ± 18% perf-stat.i.cache-miss-rate%
17650711 ± 6% +98.0% 34953649 ± 18% perf-stat.i.cache-misses
1.496e+09 +17.3% 1.755e+09 perf-stat.i.cache-references
13503062 +4.6% 14122231 perf-stat.i.context-switches
2946654 ± 2% +18.5% 3491269 ± 3% perf-stat.i.cpu-migrations
29365 ± 8% -30.6% 20387 ± 31% perf-stat.i.cycles-between-cache-misses
0.35 ± 2% +0.0 0.36 ± 2% perf-stat.i.dTLB-store-miss-rate%
2.048e+10 -2.1% 2.006e+10 perf-stat.i.dTLB-stores
55.17 -1.4 53.74 perf-stat.i.iTLB-load-miss-rate%
1.019e+08 +5.5% 1.074e+08 perf-stat.i.iTLB-loads
3068 -3.0% 2975 perf-stat.i.minor-faults
9195204 ± 6% +113.0% 19589023 ± 18% perf-stat.i.node-load-misses
28132 ± 6% +125.3% 63396 ± 13% perf-stat.i.node-loads
58.92 -9.8 49.16 ± 4% perf-stat.i.node-store-miss-rate%
3588196 ± 6% +60.5% 5760265 ± 19% perf-stat.i.node-store-misses
2415953 ± 5% +145.2% 5924042 ± 18% perf-stat.i.node-stores
3068 -3.0% 2975 perf-stat.i.page-faults
13.96 +16.0% 16.19 perf-stat.overall.MPKI
1.70 -0.1 1.65 perf-stat.overall.branch-miss-rate%
1.18 ± 6% +0.8 1.99 ± 18% perf-stat.overall.cache-miss-rate%
13823 ± 6% -47.8% 7221 ± 20% perf-stat.overall.cycles-between-cache-misses
0.35 ± 2% +0.0 0.36 ± 2% perf-stat.overall.dTLB-store-miss-rate%
55.11 -1.4 53.70 perf-stat.overall.iTLB-load-miss-rate%
59.75 -10.5 49.23 perf-stat.overall.node-store-miss-rate%
78192 +13.0% 88370 perf-stat.overall.path-length
3.601e+08 -1.6% 3.543e+08 perf-stat.ps.branch-misses
17622620 ± 6% +98.0% 34899005 ± 18% perf-stat.ps.cache-misses
1.494e+09 +17.3% 1.752e+09 perf-stat.ps.cache-references
13480657 +4.6% 14099389 perf-stat.ps.context-switches
2941749 ± 2% +18.5% 3485631 ± 3% perf-stat.ps.cpu-migrations
2.045e+10 -2.1% 2.003e+10 perf-stat.ps.dTLB-stores
1.017e+08 +5.5% 1.072e+08 perf-stat.ps.iTLB-loads
3064 -3.0% 2972 perf-stat.ps.minor-faults
9180406 ± 6% +113.0% 19558101 ± 18% perf-stat.ps.node-load-misses
28101 ± 6% +125.3% 63314 ± 13% perf-stat.ps.node-loads
3582467 ± 6% +60.5% 5751138 ± 19% perf-stat.ps.node-store-misses
2412086 ± 5% +145.2% 5914636 ± 18% perf-stat.ps.node-stores
3064 -3.0% 2972 perf-stat.ps.page-faults
223485 +9.1% 243918 ± 5% softirqs.CPU0.TIMER
11887 ± 11% -18.2% 9723 ± 6% softirqs.CPU1.SCHED
215411 +9.2% 235267 ± 6% softirqs.CPU1.TIMER
11293 ± 3% -14.3% 9678 ± 10% softirqs.CPU10.SCHED
213916 +9.8% 234908 ± 5% softirqs.CPU10.TIMER
212758 +7.7% 229141 ± 5% softirqs.CPU11.TIMER
11098 ± 2% -15.9% 9337 ± 10% softirqs.CPU12.SCHED
213397 +9.0% 232706 ± 6% softirqs.CPU12.TIMER
212419 +8.3% 230115 ± 6% softirqs.CPU13.TIMER
10777 ± 4% -12.5% 9426 ± 6% softirqs.CPU14.SCHED
11241 ± 3% -11.5% 9950 ± 6% softirqs.CPU15.SCHED
214604 +10.0% 236116 ± 8% softirqs.CPU15.TIMER
11147 -14.9% 9482 ± 8% softirqs.CPU16.SCHED
216343 +8.4% 234558 ± 6% softirqs.CPU16.TIMER
214680 +7.0% 229748 ± 6% softirqs.CPU17.TIMER
212791 +8.6% 230988 ± 6% softirqs.CPU19.TIMER
11211 ± 4% -10.4% 10047 ± 6% softirqs.CPU2.SCHED
219918 +9.8% 241513 ± 6% softirqs.CPU2.TIMER
10825 ± 4% -14.3% 9273 ± 7% softirqs.CPU20.SCHED
215209 +8.8% 234149 ± 6% softirqs.CPU20.TIMER
10869 ± 4% -11.4% 9626 ± 4% softirqs.CPU21.SCHED
216225 +8.7% 235105 ± 5% softirqs.CPU21.TIMER
13013 ± 7% -14.0% 11192 ± 3% softirqs.CPU22.SCHED
220406 +8.9% 239975 ± 4% softirqs.CPU22.TIMER
11925 -14.9% 10145 ± 3% softirqs.CPU23.SCHED
220231 +7.6% 237056 ± 5% softirqs.CPU23.TIMER
11140 ± 2% -11.5% 9861 ± 7% softirqs.CPU24.SCHED
216091 +8.9% 235338 ± 5% softirqs.CPU24.TIMER
11438 ± 3% -16.3% 9578 ± 2% softirqs.CPU25.SCHED
214880 +9.4% 235143 ± 5% softirqs.CPU25.TIMER
214888 +8.8% 233874 ± 5% softirqs.CPU26.TIMER
11410 ± 2% -13.7% 9844 ± 10% softirqs.CPU27.SCHED
215012 +8.7% 233790 ± 4% softirqs.CPU27.TIMER
220618 ± 2% +7.8% 237788 ± 5% softirqs.CPU28.TIMER
11096 ± 5% -14.3% 9513 ± 6% softirqs.CPU29.SCHED
217194 +8.2% 235108 ± 5% softirqs.CPU29.TIMER
11445 ± 2% -16.7% 9536 ± 10% softirqs.CPU3.SCHED
214692 +9.1% 234221 ± 5% softirqs.CPU3.TIMER
11127 ± 5% -10.5% 9954 ± 6% softirqs.CPU30.SCHED
214861 +8.2% 232532 ± 5% softirqs.CPU30.TIMER
217582 +7.8% 234493 ± 4% softirqs.CPU31.TIMER
214606 +8.8% 233473 ± 4% softirqs.CPU32.TIMER
11299 ± 2% -11.7% 9975 ± 6% softirqs.CPU33.SCHED
11503 ± 2% -16.0% 9658 ± 10% softirqs.CPU34.SCHED
213233 +10.9% 236580 ± 6% softirqs.CPU34.TIMER
11162 ± 2% -11.6% 9863 ± 7% softirqs.CPU35.SCHED
11241 -13.9% 9678 ± 6% softirqs.CPU36.SCHED
215681 +9.2% 235446 ± 4% softirqs.CPU36.TIMER
11358 -15.2% 9632 ± 6% softirqs.CPU37.SCHED
219691 ± 2% +18.5% 260407 ± 13% softirqs.CPU37.TIMER
11570 ± 2% -13.1% 10049 ± 6% softirqs.CPU38.SCHED
215559 +8.2% 233189 ± 4% softirqs.CPU38.TIMER
11295 ± 4% -13.1% 9819 ± 10% softirqs.CPU39.SCHED
215781 ± 2% +7.6% 232108 ± 5% softirqs.CPU39.TIMER
11506 ± 7% -13.0% 10006 ± 6% softirqs.CPU4.SCHED
214973 +8.9% 234140 ± 6% softirqs.CPU4.TIMER
11304 ± 2% -13.7% 9759 ± 10% softirqs.CPU40.SCHED
213827 +10.1% 235341 ± 4% softirqs.CPU40.TIMER
11462 ± 4% -13.5% 9913 ± 8% softirqs.CPU41.SCHED
215879 ± 2% +7.2% 231499 ± 5% softirqs.CPU41.TIMER
11418 ± 3% -14.0% 9819 ± 8% softirqs.CPU43.SCHED
214864 +8.3% 232698 ± 4% softirqs.CPU43.TIMER
10706 ± 5% -13.9% 9223 ± 11% softirqs.CPU44.SCHED
217386 +9.4% 237916 ± 5% softirqs.CPU44.TIMER
215191 +9.7% 236017 ± 5% softirqs.CPU45.TIMER
214214 +9.9% 235466 ± 6% softirqs.CPU46.TIMER
214471 +9.9% 235675 ± 5% softirqs.CPU47.TIMER
11251 ± 2% -13.0% 9785 ± 8% softirqs.CPU48.SCHED
212500 +9.9% 233451 ± 6% softirqs.CPU48.TIMER
213609 +9.9% 234657 ± 5% softirqs.CPU49.TIMER
216836 +8.5% 235225 ± 5% softirqs.CPU5.TIMER
215854 +9.0% 235383 ± 5% softirqs.CPU50.TIMER
11468 -14.4% 9820 ± 9% softirqs.CPU51.SCHED
215402 +8.8% 234360 ± 5% softirqs.CPU51.TIMER
214152 +8.8% 233038 ± 5% softirqs.CPU52.TIMER
212843 +8.4% 230716 ± 5% softirqs.CPU53.TIMER
11373 ± 2% -14.9% 9679 ± 7% softirqs.CPU54.SCHED
214008 +9.5% 234380 ± 5% softirqs.CPU54.TIMER
212619 +7.5% 228539 ± 5% softirqs.CPU55.TIMER
11136 ± 3% -16.4% 9312 ± 10% softirqs.CPU56.SCHED
213404 +9.0% 232542 ± 6% softirqs.CPU56.TIMER
11171 ± 4% -11.9% 9837 ± 7% softirqs.CPU57.SCHED
212283 +8.6% 230474 ± 6% softirqs.CPU57.TIMER
11137 ± 4% -14.0% 9578 ± 6% softirqs.CPU58.SCHED
11226 ± 4% -17.2% 9289 ± 4% softirqs.CPU59.SCHED
214193 +23.1% 263668 ± 22% softirqs.CPU59.TIMER
217195 +8.2% 234975 ± 5% softirqs.CPU6.TIMER
11051 ± 4% -14.4% 9460 ± 9% softirqs.CPU60.SCHED
217131 +7.9% 234363 ± 5% softirqs.CPU60.TIMER
11206 ± 3% -12.6% 9793 ± 9% softirqs.CPU61.SCHED
211791 +9.2% 231302 ± 5% softirqs.CPU61.TIMER
214956 ± 2% +8.8% 233770 ± 5% softirqs.CPU62.TIMER
11180 ± 3% -14.8% 9526 ± 8% softirqs.CPU63.SCHED
213413 +8.7% 232008 ± 5% softirqs.CPU63.TIMER
215674 +8.1% 233086 ± 6% softirqs.CPU64.TIMER
11032 ± 3% -15.7% 9302 ± 7% softirqs.CPU65.SCHED
215870 +9.0% 235386 ± 5% softirqs.CPU65.TIMER
11403 ± 3% -14.2% 9786 ± 6% softirqs.CPU66.SCHED
215968 +9.4% 236175 ± 5% softirqs.CPU66.TIMER
11098 ± 3% -7.4% 10281 ± 4% softirqs.CPU67.SCHED
217846 +8.3% 236023 ± 4% softirqs.CPU67.TIMER
11094 -12.2% 9740 ± 7% softirqs.CPU68.SCHED
215512 +9.4% 235796 ± 5% softirqs.CPU68.TIMER
11321 ± 3% -14.1% 9724 ± 3% softirqs.CPU69.SCHED
213564 +9.0% 232867 ± 5% softirqs.CPU69.TIMER
218822 +8.8% 238080 ± 6% softirqs.CPU7.TIMER
11507 ± 2% -12.9% 10026 ± 4% softirqs.CPU70.SCHED
213928 +9.4% 233986 ± 4% softirqs.CPU70.TIMER
213528 +8.5% 231730 ± 4% softirqs.CPU71.TIMER
11262 ± 2% -11.1% 10013 ± 5% softirqs.CPU72.SCHED
11145 ± 3% -14.5% 9533 ± 8% softirqs.CPU73.SCHED
216467 +9.2% 236409 ± 5% softirqs.CPU73.TIMER
11447 ± 4% -13.2% 9935 ± 4% softirqs.CPU74.SCHED
212578 +9.1% 232022 ± 5% softirqs.CPU74.TIMER
215256 +7.8% 232072 ± 5% softirqs.CPU75.TIMER
11324 ± 4% -11.8% 9985 ± 3% softirqs.CPU76.SCHED
214170 +8.6% 232516 ± 5% softirqs.CPU76.TIMER
11646 ± 5% -15.7% 9824 ± 7% softirqs.CPU77.SCHED
213706 +8.0% 230880 ± 5% softirqs.CPU77.TIMER
11397 ± 6% -13.9% 9810 ± 10% softirqs.CPU78.SCHED
11179 ± 3% -11.5% 9894 ± 5% softirqs.CPU79.SCHED
213215 +8.4% 231081 ± 5% softirqs.CPU79.TIMER
11165 ± 4% -16.2% 9352 ± 9% softirqs.CPU8.SCHED
216508 ± 2% +7.9% 233692 ± 6% softirqs.CPU8.TIMER
11203 ± 2% -13.9% 9643 ± 4% softirqs.CPU80.SCHED
216446 +8.3% 234469 ± 5% softirqs.CPU80.TIMER
11151 ± 2% -12.8% 9726 ± 6% softirqs.CPU81.SCHED
11373 ± 3% -13.0% 9891 ± 6% softirqs.CPU82.SCHED
213597 +8.9% 232608 ± 5% softirqs.CPU82.TIMER
214308 +7.9% 231148 ± 5% softirqs.CPU83.TIMER
214384 +20.9% 259149 ± 15% softirqs.CPU84.TIMER
11458 -13.7% 9887 ± 7% softirqs.CPU86.SCHED
215459 +8.2% 233058 ± 5% softirqs.CPU86.TIMER
11303 ± 4% -12.4% 9904 ± 8% softirqs.CPU87.SCHED
214185 +7.9% 231116 ± 5% softirqs.CPU87.TIMER
11030 ± 2% -13.0% 9593 ± 8% softirqs.CPU9.SCHED
213229 +8.7% 231788 ± 6% softirqs.CPU9.TIMER
995635 -12.9% 867688 ± 5% softirqs.SCHED
19052052 +8.6% 20690944 ± 5% softirqs.TIMER
31.32 -1.6 29.74 perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
30.24 -1.5 28.76 perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
26.06 -0.9 25.17 perf-profile.calltrace.cycles-pp.new_sync_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.35 ± 5% -0.9 1.48 ± 12% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.try_to_wake_up.autoremove_wake_function.__wake_up_common
25.36 -0.8 24.54 perf-profile.calltrace.cycles-pp.pipe_read.new_sync_read.vfs_read.ksys_read.do_syscall_64
2.75 ± 4% -0.7 2.08 ± 7% perf-profile.calltrace.cycles-pp._raw_spin_lock.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
4.24 ± 2% -0.6 3.60 ± 3% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64
3.84 ± 2% -0.6 3.23 ± 2% perf-profile.calltrace.cycles-pp.syscall_return_via_sysret
7.83 -0.5 7.30 ± 2% perf-profile.calltrace.cycles-pp.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up.autoremove_wake_function
1.68 ± 6% -0.5 1.17 ± 10% perf-profile.calltrace.cycles-pp.update_cfs_group.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up
8.02 -0.5 7.53 ± 2% perf-profile.calltrace.cycles-pp.activate_task.ttwu_do_activate.try_to_wake_up.autoremove_wake_function.__wake_up_common
8.08 -0.5 7.60 ± 2% perf-profile.calltrace.cycles-pp.ttwu_do_activate.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
1.63 ± 4% -0.4 1.19 ± 6% perf-profile.calltrace.cycles-pp.mutex_unlock.pipe_write.new_sync_write.vfs_write.ksys_write
1.71 ± 6% -0.4 1.28 ± 8% perf-profile.calltrace.cycles-pp.update_cfs_group.dequeue_task_fair.__schedule.schedule.pipe_wait
2.60 -0.4 2.19 ± 4% perf-profile.calltrace.cycles-pp.security_file_permission.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.03 ± 2% -0.4 1.64 ± 4% perf-profile.calltrace.cycles-pp.security_file_permission.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.47 -0.4 3.08 ± 4% perf-profile.calltrace.cycles-pp.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up
1.95 ± 2% -0.3 1.61 ± 3% perf-profile.calltrace.cycles-pp.copy_page_from_iter.pipe_write.new_sync_write.vfs_write.ksys_write
2.67 -0.3 2.35 ± 3% perf-profile.calltrace.cycles-pp.copy_page_to_iter.pipe_read.new_sync_read.vfs_read.ksys_read
0.56 -0.3 0.26 ±100% perf-profile.calltrace.cycles-pp.fsnotify.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.22 ± 4% -0.3 0.94 ± 2% perf-profile.calltrace.cycles-pp.mutex_lock.pipe_write.new_sync_write.vfs_write.ksys_write
1.25 ± 2% -0.3 1.00 ± 5% perf-profile.calltrace.cycles-pp.selinux_file_permission.security_file_permission.vfs_write.ksys_write.do_syscall_64
2.64 -0.2 2.41 ± 2% perf-profile.calltrace.cycles-pp.dequeue_entity.dequeue_task_fair.__schedule.schedule.pipe_wait
1.53 -0.2 1.34 ± 7% perf-profile.calltrace.cycles-pp.selinux_file_permission.security_file_permission.vfs_read.ksys_read.do_syscall_64
0.59 ± 4% -0.2 0.40 ± 57% perf-profile.calltrace.cycles-pp.__enqueue_entity.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate
1.34 -0.2 1.16 ± 6% perf-profile.calltrace.cycles-pp.update_load_avg.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate
1.07 ± 2% -0.2 0.90 ± 4% perf-profile.calltrace.cycles-pp.copyin.copy_page_from_iter.pipe_write.new_sync_write.vfs_write
1.05 -0.2 0.88 ± 6% perf-profile.calltrace.cycles-pp.update_load_avg.dequeue_entity.dequeue_task_fair.__schedule.schedule
1.56 ± 2% -0.2 1.38 ± 2% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyout.copy_page_to_iter.pipe_read.new_sync_read
1.17 ± 4% -0.2 1.01 ± 2% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__wake_up_common_lock.pipe_write.new_sync_write.vfs_write
0.98 ± 2% -0.2 0.82 ± 4% perf-profile.calltrace.cycles-pp.__fdget_pos.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.66 ± 2% -0.2 1.50 ± 5% perf-profile.calltrace.cycles-pp.copyout.copy_page_to_iter.pipe_read.new_sync_read.vfs_read
0.73 ± 3% -0.2 0.57 ± 2% perf-profile.calltrace.cycles-pp.file_has_perm.security_file_permission.vfs_read.ksys_read.do_syscall_64
0.96 ± 2% -0.1 0.82 ± 3% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyin.copy_page_from_iter.pipe_write.new_sync_write
0.89 ± 2% -0.1 0.77 ± 3% perf-profile.calltrace.cycles-pp.__fget_light.__fdget_pos.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.66 ± 6% -0.1 0.54 ± 2% perf-profile.calltrace.cycles-pp.file_has_perm.security_file_permission.vfs_write.ksys_write.do_syscall_64
2.42 -0.1 2.32 perf-profile.calltrace.cycles-pp.switch_mm_irqs_off.__schedule.schedule.pipe_wait.pipe_read
0.89 ± 4% -0.1 0.79 ± 2% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__wake_up_common_lock.pipe_write.new_sync_write
0.74 ± 2% -0.1 0.66 ± 3% perf-profile.calltrace.cycles-pp.update_curr.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate
0.84 -0.1 0.79 ± 3% perf-profile.calltrace.cycles-pp.set_next_entity.pick_next_task_fair.__schedule.schedule.pipe_wait
0.56 -0.0 0.54 ± 2% perf-profile.calltrace.cycles-pp.update_curr.reweight_entity.enqueue_task_fair.activate_task.ttwu_do_activate
0.77 ± 2% +0.1 0.82 perf-profile.calltrace.cycles-pp.check_preempt_wakeup.check_preempt_curr.ttwu_do_wakeup.try_to_wake_up.autoremove_wake_function
0.90 ± 3% +0.1 0.97 ± 2% perf-profile.calltrace.cycles-pp.native_write_msr
0.69 ± 3% +0.1 0.77 ± 3% perf-profile.calltrace.cycles-pp.update_rq_clock.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
0.80 ± 4% +0.1 0.89 ± 2% perf-profile.calltrace.cycles-pp.___perf_sw_event.__schedule.schedule.pipe_wait.pipe_read
1.10 +0.1 1.20 perf-profile.calltrace.cycles-pp.reweight_entity.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up
0.73 ± 3% +0.1 0.83 ± 2% perf-profile.calltrace.cycles-pp.__switch_to
1.05 +0.1 1.16 perf-profile.calltrace.cycles-pp.reweight_entity.dequeue_task_fair.__schedule.schedule.pipe_wait
0.63 ± 4% +0.1 0.75 ± 2% perf-profile.calltrace.cycles-pp.put_prev_entity.pick_next_task_fair.__schedule.schedule.exit_to_usermode_loop
0.99 ± 5% +0.1 1.12 ± 5% perf-profile.calltrace.cycles-pp.set_task_cpu.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
1.07 ± 3% +0.1 1.21 ± 2% perf-profile.calltrace.cycles-pp.__switch_to_asm
0.64 ± 6% +0.2 0.80 ± 2% perf-profile.calltrace.cycles-pp._raw_spin_lock.__schedule.schedule.pipe_wait.pipe_read
1.41 ± 3% +0.2 1.59 ± 2% perf-profile.calltrace.cycles-pp.switch_fpu_return.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.77 ± 5% +0.2 1.00 ± 5% perf-profile.calltrace.cycles-pp.load_new_mm_cr3.switch_mm_irqs_off.__schedule.schedule.exit_to_usermode_loop
1.73 ± 3% +0.3 2.06 ± 3% perf-profile.calltrace.cycles-pp.pick_next_task_fair.__schedule.schedule.exit_to_usermode_loop.do_syscall_64
0.14 ±173% +0.5 0.63 ± 5% perf-profile.calltrace.cycles-pp.update_rq_clock.__schedule.schedule.pipe_wait.pipe_read
0.13 ±173% +0.5 0.64 ± 5% perf-profile.calltrace.cycles-pp.migrate_task_rq_fair.set_task_cpu.try_to_wake_up.autoremove_wake_function.__wake_up_common
1.85 ± 5% +0.5 2.38 ± 3% perf-profile.calltrace.cycles-pp.switch_mm_irqs_off.__schedule.schedule.exit_to_usermode_loop.do_syscall_64
0.00 +0.6 0.59 ± 4% perf-profile.calltrace.cycles-pp.mutex_lock.pipe_wait.pipe_read.new_sync_read.vfs_read
0.00 +0.6 0.60 ± 5% perf-profile.calltrace.cycles-pp.__update_load_avg_cfs_rq.dequeue_task_fair.__schedule.schedule.pipe_wait
0.00 +0.7 0.67 ± 5% perf-profile.calltrace.cycles-pp.__update_load_avg_cfs_rq.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up
0.15 ±173% +1.0 1.14 ± 12% perf-profile.calltrace.cycles-pp.cpumask_next_wrap.select_idle_sibling.select_task_rq_fair.try_to_wake_up.autoremove_wake_function
87.66 +1.1 88.78 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
86.87 +1.1 88.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
5.59 ± 4% +1.4 6.96 ± 3% perf-profile.calltrace.cycles-pp.__schedule.schedule.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
5.72 ± 4% +1.4 7.12 ± 3% perf-profile.calltrace.cycles-pp.schedule.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
5.87 ± 4% +1.4 7.28 ± 3% perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.70 ± 30% +3.3 4.98 ± 13% perf-profile.calltrace.cycles-pp.available_idle_cpu.select_idle_sibling.select_task_rq_fair.try_to_wake_up.autoremove_wake_function
35.54 +3.3 38.87 perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
34.28 +3.5 37.82 perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
31.12 +4.1 35.22 ± 2% perf-profile.calltrace.cycles-pp.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
30.49 +4.1 34.63 ± 2% perf-profile.calltrace.cycles-pp.pipe_write.new_sync_write.vfs_write.ksys_write.do_syscall_64
22.54 ± 2% +6.0 28.53 ± 3% perf-profile.calltrace.cycles-pp.__wake_up_common_lock.pipe_write.new_sync_write.vfs_write.ksys_write
20.58 ± 2% +6.2 26.76 ± 3% perf-profile.calltrace.cycles-pp.__wake_up_common.__wake_up_common_lock.pipe_write.new_sync_write.vfs_write
19.97 ± 2% +6.2 26.16 ± 3% perf-profile.calltrace.cycles-pp.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.pipe_write.new_sync_write
19.73 ± 3% +6.2 25.92 ± 3% perf-profile.calltrace.cycles-pp.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.pipe_write
3.39 ± 20% +7.0 10.39 ± 12% perf-profile.calltrace.cycles-pp.select_idle_sibling.select_task_rq_fair.try_to_wake_up.autoremove_wake_function.__wake_up_common
4.89 ± 15% +7.1 11.98 ± 10% perf-profile.calltrace.cycles-pp.select_task_rq_fair.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
31.34 -1.6 29.75 perf-profile.children.cycles-pp.ksys_read
30.28 -1.5 28.79 perf-profile.children.cycles-pp.vfs_read
4.14 ± 5% -1.3 2.85 ± 7% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
3.51 ± 6% -1.0 2.54 ± 8% perf-profile.children.cycles-pp.update_cfs_group
26.10 -0.9 25.20 perf-profile.children.cycles-pp.new_sync_read
25.42 -0.8 24.59 perf-profile.children.cycles-pp.pipe_read
4.67 -0.8 3.86 ± 4% perf-profile.children.cycles-pp.security_file_permission
4.33 ± 2% -0.7 3.65 ± 2% perf-profile.children.cycles-pp.syscall_return_via_sysret
4.24 ± 2% -0.6 3.60 ± 3% perf-profile.children.cycles-pp.entry_SYSCALL_64
2.18 ± 4% -0.5 1.64 ± 6% perf-profile.children.cycles-pp.mutex_unlock
7.92 -0.5 7.40 ± 3% perf-profile.children.cycles-pp.enqueue_task_fair
8.10 -0.5 7.61 ± 3% perf-profile.children.cycles-pp.activate_task
8.16 -0.5 7.67 ± 3% perf-profile.children.cycles-pp.ttwu_do_activate
4.12 ± 4% -0.5 3.64 ± 5% perf-profile.children.cycles-pp._raw_spin_lock
2.83 -0.4 2.38 ± 6% perf-profile.children.cycles-pp.selinux_file_permission
3.53 -0.4 3.13 ± 4% perf-profile.children.cycles-pp.enqueue_entity
1.99 ± 2% -0.4 1.64 ± 3% perf-profile.children.cycles-pp.copy_page_from_iter
3.34 -0.4 2.99 ± 4% perf-profile.children.cycles-pp.update_load_avg
2.70 -0.3 2.39 ± 3% perf-profile.children.cycles-pp.copy_page_to_iter
2.66 -0.3 2.35 ± 3% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
0.93 ± 3% -0.3 0.64 ± 7% perf-profile.children.cycles-pp.__mutex_unlock_slowpath
1.41 ± 4% -0.3 1.13 perf-profile.children.cycles-pp.file_has_perm
2.75 -0.3 2.49 ± 2% perf-profile.children.cycles-pp.dequeue_entity
2.29 ± 2% -0.3 2.04 perf-profile.children.cycles-pp.mutex_lock
0.85 ± 3% -0.2 0.61 ± 4% perf-profile.children.cycles-pp.__inode_security_revalidate
0.59 ± 3% -0.2 0.39 ± 6% perf-profile.children.cycles-pp.__mutex_lock
1.79 ± 2% -0.2 1.59 ± 4% perf-profile.children.cycles-pp.__fdget_pos
1.70 ± 2% -0.2 1.52 ± 3% perf-profile.children.cycles-pp.__fget_light
0.93 ± 3% -0.2 0.75 perf-profile.children.cycles-pp.avc_has_perm
1.09 ± 2% -0.2 0.92 ± 4% perf-profile.children.cycles-pp.copyin
2.33 ± 2% -0.2 2.16 perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.96 ± 2% -0.2 0.81 ± 2% perf-profile.children.cycles-pp.___might_sleep
1.67 ± 2% -0.2 1.52 ± 5% perf-profile.children.cycles-pp.copyout
0.83 ± 4% -0.1 0.68 ± 2% perf-profile.children.cycles-pp._cond_resched
0.33 ± 5% -0.1 0.19 ± 10% perf-profile.children.cycles-pp.preempt_schedule_common
0.42 ± 2% -0.1 0.28 ± 8% perf-profile.children.cycles-pp.wake_up_q
0.70 ± 3% -0.1 0.59 ± 3% perf-profile.children.cycles-pp.__might_sleep
0.65 ± 5% -0.1 0.54 ± 7% perf-profile.children.cycles-pp.__fsnotify_parent
0.52 ± 2% -0.1 0.43 perf-profile.children.cycles-pp.__might_fault
0.53 ± 5% -0.1 0.45 perf-profile.children.cycles-pp.current_time
0.84 ± 3% -0.1 0.77 perf-profile.children.cycles-pp.fsnotify
0.16 ± 7% -0.1 0.09 ± 13% perf-profile.children.cycles-pp.swapgs_restore_regs_and_return_to_usermode
0.15 ± 8% -0.1 0.08 ± 15% perf-profile.children.cycles-pp.prepare_exit_to_usermode
0.22 ± 3% -0.1 0.16 ± 7% perf-profile.children.cycles-pp.wake_q_add
0.49 -0.1 0.44 perf-profile.children.cycles-pp.__x86_indirect_thunk_rax
0.26 ± 4% -0.1 0.21 ± 2% perf-profile.children.cycles-pp.rcu_all_qs
0.08 ± 14% -0.1 0.03 ±100% perf-profile.children.cycles-pp.__vfs_read
0.25 ± 4% -0.0 0.20 ± 5% perf-profile.children.cycles-pp.finish_wait
0.38 -0.0 0.34 ± 2% perf-profile.children.cycles-pp.__x64_sys_read
0.43 ± 3% -0.0 0.39 ± 2% perf-profile.children.cycles-pp.prepare_to_wait
0.14 ± 20% -0.0 0.10 ± 4% perf-profile.children.cycles-pp.task_tick_fair
0.16 ± 19% -0.0 0.12 ± 3% perf-profile.children.cycles-pp.scheduler_tick
0.10 ± 8% -0.0 0.07 ± 7% perf-profile.children.cycles-pp.mutex_spin_on_owner
0.18 ± 8% -0.0 0.14 ± 3% perf-profile.children.cycles-pp.inode_has_perm
0.21 ± 2% -0.0 0.18 ± 6% perf-profile.children.cycles-pp.deactivate_task
0.20 ± 2% -0.0 0.17 ± 4% perf-profile.children.cycles-pp.generic_pipe_buf_confirm
0.13 ± 3% -0.0 0.11 ± 4% perf-profile.children.cycles-pp.__sb_end_write
0.14 ± 9% -0.0 0.12 perf-profile.children.cycles-pp.timespec64_trunc
0.14 ± 3% -0.0 0.11 ± 3% perf-profile.children.cycles-pp.native_irq_return_iret
0.11 ± 4% -0.0 0.08 ± 10% perf-profile.children.cycles-pp.smp_reschedule_interrupt
0.17 ± 4% -0.0 0.15 perf-profile.children.cycles-pp.iov_iter_init
0.11 ± 4% -0.0 0.09 ± 10% perf-profile.children.cycles-pp.native_apic_msr_eoi_write
0.09 ± 4% -0.0 0.07 perf-profile.children.cycles-pp.__x2apic_send_IPI_dest
0.15 -0.0 0.13 ± 3% perf-profile.children.cycles-pp.rb_insert_color
0.18 ± 2% -0.0 0.17 ± 2% perf-profile.children.cycles-pp.set_next_buddy
0.08 ± 5% -0.0 0.07 perf-profile.children.cycles-pp.interrupt_entry
0.23 ± 2% +0.0 0.24 perf-profile.children.cycles-pp.cpumask_next
0.67 +0.0 0.69 perf-profile.children.cycles-pp.__calc_delta
0.15 ± 3% +0.0 0.18 ± 6% perf-profile.children.cycles-pp.wakeup_preempt_entity
0.17 ± 6% +0.1 0.23 ± 3% perf-profile.children.cycles-pp.remove_entity_load_avg
0.14 ± 6% +0.1 0.19 ± 7% perf-profile.children.cycles-pp.attach_entity_load_avg
0.80 ± 3% +0.1 0.86 perf-profile.children.cycles-pp.check_preempt_wakeup
0.37 ± 3% +0.1 0.45 ± 4% perf-profile.children.cycles-pp.account_entity_enqueue
0.79 ± 4% +0.1 0.88 ± 5% perf-profile.children.cycles-pp.finish_task_switch
0.80 ± 2% +0.1 0.89 ± 2% perf-profile.children.cycles-pp.put_prev_entity
0.54 ± 2% +0.1 0.65 perf-profile.children.cycles-pp.account_entity_dequeue
1.01 ± 5% +0.1 1.13 ± 5% perf-profile.children.cycles-pp.set_task_cpu
0.49 ± 7% +0.2 0.65 ± 5% perf-profile.children.cycles-pp.migrate_task_rq_fair
1.23 ± 4% +0.2 1.40 ± 2% perf-profile.children.cycles-pp.__switch_to_asm
1.15 ± 2% +0.2 1.33 ± 2% perf-profile.children.cycles-pp.__switch_to
1.42 ± 3% +0.2 1.59 ± 2% perf-profile.children.cycles-pp.switch_fpu_return
0.76 ± 3% +0.2 0.96 ± 3% perf-profile.children.cycles-pp.pick_next_entity
2.21 +0.2 2.42 perf-profile.children.cycles-pp.reweight_entity
2.20 ± 2% +0.2 2.43 perf-profile.children.cycles-pp.load_new_mm_cr3
1.24 ± 5% +0.2 1.48 ± 4% perf-profile.children.cycles-pp.update_rq_clock
1.43 ± 5% +0.3 1.73 ± 2% perf-profile.children.cycles-pp.___perf_sw_event
0.38 ± 15% +0.3 0.68 ± 9% perf-profile.children.cycles-pp.find_next_bit
3.96 +0.3 4.28 perf-profile.children.cycles-pp.pick_next_task_fair
4.43 ± 2% +0.4 4.82 perf-profile.children.cycles-pp.switch_mm_irqs_off
1.11 +0.7 1.83 ± 4% perf-profile.children.cycles-pp.__update_load_avg_cfs_rq
0.46 ± 26% +0.7 1.20 ± 12% perf-profile.children.cycles-pp.cpumask_next_wrap
87.76 +1.1 88.86 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
86.98 +1.1 88.10 perf-profile.children.cycles-pp.do_syscall_64
21.71 +1.3 23.02 perf-profile.children.cycles-pp.__schedule
5.98 ± 4% +1.4 7.34 ± 3% perf-profile.children.cycles-pp.exit_to_usermode_loop
21.59 +1.4 22.99 perf-profile.children.cycles-pp.schedule
1.73 ± 29% +3.3 5.03 ± 13% perf-profile.children.cycles-pp.available_idle_cpu
35.55 +3.3 38.88 perf-profile.children.cycles-pp.ksys_write
34.31 +3.5 37.84 perf-profile.children.cycles-pp.vfs_write
31.14 +4.1 35.23 ± 2% perf-profile.children.cycles-pp.new_sync_write
30.56 +4.2 34.74 ± 2% perf-profile.children.cycles-pp.pipe_write
22.96 ± 2% +6.0 28.93 ± 3% perf-profile.children.cycles-pp.__wake_up_common_lock
20.11 ± 2% +6.1 26.19 ± 3% perf-profile.children.cycles-pp.try_to_wake_up
20.66 ± 2% +6.2 26.82 ± 3% perf-profile.children.cycles-pp.__wake_up_common
20.01 ± 2% +6.2 26.19 ± 3% perf-profile.children.cycles-pp.autoremove_wake_function
3.46 ± 20% +7.0 10.48 ± 12% perf-profile.children.cycles-pp.select_idle_sibling
4.92 ± 15% +7.1 12.01 ± 10% perf-profile.children.cycles-pp.select_task_rq_fair
11.85 ± 3% -2.1 9.72 ± 2% perf-profile.self.cycles-pp.do_syscall_64
4.13 ± 5% -1.3 2.85 ± 8% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
3.49 ± 6% -1.0 2.52 ± 9% perf-profile.self.cycles-pp.update_cfs_group
4.33 ± 2% -0.7 3.64 ± 2% perf-profile.self.cycles-pp.syscall_return_via_sysret
4.24 ± 2% -0.6 3.60 ± 3% perf-profile.self.cycles-pp.entry_SYSCALL_64
2.12 ± 4% -0.5 1.59 ± 6% perf-profile.self.cycles-pp.mutex_unlock
2.61 -0.3 2.31 ± 3% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
1.60 ± 2% -0.3 1.30 ± 5% perf-profile.self.cycles-pp.update_load_avg
1.48 ± 3% -0.2 1.25 ± 2% perf-profile.self.cycles-pp.mutex_lock
1.95 -0.2 1.75 ± 7% perf-profile.self.cycles-pp.selinux_file_permission
1.06 ± 5% -0.2 0.86 ± 5% perf-profile.self.cycles-pp.pipe_write
1.46 ± 2% -0.2 1.28 ± 3% perf-profile.self.cycles-pp.pipe_read
1.65 ± 2% -0.2 1.49 ± 4% perf-profile.self.cycles-pp.__fget_light
0.91 ± 3% -0.2 0.74 perf-profile.self.cycles-pp.avc_has_perm
0.94 ± 2% -0.2 0.79 ± 2% perf-profile.self.cycles-pp.___might_sleep
1.84 -0.1 1.72 ± 2% perf-profile.self.cycles-pp.update_curr
1.05 -0.1 0.93 perf-profile.self.cycles-pp.enqueue_task_fair
0.63 ± 3% -0.1 0.52 ± 2% perf-profile.self.cycles-pp.__might_sleep
0.60 ± 6% -0.1 0.51 ± 7% perf-profile.self.cycles-pp.__fsnotify_parent
0.29 ± 3% -0.1 0.19 ± 5% perf-profile.self.cycles-pp.__mutex_lock
0.38 ± 4% -0.1 0.29 ± 4% perf-profile.self.cycles-pp.vfs_write
0.51 -0.1 0.43 ± 4% perf-profile.self.cycles-pp.copy_page_to_iter
0.49 -0.1 0.41 ± 2% perf-profile.self.cycles-pp.new_sync_write
0.35 ± 4% -0.1 0.28 ± 2% perf-profile.self.cycles-pp.copy_page_from_iter
0.81 ± 3% -0.1 0.75 perf-profile.self.cycles-pp.fsnotify
0.26 -0.1 0.19 ± 3% perf-profile.self.cycles-pp.__inode_security_revalidate
0.20 ± 4% -0.1 0.13 ± 6% perf-profile.self.cycles-pp.__mutex_unlock_slowpath
0.29 ± 5% -0.1 0.23 ± 3% perf-profile.self.cycles-pp.file_has_perm
0.22 -0.1 0.16 ± 8% perf-profile.self.cycles-pp.wake_q_add
0.57 ± 2% -0.1 0.52 perf-profile.self.cycles-pp.new_sync_read
0.34 ± 2% -0.1 0.29 ± 3% perf-profile.self.cycles-pp.dequeue_entity
0.26 ± 3% -0.1 0.21 ± 3% perf-profile.self.cycles-pp.ksys_read
0.28 ± 2% -0.0 0.23 perf-profile.self.cycles-pp.security_file_permission
0.22 ± 3% -0.0 0.17 ± 4% perf-profile.self.cycles-pp.finish_wait
0.25 -0.0 0.21 ± 4% perf-profile.self.cycles-pp.ksys_write
0.88 -0.0 0.83 perf-profile.self.cycles-pp.__lock_text_start
0.66 -0.0 0.61 perf-profile.self.cycles-pp.vfs_read
1.37 -0.0 1.32 perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.29 ± 4% -0.0 0.25 perf-profile.self.cycles-pp.current_time
0.20 ± 3% -0.0 0.16 ± 4% perf-profile.self.cycles-pp.rcu_all_qs
0.10 ± 8% -0.0 0.06 ± 6% perf-profile.self.cycles-pp.mutex_spin_on_owner
0.42 -0.0 0.39 ± 2% perf-profile.self.cycles-pp.__x86_indirect_thunk_rax
0.34 -0.0 0.31 ± 2% perf-profile.self.cycles-pp.__x64_sys_read
0.09 ± 5% -0.0 0.05 ± 9% perf-profile.self.cycles-pp.wake_up_q
0.15 ± 8% -0.0 0.12 ± 4% perf-profile.self.cycles-pp.inode_has_perm
0.14 ± 5% -0.0 0.11 ± 9% perf-profile.self.cycles-pp.rb_next
0.20 ± 2% -0.0 0.17 ± 4% perf-profile.self.cycles-pp.deactivate_task
0.11 ± 4% -0.0 0.08 ± 8% perf-profile.self.cycles-pp.__fdget_pos
0.24 ± 2% -0.0 0.22 ± 3% perf-profile.self.cycles-pp.prepare_to_wait
0.14 ± 3% -0.0 0.11 ± 3% perf-profile.self.cycles-pp.native_irq_return_iret
0.13 ± 7% -0.0 0.11 ± 4% perf-profile.self.cycles-pp.timespec64_trunc
0.06 -0.0 0.04 ± 57% perf-profile.self.cycles-pp.copyout
0.11 ± 4% -0.0 0.09 ± 10% perf-profile.self.cycles-pp.native_apic_msr_eoi_write
0.17 ± 4% -0.0 0.15 ± 2% perf-profile.self.cycles-pp.generic_pipe_buf_confirm
0.18 ± 2% -0.0 0.16 ± 4% perf-profile.self.cycles-pp.__might_fault
0.17 ± 2% -0.0 0.16 ± 2% perf-profile.self.cycles-pp.set_next_buddy
0.15 ± 2% -0.0 0.13 ± 3% perf-profile.self.cycles-pp.rb_insert_color
0.12 ± 3% -0.0 0.11 ± 4% perf-profile.self.cycles-pp.__sb_end_write
0.10 ± 8% -0.0 0.08 ± 5% perf-profile.self.cycles-pp.ktime_get_coarse_real_ts64
0.06 -0.0 0.05 perf-profile.self.cycles-pp.kill_fasync
0.14 ± 3% +0.0 0.16 ± 5% perf-profile.self.cycles-pp.exit_to_usermode_loop
0.66 +0.0 0.68 perf-profile.self.cycles-pp.__calc_delta
0.16 ± 5% +0.0 0.18 ± 2% perf-profile.self.cycles-pp.native_load_tls
0.14 ± 3% +0.0 0.17 ± 4% perf-profile.self.cycles-pp.wakeup_preempt_entity
0.67 +0.0 0.70 perf-profile.self.cycles-pp.dequeue_task_fair
0.23 ± 3% +0.0 0.27 ± 3% perf-profile.self.cycles-pp._cond_resched
0.17 ± 4% +0.0 0.21 ± 6% perf-profile.self.cycles-pp.activate_task
0.35 ± 3% +0.0 0.40 perf-profile.self.cycles-pp.pick_next_entity
1.20 +0.1 1.25 perf-profile.self.cycles-pp.select_task_rq_fair
0.13 ± 6% +0.1 0.18 ± 6% perf-profile.self.cycles-pp.attach_entity_load_avg
0.87 ± 3% +0.1 0.96 ± 3% perf-profile.self.cycles-pp.pick_next_task_fair
0.27 ± 6% +0.1 0.36 ± 7% perf-profile.self.cycles-pp.migrate_task_rq_fair
0.32 ± 2% +0.1 0.41 ± 4% perf-profile.self.cycles-pp.account_entity_enqueue
0.42 +0.1 0.56 ± 2% perf-profile.self.cycles-pp.account_entity_dequeue
2.24 ± 3% +0.1 2.39 ± 2% perf-profile.self.cycles-pp.switch_mm_irqs_off
1.09 ± 2% +0.2 1.26 ± 2% perf-profile.self.cycles-pp.__switch_to
1.20 ± 4% +0.2 1.37 ± 2% perf-profile.self.cycles-pp.__switch_to_asm
1.40 ± 3% +0.2 1.58 ± 2% perf-profile.self.cycles-pp.switch_fpu_return
0.89 ± 6% +0.2 1.10 ± 5% perf-profile.self.cycles-pp.update_rq_clock
2.19 ± 2% +0.2 2.42 perf-profile.self.cycles-pp.load_new_mm_cr3
1.33 ± 4% +0.3 1.60 ± 3% perf-profile.self.cycles-pp.___perf_sw_event
0.34 ± 14% +0.3 0.66 ± 9% perf-profile.self.cycles-pp.find_next_bit
0.28 ± 23% +0.4 0.72 ± 12% perf-profile.self.cycles-pp.cpumask_next_wrap
0.89 +0.7 1.58 ± 4% perf-profile.self.cycles-pp._raw_spin_lock
1.04 +0.7 1.76 ± 4% perf-profile.self.cycles-pp.__update_load_avg_cfs_rq
0.65 ± 12% +3.0 3.61 ± 13% perf-profile.self.cycles-pp.select_idle_sibling
1.71 ± 29% +3.3 4.97 ± 13% perf-profile.self.cycles-pp.available_idle_cpu
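The profile shift above is concentrated in select_idle_sibling()/available_idle_cpu(), which is consistent with the change falling back to a SCHED_IDLE-only CPU when no fully idle CPU is found: the wake-up path walks more of the cpumask under full load. A minimal sketch of that kind of fallback scan (simplified illustration only, not the exact kernel code; it assumes the kernel's for_each_cpu_wrap(), available_idle_cpu() and sched_idle_cpu() helpers):

/*
 * Simplified sketch of a "fall back to a sched-idle CPU" scan.
 * A fully idle CPU is returned immediately; otherwise the first CPU
 * running only SCHED_IDLE tasks is remembered and returned at the
 * end (-1 if none exists).  The longer cpumask walk is what shows up
 * above as available_idle_cpu()/find_next_bit() cycles.
 */
static int select_idle_cpu_sketch(const struct cpumask *cpus, int target)
{
	int cpu, si_cpu = -1;

	for_each_cpu_wrap(cpu, cpus, target) {
		if (available_idle_cpu(cpu))
			return cpu;
		if (si_cpu == -1 && sched_idle_cpu(cpu))
			si_cpu = cpu;
	}

	return si_cpu;
}

With every CPU saturated by hackbench threads, neither condition hits early, so the scan tends to cover the whole mask on each wake-up, matching the larger select_idle_sibling() self time above.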
***************************************************************************************************
lkp-cfl-e1: 16 threads Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz with 32G memory
=========================================================================================
compiler/cpufreq_governor/ipc/kconfig/mode/nr_threads/rootfs/tbox_group/testcase/ucode:
gcc-7/performance/pipe/x86_64-rhel-7.6/threads/100%/debian-x86_64-2018-04-03.cgz/lkp-cfl-e1/hackbench/0xb8
commit:
43e9f7f231 ("sched/fair: Start tracking SCHED_IDLE tasks count in cfs_rq")
3c29e651e1 ("sched/fair: Fall back to sched-idle CPU if idle CPU isn't found")
43e9f7f231e40e45 3c29e651e16dd3b3179cfb2d055
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
1:2 -50% :4 dmesg.WARNING:at#for_ip_swapgs_restore_regs_and_return_to_usermode/0x
1:2 -50% :4 dmesg.WARNING:stack_recursion
%stddev %change %stddev
\ | \
49845 -6.8% 46466 hackbench.throughput
8.776e+08 +13.0% 9.915e+08 hackbench.time.involuntary_context_switches
93623 -6.3% 87707 hackbench.time.minor_page_faults
740.08 -1.4% 729.54 hackbench.time.user_time
1.569e+09 +3.5% 1.623e+09 hackbench.time.voluntary_context_switches
3.072e+08 -6.2% 2.88e+08 hackbench.workload
21.45 -1.7% 21.09 boot-time.boot
1747212 ± 3% -31.0% 1205072 ± 14% cpuidle.POLL.time
1289000 ± 4% -36.6% 817407 ± 13% cpuidle.POLL.usage
4038486 +6.3% 4293331 vmstat.system.cs
394711 -10.2% 354353 ± 2% vmstat.system.in
10059 -15.4% 8506 ± 5% softirqs.CPU0.SCHED
309881 -27.3% 225159 ± 4% softirqs.CPU2.TIMER
220105 +20.8% 265807 ± 16% softirqs.CPU6.TIMER
120937 ± 3% -17.6% 99667 ± 2% softirqs.SCHED
18883 -8.9% 17200 ± 3% slabinfo.kmalloc-32.active_objs
18883 -8.9% 17200 ± 3% slabinfo.kmalloc-32.num_objs
8573 ± 2% -7.6% 7924 slabinfo.lsm_file_cache.active_objs
8573 ± 2% -7.6% 7924 slabinfo.lsm_file_cache.num_objs
5280 -8.8% 4814 slabinfo.task_delay_info.active_objs
5280 -8.8% 4814 slabinfo.task_delay_info.num_objs
65295 -1.9% 64034 proc-vmstat.nr_active_anon
60605 -1.5% 59671 proc-vmstat.nr_anon_pages
12935 -0.9% 12820 proc-vmstat.nr_slab_unreclaimable
65295 -1.9% 64034 proc-vmstat.nr_zone_active_anon
24774010 -16.6% 20673098 proc-vmstat.numa_hit
24774010 -16.6% 20673098 proc-vmstat.numa_local
24839243 -16.5% 20735391 proc-vmstat.pgalloc_normal
673024 -2.4% 657142 proc-vmstat.pgfault
24822112 -16.5% 20717243 proc-vmstat.pgfree
247.95 ± 14% -19.1% 200.59 ± 4% sched_debug.cfs_rq:/.load_avg.max
43.55 ± 3% -19.9% 34.86 ± 6% sched_debug.cfs_rq:/.load_avg.min
55.51 ± 6% -13.2% 48.19 ± 7% sched_debug.cfs_rq:/.load_avg.stddev
20116 ± 8% -12.8% 17531 ± 5% sched_debug.cfs_rq:/.runnable_weight.min
1288 +7.5% 1384 ± 3% sched_debug.cfs_rq:/.util_avg.max
54763 ± 9% +130.8% 126390 ± 25% sched_debug.cpu.avg_idle.avg
301502 ± 18% +84.9% 557406 ± 14% sched_debug.cpu.avg_idle.max
82233 ± 22% +122.7% 183153 ± 17% sched_debug.cpu.avg_idle.stddev
0.00 ± 12% -12.0% 0.00 ± 8% sched_debug.cpu.next_balance.stddev
16.92 ± 4% -7.5% 15.66 ± 3% sched_debug.cpu.nr_running.avg
2.92 ± 10% -20.9% 2.31 ± 16% sched_debug.cpu.nr_uninterruptible.avg
2977 ± 50% -73.0% 803.75 ± 43% interrupts.133:IR-PCI-MSI.2097154-edge.eth1-TxRx-1
546.00 ± 23% +119.5% 1198 ± 16% interrupts.135:IR-PCI-MSI.2097156-edge.eth1-TxRx-3
13831842 -11.5% 12245495 ± 2% interrupts.CPU0.RES:Rescheduling_interrupts
45259 ± 5% +68.9% 76440 ± 15% interrupts.CPU0.TRM:Thermal_event_interrupts
2977 ± 50% -73.0% 803.75 ± 43% interrupts.CPU1.133:IR-PCI-MSI.2097154-edge.eth1-TxRx-1
45283 ± 5% +68.9% 76474 ± 15% interrupts.CPU1.TRM:Thermal_event_interrupts
13947887 -11.0% 12413074 interrupts.CPU10.RES:Rescheduling_interrupts
45275 ± 5% +68.9% 76468 ± 15% interrupts.CPU10.TRM:Thermal_event_interrupts
45286 ± 5% +68.9% 76468 ± 15% interrupts.CPU11.TRM:Thermal_event_interrupts
13579741 -11.8% 11974820 ± 5% interrupts.CPU12.RES:Rescheduling_interrupts
45286 ± 5% +68.9% 76471 ± 15% interrupts.CPU12.TRM:Thermal_event_interrupts
13668073 -11.3% 12119346 interrupts.CPU13.RES:Rescheduling_interrupts
45286 ± 5% +68.9% 76474 ± 15% interrupts.CPU13.TRM:Thermal_event_interrupts
13578574 -7.8% 12518854 interrupts.CPU14.RES:Rescheduling_interrupts
45286 ± 5% +68.9% 76467 ± 15% interrupts.CPU14.TRM:Thermal_event_interrupts
13689117 -10.1% 12310290 ± 2% interrupts.CPU15.RES:Rescheduling_interrupts
45286 ± 5% +68.9% 76473 ± 15% interrupts.CPU15.TRM:Thermal_event_interrupts
13924470 -10.5% 12461613 interrupts.CPU2.RES:Rescheduling_interrupts
45264 ± 5% +68.9% 76467 ± 15% interrupts.CPU2.TRM:Thermal_event_interrupts
546.00 ± 23% +119.5% 1198 ± 16% interrupts.CPU3.135:IR-PCI-MSI.2097156-edge.eth1-TxRx-3
13951884 -10.5% 12480739 interrupts.CPU3.RES:Rescheduling_interrupts
45286 ± 5% +68.9% 76469 ± 15% interrupts.CPU3.TRM:Thermal_event_interrupts
45285 ± 5% +68.9% 76471 ± 15% interrupts.CPU4.TRM:Thermal_event_interrupts
14030690 -10.7% 12530992 ± 3% interrupts.CPU5.RES:Rescheduling_interrupts
45286 ± 5% +68.9% 76471 ± 15% interrupts.CPU5.TRM:Thermal_event_interrupts
45285 ± 5% +68.9% 76474 ± 15% interrupts.CPU6.TRM:Thermal_event_interrupts
45285 ± 5% +68.9% 76472 ± 15% interrupts.CPU7.TRM:Thermal_event_interrupts
13319036 -10.2% 11966842 ± 2% interrupts.CPU8.RES:Rescheduling_interrupts
45280 ± 5% +68.7% 76405 ± 15% interrupts.CPU8.TRM:Thermal_event_interrupts
13401328 -11.8% 11816711 ± 3% interrupts.CPU9.RES:Rescheduling_interrupts
45279 ± 5% +68.9% 76467 ± 15% interrupts.CPU9.TRM:Thermal_event_interrupts
2.19e+08 -9.8% 1.974e+08 ± 2% interrupts.RES:Rescheduling_interrupts
724498 ± 5% +68.9% 1223435 ± 15% interrupts.TRM:Thermal_event_interrupts
53.44 +4.2% 55.68 perf-stat.i.MPKI
0.12 -0.0 0.10 ± 4% perf-stat.i.cache-miss-rate%
1764490 -13.1% 1533362 ± 3% perf-stat.i.cache-misses
1.767e+09 +3.6% 1.832e+09 perf-stat.i.cache-references
4055785 +6.2% 4306694 perf-stat.i.context-switches
336154 +21.5% 408363 perf-stat.i.cpu-migrations
34528 ± 5% +10.3% 38089 ± 3% perf-stat.i.cycles-between-cache-misses
0.01 ± 6% -0.0 0.01 ± 6% perf-stat.i.dTLB-load-miss-rate%
899577 ± 6% -15.2% 762553 ± 5% perf-stat.i.dTLB-load-misses
0.00 ± 7% -0.0 0.00 ± 4% perf-stat.i.dTLB-store-miss-rate%
22816 ± 7% -13.2% 19813 ± 4% perf-stat.i.dTLB-store-misses
6.044e+09 -1.2% 5.969e+09 perf-stat.i.dTLB-stores
31331199 -3.4% 30272777 perf-stat.i.iTLB-load-misses
1061 +3.1% 1094 perf-stat.i.instructions-per-iTLB-miss
1077 -2.9% 1046 perf-stat.i.minor-faults
73018 -10.9% 65028 ± 2% perf-stat.i.node-loads
122632 ± 2% -9.3% 111181 ± 4% perf-stat.i.node-stores
1077 -2.9% 1046 perf-stat.i.page-faults
53.29 +4.2% 55.53 perf-stat.overall.MPKI
0.10 -0.0 0.08 ± 3% perf-stat.overall.cache-miss-rate%
25264 +15.3% 29120 ± 3% perf-stat.overall.cycles-between-cache-misses
0.01 ± 6% -0.0 0.01 ± 6% perf-stat.overall.dTLB-load-miss-rate%
0.00 ± 7% -0.0 0.00 ± 4% perf-stat.overall.dTLB-store-miss-rate%
1058 +2.9% 1089 perf-stat.overall.instructions-per-iTLB-miss
65158 +6.6% 69487 perf-stat.overall.path-length
1761814 -13.1% 1530883 ± 3% perf-stat.ps.cache-misses
1.765e+09 +3.7% 1.829e+09 perf-stat.ps.cache-references
4048770 +6.2% 4299578 perf-stat.ps.context-switches
335565 +21.5% 407697 perf-stat.ps.cpu-migrations
898100 ± 6% -15.2% 761293 ± 5% perf-stat.ps.dTLB-load-misses
22777 ± 7% -13.2% 19781 ± 4% perf-stat.ps.dTLB-store-misses
6.034e+09 -1.2% 5.96e+09 perf-stat.ps.dTLB-stores
31277565 -3.4% 30222833 perf-stat.ps.iTLB-load-misses
1075 -2.9% 1044 perf-stat.ps.minor-faults
72896 -10.9% 64922 ± 2% perf-stat.ps.node-loads
122424 ± 2% -9.3% 111005 ± 4% perf-stat.ps.node-stores
1075 -2.9% 1044 perf-stat.ps.page-faults
39.37 ±100% -36.4 2.98 ±173% perf-profile.calltrace.cycles-pp.start_thread
21.68 ±100% -20.0 1.64 ±173% perf-profile.calltrace.cycles-pp.__GI___libc_write.start_thread
20.14 ±100% -18.6 1.52 ±173% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__GI___libc_write.start_thread
20.00 ±100% -18.5 1.51 ±173% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_write.start_thread
18.05 ±100% -16.7 1.39 ±173% perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_write.start_thread
16.75 ±100% -15.5 1.22 ±173% perf-profile.calltrace.cycles-pp.__GI___libc_read.start_thread
16.70 ±100% -15.4 1.26 ±173% perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_write
14.98 ±100% -13.9 1.09 ±173% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__GI___libc_read.start_thread
14.82 ±100% -13.7 1.08 ±173% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_read.start_thread
13.95 ±100% -12.9 1.03 ±173% perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_read.start_thread
12.97 ±100% -12.0 0.95 ±173% perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_read
3.69 ± 6% -1.1 2.58 ± 7% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.try_to_wake_up.autoremove_wake_function.__wake_up_common
4.25 ± 6% -1.0 3.29 ± 6% perf-profile.calltrace.cycles-pp._raw_spin_lock.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
1.54 ± 10% -0.5 1.05 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__wake_up_common_lock.pipe_write.new_sync_write.vfs_write
1.07 ± 16% -0.5 0.60 ± 5% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__wake_up_common_lock.pipe_write.new_sync_write
1.14 ± 2% -0.3 0.80 ± 7% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.__schedule.schedule.pipe_wait
0.55 ± 4% -0.3 0.26 ±100% perf-profile.calltrace.cycles-pp.mutex_unlock.pipe_read.new_sync_read.vfs_read.ksys_read
1.90 ± 9% -0.2 1.67 ± 7% perf-profile.calltrace.cycles-pp.__fget_light.__fdget_pos.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.71 ± 7% -0.2 1.48 ± 7% perf-profile.calltrace.cycles-pp.__fget.__fget_light.__fdget_pos.ksys_write.do_syscall_64
0.68 ± 19% +0.2 0.86 perf-profile.calltrace.cycles-pp.update_cfs_group.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up
0.67 ± 16% +0.2 0.88 ± 4% perf-profile.calltrace.cycles-pp.update_cfs_group.dequeue_task_fair.__schedule.schedule.pipe_wait
0.29 ±100% +0.4 0.64 ± 4% perf-profile.calltrace.cycles-pp.___perf_sw_event.__schedule.schedule.pipe_wait.pipe_read
0.31 ±100% +0.4 0.67 ± 4% perf-profile.calltrace.cycles-pp.mutex_lock.pipe_wait.pipe_read.new_sync_read.vfs_read
0.32 ±100% +0.4 0.68 ± 2% perf-profile.calltrace.cycles-pp.pick_next_entity.pick_next_task_fair.__schedule.schedule.pipe_wait
0.29 ±100% +0.4 0.68 ± 3% perf-profile.calltrace.cycles-pp._raw_spin_lock.__schedule.schedule.exit_to_usermode_loop.do_syscall_64
0.35 ±100% +0.4 0.78 perf-profile.calltrace.cycles-pp.update_curr.reweight_entity.dequeue_task_fair.__schedule.schedule
1.23 ± 19% +0.4 1.68 ± 2% perf-profile.calltrace.cycles-pp.reweight_entity.dequeue_task_fair.__schedule.schedule.pipe_wait
1.29 ± 18% +0.5 1.76 ± 5% perf-profile.calltrace.cycles-pp.reweight_entity.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up
0.00 +0.5 0.53 ± 2% perf-profile.calltrace.cycles-pp.update_load_avg.set_next_entity.pick_next_task_fair.__schedule.schedule
0.00 +0.6 0.59 ± 7% perf-profile.calltrace.cycles-pp.__enqueue_entity.put_prev_entity.pick_next_task_fair.__schedule.schedule
8.82 +0.6 9.46 ± 2% perf-profile.calltrace.cycles-pp.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up.autoremove_wake_function
0.47 ±100% +0.7 1.14 ± 6% perf-profile.calltrace.cycles-pp.put_prev_entity.pick_next_task_fair.__schedule.schedule.exit_to_usermode_loop
2.21 ± 10% +0.7 2.88 ± 4% perf-profile.calltrace.cycles-pp.pick_next_task_fair.__schedule.schedule.exit_to_usermode_loop.do_syscall_64
9.10 +0.7 9.79 perf-profile.calltrace.cycles-pp.ttwu_do_activate.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
9.04 +0.7 9.74 ± 2% perf-profile.calltrace.cycles-pp.activate_task.ttwu_do_activate.try_to_wake_up.autoremove_wake_function.__wake_up_common
0.71 ±100% +0.9 1.61 ± 3% perf-profile.calltrace.cycles-pp.__switch_to
0.68 ± 99% +0.9 1.60 ± 3% perf-profile.calltrace.cycles-pp.native_write_msr
0.86 ±100% +0.9 1.79 ± 2% perf-profile.calltrace.cycles-pp.__switch_to_asm
6.54 +1.0 7.53 ± 2% perf-profile.calltrace.cycles-pp.dequeue_task_fair.__schedule.schedule.pipe_wait.pipe_read
26.96 +1.0 27.95 perf-profile.calltrace.cycles-pp.pipe_read.new_sync_read.vfs_read.ksys_read.do_syscall_64
1.42 ± 63% +1.0 2.47 ± 3% perf-profile.calltrace.cycles-pp.switch_fpu_return.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.54 ± 8% +1.2 5.79 ± 4% perf-profile.calltrace.cycles-pp.__schedule.schedule.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
14.99 +1.3 16.27 perf-profile.calltrace.cycles-pp.__schedule.schedule.pipe_wait.pipe_read.new_sync_read
15.35 +1.3 16.66 perf-profile.calltrace.cycles-pp.schedule.pipe_wait.pipe_read.new_sync_read.vfs_read
27.37 +1.3 28.70 ± 2% perf-profile.calltrace.cycles-pp.__wake_up_common_lock.pipe_write.new_sync_write.vfs_write.ksys_write
17.81 +1.6 19.38 perf-profile.calltrace.cycles-pp.pipe_wait.pipe_read.new_sync_read.vfs_read.ksys_read
3.95 ± 11% +1.8 5.70 ± 6% perf-profile.calltrace.cycles-pp.select_idle_sibling.select_task_rq_fair.try_to_wake_up.autoremove_wake_function.__wake_up_common
23.38 +2.0 25.33 ± 2% perf-profile.calltrace.cycles-pp.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.pipe_write
24.30 +2.0 26.27 ± 2% perf-profile.calltrace.cycles-pp.__wake_up_common.__wake_up_common_lock.pipe_write.new_sync_write.vfs_write
23.73 +2.0 25.71 ± 2% perf-profile.calltrace.cycles-pp.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.pipe_write.new_sync_write
5.24 ± 10% +2.0 7.25 ± 6% perf-profile.calltrace.cycles-pp.select_task_rq_fair.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
3.23 ± 57% +2.7 5.98 ± 4% perf-profile.calltrace.cycles-pp.schedule.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.34 ± 57% +2.8 6.18 ± 4% perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
39.37 ±100% -36.4 2.98 ±173% perf-profile.children.cycles-pp.start_thread
21.68 ±100% -20.0 1.67 ±173% perf-profile.children.cycles-pp.__GI___libc_write
16.99 ±100% -15.7 1.24 ±173% perf-profile.children.cycles-pp.__GI___libc_read
6.84 ± 10% -2.1 4.71 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
45.35 -1.0 44.33 perf-profile.children.cycles-pp.ksys_write
6.70 ± 5% -1.0 5.70 ± 2% perf-profile.children.cycles-pp._raw_spin_lock
5.90 ± 4% -0.6 5.34 ± 6% perf-profile.children.cycles-pp.security_file_permission
3.35 ± 9% -0.6 2.80 ± 3% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
42.14 -0.5 41.63 perf-profile.children.cycles-pp.vfs_write
3.41 -0.4 3.01 perf-profile.children.cycles-pp.__fdget_pos
3.30 -0.4 2.91 perf-profile.children.cycles-pp.__fget_light
2.71 ± 2% -0.4 2.32 perf-profile.children.cycles-pp.__fget
3.23 -0.3 2.93 ± 3% perf-profile.children.cycles-pp.mutex_lock
2.76 -0.3 2.47 ± 5% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
2.21 ± 3% -0.3 1.93 ± 10% perf-profile.children.cycles-pp.file_has_perm
3.14 -0.2 2.90 ± 4% perf-profile.children.cycles-pp.copy_page_to_iter
3.03 ± 5% -0.2 2.79 ± 5% perf-profile.children.cycles-pp.selinux_file_permission
2.10 ± 5% -0.2 1.86 ± 2% perf-profile.children.cycles-pp.mutex_unlock
1.50 ± 5% -0.2 1.27 ± 2% perf-profile.children.cycles-pp.fput_many
0.68 ± 17% -0.2 0.48 ± 4% perf-profile.children.cycles-pp.__mutex_lock
1.66 ± 2% -0.2 1.47 ± 4% perf-profile.children.cycles-pp.copyout
1.73 -0.2 1.57 ± 2% perf-profile.children.cycles-pp.update_rq_clock
1.30 -0.1 1.17 ± 6% perf-profile.children.cycles-pp.copyin
0.27 ± 11% -0.1 0.15 ± 9% perf-profile.children.cycles-pp.preempt_schedule_common
1.25 ± 3% -0.1 1.15 ± 3% perf-profile.children.cycles-pp.fsnotify
0.79 -0.1 0.69 ± 7% perf-profile.children.cycles-pp.reschedule_interrupt
0.91 -0.1 0.82 ± 5% perf-profile.children.cycles-pp._cond_resched
0.80 ± 2% -0.1 0.71 ± 3% perf-profile.children.cycles-pp.file_update_time
0.29 ± 10% -0.1 0.24 ± 5% perf-profile.children.cycles-pp.rb_insert_color
0.26 -0.0 0.21 ± 2% perf-profile.children.cycles-pp.native_irq_return_iret
0.07 ± 14% -0.0 0.03 ±100% perf-profile.children.cycles-pp.scheduler_ipi
0.22 ± 6% -0.0 0.18 ± 4% perf-profile.children.cycles-pp.__x64_sys_read
0.25 ± 4% -0.0 0.21 ± 2% perf-profile.children.cycles-pp.__x86_indirect_thunk_rax
0.27 -0.0 0.23 ± 9% perf-profile.children.cycles-pp.timespec64_trunc
0.23 ± 2% -0.0 0.20 ± 5% perf-profile.children.cycles-pp.smp_reschedule_interrupt
0.22 -0.0 0.20 ± 5% perf-profile.children.cycles-pp.__x64_sys_write
0.24 -0.0 0.22 ± 3% perf-profile.children.cycles-pp.__list_add_valid
0.14 -0.0 0.12 ± 10% perf-profile.children.cycles-pp.kill_fasync
0.11 ± 4% +0.0 0.12 ± 3% perf-profile.children.cycles-pp.perf_exclude_event
0.10 +0.0 0.12 ± 8% perf-profile.children.cycles-pp.is_cpu_allowed
0.09 ± 11% +0.0 0.12 ± 7% perf-profile.children.cycles-pp.rcu_note_context_switch
0.07 +0.0 0.10 ± 10% perf-profile.children.cycles-pp.pkg_thermal_notify
0.17 ± 3% +0.0 0.20 ± 9% perf-profile.children.cycles-pp.deactivate_task
0.16 +0.0 0.21 ± 10% perf-profile.children.cycles-pp.update_cfs_rq_h_load
0.08 ± 6% +0.0 0.12 ± 10% perf-profile.children.cycles-pp.intel_thermal_interrupt
0.08 ± 6% +0.0 0.12 ± 6% perf-profile.children.cycles-pp.smp_thermal_interrupt
0.08 +0.1 0.13 ± 9% perf-profile.children.cycles-pp.thermal_interrupt
0.00 +0.1 0.06 ± 7% perf-profile.children.cycles-pp.default_wake_function
0.19 ± 5% +0.1 0.25 ± 5% perf-profile.children.cycles-pp.wakeup_preempt_entity
0.19 ± 5% +0.1 0.27 ± 4% perf-profile.children.cycles-pp.finish_wait
0.89 ± 8% +0.1 0.98 perf-profile.children.cycles-pp.__calc_delta
0.63 ± 4% +0.1 0.74 ± 4% perf-profile.children.cycles-pp.native_sched_clock
0.33 ± 4% +0.1 0.44 ± 5% perf-profile.children.cycles-pp.set_next_buddy
1.23 ± 3% +0.1 1.34 ± 3% perf-profile.children.cycles-pp.check_preempt_wakeup
0.69 ± 4% +0.1 0.81 ± 3% perf-profile.children.cycles-pp.sched_clock
1.52 ± 3% +0.1 1.65 ± 2% perf-profile.children.cycles-pp.set_next_entity
0.73 ± 4% +0.1 0.88 ± 3% perf-profile.children.cycles-pp.sched_clock_cpu
0.57 ± 4% +0.2 0.74 ± 4% perf-profile.children.cycles-pp.account_entity_dequeue
1.21 ± 4% +0.2 1.39 ± 2% perf-profile.children.cycles-pp.__update_load_avg_se
1.99 ± 3% +0.2 2.20 ± 3% perf-profile.children.cycles-pp.__update_load_avg_cfs_rq
1.64 ± 6% +0.2 1.85 perf-profile.children.cycles-pp.__switch_to_asm
1.60 ± 4% +0.2 1.85 ± 3% perf-profile.children.cycles-pp.update_cfs_group
3.91 +0.2 4.15 perf-profile.children.cycles-pp.update_curr
1.04 ± 5% +0.3 1.30 ± 4% perf-profile.children.cycles-pp.___perf_sw_event
1.05 ± 9% +0.3 1.33 ± 2% perf-profile.children.cycles-pp.pick_next_entity
1.57 ± 3% +0.3 1.89 perf-profile.children.cycles-pp.native_write_msr
1.01 ± 9% +0.3 1.33 ± 5% perf-profile.children.cycles-pp.put_prev_entity
2.17 ± 7% +0.3 2.51 ± 2% perf-profile.children.cycles-pp.switch_fpu_return
1.93 ± 7% +0.4 2.36 perf-profile.children.cycles-pp.__switch_to
2.99 ± 4% +0.6 3.57 ± 2% perf-profile.children.cycles-pp.reweight_entity
8.89 +0.6 9.50 ± 2% perf-profile.children.cycles-pp.enqueue_task_fair
9.16 +0.7 9.82 perf-profile.children.cycles-pp.ttwu_do_activate
9.09 +0.7 9.76 ± 2% perf-profile.children.cycles-pp.activate_task
5.18 ± 6% +0.9 6.04 ± 2% perf-profile.children.cycles-pp.pick_next_task_fair
6.71 +0.9 7.63 ± 2% perf-profile.children.cycles-pp.dequeue_task_fair
28.01 +1.4 29.36 ± 2% perf-profile.children.cycles-pp.__wake_up_common_lock
4.91 ± 7% +1.4 6.28 ± 4% perf-profile.children.cycles-pp.exit_to_usermode_loop
18.19 +1.4 19.56 perf-profile.children.cycles-pp.pipe_wait
4.03 ± 11% +1.7 5.75 ± 6% perf-profile.children.cycles-pp.select_idle_sibling
23.86 ± 2% +1.8 25.71 ± 2% perf-profile.children.cycles-pp.try_to_wake_up
24.44 +1.9 26.36 ± 2% perf-profile.children.cycles-pp.__wake_up_common
23.84 +1.9 25.77 ± 2% perf-profile.children.cycles-pp.autoremove_wake_function
5.29 ± 10% +2.0 7.29 ± 6% perf-profile.children.cycles-pp.select_task_rq_fair
20.27 ± 2% +2.3 22.53 ± 2% perf-profile.children.cycles-pp.__schedule
20.36 ± 2% +2.5 22.82 ± 2% perf-profile.children.cycles-pp.schedule
6.82 ± 10% -2.1 4.69 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
2.67 ± 2% -0.4 2.29 perf-profile.self.cycles-pp.__fget
2.72 -0.3 2.44 ± 5% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
1.22 -0.2 0.98 ± 2% perf-profile.self.cycles-pp.update_rq_clock
1.36 ± 3% -0.2 1.13 ± 6% perf-profile.self.cycles-pp.pipe_write
1.48 ± 5% -0.2 1.24 ± 3% perf-profile.self.cycles-pp.fput_many
2.04 ± 5% -0.2 1.81 ± 2% perf-profile.self.cycles-pp.mutex_unlock
1.96 -0.2 1.73 ± 2% perf-profile.self.cycles-pp.mutex_lock
0.66 ± 12% -0.1 0.53 ± 10% perf-profile.self.cycles-pp.file_has_perm
0.38 ± 10% -0.1 0.28 ± 9% perf-profile.self.cycles-pp.ksys_read
0.34 ± 7% -0.1 0.25 ± 7% perf-profile.self.cycles-pp.__mutex_lock
1.21 ± 3% -0.1 1.11 ± 2% perf-profile.self.cycles-pp.fsnotify
1.00 -0.1 0.94 ± 5% perf-profile.self.cycles-pp.__might_sleep
0.60 ± 4% -0.1 0.55 ± 8% perf-profile.self.cycles-pp.vfs_write
0.29 ± 10% -0.1 0.23 ± 3% perf-profile.self.cycles-pp.rb_insert_color
0.32 ± 6% -0.0 0.28 ± 6% perf-profile.self.cycles-pp.ksys_write
0.26 -0.0 0.21 ± 2% perf-profile.self.cycles-pp.native_irq_return_iret
0.07 ± 14% -0.0 0.03 ±100% perf-profile.self.cycles-pp.scheduler_ipi
0.36 ± 5% -0.0 0.32 ± 8% perf-profile.self.cycles-pp.__sb_start_write
0.29 -0.0 0.24 ± 4% perf-profile.self.cycles-pp.check_preempt_curr
0.22 ± 6% -0.0 0.17 ± 6% perf-profile.self.cycles-pp.__x64_sys_read
0.25 -0.0 0.21 ± 8% perf-profile.self.cycles-pp.timespec64_trunc
0.22 ± 4% -0.0 0.19 ± 11% perf-profile.self.cycles-pp.wake_q_add
0.21 ± 2% -0.0 0.18 ± 6% perf-profile.self.cycles-pp.__x64_sys_write
0.21 ± 2% -0.0 0.20 perf-profile.self.cycles-pp.__list_add_valid
0.11 ± 4% -0.0 0.10 ± 7% perf-profile.self.cycles-pp.kill_fasync
0.06 ± 9% +0.0 0.07 ± 5% perf-profile.self.cycles-pp.rcu_note_context_switch
0.15 +0.0 0.18 ± 6% perf-profile.self.cycles-pp.autoremove_wake_function
0.16 +0.0 0.19 ± 9% perf-profile.self.cycles-pp.deactivate_task
0.26 +0.0 0.29 ± 5% perf-profile.self.cycles-pp.rcu_all_qs
0.33 ± 3% +0.0 0.37 ± 4% perf-profile.self.cycles-pp.set_next_entity
0.18 ± 5% +0.0 0.22 ± 6% perf-profile.self.cycles-pp.wakeup_preempt_entity
0.03 ±100% +0.0 0.07 perf-profile.self.cycles-pp.check_cfs_rq_runtime
0.16 ± 6% +0.0 0.20 ± 6% perf-profile.self.cycles-pp.exit_to_usermode_loop
0.16 +0.0 0.20 ± 12% perf-profile.self.cycles-pp.update_cfs_rq_h_load
0.20 ± 2% +0.1 0.25 ± 2% perf-profile.self.cycles-pp.activate_task
0.41 ± 6% +0.1 0.46 perf-profile.self.cycles-pp.schedule
1.96 +0.1 2.02 perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.15 ± 6% +0.1 0.22 perf-profile.self.cycles-pp.finish_wait
0.41 ± 4% +0.1 0.48 ± 6% perf-profile.self.cycles-pp.account_entity_enqueue
0.61 ± 4% +0.1 0.71 ± 3% perf-profile.self.cycles-pp.native_sched_clock
0.83 ± 2% +0.1 0.93 perf-profile.self.cycles-pp.dequeue_task_fair
0.29 ± 3% +0.1 0.40 ± 5% perf-profile.self.cycles-pp.set_next_buddy
1.02 +0.1 1.15 ± 7% perf-profile.self.cycles-pp.enqueue_task_fair
0.46 ± 12% +0.2 0.61 ± 3% perf-profile.self.cycles-pp.pick_next_entity
1.17 ± 4% +0.2 1.35 ± 2% perf-profile.self.cycles-pp.__update_load_avg_se
0.42 ± 8% +0.2 0.61 ± 5% perf-profile.self.cycles-pp.account_entity_dequeue
1.64 ± 6% +0.2 1.85 perf-profile.self.cycles-pp.__switch_to_asm
1.23 ± 4% +0.2 1.46 ± 3% perf-profile.self.cycles-pp.reweight_entity
1.07 ± 5% +0.2 1.31 ± 4% perf-profile.self.cycles-pp.select_task_rq_fair
0.90 ± 5% +0.2 1.14 ± 5% perf-profile.self.cycles-pp.___perf_sw_event
1.57 ± 4% +0.2 1.82 ± 2% perf-profile.self.cycles-pp.update_cfs_group
1.17 ± 3% +0.3 1.42 ± 7% perf-profile.self.cycles-pp.update_load_avg
1.56 ± 4% +0.3 1.88 perf-profile.self.cycles-pp.native_write_msr
2.16 ± 7% +0.3 2.50 ± 2% perf-profile.self.cycles-pp.switch_fpu_return
1.84 ± 8% +0.4 2.25 ± 2% perf-profile.self.cycles-pp.__switch_to
1.22 +0.5 1.73 ± 4% perf-profile.self.cycles-pp._raw_spin_lock
0.92 ± 11% +1.3 2.27 ± 6% perf-profile.self.cycles-pp.select_idle_sibling
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[ptrace] 201766a20e: kernel_selftests.seccomp.make_fail
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 201766a20e30f982ccfe36bebfad9602c3ff574a ("ptrace: add PTRACE_GET_SYSCALL_INFO request")
https://kernel.googlesource.com/pub/scm/linux/kernel/git/torvalds/linux.git master
in testcase: kernel_selftests
with following parameters:
group: kselftests-02
test-description: The kernel contains a set of "self tests" under the tools/testing/selftests/ directory. These are intended to be small unit tests to exercise individual code paths in the kernel.
test-url: https://www.kernel.org/doc/Documentation/kselftest.txt
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
2019-07-26 16:52:03 make run_tests -C seccomp
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.6-201766a20e30f982ccfe36bebfad9602c3ff574a/tools/testing/selftests/seccomp'
gcc -Wl,-no-as-needed -Wall seccomp_bpf.c -lpthread -o seccomp_bpf
In file included from seccomp_bpf.c:51:0:
seccomp_bpf.c: In function ‘tracer_ptrace’:
seccomp_bpf.c:1787:20: error: ‘PTRACE_EVENTMSG_SYSCALL_ENTRY’ undeclared (first use in this function)
EXPECT_EQ(entry ? PTRACE_EVENTMSG_SYSCALL_ENTRY
^
../kselftest_harness.h:608:13: note: in definition of macro ‘__EXPECT’
__typeof__(_expected) __exp = (_expected); \
^~~~~~~~~
seccomp_bpf.c:1787:2: note: in expansion of macro ‘EXPECT_EQ’
EXPECT_EQ(entry ? PTRACE_EVENTMSG_SYSCALL_ENTRY
^~~~~~~~~
seccomp_bpf.c:1787:20: note: each undeclared identifier is reported only once for each function it appears in
EXPECT_EQ(entry ? PTRACE_EVENTMSG_SYSCALL_ENTRY
^
../kselftest_harness.h:608:13: note: in definition of macro ‘__EXPECT’
__typeof__(_expected) __exp = (_expected); \
^~~~~~~~~
seccomp_bpf.c:1787:2: note: in expansion of macro ‘EXPECT_EQ’
EXPECT_EQ(entry ? PTRACE_EVENTMSG_SYSCALL_ENTRY
^~~~~~~~~
seccomp_bpf.c:1788:6: error: ‘PTRACE_EVENTMSG_SYSCALL_EXIT’ undeclared (first use in this function)
: PTRACE_EVENTMSG_SYSCALL_EXIT, msg);
^
../kselftest_harness.h:608:13: note: in definition of macro ‘__EXPECT’
__typeof__(_expected) __exp = (_expected); \
^~~~~~~~~
seccomp_bpf.c:1787:2: note: in expansion of macro ‘EXPECT_EQ’
EXPECT_EQ(entry ? PTRACE_EVENTMSG_SYSCALL_ENTRY
^~~~~~~~~
Makefile:12: recipe for target 'seccomp_bpf' failed
make: *** [seccomp_bpf] Error 1
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.6-201766a20e30f982ccfe36bebfad9602c3ff574a/tools/testing/selftests/seccomp'
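The failure is a plain build issue: seccomp_bpf.c references PTRACE_EVENTMSG_SYSCALL_ENTRY and PTRACE_EVENTMSG_SYSCALL_EXIT, which the commit adds to the kernel UAPI, while the selftest is compiled against the system's older linux/ptrace.h that does not define them yet. A minimal sketch of a fallback guard that would let the test build against such older headers (the values mirror the new UAPI definitions; treat them as an assumption if your headers differ):

/* Fallback for toolchains whose linux/ptrace.h predates the new request;
 * values assumed to match include/uapi/linux/ptrace.h. */
#ifndef PTRACE_EVENTMSG_SYSCALL_ENTRY
#define PTRACE_EVENTMSG_SYSCALL_ENTRY 1
#endif
#ifndef PTRACE_EVENTMSG_SYSCALL_EXIT
#define PTRACE_EVENTMSG_SYSCALL_EXIT  2
#endif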
To reproduce:
# build kernel
cd linux
cp config-5.2.0-10889-g201766a20e30f9 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen
[x86/mce] 1de08dccd3: will-it-scale.per_process_ops -14.1% regression
by kernel test robot
Greeting,
FYI, we noticed a -14.1% regression of will-it-scale.per_process_ops due to commit:
commit: 1de08dccd383482a3e88845d3554094d338f5ff9 ("x86/mce: Add a struct mce.kflags field")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
in testcase: will-it-scale
on test machine: 288 threads Intel(R) Xeon Phi(TM) CPU 7295 @ 1.50GHz with 80G memory
with the following parameters:
nr_task: 100%
mode: process
test: malloc1
cpufreq_governor: performance
ucode: 0x11
test-description: Will It Scale takes a testcase and runs it with 1 through n parallel copies to see if the testcase will scale. It builds both process-based and thread-based variants of each test in order to see any differences between the two.
test-url: https://github.com/antonblanchard/will-it-scale
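As a rough, self-contained illustration of what the harness measures (not the project's actual code; the malloc1-style loop body and sizes below are assumptions), the sketch forks N workers, each repeating a malloc/touch/free cycle sized above glibc's default mmap threshold so the work lands in mmap/munmap, and reports per-process and total operation counts, loosely analogous to will-it-scale.per_process_ops and will-it-scale.workload in the tables below:

#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#define NPROC   4               /* stand-in for nr_task */
#define SECONDS 5               /* measurement window */
#define SIZE    (1UL << 20)     /* above glibc's mmap threshold: malloc/free -> mmap/munmap */

static volatile sig_atomic_t stop;
static void on_alarm(int sig) { (void)sig; stop = 1; }

int main(void)
{
    /* one shared counter slot per worker */
    unsigned long *ops = mmap(NULL, NPROC * sizeof(*ops),
                              PROT_READ | PROT_WRITE,
                              MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (ops == MAP_FAILED)
        return 1;

    for (int i = 0; i < NPROC; i++) {
        if (fork() == 0) {
            signal(SIGALRM, on_alarm);
            alarm(SECONDS);
            while (!stop) {
                char *p = malloc(SIZE);
                if (!p)
                    break;
                memset(p, 0, SIZE);     /* touch the pages */
                free(p);                /* triggers munmap */
                ops[i]++;
            }
            _exit(0);
        }
    }
    while (wait(NULL) > 0)
        ;

    unsigned long total = 0;
    for (int i = 0; i < NPROC; i++)
        total += ops[i];
    printf("per_process_ops ~ %lu\n", total / NPROC / SECONDS);
    printf("workload        ~ %lu\n", total);
    return 0;
}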
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/process/100%/debian-x86_64-20191114.cgz/lkp-knm01/malloc1/will-it-scale/0x11
commit:
9554bfe403 ("x86/mce: Convert the CEC to use the MCE notifier")
1de08dccd3 ("x86/mce: Add a struct mce.kflags field")
9554bfe403bdfc08 1de08dccd383482a3e88845d355
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:4 25% 1:4 dmesg.WARNING:at#for_ip_interrupt_entry/0x
:4 25% 1:4 dmesg.WARNING:at_ip___perf_sw_event/0x
%stddev %change %stddev
\ | \
668.00 -14.1% 573.75 will-it-scale.per_process_ops
192559 -14.1% 165344 will-it-scale.workload
424371 -20.3% 338331 ± 8% vmstat.system.in
0.00 ± 13% +0.0 0.00 ± 22% mpstat.cpu.all.soft%
0.54 -0.1 0.47 ± 3% mpstat.cpu.all.usr%
1.205e+08 -13.7% 1.039e+08 numa-numastat.node0.local_node
1.205e+08 -13.7% 1.039e+08 numa-numastat.node0.numa_hit
61585280 -13.1% 53521568 numa-vmstat.node0.numa_hit
61585799 -13.1% 53522027 numa-vmstat.node0.numa_local
1.203e+08 -13.8% 1.037e+08 proc-vmstat.numa_hit
1.203e+08 -13.8% 1.037e+08 proc-vmstat.numa_local
1.205e+08 -13.7% 1.04e+08 proc-vmstat.pgalloc_normal
60608363 -13.6% 52339576 proc-vmstat.pgfault
1.204e+08 -13.8% 1.038e+08 proc-vmstat.pgfree
0.04 ± 9% +17.4% 0.05 ± 7% sched_debug.cfs_rq:/.nr_running.stddev
52.52 ± 5% +11.9% 58.79 ± 2% sched_debug.cfs_rq:/.util_avg.stddev
2049590 ± 7% -12.8% 1788136 ± 4% sched_debug.cpu.avg_idle.avg
418.68 ± 2% -22.1% 326.13 ± 2% sched_debug.cpu.clock.stddev
418.68 ± 2% -22.1% 326.14 ± 2% sched_debug.cpu.clock_task.stddev
158439 ± 8% +29.6% 205376 ± 16% sched_debug.cpu.max_idle_balance_cost.stddev
0.00 -18.8% 0.00 ± 2% sched_debug.cpu.next_balance.stddev
0.00 ±173% +500.0% 0.00 ± 33% sched_debug.cpu.nr_uninterruptible.avg
-49.04 +76.9% -86.75 sched_debug.cpu.nr_uninterruptible.min
1117 +13.3% 1266 ± 3% sched_debug.cpu.sched_count.min
36.92 -2.4 34.52 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
36.91 -2.4 34.51 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
36.30 -2.3 33.98 perf-profile.calltrace.cycles-pp.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
36.30 -2.3 33.99 perf-profile.calltrace.cycles-pp.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.55 -0.3 0.26 ±100% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt._raw_spin_unlock_irqrestore.pagevec_lru_move_fn
0.54 ± 2% -0.3 0.26 ±100% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt._raw_spin_unlock_irqrestore.release_pages
1.02 -0.1 0.94 perf-profile.calltrace.cycles-pp.page_fault
0.81 -0.1 0.73 perf-profile.calltrace.cycles-pp.handle_mm_fault.do_page_fault.page_fault
0.98 -0.1 0.90 perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
0.77 -0.1 0.68 perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.do_page_fault.page_fault
0.69 -0.1 0.61 perf-profile.calltrace.cycles-pp.handle_pte_fault.__handle_mm_fault.handle_mm_fault.do_page_fault.page_fault
0.65 -0.1 0.58 ± 5% perf-profile.calltrace.cycles-pp.apic_timer_interrupt._raw_spin_unlock_irqrestore.release_pages.tlb_flush_mmu.tlb_finish_mmu
0.67 ± 2% -0.1 0.60 ± 6% perf-profile.calltrace.cycles-pp.apic_timer_interrupt._raw_spin_unlock_irqrestore.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain
0.66 -0.1 0.59 ± 4% perf-profile.calltrace.cycles-pp._raw_spin_unlock_irqrestore.release_pages.tlb_flush_mmu.tlb_finish_mmu.unmap_region
0.62 -0.1 0.56 ± 4% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt._raw_spin_unlock_irqrestore.release_pages.tlb_flush_mmu
0.64 ± 2% -0.1 0.57 ± 6% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt._raw_spin_unlock_irqrestore.pagevec_lru_move_fn.lru_add_drain_cpu
0.66 -0.1 0.60 ± 6% perf-profile.calltrace.cycles-pp._raw_spin_unlock_irqrestore.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain.unmap_region
47.88 +0.1 48.00 perf-profile.calltrace.cycles-pp.tlb_finish_mmu.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap
47.86 +0.1 48.00 perf-profile.calltrace.cycles-pp.tlb_flush_mmu.tlb_finish_mmu.unmap_region.__do_munmap.__vm_munmap
47.74 +0.1 47.89 perf-profile.calltrace.cycles-pp.release_pages.tlb_flush_mmu.tlb_finish_mmu.unmap_region.__do_munmap
96.85 +0.3 97.18 perf-profile.calltrace.cycles-pp.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
46.33 +0.3 46.67 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.release_pages.tlb_flush_mmu.tlb_finish_mmu.unmap_region
46.30 +0.3 46.64 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.release_pages.tlb_flush_mmu.tlb_finish_mmu
47.55 +0.3 47.90 perf-profile.calltrace.cycles-pp.lru_add_drain.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap
47.55 +0.4 47.90 perf-profile.calltrace.cycles-pp.lru_add_drain_cpu.lru_add_drain.unmap_region.__do_munmap.__vm_munmap
47.52 +0.4 47.87 perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain.unmap_region.__do_munmap
96.64 +0.4 97.01 perf-profile.calltrace.cycles-pp.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
46.22 +0.5 46.74 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain.unmap_region
46.20 +0.5 46.72 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain
60.91 +2.6 63.55 perf-profile.calltrace.cycles-pp.munmap
60.60 +2.6 63.24 perf-profile.calltrace.cycles-pp.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.munmap
60.84 +2.6 63.47 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.munmap
60.59 +2.6 63.23 perf-profile.calltrace.cycles-pp.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.munmap
60.83 +2.6 63.47 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.munmap
2.21 -0.3 1.90 ± 6% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
2.24 -0.3 1.93 ± 5% perf-profile.children.cycles-pp.apic_timer_interrupt
2.13 -0.3 1.85 ± 5% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
1.84 -0.2 1.60 ± 5% perf-profile.children.cycles-pp.hrtimer_interrupt
1.57 -0.2 1.39 ± 4% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.66 -0.1 0.55 perf-profile.children.cycles-pp.vm_mmap_pgoff
0.67 -0.1 0.56 perf-profile.children.cycles-pp.ksys_mmap_pgoff
1.07 -0.1 0.96 ± 5% perf-profile.children.cycles-pp.tick_sched_timer
1.03 -0.1 0.93 ± 5% perf-profile.children.cycles-pp.tick_sched_handle
1.01 -0.1 0.91 ± 5% perf-profile.children.cycles-pp.update_process_times
0.58 -0.1 0.49 perf-profile.children.cycles-pp.do_mmap
1.01 -0.1 0.92 perf-profile.children.cycles-pp.do_page_fault
1.05 -0.1 0.97 perf-profile.children.cycles-pp.page_fault
0.83 -0.1 0.74 ± 2% perf-profile.children.cycles-pp.handle_mm_fault
0.79 ± 2% -0.1 0.71 ± 6% perf-profile.children.cycles-pp.scheduler_tick
0.92 ± 2% -0.1 0.84 ± 3% perf-profile.children.cycles-pp.unmap_vmas
0.78 -0.1 0.70 ± 2% perf-profile.children.cycles-pp.__handle_mm_fault
0.88 -0.1 0.80 ± 3% perf-profile.children.cycles-pp.unmap_page_range
0.70 -0.1 0.62 perf-profile.children.cycles-pp.handle_pte_fault
0.43 -0.1 0.36 perf-profile.children.cycles-pp.mmap_region
0.47 ± 2% -0.1 0.41 ± 2% perf-profile.children.cycles-pp.mmap64
0.55 -0.0 0.50 ± 4% perf-profile.children.cycles-pp.task_tick_fair
0.18 ± 2% -0.0 0.13 ± 3% perf-profile.children.cycles-pp.perf_event_mmap
0.11 -0.0 0.08 ± 6% perf-profile.children.cycles-pp.perf_iterate_sb
0.31 -0.0 0.27 ± 3% perf-profile.children.cycles-pp.__alloc_pages_nodemask
0.26 -0.0 0.23 perf-profile.children.cycles-pp.get_page_from_freelist
0.22 -0.0 0.19 ± 3% perf-profile.children.cycles-pp.pte_alloc_one
0.24 ± 2% -0.0 0.21 ± 4% perf-profile.children.cycles-pp.__pte_alloc
0.50 -0.0 0.47 ± 2% perf-profile.children.cycles-pp.change_protection
0.50 -0.0 0.47 ± 2% perf-profile.children.cycles-pp.change_p4d_range
0.50 -0.0 0.47 ± 2% perf-profile.children.cycles-pp.change_prot_numa
0.11 ± 4% -0.0 0.09 ± 4% perf-profile.children.cycles-pp.free_unref_page_list
0.50 -0.0 0.47 ± 2% perf-profile.children.cycles-pp.task_work_run
0.50 -0.0 0.48 perf-profile.children.cycles-pp.exit_to_usermode_loop
0.32 ± 2% -0.0 0.29 ± 4% perf-profile.children.cycles-pp.___might_sleep
0.50 -0.0 0.47 ± 2% perf-profile.children.cycles-pp.task_numa_work
0.16 ± 2% -0.0 0.14 ± 3% perf-profile.children.cycles-pp.prep_new_page
0.16 ± 2% -0.0 0.14 ± 3% perf-profile.children.cycles-pp.alloc_pages_vma
0.14 ± 3% -0.0 0.12 ± 4% perf-profile.children.cycles-pp.clear_page_erms
0.07 ± 6% -0.0 0.05 perf-profile.children.cycles-pp.kmem_cache_free
0.07 -0.0 0.05 ± 8% perf-profile.children.cycles-pp.percpu_counter_add_batch
0.17 ± 2% -0.0 0.15 ± 4% perf-profile.children.cycles-pp._cond_resched
0.12 -0.0 0.10 ± 4% perf-profile.children.cycles-pp.get_unmapped_area
0.09 ± 4% -0.0 0.07 ± 10% perf-profile.children.cycles-pp._raw_spin_lock
0.13 -0.0 0.11 ± 4% perf-profile.children.cycles-pp.__anon_vma_prepare
0.13 ± 3% -0.0 0.12 ± 3% perf-profile.children.cycles-pp.free_pgtables
0.10 ± 4% -0.0 0.08 ± 5% perf-profile.children.cycles-pp.arch_get_unmapped_area_topdown
0.16 -0.0 0.15 ± 3% perf-profile.children.cycles-pp.irq_exit
0.13 -0.0 0.12 ± 3% perf-profile.children.cycles-pp.free_pgd_range
0.10 -0.0 0.09 ± 4% perf-profile.children.cycles-pp.__update_load_avg_cfs_rq
0.05 +0.0 0.08 ± 6% perf-profile.children.cycles-pp.__mod_memcg_state
0.07 +0.0 0.10 perf-profile.children.cycles-pp.__mod_lruvec_state
0.14 ± 13% +0.0 0.18 ± 6% perf-profile.children.cycles-pp.__remove_hrtimer
47.91 +0.1 48.04 perf-profile.children.cycles-pp.tlb_finish_mmu
47.90 +0.1 48.03 perf-profile.children.cycles-pp.tlb_flush_mmu
47.81 +0.1 47.96 perf-profile.children.cycles-pp.release_pages
98.55 +0.1 98.70 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
98.53 +0.1 98.68 perf-profile.children.cycles-pp.do_syscall_64
96.91 +0.3 97.24 perf-profile.children.cycles-pp.__x64_sys_munmap
96.89 +0.3 97.22 perf-profile.children.cycles-pp.__vm_munmap
47.73 +0.3 48.06 perf-profile.children.cycles-pp.pagevec_lru_move_fn
96.88 +0.3 97.21 perf-profile.children.cycles-pp.__do_munmap
47.60 +0.4 47.95 perf-profile.children.cycles-pp.lru_add_drain
47.59 +0.4 47.94 perf-profile.children.cycles-pp.lru_add_drain_cpu
96.67 +0.4 97.03 perf-profile.children.cycles-pp.unmap_region
92.78 +0.8 93.60 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
92.83 +0.8 93.67 perf-profile.children.cycles-pp._raw_spin_lock_irqsave
60.92 +2.6 63.56 perf-profile.children.cycles-pp.munmap
0.17 ± 6% -0.1 0.06 ± 13% perf-profile.self.cycles-pp.__hrtimer_run_queues
0.47 ± 2% -0.1 0.42 ± 4% perf-profile.self.cycles-pp.unmap_page_range
0.10 ± 4% -0.0 0.07 ± 12% perf-profile.self.cycles-pp.hrtimer_interrupt
0.11 -0.0 0.08 ± 10% perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
0.44 -0.0 0.41 ± 3% perf-profile.self.cycles-pp.change_p4d_range
0.08 -0.0 0.06 ± 9% perf-profile.self.cycles-pp.perf_iterate_sb
0.14 -0.0 0.12 ± 3% perf-profile.self.cycles-pp.clear_page_erms
0.08 ± 5% -0.0 0.07 ± 6% perf-profile.self.cycles-pp._raw_spin_lock
0.07 ± 7% -0.0 0.05 perf-profile.self.cycles-pp.kmem_cache_free
0.08 -0.0 0.07 ± 6% perf-profile.self.cycles-pp.release_pages
0.09 -0.0 0.08 ± 5% perf-profile.self.cycles-pp.__update_load_avg_cfs_rq
0.05 ± 8% +0.0 0.07 ± 14% perf-profile.self.cycles-pp.___perf_sw_event
0.05 +0.0 0.07 ± 5% perf-profile.self.cycles-pp.__mod_memcg_state
0.00 +0.1 0.14 ± 9% perf-profile.self.cycles-pp.__remove_hrtimer
92.77 +0.8 93.60 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
124732 +11.7% 139275 softirqs.CPU0.TIMER
125382 +13.1% 141792 ± 2% softirqs.CPU10.TIMER
124413 +11.5% 138750 softirqs.CPU100.TIMER
123657 +12.2% 138733 softirqs.CPU101.TIMER
123899 +12.1% 138896 softirqs.CPU102.TIMER
124228 +11.7% 138717 softirqs.CPU103.TIMER
123619 +12.3% 138882 softirqs.CPU104.TIMER
123585 +12.7% 139247 softirqs.CPU105.TIMER
123593 +12.0% 138384 softirqs.CPU106.TIMER
123642 +12.2% 138741 softirqs.CPU107.TIMER
123926 +11.7% 138456 softirqs.CPU108.TIMER
123798 +12.0% 138608 softirqs.CPU109.TIMER
124136 +12.0% 139018 softirqs.CPU11.TIMER
123935 +12.5% 139487 softirqs.CPU110.TIMER
124699 +11.1% 138505 softirqs.CPU111.TIMER
123708 +12.3% 138865 softirqs.CPU112.TIMER
123122 +13.1% 139227 softirqs.CPU113.TIMER
123665 +16.1% 143543 ± 7% softirqs.CPU114.TIMER
123887 +11.6% 138272 softirqs.CPU115.TIMER
123873 +11.8% 138498 softirqs.CPU116.TIMER
123987 +11.6% 138362 softirqs.CPU117.TIMER
123891 +11.6% 138254 softirqs.CPU118.TIMER
123315 +16.1% 143144 ± 4% softirqs.CPU119.TIMER
123661 +14.9% 142093 ± 3% softirqs.CPU12.TIMER
123199 +12.3% 138367 softirqs.CPU120.TIMER
123372 +12.1% 138262 softirqs.CPU121.TIMER
123641 +11.6% 137956 softirqs.CPU122.TIMER
123389 +12.6% 138884 softirqs.CPU123.TIMER
123309 +12.4% 138608 softirqs.CPU124.TIMER
123604 +11.9% 138283 softirqs.CPU125.TIMER
123456 +12.2% 138513 softirqs.CPU126.TIMER
123526 +12.8% 139288 softirqs.CPU127.TIMER
123496 +11.9% 138241 softirqs.CPU128.TIMER
123381 +12.9% 139252 softirqs.CPU129.TIMER
125036 +11.1% 138906 softirqs.CPU13.TIMER
123722 +12.5% 139159 softirqs.CPU130.TIMER
123523 +12.1% 138454 softirqs.CPU131.TIMER
123760 +11.8% 138329 softirqs.CPU132.TIMER
123569 +12.2% 138624 softirqs.CPU133.TIMER
123669 +12.1% 138657 softirqs.CPU134.TIMER
122996 +12.7% 138591 softirqs.CPU135.TIMER
123011 +12.6% 138503 softirqs.CPU136.TIMER
123141 +12.6% 138624 softirqs.CPU137.TIMER
123505 +12.0% 138360 softirqs.CPU138.TIMER
123513 +12.0% 138343 softirqs.CPU139.TIMER
124622 +18.6% 147768 ± 6% softirqs.CPU14.TIMER
123088 +12.3% 138241 softirqs.CPU140.TIMER
123186 +13.2% 139395 softirqs.CPU141.TIMER
123249 +12.8% 139061 ± 2% softirqs.CPU142.TIMER
123274 +12.2% 138333 softirqs.CPU143.TIMER
123241 +12.5% 138599 ± 2% softirqs.CPU144.TIMER
123183 +12.1% 138053 softirqs.CPU145.TIMER
124668 +10.7% 137947 softirqs.CPU146.TIMER
123275 +12.7% 138880 softirqs.CPU147.TIMER
123278 +12.3% 138403 softirqs.CPU148.TIMER
123611 +12.3% 138764 softirqs.CPU149.TIMER
124133 +12.0% 139055 softirqs.CPU15.TIMER
123195 +12.1% 138071 softirqs.CPU150.TIMER
123687 +11.7% 138126 softirqs.CPU151.TIMER
123483 +11.9% 138191 softirqs.CPU152.TIMER
123281 +12.1% 138248 softirqs.CPU153.TIMER
124097 +11.3% 138090 softirqs.CPU154.TIMER
123803 +11.7% 138348 softirqs.CPU155.TIMER
122720 +12.6% 138171 softirqs.CPU157.TIMER
123267 +12.7% 138929 softirqs.CPU158.TIMER
123387 +12.0% 138189 softirqs.CPU159.TIMER
124075 +12.0% 138944 softirqs.CPU16.TIMER
123571 +12.4% 138834 softirqs.CPU160.TIMER
123578 +12.0% 138378 softirqs.CPU161.TIMER
123664 +11.9% 138397 softirqs.CPU162.TIMER
123363 +11.9% 138061 softirqs.CPU163.TIMER
123053 +12.4% 138264 softirqs.CPU164.TIMER
123412 +12.1% 138367 softirqs.CPU165.TIMER
123721 +11.7% 138219 softirqs.CPU166.TIMER
123512 +11.8% 138098 softirqs.CPU167.TIMER
123392 +12.1% 138382 softirqs.CPU168.TIMER
123559 +11.9% 138276 softirqs.CPU169.TIMER
124075 +12.5% 139563 softirqs.CPU17.TIMER
123019 +12.5% 138428 softirqs.CPU170.TIMER
123467 +12.0% 138328 softirqs.CPU171.TIMER
123040 +12.3% 138224 softirqs.CPU173.TIMER
123997 +11.6% 138334 softirqs.CPU174.TIMER
123787 +11.8% 138437 softirqs.CPU175.TIMER
123315 +12.2% 138419 softirqs.CPU176.TIMER
123771 +12.1% 138716 softirqs.CPU177.TIMER
123016 +12.2% 138058 softirqs.CPU178.TIMER
122844 +12.4% 138072 softirqs.CPU179.TIMER
123981 +12.1% 138975 softirqs.CPU18.TIMER
123511 +11.8% 138041 softirqs.CPU180.TIMER
123415 +12.0% 138171 softirqs.CPU181.TIMER
122954 +12.9% 138845 softirqs.CPU182.TIMER
123291 +12.0% 138113 softirqs.CPU183.TIMER
122910 +12.4% 138175 softirqs.CPU184.TIMER
123015 +12.8% 138812 ± 2% softirqs.CPU185.TIMER
123197 +11.5% 137396 softirqs.CPU186.TIMER
122914 +12.2% 137870 softirqs.CPU187.TIMER
122854 +12.7% 138509 softirqs.CPU188.TIMER
122864 +12.5% 138211 softirqs.CPU189.TIMER
124311 +11.5% 138654 softirqs.CPU19.TIMER
122961 +12.4% 138166 softirqs.CPU190.TIMER
123015 +12.3% 138134 softirqs.CPU191.TIMER
122906 +12.3% 138029 softirqs.CPU192.TIMER
122988 +12.3% 138098 softirqs.CPU193.TIMER
122863 +11.9% 137464 softirqs.CPU194.TIMER
122883 +12.3% 138039 softirqs.CPU195.TIMER
123157 +12.1% 138023 softirqs.CPU196.TIMER
122831 +12.5% 138204 softirqs.CPU197.TIMER
123113 +11.9% 137755 softirqs.CPU198.TIMER
122809 +12.2% 137771 softirqs.CPU199.TIMER
123833 +12.3% 139118 softirqs.CPU20.TIMER
122908 +12.1% 137806 softirqs.CPU200.TIMER
122641 +12.4% 137887 softirqs.CPU201.TIMER
123187 +12.2% 138253 softirqs.CPU202.TIMER
122997 +12.2% 138012 softirqs.CPU203.TIMER
123088 +11.7% 137542 softirqs.CPU204.TIMER
122928 +12.2% 137903 softirqs.CPU205.TIMER
122990 +12.1% 137911 softirqs.CPU206.TIMER
123028 +12.2% 138095 softirqs.CPU207.TIMER
122473 +12.7% 138050 softirqs.CPU208.TIMER
122665 +12.8% 138325 softirqs.CPU209.TIMER
124436 +11.5% 138798 softirqs.CPU21.TIMER
122387 +12.7% 137900 softirqs.CPU210.TIMER
122693 +12.3% 137746 softirqs.CPU211.TIMER
122651 +12.3% 137710 softirqs.CPU212.TIMER
122995 +11.9% 137644 softirqs.CPU213.TIMER
122918 +11.9% 137535 softirqs.CPU214.TIMER
122662 +12.0% 137431 softirqs.CPU215.TIMER
122226 +12.3% 137244 softirqs.CPU216.TIMER
122549 +12.1% 137338 softirqs.CPU217.TIMER
123475 +11.5% 137651 softirqs.CPU218.TIMER
123553 +11.4% 137635 softirqs.CPU219.TIMER
123621 +13.3% 140099 ± 2% softirqs.CPU22.TIMER
122924 +12.0% 137726 softirqs.CPU220.TIMER
123174 +12.3% 138348 softirqs.CPU221.TIMER
122630 +12.5% 137977 softirqs.CPU222.TIMER
123731 +11.8% 138294 softirqs.CPU223.TIMER
123492 +11.8% 138023 softirqs.CPU224.TIMER
123019 +11.9% 137651 softirqs.CPU225.TIMER
123061 +12.0% 137781 softirqs.CPU226.TIMER
123436 +11.8% 137983 softirqs.CPU227.TIMER
122711 +12.3% 137768 softirqs.CPU228.TIMER
122361 +13.2% 138515 softirqs.CPU229.TIMER
124044 +12.2% 139183 softirqs.CPU23.TIMER
122462 +13.2% 138573 softirqs.CPU230.TIMER
122970 +12.5% 138298 softirqs.CPU231.TIMER
123005 +12.2% 137962 softirqs.CPU232.TIMER
122716 +12.4% 137939 softirqs.CPU233.TIMER
122591 +12.4% 137826 softirqs.CPU234.TIMER
122794 +12.4% 138058 softirqs.CPU235.TIMER
122607 +12.6% 138015 softirqs.CPU236.TIMER
122744 +12.8% 138401 softirqs.CPU237.TIMER
122680 +12.2% 137683 softirqs.CPU238.TIMER
122589 +12.3% 137729 softirqs.CPU239.TIMER
122741 +12.5% 138046 softirqs.CPU240.TIMER
124001 +11.3% 137964 softirqs.CPU241.TIMER
122283 +12.5% 137507 softirqs.CPU242.TIMER
122722 +12.4% 137958 softirqs.CPU243.TIMER
123393 +11.4% 137463 softirqs.CPU244.TIMER
122456 +12.4% 137610 softirqs.CPU245.TIMER
122995 +12.3% 138090 softirqs.CPU246.TIMER
123687 +11.4% 137814 softirqs.CPU247.TIMER
122494 +12.9% 138288 softirqs.CPU248.TIMER
122634 +12.6% 138146 softirqs.CPU249.TIMER
124923 +11.8% 139698 softirqs.CPU25.TIMER
122661 +12.3% 137714 softirqs.CPU250.TIMER
122343 +12.5% 137689 softirqs.CPU251.TIMER
122846 +11.8% 137353 softirqs.CPU252.TIMER
122455 +12.3% 137525 softirqs.CPU253.TIMER
122334 +13.1% 138369 softirqs.CPU254.TIMER
122023 +13.2% 138083 ± 2% softirqs.CPU256.TIMER
122317 +12.2% 137246 softirqs.CPU257.TIMER
122558 +11.6% 136825 softirqs.CPU258.TIMER
122482 +15.8% 141829 ± 6% softirqs.CPU259.TIMER
123637 +12.2% 138768 softirqs.CPU26.TIMER
122120 +12.5% 137405 softirqs.CPU260.TIMER
122386 +12.2% 137365 softirqs.CPU262.TIMER
122398 +12.4% 137560 softirqs.CPU263.TIMER
122108 +12.5% 137359 softirqs.CPU264.TIMER
122101 +15.4% 140873 ± 5% softirqs.CPU265.TIMER
122163 +12.2% 137050 softirqs.CPU266.TIMER
122077 +12.6% 137416 softirqs.CPU267.TIMER
122318 +12.3% 137341 softirqs.CPU268.TIMER
122063 +12.5% 137290 softirqs.CPU269.TIMER
123854 +12.0% 138752 softirqs.CPU27.TIMER
122323 +12.0% 137011 softirqs.CPU270.TIMER
122091 +12.4% 137199 softirqs.CPU271.TIMER
122042 +13.0% 137944 softirqs.CPU272.TIMER
122898 +11.4% 136920 softirqs.CPU273.TIMER
122100 +12.5% 137340 softirqs.CPU274.TIMER
122223 +12.2% 137141 softirqs.CPU275.TIMER
122367 +11.8% 136818 softirqs.CPU276.TIMER
122216 +12.2% 137067 softirqs.CPU277.TIMER
122035 +12.2% 136880 softirqs.CPU278.TIMER
122296 +12.4% 137442 softirqs.CPU279.TIMER
124072 +12.2% 139264 softirqs.CPU28.TIMER
122058 +12.3% 137060 softirqs.CPU280.TIMER
121889 +12.5% 137107 softirqs.CPU281.TIMER
121953 +12.4% 137058 softirqs.CPU282.TIMER
122136 +12.3% 137153 softirqs.CPU283.TIMER
122152 +11.9% 136661 softirqs.CPU284.TIMER
122126 +12.1% 136891 softirqs.CPU285.TIMER
121523 +12.7% 136947 softirqs.CPU286.TIMER
117427 +12.6% 132264 ± 2% softirqs.CPU287.TIMER
124082 +11.7% 138601 softirqs.CPU29.TIMER
124520 +11.4% 138684 softirqs.CPU3.TIMER
125786 ± 3% +10.4% 138855 softirqs.CPU30.TIMER
124607 +11.5% 138958 softirqs.CPU31.TIMER
123701 +13.1% 139886 softirqs.CPU32.TIMER
124593 +11.9% 139391 softirqs.CPU33.TIMER
123719 +12.0% 138526 softirqs.CPU34.TIMER
123959 +11.9% 138665 softirqs.CPU35.TIMER
123758 +12.0% 138556 softirqs.CPU36.TIMER
123856 +11.9% 138597 softirqs.CPU37.TIMER
124053 +16.7% 144775 ± 6% softirqs.CPU38.TIMER
123675 +11.8% 138281 softirqs.CPU39.TIMER
124228 +12.0% 139164 softirqs.CPU4.TIMER
123900 +12.3% 139175 softirqs.CPU40.TIMER
123892 +12.4% 139211 softirqs.CPU41.TIMER
127063 ± 3% +9.1% 138615 softirqs.CPU42.TIMER
123679 +12.2% 138760 softirqs.CPU43.TIMER
124702 +11.9% 139566 softirqs.CPU44.TIMER
123975 +11.9% 138712 softirqs.CPU45.TIMER
124174 +11.6% 138531 softirqs.CPU46.TIMER
123644 +12.1% 138571 softirqs.CPU48.TIMER
123687 +12.3% 138843 softirqs.CPU49.TIMER
124610 +11.6% 139078 softirqs.CPU5.TIMER
124146 +15.8% 143709 ± 3% softirqs.CPU51.TIMER
123635 +12.8% 139412 softirqs.CPU52.TIMER
124065 +12.1% 139088 softirqs.CPU53.TIMER
124147 +14.2% 141788 ± 3% softirqs.CPU54.TIMER
123762 +12.2% 138905 softirqs.CPU55.TIMER
125582 +10.6% 138868 softirqs.CPU57.TIMER
125328 +18.7% 148744 ± 11% softirqs.CPU58.TIMER
123995 +12.1% 138967 softirqs.CPU59.TIMER
124120 +12.1% 139157 softirqs.CPU6.TIMER
124023 +12.3% 139227 softirqs.CPU60.TIMER
123781 +12.1% 138727 softirqs.CPU61.TIMER
123569 +12.5% 138970 softirqs.CPU62.TIMER
123608 +12.5% 139074 softirqs.CPU63.TIMER
123407 +13.1% 139543 softirqs.CPU64.TIMER
127045 ± 4% +9.2% 138760 softirqs.CPU65.TIMER
126610 ± 5% +9.5% 138633 softirqs.CPU66.TIMER
123612 +12.1% 138603 softirqs.CPU67.TIMER
123604 +12.5% 139113 softirqs.CPU68.TIMER
123505 +12.2% 138554 softirqs.CPU69.TIMER
124325 +11.7% 138838 softirqs.CPU7.TIMER
124090 +12.5% 139591 softirqs.CPU70.TIMER
123699 +11.9% 138451 softirqs.CPU71.TIMER
124613 +13.9% 141945 ± 3% softirqs.CPU72.TIMER
123431 +12.3% 138639 softirqs.CPU73.TIMER
123853 +12.1% 138852 softirqs.CPU75.TIMER
124259 +11.8% 138957 softirqs.CPU76.TIMER
123966 +12.1% 138962 softirqs.CPU77.TIMER
123577 +12.1% 138498 softirqs.CPU78.TIMER
123749 +12.0% 138577 softirqs.CPU79.TIMER
124280 +11.7% 138772 softirqs.CPU8.TIMER
124029 +11.7% 138583 softirqs.CPU80.TIMER
123492 +12.4% 138826 softirqs.CPU81.TIMER
124304 +11.6% 138753 softirqs.CPU82.TIMER
123952 +12.0% 138766 softirqs.CPU83.TIMER
123672 +15.0% 142254 ± 3% softirqs.CPU85.TIMER
123910 +12.4% 139236 softirqs.CPU86.TIMER
123664 +12.3% 138908 softirqs.CPU87.TIMER
124068 +12.1% 139076 softirqs.CPU88.TIMER
123802 +12.2% 138885 softirqs.CPU89.TIMER
123826 +12.0% 138649 softirqs.CPU90.TIMER
124086 +11.7% 138621 softirqs.CPU91.TIMER
123586 +12.4% 138925 softirqs.CPU92.TIMER
123696 +12.5% 139160 softirqs.CPU93.TIMER
123720 +11.8% 138367 softirqs.CPU94.TIMER
124536 +11.3% 138666 softirqs.CPU95.TIMER
123950 +12.2% 139071 softirqs.CPU96.TIMER
123876 +12.1% 138860 softirqs.CPU97.TIMER
123700 +12.2% 138807 softirqs.CPU98.TIMER
123452 +12.5% 138905 softirqs.CPU99.TIMER
35617264 +12.0% 39908742 softirqs.TIMER
554.75 ± 65% +547.1% 3590 ± 96% interrupts.32:IR-PCI-MSI.2097155-edge.eth0-TxRx-2
452452 -21.0% 357575 ± 8% interrupts.CPU0.LOC:Local_timer_interrupts
1315 ± 10% +55.9% 2051 ± 16% interrupts.CPU0.RES:Rescheduling_interrupts
452347 -21.1% 357112 ± 8% interrupts.CPU1.LOC:Local_timer_interrupts
452532 -21.3% 356223 ± 8% interrupts.CPU10.LOC:Local_timer_interrupts
448763 -20.9% 354890 ± 8% interrupts.CPU100.LOC:Local_timer_interrupts
448680 -20.8% 355364 ± 8% interrupts.CPU101.LOC:Local_timer_interrupts
447608 -20.7% 355149 ± 8% interrupts.CPU102.LOC:Local_timer_interrupts
448227 -20.9% 354352 ± 8% interrupts.CPU103.LOC:Local_timer_interrupts
448996 -20.6% 356349 ± 8% interrupts.CPU104.LOC:Local_timer_interrupts
448510 -20.7% 355821 ± 8% interrupts.CPU105.LOC:Local_timer_interrupts
447226 -20.7% 354825 ± 8% interrupts.CPU106.LOC:Local_timer_interrupts
446828 -20.9% 353563 ± 8% interrupts.CPU107.LOC:Local_timer_interrupts
446384 -20.5% 355009 ± 8% interrupts.CPU108.LOC:Local_timer_interrupts
446823 -20.7% 354511 ± 8% interrupts.CPU109.LOC:Local_timer_interrupts
452246 -21.1% 356984 ± 8% interrupts.CPU11.LOC:Local_timer_interrupts
447034 -20.5% 355451 ± 8% interrupts.CPU110.LOC:Local_timer_interrupts
447783 -20.9% 354154 ± 8% interrupts.CPU111.LOC:Local_timer_interrupts
446657 -20.5% 355120 ± 8% interrupts.CPU112.LOC:Local_timer_interrupts
445392 -20.3% 354902 ± 8% interrupts.CPU113.LOC:Local_timer_interrupts
448354 -20.7% 355621 ± 8% interrupts.CPU114.LOC:Local_timer_interrupts
72.00 ± 84% +293.8% 283.50 ± 28% interrupts.CPU114.RES:Rescheduling_interrupts
447962 -20.8% 354707 ± 8% interrupts.CPU115.LOC:Local_timer_interrupts
447215 -20.7% 354848 ± 8% interrupts.CPU116.LOC:Local_timer_interrupts
447530 -20.8% 354411 ± 8% interrupts.CPU117.LOC:Local_timer_interrupts
448647 -20.9% 354998 ± 8% interrupts.CPU118.LOC:Local_timer_interrupts
447753 -20.6% 355316 ± 8% interrupts.CPU119.LOC:Local_timer_interrupts
554.75 ± 65% +547.1% 3590 ± 96% interrupts.CPU12.32:IR-PCI-MSI.2097155-edge.eth0-TxRx-2
451965 -20.9% 357310 ± 8% interrupts.CPU12.LOC:Local_timer_interrupts
448234 -20.7% 355312 ± 8% interrupts.CPU120.LOC:Local_timer_interrupts
447794 -20.7% 354971 ± 8% interrupts.CPU121.LOC:Local_timer_interrupts
448691 -20.4% 357243 ± 8% interrupts.CPU122.LOC:Local_timer_interrupts
5632 ± 32% -33.5% 3745 interrupts.CPU122.NMI:Non-maskable_interrupts
5632 ± 32% -33.5% 3745 interrupts.CPU122.PMI:Performance_monitoring_interrupts
447313 -20.2% 356771 ± 9% interrupts.CPU123.LOC:Local_timer_interrupts
91.50 ± 57% -59.0% 37.50 ±128% interrupts.CPU123.RES:Rescheduling_interrupts
448249 -20.3% 357447 ± 8% interrupts.CPU124.LOC:Local_timer_interrupts
448297 -20.6% 355804 ± 8% interrupts.CPU125.LOC:Local_timer_interrupts
449032 -21.0% 354647 ± 8% interrupts.CPU126.LOC:Local_timer_interrupts
447604 -20.8% 354651 ± 8% interrupts.CPU127.LOC:Local_timer_interrupts
447392 -20.4% 356084 ± 8% interrupts.CPU128.LOC:Local_timer_interrupts
169.25 ± 48% +405.2% 855.00 ±119% interrupts.CPU128.RES:Rescheduling_interrupts
447344 -20.7% 354766 ± 8% interrupts.CPU129.LOC:Local_timer_interrupts
6632 ± 24% -29.3% 4686 ± 33% interrupts.CPU129.NMI:Non-maskable_interrupts
6632 ± 24% -29.3% 4686 ± 33% interrupts.CPU129.PMI:Performance_monitoring_interrupts
452316 -21.2% 356567 ± 8% interrupts.CPU13.LOC:Local_timer_interrupts
449103 -20.4% 357697 ± 8% interrupts.CPU130.LOC:Local_timer_interrupts
449086 -20.5% 357205 ± 8% interrupts.CPU131.LOC:Local_timer_interrupts
450238 -21.0% 355846 ± 8% interrupts.CPU132.LOC:Local_timer_interrupts
447135 -20.8% 354142 ± 8% interrupts.CPU133.LOC:Local_timer_interrupts
446450 -20.1% 356602 ± 8% interrupts.CPU134.LOC:Local_timer_interrupts
449356 -20.5% 357068 ± 9% interrupts.CPU135.LOC:Local_timer_interrupts
447938 -20.5% 356253 ± 8% interrupts.CPU136.LOC:Local_timer_interrupts
447648 -20.6% 355408 ± 8% interrupts.CPU137.LOC:Local_timer_interrupts
3763 +74.3% 6559 ± 24% interrupts.CPU137.NMI:Non-maskable_interrupts
3763 +74.3% 6559 ± 24% interrupts.CPU137.PMI:Performance_monitoring_interrupts
446747 -20.3% 355856 ± 8% interrupts.CPU138.LOC:Local_timer_interrupts
447237 -20.8% 354392 ± 8% interrupts.CPU139.LOC:Local_timer_interrupts
452138 -21.0% 357163 ± 8% interrupts.CPU14.LOC:Local_timer_interrupts
448154 -20.5% 356451 ± 8% interrupts.CPU140.LOC:Local_timer_interrupts
176.25 ± 53% +131.8% 408.50 ± 38% interrupts.CPU140.RES:Rescheduling_interrupts
447880 -20.6% 355665 ± 8% interrupts.CPU141.LOC:Local_timer_interrupts
447680 -20.7% 354855 ± 8% interrupts.CPU142.LOC:Local_timer_interrupts
446659 -20.6% 354589 ± 8% interrupts.CPU143.LOC:Local_timer_interrupts
445820 -20.5% 354379 ± 8% interrupts.CPU144.LOC:Local_timer_interrupts
6541 ± 24% -42.8% 3738 interrupts.CPU144.NMI:Non-maskable_interrupts
6541 ± 24% -42.8% 3738 interrupts.CPU144.PMI:Performance_monitoring_interrupts
447253 -20.6% 355128 ± 8% interrupts.CPU145.LOC:Local_timer_interrupts
447763 -20.8% 354763 ± 8% interrupts.CPU146.LOC:Local_timer_interrupts
447772 -20.7% 354890 ± 8% interrupts.CPU147.LOC:Local_timer_interrupts
446939 -20.7% 354440 ± 8% interrupts.CPU148.LOC:Local_timer_interrupts
447464 -20.7% 354793 ± 8% interrupts.CPU149.LOC:Local_timer_interrupts
452738 -21.2% 356559 ± 8% interrupts.CPU15.LOC:Local_timer_interrupts
446799 -20.6% 354599 ± 8% interrupts.CPU150.LOC:Local_timer_interrupts
447879 -20.9% 354287 ± 8% interrupts.CPU151.LOC:Local_timer_interrupts
448426 -20.6% 355888 ± 8% interrupts.CPU152.LOC:Local_timer_interrupts
449366 -20.7% 356542 ± 8% interrupts.CPU153.LOC:Local_timer_interrupts
447867 -20.7% 355067 ± 8% interrupts.CPU154.LOC:Local_timer_interrupts
447719 -20.7% 355013 ± 8% interrupts.CPU155.LOC:Local_timer_interrupts
446511 -20.3% 355972 ± 8% interrupts.CPU156.LOC:Local_timer_interrupts
449192 -20.9% 355497 ± 8% interrupts.CPU157.LOC:Local_timer_interrupts
446758 -20.5% 355083 ± 8% interrupts.CPU158.LOC:Local_timer_interrupts
447084 -20.6% 355107 ± 8% interrupts.CPU159.LOC:Local_timer_interrupts
452662 -21.1% 357106 ± 8% interrupts.CPU16.LOC:Local_timer_interrupts
447831 -20.8% 354684 ± 8% interrupts.CPU160.LOC:Local_timer_interrupts
109.50 ± 15% -37.2% 68.75 ± 22% interrupts.CPU160.RES:Rescheduling_interrupts
447477 -20.6% 355188 ± 8% interrupts.CPU161.LOC:Local_timer_interrupts
448495 -20.8% 355285 ± 8% interrupts.CPU162.LOC:Local_timer_interrupts
449288 -20.9% 355477 ± 8% interrupts.CPU163.LOC:Local_timer_interrupts
447985 -20.6% 355568 ± 8% interrupts.CPU164.LOC:Local_timer_interrupts
446601 -20.5% 355191 ± 8% interrupts.CPU165.LOC:Local_timer_interrupts
448135 -20.6% 355818 ± 8% interrupts.CPU166.LOC:Local_timer_interrupts
448461 -20.6% 356227 ± 8% interrupts.CPU167.LOC:Local_timer_interrupts
447391 -20.7% 354733 ± 8% interrupts.CPU168.LOC:Local_timer_interrupts
446612 -20.5% 354909 ± 8% interrupts.CPU169.LOC:Local_timer_interrupts
452748 -21.3% 356314 ± 8% interrupts.CPU17.LOC:Local_timer_interrupts
447238 -20.4% 355830 ± 8% interrupts.CPU170.LOC:Local_timer_interrupts
448308 -20.9% 354608 ± 8% interrupts.CPU171.LOC:Local_timer_interrupts
449043 -21.2% 354035 ± 8% interrupts.CPU172.LOC:Local_timer_interrupts
449395 -20.9% 355374 ± 9% interrupts.CPU173.LOC:Local_timer_interrupts
5700 ± 32% -17.8% 4687 ± 33% interrupts.CPU173.NMI:Non-maskable_interrupts
5700 ± 32% -17.8% 4687 ± 33% interrupts.CPU173.PMI:Performance_monitoring_interrupts
446781 -20.4% 355457 ± 8% interrupts.CPU174.LOC:Local_timer_interrupts
446541 -20.7% 354087 ± 8% interrupts.CPU175.LOC:Local_timer_interrupts
447728 -20.6% 355499 ± 8% interrupts.CPU176.LOC:Local_timer_interrupts
447740 -20.5% 355885 ± 8% interrupts.CPU177.LOC:Local_timer_interrupts
6623 ± 24% -43.3% 3758 interrupts.CPU177.NMI:Non-maskable_interrupts
6623 ± 24% -43.3% 3758 interrupts.CPU177.PMI:Performance_monitoring_interrupts
447747 -20.7% 355148 ± 8% interrupts.CPU178.LOC:Local_timer_interrupts
447285 -20.6% 354994 ± 8% interrupts.CPU179.LOC:Local_timer_interrupts
450911 -21.1% 355809 ± 8% interrupts.CPU18.LOC:Local_timer_interrupts
447180 -20.7% 354485 ± 8% interrupts.CPU180.LOC:Local_timer_interrupts
447702 -20.7% 354973 ± 8% interrupts.CPU181.LOC:Local_timer_interrupts
447897 -20.7% 355132 ± 8% interrupts.CPU182.LOC:Local_timer_interrupts
449321 -21.0% 355184 ± 8% interrupts.CPU183.LOC:Local_timer_interrupts
448357 -20.8% 354979 ± 8% interrupts.CPU184.LOC:Local_timer_interrupts
165.00 ± 61% -81.2% 31.00 ± 63% interrupts.CPU184.RES:Rescheduling_interrupts
447698 -20.4% 356305 ± 8% interrupts.CPU185.LOC:Local_timer_interrupts
446780 -20.6% 354611 ± 8% interrupts.CPU186.LOC:Local_timer_interrupts
447678 -20.6% 355625 ± 8% interrupts.CPU187.LOC:Local_timer_interrupts
447756 -20.3% 356660 ± 8% interrupts.CPU188.LOC:Local_timer_interrupts
448842 -20.5% 356728 ± 8% interrupts.CPU189.LOC:Local_timer_interrupts
452463 -21.2% 356696 ± 8% interrupts.CPU19.LOC:Local_timer_interrupts
448558 -20.6% 355985 ± 8% interrupts.CPU190.LOC:Local_timer_interrupts
6581 ± 24% -28.9% 4680 ± 34% interrupts.CPU190.NMI:Non-maskable_interrupts
6581 ± 24% -28.9% 4680 ± 34% interrupts.CPU190.PMI:Performance_monitoring_interrupts
448186 -20.7% 355605 ± 8% interrupts.CPU191.LOC:Local_timer_interrupts
447640 -20.7% 354849 ± 8% interrupts.CPU192.LOC:Local_timer_interrupts
447828 -20.5% 355818 ± 8% interrupts.CPU193.LOC:Local_timer_interrupts
449769 -20.7% 356689 ± 8% interrupts.CPU194.LOC:Local_timer_interrupts
449120 -20.4% 357570 ± 9% interrupts.CPU195.LOC:Local_timer_interrupts
5641 ± 32% -33.7% 3738 interrupts.CPU195.NMI:Non-maskable_interrupts
5641 ± 32% -33.7% 3738 interrupts.CPU195.PMI:Performance_monitoring_interrupts
448037 -20.4% 356497 ± 7% interrupts.CPU196.LOC:Local_timer_interrupts
446302 -20.4% 355172 ± 8% interrupts.CPU197.LOC:Local_timer_interrupts
451541 -21.3% 355341 ± 8% interrupts.CPU198.LOC:Local_timer_interrupts
449452 -21.1% 354503 ± 9% interrupts.CPU199.LOC:Local_timer_interrupts
450751 -20.8% 356778 ± 8% interrupts.CPU2.LOC:Local_timer_interrupts
451672 -21.2% 356011 ± 8% interrupts.CPU20.LOC:Local_timer_interrupts
447614 -20.9% 354143 ± 8% interrupts.CPU200.LOC:Local_timer_interrupts
446364 -20.6% 354456 ± 8% interrupts.CPU201.LOC:Local_timer_interrupts
447150 -20.4% 355847 ± 8% interrupts.CPU202.LOC:Local_timer_interrupts
5662 ± 32% -17.4% 4678 ± 34% interrupts.CPU202.NMI:Non-maskable_interrupts
5662 ± 32% -17.4% 4678 ± 34% interrupts.CPU202.PMI:Performance_monitoring_interrupts
447324 -20.5% 355784 ± 8% interrupts.CPU203.LOC:Local_timer_interrupts
450353 -21.1% 355551 ± 8% interrupts.CPU204.LOC:Local_timer_interrupts
449486 -21.1% 354766 ± 8% interrupts.CPU205.LOC:Local_timer_interrupts
47.50 ±118% -84.7% 7.25 ± 15% interrupts.CPU205.RES:Rescheduling_interrupts
447640 -20.7% 355083 ± 8% interrupts.CPU206.LOC:Local_timer_interrupts
447443 -20.4% 356366 ± 8% interrupts.CPU207.LOC:Local_timer_interrupts
446651 -20.5% 355032 ± 8% interrupts.CPU208.LOC:Local_timer_interrupts
446807 -20.4% 355505 ± 8% interrupts.CPU209.LOC:Local_timer_interrupts
5643 ± 32% -33.5% 3751 interrupts.CPU209.NMI:Non-maskable_interrupts
5643 ± 32% -33.5% 3751 interrupts.CPU209.PMI:Performance_monitoring_interrupts
452259 -21.0% 357396 ± 8% interrupts.CPU21.LOC:Local_timer_interrupts
447655 -20.7% 354771 ± 8% interrupts.CPU210.LOC:Local_timer_interrupts
447098 -20.7% 354473 ± 8% interrupts.CPU211.LOC:Local_timer_interrupts
446681 -20.6% 354768 ± 8% interrupts.CPU212.LOC:Local_timer_interrupts
446397 -20.4% 355254 ± 8% interrupts.CPU213.LOC:Local_timer_interrupts
7.75 ± 5% +1032.3% 87.75 ±105% interrupts.CPU213.RES:Rescheduling_interrupts
449196 -20.8% 355834 ± 8% interrupts.CPU214.LOC:Local_timer_interrupts
447889 -20.7% 355386 ± 8% interrupts.CPU215.LOC:Local_timer_interrupts
448595 -21.1% 353795 ± 8% interrupts.CPU216.LOC:Local_timer_interrupts
448232 -21.0% 354308 ± 8% interrupts.CPU217.LOC:Local_timer_interrupts
446761 -20.6% 354787 ± 8% interrupts.CPU218.LOC:Local_timer_interrupts
446095 -20.5% 354569 ± 8% interrupts.CPU219.LOC:Local_timer_interrupts
451994 -21.1% 356839 ± 8% interrupts.CPU22.LOC:Local_timer_interrupts
448837 -21.0% 354621 ± 8% interrupts.CPU220.LOC:Local_timer_interrupts
448495 -21.1% 353771 ± 8% interrupts.CPU221.LOC:Local_timer_interrupts
446731 -20.6% 354886 ± 8% interrupts.CPU222.LOC:Local_timer_interrupts
446768 -20.7% 354324 ± 8% interrupts.CPU223.LOC:Local_timer_interrupts
446684 -20.6% 354750 ± 8% interrupts.CPU224.LOC:Local_timer_interrupts
446780 -20.6% 354621 ± 8% interrupts.CPU225.LOC:Local_timer_interrupts
448071 -20.6% 355846 ± 8% interrupts.CPU226.LOC:Local_timer_interrupts
447211 -20.6% 355043 ± 8% interrupts.CPU227.LOC:Local_timer_interrupts
447185 -20.7% 354717 ± 8% interrupts.CPU228.LOC:Local_timer_interrupts
446926 -20.8% 353965 ± 8% interrupts.CPU229.LOC:Local_timer_interrupts
452110 -20.8% 358288 ± 8% interrupts.CPU23.LOC:Local_timer_interrupts
448401 -20.3% 357175 ± 7% interrupts.CPU230.LOC:Local_timer_interrupts
449244 -20.8% 355611 ± 8% interrupts.CPU231.LOC:Local_timer_interrupts
449265 -21.1% 354397 ± 8% interrupts.CPU232.LOC:Local_timer_interrupts
5647 ± 33% -33.6% 3747 interrupts.CPU232.NMI:Non-maskable_interrupts
5647 ± 33% -33.6% 3747 interrupts.CPU232.PMI:Performance_monitoring_interrupts
447186 -20.7% 354486 ± 8% interrupts.CPU233.LOC:Local_timer_interrupts
448107 -21.0% 354228 ± 8% interrupts.CPU234.LOC:Local_timer_interrupts
7537 -37.9% 4681 ± 35% interrupts.CPU234.NMI:Non-maskable_interrupts
7537 -37.9% 4681 ± 35% interrupts.CPU234.PMI:Performance_monitoring_interrupts
447040 -20.7% 354506 ± 8% interrupts.CPU235.LOC:Local_timer_interrupts
447193 -20.9% 353777 ± 8% interrupts.CPU236.LOC:Local_timer_interrupts
446268 -20.6% 354295 ± 8% interrupts.CPU237.LOC:Local_timer_interrupts
449634 -20.8% 356255 ± 8% interrupts.CPU238.LOC:Local_timer_interrupts
449337 -20.8% 355992 ± 8% interrupts.CPU239.LOC:Local_timer_interrupts
451802 -20.9% 357381 ± 8% interrupts.CPU24.LOC:Local_timer_interrupts
447287 -20.7% 354800 ± 8% interrupts.CPU240.LOC:Local_timer_interrupts
446264 -20.8% 353437 ± 8% interrupts.CPU241.LOC:Local_timer_interrupts
447521 -20.6% 355538 ± 8% interrupts.CPU242.LOC:Local_timer_interrupts
448368 -20.8% 355108 ± 8% interrupts.CPU243.LOC:Local_timer_interrupts
448794 -21.2% 353861 ± 8% interrupts.CPU244.LOC:Local_timer_interrupts
448265 -21.1% 353723 ± 8% interrupts.CPU245.LOC:Local_timer_interrupts
447404 -20.6% 355385 ± 8% interrupts.CPU246.LOC:Local_timer_interrupts
448265 -20.7% 355456 ± 8% interrupts.CPU247.LOC:Local_timer_interrupts
447316 -20.7% 354842 ± 8% interrupts.CPU248.LOC:Local_timer_interrupts
447886 -20.7% 355020 ± 8% interrupts.CPU249.LOC:Local_timer_interrupts
452147 -21.3% 355786 ± 8% interrupts.CPU25.LOC:Local_timer_interrupts
448406 -20.8% 355289 ± 8% interrupts.CPU250.LOC:Local_timer_interrupts
448566 -20.9% 354726 ± 8% interrupts.CPU251.LOC:Local_timer_interrupts
447607 -21.0% 353757 ± 8% interrupts.CPU252.LOC:Local_timer_interrupts
28.50 ± 68% +173.7% 78.00 ± 39% interrupts.CPU252.RES:Rescheduling_interrupts
448037 -20.9% 354453 ± 8% interrupts.CPU253.LOC:Local_timer_interrupts
449413 -20.8% 356025 ± 8% interrupts.CPU254.LOC:Local_timer_interrupts
449548 -20.8% 356202 ± 8% interrupts.CPU255.LOC:Local_timer_interrupts
41.00 ±110% -84.8% 6.25 ± 13% interrupts.CPU255.RES:Rescheduling_interrupts
449239 -21.0% 354887 ± 8% interrupts.CPU256.LOC:Local_timer_interrupts
447192 -20.8% 354224 ± 8% interrupts.CPU257.LOC:Local_timer_interrupts
448177 -20.7% 355300 ± 8% interrupts.CPU258.LOC:Local_timer_interrupts
448275 -20.6% 355711 ± 8% interrupts.CPU259.LOC:Local_timer_interrupts
451573 -21.3% 355400 ± 8% interrupts.CPU26.LOC:Local_timer_interrupts
449188 -20.7% 356207 ± 8% interrupts.CPU260.LOC:Local_timer_interrupts
449309 -20.8% 355852 ± 8% interrupts.CPU261.LOC:Local_timer_interrupts
447847 -20.8% 354847 ± 8% interrupts.CPU262.LOC:Local_timer_interrupts
446950 -20.5% 355229 ± 8% interrupts.CPU263.LOC:Local_timer_interrupts
6496 ± 24% -42.7% 3722 interrupts.CPU263.NMI:Non-maskable_interrupts
6496 ± 24% -42.7% 3722 interrupts.CPU263.PMI:Performance_monitoring_interrupts
449945 -20.8% 356417 ± 8% interrupts.CPU264.LOC:Local_timer_interrupts
450439 -21.0% 355899 ± 8% interrupts.CPU265.LOC:Local_timer_interrupts
450339 -20.7% 357335 ± 8% interrupts.CPU266.LOC:Local_timer_interrupts
450174 -20.2% 359229 ± 9% interrupts.CPU267.LOC:Local_timer_interrupts
451002 -20.9% 356683 ± 8% interrupts.CPU268.LOC:Local_timer_interrupts
450462 -21.1% 355569 ± 8% interrupts.CPU269.LOC:Local_timer_interrupts
452188 -21.0% 357345 ± 8% interrupts.CPU27.LOC:Local_timer_interrupts
453310 -21.7% 355167 ± 8% interrupts.CPU270.LOC:Local_timer_interrupts
451241 -21.2% 355504 ± 8% interrupts.CPU271.LOC:Local_timer_interrupts
451114 -21.3% 355154 ± 8% interrupts.CPU272.LOC:Local_timer_interrupts
450134 -21.1% 355326 ± 8% interrupts.CPU273.LOC:Local_timer_interrupts
450431 -20.9% 356194 ± 8% interrupts.CPU274.LOC:Local_timer_interrupts
19.00 ± 45% +173.7% 52.00 ± 62% interrupts.CPU274.RES:Rescheduling_interrupts
450807 -20.8% 357003 ± 8% interrupts.CPU275.LOC:Local_timer_interrupts
453075 -21.5% 355714 ± 8% interrupts.CPU276.LOC:Local_timer_interrupts
450048 -21.0% 355644 ± 8% interrupts.CPU277.LOC:Local_timer_interrupts
449561 -20.7% 356522 ± 8% interrupts.CPU278.LOC:Local_timer_interrupts
450017 -20.4% 358363 ± 9% interrupts.CPU279.LOC:Local_timer_interrupts
451493 -21.2% 355798 ± 8% interrupts.CPU28.LOC:Local_timer_interrupts
160.00 ± 5% +51.4% 242.25 ± 9% interrupts.CPU28.RES:Rescheduling_interrupts
450158 -21.0% 355626 ± 8% interrupts.CPU280.LOC:Local_timer_interrupts
450175 -21.0% 355780 ± 8% interrupts.CPU281.LOC:Local_timer_interrupts
450065 -20.9% 356051 ± 8% interrupts.CPU282.LOC:Local_timer_interrupts
447777 -20.5% 355761 ± 8% interrupts.CPU283.LOC:Local_timer_interrupts
450110 -20.9% 356052 ± 8% interrupts.CPU284.LOC:Local_timer_interrupts
6524 ± 24% -28.6% 4657 ± 34% interrupts.CPU284.NMI:Non-maskable_interrupts
6524 ± 24% -28.6% 4657 ± 34% interrupts.CPU284.PMI:Performance_monitoring_interrupts
449420 -20.9% 355525 ± 8% interrupts.CPU285.LOC:Local_timer_interrupts
449800 -20.7% 356713 ± 8% interrupts.CPU286.LOC:Local_timer_interrupts
459915 -20.2% 366810 ± 9% interrupts.CPU287.LOC:Local_timer_interrupts
451800 -21.2% 355897 ± 8% interrupts.CPU29.LOC:Local_timer_interrupts
451158 -21.0% 356357 ± 8% interrupts.CPU3.LOC:Local_timer_interrupts
452347 -21.3% 355924 ± 8% interrupts.CPU30.LOC:Local_timer_interrupts
3777 +98.4% 7492 interrupts.CPU30.NMI:Non-maskable_interrupts
3777 +98.4% 7492 interrupts.CPU30.PMI:Performance_monitoring_interrupts
128.75 ± 31% +86.2% 239.75 ± 10% interrupts.CPU30.RES:Rescheduling_interrupts
451634 -20.8% 357831 ± 8% interrupts.CPU31.LOC:Local_timer_interrupts
452787 -20.9% 358017 ± 8% interrupts.CPU32.LOC:Local_timer_interrupts
70.50 ± 38% +279.8% 267.75 ± 54% interrupts.CPU32.RES:Rescheduling_interrupts
451923 -20.9% 357380 ± 8% interrupts.CPU33.LOC:Local_timer_interrupts
452722 -21.4% 355941 ± 8% interrupts.CPU34.LOC:Local_timer_interrupts
448603 -20.4% 357004 ± 8% interrupts.CPU35.LOC:Local_timer_interrupts
452210 -21.3% 355998 ± 8% interrupts.CPU36.LOC:Local_timer_interrupts
451882 -21.3% 355544 ± 8% interrupts.CPU37.LOC:Local_timer_interrupts
453383 -21.4% 356383 ± 8% interrupts.CPU38.LOC:Local_timer_interrupts
452809 -21.0% 357598 ± 8% interrupts.CPU39.LOC:Local_timer_interrupts
452315 -21.0% 357529 ± 8% interrupts.CPU4.LOC:Local_timer_interrupts
450608 -21.2% 354930 ± 8% interrupts.CPU40.LOC:Local_timer_interrupts
450081 -20.9% 355990 ± 8% interrupts.CPU41.LOC:Local_timer_interrupts
450043 -20.8% 356507 ± 8% interrupts.CPU42.LOC:Local_timer_interrupts
22.75 ± 48% +236.3% 76.50 ± 40% interrupts.CPU42.RES:Rescheduling_interrupts
450140 -20.8% 356378 ± 8% interrupts.CPU43.LOC:Local_timer_interrupts
449695 -20.7% 356421 ± 8% interrupts.CPU44.LOC:Local_timer_interrupts
18.50 ± 43% +625.7% 134.25 ± 80% interrupts.CPU44.RES:Rescheduling_interrupts
450168 -20.8% 356636 ± 8% interrupts.CPU45.LOC:Local_timer_interrupts
451761 -21.3% 355664 ± 8% interrupts.CPU46.LOC:Local_timer_interrupts
450482 -20.8% 356781 ± 8% interrupts.CPU47.LOC:Local_timer_interrupts
53.75 ± 80% +245.6% 185.75 ± 57% interrupts.CPU47.RES:Rescheduling_interrupts
450849 -21.0% 356318 ± 8% interrupts.CPU48.LOC:Local_timer_interrupts
449530 -20.8% 356024 ± 8% interrupts.CPU49.LOC:Local_timer_interrupts
452185 -21.2% 356424 ± 8% interrupts.CPU5.LOC:Local_timer_interrupts
452374 -21.2% 356685 ± 8% interrupts.CPU50.LOC:Local_timer_interrupts
451158 -20.7% 357779 ± 9% interrupts.CPU51.LOC:Local_timer_interrupts
451039 -20.6% 357966 ± 8% interrupts.CPU52.LOC:Local_timer_interrupts
5682 ± 32% -34.2% 3740 interrupts.CPU52.NMI:Non-maskable_interrupts
5682 ± 32% -34.2% 3740 interrupts.CPU52.PMI:Performance_monitoring_interrupts
450978 -21.0% 356243 ± 8% interrupts.CPU53.LOC:Local_timer_interrupts
452251 -21.5% 355052 ± 9% interrupts.CPU54.LOC:Local_timer_interrupts
450986 -21.2% 355254 ± 8% interrupts.CPU55.LOC:Local_timer_interrupts
450685 -21.0% 356148 ± 8% interrupts.CPU56.LOC:Local_timer_interrupts
449475 -20.9% 355563 ± 8% interrupts.CPU57.LOC:Local_timer_interrupts
451278 -20.7% 357853 ± 8% interrupts.CPU58.LOC:Local_timer_interrupts
169.75 ± 60% -63.6% 61.75 ± 99% interrupts.CPU58.RES:Rescheduling_interrupts
450102 -20.5% 357925 ± 8% interrupts.CPU59.LOC:Local_timer_interrupts
451723 -21.3% 355340 ± 8% interrupts.CPU6.LOC:Local_timer_interrupts
454798 -21.5% 356820 ± 8% interrupts.CPU60.LOC:Local_timer_interrupts
7545 -50.2% 3758 interrupts.CPU60.NMI:Non-maskable_interrupts
7545 -50.2% 3758 interrupts.CPU60.PMI:Performance_monitoring_interrupts
451436 -21.3% 355493 ± 8% interrupts.CPU61.LOC:Local_timer_interrupts
450910 -20.8% 357077 ± 8% interrupts.CPU62.LOC:Local_timer_interrupts
450686 -20.3% 359334 ± 9% interrupts.CPU63.LOC:Local_timer_interrupts
450216 -20.8% 356564 ± 8% interrupts.CPU64.LOC:Local_timer_interrupts
449965 -21.1% 354963 ± 8% interrupts.CPU65.LOC:Local_timer_interrupts
450709 -21.0% 355952 ± 8% interrupts.CPU66.LOC:Local_timer_interrupts
450160 -21.0% 355775 ± 8% interrupts.CPU67.LOC:Local_timer_interrupts
450297 -21.1% 355375 ± 8% interrupts.CPU68.LOC:Local_timer_interrupts
449937 -20.9% 356013 ± 8% interrupts.CPU69.LOC:Local_timer_interrupts
5674 ± 32% -17.2% 4700 ± 34% interrupts.CPU69.NMI:Non-maskable_interrupts
5674 ± 32% -17.2% 4700 ± 34% interrupts.CPU69.PMI:Performance_monitoring_interrupts
451977 -21.0% 356885 ± 8% interrupts.CPU7.LOC:Local_timer_interrupts
449651 -20.9% 355504 ± 8% interrupts.CPU70.LOC:Local_timer_interrupts
451073 -20.9% 356589 ± 8% interrupts.CPU71.LOC:Local_timer_interrupts
447324 -20.8% 354292 ± 8% interrupts.CPU72.LOC:Local_timer_interrupts
6588 ± 24% -43.2% 3740 interrupts.CPU72.NMI:Non-maskable_interrupts
6588 ± 24% -43.2% 3740 interrupts.CPU72.PMI:Performance_monitoring_interrupts
240.50 ± 7% -36.0% 154.00 ± 50% interrupts.CPU72.RES:Rescheduling_interrupts
447170 -20.7% 354707 ± 8% interrupts.CPU73.LOC:Local_timer_interrupts
448611 -20.8% 355143 ± 8% interrupts.CPU74.LOC:Local_timer_interrupts
447417 -20.8% 354345 ± 8% interrupts.CPU75.LOC:Local_timer_interrupts
448190 -20.8% 355035 ± 8% interrupts.CPU76.LOC:Local_timer_interrupts
448346 -20.9% 354442 ± 8% interrupts.CPU77.LOC:Local_timer_interrupts
6594 ± 24% -43.4% 3735 interrupts.CPU77.NMI:Non-maskable_interrupts
6594 ± 24% -43.4% 3735 interrupts.CPU77.PMI:Performance_monitoring_interrupts
448452 -20.6% 355866 ± 8% interrupts.CPU78.LOC:Local_timer_interrupts
447509 -20.7% 354660 ± 8% interrupts.CPU79.LOC:Local_timer_interrupts
451912 -20.9% 357687 ± 8% interrupts.CPU8.LOC:Local_timer_interrupts
449808 -20.9% 355596 ± 8% interrupts.CPU80.LOC:Local_timer_interrupts
449111 -21.0% 354994 ± 8% interrupts.CPU81.LOC:Local_timer_interrupts
6642 ± 24% -29.3% 4699 ± 34% interrupts.CPU81.NMI:Non-maskable_interrupts
6642 ± 24% -29.3% 4699 ± 34% interrupts.CPU81.PMI:Performance_monitoring_interrupts
447536 -20.8% 354299 ± 8% interrupts.CPU82.LOC:Local_timer_interrupts
448241 -21.0% 354323 ± 8% interrupts.CPU83.LOC:Local_timer_interrupts
447498 -20.7% 355042 ± 8% interrupts.CPU84.LOC:Local_timer_interrupts
484.50 ± 82% -80.6% 94.00 ± 47% interrupts.CPU84.RES:Rescheduling_interrupts
446794 -20.5% 355249 ± 8% interrupts.CPU85.LOC:Local_timer_interrupts
448191 -20.9% 354594 ± 8% interrupts.CPU86.LOC:Local_timer_interrupts
447635 -20.9% 353877 ± 8% interrupts.CPU87.LOC:Local_timer_interrupts
448706 -20.8% 355250 ± 8% interrupts.CPU88.LOC:Local_timer_interrupts
3738 +50.5% 5627 ± 33% interrupts.CPU88.NMI:Non-maskable_interrupts
3738 +50.5% 5627 ± 33% interrupts.CPU88.PMI:Performance_monitoring_interrupts
448312 -20.9% 354709 ± 8% interrupts.CPU89.LOC:Local_timer_interrupts
452329 -21.0% 357333 ± 8% interrupts.CPU9.LOC:Local_timer_interrupts
449864 -21.1% 354745 ± 8% interrupts.CPU90.LOC:Local_timer_interrupts
451193 -21.5% 353997 ± 8% interrupts.CPU91.LOC:Local_timer_interrupts
447166 -20.4% 356159 ± 8% interrupts.CPU92.LOC:Local_timer_interrupts
447364 -20.7% 354752 ± 8% interrupts.CPU93.LOC:Local_timer_interrupts
449764 -21.0% 355410 ± 8% interrupts.CPU94.LOC:Local_timer_interrupts
448393 -20.7% 355788 ± 8% interrupts.CPU95.LOC:Local_timer_interrupts
450191 -21.1% 355404 ± 8% interrupts.CPU96.LOC:Local_timer_interrupts
447651 -20.6% 355343 ± 8% interrupts.CPU97.LOC:Local_timer_interrupts
447679 -20.8% 354531 ± 8% interrupts.CPU98.LOC:Local_timer_interrupts
447448 -20.9% 353795 ± 8% interrupts.CPU99.LOC:Local_timer_interrupts
1.293e+08 -20.8% 1.024e+08 ± 8% interrupts.LOC:Local_timer_interrupts
will-it-scale.per_process_ops
680 +---------------------------------------------------------------------+
| ++.++. .+ .++. + +.++.+ +.++.+.++. : ++.+.+ ++.++.+.++.++.|
660 |-+ + + +.+ ++.+.+ |
640 |-+ |
| |
620 |-+ |
| |
600 |-+ |
| O O OO O O |
580 |-+O OO O O O OO |
560 |-O O O |
| O O |
540 |-+ OO O O O OO O OO O |
| O |
520 +---------------------------------------------------------------------+
will-it-scale.workload
200000 +------------------------------------------------------------------+
195000 |.+ .+ +. +. .+ |
| ++.++. +. +.+ + +.++.+ ++.+.++.+ : ++.++ +.++.++.++.++.|
190000 |-+ + + +.+ +.++.+ |
185000 |-+ |
| |
180000 |-+ |
175000 |-+ |
170000 |-+ O O O |
| O O O O O O |
165000 |-+ O O O O OO O |
160000 |-O |
| O O OO OO OO OO O |
155000 |-+ O OO |
150000 +------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[x86, sched] 1567c3e346: vm-scalability.median -15.8% regression
by kernel test robot
Greeting,
FYI, we noticed a -15.8% regression of vm-scalability.median due to commit:
commit: 1567c3e3467cddeb019a7b53ec632f834b6a9239 ("x86, sched: Add support for frequency invariance")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
in testcase: vm-scalability
on test machine: 144 threads Intel(R) Xeon(R) CPU E7-8890 v3 @ 2.50GHz with 512G memory
with the following parameters:
runtime: 300s
size: 8T
test: anon-cow-seq
cpufreq_governor: performance
ucode: 0x16
test-description: The motivation behind this suite is to exercise functions and regions of the mm/ subsystem of the Linux kernel which are of interest to us.
test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/
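For context on the feature being tested: the commit samples the APERF/MPERF MSRs every scheduler tick (arch_scale_freq_tick, which appears in the profile further down) and turns their deltas into a frequency-invariance scale applied to load tracking. A simplified user-space sketch of that ratio, assuming the two counter samples are already in hand (the kernel additionally normalizes by a max-turbo ratio and clamps the result):

/* Sketch of the APERF/MPERF-based frequency scale (assumption: simplified;
 * the kernel also divides by a max-turbo ratio and clamps to capacity). */
#include <stdint.h>
#include <stdio.h>

#define SCHED_CAPACITY_SCALE 1024UL

static uint64_t freq_scale(uint64_t aperf_prev, uint64_t aperf_now,
                           uint64_t mperf_prev, uint64_t mperf_now)
{
    uint64_t acnt = aperf_now - aperf_prev;   /* cycles at the actual frequency */
    uint64_t mcnt = mperf_now - mperf_prev;   /* cycles at the base (TSC) frequency */

    if (!mcnt)
        return SCHED_CAPACITY_SCALE;
    /* > 1024 means the CPU ran above base frequency, < 1024 below it */
    return acnt * SCHED_CAPACITY_SCALE / mcnt;
}

int main(void)
{
    /* hypothetical sample deltas: the CPU ran ~20% above base frequency */
    printf("scale = %lu\n",
           (unsigned long)freq_scale(0, 1200000, 0, 1000000));
    return 0;
}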
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <oliver.sang(a)intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/debian-x86_64-20191114.cgz/300s/8T/lkp-hsw-4ex1/anon-cow-seq/vm-scalability/0x16
commit:
2a4b03ffc6 ("sched/fair: Prevent unlimited runtime on throttled group")
1567c3e346 ("x86, sched: Add support for frequency invariance")
2a4b03ffc69f2ded 1567c3e3467cddeb019a7b53ec6
---------------- ---------------------------
%stddev %change %stddev
\ | \
210443 -15.8% 177280 vm-scalability.median
0.07 ± 6% -64.9% 0.02 ± 16% vm-scalability.median_stddev
30242018 -16.0% 25399016 vm-scalability.throughput
241304 ± 2% +53.9% 371337 ± 2% vm-scalability.time.involuntary_context_switches
8973780 -12.4% 7862380 vm-scalability.time.minor_page_faults
10507 +7.6% 11301 vm-scalability.time.percent_of_cpu_this_job_got
17759 +22.0% 21664 ± 2% vm-scalability.time.system_time
14801 -10.4% 13262 vm-scalability.time.user_time
1023965 -63.3% 376066 ± 4% vm-scalability.time.voluntary_context_switches
7.967e+09 -11.1% 7.082e+09 vm-scalability.workload
2985 -11.4% 2643 ± 3% slabinfo.khugepaged_mm_slot.active_objs
2985 -11.4% 2643 ± 3% slabinfo.khugepaged_mm_slot.num_objs
36024164 -42.9% 20554898 ± 40% cpuidle.C1.time
7.277e+09 ± 41% -72.1% 2.027e+09 ±160% cpuidle.C3.time
17453257 ± 19% -74.4% 4462508 ±163% cpuidle.C3.usage
214574 ± 10% -50.2% 106926 ± 33% cpuidle.POLL.usage
26.26 -5.4 20.89 mpstat.cpu.all.idle%
0.02 ± 8% -0.0 0.01 ± 9% mpstat.cpu.all.iowait%
40.23 +8.8 49.05 mpstat.cpu.all.sys%
33.49 -3.4 30.05 ± 2% mpstat.cpu.all.usr%
1989988 -20.6% 1580840 ± 4% numa-numastat.node1.local_node
2026261 -20.6% 1608134 ± 4% numa-numastat.node1.numa_hit
3614123 ± 44% -56.4% 1576459 ± 5% numa-numastat.node2.local_node
3641345 ± 44% -56.0% 1603122 ± 5% numa-numastat.node2.numa_hit
26.50 -20.8% 21.00 vmstat.cpu.id
32.75 -10.7% 29.25 vmstat.cpu.us
9800 -34.3% 6434 vmstat.system.cs
290059 -6.9% 270063 ± 4% vmstat.system.in
1.012e+08 +12.3% 1.136e+08 meminfo.Active
1.012e+08 +12.3% 1.136e+08 meminfo.Active(anon)
92123057 +14.3% 1.053e+08 meminfo.AnonHugePages
1.009e+08 +12.3% 1.134e+08 meminfo.AnonPages
1.044e+08 +11.9% 1.169e+08 meminfo.Memused
25495045 +15.0% 29324284 numa-meminfo.node1.Active
25494976 +15.0% 29324244 numa-meminfo.node1.Active(anon)
23164225 +17.2% 27157731 numa-meminfo.node1.AnonHugePages
25417044 +15.1% 29267058 numa-meminfo.node1.AnonPages
26192254 +14.4% 29959010 numa-meminfo.node1.MemUsed
24839952 ± 3% +18.2% 29349199 numa-meminfo.node2.Active
24839901 ± 3% +18.2% 29349040 numa-meminfo.node2.Active(anon)
22604826 ± 3% +20.4% 27211067 numa-meminfo.node2.AnonHugePages
24777054 ± 3% +18.2% 29282273 numa-meminfo.node2.AnonPages
25718372 ± 2% +16.8% 30037056 numa-meminfo.node2.MemUsed
199045 ± 96% -99.6% 743.75 ± 52% numa-meminfo.node2.PageTables
54935 ± 20% -24.5% 41470 ± 10% numa-meminfo.node2.SUnreclaim
418.75 ± 94% -93.7% 26.50 ± 49% numa-meminfo.node3.Inactive(file)
4864 ± 15% +26.9% 6174 ± 10% numa-meminfo.node3.KernelStack
95371 ±172% +233.9% 318444 ± 57% numa-meminfo.node3.PageTables
6347307 +14.8% 7286035 numa-vmstat.node1.nr_active_anon
6328825 +14.9% 7268986 numa-vmstat.node1.nr_anon_pages
11261 +16.9% 13162 numa-vmstat.node1.nr_anon_transparent_hugepages
6347284 +14.8% 7285923 numa-vmstat.node1.nr_zone_active_anon
1324428 -15.2% 1122566 ± 3% numa-vmstat.node1.numa_hit
1204942 -16.0% 1011998 ± 3% numa-vmstat.node1.numa_local
6207902 ± 3% +17.7% 7308204 numa-vmstat.node2.nr_active_anon
6195059 ± 3% +17.6% 7287608 numa-vmstat.node2.nr_anon_pages
11047 ± 3% +19.7% 13218 numa-vmstat.node2.nr_anon_transparent_hugepages
49684 ± 96% -99.6% 186.00 ± 52% numa-vmstat.node2.nr_page_table_pages
13734 ± 20% -24.5% 10367 ± 10% numa-vmstat.node2.nr_slab_unreclaimable
6207738 ± 3% +17.7% 7308081 numa-vmstat.node2.nr_zone_active_anon
2125860 ± 40% -49.5% 1073183 ± 2% numa-vmstat.node2.numa_hit
2015486 ± 42% -52.2% 963317 ± 2% numa-vmstat.node2.numa_local
104.25 ± 94% -93.8% 6.50 ± 49% numa-vmstat.node3.nr_inactive_file
4864 ± 15% +26.7% 6160 ± 10% numa-vmstat.node3.nr_kernel_stack
23833 ±172% +230.0% 78649 ± 57% numa-vmstat.node3.nr_page_table_pages
104.25 ± 94% -93.8% 6.50 ± 49% numa-vmstat.node3.nr_zone_inactive_file
1.12 ± 19% -0.5 0.63 ± 20% perf-profile.calltrace.cycles-pp.do_huge_pmd_numa_page.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
0.81 ± 19% -0.2 0.60 ± 20% perf-profile.calltrace.cycles-pp.reuse_swap_page.do_huge_pmd_wp_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
0.33 ± 22% -0.2 0.17 ± 4% perf-profile.children.cycles-pp.migrate_misplaced_transhuge_page
0.09 ± 23% -0.1 0.03 ±100% perf-profile.children.cycles-pp.ttwu_do_activate
0.08 ± 19% -0.1 0.03 ±100% perf-profile.children.cycles-pp.enqueue_entity
0.15 ± 20% -0.1 0.10 ± 15% perf-profile.children.cycles-pp.wake_up_page_bit
0.14 ± 23% -0.1 0.09 ± 14% perf-profile.children.cycles-pp.__wake_up_common
0.14 ± 20% -0.1 0.09 ± 13% perf-profile.children.cycles-pp.autoremove_wake_function
0.09 ± 20% -0.0 0.04 ± 58% perf-profile.children.cycles-pp.enqueue_task_fair
0.09 ± 23% -0.0 0.04 ± 58% perf-profile.children.cycles-pp.activate_task
0.15 ± 21% -0.0 0.11 ± 17% perf-profile.children.cycles-pp.try_to_wake_up
0.15 ± 11% +0.0 0.17 ± 4% perf-profile.children.cycles-pp.schedule
0.08 ± 10% +0.0 0.12 ± 6% perf-profile.children.cycles-pp.newidle_balance
0.12 ± 16% +0.0 0.17 ± 9% perf-profile.children.cycles-pp.find_busiest_group
0.00 +0.1 0.05 perf-profile.children.cycles-pp.balance_fair
0.00 +0.1 0.06 ± 7% perf-profile.children.cycles-pp.arch_scale_freq_tick
0.18 ± 10% +0.1 0.23 ± 8% perf-profile.children.cycles-pp.load_balance
0.00 +0.1 0.09 ± 9% perf-profile.children.cycles-pp.smpboot_thread_fn
0.40 ± 23% +0.3 0.68 ± 17% perf-profile.children.cycles-pp.task_tick_fair
0.01 ±173% +0.3 0.30 ± 16% perf-profile.children.cycles-pp.update_cfs_group
0.01 ±173% +0.3 0.30 ± 16% perf-profile.self.cycles-pp.update_cfs_group
8587 ± 12% -29.1% 6090 ± 9% sched_debug.cfs_rq:/.exec_clock.stddev
0.53 ± 45% -92.1% 0.04 ±173% sched_debug.cfs_rq:/.load_avg.min
0.54 ± 19% +38.3% 0.74 ± 10% sched_debug.cfs_rq:/.nr_running.avg
4.81 ± 12% +266.0% 17.59 ± 7% sched_debug.cfs_rq:/.nr_spread_over.avg
24.01 ± 8% +96.4% 47.14 ± 4% sched_debug.cfs_rq:/.nr_spread_over.max
5.02 ± 10% +123.0% 11.19 ± 6% sched_debug.cfs_rq:/.nr_spread_over.stddev
313.47 ± 18% -55.1% 140.75 ± 51% sched_debug.cfs_rq:/.runnable_load_avg.max
31.75 ± 9% -54.5% 14.46 ± 44% sched_debug.cfs_rq:/.runnable_load_avg.stddev
0.54 ± 19% +38.2% 0.75 ± 9% sched_debug.cpu.nr_running.avg
11567 ± 8% -28.4% 8277 ± 7% sched_debug.cpu.nr_switches.avg
4760 ± 9% -31.7% 3249 ± 4% sched_debug.cpu.nr_switches.stddev
-36.17 +31.8% -47.65 sched_debug.cpu.nr_uninterruptible.min
15.88 ± 2% +35.3% 21.49 ± 7% sched_debug.cpu.nr_uninterruptible.stddev
9718 ± 10% -34.1% 6402 ± 9% sched_debug.cpu.sched_count.avg
4294 ± 8% -37.4% 2688 ± 8% sched_debug.cpu.sched_count.stddev
3936 ± 10% -53.6% 1828 ± 12% sched_debug.cpu.sched_goidle.avg
2045 ± 8% -50.0% 1022 ± 10% sched_debug.cpu.sched_goidle.stddev
4616 ± 9% -36.5% 2933 ± 10% sched_debug.cpu.ttwu_count.avg
11005 ± 9% -22.0% 8579 ± 9% sched_debug.cpu.ttwu_count.max
2271 ± 8% -34.9% 1478 ± 8% sched_debug.cpu.ttwu_count.stddev
915.40 ± 8% +42.3% 1303 ± 6% sched_debug.cpu.ttwu_local.avg
3927 ± 5% +19.3% 4684 ± 10% sched_debug.cpu.ttwu_local.max
488.40 ± 9% +40.6% 686.55 ± 3% sched_debug.cpu.ttwu_local.stddev
25252488 +12.3% 28363408 proc-vmstat.nr_active_anon
25182652 +12.4% 28296707 proc-vmstat.nr_anon_pages
44870 +14.2% 51263 proc-vmstat.nr_anon_transparent_hugepages
148.25 -5.7% 139.75 proc-vmstat.nr_dirtied
10541824 -3.0% 10230220 proc-vmstat.nr_dirty_background_threshold
21109424 -3.0% 20485453 proc-vmstat.nr_dirty_threshold
1.06e+08 -2.9% 1.029e+08 proc-vmstat.nr_free_pages
369.75 -1.4% 364.50 proc-vmstat.nr_inactive_file
99622 +7.3% 106911 proc-vmstat.nr_page_table_pages
144.25 -4.9% 137.25 proc-vmstat.nr_written
25252485 +12.3% 28363396 proc-vmstat.nr_zone_active_anon
369.75 -1.4% 364.50 proc-vmstat.nr_zone_inactive_file
2953466 -10.6% 2639571 proc-vmstat.numa_hint_faults
661711 -9.8% 596566 proc-vmstat.numa_hint_faults_local
11245421 -11.8% 9923535 proc-vmstat.numa_hit
5838019 -7.0% 5428548 proc-vmstat.numa_huge_pte_updates
11136518 -11.9% 9814538 proc-vmstat.numa_local
201352 ± 12% +234.6% 673715 ± 7% proc-vmstat.numa_pages_migrated
2.99e+09 -7.0% 2.78e+09 proc-vmstat.numa_pte_updates
2.382e+09 -18.9% 1.931e+09 proc-vmstat.pgalloc_normal
10046733 -11.2% 8918565 proc-vmstat.pgfault
2.381e+09 -18.9% 1.93e+09 proc-vmstat.pgfree
5.942e+08 -42.6% 3.409e+08 proc-vmstat.pgmigrate_fail
201352 ± 12% +234.6% 673715 ± 7% proc-vmstat.pgmigrate_success
3478289 -11.1% 3091553 proc-vmstat.thp_fault_alloc
8.85 ± 7% -16.8% 7.36 ± 12% perf-stat.i.MPKI
2.766e+10 ± 4% -7.4% 2.562e+10 ± 3% perf-stat.i.branch-instructions
20.31 +1.8 22.10 ± 4% perf-stat.i.cache-miss-rate%
5.089e+08 ± 4% -9.5% 4.604e+08 ± 3% perf-stat.i.cache-references
9955 ± 3% -33.0% 6665 ± 4% perf-stat.i.context-switches
1.73 ± 4% +9.9% 1.90 ± 3% perf-stat.i.cpi
1.287e+11 ± 4% +11.2% 1.431e+11 ± 3% perf-stat.i.cpu-cycles
485.10 ± 5% +108.0% 1009 ± 4% perf-stat.i.cpu-migrations
2.215e+10 ± 4% -6.5% 2.071e+10 ± 3% perf-stat.i.dTLB-loads
7.427e+09 ± 4% -8.1% 6.829e+09 ± 3% perf-stat.i.dTLB-stores
958368 ± 5% -30.9% 662160 ± 2% perf-stat.i.iTLB-load-misses
8.707e+10 ± 4% -7.4% 8.062e+10 ± 3% perf-stat.i.instructions
142754 ± 2% +11.3% 158927 ± 2% perf-stat.i.instructions-per-iTLB-miss
0.67 -13.3% 0.58 perf-stat.i.ipc
33135 ± 4% -10.7% 29584 ± 2% perf-stat.i.minor-faults
70.79 +6.0 76.82 perf-stat.i.node-load-miss-rate%
41028170 ± 4% +14.1% 46801837 ± 3% perf-stat.i.node-load-misses
1.85 ± 5% +0.9 2.78 ± 5% perf-stat.i.node-store-miss-rate%
584076 ± 6% +70.0% 992640 ± 4% perf-stat.i.node-store-misses
53445296 ± 5% -19.0% 43298204 ± 7% perf-stat.i.node-stores
33137 ± 4% -10.7% 29584 ± 2% perf-stat.i.page-faults
5.85 -1.7% 5.75 perf-stat.overall.MPKI
1.48 +19.6% 1.77 perf-stat.overall.cpi
1156 +16.7% 1350 ± 5% perf-stat.overall.cycles-between-cache-misses
92129 ± 6% +25.2% 115378 perf-stat.overall.instructions-per-iTLB-miss
0.67 -16.4% 0.56 perf-stat.overall.ipc
75.40 +3.8 79.17 perf-stat.overall.node-load-miss-rate%
1.11 ± 2% +1.2 2.33 ± 7% perf-stat.overall.node-store-miss-rate%
2.704e+10 -11.0% 2.408e+10 perf-stat.ps.branch-instructions
1.091e+08 -8.5% 99896941 ± 5% perf-stat.ps.cache-misses
4.979e+08 -12.5% 4.357e+08 perf-stat.ps.cache-references
9775 -34.7% 6386 ± 2% perf-stat.ps.context-switches
1.262e+11 +6.5% 1.344e+11 perf-stat.ps.cpu-cycles
487.90 ± 5% +100.7% 979.41 perf-stat.ps.cpu-migrations
2.166e+10 -10.1% 1.948e+10 perf-stat.ps.dTLB-loads
7.26e+09 -11.3% 6.441e+09 perf-stat.ps.dTLB-stores
927296 ± 5% -29.1% 657143 perf-stat.ps.iTLB-load-misses
8.512e+10 -10.9% 7.581e+10 perf-stat.ps.instructions
32070 -11.2% 28469 perf-stat.ps.minor-faults
40537100 +9.2% 44251909 ± 4% perf-stat.ps.node-load-misses
13226459 ± 2% -12.0% 11644845 ± 4% perf-stat.ps.node-loads
582949 ± 3% +65.6% 965166 ± 3% perf-stat.ps.node-store-misses
52116857 -21.9% 40679765 ± 8% perf-stat.ps.node-stores
32070 -11.2% 28469 perf-stat.ps.page-faults
2.642e+13 -11.4% 2.341e+13 perf-stat.total.instructions
229.75 ± 52% -34.1% 151.50 interrupts.102:PCI-MSI.1572921-edge.eth0-TxRx-57
156.75 ± 3% +27.3% 199.50 ± 29% interrupts.47:PCI-MSI.1572866-edge.eth0-TxRx-2
1603 ± 22% -15.6% 1352 ± 4% interrupts.CPU1.NMI:Non-maskable_interrupts
1603 ± 22% -15.6% 1352 ± 4% interrupts.CPU1.PMI:Performance_monitoring_interrupts
1007 ± 19% -38.7% 618.00 ± 8% interrupts.CPU100.RES:Rescheduling_interrupts
1016 ± 19% -33.9% 671.75 ± 5% interrupts.CPU101.RES:Rescheduling_interrupts
2046 ± 32% -19.9% 1638 ± 30% interrupts.CPU102.NMI:Non-maskable_interrupts
2046 ± 32% -19.9% 1638 ± 30% interrupts.CPU102.PMI:Performance_monitoring_interrupts
956.75 ± 18% -33.1% 640.25 ± 5% interrupts.CPU102.RES:Rescheduling_interrupts
909.50 ± 20% -29.1% 644.50 ± 12% interrupts.CPU103.RES:Rescheduling_interrupts
959.50 ± 24% -31.4% 658.00 ± 5% interrupts.CPU104.RES:Rescheduling_interrupts
921.75 ± 14% -30.2% 643.50 ± 3% interrupts.CPU105.RES:Rescheduling_interrupts
901.50 ± 14% -27.8% 650.50 ± 11% interrupts.CPU106.RES:Rescheduling_interrupts
2670 ± 3% -37.5% 1670 ± 31% interrupts.CPU107.NMI:Non-maskable_interrupts
2670 ± 3% -37.5% 1670 ± 31% interrupts.CPU107.PMI:Performance_monitoring_interrupts
2849 -31.6% 1948 ± 33% interrupts.CPU109.NMI:Non-maskable_interrupts
2849 -31.6% 1948 ± 33% interrupts.CPU109.PMI:Performance_monitoring_interrupts
1410 ± 20% +51.0% 2130 ± 23% interrupts.CPU11.NMI:Non-maskable_interrupts
1410 ± 20% +51.0% 2130 ± 23% interrupts.CPU11.PMI:Performance_monitoring_interrupts
2820 ± 3% -30.7% 1955 ± 32% interrupts.CPU111.NMI:Non-maskable_interrupts
2820 ± 3% -30.7% 1955 ± 32% interrupts.CPU111.PMI:Performance_monitoring_interrupts
2808 -30.1% 1962 ± 33% interrupts.CPU116.NMI:Non-maskable_interrupts
2808 -30.1% 1962 ± 33% interrupts.CPU116.PMI:Performance_monitoring_interrupts
622190 -8.1% 571803 ± 4% interrupts.CPU118.LOC:Local_timer_interrupts
621861 -8.3% 570381 ± 4% interrupts.CPU119.LOC:Local_timer_interrupts
2024 ± 25% -30.0% 1417 ± 8% interrupts.CPU12.NMI:Non-maskable_interrupts
2024 ± 25% -30.0% 1417 ± 8% interrupts.CPU12.PMI:Performance_monitoring_interrupts
622130 -8.0% 572612 ± 4% interrupts.CPU120.LOC:Local_timer_interrupts
622145 -8.4% 569980 ± 4% interrupts.CPU121.LOC:Local_timer_interrupts
2131 ± 33% -24.4% 1611 ± 26% interrupts.CPU121.NMI:Non-maskable_interrupts
2131 ± 33% -24.4% 1611 ± 26% interrupts.CPU121.PMI:Performance_monitoring_interrupts
621983 -7.9% 572602 ± 4% interrupts.CPU123.LOC:Local_timer_interrupts
2219 ± 34% -40.0% 1332 ± 5% interrupts.CPU124.NMI:Non-maskable_interrupts
2219 ± 34% -40.0% 1332 ± 5% interrupts.CPU124.PMI:Performance_monitoring_interrupts
622061 -8.0% 572198 ± 4% interrupts.CPU125.LOC:Local_timer_interrupts
817.50 ± 12% -46.8% 435.25 ± 13% interrupts.CPU128.RES:Rescheduling_interrupts
820.75 ± 24% -48.8% 420.00 ± 25% interrupts.CPU129.RES:Rescheduling_interrupts
750.00 ± 25% -42.9% 428.00 ± 24% interrupts.CPU130.RES:Rescheduling_interrupts
2285 ± 22% -38.1% 1415 interrupts.CPU133.NMI:Non-maskable_interrupts
2285 ± 22% -38.1% 1415 interrupts.CPU133.PMI:Performance_monitoring_interrupts
779.25 ± 30% -50.5% 386.00 ± 23% interrupts.CPU133.RES:Rescheduling_interrupts
741.25 ± 28% -47.7% 387.75 ± 28% interrupts.CPU135.RES:Rescheduling_interrupts
728.00 ± 28% -46.2% 391.50 ± 31% interrupts.CPU137.RES:Rescheduling_interrupts
749.25 ± 29% -53.9% 345.50 ± 27% interrupts.CPU138.RES:Rescheduling_interrupts
751.25 ± 27% -49.0% 383.00 ± 29% interrupts.CPU139.RES:Rescheduling_interrupts
688.00 ± 24% -49.8% 345.25 ± 27% interrupts.CPU140.RES:Rescheduling_interrupts
726.00 ± 27% -55.1% 326.25 ± 32% interrupts.CPU141.RES:Rescheduling_interrupts
2272 ± 29% -40.7% 1347 ± 2% interrupts.CPU15.NMI:Non-maskable_interrupts
2272 ± 29% -40.7% 1347 ± 2% interrupts.CPU15.PMI:Performance_monitoring_interrupts
156.75 ± 3% +27.3% 199.50 ± 29% interrupts.CPU2.47:PCI-MSI.1572866-edge.eth0-TxRx-2
2562 ± 9% -38.6% 1573 ± 28% interrupts.CPU2.NMI:Non-maskable_interrupts
2562 ± 9% -38.6% 1573 ± 28% interrupts.CPU2.PMI:Performance_monitoring_interrupts
1344 ± 20% -32.8% 903.75 interrupts.CPU20.RES:Rescheduling_interrupts
1314 ± 16% -29.9% 921.50 ± 5% interrupts.CPU21.RES:Rescheduling_interrupts
120.00 ± 47% -98.3% 2.00 ± 93% interrupts.CPU22.TLB:TLB_shootdowns
2245 ± 24% -35.7% 1443 ± 24% interrupts.CPU23.NMI:Non-maskable_interrupts
2245 ± 24% -35.7% 1443 ± 24% interrupts.CPU23.PMI:Performance_monitoring_interrupts
1338 ± 18% -32.8% 899.00 ± 12% interrupts.CPU23.RES:Rescheduling_interrupts
1243 ± 16% -28.0% 895.75 ± 4% interrupts.CPU24.RES:Rescheduling_interrupts
1360 ± 30% -33.3% 907.50 ± 15% interrupts.CPU25.RES:Rescheduling_interrupts
1273 ± 20% -34.0% 841.00 ± 6% interrupts.CPU26.RES:Rescheduling_interrupts
1273 ± 15% -31.3% 874.25 ± 8% interrupts.CPU27.RES:Rescheduling_interrupts
1315 ± 16% -40.6% 781.00 ± 11% interrupts.CPU28.RES:Rescheduling_interrupts
1320 ± 20% -39.2% 802.75 ± 5% interrupts.CPU29.RES:Rescheduling_interrupts
2094 ± 30% -22.7% 1619 ± 35% interrupts.CPU30.NMI:Non-maskable_interrupts
2094 ± 30% -22.7% 1619 ± 35% interrupts.CPU30.PMI:Performance_monitoring_interrupts
1301 ± 19% -40.6% 773.00 ± 5% interrupts.CPU30.RES:Rescheduling_interrupts
1253 ± 16% -36.9% 791.00 ± 15% interrupts.CPU31.RES:Rescheduling_interrupts
1231 ± 17% -41.5% 720.25 ± 8% interrupts.CPU32.RES:Rescheduling_interrupts
1177 ± 11% -37.3% 738.00 ± 14% interrupts.CPU33.RES:Rescheduling_interrupts
1195 ± 18% -34.0% 788.75 ± 21% interrupts.CPU34.RES:Rescheduling_interrupts
2567 ± 8% -35.1% 1666 ± 35% interrupts.CPU35.NMI:Non-maskable_interrupts
2567 ± 8% -35.1% 1666 ± 35% interrupts.CPU35.PMI:Performance_monitoring_interrupts
1223 ± 18% -39.1% 745.25 ± 8% interrupts.CPU35.RES:Rescheduling_interrupts
621753 -7.9% 572936 ± 4% interrupts.CPU36.LOC:Local_timer_interrupts
2527 ± 10% -34.0% 1667 ± 31% interrupts.CPU40.NMI:Non-maskable_interrupts
2527 ± 10% -34.0% 1667 ± 31% interrupts.CPU40.PMI:Performance_monitoring_interrupts
622323 -7.9% 572967 ± 4% interrupts.CPU45.LOC:Local_timer_interrupts
1832 ± 17% +38.7% 2541 ± 3% interrupts.CPU46.NMI:Non-maskable_interrupts
1832 ± 17% +38.7% 2541 ± 3% interrupts.CPU46.PMI:Performance_monitoring_interrupts
621906 -8.0% 572330 ± 4% interrupts.CPU47.LOC:Local_timer_interrupts
621972 -8.1% 571378 ± 4% interrupts.CPU48.LOC:Local_timer_interrupts
622275 -8.2% 571410 ± 4% interrupts.CPU49.LOC:Local_timer_interrupts
2459 ± 24% -41.7% 1433 ± 10% interrupts.CPU49.NMI:Non-maskable_interrupts
2459 ± 24% -41.7% 1433 ± 10% interrupts.CPU49.PMI:Performance_monitoring_interrupts
622031 -7.9% 573184 ± 4% interrupts.CPU50.LOC:Local_timer_interrupts
622081 -7.9% 572943 ± 4% interrupts.CPU51.LOC:Local_timer_interrupts
622140 -7.8% 573769 ± 4% interrupts.CPU52.LOC:Local_timer_interrupts
622074 -7.6% 574558 ± 4% interrupts.CPU53.LOC:Local_timer_interrupts
229.75 ± 52% -34.1% 151.50 interrupts.CPU57.102:PCI-MSI.1572921-edge.eth0-TxRx-57
1052 ± 38% -59.2% 429.25 ± 16% interrupts.CPU61.RES:Rescheduling_interrupts
830.75 ± 26% -53.3% 388.25 ± 27% interrupts.CPU62.RES:Rescheduling_interrupts
774.50 ± 28% -48.5% 399.00 ± 32% interrupts.CPU67.RES:Rescheduling_interrupts
785.00 ± 33% -51.6% 380.00 ± 32% interrupts.CPU68.RES:Rescheduling_interrupts
890.25 ± 31% -58.3% 371.50 ± 27% interrupts.CPU69.RES:Rescheduling_interrupts
821.50 ± 30% -46.4% 440.50 ± 36% interrupts.CPU70.RES:Rescheduling_interrupts
1897 ± 24% +40.4% 2663 ± 4% interrupts.CPU84.NMI:Non-maskable_interrupts
1897 ± 24% +40.4% 2663 ± 4% interrupts.CPU84.PMI:Performance_monitoring_interrupts
1431 ± 17% -43.5% 808.25 ± 21% interrupts.CPU9.RES:Rescheduling_interrupts
1064 ± 18% -30.1% 743.75 ± 16% interrupts.CPU91.RES:Rescheduling_interrupts
1076 ± 10% -39.6% 649.75 ± 10% interrupts.CPU92.RES:Rescheduling_interrupts
1042 ± 17% -33.8% 690.00 ± 10% interrupts.CPU93.RES:Rescheduling_interrupts
1065 ± 18% -33.4% 709.50 ± 5% interrupts.CPU94.RES:Rescheduling_interrupts
2731 ± 7% -30.6% 1894 ± 28% interrupts.CPU95.NMI:Non-maskable_interrupts
2731 ± 7% -30.6% 1894 ± 28% interrupts.CPU95.PMI:Performance_monitoring_interrupts
998.75 ± 19% -28.4% 715.00 ± 7% interrupts.CPU96.RES:Rescheduling_interrupts
2.75 ± 47% +2009.1% 58.00 ±148% interrupts.CPU96.TLB:TLB_shootdowns
952.50 ± 27% -29.5% 671.25 ± 12% interrupts.CPU97.RES:Rescheduling_interrupts
2667 ± 2% -50.2% 1328 ± 4% interrupts.CPU98.NMI:Non-maskable_interrupts
2667 ± 2% -50.2% 1328 ± 4% interrupts.CPU98.PMI:Performance_monitoring_interrupts
973.00 ± 20% -28.5% 695.25 ± 4% interrupts.CPU98.RES:Rescheduling_interrupts
991.00 ± 18% -34.0% 654.00 ± 8% interrupts.CPU99.RES:Rescheduling_interrupts
135782 ± 14% -25.0% 101847 ± 6% interrupts.RES:Rescheduling_interrupts
140559 ± 2% -11.8% 124028 ± 6% softirqs.CPU0.RCU
145814 ± 3% -13.7% 125791 ± 5% softirqs.CPU1.RCU
138760 -12.0% 122124 ± 6% softirqs.CPU100.RCU
16104 -17.4% 13295 ± 5% softirqs.CPU100.SCHED
16422 ± 5% -16.9% 13653 ± 7% softirqs.CPU101.SCHED
15960 -14.5% 13650 ± 6% softirqs.CPU102.SCHED
15804 ± 8% -13.8% 13616 ± 7% softirqs.CPU103.SCHED
139949 -12.0% 123102 ± 5% softirqs.CPU104.RCU
16155 ± 3% -14.6% 13799 ± 8% softirqs.CPU104.SCHED
16018 ± 4% -14.3% 13725 ± 10% softirqs.CPU105.SCHED
15392 ± 5% -14.4% 13172 ± 6% softirqs.CPU106.SCHED
15218 ± 4% -15.5% 12852 ± 9% softirqs.CPU107.SCHED
143209 -14.2% 122899 ± 5% softirqs.CPU108.RCU
144618 -14.9% 123106 ± 4% softirqs.CPU109.RCU
144915 -14.1% 124470 ± 6% softirqs.CPU110.RCU
145038 -14.7% 123770 ± 5% softirqs.CPU111.RCU
138846 -12.2% 121867 ± 6% softirqs.CPU112.RCU
137438 -11.9% 121069 ± 6% softirqs.CPU113.RCU
141152 -13.4% 122234 ± 5% softirqs.CPU114.RCU
141545 -13.8% 122025 ± 5% softirqs.CPU115.RCU
140302 -13.0% 122108 ± 6% softirqs.CPU116.RCU
140548 -13.2% 122057 ± 5% softirqs.CPU117.RCU
142370 -14.3% 122021 ± 5% softirqs.CPU118.RCU
136694 -11.3% 121193 ± 5% softirqs.CPU119.RCU
137193 -11.5% 121415 ± 6% softirqs.CPU120.RCU
137125 -11.9% 120804 ± 5% softirqs.CPU121.RCU
140181 -13.0% 121893 ± 5% softirqs.CPU122.RCU
137405 -11.4% 121741 ± 5% softirqs.CPU124.RCU
139350 -12.5% 122000 ± 6% softirqs.CPU125.RCU
145339 -14.5% 124311 ± 5% softirqs.CPU126.RCU
14887 ± 18% -32.6% 10033 ± 15% softirqs.CPU126.SCHED
144917 -15.6% 122366 ± 5% softirqs.CPU127.RCU
15094 ± 20% -37.0% 9516 ± 21% softirqs.CPU127.SCHED
140946 -13.8% 121461 ± 6% softirqs.CPU128.RCU
138106 ± 2% -11.1% 122812 ± 6% softirqs.CPU129.RCU
137799 -11.3% 122221 ± 6% softirqs.CPU130.RCU
137425 -10.9% 122427 ± 6% softirqs.CPU131.RCU
140854 -12.9% 122753 ± 5% softirqs.CPU132.RCU
15140 ± 24% -35.8% 9716 ± 19% softirqs.CPU132.SCHED
140764 ± 2% -12.7% 122926 ± 5% softirqs.CPU133.RCU
139626 -11.7% 123235 ± 5% softirqs.CPU134.RCU
141210 ± 2% -12.6% 123477 ± 5% softirqs.CPU135.RCU
142996 ± 3% -14.1% 122901 ± 5% softirqs.CPU136.RCU
140567 ± 2% -12.5% 123022 ± 6% softirqs.CPU140.RCU
14192 ± 21% -34.2% 9336 ± 18% softirqs.CPU141.SCHED
135514 ± 2% -9.8% 122294 ± 5% softirqs.CPU142.RCU
134945 ± 2% -10.0% 121501 ± 6% softirqs.CPU143.RCU
145145 -13.0% 126309 ± 5% softirqs.CPU16.RCU
144489 ± 2% -13.3% 125217 ± 5% softirqs.CPU17.RCU
138706 ± 4% -12.0% 122092 ± 5% softirqs.CPU18.RCU
15348 ± 4% -10.0% 13810 ± 8% softirqs.CPU18.SCHED
140870 ± 2% -10.7% 125784 ± 3% softirqs.CPU19.RCU
16392 ± 5% -15.5% 13843 ± 6% softirqs.CPU19.SCHED
143741 ± 2% -12.4% 125852 ± 5% softirqs.CPU2.RCU
140088 -11.7% 123663 ± 5% softirqs.CPU20.RCU
16092 ± 3% -14.0% 13832 ± 5% softirqs.CPU20.SCHED
138206 -11.0% 123025 ± 5% softirqs.CPU21.RCU
16877 ± 4% -16.7% 14051 ± 4% softirqs.CPU21.SCHED
138653 -11.4% 122814 ± 5% softirqs.CPU22.RCU
17090 ± 3% -19.3% 13797 ± 5% softirqs.CPU22.SCHED
16476 ± 3% -15.6% 13910 ± 5% softirqs.CPU23.SCHED
138808 -11.2% 123204 ± 5% softirqs.CPU24.RCU
16779 -16.2% 14060 ± 5% softirqs.CPU24.SCHED
139811 ± 2% -11.6% 123555 ± 5% softirqs.CPU25.RCU
16734 ± 3% -15.8% 14083 ± 7% softirqs.CPU25.SCHED
138909 -10.9% 123758 ± 5% softirqs.CPU26.RCU
16619 ± 5% -15.0% 14133 ± 5% softirqs.CPU26.SCHED
140257 -12.3% 123025 ± 5% softirqs.CPU27.RCU
17013 ± 5% -17.3% 14061 ± 5% softirqs.CPU27.SCHED
137824 -11.2% 122428 ± 6% softirqs.CPU28.RCU
16470 ± 2% -15.4% 13927 ± 6% softirqs.CPU28.SCHED
135294 ± 2% -9.6% 122364 ± 6% softirqs.CPU29.RCU
17350 ± 3% -20.7% 13761 ± 5% softirqs.CPU29.SCHED
142300 ± 3% -13.1% 123621 ± 6% softirqs.CPU3.RCU
17067 -18.4% 13927 ± 5% softirqs.CPU30.SCHED
17110 ± 4% -17.6% 14099 ± 7% softirqs.CPU31.SCHED
145321 -14.0% 124914 ± 4% softirqs.CPU32.RCU
16952 ± 3% -16.0% 14234 ± 7% softirqs.CPU32.SCHED
143684 -14.0% 123636 ± 5% softirqs.CPU33.RCU
16728 ± 2% -17.3% 13838 ± 5% softirqs.CPU33.SCHED
143358 -13.4% 124129 ± 5% softirqs.CPU34.RCU
17103 ± 3% -17.7% 14082 ± 6% softirqs.CPU34.SCHED
143430 -13.0% 124848 ± 5% softirqs.CPU35.RCU
16705 -17.7% 13746 ± 4% softirqs.CPU35.SCHED
139206 -11.5% 123261 ± 7% softirqs.CPU36.RCU
140252 -12.7% 122378 ± 5% softirqs.CPU37.RCU
140086 -12.6% 122492 ± 5% softirqs.CPU39.RCU
140518 -13.4% 121743 ± 5% softirqs.CPU40.RCU
138605 -11.7% 122371 ± 5% softirqs.CPU41.RCU
141519 -13.5% 122418 ± 5% softirqs.CPU42.RCU
142030 -13.7% 122637 ± 5% softirqs.CPU43.RCU
140694 -13.4% 121845 ± 4% softirqs.CPU44.RCU
140504 -12.1% 123545 ± 5% softirqs.CPU45.RCU
140277 -12.7% 122435 ± 5% softirqs.CPU46.RCU
137267 -11.2% 121919 ± 5% softirqs.CPU47.RCU
140839 -12.9% 122646 ± 5% softirqs.CPU48.RCU
141816 -13.3% 122937 ± 5% softirqs.CPU49.RCU
140868 ± 2% -12.0% 123942 ± 6% softirqs.CPU5.RCU
143643 -14.1% 123392 ± 6% softirqs.CPU50.RCU
141326 -12.7% 123403 ± 6% softirqs.CPU51.RCU
141583 -13.4% 122613 ± 6% softirqs.CPU52.RCU
141100 -13.2% 122445 ± 5% softirqs.CPU53.RCU
139956 -12.1% 122981 ± 6% softirqs.CPU54.RCU
14018 ± 17% -29.8% 9845 ± 15% softirqs.CPU54.SCHED
140114 -11.3% 124217 ± 5% softirqs.CPU55.RCU
143329 -14.8% 122069 ± 6% softirqs.CPU56.RCU
141146 -12.3% 123718 ± 6% softirqs.CPU57.RCU
15264 ± 19% -34.6% 9988 ± 17% softirqs.CPU57.SCHED
140809 -12.4% 123330 ± 6% softirqs.CPU58.RCU
15201 ± 20% -36.3% 9678 ± 20% softirqs.CPU58.SCHED
140568 -12.0% 123769 ± 5% softirqs.CPU59.RCU
15360 ± 20% -33.3% 10252 ± 19% softirqs.CPU59.SCHED
139409 ± 3% -10.9% 124281 ± 6% softirqs.CPU6.RCU
141767 -12.7% 123768 ± 5% softirqs.CPU60.RCU
142046 -12.9% 123657 ± 5% softirqs.CPU61.RCU
15389 ± 22% -34.7% 10052 ± 18% softirqs.CPU61.SCHED
141719 -12.7% 123700 ± 5% softirqs.CPU62.RCU
142499 -12.9% 124068 ± 6% softirqs.CPU63.RCU
14667 ± 20% -31.8% 10005 ± 18% softirqs.CPU63.SCHED
144437 -14.1% 124008 ± 6% softirqs.CPU64.RCU
15191 ± 22% -35.9% 9736 ± 19% softirqs.CPU64.SCHED
141156 ± 2% -12.3% 123743 ± 6% softirqs.CPU65.RCU
139859 -11.2% 124140 ± 6% softirqs.CPU66.RCU
141282 -11.9% 124535 ± 6% softirqs.CPU67.RCU
144228 -13.7% 124434 ± 6% softirqs.CPU68.RCU
142536 -13.0% 123956 ± 6% softirqs.CPU69.RCU
141103 ± 2% -11.8% 124488 ± 6% softirqs.CPU7.RCU
140849 ± 2% -11.5% 124683 ± 6% softirqs.CPU70.RCU
141347 ± 2% -12.4% 123863 ± 6% softirqs.CPU71.RCU
142552 ± 3% -13.6% 123193 ± 5% softirqs.CPU72.RCU
142074 ± 4% -12.3% 124659 ± 7% softirqs.CPU73.RCU
141649 -13.1% 123155 ± 5% softirqs.CPU74.RCU
143525 ± 4% -14.0% 123493 ± 6% softirqs.CPU75.RCU
142772 ± 3% -13.9% 122974 ± 7% softirqs.CPU76.RCU
141178 ± 3% -12.9% 122944 ± 5% softirqs.CPU77.RCU
140777 ± 3% -12.8% 122799 ± 5% softirqs.CPU78.RCU
143039 ± 4% -14.3% 122572 ± 5% softirqs.CPU79.RCU
140336 ± 3% -11.8% 123811 ± 6% softirqs.CPU8.RCU
141727 ± 2% -13.0% 123316 ± 5% softirqs.CPU80.RCU
141813 ± 2% -13.0% 123363 ± 5% softirqs.CPU81.RCU
141978 ± 2% -11.6% 125442 ± 3% softirqs.CPU82.RCU
137611 ± 3% -10.7% 122950 ± 5% softirqs.CPU84.RCU
141077 ± 2% -12.0% 124188 ± 5% softirqs.CPU86.RCU
139221 ± 3% -11.9% 122606 ± 5% softirqs.CPU87.RCU
138177 ± 4% -11.7% 121980 ± 5% softirqs.CPU88.RCU
139882 ± 2% -11.1% 124329 ± 5% softirqs.CPU9.RCU
141843 ± 2% -13.6% 122541 ± 6% softirqs.CPU90.RCU
16251 ± 3% -15.5% 13733 ± 5% softirqs.CPU90.SCHED
143175 ± 2% -14.1% 122926 ± 6% softirqs.CPU91.RCU
17088 ± 3% -19.2% 13801 ± 7% softirqs.CPU91.SCHED
144768 -14.7% 123542 ± 5% softirqs.CPU92.RCU
16851 ± 2% -20.1% 13462 ± 4% softirqs.CPU92.SCHED
144398 ± 4% -13.4% 125018 ± 3% softirqs.CPU93.RCU
16917 ± 3% -19.9% 13559 ± 6% softirqs.CPU93.SCHED
142957 ± 2% -13.6% 123521 ± 5% softirqs.CPU94.RCU
16758 ± 2% -17.8% 13773 ± 4% softirqs.CPU94.SCHED
141885 -13.8% 122263 ± 6% softirqs.CPU95.RCU
16161 ± 5% -14.7% 13789 ± 6% softirqs.CPU95.SCHED
138647 -11.8% 122297 ± 5% softirqs.CPU96.RCU
16590 ± 3% -16.8% 13804 ± 8% softirqs.CPU96.SCHED
15776 ± 3% -13.0% 13731 ± 7% softirqs.CPU97.SCHED
16115 -10.6% 14405 ± 5% softirqs.CPU98.SCHED
140827 -12.8% 122846 ± 5% softirqs.CPU99.RCU
16252 ± 4% -16.2% 13623 ± 5% softirqs.CPU99.SCHED
20190421 -12.1% 17737370 ± 5% softirqs.RCU
2133170 -16.1% 1790167 ± 4% softirqs.SCHED
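For context on the extra system time: the bisected commit makes x86 PELT frequency-invariant by sampling the APERF/MPERF MSRs from the scheduler tick, which is why arch_scale_freq_tick appears in the profile above alongside the higher task_tick_fair/update_cfs_group cost. A standalone userspace sketch of the same APERF/MPERF ratio, for anyone who wants to see the scale factor on this machine (illustrative only; assumes the msr module is loaded and root access):

/*
 * Userspace approximation of the ratio the commit derives per tick:
 * delta(APERF)/delta(MPERF) over an interval gives the average running
 * frequency relative to base frequency.  Requires "modprobe msr" and root.
 */
#include <fcntl.h>
#include <inttypes.h>
#include <stdio.h>
#include <unistd.h>

#define MSR_MPERF 0xE7
#define MSR_APERF 0xE8

static uint64_t rdmsr(int fd, uint32_t reg)
{
	uint64_t val = 0;

	if (pread(fd, &val, sizeof(val), reg) != sizeof(val))
		perror("pread");
	return val;
}

int main(void)
{
	int fd = open("/dev/cpu/0/msr", O_RDONLY);

	if (fd < 0) {
		perror("open /dev/cpu/0/msr");
		return 1;
	}

	uint64_t a0 = rdmsr(fd, MSR_APERF), m0 = rdmsr(fd, MSR_MPERF);
	sleep(1);
	uint64_t a1 = rdmsr(fd, MSR_APERF), m1 = rdmsr(fd, MSR_MPERF);
	close(fd);

	/* >1.0 means CPU0 ran above base frequency (turbo), <1.0 below it */
	printf("freq scale ~ %.3f\n", (double)(a1 - a0) / (double)(m1 - m0));
	return 0;
}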
vm-scalability.time.system_time
22500 +-------------------------------------------------------------------+
22000 |-+ O O |
| O O O O |
21500 |-O O O O O O O |
21000 |-+ O O O O O O O O O O O |
20500 |-+ O O O |
20000 |-+ |
| |
19500 |-+ |
19000 |-+ |
18500 |-+ + |
18000 |.+.. + + : |
| .. + + : .+.. +.. + |
17500 |-+ +.+ + + + + |
17000 +-------------------------------------------------------------------+
vm-scalability.time.percent_of_cpu_this_job_got
11400 +-------------------------------------------------------------------+
11300 |-+ O O O O O O O O O |
| O O O O O O O O O O O O O O O O |
11200 |-+ O O |
11100 |-+ |
| |
11000 |-+ |
10900 |-+ |
10800 |-+ |
| |
10700 |-+ |
10600 |-+ |
|.+.. .+. .+. .+.. .+ |
10500 |-+ +.+. +. +.+..+ + |
10400 +-------------------------------------------------------------------+
vm-scalability.time.voluntary_context_switches
1.3e+06 +-----------------------------------------------------------------+
1.2e+06 |.+ |
| + .+. .+. .+ |
1.1e+06 |-+ + .+.+..+.+..+ +. + |
1e+06 |-+ + |
| |
900000 |-+ |
800000 |-+ |
700000 |-+ |
| |
600000 |-+ |
500000 |-+ |
| O O O O O O O O O O O O O |
400000 |-O O O O O O O O O O O O O O |
300000 +-----------------------------------------------------------------+
vm-scalability.throughput
3.2e+07 +-----------------------------------------------------------------+
| +.. |
3.1e+07 |++ |
| +.+. .+. .+.+.+..+.+.+ |
3e+07 |-+ +. +. |
| |
2.9e+07 |-+ |
| |
2.8e+07 |-+ |
| |
2.7e+07 |-+ |
| |
2.6e+07 |-+ O O O O O O O O O O O O O |
| O O O O O O O O O O O O |
2.5e+07 +-----------------------------------------------------------------+
vm-scalability.median
220000 +------------------------------------------------------------------+
215000 |-+.. |
|+ +.+.. .+. .+.+..+.+.+..+ |
210000 |-+ + +. |
205000 |-+ |
| |
200000 |-+ |
195000 |-+ |
190000 |-+ |
| |
185000 |-+ |
180000 |-+ O O O O O O O O O O O O O O O |
| O O O O O O O O O O O |
175000 |-+ O |
170000 +------------------------------------------------------------------+
vm-scalability.workload
8.4e+09 +-----------------------------------------------------------------+
| + |
8.2e+09 |:++ |
|: + |
8e+09 |-+ +.+.+..+.+..+.+.+..+.+.+ |
| |
7.8e+09 |-+ |
| |
7.6e+09 |-+ |
| |
7.4e+09 |-+ |
| |
7.2e+09 |-+ |
| O O O O O O O O O O O O O O O O O O O O O O O O O O O |
7e+09 +-----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Oliver Sang
1 year, 11 months
Re: [ext4] d3b6f23f71: stress-ng.fiemap.ops_per_sec -60.5% regression
by Xing Zhengjun
Thanks for your quick response. If you need any more test information
about the regression, please let me know.
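In case it helps, the fiemap stressor is, roughly, a loop of FS_IOC_FIEMAP ioctls issued from a few worker processes, so the affected path can also be exercised with a trivial standalone caller like the sketch below (illustrative only, not the stress-ng source; pass any existing ext4 file as argv[1]):

/* minimal FS_IOC_FIEMAP caller: dumps the extent map of one file */
#include <fcntl.h>
#include <linux/fiemap.h>
#include <linux/fs.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	if (argc != 2) {
		fprintf(stderr, "usage: %s <file>\n", argv[0]);
		return 1;
	}

	int fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* room for up to 32 extents after the fixed header */
	size_t sz = sizeof(struct fiemap) + 32 * sizeof(struct fiemap_extent);
	struct fiemap *fm = calloc(1, sz);

	fm->fm_start = 0;
	fm->fm_length = ~0ULL;		/* whole file */
	fm->fm_flags = FIEMAP_FLAG_SYNC;
	fm->fm_extent_count = 32;

	if (ioctl(fd, FS_IOC_FIEMAP, fm) < 0) {
		perror("FS_IOC_FIEMAP");
		return 1;
	}

	for (unsigned int i = 0; i < fm->fm_mapped_extents; i++)
		printf("extent %u: logical %llu physical %llu length %llu\n",
		       i, (unsigned long long)fm->fm_extents[i].fe_logical,
		       (unsigned long long)fm->fm_extents[i].fe_physical,
		       (unsigned long long)fm->fm_extents[i].fe_length);

	free(fm);
	close(fd);
	return 0;
}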
On 4/13/2020 6:56 PM, Ritesh Harjani wrote:
>
>
> On 4/13/20 2:07 PM, Xing Zhengjun wrote:
>> Hi Harjani,
>>
>> Do you have time to take a look at this? Thanks.
>
> Hello Xing,
>
> I do want to look into this, but as of now I am stuck with another
> mballoc failure issue. I will get back to this once I have a handle
> on that one.
>
> BTW, are you planning to take a look at this?
>
> -ritesh
>
>
>>
>> On 4/7/2020 4:00 PM, kernel test robot wrote:
>>> Greeting,
>>>
>>> FYI, we noticed a -60.5% regression of stress-ng.fiemap.ops_per_sec
>>> due to commit:
>>>
>>>
>>> commit: d3b6f23f71670007817a5d59f3fbafab2b794e8c ("ext4: move
>>> ext4_fiemap to use iomap framework")
>>> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
>>>
>>> in testcase: stress-ng
>>> on test machine: 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz
>>> with 192G memory
>>> with following parameters:
>>>
>>> nr_threads: 10%
>>> disk: 1HDD
>>> testtime: 1s
>>> class: os
>>> cpufreq_governor: performance
>>> ucode: 0x500002c
>>> fs: ext4
>>>
>>>
>>>
>>>
>>>
>>>
>>> Details are as below:
>>> -------------------------------------------------------------------------------------------------->
>>>
>>>
>>>
>>> To reproduce:
>>>
>>> git clone https://github.com/intel/lkp-tests.git
>>> cd lkp-tests
>>> bin/lkp install job.yaml # job file is attached in this email
>>> bin/lkp run job.yaml
>>>
>>> =========================================================================================
>>>
>>> class/compiler/cpufreq_governor/disk/fs/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime/ucode:
>>>
>>> os/gcc-7/performance/1HDD/ext4/x86_64-rhel-7.6/10%/debian-x86_64-20191114.cgz/lkp-csl-2sp5/stress-ng/1s/0x500002c
>>>
>>>
>>> commit:
>>> b2c5764262 ("ext4: make ext4_ind_map_blocks work with fiemap")
>>> d3b6f23f71 ("ext4: move ext4_fiemap to use iomap framework")
>>>
>>> b2c5764262edded1 d3b6f23f71670007817a5d59f3f
>>> ---------------- ---------------------------
>>> fail:runs %reproduction fail:runs
>>> | | |
>>> :4 25% 1:4
>>> dmesg.WARNING:at#for_ip_interrupt_entry/0x
>>> 2:4 5% 2:4
>>> perf-profile.calltrace.cycles-pp.sync_regs.error_entry
>>> 2:4 6% 3:4
>>> perf-profile.calltrace.cycles-pp.error_entry
>>> 3:4 9% 3:4
>>> perf-profile.children.cycles-pp.error_entry
>>> 0:4 1% 0:4
>>> perf-profile.self.cycles-pp.error_entry
>>> %stddev %change %stddev
>>> \ | \
>>> 28623 +28.2% 36703 ± 12% stress-ng.daemon.ops
>>> 28632 +28.2% 36704 ± 12%
>>> stress-ng.daemon.ops_per_sec
>>> 566.00 ± 22% -53.2% 265.00 ± 53% stress-ng.dev.ops
>>> 278.81 ± 22% -53.0% 131.00 ± 54% stress-ng.dev.ops_per_sec
>>> 73160 -60.6% 28849 ± 3% stress-ng.fiemap.ops
>>> 72471 -60.5% 28612 ± 3%
>>> stress-ng.fiemap.ops_per_sec
>>> 23421 ± 12% +21.2% 28388 ± 6% stress-ng.filename.ops
>>> 22638 ± 12% +20.3% 27241 ± 6%
>>> stress-ng.filename.ops_per_sec
>>> 21.25 ± 7% -10.6% 19.00 ± 3% stress-ng.iomix.ops
>>> 38.75 ± 49% -47.7% 20.25 ± 96% stress-ng.memhotplug.ops
>>> 34.45 ± 52% -51.8% 16.62 ±106%
>>> stress-ng.memhotplug.ops_per_sec
>>> 1734 ± 10% +31.4% 2278 ± 10% stress-ng.resources.ops
>>> 807.56 ± 5% +35.2% 1091 ± 8%
>>> stress-ng.resources.ops_per_sec
>>> 1007356 ± 3% -16.5% 840642 ± 9% stress-ng.revio.ops
>>> 1007692 ± 3% -16.6% 840711 ± 9%
>>> stress-ng.revio.ops_per_sec
>>> 21812 ± 3% +16.0% 25294 ± 5% stress-ng.sysbadaddr.ops
>>> 21821 ± 3% +15.9% 25294 ± 5%
>>> stress-ng.sysbadaddr.ops_per_sec
>>> 440.75 ± 4% +21.9% 537.25 ± 9% stress-ng.sysfs.ops
>>> 440.53 ± 4% +21.9% 536.86 ± 9%
>>> stress-ng.sysfs.ops_per_sec
>>> 13286582 -11.1% 11805520 ± 6%
>>> stress-ng.time.file_system_outputs
>>> 68253896 +2.4% 69860122 stress-ng.time.minor_page_faults
>>> 197.00 ± 4% -15.9% 165.75 ± 12% stress-ng.xattr.ops
>>> 192.45 ± 5% -16.1% 161.46 ± 11%
>>> stress-ng.xattr.ops_per_sec
>>> 15310 +62.5% 24875 ± 22% stress-ng.zombie.ops
>>> 15310 +62.5% 24874 ± 22%
>>> stress-ng.zombie.ops_per_sec
>>> 203.50 ± 12% -47.3% 107.25 ± 49% vmstat.io.bi
>>> 861318 ± 18% -29.7% 605884 ± 5% meminfo.AnonHugePages
>>> 1062742 ± 14% -20.2% 847853 ± 3% meminfo.AnonPages
>>> 31093 ± 6% +9.6% 34090 ± 3% meminfo.KernelStack
>>> 7151 ± 34% +55.8% 11145 ± 9% meminfo.Mlocked
>>> 1.082e+08 ± 5% -40.2% 64705429 ± 31%
>>> numa-numastat.node0.local_node
>>> 1.082e+08 ± 5% -40.2% 64739883 ± 31%
>>> numa-numastat.node0.numa_hit
>>> 46032662 ± 21% +104.3% 94042918 ± 20%
>>> numa-numastat.node1.local_node
>>> 46074205 ± 21% +104.2% 94072810 ± 20%
>>> numa-numastat.node1.numa_hit
>>> 3942 ± 3% +14.2% 4501 ± 4%
>>> slabinfo.pool_workqueue.active_objs
>>> 4098 ± 3% +14.3% 4683 ± 4%
>>> slabinfo.pool_workqueue.num_objs
>>> 4817 ± 7% +13.3% 5456 ± 8%
>>> slabinfo.proc_dir_entry.active_objs
>>> 5153 ± 6% +12.5% 5797 ± 8%
>>> slabinfo.proc_dir_entry.num_objs
>>> 18598 ± 13% -33.1% 12437 ± 20%
>>> sched_debug.cfs_rq:/.load.avg
>>> 452595 ± 56% -71.4% 129637 ± 76%
>>> sched_debug.cfs_rq:/.load.max
>>> 67675 ± 35% -55.1% 30377 ± 42%
>>> sched_debug.cfs_rq:/.load.stddev
>>> 18114 ± 12% -33.7% 12011 ± 20%
>>> sched_debug.cfs_rq:/.runnable_weight.avg
>>> 448215 ± 58% -72.8% 121789 ± 82%
>>> sched_debug.cfs_rq:/.runnable_weight.max
>>> 67083 ± 37% -56.3% 29305 ± 43%
>>> sched_debug.cfs_rq:/.runnable_weight.stddev
>>> -38032 +434.3% -203212 sched_debug.cfs_rq:/.spread0.avg
>>> -204466 +95.8% -400301 sched_debug.cfs_rq:/.spread0.min
>>> 90.02 ± 25% -58.1% 37.69 ± 52%
>>> sched_debug.cfs_rq:/.util_est_enqueued.avg
>>> 677.54 ± 6% -39.3% 411.50 ± 22%
>>> sched_debug.cfs_rq:/.util_est_enqueued.max
>>> 196.57 ± 8% -47.6% 103.05 ± 36%
>>> sched_debug.cfs_rq:/.util_est_enqueued.stddev
>>> 3.34 ± 23% +34.1% 4.48 ± 4%
>>> sched_debug.cpu.clock.stddev
>>> 3.34 ± 23% +34.1% 4.48 ± 4%
>>> sched_debug.cpu.clock_task.stddev
>>> 402872 ± 7% -11.9% 354819 ± 2%
>>> proc-vmstat.nr_active_anon
>>> 1730331 -9.5% 1566418 ± 5% proc-vmstat.nr_dirtied
>>> 31042 ± 6% +9.3% 33915 ± 3%
>>> proc-vmstat.nr_kernel_stack
>>> 229047 -2.4% 223615 proc-vmstat.nr_mapped
>>> 74008 ± 7% +20.5% 89163 ± 8% proc-vmstat.nr_written
>>> 402872 ± 7% -11.9% 354819 ± 2%
>>> proc-vmstat.nr_zone_active_anon
>>> 50587 ± 11% -25.2% 37829 ± 14%
>>> proc-vmstat.numa_pages_migrated
>>> 457500 -23.1% 351918 ± 31%
>>> proc-vmstat.numa_pte_updates
>>> 81382485 +1.9% 82907822 proc-vmstat.pgfault
>>> 2.885e+08 ± 5% -13.3% 2.502e+08 ± 6% proc-vmstat.pgfree
>>> 42206 ± 12% -46.9% 22399 ± 49% proc-vmstat.pgpgin
>>> 431233 ± 13% -64.8% 151736 ±109% proc-vmstat.pgrotated
>>> 176754 ± 7% -40.2% 105637 ± 31%
>>> proc-vmstat.thp_fault_alloc
>>> 314.50 ± 82% +341.5% 1388 ± 44%
>>> proc-vmstat.unevictable_pgs_stranded
>>> 1075269 ± 14% -41.3% 631388 ± 17% numa-meminfo.node0.Active
>>> 976056 ± 12% -39.7% 588727 ± 19%
>>> numa-meminfo.node0.Active(anon)
>>> 426857 ± 22% -36.4% 271375 ± 13%
>>> numa-meminfo.node0.AnonHugePages
>>> 558590 ± 19% -36.4% 355402 ± 14%
>>> numa-meminfo.node0.AnonPages
>>> 1794824 ± 9% -28.8% 1277157 ± 20%
>>> numa-meminfo.node0.FilePages
>>> 8517 ± 92% -82.7% 1473 ± 89%
>>> numa-meminfo.node0.Inactive(file)
>>> 633118 ± 2% -41.7% 368920 ± 36% numa-meminfo.node0.Mapped
>>> 2958038 ± 12% -27.7% 2139271 ± 12%
>>> numa-meminfo.node0.MemUsed
>>> 181401 ± 5% -13.7% 156561 ± 4%
>>> numa-meminfo.node0.SUnreclaim
>>> 258124 ± 6% -13.0% 224535 ± 5% numa-meminfo.node0.Slab
>>> 702083 ± 16% +31.0% 919406 ± 11% numa-meminfo.node1.Active
>>> 38663 ±107% +137.8% 91951 ± 31%
>>> numa-meminfo.node1.Active(file)
>>> 1154975 ± 7% +41.6% 1635593 ± 12%
>>> numa-meminfo.node1.FilePages
>>> 395813 ± 25% +62.8% 644533 ± 16%
>>> numa-meminfo.node1.Inactive
>>> 394313 ± 25% +62.5% 640686 ± 16%
>>> numa-meminfo.node1.Inactive(anon)
>>> 273317 +88.8% 515976 ± 25% numa-meminfo.node1.Mapped
>>> 2279237 ± 6% +25.7% 2865582 ± 7%
>>> numa-meminfo.node1.MemUsed
>>> 10830 ± 18% +29.6% 14033 ± 9%
>>> numa-meminfo.node1.PageTables
>>> 149390 ± 3% +23.2% 184085 ± 3%
>>> numa-meminfo.node1.SUnreclaim
>>> 569542 ± 16% +74.8% 995336 ± 21% numa-meminfo.node1.Shmem
>>> 220774 ± 5% +20.3% 265656 ± 3% numa-meminfo.node1.Slab
>>> 35623587 ± 5% -11.7% 31444514 ± 3% perf-stat.i.cache-misses
>>> 2.576e+08 ± 5% -6.8% 2.4e+08 ± 2%
>>> perf-stat.i.cache-references
>>> 3585 -7.3% 3323 ± 5%
>>> perf-stat.i.cpu-migrations
>>> 180139 ± 2% +4.2% 187668 perf-stat.i.minor-faults
>>> 69.13 +2.6 71.75 perf-stat.i.node-load-miss-rate%
>>> 4313695 ± 2% -7.4% 3994957 ± 2%
>>> perf-stat.i.node-load-misses
>>> 5466253 ± 11% -17.3% 4521173 ± 6% perf-stat.i.node-loads
>>> 2818674 ± 6% -15.8% 2372542 ± 5% perf-stat.i.node-stores
>>> 227810 +4.6% 238290 perf-stat.i.page-faults
>>> 12.67 ± 4% -7.2% 11.76 ± 2% perf-stat.overall.MPKI
>>> 1.01 ± 4% -0.0 0.97 ± 3%
>>> perf-stat.overall.branch-miss-rate%
>>> 1044 +13.1% 1181 ± 4%
>>> perf-stat.overall.cycles-between-cache-misses
>>> 40.37 ± 4% +3.6 44.00 ± 2%
>>> perf-stat.overall.node-store-miss-rate%
>>> 36139526 ± 5% -12.5% 31625519 ± 3% perf-stat.ps.cache-misses
>>> 2.566e+08 ± 5% -6.9% 2.389e+08 ± 2%
>>> perf-stat.ps.cache-references
>>> 3562 -7.2% 3306 ± 5%
>>> perf-stat.ps.cpu-migrations
>>> 179088 +4.2% 186579 perf-stat.ps.minor-faults
>>> 4323383 ± 2% -7.5% 3999214 perf-stat.ps.node-load-misses
>>> 5607721 ± 10% -18.5% 4568664 ± 6% perf-stat.ps.node-loads
>>> 2855134 ± 7% -16.4% 2387345 ± 5% perf-stat.ps.node-stores
>>> 226270 +4.6% 236709 perf-stat.ps.page-faults
>>> 242305 ± 10% -42.4% 139551 ± 18%
>>> numa-vmstat.node0.nr_active_anon
>>> 135983 ± 17% -37.4% 85189 ± 10%
>>> numa-vmstat.node0.nr_anon_pages
>>> 209.25 ± 16% -38.1% 129.50 ± 10%
>>> numa-vmstat.node0.nr_anon_transparent_hugepages
>>> 449367 ± 9% -29.7% 315804 ± 20%
>>> numa-vmstat.node0.nr_file_pages
>>> 2167 ± 90% -80.6% 419.75 ± 98%
>>> numa-vmstat.node0.nr_inactive_file
>>> 157405 ± 3% -41.4% 92206 ± 35%
>>> numa-vmstat.node0.nr_mapped
>>> 2022 ± 30% -73.3% 539.25 ± 91%
>>> numa-vmstat.node0.nr_mlock
>>> 3336 ± 10% -24.3% 2524 ± 25%
>>> numa-vmstat.node0.nr_page_table_pages
>>> 286158 ± 10% -41.2% 168337 ± 37%
>>> numa-vmstat.node0.nr_shmem
>>> 45493 ± 5% -14.1% 39094 ± 4%
>>> numa-vmstat.node0.nr_slab_unreclaimable
>>> 242294 ± 10% -42.4% 139547 ± 18%
>>> numa-vmstat.node0.nr_zone_active_anon
>>> 2167 ± 90% -80.6% 419.75 ± 98%
>>> numa-vmstat.node0.nr_zone_inactive_file
>>> 54053924 ± 8% -39.3% 32786242 ± 34%
>>> numa-vmstat.node0.numa_hit
>>> 53929628 ± 8% -39.5% 32619715 ± 34%
>>> numa-vmstat.node0.numa_local
>>> 9701 ±107% +136.9% 22985 ± 31%
>>> numa-vmstat.node1.nr_active_file
>>> 202.50 ± 16% -25.1% 151.75 ± 23%
>>> numa-vmstat.node1.nr_anon_transparent_hugepages
>>> 284922 ± 7% +43.3% 408195 ± 13%
>>> numa-vmstat.node1.nr_file_pages
>>> 96002 ± 26% +67.5% 160850 ± 17%
>>> numa-vmstat.node1.nr_inactive_anon
>>> 68077 ± 2% +90.3% 129533 ± 25%
>>> numa-vmstat.node1.nr_mapped
>>> 138482 ± 15% +79.2% 248100 ± 22%
>>> numa-vmstat.node1.nr_shmem
>>> 37396 ± 3% +23.3% 46094 ± 3%
>>> numa-vmstat.node1.nr_slab_unreclaimable
>>> 9701 ±107% +136.9% 22985 ± 31%
>>> numa-vmstat.node1.nr_zone_active_file
>>> 96005 ± 26% +67.5% 160846 ± 17%
>>> numa-vmstat.node1.nr_zone_inactive_anon
>>> 23343661 ± 17% +99.9% 46664267 ± 23%
>>> numa-vmstat.node1.numa_hit
>>> 23248487 ± 17% +100.5% 46610447 ± 23%
>>> numa-vmstat.node1.numa_local
>>> 105745 ± 23% +112.6% 224805 ± 24% softirqs.CPU0.NET_RX
>>> 133310 ± 36% -45.3% 72987 ± 52% softirqs.CPU1.NET_RX
>>> 170110 ± 55% -66.8% 56407 ±147% softirqs.CPU11.NET_RX
>>> 91465 ± 36% -65.2% 31858 ±112% softirqs.CPU13.NET_RX
>>> 164491 ± 57% -77.7% 36641 ±121% softirqs.CPU15.NET_RX
>>> 121069 ± 55% -99.3% 816.75 ± 96% softirqs.CPU17.NET_RX
>>> 81019 ± 4% -8.7% 73967 ± 4% softirqs.CPU20.RCU
>>> 72143 ± 63% -89.8% 7360 ±172% softirqs.CPU22.NET_RX
>>> 270663 ± 17% -57.9% 113915 ± 45% softirqs.CPU24.NET_RX
>>> 20149 ± 76% +474.1% 115680 ± 62% softirqs.CPU26.NET_RX
>>> 14033 ± 70% +977.5% 151211 ± 75% softirqs.CPU27.NET_RX
>>> 27834 ± 94% +476.1% 160357 ± 28% softirqs.CPU28.NET_RX
>>> 35346 ± 68% +212.0% 110290 ± 30% softirqs.CPU29.NET_RX
>>> 34347 ±103% +336.5% 149941 ± 32% softirqs.CPU32.NET_RX
>>> 70077 ± 3% +10.8% 77624 ± 3% softirqs.CPU34.RCU
>>> 36453 ± 84% +339.6% 160253 ± 42% softirqs.CPU36.NET_RX
>>> 72367 ± 2% +10.6% 80043 softirqs.CPU37.RCU
>>> 25239 ±118% +267.7% 92799 ± 45% softirqs.CPU38.NET_RX
>>> 4995 ±170% +1155.8% 62734 ± 62% softirqs.CPU39.NET_RX
>>> 4641 ±145% +1611.3% 79432 ± 90% softirqs.CPU42.NET_RX
>>> 7192 ± 65% +918.0% 73225 ± 66% softirqs.CPU45.NET_RX
>>> 1772 ±166% +1837.4% 34344 ± 63% softirqs.CPU46.NET_RX
>>> 13149 ± 81% +874.7% 128170 ± 58% softirqs.CPU47.NET_RX
>>> 86484 ± 94% -92.6% 6357 ±172% softirqs.CPU48.NET_RX
>>> 129128 ± 27% -95.8% 5434 ±172% softirqs.CPU55.NET_RX
>>> 82772 ± 59% -91.7% 6891 ±164% softirqs.CPU56.NET_RX
>>> 145313 ± 57% -87.8% 17796 ± 88% softirqs.CPU57.NET_RX
>>> 118160 ± 33% -86.3% 16226 ±109% softirqs.CPU58.NET_RX
>>> 94576 ± 56% -94.1% 5557 ±173% softirqs.CPU6.NET_RX
>>> 82900 ± 77% -66.8% 27508 ±171% softirqs.CPU62.NET_RX
>>> 157291 ± 30% -81.1% 29656 ±111% softirqs.CPU64.NET_RX
>>> 135101 ± 28% -80.2% 26748 ± 90% softirqs.CPU67.NET_RX
>>> 146574 ± 56% -100.0% 69.75 ± 98% softirqs.CPU68.NET_RX
>>> 81347 ± 2% -9.0% 74024 ± 2% softirqs.CPU68.RCU
>>> 201729 ± 37% -99.6% 887.50 ±107% softirqs.CPU69.NET_RX
>>> 108454 ± 78% -97.9% 2254 ±169% softirqs.CPU70.NET_RX
>>> 55289 ±104% -89.3% 5942 ±172% softirqs.CPU71.NET_RX
>>> 10112 ±172% +964.6% 107651 ± 89% softirqs.CPU72.NET_RX
>>> 3136 ±171% +1522.2% 50879 ± 66% softirqs.CPU73.NET_RX
>>> 13353 ± 79% +809.2% 121407 ±101% softirqs.CPU74.NET_RX
>>> 75194 ± 3% +10.3% 82957 ± 5% softirqs.CPU75.RCU
>>> 11002 ±173% +1040.8% 125512 ± 61% softirqs.CPU76.NET_RX
>>> 2463 ±173% +2567.3% 65708 ± 77% softirqs.CPU78.NET_RX
>>> 25956 ± 3% -7.8% 23932 ± 3% softirqs.CPU78.SCHED
>>> 16366 ±150% +340.7% 72125 ± 91% softirqs.CPU82.NET_RX
>>> 14553 ±130% +1513.4% 234809 ± 27% softirqs.CPU93.NET_RX
>>> 26314 -9.2% 23884 ± 3% softirqs.CPU93.SCHED
>>> 4582 ± 88% +4903.4% 229268 ± 23% softirqs.CPU94.NET_RX
>>> 11214 ±111% +1762.5% 208867 ± 18% softirqs.CPU95.NET_RX
>>> 1.53 ± 27% -0.5 0.99 ± 17%
>>> perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
>>>
>>> 1.52 ± 27% -0.5 0.99 ± 17%
>>> perf-profile.calltrace.cycles-pp.do_signal.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
>>>
>>> 1.39 ± 29% -0.5 0.88 ± 21%
>>> perf-profile.calltrace.cycles-pp.do_group_exit.get_signal.do_signal.exit_to_usermode_loop.do_syscall_64
>>>
>>> 1.39 ± 29% -0.5 0.88 ± 21%
>>> perf-profile.calltrace.cycles-pp.get_signal.do_signal.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
>>>
>>> 0.50 ± 59% +0.3 0.81 ± 13%
>>> perf-profile.calltrace.cycles-pp.filemap_map_pages.handle_pte_fault.__handle_mm_fault.handle_mm_fault.do_page_fault
>>>
>>> 5.70 ± 9% +0.8 6.47 ± 7%
>>> perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.get_signal.do_signal.exit_to_usermode_loop
>>>
>>> 5.48 ± 9% +0.8 6.27 ± 7%
>>> perf-profile.calltrace.cycles-pp.exit_mmap.mmput.do_exit.do_group_exit.get_signal
>>>
>>> 5.49 ± 9% +0.8 6.28 ± 7%
>>> perf-profile.calltrace.cycles-pp.mmput.do_exit.do_group_exit.get_signal.do_signal
>>>
>>> 4.30 ± 4% +1.3 5.60 ± 7%
>>> perf-profile.calltrace.cycles-pp.do_group_exit.get_signal.do_signal.exit_to_usermode_loop.prepare_exit_to_usermode
>>>
>>> 4.40 ± 4% +1.3 5.69 ± 7%
>>> perf-profile.calltrace.cycles-pp.prepare_exit_to_usermode.swapgs_restore_regs_and_return_to_usermode
>>>
>>> 4.37 ± 4% +1.3 5.66 ± 7%
>>> perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.prepare_exit_to_usermode.swapgs_restore_regs_and_return_to_usermode
>>>
>>> 4.36 ± 4% +1.3 5.66 ± 7%
>>> perf-profile.calltrace.cycles-pp.do_signal.exit_to_usermode_loop.prepare_exit_to_usermode.swapgs_restore_regs_and_return_to_usermode
>>>
>>> 4.33 ± 4% +1.3 5.62 ± 7%
>>> perf-profile.calltrace.cycles-pp.get_signal.do_signal.exit_to_usermode_loop.prepare_exit_to_usermode.swapgs_restore_regs_and_return_to_usermode
>>>
>>> 4.44 ± 4% +1.3 5.74 ± 7%
>>> perf-profile.calltrace.cycles-pp.swapgs_restore_regs_and_return_to_usermode
>>>
>>> 3.20 ± 10% -2.4 0.78 ±156%
>>> perf-profile.children.cycles-pp.copy_page
>>> 0.16 ± 9% -0.1 0.08 ± 64%
>>> perf-profile.children.cycles-pp.irq_work_interrupt
>>> 0.16 ± 9% -0.1 0.08 ± 64%
>>> perf-profile.children.cycles-pp.smp_irq_work_interrupt
>>> 0.24 ± 5% -0.1 0.17 ± 18%
>>> perf-profile.children.cycles-pp.irq_work_run_list
>>> 0.16 ± 9% -0.1 0.10 ± 24%
>>> perf-profile.children.cycles-pp.irq_work_run
>>> 0.16 ± 9% -0.1 0.10 ± 24%
>>> perf-profile.children.cycles-pp.printk
>>> 0.23 ± 6% -0.1 0.17 ± 9%
>>> perf-profile.children.cycles-pp.__do_execve_file
>>> 0.08 ± 14% -0.1 0.03 ±100%
>>> perf-profile.children.cycles-pp.delay_tsc
>>> 0.16 ± 6% -0.1 0.11 ± 9%
>>> perf-profile.children.cycles-pp.load_elf_binary
>>> 0.16 ± 7% -0.0 0.12 ± 13%
>>> perf-profile.children.cycles-pp.search_binary_handler
>>> 0.20 ± 7% -0.0 0.15 ± 10%
>>> perf-profile.children.cycles-pp.call_usermodehelper_exec_async
>>> 0.19 ± 6% -0.0 0.15 ± 11%
>>> perf-profile.children.cycles-pp.do_execve
>>> 0.08 ± 10% -0.0 0.04 ± 59%
>>> perf-profile.children.cycles-pp.__vunmap
>>> 0.15 ± 3% -0.0 0.11 ± 7%
>>> perf-profile.children.cycles-pp.rcu_idle_exit
>>> 0.12 ± 10% -0.0 0.09 ± 14%
>>> perf-profile.children.cycles-pp.__switch_to_asm
>>> 0.09 ± 13% -0.0 0.07 ± 5%
>>> perf-profile.children.cycles-pp.des3_ede_encrypt
>>> 0.06 ± 11% +0.0 0.09 ± 13%
>>> perf-profile.children.cycles-pp.mark_page_accessed
>>> 0.15 ± 5% +0.0 0.19 ± 12%
>>> perf-profile.children.cycles-pp.apparmor_cred_prepare
>>> 0.22 ± 8% +0.0 0.27 ± 11%
>>> perf-profile.children.cycles-pp.mem_cgroup_throttle_swaprate
>>> 0.17 ± 2% +0.0 0.22 ± 12%
>>> perf-profile.children.cycles-pp.security_prepare_creds
>>> 0.95 ± 17% +0.3 1.22 ± 14%
>>> perf-profile.children.cycles-pp.filemap_map_pages
>>> 5.92 ± 8% +0.7 6.65 ± 7%
>>> perf-profile.children.cycles-pp.get_signal
>>> 5.66 ± 9% +0.8 6.44 ± 7%
>>> perf-profile.children.cycles-pp.mmput
>>> 5.65 ± 9% +0.8 6.43 ± 7%
>>> perf-profile.children.cycles-pp.exit_mmap
>>> 4.40 ± 4% +1.3 5.70 ± 7%
>>> perf-profile.children.cycles-pp.prepare_exit_to_usermode
>>> 4.45 ± 4% +1.3 5.75 ± 7%
>>> perf-profile.children.cycles-pp.swapgs_restore_regs_and_return_to_usermode
>>>
>>> 3.16 ± 10% -2.4 0.77 ±155%
>>> perf-profile.self.cycles-pp.copy_page
>>> 0.08 ± 14% -0.1 0.03 ±100%
>>> perf-profile.self.cycles-pp.delay_tsc
>>> 0.12 ± 10% -0.0 0.09 ± 14%
>>> perf-profile.self.cycles-pp.__switch_to_asm
>>> 0.08 ± 12% -0.0 0.06 ± 17%
>>> perf-profile.self.cycles-pp.enqueue_task_fair
>>> 0.09 ± 13% -0.0 0.07 ± 5%
>>> perf-profile.self.cycles-pp.des3_ede_encrypt
>>> 0.07 ± 13% +0.0 0.08 ± 19%
>>> perf-profile.self.cycles-pp.__lru_cache_add
>>> 0.19 ± 9% +0.0 0.22 ± 10%
>>> perf-profile.self.cycles-pp.mem_cgroup_throttle_swaprate
>>> 0.15 ± 5% +0.0 0.19 ± 11%
>>> perf-profile.self.cycles-pp.apparmor_cred_prepare
>>> 0.05 ± 58% +0.0 0.09 ± 13%
>>> perf-profile.self.cycles-pp.mark_page_accessed
>>> 0.58 ± 10% +0.2 0.80 ± 20%
>>> perf-profile.self.cycles-pp.release_pages
>>> 0.75 ±173% +1.3e+05% 1005 ±100%
>>> interrupts.127:PCI-MSI.31981660-edge.i40e-eth0-TxRx-91
>>> 820.75 ±111% -99.9% 0.50 ±173%
>>> interrupts.47:PCI-MSI.31981580-edge.i40e-eth0-TxRx-11
>>> 449.25 ± 86% -100.0% 0.00
>>> interrupts.53:PCI-MSI.31981586-edge.i40e-eth0-TxRx-17
>>> 33.25 ±157% -100.0% 0.00
>>> interrupts.57:PCI-MSI.31981590-edge.i40e-eth0-TxRx-21
>>> 0.75 ±110% +63533.3% 477.25 ±162%
>>> interrupts.61:PCI-MSI.31981594-edge.i40e-eth0-TxRx-25
>>> 561.50 ±160% -100.0% 0.00
>>> interrupts.65:PCI-MSI.31981598-edge.i40e-eth0-TxRx-29
>>> 82921 ± 8% -11.1% 73748 ± 6%
>>> interrupts.CPU11.CAL:Function_call_interrupts
>>> 66509 ± 30% -32.6% 44828 ± 8%
>>> interrupts.CPU14.TLB:TLB_shootdowns
>>> 43105 ± 98% -90.3% 4183 ± 21%
>>> interrupts.CPU17.RES:Rescheduling_interrupts
>>> 148719 ± 70% -69.4% 45471 ± 16%
>>> interrupts.CPU17.TLB:TLB_shootdowns
>>> 85589 ± 42% -52.2% 40884 ± 5%
>>> interrupts.CPU20.TLB:TLB_shootdowns
>>> 222472 ± 41% -98.0% 4360 ± 45%
>>> interrupts.CPU22.RES:Rescheduling_interrupts
>>> 0.50 ±173% +95350.0% 477.25 ±162%
>>> interrupts.CPU25.61:PCI-MSI.31981594-edge.i40e-eth0-TxRx-25
>>> 76029 ± 10% +14.9% 87389 ± 5%
>>> interrupts.CPU25.CAL:Function_call_interrupts
>>> 399042 ± 6% +13.4% 452479 ± 8%
>>> interrupts.CPU27.LOC:Local_timer_interrupts
>>> 561.00 ±161% -100.0% 0.00
>>> interrupts.CPU29.65:PCI-MSI.31981598-edge.i40e-eth0-TxRx-29
>>> 7034 ± 46% +1083.8% 83279 ±138%
>>> interrupts.CPU29.RES:Rescheduling_interrupts
>>> 17829 ± 99% -71.0% 5172 ± 16%
>>> interrupts.CPU30.RES:Rescheduling_interrupts
>>> 5569 ± 15% +2414.7% 140059 ± 94%
>>> interrupts.CPU31.RES:Rescheduling_interrupts
>>> 37674 ± 16% +36.6% 51473 ± 25%
>>> interrupts.CPU31.TLB:TLB_shootdowns
>>> 47905 ± 39% +76.6% 84583 ± 38%
>>> interrupts.CPU34.TLB:TLB_shootdowns
>>> 568.75 ±140% +224.8% 1847 ± 90%
>>> interrupts.CPU36.NMI:Non-maskable_interrupts
>>> 568.75 ±140% +224.8% 1847 ± 90%
>>> interrupts.CPU36.PMI:Performance_monitoring_interrupts
>>> 4236 ± 25% +2168.5% 96092 ± 90%
>>> interrupts.CPU36.RES:Rescheduling_interrupts
>>> 52717 ± 27% +43.3% 75565 ± 28%
>>> interrupts.CPU37.TLB:TLB_shootdowns
>>> 41418 ± 9% +136.6% 98010 ± 50%
>>> interrupts.CPU39.TLB:TLB_shootdowns
>>> 5551 ± 8% +847.8% 52615 ± 66%
>>> interrupts.CPU40.RES:Rescheduling_interrupts
>>> 4746 ± 25% +865.9% 45841 ± 91%
>>> interrupts.CPU42.RES:Rescheduling_interrupts
>>> 37556 ± 11% +24.6% 46808 ± 6%
>>> interrupts.CPU42.TLB:TLB_shootdowns
>>> 21846 ±124% -84.4% 3415 ± 46%
>>> interrupts.CPU48.RES:Rescheduling_interrupts
>>> 891.50 ± 22% -35.2% 577.25 ± 40%
>>> interrupts.CPU49.NMI:Non-maskable_interrupts
>>> 891.50 ± 22% -35.2% 577.25 ± 40%
>>> interrupts.CPU49.PMI:Performance_monitoring_interrupts
>>> 20459 ±120% -79.2% 4263 ± 14%
>>> interrupts.CPU49.RES:Rescheduling_interrupts
>>> 59840 ± 21% -23.1% 46042 ± 16%
>>> interrupts.CPU5.TLB:TLB_shootdowns
>>> 65200 ± 19% -34.5% 42678 ± 9%
>>> interrupts.CPU51.TLB:TLB_shootdowns
>>> 70923 ±153% -94.0% 4270 ± 29%
>>> interrupts.CPU53.RES:Rescheduling_interrupts
>>> 65312 ± 22% -28.7% 46578 ± 14%
>>> interrupts.CPU56.TLB:TLB_shootdowns
>>> 65828 ± 24% -33.4% 43846 ± 4%
>>> interrupts.CPU59.TLB:TLB_shootdowns
>>> 72558 ±156% -93.2% 4906 ± 9%
>>> interrupts.CPU6.RES:Rescheduling_interrupts
>>> 68698 ± 34% -32.6% 46327 ± 18%
>>> interrupts.CPU61.TLB:TLB_shootdowns
>>> 109745 ± 44% -57.4% 46711 ± 16%
>>> interrupts.CPU62.TLB:TLB_shootdowns
>>> 89714 ± 44% -48.5% 46198 ± 7%
>>> interrupts.CPU63.TLB:TLB_shootdowns
>>> 59380 ±136% -91.5% 5066 ± 13%
>>> interrupts.CPU69.RES:Rescheduling_interrupts
>>> 40094 ± 18% +133.9% 93798 ± 44%
>>> interrupts.CPU78.TLB:TLB_shootdowns
>>> 129884 ± 72% -55.3% 58034 ±157%
>>> interrupts.CPU8.RES:Rescheduling_interrupts
>>> 69984 ± 11% +51.4% 105957 ± 20%
>>> interrupts.CPU80.CAL:Function_call_interrupts
>>> 32857 ± 10% +128.7% 75131 ± 36%
>>> interrupts.CPU80.TLB:TLB_shootdowns
>>> 35726 ± 16% +34.6% 48081 ± 12%
>>> interrupts.CPU82.TLB:TLB_shootdowns
>>> 73820 ± 17% +28.2% 94643 ± 8%
>>> interrupts.CPU84.CAL:Function_call_interrupts
>>> 38829 ± 28% +190.3% 112736 ± 42%
>>> interrupts.CPU84.TLB:TLB_shootdowns
>>> 36129 ± 4% +47.6% 53329 ± 13%
>>> interrupts.CPU85.TLB:TLB_shootdowns
>>> 4693 ± 7% +1323.0% 66793 ±145%
>>> interrupts.CPU86.RES:Rescheduling_interrupts
>>> 38003 ± 11% +94.8% 74031 ± 43%
>>> interrupts.CPU86.TLB:TLB_shootdowns
>>> 78022 ± 3% +7.9% 84210 ± 3%
>>> interrupts.CPU87.CAL:Function_call_interrupts
>>> 36359 ± 6% +54.9% 56304 ± 48%
>>> interrupts.CPU88.TLB:TLB_shootdowns
>>> 89031 ±105% -95.0% 4475 ± 40%
>>> interrupts.CPU9.RES:Rescheduling_interrupts
>>> 40085 ± 11% +60.6% 64368 ± 27%
>>> interrupts.CPU91.TLB:TLB_shootdowns
>>> 42244 ± 10% +44.8% 61162 ± 35%
>>> interrupts.CPU94.TLB:TLB_shootdowns
>>> 40959 ± 15% +109.4% 85780 ± 41%
>>> interrupts.CPU95.TLB:TLB_shootdowns
>>>
>>>
>>> stress-ng.fiemap.ops
>>> [ASCII trend plot, mangled by line wrapping in the quoted mail: bisect-good samples cluster around 70000-75000 ops, bisect-bad (O) samples around 28000-35000 ops]
>>> stress-ng.fiemap.ops_per_sec
>>> [ASCII trend plot, mangled by line wrapping: bisect-good samples cluster around 70000-75000 ops/sec, bisect-bad (O) samples around 28000-35000 ops/sec]
>>> [*] bisect-good sample
>>> [O] bisect-bad sample
>>>
>>>
>>>
>>> Disclaimer:
>>> Results have been estimated based on internal Intel analysis and are
>>> provided
>>> for informational purposes only. Any difference in system hardware or
>>> software
>>> design or configuration may affect actual performance.
>>>
>>>
>>> Thanks,
>>> Rong Chen
>>>
>>>
>>> _______________________________________________
>>> LKP mailing list -- lkp(a)lists.01.org
>>> To unsubscribe send an email to lkp-leave(a)lists.01.org
>>>
>>
>
--
Zhengjun Xing
2 years
[sched/fair] 070f5e860e: reaim.jobs_per_min -10.5% regression
by kernel test robot
Greeting,
FYI, we noticed a -10.5% regression of reaim.jobs_per_min due to commit:
commit: 070f5e860ee2bf588c99ef7b4c202451faa48236 ("sched/fair: Take into account runnable_avg to classify group")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
in testcase: reaim
on test machine: 4 threads Intel(R) Core(TM) i3-3220 CPU @ 3.30GHz with 4G memory
with following parameters:
runtime: 300s
nr_task: 100%
test: five_sec
cpufreq_governor: performance
ucode: 0x21
test-description: REAIM is an updated and improved version of the AIM 7 benchmark.
test-url: https://sourceforge.net/projects/re-aim-7/
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
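For reference, the bisected commit changes how the load balancer classifies a scheduling group: a group whose summed runnable_avg outruns its capacity is no longer reported as having spare capacity, even when utilization alone would still look fine. A rough standalone approximation of that check (simplified model, not the kernel's exact code; the struct fields and the 117 imbalance margin are placeholders):

/*
 * Rough model of the group classification change: runnable_avg (time
 * tasks spend runnable, including time waiting on a runqueue) is now
 * compared against group capacity in addition to utilization.
 */
#include <stdbool.h>
#include <stdio.h>

struct group_stats {
	unsigned long capacity;		/* e.g. 1024 per CPU */
	unsigned long util;		/* summed util_avg */
	unsigned long runnable;		/* summed runnable_avg */
	unsigned int nr_running;
	unsigned int weight;		/* number of CPUs in the group */
};

static bool group_has_capacity(const struct group_stats *s,
			       unsigned int imbalance_pct)
{
	if (s->nr_running < s->weight)
		return true;

	/* the new runnable check (approximated): heavy runnable pressure
	 * means the CPUs are already contended, so report no spare capacity */
	if (s->capacity * imbalance_pct < s->runnable * 100)
		return false;

	if (s->capacity * 100 > s->util * imbalance_pct)
		return true;

	return false;
}

int main(void)
{
	/* 4 CPUs, modest utilization but lots of runnable (waiting) time */
	struct group_stats s = {
		.capacity = 4 * 1024, .util = 3000, .runnable = 5200,
		.nr_running = 8, .weight = 4,
	};

	printf("has spare capacity: %s\n",
	       group_has_capacity(&s, 117) ? "yes" : "no");
	return 0;
}

With these example numbers the utilization comparison alone would have reported spare capacity; the added runnable comparison flips the answer to "no", which is the behavioural change this benchmark is reacting to.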
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/100%/debian-x86_64-20191114.cgz/300s/lkp-ivb-d04/five_sec/reaim/0x21
commit:
9f68395333 ("sched/pelt: Add a new runnable average signal")
070f5e860e ("sched/fair: Take into account runnable_avg to classify group")
9f68395333ad7f5b 070f5e860ee2bf588c99ef7b4c2
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
4:4 -18% 3:4 perf-profile.children.cycles-pp.error_entry
3:4 -12% 3:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
0.68 -10.4% 0.61 reaim.child_systime
67235 -10.5% 60195 reaim.jobs_per_min
16808 -10.5% 15048 reaim.jobs_per_min_child
97.90 -1.2% 96.70 reaim.jti
72000 -10.8% 64216 reaim.max_jobs_per_min
0.36 +11.3% 0.40 reaim.parent_time
1.56 ± 3% +79.1% 2.80 ± 6% reaim.std_dev_percent
0.00 ± 7% +145.9% 0.01 ± 9% reaim.std_dev_time
104276 -16.0% 87616 reaim.time.involuntary_context_switches
15511157 -2.4% 15144312 reaim.time.minor_page_faults
55.00 -7.3% 51.00 reaim.time.percent_of_cpu_this_job_got
88.01 -12.4% 77.12 reaim.time.system_time
79.97 -3.2% 77.38 reaim.time.user_time
216380 -3.4% 208924 reaim.time.voluntary_context_switches
50800 -2.4% 49600 reaim.workload
30.40 ± 2% -4.7% 28.97 ± 2% boot-time.boot
9.38 -0.7 8.66 ± 3% mpstat.cpu.all.sys%
7452 +7.5% 8014 vmstat.system.cs
1457802 ± 16% +49.3% 2176122 ± 13% cpuidle.C1.time
48523684 +43.4% 69570233 ± 22% cpuidle.C1E.time
806543 ± 2% +20.7% 973406 ± 11% cpuidle.C1E.usage
14328 ± 6% +14.5% 16410 ± 8% cpuidle.POLL.time
43300 ± 4% +13.5% 49150 ± 5% softirqs.CPU0.SCHED
118751 -9.3% 107763 softirqs.CPU1.RCU
41679 ± 3% +14.1% 47546 ± 4% softirqs.CPU1.SCHED
42688 ± 3% +12.3% 47931 ± 4% softirqs.CPU2.SCHED
41730 ± 2% +17.7% 49115 ± 4% softirqs.CPU3.SCHED
169399 +14.4% 193744 ± 2% softirqs.SCHED
3419 +1.0% 3453 proc-vmstat.nr_kernel_stack
16365616 -1.8% 16077850 proc-vmstat.numa_hit
16365616 -1.8% 16077850 proc-vmstat.numa_local
93908 -1.6% 92389 proc-vmstat.pgactivate
16269664 -3.9% 15629529 ± 2% proc-vmstat.pgalloc_normal
15918803 -2.3% 15557936 proc-vmstat.pgfault
16644610 -2.0% 16310898 proc-vmstat.pgfree
20125 ±123% +161.7% 52662 ± 30% sched_debug.cfs_rq:/.load.min
348749 ± 10% -11.2% 309562 ± 11% sched_debug.cfs_rq:/.load.stddev
1096 ± 6% -14.4% 938.42 ± 7% sched_debug.cfs_rq:/.load_avg.max
448.46 ± 8% -17.5% 370.19 ± 10% sched_debug.cfs_rq:/.load_avg.stddev
117372 -10.2% 105432 sched_debug.cfs_rq:/.min_vruntime.avg
135242 ± 4% -9.2% 122811 sched_debug.cfs_rq:/.min_vruntime.max
0.53 ± 8% +17.6% 0.62 ± 6% sched_debug.cfs_rq:/.nr_running.avg
29.79 ± 30% -51.0% 14.58 ± 35% sched_debug.cfs_rq:/.nr_spread_over.max
10.21 ± 34% -59.7% 4.12 ± 52% sched_debug.cfs_rq:/.nr_spread_over.stddev
78.25 ± 40% +3304.7% 2664 ± 94% sched_debug.cpu.curr->pid.min
294309 ± 2% +34.3% 395172 ± 12% sched_debug.cpu.nr_switches.min
9.58 ± 35% +84.8% 17.71 ± 40% sched_debug.cpu.nr_uninterruptible.max
-6.88 +120.6% -15.17 sched_debug.cpu.nr_uninterruptible.min
6.41 ± 30% +95.2% 12.52 ± 33% sched_debug.cpu.nr_uninterruptible.stddev
286185 +33.4% 381734 ± 13% sched_debug.cpu.sched_count.min
180416 +11.0% 200247 sched_debug.cpu.sched_goidle.avg
116264 ± 3% +44.6% 168090 ± 15% sched_debug.cpu.sched_goidle.min
476.00 ± 8% +92.4% 915.75 ± 3% interrupts.CAL:Function_call_interrupts
110.50 ± 24% +101.1% 222.25 ± 4% interrupts.CPU0.CAL:Function_call_interrupts
1381 ± 29% +23.7% 1709 ± 26% interrupts.CPU0.NMI:Non-maskable_interrupts
1381 ± 29% +23.7% 1709 ± 26% interrupts.CPU0.PMI:Performance_monitoring_interrupts
3319 ± 9% +50.4% 4991 ± 2% interrupts.CPU0.RES:Rescheduling_interrupts
41.25 ± 30% +274.5% 154.50 ± 15% interrupts.CPU0.TLB:TLB_shootdowns
116.25 ± 23% +96.1% 228.00 ± 16% interrupts.CPU1.CAL:Function_call_interrupts
1183 ± 10% +43.1% 1692 ± 23% interrupts.CPU1.NMI:Non-maskable_interrupts
1183 ± 10% +43.1% 1692 ± 23% interrupts.CPU1.PMI:Performance_monitoring_interrupts
3335 ± 7% +60.4% 5350 ± 5% interrupts.CPU1.RES:Rescheduling_interrupts
36.25 ± 30% +344.1% 161.00 ± 8% interrupts.CPU1.TLB:TLB_shootdowns
131.25 ± 11% +81.1% 237.75 ± 11% interrupts.CPU2.CAL:Function_call_interrupts
3247 ± 2% +62.4% 5274 interrupts.CPU2.RES:Rescheduling_interrupts
34.50 ± 36% +357.2% 157.75 ± 7% interrupts.CPU2.TLB:TLB_shootdowns
118.00 ± 13% +93.0% 227.75 ± 9% interrupts.CPU3.CAL:Function_call_interrupts
3155 ± 4% +68.7% 5322 ± 3% interrupts.CPU3.RES:Rescheduling_interrupts
38.50 ± 16% +303.9% 155.50 ± 3% interrupts.CPU3.TLB:TLB_shootdowns
13057 ± 2% +60.4% 20939 interrupts.RES:Rescheduling_interrupts
150.50 ± 27% +317.8% 628.75 ± 3% interrupts.TLB:TLB_shootdowns
2.00 +0.1 2.09 ± 3% perf-stat.i.branch-miss-rate%
10.26 +1.1 11.36 ± 7% perf-stat.i.cache-miss-rate%
2009706 ± 2% +5.4% 2117525 ± 3% perf-stat.i.cache-misses
16867421 -4.5% 16106908 perf-stat.i.cache-references
7514 +7.6% 8083 perf-stat.i.context-switches
1.51 -3.0% 1.47 perf-stat.i.cpi
2.523e+09 ± 3% -8.8% 2.301e+09 ± 2% perf-stat.i.cpu-cycles
124.54 +157.8% 321.08 perf-stat.i.cpu-migrations
1842 ± 10% -18.6% 1498 ± 6% perf-stat.i.cycles-between-cache-misses
752585 ± 2% -4.1% 721714 perf-stat.i.dTLB-store-misses
590441 +2.7% 606399 perf-stat.i.iTLB-load-misses
68766 +4.0% 71488 ± 2% perf-stat.i.iTLB-loads
1.847e+09 ± 3% -4.7% 1.76e+09 ± 2% perf-stat.i.instructions
3490 ± 4% -8.5% 3195 ± 3% perf-stat.i.instructions-per-iTLB-miss
0.68 +3.7% 0.70 perf-stat.i.ipc
51861 -2.1% 50797 perf-stat.i.minor-faults
51861 -2.1% 50797 perf-stat.i.page-faults
2.68 ± 2% +0.1 2.78 perf-stat.overall.branch-miss-rate%
11.91 +1.2 13.14 ± 2% perf-stat.overall.cache-miss-rate%
1.37 -4.3% 1.31 perf-stat.overall.cpi
1255 -13.4% 1087 ± 2% perf-stat.overall.cycles-between-cache-misses
3127 ± 3% -7.2% 2901 ± 2% perf-stat.overall.instructions-per-iTLB-miss
0.73 +4.5% 0.76 perf-stat.overall.ipc
2002763 ± 2% +5.4% 2110303 ± 3% perf-stat.ps.cache-misses
16809816 -4.5% 16051656 perf-stat.ps.cache-references
7489 +7.6% 8055 perf-stat.ps.context-switches
2.514e+09 ± 3% -8.8% 2.293e+09 ± 2% perf-stat.ps.cpu-cycles
124.12 +157.8% 319.95 perf-stat.ps.cpu-migrations
750010 ± 2% -4.1% 719223 perf-stat.ps.dTLB-store-misses
588424 +2.7% 604314 perf-stat.ps.iTLB-load-misses
68533 +4.0% 71246 ± 2% perf-stat.ps.iTLB-loads
1.841e+09 ± 3% -4.7% 1.754e+09 ± 2% perf-stat.ps.instructions
51683 -2.1% 50622 perf-stat.ps.minor-faults
51683 -2.1% 50622 perf-stat.ps.page-faults
5.577e+11 ± 3% -5.1% 5.292e+11 ± 2% perf-stat.total.instructions
7.35 ± 17% -2.7 4.60 ± 10% perf-profile.calltrace.cycles-pp.serial8250_console_putchar.uart_console_write.serial8250_console_write.console_unlock.vprintk_emit
7.74 ± 20% -2.7 5.00 ± 6% perf-profile.calltrace.cycles-pp.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write.console_unlock
10.14 ± 8% -2.7 7.44 ± 6% perf-profile.calltrace.cycles-pp.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
10.66 ± 8% -2.6 8.07 ± 8% perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
7.10 ± 17% -2.4 4.69 ± 7% perf-profile.calltrace.cycles-pp.write._fini
7.10 ± 17% -2.4 4.69 ± 7% perf-profile.calltrace.cycles-pp._fini
7.09 ± 17% -2.4 4.69 ± 7% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write._fini
7.09 ± 17% -2.4 4.69 ± 7% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write._fini
7.09 ± 17% -2.4 4.69 ± 7% perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write._fini
7.09 ± 17% -2.4 4.69 ± 7% perf-profile.calltrace.cycles-pp.devkmsg_write.new_sync_write.vfs_write.ksys_write.do_syscall_64
7.09 ± 17% -2.4 4.69 ± 7% perf-profile.calltrace.cycles-pp.vprintk_emit.devkmsg_emit.devkmsg_write.new_sync_write.vfs_write
7.09 ± 17% -2.4 4.69 ± 7% perf-profile.calltrace.cycles-pp.devkmsg_emit.devkmsg_write.new_sync_write.vfs_write.ksys_write
6.20 ± 8% -2.1 4.08 ± 5% perf-profile.calltrace.cycles-pp.console_unlock.vprintk_emit.devkmsg_emit.devkmsg_write.new_sync_write
5.15 ± 11% -1.8 3.38 ± 4% perf-profile.calltrace.cycles-pp.serial8250_console_write.console_unlock.vprintk_emit.devkmsg_emit.devkmsg_write
5.05 ± 11% -1.7 3.31 ± 3% perf-profile.calltrace.cycles-pp.uart_console_write.serial8250_console_write.console_unlock.vprintk_emit.devkmsg_emit
7.41 ± 10% -1.1 6.29 ± 5% perf-profile.calltrace.cycles-pp.__do_execve_file.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
7.57 ± 11% -1.1 6.46 ± 5% perf-profile.calltrace.cycles-pp.execve
7.46 ± 10% -1.1 6.37 ± 5% perf-profile.calltrace.cycles-pp.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
7.46 ± 10% -1.1 6.37 ± 5% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.execve
7.46 ± 10% -1.1 6.37 ± 5% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
7.03 ± 5% -1.1 5.95 ± 10% perf-profile.calltrace.cycles-pp.brk
5.90 ± 7% -0.9 4.98 ± 10% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.brk
5.84 ± 7% -0.9 4.93 ± 9% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk
15.77 ± 2% -0.9 14.88 ± 2% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
15.86 ± 2% -0.9 14.97 ± 2% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
3.88 ± 6% -0.9 2.99 ± 5% perf-profile.calltrace.cycles-pp.kill
1.70 ± 23% -0.8 0.90 ± 10% perf-profile.calltrace.cycles-pp.delay_tsc.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write
4.88 ± 8% -0.8 4.08 ± 8% perf-profile.calltrace.cycles-pp.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk
2.39 ± 27% -0.7 1.67 ± 5% perf-profile.calltrace.cycles-pp.flush_old_exec.load_elf_binary.search_binary_handler.__do_execve_file.__x64_sys_execve
2.29 ± 30% -0.7 1.59 ± 5% perf-profile.calltrace.cycles-pp.mmput.flush_old_exec.load_elf_binary.search_binary_handler.__do_execve_file
2.27 ± 30% -0.7 1.58 ± 5% perf-profile.calltrace.cycles-pp.exit_mmap.mmput.flush_old_exec.load_elf_binary.search_binary_handler
3.11 ± 5% -0.6 2.47 ± 9% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.kill
3.07 ± 5% -0.6 2.45 ± 9% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.kill
2.09 ± 18% -0.4 1.67 ± 3% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read
2.82 ± 9% -0.4 2.40 ± 12% perf-profile.calltrace.cycles-pp.mmput.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
2.80 ± 9% -0.4 2.38 ± 12% perf-profile.calltrace.cycles-pp.exit_mmap.mmput.do_exit.do_group_exit.__x64_sys_exit_group
1.11 ± 33% -0.4 0.71 ± 10% perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.mmput.flush_old_exec.load_elf_binary
1.05 ± 15% -0.4 0.69 ± 13% perf-profile.calltrace.cycles-pp.vt_console_print.console_unlock.vprintk_emit.devkmsg_emit.devkmsg_write
1.03 ± 17% -0.4 0.68 ± 13% perf-profile.calltrace.cycles-pp.lf.vt_console_print.console_unlock.vprintk_emit.devkmsg_emit
1.03 ± 17% -0.4 0.68 ± 13% perf-profile.calltrace.cycles-pp.con_scroll.lf.vt_console_print.console_unlock.vprintk_emit
1.03 ± 17% -0.4 0.68 ± 13% perf-profile.calltrace.cycles-pp.fbcon_scroll.con_scroll.lf.vt_console_print.console_unlock
0.96 ± 16% -0.3 0.66 ± 12% perf-profile.calltrace.cycles-pp.fbcon_putcs.fbcon_redraw.fbcon_scroll.con_scroll.lf
1.85 ± 4% -0.3 1.58 ± 8% perf-profile.calltrace.cycles-pp.alloc_pages_vma.handle_pte_fault.__handle_mm_fault.handle_mm_fault.do_page_fault
0.89 ± 15% -0.3 0.62 ± 12% perf-profile.calltrace.cycles-pp.kill_pid_info.kill_something_info.__x64_sys_kill.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.67 ± 5% -0.3 1.41 ± 9% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.handle_pte_fault.__handle_mm_fault.handle_mm_fault
1.02 ± 7% -0.3 0.77 ± 12% perf-profile.calltrace.cycles-pp.do_signal.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe.kill
0.94 ± 16% -0.2 0.70 ± 5% perf-profile.calltrace.cycles-pp.clear_page_erms.prep_new_page.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma
0.98 ± 16% -0.2 0.74 ± 7% perf-profile.calltrace.cycles-pp.prep_new_page.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.handle_pte_fault
1.03 ± 6% -0.2 0.79 ± 10% perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe.kill
1.00 ± 10% -0.2 0.77 ± 9% perf-profile.calltrace.cycles-pp.new_sync_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.87 ± 10% -0.2 0.66 ± 15% perf-profile.calltrace.cycles-pp.shmem_file_read_iter.new_sync_read.vfs_read.ksys_read.do_syscall_64
1.41 ± 3% -0.2 1.23 ± 7% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.handle_pte_fault.__handle_mm_fault
1.88 ± 5% -0.1 1.73 perf-profile.calltrace.cycles-pp.__x64_sys_clone.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.87 ± 5% -0.1 1.73 perf-profile.calltrace.cycles-pp._do_fork.__x64_sys_clone.do_syscall_64.entry_SYSCALL_64_after_hwframe
10.34 ± 11% +7.3 17.66 ± 8% perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
10.18 ± 11% +7.3 17.52 ± 8% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
11.32 ± 9% +7.7 19.03 ± 8% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
11.32 ± 9% +7.7 19.05 ± 8% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
11.32 ± 9% +7.7 19.05 ± 8% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
11.02 ± 5% +8.1 19.14 ± 7% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
16.04 ± 6% +9.1 25.17 ± 8% perf-profile.calltrace.cycles-pp.secondary_startup_64
55.98 -7.0 48.94 ± 4% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
55.67 -7.0 48.67 ± 4% perf-profile.children.cycles-pp.do_syscall_64
10.60 ± 16% -3.3 7.30 ± 8% perf-profile.children.cycles-pp.vprintk_emit
13.02 ± 7% -3.0 9.99 ± 7% perf-profile.children.cycles-pp.write
9.92 ± 13% -2.8 7.08 ± 4% perf-profile.children.cycles-pp.console_unlock
10.26 ± 8% -2.7 7.53 ± 6% perf-profile.children.cycles-pp.new_sync_write
10.79 ± 8% -2.6 8.18 ± 8% perf-profile.children.cycles-pp.vfs_write
10.95 ± 8% -2.6 8.36 ± 8% perf-profile.children.cycles-pp.ksys_write
7.17 ± 16% -2.5 4.69 ± 7% perf-profile.children.cycles-pp.devkmsg_write
7.17 ± 16% -2.5 4.69 ± 7% perf-profile.children.cycles-pp.devkmsg_emit
8.65 ± 16% -2.4 6.21 ± 4% perf-profile.children.cycles-pp.serial8250_console_write
8.53 ± 17% -2.4 6.11 ± 4% perf-profile.children.cycles-pp.uart_console_write
7.13 ± 17% -2.4 4.71 ± 6% perf-profile.children.cycles-pp._fini
8.46 ± 16% -2.4 6.07 ± 4% perf-profile.children.cycles-pp.wait_for_xmitr
8.34 ± 16% -2.4 5.97 ± 4% perf-profile.children.cycles-pp.serial8250_console_putchar
5.80 ± 16% -1.6 4.21 ± 6% perf-profile.children.cycles-pp.io_serial_in
7.85 ± 10% -1.2 6.67 ± 5% perf-profile.children.cycles-pp.execve
7.72 ± 11% -1.2 6.55 ± 5% perf-profile.children.cycles-pp.__do_execve_file
5.19 ± 13% -1.1 4.05 ± 8% perf-profile.children.cycles-pp.mmput
5.16 ± 13% -1.1 4.03 ± 8% perf-profile.children.cycles-pp.exit_mmap
7.76 ± 10% -1.1 6.64 ± 5% perf-profile.children.cycles-pp.__x64_sys_execve
7.11 ± 5% -1.1 6.01 ± 10% perf-profile.children.cycles-pp.brk
3.92 ± 6% -0.9 3.03 ± 5% perf-profile.children.cycles-pp.kill
2.63 ± 17% -0.8 1.85 perf-profile.children.cycles-pp.delay_tsc
4.89 ± 8% -0.8 4.12 ± 8% perf-profile.children.cycles-pp.__x64_sys_brk
2.48 ± 27% -0.7 1.74 ± 4% perf-profile.children.cycles-pp.flush_old_exec
3.02 ± 12% -0.7 2.28 ± 12% perf-profile.children.cycles-pp.unmap_page_range
3.15 ± 11% -0.7 2.40 ± 12% perf-profile.children.cycles-pp.unmap_vmas
2.25 ± 19% -0.5 1.75 ± 11% perf-profile.children.cycles-pp.unmap_region
1.27 ± 11% -0.4 0.86 ± 8% perf-profile.children.cycles-pp.vt_console_print
1.24 ± 12% -0.4 0.85 ± 9% perf-profile.children.cycles-pp.lf
1.24 ± 12% -0.4 0.85 ± 9% perf-profile.children.cycles-pp.con_scroll
1.24 ± 12% -0.4 0.85 ± 9% perf-profile.children.cycles-pp.fbcon_scroll
1.79 ± 9% -0.4 1.41 ± 4% perf-profile.children.cycles-pp.release_pages
1.22 ± 11% -0.4 0.85 ± 9% perf-profile.children.cycles-pp.fbcon_redraw
1.17 ± 12% -0.4 0.82 ± 10% perf-profile.children.cycles-pp.fbcon_putcs
1.16 ± 13% -0.3 0.82 ± 10% perf-profile.children.cycles-pp.bit_putcs
0.90 ± 16% -0.3 0.62 ± 12% perf-profile.children.cycles-pp.kill_pid_info
0.95 ± 10% -0.3 0.68 ± 6% perf-profile.children.cycles-pp.drm_fb_helper_cfb_imageblit
0.95 ± 11% -0.3 0.68 ± 6% perf-profile.children.cycles-pp.cfb_imageblit
1.24 ± 7% -0.2 1.01 ± 6% perf-profile.children.cycles-pp.new_sync_read
0.71 ± 4% -0.2 0.49 ± 23% perf-profile.children.cycles-pp.___perf_sw_event
0.55 ± 31% -0.2 0.33 ± 16% perf-profile.children.cycles-pp.unlink_anon_vmas
0.89 ± 11% -0.2 0.67 ± 15% perf-profile.children.cycles-pp.shmem_file_read_iter
0.60 ± 20% -0.2 0.39 ± 20% perf-profile.children.cycles-pp.__send_signal
1.06 ± 6% -0.2 0.85 ± 16% perf-profile.children.cycles-pp.pagevec_lru_move_fn
0.88 -0.2 0.68 ± 6% perf-profile.children.cycles-pp.__perf_sw_event
1.49 ± 5% -0.2 1.29 ± 7% perf-profile.children.cycles-pp.prepare_exit_to_usermode
0.56 ± 12% -0.2 0.37 ± 11% perf-profile.children.cycles-pp.do_send_sig_info
1.65 ± 8% -0.2 1.47 ± 4% perf-profile.children.cycles-pp.perf_event_mmap
0.69 ± 2% -0.2 0.52 ± 16% perf-profile.children.cycles-pp.page_remove_rmap
0.61 ± 5% -0.2 0.44 ± 15% perf-profile.children.cycles-pp.free_unref_page_list
0.60 ± 6% -0.2 0.43 ± 15% perf-profile.children.cycles-pp.__vm_munmap
0.77 ± 12% -0.2 0.62 ± 12% perf-profile.children.cycles-pp.__might_sleep
0.39 ± 12% -0.2 0.24 ± 18% perf-profile.children.cycles-pp.time
0.46 ± 14% -0.1 0.34 ± 14% perf-profile.children.cycles-pp.lru_add_drain_cpu
0.57 ± 8% -0.1 0.47 ± 14% perf-profile.children.cycles-pp.shmem_undo_range
0.41 ± 12% -0.1 0.30 ± 15% perf-profile.children.cycles-pp.copy_fpstate_to_sigframe
0.76 ± 7% -0.1 0.67 ± 8% perf-profile.children.cycles-pp.__x64_sys_rt_sigreturn
0.26 ± 16% -0.1 0.17 ± 17% perf-profile.children.cycles-pp.mark_page_accessed
0.12 ± 47% -0.1 0.04 ±103% perf-profile.children.cycles-pp.sigaction
0.23 ± 12% -0.1 0.15 ± 11% perf-profile.children.cycles-pp.__vm_enough_memory
0.12 ± 18% -0.1 0.05 ±106% perf-profile.children.cycles-pp.__vsprintf_chk
0.23 ± 20% -0.1 0.17 ± 13% perf-profile.children.cycles-pp.d_add
0.13 ± 23% -0.1 0.07 ± 58% perf-profile.children.cycles-pp.fput_many
0.13 ± 14% -0.1 0.08 ± 24% perf-profile.children.cycles-pp.vfs_unlink
0.11 ± 20% -0.0 0.07 ± 7% perf-profile.children.cycles-pp.__update_load_avg_cfs_rq
0.04 ± 63% +0.0 0.08 ± 23% perf-profile.children.cycles-pp.uncharge_page
0.06 ± 22% +0.0 0.10 ± 36% perf-profile.children.cycles-pp.sched_exec
0.44 ± 4% +0.0 0.48 ± 4% perf-profile.children.cycles-pp.close
0.14 ± 22% +0.1 0.21 ± 17% perf-profile.children.cycles-pp.pick_next_task_fair
0.10 ± 17% +0.1 0.17 ± 23% perf-profile.children.cycles-pp.__anon_vma_prepare
0.00 +0.1 0.07 ± 24% perf-profile.children.cycles-pp.update_sd_lb_stats
0.07 ± 34% +0.1 0.15 ± 42% perf-profile.children.cycles-pp.file_free_rcu
0.15 ± 27% +0.1 0.23 ± 21% perf-profile.children.cycles-pp.__strcasecmp
0.20 ± 21% +0.1 0.29 ± 8% perf-profile.children.cycles-pp.__pte_alloc
0.14 ± 47% +0.1 0.23 ± 27% perf-profile.children.cycles-pp.update_blocked_averages
0.09 ± 44% +0.1 0.19 ± 18% perf-profile.children.cycles-pp.schedule_idle
0.00 +0.1 0.10 ± 33% perf-profile.children.cycles-pp.newidle_balance
0.00 +0.1 0.10 ± 18% perf-profile.children.cycles-pp.__vmalloc_node_range
0.21 ± 15% +0.1 0.32 ± 25% perf-profile.children.cycles-pp.__wake_up_common
0.63 ± 8% +0.1 0.77 ± 6% perf-profile.children.cycles-pp.rcu_do_batch
0.76 ± 14% +0.1 0.90 ± 9% perf-profile.children.cycles-pp.rcu_core
0.07 ± 90% +0.2 0.27 ±109% perf-profile.children.cycles-pp.security_mmap_addr
0.46 ± 26% +0.3 0.75 ± 13% perf-profile.children.cycles-pp.__sched_text_start
11.32 ± 9% +7.7 19.05 ± 8% perf-profile.children.cycles-pp.start_secondary
11.03 ± 5% +8.1 19.16 ± 7% perf-profile.children.cycles-pp.intel_idle
14.78 ± 6% +8.5 23.24 ± 8% perf-profile.children.cycles-pp.cpuidle_enter
14.76 ± 6% +8.5 23.24 ± 8% perf-profile.children.cycles-pp.cpuidle_enter_state
16.04 ± 6% +9.1 25.17 ± 8% perf-profile.children.cycles-pp.secondary_startup_64
16.04 ± 6% +9.1 25.17 ± 8% perf-profile.children.cycles-pp.cpu_startup_entry
16.04 ± 6% +9.1 25.19 ± 8% perf-profile.children.cycles-pp.do_idle
5.79 ± 16% -1.6 4.21 ± 6% perf-profile.self.cycles-pp.io_serial_in
2.62 ± 17% -0.8 1.85 perf-profile.self.cycles-pp.delay_tsc
5.11 ± 4% -0.6 4.56 ± 5% perf-profile.self.cycles-pp.do_syscall_64
1.44 ± 6% -0.3 1.15 ± 5% perf-profile.self.cycles-pp.unmap_page_range
0.94 ± 11% -0.3 0.68 ± 6% perf-profile.self.cycles-pp.cfb_imageblit
0.65 ± 6% -0.2 0.42 ± 23% perf-profile.self.cycles-pp.___perf_sw_event
1.42 ± 5% -0.2 1.22 ± 7% perf-profile.self.cycles-pp.prepare_exit_to_usermode
0.65 ± 13% -0.2 0.47 ± 9% perf-profile.self.cycles-pp.do_page_fault
0.65 ± 9% -0.1 0.52 ± 5% perf-profile.self.cycles-pp.release_pages
0.24 ± 20% -0.1 0.15 ± 16% perf-profile.self.cycles-pp.mark_page_accessed
0.16 ± 28% -0.1 0.08 ± 69% perf-profile.self.cycles-pp.free_unref_page_commit
0.12 ± 24% -0.1 0.04 ± 59% perf-profile.self.cycles-pp.__do_munmap
0.10 ± 24% -0.0 0.06 ± 7% perf-profile.self.cycles-pp.__update_load_avg_cfs_rq
0.04 ± 57% +0.0 0.07 ± 19% perf-profile.self.cycles-pp.__sbrk
0.04 ± 57% +0.0 0.08 ± 23% perf-profile.self.cycles-pp.update_load_avg
0.04 ± 57% +0.0 0.08 ± 23% perf-profile.self.cycles-pp.uncharge_page
0.26 ± 11% +0.1 0.39 ± 12% perf-profile.self.cycles-pp.copy_page
0.49 ± 13% +0.1 0.63 ± 13% perf-profile.self.cycles-pp.get_page_from_freelist
11.00 ± 5% +8.1 19.14 ± 7% perf-profile.self.cycles-pp.intel_idle
reaim.time.system_time
90 +----------------------------------------------------------------------+
| +. .+ ++.++ .+ |
88 |+.+ +.++. + +++ ++.++.+ +.++.+++.+++.+++. +.++ .+ + :.+++.+|
| + + + + ++.++ + |
86 |-+ |
| |
84 |-+ |
| |
82 |-+ O |
|O OO OO O |
80 |-+ |
| OO OOO O O O O |
78 |-+ O O O O O OO O |
| O OOO O |
76 +----------------------------------------------------------------------+
reaim.time.percent_of_cpu_this_job_got
55 +--------------------------------------------------------------------+
| |
54.5 |-+ |
54 |-+ |
| |
53.5 |-+ |
| |
53 |-+OOO OOO |
| |
52.5 |-+ |
52 |O+ OOO OOO OOO OOO OO |
| |
51.5 |-+ |
| |
51 +--------------------------------------------------------------------+
reaim.parent_time
0.405 +-------------------------------------------------------------------+
0.4 |-+ O |
| O O OOO OO |
0.395 |-+ |
0.39 |-+ |
0.385 |-+ OOOO OOO OOO OOO OOOO |
0.38 |O+OOO |
| |
0.375 |-+ |
0.37 |-+ |
0.365 |-+ |
0.36 |-+ |
| +. .+ +. .+ +. + +. +++. + .+|
0.355 |+. +.+ ++.++ +++. ++.++++ + +++ + +++ + ++.++ + +. : + + |
0.35 +-------------------------------------------------------------------+
reaim.child_systime
0.69 +--------------------------------------------------------------------+
| +.+ ++. ++.+ .++ ++ +++.+ .++ .+++. + .+ : + .+|
0.68 |+.++ ++.+ + ++ +.+ ++ + + +.+++ +.+ + |
0.67 |-+ |
| |
0.66 |-+ |
0.65 |-+ |
| O |
0.64 |O+O OOO |
0.63 |-+ O |
| O O O O O |
0.62 |-+ OO O O O O O |
0.61 |-+ O OO O OO OOOO |
| |
0.6 +--------------------------------------------------------------------+
reaim.jobs_per_min
69000 +-------------------------------------------------------------------+
68000 |-.++ + + .++ .+ .++ + .+++.+ |
|+ +.+ ++.+++.+ + + +++.+++.+++.+++.++++ +.+ + + +.+ +. |
67000 |-+ + + +|
66000 |-+ |
| |
65000 |-+ |
64000 |-+ |
63000 |-+ |
|O OOO O O O O |
62000 |-+ O O OOO O OOO OOOO |
61000 |-+ |
| O OOO OO |
60000 |-+ OO |
59000 +-------------------------------------------------------------------+
reaim.jobs_per_min_child
17500 +-------------------------------------------------------------------+
| |
17000 |-.+++. + + .+++.+ .++ + .+++.+ |
|+ + ++.+++.+ + +++.+++.+++.+++.++++ +.+ + + +.+++.+|
| + |
16500 |-+ |
| |
16000 |-+ |
| |
15500 |O+OOO OOOO OOO OOO OOO OOOO |
| |
| O O OO |
15000 |-+ OOO O |
| |
14500 +-------------------------------------------------------------------+
reaim.max_jobs_per_min
76000 +-------------------------------------------------------------------+
| |
74000 |-+ + + + |
| :: : :: |
| : : : :: : |
72000 |+.++ ++++.+++.+++.+++.++++.+++.+++.+++.++++.+++.+++.+ + +++.+++.+|
| |
70000 |-+ |
| |
68000 |-+ O |
| |
| |
66000 |O+O O OOOO OOO OOO OOO OOOO |
| |
64000 +-------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[sched/fair] 0b0695f2b3: phoronix-test-suite.compress-gzip.0.seconds 19.8% regression
by kernel test robot
Hi Vincent Guittot,
Below is the report, FYI.
Last year we reported an improvement, "[sched/fair] 0b0695f2b3:
vm-scalability.median 3.1% improvement", in link [1], but now we have found
this regression on pts.compress-gzip.
This seems to align with what is described in "[v4,00/10] sched/fair: rework
the CFS load balance" (link [2]), which showed that the reworked load balance
can have both positive and negative effects for different test suites.
Link [3] also notes that the patch set risks regressions.
We also confirmed this regression on another platform
(Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz with 8G memory); below is the data
(compress-gzip time in seconds, lower is better; fcf0553db6 is the parent of
the 0b0695f2b3 commit in question).
v5.4 4.1
fcf0553db6f4c79387864f6e4ab4a891601f395e 4.01
0b0695f2b34a4afa3f6e9aa1ff0e5336d8dad912 4.89
v5.5 5.18
v5.6 4.62
v5.7-rc2 4.53
v5.7-rc3 4.59
There seems to be some recovery on the latest kernels, but performance is not
fully back.
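As a quick sanity check (not part of the lkp output), the quoted percentage
changes follow directly from the raw seconds, for example:
echo "scale=4; (4.89 - 4.01) / 4.01" | bc   # i7-8700 data above: .2194, about +22%
echo "scale=4; (7.20 - 6.01) / 6.01" | bc   # E-2278G report below: .1980, i.e. the +19.8%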
We were wondering whether you could shed some light on the further load-balance
work planned after patch set [2] that could explain this performance change,
and whether you plan to refine the load-balance algorithm further?
Thanks
[1] https://lists.01.org/hyperkitty/list/lkp@lists.01.org/thread/SANC7QLYZKUN...
[2] https://lore.kernel.org/patchwork/cover/1141687/
[3] https://www.phoronix.com/scan.php?page=news_item&px=Linux-5.5-Scheduler
Below is the detailed regression report, FYI.
Greeting,
FYI, we noticed a 19.8% regression of phoronix-test-suite.compress-gzip.0.seconds due to commit:
commit: 0b0695f2b34a4afa3f6e9aa1ff0e5336d8dad912 ("sched/fair: Rework load_balance()")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: phoronix-test-suite
on test machine: 16 threads Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz with 32G memory
with following parameters:
test: compress-gzip-1.2.0
cpufreq_governor: performance
ucode: 0xca
test-description: The Phoronix Test Suite is the most comprehensive testing and benchmarking platform available that provides an extensible framework for which new tests can be easily added.
test-url: http://www.phoronix-test-suite.com/
In addition to that, the commit also has significant impact on the following tests:
+------------------+-----------------------------------------------------------------------+
| testcase: change | phoronix-test-suite: |
| test machine | 12 threads Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz with 8G memory |
| test parameters | cpufreq_governor=performance |
| | test=compress-gzip-1.2.0 |
+------------------+-----------------------------------------------------------------------+
| testcase: change | vm-scalability: vm-scalability.median 3.1% improvement |
| test machine | 104 threads Skylake with 192G memory |
| test parameters | cpufreq_governor=performance |
| | runtime=300s |
| | size=8T |
| | test=anon-cow-seq |
| | ucode=0x2000064 |
+------------------+-----------------------------------------------------------------------+
| testcase: change | stress-ng: stress-ng.fault.ops_per_sec -23.1% regression |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters | class=scheduler |
| | cpufreq_governor=performance |
| | disk=1HDD |
| | nr_threads=100% |
| | sc_pid_max=4194304 |
| | testtime=1s |
| | ucode=0xb000038 |
+------------------+-----------------------------------------------------------------------+
| testcase: change | stress-ng: stress-ng.schedpolicy.ops_per_sec -33.3% regression |
| test machine | 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 192G memory |
| test parameters | class=interrupt |
| | cpufreq_governor=performance |
| | disk=1HDD |
| | nr_threads=100% |
| | testtime=1s |
| | ucode=0x500002c |
+------------------+-----------------------------------------------------------------------+
| testcase: change | stress-ng: stress-ng.schedpolicy.ops_per_sec 42.3% improvement |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters | class=interrupt |
| | cpufreq_governor=performance |
| | disk=1HDD |
| | nr_threads=100% |
| | testtime=30s |
| | ucode=0xb000038 |
+------------------+-----------------------------------------------------------------------+
| testcase: change | stress-ng: stress-ng.schedpolicy.ops_per_sec 35.1% improvement |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters | class=interrupt |
| | cpufreq_governor=performance |
| | disk=1HDD |
| | nr_threads=100% |
| | testtime=1s |
| | ucode=0xb000038 |
+------------------+-----------------------------------------------------------------------+
| testcase: change | stress-ng: stress-ng.ioprio.ops_per_sec -20.7% regression |
| test machine | 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 192G memory |
| test parameters | class=os |
| | cpufreq_governor=performance |
| | disk=1HDD |
| | fs=ext4 |
| | nr_threads=100% |
| | testtime=1s |
| | ucode=0x500002b |
+------------------+-----------------------------------------------------------------------+
| testcase: change | stress-ng: stress-ng.schedpolicy.ops_per_sec 43.0% improvement |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters | class=interrupt |
| | cpufreq_governor=performance |
| | disk=1HDD |
| | nr_threads=100% |
| | testtime=30s |
| | ucode=0xb000038 |
+------------------+-----------------------------------------------------------------------+
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <oliver.sang(a)intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-lck-7983/clear-x86_64-phoronix-30140/lkp-cfl-e1/compress-gzip-1.2.0/phoronix-test-suite/0xca
commit:
fcf0553db6 ("sched/fair: Remove meaningless imbalance calculation")
0b0695f2b3 ("sched/fair: Rework load_balance()")
fcf0553db6f4c793 0b0695f2b34a4afa3f6e9aa1ff0
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:4 4% 0:7 perf-profile.children.cycles-pp.error_entry
%stddev %change %stddev
\ | \
6.01 +19.8% 7.20 phoronix-test-suite.compress-gzip.0.seconds
147.57 ± 8% +25.1% 184.54 phoronix-test-suite.time.elapsed_time
147.57 ± 8% +25.1% 184.54 phoronix-test-suite.time.elapsed_time.max
52926 ± 8% -23.8% 40312 meminfo.max_used_kB
0.11 ± 7% -0.0 0.09 ± 3% mpstat.cpu.all.soft%
242384 -1.4% 238931 proc-vmstat.nr_inactive_anon
242384 -1.4% 238931 proc-vmstat.nr_zone_inactive_anon
1.052e+08 ± 27% +56.5% 1.647e+08 ± 10% cpuidle.C1E.time
1041078 ± 22% +54.7% 1610786 ± 7% cpuidle.C1E.usage
3.414e+08 ± 6% +57.6% 5.381e+08 ± 28% cpuidle.C6.time
817897 ± 3% +50.1% 1227607 ± 11% cpuidle.C6.usage
2884 -4.2% 2762 turbostat.Avg_MHz
1041024 ± 22% +54.7% 1610657 ± 7% turbostat.C1E
817802 ± 3% +50.1% 1227380 ± 11% turbostat.C6
66.75 -2.0% 65.42 turbostat.CorWatt
67.28 -2.0% 65.94 turbostat.PkgWatt
32.50 +6.2% 34.50 vmstat.cpu.id
62.50 -2.4% 61.00 vmstat.cpu.us
2443 ± 2% -28.9% 1738 ± 2% vmstat.io.bi
23765 ± 4% +16.5% 27685 vmstat.system.cs
37860 -7.1% 35180 ± 2% vmstat.system.in
3.474e+09 ± 3% -12.7% 3.032e+09 perf-stat.i.branch-instructions
1.344e+08 ± 2% -11.6% 1.188e+08 perf-stat.i.branch-misses
13033225 ± 4% -19.0% 10561032 perf-stat.i.cache-misses
5.105e+08 ± 3% -15.3% 4.322e+08 perf-stat.i.cache-references
24205 ± 4% +16.3% 28161 perf-stat.i.context-switches
30.25 ± 2% +39.7% 42.27 ± 2% perf-stat.i.cpi
4.63e+10 -4.7% 4.412e+10 perf-stat.i.cpu-cycles
3147 ± 4% -8.4% 2882 ± 2% perf-stat.i.cpu-migrations
16724 ± 2% +45.9% 24406 ± 5% perf-stat.i.cycles-between-cache-misses
0.18 ± 13% -0.1 0.12 ± 4% perf-stat.i.dTLB-load-miss-rate%
4.822e+09 ± 3% -11.9% 4.248e+09 perf-stat.i.dTLB-loads
0.07 ± 8% -0.0 0.05 ± 16% perf-stat.i.dTLB-store-miss-rate%
1.623e+09 ± 2% -11.5% 1.436e+09 perf-stat.i.dTLB-stores
1007120 ± 3% -8.9% 917854 ± 2% perf-stat.i.iTLB-load-misses
1.816e+10 ± 3% -12.2% 1.594e+10 perf-stat.i.instructions
2.06 ± 54% -66.0% 0.70 perf-stat.i.major-faults
29896 ± 13% -35.2% 19362 ± 8% perf-stat.i.minor-faults
0.00 ± 9% -0.0 0.00 ± 6% perf-stat.i.node-load-miss-rate%
1295134 ± 3% -14.2% 1111173 perf-stat.i.node-loads
3064949 ± 4% -18.7% 2491063 ± 2% perf-stat.i.node-stores
29898 ± 13% -35.2% 19363 ± 8% perf-stat.i.page-faults
28.10 -3.5% 27.12 perf-stat.overall.MPKI
2.55 -0.1 2.44 ± 2% perf-stat.overall.cache-miss-rate%
2.56 ± 3% +8.5% 2.77 perf-stat.overall.cpi
3567 ± 5% +17.3% 4186 perf-stat.overall.cycles-between-cache-misses
0.02 ± 3% +0.0 0.02 ± 3% perf-stat.overall.dTLB-load-miss-rate%
18031 -3.6% 17375 ± 2% perf-stat.overall.instructions-per-iTLB-miss
0.39 ± 3% -7.9% 0.36 perf-stat.overall.ipc
3.446e+09 ± 3% -12.6% 3.011e+09 perf-stat.ps.branch-instructions
1.333e+08 ± 2% -11.5% 1.18e+08 perf-stat.ps.branch-misses
12927998 ± 4% -18.8% 10491818 perf-stat.ps.cache-misses
5.064e+08 ± 3% -15.2% 4.293e+08 perf-stat.ps.cache-references
24011 ± 4% +16.5% 27973 perf-stat.ps.context-switches
4.601e+10 -4.6% 4.391e+10 perf-stat.ps.cpu-cycles
3121 ± 4% -8.3% 2863 ± 2% perf-stat.ps.cpu-migrations
4.783e+09 ± 3% -11.8% 4.219e+09 perf-stat.ps.dTLB-loads
1.61e+09 ± 2% -11.4% 1.426e+09 perf-stat.ps.dTLB-stores
999100 ± 3% -8.7% 911974 ± 2% perf-stat.ps.iTLB-load-misses
1.802e+10 ± 3% -12.1% 1.584e+10 perf-stat.ps.instructions
2.04 ± 54% -65.9% 0.70 perf-stat.ps.major-faults
29656 ± 13% -35.1% 19237 ± 8% perf-stat.ps.minor-faults
1284601 ± 3% -14.1% 1103823 perf-stat.ps.node-loads
3039931 ± 4% -18.6% 2474451 ± 2% perf-stat.ps.node-stores
29658 ± 13% -35.1% 19238 ± 8% perf-stat.ps.page-faults
50384 ± 2% +16.5% 58713 ± 4% softirqs.CPU0.RCU
33143 ± 2% +19.9% 39731 ± 2% softirqs.CPU0.SCHED
72672 +24.0% 90109 softirqs.CPU0.TIMER
22182 ± 4% +26.3% 28008 ± 4% softirqs.CPU1.SCHED
74465 ± 4% +26.3% 94027 ± 3% softirqs.CPU1.TIMER
18680 ± 7% +29.2% 24135 ± 3% softirqs.CPU10.SCHED
75941 ± 2% +21.8% 92486 ± 7% softirqs.CPU10.TIMER
48991 ± 4% +22.7% 60105 ± 5% softirqs.CPU11.RCU
18666 ± 6% +28.4% 23976 ± 4% softirqs.CPU11.SCHED
74896 ± 6% +24.4% 93173 ± 3% softirqs.CPU11.TIMER
49490 +20.5% 59659 ± 2% softirqs.CPU12.RCU
18973 ± 7% +26.0% 23909 ± 3% softirqs.CPU12.SCHED
50620 +19.9% 60677 ± 6% softirqs.CPU13.RCU
19136 ± 6% +23.2% 23577 ± 4% softirqs.CPU13.SCHED
74812 +33.3% 99756 ± 7% softirqs.CPU13.TIMER
50824 +15.9% 58881 ± 3% softirqs.CPU14.RCU
19550 ± 5% +24.1% 24270 ± 4% softirqs.CPU14.SCHED
76801 +22.8% 94309 ± 4% softirqs.CPU14.TIMER
51844 +11.5% 57795 ± 3% softirqs.CPU15.RCU
19204 ± 8% +28.4% 24662 ± 2% softirqs.CPU15.SCHED
74751 +29.9% 97127 ± 3% softirqs.CPU15.TIMER
50307 +17.4% 59062 ± 4% softirqs.CPU2.RCU
22150 +12.2% 24848 softirqs.CPU2.SCHED
79653 ± 2% +21.6% 96829 ± 10% softirqs.CPU2.TIMER
50833 +21.1% 61534 ± 4% softirqs.CPU3.RCU
18935 ± 2% +32.0% 25002 ± 3% softirqs.CPU3.SCHED
50569 +15.8% 58570 ± 4% softirqs.CPU4.RCU
20509 ± 5% +18.3% 24271 softirqs.CPU4.SCHED
80942 ± 2% +15.3% 93304 ± 3% softirqs.CPU4.TIMER
50692 +16.5% 59067 ± 4% softirqs.CPU5.RCU
20237 ± 3% +18.2% 23914 ± 3% softirqs.CPU5.SCHED
78963 +21.8% 96151 ± 2% softirqs.CPU5.TIMER
19709 ± 7% +20.1% 23663 softirqs.CPU6.SCHED
81250 +15.9% 94185 softirqs.CPU6.TIMER
51379 +15.0% 59108 softirqs.CPU7.RCU
19642 ± 5% +28.4% 25227 ± 3% softirqs.CPU7.SCHED
78299 ± 2% +30.3% 102021 ± 4% softirqs.CPU7.TIMER
49723 +19.0% 59169 ± 4% softirqs.CPU8.RCU
20138 ± 6% +21.7% 24501 ± 2% softirqs.CPU8.SCHED
75256 ± 3% +22.8% 92419 ± 2% softirqs.CPU8.TIMER
50406 ± 2% +17.4% 59178 ± 4% softirqs.CPU9.RCU
19182 ± 9% +24.2% 23831 ± 6% softirqs.CPU9.SCHED
73572 ± 5% +30.4% 95951 ± 8% softirqs.CPU9.TIMER
812363 +16.6% 946858 ± 3% softirqs.RCU
330042 ± 4% +23.5% 407533 softirqs.SCHED
1240046 +22.5% 1519539 softirqs.TIMER
251015 ± 21% -84.2% 39587 ±106% sched_debug.cfs_rq:/.MIN_vruntime.avg
537847 ± 4% -44.8% 297100 ± 66% sched_debug.cfs_rq:/.MIN_vruntime.max
257990 ± 5% -63.4% 94515 ± 82% sched_debug.cfs_rq:/.MIN_vruntime.stddev
38935 +47.9% 57601 sched_debug.cfs_rq:/.exec_clock.avg
44119 +40.6% 62013 sched_debug.cfs_rq:/.exec_clock.max
37622 +49.9% 56404 sched_debug.cfs_rq:/.exec_clock.min
47287 ± 7% -70.3% 14036 ± 88% sched_debug.cfs_rq:/.load.min
67.17 -52.9% 31.62 ± 31% sched_debug.cfs_rq:/.load_avg.min
251015 ± 21% -84.2% 39588 ±106% sched_debug.cfs_rq:/.max_vruntime.avg
537847 ± 4% -44.8% 297103 ± 66% sched_debug.cfs_rq:/.max_vruntime.max
257991 ± 5% -63.4% 94516 ± 82% sched_debug.cfs_rq:/.max_vruntime.stddev
529078 ± 3% +45.2% 768398 sched_debug.cfs_rq:/.min_vruntime.avg
547175 ± 2% +44.1% 788582 sched_debug.cfs_rq:/.min_vruntime.max
496420 +48.3% 736148 ± 2% sched_debug.cfs_rq:/.min_vruntime.min
1.33 ± 15% -44.0% 0.75 ± 32% sched_debug.cfs_rq:/.nr_running.avg
0.83 ± 20% -70.0% 0.25 ± 70% sched_debug.cfs_rq:/.nr_running.min
0.54 ± 8% -15.9% 0.45 ± 7% sched_debug.cfs_rq:/.nr_running.stddev
0.33 +62.9% 0.54 ± 8% sched_debug.cfs_rq:/.nr_spread_over.avg
1.33 +54.7% 2.06 ± 17% sched_debug.cfs_rq:/.nr_spread_over.max
0.44 ± 7% +56.4% 0.69 ± 6% sched_debug.cfs_rq:/.nr_spread_over.stddev
130.83 ± 14% -25.6% 97.37 ± 15% sched_debug.cfs_rq:/.runnable_load_avg.avg
45.33 ± 6% -79.3% 9.38 ± 70% sched_debug.cfs_rq:/.runnable_load_avg.min
47283 ± 7% -70.9% 13741 ± 89% sched_debug.cfs_rq:/.runnable_weight.min
1098 ± 8% -27.6% 795.02 ± 24% sched_debug.cfs_rq:/.util_avg.avg
757.50 ± 9% -51.3% 369.25 ± 10% sched_debug.cfs_rq:/.util_avg.min
762.39 ± 11% -44.4% 424.04 ± 46% sched_debug.cfs_rq:/.util_est_enqueued.avg
314.00 ± 18% -78.5% 67.38 ±100% sched_debug.cfs_rq:/.util_est_enqueued.min
142951 ± 5% +22.8% 175502 ± 3% sched_debug.cpu.avg_idle.avg
72112 -18.3% 58937 ± 13% sched_debug.cpu.avg_idle.stddev
127638 ± 7% +39.3% 177858 ± 5% sched_debug.cpu.clock.avg
127643 ± 7% +39.3% 177862 ± 5% sched_debug.cpu.clock.max
127633 ± 7% +39.3% 177855 ± 5% sched_debug.cpu.clock.min
126720 ± 7% +39.4% 176681 ± 5% sched_debug.cpu.clock_task.avg
127168 ± 7% +39.3% 177179 ± 5% sched_debug.cpu.clock_task.max
125240 ± 7% +39.5% 174767 ± 5% sched_debug.cpu.clock_task.min
563.60 ± 2% +25.9% 709.78 ± 9% sched_debug.cpu.clock_task.stddev
1.66 ± 18% -37.5% 1.04 ± 32% sched_debug.cpu.nr_running.avg
0.83 ± 20% -62.5% 0.31 ± 87% sched_debug.cpu.nr_running.min
127617 ± 3% +52.9% 195080 sched_debug.cpu.nr_switches.avg
149901 ± 6% +45.2% 217652 sched_debug.cpu.nr_switches.max
108182 ± 5% +61.6% 174808 sched_debug.cpu.nr_switches.min
0.20 ± 5% -62.5% 0.07 ± 67% sched_debug.cpu.nr_uninterruptible.avg
-29.33 -13.5% -25.38 sched_debug.cpu.nr_uninterruptible.min
92666 ± 8% +66.8% 154559 sched_debug.cpu.sched_count.avg
104565 ± 11% +57.2% 164374 sched_debug.cpu.sched_count.max
80272 ± 10% +77.2% 142238 sched_debug.cpu.sched_count.min
38029 ± 10% +80.4% 68608 sched_debug.cpu.sched_goidle.avg
43413 ± 11% +68.5% 73149 sched_debug.cpu.sched_goidle.max
32420 ± 11% +94.5% 63069 sched_debug.cpu.sched_goidle.min
51567 ± 8% +60.7% 82878 sched_debug.cpu.ttwu_count.avg
79015 ± 9% +45.2% 114717 ± 4% sched_debug.cpu.ttwu_count.max
42919 ± 9% +63.3% 70086 sched_debug.cpu.ttwu_count.min
127632 ± 7% +39.3% 177854 ± 5% sched_debug.cpu_clk
125087 ± 7% +40.1% 175285 ± 5% sched_debug.ktime
127882 ± 6% +39.3% 178163 ± 5% sched_debug.sched_clk
146.00 ± 13% +902.9% 1464 ±143% interrupts.133:IR-PCI-MSI.2097153-edge.eth1-TxRx-0
3375 ± 93% -94.8% 174.75 ± 26% interrupts.134:IR-PCI-MSI.2097154-edge.eth1-TxRx-1
297595 ± 8% +22.8% 365351 ± 2% interrupts.CPU0.LOC:Local_timer_interrupts
8402 -21.7% 6577 ± 25% interrupts.CPU0.NMI:Non-maskable_interrupts
8402 -21.7% 6577 ± 25% interrupts.CPU0.PMI:Performance_monitoring_interrupts
937.00 ± 2% +18.1% 1106 ± 11% interrupts.CPU0.RES:Rescheduling_interrupts
146.00 ± 13% +902.9% 1464 ±143% interrupts.CPU1.133:IR-PCI-MSI.2097153-edge.eth1-TxRx-0
297695 ± 8% +22.7% 365189 ± 2% interrupts.CPU1.LOC:Local_timer_interrupts
8412 -20.9% 6655 ± 25% interrupts.CPU1.NMI:Non-maskable_interrupts
8412 -20.9% 6655 ± 25% interrupts.CPU1.PMI:Performance_monitoring_interrupts
297641 ± 8% +22.7% 365268 ± 2% interrupts.CPU10.LOC:Local_timer_interrupts
8365 -10.9% 7455 ± 3% interrupts.CPU10.NMI:Non-maskable_interrupts
8365 -10.9% 7455 ± 3% interrupts.CPU10.PMI:Performance_monitoring_interrupts
297729 ± 8% +22.7% 365238 ± 2% interrupts.CPU11.LOC:Local_timer_interrupts
8376 -21.8% 6554 ± 26% interrupts.CPU11.NMI:Non-maskable_interrupts
8376 -21.8% 6554 ± 26% interrupts.CPU11.PMI:Performance_monitoring_interrupts
297394 ± 8% +22.8% 365274 ± 2% interrupts.CPU12.LOC:Local_timer_interrupts
8393 -10.5% 7512 ± 3% interrupts.CPU12.NMI:Non-maskable_interrupts
8393 -10.5% 7512 ± 3% interrupts.CPU12.PMI:Performance_monitoring_interrupts
297744 ± 8% +22.7% 365243 ± 2% interrupts.CPU13.LOC:Local_timer_interrupts
8353 -10.6% 7469 ± 3% interrupts.CPU13.NMI:Non-maskable_interrupts
8353 -10.6% 7469 ± 3% interrupts.CPU13.PMI:Performance_monitoring_interrupts
148.50 ± 17% -24.2% 112.50 ± 8% interrupts.CPU13.TLB:TLB_shootdowns
297692 ± 8% +22.7% 365311 ± 2% interrupts.CPU14.LOC:Local_timer_interrupts
8374 -10.4% 7501 ± 4% interrupts.CPU14.NMI:Non-maskable_interrupts
8374 -10.4% 7501 ± 4% interrupts.CPU14.PMI:Performance_monitoring_interrupts
297453 ± 8% +22.8% 365311 ± 2% interrupts.CPU15.LOC:Local_timer_interrupts
8336 -22.8% 6433 ± 26% interrupts.CPU15.NMI:Non-maskable_interrupts
8336 -22.8% 6433 ± 26% interrupts.CPU15.PMI:Performance_monitoring_interrupts
699.50 ± 21% +51.3% 1058 ± 10% interrupts.CPU15.RES:Rescheduling_interrupts
3375 ± 93% -94.8% 174.75 ± 26% interrupts.CPU2.134:IR-PCI-MSI.2097154-edge.eth1-TxRx-1
297685 ± 8% +22.7% 365273 ± 2% interrupts.CPU2.LOC:Local_timer_interrupts
8357 -21.2% 6584 ± 25% interrupts.CPU2.NMI:Non-maskable_interrupts
8357 -21.2% 6584 ± 25% interrupts.CPU2.PMI:Performance_monitoring_interrupts
164.00 ± 30% -23.0% 126.25 ± 32% interrupts.CPU2.TLB:TLB_shootdowns
297352 ± 8% +22.9% 365371 ± 2% interrupts.CPU3.LOC:Local_timer_interrupts
8383 -10.6% 7493 ± 4% interrupts.CPU3.NMI:Non-maskable_interrupts
8383 -10.6% 7493 ± 4% interrupts.CPU3.PMI:Performance_monitoring_interrupts
780.50 ± 3% +32.7% 1035 ± 6% interrupts.CPU3.RES:Rescheduling_interrupts
297595 ± 8% +22.8% 365415 ± 2% interrupts.CPU4.LOC:Local_timer_interrupts
8382 -21.4% 6584 ± 25% interrupts.CPU4.NMI:Non-maskable_interrupts
8382 -21.4% 6584 ± 25% interrupts.CPU4.PMI:Performance_monitoring_interrupts
297720 ± 8% +22.7% 365347 ± 2% interrupts.CPU5.LOC:Local_timer_interrupts
8353 -32.0% 5679 ± 34% interrupts.CPU5.NMI:Non-maskable_interrupts
8353 -32.0% 5679 ± 34% interrupts.CPU5.PMI:Performance_monitoring_interrupts
727.00 ± 16% +98.3% 1442 ± 47% interrupts.CPU5.RES:Rescheduling_interrupts
297620 ± 8% +22.8% 365343 ± 2% interrupts.CPU6.LOC:Local_timer_interrupts
8388 -10.3% 7526 ± 4% interrupts.CPU6.NMI:Non-maskable_interrupts
8388 -10.3% 7526 ± 4% interrupts.CPU6.PMI:Performance_monitoring_interrupts
156.50 ± 3% -27.6% 113.25 ± 16% interrupts.CPU6.TLB:TLB_shootdowns
297690 ± 8% +22.7% 365363 ± 2% interrupts.CPU7.LOC:Local_timer_interrupts
8390 -23.1% 6449 ± 25% interrupts.CPU7.NMI:Non-maskable_interrupts
8390 -23.1% 6449 ± 25% interrupts.CPU7.PMI:Performance_monitoring_interrupts
918.00 ± 16% +49.4% 1371 ± 7% interrupts.CPU7.RES:Rescheduling_interrupts
120.00 ± 35% +70.8% 205.00 ± 17% interrupts.CPU7.TLB:TLB_shootdowns
297731 ± 8% +22.7% 365368 ± 2% interrupts.CPU8.LOC:Local_timer_interrupts
8393 -32.5% 5668 ± 35% interrupts.CPU8.NMI:Non-maskable_interrupts
8393 -32.5% 5668 ± 35% interrupts.CPU8.PMI:Performance_monitoring_interrupts
297779 ± 8% +22.7% 365399 ± 2% interrupts.CPU9.LOC:Local_timer_interrupts
8430 -10.8% 7517 ± 2% interrupts.CPU9.NMI:Non-maskable_interrupts
8430 -10.8% 7517 ± 2% interrupts.CPU9.PMI:Performance_monitoring_interrupts
956.50 +13.5% 1085 ± 4% interrupts.CPU9.RES:Rescheduling_interrupts
4762118 ± 8% +22.7% 5845069 ± 2% interrupts.LOC:Local_timer_interrupts
134093 -18.2% 109662 ± 11% interrupts.NMI:Non-maskable_interrupts
134093 -18.2% 109662 ± 11% interrupts.PMI:Performance_monitoring_interrupts
66.97 ± 9% -29.9 37.12 ± 49% perf-profile.calltrace.cycles-pp.deflate
66.67 ± 9% -29.7 36.97 ± 50% perf-profile.calltrace.cycles-pp.deflate_medium.deflate
43.24 ± 9% -18.6 24.61 ± 49% perf-profile.calltrace.cycles-pp.longest_match.deflate_medium.deflate
2.29 ± 14% -1.2 1.05 ± 58% perf-profile.calltrace.cycles-pp.deflateSetDictionary
0.74 ± 6% -0.5 0.27 ±100% perf-profile.calltrace.cycles-pp.read.__libc_start_main
0.74 ± 7% -0.5 0.27 ±100% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read.__libc_start_main
0.73 ± 7% -0.5 0.27 ±100% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read.__libc_start_main
0.73 ± 7% -0.5 0.27 ±100% perf-profile.calltrace.cycles-pp.__x64_sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read.__libc_start_main
0.73 ± 7% -0.5 0.27 ±100% perf-profile.calltrace.cycles-pp.ksys_read.__x64_sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
0.26 ±100% +0.6 0.88 ± 45% perf-profile.calltrace.cycles-pp.vfs_statx.__do_sys_newfstatat.__x64_sys_newfstatat.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.34 ±100% +0.7 1.02 ± 42% perf-profile.calltrace.cycles-pp.do_sys_open.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.28 ±100% +0.7 0.96 ± 44% perf-profile.calltrace.cycles-pp.__do_sys_newfstatat.__x64_sys_newfstatat.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.28 ±100% +0.7 0.96 ± 44% perf-profile.calltrace.cycles-pp.__x64_sys_newfstatat.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.34 ±100% +0.7 1.03 ± 42% perf-profile.calltrace.cycles-pp.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.8 0.77 ± 35% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
0.56 ± 9% +0.8 1.40 ± 45% perf-profile.calltrace.cycles-pp.__schedule.schedule.futex_wait_queue_me.futex_wait.do_futex
0.58 ± 10% +0.9 1.43 ± 45% perf-profile.calltrace.cycles-pp.schedule.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex
0.33 ±100% +0.9 1.21 ± 28% perf-profile.calltrace.cycles-pp.menu_select.cpuidle_select.do_idle.cpu_startup_entry.start_secondary
0.34 ± 99% +0.9 1.27 ± 30% perf-profile.calltrace.cycles-pp.cpuidle_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
0.00 +1.0 0.96 ± 62% perf-profile.calltrace.cycles-pp.poll_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
0.62 ± 9% +1.0 1.60 ± 52% perf-profile.calltrace.cycles-pp.futex_wait_queue_me.futex_wait.do_futex.__x64_sys_futex.do_syscall_64
0.68 ± 10% +1.0 1.73 ± 51% perf-profile.calltrace.cycles-pp.futex_wait.do_futex.__x64_sys_futex.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.46 ±100% +1.1 1.60 ± 43% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle
0.47 ±100% +1.2 1.62 ± 43% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
17.73 ± 21% +19.1 36.84 ± 25% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
17.75 ± 20% +19.9 37.63 ± 26% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
17.84 ± 20% +20.0 37.82 ± 26% perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
18.83 ± 20% +21.2 40.05 ± 27% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
18.89 ± 20% +21.2 40.11 ± 27% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
18.89 ± 20% +21.2 40.12 ± 27% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
20.14 ± 20% +22.5 42.66 ± 27% perf-profile.calltrace.cycles-pp.secondary_startup_64
66.97 ± 9% -29.9 37.12 ± 49% perf-profile.children.cycles-pp.deflate
66.83 ± 9% -29.8 37.06 ± 49% perf-profile.children.cycles-pp.deflate_medium
43.58 ± 9% -18.8 24.80 ± 49% perf-profile.children.cycles-pp.longest_match
2.29 ± 14% -1.2 1.12 ± 43% perf-profile.children.cycles-pp.deflateSetDictionary
0.84 -0.3 0.58 ± 19% perf-profile.children.cycles-pp.read
0.52 ± 13% -0.2 0.27 ± 43% perf-profile.children.cycles-pp.fill_window
0.06 +0.0 0.08 ± 13% perf-profile.children.cycles-pp.update_stack_state
0.07 ± 14% +0.0 0.11 ± 24% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
0.03 ±100% +0.1 0.09 ± 19% perf-profile.children.cycles-pp.find_next_and_bit
0.00 +0.1 0.06 ± 15% perf-profile.children.cycles-pp.refcount_inc_not_zero_checked
0.03 ±100% +0.1 0.08 ± 33% perf-profile.children.cycles-pp.free_pcppages_bulk
0.07 ± 7% +0.1 0.12 ± 36% perf-profile.children.cycles-pp.syscall_return_via_sysret
0.00 +0.1 0.06 ± 28% perf-profile.children.cycles-pp.rb_erase
0.03 ±100% +0.1 0.09 ± 24% perf-profile.children.cycles-pp.shmem_undo_range
0.03 ±100% +0.1 0.09 ± 28% perf-profile.children.cycles-pp.unlinkat
0.03 ±100% +0.1 0.09 ± 28% perf-profile.children.cycles-pp.__x64_sys_unlinkat
0.03 ±100% +0.1 0.09 ± 28% perf-profile.children.cycles-pp.do_unlinkat
0.03 ±100% +0.1 0.09 ± 28% perf-profile.children.cycles-pp.ovl_destroy_inode
0.03 ±100% +0.1 0.09 ± 28% perf-profile.children.cycles-pp.shmem_evict_inode
0.03 ±100% +0.1 0.09 ± 28% perf-profile.children.cycles-pp.shmem_truncate_range
0.05 +0.1 0.12 ± 38% perf-profile.children.cycles-pp.unmap_vmas
0.00 +0.1 0.07 ± 30% perf-profile.children.cycles-pp.timerqueue_del
0.00 +0.1 0.07 ± 26% perf-profile.children.cycles-pp.idle_cpu
0.09 ± 17% +0.1 0.15 ± 19% perf-profile.children.cycles-pp.__update_load_avg_se
0.00 +0.1 0.07 ± 33% perf-profile.children.cycles-pp.unmap_region
0.00 +0.1 0.07 ± 33% perf-profile.children.cycles-pp.__alloc_fd
0.03 ±100% +0.1 0.10 ± 31% perf-profile.children.cycles-pp.destroy_inode
0.03 ±100% +0.1 0.10 ± 30% perf-profile.children.cycles-pp.evict
0.06 ± 16% +0.1 0.13 ± 35% perf-profile.children.cycles-pp.ovl_override_creds
0.00 +0.1 0.07 ± 26% perf-profile.children.cycles-pp.kernel_text_address
0.00 +0.1 0.07 ± 41% perf-profile.children.cycles-pp.file_remove_privs
0.07 ± 23% +0.1 0.14 ± 47% perf-profile.children.cycles-pp.hrtimer_next_event_without
0.03 ±100% +0.1 0.10 ± 24% perf-profile.children.cycles-pp.__dentry_kill
0.03 ±100% +0.1 0.10 ± 29% perf-profile.children.cycles-pp.dentry_unlink_inode
0.03 ±100% +0.1 0.10 ± 29% perf-profile.children.cycles-pp.iput
0.03 ±100% +0.1 0.10 ± 54% perf-profile.children.cycles-pp.__close_fd
0.08 ± 25% +0.1 0.15 ± 35% perf-profile.children.cycles-pp.__switch_to
0.00 +0.1 0.07 ± 29% perf-profile.children.cycles-pp.__switch_to_asm
0.00 +0.1 0.08 ± 24% perf-profile.children.cycles-pp.__kernel_text_address
0.03 ±100% +0.1 0.11 ± 51% perf-profile.children.cycles-pp.enqueue_hrtimer
0.03 ±100% +0.1 0.11 ± 33% perf-profile.children.cycles-pp.rcu_gp_kthread_wake
0.03 ±100% +0.1 0.11 ± 33% perf-profile.children.cycles-pp.swake_up_one
0.03 ±100% +0.1 0.11 ± 33% perf-profile.children.cycles-pp.swake_up_locked
0.10 ± 30% +0.1 0.18 ± 17% perf-profile.children.cycles-pp.irqtime_account_irq
0.03 ±100% +0.1 0.11 ± 38% perf-profile.children.cycles-pp.unmap_page_range
0.00 +0.1 0.09 ± 37% perf-profile.children.cycles-pp.putname
0.03 ±100% +0.1 0.11 ± 28% perf-profile.children.cycles-pp.filemap_map_pages
0.07 ± 28% +0.1 0.16 ± 35% perf-profile.children.cycles-pp.getname
0.03 ±100% +0.1 0.11 ± 40% perf-profile.children.cycles-pp.unmap_single_vma
0.08 ± 29% +0.1 0.17 ± 38% perf-profile.children.cycles-pp.queued_spin_lock_slowpath
0.03 ±100% +0.1 0.12 ± 54% perf-profile.children.cycles-pp.setlocale
0.03 ±100% +0.1 0.12 ± 60% perf-profile.children.cycles-pp.__open64_nocancel
0.00 +0.1 0.09 ± 31% perf-profile.children.cycles-pp.__hrtimer_get_next_event
0.00 +0.1 0.10 ± 28% perf-profile.children.cycles-pp.get_unused_fd_flags
0.00 +0.1 0.10 ± 65% perf-profile.children.cycles-pp.timerqueue_add
0.07 ± 28% +0.1 0.17 ± 42% perf-profile.children.cycles-pp.__hrtimer_next_event_base
0.03 ±100% +0.1 0.12 ± 51% perf-profile.children.cycles-pp.__x64_sys_close
0.00 +0.1 0.10 ± 38% perf-profile.children.cycles-pp.do_lookup_x
0.03 ±100% +0.1 0.12 ± 23% perf-profile.children.cycles-pp.kmem_cache_free
0.04 ±100% +0.1 0.14 ± 53% perf-profile.children.cycles-pp.__do_munmap
0.00 +0.1 0.10 ± 35% perf-profile.children.cycles-pp.unwind_get_return_address
0.00 +0.1 0.10 ± 49% perf-profile.children.cycles-pp.shmem_add_to_page_cache
0.07 ± 20% +0.1 0.18 ± 25% perf-profile.children.cycles-pp.find_next_bit
0.08 ± 25% +0.1 0.18 ± 34% perf-profile.children.cycles-pp.dput
0.11 ± 33% +0.1 0.21 ± 37% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.08 ± 5% +0.1 0.19 ± 27% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.00 +0.1 0.11 ± 52% perf-profile.children.cycles-pp.rcu_idle_exit
0.03 ±100% +0.1 0.14 ± 18% perf-profile.children.cycles-pp.entry_SYSCALL_64
0.08 +0.1 0.19 ± 43% perf-profile.children.cycles-pp.exit_mmap
0.09 ± 22% +0.1 0.20 ± 57% perf-profile.children.cycles-pp.set_next_entity
0.07 ± 7% +0.1 0.18 ± 60% perf-profile.children.cycles-pp.switch_mm_irqs_off
0.10 ± 26% +0.1 0.21 ± 32% perf-profile.children.cycles-pp.sched_clock
0.12 ± 25% +0.1 0.23 ± 39% perf-profile.children.cycles-pp.update_cfs_group
0.07 ± 14% +0.1 0.18 ± 45% perf-profile.children.cycles-pp.lapic_next_deadline
0.08 ± 5% +0.1 0.20 ± 44% perf-profile.children.cycles-pp.mmput
0.11 ± 18% +0.1 0.23 ± 41% perf-profile.children.cycles-pp.clockevents_program_event
0.07 ± 7% +0.1 0.18 ± 40% perf-profile.children.cycles-pp.strncpy_from_user
0.00 +0.1 0.12 ± 48% perf-profile.children.cycles-pp.flush_old_exec
0.11 ± 18% +0.1 0.23 ± 37% perf-profile.children.cycles-pp.native_sched_clock
0.09 ± 17% +0.1 0.21 ± 46% perf-profile.children.cycles-pp._dl_sysdep_start
0.12 ± 19% +0.1 0.26 ± 37% perf-profile.children.cycles-pp.tick_program_event
0.09 ± 33% +0.1 0.23 ± 61% perf-profile.children.cycles-pp.mmap_region
0.14 ± 21% +0.1 0.28 ± 39% perf-profile.children.cycles-pp.sched_clock_cpu
0.11 ± 27% +0.1 0.25 ± 56% perf-profile.children.cycles-pp.do_mmap
0.11 ± 36% +0.1 0.26 ± 57% perf-profile.children.cycles-pp.ksys_mmap_pgoff
0.09 ± 17% +0.1 0.23 ± 48% perf-profile.children.cycles-pp.lookup_fast
0.04 ±100% +0.2 0.19 ± 48% perf-profile.children.cycles-pp.open_path
0.11 ± 30% +0.2 0.27 ± 58% perf-profile.children.cycles-pp.vm_mmap_pgoff
0.11 ± 27% +0.2 0.28 ± 37% perf-profile.children.cycles-pp.update_blocked_averages
0.11 +0.2 0.29 ± 38% perf-profile.children.cycles-pp.search_binary_handler
0.11 ± 4% +0.2 0.29 ± 39% perf-profile.children.cycles-pp.load_elf_binary
0.11 ± 30% +0.2 0.30 ± 50% perf-profile.children.cycles-pp.inode_permission
0.04 ±100% +0.2 0.24 ± 55% perf-profile.children.cycles-pp.__GI___open64_nocancel
0.15 ± 29% +0.2 0.35 ± 34% perf-profile.children.cycles-pp.getname_flags
0.16 ± 25% +0.2 0.38 ± 26% perf-profile.children.cycles-pp.get_next_timer_interrupt
0.18 ± 11% +0.2 0.41 ± 39% perf-profile.children.cycles-pp.execve
0.19 ± 5% +0.2 0.42 ± 37% perf-profile.children.cycles-pp.__x64_sys_execve
0.18 ± 2% +0.2 0.42 ± 37% perf-profile.children.cycles-pp.__do_execve_file
0.32 ± 18% +0.3 0.57 ± 33% perf-profile.children.cycles-pp.__account_scheduler_latency
0.21 ± 17% +0.3 0.48 ± 47% perf-profile.children.cycles-pp.schedule_idle
0.20 ± 19% +0.3 0.49 ± 33% perf-profile.children.cycles-pp.tick_nohz_next_event
0.21 ± 26% +0.3 0.55 ± 41% perf-profile.children.cycles-pp.find_busiest_group
0.32 ± 26% +0.3 0.67 ± 52% perf-profile.children.cycles-pp.__handle_mm_fault
0.22 ± 25% +0.4 0.57 ± 49% perf-profile.children.cycles-pp.filename_lookup
0.34 ± 27% +0.4 0.72 ± 50% perf-profile.children.cycles-pp.handle_mm_fault
0.42 ± 32% +0.4 0.80 ± 43% perf-profile.children.cycles-pp.shmem_getpage_gfp
0.36 ± 23% +0.4 0.77 ± 41% perf-profile.children.cycles-pp.load_balance
0.41 ± 30% +0.4 0.82 ± 50% perf-profile.children.cycles-pp.do_page_fault
0.39 ± 30% +0.4 0.80 ± 50% perf-profile.children.cycles-pp.__do_page_fault
0.28 ± 22% +0.4 0.70 ± 37% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
0.43 ± 31% +0.4 0.86 ± 49% perf-profile.children.cycles-pp.page_fault
0.31 ± 25% +0.5 0.77 ± 46% perf-profile.children.cycles-pp.user_path_at_empty
0.36 ± 20% +0.5 0.84 ± 34% perf-profile.children.cycles-pp.newidle_balance
0.45 ± 21% +0.5 0.95 ± 44% perf-profile.children.cycles-pp.vfs_statx
0.46 ± 20% +0.5 0.97 ± 43% perf-profile.children.cycles-pp.__do_sys_newfstatat
0.47 ± 20% +0.5 0.98 ± 44% perf-profile.children.cycles-pp.__x64_sys_newfstatat
0.29 ± 37% +0.5 0.81 ± 32% perf-profile.children.cycles-pp.io_serial_in
0.53 ± 25% +0.5 1.06 ± 49% perf-profile.children.cycles-pp.path_openat
0.55 ± 24% +0.5 1.09 ± 50% perf-profile.children.cycles-pp.do_filp_open
0.35 ± 2% +0.5 0.90 ± 60% perf-profile.children.cycles-pp.uart_console_write
0.35 ± 4% +0.6 0.91 ± 60% perf-profile.children.cycles-pp.console_unlock
0.35 ± 4% +0.6 0.91 ± 60% perf-profile.children.cycles-pp.univ8250_console_write
0.35 ± 4% +0.6 0.91 ± 60% perf-profile.children.cycles-pp.serial8250_console_write
0.82 ± 23% +0.6 1.42 ± 31% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.47 ± 28% +0.6 1.10 ± 39% perf-profile.children.cycles-pp.irq_work_interrupt
0.47 ± 28% +0.6 1.10 ± 39% perf-profile.children.cycles-pp.smp_irq_work_interrupt
0.47 ± 28% +0.6 1.10 ± 39% perf-profile.children.cycles-pp.irq_work_run
0.47 ± 28% +0.6 1.10 ± 39% perf-profile.children.cycles-pp.perf_duration_warn
0.47 ± 28% +0.6 1.10 ± 39% perf-profile.children.cycles-pp.printk
0.47 ± 28% +0.6 1.10 ± 39% perf-profile.children.cycles-pp.vprintk_func
0.47 ± 28% +0.6 1.10 ± 39% perf-profile.children.cycles-pp.vprintk_default
0.47 ± 28% +0.6 1.11 ± 39% perf-profile.children.cycles-pp.irq_work_run_list
0.49 ± 31% +0.6 1.13 ± 39% perf-profile.children.cycles-pp.vprintk_emit
0.54 ± 19% +0.6 1.17 ± 38% perf-profile.children.cycles-pp.pick_next_task_fair
0.32 ± 7% +0.7 1.02 ± 56% perf-profile.children.cycles-pp.poll_idle
0.60 ± 15% +0.7 1.31 ± 29% perf-profile.children.cycles-pp.menu_select
0.65 ± 27% +0.7 1.36 ± 45% perf-profile.children.cycles-pp.do_sys_open
0.62 ± 15% +0.7 1.36 ± 31% perf-profile.children.cycles-pp.cpuidle_select
0.66 ± 26% +0.7 1.39 ± 44% perf-profile.children.cycles-pp.__x64_sys_openat
1.11 ± 22% +0.9 2.03 ± 31% perf-profile.children.cycles-pp.hrtimer_interrupt
0.89 ± 24% +0.9 1.81 ± 42% perf-profile.children.cycles-pp.futex_wait_queue_me
1.16 ± 27% +1.0 2.13 ± 36% perf-profile.children.cycles-pp.schedule
0.97 ± 23% +1.0 1.97 ± 42% perf-profile.children.cycles-pp.futex_wait
1.33 ± 25% +1.2 2.55 ± 39% perf-profile.children.cycles-pp.__schedule
1.84 ± 26% +1.6 3.42 ± 36% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
1.76 ± 22% +1.6 3.41 ± 40% perf-profile.children.cycles-pp.do_futex
1.79 ± 22% +1.7 3.49 ± 41% perf-profile.children.cycles-pp.__x64_sys_futex
2.23 ± 20% +1.7 3.98 ± 37% perf-profile.children.cycles-pp.apic_timer_interrupt
17.73 ± 21% +19.1 36.86 ± 25% perf-profile.children.cycles-pp.intel_idle
19.00 ± 21% +21.1 40.13 ± 26% perf-profile.children.cycles-pp.cpuidle_enter_state
19.02 ± 21% +21.2 40.19 ± 26% perf-profile.children.cycles-pp.cpuidle_enter
18.89 ± 20% +21.2 40.12 ± 27% perf-profile.children.cycles-pp.start_secondary
20.14 ± 20% +22.5 42.65 ± 27% perf-profile.children.cycles-pp.cpu_startup_entry
20.08 ± 20% +22.5 42.59 ± 27% perf-profile.children.cycles-pp.do_idle
20.14 ± 20% +22.5 42.66 ± 27% perf-profile.children.cycles-pp.secondary_startup_64
43.25 ± 9% -18.6 24.63 ± 49% perf-profile.self.cycles-pp.longest_match
22.74 ± 11% -10.8 11.97 ± 50% perf-profile.self.cycles-pp.deflate_medium
2.26 ± 14% -1.2 1.11 ± 44% perf-profile.self.cycles-pp.deflateSetDictionary
0.51 ± 12% -0.3 0.24 ± 57% perf-profile.self.cycles-pp.fill_window
0.07 ± 7% +0.0 0.10 ± 24% perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
0.07 ± 7% +0.1 0.12 ± 36% perf-profile.self.cycles-pp.syscall_return_via_sysret
0.08 ± 12% +0.1 0.14 ± 15% perf-profile.self.cycles-pp.__update_load_avg_se
0.06 +0.1 0.13 ± 27% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.08 ± 25% +0.1 0.15 ± 37% perf-profile.self.cycles-pp.__switch_to
0.06 ± 16% +0.1 0.13 ± 29% perf-profile.self.cycles-pp.get_page_from_freelist
0.00 +0.1 0.07 ± 29% perf-profile.self.cycles-pp.__switch_to_asm
0.05 +0.1 0.13 ± 57% perf-profile.self.cycles-pp.switch_mm_irqs_off
0.00 +0.1 0.08 ± 41% perf-profile.self.cycles-pp.interrupt_entry
0.00 +0.1 0.08 ± 61% perf-profile.self.cycles-pp.run_timer_softirq
0.07 ± 23% +0.1 0.15 ± 43% perf-profile.self.cycles-pp.__hrtimer_next_event_base
0.03 ±100% +0.1 0.12 ± 43% perf-profile.self.cycles-pp.update_cfs_group
0.08 ± 29% +0.1 0.17 ± 38% perf-profile.self.cycles-pp.queued_spin_lock_slowpath
0.00 +0.1 0.09 ± 29% perf-profile.self.cycles-pp.strncpy_from_user
0.06 ± 16% +0.1 0.15 ± 24% perf-profile.self.cycles-pp.find_next_bit
0.00 +0.1 0.10 ± 35% perf-profile.self.cycles-pp.do_lookup_x
0.00 +0.1 0.10 ± 13% perf-profile.self.cycles-pp.kmem_cache_free
0.06 ± 16% +0.1 0.16 ± 30% perf-profile.self.cycles-pp.do_idle
0.03 ±100% +0.1 0.13 ± 18% perf-profile.self.cycles-pp.entry_SYSCALL_64
0.03 ±100% +0.1 0.14 ± 41% perf-profile.self.cycles-pp.update_blocked_averages
0.11 ± 18% +0.1 0.22 ± 37% perf-profile.self.cycles-pp.native_sched_clock
0.07 ± 14% +0.1 0.18 ± 46% perf-profile.self.cycles-pp.lapic_next_deadline
0.00 +0.1 0.12 ± 65% perf-profile.self.cycles-pp.set_next_entity
0.12 ± 28% +0.1 0.27 ± 32% perf-profile.self.cycles-pp.cpuidle_enter_state
0.15 ± 3% +0.2 0.32 ± 39% perf-profile.self.cycles-pp.io_serial_out
0.25 ± 4% +0.2 0.48 ± 19% perf-profile.self.cycles-pp.menu_select
0.15 ± 22% +0.3 0.42 ± 46% perf-profile.self.cycles-pp.find_busiest_group
0.29 ± 37% +0.4 0.71 ± 42% perf-profile.self.cycles-pp.io_serial_in
0.32 ± 7% +0.7 1.02 ± 56% perf-profile.self.cycles-pp.poll_idle
17.73 ± 21% +19.1 36.79 ± 25% perf-profile.self.cycles-pp.intel_idle
                     phoronix-test-suite.compress-gzip.0.seconds
  [ASCII run-time plot degraded in transit and omitted: y-axis is run time in
   seconds (0-8); bisect-good samples ([*]) fall below the bisect-bad samples
   ([O]), which sit in the 7-8 second range]
  [*] bisect-good sample
  [O] bisect-bad sample
***************************************************************************************************
lkp-cfl-d1: 12 threads Intel(R) Core(TM) i7-8700 CPU @ 3.20GHz with 8G memory
***************************************************************************************************
lkp-skl-fpga01: 104 threads Skylake with 192G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/debian-x86_64-2019-09-23.cgz/300s/8T/lkp-skl-fpga01/anon-cow-seq/vm-scalability/0x2000064
commit:
fcf0553db6 ("sched/fair: Remove meaningless imbalance calculation")
0b0695f2b3 ("sched/fair: Rework load_balance()")
fcf0553db6f4c793 0b0695f2b34a4afa3f6e9aa1ff0
---------------- ---------------------------
%stddev %change %stddev
\ | \
413301 +3.1% 426103 vm-scalability.median
0.04 ± 2% -34.0% 0.03 ± 12% vm-scalability.median_stddev
43837589 +2.4% 44902458 vm-scalability.throughput
181085 -18.7% 147221 vm-scalability.time.involuntary_context_switches
12762365 ± 2% +3.9% 13262025 vm-scalability.time.minor_page_faults
7773 +2.9% 7997 vm-scalability.time.percent_of_cpu_this_job_got
11449 +1.2% 11589 vm-scalability.time.system_time
12024 +4.7% 12584 vm-scalability.time.user_time
439194 ± 2% +46.0% 641402 ± 2% vm-scalability.time.voluntary_context_switches
1.148e+10 +5.0% 1.206e+10 vm-scalability.workload
0.00 ± 54% +0.0 0.00 ± 17% mpstat.cpu.all.iowait%
4767597 +52.5% 7268430 ± 41% numa-numastat.node1.local_node
4781030 +52.3% 7280347 ± 41% numa-numastat.node1.numa_hit
24.75 -9.1% 22.50 ± 2% vmstat.cpu.id
37.50 +4.7% 39.25 vmstat.cpu.us
6643 ± 3% +15.1% 7647 vmstat.system.cs
12220504 +33.4% 16298593 ± 4% cpuidle.C1.time
260215 ± 6% +55.3% 404158 ± 3% cpuidle.C1.usage
4986034 ± 3% +56.2% 7786811 ± 2% cpuidle.POLL.time
145941 ± 3% +61.2% 235218 ± 2% cpuidle.POLL.usage
1990 +3.0% 2049 turbostat.Avg_MHz
254633 ± 6% +56.7% 398892 ± 4% turbostat.C1
0.04 +0.0 0.05 turbostat.C1%
309.99 +1.5% 314.75 turbostat.RAMWatt
1688 ± 11% +17.4% 1983 ± 5% slabinfo.UNIX.active_objs
1688 ± 11% +17.4% 1983 ± 5% slabinfo.UNIX.num_objs
2460 ± 3% -15.8% 2072 ± 11% slabinfo.dmaengine-unmap-16.active_objs
2460 ± 3% -15.8% 2072 ± 11% slabinfo.dmaengine-unmap-16.num_objs
2814 ± 9% +14.6% 3225 ± 4% slabinfo.sock_inode_cache.active_objs
2814 ± 9% +14.6% 3225 ± 4% slabinfo.sock_inode_cache.num_objs
0.67 ± 5% +0.1 0.73 ± 3% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.do_huge_pmd_wp_page.__handle_mm_fault.handle_mm_fault
0.68 ± 6% +0.1 0.74 ± 2% perf-profile.calltrace.cycles-pp.alloc_pages_vma.do_huge_pmd_wp_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
0.05 +0.0 0.07 ± 7% perf-profile.children.cycles-pp.schedule
0.06 +0.0 0.08 ± 6% perf-profile.children.cycles-pp.__wake_up_common
0.06 ± 7% +0.0 0.08 ± 6% perf-profile.children.cycles-pp.wake_up_page_bit
0.23 ± 7% +0.0 0.28 ± 5% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.00 +0.1 0.05 perf-profile.children.cycles-pp.drm_fb_helper_sys_imageblit
0.00 +0.1 0.05 perf-profile.children.cycles-pp.sys_imageblit
29026 ± 3% -26.7% 21283 ± 44% numa-vmstat.node0.nr_inactive_anon
30069 ± 3% -20.5% 23905 ± 26% numa-vmstat.node0.nr_shmem
12120 ± 2% -15.5% 10241 ± 12% numa-vmstat.node0.nr_slab_reclaimable
29026 ± 3% -26.7% 21283 ± 44% numa-vmstat.node0.nr_zone_inactive_anon
4010893 +16.1% 4655889 ± 9% numa-vmstat.node1.nr_active_anon
3982581 +16.3% 4632344 ± 9% numa-vmstat.node1.nr_anon_pages
6861 +16.1% 7964 ± 8% numa-vmstat.node1.nr_anon_transparent_hugepages
2317 ± 42% +336.9% 10125 ± 93% numa-vmstat.node1.nr_inactive_anon
6596 ± 4% +18.2% 7799 ± 14% numa-vmstat.node1.nr_kernel_stack
9629 ± 8% +66.4% 16020 ± 41% numa-vmstat.node1.nr_shmem
7558 ± 3% +26.5% 9561 ± 14% numa-vmstat.node1.nr_slab_reclaimable
4010227 +16.1% 4655056 ± 9% numa-vmstat.node1.nr_zone_active_anon
2317 ± 42% +336.9% 10125 ± 93% numa-vmstat.node1.nr_zone_inactive_anon
2859663 ± 2% +46.2% 4179500 ± 36% numa-vmstat.node1.numa_hit
2680260 ± 2% +49.3% 4002218 ± 37% numa-vmstat.node1.numa_local
116661 ± 3% -26.3% 86010 ± 44% numa-meminfo.node0.Inactive
116192 ± 3% -26.7% 85146 ± 44% numa-meminfo.node0.Inactive(anon)
48486 ± 2% -15.5% 40966 ± 12% numa-meminfo.node0.KReclaimable
48486 ± 2% -15.5% 40966 ± 12% numa-meminfo.node0.SReclaimable
120367 ± 3% -20.5% 95642 ± 26% numa-meminfo.node0.Shmem
16210528 +15.2% 18673368 ± 6% numa-meminfo.node1.Active
16210394 +15.2% 18673287 ± 6% numa-meminfo.node1.Active(anon)
14170064 +15.6% 16379835 ± 7% numa-meminfo.node1.AnonHugePages
16113351 +15.3% 18577254 ± 7% numa-meminfo.node1.AnonPages
10534 ± 33% +293.8% 41480 ± 92% numa-meminfo.node1.Inactive
9262 ± 42% +338.2% 40589 ± 93% numa-meminfo.node1.Inactive(anon)
30235 ± 3% +26.5% 38242 ± 14% numa-meminfo.node1.KReclaimable
6594 ± 4% +18.3% 7802 ± 14% numa-meminfo.node1.KernelStack
17083646 +15.1% 19656922 ± 7% numa-meminfo.node1.MemUsed
30235 ± 3% +26.5% 38242 ± 14% numa-meminfo.node1.SReclaimable
38540 ± 8% +66.4% 64117 ± 42% numa-meminfo.node1.Shmem
106342 +19.8% 127451 ± 11% numa-meminfo.node1.Slab
9479688 +4.5% 9905902 proc-vmstat.nr_active_anon
9434298 +4.5% 9856978 proc-vmstat.nr_anon_pages
16194 +4.3% 16895 proc-vmstat.nr_anon_transparent_hugepages
276.75 +3.6% 286.75 proc-vmstat.nr_dirtied
3888633 -1.1% 3845882 proc-vmstat.nr_dirty_background_threshold
7786774 -1.1% 7701168 proc-vmstat.nr_dirty_threshold
39168820 -1.1% 38741444 proc-vmstat.nr_free_pages
50391 +1.0% 50904 proc-vmstat.nr_slab_unreclaimable
257.50 +3.6% 266.75 proc-vmstat.nr_written
9479678 +4.5% 9905895 proc-vmstat.nr_zone_active_anon
1501517 -5.9% 1412958 proc-vmstat.numa_hint_faults
1075936 -13.1% 934706 proc-vmstat.numa_hint_faults_local
17306395 +4.8% 18141722 proc-vmstat.numa_hit
5211079 +4.2% 5427541 proc-vmstat.numa_huge_pte_updates
17272620 +4.8% 18107691 proc-vmstat.numa_local
33774 +0.8% 34031 proc-vmstat.numa_other
690793 ± 3% -13.7% 596166 ± 2% proc-vmstat.numa_pages_migrated
2.669e+09 +4.2% 2.78e+09 proc-vmstat.numa_pte_updates
2.755e+09 +5.6% 2.909e+09 proc-vmstat.pgalloc_normal
13573227 ± 2% +3.6% 14060842 proc-vmstat.pgfault
2.752e+09 +5.6% 2.906e+09 proc-vmstat.pgfree
1.723e+08 ± 2% +14.3% 1.97e+08 ± 8% proc-vmstat.pgmigrate_fail
690793 ± 3% -13.7% 596166 ± 2% proc-vmstat.pgmigrate_success
5015265 +5.0% 5266730 proc-vmstat.thp_deferred_split_page
5019661 +5.0% 5271482 proc-vmstat.thp_fault_alloc
18284 ± 62% -79.9% 3681 ±172% sched_debug.cfs_rq:/.MIN_vruntime.avg
1901618 ± 62% -89.9% 192494 ±172% sched_debug.cfs_rq:/.MIN_vruntime.max
185571 ± 62% -85.8% 26313 ±172% sched_debug.cfs_rq:/.MIN_vruntime.stddev
15241 ± 6% -36.6% 9655 ± 6% sched_debug.cfs_rq:/.exec_clock.stddev
18284 ± 62% -79.9% 3681 ±172% sched_debug.cfs_rq:/.max_vruntime.avg
1901618 ± 62% -89.9% 192494 ±172% sched_debug.cfs_rq:/.max_vruntime.max
185571 ± 62% -85.8% 26313 ±172% sched_debug.cfs_rq:/.max_vruntime.stddev
898812 ± 7% -31.2% 618552 ± 5% sched_debug.cfs_rq:/.min_vruntime.stddev
10.30 ± 12% +34.5% 13.86 ± 6% sched_debug.cfs_rq:/.nr_spread_over.avg
34.75 ± 8% +95.9% 68.08 ± 4% sched_debug.cfs_rq:/.nr_spread_over.max
9.12 ± 11% +82.3% 16.62 ± 9% sched_debug.cfs_rq:/.nr_spread_over.stddev
-1470498 -31.9% -1000709 sched_debug.cfs_rq:/.spread0.min
899820 ± 7% -31.2% 618970 ± 5% sched_debug.cfs_rq:/.spread0.stddev
1589 ± 9% -19.2% 1284 ± 9% sched_debug.cfs_rq:/.util_avg.max
0.54 ± 39% +7484.6% 41.08 ± 92% sched_debug.cfs_rq:/.util_est_enqueued.min
238.84 ± 8% -33.2% 159.61 ± 26% sched_debug.cfs_rq:/.util_est_enqueued.stddev
10787 ± 2% +13.8% 12274 sched_debug.cpu.nr_switches.avg
35242 ± 9% +32.3% 46641 ± 25% sched_debug.cpu.nr_switches.max
9139 ± 3% +16.4% 10636 sched_debug.cpu.sched_count.avg
32025 ± 10% +34.6% 43091 ± 27% sched_debug.cpu.sched_count.max
4016 ± 2% +14.7% 4606 ± 5% sched_debug.cpu.sched_count.min
2960 +38.3% 4093 sched_debug.cpu.sched_goidle.avg
11201 ± 24% +75.8% 19691 ± 26% sched_debug.cpu.sched_goidle.max
1099 ± 6% +56.9% 1725 ± 6% sched_debug.cpu.sched_goidle.min
1877 ± 10% +32.5% 2487 ± 17% sched_debug.cpu.sched_goidle.stddev
4348 ± 3% +19.3% 5188 sched_debug.cpu.ttwu_count.avg
17832 ± 11% +78.6% 31852 ± 29% sched_debug.cpu.ttwu_count.max
1699 ± 6% +28.2% 2178 ± 7% sched_debug.cpu.ttwu_count.min
1357 ± 10% -22.6% 1050 ± 4% sched_debug.cpu.ttwu_local.avg
11483 ± 5% -25.0% 8614 ± 15% sched_debug.cpu.ttwu_local.max
1979 ± 12% -36.8% 1251 ± 10% sched_debug.cpu.ttwu_local.stddev
3.941e+10 +5.0% 4.137e+10 perf-stat.i.branch-instructions
0.02 ± 50% -0.0 0.02 ± 5% perf-stat.i.branch-miss-rate%
67.94 -3.9 63.99 perf-stat.i.cache-miss-rate%
8.329e+08 -1.9% 8.17e+08 perf-stat.i.cache-misses
1.224e+09 +4.5% 1.28e+09 perf-stat.i.cache-references
6650 ± 3% +15.5% 7678 perf-stat.i.context-switches
1.64 -1.8% 1.61 perf-stat.i.cpi
2.037e+11 +2.8% 2.095e+11 perf-stat.i.cpu-cycles
257.56 -4.0% 247.13 perf-stat.i.cpu-migrations
244.94 +4.5% 255.91 perf-stat.i.cycles-between-cache-misses
1189446 ± 2% +3.2% 1227527 perf-stat.i.dTLB-load-misses
2.669e+10 +4.7% 2.794e+10 perf-stat.i.dTLB-loads
0.00 ± 7% -0.0 0.00 perf-stat.i.dTLB-store-miss-rate%
337782 +4.5% 353044 perf-stat.i.dTLB-store-misses
9.096e+09 +4.7% 9.526e+09 perf-stat.i.dTLB-stores
39.50 +2.1 41.64 perf-stat.i.iTLB-load-miss-rate%
296305 ± 2% +9.0% 323020 perf-stat.i.iTLB-load-misses
1.238e+11 +4.9% 1.299e+11 perf-stat.i.instructions
428249 ± 2% -4.4% 409553 perf-stat.i.instructions-per-iTLB-miss
0.61 +1.6% 0.62 perf-stat.i.ipc
44430 +3.8% 46121 perf-stat.i.minor-faults
54.82 +3.9 58.73 perf-stat.i.node-load-miss-rate%
68519419 ± 4% -11.7% 60479057 ± 6% perf-stat.i.node-load-misses
49879161 ± 3% -20.7% 39554915 ± 4% perf-stat.i.node-loads
44428 +3.8% 46119 perf-stat.i.page-faults
0.02 -0.0 0.01 ± 5% perf-stat.overall.branch-miss-rate%
68.03 -4.2 63.83 perf-stat.overall.cache-miss-rate%
1.65 -2.0% 1.61 perf-stat.overall.cpi
244.61 +4.8% 256.41 perf-stat.overall.cycles-between-cache-misses
30.21 +2.2 32.38 perf-stat.overall.iTLB-load-miss-rate%
417920 ± 2% -3.7% 402452 perf-stat.overall.instructions-per-iTLB-miss
0.61 +2.1% 0.62 perf-stat.overall.ipc
57.84 +2.6 60.44 perf-stat.overall.node-load-miss-rate%
3.925e+10 +5.1% 4.124e+10 perf-stat.ps.branch-instructions
8.295e+08 -1.8% 8.144e+08 perf-stat.ps.cache-misses
1.219e+09 +4.6% 1.276e+09 perf-stat.ps.cache-references
6625 ± 3% +15.4% 7648 perf-stat.ps.context-switches
2.029e+11 +2.9% 2.088e+11 perf-stat.ps.cpu-cycles
256.82 -4.2% 246.09 perf-stat.ps.cpu-migrations
1184763 ± 2% +3.3% 1223366 perf-stat.ps.dTLB-load-misses
2.658e+10 +4.8% 2.786e+10 perf-stat.ps.dTLB-loads
336658 +4.5% 351710 perf-stat.ps.dTLB-store-misses
9.059e+09 +4.8% 9.497e+09 perf-stat.ps.dTLB-stores
295140 ± 2% +9.0% 321824 perf-stat.ps.iTLB-load-misses
1.233e+11 +5.0% 1.295e+11 perf-stat.ps.instructions
44309 +3.7% 45933 perf-stat.ps.minor-faults
68208972 ± 4% -11.6% 60272675 ± 6% perf-stat.ps.node-load-misses
49689740 ± 3% -20.7% 39401789 ± 4% perf-stat.ps.node-loads
44308 +3.7% 45932 perf-stat.ps.page-faults
3.732e+13 +5.1% 3.922e+13 perf-stat.total.instructions
14949 ± 2% +14.5% 17124 ± 11% softirqs.CPU0.SCHED
9940 +37.8% 13700 ± 24% softirqs.CPU1.SCHED
9370 ± 2% +28.2% 12014 ± 16% softirqs.CPU10.SCHED
17637 ± 2% -16.5% 14733 ± 16% softirqs.CPU101.SCHED
17846 ± 3% -17.4% 14745 ± 16% softirqs.CPU103.SCHED
9552 +24.7% 11916 ± 17% softirqs.CPU11.SCHED
9210 ± 5% +27.9% 11784 ± 16% softirqs.CPU12.SCHED
9378 ± 3% +27.7% 11974 ± 16% softirqs.CPU13.SCHED
9164 ± 2% +29.4% 11856 ± 18% softirqs.CPU14.SCHED
9215 +21.2% 11170 ± 19% softirqs.CPU15.SCHED
9118 ± 2% +29.1% 11772 ± 16% softirqs.CPU16.SCHED
9413 +29.2% 12165 ± 18% softirqs.CPU17.SCHED
9309 ± 2% +29.9% 12097 ± 17% softirqs.CPU18.SCHED
9423 +26.1% 11880 ± 15% softirqs.CPU19.SCHED
9010 ± 7% +37.8% 12420 ± 18% softirqs.CPU2.SCHED
9382 ± 3% +27.0% 11916 ± 15% softirqs.CPU20.SCHED
9102 ± 4% +30.0% 11830 ± 16% softirqs.CPU21.SCHED
9543 ± 3% +23.4% 11780 ± 18% softirqs.CPU22.SCHED
8998 ± 5% +29.2% 11630 ± 18% softirqs.CPU24.SCHED
9254 ± 2% +23.9% 11462 ± 19% softirqs.CPU25.SCHED
18450 ± 4% -16.9% 15341 ± 16% softirqs.CPU26.SCHED
17551 ± 4% -14.8% 14956 ± 13% softirqs.CPU27.SCHED
17575 ± 4% -14.6% 15010 ± 14% softirqs.CPU28.SCHED
17515 ± 5% -14.2% 15021 ± 13% softirqs.CPU29.SCHED
17715 ± 2% -16.1% 14856 ± 13% softirqs.CPU30.SCHED
17754 ± 4% -16.1% 14904 ± 13% softirqs.CPU31.SCHED
17675 ± 2% -17.0% 14679 ± 21% softirqs.CPU32.SCHED
17625 ± 2% -16.0% 14813 ± 13% softirqs.CPU34.SCHED
17619 ± 2% -14.7% 15024 ± 14% softirqs.CPU35.SCHED
17887 ± 3% -17.0% 14841 ± 14% softirqs.CPU36.SCHED
17658 ± 3% -16.3% 14771 ± 12% softirqs.CPU38.SCHED
17501 ± 2% -15.3% 14816 ± 14% softirqs.CPU39.SCHED
9360 ± 2% +25.4% 11740 ± 14% softirqs.CPU4.SCHED
17699 ± 4% -16.2% 14827 ± 14% softirqs.CPU42.SCHED
17580 ± 3% -16.5% 14679 ± 15% softirqs.CPU43.SCHED
17658 ± 3% -17.1% 14644 ± 14% softirqs.CPU44.SCHED
17452 ± 4% -14.0% 15001 ± 15% softirqs.CPU46.SCHED
17599 ± 4% -17.4% 14544 ± 14% softirqs.CPU47.SCHED
17792 ± 3% -16.5% 14864 ± 14% softirqs.CPU48.SCHED
17333 ± 2% -16.7% 14445 ± 14% softirqs.CPU49.SCHED
9483 +32.3% 12547 ± 24% softirqs.CPU5.SCHED
17842 ± 3% -15.9% 14997 ± 16% softirqs.CPU51.SCHED
9051 ± 2% +23.3% 11160 ± 13% softirqs.CPU52.SCHED
9385 ± 3% +25.2% 11752 ± 16% softirqs.CPU53.SCHED
9446 ± 6% +24.9% 11798 ± 14% softirqs.CPU54.SCHED
10006 ± 6% +22.4% 12249 ± 14% softirqs.CPU55.SCHED
9657 +22.0% 11780 ± 16% softirqs.CPU57.SCHED
9399 +27.5% 11980 ± 15% softirqs.CPU58.SCHED
9234 ± 3% +27.7% 11795 ± 14% softirqs.CPU59.SCHED
9726 ± 6% +24.0% 12062 ± 16% softirqs.CPU6.SCHED
9165 ± 2% +23.7% 11342 ± 14% softirqs.CPU60.SCHED
9357 ± 2% +25.8% 11774 ± 15% softirqs.CPU61.SCHED
9406 ± 3% +25.2% 11780 ± 16% softirqs.CPU62.SCHED
9489 +23.2% 11688 ± 15% softirqs.CPU63.SCHED
9399 ± 2% +23.5% 11604 ± 16% softirqs.CPU65.SCHED
8950 ± 2% +31.6% 11774 ± 16% softirqs.CPU66.SCHED
9260 +21.7% 11267 ± 19% softirqs.CPU67.SCHED
9187 +27.1% 11672 ± 17% softirqs.CPU68.SCHED
9443 ± 2% +25.5% 11847 ± 17% softirqs.CPU69.SCHED
9144 ± 3% +28.0% 11706 ± 16% softirqs.CPU7.SCHED
9276 ± 2% +28.0% 11871 ± 17% softirqs.CPU70.SCHED
9494 +21.4% 11526 ± 14% softirqs.CPU71.SCHED
9124 ± 3% +27.8% 11657 ± 17% softirqs.CPU72.SCHED
9189 ± 3% +25.9% 11568 ± 16% softirqs.CPU73.SCHED
9392 ± 2% +23.7% 11619 ± 16% softirqs.CPU74.SCHED
17821 ± 3% -14.7% 15197 ± 17% softirqs.CPU78.SCHED
17581 ± 2% -15.7% 14827 ± 15% softirqs.CPU79.SCHED
9123 +28.2% 11695 ± 15% softirqs.CPU8.SCHED
17524 ± 2% -16.7% 14601 ± 14% softirqs.CPU80.SCHED
17644 ± 3% -16.2% 14782 ± 14% softirqs.CPU81.SCHED
17705 ± 3% -18.6% 14414 ± 22% softirqs.CPU84.SCHED
17679 ± 2% -14.1% 15185 ± 11% softirqs.CPU85.SCHED
17434 ± 3% -15.5% 14724 ± 14% softirqs.CPU86.SCHED
17409 ± 2% -15.0% 14794 ± 13% softirqs.CPU87.SCHED
17470 ± 3% -15.7% 14730 ± 13% softirqs.CPU88.SCHED
17748 ± 4% -17.1% 14721 ± 12% softirqs.CPU89.SCHED
9323 +28.0% 11929 ± 17% softirqs.CPU9.SCHED
17471 ± 2% -16.9% 14525 ± 13% softirqs.CPU90.SCHED
17900 ± 3% -17.0% 14850 ± 14% softirqs.CPU94.SCHED
17599 ± 4% -17.4% 14544 ± 15% softirqs.CPU95.SCHED
17697 ± 4% -17.7% 14569 ± 13% softirqs.CPU96.SCHED
17561 ± 3% -15.1% 14901 ± 13% softirqs.CPU97.SCHED
17404 ± 3% -16.1% 14601 ± 13% softirqs.CPU98.SCHED
17802 ± 3% -19.4% 14344 ± 15% softirqs.CPU99.SCHED
1310 ± 10% -17.0% 1088 ± 5% interrupts.CPU1.RES:Rescheduling_interrupts
3427 +13.3% 3883 ± 9% interrupts.CPU10.CAL:Function_call_interrupts
736.50 ± 20% +34.4% 989.75 ± 17% interrupts.CPU100.RES:Rescheduling_interrupts
3421 ± 3% +14.6% 3921 ± 9% interrupts.CPU101.CAL:Function_call_interrupts
4873 ± 8% +16.2% 5662 ± 7% interrupts.CPU101.NMI:Non-maskable_interrupts
4873 ± 8% +16.2% 5662 ± 7% interrupts.CPU101.PMI:Performance_monitoring_interrupts
629.50 ± 19% +83.2% 1153 ± 46% interrupts.CPU101.RES:Rescheduling_interrupts
661.75 ± 14% +25.7% 832.00 ± 13% interrupts.CPU102.RES:Rescheduling_interrupts
4695 ± 5% +15.5% 5420 ± 9% interrupts.CPU103.NMI:Non-maskable_interrupts
4695 ± 5% +15.5% 5420 ± 9% interrupts.CPU103.PMI:Performance_monitoring_interrupts
3460 +12.1% 3877 ± 9% interrupts.CPU11.CAL:Function_call_interrupts
691.50 ± 7% +41.0% 975.00 ± 32% interrupts.CPU19.RES:Rescheduling_interrupts
3413 ± 2% +13.4% 3870 ± 10% interrupts.CPU20.CAL:Function_call_interrupts
3413 ± 2% +13.4% 3871 ± 10% interrupts.CPU22.CAL:Function_call_interrupts
863.00 ± 36% +45.3% 1254 ± 24% interrupts.CPU23.RES:Rescheduling_interrupts
659.75 ± 12% +83.4% 1209 ± 20% interrupts.CPU26.RES:Rescheduling_interrupts
615.00 ± 10% +87.8% 1155 ± 14% interrupts.CPU27.RES:Rescheduling_interrupts
663.75 ± 5% +67.9% 1114 ± 7% interrupts.CPU28.RES:Rescheduling_interrupts
3421 ± 4% +13.4% 3879 ± 9% interrupts.CPU29.CAL:Function_call_interrupts
805.25 ± 16% +33.0% 1071 ± 15% interrupts.CPU29.RES:Rescheduling_interrupts
3482 ± 3% +11.0% 3864 ± 8% interrupts.CPU3.CAL:Function_call_interrupts
819.75 ± 19% +48.4% 1216 ± 12% interrupts.CPU30.RES:Rescheduling_interrupts
777.25 ± 8% +31.6% 1023 ± 6% interrupts.CPU31.RES:Rescheduling_interrupts
844.50 ± 25% +41.7% 1196 ± 20% interrupts.CPU32.RES:Rescheduling_interrupts
722.75 ± 14% +94.2% 1403 ± 26% interrupts.CPU33.RES:Rescheduling_interrupts
3944 ± 25% +36.8% 5394 ± 9% interrupts.CPU34.NMI:Non-maskable_interrupts
3944 ± 25% +36.8% 5394 ± 9% interrupts.CPU34.PMI:Performance_monitoring_interrupts
781.75 ± 9% +45.3% 1136 ± 27% interrupts.CPU34.RES:Rescheduling_interrupts
735.50 ± 9% +33.3% 980.75 ± 4% interrupts.CPU35.RES:Rescheduling_interrupts
691.75 ± 10% +41.6% 979.50 ± 13% interrupts.CPU36.RES:Rescheduling_interrupts
727.00 ± 16% +47.7% 1074 ± 15% interrupts.CPU37.RES:Rescheduling_interrupts
4413 ± 7% +24.9% 5511 ± 9% interrupts.CPU38.NMI:Non-maskable_interrupts
4413 ± 7% +24.9% 5511 ± 9% interrupts.CPU38.PMI:Performance_monitoring_interrupts
708.75 ± 25% +62.6% 1152 ± 22% interrupts.CPU38.RES:Rescheduling_interrupts
666.50 ± 7% +57.8% 1052 ± 13% interrupts.CPU39.RES:Rescheduling_interrupts
765.75 ± 11% +25.2% 958.75 ± 14% interrupts.CPU4.RES:Rescheduling_interrupts
3395 ± 2% +15.1% 3908 ± 10% interrupts.CPU40.CAL:Function_call_interrupts
770.00 ± 16% +45.3% 1119 ± 18% interrupts.CPU40.RES:Rescheduling_interrupts
740.50 ± 26% +61.9% 1198 ± 19% interrupts.CPU41.RES:Rescheduling_interrupts
3459 ± 2% +12.9% 3905 ± 11% interrupts.CPU42.CAL:Function_call_interrupts
4530 ± 5% +22.8% 5564 ± 9% interrupts.CPU42.NMI:Non-maskable_interrupts
4530 ± 5% +22.8% 5564 ± 9% interrupts.CPU42.PMI:Performance_monitoring_interrupts
3330 ± 25% +60.0% 5328 ± 10% interrupts.CPU44.NMI:Non-maskable_interrupts
3330 ± 25% +60.0% 5328 ± 10% interrupts.CPU44.PMI:Performance_monitoring_interrupts
686.25 ± 9% +48.4% 1018 ± 10% interrupts.CPU44.RES:Rescheduling_interrupts
702.00 ± 15% +38.6% 973.25 ± 5% interrupts.CPU45.RES:Rescheduling_interrupts
4742 ± 7% +19.3% 5657 ± 8% interrupts.CPU46.NMI:Non-maskable_interrupts
4742 ± 7% +19.3% 5657 ± 8% interrupts.CPU46.PMI:Performance_monitoring_interrupts
732.75 ± 6% +51.9% 1113 ± 7% interrupts.CPU46.RES:Rescheduling_interrupts
775.50 ± 17% +41.3% 1095 ± 6% interrupts.CPU47.RES:Rescheduling_interrupts
670.75 ± 5% +60.7% 1078 ± 6% interrupts.CPU48.RES:Rescheduling_interrupts
4870 ± 8% +16.5% 5676 ± 7% interrupts.CPU49.NMI:Non-maskable_interrupts
4870 ± 8% +16.5% 5676 ± 7% interrupts.CPU49.PMI:Performance_monitoring_interrupts
694.75 ± 12% +25.8% 874.00 ± 11% interrupts.CPU49.RES:Rescheduling_interrupts
686.00 ± 9% +52.0% 1042 ± 20% interrupts.CPU50.RES:Rescheduling_interrupts
3361 +17.2% 3938 ± 9% interrupts.CPU51.CAL:Function_call_interrupts
4707 ± 6% +16.0% 5463 ± 8% interrupts.CPU51.NMI:Non-maskable_interrupts
4707 ± 6% +16.0% 5463 ± 8% interrupts.CPU51.PMI:Performance_monitoring_interrupts
638.75 ± 12% +28.6% 821.25 ± 15% interrupts.CPU54.RES:Rescheduling_interrupts
677.50 ± 8% +51.8% 1028 ± 29% interrupts.CPU58.RES:Rescheduling_interrupts
3465 ± 2% +12.0% 3880 ± 9% interrupts.CPU6.CAL:Function_call_interrupts
641.25 ± 2% +26.1% 808.75 ± 10% interrupts.CPU60.RES:Rescheduling_interrupts
599.75 ± 2% +45.6% 873.50 ± 8% interrupts.CPU62.RES:Rescheduling_interrupts
661.50 ± 9% +52.4% 1008 ± 27% interrupts.CPU63.RES:Rescheduling_interrupts
611.00 ± 12% +31.1% 801.00 ± 13% interrupts.CPU69.RES:Rescheduling_interrupts
3507 ± 2% +10.8% 3888 ± 9% interrupts.CPU7.CAL:Function_call_interrupts
664.00 ± 5% +32.3% 878.50 ± 23% interrupts.CPU70.RES:Rescheduling_interrupts
5780 ± 9% -38.8% 3540 ± 37% interrupts.CPU73.NMI:Non-maskable_interrupts
5780 ± 9% -38.8% 3540 ± 37% interrupts.CPU73.PMI:Performance_monitoring_interrupts
5787 ± 9% -26.7% 4243 ± 28% interrupts.CPU76.NMI:Non-maskable_interrupts
5787 ± 9% -26.7% 4243 ± 28% interrupts.CPU76.PMI:Performance_monitoring_interrupts
751.50 ± 15% +88.0% 1413 ± 37% interrupts.CPU78.RES:Rescheduling_interrupts
725.50 ± 12% +82.9% 1327 ± 36% interrupts.CPU79.RES:Rescheduling_interrupts
714.00 ± 18% +33.2% 951.00 ± 15% interrupts.CPU80.RES:Rescheduling_interrupts
706.25 ± 19% +55.6% 1098 ± 27% interrupts.CPU82.RES:Rescheduling_interrupts
4524 ± 6% +19.6% 5409 ± 8% interrupts.CPU83.NMI:Non-maskable_interrupts
4524 ± 6% +19.6% 5409 ± 8% interrupts.CPU83.PMI:Performance_monitoring_interrupts
666.75 ± 15% +37.3% 915.50 ± 4% interrupts.CPU83.RES:Rescheduling_interrupts
782.50 ± 26% +57.6% 1233 ± 21% interrupts.CPU84.RES:Rescheduling_interrupts
622.75 ± 12% +77.8% 1107 ± 17% interrupts.CPU85.RES:Rescheduling_interrupts
3465 ± 3% +13.5% 3933 ± 9% interrupts.CPU86.CAL:Function_call_interrupts
714.75 ± 14% +47.0% 1050 ± 10% interrupts.CPU86.RES:Rescheduling_interrupts
3519 ± 2% +11.7% 3929 ± 9% interrupts.CPU87.CAL:Function_call_interrupts
582.75 ± 10% +54.2% 898.75 ± 11% interrupts.CPU87.RES:Rescheduling_interrupts
713.00 ± 10% +36.6% 974.25 ± 11% interrupts.CPU88.RES:Rescheduling_interrupts
690.50 ± 13% +53.0% 1056 ± 13% interrupts.CPU89.RES:Rescheduling_interrupts
3477 +11.0% 3860 ± 8% interrupts.CPU9.CAL:Function_call_interrupts
684.50 ± 14% +39.7% 956.25 ± 11% interrupts.CPU90.RES:Rescheduling_interrupts
3946 ± 21% +39.8% 5516 ± 10% interrupts.CPU91.NMI:Non-maskable_interrupts
3946 ± 21% +39.8% 5516 ± 10% interrupts.CPU91.PMI:Performance_monitoring_interrupts
649.00 ± 13% +54.3% 1001 ± 6% interrupts.CPU91.RES:Rescheduling_interrupts
674.25 ± 21% +39.5% 940.25 ± 11% interrupts.CPU92.RES:Rescheduling_interrupts
3971 ± 26% +41.2% 5606 ± 8% interrupts.CPU94.NMI:Non-maskable_interrupts
3971 ± 26% +41.2% 5606 ± 8% interrupts.CPU94.PMI:Performance_monitoring_interrupts
4129 ± 22% +33.2% 5499 ± 9% interrupts.CPU95.NMI:Non-maskable_interrupts
4129 ± 22% +33.2% 5499 ± 9% interrupts.CPU95.PMI:Performance_monitoring_interrupts
685.75 ± 14% +38.0% 946.50 ± 9% interrupts.CPU96.RES:Rescheduling_interrupts
4630 ± 11% +18.3% 5477 ± 8% interrupts.CPU97.NMI:Non-maskable_interrupts
4630 ± 11% +18.3% 5477 ± 8% interrupts.CPU97.PMI:Performance_monitoring_interrupts
4835 ± 9% +16.3% 5622 ± 9% interrupts.CPU98.NMI:Non-maskable_interrupts
4835 ± 9% +16.3% 5622 ± 9% interrupts.CPU98.PMI:Performance_monitoring_interrupts
596.25 ± 11% +81.8% 1083 ± 9% interrupts.CPU98.RES:Rescheduling_interrupts
674.75 ± 17% +43.7% 969.50 ± 5% interrupts.CPU99.RES:Rescheduling_interrupts
78.25 ± 13% +21.4% 95.00 ± 10% interrupts.IWI:IRQ_work_interrupts
85705 ± 6% +26.0% 107990 ± 6% interrupts.RES:Rescheduling_interrupts
***************************************************************************************************
lkp-bdw-ep6: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
=========================================================================================
class/compiler/cpufreq_governor/disk/kconfig/nr_threads/rootfs/sc_pid_max/tbox_group/testcase/testtime/ucode:
scheduler/gcc-7/performance/1HDD/x86_64-rhel-7.6/100%/debian-x86_64-2019-11-14.cgz/4194304/lkp-bdw-ep6/stress-ng/1s/0xb000038
commit:
fcf0553db6 ("sched/fair: Remove meaningless imbalance calculation")
0b0695f2b3 ("sched/fair: Rework load_balance()")
fcf0553db6f4c793 0b0695f2b34a4afa3f6e9aa1ff0
---------------- ---------------------------
%stddev %change %stddev
\ | \
887157 ± 4% -23.1% 682080 ± 3% stress-ng.fault.ops
887743 ± 4% -23.1% 682337 ± 3% stress-ng.fault.ops_per_sec
9537184 ± 10% -21.2% 7518352 ± 14% stress-ng.hrtimers.ops_per_sec
360922 ± 13% -21.1% 284734 ± 6% stress-ng.kill.ops
361115 ± 13% -21.1% 284810 ± 6% stress-ng.kill.ops_per_sec
23260649 -26.9% 17006477 ± 24% stress-ng.mq.ops
23255884 -26.9% 17004540 ± 24% stress-ng.mq.ops_per_sec
3291588 ± 3% +42.5% 4690316 ± 2% stress-ng.schedpolicy.ops
3327913 ± 3% +41.5% 4709770 ± 2% stress-ng.schedpolicy.ops_per_sec
48.14 -2.2% 47.09 stress-ng.time.elapsed_time
48.14 -2.2% 47.09 stress-ng.time.elapsed_time.max
5480 +3.7% 5681 stress-ng.time.percent_of_cpu_this_job_got
2249 +1.3% 2278 stress-ng.time.system_time
902759 ± 4% -22.6% 698616 ± 3% proc-vmstat.unevictable_pgs_culled
98767954 ± 7% +16.4% 1.15e+08 ± 7% cpuidle.C1.time
1181676 ± 12% -43.2% 671022 ± 37% cpuidle.C6.usage
2.21 ± 7% +0.4 2.62 ± 10% turbostat.C1%
1176838 ± 12% -43.2% 668921 ± 37% turbostat.C6
3961223 ± 4% +12.8% 4469620 ± 5% vmstat.memory.cache
439.50 ± 3% +14.7% 504.00 ± 9% vmstat.procs.r
0.42 ± 7% -15.6% 0.35 ± 13% sched_debug.cfs_rq:/.nr_running.stddev
0.00 ± 4% -18.1% 0.00 ± 16% sched_debug.cpu.next_balance.stddev
0.41 ± 7% -15.1% 0.35 ± 13% sched_debug.cpu.nr_running.stddev
9367 ± 9% -12.8% 8166 ± 2% softirqs.CPU1.SCHED
35143 ± 6% -12.0% 30930 ± 2% softirqs.CPU22.TIMER
31997 ± 4% -7.5% 29595 ± 2% softirqs.CPU27.TIMER
3.64 ±173% -100.0% 0.00 iostat.sda.await.max
3.64 ±173% -100.0% 0.00 iostat.sda.r_await.max
3.90 ±173% -100.0% 0.00 iostat.sdc.await.max
3.90 ±173% -100.0% 0.00 iostat.sdc.r_await.max
12991737 ± 10% +61.5% 20979642 ± 8% numa-numastat.node0.local_node
13073590 ± 10% +61.1% 21059448 ± 8% numa-numastat.node0.numa_hit
20903562 ± 3% -32.2% 14164789 ± 3% numa-numastat.node1.local_node
20993788 ± 3% -32.1% 14245636 ± 3% numa-numastat.node1.numa_hit
90229 ± 4% -10.4% 80843 ± 9% numa-numastat.node1.other_node
50.75 ± 90% +1732.0% 929.75 ±147% interrupts.CPU23.IWI:IRQ_work_interrupts
40391 ± 59% -57.0% 17359 ± 11% interrupts.CPU24.RES:Rescheduling_interrupts
65670 ± 11% -48.7% 33716 ± 54% interrupts.CPU42.RES:Rescheduling_interrupts
42201 ± 46% -57.1% 18121 ± 35% interrupts.CPU49.RES:Rescheduling_interrupts
293869 ± 44% +103.5% 598082 ± 23% interrupts.CPU52.LOC:Local_timer_interrupts
17367 ± 8% +120.5% 38299 ± 44% interrupts.CPU55.RES:Rescheduling_interrupts
1.127e+08 +3.8% 1.17e+08 ± 2% perf-stat.i.branch-misses
11.10 +1.2 12.26 ± 6% perf-stat.i.cache-miss-rate%
4.833e+10 ± 3% +4.7% 5.06e+10 perf-stat.i.instructions
15009442 ± 4% +14.3% 17150138 ± 3% perf-stat.i.node-load-misses
47.12 ± 5% +3.2 50.37 ± 5% perf-stat.i.node-store-miss-rate%
6016833 ± 7% +17.0% 7036803 ± 3% perf-stat.i.node-store-misses
1.044e+10 ± 2% +4.0% 1.086e+10 perf-stat.ps.branch-instructions
1.364e+10 ± 3% +4.0% 1.418e+10 perf-stat.ps.dTLB-loads
4.804e+10 ± 2% +4.1% 5.003e+10 perf-stat.ps.instructions
14785608 ± 5% +11.3% 16451530 ± 3% perf-stat.ps.node-load-misses
5968712 ± 7% +13.4% 6769847 ± 3% perf-stat.ps.node-store-misses
13588 ± 4% +29.4% 17585 ± 9% slabinfo.Acpi-State.active_objs
13588 ± 4% +29.4% 17585 ± 9% slabinfo.Acpi-State.num_objs
20859 ± 3% -8.6% 19060 ± 4% slabinfo.kmalloc-192.num_objs
488.00 ± 25% +41.0% 688.00 ± 5% slabinfo.kmalloc-rcl-128.active_objs
488.00 ± 25% +41.0% 688.00 ± 5% slabinfo.kmalloc-rcl-128.num_objs
39660 ± 3% +11.8% 44348 ± 2% slabinfo.radix_tree_node.active_objs
44284 ± 3% +12.3% 49720 slabinfo.radix_tree_node.num_objs
5811 ± 15% +16.1% 6746 ± 14% slabinfo.sighand_cache.active_objs
402.00 ± 15% +17.5% 472.50 ± 14% slabinfo.sighand_cache.active_slabs
6035 ± 15% +17.5% 7091 ± 14% slabinfo.sighand_cache.num_objs
402.00 ± 15% +17.5% 472.50 ± 14% slabinfo.sighand_cache.num_slabs
10282 ± 10% +12.9% 11604 ± 9% slabinfo.signal_cache.active_objs
11350 ± 10% +12.8% 12808 ± 9% slabinfo.signal_cache.num_objs
732920 ± 9% +162.0% 1919987 ± 11% numa-meminfo.node0.Active
732868 ± 9% +162.0% 1919814 ± 11% numa-meminfo.node0.Active(anon)
545019 ± 6% +61.0% 877443 ± 17% numa-meminfo.node0.AnonHugePages
695015 ± 10% +46.8% 1020150 ± 14% numa-meminfo.node0.AnonPages
638322 ± 4% +448.2% 3499399 ± 5% numa-meminfo.node0.FilePages
81008 ± 14% +2443.4% 2060329 ± 3% numa-meminfo.node0.Inactive
80866 ± 14% +2447.4% 2060022 ± 3% numa-meminfo.node0.Inactive(anon)
86504 ± 10% +2287.3% 2065084 ± 3% numa-meminfo.node0.Mapped
2010104 +160.8% 5242366 ± 5% numa-meminfo.node0.MemUsed
16453 ± 15% +159.2% 42640 numa-meminfo.node0.PageTables
112769 ± 13% +2521.1% 2955821 ± 7% numa-meminfo.node0.Shmem
1839527 ± 4% -60.2% 732645 ± 23% numa-meminfo.node1.Active
1839399 ± 4% -60.2% 732637 ± 23% numa-meminfo.node1.Active(anon)
982237 ± 7% -45.9% 531445 ± 27% numa-meminfo.node1.AnonHugePages
1149348 ± 8% -41.2% 676067 ± 25% numa-meminfo.node1.AnonPages
3170649 ± 4% -77.2% 723230 ± 7% numa-meminfo.node1.FilePages
1960718 ± 4% -91.8% 160773 ± 31% numa-meminfo.node1.Inactive
1960515 ± 4% -91.8% 160722 ± 31% numa-meminfo.node1.Inactive(anon)
118489 ± 11% -20.2% 94603 ± 3% numa-meminfo.node1.KReclaimable
1966065 ± 4% -91.5% 166789 ± 29% numa-meminfo.node1.Mapped
5034310 ± 3% -60.2% 2003121 ± 9% numa-meminfo.node1.MemUsed
42684 ± 10% -64.2% 15283 ± 21% numa-meminfo.node1.PageTables
118489 ± 11% -20.2% 94603 ± 3% numa-meminfo.node1.SReclaimable
2644708 ± 5% -91.9% 214268 ± 24% numa-meminfo.node1.Shmem
147513 ± 20% +244.2% 507737 ± 7% numa-vmstat.node0.nr_active_anon
137512 ± 21% +105.8% 282999 ± 3% numa-vmstat.node0.nr_anon_pages
210.25 ± 33% +124.7% 472.50 ± 11% numa-vmstat.node0.nr_anon_transparent_hugepages
158008 ± 4% +454.7% 876519 ± 6% numa-vmstat.node0.nr_file_pages
18416 ± 27% +2711.4% 517747 ± 3% numa-vmstat.node0.nr_inactive_anon
26255 ± 22% +34.3% 35251 ± 10% numa-vmstat.node0.nr_kernel_stack
19893 ± 23% +2509.5% 519129 ± 3% numa-vmstat.node0.nr_mapped
3928 ± 22% +179.4% 10976 ± 4% numa-vmstat.node0.nr_page_table_pages
26623 ± 18% +2681.9% 740635 ± 7% numa-vmstat.node0.nr_shmem
147520 ± 20% +244.3% 507885 ± 7% numa-vmstat.node0.nr_zone_active_anon
18415 ± 27% +2711.5% 517739 ± 3% numa-vmstat.node0.nr_zone_inactive_anon
6937137 ± 8% +55.9% 10814957 ± 7% numa-vmstat.node0.numa_hit
6860210 ± 8% +56.6% 10739902 ± 7% numa-vmstat.node0.numa_local
425559 ± 13% -52.9% 200300 ± 17% numa-vmstat.node1.nr_active_anon
786341 ± 4% -76.6% 183664 ± 7% numa-vmstat.node1.nr_file_pages
483646 ± 4% -90.8% 44606 ± 29% numa-vmstat.node1.nr_inactive_anon
485120 ± 4% -90.5% 46130 ± 27% numa-vmstat.node1.nr_mapped
10471 ± 6% -61.3% 4048 ± 18% numa-vmstat.node1.nr_page_table_pages
654852 ± 5% -91.4% 56439 ± 25% numa-vmstat.node1.nr_shmem
29681 ± 11% -20.3% 23669 ± 3% numa-vmstat.node1.nr_slab_reclaimable
425556 ± 13% -52.9% 200359 ± 17% numa-vmstat.node1.nr_zone_active_anon
483649 ± 4% -90.8% 44600 ± 29% numa-vmstat.node1.nr_zone_inactive_anon
10527487 ± 5% -31.3% 7233899 ± 6% numa-vmstat.node1.numa_hit
10290625 ± 5% -31.9% 7006050 ± 7% numa-vmstat.node1.numa_local
***************************************************************************************************
lkp-csl-2sp5: 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 192G memory
=========================================================================================
class/compiler/cpufreq_governor/disk/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime/ucode:
interrupt/gcc-7/performance/1HDD/x86_64-fedora-25/100%/debian-x86_64-2019-11-14.cgz/lkp-csl-2sp5/stress-ng/1s/0x500002c
commit:
fcf0553db6 ("sched/fair: Remove meaningless imbalance calculation")
0b0695f2b3 ("sched/fair: Rework load_balance()")
fcf0553db6f4c793 0b0695f2b34a4afa3f6e9aa1ff0
---------------- ---------------------------
%stddev %change %stddev
\ | \
6684836 -33.3% 4457559 ± 4% stress-ng.schedpolicy.ops
6684766 -33.3% 4457633 ± 4% stress-ng.schedpolicy.ops_per_sec
19978129 -28.8% 14231813 ± 16% stress-ng.time.involuntary_context_switches
82.49 ± 2% -5.2% 78.23 stress-ng.time.user_time
106716 ± 29% +40.3% 149697 ± 2% meminfo.max_used_kB
4.07 ± 22% +1.2 5.23 ± 5% mpstat.cpu.all.irq%
2721317 ± 10% +66.5% 4531100 ± 22% cpuidle.POLL.time
71470 ± 18% +41.1% 100822 ± 11% cpuidle.POLL.usage
841.00 ± 41% -50.4% 417.25 ± 17% numa-meminfo.node0.Dirty
7096 ± 7% +25.8% 8930 ± 9% numa-meminfo.node1.KernelStack
68752 ± 90% -45.9% 37169 ±143% sched_debug.cfs_rq:/.runnable_weight.stddev
654.93 ± 11% +19.3% 781.09 ± 2% sched_debug.cpu.clock_task.stddev
183.06 ± 83% -76.9% 42.20 ± 17% iostat.sda.await.max
627.47 ±102% -96.7% 20.52 ± 38% iostat.sda.r_await.max
183.08 ± 83% -76.9% 42.24 ± 17% iostat.sda.w_await.max
209.00 ± 41% -50.2% 104.00 ± 17% numa-vmstat.node0.nr_dirty
209.50 ± 41% -50.4% 104.00 ± 17% numa-vmstat.node0.nr_zone_write_pending
6792 ± 8% +34.4% 9131 ± 7% numa-vmstat.node1.nr_kernel_stack
3.57 ±173% +9.8 13.38 ± 25% perf-profile.calltrace.cycles-pp.proc_reg_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.57 ±173% +9.8 13.38 ± 25% perf-profile.calltrace.cycles-pp.seq_read.proc_reg_read.vfs_read.ksys_read.do_syscall_64
3.57 ±173% +9.8 13.39 ± 25% perf-profile.children.cycles-pp.proc_reg_read
3.57 ±173% +12.6 16.16 ± 28% perf-profile.children.cycles-pp.seq_read
7948 ± 56% -53.1% 3730 ± 5% softirqs.CPU25.RCU
6701 ± 33% -46.7% 3570 ± 5% softirqs.CPU34.RCU
8232 ± 89% -60.5% 3247 softirqs.CPU50.RCU
326269 ± 16% -27.4% 236940 softirqs.RCU
68066 +7.9% 73438 proc-vmstat.nr_active_anon
67504 +7.8% 72783 proc-vmstat.nr_anon_pages
7198 ± 19% +34.2% 9658 ± 2% proc-vmstat.nr_page_table_pages
40664 ± 8% +10.1% 44766 proc-vmstat.nr_slab_unreclaimable
68066 +7.9% 73438 proc-vmstat.nr_zone_active_anon
1980169 ± 4% -5.3% 1875307 proc-vmstat.numa_hit
1960247 ± 4% -5.4% 1855033 proc-vmstat.numa_local
956008 ± 16% -17.8% 786247 proc-vmstat.pgfault
26598 ± 76% +301.2% 106716 ± 45% interrupts.CPU1.RES:Rescheduling_interrupts
151212 ± 39% -67.3% 49451 ± 57% interrupts.CPU26.RES:Rescheduling_interrupts
1013586 ± 2% -10.9% 903528 ± 7% interrupts.CPU27.LOC:Local_timer_interrupts
1000980 ± 2% -11.4% 886740 ± 8% interrupts.CPU31.LOC:Local_timer_interrupts
1021043 ± 3% -9.9% 919686 ± 6% interrupts.CPU32.LOC:Local_timer_interrupts
125222 ± 51% -86.0% 17483 ±106% interrupts.CPU33.RES:Rescheduling_interrupts
1003735 ± 2% -11.1% 891833 ± 8% interrupts.CPU34.LOC:Local_timer_interrupts
1021799 ± 2% -13.2% 886665 ± 8% interrupts.CPU38.LOC:Local_timer_interrupts
997788 ± 2% -13.2% 866427 ± 10% interrupts.CPU42.LOC:Local_timer_interrupts
1001618 -11.6% 885490 ± 9% interrupts.CPU45.LOC:Local_timer_interrupts
22321 ± 58% +550.3% 145153 ± 22% interrupts.CPU9.RES:Rescheduling_interrupts
3151 ± 53% +67.3% 5273 ± 8% slabinfo.avc_xperms_data.active_objs
3151 ± 53% +67.3% 5273 ± 8% slabinfo.avc_xperms_data.num_objs
348.75 ± 13% +39.8% 487.50 ± 5% slabinfo.biovec-128.active_objs
348.75 ± 13% +39.8% 487.50 ± 5% slabinfo.biovec-128.num_objs
13422 ± 97% +121.1% 29678 ± 2% slabinfo.btrfs_extent_map.active_objs
14638 ± 98% +117.8% 31888 ± 2% slabinfo.btrfs_extent_map.num_objs
3835 ± 18% +40.9% 5404 ± 7% slabinfo.dmaengine-unmap-16.active_objs
3924 ± 18% +39.9% 5490 ± 8% slabinfo.dmaengine-unmap-16.num_objs
3482 ± 96% +119.1% 7631 ± 10% slabinfo.khugepaged_mm_slot.active_objs
3573 ± 96% +119.4% 7839 ± 10% slabinfo.khugepaged_mm_slot.num_objs
8629 ± 52% -49.2% 4384 slabinfo.kmalloc-rcl-64.active_objs
8629 ± 52% -49.2% 4384 slabinfo.kmalloc-rcl-64.num_objs
2309 ± 57% +82.1% 4206 ± 5% slabinfo.mnt_cache.active_objs
2336 ± 57% +80.8% 4224 ± 5% slabinfo.mnt_cache.num_objs
5320 ± 48% +69.1% 8999 ± 23% slabinfo.pool_workqueue.active_objs
165.75 ± 48% +69.4% 280.75 ± 23% slabinfo.pool_workqueue.active_slabs
5320 ± 48% +69.2% 8999 ± 23% slabinfo.pool_workqueue.num_objs
165.75 ± 48% +69.4% 280.75 ± 23% slabinfo.pool_workqueue.num_slabs
3306 ± 15% +27.0% 4199 ± 3% slabinfo.task_group.active_objs
3333 ± 16% +30.1% 4336 ± 3% slabinfo.task_group.num_objs
14.74 ± 2% +1.8 16.53 ± 2% perf-stat.i.cache-miss-rate%
22459727 ± 20% +46.7% 32955572 ± 4% perf-stat.i.cache-misses
33575 ± 19% +68.8% 56658 ± 13% perf-stat.i.cpu-migrations
0.03 ± 20% +0.0 0.05 ± 8% perf-stat.i.dTLB-load-miss-rate%
6351703 ± 33% +47.2% 9352532 ± 9% perf-stat.i.dTLB-load-misses
0.45 ± 3% -3.0% 0.44 perf-stat.i.ipc
4711345 ± 18% +43.9% 6780944 ± 7% perf-stat.i.node-load-misses
82.51 +4.5 86.97 perf-stat.i.node-store-miss-rate%
2861142 ± 31% +60.8% 4601146 ± 5% perf-stat.i.node-store-misses
0.92 ± 6% -0.1 0.85 ± 2% perf-stat.overall.branch-miss-rate%
0.02 ± 3% +0.0 0.02 ± 4% perf-stat.overall.dTLB-store-miss-rate%
715.05 ± 5% +9.9% 785.50 ± 4% perf-stat.overall.instructions-per-iTLB-miss
0.44 ± 2% -5.4% 0.42 ± 2% perf-stat.overall.ipc
79.67 +2.1 81.80 ± 2% perf-stat.overall.node-store-miss-rate%
22237897 ± 19% +46.4% 32560557 ± 5% perf-stat.ps.cache-misses
32491 ± 18% +70.5% 55390 ± 13% perf-stat.ps.cpu-migrations
6071108 ± 31% +45.0% 8804767 ± 9% perf-stat.ps.dTLB-load-misses
1866 ± 98% -91.9% 150.48 ± 2% perf-stat.ps.major-faults
4593546 ± 16% +42.4% 6541402 ± 7% perf-stat.ps.node-load-misses
2757176 ± 29% +58.4% 4368169 ± 5% perf-stat.ps.node-store-misses
1.303e+12 ± 3% -9.8% 1.175e+12 ± 3% perf-stat.total.instructions
***************************************************************************************************
lkp-bdw-ep6: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
=========================================================================================
class/compiler/cpufreq_governor/disk/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime/ucode:
interrupt/gcc-7/performance/1HDD/x86_64-rhel-7.6/100%/debian-x86_64-2019-11-14.cgz/lkp-bdw-ep6/stress-ng/30s/0xb000038
commit:
fcf0553db6 ("sched/fair: Remove meaningless imbalance calculation")
0b0695f2b3 ("sched/fair: Rework load_balance()")
fcf0553db6f4c793 0b0695f2b34a4afa3f6e9aa1ff0
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
1:4 -25% :4 dmesg.WARNING:at#for_ip_interrupt_entry/0x
%stddev %change %stddev
\ | \
98245522 +42.3% 1.398e+08 stress-ng.schedpolicy.ops
3274860 +42.3% 4661027 stress-ng.schedpolicy.ops_per_sec
3.473e+08 -9.7% 3.137e+08 stress-ng.sigq.ops
11576537 -9.7% 10454846 stress-ng.sigq.ops_per_sec
38097605 ± 6% +10.3% 42011440 ± 4% stress-ng.sigrt.ops
1269646 ± 6% +10.3% 1400024 ± 4% stress-ng.sigrt.ops_per_sec
3.628e+08 ± 4% -21.5% 2.848e+08 ± 10% stress-ng.time.involuntary_context_switches
7040 +2.9% 7245 stress-ng.time.percent_of_cpu_this_job_got
15.09 ± 3% -13.4% 13.07 ± 5% iostat.cpu.idle
14.82 ± 3% -2.0 12.80 ± 5% mpstat.cpu.all.idle%
3.333e+08 ± 17% +59.9% 5.331e+08 ± 22% cpuidle.C1.time
5985148 ± 23% +112.5% 12719679 ± 20% cpuidle.C1E.usage
14.50 ± 3% -12.1% 12.75 ± 6% vmstat.cpu.id
1113131 ± 2% -10.5% 996285 ± 3% vmstat.system.cs
2269 +2.4% 2324 turbostat.Avg_MHz
0.64 ± 17% +0.4 1.02 ± 23% turbostat.C1%
5984799 ± 23% +112.5% 12719086 ± 20% turbostat.C1E
4.17 ± 32% -46.0% 2.25 ± 38% turbostat.Pkg%pc2
216.57 +2.1% 221.12 turbostat.PkgWatt
13.33 ± 3% +3.9% 13.84 turbostat.RAMWatt
99920 +13.6% 113486 ± 15% proc-vmstat.nr_active_anon
5738 +1.2% 5806 proc-vmstat.nr_inactive_anon
46788 +2.1% 47749 proc-vmstat.nr_slab_unreclaimable
99920 +13.6% 113486 ± 15% proc-vmstat.nr_zone_active_anon
5738 +1.2% 5806 proc-vmstat.nr_zone_inactive_anon
3150 ± 2% +35.4% 4265 ± 33% proc-vmstat.numa_huge_pte_updates
1641223 +34.3% 2203844 ± 32% proc-vmstat.numa_pte_updates
13575 ± 18% +62.1% 21999 ± 4% slabinfo.ext4_extent_status.active_objs
13954 ± 17% +57.7% 21999 ± 4% slabinfo.ext4_extent_status.num_objs
2527 ± 4% +9.8% 2774 ± 2% slabinfo.khugepaged_mm_slot.active_objs
2527 ± 4% +9.8% 2774 ± 2% slabinfo.khugepaged_mm_slot.num_objs
57547 ± 8% -15.3% 48743 ± 9% slabinfo.kmalloc-rcl-64.active_objs
898.75 ± 8% -15.3% 761.00 ± 9% slabinfo.kmalloc-rcl-64.active_slabs
57547 ± 8% -15.3% 48743 ± 9% slabinfo.kmalloc-rcl-64.num_objs
898.75 ± 8% -15.3% 761.00 ± 9% slabinfo.kmalloc-rcl-64.num_slabs
1.014e+10 +1.7% 1.031e+10 perf-stat.i.branch-instructions
13.37 ± 4% +2.0 15.33 ± 3% perf-stat.i.cache-miss-rate%
1.965e+11 +2.6% 2.015e+11 perf-stat.i.cpu-cycles
20057708 ± 4% +13.9% 22841468 ± 4% perf-stat.i.iTLB-loads
4.973e+10 +1.4% 5.042e+10 perf-stat.i.instructions
3272 ± 2% +2.9% 3366 perf-stat.i.minor-faults
4500892 ± 3% +18.9% 5351518 ± 6% perf-stat.i.node-store-misses
3.91 +1.3% 3.96 perf-stat.overall.cpi
69.62 -1.5 68.11 perf-stat.overall.iTLB-load-miss-rate%
1.047e+10 +1.3% 1.061e+10 perf-stat.ps.branch-instructions
1117454 ± 2% -10.6% 999467 ± 3% perf-stat.ps.context-switches
1.986e+11 +2.4% 2.033e+11 perf-stat.ps.cpu-cycles
19614413 ± 4% +13.6% 22288555 ± 4% perf-stat.ps.iTLB-loads
3493 -1.1% 3453 perf-stat.ps.minor-faults
4546636 ± 3% +17.0% 5321658 ± 5% perf-stat.ps.node-store-misses
0.64 ± 3% -0.2 0.44 ± 57% perf-profile.calltrace.cycles-pp.common_timer_get.do_timer_gettime.__x64_sys_timer_gettime.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.66 ± 3% -0.1 0.58 ± 7% perf-profile.children.cycles-pp.common_timer_get
0.44 ± 4% -0.1 0.39 ± 5% perf-profile.children.cycles-pp.posix_ktime_get_ts
0.39 ± 5% -0.0 0.34 ± 6% perf-profile.children.cycles-pp.ktime_get_ts64
0.07 ± 17% +0.0 0.10 ± 8% perf-profile.children.cycles-pp.task_tick_fair
0.08 ± 15% +0.0 0.11 ± 7% perf-profile.children.cycles-pp.scheduler_tick
0.46 ± 5% +0.1 0.54 ± 6% perf-profile.children.cycles-pp.__might_sleep
0.69 ± 8% +0.2 0.85 ± 12% perf-profile.children.cycles-pp.___might_sleep
0.90 ± 5% -0.2 0.73 ± 9% perf-profile.self.cycles-pp.__might_fault
0.40 ± 6% -0.1 0.33 ± 9% perf-profile.self.cycles-pp.do_timer_gettime
0.50 ± 4% -0.1 0.45 ± 7% perf-profile.self.cycles-pp.put_itimerspec64
0.32 ± 2% -0.0 0.27 ± 9% perf-profile.self.cycles-pp.update_curr_fair
0.20 ± 6% -0.0 0.18 ± 2% perf-profile.self.cycles-pp.ktime_get_ts64
0.08 ± 23% +0.0 0.12 ± 8% perf-profile.self.cycles-pp._raw_spin_trylock
0.42 ± 5% +0.1 0.50 ± 6% perf-profile.self.cycles-pp.__might_sleep
0.66 ± 9% +0.2 0.82 ± 12% perf-profile.self.cycles-pp.___might_sleep
47297 ± 13% +19.7% 56608 ± 5% softirqs.CPU13.SCHED
47070 ± 3% +20.5% 56735 ± 7% softirqs.CPU2.SCHED
55443 ± 9% -20.2% 44250 ± 2% softirqs.CPU28.SCHED
56633 ± 3% -12.6% 49520 ± 7% softirqs.CPU34.SCHED
56599 ± 11% -18.0% 46384 ± 2% softirqs.CPU36.SCHED
56909 ± 9% -18.4% 46438 ± 6% softirqs.CPU40.SCHED
45062 ± 9% +28.1% 57709 ± 9% softirqs.CPU45.SCHED
43959 +28.7% 56593 ± 9% softirqs.CPU49.SCHED
46235 ± 10% +22.2% 56506 ± 11% softirqs.CPU5.SCHED
44779 ± 12% +22.5% 54859 ± 11% softirqs.CPU57.SCHED
46739 ± 10% +21.1% 56579 ± 8% softirqs.CPU6.SCHED
53129 ± 4% -13.1% 46149 ± 8% softirqs.CPU70.SCHED
55822 ± 7% -20.5% 44389 ± 8% softirqs.CPU73.SCHED
56011 ± 5% -11.4% 49610 ± 7% softirqs.CPU77.SCHED
55263 ± 9% -13.2% 47942 ± 12% softirqs.CPU78.SCHED
58792 ± 14% -21.3% 46291 ± 9% softirqs.CPU81.SCHED
53341 ± 7% -13.7% 46041 ± 10% softirqs.CPU83.SCHED
59096 ± 15% -23.9% 44998 ± 6% softirqs.CPU85.SCHED
36647 -98.5% 543.00 ± 61% numa-meminfo.node0.Active(file)
620922 ± 4% -10.4% 556566 ± 5% numa-meminfo.node0.FilePages
21243 ± 3% -36.2% 13543 ± 41% numa-meminfo.node0.Inactive
20802 ± 3% -35.3% 13455 ± 42% numa-meminfo.node0.Inactive(anon)
15374 ± 9% -27.2% 11193 ± 8% numa-meminfo.node0.KernelStack
21573 -34.7% 14084 ± 14% numa-meminfo.node0.Mapped
1136795 ± 5% -12.4% 995965 ± 6% numa-meminfo.node0.MemUsed
16420 ± 6% -66.0% 5580 ± 18% numa-meminfo.node0.PageTables
108182 ± 2% -18.5% 88150 ± 3% numa-meminfo.node0.SUnreclaim
166467 ± 2% -15.8% 140184 ± 4% numa-meminfo.node0.Slab
181705 ± 36% +63.8% 297623 ± 10% numa-meminfo.node1.Active
320.75 ± 27% +11187.0% 36203 numa-meminfo.node1.Active(file)
2208 ± 38% +362.1% 10207 ± 54% numa-meminfo.node1.Inactive
2150 ± 39% +356.0% 9804 ± 58% numa-meminfo.node1.Inactive(anon)
41819 ± 10% +17.3% 49068 ± 6% numa-meminfo.node1.KReclaimable
11711 ± 5% +47.2% 17238 ± 22% numa-meminfo.node1.KernelStack
10642 +68.3% 17911 ± 11% numa-meminfo.node1.Mapped
952520 ± 6% +20.3% 1146337 ± 3% numa-meminfo.node1.MemUsed
12342 ± 15% +92.4% 23741 ± 9% numa-meminfo.node1.PageTables
41819 ± 10% +17.3% 49068 ± 6% numa-meminfo.node1.SReclaimable
80394 ± 3% +27.1% 102206 ± 3% numa-meminfo.node1.SUnreclaim
122214 ± 3% +23.8% 151275 ± 3% numa-meminfo.node1.Slab
9160 -98.5% 135.25 ± 61% numa-vmstat.node0.nr_active_file
155223 ± 4% -10.4% 139122 ± 5% numa-vmstat.node0.nr_file_pages
5202 ± 3% -35.4% 3362 ± 42% numa-vmstat.node0.nr_inactive_anon
109.50 ± 14% -80.1% 21.75 ±160% numa-vmstat.node0.nr_inactive_file
14757 ± 3% -34.4% 9676 ± 12% numa-vmstat.node0.nr_kernel_stack
5455 -34.9% 3549 ± 12% numa-vmstat.node0.nr_mapped
4069 ± 6% -68.3% 1289 ± 24% numa-vmstat.node0.nr_page_table_pages
26943 ± 2% -19.2% 21761 ± 3% numa-vmstat.node0.nr_slab_unreclaimable
2240 ± 6% -97.8% 49.00 ± 69% numa-vmstat.node0.nr_written
9160 -98.5% 135.25 ± 61% numa-vmstat.node0.nr_zone_active_file
5202 ± 3% -35.4% 3362 ± 42% numa-vmstat.node0.nr_zone_inactive_anon
109.50 ± 14% -80.1% 21.75 ±160% numa-vmstat.node0.nr_zone_inactive_file
79.75 ± 28% +11247.0% 9049 numa-vmstat.node1.nr_active_file
542.25 ± 41% +352.1% 2451 ± 58% numa-vmstat.node1.nr_inactive_anon
14.00 ±140% +617.9% 100.50 ± 35% numa-vmstat.node1.nr_inactive_file
11182 ± 4% +28.9% 14415 ± 4% numa-vmstat.node1.nr_kernel_stack
2728 ± 3% +67.7% 4576 ± 9% numa-vmstat.node1.nr_mapped
3056 ± 15% +88.2% 5754 ± 8% numa-vmstat.node1.nr_page_table_pages
10454 ± 10% +17.3% 12262 ± 7% numa-vmstat.node1.nr_slab_reclaimable
20006 ± 3% +25.0% 25016 ± 3% numa-vmstat.node1.nr_slab_unreclaimable
19.00 ± 52% +11859.2% 2272 ± 2% numa-vmstat.node1.nr_written
79.75 ± 28% +11247.0% 9049 numa-vmstat.node1.nr_zone_active_file
542.25 ± 41% +352.1% 2451 ± 58% numa-vmstat.node1.nr_zone_inactive_anon
14.00 ±140% +617.9% 100.50 ± 35% numa-vmstat.node1.nr_zone_inactive_file
173580 ± 21% +349.5% 780280 ± 7% sched_debug.cfs_rq:/.MIN_vruntime.avg
6891819 ± 37% +109.1% 14412817 ± 9% sched_debug.cfs_rq:/.MIN_vruntime.max
1031500 ± 25% +189.1% 2982452 ± 8% sched_debug.cfs_rq:/.MIN_vruntime.stddev
149079 +13.6% 169354 ± 2% sched_debug.cfs_rq:/.exec_clock.min
8550 ± 3% -59.7% 3442 ± 32% sched_debug.cfs_rq:/.exec_clock.stddev
4.95 ± 6% -15.2% 4.20 ± 10% sched_debug.cfs_rq:/.load_avg.min
173580 ± 21% +349.5% 780280 ± 7% sched_debug.cfs_rq:/.max_vruntime.avg
6891819 ± 37% +109.1% 14412817 ± 9% sched_debug.cfs_rq:/.max_vruntime.max
1031500 ± 25% +189.1% 2982452 ± 8% sched_debug.cfs_rq:/.max_vruntime.stddev
16144141 +27.9% 20645199 ± 6% sched_debug.cfs_rq:/.min_vruntime.avg
17660392 +27.7% 22546402 ± 4% sched_debug.cfs_rq:/.min_vruntime.max
13747718 +36.8% 18802595 ± 5% sched_debug.cfs_rq:/.min_vruntime.min
0.17 ± 11% +35.0% 0.22 ± 15% sched_debug.cfs_rq:/.nr_running.stddev
10.64 ± 14% -26.4% 7.83 ± 12% sched_debug.cpu.clock.stddev
10.64 ± 14% -26.4% 7.83 ± 12% sched_debug.cpu.clock_task.stddev
7093 ± 42% -65.9% 2420 ±120% sched_debug.cpu.curr->pid.min
2434979 ± 2% -18.6% 1981697 ± 3% sched_debug.cpu.nr_switches.avg
3993189 ± 6% -22.2% 3104832 ± 5% sched_debug.cpu.nr_switches.max
-145.03 -42.8% -82.90 sched_debug.cpu.nr_uninterruptible.min
2097122 ± 6% +38.7% 2908923 ± 6% sched_debug.cpu.sched_count.min
809684 ± 13% -30.5% 562929 ± 17% sched_debug.cpu.sched_count.stddev
307565 ± 4% -15.1% 261231 ± 3% sched_debug.cpu.ttwu_count.min
207286 ± 6% -16.4% 173387 ± 3% sched_debug.cpu.ttwu_local.min
125963 ± 23% +53.1% 192849 ± 2% sched_debug.cpu.ttwu_local.stddev
2527246 +10.8% 2800959 ± 3% sched_debug.cpu.yld_count.avg
1294266 ± 4% +53.7% 1989264 ± 2% sched_debug.cpu.yld_count.min
621332 ± 9% -38.4% 382813 ± 22% sched_debug.cpu.yld_count.stddev
899.50 ± 28% -48.2% 465.75 ± 42% interrupts.37:IR-PCI-MSI.1572868-edge.eth0-TxRx-3
372.50 ± 7% +169.5% 1004 ± 40% interrupts.40:IR-PCI-MSI.1572871-edge.eth0-TxRx-6
6201 ± 8% +17.9% 7309 ± 3% interrupts.CPU0.CAL:Function_call_interrupts
653368 ± 47% +159.4% 1695029 ± 17% interrupts.CPU0.RES:Rescheduling_interrupts
7104 ± 7% +13.6% 8067 interrupts.CPU1.CAL:Function_call_interrupts
2094 ± 59% +89.1% 3962 ± 10% interrupts.CPU10.TLB:TLB_shootdowns
7309 ± 8% +11.2% 8125 interrupts.CPU11.CAL:Function_call_interrupts
2089 ± 62% +86.2% 3890 ± 11% interrupts.CPU13.TLB:TLB_shootdowns
7068 ± 8% +15.2% 8144 ± 2% interrupts.CPU14.CAL:Function_call_interrupts
7112 ± 7% +13.6% 8079 ± 3% interrupts.CPU15.CAL:Function_call_interrupts
1950 ± 61% +103.5% 3968 ± 11% interrupts.CPU15.TLB:TLB_shootdowns
899.50 ± 28% -48.2% 465.75 ± 42% interrupts.CPU16.37:IR-PCI-MSI.1572868-edge.eth0-TxRx-3
2252 ± 47% +62.6% 3664 ± 15% interrupts.CPU16.TLB:TLB_shootdowns
7111 ± 8% +14.8% 8167 ± 3% interrupts.CPU18.CAL:Function_call_interrupts
1972 ± 60% +96.3% 3872 ± 9% interrupts.CPU18.TLB:TLB_shootdowns
372.50 ± 7% +169.5% 1004 ± 40% interrupts.CPU19.40:IR-PCI-MSI.1572871-edge.eth0-TxRx-6
2942 ± 12% -57.5% 1251 ± 22% interrupts.CPU22.TLB:TLB_shootdowns
7819 -12.2% 6861 ± 3% interrupts.CPU23.CAL:Function_call_interrupts
3327 ± 12% -62.7% 1241 ± 29% interrupts.CPU23.TLB:TLB_shootdowns
7767 ± 3% -14.0% 6683 ± 5% interrupts.CPU24.CAL:Function_call_interrupts
3185 ± 21% -63.8% 1154 ± 14% interrupts.CPU24.TLB:TLB_shootdowns
7679 ± 4% -11.3% 6812 ± 2% interrupts.CPU25.CAL:Function_call_interrupts
3004 ± 28% -63.4% 1100 ± 7% interrupts.CPU25.TLB:TLB_shootdowns
3187 ± 17% -61.3% 1232 ± 35% interrupts.CPU26.TLB:TLB_shootdowns
3193 ± 16% -59.3% 1299 ± 34% interrupts.CPU27.TLB:TLB_shootdowns
3059 ± 21% -58.0% 1285 ± 32% interrupts.CPU28.TLB:TLB_shootdowns
7798 ± 4% -13.8% 6719 ± 7% interrupts.CPU29.CAL:Function_call_interrupts
3122 ± 20% -62.3% 1178 ± 37% interrupts.CPU29.TLB:TLB_shootdowns
7727 ± 2% -11.6% 6827 ± 5% interrupts.CPU30.CAL:Function_call_interrupts
3102 ± 18% -59.4% 1259 ± 33% interrupts.CPU30.TLB:TLB_shootdowns
3269 ± 24% -58.1% 1371 ± 48% interrupts.CPU31.TLB:TLB_shootdowns
7918 ± 3% -14.5% 6771 interrupts.CPU32.CAL:Function_call_interrupts
3324 ± 18% -70.7% 973.50 ± 18% interrupts.CPU32.TLB:TLB_shootdowns
2817 ± 27% -60.2% 1121 ± 26% interrupts.CPU33.TLB:TLB_shootdowns
7956 ± 3% -11.8% 7018 ± 4% interrupts.CPU34.CAL:Function_call_interrupts
3426 ± 21% -70.3% 1018 ± 29% interrupts.CPU34.TLB:TLB_shootdowns
3121 ± 17% -70.3% 926.75 ± 22% interrupts.CPU35.TLB:TLB_shootdowns
7596 ± 4% -10.6% 6793 ± 3% interrupts.CPU36.CAL:Function_call_interrupts
2900 ± 30% -62.3% 1094 ± 34% interrupts.CPU36.TLB:TLB_shootdowns
7863 -13.1% 6833 ± 2% interrupts.CPU37.CAL:Function_call_interrupts
3259 ± 15% -65.9% 1111 ± 20% interrupts.CPU37.TLB:TLB_shootdowns
3230 ± 26% -64.0% 1163 ± 39% interrupts.CPU38.TLB:TLB_shootdowns
7728 ± 5% -13.8% 6662 ± 7% interrupts.CPU39.CAL:Function_call_interrupts
2950 ± 29% -61.6% 1133 ± 26% interrupts.CPU39.TLB:TLB_shootdowns
6864 ± 3% +18.7% 8147 interrupts.CPU4.CAL:Function_call_interrupts
1847 ± 59% +118.7% 4039 ± 7% interrupts.CPU4.TLB:TLB_shootdowns
7951 ± 6% -15.0% 6760 ± 2% interrupts.CPU40.CAL:Function_call_interrupts
3200 ± 30% -72.3% 886.50 ± 39% interrupts.CPU40.TLB:TLB_shootdowns
7819 ± 6% -11.3% 6933 ± 2% interrupts.CPU41.CAL:Function_call_interrupts
3149 ± 28% -62.9% 1169 ± 24% interrupts.CPU41.TLB:TLB_shootdowns
7884 ± 4% -11.0% 7019 ± 2% interrupts.CPU42.CAL:Function_call_interrupts
3248 ± 16% -63.4% 1190 ± 23% interrupts.CPU42.TLB:TLB_shootdowns
7659 ± 5% -12.7% 6690 ± 3% interrupts.CPU43.CAL:Function_call_interrupts
490732 ± 20% +114.5% 1052606 ± 47% interrupts.CPU43.RES:Rescheduling_interrupts
1432688 ± 34% -67.4% 467217 ± 43% interrupts.CPU47.RES:Rescheduling_interrupts
7122 ± 8% +16.0% 8259 ± 3% interrupts.CPU48.CAL:Function_call_interrupts
1868 ± 65% +118.4% 4079 ± 8% interrupts.CPU48.TLB:TLB_shootdowns
7165 ± 8% +11.3% 7977 ± 5% interrupts.CPU49.CAL:Function_call_interrupts
1961 ± 59% +98.4% 3891 ± 4% interrupts.CPU49.TLB:TLB_shootdowns
461807 ± 47% +190.8% 1342990 ± 48% interrupts.CPU5.RES:Rescheduling_interrupts
7167 ± 7% +15.4% 8273 interrupts.CPU50.CAL:Function_call_interrupts
2027 ± 51% +103.9% 4134 ± 8% interrupts.CPU50.TLB:TLB_shootdowns
7163 ± 9% +16.3% 8328 interrupts.CPU51.CAL:Function_call_interrupts
660073 ± 33% +74.0% 1148640 ± 25% interrupts.CPU51.RES:Rescheduling_interrupts
2043 ± 64% +95.8% 4000 ± 5% interrupts.CPU51.TLB:TLB_shootdowns
7428 ± 9% +13.5% 8434 ± 2% interrupts.CPU52.CAL:Function_call_interrupts
2280 ± 61% +85.8% 4236 ± 9% interrupts.CPU52.TLB:TLB_shootdowns
7144 ± 11% +17.8% 8413 interrupts.CPU53.CAL:Function_call_interrupts
1967 ± 67% +104.7% 4026 ± 5% interrupts.CPU53.TLB:TLB_shootdowns
7264 ± 10% +15.6% 8394 ± 4% interrupts.CPU54.CAL:Function_call_interrupts
7045 ± 11% +18.7% 8365 ± 2% interrupts.CPU56.CAL:Function_call_interrupts
2109 ± 59% +91.6% 4041 ± 10% interrupts.CPU56.TLB:TLB_shootdowns
7307 ± 9% +15.3% 8428 ± 2% interrupts.CPU57.CAL:Function_call_interrupts
2078 ± 64% +96.5% 4085 ± 6% interrupts.CPU57.TLB:TLB_shootdowns
6834 ± 12% +19.8% 8190 ± 3% interrupts.CPU58.CAL:Function_call_interrupts
612496 ± 85% +122.5% 1362815 ± 27% interrupts.CPU58.RES:Rescheduling_interrupts
1884 ± 69% +112.0% 3995 ± 8% interrupts.CPU58.TLB:TLB_shootdowns
7185 ± 8% +15.9% 8329 interrupts.CPU59.CAL:Function_call_interrupts
1982 ± 58% +101.1% 3986 ± 5% interrupts.CPU59.TLB:TLB_shootdowns
7051 ± 6% +13.1% 7975 interrupts.CPU6.CAL:Function_call_interrupts
1831 ± 49% +102.1% 3701 ± 8% interrupts.CPU6.TLB:TLB_shootdowns
7356 ± 8% +16.2% 8548 interrupts.CPU60.CAL:Function_call_interrupts
2124 ± 57% +92.8% 4096 ± 5% interrupts.CPU60.TLB:TLB_shootdowns
7243 ± 9% +15.1% 8334 interrupts.CPU61.CAL:Function_call_interrupts
572423 ± 71% +110.0% 1201919 ± 40% interrupts.CPU61.RES:Rescheduling_interrupts
7295 ± 9% +14.7% 8369 interrupts.CPU63.CAL:Function_call_interrupts
2139 ± 57% +85.7% 3971 ± 3% interrupts.CPU63.TLB:TLB_shootdowns
7964 ± 2% -15.6% 6726 ± 5% interrupts.CPU66.CAL:Function_call_interrupts
3198 ± 21% -65.0% 1119 ± 24% interrupts.CPU66.TLB:TLB_shootdowns
8103 ± 2% -17.5% 6687 ± 9% interrupts.CPU67.CAL:Function_call_interrupts
3357 ± 18% -62.9% 1244 ± 32% interrupts.CPU67.TLB:TLB_shootdowns
7772 ± 2% -14.0% 6687 ± 8% interrupts.CPU68.CAL:Function_call_interrupts
2983 ± 17% -59.2% 1217 ± 15% interrupts.CPU68.TLB:TLB_shootdowns
7986 ± 4% -13.8% 6887 ± 4% interrupts.CPU69.CAL:Function_call_interrupts
3192 ± 24% -65.0% 1117 ± 30% interrupts.CPU69.TLB:TLB_shootdowns
7070 ± 6% +14.6% 8100 ± 2% interrupts.CPU7.CAL:Function_call_interrupts
697891 ± 32% +54.4% 1077890 ± 18% interrupts.CPU7.RES:Rescheduling_interrupts
1998 ± 55% +97.1% 3938 ± 10% interrupts.CPU7.TLB:TLB_shootdowns
8085 -13.4% 7002 ± 3% interrupts.CPU70.CAL:Function_call_interrupts
1064985 ± 35% -62.5% 398986 ± 29% interrupts.CPU70.RES:Rescheduling_interrupts
3347 ± 12% -61.7% 1280 ± 24% interrupts.CPU70.TLB:TLB_shootdowns
2916 ± 16% -58.8% 1201 ± 39% interrupts.CPU71.TLB:TLB_shootdowns
3314 ± 19% -61.3% 1281 ± 26% interrupts.CPU72.TLB:TLB_shootdowns
3119 ± 18% -61.5% 1200 ± 39% interrupts.CPU73.TLB:TLB_shootdowns
7992 ± 4% -12.6% 6984 ± 3% interrupts.CPU74.CAL:Function_call_interrupts
3187 ± 21% -56.8% 1378 ± 40% interrupts.CPU74.TLB:TLB_shootdowns
7953 ± 4% -12.0% 6999 ± 4% interrupts.CPU75.CAL:Function_call_interrupts
3072 ± 26% -56.8% 1327 ± 34% interrupts.CPU75.TLB:TLB_shootdowns
8119 ± 5% -12.4% 7109 ± 7% interrupts.CPU76.CAL:Function_call_interrupts
3418 ± 20% -67.5% 1111 ± 31% interrupts.CPU76.TLB:TLB_shootdowns
7804 ± 5% -11.4% 6916 ± 4% interrupts.CPU77.CAL:Function_call_interrupts
7976 ± 5% -14.4% 6826 ± 3% interrupts.CPU78.CAL:Function_call_interrupts
3209 ± 27% -71.8% 904.75 ± 28% interrupts.CPU78.TLB:TLB_shootdowns
8187 ± 4% -14.6% 6991 ± 3% interrupts.CPU79.CAL:Function_call_interrupts
3458 ± 20% -67.5% 1125 ± 36% interrupts.CPU79.TLB:TLB_shootdowns
7122 ± 7% +14.2% 8136 ± 2% interrupts.CPU8.CAL:Function_call_interrupts
2096 ± 63% +87.4% 3928 ± 8% interrupts.CPU8.TLB:TLB_shootdowns
8130 ± 5% -17.2% 6728 ± 5% interrupts.CPU81.CAL:Function_call_interrupts
3253 ± 24% -70.6% 955.00 ± 38% interrupts.CPU81.TLB:TLB_shootdowns
7940 ± 5% -13.9% 6839 ± 5% interrupts.CPU82.CAL:Function_call_interrupts
2952 ± 26% -66.3% 996.00 ± 51% interrupts.CPU82.TLB:TLB_shootdowns
7900 ± 6% -13.4% 6844 ± 3% interrupts.CPU83.CAL:Function_call_interrupts
3012 ± 34% -68.3% 956.00 ± 17% interrupts.CPU83.TLB:TLB_shootdowns
7952 ± 6% -15.8% 6695 ± 2% interrupts.CPU84.CAL:Function_call_interrupts
3049 ± 31% -75.5% 746.50 ± 27% interrupts.CPU84.TLB:TLB_shootdowns
8065 ± 6% -15.7% 6798 interrupts.CPU85.CAL:Function_call_interrupts
3222 ± 23% -69.7% 976.00 ± 13% interrupts.CPU85.TLB:TLB_shootdowns
8049 ± 5% -13.2% 6983 ± 4% interrupts.CPU86.CAL:Function_call_interrupts
3159 ± 19% -61.9% 1202 ± 27% interrupts.CPU86.TLB:TLB_shootdowns
8154 ± 8% -16.9% 6773 ± 3% interrupts.CPU87.CAL:Function_call_interrupts
1432962 ± 21% -48.5% 737989 ± 30% interrupts.CPU87.RES:Rescheduling_interrupts
3186 ± 33% -72.3% 881.75 ± 21% interrupts.CPU87.TLB:TLB_shootdowns
***************************************************************************************************
lkp-bdw-ep6: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
=========================================================================================
class/compiler/cpufreq_governor/disk/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime/ucode:
interrupt/gcc-7/performance/1HDD/x86_64-rhel-7.6/100%/debian-x86_64-2019-11-14.cgz/lkp-bdw-ep6/stress-ng/1s/0xb000038
commit:
fcf0553db6 ("sched/fair: Remove meaningless imbalance calculation")
0b0695f2b3 ("sched/fair: Rework load_balance()")
fcf0553db6f4c793 0b0695f2b34a4afa3f6e9aa1ff0
---------------- ---------------------------
%stddev %change %stddev
\ | \
3345449 +35.1% 4518187 ± 5% stress-ng.schedpolicy.ops
3347036 +35.1% 4520740 ± 5% stress-ng.schedpolicy.ops_per_sec
11464910 ± 6% -23.3% 8796455 ± 11% stress-ng.sigq.ops
11452565 ± 6% -23.3% 8786844 ± 11% stress-ng.sigq.ops_per_sec
228736 +20.7% 276087 ± 20% stress-ng.sleep.ops
157479 +23.0% 193722 ± 21% stress-ng.sleep.ops_per_sec
14584704 -5.8% 13744640 ± 4% stress-ng.timerfd.ops
14546032 -5.7% 13718862 ± 4% stress-ng.timerfd.ops_per_sec
27.24 ±105% +283.9% 104.58 ±109% iostat.sdb.r_await.max
122324 ± 35% +63.9% 200505 ± 21% meminfo.AnonHugePages
47267 ± 26% +155.2% 120638 ± 45% numa-meminfo.node1.AnonHugePages
22880 ± 6% -9.9% 20605 ± 3% softirqs.CPU57.TIMER
636196 ± 24% +38.5% 880847 ± 7% cpuidle.C1.usage
55936214 ± 20% +63.9% 91684673 ± 18% cpuidle.C1E.time
1.175e+08 ± 22% +101.8% 2.372e+08 ± 29% cpuidle.C3.time
4.242e+08 ± 6% -39.1% 2.584e+08 ± 39% cpuidle.C6.time
59.50 ± 34% +66.0% 98.75 ± 22% proc-vmstat.nr_anon_transparent_hugepages
25612 ± 10% +13.8% 29146 ± 4% proc-vmstat.nr_kernel_stack
2783465 ± 9% +14.5% 3187157 ± 9% proc-vmstat.pgalloc_normal
1743 ± 28% +43.8% 2507 ± 23% proc-vmstat.thp_deferred_split_page
1765 ± 30% +43.2% 2529 ± 22% proc-vmstat.thp_fault_alloc
811.00 ± 3% -13.8% 699.00 ± 7% slabinfo.kmem_cache_node.active_objs
864.00 ± 3% -13.0% 752.00 ± 7% slabinfo.kmem_cache_node.num_objs
8686 ± 7% +13.6% 9869 ± 3% slabinfo.pid.active_objs
8690 ± 7% +13.8% 9890 ± 3% slabinfo.pid.num_objs
9813 ± 6% +15.7% 11352 ± 3% slabinfo.task_delay_info.active_objs
9813 ± 6% +15.7% 11352 ± 3% slabinfo.task_delay_info.num_objs
79.22 ± 10% -41.1% 46.68 ± 22% sched_debug.cfs_rq:/.load_avg.avg
242.49 ± 6% -29.6% 170.70 ± 17% sched_debug.cfs_rq:/.load_avg.stddev
43.14 ± 29% -67.1% 14.18 ± 66% sched_debug.cfs_rq:/.removed.load_avg.avg
201.73 ± 15% -50.1% 100.68 ± 60% sched_debug.cfs_rq:/.removed.load_avg.stddev
1987 ± 28% -67.3% 650.09 ± 66% sched_debug.cfs_rq:/.removed.runnable_sum.avg
9298 ± 15% -50.3% 4616 ± 60% sched_debug.cfs_rq:/.removed.runnable_sum.stddev
18.17 ± 27% -68.6% 5.70 ± 63% sched_debug.cfs_rq:/.removed.util_avg.avg
87.61 ± 13% -52.6% 41.48 ± 59% sched_debug.cfs_rq:/.removed.util_avg.stddev
633327 ± 24% +38.4% 876596 ± 7% turbostat.C1
2.75 ± 22% +1.8 4.52 ± 17% turbostat.C1E%
5.76 ± 22% +6.1 11.82 ± 30% turbostat.C3%
20.69 ± 5% -8.1 12.63 ± 38% turbostat.C6%
15.62 ± 6% +18.4% 18.50 ± 8% turbostat.CPU%c1
1.56 ± 16% +208.5% 4.82 ± 38% turbostat.CPU%c3
12.81 ± 4% -48.1% 6.65 ± 43% turbostat.CPU%c6
5.02 ± 8% -34.6% 3.28 ± 14% turbostat.Pkg%pc2
0.85 ± 57% -84.7% 0.13 ±173% turbostat.Pkg%pc6
88.25 ± 13% +262.6% 320.00 ± 71% interrupts.CPU10.TLB:TLB_shootdowns
116.25 ± 36% +151.6% 292.50 ± 68% interrupts.CPU19.TLB:TLB_shootdowns
109.25 ± 8% +217.4% 346.75 ±106% interrupts.CPU2.TLB:TLB_shootdowns
15180 ±111% +303.9% 61314 ± 32% interrupts.CPU23.RES:Rescheduling_interrupts
111.50 ± 26% +210.3% 346.00 ± 79% interrupts.CPU3.TLB:TLB_shootdowns
86.50 ± 35% +413.0% 443.75 ± 66% interrupts.CPU33.TLB:TLB_shootdowns
728.00 ± 8% +29.6% 943.50 ± 16% interrupts.CPU38.CAL:Function_call_interrupts
1070 ± 72% +84.9% 1979 ± 9% interrupts.CPU54.76:IR-PCI-MSI.512000-edge.ahci[0000:00:1f.2]
41429 ± 64% -73.7% 10882 ± 73% interrupts.CPU59.RES:Rescheduling_interrupts
26330 ± 85% -73.3% 7022 ± 86% interrupts.CPU62.RES:Rescheduling_interrupts
103.00 ± 22% +181.3% 289.75 ± 92% interrupts.CPU65.TLB:TLB_shootdowns
100.00 ± 40% +365.0% 465.00 ± 71% interrupts.CPU70.TLB:TLB_shootdowns
110.25 ± 18% +308.4% 450.25 ± 71% interrupts.CPU80.TLB:TLB_shootdowns
93.50 ± 42% +355.1% 425.50 ± 82% interrupts.CPU84.TLB:TLB_shootdowns
104.50 ± 18% +289.7% 407.25 ± 68% interrupts.CPU87.TLB:TLB_shootdowns
1.76 ± 3% -0.1 1.66 ± 4% perf-stat.i.branch-miss-rate%
8.08 ± 6% +2.0 10.04 perf-stat.i.cache-miss-rate%
18031213 ± 4% +27.2% 22939937 ± 3% perf-stat.i.cache-misses
4.041e+08 -1.9% 3.965e+08 perf-stat.i.cache-references
31764 ± 26% -40.6% 18859 ± 10% perf-stat.i.cycles-between-cache-misses
66.18 -1.5 64.71 perf-stat.i.iTLB-load-miss-rate%
4503482 ± 8% +19.5% 5382698 ± 5% perf-stat.i.node-load-misses
3892859 ± 2% +16.6% 4538750 ± 4% perf-stat.i.node-store-misses
1526815 ± 13% +25.8% 1921178 ± 9% perf-stat.i.node-stores
4.72 ± 4% +1.3 6.00 ± 3% perf-stat.overall.cache-miss-rate%
9120 ± 6% -18.9% 7394 ± 2% perf-stat.overall.cycles-between-cache-misses
18237318 ± 4% +25.4% 22866104 ± 3% perf-stat.ps.cache-misses
4392089 ± 8% +18.1% 5189251 ± 5% perf-stat.ps.node-load-misses
1629766 ± 2% +17.9% 1920947 ± 13% perf-stat.ps.node-loads
3694566 ± 2% +16.1% 4288126 ± 4% perf-stat.ps.node-store-misses
1536866 ± 12% +23.7% 1901141 ± 7% perf-stat.ps.node-stores
38.20 ± 18% -13.2 24.96 ± 10% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
38.20 ± 18% -13.2 24.96 ± 10% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
7.98 ± 67% -7.2 0.73 ±173% perf-profile.calltrace.cycles-pp.smp_call_function_single.event_function_call.perf_remove_from_context.perf_event_release_kernel.perf_release
7.98 ± 67% -7.2 0.73 ±173% perf-profile.calltrace.cycles-pp.event_function_call.perf_remove_from_context.perf_event_release_kernel.perf_release.__fput
7.98 ± 67% -7.2 0.73 ±173% perf-profile.calltrace.cycles-pp.perf_remove_from_context.perf_event_release_kernel.perf_release.__fput.task_work_run
11.86 ± 41% -6.8 5.07 ± 62% perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
11.86 ± 41% -6.8 5.07 ± 62% perf-profile.calltrace.cycles-pp.do_signal.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
11.86 ± 41% -6.8 5.07 ± 62% perf-profile.calltrace.cycles-pp.get_signal.do_signal.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
11.86 ± 41% -6.8 5.07 ± 62% perf-profile.calltrace.cycles-pp.do_group_exit.get_signal.do_signal.exit_to_usermode_loop.do_syscall_64
11.86 ± 41% -6.8 5.07 ± 62% perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.get_signal.do_signal.exit_to_usermode_loop
11.86 ± 41% -6.8 5.07 ± 62% perf-profile.calltrace.cycles-pp.task_work_run.do_exit.do_group_exit.get_signal.do_signal
4.27 ± 66% -3.5 0.73 ±173% perf-profile.calltrace.cycles-pp.read
4.05 ± 71% -3.3 0.73 ±173% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read
4.05 ± 71% -3.3 0.73 ±173% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
13.30 ± 38% -8.2 5.07 ± 62% perf-profile.children.cycles-pp.task_work_run
12.47 ± 46% -7.4 5.07 ± 62% perf-profile.children.cycles-pp.exit_to_usermode_loop
12.47 ± 46% -7.4 5.07 ± 62% perf-profile.children.cycles-pp.__fput
7.98 ± 67% -7.2 0.73 ±173% perf-profile.children.cycles-pp.perf_remove_from_context
11.86 ± 41% -6.8 5.07 ± 62% perf-profile.children.cycles-pp.do_signal
11.86 ± 41% -6.8 5.07 ± 62% perf-profile.children.cycles-pp.get_signal
9.43 ± 21% -4.7 4.72 ± 67% perf-profile.children.cycles-pp.ksys_read
9.43 ± 21% -4.7 4.72 ± 67% perf-profile.children.cycles-pp.vfs_read
4.27 ± 66% -3.5 0.73 ±173% perf-profile.children.cycles-pp.read
3.86 ±101% -3.1 0.71 ±173% perf-profile.children.cycles-pp._raw_spin_lock
3.86 ±101% -3.1 0.71 ±173% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
3.86 ±101% -3.1 0.71 ±173% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
***************************************************************************************************
lkp-csl-2sp5: 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 192G memory
=========================================================================================
class/compiler/cpufreq_governor/disk/fs/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime/ucode:
os/gcc-7/performance/1HDD/ext4/x86_64-rhel-7.6/100%/debian-x86_64-2019-09-23.cgz/lkp-csl-2sp5/stress-ng/1s/0x500002b
commit:
fcf0553db6 ("sched/fair: Remove meaningless imbalance calculation")
0b0695f2b3 ("sched/fair: Rework load_balance()")
fcf0553db6f4c793 0b0695f2b34a4afa3f6e9aa1ff0
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:2 50% 1:8 dmesg.WARNING:at_ip_selinux_file_ioctl/0x
%stddev %change %stddev
\ | \
122451 ± 11% -19.9% 98072 ± 15% stress-ng.ioprio.ops
116979 ± 11% -20.7% 92815 ± 16% stress-ng.ioprio.ops_per_sec
274187 ± 21% -26.7% 201013 ± 11% stress-ng.kill.ops
274219 ± 21% -26.7% 201040 ± 11% stress-ng.kill.ops_per_sec
3973765 -10.1% 3570462 ± 5% stress-ng.lockf.ops
3972581 -10.2% 3568935 ± 5% stress-ng.lockf.ops_per_sec
10719 ± 8% -39.9% 6442 ± 22% stress-ng.procfs.ops
9683 ± 3% -39.3% 5878 ± 22% stress-ng.procfs.ops_per_sec
6562721 -35.1% 4260609 ± 8% stress-ng.schedpolicy.ops
6564233 -35.1% 4261479 ± 8% stress-ng.schedpolicy.ops_per_sec
1070988 +21.4% 1299977 ± 7% stress-ng.sigrt.ops
1061773 +21.2% 1286618 ± 7% stress-ng.sigrt.ops_per_sec
1155684 ± 5% -14.8% 984531 ± 16% stress-ng.symlink.ops
991624 ± 4% -23.8% 755147 ± 41% stress-ng.symlink.ops_per_sec
6925 -12.1% 6086 ± 27% stress-ng.time.percent_of_cpu_this_job_got
24.68 +9.3 33.96 ± 52% mpstat.cpu.all.idle%
171.00 ± 2% -55.3% 76.50 ± 60% numa-vmstat.node1.nr_inactive_file
171.00 ± 2% -55.3% 76.50 ± 60% numa-vmstat.node1.nr_zone_inactive_file
2.032e+11 -12.5% 1.777e+11 ± 27% perf-stat.i.cpu-cycles
2.025e+11 -12.0% 1.782e+11 ± 27% perf-stat.ps.cpu-cycles
25.00 +37.5% 34.38 ± 51% vmstat.cpu.id
68.00 -13.2% 59.00 ± 27% vmstat.cpu.sy
25.24 +37.0% 34.57 ± 51% iostat.cpu.idle
68.21 -12.7% 59.53 ± 27% iostat.cpu.system
4.31 ±100% +200.6% 12.96 ± 63% iostat.sda.r_await.max
1014 ± 2% -17.1% 841.00 ± 10% meminfo.Inactive(file)
30692 ± 12% -20.9% 24280 ± 30% meminfo.Mlocked
103627 ± 27% -32.7% 69720 meminfo.Percpu
255.50 ± 2% -18.1% 209.25 ± 10% proc-vmstat.nr_inactive_file
255.50 ± 2% -18.1% 209.25 ± 10% proc-vmstat.nr_zone_inactive_file
185035 ± 22% -22.2% 143917 ± 25% proc-vmstat.pgmigrate_success
2107 -12.3% 1848 ± 27% turbostat.Avg_MHz
69.00 -7.1% 64.12 ± 8% turbostat.PkgTmp
94.63 -2.2% 92.58 ± 4% turbostat.RAMWatt
96048 +26.8% 121800 ± 8% softirqs.CPU10.NET_RX
96671 ± 4% +34.2% 129776 ± 6% softirqs.CPU15.NET_RX
171243 ± 3% -12.9% 149135 ± 8% softirqs.CPU25.NET_RX
165317 ± 4% -11.4% 146494 ± 9% softirqs.CPU27.NET_RX
139558 -24.5% 105430 ± 14% softirqs.CPU58.NET_RX
147836 -15.8% 124408 ± 6% softirqs.CPU63.NET_RX
129568 -13.8% 111624 ± 10% softirqs.CPU66.NET_RX
1050 ± 2% +14.2% 1198 ± 9% slabinfo.biovec-128.active_objs
1050 ± 2% +14.2% 1198 ± 9% slabinfo.biovec-128.num_objs
23129 +19.6% 27668 ± 6% slabinfo.kmalloc-512.active_objs
766.50 +17.4% 899.75 ± 6% slabinfo.kmalloc-512.active_slabs
24535 +17.4% 28806 ± 6% slabinfo.kmalloc-512.num_objs
766.50 +17.4% 899.75 ± 6% slabinfo.kmalloc-512.num_slabs
1039 ± 4% -4.3% 994.12 ± 6% slabinfo.sock_inode_cache.active_slabs
40527 ± 4% -4.3% 38785 ± 6% slabinfo.sock_inode_cache.num_objs
1039 ± 4% -4.3% 994.12 ± 6% slabinfo.sock_inode_cache.num_slabs
1549456 -43.6% 873443 ± 24% sched_debug.cfs_rq:/.min_vruntime.stddev
73.25 ± 5% +74.8% 128.03 ± 31% sched_debug.cfs_rq:/.nr_spread_over.stddev
18.60 ± 57% -63.8% 6.73 ± 64% sched_debug.cfs_rq:/.removed.load_avg.avg
79.57 ± 44% -44.1% 44.52 ± 55% sched_debug.cfs_rq:/.removed.load_avg.stddev
857.10 ± 57% -63.8% 310.09 ± 64% sched_debug.cfs_rq:/.removed.runnable_sum.avg
3664 ± 44% -44.1% 2049 ± 55% sched_debug.cfs_rq:/.removed.runnable_sum.stddev
4.91 ± 42% -45.3% 2.69 ± 61% sched_debug.cfs_rq:/.removed.util_avg.avg
1549544 -43.6% 874006 ± 24% sched_debug.cfs_rq:/.spread0.stddev
786.14 ± 6% -20.1% 628.46 ± 23% sched_debug.cfs_rq:/.util_avg.avg
1415 ± 8% -16.7% 1178 ± 18% sched_debug.cfs_rq:/.util_avg.max
467435 ± 15% +46.7% 685829 ± 15% sched_debug.cpu.avg_idle.avg
17972 ± 8% +631.2% 131410 ± 34% sched_debug.cpu.avg_idle.min
7.66 ± 26% +209.7% 23.72 ± 54% sched_debug.cpu.clock.stddev
7.66 ± 26% +209.7% 23.72 ± 54% sched_debug.cpu.clock_task.stddev
618063 ± 5% -17.0% 513085 ± 5% sched_debug.cpu.max_idle_balance_cost.max
12083 ± 28% -85.4% 1768 ±231% sched_debug.cpu.max_idle_balance_cost.stddev
12857 ± 16% +2117.7% 285128 ±106% sched_debug.cpu.yld_count.min
0.55 ± 6% -0.2 0.37 ± 51% perf-profile.children.cycles-pp.fpregs_assert_state_consistent
0.30 ± 21% -0.2 0.14 ±105% perf-profile.children.cycles-pp.yield_task_fair
0.32 ± 6% -0.2 0.16 ± 86% perf-profile.children.cycles-pp.rmap_walk_anon
0.19 -0.1 0.10 ± 86% perf-profile.children.cycles-pp.page_mapcount_is_zero
0.19 -0.1 0.10 ± 86% perf-profile.children.cycles-pp.total_mapcount
0.14 -0.1 0.09 ± 29% perf-profile.children.cycles-pp.start_kernel
0.11 ± 9% -0.0 0.07 ± 47% perf-profile.children.cycles-pp.__switch_to
0.10 ± 14% -0.0 0.06 ± 45% perf-profile.children.cycles-pp.switch_fpu_return
0.08 ± 6% -0.0 0.04 ± 79% perf-profile.children.cycles-pp.__update_load_avg_se
0.12 ± 13% -0.0 0.09 ± 23% perf-profile.children.cycles-pp.native_write_msr
0.31 ± 6% -0.2 0.15 ± 81% perf-profile.self.cycles-pp.poll_idle
0.50 ± 6% -0.2 0.35 ± 50% perf-profile.self.cycles-pp.fpregs_assert_state_consistent
0.18 ± 2% -0.1 0.10 ± 86% perf-profile.self.cycles-pp.total_mapcount
0.10 ± 14% -0.0 0.06 ± 45% perf-profile.self.cycles-pp.switch_fpu_return
0.10 ± 10% -0.0 0.06 ± 47% perf-profile.self.cycles-pp.__switch_to
0.07 ± 7% -0.0 0.03 ±100% perf-profile.self.cycles-pp.prep_new_page
0.07 ± 7% -0.0 0.03 ±100% perf-profile.self.cycles-pp.llist_add_batch
0.07 ± 14% -0.0 0.04 ± 79% perf-profile.self.cycles-pp.__update_load_avg_se
0.12 ± 13% -0.0 0.09 ± 23% perf-profile.self.cycles-pp.native_write_msr
66096 ± 99% -99.8% 148.50 ± 92% interrupts.109:PCI-MSI.31981642-edge.i40e-eth0-TxRx-73
543.50 ± 39% -73.3% 145.38 ± 81% interrupts.112:PCI-MSI.31981645-edge.i40e-eth0-TxRx-76
169.00 ± 28% -55.3% 75.50 ± 83% interrupts.116:PCI-MSI.31981649-edge.i40e-eth0-TxRx-80
224.00 ± 14% -57.6% 95.00 ± 87% interrupts.121:PCI-MSI.31981654-edge.i40e-eth0-TxRx-85
680.00 ± 28% -80.5% 132.75 ± 82% interrupts.43:PCI-MSI.31981576-edge.i40e-eth0-TxRx-7
327.50 ± 31% -39.0% 199.62 ± 60% interrupts.60:PCI-MSI.31981593-edge.i40e-eth0-TxRx-24
217.50 ± 19% -51.7% 105.12 ± 79% interrupts.63:PCI-MSI.31981596-edge.i40e-eth0-TxRx-27
375.00 ± 46% -78.5% 80.50 ± 82% interrupts.70:PCI-MSI.31981603-edge.i40e-eth0-TxRx-34
196.50 ± 3% -51.6% 95.12 ± 74% interrupts.72:PCI-MSI.31981605-edge.i40e-eth0-TxRx-36
442.50 ± 45% -73.1% 118.88 ± 90% interrupts.79:PCI-MSI.31981612-edge.i40e-eth0-TxRx-43
271.00 ± 8% -53.2% 126.88 ± 75% interrupts.82:PCI-MSI.31981615-edge.i40e-eth0-TxRx-46
145448 ± 4% -41.6% 84975 ± 42% interrupts.CPU1.RES:Rescheduling_interrupts
11773 ± 19% -38.1% 7290 ± 52% interrupts.CPU13.TLB:TLB_shootdowns
24177 ± 15% +356.5% 110368 ± 58% interrupts.CPU16.RES:Rescheduling_interrupts
3395 ± 3% +78.3% 6055 ± 18% interrupts.CPU17.NMI:Non-maskable_interrupts
3395 ± 3% +78.3% 6055 ± 18% interrupts.CPU17.PMI:Performance_monitoring_interrupts
106701 ± 41% -55.6% 47425 ± 56% interrupts.CPU18.RES:Rescheduling_interrupts
327.50 ± 31% -39.3% 198.88 ± 60% interrupts.CPU24.60:PCI-MSI.31981593-edge.i40e-eth0-TxRx-24
411618 +53.6% 632283 ± 77% interrupts.CPU25.LOC:Local_timer_interrupts
16189 ± 26% -53.0% 7611 ± 66% interrupts.CPU25.TLB:TLB_shootdowns
407253 +54.4% 628596 ± 78% interrupts.CPU26.LOC:Local_timer_interrupts
216.50 ± 19% -51.8% 104.25 ± 80% interrupts.CPU27.63:PCI-MSI.31981596-edge.i40e-eth0-TxRx-27
7180 -20.9% 5682 ± 25% interrupts.CPU29.NMI:Non-maskable_interrupts
7180 -20.9% 5682 ± 25% interrupts.CPU29.PMI:Performance_monitoring_interrupts
15186 ± 12% -45.5% 8276 ± 49% interrupts.CPU3.TLB:TLB_shootdowns
13092 ± 19% -29.5% 9231 ± 35% interrupts.CPU30.TLB:TLB_shootdowns
13204 ± 26% -29.3% 9336 ± 19% interrupts.CPU31.TLB:TLB_shootdowns
374.50 ± 46% -78.7% 79.62 ± 83% interrupts.CPU34.70:PCI-MSI.31981603-edge.i40e-eth0-TxRx-34
7188 -25.6% 5345 ± 26% interrupts.CPU35.NMI:Non-maskable_interrupts
7188 -25.6% 5345 ± 26% interrupts.CPU35.PMI:Performance_monitoring_interrupts
196.00 ± 4% -52.0% 94.12 ± 75% interrupts.CPU36.72:PCI-MSI.31981605-edge.i40e-eth0-TxRx-36
12170 ± 20% -34.3% 7998 ± 32% interrupts.CPU39.TLB:TLB_shootdowns
442.00 ± 45% -73.3% 118.12 ± 91% interrupts.CPU43.79:PCI-MSI.31981612-edge.i40e-eth0-TxRx-43
12070 ± 15% -37.2% 7581 ± 49% interrupts.CPU43.TLB:TLB_shootdowns
7177 -27.6% 5195 ± 26% interrupts.CPU45.NMI:Non-maskable_interrupts
7177 -27.6% 5195 ± 26% interrupts.CPU45.PMI:Performance_monitoring_interrupts
271.00 ± 8% -53.4% 126.38 ± 75% interrupts.CPU46.82:PCI-MSI.31981615-edge.i40e-eth0-TxRx-46
3591 +84.0% 6607 ± 12% interrupts.CPU46.NMI:Non-maskable_interrupts
3591 +84.0% 6607 ± 12% interrupts.CPU46.PMI:Performance_monitoring_interrupts
57614 ± 30% -34.0% 38015 ± 28% interrupts.CPU46.RES:Rescheduling_interrupts
149154 ± 41% -47.2% 78808 ± 51% interrupts.CPU51.RES:Rescheduling_interrupts
30366 ± 28% +279.5% 115229 ± 42% interrupts.CPU52.RES:Rescheduling_interrupts
29690 +355.5% 135237 ± 57% interrupts.CPU54.RES:Rescheduling_interrupts
213106 ± 2% -66.9% 70545 ± 43% interrupts.CPU59.RES:Rescheduling_interrupts
225753 ± 7% -72.9% 61212 ± 72% interrupts.CPU60.RES:Rescheduling_interrupts
12430 ± 14% -41.5% 7276 ± 52% interrupts.CPU61.TLB:TLB_shootdowns
44552 ± 22% +229.6% 146864 ± 36% interrupts.CPU65.RES:Rescheduling_interrupts
126088 ± 56% -35.3% 81516 ± 73% interrupts.CPU66.RES:Rescheduling_interrupts
170880 ± 15% -62.9% 63320 ± 52% interrupts.CPU68.RES:Rescheduling_interrupts
186033 ± 10% -39.8% 112012 ± 41% interrupts.CPU69.RES:Rescheduling_interrupts
679.50 ± 29% -80.5% 132.25 ± 82% interrupts.CPU7.43:PCI-MSI.31981576-edge.i40e-eth0-TxRx-7
124750 ± 18% -39.4% 75553 ± 43% interrupts.CPU7.RES:Rescheduling_interrupts
158500 ± 47% -52.1% 75915 ± 67% interrupts.CPU71.RES:Rescheduling_interrupts
11846 ± 11% -32.5% 8001 ± 47% interrupts.CPU72.TLB:TLB_shootdowns
66095 ± 99% -99.8% 147.62 ± 93% interrupts.CPU73.109:PCI-MSI.31981642-edge.i40e-eth0-TxRx-73
7221 ± 2% -31.0% 4982 ± 35% interrupts.CPU73.NMI:Non-maskable_interrupts
7221 ± 2% -31.0% 4982 ± 35% interrupts.CPU73.PMI:Performance_monitoring_interrupts
15304 ± 14% -47.9% 7972 ± 31% interrupts.CPU73.TLB:TLB_shootdowns
10918 ± 3% -31.9% 7436 ± 36% interrupts.CPU74.TLB:TLB_shootdowns
543.00 ± 39% -73.3% 144.75 ± 81% interrupts.CPU76.112:PCI-MSI.31981645-edge.i40e-eth0-TxRx-76
12214 ± 14% -40.9% 7220 ± 38% interrupts.CPU79.TLB:TLB_shootdowns
168.00 ± 29% -55.7% 74.50 ± 85% interrupts.CPU80.116:PCI-MSI.31981649-edge.i40e-eth0-TxRx-80
28619 ± 3% +158.4% 73939 ± 44% interrupts.CPU80.RES:Rescheduling_interrupts
12258 -34.3% 8056 ± 29% interrupts.CPU80.TLB:TLB_shootdowns
7214 -19.5% 5809 ± 24% interrupts.CPU82.NMI:Non-maskable_interrupts
7214 -19.5% 5809 ± 24% interrupts.CPU82.PMI:Performance_monitoring_interrupts
13522 ± 11% -41.2% 7949 ± 29% interrupts.CPU84.TLB:TLB_shootdowns
223.50 ± 14% -57.8% 94.25 ± 88% interrupts.CPU85.121:PCI-MSI.31981654-edge.i40e-eth0-TxRx-85
11989 ± 2% -31.7% 8194 ± 22% interrupts.CPU85.TLB:TLB_shootdowns
121153 ± 29% -41.4% 70964 ± 58% interrupts.CPU86.RES:Rescheduling_interrupts
11731 ± 8% -40.7% 6957 ± 36% interrupts.CPU86.TLB:TLB_shootdowns
12192 ± 22% -35.8% 7824 ± 43% interrupts.CPU87.TLB:TLB_shootdowns
11603 ± 19% -31.8% 7915 ± 41% interrupts.CPU89.TLB:TLB_shootdowns
10471 ± 5% -27.0% 7641 ± 31% interrupts.CPU91.TLB:TLB_shootdowns
7156 -20.9% 5658 ± 23% interrupts.CPU92.NMI:Non-maskable_interrupts
7156 -20.9% 5658 ± 23% interrupts.CPU92.PMI:Performance_monitoring_interrupts
99802 ± 20% -43.6% 56270 ± 47% interrupts.CPU92.RES:Rescheduling_interrupts
109162 ± 18% -28.7% 77839 ± 26% interrupts.CPU93.RES:Rescheduling_interrupts
15044 ± 29% -44.4% 8359 ± 30% interrupts.CPU93.TLB:TLB_shootdowns
110749 ± 19% -47.3% 58345 ± 48% interrupts.CPU94.RES:Rescheduling_interrupts
7245 -21.4% 5697 ± 25% interrupts.CPU95.NMI:Non-maskable_interrupts
7245 -21.4% 5697 ± 25% interrupts.CPU95.PMI:Performance_monitoring_interrupts
1969 ± 5% +491.7% 11653 ± 81% interrupts.IWI:IRQ_work_interrupts
***************************************************************************************************
lkp-bdw-ep6: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
=========================================================================================
class/compiler/cpufreq_governor/disk/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime/ucode:
interrupt/gcc-7/performance/1HDD/x86_64-rhel-7.6/100%/debian-x86_64-2019-09-23.cgz/lkp-bdw-ep6/stress-ng/30s/0xb000038
commit:
fcf0553db6 ("sched/fair: Remove meaningless imbalance calculation")
0b0695f2b3 ("sched/fair: Rework load_balance()")
fcf0553db6f4c793 0b0695f2b34a4afa3f6e9aa1ff0
---------------- ---------------------------
%stddev %change %stddev
\ | \
98318389 +43.0% 1.406e+08 stress-ng.schedpolicy.ops
3277346 +43.0% 4685146 stress-ng.schedpolicy.ops_per_sec
3.506e+08 ± 4% -10.3% 3.146e+08 ± 3% stress-ng.sigq.ops
11684738 ± 4% -10.3% 10485353 ± 3% stress-ng.sigq.ops_per_sec
3.628e+08 ± 6% -19.4% 2.925e+08 ± 6% stress-ng.time.involuntary_context_switches
29456 +2.8% 30285 stress-ng.time.system_time
7636655 ± 9% +46.6% 11197377 ± 27% cpuidle.C1E.usage
1111483 ± 3% -9.5% 1005829 vmstat.system.cs
22638222 ± 4% +16.5% 26370816 ± 11% meminfo.Committed_AS
28908 ± 6% +24.6% 36020 ± 16% meminfo.KernelStack
7636543 ± 9% +46.6% 11196090 ± 27% turbostat.C1E
3.46 ± 16% -61.2% 1.35 ± 7% turbostat.Pkg%pc2
217.54 +1.7% 221.33 turbostat.PkgWatt
13.34 ± 2% +5.8% 14.11 turbostat.RAMWatt
525.50 ± 8% -15.7% 443.00 ± 12% slabinfo.biovec-128.active_objs
525.50 ± 8% -15.7% 443.00 ± 12% slabinfo.biovec-128.num_objs
28089 ± 12% -33.0% 18833 ± 22% slabinfo.pool_workqueue.active_objs
877.25 ± 12% -32.6% 591.00 ± 21% slabinfo.pool_workqueue.active_slabs
28089 ± 12% -32.6% 18925 ± 21% slabinfo.pool_workqueue.num_objs
877.25 ± 12% -32.6% 591.00 ± 21% slabinfo.pool_workqueue.num_slabs
846.75 ± 6% -18.0% 694.75 ± 9% slabinfo.skbuff_fclone_cache.active_objs
846.75 ± 6% -18.0% 694.75 ± 9% slabinfo.skbuff_fclone_cache.num_objs
63348 ± 6% -20.7% 50261 ± 4% softirqs.CPU14.SCHED
44394 ± 4% +21.4% 53880 ± 8% softirqs.CPU42.SCHED
52246 ± 7% -15.1% 44352 softirqs.CPU47.SCHED
58350 ± 4% -11.0% 51914 ± 7% softirqs.CPU6.SCHED
58009 ± 7% -23.8% 44206 ± 4% softirqs.CPU63.SCHED
49166 ± 6% +23.4% 60683 ± 9% softirqs.CPU68.SCHED
44594 ± 7% +14.3% 50951 ± 8% softirqs.CPU78.SCHED
46407 ± 9% +19.6% 55515 ± 8% softirqs.CPU84.SCHED
55555 ± 8% -15.5% 46933 ± 4% softirqs.CPU9.SCHED
198757 ± 18% +44.1% 286316 ± 9% numa-meminfo.node0.Active
189280 ± 19% +37.1% 259422 ± 7% numa-meminfo.node0.Active(anon)
110438 ± 33% +68.3% 185869 ± 16% numa-meminfo.node0.AnonHugePages
143458 ± 28% +67.7% 240547 ± 13% numa-meminfo.node0.AnonPages
12438 ± 16% +61.9% 20134 ± 37% numa-meminfo.node0.KernelStack
1004379 ± 7% +16.4% 1168764 ± 4% numa-meminfo.node0.MemUsed
357111 ± 24% -41.6% 208655 ± 29% numa-meminfo.node1.Active
330094 ± 22% -39.6% 199339 ± 32% numa-meminfo.node1.Active(anon)
265924 ± 25% -52.2% 127138 ± 46% numa-meminfo.node1.AnonHugePages
314059 ± 22% -49.6% 158305 ± 36% numa-meminfo.node1.AnonPages
15386 ± 16% -25.1% 11525 ± 15% numa-meminfo.node1.KernelStack
1200805 ± 11% -18.6% 977595 ± 7% numa-meminfo.node1.MemUsed
965.50 ± 15% -29.3% 682.25 ± 43% numa-meminfo.node1.Mlocked
46762 ± 18% +37.8% 64452 ± 8% numa-vmstat.node0.nr_active_anon
35393 ± 27% +68.9% 59793 ± 12% numa-vmstat.node0.nr_anon_pages
52.75 ± 33% +71.1% 90.25 ± 15% numa-vmstat.node0.nr_anon_transparent_hugepages
15.00 ± 96% +598.3% 104.75 ± 15% numa-vmstat.node0.nr_inactive_file
11555 ± 22% +68.9% 19513 ± 41% numa-vmstat.node0.nr_kernel_stack
550.25 ±162% +207.5% 1691 ± 48% numa-vmstat.node0.nr_written
46762 ± 18% +37.8% 64452 ± 8% numa-vmstat.node0.nr_zone_active_anon
15.00 ± 96% +598.3% 104.75 ± 15% numa-vmstat.node0.nr_zone_inactive_file
82094 ± 22% -39.5% 49641 ± 32% numa-vmstat.node1.nr_active_anon
78146 ± 23% -49.5% 39455 ± 37% numa-vmstat.node1.nr_anon_pages
129.00 ± 25% -52.3% 61.50 ± 47% numa-vmstat.node1.nr_anon_transparent_hugepages
107.75 ± 12% -85.4% 15.75 ±103% numa-vmstat.node1.nr_inactive_file
14322 ± 11% -21.1% 11304 ± 11% numa-vmstat.node1.nr_kernel_stack
241.00 ± 15% -29.5% 170.00 ± 43% numa-vmstat.node1.nr_mlock
82094 ± 22% -39.5% 49641 ± 32% numa-vmstat.node1.nr_zone_active_anon
107.75 ± 12% -85.4% 15.75 ±103% numa-vmstat.node1.nr_zone_inactive_file
0.81 ± 5% +0.2 0.99 ± 10% perf-profile.calltrace.cycles-pp.task_rq_lock.task_sched_runtime.cpu_clock_sample.posix_cpu_timer_get.do_timer_gettime
0.60 ± 11% +0.2 0.83 ± 9% perf-profile.calltrace.cycles-pp.___might_sleep.__might_fault._copy_to_user.put_itimerspec64.__x64_sys_timer_gettime
1.73 ± 9% +0.3 2.05 ± 8% perf-profile.calltrace.cycles-pp.__might_fault._copy_to_user.put_itimerspec64.__x64_sys_timer_gettime.do_syscall_64
3.92 ± 5% +0.6 4.49 ± 7% perf-profile.calltrace.cycles-pp.task_sched_runtime.cpu_clock_sample.posix_cpu_timer_get.do_timer_gettime.__x64_sys_timer_gettime
4.17 ± 4% +0.6 4.78 ± 7% perf-profile.calltrace.cycles-pp.cpu_clock_sample.posix_cpu_timer_get.do_timer_gettime.__x64_sys_timer_gettime.do_syscall_64
5.72 ± 3% +0.7 6.43 ± 7% perf-profile.calltrace.cycles-pp.posix_cpu_timer_get.do_timer_gettime.__x64_sys_timer_gettime.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.24 ± 54% -0.2 0.07 ±131% perf-profile.children.cycles-pp.ext4_inode_csum_set
0.45 ± 3% +0.1 0.56 ± 4% perf-profile.children.cycles-pp.__might_sleep
0.84 ± 5% +0.2 1.03 ± 9% perf-profile.children.cycles-pp.task_rq_lock
0.66 ± 8% +0.2 0.88 ± 7% perf-profile.children.cycles-pp.___might_sleep
1.83 ± 9% +0.3 2.16 ± 8% perf-profile.children.cycles-pp.__might_fault
4.04 ± 5% +0.6 4.62 ± 7% perf-profile.children.cycles-pp.task_sched_runtime
4.24 ± 4% +0.6 4.87 ± 7% perf-profile.children.cycles-pp.cpu_clock_sample
5.77 ± 3% +0.7 6.48 ± 7% perf-profile.children.cycles-pp.posix_cpu_timer_get
0.22 ± 11% +0.1 0.28 ± 15% perf-profile.self.cycles-pp.cpu_clock_sample
0.47 ± 7% +0.1 0.55 ± 5% perf-profile.self.cycles-pp.update_curr
0.28 ± 5% +0.1 0.38 ± 14% perf-profile.self.cycles-pp.task_rq_lock
0.42 ± 3% +0.1 0.53 ± 4% perf-profile.self.cycles-pp.__might_sleep
0.50 ± 5% +0.1 0.61 ± 11% perf-profile.self.cycles-pp.task_sched_runtime
0.63 ± 9% +0.2 0.85 ± 7% perf-profile.self.cycles-pp.___might_sleep
9180611 ± 5% +40.1% 12859327 ± 14% sched_debug.cfs_rq:/.MIN_vruntime.max
1479571 ± 6% +57.6% 2331469 ± 14% sched_debug.cfs_rq:/.MIN_vruntime.stddev
7951 ± 6% -52.5% 3773 ± 17% sched_debug.cfs_rq:/.exec_clock.stddev
321306 ± 39% -44.2% 179273 sched_debug.cfs_rq:/.load.max
9180613 ± 5% +40.1% 12859327 ± 14% sched_debug.cfs_rq:/.max_vruntime.max
1479571 ± 6% +57.6% 2331469 ± 14% sched_debug.cfs_rq:/.max_vruntime.stddev
16622378 +20.0% 19940069 ± 7% sched_debug.cfs_rq:/.min_vruntime.avg
18123901 +19.7% 21686545 ± 6% sched_debug.cfs_rq:/.min_vruntime.max
14338218 ± 3% +27.4% 18267927 ± 7% sched_debug.cfs_rq:/.min_vruntime.min
0.17 ± 16% +23.4% 0.21 ± 11% sched_debug.cfs_rq:/.nr_running.stddev
319990 ± 39% -44.6% 177347 sched_debug.cfs_rq:/.runnable_weight.max
-2067420 -33.5% -1375445 sched_debug.cfs_rq:/.spread0.min
1033 ± 8% -13.7% 891.85 ± 3% sched_debug.cfs_rq:/.util_est_enqueued.max
93676 ± 16% -29.0% 66471 ± 17% sched_debug.cpu.avg_idle.min
10391 ± 52% +118.9% 22750 ± 15% sched_debug.cpu.curr->pid.avg
14393 ± 35% +113.2% 30689 ± 17% sched_debug.cpu.curr->pid.max
3041 ± 38% +161.8% 7963 ± 11% sched_debug.cpu.curr->pid.stddev
3.38 ± 6% -16.3% 2.83 ± 5% sched_debug.cpu.nr_running.max
2412687 ± 4% -16.0% 2027251 ± 3% sched_debug.cpu.nr_switches.avg
4038819 ± 3% -20.2% 3223112 ± 5% sched_debug.cpu.nr_switches.max
834203 ± 17% -37.8% 518798 ± 27% sched_debug.cpu.nr_switches.stddev
45.85 ± 13% +41.2% 64.75 ± 18% sched_debug.cpu.nr_uninterruptible.max
1937209 ± 2% +58.5% 3070891 ± 3% sched_debug.cpu.sched_count.min
1074023 ± 13% -57.9% 451958 ± 12% sched_debug.cpu.sched_count.stddev
1283769 ± 7% +65.1% 2118907 ± 7% sched_debug.cpu.yld_count.min
714244 ± 5% -51.9% 343373 ± 22% sched_debug.cpu.yld_count.stddev
12.54 ± 9% -18.8% 10.18 ± 15% perf-stat.i.MPKI
1.011e+10 +2.6% 1.038e+10 perf-stat.i.branch-instructions
13.22 ± 5% +2.5 15.75 ± 3% perf-stat.i.cache-miss-rate%
21084021 ± 6% +33.9% 28231058 ± 6% perf-stat.i.cache-misses
1143861 ± 5% -12.1% 1005721 ± 6% perf-stat.i.context-switches
1.984e+11 +1.8% 2.02e+11 perf-stat.i.cpu-cycles
1.525e+10 +1.3% 1.544e+10 perf-stat.i.dTLB-loads
65.46 -2.7 62.76 ± 3% perf-stat.i.iTLB-load-miss-rate%
20360883 ± 4% +10.5% 22500874 ± 4% perf-stat.i.iTLB-loads
4.963e+10 +2.0% 5.062e+10 perf-stat.i.instructions
181557 -2.4% 177113 perf-stat.i.msec
5350122 ± 8% +26.5% 6765332 ± 7% perf-stat.i.node-load-misses
4264320 ± 3% +24.8% 5321600 ± 4% perf-stat.i.node-store-misses
6.12 ± 5% +1.5 7.60 ± 2% perf-stat.overall.cache-miss-rate%
7646 ± 6% -17.7% 6295 ± 3% perf-stat.overall.cycles-between-cache-misses
69.29 -1.1 68.22 perf-stat.overall.iTLB-load-miss-rate%
61.11 ± 2% +6.6 67.71 ± 5% perf-stat.overall.node-load-miss-rate%
74.82 +1.8 76.58 perf-stat.overall.node-store-miss-rate%
1.044e+10 +1.8% 1.063e+10 perf-stat.ps.branch-instructions
26325951 ± 6% +22.9% 32366684 ± 2% perf-stat.ps.cache-misses
1115530 ± 3% -9.5% 1009780 perf-stat.ps.context-switches
1.536e+10 +1.0% 1.552e+10 perf-stat.ps.dTLB-loads
44718416 ± 2% +5.8% 47308605 ± 3% perf-stat.ps.iTLB-load-misses
19831973 ± 4% +11.1% 22040029 ± 4% perf-stat.ps.iTLB-loads
5.064e+10 +1.4% 5.137e+10 perf-stat.ps.instructions
5454694 ± 9% +26.4% 6892365 ± 6% perf-stat.ps.node-load-misses
4263688 ± 4% +24.9% 5325279 ± 4% perf-stat.ps.node-store-misses
3.001e+13 +1.7% 3.052e+13 perf-stat.total.instructions
18550 -74.9% 4650 ±173% interrupts.76:IR-PCI-MSI.512000-edge.ahci[0000:00:1f.2]
7642 ± 9% -20.4% 6086 ± 2% interrupts.CPU0.CAL:Function_call_interrupts
4376 ± 22% -75.4% 1077 ± 41% interrupts.CPU0.TLB:TLB_shootdowns
8402 ± 5% -19.0% 6806 interrupts.CPU1.CAL:Function_call_interrupts
4559 ± 20% -73.7% 1199 ± 15% interrupts.CPU1.TLB:TLB_shootdowns
8423 ± 4% -20.2% 6725 ± 2% interrupts.CPU10.CAL:Function_call_interrupts
4536 ± 14% -75.0% 1135 ± 20% interrupts.CPU10.TLB:TLB_shootdowns
8303 ± 3% -18.2% 6795 ± 2% interrupts.CPU11.CAL:Function_call_interrupts
4404 ± 11% -71.6% 1250 ± 35% interrupts.CPU11.TLB:TLB_shootdowns
8491 ± 6% -21.3% 6683 interrupts.CPU12.CAL:Function_call_interrupts
4723 ± 20% -77.2% 1077 ± 17% interrupts.CPU12.TLB:TLB_shootdowns
8403 ± 5% -20.3% 6700 ± 2% interrupts.CPU13.CAL:Function_call_interrupts
4557 ± 19% -74.2% 1175 ± 22% interrupts.CPU13.TLB:TLB_shootdowns
8459 ± 4% -18.6% 6884 interrupts.CPU14.CAL:Function_call_interrupts
4559 ± 18% -69.8% 1376 ± 13% interrupts.CPU14.TLB:TLB_shootdowns
8305 ± 7% -17.7% 6833 ± 2% interrupts.CPU15.CAL:Function_call_interrupts
4261 ± 25% -67.6% 1382 ± 24% interrupts.CPU15.TLB:TLB_shootdowns
8277 ± 5% -19.1% 6696 ± 3% interrupts.CPU16.CAL:Function_call_interrupts
4214 ± 22% -69.6% 1282 ± 8% interrupts.CPU16.TLB:TLB_shootdowns
8258 ± 5% -18.9% 6694 ± 3% interrupts.CPU17.CAL:Function_call_interrupts
4461 ± 19% -74.1% 1155 ± 21% interrupts.CPU17.TLB:TLB_shootdowns
8457 ± 6% -20.6% 6717 interrupts.CPU18.CAL:Function_call_interrupts
4889 ± 34% +60.0% 7822 interrupts.CPU18.NMI:Non-maskable_interrupts
4889 ± 34% +60.0% 7822 interrupts.CPU18.PMI:Performance_monitoring_interrupts
4731 ± 22% -77.2% 1078 ± 10% interrupts.CPU18.TLB:TLB_shootdowns
8160 ± 5% -18.1% 6684 interrupts.CPU19.CAL:Function_call_interrupts
4311 ± 20% -74.2% 1114 ± 13% interrupts.CPU19.TLB:TLB_shootdowns
8464 ± 2% -18.2% 6927 ± 3% interrupts.CPU2.CAL:Function_call_interrupts
4938 ± 14% -70.5% 1457 ± 18% interrupts.CPU2.TLB:TLB_shootdowns
8358 ± 6% -19.7% 6715 ± 3% interrupts.CPU20.CAL:Function_call_interrupts
4567 ± 24% -74.6% 1160 ± 35% interrupts.CPU20.TLB:TLB_shootdowns
8460 ± 4% -22.3% 6577 ± 2% interrupts.CPU21.CAL:Function_call_interrupts
4514 ± 18% -76.0% 1084 ± 22% interrupts.CPU21.TLB:TLB_shootdowns
6677 ± 6% +19.6% 7988 ± 9% interrupts.CPU22.CAL:Function_call_interrupts
1288 ± 14% +209.1% 3983 ± 35% interrupts.CPU22.TLB:TLB_shootdowns
6751 ± 2% +24.0% 8370 ± 9% interrupts.CPU23.CAL:Function_call_interrupts
1037 ± 29% +323.0% 4388 ± 36% interrupts.CPU23.TLB:TLB_shootdowns
6844 +20.6% 8251 ± 9% interrupts.CPU24.CAL:Function_call_interrupts
1205 ± 17% +229.2% 3967 ± 40% interrupts.CPU24.TLB:TLB_shootdowns
6880 +21.9% 8389 ± 7% interrupts.CPU25.CAL:Function_call_interrupts
1228 ± 19% +245.2% 4240 ± 35% interrupts.CPU25.TLB:TLB_shootdowns
6494 ± 8% +25.1% 8123 ± 9% interrupts.CPU26.CAL:Function_call_interrupts
1141 ± 13% +262.5% 4139 ± 32% interrupts.CPU26.TLB:TLB_shootdowns
6852 +19.2% 8166 ± 7% interrupts.CPU27.CAL:Function_call_interrupts
1298 ± 8% +197.1% 3857 ± 31% interrupts.CPU27.TLB:TLB_shootdowns
6563 ± 6% +25.2% 8214 ± 8% interrupts.CPU28.CAL:Function_call_interrupts
1176 ± 8% +237.1% 3964 ± 33% interrupts.CPU28.TLB:TLB_shootdowns
6842 ± 2% +21.4% 8308 ± 8% interrupts.CPU29.CAL:Function_call_interrupts
1271 ± 11% +223.8% 4118 ± 33% interrupts.CPU29.TLB:TLB_shootdowns
8418 ± 3% -21.1% 6643 ± 2% interrupts.CPU3.CAL:Function_call_interrupts
4677 ± 11% -75.1% 1164 ± 16% interrupts.CPU3.TLB:TLB_shootdowns
6798 ± 3% +21.8% 8284 ± 7% interrupts.CPU30.CAL:Function_call_interrupts
1219 ± 12% +236.3% 4102 ± 30% interrupts.CPU30.TLB:TLB_shootdowns
6503 ± 4% +25.9% 8186 ± 6% interrupts.CPU31.CAL:Function_call_interrupts
1046 ± 15% +289.1% 4072 ± 32% interrupts.CPU31.TLB:TLB_shootdowns
6949 ± 3% +17.2% 8141 ± 8% interrupts.CPU32.CAL:Function_call_interrupts
1241 ± 23% +210.6% 3854 ± 34% interrupts.CPU32.TLB:TLB_shootdowns
1487 ± 26% +161.6% 3889 ± 46% interrupts.CPU33.TLB:TLB_shootdowns
1710 ± 44% +140.1% 4105 ± 36% interrupts.CPU34.TLB:TLB_shootdowns
6957 ± 2% +15.2% 8012 ± 9% interrupts.CPU35.CAL:Function_call_interrupts
1165 ± 8% +223.1% 3765 ± 38% interrupts.CPU35.TLB:TLB_shootdowns
1423 ± 24% +173.4% 3892 ± 33% interrupts.CPU36.TLB:TLB_shootdowns
1279 ± 29% +224.2% 4148 ± 39% interrupts.CPU37.TLB:TLB_shootdowns
1301 ± 20% +226.1% 4244 ± 35% interrupts.CPU38.TLB:TLB_shootdowns
6906 ± 2% +18.5% 8181 ± 8% interrupts.CPU39.CAL:Function_call_interrupts
368828 ± 20% +96.2% 723710 ± 7% interrupts.CPU39.RES:Rescheduling_interrupts
1438 ± 12% +174.8% 3951 ± 33% interrupts.CPU39.TLB:TLB_shootdowns
8399 ± 5% -19.2% 6788 ± 2% interrupts.CPU4.CAL:Function_call_interrupts
4567 ± 18% -72.7% 1245 ± 28% interrupts.CPU4.TLB:TLB_shootdowns
6895 +22.4% 8439 ± 9% interrupts.CPU40.CAL:Function_call_interrupts
1233 ± 11% +247.1% 4280 ± 36% interrupts.CPU40.TLB:TLB_shootdowns
6819 ± 2% +21.3% 8274 ± 9% interrupts.CPU41.CAL:Function_call_interrupts
1260 ± 14% +207.1% 3871 ± 38% interrupts.CPU41.TLB:TLB_shootdowns
1301 ± 9% +204.7% 3963 ± 36% interrupts.CPU42.TLB:TLB_shootdowns
6721 ± 3% +22.3% 8221 ± 7% interrupts.CPU43.CAL:Function_call_interrupts
1237 ± 19% +224.8% 4017 ± 35% interrupts.CPU43.TLB:TLB_shootdowns
8422 ± 8% -22.7% 6506 ± 5% interrupts.CPU44.CAL:Function_call_interrupts
15261375 ± 7% -7.8% 14064176 interrupts.CPU44.LOC:Local_timer_interrupts
4376 ± 25% -75.7% 1063 ± 26% interrupts.CPU44.TLB:TLB_shootdowns
8451 ± 5% -23.7% 6448 ± 6% interrupts.CPU45.CAL:Function_call_interrupts
4351 ± 18% -74.9% 1094 ± 12% interrupts.CPU45.TLB:TLB_shootdowns
8705 ± 6% -21.2% 6860 ± 2% interrupts.CPU46.CAL:Function_call_interrupts
4787 ± 20% -69.5% 1462 ± 16% interrupts.CPU46.TLB:TLB_shootdowns
8334 ± 3% -18.9% 6763 interrupts.CPU47.CAL:Function_call_interrupts
4126 ± 10% -71.3% 1186 ± 18% interrupts.CPU47.TLB:TLB_shootdowns
8578 ± 4% -21.7% 6713 interrupts.CPU48.CAL:Function_call_interrupts
4520 ± 15% -74.5% 1154 ± 23% interrupts.CPU48.TLB:TLB_shootdowns
8450 ± 8% -18.8% 6863 ± 3% interrupts.CPU49.CAL:Function_call_interrupts
4494 ± 24% -66.5% 1505 ± 22% interrupts.CPU49.TLB:TLB_shootdowns
8307 ± 4% -18.0% 6816 ± 2% interrupts.CPU5.CAL:Function_call_interrupts
7845 -37.4% 4908 ± 34% interrupts.CPU5.NMI:Non-maskable_interrupts
7845 -37.4% 4908 ± 34% interrupts.CPU5.PMI:Performance_monitoring_interrupts
4429 ± 17% -69.8% 1339 ± 20% interrupts.CPU5.TLB:TLB_shootdowns
8444 ± 4% -21.7% 6613 interrupts.CPU50.CAL:Function_call_interrupts
4282 ± 16% -76.0% 1029 ± 17% interrupts.CPU50.TLB:TLB_shootdowns
8750 ± 6% -22.2% 6803 interrupts.CPU51.CAL:Function_call_interrupts
4755 ± 20% -73.1% 1277 ± 15% interrupts.CPU51.TLB:TLB_shootdowns
8478 ± 6% -20.2% 6766 ± 2% interrupts.CPU52.CAL:Function_call_interrupts
4337 ± 20% -72.6% 1190 ± 22% interrupts.CPU52.TLB:TLB_shootdowns
8604 ± 7% -21.5% 6750 ± 4% interrupts.CPU53.CAL:Function_call_interrupts
4649 ± 17% -74.3% 1193 ± 23% interrupts.CPU53.TLB:TLB_shootdowns
8317 ± 9% -19.4% 6706 ± 3% interrupts.CPU54.CAL:Function_call_interrupts
4372 ± 12% -75.4% 1076 ± 29% interrupts.CPU54.TLB:TLB_shootdowns
8439 ± 3% -18.5% 6876 interrupts.CPU55.CAL:Function_call_interrupts
4415 ± 11% -71.6% 1254 ± 17% interrupts.CPU55.TLB:TLB_shootdowns
8869 ± 6% -22.6% 6864 ± 2% interrupts.CPU56.CAL:Function_call_interrupts
517594 ± 13% +123.3% 1155539 ± 25% interrupts.CPU56.RES:Rescheduling_interrupts
5085 ± 22% -74.9% 1278 ± 17% interrupts.CPU56.TLB:TLB_shootdowns
8682 ± 4% -21.7% 6796 ± 2% interrupts.CPU57.CAL:Function_call_interrupts
4808 ± 19% -74.1% 1243 ± 13% interrupts.CPU57.TLB:TLB_shootdowns
8626 ± 7% -21.8% 6746 ± 2% interrupts.CPU58.CAL:Function_call_interrupts
4816 ± 20% -79.1% 1007 ± 28% interrupts.CPU58.TLB:TLB_shootdowns
8759 ± 8% -20.3% 6984 interrupts.CPU59.CAL:Function_call_interrupts
4840 ± 22% -70.6% 1423 ± 14% interrupts.CPU59.TLB:TLB_shootdowns
8167 ± 6% -19.0% 6615 ± 2% interrupts.CPU6.CAL:Function_call_interrupts
4129 ± 21% -75.4% 1017 ± 24% interrupts.CPU6.TLB:TLB_shootdowns
8910 ± 4% -23.7% 6794 ± 3% interrupts.CPU60.CAL:Function_call_interrupts
5017 ± 12% -77.8% 1113 ± 15% interrupts.CPU60.TLB:TLB_shootdowns
8689 ± 5% -21.6% 6808 interrupts.CPU61.CAL:Function_call_interrupts
4715 ± 20% -77.6% 1055 ± 19% interrupts.CPU61.TLB:TLB_shootdowns
8574 ± 4% -18.9% 6953 ± 2% interrupts.CPU62.CAL:Function_call_interrupts
4494 ± 17% -72.3% 1244 ± 7% interrupts.CPU62.TLB:TLB_shootdowns
8865 ± 3% -25.4% 6614 ± 7% interrupts.CPU63.CAL:Function_call_interrupts
4870 ± 12% -76.8% 1130 ± 12% interrupts.CPU63.TLB:TLB_shootdowns
8724 ± 7% -20.2% 6958 ± 3% interrupts.CPU64.CAL:Function_call_interrupts
4736 ± 16% -72.6% 1295 ± 7% interrupts.CPU64.TLB:TLB_shootdowns
8717 ± 6% -23.7% 6653 ± 4% interrupts.CPU65.CAL:Function_call_interrupts
4626 ± 19% -76.5% 1087 ± 21% interrupts.CPU65.TLB:TLB_shootdowns
6671 +24.7% 8318 ± 9% interrupts.CPU66.CAL:Function_call_interrupts
1091 ± 8% +249.8% 3819 ± 32% interrupts.CPU66.TLB:TLB_shootdowns
6795 ± 2% +26.9% 8624 ± 9% interrupts.CPU67.CAL:Function_call_interrupts
1098 ± 24% +299.5% 4388 ± 39% interrupts.CPU67.TLB:TLB_shootdowns
6704 ± 5% +25.8% 8431 ± 8% interrupts.CPU68.CAL:Function_call_interrupts
1214 ± 15% +236.1% 4083 ± 36% interrupts.CPU68.TLB:TLB_shootdowns
1049 ± 15% +326.2% 4473 ± 33% interrupts.CPU69.TLB:TLB_shootdowns
8554 ± 6% -19.6% 6874 ± 2% interrupts.CPU7.CAL:Function_call_interrupts
4753 ± 19% -71.7% 1344 ± 16% interrupts.CPU7.TLB:TLB_shootdowns
1298 ± 13% +227.4% 4249 ± 38% interrupts.CPU70.TLB:TLB_shootdowns
6976 +19.9% 8362 ± 7% interrupts.CPU71.CAL:Function_call_interrupts
1232748 ± 18% -57.3% 525824 ± 33% interrupts.CPU71.RES:Rescheduling_interrupts
1253 ± 9% +211.8% 3909 ± 31% interrupts.CPU71.TLB:TLB_shootdowns
1316 ± 22% +188.7% 3800 ± 33% interrupts.CPU72.TLB:TLB_shootdowns
6665 ± 5% +26.5% 8429 ± 8% interrupts.CPU73.CAL:Function_call_interrupts
1202 ± 13% +234.1% 4017 ± 37% interrupts.CPU73.TLB:TLB_shootdowns
6639 ± 5% +27.0% 8434 ± 8% interrupts.CPU74.CAL:Function_call_interrupts
1079 ± 16% +269.4% 3986 ± 36% interrupts.CPU74.TLB:TLB_shootdowns
1055 ± 12% +301.2% 4235 ± 34% interrupts.CPU75.TLB:TLB_shootdowns
7011 ± 3% +21.6% 8522 ± 8% interrupts.CPU76.CAL:Function_call_interrupts
1223 ± 13% +230.7% 4047 ± 35% interrupts.CPU76.TLB:TLB_shootdowns
6886 ± 7% +25.6% 8652 ± 10% interrupts.CPU77.CAL:Function_call_interrupts
1316 ± 16% +229.8% 4339 ± 36% interrupts.CPU77.TLB:TLB_shootdowns
7343 ± 5% +19.1% 8743 ± 9% interrupts.CPU78.CAL:Function_call_interrupts
1699 ± 37% +144.4% 4152 ± 31% interrupts.CPU78.TLB:TLB_shootdowns
7136 ± 4% +21.4% 8666 ± 9% interrupts.CPU79.CAL:Function_call_interrupts
1094 ± 13% +276.2% 4118 ± 34% interrupts.CPU79.TLB:TLB_shootdowns
8531 ± 5% -19.5% 6869 ± 2% interrupts.CPU8.CAL:Function_call_interrupts
4764 ± 16% -71.0% 1382 ± 14% interrupts.CPU8.TLB:TLB_shootdowns
1387 ± 29% +181.8% 3910 ± 38% interrupts.CPU80.TLB:TLB_shootdowns
1114 ± 30% +259.7% 4007 ± 36% interrupts.CPU81.TLB:TLB_shootdowns
7012 +23.9% 8685 ± 8% interrupts.CPU82.CAL:Function_call_interrupts
1274 ± 12% +255.4% 4530 ± 27% interrupts.CPU82.TLB:TLB_shootdowns
6971 ± 3% +23.8% 8628 ± 9% interrupts.CPU83.CAL:Function_call_interrupts
1156 ± 18% +260.1% 4162 ± 34% interrupts.CPU83.TLB:TLB_shootdowns
7030 ± 4% +21.0% 8504 ± 8% interrupts.CPU84.CAL:Function_call_interrupts
1286 ± 23% +224.0% 4166 ± 31% interrupts.CPU84.TLB:TLB_shootdowns
7059 +22.4% 8644 ± 11% interrupts.CPU85.CAL:Function_call_interrupts
1421 ± 22% +208.8% 4388 ± 33% interrupts.CPU85.TLB:TLB_shootdowns
7018 ± 2% +22.8% 8615 ± 9% interrupts.CPU86.CAL:Function_call_interrupts
1258 ± 8% +231.1% 4167 ± 34% interrupts.CPU86.TLB:TLB_shootdowns
1338 ± 3% +217.9% 4255 ± 31% interrupts.CPU87.TLB:TLB_shootdowns
8376 ± 4% -19.0% 6787 ± 2% interrupts.CPU9.CAL:Function_call_interrupts
4466 ± 17% -71.2% 1286 ± 18% interrupts.CPU9.TLB:TLB_shootdowns
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Oliver Sang
[exec] 166d03c9ec: ltp.execveat02.fail
by kernel test robot
Greeting,
FYI, we noticed the following commit (built with gcc-7):
commit: 166d03c9eca66be5b1ab2eae775598d1b0314cb7 ("[PATCH 2/4] exec: Relocate S_ISREG() check")
url: https://github.com/0day-ci/linux/commits/Kees-Cook/Relocate-execve-sanity...
base: https://git.kernel.org/cgit/linux/kernel/git/jack/linux-fs.git fsnotify
in testcase: ltp
with the following parameters:
disk: 1HDD
fs: xfs
test: syscalls_part1
test-description: The LTP testsuite contains a collection of tools for testing the Linux kernel and related features.
test-url: http://linux-test-project.github.io/
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
<<<test_start>>>
tag=execveat02 stime=1590373229
cmdline="execveat02"
contacts=""
analysis=exit
<<<test_output>>>
tst_test.c:1246: INFO: Timeout per run is 0h 05m 00s
execveat02.c:64: PASS: execveat() fails as expected: EBADF (9)
execveat02.c:64: PASS: execveat() fails as expected: EINVAL (22)
execveat02.c:61: FAIL: execveat() fails unexpectedly, expected: ELOOP: EACCES (13)
execveat02.c:64: PASS: execveat() fails as expected: ENOTDIR (20)
Summary:
passed 3
failed 1
skipped 0
warnings 0
<<<execution_status>>>
initiation_status="ok"
duration=0 termination_type=exited termination_id=1 corefile=no
cutime=0 cstime=1
<<<test_end>>>
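For reference, below is a minimal userspace sketch (not the LTP source) of how the ELOOP sub-case is normally triggered, assuming the test follows the execveat(2) rule that AT_SYMLINK_NOFOLLOW on a descriptor referring to a symbolic link must fail with ELOOP. The file name "loop_link" and the O_PATH/AT_EMPTY_PATH mechanics are illustrative assumptions, not details taken from execveat02.c; the EACCES (13) above presumably means the relocated S_ISREG() check surfaces a different error before the symlink check is reached.
/* eloop_sketch.c - hedged illustration of the expected-ELOOP case */
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>
int main(void)
{
        /* "loop_link" is a hypothetical symlink prepared beforehand,
         * e.g.  ln -s /bin/true loop_link */
        int fd = open("loop_link", O_PATH | O_NOFOLLOW);
        if (fd < 0) {
                perror("open");
                return 1;
        }
        char *argv[] = { "loop_link", NULL };
        char *envp[] = { NULL };
        /* AT_EMPTY_PATH: execute the object fd refers to;
         * AT_SYMLINK_NOFOLLOW: refuse to follow it if it is a symlink,
         * so the expected errno here is ELOOP. */
        syscall(SYS_execveat, fd, "", argv, envp,
                AT_EMPTY_PATH | AT_SYMLINK_NOFOLLOW);
        printf("execveat: %d (%s), expected ELOOP\n", errno, strerror(errno));
        return 0;
}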
To reproduce:
# build kernel
cd linux
cp config-5.7.0-rc1-00034-g166d03c9eca66 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage modules
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 INSTALL_MOD_PATH=<mod-install-dir> modules_install
cd <mod-install-dir>
find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> -m modules.cgz job-script # job-script is attached in this email
Thanks,
Rong Chen
b614345f52 ("x86/entry: Clarify irq_{enter,exit}_rcu()"): WARNING: CPU: 0 PID: 0 at kernel/locking/lockdep.c:3680 lockdep_hardirqs_on_prepare
by kernel test robot
Greetings,
the 0day kernel testing robot got the dmesg below, and the first bad commit is
https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git x86/entry
commit b614345f52bcde8299a53132f5e48a9eb5a1f320
Author: Peter Zijlstra <peterz(a)infradead.org>
AuthorDate: Fri May 29 23:27:39 2020 +0200
Commit: Thomas Gleixner <tglx(a)linutronix.de>
CommitDate: Sat May 30 10:00:10 2020 +0200
x86/entry: Clarify irq_{enter,exit}_rcu()
Because:
irq_enter_rcu() includes lockdep_hardirq_enter()
irq_exit_rcu() does *NOT* include lockdep_hardirq_exit()
Which resulted in two 'stray' lockdep_hardirq_exit() calls in
idtentry.h, and me spending a long time trying to find the matching
enter calls.
Signed-off-by: Peter Zijlstra (Intel) <peterz(a)infradead.org>
Signed-off-by: Thomas Gleixner <tglx(a)linutronix.de>
Link: https://lkml.kernel.org/r/20200529213321.359433429@infradead.org
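A reduced sketch of the asymmetry described above (illustrative only, not the kernel source: irq_enter_rcu(), irq_exit_rcu() and lockdep_hardirq_exit() are the real symbols, while the wrapper is a made-up caller). Under the pre-commit behaviour a dispatch path ends up looking roughly like this, which is where the 'stray' exit calls in idtentry.h came from:
/* hypothetical caller, showing the enter/exit asymmetry */
static void example_hardirq_dispatch(void)
{
        irq_enter_rcu();          /* enters hardirq context and also
                                   * performs lockdep_hardirq_enter()    */

        /* ... run the actual interrupt handler ... */

        irq_exit_rcu();           /* leaves hardirq context but does NOT
                                   * perform lockdep_hardirq_exit()      */
        lockdep_hardirq_exit();   /* so the caller must do it explicitly */
}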
0f81407e6e x86/entry: Remove DBn stacks
b614345f52 x86/entry: Clarify irq_{enter,exit}_rcu()
5980d208e5 x86/idt: Consolidate idt functionality
+--------------------------------------------------------------------------------------+------------+------------+------------+
| | 0f81407e6e | b614345f52 | 5980d208e5 |
+--------------------------------------------------------------------------------------+------------+------------+------------+
| boot_successes | 37 | 0 | 0 |
| boot_failures | 2 | 13 | 343 |
| Kernel_panic-not_syncing:VFS:Unable_to_mount_root_fs_on_unknown-block(#,#) | 2 | | |
| WARNING:at_kernel/locking/lockdep.c:#lockdep_hardirqs_on_prepare | 0 | 13 | 342 |
| RIP:lockdep_hardirqs_on_prepare | 0 | 13 | 342 |
| RIP:default_idle | 0 | 13 | 338 |
| RIP:__do_softirq | 0 | 0 | 5 |
| RIP:_raw_spin_unlock_irqrestore | 0 | 0 | 1 |
| RIP:_raw_spin_unlock_irq | 0 | 0 | 4 |
| BUG:unable_to_handle_page_fault_for_address | 0 | 0 | 2 |
| BUG:kernel_hang_in_test_stage | 0 | 0 | 4 |
| WARNING:at_fs/read_write.c:#vfs_copy_file_range | 0 | 0 | 1 |
| RIP:vfs_copy_file_range | 0 | 0 | 1 |
| INFO:rcu_preempt_self-detected_stall_on_CPU | 0 | 0 | 1 |
| RIP:iov_iter_copy_from_user_atomic | 0 | 0 | 1 |
| RIP:bvec_iter_advance | 0 | 0 | 1 |
| RIP:rcu_check_gp_start_stall | 0 | 0 | 1 |
| BUG:kernel_hang_in_early-boot_stage,last_printk:Probing_EDD(edd=off_to_disable)...ok | 0 | 0 | 1 |
| INFO:rcu_preempt_detected_stalls_on_CPUs/tasks | 0 | 0 | 1 |
| RIP:___might_sleep | 0 | 0 | 1 |
| RIP:arch_local_save_flags | 0 | 0 | 1 |
| RIP:fgraph_trace | 0 | 0 | 1 |
+--------------------------------------------------------------------------------------+------------+------------+------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <lkp(a)intel.com>
[ 0.429563] smp: Brought up 1 node, 2 CPUs
[ 0.430121] smpboot: Max logical packages: 2
[ 0.430717] smpboot: Total of 2 processors activated (9199.98 BogoMIPS)
[ 0.432170] ------------[ cut here ]------------
[ 0.432830] DEBUG_LOCKS_WARN_ON(current->hardirq_context)
[ 0.432855] WARNING: CPU: 0 PID: 0 at kernel/locking/lockdep.c:3680 lockdep_hardirqs_on_prepare+0xee/0x13f
[ 0.434909] Modules linked in:
[ 0.435334] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 5.7.0-rc5-00398-gb614345f52bcd #1
[ 0.435551] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
[ 0.435551] RIP: 0010:lockdep_hardirqs_on_prepare+0xee/0x13f
[ 0.435551] Code: 00 00 00 74 29 e8 fb 41 90 00 85 c0 74 6a 83 3d a6 75 d3 01 00 75 61 48 c7 c6 aa 0a 92 82 48 c7 c7 09 ac 90 82 e8 de 2d fb ff <0f> 0b eb 4a 65 48 8b 2c 25 c0 23 01 00 48 8b 85 d8 08 00 00 ff 85
[ 0.435551] RSP: 0000:ffffffff82c03de0 EFLAGS: 00010082
[ 0.435551] RAX: 000000000000002d RBX: 0000000000000001 RCX: 00000000000000de
[ 0.435551] RDX: ffffffff82c188c0 RSI: ffffffff82e6f9c0 RDI: ffffffff81144e31
[ 0.435551] RBP: ffffffff82c03e08 R08: 0000000019cc7922 R09: 000000000000002d
[ 0.435551] R10: 0000000000000000 R11: 0000000000000018 R12: 0000000000000000
[ 0.435551] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
[ 0.435551] FS: 0000000000000000(0000) GS:ffff88823fa00000(0000) knlGS:0000000000000000
[ 0.435551] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 0.435551] CR2: 00000000ffffffff CR3: 0000000002c11001 CR4: 00000000001606b0
[ 0.435551] Call Trace:
[ 0.435551] idtentry_exit_cond_rcu+0x78/0xc6
[ 0.435551] asm_sysvec_reschedule_ipi+0x12/0x20
[ 0.435551] RIP: 0010:default_idle+0x22/0x31
[ 0.435551] Code: ff cc cc cc cc cc cc cc e8 93 f4 5d ff e8 0c 41 ff ff bf 01 00 00 00 89 c6 e8 2a 5b 5c ff e8 63 11 76 ff e8 04 5a 5c ff fb f4 <e8> ef 40 ff ff 83 cf ff 89 c6 e9 0f 5b 5c ff e8 62 f4 5d ff 53 65
[ 0.435551] RSP: 0000:ffffffff82c03eb0 EFLAGS: 00000202
[ 0.435551] RAX: 0000000000001631 RBX: ffffffff82c188c0 RCX: 0000000019c0f324
[ 0.435551] RDX: ffffffff82c188c0 RSI: 0000000000000006 RDI: ffffffff81a4b813
[ 0.435551] RBP: 0000000000000000 R08: 0000000019c0f439 R09: 00000000000c0025
[ 0.435551] R10: 0000000000000000 R11: 0000000000000002 R12: 0000000000000000
[ 0.435551] R13: 0000000000000000 R14: 0000000000000000 R15: ffff88807ffff1b3
[ 0.435551] ? default_idle+0x1b/0x31
[ 0.435551] default_idle_call+0x1f/0x24
[ 0.435551] do_idle+0xba/0x1a2
[ 0.435551] cpu_startup_entry+0x1d/0x1f
[ 0.435551] start_kernel+0x62c/0x63b
[ 0.435551] ? copy_bootdata+0x18/0x55
[ 0.435551] secondary_startup_64+0xb6/0xc0
[ 0.435551] irq event stamp: 5682
[ 0.435551] hardirqs last enabled at (5681): [<ffffffff81a4b813>] default_idle+0x1b/0x31
[ 0.435551] hardirqs last disabled at (5682): [<ffffffff81a3e568>] sysvec_reschedule_ipi+0x10/0x206
[ 0.435551] softirqs last enabled at (5622): [<ffffffff8101249a>] fpregs_unlock+0x0/0x2c
[ 0.435551] softirqs last disabled at (5620): [<ffffffff8101239b>] fpregs_lock+0x0/0x1b
[ 0.435551] ---[ end trace 16c55d73b2811f58 ]---
[ 0.471573] node 0 initialised, 1245092 pages in 32ms
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 5980d208e5ef28455e9e8b08f6250b443a2f0893 5a7462b1f9c19312da0e489b859184cc88229bad --
git bisect good ff98610a03285516b578821549973f969118d6a3 # 09:49 G 10 0 0 0 x86/entry, mce: Disallow #DB during #MC
git bisect bad 029149180d1d6e05e81e7db0d46c00960ab2e84f # 10:04 B 0 2 18 0 x86/entry: Rename trace_hardirqs_off_prepare()
git bisect good 8449e768dcb85b4d8db51482d8c9260bb05ccabc # 16:10 G 11 0 0 0 x86/entry: Remove debug IDT frobbing
git bisect good 0f81407e6e4cf7e878f1e5d6423324dbd966acba # 16:39 G 10 0 0 0 x86/entry: Remove DBn stacks
git bisect bad b614345f52bcde8299a53132f5e48a9eb5a1f320 # 18:12 B 0 3 19 0 x86/entry: Clarify irq_{enter,exit}_rcu()
# first bad commit: [b614345f52bcde8299a53132f5e48a9eb5a1f320] x86/entry: Clarify irq_{enter,exit}_rcu()
git bisect good 0f81407e6e4cf7e878f1e5d6423324dbd966acba # 18:21 G 31 0 0 2 x86/entry: Remove DBn stacks
# extra tests with debug options
git bisect bad b614345f52bcde8299a53132f5e48a9eb5a1f320 # 18:41 B 0 6 22 0 x86/entry: Clarify irq_{enter,exit}_rcu()
# extra tests on head commit of tip/x86/entry
git bisect bad 5980d208e5ef28455e9e8b08f6250b443a2f0893 # 18:58 B 0 342 362 1 x86/idt: Consolidate idt functionality
# bad: [5980d208e5ef28455e9e8b08f6250b443a2f0893] x86/idt: Consolidate idt functionality
# extra tests on revert first bad commit
git bisect good 375bc8902bee1ddc5098b9063d15956aef8ebf18 # 19:48 G 10 0 0 0 Revert "x86/entry: Clarify irq_{enter,exit}_rcu()"
# good: [375bc8902bee1ddc5098b9063d15956aef8ebf18] Revert "x86/entry: Clarify irq_{enter,exit}_rcu()"
# extra tests on tip/master
# 119: [14bf8733b3c1887abab08371c47e68f2afbc0b93] Merge branch 'x86/urgent'
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/lkp@lists.01.org