FYI, we noticed a -51.9% regression of vm-scalability.throughput due to commit:
commit 84b1c66da9468dcbcf33f656a1bd6cfab2d6c80d ("sched/fair: Remove scale_load_down() for load_avg")
git://bee.sh.intel.com/git/ydu19/tip sched_avg_opt_and_cleanup_v2.7
in testcase: vm-scalability
on test machine: lkp-hsw01: 56 threads Grantley Haswell-EP with 64G memory
with the following parameters:
	cpufreq_governor=performance
	runtime=300s
	size=2T
	test=shm-xread-seq-mt
Details are as follows:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase:
gcc-4.9/performance/x86_64-rhel/debian-x86_64-2015-02-07.cgz/300s/2T/lkp-hsw01/shm-xread-seq-mt/vm-scalability
commit:
1b3afd693c00319f4303d470d62807e245a79f65
84b1c66da9468dcbcf33f656a1bd6cfab2d6c80d
1b3afd693c00319f 84b1c66da9468dcbcf33f656a1
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
:4 25% 1:4 kmsg.Spurious_LAPIC_timer_interrupt_on_cpu
%stddev %change %stddev
\ | \
37668391 ± 0% -51.9% 18116025 ± 0% vm-scalability.throughput
115.57 ± 0% +103.6% 235.25 ± 0% vm-scalability.time.elapsed_time
115.57 ± 0% +103.6% 235.25 ± 0% vm-scalability.time.elapsed_time.max
93867376 ± 0% +114.0% 2.008e+08 ± 0% vm-scalability.time.minor_page_faults
912.20 ± 0% +261.3% 3296 ± 0% vm-scalability.time.system_time
1255 ± 0% -12.4% 1099 ± 0% vm-scalability.time.user_time
48785949 ± 0% +155.8% 1.248e+08 ± 0% vm-scalability.time.voluntary_context_switches
132413 ± 1% +9.3% 144712 ± 2% meminfo.Active
30501 ± 4% +38.0% 42097 ± 6% meminfo.Active(anon)
9932 ± 15% -80.9% 1893 ±164% numa-numastat.node0.other_node
917.50 ±170% +890.5% 9087 ± 34% numa-numastat.node1.other_node
62614 ± 1% +17.6% 73665 ± 8% numa-meminfo.node0.Active
11382 ± 12% +103.4% 23149 ± 22% numa-meminfo.node0.Active(anon)
6983 ± 14% -41.9% 4057 ± 47% numa-meminfo.node1.AnonHugePages
25.75 ± 1% -30.1% 18.00 ± 0% vmstat.procs.r
714664 ± 0% +41.6% 1011775 ± 0% vmstat.system.cs
60803 ± 0% +1.5% 61729 ± 0% vmstat.system.in
7772 ± 3% +27.0% 9868 ± 5% softirqs.NET_RX
174881 ± 9% +291.3% 684355 ± 1% softirqs.RCU
850755 ± 1% +73.2% 1473464 ± 0% softirqs.SCHED
1252182 ± 3% +86.2% 2331071 ± 1% softirqs.TIMER
2845 ± 12% +103.4% 5786 ± 22% numa-vmstat.node0.nr_active_anon
6889706 ± 7% +13.2% 7800953 ± 0% numa-vmstat.node0.numa_hit
6814259 ± 7% +13.7% 7749981 ± 0% numa-vmstat.node0.numa_local
75446 ± 2% -32.4% 50971 ± 57% numa-vmstat.node0.numa_other
16485 ± 9% +148.8% 41020 ± 70% numa-vmstat.node1.numa_other
1074 ± 0% +1.9% 1094 ± 0% turbostat.Avg_MHz
24.50 ± 0% +154.7% 62.39 ± 0% turbostat.CPU%c1
40.00 ± 0% -94.6% 2.18 ± 2% turbostat.CPU%c6
55.75 ± 1% +15.2% 64.25 ± 1% turbostat.CoreTmp
32.69 ± 0% -97.2% 0.91 ± 19% turbostat.Pkg%pc2
60.25 ± 2% +14.5% 69.00 ± 1% turbostat.PkgTmp
174.85 ± 0% +20.1% 209.98 ± 0% turbostat.PkgWatt
21.27 ± 2% +36.3% 28.99 ± 0% turbostat.RAMWatt
9.077e+08 ± 0% +464.7% 5.126e+09 ± 0% cpuidle.C1-HSW.time
34488483 ± 0% +204.4% 1.05e+08 ± 0% cpuidle.C1-HSW.usage
32091141 ± 0% +2179.6% 7.316e+08 ± 1% cpuidle.C1E-HSW.time
425737 ± 1% +2620.6% 11582628 ± 1% cpuidle.C1E-HSW.usage
2503376 ± 4% +1103.9% 30136985 ± 1% cpuidle.C3-HSW.time
9350 ± 4% +1849.4% 182271 ± 1% cpuidle.C3-HSW.usage
3.183e+09 ± 0% -22.4% 2.469e+09 ± 0% cpuidle.C6-HSW.time
3397123 ± 0% -23.8% 2587962 ± 0% cpuidle.C6-HSW.usage
48923914 ± 3% +260.5% 1.764e+08 ± 2% cpuidle.POLL.time
725127 ± 0% +232.9% 2413961 ± 0% cpuidle.POLL.usage
7624 ± 4% +38.0% 10524 ± 6% proc-vmstat.nr_active_anon
7966 ± 5% +90.8% 15197 ± 2% proc-vmstat.numa_hint_faults
4343 ± 6% +86.1% 8082 ± 2% proc-vmstat.numa_hint_faults_local
26506353 ± 0% +11.5% 29544930 ± 0% proc-vmstat.numa_hit
26495504 ± 0% +11.5% 29533949 ± 0% proc-vmstat.numa_local
1239 ± 4% +609.5% 8793 ± 97% proc-vmstat.numa_pages_migrated
8062 ± 5% +703.3% 64769 ± 87% proc-vmstat.numa_pte_updates
6587 ± 7% +29.0% 8498 ± 8% proc-vmstat.pgactivate
25724719 ± 0% +11.5% 28680319 ± 0% proc-vmstat.pgalloc_normal
94133198 ± 0% +113.9% 2.014e+08 ± 0% proc-vmstat.pgfault
24677874 ± 2% +13.9% 28118027 ± 2% proc-vmstat.pgfree
1239 ± 4% +609.5% 8793 ± 97% proc-vmstat.pgmigrate_success
9.148e+10 ± 0% +45.6% 1.332e+11 ± 0% perf-stat.L1-dcache-load-misses
2.408e+12 ± 0% +35.9% 3.273e+12 ± 1% perf-stat.L1-dcache-loads
7.486e+11 ± 0% +50.4% 1.126e+12 ± 1% perf-stat.L1-dcache-stores
1.858e+10 ± 0% +54.6% 2.874e+10 ± 0% perf-stat.L1-icache-load-misses
90333622 ± 2% +6383.1% 5.856e+09 ± 1% perf-stat.LLC-load-misses
1.831e+10 ± 0% +78.9% 3.275e+10 ± 1% perf-stat.LLC-loads
65489933 ± 1% +4490.6% 3.006e+09 ± 2% perf-stat.LLC-store-misses
5.403e+09 ± 0% +109.8% 1.134e+10 ± 0% perf-stat.LLC-stores
3.322e+12 ± 2% +25.8% 4.179e+12 ± 3% perf-stat.branch-instructions
4.477e+09 ± 2% +125.8% 1.011e+10 ± 0% perf-stat.branch-load-misses
3.251e+12 ± 1% +27.7% 4.152e+12 ± 1% perf-stat.branch-loads
4.533e+09 ± 3% +122.6% 1.009e+10 ± 1% perf-stat.branch-misses
2.258e+11 ± 0% +102.4% 4.571e+11 ± 0% perf-stat.bus-cycles
1.664e+08 ± 4% +5294.7% 8.976e+09 ± 2% perf-stat.cache-misses
3.473e+10 ± 2% +59.0% 5.523e+10 ± 1% perf-stat.cache-references
84147940 ± 0% +185.6% 2.403e+08 ± 0% perf-stat.context-switches
6.927e+12 ± 2% +108.4% 1.444e+13 ± 0% perf-stat.cpu-cycles
19621508 ± 0% +115.8% 42339374 ± 0% perf-stat.cpu-migrations
1.247e+09 ± 5% +102.5% 2.525e+09 ± 4% perf-stat.dTLB-load-misses
2.413e+12 ± 0% +33.7% 3.225e+12 ± 2% perf-stat.dTLB-loads
1.103e+08 ± 6% +26.7% 1.398e+08 ± 4% perf-stat.dTLB-store-misses
7.431e+11 ± 1% +50.5% 1.118e+12 ± 1% perf-stat.dTLB-stores
6.621e+08 ± 1% +174.7% 1.819e+09 ± 0% perf-stat.iTLB-loads
1.014e+13 ± 2% +33.2% 1.351e+13 ± 1% perf-stat.instructions
94122563 ± 0% +113.9% 2.013e+08 ± 0% perf-stat.minor-faults
77552115 ± 8% +7339.8% 5.77e+09 ± 2% perf-stat.node-load-misses
17020549 ± 9% +584.8% 1.165e+08 ± 2% perf-stat.node-loads
10450654 ± 5% +17487.2% 1.838e+09 ± 2% perf-stat.node-store-misses
52951139 ± 2% +2079.9% 1.154e+09 ± 2% perf-stat.node-stores
94122616 ± 0% +113.9% 2.013e+08 ± 0% perf-stat.page-faults
5.857e+12 ± 0% +103.7% 1.193e+13 ± 2% perf-stat.ref-cycles
8.88 ± 2% -26.3% 6.54 ± 1% perf-profile.cycles-pp.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate
7.95 ± 1% -34.1% 5.24 ± 1% perf-profile.cycles-pp.__do_fault.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault
4.66 ± 2% -32.3% 3.15 ± 1% perf-profile.cycles-pp.__kernel_text_address.print_context_stack.dump_trace.save_stack_trace_tsk.__account_scheduler_latency
4.83 ± 1% -26.7% 3.54 ± 1% perf-profile.cycles-pp.__lock_page.find_lock_entry.shmem_getpage_gfp.shmem_fault.__do_fault
2.82 ± 4% -53.9% 1.30 ± 1% perf-profile.cycles-pp.__schedule.schedule.schedule_preempt_disabled.cpu_startup_entry.start_secondary
2.67 ± 1% -24.9% 2.01 ± 2% perf-profile.cycles-pp.__schedule.schedule.schedule_timeout.io_schedule_timeout.bit_wait_io
4.72 ± 1% -26.4% 3.48 ± 1% perf-profile.cycles-pp.__wait_on_bit_lock.__lock_page.find_lock_entry.shmem_getpage_gfp.shmem_fault
15.90 ± 2% -47.1% 8.41 ± 1% perf-profile.cycles-pp.__wake_up.__wake_up_bit.unlock_page.handle_pte_fault.handle_mm_fault
16.06 ± 2% -46.6% 8.57 ± 1% perf-profile.cycles-pp.__wake_up_bit.unlock_page.handle_pte_fault.handle_mm_fault.__do_page_fault
15.43 ± 2% -46.5% 8.25 ± 1% perf-profile.cycles-pp.__wake_up_common.__wake_up.__wake_up_bit.unlock_page.handle_pte_fault
1.69 ± 6% -100.0% 0.00 ± -1% perf-profile.cycles-pp._raw_spin_lock.__schedule.schedule.schedule_preempt_disabled.cpu_startup_entry
16.88 ± 2% +83.5% 30.96 ± 1% perf-profile.cycles-pp._raw_spin_lock.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault
2.10 ± 2% -100.0% 0.00 ± -1% perf-profile.cycles-pp._raw_spin_lock.try_to_wake_up.default_wake_function.autoremove_wake_function.wake_bit_function
0.00 ± -1% +Inf% 2.46 ± 2% perf-profile.cycles-pp.activate_task.ttwu_do_activate.sched_ttwu_pending.cpu_startup_entry.start_secondary
10.39 ± 2% -47.5% 5.45 ± 1% perf-profile.cycles-pp.activate_task.ttwu_do_activate.try_to_wake_up.default_wake_function.autoremove_wake_function
15.20 ± 2% -47.1% 8.04 ± 1% perf-profile.cycles-pp.autoremove_wake_function.wake_bit_function.__wake_up_common.__wake_up.__wake_up_bit
3.36 ± 1% -27.4% 2.44 ± 2% perf-profile.cycles-pp.bit_wait_io.__wait_on_bit_lock.__lock_page.find_lock_entry.shmem_getpage_gfp
45.79 ± 2% -7.5% 42.36 ± 1% perf-profile.cycles-pp.call_cpuidle.cpu_startup_entry.start_secondary
45.76 ± 2% -7.5% 42.34 ± 1% perf-profile.cycles-pp.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
1.44 ± 3% -30.3% 1.01 ± 1% perf-profile.cycles-pp.deactivate_task.__schedule.schedule.schedule_timeout.io_schedule_timeout
15.11 ± 2% -47.7% 7.90 ± 1% perf-profile.cycles-pp.default_wake_function.autoremove_wake_function.wake_bit_function.__wake_up_common.__wake_up
1.17 ± 3% -26.3% 0.86 ± 0% perf-profile.cycles-pp.dequeue_entity.dequeue_task_fair.deactivate_task.__schedule.schedule
1.31 ± 3% -29.5% 0.93 ± 1% perf-profile.cycles-pp.dequeue_task_fair.deactivate_task.__schedule.schedule.schedule_timeout
8.06 ± 2% -30.3% 5.62 ± 0% perf-profile.cycles-pp.dump_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair
0.00 ± -1% +Inf% 2.28 ± 2% perf-profile.cycles-pp.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate.sched_ttwu_pending
9.95 ± 2% -47.3% 5.24 ± 1% perf-profile.cycles-pp.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up
0.00 ± -1% +Inf% 2.39 ± 2% perf-profile.cycles-pp.enqueue_task_fair.activate_task.ttwu_do_activate.sched_ttwu_pending.cpu_startup_entry
10.19 ± 2% -47.7% 5.33 ± 1% perf-profile.cycles-pp.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up.default_wake_function
1.42 ± 3% -22.1% 1.11 ± 1% perf-profile.cycles-pp.filemap_map_pages.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault
5.42 ± 1% -24.6% 4.09 ± 1% perf-profile.cycles-pp.find_lock_entry.shmem_getpage_gfp.shmem_fault.__do_fault.handle_pte_fault
3.34 ± 1% -27.3% 2.43 ± 1% perf-profile.cycles-pp.io_schedule_timeout.bit_wait_io.__wait_on_bit_lock.__lock_page.find_lock_entry
1.04 ± 2% -49.4% 0.53 ± 2% perf-profile.cycles-pp.is_ftrace_trampoline.__kernel_text_address.print_context_stack.dump_trace.save_stack_trace_tsk
1.57 ± 1% -52.9% 0.74 ± 2% perf-profile.cycles-pp.is_module_text_address.__kernel_text_address.print_context_stack.dump_trace.save_stack_trace_tsk
1.69 ± 6% -100.0% 0.00 ± -1% perf-profile.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.__schedule.schedule.schedule_preempt_disabled
16.79 ± 2% +83.4% 30.80 ± 1% perf-profile.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.handle_pte_fault.handle_mm_fault.__do_page_fault
2.10 ± 2% -100.0% 0.00 ± -1% perf-profile.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.try_to_wake_up.default_wake_function.autoremove_wake_function
7.35 ± 2% -30.5% 5.11 ± 0% perf-profile.cycles-pp.print_context_stack.dump_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity
0.98 ± 3% -27.9% 0.70 ± 1% perf-profile.cycles-pp.radix_tree_next_chunk.filemap_map_pages.handle_pte_fault.handle_mm_fault.__do_page_fault
8.08 ± 2% -30.3% 5.63 ± 0% perf-profile.cycles-pp.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.activate_task
0.00 ± -1% +Inf% 2.74 ± 1% perf-profile.cycles-pp.sched_ttwu_pending.cpu_startup_entry.start_secondary
2.90 ± 4% -52.2% 1.39 ± 1% perf-profile.cycles-pp.schedule.schedule_preempt_disabled.cpu_startup_entry.start_secondary
2.78 ± 1% -25.1% 2.08 ± 2% perf-profile.cycles-pp.schedule.schedule_timeout.io_schedule_timeout.bit_wait_io.__wait_on_bit_lock
2.92 ± 4% -52.0% 1.40 ± 2% perf-profile.cycles-pp.schedule_preempt_disabled.cpu_startup_entry.start_secondary
2.81 ± 1% -25.7% 2.09 ± 2% perf-profile.cycles-pp.schedule_timeout.io_schedule_timeout.bit_wait_io.__wait_on_bit_lock.__lock_page
1.33 ± 2% -27.8% 0.96 ± 1% perf-profile.cycles-pp.select_task_rq_fair.try_to_wake_up.default_wake_function.autoremove_wake_function.wake_bit_function
7.89 ± 1% -34.0% 5.21 ± 1% perf-profile.cycles-pp.shmem_fault.__do_fault.handle_pte_fault.handle_mm_fault.__do_page_fault
7.26 ± 1% -31.4% 4.98 ± 1% perf-profile.cycles-pp.shmem_getpage_gfp.shmem_fault.__do_fault.handle_pte_fault.handle_mm_fault
14.94 ± 2% -48.1% 7.75 ± 1% perf-profile.cycles-pp.try_to_wake_up.default_wake_function.autoremove_wake_function.wake_bit_function.__wake_up_common
0.00 ± -1% +Inf% 2.57 ± 1% perf-profile.cycles-pp.ttwu_do_activate.sched_ttwu_pending.cpu_startup_entry.start_secondary
10.79 ± 3% -47.2% 5.70 ± 1% perf-profile.cycles-pp.ttwu_do_activate.try_to_wake_up.default_wake_function.autoremove_wake_function.wake_bit_function
16.31 ± 2% -44.8% 9.01 ± 1% perf-profile.cycles-pp.unlock_page.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault
15.22 ± 2% -47.1% 8.05 ± 1% perf-profile.cycles-pp.wake_bit_function.__wake_up_common.__wake_up.__wake_up_bit.unlock_page
354.29 ± 56% +25157.9% 89486 ± 38% sched_debug.cfs_rq:/.MIN_vruntime.avg
11512 ± 14% +17893.8% 2071559 ± 24% sched_debug.cfs_rq:/.MIN_vruntime.max
1888 ± 32% +20508.3% 389104 ± 27% sched_debug.cfs_rq:/.MIN_vruntime.stddev
9944 ± 0% +203.0% 30129 ± 0% sched_debug.cfs_rq:/.exec_clock.avg
12551 ± 5% +220.5% 40231 ± 0% sched_debug.cfs_rq:/.exec_clock.max
7832 ± 3% +143.3% 19059 ± 1% sched_debug.cfs_rq:/.exec_clock.min
1731 ± 7% +436.0% 9283 ± 0% sched_debug.cfs_rq:/.exec_clock.stddev
306374 ± 9% -97.8% 6654 ± 13% sched_debug.cfs_rq:/.load.avg
1025203 ± 0% -96.2% 39390 ± 6% sched_debug.cfs_rq:/.load.max
448149 ± 3% -97.7% 10314 ± 3% sched_debug.cfs_rq:/.load.stddev
116.25 ± 4% +14275.9% 16711 ± 13% sched_debug.cfs_rq:/.load_avg.avg
653.62 ± 6% +38475.7% 252140 ± 3% sched_debug.cfs_rq:/.load_avg.max
148.90 ± 17% +38464.5% 57422 ± 9% sched_debug.cfs_rq:/.load_avg.stddev
354.29 ± 56% +25157.9% 89486 ± 38% sched_debug.cfs_rq:/.max_vruntime.avg
11512 ± 14% +17893.8% 2071559 ± 24% sched_debug.cfs_rq:/.max_vruntime.max
1888 ± 32% +20508.3% 389104 ± 27% sched_debug.cfs_rq:/.max_vruntime.stddev
11289 ± 0% +17228.9% 1956363 ± 0% sched_debug.cfs_rq:/.min_vruntime.avg
24343 ± 6% +10296.9% 2530993 ± 0% sched_debug.cfs_rq:/.min_vruntime.max
8522 ± 3% +15221.3% 1305727 ± 1% sched_debug.cfs_rq:/.min_vruntime.min
3036 ± 5% +17356.2% 530134 ± 0% sched_debug.cfs_rq:/.min_vruntime.stddev
37.28 ± 46% +2740.5% 1058 ± 40% sched_debug.cfs_rq:/.runnable_load_avg.avg
209.75 ± 22% +10452.6% 22134 ±117% sched_debug.cfs_rq:/.runnable_load_avg.max
55.64 ± 37% +6209.2% 3510 ± 92% sched_debug.cfs_rq:/.runnable_load_avg.stddev
-11180 ±-13% +4743.3% -541529 ± 0% sched_debug.cfs_rq:/.spread0.avg
1874 ± 99% +1671.2% 33198 ± 17% sched_debug.cfs_rq:/.spread0.max
-13949 ±-12% +8446.9% -1192219 ± -1% sched_debug.cfs_rq:/.spread0.min
3037 ± 5% +17354.8% 530186 ± 0% sched_debug.cfs_rq:/.spread0.stddev
248.46 ± 6% +14.5% 284.43 ± 0% sched_debug.cfs_rq:/.util_avg.avg
641.38 ± 7% -17.7% 527.69 ± 2% sched_debug.cfs_rq:/.util_avg.max
62.62 ± 88% -95.7% 2.69 ± 98% sched_debug.cfs_rq:/.util_avg.min
391160 ± 3% -45.1% 214776 ± 10% sched_debug.cpu.avg_idle.avg
71428 ± 6% +81.9% 129931 ± 3% sched_debug.cpu.clock.avg
71441 ± 6% +81.9% 129935 ± 3% sched_debug.cpu.clock.max
71398 ± 7% +82.0% 129924 ± 3% sched_debug.cpu.clock.min
71428 ± 6% +81.9% 129931 ± 3% sched_debug.cpu.clock_task.avg
71441 ± 6% +81.9% 129935 ± 3% sched_debug.cpu.clock_task.max
71398 ± 7% +82.0% 129924 ± 3% sched_debug.cpu.clock_task.min
17.79 ± 9% +4314.3% 785.21 ± 62% sched_debug.cpu.cpu_load[0].avg
207.50 ± 22% +10351.3% 21686 ±120% sched_debug.cpu.cpu_load[0].max
42.02 ± 16% +7431.5% 3165 ±104% sched_debug.cpu.cpu_load[0].stddev
87.72 ± 6% +3905.5% 3513 ± 45% sched_debug.cpu.cpu_load[1].avg
261.00 ± 4% +48754.4% 127510 ± 73% sched_debug.cpu.cpu_load[1].max
78.13 ± 7% +22146.8% 17380 ± 69% sched_debug.cpu.cpu_load[1].stddev
83.88 ± 5% +3295.3% 2848 ± 34% sched_debug.cpu.cpu_load[2].avg
255.75 ± 10% +36120.3% 92633 ± 64% sched_debug.cpu.cpu_load[2].max
75.91 ± 9% +16771.2% 12807 ± 60% sched_debug.cpu.cpu_load[2].stddev
79.24 ± 6% +2565.6% 2112 ± 26% sched_debug.cpu.cpu_load[3].avg
246.88 ± 15% +24717.2% 61267 ± 52% sched_debug.cpu.cpu_load[3].max
72.85 ± 11% +11627.4% 8543 ± 49% sched_debug.cpu.cpu_load[3].stddev
75.02 ± 9% +1959.3% 1544 ± 22% sched_debug.cpu.cpu_load[4].avg
229.00 ± 11% +16544.1% 38114 ± 45% sched_debug.cpu.cpu_load[4].max
68.73 ± 12% +7759.6% 5401 ± 42% sched_debug.cpu.cpu_load[4].stddev
708.89 ± 8% +49.1% 1056 ± 0% sched_debug.cpu.curr->pid.avg
2307 ± 0% +57.8% 3642 ± 4% sched_debug.cpu.curr->pid.max
1009 ± 3% +41.6% 1428 ± 1% sched_debug.cpu.curr->pid.stddev
297811 ± 11% -97.8% 6577 ± 6% sched_debug.cpu.load.avg
1025387 ± 0% -96.3% 37468 ± 16% sched_debug.cpu.load.max
445802 ± 4% -97.8% 9997 ± 3% sched_debug.cpu.load.stddev
0.00 ± 30% -46.8% 0.00 ± 18% sched_debug.cpu.next_balance.stddev
19083 ± 3% +307.4% 77750 ± 0% sched_debug.cpu.nr_load_updates.avg
24124 ± 2% +300.4% 96586 ± 1% sched_debug.cpu.nr_load_updates.max
15854 ± 6% +265.1% 57892 ± 1% sched_debug.cpu.nr_load_updates.min
2205 ± 26% +586.6% 15141 ± 0% sched_debug.cpu.nr_load_updates.stddev
2.75 ± 15% -34.1% 1.81 ± 11% sched_debug.cpu.nr_running.max
0.67 ± 3% -20.6% 0.53 ± 6% sched_debug.cpu.nr_running.stddev
383955 ± 0% +328.5% 1645106 ± 0% sched_debug.cpu.nr_switches.avg
468859 ± 3% +368.6% 2197053 ± 0% sched_debug.cpu.nr_switches.max
310050 ± 4% +236.1% 1041992 ± 1% sched_debug.cpu.nr_switches.min
57708 ± 8% +782.2% 509119 ± 0% sched_debug.cpu.nr_switches.stddev
0.24 ± 8% +101.4% 0.49 ± 12% sched_debug.cpu.nr_uninterruptible.avg
1057 ± 8% -22.5% 819.00 ± 13% sched_debug.cpu.nr_uninterruptible.max
-1123 ± -3% -29.1% -796.88 ± -2% sched_debug.cpu.nr_uninterruptible.min
735.97 ± 5% -10.7% 657.49 ± 2% sched_debug.cpu.nr_uninterruptible.stddev
383743 ± 0% +328.8% 1645352 ± 0% sched_debug.cpu.sched_count.avg
489071 ± 3% +354.4% 2222429 ± 0% sched_debug.cpu.sched_count.max
309779 ± 4% +236.3% 1041674 ± 1% sched_debug.cpu.sched_count.min
57799 ± 8% +781.7% 509642 ± 0% sched_debug.cpu.sched_count.stddev
160215 ± 0% +392.3% 788761 ± 0% sched_debug.cpu.sched_goidle.avg
185615 ± 3% +453.2% 1026840 ± 0% sched_debug.cpu.sched_goidle.max
137784 ± 4% +277.4% 519987 ± 1% sched_debug.cpu.sched_goidle.min
16111 ± 19% +1276.8% 221816 ± 0% sched_debug.cpu.sched_goidle.stddev
222670 ± 0% +284.0% 855043 ± 0% sched_debug.cpu.ttwu_count.avg
280679 ± 3% +317.7% 1172374 ± 0% sched_debug.cpu.ttwu_count.max
170029 ± 4% +204.5% 517707 ± 1% sched_debug.cpu.ttwu_count.min
43065 ± 4% +569.8% 288430 ± 0% sched_debug.cpu.ttwu_count.stddev
14304 ± 0% -96.3% 532.37 ± 5% sched_debug.cpu.ttwu_local.avg
21467 ± 4% -95.6% 950.25 ± 16% sched_debug.cpu.ttwu_local.max
7884 ± 3% -95.6% 350.69 ± 8% sched_debug.cpu.ttwu_local.min
5851 ± 1% -97.8% 128.39 ± 16% sched_debug.cpu.ttwu_local.stddev
71435 ± 6% +81.9% 129928 ± 3% sched_debug.cpu_clk
68712 ± 7% +85.1% 127177 ± 3% sched_debug.ktime
71435 ± 6% +81.9% 129928 ± 3% sched_debug.sched_clk
vm-scalability.throughput
4e+07 ++----------------------------------------------------------------+
| *.****.** ***.****. *.* *.***
| ***.****.* * * .* *** ** |
3.5e+07 ***.****.****.****.* * * |
| |
| |
3e+07 ++ |
| |
2.5e+07 ++ |
| |
| |
2e+07 ++ |
| OOOO OOOO OOOO |
OOO OOOO OOOO |
1.5e+07 ++----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Xiaolong