FYI, we noticed a -48.0% regression of vm-scalability.throughput due to commit:
commit e24d283289a6a4cef9caceec31d389dfeeb21b48 ("locking/rwsem: Add reader-owned state to the owner field")
https://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git locking/atomic
in testcase: vm-scalability
on test machine: lkp-hsx04: 144 threads Brickland Haswell-EX with 512G memory
with following parameters:

	cpufreq_governor: performance
	runtime: 300s
	test: small-allocs
Details are as below:
-------------------------------------------------------------------------------------------------->
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/tbox_group/test/testcase:
gcc-4.9/performance/x86_64-rhel/debian-x86_64-2015-02-07.cgz/300s/lkp-hsx04/small-allocs/vm-scalability
commit:
c37dd24e4c7ab010715a56b39c4c21302c96e8d7
e24d283289a6a4cef9caceec31d389dfeeb21b48
c37dd24e4c7ab010 e24d283289a6a4cef9caceec31
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
:4 25% 1:4 kmsg.Spurious_LAPIC_timer_interrupt_on_cpu
%stddev %change %stddev
\ | \
24139001 ± 0% -48.0% 12555858 ± 0% vm-scalability.throughput
299.32 ± 2% +29.7% 388.19 ± 0% vm-scalability.time.elapsed_time
299.32 ± 2% +29.7% 388.19 ± 0% vm-scalability.time.elapsed_time.max
6298 ± 4% +1999.6% 132237 ± 1% vm-scalability.time.involuntary_context_switches
1.074e+09 ± 0% -21.9% 8.391e+08 ± 0% vm-scalability.time.minor_page_faults
915.50 ± 1% +546.1% 5915 ± 0% vm-scalability.time.percent_of_cpu_this_job_got
1532 ± 2% +1339.2% 22061 ± 1% vm-scalability.time.system_time
1207 ± 0% -25.1% 903.83 ± 0% vm-scalability.time.user_time
55730114 ± 3% -72.4% 15365125 ± 3% vm-scalability.time.voluntary_context_switches
354.90 ± 2% +24.2% 440.93 ± 0% uptime.boot
47743 ± 2% -16.8% 39734 ± 1% uptime.idle
11033 ± 3% +16.6% 12866 ± 5% softirqs.NET_RX
465644 ± 8% +77.0% 824221 ± 4% softirqs.RCU
822386 ± 3% -34.4% 539306 ± 3% softirqs.SCHED
2832016 ± 3% +362.3% 13091690 ± 1% softirqs.TIMER
13892334 ± 1% -23.8% 10580102 ± 0% vmstat.memory.cache
8.25 ± 5% +624.2% 59.75 ± 5% vmstat.procs.r
371331 ± 1% -78.1% 81249 ± 3% vmstat.system.cs
14807 ± 2% +404.7% 74724 ± 0% vmstat.system.in
2340 ± 0% +10.0% 2574 ± 0% slabinfo.task_struct.active_objs
2373 ± 0% +10.0% 2611 ± 0% slabinfo.task_struct.num_objs
70183069 ± 1% -27.0% 51225575 ± 0% slabinfo.vm_area_struct.active_objs
1612599 ± 1% -27.8% 1164410 ± 0% slabinfo.vm_area_struct.active_slabs
70954363 ± 1% -27.8% 51234087 ± 0% slabinfo.vm_area_struct.num_objs
1612599 ± 1% -27.8% 1164410 ± 0% slabinfo.vm_area_struct.num_slabs
3.236e+09 ± 3% +103.1% 6.571e+09 ± 4% cpuidle.C1-HSW.time
52299979 ± 5% -85.0% 7820785 ± 2% cpuidle.C1-HSW.usage
1.494e+08 ± 5% +262.2% 5.412e+08 ± 1% cpuidle.C1E-HSW.time
50766 ± 27% +2917.0% 1531626 ± 4% cpuidle.C1E-HSW.usage
56165957 ± 28% +354.5% 2.553e+08 ± 4% cpuidle.C3-HSW.time
3.61e+10 ± 2% -31.1% 2.486e+10 ± 1% cpuidle.C6-HSW.time
2950756 ± 12% +98.9% 5870384 ± 5% cpuidle.C6-HSW.usage
3.25 ±173% +1e+06% 33965 ± 8% latency_stats.hits.call_rwsem_down_write_failed.vma_link.mmap_region.do_mmap.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.do_syscall_64.return_from_SYSCALL_64
2106 ±173% +22894.8% 484444 ± 6% latency_stats.max.call_rwsem_down_write_failed.vma_link.mmap_region.do_mmap.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.do_syscall_64.return_from_SYSCALL_64
16006 ± 16% +3278.4% 540764 ± 5% latency_stats.max.call_rwsem_down_write_failed.vma_link.mmap_region.do_mmap.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 10171 ± 46% latency_stats.max.stop_two_cpus.migrate_swap.task_numa_migrate.numa_migrate_preferred.task_numa_fault.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
2110 ±173% +2.6e+06% 54674725 ± 5% latency_stats.sum.call_rwsem_down_write_failed.vma_link.mmap_region.do_mmap.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.do_syscall_64.return_from_SYSCALL_64
0.00 ± -1% +Inf% 25016 ± 62% latency_stats.sum.stop_two_cpus.migrate_swap.task_numa_migrate.numa_migrate_preferred.task_numa_fault.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.00 ± -1% +Inf% 180087 ± 88% latency_stats.sum.wait_on_page_bit.__migration_entry_wait.migration_entry_wait.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
532977 ± 27% +118.0% 1162030 ± 3% numa-numastat.node0.local_node
532977 ± 27% +118.0% 1162030 ± 3% numa-numastat.node0.numa_hit
3764421 ± 7% -68.4% 1188656 ± 3% numa-numastat.node1.local_node
3764421 ± 7% -68.4% 1188656 ± 3% numa-numastat.node1.numa_hit
520550 ± 52% +120.9% 1149813 ± 6% numa-numastat.node2.local_node
520550 ± 52% +120.9% 1149813 ± 6% numa-numastat.node2.numa_hit
515312 ± 31% +119.8% 1132865 ± 3% numa-numastat.node3.local_node
515312 ± 31% +119.8% 1132865 ± 3% numa-numastat.node3.numa_hit
299.32 ± 2% +29.7% 388.19 ± 0% time.elapsed_time
299.32 ± 2% +29.7% 388.19 ± 0% time.elapsed_time.max
6298 ± 4% +1999.6% 132237 ± 1% time.involuntary_context_switches
1.074e+09 ± 0% -21.9% 8.391e+08 ± 0% time.minor_page_faults
915.50 ± 1% +546.1% 5915 ± 0% time.percent_of_cpu_this_job_got
1532 ± 2% +1339.2% 22061 ± 1% time.system_time
1207 ± 0% -25.1% 903.83 ± 0% time.user_time
55730114 ± 3% -72.4% 15365125 ± 3% time.voluntary_context_switches
86101 ± 2% +298.5% 343071 ± 0% meminfo.Active
71618 ± 2% +358.7% 328527 ± 0% meminfo.Active(anon)
792738 ± 0% +33.1% 1055133 ± 0% meminfo.Cached
535375 ± 13% +55.8% 834329 ± 4% meminfo.Committed_AS
18980 ± 1% +25.7% 23852 ± 0% meminfo.Mapped
5481932 ± 1% -27.1% 3996491 ± 0% meminfo.PageTables
13082002 ± 1% -27.6% 9476457 ± 0% meminfo.SUnreclaim
305087 ± 0% +86.0% 567502 ± 0% meminfo.Shmem
13153870 ± 1% -27.4% 9549827 ± 0% meminfo.Slab
6.36 ± 1% +546.4% 41.10 ± 1% turbostat.%Busy
183.50 ± 1% +548.0% 1189 ± 0% turbostat.Avg_MHz
21.78 ± 2% +116.4% 47.13 ± 0% turbostat.CPU%c1
0.12 ± 19% +239.6% 0.41 ± 11% turbostat.CPU%c3
71.74 ± 0% -84.2% 11.37 ± 4% turbostat.CPU%c6
41.75 ± 2% +25.1% 52.25 ± 3% turbostat.CoreTmp
47.18 ± 3% -96.8% 1.51 ± 39% turbostat.Pkg%pc2
45.75 ± 4% +20.2% 55.00 ± 4% turbostat.PkgTmp
256.81 ± 0% +76.3% 452.63 ± 0% turbostat.PkgWatt
221.02 ± 2% +10.9% 245.12 ± 0% turbostat.RAMWatt
789.50 ± 2% +106.9% 1633 ± 33% numa-vmstat.node0.nr_mapped
97513 ± 55% +166.2% 259578 ± 1% numa-vmstat.node0.nr_page_table_pages
240535 ± 52% +155.9% 615613 ± 1% numa-vmstat.node0.nr_slab_unreclaimable
496249 ± 25% +72.5% 856005 ± 4% numa-vmstat.node0.numa_hit
496249 ± 25% +72.5% 856005 ± 4% numa-vmstat.node0.numa_local
1085904 ± 6% -80.7% 209372 ± 0% numa-vmstat.node1.nr_page_table_pages
2571159 ± 6% -80.6% 498282 ± 0% numa-vmstat.node1.nr_slab_unreclaimable
2616970 ± 6% -67.3% 856044 ± 4% numa-vmstat.node1.numa_hit
2616969 ± 6% -67.3% 856043 ± 4% numa-vmstat.node1.numa_local
772.50 ± 0% +46.7% 1133 ± 18% numa-vmstat.node2.nr_mapped
85446 ± 57% +213.0% 267414 ± 1% numa-vmstat.node3.nr_page_table_pages
210910 ± 54% +200.5% 633780 ± 1% numa-vmstat.node3.nr_slab_unreclaimable
383969 ± 29% +91.9% 736890 ± 5% numa-vmstat.node3.numa_hit
383968 ± 29% +91.9% 736890 ± 5% numa-vmstat.node3.numa_local
3160 ± 2% +99.2% 6296 ± 36% numa-meminfo.node0.Mapped
2010620 ± 37% +112.0% 4263497 ± 3% numa-meminfo.node0.MemUsed
390044 ± 55% +166.2% 1038368 ± 1% numa-meminfo.node0.PageTables
962114 ± 52% +155.9% 2462482 ± 1% numa-meminfo.node0.SUnreclaim
978361 ± 52% +153.7% 2482081 ± 1% numa-meminfo.node0.Slab
15136220 ± 6% -78.1% 3311792 ± 3% numa-meminfo.node1.MemUsed
4343393 ± 6% -80.7% 837538 ± 0% numa-meminfo.node1.PageTables
10283945 ± 6% -80.6% 1993155 ± 0% numa-meminfo.node1.SUnreclaim
10305199 ± 6% -80.5% 2010656 ± 0% numa-meminfo.node1.Slab
3092 ± 0% +45.9% 4510 ± 19% numa-meminfo.node2.Mapped
1900064 ± 73% +118.1% 4143155 ± 5% numa-meminfo.node2.MemUsed
1712999 ± 38% +138.4% 4083839 ± 2% numa-meminfo.node3.MemUsed
341760 ± 57% +213.0% 1069692 ± 1% numa-meminfo.node3.PageTables
843596 ± 54% +200.5% 2535176 ± 1% numa-meminfo.node3.SUnreclaim
861861 ± 53% +196.3% 2553465 ± 1% numa-meminfo.node3.Slab
17897 ± 2% +358.9% 82133 ± 0% proc-vmstat.nr_active_anon
198273 ± 0% +33.1% 263885 ± 0% proc-vmstat.nr_file_pages
4744 ± 1% +25.7% 5966 ± 0% proc-vmstat.nr_mapped
1370309 ± 1% -27.1% 999157 ± 0% proc-vmstat.nr_page_table_pages
76247 ± 0% +86.1% 141863 ± 0% proc-vmstat.nr_shmem
3270282 ± 1% -27.6% 2369169 ± 0% proc-vmstat.nr_slab_unreclaimable
3768 ± 19% +438.8% 20303 ± 0% proc-vmstat.numa_hint_faults
1556 ± 19% +482.9% 9069 ± 3% proc-vmstat.numa_hint_faults_local
5326780 ± 0% -13.2% 4625815 ± 0% proc-vmstat.numa_hit
5326780 ± 0% -13.2% 4625814 ± 0% proc-vmstat.numa_local
999.50 ± 9% +73.4% 1733 ± 3% proc-vmstat.numa_pages_migrated
5252 ± 11% +308.7% 21467 ± 1% proc-vmstat.numa_pte_updates
14562 ± 0% +334.9% 63332 ± 0% proc-vmstat.pgactivate
10094 ± 30% +200.3% 30314 ± 3% proc-vmstat.pgalloc_dma32
7826803 ± 0% -15.9% 6584736 ± 0% proc-vmstat.pgalloc_normal
1.075e+09 ± 0% -21.8% 8.402e+08 ± 0% proc-vmstat.pgfault
6848954 ± 4% -12.1% 6017910 ± 5% proc-vmstat.pgfree
999.50 ± 9% +73.4% 1733 ± 3% proc-vmstat.pgmigrate_success
5.55 ± 2% -100.0% 0.00 ± -1% perf-profile.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate
10.99 ± 2% -93.2% 0.74 ± 2% perf-profile.__do_page_fault.do_page_fault.page_fault
0.98 ± 4% -100.0% 0.00 ± -1% perf-profile.__fget.fget.sys_mmap_pgoff.sys_mmap.entry_SYSCALL_64_fastpath
2.49 ± 3% -100.0% 0.00 ± -1% perf-profile.__kernel_text_address.print_context_stack.dump_trace.save_stack_trace_tsk.__account_scheduler_latency
1.41 ± 4% -100.0% 0.00 ± -1% perf-profile.__schedule.schedule.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write
1.08 ± 4% -100.0% 0.00 ± -1% perf-profile.__schedule.schedule.schedule_preempt_disabled.cpu_startup_entry.start_secondary
4.02 ± 3% -100.0% 0.00 ± -1% perf-profile.__vma_link_rb.vma_link.mmap_region.do_mmap.vm_mmap_pgoff
5.06 ± 1% -100.0% 0.00 ± -1% perf-profile._raw_spin_lock_irqsave.try_to_wake_up.wake_up_q.rwsem_wake.call_rwsem_wake
6.61 ± 2% -100.0% 0.00 ± -1% perf-profile.activate_task.ttwu_do_activate.try_to_wake_up.wake_up_q.rwsem_wake
3.50 ± 3% -100.0% 0.00 ± -1% perf-profile.arch_get_unmapped_area_topdown.get_unmapped_area.do_mmap.vm_mmap_pgoff.sys_mmap_pgoff
3.72 ± 9% -70.7% 1.09 ± 3% perf-profile.call_cpuidle.cpu_startup_entry.start_secondary
21.76 ± 1% +330.8% 93.75 ± 0% perf-profile.call_rwsem_down_write_failed.down_write.vma_link.mmap_region.do_mmap
15.52 ± 1% -95.5% 0.70 ± 1% perf-profile.call_rwsem_wake.up_write.vma_link.mmap_region.do_mmap
8.42 ± 2% -85.2% 1.25 ± 3% perf-profile.cpu_startup_entry.start_secondary
3.67 ± 10% -70.3% 1.09 ± 3% perf-profile.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
3.64 ± 10% -70.7% 1.07 ± 3% perf-profile.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
63.02 ± 1% +53.5% 96.71 ± 0% perf-profile.do_mmap.vm_mmap_pgoff.sys_mmap_pgoff.sys_mmap.entry_SYSCALL_64_fastpath
11.46 ± 2% -93.2% 0.78 ± 2% perf-profile.do_page_fault.page_fault
22.30 ± 1% +320.8% 93.83 ± 0% perf-profile.down_write.vma_link.mmap_region.do_mmap.vm_mmap_pgoff
5.04 ± 2% -100.0% 0.00 ± -1% perf-profile.dump_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair
6.34 ± 2% -100.0% 0.00 ± -1% perf-profile.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up
6.44 ± 2% -100.0% 0.00 ± -1% perf-profile.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up.wake_up_q
66.17 ± 0% +46.9% 97.23 ± 0% perf-profile.entry_SYSCALL_64_fastpath
0.98 ± 5% -100.0% 0.00 ± -1% perf-profile.fget.sys_mmap_pgoff.sys_mmap.entry_SYSCALL_64_fastpath
1.75 ± 3% -100.0% 0.00 ± -1% perf-profile.find_vma.__do_page_fault.do_page_fault.page_fault
3.70 ± 3% -100.0% 0.00 ± -1% perf-profile.get_unmapped_area.do_mmap.vm_mmap_pgoff.sys_mmap_pgoff.sys_mmap
3.76 ± 3% -100.0% 0.00 ± -1% perf-profile.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
1.05 ± 2% -100.0% 0.00 ± -1% perf-profile.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
3.55 ± 10% -70.1% 1.06 ± 3% perf-profile.intel_idle.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry
58.78 ± 1% +64.2% 96.49 ± 0% perf-profile.mmap_region.do_mmap.vm_mmap_pgoff.sys_mmap_pgoff.sys_mmap
9.24 ± 2% -97.3% 0.25 ±100% perf-profile.native_irq_return_iret
4.66 ± 2% -100.0% 0.00 ± -1% perf-profile.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.try_to_wake_up.wake_up_q.rwsem_wake
8.78 ± 0% +945.7% 91.81 ± 0% perf-profile.osq_lock.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.vma_link
11.64 ± 2% -93.3% 0.78 ± 2% perf-profile.page_fault
3.44 ± 4% -100.0% 0.00 ± -1% perf-profile.perf_event_aux.part.51.perf_event_mmap.mmap_region.do_mmap.vm_mmap_pgoff
0.99 ± 3% -100.0% 0.00 ± -1% perf-profile.perf_event_aux_ctx.perf_event_aux.part.51.perf_event_mmap.mmap_region.do_mmap
4.78 ± 3% -100.0% 0.00 ± -1% perf-profile.perf_event_mmap.mmap_region.do_mmap.vm_mmap_pgoff.sys_mmap_pgoff
4.51 ± 3% -100.0% 0.00 ± -1% perf-profile.print_context_stack.dump_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity
21.70 ± 1% +332.1% 93.75 ± 0% perf-profile.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.vma_link.mmap_region
9.58 ± 2% -86.6% 1.28 ± 1% perf-profile.rwsem_spin_on_owner.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.vma_link
15.49 ± 1% -95.5% 0.70 ± 1% perf-profile.rwsem_wake.call_rwsem_wake.up_write.vma_link.mmap_region
5.05 ± 2% -100.0% 0.00 ± -1% perf-profile.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.activate_task
1.54 ± 3% -100.0% 0.00 ± -1% perf-profile.schedule.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.vma_link
1.14 ± 4% -100.0% 0.00 ± -1% perf-profile.schedule.schedule_preempt_disabled.cpu_startup_entry.start_secondary
1.17 ± 4% -100.0% 0.00 ± -1% perf-profile.schedule_preempt_disabled.cpu_startup_entry.start_secondary
8.47 ± 2% -85.2% 1.25 ± 3% perf-profile.start_secondary
65.31 ± 1% +48.4% 96.88 ± 0% perf-profile.sys_mmap.entry_SYSCALL_64_fastpath
65.12 ± 1% +48.8% 96.86 ± 0% perf-profile.sys_mmap_pgoff.sys_mmap.entry_SYSCALL_64_fastpath
0.98 ± 2% -100.0% 0.00 ± -1% perf-profile.tick_nohz_idle_enter.cpu_startup_entry.start_secondary
13.67 ± 1% -97.2% 0.38 ± 57% perf-profile.try_to_wake_up.wake_up_q.rwsem_wake.call_rwsem_wake.up_write
7.10 ± 2% -100.0% 0.00 ± -1% perf-profile.ttwu_do_activate.try_to_wake_up.wake_up_q.rwsem_wake.call_rwsem_wake
3.43 ± 3% -100.0% 0.00 ± -1% perf-profile.unmapped_area_topdown.arch_get_unmapped_area_topdown.get_unmapped_area.do_mmap.vm_mmap_pgoff
16.84 ± 1% -94.4% 0.94 ± 1% perf-profile.up_write.vma_link.mmap_region.do_mmap.vm_mmap_pgoff
64.03 ± 1% +51.1% 96.78 ± 0% perf-profile.vm_mmap_pgoff.sys_mmap_pgoff.sys_mmap.entry_SYSCALL_64_fastpath
2.58 ± 4% -100.0% 0.00 ± -1% perf-profile.vma_compute_subtree_gap.__vma_link_rb.vma_link.mmap_region.do_mmap
6.72 ± 2% -88.9% 0.74 ± 1% perf-profile.vma_interval_tree_insert.vma_link.mmap_region.do_mmap.vm_mmap_pgoff
51.39 ± 1% +86.7% 95.97 ± 0% perf-profile.vma_link.mmap_region.do_mmap.vm_mmap_pgoff.sys_mmap_pgoff
1.64 ± 2% -100.0% 0.00 ± -1% perf-profile.vmacache_find.__do_page_fault.do_page_fault.page_fault
14.27 ± 1% -96.3% 0.52 ± 2% perf-profile.wake_up_q.rwsem_wake.call_rwsem_wake.up_write.vma_link
10463 ± 7% +636.3% 77037 ± 0% sched_debug.cfs_rq:/.exec_clock.avg
65159 ± 4% +116.0% 140713 ± 0% sched_debug.cfs_rq:/.exec_clock.max
8.59 ± 10% +1.2e+05% 10590 ± 8% sched_debug.cfs_rq:/.exec_clock.min
20889 ± 5% +144.4% 51052 ± 0% sched_debug.cfs_rq:/.exec_clock.stddev
75624 ± 7% +376.4% 360297 ± 9% sched_debug.cfs_rq:/.load.avg
259528 ± 3% +70.1% 441549 ± 1% sched_debug.cfs_rq:/.load.stddev
35.36 ± 8% +888.0% 349.36 ± 4% sched_debug.cfs_rq:/.load_avg.avg
472.18 ± 4% +97.7% 933.61 ± 1% sched_debug.cfs_rq:/.load_avg.max
93.62 ± 4% +296.7% 371.44 ± 2% sched_debug.cfs_rq:/.load_avg.stddev
11149 ± 7% +644.1% 82960 ± 0% sched_debug.cfs_rq:/.min_vruntime.avg
77111 ± 4% +115.4% 166103 ± 1% sched_debug.cfs_rq:/.min_vruntime.max
33.03 ± 16% +34364.2% 11384 ± 8% sched_debug.cfs_rq:/.min_vruntime.min
21855 ± 4% +153.1% 55322 ± 0% sched_debug.cfs_rq:/.min_vruntime.stddev
0.07 ± 8% +393.4% 0.37 ± 9% sched_debug.cfs_rq:/.nr_running.avg
0.25 ± 3% +77.4% 0.45 ± 1% sched_debug.cfs_rq:/.nr_running.stddev
0.87 ± 76% -67.0% 0.29 ± 0% sched_debug.cfs_rq:/.nr_spread_over.max
20.55 ± 10% +1341.8% 296.33 ± 10% sched_debug.cfs_rq:/.runnable_load_avg.avg
424.18 ± 4% +117.3% 921.61 ± 0% sched_debug.cfs_rq:/.runnable_load_avg.max
73.46 ± 6% +406.9% 372.33 ± 2% sched_debug.cfs_rq:/.runnable_load_avg.stddev
-14545 ±-25% +454.1% -80596 ± -6% sched_debug.cfs_rq:/.spread0.avg
51417 ± 5% -95.0% 2552 ±170% sched_debug.cfs_rq:/.spread0.max
-25662 ±-13% +493.0% -152174 ± -3% sched_debug.cfs_rq:/.spread0.min
21855 ± 4% +153.1% 55325 ± 0% sched_debug.cfs_rq:/.spread0.stddev
62.95 ± 4% +494.4% 374.18 ± 5% sched_debug.cfs_rq:/.util_avg.avg
635.11 ± 1% +55.0% 984.68 ± 0% sched_debug.cfs_rq:/.util_avg.max
158.75 ± 5% +148.5% 394.44 ± 2% sched_debug.cfs_rq:/.util_avg.stddev
875095 ± 0% -34.2% 575548 ± 1% sched_debug.cpu.avg_idle.avg
296076 ± 2% +18.4% 350683 ± 1% sched_debug.cpu.avg_idle.stddev
191988 ± 7% +22.1% 234365 ± 1% sched_debug.cpu.clock.avg
191997 ± 7% +22.1% 234376 ± 1% sched_debug.cpu.clock.max
191977 ± 7% +22.1% 234354 ± 1% sched_debug.cpu.clock.min
191988 ± 7% +22.1% 234365 ± 1% sched_debug.cpu.clock_task.avg
191997 ± 7% +22.1% 234376 ± 1% sched_debug.cpu.clock_task.max
191977 ± 7% +22.1% 234354 ± 1% sched_debug.cpu.clock_task.min
11.88 ± 6% +2391.7% 296.00 ± 10% sched_debug.cpu.cpu_load[0].avg
408.09 ± 5% +125.8% 921.50 ± 0% sched_debug.cpu.cpu_load[0].max
57.65 ± 2% +545.5% 372.11 ± 2% sched_debug.cpu.cpu_load[0].stddev
32.74 ± 8% +902.0% 328.06 ± 6% sched_debug.cpu.cpu_load[1].avg
463.61 ± 4% +99.0% 922.61 ± 0% sched_debug.cpu.cpu_load[1].max
93.84 ± 3% +304.2% 379.25 ± 1% sched_debug.cpu.cpu_load[1].stddev
30.12 ± 7% +979.8% 325.29 ± 6% sched_debug.cpu.cpu_load[2].avg
438.04 ± 6% +109.1% 915.75 ± 0% sched_debug.cpu.cpu_load[2].max
85.26 ± 2% +338.6% 373.90 ± 1% sched_debug.cpu.cpu_load[2].stddev
28.44 ± 7% +1036.6% 323.20 ± 6% sched_debug.cpu.cpu_load[3].avg
420.74 ± 6% +116.4% 910.68 ± 0% sched_debug.cpu.cpu_load[3].max
80.28 ± 2% +361.9% 370.82 ± 2% sched_debug.cpu.cpu_load[3].stddev
27.23 ± 7% +1077.1% 320.55 ± 6% sched_debug.cpu.cpu_load[4].avg
396.61 ± 6% +127.1% 900.50 ± 0% sched_debug.cpu.cpu_load[4].max
76.94 ± 2% +376.8% 366.82 ± 2% sched_debug.cpu.cpu_load[4].stddev
235.69 ± 5% +357.1% 1077 ± 8% sched_debug.cpu.curr->pid.avg
6956 ± 6% +20.0% 8348 ± 0% sched_debug.cpu.curr->pid.max
903.29 ± 1% +56.0% 1408 ± 1% sched_debug.cpu.curr->pid.stddev
77256 ± 7% +366.1% 360075 ± 9% sched_debug.cpu.load.avg
263553 ± 2% +67.3% 441053 ± 1% sched_debug.cpu.load.stddev
0.00 ± 8% +101.6% 0.00 ± 0% sched_debug.cpu.next_balance.stddev
58266 ± 8% +102.1% 117782 ± 0% sched_debug.cpu.nr_load_updates.avg
133378 ± 2% +26.1% 168128 ± 0% sched_debug.cpu.nr_load_updates.max
21971 ± 14% +189.1% 63516 ± 1% sched_debug.cpu.nr_load_updates.min
28738 ± 5% +40.5% 40386 ± 0% sched_debug.cpu.nr_load_updates.stddev
0.08 ± 6% +391.2% 0.37 ± 9% sched_debug.cpu.nr_running.avg
1.00 ± 0% +50.0% 1.50 ± 10% sched_debug.cpu.nr_running.max
0.26 ± 2% +78.4% 0.46 ± 2% sched_debug.cpu.nr_running.stddev
345325 ± 11% -65.8% 118270 ± 4% sched_debug.cpu.nr_switches.avg
2118306 ± 4% -86.1% 294560 ± 5% sched_debug.cpu.nr_switches.max
399.08 ± 7% +13742.8% 55244 ± 6% sched_debug.cpu.nr_switches.min
671281 ± 4% -95.3% 31593 ± 8% sched_debug.cpu.nr_switches.stddev
0.73 ± 3% -31.0% 0.50 ± 6% sched_debug.cpu.nr_uninterruptible.avg
-66.07 ± -9% +41.7% -93.64 ±-12% sched_debug.cpu.nr_uninterruptible.min
354406 ± 12% -65.6% 121960 ± 4% sched_debug.cpu.sched_count.avg
2174091 ± 5% -85.4% 317654 ± 4% sched_debug.cpu.sched_count.max
171.33 ± 18% +32844.9% 56445 ± 6% sched_debug.cpu.sched_count.min
687726 ± 4% -95.0% 34670 ± 6% sched_debug.cpu.sched_count.stddev
172257 ± 11% -66.3% 58126 ± 4% sched_debug.cpu.sched_goidle.avg
1057582 ± 4% -86.3% 144970 ± 5% sched_debug.cpu.sched_goidle.max
75.65 ± 15% +35674.3% 27063 ± 6% sched_debug.cpu.sched_goidle.min
335348 ± 4% -95.3% 15620 ± 8% sched_debug.cpu.sched_goidle.stddev
184659 ± 12% -66.2% 62420 ± 4% sched_debug.cpu.ttwu_count.avg
1085187 ± 5% -89.2% 117333 ± 5% sched_debug.cpu.ttwu_count.max
76.58 ± 24% +14314.3% 11037 ± 9% sched_debug.cpu.ttwu_count.min
357336 ± 4% -88.7% 40436 ± 5% sched_debug.cpu.ttwu_count.stddev
177.87 ± 11% +347.9% 796.69 ± 2% sched_debug.cpu.ttwu_local.avg
955.39 ± 26% +144.0% 2331 ± 15% sched_debug.cpu.ttwu_local.max
47.56 ± 23% +762.7% 410.29 ± 7% sched_debug.cpu.ttwu_local.min
127.40 ± 16% +172.7% 347.37 ± 9% sched_debug.cpu.ttwu_local.stddev
191979 ± 7% +22.1% 234354 ± 1% sched_debug.cpu_clk
189651 ± 7% +22.3% 231918 ± 1% sched_debug.ktime
191979 ± 7% +22.1% 234354 ± 1% sched_debug.sched_clk
vm-scalability.throughput
2.6e+07 *+------------------------------------*----------*----------------+
| *.*.*.* .*.*.*.*.*.*.*.**.*.*.*.*.* **.*.*. + *.*.*.* |
2.4e+07 ++ * * *.*.*.*.*
| |
2.2e+07 ++ |
| |
2e+07 ++ |
| |
1.8e+07 ++ |
| |
1.6e+07 ++ |
| |
1.4e+07 ++ |
O O O O OO O O O O O O O OO O O O O O O OO |
1.2e+07 ++----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Xiaolong