Greeting,
FYI, we noticed a 9.1% improvement of will-it-scale.per_process_ops due to commit:
commit: 60f97516388a0fc63bcaf31a1cb81d22d4b765b4 ("mm/page_alloc.c: Set pcp->high fraction default to 512")
https://git.kernel.org/cgit/linux/kernel/git/saeed/linux.git net-next-mlx4
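As context for the commit subject, here is a minimal sketch of how a per-cpu pageset (pcp) high watermark could be derived from a "fraction" divisor. This is an illustration only, not the kernel's actual code; the function names and the batch-is-a-quarter-of-high rule are assumptions.

```python
# Hedged sketch, NOT kernel code: illustrates the pcp->high sizing the
# commit subject suggests. All names here are illustrative assumptions.
def pcp_high(zone_managed_pages: int, fraction: int = 512) -> int:
    # Each CPU may cache up to 1/fraction of the zone's managed pages, so
    # a smaller fraction means a larger per-cpu cache and fewer trips to
    # the contended zone lock.
    return zone_managed_pages // fraction

def pcp_batch(high: int) -> int:
    # Pages are typically moved between the pcp list and the buddy
    # allocator in batches of a quarter of the high mark (assumption).
    return max(1, high // 4)

# Example: a 4 GiB zone with 4 KiB pages and the default fraction of 512.
print(pcp_high((4 << 30) // 4096))   # 2048 pages cached per CPU
```

A larger per-cpu cache would batch more frees and allocations per zone-lock acquisition, which is consistent with the reduced `native_queued_spin_lock_slowpath` time in the profile below.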
in testcase: will-it-scale
on test machine: 192 threads Skylake-SP with 256G memory
with the following parameters:
nr_task: 100%
mode: process
test: page_fault2
cpufreq_governor: performance
test-description: Will It Scale takes a testcase and runs it from 1 through to n parallel
copies to see if the testcase will scale. It builds both a process and threads based test
in order to see any differences between the two.
test-url: https://github.com/antonblanchard/will-it-scale
In addition to that, the commit also has significant impact on the following tests:
+------------------+-----------------------------------------------------------------------+
| testcase: change | aim7: aim7.jobs-per-min 9.8% improvement                              |
| test machine     | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters  | cpufreq_governor=performance                                          |
|                  | load=3000                                                             |
|                  | test=page_test                                                        |
+------------------+-----------------------------------------------------------------------+
| testcase: change | lmbench3: lmbench3.PIPE.bandwidth.MB/sec 21.9% improvement            |
| test machine     | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters  | cpufreq_governor=performance                                          |
|                  | mode=development                                                      |
|                  | nr_threads=50%                                                        |
|                  | test=PIPE                                                             |
|                  | test_memory_size=50%                                                  |
|                  | ucode=0xb00002e                                                       |
+------------------+-----------------------------------------------------------------------+
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:

        git clone https://github.com/intel/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run     job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase:
gcc-7/performance/x86_64-rhel-7.6/process/100%/debian-x86_64-2018-04-03.cgz/lkp-skl-4sp1/page_fault2/will-it-scale
commit:
78ca31f6bc ("net/mlx4: Change number of max MSIXs from 64 to 1024")
60f9751638 ("mm/page_alloc.c: Set pcp->high fraction default to 512")
78ca31f6bc2c76ac 60f97516388a0fc63bcaf31a1cb
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
2:4 4% 2:4 perf-profile.calltrace.cycles-pp.sync_regs.error_entry.testcase
5:4 8% 5:4 perf-profile.calltrace.cycles-pp.error_entry.testcase
5:4 8% 5:4 perf-profile.children.cycles-pp.error_entry
2:4 4% 2:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
53775 +9.1% 58672 will-it-scale.per_process_ops
10325022 +9.1% 11265130 will-it-scale.workload
2.01 ± 4% +0.2 2.20 ± 2% mpstat.cpu.all.usr%
142061 ± 11% -11.6% 125624 ± 2% softirqs.CPU109.TIMER
167.29 +2.6% 171.59 turbostat.RAMWatt
4969 -8.3% 4556 vmstat.system.cs
7.776e+08 +9.7% 8.528e+08 numa-numastat.node0.local_node
7.776e+08 +9.7% 8.528e+08 numa-numastat.node0.numa_hit
43894 ± 16% -62.5% 16439 ± 66% numa-numastat.node0.other_node
28438 ± 44% +58.7% 45125 ± 12% numa-numastat.node1.other_node
4.72 -17.6% 3.89 irq_exception_noise.__do_page_fault.80th
46.52 -83.0% 7.91 ± 3% irq_exception_noise.__do_page_fault.90th
50.66 +63.9% 83.01 irq_exception_noise.__do_page_fault.95th
57.32 +78.4% 102.24 ± 2% irq_exception_noise.__do_page_fault.99th
3066 ± 6% -16.2% 2569 ± 5% irq_exception_noise.softirq_nr
3.128e+09 +9.1% 3.412e+09 proc-vmstat.numa_hit
3.128e+09 +9.1% 3.412e+09 proc-vmstat.numa_local
3.129e+09 +9.1% 3.412e+09 proc-vmstat.pgalloc_normal
3.115e+09 +9.1% 3.397e+09 proc-vmstat.pgfault
3.128e+09 +9.1% 3.412e+09 proc-vmstat.pgfree
5751 ± 10% -25.3% 4297 ± 6% slabinfo.eventpoll_pwq.active_objs
5751 ± 10% -25.3% 4297 ± 6% slabinfo.eventpoll_pwq.num_objs
6512 ± 10% -15.4% 5510 ± 9% slabinfo.kmalloc-rcl-64.active_objs
6512 ± 10% -15.4% 5510 ± 9% slabinfo.kmalloc-rcl-64.num_objs
524.50 ± 8% +14.7% 601.50 ± 8% slabinfo.skbuff_fclone_cache.active_objs
524.50 ± 8% +14.7% 601.50 ± 8% slabinfo.skbuff_fclone_cache.num_objs
29885158 ± 2% -31.1% 20579741 ± 7% sched_debug.cfs_rq:/.MIN_vruntime.max
29885158 ± 2% -31.1% 20579741 ± 7% sched_debug.cfs_rq:/.max_vruntime.max
34663 ± 6% -24.6% 26135 ± 16% sched_debug.cpu.nr_switches.stddev
34440 ± 6% -25.1% 25811 ± 16% sched_debug.cpu.sched_count.stddev
695.10 ± 35% +31.8% 916.17 ± 21% sched_debug.cpu.sched_goidle.stddev
2742 -8.8% 2500 sched_debug.cpu.ttwu_count.avg
17218 ± 6% -25.1% 12900 ± 16% sched_debug.cpu.ttwu_count.stddev
2607 -9.2% 2367 sched_debug.cpu.ttwu_local.avg
17131 ± 6% -25.2% 12821 ± 17% sched_debug.cpu.ttwu_local.stddev
34992 ± 12% +57.5% 55106 ± 6% numa-meminfo.node0.KReclaimable
9098 ± 10% +95.6% 17793 ± 23% numa-meminfo.node0.Mapped
34992 ± 12% +57.5% 55106 ± 6% numa-meminfo.node0.SReclaimable
63787 ± 7% +28.8% 82188 ± 8% numa-meminfo.node0.SUnreclaim
98780 ± 9% +39.0% 137296 ± 6% numa-meminfo.node0.Slab
14203 ± 14% -26.0% 10516 ± 14% numa-meminfo.node1.Mapped
272149 ± 5% -3.5% 262648 ± 4% numa-meminfo.node1.Unevictable
49906 ± 6% -31.6% 34140 ± 20% numa-meminfo.node2.KReclaimable
24382 ± 29% -32.1% 16553 ± 2% numa-meminfo.node2.PageTables
49906 ± 6% -31.6% 34140 ± 20% numa-meminfo.node2.SReclaimable
78716 ± 11% -24.4% 59505 ± 11% numa-meminfo.node2.SUnreclaim
128624 ± 9% -27.2% 93646 ± 14% numa-meminfo.node2.Slab
2278 ± 10% +97.5% 4499 ± 21% numa-vmstat.node0.nr_mapped
8748 ± 12% +57.5% 13776 ± 6% numa-vmstat.node0.nr_slab_reclaimable
15946 ± 7% +28.9% 20547 ± 8% numa-vmstat.node0.nr_slab_unreclaimable
3.835e+08 +11.6% 4.279e+08 numa-vmstat.node0.numa_hit
3.834e+08 +11.6% 4.279e+08 numa-vmstat.node0.numa_local
43664 ± 17% -62.3% 16459 ± 65% numa-vmstat.node0.numa_other
3605 ± 14% -26.3% 2655 ± 14% numa-vmstat.node1.nr_mapped
68036 ± 5% -3.5% 65661 ± 4% numa-vmstat.node1.nr_unevictable
68036 ± 5% -3.5% 65661 ± 4% numa-vmstat.node1.nr_zone_unevictable
3.865e+08 +10.6% 4.274e+08 numa-vmstat.node1.numa_hit
3.864e+08 +10.6% 4.273e+08 numa-vmstat.node1.numa_local
6083 ± 29% -32.0% 4139 ± 2% numa-vmstat.node2.nr_page_table_pages
12477 ± 6% -31.6% 8535 ± 20% numa-vmstat.node2.nr_slab_reclaimable
19677 ± 11% -24.4% 14876 ± 11% numa-vmstat.node2.nr_slab_unreclaimable
3.884e+08 +10.3% 4.283e+08 numa-vmstat.node2.numa_hit
3.883e+08 +10.3% 4.281e+08 numa-vmstat.node2.numa_local
3.878e+08 +10.7% 4.294e+08 numa-vmstat.node3.numa_hit
3.877e+08 +10.7% 4.293e+08 numa-vmstat.node3.numa_local
26.76 -4.6% 25.52 ± 2% perf-stat.i.MPKI
1.088e+10 +5.6% 1.149e+10 perf-stat.i.branch-instructions
67273401 -2.1% 65836751 perf-stat.i.branch-misses
59.45 +3.0 62.49 perf-stat.i.cache-miss-rate%
8.705e+08 +4.7% 9.113e+08 perf-stat.i.cache-misses
4721 -11.1% 4197 perf-stat.i.context-switches
7.66 -6.2% 7.19 perf-stat.i.cpi
55.33 -3.9% 53.20 perf-stat.i.cpu-migrations
494.82 -3.7% 476.56 ± 2% perf-stat.i.cycles-between-cache-misses
1.497e+10 +6.3% 1.592e+10 perf-stat.i.dTLB-loads
93308105 +7.7% 1.005e+08 perf-stat.i.dTLB-store-misses
7.58e+09 +7.8% 8.174e+09 perf-stat.i.dTLB-stores
1.141e+08 ± 3% +10.7% 1.262e+08 ± 6% perf-stat.i.iTLB-loads
5.464e+10 +6.0% 5.793e+10 perf-stat.i.instructions
0.13 +6.6% 0.14 perf-stat.i.ipc
10174625 +8.2% 11004859 perf-stat.i.minor-faults
1.962e+08 +18.7% 2.328e+08 perf-stat.i.node-loads
5.10 +1.6 6.74 ± 5% perf-stat.i.node-store-miss-rate%
3031373 +11.4% 3377511 perf-stat.i.node-store-misses
56918216 -9.5% 51536120 perf-stat.i.node-stores
10178999 +8.2% 11010518 perf-stat.i.page-faults
26.72 -6.4% 25.00 perf-stat.overall.MPKI
0.62 -0.0 0.57 perf-stat.overall.branch-miss-rate%
59.60 +3.3 62.92 perf-stat.overall.cache-miss-rate%
7.65 -6.1% 7.18 perf-stat.overall.cpi
480.23 -5.0% 456.23 perf-stat.overall.cycles-between-cache-misses
0.13 +6.5% 0.14 perf-stat.overall.ipc
0.43 ± 10% -0.1 0.34 perf-stat.overall.node-load-miss-rate%
5.06 +1.1 6.15 perf-stat.overall.node-store-miss-rate%
1617960 -2.1% 1584621 perf-stat.overall.path-length
1.081e+10 +5.6% 1.142e+10 perf-stat.ps.branch-instructions
67186069 -2.3% 65653528 perf-stat.ps.branch-misses
8.644e+08 +4.8% 9.056e+08 perf-stat.ps.cache-misses
4711 -10.1% 4234 perf-stat.ps.context-switches
55.38 ± 2% -3.7% 53.33 perf-stat.ps.cpu-migrations
1.488e+10 +6.3% 1.582e+10 perf-stat.ps.dTLB-loads
92669520 +7.8% 99918550 perf-stat.ps.dTLB-store-misses
7.535e+09 +7.8% 8.123e+09 perf-stat.ps.dTLB-stores
5.429e+10 +6.0% 5.756e+10 perf-stat.ps.instructions
10113507 +8.3% 10947922 perf-stat.ps.minor-faults
1.948e+08 +18.8% 2.314e+08 perf-stat.ps.node-loads
3010974 +11.5% 3357072 perf-stat.ps.node-store-misses
56529573 -9.4% 51222328 perf-stat.ps.node-stores
10112933 +8.3% 10949314 perf-stat.ps.page-faults
1.671e+13 +6.9% 1.785e+13 perf-stat.total.instructions
35.35 -2.9 32.44 ± 5% perf-profile.calltrace.cycles-pp._raw_spin_lock.free_pcppages_bulk.free_unref_page_list.release_pages.tlb_flush_mmu
35.28 -2.9 32.41 ± 5% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.free_pcppages_bulk.free_unref_page_list.release_pages
32.09 -2.4 29.65 ± 6% perf-profile.calltrace.cycles-pp.free_pcppages_bulk.free_unref_page_list.release_pages.tlb_flush_mmu.unmap_page_range
32.45 -2.4 30.05 ± 6% perf-profile.calltrace.cycles-pp.free_unref_page_list.release_pages.tlb_flush_mmu.unmap_page_range.unmap_vmas
39.69 -2.1 37.56 perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.__handle_mm_fault.handle_mm_fault
39.95 -2.1 37.82 perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.__handle_mm_fault.handle_mm_fault.__do_page_fault
40.17 -2.1 38.07 perf-profile.calltrace.cycles-pp.alloc_pages_vma.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
37.95 -2.0 35.91 perf-profile.calltrace.cycles-pp._raw_spin_lock.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.__handle_mm_fault
37.87 -2.0 35.87 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma
33.87 -1.0 32.90 perf-profile.calltrace.cycles-pp.release_pages.tlb_flush_mmu.unmap_page_range.unmap_vmas.unmap_region
34.06 -0.9 33.15 perf-profile.calltrace.cycles-pp.tlb_flush_mmu.unmap_page_range.unmap_vmas.unmap_region.__do_munmap
39.77 -0.8 38.98 perf-profile.calltrace.cycles-pp.munmap
39.76 -0.8 38.98 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.munmap
39.76 -0.8 38.98 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.munmap
39.74 -0.8 38.96 perf-profile.calltrace.cycles-pp.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
39.74 -0.8 38.96 perf-profile.calltrace.cycles-pp.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
39.74 -0.8 38.96 perf-profile.calltrace.cycles-pp.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.munmap
39.74 -0.8 38.96 perf-profile.calltrace.cycles-pp.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.munmap
4.28 -0.3 3.94 ± 3% perf-profile.calltrace.cycles-pp.free_unref_page_list.release_pages.tlb_flush_mmu.tlb_finish_mmu.unmap_region
4.22 -0.3 3.90 ± 3% perf-profile.calltrace.cycles-pp.free_pcppages_bulk.free_unref_page_list.release_pages.tlb_flush_mmu.tlb_finish_mmu
4.46 -0.2 4.27 perf-profile.calltrace.cycles-pp.release_pages.tlb_flush_mmu.tlb_finish_mmu.unmap_region.__do_munmap
4.50 -0.2 4.32 perf-profile.calltrace.cycles-pp.tlb_finish_mmu.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap
4.48 -0.2 4.30 perf-profile.calltrace.cycles-pp.tlb_flush_mmu.tlb_finish_mmu.unmap_region.__do_munmap.__vm_munmap
1.13 -0.1 1.03 perf-profile.calltrace.cycles-pp.find_get_entry.find_lock_entry.shmem_getpage_gfp.shmem_fault.__do_fault
1.34 -0.1 1.25 perf-profile.calltrace.cycles-pp.find_lock_entry.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault
1.42 -0.1 1.35 perf-profile.calltrace.cycles-pp.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault.handle_mm_fault
1.68 -0.1 1.62 perf-profile.calltrace.cycles-pp.__do_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
1.56 -0.1 1.50 perf-profile.calltrace.cycles-pp.shmem_fault.__do_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
0.69 +0.0 0.73 perf-profile.calltrace.cycles-pp.__pagevec_lru_add_fn.pagevec_lru_move_fn.__lru_cache_add.alloc_set_pte.finish_fault
0.79 +0.1 0.85 perf-profile.calltrace.cycles-pp.swapgs_restore_regs_and_return_to_usermode.testcase
0.99 +0.1 1.10 perf-profile.calltrace.cycles-pp._raw_spin_lock.alloc_set_pte.finish_fault.__handle_mm_fault.handle_mm_fault
58.52 +0.7 59.19 perf-profile.calltrace.cycles-pp.page_fault.testcase
59.28 +0.8 60.03 perf-profile.calltrace.cycles-pp.testcase
6.92 +1.1 7.97 perf-profile.calltrace.cycles-pp.copy_page.copy_user_highpage.__handle_mm_fault.handle_mm_fault.__do_page_fault
7.00 +1.1 8.05 perf-profile.calltrace.cycles-pp.copy_user_highpage.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
1.47 ± 2% +1.3 2.80 ± 13% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.alloc_set_pte.finish_fault
1.43 ± 2% +1.3 2.77 ± 14% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.alloc_set_pte
0.84 +1.3 2.18 ± 57% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.release_pages.tlb_flush_mmu.unmap_page_range
0.85 +1.3 2.19 ± 57% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.release_pages.tlb_flush_mmu.unmap_page_range.unmap_vmas
2.38 +1.4 3.77 ± 10% perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.__lru_cache_add.alloc_set_pte.finish_fault.__handle_mm_fault
2.46 +1.4 3.86 ± 10% perf-profile.calltrace.cycles-pp.__lru_cache_add.alloc_set_pte.finish_fault.__handle_mm_fault.handle_mm_fault
4.21 +1.5 5.76 ± 7% perf-profile.calltrace.cycles-pp.finish_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
4.16 +1.6 5.71 ± 7% perf-profile.calltrace.cycles-pp.alloc_set_pte.finish_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
74.45 -4.9 69.59 ± 2% perf-profile.children.cycles-pp._raw_spin_lock
36.35 -2.8 33.57 ± 5% perf-profile.children.cycles-pp.free_pcppages_bulk
36.76 -2.7 34.02 ± 5% perf-profile.children.cycles-pp.free_unref_page_list
39.87 -2.1 37.73 perf-profile.children.cycles-pp.get_page_from_freelist
40.09 -2.1 37.96 perf-profile.children.cycles-pp.__alloc_pages_nodemask
40.21 -2.1 38.09 perf-profile.children.cycles-pp.alloc_pages_vma
75.70 -2.1 73.61 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
38.42 -1.1 37.29 perf-profile.children.cycles-pp.release_pages
38.54 -1.1 37.46 perf-profile.children.cycles-pp.tlb_flush_mmu
39.77 -0.8 38.98 perf-profile.children.cycles-pp.munmap
39.89 -0.8 39.10 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
39.89 -0.8 39.10 perf-profile.children.cycles-pp.do_syscall_64
39.74 -0.8 38.96 perf-profile.children.cycles-pp.__vm_munmap
39.74 -0.8 38.96 perf-profile.children.cycles-pp.__x64_sys_munmap
39.74 -0.8 38.96 perf-profile.children.cycles-pp.unmap_region
39.74 -0.8 38.97 perf-profile.children.cycles-pp.__do_munmap
4.51 -0.2 4.33 perf-profile.children.cycles-pp.tlb_finish_mmu
1.13 -0.1 1.04 perf-profile.children.cycles-pp.find_get_entry
1.44 -0.1 1.37 perf-profile.children.cycles-pp.shmem_getpage_gfp
1.34 -0.1 1.27 perf-profile.children.cycles-pp.find_lock_entry
1.57 -0.1 1.51 perf-profile.children.cycles-pp.shmem_fault
1.69 -0.1 1.63 perf-profile.children.cycles-pp.__do_fault
0.12 +0.0 0.13 perf-profile.children.cycles-pp.___might_sleep
0.06 +0.0 0.07 perf-profile.children.cycles-pp.mem_cgroup_update_lru_size
0.12 ± 7% +0.0 0.14 ± 3% perf-profile.children.cycles-pp.prepare_exit_to_usermode
0.22 ± 4% +0.0 0.23 ± 2% perf-profile.children.cycles-pp.__count_memcg_events
0.16 +0.0 0.18 ± 4% perf-profile.children.cycles-pp.page_remove_rmap
0.30 +0.0 0.33 ± 2% perf-profile.children.cycles-pp.xas_load
0.25 +0.0 0.28 perf-profile.children.cycles-pp.mem_cgroup_charge_statistics
0.34 +0.0 0.38 perf-profile.children.cycles-pp.__mod_lruvec_state
0.33 +0.0 0.36 perf-profile.children.cycles-pp.mem_cgroup_commit_charge
0.34 ± 2% +0.0 0.38 perf-profile.children.cycles-pp.__mod_memcg_state
0.68 +0.0 0.72 perf-profile.children.cycles-pp.sync_regs
0.70 +0.0 0.74 perf-profile.children.cycles-pp.__pagevec_lru_add_fn
0.92 +0.0 0.97 perf-profile.children.cycles-pp.native_irq_return_iret
0.79 +0.1 0.85 perf-profile.children.cycles-pp.swapgs_restore_regs_and_return_to_usermode
0.20 ± 3% +0.1 0.27 ± 6% perf-profile.children.cycles-pp.free_pages_and_swap_cache
1.00 +0.1 1.07 ± 2% perf-profile.children.cycles-pp.__list_del_entry_valid
60.07 +0.8 60.86 perf-profile.children.cycles-pp.testcase
7.00 +1.1 8.05 perf-profile.children.cycles-pp.copy_user_highpage
6.94 +1.1 7.99 perf-profile.children.cycles-pp.copy_page
2.40 +1.4 3.79 ± 10% perf-profile.children.cycles-pp.pagevec_lru_move_fn
2.47 +1.4 3.87 ± 10% perf-profile.children.cycles-pp.__lru_cache_add
4.19 +1.5 5.74 ± 7% perf-profile.children.cycles-pp.alloc_set_pte
4.21 +1.6 5.77 ± 7% perf-profile.children.cycles-pp.finish_fault
2.45 ± 2% +2.8 5.26 ± 33% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
75.70 -2.1 73.61 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.82 -0.1 0.71 perf-profile.self.cycles-pp.find_get_entry
0.73 ± 2% -0.1 0.64 perf-profile.self.cycles-pp.get_page_from_freelist
0.10 +0.0 0.11 perf-profile.self.cycles-pp.__mod_zone_page_state
0.05 +0.0 0.06 perf-profile.self.cycles-pp.mem_cgroup_commit_charge
0.12 +0.0 0.13 perf-profile.self.cycles-pp.___might_sleep
0.14 ± 3% +0.0 0.15 ± 3% perf-profile.self.cycles-pp.page_remove_rmap
0.36 +0.0 0.39 perf-profile.self.cycles-pp.__handle_mm_fault
0.21 ± 3% +0.0 0.23 ± 3% perf-profile.self.cycles-pp.__count_memcg_events
0.25 +0.0 0.28 ± 3% perf-profile.self.cycles-pp.xas_load
0.38 +0.0 0.41 perf-profile.self.cycles-pp.__pagevec_lru_add_fn
0.68 +0.0 0.71 perf-profile.self.cycles-pp.sync_regs
0.34 +0.0 0.37 perf-profile.self.cycles-pp.__mod_memcg_state
0.17 ± 3% +0.0 0.20 ± 4% perf-profile.self.cycles-pp.free_unref_page_list
0.28 +0.0 0.32 ± 2% perf-profile.self.cycles-pp.release_pages
0.67 +0.0 0.71 perf-profile.self.cycles-pp.swapgs_restore_regs_and_return_to_usermode
0.92 +0.1 0.97 perf-profile.self.cycles-pp.native_irq_return_iret
0.20 ± 4% +0.1 0.27 ± 6% perf-profile.self.cycles-pp.free_pages_and_swap_cache
0.99 +0.1 1.07 ± 2% perf-profile.self.cycles-pp.__list_del_entry_valid
1.90 +0.1 1.99 perf-profile.self.cycles-pp.testcase
0.85 +0.1 0.99 ± 4% perf-profile.self.cycles-pp.free_pcppages_bulk
0.96 +0.3 1.25 ± 3% perf-profile.self.cycles-pp.unmap_page_range
6.91 +1.0 7.96 perf-profile.self.cycles-pp.copy_page
412.00 ± 62% -49.2% 209.25 ± 28% interrupts.33:PCI-MSI.26738690-edge.eth0-TxRx-1
453.50 ± 25% -49.3% 229.75 ± 19% interrupts.34:PCI-MSI.26738691-edge.eth0-TxRx-2
267.50 ± 63% +169.4% 720.75 ± 50% interrupts.35:PCI-MSI.26738692-edge.eth0-TxRx-3
1131833 +10.7% 1252460 ± 4% interrupts.CAL:Function_call_interrupts
134.50 ± 34% +99.3% 268.00 ± 25% interrupts.CPU1.RES:Rescheduling_interrupts
453.50 ± 25% -49.3% 229.75 ± 19% interrupts.CPU10.34:PCI-MSI.26738691-edge.eth0-TxRx-2
5847 +11.6% 6524 ± 4% interrupts.CPU100.CAL:Function_call_interrupts
5845 +11.5% 6517 ± 4% interrupts.CPU101.CAL:Function_call_interrupts
5593 ± 10% +17.0% 6546 ± 4% interrupts.CPU102.CAL:Function_call_interrupts
5861 +12.0% 6563 ± 5% interrupts.CPU104.CAL:Function_call_interrupts
5876 +9.1% 6411 ± 6% interrupts.CPU105.CAL:Function_call_interrupts
5856 +11.1% 6506 ± 4% interrupts.CPU106.CAL:Function_call_interrupts
49.50 ±112% +929.8% 509.75 ± 96% interrupts.CPU107.RES:Rescheduling_interrupts
31.75 ±108% +704.7% 255.50 ± 62% interrupts.CPU108.RES:Rescheduling_interrupts
5901 +11.0% 6547 ± 5% interrupts.CPU109.CAL:Function_call_interrupts
267.50 ± 63% +169.4% 720.75 ± 50% interrupts.CPU11.35:PCI-MSI.26738692-edge.eth0-TxRx-3
5908 +12.0% 6619 ± 5% interrupts.CPU110.CAL:Function_call_interrupts
5965 +11.3% 6640 ± 5% interrupts.CPU111.CAL:Function_call_interrupts
5930 +11.2% 6593 ± 5% interrupts.CPU112.CAL:Function_call_interrupts
5963 +11.9% 6675 ± 5% interrupts.CPU113.CAL:Function_call_interrupts
5970 +11.0% 6627 ± 4% interrupts.CPU118.CAL:Function_call_interrupts
5984 +9.8% 6571 ± 4% interrupts.CPU119.CAL:Function_call_interrupts
50.75 ± 92% +590.1% 350.25 ± 67% interrupts.CPU12.RES:Rescheduling_interrupts
5970 +10.4% 6589 ± 4% interrupts.CPU120.CAL:Function_call_interrupts
5926 +9.5% 6488 ± 5% interrupts.CPU121.CAL:Function_call_interrupts
5992 +10.3% 6610 ± 4% interrupts.CPU122.CAL:Function_call_interrupts
5788 ± 6% +13.8% 6586 ± 5% interrupts.CPU123.CAL:Function_call_interrupts
6009 +9.5% 6581 ± 5% interrupts.CPU125.CAL:Function_call_interrupts
5991 +11.3% 6668 ± 5% interrupts.CPU126.CAL:Function_call_interrupts
5975 +10.6% 6608 ± 4% interrupts.CPU127.CAL:Function_call_interrupts
5989 +10.5% 6615 ± 5% interrupts.CPU128.CAL:Function_call_interrupts
5925 +11.9% 6632 ± 5% interrupts.CPU129.CAL:Function_call_interrupts
5930 +10.6% 6559 ± 5% interrupts.CPU130.CAL:Function_call_interrupts
5940 +11.1% 6600 ± 5% interrupts.CPU131.CAL:Function_call_interrupts
5902 +11.2% 6562 ± 4% interrupts.CPU132.CAL:Function_call_interrupts
5863 +11.6% 6545 ± 4% interrupts.CPU133.CAL:Function_call_interrupts
5870 ± 3% +12.1% 6580 ± 3% interrupts.CPU134.CAL:Function_call_interrupts
5888 +11.9% 6587 ± 4% interrupts.CPU135.CAL:Function_call_interrupts
5828 +10.6% 6446 ± 4% interrupts.CPU136.CAL:Function_call_interrupts
5898 +11.1% 6551 ± 4% interrupts.CPU137.CAL:Function_call_interrupts
5936 ± 2% +10.9% 6581 ± 4% interrupts.CPU138.CAL:Function_call_interrupts
5864 +10.3% 6470 ± 4% interrupts.CPU14.CAL:Function_call_interrupts
5900 ± 2% +12.3% 6628 ± 5% interrupts.CPU140.CAL:Function_call_interrupts
5922 ± 2% +11.4% 6595 ± 5% interrupts.CPU142.CAL:Function_call_interrupts
5916 ± 2% +12.1% 6629 ± 5% interrupts.CPU143.CAL:Function_call_interrupts
5785 ± 4% +12.2% 6491 ± 4% interrupts.CPU145.CAL:Function_call_interrupts
424.75 ± 89% -87.3% 53.75 ± 73% interrupts.CPU145.RES:Rescheduling_interrupts
5939 +10.6% 6570 ± 5% interrupts.CPU147.CAL:Function_call_interrupts
5304 ± 13% +23.5% 6552 ± 4% interrupts.CPU149.CAL:Function_call_interrupts
5839 +11.0% 6481 ± 4% interrupts.CPU15.CAL:Function_call_interrupts
24.50 ± 60% +612.2% 174.50 ±101% interrupts.CPU15.RES:Rescheduling_interrupts
5600 ± 10% +17.6% 6583 ± 4% interrupts.CPU150.CAL:Function_call_interrupts
1062 ± 93% -87.8% 129.75 ±107% interrupts.CPU150.RES:Rescheduling_interrupts
5951 ± 2% +10.6% 6580 ± 5% interrupts.CPU151.CAL:Function_call_interrupts
402.25 ± 33% -80.8% 77.25 ±100% interrupts.CPU151.RES:Rescheduling_interrupts
5921 +11.7% 6612 ± 4% interrupts.CPU152.CAL:Function_call_interrupts
394.00 ± 80% -80.8% 75.75 ±119% interrupts.CPU152.RES:Rescheduling_interrupts
308.25 ± 40% -88.0% 37.00 ± 85% interrupts.CPU153.RES:Rescheduling_interrupts
5917 +10.9% 6563 ± 3% interrupts.CPU154.CAL:Function_call_interrupts
5880 +12.0% 6587 ± 3% interrupts.CPU155.CAL:Function_call_interrupts
183.75 ± 22% -69.3% 56.50 ±125% interrupts.CPU155.RES:Rescheduling_interrupts
5875 +11.7% 6564 ± 4% interrupts.CPU156.CAL:Function_call_interrupts
164.50 ± 38% -77.2% 37.50 ± 82% interrupts.CPU156.RES:Rescheduling_interrupts
5905 +11.1% 6560 ± 4% interrupts.CPU157.CAL:Function_call_interrupts
5895 +11.1% 6549 ± 4% interrupts.CPU158.CAL:Function_call_interrupts
5978 +10.4% 6598 ± 4% interrupts.CPU159.CAL:Function_call_interrupts
73.75 ± 39% -90.2% 7.25 ± 5% interrupts.CPU159.RES:Rescheduling_interrupts
5835 +11.0% 6479 ± 4% interrupts.CPU16.CAL:Function_call_interrupts
5979 ± 2% +10.2% 6592 ± 4% interrupts.CPU160.CAL:Function_call_interrupts
252.25 ±110% -82.1% 45.25 ±138% interrupts.CPU160.RES:Rescheduling_interrupts
5901 ± 2% +9.9% 6484 ± 4% interrupts.CPU161.CAL:Function_call_interrupts
5970 +11.1% 6633 ± 5% interrupts.CPU162.CAL:Function_call_interrupts
5837 ± 4% +12.9% 6591 ± 5% interrupts.CPU163.CAL:Function_call_interrupts
5872 ± 4% +13.0% 6637 ± 5% interrupts.CPU165.CAL:Function_call_interrupts
435.75 ±107% -78.7% 93.00 ± 87% interrupts.CPU165.RES:Rescheduling_interrupts
5950 ± 2% +10.9% 6598 ± 5% interrupts.CPU167.CAL:Function_call_interrupts
5641 ± 7% +16.9% 6592 ± 5% interrupts.CPU168.CAL:Function_call_interrupts
5835 +11.0% 6475 ± 5% interrupts.CPU17.CAL:Function_call_interrupts
5950 +10.3% 6561 ± 5% interrupts.CPU170.CAL:Function_call_interrupts
5971 +10.2% 6578 ± 5% interrupts.CPU172.CAL:Function_call_interrupts
5952 +11.1% 6615 ± 4% interrupts.CPU173.CAL:Function_call_interrupts
5988 ± 2% +9.6% 6562 ± 4% interrupts.CPU174.CAL:Function_call_interrupts
5980 ± 2% +10.4% 6601 ± 4% interrupts.CPU175.CAL:Function_call_interrupts
5940 ± 2% +11.4% 6616 ± 4% interrupts.CPU176.CAL:Function_call_interrupts
5933 +11.1% 6592 ± 4% interrupts.CPU178.CAL:Function_call_interrupts
5898 ± 2% +11.4% 6570 ± 4% interrupts.CPU179.CAL:Function_call_interrupts
5857 +11.0% 6500 ± 5% interrupts.CPU18.CAL:Function_call_interrupts
40.25 ±128% +120.5% 88.75 ± 70% interrupts.CPU18.RES:Rescheduling_interrupts
5937 +12.0% 6648 ± 4% interrupts.CPU180.CAL:Function_call_interrupts
5919 ± 2% +11.8% 6619 ± 4% interrupts.CPU181.CAL:Function_call_interrupts
5914 ± 2% +11.3% 6583 ± 4% interrupts.CPU182.CAL:Function_call_interrupts
5880 ± 2% +12.1% 6593 ± 4% interrupts.CPU183.CAL:Function_call_interrupts
5886 +11.8% 6578 ± 4% interrupts.CPU184.CAL:Function_call_interrupts
22.00 ± 39% +152.3% 55.50 ± 75% interrupts.CPU184.RES:Rescheduling_interrupts
5942 ± 3% +11.3% 6611 ± 5% interrupts.CPU185.CAL:Function_call_interrupts
5979 ± 3% +8.4% 6483 ± 4% interrupts.CPU186.CAL:Function_call_interrupts
5954 ± 3% +10.0% 6551 ± 4% interrupts.CPU187.CAL:Function_call_interrupts
5949 ± 3% +10.3% 6563 ± 4% interrupts.CPU188.CAL:Function_call_interrupts
5856 +10.7% 6480 ± 4% interrupts.CPU19.CAL:Function_call_interrupts
24.50 ± 54% +427.6% 129.25 ± 67% interrupts.CPU19.RES:Rescheduling_interrupts
5867 +9.3% 6413 ± 4% interrupts.CPU20.CAL:Function_call_interrupts
5872 +9.7% 6440 ± 4% interrupts.CPU21.CAL:Function_call_interrupts
5939 +9.3% 6489 ± 4% interrupts.CPU22.CAL:Function_call_interrupts
5952 +8.8% 6475 ± 4% interrupts.CPU23.CAL:Function_call_interrupts
5947 +9.1% 6488 ± 4% interrupts.CPU26.CAL:Function_call_interrupts
5633 ± 9% +14.9% 6473 ± 3% interrupts.CPU27.CAL:Function_call_interrupts
5949 +8.3% 6441 ± 3% interrupts.CPU28.CAL:Function_call_interrupts
5922 +8.9% 6449 ± 3% interrupts.CPU29.CAL:Function_call_interrupts
6014 +7.7% 6477 ± 3% interrupts.CPU3.CAL:Function_call_interrupts
37.25 ± 84% +276.5% 140.25 ± 88% interrupts.CPU3.RES:Rescheduling_interrupts
5915 +10.0% 6504 ± 4% interrupts.CPU32.CAL:Function_call_interrupts
5900 +10.0% 6491 ± 4% interrupts.CPU33.CAL:Function_call_interrupts
5892 +10.2% 6495 ± 4% interrupts.CPU34.CAL:Function_call_interrupts
5865 +10.7% 6494 ± 5% interrupts.CPU35.CAL:Function_call_interrupts
5862 +10.3% 6465 ± 4% interrupts.CPU36.CAL:Function_call_interrupts
5787 ± 2% +12.3% 6497 ± 4% interrupts.CPU37.CAL:Function_call_interrupts
5982 +8.5% 6492 ± 4% interrupts.CPU4.CAL:Function_call_interrupts
5903 +10.2% 6508 ± 5% interrupts.CPU41.CAL:Function_call_interrupts
5890 +10.7% 6520 ± 5% interrupts.CPU42.CAL:Function_call_interrupts
5902 +7.8% 6361 ± 4% interrupts.CPU43.CAL:Function_call_interrupts
5894 +10.6% 6519 ± 4% interrupts.CPU45.CAL:Function_call_interrupts
5905 +10.4% 6521 ± 4% interrupts.CPU46.CAL:Function_call_interrupts
5910 ± 2% +10.0% 6500 ± 4% interrupts.CPU47.CAL:Function_call_interrupts
5895 ± 2% +10.7% 6524 ± 4% interrupts.CPU48.CAL:Function_call_interrupts
5777 ± 2% +12.3% 6488 ± 4% interrupts.CPU49.CAL:Function_call_interrupts
5904 +9.7% 6478 ± 4% interrupts.CPU51.CAL:Function_call_interrupts
5769 ± 4% +13.3% 6534 ± 4% interrupts.CPU52.CAL:Function_call_interrupts
5318 ± 10% +23.0% 6542 ± 4% interrupts.CPU53.CAL:Function_call_interrupts
5607 ± 7% +17.2% 6569 ± 4% interrupts.CPU54.CAL:Function_call_interrupts
1119 ± 60% -85.5% 162.50 ±146% interrupts.CPU54.RES:Rescheduling_interrupts
5885 +10.5% 6503 ± 5% interrupts.CPU55.CAL:Function_call_interrupts
552.50 ± 33% -80.8% 106.25 ± 86% interrupts.CPU55.RES:Rescheduling_interrupts
5883 +10.8% 6518 ± 4% interrupts.CPU56.CAL:Function_call_interrupts
917.00 ± 66% -91.9% 74.00 ±121% interrupts.CPU56.RES:Rescheduling_interrupts
662.75 ± 72% -90.1% 65.75 ± 82% interrupts.CPU57.RES:Rescheduling_interrupts
5866 +11.0% 6510 ± 5% interrupts.CPU58.CAL:Function_call_interrupts
339.75 ± 14% -81.4% 63.25 ± 86% interrupts.CPU58.RES:Rescheduling_interrupts
5834 +11.8% 6524 ± 4% interrupts.CPU59.CAL:Function_call_interrupts
309.50 ± 22% -79.7% 62.75 ± 92% interrupts.CPU59.RES:Rescheduling_interrupts
5859 +11.5% 6531 ± 4% interrupts.CPU60.CAL:Function_call_interrupts
5863 +11.2% 6522 ± 5% interrupts.CPU61.CAL:Function_call_interrupts
5899 +10.9% 6544 ± 5% interrupts.CPU62.CAL:Function_call_interrupts
5888 +11.4% 6559 ± 5% interrupts.CPU63.CAL:Function_call_interrupts
5814 ± 3% +13.0% 6567 ± 5% interrupts.CPU64.CAL:Function_call_interrupts
5912 +10.5% 6535 ± 4% interrupts.CPU65.CAL:Function_call_interrupts
5994 +9.4% 6556 ± 4% interrupts.CPU66.CAL:Function_call_interrupts
287.75 ± 71% -89.5% 30.25 ±108% interrupts.CPU66.RES:Rescheduling_interrupts
5748 ± 4% +14.7% 6591 ± 5% interrupts.CPU67.CAL:Function_call_interrupts
5741 ± 5% +14.3% 6564 ± 4% interrupts.CPU69.CAL:Function_call_interrupts
5950 +10.4% 6571 ± 5% interrupts.CPU70.CAL:Function_call_interrupts
5807 ± 2% +13.8% 6611 ± 4% interrupts.CPU71.CAL:Function_call_interrupts
5679 ± 7% +16.8% 6636 ± 5% interrupts.CPU72.CAL:Function_call_interrupts
5843 +12.4% 6570 ± 5% interrupts.CPU73.CAL:Function_call_interrupts
164.50 ± 47% -63.4% 60.25 ±113% interrupts.CPU73.RES:Rescheduling_interrupts
5975 ± 2% +9.2% 6526 ± 4% interrupts.CPU74.CAL:Function_call_interrupts
5934 +10.9% 6578 ± 5% interrupts.CPU76.CAL:Function_call_interrupts
5899 +10.9% 6544 ± 5% interrupts.CPU79.CAL:Function_call_interrupts
19.25 ± 19% +1671.4% 341.00 ±143% interrupts.CPU81.RES:Rescheduling_interrupts
5882 +12.5% 6615 ± 5% interrupts.CPU82.CAL:Function_call_interrupts
5917 +12.6% 6662 ± 4% interrupts.CPU83.CAL:Function_call_interrupts
5874 +11.9% 6572 ± 4% interrupts.CPU84.CAL:Function_call_interrupts
5926 ± 2% +11.6% 6616 ± 5% interrupts.CPU85.CAL:Function_call_interrupts
5948 +11.5% 6630 ± 5% interrupts.CPU86.CAL:Function_call_interrupts
5905 +11.0% 6554 ± 4% interrupts.CPU87.CAL:Function_call_interrupts
5832 ± 4% +13.3% 6610 ± 4% interrupts.CPU88.CAL:Function_call_interrupts
5935 +11.5% 6615 ± 5% interrupts.CPU89.CAL:Function_call_interrupts
412.00 ± 62% -49.2% 209.25 ± 28% interrupts.CPU9.33:PCI-MSI.26738690-edge.eth0-TxRx-1
5892 +11.8% 6589 ± 4% interrupts.CPU90.CAL:Function_call_interrupts
5903 +10.6% 6526 ± 5% interrupts.CPU91.CAL:Function_call_interrupts
5857 +12.9% 6614 ± 5% interrupts.CPU92.CAL:Function_call_interrupts
5918 +11.9% 6626 ± 5% interrupts.CPU94.CAL:Function_call_interrupts
5851 +12.3% 6572 ± 5% interrupts.CPU95.CAL:Function_call_interrupts
5888 +8.2% 6372 ± 6% interrupts.CPU96.CAL:Function_call_interrupts
irq_exception_noise.__do_page_fault.80th
5.2 +-+-------------------------------------------------------------------+
| |
5 +-+ + + +. + |
| : + .+ .+ : : : + :: + |
4.8 +-+.+. : + + .+.++.+ + .+. .+.+. : : .+.+. : + : :.+.+. .+. + +|
| + + + + + + + + + + +.+ |
4.6 +-+ |
| |
4.4 +-+ |
| |
4.2 +-+ O O |
O O O O O |
4 +-+ O O O O O O |
| O O O O |
3.8 +-+------------------------O-O-----O-O--------------------------------+
irq_exception_noise.__do_page_fault.90th
50 +-+--------------------------------------------------------------------+
|.+.+.+ +.+.+.+.+.+.+.+.+.+.+.+.+.+ +.+.+.+ +.+.+.+.+.+.+.+.+.+.+.|
45 +-+ |
40 +-+ |
| |
35 +-+ |
30 +-+ |
| |
25 +-+ |
20 +-+ |
| |
15 +-+ |
10 +-+ |
O O O O O O O O O O O O O O O O O O OO O |
5 +-+--------------------------------------------------------------------+
irq_exception_noise.__do_page_fault.95th
90 +-+--------------------------------------------------------------------+
| O O O O O O |
85 O-+ O O O O O O O O O |
80 +-+ O O O O O |
| |
75 +-+ |
70 +-+ |
| |
65 +-+ |
60 +-+ |
| |
55 +-+ .+ .+ +. .+. |
50 +-+.+.+ + .+.+. .+.+.+ + .+.+.+.+.+ +.+.+.+ +. .+. .+.+.+.+.+.+.+.|
| + + + + + |
45 +-+--------------------------------------------------------------------+
irq_exception_noise.__do_page_fault.99th
110 +-+---------------O---------------------------------------------------+
O O O O O O O O O O |
100 +-+ O O O O O O O O |
| O O |
| |
90 +-+ |
| |
80 +-+ |
| |
70 +-+ |
| |
| .+ .+. .+. |
60 +-+.+.+ + .+.+.+.++. .+.+.+.+.+.+.+ +.+.+.+ +. .+ .+.+.+.+. .+.|
| + + + +.+ + |
50 +-+-------------------------------------------------------------------+
will-it-scale.per_process_ops
60000 +-+-----------------------------------------------------------------+
| O O O |
59000 O-O O O O O O O O O O O O O |
58000 +-+ O O O O |
| |
57000 +-+ |
56000 +-+ |
| |
55000 +-+ |
54000 +-+.+. .+. .+.+.+. +. .+. .+ .+.+ +.+. .+ .+.|
|.+ +.++.+.+ + + +.+ +.+.+.+ +.+ : : + +.+.+ |
53000 +-+ : : |
52000 +-+ :: |
| + |
51000 +-+-----------------------------------------------------------------+
will-it-scale.workload
1.16e+07 +-+--------------------------------------------------------------+
| O |
1.14e+07 +-O O O O OO |
1.12e+07 O-+ OO O O O O O O OO O |
| O O |
1.1e+07 +-+ |
1.08e+07 +-+ |
| |
1.06e+07 +-+ |
1.04e+07 +-+.+ .+. .+. .+. .+ +. .+.|
|.+ +.+.+.+.++.+.+.+.++ +.+ ++.+.+ ++.+ : + +.+.+.++ |
1.02e+07 +-+ : : |
1e+07 +-+ :: |
| + |
9.8e+06 +-+--------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
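The %change column in the comparison tables below follows the usual lkp-tests convention: the relative change of the patched commit's mean over the parent commit's mean. A minimal sketch of that calculation, using row values from the aim7 table below (the helper name `pct_change` is mine, not part of lkp-tests):

```python
def pct_change(parent: float, commit: float) -> float:
    """Relative change of the patched kernel's mean over the parent's, in percent."""
    return (commit - parent) / parent * 100.0

# aim7.jobs-per-min row: 238624 (parent) -> 262037 (patched)
print(f"{pct_change(238624, 262037):+.1f}%")  # -> +9.8%
```

The same formula reproduces the headline lmbench3 number: 45852 -> 55876 gives +21.9%.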
***************************************************************************************************
lkp-bdw-ep2: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/load/rootfs/tbox_group/test/testcase:
gcc-7/performance/x86_64-rhel-7.6/3000/debian-x86_64-2018-04-03.cgz/lkp-bdw-ep2/page_test/aim7
commit:
78ca31f6bc ("net/mlx4: Change number of max MSIXs from 64 to 1024")
60f9751638 ("mm/page_alloc.c: Set ppc->high fraction default to 512")
78ca31f6bc2c76ac 60f97516388a0fc63bcaf31a1cb
---------------- ---------------------------
%stddev %change %stddev
\ | \
238624 +9.8% 262037 aim7.jobs-per-min
75.68 -8.9% 68.93 aim7.time.elapsed_time
75.68 -8.9% 68.93 aim7.time.elapsed_time.max
1049552 +11.8% 1173879 aim7.time.involuntary_context_switches
5515 -10.8% 4922 aim7.time.system_time
9174 +29.7% 11901 ± 7% aim7.time.voluntary_context_switches
10400 ± 6% +38.0% 14348 ± 29% cpuidle.C1.usage
2341 -2.3% 2287 turbostat.Avg_MHz
13791996 -11.3% 12234596 ± 4% turbostat.IRQ
1754 +14.3% 2004 ± 5% vmstat.procs.r
14408 +19.7% 17252 vmstat.system.cs
993.50 ± 7% +16.5% 1157 ± 7% slabinfo.pool_workqueue.active_objs
994.00 ± 7% +16.5% 1158 ± 7% slabinfo.pool_workqueue.num_objs
3468 +10.8% 3845 ± 2% slabinfo.task_struct.active_objs
92628 +9.4% 101293 ± 2% slabinfo.vm_area_struct.active_objs
1093130 ± 2% +23.9% 1354826 proc-vmstat.nr_active_anon
1053719 +20.6% 1270896 ± 2% proc-vmstat.nr_anon_pages
48624 +11.9% 54408 ± 3% proc-vmstat.nr_kernel_stack
84948 +18.6% 100760 ± 2% proc-vmstat.nr_page_table_pages
1093130 ± 2% +23.9% 1354825 proc-vmstat.nr_zone_active_anon
4371572 +21.5% 5309764 ± 4% meminfo.Active
4371404 +21.5% 5309596 ± 4% meminfo.Active(anon)
4210160 +18.7% 4995961 ± 4% meminfo.AnonPages
48481 +11.7% 54139 ± 3% meminfo.KernelStack
6891666 +15.5% 7958463 ± 3% meminfo.Memused
339393 ± 3% +16.6% 395859 ± 5% meminfo.PageTables
314.38 +140.3% 755.50 ± 46% sched_debug.cfs_rq:/.load.min
8605941 -38.3% 5311208 ± 25% sched_debug.cfs_rq:/.min_vruntime.avg
19256166 -45.7% 10464484 ± 30% sched_debug.cfs_rq:/.min_vruntime.max
1156285 ± 5% +59.8% 1847295 ± 14% sched_debug.cfs_rq:/.min_vruntime.min
7139595 -62.7% 2665437 ± 67% sched_debug.cfs_rq:/.min_vruntime.stddev
45.54 -26.8% 33.32 ± 5% sched_debug.cfs_rq:/.nr_spread_over.avg
1417 ± 10% -51.2% 691.25 ± 20% sched_debug.cfs_rq:/.nr_spread_over.max
181.01 ± 6% -48.8% 92.72 ± 17% sched_debug.cfs_rq:/.nr_spread_over.stddev
314.25 +135.8% 740.88 ± 48% sched_debug.cfs_rq:/.runnable_weight.min
8385173 ± 60% -53.0% 3943418 ± 13% sched_debug.cfs_rq:/.spread0.max
7143576 -62.6% 2669695 ± 67% sched_debug.cfs_rq:/.spread0.stddev
1.38 ±113% +12545.5% 173.88 ± 24% sched_debug.cfs_rq:/.util_avg.min
335.18 ± 5% -18.4% 273.49 ± 6% sched_debug.cfs_rq:/.util_avg.stddev
551.94 ± 12% +55.7% 859.12 ± 18% sched_debug.cfs_rq:/.util_est_enqueued.avg
0.50 +15750.0% 79.25 ± 29% sched_debug.cfs_rq:/.util_est_enqueued.min
22.88 ± 20% +977.4% 246.49 ± 52% sched_debug.cpu.clock.stddev
22.88 ± 20% +977.5% 246.49 ± 52% sched_debug.cpu.clock_task.stddev
7.95 ± 4% +9.1% 8.68 ± 3% sched_debug.cpu.cpu_load[4].avg
314.38 +140.3% 755.50 ± 46% sched_debug.cpu.load.min
0.00 ± 7% +527.3% 0.00 ± 49% sched_debug.cpu.next_balance.stddev
1275 ± 3% +19.2% 1519 ± 9% sched_debug.cpu.nr_load_updates.stddev
9.34 ± 8% +41.1% 13.18 ± 13% sched_debug.cpu.nr_running.avg
0.50 +225.0% 1.62 ± 33% sched_debug.cpu.nr_running.min
6071 +16.1% 7051 sched_debug.cpu.nr_switches.avg
4285 ± 2% +31.8% 5647 ± 5% sched_debug.cpu.nr_switches.min
12.75 ± 33% +102.0% 25.75 ± 22% sched_debug.cpu.nr_uninterruptible.max
-13.50 +116.7% -29.25 sched_debug.cpu.nr_uninterruptible.min
4.40 ± 16% +145.6% 10.81 ± 13% sched_debug.cpu.nr_uninterruptible.stddev
5836 +13.4% 6619 sched_debug.cpu.sched_count.avg
4208 ± 3% +26.7% 5331 ± 5% sched_debug.cpu.sched_count.min
39.75 ± 3% +57.9% 62.75 ± 12% sched_debug.cpu.ttwu_count.min
415.63 -14.3% 356.26 ± 8% sched_debug.cpu.ttwu_count.stddev
33.25 +11.7% 37.12 ± 4% sched_debug.cpu.ttwu_local.min
355.25 ± 4% -19.0% 287.79 ± 12% sched_debug.cpu.ttwu_local.stddev
1.951e+10 +2.6% 2.003e+10 perf-stat.i.branch-instructions
79324533 +5.7% 83859108 ± 2% perf-stat.i.branch-misses
11.68 +0.8 12.52 perf-stat.i.cache-miss-rate%
1.017e+08 +15.1% 1.17e+08 perf-stat.i.cache-misses
7.578e+08 +5.3% 7.982e+08 perf-stat.i.cache-references
14736 +21.4% 17888 perf-stat.i.context-switches
2.088e+11 -2.0% 2.047e+11 perf-stat.i.cpu-cycles
5761 ± 4% +35.1% 7783 ± 6% perf-stat.i.cpu-migrations
2.372e+10 +3.8% 2.462e+10 perf-stat.i.dTLB-loads
0.34 -0.0 0.34 perf-stat.i.dTLB-store-miss-rate%
35552575 +8.1% 38436033 perf-stat.i.dTLB-store-misses
9.735e+09 +8.9% 1.06e+10 perf-stat.i.dTLB-stores
20313022 ± 2% +8.3% 21989394 perf-stat.i.iTLB-load-misses
8.706e+10 +3.2% 8.988e+10 perf-stat.i.instructions
3965 ± 3% -6.6% 3705 ± 3% perf-stat.i.instructions-per-iTLB-miss
6218805 +9.4% 6804504 perf-stat.i.minor-faults
2220320 +21.5% 2696669 ± 9% perf-stat.i.node-load-misses
20226466 +15.6% 23372135 perf-stat.i.node-loads
13.29 ± 12% +3.8 17.13 ± 3% perf-stat.i.node-store-miss-rate%
1690422 +12.4% 1900866 perf-stat.i.node-store-misses
16181623 +9.8% 17770316 perf-stat.i.node-stores
6235897 +9.3% 6816015 perf-stat.i.page-faults
8.70 +2.1% 8.88 perf-stat.overall.MPKI
0.41 +0.0 0.42 perf-stat.overall.branch-miss-rate%
13.43 +1.2 14.65 perf-stat.overall.cache-miss-rate%
2.40 -5.1% 2.28 perf-stat.overall.cpi
2052 -14.7% 1749  perf-stat.overall.cycles-between-cache-misses
0.42 +5.3% 0.44 perf-stat.overall.ipc
9.45 +0.2 9.66 perf-stat.overall.node-store-miss-rate%
78236970 +5.4% 82446273 ± 2% perf-stat.ps.branch-misses
1.006e+08 +14.3% 1.151e+08 perf-stat.ps.cache-misses
7.495e+08 +4.8% 7.853e+08 perf-stat.ps.cache-references
14636 +20.5% 17633 perf-stat.ps.context-switches
2.066e+11 -2.5% 2.013e+11 perf-stat.ps.cpu-cycles
5740 ± 4% +33.2% 7648 ± 7% perf-stat.ps.cpu-migrations
2.347e+10 +3.2% 2.422e+10 perf-stat.ps.dTLB-loads
35182612 +7.5% 37827421 perf-stat.ps.dTLB-store-misses
9.63e+09 +8.3% 1.043e+10 perf-stat.ps.dTLB-stores
20099572 ± 2% +7.7% 21642568 perf-stat.ps.iTLB-load-misses
6180094 +8.7% 6720452 perf-stat.ps.minor-faults
2193415 ± 2% +20.7% 2647735 ± 9% perf-stat.ps.node-load-misses
20022606 +14.5% 22934734 perf-stat.ps.node-loads
1672498 +11.8% 1869846 perf-stat.ps.node-store-misses
16024684 +9.1% 17490720 perf-stat.ps.node-stores
6180093 +8.7% 6719421 perf-stat.ps.page-faults
6.694e+12 -6.1% 6.285e+12 perf-stat.total.instructions
12.14 ±103% -9.9 2.27 ±173%  perf-profile.calltrace.cycles.do_page_fault.page_fault
12.14 ±103% -9.9 2.27 ±173%  perf-profile.calltrace.cycles.__do_page_fault.do_page_fault.page_fault
12.14 ±103% -9.9 2.27 ±173%  perf-profile.calltrace.cycles.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
9.82 ±107% -9.8 0.00  perf-profile.calltrace.cycles.__schedule.schedule.worker_thread.kthread.ret_from_fork
9.82 ±107% -9.8 0.00  perf-profile.calltrace.cycles.find_busiest_group.load_balance.pick_next_task_fair.__schedule.schedule
9.82 ±107% -9.8 0.00  perf-profile.calltrace.cycles.kthread.ret_from_fork
9.82 ±107% -9.8 0.00  perf-profile.calltrace.cycles.perf_remove_from_context.perf_event_release_kernel.perf_release.__fput.task_work_run
9.82 ±107% -9.8 0.00  perf-profile.calltrace.cycles.event_function_call.perf_remove_from_context.perf_event_release_kernel.perf_release.__fput
9.82 ±107% -9.8 0.00  perf-profile.calltrace.cycles.smp_call_function_single.event_function_call.perf_remove_from_context.perf_event_release_kernel.perf_release
9.82 ±107% -9.8 0.00  perf-profile.calltrace.cycles.ret_from_fork
9.82 ±107% -9.8 0.00  perf-profile.calltrace.cycles.worker_thread.kthread.ret_from_fork
9.82 ±107% -9.8 0.00  perf-profile.calltrace.cycles.schedule.worker_thread.kthread.ret_from_fork
9.82 ±107% -9.8 0.00  perf-profile.calltrace.cycles.pick_next_task_fair.__schedule.schedule.worker_thread.kthread
9.82 ±107% -9.8 0.00  perf-profile.calltrace.cycles.load_balance.pick_next_task_fair.__schedule.schedule.worker_thread
16.07 ± 63% -8.9 7.14 ±173%  perf-profile.calltrace.cycles.task_work_run.do_exit.do_group_exit.get_signal.do_signal
16.07 ± 63% -8.9 7.14 ±173%  perf-profile.calltrace.cycles.__fput.task_work_run.do_exit.do_group_exit.get_signal
16.07 ± 63% -8.9 7.14 ±173%  perf-profile.calltrace.cycles.perf_release.__fput.task_work_run.do_exit.do_group_exit
16.07 ± 63% -8.9 7.14 ±173%  perf-profile.calltrace.cycles.perf_event_release_kernel.perf_release.__fput.task_work_run.do_exit
9.82 ±107% -7.5 2.27 ±173%  perf-profile.calltrace.cycles.update_sd_lb_stats.find_busiest_group.load_balance.pick_next_task_fair.__schedule
6.70 ±100% -4.4 2.27 ±173%  perf-profile.calltrace.cycles.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe
6.70 ±100% -4.4 2.27 ±173%  perf-profile.calltrace.cycles.__do_execve_file.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe
6.70 ±100% -4.4 2.27 ±173%  perf-profile.calltrace.cycles.search_binary_handler.__do_execve_file.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe
6.70 ±100% -4.4 2.27 ±173%  perf-profile.calltrace.cycles.load_elf_binary.search_binary_handler.__do_execve_file.__x64_sys_execve.do_syscall_64
0.00 +14.6 14.56 ± 23%  perf-profile.calltrace.cycles.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
9.82 ±107% -9.8 0.00  perf-profile.children.cycles.kthread
9.82 ±107% -9.8 0.00  perf-profile.children.cycles.perf_remove_from_context
9.82 ±107% -9.8 0.00  perf-profile.children.cycles.ret_from_fork
9.82 ±107% -9.8 0.00  perf-profile.children.cycles.worker_thread
9.82 ±107% -9.8 0.00  perf-profile.children.cycles.schedule
16.07 ± 63% -8.9 7.14 ±173%  perf-profile.children.cycles.task_work_run
16.07 ± 63% -8.9 7.14 ±173%  perf-profile.children.cycles.__fput
16.07 ± 63% -8.9 7.14 ±173%  perf-profile.children.cycles.perf_release
16.07 ± 63% -8.9 7.14 ±173%  perf-profile.children.cycles.perf_event_release_kernel
9.82 ±107% -7.5 2.27 ±173%  perf-profile.children.cycles.update_sd_lb_stats
9.82 ±107% -7.5 2.27 ±173%  perf-profile.children.cycles.find_busiest_group
9.82 ±107% -7.5 2.27 ±173%  perf-profile.children.cycles.load_balance
9.82 ±107% -7.5 2.27 ±173%  perf-profile.children.cycles.pick_next_task_fair
11.25 ±101% -7.1 4.17 ±173%  perf-profile.children.cycles.___might_sleep
6.70 ±100% -6.7 0.00  perf-profile.children.cycles.free_pgd_range
6.70 ±100% -6.7 0.00  perf-profile.children.cycles.free_p4d_range
9.82 ±107% -5.3 4.54 ±173%  perf-profile.children.cycles.smp_call_function_single
9.82 ±107% -5.3 4.54 ±173%  perf-profile.children.cycles.event_function_call
6.70 ±100% -4.4 2.27 ±173%  perf-profile.children.cycles.__x64_sys_execve
6.70 ±100% -4.4 2.27 ±173%  perf-profile.children.cycles.__do_execve_file
6.70 ±100% -4.4 2.27 ±173%  perf-profile.children.cycles.search_binary_handler
6.70 ±100% -4.4 2.27 ±173%  perf-profile.children.cycles.load_elf_binary
0.00 +12.3 12.29 ± 26%  perf-profile.children.cycles.link_path_walk
0.00 +14.6 14.56 ± 23%  perf-profile.children.cycles.do_filp_open
0.00 +14.6 14.56 ± 23%  perf-profile.children.cycles.path_openat
0.00 +20.4 20.40 ± 39%  perf-profile.children.cycles.do_sys_open
11.25 ±101% -7.1 4.17 ±173%  perf-profile.self.cycles.___might_sleep
6.70 ±100% -2.2 4.54 ±173%  perf-profile.self.cycles.smp_call_function_single
35954 ± 2% -8.3% 32952 ± 4% softirqs.CPU0.TIMER
33924 -9.8% 30589 softirqs.CPU12.TIMER
33942 ± 2% -9.8% 30629 ± 5% softirqs.CPU14.TIMER
33732 ± 2% -7.8% 31094 ± 3% softirqs.CPU15.TIMER
33659 ± 2% -8.8% 30713 ± 3% softirqs.CPU17.TIMER
33919 -8.9% 30897 ± 4% softirqs.CPU18.TIMER
34395 -10.4% 30801 ± 6% softirqs.CPU19.TIMER
3083 ± 8% +187.1% 8850 ± 37% softirqs.CPU2.RCU
33611 ± 3% -7.7% 31038 ± 3% softirqs.CPU20.TIMER
33836 ± 3% -9.5% 30628 softirqs.CPU23.TIMER
33967 ± 2% -12.0% 29882 ± 3% softirqs.CPU24.TIMER
35481 ± 7% -15.3% 30066 ± 2% softirqs.CPU25.TIMER
33818 -10.8% 30159 ± 2% softirqs.CPU27.TIMER
33489 -9.6% 30259 ± 2% softirqs.CPU28.TIMER
33545 -10.3% 30075 ± 2% softirqs.CPU29.TIMER
33940 -11.9% 29915 ± 2% softirqs.CPU30.TIMER
34286 ± 2% -12.2% 30104 ± 2% softirqs.CPU31.TIMER
33721 -11.6% 29811 ± 3% softirqs.CPU32.TIMER
33742 -12.2% 29612 ± 3% softirqs.CPU33.TIMER
33571 -11.4% 29745 ± 3% softirqs.CPU34.TIMER
33867 -12.7% 29565 ± 3% softirqs.CPU35.TIMER
33966 ± 2% -12.1% 29862 ± 3% softirqs.CPU36.TIMER
34002 ± 2% -11.5% 30077 ± 3% softirqs.CPU37.TIMER
33946 -11.2% 30144 ± 3% softirqs.CPU38.TIMER
33555 -10.8% 29933 ± 3% softirqs.CPU39.TIMER
34694 ± 6% -12.3% 30421 ± 7% softirqs.CPU4.TIMER
33590 -11.3% 29782 ± 3% softirqs.CPU40.TIMER
33179 ± 2% -10.6% 29662 ± 3% softirqs.CPU41.TIMER
33880 ± 4% -11.6% 29941 ± 2% softirqs.CPU43.TIMER
33510 -8.8% 30572 ± 3% softirqs.CPU44.TIMER
34723 ± 7% -15.1% 29488 ± 6% softirqs.CPU45.TIMER
33641 -8.6% 30763 ± 4% softirqs.CPU46.TIMER
36893 ± 4% -10.3% 33082 ± 3% softirqs.CPU48.TIMER
2765 ± 2% +146.4% 6814 ± 39% softirqs.CPU49.RCU
33024 -6.9% 30741 ± 3% softirqs.CPU52.TIMER
33338 -9.0% 30329 ± 3% softirqs.CPU55.TIMER
33356 ± 2% -10.9% 29718 ± 4% softirqs.CPU56.TIMER
33314 -9.5% 30143 ± 4% softirqs.CPU57.TIMER
33000 ± 3% -8.1% 30332 ± 4% softirqs.CPU58.TIMER
38046 ± 21% -19.5% 30644 ± 4% softirqs.CPU59.TIMER
34805 -10.0% 31316 ± 3% softirqs.CPU6.TIMER
33131 ± 2% -7.8% 30547 ± 4% softirqs.CPU60.TIMER
33120 ± 2% -7.9% 30503 ± 2% softirqs.CPU61.TIMER
33280 -9.5% 30133 ± 2% softirqs.CPU62.TIMER
33268 -7.9% 30624 ± 4% softirqs.CPU63.TIMER
33359 -9.6% 30145 ± 3% softirqs.CPU65.TIMER
33056 ± 2% -9.9% 29795 ± 3% softirqs.CPU67.TIMER
33303 -11.3% 29548 ± 2% softirqs.CPU69.TIMER
33401 -11.2% 29658 ± 2% softirqs.CPU70.TIMER
33579 -12.1% 29529 ± 2% softirqs.CPU71.TIMER
33535 ± 2% -12.3% 29412 ± 2% softirqs.CPU72.TIMER
37862 ± 21% -21.7% 29664 ± 2% softirqs.CPU73.TIMER
34032 ± 3% -13.1% 29576 ± 2% softirqs.CPU74.TIMER
33764 ± 2% -13.4% 29247 ± 3% softirqs.CPU75.TIMER
33684 ± 2% -12.9% 29325 ± 3% softirqs.CPU76.TIMER
33420 ± 2% -11.6% 29548 ± 2% softirqs.CPU77.TIMER
33191 -11.5% 29378 ± 3% softirqs.CPU78.TIMER
33195 -11.1% 29510 ± 2% softirqs.CPU79.TIMER
33332 -11.4% 29527 ± 2% softirqs.CPU80.TIMER
33492 ± 2% -11.5% 29655 ± 2% softirqs.CPU81.TIMER
33277 -9.6% 30068 ± 4% softirqs.CPU82.TIMER
33155 -11.2% 29427 ± 2% softirqs.CPU83.TIMER
33205 -10.5% 29730 ± 2% softirqs.CPU84.TIMER
32874 ± 2% -10.2% 29537 ± 2% softirqs.CPU85.TIMER
33248 -11.0% 29605 ± 2% softirqs.CPU86.TIMER
41246 ± 19% -28.4% 29532 softirqs.CPU87.TIMER
262187 +92.7% 505328 ± 16% softirqs.RCU
2984407 -9.7% 2696016 ± 3% softirqs.TIMER
156252 -11.3% 138634 ± 4% interrupts.CPU0.LOC:Local_timer_interrupts
155582 -11.4% 137894 ± 4% interrupts.CPU1.LOC:Local_timer_interrupts
155294 -10.9% 138304 ± 4% interrupts.CPU10.LOC:Local_timer_interrupts
154946 -10.8% 138222 ± 4% interrupts.CPU11.LOC:Local_timer_interrupts
155903 -11.3% 138329 ± 4% interrupts.CPU12.LOC:Local_timer_interrupts
155226 -10.9% 138278 ± 4% interrupts.CPU13.LOC:Local_timer_interrupts
155668 -12.2% 136627 ± 3% interrupts.CPU14.LOC:Local_timer_interrupts
155963 -11.4% 138236 ± 4% interrupts.CPU15.LOC:Local_timer_interrupts
155761 -11.5% 137868 ± 4% interrupts.CPU16.LOC:Local_timer_interrupts
154649 -10.8% 138021 ± 4% interrupts.CPU17.LOC:Local_timer_interrupts
155951 -11.6% 137907 ± 4% interrupts.CPU18.LOC:Local_timer_interrupts
156031 -11.9% 137535 ± 4% interrupts.CPU19.LOC:Local_timer_interrupts
155323 -11.3% 137839 ± 4% interrupts.CPU2.LOC:Local_timer_interrupts
155295 -11.4% 137530 ± 4% interrupts.CPU20.LOC:Local_timer_interrupts
155878 -11.3% 138258 ± 4% interrupts.CPU21.LOC:Local_timer_interrupts
155826 -11.4% 138126 ± 4% interrupts.CPU22.LOC:Local_timer_interrupts
154996 -10.7% 138421 ± 4% interrupts.CPU23.LOC:Local_timer_interrupts
155823 -11.6% 137681 ± 3% interrupts.CPU24.LOC:Local_timer_interrupts
155900 -11.3% 138258 ± 4% interrupts.CPU25.LOC:Local_timer_interrupts
156026 -11.1% 138675 ± 4% interrupts.CPU26.LOC:Local_timer_interrupts
156065 -11.2% 138624 ± 4% interrupts.CPU27.LOC:Local_timer_interrupts
155134 -10.8% 138317 ± 4% interrupts.CPU28.LOC:Local_timer_interrupts
155977 -11.3% 138287 ± 4% interrupts.CPU29.LOC:Local_timer_interrupts
155019 -10.7% 138485 ± 4% interrupts.CPU3.LOC:Local_timer_interrupts
156170 -11.5% 138240 ± 4% interrupts.CPU30.LOC:Local_timer_interrupts
156008 -11.3% 138312 ± 4% interrupts.CPU31.LOC:Local_timer_interrupts
155070 -11.2% 137728 ± 3% interrupts.CPU32.LOC:Local_timer_interrupts
155798 -11.7% 137637 ± 3% interrupts.CPU33.LOC:Local_timer_interrupts
155199 -11.4% 137518 ± 3% interrupts.CPU34.LOC:Local_timer_interrupts
155905 -11.7% 137607 ± 3% interrupts.CPU35.LOC:Local_timer_interrupts
155961 -11.9% 137475 ± 3% interrupts.CPU36.LOC:Local_timer_interrupts
9.25 ±118% +370.3% 43.50 ± 87%  interrupts.CPU36.RES:Rescheduling_interrupts
155888 -11.7% 137651 ± 3%  interrupts.CPU37.LOC:Local_timer_interrupts
8.50 ±133% +2229.4% 198.00 ±140%  interrupts.CPU37.RES:Rescheduling_interrupts
155894 -11.3% 138300 ± 4% interrupts.CPU38.LOC:Local_timer_interrupts
155998 -11.3% 138325 ± 4% interrupts.CPU39.LOC:Local_timer_interrupts
155942 -12.4% 136662 ± 4% interrupts.CPU4.LOC:Local_timer_interrupts
155795 -11.7% 137517 ± 3% interrupts.CPU40.LOC:Local_timer_interrupts
155043 -11.3% 137505 ± 3% interrupts.CPU41.LOC:Local_timer_interrupts
156027 -11.4% 138257 ± 4% interrupts.CPU42.LOC:Local_timer_interrupts
155105 -10.9% 138176 ± 4% interrupts.CPU43.LOC:Local_timer_interrupts
155785 -11.3% 138155 ± 4% interrupts.CPU44.LOC:Local_timer_interrupts
155037 -11.9% 136645 ± 3% interrupts.CPU45.LOC:Local_timer_interrupts
155780 -11.2% 138308 ± 4% interrupts.CPU46.LOC:Local_timer_interrupts
154545 -10.6% 138183 ± 4% interrupts.CPU47.LOC:Local_timer_interrupts
155901 -11.2% 138377 ± 4% interrupts.CPU48.LOC:Local_timer_interrupts
155844 -11.9% 137312 ± 4% interrupts.CPU49.LOC:Local_timer_interrupts
155883 -11.2% 138494 ± 4% interrupts.CPU5.LOC:Local_timer_interrupts
155349 -11.9% 136908 ± 3% interrupts.CPU50.LOC:Local_timer_interrupts
155083 -10.8% 138342 ± 4% interrupts.CPU51.LOC:Local_timer_interrupts
154977 -10.8% 138283 ± 4% interrupts.CPU52.LOC:Local_timer_interrupts
154864 -11.3% 137418 ± 4% interrupts.CPU53.LOC:Local_timer_interrupts
155844 -11.2% 138326 ± 4% interrupts.CPU54.LOC:Local_timer_interrupts
155776 -11.3% 138109 ± 4% interrupts.CPU55.LOC:Local_timer_interrupts
155486 -12.2% 136576 ± 4% interrupts.CPU56.LOC:Local_timer_interrupts
155812 -11.8% 137402 ± 4% interrupts.CPU57.LOC:Local_timer_interrupts
154795 -11.2% 137388 ± 4% interrupts.CPU58.LOC:Local_timer_interrupts
155118 -11.3% 137590 ± 4% interrupts.CPU59.LOC:Local_timer_interrupts
155919 -11.3% 138296 ± 4% interrupts.CPU6.LOC:Local_timer_interrupts
154221 -10.8% 137520 ± 4% interrupts.CPU60.LOC:Local_timer_interrupts
154838 -10.8% 138165 ± 4% interrupts.CPU61.LOC:Local_timer_interrupts
155591 -11.7% 137447 ± 4% interrupts.CPU62.LOC:Local_timer_interrupts
155969 -11.8% 137496 ± 4% interrupts.CPU63.LOC:Local_timer_interrupts
155891 -11.8% 137460 ± 4% interrupts.CPU64.LOC:Local_timer_interrupts
155293 -11.5% 137478 ± 4% interrupts.CPU65.LOC:Local_timer_interrupts
155143 -10.9% 138276 ± 4% interrupts.CPU66.LOC:Local_timer_interrupts
155074 -10.7% 138471 ± 4% interrupts.CPU67.LOC:Local_timer_interrupts
155145 -10.9% 138232 ± 4% interrupts.CPU68.LOC:Local_timer_interrupts
156022 -11.3% 138366 ± 4% interrupts.CPU69.LOC:Local_timer_interrupts
5.50 ± 81% +3395.5% 192.25 ±152%  interrupts.CPU69.RES:Rescheduling_interrupts
155096 -11.3% 137612 ± 4% interrupts.CPU7.LOC:Local_timer_interrupts
155904 -11.3% 138305 ± 4% interrupts.CPU70.LOC:Local_timer_interrupts
156019 -11.3% 138393 ± 4% interrupts.CPU71.LOC:Local_timer_interrupts
156061 -11.5% 138062 ± 3% interrupts.CPU72.LOC:Local_timer_interrupts
155876 -11.5% 138010 ± 4% interrupts.CPU73.LOC:Local_timer_interrupts
155994 -11.9% 137467 ± 3% interrupts.CPU74.LOC:Local_timer_interrupts
155981 -11.7% 137752 ± 3% interrupts.CPU75.LOC:Local_timer_interrupts
155979 -11.8% 137545 ± 3% interrupts.CPU76.LOC:Local_timer_interrupts
13.75 ±136% +280.0% 52.25 ± 88%  interrupts.CPU76.RES:Rescheduling_interrupts
155815 -11.3% 138268 ± 4% interrupts.CPU77.LOC:Local_timer_interrupts
155821 -11.7% 137609 ± 3% interrupts.CPU78.LOC:Local_timer_interrupts
155875 -11.3% 138327 ± 4% interrupts.CPU79.LOC:Local_timer_interrupts
155000 -11.3% 137467 ± 4% interrupts.CPU8.LOC:Local_timer_interrupts
155831 -11.8% 137520 ± 3% interrupts.CPU80.LOC:Local_timer_interrupts
155820 -11.2% 138418 ± 4% interrupts.CPU81.LOC:Local_timer_interrupts
155902 -11.3% 138272 ± 4% interrupts.CPU82.LOC:Local_timer_interrupts
5.75 ±163% +1556.5% 95.25 ± 99%  interrupts.CPU82.RES:Rescheduling_interrupts
155904 -11.3% 138251 ± 4% interrupts.CPU83.LOC:Local_timer_interrupts
155971 -11.4% 138129 ± 4% interrupts.CPU84.LOC:Local_timer_interrupts
155061 -11.0% 138013 ± 4% interrupts.CPU85.LOC:Local_timer_interrupts
155985 -11.6% 137878 ± 4% interrupts.CPU86.LOC:Local_timer_interrupts
155927 -11.2% 138438 ± 4% interrupts.CPU87.LOC:Local_timer_interrupts
155191 -11.5% 137353 ± 4% interrupts.CPU9.LOC:Local_timer_interrupts
13692319 -11.4% 12137408 ± 4% interrupts.LOC:Local_timer_interrupts
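A note on reading the tables above and below: the ± n% annotations are the relative standard deviation (stddev as a percentage of the mean) across the repeated runs of each metric; rows without a ± had a spread below the report's display threshold. A minimal sketch of that calculation, with four invented run values purely for illustration:

```python
import statistics

def pct_stddev(samples):
    """Sample standard deviation as a percentage of the mean,
    matching the report's %stddev columns. The run values passed
    in are hypothetical, not taken from this report."""
    return statistics.stdev(samples) / statistics.mean(samples) * 100.0

runs = [5740, 5960, 5510, 5820]  # four hypothetical runs of one metric
print(f"±{pct_stddev(runs):.0f}%")  # -> ±3%
```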
***************************************************************************************************
lkp-bdw-ep4: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_threads/rootfs/tbox_group/test/test_memory_size/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/development/50%/debian-x86_64-2018-04-03.cgz/lkp-bdw-ep4/PIPE/50%/lmbench3/0xb00002e
commit:
78ca31f6bc ("net/mlx4: Change number of max MSIXs from 64 to 1024")
60f9751638 ("mm/page_alloc.c: Set ppc->high fraction default to 512")
78ca31f6bc2c76ac 60f97516388a0fc63bcaf31a1cb
---------------- ---------------------------
%stddev %change %stddev
\ | \
45852 +21.9% 55876 lmbench3.PIPE.bandwidth.MB/sec
5463 -22.6% 4228 ± 30% lmbench3.time.percent_of_cpu_this_job_got
10056 ± 6% -13.5% 8699 ± 2% lmbench3.time.system_time
33336 ± 5% +36.2% 45403 ± 16% softirqs.CPU17.SCHED
38.65 +12.4% 43.45 ± 7% boot-time.boot
2898 +15.0% 3332 ± 9% boot-time.idle
49.14 +12.6 61.75 ± 18% mpstat.cpu.all.idle%
50.06 -12.7 37.34 ± 31% mpstat.cpu.all.sys%
1408 ± 6% +23.6% 1740 ± 6% slabinfo.avc_xperms_data.active_objs
1408 ± 6% +23.6% 1740 ± 6% slabinfo.avc_xperms_data.num_objs
49.00 +25.9% 61.67 ± 17% vmstat.cpu.id
49.25 -25.5% 36.67 ± 30% vmstat.cpu.sy
1701 -4.7% 1622 ± 3% proc-vmstat.nr_page_table_pages
6121 ± 56% +167.0% 16341 ± 11% proc-vmstat.numa_pages_migrated
6121 ± 56% +167.0% 16341 ± 11% proc-vmstat.pgmigrate_success
1788 ± 72% -77.9% 394.67 ± 21% numa-meminfo.node1.Inactive
1701 ± 79% -80.1% 338.00 ± 7% numa-meminfo.node1.Inactive(anon)
6188 ± 9% -12.0% 5447 ± 2% numa-meminfo.node1.KernelStack
2623 ± 32% -43.2% 1489 ± 19% numa-meminfo.node1.PageTables
425.00 ± 79% -80.2% 84.33 ± 7% numa-vmstat.node1.nr_inactive_anon
6188 ± 9% -12.0% 5446 ± 2% numa-vmstat.node1.nr_kernel_stack
655.75 ± 32% -43.2% 372.33 ± 19% numa-vmstat.node1.nr_page_table_pages
425.00 ± 79% -80.2% 84.33 ± 7% numa-vmstat.node1.nr_zone_inactive_anon
1493 -23.2% 1146 ± 27% turbostat.Avg_MHz
54.03 -12.2 41.82 ± 26% turbostat.Busy%
10.32 ± 3% -2.0 8.27 ± 25% turbostat.C1%
204.25 -13.1% 177.40 ± 14% turbostat.PkgWatt
11.88 -6.1% 11.16 ± 2% turbostat.RAMWatt
317.75 ± 42% -56.2% 139.33 ± 38%  interrupts.36:PCI-MSI.1572867-edge.eth0-TxRx-3
94.00 ± 6% +39.4% 131.00 ± 28%  interrupts.92:PCI-MSI.1572923-edge.eth0-TxRx-59
317.75 ± 42% -56.2% 139.33 ± 38%  interrupts.CPU3.36:PCI-MSI.1572867-edge.eth0-TxRx-3
2329 ± 5% +37.0% 3191 ± 33%  interrupts.CPU44.CAL:Function_call_interrupts
2148 ± 10% +51.9% 3263 ± 35%  interrupts.CPU50.CAL:Function_call_interrupts
94.00 ± 6% +39.4% 131.00 ± 28%  interrupts.CPU59.92:PCI-MSI.1572923-edge.eth0-TxRx-59
2298 ± 2% +41.7% 3257 ± 35%  interrupts.CPU59.CAL:Function_call_interrupts
29892 ± 4% -6.0% 28112 ± 4%  interrupts.CPU60.RES:Rescheduling_interrupts
5013 ± 62% -75.8% 1215 ± 59%  interrupts.CPU65.NMI:Non-maskable_interrupts
5013 ± 62% -75.8% 1215 ± 59%  interrupts.CPU65.PMI:Performance_monitoring_interrupts
2296 ± 3% +41.4% 3247 ± 33%  interrupts.CPU71.CAL:Function_call_interrupts
4566 ± 40% -63.8% 1652 ± 79%  interrupts.CPU87.NMI:Non-maskable_interrupts
4566 ± 40% -63.8% 1652 ± 79%  interrupts.CPU87.PMI:Performance_monitoring_interrupts
2215 ± 10% +42.5% 3156 ± 33%  interrupts.CPU9.CAL:Function_call_interrupts
589.00 ± 8% -19.3% 475.33 ± 5% interrupts.TLB:TLB_shootdowns
1.465e+10 -24.8% 1.102e+10 ± 30% perf-stat.i.branch-instructions
1.88 ± 13% +2.2 4.09 ± 63% perf-stat.i.cache-miss-rate%
1.331e+11 -23.4% 1.019e+11 ± 28% perf-stat.i.cpu-cycles
1.715e+10 -22.9% 1.322e+10 ± 30% perf-stat.i.dTLB-loads
56.76 ± 2% -5.7 51.11 ± 4% perf-stat.i.iTLB-load-miss-rate%
6.077e+10 -23.6% 4.641e+10 ± 29% perf-stat.i.instructions
23695 ± 3% -27.7% 17142 ± 25% perf-stat.i.instructions-per-iTLB-miss
19.67 ± 2% +12.1% 22.05 perf-stat.overall.MPKI
0.49 +0.1 0.59 ± 15% perf-stat.overall.branch-miss-rate%
19041 ± 8% -29.6% 13402 ± 39%  perf-stat.overall.cycles-between-cache-misses
17134 ± 2% -17.6% 14124 ± 13%  perf-stat.overall.instructions-per-iTLB-miss
1.457e+10 -24.8% 1.096e+10 ± 30% perf-stat.ps.branch-instructions
1.323e+11 -23.4% 1.014e+11 ± 28% perf-stat.ps.cpu-cycles
1.706e+10 -22.9% 1.315e+10 ± 30% perf-stat.ps.dTLB-loads
6.043e+10 -23.6% 4.617e+10 ± 29% perf-stat.ps.instructions
1.16e+13 ± 6% -12.2% 1.018e+13 ± 2% perf-stat.total.instructions
40010 ± 15% -26.4% 29449 ± 32% sched_debug.cfs_rq:/.exec_clock.avg
38212 ± 14% -30.1% 26726 ± 33% sched_debug.cfs_rq:/.exec_clock.min
322.75 ± 10% +158.0% 832.72 ± 70% sched_debug.cfs_rq:/.load_avg.max
3421408 ± 15% -28.0% 2464418 ± 35% sched_debug.cfs_rq:/.min_vruntime.avg
3493241 ± 16% -27.1% 2546488 ± 34% sched_debug.cfs_rq:/.min_vruntime.max
3277398 ± 15% -29.5% 2310348 ± 33% sched_debug.cfs_rq:/.min_vruntime.min
0.65 ± 13% -27.8% 0.47 ± 22% sched_debug.cfs_rq:/.nr_running.avg
0.06 ± 14% +74.4% 0.10 ± 26% sched_debug.cfs_rq:/.nr_spread_over.avg
753.19 ± 12% -25.4% 561.82 ± 27% sched_debug.cfs_rq:/.util_avg.avg
1654 ± 12% -15.5% 1398 ± 13% sched_debug.cfs_rq:/.util_avg.max
607.16 ± 13% -25.1% 454.68 ± 30% sched_debug.cfs_rq:/.util_est_enqueued.avg
1376 ± 9% -31.1% 947.89 ± 15% sched_debug.cfs_rq:/.util_est_enqueued.max
228.22 ± 4% -27.7% 165.11 ± 15%  sched_debug.cfs_rq:/.util_est_enqueued.stddev
10073 ± 56% +274.5% 37720 ± 53% sched_debug.cpu.avg_idle.min
150879 ± 7% -21.4% 118605 ± 6% sched_debug.cpu.avg_idle.stddev
1.15 ± 20% -60.2% 0.46 ± 45% sched_debug.cpu.cpu_load[1].min
1855684 ± 5% -23.1% 1427848 ± 27% sched_debug.cpu.nr_switches.avg
4011313 ± 8% -20.5% 3190464 ± 24% sched_debug.cpu.nr_switches.max
1754340 ± 5% -22.9% 1351964 ± 26% sched_debug.cpu.nr_switches.min
-7.67 +43.4% -10.99 sched_debug.cpu.nr_uninterruptible.min
1855325 ± 5% -23.1% 1427549 ± 27% sched_debug.cpu.sched_count.avg
4008427 ± 8% -20.4% 3189563 ± 24% sched_debug.cpu.sched_count.max
1754161 ± 5% -23.0% 1351450 ± 26% sched_debug.cpu.sched_count.min
915862 ± 5% -22.8% 706844 ± 26% sched_debug.cpu.sched_goidle.avg
1988061 ± 8% -20.1% 1588229 ± 24% sched_debug.cpu.sched_goidle.max
871494 ± 5% -22.8% 672571 ± 26% sched_debug.cpu.sched_goidle.min
938616 ± 6% -23.3% 719865 ± 27% sched_debug.cpu.ttwu_count.avg
2019971 ± 8% -21.0% 1596694 ± 24% sched_debug.cpu.ttwu_count.max
879784 ± 5% -23.1% 676751 ± 26% sched_debug.cpu.ttwu_count.min
120270 ± 66% -53.4% 55992 ± 84% sched_debug.cpu.ttwu_local.max
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen