[mm] 755d6edc1a: will-it-scale.per_process_ops -4.1% regression
by kernel test robot
Greetings,
FYI, we noticed a -4.1% regression of will-it-scale.per_process_ops due to commit:
commit: 755d6edc1aee4489c90975ec093d724d5492cecd ("[PATCH] mm: release the spinlock on zap_pte_range")
url: https://github.com/0day-ci/linux/commits/Minchan-Kim/mm-release-the-spinl...
in testcase: will-it-scale
on test machine: 8 threads Intel(R) Core(TM) i7-3770K CPU @ 3.50GHz with 16G memory
with the following parameters:
nr_task: 100%
mode: process
test: malloc1
cpufreq_governor: performance
ucode: 0x21
test-description: Will It Scale takes a testcase and runs it from 1 through to n parallel copies to see if the testcase will scale. It builds both process- and thread-based variants of each test in order to see any differences between the two.
test-url: https://github.com/antonblanchard/will-it-scale
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/process/100%/debian-x86_64-2019-05-14.cgz/lkp-ivb-d01/malloc1/will-it-scale/0x21
commit:
v5.3-rc2
755d6edc1a ("mm: release the spinlock on zap_pte_range")
v5.3-rc2 755d6edc1aee4489c90975ec093
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
1:5 -20% :4 dmesg.RIP:__d_lookup_rcu
1:5 -20% :4 dmesg.RIP:mnt_drop_write
:5 20% 1:4 kmsg.ab33a8>]usb_hcd_irq
:5 20% 1:4 kmsg.b445f28>]usb_hcd_irq
:5 20% 1:4 kmsg.cdf63ef>]usb_hcd_irq
1:5 -20% :4 kmsg.d4af11>]usb_hcd_irq
1:5 -20% :4 kmsg.d9>]usb_hcd_irq
:5 20% 1:4 kmsg.f805d78>]usb_hcd_irq
5:5 -7% 4:4 perf-profile.calltrace.cycles-pp.error_entry
7:5 -39% 5:4 perf-profile.children.cycles-pp.error_entry
0:5 -1% 0:4 perf-profile.children.cycles-pp.error_exit
5:5 -30% 4:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
119757 -4.1% 114839 will-it-scale.per_process_ops
958059 -4.1% 918718 will-it-scale.workload
2429 ± 16% -34.5% 1591 ± 32% cpuidle.C1.usage
0.97 ± 88% -0.7 0.26 mpstat.cpu.all.idle%
78.40 +2.0% 80.00 vmstat.cpu.sy
45.42 +2.1% 46.38 turbostat.CorWatt
50.46 +2.0% 51.45 turbostat.PkgWatt
6641 ± 4% +8.6% 7215 ± 8% slabinfo.anon_vma_chain.num_objs
1327 ± 3% +23.0% 1632 ± 5% slabinfo.kmalloc-96.active_objs
1327 ± 3% +23.0% 1632 ± 5% slabinfo.kmalloc-96.num_objs
1235 ± 30% +37.7% 1700 ± 18% interrupts.29:PCI-MSI.409600-edge.eth0
4361 ± 81% +149.4% 10876 ± 32% interrupts.CPU0.NMI:Non-maskable_interrupts
4361 ± 81% +149.4% 10876 ± 32% interrupts.CPU0.PMI:Performance_monitoring_interrupts
1235 ± 30% +37.7% 1700 ± 18% interrupts.CPU7.29:PCI-MSI.409600-edge.eth0
93196 +9.1% 101723 ± 6% sched_debug.cfs_rq:/.load.min
15.37 ± 11% +13.6% 17.46 ± 3% sched_debug.cfs_rq:/.nr_spread_over.max
5.01 ± 11% +14.5% 5.74 ± 4% sched_debug.cfs_rq:/.nr_spread_over.stddev
53.80 ± 15% +41.6% 76.21 ± 7% sched_debug.cfs_rq:/.util_avg.stddev
60098 +1.6% 61056 proc-vmstat.nr_active_anon
6867 -1.2% 6781 proc-vmstat.nr_slab_unreclaimable
60098 +1.6% 61056 proc-vmstat.nr_zone_active_anon
5.757e+08 -4.2% 5.517e+08 proc-vmstat.numa_hit
5.757e+08 -4.2% 5.517e+08 proc-vmstat.numa_local
5.758e+08 -4.1% 5.52e+08 proc-vmstat.pgalloc_normal
2.881e+08 -4.1% 2.762e+08 proc-vmstat.pgfault
5.758e+08 -4.1% 5.52e+08 proc-vmstat.pgfree
2.861e+09 ± 41% +41.1% 4.038e+09 perf-stat.i.branch-instructions
41921318 ± 38% +34.9% 56552695 ± 2% perf-stat.i.cache-references
2.173e+10 ± 41% +34.9% 2.931e+10 perf-stat.i.cpu-cycles
2.26e+09 ± 41% +41.3% 3.194e+09 perf-stat.i.dTLB-stores
57813 ± 26% +66.7% 96370 ± 6% perf-stat.i.iTLB-loads
1.365e+10 ± 41% +37.9% 1.882e+10 perf-stat.i.instructions
661.20 ± 40% +45.4% 961.52 perf-stat.i.instructions-per-iTLB-miss
0.47 ± 41% +37.3% 0.64 perf-stat.i.ipc
948620 -3.5% 915067 perf-stat.i.minor-faults
948620 -3.5% 915067 perf-stat.i.page-faults
0.51 ± 7% -0.1 0.45 perf-stat.overall.branch-miss-rate%
1.59 -2.4% 1.56 perf-stat.overall.cpi
0.38 -0.0 0.35 ± 2% perf-stat.overall.dTLB-store-miss-rate%
875.11 +8.7% 950.89 perf-stat.overall.instructions-per-iTLB-miss
0.63 +2.4% 0.64 perf-stat.overall.ipc
4337585 ± 41% +42.3% 6173557 perf-stat.overall.path-length
2.855e+09 ± 41% +41.0% 4.028e+09 perf-stat.ps.branch-instructions
41833739 ± 38% +34.8% 56408902 ± 2% perf-stat.ps.cache-references
2.255e+09 ± 41% +41.2% 3.186e+09 perf-stat.ps.dTLB-stores
57677 ± 26% +66.7% 96124 ± 6% perf-stat.ps.iTLB-loads
1.362e+10 ± 41% +37.8% 1.877e+10 perf-stat.ps.instructions
946368 -3.6% 912714 perf-stat.ps.minor-faults
946368 -3.6% 912714 perf-stat.ps.page-faults
4.155e+12 ± 41% +36.5% 5.672e+12 perf-stat.total.instructions
20.10 -0.7 19.42 perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault
17.83 -0.7 17.17 perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
5.47 ± 2% -0.5 4.92 ± 4% perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain.unmap_region.__do_munmap
5.75 ± 2% -0.5 5.20 ± 4% perf-profile.calltrace.cycles-pp.lru_add_drain.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap
5.69 ± 2% -0.5 5.17 ± 4% perf-profile.calltrace.cycles-pp.lru_add_drain_cpu.lru_add_drain.unmap_region.__do_munmap.__vm_munmap
2.61 -0.5 2.16 ± 12% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain.unmap_region
2.09 ± 2% -0.4 1.67 ± 15% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain
2.81 ± 2% -0.2 2.56 ± 2% perf-profile.calltrace.cycles-pp.__anon_vma_prepare.do_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
2.62 ± 2% -0.2 2.45 ± 2% perf-profile.calltrace.cycles-pp.flush_tlb_func_common.flush_tlb_mm_range.tlb_flush_mmu.tlb_finish_mmu.unmap_region
1.89 ± 2% -0.2 1.73 perf-profile.calltrace.cycles-pp.unlink_anon_vmas.free_pgtables.unmap_region.__do_munmap.__vm_munmap
3.05 ± 2% -0.1 2.91 perf-profile.calltrace.cycles-pp.flush_tlb_mm_range.tlb_flush_mmu.tlb_finish_mmu.unmap_region.__do_munmap
1.07 ± 3% -0.1 0.95 ± 2% perf-profile.calltrace.cycles-pp.percpu_counter_add_batch.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
0.91 ± 3% -0.1 0.84 ± 4% perf-profile.calltrace.cycles-pp.native_flush_tlb.flush_tlb_func_common.flush_tlb_mm_range.tlb_flush_mmu.tlb_finish_mmu
1.94 ± 3% +0.1 2.06 perf-profile.calltrace.cycles-pp.get_unmapped_area.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64
1.31 ± 8% +0.1 1.45 perf-profile.calltrace.cycles-pp.arch_get_unmapped_area_topdown.get_unmapped_area.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
0.31 ± 81% +0.2 0.54 ± 3% perf-profile.calltrace.cycles-pp.mem_cgroup_commit_charge.do_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
2.27 ± 50% +0.7 2.97 ± 3% perf-profile.calltrace.cycles-pp.syscall_return_via_sysret
43.67 +2.4 46.10 perf-profile.calltrace.cycles-pp.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
39.41 ± 2% +2.7 42.07 perf-profile.calltrace.cycles-pp.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
18.28 ± 2% +3.7 21.95 perf-profile.calltrace.cycles-pp.unmap_vmas.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap
17.43 ± 2% +3.7 21.12 perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.unmap_region.__do_munmap.__vm_munmap
35.89 ± 50% +11.0 46.92 perf-profile.calltrace.cycles-pp.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
36.13 ± 50% +11.1 47.22 perf-profile.calltrace.cycles-pp.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
51.68 ± 50% +14.5 66.17 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
51.90 ± 50% +14.5 66.42 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
17.89 -0.7 17.20 perf-profile.children.cycles-pp.handle_mm_fault
20.13 -0.7 19.45 perf-profile.children.cycles-pp.__do_page_fault
5.25 ± 2% -0.6 4.62 ± 8% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
5.50 ± 2% -0.6 4.95 ± 4% perf-profile.children.cycles-pp.pagevec_lru_move_fn
5.93 ± 2% -0.5 5.39 ± 4% perf-profile.children.cycles-pp.lru_add_drain
5.86 ± 2% -0.5 5.33 ± 4% perf-profile.children.cycles-pp.lru_add_drain_cpu
2.80 ± 2% -0.3 2.55 ± 3% perf-profile.children.cycles-pp.entry_SYSCALL_64
2.86 ± 2% -0.3 2.60 ± 2% perf-profile.children.cycles-pp.__anon_vma_prepare
1.92 ± 3% -0.2 1.75 perf-profile.children.cycles-pp.unlink_anon_vmas
1.88 ± 4% -0.2 1.72 ± 2% perf-profile.children.cycles-pp.percpu_counter_add_batch
2.03 ± 3% -0.2 1.88 perf-profile.children.cycles-pp.free_pgtables
3.06 ± 2% -0.1 2.92 perf-profile.children.cycles-pp.flush_tlb_mm_range
0.89 ± 5% -0.1 0.76 ± 6% perf-profile.children.cycles-pp.__might_sleep
1.58 ± 2% -0.1 1.45 perf-profile.children.cycles-pp.native_flush_tlb
1.97 -0.1 1.85 ± 2% perf-profile.children.cycles-pp.flush_tlb_func_common
0.41 ± 8% -0.1 0.32 ± 8% perf-profile.children.cycles-pp.___pte_free_tlb
0.10 ± 14% -0.1 0.03 ±100% perf-profile.children.cycles-pp.should_fail_alloc_page
0.55 ± 3% -0.1 0.49 ± 4% perf-profile.children.cycles-pp.down_write
0.10 ± 19% -0.1 0.05 ± 58% perf-profile.children.cycles-pp.should_failslab
0.28 ± 10% -0.0 0.23 perf-profile.children.cycles-pp.anon_vma_interval_tree_remove
0.11 ± 19% -0.0 0.07 ± 7% perf-profile.children.cycles-pp.policy_nodemask
0.10 ± 11% -0.0 0.06 ± 14% perf-profile.children.cycles-pp.__vma_link_file
0.11 ± 9% -0.0 0.08 ± 6% perf-profile.children.cycles-pp.anon_vma_chain_link
0.13 ± 8% -0.0 0.11 ± 4% perf-profile.children.cycles-pp.try_charge
0.18 ± 6% -0.0 0.16 ± 5% perf-profile.children.cycles-pp.inc_zone_page_state
0.14 ± 2% -0.0 0.12 ± 3% perf-profile.children.cycles-pp.anon_vma_interval_tree_insert
0.10 ± 17% +0.0 0.14 ± 7% perf-profile.children.cycles-pp.strlen
0.52 ± 2% +0.0 0.56 ± 3% perf-profile.children.cycles-pp.mem_cgroup_commit_charge
0.17 ± 16% +0.0 0.21 ± 6% perf-profile.children.cycles-pp.uncharge_page
0.08 ± 16% +0.0 0.13 ± 7% perf-profile.children.cycles-pp.__vma_link_list
0.26 ± 6% +0.1 0.31 ± 6% perf-profile.children.cycles-pp.mem_cgroup_charge_statistics
0.00 +0.1 0.06 ± 22% perf-profile.children.cycles-pp.__get_vma_policy
0.13 ± 9% +0.1 0.19 ± 9% perf-profile.children.cycles-pp.vma_merge
0.02 ±122% +0.1 0.09 ± 11% perf-profile.children.cycles-pp.kthread_blkcg
0.25 ± 11% +0.1 0.33 ± 6% perf-profile.children.cycles-pp.get_task_policy
0.00 +0.1 0.08 ± 5% perf-profile.children.cycles-pp.memcpy
0.25 ± 9% +0.1 0.35 ± 2% perf-profile.children.cycles-pp.memcpy_erms
1.97 ± 2% +0.1 2.09 perf-profile.children.cycles-pp.get_unmapped_area
1.34 ± 7% +0.1 1.47 ± 2% perf-profile.children.cycles-pp.arch_get_unmapped_area_topdown
0.38 ± 5% +0.1 0.52 ± 5% perf-profile.children.cycles-pp.alloc_pages_current
3.08 ± 2% +0.2 3.24 ± 2% perf-profile.children.cycles-pp.syscall_return_via_sysret
64.46 +2.0 66.45 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
64.19 +2.0 66.19 perf-profile.children.cycles-pp.do_syscall_64
43.77 +2.4 46.18 perf-profile.children.cycles-pp.__do_munmap
44.49 +2.5 46.95 perf-profile.children.cycles-pp.__vm_munmap
44.77 +2.5 47.24 perf-profile.children.cycles-pp.__x64_sys_munmap
39.43 ± 2% +2.7 42.10 perf-profile.children.cycles-pp.unmap_region
18.07 ± 2% +3.7 21.73 perf-profile.children.cycles-pp.unmap_page_range
18.29 ± 2% +3.7 21.97 perf-profile.children.cycles-pp.unmap_vmas
6.02 ± 3% -0.5 5.57 ± 3% perf-profile.self.cycles-pp.do_syscall_64
1.73 -0.1 1.59 perf-profile.self.cycles-pp._raw_spin_lock_irqsave
1.56 ± 2% -0.1 1.44 perf-profile.self.cycles-pp.native_flush_tlb
0.34 ± 11% -0.1 0.24 ± 7% perf-profile.self.cycles-pp.strlcpy
0.57 ± 5% -0.1 0.49 ± 6% perf-profile.self.cycles-pp.unlink_anon_vmas
0.68 ± 4% -0.1 0.60 ± 8% perf-profile.self.cycles-pp._raw_spin_lock
0.37 ± 5% -0.1 0.31 ± 6% perf-profile.self.cycles-pp.cpumask_any_but
0.42 ± 7% -0.1 0.36 ± 6% perf-profile.self.cycles-pp.handle_mm_fault
0.23 ± 7% -0.1 0.18 ± 4% perf-profile.self.cycles-pp.__perf_sw_event
0.10 ± 23% -0.0 0.06 ± 9% perf-profile.self.cycles-pp.policy_nodemask
0.09 ± 11% -0.0 0.04 ± 59% perf-profile.self.cycles-pp.__vma_link_file
0.13 ± 6% -0.0 0.10 ± 8% perf-profile.self.cycles-pp.try_charge
0.14 ± 2% -0.0 0.12 ± 3% perf-profile.self.cycles-pp.anon_vma_interval_tree_insert
0.10 ± 15% +0.0 0.11 ± 4% perf-profile.self.cycles-pp.strlen
0.09 ± 17% +0.0 0.12 ± 5% perf-profile.self.cycles-pp.memcg_check_events
0.07 ± 19% +0.0 0.10 ± 7% perf-profile.self.cycles-pp.__vma_link_list
0.16 ± 16% +0.0 0.20 ± 5% perf-profile.self.cycles-pp.uncharge_page
0.24 ± 7% +0.0 0.28 ± 2% perf-profile.self.cycles-pp.memcpy_erms
0.04 ± 53% +0.0 0.09 ± 8% perf-profile.self.cycles-pp.do_page_fault
0.42 ± 9% +0.1 0.48 ± 7% perf-profile.self.cycles-pp.find_next_bit
0.13 ± 10% +0.1 0.19 ± 8% perf-profile.self.cycles-pp.vma_merge
0.02 ±122% +0.1 0.09 ± 11% perf-profile.self.cycles-pp.kthread_blkcg
0.25 ± 10% +0.1 0.32 ± 7% perf-profile.self.cycles-pp.get_task_policy
0.00 +0.1 0.08 ± 6% perf-profile.self.cycles-pp.memcpy
0.14 ± 5% +0.1 0.25 ± 15% perf-profile.self.cycles-pp.alloc_pages_current
3.08 ± 2% +0.2 3.23 ± 2% perf-profile.self.cycles-pp.syscall_return_via_sysret
0.43 ± 10% +0.2 0.58 ± 6% perf-profile.self.cycles-pp.arch_get_unmapped_area_topdown
11.00 ± 2% +3.6 14.56 perf-profile.self.cycles-pp.unmap_page_range
will-it-scale.per_process_ops
120000 +-+----------------------------------------------------------------+
| +. +. +.. .. + +. +. + |
119000 +-+ +.+..+..+ |
118000 +-+ |
| |
117000 +-+ |
| O |
116000 O-+O O O O |
| O O O O O |
115000 +-+ O O O O O O O O
114000 +-+ |
| O O O O O |
113000 +-+ O |
| |
112000 +-+----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Oliver Sang
[locking/rwsem] 7813430057: vm-scalability.median -42.5% regression
by kernel test robot
Greetings,
FYI, we noticed a -42.5% regression of vm-scalability.median due to commit:
commit: 78134300579a45f527ca173ec8fdb4701b69f16e ("locking/rwsem: Don't call owner_on_cpu() on read-owner")
https://kernel.googlesource.com/pub/scm/linux/kernel/git/torvalds/linux.git master
in testcase: vm-scalability
on test machine: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
with the following parameters:
runtime: 300s
test: small-allocs-mt
cpufreq_governor: performance
ucode: 0xb000036
test-description: The motivation behind this suite is to exercise functions and regions of the mm/ subsystem of the Linux kernel which are of interest to us.
test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/debian-x86_64-2019-05-14.cgz/300s/lkp-bdw-ep4/small-allocs-mt/vm-scalability/0xb000036
commit:
f9adc23ee9 ("futex: Cleanup generic SMP variant of arch_futex_atomic_op_inuser()")
7813430057 ("locking/rwsem: Don't call owner_on_cpu() on read-owner")
f9adc23ee91e6f56 78134300579a45f527ca173ec8f
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
1:4 -16% 0:4 perf-profile.children.cycles-pp.error_entry
0:4 -12% 0:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
10295 ± 10% -42.5% 5918 ± 3% vm-scalability.median
904912 ± 10% -42.5% 520174 ± 3% vm-scalability.throughput
60474168 ± 10% -42.5% 34747417 ± 3% vm-scalability.time.minor_page_faults
551.25 ± 7% +22.3% 674.25 ± 3% vm-scalability.time.percent_of_cpu_this_job_got
1523 ± 6% +29.0% 1965 ± 3% vm-scalability.time.system_time
161.90 ± 13% -51.5% 78.55 ± 4% vm-scalability.time.user_time
32285656 ± 13% -74.5% 8235971 ± 3% vm-scalability.time.voluntary_context_switches
2.721e+08 ± 10% -42.5% 1.563e+08 ± 3% vm-scalability.workload
37054490 ± 39% -62.6% 13844254 ± 62% cpuidle.C3.usage
5.34 ± 6% +2.0 7.38 ± 2% mpstat.cpu.all.sys%
0.59 ± 12% -0.3 0.32 ± 4% mpstat.cpu.all.usr%
93.50 -1.6% 92.00 vmstat.cpu.id
209869 ± 13% -73.9% 54735 ± 3% vmstat.system.cs
29124 ± 9% -42.5% 16745 ± 4% numa-vmstat.node0.nr_page_table_pages
91040 ± 8% -35.0% 59192 ± 2% numa-vmstat.node0.nr_slab_unreclaimable
29038 ± 11% -42.4% 16734 ± 4% numa-vmstat.node1.nr_page_table_pages
88159 ± 10% -35.8% 56580 ± 4% numa-vmstat.node1.nr_slab_unreclaimable
2763837 ± 3% -12.8% 2410986 meminfo.Memused
232955 ± 10% -42.6% 133795 ± 4% meminfo.PageTables
716735 ± 9% -35.3% 463602 ± 3% meminfo.SUnreclaim
788566 ± 8% -32.1% 535591 ± 3% meminfo.Slab
11681 ± 5% -19.5% 9400 meminfo.max_used_kB
240.75 ± 5% +19.5% 287.75 ± 2% turbostat.Avg_MHz
8.68 ± 4% +1.4 10.12 ± 3% turbostat.Busy%
37053983 ± 39% -62.6% 13843816 ± 62% turbostat.C3
54.52 ± 6% -25.0% 40.91 ± 14% turbostat.CPU%c1
12.86 ± 3% -6.7% 12.00 ± 2% turbostat.RAMWatt
116649 ± 9% -42.5% 67086 ± 4% numa-meminfo.node0.PageTables
364217 ± 8% -34.9% 237199 ± 2% numa-meminfo.node0.SUnreclaim
401594 ± 5% -30.7% 278271 ± 2% numa-meminfo.node0.Slab
116289 ± 11% -42.3% 67047 ± 3% numa-meminfo.node1.PageTables
352683 ± 10% -35.7% 226762 ± 4% numa-meminfo.node1.SUnreclaim
387136 ± 11% -33.4% 257677 ± 4% numa-meminfo.node1.Slab
740.00 ± 3% +10.8% 820.00 ± 3% slabinfo.kmem_cache_node.active_objs
783.25 ± 3% +10.2% 863.00 ± 3% slabinfo.kmem_cache_node.num_objs
2947330 ± 11% -43.3% 1670662 ± 4% slabinfo.vm_area_struct.active_objs
73683 ± 11% -43.3% 41766 ± 4% slabinfo.vm_area_struct.active_slabs
2947343 ± 11% -43.3% 1670680 ± 4% slabinfo.vm_area_struct.num_objs
73683 ± 11% -43.3% 41766 ± 4% slabinfo.vm_area_struct.num_slabs
58082 ± 10% -42.3% 33491 ± 4% proc-vmstat.nr_page_table_pages
179181 ± 9% -35.3% 115969 ± 3% proc-vmstat.nr_slab_unreclaimable
8.75 ±133% +8705.7% 770.50 ±149% proc-vmstat.numa_hint_faults_local
1030609 ± 2% -10.7% 920710 proc-vmstat.numa_hit
1014892 ± 2% -11.0% 903417 proc-vmstat.numa_local
8262 ±160% +287.5% 32019 ± 65% proc-vmstat.numa_pte_updates
4248 ± 4% +14.4% 4860 ± 5% proc-vmstat.pgactivate
1237129 ± 3% -13.4% 1071267 ± 2% proc-vmstat.pgalloc_normal
61279755 ± 10% -42.0% 35555364 ± 3% proc-vmstat.pgfault
1153261 ± 4% -15.6% 972958 ± 12% proc-vmstat.pgfree
12276 ± 8% +14.2% 14023 ± 4% sched_debug.cfs_rq:/.exec_clock.max
816.73 ± 7% +29.3% 1056 ± 2% sched_debug.cfs_rq:/.exec_clock.stddev
90308 ± 8% +33.2% 120279 ± 13% sched_debug.cfs_rq:/.min_vruntime.avg
122001 ± 6% +30.7% 159494 ± 4% sched_debug.cfs_rq:/.min_vruntime.max
8108 ± 6% +35.2% 10962 ± 2% sched_debug.cfs_rq:/.min_vruntime.stddev
3.80 ± 28% +49.4% 5.67 ± 2% sched_debug.cfs_rq:/.runnable_load_avg.avg
19.20 ± 24% +33.7% 25.68 ± 11% sched_debug.cfs_rq:/.runnable_load_avg.stddev
50559 ± 12% +32.5% 67006 ± 2% sched_debug.cfs_rq:/.spread0.max
8108 ± 6% +35.2% 10962 ± 2% sched_debug.cfs_rq:/.spread0.stddev
715.29 ± 11% +31.0% 937.06 ± 3% sched_debug.cfs_rq:/.util_avg.max
0.62 ± 96% -100.0% 0.00 sched_debug.cfs_rq:/.util_avg.min
136.54 ± 16% +49.4% 204.01 ± 11% sched_debug.cfs_rq:/.util_avg.stddev
248.08 ± 6% +44.4% 358.25 ± 19% sched_debug.cfs_rq:/.util_est_enqueued.max
472136 ± 20% +74.7% 824658 ± 3% sched_debug.cpu.avg_idle.avg
48568 ± 17% -49.5% 24506 ± 29% sched_debug.cpu.avg_idle.min
161426 ± 19% +27.8% 206264 ± 4% sched_debug.cpu.avg_idle.stddev
2.64 ± 3% +7.0% 2.83 ± 3% sched_debug.cpu.clock.stddev
2.64 ± 3% +7.0% 2.83 ± 3% sched_debug.cpu.clock_task.stddev
362539 ± 13% -75.3% 89466 ± 12% sched_debug.cpu.nr_switches.avg
385763 ± 12% -71.0% 111713 ± 13% sched_debug.cpu.nr_switches.max
339521 ± 14% -78.2% 74098 ± 14% sched_debug.cpu.nr_switches.min
8.45 ± 6% -17.0% 7.01 ± 9% sched_debug.cpu.nr_uninterruptible.stddev
363377 ± 13% -75.2% 90290 ± 13% sched_debug.cpu.sched_count.avg
408058 ± 13% -65.0% 142960 ± 15% sched_debug.cpu.sched_count.max
339524 ± 14% -78.1% 74224 ± 14% sched_debug.cpu.sched_count.min
180453 ± 13% -75.7% 43933 ± 13% sched_debug.cpu.sched_goidle.avg
191582 ± 12% -71.6% 54363 ± 14% sched_debug.cpu.sched_goidle.max
169052 ± 14% -78.5% 36425 ± 14% sched_debug.cpu.sched_goidle.min
183653 ± 13% -75.1% 45783 ± 13% sched_debug.cpu.ttwu_count.avg
202354 ± 13% -72.8% 55054 ± 15% sched_debug.cpu.ttwu_count.max
168679 ± 13% -75.2% 41903 ± 13% sched_debug.cpu.ttwu_count.min
8022 ± 24% -75.1% 2000 ± 12% sched_debug.cpu.ttwu_count.stddev
1218 ± 6% +20.2% 1464 ± 6% sched_debug.cpu.ttwu_local.max
164.12 ± 17% +41.9% 232.82 ± 3% sched_debug.cpu.ttwu_local.stddev
8.44 ± 11% -36.3% 5.38 ± 11% perf-stat.i.MPKI
3.178e+09 ± 4% +15.6% 3.674e+09 perf-stat.i.branch-instructions
9232346 ± 8% -38.0% 5722174 ± 9% perf-stat.i.cache-misses
1.006e+08 ± 15% -26.2% 74257221 ± 6% perf-stat.i.cache-references
212153 ± 13% -73.9% 55331 ± 3% perf-stat.i.context-switches
2.115e+10 ± 5% +18.7% 2.511e+10 ± 2% perf-stat.i.cpu-cycles
482.97 ± 22% -73.1% 129.87 ± 5% perf-stat.i.cpu-migrations
2713 ± 9% +98.8% 5392 ± 16% perf-stat.i.cycles-between-cache-misses
0.20 ± 4% -0.1 0.09 ± 2% perf-stat.i.dTLB-load-miss-rate%
6517636 ± 10% -48.8% 3334950 ± 2% perf-stat.i.dTLB-load-misses
3.377e+09 ± 6% +26.6% 4.274e+09 ± 2% perf-stat.i.dTLB-loads
1125113 ± 8% -25.5% 838294 ± 4% perf-stat.i.dTLB-store-misses
7.672e+08 ± 5% -21.6% 6.017e+08 perf-stat.i.dTLB-stores
40.86 ± 11% +10.2 51.09 ± 6% perf-stat.i.iTLB-load-miss-rate%
1489763 ± 10% -34.9% 970146 ± 6% perf-stat.i.iTLB-load-misses
2248250 ± 14% -58.7% 928739 ± 6% perf-stat.i.iTLB-loads
1.262e+10 ± 4% +23.4% 1.558e+10 perf-stat.i.instructions
8679 ± 6% +87.6% 16280 ± 5% perf-stat.i.instructions-per-iTLB-miss
0.59 +4.0% 0.62 perf-stat.i.ipc
200189 ± 10% -41.7% 116680 ± 2% perf-stat.i.minor-faults
3344481 ± 10% -52.0% 1606079 ± 3% perf-stat.i.node-load-misses
84375 ± 20% -56.2% 36926 ± 37% perf-stat.i.node-loads
1565682 ± 10% -47.0% 830068 ± 3% perf-stat.i.node-store-misses
869275 ± 12% -46.0% 469789 ± 3% perf-stat.i.node-stores
200189 ± 10% -41.7% 116680 ± 2% perf-stat.i.page-faults
7.94 ± 11% -39.9% 4.77 ± 6% perf-stat.overall.MPKI
1.68 -3.8% 1.61 perf-stat.overall.cpi
2299 ± 4% +92.8% 4434 ± 11% perf-stat.overall.cycles-between-cache-misses
0.19 ± 4% -0.1 0.08 ± 2% perf-stat.overall.dTLB-load-miss-rate%
40.08 ± 10% +11.0 51.08 ± 5% perf-stat.overall.iTLB-load-miss-rate%
8525 ± 6% +89.0% 16114 ± 5% perf-stat.overall.instructions-per-iTLB-miss
0.60 +4.0% 0.62 perf-stat.overall.ipc
14280 ± 5% +112.6% 30354 ± 2% perf-stat.overall.path-length
3.167e+09 ± 4% +15.6% 3.662e+09 perf-stat.ps.branch-instructions
9201666 ± 8% -38.0% 5703837 ± 9% perf-stat.ps.cache-misses
1.002e+08 ± 15% -26.2% 74012239 ± 6% perf-stat.ps.cache-references
211434 ± 13% -73.9% 55142 ± 3% perf-stat.ps.context-switches
2.108e+10 ± 5% +18.7% 2.502e+10 ± 2% perf-stat.ps.cpu-cycles
481.54 ± 22% -73.1% 129.46 ± 5% perf-stat.ps.cpu-migrations
6495785 ± 10% -48.8% 3323804 ± 2% perf-stat.ps.dTLB-load-misses
3.365e+09 ± 6% +26.6% 4.26e+09 ± 2% perf-stat.ps.dTLB-loads
1121400 ± 8% -25.5% 835514 ± 4% perf-stat.ps.dTLB-store-misses
7.647e+08 ± 5% -21.6% 5.997e+08 perf-stat.ps.dTLB-stores
1484812 ± 10% -34.9% 966928 ± 6% perf-stat.ps.iTLB-load-misses
2240685 ± 14% -58.7% 925639 ± 6% perf-stat.ps.iTLB-loads
1.258e+10 ± 4% +23.4% 1.552e+10 perf-stat.ps.instructions
199509 ± 10% -41.7% 116282 ± 2% perf-stat.ps.minor-faults
3333192 ± 10% -52.0% 1600811 ± 3% perf-stat.ps.node-load-misses
84222 ± 20% -56.0% 37029 ± 37% perf-stat.ps.node-loads
1560349 ± 10% -47.0% 827246 ± 3% perf-stat.ps.node-store-misses
866336 ± 12% -46.0% 468214 ± 3% perf-stat.ps.node-stores
199509 ± 10% -41.7% 116282 ± 2% perf-stat.ps.page-faults
3.862e+12 ± 4% +22.8% 4.743e+12 ± 2% perf-stat.total.instructions
15090 ± 9% -30.9% 10424 ± 6% softirqs.CPU0.RCU
14722 ± 5% -28.2% 10570 ± 4% softirqs.CPU1.RCU
14675 ± 6% -25.8% 10884 ± 4% softirqs.CPU10.RCU
14751 ± 4% -27.5% 10698 ± 8% softirqs.CPU11.RCU
14902 ± 8% -27.8% 10765 ± 7% softirqs.CPU12.RCU
14421 ± 3% -26.2% 10637 ± 7% softirqs.CPU13.RCU
14747 ± 5% -27.1% 10757 ± 6% softirqs.CPU14.RCU
14293 -27.0% 10439 ± 7% softirqs.CPU15.RCU
14692 ± 8% -28.9% 10443 ± 6% softirqs.CPU16.RCU
13768 ± 5% -25.4% 10275 ± 6% softirqs.CPU17.RCU
13952 ± 6% -28.1% 10030 ± 6% softirqs.CPU18.RCU
13840 ± 3% -27.2% 10079 ± 6% softirqs.CPU19.RCU
19253 ± 4% -19.3% 15537 ± 4% softirqs.CPU2.RCU
14429 ± 5% -27.2% 10501 ± 7% softirqs.CPU20.RCU
13775 ± 4% -20.6% 10943 ± 12% softirqs.CPU21.RCU
15718 ± 7% -28.4% 11254 ± 6% softirqs.CPU22.RCU
14218 ± 4% -24.3% 10769 ± 10% softirqs.CPU23.RCU
13918 ± 4% -27.7% 10065 ± 7% softirqs.CPU24.RCU
13805 ± 5% -24.3% 10447 ± 16% softirqs.CPU25.RCU
14106 ± 5% -27.2% 10269 ± 8% softirqs.CPU26.RCU
14291 ± 5% -28.4% 10237 ± 7% softirqs.CPU27.RCU
14123 ± 4% -23.4% 10821 ± 7% softirqs.CPU28.RCU
13957 ± 5% -26.7% 10235 ± 7% softirqs.CPU29.RCU
14378 ± 3% -22.0% 11219 ± 15% softirqs.CPU3.RCU
14266 ± 3% -22.5% 11060 ± 12% softirqs.CPU30.RCU
13865 ± 4% -30.4% 9645 ± 7% softirqs.CPU31.RCU
14048 ± 4% -29.3% 9932 ± 5% softirqs.CPU32.RCU
14296 ± 4% -29.6% 10069 ± 12% softirqs.CPU33.RCU
14998 ± 8% -26.4% 11037 ± 11% softirqs.CPU34.RCU
14342 ± 5% -27.1% 10453 ± 5% softirqs.CPU35.RCU
14865 ± 4% -26.4% 10942 ± 3% softirqs.CPU36.RCU
14698 ± 4% -26.3% 10834 ± 4% softirqs.CPU37.RCU
14639 ± 5% -24.6% 11036 ± 6% softirqs.CPU38.RCU
13850 ± 8% -23.8% 10559 ± 5% softirqs.CPU39.RCU
14451 ± 3% -24.8% 10868 ± 6% softirqs.CPU4.RCU
14373 ± 5% -23.8% 10958 ± 2% softirqs.CPU40.RCU
14794 ± 5% -25.7% 10990 ± 2% softirqs.CPU41.RCU
14561 ± 5% -25.4% 10866 ± 3% softirqs.CPU42.RCU
14823 ± 4% -24.0% 11261 ± 10% softirqs.CPU43.RCU
11986 ± 3% -24.8% 9019 ± 2% softirqs.CPU44.RCU
14143 ± 5% -26.1% 10446 ± 6% softirqs.CPU45.RCU
14225 ± 4% -26.3% 10487 ± 8% softirqs.CPU46.RCU
15111 ± 10% -27.9% 10899 ± 7% softirqs.CPU47.RCU
14841 ± 10% -28.5% 10611 ± 6% softirqs.CPU48.RCU
14323 ± 5% -22.3% 11123 ± 3% softirqs.CPU49.RCU
14523 ± 3% -26.5% 10680 ± 5% softirqs.CPU5.RCU
14354 ± 4% -27.4% 10414 ± 6% softirqs.CPU50.RCU
14319 ± 4% -22.5% 11099 ± 12% softirqs.CPU51.RCU
13868 ± 5% -28.7% 9882 ± 9% softirqs.CPU52.RCU
14449 ± 4% -26.9% 10559 ± 11% softirqs.CPU53.RCU
14865 ± 8% -30.3% 10366 ± 7% softirqs.CPU54.RCU
14631 ± 2% -23.3% 11227 ± 15% softirqs.CPU55.RCU
15058 ± 4% -28.2% 10810 ± 7% softirqs.CPU56.RCU
14406 ± 4% -25.8% 10685 ± 6% softirqs.CPU57.RCU
14768 ± 3% -26.1% 10908 ± 6% softirqs.CPU58.RCU
14789 ± 6% -26.6% 10857 ± 5% softirqs.CPU59.RCU
14731 ± 5% -26.6% 10812 ± 6% softirqs.CPU6.RCU
13350 ± 2% -24.3% 10108 ± 7% softirqs.CPU60.RCU
13291 ± 5% -26.8% 9732 ± 8% softirqs.CPU61.RCU
13438 ± 6% -27.1% 9803 ± 8% softirqs.CPU62.RCU
13352 ± 4% -26.0% 9882 ± 6% softirqs.CPU63.RCU
13710 ± 5% -25.2% 10260 ± 6% softirqs.CPU64.RCU
13618 ± 4% -29.0% 9667 ± 8% softirqs.CPU65.RCU
14217 ± 6% -25.3% 10615 ± 7% softirqs.CPU66.RCU
14102 ± 5% -26.7% 10333 ± 8% softirqs.CPU67.RCU
14124 ± 4% -27.1% 10295 ± 8% softirqs.CPU68.RCU
13718 ± 5% -32.5% 9256 ± 9% softirqs.CPU69.RCU
14427 ± 4% -25.3% 10776 ± 4% softirqs.CPU7.RCU
13928 ± 4% -26.2% 10273 ± 9% softirqs.CPU70.RCU
13906 ± 6% -26.0% 10295 ± 6% softirqs.CPU71.RCU
14128 ± 5% -26.1% 10438 ± 8% softirqs.CPU72.RCU
14056 ± 4% -27.3% 10212 ± 7% softirqs.CPU73.RCU
14121 ± 5% -27.0% 10306 ± 10% softirqs.CPU74.RCU
13156 ± 4% -24.5% 9933 ± 15% softirqs.CPU75.RCU
13735 ± 5% -28.0% 9894 ± 15% softirqs.CPU76.RCU
13347 ± 4% -22.9% 10286 ± 3% softirqs.CPU77.RCU
13722 ± 9% -27.4% 9968 ± 7% softirqs.CPU78.RCU
13310 ± 5% -27.4% 9660 ± 6% softirqs.CPU79.RCU
14992 ± 11% -31.8% 10220 ± 9% softirqs.CPU8.RCU
13988 ± 5% -27.7% 10118 ± 5% softirqs.CPU80.RCU
13918 ± 4% -27.0% 10156 ± 5% softirqs.CPU81.RCU
13849 ± 5% -27.0% 10110 ± 5% softirqs.CPU82.RCU
14076 ± 6% -27.2% 10248 softirqs.CPU83.RCU
13473 ± 5% -24.3% 10205 softirqs.CPU84.RCU
13814 ± 5% -25.2% 10334 ± 5% softirqs.CPU85.RCU
13938 ± 4% -27.7% 10077 ± 4% softirqs.CPU86.RCU
14634 ± 9% -30.4% 10189 ± 5% softirqs.CPU87.RCU
13692 ± 11% -25.4% 10219 ± 9% softirqs.CPU9.RCU
1255273 ± 4% -26.4% 923618 ± 6% softirqs.RCU
308.00 ± 38% -41.3% 180.75 ± 7% interrupts.36:PCI-MSI.1572867-edge.eth0-TxRx-3
296.75 ± 31% -31.6% 203.00 ± 12% interrupts.42:PCI-MSI.1572873-edge.eth0-TxRx-9
196.50 ± 10% -17.4% 162.25 ± 3% interrupts.48:PCI-MSI.1572879-edge.eth0-TxRx-15
5727 ± 7% -38.4% 3530 ± 19% interrupts.CPU0.RES:Rescheduling_interrupts
5869 ± 14% -43.1% 3342 ± 10% interrupts.CPU1.RES:Rescheduling_interrupts
5447 ± 9% -40.8% 3226 ± 7% interrupts.CPU10.RES:Rescheduling_interrupts
40.50 ±101% +126.5% 91.75 ± 38% interrupts.CPU10.TLB:TLB_shootdowns
5312 ± 11% -40.9% 3137 ± 15% interrupts.CPU11.RES:Rescheduling_interrupts
49.00 ± 63% +78.1% 87.25 ± 27% interrupts.CPU11.TLB:TLB_shootdowns
5528 ± 7% -40.0% 3316 ± 15% interrupts.CPU12.RES:Rescheduling_interrupts
5491 ± 8% -44.6% 3042 ± 6% interrupts.CPU13.RES:Rescheduling_interrupts
5147 ± 8% -44.2% 2873 ± 14% interrupts.CPU14.RES:Rescheduling_interrupts
196.50 ± 10% -17.4% 162.25 ± 3% interrupts.CPU15.48:PCI-MSI.1572879-edge.eth0-TxRx-15
5139 ± 7% -50.1% 2566 ± 15% interrupts.CPU15.RES:Rescheduling_interrupts
5509 ± 11% -36.7% 3489 ± 10% interrupts.CPU16.RES:Rescheduling_interrupts
5478 ± 5% -44.1% 3065 ± 16% interrupts.CPU17.RES:Rescheduling_interrupts
5258 ± 12% -36.9% 3319 ± 9% interrupts.CPU18.RES:Rescheduling_interrupts
5652 ± 11% -35.5% 3643 ± 17% interrupts.CPU19.RES:Rescheduling_interrupts
5383 ± 14% -45.1% 2954 ± 13% interrupts.CPU2.RES:Rescheduling_interrupts
5356 ± 9% -46.0% 2894 ± 14% interrupts.CPU20.RES:Rescheduling_interrupts
5763 ± 19% -50.8% 2836 ± 11% interrupts.CPU21.RES:Rescheduling_interrupts
5639 ± 7% -39.4% 3418 ± 12% interrupts.CPU22.RES:Rescheduling_interrupts
5579 ± 6% -33.4% 3713 ± 25% interrupts.CPU23.RES:Rescheduling_interrupts
912.75 ± 33% +45.7% 1329 ± 6% interrupts.CPU24.NMI:Non-maskable_interrupts
912.75 ± 33% +45.7% 1329 ± 6% interrupts.CPU24.PMI:Performance_monitoring_interrupts
5440 ± 6% -39.2% 3305 ± 13% interrupts.CPU24.RES:Rescheduling_interrupts
5716 ± 7% -36.5% 3628 ± 9% interrupts.CPU25.RES:Rescheduling_interrupts
5544 ± 8% -37.3% 3474 ± 4% interrupts.CPU26.RES:Rescheduling_interrupts
948.00 ± 35% +67.1% 1583 ± 17% interrupts.CPU27.NMI:Non-maskable_interrupts
948.00 ± 35% +67.1% 1583 ± 17% interrupts.CPU27.PMI:Performance_monitoring_interrupts
5776 ± 7% -31.0% 3985 ± 14% interrupts.CPU27.RES:Rescheduling_interrupts
972.75 ± 24% +57.4% 1531 ± 14% interrupts.CPU28.NMI:Non-maskable_interrupts
972.75 ± 24% +57.4% 1531 ± 14% interrupts.CPU28.PMI:Performance_monitoring_interrupts
5638 ± 6% -40.9% 3334 ± 15% interrupts.CPU28.RES:Rescheduling_interrupts
5537 ± 5% -30.1% 3869 ± 22% interrupts.CPU29.RES:Rescheduling_interrupts
308.00 ± 38% -41.3% 180.75 ± 7% interrupts.CPU3.36:PCI-MSI.1572867-edge.eth0-TxRx-3
5446 ± 14% -39.7% 3282 ± 8% interrupts.CPU3.RES:Rescheduling_interrupts
1284 ± 3% +21.6% 1561 ± 5% interrupts.CPU30.NMI:Non-maskable_interrupts
1284 ± 3% +21.6% 1561 ± 5% interrupts.CPU30.PMI:Performance_monitoring_interrupts
5483 ± 12% -43.1% 3119 ± 15% interrupts.CPU30.RES:Rescheduling_interrupts
5823 ± 7% -42.4% 3356 ± 14% interrupts.CPU31.RES:Rescheduling_interrupts
28.25 ± 89% +187.6% 81.25 ± 40% interrupts.CPU31.TLB:TLB_shootdowns
5776 ± 7% -33.6% 3836 ± 13% interrupts.CPU32.RES:Rescheduling_interrupts
5675 ± 10% -45.1% 3117 ± 12% interrupts.CPU33.RES:Rescheduling_interrupts
1170 ± 3% +23.4% 1443 ± 11% interrupts.CPU34.NMI:Non-maskable_interrupts
1170 ± 3% +23.4% 1443 ± 11% interrupts.CPU34.PMI:Performance_monitoring_interrupts
5754 ± 9% -43.7% 3240 ± 14% interrupts.CPU34.RES:Rescheduling_interrupts
1252 ± 6% +15.6% 1447 ± 8% interrupts.CPU35.NMI:Non-maskable_interrupts
1252 ± 6% +15.6% 1447 ± 8% interrupts.CPU35.PMI:Performance_monitoring_interrupts
5266 ± 8% -41.2% 3099 ± 11% interrupts.CPU35.RES:Rescheduling_interrupts
5239 ± 10% -41.2% 3082 ± 23% interrupts.CPU36.RES:Rescheduling_interrupts
5576 ± 12% -43.0% 3175 ± 11% interrupts.CPU37.RES:Rescheduling_interrupts
5176 ± 6% -36.5% 3287 ± 16% interrupts.CPU38.RES:Rescheduling_interrupts
5559 ± 6% -35.5% 3583 ± 19% interrupts.CPU39.RES:Rescheduling_interrupts
5568 ± 9% -41.9% 3233 ± 15% interrupts.CPU4.RES:Rescheduling_interrupts
1096 ± 21% +37.8% 1511 ± 11% interrupts.CPU40.NMI:Non-maskable_interrupts
1096 ± 21% +37.8% 1511 ± 11% interrupts.CPU40.PMI:Performance_monitoring_interrupts
5151 ± 8% -34.8% 3360 ± 10% interrupts.CPU40.RES:Rescheduling_interrupts
5181 ± 9% -42.5% 2981 ± 16% interrupts.CPU41.RES:Rescheduling_interrupts
1139 ± 28% +36.7% 1557 ± 6% interrupts.CPU42.NMI:Non-maskable_interrupts
1139 ± 28% +36.7% 1557 ± 6% interrupts.CPU42.PMI:Performance_monitoring_interrupts
4909 ± 10% -36.5% 3117 ± 18% interrupts.CPU42.RES:Rescheduling_interrupts
5249 ± 5% -37.2% 3296 ± 9% interrupts.CPU43.RES:Rescheduling_interrupts
1146 ± 29% +31.0% 1501 ± 8% interrupts.CPU44.NMI:Non-maskable_interrupts
1146 ± 29% +31.0% 1501 ± 8% interrupts.CPU44.PMI:Performance_monitoring_interrupts
5653 ± 20% -45.4% 3084 ± 7% interrupts.CPU44.RES:Rescheduling_interrupts
5853 ± 5% -48.0% 3042 ± 3% interrupts.CPU45.RES:Rescheduling_interrupts
32.75 ± 47% +221.4% 105.25 ± 45% interrupts.CPU45.TLB:TLB_shootdowns
5898 ± 10% -43.1% 3358 ± 10% interrupts.CPU46.RES:Rescheduling_interrupts
5392 ± 9% -39.0% 3290 ± 18% interrupts.CPU47.RES:Rescheduling_interrupts
5924 ± 6% -46.5% 3170 ± 11% interrupts.CPU48.RES:Rescheduling_interrupts
5938 ± 5% -41.4% 3480 ± 8% interrupts.CPU49.RES:Rescheduling_interrupts
31.00 ± 63% +177.4% 86.00 ± 39% interrupts.CPU49.TLB:TLB_shootdowns
5543 ± 9% -42.4% 3193 ± 7% interrupts.CPU5.RES:Rescheduling_interrupts
5277 ± 5% -41.4% 3092 ± 3% interrupts.CPU50.RES:Rescheduling_interrupts
6101 ± 9% -44.3% 3399 ± 10% interrupts.CPU51.RES:Rescheduling_interrupts
5834 ± 12% -40.3% 3481 ± 6% interrupts.CPU52.RES:Rescheduling_interrupts
6214 ± 9% -40.2% 3717 ± 12% interrupts.CPU53.RES:Rescheduling_interrupts
25.75 ± 95% +198.1% 76.75 ± 46% interrupts.CPU53.TLB:TLB_shootdowns
6278 ± 11% -52.7% 2967 ± 13% interrupts.CPU54.RES:Rescheduling_interrupts
5668 ± 11% -43.4% 3205 ± 4% interrupts.CPU55.RES:Rescheduling_interrupts
5820 ± 8% -39.9% 3500 ± 3% interrupts.CPU56.RES:Rescheduling_interrupts
5928 ± 6% -43.5% 3350 ± 8% interrupts.CPU57.RES:Rescheduling_interrupts
35.00 ± 34% +127.1% 79.50 ± 29% interrupts.CPU57.TLB:TLB_shootdowns
5632 ± 11% -44.7% 3115 ± 16% interrupts.CPU58.RES:Rescheduling_interrupts
5440 ± 8% -47.4% 2863 ± 8% interrupts.CPU59.RES:Rescheduling_interrupts
5485 ± 12% -44.5% 3044 ± 8% interrupts.CPU6.RES:Rescheduling_interrupts
1153 ± 25% +32.2% 1525 ± 7% interrupts.CPU60.NMI:Non-maskable_interrupts
1153 ± 25% +32.2% 1525 ± 7% interrupts.CPU60.PMI:Performance_monitoring_interrupts
5514 ± 6% -45.5% 3006 ± 11% interrupts.CPU60.RES:Rescheduling_interrupts
5600 ± 7% -36.7% 3547 ± 14% interrupts.CPU61.RES:Rescheduling_interrupts
5612 ± 7% -30.8% 3885 ± 27% interrupts.CPU62.RES:Rescheduling_interrupts
1214 ± 5% +22.3% 1485 ± 5% interrupts.CPU63.NMI:Non-maskable_interrupts
1214 ± 5% +22.3% 1485 ± 5% interrupts.CPU63.PMI:Performance_monitoring_interrupts
6114 ± 11% -45.5% 3335 ± 13% interrupts.CPU63.RES:Rescheduling_interrupts
5297 ± 14% -45.4% 2891 ± 17% interrupts.CPU64.RES:Rescheduling_interrupts
1309 ± 10% +23.4% 1616 ± 9% interrupts.CPU65.NMI:Non-maskable_interrupts
1309 ± 10% +23.4% 1616 ± 9% interrupts.CPU65.PMI:Performance_monitoring_interrupts
5366 ± 8% -49.0% 2738 ± 17% interrupts.CPU65.RES:Rescheduling_interrupts
1267 ± 6% +23.6% 1566 ± 13% interrupts.CPU66.NMI:Non-maskable_interrupts
1267 ± 6% +23.6% 1566 ± 13% interrupts.CPU66.PMI:Performance_monitoring_interrupts
5244 ± 12% -37.8% 3261 ± 15% interrupts.CPU66.RES:Rescheduling_interrupts
5758 ± 11% -41.1% 3390 ± 23% interrupts.CPU67.RES:Rescheduling_interrupts
5453 ± 7% -35.8% 3501 ± 12% interrupts.CPU68.RES:Rescheduling_interrupts
5786 ± 11% -41.0% 3413 ± 20% interrupts.CPU69.RES:Rescheduling_interrupts
5628 ± 10% -45.4% 3072 ± 6% interrupts.CPU7.RES:Rescheduling_interrupts
1199 ± 3% +23.0% 1474 ± 7% interrupts.CPU70.NMI:Non-maskable_interrupts
1199 ± 3% +23.0% 1474 ± 7% interrupts.CPU70.PMI:Performance_monitoring_interrupts
5714 ± 7% -32.3% 3870 ± 13% interrupts.CPU70.RES:Rescheduling_interrupts
1242 ± 2% +22.5% 1521 ± 8% interrupts.CPU71.NMI:Non-maskable_interrupts
1242 ± 2% +22.5% 1521 ± 8% interrupts.CPU71.PMI:Performance_monitoring_interrupts
5650 ± 10% -38.1% 3496 ± 13% interrupts.CPU71.RES:Rescheduling_interrupts
73.75 ± 33% -55.3% 33.00 ± 40% interrupts.CPU71.TLB:TLB_shootdowns
1298 ± 14% +16.8% 1515 ± 12% interrupts.CPU72.NMI:Non-maskable_interrupts
1298 ± 14% +16.8% 1515 ± 12% interrupts.CPU72.PMI:Performance_monitoring_interrupts
5255 ± 6% -45.6% 2857 ± 15% interrupts.CPU72.RES:Rescheduling_interrupts
5522 ± 7% -37.1% 3473 ± 19% interrupts.CPU73.RES:Rescheduling_interrupts
1255 ± 5% +19.3% 1497 ± 10% interrupts.CPU74.NMI:Non-maskable_interrupts
1255 ± 5% +19.3% 1497 ± 10% interrupts.CPU74.PMI:Performance_monitoring_interrupts
5540 ± 8% -39.1% 3371 ± 18% interrupts.CPU74.RES:Rescheduling_interrupts
5489 ± 3% -45.6% 2986 ± 17% interrupts.CPU75.RES:Rescheduling_interrupts
5618 ± 7% -37.9% 3489 ± 22% interrupts.CPU76.RES:Rescheduling_interrupts
1284 ± 13% +30.2% 1673 ± 12% interrupts.CPU77.NMI:Non-maskable_interrupts
1284 ± 13% +30.2% 1673 ± 12% interrupts.CPU77.PMI:Performance_monitoring_interrupts
5621 ± 6% -29.8% 3943 ± 19% interrupts.CPU77.RES:Rescheduling_interrupts
1166 ± 4% +24.9% 1456 ± 9% interrupts.CPU78.NMI:Non-maskable_interrupts
1166 ± 4% +24.9% 1456 ± 9% interrupts.CPU78.PMI:Performance_monitoring_interrupts
5624 ± 9% -36.3% 3585 ± 19% interrupts.CPU78.RES:Rescheduling_interrupts
30.75 ± 81% +166.7% 82.00 ± 47% interrupts.CPU78.TLB:TLB_shootdowns
5642 ± 10% -33.2% 3772 ± 15% interrupts.CPU79.RES:Rescheduling_interrupts
5223 ± 8% -35.4% 3374 ± 8% interrupts.CPU8.RES:Rescheduling_interrupts
1309 ± 10% +23.4% 1615 ± 11% interrupts.CPU80.NMI:Non-maskable_interrupts
1309 ± 10% +23.4% 1615 ± 11% interrupts.CPU80.PMI:Performance_monitoring_interrupts
5464 ± 12% -35.6% 3521 ± 20% interrupts.CPU80.RES:Rescheduling_interrupts
1254 ± 5% +28.0% 1606 ± 5% interrupts.CPU81.NMI:Non-maskable_interrupts
1254 ± 5% +28.0% 1606 ± 5% interrupts.CPU81.PMI:Performance_monitoring_interrupts
5620 ± 7% -40.9% 3323 ± 9% interrupts.CPU81.RES:Rescheduling_interrupts
1210 ± 2% +20.2% 1454 ± 7% interrupts.CPU82.NMI:Non-maskable_interrupts
1210 ± 2% +20.2% 1454 ± 7% interrupts.CPU82.PMI:Performance_monitoring_interrupts
5422 ± 8% -37.6% 3382 ± 20% interrupts.CPU82.RES:Rescheduling_interrupts
5565 ± 4% -36.0% 3559 ± 14% interrupts.CPU83.RES:Rescheduling_interrupts
5603 ± 7% -33.3% 3739 ± 16% interrupts.CPU84.RES:Rescheduling_interrupts
5346 ± 5% -34.6% 3498 ± 15% interrupts.CPU85.RES:Rescheduling_interrupts
5122 ± 8% -41.4% 3000 ± 11% interrupts.CPU86.RES:Rescheduling_interrupts
5468 ± 7% -35.8% 3509 ± 12% interrupts.CPU87.RES:Rescheduling_interrupts
296.75 ± 31% -31.6% 203.00 ± 12% interrupts.CPU9.42:PCI-MSI.1572873-edge.eth0-TxRx-9
5324 ± 10% -42.6% 3053 ± 7% interrupts.CPU9.RES:Rescheduling_interrupts
102430 ± 6% +16.8% 119675 ± 6% interrupts.NMI:Non-maskable_interrupts
102430 ± 6% +16.8% 119675 ± 6% interrupts.PMI:Performance_monitoring_interrupts
488877 ± 8% -40.4% 291343 ± 3% interrupts.RES:Rescheduling_interrupts
67.21 ± 2% -13.1 54.11 ± 3% perf-profile.calltrace.cycles-pp.secondary_startup_64
66.51 ± 2% -12.9 53.62 ± 3% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
66.51 ± 2% -12.9 53.62 ± 3% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
66.47 ± 2% -12.9 53.59 ± 3% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
51.36 -8.5 42.83 ± 3% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
56.55 -7.7 48.81 ± 4% perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
56.16 -7.7 48.47 ± 4% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
3.27 ± 17% -2.3 0.98 ± 17% perf-profile.calltrace.cycles-pp.sched_ttwu_pending.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
2.45 ± 16% -2.1 0.31 ±100% perf-profile.calltrace.cycles-pp.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate
2.72 ± 10% -1.9 0.77 ± 12% perf-profile.calltrace.cycles-pp.rwsem_wake.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.73 ± 17% -1.9 0.83 ± 17% perf-profile.calltrace.cycles-pp.ttwu_do_activate.sched_ttwu_pending.do_idle.cpu_startup_entry.start_secondary
2.72 ± 17% -1.9 0.83 ± 17% perf-profile.calltrace.cycles-pp.activate_task.ttwu_do_activate.sched_ttwu_pending.do_idle.cpu_startup_entry
2.71 ± 17% -1.9 0.83 ± 17% perf-profile.calltrace.cycles-pp.enqueue_task_fair.activate_task.ttwu_do_activate.sched_ttwu_pending.do_idle
2.52 ± 17% -1.7 0.78 ± 16% perf-profile.calltrace.cycles-pp.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate.sched_ttwu_pending
2.30 ± 10% -1.6 0.67 ± 12% perf-profile.calltrace.cycles-pp.wake_up_q.rwsem_wake.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64
2.08 ± 17% -1.6 0.47 ± 59% perf-profile.calltrace.cycles-pp.schedule.rwsem_down_read_slowpath.__do_page_fault.do_page_fault.page_fault
2.26 ± 10% -1.6 0.66 ± 13% perf-profile.calltrace.cycles-pp.try_to_wake_up.wake_up_q.rwsem_wake.vm_mmap_pgoff.ksys_mmap_pgoff
2.31 ± 18% -1.6 0.71 ± 12% perf-profile.calltrace.cycles-pp.schedule_idle.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
2.29 ± 18% -1.6 0.71 ± 11% perf-profile.calltrace.cycles-pp.__schedule.schedule_idle.do_idle.cpu_startup_entry.start_secondary
2.04 ± 17% -1.6 0.47 ± 59% perf-profile.calltrace.cycles-pp.__schedule.schedule.rwsem_down_read_slowpath.__do_page_fault.do_page_fault
10.72 ± 4% -1.4 9.31 ± 10% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
10.71 ± 4% -1.4 9.31 ± 10% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
10.47 ± 4% -1.3 9.16 ± 11% perf-profile.calltrace.cycles-pp.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
10.39 ± 4% -1.3 9.12 ± 11% perf-profile.calltrace.cycles-pp.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.63 ± 13% -1.0 0.67 ± 9% perf-profile.calltrace.cycles-pp.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.09 ± 14% -0.8 0.29 ±100% perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
1.28 ± 14% -0.7 0.62 ± 11% perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
2.83 ± 8% -0.6 2.25 ± 8% perf-profile.calltrace.cycles-pp.menu_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
0.80 ± 7% -0.3 0.46 ± 59% perf-profile.calltrace.cycles-pp.get_next_timer_interrupt.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select.do_idle
1.42 ± 10% -0.3 1.09 ± 9% perf-profile.calltrace.cycles-pp.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry.start_secondary
1.10 ± 8% -0.3 0.85 ± 10% perf-profile.calltrace.cycles-pp.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry
0.63 ± 13% +0.2 0.82 ± 14% perf-profile.calltrace.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
0.70 ± 14% +0.2 0.90 ± 14% perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt
0.88 ± 13% +0.2 1.10 ± 11% perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
1.46 ± 9% +0.3 1.79 ± 7% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
0.74 ± 20% +1.6 2.31 ± 10% perf-profile.calltrace.cycles-pp.rwsem_spin_on_owner.rwsem_optimistic_spin.rwsem_down_read_slowpath.__do_page_fault.do_page_fault
5.93 ± 2% +1.7 7.62 ± 12% perf-profile.calltrace.cycles-pp.down_write_killable.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
5.86 ± 2% +1.7 7.58 ± 12% perf-profile.calltrace.cycles-pp.rwsem_down_write_slowpath.down_write_killable.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64
4.87 ± 4% +2.2 7.11 ± 13% perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_write_slowpath.down_write_killable.vm_mmap_pgoff.ksys_mmap_pgoff
3.71 ± 5% +2.3 6.04 ± 14% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_optimistic_spin.rwsem_down_write_slowpath.down_write_killable.vm_mmap_pgoff
19.29 ± 10% +16.1 35.34 ± 4% perf-profile.calltrace.cycles-pp.page_fault
19.24 ± 10% +16.1 35.32 ± 4% perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
19.09 ± 10% +16.2 35.26 ± 4% perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault
15.78 ± 15% +17.7 33.52 ± 4% perf-profile.calltrace.cycles-pp.rwsem_down_read_slowpath.__do_page_fault.do_page_fault.page_fault
8.85 ± 29% +18.5 27.35 ± 5% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_optimistic_spin.rwsem_down_read_slowpath.__do_page_fault.do_page_fault
10.65 ± 28% +20.5 31.17 ± 4% perf-profile.calltrace.cycles-pp.rwsem_optimistic_spin.rwsem_down_read_slowpath.__do_page_fault.do_page_fault.page_fault
67.21 ± 2% -13.1 54.11 ± 3% perf-profile.children.cycles-pp.secondary_startup_64
67.21 ± 2% -13.1 54.11 ± 3% perf-profile.children.cycles-pp.cpu_startup_entry
67.23 ± 2% -13.1 54.13 ± 3% perf-profile.children.cycles-pp.do_idle
66.51 ± 2% -12.9 53.62 ± 3% perf-profile.children.cycles-pp.start_secondary
51.47 -8.5 42.96 ± 3% perf-profile.children.cycles-pp.intel_idle
57.16 -7.9 49.27 ± 4% perf-profile.children.cycles-pp.cpuidle_enter
57.14 -7.9 49.26 ± 4% perf-profile.children.cycles-pp.cpuidle_enter_state
4.84 ± 16% -3.3 1.55 ± 13% perf-profile.children.cycles-pp.__schedule
4.04 ± 14% -2.8 1.24 ± 14% perf-profile.children.cycles-pp.ttwu_do_activate
4.04 ± 14% -2.8 1.24 ± 14% perf-profile.children.cycles-pp.activate_task
4.03 ± 14% -2.8 1.23 ± 14% perf-profile.children.cycles-pp.enqueue_task_fair
3.78 ± 14% -2.6 1.16 ± 13% perf-profile.children.cycles-pp.enqueue_entity
3.36 ± 16% -2.4 1.00 ± 17% perf-profile.children.cycles-pp.sched_ttwu_pending
3.20 ± 8% -2.1 1.08 ± 11% perf-profile.children.cycles-pp.rwsem_wake
2.59 ± 15% -1.8 0.79 ± 14% perf-profile.children.cycles-pp.__account_scheduler_latency
2.72 ± 9% -1.8 0.93 ± 11% perf-profile.children.cycles-pp.wake_up_q
2.67 ± 9% -1.8 0.92 ± 11% perf-profile.children.cycles-pp.try_to_wake_up
2.56 ± 14% -1.7 0.85 ± 13% perf-profile.children.cycles-pp.schedule
2.34 ± 18% -1.6 0.72 ± 12% perf-profile.children.cycles-pp.schedule_idle
2.00 ± 14% -1.5 0.48 ± 6% perf-profile.children.cycles-pp.switch_mm_irqs_off
10.72 ± 4% -1.4 9.31 ± 10% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
1.90 ± 30% -1.4 0.50 ± 14% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
10.71 ± 4% -1.4 9.31 ± 10% perf-profile.children.cycles-pp.do_syscall_64
1.93 ± 28% -1.4 0.54 ± 9% perf-profile.children.cycles-pp._raw_spin_lock_irq
10.47 ± 4% -1.3 9.16 ± 11% perf-profile.children.cycles-pp.ksys_mmap_pgoff
10.40 ± 4% -1.3 9.12 ± 11% perf-profile.children.cycles-pp.vm_mmap_pgoff
1.60 ± 15% -1.1 0.51 ± 15% perf-profile.children.cycles-pp.stack_trace_save_tsk
1.64 ± 13% -1.0 0.67 ± 9% perf-profile.children.cycles-pp.do_mmap
1.37 ± 17% -0.9 0.45 ± 17% perf-profile.children.cycles-pp.arch_stack_walk
1.02 ± 13% -0.8 0.19 ± 6% perf-profile.children.cycles-pp.switch_mm
1.14 ± 14% -0.8 0.34 ± 16% perf-profile.children.cycles-pp.dequeue_task_fair
1.06 ± 13% -0.7 0.31 ± 14% perf-profile.children.cycles-pp.dequeue_entity
1.17 ± 9% -0.7 0.50 ± 10% perf-profile.children.cycles-pp.swapgs_restore_regs_and_return_to_usermode
1.29 ± 14% -0.7 0.62 ± 12% perf-profile.children.cycles-pp.handle_mm_fault
0.95 ± 16% -0.7 0.29 ± 15% perf-profile.children.cycles-pp.unwind_next_frame
1.02 ± 12% -0.6 0.40 ± 6% perf-profile.children.cycles-pp.update_load_avg
2.88 ± 8% -0.6 2.29 ± 8% perf-profile.children.cycles-pp.menu_select
1.10 ± 14% -0.6 0.53 ± 10% perf-profile.children.cycles-pp.__handle_mm_fault
0.86 ± 15% -0.6 0.30 ± 8% perf-profile.children.cycles-pp.get_unmapped_area
0.82 ± 15% -0.5 0.29 ± 8% perf-profile.children.cycles-pp.arch_get_unmapped_area_topdown
0.97 ± 13% -0.5 0.45 ± 8% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.90 ± 10% -0.5 0.39 ± 8% perf-profile.children.cycles-pp.prepare_exit_to_usermode
0.79 ± 18% -0.5 0.28 ± 22% perf-profile.children.cycles-pp.pick_next_task_fair
0.78 ± 15% -0.5 0.28 ± 9% perf-profile.children.cycles-pp.unmapped_area_topdown
0.57 ± 9% -0.4 0.18 ± 20% perf-profile.children.cycles-pp.select_task_rq_fair
0.72 ± 12% -0.4 0.34 ± 10% perf-profile.children.cycles-pp.mmap_region
0.49 ± 10% -0.3 0.14 ± 10% perf-profile.children.cycles-pp.rwsem_mark_wake
0.71 ± 11% -0.3 0.38 ± 8% perf-profile.children.cycles-pp._raw_spin_lock
1.45 ± 10% -0.3 1.11 ± 9% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
0.44 ± 15% -0.3 0.14 ± 15% perf-profile.children.cycles-pp.update_curr
0.38 ± 15% -0.3 0.08 ± 17% perf-profile.children.cycles-pp.load_new_mm_cr3
0.43 ± 17% -0.3 0.14 ± 11% perf-profile.children.cycles-pp.finish_task_switch
0.43 ± 19% -0.3 0.15 ± 10% perf-profile.children.cycles-pp.___perf_sw_event
0.41 ± 18% -0.3 0.14 ± 23% perf-profile.children.cycles-pp.set_next_entity
0.51 ± 13% -0.2 0.26 ± 8% perf-profile.children.cycles-pp.do_anonymous_page
1.12 ± 8% -0.2 0.87 ± 10% perf-profile.children.cycles-pp.tick_nohz_next_event
0.31 ± 14% -0.2 0.07 ± 30% perf-profile.children.cycles-pp.tick_nohz_idle_exit
0.82 ± 7% -0.2 0.59 ± 14% perf-profile.children.cycles-pp.get_next_timer_interrupt
0.40 ± 10% -0.2 0.18 ± 9% perf-profile.children.cycles-pp.vma_link
0.38 ± 11% -0.2 0.16 ± 5% perf-profile.children.cycles-pp.update_rq_clock
0.34 ± 13% -0.2 0.12 ± 6% perf-profile.children.cycles-pp.__update_load_avg_se
0.29 ± 13% -0.2 0.09 ± 9% perf-profile.children.cycles-pp.__switch_to
0.21 ± 5% -0.2 0.03 ±100% perf-profile.children.cycles-pp.__list_del_entry_valid
0.27 ± 13% -0.2 0.09 ± 14% perf-profile.children.cycles-pp.update_cfs_group
0.31 ± 9% -0.2 0.13 ± 5% perf-profile.children.cycles-pp.__update_load_avg_cfs_rq
0.25 ± 10% -0.2 0.07 ± 26% perf-profile.children.cycles-pp.update_cfs_rq_h_load
0.26 ± 19% -0.2 0.09 ± 17% perf-profile.children.cycles-pp.__unwind_start
0.54 ± 3% -0.2 0.37 ± 2% perf-profile.children.cycles-pp.native_irq_return_iret
0.22 ± 17% -0.2 0.06 ± 16% perf-profile.children.cycles-pp.__switch_to_asm
0.26 ± 15% -0.2 0.10 ± 10% perf-profile.children.cycles-pp.__perf_sw_event
0.34 ± 13% -0.2 0.18 ± 10% perf-profile.children.cycles-pp.down_read_trylock
0.31 ± 11% -0.1 0.17 ± 4% perf-profile.children.cycles-pp.up_read
0.25 ± 14% -0.1 0.11 ± 22% perf-profile.children.cycles-pp.nr_iowait_cpu
0.31 ± 9% -0.1 0.17 ± 14% perf-profile.children.cycles-pp.find_vma
0.26 ± 14% -0.1 0.12 ± 14% perf-profile.children.cycles-pp.update_ts_time_stats
0.46 ± 2% -0.1 0.32 ± 5% perf-profile.children.cycles-pp.native_write_msr
0.18 ± 24% -0.1 0.04 ± 58% perf-profile.children.cycles-pp.orc_find
0.20 ± 21% -0.1 0.07 ± 23% perf-profile.children.cycles-pp.__orc_find
0.34 ± 7% -0.1 0.21 ± 5% perf-profile.children.cycles-pp.sched_clock
0.33 ± 6% -0.1 0.20 ± 5% perf-profile.children.cycles-pp.native_sched_clock
0.30 ± 9% -0.1 0.17 ± 16% perf-profile.children.cycles-pp.down_read
0.19 ± 18% -0.1 0.06 ± 11% perf-profile.children.cycles-pp.ttwu_do_wakeup
0.34 ± 10% -0.1 0.22 ± 7% perf-profile.children.cycles-pp.sched_clock_cpu
0.51 ± 6% -0.1 0.40 ± 14% perf-profile.children.cycles-pp.__next_timer_interrupt
0.20 ± 11% -0.1 0.09 ± 9% perf-profile.children.cycles-pp.vma_interval_tree_insert
0.13 ± 13% -0.1 0.03 ±100% perf-profile.children.cycles-pp.pick_next_task_idle
0.27 ± 9% -0.1 0.17 ± 8% perf-profile.children.cycles-pp.__lock_text_start
0.15 ± 9% -0.1 0.06 ± 20% perf-profile.children.cycles-pp.stack_trace_consume_entry_nosched
0.18 ± 14% -0.1 0.10 ± 15% perf-profile.children.cycles-pp.vmacache_find
0.13 ± 28% -0.1 0.05 ± 63% perf-profile.children.cycles-pp.unwind_get_return_address
0.28 ± 15% -0.1 0.20 ± 10% perf-profile.children.cycles-pp.hrtimer_next_event_without
0.11 ± 33% -0.1 0.04 ±102% perf-profile.children.cycles-pp.__kernel_text_address
0.35 ± 6% -0.1 0.28 ± 9% perf-profile.children.cycles-pp.read_tsc
0.17 ± 13% -0.1 0.10 ± 18% perf-profile.children.cycles-pp.hrtimer_get_next_event
0.12 ± 18% -0.1 0.06 ± 16% perf-profile.children.cycles-pp.perf_event_mmap
0.09 ± 34% -0.1 0.03 ±102% perf-profile.children.cycles-pp.kernel_text_address
0.17 ± 16% -0.1 0.11 ± 9% perf-profile.children.cycles-pp.rcu_idle_exit
0.12 ± 10% -0.0 0.08 ± 13% perf-profile.children.cycles-pp.rcu_eqs_enter
0.10 ± 18% -0.0 0.06 ± 13% perf-profile.children.cycles-pp.rcu_eqs_exit
0.10 ± 18% -0.0 0.07 ± 15% perf-profile.children.cycles-pp.rcu_dynticks_eqs_exit
0.22 ± 5% +0.1 0.30 ± 15% perf-profile.children.cycles-pp.tick_irq_enter
0.13 ± 27% +0.1 0.22 ± 11% perf-profile.children.cycles-pp.rcu_sched_clock_irq
0.28 +0.1 0.38 ± 15% perf-profile.children.cycles-pp.irq_enter
0.71 ± 13% +0.2 0.95 ± 11% perf-profile.children.cycles-pp.update_process_times
0.78 ± 13% +0.2 1.03 ± 11% perf-profile.children.cycles-pp.tick_sched_handle
0.96 ± 12% +0.3 1.24 ± 10% perf-profile.children.cycles-pp.tick_sched_timer
1.59 ± 9% +0.4 1.97 ± 7% perf-profile.children.cycles-pp.__hrtimer_run_queues
1.57 ± 10% +0.7 2.31 ± 3% perf-profile.children.cycles-pp.rwsem_spin_on_owner
5.93 ± 2% +1.7 7.62 ± 12% perf-profile.children.cycles-pp.down_write_killable
5.86 ± 2% +1.7 7.59 ± 12% perf-profile.children.cycles-pp.rwsem_down_write_slowpath
19.30 ± 10% +16.0 35.35 ± 4% perf-profile.children.cycles-pp.page_fault
19.24 ± 10% +16.1 35.33 ± 4% perf-profile.children.cycles-pp.do_page_fault
19.09 ± 10% +16.2 35.27 ± 4% perf-profile.children.cycles-pp.__do_page_fault
15.78 ± 15% +17.7 33.52 ± 4% perf-profile.children.cycles-pp.rwsem_down_read_slowpath
12.57 ± 22% +20.8 33.40 ± 4% perf-profile.children.cycles-pp.osq_lock
15.96 ± 20% +23.3 39.22 ± 3% perf-profile.children.cycles-pp.rwsem_optimistic_spin
50.41 -7.7 42.72 ± 3% perf-profile.self.cycles-pp.intel_idle
1.90 ± 30% -1.4 0.50 ± 14% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
1.62 ± 14% -1.2 0.40 ± 8% perf-profile.self.cycles-pp.switch_mm_irqs_off
0.79 ± 15% -0.5 0.26 ± 12% perf-profile.self.cycles-pp.__schedule
0.78 ± 15% -0.5 0.27 ± 8% perf-profile.self.cycles-pp.unmapped_area_topdown
0.81 ± 11% -0.4 0.36 ± 7% perf-profile.self.cycles-pp.prepare_exit_to_usermode
0.85 ± 11% -0.4 0.42 ± 8% perf-profile.self.cycles-pp.rwsem_down_read_slowpath
0.52 ± 11% -0.4 0.14 ± 9% perf-profile.self.cycles-pp.__account_scheduler_latency
0.53 ± 13% -0.4 0.15 ± 14% perf-profile.self.cycles-pp.unwind_next_frame
0.61 ± 8% -0.3 0.27 ± 7% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.47 ± 10% -0.3 0.15 ± 10% perf-profile.self.cycles-pp.try_to_wake_up
0.66 ± 10% -0.3 0.34 ± 6% perf-profile.self.cycles-pp._raw_spin_lock
0.54 ± 15% -0.3 0.24 ± 14% perf-profile.self.cycles-pp.__handle_mm_fault
0.38 ± 15% -0.3 0.08 ± 17% perf-profile.self.cycles-pp.load_new_mm_cr3
0.40 ± 16% -0.3 0.13 ± 12% perf-profile.self.cycles-pp.finish_task_switch
0.37 ± 21% -0.2 0.13 ± 10% perf-profile.self.cycles-pp.___perf_sw_event
0.40 ± 13% -0.2 0.17 ± 5% perf-profile.self.cycles-pp.update_load_avg
1.16 ± 8% -0.2 0.94 ± 7% perf-profile.self.cycles-pp.menu_select
0.43 ± 11% -0.2 0.22 ± 8% perf-profile.self.cycles-pp.do_idle
0.28 ± 14% -0.2 0.08 ± 11% perf-profile.self.cycles-pp.enqueue_entity
0.32 ± 13% -0.2 0.12 ± 5% perf-profile.self.cycles-pp.__update_load_avg_se
0.28 ± 12% -0.2 0.09 ± 9% perf-profile.self.cycles-pp.__switch_to
0.28 ± 9% -0.2 0.09 ± 20% perf-profile.self.cycles-pp.select_task_rq_fair
0.24 ± 14% -0.2 0.06 ± 60% perf-profile.self.cycles-pp.enqueue_task_fair
0.21 ± 5% -0.2 0.03 ±100% perf-profile.self.cycles-pp.__list_del_entry_valid
0.41 ± 7% -0.2 0.23 ± 4% perf-profile.self.cycles-pp._raw_spin_lock_irq
0.27 ± 12% -0.2 0.09 ± 14% perf-profile.self.cycles-pp.update_cfs_group
0.25 ± 10% -0.2 0.07 ± 26% perf-profile.self.cycles-pp.update_cfs_rq_h_load
0.30 ± 9% -0.2 0.12 ± 6% perf-profile.self.cycles-pp.__update_load_avg_cfs_rq
0.54 ± 3% -0.2 0.37 ± 2% perf-profile.self.cycles-pp.native_irq_return_iret
0.25 ± 10% -0.2 0.08 ± 16% perf-profile.self.cycles-pp.update_curr
0.29 ± 11% -0.2 0.13 ± 6% perf-profile.self.cycles-pp.update_rq_clock
0.24 ± 18% -0.2 0.08 ± 22% perf-profile.self.cycles-pp.set_next_entity
0.28 ± 10% -0.2 0.12 ± 16% perf-profile.self.cycles-pp.swapgs_restore_regs_and_return_to_usermode
0.21 ± 5% -0.2 0.05 perf-profile.self.cycles-pp.stack_trace_save_tsk
0.22 ± 17% -0.2 0.06 ± 16% perf-profile.self.cycles-pp.__switch_to_asm
0.23 ± 19% -0.2 0.07 ± 19% perf-profile.self.cycles-pp.pick_next_task_fair
0.33 ± 13% -0.2 0.18 ± 10% perf-profile.self.cycles-pp.down_read_trylock
0.31 ± 11% -0.1 0.17 ± 3% perf-profile.self.cycles-pp.up_read
0.20 ± 11% -0.1 0.06 ± 14% perf-profile.self.cycles-pp.dequeue_entity
0.24 ± 15% -0.1 0.11 ± 19% perf-profile.self.cycles-pp.nr_iowait_cpu
0.46 ± 2% -0.1 0.32 ± 5% perf-profile.self.cycles-pp.native_write_msr
0.20 ± 20% -0.1 0.07 ± 23% perf-profile.self.cycles-pp.__orc_find
0.18 ± 23% -0.1 0.04 ± 58% perf-profile.self.cycles-pp.orc_find
0.31 ± 6% -0.1 0.18 ± 6% perf-profile.self.cycles-pp.native_sched_clock
0.27 ± 10% -0.1 0.15 ± 18% perf-profile.self.cycles-pp.down_read
0.20 ± 11% -0.1 0.08 ± 5% perf-profile.self.cycles-pp.vma_interval_tree_insert
0.14 ± 5% -0.1 0.04 ± 58% perf-profile.self.cycles-pp.rwsem_mark_wake
0.17 ± 8% -0.1 0.08 ± 8% perf-profile.self.cycles-pp.rwsem_down_write_slowpath
0.18 ± 14% -0.1 0.10 ± 15% perf-profile.self.cycles-pp.vmacache_find
0.15 ± 27% -0.1 0.08 ± 20% perf-profile.self.cycles-pp.__do_page_fault
0.34 ± 5% -0.1 0.27 ± 10% perf-profile.self.cycles-pp.read_tsc
0.22 ± 8% -0.1 0.15 ± 11% perf-profile.self.cycles-pp.__lock_text_start
0.28 ± 7% -0.1 0.21 ± 17% perf-profile.self.cycles-pp.find_next_bit
0.12 ± 6% -0.1 0.07 ± 10% perf-profile.self.cycles-pp.find_vma
0.09 ± 17% -0.1 0.04 ± 60% perf-profile.self.cycles-pp.handle_mm_fault
0.10 ± 15% -0.0 0.05 ± 8% perf-profile.self.cycles-pp.do_syscall_64
0.23 ± 7% -0.0 0.19 ± 11% perf-profile.self.cycles-pp.__next_timer_interrupt
0.07 ± 11% -0.0 0.03 ±102% perf-profile.self.cycles-pp.get_next_timer_interrupt
0.12 ± 12% -0.0 0.08 ± 8% perf-profile.self.cycles-pp.rcu_eqs_enter
0.10 ± 18% -0.0 0.07 ± 10% perf-profile.self.cycles-pp.rcu_dynticks_eqs_exit
0.12 ± 37% +0.0 0.15 ± 35% perf-profile.self.cycles-pp.update_blocked_averages
0.11 ± 26% +0.1 0.18 ± 16% perf-profile.self.cycles-pp.rcu_sched_clock_irq
1.13 ± 4% +0.2 1.36 ± 3% perf-profile.self.cycles-pp.rwsem_spin_on_owner
2.04 ± 23% +2.2 4.23 ± 8% perf-profile.self.cycles-pp.rwsem_optimistic_spin
12.51 ± 22% +20.7 33.24 ± 4% perf-profile.self.cycles-pp.osq_lock
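The comparison rows above all follow one fixed layout: a base value with an optional ±stddev, a change (percent for most metrics, an absolute delta for perf-profile cycles), a new value with an optional ±stddev, and the metric name. A small helper can turn such rows into records for further analysis. This is only a sketch based on the layout visible in this report, not an official lkp-tests parser, and the dictionary field names are chosen here for illustration.

```python
import re

# One comparison row: base [± sd%]  change  new [± sd%]  metric
ROW = re.compile(
    r"^\s*([0-9.e+]+)(?:\s*±\s*(\d+)%)?"  # base value, optional stddev
    r"\s+([+-][0-9.]+%?)"                 # change (percent or absolute)
    r"\s+([0-9.e+]+)(?:\s*±\s*(\d+)%)?"   # new value, optional stddev
    r"\s+(\S+)\s*$"                       # metric name
)

def parse_row(line):
    """Parse one LKP comparison row; return None for non-data lines."""
    m = ROW.match(line)
    if not m:
        return None
    base, base_sd, change, new, new_sd, metric = m.groups()
    return {
        "metric": metric,
        "base": float(base),
        "new": float(new),
        "base_stddev_pct": int(base_sd) if base_sd else None,
        "new_stddev_pct": int(new_sd) if new_sd else None,
        "change": change,
    }
```

For example, feeding it the `interrupts.RES` summary row yields the base and new counts plus the `-40.4%` change string, while ASCII-plot lines fall through and return `None`.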
vm-scalability.median
12000 +-+-----------------------------------------------------------------+
|.+ :: +..+ :: +.+ :: + |
11000 +-++ : : +.. .+ : : : : : : : : : : + |
10000 +-+ + : : : + : .+ : : : : : : : : : : .. :|
| + : : : .+.+.. .+. + : + : : + : : + :|
9000 +-+ :: +. + + :: :: |
| + + + |
8000 +-+ |
| |
7000 +-+ O O |
6000 +-+ O O |
O O O O O O O O O O O |
5000 +-+ O |
| O |
4000 +-+-----------------------------------------------------------------+
vm-scalability.time.user_time
220 +-+-------------------------------------------------------------------+
| |
200 +-+ + + + +.. + |
180 +-+ :: : + :: : :: + |
| + : : .+ : +.. : : : + : : : : + |
160 +-+ + : : .+. : : : : : + : : : : + :|
| + : + : .+.. .+. : + : : + : : :+ :|
140 +-+ :.. : .+ +.+. + : :: + |
| + +. + + |
120 +-+ |
100 +-+ |
| O O |
80 +-+ O O O O |
O O O O O O O O |
60 +-+O-O----------------------O-----------------------------------------+
vm-scalability.time.voluntary_context_switches
4e+07 +-+---------------------------------------------------------------+
|.+ +..+ +..+ + + |
3.5e+07 +-+: + + : : + : : + :: : :|
| : : : .. : + : : : + : : : + : : : :|
3e+07 +-+ : : : .+.+ : .. + .+..+. : :: + : :: + : :: |
| + +. +.+ + + + + + + + |
2.5e+07 +-+ |
| |
2e+07 +-+ |
| |
1.5e+07 +-+ |
| |
1e+07 +-+ O O O O |
O O O O O O O O O O O O O |
5e+06 +-+---------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[sched] b7d48fb89a: WARNING:at_kernel/sched/sched.h:#pick_next_task_fair
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: b7d48fb89aa672ee3acaac2864a3bfda81606ef1 ("sched: Add task_struct pointer to sched_class::set_curr_task")
https://kernel.googlesource.com/pub/scm/linux/kernel/git/peterz/queue.git sched/core
in testcase: rcutorture
with following parameters:
runtime: 300s
test: cpuhotplug
torture_type: srcud
test-description: rcutorture is a kernel-module load/unload test for the RCU torture suite.
test-url: https://www.kernel.org/doc/Documentation/RCU/torture.txt
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 4G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+------------------------------------------------------+------------+------------+
| | 0f922d0d29 | b7d48fb89a |
+------------------------------------------------------+------------+------------+
| boot_successes | 0 | 0 |
| boot_failures | 8 | 8 |
| BUG:kernel_reboot-without-warning_in_test_stage | 8 | |
| WARNING:at_kernel/sched/sched.h:#pick_next_task_fair | 0 | 8 |
| RIP:pick_next_task_fair | 0 | 8 |
| WARNING:at_kernel/sched/sched.h:#sched_cpu_dying | 0 | 8 |
| RIP:sched_cpu_dying | 0 | 8 |
+------------------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <rong.a.chen@intel.com>
[ 38.967800] WARNING: CPU: 1 PID: 15 at kernel/sched/sched.h:1754 pick_next_task_fair+0x758/0x780
[ 38.971162] Modules linked in: rcutorture torture intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul crc32c_intel bochs_drm ghash_clmulni_intel drm_vram_helper ppdev ttm snd_pcm snd_timer snd drm_kms_helper soundcore pcspkr drm joydev serio_raw parport_pc floppy parport qemu_fw_cfg ata_generic i2c_piix4 pata_acpi
[ 38.978871] CPU: 1 PID: 15 Comm: migration/1 Not tainted 5.3.0-rc1-00087-gb7d48fb89aa672 #1
[ 38.980246] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 38.981631] RIP: 0010:pick_next_task_fair+0x758/0x780
[ 38.982468] Code: 48 0f af 85 18 01 00 00 48 83 c1 01 48 f7 f1 48 89 83 10 0a 00 00 e9 66 fa ff ff bf 02 00 00 00 e8 8d 3a ff ff e9 04 fa ff ff <0f> 0b e9 19 fb ff ff 80 3d 65 5e 61 01 00 0f 85 30 fc ff ff e8 bf
[ 38.985513] RSP: 0000:ffffb98c8008bcf8 EFLAGS: 00010002
[ 38.986376] RAX: ffffffffa2011e00 RBX: ffff8b7ffbb2b3c0 RCX: ffffffffa264d540
[ 38.987544] RDX: ffffb98c8008bd80 RSI: 0000000000000001 RDI: ffff8b7ffbb2b3c0
[ 38.988728] RBP: ffffb98c8008bdc0 R08: 0000000912a90774 R09: 0000000000000004
[ 38.989913] R10: ffffb98c8008bd38 R11: 0000000000000003 R12: ffff8b7ffbb2b3c0
[ 38.991079] R13: ffffffffa264d540 R14: ffffb98c8008bd80 R15: 0000000000000000
[ 38.992251] FS: 0000000000000000(0000) GS:ffff8b7ffbb00000(0000) knlGS:0000000000000000
[ 38.993579] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 38.994522] CR2: 000000005788d734 CR3: 000000007e104000 CR4: 00000000000406e0
[ 38.995689] Call Trace:
[ 38.996124] ? sched_cpu_starting+0xf0/0xf0
[ 38.996823] sched_cpu_dying+0x2a9/0x430
[ 38.997484] ? sched_cpu_starting+0xf0/0xf0
[ 38.998231] cpuhp_invoke_callback+0x86/0x5d0
[ 38.998957] ? cpu_disable_common+0x217/0x230
[ 38.999684] take_cpu_down+0x60/0xb0
[ 39.000283] multi_cpu_stop+0x6b/0x100
[ 39.000909] ? stop_machine_yield+0x10/0x10
[ 39.001612] cpu_stopper_thread+0x94/0x100
[ 39.002293] ? smpboot_thread_fn+0x2f/0x1e0
[ 39.002988] ? smpboot_thread_fn+0x74/0x1e0
[ 39.003683] ? smpboot_thread_fn+0x14e/0x1e0
[ 39.004393] smpboot_thread_fn+0x149/0x1e0
[ 39.005073] ? sort_range+0x20/0x20
[ 39.005662] kthread+0x11e/0x140
[ 39.006203] ? kthread_park+0xa0/0xa0
[ 39.006814] ret_from_fork+0x35/0x40
[ 39.007411] ---[ end trace baf171d5e73cc4a2 ]---
To reproduce:
# build kernel
cd linux
cp config-5.3.0-rc1-00087-gb7d48fb89aa672 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen
[sched/fair] ac1f97be8d: WARNING:at_kernel/sched/sched.h:#dl_server_init
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: ac1f97be8df98718a3d545cb6bb6ee4159e4300c ("sched/fair: Add trivial fair server")
https://kernel.googlesource.com/pub/scm/linux/kernel/git/peterz/queue.git sched/wip-deadline
in testcase: boot
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 4G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+---------------------------------------------------------+------------+------------+
| | f4fa60e1b1 | ac1f97be8d |
+---------------------------------------------------------+------------+------------+
| boot_successes | 5 | 0 |
| boot_failures | 6 | 14 |
| BUG:kernel_reboot-without-warning_in_test_stage | 6 | |
| WARNING:at_kernel/sched/sched.h:#dl_server_init | 0 | 14 |
| RIP:dl_server_init | 0 | 14 |
| WARNING:at_kernel/sched/deadline.c:#task_non_contending | 0 | 3 |
| RIP:task_non_contending | 0 | 3 |
| kernel_BUG_at_kernel/sched/deadline.c | 0 | 3 |
| invalid_opcode:#[##] | 0 | 3 |
| RIP:enqueue_dl_entity | 0 | 3 |
| RIP:default_idle | 0 | 2 |
| Kernel_panic-not_syncing:Fatal_exception_in_interrupt | 0 | 2 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 1 |
+---------------------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <lkp(a)intel.com>
[ 3.215886] WARNING: CPU: 0 PID: 0 at kernel/sched/sched.h:1108 dl_server_init+0x68/0xca
[ 3.237973] Modules linked in:
[ 3.242573] CPU: 0 PID: 0 Comm: swapper Not tainted 5.3.0-rc1-00388-gac1f97be8df987 #4
[ 3.254600] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 3.267232] RIP: 0010:dl_server_init+0x68/0xca
[ 3.274057] Code: 0b 83 bd 20 0a 00 00 01 4c 8b 63 48 77 1e 80 3d 3b 6e 6b 01 00 75 15 48 c7 c7 93 24 34 82 c6 05 2b 6e 6b 01 01 e8 d1 38 fc ff <0f> 0b 4c 39 a5 28 0a 00 00 79 0e 48 c7 c7 45 d6 32 82 e8 32 5f 01
[ 3.302613] RSP: 0000:ffffffff82603eb8 EFLAGS: 00010082
[ 3.310703] RAX: 0000000000000000 RBX: ffff88813fc2a2e8 RCX: 0000000000000000
[ 3.321660] RDX: 0000000000000026 RSI: ffffffff82c4afa6 RDI: ffffffff82c4b3a6
[ 3.332536] RBP: ffff88813fc29a00 R08: ffffffff82c4af80 R09: 0000000000000026
[ 3.343234] R10: 0000000000000000 R11: ffffffff82603ccf R12: 0000000000000000
[ 3.354133] R13: 0000000000029a00 R14: ffff88813fc29bc0 R15: ffff88813fc29a40
[ 3.365277] FS: 0000000000000000(0000) GS:ffff88813fc00000(0000) knlGS:0000000000000000
[ 3.377714] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 3.386474] CR2: ffff88813ffff000 CR3: 0000000002612000 CR4: 00000000000006b0
[ 3.397423] Call Trace:
[ 3.401223] sched_init+0x37e/0x3fe
[ 3.406629] start_kernel+0x267/0x50a
[ 3.412198] ? x86_family+0x5/0x1d
[ 3.417442] secondary_startup_64+0xb6/0xc0
[ 3.423800] random: get_random_bytes called from init_oops_id+0x22/0x31 with crng_init=0
[ 3.436167] ---[ end trace b2352b5ced6eae1c ]---
To reproduce:
# build kernel
cd linux
cp config-5.3.0-rc1-00388-gac1f97be8df987 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
lkp
[compat_ioctl] 97ca70e16d: mdadm-selftests.enchmarks/mdadm-selftests/tests/06name.fail
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 97ca70e16dc76bad69ad774cf0bd0286c97d63b4 ("compat_ioctl: scsi: move ioctl handling into drivers")
https://kernel.googlesource.com/pub/scm/linux/kernel/git/arnd/playground.git to-build
in testcase: mdadm-selftests
with the following parameters:
disk: 1HDD
test_prefix: 06
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 4G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
Welcome to fdisk (util-linux 2.29.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x8b600b10.
Command (m for help): Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p): Partition number (1-4, default 1): First sector (2048-536870911, default 2048): Last sector, +sectors or +size{K,M,G,T,P} (2048-536870911, default 536870911):
Created a new partition 1 of type 'Linux' and of size 5 GiB.
Command (m for help): The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
2019-07-28 07:22:06 mkdir -p /var/tmp
2019-07-28 07:22:06 mke2fs -t ext3 -b 1024 -J size=1 -q /dev/vda1
2019-07-28 07:22:08 mount -t ext3 /dev/vda1 /var/tmp
sed -e 's/{DEFAULT_METADATA}/1.2/g' \
-e 's,{MAP_PATH},/run/mdadm/map,g' mdadm.8.in > mdadm.8
/usr/bin/install -D -m 644 mdadm.8 /usr/share/man/man8/mdadm.8
/usr/bin/install -D -m 644 mdmon.8 /usr/share/man/man8/mdmon.8
/usr/bin/install -D -m 644 md.4 /usr/share/man/man4/md.4
/usr/bin/install -D -m 644 mdadm.conf.5 /usr/share/man/man5/mdadm.conf.5
/usr/bin/install -D -m 644 udev-md-raid-creating.rules /lib/udev/rules.d/01-md-raid-creating.rules
/usr/bin/install -D -m 644 udev-md-raid-arrays.rules /lib/udev/rules.d/63-md-raid-arrays.rules
/usr/bin/install -D -m 644 udev-md-raid-assembly.rules /lib/udev/rules.d/64-md-raid-assembly.rules
/usr/bin/install -D -m 644 udev-md-clustered-confirm-device.rules /lib/udev/rules.d/69-md-clustered-confirm-device.rules
/usr/bin/install -D -m 755 mdadm /sbin/mdadm
/usr/bin/install -D -m 755 mdmon /sbin/mdmon
Testing on linux-5.3.0-rc1-g97ca70e16dc76b kernel
/lkp/benchmarks/mdadm-selftests/tests/06name... FAILED - see /var/tmp/06name.log and /var/tmp/fail06name.log for details
06name TIMEOUT
Testing on linux-5.3.0-rc1-g97ca70e16dc76b kernel
/lkp/benchmarks/mdadm-selftests/tests/06sysfs... succeeded
06sysfs TIMEOUT
Testing on linux-5.3.0-rc1-g97ca70e16dc76b kernel
/lkp/benchmarks/mdadm-selftests/tests/06wrmostly... FAILED - see /var/tmp/06wrmostly.log and /var/tmp/fail06wrmostly.log for details
06wrmostly TIMEOUT
To reproduce:
# build kernel
cd linux
cp config-5.3.0-rc1-g97ca70e16dc76b .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage modules
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 INSTALL_MOD_PATH=<mod-install-dir> modules_install
cd <mod-install-dir>
find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> -m modules.cgz job-script # job-script is attached in this email
Thanks,
Rong Chen
[mm] 6471384af2: kernel_BUG_at_mm/slub.c
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 6471384af2a6530696fc0203bafe4de41a23c9ef ("mm: security: introduce init_on_alloc=1 and init_on_free=1 boot options")
https://kernel.googlesource.com/pub/scm/linux/kernel/git/torvalds/linux.git master
in testcase: trinity
with the following parameters:
runtime: 300s
test-description: Trinity is a linux system call fuzz tester.
test-url: http://codemonkey.org.uk/projects/trinity/
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+------------------------------------------+------------+------------+
| | ba5c5e4a5d | 6471384af2 |
+------------------------------------------+------------+------------+
| boot_successes | 8 | 0 |
| boot_failures | 2 | 15 |
| invoked_oom-killer:gfp_mask=0x | 1 | |
| Mem-Info | 1 | |
| kernel_BUG_at_security/keys/keyring.c | 1 | |
| invalid_opcode:#[##] | 1 | 15 |
| RIP:__key_link_begin | 1 | |
| Kernel_panic-not_syncing:Fatal_exception | 1 | 15 |
| kernel_BUG_at_mm/slub.c | 0 | 15 |
| RIP:kfree | 0 | 15 |
+------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
[ 4.478342] kernel BUG at mm/slub.c:306!
[ 4.482437] invalid opcode: 0000 [#1] PREEMPT PTI
[ 4.485750] CPU: 0 PID: 0 Comm: swapper Not tainted 5.2.0-05754-g6471384a #4
[ 4.490635] RIP: 0010:kfree+0x58a/0x5c0
[ 4.493679] Code: 48 83 05 78 37 51 02 01 0f 0b 48 83 05 7e 37 51 02 01 48 83 05 7e 37 51 02 01 48 83 05 7e 37 51 02 01 48 83 05 d6 37 51 02 01 <0f> 0b 48 83 05 d4 37 51 02 01 48 83 05 d4 37 51 02 01 48 83 05 d4
[ 4.506827] RSP: 0000:ffffffff82603d90 EFLAGS: 00010002
[ 4.510475] RAX: ffff8c3976c04320 RBX: ffff8c3976c04300 RCX: 0000000000000000
[ 4.515420] RDX: ffff8c3976c04300 RSI: 0000000000000000 RDI: ffff8c3976c04320
[ 4.520331] RBP: ffffffff82603db8 R08: 0000000000000000 R09: 0000000000000000
[ 4.525288] R10: ffff8c3976c04320 R11: ffffffff8289e1e0 R12: ffffd52cc8db0100
[ 4.530180] R13: ffff8c3976c01a00 R14: ffffffff810f10d4 R15: ffff8c3976c04300
[ 4.535079] FS: 0000000000000000(0000) GS:ffffffff8266b000(0000) knlGS:0000000000000000
[ 4.540628] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 4.544593] CR2: ffff8c397ffff000 CR3: 0000000125020000 CR4: 00000000000406b0
[ 4.549558] Call Trace:
[ 4.551266] apply_wqattrs_prepare+0x154/0x280
[ 4.554357] apply_workqueue_attrs_locked+0x4e/0xe0
[ 4.557728] apply_workqueue_attrs+0x36/0x60
[ 4.560654] alloc_workqueue+0x25a/0x6d0
[ 4.563381] ? kmem_cache_alloc_trace+0x1e3/0x500
[ 4.566628] ? __mutex_unlock_slowpath+0x44/0x3f0
[ 4.569875] workqueue_init_early+0x246/0x348
[ 4.573025] start_kernel+0x3c7/0x7ec
[ 4.575558] x86_64_start_reservations+0x40/0x49
[ 4.578738] x86_64_start_kernel+0xda/0xe4
[ 4.581600] secondary_startup_64+0xb6/0xc0
[ 4.584473] Modules linked in:
[ 4.586620] ---[ end trace f67eb9af4d8d492b ]---
To reproduce:
# build kernel
cd linux
cp config-5.2.0-05754-g6471384a .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen
Re: [LKP] [drm/mgag200] 90f479ae51: vm-scalability.median -18.8% regression
by Rong Chen
Hi,
On 7/31/19 6:21 PM, Michel Dänzer wrote:
> On 2019-07-31 11:25 a.m., Huang, Ying wrote:
>> Hi, Daniel,
>>
>> Daniel Vetter <daniel(a)ffwll.ch> writes:
>>
>>> On Tue, Jul 30, 2019 at 10:27 PM Dave Airlie <airlied(a)gmail.com> wrote:
>>>> On Wed, 31 Jul 2019 at 05:00, Daniel Vetter <daniel(a)ffwll.ch> wrote:
>>>>> On Tue, Jul 30, 2019 at 8:50 PM Thomas Zimmermann <tzimmermann(a)suse.de> wrote:
>>>>>> Hi
>>>>>>
>>>>>> Am 30.07.19 um 20:12 schrieb Daniel Vetter:
>>>>>>> On Tue, Jul 30, 2019 at 7:50 PM Thomas Zimmermann <tzimmermann(a)suse.de> wrote:
>>>>>>>> Am 29.07.19 um 11:51 schrieb kernel test robot:
>>>>>>>>> Greeting,
>>>>>>>>>
>>>>>>>>> FYI, we noticed a -18.8% regression of vm-scalability.median due to commit:
>>>>>>>>>
>>>>>>>>> commit: 90f479ae51afa45efab97afdde9b94b9660dd3e4 ("drm/mgag200: Replace struct mga_fbdev with generic framebuffer emulation")
>>>>>>>>> https://kernel.googlesource.com/pub/scm/linux/kernel/git/next/linux-next.git master
>>>>>>>> Daniel, Noralf, we may have to revert this patch.
>>>>>>>>
>>>>>>>> I expected some change in display performance, but not in VM. Since it's
>>>>>>>> a server chipset, probably no one cares much about display performance.
>>>>>>>> So that seemed like a good trade-off for re-using shared code.
>>>>>>>>
>>>>>>>> Part of the patch set is that the generic fb emulation now maps and
>>>>>>>> unmaps the fbdev BO when updating the screen. I guess that's the cause
>>>>>>>> of the performance regression. And it should be visible with other
>>>>>>>> drivers as well if they use a shadow FB for fbdev emulation.
>>>>>>> For fbcon we shouldn't need to do any maps/unmaps at all; this is for the
>>>>>>> fbdev mmap support only. If the testcase mentioned here tests fbdev
>>>>>>> mmap handling it's pretty badly misnamed :-) And as long as you don't
>>>>>>> have an fbdev mmap there shouldn't be any impact at all.
>>>>>> The ast and mgag200 have only a few MiB of VRAM, so we have to get the
>>>>>> fbdev BO out if it's not being displayed. If not being mapped, it can be
>>>>>> evicted and make room for X, etc.
>>>>>>
>>>>>> To make this work, the BO's memory is mapped and unmapped in
>>>>>> drm_fb_helper_dirty_work() before being updated from the shadow FB. [1]
>>>>>> That fbdev mapping is established on each screen update, more or less.
>>>>>> From my (yet unverified) understanding, this causes the performance
>>>>>> regression in the VM code.
>>>>>>
>>>>>> The original code in mgag200 used to kmap the fbdev BO while it's being
>>>>>> displayed; [2] and the drawing code only mapped it when necessary (i.e.,
>>>>>> not being displayed). [3]
>>>>> Hm yeah, this vmap/vunmap is going to be pretty bad. We indeed should
>>>>> cache this.
>>>>>
>>>>>> I think this could be added for VRAM helpers as well, but it's still a
>>>>>> workaround and non-VRAM drivers might also run into such a performance
>>>>>> regression if they use the fbdev's shadow fb.
>>>>> Yeah agreed, fbdev emulation should try to cache the vmap.
>>>>>
>>>>>> Noralf mentioned that there are plans for other DRM clients besides the
>>>>>> console. They would as well run into similar problems.
>>>>>>
>>>>>>>> The thing is that we'd need another generic fbdev emulation for ast and
>>>>>>>> mgag200 that handles this issue properly.
>>>>>>> Yeah I don't think we want to jump the gun here. If you can try to
>>>>>>> repro locally and profile where we're wasting cpu time I hope that
>>>>>>> should shed some light on what's going wrong here.
>>>>>> I don't have much time ATM and I'm not even officially at work until
>>>>>> late Aug. I'd send you the revert and investigate later. I agree that
>>>>>> using generic fbdev emulation would be preferable.
>>>>> Still not sure that's the right thing to do really. Yes it's a
>>>>> regression, but vm testcases shouldn't run a single line of fbcon or drm
>>>>> code. So why this is impacted so heavily by a silly drm change is very
>>>>> confusing to me. We might be papering over a deeper and much more
>>>>> serious issue ...
>>>> It's a regression, the right thing is to revert first and then work
>>>> out the right thing to do.
>>> Sure, but I have no idea whether the testcase is doing something
>>> reasonable. If it's accidentally testing vm scalability of fbdev and
>>> there's no one else doing something this pointless, then it's not a
>>> real bug. Plus I think we're shooting the messenger here.
>>>
>>>> It's likely the test runs on the console and printfs stuff out while running.
>>> But why did we not regress the world if a few prints on the console
>>> have such a huge impact? We didn't get an entire stream of mails about
>>> breaking stuff ...
>> The regression seems unrelated to the commit, but we have retested
>> and confirmed it. It's hard to understand what happens.
> Does the regressed test cause any output on console while it's
> measuring? If so, it's probably accidentally measuring fbcon/DRM code in
> addition to the workload it's trying to measure.
>
Sorry, I'm not familiar with DRM. We enabled the console to output logs;
please find the log file attached.
"Command line: ... console=tty0 earlyprintk=ttyS0,115200
console=ttyS0,115200 vga=normal rw"
Best Regards,
Rong Chen
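The fix Daniel and Thomas converge on in the thread above — keep the fbdev BO's mapping alive across dirty-worker updates instead of vmap()/vunmap() on every screen refresh — can be sketched with a userspace toy model. All names here are hypothetical, and malloc() stands in for vmap(); this is not the actual DRM helper API:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for a framebuffer BO. */
struct fake_bo {
	size_t size;
	void *vaddr;   /* cached mapping, NULL when unmapped */
	int map_count; /* how many (v)map operations actually happened */
};

/* Map once, then reuse the cached mapping on every later dirty update. */
static void *bo_vmap_cached(struct fake_bo *bo)
{
	if (!bo->vaddr) {
		bo->vaddr = malloc(bo->size); /* stands in for vmap() */
		bo->map_count++;
	}
	return bo->vaddr;
}

/* Drop the cached mapping only when the BO must be evicted from VRAM. */
static void bo_vunmap_cached(struct fake_bo *bo)
{
	free(bo->vaddr);
	bo->vaddr = NULL;
}

/* One dirty-worker pass: copy the shadow FB into the (cached) mapping.
 * Note there is no vunmap here; the mapping survives across updates. */
static void dirty_update(struct fake_bo *bo, const void *shadow)
{
	void *dst = bo_vmap_cached(bo);

	memcpy(dst, shadow, bo->size);
}
```

With this shape, a thousand screen updates cost one map instead of a thousand map/unmap pairs, which is the overhead the vm-scalability numbers appear to be picking up.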
Re: [LKP] [drm/mgag200] 90f479ae51: vm-scalability.median -18.8% regression
by Thomas Zimmermann
Hi
Am 01.08.19 um 15:30 schrieb Michel Dänzer:
> On 2019-08-01 8:19 a.m., Rong Chen wrote:
>> Hi,
>>
>> On 7/31/19 6:21 PM, Michel Dänzer wrote:
>>> On 2019-07-31 11:25 a.m., Huang, Ying wrote:
>>>> Hi, Daniel,
>>>>
>>>> Daniel Vetter <daniel(a)ffwll.ch> writes:
>>>>
>>>>> On Tue, Jul 30, 2019 at 10:27 PM Dave Airlie <airlied(a)gmail.com> wrote:
>>>>>> On Wed, 31 Jul 2019 at 05:00, Daniel Vetter <daniel(a)ffwll.ch> wrote:
>>>>>>> On Tue, Jul 30, 2019 at 8:50 PM Thomas Zimmermann
>>>>>>> <tzimmermann(a)suse.de> wrote:
>>>>>>>> Hi
>>>>>>>>
>>>>>>>> Am 30.07.19 um 20:12 schrieb Daniel Vetter:
>>>>>>>>> On Tue, Jul 30, 2019 at 7:50 PM Thomas Zimmermann
>>>>>>>>> <tzimmermann(a)suse.de> wrote:
>>>>>>>>>> Am 29.07.19 um 11:51 schrieb kernel test robot:
>>>>>>>>>>> Greeting,
>>>>>>>>>>>
>>>>>>>>>>> FYI, we noticed a -18.8% regression of vm-scalability.median
>>>>>>>>>>> due to commit:
>>>>>>>>>>>
>>>>>>>>>>> commit: 90f479ae51afa45efab97afdde9b94b9660dd3e4
>>>>>>>>>>> ("drm/mgag200: Replace struct mga_fbdev with generic
>>>>>>>>>>> framebuffer emulation")
>>>>>>>>>>> https://kernel.googlesource.com/pub/scm/linux/kernel/git/next/linux-next.git
>>>>>>>>>>> master
>>>>>>>>>> Daniel, Noralf, we may have to revert this patch.
>>>>>>>>>>
>>>>>>>>>> I expected some change in display performance, but not in VM.
>>>>>>>>>> Since it's
>>>>>>>>>> a server chipset, probably no one cares much about display
>>>>>>>>>> performance.
>>>>>>>>>> So that seemed like a good trade-off for re-using shared code.
>>>>>>>>>>
>>>>>>>>>> Part of the patch set is that the generic fb emulation now maps
>>>>>>>>>> and
>>>>>>>>>> unmaps the fbdev BO when updating the screen. I guess that's
>>>>>>>>>> the cause
>>>>>>>>>> of the performance regression. And it should be visible with other
>>>>>>>>>> drivers as well if they use a shadow FB for fbdev emulation.
>>>>>>>>> For fbcon we shouldn't need to do any maps/unmaps at all; this is
>>>>>>>>> for the
>>>>>>>>> fbdev mmap support only. If the testcase mentioned here tests fbdev
>>>>>>>>> mmap handling it's pretty badly misnamed :-) And as long as you
>>>>>>>>> don't
>>>>>>>>> have an fbdev mmap there shouldn't be any impact at all.
>>>>>>>> The ast and mgag200 have only a few MiB of VRAM, so we have to
>>>>>>>> get the
>>>>>>>> fbdev BO out if it's not being displayed. If not being mapped, it
>>>>>>>> can be
>>>>>>>> evicted and make room for X, etc.
>>>>>>>>
>>>>>>>> To make this work, the BO's memory is mapped and unmapped in
>>>>>>>> drm_fb_helper_dirty_work() before being updated from the shadow
>>>>>>>> FB. [1]
>>>>>>>> That fbdev mapping is established on each screen update, more or
>>>>>>>> less.
>>>>>>>> From my (yet unverified) understanding, this causes the performance
>>>>>>>> regression in the VM code.
>>>>>>>>
>>>>>>>> The original code in mgag200 used to kmap the fbdev BO while it's
>>>>>>>> being
>>>>>>>> displayed; [2] and the drawing code only mapped it when necessary
>>>>>>>> (i.e.,
>>>>>>>> not being displayed). [3]
>>>>>>> Hm yeah, this vmap/vunmap is going to be pretty bad. We indeed should
>>>>>>> cache this.
>>>>>>>
>>>>>>>> I think this could be added for VRAM helpers as well, but it's
>>>>>>>> still a
>>>>>>>> workaround and non-VRAM drivers might also run into such a
>>>>>>>> performance
>>>>>>>> regression if they use the fbdev's shadow fb.
>>>>>>> Yeah agreed, fbdev emulation should try to cache the vmap.
>>>>>>>
>>>>>>>> Noralf mentioned that there are plans for other DRM clients
>>>>>>>> besides the
>>>>>>>> console. They would as well run into similar problems.
>>>>>>>>
>>>>>>>>>> The thing is that we'd need another generic fbdev emulation for
>>>>>>>>>> ast and
>>>>>>>>>> mgag200 that handles this issue properly.
>>>>>>>>> Yeah I don't think we want to jump the gun here. If you can try to
>>>>>>>>> repro locally and profile where we're wasting cpu time I hope that
>>>>>>>>> should shed some light on what's going wrong here.
>>>>>>>> I don't have much time ATM and I'm not even officially at work until
>>>>>>>> late Aug. I'd send you the revert and investigate later. I agree
>>>>>>>> that
>>>>>>>> using generic fbdev emulation would be preferable.
>>>>>>> Still not sure that's the right thing to do really. Yes it's a
>>>>>>> regression, but vm testcases shouldn't run a single line of fbcon or
>>>>>>> drm
>>>>>>> code. So why this is impacted so heavily by a silly drm change is
>>>>>>> very
>>>>>>> confusing to me. We might be papering over a deeper and much more
>>>>>>> serious issue ...
>>>>>> It's a regression, the right thing is to revert first and then work
>>>>>> out the right thing to do.
>>>>> Sure, but I have no idea whether the testcase is doing something
>>>>> reasonable. If it's accidentally testing vm scalability of fbdev and
>>>>> there's no one else doing something this pointless, then it's not a
>>>>> real bug. Plus I think we're shooting the messenger here.
>>>>>
>>>>>> It's likely the test runs on the console and printfs stuff out
>>>>>> while running.
>>>>> But why did we not regress the world if a few prints on the console
>>>>> have such a huge impact? We didn't get an entire stream of mails about
>>>>> breaking stuff ...
>>>> The regression seems unrelated to the commit, but we have retested
>>>> and confirmed it. It's hard to understand what happens.
>>> Does the regressed test cause any output on console while it's
>>> measuring? If so, it's probably accidentally measuring fbcon/DRM code in
>>> addition to the workload it's trying to measure.
>>>
>>
>> Sorry, I'm not familiar with DRM. We enabled the console to output logs;
>> please find the log file attached.
>>
>> "Command line: ... console=tty0 earlyprintk=ttyS0,115200
>> console=ttyS0,115200 vga=normal rw"
>
> I assume the
>
> user :notice: [ xxx.xxxx] xxxxxxxxx bytes / xxxxxxx usecs = xxxxx KB/s
>
> lines are generated by the test?
>
> If so, unless the test is intended to measure console performance, it
> should be fixed not to generate output to console (while it's measuring).
Yes, the test prints quite a lot of text to the console. It shouldn't do
that.
Best regards
Thomas
>
>
--
Thomas Zimmermann
Graphics Driver Developer
SUSE Linux GmbH, Maxfeldstrasse 5, 90409 Nuernberg, Germany
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah
HRB 21284 (AG Nürnberg)
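Michel's point — that the benchmark should not write to the console while measuring — amounts to the testcase sending its per-interval progress lines ("xxx bytes / xxx usecs = xxx KB/s") to a log file instead of stdout. A minimal C sketch, with an invented progress format and made-up numbers (this is not the actual will-it-scale/vm-scalability harness code):

```c
#include <assert.h>
#include <stdio.h>

/* Hypothetical per-interval progress line, modeled on the "bytes /
 * usecs = KB/s" lines seen in the attached log. */
static void report_progress(FILE *out, long bytes, long usecs)
{
	fprintf(out, "%ld bytes / %ld usecs = progress\n", bytes, usecs);
}

/* While measuring, write progress to a log file rather than stdout,
 * so no fbcon/DRM console work is mixed into the measured interval. */
static void run_measurement(const char *logpath, int intervals)
{
	FILE *log = fopen(logpath, "w");

	if (!log)
		return;
	for (int i = 0; i < intervals; i++)
		report_progress(log, 1048576L, 1000L);
	fclose(log);
}
```

The log can then be inspected after the run, as the robot's attachments already are, without perturbing the workload.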