[mm/memcg] bcc1e930b6: unixbench.score -1.4% regression
by kernel test robot
Greetings,
FYI, we noticed a -1.4% regression of unixbench.score due to commit:
commit: bcc1e930b6f74bfd316a77ec9eeaf8285aeef8fb ("mm/memcg: add debug checking in lock_page_memcg")
https://github.com/alexshi/linux.git lru-next
in testcase: unixbench
on test machine: 8 threads Intel(R) Core(TM) i7-3770K CPU @ 3.50GHz with 16G memory
with following parameters:
runtime: 300s
nr_task: 30%
test: shell1
cpufreq_governor: performance
ucode: 0x21
test-description: UnixBench is the original BYTE UNIX benchmark suite; it aims to test the performance of Unix-like systems.
test-url: https://github.com/kdlucas/byte-unixbench
In addition, the commit also has a significant impact on the following test:
+------------------+-----------------------------------------------------------------------+
| testcase: change | will-it-scale: will-it-scale.per_process_ops -2.3% regression |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters | cpufreq_governor=performance |
| | mode=process |
| | nr_task=50% |
| | test=page_fault3 |
| | ucode=0xb000038 |
+------------------+-----------------------------------------------------------------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <rong.a.chen@intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/30%/debian-x86_64-2019-11-14.cgz/300s/lkp-ivb-d01/shell1/unixbench/0x21
commit:
e1fc16cab8 ("mm/lru: add debug checking for page memcg moving")
bcc1e930b6 ("mm/memcg: add debug checking in lock_page_memcg")
e1fc16cab8c618fc bcc1e930b6f74bfd316a77ec9ee
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:4 50% 2:4 dmesg.RIP:cpuidle_enter_state
2:4 -50% :4 dmesg.RIP:loop
1:4 -25% :4 dmesg.RIP:poll_idle
1:4 -25% :4 kmsg.c5>]usb_hcd_irq
1:4 -25% :4 kmsg.cf34dbf>]usb_hcd_irq
1:4 -25% :4 kmsg.d774564>]usb_hcd_irq
1:4 -25% :4 kmsg.e006>]usb_hcd_irq
:4 25% 1:4 kmsg.e>]usb_hcd_irq
:4 25% 1:4 kmsg.f0bb>]usb_hcd_irq
3:4 -3% 3:4 perf-profile.children.cycles-pp.error_entry
2:4 -2% 2:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
5709 -1.4% 5628 unixbench.score
309419 +1.2% 313234 unixbench.time.involuntary_context_switches
2.397e+08 -1.5% 2.361e+08 unixbench.time.minor_page_faults
683.02 +5.2% 718.23 unixbench.time.system_time
1145 -2.6% 1115 unixbench.time.user_time
7368583 -1.3% 7271071 unixbench.time.voluntary_context_switches
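The %change column above is simply (after - before) / before * 100, computed over the run averages. As a quick sanity check of the headline numbers (a standalone sketch with values copied from the table; the pct() helper is mine, not part of lkp-tests):

```python
def pct(before, after):
    """Percent change from before to after, rounded to one decimal."""
    return round((after - before) / before * 100, 1)

# Values taken from the unixbench rows above.
print(pct(5709, 5628))      # unixbench.score          -> -1.4
print(pct(683.02, 718.23))  # ...time.system_time      ->  5.2
print(pct(1145, 1115))      # ...time.user_time        -> -2.6
```

The score drop tracks the shift of time from user to system: system_time grows 5.2% while user_time shrinks 2.6%, consistent with added overhead in the kernel page-fault path.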
6559 ± 5% -8.4% 6006 slabinfo.kmalloc-64.active_objs
6559 ± 5% -8.4% 6006 slabinfo.kmalloc-64.num_objs
7.073e+08 ± 3% +29.5% 9.159e+08 cpuidle.C1E.time
6572166 +24.4% 8176348 ± 4% cpuidle.C1E.usage
1.425e+09 ± 13% -65.1% 4.978e+08 ± 11% cpuidle.C3.time
5284995 ± 10% -56.1% 2320396 ± 13% cpuidle.C3.usage
1.293e+09 ± 19% +42.2% 1.838e+09 ± 8% cpuidle.C6.time
3013695 ± 12% +27.6% 3844106 ± 7% cpuidle.C6.usage
6635 ± 89% -93.6% 421.65 ±165% sched_debug.cfs_rq:/.MIN_vruntime.avg
53083 ± 89% -93.6% 3373 ±165% sched_debug.cfs_rq:/.MIN_vruntime.max
17555 ± 89% -93.6% 1115 ±165% sched_debug.cfs_rq:/.MIN_vruntime.stddev
6635 ± 89% -93.6% 421.65 ±165% sched_debug.cfs_rq:/.max_vruntime.avg
53083 ± 89% -93.6% 3373 ±165% sched_debug.cfs_rq:/.max_vruntime.max
17555 ± 89% -93.6% 1115 ±165% sched_debug.cfs_rq:/.max_vruntime.stddev
11445 ± 2% -3.5% 11043 proc-vmstat.nr_shmem
1.878e+08 -1.6% 1.849e+08 proc-vmstat.numa_hit
1.878e+08 -1.6% 1.849e+08 proc-vmstat.numa_local
9103 -4.4% 8705 proc-vmstat.pgactivate
1.97e+08 -1.6% 1.939e+08 proc-vmstat.pgalloc_normal
2.406e+08 -1.5% 2.37e+08 proc-vmstat.pgfault
1.97e+08 -1.6% 1.939e+08 proc-vmstat.pgfree
3374257 -1.5% 3324554 proc-vmstat.unevictable_pgs_culled
6571910 +24.4% 8176149 ± 4% turbostat.C1E
13.28 ± 4% +4.4 17.70 ± 3% turbostat.C1E%
5284838 ± 10% -56.1% 2320252 ± 13% turbostat.C3
26.53 ± 4% -16.9 9.63 ± 13% turbostat.C3%
3013527 ± 12% +27.6% 3843950 ± 7% turbostat.C6
23.95 ± 9% +11.5 35.45 ± 6% turbostat.C6%
0.03 ± 69% +16245.5% 4.50 ± 65% turbostat.CPU%c6
3.77 ± 66% -84.3% 0.59 ±162% turbostat.Pkg%pc3
2.47 +0.4 2.87 ± 10% perf-stat.i.branch-miss-rate%
4.37 ± 4% +0.8 5.21 ± 8% perf-stat.i.cache-miss-rate%
2108 ± 17% -21.2% 1661 ± 7% perf-stat.i.cycles-between-cache-misses
0.77 ± 5% +0.1 0.84 perf-stat.i.dTLB-load-miss-rate%
0.17 ± 5% +0.0 0.18 perf-stat.i.dTLB-store-miss-rate%
4.56 +0.0 4.60 perf-stat.overall.cache-miss-rate%
1.02 +1.1% 1.03 perf-stat.overall.cpi
2056 +1.5% 2087 perf-stat.overall.instructions-per-iTLB-miss
0.98 -1.0% 0.97 perf-stat.overall.ipc
1645 ± 46% -62.0% 624.75 ± 31% interrupts.59:PCI-MSI.530434-edge.eth4-TxRx-1
78945 ± 3% -11.1% 70202 interrupts.CAL:Function_call_interrupts
6541 ± 33% -29.2% 4632 ± 13% interrupts.CPU0.NMI:Non-maskable_interrupts
6541 ± 33% -29.2% 4632 ± 13% interrupts.CPU0.PMI:Performance_monitoring_interrupts
4809 -19.6% 3866 ± 3% interrupts.CPU0.TLB:TLB_shootdowns
1645 ± 46% -62.0% 624.75 ± 31% interrupts.CPU1.59:PCI-MSI.530434-edge.eth4-TxRx-1
9948 ± 4% -12.6% 8699 ± 2% interrupts.CPU1.CAL:Function_call_interrupts
8439 ± 4% -42.9% 4816 ± 23% interrupts.CPU1.NMI:Non-maskable_interrupts
8439 ± 4% -42.9% 4816 ± 23% interrupts.CPU1.PMI:Performance_monitoring_interrupts
4833 ± 2% -20.4% 3849 ± 3% interrupts.CPU1.TLB:TLB_shootdowns
9944 ± 4% -12.8% 8671 interrupts.CPU2.CAL:Function_call_interrupts
4853 -20.8% 3843 interrupts.CPU2.TLB:TLB_shootdowns
4802 -18.1% 3933 ± 2% interrupts.CPU3.TLB:TLB_shootdowns
4845 -19.8% 3886 ± 2% interrupts.CPU4.TLB:TLB_shootdowns
4879 -17.8% 4010 ± 3% interrupts.CPU5.TLB:TLB_shootdowns
4878 -19.4% 3931 ± 3% interrupts.CPU6.TLB:TLB_shootdowns
9982 ± 4% -11.9% 8791 ± 2% interrupts.CPU7.CAL:Function_call_interrupts
4829 -18.2% 3950 ± 3% interrupts.CPU7.TLB:TLB_shootdowns
51915 ± 13% -20.4% 41330 ± 11% interrupts.NMI:Non-maskable_interrupts
51915 ± 13% -20.4% 41330 ± 11% interrupts.PMI:Performance_monitoring_interrupts
38731 -19.3% 31270 ± 2% interrupts.TLB:TLB_shootdowns
45.12 -2.9 42.18 ± 2% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
46.83 -2.5 44.33 ± 2% perf-profile.calltrace.cycles-pp.secondary_startup_64
1.56 ± 2% +0.1 1.62 perf-profile.calltrace.cycles-pp.mmap64
1.40 ± 2% +0.1 1.47 ± 2% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.mmap64
1.40 ± 2% +0.1 1.48 ± 2% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.mmap64
1.25 +0.1 1.33 ± 2% perf-profile.calltrace.cycles-pp.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe.mmap64
1.20 +0.1 1.28 ± 3% perf-profile.calltrace.cycles-pp.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe.mmap64
1.82 ± 3% +0.1 1.93 ± 2% perf-profile.calltrace.cycles-pp.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64
1.81 ± 2% +0.1 1.93 ± 2% perf-profile.calltrace.cycles-pp.flush_old_exec.load_elf_binary.search_binary_handler.__do_execve_file.__x64_sys_execve
2.27 ± 2% +0.1 2.38 perf-profile.calltrace.cycles-pp.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.66 ± 2% +0.1 1.79 ± 3% perf-profile.calltrace.cycles-pp.exit_mmap.mmput.flush_old_exec.load_elf_binary.search_binary_handler
1.68 ± 3% +0.1 1.81 ± 3% perf-profile.calltrace.cycles-pp.mmput.flush_old_exec.load_elf_binary.search_binary_handler.__do_execve_file
0.77 ± 3% +0.2 0.93 ± 4% perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.mmput.flush_old_exec.load_elf_binary
0.73 ± 4% +0.2 0.88 ± 5% perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.exit_mmap.mmput.flush_old_exec
0.58 ± 7% +0.2 0.80 ± 2% perf-profile.calltrace.cycles-pp.alloc_set_pte.filemap_map_pages.handle_pte_fault.__handle_mm_fault.handle_mm_fault
1.79 +0.4 2.15 ± 2% perf-profile.calltrace.cycles-pp.filemap_map_pages.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
1.66 +0.4 2.05 ± 6% perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.exit_mmap.mmput.do_exit
3.21 ± 3% +0.4 3.61 ± 4% perf-profile.calltrace.cycles-pp.mmput.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
3.18 ± 3% +0.4 3.60 ± 3% perf-profile.calltrace.cycles-pp.exit_mmap.mmput.do_exit.do_group_exit.__x64_sys_exit_group
1.72 ± 2% +0.4 2.13 ± 5% perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.mmput.do_exit.do_group_exit
0.45 ± 59% +0.4 0.90 ± 2% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
4.33 +0.5 4.81 ± 3% perf-profile.calltrace.cycles-pp.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
4.20 ± 3% +0.5 4.69 ± 3% perf-profile.calltrace.cycles-pp.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.20 ± 3% +0.5 4.69 ± 3% perf-profile.calltrace.cycles-pp.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.19 ± 3% +0.5 4.69 ± 3% perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.90 +0.5 5.40 ± 2% perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
4.68 +0.5 5.19 ± 3% perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.00 +0.6 0.55 ± 2% perf-profile.calltrace.cycles-pp.page_add_file_rmap.alloc_set_pte.filemap_map_pages.handle_pte_fault.__handle_mm_fault
5.71 +0.6 6.26 ± 2% perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault
6.07 +0.6 6.64 ± 2% perf-profile.calltrace.cycles-pp.page_fault
5.86 +0.6 6.43 ± 2% perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
0.31 ±100% +0.6 0.88 ± 2% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle
0.00 +0.6 0.58 ± 7% perf-profile.calltrace.cycles-pp.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
0.26 ±100% +0.6 0.88 ± 9% perf-profile.calltrace.cycles-pp.page_remove_rmap.unmap_page_range.unmap_vmas.exit_mmap.mmput
11.76 +0.7 12.43 ± 3% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
11.71 +0.7 12.38 ± 3% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
45.12 -2.9 42.18 ± 2% perf-profile.children.cycles-pp.intel_idle
45.89 -2.5 43.34 ± 2% perf-profile.children.cycles-pp.cpuidle_enter_state
45.89 -2.5 43.34 ± 2% perf-profile.children.cycles-pp.cpuidle_enter
46.83 -2.5 44.32 ± 2% perf-profile.children.cycles-pp.do_idle
46.83 -2.5 44.33 ± 2% perf-profile.children.cycles-pp.secondary_startup_64
46.83 -2.5 44.33 ± 2% perf-profile.children.cycles-pp.cpu_startup_entry
0.79 ± 4% -0.1 0.72 ± 4% perf-profile.children.cycles-pp.ksys_read
0.22 ± 9% -0.0 0.18 ± 9% perf-profile.children.cycles-pp.shift_arg_pages
0.20 ± 6% -0.0 0.17 ± 12% perf-profile.children.cycles-pp.up_write
0.45 ± 3% -0.0 0.42 ± 6% perf-profile.children.cycles-pp.copy_page_range
0.27 ± 4% -0.0 0.23 ± 4% perf-profile.children.cycles-pp.copyout
0.11 ± 16% -0.0 0.08 ± 17% perf-profile.children.cycles-pp.__errno_location
0.20 ± 2% -0.0 0.18 ± 3% perf-profile.children.cycles-pp.brk
0.14 ± 5% -0.0 0.12 ± 5% perf-profile.children.cycles-pp.__x64_sys_brk
0.07 ± 12% +0.0 0.09 ± 5% perf-profile.children.cycles-pp.grab_cache_page_write_begin
0.08 ± 6% +0.0 0.10 ± 5% perf-profile.children.cycles-pp.simple_write_begin
0.06 ± 6% +0.0 0.08 ± 15% perf-profile.children.cycles-pp.ima_file_check
0.09 ± 7% +0.0 0.11 ± 9% perf-profile.children.cycles-pp.evict
0.43 ± 2% +0.0 0.45 ± 2% perf-profile.children.cycles-pp._IO_padn
0.07 ± 12% +0.0 0.10 ± 5% perf-profile.children.cycles-pp.truncate_inode_pages_range
0.09 ± 16% +0.0 0.12 ± 7% perf-profile.children.cycles-pp.__cxa_atexit
0.20 ± 2% +0.0 0.23 ± 8% perf-profile.children.cycles-pp.terminate_walk
0.28 ± 3% +0.0 0.32 ± 6% perf-profile.children.cycles-pp.do_wait
0.18 ± 4% +0.0 0.22 ± 3% perf-profile.children.cycles-pp.generic_file_write_iter
0.14 ± 5% +0.0 0.18 ± 4% perf-profile.children.cycles-pp.generic_perform_write
0.30 ± 5% +0.0 0.34 ± 5% perf-profile.children.cycles-pp.__do_sys_wait4
0.29 ± 4% +0.0 0.34 ± 6% perf-profile.children.cycles-pp.kernel_wait4
0.03 ±100% +0.0 0.07 ± 10% perf-profile.children.cycles-pp.get_task_policy
0.16 ± 7% +0.0 0.21 ± 4% perf-profile.children.cycles-pp.__generic_file_write_iter
0.58 ± 5% +0.0 0.63 ± 3% perf-profile.children.cycles-pp.find_vma
0.35 ± 5% +0.0 0.40 ± 7% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.00 +0.1 0.06 ± 22% perf-profile.children.cycles-pp.__put_task_struct
0.07 ± 14% +0.1 0.13 ± 11% perf-profile.children.cycles-pp.PageHuge
0.00 +0.1 0.06 ± 13% perf-profile.children.cycles-pp.__getpid
0.54 ± 2% +0.1 0.60 ± 9% perf-profile.children.cycles-pp.vfs_write
0.30 ± 9% +0.1 0.37 ± 9% perf-profile.children.cycles-pp.vmacache_find
0.55 ± 2% +0.1 0.62 ± 9% perf-profile.children.cycles-pp.ksys_write
1.57 +0.1 1.64 perf-profile.children.cycles-pp.mmap64
0.29 ± 6% +0.1 0.36 ± 8% perf-profile.children.cycles-pp.__slab_free
0.15 ± 10% +0.1 0.23 ± 10% perf-profile.children.cycles-pp.rcu_cblist_dequeue
1.35 +0.1 1.45 ± 2% perf-profile.children.cycles-pp.walk_component
0.44 ± 5% +0.1 0.54 ± 5% perf-profile.children.cycles-pp.__x64_sys_munmap
3.38 +0.1 3.47 perf-profile.children.cycles-pp.do_filp_open
3.72 +0.1 3.81 perf-profile.children.cycles-pp.do_sys_open
0.71 ± 3% +0.1 0.81 ± 6% perf-profile.children.cycles-pp.unmap_region
3.11 +0.1 3.22 perf-profile.children.cycles-pp.vm_mmap_pgoff
0.73 ± 3% +0.1 0.84 perf-profile.children.cycles-pp.__vm_munmap
2.77 +0.1 2.89 perf-profile.children.cycles-pp.do_mmap
0.58 ± 4% +0.1 0.70 ± 6% perf-profile.children.cycles-pp.kmem_cache_free
1.14 ± 3% +0.1 1.27 perf-profile.children.cycles-pp.__do_munmap
1.95 ± 2% +0.1 2.08 ± 2% perf-profile.children.cycles-pp.flush_old_exec
0.41 ± 3% +0.1 0.55 ± 6% perf-profile.children.cycles-pp.unlock_page
0.55 ± 4% +0.3 0.82 ± 7% perf-profile.children.cycles-pp.rcu_do_batch
0.62 ± 5% +0.3 0.90 ± 7% perf-profile.children.cycles-pp.rcu_core
0.67 ± 9% +0.3 0.99 ± 7% perf-profile.children.cycles-pp.irq_exit
0.75 ± 7% +0.3 1.08 ± 6% perf-profile.children.cycles-pp.__softirqentry_text_start
1.21 ± 6% +0.4 1.58 perf-profile.children.cycles-pp.smp_apic_timer_interrupt
1.26 ± 5% +0.4 1.64 ± 2% perf-profile.children.cycles-pp.apic_timer_interrupt
1.25 ± 5% +0.5 1.73 ± 2% perf-profile.children.cycles-pp.alloc_set_pte
0.58 ± 4% +0.5 1.07 ± 2% perf-profile.children.cycles-pp.page_add_file_rmap
4.32 ± 3% +0.5 4.82 ± 3% perf-profile.children.cycles-pp.do_group_exit
4.32 ± 3% +0.5 4.82 ± 3% perf-profile.children.cycles-pp.__x64_sys_exit_group
4.31 ± 3% +0.5 4.81 ± 3% perf-profile.children.cycles-pp.do_exit
0.80 ± 3% +0.5 1.33 ± 7% perf-profile.children.cycles-pp.page_remove_rmap
5.01 +0.6 5.56 ± 3% perf-profile.children.cycles-pp.mmput
4.96 +0.6 5.53 ± 3% perf-profile.children.cycles-pp.exit_mmap
2.63 +0.6 3.22 ± 4% perf-profile.children.cycles-pp.unmap_page_range
2.75 +0.6 3.35 ± 3% perf-profile.children.cycles-pp.unmap_vmas
3.52 +0.8 4.29 perf-profile.children.cycles-pp.filemap_map_pages
7.21 +0.9 8.08 ± 2% perf-profile.children.cycles-pp.handle_pte_fault
10.35 +0.9 11.27 perf-profile.children.cycles-pp.page_fault
9.51 +0.9 10.45 ± 2% perf-profile.children.cycles-pp.__do_page_fault
8.32 +0.9 9.25 ± 2% perf-profile.children.cycles-pp.handle_mm_fault
7.95 +0.9 8.89 ± 2% perf-profile.children.cycles-pp.__handle_mm_fault
9.74 +1.0 10.70 ± 2% perf-profile.children.cycles-pp.do_page_fault
0.18 ± 12% +1.0 1.18 ± 6% perf-profile.children.cycles-pp.lock_page_memcg
27.68 +1.2 28.83 perf-profile.children.cycles-pp.do_syscall_64
27.76 +1.2 28.92 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
45.09 -2.9 42.15 ± 2% perf-profile.self.cycles-pp.intel_idle
0.06 ± 6% -0.0 0.03 ±100% perf-profile.self.cycles-pp.getname_flags
0.19 ± 6% -0.0 0.16 ± 11% perf-profile.self.cycles-pp.up_write
0.17 ± 4% -0.0 0.15 ± 4% perf-profile.self.cycles-pp._cond_resched
0.08 ± 5% -0.0 0.06 ± 11% perf-profile.self.cycles-pp.xas_start
0.16 ± 5% +0.0 0.18 ± 4% perf-profile.self.cycles-pp.__pagevec_lru_add_fn
0.08 ± 8% +0.0 0.10 ± 10% perf-profile.self.cycles-pp.memset_erms
0.07 ± 13% +0.0 0.09 perf-profile.self.cycles-pp.xas_find
0.08 ± 14% +0.0 0.10 ± 4% perf-profile.self.cycles-pp.lookup_fast
0.16 ± 9% +0.0 0.19 ± 11% perf-profile.self.cycles-pp.perf_iterate_sb
0.27 ± 7% +0.0 0.31 ± 8% perf-profile.self.cycles-pp.kmem_cache_free
0.03 ±100% +0.0 0.07 ± 10% perf-profile.self.cycles-pp.get_task_policy
0.12 ± 11% +0.1 0.17 ± 14% perf-profile.self.cycles-pp.file_free_rcu
0.07 ± 12% +0.1 0.12 ± 13% perf-profile.self.cycles-pp.PageHuge
0.30 ± 7% +0.1 0.36 ± 9% perf-profile.self.cycles-pp.vmacache_find
0.29 ± 6% +0.1 0.36 ± 7% perf-profile.self.cycles-pp.__slab_free
0.15 ± 10% +0.1 0.23 ± 10% perf-profile.self.cycles-pp.rcu_cblist_dequeue
1.25 ± 2% +0.1 1.35 ± 3% perf-profile.self.cycles-pp.unmap_page_range
0.39 ± 4% +0.1 0.53 ± 6% perf-profile.self.cycles-pp.unlock_page
1.72 +0.1 1.86 ± 3% perf-profile.self.cycles-pp.filemap_map_pages
0.17 ± 13% +1.0 1.15 ± 6% perf-profile.self.cycles-pp.lock_page_memcg
unixbench.time.minor_page_faults
2.41e+08 +-+--------------------------------------------------------------+
| |
2.4e+08 +-++.+..+.+..+ .+.+.. |
| + .+..+.+..+..+.+. +..+. .+.+..|
| + .+..+.+..+..+ +. |
2.39e+08 +-+ + |
| |
2.38e+08 +-+ |
| |
2.37e+08 +-+ |
| |
| O O O O O O O O |
2.36e+08 +-+O O O O O O O O O O O |
O O O O |
2.35e+08 +-+--------------------------------------------------------------+
unixbench.time.voluntary_context_switches
7.42e+06 +-+--------------------------------------------------------------+
| .+.. |
7.4e+06 +-++ +.+..+.. .+.. |
7.38e+06 +-+ + +.+.. .+.+..+.+..+..+.+..+.+.. .+ .+..|
| +. +. + .+ |
7.36e+06 +-+ +. |
7.34e+06 +-+ |
| |
7.32e+06 +-+ |
7.3e+06 +-+ |
| |
7.28e+06 +-+ O O O O |
7.26e+06 +-+O O O O O O O O O O O O O O O O O |
O O |
7.24e+06 +-+--------------------------------------------------------------+
unixbench.score
5740 +-+------------------------------------------------------------------+
|..+..+. .+..+ |
5720 +-+ +. + .+.+.. .+.. |
| + .+.. .+..+..+.+..+..+. +. .+..+..|
5700 +-+ + +..+..+.+. + |
| |
5680 +-+ |
| |
5660 +-+ |
| |
5640 +-+ |
| O O O O O O O O |
5620 +-+O O O O O O O O O O O O |
O O O |
5600 +-+------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
***************************************************************************************************
lkp-bdw-ep6: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/process/50%/debian-x86_64-2019-11-14.cgz/lkp-bdw-ep6/page_fault3/will-it-scale/0xb000038
commit:
e1fc16cab8 ("mm/lru: add debug checking for page memcg moving")
bcc1e930b6 ("mm/memcg: add debug checking in lock_page_memcg")
e1fc16cab8c618fc bcc1e930b6f74bfd316a77ec9ee
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
2:2 35% 2:3 perf-profile.calltrace.cycles-pp.sync_regs.error_entry
10:2 185% 14:3 perf-profile.calltrace.cycles-pp.error_entry
0:2 4% 0:3 perf-profile.children.cycles-pp.error_exit
11:2 194% 15:3 perf-profile.children.cycles-pp.error_entry
0:2 2% 0:3 perf-profile.self.cycles-pp.error_exit
9:2 159% 12:3 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
708100 -2.3% 691511 will-it-scale.per_process_ops
31156421 -2.3% 30426532 will-it-scale.workload
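As in the unixbench table, the %change values here follow directly from the before/after columns, and the perf-stat cpi and ipc rows further down are reciprocals of each other. A quick check (numbers copied from the tables; nothing here is generated by lkp-tests itself):

```python
def pct(before, after):
    """Percent change from before to after, rounded to one decimal."""
    return round((after - before) / before * 100, 1)

# Values taken from the will-it-scale and perf-stat rows.
print(pct(708100, 691511))      # will-it-scale.per_process_ops -> -2.3
print(pct(31156421, 30426532))  # will-it-scale.workload        -> -2.3
print(round(1 / 1.50, 2))       # ipc = 1 / cpi                 ->  0.67
```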
17.00 -5.9% 16.00 vmstat.cpu.us
44.21 -1.8% 43.41 boot-time.boot
3457 -2.0% 3386 ± 2% boot-time.idle
5795 ± 97% +358.4% 26568 ± 9% numa-numastat.node0.other_node
22688 ± 24% -87.4% 2870 ±139% numa-numastat.node1.other_node
12099 ± 5% -10.8% 10795 ± 4% numa-vmstat.node0.nr_slab_reclaimable
16009 ± 3% +14.7% 18370 ± 6% numa-vmstat.node0.nr_slab_unreclaimable
17803 ± 4% -10.6% 15923 ± 8% numa-vmstat.node1.nr_slab_unreclaimable
48401 ± 5% -10.8% 43184 ± 4% numa-meminfo.node0.KReclaimable
48401 ± 5% -10.8% 43184 ± 4% numa-meminfo.node0.SReclaimable
64040 ± 3% +14.7% 73480 ± 6% numa-meminfo.node0.SUnreclaim
71213 ± 4% -10.6% 63693 ± 8% numa-meminfo.node1.SUnreclaim
20768644 -2.1% 20338217 proc-vmstat.numa_hit
20740149 -2.1% 20308772 proc-vmstat.numa_local
1722 ± 7% -81.0% 327.33 ± 82% proc-vmstat.numa_pages_migrated
20822414 -2.1% 20391958 proc-vmstat.pgalloc_normal
9.374e+09 -2.3% 9.155e+09 proc-vmstat.pgfault
20793329 -2.1% 20361691 proc-vmstat.pgfree
1722 ± 7% -81.0% 327.33 ± 82% proc-vmstat.pgmigrate_success
22262397 ± 93% -95.3% 1040096 cpuidle.C1.time
216984 ± 62% -67.8% 69824 ± 3% cpuidle.C1.usage
2.964e+09 ± 11% -87.9% 3.591e+08 ±139% cpuidle.C1E.time
15453867 -86.5% 2082721 ±137% cpuidle.C1E.usage
24232673 ± 4% +28180.4% 6.853e+09 ± 53% cpuidle.C3.time
82206 +19055.7% 15747147 ± 39% cpuidle.C3.usage
1.021e+10 ± 3% -40.5% 6.076e+09 ± 52% cpuidle.C6.time
56164 ± 5% -12.3% 49272 ± 7% cpuidle.POLL.time
214295 ± 62% -68.0% 68499 ± 3% turbostat.C1
0.09 ± 88% -0.1 0.00 turbostat.C1%
15452780 -86.5% 2082055 ±138% turbostat.C1E
11.11 ± 11% -9.8 1.35 ±139% turbostat.C1E%
81052 +19326.8% 15745776 ± 39% turbostat.C3
0.09 +25.5 25.62 ± 53% turbostat.C3%
38.28 ± 3% -15.6 22.71 ± 52% turbostat.C6%
0.14 +61.9% 0.23 ± 11% turbostat.Pkg%pc2
16.91 -1.5% 16.66 turbostat.RAMWatt
468.00 ± 8% -13.9% 403.00 ± 4% slabinfo.bdev_cache.active_objs
468.00 ± 8% -13.9% 403.00 ± 4% slabinfo.bdev_cache.num_objs
592.00 ± 8% -29.7% 416.00 ± 16% slabinfo.kmalloc-rcl-128.active_objs
592.00 ± 8% -29.7% 416.00 ± 16% slabinfo.kmalloc-rcl-128.num_objs
1701 ± 13% -16.9% 1414 ± 11% slabinfo.kmalloc-rcl-96.active_objs
1701 ± 13% -16.9% 1414 ± 11% slabinfo.kmalloc-rcl-96.num_objs
3408 ± 5% +16.4% 3968 slabinfo.skbuff_head_cache.active_objs
3472 ± 3% +14.3% 3968 slabinfo.skbuff_head_cache.num_objs
862.50 ± 2% +29.8% 1119 ± 16% slabinfo.task_group.active_objs
862.50 ± 2% +29.8% 1119 ± 16% slabinfo.task_group.num_objs
0.00 +2e+25% 250.09 ± 70% sched_debug.cfs_rq:/.MIN_vruntime.stddev
14.03 ± 12% -53.9% 6.47 ± 39% sched_debug.cfs_rq:/.exec_clock.min
0.00 +2e+25% 250.09 ± 70% sched_debug.cfs_rq:/.max_vruntime.stddev
13232 -25.1% 9907 ± 14% sched_debug.cfs_rq:/.min_vruntime.min
28178 +264.9% 102824 ± 60% sched_debug.cpu.avg_idle.min
55736 -18.8% 45283 ± 3% sched_debug.cpu.nr_switches.max
6357 -9.8% 5735 ± 4% sched_debug.cpu.nr_switches.stddev
76.00 ± 18% -38.2% 47.00 ± 7% sched_debug.cpu.nr_uninterruptible.max
11.78 ± 2% -14.9% 10.03 ± 9% sched_debug.cpu.nr_uninterruptible.stddev
52387 -19.0% 42455 ± 3% sched_debug.cpu.sched_count.max
26188 -19.0% 21217 ± 3% sched_debug.cpu.sched_goidle.max
26140 -20.2% 20863 ± 6% sched_debug.cpu.ttwu_count.max
2928 -12.3% 2567 ± 3% sched_debug.cpu.ttwu_count.stddev
5919 -18.3% 4836 ± 11% sched_debug.cpu.ttwu_local.max
906.33 ± 9% -13.0% 788.27 ± 11% sched_debug.cpu.ttwu_local.stddev
411.50 ± 19% +267.2% 1511 ± 13% interrupts.34:IR-PCI-MSI.1572865-edge.eth0-TxRx-0
259.50 ± 8% +97.3% 512.00 ± 29% interrupts.37:IR-PCI-MSI.1572868-edge.eth0-TxRx-3
260.50 ± 5% -28.1% 187.33 ± 14% interrupts.40:IR-PCI-MSI.1572871-edge.eth0-TxRx-6
2165 ± 33% +83.4% 3971 ± 22% interrupts.CPU1.NMI:Non-maskable_interrupts
2165 ± 33% +83.4% 3971 ± 22% interrupts.CPU1.PMI:Performance_monitoring_interrupts
439.50 ± 36% -84.7% 67.33 ± 35% interrupts.CPU1.RES:Rescheduling_interrupts
411.50 ± 19% +267.2% 1511 ± 13% interrupts.CPU13.34:IR-PCI-MSI.1572865-edge.eth0-TxRx-0
259.50 ± 8% +97.3% 512.00 ± 29% interrupts.CPU16.37:IR-PCI-MSI.1572868-edge.eth0-TxRx-3
2890 +58.7% 4588 ± 26% interrupts.CPU18.NMI:Non-maskable_interrupts
2890 +58.7% 4588 ± 26% interrupts.CPU18.PMI:Performance_monitoring_interrupts
260.50 ± 5% -28.1% 187.33 ± 14% interrupts.CPU19.40:IR-PCI-MSI.1572871-edge.eth0-TxRx-6
18.00 ± 5% +1109.3% 217.67 ±111% interrupts.CPU2.RES:Rescheduling_interrupts
15119 ± 99% -99.9% 10.33 ± 87% interrupts.CPU21.RES:Rescheduling_interrupts
457.50 +413.6% 2349 ± 46% interrupts.CPU22.CAL:Function_call_interrupts
2882 +59.8% 4604 ± 26% interrupts.CPU23.NMI:Non-maskable_interrupts
2882 +59.8% 4604 ± 26% interrupts.CPU23.PMI:Performance_monitoring_interrupts
2882 +60.0% 4611 ± 26% interrupts.CPU24.NMI:Non-maskable_interrupts
2882 +60.0% 4611 ± 26% interrupts.CPU24.PMI:Performance_monitoring_interrupts
2166 ± 33% +113.1% 4616 ± 26% interrupts.CPU25.NMI:Non-maskable_interrupts
2166 ± 33% +113.1% 4616 ± 26% interrupts.CPU25.PMI:Performance_monitoring_interrupts
2889 +58.0% 4564 ± 26% interrupts.CPU26.NMI:Non-maskable_interrupts
2889 +58.0% 4564 ± 26% interrupts.CPU26.PMI:Performance_monitoring_interrupts
5523 ± 99% -99.9% 7.67 ± 81% interrupts.CPU28.RES:Rescheduling_interrupts
2882 +59.9% 4609 ± 26% interrupts.CPU29.NMI:Non-maskable_interrupts
2882 +59.9% 4609 ± 26% interrupts.CPU29.PMI:Performance_monitoring_interrupts
32.00 ± 40% +4693.8% 1534 ±134% interrupts.CPU3.RES:Rescheduling_interrupts
2880 +59.0% 4579 ± 26% interrupts.CPU30.NMI:Non-maskable_interrupts
2880 +59.0% 4579 ± 26% interrupts.CPU30.PMI:Performance_monitoring_interrupts
130479 ± 12% -14.7% 111248 ± 12% softirqs.CPU0.TIMER
40610 ± 8% -32.5% 27403 ± 52% softirqs.CPU1.SCHED
90458 ± 2% -8.9% 82439 ± 2% softirqs.CPU10.RCU
39804 ± 6% -35.5% 25654 ± 52% softirqs.CPU12.SCHED
42421 -10.2% 38106 ± 7% softirqs.CPU17.SCHED
152249 ± 8% -29.6% 107141 ± 11% softirqs.CPU17.TIMER
39761 ± 6% -24.0% 30200 ± 29% softirqs.CPU20.SCHED
126085 ± 20% -14.7% 107515 ± 17% softirqs.CPU22.TIMER
40377 ± 6% -5.3% 38236 ± 5% softirqs.CPU23.SCHED
115810 ± 17% -11.1% 102973 ± 13% softirqs.CPU24.TIMER
40350 ± 5% -6.3% 37805 ± 4% softirqs.CPU26.SCHED
42841 ± 10% -22.9% 33030 ± 27% softirqs.CPU28.SCHED
117685 ± 18% -11.6% 104073 ± 13% softirqs.CPU28.TIMER
39643 ± 7% -15.7% 33405 ± 24% softirqs.CPU29.SCHED
39809 ± 6% -32.2% 26985 ± 58% softirqs.CPU32.SCHED
83627 ± 5% -6.1% 78503 ± 2% softirqs.CPU33.RCU
39718 ± 7% -8.7% 36264 ± 10% softirqs.CPU33.SCHED
40787 ± 5% -22.2% 31742 ± 34% softirqs.CPU37.SCHED
39501 ± 7% -26.4% 29061 ± 34% softirqs.CPU41.SCHED
40555 ± 5% -5.3% 38424 ± 5% softirqs.CPU43.SCHED
118059 ± 19% -13.4% 102223 ± 12% softirqs.CPU43.TIMER
130945 ± 27% -29.1% 92816 ± 4% softirqs.CPU49.TIMER
104687 ± 19% -22.2% 81482 ± 4% softirqs.CPU5.RCU
118564 ± 12% -11.4% 105088 ± 7% softirqs.CPU57.RCU
96934 ± 3% -4.6% 92440 ± 4% softirqs.CPU57.TIMER
96886 ± 5% -4.3% 92674 ± 5% softirqs.CPU59.TIMER
39789 ± 6% -4.0% 38187 ± 6% softirqs.CPU6.SCHED
95980 ± 5% -3.1% 93049 ± 4% softirqs.CPU65.TIMER
94712 ± 4% -2.8% 92019 ± 4% softirqs.CPU75.TIMER
115892 ± 13% -12.6% 101314 ± 9% softirqs.CPU76.RCU
93892 ± 4% -2.9% 91192 ± 3% softirqs.CPU79.TIMER
72900 ± 42% -47.8% 38088 ± 8% softirqs.CPU8.SCHED
1.664e+10 -2.8% 1.616e+10 perf-stat.i.branch-instructions
1.04e+08 -2.4% 1.014e+08 perf-stat.i.branch-misses
50037998 -2.0% 49038527 perf-stat.i.cache-misses
1.613e+08 -1.6% 1.587e+08 perf-stat.i.cache-references
1.50 +1.8% 1.52 perf-stat.i.cpi
1.28 ± 11% +19.4% 1.53 ± 6% perf-stat.i.cpu-migrations
2467 +1.9% 2514 perf-stat.i.cycles-between-cache-misses
0.13 -0.0 0.13 perf-stat.i.dTLB-load-miss-rate%
33318247 -3.9% 32021730 perf-stat.i.dTLB-load-misses
2.465e+10 -3.1% 2.388e+10 perf-stat.i.dTLB-loads
7.443e+08 -2.9% 7.223e+08 perf-stat.i.dTLB-store-misses
1.503e+10 -2.1% 1.472e+10 perf-stat.i.dTLB-stores
8.248e+10 -1.9% 8.091e+10 perf-stat.i.instructions
0.67 -1.8% 0.66 perf-stat.i.ipc
31061904 -2.5% 30284487 perf-stat.i.minor-faults
3.08 +1.1 4.16 ± 9% perf-stat.i.node-load-miss-rate%
148677 +33.0% 197754 ± 12% perf-stat.i.node-load-misses
31061770 -2.5% 30284449 perf-stat.i.page-faults
1.50 +1.8% 1.52 perf-stat.overall.cpi
2465 +1.9% 2513 perf-stat.overall.cycles-between-cache-misses
0.13 -0.0 0.13 perf-stat.overall.dTLB-load-miss-rate%
0.67 -1.8% 0.66 perf-stat.overall.ipc
3.05 +1.1 4.11 ± 9% perf-stat.overall.node-load-miss-rate%
1.658e+10 -2.8% 1.611e+10 perf-stat.ps.branch-instructions
1.036e+08 -2.4% 1.011e+08 perf-stat.ps.branch-misses
49869998 -2.0% 48873399 perf-stat.ps.cache-misses
1.608e+08 -1.6% 1.582e+08 perf-stat.ps.cache-references
1.28 ± 11% +19.3% 1.53 ± 6% perf-stat.ps.cpu-migrations
33206014 -3.9% 31913467 perf-stat.ps.dTLB-load-misses
2.457e+10 -3.1% 2.379e+10 perf-stat.ps.dTLB-loads
7.417e+08 -2.9% 7.199e+08 perf-stat.ps.dTLB-store-misses
1.498e+10 -2.1% 1.467e+10 perf-stat.ps.dTLB-stores
8.22e+10 -1.9% 8.064e+10 perf-stat.ps.instructions
30957134 -2.5% 30181947 perf-stat.ps.minor-faults
148203 +33.0% 197116 ± 12% perf-stat.ps.node-load-misses
30957006 -2.5% 30181908 perf-stat.ps.page-faults
2.487e+13 -1.7% 2.445e+13 perf-stat.total.instructions
21.07 ± 6% -2.4 18.66 ± 6% perf-profile.calltrace.cycles-pp.swapgs_restore_regs_and_return_to_usermode
24.94 ± 5% -2.0 22.95 ± 7% perf-profile.calltrace.cycles-pp.page_fault
24.52 ± 5% -1.9 22.59 ± 7% perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
16.11 ± 5% -1.8 14.30 ± 6% perf-profile.calltrace.cycles-pp.prepare_exit_to_usermode.swapgs_restore_regs_and_return_to_usermode
23.20 ± 5% -1.8 21.40 ± 7% perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault
18.07 ± 5% -1.2 16.88 ± 7% perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
15.98 ± 5% -1.0 15.00 ± 7% perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
6.69 ± 5% -0.7 5.95 ± 7% perf-profile.calltrace.cycles-pp.__do_fault.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
6.45 ± 5% -0.7 5.74 ± 7% perf-profile.calltrace.cycles-pp.shmem_fault.__do_fault.handle_pte_fault.__handle_mm_fault.handle_mm_fault
13.52 ± 5% -0.7 12.83 ± 7% perf-profile.calltrace.cycles-pp.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
5.34 ± 5% -0.6 4.79 ± 7% perf-profile.calltrace.cycles-pp.shmem_getpage_gfp.shmem_fault.__do_fault.handle_pte_fault.__handle_mm_fault
4.69 ± 5% -0.5 4.18 ± 7% perf-profile.calltrace.cycles-pp.find_lock_entry.shmem_getpage_gfp.shmem_fault.__do_fault.handle_pte_fault
3.29 ± 5% -0.4 2.94 ± 7% perf-profile.calltrace.cycles-pp.find_get_entry.find_lock_entry.shmem_getpage_gfp.shmem_fault.__do_fault
2.51 ± 5% -0.3 2.20 ± 8% perf-profile.calltrace.cycles-pp.fault_dirty_shared_page.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
0.58 ± 6% -0.2 0.36 ± 71% perf-profile.calltrace.cycles-pp.__count_memcg_events.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
1.85 ± 3% -0.2 1.69 ± 7% perf-profile.calltrace.cycles-pp.xas_load.find_get_entry.find_lock_entry.shmem_getpage_gfp.shmem_fault
1.23 ± 5% -0.1 1.10 ± 7% perf-profile.calltrace.cycles-pp.__perf_sw_event.__do_page_fault.do_page_fault.page_fault
1.09 ± 5% -0.1 0.97 ± 5% perf-profile.calltrace.cycles-pp.__perf_sw_event.do_page_fault.page_fault
1.09 ± 4% -0.1 0.98 ± 5% perf-profile.calltrace.cycles-pp.__mod_lruvec_state.page_add_file_rmap.alloc_set_pte.finish_fault.handle_pte_fault
0.82 ± 4% -0.1 0.72 ± 6% perf-profile.calltrace.cycles-pp.___perf_sw_event.__perf_sw_event.do_page_fault.page_fault
1.02 ± 4% -0.1 0.92 ± 7% perf-profile.calltrace.cycles-pp.___perf_sw_event.__perf_sw_event.__do_page_fault.do_page_fault.page_fault
0.61 ± 4% -0.1 0.55 ± 5% perf-profile.calltrace.cycles-pp.find_vma.__do_page_fault.do_page_fault.page_fault
0.60 ± 4% -0.0 0.56 ± 6% perf-profile.calltrace.cycles-pp.tlb_flush_mmu.unmap_page_range.unmap_vmas.unmap_region.__do_munmap
3.35 ± 5% +0.5 3.83 ± 6% perf-profile.calltrace.cycles-pp.alloc_set_pte.finish_fault.handle_pte_fault.__handle_mm_fault.handle_mm_fault
1.92 ± 5% +0.6 2.51 ± 5% perf-profile.calltrace.cycles-pp.page_add_file_rmap.alloc_set_pte.finish_fault.handle_pte_fault.__handle_mm_fault
0.00 +0.7 0.67 ± 6% perf-profile.calltrace.cycles-pp.lock_page_memcg.page_add_file_rmap.alloc_set_pte.finish_fault.handle_pte_fault
0.00 +0.7 0.68 ± 7% perf-profile.calltrace.cycles-pp.lock_page_memcg.page_remove_rmap.unmap_page_range.unmap_vmas.unmap_region
1.74 ± 5% +0.7 2.43 ± 6% perf-profile.calltrace.cycles-pp.page_remove_rmap.unmap_page_range.unmap_vmas.unmap_region.__do_munmap
21.07 ± 6% -2.4 18.67 ± 6% perf-profile.children.cycles-pp.swapgs_restore_regs_and_return_to_usermode
25.10 ± 5% -2.0 23.10 ± 7% perf-profile.children.cycles-pp.page_fault
24.54 ± 5% -1.9 22.61 ± 7% perf-profile.children.cycles-pp.do_page_fault
16.16 ± 5% -1.8 14.33 ± 6% perf-profile.children.cycles-pp.prepare_exit_to_usermode
23.28 ± 5% -1.8 21.48 ± 7% perf-profile.children.cycles-pp.__do_page_fault
18.27 ± 5% -1.2 17.04 ± 7% perf-profile.children.cycles-pp.handle_mm_fault
16.04 ± 5% -1.0 15.05 ± 7% perf-profile.children.cycles-pp.__handle_mm_fault
6.71 ± 5% -0.7 5.97 ± 7% perf-profile.children.cycles-pp.__do_fault
6.71 ± 5% -0.7 5.98 ± 6% perf-profile.children.cycles-pp.native_irq_return_iret
6.47 ± 5% -0.7 5.76 ± 7% perf-profile.children.cycles-pp.shmem_fault
13.62 ± 5% -0.7 12.92 ± 7% perf-profile.children.cycles-pp.handle_pte_fault
5.35 ± 5% -0.5 4.80 ± 7% perf-profile.children.cycles-pp.shmem_getpage_gfp
4.76 ± 5% -0.5 4.24 ± 7% perf-profile.children.cycles-pp.find_lock_entry
3.33 ± 4% -0.4 2.97 ± 7% perf-profile.children.cycles-pp.find_get_entry
2.59 ± 5% -0.3 2.26 ± 8% perf-profile.children.cycles-pp.fault_dirty_shared_page
2.34 ± 5% -0.2 2.09 ± 6% perf-profile.children.cycles-pp.__perf_sw_event
1.89 ± 4% -0.2 1.67 ± 6% perf-profile.children.cycles-pp.___perf_sw_event
1.90 ± 3% -0.2 1.72 ± 7% perf-profile.children.cycles-pp.xas_load
0.76 ± 2% -0.1 0.61 ± 9% perf-profile.children.cycles-pp.__set_page_dirty_no_writeback
1.10 ± 7% -0.1 0.97 ± 6% perf-profile.children.cycles-pp.sync_regs
0.69 ± 8% -0.1 0.58 ± 6% perf-profile.children.cycles-pp.set_page_dirty
0.83 ± 7% -0.1 0.73 ± 7% perf-profile.children.cycles-pp.page_mapping
0.60 ± 7% -0.1 0.51 ± 5% perf-profile.children.cycles-pp.___might_sleep
2.01 ± 4% -0.1 1.93 ± 6% perf-profile.children.cycles-pp.__mod_lruvec_state
0.55 ± 5% -0.1 0.47 ± 7% perf-profile.children.cycles-pp.up_read
0.43 ± 5% -0.1 0.36 ± 4% perf-profile.children.cycles-pp.unlock_page
0.58 ± 3% -0.1 0.51 ± 7% perf-profile.children.cycles-pp.down_read_trylock
0.58 ± 7% -0.1 0.52 ± 8% perf-profile.children.cycles-pp.__count_memcg_events
0.59 ± 3% -0.1 0.53 ± 6% perf-profile.children.cycles-pp.__mod_memcg_state
0.61 ± 4% -0.1 0.55 ± 5% perf-profile.children.cycles-pp.find_vma
0.55 ± 4% -0.1 0.50 ± 5% perf-profile.children.cycles-pp.vmacache_find
0.61 ± 4% -0.0 0.57 ± 5% perf-profile.children.cycles-pp.tlb_flush_mmu
0.46 ± 3% -0.0 0.42 ± 7% perf-profile.children.cycles-pp.release_pages
0.39 ± 5% -0.0 0.35 ± 9% perf-profile.children.cycles-pp.pmd_devmap_trans_unstable
0.54 ± 4% -0.0 0.49 ± 6% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
0.27 ± 7% -0.0 0.23 ± 9% perf-profile.children.cycles-pp.__might_sleep
0.56 ± 4% -0.0 0.53 ± 7% perf-profile.children.cycles-pp.apic_timer_interrupt
0.20 ± 7% -0.0 0.16 ± 2% perf-profile.children.cycles-pp.tick_sched_timer
0.24 ± 6% -0.0 0.21 ± 2% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.47 ± 5% -0.0 0.44 ± 8% perf-profile.children.cycles-pp._raw_spin_lock
0.18 ± 11% -0.0 0.15 ± 5% perf-profile.children.cycles-pp.update_process_times
0.33 ± 4% -0.0 0.31 ± 5% perf-profile.children.cycles-pp.hrtimer_interrupt
0.22 ± 6% -0.0 0.19 ± 7% perf-profile.children.cycles-pp.mem_cgroup_from_task
0.14 -0.0 0.12 ± 4% perf-profile.children.cycles-pp.native_iret
0.18 ± 5% -0.0 0.16 ± 10% perf-profile.children.cycles-pp.__x86_indirect_thunk_rax
0.16 -0.0 0.14 ± 9% perf-profile.children.cycles-pp.timestamp_truncate
0.11 ± 9% -0.0 0.09 ± 9% perf-profile.children.cycles-pp.perf_swevent_event
0.11 ± 9% -0.0 0.09 ± 5% perf-profile.children.cycles-pp.pmd_pfn
0.10 ± 5% -0.0 0.08 perf-profile.children.cycles-pp.__tlb_remove_page_size
0.15 ± 3% -0.0 0.14 ± 5% perf-profile.children.cycles-pp.page_rmapping
0.11 ± 4% -0.0 0.10 ± 8% perf-profile.children.cycles-pp.fpregs_assert_state_consistent
0.03 ±100% +0.0 0.06 perf-profile.children.cycles-pp.native_set_pte_at
0.12 ± 4% +0.1 0.20 ± 8% perf-profile.children.cycles-pp.PageHuge
1.94 ± 5% +0.6 2.52 ± 5% perf-profile.children.cycles-pp.page_add_file_rmap
1.79 ± 5% +0.7 2.47 ± 6% perf-profile.children.cycles-pp.page_remove_rmap
0.22 ± 6% +1.2 1.42 ± 7% perf-profile.children.cycles-pp.lock_page_memcg
16.04 ± 5% -1.8 14.23 ± 6% perf-profile.self.cycles-pp.prepare_exit_to_usermode
6.69 ± 5% -0.7 5.96 ± 6% perf-profile.self.cycles-pp.native_irq_return_iret
4.92 ± 6% -0.6 4.33 ± 7% perf-profile.self.cycles-pp.swapgs_restore_regs_and_return_to_usermode
2.28 ± 5% -0.3 2.01 ± 8% perf-profile.self.cycles-pp.__handle_mm_fault
1.58 ± 4% -0.2 1.40 ± 6% perf-profile.self.cycles-pp.___perf_sw_event
1.38 ± 6% -0.2 1.20 ± 8% perf-profile.self.cycles-pp.find_get_entry
1.10 ± 6% -0.2 0.94 ± 7% perf-profile.self.cycles-pp.shmem_fault
1.47 ± 7% -0.2 1.31 ± 6% perf-profile.self.cycles-pp.handle_mm_fault
1.60 ± 4% -0.2 1.45 ± 6% perf-profile.self.cycles-pp.xas_load
1.38 ± 7% -0.1 1.23 ± 7% perf-profile.self.cycles-pp.__do_page_fault
0.71 -0.1 0.57 ± 9% perf-profile.self.cycles-pp.__set_page_dirty_no_writeback
0.97 ± 6% -0.1 0.86 ± 7% perf-profile.self.cycles-pp.sync_regs
0.77 ± 7% -0.1 0.67 ± 8% perf-profile.self.cycles-pp.page_mapping
0.59 ± 7% -0.1 0.51 ± 4% perf-profile.self.cycles-pp.___might_sleep
0.61 ± 5% -0.1 0.54 ± 6% perf-profile.self.cycles-pp.find_lock_entry
0.54 ± 5% -0.1 0.46 ± 7% perf-profile.self.cycles-pp.up_read
0.54 ± 6% -0.1 0.47 ± 6% perf-profile.self.cycles-pp.handle_pte_fault
0.58 ± 5% -0.1 0.51 ± 7% perf-profile.self.cycles-pp.alloc_set_pte
0.56 ± 7% -0.1 0.50 ± 9% perf-profile.self.cycles-pp.__count_memcg_events
0.55 ± 4% -0.1 0.49 ± 7% perf-profile.self.cycles-pp.down_read_trylock
0.55 ± 2% -0.1 0.48 ± 5% perf-profile.self.cycles-pp.__mod_memcg_state
0.40 ± 7% -0.1 0.34 ± 4% perf-profile.self.cycles-pp.unlock_page
0.55 ± 3% -0.1 0.50 ± 5% perf-profile.self.cycles-pp.vmacache_find
0.49 ± 5% -0.1 0.44 ± 5% perf-profile.self.cycles-pp.page_fault
0.45 ± 3% -0.0 0.41 ± 7% perf-profile.self.cycles-pp.release_pages
0.35 ± 5% -0.0 0.31 ± 8% perf-profile.self.cycles-pp.pmd_devmap_trans_unstable
0.27 ± 3% -0.0 0.23 ± 5% perf-profile.self.cycles-pp.set_page_dirty
0.47 ± 6% -0.0 0.44 ± 7% perf-profile.self.cycles-pp._raw_spin_lock
0.22 ± 9% -0.0 0.19 ± 7% perf-profile.self.cycles-pp.__might_sleep
0.21 ± 2% -0.0 0.18 ± 9% perf-profile.self.cycles-pp.fault_dirty_shared_page
0.20 ± 7% -0.0 0.17 ± 4% perf-profile.self.cycles-pp.mem_cgroup_from_task
0.14 -0.0 0.12 ± 4% perf-profile.self.cycles-pp.native_iret
0.11 ± 9% -0.0 0.09 ± 9% perf-profile.self.cycles-pp.perf_swevent_event
0.17 ± 5% -0.0 0.15 ± 5% perf-profile.self.cycles-pp.do_page_fault
0.15 ± 3% -0.0 0.14 ± 6% perf-profile.self.cycles-pp.__do_fault
0.15 -0.0 0.13 ± 7% perf-profile.self.cycles-pp.timestamp_truncate
0.13 ± 7% -0.0 0.12 ± 8% perf-profile.self.cycles-pp._cond_resched
0.17 ± 5% -0.0 0.16 ± 6% perf-profile.self.cycles-pp.perf_exclude_event
0.08 ± 5% -0.0 0.07 ± 6% perf-profile.self.cycles-pp.pmd_pfn
0.10 ± 5% +0.0 0.13 ± 9% perf-profile.self.cycles-pp.PageHuge
0.70 ± 5% +0.1 0.81 ± 3% perf-profile.self.cycles-pp.page_add_file_rmap
0.21 ± 9% +1.2 1.41 ± 7% perf-profile.self.cycles-pp.lock_page_memcg
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
83fb1dde4f: WARNING:at_drivers/gpu/drm/vkms/vkms_crtc.c:#vkms_vblank_simulate
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 83fb1dde4f3464052160dd4a0d549d8e7ba0abed ("vkms fbdev")
git://git.infradead.org/users/ezequielg/linux unbind-your-drm-love-1
in testcase: trinity
with following parameters:
runtime: 300s
test-description: Trinity is a Linux system call fuzz tester.
test-url: http://codemonkey.org.uk/projects/trinity/
on test machine: qemu-system-i386 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
+-------------------------------------------------------------------+------------+------------+
| | baff7233df | 83fb1dde4f |
+-------------------------------------------------------------------+------------+------------+
| boot_successes | 374 | 337 |
| boot_failures | 3 | 39 |
| invoked_oom-killer:gfp_mask=0x | 3 | 1 |
| Mem-Info | 3 | 1 |
| WARNING:at_drivers/gpu/drm/vkms/vkms_crtc.c:#vkms_vblank_simulate | 0 | 38 |
| EIP:vkms_vblank_simulate | 0 | 38 |
| EIP:uart_write | 0 | 4 |
| EIP:__do_softirq | 0 | 27 |
| EIP:console_unlock | 0 | 7 |
| EIP:default_idle | 0 | 2 |
| EIP:e1000_xmit_frame | 0 | 2 |
| EIP:__do_sys_fstat64 | 0 | 1 |
| EIP:clear_highpage | 0 | 2 |
| EIP:kernel_init_free_pages | 0 | 7 |
| EIP:do_fast_syscall_32 | 0 | 1 |
| EIP:finish_task_switch | 0 | 1 |
| EIP:css_tryget | 0 | 1 |
| EIP:start_thread | 0 | 1 |
| EIP:e1000_update_stats | 0 | 5 |
| EIP:prepare_exit_to_usermode | 0 | 1 |
| EIP:serial8250_tx_empty | 0 | 1 |
| EIP:fault_in_pages_readable | 0 | 1 |
| EIP:clear_user | 0 | 1 |
| WARNING:CPU:#PID:#at/kbuild/src/consu | 0 | 1 |
| EIP:__mod_lruvec_state | 0 | 1 |
| EIP:copy_user_highpage | 0 | 1 |
| EIP:__d_lookup_rcu | 0 | 1 |
+-------------------------------------------------------------------+------------+------------+
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
[ 2.943030] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/test-unittest0/status
[ 2.944024] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/test-unittest1/status
[ 2.944927] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/test-unittest2/status
[ 2.945914] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/test-unittest3/status
[ 2.947041] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/test-unittest5/status
[ 2.948067] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/test-unittest6/status
[ 2.949017] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/test-unittest7/status
[ 2.950057] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/test-unittest8/status
[ 2.951099] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/test-unittest8/property-foo
[ 2.951947] OF: overlay: node_overlaps_later_cs: #6 overlaps with #7 @/testcase-data/overlay-node/test-bus/test-unittest8
[ 2.952698] OF: overlay: overlay #6 is not topmost
[ 2.954580] i2c i2c-1: Added multiplexed i2c bus 2
[ 2.954953] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/i2c-test-bus/test-unittest12/status
[ 2.956045] OF: overlay: WARNING: memory leak will occur if overlay removed, property: /testcase-data/overlay-node/test-bus/i2c-test-bus/test-unittest13/status
[ 2.957655] i2c i2c-1: Added multiplexed i2c bus 3
[ 2.958484] ### dt-test ### FAIL of_unittest_overlay_high_level():2475 overlay_base_root not initialized
[ 2.959163] ### dt-test ### end of unittest - 237 passed, 1 failed
[ 2.961465] 8021q: adding VLAN 0 to HW filter on device eth0
[ 2.973139] Sending DHCP requests .
[ 4.995602] e1000: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
[ 4.997961] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
[ 5.525039] ., OK
[ 5.525717] IP-Config: Got DHCP answer from 10.0.2.2, my address is 10.0.2.15
[ 5.526860] IP-Config: Complete:
[ 5.527416] device=eth0, hwaddr=52:54:00:12:34:56, ipaddr=10.0.2.15, mask=255.255.255.0, gw=10.0.2.2
[ 5.528910] host=vm-snb-i386-8ba8dfe5ab26, domain=, nis-domain=(none)
[ 5.530038] bootserver=10.0.2.2, rootserver=10.0.2.2, rootpath=
[ 5.530039] nameserver0=10.0.2.3
[ 5.531891] ALSA device list:
[ 5.532426] No soundcards found.
[ 5.533894] Freeing unused kernel image (initmem) memory: 1092K
[ 5.535000] Write protecting kernel text and read-only data: 20000k
[ 5.536037] rodata_test: all tests were successful
[ 5.536419] Run /init as init process
[ 5.540746] random: init: uninitialized urandom read (12 bytes read)
[ 5.591625] random: mountall: uninitialized urandom read (12 bytes read)
[ 5.670620] udevd[352]: starting version 175
LKP: HOSTNAME vm-snb-i386-8ba8dfe5ab26, MAC 7e:dc:c2:73:9d:bd, kernel 5.5.0-rc2-00003-g83fb1dde4f346 1, serial console /dev/ttyS0
[ 6.151250] Kernel tests: Boot OK!
[ 6.151253]
[ 6.178354] /lkp/lkp/src/bin/run-lkp
[ 6.178356]
[ 6.214833] init: failsafe main process (552) killed by TERM signal
[ 6.275335] random: dd: uninitialized urandom read (4096 bytes read)
[ 6.309458] init: rc-sysinit main process (570) killed by TERM signal
[ 6.441064] RESULT_ROOT=/result/trinity/300s/vm-snb-i386/quantal-core-i386-2019-04-26.cgz/i386-randconfig-h002-20191222/gcc-7/83fb1dde4f3464052160dd4a0d549d8e7ba0abed/166
[ 6.441066]
[ 6.478284] job=/lkp/jobs/scheduled/vm-snb-i386-8ba8dfe5ab26/trinity-300s-quantal-core-i386-2019-04-26.cgz-83fb1dde4f346405216-20191223-23423-iguiih-190.yaml
[ 6.478286]
[ 6.581980] result_service=raw_upload, RESULT_MNT=/inn/result, RESULT_ROOT=/inn/result/trinity/300s/vm-snb-i386/quantal-core-i386-2019-04-26.cgz/i386-randconfig-h002-20191222/gcc-7/83fb1dde4f3464052160dd4a0d549d8e7ba0abed/166
[ 6.581985]
[ 6.602212] run-job /lkp/jobs/scheduled/vm-snb-i386-8ba8dfe5ab26/trinity-300s-quantal-core-i386-2019-04-26.cgz-83fb1dde4f346405216-20191223-23423-iguiih-190.yaml
[ 6.602214]
[ 6.626294] init: networking main process (647) terminated with status 1
[ 6.656286] /usr/bin/wget -q --timeout=1800 --tries=1 --local-encoding=UTF-8 http://inn:80/~lkp/cgi-bin/lkp-jobfile-append-var?job_file=/lkp/jobs/sche... -O /dev/null
[ 6.656289]
[ 7.143211] target ucode:
[ 7.143214]
[ 8.236219] Seeding trinity based on i386-randconfig-h002-20191222
[ 8.236221]
[ 8.245888] groupadd: group 'nogroup' already exists
[ 8.245890]
[ 8.255261] 2019-12-22 19:08:17 chroot --userspec nobody:nogroup / trinity -q -q -l off -s 3430292853 -x get_robust_list -x remap_file_pages -N 999999999
[ 8.255262]
[ 8.377512] ------------[ cut here ]------------
[ 8.377955] WARNING: CPU: 0 PID: 734 at drivers/gpu/drm/vkms/vkms_crtc.c:21 vkms_vblank_simulate+0x3a/0x10d
[ 8.378881] Modules linked in:
[ 8.379117] CPU: 0 PID: 734 Comm: trinity Tainted: G T 5.5.0-rc2-00003-g83fb1dde4f346 #1
[ 8.379796] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 8.380442] EIP: vkms_vblank_simulate+0x3a/0x10d
[ 8.380784] Code: 89 45 ec 8b 73 28 8b 7b 2c 8b 43 20 8b 40 1c e8 43 a9 73 00 57 56 89 d1 89 c2 89 d8 e8 81 29 ba ff 83 f0 01 09 d0 5e 5f 74 02 <0f> 0b ff 05 54 9d 3a c2 8b 45 ec e8 45 84 fc ff 84 c0 75 0b 68 f2
[ 8.382214] EAX: 00000007 EBX: ef440db8 ECX: 00000000 EDX: 00000000
[ 8.382691] ESI: 00fe4c00 EDI: 00000000 EBP: f2821f14 ESP: f2821f00
[ 8.383190] DS: 007b ES: 007b FS: 0000 GS: 0033 SS: 0068 EFLAGS: 00010002
[ 8.383685] CR0: 80050033 CR2: b7f15000 CR3: 2fc3b000 CR4: 00040690
[ 8.384182] DR0: 00000000 DR1: 00000000 DR2: 00000000 DR3: 00000000
[ 8.384638] DR6: fffe0ff0 DR7: 00000400
[ 8.384963] Call Trace:
[ 8.385149] <IRQ>
[ 8.385305] ? vkms_disable_vblank+0xf/0xf
[ 8.385610] __hrtimer_run_queues+0x103/0x1ac
[ 8.386055] ? vkms_disable_vblank+0xf/0xf
[ 8.386377] hrtimer_run_queues+0xc0/0xd1
[ 8.386681] run_local_timers+0x8/0x21
[ 8.386991] update_process_times+0x17/0x3c
[ 8.387300] tick_periodic+0x65/0x71
[ 8.387646] tick_handle_periodic+0x13/0x4f
[ 8.388001] ? irq_set_chained_handler_and_data+0x4a/0x4a
[ 8.388398] timer_interrupt+0xf/0x16
[ 8.388671] __handle_irq_event_percpu+0x6d/0x131
[ 8.389058] ? irq_set_chained_handler_and_data+0x4a/0x4a
[ 8.389455] handle_irq_event_percpu+0x17/0x3d
[ 8.389783] handle_irq_event+0x23/0x35
[ 8.390107] handle_level_irq+0x49/0x70
[ 8.390410] handle_irq+0x5e/0x6e
[ 8.390656] </IRQ>
[ 8.390861] do_IRQ+0x60/0x85
[ 8.391085] common_interrupt+0xfe/0x104
[ 8.391379] EIP: clear_highpage+0x14/0x20
[ 8.391676] Code: eb 02 31 c0 5d c3 55 8b 50 04 89 e5 f6 c2 01 74 03 8d 42 ff 5d c3 55 89 e5 57 e8 56 09 f4 ff 89 c2 b9 00 04 00 00 31 c0 89 d7 <f3> ab 89 d0 e8 b9 07 f4 ff 5f 5d c3 55 83 c0 38 89 e5 53 89 cb 83
[ 8.393111] EAX: 00000000 EBX: 00000001 ECX: 00000400 EDX: ffffb000
[ 8.393568] ESI: ef96e630 EDI: ffffb000 EBP: efc7be50 ESP: efc7be4c
[ 8.394064] DS: 007b ES: 007b FS: 0000 GS: 0033 SS: 0068 EFLAGS: 00010246
[ 8.394571] ? clear_highpage+0x14/0x20
[ 8.394886] shmem_getpage_gfp+0x2c2/0x39a
[ 8.395189] shmem_fault+0x143/0x161
[ 8.395456] ? fput+0xd/0xf
[ 8.395667] __do_fault+0x39/0x57
[ 8.395956] handle_mm_fault+0x669/0x796
[ 8.396258] __do_page_fault+0x1fa/0x2c7
[ 8.396548] ? kvm_async_pf_task_wake+0x13d/0x13d
[ 8.396936] do_page_fault+0xa4/0xac
[ 8.397201] ? kvm_async_pf_task_wake+0x13d/0x13d
[ 8.397546] do_async_page_fault+0x23/0x49
[ 8.397890] common_exception_read_cr2+0x14c/0x151
[ 8.398242] EIP: 0x807e58a
[ 8.398465] Code: 8d 92 80 00 00 00 73 96 5f 81 c1 80 00 00 00 01 ca ff 24 8d 70 8c 0f 08 90 8d b4 26 00 00 00 00 89 d7 89 ca c1 e9 02 83 e2 03 <f3> ab 74 12 83 fa 02 72 0b 66 89 07 83 c7 02 83 ea 02 74 02 88 07
[ 8.399905] EAX: 67676767 EBX: 0813d000 ECX: 00039a82 EDX: 00000000
[ 8.400362] ESI: b7599000 EDI: b7f15000 EBP: bfc1e548 ESP: bfc1e448
[ 8.400819] DS: 007b ES: 007b FS: 0000 GS: 0033 SS: 007b EFLAGS: 00010246
[ 8.401356] ? kvm_async_pf_task_wake+0x13d/0x13d
[ 8.401702] ---[ end trace fb536cd465a3af93 ]---
To reproduce:
# build kernel
cd linux
cp config-5.5.0-rc2-00003-g83fb1dde4f346 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=i386 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen
[btrfs] 2c54397216: WARNING:at_fs/btrfs/block-group.c:#btrfs_create_pending_block_groups[btrfs]
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 2c54397216ca913d2df97baf2957cf905cb33b64 ("btrfs: kill min_allocable_bytes in inc_block_group_ro")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
in testcase: xfstests
with following parameters:
disk: 6HDD
fs: btrfs
test: btrfs-group02
test-description: xfstests is a regression test suite for xfs and other filesystems.
test-url: git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
+-----------------------------------------------------------------------------+------------+------------+
| | bd879386d3 | 2c54397216 |
+-----------------------------------------------------------------------------+------------+------------+
| boot_successes | 12 | 0 |
| boot_failures | 0 | 12 |
| WARNING:at_fs/btrfs/block-group.c:#btrfs_create_pending_block_groups[btrfs] | 0 | 12 |
| RIP:btrfs_create_pending_block_groups[btrfs] | 0 | 12 |
+-----------------------------------------------------------------------------+------------+------------+
If you fix the issue, kindly add following tag
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
[ 198.327017] WARNING: CPU: 1 PID: 13361 at fs/btrfs/block-group.c:1876 btrfs_create_pending_block_groups+0x1dc/0x250 [btrfs]
[ 198.332369] Modules linked in: btrfs blake2b_generic xor zstd_decompress zstd_compress raid6_pq libcrc32c ext4 mbcache jbd2 dm_mod bochs_drm drm_vram_helper drm_ttm_helper ttm drm_kms_helper intel_rapl_msr intel_rapl_common sr_mod cdrom sg crct10dif_pclmul ata_generic pata_acpi crc32_pclmul crc32c_intel ghash_clmulni_intel snd_pcm syscopyarea sysfillrect sysimgblt ppdev fb_sys_fops snd_timer snd drm soundcore aesni_intel pcspkr crypto_simd joydev cryptd glue_helper serio_raw ata_piix libata i2c_piix4 parport_pc floppy parport ip_tables [last unloaded: blake2b_generic]
[ 198.339163] CPU: 1 PID: 13361 Comm: btrfs Not tainted 5.5.0-rc2-00031-g2c54397216ca9 #1
[ 198.340479] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 198.341737] RIP: 0010:btrfs_create_pending_block_groups+0x1dc/0x250 [btrfs]
[ 198.342899] Code: e9 58 ff ff ff 48 8b 45 50 f0 48 0f ba a8 38 ce 00 00 02 72 17 41 83 fc fb 74 32 44 89 e6 48 c7 c7 a8 1e 62 c0 e8 14 26 4a e0 <0f> 0b 44 89 e1 ba 54 07 00 00 48 c7 c6 00 01 61 c0 48 89 ef e8 33
[ 198.345655] RSP: 0018:ffffa4cbc086fad8 EFLAGS: 00010282
[ 198.346708] RAX: 0000000000000000 RBX: ffff92721da17d08 RCX: 0000000000000000
[ 198.347950] RDX: 0000000000000001 RSI: ffff92727fd19b38 RDI: ffff92727fd19b38
[ 198.349150] RBP: ffff92715aad3d68 R08: 0000000000000644 R09: 0000000000aaaaaa
[ 198.350323] R10: ffff92720f8fc8e8 R11: ffff927257aa9750 R12: 00000000ffffffe4
[ 198.351546] R13: ffff92715aad3dc0 R14: ffff9272098f0000 R15: ffff92721db3f800
[ 198.352786] FS: 00007f8e2e6b68c0(0000) GS:ffff92727fd00000(0000) knlGS:0000000000000000
[ 198.353994] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 198.355119] CR2: 0000000002072048 CR3: 00000001d0178000 CR4: 00000000000406e0
[ 198.356425] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 198.357676] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 198.358896] Call Trace:
[ 198.359949] btrfs_start_dirty_block_groups+0x3ad/0x540 [btrfs]
[ 198.361216] btrfs_commit_transaction+0x2a8/0xaa0 [btrfs]
[ 198.362292] ? start_transaction+0xba/0x4b0 [btrfs]
[ 198.363409] reset_balance_state+0x15d/0x1a0 [btrfs]
[ 198.364607] btrfs_balance+0x47f/0x6c0 [btrfs]
[ 198.365655] btrfs_ioctl_balance+0x2c2/0x370 [btrfs]
[ 198.366716] btrfs_ioctl+0x153f/0x2f50 [btrfs]
[ 198.367755] ? handle_pte_fault+0x411/0xb70
[ 198.368799] ? _cond_resched+0x19/0x30
[ 198.369760] ? __handle_mm_fault+0x53b/0x660
[ 198.370723] ? cred_has_capability+0x85/0x130
[ 198.371732] ? do_vfs_ioctl+0xa1/0x740
[ 198.372708] ? btrfs_ioctl_get_supported_features+0x30/0x30 [btrfs]
[ 198.373758] do_vfs_ioctl+0xa1/0x740
[ 198.374655] ? handle_mm_fault+0xdd/0x210
[ 198.375626] ksys_ioctl+0x70/0x80
[ 198.376545] __x64_sys_ioctl+0x16/0x20
[ 198.377504] do_syscall_64+0x5b/0x1f0
[ 198.378415] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 198.379451] RIP: 0033:0x7f8e2d73e017
[ 198.380413] Code: 00 00 00 48 8b 05 81 7e 2b 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 51 7e 2b 00 f7 d8 64 89 01 48
[ 198.383120] RSP: 002b:00007ffe6f498f98 EFLAGS: 00000202 ORIG_RAX: 0000000000000010
[ 198.384387] RAX: ffffffffffffffda RBX: 0000000000000001 RCX: 00007f8e2d73e017
[ 198.385616] RDX: 00007ffe6f499028 RSI: 00000000c4009420 RDI: 0000000000000004
[ 198.386803] RBP: 00007ffe6f499028 R08: 000000000206a000 R09: 0000000000000000
[ 198.388013] R10: 0000000000000541 R11: 0000000000000202 R12: 0000000000000004
[ 198.389126] R13: 00007ffe6f49bb97 R14: 0000000000000003 R15: 0000000000000000
[ 198.390220] ---[ end trace 910e18c1102a58da ]---
To reproduce:
# build kernel
cd linux
cp config-5.5.0-rc2-00031-g2c54397216ca9 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage modules
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 INSTALL_MOD_PATH=<mod-install-dir> modules_install
cd <mod-install-dir>
find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> -m modules.cgz job-script # job-script is attached in this email
Thanks,
Rong Chen
[mm, meminit] 7dfa51beac: stress-ng.hdd.ops_per_sec 34.4% improvement
by kernel test robot
Greeting,
FYI, we noticed a 34.4% improvement of stress-ng.hdd.ops_per_sec due to commit:
commit: 7dfa51beacac6616dc460746238ecc5fe658b676 ("mm, meminit: recalculate pcpu batch and high limits after init completes")
https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable-rc.git linux-4.19.y
in testcase: stress-ng
on test machine: 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 192G memory
with following parameters:
nr_threads: 100%
disk: 1HDD
testtime: 1s
class: io
cpufreq_governor: performance
ucode: 0x500002c
In addition to that, the commit also has significant impact on the following tests:
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
class/compiler/cpufreq_governor/disk/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime/ucode:
io/gcc-7/performance/1HDD/x86_64-rhel-7.6/100%/debian-x86_64-2019-11-14.cgz/lkp-csl-2sp5/stress-ng/1s/0x500002c
commit:
8e6bf4bc3a ("mm: memcontrol: fix network errors from failing __GFP_ATOMIC charges")
7dfa51beac ("mm, meminit: recalculate pcpu batch and high limits after init completes")
8e6bf4bc3a88e4b8 7dfa51beacac6616dc460746238
---------------- ---------------------------
%stddev %change %stddev
\ | \
1064569 +34.0% 1426854 stress-ng.hdd.ops
1059095 +34.4% 1423309 stress-ng.hdd.ops_per_sec
7224801 ± 7% +26.1% 9110665 stress-ng.revio.ops
7214686 ± 7% +26.2% 9105844 stress-ng.revio.ops_per_sec
1062 +33.5% 1418 stress-ng.sync-file.ops
1057 +33.9% 1415 stress-ng.sync-file.ops_per_sec
7365 +22.3% 9006 ± 9% stress-ng.time.maximum_resident_set_size
538839 ± 7% +34.2% 722944 ± 24% cpuidle.C1.time
0.03 ± 32% -0.0 0.00 ±117% mpstat.cpu.all.soft%
237.06 +2.6% 243.21 turbostat.PkgWatt
105.80 +5.1% 111.22 turbostat.RAMWatt
6479872 +9.0% 7063040 ± 7% meminfo.DirectMap2M
536012 ± 5% -11.0% 477132 ± 3% meminfo.DirectMap4k
2546625 +10.7% 2818272 ± 2% meminfo.Memused
7375 ± 27% +40.8% 10381 ± 13% softirqs.CPU3.TIMER
8600 ± 4% +17.4% 10094 ± 16% softirqs.CPU50.TIMER
207438 +22.0% 252975 ± 17% softirqs.RCU
13429090 ± 2% +28.8% 17295184 ± 2% numa-numastat.node0.local_node
13433894 ± 2% +28.9% 17318737 ± 2% numa-numastat.node0.numa_hit
13154044 ± 3% +33.6% 17572232 numa-numastat.node1.local_node
13176005 ± 3% +33.4% 17580204 ± 2% numa-numastat.node1.numa_hit
50502 ± 4% +11.9% 56492 ± 6% slabinfo.radix_tree_node.active_objs
911.00 ± 3% +11.7% 1017 ± 6% slabinfo.radix_tree_node.active_slabs
51038 ± 3% +11.7% 57016 ± 6% slabinfo.radix_tree_node.num_objs
911.00 ± 3% +11.7% 1017 ± 6% slabinfo.radix_tree_node.num_slabs
3.105e+10 ± 8% +9.0% 3.384e+10 ± 6% perf-stat.i.branch-instructions
64.63 +4.6 69.26 perf-stat.i.cache-miss-rate%
4.041e+10 ± 8% +8.6% 4.39e+10 ± 7% perf-stat.i.dTLB-loads
1.795e+11 ± 10% +12.0% 2.01e+11 ± 7% perf-stat.i.instructions
0.72 ± 8% +9.5% 0.79 ± 8% perf-stat.i.ipc
829.43 ± 14% +21.3% 1006 ± 6% perf-stat.overall.instructions-per-iTLB-miss
0.71 ± 8% +12.4% 0.80 ± 9% perf-stat.overall.ipc
4768 ± 8% -25.4% 3555 ± 7% sched_debug.cfs_rq:/.min_vruntime.stddev
-8629 -39.7% -5199 sched_debug.cfs_rq:/.spread0.min
4772 ± 8% -25.5% 3555 ± 7% sched_debug.cfs_rq:/.spread0.stddev
226.50 ± 9% -14.0% 194.75 ± 9% sched_debug.cpu.cpu_load[4].max
23.73 ± 9% -13.7% 20.49 ± 8% sched_debug.cpu.cpu_load[4].stddev
212.01 ± 6% -17.6% 174.64 ± 6% sched_debug.cpu.curr->pid.avg
587.30 ± 2% -8.0% 540.43 ± 4% sched_debug.cpu.curr->pid.stddev
0.12 ± 7% -15.6% 0.10 ± 9% sched_debug.cpu.nr_running.avg
0.32 ± 3% -7.2% 0.30 ± 3% sched_debug.cpu.nr_running.stddev
6243 ±121% +1386.3% 92800 ± 56% numa-meminfo.node0.AnonHugePages
14002 ± 23% +31.6% 18424 ± 6% numa-meminfo.node0.Inactive
13778 ± 22% +27.3% 17538 ± 6% numa-meminfo.node0.Inactive(anon)
8142 ± 10% +14.6% 9328 ± 7% numa-meminfo.node0.KernelStack
14482 ± 20% +19.9% 17369 ± 7% numa-meminfo.node0.Mapped
3814 ± 75% +104.4% 7798 ± 41% numa-meminfo.node0.PageTables
14459 ± 22% +29.2% 18686 ± 7% numa-meminfo.node0.Shmem
7037 ± 48% -62.4% 2646 ± 43% numa-meminfo.node1.Inactive
6367 ± 49% -58.5% 2643 ± 43% numa-meminfo.node1.Inactive(anon)
7070 ± 47% -58.3% 2946 ± 49% numa-meminfo.node1.Shmem
20873 +5.1% 21935 proc-vmstat.nr_slab_reclaimable
930.50 ± 52% +381.2% 4477 ± 69% proc-vmstat.numa_hint_faults
244.50 ±169% +1253.6% 3309 ± 97% proc-vmstat.numa_hint_faults_local
25260764 ± 2% +25.6% 31729676 ± 4% proc-vmstat.numa_hit
25243592 ± 2% +25.6% 31707862 ± 4% proc-vmstat.numa_local
685.25 ± 13% +629.7% 5000 ± 76% proc-vmstat.numa_pages_migrated
3303 ± 82% +2080.2% 72026 ± 52% proc-vmstat.numa_pte_updates
26447003 +31.7% 34820729 proc-vmstat.pgalloc_normal
26290115 +31.6% 34589162 proc-vmstat.pgfree
685.25 ± 13% +629.7% 5000 ± 76% proc-vmstat.pgmigrate_success
26120647 +30.8% 34163755 proc-vmstat.unevictable_pgs_culled
3460 ± 22% +26.7% 4384 ± 6% numa-vmstat.node0.nr_inactive_anon
55.25 ±172% +300.0% 221.00 ± 2% numa-vmstat.node0.nr_inactive_file
999.00 ± 79% +94.5% 1943 ± 40% numa-vmstat.node0.nr_page_table_pages
3635 ± 23% +28.5% 4671 ± 7% numa-vmstat.node0.nr_shmem
3460 ± 22% +26.7% 4384 ± 6% numa-vmstat.node0.nr_zone_inactive_anon
55.25 ±172% +300.0% 221.00 ± 2% numa-vmstat.node0.nr_zone_inactive_file
5533282 ± 2% +27.4% 7051670 numa-vmstat.node0.numa_hit
5533203 ± 2% +27.3% 7044738 numa-vmstat.node0.numa_local
204.75 ± 33% +3352.5% 7069 ± 55% numa-vmstat.node0.numa_other
200298 ± 4% -11.5% 177233 ± 6% numa-vmstat.node1.nr_file_pages
1587 ± 50% -58.4% 660.50 ± 43% numa-vmstat.node1.nr_inactive_anon
1766 ± 48% -58.3% 736.25 ± 49% numa-vmstat.node1.nr_shmem
1587 ± 50% -58.4% 660.50 ± 43% numa-vmstat.node1.nr_zone_inactive_anon
5283433 ± 5% +30.5% 6894903 ± 2% numa-vmstat.node1.numa_hit
5113579 ± 5% +31.6% 6730735 ± 2% numa-vmstat.node1.numa_local
6.46 ±101% -4.2 2.27 ±173% perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
6.46 ±101% -4.2 2.27 ±173% perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault
6.46 ±101% -4.2 2.27 ±173% perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
6.46 ±101% -4.2 2.27 ±173% perf-profile.calltrace.cycles-pp.do_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
6.46 ±101% -4.2 2.27 ±173% perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
6.46 ±101% -4.2 2.27 ±173% perf-profile.calltrace.cycles-pp.page_fault
4.01 ±105% -4.0 0.00 perf-profile.calltrace.cycles-pp.lookup_fast.path_openat.do_filp_open.do_sys_open.do_syscall_64
6.46 ±101% -4.2 2.27 ±173% perf-profile.children.cycles-pp.do_page_fault
6.46 ±101% -4.2 2.27 ±173% perf-profile.children.cycles-pp.__do_page_fault
6.46 ±101% -4.2 2.27 ±173% perf-profile.children.cycles-pp.__handle_mm_fault
6.46 ±101% -4.2 2.27 ±173% perf-profile.children.cycles-pp.do_fault
6.46 ±101% -4.2 2.27 ±173% perf-profile.children.cycles-pp.handle_mm_fault
6.46 ±101% -4.2 2.27 ±173% perf-profile.children.cycles-pp.page_fault
4.01 ±105% -2.8 1.19 ±173% perf-profile.children.cycles-pp.lookup_fast
4.98 ± 72% -2.7 2.27 ±173% perf-profile.children.cycles-pp.read
23.50 ±114% +348.9% 105.50 ± 57% interrupts.CPU13.RES:Rescheduling_interrupts
11174 ± 2% -16.2% 9366 ± 11% interrupts.CPU24.LOC:Local_timer_interrupts
14.75 ± 46% +845.8% 139.50 ± 45% interrupts.CPU24.RES:Rescheduling_interrupts
10522 ± 5% -9.8% 9488 ± 7% interrupts.CPU32.LOC:Local_timer_interrupts
72.75 ± 30% -76.6% 17.00 ± 85% interrupts.CPU32.RES:Rescheduling_interrupts
10478 ± 4% -9.3% 9508 ± 7% interrupts.CPU33.LOC:Local_timer_interrupts
10559 ± 5% -10.4% 9457 ± 8% interrupts.CPU35.LOC:Local_timer_interrupts
10548 ± 4% -8.6% 9639 ± 7% interrupts.CPU36.LOC:Local_timer_interrupts
10537 ± 5% -9.5% 9538 ± 8% interrupts.CPU37.LOC:Local_timer_interrupts
10512 ± 5% -6.6% 9820 ± 2% interrupts.CPU44.LOC:Local_timer_interrupts
10538 ± 5% -10.2% 9466 ± 7% interrupts.CPU45.LOC:Local_timer_interrupts
10534 ± 4% -11.8% 9287 ± 10% interrupts.CPU47.LOC:Local_timer_interrupts
10424 ± 5% -11.9% 9186 ± 11% interrupts.CPU72.LOC:Local_timer_interrupts
10459 ± 4% -10.2% 9387 ± 7% interrupts.CPU73.LOC:Local_timer_interrupts
10457 ± 4% -10.2% 9395 ± 7% interrupts.CPU74.LOC:Local_timer_interrupts
10503 ± 5% -8.3% 9629 ± 9% interrupts.CPU83.LOC:Local_timer_interrupts
88.75 ±103% -82.0% 16.00 ±108% interrupts.CPU85.RES:Rescheduling_interrupts
10520 ± 5% -8.8% 9596 ± 5% interrupts.CPU89.LOC:Local_timer_interrupts
10534 ± 4% -10.4% 9440 ± 8% interrupts.CPU91.LOC:Local_timer_interrupts
10540 ± 4% -9.6% 9525 ± 6% interrupts.CPU92.LOC:Local_timer_interrupts
10512 ± 4% -23.5% 8041 ± 26% interrupts.CPU95.LOC:Local_timer_interrupts
[stress-ng.hdd.ops: bisect-good (*) samples cluster near 1e+06 ops, bisect-bad (O) samples near 1.4e+06]
[stress-ng.hdd.ops_per_sec: same pattern as stress-ng.hdd.ops]
[stress-ng.revio.ops: bisect-good (*) samples near 8e+06, bisect-bad (O) samples near 9e+06]
[stress-ng.revio.ops_per_sec: same pattern as stress-ng.revio.ops]
[stress-ng.sync-file.ops: bisect-good (*) samples near 1000, bisect-bad (O) samples near 1400]
[stress-ng.sync-file.ops_per_sec: same pattern as stress-ng.sync-file.ops]
[*] bisect-good sample
[O] bisect-bad sample
***************************************************************************************************
vm-snb: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[mm/lru] 3f8db6a891: reaim.jobs_per_min -35.2% regression
by kernel test robot
Greetings,
FYI, we noticed a -35.2% regression of reaim.jobs_per_min due to commit:
commit: 3f8db6a8910d27f136ed15a613642e96a450cd02 ("mm/lru: replace pgdat lru_lock with lruvec lock")
https://github.com/alexshi/linux.git lru-next
in testcase: reaim
on test machine: 48 threads Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz with 64G memory
with the following parameters:
runtime: 300s
nr_task: 1000t
test: mem_rtns_1
cpufreq_governor: performance
ucode: 0x42e
test-description: REAIM is an updated and improved version of the AIM 7 benchmark.
test-url: https://sourceforge.net/projects/re-aim-7/
In addition to that, the commit also has significant impact on the following tests:
+------------------+---------------------------------------------------------------------------+
| testcase: change | will-it-scale: will-it-scale.per_process_ops -1.1% regression |
| test machine | 192 threads Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory |
| test parameters | cpufreq_governor=performance |
| | mode=process |
| | nr_task=16 |
| | test=page_fault3 |
| | ucode=0x500002c |
+------------------+---------------------------------------------------------------------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/1000t/debian-x86_64-2019-11-14.cgz/300s/lkp-ivb-2ep1/mem_rtns_1/reaim/0x42e
commit:
86b4695edf ("mm/vmscan: remove unnecessary lruvec adding")
3f8db6a891 ("mm/lru: replace pgdat lru_lock with lruvec lock")
86b4695edff2a203 3f8db6a8910d27f136ed15a6136
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
3:4 -75% :4 dmesg.WARNING:at#for_ip_swapgs_restore_regs_and_return_to_usermode/0x
5:4 -63% 2:4 perf-profile.calltrace.cycles-pp.error_entry
5:4 -67% 2:4 perf-profile.children.cycles-pp.error_entry
4:4 -50% 2:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
1256 +83.3% 2303 reaim.child_systime
150775 -35.2% 97735 reaim.jobs_per_min
150.78 -35.2% 97.73 reaim.jobs_per_min_child
152897 -35.5% 98604 reaim.max_jobs_per_min
39.80 +54.3% 61.40 reaim.parent_time
5.33 +58.7% 8.46 reaim.std_dev_time
335.28 -5.3% 317.54 reaim.time.elapsed_time
335.28 -5.3% 317.54 reaim.time.elapsed_time.max
3782105 -2.8% 3676557 reaim.time.involuntary_context_switches
1.258e+09 -37.4% 7.872e+08 reaim.time.minor_page_faults
4545 +1.8% 4629 reaim.time.percent_of_cpu_this_job_got
10051 +14.6% 11516 reaim.time.system_time
5191 ± 2% -38.6% 3185 ± 6% reaim.time.user_time
36468 ± 2% -40.7% 21622 reaim.time.voluntary_context_switches
800000 -37.5% 500000 reaim.workload
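The %change column in these tables appears to be the relative difference between the base and patched means. A minimal sketch of that reading, using two rows from the reaim block above (the helper name is ours, not LKP's):

```python
def pct_change(base, patched):
    """Relative change as printed in the %change column (hypothetical reconstruction)."""
    return (patched - base) / base * 100

# reaim.jobs_per_min: 150775 -> 97735, reported as -35.2%
print(round(pct_change(150775, 97735), 1))   # -35.2
# reaim.workload: 800000 -> 500000, reported as -37.5%
print(round(pct_change(800000, 500000), 1))  # -37.5
```

Both reconstructed values match the reported columns, which also explains why a -35.2% jobs_per_min drop pairs with a -37.5% workload drop: the run completed fewer jobs in roughly the same wall time.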
6.846e+08 ± 12% -29.7% 4.815e+08 ± 11% cpuidle.C6.time
968496 ± 7% -34.3% 636310 ± 5% cpuidle.C6.usage
61.50 +21.1% 74.50 ± 2% vmstat.cpu.sy
31.25 ± 2% -34.4% 20.50 ± 7% vmstat.cpu.us
12337 +1.9% 12572 vmstat.system.cs
24.06 -6.9 17.17 ± 5% mpstat.cpu.all.gnice%
5.31 ± 4% -1.6 3.70 ± 5% mpstat.cpu.all.idle%
46.57 ± 2% +15.4 61.96 ± 3% mpstat.cpu.all.sys%
24.06 -6.9 17.17 ± 5% mpstat.cpu.all.usr%
6.268e+08 -37.5% 3.917e+08 numa-numastat.node0.local_node
6.268e+08 -37.5% 3.917e+08 numa-numastat.node0.numa_hit
6.344e+08 -37.2% 3.987e+08 numa-numastat.node1.local_node
6.344e+08 -37.2% 3.987e+08 numa-numastat.node1.numa_hit
2825 +1.8% 2877 turbostat.Avg_MHz
962929 ± 7% -34.3% 632908 ± 5% turbostat.C6
178.02 -2.2% 174.17 turbostat.CorWatt
207.31 -1.9% 203.39 turbostat.PkgWatt
230634 ± 12% +22.8% 283278 numa-meminfo.node0.Active
228377 ± 11% +23.1% 281041 numa-meminfo.node0.Active(anon)
79411 ± 28% +59.7% 126798 ± 8% numa-meminfo.node0.AnonHugePages
210965 ± 11% +29.6% 273401 ± 2% numa-meminfo.node0.AnonPages
9406 ± 60% +89.3% 17810 ± 21% numa-meminfo.node0.KernelStack
97368 ± 20% -53.5% 45323 ± 17% numa-meminfo.node1.AnonHugePages
232506 ± 10% -22.0% 181380 ± 4% numa-meminfo.node1.AnonPages
81316 ± 18% -31.2% 55972 ± 26% numa-meminfo.node1.SUnreclaim
57100 ± 11% +23.1% 70269 numa-vmstat.node0.nr_active_anon
52756 ± 10% +29.6% 68357 ± 2% numa-vmstat.node0.nr_anon_pages
9387 ± 60% +89.6% 17796 ± 21% numa-vmstat.node0.nr_kernel_stack
57100 ± 11% +23.1% 70270 numa-vmstat.node0.nr_zone_active_anon
3.155e+08 -37.7% 1.966e+08 numa-vmstat.node0.numa_hit
3.154e+08 -37.7% 1.966e+08 numa-vmstat.node0.numa_local
58273 ± 10% -22.2% 45316 ± 4% numa-vmstat.node1.nr_anon_pages
20337 ± 18% -31.2% 13990 ± 26% numa-vmstat.node1.nr_slab_unreclaimable
3.194e+08 -37.3% 2.003e+08 numa-vmstat.node1.numa_hit
3.193e+08 -37.3% 2.002e+08 numa-vmstat.node1.numa_local
6511 -19.7% 5229 slabinfo.Acpi-State.active_objs
6511 -19.7% 5229 slabinfo.Acpi-State.num_objs
420.75 ± 7% -31.0% 290.25 ± 6% slabinfo.buffer_head.active_objs
420.75 ± 7% -31.0% 290.25 ± 6% slabinfo.buffer_head.num_objs
6595 -21.4% 5180 slabinfo.files_cache.active_objs
6597 -21.5% 5180 slabinfo.files_cache.num_objs
5049 ± 6% -16.8% 4201 slabinfo.mm_struct.active_objs
5116 ± 6% -17.2% 4236 slabinfo.mm_struct.num_objs
13949 -22.2% 10847 slabinfo.vmap_area.active_objs
13949 -22.1% 10868 slabinfo.vmap_area.num_objs
117434 +2.3% 120139 proc-vmstat.nr_active_anon
111346 +2.1% 113726 proc-vmstat.nr_anon_pages
24692 +1.8% 25128 proc-vmstat.nr_kernel_stack
9976 +4.7% 10445 proc-vmstat.nr_shmem
35477 -2.2% 34703 proc-vmstat.nr_slab_unreclaimable
117434 +2.3% 120139 proc-vmstat.nr_zone_active_anon
1.261e+09 -37.3% 7.902e+08 proc-vmstat.numa_hit
1.261e+09 -37.3% 7.902e+08 proc-vmstat.numa_local
8558 ± 4% +14.7% 9814 ± 2% proc-vmstat.pgactivate
1.261e+09 -37.3% 7.905e+08 proc-vmstat.pgalloc_normal
1.259e+09 -37.4% 7.879e+08 proc-vmstat.pgfault
1.261e+09 -37.3% 7.904e+08 proc-vmstat.pgfree
337.50 ± 21% -20.4% 268.50 ± 3% interrupts.35:PCI-MSI.2621441-edge.eth0-TxRx-0
180.75 ± 3% +31.5% 237.75 ± 26% interrupts.36:PCI-MSI.2621442-edge.eth0-TxRx-1
185.25 ± 54% +190.6% 538.25 ± 65% interrupts.CPU12.RES:Rescheduling_interrupts
1061 ± 20% -57.5% 450.75 ± 33% interrupts.CPU13.RES:Rescheduling_interrupts
1326 ± 65% -78.5% 285.50 ± 32% interrupts.CPU14.RES:Rescheduling_interrupts
4977 ± 28% +42.2% 7077 ± 23% interrupts.CPU2.NMI:Non-maskable_interrupts
4977 ± 28% +42.2% 7077 ± 23% interrupts.CPU2.PMI:Performance_monitoring_interrupts
337.50 ± 21% -20.4% 268.50 ± 3% interrupts.CPU24.35:PCI-MSI.2621441-edge.eth0-TxRx-0
180.75 ± 3% +31.5% 237.75 ± 26% interrupts.CPU25.36:PCI-MSI.2621442-edge.eth0-TxRx-1
4161 +89.7% 7894 ± 9% interrupts.CPU25.NMI:Non-maskable_interrupts
4161 +89.7% 7894 ± 9% interrupts.CPU25.PMI:Performance_monitoring_interrupts
251.75 ± 55% +118.3% 549.50 ± 14% interrupts.CPU36.RES:Rescheduling_interrupts
4162 +49.7% 6231 ± 33% interrupts.CPU39.NMI:Non-maskable_interrupts
4162 +49.7% 6231 ± 33% interrupts.CPU39.PMI:Performance_monitoring_interrupts
882.00 ± 19% -55.8% 390.00 ± 31% interrupts.CPU40.RES:Rescheduling_interrupts
6011 ± 31% +35.9% 8169 ± 2% interrupts.CPU46.NMI:Non-maskable_interrupts
6011 ± 31% +35.9% 8169 ± 2% interrupts.CPU46.PMI:Performance_monitoring_interrupts
32.12 ± 5% +18.0% 37.91 ± 9% sched_debug.cfs_rq:/.load_avg.avg
42.31 ± 6% +33.0% 56.27 ± 20% sched_debug.cfs_rq:/.load_avg.stddev
147.71 ± 16% -33.2% 98.62 ± 4% sched_debug.cfs_rq:/.nr_spread_over.max
22.30 ± 11% -20.4% 17.76 ± 5% sched_debug.cfs_rq:/.nr_spread_over.stddev
4.39 ± 33% +140.2% 10.55 ± 33% sched_debug.cfs_rq:/.removed.load_avg.avg
26.43 ± 14% +52.0% 40.17 ± 16% sched_debug.cfs_rq:/.removed.load_avg.stddev
201.95 ± 34% +140.8% 486.26 ± 33% sched_debug.cfs_rq:/.removed.runnable_sum.avg
1215 ± 14% +52.5% 1852 ± 16% sched_debug.cfs_rq:/.removed.runnable_sum.stddev
1.05 ± 47% +214.1% 3.28 ± 42% sched_debug.cfs_rq:/.removed.util_avg.avg
46.62 ± 51% +49.0% 69.46 ± 24% sched_debug.cfs_rq:/.removed.util_avg.max
6.74 ± 50% +91.5% 12.91 ± 28% sched_debug.cfs_rq:/.removed.util_avg.stddev
65.25 ± 11% -24.4% 49.33 ± 7% sched_debug.cpu.clock.stddev
65.25 ± 11% -24.4% 49.33 ± 7% sched_debug.cpu.clock_task.stddev
7927 -26.4% 5832 sched_debug.cpu.curr->pid.avg
8901 -17.6% 7337 sched_debug.cpu.curr->pid.max
6200 ± 14% -33.4% 4128 ± 14% sched_debug.cpu.curr->pid.min
0.00 ± 8% -19.6% 0.00 ± 3% sched_debug.cpu.next_balance.stddev
16.40 -14.4% 14.03 ± 4% sched_debug.cpu.nr_running.avg
21.46 ± 3% -20.2% 17.12 ± 3% sched_debug.cpu.nr_running.max
2.51 ± 15% -31.0% 1.73 ± 27% sched_debug.cpu.nr_running.stddev
55750 ± 4% +11.5% 62190 ± 4% sched_debug.cpu.nr_switches.max
3828 ± 8% +30.2% 4982 ± 7% sched_debug.cpu.nr_switches.stddev
53621 ± 4% +11.4% 59722 ± 5% sched_debug.cpu.sched_count.max
3574 ± 9% +31.1% 4686 ± 9% sched_debug.cpu.sched_count.stddev
256.14 ± 9% -29.2% 181.32 sched_debug.cpu.sched_goidle.avg
71.75 ± 8% -47.6% 37.58 ± 10% sched_debug.cpu.sched_goidle.min
2070 ± 6% +21.7% 2519 ± 9% sched_debug.cpu.ttwu_count.stddev
1986 ± 7% +22.1% 2425 ± 9% sched_debug.cpu.ttwu_local.stddev
147167 ± 2% -10.1% 132251 ± 3% softirqs.CPU0.RCU
146213 -10.2% 131325 ± 3% softirqs.CPU1.RCU
145937 ± 2% -10.6% 130404 ± 3% softirqs.CPU10.RCU
145337 -10.1% 130706 ± 3% softirqs.CPU11.RCU
147594 ± 12% -17.6% 121678 softirqs.CPU12.TIMER
146327 -10.4% 131166 ± 3% softirqs.CPU13.RCU
134798 ± 2% -10.9% 120058 softirqs.CPU13.TIMER
145707 ± 2% -10.5% 130466 ± 3% softirqs.CPU14.RCU
134965 ± 2% -11.6% 119247 softirqs.CPU14.TIMER
145600 -9.9% 131146 ± 3% softirqs.CPU15.RCU
133677 -10.5% 119602 softirqs.CPU15.TIMER
145869 -10.6% 130418 ± 3% softirqs.CPU16.RCU
140072 ± 4% -14.9% 119144 softirqs.CPU16.TIMER
146326 -9.9% 131820 ± 3% softirqs.CPU17.RCU
145378 ± 2% -9.8% 131199 ± 3% softirqs.CPU18.RCU
135557 -10.9% 120781 softirqs.CPU18.TIMER
146484 ± 2% -10.1% 131686 ± 3% softirqs.CPU19.RCU
147182 -10.7% 131382 ± 3% softirqs.CPU2.RCU
133456 ± 3% -9.9% 120242 ± 2% softirqs.CPU2.TIMER
145022 -10.2% 130284 ± 3% softirqs.CPU22.RCU
144504 -9.8% 130312 ± 3% softirqs.CPU23.RCU
145432 -10.2% 130661 ± 3% softirqs.CPU24.RCU
132211 ± 3% -10.8% 117918 softirqs.CPU24.TIMER
145864 ± 2% -10.7% 130295 ± 3% softirqs.CPU25.RCU
146589 ± 2% -10.4% 131284 ± 2% softirqs.CPU27.RCU
146241 -10.1% 131496 ± 3% softirqs.CPU28.RCU
133203 -9.1% 121082 softirqs.CPU29.TIMER
146889 -10.9% 130825 ± 3% softirqs.CPU3.RCU
145746 -10.4% 130647 ± 3% softirqs.CPU31.RCU
145482 -9.7% 131311 ± 4% softirqs.CPU33.RCU
145426 -10.2% 130527 ± 2% softirqs.CPU36.RCU
140620 ± 5% -14.8% 119775 softirqs.CPU36.TIMER
145113 -9.4% 131471 ± 3% softirqs.CPU37.RCU
133851 ± 2% -10.3% 120127 softirqs.CPU37.TIMER
145868 -10.3% 130805 ± 3% softirqs.CPU38.RCU
135219 ± 2% -11.6% 119560 softirqs.CPU38.TIMER
133913 -11.0% 119245 softirqs.CPU39.TIMER
147147 -11.0% 130977 ± 2% softirqs.CPU40.RCU
147046 ± 11% -18.7% 119578 softirqs.CPU40.TIMER
145801 -10.3% 130752 ± 3% softirqs.CPU41.RCU
136666 -11.2% 121291 softirqs.CPU41.TIMER
146254 -10.8% 130459 ± 3% softirqs.CPU42.RCU
136118 -11.7% 120167 softirqs.CPU42.TIMER
145703 -10.7% 130143 ± 3% softirqs.CPU43.RCU
145480 -10.6% 130085 ± 3% softirqs.CPU44.RCU
134308 -10.1% 120684 softirqs.CPU44.TIMER
145525 -10.4% 130327 ± 3% softirqs.CPU46.RCU
144406 -8.9% 131611 ± 2% softirqs.CPU47.RCU
146107 -9.4% 132340 ± 3% softirqs.CPU5.RCU
145811 -10.6% 130385 ± 3% softirqs.CPU6.RCU
146703 -10.8% 130847 ± 3% softirqs.CPU7.RCU
133023 -8.9% 121238 softirqs.CPU7.TIMER
146387 -10.2% 131480 ± 4% softirqs.CPU8.RCU
146163 -10.6% 130620 ± 3% softirqs.CPU9.RCU
7002674 -10.1% 6292467 ± 3% softirqs.RCU
262220 -16.9% 218007 softirqs.SCHED
6463046 -9.8% 5830110 softirqs.TIMER
11.38 ± 2% -4.7% 10.85 perf-stat.i.MPKI
1.435e+10 -17.6% 1.182e+10 perf-stat.i.branch-instructions
0.73 ± 3% -0.1 0.62 ± 5% perf-stat.i.branch-miss-rate%
75798369 -24.7% 57105038 perf-stat.i.branch-misses
3.58 ± 2% +0.8 4.38 ± 2% perf-stat.i.cache-miss-rate%
21360494 +4.1% 22230436 perf-stat.i.cache-misses
7.202e+08 -21.7% 5.638e+08 perf-stat.i.cache-references
12419 +2.0% 12664 perf-stat.i.context-switches
2.09 +23.9% 2.59 perf-stat.i.cpi
1.352e+11 +2.0% 1.379e+11 perf-stat.i.cpu-cycles
465.77 ± 3% -7.9% 429.20 ± 4% perf-stat.i.cpu-migrations
1.27 ± 3% -0.3 1.01 ± 10% perf-stat.i.dTLB-load-miss-rate%
2.198e+08 ± 3% -36.3% 1.4e+08 ± 10% perf-stat.i.dTLB-load-misses
1.714e+10 -20.0% 1.372e+10 perf-stat.i.dTLB-loads
51768110 -33.0% 34676958 perf-stat.i.dTLB-store-misses
1.223e+10 -32.8% 8.216e+09 perf-stat.i.dTLB-stores
72210596 -28.9% 51370136 ± 6% perf-stat.i.iTLB-load-misses
3781066 -33.8% 2504573 ± 4% perf-stat.i.iTLB-loads
6.774e+10 -20.0% 5.42e+10 perf-stat.i.instructions
953.48 +13.6% 1083 ± 4% perf-stat.i.instructions-per-iTLB-miss
0.49 -20.8% 0.39 perf-stat.i.ipc
3746809 -33.8% 2480437 perf-stat.i.minor-faults
20.45 ± 4% -3.1 17.37 ± 4% perf-stat.i.node-load-miss-rate%
1407391 ± 6% -14.4% 1204332 ± 5% perf-stat.i.node-load-misses
5788848 +4.5% 6050133 perf-stat.i.node-loads
7.02 ± 16% -2.6 4.44 ± 16% perf-stat.i.node-store-miss-rate%
1201935 ± 18% -37.3% 753735 ± 21% perf-stat.i.node-store-misses
18330686 ± 2% +4.6% 19175870 perf-stat.i.node-stores
3746810 -33.8% 2480436 perf-stat.i.page-faults
10.63 -2.2% 10.40 perf-stat.overall.MPKI
0.53 -0.0 0.48 perf-stat.overall.branch-miss-rate%
2.97 +1.0 3.94 perf-stat.overall.cache-miss-rate%
2.00 +27.4% 2.54 perf-stat.overall.cpi
1.27 ± 3% -0.3 1.01 ± 10% perf-stat.overall.dTLB-load-miss-rate%
938.22 +12.8% 1058 ± 5% perf-stat.overall.instructions-per-iTLB-miss
0.50 -21.5% 0.39 perf-stat.overall.ipc
19.55 ± 5% -2.9 16.60 ± 5% perf-stat.overall.node-load-miss-rate%
6.16 ± 19% -2.4 3.79 ± 21% perf-stat.overall.node-store-miss-rate%
28445967 +21.0% 34417392 perf-stat.overall.path-length
1.43e+10 -17.7% 1.178e+10 perf-stat.ps.branch-instructions
75565842 -24.7% 56923409 perf-stat.ps.branch-misses
21294871 +4.1% 22159176 perf-stat.ps.cache-misses
7.179e+08 -21.7% 5.619e+08 perf-stat.ps.cache-references
12380 +2.0% 12623 perf-stat.ps.context-switches
1.348e+11 +1.9% 1.374e+11 perf-stat.ps.cpu-cycles
464.31 ± 3% -7.9% 427.76 ± 3% perf-stat.ps.cpu-migrations
2.191e+08 ± 3% -36.3% 1.395e+08 ± 10% perf-stat.ps.dTLB-load-misses
1.708e+10 -20.0% 1.367e+10 perf-stat.ps.dTLB-loads
51605351 -33.0% 34563907 perf-stat.ps.dTLB-store-misses
1.219e+10 -32.8% 8.189e+09 perf-stat.ps.dTLB-stores
71983560 -28.9% 51202442 ± 6% perf-stat.ps.iTLB-load-misses
3769182 -33.8% 2496417 ± 4% perf-stat.ps.iTLB-loads
6.753e+10 -20.0% 5.402e+10 perf-stat.ps.instructions
3735030 -33.8% 2472258 perf-stat.ps.minor-faults
1403009 ± 6% -14.4% 1200429 ± 5% perf-stat.ps.node-load-misses
5770687 +4.5% 6030480 perf-stat.ps.node-loads
1198226 ± 18% -37.3% 751192 ± 20% perf-stat.ps.node-store-misses
18273230 ± 2% +4.6% 19112969 perf-stat.ps.node-stores
3735030 -33.8% 2472257 perf-stat.ps.page-faults
2.276e+13 -24.4% 1.721e+13 perf-stat.total.instructions
41.55 ± 2% -41.5 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.release_pages.tlb_flush_mmu.tlb_finish_mmu.unmap_region
40.87 ± 2% -40.9 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.release_pages.tlb_flush_mmu.tlb_finish_mmu
73.09 -17.1 55.95 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
72.76 -17.0 55.78 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
69.75 -15.6 54.15 perf-profile.calltrace.cycles-pp.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
67.33 -14.5 52.88 perf-profile.calltrace.cycles-pp.__do_munmap.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
48.56 -11.0 37.56 perf-profile.calltrace.cycles-pp.tlb_finish_mmu.unmap_region.__do_munmap.__x64_sys_brk.do_syscall_64
59.95 -11.0 48.99 perf-profile.calltrace.cycles-pp.unmap_region.__do_munmap.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
48.37 -10.9 37.45 perf-profile.calltrace.cycles-pp.tlb_flush_mmu.tlb_finish_mmu.unmap_region.__do_munmap.__x64_sys_brk
45.75 -9.7 36.07 perf-profile.calltrace.cycles-pp.release_pages.tlb_flush_mmu.tlb_finish_mmu.unmap_region.__do_munmap
5.79 -5.8 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain.unmap_region
5.75 -5.7 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain
4.91 ± 2% -2.4 2.47 ± 3% perf-profile.calltrace.cycles-pp.alloc_pages_vma.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
4.49 ± 2% -2.2 2.26 ± 3% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.handle_pte_fault.__handle_mm_fault.handle_mm_fault
4.40 ± 2% -2.1 2.35 ± 3% perf-profile.calltrace.cycles-pp.__split_vma.__do_munmap.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.73 ± 2% -1.8 1.90 ± 4% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.handle_pte_fault.__handle_mm_fault
3.31 ± 2% -1.6 1.71 ± 4% perf-profile.calltrace.cycles-pp.swapgs_restore_regs_and_return_to_usermode
3.27 ± 2% -1.4 1.86 ± 2% perf-profile.calltrace.cycles-pp.unmap_vmas.unmap_region.__do_munmap.__x64_sys_brk.do_syscall_64
2.66 ± 2% -1.3 1.35 ± 3% perf-profile.calltrace.cycles-pp.prep_new_page.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.handle_pte_fault
2.52 ± 2% -1.2 1.28 ± 4% perf-profile.calltrace.cycles-pp.clear_page_erms.prep_new_page.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma
2.79 ± 2% -1.2 1.61 ± 2% perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.unmap_region.__do_munmap.__x64_sys_brk
2.22 ± 2% -1.1 1.16 ± 5% perf-profile.calltrace.cycles-pp.prepare_exit_to_usermode.swapgs_restore_regs_and_return_to_usermode
1.99 ± 2% -1.0 1.02 ± 2% perf-profile.calltrace.cycles-pp.flush_tlb_mm_range.tlb_flush_mmu.tlb_finish_mmu.unmap_region.__do_munmap
1.58 -0.8 0.81 ± 5% perf-profile.calltrace.cycles-pp.syscall_return_via_sysret
1.63 ± 3% -0.8 0.86 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64
1.81 ± 3% -0.8 1.05 ± 3% perf-profile.calltrace.cycles-pp.__vma_adjust.__split_vma.__do_munmap.__x64_sys_brk.do_syscall_64
0.98 ± 2% -0.7 0.25 ±100% perf-profile.calltrace.cycles-pp.find_vma.__do_munmap.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.41 ± 3% -0.7 0.70 ± 4% perf-profile.calltrace.cycles-pp.anon_vma_clone.__split_vma.__do_munmap.__x64_sys_brk.do_syscall_64
1.07 ± 3% -0.7 0.42 ± 57% perf-profile.calltrace.cycles-pp.unlink_anon_vmas.free_pgtables.unmap_region.__do_munmap.__x64_sys_brk
1.30 ± 3% -0.6 0.66 ± 5% perf-profile.calltrace.cycles-pp.mem_cgroup_uncharge_list.release_pages.tlb_flush_mmu.tlb_finish_mmu.unmap_region
1.20 ± 2% -0.6 0.61 ± 5% perf-profile.calltrace.cycles-pp.free_pgtables.unmap_region.__do_munmap.__x64_sys_brk.do_syscall_64
1.19 -0.6 0.63 ± 3% perf-profile.calltrace.cycles-pp.flush_tlb_func_common.flush_tlb_mm_range.tlb_flush_mmu.tlb_finish_mmu.unmap_region
1.04 -0.5 0.54 ± 3% perf-profile.calltrace.cycles-pp.native_flush_tlb_one_user.flush_tlb_func_common.flush_tlb_mm_range.tlb_flush_mmu.tlb_finish_mmu
1.00 ± 2% -0.4 0.57 ± 2% perf-profile.calltrace.cycles-pp.do_brk_flags.__x64_sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.94 +0.1 1.07 ± 3% perf-profile.calltrace.cycles-pp.__pagevec_lru_add_fn.pagevec_lru_move_fn.__lru_cache_add.handle_pte_fault.__handle_mm_fault
6.47 +2.2 8.72 ± 2% perf-profile.calltrace.cycles-pp.lru_add_drain.unmap_region.__do_munmap.__x64_sys_brk.do_syscall_64
6.44 +2.3 8.70 ± 2% perf-profile.calltrace.cycles-pp.lru_add_drain_cpu.lru_add_drain.unmap_region.__do_munmap.__x64_sys_brk
6.25 +2.4 8.61 ± 2% perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain.unmap_region.__do_munmap
0.00 +8.0 7.99 ± 3% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.lock_page_lruvec_irqsave.pagevec_lru_move_fn.lru_add_drain_cpu
0.00 +8.1 8.06 ± 3% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.lock_page_lruvec_irqsave.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain
0.00 +8.2 8.15 ± 2% perf-profile.calltrace.cycles-pp.lock_page_lruvec_irqsave.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain.unmap_region
16.85 +21.9 38.78 perf-profile.calltrace.cycles-pp.page_fault
16.66 +22.1 38.72 perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
16.12 +22.3 38.43 perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault
14.20 ± 2% +23.3 37.50 perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
13.58 ± 2% +23.6 37.19 perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
12.91 ± 2% +23.9 36.84 perf-profile.calltrace.cycles-pp.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
5.54 +27.5 33.01 perf-profile.calltrace.cycles-pp.__lru_cache_add.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
5.42 +27.5 32.94 perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.__lru_cache_add.handle_pte_fault.__handle_mm_fault.handle_mm_fault
0.00 +30.7 30.70 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.lock_page_lruvec_irqsave.pagevec_lru_move_fn.__lru_cache_add
0.00 +31.0 31.02 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.lock_page_lruvec_irqsave.pagevec_lru_move_fn.__lru_cache_add.handle_pte_fault
0.00 +31.3 31.29 perf-profile.calltrace.cycles-pp.lock_page_lruvec_irqsave.pagevec_lru_move_fn.__lru_cache_add.handle_pte_fault.__handle_mm_fault
0.00 +33.0 32.97 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.lock_page_lruvec_irqsave.release_pages.tlb_flush_mmu
0.00 +33.3 33.27 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.lock_page_lruvec_irqsave.release_pages.tlb_flush_mmu.tlb_finish_mmu
0.00 +33.3 33.33 perf-profile.calltrace.cycles-pp.lock_page_lruvec_irqsave.release_pages.tlb_flush_mmu.tlb_finish_mmu.unmap_region
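Note that the middle column in the perf-profile rows appears to be an absolute difference in percentage points of cycles between the two kernels, not a relative %change like the earlier tables. A quick check against two rows above (the helper name is ours):

```python
def pp_delta(base_pct, new_pct):
    """Percentage-point delta, as shown in the middle perf-profile column."""
    return round(new_pct - base_pct, 1)

# entry_SYSCALL_64_after_hwframe: 73.09 -> 55.95, shown as -17.1
print(pp_delta(73.09, 55.95))   # -17.1
# __lru_cache_add: 5.54 -> 33.01, shown as +27.5
print(pp_delta(5.54, 33.01))    # 27.5
```

Read this way, the profile shows the cycles formerly contended on the pgdat-wide lru_lock (the release_pages path) reappearing under lock_page_lruvec_irqsave in both the fault and munmap paths.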
73.14 -17.1 55.99 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
72.82 -17.0 55.83 perf-profile.children.cycles-pp.do_syscall_64
69.76 -15.6 54.16 perf-profile.children.cycles-pp.__x64_sys_brk
67.39 -14.5 52.91 perf-profile.children.cycles-pp.__do_munmap
48.57 -11.0 37.56 perf-profile.children.cycles-pp.tlb_finish_mmu
59.98 -11.0 49.01 perf-profile.children.cycles-pp.unmap_region
48.40 -10.9 37.47 perf-profile.children.cycles-pp.tlb_flush_mmu
46.03 -9.6 36.39 perf-profile.children.cycles-pp.release_pages
4.94 ± 2% -2.4 2.49 ± 3% perf-profile.children.cycles-pp.alloc_pages_vma
4.54 ± 2% -2.3 2.29 ± 3% perf-profile.children.cycles-pp.__alloc_pages_nodemask
4.43 ± 2% -2.1 2.36 ± 3% perf-profile.children.cycles-pp.__split_vma
3.75 ± 2% -1.8 1.91 ± 4% perf-profile.children.cycles-pp.get_page_from_freelist
3.31 ± 2% -1.6 1.71 ± 4% perf-profile.children.cycles-pp.swapgs_restore_regs_and_return_to_usermode
3.29 ± 2% -1.4 1.87 ± 2% perf-profile.children.cycles-pp.unmap_vmas
2.70 ± 2% -1.3 1.37 ± 4% perf-profile.children.cycles-pp.prep_new_page
2.53 ± 2% -1.2 1.29 ± 4% perf-profile.children.cycles-pp.clear_page_erms
2.84 ± 2% -1.2 1.63 ± 2% perf-profile.children.cycles-pp.unmap_page_range
2.23 ± 2% -1.1 1.17 ± 5% perf-profile.children.cycles-pp.prepare_exit_to_usermode
2.00 ± 2% -1.0 1.02 ± 3% perf-profile.children.cycles-pp.flush_tlb_mm_range
2.04 ± 3% -0.9 1.17 ± 2% perf-profile.children.cycles-pp.__vma_adjust
1.74 ± 2% -0.8 0.89 ± 5% perf-profile.children.cycles-pp.syscall_return_via_sysret
1.63 ± 3% -0.8 0.86 perf-profile.children.cycles-pp.entry_SYSCALL_64
1.40 -0.7 0.67 ± 2% perf-profile.children.cycles-pp.find_vma
1.42 ± 2% -0.7 0.71 ± 4% perf-profile.children.cycles-pp.anon_vma_clone
1.32 ± 3% -0.7 0.67 ± 5% perf-profile.children.cycles-pp.mem_cgroup_uncharge_list
1.21 ± 2% -0.6 0.61 ± 6% perf-profile.children.cycles-pp.free_pgtables
1.19 -0.6 0.63 ± 3% perf-profile.children.cycles-pp.flush_tlb_func_common
1.22 ± 3% -0.6 0.66 ± 4% perf-profile.children.cycles-pp.native_irq_return_iret
1.10 ± 3% -0.6 0.55 ± 6% perf-profile.children.cycles-pp.unlink_anon_vmas
1.06 ± 2% -0.5 0.55 ± 3% perf-profile.children.cycles-pp.native_flush_tlb_one_user
0.95 ± 2% -0.5 0.49 ± 5% perf-profile.children.cycles-pp.___might_sleep
0.93 -0.4 0.48 ± 4% perf-profile.children.cycles-pp.vm_area_dup
0.95 ± 2% -0.4 0.51 ± 2% perf-profile.children.cycles-pp.__perf_sw_event
0.85 -0.4 0.42 ± 4% perf-profile.children.cycles-pp.free_unref_page_list
1.00 ± 2% -0.4 0.57 ± 2% perf-profile.children.cycles-pp.do_brk_flags
0.90 ± 2% -0.4 0.47 ± 4% perf-profile.children.cycles-pp.kmem_cache_alloc
0.89 -0.4 0.47 ± 5% perf-profile.children.cycles-pp.down_write
0.71 ± 9% -0.4 0.33 ± 10% perf-profile.children.cycles-pp.kmem_cache_free
0.79 ± 2% -0.4 0.41 ± 2% perf-profile.children.cycles-pp.___perf_sw_event
0.73 ± 4% -0.4 0.37 ± 8% perf-profile.children.cycles-pp.remove_vma
0.59 ± 2% -0.3 0.28 ± 2% perf-profile.children.cycles-pp.vmacache_find
0.70 ± 7% -0.3 0.39 ± 3% perf-profile.children.cycles-pp.mem_cgroup_try_charge_delay
0.57 ± 5% -0.3 0.27 ± 4% perf-profile.children.cycles-pp.down_write_killable
0.67 ± 6% -0.3 0.36 ± 7% perf-profile.children.cycles-pp._cond_resched
0.54 ± 3% -0.3 0.26 ± 4% perf-profile.children.cycles-pp.__might_sleep
0.53 ± 4% -0.3 0.26 ± 4% perf-profile.children.cycles-pp.cpumask_any_but
0.48 ± 4% -0.2 0.23 ± 3% perf-profile.children.cycles-pp.mem_cgroup_commit_charge
0.52 ± 5% -0.2 0.29 ± 2% perf-profile.children.cycles-pp.uncharge_batch
0.52 ± 8% -0.2 0.29 ± 11% perf-profile.children.cycles-pp._raw_spin_lock
0.46 ± 4% -0.2 0.24 ± 5% perf-profile.children.cycles-pp.mem_cgroup_try_charge
0.39 ± 4% -0.2 0.19 ± 5% perf-profile.children.cycles-pp.up_write
0.38 ± 5% -0.2 0.18 ± 6% perf-profile.children.cycles-pp.free_unref_page_prepare
0.37 ± 7% -0.2 0.20 ± 2% perf-profile.children.cycles-pp.__mod_memcg_state
0.33 ± 4% -0.2 0.17 ± 3% perf-profile.children.cycles-pp.sync_regs
0.66 ± 5% -0.2 0.50 ± 6% perf-profile.children.cycles-pp.__list_del_entry_valid
0.30 ± 7% -0.2 0.15 ± 5% perf-profile.children.cycles-pp.__count_memcg_events
0.31 ± 6% -0.1 0.17 ± 5% perf-profile.children.cycles-pp.rcu_all_qs
0.32 ± 6% -0.1 0.17 ± 2% perf-profile.children.cycles-pp.__vma_link_rb
0.26 ± 3% -0.1 0.12 ± 3% perf-profile.children.cycles-pp.mem_cgroup_charge_statistics
0.43 ± 3% -0.1 0.30 ± 3% perf-profile.children.cycles-pp.__mod_lruvec_state
0.28 ± 4% -0.1 0.14 ± 3% perf-profile.children.cycles-pp.find_next_bit
0.27 ± 5% -0.1 0.14 ± 5% perf-profile.children.cycles-pp.anon_vma_interval_tree_insert
0.23 ± 13% -0.1 0.10 ± 65% perf-profile.children.cycles-pp.rcu_core
0.30 ± 7% -0.1 0.18 ± 36% perf-profile.children.cycles-pp.irq_exit
0.29 ± 6% -0.1 0.16 ± 37% perf-profile.children.cycles-pp.__softirqentry_text_start
0.28 ± 6% -0.1 0.16 ± 2% perf-profile.children.cycles-pp.vma_merge
0.23 ± 4% -0.1 0.11 ± 4% perf-profile.children.cycles-pp.__vma_rb_erase
0.32 ± 5% -0.1 0.21 ± 2% perf-profile.children.cycles-pp.page_remove_rmap
0.21 ± 9% -0.1 0.10 ± 5% perf-profile.children.cycles-pp.__rb_insert_augmented
0.19 ± 17% -0.1 0.08 ± 32% perf-profile.children.cycles-pp.get_mem_cgroup_from_mm
0.23 ± 3% -0.1 0.11 ± 4% perf-profile.children.cycles-pp.up_read
0.24 ± 6% -0.1 0.14 ± 6% perf-profile.children.cycles-pp.free_pages_and_swap_cache
0.21 ± 20% -0.1 0.10 ± 7% perf-profile.children.cycles-pp.__mod_node_page_state
0.28 ± 3% -0.1 0.18 ± 2% perf-profile.children.cycles-pp.perf_event_mmap
0.22 ± 3% -0.1 0.12 ± 7% perf-profile.children.cycles-pp.anon_vma_chain_link
0.35 ± 11% -0.1 0.25 ± 5% perf-profile.children.cycles-pp.mem_cgroup_page_lruvec
0.20 ± 5% -0.1 0.10 ± 8% perf-profile.children.cycles-pp.free_unref_page_commit
0.23 ± 6% -0.1 0.14 ± 3% perf-profile.children.cycles-pp.anon_vma_interval_tree_remove
0.19 ± 7% -0.1 0.10 ± 15% perf-profile.children.cycles-pp.fpregs_assert_state_consistent
0.17 ± 4% -0.1 0.08 ± 8% perf-profile.children.cycles-pp.sync_mm_rss
0.18 ± 3% -0.1 0.10 ± 9% perf-profile.children.cycles-pp.__mod_zone_page_state
0.18 ± 4% -0.1 0.10 ± 8% perf-profile.children.cycles-pp.security_vm_enough_memory_mm
0.18 ± 8% -0.1 0.10 perf-profile.children.cycles-pp.tlb_gather_mmu
0.20 ± 24% -0.1 0.12 ± 8% perf-profile.children.cycles-pp.mem_cgroup_throttle_swaprate
0.14 -0.1 0.06 ± 13% perf-profile.children.cycles-pp.get_task_policy
0.16 ± 6% -0.1 0.09 ± 4% perf-profile.children.cycles-pp.get_unmapped_area
0.11 ± 17% -0.1 0.04 ± 58% perf-profile.children.cycles-pp.try_charge
0.11 ± 4% -0.1 0.04 ± 58% perf-profile.children.cycles-pp.percpu_counter_add_batch
0.15 ± 10% -0.1 0.09 ± 9% perf-profile.children.cycles-pp.downgrade_write
0.32 ± 4% -0.1 0.26 ± 5% perf-profile.children.cycles-pp.__list_add_valid
0.15 ± 3% -0.1 0.09 ± 4% perf-profile.children.cycles-pp.__x86_indirect_thunk_rax
0.13 ± 9% -0.1 0.07 ± 10% perf-profile.children.cycles-pp.userfaultfd_unmap_prep
0.09 ± 4% -0.1 0.03 ±100% perf-profile.children.cycles-pp.page_evictable
0.09 ± 21% -0.1 0.03 ±100% perf-profile.children.cycles-pp.memcpy_erms
0.14 ± 27% -0.1 0.08 ± 17% perf-profile.children.cycles-pp.page_add_new_anon_rmap
0.12 ± 5% -0.1 0.06 ± 13% perf-profile.children.cycles-pp.selinux_vm_enough_memory
0.11 ± 3% -0.1 0.06 ± 9% perf-profile.children.cycles-pp.down_read_trylock
0.10 ± 9% -0.1 0.04 ± 58% perf-profile.children.cycles-pp.uncharge_page
0.14 -0.1 0.09 ± 4% perf-profile.children.cycles-pp.perf_iterate_sb
0.10 ± 8% -0.1 0.05 ± 8% perf-profile.children.cycles-pp.unmap_single_vma
0.11 ± 4% -0.0 0.06 ± 11% perf-profile.children.cycles-pp.cred_has_capability
0.09 ± 9% -0.0 0.04 ± 58% perf-profile.children.cycles-pp.security_mmap_addr
0.10 ± 4% -0.0 0.06 ± 7% perf-profile.children.cycles-pp.lru_cache_add_active_or_unevictable
0.11 ± 4% -0.0 0.06 ± 6% perf-profile.children.cycles-pp.memcg_check_events
0.10 ± 4% -0.0 0.06 perf-profile.children.cycles-pp.free_pgd_range
0.07 ± 6% -0.0 0.04 ± 58% perf-profile.children.cycles-pp.pmd_none_or_trans_huge_or_clear_bad
0.08 ± 5% -0.0 0.06 ± 7% perf-profile.children.cycles-pp.pmd_devmap_trans_unstable
0.00 +0.1 0.12 ± 6% perf-profile.children.cycles-pp.lock_page_memcg
1.22 +0.2 1.41 ± 3% perf-profile.children.cycles-pp.__pagevec_lru_add_fn
0.00 +0.7 0.67 ± 23% perf-profile.children.cycles-pp.unlock_page_lruvec_irqrestore
6.58 +2.2 8.79 ± 2% perf-profile.children.cycles-pp.lru_add_drain
6.52 +2.2 8.75 ± 2% perf-profile.children.cycles-pp.lru_add_drain_cpu
51.68 +20.8 72.49 perf-profile.children.cycles-pp._raw_spin_lock_irqsave
50.86 +20.9 71.76 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
16.91 +21.9 38.82 perf-profile.children.cycles-pp.page_fault
16.67 +22.1 38.73 perf-profile.children.cycles-pp.do_page_fault
16.16 +22.3 38.45 perf-profile.children.cycles-pp.__do_page_fault
14.23 ± 2% +23.3 37.52 perf-profile.children.cycles-pp.handle_mm_fault
13.61 ± 2% +23.6 37.20 perf-profile.children.cycles-pp.__handle_mm_fault
12.97 ± 2% +23.9 36.87 perf-profile.children.cycles-pp.handle_pte_fault
5.55 +27.5 33.03 perf-profile.children.cycles-pp.__lru_cache_add
11.69 +29.9 41.59 perf-profile.children.cycles-pp.pagevec_lru_move_fn
0.00 +72.8 72.83 perf-profile.children.cycles-pp.lock_page_lruvec_irqsave
2.81 -1.3 1.49 ± 3% perf-profile.self.cycles-pp.do_syscall_64
2.51 ± 2% -1.2 1.28 ± 4% perf-profile.self.cycles-pp.clear_page_erms
2.09 -1.0 1.06 ± 4% perf-profile.self.cycles-pp.prepare_exit_to_usermode
1.73 ± 2% -0.8 0.89 ± 5% perf-profile.self.cycles-pp.syscall_return_via_sysret
1.63 ± 3% -0.8 0.86 perf-profile.self.cycles-pp.entry_SYSCALL_64
1.72 -0.7 0.98 perf-profile.self.cycles-pp.unmap_page_range
1.21 ± 3% -0.6 0.66 ± 4% perf-profile.self.cycles-pp.native_irq_return_iret
1.09 ± 3% -0.5 0.54 ± 3% perf-profile.self.cycles-pp.swapgs_restore_regs_and_return_to_usermode
1.04 ± 2% -0.5 0.55 ± 3% perf-profile.self.cycles-pp.native_flush_tlb_one_user
0.93 ± 2% -0.5 0.47 ± 5% perf-profile.self.cycles-pp.___might_sleep
0.70 ± 9% -0.4 0.32 ± 11% perf-profile.self.cycles-pp.kmem_cache_free
0.73 -0.4 0.36 perf-profile.self.cycles-pp.find_vma
0.70 ± 3% -0.4 0.34 ± 5% perf-profile.self.cycles-pp.mem_cgroup_uncharge_list
0.73 ± 2% -0.3 0.38 ± 5% perf-profile.self.cycles-pp.kmem_cache_alloc
0.68 ± 3% -0.3 0.35 ± 2% perf-profile.self.cycles-pp.___perf_sw_event
0.65 -0.3 0.32 ± 3% perf-profile.self.cycles-pp.__do_page_fault
0.77 ± 3% -0.3 0.45 ± 3% perf-profile.self.cycles-pp.__vma_adjust
0.69 -0.3 0.37 ± 3% perf-profile.self.cycles-pp.__do_munmap
0.57 -0.3 0.27 ± 3% perf-profile.self.cycles-pp.vmacache_find
0.61 ± 2% -0.3 0.32 ± 3% perf-profile.self.cycles-pp.__handle_mm_fault
0.61 ± 4% -0.3 0.33 ± 4% perf-profile.self.cycles-pp.get_page_from_freelist
0.55 -0.3 0.28 ± 2% perf-profile.self.cycles-pp.__x64_sys_brk
0.51 ± 3% -0.3 0.25 ± 6% perf-profile.self.cycles-pp.__alloc_pages_nodemask
0.46 ± 4% -0.2 0.23 ± 5% perf-profile.self.cycles-pp.__might_sleep
0.51 ± 8% -0.2 0.28 ± 11% perf-profile.self.cycles-pp._raw_spin_lock
0.47 ± 4% -0.2 0.27 ± 4% perf-profile.self.cycles-pp.handle_pte_fault
0.36 ± 5% -0.2 0.17 ± 6% perf-profile.self.cycles-pp.free_unref_page_prepare
0.36 ± 4% -0.2 0.18 ± 6% perf-profile.self.cycles-pp.up_write
0.36 ± 10% -0.2 0.18 ± 2% perf-profile.self.cycles-pp.anon_vma_clone
0.36 ± 3% -0.2 0.19 ± 6% perf-profile.self.cycles-pp.down_write
0.33 ± 5% -0.2 0.16 ± 6% perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.35 ± 7% -0.2 0.18 ± 2% perf-profile.self.cycles-pp.__mod_memcg_state
0.34 ± 2% -0.2 0.18 ± 3% perf-profile.self.cycles-pp.handle_mm_fault
0.31 -0.2 0.15 ± 4% perf-profile.self.cycles-pp.down_write_killable
0.31 ± 6% -0.2 0.15 ± 2% perf-profile.self.cycles-pp.sync_regs
0.65 ± 5% -0.2 0.49 ± 5% perf-profile.self.cycles-pp.__list_del_entry_valid
0.28 ± 6% -0.1 0.14 ± 6% perf-profile.self.cycles-pp._cond_resched
0.29 ± 7% -0.1 0.15 ± 5% perf-profile.self.cycles-pp.__count_memcg_events
0.27 ± 4% -0.1 0.13 ± 9% perf-profile.self.cycles-pp.flush_tlb_mm_range
0.27 ± 3% -0.1 0.13 ± 6% perf-profile.self.cycles-pp.unlink_anon_vmas
0.26 ± 4% -0.1 0.11 ± 4% perf-profile.self.cycles-pp.cpumask_any_but
0.29 ± 3% -0.1 0.16 ± 2% perf-profile.self.cycles-pp.tlb_flush_mmu
0.27 ± 3% -0.1 0.13 ± 6% perf-profile.self.cycles-pp.vm_area_dup
0.21 ± 4% -0.1 0.08 ± 5% perf-profile.self.cycles-pp.page_fault
0.27 ± 3% -0.1 0.14 ± 3% perf-profile.self.cycles-pp.find_next_bit
0.26 ± 4% -0.1 0.14 ± 3% perf-profile.self.cycles-pp.anon_vma_interval_tree_insert
0.25 ± 3% -0.1 0.12 ± 5% perf-profile.self.cycles-pp.free_unref_page_list
0.25 -0.1 0.13 ± 3% perf-profile.self.cycles-pp.lru_add_drain_cpu
0.30 ± 4% -0.1 0.17 ± 2% perf-profile.self.cycles-pp.__vma_link_rb
0.19 ± 15% -0.1 0.07 ± 35% perf-profile.self.cycles-pp.get_mem_cgroup_from_mm
0.24 ± 7% -0.1 0.13 ± 6% perf-profile.self.cycles-pp.rcu_all_qs
0.20 ± 9% -0.1 0.09 ± 4% perf-profile.self.cycles-pp.__rb_insert_augmented
0.22 ± 6% -0.1 0.11 ± 4% perf-profile.self.cycles-pp.__vma_rb_erase
0.20 ± 5% -0.1 0.09 perf-profile.self.cycles-pp.__split_vma
0.22 ± 3% -0.1 0.11 ± 3% perf-profile.self.cycles-pp.up_read
0.17 ± 2% -0.1 0.06 ± 13% perf-profile.self.cycles-pp.__mod_zone_page_state
0.82 ± 3% -0.1 0.72 ± 2% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.24 ± 8% -0.1 0.13 ± 8% perf-profile.self.cycles-pp.uncharge_batch
0.34 ± 10% -0.1 0.25 ± 4% perf-profile.self.cycles-pp.mem_cgroup_page_lruvec
0.19 ± 23% -0.1 0.10 ± 7% perf-profile.self.cycles-pp.__mod_node_page_state
0.18 ± 6% -0.1 0.09 ± 5% perf-profile.self.cycles-pp.unmap_vmas
0.18 ± 9% -0.1 0.09 ± 14% perf-profile.self.cycles-pp.fpregs_assert_state_consistent
0.16 ± 5% -0.1 0.07 ± 11% perf-profile.self.cycles-pp.sync_mm_rss
0.17 ± 4% -0.1 0.09 ± 5% perf-profile.self.cycles-pp.prep_new_page
0.18 ± 7% -0.1 0.10 ± 4% perf-profile.self.cycles-pp.tlb_gather_mmu
0.15 ± 8% -0.1 0.07 ± 5% perf-profile.self.cycles-pp.mem_cgroup_commit_charge
0.19 ± 6% -0.1 0.11 ± 3% perf-profile.self.cycles-pp.anon_vma_interval_tree_remove
0.17 ± 7% -0.1 0.10 ± 5% perf-profile.self.cycles-pp.tlb_finish_mmu
0.17 ± 28% -0.1 0.10 ± 8% perf-profile.self.cycles-pp.mem_cgroup_throttle_swaprate
0.13 ± 3% -0.1 0.06 ± 11% perf-profile.self.cycles-pp.get_task_policy
0.17 ± 2% -0.1 0.10 ± 14% perf-profile.self.cycles-pp.__perf_sw_event
0.16 ± 5% -0.1 0.10 ± 5% perf-profile.self.cycles-pp.unmap_region
0.15 ± 7% -0.1 0.08 ± 12% perf-profile.self.cycles-pp.downgrade_write
0.15 ± 2% -0.1 0.09 ± 4% perf-profile.self.cycles-pp.alloc_pages_vma
0.15 -0.1 0.09 ± 4% perf-profile.self.cycles-pp.flush_tlb_func_common
0.12 ± 8% -0.1 0.06 ± 6% perf-profile.self.cycles-pp.userfaultfd_unmap_prep
0.10 ± 7% -0.1 0.04 ± 57% perf-profile.self.cycles-pp.percpu_counter_add_batch
0.12 ± 4% -0.1 0.07 ± 7% perf-profile.self.cycles-pp.__lru_cache_add
0.14 ± 12% -0.1 0.08 ± 10% perf-profile.self.cycles-pp.free_pages_and_swap_cache
0.15 ± 3% -0.1 0.09 ± 7% perf-profile.self.cycles-pp.__x86_indirect_thunk_rax
0.31 ± 4% -0.1 0.25 ± 5% perf-profile.self.cycles-pp.__list_add_valid
0.11 ± 7% -0.1 0.06 ± 9% perf-profile.self.cycles-pp.down_read_trylock
0.08 ± 8% -0.1 0.03 ±100% perf-profile.self.cycles-pp.remove_vma
0.18 ± 5% -0.1 0.13 ± 9% perf-profile.self.cycles-pp.mem_cgroup_try_charge
0.11 ± 4% -0.1 0.06 ± 13% perf-profile.self.cycles-pp.free_unref_page_commit
0.08 ± 6% -0.1 0.03 ±100% perf-profile.self.cycles-pp.anon_vma_chain_link
0.10 ± 4% -0.1 0.05 ± 8% perf-profile.self.cycles-pp.lru_cache_add_active_or_unevictable
0.10 ± 7% -0.1 0.05 perf-profile.self.cycles-pp.mem_cgroup_charge_statistics
0.09 ± 9% -0.0 0.04 ± 58% perf-profile.self.cycles-pp.uncharge_page
0.09 ± 10% -0.0 0.04 ± 57% perf-profile.self.cycles-pp.free_pgtables
0.10 ± 4% -0.0 0.06 perf-profile.self.cycles-pp.free_pgd_range
0.09 ± 8% -0.0 0.05 perf-profile.self.cycles-pp.unmap_single_vma
0.11 ± 4% -0.0 0.07 ± 6% perf-profile.self.cycles-pp.perf_iterate_sb
0.22 ± 4% -0.0 0.19 ± 6% perf-profile.self.cycles-pp.__mod_lruvec_state
0.20 ± 7% -0.0 0.17 ± 3% perf-profile.self.cycles-pp.page_remove_rmap
0.09 ± 9% -0.0 0.05 ± 8% perf-profile.self.cycles-pp.memcg_check_events
0.08 ± 5% -0.0 0.06 ± 7% perf-profile.self.cycles-pp.pmd_devmap_trans_unstable
0.06 ± 6% -0.0 0.04 ± 57% perf-profile.self.cycles-pp.pmd_none_or_trans_huge_or_clear_bad
0.00 +0.1 0.12 ± 5% perf-profile.self.cycles-pp.lock_page_memcg
0.74 +0.1 0.87 ± 3% perf-profile.self.cycles-pp.__pagevec_lru_add_fn
0.00 +0.3 0.32 ± 21% perf-profile.self.cycles-pp.lock_page_lruvec_irqsave
50.86 +20.9 71.75 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
reaim.time.user_time
7000 +-+------------------------------------------------------------------+
| |
6000 +-+ + +.. + |
| .. + .+. + + + .+. .+.. |
5000 +-+ +.+..+.+ +..+.+ +.+.+..+ +.+. + + |
O O O O O O O O O O O O O O O O O O O O O O O O O |
4000 +-+ |
| O O O
3000 +-+ O |
| |
2000 +-+ |
| |
1000 +-+ |
| |
0 +-+---------O--------O-----------------------------------------------+
reaim.time.minor_page_faults
1.4e+09 +-+---------------------------------------------------------------+
|.+.+..+.+.+.+.+..+.+.+.+.+..+.+.+.+.+..+.+.+.+.+ |
1.2e+09 +-+ |
O O O O O O O O O O O O O O O O O O O O O O O O O |
1e+09 +-+ |
| |
8e+08 +-+ O O O O
| |
6e+08 +-+ |
| |
4e+08 +-+ |
| |
2e+08 +-+ |
| |
0 +-+--------O--------O---------------------------------------------+
reaim.parent_time
70 +-+--------------------------------------------------------------------+
| |
60 +-+ O O O O
| |
50 +-+ |
| O O |
40 O-O..O.O.O..+.O..O.O.+..O.O.O..O.O..O.O.O..+.O.O..O.+ O O O O |
| |
30 +-+ |
| |
20 +-+ |
| |
10 +-+ |
| |
0 +-+---------O--------O-------------------------------------------------+
reaim.jobs_per_min
160000 +-+----------------------------------------------------------------+
O.O.O..O.O.+.O..O.O.+.O..O.O.O.O..O.O.O.+.O..O.O.+ O O O O |
140000 +-+ O O |
120000 +-+ |
| |
100000 +-+ O O O O
| |
80000 +-+ |
| |
60000 +-+ |
40000 +-+ |
| |
20000 +-+ |
| |
0 +-+--------O--------O----------------------------------------------+
reaim.jobs_per_min_child
160 +-+-------------------------------------------------------------------+
O.O..O.O.O..+.O.O..O.+.O..O.O.O..O.O.O..O.+.O..O.O.+ O O O O |
140 +-+ O O |
120 +-+ |
| |
100 +-+ O O O O
| |
80 +-+ |
| |
60 +-+ |
40 +-+ |
| |
20 +-+ |
| |
0 +-+---------O--------O------------------------------------------------+
reaim.max_jobs_per_min
160000 +-+----------------------------------------------------------------+
O.O.O..O.O.+ O O O +.O..O.O.O.O..O.O.O.+.O..O.O.+ O O O O |
140000 +-+ O O |
120000 +-+ |
| |
100000 +-+ O O O O
| |
80000 +-+ |
| |
60000 +-+ |
40000 +-+ |
| |
20000 +-+ |
| |
0 +-+--------O--------O----------------------------------------------+
reaim.workload
800000 +-+----------------------------------------------------------------+
| |
700000 O-O O O O O O O O O O O O O O O O O O O O O O O O |
| |
600000 +-+ |
500000 +-+ O O O O
| |
400000 +-+ |
| |
300000 +-+ |
200000 +-+ |
| |
100000 +-+ |
| |
0 +-+--------O--------O----------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
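The comparison lines above follow a fixed layout: baseline value (with optional `± N%` stddev), signed change, new value (again with optional stddev), then the metric name. As a minimal sketch of how such lines can be machine-parsed — the regex and the helper name `parse` are illustrative, not part of lkp-tests — one might write:

```python
import re

# Matches LKP comparison lines such as:
#   50.86        +20.9       71.75  perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
#    0.33 ± 4%   -0.2        0.17 ± 3%  perf-profile.children.cycles-pp.sync_regs
LINE = re.compile(
    r"^\s*([\d.]+)\s*(?:±\s*\d+%)?"      # baseline value, optional stddev
    r"\s+([+-][\d.]+)"                    # signed change
    r"\s*(?:±\s*\d+%)?"                   # (rarely) stddev on the change
    r"\s+([\d.]+)\s*(?:±\s*\d+%)?"        # new value, optional stddev
    r"\s+(perf-profile\.\S+)"             # metric name
)

def parse(line):
    """Return (metric, baseline, change, new) or None for non-matching lines."""
    m = LINE.match(line)
    if not m:
        return None
    base, delta, new, metric = m.groups()
    return metric, float(base), float(delta), float(new)

if __name__ == "__main__":
    sample = ("50.86        +20.9       71.75        "
              "perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath")
    print(parse(sample))
```

Sorting the parsed tuples by the change column is a quick way to surface the largest movers (here, the +20.9 percentage-point jump in `native_queued_spin_lock_slowpath`).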
***************************************************************************************************
lkp-csl-2ap2: 192 threads Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/process/16/debian-x86_64-2019-11-14.cgz/lkp-csl-2ap2/page_fault3/will-it-scale/0x500002c
commit:
86b4695edf ("mm/vmscan: remove unnecessary lruvec adding")
3f8db6a891 ("mm/lru: replace pgdat lru_lock with lruvec lock")
86b4695edff2a203 3f8db6a8910d27f136ed15a6136
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
1:4 -25% :2 dmesg.WARNING:at_ip___perf_sw_event/0x
26:4 -483% 6:2 perf-profile.calltrace.cycles-pp.error_entry
23:4 -436% 6:2 perf-profile.calltrace.cycles-pp.sync_regs.error_entry
27:4 -514% 7:2 perf-profile.children.cycles-pp.error_entry
0:4 -7% 0:2 perf-profile.children.cycles-pp.error_exit
3:4 -54% 0:2 perf-profile.self.cycles-pp.error_entry
0:4 -6% 0:2 perf-profile.self.cycles-pp.error_exit
%stddev %change %stddev
\ | \
6.96 ± 8% -0.8 6.14 perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
2.81 ± 16% -0.6 2.19 perf-profile.calltrace.cycles-pp.menu_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
1.91 ± 24% -0.5 1.43 perf-profile.calltrace.cycles-pp.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
1.48 ± 30% -0.4 1.04 perf-profile.calltrace.cycles-pp.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
0.56 ± 7% +0.1 0.63 perf-profile.calltrace.cycles-pp.page_mapping.find_lock_entry.shmem_getpage_gfp.shmem_fault.__do_fault
0.76 ± 5% +0.1 0.84 perf-profile.calltrace.cycles-pp.__mod_lruvec_state.page_remove_rmap.unmap_page_range.unmap_vmas.unmap_region
0.83 ± 6% +0.1 0.96 perf-profile.calltrace.cycles-pp.__mod_lruvec_state.page_add_file_rmap.alloc_set_pte.finish_fault.handle_pte_fault
1.65 ± 6% +0.2 1.81 perf-profile.calltrace.cycles-pp.page_add_file_rmap.alloc_set_pte.finish_fault.handle_pte_fault.__handle_mm_fault
1.84 ± 7% +0.2 2.05 perf-profile.calltrace.cycles-pp.xas_load.find_get_entry.find_lock_entry.shmem_getpage_gfp.shmem_fault
4.75 ± 5% +0.3 5.09 perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.unmap_region.__do_munmap.__vm_munmap
2.83 ± 16% -0.6 2.21 perf-profile.children.cycles-pp.menu_select
2.02 ± 23% -0.5 1.51 perf-profile.children.cycles-pp.irq_exit
1.57 ± 30% -0.5 1.11 perf-profile.children.cycles-pp.__softirqentry_text_start
0.76 ± 18% -0.3 0.45 perf-profile.children.cycles-pp.rcu_core
0.69 ± 13% -0.3 0.42 perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.40 ± 16% -0.2 0.20 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
0.14 ± 16% -0.1 0.05 perf-profile.children.cycles-pp._raw_spin_trylock
0.14 ± 15% -0.1 0.08 perf-profile.children.cycles-pp.note_gp_changes
0.20 ± 4% +0.0 0.22 perf-profile.children.cycles-pp.mark_page_accessed
0.07 ± 5% +0.0 0.09 perf-profile.children.cycles-pp.pmd_pfn
0.06 ± 13% +0.0 0.08 perf-profile.children.cycles-pp.ksys_read
0.06 ± 13% +0.0 0.08 perf-profile.children.cycles-pp.rcu_dynticks_eqs_exit
0.06 ± 11% +0.0 0.08 perf-profile.children.cycles-pp.vfs_read
0.09 ± 14% +0.0 0.11 perf-profile.children.cycles-pp.serial8250_console_putchar
0.11 ± 12% +0.0 0.13 perf-profile.children.cycles-pp.irq_work_run_list
0.10 ± 11% +0.0 0.12 perf-profile.children.cycles-pp.serial8250_console_write
0.10 ± 14% +0.0 0.13 perf-profile.children.cycles-pp.console_unlock
0.09 ± 13% +0.0 0.12 perf-profile.children.cycles-pp.wait_for_xmitr
0.09 ± 14% +0.0 0.12 perf-profile.children.cycles-pp.uart_console_write
0.03 ±105% +0.1 0.09 perf-profile.children.cycles-pp.rcu_irq_enter
0.07 ± 66% +0.1 0.13 perf-profile.children.cycles-pp.irq_work_interrupt
0.07 ± 66% +0.1 0.13 perf-profile.children.cycles-pp.smp_irq_work_interrupt
0.07 ± 66% +0.1 0.13 perf-profile.children.cycles-pp.irq_work_run
0.07 ± 66% +0.1 0.13 perf-profile.children.cycles-pp.printk
0.07 ± 66% +0.1 0.13 perf-profile.children.cycles-pp.vprintk_emit
0.39 ± 8% +0.1 0.48 perf-profile.children.cycles-pp.__mod_node_page_state
0.06 ± 58% +0.1 0.15 perf-profile.children.cycles-pp.tick_check_oneshot_broadcast_this_cpu
0.81 ± 6% +0.1 0.91 perf-profile.children.cycles-pp.set_page_dirty
1.03 ± 6% +0.2 1.20 perf-profile.children.cycles-pp.page_mapping
1.66 ± 6% +0.2 1.83 perf-profile.children.cycles-pp.page_add_file_rmap
1.64 ± 5% +0.2 1.85 perf-profile.children.cycles-pp.__mod_lruvec_state
1.90 ± 7% +0.2 2.11 perf-profile.children.cycles-pp.xas_load
4.94 ± 5% +0.4 5.32 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
4.94 ± 5% +0.4 5.32 perf-profile.children.cycles-pp.do_syscall_64
0.40 ± 16% -0.2 0.20 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.14 ± 19% -0.1 0.05 perf-profile.self.cycles-pp._raw_spin_trylock
0.29 ± 14% -0.1 0.21 perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.11 ± 17% -0.0 0.07 perf-profile.self.cycles-pp.note_gp_changes
0.12 ± 15% -0.0 0.09 perf-profile.self.cycles-pp.__softirqentry_text_start
0.10 +0.0 0.11 perf-profile.self.cycles-pp.rcu_all_qs
0.06 ± 9% +0.0 0.07 perf-profile.self.cycles-pp.hrtimer_interrupt
0.06 ± 13% +0.0 0.08 perf-profile.self.cycles-pp.rcu_dynticks_eqs_exit
0.04 ± 57% +0.0 0.06 perf-profile.self.cycles-pp.pmd_pfn
0.34 ± 6% +0.0 0.37 perf-profile.self.cycles-pp.file_update_time
0.18 ± 8% +0.0 0.22 perf-profile.self.cycles-pp.current_time
0.19 ± 6% +0.0 0.23 perf-profile.self.cycles-pp.__do_fault
0.17 ± 17% +0.0 0.21 perf-profile.self.cycles-pp.__next_timer_interrupt
0.01 ±173% +0.0 0.06 perf-profile.self.cycles-pp.scheduler_tick
0.00 +0.1 0.05 perf-profile.self.cycles-pp.rcu_irq_enter
0.62 ± 7% +0.1 0.68 perf-profile.self.cycles-pp.page_add_file_rmap
0.02 ±173% +0.1 0.08 perf-profile.self.cycles-pp.sched_clock_cpu
0.04 ±106% +0.1 0.11 perf-profile.self.cycles-pp.__hrtimer_run_queues
0.06 ± 58% +0.1 0.15 perf-profile.self.cycles-pp.tick_check_oneshot_broadcast_this_cpu
0.96 ± 5% +0.1 1.05 perf-profile.self.cycles-pp.page_fault
0.39 ± 8% +0.1 0.48 perf-profile.self.cycles-pp.__mod_node_page_state
1.21 ± 6% +0.1 1.31 perf-profile.self.cycles-pp.unmap_page_range
0.03 ±173% +0.1 0.14 perf-profile.self.cycles-pp.irq_exit
0.97 ± 6% +0.2 1.13 perf-profile.self.cycles-pp.page_mapping
1.33 ± 4% +0.2 1.49 perf-profile.self.cycles-pp.__do_page_fault
1.91 ± 6% +0.2 2.09 perf-profile.self.cycles-pp.__handle_mm_fault
1.67 ± 7% +0.2 1.87 perf-profile.self.cycles-pp.xas_load
1051166 -1.1% 1039453 will-it-scale.per_process_ops
16818666 -1.1% 16631263 will-it-scale.workload
29.48 +3.4% 30.48 boot-time.dhcp
117776 ± 15% -15.4% 99617 cpuidle.POLL.usage
1529 +4.2% 1594 vmstat.system.cs
890172 ± 9% +19.2% 1061326 meminfo.Mapped
1774629 ± 9% +18.5% 2103707 meminfo.Shmem
0.01 ± 20% -0.0 0.00 mpstat.cpu.all.soft%
5.90 ± 10% +1.2 7.06 mpstat.cpu.all.sys%
24.50 ± 8% +18.4% 29.00 turbostat.Avg_MHz
44.75 ± 4% +9.5% 49.00 turbostat.CoreTmp
44.75 ± 4% +14.0% 51.00 turbostat.PkgTmp
138669 ± 65% +251.4% 487282 numa-numastat.node2.local_node
158886 ± 51% +226.3% 518514 numa-numastat.node2.numa_hit
184115 ± 72% -99.1% 1613 numa-numastat.node3.local_node
215302 ± 62% -84.7% 32892 numa-numastat.node3.numa_hit
502965 ± 8% +16.6% 586206 proc-vmstat.nr_active_anon
65106 +2.1% 66446 proc-vmstat.nr_anon_pages
714232 ± 6% +11.5% 796087 proc-vmstat.nr_file_pages
222424 ± 9% +18.7% 263923 proc-vmstat.nr_mapped
1741 ± 4% +8.6% 1890 proc-vmstat.nr_page_table_pages
444007 ± 9% +18.4% 525927 proc-vmstat.nr_shmem
66610 -1.2% 65833 proc-vmstat.nr_slab_unreclaimable
502965 ± 8% +16.6% 586206 proc-vmstat.nr_zone_active_anon
11723634 -2.4% 11444319 proc-vmstat.numa_hit
11630029 -2.4% 11350548 proc-vmstat.numa_local
11828653 -2.4% 11543524 proc-vmstat.pgalloc_normal
5.066e+09 -1.2% 5.003e+09 proc-vmstat.pgfault
11807749 -2.4% 11521890 proc-vmstat.pgfree
3902 ± 90% +339.8% 17161 numa-meminfo.node0.Inactive
3627 ± 95% +373.1% 17161 numa-meminfo.node0.Inactive(anon)
68771 ± 17% +44.3% 99218 numa-meminfo.node0.SUnreclaim
100961 ± 22% +39.2% 140516 numa-meminfo.node0.Slab
3580 ± 88% -64.8% 1259 numa-meminfo.node1.Inactive(anon)
1123 ± 63% +117.1% 2440 numa-meminfo.node1.PageTables
605414 ± 6% -11.1% 538182 numa-meminfo.node2.MemUsed
12215 ± 45% -84.2% 1924 numa-meminfo.node3.Inactive
12171 ± 45% -84.2% 1924 numa-meminfo.node3.Inactive(anon)
29690 ± 22% -57.9% 12493 numa-meminfo.node3.KReclaimable
6485 ± 3% -15.4% 5486 numa-meminfo.node3.KernelStack
29690 ± 22% -57.9% 12493 numa-meminfo.node3.SReclaimable
86539 ± 15% -44.7% 47815 numa-meminfo.node3.SUnreclaim
12628 ± 45% -84.4% 1975 numa-meminfo.node3.Shmem
116230 ± 16% -48.1% 60309 numa-meminfo.node3.Slab
2.94 ± 93% +186.5% 8.41 sched_debug.cfs_rq:/.exec_clock.min
305006 ± 50% -81.2% 57474 sched_debug.cfs_rq:/.load.max
353.96 ± 38% -40.1% 212.17 sched_debug.cfs_rq:/.load_avg.max
64.73 ± 19% -20.1% 51.69 sched_debug.cfs_rq:/.load_avg.stddev
16773 ± 6% +58.4% 26562 sched_debug.cfs_rq:/.min_vruntime.min
66721 ± 10% +26.1% 84130 sched_debug.cpu.nr_switches.max
5758 ± 2% +16.3% 6696 sched_debug.cpu.nr_switches.stddev
15.17 ± 6% -13.7% 13.09 sched_debug.cpu.nr_uninterruptible.stddev
63553 ± 10% +26.4% 80318 sched_debug.cpu.sched_count.max
5501 ± 3% +16.8% 6426 sched_debug.cpu.sched_count.stddev
31773 ± 10% +26.4% 40157 sched_debug.cpu.sched_goidle.max
2753 ± 3% +16.8% 3215 sched_debug.cpu.sched_goidle.stddev
8716 ± 39% +67.3% 14584 sched_debug.cpu.ttwu_local.max
42.40 ± 8% -10.0% 38.17 sched_debug.cpu.ttwu_local.min
805.40 ± 20% +38.4% 1114 sched_debug.cpu.ttwu_local.stddev
8111 ± 7% -13.3% 7033 slabinfo.Acpi-State.active_objs
8111 ± 7% -13.3% 7033 slabinfo.Acpi-State.num_objs
2019 ± 10% -15.0% 1717 slabinfo.UNIX.active_objs
2019 ± 10% -15.0% 1717 slabinfo.UNIX.num_objs
5842 ± 4% -6.4% 5467 slabinfo.files_cache.active_objs
5842 ± 4% -6.4% 5467 slabinfo.files_cache.num_objs
1480 ± 2% -8.0% 1363 slabinfo.khugepaged_mm_slot.active_objs
1480 ± 2% -8.0% 1363 slabinfo.khugepaged_mm_slot.num_objs
16013 -9.2% 14534 slabinfo.kmalloc-96.active_objs
16179 -9.1% 14703 slabinfo.kmalloc-96.num_objs
11503 ± 2% -10.4% 10304 slabinfo.shmem_inode_cache.active_objs
11503 ± 2% -10.4% 10304 slabinfo.shmem_inode_cache.num_objs
3718 ± 6% -14.5% 3178 slabinfo.sock_inode_cache.active_objs
3718 ± 6% -14.5% 3178 slabinfo.sock_inode_cache.num_objs
1063 ± 7% -22.7% 822.00 slabinfo.task_group.active_objs
1063 ± 7% -22.7% 822.00 slabinfo.task_group.num_objs
27666 ± 3% -7.2% 25679 slabinfo.vmap_area.active_objs
911.00 ± 95% +370.9% 4290 numa-vmstat.node0.nr_inactive_anon
217166 ± 9% +21.0% 262790 numa-vmstat.node0.nr_mapped
17192 ± 17% +44.3% 24804 numa-vmstat.node0.nr_slab_unreclaimable
911.00 ± 95% +370.9% 4290 numa-vmstat.node0.nr_zone_inactive_anon
6869007 ± 6% -12.6% 6002609 numa-vmstat.node0.numa_hit
6850057 ± 6% -12.8% 5971662 numa-vmstat.node0.numa_local
897.25 ± 88% -63.9% 324.00 numa-vmstat.node1.nr_inactive_anon
283.25 ± 63% +115.4% 610.00 numa-vmstat.node1.nr_page_table_pages
897.25 ± 88% -63.9% 324.00 numa-vmstat.node1.nr_zone_inactive_anon
111149 ± 11% -20.8% 88042 numa-vmstat.node1.numa_other
3042 ± 45% -84.2% 481.00 numa-vmstat.node3.nr_inactive_anon
6484 ± 3% -15.4% 5486 numa-vmstat.node3.nr_kernel_stack
3156 ± 45% -84.4% 493.00 numa-vmstat.node3.nr_shmem
7422 ± 22% -57.9% 3123 numa-vmstat.node3.nr_slab_reclaimable
21634 ± 15% -44.7% 11954 numa-vmstat.node3.nr_slab_unreclaimable
3042 ± 45% -84.2% 481.00 numa-vmstat.node3.nr_zone_inactive_anon
541677 ± 17% -43.6% 305371 numa-vmstat.node3.numa_hit
423010 ± 22% -55.9% 186679 numa-vmstat.node3.numa_local
7.865e+09 ± 9% +17.1% 9.213e+09 perf-stat.i.branch-instructions
27931933 ± 30% +59.5% 44540346 perf-stat.i.branch-misses
23281337 ± 8% +14.6% 26678885 perf-stat.i.cache-misses
1486 +3.5% 1538 perf-stat.i.context-switches
1.87 ± 11% -31.6% 1.28 perf-stat.i.cpi
4.824e+10 ± 8% +18.9% 5.738e+10 perf-stat.i.cpu-cycles
0.80 ± 36% -67.7% 0.26 perf-stat.i.cpu-migrations
1993 ± 6% +7.9% 2151 perf-stat.i.cycles-between-cache-misses
1.105e+10 ± 10% +17.3% 1.296e+10 perf-stat.i.dTLB-loads
5.96 ± 10% +1.1 7.10 perf-stat.i.dTLB-store-miss-rate%
4.144e+08 ± 10% +26.8% 5.256e+08 perf-stat.i.dTLB-store-misses
5.445e+09 ± 9% +25.9% 6.856e+09 perf-stat.i.dTLB-stores
16686629 ± 18% +44.5% 24114199 perf-stat.i.iTLB-load-misses
3.862e+10 ± 9% +17.3% 4.532e+10 perf-stat.i.instructions
0.72 ± 6% +9.8% 0.79 perf-stat.i.ipc
14026957 ± 10% +17.9% 16542297 perf-stat.i.minor-faults
15.88 ± 51% -14.4 1.52 perf-stat.i.node-store-miss-rate%
14027197 ± 10% +17.9% 16542684 perf-stat.i.page-faults
2074 ± 4% +3.7% 2150 perf-stat.overall.cycles-between-cache-misses
2367 ± 15% -20.6% 1879 perf-stat.overall.instructions-per-iTLB-miss
7.84e+09 ± 9% +17.1% 9.182e+09 perf-stat.ps.branch-instructions
27845972 ± 30% +59.4% 44389186 perf-stat.ps.branch-misses
23209084 ± 8% +14.6% 26588425 perf-stat.ps.cache-misses
1482 +3.4% 1533 perf-stat.ps.context-switches
4.809e+10 ± 8% +18.9% 5.719e+10 perf-stat.ps.cpu-cycles
0.80 ± 36% -67.7% 0.26 perf-stat.ps.cpu-migrations
1.102e+10 ± 10% +17.2% 1.291e+10 perf-stat.ps.dTLB-loads
4.13e+08 ± 10% +26.8% 5.238e+08 perf-stat.ps.dTLB-store-misses
5.428e+09 ± 9% +25.9% 6.833e+09 perf-stat.ps.dTLB-stores
16634243 ± 18% +44.5% 24032584 perf-stat.ps.iTLB-load-misses
3.85e+10 ± 9% +17.3% 4.516e+10 perf-stat.ps.instructions
13981893 ± 10% +17.9% 16486060 perf-stat.ps.minor-faults
13982135 ± 10% +17.9% 16486457 perf-stat.ps.page-faults
1.385e+13 -1.8% 1.36e+13 perf-stat.total.instructions
1152 ± 36% -73.6% 304.00 interrupts.32:PCI-MSI.524290-edge.eth0-TxRx-1
735.50 ± 10% -16.8% 612.00 interrupts.9:IO-APIC.9-fasteoi.acpi
722902 ± 11% -16.5% 603301 interrupts.CPU0.LOC:Local_timer_interrupts
2854 ± 27% +92.6% 5498 interrupts.CPU0.RES:Rescheduling_interrupts
735.50 ± 10% -16.8% 612.00 interrupts.CPU1.9:IO-APIC.9-fasteoi.acpi
722329 ± 11% -16.5% 603192 interrupts.CPU1.LOC:Local_timer_interrupts
1152 ± 36% -73.6% 304.00 interrupts.CPU10.32:PCI-MSI.524290-edge.eth0-TxRx-1
722722 ± 11% -16.5% 603262 interrupts.CPU10.LOC:Local_timer_interrupts
725299 ± 11% -16.4% 606222 interrupts.CPU100.LOC:Local_timer_interrupts
725372 ± 11% -16.4% 606217 interrupts.CPU101.LOC:Local_timer_interrupts
725300 ± 11% -16.4% 606215 interrupts.CPU102.LOC:Local_timer_interrupts
5123 ± 16% -54.2% 2344 interrupts.CPU103.CAL:Function_call_interrupts
725258 ± 11% -16.4% 606546 interrupts.CPU103.LOC:Local_timer_interrupts
6874 ± 28% -57.2% 2942 interrupts.CPU103.NMI:Non-maskable_interrupts
6874 ± 28% -57.2% 2942 interrupts.CPU103.PMI:Performance_monitoring_interrupts
724513 ± 11% -16.3% 606082 interrupts.CPU104.LOC:Local_timer_interrupts
7676 ± 24% -62.4% 2886 interrupts.CPU104.NMI:Non-maskable_interrupts
7676 ± 24% -62.4% 2886 interrupts.CPU104.PMI:Performance_monitoring_interrupts
725292 ± 11% -16.4% 606262 interrupts.CPU105.LOC:Local_timer_interrupts
725321 ± 11% -16.4% 606282 interrupts.CPU106.LOC:Local_timer_interrupts
725305 ± 11% -16.4% 606281 interrupts.CPU107.LOC:Local_timer_interrupts
725349 ± 11% -16.4% 606214 interrupts.CPU108.LOC:Local_timer_interrupts
725353 ± 11% -16.4% 606216 interrupts.CPU109.LOC:Local_timer_interrupts
7665 ± 24% -43.3% 4346 interrupts.CPU109.NMI:Non-maskable_interrupts
7665 ± 24% -43.3% 4346 interrupts.CPU109.PMI:Performance_monitoring_interrupts
722495 ± 11% -16.5% 603290 interrupts.CPU11.LOC:Local_timer_interrupts
725355 ± 11% -16.4% 606214 interrupts.CPU110.LOC:Local_timer_interrupts
725344 ± 11% -16.4% 606216 interrupts.CPU111.LOC:Local_timer_interrupts
723506 ± 11% -16.6% 603057 interrupts.CPU112.LOC:Local_timer_interrupts
722397 ± 11% -16.5% 603057 interrupts.CPU113.LOC:Local_timer_interrupts
722412 ± 11% -16.5% 603057 interrupts.CPU114.LOC:Local_timer_interrupts
722357 ± 11% -16.5% 603057 interrupts.CPU115.LOC:Local_timer_interrupts
722486 ± 11% -16.5% 603057 interrupts.CPU116.LOC:Local_timer_interrupts
722303 ± 11% -16.5% 603057 interrupts.CPU117.LOC:Local_timer_interrupts
722203 ± 11% -16.5% 603056 interrupts.CPU118.LOC:Local_timer_interrupts
722662 ± 11% -16.6% 603044 interrupts.CPU119.LOC:Local_timer_interrupts
722214 ± 11% -16.5% 603372 interrupts.CPU12.LOC:Local_timer_interrupts
191.00 ± 31% +55.5% 297.00 interrupts.CPU127.NMI:Non-maskable_interrupts
191.00 ± 31% +55.5% 297.00 interrupts.CPU127.PMI:Performance_monitoring_interrupts
722272 ± 11% -16.5% 603284 interrupts.CPU13.LOC:Local_timer_interrupts
186.25 ± 31% +41.2% 263.00 interrupts.CPU133.NMI:Non-maskable_interrupts
186.25 ± 31% +41.2% 263.00 interrupts.CPU133.PMI:Performance_monitoring_interrupts
4.75 ±149% +21415.8% 1022 interrupts.CPU136.RES:Rescheduling_interrupts
7.75 ±173% +3745.2% 298.00 interrupts.CPU138.RES:Rescheduling_interrupts
722218 ± 11% -16.5% 603282 interrupts.CPU14.LOC:Local_timer_interrupts
235.00 ± 12% +23.4% 290.00 interrupts.CPU143.NMI:Non-maskable_interrupts
235.00 ± 12% +23.4% 290.00 interrupts.CPU143.PMI:Performance_monitoring_interrupts
182.25 ± 18% +42.1% 259.00 interrupts.CPU146.NMI:Non-maskable_interrupts
182.25 ± 18% +42.1% 259.00 interrupts.CPU146.PMI:Performance_monitoring_interrupts
722730 ± 11% -16.5% 603544 interrupts.CPU15.LOC:Local_timer_interrupts
4257 ± 32% -32.4% 2877 interrupts.CPU15.NMI:Non-maskable_interrupts
4257 ± 32% -32.4% 2877 interrupts.CPU15.PMI:Performance_monitoring_interrupts
722394 ± 11% -16.4% 604200 interrupts.CPU16.LOC:Local_timer_interrupts
722465 ± 11% -16.5% 603070 interrupts.CPU17.LOC:Local_timer_interrupts
722215 ± 11% -16.5% 603067 interrupts.CPU18.LOC:Local_timer_interrupts
216.25 ± 17% -44.5% 120.00 interrupts.CPU181.NMI:Non-maskable_interrupts
216.25 ± 17% -44.5% 120.00 interrupts.CPU181.PMI:Performance_monitoring_interrupts
219.50 ± 17% -42.6% 126.00 interrupts.CPU182.NMI:Non-maskable_interrupts
219.50 ± 17% -42.6% 126.00 interrupts.CPU182.PMI:Performance_monitoring_interrupts
237.75 ± 16% -44.1% 133.00 interrupts.CPU183.NMI:Non-maskable_interrupts
237.75 ± 16% -44.1% 133.00 interrupts.CPU183.PMI:Performance_monitoring_interrupts
222.00 ± 18% -39.6% 134.00 interrupts.CPU184.NMI:Non-maskable_interrupts
222.00 ± 18% -39.6% 134.00 interrupts.CPU184.PMI:Performance_monitoring_interrupts
228.50 ± 17% -40.5% 136.00 interrupts.CPU185.NMI:Non-maskable_interrupts
228.50 ± 17% -40.5% 136.00 interrupts.CPU185.PMI:Performance_monitoring_interrupts
232.50 ± 22% -40.6% 138.00 interrupts.CPU186.NMI:Non-maskable_interrupts
232.50 ± 22% -40.6% 138.00 interrupts.CPU186.PMI:Performance_monitoring_interrupts
230.25 ± 21% -44.0% 129.00 interrupts.CPU187.NMI:Non-maskable_interrupts
230.25 ± 21% -44.0% 129.00 interrupts.CPU187.PMI:Performance_monitoring_interrupts
237.75 ± 23% -40.3% 142.00 interrupts.CPU188.NMI:Non-maskable_interrupts
237.75 ± 23% -40.3% 142.00 interrupts.CPU188.PMI:Performance_monitoring_interrupts
223.00 ± 18% -42.6% 128.00 interrupts.CPU189.NMI:Non-maskable_interrupts
223.00 ± 18% -42.6% 128.00 interrupts.CPU189.PMI:Performance_monitoring_interrupts
722128 ± 11% -16.5% 603068 interrupts.CPU19.LOC:Local_timer_interrupts
233.00 ± 26% -44.6% 129.00 interrupts.CPU190.NMI:Non-maskable_interrupts
233.00 ± 26% -44.6% 129.00 interrupts.CPU190.PMI:Performance_monitoring_interrupts
218.25 ± 22% +38.4% 302.00 interrupts.CPU191.NMI:Non-maskable_interrupts
218.25 ± 22% +38.4% 302.00 interrupts.CPU191.PMI:Performance_monitoring_interrupts
722201 ± 11% -16.5% 603218 interrupts.CPU2.LOC:Local_timer_interrupts
79468 ±106% -100.0% 1.00 interrupts.CPU2.RES:Rescheduling_interrupts
722524 ± 11% -16.5% 603073 interrupts.CPU20.LOC:Local_timer_interrupts
722169 ± 11% -16.5% 603061 interrupts.CPU21.LOC:Local_timer_interrupts
32.75 ±153% -100.0% 0.00 interrupts.CPU21.RES:Rescheduling_interrupts
722302 ± 11% -16.5% 603068 interrupts.CPU22.LOC:Local_timer_interrupts
722201 ± 11% -16.5% 603075 interrupts.CPU23.LOC:Local_timer_interrupts
197.00 ± 33% +44.7% 285.00 interrupts.CPU24.NMI:Non-maskable_interrupts
197.00 ± 33% +44.7% 285.00 interrupts.CPU24.PMI:Performance_monitoring_interrupts
186.00 ± 31% +39.8% 260.00 interrupts.CPU25.NMI:Non-maskable_interrupts
186.00 ± 31% +39.8% 260.00 interrupts.CPU25.PMI:Performance_monitoring_interrupts
226.75 ±168% -100.0% 0.00 interrupts.CPU25.RES:Rescheduling_interrupts
178.25 ± 15% +40.8% 251.00 interrupts.CPU26.NMI:Non-maskable_interrupts
178.25 ± 15% +40.8% 251.00 interrupts.CPU26.PMI:Performance_monitoring_interrupts
151.25 ± 88% -93.4% 10.00 interrupts.CPU26.RES:Rescheduling_interrupts
194.00 ± 13% +28.4% 249.00 interrupts.CPU27.NMI:Non-maskable_interrupts
194.00 ± 13% +28.4% 249.00 interrupts.CPU27.PMI:Performance_monitoring_interrupts
266.75 ±168% -100.0% 0.00 interrupts.CPU28.RES:Rescheduling_interrupts
722175 ± 11% -16.5% 603212 interrupts.CPU3.LOC:Local_timer_interrupts
226.75 ± 18% -34.3% 149.00 interrupts.CPU31.NMI:Non-maskable_interrupts
226.75 ± 18% -34.3% 149.00 interrupts.CPU31.PMI:Performance_monitoring_interrupts
225.50 ± 21% -33.5% 150.00 interrupts.CPU32.NMI:Non-maskable_interrupts
225.50 ± 21% -33.5% 150.00 interrupts.CPU32.PMI:Performance_monitoring_interrupts
216.50 ± 15% -40.0% 130.00 interrupts.CPU33.NMI:Non-maskable_interrupts
216.50 ± 15% -40.0% 130.00 interrupts.CPU33.PMI:Performance_monitoring_interrupts
214.00 ± 12% -33.6% 142.00 interrupts.CPU34.NMI:Non-maskable_interrupts
214.00 ± 12% -33.6% 142.00 interrupts.CPU34.PMI:Performance_monitoring_interrupts
219.50 ± 19% -39.0% 134.00 interrupts.CPU35.NMI:Non-maskable_interrupts
219.50 ± 19% -39.0% 134.00 interrupts.CPU35.PMI:Performance_monitoring_interrupts
233.00 ± 17% -37.8% 145.00 interrupts.CPU36.NMI:Non-maskable_interrupts
233.00 ± 17% -37.8% 145.00 interrupts.CPU36.PMI:Performance_monitoring_interrupts
210.25 ± 15% -42.9% 120.00 interrupts.CPU37.NMI:Non-maskable_interrupts
210.25 ± 15% -42.9% 120.00 interrupts.CPU37.PMI:Performance_monitoring_interrupts
9.00 ± 88% +1822.2% 173.00 interrupts.CPU37.RES:Rescheduling_interrupts
225.25 ± 19% -36.5% 143.00 interrupts.CPU38.NMI:Non-maskable_interrupts
225.25 ± 19% -36.5% 143.00 interrupts.CPU38.PMI:Performance_monitoring_interrupts
235.00 ± 20% -42.1% 136.00 interrupts.CPU39.NMI:Non-maskable_interrupts
235.00 ± 20% -42.1% 136.00 interrupts.CPU39.PMI:Performance_monitoring_interrupts
722365 ± 11% -16.5% 603196 interrupts.CPU4.LOC:Local_timer_interrupts
5175 ± 17% -38.8% 3167 interrupts.CPU41.CAL:Function_call_interrupts
29.75 ±130% +404.2% 150.00 interrupts.CPU48.RES:Rescheduling_interrupts
192.75 ± 33% +49.4% 288.00 interrupts.CPU49.NMI:Non-maskable_interrupts
192.75 ± 33% +49.4% 288.00 interrupts.CPU49.PMI:Performance_monitoring_interrupts
722191 ± 11% -16.5% 603195 interrupts.CPU5.LOC:Local_timer_interrupts
70.50 ±157% +559.6% 465.00 interrupts.CPU50.RES:Rescheduling_interrupts
163.75 ± 20% +65.5% 271.00 interrupts.CPU52.NMI:Non-maskable_interrupts
163.75 ± 20% +65.5% 271.00 interrupts.CPU52.PMI:Performance_monitoring_interrupts
164.50 ± 21% +54.4% 254.00 interrupts.CPU53.NMI:Non-maskable_interrupts
164.50 ± 21% +54.4% 254.00 interrupts.CPU53.PMI:Performance_monitoring_interrupts
163.50 ± 22% +65.7% 271.00 interrupts.CPU54.NMI:Non-maskable_interrupts
163.50 ± 22% +65.7% 271.00 interrupts.CPU54.PMI:Performance_monitoring_interrupts
156.25 ± 26% +77.3% 277.00 interrupts.CPU55.NMI:Non-maskable_interrupts
156.25 ± 26% +77.3% 277.00 interrupts.CPU55.PMI:Performance_monitoring_interrupts
159.75 ± 22% +69.0% 270.00 interrupts.CPU56.NMI:Non-maskable_interrupts
159.75 ± 22% +69.0% 270.00 interrupts.CPU56.PMI:Performance_monitoring_interrupts
161.50 ± 21% +71.5% 277.00 interrupts.CPU57.NMI:Non-maskable_interrupts
161.50 ± 21% +71.5% 277.00 interrupts.CPU57.PMI:Performance_monitoring_interrupts
160.00 ± 27% +59.4% 255.00 interrupts.CPU58.NMI:Non-maskable_interrupts
160.00 ± 27% +59.4% 255.00 interrupts.CPU58.PMI:Performance_monitoring_interrupts
722238 ± 11% -16.5% 603192 interrupts.CPU6.LOC:Local_timer_interrupts
722328 ± 11% -16.1% 606229 interrupts.CPU7.LOC:Local_timer_interrupts
4344 ± 58% +102.3% 8786 interrupts.CPU7.NMI:Non-maskable_interrupts
4344 ± 58% +102.3% 8786 interrupts.CPU7.PMI:Performance_monitoring_interrupts
723135 ± 11% -16.6% 603383 interrupts.CPU8.LOC:Local_timer_interrupts
3552 ± 32% +144.6% 8689 interrupts.CPU8.NMI:Non-maskable_interrupts
3552 ± 32% +144.6% 8689 interrupts.CPU8.PMI:Performance_monitoring_interrupts
193.75 ± 36% +55.9% 302.00 interrupts.CPU89.NMI:Non-maskable_interrupts
193.75 ± 36% +55.9% 302.00 interrupts.CPU89.PMI:Performance_monitoring_interrupts
722538 ± 11% -16.5% 603451 interrupts.CPU9.LOC:Local_timer_interrupts
59.25 ± 15% +239.2% 201.00 interrupts.CPU95.RES:Rescheduling_interrupts
725270 ± 11% -16.4% 606216 interrupts.CPU96.LOC:Local_timer_interrupts
725188 ± 11% -16.4% 606225 interrupts.CPU97.LOC:Local_timer_interrupts
725315 ± 11% -16.4% 606215 interrupts.CPU98.LOC:Local_timer_interrupts
725329 ± 11% -16.4% 606217 interrupts.CPU99.LOC:Local_timer_interrupts
236778 ± 18% -22.2% 184150 interrupts.RES:Rescheduling_interrupts
56689 ± 8% -11.4% 50215 softirqs.CPU0.SCHED
92703 ± 4% -18.6% 75470 softirqs.CPU10.RCU
99775 ± 9% -17.6% 82205 softirqs.CPU100.RCU
17730 ± 48% -77.7% 3951 softirqs.CPU100.SCHED
147300 ± 22% -39.3% 89387 softirqs.CPU100.TIMER
97189 ± 7% -12.1% 85398 softirqs.CPU101.RCU
148056 ± 22% -38.4% 91142 softirqs.CPU101.TIMER
96474 ± 6% -12.3% 84575 softirqs.CPU102.RCU
11371 ± 39% -54.8% 5143 softirqs.CPU102.SCHED
147482 ± 21% -38.2% 91133 softirqs.CPU102.TIMER
17656 ± 78% +135.9% 41654 softirqs.CPU103.SCHED
97780 ± 5% -6.7% 91254 softirqs.CPU104.RCU
12354 ± 46% +130.2% 28441 softirqs.CPU104.SCHED
96069 ± 5% -13.6% 82982 softirqs.CPU105.RCU
11766 ± 41% -64.6% 4164 softirqs.CPU105.SCHED
147057 ± 21% -38.9% 89793 softirqs.CPU105.TIMER
95668 ± 5% -14.0% 82315 softirqs.CPU106.RCU
11506 ± 40% -65.7% 3942 softirqs.CPU106.SCHED
146821 ± 21% -39.2% 89302 softirqs.CPU106.TIMER
95348 ± 5% -12.9% 83062 softirqs.CPU107.RCU
11337 ± 38% -63.9% 4089 softirqs.CPU107.SCHED
146151 ± 21% -38.6% 89764 softirqs.CPU107.TIMER
95571 ± 5% -13.5% 82713 softirqs.CPU108.RCU
11395 ± 38% -65.2% 3963 softirqs.CPU108.SCHED
146604 ± 21% -38.7% 89864 softirqs.CPU108.TIMER
95989 ± 5% -14.0% 82522 softirqs.CPU109.RCU
11349 ± 38% -71.0% 3291 softirqs.CPU109.SCHED
145958 ± 21% -38.5% 89709 softirqs.CPU109.TIMER
95729 ± 6% -12.9% 83348 softirqs.CPU110.RCU
11495 ± 40% -63.9% 4155 softirqs.CPU110.SCHED
146755 ± 21% -38.8% 89796 softirqs.CPU110.TIMER
96518 ± 6% -13.7% 83265 softirqs.CPU111.RCU
11595 ± 39% -64.7% 4096 softirqs.CPU111.SCHED
147672 ± 21% -39.3% 89661 softirqs.CPU111.TIMER
48538 ± 10% -17.4% 40089 softirqs.CPU112.SCHED
181139 ± 5% -27.6% 131121 softirqs.CPU112.TIMER
105420 ± 3% -8.6% 96389 softirqs.CPU113.RCU
104742 ± 2% -8.6% 95780 softirqs.CPU114.RCU
45533 ± 12% -23.7% 34751 softirqs.CPU115.SCHED
107624 ± 6% -10.0% 96900 softirqs.CPU116.RCU
107635 -10.2% 96613 softirqs.CPU117.RCU
106064 -9.1% 96400 softirqs.CPU119.RCU
46927 ± 12% -16.3% 39285 softirqs.CPU119.SCHED
102984 ± 9% -11.7% 90910 softirqs.CPU120.RCU
44966 ± 9% -38.4% 27711 softirqs.CPU122.SCHED
105488 ± 6% -12.3% 92505 softirqs.CPU123.RCU
46573 ± 12% -40.1% 27891 softirqs.CPU123.SCHED
89927 ± 3% -13.3% 77994 softirqs.CPU129.RCU
45618 ± 9% -32.3% 30900 softirqs.CPU129.SCHED
90337 ± 4% -6.4% 84556 softirqs.CPU130.RCU
88433 ± 3% -12.8% 77071 softirqs.CPU132.RCU
45279 ± 9% -30.8% 31351 softirqs.CPU132.SCHED
89813 ± 5% -10.2% 80624 softirqs.CPU134.RCU
44868 ± 10% -28.7% 31991 softirqs.CPU134.SCHED
89257 ± 6% -15.4% 75483 softirqs.CPU137.RCU
45875 ± 11% -34.0% 30298 softirqs.CPU137.SCHED
89642 ± 4% -9.6% 81077 softirqs.CPU138.RCU
90646 ± 5% -11.0% 80656 softirqs.CPU139.RCU
90560 ± 4% -7.4% 83860 softirqs.CPU140.RCU
90906 ± 6% -9.3% 82430 softirqs.CPU141.RCU
89318 ± 4% -14.8% 76057 softirqs.CPU143.RCU
46962 ± 8% -31.6% 32130 softirqs.CPU143.SCHED
90171 ± 10% -15.2% 76492 softirqs.CPU144.RCU
45321 ± 13% -22.8% 34975 softirqs.CPU144.SCHED
93262 ± 7% -12.2% 81921 softirqs.CPU145.RCU
46520 ± 14% -24.4% 35161 softirqs.CPU145.SCHED
45605 ± 14% -23.3% 34983 softirqs.CPU146.SCHED
89096 ± 7% -18.6% 72512 softirqs.CPU147.RCU
44452 ± 11% -22.0% 34677 softirqs.CPU147.SCHED
90046 ± 6% -17.0% 74727 softirqs.CPU149.RCU
46880 ± 8% -21.9% 36593 softirqs.CPU149.SCHED
88970 ± 5% -16.2% 74549 softirqs.CPU150.RCU
44572 ± 14% -17.4% 36834 softirqs.CPU150.SCHED
87986 ± 4% -11.9% 77513 softirqs.CPU157.RCU
45726 ± 13% -19.7% 36714 softirqs.CPU157.SCHED
93065 ± 7% -13.2% 80792 softirqs.CPU158.RCU
47646 ± 10% -23.2% 36608 softirqs.CPU169.SCHED
46175 ± 8% -20.6% 36653 softirqs.CPU170.SCHED
47015 ± 7% -21.7% 36797 softirqs.CPU171.SCHED
47475 ± 8% -21.4% 37299 softirqs.CPU172.SCHED
47893 ± 8% -21.1% 37775 softirqs.CPU173.SCHED
47081 ± 8% -19.0% 38140 softirqs.CPU174.SCHED
46005 ± 8% -15.6% 38837 softirqs.CPU175.SCHED
47523 ± 8% -13.1% 41314 softirqs.CPU176.SCHED
46430 ± 8% -12.6% 40585 softirqs.CPU178.SCHED
88784 ± 5% -18.7% 72203 softirqs.CPU179.RCU
46480 ± 8% -18.1% 38086 softirqs.CPU179.SCHED
91293 ± 8% -18.9% 74068 softirqs.CPU180.RCU
46192 ± 8% -16.7% 38497 softirqs.CPU180.SCHED
91738 ± 8% -10.0% 82562 softirqs.CPU181.RCU
45715 ± 9% -15.2% 38746 softirqs.CPU181.SCHED
91089 ± 7% -12.6% 79615 softirqs.CPU182.RCU
45794 ± 9% -15.5% 38699 softirqs.CPU182.SCHED
88730 ± 5% -17.9% 72841 softirqs.CPU183.RCU
47390 ± 8% -18.3% 38700 softirqs.CPU183.SCHED
92576 ± 6% -16.3% 77505 softirqs.CPU184.RCU
45521 ± 9% -14.8% 38781 softirqs.CPU184.SCHED
88044 ± 5% -16.0% 73961 softirqs.CPU185.RCU
47396 ± 8% -17.6% 39050 softirqs.CPU185.SCHED
88826 ± 6% -18.6% 72296 softirqs.CPU186.RCU
45769 ± 8% -13.9% 39403 softirqs.CPU186.SCHED
91422 ± 6% -18.7% 74310 softirqs.CPU187.RCU
45545 ± 9% -13.8% 39243 softirqs.CPU187.SCHED
94833 ± 5% -24.8% 71344 softirqs.CPU188.RCU
46021 ± 10% -14.6% 39314 softirqs.CPU188.SCHED
94543 ± 5% -20.5% 75202 softirqs.CPU189.RCU
47905 ± 9% -17.5% 39501 softirqs.CPU189.SCHED
90552 ± 7% -11.6% 80007 softirqs.CPU190.RCU
96650 ± 5% -10.9% 86101 softirqs.CPU191.RCU
106296 ± 4% -7.2% 98607 softirqs.CPU22.RCU
107223 ± 4% -9.7% 96866 softirqs.CPU24.RCU
108255 ± 5% -12.9% 94249 softirqs.CPU25.RCU
46348 ± 9% -67.0% 15297 softirqs.CPU25.SCHED
109583 ± 5% -10.7% 97882 softirqs.CPU26.RCU
46691 ± 8% -12.1% 41043 softirqs.CPU26.SCHED
108483 ± 3% -10.8% 96769 softirqs.CPU27.RCU
46821 ± 9% -12.3% 41051 softirqs.CPU27.SCHED
106958 ± 5% -14.8% 91109 softirqs.CPU28.RCU
45482 ± 9% -68.3% 14429 softirqs.CPU28.SCHED
107793 ± 4% -11.4% 95488 softirqs.CPU29.RCU
44973 ± 10% -12.1% 39542 softirqs.CPU29.SCHED
108063 ± 3% -16.5% 90210 softirqs.CPU30.RCU
45032 ± 11% -63.8% 16309 softirqs.CPU30.SCHED
107152 ± 5% -11.6% 94683 softirqs.CPU31.RCU
44928 ± 10% -56.4% 19583 softirqs.CPU31.SCHED
94918 ± 4% -18.5% 77378 softirqs.CPU32.RCU
44508 ± 11% -57.4% 18974 softirqs.CPU32.SCHED
93668 ± 2% -23.7% 71431 softirqs.CPU33.RCU
44666 ± 10% -59.5% 18100 softirqs.CPU33.SCHED
96644 ± 7% -21.2% 76202 softirqs.CPU34.RCU
44895 ± 10% -59.2% 18303 softirqs.CPU34.SCHED
92018 ± 2% -20.0% 73592 softirqs.CPU35.RCU
44843 ± 10% -60.0% 17921 softirqs.CPU35.SCHED
92157 -9.4% 83540 softirqs.CPU36.RCU
44491 ± 12% -48.3% 22984 softirqs.CPU37.SCHED
94202 ± 3% -19.1% 76243 softirqs.CPU38.RCU
43107 ± 16% -56.6% 18705 softirqs.CPU38.SCHED
93262 ± 2% -17.4% 77069 softirqs.CPU39.RCU
44906 ± 12% -58.7% 18559 softirqs.CPU39.SCHED
90791 ± 2% -10.5% 81285 softirqs.CPU41.RCU
92583 ± 2% -14.5% 79125 softirqs.CPU42.RCU
93478 ± 2% -21.7% 73191 softirqs.CPU43.RCU
44702 ± 11% -64.3% 15953 softirqs.CPU43.SCHED
93415 ± 3% -10.5% 83593 softirqs.CPU44.RCU
93672 ± 4% -20.2% 74756 softirqs.CPU45.RCU
44458 ± 12% -68.8% 13856 softirqs.CPU45.SCHED
93669 ± 4% -20.5% 74454 softirqs.CPU46.RCU
47017 ± 8% -68.1% 14981 softirqs.CPU46.SCHED
93358 ± 4% -23.8% 71147 softirqs.CPU47.RCU
98765 ± 9% -17.9% 81080 softirqs.CPU48.RCU
46694 ± 10% -29.2% 33070 softirqs.CPU48.SCHED
92432 ± 9% -16.0% 77657 softirqs.CPU5.RCU
60526 ± 43% -35.2% 39236 softirqs.CPU5.SCHED
45249 ± 13% -30.9% 31264 softirqs.CPU50.SCHED
91585 ± 7% -25.3% 68413 softirqs.CPU51.RCU
45055 ± 11% -63.6% 16418 softirqs.CPU51.SCHED
89001 ± 3% -19.9% 71253 softirqs.CPU53.RCU
42989 ± 13% -54.7% 19458 softirqs.CPU53.SCHED
89893 ± 5% -22.2% 69935 softirqs.CPU54.RCU
43136 ± 13% -56.7% 18691 softirqs.CPU54.SCHED
92521 ± 7% -12.9% 80589 softirqs.CPU55.RCU
42323 ± 15% -55.7% 18759 softirqs.CPU55.SCHED
90042 ± 4% -17.7% 74132 softirqs.CPU56.RCU
43107 ± 13% -63.0% 15964 softirqs.CPU56.SCHED
90021 ± 5% -16.2% 75404 softirqs.CPU57.RCU
42786 ± 14% -60.6% 16845 softirqs.CPU57.SCHED
89947 ± 5% -16.9% 74725 softirqs.CPU58.RCU
43099 ± 13% -57.4% 18343 softirqs.CPU58.SCHED
91440 ± 9% -17.1% 75778 softirqs.CPU59.RCU
44659 ± 10% -53.3% 20868 softirqs.CPU59.SCHED
88172 ± 8% -12.9% 76806 softirqs.CPU6.RCU
92480 ± 8% -11.5% 81878 softirqs.CPU60.RCU
42281 ± 15% -56.4% 18426 softirqs.CPU60.SCHED
90018 ± 5% -16.4% 75253 softirqs.CPU61.RCU
43299 ± 13% -61.3% 16751 softirqs.CPU61.SCHED
92198 ± 8% -20.5% 73298 softirqs.CPU62.RCU
45177 ± 11% -63.5% 16498 softirqs.CPU62.SCHED
45892 ± 12% -23.7% 35031 softirqs.CPU64.SCHED
90560 ± 4% -9.5% 81927 softirqs.CPU65.RCU
43320 ± 12% -52.1% 20763 softirqs.CPU65.SCHED
42094 ± 15% -56.6% 18257 softirqs.CPU66.SCHED
91876 ± 4% -7.5% 84984 softirqs.CPU67.RCU
43215 ± 15% -59.4% 17545 softirqs.CPU67.SCHED
90479 ± 4% -9.8% 81649 softirqs.CPU68.RCU
42921 ± 13% -60.4% 16998 softirqs.CPU68.SCHED
92454 ± 6% -10.1% 83080 softirqs.CPU69.RCU
45395 ± 11% -62.8% 16909 softirqs.CPU69.SCHED
37417 ± 14% -89.1% 4079 softirqs.CPU7.SCHED
154619 ± 16% -41.1% 90996 softirqs.CPU7.TIMER
90591 ± 4% -7.9% 83471 softirqs.CPU70.RCU
43179 ± 13% -56.9% 18595 softirqs.CPU70.SCHED
91531 ± 6% -7.8% 84364 softirqs.CPU71.RCU
42983 ± 13% -57.2% 18378 softirqs.CPU71.SCHED
94665 ± 10% -15.0% 80506 softirqs.CPU72.RCU
46535 ± 10% -57.6% 19719 softirqs.CPU72.SCHED
45130 ± 12% -58.3% 18809 softirqs.CPU73.SCHED
42465 ± 19% -57.2% 18179 softirqs.CPU74.SCHED
95026 ± 6% -12.2% 83431 softirqs.CPU75.RCU
45373 ± 11% -60.6% 17862 softirqs.CPU75.SCHED
94421 ± 6% -10.9% 84163 softirqs.CPU76.RCU
45176 ± 9% -58.8% 18616 softirqs.CPU76.SCHED
94766 ± 8% -13.3% 82201 softirqs.CPU77.RCU
45494 ± 9% -57.1% 19538 softirqs.CPU77.SCHED
95098 ± 7% -14.3% 81533 softirqs.CPU78.RCU
47674 ± 7% -57.1% 20439 softirqs.CPU78.SCHED
96861 ± 6% -13.0% 84295 softirqs.CPU79.RCU
47281 ± 7% -58.3% 19731 softirqs.CPU79.SCHED
92989 ± 4% -13.9% 80048 softirqs.CPU8.RCU
43836 ± 12% -63.1% 16194 softirqs.CPU8.SCHED
164867 ± 18% -32.9% 110581 softirqs.CPU8.TIMER
95346 ± 8% -12.6% 83345 softirqs.CPU80.RCU
47068 ± 7% -60.9% 18422 softirqs.CPU80.SCHED
92325 ± 7% -21.1% 72840 softirqs.CPU81.RCU
46490 ± 7% -62.8% 17282 softirqs.CPU81.SCHED
92011 ± 6% -16.4% 76950 softirqs.CPU82.RCU
46311 ± 7% -61.9% 17628 softirqs.CPU82.SCHED
89899 ± 5% -20.6% 71360 softirqs.CPU83.RCU
46137 ± 7% -61.6% 17713 softirqs.CPU83.SCHED
91637 ± 7% -20.4% 72946 softirqs.CPU84.RCU
44425 ± 11% -56.2% 19477 softirqs.CPU84.SCHED
92990 ± 8% -10.0% 83648 softirqs.CPU85.RCU
44585 ± 10% -52.7% 21105 softirqs.CPU85.SCHED
94612 ± 7% -15.7% 79719 softirqs.CPU86.RCU
46872 ± 7% -56.9% 20182 softirqs.CPU86.SCHED
90747 ± 5% -20.2% 72384 softirqs.CPU87.RCU
44863 ± 10% -56.7% 19432 softirqs.CPU87.SCHED
93852 ± 6% -17.2% 77695 softirqs.CPU88.RCU
44454 ± 11% -55.3% 19873 softirqs.CPU88.SCHED
89669 ± 4% -8.0% 82535 softirqs.CPU89.RCU
44949 ± 10% -23.4% 34419 softirqs.CPU89.SCHED
89747 ± 5% -19.5% 72208 softirqs.CPU90.RCU
44453 ± 11% -47.7% 23241 softirqs.CPU90.SCHED
93196 ± 7% -20.7% 73951 softirqs.CPU91.RCU
44599 ± 11% -50.6% 22016 softirqs.CPU91.SCHED
92979 ± 7% -23.4% 71214 softirqs.CPU92.RCU
44708 ± 10% -50.5% 22141 softirqs.CPU92.SCHED
93728 ± 8% -19.1% 75842 softirqs.CPU93.RCU
44667 ± 11% -46.8% 23784 softirqs.CPU93.SCHED
93413 ± 8% -22.5% 72371 softirqs.CPU94.RCU
44781 ± 10% -41.4% 26232 softirqs.CPU94.SCHED
98971 ± 6% -11.8% 87340 softirqs.CPU95.RCU
47603 ± 8% -18.2% 38922 softirqs.CPU95.SCHED
96746 ± 5% -13.6% 83607 softirqs.CPU96.RCU
101818 -18.1% 83394 softirqs.CPU97.RCU
22246 ± 60% -83.7% 3622 softirqs.CPU97.SCHED
156558 ± 17% -43.6% 88347 softirqs.CPU97.TIMER
95633 ± 6% -13.9% 82360 softirqs.CPU98.RCU
11351 ± 39% -63.4% 4152 softirqs.CPU98.SCHED
146607 ± 21% -38.7% 89932 softirqs.CPU98.TIMER
96507 ± 6% -13.6% 83363 softirqs.CPU99.RCU
11800 ± 39% -65.3% 4092 softirqs.CPU99.SCHED
169798 ± 23% -47.0% 89958 softirqs.CPU99.TIMER
18156052 ± 3% -11.1% 16134404 softirqs.RCU
8141587 ± 10% -28.0% 5861065 softirqs.SCHED
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[mm] 5c555b54a8: INFO:rcu_sched_self-detected_stall_on_CPU
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 5c555b54a84ac88b1babeb19ec93b5f69077277a ("mm: Allow find_get_page to be used for large pages")
git://git.infradead.org/users/willy/linux-dax.git xarray-pagecache
in testcase: xfstests
with following parameters:
disk: 4HDD
fs: btrfs
test: generic-group24
test-description: xfstests is a regression test suite for xfs and other filesystems.
test-url: git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+------------------------------------------------+------------+------------+
| | c1d3c048b2 | 5c555b54a8 |
+------------------------------------------------+------------+------------+
| boot_successes | 8 | 0 |
| boot_failures | 0 | 8 |
| INFO:rcu_sched_self-detected_stall_on_CPU | 0 | 8 |
| RIP:pagecache_get_page | 0 | 3 |
| BUG:soft_lockup-CPU##stuck_for#s![fsstress:#] | 0 | 8 |
| RIP:xas_load | 0 | 7 |
| Kernel_panic-not_syncing:softlockup:hung_tasks | 0 | 8 |
| RIP:xas_start | 0 | 2 |
| RIP:xas_find | 0 | 2 |
+------------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <rong.a.chen@intel.com>
[ 177.324084] rcu: INFO: rcu_sched self-detected stall on CPU
[ 177.327011] rcu: 1-...!: (99999 ticks this GP) idle=64a/1/0x4000000000000002 softirq=69153/69153 fqs=37
[ 177.328193] (t=100004 jiffies g=71273 q=1649)
[ 177.328998] rcu: rcu_sched kthread starved for 99851 jiffies! g71273 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0
[ 177.330166] rcu: RCU grace-period kthread stack dump:
[ 177.331033] rcu_sched R running task 0 10 2 0x80004000
[ 177.332071] Call Trace:
[ 177.332801] ? __schedule+0x28f/0x770
[ 177.333579] schedule+0x59/0xd0
[ 177.334325] schedule_timeout+0x16b/0x310
[ 177.335117] ? rcu_report_qs_rnp+0xd8/0xf0
[ 177.335914] ? __next_timer_interrupt+0xc0/0xc0
[ 177.336731] rcu_gp_kthread+0x65f/0xae0
[ 177.337510] ? rcu_accelerate_cbs_unlocked+0x80/0x80
[ 177.338352] kthread+0x11e/0x140
[ 177.339086] ? kthread_park+0x90/0x90
[ 177.339875] ret_from_fork+0x35/0x40
[ 177.340639] NMI backtrace for cpu 1
[ 177.341385] CPU: 1 PID: 1515 Comm: fsstress Not tainted 5.5.0-rc2-00364-g5c555b54a84ac #1
[ 177.342457] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 177.343505] Call Trace:
[ 177.344247] <IRQ>
[ 177.344910] dump_stack+0x66/0x8b
[ 177.345653] nmi_cpu_backtrace+0x89/0x90
[ 177.346431] ? lapic_can_unplug_cpu+0xa0/0xa0
[ 177.347230] nmi_trigger_cpumask_backtrace+0xe0/0x110
[ 177.348072] rcu_dump_cpu_stacks+0x9c/0xd4
[ 177.348844] rcu_sched_clock_irq+0x5f3/0x860
[ 177.349615] ? tick_sched_do_timer+0x60/0x60
[ 177.350364] update_process_times+0x24/0x50
[ 177.351112] tick_sched_handle+0x21/0x70
[ 177.351862] tick_sched_timer+0x37/0x70
[ 177.352572] __hrtimer_run_queues+0x108/0x2b0
[ 177.353303] hrtimer_interrupt+0xe5/0x240
[ 177.354003] smp_apic_timer_interrupt+0x6a/0x140
[ 177.354724] apic_timer_interrupt+0xf/0x20
[ 177.355391] </IRQ>
[ 177.355989] RIP: 0010:xas_load+0x8/0x80
[ 177.356650] Code: 31 c0 c3 48 c1 f8 02 85 c0 74 b3 31 c0 c3 48 8b 07 48 8b 40 08 c3 66 66 2e 0f 1f 84 00 00 00 00 00 90 49 89 f8 e8 68 ff ff ff <48> 89 c2 83 e2 03 48 83 fa 02 75 08 48 3d 00 10 00 00 77 02 f3 c3
[ 177.358668] RSP: 0018:ffff9820822c3bb8 EFLAGS: 00000206 ORIG_RAX: ffffffffffffff13
[ 177.359588] RAX: ffff89685dd1791a RBX: 0000000000000062 RCX: 0000000000000002
[ 177.360495] RDX: 0000000000000001 RSI: ffff89685dd17da8 RDI: ffff9820822c3be0
[ 177.361378] RBP: 0000000000000000 R08: ffff9820822c3be0 R09: ffff9820822c3be0
[ 177.362262] R10: 0000000000000062 R11: 0000000000000230 R12: 0000000000000000
[ 177.363150] R13: 0000000000000000 R14: 0000000000000062 R15: 000101c5a0001001
[ 177.364085] ? xas_load+0x8/0x80
[ 177.364740] xas_find+0x158/0x190
[ 177.365396] pagecache_get_page+0xa3/0x510
[ 177.366094] do_read_cache_page+0x52/0x740
[ 177.366775] vfs_dedupe_get_page+0x12/0xa0
[ 177.367492] generic_remap_file_range_prep+0x1e3/0x640
[ 177.368341] btrfs_remap_file_range+0x14a/0x3e0 [btrfs]
[ 177.369081] vfs_dedupe_file_range_one+0x137/0x150
[ 177.369780] vfs_dedupe_file_range+0x159/0x1c0
[ 177.370455] do_vfs_ioctl+0x27a/0x740
[ 177.371087] ksys_ioctl+0x70/0x80
[ 177.371724] __x64_sys_ioctl+0x16/0x20
[ 177.372381] do_syscall_64+0x5b/0x1f0
[ 177.373010] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 177.373729] RIP: 0033:0x7f9ab56fa017
[ 177.374354] Code: 00 00 00 48 8b 05 81 7e 2b 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 51 7e 2b 00 f7 d8 64 89 01 48
[ 177.376379] RSP: 002b:00007ffeac8ce2a8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
[ 177.377287] RAX: ffffffffffffffda RBX: 0000000000000068 RCX: 00007f9ab56fa017
[ 177.378171] RDX: 0000558215846170 RSI: 00000000c0189436 RDI: 0000000000000004
[ 177.379051] RBP: 000000000000000b R08: 0000558215851a80 R09: 0000000000000005
[ 177.379967] R10: 0000000000000078 R11: 0000000000000246 R12: 0000000000004000
[ 177.380831] R13: 00005582158518c0 R14: 0000558215851578 R15: 0000000000000068
[ 240.526048] watchdog: BUG: soft lockup - CPU#0 stuck for 135s! [fsstress:1517]
[ 240.529167] Modules linked in: btrfs blake2b_generic xor zstd_decompress zstd_compress raid6_pq libcrc32c dm_mod intel_rapl_msr intel_rapl_common bochs_drm sr_mod drm_vram_helper crct10dif_pclmul crc32_pclmul cdrom drm_ttm_helper crc32c_intel ttm sg ghash_clmulni_intel drm_kms_helper ppdev syscopyarea sysfillrect sysimgblt ata_generic pata_acpi fb_sys_fops drm snd_pcm aesni_intel snd_timer crypto_simd cryptd glue_helper snd joydev soundcore ata_piix pcspkr serio_raw libata i2c_piix4 parport_pc floppy parport ip_tables
[ 240.536523] CPU: 0 PID: 1517 Comm: fsstress Not tainted 5.5.0-rc2-00364-g5c555b54a84ac #1
[ 240.537969] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 240.539442] RIP: 0010:xas_load+0x40/0x80
[ 240.540560] Code: 00 10 00 00 77 02 f3 c3 0f b6 48 fe 48 8d 70 fe 41 38 48 10 77 f0 49 8b 50 08 48 d3 ea 83 e2 3f 89 d0 48 8d 44 c6 28 48 8b 00 <49> 89 70 18 48 89 c1 83 e1 03 48 83 f9 02 75 18 48 3d fd 00 00 00
[ 240.543738] RSP: 0018:ffff98208238bbb8 EFLAGS: 00000202 ORIG_RAX: ffffffffffffff13
[ 240.545206] RAX: ffff89685387824a RBX: 0000000000000055 RCX: 0000000000000006
[ 240.546642] RDX: 0000000000000001 RSI: ffff89685dff4b68 RDI: ffff98208238bbe0
[ 240.548090] RBP: 0000000000000000 R08: ffff98208238bbe0 R09: ffff98208238bbe0
[ 240.549532] R10: 0000000000000055 R11: 0000000000000001 R12: 0000000000000000
[ 240.550986] R13: 0000000000000000 R14: 0000000000000055 R15: 000101fba0001001
[ 240.552417] FS: 00007f9ab61f5b40(0000) GS:ffff8968bfc00000(0000) knlGS:0000000000000000
[ 240.553906] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 240.555200] CR2: 00007f9ab61fb631 CR3: 00000001d47ee000 CR4: 00000000000406f0
[ 240.556630] Call Trace:
[ 240.557650] xas_find+0x158/0x190
[ 240.558744] pagecache_get_page+0xa3/0x510
[ 240.559907] do_read_cache_page+0x52/0x740
[ 240.561059] vfs_dedupe_get_page+0x12/0xa0
[ 240.562267] generic_remap_file_range_prep+0x1e3/0x640
[ 240.563546] btrfs_remap_file_range+0x14a/0x3e0 [btrfs]
[ 240.564798] vfs_dedupe_file_range_one+0x137/0x150
[ 240.566016] vfs_dedupe_file_range+0x159/0x1c0
[ 240.567201] do_vfs_ioctl+0x27a/0x740
[ 240.568326] ksys_ioctl+0x70/0x80
[ 240.569407] __x64_sys_ioctl+0x16/0x20
[ 240.570523] do_syscall_64+0x5b/0x1f0
[ 240.571643] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 240.572872] RIP: 0033:0x7f9ab56fa017
[ 240.573981] Code: 00 00 00 48 8b 05 81 7e 2b 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 51 7e 2b 00 f7 d8 64 89 01 48
[ 240.577222] RSP: 002b:00007ffeac8ce2a8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
[ 240.578726] RAX: ffffffffffffffda RBX: 0000000000000053 RCX: 00007f9ab56fa017
[ 240.580201] RDX: 0000558215850250 RSI: 00000000c0189436 RDI: 0000000000000004
[ 240.581652] RBP: 000000000000000b R08: 0000558215851610 R09: 0000000000000004
[ 240.583083] R10: 0000000000000078 R11: 0000000000000246 R12: 0000000000016000
[ 240.584506] R13: 00005582158514a8 R14: 000055821584a478 R15: 0000000000000053
[ 240.585919] Kernel panic - not syncing: softlockup: hung tasks
[ 240.587224] CPU: 0 PID: 1517 Comm: fsstress Tainted: G L 5.5.0-rc2-00364-g5c555b54a84ac #1
[ 240.588870] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 240.590419] Call Trace:
[ 240.591484] <IRQ>
[ 240.592508] dump_stack+0x66/0x8b
[ 240.593607] panic+0x105/0x2e0
[ 240.594660] watchdog_timer_fn+0x256/0x260
[ 240.595782] ? softlockup_fn+0x40/0x40
[ 240.596848] __hrtimer_run_queues+0x108/0x2b0
[ 240.597943] hrtimer_interrupt+0xe5/0x240
[ 240.599002] smp_apic_timer_interrupt+0x6a/0x140
[ 240.600100] apic_timer_interrupt+0xf/0x20
[ 240.601133] </IRQ>
[ 240.601980] RIP: 0010:xas_load+0x40/0x80
[ 240.602962] Code: 00 10 00 00 77 02 f3 c3 0f b6 48 fe 48 8d 70 fe 41 38 48 10 77 f0 49 8b 50 08 48 d3 ea 83 e2 3f 89 d0 48 8d 44 c6 28 48 8b 00 <49> 89 70 18 48 89 c1 83 e1 03 48 83 f9 02 75 18 48 3d fd 00 00 00
[ 240.605864] RSP: 0018:ffff98208238bbb8 EFLAGS: 00000202 ORIG_RAX: ffffffffffffff13
[ 240.607174] RAX: ffff89685387824a RBX: 0000000000000055 RCX: 0000000000000006
[ 240.608470] RDX: 0000000000000001 RSI: ffff89685dff4b68 RDI: ffff98208238bbe0
[ 240.609761] RBP: 0000000000000000 R08: ffff98208238bbe0 R09: ffff98208238bbe0
[ 240.611051] R10: 0000000000000055 R11: 0000000000000001 R12: 0000000000000000
[ 240.612350] R13: 0000000000000000 R14: 0000000000000055 R15: 000101fba0001001
[ 240.613620] ? xas_load+0x8/0x80
[ 240.614558] xas_find+0x158/0x190
[ 240.615501] pagecache_get_page+0xa3/0x510
[ 240.616508] do_read_cache_page+0x52/0x740
[ 240.617513] vfs_dedupe_get_page+0x12/0xa0
[ 240.618577] generic_remap_file_range_prep+0x1e3/0x640
[ 240.619704] btrfs_remap_file_range+0x14a/0x3e0 [btrfs]
[ 240.620810] vfs_dedupe_file_range_one+0x137/0x150
[ 240.621879] vfs_dedupe_file_range+0x159/0x1c0
[ 240.622912] do_vfs_ioctl+0x27a/0x740
[ 240.623887] ksys_ioctl+0x70/0x80
[ 240.624830] __x64_sys_ioctl+0x16/0x20
[ 240.625810] do_syscall_64+0x5b/0x1f0
[ 240.626779] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 240.627878] RIP: 0033:0x7f9ab56fa017
[ 240.628843] Code: 00 00 00 48 8b 05 81 7e 2b 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 2e 0f 1f 84 00 00 00 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 51 7e 2b 00 f7 d8 64 89 01 48
[ 240.631769] RSP: 002b:00007ffeac8ce2a8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
[ 240.633087] RAX: ffffffffffffffda RBX: 0000000000000053 RCX: 00007f9ab56fa017
[ 240.634357] RDX: 0000558215850250 RSI: 00000000c0189436 RDI: 0000000000000004
[ 240.635632] RBP: 000000000000000b R08: 0000558215851610 R09: 0000000000000004
[ 240.636896] R10: 0000000000000078 R11: 0000000000000246 R12: 0000000000016000
[ 240.638154] R13: 00005582158514a8 R14: 000055821584a478 R15: 0000000000000053
[ 240.639462] Kernel Offset: 0x33000000 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff)
Elapsed time: 300
qemu-img create -f qcow2 disk-vm-snb-b61411d4a03e-0 256G
qemu-img create -f qcow2 disk-vm-snb-b61411d4a03e-1 256G
qemu-img create -f qcow2 disk-vm-snb-b61411d4a03e-2 256G
qemu-img create -f qcow2 disk-vm-snb-b61411d4a03e-3 256G
qemu-img create -f qcow2 disk-vm-snb-b61411d4a03e-4 256G
qemu-img create -f qcow2 disk-vm-snb-b61411d4a03e-5 256G
qemu-img create -f qcow2 disk-vm-snb-b61411d4a03e-6 256G
kvm=(
qemu-system-x86_64
-enable-kvm
-cpu SandyBridge
-kernel $kernel
-initrd initrd-vm-snb-b61411d4a03e
-m 8192
-smp 2
To reproduce:
# build kernel
cd linux
cp config-5.5.0-rc2-00364-g5c555b54a84ac .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage modules
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 INSTALL_MOD_PATH=<mod-install-dir> modules_install
cd <mod-install-dir>
find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> -m modules.cgz job-script # job-script is attached in this email
Thanks,
Rong Chen
[mm/lru] 5470521fac: unixbench.score -2.3% regression
by kernel test robot
Greeting,
FYI, we noticed a -2.3% regression of unixbench.score due to commit:
commit: 5470521faccbd36e0725c2a7b87b76b8e87e23ed ("mm/lru: debug checking for page memcg moving and lock_page_memcg")
https://github.com/alexshi/linux.git lru-next
in testcase: unixbench
on test machine: 104 threads Skylake with 192G memory
with following parameters:
runtime: 300s
nr_task: 1
test: shell8
cpufreq_governor: performance
ucode: 0x2000065
test-description: UnixBench is the original BYTE UNIX benchmark suite that aims to test the performance of Unix-like systems.
test-url: https://github.com/kdlucas/byte-unixbench
In addition to that, the commit also has significant impact on the following tests:
+------------------+---------------------------------------------------------------------------+
| testcase: change | unixbench: unixbench.score -1.5% regression |
| test machine | 8 threads Intel(R) Core(TM) i7-3770K CPU @ 3.50GHz with 16G memory |
| test parameters | cpufreq_governor=performance |
| | nr_task=30% |
| | runtime=300s |
| | test=shell1 |
| | ucode=0x21 |
+------------------+---------------------------------------------------------------------------+
| testcase: change | unixbench: unixbench.score -2.3% regression |
| test machine | 192 threads Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory |
| test parameters | cpufreq_governor=performance |
| | nr_task=1 |
| | runtime=300s |
| | test=shell8 |
| | ucode=0x500002c |
+------------------+---------------------------------------------------------------------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <rong.a.chen@intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/1/debian-x86_64-2019-11-14.cgz/300s/lkp-skl-fpga01/shell8/unixbench/0x2000065
commit:
7e21e4009e ("mm/lru: revise the comments of lru_lock")
5470521fac ("mm/lru: debug checking for page memcg moving and lock_page_memcg")
7e21e4009e9f4fcf 5470521faccbd36e0725c2a7b87
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:4 50% 2:4 kmsg.ipmi_si_dmi-ipmi-si.#:IRQ_index#not_found
:4 278% 11:4 perf-profile.calltrace.cycles-pp.error_entry
:4 278% 11:4 perf-profile.children.cycles-pp.error_entry
:4 278% 11:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
9016 -2.3% 8813 unixbench.score
41368025 -2.3% 40436535 unixbench.time.minor_page_faults
180.36 +4.0% 187.51 unixbench.time.system_time
194.12 -3.9% 186.63 unixbench.time.user_time
1217911 -2.5% 1187206 unixbench.time.voluntary_context_switches
340834 -2.3% 333144 unixbench.workload
7590 ± 9% -18.1% 6215 meminfo.PageTables
202659 +1.4% 205451 vmstat.system.in
36.57 +2.0% 37.30 boot-time.boot
22.75 +2.9% 23.41 boot-time.dhcp
0.95 +24.9% 1.18 boot-time.smp_boot
1.73 ± 21% -0.7 1.01 ± 10% mpstat.cpu.all.gnice%
0.00 ± 57% -0.0 0.00 ± 42% mpstat.cpu.all.soft%
1.73 ± 21% -0.7 1.01 ± 10% mpstat.cpu.all.usr%
7.95e+09 ± 33% +35.9% 1.08e+10 ± 28% cpuidle.C1E.time
17856770 ± 29% +40.3% 25059135 ± 22% cpuidle.C1E.usage
6.309e+08 ± 81% +371.3% 2.974e+09 ± 56% cpuidle.C6.time
689380 ± 85% +835.0% 6445719 ± 75% cpuidle.C6.usage
20583 ± 16% +23.1% 25340 ± 4% cpuidle.POLL.usage
9369 ± 65% +110.0% 19680 ± 15% numa-meminfo.node0.Inactive
9191 ± 66% +112.3% 19508 ± 15% numa-meminfo.node0.Inactive(anon)
14446 ± 13% +24.6% 17994 ± 9% numa-meminfo.node0.Mapped
9836 ± 65% +111.0% 20757 ± 11% numa-meminfo.node0.Shmem
18508 ± 34% -56.5% 8050 ± 30% numa-meminfo.node1.Shmem
2297 ± 66% +112.3% 4876 ± 15% numa-vmstat.node0.nr_inactive_anon
3658 ± 13% +24.9% 4570 ± 10% numa-vmstat.node0.nr_mapped
2458 ± 65% +111.1% 5189 ± 11% numa-vmstat.node0.nr_shmem
2297 ± 66% +112.3% 4876 ± 15% numa-vmstat.node0.nr_zone_inactive_anon
4626 ± 34% -56.5% 2012 ± 30% numa-vmstat.node1.nr_shmem
9995846 ± 23% +31.9% 13187716 ± 7% numa-vmstat.node1.numa_hit
9851679 ± 23% +32.0% 13003753 ± 7% numa-vmstat.node1.numa_local
2270 ± 6% -15.6% 1917 turbostat.Bzy_MHz
17856472 ± 29% +40.3% 25058447 ± 22% turbostat.C1E
673294 ± 87% +854.8% 6428616 ± 75% turbostat.C6
7.50 ± 82% +14.3 21.81 ± 63% turbostat.C6%
1.14 ± 31% -52.0% 0.55 ± 2% turbostat.CPU%c6
18204188 ± 27% +59.5% 29043001 ± 10% turbostat.IRQ
40.00 ± 6% -11.2% 35.50 turbostat.PkgTmp
190.58 ± 6% -11.9% 167.88 ± 2% turbostat.PkgWatt
1877 ± 10% -17.3% 1551 proc-vmstat.nr_page_table_pages
7085 +1.6% 7201 proc-vmstat.nr_shmem
20307 +3.1% 20926 proc-vmstat.nr_slab_reclaimable
51343 +1.9% 52343 proc-vmstat.nr_slab_unreclaimable
32329291 -1.8% 31762545 proc-vmstat.numa_hit
32295632 -1.8% 31728840 proc-vmstat.numa_local
21495 ± 58% +149.3% 53583 ± 21% proc-vmstat.numa_pte_updates
34499357 -1.9% 33848694 proc-vmstat.pgalloc_normal
41618291 -1.9% 40821412 proc-vmstat.pgfault
34436876 -1.9% 33785413 proc-vmstat.pgfree
606189 -2.3% 592496 proc-vmstat.unevictable_pgs_culled
1757 ± 6% +10.4% 1939 ± 3% slabinfo.UNIX.active_objs
1757 ± 6% +10.4% 1939 ± 3% slabinfo.UNIX.num_objs
296.00 ± 4% +24.3% 368.00 ± 13% slabinfo.biovec-64.active_objs
296.00 ± 4% +24.3% 368.00 ± 13% slabinfo.biovec-64.num_objs
1818 +11.8% 2032 slabinfo.kmalloc-4k.active_objs
1818 +11.8% 2032 slabinfo.kmalloc-4k.num_objs
3998 ± 7% +17.4% 4693 slabinfo.kmalloc-rcl-64.active_objs
3998 ± 7% +17.4% 4693 slabinfo.kmalloc-rcl-64.num_objs
1459 ± 11% +23.7% 1806 ± 2% slabinfo.kmalloc-rcl-96.active_objs
1459 ± 11% +23.7% 1806 ± 2% slabinfo.kmalloc-rcl-96.num_objs
24280 ± 6% +13.7% 27603 ± 5% slabinfo.pid.active_objs
758.25 ± 6% +13.7% 862.50 ± 5% slabinfo.pid.active_slabs
24286 ± 6% +13.7% 27614 ± 5% slabinfo.pid.num_objs
758.25 ± 6% +13.7% 862.50 ± 5% slabinfo.pid.num_slabs
11129 ± 4% +17.6% 13087 slabinfo.proc_inode_cache.active_objs
11129 ± 4% +17.6% 13087 slabinfo.proc_inode_cache.num_objs
20691 +25.0% 25855 ± 2% slabinfo.vmap_area.active_objs
23511 +16.3% 27341 ± 2% slabinfo.vmap_area.num_objs
67705297 ± 20% -39.3% 41129676 ± 9% perf-stat.i.branch-misses
4.01 ± 32% +8.1 12.11 ± 46% perf-stat.i.cache-miss-rate%
1.49 ± 24% +65.1% 2.45 ± 5% perf-stat.i.cpi
3.97e+09 ± 21% -39.9% 2.388e+09 ± 7% perf-stat.i.dTLB-loads
0.84 ± 13% -29.7% 0.59 perf-stat.i.ipc
70.27 ± 7% +9.6 79.87 perf-stat.i.node-load-miss-rate%
351866 ± 22% -41.5% 205943 ± 7% perf-stat.i.node-loads
34.17 ± 37% +24.8 58.92 ± 4% perf-stat.i.node-store-miss-rate%
571174 ± 23% -42.3% 329489 ± 6% perf-stat.i.node-stores
10.70 -2.6% 10.42 perf-stat.overall.MPKI
2.36 ± 3% +0.7 3.03 ± 16% perf-stat.overall.cache-miss-rate%
1.04 ± 3% +14.6% 1.19 perf-stat.overall.cpi
0.96 ± 3% -12.8% 0.84 perf-stat.overall.ipc
65.15 +1.7 66.88 perf-stat.overall.node-load-miss-rate%
19.24 ± 8% +2.1 21.33 ± 6% perf-stat.overall.node-store-miss-rate%
3704762 ± 2% +5.3% 3899576 perf-stat.overall.path-length
66778203 ± 20% -38.9% 40784057 ± 9% perf-stat.ps.branch-misses
3.916e+09 ± 20% -39.5% 2.368e+09 ± 7% perf-stat.ps.dTLB-loads
347026 ± 22% -41.2% 204204 ± 7% perf-stat.ps.node-loads
563276 ± 23% -42.0% 326689 ± 6% perf-stat.ps.node-stores
590.97 ±173% +315.5% 2455 ± 3% sched_debug.cfs_rq:/.exec_clock.avg
911.08 ±172% +326.1% 3881 ± 3% sched_debug.cfs_rq:/.exec_clock.max
87.12 ±171% +557.5% 572.80 ± 11% sched_debug.cfs_rq:/.exec_clock.stddev
50.62 ± 4% -19.6% 40.69 ± 2% sched_debug.cfs_rq:/.load_avg.avg
932.08 ± 7% -28.9% 662.83 ± 18% sched_debug.cfs_rq:/.load_avg.max
165.62 ± 6% -18.5% 135.00 ± 4% sched_debug.cfs_rq:/.load_avg.stddev
21805 ± 68% +118.3% 47593 sched_debug.cfs_rq:/.min_vruntime.avg
31776 ± 55% +111.2% 67124 ± 4% sched_debug.cfs_rq:/.min_vruntime.max
3052 ± 41% +140.2% 7332 ± 9% sched_debug.cfs_rq:/.min_vruntime.stddev
0.00 ±173% +506.2% 0.03 ± 34% sched_debug.cfs_rq:/.nr_spread_over.avg
0.25 ±173% +300.0% 1.00 ± 33% sched_debug.cfs_rq:/.nr_spread_over.max
0.03 ±173% +379.7% 0.15 ± 39% sched_debug.cfs_rq:/.nr_spread_over.stddev
3055 ± 41% +140.0% 7331 ± 9% sched_debug.cfs_rq:/.spread0.stddev
1143 ± 19% -29.8% 801.83 sched_debug.cfs_rq:/.util_avg.max
289.85 ± 13% -32.3% 196.31 sched_debug.cfs_rq:/.util_avg.stddev
17895 ±123% +175.9% 49375 ± 32% sched_debug.cpu.avg_idle.min
101746 ± 25% +32.1% 134424 ± 2% sched_debug.cpu.clock.avg
101752 ± 25% +32.1% 134429 ± 2% sched_debug.cpu.clock.max
101741 ± 25% +32.1% 134418 ± 2% sched_debug.cpu.clock.min
101746 ± 25% +32.1% 134424 ± 2% sched_debug.cpu.clock_task.avg
101752 ± 25% +32.1% 134429 ± 2% sched_debug.cpu.clock_task.max
101741 ± 25% +32.1% 134418 ± 2% sched_debug.cpu.clock_task.min
7273 ±107% +132.1% 16880 ± 16% sched_debug.cpu.max_idle_balance_cost.stddev
0.00 ± 13% -26.4% 0.00 ± 10% sched_debug.cpu.next_balance.stddev
0.11 ± 30% -42.6% 0.06 ± 12% sched_debug.cpu.nr_running.avg
0.32 ± 18% -28.6% 0.23 ± 8% sched_debug.cpu.nr_running.stddev
18107 ± 48% +165.9% 48151 sched_debug.cpu.nr_switches.max
2102 ± 19% +160.7% 5481 ± 14% sched_debug.cpu.nr_switches.stddev
56.75 ± 29% -46.3% 30.50 ± 4% sched_debug.cpu.nr_uninterruptible.max
6665 ±165% +516.0% 41064 ± 7% sched_debug.cpu.sched_count.max
546.52 ±158% +746.2% 4624 ± 22% sched_debug.cpu.sched_count.stddev
3284 ±165% +515.5% 20218 ± 7% sched_debug.cpu.sched_goidle.max
267.75 ±157% +744.2% 2260 ± 22% sched_debug.cpu.sched_goidle.stddev
3303 ±172% +424.2% 17317 ± 15% sched_debug.cpu.ttwu_count.max
250.91 ±171% +675.0% 1944 ± 23% sched_debug.cpu.ttwu_count.stddev
678.08 ±171% +367.4% 3169 sched_debug.cpu.ttwu_local.max
57.48 ±170% +664.3% 439.32 ± 22% sched_debug.cpu.ttwu_local.stddev
101742 ± 25% +32.1% 134419 ± 2% sched_debug.cpu_clk
98870 ± 26% +32.8% 131258 ± 2% sched_debug.ktime
102162 ± 25% +32.0% 134867 ± 2% sched_debug.sched_clk
11.51 ± 69% -11.5 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
11.51 ± 69% -11.5 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read
11.51 ± 69% -11.5 0.00 perf-profile.calltrace.cycles-pp.read
11.51 ± 69% -11.5 0.00 perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
11.51 ± 69% -11.5 0.00 perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
10.42 ± 87% -10.4 0.00 perf-profile.calltrace.cycles-pp.show_interrupts.seq_read.proc_reg_read.vfs_read.ksys_read
10.42 ± 87% -10.4 0.00 perf-profile.calltrace.cycles-pp.proc_reg_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
10.42 ± 87% -10.4 0.00 perf-profile.calltrace.cycles-pp.seq_read.proc_reg_read.vfs_read.ksys_read.do_syscall_64
7.92 ±129% -7.9 0.00 perf-profile.calltrace.cycles-pp.page_fault
7.92 ±129% -7.9 0.00 perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
7.92 ±129% -7.9 0.00 perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault
6.85 ±107% -6.8 0.00 perf-profile.calltrace.cycles-pp.swapgs_restore_regs_and_return_to_usermode
6.85 ±107% -6.8 0.00 perf-profile.calltrace.cycles-pp.prepare_exit_to_usermode.swapgs_restore_regs_and_return_to_usermode
6.85 ±107% -6.8 0.00 perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.get_signal.do_signal.exit_to_usermode_loop
6.85 ±107% -6.8 0.00 perf-profile.calltrace.cycles-pp.task_work_run.do_exit.do_group_exit.get_signal.do_signal
6.85 ±107% -6.8 0.00 perf-profile.calltrace.cycles-pp.__fput.task_work_run.do_exit.do_group_exit.get_signal
6.85 ±107% -6.8 0.00 perf-profile.calltrace.cycles-pp.perf_release.__fput.task_work_run.do_exit.do_group_exit
6.85 ±107% -6.8 0.00 perf-profile.calltrace.cycles-pp.perf_event_release_kernel.perf_release.__fput.task_work_run.do_exit
6.85 ±107% -6.8 0.00 perf-profile.calltrace.cycles-pp._free_event.perf_event_release_kernel.perf_release.__fput.task_work_run
4.79 ±108% -4.8 0.00 perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
4.79 ±108% -4.8 0.00 perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
4.79 ±108% -4.8 0.00 perf-profile.calltrace.cycles-pp.handle_pte_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
4.67 ±100% -4.7 0.00 perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
43.03 ± 38% +36.2 79.25 perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
43.03 ± 38% +36.6 79.63 ± 2% perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
43.03 ± 38% +44.4 87.41 ± 11% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
43.03 ± 38% +44.4 87.47 ± 11% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
43.03 ± 38% +44.4 87.47 ± 11% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
43.03 ± 38% +44.8 87.86 ± 11% perf-profile.calltrace.cycles-pp.secondary_startup_64
31.85 ± 42% -25.8 6.02 ± 84% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
31.85 ± 42% -25.8 6.02 ± 84% perf-profile.children.cycles-pp.do_syscall_64
11.51 ± 69% -11.5 0.00 perf-profile.children.cycles-pp.seq_read
11.51 ± 69% -11.4 0.09 ± 99% perf-profile.children.cycles-pp.read
11.51 ± 69% -11.4 0.10 ±100% perf-profile.children.cycles-pp.ksys_read
11.51 ± 69% -11.4 0.10 ±100% perf-profile.children.cycles-pp.vfs_read
10.42 ± 87% -10.4 0.00 perf-profile.children.cycles-pp.show_interrupts
10.42 ± 87% -10.4 0.00 perf-profile.children.cycles-pp.proc_reg_read
6.85 ±107% -6.8 0.00 perf-profile.children.cycles-pp.swapgs_restore_regs_and_return_to_usermode
6.85 ±107% -6.8 0.00 perf-profile.children.cycles-pp.prepare_exit_to_usermode
6.85 ±107% -6.8 0.00 perf-profile.children.cycles-pp.exit_to_usermode_loop
6.85 ±107% -6.8 0.00 perf-profile.children.cycles-pp.do_signal
6.85 ±107% -6.8 0.00 perf-profile.children.cycles-pp.get_signal
6.85 ±107% -6.8 0.00 perf-profile.children.cycles-pp.do_group_exit
6.85 ±107% -6.8 0.00 perf-profile.children.cycles-pp.do_exit
6.85 ±107% -6.8 0.00 perf-profile.children.cycles-pp.task_work_run
6.85 ±107% -6.8 0.00 perf-profile.children.cycles-pp.__fput
6.85 ±107% -6.8 0.00 perf-profile.children.cycles-pp.perf_release
6.85 ±107% -6.8 0.00 perf-profile.children.cycles-pp.perf_event_release_kernel
6.85 ±107% -6.8 0.00 perf-profile.children.cycles-pp._free_event
1.09 ±173% +4.5 5.62 ± 97% perf-profile.children.cycles-pp.__x64_sys_execve
1.09 ±173% +4.5 5.62 ± 97% perf-profile.children.cycles-pp.__do_execve_file
43.03 ± 38% +36.9 79.94 ± 2% perf-profile.children.cycles-pp.cpuidle_enter
43.03 ± 38% +36.9 79.94 ± 2% perf-profile.children.cycles-pp.cpuidle_enter_state
43.03 ± 38% +44.4 87.47 ± 11% perf-profile.children.cycles-pp.start_secondary
43.03 ± 38% +44.8 87.86 ± 11% perf-profile.children.cycles-pp.secondary_startup_64
43.03 ± 38% +44.8 87.86 ± 11% perf-profile.children.cycles-pp.cpu_startup_entry
43.03 ± 38% +44.9 87.91 ± 11% perf-profile.children.cycles-pp.do_idle
7.29 ± 64% -7.3 0.00 perf-profile.self.cycles-pp.show_interrupts
166.00 ± 63% +238.9% 562.50 ± 4% interrupts.41:PCI-MSI.67633156-edge.eth0-TxRx-3
177.00 ± 27% +57.6% 279.00 ± 10% interrupts.9:IO-APIC.9-fasteoi.acpi
137637 ± 15% +36.2% 187404 ± 8% interrupts.CAL:Function_call_interrupts
1259 ± 20% +34.2% 1691 ± 16% interrupts.CPU0.CAL:Function_call_interrupts
174888 ± 27% +59.3% 278514 ± 10% interrupts.CPU0.LOC:Local_timer_interrupts
177.00 ± 27% +57.6% 279.00 ± 10% interrupts.CPU1.9:IO-APIC.9-fasteoi.acpi
1328 ± 15% +34.3% 1784 ± 8% interrupts.CPU1.CAL:Function_call_interrupts
173666 ± 28% +59.3% 276689 ± 9% interrupts.CPU1.LOC:Local_timer_interrupts
1315 ± 16% +26.4% 1663 ± 17% interrupts.CPU10.CAL:Function_call_interrupts
173611 ± 28% +59.3% 276621 ± 10% interrupts.CPU10.LOC:Local_timer_interrupts
1296 ± 10% +42.8% 1851 ± 6% interrupts.CPU100.CAL:Function_call_interrupts
173100 ± 27% +59.0% 275239 ± 10% interrupts.CPU100.LOC:Local_timer_interrupts
1339 ± 15% +37.4% 1839 ± 7% interrupts.CPU101.CAL:Function_call_interrupts
173961 ± 27% +59.9% 278114 ± 10% interrupts.CPU101.LOC:Local_timer_interrupts
1335 ± 15% +35.2% 1806 ± 5% interrupts.CPU102.CAL:Function_call_interrupts
173828 ± 28% +60.1% 278267 ± 10% interrupts.CPU102.LOC:Local_timer_interrupts
1328 ± 14% +37.7% 1828 ± 6% interrupts.CPU103.CAL:Function_call_interrupts
173740 ± 27% +60.0% 278013 ± 10% interrupts.CPU103.LOC:Local_timer_interrupts
1263 ± 19% +40.1% 1769 ± 8% interrupts.CPU11.CAL:Function_call_interrupts
172264 ± 28% +60.6% 276732 ± 10% interrupts.CPU11.LOC:Local_timer_interrupts
1310 ± 16% +35.7% 1778 ± 7% interrupts.CPU12.CAL:Function_call_interrupts
173265 ± 27% +59.7% 276774 ± 10% interrupts.CPU12.LOC:Local_timer_interrupts
1317 ± 15% +34.9% 1777 ± 8% interrupts.CPU13.CAL:Function_call_interrupts
173086 ± 28% +59.0% 275223 ± 10% interrupts.CPU13.LOC:Local_timer_interrupts
1306 ± 16% +36.0% 1777 ± 8% interrupts.CPU14.CAL:Function_call_interrupts
172359 ± 27% +61.4% 278161 ± 10% interrupts.CPU14.LOC:Local_timer_interrupts
309.00 ± 61% -60.2% 123.00 ± 30% interrupts.CPU14.RES:Rescheduling_interrupts
173433 ± 28% +58.7% 275307 ± 10% interrupts.CPU15.LOC:Local_timer_interrupts
1328 ± 15% +33.6% 1774 ± 8% interrupts.CPU16.CAL:Function_call_interrupts
173121 ± 28% +59.9% 276746 ± 10% interrupts.CPU16.LOC:Local_timer_interrupts
1297 ± 17% +36.6% 1772 ± 8% interrupts.CPU17.CAL:Function_call_interrupts
173189 ± 28% +59.8% 276767 ± 10% interrupts.CPU17.LOC:Local_timer_interrupts
1327 ± 15% +33.8% 1775 ± 7% interrupts.CPU18.CAL:Function_call_interrupts
173124 ± 28% +59.9% 276799 ± 10% interrupts.CPU18.LOC:Local_timer_interrupts
1324 ± 16% +34.1% 1776 ± 6% interrupts.CPU19.CAL:Function_call_interrupts
173166 ± 28% +59.8% 276751 ± 10% interrupts.CPU19.LOC:Local_timer_interrupts
1320 ± 15% +33.8% 1767 ± 13% interrupts.CPU2.CAL:Function_call_interrupts
173238 ± 28% +59.7% 276677 ± 10% interrupts.CPU2.LOC:Local_timer_interrupts
278.75 ± 23% +120.8% 615.50 ± 46% interrupts.CPU2.RES:Rescheduling_interrupts
1331 ± 16% +37.7% 1833 ± 7% interrupts.CPU20.CAL:Function_call_interrupts
172514 ± 28% +59.5% 275246 ± 10% interrupts.CPU20.LOC:Local_timer_interrupts
1330 ± 15% +37.9% 1835 ± 6% interrupts.CPU21.CAL:Function_call_interrupts
173205 ± 28% +59.8% 276811 ± 10% interrupts.CPU21.LOC:Local_timer_interrupts
199.00 ± 19% -40.5% 118.50 ± 5% interrupts.CPU21.RES:Rescheduling_interrupts
1329 ± 15% +36.9% 1820 ± 8% interrupts.CPU22.CAL:Function_call_interrupts
172538 ± 27% +60.4% 276822 ± 10% interrupts.CPU22.LOC:Local_timer_interrupts
1328 ± 15% +34.4% 1785 ± 9% interrupts.CPU23.CAL:Function_call_interrupts
173279 ± 28% +59.6% 276631 ± 9% interrupts.CPU23.LOC:Local_timer_interrupts
1323 ± 16% +35.8% 1797 ± 9% interrupts.CPU24.CAL:Function_call_interrupts
171695 ± 28% +61.1% 276532 ± 9% interrupts.CPU24.LOC:Local_timer_interrupts
169.00 ± 18% -37.9% 105.00 ± 23% interrupts.CPU24.RES:Rescheduling_interrupts
1324 ± 15% +35.8% 1798 ± 9% interrupts.CPU25.CAL:Function_call_interrupts
171713 ± 28% +60.3% 275173 ± 10% interrupts.CPU25.LOC:Local_timer_interrupts
1332 ± 17% +37.3% 1829 ± 8% interrupts.CPU26.CAL:Function_call_interrupts
173648 ± 27% +60.2% 278151 ± 10% interrupts.CPU26.LOC:Local_timer_interrupts
1332 ± 16% +36.1% 1813 ± 6% interrupts.CPU27.CAL:Function_call_interrupts
173973 ± 27% +59.9% 278148 ± 10% interrupts.CPU27.LOC:Local_timer_interrupts
72.50 ± 15% +53.8% 111.50 ± 14% interrupts.CPU27.TLB:TLB_shootdowns
1253 ± 6% +44.3% 1809 ± 7% interrupts.CPU28.CAL:Function_call_interrupts
173943 ± 28% +59.0% 276572 ± 9% interrupts.CPU28.LOC:Local_timer_interrupts
174555 ± 27% +59.7% 278752 ± 10% interrupts.CPU29.LOC:Local_timer_interrupts
1327 ± 16% +34.9% 1791 ± 11% interrupts.CPU3.CAL:Function_call_interrupts
173132 ± 27% +59.3% 275832 ± 10% interrupts.CPU3.LOC:Local_timer_interrupts
1342 ± 16% +33.3% 1789 ± 7% interrupts.CPU30.CAL:Function_call_interrupts
174659 ± 27% +59.2% 278059 ± 10% interrupts.CPU30.LOC:Local_timer_interrupts
206.50 ± 14% +27.8% 264.00 ± 8% interrupts.CPU30.RES:Rescheduling_interrupts
1326 ± 15% +35.9% 1802 ± 7% interrupts.CPU31.CAL:Function_call_interrupts
174626 ± 27% +59.1% 277831 ± 10% interrupts.CPU31.LOC:Local_timer_interrupts
1340 ± 15% +28.5% 1722 ± 2% interrupts.CPU32.CAL:Function_call_interrupts
173597 ± 27% +59.9% 277558 ± 10% interrupts.CPU32.LOC:Local_timer_interrupts
166.00 ± 63% +238.9% 562.50 ± 4% interrupts.CPU33.41:PCI-MSI.67633156-edge.eth0-TxRx-3
1335 ± 15% +36.8% 1827 ± 8% interrupts.CPU33.CAL:Function_call_interrupts
174619 ± 27% +59.4% 278272 ± 10% interrupts.CPU33.LOC:Local_timer_interrupts
1272 ± 9% +40.8% 1791 ± 6% interrupts.CPU34.CAL:Function_call_interrupts
173039 ± 27% +60.8% 278289 ± 10% interrupts.CPU34.LOC:Local_timer_interrupts
1331 ± 16% +34.7% 1793 ± 6% interrupts.CPU35.CAL:Function_call_interrupts
173810 ± 27% +59.2% 276737 ± 10% interrupts.CPU35.LOC:Local_timer_interrupts
1332 ± 16% +33.7% 1781 ± 6% interrupts.CPU36.CAL:Function_call_interrupts
173940 ± 27% +59.1% 276675 ± 10% interrupts.CPU36.LOC:Local_timer_interrupts
1214 ± 27% +49.5% 1815 ± 7% interrupts.CPU37.CAL:Function_call_interrupts
174024 ± 26% +58.2% 275240 ± 10% interrupts.CPU37.LOC:Local_timer_interrupts
1292 ± 17% +41.2% 1824 ± 8% interrupts.CPU38.CAL:Function_call_interrupts
173913 ± 28% +58.3% 275238 ± 10% interrupts.CPU38.LOC:Local_timer_interrupts
1261 ± 23% +35.0% 1703 ± 2% interrupts.CPU39.CAL:Function_call_interrupts
173539 ± 27% +58.6% 275196 ± 10% interrupts.CPU39.LOC:Local_timer_interrupts
1331 ± 15% +34.1% 1786 ± 10% interrupts.CPU4.CAL:Function_call_interrupts
173966 ± 28% +58.6% 275846 ± 10% interrupts.CPU4.LOC:Local_timer_interrupts
172284 ± 27% +59.8% 275277 ± 10% interrupts.CPU40.LOC:Local_timer_interrupts
1332 ± 15% +38.1% 1840 ± 9% interrupts.CPU41.CAL:Function_call_interrupts
173571 ± 27% +58.6% 275266 ± 10% interrupts.CPU41.LOC:Local_timer_interrupts
1345 ± 18% +36.5% 1836 ± 5% interrupts.CPU42.CAL:Function_call_interrupts
173734 ± 27% +59.3% 276704 ± 10% interrupts.CPU42.LOC:Local_timer_interrupts
1347 ± 16% +37.3% 1850 ± 7% interrupts.CPU43.CAL:Function_call_interrupts
173250 ± 27% +60.6% 278282 ± 10% interrupts.CPU43.LOC:Local_timer_interrupts
1332 ± 15% +39.1% 1854 ± 6% interrupts.CPU44.CAL:Function_call_interrupts
172968 ± 27% +60.9% 278309 ± 10% interrupts.CPU44.LOC:Local_timer_interrupts
1344 ± 15% +35.7% 1824 ± 5% interrupts.CPU45.CAL:Function_call_interrupts
173867 ± 27% +58.9% 276294 ± 10% interrupts.CPU45.LOC:Local_timer_interrupts
1340 ± 14% +36.4% 1829 ± 5% interrupts.CPU46.CAL:Function_call_interrupts
173254 ± 27% +60.2% 277629 ± 10% interrupts.CPU46.LOC:Local_timer_interrupts
1326 ± 14% +41.4% 1875 ± 7% interrupts.CPU47.CAL:Function_call_interrupts
173478 ± 27% +59.5% 276733 ± 10% interrupts.CPU47.LOC:Local_timer_interrupts
1341 ± 14% +36.1% 1825 ± 6% interrupts.CPU48.CAL:Function_call_interrupts
173376 ± 27% +59.9% 277151 ± 9% interrupts.CPU48.LOC:Local_timer_interrupts
1188 ± 7% +53.0% 1819 ± 5% interrupts.CPU49.CAL:Function_call_interrupts
174977 ± 27% +59.0% 278220 ± 10% interrupts.CPU49.LOC:Local_timer_interrupts
1306 ± 16% +40.0% 1828 ± 8% interrupts.CPU5.CAL:Function_call_interrupts
173031 ± 27% +60.0% 276813 ± 9% interrupts.CPU5.LOC:Local_timer_interrupts
1337 ± 15% +39.3% 1862 ± 7% interrupts.CPU50.CAL:Function_call_interrupts
173546 ± 26% +60.3% 278155 ± 10% interrupts.CPU50.LOC:Local_timer_interrupts
1318 ± 14% +39.2% 1835 ± 7% interrupts.CPU51.CAL:Function_call_interrupts
172224 ± 27% +60.6% 276594 ± 10% interrupts.CPU51.LOC:Local_timer_interrupts
195.50 ± 8% +71.9% 336.00 ± 22% interrupts.CPU51.RES:Rescheduling_interrupts
1336 ± 14% +35.5% 1810 ± 9% interrupts.CPU52.CAL:Function_call_interrupts
173928 ± 28% +58.3% 275245 ± 10% interrupts.CPU52.LOC:Local_timer_interrupts
1336 ± 16% +35.3% 1808 ± 8% interrupts.CPU53.CAL:Function_call_interrupts
174667 ± 27% +59.4% 278471 ± 10% interrupts.CPU53.LOC:Local_timer_interrupts
1338 ± 16% +38.2% 1849 ± 7% interrupts.CPU54.CAL:Function_call_interrupts
173986 ± 28% +59.0% 276725 ± 10% interrupts.CPU54.LOC:Local_timer_interrupts
1344 ± 16% +35.5% 1822 ± 10% interrupts.CPU55.CAL:Function_call_interrupts
173597 ± 28% +59.8% 277325 ± 10% interrupts.CPU55.LOC:Local_timer_interrupts
1348 ± 16% +35.8% 1832 ± 10% interrupts.CPU56.CAL:Function_call_interrupts
173532 ± 28% +59.5% 276819 ± 10% interrupts.CPU56.LOC:Local_timer_interrupts
1339 ± 16% +30.7% 1751 ± 14% interrupts.CPU57.CAL:Function_call_interrupts
173388 ± 28% +59.1% 275873 ± 10% interrupts.CPU57.LOC:Local_timer_interrupts
1351 ± 16% +36.2% 1839 ± 9% interrupts.CPU58.CAL:Function_call_interrupts
173261 ± 28% +60.1% 277415 ± 9% interrupts.CPU58.LOC:Local_timer_interrupts
1346 ± 16% +38.1% 1860 ± 9% interrupts.CPU59.CAL:Function_call_interrupts
172391 ± 28% +61.5% 278403 ± 10% interrupts.CPU59.LOC:Local_timer_interrupts
1272 ± 20% +40.1% 1783 ± 10% interrupts.CPU6.CAL:Function_call_interrupts
173598 ± 28% +59.5% 276966 ± 9% interrupts.CPU6.LOC:Local_timer_interrupts
1047 ±104% -82.0% 188.00 ± 17% interrupts.CPU6.RES:Rescheduling_interrupts
1343 ± 17% +35.1% 1814 ± 10% interrupts.CPU60.CAL:Function_call_interrupts
173883 ± 28% +59.2% 276739 ± 9% interrupts.CPU60.LOC:Local_timer_interrupts
1340 ± 16% +35.1% 1810 ± 11% interrupts.CPU61.CAL:Function_call_interrupts
173740 ± 28% +59.3% 276816 ± 9% interrupts.CPU61.LOC:Local_timer_interrupts
235.25 ± 22% -46.7% 125.50 ± 25% interrupts.CPU61.RES:Rescheduling_interrupts
1333 ± 17% +36.0% 1813 ± 10% interrupts.CPU62.CAL:Function_call_interrupts
173849 ± 28% +58.9% 276216 ± 9% interrupts.CPU62.LOC:Local_timer_interrupts
1330 ± 16% +35.5% 1802 ± 10% interrupts.CPU63.CAL:Function_call_interrupts
172955 ± 27% +60.6% 277839 ± 10% interrupts.CPU63.LOC:Local_timer_interrupts
1339 ± 16% +35.7% 1817 ± 9% interrupts.CPU64.CAL:Function_call_interrupts
173124 ± 27% +59.8% 276711 ± 9% interrupts.CPU64.LOC:Local_timer_interrupts
1347 ± 15% +33.7% 1801 ± 10% interrupts.CPU65.CAL:Function_call_interrupts
172346 ± 27% +61.5% 278316 ± 10% interrupts.CPU65.LOC:Local_timer_interrupts
1348 ± 15% +33.8% 1803 ± 10% interrupts.CPU66.CAL:Function_call_interrupts
173139 ± 27% +59.8% 276735 ± 10% interrupts.CPU66.LOC:Local_timer_interrupts
1340 ± 15% +36.0% 1823 ± 9% interrupts.CPU67.CAL:Function_call_interrupts
173874 ± 28% +60.1% 278313 ± 10% interrupts.CPU67.LOC:Local_timer_interrupts
1346 ± 15% +33.4% 1796 ± 10% interrupts.CPU68.CAL:Function_call_interrupts
172394 ± 27% +60.5% 276761 ± 9% interrupts.CPU68.LOC:Local_timer_interrupts
227.00 ± 30% -47.4% 119.50 ± 36% interrupts.CPU68.RES:Rescheduling_interrupts
1339 ± 16% +35.7% 1817 ± 8% interrupts.CPU69.CAL:Function_call_interrupts
173224 ± 28% +60.7% 278363 ± 10% interrupts.CPU69.LOC:Local_timer_interrupts
210.50 ± 14% -36.1% 134.50 ± 24% interrupts.CPU69.RES:Rescheduling_interrupts
1313 ± 15% +35.3% 1777 ± 10% interrupts.CPU7.CAL:Function_call_interrupts
173458 ± 28% +59.5% 276697 ± 9% interrupts.CPU7.LOC:Local_timer_interrupts
1347 ± 15% +35.6% 1827 ± 8% interrupts.CPU70.CAL:Function_call_interrupts
172412 ± 28% +61.4% 278309 ± 10% interrupts.CPU70.LOC:Local_timer_interrupts
1348 ± 16% +35.5% 1827 ± 7% interrupts.CPU71.CAL:Function_call_interrupts
172701 ± 28% +60.9% 277830 ± 10% interrupts.CPU71.LOC:Local_timer_interrupts
1347 ± 15% +36.0% 1832 ± 7% interrupts.CPU72.CAL:Function_call_interrupts
172533 ± 28% +61.3% 278365 ± 10% interrupts.CPU72.LOC:Local_timer_interrupts
187.50 ± 16% +35.5% 254.00 ± 24% interrupts.CPU72.RES:Rescheduling_interrupts
1342 ± 15% +36.6% 1834 ± 8% interrupts.CPU73.CAL:Function_call_interrupts
172469 ± 28% +60.5% 276787 ± 10% interrupts.CPU73.LOC:Local_timer_interrupts
1346 ± 15% +35.9% 1830 ± 7% interrupts.CPU74.CAL:Function_call_interrupts
173301 ± 28% +59.7% 276774 ± 10% interrupts.CPU74.LOC:Local_timer_interrupts
1354 ± 14% +33.2% 1803 ± 9% interrupts.CPU75.CAL:Function_call_interrupts
173292 ± 28% +60.6% 278298 ± 10% interrupts.CPU75.LOC:Local_timer_interrupts
1349 ± 16% +32.5% 1789 ± 10% interrupts.CPU76.CAL:Function_call_interrupts
173249 ± 28% +60.4% 277872 ± 10% interrupts.CPU76.LOC:Local_timer_interrupts
1345 ± 15% +34.8% 1813 ± 8% interrupts.CPU77.CAL:Function_call_interrupts
173279 ± 28% +60.4% 278017 ± 10% interrupts.CPU77.LOC:Local_timer_interrupts
1319 ± 16% +38.9% 1832 ± 7% interrupts.CPU78.CAL:Function_call_interrupts
173870 ± 28% +60.3% 278724 ± 10% interrupts.CPU78.LOC:Local_timer_interrupts
1340 ± 14% +37.8% 1847 ± 7% interrupts.CPU79.CAL:Function_call_interrupts
173712 ± 27% +59.4% 276902 ± 9% interrupts.CPU79.LOC:Local_timer_interrupts
173.75 ± 14% +39.0% 241.50 interrupts.CPU79.RES:Rescheduling_interrupts
1316 ± 16% +32.8% 1748 ± 11% interrupts.CPU8.CAL:Function_call_interrupts
173912 ± 27% +59.1% 276636 ± 10% interrupts.CPU8.LOC:Local_timer_interrupts
1357 ± 16% +27.5% 1731 interrupts.CPU80.CAL:Function_call_interrupts
173795 ± 27% +60.0% 278145 ± 10% interrupts.CPU80.LOC:Local_timer_interrupts
1338 ± 15% +38.5% 1853 ± 7% interrupts.CPU81.CAL:Function_call_interrupts
174569 ± 27% +58.6% 276820 ± 10% interrupts.CPU81.LOC:Local_timer_interrupts
65.75 ± 21% +51.3% 99.50 ± 18% interrupts.CPU81.TLB:TLB_shootdowns
1336 ± 14% +37.9% 1843 ± 7% interrupts.CPU82.CAL:Function_call_interrupts
173218 ± 27% +60.8% 278592 ± 10% interrupts.CPU82.LOC:Local_timer_interrupts
1296 ± 19% +42.4% 1846 ± 7% interrupts.CPU83.CAL:Function_call_interrupts
173122 ± 27% +60.1% 277194 ± 9% interrupts.CPU83.LOC:Local_timer_interrupts
1327 ± 15% +38.3% 1836 ± 7% interrupts.CPU84.CAL:Function_call_interrupts
174109 ± 26% +59.8% 278298 ± 10% interrupts.CPU84.LOC:Local_timer_interrupts
1339 ± 15% +37.2% 1836 ± 7% interrupts.CPU85.CAL:Function_call_interrupts
174071 ± 27% +59.0% 276801 ± 10% interrupts.CPU85.LOC:Local_timer_interrupts
165.25 ± 8% +29.8% 214.50 ± 4% interrupts.CPU85.RES:Rescheduling_interrupts
1327 ± 16% +38.3% 1836 ± 6% interrupts.CPU86.CAL:Function_call_interrupts
173885 ± 27% +59.2% 276852 ± 9% interrupts.CPU86.LOC:Local_timer_interrupts
1343 ± 16% +37.4% 1845 ± 7% interrupts.CPU87.CAL:Function_call_interrupts
172947 ± 27% +60.9% 278344 ± 10% interrupts.CPU87.LOC:Local_timer_interrupts
1288 ± 21% +44.0% 1855 ± 6% interrupts.CPU88.CAL:Function_call_interrupts
173002 ± 27% +61.1% 278665 ± 10% interrupts.CPU88.LOC:Local_timer_interrupts
1344 ± 15% +37.1% 1842 ± 7% interrupts.CPU89.CAL:Function_call_interrupts
173080 ± 27% +60.8% 278355 ± 10% interrupts.CPU89.LOC:Local_timer_interrupts
1316 ± 16% +34.8% 1774 ± 10% interrupts.CPU9.CAL:Function_call_interrupts
173990 ± 28% +58.8% 276276 ± 10% interrupts.CPU9.LOC:Local_timer_interrupts
248.75 ± 30% -40.5% 148.00 ± 26% interrupts.CPU9.RES:Rescheduling_interrupts
1344 ± 15% +37.1% 1842 ± 7% interrupts.CPU90.CAL:Function_call_interrupts
174059 ± 27% +59.9% 278281 ± 10% interrupts.CPU90.LOC:Local_timer_interrupts
1341 ± 15% +35.1% 1812 ± 6% interrupts.CPU91.CAL:Function_call_interrupts
172223 ± 27% +60.7% 276826 ± 9% interrupts.CPU91.LOC:Local_timer_interrupts
185.50 ± 8% +26.7% 235.00 ± 2% interrupts.CPU91.RES:Rescheduling_interrupts
1338 ± 14% +36.9% 1832 ± 6% interrupts.CPU92.CAL:Function_call_interrupts
173894 ± 27% +60.0% 278251 ± 10% interrupts.CPU92.LOC:Local_timer_interrupts
1284 ± 17% +44.2% 1852 ± 7% interrupts.CPU93.CAL:Function_call_interrupts
173836 ± 27% +59.3% 276847 ± 10% interrupts.CPU93.LOC:Local_timer_interrupts
1106 ± 45% +64.5% 1820 ± 6% interrupts.CPU94.CAL:Function_call_interrupts
174069 ± 27% +59.9% 278252 ± 10% interrupts.CPU94.LOC:Local_timer_interrupts
1338 ± 16% +36.4% 1826 ± 8% interrupts.CPU95.CAL:Function_call_interrupts
174054 ± 27% +59.9% 278287 ± 10% interrupts.CPU95.LOC:Local_timer_interrupts
1337 ± 16% +37.4% 1838 ± 7% interrupts.CPU96.CAL:Function_call_interrupts
174005 ± 27% +59.9% 278274 ± 10% interrupts.CPU96.LOC:Local_timer_interrupts
1343 ± 16% +36.2% 1829 ± 7% interrupts.CPU97.CAL:Function_call_interrupts
173143 ± 27% +60.7% 278249 ± 10% interrupts.CPU97.LOC:Local_timer_interrupts
1337 ± 16% +37.0% 1832 ± 7% interrupts.CPU98.CAL:Function_call_interrupts
174019 ± 27% +58.2% 275216 ± 10% interrupts.CPU98.LOC:Local_timer_interrupts
1324 ± 17% +39.2% 1843 ± 7% interrupts.CPU99.CAL:Function_call_interrupts
174208 ± 27% +58.9% 276797 ± 9% interrupts.CPU99.LOC:Local_timer_interrupts
18038925 ± 27% +59.8% 28825342 ± 10% interrupts.LOC:Local_timer_interrupts
40972 ± 6% +16.2% 47615 softirqs.CPU0.RCU
19448 ± 14% +37.4% 26716 ± 4% softirqs.CPU0.SCHED
47242 ± 14% +23.1% 58169 softirqs.CPU0.TIMER
40171 ± 7% +18.4% 47578 ± 5% softirqs.CPU1.RCU
13896 ± 24% +44.1% 20024 ± 8% softirqs.CPU1.SCHED
40954 ± 26% +38.2% 56579 ± 6% softirqs.CPU1.TIMER
40176 ± 3% +8.0% 43407 ± 3% softirqs.CPU10.RCU
12051 ± 28% +59.1% 19171 ± 11% softirqs.CPU10.SCHED
38548 ± 6% +19.1% 45905 softirqs.CPU100.RCU
12567 ± 20% +52.0% 19096 ± 6% softirqs.CPU100.SCHED
34376 ± 20% +57.9% 54296 ± 18% softirqs.CPU100.TIMER
38644 ± 7% +15.1% 44469 softirqs.CPU101.RCU
12694 ± 20% +49.3% 18954 ± 9% softirqs.CPU101.SCHED
12402 ± 26% +53.4% 19026 ± 7% softirqs.CPU102.SCHED
35097 ± 25% +48.7% 52205 ± 19% softirqs.CPU102.TIMER
10572 ± 35% +55.7% 16457 ± 4% softirqs.CPU103.SCHED
34366 ± 21% +53.4% 52702 ± 20% softirqs.CPU103.TIMER
38628 ± 8% +13.4% 43792 ± 3% softirqs.CPU11.RCU
12637 ± 23% +39.7% 17660 ± 20% softirqs.CPU11.SCHED
40221 ± 5% +12.8% 45367 ± 6% softirqs.CPU12.RCU
12721 ± 22% +52.7% 19427 ± 9% softirqs.CPU12.SCHED
41205 ± 24% +34.4% 55377 softirqs.CPU12.TIMER
39084 ± 8% +12.7% 44051 ± 2% softirqs.CPU13.RCU
13007 ± 21% +47.2% 19152 ± 9% softirqs.CPU13.SCHED
38357 ± 7% +14.1% 43779 ± 2% softirqs.CPU14.RCU
12449 ± 22% +55.2% 19323 ± 10% softirqs.CPU14.SCHED
40519 ± 24% +37.2% 55580 softirqs.CPU14.TIMER
12686 ± 22% +51.3% 19198 ± 7% softirqs.CPU15.SCHED
39565 ± 6% +10.7% 43813 ± 3% softirqs.CPU16.RCU
12424 ± 24% +66.5% 20681 ± 2% softirqs.CPU16.SCHED
40766 ± 26% +40.9% 57446 ± 4% softirqs.CPU16.TIMER
12788 ± 24% +37.6% 17599 ± 20% softirqs.CPU17.SCHED
39402 ± 8% +11.2% 43817 ± 3% softirqs.CPU18.RCU
12759 ± 24% +50.7% 19222 ± 11% softirqs.CPU18.SCHED
11349 ± 8% +68.6% 19136 ± 10% softirqs.CPU19.SCHED
12775 ± 22% +63.5% 20889 ± 13% softirqs.CPU2.SCHED
41519 ± 26% +41.0% 58547 ± 3% softirqs.CPU2.TIMER
12044 ± 27% +57.0% 18911 ± 8% softirqs.CPU20.SCHED
12795 ± 23% +42.3% 18203 ± 5% softirqs.CPU21.SCHED
41071 ± 24% +37.1% 56296 softirqs.CPU22.TIMER
12785 ± 23% +56.6% 20018 ± 11% softirqs.CPU23.SCHED
41462 ± 26% +35.4% 56152 ± 2% softirqs.CPU23.TIMER
39022 ± 6% +11.5% 43523 ± 2% softirqs.CPU24.RCU
12477 ± 22% +52.2% 18989 ± 8% softirqs.CPU24.SCHED
40203 ± 24% +39.0% 55880 ± 4% softirqs.CPU24.TIMER
12407 ± 22% +32.5% 16433 ± 24% softirqs.CPU25.SCHED
39954 ± 7% +13.2% 45235 softirqs.CPU26.RCU
13529 ± 19% +43.8% 19449 ± 7% softirqs.CPU26.SCHED
37416 ± 20% +51.2% 56576 ± 13% softirqs.CPU26.TIMER
12985 ± 21% +47.6% 19163 ± 7% softirqs.CPU27.SCHED
36033 ± 23% +50.4% 54196 ± 18% softirqs.CPU27.TIMER
40644 ± 9% +16.6% 47400 ± 3% softirqs.CPU28.RCU
13805 ± 22% +37.9% 19044 ± 6% softirqs.CPU28.SCHED
40320 ± 8% +23.0% 49580 ± 2% softirqs.CPU29.RCU
13222 ± 21% +47.3% 19483 ± 7% softirqs.CPU29.SCHED
36384 ± 20% +54.9% 56351 ± 16% softirqs.CPU29.TIMER
39478 ± 7% +21.6% 48010 ± 7% softirqs.CPU3.RCU
12874 ± 22% +56.8% 20182 ± 13% softirqs.CPU3.SCHED
41490 ± 24% +40.8% 58416 ± 2% softirqs.CPU3.TIMER
39555 ± 7% +16.9% 46245 softirqs.CPU30.RCU
13150 ± 20% +46.8% 19302 ± 9% softirqs.CPU30.SCHED
36273 ± 20% +53.7% 55736 ± 17% softirqs.CPU30.TIMER
41076 ± 5% +11.8% 45926 softirqs.CPU31.RCU
37747 ± 18% +47.5% 55690 ± 16% softirqs.CPU31.TIMER
39614 ± 7% +15.3% 45683 softirqs.CPU32.RCU
13143 ± 20% +48.1% 19466 ± 8% softirqs.CPU32.SCHED
36356 ± 20% +52.3% 55362 ± 18% softirqs.CPU32.TIMER
39888 ± 8% +18.5% 47275 softirqs.CPU33.RCU
13198 ± 21% +49.9% 19780 ± 6% softirqs.CPU33.SCHED
36414 ± 21% +54.6% 56309 ± 16% softirqs.CPU33.TIMER
39357 ± 7% +16.4% 45809 softirqs.CPU34.RCU
13081 ± 20% +47.8% 19334 ± 7% softirqs.CPU34.SCHED
35790 ± 18% +52.2% 54456 ± 18% softirqs.CPU34.TIMER
39363 ± 8% +16.1% 45683 softirqs.CPU35.RCU
13181 ± 20% +49.6% 19718 ± 11% softirqs.CPU35.SCHED
36085 ± 20% +55.6% 56142 ± 18% softirqs.CPU35.TIMER
39044 ± 7% +14.4% 44670 softirqs.CPU36.RCU
12884 ± 19% +48.8% 19169 ± 8% softirqs.CPU36.SCHED
36071 ± 18% +52.9% 55146 ± 16% softirqs.CPU36.TIMER
13212 ± 19% +43.0% 18893 ± 7% softirqs.CPU37.SCHED
40273 ± 12% +35.2% 54464 ± 16% softirqs.CPU37.TIMER
38990 ± 8% +20.2% 46851 softirqs.CPU38.RCU
13031 ± 22% +45.2% 18914 ± 6% softirqs.CPU38.SCHED
36005 ± 21% +55.1% 55851 ± 17% softirqs.CPU38.TIMER
38788 ± 7% +18.6% 45995 ± 3% softirqs.CPU39.RCU
12898 ± 20% +49.0% 19223 ± 6% softirqs.CPU39.SCHED
35455 ± 20% +57.2% 55743 ± 13% softirqs.CPU39.TIMER
39430 ± 7% +17.9% 46472 ± 6% softirqs.CPU4.RCU
38380 ± 7% +15.0% 44148 softirqs.CPU40.RCU
12558 ± 21% +49.7% 18805 ± 7% softirqs.CPU40.SCHED
34835 ± 20% +54.0% 53654 ± 16% softirqs.CPU40.TIMER
38523 ± 8% +14.5% 44121 ± 2% softirqs.CPU41.RCU
12995 ± 21% +45.9% 18965 ± 6% softirqs.CPU41.SCHED
35682 ± 21% +53.6% 54815 ± 15% softirqs.CPU41.TIMER
38888 ± 7% +15.8% 45046 softirqs.CPU42.RCU
13267 ± 19% +33.6% 17729 ± 16% softirqs.CPU42.SCHED
35883 ± 20% +53.1% 54931 ± 16% softirqs.CPU42.TIMER
38836 ± 9% +19.8% 46536 softirqs.CPU43.RCU
13145 ± 23% +49.7% 19676 ± 9% softirqs.CPU43.SCHED
35796 ± 20% +60.1% 57305 ± 16% softirqs.CPU43.TIMER
38242 ± 9% +16.4% 44528 softirqs.CPU44.RCU
12916 ± 23% +50.6% 19454 ± 6% softirqs.CPU44.SCHED
35264 ± 20% +58.2% 55771 ± 16% softirqs.CPU44.TIMER
40030 ± 8% +13.0% 45230 softirqs.CPU45.RCU
13134 ± 21% +46.8% 19282 ± 8% softirqs.CPU45.SCHED
35923 ± 19% +53.6% 55172 ± 17% softirqs.CPU45.TIMER
39882 ± 10% +15.6% 46110 softirqs.CPU46.RCU
12694 ± 18% +52.0% 19298 ± 5% softirqs.CPU46.SCHED
35673 ± 21% +55.1% 55336 ± 14% softirqs.CPU46.TIMER
12841 ± 22% +49.1% 19147 ± 9% softirqs.CPU47.SCHED
34939 ± 20% +56.2% 54567 ± 17% softirqs.CPU47.TIMER
39455 ± 9% +14.7% 45243 ± 2% softirqs.CPU48.RCU
12616 ± 23% +51.8% 19148 ± 6% softirqs.CPU48.SCHED
35145 ± 20% +56.2% 54898 ± 15% softirqs.CPU48.TIMER
12727 ± 21% +51.3% 19261 ± 8% softirqs.CPU49.SCHED
40180 ± 21% +34.8% 54183 ± 18% softirqs.CPU49.TIMER
39528 ± 7% +16.4% 45995 ± 3% softirqs.CPU5.RCU
12906 ± 22% +50.9% 19480 ± 8% softirqs.CPU5.SCHED
41070 ± 24% +37.1% 56317 ± 3% softirqs.CPU5.TIMER
40227 ± 8% +14.2% 45943 softirqs.CPU50.RCU
12958 ± 22% +48.9% 19290 ± 7% softirqs.CPU50.SCHED
36282 ± 19% +50.2% 54491 ± 18% softirqs.CPU50.TIMER
39743 ± 8% +14.9% 45655 softirqs.CPU51.RCU
12061 ± 13% +59.1% 19189 ± 9% softirqs.CPU51.SCHED
35203 ± 21% +53.6% 54056 ± 18% softirqs.CPU51.TIMER
12532 ± 22% +47.1% 18441 ± 10% softirqs.CPU52.SCHED
12642 ± 20% +41.3% 17860 ± 8% softirqs.CPU53.SCHED
42548 ± 12% +11.7% 47510 ± 11% softirqs.CPU54.RCU
12497 ± 23% +50.8% 18850 ± 11% softirqs.CPU54.SCHED
39893 ± 8% +17.8% 47012 ± 9% softirqs.CPU55.RCU
13070 ± 25% +53.2% 20018 ± 14% softirqs.CPU55.SCHED
41127 ± 25% +69.1% 69534 ± 21% softirqs.CPU55.TIMER
39115 ± 8% +18.0% 46156 ± 9% softirqs.CPU56.RCU
12641 ± 23% +51.2% 19112 ± 11% softirqs.CPU56.SCHED
40189 ± 5% +14.2% 45903 ± 6% softirqs.CPU57.RCU
12661 ± 23% +44.2% 18261 ± 5% softirqs.CPU57.SCHED
39203 ± 8% +14.6% 44934 ± 5% softirqs.CPU58.RCU
12692 ± 23% +51.1% 19179 ± 9% softirqs.CPU58.SCHED
39829 ± 8% +12.1% 44647 ± 4% softirqs.CPU59.RCU
12404 ± 24% +52.1% 18870 ± 8% softirqs.CPU59.SCHED
39809 ± 7% +13.2% 45068 ± 2% softirqs.CPU6.RCU
13810 ± 25% +41.1% 19488 ± 8% softirqs.CPU6.SCHED
42514 ± 25% +31.4% 55876 ± 3% softirqs.CPU6.TIMER
12877 ± 22% +41.4% 18206 ± 5% softirqs.CPU60.SCHED
12806 ± 22% +49.0% 19084 ± 8% softirqs.CPU61.SCHED
12632 ± 23% +49.4% 18875 ± 8% softirqs.CPU62.SCHED
39025 ± 7% +10.9% 43269 ± 2% softirqs.CPU63.RCU
12034 ± 8% +56.3% 18814 ± 8% softirqs.CPU63.SCHED
40861 ± 22% +29.3% 52839 ± 4% softirqs.CPU63.TIMER
12595 ± 20% +43.9% 18129 ± 4% softirqs.CPU64.SCHED
40696 ± 22% +28.3% 52226 ± 5% softirqs.CPU64.TIMER
38379 ± 8% +12.1% 43014 ± 3% softirqs.CPU65.RCU
12506 ± 22% +50.8% 18862 ± 8% softirqs.CPU65.SCHED
40243 ± 24% +31.0% 52735 ± 4% softirqs.CPU65.TIMER
38319 ± 8% +11.7% 42803 ± 3% softirqs.CPU66.RCU
12367 ± 23% +50.6% 18626 ± 11% softirqs.CPU66.SCHED
39374 ± 6% +10.1% 43333 ± 4% softirqs.CPU67.RCU
12727 ± 22% +49.1% 18973 ± 8% softirqs.CPU67.SCHED
12590 ± 21% +49.9% 18878 ± 8% softirqs.CPU68.SCHED
40274 ± 24% +31.4% 52918 ± 4% softirqs.CPU68.TIMER
12871 ± 22% +45.9% 18781 ± 13% softirqs.CPU69.SCHED
39523 ± 7% +13.5% 44847 ± 2% softirqs.CPU7.RCU
12672 ± 23% +50.3% 19046 ± 8% softirqs.CPU7.SCHED
11989 ± 28% +55.0% 18587 ± 6% softirqs.CPU70.SCHED
40139 ± 25% +39.7% 56065 ± 5% softirqs.CPU70.TIMER
40203 ± 4% +12.0% 45012 softirqs.CPU71.RCU
12442 ± 19% +51.6% 18866 ± 7% softirqs.CPU71.SCHED
40148 ± 25% +32.6% 53219 ± 4% softirqs.CPU71.TIMER
39205 ± 4% +16.8% 45803 ± 3% softirqs.CPU72.RCU
12573 ± 22% +53.6% 19318 ± 6% softirqs.CPU72.SCHED
39393 ± 4% +18.9% 46843 ± 4% softirqs.CPU73.RCU
12040 ± 29% +57.6% 18970 ± 8% softirqs.CPU73.SCHED
38872 ± 6% +20.3% 46765 ± 4% softirqs.CPU74.RCU
12461 ± 25% +54.9% 19301 ± 8% softirqs.CPU74.SCHED
39877 ± 5% +20.0% 47869 ± 4% softirqs.CPU75.RCU
12562 ± 23% +54.7% 19430 ± 6% softirqs.CPU75.SCHED
40443 ± 26% +35.5% 54795 ± 5% softirqs.CPU75.TIMER
40298 ± 7% +16.2% 46835 softirqs.CPU76.RCU
12678 ± 23% +50.7% 19109 ± 7% softirqs.CPU76.SCHED
39678 ± 7% +15.8% 45930 softirqs.CPU77.RCU
12631 ± 23% +48.7% 18783 ± 6% softirqs.CPU77.SCHED
13033 ± 20% +50.5% 19614 ± 3% softirqs.CPU78.SCHED
36428 ± 22% +81.4% 66096 ± 5% softirqs.CPU78.TIMER
39321 ± 6% +15.1% 45251 softirqs.CPU79.RCU
12675 ± 21% +46.3% 18540 ± 7% softirqs.CPU79.SCHED
34379 ± 21% +48.7% 51120 ± 19% softirqs.CPU79.TIMER
38912 ± 7% +14.3% 44458 ± 4% softirqs.CPU8.RCU
12790 ± 22% +48.3% 18969 ± 9% softirqs.CPU8.SCHED
39256 ± 6% +20.8% 47420 ± 5% softirqs.CPU80.RCU
12652 ± 18% +48.7% 18819 ± 7% softirqs.CPU80.SCHED
34916 ± 22% +52.9% 53384 ± 16% softirqs.CPU80.TIMER
39168 ± 6% +14.9% 45015 softirqs.CPU81.RCU
12969 ± 20% +45.6% 18887 ± 8% softirqs.CPU81.SCHED
34890 ± 21% +50.0% 52338 ± 19% softirqs.CPU81.TIMER
38943 ± 6% +17.1% 45620 softirqs.CPU82.RCU
12700 ± 20% +49.6% 19004 ± 8% softirqs.CPU82.SCHED
34359 ± 21% +53.1% 52589 ± 19% softirqs.CPU82.TIMER
39741 ± 6% +15.7% 45991 softirqs.CPU83.RCU
12868 ± 20% +50.2% 19326 ± 6% softirqs.CPU83.SCHED
34527 ± 22% +53.9% 53129 ± 17% softirqs.CPU83.TIMER
40382 ± 5% +12.5% 45431 softirqs.CPU84.RCU
12925 ± 19% +41.0% 18230 ± 12% softirqs.CPU84.SCHED
34852 ± 20% +50.3% 52369 ± 19% softirqs.CPU84.TIMER
39219 ± 5% +15.8% 45430 softirqs.CPU85.RCU
12883 ± 19% +18.3% 15247 ± 11% softirqs.CPU85.SCHED
38921 ± 6% +15.4% 44909 softirqs.CPU86.RCU
12837 ± 19% +46.0% 18741 ± 7% softirqs.CPU86.SCHED
34669 ± 20% +50.3% 52111 ± 18% softirqs.CPU86.TIMER
39254 ± 5% +17.5% 46135 softirqs.CPU87.RCU
12982 ± 19% +48.3% 19258 ± 6% softirqs.CPU87.SCHED
35171 ± 20% +53.3% 53934 ± 17% softirqs.CPU87.TIMER
39849 ± 5% +13.8% 45344 softirqs.CPU88.RCU
12888 ± 17% +49.7% 19288 ± 7% softirqs.CPU88.SCHED
35436 ± 21% +50.7% 53389 ± 18% softirqs.CPU88.TIMER
39258 ± 5% +15.9% 45481 softirqs.CPU89.RCU
12923 ± 19% +48.5% 19191 ± 6% softirqs.CPU89.SCHED
35231 ± 20% +51.3% 53314 ± 17% softirqs.CPU89.TIMER
39605 ± 10% +12.8% 44689 ± 5% softirqs.CPU9.RCU
12897 ± 23% +49.7% 19301 ± 10% softirqs.CPU9.SCHED
39985 ± 5% +14.6% 45840 softirqs.CPU90.RCU
13145 ± 18% +29.4% 17015 ± 19% softirqs.CPU90.SCHED
35673 ± 19% +47.0% 52448 ± 17% softirqs.CPU90.TIMER
39079 ± 7% +15.4% 45086 softirqs.CPU91.RCU
12613 ± 20% +50.4% 18975 ± 6% softirqs.CPU91.SCHED
34350 ± 20% +52.5% 52372 ± 16% softirqs.CPU91.TIMER
39010 ± 7% +15.8% 45162 softirqs.CPU92.RCU
12718 ± 19% +49.3% 18992 ± 7% softirqs.CPU92.SCHED
34704 ± 20% +52.4% 52883 ± 18% softirqs.CPU92.TIMER
38820 ± 7% +14.5% 44434 softirqs.CPU93.RCU
12863 ± 20% +48.1% 19046 ± 7% softirqs.CPU93.SCHED
34746 ± 20% +52.3% 52919 ± 17% softirqs.CPU93.TIMER
39843 ± 5% +13.0% 45009 softirqs.CPU94.RCU
12830 ± 19% +47.7% 18944 ± 7% softirqs.CPU94.SCHED
34532 ± 20% +52.2% 52556 ± 18% softirqs.CPU94.TIMER
38567 ± 6% +19.5% 46090 ± 3% softirqs.CPU95.RCU
12897 ± 19% +49.7% 19303 ± 4% softirqs.CPU95.SCHED
34509 ± 20% +54.8% 53433 ± 15% softirqs.CPU95.TIMER
38476 ± 6% +17.2% 45087 softirqs.CPU96.RCU
12898 ± 20% +49.1% 19228 ± 7% softirqs.CPU96.SCHED
34707 ± 20% +51.8% 52673 ± 18% softirqs.CPU96.TIMER
37890 ± 6% +23.1% 46643 ± 4% softirqs.CPU97.RCU
12692 ± 19% +50.8% 19141 ± 7% softirqs.CPU97.SCHED
34362 ± 21% +53.7% 52819 ± 18% softirqs.CPU97.TIMER
38481 ± 6% +15.3% 44352 softirqs.CPU98.RCU
12551 ± 15% +49.3% 18741 ± 7% softirqs.CPU98.SCHED
12899 ± 19% +47.8% 19065 ± 6% softirqs.CPU99.SCHED
36392 ± 21% +44.9% 52733 ± 16% softirqs.CPU99.TIMER
4117204 ± 6% +14.2% 4699983 softirqs.RCU
1333017 ± 21% +47.8% 1969723 ± 8% softirqs.SCHED
3998878 ± 18% +41.8% 5668977 ± 7% softirqs.TIMER
unixbench.score
10000 +-+-----------------------------------------------------------------+
9000 +-+ +.+.+..+ +.O..O.+.+ +.+.+ +.+.+..+ +.+..+.+.+.+..+.|
O O O O O O O O O O O : O : |
8000 +-+ : : : : : : : : : |
7000 +-+: : : : : : : : : : |
| : : : : : : : : : : |
6000 +-+: : : : : : : : : : |
5000 +-+: : : : : : : : : : |
4000 +-+ : : : : : : : : : : |
| : : : : : : : : : : |
3000 +-+ : : : : : : : : : : |
2000 +-+ :: : :: :: : |
| : : : : : |
1000 +-+ : : : : : |
0 +-+--O---O-O-------------O---O----O----O---O----O-------------------+
[*] bisect-good sample
[O] bisect-bad sample
***************************************************************************************************
lkp-ivb-d01: 8 threads Intel(R) Core(TM) i7-3770K CPU @ 3.50GHz with 16G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/30%/debian-x86_64-2019-11-14.cgz/300s/lkp-ivb-d01/shell1/unixbench/0x21
commit:
7e21e4009e ("mm/lru: revise the comments of lru_lock")
5470521fac ("mm/lru: debug checking for page memcg moving and lock_page_memcg")
7e21e4009e9f4fcf 5470521faccbd36e0725c2a7b87
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:4 25% 1:4 dmesg.RIP:cpuidle_enter_state
4:4 -100% :4 dmesg.RIP:poll_idle
:4 25% 1:4 kmsg.a5dec45>]usb_hcd_irq
1:4 -25% :4 kmsg.a7>]usb_hcd_irq
1:4 -25% :4 kmsg.a7a75d0>]usb_hcd_irq
1:4 -25% :4 kmsg.c9ac99e>]usb_hcd_irq
:4 25% 1:4 kmsg.d369>]usb_hcd_irq
:4 25% 1:4 kmsg.e373b18>]usb_hcd_irq
3:4 -6% 3:4 perf-profile.children.cycles-pp.error_entry
2:4 -5% 2:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
5710 -1.5% 5625 unixbench.score
665.38 ± 4% -4.9% 632.78 unixbench.time.elapsed_time
665.38 ± 4% -4.9% 632.78 unixbench.time.elapsed_time.max
306360 -1.3% 302321 unixbench.time.involuntary_context_switches
2.398e+08 -1.5% 2.362e+08 unixbench.time.minor_page_faults
275.00 ± 4% +5.1% 289.00 unixbench.time.percent_of_cpu_this_job_got
682.52 +2.5% 699.54 unixbench.time.system_time
1145 -1.2% 1132 unixbench.time.user_time
7381468 -1.5% 7268608 unixbench.time.voluntary_context_switches
16059798 ± 4% -6.1% 15087007 unixbench.workload
300649 ± 9% -21.5% 235985 ± 3% softirqs.CPU4.TIMER
23056693 ± 6% -11.1% 20487659 cpuidle.C1.time
216045 ± 7% -8.8% 197135 cpuidle.POLL.time
13.65 -0.8% 13.54 perf-stat.overall.MPKI
0.98 -1.2% 0.97 perf-stat.overall.ipc
6.765e+12 -1.3% 6.676e+12 perf-stat.total.instructions
525.75 ± 28% +52.6% 802.50 ± 38% interrupts.63:PCI-MSI.530438-edge.eth4-TxRx-5
4294 +76.8% 7593 ± 24% interrupts.CPU1.NMI:Non-maskable_interrupts
4294 +76.8% 7593 ± 24% interrupts.CPU1.PMI:Performance_monitoring_interrupts
525.75 ± 28% +52.6% 802.50 ± 38% interrupts.CPU5.63:PCI-MSI.530438-edge.eth4-TxRx-5
12931 ± 4% -7.9% 11904 interrupts.CPU6.RES:Rescheduling_interrupts
1.879e+08 -1.6% 1.849e+08 proc-vmstat.numa_hit
1.879e+08 -1.6% 1.849e+08 proc-vmstat.numa_local
1.969e+08 -1.5% 1.94e+08 proc-vmstat.pgalloc_normal
2.408e+08 -1.5% 2.371e+08 proc-vmstat.pgfault
1.969e+08 -1.5% 1.94e+08 proc-vmstat.pgfree
3375877 -1.5% 3324766 proc-vmstat.unevictable_pgs_culled
122195 ± 6% -7.2% 113390 sched_debug.cfs_rq:/.exec_clock.max
770.82 ± 8% -22.1% 600.50 ± 8% sched_debug.cfs_rq:/.load_avg.max
248.11 ± 10% -21.4% 195.13 ± 9% sched_debug.cfs_rq:/.load_avg.stddev
701275 ± 6% -7.2% 650527 sched_debug.cfs_rq:/.min_vruntime.max
17514 ± 8% -25.0% 13128 ± 15% sched_debug.cfs_rq:/.min_vruntime.stddev
2.33 ± 9% -33.8% 1.55 ± 20% sched_debug.cfs_rq:/.nr_spread_over.avg
459.70 ± 13% -35.9% 294.59 ± 33% sched_debug.cfs_rq:/.removed.load_avg.max
21218 ± 12% -35.6% 13656 ± 33% sched_debug.cfs_rq:/.removed.runnable_sum.max
40978 ± 6% -30.7% 28394 ± 30% sched_debug.cfs_rq:/.spread0.max
17513 ± 8% -25.0% 13128 ± 15% sched_debug.cfs_rq:/.spread0.stddev
8468 ± 9% +27.1% 10767 ± 19% sched_debug.cpu.curr->pid.avg
1355708 ± 6% -7.0% 1260319 sched_debug.cpu.sched_count.min
657812 ± 6% -7.0% 611632 sched_debug.cpu.sched_goidle.min
2.30 ± 2% -0.1 2.22 perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.79 ± 5% -0.1 0.71 ± 5% perf-profile.calltrace.cycles-pp.read
1.34 -0.1 1.27 perf-profile.calltrace.cycles-pp.__strcoll_l
1.61 ± 2% +0.1 1.69 ± 3% perf-profile.calltrace.cycles-pp.exit_mmap.mmput.flush_old_exec.load_elf_binary.search_binary_handler
4.35 +0.1 4.43 perf-profile.calltrace.cycles-pp.search_binary_handler.__do_execve_file.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.72 ± 4% +0.1 0.80 perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.exit_mmap.mmput.flush_old_exec
0.75 ± 4% +0.1 0.85 perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.mmput.flush_old_exec.load_elf_binary
0.56 ± 5% +0.2 0.76 ± 6% perf-profile.calltrace.cycles-pp.alloc_set_pte.filemap_map_pages.handle_pte_fault.__handle_mm_fault.handle_mm_fault
3.21 ± 3% +0.3 3.52 ± 2% perf-profile.calltrace.cycles-pp.exit_mmap.mmput.do_exit.do_group_exit.__x64_sys_exit_group
4.24 ± 3% +0.3 4.55 ± 2% perf-profile.calltrace.cycles-pp.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.24 ± 3% +0.3 4.55 ± 2% perf-profile.calltrace.cycles-pp.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.24 ± 3% +0.3 4.55 ± 2% perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.23 ± 3% +0.3 3.54 ± 3% perf-profile.calltrace.cycles-pp.mmput.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
1.73 ± 4% +0.4 2.10 ± 2% perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.mmput.do_exit.do_group_exit
1.66 ± 3% +0.4 2.03 ± 3% perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.exit_mmap.mmput.do_exit
0.26 ±100% +0.6 0.84 ± 2% perf-profile.calltrace.cycles-pp.page_remove_rmap.unmap_page_range.unmap_vmas.exit_mmap.mmput
2.40 ± 3% -0.1 2.28 perf-profile.children.cycles-pp.vfprintf
1.59 ± 2% -0.1 1.48 ± 4% perf-profile.children.cycles-pp.__alloc_pages_nodemask
0.89 -0.1 0.80 ± 4% perf-profile.children.cycles-pp.free_pgtables
1.35 -0.1 1.27 perf-profile.children.cycles-pp.__strcoll_l
0.79 ± 5% -0.1 0.71 ± 5% perf-profile.children.cycles-pp.read
0.32 ± 6% -0.1 0.26 ± 13% perf-profile.children.cycles-pp.__get_free_pages
0.61 ± 3% -0.0 0.57 perf-profile.children.cycles-pp.ret_from_fork
0.07 ± 7% -0.0 0.03 ±100% perf-profile.children.cycles-pp.security_prepare_creds
0.21 ± 3% -0.0 0.17 ± 11% perf-profile.children.cycles-pp.anon_vma_fork
0.30 ± 6% -0.0 0.26 ± 6% perf-profile.children.cycles-pp.unlink_file_vma
0.18 ± 4% -0.0 0.15 ± 14% perf-profile.children.cycles-pp.pgd_alloc
0.30 -0.0 0.28 ± 9% perf-profile.children.cycles-pp.d_alloc
0.17 ± 7% -0.0 0.15 ± 7% perf-profile.children.cycles-pp.__pud_alloc
0.08 ± 10% -0.0 0.06 ± 6% perf-profile.children.cycles-pp.__put_anon_vma
0.06 ± 6% +0.0 0.08 ± 6% perf-profile.children.cycles-pp.ima_file_check
0.07 ± 10% +0.0 0.09 ± 11% perf-profile.children.cycles-pp.inode_has_perm
0.07 ± 14% +0.0 0.10 ± 8% perf-profile.children.cycles-pp.file_has_perm
0.17 ± 6% +0.0 0.19 ± 2% perf-profile.children.cycles-pp.__generic_file_write_iter
0.27 ± 5% +0.0 0.30 ± 4% perf-profile.children.cycles-pp.open64
0.17 ± 4% +0.0 0.20 ± 2% perf-profile.children.cycles-pp.generic_file_write_iter
0.15 ± 7% +0.0 0.18 ± 7% perf-profile.children.cycles-pp.prepend_path
0.22 ± 6% +0.0 0.26 ± 4% perf-profile.children.cycles-pp._IO_file_xsputn
0.43 ± 5% +0.0 0.47 ± 6% perf-profile.children.cycles-pp.schedule_idle
0.03 ±100% +0.0 0.07 ± 12% perf-profile.children.cycles-pp.__update_load_avg_se
0.46 ± 4% +0.0 0.50 ± 3% perf-profile.children.cycles-pp.enqueue_task_fair
0.43 ± 4% +0.0 0.48 ± 4% perf-profile.children.cycles-pp.activate_task
0.42 ± 4% +0.0 0.46 ± 4% perf-profile.children.cycles-pp.enqueue_entity
0.65 ± 4% +0.0 0.70 ± 2% perf-profile.children.cycles-pp.irq_exit
0.01 ±173% +0.1 0.08 ± 24% perf-profile.children.cycles-pp.mem_cgroup_from_task
0.96 ± 3% +0.1 1.03 perf-profile.children.cycles-pp.entry_SYSCALL_64
0.07 ± 26% +0.1 0.14 ± 13% perf-profile.children.cycles-pp.PageHuge
4.38 ± 3% +0.3 4.67 ± 2% perf-profile.children.cycles-pp.do_group_exit
4.38 ± 3% +0.3 4.67 ± 2% perf-profile.children.cycles-pp.do_exit
4.38 ± 3% +0.3 4.68 ± 2% perf-profile.children.cycles-pp.__x64_sys_exit_group
4.96 ± 2% +0.4 5.34 perf-profile.children.cycles-pp.exit_mmap
4.99 ± 2% +0.4 5.38 perf-profile.children.cycles-pp.mmput
3.63 ± 2% +0.4 4.03 ± 3% perf-profile.children.cycles-pp.filemap_map_pages
1.26 ± 2% +0.4 1.69 ± 4% perf-profile.children.cycles-pp.alloc_set_pte
0.59 ± 8% +0.4 1.03 ± 6% perf-profile.children.cycles-pp.page_add_file_rmap
2.63 ± 2% +0.5 3.10 ± 2% perf-profile.children.cycles-pp.unmap_page_range
2.75 ± 2% +0.5 3.22 ± 2% perf-profile.children.cycles-pp.unmap_vmas
0.80 ± 2% +0.5 1.29 ± 3% perf-profile.children.cycles-pp.page_remove_rmap
0.20 ± 22% +1.0 1.22 ± 4% perf-profile.children.cycles-pp.lock_page_memcg
1.75 ± 2% -0.1 1.62 ± 3% perf-profile.self.cycles-pp.do_syscall_64
2.09 ± 3% -0.1 1.97 ± 2% perf-profile.self.cycles-pp.vfprintf
1.33 -0.1 1.25 perf-profile.self.cycles-pp.__strcoll_l
0.18 ± 11% -0.0 0.15 ± 10% perf-profile.self.cycles-pp.change_p4d_range
0.15 ± 7% -0.0 0.12 ± 3% perf-profile.self.cycles-pp.unmapped_area_topdown
0.07 ± 6% -0.0 0.05 ± 8% perf-profile.self.cycles-pp.__fget_light
0.07 ± 6% +0.0 0.09 ± 13% perf-profile.self.cycles-pp.inode_has_perm
0.20 ± 7% +0.0 0.22 ± 5% perf-profile.self.cycles-pp._IO_file_xsputn
0.07 ± 26% +0.0 0.11 ± 14% perf-profile.self.cycles-pp.PageHuge
0.01 ±173% +0.1 0.07 ± 21% perf-profile.self.cycles-pp.mem_cgroup_from_task
0.93 ± 3% +0.1 0.98 perf-profile.self.cycles-pp.entry_SYSCALL_64
0.19 ± 20% +1.0 1.19 ± 4% perf-profile.self.cycles-pp.lock_page_memcg
***************************************************************************************************
lkp-csl-2ap3: 192 threads Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/1/debian-x86_64-2019-11-14.cgz/300s/lkp-csl-2ap3/shell8/unixbench/0x500002c
commit:
7e21e4009e ("mm/lru: revise the comments of lru_lock")
5470521fac ("mm/lru: debug checking for page memcg moving and lock_page_memcg")
7e21e4009e9f4fcf 5470521faccbd36e0725c2a7b87
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
1:4 -25% :4 dmesg.WARNING:at#for_ip_interrupt_entry/0x
:4 50% 2:4 kmsg.ipmi_si_dmi-ipmi-si.#:IRQ_index#not_found
%stddev %change %stddev
\ | \
10665 -2.3% 10417 unixbench.score
48380719 -2.3% 47260108 unixbench.time.minor_page_faults
161.51 +2.9% 166.20 unixbench.time.system_time
213.35 -2.6% 207.81 unixbench.time.user_time
1456778 -2.5% 1420836 unixbench.time.voluntary_context_switches
403149 -1.9% 395332 unixbench.workload
1.65e+10 ± 44% -72.0% 4.619e+09 ±130% cpuidle.C1E.time
8.84 ± 57% -4.8 4.05 ± 38% perf-stat.i.cache-miss-rate%
2156 ± 9% +17.2% 2527 ± 4% perf-stat.i.cycles-between-cache-misses
4.66 ± 17% -1.5 3.19 ± 9% perf-stat.overall.cache-miss-rate%
2218 ± 7% +15.1% 2552 ± 4% perf-stat.overall.cycles-between-cache-misses
56.25 ± 19% +27.6% 71.75 ± 13% turbostat.Avg_MHz
5.55 ± 10% +1.7 7.22 ± 8% turbostat.Busy%
90.78 -24.1% 68.93 ± 18% turbostat.CPU%c1
40.50 ± 2% -13.0% 35.25 ± 8% turbostat.CoreTmp
40.00 -11.9% 35.25 ± 7% turbostat.PkgTmp
7162 ± 2% +4.2% 7463 ± 2% proc-vmstat.nr_mapped
38190356 -2.4% 37260156 proc-vmstat.numa_hit
38097034 -2.4% 37166895 proc-vmstat.numa_local
40795074 -2.4% 39821000 proc-vmstat.pgalloc_normal
48745024 -2.4% 47586156 proc-vmstat.pgfault
40710907 -2.4% 39737517 proc-vmstat.pgfree
716928 -2.3% 700264 proc-vmstat.unevictable_pgs_culled
5.70 ±119% -5.7 0.00 perf-profile.calltrace.cycles-pp.perf_event_ctx_lock_nested.perf_event_release_kernel.perf_release.__fput.task_work_run
1.56 ±108% +4.4 5.94 ± 30% perf-profile.calltrace.cycles-pp.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.56 ±108% +4.4 5.94 ± 30% perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.56 ±108% +4.4 5.94 ± 30% perf-profile.calltrace.cycles-pp.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
5.70 ±119% -5.0 0.71 ±173% perf-profile.children.cycles-pp.perf_event_ctx_lock_nested
1.58 ±106% +4.4 5.94 ± 30% perf-profile.children.cycles-pp.__x64_sys_exit_group
1.07 ± 96% +5.6 6.65 ± 43% perf-profile.children.cycles-pp.mmput
1.07 ± 96% +5.6 6.65 ± 43% perf-profile.children.cycles-pp.exit_mmap
215.50 ± 81% +102.4% 436.25 ± 14% sched_debug.cfs_rq:/.removed.util_avg.max
1026940 ± 4% +138.4% 2448387 ± 34% sched_debug.cpu.avg_idle.max
7057 ±117% -75.6% 1725 ± 31% sched_debug.cpu.avg_idle.min
247135 ± 4% +16.7% 288505 ± 8% sched_debug.cpu.avg_idle.stddev
579693 ± 16% +146.6% 1429781 ± 7% sched_debug.cpu.max_idle_balance_cost.max
6344 ±107% +1106.6% 76545 ± 22% sched_debug.cpu.max_idle_balance_cost.stddev
18412 ± 46% -37.8% 11461 ± 5% sched_debug.cpu.nr_switches.max
1883 ± 42% -33.4% 1254 ± 7% sched_debug.cpu.nr_switches.stddev
309.50 ± 52% -56.9% 133.25 ± 26% interrupts.32:PCI-MSI.524290-edge.eth0-TxRx-1
1087 ±120% -88.1% 129.25 ± 35% interrupts.34:PCI-MSI.524292-edge.eth0-TxRx-3
309.50 ± 52% -56.9% 133.25 ± 26% interrupts.CPU10.32:PCI-MSI.524290-edge.eth0-TxRx-1
151.75 ± 14% -29.0% 107.75 ± 16% interrupts.CPU100.RES:Rescheduling_interrupts
164.50 ± 32% -40.7% 97.50 ± 26% interrupts.CPU101.RES:Rescheduling_interrupts
77.00 ± 28% -39.9% 46.25 ± 31% interrupts.CPU105.TLB:TLB_shootdowns
165.25 ± 26% -36.3% 105.25 ± 17% interrupts.CPU108.RES:Rescheduling_interrupts
254.75 ± 62% -56.8% 110.00 ± 36% interrupts.CPU11.RES:Rescheduling_interrupts
82.75 ± 28% -49.2% 42.00 ± 23% interrupts.CPU11.TLB:TLB_shootdowns
146.75 ± 22% -41.4% 86.00 ± 35% interrupts.CPU110.RES:Rescheduling_interrupts
88.75 ± 21% -44.2% 49.50 ± 19% interrupts.CPU114.TLB:TLB_shootdowns
146.50 ± 26% -30.0% 102.50 ± 25% interrupts.CPU115.RES:Rescheduling_interrupts
88.25 ± 32% -48.2% 45.75 ± 9% interrupts.CPU115.TLB:TLB_shootdowns
137.25 ± 29% -37.3% 86.00 ± 34% interrupts.CPU116.RES:Rescheduling_interrupts
87.75 ± 39% -59.5% 35.50 ± 14% interrupts.CPU116.TLB:TLB_shootdowns
1087 ±120% -88.1% 129.25 ± 35% interrupts.CPU12.34:PCI-MSI.524292-edge.eth0-TxRx-3
83.75 ± 35% -43.3% 47.50 ± 28% interrupts.CPU12.TLB:TLB_shootdowns
82.00 ± 55% -48.8% 42.00 ± 24% interrupts.CPU13.TLB:TLB_shootdowns
82.75 ± 44% -40.8% 49.00 ± 14% interrupts.CPU15.TLB:TLB_shootdowns
100.00 ± 23% -39.8% 60.25 ± 36% interrupts.CPU153.RES:Rescheduling_interrupts
127.00 ± 49% -58.1% 53.25 ± 26% interrupts.CPU156.RES:Rescheduling_interrupts
96.00 ± 17% -34.1% 63.25 ± 28% interrupts.CPU157.RES:Rescheduling_interrupts
89.50 ± 12% -39.4% 54.25 ± 33% interrupts.CPU159.RES:Rescheduling_interrupts
121.50 ± 39% -61.5% 46.75 ± 26% interrupts.CPU161.RES:Rescheduling_interrupts
67.50 ± 55% -55.2% 30.25 ± 37% interrupts.CPU162.TLB:TLB_shootdowns
95.50 ± 24% -45.5% 52.00 ± 37% interrupts.CPU163.RES:Rescheduling_interrupts
61.75 ± 48% -56.7% 26.75 ± 25% interrupts.CPU164.TLB:TLB_shootdowns
195.75 ± 30% -50.7% 96.50 ± 37% interrupts.CPU18.RES:Rescheduling_interrupts
92.75 ± 29% -52.0% 44.50 ± 30% interrupts.CPU18.TLB:TLB_shootdowns
147.25 ± 29% -44.3% 82.00 ± 22% interrupts.CPU19.RES:Rescheduling_interrupts
162.00 ± 17% -37.3% 101.50 ± 36% interrupts.CPU21.RES:Rescheduling_interrupts
81.75 ± 18% -42.5% 47.00 ± 28% interrupts.CPU22.TLB:TLB_shootdowns
97.75 ± 11% -22.5% 75.75 ± 15% interrupts.CPU48.TLB:TLB_shootdowns
420.00 ± 36% -66.0% 142.75 ± 32% interrupts.CPU5.RES:Rescheduling_interrupts
107.50 ± 15% -30.0% 75.25 ± 22% interrupts.CPU54.RES:Rescheduling_interrupts
1441 ±158% -95.7% 61.75 ± 14% interrupts.CPU56.RES:Rescheduling_interrupts
65.75 ± 41% -58.6% 27.25 ± 19% interrupts.CPU59.TLB:TLB_shootdowns
111.50 ± 46% -46.2% 60.00 ± 17% interrupts.CPU61.RES:Rescheduling_interrupts
107.50 ± 43% -44.7% 59.50 ± 22% interrupts.CPU65.RES:Rescheduling_interrupts
86.50 ± 32% -40.5% 51.50 ± 33% interrupts.CPU68.RES:Rescheduling_interrupts
76.25 ± 63% -58.4% 31.75 ± 25% interrupts.CPU71.TLB:TLB_shootdowns
176.00 ± 32% -46.0% 95.00 ± 26% interrupts.CPU97.RES:Rescheduling_interrupts
11436 ± 2% -16.9% 9508 ± 6% interrupts.TLB:TLB_shootdowns
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[mm/lru] 0bc395a878: aim7.jobs-per-min -50.4% regression
by kernel test robot
Greeting,
FYI, we noticed a -50.4% regression of aim7.jobs-per-min due to commit:
commit: 0bc395a87831d0e71770a0ad44361099292dc15c ("mm/lru: replace pgdat lru_lock with lruvec lock")
https://github.com/alexshi/linux.git lru-next
in testcase: aim7
on test machine: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
with following parameters:
cpufreq_governor: performance
test: disk_wrt
load: 8000
ucode: 0xb000038
test-description: AIM7 is a traditional UNIX system-level benchmark suite which is used to test and measure the performance of multiuser systems.
test-url: https://sourceforge.net/projects/aimbench/files/aim-suite7/
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/load/rootfs/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/8000/debian-x86_64-2019-11-14.cgz/lkp-bdw-ep6/disk_wrt/aim7/0xb000038
commit:
81840ebf1a ("mm/vmscan: remove unnecessary lruvec adding")
0bc395a878 ("mm/lru: replace pgdat lru_lock with lruvec lock")
81840ebf1a3cd283 0bc395a87831d0e71770a0ad443
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
1:4 -25% :4 dmesg.WARNING:at#for_ip_swapgs_restore_regs_and_return_to_usermode/0x
3:4 -75% :4 dmesg.WARNING:stack_recursion
%stddev %change %stddev
\ | \
1219133 -50.4% 605068 aim7.jobs-per-min
40.09 +99.8% 80.09 aim7.time.elapsed_time
40.09 +99.8% 80.09 aim7.time.elapsed_time.max
190439 +208.8% 588054 ± 2% aim7.time.involuntary_context_switches
2183 +160.6% 5691 aim7.time.system_time
367.65 -7.4% 340.33 aim7.time.user_time
58356 ± 4% +42.3% 83041 ± 3% aim7.time.voluntary_context_switches
75.44 ± 20% -33.0 42.41 ± 53% mpstat.cpu.all.idle%
18.35 ± 61% +33.1 51.41 ± 39% mpstat.cpu.all.sys%
196168 ± 26% +41.4% 277330 ± 17% numa-meminfo.node0.Active
196025 ± 26% +41.4% 277275 ± 17% numa-meminfo.node0.Active(anon)
25920 ± 19% +114.8% 55681 ± 16% numa-meminfo.node0.Shmem
48901 ± 27% +41.7% 69277 ± 18% numa-vmstat.node0.nr_active_anon
6355 ± 15% +118.3% 13872 ± 18% numa-vmstat.node0.nr_shmem
48893 ± 27% +41.7% 69281 ± 18% numa-vmstat.node0.nr_zone_active_anon
752.00 ± 53% +117.7% 1637 ± 37% turbostat.Avg_MHz
29.38 ± 48% +30.5 59.84 ± 35% turbostat.Busy%
30.14 ± 18% -54.8% 13.61 ± 42% turbostat.CPU%c1
75.75 ± 19% -43.2% 43.00 ± 51% vmstat.cpu.id
20.00 ± 61% +162.5% 52.50 ± 39% vmstat.cpu.sy
56.75 ± 68% +228.2% 186.25 ± 42% vmstat.procs.r
3491 ± 45% +99.7% 6973 ± 34% vmstat.system.cs
351216 ± 16% +41.0% 495065 ± 18% meminfo.Active
351032 ± 16% +41.0% 494881 ± 18% meminfo.Active(anon)
76680 ± 59% +116.9% 166286 ± 38% meminfo.PageTables
50705 ± 21% +103.2% 103030 ± 24% meminfo.Shmem
356974 ± 9% +17.9% 421011 ± 11% meminfo.Slab
87640 ± 16% +41.9% 124376 ± 18% proc-vmstat.nr_active_anon
277570 +4.7% 290664 ± 2% proc-vmstat.nr_file_pages
4472 +5.4% 4714 proc-vmstat.nr_inactive_anon
19107 ± 59% +117.9% 41634 ± 38% proc-vmstat.nr_page_table_pages
12645 ± 18% +108.7% 26388 ± 26% proc-vmstat.nr_shmem
28601 ± 4% +9.4% 31299 ± 4% proc-vmstat.nr_slab_reclaimable
87640 ± 16% +41.9% 124376 ± 18% proc-vmstat.nr_zone_active_anon
4472 +5.4% 4714 proc-vmstat.nr_zone_inactive_anon
4160 +5.0% 4370 proc-vmstat.unevictable_pgs_culled
138628 ± 20% +37.3% 190314 ± 19% slabinfo.anon_vma_chain.active_objs
111218 ± 5% +12.8% 125445 ± 6% slabinfo.dentry.active_objs
2928 ± 5% +10.2% 3227 ± 4% slabinfo.dentry.active_slabs
123031 ± 5% +10.2% 135558 ± 4% slabinfo.dentry.num_objs
2928 ± 5% +10.2% 3227 ± 4% slabinfo.dentry.num_slabs
2833 ± 14% +61.5% 4575 ± 23% slabinfo.names_cache.active_objs
364.50 ± 14% +60.6% 585.50 ± 23% slabinfo.names_cache.active_slabs
2920 ± 14% +60.5% 4687 ± 23% slabinfo.names_cache.num_objs
364.50 ± 14% +60.6% 585.50 ± 23% slabinfo.names_cache.num_slabs
35171 ± 19% +44.0% 50658 ± 17% slabinfo.proc_inode_cache.active_objs
930.75 ± 15% +31.2% 1220 ± 13% slabinfo.proc_inode_cache.active_slabs
45631 ± 15% +31.1% 59838 ± 13% slabinfo.proc_inode_cache.num_objs
930.75 ± 15% +31.2% 1220 ± 13% slabinfo.proc_inode_cache.num_slabs
1128 ± 4% -10.9% 1005 ± 7% slabinfo.task_group.active_objs
1128 ± 4% -10.9% 1005 ± 7% slabinfo.task_group.num_objs
51.30 ± 27% -45.2% 28.13 ± 46% perf-stat.i.MPKI
1.40 ± 54% +1.6 3.00 ± 36% perf-stat.i.cache-miss-rate%
3479 ± 47% +102.6% 7050 ± 34% perf-stat.i.context-switches
8.78 ± 16% -35.5% 5.66 ± 25% perf-stat.i.cpi
6.765e+10 ± 54% +114.9% 1.454e+11 ± 37% perf-stat.i.cpu-cycles
0.95 ± 26% -0.5 0.43 ± 43% perf-stat.i.dTLB-load-miss-rate%
0.22 ± 11% -0.0 0.17 ± 6% perf-stat.i.dTLB-store-miss-rate%
1389 ± 30% +132.4% 3229 ± 24% perf-stat.i.instructions-per-iTLB-miss
82.54 ± 10% -21.4 61.14 ± 24% perf-stat.i.node-load-miss-rate%
52.78 ± 20% -32.1 20.66 ± 44% perf-stat.i.node-store-miss-rate%
7.94 ± 12% -26.5% 5.83 ± 12% perf-stat.overall.MPKI
1.19 ± 8% -0.5 0.71 ± 12% perf-stat.overall.branch-miss-rate%
3.37 ± 14% +0.8 4.20 ± 12% perf-stat.overall.cache-miss-rate%
2.55 ± 5% +27.3% 3.25 ± 2% perf-stat.overall.cpi
9717 ± 6% +38.9% 13493 ± 4% perf-stat.overall.cycles-between-cache-misses
0.16 ± 12% -0.1 0.09 ± 11% perf-stat.overall.dTLB-load-miss-rate%
0.17 ± 3% -0.0 0.16 perf-stat.overall.dTLB-store-miss-rate%
88.27 ± 3% -5.9 82.34 ± 5% perf-stat.overall.iTLB-load-miss-rate%
2915 ± 8% +63.7% 4773 ± 19% perf-stat.overall.instructions-per-iTLB-miss
0.39 ± 6% -21.7% 0.31 ± 2% perf-stat.overall.ipc
48.06 ± 6% -6.9 41.14 ± 9% perf-stat.overall.node-load-miss-rate%
11.43 ± 2% -3.5 7.96 ± 4% perf-stat.overall.node-store-miss-rate%
3450 ± 46% +102.6% 6990 ± 34% perf-stat.ps.context-switches
6.708e+10 ± 53% +114.9% 1.442e+11 ± 37% perf-stat.ps.cpu-cycles
272.58 ± 61% +102.1% 550.87 ± 38% perf-stat.ps.cpu-migrations
3.261e+12 ± 2% +64.5% 5.365e+12 perf-stat.total.instructions
15561 ± 58% +102.5% 31507 ± 41% sched_debug.cfs_rq:/.exec_clock.avg
16085 ± 58% +100.7% 32288 ± 40% sched_debug.cfs_rq:/.exec_clock.max
15433 ± 58% +102.1% 31195 ± 41% sched_debug.cfs_rq:/.exec_clock.min
36290 ± 21% -56.3% 15844 ± 72% sched_debug.cfs_rq:/.load.avg
895202 ± 19% -83.2% 150396 ±133% sched_debug.cfs_rq:/.load.max
153527 ± 9% -77.7% 34172 ±133% sched_debug.cfs_rq:/.load.stddev
123.58 ± 97% -67.0% 40.84 ± 7% sched_debug.cfs_rq:/.load_avg.avg
5317 ±143% -89.5% 560.00 ± 10% sched_debug.cfs_rq:/.load_avg.max
718.29 ±129% -83.7% 117.17 ± 13% sched_debug.cfs_rq:/.load_avg.stddev
1455068 ± 57% +125.7% 3283622 ± 40% sched_debug.cfs_rq:/.min_vruntime.avg
1647995 ± 58% +123.2% 3677843 ± 39% sched_debug.cfs_rq:/.min_vruntime.max
1372901 ± 58% +118.3% 2997680 ± 43% sched_debug.cfs_rq:/.min_vruntime.min
57401 ± 61% +145.7% 141034 ± 46% sched_debug.cfs_rq:/.min_vruntime.stddev
0.13 ± 34% +298.3% 0.53 ± 28% sched_debug.cfs_rq:/.nr_running.avg
0.34 ± 13% -24.4% 0.26 ± 10% sched_debug.cfs_rq:/.nr_running.stddev
5.07 ±118% +224.0% 16.42 ± 42% sched_debug.cfs_rq:/.nr_spread_over.max
437.37 ± 46% -69.0% 135.75 ±143% sched_debug.cfs_rq:/.runnable_load_avg.max
82.91 ± 51% -65.2% 28.86 ±136% sched_debug.cfs_rq:/.runnable_load_avg.stddev
35862 ± 23% -60.0% 14360 ± 83% sched_debug.cfs_rq:/.runnable_weight.avg
893368 ± 20% -84.4% 139765 ±143% sched_debug.cfs_rq:/.runnable_weight.max
153338 ± 9% -78.6% 32755 ±138% sched_debug.cfs_rq:/.runnable_weight.stddev
-165319 +194.9% -487460 sched_debug.cfs_rq:/.spread0.min
57401 ± 61% +145.4% 140849 ± 46% sched_debug.cfs_rq:/.spread0.stddev
200.61 ± 62% +194.2% 590.17 ± 31% sched_debug.cfs_rq:/.util_avg.avg
17.90 ± 71% +2308.4% 431.12 ± 44% sched_debug.cfs_rq:/.util_est_enqueued.avg
290.53 ± 45% +416.3% 1500 ± 27% sched_debug.cfs_rq:/.util_est_enqueued.max
62.01 ± 53% +434.8% 331.61 ± 39% sched_debug.cfs_rq:/.util_est_enqueued.stddev
2.96 ± 36% +1927.4% 60.08 ± 53% sched_debug.cpu.clock.stddev
2.96 ± 36% +1927.4% 60.08 ± 53% sched_debug.cpu.clock_task.stddev
192.66 ± 7% +1521.7% 3124 ± 33% sched_debug.cpu.curr->pid.avg
1010 ± 25% +49.0% 1505 ± 7% sched_debug.cpu.curr->pid.stddev
0.07 ± 40% +1486.7% 1.19 ± 44% sched_debug.cpu.nr_running.avg
1.27 ± 22% +189.5% 3.67 ± 18% sched_debug.cpu.nr_running.max
0.25 ± 19% +241.9% 0.86 ± 34% sched_debug.cpu.nr_running.stddev
0.01 ± 89% +2.9e+05% 27.19 ± 34% sched_debug.cpu.nr_uninterruptible.avg
56.92 ± 15% +53.1% 87.12 ± 15% sched_debug.cpu.nr_uninterruptible.max
1994 ± 59% +105.8% 4105 ± 39% sched_debug.cpu.sched_count.min
519.25 ± 24% +115.6% 1119 ± 23% interrupts.CPU0.RES:Rescheduling_interrupts
419.75 ± 31% +55.7% 653.50 ± 10% interrupts.CPU10.RES:Rescheduling_interrupts
310.25 ± 10% +88.0% 583.25 ± 13% interrupts.CPU12.RES:Rescheduling_interrupts
378.25 ± 27% +80.4% 682.25 ± 28% interrupts.CPU13.RES:Rescheduling_interrupts
358.50 ± 16% +74.1% 624.00 ± 8% interrupts.CPU16.RES:Rescheduling_interrupts
413.25 ± 33% +53.4% 634.00 ± 15% interrupts.CPU17.RES:Rescheduling_interrupts
405.75 ± 24% +70.4% 691.25 ± 12% interrupts.CPU18.RES:Rescheduling_interrupts
380.75 ± 17% +99.4% 759.25 ± 28% interrupts.CPU2.RES:Rescheduling_interrupts
360.75 ± 17% +66.5% 600.75 ± 16% interrupts.CPU20.RES:Rescheduling_interrupts
348.25 ± 16% +61.3% 561.75 ± 8% interrupts.CPU21.RES:Rescheduling_interrupts
386.50 ± 27% +61.0% 622.25 ± 10% interrupts.CPU22.RES:Rescheduling_interrupts
385.00 ± 17% +49.0% 573.50 ± 13% interrupts.CPU23.RES:Rescheduling_interrupts
388.25 ± 13% +37.6% 534.25 ± 9% interrupts.CPU24.RES:Rescheduling_interrupts
395.75 ± 19% +111.0% 835.00 ± 46% interrupts.CPU25.RES:Rescheduling_interrupts
329.75 ± 17% +65.4% 545.25 ± 8% interrupts.CPU26.RES:Rescheduling_interrupts
346.25 ± 8% +88.6% 653.00 ± 27% interrupts.CPU27.RES:Rescheduling_interrupts
358.50 ± 24% +49.0% 534.25 ± 10% interrupts.CPU29.RES:Rescheduling_interrupts
358.00 ± 23% +113.4% 764.00 ± 34% interrupts.CPU3.RES:Rescheduling_interrupts
328.00 ± 11% +66.2% 545.00 ± 7% interrupts.CPU30.RES:Rescheduling_interrupts
352.75 ± 23% +72.3% 607.75 ± 12% interrupts.CPU31.RES:Rescheduling_interrupts
414.00 ± 25% +32.7% 549.25 ± 16% interrupts.CPU32.RES:Rescheduling_interrupts
328.00 ± 18% +49.5% 490.25 ± 9% interrupts.CPU34.RES:Rescheduling_interrupts
363.75 ± 23% +65.4% 601.50 ± 17% interrupts.CPU35.RES:Rescheduling_interrupts
378.50 ± 19% +42.2% 538.25 ± 10% interrupts.CPU38.RES:Rescheduling_interrupts
357.75 ± 26% +52.5% 545.50 ± 12% interrupts.CPU39.RES:Rescheduling_interrupts
372.25 ± 16% +47.9% 550.50 ± 12% interrupts.CPU4.RES:Rescheduling_interrupts
369.00 ± 19% +53.7% 567.00 ± 17% interrupts.CPU40.RES:Rescheduling_interrupts
390.50 ± 25% +52.9% 597.25 ± 17% interrupts.CPU41.RES:Rescheduling_interrupts
341.00 ± 18% +62.0% 552.50 ± 13% interrupts.CPU42.RES:Rescheduling_interrupts
706.25 ± 7% +70.2% 1201 ± 8% interrupts.CPU43.RES:Rescheduling_interrupts
379.00 ± 18% +39.9% 530.25 ± 6% interrupts.CPU44.RES:Rescheduling_interrupts
378.00 ± 25% +62.3% 613.50 ± 9% interrupts.CPU45.RES:Rescheduling_interrupts
323.50 ± 21% +92.5% 622.75 ± 20% interrupts.CPU46.RES:Rescheduling_interrupts
331.00 ± 17% +75.5% 580.75 ± 14% interrupts.CPU47.RES:Rescheduling_interrupts
390.50 ± 20% +71.3% 669.00 ± 11% interrupts.CPU49.RES:Rescheduling_interrupts
349.50 ± 18% +76.3% 616.00 ± 17% interrupts.CPU5.RES:Rescheduling_interrupts
347.50 ± 23% +90.4% 661.50 ± 11% interrupts.CPU50.RES:Rescheduling_interrupts
388.75 ± 44% +75.9% 683.75 ± 23% interrupts.CPU51.RES:Rescheduling_interrupts
332.00 ± 25% +82.5% 605.75 ± 2% interrupts.CPU52.RES:Rescheduling_interrupts
369.75 ± 14% +63.8% 605.50 ± 15% interrupts.CPU54.RES:Rescheduling_interrupts
344.00 ± 22% +89.1% 650.50 ± 7% interrupts.CPU55.RES:Rescheduling_interrupts
346.25 ± 20% +76.6% 611.50 ± 15% interrupts.CPU57.RES:Rescheduling_interrupts
377.00 ± 18% +72.2% 649.25 ± 12% interrupts.CPU58.RES:Rescheduling_interrupts
370.50 ± 26% +91.4% 709.00 ± 16% interrupts.CPU6.RES:Rescheduling_interrupts
350.50 ± 21% +70.8% 598.75 ± 14% interrupts.CPU60.RES:Rescheduling_interrupts
377.50 ± 20% +52.9% 577.25 ± 4% interrupts.CPU61.RES:Rescheduling_interrupts
334.75 ± 13% +74.5% 584.00 ± 6% interrupts.CPU62.RES:Rescheduling_interrupts
369.75 ± 23% +65.4% 611.75 ± 8% interrupts.CPU63.RES:Rescheduling_interrupts
353.50 ± 15% +75.9% 621.75 ± 21% interrupts.CPU64.RES:Rescheduling_interrupts
302.00 ± 19% +94.4% 587.00 ± 11% interrupts.CPU65.RES:Rescheduling_interrupts
351.75 ± 20% +60.8% 565.75 ± 13% interrupts.CPU67.RES:Rescheduling_interrupts
347.25 ± 29% +78.8% 621.00 ± 18% interrupts.CPU68.RES:Rescheduling_interrupts
365.00 ± 15% +47.8% 539.50 ± 19% interrupts.CPU69.RES:Rescheduling_interrupts
325.25 ± 30% +169.9% 878.00 ± 46% interrupts.CPU7.RES:Rescheduling_interrupts
348.50 ± 18% +57.6% 549.25 ± 13% interrupts.CPU70.RES:Rescheduling_interrupts
349.75 ± 25% +70.0% 594.50 ± 13% interrupts.CPU71.RES:Rescheduling_interrupts
342.50 ± 16% +65.0% 565.25 ± 8% interrupts.CPU72.RES:Rescheduling_interrupts
377.75 ± 31% +47.7% 558.00 ± 17% interrupts.CPU73.RES:Rescheduling_interrupts
344.50 ± 27% +52.5% 525.50 ± 14% interrupts.CPU74.RES:Rescheduling_interrupts
325.50 ± 16% +59.5% 519.25 ± 13% interrupts.CPU75.RES:Rescheduling_interrupts
354.75 ± 17% +52.3% 540.25 ± 8% interrupts.CPU76.RES:Rescheduling_interrupts
327.25 ± 26% +52.8% 500.00 ± 10% interrupts.CPU77.RES:Rescheduling_interrupts
385.50 ± 25% +60.8% 620.00 ± 20% interrupts.CPU78.RES:Rescheduling_interrupts
344.75 ± 24% +56.2% 538.50 ± 9% interrupts.CPU79.RES:Rescheduling_interrupts
413.50 ± 24% +97.5% 816.75 ± 30% interrupts.CPU8.RES:Rescheduling_interrupts
306.00 ± 16% +66.3% 509.00 ± 6% interrupts.CPU80.RES:Rescheduling_interrupts
345.00 ± 18% +57.1% 542.00 ± 5% interrupts.CPU82.RES:Rescheduling_interrupts
334.00 ± 21% +49.7% 500.00 ± 8% interrupts.CPU83.RES:Rescheduling_interrupts
338.25 ± 22% +58.1% 534.75 ± 5% interrupts.CPU84.RES:Rescheduling_interrupts
324.75 ± 17% +79.3% 582.25 ± 20% interrupts.CPU85.RES:Rescheduling_interrupts
341.50 ± 21% +61.1% 550.00 ± 14% interrupts.CPU87.RES:Rescheduling_interrupts
405.50 ± 22% +64.2% 666.00 ± 20% interrupts.CPU9.RES:Rescheduling_interrupts
33672 ± 5% +60.6% 54067 ± 3% interrupts.RES:Rescheduling_interrupts
aim7.jobs-per-min
[ASCII run chart condensed] Bisect-good samples and most bisect-bad samples hold near 1.2e6 jobs/min; a trailing group of bisect-bad samples drops to roughly 6e5.
aim7.time.elapsed_time
aim7.time.elapsed_time.max
[ASCII run charts condensed] Bisect-good samples sit near 40 s; the same trailing bisect-bad samples roughly double to about 80 s.
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[selinux] 66f8e2f03c: RIP:sidtab_hash_stats
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 66f8e2f03c02e812002f8e9e465681cc62edda5b ("selinux: sidtab reverse lookup hash table")
https://git.kernel.org/cgit/linux/kernel/git/pcmoore/selinux.git next
in testcase: leaking_addresses
with the following parameters:
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused the changes below (please refer to the attached dmesg/kmsg for the full log/backtrace):
+------------------------------------------------------------------------+----------+------------+
| | v5.5-rc1 | 66f8e2f03c |
+------------------------------------------------------------------------+----------+------------+
| boot_successes | 5431 | 0 |
| boot_failures | 91 | 3 |
| INFO:rcu_sched_self-detected_stall_on_CPU | 5 | |
| RIP:__memcpy | 5 | |
| BUG:kernel_hang_in_boot_stage | 9 | |
| Assertion_failed | 9 | |
| WARNING:at_fs/xfs/xfs_message.c:#assfail[xfs] | 9 | |
| RIP:assfail[xfs] | 9 | |
| INFO:rcu_sched_detected_stalls_on_CPUs/tasks | 8 | |
| RIP:clear_page_rep | 1 | |
| WARNING:at_fs/iomap/direct-io.c:#iomap_dio_actor | 8 | |
| RIP:iomap_dio_actor | 8 | |
| WARNING:at_net/sched/sch_generic.c:#dev_watchdog | 18 | |
| RIP:dev_watchdog | 18 | |
| RIP:native_safe_halt | 13 | |
| RIP:__wake_up_common | 1 | |
| RIP:esb_timer_keepalive | 1 | |
| WARNING:at_kernel/trace/trace_hwlat.c:#start_kthread | 15 | |
| RIP:start_kthread | 15 | |
| WARNING:at_arch/x86/mm/ioremap.c:#__ioremap_caller | 8 | |
| RIP:__ioremap_caller | 8 | |
| BUG:soft_lockup-CPU##stuck_for#s![inetd:#] | 1 | |
| RIP:new_slab | 1 | |
| Kernel_panic-not_syncing:softlockup:hung_tasks | 5 | |
| RIP:copy_user_generic_string | 2 | |
| BUG:soft_lockup-CPU##stuck_for#s![(md-udevd):#] | 1 | |
| RIP:smp_call_function_single | 1 | |
| BUG:kernel_NULL_pointer_dereference,address | 1 | 3 |
| Oops:#[##] | 1 | 3 |
| RIP:iter_file_splice_write | 1 | |
| Kernel_panic-not_syncing:Fatal_exception | 2 | 3 |
| RIP:do_syscall_64 | 1 | |
| BUG:soft_lockup-CPU##stuck_for#s![swapper:#] | 1 | |
| RIP:io_serial_in | 2 | |
| WARNING:at_net/mac80211/tx.c:#__ieee80211_csa_update_counter[mac80211] | 1 | |
| RIP:__ieee80211_csa_update_counter[mac80211] | 1 | |
| RIP:console_unlock | 1 | |
| kernel_BUG_at_fs/buffer.c | 1 | |
| invalid_opcode:#[##] | 1 | |
| RIP:__block_commit_write | 1 | |
| WARNING:at_kernel/trace/ring_buffer.c:#rb_set_head_page | 1 | |
| RIP:rb_set_head_page | 1 | |
| BUG:soft_lockup-CPU##stuck_for#s![sed:#] | 2 | |
| RIP:kvm_async_pf_task_wait | 1 | |
| BUG:soft_lockup-CPU##stuck_for#s![ld:#] | 1 | |
| RIP:simple_write_begin | 1 | |
| RIP:__do_softirq | 1 | |
| RIP:__memset | 1 | |
| RIP:_raw_spin_unlock_irqrestore | 1 | |
| IP-Config:Auto-configuration_of_network_failed | 9 | |
| BUG:kernel_reboot-without-warning_in_test_stage | 1 | |
| RIP:sidtab_hash_stats | 0 | 3 |
+------------------------------------------------------------------------+----------+------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <lkp@intel.com>
[ 52.068624] BUG: kernel NULL pointer dereference, address: 0000000000000c08
[ 52.070367] #PF: supervisor read access in kernel mode
[ 52.071799] #PF: error_code(0x0000) - not-present page
[ 52.073217] PGD 0 P4D 0
[ 52.074363] Oops: 0000 [#1] SMP PTI
[ 52.075587] CPU: 1 PID: 597 Comm: perl Not tainted 5.5.0-rc1-00001-g66f8e2f03c02e #1
[ 52.077279] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 52.079076] RIP: 0010:sidtab_hash_stats+0x25/0xb0
[ 52.080445] Code: 00 00 00 00 00 66 66 66 66 90 49 89 f3 65 ff 05 e9 be 41 5b 45 31 d2 48 81 c7 08 0c 00 00 45 31 c9 31 c9 45 31 c0 31 d2 31 f6 <48> 8b 07 48 85 c0 75 17 eb 3f 48 8b 40 58 41 39 d1 41 89 f2 44 0f
[ 52.084258] RSP: 0018:ffffb84c00907e80 EFLAGS: 00010246
[ 52.085728] RAX: ffffffffa694b400 RBX: ffffffffa6946700 RCX: 0000000000000000
[ 52.087428] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000c08
[ 52.089113] RBP: ffffa0ed62d9b000 R08: 0000000000000000 R09: 0000000000000000
[ 52.090804] R10: 0000000000000000 R11: ffffa0ed62d9b000 R12: 0000560dfe8373a0
[ 52.092483] R13: 0000000000002000 R14: ffffb84c00907f08 R15: 0000000000000000
[ 52.094176] FS: 00007fb7731af400(0000) GS:ffffa0ed7fd00000(0000) knlGS:0000000000000000
[ 52.095990] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 52.097527] CR2: 0000000000000c08 CR3: 000000022265c000 CR4: 00000000000406e0
[ 52.099227] Call Trace:
[ 52.100425] security_sidtab_hash_stats+0x2c/0x50
[ 52.102772] sel_read_sidtab_hash_stats+0x48/0x90
[ 52.105338] vfs_read+0x9b/0x160
[ 52.107601] ksys_read+0xa1/0xe0
[ 52.109829] do_syscall_64+0x5b/0x1f0
[ 52.112118] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 52.114642] RIP: 0033:0x7fb77287f1f0
[ 52.116843] Code: 73 01 c3 48 8b 0d b8 7d 20 00 f7 d8 64 89 01 48 83 c8 ff c3 66 0f 1f 44 00 00 83 3d d9 c1 20 00 00 75 10 b8 00 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 31 c3 48 83 ec 08 e8 4e fc ff ff 48 89 04 24
[ 52.123645] RSP: 002b:00007ffd023f4df8 EFLAGS: 00000246 ORIG_RAX: 0000000000000000
[ 52.126827] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007fb77287f1f0
[ 52.129882] RDX: 0000000000002000 RSI: 0000560dfe8373a0 RDI: 0000000000000005
[ 52.131767] RBP: 0000000000002000 R08: 0000560dfe82dc40 R09: 0000000000002010
[ 52.133380] R10: 0000000000000030 R11: 0000000000000246 R12: 0000560dfe8373a0
[ 52.135011] R13: 0000560dfe00a010 R14: 0000000000000005 R15: 0000560dfe8302d0
[ 52.136614] Modules linked in: binfmt_misc sr_mod cdrom sg ata_generic pata_acpi bochs_drm drm_vram_helper drm_ttm_helper ttm intel_rapl_msr drm_kms_helper ppdev syscopyarea intel_rapl_common crct10dif_pclmul crc32_pclmul sysfillrect crc32c_intel snd_pcm sysimgblt fb_sys_fops ghash_clmulni_intel ata_piix snd_timer drm libata joydev aesni_intel crypto_simd snd cryptd glue_helper soundcore serio_raw pcspkr i2c_piix4 parport_pc floppy parport ip_tables
[ 52.144300] CR2: 0000000000000c08
[ 52.145489] ---[ end trace 5a55d87c8b28fabe ]---
To reproduce:
# build kernel
cd linux
cp config-5.5.0-rc1-00001-g66f8e2f03c02e .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
lkp
[xfs] c8d017feec: BUG:kernel_NULL_pointer_dereference,address
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: c8d017feec4bd540f9fa09c4d848a8e54fa29f48 ("xfs: Use filemap_huge_fault")
git://git.infradead.org/users/willy/linux-dax.git xarray-pagecache
in testcase: ltp
with the following parameters:
disk: 1HDD
fs: xfs
test: syscalls_part2
test-description: The LTP testsuite contains a collection of tools for testing the Linux kernel and related features.
test-url: http://linux-test-project.github.io/
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused the changes below (please refer to the attached dmesg/kmsg for the full log/backtrace):
+----------------------------------------------------+------------+------------+
| | 208aa7c007 | c8d017feec |
+----------------------------------------------------+------------+------------+
| boot_successes | 7 | 1 |
| boot_failures | 4 | 9 |
| WARNING:at_fs/iomap/buffered-io.c:#iomap_readpages | 4 | |
| RIP:iomap_readpages | 4 | |
| BUG:kernel_NULL_pointer_dereference,address | 0 | 8 |
| Oops:#[##] | 0 | 8 |
| RIP:iomap_page_mkwrite | 0 | 8 |
| RIP:__clear_user | 0 | 8 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 8 |
| invoked_oom-killer:gfp_mask=0x | 0 | 1 |
| Mem-Info | 0 | 1 |
+----------------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <lkp@intel.com>
[ 617.046165] LTP: starting fanotify12
[ 617.049901] fanotify10.c:398: PASS: group 2 (prio 1) with FAN_MARK_INODE and FAN_MARK_MOUNT ignore mask got no event
[ 617.049903]
[ 617.052628] BUG: kernel NULL pointer dereference, address: 0000000000000008
[ 617.054064] #PF: supervisor read access in kernel mode
[ 617.055322] #PF: error_code(0x0000) - not-present page
[ 617.055665] fanotify10.c:398: PASS: group 0 (prio 2) with FAN_MARK_INODE and FAN_MARK_MOUNT ignore mask got no event
[ 617.055668]
[ 617.056563] PGD 0 P4D 0
[ 617.056565] Oops: 0000 [#1] SMP PTI
[ 617.056567] CPU: 1 PID: 5487 Comm: fanotify_child Not tainted 5.5.0-rc2-00368-gc8d017feec4bd #1
[ 617.056568] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 617.056572] RIP: 0010:iomap_page_mkwrite+0x40/0x190
[ 617.061217] fanotify10.c:398: PASS: group 1 (prio 2) with FAN_MARK_INODE and FAN_MARK_MOUNT ignore mask got no event
[ 617.061219]
[ 617.061354] Code: 02 00 00 48 83 ec 08 48 8b 07 48 8b 5f 48 48 c7 c7 b8 aa f3 b9 48 8b 80 a0 00 00 00 4c 8b 60 20 e8 55 75 d7 ff e8 c0 ad 75 00 <48> 8b 53 08 48 8d 42 ff 83 e2 01 48 0f 44 c3 f0 48 0f ba 28 00 0f
[ 617.061355] RSP: 0018:ffffb504c0b97aa8 EFLAGS: 00010246
[ 617.061356] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
[ 617.061357] RDX: 0000000000000000 RSI: 0000000000000218 RDI: ffffffffb9f3aab8
[ 617.061357] RBP: ffffb504c0b97b40 R08: ffff894b7fde4d00 R09: fffff8d0861c4b80
[ 617.061358] R10: fffff8d0861c4b40 R11: 0000000000000000 R12: ffff894ad5911d58
[ 617.061359] R13: ffffffffc0842b20 R14: 0000000000000001 R15: ffff894ad6799af0
[ 617.061360] FS: 0000000000000000(0000) GS:ffff894b7fd00000(0000) knlGS:0000000000000000
[ 617.061361] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 617.065894] fanotify10.c:398: PASS: group 2 (prio 2) with FAN_MARK_INODE and FAN_MARK_MOUNT ignore mask got no event
[ 617.065896]
[ 617.067488] CR2: 0000000000000008 CR3: 00000001870b8000 CR4: 00000000000406e0
[ 617.067492] Call Trace:
[ 617.067511] ? down_read+0x21/0xb0
[ 617.070887] fanotify10.c:331: INFO: Test #5: ignore exec inode events created on a specific mount point
[ 617.070889]
[ 617.071855] __xfs_filemap_fault+0x1da/0x230 [xfs]
[ 617.074822] fanotify10.c:292: PASS: group 0 got event: mask 1020 pid=5466 fd=16
[ 617.074825]
[ 617.075955] ? native_set_p4d+0x1b/0x30
[ 617.079199] fanotify10.c:292: PASS: group 1 got event: mask 1020 pid=5466 fd=16
[ 617.079201]
[ 617.080302] __handle_mm_fault+0x1d4/0x6a0
[ 617.080309] handle_mm_fault+0xdd/0x210
[ 617.083655] fanotify10.c:292: PASS: group 2 got event: mask 1020 pid=5466 fd=16
[ 617.083657]
[ 617.084912] __do_page_fault+0x2f1/0x520
[ 617.084919] do_page_fault+0x30/0x120
[ 617.087657] fanotify10.c:292: PASS: group 0 got event: mask 1000 pid=5466 fd=16
[ 617.087659]
[ 617.088319] async_page_fault+0x3e/0x50
[ 617.088328] RIP: 0010:__clear_user+0x36/0x60
[ 617.091223] fanotify10.c:292: PASS: group 1 got event: mask 1000 pid=5466 fd=16
[ 617.091231]
[ 617.092012] Code: f3 48 c7 c7 c0 92 00 ba be 14 00 00 00 e8 72 67 7a ff 66 66 90 48 89 d8 48 c1 eb 03 48 89 ef 83 e0 07 48 89 d9 48 85 c9 74 0f <48> c7 07 00 00 00 00 48 83 c7 08 ff c9 75 f1 48 89 c1 85 c9 74 0a
[ 617.092013] RSP: 0018:ffffb504c0b97d58 EFLAGS: 00010202
[ 617.092014] RAX: 0000000000000000 RBX: 00000000000001fb RCX: 00000000000001fb
[ 617.092015] RDX: 0000000000000000 RSI: 0000000000000014 RDI: 000055597811c028
[ 617.092015] RBP: 000055597811c028 R08: ffff894ad6799af0 R09: ffff894ad6799120
[ 617.092016] R10: 000055597811be08 R11: 00000000001ff000 R12: ffff894aadd6b600
[ 617.092017] R13: 000055597811c030 R14: ffff894ad62f6e00 R15: 000055597811c028
[ 617.092026] ? __clear_user+0x1e/0x60
[ 617.096022] fanotify10.c:292: PASS: group 2 got event: mask 1000 pid=5466 fd=16
[ 617.096024]
[ 617.096962] load_elf_binary+0x14bc/0x1644
[ 617.096965] search_binary_handler+0x91/0x1d0
[ 617.096972] __do_execve_file+0x73d/0x910
[ 617.100281] fanotify10.c:292: PASS: group 0 got event: mask 1000 pid=5466 fd=16
[ 617.100283]
[ 617.100510] __x64_sys_execve+0x34/0x40
[ 617.100513] do_syscall_64+0x5b/0x1f0
[ 617.100520] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 617.103466] fanotify10.c:292: PASS: group 1 got event: mask 1000 pid=5466 fd=16
[ 617.103468]
[ 617.103979] RIP: 0033:0x7f49be0d8647
[ 617.106887] fanotify10.c:292: PASS: group 2 got event: mask 1000 pid=5466 fd=16
[ 617.106889]
[ 617.107598] Code: Bad RIP value.
[ 617.107599] RSP: 002b:00007ffc8a756608 EFLAGS: 00000202 ORIG_RAX: 000000000000003b
[ 617.107600] RAX: ffffffffffffffda RBX: 00007ffc8a756630 RCX: 00007f49be0d8647
[ 617.107601] RDX: 00007ffc8a756908 RSI: 00007ffc8a756610 RDI: 0000564b8d0b9ee4
[ 617.107602] RBP: 00007ffc8a756690 R08: 0000000000000018 R09: 0000000000000001
[ 617.107602] R10: 00007ffc8a7566a0 R11: 0000000000000202 R12: 0000000000000010
[ 617.107603] R13: 0000000000000000 R14: 0000564b8d2c4540 R15: 0000564b8d0b9fc8
[ 617.110965] fanotify10.c:331: INFO: Test #6: don't ignore inode events created on another mount point
[ 617.110967]
[ 617.112029] Modules linked in: overlay btrfs blake2b_generic xor zstd_decompress zstd_compress raid6_pq ext4 mbcache jbd2 tun loop xfs libcrc32c dm_mod intel_rapl_msr intel_rapl_common crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel sr_mod cdrom sg ppdev ata_generic pata_acpi snd_pcm bochs_drm drm_vram_helper drm_ttm_helper ttm snd_timer snd aesni_intel crypto_simd drm_kms_helper cryptd glue_helper soundcore syscopyarea pcspkr sysfillrect joydev sysimgblt fb_sys_fops serio_raw ata_piix drm libata parport_pc i2c_piix4 parport floppy ip_tables
[ 617.114638] fanotify10.c:292: PASS: group 0 got event: mask 20 pid=5467 fd=16
[ 617.114640]
[ 617.116020] CR2: 0000000000000008
[ 617.118972] fanotify10.c:292: PASS: group 1 got event: mask 20 pid=5467 fd=16
[ 617.118974]
[ 617.120016] ---[ end trace 03ece483a70242a3 ]---
To reproduce:
# build kernel
cd linux
cp config-5.5.0-rc2-00368-gc8d017feec4bd .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage modules
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 INSTALL_MOD_PATH=<mod-install-dir> modules_install
cd <mod-install-dir>
find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> -m modules.cgz job-script # job-script is attached in this email
Thanks,
lkp