Re: [LKP] [rcu] kernel BUG at include/linux/pagemap.h:149!
by Frederic Weisbecker
On Fri, Sep 11, 2015 at 10:19:47AM +0800, Boqun Feng wrote:
> Subject: [PATCH 01/27] rcu: Don't disable preemption for Tiny and Tree RCU
> readers
>
> Because preempt_disable() maps to barrier() for non-debug builds,
> it forces the compiler to spill and reload registers. Because Tree
> RCU and Tiny RCU now only appear in CONFIG_PREEMPT=n builds, these
> barrier() instances generate needless extra code for each instance of
> rcu_read_lock() and rcu_read_unlock(). This extra code slows down Tree
> RCU and bloats Tiny RCU.
>
> This commit therefore removes the preempt_disable() and preempt_enable()
> from the non-preemptible implementations of __rcu_read_lock() and
> __rcu_read_unlock(), respectively.
>
> For debug purposes, preempt_disable() and preempt_enable() are still
> kept if CONFIG_PREEMPT_COUNT=y, which makes the detection of sleeping
> inside atomic sections still work in non-preemptible kernels.
>
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> ---
> include/linux/rcupdate.h | 6 ++++--
> include/linux/rcutiny.h | 1 +
> kernel/rcu/tree.c | 9 +++++++++
> 3 files changed, 14 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> index d63bb77..6c3cece 100644
> --- a/include/linux/rcupdate.h
> +++ b/include/linux/rcupdate.h
> @@ -297,12 +297,14 @@ void synchronize_rcu(void);
>
> static inline void __rcu_read_lock(void)
> {
> - preempt_disable();
> + if (IS_ENABLED(CONFIG_PREEMPT_COUNT))
> + preempt_disable();
preempt_disable() is a no-op when !CONFIG_PREEMPT_COUNT, right?
Or rather it's a barrier(), which is anyway implied by rcu_read_lock().
So perhaps we can get rid of the IS_ENABLED() check?
Test monitoring on custom github repo
by Thomas Garnier
Hi,
I am working on KASLR (PIE for x86_64). I previously used Kees' (CCed)
branches for lkp bot testing, but someone told me I could ask you to add a
custom github path so you can monitor all branches on it.
I pushed my changes to: https://github.com/thgarnie/linux (kasrl_pie_v2
right now)
Can you add it? Anything I need to do?
Thanks,
--
Thomas
[lkp-robot] [sched/fair] d519329f72: unixbench.score -9.9% regression
by kernel test robot
Greeting,
FYI, we noticed a -9.9% regression of unixbench.score due to commit:
commit: d519329f72a6f36bc4f2b85452640cfe583b4f81 ("sched/fair: Update util_est only on util_avg updates")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
in testcase: unixbench
on test machine: 8 threads Intel(R) Core(TM) i7 CPU 870 @ 2.93GHz with 6G memory
with following parameters:
runtime: 300s
nr_task: 100%
test: execl
test-description: UnixBench is the original BYTE UNIX benchmark suite; it aims to test the performance of Unix-like systems.
test-url: https://github.com/kdlucas/byte-unixbench
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase:
gcc-7/x86_64-rhel-7.2/100%/debian-x86_64-2016-08-31.cgz/300s/nhm-white/execl/unixbench
commit:
a07630b8b2 ("sched/cpufreq/schedutil: Use util_est for OPP selection")
d519329f72 ("sched/fair: Update util_est only on util_avg updates")
a07630b8b2c16f82 d519329f72a6f36bc4f2b85452
---------------- --------------------------
%stddev %change %stddev
\ | \
4626 -9.9% 4167 unixbench.score
3495362 ± 4% +70.4% 5957769 ± 2% unixbench.time.involuntary_context_switches
2.866e+08 -11.6% 2.534e+08 unixbench.time.minor_page_faults
666.75 -9.7% 602.25 unixbench.time.percent_of_cpu_this_job_got
1830 -9.7% 1653 unixbench.time.system_time
395.13 -5.2% 374.58 unixbench.time.user_time
8611715 -58.9% 3537314 ± 3% unixbench.time.voluntary_context_switches
6639375 -9.1% 6033775 unixbench.workload
26025 +3849.3% 1027825 interrupts.CAL:Function_call_interrupts
4856 ± 14% -27.4% 3523 ± 11% slabinfo.filp.active_objs
3534356 -8.8% 3223918 softirqs.RCU
77929 -11.2% 69172 vmstat.system.cs
19489 ± 2% +7.5% 20956 vmstat.system.in
9.05 ± 9% +11.0% 10.05 ± 8% boot-time.dhcp
131.63 ± 4% +8.6% 142.89 ± 7% boot-time.idle
9.07 ± 9% +11.0% 10.07 ± 8% boot-time.kernel_boot
76288 ± 3% -12.8% 66560 ± 3% meminfo.DirectMap4k
16606 -13.1% 14433 meminfo.Inactive
16515 -13.2% 14341 meminfo.Inactive(anon)
11.87 ± 5% +7.8 19.63 ± 4% mpstat.cpu.idle%
0.07 ± 35% -0.0 0.04 ± 17% mpstat.cpu.soft%
68.91 -6.1 62.82 mpstat.cpu.sys%
29291570 +325.4% 1.246e+08 cpuidle.C1.time
8629105 -36.1% 5513780 cpuidle.C1.usage
668733 ± 12% +11215.3% 75668902 ± 2% cpuidle.C1E.time
9763 ± 12% +16572.7% 1627882 ± 2% cpuidle.C1E.usage
1.834e+08 ± 9% +23.1% 2.258e+08 ± 11% cpuidle.C3.time
222674 ± 8% +133.4% 519690 ± 6% cpuidle.C3.usage
4129 -13.3% 3581 proc-vmstat.nr_inactive_anon
4129 -13.3% 3581 proc-vmstat.nr_zone_inactive_anon
2.333e+08 -12.2% 2.049e+08 proc-vmstat.numa_hit
2.333e+08 -12.2% 2.049e+08 proc-vmstat.numa_local
6625 -10.9% 5905 proc-vmstat.pgactivate
2.392e+08 -12.1% 2.102e+08 proc-vmstat.pgalloc_normal
2.936e+08 -12.6% 2.566e+08 proc-vmstat.pgfault
2.392e+08 -12.1% 2.102e+08 proc-vmstat.pgfree
2850 -15.3% 2413 turbostat.Avg_MHz
8629013 -36.1% 5513569 turbostat.C1
1.09 +3.5 4.61 turbostat.C1%
9751 ± 12% +16593.0% 1627864 ± 2% turbostat.C1E
0.03 ± 19% +2.8 2.80 turbostat.C1E%
222574 ± 8% +133.4% 519558 ± 6% turbostat.C3
6.84 ± 8% +1.5 8.34 ± 10% turbostat.C3%
2.82 ± 7% +250.3% 9.87 ± 2% turbostat.CPU%c1
6552773 ± 3% +23.8% 8111699 ± 2% turbostat.IRQ
2.02 ± 11% +28.3% 2.58 ± 9% turbostat.Pkg%pc3
7.635e+11 -12.5% 6.682e+11 perf-stat.branch-instructions
3.881e+10 -12.9% 3.381e+10 perf-stat.branch-misses
2.09 -0.3 1.77 ± 4% perf-stat.cache-miss-rate%
1.551e+09 -15.1% 1.316e+09 ± 4% perf-stat.cache-misses
26177920 -10.5% 23428188 perf-stat.context-switches
1.99 -2.8% 1.93 perf-stat.cpi
7.553e+12 -14.7% 6.446e+12 perf-stat.cpu-cycles
522523 ± 2% +628.3% 3805664 perf-stat.cpu-migrations
2.425e+10 ± 4% -14.3% 2.078e+10 perf-stat.dTLB-load-misses
1.487e+12 -11.3% 1.319e+12 perf-stat.dTLB-loads
1.156e+10 ± 3% -7.7% 1.066e+10 perf-stat.dTLB-store-misses
6.657e+11 -11.1% 5.915e+11 perf-stat.dTLB-stores
0.15 +0.0 0.15 perf-stat.iTLB-load-miss-rate%
5.807e+09 -11.0% 5.166e+09 perf-stat.iTLB-load-misses
3.799e+12 -12.1% 3.34e+12 perf-stat.iTLB-loads
3.803e+12 -12.2% 3.338e+12 perf-stat.instructions
654.99 -1.4% 646.07 perf-stat.instructions-per-iTLB-miss
0.50 +2.8% 0.52 perf-stat.ipc
2.754e+08 -11.6% 2.435e+08 perf-stat.minor-faults
1.198e+08 ± 7% +73.1% 2.074e+08 ± 4% perf-stat.node-stores
2.754e+08 -11.6% 2.435e+08 perf-stat.page-faults
572928 -3.4% 553258 perf-stat.path-length
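The %change and %stddev columns in these reports are plain relative statistics: %change is the head commit's mean sample relative to the base commit's (e.g. the unixbench.score row: (4167 - 4626) / 4626 ≈ -9.9%), and the "± N%" annotations are the sample stddev as a fraction of the mean. A quick sketch — `pct_change` and `pct_stddev` are hypothetical helpers for illustration, not part of lkp-tests:

```python
import statistics

def pct_change(base, head):
    """Relative change of the head commit's mean vs. the base commit's,
    as the %change column reports it."""
    return (head - base) / base * 100.0

def pct_stddev(samples):
    """Sample stddev as a percentage of the mean (the '± N%' annotations)."""
    return statistics.stdev(samples) / statistics.mean(samples) * 100.0

# unixbench.score row from the table above: base 4626 -> head 4167
print(f"{pct_change(4626, 4167):.1f}%")  # -9.9%
```

The same arithmetic reproduces the other rows, e.g. the blogbench.write_score regression further below: (2854 - 3256) / 3256 ≈ -12.3%.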
unixbench.score
4800 +-+------------------------------------------------------------------+
|+ + + |
4700 +-+ + + :+ +. :+ + + |
| + + + +. : + + + + + + + .+++++ .+ +|
4600 +-+ +++ :+++ + ++: : :+ +++ ++.++++ + ++++ ++ |
| + + + ++ ++ + |
4500 +-+ |
| |
4400 +-+ |
| |
4300 +-+ |
O |
4200 +-O O O OOOO OO OOO OOOO OOOO O O |
|O OO OOOOO O O OO O O O O O OO |
4100 +-+------------------------------------------------------------------+
unixbench.workload
9e+06 +-+---------------------------------------------------------------+
| : |
8.5e+06 +-+ : |
| : |
8e+06 +-+ : |
| :: |
7.5e+06 +-+ : : + |
| +: : : + |
7e+06 +-+ + + :: : :: + + : + + + + + |
|:+ + + : :: : : :: : :+ : : ::+ :+ .+ :+ ++ ++ + ++ ::++|
6.5e+06 +-O+ +++ ++++ +++ + ++ +.+ + ++ + + + + + + + +.+++ + |
O O O O O O O |
6e+06 +O+OOO O OOOOOOOO OOOO OO OOOOOOOOO O O O OO |
| O |
5.5e+06 +-+---------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Xiaolong
[lkp-robot] [mm] 9092c71bb7: blogbench.write_score -12.3% regression
by kernel test robot
Greeting,
FYI, we noticed a -12.3% regression of blogbench.write_score and a +9.6% improvement
of blogbench.read_score due to commit:
commit: 9092c71bb724dba2ecba849eae69e5c9d39bd3d2 ("mm: use sc->priority for slab shrink targets")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: blogbench
on test machine: 16 threads Intel(R) Xeon(R) CPU D-1541 @ 2.10GHz with 8G memory
with following parameters:
disk: 1SSD
fs: btrfs
cpufreq_governor: performance
test-description: Blogbench is a portable filesystem benchmark that tries to reproduce the load of a real-world busy file server.
test-url: https://www.pureftpd.org/project/blogbench
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/rootfs/tbox_group/testcase:
gcc-7/performance/1SSD/btrfs/x86_64-rhel-7.2/debian-x86_64-2016-08-31.cgz/lkp-bdw-de1/blogbench
commit:
fcb2b0c577 ("mm: show total hugetlb memory consumption in /proc/meminfo")
9092c71bb7 ("mm: use sc->priority for slab shrink targets")
fcb2b0c577f145c7 9092c71bb724dba2ecba849eae
---------------- --------------------------
%stddev %change %stddev
\ | \
3256 -12.3% 2854 blogbench.write_score
1235237 ± 2% +9.6% 1354163 blogbench.read_score
28050912 -10.1% 25212230 blogbench.time.file_system_outputs
6481995 ± 3% +25.0% 8105320 ± 2% blogbench.time.involuntary_context_switches
906.00 +13.7% 1030 blogbench.time.percent_of_cpu_this_job_got
2552 +14.0% 2908 blogbench.time.system_time
173.80 +8.4% 188.32 blogbench.time.user_time
19353936 +3.6% 20045728 blogbench.time.voluntary_context_switches
8719514 +13.0% 9850451 softirqs.RCU
2.97 ± 5% -0.7 2.30 ± 3% mpstat.cpu.idle%
24.92 -6.5 18.46 mpstat.cpu.iowait%
0.65 ± 2% +0.1 0.75 mpstat.cpu.soft%
67.76 +6.7 74.45 mpstat.cpu.sys%
50206 -10.7% 44858 vmstat.io.bo
49.25 -9.1% 44.75 ± 2% vmstat.procs.b
224125 -1.8% 220135 vmstat.system.cs
48903 +10.7% 54134 vmstat.system.in
3460654 +10.8% 3834883 meminfo.Active
3380666 +11.0% 3752872 meminfo.Active(file)
1853849 -17.4% 1530415 meminfo.Inactive
1836507 -17.6% 1513054 meminfo.Inactive(file)
551311 -10.3% 494265 meminfo.SReclaimable
196525 -12.6% 171775 meminfo.SUnreclaim
747837 -10.9% 666040 meminfo.Slab
8.904e+08 -24.9% 6.683e+08 cpuidle.C1.time
22971020 -12.8% 20035820 cpuidle.C1.usage
2.518e+08 ± 3% -31.7% 1.72e+08 cpuidle.C1E.time
821393 ± 2% -33.3% 548003 cpuidle.C1E.usage
75460078 ± 2% -23.3% 57903768 ± 2% cpuidle.C3.time
136506 ± 3% -25.3% 101956 ± 3% cpuidle.C3.usage
56892498 ± 4% -23.3% 43608427 ± 4% cpuidle.C6.time
85034 ± 3% -33.9% 56184 ± 3% cpuidle.C6.usage
24373567 -24.5% 18395538 cpuidle.POLL.time
449033 ± 2% -10.8% 400493 cpuidle.POLL.usage
1832 +9.3% 2002 turbostat.Avg_MHz
22967645 -12.8% 20032521 turbostat.C1
18.43 -4.6 13.85 turbostat.C1%
821328 ± 2% -33.3% 547948 turbostat.C1E
5.21 ± 3% -1.6 3.56 turbostat.C1E%
136377 ± 3% -25.3% 101823 ± 3% turbostat.C3
1.56 ± 2% -0.4 1.20 ± 3% turbostat.C3%
84404 ± 3% -34.0% 55743 ± 3% turbostat.C6
1.17 ± 4% -0.3 0.90 ± 4% turbostat.C6%
25.93 -26.2% 19.14 turbostat.CPU%c1
0.12 ± 3% -19.1% 0.10 ± 9% turbostat.CPU%c3
14813304 +10.7% 16398388 turbostat.IRQ
38.19 +3.6% 39.56 turbostat.PkgWatt
4.51 +4.5% 4.71 turbostat.RAMWatt
8111200 ± 13% -63.2% 2986242 ± 48% proc-vmstat.compact_daemon_free_scanned
1026719 ± 30% -81.2% 193485 ± 30% proc-vmstat.compact_daemon_migrate_scanned
2444 ± 21% -63.3% 897.50 ± 20% proc-vmstat.compact_daemon_wake
8111200 ± 13% -63.2% 2986242 ± 48% proc-vmstat.compact_free_scanned
755491 ± 32% -81.6% 138856 ± 28% proc-vmstat.compact_isolated
1026719 ± 30% -81.2% 193485 ± 30% proc-vmstat.compact_migrate_scanned
137.75 ± 34% +2.8e+06% 3801062 ± 2% proc-vmstat.kswapd_inodesteal
6749 ± 20% -53.6% 3131 ± 12% proc-vmstat.kswapd_low_wmark_hit_quickly
844991 +11.2% 939487 proc-vmstat.nr_active_file
3900576 -10.5% 3490567 proc-vmstat.nr_dirtied
459789 -17.8% 377930 proc-vmstat.nr_inactive_file
137947 -10.3% 123720 proc-vmstat.nr_slab_reclaimable
49165 -12.6% 42989 proc-vmstat.nr_slab_unreclaimable
1382 ± 11% -26.2% 1020 ± 20% proc-vmstat.nr_writeback
3809266 -10.7% 3403350 proc-vmstat.nr_written
844489 +11.2% 938974 proc-vmstat.nr_zone_active_file
459855 -17.8% 378121 proc-vmstat.nr_zone_inactive_file
7055 ± 18% -52.0% 3389 ± 11% proc-vmstat.pageoutrun
33764911 ± 2% +21.3% 40946445 proc-vmstat.pgactivate
42044161 ± 2% +12.1% 47139065 proc-vmstat.pgdeactivate
92153 ± 20% -69.1% 28514 ± 24% proc-vmstat.pgmigrate_success
15212270 -10.7% 13591573 proc-vmstat.pgpgout
42053817 ± 2% +12.1% 47151755 proc-vmstat.pgrefill
11297 ±107% +1025.4% 127138 ± 21% proc-vmstat.pgscan_direct
19930162 -24.0% 15141439 proc-vmstat.pgscan_kswapd
19423629 -24.0% 14758807 proc-vmstat.pgsteal_kswapd
10868768 +184.8% 30950752 proc-vmstat.slabs_scanned
3361780 ± 3% -22.9% 2593327 ± 3% proc-vmstat.workingset_activate
4994722 ± 2% -43.2% 2835020 ± 2% proc-vmstat.workingset_refault
316427 -9.3% 286844 slabinfo.Acpi-Namespace.active_objs
3123 -9.4% 2829 slabinfo.Acpi-Namespace.active_slabs
318605 -9.4% 288623 slabinfo.Acpi-Namespace.num_objs
3123 -9.4% 2829 slabinfo.Acpi-Namespace.num_slabs
220514 -40.7% 130747 slabinfo.btrfs_delayed_node.active_objs
9751 -25.3% 7283 slabinfo.btrfs_delayed_node.active_slabs
263293 -25.3% 196669 slabinfo.btrfs_delayed_node.num_objs
9751 -25.3% 7283 slabinfo.btrfs_delayed_node.num_slabs
6383 ± 8% -12.0% 5615 ± 2% slabinfo.btrfs_delayed_ref_head.num_objs
9496 +15.5% 10969 slabinfo.btrfs_extent_buffer.active_objs
9980 +20.5% 12022 slabinfo.btrfs_extent_buffer.num_objs
260933 -10.7% 233136 slabinfo.btrfs_extent_map.active_objs
9392 -10.6% 8396 slabinfo.btrfs_extent_map.active_slabs
263009 -10.6% 235107 slabinfo.btrfs_extent_map.num_objs
9392 -10.6% 8396 slabinfo.btrfs_extent_map.num_slabs
271938 -10.3% 243802 slabinfo.btrfs_inode.active_objs
9804 -10.6% 8768 slabinfo.btrfs_inode.active_slabs
273856 -10.4% 245359 slabinfo.btrfs_inode.num_objs
9804 -10.6% 8768 slabinfo.btrfs_inode.num_slabs
7085 ± 5% -5.5% 6692 ± 2% slabinfo.btrfs_path.num_objs
311936 -16.4% 260797 slabinfo.dentry.active_objs
7803 -9.6% 7058 slabinfo.dentry.active_slabs
327759 -9.6% 296439 slabinfo.dentry.num_objs
7803 -9.6% 7058 slabinfo.dentry.num_slabs
2289 -23.3% 1755 ± 6% slabinfo.proc_inode_cache.active_objs
2292 -19.0% 1856 ± 6% slabinfo.proc_inode_cache.num_objs
261546 -12.3% 229485 slabinfo.radix_tree_node.active_objs
9404 -11.9% 8288 slabinfo.radix_tree_node.active_slabs
263347 -11.9% 232089 slabinfo.radix_tree_node.num_objs
9404 -11.9% 8288 slabinfo.radix_tree_node.num_slabs
1140424 ± 12% +40.2% 1598980 ± 14% sched_debug.cfs_rq:/.MIN_vruntime.max
790.55 +13.0% 893.20 ± 3% sched_debug.cfs_rq:/.exec_clock.stddev
1140425 ± 12% +40.2% 1598982 ± 14% sched_debug.cfs_rq:/.max_vruntime.max
0.83 ± 10% +21.5% 1.00 ± 8% sched_debug.cfs_rq:/.nr_running.avg
3.30 ± 99% +266.3% 12.09 ± 13% sched_debug.cfs_rq:/.removed.load_avg.avg
153.02 ± 97% +266.6% 560.96 ± 13% sched_debug.cfs_rq:/.removed.runnable_sum.avg
569.93 ±102% +173.2% 1556 ± 14% sched_debug.cfs_rq:/.removed.runnable_sum.stddev
1.42 ± 60% +501.5% 8.52 ± 34% sched_debug.cfs_rq:/.removed.util_avg.avg
19.88 ± 59% +288.9% 77.29 ± 16% sched_debug.cfs_rq:/.removed.util_avg.max
5.05 ± 58% +342.3% 22.32 ± 22% sched_debug.cfs_rq:/.removed.util_avg.stddev
791.44 ± 3% +47.7% 1168 ± 8% sched_debug.cfs_rq:/.util_avg.avg
1305 ± 6% +33.2% 1738 ± 5% sched_debug.cfs_rq:/.util_avg.max
450.25 ± 11% +66.2% 748.17 ± 14% sched_debug.cfs_rq:/.util_avg.min
220.82 ± 8% +21.1% 267.46 ± 5% sched_debug.cfs_rq:/.util_avg.stddev
363118 ± 11% -23.8% 276520 ± 11% sched_debug.cpu.avg_idle.avg
726003 ± 8% -30.8% 502313 ± 4% sched_debug.cpu.avg_idle.max
202629 ± 3% -32.2% 137429 ± 18% sched_debug.cpu.avg_idle.stddev
31.96 ± 28% +54.6% 49.42 ± 14% sched_debug.cpu.cpu_load[3].min
36.21 ± 25% +64.0% 59.38 ± 6% sched_debug.cpu.cpu_load[4].min
1007 ± 5% +20.7% 1216 ± 7% sched_debug.cpu.curr->pid.avg
4.50 ± 5% +14.8% 5.17 ± 5% sched_debug.cpu.nr_running.max
2476195 -11.8% 2185022 sched_debug.cpu.nr_switches.max
212888 -26.6% 156172 ± 3% sched_debug.cpu.nr_switches.stddev
3570 ± 2% -58.7% 1474 ± 2% sched_debug.cpu.nr_uninterruptible.max
-803.67 -28.7% -573.38 sched_debug.cpu.nr_uninterruptible.min
1004 ± 2% -50.4% 498.55 ± 3% sched_debug.cpu.nr_uninterruptible.stddev
2478809 -11.7% 2189310 sched_debug.cpu.sched_count.max
214130 -26.5% 157298 ± 3% sched_debug.cpu.sched_count.stddev
489430 ± 2% -16.6% 408309 ± 2% sched_debug.cpu.sched_goidle.avg
724333 ± 2% -28.2% 520263 ± 2% sched_debug.cpu.sched_goidle.max
457611 -18.1% 374746 ± 3% sched_debug.cpu.sched_goidle.min
62957 ± 2% -47.4% 33138 ± 3% sched_debug.cpu.sched_goidle.stddev
676053 ± 2% -15.4% 571816 ± 2% sched_debug.cpu.ttwu_local.max
42669 ± 3% +22.3% 52198 sched_debug.cpu.ttwu_local.min
151873 ± 2% -18.3% 124118 ± 2% sched_debug.cpu.ttwu_local.stddev
blogbench.write_score
3300 +-+------------------------------------------------------------------+
3250 +-+ +. .+ +. .+ : : : +. .+ .+.+.+. .|
|: +. .+ +.+.+.+ + + + : +. : : +. + +.+ + + |
3200 +-+ + +.+ + : + + : + + |
3150 +-+.+ ++ +.+ |
3100 +-+ |
3050 +-+ |
| |
3000 +-+ |
2950 +-+ O O |
2900 +-O O O O |
2850 +-+ O O O O O O O OO O O O |
| O O O O |
2800 O-+ O O |
2750 +-+------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Xiaolong
[lkp-robot] [mm, memcontrol] 309fe96bfc: vm-scalability.throughput +23.0% improvement
by kernel test robot
Greeting,
FYI, we noticed a +23.0% improvement of vm-scalability.throughput due to commit:
commit: 309fe96bfc0ae387f53612927a8f0dc3eb056efd ("mm, memcontrol: implement memory.swap.events")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
in testcase: vm-scalability
on test machine: 144 threads Intel(R) Xeon(R) CPU E7-8890 v3 @ 2.50GHz with 512G memory
with following parameters:
runtime: 300s
size: 1T
test: lru-shm
cpufreq_governor: performance
test-description: The motivation behind this suite is to exercise functions and regions of the mm/ subsystem of the Linux kernel which are of interest to us.
test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/
Details are as below:
-------------------------------------------------------------------------------------------------->
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase:
gcc-7/performance/x86_64-rhel-7.2/debian-x86_64-2016-08-31.cgz/300s/1T/lkp-hsx04/lru-shm/vm-scalability
commit:
ccc2f49f99 ("mm, memcontrol: move swap charge handling into get_swap_page()")
309fe96bfc ("mm, memcontrol: implement memory.swap.events")
ccc2f49f991f17cd 309fe96bfc0ae387f53612927a
---------------- --------------------------
%stddev %change %stddev
\ | \
71207426 +23.0% 87612470 vm-scalability.throughput
0.32 ± 8% -80.2% 0.06 ± 2% vm-scalability.free_time
499213 +23.4% 616000 vm-scalability.median
0.01 ± 9% -43.6% 0.00 ± 22% vm-scalability.median_stddev
71207426 +23.0% 87612470 vm-scalability.throughput
305.83 +3.5% 316.49 vm-scalability.time.elapsed_time
305.83 +3.5% 316.49 vm-scalability.time.elapsed_time.max
7.933e+08 +8.3% 8.594e+08 vm-scalability.time.minor_page_faults
2610 -18.8% 2119 vm-scalability.time.percent_of_cpu_this_job_got
5076 -20.1% 4056 vm-scalability.time.system_time
2910 -8.9% 2651 vm-scalability.time.user_time
24540 +8.2% 26563 vm-scalability.time.voluntary_context_switches
3.566e+09 +8.3% 3.863e+09 vm-scalability.workload
4435819 ± 2% +13.1% 5015715 ± 4% cpuidle.C1E.time
58453 +12.6% 65828 ± 13% cpuidle.POLL.time
630.29 -1.9% 618.41 pmeter.Average_Active_Power
112976 +25.4% 141673 pmeter.performance_per_watt
26.00 -19.2% 21.00 vmstat.procs.r
147037 -1.2% 145251 vmstat.system.in
0.00 ±173% +0.0 0.00 ±124% mpstat.cpu.iowait%
11.66 -2.6 9.02 mpstat.cpu.sys%
6.66 -0.8 5.86 mpstat.cpu.usr%
113669 -12.8% 99110 meminfo.Active
112018 -13.0% 97459 meminfo.Active(anon)
23274932 -21.5% 18277464 meminfo.Mapped
51042 -22.7% 39479 meminfo.PageTables
5675691 -22.6% 4394275 ± 2% numa-meminfo.node0.Mapped
12906 ± 9% -24.0% 9808 ± 11% numa-meminfo.node0.PageTables
5564225 ± 2% -20.1% 4445143 numa-meminfo.node1.Mapped
12478 ± 6% -23.3% 9573 ± 11% numa-meminfo.node1.PageTables
5605568 ± 2% -20.3% 4467557 ± 2% numa-meminfo.node2.Mapped
12399 ± 8% -23.0% 9545 ± 8% numa-meminfo.node2.PageTables
5747984 ± 3% -19.3% 4638538 ± 3% numa-meminfo.node3.Mapped
11853 ± 6% -16.8% 9867 ± 10% numa-meminfo.node3.PageTables
40006 ± 2% +32.6% 53040 ± 23% numa-meminfo.node3.SUnreclaim
1394386 -21.2% 1099228 ± 2% numa-vmstat.node0.nr_mapped
3220 ± 9% -23.7% 2457 ± 9% numa-vmstat.node0.nr_page_table_pages
1385184 ± 2% -19.8% 1111569 ± 2% numa-vmstat.node1.nr_mapped
3096 ± 6% -22.4% 2404 ± 11% numa-vmstat.node1.nr_page_table_pages
1392379 ± 2% -18.4% 1135757 ± 2% numa-vmstat.node2.nr_mapped
3056 ± 7% -20.7% 2422 ± 7% numa-vmstat.node2.nr_page_table_pages
1477487 ± 2% -18.7% 1201163 ± 3% numa-vmstat.node3.nr_mapped
3074 ± 4% -17.2% 2546 ± 11% numa-vmstat.node3.nr_page_table_pages
10001 ± 2% +32.6% 13259 ± 23% numa-vmstat.node3.nr_slab_unreclaimable
4316 ± 19% -26.2% 3183 ± 3% syscalls.sys_mmap.med
66119 ± 25% -41.3% 38816 ± 10% syscalls.sys_newfstat.max
71070823 ± 55% -6.4e+07 6980408 ± 20% syscalls.sys_newfstat.noise.100%
86378359 ± 45% -6.5e+07 20983557 ± 6% syscalls.sys_newfstat.noise.2%
83896012 ± 47% -6.5e+07 18902607 ± 7% syscalls.sys_newfstat.noise.25%
86279365 ± 46% -6.5e+07 20864391 ± 6% syscalls.sys_newfstat.noise.5%
79533721 ± 49% -6.4e+07 15258271 ± 8% syscalls.sys_newfstat.noise.50%
74988875 ± 52% -6.4e+07 11205147 ± 13% syscalls.sys_newfstat.noise.75%
2034 ± 9% -16.9% 1690 ± 4% syscalls.sys_read.med
1598 ± 6% -10.5% 1431 ± 3% syscalls.sys_write.med
5.102e+12 +9.0% 5.559e+12 perf-stat.branch-instructions
1.37 -21.4% 1.08 perf-stat.cpi
2.479e+13 -14.0% 2.132e+13 perf-stat.cpu-cycles
20771 +2.6% 21302 perf-stat.cpu-migrations
4.59e+12 ± 2% +9.2% 5.014e+12 perf-stat.dTLB-loads
1.483e+12 ± 4% +10.6% 1.639e+12 perf-stat.dTLB-stores
2.527e+09 +7.3% 2.712e+09 perf-stat.iTLB-load-misses
1.804e+13 ± 2% +9.3% 1.972e+13 perf-stat.instructions
0.73 +27.1% 0.93 perf-stat.ipc
7.943e+08 +8.3% 8.605e+08 perf-stat.minor-faults
2.416e+09 +4.3% 2.519e+09 perf-stat.node-stores
7.943e+08 +8.3% 8.605e+08 perf-stat.page-faults
27996 -13.0% 24359 proc-vmstat.nr_active_anon
237.75 +6.2% 252.50 proc-vmstat.nr_dirtied
33832161 -1.0% 33504100 proc-vmstat.nr_file_pages
33515106 -1.0% 33189801 proc-vmstat.nr_inactive_anon
485.50 +0.8% 489.25 proc-vmstat.nr_inactive_file
23158 -1.0% 22915 proc-vmstat.nr_kernel_stack
5811543 -24.2% 4407834 proc-vmstat.nr_mapped
12781 -25.1% 9571 proc-vmstat.nr_page_table_pages
33521883 -1.0% 33192863 proc-vmstat.nr_shmem
222.00 +13.1% 251.00 proc-vmstat.nr_written
28001 -13.0% 24362 proc-vmstat.nr_zone_active_anon
33515101 -1.0% 33189795 proc-vmstat.nr_zone_inactive_anon
485.50 +0.8% 489.25 proc-vmstat.nr_zone_inactive_file
7.959e+08 +8.3% 8.621e+08 proc-vmstat.numa_hit
7.958e+08 +8.3% 8.621e+08 proc-vmstat.numa_local
11401 ± 8% -69.4% 3491 ± 23% proc-vmstat.pgactivate
7.969e+08 +8.4% 8.635e+08 proc-vmstat.pgalloc_normal
7.944e+08 +8.3% 8.605e+08 proc-vmstat.pgfault
7.964e+08 +8.4% 8.63e+08 proc-vmstat.pgfree
76.68 ±173% -100.0% 0.00 ± 10% sched_debug.cfs_rq:/.MIN_vruntime.stddev
29153 -18.2% 23841 sched_debug.cfs_rq:/.exec_clock.avg
48865 ± 7% -14.6% 41739 ± 7% sched_debug.cfs_rq:/.exec_clock.max
26558 -18.4% 21662 sched_debug.cfs_rq:/.exec_clock.min
76.68 ±173% -100.0% 0.00 ± 10% sched_debug.cfs_rq:/.max_vruntime.stddev
4166046 -19.1% 3372283 sched_debug.cfs_rq:/.min_vruntime.avg
4360622 -19.2% 3524394 sched_debug.cfs_rq:/.min_vruntime.max
3816276 -17.3% 3154309 sched_debug.cfs_rq:/.min_vruntime.min
105670 ± 15% -32.9% 70895 ± 16% sched_debug.cfs_rq:/.min_vruntime.stddev
-361713 -46.2% -194567 sched_debug.cfs_rq:/.spread0.min
105504 ± 15% -32.8% 70895 ± 16% sched_debug.cfs_rq:/.spread0.stddev
309.71 ± 13% -22.3% 240.75 ± 21% sched_debug.cfs_rq:/.util_est_enqueued.max
6.42 ± 19% -37.9% 3.99 ± 5% sched_debug.cpu.clock.stddev
6.42 ± 19% -37.9% 3.98 ± 5% sched_debug.cpu.clock_task.stddev
5.91 ± 7% -14.8% 5.04 ± 6% sched_debug.cpu.cpu_load[4].avg
355621 ± 22% -28.6% 253956 ± 5% sched_debug.cpu.nr_switches.max
40018 ± 12% -30.5% 27804 ± 16% sched_debug.cpu.nr_switches.stddev
0.00 ± 19% +100.0% 0.01 ± 24% sched_debug.cpu.nr_uninterruptible.avg
364939 ± 24% -26.2% 269378 ± 6% sched_debug.cpu.sched_count.max
41878 ± 12% -27.3% 30433 ± 13% sched_debug.cpu.sched_count.stddev
179801 ± 22% -33.2% 120078 ± 3% sched_debug.cpu.ttwu_count.max
20153 ± 12% -32.8% 13538 ± 19% sched_debug.cpu.ttwu_count.stddev
174157 ± 23% -33.1% 116564 ± 2% sched_debug.cpu.ttwu_local.max
19436 ± 12% -34.2% 12782 ± 20% sched_debug.cpu.ttwu_local.stddev
66.14 -66.1 0.00 perf-profile.calltrace.cycles-pp.do_access
44.18 -44.2 0.00 perf-profile.calltrace.cycles-pp.page_fault.do_access
44.15 -44.1 0.00 perf-profile.calltrace.cycles-pp.do_page_fault.page_fault.do_access
44.13 -44.1 0.00 perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault.do_access
42.34 -42.3 0.00 perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault.do_access
17.02 -17.0 0.00 perf-profile.calltrace.cycles-pp.do_rw_once
6.85 ± 14% -6.9 0.00 perf-profile.calltrace.cycles-pp.__munmap
6.81 ± 14% -6.8 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__munmap
6.81 ± 14% -6.8 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
6.81 ± 14% -6.8 0.00 perf-profile.calltrace.cycles-pp.vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
6.81 ± 14% -6.8 0.00 perf-profile.calltrace.cycles-pp.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
8.94 -6.4 2.52 ±173% perf-profile.calltrace.cycles-pp.clear_page_erms.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault
5.83 -5.8 0.00 perf-profile.calltrace.cycles-pp.native_irq_return_iret.do_access
6.80 ± 14% -5.6 1.25 ±145% perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.unmap_region.do_munmap.vm_munmap
6.80 ± 14% -5.6 1.25 ±145% perf-profile.calltrace.cycles-pp.unmap_vmas.unmap_region.do_munmap.vm_munmap.__x64_sys_munmap
6.81 ± 14% -5.6 1.26 ±144% perf-profile.calltrace.cycles-pp.do_munmap.vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
6.81 ± 14% -5.6 1.26 ±144% perf-profile.calltrace.cycles-pp.unmap_region.do_munmap.vm_munmap.__x64_sys_munmap.do_syscall_64
8.00 -5.2 2.79 ±173% perf-profile.calltrace.cycles-pp.filemap_map_pages.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
5.34 -4.8 0.57 ±173% perf-profile.calltrace.cycles-pp.finish_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
5.30 -4.7 0.56 ±173% perf-profile.calltrace.cycles-pp.alloc_set_pte.finish_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
0.00 +1.1 1.13 ± 91% perf-profile.calltrace.cycles-pp.native_irq_return_iret
0.00 +1.2 1.18 ± 31% perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt
0.00 +1.2 1.18 ± 32% perf-profile.calltrace.cycles-pp.get_next_timer_interrupt.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select.do_idle
0.00 +1.3 1.27 ± 31% perf-profile.calltrace.cycles-pp.load_balance.rebalance_domains.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt
0.00 +1.3 1.33 ± 33% perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
0.00 +1.8 1.78 ± 32% perf-profile.calltrace.cycles-pp.rebalance_domains.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
0.00 +1.9 1.88 ± 33% perf-profile.calltrace.cycles-pp.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry
0.00 +2.3 2.27 ± 33% perf-profile.calltrace.cycles-pp.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry.start_secondary
0.00 +2.6 2.63 ± 32% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
0.23 ±173% +3.1 3.36 ± 64% perf-profile.calltrace.cycles-pp.__vfs_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.23 ±173% +3.2 3.38 ± 63% perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
0.23 ±173% +3.2 3.39 ± 63% perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
0.23 ±173% +3.2 3.41 ± 63% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
0.23 ±173% +3.2 3.42 ± 63% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
0.00 +3.2 3.21 ± 31% perf-profile.calltrace.cycles-pp.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
0.23 ±173% +3.3 3.52 ± 61% perf-profile.calltrace.cycles-pp.write
0.00 +3.5 3.51 ± 32% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle
0.00 +3.7 3.74 ± 30% perf-profile.calltrace.cycles-pp.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle
0.00 +4.3 4.35 ± 33% perf-profile.calltrace.cycles-pp.menu_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
0.87 ± 3% +7.6 8.51 ± 31% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry
0.87 ± 3% +7.7 8.58 ± 31% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
7.11 +46.8 53.92 ± 30% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
8.11 +55.7 63.77 ± 30% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
8.72 +61.2 69.92 ± 30% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
8.72 +61.2 69.92 ± 30% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
8.72 +61.2 69.92 ± 30% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
8.73 +61.4 70.12 ± 30% perf-profile.calltrace.cycles-pp.secondary_startup_64
66.14 -66.1 0.00 perf-profile.children.cycles-pp.do_access
17.02 -17.0 0.00 perf-profile.children.cycles-pp.do_rw_once
6.85 ± 14% -6.9 0.00 perf-profile.children.cycles-pp.__munmap
6.81 ± 14% -5.5 1.30 ±137% perf-profile.children.cycles-pp.do_munmap
6.81 ± 14% -5.5 1.31 ±136% perf-profile.children.cycles-pp.unmap_vmas
6.81 ± 14% -5.5 1.31 ±136% perf-profile.children.cycles-pp.unmap_page_range
6.81 ± 14% -5.5 1.30 ±137% perf-profile.children.cycles-pp.vm_munmap
6.81 ± 14% -5.5 1.30 ±137% perf-profile.children.cycles-pp.unmap_region
6.81 ± 14% -5.5 1.30 ±137% perf-profile.children.cycles-pp.__x64_sys_munmap
5.88 -5.1 0.75 ±173% perf-profile.children.cycles-pp.alloc_set_pte
5.35 -4.8 0.57 ±173% perf-profile.children.cycles-pp.finish_fault
5.89 -4.8 1.13 ± 90% perf-profile.children.cycles-pp.native_irq_return_iret
4.50 ± 13% -3.9 0.63 ±155% perf-profile.children.cycles-pp.page_remove_rmap
2.89 ± 8% -2.2 0.72 ±167% perf-profile.children.cycles-pp.shmem_alloc_page
2.86 ± 8% -2.1 0.71 ±167% perf-profile.children.cycles-pp.alloc_pages_vma
2.73 ± 9% -2.1 0.67 ±165% perf-profile.children.cycles-pp.__alloc_pages_nodemask
2.48 ± 10% -1.9 0.59 ±165% perf-profile.children.cycles-pp.get_page_from_freelist
1.57 ± 16% -1.2 0.34 ±164% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
1.25 ± 20% -0.8 0.40 ± 48% perf-profile.children.cycles-pp._raw_spin_lock
0.00 +0.1 0.08 ± 21% perf-profile.children.cycles-pp.ret_from_intr
0.00 +0.1 0.08 ± 24% perf-profile.children.cycles-pp.update_rq_clock
0.00 +0.1 0.09 ± 26% perf-profile.children.cycles-pp.update_group_capacity
0.00 +0.1 0.09 ± 26% perf-profile.children.cycles-pp.intel_pmu_disable_all
0.00 +0.1 0.09 ± 28% perf-profile.children.cycles-pp.perf_event_task_tick
0.11 ± 6% +0.1 0.22 ± 15% perf-profile.children.cycles-pp.__indirect_thunk_start
0.00 +0.1 0.11 ± 34% perf-profile.children.cycles-pp.cpu_load_update
0.00 +0.1 0.12 ± 25% perf-profile.children.cycles-pp.run_posix_cpu_timers
0.00 +0.1 0.12 ± 33% perf-profile.children.cycles-pp.rb_next
0.00 +0.1 0.12 ± 19% perf-profile.children.cycles-pp.interrupt_entry
0.01 ±173% +0.1 0.13 ± 21% perf-profile.children.cycles-pp.ktime_get_update_offsets_now
0.00 +0.1 0.12 ± 36% perf-profile.children.cycles-pp.rcu_eqs_exit
0.00 +0.1 0.14 ± 39% perf-profile.children.cycles-pp.nr_iowait_cpu
0.00 +0.1 0.14 ± 38% perf-profile.children.cycles-pp.rcu_dynticks_eqs_exit
0.00 +0.1 0.14 ± 38% perf-profile.children.cycles-pp.leave_mm
0.00 +0.1 0.14 ± 26% perf-profile.children.cycles-pp.__intel_pmu_enable_all
0.00 +0.1 0.14 ± 36% perf-profile.children.cycles-pp.irq_work_needs_cpu
0.00 +0.1 0.14 ± 30% perf-profile.children.cycles-pp.idle_cpu
0.00 +0.1 0.14 ± 30% perf-profile.children.cycles-pp.call_cpuidle
0.00 +0.2 0.15 ± 35% perf-profile.children.cycles-pp.rcu_irq_exit
0.00 +0.2 0.16 ± 40% perf-profile.children.cycles-pp.rcu_needs_cpu
0.00 +0.2 0.16 ± 31% perf-profile.children.cycles-pp.get_cpu_device
0.00 +0.2 0.16 ± 28% perf-profile.children.cycles-pp.tick_check_oneshot_broadcast_this_cpu
0.00 +0.2 0.16 ± 38% perf-profile.children.cycles-pp.native_apic_mem_write
0.00 +0.2 0.16 ± 36% perf-profile.children.cycles-pp.find_next_and_bit
0.00 +0.2 0.17 ± 43% perf-profile.children.cycles-pp.timekeeping_max_deferment
0.00 +0.2 0.18 ± 34% perf-profile.children.cycles-pp.cpumask_next_and
0.00 +0.2 0.19 ± 37% perf-profile.children.cycles-pp.timerqueue_add
0.00 +0.2 0.19 ± 38% perf-profile.children.cycles-pp.enqueue_hrtimer
0.00 +0.2 0.20 ± 34% perf-profile.children.cycles-pp.update_ts_time_stats
0.00 +0.2 0.20 ± 30% perf-profile.children.cycles-pp.rcu_idle_exit
0.04 ± 58% +0.2 0.25 ± 28% perf-profile.children.cycles-pp.irq_work_run_list
0.00 +0.2 0.21 ± 28% perf-profile.children.cycles-pp.tick_nohz_irq_exit
0.03 ±100% +0.2 0.24 ± 28% perf-profile.children.cycles-pp.irq_work_interrupt
0.03 ±100% +0.2 0.24 ± 28% perf-profile.children.cycles-pp.smp_irq_work_interrupt
0.03 ±100% +0.2 0.24 ± 28% perf-profile.children.cycles-pp.irq_work_run
0.03 ±100% +0.2 0.24 ± 28% perf-profile.children.cycles-pp.printk
0.00 +0.2 0.22 ± 36% perf-profile.children.cycles-pp.arch_cpu_idle_enter
0.00 +0.2 0.23 ± 39% perf-profile.children.cycles-pp.tsc_verify_tsc_adjust
0.00 +0.2 0.24 ± 35% perf-profile.children.cycles-pp.hrtimer_get_next_event
0.00 +0.2 0.24 ± 33% perf-profile.children.cycles-pp._raw_spin_trylock
0.01 ±173% +0.2 0.25 ± 45% perf-profile.children.cycles-pp.rcu_process_callbacks
0.00 +0.3 0.26 ± 34% perf-profile.children.cycles-pp.pm_qos_read_value
0.01 ±173% +0.3 0.28 ± 26% perf-profile.children.cycles-pp.update_blocked_averages
0.00 +0.3 0.27 ± 30% perf-profile.children.cycles-pp.read_tsc
0.00 +0.3 0.28 ± 36% perf-profile.children.cycles-pp.timerqueue_del
0.00 +0.3 0.29 ± 27% perf-profile.children.cycles-pp.lapic_next_deadline
0.06 ± 7% +0.3 0.36 ± 28% perf-profile.children.cycles-pp.rcu_check_callbacks
0.03 ±173% +0.3 0.34 ± 70% perf-profile.children.cycles-pp.fbcon_putcs
0.01 ±173% +0.3 0.33 ± 33% perf-profile.children.cycles-pp.__remove_hrtimer
0.03 ±173% +0.3 0.34 ± 71% perf-profile.children.cycles-pp.bit_putcs
0.00 +0.3 0.32 ± 33% perf-profile.children.cycles-pp.__hrtimer_next_event_base
0.03 ±173% +0.3 0.35 ± 70% perf-profile.children.cycles-pp.fbcon_redraw
0.03 ±173% +0.3 0.35 ± 70% perf-profile.children.cycles-pp.lf
0.03 ±173% +0.3 0.35 ± 70% perf-profile.children.cycles-pp.con_scroll
0.03 ±173% +0.3 0.35 ± 70% perf-profile.children.cycles-pp.fbcon_scroll
0.03 ±100% +0.3 0.35 ± 26% perf-profile.children.cycles-pp.run_rebalance_domains
0.00 +0.3 0.33 ± 29% perf-profile.children.cycles-pp.native_sched_clock
0.03 ±173% +0.3 0.36 ± 71% perf-profile.children.cycles-pp.vt_console_print
0.00 +0.3 0.34 ± 34% perf-profile.children.cycles-pp.rcu_eqs_enter
0.06 ± 7% +0.3 0.40 ± 26% perf-profile.children.cycles-pp.native_write_msr
0.00 +0.3 0.35 ± 31% perf-profile.children.cycles-pp.hrtimer_next_event_without
0.00 +0.4 0.37 ± 29% perf-profile.children.cycles-pp.sched_clock
0.06 ± 11% +0.4 0.48 ± 26% perf-profile.children.cycles-pp.clockevents_program_event
0.17 ± 5% +0.4 0.59 ± 21% perf-profile.children.cycles-pp.scheduler_tick
0.05 ± 9% +0.4 0.49 ± 42% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
0.04 ± 57% +0.5 0.51 ± 29% perf-profile.children.cycles-pp.sched_clock_cpu
0.07 ± 10% +0.5 0.57 ± 34% perf-profile.children.cycles-pp.run_timer_softirq
0.06 +0.6 0.61 ± 30% perf-profile.children.cycles-pp.find_next_bit
0.11 ± 6% +0.6 0.68 ± 28% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.07 ± 7% +0.6 0.67 ± 31% perf-profile.children.cycles-pp.tick_irq_enter
0.09 ± 14% +0.7 0.75 ± 29% perf-profile.children.cycles-pp.ktime_get
0.09 ± 9% +0.8 0.85 ± 31% perf-profile.children.cycles-pp.irq_enter
0.09 ± 4% +0.8 0.86 ± 32% perf-profile.children.cycles-pp.find_busiest_group
0.09 ± 4% +0.8 0.90 ± 31% perf-profile.children.cycles-pp.__next_timer_interrupt
0.27 ± 5% +0.9 1.15 ± 25% perf-profile.children.cycles-pp.update_process_times
0.28 ± 4% +1.0 1.25 ± 25% perf-profile.children.cycles-pp.tick_sched_handle
0.30 ± 5% +1.1 1.43 ± 26% perf-profile.children.cycles-pp.tick_sched_timer
0.12 ± 3% +1.1 1.25 ± 32% perf-profile.children.cycles-pp.get_next_timer_interrupt
0.13 ± 5% +1.2 1.29 ± 31% perf-profile.children.cycles-pp.load_balance
0.07 ±173% +1.2 1.23 ± 66% perf-profile.children.cycles-pp.delay_tsc
0.14 ±173% +1.5 1.67 ± 62% perf-profile.children.cycles-pp.io_serial_in
0.18 ± 4% +1.6 1.81 ± 32% perf-profile.children.cycles-pp.rebalance_domains
0.19 ± 6% +1.8 2.00 ± 34% perf-profile.children.cycles-pp.tick_nohz_next_event
0.22 ± 7% +2.1 2.33 ± 33% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
0.50 ± 4% +2.3 2.77 ± 28% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.23 ±143% +2.5 2.77 ± 64% perf-profile.children.cycles-pp.serial8250_console_putchar
0.23 ±143% +2.6 2.83 ± 64% perf-profile.children.cycles-pp.uart_console_write
0.23 ±143% +2.7 2.91 ± 64% perf-profile.children.cycles-pp.wait_for_xmitr
0.24 ±144% +2.7 2.97 ± 64% perf-profile.children.cycles-pp.serial8250_console_write
0.38 ± 4% +2.9 3.33 ± 30% perf-profile.children.cycles-pp.__softirqentry_text_start
0.22 ±173% +3.0 3.20 ± 69% perf-profile.children.cycles-pp.devkmsg_write
0.22 ±173% +3.0 3.20 ± 69% perf-profile.children.cycles-pp.printk_emit
0.64 ± 4% +3.0 3.66 ± 28% perf-profile.children.cycles-pp.hrtimer_interrupt
0.27 ±147% +3.1 3.34 ± 64% perf-profile.children.cycles-pp.console_unlock
0.24 ±159% +3.1 3.37 ± 64% perf-profile.children.cycles-pp.__vfs_write
0.24 ±157% +3.1 3.39 ± 63% perf-profile.children.cycles-pp.vfs_write
0.24 ±157% +3.1 3.39 ± 63% perf-profile.children.cycles-pp.ksys_write
0.24 ±161% +3.2 3.44 ± 63% perf-profile.children.cycles-pp.vprintk_emit
0.25 ±153% +3.3 3.52 ± 61% perf-profile.children.cycles-pp.write
0.44 ± 4% +3.4 3.85 ± 29% perf-profile.children.cycles-pp.irq_exit
0.43 ± 5% +4.0 4.43 ± 33% perf-profile.children.cycles-pp.menu_select
1.22 ± 2% +7.5 8.74 ± 29% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
1.23 ± 2% +7.6 8.81 ± 29% perf-profile.children.cycles-pp.apic_timer_interrupt
7.13 +47.0 54.12 ± 30% perf-profile.children.cycles-pp.intel_idle
8.18 +56.3 64.52 ± 30% perf-profile.children.cycles-pp.cpuidle_enter_state
8.72 +61.2 69.92 ± 30% perf-profile.children.cycles-pp.start_secondary
8.73 +61.4 70.12 ± 30% perf-profile.children.cycles-pp.secondary_startup_64
8.73 +61.4 70.12 ± 30% perf-profile.children.cycles-pp.cpu_startup_entry
8.74 +61.5 70.20 ± 30% perf-profile.children.cycles-pp.do_idle
16.94 -16.9 0.00 perf-profile.self.cycles-pp.do_rw_once
10.66 -10.7 0.00 perf-profile.self.cycles-pp.do_access
5.89 -4.8 1.13 ± 90% perf-profile.self.cycles-pp.native_irq_return_iret
3.75 ± 12% -3.2 0.54 ±154% perf-profile.self.cycles-pp.page_remove_rmap
1.57 ± 16% -1.2 0.34 ±164% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.00 +0.1 0.07 ± 17% perf-profile.self.cycles-pp.ret_from_intr
0.00 +0.1 0.08 ± 31% perf-profile.self.cycles-pp.rcu_idle_exit
0.00 +0.1 0.08 ± 26% perf-profile.self.cycles-pp.tick_irq_enter
0.00 +0.1 0.09 ± 28% perf-profile.self.cycles-pp.perf_event_task_tick
0.00 +0.1 0.11 ± 15% perf-profile.self.cycles-pp.ktime_get_update_offsets_now
0.11 ± 6% +0.1 0.22 ± 15% perf-profile.self.cycles-pp.__indirect_thunk_start
0.00 +0.1 0.11 ± 34% perf-profile.self.cycles-pp.scheduler_tick
0.00 +0.1 0.11 ± 34% perf-profile.self.cycles-pp.cpu_load_update
0.00 +0.1 0.11 ± 27% perf-profile.self.cycles-pp.__remove_hrtimer
0.00 +0.1 0.12 ± 25% perf-profile.self.cycles-pp.run_posix_cpu_timers
0.00 +0.1 0.12 ± 33% perf-profile.self.cycles-pp.rb_next
0.00 +0.1 0.12 ± 19% perf-profile.self.cycles-pp.interrupt_entry
0.00 +0.1 0.12 ± 38% perf-profile.self.cycles-pp.timerqueue_add
0.00 +0.1 0.13 ± 32% perf-profile.self.cycles-pp.sched_clock_cpu
0.00 +0.1 0.13 ± 35% perf-profile.self.cycles-pp.hrtimer_interrupt
0.00 +0.1 0.14 ± 39% perf-profile.self.cycles-pp.nr_iowait_cpu
0.00 +0.1 0.14 ± 21% perf-profile.self.cycles-pp.smp_apic_timer_interrupt
0.00 +0.1 0.14 ± 38% perf-profile.self.cycles-pp.rcu_dynticks_eqs_exit
0.00 +0.1 0.14 ± 38% perf-profile.self.cycles-pp.leave_mm
0.00 +0.1 0.14 ± 36% perf-profile.self.cycles-pp.irq_work_needs_cpu
0.00 +0.1 0.14 ± 30% perf-profile.self.cycles-pp.idle_cpu
0.00 +0.1 0.14 ± 30% perf-profile.self.cycles-pp.call_cpuidle
0.00 +0.2 0.16 ± 40% perf-profile.self.cycles-pp.rcu_needs_cpu
0.00 +0.2 0.16 ± 31% perf-profile.self.cycles-pp.get_cpu_device
0.00 +0.2 0.16 ± 28% perf-profile.self.cycles-pp.tick_check_oneshot_broadcast_this_cpu
0.00 +0.2 0.16 ± 38% perf-profile.self.cycles-pp.native_apic_mem_write
0.00 +0.2 0.16 ± 36% perf-profile.self.cycles-pp.find_next_and_bit
0.00 +0.2 0.17 ± 43% perf-profile.self.cycles-pp.timekeeping_max_deferment
0.00 +0.2 0.18 ± 29% perf-profile.self.cycles-pp.perf_mux_hrtimer_handler
0.00 +0.2 0.19 ± 30% perf-profile.self.cycles-pp.__hrtimer_run_queues
0.00 +0.2 0.19 ± 36% perf-profile.self.cycles-pp.tsc_verify_tsc_adjust
0.00 +0.2 0.19 ± 27% perf-profile.self.cycles-pp.update_blocked_averages
0.00 +0.2 0.23 ± 22% perf-profile.self.cycles-pp.irq_exit
0.00 +0.2 0.23 ± 35% perf-profile.self.cycles-pp.get_next_timer_interrupt
0.00 +0.2 0.24 ± 33% perf-profile.self.cycles-pp._raw_spin_trylock
0.00 +0.3 0.26 ± 34% perf-profile.self.cycles-pp.pm_qos_read_value
0.00 +0.3 0.27 ± 29% perf-profile.self.cycles-pp.rcu_check_callbacks
0.00 +0.3 0.27 ± 30% perf-profile.self.cycles-pp.read_tsc
0.00 +0.3 0.30 ± 35% perf-profile.self.cycles-pp.rebalance_domains
0.00 +0.3 0.30 ± 30% perf-profile.self.cycles-pp.load_balance
0.00 +0.3 0.32 ± 33% perf-profile.self.cycles-pp.__hrtimer_next_event_base
0.04 ± 57% +0.3 0.36 ± 31% perf-profile.self.cycles-pp.__softirqentry_text_start
0.00 +0.3 0.33 ± 29% perf-profile.self.cycles-pp.native_sched_clock
0.00 +0.3 0.34 ± 34% perf-profile.self.cycles-pp.rcu_eqs_enter
0.06 ± 7% +0.3 0.40 ± 26% perf-profile.self.cycles-pp.native_write_msr
0.05 ± 9% +0.4 0.45 ± 34% perf-profile.self.cycles-pp.run_timer_softirq
0.00 +0.4 0.40 ± 38% perf-profile.self.cycles-pp.tick_nohz_next_event
0.03 ±100% +0.4 0.45 ± 42% perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
0.01 ±173% +0.4 0.44 ± 34% perf-profile.self.cycles-pp.__next_timer_interrupt
0.05 ± 59% +0.5 0.52 ± 32% perf-profile.self.cycles-pp.ktime_get
0.05 +0.5 0.52 ± 31% perf-profile.self.cycles-pp.do_idle
0.11 ± 7% +0.5 0.59 ± 34% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.06 +0.6 0.61 ± 30% perf-profile.self.cycles-pp.find_next_bit
0.06 ± 6% +0.6 0.62 ± 32% perf-profile.self.cycles-pp.find_busiest_group
0.11 ± 4% +1.1 1.18 ± 33% perf-profile.self.cycles-pp.cpuidle_enter_state
0.07 ±173% +1.2 1.23 ± 66% perf-profile.self.cycles-pp.delay_tsc
0.16 ± 2% +1.4 1.52 ± 34% perf-profile.self.cycles-pp.menu_select
0.14 ±173% +1.5 1.67 ± 62% perf-profile.self.cycles-pp.io_serial_in
7.12 +46.9 54.02 ± 30% perf-profile.self.cycles-pp.intel_idle
vm-scalability.throughput
9.5e+07 +-+---------------------------------------------------------------+
| |
9e+07 +-+ O O |
O O O O O O O O O O O O O |
| O O |
8.5e+07 +-+ |
| |
8e+07 +-+ O O |
| |
7.5e+07 +-+ +. |
| + +. |
|.+.+.+.+. .+..+.+.+ +. .+.+.+.+.+.+. .+..+.+.+.+.+.+.+.+.|
7e+07 +-+ +.+ + +.+ |
| |
6.5e+07 +-+---------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Xiaolong
[lkp-robot] [tracing/x86] 1c758a2202: aim9.disk_rr.ops_per_sec -12.0% regression
by kernel test robot
Greeting,
FYI, we noticed a -12.0% regression of aim9.disk_rr.ops_per_sec due to commit:
commit: 1c758a2202a6b4624d0703013a2c6cfa6e7455aa ("tracing/x86: Update syscall trace events to handle new prefixed syscall func names")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: aim9
on test machine: 144 threads Intel(R) Xeon(R) CPU E7-8890 v3 @ 2.50GHz with 512G memory
with following parameters:
testtime: 300s
test: disk_rr
cpufreq_governor: performance
test-description: Suite IX is the "AIM Independent Resource Benchmark:" the famous synthetic benchmark.
test-url: https://sourceforge.net/projects/aimbench/files/aim-suite9/
Details are as below:
-------------------------------------------------------------------------------------------------->
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/tbox_group/test/testcase/testtime:
gcc-7/performance/x86_64-rhel-7.2/debian-x86_64-2016-08-31.cgz/lkp-hsx04/disk_rr/aim9/300s
commit:
1cdae042fc ("tracing: Add missing forward declaration")
1c758a2202 ("tracing/x86: Update syscall trace events to handle new prefixed syscall func names")
1cdae042fc63dd98 1c758a2202a6b4624d0703013a
---------------- --------------------------
%stddev %change %stddev
\ | \
633310 -12.0% 557268 aim9.disk_rr.ops_per_sec
244.24 +2.2% 249.57 aim9.time.system_time
55.76 -9.6% 50.43 aim9.time.user_time
8135 ± 39% +73.2% 14091 ± 33% numa-meminfo.node0.Shmem
1328 -11.9% 1171 pmeter.performance_per_watt
450606 ± 3% -9.5% 407878 ± 5% meminfo.Committed_AS
406.75 ±173% +300.1% 1627 meminfo.Mlocked
20124 ± 4% +8.4% 21819 ± 6% softirqs.NET_RX
8237636 ± 6% -15.4% 6965294 ± 2% softirqs.RCU
2033 ± 39% +73.0% 3518 ± 33% numa-vmstat.node0.nr_shmem
21.25 ±173% +378.8% 101.75 ± 27% numa-vmstat.node2.nr_mlock
21.25 ±173% +378.8% 101.75 ± 27% numa-vmstat.node3.nr_mlock
9.408e+08 ± 6% +53.3% 1.442e+09 ± 20% perf-stat.dTLB-load-misses
47.39 ± 17% -10.4 36.99 ± 8% perf-stat.iTLB-load-miss-rate%
1279 ± 27% +63.4% 2089 ± 21% perf-stat.instructions-per-iTLB-miss
46.73 ± 5% -5.4 41.33 ± 5% perf-stat.node-store-miss-rate%
19240 +1.2% 19474 proc-vmstat.nr_indirectly_reclaimable
18868 +4.0% 19628 proc-vmstat.nr_slab_reclaimable
48395423 -11.8% 42700849 proc-vmstat.numa_hit
48314997 -11.8% 42620296 proc-vmstat.numa_local
3153408 -12.0% 2775642 proc-vmstat.pgactivate
48365477 -11.8% 42678780 proc-vmstat.pgfree
3060 +38.9% 4250 slabinfo.ftrace_event_field.active_objs
3060 +38.9% 4250 slabinfo.ftrace_event_field.num_objs
2748 ± 3% -8.9% 2502 ± 3% slabinfo.sighand_cache.active_objs
2763 ± 3% -8.9% 2517 ± 3% slabinfo.sighand_cache.num_objs
4125 ± 4% -10.3% 3700 ± 2% slabinfo.signal_cache.active_objs
4125 ± 4% -10.3% 3700 ± 2% slabinfo.signal_cache.num_objs
1104 +57.3% 1736 slabinfo.trace_event_file.active_objs
1104 +57.3% 1736 slabinfo.trace_event_file.num_objs
0.78 ± 4% -0.1 0.67 ± 5% perf-profile.calltrace.cycles-pp.rcu_process_callbacks.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
2.10 ± 2% -0.1 2.00 perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
1.89 -0.1 1.79 perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt
1.71 ± 2% -0.1 1.60 perf-profile.calltrace.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
0.53 ± 12% -0.1 0.44 ± 4% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.17 ± 7% -0.0 0.15 ± 4% perf-profile.children.cycles-pp.leave_mm
0.09 ± 8% -0.0 0.08 ± 11% perf-profile.children.cycles-pp.cpu_load_update_active
0.17 ± 8% +0.0 0.19 ± 4% perf-profile.children.cycles-pp.irq_work_needs_cpu
0.43 ± 14% -0.1 0.38 ± 3% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.17 ± 7% -0.0 0.15 ± 4% perf-profile.self.cycles-pp.leave_mm
0.30 +0.0 0.32 ± 2% perf-profile.self.cycles-pp.get_next_timer_interrupt
0.17 ± 8% +0.0 0.19 ± 4% perf-profile.self.cycles-pp.irq_work_needs_cpu
0.06 ± 9% +0.0 0.08 ± 15% perf-profile.self.cycles-pp.sched_clock
4358 ± 15% +25.0% 5446 ± 7% sched_debug.cfs_rq:/.exec_clock.avg
24231 ± 7% +12.3% 27202 ± 3% sched_debug.cfs_rq:/.exec_clock.stddev
23637 ± 17% +33.3% 31501 ± 14% sched_debug.cfs_rq:/.load.avg
73299 ± 9% +19.8% 87807 ± 8% sched_debug.cfs_rq:/.load.stddev
35584 ± 7% +13.2% 40294 ± 5% sched_debug.cfs_rq:/.min_vruntime.stddev
0.09 ± 10% +27.8% 0.12 ± 15% sched_debug.cfs_rq:/.nr_running.avg
17.10 ± 18% +36.5% 23.33 ± 18% sched_debug.cfs_rq:/.runnable_load_avg.avg
63.20 ± 10% +18.9% 75.11 ± 7% sched_debug.cfs_rq:/.runnable_load_avg.stddev
23618 ± 17% +33.3% 31477 ± 14% sched_debug.cfs_rq:/.runnable_weight.avg
73217 ± 9% +19.9% 87751 ± 8% sched_debug.cfs_rq:/.runnable_weight.stddev
35584 ± 7% +13.2% 40294 ± 5% sched_debug.cfs_rq:/.spread0.stddev
95.02 ± 9% +14.5% 108.82 ± 7% sched_debug.cfs_rq:/.util_avg.avg
19.44 ± 9% +23.0% 23.91 ± 10% sched_debug.cfs_rq:/.util_est_enqueued.avg
323.02 ± 3% -6.4% 302.21 ± 4% sched_debug.cpu.ttwu_local.avg
aim9.disk_rr.ops_per_sec
650000 +-+----------------------------------------------------------------+
640000 +-+ .+.. |
| .+.+..+..+..+..+ +.. .+.. .+.. .+..+..|
630000 +-++. +..+..+ +..+..+.+. +..+..+ |
620000 +-+ |
| |
610000 +-+ |
600000 +-+ |
590000 +-+ |
| |
580000 +-+ |
570000 +-+ |
O O O O O O O O O O |
560000 +-+O O O O O O O O O O O O O |
550000 +-+----------------------------------------------------------------+
aim9.time.user_time
57 +-+--------------------------------------------------------------------+
| .+..+.. .+. .+.. .|
56 +-+ .+..+.. .+..+..+. +..+. +.. .+..+. +.. .+.. .+..+. |
55 +-++. +. +. +. +. |
| |
54 +-+ |
| |
53 +-+ |
| |
52 +-+ O |
51 +-+ |
| O O O O O O O O O |
50 O-+O O O O O O O O |
| O O O O |
49 +-+--------------------------------------------------------------------+
aim9.time.system_time
251 +-+-------------------------------------------------------------------+
| O O O O |
250 O-+O O O O O O O O |
249 +-+ O O O O O O O O O |
| |
248 +-+ O |
| |
247 +-+ |
| |
246 +-+ |
245 +-+ |
| .+.. .+.. .+.. .+.. .+.. |
244 +-+ +..+. +..+.+.. .+..+..+..+. +..+..+..+ +. +..+..|
| +..+. |
243 +-+-------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Thanks,
Xiaolong
[lkp-robot] [mm] e27be240df: will-it-scale.per_process_ops -27.2% regression
by kernel test robot
Greeting,
FYI, we noticed a -27.2% regression of will-it-scale.per_process_ops due to commit:
commit: e27be240df53f1a20c659168e722b5d9f16cc7f4 ("mm: memcg: make sure memory.events is uptodate when waking pollers")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: will-it-scale
on test machine: 72 threads Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz with 128G memory
with following parameters:
nr_task: 100%
mode: process
test: page_fault3
cpufreq_governor: performance
test-description: Will It Scale takes a testcase and runs it from 1 through to n parallel copies to see if the testcase will scale. It builds both a process and threads based test in order to see any differences between the two.
test-url: https://github.com/antonblanchard/will-it-scale
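The "runs it from 1 through to n parallel copies" idea in the test description can be sketched as a toy shell harness: run the same trivial testcase body in 1, 2 and 4 parallel processes for a fixed interval and sum the per-process iteration counts. The real benchmark is a C harness with per-testcase loops (here, a page-fault loop); this sketch only shows the measurement shape, and every file name and the `:` testcase body are made up for illustration.

```shell
#!/bin/sh
# Toy will-it-scale-style measurement: N parallel copies, fixed duration,
# summed per-process op counts.
set -e
DUR=1                       # seconds per measurement
tmp=$(mktemp -d)

run_one() {                 # one process: count iterations until time is up
    end=$(( $(date +%s) + DUR ))
    n=0
    while [ "$(date +%s)" -lt "$end" ]; do
        :                   # stand-in for the real testcase body
        n=$((n + 1))
    done
    echo "$n" > "$1"
}

for copies in 1 2 4; do
    i=0
    while [ "$i" -lt "$copies" ]; do
        run_one "$tmp/out.$copies.$i" &
        i=$((i + 1))
    done
    wait                    # wait for all copies of this round
    total=0
    for f in "$tmp"/out.$copies.*; do
        total=$((total + $(cat "$f")))
    done
    echo "copies=$copies total_ops=$total" >> "$tmp/summary"
done
cat "$tmp/summary"
```

Comparing the `total_ops` lines against `copies` is the scaling question the benchmark asks; a per-process regression like the one reported here shows up as a drop in every row.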
Details are as below:
-------------------------------------------------------------------------------------------------->
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase:
gcc-7/performance/x86_64-rhel-7.2/process/100%/debian-x86_64-2016-08-31.cgz/lkp-hsw-ep4/page_fault3/will-it-scale
commit:
a38c015f31 ("mm/ksm.c: fix inconsistent accounting of zero pages")
e27be240df ("mm: memcg: make sure memory.events is uptodate when waking pollers")
a38c015f3156895b e27be240df53f1a20c659168e7
---------------- --------------------------
%stddev %change %stddev
\ | \
639324 -27.2% 465226 will-it-scale.per_process_ops
46031421 -27.2% 33496351 will-it-scale.workload
17.55 -3.2 14.38 mpstat.cpu.usr%
1130383 ± 6% -19.6% 909067 ± 4% softirqs.RCU
95892 ± 2% -7.5% 88706 ± 3% vmstat.system.in
2714 +2.0% 2768 turbostat.Avg_MHz
0.43 ± 9% -33.3% 0.29 ± 15% turbostat.CPU%c1
15.72 -2.5% 15.33 turbostat.RAMWatt
15220184 -26.9% 11118535 numa-numastat.node0.local_node
15223689 -26.9% 11125573 numa-numastat.node0.numa_hit
15236149 -22.2% 11857182 numa-numastat.node1.local_node
15246716 -22.2% 11864179 numa-numastat.node1.numa_hit
8676822 -22.6% 6714739 numa-vmstat.node0.numa_hit
8673095 -22.7% 6707502 numa-vmstat.node0.numa_local
8661159 -19.7% 6951620 numa-vmstat.node1.numa_hit
8481025 -20.1% 6775023 numa-vmstat.node1.numa_local
30466411 -24.6% 22979746 proc-vmstat.numa_hit
30452327 -24.6% 22965700 proc-vmstat.numa_local
30512939 -24.6% 23021801 proc-vmstat.pgalloc_normal
1.386e+10 -27.2% 1.008e+10 proc-vmstat.pgfault
28718588 ± 3% -24.0% 21818568 ± 5% proc-vmstat.pgfree
62.72 ± 10% -21.8% 49.06 ± 2% sched_debug.cfs_rq:/.exec_clock.stddev
80883 ± 10% -14.1% 69503 ± 6% sched_debug.cfs_rq:/.min_vruntime.stddev
2.04 ± 3% +10.0% 2.24 ± 2% sched_debug.cfs_rq:/.nr_spread_over.stddev
119225 ± 11% -58.0% 50132 ± 59% sched_debug.cfs_rq:/.spread0.avg
199133 ± 7% -35.3% 128853 ± 23% sched_debug.cfs_rq:/.spread0.max
80591 ± 10% -14.1% 69247 ± 6% sched_debug.cfs_rq:/.spread0.stddev
6.275e+12 -27.3% 4.565e+12 perf-stat.branch-instructions
4.772e+10 ± 2% -26.7% 3.498e+10 perf-stat.branch-misses
55.58 -20.5 35.13 perf-stat.cache-miss-rate%
2.658e+10 -20.4% 2.116e+10 perf-stat.cache-misses
4.782e+10 +26.0% 6.025e+10 perf-stat.cache-references
1.86 +40.3% 2.60 perf-stat.cpi
5.875e+13 +2.0% 5.994e+13 perf-stat.cpu-cycles
8.997e+12 -27.4% 6.532e+12 perf-stat.dTLB-loads
2.94 -0.5 2.48 perf-stat.dTLB-store-miss-rate%
1.599e+11 -38.9% 9.764e+10 perf-stat.dTLB-store-misses
5.27e+12 -27.2% 3.838e+12 perf-stat.dTLB-stores
2.684e+10 -27.3% 1.95e+10 perf-stat.iTLB-load-misses
3.166e+13 -27.3% 2.303e+13 perf-stat.instructions
0.54 -28.7% 0.38 perf-stat.ipc
1.386e+10 -27.2% 1.009e+10 perf-stat.minor-faults
0.57 ± 10% +10.9 11.49 perf-stat.node-load-miss-rate%
67281213 ± 10% +1624.2% 1.16e+09 perf-stat.node-load-misses
1.179e+10 -24.2% 8.934e+09 perf-stat.node-loads
5.02 +0.6 5.64 perf-stat.node-store-miss-rate%
7.36e+08 -15.5% 6.216e+08 perf-stat.node-store-misses
1.393e+10 -25.3% 1.041e+10 perf-stat.node-stores
1.386e+10 -27.2% 1.009e+10 perf-stat.page-faults
will-it-scale.per_process_ops
700000 +-+----------------------------------------------------------------+
|.+ .+.+ +.+ |
650000 +-++ .+.+. .+ + : + |
| + .+.+ + +.. : + +.+.+.+..+.+.+.+.+.|
| + .+..+ : + + |
600000 +-+ +.+ +.+ +.+ |
| |
550000 +-+ |
| |
500000 +-+ |
O |
| O O O O O O O O O O O O O O O O O O O |
450000 +-+ O |
| O O O |
400000 +-+---O------------------------------------------------------------+
will-it-scale.workload
5e+07 +-+---------------------------------------------------------------+
4.8e+07 +-+ .+.+ +.+ |
| + .+.+.+.+ + : + .+.+.+. .+. .+.+.|
4.6e+07 +-+ +. .+..+ +. : +. +. + + |
4.4e+07 +-+ +. .+.+ +. : +. + |
4.2e+07 +-+ + + + |
4e+07 +-+ |
| |
3.8e+07 +-+ |
3.6e+07 +-+ |
3.4e+07 O-O O O O O O O O O O O O O |
3.2e+07 +-+ O O O O O O |
| O |
3e+07 +-+ O O O O |
2.8e+07 +-+---------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Thanks,
Xiaolong
bf53a3dce7 ("aaa"): WARNING: CPU: 0 PID: 122 at kernel/stacktrace.c:77 save_stack_trace_tsk_reliable
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://git.kernel.org/pub/scm/linux/kernel/git/jirislaby/linux.git devel
commit bf53a3dce7e2b37d90e8438e54fc5266fb2a7d49
Author: Jiri Slaby <jslaby(a)suse.cz>
AuthorDate: Wed Nov 29 10:12:27 2017 +0100
Commit: Jiri Slaby <jslaby(a)suse.cz>
CommitDate: Tue May 29 08:42:20 2018 +0200
aaa
Signed-off-by: Jiri Slaby <jslaby(a)suse.cz>
43db2e4d17 bluetooth: fix nesting
bf53a3dce7 aaa
8cf65d7f1f r8152: napi hangup fix
+------------------------------------------------------------------+------------+------------+------------+
| | 43db2e4d17 | bf53a3dce7 | 8cf65d7f1f |
+------------------------------------------------------------------+------------+------------+------------+
| boot_successes | 33 | 0 | 0 |
| boot_failures | 0 | 15 | 15 |
| WARNING:at_kernel/stacktrace.c:#save_stack_trace_tsk_reliable | 0 | 15 | 15 |
| RIP:save_stack_trace_tsk_reliable | 0 | 15 | 15 |
| invoked_oom-killer:gfp_mask=0x | 0 | 2 | 4 |
| Mem-Info | 0 | 2 | 4 |
| Out_of_memory:Kill_process | 0 | 2 | 4 |
| Kernel_panic-not_syncing:Out_of_memory_and_no_killable_processes | 0 | 2 | 2 |
+------------------------------------------------------------------+------------+------------+------------+
[ 4.852696] No rocketport ports found; unloading driver
[ 4.855099] telclk_interrupt = 0xf non-mcpbl0010 hw.
[ 4.856011] smapi::smapi_init, ERROR invalid usSmapiID
[ 4.857007] ------------[ cut here ]------------
[ 4.857830] save_stack_tsk_reliable() not implemented yet.
[ 4.858812] WARNING: CPU: 0 PID: 122 at kernel/stacktrace.c:77 save_stack_trace_tsk_reliable+0x1c/0x24
[ 4.860204] Modules linked in:
[ 4.860204] CPU: 0 PID: 122 Comm: pokus Not tainted 4.17.0-rc6-00065-gbf53a3d #2
[ 4.860204] RIP: 0010:save_stack_trace_tsk_reliable+0x1c/0x24
[ 4.860204] Code: 05 1e ad 29 01 01 e8 f2 2d fa ff 0f 0b c3 80 3d 0e ad 29 01 00 75 15 48 c7 c7 d0 f9 ec 81 c6 05 fe ac 29 01 01 e8 d3 2d fa ff <0f> 0b b8 da ff ff ff c3 48 6b c7 0a b9 03 00 00 00 31 d2 48 f7 f1
[ 4.860204] RSP: 0000:ffff880016e13cc8 EFLAGS: 00010282
[ 4.860204] RAX: 0000000000000000 RBX: ffff8800171e99c0 RCX: 00000000000000de
[ 4.860204] RDX: 0000000000000001 RSI: ffffffff810ace13 RDI: ffffffff810aceac
[ 4.860204] RBP: ffff880016e13ce8 R08: 0000000000000000 R09: 0000000000000000
[ 4.860204] R10: ffff8800171ea5a0 R11: ffffffff83048d27 R12: ffff8800171e99c0
[ 4.860204] R13: ffff8800001c3da8 R14: 0000000000000000 R15: ffffffff815344b0
[ 4.860204] FS: 0000000000000000(0000) GS:ffff88001f600000(0000) knlGS:0000000000000000
[ 4.860204] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 4.860204] CR2: 0000000000000000 CR3: 0000000002012001 CR4: 00000000000606b0
[ 4.860204] Call Trace:
[ 4.860204] ? dump+0x4e/0xc5
[ 4.860204] ? __lock_acquire+0x506/0x582
[ 4.860204] ? kvm_sched_clock_read+0x5/0xd
[ 4.860204] ? check_chain_key+0x7f/0xc1
[ 4.860204] ? lock_release+0x247/0x26b
[ 4.860204] ? update_curr+0x116/0x140
[ 4.860204] ? __list_del_entry+0x9/0x1d
[ 4.860204] ? kvm_sched_clock_read+0x5/0xd
[ 4.860204] ? lock_release+0x247/0x26b
[ 4.860204] ? preempt_count_sub+0x8a/0x94
[ 4.860204] ? _raw_spin_unlock_irq+0x33/0x45
[ 4.860204] ? __schedule+0x466/0x489
[ 4.860204] ? get_lock_parent_ip+0x11/0x24
[ 4.860204] ? trace_preempt_on+0x6f/0x87
[ 4.860204] ? my_thr+0x5/0x8
[ 4.860204] ? kthread+0xf2/0xf7
[ 4.860204] ? kthread_stop+0x147/0x147
[ 4.860204] ? ret_from_fork+0x3a/0x50
[ 4.860204] ---[ end trace 17c2e44707756e3e ]---
[ 4.890901] mwave: tp3780i::tp3780I_InitializeBoardData: Error: SMAPI is not available on this machine
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start e265562bad48936877c834069dd70eda3cfb12db 75bc37fefc4471e718ba8e651aa74673d4e0a9eb --
git bisect bad 6cafb02b91acef1e9a17d7e8e763ddb53db63ae8 # 10:15 B 0 3 17 0 Merge 'linux-review/Krzysztof-Kozlowski/dt-bindings-arm-Remove-obsolete-insignal-boards-txt/20180529-135326' into devel-catchup-201805291449
git bisect bad 74570d4e148c8b6845332664ca70e4dc006a80e5 # 12:52 B 0 11 25 0 Merge 'linux-review/Colin-King/ath10k-fix-memory-leak-of-tpc_stats/20180529-143633' into devel-catchup-201805291449
git bisect good be330e9e96f7b49bb8e602e56c5ac43f87f52a67 # 14:43 G 11 0 0 0 Merge 'jirislaby/next_master' into devel-catchup-201805291449
git bisect bad f658f3480e449ecb02c068a55757bc01406d567a # 16:27 B 0 2 20 4 Merge 'jirislaby/devel' into devel-catchup-201805291449
git bisect good 9b555d31c0ab9f801d1b6250db6fc02c74d83bf5 # 18:16 G 10 0 0 2 TTY: con3215, remove tasklet for tty_wakeup
git bisect good 4765328e65f83363ea81b29c8b8cba300e92212c # 20:04 G 11 0 0 2 tty: vt, make vc_visible_origin u16 *
git bisect good 292b33b951eee126085db6167576beae3c8e4e48 # 21:54 G 11 0 1 1 x86/stacktrace: do not unwind after user regs
git bisect good 44df4b7c4a2c73e1c0ef2c219a1358b2e5bf7dbf # 22:57 G 11 0 0 0 x86/unwind/orc: Detect the end of the stack
git bisect good 43db2e4d179f0dcd1c751b552fef9ab9965fb9a2 # 23:44 G 11 0 0 0 bluetooth: fix nesting
git bisect bad 8cf65d7f1f7cc9b50c0d61ffc9275eb17ba8ae53 # 00:29 B 0 1 17 2 r8152: napi hangup fix
git bisect bad bf53a3dce7e2b37d90e8438e54fc5266fb2a7d49 # 01:20 B 0 1 15 0 aaa
# first bad commit: [bf53a3dce7e2b37d90e8438e54fc5266fb2a7d49] aaa
git bisect good 43db2e4d179f0dcd1c751b552fef9ab9965fb9a2 # 01:28 G 31 0 0 0 bluetooth: fix nesting
# extra tests with debug options
git bisect bad bf53a3dce7e2b37d90e8438e54fc5266fb2a7d49 # 02:22 B 0 11 29 4 aaa
# extra tests on HEAD of linux-devel/devel-catchup-201805291449
git bisect bad e265562bad48936877c834069dd70eda3cfb12db # 02:22 B 0 343 360 0 0day head guard for 'devel-catchup-201805291449'
# extra tests on tree/branch jirislaby/devel
git bisect bad 8cf65d7f1f7cc9b50c0d61ffc9275eb17ba8ae53 # 02:34 B 0 15 29 0 r8152: napi hangup fix
# extra tests with first bad commit reverted
git bisect good c15c29b94eac8f2ed8c850411ca00272eb2cb84e # 03:03 G 11 0 0 0 Revert "aaa"
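The automated bisection shown in the log above can be reproduced on any repository with `git bisect run`, which drives the good/bad marking from a test script's exit code. The repository, commit messages, and test command below are made up for illustration; only the git commands themselves are real.

```shell
#!/bin/sh
# Minimal git-bisect-run demo: build a 10-commit history where commit 6
# introduces the "regression", then let git find it automatically.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q repo && cd repo
git config user.email robot@example.com
git config user.name robot

for i in 1 2 3 4 5 6 7 8 9 10; do
    if [ "$i" -ge 6 ]; then echo bad > state; else echo good > state; fi
    echo "change $i" >> log.txt
    git add . && git commit -qm "commit $i"
done

# The probe: exit 0 on a good commit, non-zero on a bad one.
cat > "$dir/test.sh" <<'EOF'
#!/bin/sh
grep -q good state
EOF
chmod +x "$dir/test.sh"

git bisect start HEAD HEAD~9        # bad=tip, good=first commit
out=$(git bisect run "$dir/test.sh")
echo "$out" | grep "is the first bad commit"
git bisect reset >/dev/null
```

The 0-day robot does the same thing at scale, with a boot/benchmark result standing in for the probe script's exit code.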
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/lkp Intel Corporation
[lkp-robot] [Print the memcg's name when system] c385a55f52: BUG:KASAN:null-ptr-deref_in_m
by kernel test robot
FYI, we noticed the following commit (built with gcc-6):
commit: c385a55f521e1649051d7f653bec9aa0ce711c9e ("Print the memcg's name when system-wide OOM happened")
url: https://github.com/0day-ci/linux/commits/ufo19890607/Print-the-memcg-s-na...
in testcase: boot
on test machine: qemu-system-x86_64 -enable-kvm -smp 2 -m 512M
caused the following changes (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+------------------------------------------------------------------+------------+------------+
| | 6741c4bb38 | c385a55f52 |
+------------------------------------------------------------------+------------+------------+
| boot_successes | 0 | 0 |
| boot_failures | 12 | 30 |
| invoked_oom-killer:gfp_mask=0x | 12 | 29 |
| Mem-Info | 12 | |
| Kernel_panic-not_syncing:Out_of_memory_and_no_killable_processes | 12 | |
| BUG:KASAN:null-ptr-deref_in_m | 0 | 29 |
| BUG:unable_to_handle_kernel | 0 | 29 |
| Oops:#[##] | 0 | 29 |
| RIP:mem_cgroup_print_oom_memcg_name | 0 | 29 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 29 |
| BUG:kernel_hang_in_boot_stage | 0 | 1 |
+------------------------------------------------------------------+------------+------------+
[ 5.366081] BUG: KASAN: null-ptr-deref in mem_cgroup_print_oom_memcg_name+0xdb/0x130
[ 5.366817] Read of size 8 at addr 0000000000000000 by task swapper/0/1
[ 5.366817]
[ 5.366817] CPU: 1 PID: 1 Comm: swapper/0 Not tainted 4.17.0-rc6-00081-gc385a55 #2
[ 5.370063] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 5.370063] Call Trace:
[ 5.370063] dump_stack+0x137/0x1d5
[ 5.376789] ? mem_cgroup_print_oom_memcg_name+0xdb/0x130
[ 5.376789] kasan_report+0x330/0x3c0
[ 5.376789] __asan_load8+0x7d/0x80
[ 5.376789] mem_cgroup_print_oom_memcg_name+0xdb/0x130
[ 5.380065] dump_header+0x161/0x756
[ 5.380065] ? __asan_loadN+0xf/0x20
[ 5.380065] out_of_memory+0x69e/0x860
[ 5.380065] ? unregister_oom_notifier+0x20/0x20
[ 5.380065] __alloc_pages_slowpath+0x1399/0x1d20
[ 5.383398] ? fs_reclaim_release+0x60/0x60
[ 5.383398] ? __asan_loadN+0xf/0x20
[ 5.383398] ? ftrace_likely_update+0x8c/0xb0
[ 5.383398] ? __asan_loadN+0xf/0x20
[ 5.386811] __alloc_pages_nodemask+0x507/0x820
[ 5.386811] ? __alloc_pages_slowpath+0x1d20/0x1d20
[ 5.386811] ? __asan_loadN+0xf/0x20
[ 5.396789] cache_grow_begin+0x137/0x1260
[ 5.396789] ? fs_reclaim_release+0x3b/0x60
[ 5.403389] ? __asan_loadN+0xf/0x20
[ 5.403389] cache_alloc_refill+0x3c6/0x7d0
[ 5.403389] kmem_cache_alloc+0x1ba/0x540
[ 5.403389] getname_flags+0x7b/0x5c0
[ 5.406793] ? __asan_loadN+0xf/0x20
[ 5.410056] ? _parse_integer+0x1b3/0x1d0
[ 5.410056] user_path_at_empty+0x23/0x40
[ 5.410056] vfs_statx+0x191/0x250
[ 5.410056] ? __do_compat_sys_newfstat+0x100/0x100
[ 5.410056] clean_path+0x94/0x177
[ 5.416793] ? do_reset+0x85/0x85
[ 5.416793] ? __asan_loadN+0xf/0x20
[ 5.416793] ? trace_hardirqs_on+0x37/0x2c0
[ 5.416793] ? __asan_loadN+0xf/0x20
[ 5.416793] ? strcmp+0x5c/0xc0
[ 5.420054] do_name+0xc3/0x509
[ 5.420054] ? write_buffer+0x31/0x4c
[ 5.420054] write_buffer+0x39/0x4c
[ 5.423389] flush_buffer+0x110/0x140
[ 5.423389] __gunzip+0x667/0x842
[ 5.426788] ? bunzip2+0xa5b/0xa5b
[ 5.430063] ? error+0x51/0x51
[ 5.430063] ? __gunzip+0x842/0x842
[ 5.430063] gunzip+0x11/0x13
[ 5.430063] ? do_start+0x23/0x23
[ 5.430063] unpack_to_rootfs+0x355/0x645
[ 5.436806] ? do_start+0x23/0x23
[ 5.436806] ? kmsg_dump_rewind+0xd0/0xf3
[ 5.436806] ? do_collect+0xc9/0xc9
[ 5.436806] populate_rootfs+0xf4/0x308
[ 5.436806] ? unpack_to_rootfs+0x645/0x645
[ 5.443389] do_one_initcall+0x289/0x755
[ 5.443389] ? trace_event_raw_event_initcall_finish+0x270/0x270
[ 5.443389] ? kasan_check_write+0x20/0x20
[ 5.446790] ? ftrace_likely_update+0x8c/0xb0
[ 5.446790] ? do_early_param+0x11b/0x11b
[ 5.446790] ? cpumask_check+0x77/0x90
[ 5.446790] ? __asan_loadN+0xf/0x20
[ 5.453387] ? do_early_param+0x11b/0x11b
[ 5.453387] kernel_init_freeable+0x1c1/0x2e6
[ 5.453387] ? rest_init+0x110/0x110
[ 5.453387] kernel_init+0x11/0x200
[ 5.453387] ? rest_init+0x110/0x110
[ 5.453387] ret_from_fork+0x24/0x30
[ 5.460056] ==================================================================
[ 5.460056] Disabling lock debugging due to kernel taint
[ 5.464179] BUG: unable to handle kernel NULL pointer dereference at 0000000000000000
[ 5.465373] PGD 0 P4D 0
[ 5.467430] Oops: 0000 [#1] SMP KASAN
[ 5.467430] Modules linked in:
[ 5.470057] CPU: 1 PID: 1 Comm: swapper/0 Tainted: G B 4.17.0-rc6-00081-gc385a55 #2
[ 5.470057] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 5.476808] RIP: 0010:mem_cgroup_print_oom_memcg_name+0xdb/0x130
[ 5.476808] RSP: 0000:ffff88000320f458 EFLAGS: 00010292
[ 5.476808] RAX: 0000000000000000 RBX: 0000000000000002 RCX: ffffffffb4449027
[ 5.483385] RDX: 0000000000000000 RSI: 0000000000000008 RDI: 0000000000000297
[ 5.483385] RBP: ffff88000320f470 R08: fffffbfff6f2126f R09: fffffbfff6f2126e
[ 5.490049] R10: ffffffffb7909377 R11: fffffbfff6f2126f R12: 0000000000000000
[ 5.490049] R13: 0000000000000000 R14: 0000000000000000 R15: ffff88000320f6b0
[ 5.490049] FS: 0000000000000000(0000) GS:ffff880003700000(0000) knlGS:0000000000000000
[ 5.496794] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 5.496794] CR2: 0000000000000000 CR3: 0000000013422000 CR4: 00000000000006e0
[ 5.496794] Call Trace:
[ 5.496794] dump_header+0x161/0x756
[ 5.500058] ? __asan_loadN+0xf/0x20
[ 5.500058] out_of_memory+0x69e/0x860
[ 5.500058] ? unregister_oom_notifier+0x20/0x20
[ 5.500058] __alloc_pages_slowpath+0x1399/0x1d20
[ 5.503391] ? fs_reclaim_release+0x60/0x60
[ 5.503391] ? __asan_loadN+0xf/0x20
[ 5.503391] ? ftrace_likely_update+0x8c/0xb0
[ 5.503391] ? __asan_loadN+0xf/0x20
[ 5.506791] __alloc_pages_nodemask+0x507/0x820
[ 5.506791] ? __alloc_pages_slowpath+0x1d20/0x1d20
[ 5.506791] ? __asan_loadN+0xf/0x20
[ 5.506791] cache_grow_begin+0x137/0x1260
[ 5.510059] ? fs_reclaim_release+0x3b/0x60
[ 5.510059] ? __asan_loadN+0xf/0x20
[ 5.510059] cache_alloc_refill+0x3c6/0x7d0
[ 5.510059] kmem_cache_alloc+0x1ba/0x540
[ 5.513390] getname_flags+0x7b/0x5c0
[ 5.513390] ? __asan_loadN+0xf/0x20
[ 5.513390] ? _parse_integer+0x1b3/0x1d0
[ 5.513390] user_path_at_empty+0x23/0x40
[ 5.513390] vfs_statx+0x191/0x250
[ 5.513390] ? __do_compat_sys_newfstat+0x100/0x100
[ 5.516775] clean_path+0x94/0x177
[ 5.516775] ? do_reset+0x85/0x85
[ 5.516775] ? __asan_loadN+0xf/0x20
[ 5.516775] ? trace_hardirqs_on+0x37/0x2c0
[ 5.516775] ? __asan_loadN+0xf/0x20
[ 5.520065] ? strcmp+0x5c/0xc0
[ 5.520065] do_name+0xc3/0x509
[ 5.520065] ? write_buffer+0x31/0x4c
[ 5.520065] write_buffer+0x39/0x4c
[ 5.520065] flush_buffer+0x110/0x140
[ 5.520065] __gunzip+0x667/0x842
[ 5.523384] ? bunzip2+0xa5b/0xa5b
[ 5.523384] ? error+0x51/0x51
[ 5.523384] ? __gunzip+0x842/0x842
[ 5.523384] gunzip+0x11/0x13
[ 5.523384] ? do_start+0x23/0x23
[ 5.523384] unpack_to_rootfs+0x355/0x645
[ 5.526774] ? do_start+0x23/0x23
[ 5.530049] ? kmsg_dump_rewind+0xd0/0xf3
[ 5.530049] ? do_collect+0xc9/0xc9
[ 5.530049] populate_rootfs+0xf4/0x308
[ 5.530049] ? unpack_to_rootfs+0x645/0x645
[ 5.530049] do_one_initcall+0x289/0x755
[ 5.533381] ? trace_event_raw_event_initcall_finish+0x270/0x270
[ 5.533381] ? kasan_check_write+0x20/0x20
[ 5.533381] ? ftrace_likely_update+0x8c/0xb0
[ 5.540051] ? do_early_param+0x11b/0x11b
[ 5.540051] ? cpumask_check+0x77/0x90
[ 5.543385] ? __asan_loadN+0xf/0x20
[ 5.543385] ? do_early_param+0x11b/0x11b
[ 5.543385] kernel_init_freeable+0x1c1/0x2e6
[ 5.543385] ? rest_init+0x110/0x110
[ 5.546774] kernel_init+0x11/0x200
[ 5.550058] ? rest_init+0x110/0x110
[ 5.550058] ret_from_fork+0x24/0x30
[ 5.550058] Code: 50 01 00 00 e8 b7 31 15 00 48 c7 c7 00 dc ff b5 e8 6e 2e d0 ff eb 0c 48 c7 c7 60 dc ff b5 e8 60 2e d0 ff 4c 89 ef e8 75 e8 fd ff <49> 8b 5d 00 48 8d bb 50 01 00 00 e8 65 e8 fd ff 48 8b bb 50 01
[ 5.553391] RIP: mem_cgroup_print_oom_memcg_name+0xdb/0x130 RSP: ffff88000320f458
[ 5.556791] CR2: 0000000000000000
[ 5.556791] _warn_unseeded_randomness: 6 callbacks suppressed
[ 5.556791] random: get_random_bytes called from init_oops_id+0x50/0x70 with crng_init=0
[ 5.560058] ---[ end trace 8cd4338bfad4c0db ]---
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Xiaolong