Re: [LKP] [rcu] kernel BUG at include/linux/pagemap.h:149!
by Frederic Weisbecker
On Fri, Sep 11, 2015 at 10:19:47AM +0800, Boqun Feng wrote:
> Subject: [PATCH 01/27] rcu: Don't disable preemption for Tiny and Tree RCU
> readers
>
> Because preempt_disable() maps to barrier() for non-debug builds,
> it forces the compiler to spill and reload registers. Because Tree
> RCU and Tiny RCU now only appear in CONFIG_PREEMPT=n builds, these
> barrier() instances generate needless extra code for each instance of
> rcu_read_lock() and rcu_read_unlock(). This extra code slows down Tree
> RCU and bloats Tiny RCU.
>
> This commit therefore removes the preempt_disable() and preempt_enable()
> from the non-preemptible implementations of __rcu_read_lock() and
> __rcu_read_unlock(), respectively.
>
> For debug purposes, preempt_disable() and preempt_enable() are still
> kept if CONFIG_PREEMPT_COUNT=y, which makes the detection of sleeping
> inside atomic sections still work in non-preemptible kernels.
>
> Signed-off-by: Boqun Feng <boqun.feng(a)gmail.com>
> Signed-off-by: Paul E. McKenney <paulmck(a)linux.vnet.ibm.com>
> ---
> include/linux/rcupdate.h | 6 ++++--
> include/linux/rcutiny.h | 1 +
> kernel/rcu/tree.c | 9 +++++++++
> 3 files changed, 14 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> index d63bb77..6c3cece 100644
> --- a/include/linux/rcupdate.h
> +++ b/include/linux/rcupdate.h
> @@ -297,12 +297,14 @@ void synchronize_rcu(void);
>
> static inline void __rcu_read_lock(void)
> {
> - preempt_disable();
> + if (IS_ENABLED(CONFIG_PREEMPT_COUNT))
> + preempt_disable();
preempt_disable() is a no-op when !CONFIG_PREEMPT_COUNT, right?
Or rather it's a barrier(), which is anyway implied by rcu_read_lock().
So perhaps we can get rid of the IS_ENABLED() check?
Test monitoring on custom github repo
by Thomas Garnier
Hi,
I am working on KASLR (PIE for x86_64). I previously used Kees' (CCed)
branches for lkp bot testing, but someone told me I could ask you to add a
custom github path so that all branches on it are monitored.
I pushed my changes to: https://github.com/thgarnie/linux (kasrl_pie_v2
right now)
Can you add it? Anything I need to do?
Thanks,
--
Thomas
[lkp-robot] [sched/fair] d519329f72: unixbench.score -9.9% regression
by kernel test robot
Greeting,
FYI, we noticed a -9.9% regression of unixbench.score due to commit:
commit: d519329f72a6f36bc4f2b85452640cfe583b4f81 ("sched/fair: Update util_est only on util_avg updates")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
in testcase: unixbench
on test machine: 8 threads Intel(R) Core(TM) i7 CPU 870 @ 2.93GHz with 6G memory
with following parameters:
runtime: 300s
nr_task: 100%
test: execl
test-description: UnixBench is the original BYTE UNIX benchmark suite; it aims to test the performance of Unix-like systems.
test-url: https://github.com/kdlucas/byte-unixbench
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
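The "%change" column in the comparison table below is simply the relative delta between the two commits' sample means. A minimal sketch, using the values from the unixbench.score row reported further down:

```python
# Sketch: how the %change figures in lkp comparison tables are derived.
# The two values are the unixbench.score means for the parent and the
# bisected commit as reported in this email.
base = 4626      # a07630b8b2 ("sched/cpufreq/schedutil: Use util_est ...")
patched = 4167   # d519329f72 ("sched/fair: Update util_est only ...")

pct_change = (patched - base) / base * 100
print(f"{pct_change:+.1f}%")  # -9.9%
```

The "± N%" annotations next to some rows are the relative standard deviation across repeated runs; rows without them had negligible run-to-run variance.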
=========================================================================================
compiler/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase:
gcc-7/x86_64-rhel-7.2/100%/debian-x86_64-2016-08-31.cgz/300s/nhm-white/execl/unixbench
commit:
a07630b8b2 ("sched/cpufreq/schedutil: Use util_est for OPP selection")
d519329f72 ("sched/fair: Update util_est only on util_avg updates")
a07630b8b2c16f82 d519329f72a6f36bc4f2b85452
---------------- --------------------------
%stddev %change %stddev
\ | \
4626 -9.9% 4167 unixbench.score
3495362 ± 4% +70.4% 5957769 ± 2% unixbench.time.involuntary_context_switches
2.866e+08 -11.6% 2.534e+08 unixbench.time.minor_page_faults
666.75 -9.7% 602.25 unixbench.time.percent_of_cpu_this_job_got
1830 -9.7% 1653 unixbench.time.system_time
395.13 -5.2% 374.58 unixbench.time.user_time
8611715 -58.9% 3537314 ± 3% unixbench.time.voluntary_context_switches
6639375 -9.1% 6033775 unixbench.workload
26025 +3849.3% 1027825 interrupts.CAL:Function_call_interrupts
4856 ± 14% -27.4% 3523 ± 11% slabinfo.filp.active_objs
3534356 -8.8% 3223918 softirqs.RCU
77929 -11.2% 69172 vmstat.system.cs
19489 ± 2% +7.5% 20956 vmstat.system.in
9.05 ± 9% +11.0% 10.05 ± 8% boot-time.dhcp
131.63 ± 4% +8.6% 142.89 ± 7% boot-time.idle
9.07 ± 9% +11.0% 10.07 ± 8% boot-time.kernel_boot
76288 ± 3% -12.8% 66560 ± 3% meminfo.DirectMap4k
16606 -13.1% 14433 meminfo.Inactive
16515 -13.2% 14341 meminfo.Inactive(anon)
11.87 ± 5% +7.8 19.63 ± 4% mpstat.cpu.idle%
0.07 ± 35% -0.0 0.04 ± 17% mpstat.cpu.soft%
68.91 -6.1 62.82 mpstat.cpu.sys%
29291570 +325.4% 1.246e+08 cpuidle.C1.time
8629105 -36.1% 5513780 cpuidle.C1.usage
668733 ± 12% +11215.3% 75668902 ± 2% cpuidle.C1E.time
9763 ± 12% +16572.7% 1627882 ± 2% cpuidle.C1E.usage
1.834e+08 ± 9% +23.1% 2.258e+08 ± 11% cpuidle.C3.time
222674 ± 8% +133.4% 519690 ± 6% cpuidle.C3.usage
4129 -13.3% 3581 proc-vmstat.nr_inactive_anon
4129 -13.3% 3581 proc-vmstat.nr_zone_inactive_anon
2.333e+08 -12.2% 2.049e+08 proc-vmstat.numa_hit
2.333e+08 -12.2% 2.049e+08 proc-vmstat.numa_local
6625 -10.9% 5905 proc-vmstat.pgactivate
2.392e+08 -12.1% 2.102e+08 proc-vmstat.pgalloc_normal
2.936e+08 -12.6% 2.566e+08 proc-vmstat.pgfault
2.392e+08 -12.1% 2.102e+08 proc-vmstat.pgfree
2850 -15.3% 2413 turbostat.Avg_MHz
8629013 -36.1% 5513569 turbostat.C1
1.09 +3.5 4.61 turbostat.C1%
9751 ± 12% +16593.0% 1627864 ± 2% turbostat.C1E
0.03 ± 19% +2.8 2.80 turbostat.C1E%
222574 ± 8% +133.4% 519558 ± 6% turbostat.C3
6.84 ± 8% +1.5 8.34 ± 10% turbostat.C3%
2.82 ± 7% +250.3% 9.87 ± 2% turbostat.CPU%c1
6552773 ± 3% +23.8% 8111699 ± 2% turbostat.IRQ
2.02 ± 11% +28.3% 2.58 ± 9% turbostat.Pkg%pc3
7.635e+11 -12.5% 6.682e+11 perf-stat.branch-instructions
3.881e+10 -12.9% 3.381e+10 perf-stat.branch-misses
2.09 -0.3 1.77 ± 4% perf-stat.cache-miss-rate%
1.551e+09 -15.1% 1.316e+09 ± 4% perf-stat.cache-misses
26177920 -10.5% 23428188 perf-stat.context-switches
1.99 -2.8% 1.93 perf-stat.cpi
7.553e+12 -14.7% 6.446e+12 perf-stat.cpu-cycles
522523 ± 2% +628.3% 3805664 perf-stat.cpu-migrations
2.425e+10 ± 4% -14.3% 2.078e+10 perf-stat.dTLB-load-misses
1.487e+12 -11.3% 1.319e+12 perf-stat.dTLB-loads
1.156e+10 ± 3% -7.7% 1.066e+10 perf-stat.dTLB-store-misses
6.657e+11 -11.1% 5.915e+11 perf-stat.dTLB-stores
0.15 +0.0 0.15 perf-stat.iTLB-load-miss-rate%
5.807e+09 -11.0% 5.166e+09 perf-stat.iTLB-load-misses
3.799e+12 -12.1% 3.34e+12 perf-stat.iTLB-loads
3.803e+12 -12.2% 3.338e+12 perf-stat.instructions
654.99 -1.4% 646.07 perf-stat.instructions-per-iTLB-miss
0.50 +2.8% 0.52 perf-stat.ipc
2.754e+08 -11.6% 2.435e+08 perf-stat.minor-faults
1.198e+08 ± 7% +73.1% 2.074e+08 ± 4% perf-stat.node-stores
2.754e+08 -11.6% 2.435e+08 perf-stat.page-faults
572928 -3.4% 553258 perf-stat.path-length
unixbench.score
4800 +-+------------------------------------------------------------------+
|+ + + |
4700 +-+ + + :+ +. :+ + + |
| + + + +. : + + + + + + + .+++++ .+ +|
4600 +-+ +++ :+++ + ++: : :+ +++ ++.++++ + ++++ ++ |
| + + + ++ ++ + |
4500 +-+ |
| |
4400 +-+ |
| |
4300 +-+ |
O |
4200 +-O O O OOOO OO OOO OOOO OOOO O O |
|O OO OOOOO O O OO O O O O O OO |
4100 +-+------------------------------------------------------------------+
unixbench.workload
9e+06 +-+---------------------------------------------------------------+
| : |
8.5e+06 +-+ : |
| : |
8e+06 +-+ : |
| :: |
7.5e+06 +-+ : : + |
| +: : : + |
7e+06 +-+ + + :: : :: + + : + + + + + |
|:+ + + : :: : : :: : :+ : : ::+ :+ .+ :+ ++ ++ + ++ ::++|
6.5e+06 +-O+ +++ ++++ +++ + ++ +.+ + ++ + + + + + + + +.+++ + |
O O O O O O O |
6e+06 +O+OOO O OOOOOOOO OOOO OO OOOOOOOOO O O O OO |
| O |
5.5e+06 +-+---------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Xiaolong
[lkp-robot] [mm] 9092c71bb7: blogbench.write_score -12.3% regression
by kernel test robot
Greeting,
FYI, we noticed a -12.3% regression of blogbench.write_score and a +9.6% improvement
of blogbench.read_score due to commit:
commit: 9092c71bb724dba2ecba849eae69e5c9d39bd3d2 ("mm: use sc->priority for slab shrink targets")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: blogbench
on test machine: 16 threads Intel(R) Xeon(R) CPU D-1541 @ 2.10GHz with 8G memory
with following parameters:
disk: 1SSD
fs: btrfs
cpufreq_governor: performance
test-description: Blogbench is a portable filesystem benchmark that tries to reproduce the load of a real-world busy file server.
test-url: https://www.pureftpd.org/project/blogbench
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/rootfs/tbox_group/testcase:
gcc-7/performance/1SSD/btrfs/x86_64-rhel-7.2/debian-x86_64-2016-08-31.cgz/lkp-bdw-de1/blogbench
commit:
fcb2b0c577 ("mm: show total hugetlb memory consumption in /proc/meminfo")
9092c71bb7 ("mm: use sc->priority for slab shrink targets")
fcb2b0c577f145c7 9092c71bb724dba2ecba849eae
---------------- --------------------------
%stddev %change %stddev
\ | \
3256 -12.3% 2854 blogbench.write_score
1235237 ± 2% +9.6% 1354163 blogbench.read_score
28050912 -10.1% 25212230 blogbench.time.file_system_outputs
6481995 ± 3% +25.0% 8105320 ± 2% blogbench.time.involuntary_context_switches
906.00 +13.7% 1030 blogbench.time.percent_of_cpu_this_job_got
2552 +14.0% 2908 blogbench.time.system_time
173.80 +8.4% 188.32 blogbench.time.user_time
19353936 +3.6% 20045728 blogbench.time.voluntary_context_switches
8719514 +13.0% 9850451 softirqs.RCU
2.97 ± 5% -0.7 2.30 ± 3% mpstat.cpu.idle%
24.92 -6.5 18.46 mpstat.cpu.iowait%
0.65 ± 2% +0.1 0.75 mpstat.cpu.soft%
67.76 +6.7 74.45 mpstat.cpu.sys%
50206 -10.7% 44858 vmstat.io.bo
49.25 -9.1% 44.75 ± 2% vmstat.procs.b
224125 -1.8% 220135 vmstat.system.cs
48903 +10.7% 54134 vmstat.system.in
3460654 +10.8% 3834883 meminfo.Active
3380666 +11.0% 3752872 meminfo.Active(file)
1853849 -17.4% 1530415 meminfo.Inactive
1836507 -17.6% 1513054 meminfo.Inactive(file)
551311 -10.3% 494265 meminfo.SReclaimable
196525 -12.6% 171775 meminfo.SUnreclaim
747837 -10.9% 666040 meminfo.Slab
8.904e+08 -24.9% 6.683e+08 cpuidle.C1.time
22971020 -12.8% 20035820 cpuidle.C1.usage
2.518e+08 ± 3% -31.7% 1.72e+08 cpuidle.C1E.time
821393 ± 2% -33.3% 548003 cpuidle.C1E.usage
75460078 ± 2% -23.3% 57903768 ± 2% cpuidle.C3.time
136506 ± 3% -25.3% 101956 ± 3% cpuidle.C3.usage
56892498 ± 4% -23.3% 43608427 ± 4% cpuidle.C6.time
85034 ± 3% -33.9% 56184 ± 3% cpuidle.C6.usage
24373567 -24.5% 18395538 cpuidle.POLL.time
449033 ± 2% -10.8% 400493 cpuidle.POLL.usage
1832 +9.3% 2002 turbostat.Avg_MHz
22967645 -12.8% 20032521 turbostat.C1
18.43 -4.6 13.85 turbostat.C1%
821328 ± 2% -33.3% 547948 turbostat.C1E
5.21 ± 3% -1.6 3.56 turbostat.C1E%
136377 ± 3% -25.3% 101823 ± 3% turbostat.C3
1.56 ± 2% -0.4 1.20 ± 3% turbostat.C3%
84404 ± 3% -34.0% 55743 ± 3% turbostat.C6
1.17 ± 4% -0.3 0.90 ± 4% turbostat.C6%
25.93 -26.2% 19.14 turbostat.CPU%c1
0.12 ± 3% -19.1% 0.10 ± 9% turbostat.CPU%c3
14813304 +10.7% 16398388 turbostat.IRQ
38.19 +3.6% 39.56 turbostat.PkgWatt
4.51 +4.5% 4.71 turbostat.RAMWatt
8111200 ± 13% -63.2% 2986242 ± 48% proc-vmstat.compact_daemon_free_scanned
1026719 ± 30% -81.2% 193485 ± 30% proc-vmstat.compact_daemon_migrate_scanned
2444 ± 21% -63.3% 897.50 ± 20% proc-vmstat.compact_daemon_wake
8111200 ± 13% -63.2% 2986242 ± 48% proc-vmstat.compact_free_scanned
755491 ± 32% -81.6% 138856 ± 28% proc-vmstat.compact_isolated
1026719 ± 30% -81.2% 193485 ± 30% proc-vmstat.compact_migrate_scanned
137.75 ± 34% +2.8e+06% 3801062 ± 2% proc-vmstat.kswapd_inodesteal
6749 ± 20% -53.6% 3131 ± 12% proc-vmstat.kswapd_low_wmark_hit_quickly
844991 +11.2% 939487 proc-vmstat.nr_active_file
3900576 -10.5% 3490567 proc-vmstat.nr_dirtied
459789 -17.8% 377930 proc-vmstat.nr_inactive_file
137947 -10.3% 123720 proc-vmstat.nr_slab_reclaimable
49165 -12.6% 42989 proc-vmstat.nr_slab_unreclaimable
1382 ± 11% -26.2% 1020 ± 20% proc-vmstat.nr_writeback
3809266 -10.7% 3403350 proc-vmstat.nr_written
844489 +11.2% 938974 proc-vmstat.nr_zone_active_file
459855 -17.8% 378121 proc-vmstat.nr_zone_inactive_file
7055 ± 18% -52.0% 3389 ± 11% proc-vmstat.pageoutrun
33764911 ± 2% +21.3% 40946445 proc-vmstat.pgactivate
42044161 ± 2% +12.1% 47139065 proc-vmstat.pgdeactivate
92153 ± 20% -69.1% 28514 ± 24% proc-vmstat.pgmigrate_success
15212270 -10.7% 13591573 proc-vmstat.pgpgout
42053817 ± 2% +12.1% 47151755 proc-vmstat.pgrefill
11297 ±107% +1025.4% 127138 ± 21% proc-vmstat.pgscan_direct
19930162 -24.0% 15141439 proc-vmstat.pgscan_kswapd
19423629 -24.0% 14758807 proc-vmstat.pgsteal_kswapd
10868768 +184.8% 30950752 proc-vmstat.slabs_scanned
3361780 ± 3% -22.9% 2593327 ± 3% proc-vmstat.workingset_activate
4994722 ± 2% -43.2% 2835020 ± 2% proc-vmstat.workingset_refault
316427 -9.3% 286844 slabinfo.Acpi-Namespace.active_objs
3123 -9.4% 2829 slabinfo.Acpi-Namespace.active_slabs
318605 -9.4% 288623 slabinfo.Acpi-Namespace.num_objs
3123 -9.4% 2829 slabinfo.Acpi-Namespace.num_slabs
220514 -40.7% 130747 slabinfo.btrfs_delayed_node.active_objs
9751 -25.3% 7283 slabinfo.btrfs_delayed_node.active_slabs
263293 -25.3% 196669 slabinfo.btrfs_delayed_node.num_objs
9751 -25.3% 7283 slabinfo.btrfs_delayed_node.num_slabs
6383 ± 8% -12.0% 5615 ± 2% slabinfo.btrfs_delayed_ref_head.num_objs
9496 +15.5% 10969 slabinfo.btrfs_extent_buffer.active_objs
9980 +20.5% 12022 slabinfo.btrfs_extent_buffer.num_objs
260933 -10.7% 233136 slabinfo.btrfs_extent_map.active_objs
9392 -10.6% 8396 slabinfo.btrfs_extent_map.active_slabs
263009 -10.6% 235107 slabinfo.btrfs_extent_map.num_objs
9392 -10.6% 8396 slabinfo.btrfs_extent_map.num_slabs
271938 -10.3% 243802 slabinfo.btrfs_inode.active_objs
9804 -10.6% 8768 slabinfo.btrfs_inode.active_slabs
273856 -10.4% 245359 slabinfo.btrfs_inode.num_objs
9804 -10.6% 8768 slabinfo.btrfs_inode.num_slabs
7085 ± 5% -5.5% 6692 ± 2% slabinfo.btrfs_path.num_objs
311936 -16.4% 260797 slabinfo.dentry.active_objs
7803 -9.6% 7058 slabinfo.dentry.active_slabs
327759 -9.6% 296439 slabinfo.dentry.num_objs
7803 -9.6% 7058 slabinfo.dentry.num_slabs
2289 -23.3% 1755 ± 6% slabinfo.proc_inode_cache.active_objs
2292 -19.0% 1856 ± 6% slabinfo.proc_inode_cache.num_objs
261546 -12.3% 229485 slabinfo.radix_tree_node.active_objs
9404 -11.9% 8288 slabinfo.radix_tree_node.active_slabs
263347 -11.9% 232089 slabinfo.radix_tree_node.num_objs
9404 -11.9% 8288 slabinfo.radix_tree_node.num_slabs
1140424 ± 12% +40.2% 1598980 ± 14% sched_debug.cfs_rq:/.MIN_vruntime.max
790.55 +13.0% 893.20 ± 3% sched_debug.cfs_rq:/.exec_clock.stddev
1140425 ± 12% +40.2% 1598982 ± 14% sched_debug.cfs_rq:/.max_vruntime.max
0.83 ± 10% +21.5% 1.00 ± 8% sched_debug.cfs_rq:/.nr_running.avg
3.30 ± 99% +266.3% 12.09 ± 13% sched_debug.cfs_rq:/.removed.load_avg.avg
153.02 ± 97% +266.6% 560.96 ± 13% sched_debug.cfs_rq:/.removed.runnable_sum.avg
569.93 ±102% +173.2% 1556 ± 14% sched_debug.cfs_rq:/.removed.runnable_sum.stddev
1.42 ± 60% +501.5% 8.52 ± 34% sched_debug.cfs_rq:/.removed.util_avg.avg
19.88 ± 59% +288.9% 77.29 ± 16% sched_debug.cfs_rq:/.removed.util_avg.max
5.05 ± 58% +342.3% 22.32 ± 22% sched_debug.cfs_rq:/.removed.util_avg.stddev
791.44 ± 3% +47.7% 1168 ± 8% sched_debug.cfs_rq:/.util_avg.avg
1305 ± 6% +33.2% 1738 ± 5% sched_debug.cfs_rq:/.util_avg.max
450.25 ± 11% +66.2% 748.17 ± 14% sched_debug.cfs_rq:/.util_avg.min
220.82 ± 8% +21.1% 267.46 ± 5% sched_debug.cfs_rq:/.util_avg.stddev
363118 ± 11% -23.8% 276520 ± 11% sched_debug.cpu.avg_idle.avg
726003 ± 8% -30.8% 502313 ± 4% sched_debug.cpu.avg_idle.max
202629 ± 3% -32.2% 137429 ± 18% sched_debug.cpu.avg_idle.stddev
31.96 ± 28% +54.6% 49.42 ± 14% sched_debug.cpu.cpu_load[3].min
36.21 ± 25% +64.0% 59.38 ± 6% sched_debug.cpu.cpu_load[4].min
1007 ± 5% +20.7% 1216 ± 7% sched_debug.cpu.curr->pid.avg
4.50 ± 5% +14.8% 5.17 ± 5% sched_debug.cpu.nr_running.max
2476195 -11.8% 2185022 sched_debug.cpu.nr_switches.max
212888 -26.6% 156172 ± 3% sched_debug.cpu.nr_switches.stddev
3570 ± 2% -58.7% 1474 ± 2% sched_debug.cpu.nr_uninterruptible.max
-803.67 -28.7% -573.38 sched_debug.cpu.nr_uninterruptible.min
1004 ± 2% -50.4% 498.55 ± 3% sched_debug.cpu.nr_uninterruptible.stddev
2478809 -11.7% 2189310 sched_debug.cpu.sched_count.max
214130 -26.5% 157298 ± 3% sched_debug.cpu.sched_count.stddev
489430 ± 2% -16.6% 408309 ± 2% sched_debug.cpu.sched_goidle.avg
724333 ± 2% -28.2% 520263 ± 2% sched_debug.cpu.sched_goidle.max
457611 -18.1% 374746 ± 3% sched_debug.cpu.sched_goidle.min
62957 ± 2% -47.4% 33138 ± 3% sched_debug.cpu.sched_goidle.stddev
676053 ± 2% -15.4% 571816 ± 2% sched_debug.cpu.ttwu_local.max
42669 ± 3% +22.3% 52198 sched_debug.cpu.ttwu_local.min
151873 ± 2% -18.3% 124118 ± 2% sched_debug.cpu.ttwu_local.stddev
blogbench.write_score
3300 +-+------------------------------------------------------------------+
3250 +-+ +. .+ +. .+ : : : +. .+ .+.+.+. .|
|: +. .+ +.+.+.+ + + + : +. : : +. + +.+ + + |
3200 +-+ + +.+ + : + + : + + |
3150 +-+.+ ++ +.+ |
3100 +-+ |
3050 +-+ |
| |
3000 +-+ |
2950 +-+ O O |
2900 +-O O O O |
2850 +-+ O O O O O O O OO O O O |
| O O O O |
2800 O-+ O O |
2750 +-+------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Thanks,
Xiaolong
[btrfs_put_block_group] WARNING: CPU: 1 PID: 14674 at fs/btrfs/disk-io.c:3675 free_fs_root+0xc2/0xd0 [btrfs]
by Fengguang Wu
Hello,
FYI, this happens in the mainline kernel and dates back to at least v4.16.
It's a rather rare error and happens when running xfstests.
[ 438.327552] BTRFS: error (device dm-0) in __btrfs_free_extent:6962: errno=-5 IO failure
[ 438.336415] BTRFS: error (device dm-0) in btrfs_run_delayed_refs:3070: errno=-5 IO failure
[ 438.345590] BTRFS error (device dm-0): pending csums is 1028096
[ 438.369254] BTRFS error (device dm-0): cleaner transaction attach returned -30
[ 438.377674] BTRFS info (device dm-0): at unmount delalloc count 98304
[ 438.385166] WARNING: CPU: 1 PID: 14674 at fs/btrfs/disk-io.c:3675 free_fs_root+0xc2/0xd0 [btrfs]
[ 438.396562] Modules linked in: dm_snapshot dm_thin_pool dm_persistent_data dm_bio_prison dm_bufio dm_flakey dm_mod netconsole btrfs xor zstd_decompress zstd_compress xxhash raid6_pq sd_mod sg snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ata_generic pata_acpi intel_rapl x86_pkg_temp_thermal intel_powerclamp coretemp snd_hda_intel kvm_intel snd_hda_codec kvm irqbypass crct10dif_pclmul eeepc_wmi crc32_pclmul crc32c_intel ghash_clmulni_intel pata_via asus_wmi sparse_keymap snd_hda_core ata_piix snd_hwdep ppdev rfkill wmi_bmof i915 pcbc snd_pcm snd_timer drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops snd aesni_intel parport_pc crypto_simd pcspkr libata soundcore cryptd glue_helper drm wmi parport video shpchp ip_tables
[ 438.467607] CPU: 1 PID: 14674 Comm: umount Not tainted 4.17.0-rc1 #1
[ 438.474798] Hardware name: System manufacturer System Product Name/P8H67-M PRO, BIOS 1002 04/01/2011
[ 438.484804] RIP: 0010:free_fs_root+0xc2/0xd0 [btrfs]
[ 438.490590] RSP: 0018:ffffc90008b0fda8 EFLAGS: 00010282
[ 438.496641] RAX: ffff88017c5954b0 RBX: ffff880137f6d800 RCX: 0000000180100003
[ 438.504652] RDX: 0000000000000001 RSI: ffffea0006e93600 RDI: 0000000000000000
[ 438.512679] RBP: ffff88017b360000 R08: ffff8801ba4dd000 R09: 0000000180100003
[ 438.520644] R10: ffffc90008b0fc70 R11: 0000000000000000 R12: ffffc90008b0fdd0
[ 438.528657] R13: ffff88017b360080 R14: ffffc90008b0fdc8 R15: 0000000000000000
[ 438.536662] FS: 00007f06c1a80fc0(0000) GS:ffff8801bfa80000(0000) knlGS:0000000000000000
[ 438.545582] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 438.552157] CR2: 00007f06c12b0260 CR3: 000000017d3de004 CR4: 00000000000606e0
[ 438.642653] RAX: 0000000000000000 RBX: 000000000234a2d0 RCX: 00007f06c1359cf7
[ 438.793559] CPU: 1 PID: 14674 Comm: umount Tainted: G W 4.17.0-rc1 #1
[ 438.802152] Hardware name: System manufacturer System Product Name/P8H67-M PRO, BIOS 1002 04/01/2011
[ 438.812108] RIP: 0010:btrfs_put_block_group+0x41/0x60 [btrfs]
[ 438.819364] RSP: 0018:ffffc90008b0fde0 EFLAGS: 00010206
[ 438.825378] RAX: 0000000000000000 RBX: ffff8801abf63000 RCX: e38e38e38e38e38f
[ 438.833307] RDX: 0000000000000001 RSI: 00000000000009f6 RDI: ffff8801abf63000
[ 438.841230] RBP: ffff88017b360000 R08: ffff88017d3b7750 R09: 0000000180380010
[ 438.849133] R10: ffffc90008b0fca0 R11: 0000000000000000 R12: ffff8801abf63000
[ 438.857047] R13: ffff88017b3600a0 R14: ffff8801abf630e0 R15: dead000000000100
[ 438.864943] FS: 00007f06c1a80fc0(0000) GS:ffff8801bfa80000(0000) knlGS:0000000000000000
[ 438.873793] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 438.880320] CR2: 00007f06c12b0260 CR3: 000000017d3de004 CR4: 00000000000606e0
[ 438.888226] Call Trace:
[ 438.891454] btrfs_free_block_groups+0x138/0x3d0 [btrfs]
[ 438.897569] close_ctree+0x13b/0x2f0 [btrfs]
[ 438.902618] generic_shutdown_super+0x6c/0x120:
__read_once_size at include/linux/compiler.h:188
(inlined by) list_empty at include/linux/list.h:203
(inlined by) generic_shutdown_super at fs/super.c:442
[ 438.907801] kill_anon_super+0xe/0x20:
kill_anon_super at fs/super.c:1038
[ 438.912223] btrfs_kill_super+0x13/0x100 [btrfs]
[ 438.917598] deactivate_locked_super+0x3f/0x70:
deactivate_locked_super at fs/super.c:320
[ 438.922757] cleanup_mnt+0x3b/0x70:
cleanup_mnt at fs/namespace.c:1174
[ 438.926879] task_work_run+0xa3/0xe0:
task_work_run at kernel/task_work.c:115 (discriminator 1)
[ 438.931205] exit_to_usermode_loop+0x9e/0xa0:
tracehook_notify_resume at include/linux/tracehook.h:191
(inlined by) exit_to_usermode_loop at arch/x86/entry/common.c:166
[ 438.936226] do_syscall_64+0x16c/0x180:
prepare_exit_to_usermode at arch/x86/entry/common.c:196
(inlined by) syscall_return_slowpath at arch/x86/entry/common.c:265
(inlined by) do_syscall_64 at arch/x86/entry/common.c:290
[ 438.940717] entry_SYSCALL_64_after_hwframe+0x44/0xa9:
entry_SYSCALL_64_after_hwframe at arch/x86/entry/entry_64.S:247
[ 438.946507] RIP: 0033:0x7f06c1359cf7
[ 438.950798] RSP: 002b:00007ffc6a59c608 EFLAGS: 00000246 ORIG_RAX: 00000000000000a6
[ 438.959137] RAX: 0000000000000000 RBX: 000000000234a2d0 RCX: 00007f06c1359cf7
[ 438.967056] RDX: 0000000000000001 RSI: 0000000000000000 RDI: 000000000234a4b0
[ 438.974937] RBP: 000000000234a4b0 R08: 0000000000000005 R09: 000000000234b510
[ 438.982801] R10: 00000000000006f4 R11: 0000000000000246 R12: 00007f06c1865e44
[ 438.990695] R13: 0000000000000000 R14: 0000000000000000 R15: 00007ffc6a59c890
[ 438.998600] Code: 2a 48 8b 83 e8 01 00 00 48 85 c0 75 2c 48 8b bb d8 00 00 00 e8 c1 1e b8 e0 48 89 df 5b e9 b8 1e b8 e0 0f 0b 48 83 7b 50 00 74 d6 <0f> 0b 48 8b 83 e8 01 00 00 48 85 c0 74 d4 0f 0b eb d0 0f 1f 00
[ 439.019082] ---[ end trace 9263ab2c46fd437a ]---
[ 439.030057] WARNING: CPU: 2 PID: 14674 at fs/btrfs/extent-tree.c:9898 btrfs_free_block_groups+0x2a2/0x3d0 [btrfs]
Attached are the full dmesg, kconfig and reproduce scripts.
Thanks,
Fengguang
[lkp-robot] [mm/cma] 2b0f904a5a: fio.read_bw_MBps -16.1% regression
by kernel test robot
Greeting,
FYI, we noticed a -16.1% regression of fio.read_bw_MBps due to commit:
commit: 2b0f904a5a8781498417d67226fd12c5e56053ae ("mm/cma: manage the memory of the CMA area by using the ZONE_MOVABLE")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
in testcase: fio-basic
on test machine: 56 threads Intel(R) Xeon(R) CPU E5-2695 v3 @ 2.30GHz with 256G memory
with following parameters:
disk: 2pmem
fs: ext4
runtime: 200s
nr_task: 50%
time_based: tb
rw: randread
bs: 2M
ioengine: mmap
test_size: 200G
cpufreq_governor: performance
test-description: Fio is a tool that will spawn a number of threads or processes doing a particular type of I/O action as specified by the user.
test-url: https://github.com/axboe/fio
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
bs/compiler/cpufreq_governor/disk/fs/ioengine/kconfig/nr_task/rootfs/runtime/rw/tbox_group/test_size/testcase/time_based:
2M/gcc-7/performance/2pmem/ext4/mmap/x86_64-rhel-7.2/50%/debian-x86_64-2016-08-31.cgz/200s/randread/lkp-hsw-ep6/200G/fio-basic/tb
commit:
f6572f9cd2 ("mm/page_alloc: don't reserve ZONE_HIGHMEM for ZONE_MOVABLE request")
2b0f904a5a ("mm/cma: manage the memory of the CMA area by using the ZONE_MOVABLE")
f6572f9cd248df2c 2b0f904a5a8781498417d67226
---------------- --------------------------
%stddev %change %stddev
\ | \
11451 -16.1% 9605 fio.read_bw_MBps
0.29 ± 5% +0.1 0.40 ± 3% fio.latency_1000us%
19.35 ± 5% -4.7 14.69 ± 3% fio.latency_10ms%
7.92 ± 3% +12.2 20.15 fio.latency_20ms%
0.05 ± 11% +0.0 0.09 ± 8% fio.latency_2ms%
70.22 -8.9 61.36 fio.latency_4ms%
0.29 ± 13% +0.0 0.33 ± 3% fio.latency_500us%
0.45 ± 29% +1.0 1.45 ± 4% fio.latency_50ms%
1.37 +0.1 1.44 fio.latency_750us%
9792 +31.7% 12896 fio.read_clat_90%_us
10560 +33.0% 14048 fio.read_clat_95%_us
15376 ± 10% +46.9% 22592 fio.read_clat_99%_us
4885 +19.2% 5825 fio.read_clat_mean_us
5725 -16.1% 4802 fio.read_iops
4.598e+09 -16.4% 3.845e+09 fio.time.file_system_inputs
453153 -8.4% 415215 fio.time.involuntary_context_switches
5.748e+08 -16.4% 4.806e+08 fio.time.major_page_faults
1822257 +23.7% 2254706 fio.time.maximum_resident_set_size
5089 +1.6% 5172 fio.time.system_time
514.50 -16.3% 430.48 fio.time.user_time
24569 ± 2% +9.6% 26917 ± 2% fio.time.voluntary_context_switches
54443725 -14.9% 46353339 interrupts.CAL:Function_call_interrupts
0.00 ± 79% -0.0 0.00 ± 17% mpstat.cpu.iowait%
4.45 -0.7 3.71 mpstat.cpu.usr%
1467516 +21.3% 1779543 ± 3% meminfo.Active
1276031 +23.7% 1578443 ± 4% meminfo.Active(file)
25789 ± 3% -76.7% 6013 ± 4% meminfo.CmaFree
1.296e+08 -12.6% 1.133e+08 turbostat.IRQ
41.89 -3.4% 40.47 turbostat.RAMWatt
17444 ± 2% -13.5% 15092 ± 3% turbostat.SMI
10896428 -16.4% 9111830 vmstat.io.bi
6010 -6.2% 5637 vmstat.system.cs
317438 -12.1% 278980 vmstat.system.in
1072892 ± 3% +21.5% 1303487 numa-meminfo.node0.Active
978318 +21.6% 1189809 ± 2% numa-meminfo.node0.Active(file)
222968 -25.2% 166818 numa-meminfo.node0.PageTables
47374 ± 2% +10.6% 52402 ± 7% numa-meminfo.node0.SUnreclaim
165213 +31.9% 217870 numa-meminfo.node1.PageTables
222405 +10.4% 245633 ± 2% numa-meminfo.node1.SReclaimable
102992 ± 46% -80.8% 19812 ± 38% numa-meminfo.node1.Shmem
2.475e+08 ± 2% -24.0% 1.881e+08 numa-numastat.node0.local_node
39371795 ± 14% +167.1% 1.052e+08 ± 2% numa-numastat.node0.numa_foreign
2.475e+08 ± 2% -24.0% 1.881e+08 numa-numastat.node0.numa_hit
31890417 ± 17% +40.2% 44705135 ± 8% numa-numastat.node0.numa_miss
31899482 ± 17% +40.2% 44713255 ± 8% numa-numastat.node0.other_node
2.566e+08 ± 2% -44.2% 1.433e+08 numa-numastat.node1.local_node
31890417 ± 17% +40.2% 44705135 ± 8% numa-numastat.node1.numa_foreign
2.566e+08 ± 2% -44.2% 1.433e+08 numa-numastat.node1.numa_hit
39371795 ± 14% +167.1% 1.052e+08 ± 2% numa-numastat.node1.numa_miss
39373660 ± 14% +167.1% 1.052e+08 ± 2% numa-numastat.node1.other_node
6047 ± 39% -66.5% 2028 ± 63% sched_debug.cfs_rq:/.exec_clock.min
461.37 ± 8% +64.9% 760.74 ± 20% sched_debug.cfs_rq:/.load_avg.avg
1105 ± 13% +1389.3% 16467 ± 56% sched_debug.cfs_rq:/.load_avg.max
408.99 ± 3% +495.0% 2433 ± 49% sched_debug.cfs_rq:/.load_avg.stddev
28746 ± 12% -18.7% 23366 ± 14% sched_debug.cfs_rq:/.min_vruntime.min
752426 ± 3% -12.7% 656636 ± 4% sched_debug.cpu.avg_idle.avg
144956 ± 61% -85.4% 21174 ± 26% sched_debug.cpu.avg_idle.min
245684 ± 11% +44.6% 355257 ± 2% sched_debug.cpu.avg_idle.stddev
236035 ± 15% +51.8% 358264 ± 16% sched_debug.cpu.nr_switches.max
42039 ± 22% +34.7% 56616 ± 8% sched_debug.cpu.nr_switches.stddev
3204 ± 24% -48.1% 1663 ± 30% sched_debug.cpu.sched_count.min
2132 ± 25% +38.7% 2957 ± 11% sched_debug.cpu.sched_count.stddev
90.67 ± 32% -71.8% 25.58 ± 26% sched_debug.cpu.sched_goidle.min
6467 ± 15% +22.3% 7912 ± 15% sched_debug.cpu.ttwu_count.max
1513 ± 27% -55.7% 670.92 ± 22% sched_debug.cpu.ttwu_count.min
1025 ± 20% +68.4% 1727 ± 9% sched_debug.cpu.ttwu_count.stddev
1057 ± 16% -62.9% 391.85 ± 31% sched_debug.cpu.ttwu_local.min
244876 +21.6% 297770 ± 2% numa-vmstat.node0.nr_active_file
88.00 ± 5% +19.3% 105.00 ± 5% numa-vmstat.node0.nr_isolated_file
55778 -25.1% 41765 numa-vmstat.node0.nr_page_table_pages
11843 ± 2% +10.6% 13100 ± 7% numa-vmstat.node0.nr_slab_unreclaimable
159.25 ± 42% -74.9% 40.00 ± 52% numa-vmstat.node0.nr_vmscan_immediate_reclaim
244862 +21.6% 297739 ± 2% numa-vmstat.node0.nr_zone_active_file
19364320 ± 19% +187.2% 55617595 ± 2% numa-vmstat.node0.numa_foreign
268155 ± 3% +49.6% 401089 ± 4% numa-vmstat.node0.workingset_activate
1.229e+08 -19.0% 99590617 numa-vmstat.node0.workingset_refault
6345 ± 3% -76.5% 1489 ± 3% numa-vmstat.node1.nr_free_cma
41335 +32.0% 54552 numa-vmstat.node1.nr_page_table_pages
25770 ± 46% -80.8% 4956 ± 38% numa-vmstat.node1.nr_shmem
55684 +10.4% 61475 ± 2% numa-vmstat.node1.nr_slab_reclaimable
1.618e+08 ± 8% -47.6% 84846798 ± 17% numa-vmstat.node1.numa_hit
1.617e+08 ± 8% -47.6% 84676284 ± 17% numa-vmstat.node1.numa_local
19365342 ± 19% +187.2% 55620100 ± 2% numa-vmstat.node1.numa_miss
19534837 ± 19% +185.6% 55790654 ± 2% numa-vmstat.node1.numa_other
1.296e+08 -21.0% 1.024e+08 numa-vmstat.node1.workingset_refault
1.832e+12 -7.5% 1.694e+12 perf-stat.branch-instructions
0.25 -0.0 0.23 perf-stat.branch-miss-rate%
4.666e+09 -16.0% 3.918e+09 perf-stat.branch-misses
39.88 +1.1 40.98 perf-stat.cache-miss-rate%
2.812e+10 -11.6% 2.485e+10 perf-stat.cache-misses
7.051e+10 -14.0% 6.064e+10 perf-stat.cache-references
1260521 -6.1% 1183071 perf-stat.context-switches
1.87 +9.6% 2.05 perf-stat.cpi
6707 ± 2% -5.2% 6359 perf-stat.cpu-migrations
1.04 ± 11% -0.3 0.77 ± 4% perf-stat.dTLB-load-miss-rate%
2.365e+10 ± 7% -25.9% 1.751e+10 ± 9% perf-stat.dTLB-load-misses
1.05e+12 ± 4% -9.5% 9.497e+11 ± 2% perf-stat.dTLB-stores
28.16 +2.2 30.35 ± 2% perf-stat.iTLB-load-miss-rate%
2.56e+08 -10.4% 2.295e+08 perf-stat.iTLB-loads
8.974e+12 -9.2% 8.151e+12 perf-stat.instructions
89411 -8.8% 81529 perf-stat.instructions-per-iTLB-miss
0.54 -8.8% 0.49 perf-stat.ipc
5.748e+08 -16.4% 4.806e+08 perf-stat.major-faults
52.82 +5.8 58.61 ± 2% perf-stat.node-load-miss-rate%
7.206e+09 ± 2% -18.6% 5.867e+09 ± 3% perf-stat.node-loads
17.96 ± 8% +15.7 33.69 ± 2% perf-stat.node-store-miss-rate%
2.055e+09 ± 8% +65.1% 3.393e+09 ± 4% perf-stat.node-store-misses
9.391e+09 ± 2% -28.9% 6.675e+09 perf-stat.node-stores
5.753e+08 -16.4% 4.811e+08 perf-stat.page-faults
305865 -16.3% 256108 proc-vmstat.allocstall_movable
1923 ± 14% -72.1% 537.00 ± 12% proc-vmstat.allocstall_normal
0.00 +Inf% 1577 ± 67% proc-vmstat.compact_isolated
1005 ± 4% -65.8% 344.00 ± 7% proc-vmstat.kswapd_low_wmark_hit_quickly
320062 +23.2% 394374 ± 4% proc-vmstat.nr_active_file
6411 ± 2% -76.4% 1511 ± 4% proc-vmstat.nr_free_cma
277.00 ± 12% -51.4% 134.75 ± 52% proc-vmstat.nr_vmscan_immediate_reclaim
320049 +23.2% 394353 ± 4% proc-vmstat.nr_zone_active_file
71262212 ± 15% +110.3% 1.499e+08 ± 3% proc-vmstat.numa_foreign
5.042e+08 ± 2% -34.3% 3.314e+08 proc-vmstat.numa_hit
5.041e+08 ± 2% -34.3% 3.314e+08 proc-vmstat.numa_local
71262212 ± 15% +110.3% 1.499e+08 ± 3% proc-vmstat.numa_miss
71273176 ± 15% +110.3% 1.499e+08 ± 3% proc-vmstat.numa_other
1007 ± 4% -65.6% 346.25 ± 7% proc-vmstat.pageoutrun
23070268 -16.0% 19386190 proc-vmstat.pgalloc_dma32
5.525e+08 -16.7% 4.603e+08 proc-vmstat.pgalloc_normal
5.753e+08 -16.4% 4.812e+08 proc-vmstat.pgfault
5.751e+08 -16.3% 4.813e+08 proc-vmstat.pgfree
5.748e+08 -16.4% 4.806e+08 proc-vmstat.pgmajfault
2.299e+09 -16.4% 1.923e+09 proc-vmstat.pgpgin
8.396e+08 -17.8% 6.901e+08 proc-vmstat.pgscan_direct
3.018e+08 ± 2% -13.0% 2.627e+08 proc-vmstat.pgscan_kswapd
4.1e+08 -15.1% 3.48e+08 proc-vmstat.pgsteal_direct
1.542e+08 ± 3% -20.9% 1.22e+08 ± 3% proc-vmstat.pgsteal_kswapd
23514 ± 4% -23.1% 18076 ± 16% proc-vmstat.slabs_scanned
343040 ± 2% +40.3% 481253 ± 2% proc-vmstat.workingset_activate
2.525e+08 -20.1% 2.018e+08 proc-vmstat.workingset_refault
13.64 ± 3% -1.7 11.96 ± 2% perf-profile.calltrace.cycles-pp.ext4_mpage_readpages.filemap_fault.ext4_filemap_fault.__do_fault.__handle_mm_fault
11.67 ± 3% -1.4 10.29 ± 2% perf-profile.calltrace.cycles-pp.submit_bio.ext4_mpage_readpages.filemap_fault.ext4_filemap_fault.__do_fault
11.64 ± 3% -1.4 10.25 ± 2% perf-profile.calltrace.cycles-pp.generic_make_request.submit_bio.ext4_mpage_readpages.filemap_fault.ext4_filemap_fault
11.10 ± 3% -1.3 9.82 ± 2% perf-profile.calltrace.cycles-pp.pmem_make_request.generic_make_request.submit_bio.ext4_mpage_readpages.filemap_fault
9.21 ± 3% -1.2 8.04 ± 3% perf-profile.calltrace.cycles-pp.pmem_do_bvec.pmem_make_request.generic_make_request.submit_bio.ext4_mpage_readpages
27.33 ± 4% -1.0 26.35 ± 5% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
27.33 ± 4% -1.0 26.35 ± 5% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
27.33 ± 4% -1.0 26.35 ± 5% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
27.33 ± 4% -1.0 26.35 ± 5% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
26.79 ± 4% -0.8 25.98 ± 5% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
27.98 ± 3% -0.8 27.22 ± 4% perf-profile.calltrace.cycles-pp.secondary_startup_64
5.36 ± 12% -0.6 4.76 ± 7% perf-profile.calltrace.cycles-pp.kswapd.kthread.ret_from_fork
5.36 ± 12% -0.6 4.76 ± 7% perf-profile.calltrace.cycles-pp.shrink_node.kswapd.kthread.ret_from_fork
5.30 ± 12% -0.6 4.71 ± 7% perf-profile.calltrace.cycles-pp.shrink_inactive_list.shrink_node_memcg.shrink_node.kswapd.kthread
5.35 ± 12% -0.6 4.76 ± 7% perf-profile.calltrace.cycles-pp.shrink_node_memcg.shrink_node.kswapd.kthread.ret_from_fork
5.43 ± 12% -0.5 4.88 ± 7% perf-profile.calltrace.cycles-pp.ret_from_fork
5.43 ± 12% -0.5 4.88 ± 7% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
11.04 ± 2% -0.2 10.82 ± 2% perf-profile.calltrace.cycles-pp.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node.do_try_to_free_pages
62.44 ± 2% +1.9 64.38 perf-profile.calltrace.cycles-pp.page_fault
62.38 ± 2% +2.0 64.33 perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault
62.38 ± 2% +2.0 64.34 perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
61.52 ± 2% +2.1 63.58 perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
61.34 ± 2% +2.1 63.44 perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
30.18 ± 3% +2.3 32.45 ± 2% perf-profile.calltrace.cycles-pp.shrink_inactive_list.shrink_node_memcg.shrink_node.do_try_to_free_pages.try_to_free_pages
7.98 ± 3% +2.3 10.33 ± 2% perf-profile.calltrace.cycles-pp.add_to_page_cache_lru.filemap_fault.ext4_filemap_fault.__do_fault.__handle_mm_fault
30.48 ± 3% +2.4 32.83 ± 2% perf-profile.calltrace.cycles-pp.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask.filemap_fault.ext4_filemap_fault
30.46 ± 3% +2.4 32.81 ± 2% perf-profile.calltrace.cycles-pp.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask.filemap_fault
30.46 ± 3% +2.4 32.81 ± 2% perf-profile.calltrace.cycles-pp.shrink_node.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask
30.37 ± 3% +2.4 32.75 ± 2% perf-profile.calltrace.cycles-pp.shrink_node_memcg.shrink_node.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath
5.58 ± 4% +2.5 8.08 ± 2% perf-profile.calltrace.cycles-pp.__lru_cache_add.add_to_page_cache_lru.filemap_fault.ext4_filemap_fault.__do_fault
32.88 ± 3% +2.5 35.38 ± 2% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.filemap_fault.ext4_filemap_fault.__do_fault.__handle_mm_fault
5.51 ± 4% +2.5 8.02 ± 2% perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.__lru_cache_add.add_to_page_cache_lru.filemap_fault.ext4_filemap_fault
4.24 ± 4% +2.5 6.76 ± 2% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.add_to_page_cache_lru.filemap_fault
4.18 ± 4% +2.5 6.70 ± 2% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.add_to_page_cache_lru
18.64 ± 3% +2.5 21.16 ± 2% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.shrink_inactive_list.shrink_node_memcg.shrink_node
31.65 ± 3% +2.7 34.31 ± 2% perf-profile.calltrace.cycles-pp.__alloc_pages_slowpath.__alloc_pages_nodemask.filemap_fault.ext4_filemap_fault.__do_fault
17.21 ± 3% +2.7 19.93 ± 2% perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.shrink_inactive_list.shrink_node_memcg.shrink_node.do_try_to_free_pages
56.37 ± 2% +2.8 59.21 perf-profile.calltrace.cycles-pp.__do_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
56.36 ± 2% +2.8 59.20 perf-profile.calltrace.cycles-pp.ext4_filemap_fault.__do_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
56.10 ± 2% +2.9 58.97 perf-profile.calltrace.cycles-pp.filemap_fault.ext4_filemap_fault.__do_fault.__handle_mm_fault.handle_mm_fault
13.66 ± 3% -1.7 11.98 ± 2% perf-profile.children.cycles-pp.ext4_mpage_readpages
11.69 ± 3% -1.4 10.30 ± 2% perf-profile.children.cycles-pp.submit_bio
11.64 ± 3% -1.4 10.26 ± 2% perf-profile.children.cycles-pp.generic_make_request
11.12 ± 3% -1.3 9.84 ± 2% perf-profile.children.cycles-pp.pmem_make_request
9.27 ± 3% -1.1 8.12 ± 3% perf-profile.children.cycles-pp.pmem_do_bvec
27.33 ± 4% -1.0 26.35 ± 5% perf-profile.children.cycles-pp.start_secondary
27.98 ± 3% -0.8 27.22 ± 4% perf-profile.children.cycles-pp.secondary_startup_64
27.98 ± 3% -0.8 27.22 ± 4% perf-profile.children.cycles-pp.cpu_startup_entry
27.98 ± 3% -0.8 27.22 ± 4% perf-profile.children.cycles-pp.do_idle
27.97 ± 3% -0.8 27.22 ± 4% perf-profile.children.cycles-pp.cpuidle_enter_state
5.36 ± 12% -0.6 4.76 ± 7% perf-profile.children.cycles-pp.kswapd
27.42 ± 4% -0.6 26.84 ± 5% perf-profile.children.cycles-pp.intel_idle
5.43 ± 12% -0.5 4.88 ± 7% perf-profile.children.cycles-pp.kthread
5.43 ± 12% -0.5 4.88 ± 7% perf-profile.children.cycles-pp.ret_from_fork
14.25 -0.4 13.80 ± 2% perf-profile.children.cycles-pp.shrink_page_list
35.60 +1.7 37.31 ± 2% perf-profile.children.cycles-pp.shrink_inactive_list
35.89 +1.8 37.67 ± 2% perf-profile.children.cycles-pp.shrink_node
35.80 +1.8 37.60 ± 2% perf-profile.children.cycles-pp.shrink_node_memcg
62.46 ± 2% +2.0 64.41 perf-profile.children.cycles-pp.page_fault
62.43 ± 2% +2.0 64.39 perf-profile.children.cycles-pp.__do_page_fault
62.41 ± 2% +2.0 64.39 perf-profile.children.cycles-pp.do_page_fault
61.55 ± 2% +2.1 63.63 perf-profile.children.cycles-pp.handle_mm_fault
61.37 ± 2% +2.1 63.49 perf-profile.children.cycles-pp.__handle_mm_fault
8.00 ± 3% +2.3 10.35 ± 2% perf-profile.children.cycles-pp.add_to_page_cache_lru
30.55 ± 3% +2.4 32.92 ± 2% perf-profile.children.cycles-pp.try_to_free_pages
30.53 ± 3% +2.4 32.91 ± 2% perf-profile.children.cycles-pp.do_try_to_free_pages
5.59 ± 4% +2.5 8.09 ± 2% perf-profile.children.cycles-pp.__lru_cache_add
5.61 ± 4% +2.5 8.12 ± 2% perf-profile.children.cycles-pp.pagevec_lru_move_fn
32.97 ± 3% +2.5 35.50 ± 2% perf-profile.children.cycles-pp.__alloc_pages_nodemask
5.26 ± 4% +2.6 7.89 perf-profile.children.cycles-pp._raw_spin_lock_irqsave
19.05 ± 3% +2.7 21.72 ± 3% perf-profile.children.cycles-pp._raw_spin_lock_irq
31.72 ± 3% +2.7 34.41 ± 2% perf-profile.children.cycles-pp.__alloc_pages_slowpath
56.29 ± 2% +2.8 59.07 perf-profile.children.cycles-pp.filemap_fault
56.38 ± 2% +2.8 59.23 perf-profile.children.cycles-pp.__do_fault
56.37 ± 2% +2.8 59.21 perf-profile.children.cycles-pp.ext4_filemap_fault
24.54 +5.3 29.82 ± 2% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
9.20 ± 3% -1.2 8.04 ± 3% perf-profile.self.cycles-pp.pmem_do_bvec
27.42 ± 4% -0.6 26.84 ± 5% perf-profile.self.cycles-pp.intel_idle
24.54 +5.3 29.82 ± 2% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
fio.read_bw_MBps
11600 +-+-----------------------------------------------------------------+
11400 +-+. .+..+.+.+.+ .+.. .+. .+ + .+. .+.+.+. .+.+.+.+..+.+.+.+.|
| + + .+.+ + + + +. + |
11200 +-+ + |
11000 +-+ |
10800 +-+ |
10600 +-+ |
| |
10400 +-+ |
10200 +-+ |
10000 +-+ |
9800 +-+ |
O O O O O O O O O O |
9600 +-O O O O O O O O O |
9400 +-+-----------------------------------------------------------------+
fio.read_iops
5800 +-+------------------------------------------------------------------+
5700 +-+. .+..+.+.+.+ .+. .+. .+. + .+. .+.+.+.. .+.+.+.+.+..+.+.+.|
| + + .+..+ + + + + + |
5600 +-+ + |
5500 +-+ |
5400 +-+ |
5300 +-+ |
| |
5200 +-+ |
5100 +-+ |
5000 +-+ |
4900 +-+ |
O O O O O O O O O O |
4800 +-O O O O O O O O O |
4700 +-+------------------------------------------------------------------+
fio.read_clat_mean_us
6000 +-+------------------------------------------------------------------+
| |
5800 +-O O O O O O O O O O O O O O O |
O O O O |
| |
5600 +-+ |
| |
5400 +-+ |
| |
5200 +-+ |
| |
| +. |
5000 +-+ + +..+. .+. .+. .+. .+. |
| +.+.+..+.+.+.+ +.+.+ +..+ + +.+.+. +.+.+.+.+..+.+.+.|
4800 +-+------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Xiaolong
[lkp-robot] [x86/unwinder] 5209e8ac93: dmesg.WARNING:stack_recursion
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 5209e8ac937261925c12db63a28cfaa033fa30ed ("x86/unwinder: Handle stack overflows more gracefully")
https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable-rc.git linux-4.14.y
in testcase: vm-scalability
with following parameters:
runtime: 300s
size: 512G
test: anon-w-rand-mt
cpufreq_governor: performance
test-description: The motivation behind this suite is to exercise functions and regions of the mm/ of the Linux kernel which are of interest to us.
test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/
on test machine: 144 threads Intel(R) Xeon(R) CPU E7-8890 v3 @ 2.50GHz with 512G memory
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
kern :warn : [ 188.061048] WARNING: can't dereference registers at 000000003fb3add8 for ip ret_from_intr+0x6/0x19
kern :warn : [ 190.564038] WARNING: stack recursion on stack type 1
Thanks,
Xiaolong
[crng_reseed] WARNING: inconsistent lock state
by Fengguang Wu
Hello,
FYI this happens in mainline kernel 4.17.0-rc2.
It looks like a new regression.
It occurs in 3 out of 3 boots.
There is another "[ 294.642506] BUG: sleeping function called from
invalid context at mm/slab.h:421" at the bottom of this long dmesg:
[ 123.885728] Writes: Total: 41708535 Max/Min: 0/0 Fail: 0
[ 185.312613] Writes: Total: 61148802 Max/Min: 0/0 Fail: 0
[ 246.752095] Writes: Total: 86067258 Max/Min: 0/0 Fail: 0
[ 294.560671]
[ 294.560924] ================================
[ 294.561539] WARNING: inconsistent lock state
[ 294.562165] 4.17.0-rc2 #145 Not tainted
[ 294.562724] --------------------------------
[ 294.563338] inconsistent {HARDIRQ-ON-W} -> {IN-HARDIRQ-W} usage.
[ 294.564201] kworker/0:2/28 [HC1[1]:SC0[0]:HE0:SE1] takes:
[ 294.564976] (ptrval) (fs_reclaim){?.+.}, at: fs_reclaim_acquire+0x0/0x30
[ 294.566157] {HARDIRQ-ON-W} state was registered at:
[ 294.566863] lock_acquire+0x79/0xd0:
lock_acquire at kernel/locking/lockdep.c:3922
[ 294.567379] fs_reclaim_acquire+0x24/0x30
[ 294.568069] fs_reclaim_acquire+0x14/0x20:
fs_reclaim_acquire at mm/page_alloc.c:3740
[ 294.568660] kmem_cache_alloc_node+0x32/0x240:
slab_pre_alloc_hook at mm/slab.h:419
(inlined by) slab_alloc_node at mm/slab.c:3299
(inlined by) kmem_cache_alloc_node at mm/slab.c:3642
[ 294.569294] __kmalloc_node+0x24/0x30:
__kmalloc_node at mm/slab.c:3690
[ 294.569841] alloc_cpumask_var_node+0x1f/0x50
[ 294.570477] zalloc_cpumask_var+0x14/0x20
[ 294.571071] native_smp_prepare_cpus+0xf4/0x364
[ 294.571738] kvm_smp_prepare_cpus+0x21/0xff
[ 294.572349] kernel_init_freeable+0x2b5/0x4db
[ 294.572995] kernel_init+0x9/0x100
[ 294.573499] ret_from_fork+0x24/0x30
[ 294.574169] irq event stamp: 141042
[ 294.574681] hardirqs last enabled at (141041): [<ffffffff81d6dd81>] _raw_spin_unlock_irqrestore+0x31/0x60
[ 294.576050] hardirqs last disabled at (141042): [<ffffffff81e008d4>] interrupt_entry+0xd4/0x100
[ 294.577278] softirqs last enabled at (140644): [<ffffffff81aaec82>] update_defense_level+0x122/0x450
[ 294.578577] softirqs last disabled at (140642): [<ffffffff81aaeb60>] update_defense_level+0x0/0x450
[ 294.579855]
[ 294.579855] other info that might help us debug this:
[ 294.580867] Possible unsafe locking scenario:
[ 294.580867]
[ 294.581714] CPU0
[ 294.582073] ----
[ 294.582431] lock(fs_reclaim);
[ 294.582892] <Interrupt>
[ 294.583273] lock(fs_reclaim);
[ 294.583757]
[ 294.583757] *** DEADLOCK ***
[ 294.583757]
[ 294.584592] 2 locks held by kworker/0:2/28:
[ 294.585196] #0: (ptrval) ((wq_completion)"events"){+.+.}, at: process_one_work+0x1d6/0x460
[ 294.586497] #1: (ptrval) ((work_completion)(&(&adapter->watchdog_task)->work)){+.+.}, at: process_one_work+0x1d6/0x460
[ 294.588125]
[ 294.588125] stack backtrace:
[ 294.588753] CPU: 0 PID: 28 Comm: kworker/0:2 Not tainted 4.17.0-rc2 #145
[ 294.589803] Workqueue: events e1000_watchdog
[ 294.590410] Call Trace:
[ 294.590776] <IRQ>
[ 294.591078] dump_stack+0x8e/0xd5
[ 294.591557] print_usage_bug+0x247/0x262
[ 294.592221] mark_lock+0x5c5/0x660
[ 294.592719] ? check_usage_backwards+0x160/0x160
[ 294.593436] __lock_acquire+0xdcf/0x1b70
[ 294.594006] ? __lock_acquire+0x3f1/0x1b70
[ 294.594595] ? trace_hardirqs_off+0xd/0x10
[ 294.595190] lock_acquire+0x79/0xd0
[ 294.595706] ? lock_acquire+0x79/0xd0
[ 294.596259] ? find_suitable_fallback+0x80/0x80
[ 294.609933] ? crng_reseed+0x13e/0x2e0
[ 294.610479] fs_reclaim_acquire+0x24/0x30
[ 294.611163] ? find_suitable_fallback+0x80/0x80
[ 294.611915] fs_reclaim_acquire+0x14/0x20
[ 294.612490] __kmalloc+0x35/0x240
[ 294.612982] crng_reseed+0x13e/0x2e0
[ 294.613499] credit_entropy_bits+0x21c/0x230
[ 294.625342] add_interrupt_randomness+0x293/0x300
[ 294.626072] handle_irq_event_percpu+0x3b/0x70
[ 294.626718] handle_irq_event+0x34/0x60
[ 294.627272] handle_fasteoi_irq+0x70/0x120
[ 294.627867] handle_irq+0x15/0x20
[ 294.628349] do_IRQ+0x53/0x100
[ 294.628803] common_interrupt+0xf/0xf
[ 294.629331] </IRQ>
[ 294.629653] RIP: 0010:e1000_watchdog+0x126/0x530
[ 294.630335] RSP: 0000:ffff880016823e00 EFLAGS: 00000297 ORIG_RAX: ffffffffffffffda
[ 294.631690] RAX: 0000000000000010 RBX: ffff880014f992c0 RCX: 0000000000000006
[ 294.632971] RDX: ffffc900008e0000 RSI: ffff88001681d3b8 RDI: ffff880014f98e70
[ 294.634237] RBP: ffff880016823e38 R08: 0000000097c673cd R09: 0000000000000000
[ 294.635549] R10: 0000000000000000 R11: 0000000000000000 R12: ffff880014f98a80
[ 294.636838] R13: ffff880014f98000 R14: ffff880018994480 R15: ffff880014f98e70
[ 294.638276] process_one_work+0x236/0x460
[ 294.638993] ? process_one_work+0x1d6/0x460
[ 294.639652] worker_thread+0x35/0x3f0
[ 294.640175] kthread+0x122/0x140
[ 294.640646] ? process_one_work+0x460/0x460
[ 294.641242] ? __kthread_create_on_node+0x190/0x190
[ 294.641941] ret_from_fork+0x24/0x30
[ 294.642506] BUG: sleeping function called from invalid context at mm/slab.h:421
[ 294.644135] in_atomic(): 1, irqs_disabled(): 1, pid: 28, name: kworker/0:2
[ 294.646373] INFO: lockdep is turned off.
[ 294.647940] irq event stamp: 141042
[ 294.648638] hardirqs last enabled at (141041): [<ffffffff81d6dd81>] _raw_spin_unlock_irqrestore+0x31/0x60
[ 294.655362] hardirqs last disabled at (141042): [<ffffffff81e008d4>] interrupt_entry+0xd4/0x100
[ 294.676115] softirqs last enabled at (140644): [<ffffffff81aaec82>] update_defense_level+0x122/0x450
[ 294.677737] softirqs last disabled at (140642): [<ffffffff81aaeb60>] update_defense_level+0x0/0x450
[ 294.682738] CPU: 0 PID: 28 Comm: kworker/0:2 Not tainted 4.17.0-rc2 #145
[ 294.683763] Workqueue: events e1000_watchdog
[ 294.684414] Call Trace:
[ 294.697850] <IRQ>
[ 294.698174] dump_stack+0x8e/0xd5
[ 294.698697] ___might_sleep+0x15f/0x250
[ 294.699288] __might_sleep+0x45/0x80
[ 294.699853] __kmalloc+0x1a3/0x240
[ 294.700387] crng_reseed+0x13e/0x2e0
[ 294.700957] credit_entropy_bits+0x21c/0x230
[ 294.701642] add_interrupt_randomness+0x293/0x300
[ 294.702382] handle_irq_event_percpu+0x3b/0x70
[ 294.703080] handle_irq_event+0x34/0x60
[ 294.703689] handle_fasteoi_irq+0x70/0x120
[ 294.704325] handle_irq+0x15/0x20
[ 294.704863] do_IRQ+0x53/0x100
[ 294.705347] common_interrupt+0xf/0xf
[ 294.705927] </IRQ>
[ 294.706267] RIP: 0010:e1000_watchdog+0x126/0x530
[ 294.706983] RSP: 0000:ffff880016823e00 EFLAGS: 00000297 ORIG_RAX: ffffffffffffffda
[ 294.708142] RAX: 0000000000000010 RBX: ffff880014f992c0 RCX: 0000000000000006
[ 294.709228] RDX: ffffc900008e0000 RSI: ffff88001681d3b8 RDI: ffff880014f98e70
[ 294.710314] RBP: ffff880016823e38 R08: 0000000097c673cd R09: 0000000000000000
[ 294.715815] R10: 0000000000000000 R11: 0000000000000000 R12: ffff880014f98a80
[ 294.716907] R13: ffff880014f98000 R14: ffff880018994480 R15: ffff880014f98e70
[ 294.718019] process_one_work+0x236/0x460
[ 294.718643] ? process_one_work+0x1d6/0x460
[ 294.719281] worker_thread+0x35/0x3f0
[ 294.719859] kthread+0x122/0x140
[ 294.720361] ? process_one_work+0x460/0x460
[ 294.721008] ? __kthread_create_on_node+0x190/0x190
[ 294.721766] ret_from_fork+0x24/0x30
[ 294.722324] random: crng init done
Attached the full dmesg, kconfig and reproduce scripts.
Thanks,
Fengguang
[llc_ui_release] BUG: unable to handle kernel NULL pointer dereference at 0000000000000004
by Fengguang Wu
Hello,
FYI this happens in mainline kernel 4.17.0-rc2.
It looks like a new regression.
It occurs in 5 out of 5 boots.
[main] 375 sockets created based on info from socket cachefile.
[main] Generating file descriptors
[main] Added 83 filenames from /dev
udevd[507]: failed to execute '/sbin/modprobe' '/sbin/modprobe -bv platform:regulatory': No such file or directory
[ 372.057947] caif:caif_disconnect_client(): nothing to disconnect
[ 372.082415] BUG: unable to handle kernel NULL pointer dereference at 0000000000000004
[ 372.083866] PGD 16e5b067 P4D 16e5b067 PUD 16cce067 PMD 0
[ 372.085033] Oops: 0000 [#1] SMP
[ 372.085654] CPU: 1 PID: 494 Comm: trinity-main Not tainted 4.17.0-rc2 #171
[ 372.086910] RIP: 0010:refcount_inc_not_zero+0x25/0x2f0:
__read_once_size at include/linux/compiler.h:188
(inlined by) arch_atomic_read at arch/x86/include/asm/atomic.h:31
(inlined by) atomic_read at include/asm-generic/atomic-instrumented.h:22
(inlined by) refcount_inc_not_zero at lib/refcount.c:120
[ 372.087918] RSP: 0018:ffff880016fb7d08 EFLAGS: 00010206
[ 372.089279] RAX: ffff880016bd5f00 RBX: 0000000000000004 RCX: ffffffff818317ed
[ 372.090142] RDX: 0000000000000000 RSI: 0000000000000001 RDI: 0000000000000004
[ 372.091443] RBP: ffff880016f46878 R08: 0000000000000002 R09: 0000000000000000
[ 372.093103] R10: 0000000001a0ae15 R11: 0000000011e52352 R12: 0000000000000004
[ 372.094480] R13: 0000000000000000 R14: ffffffff84655070 R15: ffff88001f2f93c0
[ 372.095848] FS: 00007fc0f65dc700(0000) GS:ffff88001d600000(0000) knlGS:0000000000000000
[ 372.097643] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 372.098736] CR2: 0000000000000004 CR3: 0000000016e5a000 CR4: 00000000000006a0
[ 372.099978] DR0: 0000000000693000 DR1: 0000000000000000 DR2: 0000000000000000
[ 372.101483] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000600
[ 372.102856] Call Trace:
[ 372.103434] refcount_inc+0x19/0x190:
refcount_inc at lib/refcount.c:153
[ 372.104200] llc_ui_release+0xf5/0x270:
constant_test_bit at arch/x86/include/asm/bitops.h:328
(inlined by) sock_flag at include/net/sock.h:802
(inlined by) llc_ui_release at net/llc/af_llc.c:208
[ 372.105122] sock_release+0x56/0x120:
sock_release at net/socket.c:594
[ 372.105773] ? sock_release+0x120/0x120:
sock_close at net/socket.c:1148
[ 372.106490] sock_close+0x1f/0x30:
sock_close at net/socket.c:1151
[ 372.107176] __fput+0x2e9/0x620:
__fput at fs/file_table.c:209
[ 372.107858] ____fput+0x1e/0x30:
____fput at fs/file_table.c:243
[ 372.108526] task_work_run+0x11a/0x180:
task_work_run at kernel/task_work.c:115 (discriminator 1)
[ 372.109491] do_exit+0xda4/0x2210:
do_exit at kernel/exit.c:866
[ 372.110171] ? __do_page_fault+0xffe/0x1150:
__do_page_fault at arch/x86/mm/fault.c:1444 (discriminator 1)
[ 372.110677] do_group_exit+0x1ce/0x1f0:
do_group_exit at kernel/exit.c:957
[ 372.111130] __do_sys_exit_group+0x1b/0x20:
__do_sys_exit_group at kernel/exit.c:979
[ 372.111878] __x64_sys_exit_group+0x1f/0x20:
__x64_sys_exit_group at kernel/exit.c:977
[ 372.112655] do_syscall_64+0x3c8/0x940:
do_syscall_64 at arch/x86/entry/common.c:287
[ 372.113612] entry_SYSCALL_64_after_hwframe+0x49/0xbe:
entry_SYSCALL_64_after_hwframe at arch/x86/entry/entry_64.S:240
[ 372.114638] RIP: 0033:0x7fc0f60c1408
[ 372.115389] RSP: 002b:00007fff057791b8 EFLAGS: 00000206 ORIG_RAX: 00000000000000e7
[ 372.117042] RAX: ffffffffffffffda RBX: 0000000000000001 RCX: 00007fc0f60c1408
[ 372.118462] RDX: 0000000000000000 RSI: 000000000000003c RDI: 0000000000000000
[ 372.119892] RBP: 0000000000000001 R08: 00000000000000e7 R09: ffffffffffffffa0
[ 372.121479] R10: 00007fff05778f50 R11: 0000000000000206 R12: 0000000000000005
[ 372.122831] R13: 00007fff057793a0 R14: 0000000000000000 R15: 0000000000000000
[ 372.124211] Code: 84 00 00 00 00 00 41 57 41 56 49 c7 c6 70 50 65 84 41 55 41 54 49 89 fc 55 53 48 83 ec 08 e8 a3 53 a4 ff 48 83 05 4b 01 9d 04 01 <45> 8b 2c 24 e8 92 53 a4 ff 31 c0 45 85 ed 41 8d 6d 01 0f 94 c0
[ 372.129128] RIP: refcount_inc_not_zero+0x25/0x2f0:
__read_once_size at include/linux/compiler.h:188
(inlined by) arch_atomic_read at arch/x86/include/asm/atomic.h:31
(inlined by) atomic_read at include/asm-generic/atomic-instrumented.h:22
(inlined by) refcount_inc_not_zero at lib/refcount.c:120 RSP: ffff880016fb7d08
[ 372.130205] CR2: 0000000000000004
[ 372.130604] ---[ end trace a6d858cc768df5f2 ]---
[ 372.131122] Kernel panic - not syncing: Fatal exception
Attached the full dmesg, kconfig and reproduce scripts.
Thanks,
Fengguang