[lkp] [fanotify] a2227130fa: BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
by kernel test robot
FYI, we noticed the below changes on
https://github.com/0day-ci/linux Eric-Sandeen/fanotify-pre-approve-listener-s-OPEN_PERM-access-requests/20160127-072258
commit a2227130fa626d64cd3d6184c4a88d5565572c29 ("fanotify: pre-approve listener's OPEN_PERM access requests")
[ 281.525813] tun: (C) 1999-2004 Max Krasnyansky <maxk@qualcomm.com>
[ 281.599228] EXT4-fs (loop0): mounting ext2 file system using the ext4 subsystem
[ 281.599909] EXT4-fs (loop0): mounted filesystem without journal. Opts: (null)
[ 281.656253] BUG: unable to handle kernel NULL pointer dereference at 0000000000000008
[ 281.656258] IP: [<ffffffff81238931>] fanotify_handle_event+0xa1/0x3b0
[ 281.656259] PGD 9a5ea067 PUD 98f11067 PMD 0
[ 281.656260] Oops: 0000 [#1] SMP
[ 281.656274] Modules linked in: tun loop rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver netconsole sg sr_mod cdrom sd_mod dcdbas ata_generic coretemp pata_acpi i7core_edac dell_smm_hwmon kvm_intel kvm irqbypass ata_piix crc32c_intel serio_raw pcspkr edac_core snd_hda_codec_realtek libata usb_storage snd_hda_codec_generic snd_hda_codec_hdmi snd_hda_intel snd_hda_codec snd_hda_core snd_hwdep firewire_ohci snd_pcm firewire_core snd_timer crc_itu_t snd soundcore shpchp acpi_cpufreq broadcom bcm_phy_lib
[ 281.656276] CPU: 6 PID: 12247 Comm: fanotify01 Not tainted 4.5.0-rc1-00001-ga222713 #2
[ 281.656276] Hardware name: Dell Inc. Studio XPS 8000/0X231R, BIOS A01 08/11/2009
[ 281.656277] task: ffff8800970fc880 ti: ffff880098648000 task.ti: ffff880098648000
[ 281.656279] RIP: 0010:[<ffffffff81238931>] [<ffffffff81238931>] fanotify_handle_event+0xa1/0x3b0
[ 281.656280] RSP: 0018:ffff88009864bda8 EFLAGS: 00010246
[ 281.656281] RAX: 0000000000400001 RBX: ffff88009881c410 RCX: 0000000000000000
[ 281.656281] RDX: 000000000000003b RSI: ffff88013da87b48 RDI: ffff88009a7f0000
[ 281.656282] RBP: ffff88009864be18 R08: 0000000000000020 R09: ffff88009881c410
[ 281.656282] R10: ffff88013da87b48 R11: 0000000000000020 R12: 0000000000000000
[ 281.656283] R13: ffff88009a7f0000 R14: 0000000000000020 R15: 0000000000000000
[ 281.656284] FS: 00007f5df6c18700(0000) GS:ffff88013fd80000(0000) knlGS:0000000000000000
[ 281.656284] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 281.656285] CR2: 0000000000000008 CR3: 000000009a537000 CR4: 00000000000006e0
[ 281.656285] Stack:
[ 281.656287] 0000000000000008 ffff88009864befc 0000000000000001 000000000000005d
[ 281.656288] 0000000000000000 ffff88009864bee0 ffffffff81203d62 ffff88009a914920
[ 281.656289] ffff880128f1b480 0000000000000000 ffff88013d876be0 ffff88013d876c08
[ 281.656289] Call Trace:
[ 281.656293] [<ffffffff81203d62>] ? do_filp_open+0x92/0xe0
[ 281.656294] [<ffffffff81235528>] fsnotify+0x378/0x490
[ 281.656295] [<ffffffff81202ea6>] ? getname_flags+0x56/0x1f0
[ 281.656298] [<ffffffff811cfee7>] ? kmem_cache_alloc+0x197/0x200
[ 281.656300] [<ffffffff811f22e9>] do_sys_open+0x179/0x1f0
[ 281.656301] [<ffffffff811f237e>] SyS_open+0x1e/0x20
[ 281.656304] [<ffffffff818defae>] entry_SYSCALL_64_fastpath+0x12/0x71
[ 281.656316] Code: 95 c0 4d 85 e4 0f 95 c2 84 c0 0f 84 25 02 00 00 84 d2 0f 84 fc 01 00 00 41 8b 47 40 41 0b 44 24 40 41 8b 17 41 0b 14 24 41 89 c7 <49> 8b 44 24 08 48 8b 80 08 01 00 00 48 85 c0 74 42 8b 40 30 85
[ 281.656317] RIP [<ffffffff81238931>] fanotify_handle_event+0xa1/0x3b0
[ 281.656317] RSP <ffff88009864bda8>
[ 281.656318] CR2: 0000000000000008
[ 281.656319] ---[ end trace 3422bb7dccb165e4 ]---
[ 281.656320] Kernel panic - not syncing: Fatal exception
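The decoded Code bytes pin down the fault: the marked instruction <49> 8b 44 24 08 is mov 0x8(%r12),%rax, and the register dump shows R12 = 0000000000000000, so fanotify_handle_event() loads a field sitting 8 bytes into a structure through a NULL pointer, which is exactly why CR2 reads 0x8. A minimal userspace illustration of why the fault address equals the member offset (hypothetical struct, not the real fanotify types):

  #include <stddef.h>
  #include <stdio.h>

  /* Any x86_64 struct whose second pointer-sized member sits at offset 8.
   * Loading that member through a NULL base faults at virtual address 8,
   * which is what CR2 reports in the oops above. */
  struct demo_mark {
          unsigned long mask;     /* offset 0 */
          void *private_data;     /* offset 8 */
  };

  int main(void)
  {
          printf("offset of private_data: %zu\n",
                 offsetof(struct demo_mark, private_data));
          /* struct demo_mark *m = NULL; m->private_data
           * would fault at address 8, as in the oops. */
          return 0;
  }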
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Ying Huang
[lkp] [mm, page_alloc] 625fab12b4: 160.2% fileio.time.involuntary_context_switches
by kernel test robot
FYI, we noticed the below changes on
https://git.kernel.org/pub/scm/linux/kernel/git/mel/linux-balancenuma mm-vmscan-node-lru-v2r26
commit 625fab12b407ee064c3f9d10b304e5aba87ad1ce ("mm, page_alloc: Remove fair zone allocation policy")
=========================================================================================
compiler/cpufreq_governor/disk/filenum/fs/iomode/kconfig/nr_threads/period/rootfs/rwmode/size/tbox_group/testcase:
gcc-4.9/performance/1HDD/1024f/xfs/sync/x86_64-rhel/100%/600s/debian-x86_64-2015-02-07.cgz/seqwr/64G/lkp-sb02/fileio
commit:
94388badc0ebce2bdcc516dc413844921bd768fa
625fab12b407ee064c3f9d10b304e5aba87ad1ce
94388badc0ebce2b 625fab12b407ee064c3f9d10b3
---------------- --------------------------
%stddev %change %stddev
\ | \
14265 ± 0% +160.2% 37121 ± 2% fileio.time.involuntary_context_switches
36.75 ± 1% +51.0% 55.50 ± 0% fileio.time.percent_of_cpu_this_job_got
224.17 ± 1% +50.0% 336.17 ± 1% fileio.time.system_time
1263295 ± 1% -26.4% 930092 ± 0% fileio.time.voluntary_context_switches
820.99 ± 0% -16.1% 688.70 ± 0% uptime.idle
32784 ± 3% +9.2% 35813 ± 3% softirqs.RCU
543300 ± 0% +21.8% 661679 ± 0% softirqs.TIMER
2.581e+08 ± 2% -55.1% 1.158e+08 ± 1% cpuidle.C1-SNB.time
991217 ± 2% -22.9% 763992 ± 1% cpuidle.C1-SNB.usage
34907285 ± 10% -15.4% 29536292 ± 4% cpuidle.C1E-SNB.time
91859362 ± 5% -19.0% 74382776 ± 5% cpuidle.POLL.time
231879 ± 6% -31.1% 159807 ± 5% cpuidle.POLL.usage
14265 ± 0% +160.2% 37121 ± 2% time.involuntary_context_switches
36.75 ± 1% +51.0% 55.50 ± 0% time.percent_of_cpu_this_job_got
224.17 ± 1% +50.0% 336.17 ± 1% time.system_time
0.84 ± 5% +27.1% 1.07 ± 1% time.user_time
1263295 ± 1% -26.4% 930092 ± 0% time.voluntary_context_switches
42.95 ± 0% +22.2% 52.48 ± 0% turbostat.%Busy
1278 ± 0% +21.6% 1555 ± 0% turbostat.Avg_MHz
12.53 ± 2% -48.0% 6.51 ± 1% turbostat.CPU%c1
22.19 ± 0% +12.8% 25.04 ± 0% turbostat.CorWatt
25.85 ± 0% +11.0% 28.70 ± 0% turbostat.PkgWatt
52830 ± 0% +1.8% 53782 ± 0% vmstat.io.bo
713.50 ± 7% -33.6% 473.75 ± 2% vmstat.memory.buff
1.00 ± 0% +100.0% 2.00 ± 0% vmstat.procs.r
5310 ± 1% -10.2% 4766 ± 0% vmstat.system.cs
2231 ± 1% +20.0% 2678 ± 0% vmstat.system.in
2247 ± 6% -12.3% 1971 ± 3% slabinfo.anon_vma.active_objs
2247 ± 6% -12.3% 1971 ± 3% slabinfo.anon_vma.num_objs
954.00 ± 10% +24.4% 1186 ± 3% slabinfo.cred_jar.active_objs
20832 ± 3% +345.2% 92754 ± 2% slabinfo.kmalloc-64.active_objs
325.25 ± 3% +345.4% 1448 ± 2% slabinfo.kmalloc-64.active_slabs
20832 ± 3% +345.2% 92755 ± 2% slabinfo.kmalloc-64.num_objs
325.25 ± 3% +345.4% 1448 ± 2% slabinfo.kmalloc-64.num_slabs
2541 ± 1% +18.8% 3018 ± 1% slabinfo.radix_tree_node.active_slabs
2541 ± 1% +18.8% 3018 ± 1% slabinfo.radix_tree_node.num_slabs
146398 ± 0% -82.4% 25752 ± 1% meminfo.Active
1660 ± 0% -100.0% 0.00 ± -1% meminfo.Active(anon)
144736 ± 0% -82.2% 25752 ± 1% meminfo.Active(file)
478122 ± 0% -52.4% 227480 ± 0% meminfo.Dirty
171938 ± 6% +94.9% 335031 ± 2% meminfo.Inactive
171938 ± 6% +11.8% 192237 ± 5% meminfo.Inactive(anon)
411066 ± 2% +10.3% 453230 ± 1% meminfo.MemAvailable
171939 ± 6% +11.8% 192238 ± 5% meminfo.MemFree
21148 ± 0% +21.8% 25752 ± 1% meminfo.SUnreclaim
21148 ± 0% -87.0% 2743 ± 0% meminfo.Unevictable
88635 ± 0% -13.8% 76382 ± 2% meminfo.Writeback
0.00 ± -1% +Inf% 14490 ± 57% latency_stats.avg.wait_iff_congested.__alloc_pages_slowpath.__alloc_pages_nodemask.alloc_pages_current.pte_alloc_one.__pte_alloc.copy_page_range.copy_process._do_fork.SyS_clone.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 11880 ± 71% latency_stats.avg.wait_iff_congested.__alloc_pages_slowpath.__alloc_pages_nodemask.alloc_pages_current.pte_alloc_one.__pte_alloc.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
16059 ±118% -95.7% 693.00 ±126% latency_stats.avg.xfs_buf_submit_wait._xfs_buf_read.xfs_buf_read_map.xfs_trans_read_buf_map.xfs_btree_read_buf_block.xfs_btree_lookup_get_block.xfs_btree_lookup.xfs_alloc_lookup_eq.xfs_free_ag_extent.xfs_free_extent.xfs_trans_free_extent.xfs_bmap_finish
14149 ± 3% -74.7% 3573 ± 6% latency_stats.hits.balance_dirty_pages.balance_dirty_pages_ratelimited.generic_perform_write.xfs_file_buffered_aio_write.xfs_file_write_iter.__vfs_write.vfs_write.SyS_pwrite64.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 14490 ± 57% latency_stats.max.wait_iff_congested.__alloc_pages_slowpath.__alloc_pages_nodemask.alloc_pages_current.pte_alloc_one.__pte_alloc.copy_page_range.copy_process._do_fork.SyS_clone.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 12994 ± 70% latency_stats.max.wait_iff_congested.__alloc_pages_slowpath.__alloc_pages_nodemask.alloc_pages_current.pte_alloc_one.__pte_alloc.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
16059 ±118% -95.7% 693.00 ±126% latency_stats.max.xfs_buf_submit_wait._xfs_buf_read.xfs_buf_read_map.xfs_trans_read_buf_map.xfs_btree_read_buf_block.xfs_btree_lookup_get_block.xfs_btree_lookup.xfs_alloc_lookup_eq.xfs_free_ag_extent.xfs_free_extent.xfs_trans_free_extent.xfs_bmap_finish
1.591e+08 ± 3% -69.6% 48384300 ± 0% latency_stats.sum.balance_dirty_pages.balance_dirty_pages_ratelimited.generic_perform_write.xfs_file_buffered_aio_write.xfs_file_write_iter.__vfs_write.vfs_write.SyS_pwrite64.entry_SYSCALL_64_fastpath
42101 ±173% +544.4% 271288 ±102% latency_stats.sum.call_rwsem_down_write_failed.xfs_ilock.xfs_vn_update_time.file_update_time.xfs_file_aio_write_checks.xfs_file_buffered_aio_write.xfs_file_write_iter.__vfs_write.vfs_write.SyS_pwrite64.entry_SYSCALL_64_fastpath
29214 ± 74% -33.4% 19470 ± 0% latency_stats.sum.wait_iff_congested.__alloc_pages_slowpath.__alloc_pages_nodemask.alloc_pages_current.alloc_kmem_pages.kmalloc_order.kmalloc_order_trace.devkmsg_open.memory_open.chrdev_open.do_dentry_open.vfs_open
0.00 ± -1% +Inf% 14490 ± 57% latency_stats.sum.wait_iff_congested.__alloc_pages_slowpath.__alloc_pages_nodemask.alloc_pages_current.pte_alloc_one.__pte_alloc.copy_page_range.copy_process._do_fork.SyS_clone.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 17260 ± 76% latency_stats.sum.wait_iff_congested.__alloc_pages_slowpath.__alloc_pages_nodemask.alloc_pages_current.pte_alloc_one.__pte_alloc.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
16059 ±118% -95.7% 693.00 ±126% latency_stats.sum.xfs_buf_submit_wait._xfs_buf_read.xfs_buf_read_map.xfs_trans_read_buf_map.xfs_btree_read_buf_block.xfs_btree_lookup_get_block.xfs_btree_lookup.xfs_alloc_lookup_eq.xfs_free_ag_extent.xfs_free_extent.xfs_trans_free_extent.xfs_bmap_finish
561.50 ± 4% +84.9% 1038 ± 4% proc-vmstat.compact_fail
19342446 ± 3% +10.1% 21300297 ± 2% proc-vmstat.compact_free_scanned
38803 ± 11% +184.1% 110237 ± 7% proc-vmstat.compact_isolated
167816 ± 7% +185.5% 479105 ± 3% proc-vmstat.compact_migrate_scanned
794.75 ± 4% +80.1% 1431 ± 4% proc-vmstat.compact_stall
108382 ±153% +329.3% 465280 ± 58% proc-vmstat.kswapd_inodesteal
6375 ± 3% +27.8% 8149 ± 2% proc-vmstat.kswapd_low_wmark_hit_quickly
67488 ± 2% -36.6% 42793 ± 2% proc-vmstat.nr_active_file
415.00 ± 0% -100.0% 0.00 ± -1% proc-vmstat.nr_alloc_batch
119527 ± 0% -52.5% 56832 ± 1% proc-vmstat.nr_dirty
5287 ± 0% +21.6% 6427 ± 1% proc-vmstat.nr_slab_unreclaimable
18134 ± 0% +366.6% 84625 ± 0% proc-vmstat.nr_unevictable
1.017e+09 ± 1% +71.9% 1.748e+09 ± 1% proc-vmstat.nr_vmscan_immediate_reclaim
22166 ± 0% -13.0% 19279 ± 4% proc-vmstat.nr_writeback
64023 ± 2% -57.2% 27420 ± 2% proc-vmstat.pgactivate
690575 ± 0% +15.4% 797060 ± 0% proc-vmstat.pgalloc_dma32
1327 ±120% +975.7% 14277 ± 64% proc-vmstat.pgmigrate_fail
12017 ± 8% +193.3% 35240 ± 4% proc-vmstat.pgmigrate_success
6469794 ± 1% +11.8% 7233894 ± 0% proc-vmstat.pgrotated
1.117e+09 ± 1% +95.6% 2.186e+09 ± 1% proc-vmstat.pgscan_direct
20246 ± 0% +373.6% 95891 ± 1% proc-vmstat.unevictable_pgs_culled
5347 ± 5% +75.3% 9371 ± 2% proc-vmstat.workingset_nodereclaim
58075 ± 5% +70.5% 99000 ± 3% sched_debug.cfs_rq:/.exec_clock.0
114182 ± 15% +26.0% 143907 ± 4% sched_debug.cfs_rq:/.exec_clock.2
121749 ± 12% +13.3% 138001 ± 1% sched_debug.cfs_rq:/.exec_clock.3
103780 ± 0% +23.8% 128512 ± 0% sched_debug.cfs_rq:/.exec_clock.avg
57783 ± 5% +70.4% 98484 ± 3% sched_debug.cfs_rq:/.exec_clock.min
30369 ± 5% -38.7% 18605 ± 10% sched_debug.cfs_rq:/.exec_clock.stddev
232.25 ± 34% +92.0% 446.00 ± 26% sched_debug.cfs_rq:/.load_avg.0
328.00 ± 31% +54.0% 505.00 ± 20% sched_debug.cfs_rq:/.load_avg.3
306.17 ± 4% +49.1% 456.51 ± 9% sched_debug.cfs_rq:/.load_avg.avg
809.84 ± 6% +37.0% 1109 ± 14% sched_debug.cfs_rq:/.load_avg.max
67.39 ± 31% +118.3% 147.12 ± 12% sched_debug.cfs_rq:/.load_avg.min
310.31 ± 8% +27.1% 394.34 ± 18% sched_debug.cfs_rq:/.load_avg.stddev
111578 ± 5% +98.4% 221416 ± 2% sched_debug.cfs_rq:/.min_vruntime.0
167309 ± 6% +42.7% 238775 ± 1% sched_debug.cfs_rq:/.min_vruntime.1
158623 ± 7% +58.3% 251082 ± 2% sched_debug.cfs_rq:/.min_vruntime.2
168117 ± 9% +44.3% 242659 ± 3% sched_debug.cfs_rq:/.min_vruntime.3
151407 ± 1% +57.5% 238483 ± 0% sched_debug.cfs_rq:/.min_vruntime.avg
180632 ± 3% +40.3% 253385 ± 1% sched_debug.cfs_rq:/.min_vruntime.max
111387 ± 5% +97.9% 220417 ± 2% sched_debug.cfs_rq:/.min_vruntime.min
26737 ± 3% -53.0% 12574 ± 19% sched_debug.cfs_rq:/.min_vruntime.stddev
0.82 ± 7% +27.4% 1.05 ± 12% sched_debug.cfs_rq:/.nr_running.avg
0.34 ± 22% +105.3% 0.70 ± 17% sched_debug.cfs_rq:/.nr_running.min
262.97 ± 12% +31.6% 346.05 ± 11% sched_debug.cfs_rq:/.runnable_load_avg.avg
38.16 ± 49% +159.3% 98.95 ± 16% sched_debug.cfs_rq:/.runnable_load_avg.min
55730 ± 18% -68.9% 17357 ± 23% sched_debug.cfs_rq:/.spread0.1
56538 ± 21% -62.4% 21237 ± 55% sched_debug.cfs_rq:/.spread0.3
39828 ± 8% -57.2% 17064 ± 31% sched_debug.cfs_rq:/.spread0.avg
69053 ± 6% -53.7% 31964 ± 23% sched_debug.cfs_rq:/.spread0.max
26737 ± 3% -53.0% 12573 ± 19% sched_debug.cfs_rq:/.spread0.stddev
353.25 ± 20% +80.7% 638.25 ± 27% sched_debug.cfs_rq:/.util_avg.0
391.25 ± 11% +75.6% 687.00 ± 18% sched_debug.cfs_rq:/.util_avg.2
448.25 ± 19% +85.4% 831.00 ± 11% sched_debug.cfs_rq:/.util_avg.3
432.79 ± 8% +63.3% 706.91 ± 10% sched_debug.cfs_rq:/.util_avg.avg
846.57 ± 5% +31.8% 1116 ± 7% sched_debug.cfs_rq:/.util_avg.max
216.25 ± 39% +117.6% 470.50 ± 14% sched_debug.cfs_rq:/.util_avg.min
695961 ± 12% -23.1% 534898 ± 14% sched_debug.cpu.avg_idle.2
354429 ± 20% -64.5% 125849 ± 16% sched_debug.cpu.avg_idle.min
212161 ± 11% +42.3% 301992 ± 2% sched_debug.cpu.avg_idle.stddev
232.00 ± 23% +74.9% 405.75 ± 9% sched_debug.cpu.cpu_load[0].2
265.62 ± 9% +28.4% 341.04 ± 8% sched_debug.cpu.cpu_load[0].avg
226.75 ± 23% +70.0% 385.50 ± 11% sched_debug.cpu.cpu_load[1].2
260.41 ± 8% +31.7% 342.93 ± 9% sched_debug.cpu.cpu_load[1].avg
40.23 ± 65% +190.8% 116.98 ± 28% sched_debug.cpu.cpu_load[1].min
229.50 ± 19% +72.0% 394.75 ± 10% sched_debug.cpu.cpu_load[2].2
258.82 ± 7% +35.4% 350.45 ± 9% sched_debug.cpu.cpu_load[2].avg
42.34 ± 63% +222.5% 136.55 ± 21% sched_debug.cpu.cpu_load[2].min
240.50 ± 14% +70.8% 410.75 ± 8% sched_debug.cpu.cpu_load[3].2
260.75 ± 25% +37.2% 357.75 ± 11% sched_debug.cpu.cpu_load[3].3
257.67 ± 8% +39.3% 358.95 ± 8% sched_debug.cpu.cpu_load[3].avg
41.75 ± 69% +235.6% 140.12 ± 20% sched_debug.cpu.cpu_load[3].min
254.75 ± 12% +66.0% 423.00 ± 10% sched_debug.cpu.cpu_load[4].2
246.75 ± 22% +46.4% 361.25 ± 16% sched_debug.cpu.cpu_load[4].3
257.68 ± 8% +41.8% 365.33 ± 8% sched_debug.cpu.cpu_load[4].avg
43.82 ± 60% +219.9% 140.18 ± 19% sched_debug.cpu.cpu_load[4].min
4021 ± 0% -16.0% 3378 ± 0% sched_debug.cpu.curr->pid.max
20.05 ± 67% +461.3% 112.53 ± 53% sched_debug.cpu.curr->pid.min
1691 ± 2% -20.3% 1348 ± 1% sched_debug.cpu.curr->pid.stddev
144.05 ± 34% +63.2% 235.02 ± 28% sched_debug.cpu.load.min
0.00 ± 7% -35.1% 0.00 ± 12% sched_debug.cpu.next_balance.stddev
87656 ± 3% +44.4% 126554 ± 3% sched_debug.cpu.nr_load_updates.0
128292 ± 0% +16.7% 149747 ± 1% sched_debug.cpu.nr_load_updates.avg
87534 ± 3% +44.1% 126162 ± 3% sched_debug.cpu.nr_load_updates.min
26961 ± 8% -44.2% 15041 ± 18% sched_debug.cpu.nr_load_updates.stddev
1.06 ± 11% +54.1% 1.64 ± 17% sched_debug.cpu.nr_running.avg
1.91 ± 11% +53.2% 2.92 ± 22% sched_debug.cpu.nr_running.max
0.34 ± 22% +112.7% 0.72 ± 15% sched_debug.cpu.nr_running.min
0.67 ± 7% +38.3% 0.92 ± 26% sched_debug.cpu.nr_running.stddev
437095 ± 6% -24.0% 332148 ± 3% sched_debug.cpu.nr_switches.1
434420 ± 11% -33.1% 290428 ± 4% sched_debug.cpu.nr_switches.2
415871 ± 10% -23.5% 318027 ± 3% sched_debug.cpu.nr_switches.3
460555 ± 0% -21.7% 360716 ± 1% sched_debug.cpu.nr_switches.avg
379951 ± 3% -24.2% 287882 ± 4% sched_debug.cpu.nr_switches.min
69023 ± 6% +25.2% 86397 ± 11% sched_debug.cpu.nr_switches.stddev
-9.50 ±-83% -778.9% 64.50 ± 49% sched_debug.cpu.nr_uninterruptible.0
23.23 ± 25% +219.9% 74.30 ± 25% sched_debug.cpu.nr_uninterruptible.max
-22.02 ±-32% +204.8% -67.12 ±-10% sched_debug.cpu.nr_uninterruptible.min
17.50 ± 28% +215.3% 55.18 ± 11% sched_debug.cpu.nr_uninterruptible.stddev
581166 ± 14% -23.6% 444135 ± 3% sched_debug.cpu.sched_count.1
501484 ± 8% -22.6% 388161 ± 11% sched_debug.cpu.sched_count.3
598204 ± 2% -13.2% 519290 ± 4% sched_debug.cpu.sched_count.avg
441282 ± 3% -19.5% 355031 ± 10% sched_debug.cpu.sched_count.min
147934 ± 4% +19.1% 176138 ± 10% sched_debug.cpu.sched_count.stddev
267440 ± 1% -17.4% 220961 ± 4% sched_debug.cpu.sched_goidle.0
209831 ± 7% -30.6% 145673 ± 3% sched_debug.cpu.sched_goidle.1
210490 ± 11% -41.7% 122757 ± 5% sched_debug.cpu.sched_goidle.2
200281 ± 10% -29.2% 141844 ± 3% sched_debug.cpu.sched_goidle.3
222011 ± 0% -28.9% 157809 ± 2% sched_debug.cpu.sched_goidle.avg
269793 ± 0% -17.7% 222161 ± 4% sched_debug.cpu.sched_goidle.max
183167 ± 3% -33.3% 122237 ± 4% sched_debug.cpu.sched_goidle.min
276225 ± 0% -30.6% 191807 ± 2% sched_debug.cpu.ttwu_count.0
210972 ± 8% -16.3% 176617 ± 4% sched_debug.cpu.ttwu_count.1
220636 ± 10% -15.3% 186902 ± 7% sched_debug.cpu.ttwu_count.2
212763 ± 4% -21.5% 167014 ± 1% sched_debug.cpu.ttwu_count.3
230149 ± 0% -21.5% 180585 ± 1% sched_debug.cpu.ttwu_count.avg
276922 ± 0% -29.1% 196398 ± 1% sched_debug.cpu.ttwu_count.max
194281 ± 3% -14.9% 165374 ± 2% sched_debug.cpu.ttwu_count.min
31652 ± 6% -59.6% 12798 ± 20% sched_debug.cpu.ttwu_count.stddev
31469 ± 5% +35.9% 42760 ± 6% sched_debug.cpu.ttwu_local.0
25336 ± 3% +22.4% 31021 ± 2% sched_debug.cpu.ttwu_local.avg
31881 ± 5% +35.3% 43121 ± 6% sched_debug.cpu.ttwu_local.max
20206 ± 5% +13.8% 22998 ± 6% sched_debug.cpu.ttwu_local.min
4634 ± 17% +70.9% 7918 ± 10% sched_debug.cpu.ttwu_local.stddev
0.02 ± 21% +58.1% 0.02 ± 22% sched_debug.rt_rq:/.rt_time.avg
lkp-sb02: Sandy Bridge-EP
Memory: 4G
fileio.time.involuntary_context_switches
120000 ++-----------------------------------------------------------------+
| OO O O |
100000 O+O O |
| |
| |
80000 ++ |
| |
60000 ++ |
| |
40000 ++ |
| O O O OO O O OO O O O |
| |
20000 ++ *.*.* *.*.**.*.*
| + :+ |
0 *+*-**-*-*-**-*-*-**-*-*-**-*-*-*-**-*-*-**-*-*-**------*----------+
proc-vmstat.nr_slab_unreclaimable
7000 ++-------------------------------------------------------------------+
O O OO O O O OO O O O O |
6000 ++O OO O O O |
| .*. .*. .*.*.**.*.|
5000 ++*.**.*.*.*.*.**.*.*.*.**.*.*.*.*.**.*.* **.*.*.*.** * *
| : |
4000 ++: |
|: |
3000 ++ |
|: |
2000 ++ |
|: |
1000 ++ |
| |
0 *+-------------------------------------------------------------------+
meminfo.SUnreclaim
30000 ++------------------------------------------------------------------+
| |
25000 O+O O O OO O O OO O O O OO O |
| OO O |
| *.**.*.*.*.**.*.*.**.*.*.*.**.*.*.*.**.*.*.*.**.*. *.*.*.*.**.*.*
20000 ++: *.* |
| : |
15000 ++ |
|: |
10000 ++ |
|: |
|: |
5000 ++ |
| |
0 *+------------------------------------------------------------------+
proc-vmstat.nr_alloc_batch
450 ++--------------------------------------------------------------------+
| *.*.*.*.*.**.*.*.*
400 ++ : |
350 ++ : |
| : |
300 ++ : |
250 ++*.*.**.*.*.*.*.**.*.*.*.*.**.*.*.*.*.*.**.*.*.*.*.* |
| : |
200 ++: |
150 ++ |
|: |
100 ++ |
50 ++ |
| |
0 *+--------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
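For reference, the time.* rows above are process resource-usage counters; voluntary and involuntary context switches map to the ru_nvcsw and ru_nivcsw fields of struct rusage. A minimal sketch of gathering the same counters for a child process (illustrative only, not lkp's code):

  #include <stdio.h>
  #include <sys/resource.h>
  #include <sys/wait.h>
  #include <unistd.h>

  int main(void)
  {
          struct rusage ru;
          pid_t pid = fork();

          if (pid == 0) {                  /* child: stand-in for fileio */
                  execlp("true", "true", (char *)NULL);
                  _exit(127);
          }
          waitpid(pid, NULL, 0);           /* reap the child ... */
          getrusage(RUSAGE_CHILDREN, &ru); /* ... then read its counters */
          printf("voluntary ctxt switches:   %ld\n", ru.ru_nvcsw);
          printf("involuntary ctxt switches: %ld\n", ru.ru_nivcsw);
          return 0;
  }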
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang
[lkp] [vmpressure] 78d1788350: No primary result change, 274.1% vm-scalability.time.involuntary_context_switches
by kernel test robot
FYI, we noticed the below changes on
https://github.com/0day-ci/linux Vladimir-Davydov/vmpressure-Fix-subtree-pressure-detection/20160128-003153
commit 78d1788350477246d516496d8d7684fa80ef7f18 ("vmpressure: Fix subtree pressure detection")
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/tbox_group/test/testcase:
gcc-4.9/performance/x86_64-rhel/debian-x86_64-2015-02-07.cgz/300s/ivb43/lru-file-mmap-read/vm-scalability
commit:
v4.5-rc1
78d1788350477246d516496d8d7684fa80ef7f18
v4.5-rc1 78d1788350477246d516496d8d
---------------- --------------------------
%stddev %change %stddev
\ | \
195486 ± 1% +274.1% 731248 ± 2% vm-scalability.time.involuntary_context_switches
2939 ± 21% +37.5% 4041 ± 12% numa-vmstat.node1.nr_anon_pages
897378 ± 1% +17.7% 1056357 ± 2% softirqs.RCU
4293 ± 4% +82.5% 7835 ± 2% vmstat.system.cs
120.40 ± 22% +41.6% 170.50 ± 19% cpuidle.C3-IVT.usage
4.723e+08 ± 4% +13.8% 5.375e+08 ± 5% cpuidle.C6-IVT.time
93336 ± 31% +611.8% 664396 ± 88% latency_stats.max.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
371333 ±166% -70.1% 110861 ±117% latency_stats.sum.pipe_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
195486 ± 1% +274.1% 731248 ± 2% time.involuntary_context_switches
717.20 ± 14% +20.3% 862.50 ± 6% time.major_page_faults
2724 ± 60% -93.6% 174.25 ±173% numa-meminfo.node0.AnonHugePages
1630 ±122% +222.2% 5254 ± 16% numa-meminfo.node1.AnonHugePages
11760 ± 21% +37.6% 16177 ± 12% numa-meminfo.node1.AnonPages
1.01 ± 4% +26.5% 1.28 ± 10% perf-profile.cycles-pp.__add_to_page_cache_locked.add_to_page_cache_lru.mpage_readpages.xfs_vm_readpages.__do_page_cache_readahead
0.90 ± 56% -72.4% 0.25 ± 95% perf-profile.cycles-pp.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.__do_page_cache_readahead.filemap_fault
0.90 ± 56% -72.5% 0.25 ± 95% perf-profile.cycles-pp.__page_cache_alloc.__do_page_cache_readahead.filemap_fault.xfs_filemap_fault.__do_fault
2.18 ± 2% +15.9% 2.52 ± 7% perf-profile.cycles-pp.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_zone_memcg.shrink_zone
1.04 ± 4% +16.8% 1.21 ± 7% perf-profile.cycles-pp.__rmqueue.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc
1.08 ± 6% +18.1% 1.28 ± 5% perf-profile.cycles-pp._raw_spin_lock_irqsave.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_zone_memcg
0.90 ± 56% -72.4% 0.25 ± 95% perf-profile.cycles-pp.alloc_pages_current.__page_cache_alloc.__do_page_cache_readahead.filemap_fault.xfs_filemap_fault
0.77 ± 20% +47.0% 1.14 ± 9% perf-profile.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__remove_mapping.shrink_page_list.shrink_inactive_list
1.38 ± 2% +15.8% 1.60 ± 7% perf-profile.cycles-pp.rmap_walk.try_to_unmap.shrink_page_list.shrink_inactive_list.shrink_zone_memcg
2.17 ± 2% -14.7% 1.85 ± 9% perf-profile.cycles-pp.shrink_inactive_list.shrink_zone_memcg.shrink_zone.kswapd.kthread
2.19 ± 2% -14.5% 1.87 ± 9% perf-profile.cycles-pp.shrink_zone.kswapd.kthread.ret_from_fork
2.18 ± 1% -14.8% 1.86 ± 9% perf-profile.cycles-pp.shrink_zone_memcg.shrink_zone.kswapd.kthread.ret_from_fork
1.46 ± 2% +19.5% 1.75 ± 10% perf-profile.cycles-pp.try_to_unmap.shrink_page_list.shrink_inactive_list.shrink_zone_memcg.shrink_zone
50.20 ±133% +273.0% 187.25 ±109% sched_debug.cfs_rq:/.load.5
56.60 ± 11% +97.4% 111.75 ± 55% sched_debug.cfs_rq:/.load_avg.0
17.60 ± 9% +25.0% 22.00 ± 14% sched_debug.cfs_rq:/.load_avg.19
47.00 ± 33% -54.8% 21.25 ± 13% sched_debug.cfs_rq:/.load_avg.23
21.60 ± 30% +139.6% 51.75 ± 63% sched_debug.cfs_rq:/.load_avg.30
20.80 ± 30% +50.2% 31.25 ± 26% sched_debug.cfs_rq:/.load_avg.36
16.40 ± 2% +18.9% 19.50 ± 11% sched_debug.cfs_rq:/.runnable_load_avg.14
16685 ±278% -374.2% -45753 ±-25% sched_debug.cfs_rq:/.spread0.12
119599 ± 47% -71.8% 33762 ±121% sched_debug.cfs_rq:/.spread0.27
106418 ± 42% -267.6% -178399 ±-221% sched_debug.cfs_rq:/.spread0.28
56797 ± 81% -136.5% -20757 ±-159% sched_debug.cfs_rq:/.spread0.3
63128 ± 63% -157.9% -36549 ±-295% sched_debug.cfs_rq:/.spread0.46
57759 ±100% -676.1% -332737 ±-164% sched_debug.cfs_rq:/.spread0.8
151767 ± 32% -40.2% 90684 ± 14% sched_debug.cfs_rq:/.spread0.max
855.00 ± 1% +7.5% 918.75 ± 7% sched_debug.cfs_rq:/.util_avg.20
16.40 ± 2% +18.9% 19.50 ± 11% sched_debug.cpu.cpu_load[0].14
16.40 ± 2% +18.9% 19.50 ± 11% sched_debug.cpu.cpu_load[1].14
16.40 ± 2% +18.9% 19.50 ± 11% sched_debug.cpu.cpu_load[2].14
16.40 ± 2% +18.9% 19.50 ± 11% sched_debug.cpu.cpu_load[3].14
16.40 ± 2% +18.9% 19.50 ± 11% sched_debug.cpu.cpu_load[4].14
1305 ± 1% +16.6% 1521 ± 13% sched_debug.cpu.curr->pid.23
50.20 ±133% +273.0% 187.25 ±109% sched_debug.cpu.load.5
0.00 ± 0% +Inf% 1.00 ± 0% sched_debug.cpu.nr_running.5
10459 ± 30% +86.1% 19462 ± 13% sched_debug.cpu.nr_switches.12
11042 ±113% +169.5% 29759 ± 31% sched_debug.cpu.nr_switches.14
17516 ± 69% +85.9% 32567 ± 46% sched_debug.cpu.nr_switches.18
8287 ± 69% +271.4% 30775 ± 71% sched_debug.cpu.nr_switches.21
16525 ± 59% +80.6% 29850 ± 41% sched_debug.cpu.nr_switches.23
8306 ± 36% +92.8% 16016 ± 8% sched_debug.cpu.nr_switches.24
9271 ± 64% +149.7% 23153 ± 30% sched_debug.cpu.nr_switches.25
6094 ± 49% +243.8% 20951 ± 31% sched_debug.cpu.nr_switches.26
6569 ± 32% +194.9% 19373 ± 17% sched_debug.cpu.nr_switches.30
10034 ±111% +295.8% 39717 ± 41% sched_debug.cpu.nr_switches.31
7417 ± 70% +205.4% 22657 ± 35% sched_debug.cpu.nr_switches.38
9204 ± 69% +125.4% 20746 ± 26% sched_debug.cpu.nr_switches.42
8742 ± 68% +172.8% 23845 ± 14% sched_debug.cpu.nr_switches.47
11267 ± 75% +311.6% 46375 ± 68% sched_debug.cpu.nr_switches.7
14373 ± 4% +74.8% 25131 ± 3% sched_debug.cpu.nr_switches.avg
68237 ± 15% +30.3% 88895 ± 10% sched_debug.cpu.nr_switches.max
1792 ± 11% +610.9% 12741 ± 4% sched_debug.cpu.nr_switches.min
-1.00 ±-244% -200.0% 1.00 ±212% sched_debug.cpu.nr_uninterruptible.21
-29.60 ±-22% -31.7% -20.21 ±-29% sched_debug.cpu.nr_uninterruptible.min
11570 ± 27% +75.8% 20344 ± 14% sched_debug.cpu.sched_count.12
18187 ± 52% +67.7% 30504 ± 40% sched_debug.cpu.sched_count.23
8874 ± 33% +83.7% 16301 ± 7% sched_debug.cpu.sched_count.24
9847 ± 60% +137.9% 23423 ± 29% sched_debug.cpu.sched_count.25
7299 ± 38% +190.8% 21225 ± 30% sched_debug.cpu.sched_count.26
7156 ± 30% +179.8% 20022 ± 16% sched_debug.cpu.sched_count.30
10606 ±105% +279.7% 40278 ± 41% sched_debug.cpu.sched_count.31
7977 ± 66% +187.5% 22938 ± 35% sched_debug.cpu.sched_count.38
10713 ± 47% +138.5% 25554 ± 54% sched_debug.cpu.sched_count.39
25911 ± 63% +76.9% 45826 ± 39% sched_debug.cpu.sched_count.4
9758 ± 66% +115.6% 21039 ± 26% sched_debug.cpu.sched_count.42
9252 ± 64% +160.6% 24110 ± 14% sched_debug.cpu.sched_count.47
12153 ± 69% +296.0% 48126 ± 65% sched_debug.cpu.sched_count.7
110528 ± 0% +9.1% 120637 ± 0% sched_debug.cpu.sched_count.avg
2364 ± 8% +455.3% 13129 ± 5% sched_debug.cpu.sched_count.min
816.60 ± 32% -40.5% 485.75 ± 16% sched_debug.cpu.sched_goidle.17
383.20 ± 22% +56.1% 598.00 ± 22% sched_debug.cpu.sched_goidle.23
204.20 ± 12% +210.7% 634.50 ± 86% sched_debug.cpu.sched_goidle.25
190.60 ± 26% +54.6% 294.75 ± 12% sched_debug.cpu.sched_goidle.31
4752 ± 36% +107.1% 9844 ± 12% sched_debug.cpu.ttwu_count.12
5428 ±115% +178.2% 15102 ± 31% sched_debug.cpu.ttwu_count.14
8902 ± 73% +80.8% 16095 ± 47% sched_debug.cpu.ttwu_count.18
3698 ± 62% +315.1% 15351 ± 75% sched_debug.cpu.ttwu_count.21
8371 ± 57% +80.6% 15121 ± 41% sched_debug.cpu.ttwu_count.23
4569 ± 33% +85.9% 8496 ± 10% sched_debug.cpu.ttwu_count.24
4884 ± 56% +152.6% 12339 ± 27% sched_debug.cpu.ttwu_count.25
3187 ± 39% +258.7% 11434 ± 23% sched_debug.cpu.ttwu_count.26
3513 ± 31% +183.5% 9962 ± 19% sched_debug.cpu.ttwu_count.30
5462 ± 97% +283.9% 20970 ± 41% sched_debug.cpu.ttwu_count.31
5605 ± 43% +85.8% 10415 ± 34% sched_debug.cpu.ttwu_count.34
4011 ± 66% +190.7% 11661 ± 33% sched_debug.cpu.ttwu_count.38
4305 ± 41% +183.7% 12215 ± 59% sched_debug.cpu.ttwu_count.39
4828 ± 73% +125.9% 10906 ± 25% sched_debug.cpu.ttwu_count.42
4560 ± 74% +168.1% 12227 ± 12% sched_debug.cpu.ttwu_count.47
5663 ± 78% +313.7% 23431 ± 67% sched_debug.cpu.ttwu_count.7
7326 ± 4% +74.8% 12808 ± 3% sched_debug.cpu.ttwu_count.avg
34268 ± 15% +29.6% 44398 ± 9% sched_debug.cpu.ttwu_count.max
792.03 ± 20% +698.7% 6325 ± 5% sched_debug.cpu.ttwu_count.min
3951 ± 45% +119.7% 8681 ± 13% sched_debug.cpu.ttwu_local.12
4585 ±137% +205.0% 13987 ± 33% sched_debug.cpu.ttwu_local.14
8045 ± 80% +88.6% 15173 ± 48% sched_debug.cpu.ttwu_local.18
2941 ± 79% +386.9% 14319 ± 79% sched_debug.cpu.ttwu_local.21
7262 ± 67% +95.1% 14172 ± 44% sched_debug.cpu.ttwu_local.23
3991 ± 40% +100.4% 8000 ± 10% sched_debug.cpu.ttwu_local.24
4237 ± 68% +162.6% 11128 ± 29% sched_debug.cpu.ttwu_local.25
2570 ± 41% +292.2% 10081 ± 30% sched_debug.cpu.ttwu_local.26
2891 ± 40% +215.8% 9132 ± 21% sched_debug.cpu.ttwu_local.30
4665 ±113% +318.2% 19509 ± 43% sched_debug.cpu.ttwu_local.31
4533 ± 50% +115.0% 9748 ± 35% sched_debug.cpu.ttwu_local.34
4131 ± 56% +145.5% 10144 ± 47% sched_debug.cpu.ttwu_local.37
3349 ± 82% +225.3% 10897 ± 34% sched_debug.cpu.ttwu_local.38
3630 ± 48% +219.9% 11611 ± 63% sched_debug.cpu.ttwu_local.39
4138 ± 82% +140.4% 9947 ± 29% sched_debug.cpu.ttwu_local.42
3356 ± 78% +238.4% 11359 ± 14% sched_debug.cpu.ttwu_local.47
4866 ± 91% +360.6% 22415 ± 71% sched_debug.cpu.ttwu_local.7
6406 ± 5% +84.0% 11787 ± 4% sched_debug.cpu.ttwu_local.avg
33023 ± 15% +29.7% 42825 ± 11% sched_debug.cpu.ttwu_local.max
450.27 ± 20% +1199.8% 5852 ± 4% sched_debug.cpu.ttwu_local.min
1.58 ± 4% -55.1% 0.71 ± 99% sched_debug.rt_rq:/.rt_time.12
ivb43: Ivytown Ivy Bridge-EP
Memory: 64G
vm-scalability.time.involuntary_context_switches
800000 ++-----------------------------------------------------------------+
| O O O O O O O |
700000 ++ O O O O O O O O O O O
| |
600000 O+ |
| O O |
500000 ++ O O O O |
| |
400000 ++ |
| |
300000 ++ |
| |
200000 *+.*..*.*..*..*..*..*.*..*..*..*..*.*..*..*..*.*..*..*..*..*.* |
| |
100000 ++-----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
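As a sanity check on the headline number (plain arithmetic from the first table row): (731248 - 195486) / 195486 = 2.741, i.e. +274.1% involuntary context switches, while the benchmark's primary metric is unchanged, hence "No primary result change" in the subject.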
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang
[lkp] [Smack] 7412301b76: INFO: suspicious RCU usage. ]
by kernel test robot
FYI, we noticed the below changes on
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit 7412301b76bd53ee53b860f611fc3b5b1c2245b5 ("Smack: Assign smack_known_web as default smk_in label for kernel thread's socket")
+----------------------------------+------------+------------+
| | 74f0414b2f | 7412301b76 |
+----------------------------------+------------+------------+
| boot_successes | 8 | 0 |
| boot_failures | 0 | 8 |
| INFO:suspicious_RCU_usage | 0 | 8 |
| backtrace:inet_ctl_sock_create | 0 | 8 |
| backtrace:dccp_v4_init_net | 0 | 8 |
| backtrace:ops_init | 0 | 8 |
| backtrace:register_pernet_subsys | 0 | 8 |
| backtrace:dccp_v4_init | 0 | 8 |
| backtrace:kernel_init_freeable | 0 | 8 |
| invoked_oom-killer:gfp_mask=0x | 0 | 1 |
| Mem-Info | 0 | 1 |
| Out_of_memory:Kill_process | 0 | 1 |
| backtrace:vfs_writev | 0 | 1 |
| backtrace:SyS_writev | 0 | 1 |
+----------------------------------+------------+------------+
[ 22.152970] DCCP: Activated CCID 2 (TCP-like)
[ 22.164997]
[ 22.165505] ===============================
[ 22.166384] [ INFO: suspicious RCU usage. ]
[ 22.167268] 4.0.0-rc1-00004-g7412301 #1 Not tainted
[ 22.168266] -------------------------------
[ 22.169154] net/ipv4/cipso_ipv4.c:1936 suspicious rcu_dereference_protected() usage!
[ 22.171025]
[ 22.171025] other info that might help us debug this:
[ 22.171025]
[ 22.172994]
[ 22.172994] rcu_scheduler_active = 1, debug_locks = 0
[ 22.174397] 3 locks held by swapper/0/1:
[ 22.175260] #0: (net_mutex){+.+.+.}, at: [<c21327ea>] register_pernet_subsys+0x25/0x67
[ 22.177432] #1: (slock-AF_INET/1){+.....}, at: [<c17fe59b>] smack_netlabel+0x53/0xff
[ 22.179575] #2: (rcu_read_lock){......}, at: [<c24cc6e4>] netlbl_sock_setattr+0x0/0x284
[ 22.181678]
[ 22.181678] stack backtrace:
[ 22.182729] CPU: 1 PID: 1 Comm: swapper/0 Not tainted 4.0.0-rc1-00004-g7412301 #1
[ 22.184267] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 22.185948] 00000000 00000000 d326bdd8 c2595e9d 00000001 d326bdf4 c110de36 c2bbda2a
[ 22.188179] d32e0000 cde48000 cdd920fb cdd92e10 d326be18 c2274f3f 0000000b 0000000c
[ 22.190362] cdd920f0 cdd92e10 ffffffa7 c2e490f8 00000002 d326be30 c24cc866 cde48000
[ 22.192585] Call Trace:
[ 22.193197] [<c2595e9d>] dump_stack+0xb8/0x108
[ 22.194090] [<c110de36>] lockdep_rcu_suspicious+0x1eb/0x202
[ 22.195138] [<c2274f3f>] cipso_v4_sock_setattr+0x17c/0x282
[ 22.196221] [<c24cc866>] netlbl_sock_setattr+0x182/0x284
[ 22.197240] [<c17fe5fd>] smack_netlabel+0xb5/0xff
[ 22.198166] [<c17fe6c5>] smack_socket_post_create+0x7e/0x8f
[ 22.199212] [<c17fa048>] security_socket_post_create+0x29/0x3c
[ 22.200342] [<c21161e0>] __sock_create+0x3cb/0x515
[ 22.201296] [<c211639e>] sock_create_kern+0x2a/0x3a
[ 22.202252] [<c2233e29>] inet_ctl_sock_create+0x33/0x88
[ 22.203251] [<c3155384>] dccp_v4_init_net+0x42/0x5a
[ 22.204248] [<c2131c52>] ops_init+0x270/0x2b2
[ 22.205277] [<c2233505>] ? inet_register_protosw+0x1d9/0x1fe
[ 22.206341] [<c2131d55>] register_pernet_operations+0xc1/0x13d
[ 22.207507] [<c315539c>] ? dccp_v4_init_net+0x5a/0x5a
[ 22.208532] [<c21327ff>] register_pernet_subsys+0x3a/0x67
[ 22.223269] [<c315543a>] dccp_v4_init+0x9e/0x112
[ 22.224242] [<c3090a32>] do_one_initcall+0x227/0x358
[ 22.225211] [<c10a6bac>] ? __usermodehelper_set_disable_depth+0x65/0x79
[ 22.226399] [<c3090f40>] kernel_init_freeable+0x3dd/0x55e
[ 22.227418] [<c3090f40>] ? kernel_init_freeable+0x3dd/0x55e
[ 22.228509] [<c2585c2c>] kernel_init+0x1b/0x1ec
[ 22.229425] [<c25b5b01>] ret_from_kernel_thread+0x21/0x30
[ 22.230450] [<c2585c11>] ? rest_init+0x274/0x274
[ 22.253519] tipc: Activated (version 2.0.0)
[ 22.257814] NET: Registered protocol family 30
[ 22.275239] tipc: Started in single node mode
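For context, net/ipv4/cipso_ipv4.c:1936 uses rcu_dereference_protected(), the update-side RCU accessor that asserts the caller holds the lock protecting writers; the trace shows it being reached from smack_netlabel() with only rcu_read_lock plus a socket lock held, so the lockdep condition evaluates false and the splat fires. A kernel-style sketch of the pattern (illustrative, not the cipso code):

  struct foo __rcu *gp;
  static DEFINE_SPINLOCK(g_lock);
  struct foo *p;

  /* Update side: the lockdep condition holds, no splat. */
  spin_lock(&g_lock);
  p = rcu_dereference_protected(gp, lockdep_is_held(&g_lock));
  spin_unlock(&g_lock);

  /* Read side: under rcu_read_lock() the condition is false, which
   * produces "suspicious rcu_dereference_protected() usage!"; the
   * correct accessor on this path is rcu_dereference(). */
  rcu_read_lock();
  p = rcu_dereference(gp);
  rcu_read_unlock();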
Thanks,
Ying Huang
[lkp] [f2fs] ea32c36edc: -41.5% fsmark.time.file_system_outputs
by kernel test robot
FYI, we noticed the below changes on
https://github.com/0day-ci/linux Jaegeuk-Kim/f2fs-avoid-multiple-node-page-writes-due-to-inline_data/20160124-041711
commit ea32c36edcf58f8c27653f3e5bc41f8c0b41e235 ("f2fs: avoid multiple node page writes due to inline_data")
=========================================================================================
compiler/cpufreq_governor/disk/filesize/fs/iterations/kconfig/nr_directories/nr_files_per_directory/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase:
gcc-4.9/performance/1HDD/9B/f2fs/1x/x86_64-rhel/16d/256fpd/32t/debian-x86_64-2015-02-07.cgz/fsyncBeforeClose/lkp-ne04/400M/fsmark
commit:
7fdec82af6a9e190e53d07a1463d2a9ac49a8750
ea32c36edcf58f8c27653f3e5bc41f8c0b41e235
7fdec82af6a9e190 ea32c36edcf58f8c27653f3e5b
---------------- --------------------------
%stddev %change %stddev
\ | \
1278 ± 0% -3.2% 1236 ± 0% fsmark.files_per_sec
1874824 ± 0% -41.5% 1097506 ± 0% fsmark.time.file_system_outputs
14748 ± 2% +9.2% 16099 ± 1% fsmark.time.involuntary_context_switches
44.50 ± 4% +19.1% 53.00 ± 0% fsmark.time.percent_of_cpu_this_job_got
36.30 ± 1% +18.6% 43.03 ± 0% fsmark.time.system_time
870163 ± 0% +19.6% 1041002 ± 0% fsmark.time.voluntary_context_switches
22186 ± 1% +10.4% 24490 ± 1% vmstat.system.cs
326553 ± 4% -10.6% 292026 ± 0% meminfo.Active
292915 ± 4% -11.9% 258035 ± 0% meminfo.Active(file)
30038 ± 10% +17.1% 35170 ± 13% softirqs.RCU
31822 ± 9% +16.4% 37037 ± 13% softirqs.SCHED
143471 ± 4% -15.6% 121104 ± 2% numa-meminfo.node1.Active
17081 ± 18% -21.7% 13370 ± 13% numa-meminfo.node1.Active(anon)
126389 ± 2% -14.8% 107733 ± 2% numa-meminfo.node1.Active(file)
16928 ± 18% -21.9% 13222 ± 13% numa-meminfo.node1.AnonPages
75337 ± 20% -42.1% 43626 ± 1% numa-vmstat.node0.nr_dirtied
4266 ± 18% -21.8% 3338 ± 13% numa-vmstat.node1.nr_active_anon
31597 ± 2% -14.8% 26931 ± 2% numa-vmstat.node1.nr_active_file
4226 ± 18% -22.0% 3297 ± 13% numa-vmstat.node1.nr_anon_pages
73228 ± 4% -11.9% 64502 ± 0% proc-vmstat.nr_active_file
233617 ± 0% -41.5% 136640 ± 0% proc-vmstat.nr_dirtied
6178 ± 0% -11.1% 5489 ± 0% proc-vmstat.numa_pte_updates
50891 ± 0% -39.8% 30627 ± 0% proc-vmstat.pgactivate
1874824 ± 0% -41.5% 1097506 ± 0% time.file_system_outputs
44.50 ± 4% +19.1% 53.00 ± 0% time.percent_of_cpu_this_job_got
36.30 ± 1% +18.6% 43.03 ± 0% time.system_time
870163 ± 0% +19.6% 1041002 ± 0% time.voluntary_context_switches
63126565 ± 2% +16.8% 73717213 ± 1% cpuidle.C1-NHM.time
49681942 ± 1% +34.6% 66867204 ± 3% cpuidle.C1E-NHM.time
56832 ± 1% +20.6% 68527 ± 5% cpuidle.C1E-NHM.usage
3.428e+08 ± 1% +14.9% 3.938e+08 ± 3% cpuidle.C3-NHM.time
314858 ± 3% +32.6% 417584 ± 11% cpuidle.C3-NHM.usage
10.96 ± 6% +6.8% 11.70 ± 1% turbostat.%Busy
20.82 ± 5% +15.7% 24.10 ± 1% turbostat.CPU%c1
28.15 ± 4% +13.0% 31.82 ± 3% turbostat.CPU%c3
40.07 ± 7% -19.2% 32.38 ± 4% turbostat.CPU%c6
9.22 ± 6% -23.4% 7.06 ± 3% turbostat.Pkg%pc3
36043 ± 6% -100.0% 0.00 ± -1% latency_stats.avg.call_rwsem_down_read_failed.f2fs_write_data_page.[f2fs].__f2fs_writepage.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync
7336 ± 0% +698.9% 58613 ± 1% latency_stats.hits.call_rwsem_down_read_failed.get_node_info.[f2fs].f2fs_write_node_page.[f2fs].sync_node_pages.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
22886 ± 0% -100.0% 0.00 ± -1% latency_stats.hits.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].__get_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_write_inline_data.[f2fs].f2fs_write_data_page.[f2fs].__f2fs_writepage.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages
0.00 ± -1% +Inf% 79634 ± 0% latency_stats.hits.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].__get_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_write_inline_data.[f2fs].f2fs_write_end.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs]
75051 ± 0% -100.0% 0.00 ± -1% latency_stats.hits.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].__get_node_page.[f2fs].get_node_page.[f2fs].get_node_page.[f2fs].update_inode_page.[f2fs].f2fs_write_end.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs]
17214 ± 0% +321.2% 72498 ± 0% latency_stats.hits.call_rwsem_down_read_failed.is_checkpointed_node.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
11958 ± 0% +454.6% 66321 ± 0% latency_stats.hits.call_rwsem_down_read_failed.need_dentry_mark.[f2fs].sync_node_pages.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
74681 ± 0% -100.0% 8.25 ± 61% latency_stats.hits.f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
130560 ± 11% -100.0% 0.00 ± -1% latency_stats.max.call_rwsem_down_read_failed.f2fs_write_data_page.[f2fs].__f2fs_writepage.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync
270069 ± 14% -100.0% 125.25 ± 81% latency_stats.max.f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
12155740 ± 1% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.f2fs_write_data_page.[f2fs].__f2fs_writepage.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync
1498927 ± 1% +858.3% 14364772 ± 0% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].f2fs_write_node_page.[f2fs].sync_node_pages.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
22102 ± 9% +187.5% 63548 ± 19% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].__get_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].do_write_data_page.[f2fs].f2fs_write_data_page.[f2fs].__f2fs_writepage.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages
5128495 ± 1% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].__get_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_write_inline_data.[f2fs].f2fs_write_data_page.[f2fs].__f2fs_writepage.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages
0.00 ± -1% +Inf% 29676287 ± 0% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].__get_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_write_inline_data.[f2fs].f2fs_write_end.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs]
24113264 ± 1% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].__get_node_page.[f2fs].get_node_page.[f2fs].get_node_page.[f2fs].update_inode_page.[f2fs].f2fs_write_end.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs]
11707 ± 6% +229.8% 38617 ± 24% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].write_data_page.[f2fs].do_write_data_page.[f2fs].f2fs_write_data_page.[f2fs].__f2fs_writepage.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_fdatawrite.sync_dirty_inodes.[f2fs]
3670765 ± 0% +549.9% 23857878 ± 0% latency_stats.sum.call_rwsem_down_read_failed.is_checkpointed_node.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
2546436 ± 0% +680.8% 19883041 ± 0% latency_stats.sum.call_rwsem_down_read_failed.need_dentry_mark.[f2fs].sync_node_pages.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
5243465 ± 2% +258.7% 18809634 ± 1% latency_stats.sum.call_rwsem_down_read_failed.need_inode_block_update.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
13216 ± 86% -58.0% 5547 ± 26% latency_stats.sum.call_rwsem_down_write_failed.f2fs_init_extent_tree.[f2fs].f2fs_new_inode.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
3020759 ± 2% +333.5% 13096379 ± 0% latency_stats.sum.call_rwsem_down_write_failed.set_node_addr.[f2fs].f2fs_write_node_page.[f2fs].sync_node_pages.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
1.722e+08 ± 2% -100.0% 630.50 ± 92% latency_stats.sum.f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
896.92 ± 10% +23.0% 1103 ± 6% sched_debug.cfs_rq:/.exec_clock.12
1663 ± 9% +40.9% 2343 ± 24% sched_debug.cfs_rq:/.exec_clock.2
767.96 ± 8% +16.5% 894.94 ± 3% sched_debug.cfs_rq:/.exec_clock.min
3.12 ± 17% -88.0% 0.38 ±110% sched_debug.cfs_rq:/.load_avg.min
3666 ± 13% +35.9% 4980 ± 6% sched_debug.cfs_rq:/.min_vruntime.12
4101 ± 10% +60.7% 6589 ± 26% sched_debug.cfs_rq:/.min_vruntime.14
4702 ± 7% +50.3% 7067 ± 12% sched_debug.cfs_rq:/.min_vruntime.15
7043 ± 14% +25.9% 8870 ± 11% sched_debug.cfs_rq:/.min_vruntime.2
4100 ± 4% +48.3% 6081 ± 15% sched_debug.cfs_rq:/.min_vruntime.9
6575 ± 3% +17.8% 7744 ± 3% sched_debug.cfs_rq:/.min_vruntime.avg
3239 ± 3% +31.0% 4242 ± 9% sched_debug.cfs_rq:/.min_vruntime.min
0.33 ±141% +1175.0% 4.25 ± 69% sched_debug.cfs_rq:/.nr_spread_over.2
-4695 ±-17% -40.2% -2807 ±-39% sched_debug.cfs_rq:/.spread0.15
136.75 ± 26% -25.8% 101.50 ± 15% sched_debug.cfs_rq:/.util_avg.3
48.62 ± 21% -80.7% 9.38 ± 59% sched_debug.cfs_rq:/.util_avg.min
43.46 ± 16% +60.2% 69.63 ± 21% sched_debug.cfs_rq:/.util_avg.stddev
9236 ± 4% +9.8% 10143 ± 6% sched_debug.cpu.nr_load_updates.10
9179 ± 4% +11.8% 10258 ± 5% sched_debug.cpu.nr_load_updates.14
9296 ± 4% +9.8% 10211 ± 4% sched_debug.cpu.nr_load_updates.8
64213 ± 7% +12.9% 72481 ± 7% sched_debug.cpu.nr_switches.0
49843 ± 2% +19.5% 59574 ± 8% sched_debug.cpu.nr_switches.1
35136 ± 10% +23.1% 43259 ± 5% sched_debug.cpu.nr_switches.10
27937 ± 6% +23.4% 34467 ± 6% sched_debug.cpu.nr_switches.11
38343 ± 10% +21.4% 46565 ± 10% sched_debug.cpu.nr_switches.12
35454 ± 10% +21.8% 43192 ± 3% sched_debug.cpu.nr_switches.14
29114 ± 7% +29.6% 37725 ± 20% sched_debug.cpu.nr_switches.15
58979 ± 5% +17.3% 69170 ± 7% sched_debug.cpu.nr_switches.2
56391 ± 5% +23.7% 69761 ± 8% sched_debug.cpu.nr_switches.6
35766 ± 9% +25.9% 45013 ± 2% sched_debug.cpu.nr_switches.8
43509 ± 5% +16.8% 50818 ± 3% sched_debug.cpu.nr_switches.avg
65222 ± 7% +12.5% 73372 ± 7% sched_debug.cpu.nr_switches.max
27549 ± 6% +18.2% 32567 ± 3% sched_debug.cpu.nr_switches.min
-6534 ± -5% +70.0% -11111 ± -2% sched_debug.cpu.nr_uninterruptible.0
-958.25 ±-13% +42.1% -1361 ±-10% sched_debug.cpu.nr_uninterruptible.1
1716 ± 20% +46.1% 2507 ± 6% sched_debug.cpu.nr_uninterruptible.10
1373 ± 8% +56.0% 2143 ± 10% sched_debug.cpu.nr_uninterruptible.11
1599 ± 7% +71.7% 2744 ± 6% sched_debug.cpu.nr_uninterruptible.12
1351 ± 20% +52.4% 2060 ± 3% sched_debug.cpu.nr_uninterruptible.13
1824 ± 8% +40.8% 2568 ± 4% sched_debug.cpu.nr_uninterruptible.14
1334 ± 10% +68.0% 2241 ± 4% sched_debug.cpu.nr_uninterruptible.15
-1140 ±-14% +44.6% -1649 ± -4% sched_debug.cpu.nr_uninterruptible.5
-1139 ±-15% +41.1% -1607 ±-17% sched_debug.cpu.nr_uninterruptible.7
1167 ± 13% +70.0% 1983 ± 4% sched_debug.cpu.nr_uninterruptible.8
1319 ± 7% +46.4% 1930 ± 6% sched_debug.cpu.nr_uninterruptible.9
1956 ± 8% +41.9% 2775 ± 5% sched_debug.cpu.nr_uninterruptible.max
-6537 ± -5% +70.0% -11114 ± -2% sched_debug.cpu.nr_uninterruptible.min
2035 ± 1% +62.9% 3315 ± 1% sched_debug.cpu.nr_uninterruptible.stddev
51578 ± 2% +17.3% 60512 ± 7% sched_debug.cpu.sched_count.1
35156 ± 10% +24.2% 43650 ± 4% sched_debug.cpu.sched_count.10
27959 ± 6% +25.4% 35049 ± 4% sched_debug.cpu.sched_count.11
38363 ± 10% +25.0% 47965 ± 7% sched_debug.cpu.sched_count.12
35618 ± 9% +21.8% 43369 ± 2% sched_debug.cpu.sched_count.14
29134 ± 7% +34.3% 39119 ± 18% sched_debug.cpu.sched_count.15
50741 ± 8% +30.2% 66049 ± 23% sched_debug.cpu.sched_count.3
62138 ± 8% +14.3% 71037 ± 6% sched_debug.cpu.sched_count.6
36028 ± 9% +25.0% 45039 ± 2% sched_debug.cpu.sched_count.8
27623 ± 6% +18.4% 32700 ± 3% sched_debug.cpu.sched_count.min
28067 ± 8% +10.2% 30921 ± 8% sched_debug.cpu.sched_goidle.0
21211 ± 3% +20.1% 25465 ± 8% sched_debug.cpu.sched_goidle.1
13529 ± 10% +20.1% 16248 ± 6% sched_debug.cpu.sched_goidle.10
11159 ± 5% +21.5% 13554 ± 6% sched_debug.cpu.sched_goidle.11
13666 ± 11% +18.9% 16252 ± 3% sched_debug.cpu.sched_goidle.14
11607 ± 8% +29.2% 15001 ± 25% sched_debug.cpu.sched_goidle.15
25465 ± 6% +15.8% 29484 ± 8% sched_debug.cpu.sched_goidle.2
24349 ± 5% +22.5% 29827 ± 10% sched_debug.cpu.sched_goidle.6
13756 ± 9% +23.4% 16972 ± 3% sched_debug.cpu.sched_goidle.8
18266 ± 4% +14.8% 20973 ± 4% sched_debug.cpu.sched_goidle.avg
28513 ± 7% +11.4% 31770 ± 7% sched_debug.cpu.sched_goidle.max
11005 ± 5% +15.1% 12672 ± 3% sched_debug.cpu.sched_goidle.min
15905 ± 9% +22.8% 19533 ± 8% sched_debug.cpu.ttwu_count.12
13379 ± 10% +22.5% 16391 ± 9% sched_debug.cpu.ttwu_count.13
15038 ± 11% +29.0% 19395 ± 8% sched_debug.cpu.ttwu_count.14
26020 ± 4% +22.2% 31789 ± 3% sched_debug.cpu.ttwu_count.2
25547 ± 7% +16.5% 29762 ± 5% sched_debug.cpu.ttwu_count.3
26667 ± 5% +31.7% 35117 ± 9% sched_debug.cpu.ttwu_count.4
26022 ± 9% +15.1% 29953 ± 3% sched_debug.cpu.ttwu_count.5
27546 ± 6% +31.4% 36193 ± 8% sched_debug.cpu.ttwu_count.6
25556 ± 5% +24.9% 31908 ± 13% sched_debug.cpu.ttwu_count.7
24624 ± 5% +18.5% 29191 ± 3% sched_debug.cpu.ttwu_count.avg
11891 ± 7% +23.4% 14671 ± 5% sched_debug.cpu.ttwu_count.min
21466 ± 2% +33.3% 28616 ± 2% sched_debug.cpu.ttwu_local.0
6323 ± 9% +29.1% 8161 ± 3% sched_debug.cpu.ttwu_local.1
4485 ± 9% +31.7% 5907 ± 6% sched_debug.cpu.ttwu_local.10
3238 ± 3% +25.5% 4063 ± 6% sched_debug.cpu.ttwu_local.11
5061 ± 10% +22.7% 6211 ± 9% sched_debug.cpu.ttwu_local.12
3264 ± 10% +28.0% 4178 ± 5% sched_debug.cpu.ttwu_local.13
4339 ± 9% +35.0% 5859 ± 4% sched_debug.cpu.ttwu_local.14
3638 ± 10% +21.4% 4415 ± 5% sched_debug.cpu.ttwu_local.15
6344 ± 6% +37.4% 8716 ± 2% sched_debug.cpu.ttwu_local.2
5839 ± 9% +28.8% 7523 ± 6% sched_debug.cpu.ttwu_local.3
6459 ± 6% +48.7% 9602 ± 13% sched_debug.cpu.ttwu_local.4
5643 ± 9% +29.8% 7322 ± 5% sched_debug.cpu.ttwu_local.5
6341 ± 4% +58.9% 10079 ± 14% sched_debug.cpu.ttwu_local.6
5599 ± 4% +34.9% 7552 ± 5% sched_debug.cpu.ttwu_local.7
4901 ± 6% +28.0% 6275 ± 5% sched_debug.cpu.ttwu_local.8
6031 ± 3% +33.5% 8053 ± 3% sched_debug.cpu.ttwu_local.avg
21563 ± 2% +34.9% 29080 ± 3% sched_debug.cpu.ttwu_local.max
3061 ± 6% +29.6% 3966 ± 4% sched_debug.cpu.ttwu_local.min
4253 ± 3% +36.5% 5807 ± 4% sched_debug.cpu.ttwu_local.stddev
lkp-ne04: Nehalem-EP
Memory: 12G
fsmark.time.file_system_outputs
1.9e+06 **-**-*-**--***-****-***-****-***-***-****-***-****-***-***-****-**
| * * |
1.8e+06 ++ |
1.7e+06 ++ |
| |
1.6e+06 ++ |
1.5e+06 ++ |
| |
1.4e+06 ++ |
1.3e+06 ++ |
| |
1.2e+06 ++ |
1.1e+06 ++ O OO OO |
OO OOOO OOO OOO OOOO OOO O OO O O |
1e+06 ++----------------------------------------------------------------+
proc-vmstat.nr_dirtied
240000 ++----------------*--*---------------------------------------------+
**.***.***.****.** * *.***.***.****.***.***.***.***.****.***.***.**
220000 ++ |
| |
| |
200000 ++ |
| |
180000 ++ |
| |
160000 ++ |
| |
| |
140000 OO OOO OOO OOOO OOO OOO OOO OOO OOOO O |
| |
120000 ++-----------------------------------------------------------------+
proc-vmstat.pgactivate
55000 ++------------------------------------------------------------------+
| |
**.***.***.* *.* .** .**
50000 ++ * : * * |
| : : |
| **.***.***.***.***.***.***.***.***.* *.***.** |
45000 ++ * |
| |
40000 ++ |
OO OOO OOO OOO OOO OOO |
| |
35000 ++ |
| |
| |
30000 ++---------------------OOO-OOO-OOO-OOO------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
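Per the job parameters above (filesize=9B, fs=f2fs, sync_method=fsyncBeforeClose), each fsmark file boils down to the loop below; 9-byte files are inline_data candidates, so every fsync exercises exactly the node-page write path this commit changes. An illustrative userspace sketch (not fsmark itself; the path name is made up):

  #include <fcntl.h>
  #include <unistd.h>

  static int write_one(const char *path)
  {
          int fd = open(path, O_CREAT | O_WRONLY | O_TRUNC, 0644);

          if (fd < 0)
                  return -1;
          write(fd, "123456789", 9);  /* filesize=9B per the job */
          fsync(fd);                  /* sync_method=fsyncBeforeClose */
          return close(fd);
  }

  int main(void)
  {
          return write_one("fsmark-file-000");
  }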
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang
[lkp] [gpio] 1b06d64f73: kmsg.dio-#e#-dio-#e:Unable_to_lock#-dio-#e_port_addresses(#-#)
by kernel test robot
FYI, we noticed the below changes on
https://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio.git devel
commit 1b06d64f73746c30ddba43bb57c30ba9a126f53b ("gpio: Add GPIO support for the ACCES 104-DIO-48E")
With your commit, the following new messages appear in the kernel log.
[ 6.176052] 104-dio-48e 104-dio-48e: Unable to lock 104-dio-48e port addresses (0x0-0x10)
[ 6.177538] 104-dio-48e: probe of 104-dio-48e failed with error -16
[ 6.178846] 104-idi-48 104-idi-48: Unable to lock 104-idi-48 port addresses (0x0-0x8)
[ 6.180359] 104-idi-48: probe of 104-idi-48 failed with error -16
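Error -16 is -EBUSY. The port range in the message starts at 0x0, which suggests the probe runs with no base address configured for these ISA-style cards, so the I/O region request fails before the device is touched. A plausible shape of the failing check (illustrative; the extent macro is a made-up name, the real driver may differ):

  if (!request_region(base, DIO48E_EXTENT, name)) {
          dev_err(dev, "Unable to lock %s port addresses (0x%X-0x%X)\n",
                  name, base, base + DIO48E_EXTENT);
          return -EBUSY;  /* logged as "failed with error -16" */
  }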
Thanks,
Ying Huang
[lkp] [sctp] 3a360b6d44: INFO: rcu_sched detected stalls on CPUs/tasks:
by kernel test robot
FYI, we noticed the below changes on
https://github.com/0day-ci/linux Marcelo-Ricardo-Leitner/sctp-add-GSO-support/20160128-010819
commit 3a360b6d44b7d6852207ab13294b7947df5ce307 ("sctp: Add GSO support")
+-------------------------------------------------------+------------+------------+
| | a5aaf18dc6 | 3a360b6d44 |
+-------------------------------------------------------+------------+------------+
| boot_successes | 35 | 22 |
| boot_failures | 3 | 58 |
| drm:intel_set_pch_fifo_underrun_reporting[i915]] | 3 | 22 |
| drm:intel_pch_fifo_underrun_irq_handler[i915]] | 3 | 22 |
| INFO:rcu_sched_self-detected_stall_on_CPU | 0 | 21 |
| backtrace:SYSC_connect | 0 | 12 |
| backtrace:SyS_connect | 0 | 12 |
| backtrace:vfs_write | 0 | 21 |
| backtrace:SyS_write | 0 | 21 |
| INFO:rcu_sched_detected_stalls_on_CPUs/tasks | 0 | 15 |
| backtrace:fb_flashcursor | 0 | 1 |
| BUG:unable_to_handle_kernel | 0 | 16 |
| Oops | 0 | 16 |
| RIP:sctp_packet_transmit[sctp] | 0 | 15 |
| Kernel_panic-not_syncing:Fatal_exception_in_interrupt | 0 | 16 |
| backtrace:SyS_sendmsg | 0 | 15 |
| backtrace:__sys_sendmsg | 0 | 12 |
| RIP:native_queued_spin_lock_slowpath | 0 | 7 |
| Kernel_panic-not_syncing:softlockup:hung_tasks | 0 | 7 |
| backtrace:__sys_recvmsg | 0 | 7 |
| backtrace:SyS_recvmsg | 0 | 7 |
+-------------------------------------------------------+------------+------------+
[ 438.630047] INFO: rcu_sched detected stalls on CPUs/tasks:
[ 438.630063] 15-...: (54714 ticks this GP) idle=e85/140000000000000/0 softirq=161/161 fqs=132567
[ 438.630097] (detected by 2, t=400008 jiffies, g=966, c=965, q=0)
[ 438.630098] Task dump for CPU 15:
[ 438.630100] netperf R running task 0 1691 1 0x0000000a
[ 438.630102] 0000000000000000 ffff88081f347900 ffffffffa031843c 00000000003478b8
[ 438.630111] 0000000000000246 0000000000000000 0000000000000000 ffff88081f347aa8
[ 438.630112] ffff88081f347ab0 ffff880c23a9ad00 ffff88081f3478f0 ffffffff810d2f9f
[ 438.630113] Call Trace:
[ 438.630116] [<ffffffff810d2f9f>] ? vprintk_default+0x1f/0x30
[ 438.630117] [<ffffffff8116e2ac>] ? printk+0x48/0x50
[ 438.630141] [<ffffffffa02f75b6>] ? sctp_chunk_put+0x56/0x60 [sctp]
[ 438.630145] [<ffffffffa030af62>] ? sctp_packet_transmit+0x972/0xab0 [sctp]
[ 438.630148] [<ffffffff810bdbf0>] ? remove_wait_queue+0x30/0x40
[ 438.630149] [<ffffffff812019db>] ? poll_freewait+0x4b/0xa0
[ 438.630153] [<ffffffffa030a3ba>] ? sctp_packet_append_chunk+0xfa/0x2a0 [sctp]
[ 438.630157] [<ffffffffa02fcda9>] ? sctp_outq_flush+0x259/0xb50 [sctp]
[ 438.630158] [<ffffffff8178720c>] ? __alloc_skb+0x4c/0x1d0
[ 438.630160] [<ffffffff811cc7fa>] ? kmem_cache_alloc+0x1da/0x200
[ 438.630164] [<ffffffffa02f71ba>] ? sctp_chunkify+0x2a/0xb0 [sctp]
[ 438.630167] [<ffffffffa02fde5a>] ? sctp_outq_uncork+0x1a/0x20 [sctp]
[ 438.630171] [<ffffffffa02f1896>] ? sctp_cmd_interpreter+0x12c6/0x16b0 [sctp]
[ 438.630174] [<ffffffffa02efe50>] ? sctp_do_sm+0xc0/0x330 [sctp]
[ 438.630175] [<ffffffff817d711d>] ? rt_set_nexthop+0xfd/0x290
[ 438.630178] [<ffffffffa030cf20>] ? sctp_cname+0x60/0x60 [sctp]
[ 438.630180] [<ffffffff817d8088>] ? __ip_route_output_key_hash+0x268/0x7b0
[ 438.630181] [<ffffffff8183849b>] ? xfrm_lookup_route+0x1b/0x80
[ 438.630182] [<ffffffff817d881a>] ? ip_route_output_flow+0x4a/0x60
[ 438.630184] [<ffffffff818d4cae>] ? _raw_spin_unlock_bh+0x1e/0x20
[ 438.630187] [<ffffffffa030bb90>] ? sctp_hash_transport+0xf0/0x270 [sctp]
[ 438.630190] [<ffffffffa0309dba>] ? sctp_primitive_ASSOCIATE+0x3a/0x50 [sctp]
[ 438.630193] [<ffffffffa030221b>] ? __sctp_connect+0x32b/0x410 [sctp]
[ 438.630196] [<ffffffffa0302476>] ? sctp_connect+0x56/0xa0 [sctp]
[ 438.630197] [<ffffffff8181537d>] ? inet_dgram_connect+0x3d/0x80
[ 438.630198] [<ffffffff8177f4f7>] ? SYSC_connect+0xb7/0xf0
[ 438.630199] [<ffffffff8120271e>] ? SyS_select+0xce/0x100
[ 438.630200] [<ffffffff8178015e>] ? SyS_connect+0xe/0x10
[ 438.630202] [<ffffffff818d526e>] ? entry_SYSCALL_64_fastpath+0x12/0x71
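A quick read of the stall header (assuming HZ=1000, the usual setting for this x86_64-rhel kconfig): t=400008 jiffies is roughly 400 seconds with CPU 15 pinned inside sctp_packet_transmit(), i.e. the new GSO path appears to spin without making forward progress, which is consistent with the later hard oopses at RIP sctp_packet_transmit in the failure table above.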
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Ying Huang
[lkp] [ftrace] 2fa389ecba: WARNING: CPU: 0 PID: 0 at mm/percpu.c:892 pcpu_alloc+0x331/0xaf0()
by kernel test robot
FYI, we noticed the below changes on
https://git.kernel.org/pub/scm/linux/kernel/git/jolsa/perf.git ftrace/speedup_11
commit 2fa389ecba3591351105e6d8d20f9ca7beae28f2 ("ftrace: Move savedcmd data to be per cpu")
[ 0.000000] ** NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE NOTICE **
[ 0.000000] **********************************************************
[ 0.000000] ------------[ cut here ]------------
[ 0.000000] WARNING: CPU: 0 PID: 0 at mm/percpu.c:892 pcpu_alloc+0x331/0xaf0()
[ 0.000000] illegal size (131092) or align (4) for percpu allocation
[ 0.000000] Modules linked in:
[ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 4.5.0-rc1-00204-g2fa389e #322
[ 0.000000] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 0.000000] c3ff77d8 c40cdf10 c36c3005 c40cdf40 c30cb669 c3ffafe8 c40cdf6c 00000000
[ 0.000000] c3ff77d8 0000037c c32a16e1 c32a16e1 0000037c 00020014 00020014 c40cdf58
[ 0.000000] c30cb7df 00000009 c40cdf50 c3ffafe8 c40cdf6c c40cdfac c32a16e1 c3ff77d8
[ 0.000000] Call Trace:
[ 0.000000] [<c36c3005>] dump_stack+0x32/0x4d
[ 0.000000] [<c30cb669>] warn_slowpath_common+0xe9/0x160
[ 0.000000] [<c32a16e1>] ? pcpu_alloc+0x331/0xaf0
[ 0.000000] [<c32a16e1>] ? pcpu_alloc+0x331/0xaf0
[ 0.000000] [<c30cb7df>] warn_slowpath_fmt+0x4f/0x60
[ 0.000000] [<c32a16e1>] pcpu_alloc+0x331/0xaf0
[ 0.000000] [<c31401ad>] ? debug_mutex_init+0x4d/0x70
[ 0.000000] [<c32a1ec7>] __alloc_percpu+0x27/0x40
[ 0.000000] [<c31ea0e4>] trace_create_savedcmd+0x24/0xb0
[ 0.000000] [<c4445cb1>] trace_init+0x169/0x441
[ 0.000000] [<c4407545>] start_kernel+0x71a/0xc20
[ 0.000000] [<c30a1f60>] ? native_write_cr3+0x30/0x30
[ 0.000000] [<c44063aa>] i386_start_kernel+0x165/0x177
[ 0.000000] ---[ end trace 3c5629a2a18b7be2 ]---
[ 0.000000] NR_IRQS:2304 nr_irqs:256 16
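The warning comes from the sanity check at the top of pcpu_alloc(): per-cpu allocations are capped at PCPU_MIN_UNIT_SIZE (32 KiB), and making savedcmd per cpu asks for 131092 bytes (~128 KiB), so trace_create_savedcmd() gets NULL back during early boot. Condensed from mm/percpu.c of this era:

  #define PCPU_MIN_UNIT_SIZE      PFN_ALIGN(32 << 10)     /* 32 KiB */

  if (unlikely(!size || size > PCPU_MIN_UNIT_SIZE || align > PAGE_SIZE)) {
          WARN(true, "illegal size (%zu) or align (%zu) for percpu allocation\n",
               size, align);
          return NULL;
  }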
Thanks,
Kernel Test Robot
[lkp] [goldfish] bd2f348db5: genirq: Flags mismatch irq 4. 00000000 (serial) vs. 00000080 (goldfish_pdev_bus)
by kernel test robot
FYI, we noticed the below changes on
https://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc.git char-misc-testing
commit bd2f348db5033b88b8b81caf58e7bdabf0b6961d ("goldfish: refactor goldfish platform configs")
[ 3.877696] microcode: Microcode Update Driver: v2.01 <tigran@aivazian.fsnet.co.uk>, Peter Oruba
[ 3.884624] hctosys: unable to open rtc device (rtc0)
[ 3.887390] genirq: Flags mismatch irq 4. 00000000 (serial) vs. 00000080 (goldfish_pdev_bus)
[ 3.890264] Freeing unused kernel memory: 424K (b8065000 - b80cf000)
[ 3.943140] genirq: Flags mismatch irq 4. 00000000 (serial) vs. 00000080 (goldfish_pdev_bus)
[ 3.949727] kmodloader (81) used greatest stack depth: 6656 bytes left
[ 3.965804] random: mktemp urandom read with 18 bits of entropy available
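Decoding the flags in the mismatch line: 0x00000080 is IRQF_SHARED. The refactored goldfish_pdev_bus requests IRQ 4 with IRQF_SHARED while the serial driver already owns that line without the flag; sharing requires every owner to pass IRQF_SHARED, so __setup_irq() rejects the request with -EBUSY and prints the message above. The failing call has the form (handler name assumed for illustration):

  ret = request_irq(4, goldfish_pdev_bus_isr, IRQF_SHARED,
                    "goldfish_pdev_bus", dev);  /* -> -EBUSY */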
Thanks,
Kernel Test Robot