[lkp] [x86/mm] d9da2c95d7: WARNING: CPU: 0 PID: 0 at arch/x86/kernel/cpu/common.c:1498 warn_pre_alternatives+0x1c/0x1e()
by kernel test robot
FYI, we noticed the below changes on
https://git.kernel.org/pub/scm/linux/kernel/git/luto/linux.git x86/pcid
commit d9da2c95d77fd14360cd902ff1bc3859452bb5bc ("x86/mm: If INVPCID is available, use it to flush global mappings")
[ 0.000000] BRK [0x0355d000, 0x0355dfff] PGTABLE
[ 0.000000] BRK [0x0355e000, 0x0355efff] PGTABLE
[ 0.000000] ------------[ cut here ]------------
[ 0.000000] WARNING: CPU: 0 PID: 0 at arch/x86/kernel/cpu/common.c:1498 warn_pre_alternatives+0x1c/0x1e()
[ 0.000000] You're using static_cpu_has before alternatives have run!
[ 0.000000] Modules linked in:
[ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 4.4.0-rc5-00003-gd9da2c9 #1
[ 0.000000] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 0.000000] 0000000000000000 ffffffff81a03c80 ffffffff81267ba8 ffffffff81a03cb8
[ 0.000000] ffffffff81057366 ffffffff8100eb09 ffff88000355cff8 0000007fc0000000
[ 0.000000] 0000000000100000 0000008000000000 ffffffff81a03d20 ffffffff810573c2
[ 0.000000] Call Trace:
[ 0.000000] [<ffffffff81267ba8>] dump_stack+0x19/0x1b
[ 0.000000] [<ffffffff81057366>] warn_slowpath_common+0x89/0xa2
[ 0.000000] [<ffffffff8100eb09>] ? warn_pre_alternatives+0x1c/0x1e
[ 0.000000] [<ffffffff810573c2>] warn_slowpath_fmt+0x43/0x4b
[ 0.000000] [<ffffffff8100eb09>] warn_pre_alternatives+0x1c/0x1e
[ 0.000000] [<ffffffff8102cd04>] native_flush_tlb_global+0x31/0x5e
[ 0.000000] [<ffffffff814d5f89>] ? _raw_spin_unlock+0x22/0x2b
[ 0.000000] [<ffffffff814ceeae>] phys_pud_init+0x287/0x2af
[ 0.000000] [<ffffffff814cf0a2>] kernel_physical_mapping_init+0x10b/0x1bd
[ 0.000000] [<ffffffff814ccbb7>] init_memory_mapping+0x24b/0x2e8
[ 0.000000] [<ffffffff81add18a>] init_mem_mapping+0x118/0x21f
[ 0.000000] [<ffffffff81acf10f>] setup_arch+0x65a/0xb6a
[ 0.000000] [<ffffffff812685f1>] ? idr_init+0x27/0x29
[ 0.000000] [<ffffffff81acbb86>] start_kernel+0xce/0x42e
[ 0.000000] [<ffffffff81acb120>] ? early_idt_handler_array+0x120/0x120
[ 0.000000] [<ffffffff81acb315>] x86_64_start_reservations+0x2a/0x2c
[ 0.000000] [<ffffffff81acb3fc>] x86_64_start_kernel+0xe5/0xf2
[ 0.000000] ---[ end trace 44e73404887f7749 ]---
[ 0.000000] ------------[ cut here ]------------
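The warning fires because static_cpu_has() is implemented with alternatives patching, which has not run yet when native_flush_tlb_global() is reached from init_mem_mapping() inside setup_arch(). A minimal sketch of the pattern, with invpcid_flush_all() standing in for whatever INVPCID helper the series adds (illustrative, not the actual patch):

	/*
	 * static_cpu_has() compiles to a branch that apply_alternatives()
	 * rewrites later; calling it this early is what trips
	 * warn_pre_alternatives() in the trace above.
	 */
	if (static_cpu_has(X86_FEATURE_INVPCID))
		invpcid_flush_all();		/* INVPCID type 2 */

	/*
	 * boot_cpu_has() tests boot_cpu_data.x86_capability directly and
	 * works before alternatives run (assuming CPU feature detection has
	 * already happened, which is the case by this point in setup_arch()),
	 * so it is the conventional choice on such early paths.
	 */
	if (boot_cpu_has(X86_FEATURE_INVPCID))
		invpcid_flush_all();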
Thanks,
Kernel Test Robot
[net] ceb5d58b21: -69.2% fsmark.files_per_sec
by kernel test robot
FYI, we noticed the below changes on
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit ceb5d58b217098a657f3850b7a2640f995032e62 ("net: fix sock_wake_async() rcu protection")
=========================================================================================
compiler/cpufreq_governor/disk/filesize/fs2/fs/iterations/kconfig/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase:
gcc-4.9/performance/1BRD_48G/4M/nfsv4/xfs/1x/x86_64-rhel/64t/debian-x86_64-2015-02-07.cgz/NoSync/lkp-hsx04/40G/fsmark
commit:
9cd3e072b0be17446e37d7414eac8a3499e0601e
ceb5d58b217098a657f3850b7a2640f995032e62
%stddev (9cd3e072b0be1744) | %change | %stddev (ceb5d58b217098a657f3850b7a)
12633189 ± 10% +1291.9% 1.758e+08 ± 9% fsmark.app_overhead
83.20 ± 0% -69.2% 25.60 ± 0% fsmark.files_per_sec
122.39 ± 0% +210.5% 380.02 ± 1% fsmark.time.elapsed_time
122.39 ± 0% +210.5% 380.02 ± 1% fsmark.time.elapsed_time.max
31.50 ± 1% -68.3% 10.00 ± 0% fsmark.time.percent_of_cpu_this_job_got
821975 ± 1% +20.9% 993866 ± 1% fsmark.time.voluntary_context_switches
145.50 ± 5% -51.2% 71.00 ± 75% numa-numastat.node1.other_node
351.01 ± 0% +17.7% 413.02 ± 0% pmeter.Average_Active_Power
0.24 ± 0% -73.9% 0.06 ± 0% pmeter.performance_per_watt
255321 ± 0% +18.3% 302089 ± 0% softirqs.NET_RX
258211 ± 4% +10.6% 285507 ± 4% softirqs.SCHED
165.59 ± 3% +157.5% 426.34 ± 1% uptime.boot
23326 ± 3% +154.5% 59370 ± 1% uptime.idle
122.39 ± 0% +210.5% 380.02 ± 1% time.elapsed_time
122.39 ± 0% +210.5% 380.02 ± 1% time.elapsed_time.max
31.50 ± 1% -68.3% 10.00 ± 0% time.percent_of_cpu_this_job_got
821975 ± 1% +20.9% 993866 ± 1% time.voluntary_context_switches
338095 ± 0% -67.4% 110274 ± 1% vmstat.io.bo
8925769 ± 2% +253.1% 31517980 ± 2% vmstat.memory.cache
0.00 ± 0% +Inf% 4.00 ± 0% vmstat.procs.b
23192 ± 1% -57.9% 9767 ± 2% vmstat.system.cs
2309 ± 1% -58.4% 960.25 ± 1% vmstat.system.in
3.9e+08 ± 2% +442.5% 2.116e+09 ± 5% cpuidle.C1-HSW.time
748008 ± 1% +30.1% 973158 ± 0% cpuidle.C1-HSW.usage
64262588 ± 30% +163.5% 1.693e+08 ± 6% cpuidle.C1E-HSW.time
47403089 ± 1% +396.1% 2.352e+08 ± 3% cpuidle.C3-HSW.time
66079 ± 0% -34.4% 43348 ± 2% cpuidle.C3-HSW.usage
1.711e+10 ± 0% +204.8% 5.216e+10 ± 1% cpuidle.C6-HSW.time
340992 ± 0% +60.0% 545567 ± 1% cpuidle.C6-HSW.usage
0.82 ± 0% -62.9% 0.30 ± 3% turbostat.%Busy
24.00 ± 0% -62.5% 9.00 ± 0% turbostat.Avg_MHz
6.52 ± 4% +37.3% 8.95 ± 3% turbostat.CPU%c1
0.47 ± 1% +80.2% 0.84 ± 3% turbostat.CPU%c3
43.12 ± 1% -66.2% 14.58 ± 2% turbostat.Pkg%pc6
118.20 ± 1% +46.9% 173.66 ± 0% turbostat.PkgWatt
123.57 ± 0% +5.3% 130.06 ± 1% turbostat.RAMWatt
28840 ± 1% +23.4% 35581 ± 4% proc-vmstat.nr_active_file
1833 ± 8% +60.2% 2936 ± 2% proc-vmstat.nr_dirty
2029989 ± 2% +277.0% 7653760 ± 2% proc-vmstat.nr_file_pages
1998019 ± 2% +281.1% 7615432 ± 2% proc-vmstat.nr_inactive_file
47332 ± 2% +198.7% 141374 ± 1% proc-vmstat.nr_slab_reclaimable
44811 ± 0% +12.6% 50464 ± 1% proc-vmstat.nr_slab_unreclaimable
300.00 ± 29% +699.8% 2399 ± 3% proc-vmstat.nr_unstable
1707 ± 2% +126.4% 3866 ± 1% proc-vmstat.nr_writeback
48443 ± 2% -34.4% 31787 ± 13% proc-vmstat.pgactivate
346120 ± 0% +189.7% 1002651 ± 1% proc-vmstat.pgfault
656396 ± 1% +126.8% 1488886 ± 1% proc-vmstat.pgfree
146936 ± 1% +18.4% 173933 ± 4% meminfo.Active
115363 ± 1% +23.4% 142325 ± 4% meminfo.Active(file)
8118848 ± 2% +277.1% 30613168 ± 2% meminfo.Cached
327564 ± 12% -31.8% 223525 ± 4% meminfo.Committed_AS
7348 ± 9% +60.8% 11819 ± 1% meminfo.Dirty
8001596 ± 2% +280.8% 30470769 ± 2% meminfo.Inactive
7991436 ± 2% +281.2% 30460312 ± 2% meminfo.Inactive(file)
1203 ± 27% +704.0% 9675 ± 3% meminfo.NFS_Unstable
189322 ± 2% +198.7% 565468 ± 1% meminfo.SReclaimable
179246 ± 0% +12.6% 201861 ± 1% meminfo.SUnreclaim
368570 ± 0% +108.2% 767329 ± 1% meminfo.Slab
6876 ± 4% +124.3% 15426 ± 1% meminfo.Writeback
2532239 ± 70% -100.0% 0.00 ± -1% latency_stats.avg.async_synchronize_cookie_domain.async_synchronize_full.do_init_module.load_module.SyS_finit_module.entry_SYSCALL_64_fastpath
843.50 ± 15% +1174.0% 10746 ± 17% latency_stats.avg.filename_create.SyS_mkdir.entry_SYSCALL_64_fastpath
55793 ± 0% +1232.3% 743345 ± 2% latency_stats.avg.rpc_wait_bit_killable.__rpc_wait_for_completion_task.nfs4_do_close.[nfsv4].__nfs4_close.[nfsv4].nfs4_close_sync.[nfsv4].nfs4_close_context.[nfsv4].__put_nfs_open_context.nfs_file_clear_open_context.nfs_file_release.__fput.____fput.task_work_run
52108 ± 1% +1186.2% 670200 ± 2% latency_stats.avg.rpc_wait_bit_killable.__rpc_wait_for_completion_task.nfs4_run_open_task.[nfsv4]._nfs4_open_and_get_state.[nfsv4].nfs4_do_open.[nfsv4].nfs4_atomic_open.[nfsv4].nfs_atomic_open.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
6963 ± 0% +682.1% 54465 ± 3% latency_stats.avg.rpc_wait_bit_killable.__rpc_wait_for_completion_task.nfs_initiate_commit.nfs_generic_commit_list.nfs_commit_inode.nfs_file_fsync_commit.nfs4_file_fsync.[nfsv4].vfs_fsync_range.vfs_fsync.nfs4_file_flush.[nfsv4].filp_close.__close_fd
5032124 ± 70% -100.0% 0.00 ± -1% latency_stats.max.async_synchronize_cookie_domain.async_synchronize_full.do_init_module.load_module.SyS_finit_module.entry_SYSCALL_64_fastpath
77079 ±131% +1249.2% 1039953 ± 10% latency_stats.max.filename_create.SyS_mkdir.entry_SYSCALL_64_fastpath
3449 ± 18% +14057.5% 488293 ± 0% latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_execute.rpc_run_task.nfs4_call_sync_sequence.[nfsv4]._nfs4_proc_access.[nfsv4].nfs4_proc_access.[nfsv4].nfs_do_access.nfs_permission.__inode_permission.inode_permission.link_path_walk
69016 ±146% +1136.2% 853184 ± 14% latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_execute.rpc_run_task.nfs4_call_sync_sequence.[nfsv4]._nfs4_proc_lookup.[nfsv4].nfs4_proc_lookup_common.[nfsv4].nfs4_proc_lookup.[nfsv4].nfs_lookup_revalidate.nfs4_lookup_revalidate.lookup_dcache.__lookup_hash
281574 ± 33% +1258.5% 3825187 ± 7% latency_stats.max.rpc_wait_bit_killable.__rpc_wait_for_completion_task.nfs4_do_close.[nfsv4].__nfs4_close.[nfsv4].nfs4_close_sync.[nfsv4].nfs4_close_context.[nfsv4].__put_nfs_open_context.nfs_file_clear_open_context.nfs_file_release.__fput.____fput.task_work_run
281460 ± 33% +1237.0% 3763032 ± 8% latency_stats.max.rpc_wait_bit_killable.__rpc_wait_for_completion_task.nfs4_run_open_task.[nfsv4]._nfs4_open_and_get_state.[nfsv4].nfs4_do_open.[nfsv4].nfs4_atomic_open.[nfsv4].nfs_atomic_open.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
194155 ± 50% +404.6% 979700 ± 0% latency_stats.max.rpc_wait_bit_killable.__rpc_wait_for_completion_task.nfs_initiate_commit.nfs_generic_commit_list.nfs_commit_inode.nfs_file_fsync_commit.nfs4_file_fsync.[nfsv4].vfs_fsync_range.vfs_fsync.nfs4_file_flush.[nfsv4].filp_close.__close_fd
191922 ± 51% +441.1% 1038403 ± 9% latency_stats.max.wait_on_page_bit.__filemap_fdatawait_range.filemap_fdatawait_range.filemap_write_and_wait_range.nfs4_file_fsync.[nfsv4].vfs_fsync_range.vfs_fsync.nfs4_file_flush.[nfsv4].filp_close.__close_fd.SyS_close.entry_SYSCALL_64_fastpath
5064479 ± 70% -100.0% 0.00 ± -1% latency_stats.sum.async_synchronize_cookie_domain.async_synchronize_full.do_init_module.load_module.SyS_finit_module.entry_SYSCALL_64_fastpath
6496349 ± 15% +1107.9% 78468583 ± 17% latency_stats.sum.filename_create.SyS_mkdir.entry_SYSCALL_64_fastpath
18312 ± 12% +16665.1% 3070021 ± 25% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_execute.rpc_run_task.nfs4_call_sync_sequence.[nfsv4]._nfs4_proc_access.[nfsv4].nfs4_proc_access.[nfsv4].nfs_do_access.nfs_permission.__inode_permission.inode_permission.link_path_walk
5018371 ± 5% +1727.4% 91703842 ± 7% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_execute.rpc_run_task.nfs4_call_sync_sequence.[nfsv4]._nfs4_proc_lookup.[nfsv4].nfs4_proc_lookup_common.[nfsv4].nfs4_proc_lookup.[nfsv4].nfs_lookup_revalidate.nfs4_lookup_revalidate.lookup_dcache.__lookup_hash
5.714e+08 ± 0% +1232.3% 7.613e+09 ± 2% latency_stats.sum.rpc_wait_bit_killable.__rpc_wait_for_completion_task.nfs4_do_close.[nfsv4].__nfs4_close.[nfsv4].nfs4_close_sync.[nfsv4].nfs4_close_context.[nfsv4].__put_nfs_open_context.nfs_file_clear_open_context.nfs_file_release.__fput.____fput.task_work_run
5.77e+08 ± 1% +1186.1% 7.42e+09 ± 2% latency_stats.sum.rpc_wait_bit_killable.__rpc_wait_for_completion_task.nfs4_run_open_task.[nfsv4]._nfs4_open_and_get_state.[nfsv4].nfs4_do_open.[nfsv4].nfs4_atomic_open.[nfsv4].nfs_atomic_open.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
71314091 ± 0% +682.1% 5.577e+08 ± 3% latency_stats.sum.rpc_wait_bit_killable.__rpc_wait_for_completion_task.nfs_initiate_commit.nfs_generic_commit_list.nfs_commit_inode.nfs_file_fsync_commit.nfs4_file_fsync.[nfsv4].vfs_fsync_range.vfs_fsync.nfs4_file_flush.[nfsv4].filp_close.__close_fd
89416426 ± 1% +1720.5% 1.628e+09 ± 3% latency_stats.sum.wait_on_page_bit.__filemap_fdatawait_range.filemap_fdatawait_range.filemap_write_and_wait_range.nfs4_file_fsync.[nfsv4].vfs_fsync_range.vfs_fsync.nfs4_file_flush.[nfsv4].filp_close.__close_fd.SyS_close.entry_SYSCALL_64_fastpath
30161 ± 1% +20.8% 36432 ± 11% numa-meminfo.node0.Active(file)
1712 ± 19% +93.3% 3310 ± 20% numa-meminfo.node0.Dirty
1917831 ± 7% +348.6% 8603774 ± 32% numa-meminfo.node0.FilePages
1887051 ± 8% +354.0% 8566759 ± 33% numa-meminfo.node0.Inactive
1882380 ± 8% +354.9% 8562079 ± 33% numa-meminfo.node0.Inactive(file)
3222689 ± 7% +318.3% 13481905 ± 32% numa-meminfo.node0.MemUsed
655.50 ± 23% +280.0% 2491 ± 27% numa-meminfo.node0.NFS_Unstable
46133 ± 4% +250.5% 161720 ± 29% numa-meminfo.node0.SReclaimable
94909 ± 7% +126.0% 214481 ± 22% numa-meminfo.node0.Slab
1624 ± 28% +186.8% 4658 ± 15% numa-meminfo.node0.Writeback
1999351 ± 3% +285.0% 7697871 ± 42% numa-meminfo.node1.FilePages
1970557 ± 3% +288.8% 7662501 ± 42% numa-meminfo.node1.Inactive
1970065 ± 3% +288.9% 7661884 ± 42% numa-meminfo.node1.Inactive(file)
3325566 ± 2% +268.5% 12255253 ± 41% numa-meminfo.node1.MemUsed
529.75 ± 56% +342.8% 2345 ± 40% numa-meminfo.node1.NFS_Unstable
47182 ± 2% +208.2% 145401 ± 40% numa-meminfo.node1.SReclaimable
92632 ± 3% +110.6% 195041 ± 32% numa-meminfo.node1.Slab
33235 ± 10% +29.6% 43078 ± 8% numa-meminfo.node2.Active
27437 ± 7% +23.4% 33867 ± 8% numa-meminfo.node2.Active(file)
1937945 ± 6% +217.8% 6159402 ± 25% numa-meminfo.node2.FilePages
1910156 ± 6% +220.7% 6125590 ± 26% numa-meminfo.node2.Inactive
1907674 ± 6% +221.0% 6122998 ± 26% numa-meminfo.node2.Inactive(file)
3266051 ± 3% +191.8% 9529210 ± 24% numa-meminfo.node2.MemUsed
546.25 ± 44% +294.3% 2153 ± 32% numa-meminfo.node2.NFS_Unstable
44530 ± 3% +151.2% 111877 ± 21% numa-meminfo.node2.SReclaimable
38553 ± 9% +26.2% 48635 ± 7% numa-meminfo.node2.SUnreclaim
83084 ± 4% +93.2% 160513 ± 16% numa-meminfo.node2.Slab
1483 ± 28% +121.6% 3287 ± 26% numa-meminfo.node2.Writeback
1607 ± 11% +93.9% 3116 ± 21% numa-meminfo.node3.Dirty
2032818 ± 2% +300.7% 8145969 ± 40% numa-meminfo.node3.FilePages
2003573 ± 2% +304.7% 8109341 ± 40% numa-meminfo.node3.Inactive
2001059 ± 2% +305.1% 8106773 ± 40% numa-meminfo.node3.Inactive(file)
62663610 ± 0% -14.4% 53639868 ± 9% numa-meminfo.node3.MemFree
3372020 ± 5% +267.6% 12395762 ± 40% numa-meminfo.node3.MemUsed
756.00 ± 30% +277.6% 2854 ± 17% numa-meminfo.node3.NFS_Unstable
48476 ± 5% +202.1% 146464 ± 38% numa-meminfo.node3.SReclaimable
94866 ± 4% +108.0% 197295 ± 30% numa-meminfo.node3.Slab
1491 ± 27% +166.8% 3978 ± 19% numa-meminfo.node3.Writeback
7540 ± 1% +20.8% 9107 ± 11% numa-vmstat.node0.nr_active_file
449034 ± 8% +372.2% 2120560 ± 33% numa-vmstat.node0.nr_dirtied
416.50 ± 22% +99.7% 831.75 ± 20% numa-vmstat.node0.nr_dirty
479363 ± 7% +348.7% 2150934 ± 32% numa-vmstat.node0.nr_file_pages
470499 ± 8% +354.9% 2140510 ± 33% numa-vmstat.node0.nr_inactive_file
11531 ± 4% +250.6% 40429 ± 29% numa-vmstat.node0.nr_slab_reclaimable
149.00 ± 14% +317.6% 622.25 ± 27% numa-vmstat.node0.nr_unstable
408.75 ± 27% +184.1% 1161 ± 15% numa-vmstat.node0.nr_writeback
448390 ± 8% +372.5% 2118719 ± 33% numa-vmstat.node0.nr_written
890591 ± 8% +299.5% 3557907 ± 30% numa-vmstat.node0.numa_hit
857084 ± 9% +311.2% 3524392 ± 31% numa-vmstat.node0.numa_local
470395 ± 3% +302.9% 1895134 ± 43% numa-vmstat.node1.nr_dirtied
499736 ± 3% +285.1% 1924462 ± 42% numa-vmstat.node1.nr_file_pages
492415 ± 3% +289.0% 1915465 ± 42% numa-vmstat.node1.nr_inactive_file
11790 ± 2% +208.3% 36349 ± 40% numa-vmstat.node1.nr_slab_reclaimable
132.25 ± 55% +342.9% 585.75 ± 41% numa-vmstat.node1.nr_unstable
469832 ± 3% +303.0% 1893553 ± 43% numa-vmstat.node1.nr_written
935618 ± 4% +251.6% 3289245 ± 39% numa-vmstat.node1.numa_hit
875866 ± 5% +269.6% 3236968 ± 40% numa-vmstat.node1.numa_local
6858 ± 7% +23.4% 8466 ± 8% numa-vmstat.node2.nr_active_file
454614 ± 6% +232.1% 1509975 ± 26% numa-vmstat.node2.nr_dirtied
338.25 ± 19% +70.3% 576.00 ± 25% numa-vmstat.node2.nr_dirty
484397 ± 6% +217.9% 1539846 ± 25% numa-vmstat.node2.nr_file_pages
476829 ± 6% +221.0% 1530745 ± 26% numa-vmstat.node2.nr_inactive_file
11127 ± 3% +151.3% 27968 ± 21% numa-vmstat.node2.nr_slab_reclaimable
9636 ± 9% +26.2% 12158 ± 7% numa-vmstat.node2.nr_slab_unreclaimable
128.00 ± 54% +323.8% 542.50 ± 31% numa-vmstat.node2.nr_unstable
358.25 ± 31% +128.6% 819.00 ± 24% numa-vmstat.node2.nr_writeback
454063 ± 6% +232.3% 1508683 ± 26% numa-vmstat.node2.nr_written
885296 ± 3% +191.7% 2582782 ± 23% numa-vmstat.node2.numa_hit
808902 ± 3% +208.9% 2498881 ± 23% numa-vmstat.node2.numa_local
478325 ± 2% +319.5% 2006721 ± 41% numa-vmstat.node3.nr_dirtied
381.75 ± 15% +104.1% 779.25 ± 21% numa-vmstat.node3.nr_dirty
508139 ± 2% +300.8% 2036488 ± 40% numa-vmstat.node3.nr_file_pages
15666010 ± 0% -14.4% 13409974 ± 9% numa-vmstat.node3.nr_free_pages
500198 ± 2% +305.2% 2026689 ± 40% numa-vmstat.node3.nr_inactive_file
12118 ± 5% +202.2% 36615 ± 38% numa-vmstat.node3.nr_slab_reclaimable
175.25 ± 34% +306.0% 711.50 ± 17% numa-vmstat.node3.nr_unstable
410.25 ± 20% +143.2% 997.75 ± 19% numa-vmstat.node3.nr_writeback
477685 ± 2% +319.7% 2005052 ± 41% numa-vmstat.node3.nr_written
954369 ± 4% +244.3% 3285965 ± 38% numa-vmstat.node3.numa_hit
870176 ± 4% +268.0% 3201883 ± 39% numa-vmstat.node3.numa_local
2722 ± 5% +28.1% 3487 ± 3% slabinfo.RAW.active_objs
2722 ± 5% +28.1% 3487 ± 3% slabinfo.RAW.num_objs
956107 ± 3% +294.0% 3766937 ± 2% slabinfo.buffer_head.active_objs
24515 ± 3% +294.0% 96587 ± 2% slabinfo.buffer_head.active_slabs
956107 ± 3% +294.0% 3766937 ± 2% slabinfo.buffer_head.num_objs
24515 ± 3% +294.0% 96587 ± 2% slabinfo.buffer_head.num_slabs
2152 ± 3% +28.7% 2770 ± 1% slabinfo.file_lock_cache.active_objs
2152 ± 3% +28.7% 2770 ± 1% slabinfo.file_lock_cache.num_objs
8496 ± 0% +38.6% 11777 ± 2% slabinfo.kmalloc-1024.active_objs
269.25 ± 1% +40.4% 378.00 ± 2% slabinfo.kmalloc-1024.active_slabs
8632 ± 1% +40.4% 12119 ± 2% slabinfo.kmalloc-1024.num_objs
269.25 ± 1% +40.4% 378.00 ± 2% slabinfo.kmalloc-1024.num_slabs
28517 ± 1% +120.5% 62871 ± 1% slabinfo.kmalloc-128.active_objs
450.50 ± 1% +125.0% 1013 ± 1% slabinfo.kmalloc-128.active_slabs
28856 ± 1% +124.9% 64912 ± 1% slabinfo.kmalloc-128.num_objs
450.50 ± 1% +125.0% 1013 ± 1% slabinfo.kmalloc-128.num_slabs
21296 ± 2% +19.2% 25386 ± 3% slabinfo.kmalloc-192.active_objs
507.25 ± 2% +19.5% 606.00 ± 3% slabinfo.kmalloc-192.active_slabs
21328 ± 2% +19.5% 25481 ± 3% slabinfo.kmalloc-192.num_objs
507.25 ± 2% +19.5% 606.00 ± 3% slabinfo.kmalloc-192.num_slabs
2033 ± 0% +19.2% 2423 ± 1% slabinfo.kmalloc-4096.active_objs
2043 ± 0% +21.9% 2491 ± 1% slabinfo.kmalloc-4096.num_objs
23627 ± 4% +37.4% 32465 ± 6% slabinfo.kmalloc-512.active_objs
370.75 ± 4% +38.7% 514.25 ± 5% slabinfo.kmalloc-512.active_slabs
23764 ± 4% +38.7% 32950 ± 5% slabinfo.kmalloc-512.num_objs
370.75 ± 4% +38.7% 514.25 ± 5% slabinfo.kmalloc-512.num_slabs
12413 ± 0% +102.5% 25134 ± 2% slabinfo.kmalloc-96.active_objs
306.50 ± 1% +96.4% 602.00 ± 2% slabinfo.kmalloc-96.active_slabs
12898 ± 1% +96.2% 25307 ± 2% slabinfo.kmalloc-96.num_objs
306.50 ± 1% +96.4% 602.00 ± 2% slabinfo.kmalloc-96.num_slabs
2121 ± 4% +40.8% 2986 ± 3% slabinfo.mnt_cache.active_objs
2121 ± 4% +40.8% 2986 ± 3% slabinfo.mnt_cache.num_objs
2314 ± 10% +141.4% 5587 ± 1% slabinfo.nfs_inode_cache.active_objs
2314 ± 10% +141.4% 5587 ± 1% slabinfo.nfs_inode_cache.num_objs
1975 ± 12% +39.7% 2759 ± 6% slabinfo.nfsd4_stateids.active_objs
1975 ± 12% +39.7% 2759 ± 6% slabinfo.nfsd4_stateids.num_objs
2764 ± 12% +39.0% 3843 ± 6% slabinfo.numa_policy.active_objs
2764 ± 12% +39.0% 3843 ± 6% slabinfo.numa_policy.num_objs
61065 ± 2% +227.7% 200116 ± 1% slabinfo.radix_tree_node.active_objs
1090 ± 2% +227.8% 3573 ± 1% slabinfo.radix_tree_node.active_slabs
61065 ± 2% +227.7% 200116 ± 1% slabinfo.radix_tree_node.num_objs
1090 ± 2% +227.8% 3573 ± 1% slabinfo.radix_tree_node.num_slabs
2265 ± 3% +265.8% 8287 ± 6% slabinfo.scsi_data_buffer.active_objs
2265 ± 3% +265.8% 8287 ± 6% slabinfo.scsi_data_buffer.num_objs
532.50 ± 3% +266.1% 1949 ± 6% slabinfo.xfs_efd_item.active_objs
532.50 ± 3% +266.1% 1949 ± 6% slabinfo.xfs_efd_item.num_objs
2945 ± 5% +98.6% 5850 ± 0% slabinfo.xfs_ili.active_objs
2945 ± 5% +98.6% 5850 ± 0% slabinfo.xfs_ili.num_objs
2070 ± 5% +135.4% 4875 ± 1% slabinfo.xfs_inode.active_objs
2070 ± 5% +135.4% 4875 ± 1% slabinfo.xfs_inode.num_objs
3320 ± 5% +37.7% 4571 ± 2% slabinfo.xfs_trans.active_objs
3320 ± 5% +37.7% 4571 ± 2% slabinfo.xfs_trans.num_objs
lkp-hsx04: 4-socket Haswell-EX machine
Memory: 512G
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang
[x86/mm] d9da2c95d7: WARNING: CPU: 0 PID: 0 at arch/x86/kernel/cpu/common.c:1498 warn_pre_alternatives()
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://git.kernel.org/pub/scm/linux/kernel/git/luto/linux.git x86/pcid
commit d9da2c95d77fd14360cd902ff1bc3859452bb5bc
Author: Andy Lutomirski <luto@kernel.org>
AuthorDate: Sat Dec 19 22:47:44 2015 -0800
Commit: Andy Lutomirski <luto@kernel.org>
CommitDate: Sun Dec 27 05:18:22 2015 -0800
x86/mm: If INVPCID is available, use it to flush global mappings
On my Skylake laptop, INVPCID function 2 (flush absolutely
everything) takes about 376ns, whereas saving flags, twiddling
CR4.PGE to flush global mappings, and restoring flags takes about
539ns.
Signed-off-by: Andy Lutomirski <luto@kernel.org>
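A rough sketch of the two global-flush strategies being timed here, using assumed helper names rather than the exact patch:

/* INVPCID "function 2": invalidate all mappings, including globals. */
static inline void invpcid_flush_all(void)
{
	struct { u64 pcid, addr; } desc = { 0, 0 };

	/* INVPCID encoded as raw bytes; type in RAX, descriptor in RCX. */
	asm volatile (".byte 0x66, 0x0f, 0x38, 0x82, 0x01"
		      : : "m" (desc), "a" (2UL), "c" (&desc) : "memory");
}

/* The CR4.PGE fallback the timing numbers refer to. */
static inline void cr4_pge_flush_all(void)
{
	unsigned long cr4, flags;

	raw_local_irq_save(flags);		/* "saving flags"          */
	cr4 = native_read_cr4();
	native_write_cr4(cr4 ^ X86_CR4_PGE);	/* clear PGE: drop globals */
	native_write_cr4(cr4);			/* restore PGE             */
	raw_local_irq_restore(flags);		/* "restoring flags"       */
}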
+------------------------------------------------------------------+------------+------------+------------+
| | b796453d83 | d9da2c95d7 | cf6fc92736 |
+------------------------------------------------------------------+------------+------------+------------+
| boot_successes | 63 | 0 | 0 |
| boot_failures | 0 | 22 | 13 |
| WARNING:at_arch/x86/kernel/cpu/common.c:#warn_pre_alternatives() | 0 | 22 | 13 |
| backtrace:native_flush_tlb_global | 0 | 22 | 13 |
| backtrace:kernel_physical_mapping_init | 0 | 22 | 13 |
| backtrace:init_memory_mapping | 0 | 22 | 13 |
| backtrace:init_mem_mapping | 0 | 22 | 13 |
| backtrace:init_range_memory_mapping | 0 | 22 | 13 |
| backtrace:paging_init | 0 | 22 | 13 |
| backtrace:native_pagetable_init | 0 | 22 | 13 |
+------------------------------------------------------------------+------------+------------+------------+
[ 0.000000] Initializing cgroup subsys cpu
[ 0.000000] Linux version 4.4.0-rc5-00003-gd9da2c9 (kbuild@xian) (gcc version 5.2.1 20150911 (Debian 5.2.1-17) ) #1 SMP Sun Dec 27 22:48:51 CST 2015
[ 0.000000] ------------[ cut here ]------------
[ 0.000000] WARNING: CPU: 0 PID: 0 at arch/x86/kernel/cpu/common.c:1498 warn_pre_alternatives+0x17/0x1c()
[ 0.000000] You're using static_cpu_has before alternatives have run!
[ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 4.4.0-rc5-00003-gd9da2c9 #1
[ 0.000000] 00000000 00000000 b1b07f34 b1378294 b1b07f5c b1b07f4c b103da59 b100dcf8
[ 0.000000] 01bf1000 b1c70000 b1b07fa8 b1b07f64 b103daaf 00000009 b1b07f5c b19fbc0c
[ 0.000000] b1b07f78 b1b07f78 b100dcf8 b19fbaa4 000005da b19fbc0c b1b07f98 b102bdd8
[ 0.000000] Call Trace:
[ 0.000000] [<b1378294>] dump_stack+0x48/0x60
[ 0.000000] [<b103da59>] warn_slowpath_common+0x74/0x8b
[ 0.000000] [<b100dcf8>] ? warn_pre_alternatives+0x17/0x1c
[ 0.000000] [<b103daaf>] warn_slowpath_fmt+0x26/0x2a
[ 0.000000] [<b100dcf8>] warn_pre_alternatives+0x17/0x1c
[ 0.000000] [<b102bdd8>] native_flush_tlb_global+0x46/0x77
[ 0.000000] [<b1758585>] ? memblock_reserve+0x4b/0x53
[ 0.000000] [<b1bee340>] setup_arch+0xbe/0x9ba
[ 0.000000] [<b1bee340>] ? setup_arch+0xbe/0x9ba
[ 0.000000] [<b107f868>] ? vprintk_default+0x12/0x14
[ 0.000000] [<b1beb7a2>] start_kernel+0x83/0x3e2
[ 0.000000] [<b1beb2c3>] i386_start_kernel+0x91/0x95
[ 0.000000] ---[ end trace cb88537fdc8fa200 ]---
[ 0.000000] x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
git bisect start cf6fc92736fe6e1a16ca231f986d310c3370984e 80c75a0f1d81922bf322c0634d1e1a15825a89e6 --
git bisect bad db819013cdec49a3f0267befd50f07662823bb75 # 22:31 0- 1 Merge 'luto/x86/pcid' into devel-catchup-201512272150
git bisect good d1400af73470b0f9e572451e77c79a317143d501 # 22:35 22+ 0 0day base guard for 'devel-catchup-201512272150'
git bisect good 20734770e9eb2fe31b1f6df43180718c36733e56 # 22:41 22+ 0 Merge 'linux-review/Sudip-Mukherjee/gpiolib-fix-warning-about-iterator/20151227-214019' into devel-catchup-201512272150
git bisect good b796453d838cdc096140bef0bd9e1c35e0a7444b # 22:45 22+ 0 x86/mm: Add a noinvpcid option to turn off INVPCID
git bisect bad d9da2c95d77fd14360cd902ff1bc3859452bb5bc # 22:51 0- 21 x86/mm: If INVPCID is available, use it to flush global mappings
# first bad commit: [d9da2c95d77fd14360cd902ff1bc3859452bb5bc] x86/mm: If INVPCID is available, use it to flush global mappings
git bisect good b796453d838cdc096140bef0bd9e1c35e0a7444b # 22:53 62+ 0 x86/mm: Add a noinvpcid option to turn off INVPCID
# extra tests with DEBUG_INFO
git bisect bad d9da2c95d77fd14360cd902ff1bc3859452bb5bc # 22:58 0- 27 x86/mm: If INVPCID is available, use it to flush global mappings
# extra tests on HEAD of linux-devel/devel-catchup-201512272150
git bisect bad cf6fc92736fe6e1a16ca231f986d310c3370984e # 22:58 0- 13 0day head guard for 'devel-catchup-201512272150'
# extra tests on tree/branch luto/x86/pcid
git bisect bad d9da2c95d77fd14360cd902ff1bc3859452bb5bc # 23:01 0- 22 x86/mm: If INVPCID is available, use it to flush global mappings
# extra tests with first bad commit reverted
git bisect good 207aa60635fe332fd20ae458b7bfc9e012af8f2d # 23:06 66+ 0 Revert "x86/mm: If INVPCID is available, use it to flush global mappings"
# extra tests on tree/branch linus/master
git bisect good 2c96961fb8176b69fac20d2ea7242180a5f49847 # 23:08 60+ 62 Merge tag 'pm+acpi-4.4-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
# extra tests on tree/branch linux-next/master
git bisect good 80c75a0f1d81922bf322c0634d1e1a15825a89e6 # 23:10 64+ 2 Add linux-next specific files for 20151223
This script may reproduce the error.
----------------------------------------------------------------------------
#!/bin/bash
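# Usage: pass the vmlinuz/bzImage built from the bad commit as the only argument.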
kernel=$1
initrd=yocto-minimal-i386.cgz
wget --no-clobber https://github.com/fengguang/reproduce-kernel-bug/raw/master/initrd/$initrd
kvm=(
qemu-system-x86_64
-enable-kvm
-cpu Haswell,+smep,+smap
-kernel $kernel
-initrd $initrd
-m 256
-smp 1
-device e1000,netdev=net0
-netdev user,id=net0
-boot order=nc
-no-reboot
-watchdog i6300esb
-rtc base=localtime
-serial stdio
-display none
-monitor null
)
append=(
hung_task_panic=1
earlyprintk=ttyS0,115200
systemd.log_level=err
debug
apic=debug
sysrq_always_enabled
rcupdate.rcu_cpu_stall_timeout=100
panic=-1
softlockup_panic=1
nmi_watchdog=panic
oops=panic
load_ramdisk=2
prompt_ramdisk=0
console=ttyS0,115200
console=tty0
vga=normal
root=/dev/ram0
rw
drbd.minor_count=8
)
"${kvm[@]}" --append "${append[*]}"
----------------------------------------------------------------------------
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/lkp Intel Corporation
4e2095582d: BUG: kernel early-boot hang early console in setup code
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
git://internal_merge_and_test_tree devel-spot-201512270805
commit 4e2095582d1add94cad997a7a036b05eb70338e9
Merge: 467663f 5e9b06f
Author: 0day robot <fengguang.wu@intel.com>
AuthorDate: Sun Dec 27 08:06:45 2015 +0800
Commit: 0day robot <fengguang.wu@intel.com>
CommitDate: Sun Dec 27 08:06:45 2015 +0800
Merge 'linux-review/SF-Markus-Elfring/i2c-core-One-function-call-less-in-acpi_i2c_space_handler-after-error-detection/20151226-151227' into devel-spot-201512270805
+-------------------------------------------------------------------+------------+------------+------------+------------+
| | 467663f352 | 5e9b06f45e | 4e2095582d | 56033542b4 |
+-------------------------------------------------------------------+------------+------------+------------+------------+
| boot_successes | 41 | 40 | 0 | 0 |
| boot_failures | 51 | 48 | 26 | 13 |
| INFO:rcu_sched_stall_on_CPU(#ticks_this_GP)idle=#(t=#jiffies_q=#) | 22 | 11 | | |
| backtrace:mark_rodata_ro | 22 | 11 | | |
| IP-Config:Auto-configuration_of_network_failed | 2 | | | |
| Kernel_panic-not_syncing:Attempted_to_kill_init!exitcode= | 40 | 41 | | |
| invoked_oom-killer:gfp_mask=0x | 0 | 1 | | |
| Mem-Info | 0 | 1 | | |
| Out_of_memory:Kill_process | 0 | 1 | | |
| backtrace:__mm_populate | 0 | 1 | | |
| backtrace:SyS_mlockall | 0 | 1 | | |
| BUG:kernel_early-boot_hang_early_console_in_setup_code | 0 | 0 | 26 | 13 |
+-------------------------------------------------------------------+------------+------------+------------+------------+
early console in setup code
Elapsed time: 310
BUG: kernel early-boot hang early console in setup code
Linux version 4.4.0-rc6-01476-g4e20955 #1
Command line: hung_task_panic=1 earlyprintk=ttyS0,115200 systemd.log_level=err debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal root=/dev/ram0 rw link=/kbuild-tests/run-queue/kvm/x86_64-randconfig-b0-12270811/linux-devel:devel-spot-201512270805:4e2095582d1add94cad997a7a036b05eb70338e9:bisect-linux-7/.vmlinuz-4e2095582d1add94cad997a7a036b05eb70338e9-20151227092348-18-ivb42 branch=linux-devel/devel-spot-201512270805 BOOT_IMAGE=/pkg/linux/x86_64-randconfig-b0-12270811/gcc-5/4e2095582d1add94cad997a7a036b05eb70338e9/vmlinuz-4.4.0-rc6-01476-g4e20955 drbd.minor_count=8
qemu-system-x86_64 -enable-kvm -cpu kvm64 -kernel /pkg/linux/x86_64-randconfig-b0-12270811/gcc-5/4e2095582d1add94cad997a7a036b05eb70338e9/vmlinuz-4.4.0-rc6-01476-g4e20955 -append 'hung_task_panic=1 earlyprintk=ttyS0,115200 systemd.log_level=err debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal root=/dev/ram0 rw link=/kbuild-tests/run-queue/kvm/x86_64-randconfig-b0-12270811/linux-devel:devel-spot-201512270805:4e2095582d1add94cad997a7a036b05eb70338e9:bisect-linux-7/.vmlinuz-4e2095582d1add94cad997a7a036b05eb70338e9-20151227092348-18-ivb42 branch=linux-devel/devel-spot-201512270805 BOOT_IMAGE=/pkg/linux/x86_64-randconfig-b0-12270811/gcc-5/4e2095582d1add94cad997a7a036b05eb70338e9/vmlinuz-4.4.0-rc6-01476-g4e20955 drbd.minor_count=8' -initrd /osimage/quantal/quantal-core-x86_64.cgz -m 300 -smp 2 -device e1000,netdev=net0 -netdev user,id=net0 -boot order=nc -no-reboot -watchdog i6300esb -rtc base=localtime -drive file=/fs/sda4/disk0-quantal-ivb42-109,media=disk,if=virtio -drive file=/fs/sda4/disk1-quantal-ivb42-109,media=disk,if=virtio -drive file=/fs/sda4/disk2-quantal-ivb42-109,media=disk,if=virtio -drive file=/fs/sda4/disk3-quantal-ivb42-109,media=disk,if=virtio -drive file=/fs/sda4/disk4-quantal-ivb42-109,media=disk,if=virtio -drive file=/fs/sda4/disk5-quantal-ivb42-109,media=disk,if=virtio -drive file=/fs/sda4/disk6-quantal-ivb42-109,media=disk,if=virtio -pidfile /dev/shm/kboot/pid-quantal-ivb42-109 -serial file:/dev/shm/kboot/serial-quantal-ivb42-109 -daemonize -display none -monitor null
git bisect start 56033542b464990a28e6d41d55f3118f1f563392 4ef7675344d687a0ef5b0d7c0cee12da005870c0 --
git bisect good 5d9e4a5bbb667550981914e13b480161cc1c2266 # 09:04 21+ 15 Merge 'linux-review/Marcus-Weseloh/spi-dts-sun4i-Add-support-for-wait-time-between-word-transmissions/20151226-235715' into devel-spot-201512270805
git bisect good 7e068f15c3b698826e2285f1869ad1dd88669047 # 09:18 20+ 16 Merge 'linux-review/Figo/fix-a-dead-loop-when-in-heavy-low-memory/20151226-194950' into devel-spot-201512270805
git bisect bad 4e2095582d1add94cad997a7a036b05eb70338e9 # 09:29 0- 17 Merge 'linux-review/SF-Markus-Elfring/i2c-core-One-function-call-less-in-acpi_i2c_space_handler-after-error-detection/20151226-151227' into devel-spot-201512270805
git bisect good a218ae3c850fda19c1642962893951a19b399ba4 # 09:41 20+ 13 Merge 'linux-review/SF-Markus-Elfring/IDE-ACPI-Fine-tuning-for-a-function/20151226-182119' into devel-spot-201512270805
git bisect good 467663f35207ada35581a487b803843d047fa2e2 # 10:00 21+ 13 Merge 'linux-review/Fan-Li/f2fs-fix-bugs-and-simplify-codes-of-f2fs_fiemap/20151226-181207' into devel-spot-201512270805
git bisect good a613d9d48573d3d2f9bba7ed7719c5cc26d3a755 # 10:16 22+ 15 i2c: eg20t: set i2c_adapter->dev.of_node
git bisect good a45af72a609b095820feab26c49286337301377e # 10:48 20+ 14 i2c: xlr: fix extra read/write at end of rx transfer
git bisect good 128d9c051fadec6619ae760ff61ae0ed9aa91ae2 # 11:01 21+ 13 Merge branch 'i2c/for-4.5' into i2c/for-next
git bisect good 54177ccfbe95fcf250a89508a705bfe4706e3b86 # 11:16 22+ 12 i2c: make i2c_parse_fw_timings() always visible
git bisect good b8a69bb27938971879a8cbcbb29e6a640d9ccb2d # 11:43 21+ 11 Merge branch 'i2c/for-current' into i2c/for-next
git bisect good 5e9b06f45e1d6cc5dcb4daa612c6c2faca3c6b36 # 12:08 21+ 14 Merge branch 'i2c/for-4.5' into i2c/for-next
# first bad commit: [4e2095582d1add94cad997a7a036b05eb70338e9] Merge 'linux-review/SF-Markus-Elfring/i2c-core-One-function-call-less-in-acpi_i2c_space_handler-after-error-detection/20151226-151227' into devel-spot-201512270805
git bisect good 467663f35207ada35581a487b803843d047fa2e2 # 12:15 61+ 46 Merge 'linux-review/Fan-Li/f2fs-fix-bugs-and-simplify-codes-of-f2fs_fiemap/20151226-181207' into devel-spot-201512270805
git bisect good 5e9b06f45e1d6cc5dcb4daa612c6c2faca3c6b36 # 12:23 60+ 42 Merge branch 'i2c/for-4.5' into i2c/for-next
# extra tests on HEAD of linux-devel/devel-spot-201512270805
git bisect bad 56033542b464990a28e6d41d55f3118f1f563392 # 12:23 0- 13 0day head guard for 'devel-spot-201512270805'
# extra tests on tree/branch linux-devel/devel-spot-201512270805
git bisect bad 56033542b464990a28e6d41d55f3118f1f563392 # 12:24 0- 13 0day head guard for 'devel-spot-201512270805'
# extra tests on tree/branch linus/master
git bisect good 2c96961fb8176b69fac20d2ea7242180a5f49847 # 12:46 62+ 52 Merge tag 'pm+acpi-4.4-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
# extra tests on tree/branch linux-next/master
git bisect good 80c75a0f1d81922bf322c0634d1e1a15825a89e6 # 12:58 63+ 50 Add linux-next specific files for 20151223
This script may reproduce the error.
----------------------------------------------------------------------------
#!/bin/bash
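# Usage: pass the vmlinuz/bzImage of the commit under test as the only argument.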
kernel=$1
kvm=(
qemu-system-x86_64
-enable-kvm
-cpu kvm64
-kernel $kernel
-m 300
-smp 2
-device e1000,netdev=net0
-netdev user,id=net0
-boot order=nc
-no-reboot
-watchdog i6300esb
-rtc base=localtime
-serial stdio
-display none
-monitor null
)
append=(
hung_task_panic=1
earlyprintk=ttyS0,115200
systemd.log_level=err
debug
apic=debug
sysrq_always_enabled
rcupdate.rcu_cpu_stall_timeout=100
panic=-1
softlockup_panic=1
nmi_watchdog=panic
oops=panic
load_ramdisk=2
prompt_ramdisk=0
console=ttyS0,115200
console=tty0
vga=normal
root=/dev/ram0
rw
drbd.minor_count=8
)
"${kvm[@]}" --append "${append[*]}"
----------------------------------------------------------------------------
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/lkp Intel Corporation
[lkp] [resources] 09b7f22ba7: BUG: sleeping function called from invalid context at mm/slub.c:1287
by kernel test robot
FYI, we noticed the below changes on
https://git.kernel.org/pub/scm/linux/kernel/git/yinghai/linux-yinghai.git for-pci-v4.5-next
commit 09b7f22ba7bc2c45f7b2b1588c17d5264dc571f4 ("resources: Make allocate_resource() return best fit resource")
<7>[ 27.129164] pcieport 0000:00:1c.0: BAR 13: [io 0x0000-0x0fff] get_res_add_align min_align 0x1000
<7>[ 27.129166] pcieport 0000:00:1c.0: BAR 15: [mem 0x00000000-0xffffffffffffffff 64bit pref] get_res_add_size add_size 0x200000
<7>[ 27.129168] pcieport 0000:00:1c.0: BAR 15: [mem 0x00000000-0x001fffff 64bit pref] get_res_add_align min_align 0x100000
<3>[ 27.129170] BUG: sleeping function called from invalid context at mm/slub.c:1287
<3>[ 27.129171] in_atomic(): 1, irqs_disabled(): 0, pid: 259, name: kworker/u16:5
<4>[ 27.129173] CPU: 1 PID: 259 Comm: kworker/u16:5 Not tainted 4.4.0-rc4-00140-g09b7f22 #1
<4>[ 27.129174] Hardware name: Intel Corporation Broadwell Client platform/WhiteTip Mountain 1, BIOS BDW-E1R1.86C.0120.R00.1504020241 04/02/2015
<4>[ 27.129180] Workqueue: kacpi_hotplug acpi_hotplug_work_fn
<4>[ 27.129182] 0000000000000507 ffff880136da78e8 ffffffff81410ee2 ffffffff81bc0188
<4>[ 27.129184] ffff880136da78f8 ffffffff8109d988 ffff880136da7920 ffffffff8109da22
<4>[ 27.129185] 00000000024080c0 00000000024080c0 ffffffff8107f523 ffff880136da7968
<4>[ 27.129186] Call Trace:
<4>[ 27.129190] [<ffffffff81410ee2>] dump_stack+0x4b/0x69
<4>[ 27.129192] [<ffffffff8109d988>] ___might_sleep+0xe8/0x130
<4>[ 27.129194] [<ffffffff8109da22>] __might_sleep+0x52/0xb0
<4>[ 27.129196] [<ffffffff8107f523>] ? allocate_resource+0x133/0x250
<4>[ 27.129199] [<ffffffff811c9a60>] kmem_cache_alloc_trace+0x190/0x200
<4>[ 27.129200] [<ffffffff8107f523>] allocate_resource+0x133/0x250
<4>[ 27.129203] [<ffffffff8176cc70>] ? pcibios_fwaddrmap_lookup+0x60/0x60
<4>[ 27.129205] [<ffffffff810d1101>] ? vprintk_emit+0x341/0x540
<4>[ 27.129208] [<ffffffff8144f1c7>] pci_bus_alloc_from_region+0x77/0x190
<4>[ 27.129209] [<ffffffff8176cc70>] ? pcibios_fwaddrmap_lookup+0x60/0x60
<4>[ 27.129211] [<ffffffff8144f39a>] pci_bus_alloc_resource+0xba/0xe0
<4>[ 27.129213] [<ffffffff8176cc70>] ? pcibios_fwaddrmap_lookup+0x60/0x60
<4>[ 27.129216] [<ffffffff8145c769>] _pci_assign_resource+0x189/0x1e0
<4>[ 27.129218] [<ffffffff8176cc70>] ? pcibios_fwaddrmap_lookup+0x60/0x60
<4>[ 27.129220] [<ffffffff8145cb6b>] pci_assign_resource+0xbb/0x2c0
<4>[ 27.129222] [<ffffffff8145e288>] assign_requested_resources_sorted+0x78/0xe0
<4>[ 27.129224] [<ffffffff8145eb9e>] __assign_resources_sorted+0x67e/0x850
<4>[ 27.129227] [<ffffffff814607a7>] __pci_bus_assign_resources+0x67/0x1e0
<4>[ 27.129229] [<ffffffff8146f9f3>] enable_slot+0x153/0x2d0
<4>[ 27.129231] [<ffffffff8109da22>] ? __might_sleep+0x52/0xb0
<4>[ 27.129232] [<ffffffff8109da22>] ? __might_sleep+0x52/0xb0
<4>[ 27.129234] [<ffffffff814706b5>] acpiphp_hotplug_notify+0x195/0x210
<4>[ 27.129244] [<ffffffff81470520>] ? acpiphp_post_dock_fixup+0xc0/0xc0
<4>[ 27.129246] [<ffffffff81495ebe>] acpi_device_hotplug+0x3f8/0x440
<4>[ 27.129249] [<ffffffff8148ea6a>] acpi_hotplug_work_fn+0x1e/0x29
<4>[ 27.129251] [<ffffffff810913c7>] process_one_work+0x157/0x420
<4>[ 27.129253] [<ffffffff81092029>] worker_thread+0x69/0x4a0
<4>[ 27.129255] [<ffffffff81091fc0>] ? rescuer_thread+0x380/0x380
<4>[ 27.129257] [<ffffffff81091fc0>] ? rescuer_thread+0x380/0x380
<4>[ 27.129259] [<ffffffff8109758f>] kthread+0xef/0x110
<4>[ 27.129260] [<ffffffff810974a0>] ? kthread_park+0x60/0x60
<4>[ 27.129263] [<ffffffff818c848f>] ret_from_fork+0x3f/0x70
<4>[ 27.129264] [<ffffffff810974a0>] ? kthread_park+0x60/0x60
<6>[ 27.129271] pcieport 0000:00:1c.0: BAR 15: assigned [mem 0xd1200000-0xd13fffff 64bit pref]
<6>[ 27.129276] pcieport 0000:00:1c.0: BAR 13: assigned [io 0x2000-0x2fff]
<7>[ 27.129808] pci_bus 0000:01: Allocating resources
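The splat shows allocate_resource() reaching kmem_cache_alloc() with in_atomic() set, i.e. a GFP_KERNEL allocation attempted from atomic (spinlocked) context. A generic sketch of that bug class and the usual ways out (illustrative only, not the actual code or fix):

struct fit_entry {
	struct list_head list;
	struct resource *res;
};

/* Broken: GFP_KERNEL may sleep, but the spinlock puts us in atomic context. */
static void record_fit_broken(struct list_head *head, spinlock_t *lock,
			      struct resource *res)
{
	struct fit_entry *e;

	spin_lock(lock);
	e = kmalloc(sizeof(*e), GFP_KERNEL);	/* triggers ___might_sleep() */
	if (e) {
		e->res = res;
		list_add(&e->list, head);
	}
	spin_unlock(lock);
}

/* Usual fixes: allocate before taking the lock, or use GFP_ATOMIC inside it. */
static void record_fit_fixed(struct list_head *head, spinlock_t *lock,
			     struct resource *res)
{
	struct fit_entry *e = kmalloc(sizeof(*e), GFP_KERNEL);

	if (!e)
		return;
	e->res = res;
	spin_lock(lock);
	list_add(&e->list, head);
	spin_unlock(lock);
}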
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Ying Huang
[lkp] [blk] 26efb85d35: INFO: trying to register non-static key.
by kernel test robot
FYI, we noticed the below changes on
git://git.infradead.org/users/kbusch/linux-nvme master
commit 26efb85d35c71bc297e22773928062c97e41a8a2 ("blk-mq: dynamic h/w context count")
+------------------------------------------------+------------+------------+
| | e56dda0be8 | 26efb85d35 |
+------------------------------------------------+------------+------------+
| boot_successes | 6 | 2 |
| boot_failures | 1 | 28 |
| WARNING:at_kernel/trace/ftrace.c:#ftrace_bug() | 1 | |
| backtrace:perf_ftrace_event_register | 1 | |
| backtrace:perf_trace_init | 1 | |
| backtrace:perf_tp_event_init | 1 | |
| backtrace:perf_try_init_event | 1 | |
| backtrace:SYSC_perf_event_open | 1 | |
| backtrace:SyS_perf_event_open | 1 | |
| INFO:trying_to_register_non-static_key | 0 | 28 |
| backtrace:do_mount | 0 | 28 |
| backtrace:compat_SyS_mount | 0 | 5 |
| backtrace:SyS_mount | 0 | 23 |
+------------------------------------------------+------------+------------+
[ 22.943488] FAT-fs (nbd10): unable to read boot sector
[ 22.944820] block nbd10: Attempted send on closed socket
[ 22.947117] block nbd0: Attempted send on closed socket
[ 22.949310] INFO: trying to register non-static key.
[ 22.950428] the code is fine but needs lockdep annotation.
[ 22.951599] turning off the locking correctness validator.
[ 22.952839] CPU: 0 PID: 839 Comm: mount Not tainted 4.4.0-rc2-00145-g26efb85 #1
[ 22.954689] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 22.956860] 0000000000000000 ffff88001dc43928 ffffffff8173d271 0000000000000000
[ 22.958925] ffff88001dc43990 ffffffff81112bf7 ffff88001ef21a40 0000000000000000
[ 22.960946] 0000000000000002 0000000000000337 000000001ef211c0 ffff88001dc43a20
[ 22.963064] Call Trace:
[ 22.963926] [<ffffffff8173d271>] dump_stack+0x4b/0x63
[ 22.965115] [<ffffffff81112bf7>] register_lock_class+0x142/0x2e9
[ 22.966424] [<ffffffff81115116>] __lock_acquire+0x179/0xdee
[ 22.967700] [<ffffffff8171cabf>] ? blk_insert_flush+0x16e/0x1b1
[ 22.969038] [<ffffffff81092028>] ? kvm_clock_read+0x25/0x2e
[ 22.970322] [<ffffffff81116121>] lock_acquire+0x10a/0x196
[ 22.971538] [<ffffffff81116352>] ? lock_release+0x1a5/0x41a
[ 22.972787] [<ffffffff81116121>] ? lock_acquire+0x10a/0x196
[ 22.974085] [<ffffffff8171cabf>] ? blk_insert_flush+0x16e/0x1b1
[ 22.975427] [<ffffffff82e137df>] _raw_spin_lock_irq+0x3f/0x75
[ 22.976684] [<ffffffff8171cabf>] ? blk_insert_flush+0x16e/0x1b1
[ 22.978001] [<ffffffff8171cabf>] blk_insert_flush+0x16e/0x1b1
[ 22.979303] [<ffffffff81723fec>] blk_sq_make_request+0xf8/0x19f
[ 22.980610] [<ffffffff81719667>] generic_make_request+0xbd/0x160
[ 22.981909] [<ffffffff817197ff>] submit_bio+0xf5/0x100
[ 22.983063] [<ffffffff8110c29f>] ? __init_waitqueue_head+0x3b/0x4e
[ 22.984365] [<ffffffff817117f3>] submit_bio_wait+0x54/0x6a
[ 22.985581] [<ffffffff8171c3f4>] blkdev_issue_flush+0x62/0x84
[ 22.986866] [<ffffffff814d6d5b>] xfs_blkdev_issue_flush+0x19/0x1b
[ 22.988197] [<ffffffff814be9b0>] xfs_free_buftarg+0x3a/0x47
[ 22.989396] [<ffffffff814d559d>] xfs_close_devices+0x64/0x69
[ 22.990588] [<ffffffff814d665c>] xfs_fs_fill_super+0x323/0x504
[ 22.991795] [<ffffffff81201851>] mount_bdev+0x148/0x19f
[ 22.992905] [<ffffffff814d6339>] ? xfs_parseargs+0x8b6/0x8b6
[ 22.994062] [<ffffffff81112e61>] ? lockdep_init_map+0xc3/0x1c6
[ 22.995245] [<ffffffff814d4bf6>] xfs_fs_mount+0x15/0x17
[ 22.996324] [<ffffffff8120228c>] mount_fs+0x67/0x131
[ 22.997362] [<ffffffff8121921e>] vfs_kern_mount+0x6c/0xde
[ 22.998495] [<ffffffff8121bc35>] do_mount+0x88f/0x9ce
[ 22.999574] [<ffffffff811bf0f5>] ? strndup_user+0x3f/0x8c
[ 23.000706] [<ffffffff81241967>] compat_SyS_mount+0x185/0x1b1
[ 23.001892] [<ffffffff81003b19>] do_syscall_32_irqs_off+0x5c/0x6b
[ 23.003119] [<ffffffff82e169e8>] entry_INT80_compat+0x38/0x50
[ 23.004882] SQUASHFS error: squashfs_read_data failed to read block 0x0
[ 23.006257] squashfs: SQUASHFS error: unable to read squashfs_super_block
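lockdep prints "trying to register non-static key" when a lock embedded in dynamically allocated memory is taken without ever being initialised, so no lockdep class exists for it; spin_lock_init() is what registers one. A generic sketch of the pattern (illustrative; the blk-mq flush-queue details may differ):

struct hw_ctx {
	spinlock_t lock;
	/* ... */
};

static struct hw_ctx *hw_ctx_alloc(void)
{
	struct hw_ctx *ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);

	if (!ctx)
		return NULL;
	/*
	 * Without this, the first spin_lock(&ctx->lock) produces the
	 * "non-static key" message above: the lock is just zeroed memory
	 * with no lockdep class behind it.
	 */
	spin_lock_init(&ctx->lock);
	return ctx;
}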
Thanks,
Ying Huang
[lkp] [sched] 6df481c389: 43.0% aim9.exec_test.ops_per_sec
by kernel test robot
FYI, we noticed the below changes on
git://internal_merge_and_test_tree revert-6df481c389734ccc3a2a007c51e3dd557e5776ca-6df481c389734ccc3a2a007c51e3dd557e5776ca
commit 6df481c389734ccc3a2a007c51e3dd557e5776ca ("sched: Fix new task's load avg removed from source CPU in wake_up_new_task()")
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/tbox_group/test/testcase/testtime:
gcc-4.9/performance/x86_64-rhel/debian-x86_64-2015-02-07.cgz/lkp-hsx04/exec_test/aim9/300s
commit:
9a4cb134f8adcf1413808e4d07bb637c83e437ba
6df481c389734ccc3a2a007c51e3dd557e5776ca
%stddev (9a4cb134f8adcf14) | %change | %stddev (6df481c389734ccc3a2a007c51)
1144 ± 0% +43.0% 1635 ± 1% aim9.exec_test.ops_per_sec
343688 ± 0% +42.9% 491264 ± 1% aim9.time.involuntary_context_switches
26316782 ± 0% +42.9% 37616540 ± 0% aim9.time.minor_page_faults
101.00 ± 0% -3.5% 97.50 ± 0% aim9.time.percent_of_cpu_this_job_got
34.95 ± 0% -21.0% 27.62 ± 0% aim9.time.user_time
686051 ± 0% +43.1% 981909 ± 1% aim9.time.voluntary_context_switches
94.75 ± 22% +13144.3% 12549 ± 14% latency_stats.sum.stop_one_cpu.sched_exec.do_execveat_common.SyS_execve.return_from_execve
2046 ± 3% +5.1% 2151 ± 1% vmstat.system.in
394.33 ± 1% +20.1% 473.70 ± 0% pmeter.Average_Active_Power
2.90 ± 1% +19.0% 3.45 ± 1% pmeter.performance_per_watt
286262 ± 6% -12.3% 251010 ± 7% softirqs.RCU
396670 ± 0% -38.9% 242361 ± 0% softirqs.SCHED
1211551 ± 1% -47.1% 640580 ± 2% softirqs.TIMER
5865703 ± 4% +62.0% 9505248 ± 12% numa-numastat.node0.local_node
5874365 ± 4% +62.0% 9516239 ± 12% numa-numastat.node0.numa_hit
5854682 ± 4% +59.5% 9338476 ± 12% numa-numastat.node2.local_node
5864541 ± 4% +59.4% 9347809 ± 12% numa-numastat.node2.numa_hit
343688 ± 0% +42.9% 491264 ± 1% time.involuntary_context_switches
26316782 ± 0% +42.9% 37616540 ± 0% time.minor_page_faults
34.95 ± 0% -21.0% 27.62 ± 0% time.user_time
686051 ± 0% +43.1% 981909 ± 1% time.voluntary_context_switches
123941 ± 3% +459.4% 693334 ± 0% meminfo.Committed_AS
7090 ± 5% +8.6% 7699 ± 5% meminfo.PageTables
174855 ± 0% +15.8% 202525 ± 0% meminfo.SUnreclaim
25033 ± 0% +10.3% 27617 ± 2% meminfo.Shmem
241692 ± 0% +13.3% 273811 ± 0% meminfo.Slab
2.82 ± 4% +235.9% 9.47 ± 8% turbostat.CPU%c1
0.01 ±110% +1.9e+05% 14.53 ± 11% turbostat.CPU%c3
96.35 ± 0% -21.9% 75.22 ± 3% turbostat.CPU%c6
35.00 ± 2% +11.4% 39.00 ± 1% turbostat.CoreTmp
57.61 ± 1% -36.2% 36.77 ± 2% turbostat.Pkg%pc2
131.12 ± 1% +54.4% 202.48 ± 0% turbostat.PkgWatt
20774707 ± 15% +405.4% 1.05e+08 ± 27% cpuidle.C1-HSW.time
77168 ± 4% +21.4% 93647 ± 2% cpuidle.C1-HSW.usage
11231117 ± 53% +12135.9% 1.374e+09 ± 12% cpuidle.C1E-HSW.time
4595 ± 10% +240.9% 15666 ± 15% cpuidle.C1E-HSW.usage
2192166 ± 93% +1.9e+05% 4.162e+09 ± 12% cpuidle.C3-HSW.time
1436 ± 14% +21892.4% 315811 ± 6% cpuidle.C3-HSW.usage
4.395e+10 ± 3% -15.0% 3.735e+10 ± 1% cpuidle.C6-HSW.time
2385398 ± 0% -20.3% 1901468 ± 0% cpuidle.C6-HSW.usage
66827 ±100% +11511.8% 7759922 ± 23% cpuidle.POLL.time
32.50 ± 45% +2983.8% 1002 ± 8% cpuidle.POLL.usage
1772 ± 5% +8.0% 1914 ± 4% proc-vmstat.nr_page_table_pages
6258 ± 0% +10.3% 6904 ± 2% proc-vmstat.nr_shmem
43711 ± 0% +15.8% 50630 ± 0% proc-vmstat.nr_slab_unreclaimable
24073921 ± 0% +42.1% 34202892 ± 1% proc-vmstat.numa_hit
24044201 ± 0% +42.1% 34173793 ± 1% proc-vmstat.numa_local
4936 ± 1% +19.4% 5896 ± 3% proc-vmstat.pgactivate
442793 ± 4% +22.5% 542431 ± 12% proc-vmstat.pgalloc_dma32
24709314 ± 0% +43.3% 35418578 ± 1% proc-vmstat.pgalloc_normal
27059462 ± 0% +41.9% 38393061 ± 0% proc-vmstat.pgfault
25155012 ± 0% +42.9% 35955139 ± 1% proc-vmstat.pgfree
2799 ±129% -84.9% 422.50 ± 52% numa-meminfo.node0.Inactive(anon)
545456 ± 2% +18.6% 646805 ± 0% numa-meminfo.node0.MemUsed
7751 ± 83% -93.6% 497.50 ± 42% numa-meminfo.node0.Shmem
40001 ± 5% +21.9% 48780 ± 4% numa-meminfo.node1.SUnreclaim
56343 ± 8% +20.1% 67696 ± 5% numa-meminfo.node1.Slab
4573 ± 32% +169.7% 12336 ± 60% numa-meminfo.node2.Active(anon)
4908 ± 32% +69.0% 8293 ± 20% numa-meminfo.node2.AnonPages
594291 ± 1% -16.1% 498561 ± 1% numa-meminfo.node2.MemUsed
1549 ± 10% +32.5% 2052 ± 5% numa-meminfo.node2.PageTables
14888 ± 6% +24.7% 18569 ± 11% numa-meminfo.node2.SReclaimable
40345 ± 7% +22.5% 49425 ± 4% numa-meminfo.node2.SUnreclaim
2536 ±137% +247.5% 8813 ± 64% numa-meminfo.node2.Shmem
55233 ± 5% +23.1% 67996 ± 1% numa-meminfo.node2.Slab
63684 ± 4% +10.4% 70297 ± 2% numa-meminfo.node3.Slab
1938 ± 83% -93.6% 124.00 ± 42% numa-vmstat.node0.nr_shmem
3164319 ± 6% +55.3% 4913528 ± 16% numa-vmstat.node0.numa_hit
3120338 ± 7% +56.0% 4868498 ± 16% numa-vmstat.node0.numa_local
9999 ± 5% +22.0% 12195 ± 4% numa-vmstat.node1.nr_slab_unreclaimable
1141 ± 32% +170.1% 3083 ± 60% numa-vmstat.node2.nr_active_anon
1223 ± 31% +69.2% 2070 ± 20% numa-vmstat.node2.nr_anon_pages
11.00 ±167% +570.5% 73.75 ± 48% numa-vmstat.node2.nr_dirtied
386.50 ± 11% +32.6% 512.50 ± 5% numa-vmstat.node2.nr_page_table_pages
633.50 ±137% +247.8% 2203 ± 64% numa-vmstat.node2.nr_shmem
3722 ± 6% +24.7% 4642 ± 11% numa-vmstat.node2.nr_slab_reclaimable
10086 ± 7% +22.5% 12355 ± 4% numa-vmstat.node2.nr_slab_unreclaimable
11.00 ±167% +568.2% 73.50 ± 48% numa-vmstat.node2.nr_written
3174169 ± 7% +52.3% 4834564 ± 17% numa-vmstat.node2.numa_hit
3087580 ± 7% +53.5% 4738914 ± 17% numa-vmstat.node2.numa_local
951.00 ± 7% -21.8% 744.00 ± 8% slabinfo.blkdev_requests.active_objs
951.00 ± 7% -21.8% 744.00 ± 8% slabinfo.blkdev_requests.num_objs
76166 ± 1% +30.8% 99641 ± 0% slabinfo.dentry.active_objs
1813 ± 1% +31.2% 2379 ± 0% slabinfo.dentry.active_slabs
76177 ± 1% +31.2% 99952 ± 0% slabinfo.dentry.num_objs
1813 ± 1% +31.2% 2379 ± 0% slabinfo.dentry.num_slabs
19515 ± 4% +161.2% 50966 ± 2% slabinfo.kmalloc-192.active_objs
464.25 ± 4% +163.4% 1222 ± 2% slabinfo.kmalloc-192.active_slabs
19522 ± 4% +163.1% 51372 ± 2% slabinfo.kmalloc-192.num_objs
464.25 ± 4% +163.4% 1222 ± 2% slabinfo.kmalloc-192.num_slabs
30657 ± 5% +28.3% 39340 ± 2% slabinfo.kmalloc-256.active_objs
490.50 ± 5% +27.5% 625.50 ± 2% slabinfo.kmalloc-256.active_slabs
31440 ± 4% +27.4% 40058 ± 2% slabinfo.kmalloc-256.num_objs
490.50 ± 5% +27.5% 625.50 ± 2% slabinfo.kmalloc-256.num_slabs
76029 ± 0% +75.7% 133579 ± 2% slabinfo.kmalloc-64.active_objs
1187 ± 0% +76.3% 2093 ± 2% slabinfo.kmalloc-64.active_slabs
76029 ± 0% +76.3% 134029 ± 2% slabinfo.kmalloc-64.num_objs
1187 ± 0% +76.3% 2093 ± 2% slabinfo.kmalloc-64.num_slabs
357.00 ± 10% -39.3% 216.75 ± 10% slabinfo.kmem_cache.active_objs
357.00 ± 10% -39.3% 216.75 ± 10% slabinfo.kmem_cache.num_objs
699.00 ± 6% -25.2% 523.00 ± 5% slabinfo.kmem_cache_node.active_objs
768.00 ± 5% -22.9% 592.00 ± 4% slabinfo.kmem_cache_node.num_objs
4737 ± 6% +102.5% 9592 ± 1% slabinfo.mm_struct.active_objs
279.00 ± 6% +106.5% 576.25 ± 1% slabinfo.mm_struct.active_slabs
4750 ± 6% +106.4% 9805 ± 1% slabinfo.mm_struct.num_objs
279.00 ± 6% +106.5% 576.25 ± 1% slabinfo.mm_struct.num_slabs
1754 ± 3% -9.6% 1585 ± 3% slabinfo.mnt_cache.active_objs
1754 ± 3% -9.6% 1585 ± 3% slabinfo.mnt_cache.num_objs
5608 ± 4% +113.8% 11988 ± 4% slabinfo.signal_cache.active_objs
186.50 ± 4% +120.0% 410.25 ± 3% slabinfo.signal_cache.active_slabs
5608 ± 4% +119.8% 12330 ± 3% slabinfo.signal_cache.num_objs
186.50 ± 4% +120.0% 410.25 ± 3% slabinfo.signal_cache.num_slabs
0.95 ± 4% -64.2% 0.34 ± 46% perf-profile.cycles-pp.__schedule.schedule.rcu_nocb_kthread.kthread.ret_from_fork
0.82 ± 1% +40.9% 1.16 ± 15% perf-profile.cycles-pp.__schedule.schedule.smpboot_thread_fn.kthread.ret_from_fork
1.89 ± 4% +36.6% 2.58 ± 8% perf-profile.cycles-pp._dl_addr
4.97 ± 2% +31.8% 6.55 ± 13% perf-profile.cycles-pp._do_fork.sys_clone.entry_SYSCALL_64_fastpath
0.65 ± 6% +45.0% 0.94 ± 18% perf-profile.cycles-pp.anon_vma_fork.copy_process._do_fork.sys_clone.entry_SYSCALL_64_fastpath
1.05 ± 2% -29.9% 0.73 ± 22% perf-profile.cycles-pp.copy_page.do_wp_page.handle_mm_fault.__do_page_fault.do_page_fault
1.50 ± 5% +35.4% 2.04 ± 16% perf-profile.cycles-pp.copy_page_range.copy_process._do_fork.sys_clone.entry_SYSCALL_64_fastpath
4.04 ± 3% +38.3% 5.59 ± 15% perf-profile.cycles-pp.copy_process._do_fork.sys_clone.entry_SYSCALL_64_fastpath
6.56 ± 2% +56.6% 10.27 ± 3% perf-profile.cycles-pp.do_execveat_common.isra.33.sys_execve.return_from_execve
4.66 ± 3% -52.0% 2.24 ± 90% perf-profile.cycles-pp.do_group_exit.sys_exit_group.entry_SYSCALL_64_fastpath._exit
0.76 ± 3% +46.0% 1.10 ± 5% perf-profile.cycles-pp.do_set_pte.filemap_map_pages.handle_mm_fault.__do_page_fault.do_page_fault
1.98 ± 3% +87.3% 3.71 ± 4% perf-profile.cycles-pp.exit_mmap.mmput.flush_old_exec.load_elf_binary.search_binary_handler
7.85 ± 1% -23.8% 5.98 ± 7% perf-profile.cycles-pp.filemap_map_pages.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
1.97 ± 5% +91.0% 3.76 ± 4% perf-profile.cycles-pp.flush_old_exec.load_elf_binary.search_binary_handler.load_script.search_binary_handler
0.78 ± 6% +72.7% 1.34 ± 23% perf-profile.cycles-pp.free_pages_and_swap_cache.tlb_flush_mmu_free.tlb_finish_mmu.exit_mmap.mmput
12.30 ± 1% -14.4% 10.53 ± 11% perf-profile.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
3.93 ± 5% -10.1% 3.53 ± 2% perf-profile.cycles-pp.kthread.ret_from_fork
0.72 ± 3% +42.2% 1.03 ± 16% perf-profile.cycles-pp.load_balance.pick_next_task_fair.__schedule.schedule.smpboot_thread_fn
4.17 ± 3% +62.6% 6.78 ± 5% perf-profile.cycles-pp.load_elf_binary.search_binary_handler.load_script.search_binary_handler.do_execveat_common
4.46 ± 3% +63.1% 7.28 ± 5% perf-profile.cycles-pp.load_script.search_binary_handler.do_execveat_common.sys_execve.return_from_execve
1.91 ± 4% +91.5% 3.66 ± 4% perf-profile.cycles-pp.mmput.flush_old_exec.load_elf_binary.search_binary_handler.load_script
0.67 ± 8% +52.4% 1.02 ± 11% perf-profile.cycles-pp.native_irq_return_iret
0.60 ± 4% +54.5% 0.94 ± 19% perf-profile.cycles-pp.page_remove_rmap.unmap_page_range.unmap_single_vma.unmap_vmas.exit_mmap
0.77 ± 2% +42.6% 1.10 ± 15% perf-profile.cycles-pp.pick_next_task_fair.__schedule.schedule.smpboot_thread_fn.kthread
2.14 ± 6% -36.6% 1.36 ± 16% perf-profile.cycles-pp.rcu_nocb_kthread.kthread.ret_from_fork
0.69 ± 7% +70.5% 1.17 ± 22% perf-profile.cycles-pp.release_pages.free_pages_and_swap_cache.tlb_flush_mmu_free.tlb_finish_mmu.exit_mmap
6.62 ± 2% +56.4% 10.36 ± 3% perf-profile.cycles-pp.return_from_execve
0.97 ± 4% -63.1% 0.36 ± 46% perf-profile.cycles-pp.schedule.rcu_nocb_kthread.kthread.ret_from_fork
0.82 ± 1% +41.9% 1.17 ± 15% perf-profile.cycles-pp.schedule.smpboot_thread_fn.kthread.ret_from_fork
4.18 ± 3% +63.1% 6.82 ± 5% perf-profile.cycles-pp.search_binary_handler.load_script.search_binary_handler.do_execveat_common.sys_execve
1.10 ± 4% +38.8% 1.53 ± 16% perf-profile.cycles-pp.smpboot_thread_fn.kthread.ret_from_fork
4.97 ± 2% +31.7% 6.55 ± 13% perf-profile.cycles-pp.sys_clone.entry_SYSCALL_64_fastpath
6.62 ± 2% +56.5% 10.36 ± 3% perf-profile.cycles-pp.sys_execve.return_from_execve
0.61 ± 6% +67.1% 1.01 ± 34% perf-profile.cycles-pp.tlb_finish_mmu.exit_mmap.mmput.do_exit.do_group_exit
0.58 ± 7% +67.8% 0.98 ± 35% perf-profile.cycles-pp.tlb_flush_mmu_free.tlb_finish_mmu.exit_mmap.mmput.do_exit
2.01 ± 0% +16.3% 2.33 ± 7% perf-profile.cycles-pp.unlock_page.filemap_map_pages.handle_mm_fault.__do_page_fault.do_page_fault
2.23 ± 1% +53.1% 3.42 ± 8% perf-profile.cycles-pp.unmap_page_range.unmap_single_vma.unmap_vmas.exit_mmap.mmput
0.97 ± 6% +92.7% 1.86 ± 7% perf-profile.cycles-pp.unmap_single_vma.unmap_vmas.exit_mmap.mmput.flush_old_exec
1.06 ± 6% +92.0% 2.03 ± 7% perf-profile.cycles-pp.unmap_vmas.exit_mmap.mmput.flush_old_exec.load_elf_binary
1.08 ± 5% +33.3% 1.44 ± 15% perf-profile.cycles-pp.wp_page_copy.isra.58.do_wp_page.handle_mm_fault.__do_page_fault.do_page_fault
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/tbox_group/test/testcase/testtime:
gcc-4.9/performance/x86_64-rhel/debian-x86_64-2015-02-07.cgz/lkp-hsx04/fork_test/aim9/300s
commit:
9a4cb134f8adcf1413808e4d07bb637c83e437ba
6df481c389734ccc3a2a007c51e3dd557e5776ca
%stddev (9a4cb134f8adcf14) | %change | %stddev (6df481c389734ccc3a2a007c51)
2359 ± 0% +113.7% 5041 ± 3% aim9.fork_test.ops_per_sec
615.50 ± 2% +151.5% 1548 ± 7% aim9.time.involuntary_context_switches
24067036 ± 1% +112.1% 51051335 ± 3% aim9.time.minor_page_faults
1413208 ± 0% +113.7% 3020551 ± 3% aim9.time.voluntary_context_switches
390.00 ± 0% +23.4% 481.36 ± 0% pmeter.Average_Active_Power
6.05 ± 0% +73.1% 10.47 ± 2% pmeter.performance_per_watt
394565 ± 0% -36.7% 249722 ± 0% softirqs.SCHED
1211155 ± 0% -46.3% 650989 ± 1% softirqs.TIMER
18844 ± 0% +24.5% 23458 ± 2% vmstat.system.cs
2033 ± 0% +8.8% 2211 ± 1% vmstat.system.in
28908 ±105% +3443.2% 1024275 ±100% latency_stats.avg.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
28908 ±105% +4396.5% 1299863 ±100% latency_stats.max.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
28908 ±105% +6986.4% 2048551 ±100% latency_stats.sum.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
2071 ± 80% +879.7% 20290 ±150% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_execute.rpc_run_task.nfs4_call_sync_sequence.[nfsv4]._nfs4_proc_access.[nfsv4].nfs4_proc_access.[nfsv4].nfs_do_access.nfs_permission.__inode_permission.inode_permission.link_path_walk
615.50 ± 2% +151.5% 1548 ± 7% time.involuntary_context_switches
24067036 ± 1% +112.1% 51051335 ± 3% time.minor_page_faults
10.04 ± 1% -38.7% 6.16 ± 1% time.user_time
1413208 ± 0% +113.7% 3020551 ± 3% time.voluntary_context_switches
5854678 ± 2% +137.2% 13888138 ± 16% numa-numastat.node0.local_node
5864668 ± 2% +136.9% 13896095 ± 16% numa-numastat.node0.numa_hit
5909056 ± 2% +61.8% 9559429 ± 24% numa-numastat.node1.local_node
5915466 ± 2% +61.7% 9566771 ± 24% numa-numastat.node1.numa_hit
5895526 ± 2% +156.0% 15091426 ± 21% numa-numastat.node3.local_node
5906616 ± 2% +155.7% 15102488 ± 21% numa-numastat.node3.numa_hit
0.83 ± 0% -5.4% 0.79 ± 0% turbostat.%Busy
3.27 ± 9% +477.4% 18.89 ± 6% turbostat.CPU%c1
0.01 ±-10000% +1100.0% 0.12 ± 38% turbostat.CPU%c3
95.90 ± 0% -16.4% 80.20 ± 1% turbostat.CPU%c6
35.50 ± 1% +19.0% 42.25 ± 5% turbostat.CoreTmp
57.81 ± 0% -26.0% 42.77 ± 4% turbostat.Pkg%pc2
39.75 ± 1% +12.6% 44.75 ± 1% turbostat.PkgTmp
126.47 ± 0% +66.1% 210.02 ± 0% turbostat.PkgWatt
171256 ± 0% +28.9% 220773 ± 1% meminfo.Active
38197 ± 1% +29.1% 49316 ± 1% meminfo.Active(anon)
133058 ± 1% +28.9% 171457 ± 2% meminfo.Active(file)
29573 ± 2% +16.5% 34455 ± 2% meminfo.AnonPages
134364 ± 1% +487.6% 789587 ± 4% meminfo.Committed_AS
207374 ± 7% -16.0% 174124 ± 10% meminfo.DirectMap4k
179915 ± 1% +16.3% 209228 ± 0% meminfo.SUnreclaim
22316 ± 0% +19.2% 26604 ± 1% meminfo.Shmem
247443 ± 1% +12.0% 277110 ± 0% meminfo.Slab
21320688 ± 12% +832.6% 1.988e+08 ± 37% cpuidle.C1-HSW.time
73857 ± 1% +25.6% 92774 ± 2% cpuidle.C1-HSW.usage
76807447 ± 98% +7888.7% 6.136e+09 ± 10% cpuidle.C1E-HSW.time
3719 ± 34% +36842.8% 1373995 ± 16% cpuidle.C1E-HSW.usage
1662410 ± 61% +6555.8% 1.106e+08 ± 14% cpuidle.C3-HSW.time
1747 ± 12% +21168.1% 371659 ± 15% cpuidle.C3-HSW.usage
4.289e+10 ± 0% -14.8% 3.653e+10 ± 1% cpuidle.C6-HSW.time
2769655 ± 0% -38.1% 1715615 ± 5% cpuidle.C6-HSW.usage
33498 ±148% +14178.3% 4783051 ± 25% cpuidle.POLL.time
59.75 ± 42% +1659.0% 1051 ± 13% cpuidle.POLL.usage
9547 ± 1% +29.1% 12322 ± 1% proc-vmstat.nr_active_anon
33249 ± 1% +28.9% 42863 ± 2% proc-vmstat.nr_active_file
7400 ± 2% +16.5% 8618 ± 2% proc-vmstat.nr_anon_pages
2428 ± 1% -10.4% 2175 ± 4% proc-vmstat.nr_page_table_pages
5579 ± 0% +19.2% 6650 ± 1% proc-vmstat.nr_shmem
44974 ± 1% +16.3% 52293 ± 0% proc-vmstat.nr_slab_unreclaimable
23515066 ± 2% +111.7% 49780502 ± 1% proc-vmstat.numa_hit
23485225 ± 2% +111.8% 49752148 ± 1% proc-vmstat.numa_local
20877 ± 0% +98.1% 41367 ± 2% proc-vmstat.pgactivate
517464 ± 2% +71.1% 885179 ± 11% proc-vmstat.pgalloc_dma32
25186761 ± 2% +118.2% 54950756 ± 0% proc-vmstat.pgalloc_normal
24800002 ± 1% +108.9% 51811411 ± 3% proc-vmstat.pgfault
25689514 ± 2% +117.2% 55795269 ± 0% proc-vmstat.pgfree
40202 ± 6% +54.0% 61912 ± 17% numa-meminfo.node0.Active
6240 ± 40% +135.8% 14717 ± 31% numa-meminfo.node0.Active(anon)
33961 ± 3% +39.0% 47195 ± 26% numa-meminfo.node0.Active(file)
127461 ± 2% +15.2% 146775 ± 9% numa-meminfo.node0.FilePages
49016 ± 3% +15.9% 56799 ± 3% numa-meminfo.node0.SUnreclaim
2497 ±141% +215.1% 7868 ± 65% numa-meminfo.node0.Shmem
64287 ± 1% +15.4% 74161 ± 5% numa-meminfo.node0.Slab
45929 ± 7% +18.3% 54320 ± 6% numa-meminfo.node1.Active
32814 ± 4% +37.2% 45021 ± 3% numa-meminfo.node1.Active(file)
9616 ± 12% -40.0% 5773 ± 39% numa-meminfo.node1.AnonPages
2609 ± 13% -53.7% 1207 ± 16% numa-meminfo.node1.PageTables
6501 ± 18% +74.0% 11315 ± 22% numa-meminfo.node2.AnonPages
3141 ± 0% +95.7% 6148 ± 48% numa-meminfo.node2.Mapped
559677 ± 9% -11.3% 496371 ± 2% numa-meminfo.node2.MemUsed
41990 ± 2% +17.6% 49382 ± 5% numa-meminfo.node2.SUnreclaim
59728 ± 5% +11.4% 66556 ± 5% numa-meminfo.node2.Slab
39149 ± 6% +32.6% 51926 ± 8% numa-meminfo.node3.Active
33190 ± 4% +34.7% 44700 ± 13% numa-meminfo.node3.Active(file)
127828 ± 2% +7.0% 136835 ± 4% numa-meminfo.node3.FilePages
44894 ± 6% +27.5% 57235 ± 3% numa-meminfo.node3.SUnreclaim
59868 ± 6% +23.6% 73991 ± 4% numa-meminfo.node3.Slab
1558 ± 40% +135.9% 3676 ± 31% numa-vmstat.node0.nr_active_anon
8491 ± 3% +38.9% 11797 ± 26% numa-vmstat.node0.nr_active_file
31866 ± 2% +15.1% 36692 ± 9% numa-vmstat.node0.nr_file_pages
623.75 ±141% +215.3% 1966 ± 66% numa-vmstat.node0.nr_shmem
12254 ± 3% +15.9% 14196 ± 3% numa-vmstat.node0.nr_slab_unreclaimable
3074685 ± 2% +143.1% 7474391 ± 16% numa-vmstat.node0.numa_hit
3028772 ± 2% +145.4% 7431676 ± 16% numa-vmstat.node0.numa_local
8204 ± 4% +37.2% 11254 ± 3% numa-vmstat.node1.nr_active_file
2402 ± 12% -40.3% 1433 ± 39% numa-vmstat.node1.nr_anon_pages
648.00 ± 14% -54.6% 294.50 ± 18% numa-vmstat.node1.nr_page_table_pages
3155146 ± 2% +59.9% 5045487 ± 23% numa-vmstat.node1.numa_hit
3086223 ± 2% +61.5% 4983428 ± 23% numa-vmstat.node1.numa_local
597.50 ± 1% +10.5% 660.25 ± 2% numa-vmstat.node2.nr_alloc_batch
1620 ± 17% +75.4% 2841 ± 22% numa-vmstat.node2.nr_anon_pages
785.00 ± 0% +95.8% 1537 ± 48% numa-vmstat.node2.nr_mapped
10498 ± 2% +17.6% 12345 ± 5% numa-vmstat.node2.nr_slab_unreclaimable
8299 ± 4% +34.6% 11173 ± 13% numa-vmstat.node3.nr_active_file
31958 ± 2% +7.0% 34207 ± 4% numa-vmstat.node3.nr_file_pages
11223 ± 6% +27.5% 14305 ± 3% numa-vmstat.node3.nr_slab_unreclaimable
3120991 ± 2% +137.7% 7417057 ± 23% numa-vmstat.node3.numa_hit
3023169 ± 2% +142.1% 7320343 ± 24% numa-vmstat.node3.numa_local
1.05 ± 3% -46.9% 0.56 ± 2% perf-profile.cycles-pp.__schedule.schedule.schedule_preempt_disabled.cpu_startup_entry.start_secondary
0.97 ± 3% +44.8% 1.41 ± 7% perf-profile.cycles-pp.anon_vma_clone.anon_vma_fork.copy_process._do_fork.sys_clone
1.92 ± 1% +23.0% 2.37 ± 7% perf-profile.cycles-pp.anon_vma_fork.copy_process._do_fork.sys_clone.entry_SYSCALL_64_fastpath
56.66 ± 0% -18.9% 45.96 ± 6% perf-profile.cycles-pp.call_cpuidle.cpu_startup_entry.start_secondary
1.32 ± 5% -63.0% 0.49 ± 18% perf-profile.cycles-pp.copy_page.do_wp_page.handle_mm_fault.__do_page_fault.do_page_fault
4.11 ± 1% +25.0% 5.14 ± 12% perf-profile.cycles-pp.copy_page_range.copy_process._do_fork.sys_clone.entry_SYSCALL_64_fastpath
60.87 ± 0% -19.0% 49.29 ± 5% perf-profile.cycles-pp.cpu_startup_entry.start_secondary
56.66 ± 0% -18.9% 45.95 ± 6% perf-profile.cycles-pp.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
55.98 ± 0% -18.7% 45.50 ± 6% perf-profile.cycles-pp.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
0.19 ± 46% +406.5% 0.97 ± 32% perf-profile.cycles-pp.do_exit.do_group_exit.sys_exit_group.entry_SYSCALL_64_fastpath
0.18 ± 53% +438.4% 0.98 ± 32% perf-profile.cycles-pp.do_group_exit.sys_exit_group.entry_SYSCALL_64_fastpath
1.68 ± 2% -12.8% 1.46 ± 7% perf-profile.cycles-pp.do_wait.sys_wait4.entry_SYSCALL_64_fastpath
2.25 ± 6% -25.7% 1.67 ± 11% perf-profile.cycles-pp.do_wp_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.18 ± 37% +352.9% 0.79 ± 34% perf-profile.cycles-pp.exit_mmap.mmput.do_exit.do_group_exit.sys_exit_group
0.32 ± 9% +76.4% 0.56 ± 51% perf-profile.cycles-pp.filemap_map_pages.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
56.13 ± 0% -18.4% 45.83 ± 6% perf-profile.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry
3.00 ± 3% -62.2% 1.14 ± 8% perf-profile.cycles-pp.kthread.ret_from_fork
0.18 ± 37% +352.9% 0.79 ± 34% perf-profile.cycles-pp.mmput.do_exit.do_group_exit.sys_exit_group.entry_SYSCALL_64_fastpath
2.39 ± 5% -73.1% 0.64 ± 11% perf-profile.cycles-pp.rcu_nocb_kthread.kthread.ret_from_fork
3.02 ± 3% -60.6% 1.19 ± 10% perf-profile.cycles-pp.ret_from_fork
0.91 ± 6% -94.2% 0.05 ± 8% perf-profile.cycles-pp.schedule.rcu_nocb_kthread.kthread.ret_from_fork
1.20 ± 3% -47.0% 0.64 ± 1% perf-profile.cycles-pp.schedule.schedule_preempt_disabled.cpu_startup_entry.start_secondary
1.21 ± 2% -45.8% 0.66 ± 1% perf-profile.cycles-pp.schedule_preempt_disabled.cpu_startup_entry.start_secondary
1.50 ± 4% -22.6% 1.16 ± 8% perf-profile.cycles-pp.select_task_rq_fair.wake_up_new_task._do_fork.sys_clone.entry_SYSCALL_64_fastpath
61.06 ± 0% -19.1% 49.37 ± 5% perf-profile.cycles-pp.start_secondary
0.18 ± 53% +438.4% 0.98 ± 32% perf-profile.cycles-pp.sys_exit_group.entry_SYSCALL_64_fastpath
1.73 ± 2% -14.1% 1.49 ± 7% perf-profile.cycles-pp.sys_wait4.entry_SYSCALL_64_fastpath
2.12 ± 2% -21.7% 1.66 ± 10% perf-profile.cycles-pp.wake_up_new_task._do_fork.sys_clone.entry_SYSCALL_64_fastpath
0.74 ± 8% +32.3% 0.98 ± 9% perf-profile.cycles-pp.wp_page_copy.isra.58.do_wp_page.handle_mm_fault.__do_page_fault.do_page_fault
13781 ± 0% +253.7% 48744 ± 6% slabinfo.kmalloc-128.active_objs
214.75 ± 0% +257.5% 767.75 ± 6% slabinfo.kmalloc-128.active_slabs
13781 ± 0% +256.6% 49149 ± 6% slabinfo.kmalloc-128.num_objs
214.75 ± 0% +257.5% 767.75 ± 6% slabinfo.kmalloc-128.num_slabs
16964 ± 1% +168.1% 45487 ± 6% slabinfo.kmalloc-192.active_objs
403.50 ± 1% +170.3% 1090 ± 6% slabinfo.kmalloc-192.active_slabs
16964 ± 1% +170.1% 45828 ± 6% slabinfo.kmalloc-192.num_objs
403.50 ± 1% +170.3% 1090 ± 6% slabinfo.kmalloc-192.num_slabs
43002 ± 0% +194.9% 126825 ± 7% slabinfo.kmalloc-32.active_objs
337.75 ± 0% +195.7% 998.75 ± 7% slabinfo.kmalloc-32.active_slabs
43293 ± 0% +195.4% 127900 ± 7% slabinfo.kmalloc-32.num_objs
337.75 ± 0% +195.7% 998.75 ± 7% slabinfo.kmalloc-32.num_slabs
93314 ± 2% +95.7% 182573 ± 2% slabinfo.kmalloc-64.active_objs
1457 ± 2% +96.4% 2863 ± 2% slabinfo.kmalloc-64.active_slabs
93314 ± 2% +96.4% 183278 ± 2% slabinfo.kmalloc-64.num_objs
1457 ± 2% +96.4% 2863 ± 2% slabinfo.kmalloc-64.num_slabs
459.00 ± 7% -55.6% 204.00 ± 17% slabinfo.kmem_cache.active_objs
459.00 ± 7% -55.6% 204.00 ± 17% slabinfo.kmem_cache.num_objs
827.00 ± 5% -38.7% 507.00 ± 8% slabinfo.kmem_cache_node.active_objs
896.00 ± 5% -35.7% 576.00 ± 7% slabinfo.kmem_cache_node.num_objs
6453 ± 5% +73.5% 11196 ± 2% slabinfo.mm_struct.active_objs
382.50 ± 5% +75.3% 670.50 ± 2% slabinfo.mm_struct.active_slabs
6510 ± 5% +75.2% 11406 ± 2% slabinfo.mm_struct.num_objs
382.50 ± 5% +75.3% 670.50 ± 2% slabinfo.mm_struct.num_slabs
5772 ± 3% +71.6% 9906 ± 5% slabinfo.signal_cache.active_objs
192.00 ± 3% +77.6% 341.00 ± 5% slabinfo.signal_cache.active_slabs
5782 ± 3% +77.2% 10243 ± 5% slabinfo.signal_cache.num_objs
192.00 ± 3% +77.6% 341.00 ± 5% slabinfo.signal_cache.num_slabs
47491 ± 3% -37.2% 29843 ± 1% slabinfo.vm_area_struct.active_objs
1079 ± 3% -37.2% 678.00 ± 1% slabinfo.vm_area_struct.active_slabs
47500 ± 3% -37.2% 29843 ± 1% slabinfo.vm_area_struct.num_objs
1079 ± 3% -37.2% 678.00 ± 1% slabinfo.vm_area_struct.num_slabs
lkp-hsx04: Brickland Haswell-EX
Memory: 512G
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
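For reference, a rough sketch of one way to build the two kernels compared above before running the attached job on each. The commit ids, the x86_64-rhel kconfig name and the aim9.fork_test.ops_per_sec metric are taken from this report; the tree location and the config file path are assumptions and will need adjusting for your setup:

cd ~/linux    # assumed: a local clone of the tree containing both commits
for rev in 9a4cb134f8adcf1413808e4d07bb637c83e437ba \
           6df481c389734ccc3a2a007c51e3dd557e5776ca; do
    git checkout "$rev"
    cp ~/x86_64-rhel.config .config    # assumed location of the x86_64-rhel kconfig
    make olddefconfig
    make -j"$(nproc)"
    # Boot the resulting kernel on the test box, run `bin/lkp run job.yaml`
    # as above, and compare aim9.fork_test.ops_per_sec between the two runs.
done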
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang
[lkp] [drm/i915] f47b495db4: [drm:i915_set_reset_status [i915]] *ERROR* gpu hanging too fast, banning!
by kernel test robot
FYI, we noticed the below changes on
https://github.com/0day-ci/linux Ben-Widawsky/drm-i915-Correct-MI_STORE_DWORD_INDEX-usage/20151216-081807
commit f47b495db44e9362dc80cf5837b177c19e513c06 ("drm/i915: Correct MI_STORE_DWORD_INDEX usage")
+----------------------------------+------------+------------+
| | 663f3122d0 | f47b495db4 |
+----------------------------------+------------+------------+
| boot_successes | 29 | 17 |
| boot_failures | 0 | 12 |
| drm:i915_set_reset_status[i915]] | 0 | 12 |
+----------------------------------+------------+------------+
<6>[ 23.332191] [drm] stuck on blitter ring
<6>[ 23.336604] [drm] stuck on video enhancement ring
<6>[ 23.343837] [drm] GPU HANG: ecode 8:1:0xfffffffe, reason: Ring hung, action: reset
<3>[ 23.354042] [drm:i915_set_reset_status [i915]] *ERROR* gpu hanging too fast, banning!
<5>[ 23.366600] drm/i915: Resetting chip after gpu hang
<6>[ 27.326811] [drm] stuck on bsd ring
<6>[ 27.332304] [drm] stuck on blitter ring
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Ying Huang
[lkp] [acpi] 87ab35c97a: pm-qa.cpufreq_02.fail
by kernel test robot
FYI, we noticed the below changes on
git://internal_merge_and_test_tree devel-catchup-201512182051
commit 87ab35c97aff307e45ebe603886a8436f22e59e0 ("acpi-cpufreq: Remove get_cur_freq_on_cpu()")
We found the following new messages in the kernel log after your commit.
[ 7.297919] cpufreq: cpufreq_online: Failed to initialize policy for cpu: 0 (-22)
[ 7.298601] cpufreq: cpufreq_online: Failed to initialize policy for cpu: 1 (-22)
[ 7.299322] cpufreq: cpufreq_online: Failed to initialize policy for cpu: 2 (-22)
[ 7.299416] wmi: Mapper loaded
[ 7.299440] cpufreq: cpufreq_online: Failed to initialize policy for cpu: 3 (-22)
[ 7.299562] cpufreq: cpufreq_online: Failed to initialize policy for cpu: 4 (-22)
[ 7.299684] cpufreq: cpufreq_online: Failed to initialize policy for cpu: 5 (-22)
[ 7.299816] cpufreq: cpufreq_online: Failed to initialize policy for cpu: 6 (-22)
[ 7.299904] cpufreq: cpufreq_online: Failed to initialize policy for cpu: 7 (-22)
[ 7.299985] cpufreq: cpufreq_online: Failed to initialize policy for cpu: 8 (-22)
[ 7.300065] cpufreq: cpufreq_online: Failed to initialize policy for cpu: 9 (-22)
[ 7.300143] cpufreq: cpufreq_online: Failed to initialize policy for cpu: 10 (-22)
[ 7.300221] cpufreq: cpufreq_online: Failed to initialize policy for cpu: 11 (-22)
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
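Once a kernel containing 87ab35c97a is booted, one simple way to confirm the failure is to count the new per-CPU error in the log; on the parent commit the count is expected to be zero (a suggested check, not part of the standard steps above):

# counts the per-CPU cpufreq_online error quoted in this report
dmesg | grep -c 'cpufreq: cpufreq_online: Failed to initialize policy'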
Thanks,
Ying Huang
[lkp] [blk] 26efb85d35: BUG kmalloc-512 (Not tainted): Redzone overwritten
by kernel test robot
FYI, we noticed the below changes on
git://internal_merge_and_test_tree devel-catchup-201512220028
commit 26efb85d35c71bc297e22773928062c97e41a8a2 ("blk-mq: dynamic h/w context count")
+------------------------------------------------+------------+------------+
| | e56dda0be8 | 26efb85d35 |
+------------------------------------------------+------------+------------+
| boot_successes | 15 | 2 |
| boot_failures | 0 | 12 |
| BUG_kmalloc-#(Not_tainted):Poison_overwritten | 0 | 8 |
| INFO:#-#.First_byte#instead_of | 0 | 12 |
| INFO:Slab#objects=#used=#fp=0x(null)flags= | 0 | 12 |
| INFO:Object#@offset=#fp= | 0 | 12 |
| backtrace:init | 0 | 12 |
| backtrace:kernel_init_freeable | 0 | 12 |
| BUG_kmalloc-#(Not_tainted):Redzone_overwritten | 0 | 4 |
| BUG_kmalloc-#(Tainted:G_B):Redzone_overwritten | 0 | 4 |
| backtrace:vp_find_vqs | 0 | 4 |
| backtrace:init_vq | 0 | 4 |
| backtrace:ide_host_alloc | 0 | 4 |
| backtrace:ide_pci_init_two | 0 | 4 |
| backtrace:ide_pci_init_one | 0 | 4 |
| backtrace:piix_init_one | 0 | 4 |
| backtrace:ide_scan_pcibus | 0 | 4 |
+------------------------------------------------+------------+------------+
[ 14.781818] brd: module loaded
[ 14.818921] loop: module loaded
[ 18.033239] =============================================================================
[ 18.035280] BUG kmalloc-512 (Not tainted): Redzone overwritten
[ 18.036486] -----------------------------------------------------------------------------
[ 18.036486]
[ 18.039089] Disabling lock debugging due to kernel taint
[ 18.040287] INFO: 0xffff88007e0b55b0-0xffff88007e0b55b7. First byte 0x0 instead of 0xbb
[ 18.042289] INFO: Slab 0xffffea0001f82d00 objects=19 used=19 fp=0x (null) flags=0x100000000004080
[ 18.044496] INFO: Object 0xffff88007e0b53b0 @offset=5040 fp=0xffff88007e0b56f8
[ 18.044496]
[ 18.046957] Bytes b4 ffff88007e0b53a0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
[ 18.049075] Object ffff88007e0b53b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
[ 18.051240] Object ffff88007e0b53c0: 21 43 65 87 00 00 00 00 00 00 00 00 00 00 00 00 !Ce.............
[ 18.053370] Object ffff88007e0b53d0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
[ 18.055466] Object ffff88007e0b53e0: 00 00 00 00 00 00 00 00 21 43 65 87 00 00 00 00 ........!Ce.....
[ 18.057562] Object ffff88007e0b53f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
[ 18.059653] Object ffff88007e0b5400: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
[ 18.061817] Object ffff88007e0b5410: 21 43 65 87 00 00 00 00 00 00 00 00 00 00 00 00 !Ce.............
[ 18.063933] Object ffff88007e0b5420: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
[ 18.066024] Object ffff88007e0b5430: 00 00 00 00 00 00 00 00 21 43 65 87 00 00 00 00 ........!Ce.....
[ 18.068141] Object ffff88007e0b5440: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
[ 18.070246] Object ffff88007e0b5450: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
[ 18.072341] Object ffff88007e0b5460: 21 43 65 87 00 00 00 00 00 00 00 00 00 00 00 00 !Ce.............
[ 18.074462] Object ffff88007e0b5470: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
[ 18.076561] Object ffff88007e0b5480: 00 00 00 00 00 00 00 00 21 43 65 87 00 00 00 00 ........!Ce.....
[ 18.078663] Object ffff88007e0b5490: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
[ 18.080804] Object ffff88007e0b54a0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
[ 18.082917] Object ffff88007e0b54b0: 21 43 65 87 00 00 00 00 00 00 00 00 00 00 00 00 !Ce.............
[ 18.085004] Object ffff88007e0b54c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
[ 18.087129] Object ffff88007e0b54d0: 00 00 00 00 00 00 00 00 21 43 65 87 00 00 00 00 ........!Ce.....
[ 18.089217] Object ffff88007e0b54e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
[ 18.091373] Object ffff88007e0b54f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
[ 18.093455] Object ffff88007e0b5500: 21 43 65 87 00 00 00 00 00 00 00 00 00 00 00 00 !Ce.............
[ 18.095548] Object ffff88007e0b5510: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
[ 18.097645] Object ffff88007e0b5520: 00 00 00 00 00 00 00 00 21 43 65 87 00 00 00 00 ........!Ce.....
[ 18.099755] Object ffff88007e0b5530: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
[ 18.101861] Object ffff88007e0b5540: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
[ 18.103941] Object ffff88007e0b5550: 21 43 65 87 00 00 00 00 00 00 00 00 00 00 00 00 !Ce.............
[ 18.106079] Object ffff88007e0b5560: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
[ 18.108176] Object ffff88007e0b5570: 00 00 00 00 00 00 00 00 21 43 65 87 00 00 00 00 ........!Ce.....
[ 18.110287] Object ffff88007e0b5580: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
[ 18.112393] Object ffff88007e0b5590: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
[ 18.114497] Object ffff88007e0b55a0: 21 43 65 87 00 00 00 00 00 00 00 00 00 00 00 00 !Ce.............
[ 18.116568] Redzone ffff88007e0b55b0: 00 00 00 00 00 00 00 00 ........
[ 18.118599] Padding ffff88007e0b56f0: 00 00 00 00 00 00 00 00 ........
[ 18.120625] CPU: 1 PID: 1 Comm: swapper/0 Tainted: G B 4.4.0-rc2-00145-g26efb85 #1
[ 18.122636] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 18.124670] 0000000000000000 ffff88007570b680 ffffffff81504f45 ffff880075c02b40
[ 18.126801] ffff88007570b6b0 ffffffff811c0c1e ffff88007e0b55b0 00000000000000bb
[ 18.128935] ffff880075c02b40 ffff88007e0b55b7 ffff88007570b708 ffffffff811c0cc9
[ 18.131113] Call Trace:
[ 18.131932] [<ffffffff81504f45>] dump_stack+0x4b/0x63
[ 18.133074] [<ffffffff811c0c1e>] print_trailer+0x127/0x130
[ 18.134312] [<ffffffff811c0cc9>] check_bytes_and_report+0xa2/0xea
[ 18.135625] [<ffffffff811c0f40>] check_object+0x45/0x1f7
[ 18.136801] [<ffffffff810ec749>] ? alloc_desc+0x31/0x1a4
[ 18.137980] [<ffffffff811c2127>] alloc_debug_processing+0xdc/0x14b
[ 18.139278] [<ffffffff811c261b>] ___slab_alloc+0x485/0x612
[ 18.140494] [<ffffffff810ec749>] ? alloc_desc+0x31/0x1a4
[ 18.141707] [<ffffffff811c1768>] ? deactivate_slab+0x4da/0x516
[ 18.142930] [<ffffffff810d7452>] ? __lock_is_held+0x3c/0x57
[ 18.144140] [<ffffffff810ec749>] ? alloc_desc+0x31/0x1a4
[ 18.145350] [<ffffffff811c27f7>] __slab_alloc+0x4f/0x83
[ 18.146501] [<ffffffff811c27f7>] ? __slab_alloc+0x4f/0x83
[ 18.147713] [<ffffffff810ec749>] ? alloc_desc+0x31/0x1a4
[ 18.148879] [<ffffffff810ec749>] ? alloc_desc+0x31/0x1a4
[ 18.150053] [<ffffffff811c2fa0>] kmem_cache_alloc_node_trace+0x91/0x234
[ 18.151445] [<ffffffff810da2c4>] ? trace_hardirqs_on_caller+0x17d/0x199
[ 18.152774] [<ffffffff810ec749>] alloc_desc+0x31/0x1a4
[ 18.153934] [<ffffffff81c18cd7>] __irq_alloc_descs+0xf4/0x1a3
[ 18.155171] [<ffffffff810f11f4>] irq_domain_alloc_descs+0x4c/0x72
[ 18.156443] [<ffffffff810f1939>] __irq_domain_alloc_irqs+0x81/0x22b
[ 18.157760] [<ffffffff810c2a9e>] ? local_clock+0x20/0x22
[ 18.158914] [<ffffffff810f322e>] msi_domain_alloc_irqs+0xa7/0x14b
[ 18.160217] [<ffffffff81553afb>] pci_msi_domain_alloc_irqs+0x15/0x17
[ 18.161546] [<ffffffff810797c7>] native_setup_msi_irqs+0x50/0x5b
[ 18.162798] [<ffffffff8104c981>] arch_setup_msi_irqs+0xf/0x11
[ 18.164023] [<ffffffff81552b18>] pci_msi_setup_msi_irqs+0x4e/0x52
[ 18.165357] [<ffffffff815531f2>] pci_enable_msix+0x225/0x36e
[ 18.178254] [<ffffffff8155336c>] pci_enable_msix_range+0x31/0x50
[ 18.179515] [<ffffffff815c7774>] vp_request_msix_vectors+0xbf/0x1e1
[ 18.180836] [<ffffffff815c7c64>] vp_try_to_find_vqs+0xe6/0x318
[ 18.182067] [<ffffffff8150f193>] ? vsnprintf+0x376/0x3af
[ 18.183226] [<ffffffff815c7ec4>] vp_find_vqs+0x2e/0x81
[ 18.184458] [<ffffffff816fb8c6>] init_vq+0x162/0x201
[ 18.185591] [<ffffffff816fc2f4>] ? virtblk_probe+0xc5/0x641
[ 18.186797] [<ffffffff816fc36c>] virtblk_probe+0x13d/0x641
[ 18.188004] [<ffffffff816d0abd>] ? devices_kset_move_last+0x57/0x5c
[ 18.189298] [<ffffffff815c4de6>] virtio_dev_probe+0x111/0x187
[ 18.190537] [<ffffffff816d396e>] driver_probe_device+0xf7/0x250
[ 18.191790] [<ffffffff816d3b28>] __driver_attach+0x61/0x83
[ 18.192972] [<ffffffff816d3ac7>] ? driver_probe_device+0x250/0x250
[ 18.194274] [<ffffffff816d1f0f>] bus_for_each_dev+0x6f/0x87
[ 18.195474] [<ffffffff816d3519>] driver_attach+0x1e/0x20
[ 18.196632] [<ffffffff816d3106>] bus_add_driver+0xf2/0x1e4
[ 18.197838] [<ffffffff825f3321>] ? init_cryptoloop+0x28/0x28
[ 18.199032] [<ffffffff816d464c>] driver_register+0x8a/0xc6
[ 18.200242] [<ffffffff825f3321>] ? init_cryptoloop+0x28/0x28
[ 18.201475] [<ffffffff815c4c72>] register_virtio_driver+0x2b/0x2d
[ 18.202722] [<ffffffff825f337b>] init+0x5a/0x87
[ 18.203800] [<ffffffff81000402>] do_one_initcall+0xe7/0x177
[ 18.205010] [<ffffffff825a60ec>] kernel_init_freeable+0x1c2/0x24a
[ 18.206273] [<ffffffff81c185a9>] ? rest_init+0x140/0x140
[ 18.207453] [<ffffffff81c185b7>] kernel_init+0xe/0xd4
[ 18.208600] [<ffffffff81c2689f>] ret_from_fork+0x3f/0x70
[ 18.209764] [<ffffffff81c185a9>] ? rest_init+0x140/0x140
[ 18.210958] FIX kmalloc-512: Restoring 0xffff88007e0b55b0-0xffff88007e0b55b7=0xbb
[ 18.210958]
[ 18.213431] FIX kmalloc-512: Marking all objects used
Thanks,
Kernel Test Robot