[lkp] [x86/dumpstack] 8ba848287b: BUG: scheduling while atomic: swapper/0/0/0x00000002
by kernel test robot
FYI, we noticed the following changes on
git://internal_merge_and_test_tree revert-8ba848287b7a408b280e30beea9a29d1297662a5-8ba848287b7a408b280e30beea9a29d1297662a5
commit 8ba848287b7a408b280e30beea9a29d1297662a5 ("x86/dumpstack: Show top of special stack if we OOPS on a special stack")
+------------------------------------------------------------------------------+------------+------------+
| | 0de0ad4a15 | 8ba848287b |
+------------------------------------------------------------------------------+------------+------------+
| boot_successes | 11 | 11 |
| boot_failures | 20 | 19 |
| INFO:task_blocked_for_more_than#seconds | 18 | 18 |
| RIP:delay_tsc | 2 | 1 |
| RIP:flat_send_IPI_mask | 18 | 5 |
| Kernel_panic-not_syncing:hung_task:blocked_tasks | 18 | 4 |
| backtrace:vfs_write | 16 | 15 |
| backtrace:SyS_write | 16 | 15 |
| backtrace:vfs_read | 4 | 1 |
| backtrace:SyS_read | 4 | 1 |
| backtrace:watchdog | 18 | 4 |
| RIP:update_cfs_shares | 1 | |
| backtrace:cpu_startup_entry | 8 | 7 |
| RIP:__schedule | 1 | |
| RIP:native_safe_halt | 5 | 2 |
| RIP:irq_exit | 1 | |
| backtrace:do_sys_open | 2 | 1 |
| backtrace:SyS_open | 2 | 1 |
| RIP:rcu_read_lock_held | 1 | |
| RIP:___might_sleep | 1 | |
| RIP:lock_is_held | 1 | |
| RIP:arch_cpu_idle | 1 | |
| RIP:sched_clock_local | 1 | 1 |
| invoked_oom-killer:gfp_mask=0x | 2 | 1 |
| Mem-Info | 2 | 1 |
| Out_of_memory:Kill_process | 1 | 1 |
| RIP:set_next_entity | 1 | |
| RIP:raise_softirq | 1 | |
| RIP:get_next_timer_interrupt | 1 | |
| BUG:scheduling_while_atomic | 0 | 14 |
| BUG:sleeping_function_called_from_invalid_context_at_include/linux/pagemap.h | 0 | 1 |
| BUG:sleeping_function_called_from_invalid_context_at_mm/slub.c | 0 | 2 |
| backtrace:do_execve | 0 | 1 |
| backtrace:SyS_execve | 0 | 1 |
| RIP:trace_hardirqs_on | 0 | 1 |
| backtrace:x86_64_start_kernel | 0 | 3 |
| backtrace:do_dup2 | 0 | 2 |
| backtrace:SyS_dup3 | 0 | 2 |
| backtrace:SyS_dup2 | 0 | 2 |
| RIP:trace_hardirqs_on_caller | 0 | 1 |
+------------------------------------------------------------------------------+------------+------------+
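The comparison table above can be read mechanically: a signature that never fired on the parent commit (0de0ad4a15) but fires on the tested commit (8ba848287b) is a candidate regression, which is how BUG:scheduling_while_atomic (0 → 14) stands out. A minimal sketch of that filter (illustrative only, not lkp's actual code):

```python
# Illustrative filter for an lkp comparison table: keep rows whose error
# signature never appeared on the parent commit but does on the new one.
def candidate_regressions(rows):
    """rows: iterable of (signature, parent_count, new_count) tuples."""
    return [sig for sig, parent, new in rows if parent == 0 and new > 0]

table = [
    ("boot_failures", 20, 19),
    ("RIP:flat_send_IPI_mask", 18, 5),
    ("BUG:scheduling_while_atomic", 0, 14),
    ("backtrace:x86_64_start_kernel", 0, 3),
]
print(candidate_regressions(table))
# → ['BUG:scheduling_while_atomic', 'backtrace:x86_64_start_kernel']
```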
The first soft lockup does not appear to be related to this commit, but the
BUG that follows does.
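The kernel emits this report from __schedule_bug() as comm/pid/preempt_count; since the idle task's comm is "swapper/0" (which itself contains a slash), the fields have to be split from the right. A small illustrative decoder (not kernel code) — the trailing 0x00000002 is the preempt_count, i.e. preemption was still disabled when schedule() was reached:

```python
# Decode "BUG: scheduling while atomic: <comm>/<pid>/<preempt_count>".
# comm may contain '/' (e.g. "swapper/0"), so split from the right.
def decode_sched_bug(line):
    payload = line.split("BUG: scheduling while atomic:")[1].strip()
    comm, pid, count = payload.rsplit("/", 2)
    return comm, int(pid), int(count, 16)

line = "BUG: scheduling while atomic: swapper/0/0/0x00000002"
print(decode_sched_bug(line))  # → ('swapper/0', 0, 2)
```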
[ 360.697040] BUG: scheduling while atomic: swapper/0/0/0x00000002
[ 120.359549] XFS (sdc): Unmounting Filesystem
[ 120.378720] XFS (sdd): Unmounting Filesystem
2015-09-19 18:08:35 export TEST_DIR=/fs/sda
[ 360.534202] INFO: task cat-vmstat:4009 blocked for more than 120 seconds.
[ 360.536125] Not tainted 4.3.0-rc1-00050-g8ba8482 #1
[ 360.537786] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 360.540392] cat-vmstat D ffff8800383d6880 13568 4009 1 0x00000000
[ 360.549607] ffff88003d66fa48 0000000000000086 000000177cbca02e ffff8800383d6898
[ 360.552534] ffff8800361e0000 ffff88003b81d400 ffff88003d670000 7fffffffffffffff
[ 360.555425] 7fffffffffffffff ffffffff81c2eae2 0000000000000002 ffff88003d66fa60
[ 360.567558] Call Trace:
[ 360.577645] [<ffffffff81c2eae2>] ? bit_wait+0x4b/0x4b
[ 360.579203] [<ffffffff81c2e44f>] schedule+0x7a/0x8f
[ 360.580749] [<ffffffff81c31d88>] schedule_timeout+0x3f/0x29d
[ 360.582410] [<ffffffff810c1e0e>] ? sched_clock_cpu+0x99/0xad
[ 360.584026] [<ffffffff810da8a8>] ? mark_held_locks+0x52/0x68
[ 360.585627] [<ffffffff81106cdb>] ? ktime_get+0x8e/0x117
[ 360.605356] [<ffffffff81c2eae2>] ? bit_wait+0x4b/0x4b
[ 360.607036] [<ffffffff8107fe2d>] ? kvm_clock_read+0x25/0x2e
[ 360.608770] [<ffffffff81c2eae2>] ? bit_wait+0x4b/0x4b
[ 360.610315] [<ffffffff81c2d4c7>] io_schedule_timeout+0xb7/0x12b
[ 360.611859] [<ffffffff81c2d4c7>] ? io_schedule_timeout+0xb7/0x12b
[ 360.613416] [<ffffffff81c2eb33>] bit_wait_io+0x51/0x55
[ 360.614845] [<ffffffff81c2e7e5>] __wait_on_bit+0x4b/0x7d
[ 360.616290] [<ffffffff81c2e97e>] out_of_line_wait_on_bit+0x71/0x7c
[ 360.630974] [<ffffffff81c2eae2>] ? bit_wait+0x4b/0x4b
[ 360.632642] [<ffffffff810d086a>] ? autoremove_wake_function+0x3a/0x3a
[ 360.634526] [<ffffffff812cd070>] nfs_wait_on_request+0x47/0x4a
[ 360.636301] [<ffffffff812d1517>] nfs_updatepage+0x637/0x7b9
[ 360.638030] [<ffffffff812c3c53>] nfs_write_end+0x12c/0x2fe
[ 360.639744] [<ffffffff81177a59>] generic_perform_write+0x142/0x1fd
[ 360.641567] [<ffffffff811792e7>] __generic_file_write_iter+0xce/0x174
[ 360.643441] [<ffffffff811794a8>] generic_file_write_iter+0x11b/0x188
[ 360.658129] [<ffffffff812c4477>] nfs_file_write+0xa1/0x11e
[ 360.659622] [<ffffffff811cd5e9>] __vfs_write+0x95/0xbe
[ 360.661046] [<ffffffff811cdce2>] vfs_write+0xbc/0x163
[ 360.671479] [<ffffffff811ce78b>] SyS_write+0x51/0x92
[ 360.673127] [<ffffffff81c337f2>] entry_SYSCALL_64_fastpath+0x12/0x76
[ 360.674915] 2 locks held by cat-vmstat/4009:
[ 360.676182] #0: (sb_writers#12){.+.+.+}, at: [<ffffffff811d099a>] __sb_start_write+0x5f/0xb0
[ 360.679201] #1: (&sb->s_type->i_mutex_key#14){+.+.+.}, at: [<ffffffff811793c5>] generic_file_write_iter+0x38/0x188
[ 360.682508] Sending NMI to all CPUs:
[ 360.697029] NMI backtrace for cpu 0
[ 360.697040] BUG: scheduling while atomic: swapper/0/0/0x00000002
[ 360.697042] no locks held by swapper/0/0.
[ 360.697057] Modules linked in: snd_pcsp
[ 360.697060] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.3.0-rc1-00050-g8ba8482 #1
[ 360.697061] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140531_083030-gandalf 04/01/2014
[ 360.697064] 0000000000000000 ffffffff8222be68 ffffffff814fdfac ffffffff82238580
[ 360.697066] ffffffff8222be80 ffffffff810b7bd4 ffff8800381d6880 ffffffff8222bed8
[ 360.697069] ffffffff81c2d5a8 ffffffff82238580 ffffffff81110149 ffffffff82228000
[ 360.697069] Call Trace:
[ 360.697072] [<ffffffff814fdfac>] dump_stack+0x4b/0x63
[ 360.697076] [<ffffffff810b7bd4>] __schedule_bug+0x64/0x73
[ 360.697078] [<ffffffff81c2d5a8>] __schedule+0x6d/0xe9a
[ 360.697081] [<ffffffff81110149>] ? tick_nohz_idle_exit+0xf6/0x102
[ 360.697083] [<ffffffff81c2e44f>] schedule+0x7a/0x8f
[ 360.697084] [<ffffffff81c2e6dc>] schedule_preempt_disabled+0x15/0x1e
[ 360.697087] [<ffffffff810d0e2f>] cpu_startup_entry+0x30b/0x3a3
[ 360.697089] [<ffffffff81c256c3>] rest_init+0x13a/0x140
[ 360.697091] [<ffffffff825ccf0b>] start_kernel+0x43b/0x448
[ 360.697093] [<ffffffff825cc120>] ? early_idt_handler_array+0x120/0x120
[ 360.697096] [<ffffffff825cc4a2>] x86_64_start_reservations+0x2a/0x2c
[ 360.697098] [<ffffffff825cc5dc>] x86_64_start_kernel+0x138/0x145
[ 360.701035] BUG: scheduling while atomic: swapper/0/0/0x00000002
[ 360.701036] no locks held by swapper/0/0.
[ 360.701039] Modules linked in: snd_pcsp
[ 360.701041] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G W 4.3.0-rc1-00050-g8ba8482 #1
[ 360.701042] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140531_083030-gandalf 04/01/2014
[ 360.701045] 0000000000000000 ffffffff8222be68 ffffffff814fdfac ffffffff82238580
[ 360.701047] ffffffff8222be80 ffffffff810b7bd4 ffff8800381d6880 ffffffff8222bed8
[ 360.701050] ffffffff81c2d5a8 ffffffff82238580 ffffffff81110149 ffffffff82228000
[ 360.701050] Call Trace:
[ 360.701054] [<ffffffff814fdfac>] dump_stack+0x4b/0x63
[ 360.701057] [<ffffffff810b7bd4>] __schedule_bug+0x64/0x73
[ 360.701060] [<ffffffff81c2d5a8>] __schedule+0x6d/0xe9a
[ 360.701063] [<ffffffff81110149>] ? tick_nohz_idle_exit+0xf6/0x102
[ 360.701065] [<ffffffff81c2e44f>] schedule+0x7a/0x8f
[ 360.701067] [<ffffffff81c2e6dc>] schedule_preempt_disabled+0x15/0x1e
[ 360.701069] [<ffffffff810d0e2f>] cpu_startup_entry+0x30b/0x3a3
[ 360.701071] [<ffffffff81c256c3>] rest_init+0x13a/0x140
[ 360.701075] [<ffffffff825ccf0b>] start_kernel+0x43b/0x448
[ 360.701078] [<ffffffff825cc120>] ? early_idt_handler_array+0x120/0x120
[ 360.701080] [<ffffffff825cc4a2>] x86_64_start_reservations+0x2a/0x2c
[ 360.701082] [<ffffffff825cc5dc>] x86_64_start_kernel+0x138/0x145
[ 360.706048] BUG: scheduling while atomic: swapper/0/0/0x00000002
[ 360.706049] no locks held by swapper/0/0.
[ 360.706052] Modules linked in: snd_pcsp
[ 360.706054] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G W 4.3.0-rc1-00050-g8ba8482 #1
[ 360.706055] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140531_083030-gandalf 04/01/2014
[ 360.706059] 0000000000000000 ffffffff8222be68 ffffffff814fdfac ffffffff82238580
[ 360.706061] ffffffff8222be80 ffffffff810b7bd4 ffff8800381d6880 ffffffff8222bed8
[ 360.706063] ffffffff81c2d5a8 ffffffff82238580 ffffffff81110149 ffffffff82228000
[ 360.706064] Call Trace:
[ 360.706070] [<ffffffff814fdfac>] dump_stack+0x4b/0x63
[ 360.706074] [<ffffffff810b7bd4>] __schedule_bug+0x64/0x73
[ 360.706076] [<ffffffff81c2d5a8>] __schedule+0x6d/0xe9a
[ 360.706081] [<ffffffff81110149>] ? tick_nohz_idle_exit+0xf6/0x102
[ 360.706083] [<ffffffff81c2e44f>] schedule+0x7a/0x8f
[ 360.706085] [<ffffffff81c2e6dc>] schedule_preempt_disabled+0x15/0x1e
[ 360.706088] [<ffffffff810d0e2f>] cpu_startup_entry+0x30b/0x3a3
[ 360.706091] [<ffffffff81c256c3>] rest_init+0x13a/0x140
[ 360.706096] [<ffffffff825ccf0b>] start_kernel+0x43b/0x448
[ 360.706098] [<ffffffff825cc120>] ? early_idt_handler_array+0x120/0x120
[ 360.706101] [<ffffffff825cc4a2>] x86_64_start_reservations+0x2a/0x2c
[ 360.706103] [<ffffffff825cc5dc>] x86_64_start_kernel+0x138/0x145
[ 360.709045] BUG: scheduling while atomic: swapper/0/0/0x00000002
[ 360.709047] no locks held by swapper/0/0.
[ 360.709049] Modules linked in: snd_pcsp
[ 360.709052] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G W 4.3.0-rc1-00050-g8ba8482 #1
[ 360.709053] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140531_083030-gandalf 04/01/2014
[ 360.709056] 0000000000000000 ffffffff8222be68 ffffffff814fdfac ffffffff82238580
[ 360.709058] ffffffff8222be80 ffffffff810b7bd4 ffff8800381d6880 ffffffff8222bed8
[ 360.709060] ffffffff81c2d5a8 ffffffff82238580 ffffffff81110149 ffffffff82228000
[ 360.709061] Call Trace:
[ 360.709066] [<ffffffff814fdfac>] dump_stack+0x4b/0x63
[ 360.709069] [<ffffffff810b7bd4>] __schedule_bug+0x64/0x73
[ 360.709071] [<ffffffff81c2d5a8>] __schedule+0x6d/0xe9a
[ 360.709074] [<ffffffff81110149>] ? tick_nohz_idle_exit+0xf6/0x102
[ 360.709076] [<ffffffff81c2e44f>] schedule+0x7a/0x8f
[ 360.709078] [<ffffffff81c2e6dc>] schedule_preempt_disabled+0x15/0x1e
[ 360.709081] [<ffffffff810d0e2f>] cpu_startup_entry+0x30b/0x3a3
[ 360.709083] [<ffffffff81c256c3>] rest_init+0x13a/0x140
[ 360.709086] [<ffffffff825ccf0b>] start_kernel+0x43b/0x448
[ 360.709088] [<ffffffff825cc120>] ? early_idt_handler_array+0x120/0x120
[ 360.709090] [<ffffffff825cc4a2>] x86_64_start_reservations+0x2a/0x2c
[ 360.709092] [<ffffffff825cc5dc>] x86_64_start_kernel+0x138/0x145
[ 360.717047] BUG: scheduling while atomic: swapper/0/0/0x00000002
[ 360.717048] no locks held by swapper/0/0.
[ 360.717051] Modules linked in: snd_pcsp
[ 360.717054] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G W 4.3.0-rc1-00050-g8ba8482 #1
[ 360.717055] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140531_083030-gandalf 04/01/2014
[ 360.717058] 0000000000000000 ffffffff8222be68 ffffffff814fdfac ffffffff82238580
[ 360.717060] ffffffff8222be80 ffffffff810b7bd4 ffff8800381d6880 ffffffff8222bed8
[ 360.717062] ffffffff81c2d5a8 ffffffff82238580 ffffffff81110149 ffffffff82228000
[ 360.717063] Call Trace:
[ 360.717069] [<ffffffff814fdfac>] dump_stack+0x4b/0x63
[ 360.717074] [<ffffffff810b7bd4>] __schedule_bug+0x64/0x73
[ 360.717077] [<ffffffff81c2d5a8>] __schedule+0x6d/0xe9a
[ 360.717081] [<ffffffff81110149>] ? tick_nohz_idle_exit+0xf6/0x102
[ 360.717083] [<ffffffff81c2e44f>] schedule+0x7a/0x8f
[ 360.717085] [<ffffffff81c2e6dc>] schedule_preempt_disabled+0x15/0x1e
[ 360.717089] [<ffffffff810d0e2f>] cpu_startup_entry+0x30b/0x3a3
[ 360.717092] [<ffffffff81c256c3>] rest_init+0x13a/0x140
[ 360.717096] [<ffffffff825ccf0b>] start_kernel+0x43b/0x448
[ 360.717099] [<ffffffff825cc120>] ? early_idt_handler_array+0x120/0x120
[ 360.717102] [<ffffffff825cc4a2>] x86_64_start_reservations+0x2a/0x2c
[ 360.717104] [<ffffffff825cc5dc>] x86_64_start_kernel+0x138/0x145
[ 360.722112] BUG: scheduling while atomic: swapper/0/0/0x00000002
[ 360.722113] no locks held by swapper/0/0.
[ 360.722116] Modules linked in: snd_pcsp
[ 360.722119] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G W 4.3.0-rc1-00050-g8ba8482 #1
[ 360.722120] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140531_083030-gandalf 04/01/2014
[ 360.722123] 0000000000000000 ffffffff8222be68 ffffffff814fdfac ffffffff82238580
[ 360.722125] ffffffff8222be80 ffffffff810b7bd4 ffff8800381d6880 ffffffff8222bed8
[ 360.722128] ffffffff81c2d5a8 ffffffff82238580 ffffffff81110149 ffffffff82228000
[ 360.722128] Call Trace:
[ 360.722134] [<ffffffff814fdfac>] dump_stack+0x4b/0x63
[ 360.722138] [<ffffffff810b7bd4>] __schedule_bug+0x64/0x73
[ 360.722140] [<ffffffff81c2d5a8>] __schedule+0x6d/0xe9a
[ 360.722144] [<ffffffff81110149>] ? tick_nohz_idle_exit+0xf6/0x102
[ 360.722146] [<ffffffff81c2e44f>] schedule+0x7a/0x8f
[ 360.722148] [<ffffffff81c2e6dc>] schedule_preempt_disabled+0x15/0x1e
[ 360.722150] [<ffffffff810d0e2f>] cpu_startup_entry+0x30b/0x3a3
[ 360.722152] [<ffffffff81c256c3>] rest_init+0x13a/0x140
[ 360.722156] [<ffffffff825ccf0b>] start_kernel+0x43b/0x448
[ 360.722158] [<ffffffff825cc120>] ? early_idt_handler_array+0x120/0x120
[ 360.722160] [<ffffffff825cc4a2>] x86_64_start_reservations+0x2a/0x2c
[ 360.722162] [<ffffffff825cc5dc>] x86_64_start_kernel+0x138/0x145
[ 360.725037] BUG: scheduling while atomic: swapper/0/0/0x00000002
[ 360.725038] no locks held by swapper/0/0.
[ 360.725040] Modules linked in: snd_pcsp
[ 360.725042] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G W 4.3.0-rc1-00050-g8ba8482 #1
[ 360.725043] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140531_083030-gandalf 04/01/2014
[ 360.725046] 0000000000000000 ffffffff8222be68 ffffffff814fdfac ffffffff82238580
[ 360.725048] ffffffff8222be80 ffffffff810b7bd4 ffff8800381d6880 ffffffff8222bed8
[ 360.725050] ffffffff81c2d5a8 ffffffff82238580 ffffffff81110149 ffffffff82228000
[ 360.725050] Call Trace:
[ 360.725055] [<ffffffff814fdfac>] dump_stack+0x4b/0x63
[ 360.725059] [<ffffffff810b7bd4>] __schedule_bug+0x64/0x73
[ 360.725061] [<ffffffff81c2d5a8>] __schedule+0x6d/0xe9a
[ 360.725064] [<ffffffff81110149>] ? tick_nohz_idle_exit+0xf6/0x102
[ 360.725067] [<ffffffff81c2e44f>] schedule+0x7a/0x8f
[ 360.725068] [<ffffffff81c2e6dc>] schedule_preempt_disabled+0x15/0x1e
[ 360.725071] [<ffffffff810d0e2f>] cpu_startup_entry+0x30b/0x3a3
[ 360.725074] [<ffffffff81c256c3>] rest_init+0x13a/0x140
[ 360.725077] [<ffffffff825ccf0b>] start_kernel+0x43b/0x448
[ 360.725079] [<ffffffff825cc120>] ? early_idt_handler_array+0x120/0x120
[ 360.725081] [<ffffffff825cc4a2>] x86_64_start_reservations+0x2a/0x2c
[ 360.725083] [<ffffffff825cc5dc>] x86_64_start_kernel+0x138/0x145
[ 360.728046] BUG: scheduling while atomic: swapper/0/0/0x00000002
[ 360.728047] no locks held by swapper/0/0.
[ 360.728049] Modules linked in: snd_pcsp
[ 360.728052] CPU: 0 PID: 0 Comm: swapper/0 Tainted: G W 4.3.0-rc1-00050-g8ba8482 #1
[ 360.728053] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140531_083030-gandalf 04/01/2014
[ 360.728056] 0000000000000000 ffffffff8222be68 ffffffff814fdfac ffffffff82238580
[ 360.728058] ffffffff8222be80 ffffffff810b7bd4 ffff8800381d6880 ffffffff8222bed8
[ 360.728060] ffffffff81c2d5a8 ffffffff82238580 ffffffff81110149 ffffffff82228000
[ 360.728061] Call Trace:
[ 360.728066] [<ffffffff814fdfac>] dump_stack+0x4b/0x63
[ 360.728069] [<ffffffff810b7bd4>] __schedule_bug+0x64/0x73
To reproduce:

        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run job.yaml
Thanks,
Ying Huang
[lkp] [x86/entry/64/compat] d47479d034: BUG: kernel boot hang
by kernel test robot
FYI, we noticed the following changes on
https://git.kernel.org/pub/scm/linux/kernel/git/luto/linux.git x86/entry_compat
commit d47479d0345d4031e660c59bfe92a2dd317c8eaf ("x86/entry/64/compat: Migrate the body of the syscall entry to C")
+------------------------------------------------------+------------+------------+
| | 87475de25f | d47479d034 |
+------------------------------------------------------+------------+------------+
| boot_successes | 189 | 13 |
| boot_failures | 11 | 210 |
| BUG:kernel_test_crashed | 10 | |
| WARNING:at_mm/page_alloc.c:#__alloc_pages_nodemask() | 1 | |
| backtrace:packet_setsockopt | 1 | |
| backtrace:compat_SyS_socketcall | 1 | |
| kernel_BUG_at_arch/x86/kernel/traps.c | 0 | 47 |
| invalid_opcode | 0 | 47 |
| RIP:fixup_bad_iret | 0 | 47 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 47 |
| BUG:kernel_boot_hang | 0 | 163 |
+------------------------------------------------------+------------+------------+
[ 3.456532] ------------[ cut here ]------------
[ 3.457007] kernel BUG at arch/x86/kernel/traps.c:568!
[ 3.457007] invalid opcode: 0000 [#1] SMP
[ 3.457007] Modules linked in:
[ 3.457007] CPU: 0 PID: 82 Comm: rcS Not tainted 4.2.0-rc7-00104-gd47479d #1
[ 3.457007] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140531_083030-gandalf 04/01/2014
[ 3.457007] task: ffff880013a1c000 ti: ffff880013a9c000 task.ti: ffff880013a9c000
[ 3.457007] RIP: 0010:[<ffffffff810151de>] [<ffffffff810151de>] fixup_bad_iret+0x5e/0x60
[ 3.457007] RSP: 0000:ffff880013a9fee8 EFLAGS: 00010046
[ 3.457007] RAX: ffff880013a9ff50 RBX: ffff880013a9ff50 RCX: ffffffff81893957
[ 3.457007] RDX: 0000000000000008 RSI: ffff880013a9ff10 RDI: ffff880013a9ff50
[ 3.457007] RBP: ffff880013a9ff00 R08: 0000000000000000 R09: 0000000000000000
[ 3.457007] R10: ffffffff81894f92 R11: ffffffff81894f92 R12: ffff880013aa0000
[ 3.457007] R13: ffff880013a9ff10 R14: 0000000000000000 R15: 0000000000000000
[ 3.457007] FS: 0000000000000000(0000) GS:ffff880012200000(0000) knlGS:0000000000000000
[ 3.457007] CS: 0010 DS: 002b ES: 002b CR0: 000000008005003b
[ 3.457007] CR2: 00000000ffaeafa0 CR3: 0000000013aa4000 CR4: 00000000000006f0
[ 3.457007] Stack:
[ 3.457007] 0000000000000001 0000000000000000 0000000000000000 0000000000000000
[ 3.457007] ffffffff81895219 ffffffff81894f92 0000000000000000 0000000000000000
[ 3.457007] 0000000000000000 0000000000000000 0000000000000000 0000000000000000
[ 3.457007] Call Trace:
[ 3.457007] [<ffffffff81895219>] error_entry+0xa9/0xb0
[ 3.457007] [<ffffffff81894f92>] ? general_protection+0x12/0x30
[ 3.457007] [<ffffffff81894f92>] ? general_protection+0x12/0x30
[ 3.457007] Code: 88 00 00 00 e8 e4 ab 3e 00 ba 88 00 00 00 4c 89 ee 48 89 df e8 d4 ab 3e 00 41 f6 44 24 e0 03 74 0a 48 89 d8 5b 41 5c 41 5d 5d c3 <0f> 0b 66 66 66 66 90 55 48 89 e5 eb 13 66 66 90 0f 0b bf 7d 00
[ 3.457007] RIP [<ffffffff810151de>] fixup_bad_iret+0x5e/0x60
[ 3.457007] RSP <ffff880013a9fee8>
[ 3.457007] ---[ end trace 7cb2870a2255142c ]---
[ 3.457007] Kernel panic - not syncing: Fatal exception
[ 3.457007] Kernel Offset: disabled
Thanks,
Ying Huang
[lkp] [x86/kconfig/32] 2bdc727abf: BUG kmalloc-256 (Not tainted): Object already free
by kernel test robot
FYI, we noticed the following changes on
https://git.kernel.org/pub/scm/linux/kernel/git/luto/linux.git x86/sigcontext
commit 2bdc727abf3a90a0e53849c88c48607e152166b2 ("Revert "x86/kconfig/32: Rename CONFIG_VM86 and default it to 'n'"")
+-----------------------------------------------------+------------+------------+
| | 7e01ebffff | 2bdc727abf |
+-----------------------------------------------------+------------+------------+
| boot_successes | 33 | 20 |
| boot_failures | 0 | 15 |
| BUG_kmalloc-#(Not_tainted):Object_already_free | 0 | 14 |
| INFO:Allocated_in_alloc_fdmem_age=#cpu=#pid= | 0 | 11 |
| INFO:Freed_in_kvfree_age=#cpu=#pid= | 0 | 10 |
| INFO:Slab#objects=#used=#fp=#flags= | 0 | 15 |
| INFO:Object#@offset=#fp= | 0 | 15 |
| BUG_kmalloc-#(Tainted:G_B):Object_already_free | 0 | 11 |
| INFO:Allocated_in_do_execveat_common_age=#cpu=#pid= | 0 | 10 |
| INFO:Freed_in_free_bprm_age=#cpu=#pid= | 0 | 10 |
| backtrace:do_group_exit | 0 | 12 |
| backtrace:SyS_exit_group | 0 | 12 |
| BUG:unable_to_handle_kernel | 0 | 2 |
| Oops | 0 | 2 |
| EIP_is_at_strrchr | 0 | 1 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 2 |
| backtrace:do_execve | 0 | 3 |
| backtrace:SyS_execve | 0 | 3 |
| BUG_kmalloc-#(Not_tainted):Poison_overwritten | 0 | 1 |
| INFO:#-#.First_byte#instead_of | 0 | 1 |
| INFO:Freed_in_load_elf_binary_age=#cpu=#pid= | 0 | 3 |
| INFO:Slab#objects=#used=#fp=0x(null)flags= | 0 | 1 |
| BUG_kmalloc-#(Tainted:G_B):Redzone_overwritten | 0 | 1 |
| INFO:Allocated_in_do_sys_vm86_age=#cpu=#pid= | 0 | 2 |
| INFO:Freed_in_exit_thread_age=#cpu=#pid= | 0 | 2 |
| invoked_oom-killer:gfp_mask=0x | 0 | 1 |
| Mem-Info | 0 | 1 |
| Out_of_memory:Kill_process | 0 | 1 |
| backtrace:vfs_writev | 0 | 1 |
| backtrace:SyS_writev | 0 | 1 |
| EIP_is_at_commit_creds | 0 | 1 |
| INFO:Allocated_in_load_elf_phdrs_age=#cpu=#pid= | 0 | 2 |
+-----------------------------------------------------+------------+------------+
[ 158.004783] mmap: trinity-c0 (11572) uses deprecated remap_file_pages() syscall. See Documentation/vm/remap_file_pages.txt.
[ 158.066462] VFS: Warning: trinity-c0 using old stat() call. Recompile your binary.
[ 158.141967] =============================================================================
[ 158.143697] BUG kmalloc-256 (Not tainted): Object already free
[ 158.155727] -----------------------------------------------------------------------------
[ 158.155727]
[ 158.158045] Disabling lock debugging due to kernel taint
[ 158.159123] INFO: Allocated in do_sys_vm86+0x44f/0x4b0 age=60 cpu=0 pid=11572
[ 158.164703] INFO: Freed in exit_thread+0x89/0xb0 age=27 cpu=0 pid=11586
[ 158.194498] INFO: Slab 0xcbc5f180 objects=9 used=6 fp=0xc818c6c0 flags=0x40000080
[ 158.196060] INFO: Object 0xc818c6c0 @offset=1728 fp=0xc818c1b0
[ 158.196060]
[ 158.197539] Bytes b4 c818c6b0: 33 2d 00 00 bc 56 ff ff 5a 5a 5a 5a 5a 5a 5a 5a 3-...V..ZZZZZZZZ
[ 158.199193] Object c818c6c0: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk
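The kkkk… bytes in the object dump are SLUB's free poison (0x6b, ASCII 'k'), and the 5a 5a… bytes before the object are padding poison (0x5a, 'Z'): the object allocated in do_sys_vm86() was freed in exit_thread(), poisoned, and then freed again. A user-space sketch of the same detection idea (illustrative; the real checks live in mm/slub.c):

```python
POISON_FREE = 0x6b   # 'k': fills freed objects when slub_debug poisoning is on
POISON_INUSE = 0x5a  # 'Z': fills padding / not-yet-initialized areas

def looks_freed(obj_bytes):
    """A fully 0x6b-filled object is still carrying its free poison,
    so writing to it, or freeing it again, indicates a bug."""
    return all(b == POISON_FREE for b in obj_bytes)

# First 16 bytes of the object from the report above:
dump = bytes.fromhex("6b" * 16)
print(looks_freed(dump))  # → True
```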
Thanks,
Ying Huang
[lkp] [perf] 3215929f8f: n f[d 29 [ 4:22:177.]092021] EIP is at SyS_perf_event_open+0x757/0xb86
by kernel test robot
FYI, we noticed the following changes on
git://internal_merge_and_test_tree devel-catchup-201509131855
commit 3215929f8ff55da910c26971d4286abff53ea6e2 ("perf: Restructure perf syscall point of no return")
[m ai2n]1 S.089628] BUG: unable to handle kernel NULL pointer dereference at 00000198
et[so ck op t(21011 .1 090827] IP: [<c2b2e592>] SyS_perf_event_open+0x757/0xb86
] Setso[ck op t( 21.092021] CPU: 0 PID: 668 Comm: trinity-main Not tainted 4.2.0-02761-g3215929 #2
[ 21.092021] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140531_083030-gandalf 04/01/2014
1 [7 8b 60 0020 14). o092021] task: cfc523c0 ti: d039c000 task.ti: d039c000
[ 21.092021] EIP: 0060:[<c2b2e592>] EFLAGS: 00010246 CPU: 0
n f[d 29 [ 4:22:177.]092021] EIP is at SyS_perf_event_open+0x757/0xb86
[ 21.092021] EAX: c38e33c4 EBX: 00000000 ECX: 00000000 EDX: 00000000
[ 21.092021] ESI: cfc523c0 EDI: 00000000 EBP: d039dfac ESP: d039df00
[ ma in ] 2Se1ts.oc092021] DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068
[ 21.092021] CR0: 8005003b CR2: 00000198 CR3: 109cb140 CR4: 000006b0
kop[t( 1 21 82b6100.0092021] Stack:
[0ma0in0] 0Se0ts0oc0k0 0000001do pt0(10070 80 80b6000000
d 34 [2[6: 2: 39 21.092021] Call Trace:
[m ai n]2 S1et.so092021] [<c33451c1>] syscall_call+0x7/0x7
[ 21.092021] [<c3340000>] ? preempt_schedule_irq+0x4d/0xea
sockop[t( 10 c 21.092021] EIP: [<c2b2e592>] SyS_perf_event_open+0x757/0xb86 SS:ESP 0068:d039df00
9 [8b 60 00 0 24)1 o.n 092021] CR2: 0000000000000198
fd 44 [26:1:232]
[ 21.114479] ---[ end trace c29b51e2a14544eb ]---
[ ma in ] 2Se1ts.oc115072] Kernel panic - not syncing: Fatal exception
Thanks,
Ying Huang
[lkp] [Btrfs] b659ef0277: +159.7% fileio.requests_per_sec
by kernel test robot
FYI, we noticed the following changes on
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit b659ef027792219b590d67a2baf1643a93727d29 ("Btrfs: avoid syncing log in the fast fsync path when not necessary")
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/period/nr_threads/disk/fs/size/filenum/rwmode/iomode:
lkp-sb02/fileio/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/600s/100%/1HDD/btrfs/64G/1024f/rndwr/sync
commit:
1ab818b137e198e4d06e987a4b089411f2e39d40
b659ef027792219b590d67a2baf1643a93727d29
1ab818b137e198e4 b659ef027792219b590d67a2ba
---------------- --------------------------
%stddev %change %stddev
\ | \
156.81 ±171% -99.2% 1.20 ± 14% fileio.request_latency_max_ms
17.03 ± 0% +159.7% 44.24 ± 0% fileio.requests_per_sec
2373960 ± 0% +6.7% 2533688 ± 0% fileio.time.file_system_outputs
161019 ± 0% -14.3% 137992 ± 0% fileio.time.voluntary_context_switches
9810964 ± 2% -17.9% 8052196 ± 1% cpuidle.C3-SNB.time
161019 ± 0% -14.3% 137992 ± 0% time.voluntary_context_switches
3568 ± 0% -2.4% 3484 ± 0% vmstat.io.bo
60476 ± 0% +18.1% 71428 ± 0% proc-vmstat.nr_active_file
507.75 ± 0% +86.4% 946.25 ± 0% proc-vmstat.nr_dirty
10809 ± 0% +15.6% 12497 ± 0% proc-vmstat.nr_slab_reclaimable
52238 ± 0% +13.4% 59232 ± 0% proc-vmstat.pgactivate
10189 ± 8% -9.9% 9180 ± 8% sched_debug.cfs_rq[0]:/.min_vruntime
1.75 ±4164% +6385.7% 113.50 ± 16% sched_debug.cpu#2.nr_uninterruptible
14748 ± 3% -7.2% 13685 ± 2% sched_debug.cpu#2.ttwu_local
141.25 ± 47% -83.9% 22.75 ±169% sched_debug.cpu#3.nr_uninterruptible
50725 ± 0% -9.8% 45774 ± 0% softirqs.BLOCK
43962 ± 1% -13.9% 37864 ± 1% softirqs.RCU
32134 ± 1% -8.8% 29295 ± 1% softirqs.SCHED
62715 ± 1% -9.9% 56488 ± 0% softirqs.TIMER
50.00 ± 0% +2.0% 51.00 ± 0% turbostat.Avg_MHz
0.41 ± 2% -14.5% 0.35 ± 2% turbostat.CPU%c3
2.69 ± 0% -9.9% 2.42 ± 0% turbostat.Pkg%pc2
1.02 ± 4% -22.9% 0.78 ± 4% turbostat.Pkg%pc3
260823 ± 0% +16.8% 304593 ± 0% meminfo.Active
241907 ± 0% +18.1% 285712 ± 0% meminfo.Active(file)
2033 ± 0% +86.2% 3784 ± 0% meminfo.Dirty
43239 ± 0% +15.6% 49990 ± 0% meminfo.SReclaimable
61606 ± 0% +11.0% 68373 ± 0% meminfo.Slab
1571 ± 5% +16.0% 1823 ± 8% slabinfo.btrfs_delayed_data_ref.active_objs
1577 ± 5% +16.0% 1830 ± 8% slabinfo.btrfs_delayed_data_ref.num_objs
1256 ± 5% -13.3% 1089 ± 8% slabinfo.btrfs_delayed_ref_head.active_objs
1261 ± 5% -13.3% 1093 ± 8% slabinfo.btrfs_delayed_ref_head.num_objs
767.25 ± 2% +14.7% 880.25 ± 4% slabinfo.btrfs_delayed_tree_ref.active_objs
767.25 ± 2% +14.7% 880.25 ± 4% slabinfo.btrfs_delayed_tree_ref.num_objs
2933 ± 3% -29.6% 2066 ± 0% slabinfo.btrfs_extent_buffer.active_objs
2944 ± 2% -29.4% 2079 ± 0% slabinfo.btrfs_extent_buffer.num_objs
5699 ± 0% +129.2% 13066 ± 0% slabinfo.btrfs_extent_state.active_objs
111.50 ± 1% +129.6% 256.00 ± 0% slabinfo.btrfs_extent_state.active_slabs
5699 ± 0% +129.4% 13078 ± 0% slabinfo.btrfs_extent_state.num_objs
111.50 ± 1% +129.6% 256.00 ± 0% slabinfo.btrfs_extent_state.num_slabs
351.00 ± 2% +103.5% 714.25 ± 0% slabinfo.btrfs_ordered_extent.active_objs
376.75 ± 2% +91.1% 720.00 ± 0% slabinfo.btrfs_ordered_extent.num_objs
12062 ± 0% +122.3% 26821 ± 0% slabinfo.btrfs_path.active_objs
430.50 ± 0% +122.4% 957.50 ± 0% slabinfo.btrfs_path.active_slabs
12062 ± 0% +122.3% 26821 ± 0% slabinfo.btrfs_path.num_objs
430.50 ± 0% +122.4% 957.50 ± 0% slabinfo.btrfs_path.num_slabs
407.25 ± 0% +80.4% 734.50 ± 4% slabinfo.ext4_extent_status.active_objs
407.25 ± 0% +80.4% 734.50 ± 4% slabinfo.ext4_extent_status.num_objs
19541 ± 0% +36.8% 26730 ± 0% slabinfo.radix_tree_node.active_objs
697.50 ± 0% +36.8% 954.25 ± 0% slabinfo.radix_tree_node.active_slabs
19541 ± 0% +36.8% 26730 ± 0% slabinfo.radix_tree_node.num_objs
697.50 ± 0% +36.8% 954.25 ± 0% slabinfo.radix_tree_node.num_slabs
158807 ± 2% +131.0% 366899 ± 3% latency_stats.avg.btrfs_log_inode_parent.[btrfs].btrfs_log_dentry_safe.[btrfs].btrfs_sync_file.[btrfs].vfs_fsync_range.do_fsync.SyS_fsync.system_call_fastpath
6668 ±173% +1166.1% 84431 ± 59% latency_stats.avg.get_request.blk_queue_bio.generic_make_request.submit_bio.submit_stripe_bio.[btrfs].btrfs_map_bio.[btrfs].btrfs_submit_bio_hook.[btrfs].submit_one_bio.[btrfs].flush_epd_write_bio.[btrfs].extent_writepages.[btrfs].btrfs_writepages.[btrfs].do_writepages
13866462 ±166% -81.3% 2589754 ±141% latency_stats.avg.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.system_call_fastpath
41062 ± 0% +54.4% 63383 ± 0% latency_stats.avg.wait_log_commit.[btrfs].btrfs_sync_log.[btrfs].btrfs_sync_file.[btrfs].vfs_fsync_range.do_fsync.SyS_fsync.system_call_fastpath
25520 ± 0% +70.5% 43514 ± 0% latency_stats.avg.write_all_supers.[btrfs].write_ctree_super.[btrfs].btrfs_sync_log.[btrfs].btrfs_sync_file.[btrfs].vfs_fsync_range.do_fsync.SyS_fsync.system_call_fastpath
18476 ± 1% -35.5% 11911 ± 0% latency_stats.hits.btrfs_sync_log.[btrfs].btrfs_sync_file.[btrfs].vfs_fsync_range.do_fsync.SyS_fsync.system_call_fastpath
41791 ± 0% -35.9% 26801 ± 0% latency_stats.hits.wait_log_commit.[btrfs].btrfs_sync_log.[btrfs].btrfs_sync_file.[btrfs].vfs_fsync_range.do_fsync.SyS_fsync.system_call_fastpath
18075 ± 0% -28.7% 12891 ± 0% latency_stats.hits.wait_on_page_bit.filemap_fdatawait_range.btrfs_wait_marked_extents.[btrfs].btrfs_sync_log.[btrfs].btrfs_sync_file.[btrfs].vfs_fsync_range.do_fsync.SyS_fsync.system_call_fastpath
9830 ± 0% +163.3% 25887 ± 0% latency_stats.hits.wait_ordered_extents.[btrfs].btrfs_log_changed_extents.[btrfs].btrfs_log_inode.[btrfs].btrfs_log_inode_parent.[btrfs].btrfs_log_dentry_safe.[btrfs].btrfs_sync_file.[btrfs].vfs_fsync_range.do_fsync.SyS_fsync.system_call_fastpath
16441 ± 0% -34.1% 10827 ± 0% latency_stats.hits.write_all_supers.[btrfs].write_ctree_super.[btrfs].btrfs_sync_log.[btrfs].btrfs_sync_file.[btrfs].vfs_fsync_range.do_fsync.SyS_fsync.system_call_fastpath
774721 ± 2% +17.4% 909689 ± 2% latency_stats.max.btrfs_log_inode_parent.[btrfs].btrfs_log_dentry_safe.[btrfs].btrfs_sync_file.[btrfs].vfs_fsync_range.do_fsync.SyS_fsync.system_call_fastpath
308974 ±102% -99.9% 169.50 ± 7% latency_stats.max.btrfs_sync_file.[btrfs].vfs_fsync_range.do_fsync.SyS_fsync.system_call_fastpath
6668 ±173% +1166.1% 84431 ± 59% latency_stats.max.get_request.blk_queue_bio.generic_make_request.submit_bio.submit_stripe_bio.[btrfs].btrfs_map_bio.[btrfs].btrfs_submit_bio_hook.[btrfs].submit_one_bio.[btrfs].flush_epd_write_bio.[btrfs].extent_writepages.[btrfs].btrfs_writepages.[btrfs].do_writepages
13999612 ±164% -81.5% 2589754 ±141% latency_stats.max.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.system_call_fastpath
155855 ±169% -92.6% 11547 ± 28% latency_stats.sum.btrfs_file_write_iter.[btrfs].__vfs_write.vfs_write.SyS_pwrite64.system_call_fastpath
40811794 ± 1% +8.0% 44071985 ± 1% latency_stats.sum.btrfs_log_inode_parent.[btrfs].btrfs_log_dentry_safe.[btrfs].btrfs_sync_file.[btrfs].vfs_fsync_range.do_fsync.SyS_fsync.system_call_fastpath
423179 ±111% -98.4% 6944 ± 10% latency_stats.sum.btrfs_sync_file.[btrfs].vfs_fsync_range.do_fsync.SyS_fsync.system_call_fastpath
11208157 ± 1% -42.1% 6484422 ± 0% latency_stats.sum.btrfs_sync_log.[btrfs].btrfs_sync_file.[btrfs].vfs_fsync_range.do_fsync.SyS_fsync.system_call_fastpath
55333 ± 3% -73.7% 14563 ± 5% latency_stats.sum.btrfs_tree_lock.[btrfs].btrfs_lock_root_node.[btrfs].btrfs_search_slot.[btrfs].btrfs_insert_empty_items.[btrfs].btrfs_log_inode.[btrfs].btrfs_log_inode_parent.[btrfs].btrfs_log_dentry_safe.[btrfs].btrfs_sync_file.[btrfs].vfs_fsync_range.do_fsync.SyS_fsync.system_call_fastpath
262263 ± 1% -18.7% 213138 ± 1% latency_stats.sum.btrfs_tree_read_lock.[btrfs].btrfs_read_lock_root_node.[btrfs].btrfs_search_slot.[btrfs].btrfs_insert_empty_items.[btrfs].btrfs_log_inode.[btrfs].btrfs_log_inode_parent.[btrfs].btrfs_log_dentry_safe.[btrfs].btrfs_sync_file.[btrfs].vfs_fsync_range.do_fsync.SyS_fsync.system_call_fastpath
1387167 ± 0% +6.7% 1479555 ± 1% latency_stats.sum.do_wait.SyS_wait4.system_call_fastpath
6668 ±173% +1166.1% 84431 ± 59% latency_stats.sum.get_request.blk_queue_bio.generic_make_request.submit_bio.submit_stripe_bio.[btrfs].btrfs_map_bio.[btrfs].btrfs_submit_bio_hook.[btrfs].submit_one_bio.[btrfs].flush_epd_write_bio.[btrfs].extent_writepages.[btrfs].btrfs_writepages.[btrfs].do_writepages
14672479 ±154% -82.3% 2589754 ±141% latency_stats.sum.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.system_call_fastpath
2775089 ± 0% +6.1% 2944954 ± 1% latency_stats.sum.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.system_call_fastpath
1515574 ± 1% +160.1% 3941685 ± 0% latency_stats.sum.wait_for_writer.[btrfs].btrfs_sync_log.[btrfs].btrfs_sync_file.[btrfs].vfs_fsync_range.do_fsync.SyS_fsync.system_call_fastpath
47560 ± 10% +335.6% 207189 ± 8% latency_stats.sum.wait_on_page_bit.extent_write_cache_pages.[btrfs].extent_writepages.[btrfs].btrfs_writepages.[btrfs].do_writepages.__filemap_fdatawrite_range.filemap_fdatawrite_range.btrfs_fdatawrite_range.[btrfs].start_ordered_ops.[btrfs].btrfs_sync_file.[btrfs].vfs_fsync_range.do_fsync
20108473 ± 0% -20.5% 15989866 ± 0% latency_stats.sum.wait_on_page_bit.filemap_fdatawait_range.btrfs_wait_marked_extents.[btrfs].btrfs_sync_log.[btrfs].btrfs_sync_file.[btrfs].vfs_fsync_range.do_fsync.SyS_fsync.system_call_fastpath
6853626 ± 0% +225.5% 22310007 ± 0% latency_stats.sum.wait_ordered_extents.[btrfs].btrfs_log_changed_extents.[btrfs].btrfs_log_inode.[btrfs].btrfs_log_inode_parent.[btrfs].btrfs_log_dentry_safe.[btrfs].btrfs_sync_file.[btrfs].vfs_fsync_range.do_fsync.SyS_fsync.system_call_fastpath
4.196e+08 ± 0% +12.3% 4.712e+08 ± 0% latency_stats.sum.write_all_supers.[btrfs].write_ctree_super.[btrfs].btrfs_sync_log.[btrfs].btrfs_sync_file.[btrfs].vfs_fsync_range.do_fsync.SyS_fsync.system_call_fastpath
lkp-sb02: Sandy Bridge-EP
Memory: 4G
fileio.requests_per_sec
    [ASCII plot: bisect-good (*) samples at ~15 requests/sec; bisect-bad (O) samples at ~45, with one O outlier at 0]
fileio.time.voluntary_context_switches
    [ASCII plot: bisect-good (*) samples at ~160000-170000; bisect-bad (O) samples at ~140000, with one O at ~100000 and one O outlier at 0]
time.voluntary_context_switches
    [ASCII plot: bisect-good (*) samples at ~160000-170000; bisect-bad (O) samples at ~140000, with one O at ~100000 and one O outlier at 0]
proc-vmstat.nr_dirty
    [ASCII plot: bisect-good (*) samples at ~500; bisect-bad (O) samples at ~900, with one O at ~800 and one O outlier at 0]
meminfo.Dirty
    [ASCII plot: bisect-good (*) samples at ~2000; bisect-bad (O) samples at ~3800, with one O at ~3000 and one O outlier at 0]
slabinfo.btrfs_extent_buffer.active_objs
    [ASCII plot: bisect-good (*) samples at ~2750-3250; bisect-bad (O) samples at ~2000, with one O at ~1500 and one O outlier at 0]
slabinfo.btrfs_extent_buffer.num_objs
    [ASCII plot: bisect-good (*) samples at ~2750-3250; bisect-bad (O) samples at ~2000-2250, with one O at ~1500 and one O outlier at 0]
[*] bisect-good sample
[O] bisect-bad sample
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang
[lkp] [rcu] 92c16ca391: WARNING: CPU: 0 PID: 1 at kernel/rcu/tree.c:3558 rcu_report_exp_cpu_mult+0x141/0x160()
by kernel test robot
FYI, we noticed the below changes on
https://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git dev.2015.09.03b
commit 92c16ca3910a3f0ea2dfaa58b4f0710854e4d35b ("rcu: Move synchronize_sched_expedited() to combining tree")
+---------------------------------------------------------+------------+------------+
| | 0fd0ed1fc6 | 92c16ca391 |
+---------------------------------------------------------+------------+------------+
| boot_successes | 25 | 12 |
| boot_failures | 0 | 11 |
| WARNING:at_kernel/rcu/tree.c:#rcu_report_exp_cpu_mult() | 0 | 10 |
| backtrace:acpi_ns_evaluate | 0 | 10 |
| backtrace:acpi_ns_init_one_device | 0 | 10 |
| backtrace:acpi_initialize_objects | 0 | 10 |
| backtrace:acpi_init | 0 | 10 |
| backtrace:kernel_init_freeable | 0 | 10 |
| BUG:kernel_test_crashed | 0 | 1 |
+---------------------------------------------------------+------------+------------+
[ 2.610625] ACPI: Executed 1 blocks of module-level executable AML code
[ 2.748380] [Firmware Bug]: ACPI: BIOS _OSI(Linux) query ignored
[ 2.755129] ------------[ cut here ]------------
[ 2.760290] WARNING: CPU: 0 PID: 1 at kernel/rcu/tree.c:3558 rcu_report_exp_cpu_mult+0x141/0x160()
[ 2.772328] Modules linked in:
[ 2.775745] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.2.0-rc1-00088-g92c16ca #1
[ 2.784101] Hardware name: Intel Corporation S2600WTT/S2600WTT, BIOS SE5C610.86B.01.01.0008.021120151325 02/11/2015
[ 2.795752] ffffffff81b77148 ffff88103802b6a8 ffffffff8189a2f5 0000000000000001
[ 2.804039] 0000000000000000 ffff88103802b6e8 ffffffff8107346a ffff88202df38000
[ 2.812328] ffffffff81cfff40 0000ffffffffffff 0000000000000292 0000000000000000
[ 2.820618] Call Trace:
[ 2.823351] [<ffffffff8189a2f5>] dump_stack+0x4c/0x65
[ 2.829094] [<ffffffff8107346a>] warn_slowpath_common+0x8a/0xc0
[ 2.835802] [<ffffffff8107355a>] warn_slowpath_null+0x1a/0x20
[ 2.842314] [<ffffffff810d4e41>] rcu_report_exp_cpu_mult+0x141/0x160
[ 2.849499] [<ffffffff810d68aa>] synchronize_sched_expedited+0x38a/0x550
[ 2.857079] [<ffffffff810d6a7e>] synchronize_rcu_expedited+0xe/0x10
[ 2.864170] [<ffffffff8147549b>] acpi_os_map_cleanup+0x14/0x3e
[ 2.871460] [<ffffffff81890ebb>] acpi_os_unmap_iomem+0xe4/0xed
[ 2.878069] [<ffffffff81890ed2>] acpi_os_unmap_memory+0xe/0x1c
[ 2.884675] [<ffffffff8149dfbb>] acpi_ex_system_memory_space_handler+0xe6/0x29d
[ 2.892934] [<ffffffff8149ded5>] ? acpi_ex_do_logical_op+0x1b2/0x1b2
[ 2.900120] [<ffffffff81492119>] acpi_ev_address_space_dispatch+0x2d4/0x342
[ 2.907992] [<ffffffff81499a39>] acpi_ex_access_region+0x423/0x4d9
[ 2.914993] [<ffffffff81097a66>] ? __might_sleep+0x56/0xb0
[ 2.921214] [<ffffffff81499ecf>] acpi_ex_field_datum_io+0x183/0x41d
[ 2.928300] [<ffffffff8149a41b>] acpi_ex_extract_from_field+0xcf/0x2c3
[ 2.935676] [<ffffffff814990b9>] acpi_ex_read_data_from_field+0x389/0x3da
[ 2.943344] [<ffffffff8149e7fe>] acpi_ex_resolve_node_to_value+0x337/0x457
[ 2.951108] [<ffffffff8149ecac>] acpi_ex_resolve_to_value+0x38e/0x42c
[ 2.958388] [<ffffffff8148bf49>] acpi_ds_evaluate_name_path+0xa3/0x145
[ 2.965767] [<ffffffff814b5028>] ? acpi_ut_trace_ptr+0x2e/0x78
[ 2.972376] [<ffffffff8148c5d1>] acpi_ds_exec_end_op+0xd6/0x6f8
[ 2.979077] [<ffffffff814ac822>] acpi_ps_parse_loop+0x7b5/0x859
[ 2.985783] [<ffffffff81488fb5>] ? acpi_ds_call_control_method+0x2da/0x2eb
[ 2.993547] [<ffffffff814adaca>] acpi_ps_parse_aml+0x1bb/0x4a3
[ 3.000148] [<ffffffff814ae756>] acpi_ps_execute_method+0x276/0x3aa
[ 3.007235] [<ffffffff814a5de9>] acpi_ns_evaluate+0x2f6/0x437
[ 3.013739] [<ffffffff814a6412>] acpi_ns_init_one_device+0x17c/0x207
[ 3.020922] [<ffffffff814aa018>] acpi_ns_walk_namespace+0x125/0x276
[ 3.028008] [<ffffffff814a6296>] ? acpi_ns_init_one_object+0x107/0x107
[ 3.035385] [<ffffffff81eb04c8>] ? acpi_sleep_proc_init+0x2a/0x2a
[ 3.042277] [<ffffffff814a67fa>] acpi_ns_initialize_devices+0x13e/0x217
[ 3.049760] [<ffffffff81eb2430>] acpi_initialize_objects+0x156/0x1ae
[ 3.056952] [<ffffffff81eb055a>] acpi_init+0x92/0x276
[ 3.062682] [<ffffffff81eb04c8>] ? acpi_sleep_proc_init+0x2a/0x2a
[ 3.069577] [<ffffffff81002123>] do_one_initcall+0xb3/0x1d0
[ 3.075896] [<ffffffff81e65170>] kernel_init_freeable+0x1c9/0x256
[ 3.082789] [<ffffffff8188d890>] ? rest_init+0x80/0x80
[ 3.088615] [<ffffffff8188d89e>] kernel_init+0xe/0xe0
[ 3.094346] [<ffffffff818a22df>] ret_from_fork+0x3f/0x70
[ 3.100373] [<ffffffff8188d890>] ? rest_init+0x80/0x80
[ 3.106201] ---[ end trace fde73dd3fbbe2b1c ]---
[ 3.111362] ------------[ cut here ]------------
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Ying Huang
[lkp] [drm/i915] 8be6ca8537: WARNING: CPU: 0 PID: 629 at drivers/gpu/drm/i915/intel_uncore.c:70 assert_device_not_suspended+0x53/0x60 [i915]()
by kernel test robot
FYI, we noticed the below changes on
git://anongit.freedesktop.org/drm-intel for-linux-next
commit 8be6ca8537e1230da8e92c753df4125151a3f6b1 ("drm/i915: Also call frontbuffer flip when disabling planes.")
<3>[ 88.957989] x86/PAT: bmc-watchdog:523 map pfn expected mapping type uncached-minus for [mem 0xbac69000-0xbac69fff], got write-back
<3>[ 88.972289] x86/PAT: bmc-watchdog:523 map pfn expected mapping type uncached-minus for [mem 0xbac69000-0xbac69fff], got write-back
<4>[ 90.353612] ------------[ cut here ]------------
<4>[ 90.358328] WARNING: CPU: 0 PID: 629 at drivers/gpu/drm/i915/intel_uncore.c:70 assert_device_not_suspended+0x53/0x60 [i915]()
<4>[ 90.372286] Device suspended
<4>[ 90.375207] Modules linked in: rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver netconsole btrfs xor raid6_pq sg sd_mod snd_hda_codec_hdmi ata_generic snd_hda_codec_realtek snd_hda_codec_generic pata_acpi x86_pkg_temp_thermal coretemp kvm_intel kvm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel lrw gf128mul glue_helper ablk_helper i915 cryptd ppdev eeepc_wmi asus_wmi sparse_keymap snd_hda_intel snd_hda_codec microcode snd_hda_core drm_kms_helper snd_hwdep snd_pcm rfkill ata_piix pata_via pcspkr snd_timer i2c_i801 libata snd soundcore serio_raw shpchp drm wmi parport_pc parport tpm_infineon video
<4>[ 90.432069] CPU: 0 PID: 629 Comm: pm_rpm Not tainted 4.2.0-rc6-00569-g8be6ca8 #1
<4>[ 90.439530] Hardware name: System manufacturer System Product Name/P8H67-M PRO, BIOS 1002 04/01/2011
<47>
[ 90.439898] systemd-journald[131]: Compressed data object 624 -> 468
<4>[ 90.455193] ffffffffa0438010 ffff88009956b9a8 ffffffff8189df6e ffff8801bfa11578
<4>[ 90.462744] ffff88009956b9f8 ffff88009956b9e8 ffffffff8107348a ffff88009956b9e8
<4>[ 90.470339] ffff8801bd1d0080 0000000000043208 ffff8801bd1d7e78 0000000000043208
<4>[ 90.477903] Call Trace:
<4>[ 90.480362] [<ffffffff8189df6e>] dump_stack+0x4c/0x65
<4>[ 90.485583] [<ffffffff8107348a>] warn_slowpath_common+0x8a/0xc0
<4>[ 90.491661] [<ffffffff81073506>] warn_slowpath_fmt+0x46/0x50
<4>[ 90.497470] [<ffffffffa03b3483>] assert_device_not_suspended+0x53/0x60 [i915]
<4>[ 90.505487] [<ffffffffa03b63f8>] gen6_read32+0x38/0x190 [i915]
<4>[ 90.511474] [<ffffffffa03e20c8>] ilk_fbc_disable+0x28/0x70 [i915]
<4>[ 90.517769] [<ffffffffa03e194d>] __intel_fbc_disable+0x2d/0x60 [i915]
<4>[ 90.524358] [<ffffffffa03e2c1c>] intel_fbc_flush+0x5c/0x70 [i915]
<4>[ 90.530626] [<ffffffffa03e345e>] intel_frontbuffer_flush+0x6e/0x80 [i915]
<4>[ 90.537598] [<ffffffffa03e373c>] intel_frontbuffer_flip+0x4c/0x60 [i915]
<4>[ 90.544517] [<ffffffffa03d7a52>] intel_atomic_commit+0x222/0x600 [i915]
<4>[ 90.551298] [<ffffffffa0078077>] drm_atomic_commit+0x37/0x60 [drm]
<4>[ 90.557641] [<ffffffffa01f96a4>] drm_atomic_helper_set_config+0x1c4/0x430 [drm_kms_helper]
<4>[ 90.566127] [<ffffffffa00682a6>] drm_mode_set_config_internal+0x66/0x100 [drm]
<4>[ 90.573514] [<ffffffffa006c2c9>] drm_mode_setcrtc+0x189/0x4e0 [drm]
<4>[ 90.579931] [<ffffffffa005d6c6>] drm_ioctl+0x196/0x640 [drm]
<4>[ 90.585746] [<ffffffff811f0ee1>] do_vfs_ioctl+0x301/0x560
<4>[ 90.591290] [<ffffffff8139502d>] ? selinux_file_ioctl+0x4d/0xc0
<4>[ 90.597400] [<ffffffff8138e383>] ? security_file_ioctl+0x43/0x60
<4>[ 90.603575] [<ffffffff811f11b9>] SyS_ioctl+0x79/0x90
<4>[ 90.608684] [<ffffffff818a5aae>] entry_SYSCALL_64_fastpath+0x12/0x71
<4>[ 90.615212] ---[ end trace ca2dc2b969c553ed ]---
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Ying Huang
[lkp] [sched/numa] 2a595721a1: No primary change, -4.1% pigz.time.system_time, -35.6% pigz.time.involuntary_context_switches
by kernel test robot
FYI, we noticed the below changes on
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit 2a595721a1fa6b684c1c818f379bef834ac3d65e ("sched/numa: Convert sched_numa_balancing to a static_branch")
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/nr_threads/blocksize:
brickland1/pigz/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/25%/512K
commit:
2b49d84b259fc18e131026e5d38e7855352f71b9
2a595721a1fa6b684c1c818f379bef834ac3d65e
2b49d84b259fc18e 2a595721a1fa6b684c1c818f37
---------------- --------------------------
%stddev %change %stddev
\ | \
8893 ± 3% -35.6% 5725 ± 2% pigz.time.involuntary_context_switches
529645 ± 7% -84.6% 81746 ± 2% pigz.time.minor_page_faults
136.12 ± 0% -4.1% 130.53 ± 0% pigz.time.system_time
49894 ± 0% -12.0% 43908 ± 0% vmstat.system.in
23110 ± 4% +10.4% 25504 ± 3% numa-meminfo.node2.Active(file)
34491 ± 7% +9.9% 37902 ± 7% numa-meminfo.node3.SUnreclaim
2198559 ± 85% +298.4% 8759838 ± 40% numa-numastat.node3.local_node
2201064 ± 85% +298.1% 8762796 ± 40% numa-numastat.node3.numa_hit
8893 ± 3% -35.6% 5725 ± 2% time.involuntary_context_switches
529645 ± 7% -84.6% 81746 ± 2% time.minor_page_faults
921.83 ± 0% +4.0% 958.44 ± 0% turbostat.CorWatt
935.50 ± 0% +3.9% 972.09 ± 0% turbostat.PkgWatt
1210520 ±116% +466.6% 6858564 ±173% latency_stats.avg.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
1210520 ±116% +466.6% 6858564 ±173% latency_stats.max.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
1210520 ±116% +466.6% 6858564 ±173% latency_stats.sum.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
5777 ± 4% +10.4% 6375 ± 3% numa-vmstat.node2.nr_active_file
8622 ± 7% +9.9% 9475 ± 7% numa-vmstat.node3.nr_slab_unreclaimable
1397511 ±112% +225.2% 4545159 ± 45% numa-vmstat.node3.numa_hit
1344766 ±116% +234.2% 4494534 ± 45% numa-vmstat.node3.numa_local
446934 ± 9% -100.0% 0.00 ± -1% proc-vmstat.numa_hint_faults
156287 ± 8% -100.0% 0.00 ± -1% proc-vmstat.numa_hint_faults_local
79222 ± 3% -100.0% 0.00 ± -1% proc-vmstat.numa_pages_migrated
606436 ± 10% -100.0% 0.00 ± -1% proc-vmstat.numa_pte_updates
1325944 ± 3% -34.0% 875483 ± 0% proc-vmstat.pgfault
2944 ± 22% -100.0% 0.00 ± -1% proc-vmstat.pgmigrate_fail
79222 ± 3% -100.0% 0.00 ± -1% proc-vmstat.pgmigrate_success
1.66 ± 88% -90.3% 0.16 ± 35% perf-profile.cpu-cycles.__acct_update_integrals.acct_account_cputime.account_user_time.account_process_tick.update_process_times
51.92 ± 11% -18.3% 42.43 ± 14% perf-profile.cpu-cycles.__hrtimer_run_queues.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
29.38 ± 7% -23.4% 22.52 ± 14% perf-profile.cpu-cycles.ktime_get.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.local_apic_timer_interrupt
29.13 ± 7% -23.4% 22.32 ± 13% perf-profile.cpu-cycles.read_hpet.ktime_get.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
1.17 ± 31% +112.8% 2.48 ± 17% perf-profile.cpu-cycles.read_hpet.update_wall_time.tick_do_update_jiffies64.tick_sched_do_timer.tick_sched_timer
1.78 ± 32% +107.3% 3.70 ± 13% perf-profile.cpu-cycles.tick_do_update_jiffies64.tick_sched_do_timer.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
1.78 ± 32% +107.4% 3.70 ± 13% perf-profile.cpu-cycles.tick_sched_do_timer.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.local_apic_timer_interrupt
50.82 ± 11% -18.1% 41.61 ± 14% perf-profile.cpu-cycles.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt
1.77 ± 32% +106.6% 3.67 ± 13% perf-profile.cpu-cycles.update_wall_time.tick_do_update_jiffies64.tick_sched_do_timer.tick_sched_timer.__hrtimer_run_queues
116261 ± 3% -24.1% 88256 ± 7% sched_debug.cfs_rq[0]:/.exec_clock
3830581 ± 1% -24.8% 2878817 ± 7% sched_debug.cfs_rq[0]:/.min_vruntime
133.00 ± 55% -98.7% 1.67 ± 74% sched_debug.cfs_rq[100]:/.util_avg
-3659940 ± -3% -24.4% -2765978 ± -7% sched_debug.cfs_rq[101]:/.spread0
-3278346 ± -8% -20.5% -2604920 ±-18% sched_debug.cfs_rq[102]:/.spread0
-3435271 ±-11% -22.2% -2672848 ±-15% sched_debug.cfs_rq[103]:/.spread0
-3566499 ± -3% -29.6% -2512386 ±-20% sched_debug.cfs_rq[104]:/.spread0
-3574386 ±-10% -24.6% -2693594 ±-14% sched_debug.cfs_rq[105]:/.spread0
2881 ±120% +1131.0% 35471 ± 40% sched_debug.cfs_rq[106]:/.exec_clock
0.50 ±173% +1100.0% 6.00 ± 31% sched_debug.cfs_rq[106]:/.load_avg
145030 ± 68% +703.4% 1165173 ± 40% sched_debug.cfs_rq[106]:/.min_vruntime
-3686246 ± -3% -53.5% -1714418 ±-32% sched_debug.cfs_rq[106]:/.spread0
0.50 ±173% +1100.0% 6.00 ± 31% sched_debug.cfs_rq[106]:/.tg_load_avg_contrib
2.75 ±132% +7254.5% 202.25 ± 26% sched_debug.cfs_rq[106]:/.util_avg
18683 ± 58% -85.0% 2799 ± 95% sched_debug.cfs_rq[107]:/.exec_clock
-3196208 ±-10% -15.4% -2705137 ±-11% sched_debug.cfs_rq[107]:/.spread0
750.95 ± 58% +1665.3% 13256 ± 95% sched_debug.cfs_rq[108]:/.exec_clock
66759 ± 44% +599.7% 467098 ± 81% sched_debug.cfs_rq[108]:/.min_vruntime
-3764526 ± 0% -35.9% -2412505 ±-20% sched_debug.cfs_rq[108]:/.spread0
-3621586 ± -2% -23.4% -2774188 ± -7% sched_debug.cfs_rq[109]:/.spread0
17.25 ± 25% -36.2% 11.00 ± 23% sched_debug.cfs_rq[10]:/.load_avg
-2053233 ±-43% -51.5% -994945 ±-61% sched_debug.cfs_rq[10]:/.spread0
17.25 ± 25% -36.2% 11.00 ± 23% sched_debug.cfs_rq[10]:/.tg_load_avg_contrib
-3753277 ± 0% -36.5% -2382031 ±-18% sched_debug.cfs_rq[110]:/.spread0
6278 ±157% +389.1% 30704 ± 39% sched_debug.cfs_rq[111]:/.exec_clock
0.00 ± 0% +Inf% 6.00 ± 20% sched_debug.cfs_rq[111]:/.load_avg
229896 ±131% +344.9% 1022781 ± 37% sched_debug.cfs_rq[111]:/.min_vruntime
-3601405 ± -8% -48.4% -1856844 ±-24% sched_debug.cfs_rq[111]:/.spread0
0.00 ± 0% +Inf% 6.00 ± 20% sched_debug.cfs_rq[111]:/.tg_load_avg_contrib
1.67 ±101% +10235.0% 172.25 ± 0% sched_debug.cfs_rq[111]:/.util_avg
737.39 ± 29% +726.5% 6094 ±137% sched_debug.cfs_rq[112]:/.exec_clock
67404 ± 35% +324.5% 286110 ± 83% sched_debug.cfs_rq[112]:/.min_vruntime
-3763902 ± -1% -31.1% -2593520 ±-15% sched_debug.cfs_rq[112]:/.spread0
-3486522 ± -9% -26.7% -2556764 ±-22% sched_debug.cfs_rq[113]:/.spread0
1443 ± 78% +667.7% 11079 ±111% sched_debug.cfs_rq[114]:/.exec_clock
-3740179 ± -1% -33.8% -2474399 ±-19% sched_debug.cfs_rq[114]:/.spread0
1.00 ± 81% +8650.0% 87.50 ± 94% sched_debug.cfs_rq[114]:/.util_avg
-3696449 ± -3% -34.1% -2434206 ±-28% sched_debug.cfs_rq[115]:/.spread0
-3332046 ±-11% -15.3% -2822306 ± -7% sched_debug.cfs_rq[116]:/.spread0
-3750772 ± 0% -27.2% -2730139 ± -5% sched_debug.cfs_rq[117]:/.spread0
781.26 ± 27% +197.5% 2324 ± 72% sched_debug.cfs_rq[118]:/.exec_clock
62554 ± 40% +131.9% 145063 ± 37% sched_debug.cfs_rq[118]:/.min_vruntime
-3768786 ± -1% -27.4% -2734597 ± -9% sched_debug.cfs_rq[118]:/.spread0
34.00 ±109% -99.0% 0.33 ±141% sched_debug.cfs_rq[118]:/.util_avg
964.82 ± 17% +535.2% 6128 ±139% sched_debug.cfs_rq[119]:/.exec_clock
-3743042 ± -1% -29.5% -2638522 ±-16% sched_debug.cfs_rq[119]:/.spread0
33831 ± 35% +136.1% 79889 ± 18% sched_debug.cfs_rq[11]:/.exec_clock
1598925 ± 14% +69.9% 2717005 ± 13% sched_debug.cfs_rq[11]:/.min_vruntime
6.00 ± 65% +150.0% 15.00 ± 33% sched_debug.cfs_rq[11]:/.runnable_load_avg
-2231731 ±-10% -92.7% -161883 ±-290% sched_debug.cfs_rq[11]:/.spread0
211.00 ± 48% +145.1% 517.25 ± 30% sched_debug.cfs_rq[11]:/.util_avg
-1733896 ±-28% -95.5% -78376 ±-1047% sched_debug.cfs_rq[12]:/.spread0
30322 ± 29% +114.8% 65119 ± 41% sched_debug.cfs_rq[13]:/.exec_clock
-2454110 ±-10% -75.7% -597570 ±-187% sched_debug.cfs_rq[13]:/.spread0
-2559592 ±-14% -62.8% -952814 ±-46% sched_debug.cfs_rq[14]:/.spread0
109741 ± 2% -10.9% 97760 ± 2% sched_debug.cfs_rq[15]:/.exec_clock
3599873 ± 5% -12.0% 3168005 ± 4% sched_debug.cfs_rq[15]:/.min_vruntime
-230805 ±-64% -225.3% 289094 ± 42% sched_debug.cfs_rq[15]:/.spread0
-93388 ±-171% -1216.9% 1043029 ± 44% sched_debug.cfs_rq[16]:/.spread0
-1052675 ±-33% -165.5% 689427 ± 65% sched_debug.cfs_rq[18]:/.spread0
1.00 ± 0% -100.0% 0.00 ± -1% sched_debug.cfs_rq[21]:/.nr_spread_over
-1248598 ±-26% -54.5% -568164 ±-71% sched_debug.cfs_rq[21]:/.spread0
67072 ± 27% -82.4% 11829 ± 54% sched_debug.cfs_rq[22]:/.exec_clock
10.75 ± 19% -53.5% 5.00 ± 0% sched_debug.cfs_rq[22]:/.load
13.25 ± 14% -45.3% 7.25 ± 24% sched_debug.cfs_rq[22]:/.load_avg
2526358 ± 34% -61.6% 970591 ± 40% sched_debug.cfs_rq[22]:/.min_vruntime
11.00 ± 21% -56.8% 4.75 ± 9% sched_debug.cfs_rq[22]:/.runnable_load_avg
13.25 ± 14% -45.3% 7.25 ± 24% sched_debug.cfs_rq[22]:/.tg_load_avg_contrib
404.75 ± 15% -55.7% 179.25 ± 8% sched_debug.cfs_rq[22]:/.util_avg
34903 ± 43% +111.9% 73956 ± 7% sched_debug.cfs_rq[24]:/.exec_clock
1459089 ± 37% +78.7% 2607295 ± 6% sched_debug.cfs_rq[24]:/.min_vruntime
-2371657 ±-21% -88.5% -271682 ±-51% sched_debug.cfs_rq[24]:/.spread0
114156 ± 14% -24.2% 86563 ± 5% sched_debug.cfs_rq[2]:/.exec_clock
108423 ± 6% -19.3% 87469 ± 16% sched_debug.cfs_rq[30]:/.exec_clock
23.25 ± 9% -28.0% 16.75 ± 12% sched_debug.cfs_rq[30]:/.load_avg
3552781 ± 5% -19.5% 2860014 ± 16% sched_debug.cfs_rq[30]:/.min_vruntime
23.25 ± 9% -28.0% 16.75 ± 12% sched_debug.cfs_rq[30]:/.tg_load_avg_contrib
738.50 ± 8% -23.7% 563.25 ± 12% sched_debug.cfs_rq[30]:/.util_avg
-435993 ±-87% -290.1% 828697 ± 66% sched_debug.cfs_rq[31]:/.spread0
-1171740 ±-71% -119.6% 229420 ±240% sched_debug.cfs_rq[32]:/.spread0
-1371768 ±-44% -104.5% 61104 ±1230% sched_debug.cfs_rq[33]:/.spread0
616.00 ± 16% -29.3% 435.50 ± 38% sched_debug.cfs_rq[36]:/.util_avg
40119 ± 43% +65.1% 66221 ± 18% sched_debug.cfs_rq[38]:/.exec_clock
-2124064 ±-28% -78.3% -460976 ±-79% sched_debug.cfs_rq[38]:/.spread0
-1798066 ±-18% -82.6% -312882 ±-148% sched_debug.cfs_rq[39]:/.spread0
114466 ± 5% -40.3% 68312 ± 40% sched_debug.cfs_rq[3]:/.exec_clock
3814268 ± 5% -33.5% 2537783 ± 32% sched_debug.cfs_rq[3]:/.min_vruntime
688.00 ± 15% -46.4% 369.00 ± 35% sched_debug.cfs_rq[3]:/.util_avg
-1624917 ±-29% -70.9% -473178 ±-168% sched_debug.cfs_rq[41]:/.spread0
10.75 ± 31% -53.5% 5.00 ± 70% sched_debug.cfs_rq[42]:/.load
112152 ± 7% -28.8% 79900 ± 5% sched_debug.cfs_rq[45]:/.exec_clock
27.25 ± 38% -67.9% 8.75 ± 74% sched_debug.cfs_rq[45]:/.load
23.25 ± 8% -52.7% 11.00 ± 60% sched_debug.cfs_rq[45]:/.load_avg
3652753 ± 5% -24.6% 2754845 ± 3% sched_debug.cfs_rq[45]:/.min_vruntime
22.25 ± 10% -65.2% 7.75 ± 73% sched_debug.cfs_rq[45]:/.runnable_load_avg
23.25 ± 8% -52.7% 11.00 ± 60% sched_debug.cfs_rq[45]:/.tg_load_avg_contrib
744.75 ± 9% -62.0% 283.25 ± 65% sched_debug.cfs_rq[45]:/.util_avg
101629 ± 3% -44.1% 56764 ± 13% sched_debug.cfs_rq[46]:/.exec_clock
20.00 ± 19% -51.2% 9.75 ± 60% sched_debug.cfs_rq[46]:/.load
21.50 ± 18% -50.0% 10.75 ± 59% sched_debug.cfs_rq[46]:/.load_avg
3325558 ± 2% -36.6% 2107303 ± 12% sched_debug.cfs_rq[46]:/.min_vruntime
19.75 ± 18% -51.9% 9.50 ± 60% sched_debug.cfs_rq[46]:/.runnable_load_avg
21.50 ± 18% -50.0% 10.75 ± 59% sched_debug.cfs_rq[46]:/.tg_load_avg_contrib
680.50 ± 18% -50.8% 335.00 ± 59% sched_debug.cfs_rq[46]:/.util_avg
92913 ± 2% -35.2% 60227 ± 38% sched_debug.cfs_rq[47]:/.exec_clock
730.50 ± 9% -48.3% 378.00 ± 47% sched_debug.cfs_rq[47]:/.util_avg
100043 ± 6% -35.7% 64337 ± 10% sched_debug.cfs_rq[48]:/.exec_clock
3482864 ± 9% -32.0% 2368231 ± 9% sched_debug.cfs_rq[48]:/.min_vruntime
682.50 ± 15% -31.2% 469.75 ± 28% sched_debug.cfs_rq[4]:/.util_avg
-1795812 ±-34% -99.6% -7570 ±-10364% sched_debug.cfs_rq[52]:/.spread0
-1189477 ±-80% -101.2% 13908 ±4111% sched_debug.cfs_rq[53]:/.spread0
-2198387 ±-28% -73.4% -585016 ±-64% sched_debug.cfs_rq[55]:/.spread0
272.50 ± 24% +46.1% 398.00 ± 4% sched_debug.cfs_rq[55]:/.util_avg
1882614 ± 10% +30.9% 2464360 ± 4% sched_debug.cfs_rq[56]:/.min_vruntime
-1948373 ±-11% -78.7% -414875 ±-44% sched_debug.cfs_rq[56]:/.spread0
6.25 ± 34% +100.0% 12.50 ± 20% sched_debug.cfs_rq[57]:/.load
6.00 ± 28% +108.3% 12.50 ± 20% sched_debug.cfs_rq[57]:/.runnable_load_avg
-2176925 ±-24% -104.0% 86742 ±1111% sched_debug.cfs_rq[57]:/.spread0
223.00 ± 33% +91.1% 426.25 ± 19% sched_debug.cfs_rq[57]:/.util_avg
-2261079 ±-41% -78.7% -482338 ±-65% sched_debug.cfs_rq[58]:/.spread0
-1954361 ±-42% -78.4% -421282 ±-115% sched_debug.cfs_rq[59]:/.spread0
94715 ± 3% -34.4% 62135 ± 26% sched_debug.cfs_rq[5]:/.exec_clock
17.50 ± 21% -57.1% 7.50 ± 33% sched_debug.cfs_rq[5]:/.load
3311193 ± 9% -27.7% 2394814 ± 23% sched_debug.cfs_rq[5]:/.min_vruntime
632.25 ± 24% -59.1% 258.75 ± 33% sched_debug.cfs_rq[5]:/.util_avg
0.33 ±141% +2900.0% 10.00 ± 66% sched_debug.cfs_rq[60]:/.load_avg
-3742126 ± 0% -36.8% -2365254 ±-24% sched_debug.cfs_rq[60]:/.spread0
0.33 ±141% +2900.0% 10.00 ± 66% sched_debug.cfs_rq[60]:/.tg_load_avg_contrib
-3554476 ± -7% -21.3% -2796294 ± -9% sched_debug.cfs_rq[61]:/.spread0
-3612954 ± -6% -28.9% -2568207 ±-16% sched_debug.cfs_rq[63]:/.spread0
-3759598 ± -1% -34.4% -2464987 ±-15% sched_debug.cfs_rq[65]:/.spread0
980.95 ± 17% +2645.6% 26932 ± 60% sched_debug.cfs_rq[66]:/.exec_clock
105444 ± 34% +751.5% 897829 ± 57% sched_debug.cfs_rq[66]:/.min_vruntime
-3725607 ± -1% -46.8% -1981493 ±-23% sched_debug.cfs_rq[66]:/.spread0
-3656770 ± -3% -31.5% -2503974 ±-10% sched_debug.cfs_rq[67]:/.spread0
-3680625 ± -3% -39.5% -2226211 ±-55% sched_debug.cfs_rq[68]:/.spread0
-3594149 ± -5% -22.2% -2795647 ± -7% sched_debug.cfs_rq[69]:/.spread0
451.25 ± 29% -50.7% 222.25 ± 55% sched_debug.cfs_rq[6]:/.util_avg
-3731439 ± 0% -30.3% -2601216 ±-15% sched_debug.cfs_rq[71]:/.spread0
1121 ± 22% -27.4% 814.73 ± 17% sched_debug.cfs_rq[72]:/.exec_clock
128857 ± 49% -54.6% 58446 ± 9% sched_debug.cfs_rq[72]:/.min_vruntime
-3702229 ± 0% -23.8% -2820926 ± -7% sched_debug.cfs_rq[72]:/.spread0
-3669791 ± -3% -25.3% -2740420 ±-12% sched_debug.cfs_rq[73]:/.spread0
-3674440 ± -4% -24.1% -2788078 ±-10% sched_debug.cfs_rq[74]:/.spread0
-3770001 ± 0% -27.8% -2721945 ±-13% sched_debug.cfs_rq[75]:/.spread0
3.50 ± 58% +4264.3% 152.75 ± 86% sched_debug.cfs_rq[75]:/.util_avg
-3736543 ± 0% -25.8% -2771201 ± -9% sched_debug.cfs_rq[76]:/.spread0
734.97 ± 63% +1526.6% 11954 ± 90% sched_debug.cfs_rq[78]:/.exec_clock
-3772623 ± 0% -34.7% -2462353 ±-23% sched_debug.cfs_rq[78]:/.spread0
-3552096 ± -6% -38.4% -2187150 ±-30% sched_debug.cfs_rq[79]:/.spread0
63492 ± 10% -22.1% 49436 ± 13% sched_debug.cfs_rq[7]:/.exec_clock
2387489 ± 14% -21.1% 1883110 ± 16% sched_debug.cfs_rq[7]:/.min_vruntime
-3751786 ± -1% -26.6% -2754141 ± -5% sched_debug.cfs_rq[80]:/.spread0
-3554010 ± -6% -21.4% -2792868 ± -6% sched_debug.cfs_rq[81]:/.spread0
-3632345 ± -6% -28.9% -2581916 ±-14% sched_debug.cfs_rq[86]:/.spread0
-3760781 ± 0% -43.9% -2111066 ±-31% sched_debug.cfs_rq[87]:/.spread0
26902 ± 64% -74.7% 6797 ±163% sched_debug.cfs_rq[88]:/.exec_clock
884822 ± 59% -72.3% 244816 ±150% sched_debug.cfs_rq[88]:/.min_vruntime
-3666833 ± -3% -29.3% -2591895 ±-21% sched_debug.cfs_rq[89]:/.spread0
-3552123 ± -6% -36.1% -2270416 ±-23% sched_debug.cfs_rq[90]:/.spread0
-3734940 ± -1% -26.2% -2755171 ±-10% sched_debug.cfs_rq[91]:/.spread0
655826 ± 83% -91.1% 58454 ± 31% sched_debug.cfs_rq[93]:/.min_vruntime
1479 ± 11% -19.0% 1198 ± 14% sched_debug.cfs_rq[93]:/.tg_load_avg
87.25 ± 96% -98.6% 1.25 ± 66% sched_debug.cfs_rq[94]:/.util_avg
35140 ± 37% -98.7% 444.99 ± 54% sched_debug.cfs_rq[95]:/.exec_clock
1158811 ± 34% -96.8% 37162 ± 63% sched_debug.cfs_rq[95]:/.min_vruntime
-3744235 ± 0% -35.4% -2417077 ±-18% sched_debug.cfs_rq[96]:/.spread0
-3403276 ±-13% -17.3% -2814514 ± -7% sched_debug.cfs_rq[97]:/.spread0
633334 ± 52% -81.4% 117729 ± 62% sched_debug.cfs_rq[98]:/.min_vruntime
-3197900 ± -9% -13.6% -2761812 ± -6% sched_debug.cfs_rq[98]:/.spread0
-3375824 ± -6% -18.2% -2760505 ± -8% sched_debug.cfs_rq[99]:/.spread0
8.75 ± 16% +65.7% 14.50 ± 20% sched_debug.cfs_rq[9]:/.load_avg
-2128634 ±-31% -80.4% -416709 ±-118% sched_debug.cfs_rq[9]:/.spread0
8.75 ± 16% +65.7% 14.50 ± 20% sched_debug.cfs_rq[9]:/.tg_load_avg_contrib
225.50 ± 41% +87.3% 422.25 ± 21% sched_debug.cfs_rq[9]:/.util_avg
122178 ± 2% -23.0% 94103 ± 6% sched_debug.cpu#0.nr_load_updates
7369 ± 22% -42.4% 4245 ± 23% sched_debug.cpu#10.ttwu_count
924.50 ± 59% +78.6% 1650 ± 32% sched_debug.cpu#104.ttwu_count
675.50 ± 12% +85.4% 1252 ± 26% sched_debug.cpu#105.nr_switches
676.00 ± 12% +85.5% 1253 ± 26% sched_debug.cpu#105.sched_count
287.50 ± 9% +89.2% 544.00 ± 27% sched_debug.cpu#105.sched_goidle
431.50 ± 71% +457.5% 2405 ± 86% sched_debug.cpu#105.ttwu_count
134.75 ± 12% +157.9% 347.50 ± 35% sched_debug.cpu#105.ttwu_local
43808 ± 5% +55.4% 68059 ± 16% sched_debug.cpu#106.nr_load_updates
423.75 ± 24% +808.6% 3850 ± 80% sched_debug.cpu#106.ttwu_count
55428 ± 14% -23.5% 42409 ± 9% sched_debug.cpu#107.nr_load_updates
673.75 ± 26% +297.4% 2677 ± 92% sched_debug.cpu#107.nr_switches
2.75 ± 82% -9.1% 2.50 ±142% sched_debug.cpu#107.nr_uninterruptible
675.00 ± 26% +296.9% 2679 ± 92% sched_debug.cpu#107.sched_count
286.75 ± 26% +334.6% 1246 ± 99% sched_debug.cpu#107.sched_goidle
714.00 ± 5% +89.0% 1349 ± 51% sched_debug.cpu#108.nr_switches
714.75 ± 4% +88.9% 1350 ± 51% sched_debug.cpu#108.sched_count
308.25 ± 5% +92.8% 594.25 ± 57% sched_debug.cpu#108.sched_goidle
364.00 ± 46% +169.8% 982.25 ± 33% sched_debug.cpu#108.ttwu_count
150.75 ± 14% +54.2% 232.50 ± 11% sched_debug.cpu#108.ttwu_local
6.00 ± 65% +150.0% 15.00 ± 33% sched_debug.cpu#11.cpu_load[0]
6.00 ± 65% +150.0% 15.00 ± 33% sched_debug.cpu#11.cpu_load[1]
6.00 ± 65% +150.0% 15.00 ± 33% sched_debug.cpu#11.cpu_load[2]
5.75 ± 65% +160.9% 15.00 ± 33% sched_debug.cpu#11.cpu_load[3]
5.75 ± 65% +160.9% 15.00 ± 33% sched_debug.cpu#11.cpu_load[4]
933.00 ± 64% +218.0% 2966 ± 41% sched_debug.cpu#11.curr->pid
74672 ± 10% +39.5% 104165 ± 9% sched_debug.cpu#11.nr_load_updates
23887 ± 35% -60.2% 9514 ± 55% sched_debug.cpu#11.nr_switches
23935 ± 34% -60.2% 9517 ± 55% sched_debug.cpu#11.sched_count
11797 ± 35% -60.5% 4657 ± 56% sched_debug.cpu#11.sched_goidle
617.00 ± 14% +285.7% 2379 ± 91% sched_debug.cpu#110.nr_switches
618.00 ± 14% +285.2% 2380 ± 91% sched_debug.cpu#110.sched_count
260.75 ± 11% +333.4% 1130 ± 95% sched_debug.cpu#110.sched_goidle
242.75 ± 25% +494.5% 1443 ± 90% sched_debug.cpu#110.ttwu_count
135.00 ± 18% +53.9% 207.75 ± 13% sched_debug.cpu#110.ttwu_local
985570 ± 1% -13.1% 856012 ± 1% sched_debug.cpu#111.avg_idle
45139 ± 17% +43.1% 64592 ± 14% sched_debug.cpu#111.nr_load_updates
378.00 ± 77% +464.0% 2132 ± 38% sched_debug.cpu#111.ttwu_count
740.50 ± 25% +181.4% 2084 ± 87% sched_debug.cpu#112.nr_switches
741.50 ± 25% +181.3% 2085 ± 87% sched_debug.cpu#112.sched_count
313.75 ± 27% +203.7% 952.75 ± 96% sched_debug.cpu#112.sched_goidle
733.25 ± 41% +84.6% 1353 ± 36% sched_debug.cpu#115.nr_switches
734.25 ± 41% +84.5% 1354 ± 36% sched_debug.cpu#115.sched_count
322.00 ± 40% +86.8% 601.50 ± 40% sched_debug.cpu#115.sched_goidle
808.50 ± 15% +416.0% 4171 ± 57% sched_debug.cpu#118.nr_switches
1004 ± 34% +315.4% 4173 ± 57% sched_debug.cpu#118.sched_count
348.25 ± 15% +471.2% 1989 ± 60% sched_debug.cpu#118.sched_goidle
15394 ± 20% -64.2% 5513 ± 60% sched_debug.cpu#12.nr_switches
15483 ± 20% -64.4% 5515 ± 60% sched_debug.cpu#12.sched_count
7564 ± 21% -64.8% 2665 ± 62% sched_debug.cpu#12.sched_goidle
7731 ± 19% -32.4% 5229 ± 38% sched_debug.cpu#12.ttwu_count
782.75 ± 11% -43.9% 438.75 ± 4% sched_debug.cpu#12.ttwu_local
1002 ± 5% -62.6% 374.75 ± 25% sched_debug.cpu#13.ttwu_local
-0.25 ±-591% +1500.0% -4.00 ±-35% sched_debug.cpu#14.nr_uninterruptible
795.75 ± 9% -46.3% 427.00 ± 9% sched_debug.cpu#14.ttwu_local
-3.00 ±-81% -141.7% 1.25 ±228% sched_debug.cpu#17.nr_uninterruptible
4479 ± 30% +29.2% 5787 ± 17% sched_debug.cpu#18.ttwu_count
335.00 ± 35% -41.0% 197.50 ± 19% sched_debug.cpu#18.ttwu_local
9528 ± 57% +159.3% 24709 ± 50% sched_debug.cpu#2.nr_switches
9621 ± 57% +156.9% 24713 ± 50% sched_debug.cpu#2.sched_count
4569 ± 60% +167.5% 12220 ± 50% sched_debug.cpu#2.sched_goidle
5817 ± 9% +61.1% 9374 ± 29% sched_debug.cpu#2.ttwu_count
11.00 ± 21% -56.8% 4.75 ± 9% sched_debug.cpu#22.cpu_load[0]
11.00 ± 21% -56.8% 4.75 ± 9% sched_debug.cpu#22.cpu_load[1]
11.00 ± 21% -56.8% 4.75 ± 9% sched_debug.cpu#22.cpu_load[2]
11.00 ± 21% -56.8% 4.75 ± 9% sched_debug.cpu#22.cpu_load[3]
11.00 ± 21% -56.8% 4.75 ± 9% sched_debug.cpu#22.cpu_load[4]
2778 ± 10% -61.6% 1066 ± 43% sched_debug.cpu#22.curr->pid
10.75 ± 19% -53.5% 5.00 ± 0% sched_debug.cpu#22.load
96429 ± 15% -39.1% 58700 ± 10% sched_debug.cpu#22.nr_load_updates
281.75 ± 28% +119.5% 618.50 ± 23% sched_debug.cpu#22.ttwu_local
842.75 ± 89% +267.5% 3097 ± 44% sched_debug.cpu#23.curr->pid
72493 ± 15% +39.1% 100851 ± 3% sched_debug.cpu#24.nr_load_updates
20.00 ± 17% -45.0% 11.00 ± 45% sched_debug.cpu#3.cpu_load[1]
20.00 ± 17% -45.0% 11.00 ± 45% sched_debug.cpu#3.cpu_load[2]
20.00 ± 17% -46.2% 10.75 ± 43% sched_debug.cpu#3.cpu_load[3]
20.00 ± 17% -46.2% 10.75 ± 43% sched_debug.cpu#3.cpu_load[4]
4116 ± 16% -48.0% 2142 ± 59% sched_debug.cpu#3.curr->pid
127716 ± 3% -26.1% 94370 ± 23% sched_debug.cpu#3.nr_load_updates
985032 ± 1% -13.4% 852699 ± 5% sched_debug.cpu#32.avg_idle
253.25 ± 18% +72.4% 436.50 ± 18% sched_debug.cpu#35.ttwu_local
3857 ± 13% -36.0% 2470 ± 39% sched_debug.cpu#36.curr->pid
4029 ± 14% -36.4% 2563 ± 35% sched_debug.cpu#4.curr->pid
10.75 ± 31% -53.5% 5.00 ± 70% sched_debug.cpu#42.load
22.25 ± 10% -65.2% 7.75 ± 73% sched_debug.cpu#45.cpu_load[0]
22.25 ± 10% -65.2% 7.75 ± 73% sched_debug.cpu#45.cpu_load[1]
22.25 ± 10% -65.2% 7.75 ± 73% sched_debug.cpu#45.cpu_load[2]
22.25 ± 10% -65.2% 7.75 ± 73% sched_debug.cpu#45.cpu_load[3]
22.25 ± 10% -65.2% 7.75 ± 73% sched_debug.cpu#45.cpu_load[4]
4517 ± 7% -63.5% 1650 ± 91% sched_debug.cpu#45.curr->pid
27.25 ± 38% -67.9% 8.75 ± 74% sched_debug.cpu#45.load
126272 ± 4% -17.7% 103981 ± 2% sched_debug.cpu#45.nr_load_updates
2986 ± 50% +268.2% 10993 ± 12% sched_debug.cpu#45.nr_switches
3060 ± 46% +259.3% 10997 ± 12% sched_debug.cpu#45.sched_count
1349 ± 56% +295.6% 5336 ± 13% sched_debug.cpu#45.sched_goidle
4670 ± 3% +18.0% 5508 ± 6% sched_debug.cpu#45.ttwu_count
229.25 ± 11% +90.9% 437.75 ± 16% sched_debug.cpu#45.ttwu_local
19.75 ± 18% -55.7% 8.75 ± 62% sched_debug.cpu#46.cpu_load[0]
19.75 ± 18% -55.7% 8.75 ± 62% sched_debug.cpu#46.cpu_load[1]
19.75 ± 18% -54.4% 9.00 ± 61% sched_debug.cpu#46.cpu_load[2]
19.75 ± 18% -54.4% 9.00 ± 61% sched_debug.cpu#46.cpu_load[3]
19.75 ± 18% -55.7% 8.75 ± 62% sched_debug.cpu#46.cpu_load[4]
21.50 ± 12% -54.7% 9.75 ± 60% sched_debug.cpu#46.load
118521 ± 1% -25.9% 87824 ± 6% sched_debug.cpu#46.nr_load_updates
3312 ± 57% +310.0% 13580 ± 52% sched_debug.cpu#46.nr_switches
3392 ± 54% +300.5% 13585 ± 52% sched_debug.cpu#46.sched_count
1508 ± 62% +339.1% 6624 ± 54% sched_debug.cpu#46.sched_goidle
21.75 ± 9% -46.0% 11.75 ± 45% sched_debug.cpu#47.cpu_load[4]
4444 ± 12% -49.7% 2235 ± 50% sched_debug.cpu#47.curr->pid
112219 ± 1% -18.9% 91025 ± 18% sched_debug.cpu#47.nr_load_updates
3516 ± 60% +371.4% 16575 ± 32% sched_debug.cpu#47.nr_switches
3560 ± 59% +365.7% 16580 ± 32% sched_debug.cpu#47.sched_count
1631 ± 65% +400.6% 8168 ± 33% sched_debug.cpu#47.sched_goidle
3937 ± 14% +62.5% 6398 ± 20% sched_debug.cpu#47.ttwu_count
191.00 ± 18% +189.0% 552.00 ± 14% sched_debug.cpu#47.ttwu_local
119687 ± 4% -21.9% 93518 ± 5% sched_debug.cpu#48.nr_load_updates
2246 ± 83% +352.0% 10153 ± 74% sched_debug.cpu#49.sched_goidle
197.75 ± 21% +82.4% 360.75 ± 47% sched_debug.cpu#49.ttwu_local
4103 ± 26% -63.9% 1482 ± 44% sched_debug.cpu#5.curr->pid
18.25 ± 24% -58.9% 7.50 ± 33% sched_debug.cpu#5.load
113195 ± 4% -21.4% 88966 ± 12% sched_debug.cpu#5.nr_load_updates
255.75 ± 29% +37.4% 351.50 ± 22% sched_debug.cpu#50.ttwu_local
1848 ± 43% +65.1% 3051 ± 35% sched_debug.cpu#54.curr->pid
257.75 ± 20% +37.3% 354.00 ± 23% sched_debug.cpu#54.ttwu_local
3400 ± 25% +67.6% 5698 ± 41% sched_debug.cpu#55.ttwu_count
82247 ± 3% +12.2% 92289 ± 2% sched_debug.cpu#56.nr_load_updates
7470 ±118% +216.4% 23636 ± 41% sched_debug.cpu#56.nr_switches
7527 ±117% +214.0% 23638 ± 41% sched_debug.cpu#56.sched_count
3652 ±121% +220.7% 11713 ± 41% sched_debug.cpu#56.sched_goidle
2864 ± 16% +137.4% 6799 ± 50% sched_debug.cpu#56.ttwu_count
6.00 ± 28% +108.3% 12.50 ± 20% sched_debug.cpu#57.cpu_load[0]
6.00 ± 28% +108.3% 12.50 ± 20% sched_debug.cpu#57.cpu_load[1]
6.00 ± 28% +108.3% 12.50 ± 20% sched_debug.cpu#57.cpu_load[2]
6.00 ± 28% +108.3% 12.50 ± 20% sched_debug.cpu#57.cpu_load[3]
6.00 ± 28% +108.3% 12.50 ± 20% sched_debug.cpu#57.cpu_load[4]
6.25 ± 34% +100.0% 12.50 ± 20% sched_debug.cpu#57.load
2554 ± 28% +94.0% 4955 ± 32% sched_debug.cpu#57.ttwu_count
4928 ± 54% +373.3% 23326 ± 39% sched_debug.cpu#58.nr_switches
4962 ± 54% +370.1% 23328 ± 39% sched_debug.cpu#58.sched_count
2388 ± 56% +384.7% 11576 ± 39% sched_debug.cpu#58.sched_goidle
2382 ± 50% +168.0% 6385 ± 30% sched_debug.cpu#58.ttwu_count
2857 ± 30% -45.9% 1544 ± 66% sched_debug.cpu#6.curr->pid
3210 ± 42% -74.7% 813.50 ± 32% sched_debug.cpu#61.nr_switches
4.00 ± 77% -100.0% 0.00 ± 1% sched_debug.cpu#61.nr_uninterruptible
3212 ± 42% -74.6% 815.25 ± 32% sched_debug.cpu#61.sched_count
1485 ± 46% -76.2% 353.50 ± 36% sched_debug.cpu#61.sched_goidle
645.00 ± 24% +97.9% 1276 ± 23% sched_debug.cpu#62.nr_switches
645.50 ± 24% +97.9% 1277 ± 23% sched_debug.cpu#62.sched_count
277.00 ± 24% +107.7% 575.25 ± 25% sched_debug.cpu#62.sched_goidle
142.25 ± 29% +55.5% 221.25 ± 21% sched_debug.cpu#62.ttwu_local
43304 ± 1% +44.5% 62557 ± 19% sched_debug.cpu#66.nr_load_updates
1942 ± 59% -65.9% 662.50 ± 27% sched_debug.cpu#68.nr_switches
1943 ± 59% -65.9% 662.75 ± 27% sched_debug.cpu#68.sched_count
871.75 ± 63% -66.4% 292.75 ± 27% sched_debug.cpu#68.sched_goidle
2863 ±100% -68.4% 905.75 ±121% sched_debug.cpu#68.ttwu_count
304.25 ± 11% -48.9% 155.50 ± 47% sched_debug.cpu#68.ttwu_local
3742 ± 63% -70.0% 1122 ± 17% sched_debug.cpu#69.nr_switches
3743 ± 63% -70.0% 1123 ± 17% sched_debug.cpu#69.sched_count
1767 ± 67% -72.4% 488.50 ± 16% sched_debug.cpu#69.sched_goidle
338.75 ± 8% -34.8% 221.00 ± 27% sched_debug.cpu#69.ttwu_local
6904 ± 20% -37.2% 4338 ± 23% sched_debug.cpu#7.ttwu_count
3540 ± 48% -75.2% 877.00 ± 86% sched_debug.cpu#70.ttwu_count
545.75 ± 73% -63.3% 200.50 ± 30% sched_debug.cpu#70.ttwu_local
1629 ± 14% -50.8% 801.75 ± 39% sched_debug.cpu#71.nr_switches
1822 ± 27% -56.0% 802.00 ± 39% sched_debug.cpu#71.sched_count
701.50 ± 14% -49.4% 355.00 ± 38% sched_debug.cpu#71.sched_goidle
335.25 ± 21% -52.0% 161.00 ± 35% sched_debug.cpu#71.ttwu_local
2889 ± 72% -68.4% 914.25 ± 16% sched_debug.cpu#72.nr_switches
2889 ± 72% -68.3% 915.25 ± 16% sched_debug.cpu#72.sched_count
1351 ± 77% -69.8% 408.75 ± 15% sched_debug.cpu#72.sched_goidle
1367 ± 15% -36.3% 871.25 ± 25% sched_debug.cpu#73.nr_switches
1368 ± 15% -36.3% 872.00 ± 25% sched_debug.cpu#73.sched_count
595.25 ± 15% -34.8% 388.25 ± 29% sched_debug.cpu#73.sched_goidle
4466 ±104% -88.9% 497.75 ± 21% sched_debug.cpu#73.ttwu_count
2033 ± 20% -55.1% 913.75 ± 33% sched_debug.cpu#74.nr_switches
2034 ± 20% -55.1% 914.50 ± 33% sched_debug.cpu#74.sched_count
887.00 ± 25% -54.1% 407.25 ± 33% sched_debug.cpu#74.sched_goidle
536.75 ± 47% -63.2% 197.75 ± 26% sched_debug.cpu#74.ttwu_local
355.00 ± 21% -25.7% 263.75 ± 3% sched_debug.cpu#77.sched_goidle
42672 ± 1% +20.1% 51258 ± 15% sched_debug.cpu#78.nr_load_updates
30851 ± 46% -73.6% 8131 ± 83% sched_debug.cpu#8.nr_switches
1.50 ±300% -400.0% -4.50 ±-24% sched_debug.cpu#8.nr_uninterruptible
30940 ± 46% -73.7% 8134 ± 82% sched_debug.cpu#8.sched_count
15271 ± 46% -74.0% 3967 ± 84% sched_debug.cpu#8.sched_goidle
0.75 ±404% -66.7% 0.25 ±331% sched_debug.cpu#83.nr_uninterruptible
62144 ± 20% -23.7% 47431 ± 17% sched_debug.cpu#88.nr_load_updates
3349 ± 60% -76.1% 799.00 ± 25% sched_debug.cpu#89.nr_switches
3.75 ± 22% -86.7% 0.50 ±100% sched_debug.cpu#89.nr_uninterruptible
3350 ± 60% -76.1% 799.25 ± 25% sched_debug.cpu#89.sched_count
1579 ± 66% -78.1% 346.50 ± 22% sched_debug.cpu#89.sched_goidle
5.00 ± 56% +135.0% 11.75 ± 29% sched_debug.cpu#9.cpu_load[2]
5.00 ± 56% +145.0% 12.25 ± 22% sched_debug.cpu#9.cpu_load[3]
4.75 ± 67% +163.2% 12.50 ± 20% sched_debug.cpu#9.cpu_load[4]
7871 ± 17% -39.6% 4756 ± 22% sched_debug.cpu#9.ttwu_count
1391 ± 55% -58.6% 576.50 ± 11% sched_debug.cpu#9.ttwu_local
142.50 ± 15% +122.6% 317.25 ± 66% sched_debug.cpu#90.ttwu_local
55305 ± 21% -22.6% 42803 ± 0% sched_debug.cpu#93.nr_load_updates
67974 ± 14% -37.5% 42516 ± 0% sched_debug.cpu#95.nr_load_updates
4.00 ± 30% -75.0% 1.00 ± 70% sched_debug.cpu#95.nr_uninterruptible
299.50 ± 15% +28.9% 386.00 ± 9% sched_debug.cpu#95.sched_goidle
3575 ± 72% -89.5% 373.75 ± 9% sched_debug.cpu#95.ttwu_count
153.75 ± 12% +36.6% 210.00 ± 11% sched_debug.cpu#95.ttwu_local
41155 ± 4% +24.5% 51234 ± 16% sched_debug.cpu#96.nr_load_updates
353.25 ± 46% +625.9% 2564 ± 42% sched_debug.cpu#96.ttwu_count
48865 ± 11% -10.2% 43872 ± 4% sched_debug.cpu#98.nr_load_updates
brickland1: Brickland Ivy Bridge-EX
Memory: 128G
pigz.time.minor_page_faults
700000 ++-----------------------------------------------------------------+
| |
600000 ++ .*.. .* .*.. *.. |
*..* .*.*. + .*..* *.*..*.*.. + *. |
500000 ++ *.*..*.*..*. *. *..* *..* |
| |
400000 ++ |
| |
300000 ++ |
| |
200000 ++ |
| |
100000 O+ O O O O O O O O O O O O O O O O O O O O O O O O O O
| |
0 ++-----------------------------------------------------------------+
time.minor_page_faults
700000 ++-----------------------------------------------------------------+
| |
600000 ++ .*.. .* .*.. *.. |
*..* .*.*. + .*..* *.*..*.*.. + *. |
500000 ++ *.*..*.*..*. *. *..* *..* |
| |
400000 ++ |
| |
300000 ++ |
| |
200000 ++ |
| |
100000 O+ O O O O O O O O O O O O O O O O O O O O O O O O O O
| |
0 ++-----------------------------------------------------------------+
vmstat.system.in
51000 ++------------------------------------------------------------------+
| .*..* .*.*.. .*. .*.. |
50000 *+.* + .*..*.*..*. *. .*.*. *. *.*..*..*.*..* |
49000 ++ *. *..*. |
| |
48000 ++ |
| |
47000 ++ |
| |
46000 ++ |
45000 ++ |
| |
44000 ++ O O O O O O O O O O O O O O O O O O O
O O O O O O O O |
43000 ++------------------------------------------------------------------+
proc-vmstat.pgfault
1.5e+06 ++----------------------------------------------------------------+
| |
1.4e+06 ++ *.. .*.. *. *.. |
*.. + .* .*.. .. *..*.*..*. + *. |
1.3e+06 ++ * *.*..*.*..*.*. * * *..* *..* |
| |
1.2e+06 ++ |
| |
1.1e+06 ++ |
| |
1e+06 ++ |
| |
900000 O+ O O O O O O O O O O O O O O O O O |
| O O O O O O O O O
800000 ++----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang
[lkp] [dma] 6894258eda: mpt2sas0: reply_post_free pool: pci_pool_alloc failed
by kernel test robot
FYI, we noticed the following changes on
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit 6894258eda2f9badc28c878086c0e54bd5b7fb30 ("dma-mapping: consolidate dma_{alloc,free}_{attrs,coherent}")
[ 38.454412] mpt2sas 0000:01:00.0: swiotlb buffer is full (sz: 398336 bytes)
[ 38.462243] swiotlb: coherent allocation failed for device 0000:01:00.0 size=398336
[ 38.470860] CPU: 33 PID: 1287 Comm: systemd-udevd Not tainted 4.2.0-10851-g6894258 #1
[ 38.479661] Hardware name: Intel Corporation BRICKLAND/BRICKLAND, BIOS BRBDXSD1.000.0320.R00.1506111113 06/11/2015
[ 38.491285] 00000000000613ff ffff887fe642b998 ffffffff81400352 00000000ffffffff
[ 38.499656] ffff887fe642b9d8 ffffffff81427f86 0000000000000007 00000000000002d0
[ 38.508025] ffff883ff0cce098 0000000000061400 ffff883fe0e178d8 ffffffff81cd9bc0
[ 38.516366] Call Trace:
[ 38.520330] [<ffffffff81400352>] dump_stack+0x4b/0x69
[ 38.527336] [<ffffffff81427f86>] swiotlb_alloc_coherent+0x146/0x160
[ 38.535725] [<ffffffff8105ad53>] x86_swiotlb_alloc_coherent+0x43/0x50
[ 38.544292] [<ffffffff811aed05>] dma_pool_alloc+0x105/0x240
[ 38.551877] [<ffffffffa0167de6>] _base_allocate_memory_pools+0x306/0x10d0 [mpt2sas]
[ 38.562571] [<ffffffffa0166662>] ? _base_handshake_req_reply_wait+0x1e2/0x430 [mpt2sas]
[ 38.572893] [<ffffffffa016c6bb>] mpt2sas_base_attach+0x39b/0x7c0 [mpt2sas]
[ 38.581956] [<ffffffffa0170d0c>] _scsih_probe+0x38c/0x520 [mpt2sas]
[ 38.590334] [<ffffffff814467c5>] local_pci_probe+0x45/0xa0
[ 38.597808] [<ffffffff81447bb7>] pci_device_probe+0xd7/0x120
[ 38.605480] [<ffffffff815490b4>] driver_probe_device+0x224/0x490
[ 38.613534] [<ffffffff815493a4>] __driver_attach+0x84/0x90
[ 38.620992] [<ffffffff81549320>] ? driver_probe_device+0x490/0x490
[ 38.629224] [<ffffffff81546dd4>] bus_for_each_dev+0x64/0xa0
[ 38.636759] [<ffffffff81548a4e>] driver_attach+0x1e/0x20
[ 38.643997] [<ffffffff815485b1>] bus_add_driver+0x1f1/0x290
[ 38.651527] [<ffffffffa0196000>] ? 0xffffffffa0196000
[ 38.658480] [<ffffffff81549d40>] driver_register+0x60/0xe0
[ 38.665907] [<ffffffff8144616c>] __pci_register_driver+0x4c/0x50
[ 38.673932] [<ffffffffa0196164>] _scsih_init+0x164/0x1000 [mpt2sas]
[ 38.682255] [<ffffffff81002123>] do_one_initcall+0xb3/0x1d0
[ 38.689808] [<ffffffff811c1e44>] ? kmem_cache_alloc_trace+0x1b4/0x220
[ 38.698339] [<ffffffff811647f8>] ? do_init_module+0x27/0x1e8
[ 38.705976] [<ffffffff81164831>] do_init_module+0x60/0x1e8
[ 38.713431] [<ffffffff810f98b9>] load_module+0x2089/0x24b0
[ 38.720841] [<ffffffff810f5ac0>] ? __symbol_put+0x40/0x40
[ 38.728139] [<ffffffff810f9ee0>] SyS_finit_module+0x80/0xb0
[ 38.735647] [<ffffffff818aa42e>] entry_SYSCALL_64_fastpath+0x12/0x71
[ 38.744018] mpt2sas0: reply_post_free pool: pci_pool_alloc failed
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Ying Huang