[lkp] [printk] 34578dc67f: EIP is at vprintk_emit+0x1ea/0x600
by kernel test robot
FYI, we noticed the following changes on
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit 34578dc67f38c02ccbe696e4099967884caa8e15 ("printk: set may_schedule for some of console_trylock() callers")
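For context, the patch makes console_trylock() mark the console-flushing path as schedulable when the caller is in a preemptible context, so console_unlock() may cond_resched() between log records. A minimal sketch of the idea, simplified from kernel/printk/printk.c (not the verbatim patch):

    int console_trylock(void)
    {
            if (down_trylock_console_sem())
                    return 0;
            console_locked = 1;
            /*
             * Previously this was always 0 on this path; with the
             * patch, a schedulable caller is allowed to reschedule
             * while it flushes the log buffer to the consoles.
             */
            console_may_schedule = !oops_in_progress &&
                                   preemptible() &&
                                   !rcu_preempt_depth();
            return 1;
    }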
+------------------------------------------------+------------+------------+
| | 08e0722e8e | 34578dc67f |
+------------------------------------------------+------------+------------+
| boot_successes | 0 | 0 |
| boot_failures | 98 | 100 |
| BUG:kernel_torture_test_oversize | 98 | 50 |
| EIP_is_at_raw_spin_unlock_irqrestore | 0 | 3 |
| Kernel_panic-not_syncing:softlockup:hung_tasks | 0 | 50 |
| backtrace:of_unittest | 0 | 50 |
| backtrace:kernel_init_freeable | 0 | 50 |
| EIP_is_at_vprintk_emit | 0 | 28 |
| EIP_is_at_lock_acquire | 0 | 5 |
| EIP_is_at_of_unittest_overlay | 0 | 8 |
| EIP_is_at_mutex_lock_nested | 0 | 1 |
| EIP_is_at__mutex_unlock_slowpath | 0 | 2 |
| EIP_is_at_of_overlay_destroy | 0 | 1 |
| EIP_is_at_idr_find_slowpath | 0 | 1 |
| EIP_is_at___might_sleep | 0 | 1 |
+------------------------------------------------+------------+------------+
[ 33.497678] ### dt-test ### of_unittest_destroy_tracked_overlays: overlay destroy failed for #6
[ 33.497693] NMI watchdog: BUG: soft lockup - CPU#0 stuck for 23s! [swapper:1]
[ 33.497695] Modules linked in:
[ 33.497696] irq event stamp: 69018756
[ 33.497700] hardirqs last enabled at (69018755): [<c068e185>] vprintk_emit+0x1e5/0x600
[ 33.497704] hardirqs last disabled at (69018756): [<c0a091b8>] apic_timer_interrupt+0x28/0x40
[ 33.497707] softirqs last enabled at (68988726): [<c064cf32>] __do_softirq+0x312/0x3f0
[ 33.497710] softirqs last disabled at (68988719): [<c0604c7e>] do_softirq_own_stack+0x1e/0x30
[ 33.497713] CPU: 0 PID: 1 Comm: swapper Not tainted 4.5.0-rc4-00295-g34578dc #1
[ 33.497714] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 33.497715] task: b0058000 ti: b0060000 task.ti: b0060000
[ 33.497716] EIP: 0060:[<c068e18a>] EFLAGS: 00000246 CPU: 0
[ 33.497718] EIP is at vprintk_emit+0x1ea/0x600
[ 33.497719] EAX: 00000246 EBX: 00000053 ECX: 00000000 EDX: 00000001
[ 33.497720] ESI: 00000006 EDI: 00000000 EBP: b0061e4c ESP: b0061e1c
[ 33.497721] DS: 007b ES: 007b FS: 0000 GS: 0000 SS: 0068
[ 33.497722] CR0: 8005003b CR2: 00000000 CR3: 10e3d000 CR4: 00000690
[ 33.497726] Stack:
[ 33.497730] 00000000 00000000 00000000 00000000 c15d9462 00000053 c15d9462 00000000
[ 33.497734] 00000246 00000006 00000001 00000000 b0061e6c c068e712 00000000 00000004
[ 33.497737] 00000000 00000000 c0bcf58c b0061e80 b0061e74 c06fdf6e b0061ea4 c0df5eec
[ 33.497738] Call Trace:
[ 33.497741] [<c068e712>] vprintk_default+0x32/0x40
[ 33.497744] [<c06fdf6e>] printk+0x11/0x13
[ 33.497748] [<c0df5eec>] of_unittest_overlay+0x8d1/0x900
[ 33.497750] [<c0df6b1f>] of_unittest+0xc04/0xc2d
[ 33.497753] [<c067f29b>] ? trace_hardirqs_on+0xb/0x10
[ 33.497756] [<c099fda9>] ? add_sysfs_fw_map_entry+0x3c/0x7d
[ 33.497758] [<c0600433>] ? do_one_initcall+0x73/0x1d0
[ 33.497760] [<c0df5f1b>] ? of_unittest_overlay+0x900/0x900
[ 33.497761] [<c060043e>] do_one_initcall+0x7e/0x1d0
[ 33.497763] [<c0df5f1b>] ? of_unittest_overlay+0x900/0x900
[ 33.497765] [<c0664def>] ? parse_args+0x24f/0x440
[ 33.497767] [<c067f29b>] ? trace_hardirqs_on+0xb/0x10
[ 33.497770] [<c0dc4b30>] ? kernel_init_freeable+0xc3/0x15b
[ 33.497771] [<c0dc4b4d>] kernel_init_freeable+0xe0/0x15b
[ 33.497773] [<c0a014bb>] kernel_init+0xb/0xe0
[ 33.497776] [<c066e17c>] ? schedule_tail+0xc/0x50
[ 33.497778] [<c0a088a0>] ret_from_kernel_thread+0x20/0x40
[ 33.497779] [<c0a014b0>] ? rest_init+0x110/0x110
[ 33.497804] Code: 00 8d 40 ff 80 b8 60 94 5d c1 0a 0f 85 d0 fe ff ff 89 c3 be 02 00 00 00 e9 c6 fe ff ff 8d 74 26 00 e8 0b 11 ff ff 8b 45 f0 50 9d <8d> 74 26 00 80 7d ef 00 0f 85 7e ff ff ff e8 73 ce fe ff e8 fe
[ 33.497805] Kernel panic - not syncing: softlockup: hung tasks
[ 33.497807] CPU: 0 PID: 1 Comm: swapper Tainted: G L 4.5.0-rc4-00295-g34578dc #1
[ 33.497808] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 33.497812] b0058000 b0061cf8 c07c681d b0061d10 c06fdaf6 b0058000 00000017 b0058000
[ 33.497815] 00000000 b0061d40 c06c11cd c0b7dd9b 00000000 00000017 b0058298 00000001
[ 33.497819] b0061de0 00000008 c0d799a0 c0d760c0 7fffffff b0061d7c c069b960 00000000
[ 33.497819] Call Trace:
[ 33.497821] [<c07c681d>] dump_stack+0x16/0x19
[ 33.497823] [<c06fdaf6>] panic+0x85/0x190
[ 33.497825] [<c06c11cd>] watchdog_timer_fn+0x1ad/0x1b0
[ 33.497828] [<c069b960>] hrtimer_run_queues+0x140/0x4a0
[ 33.497829] [<c06c1020>] ? watchdog+0x40/0x40
[ 33.497831] [<c069aa9b>] update_process_times+0x1b/0x50
[ 33.497834] [<c06ab364>] tick_periodic+0x34/0xd0
[ 33.497836] [<c06ab454>] ? tick_handle_periodic+0x14/0x60
[ 33.497837] [<c06ab454>] tick_handle_periodic+0x14/0x60
[ 33.497841] [<c0630b20>] local_apic_timer_interrupt+0x20/0x50
[ 33.497843] [<c06312cc>] smp_apic_timer_interrupt+0x2c/0x40
[ 33.497844] [<c0a091bf>] apic_timer_interrupt+0x2f/0x40
[ 33.497846] [<c068e18a>] ? vprintk_emit+0x1ea/0x600
[ 33.497848] [<c068e712>] vprintk_default+0x32/0x40
[ 33.497850] [<c06fdf6e>] printk+0x11/0x13
[ 33.497852] [<c0df5eec>] of_unittest_overlay+0x8d1/0x900
[ 33.497854] [<c0df6b1f>] of_unittest+0xc04/0xc2d
[ 33.497855] [<c067f29b>] ? trace_hardirqs_on+0xb/0x10
[ 33.497857] [<c099fda9>] ? add_sysfs_fw_map_entry+0x3c/0x7d
[ 33.497859] [<c0600433>] ? do_one_initcall+0x73/0x1d0
[ 33.497861] [<c0df5f1b>] ? of_unittest_overlay+0x900/0x900
[ 33.497862] [<c060043e>] do_one_initcall+0x7e/0x1d0
[ 33.497864] [<c0df5f1b>] ? of_unittest_overlay+0x900/0x900
[ 33.497866] [<c0664def>] ? parse_args+0x24f/0x440
[ 33.497867] [<c067f29b>] ? trace_hardirqs_on+0xb/0x10
[ 33.497869] [<c0dc4b30>] ? kernel_init_freeable+0xc3/0x15b
[ 33.497871] [<c0dc4b4d>] kernel_init_freeable+0xe0/0x15b
[ 33.497872] [<c0a014bb>] kernel_init+0xb/0xe0
[ 33.497874] [<c066e17c>] ? schedule_tail+0xc/0x50
[ 33.497876] [<c0a088a0>] ret_from_kernel_thread+0x20/0x40
[ 33.497877] [<c0a014b0>] ? rest_init+0x110/0x110
[ 33.497881] Kernel Offset: 0xf600000 from 0xb1000000 (relocation range: 0xb0000000-0xc47dffff)
Thanks,
Ying Huang
[lkp] [rcutorture] f5b06dd00f: WARNING: CPU: 0 PID: 18 at kernel/rcu/rcuperf.c:361 rcu_perf_writer+0x3cd/0x418()
by kernel test robot
FYI, we noticed the following changes on
https://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git dev.2015.01.12a
commit f5b06dd00f499c7a155a13c2dbedfcfd9b24bb6d ("rcutorture: Add RCU grace-period performance tests")
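The new rcu_perf_writer kthreads measure grace-period latency by timing grace periods back to back; conceptually each sample reduces to the following hedged illustration (not the rcuperf.c source — the WARN_ON at rcuperf.c:361 fires from inside this writer loop, per the backtrace below):

    /* Hedged illustration of one grace-period latency sample. */
    static u64 time_one_grace_period(void)
    {
            u64 t0 = ktime_get_mono_fast_ns();

            synchronize_rcu();      /* wait for one full grace period */
            return ktime_get_mono_fast_ns() - t0;
    }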
[ 1.173594] rcu-perf: rcu_perf_writer task started
[ 1.174821] ------------[ cut here ]------------
[ 1.175986] WARNING: CPU: 0 PID: 18 at kernel/rcu/rcuperf.c:361 rcu_perf_writer+0x3cd/0x418()
[ 1.178553] Modules linked in:
[ 1.179334] CPU: 0 PID: 18 Comm: rcu_perf_writer Not tainted 4.4.0-rc2-00066-gf5b06dd #1
[ 1.181344] 00000000 00000000 8e40bf04 812c835e 8e40bf1c 81073bf2 810cb713 924c8c80
[ 1.183380] 00000000 00000000 8e40bf2c 81073c5e 00000009 00000000 8e40bf50 810cb713
[ 1.185439] 8e407840 8e40a000 00000000 810cb346 924c8c80 00000000 810cb346 8e40bfac
[ 1.187479] Call Trace:
[ 1.188100] [<812c835e>] dump_stack+0x40/0x5e
[ 1.189220] [<81073bf2>] warn_slowpath_common+0xd4/0x115
[ 1.190573] [<810cb713>] ? rcu_perf_writer+0x3cd/0x418
[ 1.191880] [<81073c5e>] warn_slowpath_null+0x2b/0x3d
[ 1.193171] [<810cb713>] rcu_perf_writer+0x3cd/0x418
[ 1.194459] [<810cb346>] ? rcu_perf_wait_shutdown+0xe3/0xe3
[ 1.195873] [<810cb346>] ? rcu_perf_wait_shutdown+0xe3/0xe3
[ 1.197292] [<8109d06f>] kthread+0x11a/0x12d
[ 1.198391] [<81c62548>] ret_from_kernel_thread+0x20/0x34
[ 1.199756] [<8109cf55>] ? flush_kthread_work+0x171/0x171
[ 1.201133] ---[ end trace 53c8b4a96b192d7d ]---
Thanks,
Kernel Test Robot
[lkp] [kernel/fs] d57d611505: +54.7% turbostat.%Busy
by kernel test robot
FYI, we noticed the following changes on
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit d57d611505d911c6f9f81cd9bd6dbd293d66dd9f ("kernel/fs: fix I/O wait not accounted for RW O_DSYNC")
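The fix changes where the time a task sleeps in synchronous (O_DSYNC) writes is accounted, so utilization-derived metrics shift even though the workload itself is unchanged. The affected pattern is a loop like this hedged userspace sketch (the file name and sizes are placeholders):

    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
            char buf[4096] = { 0 };
            /* O_DSYNC: each write() sleeps until the data reaches the
             * device; that sleep is what the patch re-accounts as I/O
             * wait. */
            int fd = open("/tmp/dsync-test", O_WRONLY | O_CREAT | O_DSYNC, 0644);

            if (fd < 0)
                    return 1;
            for (int i = 0; i < 1000; i++)
                    if (write(fd, buf, sizeof(buf)) != sizeof(buf))
                            break;
            close(fd);
            return 0;
    }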
=========================================================================================
compiler/cpufreq_governor/disk/filesize/fs/iterations/kconfig/nr_directories/nr_files_per_directory/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase:
gcc-4.9/performance/1HDD/9B/f2fs/1x/x86_64-rhel/16d/256fpd/32t/debian-x86_64-2015-02-07.cgz/fsyncBeforeClose/lkp-ne04/400M/fsmark
commit:
09954bad448791ef01202351d437abdd9497a804
d57d611505d911c6f9f81cd9bd6dbd293d66dd9f
09954bad448791ef d57d611505d911c6f9f81cd9bd
---------------- --------------------------
%stddev %change %stddev
\ | \
14760 ± 1% +6.3% 15695 ± 1% fsmark.time.involuntary_context_switches
46.00 ± 0% -8.2% 42.25 ± 1% fsmark.time.percent_of_cpu_this_job_got
26151 ±116% -71.3% 7498 ± 23% latency_stats.sum.call_rwsem_down_read_failed.f2fs_mkdir.[f2fs].vfs_mkdir.SyS_mkdir.entry_SYSCALL_64_fastpath
7418 ± 23% -27.7% 5361 ± 3% numa-vmstat.node0.nr_slab_unreclaimable
1.47 ± 1% -9.9% 1.33 ± 0% time.user_time
1447 ± 4% -29.0% 1028 ± 7% uptime.idle
7.75 ± 19% +206.5% 23.75 ± 1% vmstat.procs.b
29675 ± 23% -27.7% 21447 ± 3% numa-meminfo.node0.SUnreclaim
83099 ± 8% -11.5% 73568 ± 1% numa-meminfo.node0.Slab
11.91 ± 1% +54.7% 18.43 ± 1% turbostat.%Busy
358.00 ± 1% +60.0% 572.75 ± 1% turbostat.Avg_MHz
21.54 ± 0% -17.4% 17.79 ± 1% turbostat.CPU%c1
24.78 ± 3% +41.1% 34.98 ± 1% turbostat.CPU%c3
41.76 ± 1% -31.0% 28.81 ± 1% turbostat.CPU%c6
8.53 ± 5% -25.6% 6.34 ± 10% turbostat.Pkg%pc3
66312232 ± 0% -68.5% 20910669 ± 4% cpuidle.C1-NHM.time
46386272 ± 2% -20.5% 36884085 ± 5% cpuidle.C1E-NHM.time
2.79e+08 ± 2% +51.5% 4.226e+08 ± 0% cpuidle.C3-NHM.time
254315 ± 2% +25.1% 318250 ± 2% cpuidle.C3-NHM.usage
7.585e+08 ± 1% -22.2% 5.901e+08 ± 1% cpuidle.C6-NHM.time
415414 ± 1% -38.1% 257115 ± 1% cpuidle.C6-NHM.usage
1.135e+08 ± 1% +77.8% 2.017e+08 ± 2% cpuidle.POLL.time
102777 ± 1% +36.0% 139811 ± 1% cpuidle.POLL.usage
2011 ± 25% -27.1% 1467 ± 2% sched_debug.cfs_rq:/.exec_clock.2
1820 ± 3% -12.2% 1597 ± 7% sched_debug.cfs_rq:/.exec_clock.4
4297 ± 6% +78.1% 7653 ± 54% sched_debug.cfs_rq:/.min_vruntime.12
4803 ± 18% -16.1% 4028 ± 21% sched_debug.cfs_rq:/.min_vruntime.8
3763 ± 4% -17.0% 3124 ± 7% sched_debug.cfs_rq:/.min_vruntime.9
3408 ± 4% -14.9% 2900 ± 2% sched_debug.cfs_rq:/.min_vruntime.min
4.75 ± 54% -73.7% 1.25 ±173% sched_debug.cfs_rq:/.nr_spread_over.2
1.99 ± 16% +25.5% 2.50 ± 9% sched_debug.cfs_rq:/.nr_spread_over.stddev
-4362 ±-25% -92.7% -318.24 ±-1304% sched_debug.cfs_rq:/.spread0.12
200.25 ± 47% -48.8% 102.50 ± 22% sched_debug.cfs_rq:/.util_avg.7
187.75 ± 64% -77.2% 42.75 ± 56% sched_debug.cfs_rq:/.util_avg.8
1.41 ± 31% +37.0% 1.93 ± 12% sched_debug.cpu.clock.stddev
1.41 ± 31% +37.0% 1.93 ± 12% sched_debug.cpu.clock_task.stddev
1.63 ± 67% -53.9% 0.75 ± 38% sched_debug.cpu.cpu_load[4].stddev
33822 ± 16% -17.2% 28018 ± 4% sched_debug.cpu.nr_switches.13
36824 ± 3% +12.9% 41569 ± 4% sched_debug.cpu.nr_switches.14
33117 ± 7% -11.1% 29438 ± 5% sched_debug.cpu.nr_switches.15
57634 ± 3% +8.8% 62716 ± 3% sched_debug.cpu.nr_switches.4
1723 ± 7% -11.3% 1527 ± 5% sched_debug.cpu.nr_uninterruptible.10
1378 ± 6% -8.7% 1258 ± 5% sched_debug.cpu.nr_uninterruptible.15
1232 ± 10% +25.3% 1543 ± 3% sched_debug.cpu.nr_uninterruptible.8
34660 ± 13% -19.1% 28034 ± 4% sched_debug.cpu.sched_count.13
37545 ± 2% +18.9% 44629 ± 7% sched_debug.cpu.sched_count.14
34003 ± 9% -13.4% 29456 ± 5% sched_debug.cpu.sched_count.15
62715 ± 6% +9.5% 68642 ± 7% sched_debug.cpu.sched_count.2
14091 ± 19% -20.4% 11212 ± 5% sched_debug.cpu.sched_goidle.13
14175 ± 3% +15.4% 16354 ± 6% sched_debug.cpu.sched_goidle.14
13465 ± 9% -12.5% 11782 ± 6% sched_debug.cpu.sched_goidle.15
24646 ± 3% +9.4% 26968 ± 4% sched_debug.cpu.sched_goidle.4
15258 ± 6% -22.8% 11773 ± 4% sched_debug.cpu.ttwu_count.11
15765 ± 5% +48.1% 23355 ± 15% sched_debug.cpu.ttwu_count.12
16869 ± 3% -26.1% 12461 ± 4% sched_debug.cpu.ttwu_count.13
3142 ± 5% +13.0% 3551 ± 8% sched_debug.cpu.ttwu_local.13
lkp-ne04: Nehalem-EP
Memory: 12G
[ trend charts over the bisected commit range omitted; panels: uptime.idle,
  cpuidle.POLL.time, cpuidle.C1-NHM.time, cpuidle.C3-NHM.time,
  cpuidle.C6-NHM.time, cpuidle.C6-NHM.usage, turbostat.Avg_MHz,
  turbostat.%Busy, turbostat.CPU%c1, turbostat.CPU%c3, turbostat.CPU%c6,
  vmstat.procs.b. In each panel [*] marks bisect-good samples and [O]
  bisect-bad samples; the two populations are clearly separated. ]
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang
[lkp] [quota] ba5da72c22: WARNING: CPU: 0 PID: 29133 at fs/ext4/ext4_jbd2.c:48 ext4_journal_check_start+0x67/0xa0()
by kernel test robot
FYI, we noticed the following changes on
https://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs.git getnextquota
commit ba5da72c225b3588000a7ecfacb5f510c29a3723 ("quota: Allow Q_GETQUOTA for frozen filesystem")
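The warning below is ext4 refusing to open a journal transaction on a frozen filesystem: the Q_GETQUOTA that the patch now permits can reach dqget() -> ext4_acquire_dquot(), which starts a journal handle. A hedged userspace sketch of the triggering sequence (mount point, device, and uid are placeholders):

    #include <sys/types.h>
    #include <sys/ioctl.h>
    #include <sys/quota.h>
    #include <fcntl.h>
    #include <linux/fs.h>                   /* FIFREEZE / FITHAW */

    int main(void)
    {
            struct dqblk dqb;
            int fd = open("/mnt/ext4", O_RDONLY);

            ioctl(fd, FIFREEZE, 0);         /* freeze the filesystem */
            /* Now allowed on a frozen fs; pulling in a cold dquot ends
             * up in ext4_acquire_dquot() and the journal warning. */
            quotactl(QCMD(Q_GETQUOTA, USRQUOTA), "/dev/sda8", 1000,
                     (caddr_t)&dqb);
            ioctl(fd, FITHAW, 0);           /* thaw */
            return 0;
    }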
+------------------------------------------------------------+------------+------------+
| | fa03786fb0 | ba5da72c22 |
+------------------------------------------------------------+------------+------------+
| boot_successes | 8 | 16 |
| boot_failures | 0 | 5 |
| WARNING:at_fs/ext4/ext4_jbd2.c:#ext4_journal_check_start() | 0 | 4 |
| backtrace:quota_getquota | 0 | 4 |
| backtrace:SyS_quotactl | 0 | 4 |
| BUG:kernel_test_crashed | 0 | 1 |
+------------------------------------------------------------+------------+------------+
[ 639.272995] EXT4-fs (sda8): re-mounted. Opts: (null)
[ 639.284083] EXT4-fs (sda8): re-mounted. Opts: (null)
[ 639.406731] ------------[ cut here ]------------
[ 639.413284] WARNING: CPU: 0 PID: 29133 at fs/ext4/ext4_jbd2.c:48 ext4_journal_check_start+0x67/0xa0()
[ 639.426236] Modules linked in: loop dm_flakey dm_mod rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver netconsole btrfs xor raid6_pq sg sd_mod x86_pkg_temp_thermal coretemp ata_generic kvm_intel pata_acpi kvm irqbypass crct10dif_pclmul eeepc_wmi crc32_pclmul asus_wmi sparse_keymap rfkill ppdev crc32c_intel snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic ghash_clmulni_intel ata_piix pata_via serio_raw pcspkr i915 aesni_intel lrw gf128mul glue_helper ablk_helper drm_kms_helper syscopyarea cryptd sysfillrect sysimgblt fb_sys_fops snd_hda_intel snd_hda_codec snd_hda_core drm snd_hwdep snd_pcm snd_timer snd soundcore shpchp libata wmi parport_pc parport tpm_infineon video
[ 639.497181] CPU: 0 PID: 29133 Comm: setquota Not tainted 4.5.0-rc3-00004-gba5da72 #1
[ 639.506282] Hardware name: System manufacturer System Product Name/P8H67-M PRO, BIOS 1002 04/01/2011
[ 639.516768] 0000000000000000 ffff880093f0fc50 ffffffff81423a8a 0000000000000000
[ 639.525616] ffffffff81bef128 ffff880093f0fc88 ffffffff8107a476 ffff8800950d1000
[ 639.534438] 0000000000000049 0000000000000000 ffffffff812978d7 ffff8800977a8a00
[ 639.543268] Call Trace:
[ 639.547044] [<ffffffff81423a8a>] dump_stack+0x63/0x89
[ 639.553514] [<ffffffff8107a476>] warn_slowpath_common+0x86/0xc0
[ 639.560835] [<ffffffff812978d7>] ? ext4_acquire_dquot+0x57/0xa0
[ 639.568132] [<ffffffff8107a56a>] warn_slowpath_null+0x1a/0x20
[ 639.575257] [<ffffffff812b4bf7>] ext4_journal_check_start+0x67/0xa0
[ 639.582873] [<ffffffff812b4d36>] __ext4_journal_start_sb+0x36/0xe0
[ 639.590421] [<ffffffff812978d7>] ext4_acquire_dquot+0x57/0xa0
[ 639.597495] [<ffffffff81259ca0>] dqget+0x3f0/0x450
[ 639.603594] [<ffffffff8125a464>] dquot_get_dqblk+0x14/0xf0
[ 639.610379] [<ffffffff8125dbb6>] quota_getquota+0x96/0x190
[ 639.617165] [<ffffffff812048d4>] ? getname_kernel+0x34/0x120
[ 639.624089] [<ffffffff811d1a57>] ? kmem_cache_alloc+0x197/0x200
[ 639.631264] [<ffffffff8120caa1>] ? dput+0xc1/0x250
[ 639.637323] [<ffffffff813b68cb>] ? selinux_quotactl+0x6b/0xa0
[ 639.644325] [<ffffffff813b0f2b>] ? security_quotactl+0x4b/0x70
[ 639.651369] [<ffffffff8125ed6e>] SyS_quotactl+0x6be/0x880
[ 639.657964] [<ffffffff81064d2a>] ? __do_page_fault+0x1ca/0x410
[ 639.665012] [<ffffffff818e10ae>] entry_SYSCALL_64_fastpath+0x12/0x71
[ 639.672564] ---[ end trace ccf7f0485c6586e4 ]---
[ 639.673125] systemd-journald[125]: Compressed data object 683 -> 508
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Ying Huang
[lkp] [f2fs] 7b51bf49f4: -31.2% fsmark.files_per_sec
by kernel test robot
FYI, we noticed the following changes on
https://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs dev-test
commit 7b51bf49f4825da09206c6d89e4aad5b4faa0a14 ("f2fs: set flush_merge by default")
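With flush_merge enabled, f2fs funnels the device cache flushes issued on fsync() through a single issuing thread instead of one submit_bio_wait() per caller, which is why the latency_stats entries below move from submit_bio_wait to f2fs_issue_flush. The fsmark fsyncBeforeClose workload boils down to this per-file pattern (an illustrative sketch; the path and buffer are placeholders, the 8K size comes from the job parameters above):

    #include <fcntl.h>
    #include <unistd.h>

    static void write_one_file(const char *path, const char *buf, size_t len)
    {
            int fd = open(path, O_WRONLY | O_CREAT, 0644);

            if (fd < 0)
                    return;
            write(fd, buf, len);    /* 8K of data per file */
            fsync(fd);              /* f2fs_sync_file -> f2fs_issue_flush,
                                     * per the latency_stats below */
            close(fd);
    }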
=========================================================================================
compiler/cpufreq_governor/disk/filesize/fs/iterations/kconfig/nr_directories/nr_files_per_directory/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase:
gcc-4.9/performance/1HDD/8K/f2fs/1x/x86_64-rhel/16d/256fpd/32t/debian-x86_64-2015-02-07.cgz/fsyncBeforeClose/lkp-ne04/400M/fsmark
commit:
312564ace97b1a18d02cd49c35948c82da441f14
7b51bf49f4825da09206c6d89e4aad5b4faa0a14
312564ace97b1a18 7b51bf49f4825da09206c6d89e
---------------- --------------------------
%stddev %change %stddev
\ | \
4229542 ± 7% -49.8% 2123569 ± 4% fsmark.app_overhead
480.02 ± 0% -31.2% 330.03 ± 0% fsmark.files_per_sec
107.44 ± 0% +44.7% 155.46 ± 0% fsmark.time.elapsed_time
107.44 ± 0% +44.7% 155.46 ± 0% fsmark.time.elapsed_time.max
1317328 ± 0% -2.9% 1279518 ± 0% fsmark.time.file_system_outputs
16.50 ± 3% -30.3% 11.50 ± 4% fsmark.time.percent_of_cpu_this_job_got
479210 ± 0% -5.3% 453623 ± 1% fsmark.time.voluntary_context_switches
135.22 ± 3% +35.3% 183.00 ± 2% uptime.boot
1384 ± 5% +23.4% 1708 ± 3% uptime.idle
20471 ± 1% -42.0% 11875 ± 0% softirqs.BLOCK
19962 ± 1% +28.8% 25718 ± 11% softirqs.RCU
23562 ± 10% +19.2% 28089 ± 8% softirqs.SCHED
12.75 ± 1% -15.8% 10.74 ± 6% turbostat.CPU%c1
30.81 ± 1% -12.0% 27.13 ± 4% turbostat.CPU%c3
46.03 ± 1% +12.8% 51.95 ± 1% turbostat.CPU%c6
5992 ± 0% -32.3% 4058 ± 0% vmstat.io.bo
12690 ± 0% -21.8% 9921 ± 4% vmstat.system.cs
1082 ± 1% -29.0% 768.50 ± 16% vmstat.system.in
107.44 ± 0% +44.7% 155.46 ± 0% time.elapsed_time
107.44 ± 0% +44.7% 155.46 ± 0% time.elapsed_time.max
16.50 ± 3% -30.3% 11.50 ± 4% time.percent_of_cpu_this_job_got
0.79 ± 6% +20.5% 0.95 ± 2% time.user_time
197837 ± 3% +21.5% 240374 ± 1% numa-numastat.node0.local_node
197837 ± 3% +21.5% 240376 ± 1% numa-numastat.node0.numa_hit
0.50 ±173% +450.0% 2.75 ± 47% numa-numastat.node0.other_node
181435 ± 4% +23.8% 224592 ± 2% numa-numastat.node1.local_node
181437 ± 4% +23.8% 224592 ± 2% numa-numastat.node1.numa_hit
374626 ± 0% +22.8% 460072 ± 0% proc-vmstat.numa_hit
374623 ± 0% +22.8% 460069 ± 0% proc-vmstat.numa_local
33085 ± 0% -29.8% 23229 ± 3% proc-vmstat.pgactivate
77779 ± 3% +20.9% 94024 ± 1% proc-vmstat.pgalloc_dma32
322625 ± 1% +22.4% 394983 ± 0% proc-vmstat.pgalloc_normal
246180 ± 0% +40.7% 346265 ± 0% proc-vmstat.pgfault
215983 ± 0% +40.3% 303074 ± 0% proc-vmstat.pgfree
20880 ± 1% -46.6% 11151 ± 4% cpuidle.C1-NHM.usage
31361970 ± 4% -21.7% 24554439 ± 14% cpuidle.C1E-NHM.time
30519 ± 1% -46.3% 16394 ± 3% cpuidle.C1E-NHM.usage
4.521e+08 ± 1% +27.7% 5.772e+08 ± 6% cpuidle.C3-NHM.time
177038 ± 1% +20.7% 213730 ± 14% cpuidle.C3-NHM.usage
1.045e+09 ± 0% +54.9% 1.618e+09 ± 1% cpuidle.C6-NHM.time
260935 ± 2% +22.0% 318337 ± 6% cpuidle.C6-NHM.usage
1.562e+08 ± 2% +46.7% 2.292e+08 ± 1% cpuidle.POLL.time
0.00 ± -1% +Inf% 46222 ± 0% latency_stats.avg.f2fs_issue_flush.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
26436 ± 0% -100.0% 0.00 ± -1% latency_stats.avg.submit_bio_wait.f2fs_issue_flush.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 51120 ± 0% latency_stats.hits.f2fs_issue_flush.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
51073 ± 0% -100.0% 0.00 ± -1% latency_stats.hits.submit_bio_wait.f2fs_issue_flush.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 131989 ± 1% latency_stats.max.f2fs_issue_flush.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
104864 ± 4% -100.0% 0.00 ± -1% latency_stats.max.submit_bio_wait.f2fs_issue_flush.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
43003 ± 17% -78.7% 9176 ± 49% latency_stats.sum.alloc_nid.[f2fs].f2fs_new_inode.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
74613055 ± 3% -85.8% 10617474 ± 11% latency_stats.sum.call_rwsem_down_read_failed.f2fs_new_inode.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
3206072 ± 9% -65.8% 1097191 ± 7% latency_stats.sum.call_rwsem_down_read_failed.f2fs_new_inode.[f2fs].f2fs_mkdir.[f2fs].vfs_mkdir.SyS_mkdir.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 2.363e+09 ± 0% latency_stats.sum.f2fs_issue_flush.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
4821755 ± 4% +137.7% 11463656 ± 38% latency_stats.sum.f2fs_sync_fs.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
1.35e+09 ± 0% -100.0% 0.00 ± -1% latency_stats.sum.submit_bio_wait.f2fs_issue_flush.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
44021 ± 1% +20.0% 52824 ± 2% numa-vmstat.node0.nr_active_file
117.75 ± 5% +42.0% 167.25 ± 4% numa-vmstat.node0.nr_dirty
113890 ± 1% +10.0% 125333 ± 1% numa-vmstat.node0.nr_file_pages
82.75 ± 16% +1248.0% 1115 ± 91% numa-vmstat.node0.nr_inactive_anon
125.25 ± 17% +828.7% 1163 ± 87% numa-vmstat.node0.nr_shmem
11813 ± 2% +15.0% 13580 ± 1% numa-vmstat.node0.nr_slab_reclaimable
4274 ± 13% -24.3% 3234 ± 14% numa-vmstat.node1.nr_active_anon
28009 ± 1% -22.5% 21700 ± 4% numa-vmstat.node1.nr_active_file
4244 ± 13% -24.4% 3209 ± 14% numa-vmstat.node1.nr_anon_pages
40753 ± 33% -47.2% 21497 ± 61% numa-vmstat.node1.nr_dirtied
87167 ± 1% -14.0% 75003 ± 1% numa-vmstat.node1.nr_file_pages
8376 ± 2% -22.2% 6517 ± 3% numa-vmstat.node1.nr_slab_reclaimable
40696 ± 33% -47.3% 21465 ± 61% numa-vmstat.node1.nr_written
188090 ± 1% +20.9% 227409 ± 1% numa-meminfo.node0.Active
176095 ± 1% +20.0% 211303 ± 2% numa-meminfo.node0.Active(file)
455580 ± 1% +10.0% 501341 ± 1% numa-meminfo.node0.FilePages
333.75 ± 16% +1237.2% 4462 ± 91% numa-meminfo.node0.Inactive(anon)
47258 ± 2% +15.0% 54325 ± 1% numa-meminfo.node0.SReclaimable
503.25 ± 17% +824.8% 4654 ± 87% numa-meminfo.node0.Shmem
129176 ± 3% -22.8% 99752 ± 2% numa-meminfo.node1.Active
17133 ± 13% -24.4% 12949 ± 14% numa-meminfo.node1.Active(anon)
112043 ± 1% -22.5% 86802 ± 4% numa-meminfo.node1.Active(file)
17007 ± 13% -24.4% 12851 ± 14% numa-meminfo.node1.AnonPages
348677 ± 1% -14.0% 300017 ± 1% numa-meminfo.node1.FilePages
236455 ± 1% -9.9% 213055 ± 1% numa-meminfo.node1.Inactive
470973 ± 3% -12.6% 411429 ± 3% numa-meminfo.node1.MemUsed
33510 ± 2% -22.2% 26069 ± 3% numa-meminfo.node1.SReclaimable
584.68 ± 18% +51.8% 887.72 ± 5% sched_debug.cfs_rq:/.exec_clock.10
531.49 ± 42% +170.7% 1438 ± 72% sched_debug.cfs_rq:/.exec_clock.11
610.54 ± 17% +94.1% 1185 ± 47% sched_debug.cfs_rq:/.exec_clock.14
1172 ± 20% +31.4% 1540 ± 6% sched_debug.cfs_rq:/.exec_clock.2
981.99 ± 7% +38.9% 1364 ± 9% sched_debug.cfs_rq:/.exec_clock.4
1133 ± 31% +78.3% 2020 ± 13% sched_debug.cfs_rq:/.exec_clock.5
637.47 ± 24% +76.7% 1126 ± 44% sched_debug.cfs_rq:/.exec_clock.8
1381 ± 0% +16.0% 1602 ± 0% sched_debug.cfs_rq:/.exec_clock.avg
361.27 ± 16% +31.3% 474.40 ± 17% sched_debug.cfs_rq:/.exec_clock.min
146.25 ± 17% +55.6% 227.50 ± 18% sched_debug.cfs_rq:/.load_avg.0
36.25 ± 72% -69.0% 11.25 ±113% sched_debug.cfs_rq:/.load_avg.14
67.25 ± 83% -83.6% 11.00 ± 52% sched_debug.cfs_rq:/.load_avg.6
0.20 ± 17% -44.0% 0.11 ± 28% sched_debug.cfs_rq:/.nr_running.avg
0.38 ± 4% -20.1% 0.31 ± 13% sched_debug.cfs_rq:/.nr_running.stddev
2.75 ± 90% -100.0% 0.00 ± -1% sched_debug.cfs_rq:/.nr_spread_over.12
3.25 ±102% -100.0% 0.00 ± 0% sched_debug.cfs_rq:/.runnable_load_avg.14
4562 ±126% -167.7% -3089 ±-133% sched_debug.cfs_rq:/.spread0.6
9049 ± 29% -49.5% 4566 ± 61% sched_debug.cfs_rq:/.spread0.max
126.25 ± 32% -55.8% 55.75 ± 39% sched_debug.cfs_rq:/.util_avg.1
124.29 ± 17% -41.1% 73.21 ± 15% sched_debug.cfs_rq:/.util_avg.avg
322.00 ± 35% -44.3% 179.50 ± 27% sched_debug.cfs_rq:/.util_avg.max
879630 ± 8% -15.9% 739419 ± 11% sched_debug.cpu.avg_idle.14
906683 ± 8% -15.2% 768624 ± 10% sched_debug.cpu.avg_idle.3
58510 ± 7% +50.9% 88305 ± 4% sched_debug.cpu.clock.0
58514 ± 7% +50.9% 88306 ± 4% sched_debug.cpu.clock.1
58516 ± 7% +50.9% 88309 ± 4% sched_debug.cpu.clock.10
58510 ± 7% +50.9% 88308 ± 4% sched_debug.cpu.clock.11
58516 ± 7% +50.9% 88308 ± 4% sched_debug.cpu.clock.12
58517 ± 7% +50.9% 88306 ± 4% sched_debug.cpu.clock.13
58518 ± 7% +50.9% 88308 ± 4% sched_debug.cpu.clock.14
58517 ± 7% +50.9% 88297 ± 4% sched_debug.cpu.clock.15
58511 ± 7% +50.9% 88303 ± 4% sched_debug.cpu.clock.2
58512 ± 7% +50.9% 88307 ± 4% sched_debug.cpu.clock.3
58513 ± 7% +50.9% 88306 ± 4% sched_debug.cpu.clock.4
58515 ± 7% +50.9% 88309 ± 4% sched_debug.cpu.clock.5
58516 ± 7% +50.9% 88307 ± 4% sched_debug.cpu.clock.6
58516 ± 7% +50.9% 88309 ± 4% sched_debug.cpu.clock.7
58516 ± 7% +50.9% 88306 ± 4% sched_debug.cpu.clock.8
58516 ± 7% +50.9% 88303 ± 4% sched_debug.cpu.clock.9
58515 ± 7% +50.9% 88306 ± 4% sched_debug.cpu.clock.avg
58518 ± 7% +50.9% 88310 ± 4% sched_debug.cpu.clock.max
58504 ± 7% +50.9% 88291 ± 4% sched_debug.cpu.clock.min
58510 ± 7% +50.9% 88305 ± 4% sched_debug.cpu.clock_task.0
58514 ± 7% +50.9% 88306 ± 4% sched_debug.cpu.clock_task.1
58516 ± 7% +50.9% 88309 ± 4% sched_debug.cpu.clock_task.10
58510 ± 7% +50.9% 88308 ± 4% sched_debug.cpu.clock_task.11
58516 ± 7% +50.9% 88308 ± 4% sched_debug.cpu.clock_task.12
58517 ± 7% +50.9% 88306 ± 4% sched_debug.cpu.clock_task.13
58518 ± 7% +50.9% 88308 ± 4% sched_debug.cpu.clock_task.14
58517 ± 7% +50.9% 88297 ± 4% sched_debug.cpu.clock_task.15
58511 ± 7% +50.9% 88303 ± 4% sched_debug.cpu.clock_task.2
58512 ± 7% +50.9% 88307 ± 4% sched_debug.cpu.clock_task.3
58513 ± 7% +50.9% 88306 ± 4% sched_debug.cpu.clock_task.4
58515 ± 7% +50.9% 88309 ± 4% sched_debug.cpu.clock_task.5
58516 ± 7% +50.9% 88307 ± 4% sched_debug.cpu.clock_task.6
58516 ± 7% +50.9% 88309 ± 4% sched_debug.cpu.clock_task.7
58516 ± 7% +50.9% 88306 ± 4% sched_debug.cpu.clock_task.8
58516 ± 7% +50.9% 88303 ± 4% sched_debug.cpu.clock_task.9
58515 ± 7% +50.9% 88306 ± 4% sched_debug.cpu.clock_task.avg
58518 ± 7% +50.9% 88310 ± 4% sched_debug.cpu.clock_task.max
58504 ± 7% +50.9% 88291 ± 4% sched_debug.cpu.clock_task.min
1791 ± 0% +34.7% 2413 ± 9% sched_debug.cpu.curr->pid.max
530.14 ± 8% +16.6% 618.02 ± 8% sched_debug.cpu.curr->pid.stddev
209.00 ± 33% -100.0% 0.00 ± -1% sched_debug.cpu.load.14
38.27 ± 24% -28.4% 27.41 ± 16% sched_debug.cpu.load.avg
9384 ± 3% +58.1% 14840 ± 12% sched_debug.cpu.nr_load_updates.0
7128 ± 10% +50.5% 10727 ± 5% sched_debug.cpu.nr_load_updates.10
5615 ± 10% +22.5% 6878 ± 4% sched_debug.cpu.nr_load_updates.11
6988 ± 5% +58.1% 11047 ± 4% sched_debug.cpu.nr_load_updates.12
6989 ± 2% +58.4% 11074 ± 3% sched_debug.cpu.nr_load_updates.14
5668 ± 11% +27.4% 7223 ± 6% sched_debug.cpu.nr_load_updates.15
9117 ± 2% +54.8% 14117 ± 7% sched_debug.cpu.nr_load_updates.2
9689 ± 4% +44.3% 13978 ± 5% sched_debug.cpu.nr_load_updates.4
7610 ± 7% +18.6% 9027 ± 8% sched_debug.cpu.nr_load_updates.5
9283 ± 4% +46.6% 13611 ± 5% sched_debug.cpu.nr_load_updates.6
7378 ± 7% +28.5% 9482 ± 10% sched_debug.cpu.nr_load_updates.7
7014 ± 5% +47.6% 10350 ± 2% sched_debug.cpu.nr_load_updates.8
5428 ± 9% +24.7% 6772 ± 4% sched_debug.cpu.nr_load_updates.9
7531 ± 4% +37.4% 10346 ± 3% sched_debug.cpu.nr_load_updates.avg
10658 ± 2% +46.5% 15612 ± 12% sched_debug.cpu.nr_load_updates.max
5170 ± 8% +23.5% 6383 ± 5% sched_debug.cpu.nr_load_updates.min
1642 ± 5% +74.6% 2868 ± 16% sched_debug.cpu.nr_load_updates.stddev
0.19 ± 11% -41.7% 0.11 ± 15% sched_debug.cpu.nr_running.avg
0.40 ± 9% -21.9% 0.31 ± 7% sched_debug.cpu.nr_running.stddev
42853 ± 9% +57.2% 67344 ± 18% sched_debug.cpu.nr_switches.0
20979 ± 6% +101.3% 42226 ± 11% sched_debug.cpu.nr_switches.12
20834 ± 10% +110.4% 43828 ± 23% sched_debug.cpu.nr_switches.14
31923 ± 3% +111.3% 67446 ± 21% sched_debug.cpu.nr_switches.2
42296 ± 12% +72.2% 72821 ± 30% sched_debug.cpu.nr_switches.4
32511 ± 3% +88.7% 61355 ± 20% sched_debug.cpu.nr_switches.6
22802 ± 9% +64.6% 37532 ± 1% sched_debug.cpu.nr_switches.8
26514 ± 1% +51.3% 40127 ± 5% sched_debug.cpu.nr_switches.avg
58800 ± 28% +55.9% 91652 ± 11% sched_debug.cpu.nr_switches.max
14014 ± 19% +68.8% 23661 ± 19% sched_debug.cpu.nr_switches.stddev
-171.00 ±-29% -83.5% -28.25 ±-65% sched_debug.cpu.nr_uninterruptible.1
591.00 ± 3% -33.0% 395.75 ± 16% sched_debug.cpu.nr_uninterruptible.10
240.00 ± 24% -60.5% 94.75 ± 35% sched_debug.cpu.nr_uninterruptible.11
535.25 ± 11% -41.4% 313.75 ± 25% sched_debug.cpu.nr_uninterruptible.12
525.75 ± 18% -37.3% 329.75 ± 15% sched_debug.cpu.nr_uninterruptible.14
212.75 ± 19% -53.2% 99.50 ± 58% sched_debug.cpu.nr_uninterruptible.15
-201.50 ±-28% -65.1% -70.25 ±-17% sched_debug.cpu.nr_uninterruptible.3
-252.50 ±-31% -51.1% -123.50 ±-18% sched_debug.cpu.nr_uninterruptible.4
-235.75 ±-15% -84.1% -37.50 ±-61% sched_debug.cpu.nr_uninterruptible.5
-284.75 ±-14% -94.2% -16.50 ±-317% sched_debug.cpu.nr_uninterruptible.7
162.50 ± 13% -43.1% 92.50 ± 17% sched_debug.cpu.nr_uninterruptible.9
1.06 ± 13% +29.9% 1.38 ± 1% sched_debug.cpu.nr_uninterruptible.avg
527.73 ± 5% -18.1% 432.06 ± 12% sched_debug.cpu.nr_uninterruptible.stddev
21300 ± 5% +98.4% 42250 ± 11% sched_debug.cpu.sched_count.12
20849 ± 10% +110.3% 43854 ± 23% sched_debug.cpu.sched_count.14
34572 ± 5% +100.5% 69304 ± 18% sched_debug.cpu.sched_count.2
22820 ± 9% +67.8% 38294 ± 1% sched_debug.cpu.sched_count.8
148192 ± 0% +9.7% 162582 ± 1% sched_debug.cpu.sched_count.avg
19147 ± 10% +52.7% 29229 ± 21% sched_debug.cpu.sched_goidle.0
8541 ± 6% +120.4% 18821 ± 12% sched_debug.cpu.sched_goidle.12
8536 ± 11% +130.5% 19673 ± 26% sched_debug.cpu.sched_goidle.14
13824 ± 3% +124.6% 31048 ± 23% sched_debug.cpu.sched_goidle.2
19038 ± 13% +78.5% 33981 ± 32% sched_debug.cpu.sched_goidle.4
14186 ± 3% +99.3% 28272 ± 22% sched_debug.cpu.sched_goidle.6
9322 ± 10% +71.9% 16023 ± 1% sched_debug.cpu.sched_goidle.8
11604 ± 2% +57.5% 18282 ± 6% sched_debug.cpu.sched_goidle.avg
27549 ± 30% +59.8% 44033 ± 10% sched_debug.cpu.sched_goidle.max
6704 ± 20% +66.7% 11174 ± 19% sched_debug.cpu.sched_goidle.stddev
53560 ± 3% +48.0% 79293 ± 3% sched_debug.cpu.ttwu_count.0
9727 ± 27% +107.8% 20210 ± 45% sched_debug.cpu.ttwu_count.10
7802 ± 8% +152.5% 19704 ± 33% sched_debug.cpu.ttwu_count.12
8037 ± 11% +204.4% 24466 ± 48% sched_debug.cpu.ttwu_count.14
15381 ± 5% +43.2% 22034 ± 13% sched_debug.cpu.ttwu_count.4
15299 ± 10% +52.0% 23253 ± 19% sched_debug.cpu.ttwu_count.6
14342 ± 1% +47.0% 21090 ± 5% sched_debug.cpu.ttwu_count.avg
53630 ± 3% +47.9% 79315 ± 3% sched_debug.cpu.ttwu_count.max
11752 ± 5% +52.6% 17935 ± 9% sched_debug.cpu.ttwu_count.stddev
13481 ± 4% +23.5% 16655 ± 4% sched_debug.cpu.ttwu_local.0
3872 ± 9% -22.1% 3016 ± 9% sched_debug.cpu.ttwu_local.1
1972 ± 19% +62.3% 3200 ± 7% sched_debug.cpu.ttwu_local.10
1900 ± 8% +85.7% 3528 ± 11% sched_debug.cpu.ttwu_local.12
2000 ± 17% +71.8% 3436 ± 7% sched_debug.cpu.ttwu_local.14
3086 ± 6% +20.9% 3732 ± 8% sched_debug.cpu.ttwu_local.2
2968 ± 14% -21.6% 2326 ± 17% sched_debug.cpu.ttwu_local.3
2882 ± 9% -23.8% 2195 ± 18% sched_debug.cpu.ttwu_local.7
2057 ± 9% +61.1% 3313 ± 3% sched_debug.cpu.ttwu_local.8
13534 ± 4% +23.4% 16700 ± 4% sched_debug.cpu.ttwu_local.max
1344 ± 10% -18.5% 1095 ± 10% sched_debug.cpu.ttwu_local.min
2862 ± 4% +25.1% 3581 ± 5% sched_debug.cpu.ttwu_local.stddev
58515 ± 7% +50.9% 88309 ± 4% sched_debug.cpu_clk
56859 ± 8% +52.4% 86644 ± 5% sched_debug.ktime
58515 ± 7% +50.9% 88309 ± 4% sched_debug.sched_clk
lkp-ne04: Nehalem-EP
Memory: 12G
[ trend charts over the bisected commit range omitted; panels: five untitled,
  then softirqs.BLOCK, cpuidle.C1-NHM.usage, cpuidle.C1E-NHM.usage,
  cpuidle.C3-NHM.time, cpuidle.C6-NHM.time, turbostat.CPU%c6,
  fsmark.files_per_sec, fsmark.time.elapsed_time,
  fsmark.time.elapsed_time.max, time.elapsed_time, time.elapsed_time.max,
  vmstat.io.bo, proc-vmstat.numa_hit, proc-vmstat.numa_local,
  proc-vmstat.pgalloc_normal, proc-vmstat.pgfree, proc-vmstat.pgfault,
  sched_debug.cpu.nr_load_updates.{2,6,8,12}, sched_debug.cpu.nr_switches.8,
  sched_debug.cpu.sched_count.8, sched_debug.cpu.sched_goidle.8. In each
  panel [*] marks bisect-good samples and [O] bisect-bad samples; the two
  populations are clearly separated. ]
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang
[lkp] [radix] 099cd24e86: INFO: possible irq lock inversion dependency detected ]
by kernel test robot
FYI, we noticed the following changes on
https://git.kernel.org/pub/scm/linux/kernel/git/rt/linux-rt-devel.git linux-4.4.y-rt-rebase
commit 099cd24e86fe6fe9f5090b1ba34d2a16b6299990 ("radix-tree: Make RT aware")
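The lockdep report below is a SOFTIRQ lock inversion: css_set_lock is now acquired from softirq context (put_css_set() called from an RCU callback), while the dependency chain recorded earlier took the SOFTIRQ-unsafe sighand->siglock with css_set_lock held. Reduced to the generic pattern lockdep warns about (an illustrative sketch, not the cgroup code):

    static DEFINE_SPINLOCK(A);      /* think sighand->siglock */
    static DEFINE_SPINLOCK(B);      /* think css_set_lock     */

    /* Process context, softirqs enabled: records the B -> A ordering. */
    static void process_context_path(void)
    {
            spin_lock(&B);
            spin_lock(&A);
            /* ... */
            spin_unlock(&A);
            spin_unlock(&B);
    }

    /* Softirq context (here: RCU callback -> put_css_set()): */
    static void softirq_path(void)
    {
            spin_lock(&B);  /* can fire on a CPU that already holds B,
                             * or holds A while spinning on B: deadlock */
            /* ... */
            spin_unlock(&B);
    }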
+------------------------------------------------------+------------+------------+
| | 7043b6dfd3 | 099cd24e86 |
+------------------------------------------------------+------------+------------+
| boot_successes | 8 | 3 |
| boot_failures | 0 | 15 |
| INFO:possible_irq_lock_inversion_dependency_detected | 0 | 15 |
| backtrace:_do_fork | 0 | 12 |
| backtrace:user_path_at_empty | 0 | 3 |
| backtrace:SyS_name_to_handle_at | 0 | 3 |
| backtrace:cgroup_setup_root | 0 | 12 |
| backtrace:cgroup_init | 0 | 12 |
| backtrace:smpboot_thread_fn | 0 | 15 |
| backtrace:do_mount | 0 | 10 |
| backtrace:SyS_mount | 0 | 8 |
| backtrace:SYSC_mkdirat | 0 | 2 |
| backtrace:SyS_mkdir | 0 | 2 |
| backtrace:_raw_spin_lock_irq | 0 | 3 |
| backtrace:cfq_init_queue | 0 | 3 |
| backtrace:elevator_init | 0 | 3 |
| backtrace:blk_init_allocated_queue | 0 | 3 |
| backtrace:blk_init_queue_node | 0 | 3 |
| backtrace:blk_init_queue | 0 | 3 |
| backtrace:floppy_async_init | 0 | 3 |
| backtrace:async_run_entry_fn | 0 | 3 |
| backtrace:compat_SyS_mount | 0 | 2 |
| backtrace:disk_events_workfn | 0 | 3 |
+------------------------------------------------------+------------+------------+
[ 21.956188] systemd-sysv-ge (304) used greatest stack depth: 13408 bytes left
[ 22.653016]
[ 22.653404] =========================================================
[ 22.654250] [ INFO: possible irq lock inversion dependency detected ]
[ 22.655089] 4.4.1-00083-g099cd24 #15 Not tainted
[ 22.655740] ---------------------------------------------------------
[ 22.656578] ksoftirqd/0/3 just changed the state of lock:
[ 22.657316] (css_set_lock){+.-...}, at: [<ffffffff8115ac5a>] put_css_set+0x31/0x48
[ 22.658667] but this lock took another, SOFTIRQ-unsafe lock in the past:
[ 22.669481] (&(&sighand->siglock)->rlock){+.+...}
and interrupts could create inverse lock ordering between them.
[ 22.671329]
[ 22.671329] other info that might help us debug this:
[ 22.672319] Possible interrupt unsafe locking scenario:
[ 22.672319]
[ 22.673332] CPU0 CPU1
[ 22.673983] ---- ----
[ 22.674637] lock(&(&sighand->siglock)->rlock);
[ 22.675420] local_irq_disable();
[ 22.676211] lock(css_set_lock);
[ 22.677117] lock(&(&sighand->siglock)->rlock);
[ 22.678167] <Interrupt>
[ 22.678615] lock(css_set_lock);
[ 22.679281]
[ 22.679281] *** DEADLOCK ***
[ 22.679281]
[ 22.680306] 1 lock held by ksoftirqd/0/3:
[ 22.680892] #0: (rcu_callback){......}, at: [<ffffffff811361ec>] rcu_process_callbacks+0x31d/0x81c
[ 22.682451]
[ 22.682451] the shortest dependencies between 2nd lock and 1st lock:
[ 22.683559] -> (&(&sighand->siglock)->rlock){+.+...} ops: 1914 {
[ 22.684679] HARDIRQ-ON-W at:
[ 22.685288] [<ffffffff81117ac5>] __lock_acquire+0x3a7/0xdee
[ 22.686472] [<ffffffff8111889e>] lock_acquire+0x106/0x192
[ 22.687587] [<ffffffff82e2c7e8>] _raw_spin_lock+0x38/0x6e
[ 22.688709] [<ffffffff810e6482>] __lock_task_sighand+0x86/0xb9
[ 22.689879] [<ffffffff8110fe89>] autogroup_task_get+0x14/0x66
[ 22.691033] [<ffffffff8111011a>] sched_autogroup_fork+0x1b/0x25
[ 22.692211] [<ffffffff810d87fe>] copy_process+0xb8d/0x1951
[ 22.693337] [<ffffffff810d971a>] _do_fork+0x82/0x2f7
[ 22.694289] [<ffffffff810d99b8>] kernel_thread+0x29/0x2b
[ 22.695400] [<ffffffff810f5cb6>] kthreadd+0x1a0/0x1d7
[ 22.696363] [<ffffffff82e2d9df>] ret_from_fork+0x3f/0x70
[ 22.697474] SOFTIRQ-ON-W at:
[ 22.698048] [<ffffffff81117ae7>] __lock_acquire+0x3c9/0xdee
[ 22.699192] [<ffffffff8111889e>] lock_acquire+0x106/0x192
[ 22.700305] [<ffffffff82e2c7e8>] _raw_spin_lock+0x38/0x6e
[ 22.701419] [<ffffffff810e6482>] __lock_task_sighand+0x86/0xb9
[ 22.702579] [<ffffffff8110fe89>] autogroup_task_get+0x14/0x66
[ 22.703730] [<ffffffff8111011a>] sched_autogroup_fork+0x1b/0x25
[ 22.704900] [<ffffffff810d87fe>] copy_process+0xb8d/0x1951
[ 22.706022] [<ffffffff810d971a>] _do_fork+0x82/0x2f7
[ 22.706967] [<ffffffff810d99b8>] kernel_thread+0x29/0x2b
[ 22.708082] [<ffffffff810f5cb6>] kthreadd+0x1a0/0x1d7
[ 22.709031] [<ffffffff82e2d9df>] ret_from_fork+0x3f/0x70
[ 22.710139] INITIAL USE at:
[ 22.710709] [<ffffffff81117b2d>] __lock_acquire+0x40f/0xdee
[ 22.711832] [<ffffffff8111889e>] lock_acquire+0x106/0x192
[ 22.712940] [<ffffffff82e2d469>] _raw_spin_lock_irqsave+0x4c/0x87
[ 22.714119] [<ffffffff810e51b7>] flush_signals+0x22/0x68
[ 22.715096] [<ffffffff810e52c7>] ignore_signals+0x2d/0x2f
[ 22.716209] [<ffffffff810f5b4b>] kthreadd+0x35/0x1d7
[ 22.717153] [<ffffffff82e2d9df>] ret_from_fork+0x3f/0x70
[ 22.718140] }
[ 22.718503] ... key at: [<ffffffff848e7238>] __key.54423+0x0/0x8
[ 22.719424] ... acquired at:
[ 22.719915] [<ffffffff8111889e>] lock_acquire+0x106/0x192
[ 22.720747] [<ffffffff82e2c7e8>] _raw_spin_lock+0x38/0x6e
[ 22.721574] [<ffffffff811f2043>] get_partial_node+0x52/0x150
[ 22.722507] [<ffffffff811f2cd5>] ___slab_alloc+0x13b/0x518
[ 22.723345] [<ffffffff811f311c>] __slab_alloc+0x6a/0xaa
[ 22.724224] [<ffffffff811f3926>] kmem_cache_alloc+0x76/0x1f0
[ 22.725073] [<ffffffff8121390f>] __d_alloc+0x25/0x1a1
[ 22.725858] [<ffffffff81213aa8>] d_alloc+0x1d/0x6c
[ 22.726625] [<ffffffff81209340>] lookup_dcache+0x7a/0x95
[ 22.727443] [<ffffffff81209376>] __lookup_hash+0x1b/0x36
[ 22.728269] [<ffffffff8120ab56>] walk_component+0xa3/0x162
[ 22.729103] [<ffffffff8120b423>] path_lookupat+0x82/0x103
[ 22.729926] [<ffffffff8120c72a>] filename_lookup+0x7d/0xfa
[ 22.730763] [<ffffffff8120c850>] user_path_at_empty+0x37/0x3d
[ 22.731629] [<ffffffff8124d8e7>] SyS_name_to_handle_at+0x4e/0x1d5
[ 22.732532] [<ffffffff82e2d672>] entry_SYSCALL_64_fastpath+0x12/0x76
[ 22.733459]
[ 22.733792] -> (css_set_lock){+.-...} ops: 2346 {
[ 22.734772] HARDIRQ-ON-W at:
[ 22.735339] [<ffffffff81117ac5>] __lock_acquire+0x3a7/0xdee
[ 22.736459] [<ffffffff8111889e>] lock_acquire+0x106/0x192
[ 22.737436] [<ffffffff82e2c9bb>] _raw_spin_lock_bh+0x3c/0x72
[ 22.738570] [<ffffffff8115e32f>] cgroup_setup_root+0x184/0x281
[ 22.739717] [<ffffffff846311b1>] cgroup_init+0xc4/0x2a1
[ 22.740680] [<ffffffff84607f0e>] start_kernel+0x407/0x44c
[ 22.741661] [<ffffffff84607339>] x86_64_start_reservations+0x2a/0x2c
[ 22.742859] [<ffffffff84607468>] x86_64_start_kernel+0x12d/0x13a
[ 22.744022] IN-SOFTIRQ-W at:
[ 22.744589] [<ffffffff81117a75>] __lock_acquire+0x357/0xdee
[ 22.745700] [<ffffffff8111889e>] lock_acquire+0x106/0x192
[ 22.746674] [<ffffffff82e2c9bb>] _raw_spin_lock_bh+0x3c/0x72
[ 22.747798] [<ffffffff8115ac5a>] put_css_set+0x31/0x48
[ 22.748758] [<ffffffff8115f8fc>] cgroup_free+0x63/0x6a
[ 22.749710] [<ffffffff810d78c8>] __put_task_struct+0x66/0xd9
[ 22.750840] [<ffffffff810db24d>] put_task_struct+0x10/0x12
[ 22.751948] [<ffffffff810db2f7>] delayed_put_task_struct+0xa8/0xf0
[ 22.753136] [<ffffffff811364ea>] rcu_process_callbacks+0x61b/0x81c
[ 22.754320] [<ffffffff810de145>] __do_softirq+0x18d/0x3bb
[ 22.755302] [<ffffffff810de396>] run_ksoftirqd+0x23/0x5c
[ 22.756274] [<ffffffff810f8895>] smpboot_thread_fn+0x223/0x23c
[ 22.757420] [<ffffffff810f556c>] kthread+0xc5/0xcd
[ 22.758340] [<ffffffff82e2d9df>] ret_from_fork+0x3f/0x70
[ 22.759306] INITIAL USE at:
[ 22.759858] [<ffffffff81117b2d>] __lock_acquire+0x40f/0xdee
[ 22.760963] [<ffffffff8111889e>] lock_acquire+0x106/0x192
[ 22.761932] [<ffffffff82e2c9bb>] _raw_spin_lock_bh+0x3c/0x72
[ 22.763045] [<ffffffff8115e32f>] cgroup_setup_root+0x184/0x281
[ 22.764186] [<ffffffff846311b1>] cgroup_init+0xc4/0x2a1
[ 22.765142] [<ffffffff84607f0e>] start_kernel+0x407/0x44c
[ 22.766113] [<ffffffff84607339>] x86_64_start_reservations+0x2a/0x2c
[ 22.767299] [<ffffffff84607468>] x86_64_start_kernel+0x12d/0x13a
[ 22.768461] }
[ 22.768811] ... key at: [<ffffffff8407b958>] css_set_lock+0x18/0x60
[ 22.769757] ... acquired at:
[ 22.770247] [<ffffffff8111699e>] check_usage_forwards+0xa5/0xc0
[ 22.771128] [<ffffffff81117155>] mark_lock+0xfa/0x201
[ 22.771915] [<ffffffff81117a75>] __lock_acquire+0x357/0xdee
[ 22.772760] [<ffffffff8111889e>] lock_acquire+0x106/0x192
[ 22.773588] [<ffffffff82e2c9bb>] _raw_spin_lock_bh+0x3c/0x72
[ 22.774445] [<ffffffff8115ac5a>] put_css_set+0x31/0x48
[ 22.775247] [<ffffffff8115f8fc>] cgroup_free+0x63/0x6a
[ 22.776046] [<ffffffff810d78c8>] __put_task_struct+0x66/0xd9
[ 22.776899] [<ffffffff810db24d>] put_task_struct+0x10/0x12
[ 22.777736] [<ffffffff810db2f7>] delayed_put_task_struct+0xa8/0xf0
[ 22.778653] [<ffffffff811364ea>] rcu_process_callbacks+0x61b/0x81c
[ 22.779563] [<ffffffff810de145>] __do_softirq+0x18d/0x3bb
[ 22.780389] [<ffffffff810de396>] run_ksoftirqd+0x23/0x5c
[ 22.781208] [<ffffffff810f8895>] smpboot_thread_fn+0x223/0x23c
[ 22.782116] [<ffffffff810f556c>] kthread+0xc5/0xcd
[ 22.782876] [<ffffffff82e2d9df>] ret_from_fork+0x3f/0x70
[ 22.783726]
[ 22.784104]
[ 22.784104] stack backtrace:
[ 22.784854] CPU: 0 PID: 3 Comm: ksoftirqd/0 Not tainted 4.4.1-00083-g099cd24 #15
[ 22.785916] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 22.787131] 0000000000000000 ffff880054827a00 ffffffff8173d8f6 ffffffff85447720
[ 22.788458] ffff880054827a40 ffffffff811a9600 ffffffff83a831cf 0000000000000000
[ 22.799874] ffff88005480d980 ffff88005480d080 ffffffff83a831cf ffffffff85447720
[ 22.801208] Call Trace:
[ 22.801641] [<ffffffff8173d8f6>] dump_stack+0x4e/0x79
[ 22.802355] [<ffffffff811a9600>] print_irq_inversion_bug+0x195/0x1a1
[ 22.803272] [<ffffffff8111699e>] check_usage_forwards+0xa5/0xc0
[ 22.804075] [<ffffffff811168f9>] ? check_usage_backwards+0xba/0xba
[ 22.804900] [<ffffffff81117155>] mark_lock+0xfa/0x201
[ 22.805611] [<ffffffff81117155>] ? mark_lock+0xfa/0x201
[ 22.806343] [<ffffffff81117a75>] __lock_acquire+0x357/0xdee
[ 22.807113] [<ffffffff8115ac5a>] ? put_css_set+0x31/0x48
[ 22.807851] [<ffffffff81093a25>] ? kvm_clock_read+0x23/0x35
[ 22.808626] [<ffffffff8111707f>] ? mark_lock+0x24/0x201
[ 22.809351] [<ffffffff8111889e>] lock_acquire+0x106/0x192
[ 22.810096] [<ffffffff8111889e>] ? lock_acquire+0x106/0x192
[ 22.810853] [<ffffffff8115ac5a>] ? put_css_set+0x31/0x48
[ 22.811592] [<ffffffff82e2c9bb>] _raw_spin_lock_bh+0x3c/0x72
[ 22.812410] [<ffffffff8115ac5a>] ? put_css_set+0x31/0x48
[ 22.813341] [<ffffffff8115ac5a>] put_css_set+0x31/0x48
[ 22.814065] [<ffffffff8115f8fc>] cgroup_free+0x63/0x6a
[ 22.814782] [<ffffffff810d78c8>] __put_task_struct+0x66/0xd9
[ 22.815554] [<ffffffff810db24d>] put_task_struct+0x10/0x12
[ 22.816314] [<ffffffff810db2f7>] delayed_put_task_struct+0xa8/0xf0
[ 22.817147] [<ffffffff811364ea>] rcu_process_callbacks+0x61b/0x81c
[ 22.817975] [<ffffffff810db24f>] ? put_task_struct+0x12/0x12
[ 22.818759] [<ffffffff810de145>] __do_softirq+0x18d/0x3bb
[ 22.819509] [<ffffffff810de396>] run_ksoftirqd+0x23/0x5c
[ 22.820251] [<ffffffff810f8895>] smpboot_thread_fn+0x223/0x23c
[ 22.821041] [<ffffffff810f8672>] ? cpumask_next+0x2f/0x2f
[ 22.821790] [<ffffffff810f556c>] kthread+0xc5/0xcd
[ 22.822480] [<ffffffff82e29451>] ? __wait_for_common+0x114/0x147
[ 22.823294] [<ffffffff810f54a7>] ? kthread_parkme+0x24/0x24
[ 22.824055] [<ffffffff82e2d9df>] ret_from_fork+0x3f/0x70
[ 22.824795] [<ffffffff810f54a7>] ? kthread_parkme+0x24/0x24
[ 32.402656] systemd-journald[320]: Fixed max_use=65.7M max_size=8.2M min_size=4.0M keep_free=98.6M
[ 32.408082] systemd-journald[320]: Reserving 14968 entries in hash table.
[ 32.409435] systemd-journald[320]: Vacuuming...
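The CPU0/CPU1 diagram above is the classic softirq lock-inversion shape. Below is a minimal kernel-style sketch of the same pattern, using hypothetical locks (my_lock standing in for css_set_lock, other_lock for sighand->siglock) rather than the actual cgroup/signal code:

/* Hypothetical illustration of the inversion flagged above; the lock
 * names and functions are stand-ins, not the code from the report. */
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(my_lock);     /* also taken from softirq context */
static DEFINE_SPINLOCK(other_lock);  /* SOFTIRQ-unsafe: taken with BHs on */

/* Process context records the dependency my_lock -> other_lock: */
static void holder_path(void)
{
        spin_lock_bh(&my_lock);      /* BHs off while my_lock is held */
        spin_lock(&other_lock);      /* other_lock nests under my_lock */
        /* ... */
        spin_unlock(&other_lock);
        spin_unlock_bh(&my_lock);
}

/* Elsewhere, other_lock is taken with softirqs still enabled: */
static void unsafe_path(void)
{
        spin_lock(&other_lock);
        /* a softirq firing here that needs my_lock deadlocks against
         * holder_path() on another CPU, exactly as in the CPU0/CPU1
         * diagram above */
        spin_unlock(&other_lock);
}

/* Softirq context (in the report, the RCU callback freeing a task)
 * also takes my_lock: */
static void softirq_side(void)
{
        spin_lock(&my_lock);
        /* ... */
        spin_unlock(&my_lock);
}

The usual remedy for this class of report is either to stop the SOFTIRQ-unsafe lock from nesting inside the softirq-shared one, or to make every user of the inner lock disable softirqs as well.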
Thanks,
Ying Huang
[lkp] [crypto/xor] 420ce36f05: BUG: unable to handle kernel NULL pointer dereference at 0000000000000020
by kernel test robot
FYI, we noticed the below changes on
https://git.kernel.org/pub/scm/linux/kernel/git/s390/linux.git features
commit 420ce36f0515c0fe8b5fda9420bb64cfbea75841 ("crypto/xor: skip speed test if the xor function is selected automatically")
[ 4.830567] .................................... done.
[ 4.834798] page_owner is disabled
[ 4.839879] raid6test: testing the 4-disk case...
[ 4.841076] BUG: unable to handle kernel NULL pointer dereference at 0000000000000020
[ 4.843396] IP: [<ffffffff8139c419>] xor_blocks+0x2e/0x4b
[ 4.844877] PGD 0
[ 4.845705] Oops: 0000 [#1] SMP
[ 4.846892] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.5.0-rc5-00005-g420ce36 #1
[ 4.848684] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 4.850461] task: ffff880012a38000 ti: ffff880012a40000 task.ti: ffff880012a40000
[ 4.852142] RIP: 0010:[<ffffffff8139c419>] [<ffffffff8139c419>] xor_blocks+0x2e/0x4b
[ 4.854341] RSP: 0000:ffff880012a43bc8 EFLAGS: 00010246
[ 4.855674] RAX: 0000000000000000 RBX: ffffffff82d943e0 RCX: ffff88000e6f7000
[ 4.857367] RDX: ffff88000e6f6000 RSI: ffff88000e710000 RDI: 0000000000001000
[ 4.859058] RBP: ffff880012a43bc8 R08: 0000000000000002 R09: ffffffff82d943e0
[ 4.860705] R10: 0000000000000002 R11: 0000000000000001 R12: 0000000000000004
[ 4.862376] R13: ffff880012a43cf8 R14: 0000000000000002 R15: 0000000000000002
[ 4.864169] FS: 0000000000000000(0000) GS:ffff880013800000(0000) knlGS:0000000000000000
[ 4.866144] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 4.867564] CR2: 0000000000000020 CR3: 0000000001d40000 CR4: 00000000000406b0
[ 4.869247] Stack:
[ 4.873291] ffff880012a43c30 ffffffff8139c981 0000000000000000 ffffffff82d943e8
[ 4.875322] 0000000000000002 0000000000001000 0000000000000002 ffff88000e710000
[ 4.877246] ffff880013040400 0000000000000004 ffff880012a43cf4 ffff880012a43cf8
[ 4.879384] Call Trace:
[ 4.880174] [<ffffffff8139c981>] async_xor+0x1b1/0x1da
[ 4.881392] [<ffffffff8139cf1e>] async_syndrome_val+0x16f/0x357
[ 4.882849] [<ffffffff8139d386>] ? raid6_test_exit+0x6/0x6
[ 4.884340] [<ffffffff8139d652>] raid6_dual_recov+0x269/0x2c8
[ 4.885847] [<ffffffff8139d98d>] test+0x2dc/0x4f1
[ 4.886970] [<ffffffff8139d386>] ? raid6_test_exit+0x6/0x6
[ 4.888221] [<ffffffff8139dc14>] raid6_test+0x72/0x119
[ 4.889464] [<ffffffff8139dba2>] ? test+0x4f1/0x4f1
[ 4.890646] [<ffffffff81fa913b>] do_one_initcall+0x10c/0x1c7
[ 4.891943] [<ffffffff81fa9309>] kernel_init_freeable+0x113/0x199
[ 4.893392] [<ffffffff81830cbf>] ? rest_init+0x136/0x136
[ 4.894743] [<ffffffff81830cc8>] kernel_init+0x9/0xcf
[ 4.895938] [<ffffffff8183761f>] ret_from_fork+0x3f/0x70
[ 4.897148] [<ffffffff81830cbf>] ? rest_init+0x136/0x136
[ 4.898402] Code: fa 48 8b 05 72 7d 9f 01 41 83 fa 01 89 f7 48 89 d6 48 89 e5 48 8b 11 75 05 ff 50 18 eb 28 41 83 fa 02 49 89 c9 48 8b 49 08 75 05 <ff> 50 20 eb 16 41 83 fa 03 4d 8b 41 10 75 05 ff 50 28 eb 07 4d
[ 4.911424] RIP [<ffffffff8139c419>] xor_blocks+0x2e/0x4b
[ 4.912791] RSP <ffff880012a43bc8>
[ 4.913792] CR2: 0000000000000020
[ 4.914654] ---[ end trace 596e7e91d3b9c992 ]---
[ 4.915786] Kernel panic - not syncing: Fatal exception
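A note on the oops, offered as a reading of the registers rather than a confirmed diagnosis: RAX is 0 at the fault, CR2 is 0x20, and the Code line decodes to an indirect call through *0x20(%rax). In struct xor_block_template (include/linux/raid/xor.h) the do_3 callback sits at that offset on x86_64, which suggests xor_blocks() dispatched through a NULL active_template — with the speed test skipped, nothing selected a template before raid6test ran. A minimal sketch of a defensive guard, assuming that reading (the guard is hypothetical, not the upstream fix):

/* Sketch only. The xor_blocks() entry point and active_template come
 * from crypto/xor.c; the WARN_ON_ONCE() guard is our addition. */
#include <linux/kernel.h>
#include <linux/raid/xor.h>

static struct xor_block_template *active_template; /* set by calibration */

void xor_blocks(unsigned int src_count, unsigned int bytes,
                void *dest, void **srcs)
{
        if (WARN_ON_ONCE(!active_template))
                return;  /* speed test skipped, no template ever chosen */

        if (src_count == 1) {
                active_template->do_2(bytes, dest, srcs[0]);
                return;
        }
        /* ... do_3/do_4/do_5 dispatch as in crypto/xor.c ... */
}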
Thanks,
Kernel Test Robot
[lkp] [xfs] fbcc025613: -5.6% fsmark.files_per_sec
by kernel test robot
FYI, we noticed the below changes on
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit fbcc025613590d7b1d15521555dcc6393a148a6b ("xfs: Introduce writeback context for writepages")
=========================================================================================
compiler/cpufreq_governor/disk/filesize/fs/iterations/kconfig/md/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase:
gcc-4.9/performance/8BRD_12G/4M/xfs/1x/x86_64-rhel/RAID0/1t/debian-x86_64-2015-02-07.cgz/fsyncBeforeClose/lkp-hsx02/60G/fsmark
commit:
150d5be09ce49a9bed6feb7b7dc4e5ae188778ec
fbcc025613590d7b1d15521555dcc6393a148a6b
150d5be09ce49a9b fbcc025613590d7b1d15521555
---------------- --------------------------
%stddev %change %stddev
\ | \
36122 ± 0% -57.4% 15404 ± 0% fsmark.time.involuntary_context_switches
95.30 ± 0% +1.8% 97.00 ± 0% fsmark.time.percent_of_cpu_this_job_got
25339 ± 32% +5756.0% 1483901 ± 1% fsmark.time.voluntary_context_switches
881.80 ± 45% +14258.5% 126613 ± 10% latency_stats.hits.wait_on_page_bit.__filemap_fdatawait_range.filemap_fdatawait_range.filemap_write_and_wait_range.xfs_file_fsync.vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
3548 ± 48% +11967.4% 428200 ± 7% latency_stats.sum.wait_on_page_bit.__filemap_fdatawait_range.filemap_fdatawait_range.filemap_write_and_wait_range.xfs_file_fsync.vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
36122 ± 0% -57.4% 15404 ± 0% time.involuntary_context_switches
25339 ± 32% +5756.0% 1483901 ± 1% time.voluntary_context_switches
0.75 ± 0% +6.3% 0.80 ± 0% turbostat.%Busy
22.00 ± 0% +4.5% 23.00 ± 0% turbostat.Avg_MHz
4991 ± 4% +709.3% 40394 ± 3% vmstat.system.cs
1578 ± 1% +5.4% 1663 ± 0% vmstat.system.in
2856 ± 10% -33.7% 1894 ± 10% slabinfo.file_lock_cache.active_objs
2856 ± 10% -33.7% 1894 ± 10% slabinfo.file_lock_cache.num_objs
10424 ± 15% -65.8% 3561 ± 35% slabinfo.scsi_data_buffer.active_objs
10424 ± 15% -65.8% 3561 ± 35% slabinfo.scsi_data_buffer.num_objs
3978 ± 15% -65.4% 1375 ± 37% slabinfo.xfs_da_state.active_objs
3978 ± 15% -65.4% 1375 ± 37% slabinfo.xfs_da_state.num_objs
2452 ± 15% -65.8% 837.70 ± 35% slabinfo.xfs_efd_item.active_objs
2452 ± 15% -65.8% 837.70 ± 35% slabinfo.xfs_efd_item.num_objs
10181 ± 3% -17.4% 8414 ± 3% slabinfo.xfs_ili.active_objs
10181 ± 3% -17.4% 8414 ± 3% slabinfo.xfs_ili.num_objs
2819 ± 13% -62.6% 1054 ± 32% slabinfo.xfs_log_ticket.active_objs
2819 ± 13% -62.6% 1054 ± 32% slabinfo.xfs_log_ticket.num_objs
4369 ± 14% -64.3% 1559 ± 33% slabinfo.xfs_trans.active_objs
4369 ± 14% -64.3% 1559 ± 33% slabinfo.xfs_trans.num_objs
154727 ± 25% +56.9% 242693 ± 9% sched_debug.cpu.avg_idle.stddev
3668 ± 4% +43.9% 5278 ± 27% sched_debug.cpu.nr_load_updates.103
3752 ± 3% +44.3% 5414 ± 31% sched_debug.cpu.nr_load_updates.104
3721 ± 3% +45.8% 5425 ± 28% sched_debug.cpu.nr_load_updates.105
3734 ± 2% +39.5% 5210 ± 28% sched_debug.cpu.nr_load_updates.106
3738 ± 5% +38.4% 5175 ± 29% sched_debug.cpu.nr_load_updates.107
3719 ± 3% +39.0% 5169 ± 27% sched_debug.cpu.nr_load_updates.108
3653 ± 3% +40.9% 5148 ± 27% sched_debug.cpu.nr_load_updates.109
5678 ± 14% +19.2% 6768 ± 21% sched_debug.cpu.nr_load_updates.11
3664 ± 2% +40.0% 5131 ± 28% sched_debug.cpu.nr_load_updates.110
3607 ± 4% +43.5% 5178 ± 28% sched_debug.cpu.nr_load_updates.111
3647 ± 2% +41.7% 5169 ± 28% sched_debug.cpu.nr_load_updates.112
3699 ± 4% +40.9% 5212 ± 28% sched_debug.cpu.nr_load_updates.113
3595 ± 3% +41.3% 5080 ± 29% sched_debug.cpu.nr_load_updates.114
3545 ± 5% +42.7% 5058 ± 29% sched_debug.cpu.nr_load_updates.115
3654 ± 3% +39.3% 5090 ± 28% sched_debug.cpu.nr_load_updates.117
3627 ± 4% +39.4% 5058 ± 28% sched_debug.cpu.nr_load_updates.118
3612 ± 4% +42.1% 5135 ± 28% sched_debug.cpu.nr_load_updates.119
3583 ± 4% +40.3% 5028 ± 29% sched_debug.cpu.nr_load_updates.120
3594 ± 2% +42.2% 5109 ± 29% sched_debug.cpu.nr_load_updates.121
3553 ± 4% +43.0% 5082 ± 29% sched_debug.cpu.nr_load_updates.122
3622 ± 4% +38.1% 5002 ± 29% sched_debug.cpu.nr_load_updates.123
3551 ± 3% +31.3% 4664 ± 22% sched_debug.cpu.nr_load_updates.124
3552 ± 3% +41.3% 5018 ± 30% sched_debug.cpu.nr_load_updates.125
3512 ± 8% +44.3% 5066 ± 29% sched_debug.cpu.nr_load_updates.126
3489 ± 6% +42.9% 4987 ± 29% sched_debug.cpu.nr_load_updates.127
3483 ± 6% +42.6% 4966 ± 29% sched_debug.cpu.nr_load_updates.128
3612 ± 5% +38.6% 5007 ± 29% sched_debug.cpu.nr_load_updates.129
3484 ± 7% +43.7% 5007 ± 29% sched_debug.cpu.nr_load_updates.130
3507 ± 3% +42.2% 4986 ± 29% sched_debug.cpu.nr_load_updates.131
3483 ± 3% +44.4% 5029 ± 29% sched_debug.cpu.nr_load_updates.132
3551 ± 4% +40.3% 4984 ± 28% sched_debug.cpu.nr_load_updates.134
3519 ± 3% +38.5% 4876 ± 30% sched_debug.cpu.nr_load_updates.135
3456 ± 3% +43.5% 4960 ± 30% sched_debug.cpu.nr_load_updates.136
3462 ± 4% +42.3% 4925 ± 29% sched_debug.cpu.nr_load_updates.137
3469 ± 5% +43.5% 4980 ± 29% sched_debug.cpu.nr_load_updates.138
3435 ± 5% +43.6% 4933 ± 29% sched_debug.cpu.nr_load_updates.139
3535 ± 4% +37.8% 4873 ± 28% sched_debug.cpu.nr_load_updates.140
3470 ± 5% +41.4% 4906 ± 29% sched_debug.cpu.nr_load_updates.141
3461 ± 5% +40.9% 4876 ± 29% sched_debug.cpu.nr_load_updates.142
3508 ± 6% +41.4% 4960 ± 29% sched_debug.cpu.nr_load_updates.143
5052 ± 11% +32.9% 6714 ± 21% sched_debug.cpu.nr_load_updates.39
4872 ± 13% +29.7% 6321 ± 20% sched_debug.cpu.nr_load_updates.54
5037 ± 14% +28.5% 6472 ± 20% sched_debug.cpu.nr_load_updates.56
5937 ± 12% +22.0% 7244 ± 20% sched_debug.cpu.nr_load_updates.6
4966 ± 18% +29.9% 6451 ± 19% sched_debug.cpu.nr_load_updates.63
4972 ± 17% +29.4% 6436 ± 19% sched_debug.cpu.nr_load_updates.64
4892 ± 15% +30.3% 6373 ± 20% sched_debug.cpu.nr_load_updates.65
4764 ± 20% +35.5% 6453 ± 18% sched_debug.cpu.nr_load_updates.66
4818 ± 14% +31.0% 6313 ± 19% sched_debug.cpu.nr_load_updates.67
4764 ± 14% +35.1% 6438 ± 26% sched_debug.cpu.nr_load_updates.68
4773 ± 15% +32.0% 6302 ± 20% sched_debug.cpu.nr_load_updates.69
4634 ± 15% +35.5% 6280 ± 20% sched_debug.cpu.nr_load_updates.70
4684 ± 14% +33.6% 6258 ± 21% sched_debug.cpu.nr_load_updates.71
3912 ± 6% +39.5% 5459 ± 25% sched_debug.cpu.nr_load_updates.72
4007 ± 4% +37.3% 5502 ± 27% sched_debug.cpu.nr_load_updates.73
3937 ± 3% +33.3% 5246 ± 22% sched_debug.cpu.nr_load_updates.74
3935 ± 4% +37.8% 5425 ± 25% sched_debug.cpu.nr_load_updates.75
3868 ± 4% +39.1% 5382 ± 27% sched_debug.cpu.nr_load_updates.76
3928 ± 3% +38.3% 5432 ± 26% sched_debug.cpu.nr_load_updates.77
3846 ± 4% +37.7% 5295 ± 27% sched_debug.cpu.nr_load_updates.78
3966 ± 3% +34.4% 5330 ± 26% sched_debug.cpu.nr_load_updates.79
5758 ± 15% +21.9% 7017 ± 19% sched_debug.cpu.nr_load_updates.8
3942 ± 5% +36.4% 5377 ± 26% sched_debug.cpu.nr_load_updates.81
3921 ± 3% +37.1% 5376 ± 26% sched_debug.cpu.nr_load_updates.82
3884 ± 3% +38.0% 5359 ± 26% sched_debug.cpu.nr_load_updates.83
3886 ± 5% +37.5% 5342 ± 27% sched_debug.cpu.nr_load_updates.84
3826 ± 5% +37.5% 5259 ± 24% sched_debug.cpu.nr_load_updates.85
3909 ± 2% +35.6% 5299 ± 27% sched_debug.cpu.nr_load_updates.86
3783 ± 3% +40.8% 5328 ± 27% sched_debug.cpu.nr_load_updates.88
5676 ± 15% +21.8% 6915 ± 20% sched_debug.cpu.nr_load_updates.9
3773 ± 3% +40.6% 5303 ± 27% sched_debug.cpu.nr_load_updates.92
3775 ± 2% +39.3% 5258 ± 28% sched_debug.cpu.nr_load_updates.93
3817 ± 5% +39.5% 5323 ± 27% sched_debug.cpu.nr_load_updates.94
3798 ± 4% +40.0% 5318 ± 29% sched_debug.cpu.nr_load_updates.95
3776 ± 3% +39.6% 5271 ± 27% sched_debug.cpu.nr_load_updates.96
3752 ± 4% +38.9% 5211 ± 27% sched_debug.cpu.nr_load_updates.97
4501 ± 2% +31.5% 5920 ± 24% sched_debug.cpu.nr_load_updates.avg
1642 ± 4% +552.3% 10714 ± 3% sched_debug.cpu.nr_switches.avg
12081 ± 14% +726.7% 99874 ± 12% sched_debug.cpu.nr_switches.max
2088 ± 11% +1143.5% 25972 ± 6% sched_debug.cpu.nr_switches.stddev
671.69 ± 4% +679.1% 5233 ± 3% sched_debug.cpu.sched_goidle.avg
4936 ± 18% +877.5% 48256 ± 12% sched_debug.cpu.sched_goidle.max
816.17 ± 12% +1457.0% 12707 ± 6% sched_debug.cpu.sched_goidle.stddev
838.29 ± 4% +543.4% 5393 ± 3% sched_debug.cpu.ttwu_count.avg
9306 ± 9% +461.0% 52212 ± 11% sched_debug.cpu.ttwu_count.max
1333 ± 8% +899.5% 13325 ± 6% sched_debug.cpu.ttwu_count.stddev
=========================================================================================
compiler/cpufreq_governor/disk/filesize/fs/iterations/kconfig/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase:
gcc-4.9/performance/1BRD_48G/4M/xfs/1x/x86_64-rhel/1t/debian-x86_64-2015-02-07.cgz/fsyncBeforeClose/lkp-hsx04/40G/fsmark
commit:
150d5be09ce49a9bed6feb7b7dc4e5ae188778ec
fbcc025613590d7b1d15521555dcc6393a148a6b
150d5be09ce49a9b fbcc025613590d7b1d15521555
---------------- --------------------------
%stddev %change %stddev
\ | \
296253 ± 0% +7.3% 317785 ± 0% fsmark.app_overhead
260.88 ± 0% -5.6% 246.35 ± 0% fsmark.files_per_sec
28610 ± 0% -64.0% 10291 ± 0% fsmark.time.involuntary_context_switches
94.50 ± 0% +2.6% 97.00 ± 0% fsmark.time.percent_of_cpu_this_job_got
13641 ± 20% +5289.6% 735225 ± 1% fsmark.time.voluntary_context_switches
393.50 ± 3% -27.8% 284.00 ± 14% proc-vmstat.nr_writeback
179855 ± 88% -99.3% 1304 ± 12% proc-vmstat.pgalloc_dma32
28610 ± 0% -64.0% 10291 ± 0% time.involuntary_context_switches
13641 ± 20% +5289.6% 735225 ± 1% time.voluntary_context_switches
1010090 ± 1% -5.1% 958337 ± 0% vmstat.io.bo
5530 ± 2% +578.6% 37527 ± 0% vmstat.system.cs
1283915 ± 0% +95.7% 2512835 ± 81% latency_stats.avg.async_synchronize_cookie_domain.async_synchronize_full.do_init_module.load_module.SyS_finit_module.entry_SYSCALL_64_fastpath
356.00 ± 57% +20640.0% 73834 ± 17% latency_stats.hits.wait_on_page_bit.__filemap_fdatawait_range.filemap_fdatawait_range.filemap_write_and_wait_range.xfs_file_fsync.vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
1679 ± 63% +14400.8% 243468 ± 1% latency_stats.sum.wait_on_page_bit.__filemap_fdatawait_range.filemap_fdatawait_range.filemap_write_and_wait_range.xfs_file_fsync.vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
10257570 ± 97% -99.6% 36846 ± 5% numa-numastat.node0.local_node
10257571 ± 97% -99.6% 36847 ± 5% numa-numastat.node0.numa_hit
5159497 ±166% -99.3% 34470 ± 5% numa-numastat.node3.local_node
5159497 ±166% -99.3% 34470 ± 5% numa-numastat.node3.numa_hit
0.97 ± 13% +289.7% 3.80 ± 5% turbostat.%Busy
31.25 ± 14% +255.2% 111.00 ± 5% turbostat.Avg_MHz
3221 ± 0% -9.3% 2923 ± 0% turbostat.Bzy_MHz
2.30 ± 61% +252.2% 8.11 ± 43% turbostat.CPU%c1
193.92 ± 1% +9.2% 211.83 ± 1% turbostat.PkgWatt
898.00 ± 35% +68.3% 1511 ± 12% cpuidle.C1-HSW.usage
27387698 ±168% +439.5% 1.477e+08 ± 94% cpuidle.C1E-HSW.time
147.25 ± 38% +1074.2% 1729 ± 23% cpuidle.C3-HSW.usage
67775 ± 4% +14.8% 77808 ± 3% cpuidle.C6-HSW.usage
12851618 ± 57% +1331.2% 1.839e+08 ± 6% cpuidle.POLL.time
6455 ± 3% +891.3% 63993 ± 18% cpuidle.POLL.usage
2581 ± 6% -47.0% 1367 ± 14% slabinfo.file_lock_cache.active_objs
2581 ± 6% -47.0% 1367 ± 14% slabinfo.file_lock_cache.num_objs
9911 ± 12% -64.0% 3572 ± 33% slabinfo.scsi_data_buffer.active_objs
9911 ± 12% -64.0% 3572 ± 33% slabinfo.scsi_data_buffer.num_objs
3834 ± 14% -64.6% 1359 ± 36% slabinfo.xfs_da_state.active_objs
3834 ± 14% -64.6% 1359 ± 36% slabinfo.xfs_da_state.num_objs
2332 ± 12% -64.0% 840.25 ± 33% slabinfo.xfs_efd_item.active_objs
2332 ± 12% -64.0% 840.25 ± 33% slabinfo.xfs_efd_item.num_objs
7323 ± 3% -25.7% 5443 ± 4% slabinfo.xfs_ili.active_objs
7323 ± 3% -25.7% 5443 ± 4% slabinfo.xfs_ili.num_objs
6149 ± 1% -15.0% 5225 ± 1% slabinfo.xfs_inode.active_objs
6149 ± 1% -15.0% 5225 ± 1% slabinfo.xfs_inode.num_objs
2817 ± 11% -64.7% 995.50 ± 29% slabinfo.xfs_log_ticket.active_objs
2817 ± 11% -64.7% 995.50 ± 29% slabinfo.xfs_log_ticket.num_objs
4189 ± 11% -63.6% 1523 ± 30% slabinfo.xfs_trans.active_objs
4189 ± 11% -63.6% 1523 ± 30% slabinfo.xfs_trans.num_objs
2431648 ± 95% -100.0% 66.25 ± 57% numa-vmstat.node0.nr_dirtied
2461928 ± 93% -98.8% 29788 ± 0% numa-vmstat.node0.nr_file_pages
671.50 ±126% -87.1% 86.75 ± 14% numa-vmstat.node0.nr_inactive_anon
2455143 ± 94% -99.0% 23476 ± 1% numa-vmstat.node0.nr_inactive_file
690.75 ±122% -84.8% 105.25 ± 13% numa-vmstat.node0.nr_shmem
78482 ± 90% -94.9% 4022 ± 14% numa-vmstat.node0.nr_slab_reclaimable
12214 ± 7% -11.5% 10804 ± 2% numa-vmstat.node0.nr_slab_unreclaimable
2431323 ± 95% -100.0% 60.25 ± 57% numa-vmstat.node0.nr_written
5105548 ± 91% -96.3% 188791 ± 7% numa-vmstat.node0.numa_hit
5072639 ± 92% -96.9% 155667 ± 8% numa-vmstat.node0.numa_local
1506 ± 49% -49.5% 761.50 ± 1% numa-vmstat.node1.nr_mapped
665.00 ±130% +154.7% 1693 ± 49% numa-vmstat.node2.nr_shmem
1190411 ±166% -100.0% 1.00 ±100% numa-vmstat.node3.nr_dirtied
1220156 ±161% -97.5% 30211 ± 2% numa-vmstat.node3.nr_file_pages
1213900 ±162% -98.1% 23410 ± 0% numa-vmstat.node3.nr_inactive_file
40402 ±149% -91.3% 3503 ± 13% numa-vmstat.node3.nr_slab_reclaimable
1190259 ±166% -100.0% 0.50 ±100% numa-vmstat.node3.nr_written
9848197 ± 93% -98.8% 119153 ± 0% numa-meminfo.node0.FilePages
9823758 ± 94% -99.0% 94254 ± 0% numa-meminfo.node0.Inactive
2686 ±126% -87.0% 348.50 ± 14% numa-meminfo.node0.Inactive(anon)
9821071 ± 94% -99.0% 93906 ± 1% numa-meminfo.node0.Inactive(file)
20295204 ± 92% -97.1% 584915 ± 9% numa-meminfo.node0.MemUsed
313945 ± 90% -94.9% 16093 ± 14% numa-meminfo.node0.SReclaimable
48861 ± 7% -11.5% 43225 ± 2% numa-meminfo.node0.SUnreclaim
2765 ±122% -84.7% 422.25 ± 13% numa-meminfo.node0.Shmem
362807 ± 78% -83.7% 59318 ± 4% numa-meminfo.node0.Slab
6028 ± 49% -49.5% 3045 ± 1% numa-meminfo.node1.Mapped
2660 ±130% +154.7% 6776 ± 49% numa-meminfo.node2.Shmem
4881563 ±161% -97.5% 120847 ± 2% numa-meminfo.node3.FilePages
4856947 ±162% -98.0% 95995 ± 4% numa-meminfo.node3.Inactive
4856548 ±162% -98.1% 93643 ± 0% numa-meminfo.node3.Inactive(file)
10148394 ±158% -95.3% 475091 ± 3% numa-meminfo.node3.MemUsed
669.25 ± 4% +22.5% 819.50 ± 13% numa-meminfo.node3.PageTables
161638 ±149% -91.3% 14017 ± 13% numa-meminfo.node3.SReclaimable
204385 ±117% -73.0% 55109 ± 4% numa-meminfo.node3.Slab
2473 ± 82% -87.6% 305.96 ±122% sched_debug.cfs_rq:/.exec_clock.102
321.21 ± 65% -74.9% 80.77 ± 61% sched_debug.cfs_rq:/.exec_clock.103
327.77 ± 55% -53.0% 154.12 ± 35% sched_debug.cfs_rq:/.exec_clock.106
329.30 ± 59% -67.4% 107.49 ± 41% sched_debug.cfs_rq:/.exec_clock.109
299.92 ± 44% -68.7% 93.76 ± 62% sched_debug.cfs_rq:/.exec_clock.120
304.61 ± 61% -64.5% 108.23 ± 51% sched_debug.cfs_rq:/.exec_clock.123
317.37 ± 49% -57.8% 133.97 ± 52% sched_debug.cfs_rq:/.exec_clock.124
346.76 ± 24% -48.7% 177.98 ± 28% sched_debug.cfs_rq:/.exec_clock.13
270.01 ± 25% -46.2% 145.32 ± 42% sched_debug.cfs_rq:/.exec_clock.131
317.15 ± 35% -61.8% 121.14 ± 43% sched_debug.cfs_rq:/.exec_clock.141
515.18 ± 35% -62.2% 194.89 ± 45% sched_debug.cfs_rq:/.exec_clock.19
1740 ±117% -88.5% 200.72 ± 18% sched_debug.cfs_rq:/.exec_clock.22
442.34 ± 39% -43.0% 252.26 ± 26% sched_debug.cfs_rq:/.exec_clock.25
335.97 ± 11% -35.5% 216.59 ± 36% sched_debug.cfs_rq:/.exec_clock.3
478.25 ± 42% -48.9% 244.26 ± 39% sched_debug.cfs_rq:/.exec_clock.40
369.23 ± 31% -35.8% 237.02 ± 34% sched_debug.cfs_rq:/.exec_clock.41
465.87 ± 47% -55.3% 208.45 ± 23% sched_debug.cfs_rq:/.exec_clock.43
381.79 ± 38% -53.6% 177.20 ± 25% sched_debug.cfs_rq:/.exec_clock.50
347.70 ± 16% -51.2% 169.70 ± 32% sched_debug.cfs_rq:/.exec_clock.71
284.55 ± 41% -66.0% 96.64 ± 89% sched_debug.cfs_rq:/.exec_clock.74
368.83 ± 55% -70.5% 108.91 ± 61% sched_debug.cfs_rq:/.exec_clock.79
1299 ±135% -95.2% 61.80 ± 30% sched_debug.cfs_rq:/.exec_clock.92
13.00 ±136% +501.9% 78.25 ± 65% sched_debug.cfs_rq:/.load_avg.140
1.33 ± 93% +17168.8% 230.25 ±167% sched_debug.cfs_rq:/.load_avg.25
2.00 ± 61% +2912.5% 60.25 ± 88% sched_debug.cfs_rq:/.load_avg.39
2.00 ± 70% +1550.0% 33.00 ±134% sched_debug.cfs_rq:/.load_avg.41
91.75 ±112% -75.5% 22.50 ±163% sched_debug.cfs_rq:/.load_avg.55
3.75 ± 51% +5740.0% 219.00 ± 89% sched_debug.cfs_rq:/.load_avg.60
1.25 ± 34% +18240.0% 229.25 ±146% sched_debug.cfs_rq:/.load_avg.7
53.75 ± 75% -98.6% 0.75 ± 57% sched_debug.cfs_rq:/.load_avg.98
50.61 ± 17% -24.6% 38.15 ± 15% sched_debug.cfs_rq:/.load_avg.avg
42372 ± 29% -52.5% 20110 ± 4% sched_debug.cfs_rq:/.min_vruntime.0
24964 ± 33% -49.5% 12599 ± 52% sched_debug.cfs_rq:/.min_vruntime.10
28449 ± 61% -77.0% 6548 ± 53% sched_debug.cfs_rq:/.min_vruntime.102
25429 ± 68% -78.1% 5579 ± 97% sched_debug.cfs_rq:/.min_vruntime.103
22244 ± 32% -61.8% 8497 ± 77% sched_debug.cfs_rq:/.min_vruntime.108
24582 ± 65% -72.6% 6733 ± 65% sched_debug.cfs_rq:/.min_vruntime.109
23957 ± 73% -71.4% 6853 ± 83% sched_debug.cfs_rq:/.min_vruntime.110
23019 ± 66% -64.7% 8134 ± 52% sched_debug.cfs_rq:/.min_vruntime.118
24409 ± 62% -63.1% 9018 ± 39% sched_debug.cfs_rq:/.min_vruntime.119
20859 ± 33% -56.4% 9085 ± 60% sched_debug.cfs_rq:/.min_vruntime.12
22914 ± 53% -80.8% 4408 ±132% sched_debug.cfs_rq:/.min_vruntime.120
24011 ± 64% -67.6% 7783 ± 66% sched_debug.cfs_rq:/.min_vruntime.122
25835 ± 56% -72.5% 7102 ± 53% sched_debug.cfs_rq:/.min_vruntime.125
22529 ± 35% -61.7% 8628 ± 45% sched_debug.cfs_rq:/.min_vruntime.13
22852 ± 48% -69.8% 6908 ± 55% sched_debug.cfs_rq:/.min_vruntime.141
24299 ± 46% -57.3% 10375 ± 46% sched_debug.cfs_rq:/.min_vruntime.17
32100 ± 43% -66.6% 10732 ± 47% sched_debug.cfs_rq:/.min_vruntime.19
29947 ± 59% -59.7% 12061 ± 42% sched_debug.cfs_rq:/.min_vruntime.22
21889 ± 57% -51.5% 10609 ± 27% sched_debug.cfs_rq:/.min_vruntime.25
19883 ± 18% -61.4% 7677 ± 80% sched_debug.cfs_rq:/.min_vruntime.3
30560 ± 66% -70.7% 8942 ± 66% sched_debug.cfs_rq:/.min_vruntime.34
23910 ± 38% -63.1% 8818 ± 52% sched_debug.cfs_rq:/.min_vruntime.4
24186 ± 45% -59.2% 9879 ± 35% sched_debug.cfs_rq:/.min_vruntime.41
30645 ± 61% -66.1% 10385 ± 50% sched_debug.cfs_rq:/.min_vruntime.43
26137 ± 59% -57.6% 11071 ± 42% sched_debug.cfs_rq:/.min_vruntime.50
24390 ± 66% -71.2% 7013 ± 41% sched_debug.cfs_rq:/.min_vruntime.58
24634 ± 41% -59.0% 10091 ± 40% sched_debug.cfs_rq:/.min_vruntime.61
27874 ± 54% -61.7% 10665 ± 52% sched_debug.cfs_rq:/.min_vruntime.65
23946 ± 75% -60.4% 9478 ± 36% sched_debug.cfs_rq:/.min_vruntime.66
21961 ± 49% -50.8% 10800 ± 38% sched_debug.cfs_rq:/.min_vruntime.70
21587 ± 34% -72.6% 5924 ± 43% sched_debug.cfs_rq:/.min_vruntime.71
23441 ± 74% -63.4% 8574 ± 58% sched_debug.cfs_rq:/.min_vruntime.73
17619 ± 53% -74.4% 4505 ±122% sched_debug.cfs_rq:/.min_vruntime.74
25305 ± 59% -75.4% 6232 ± 84% sched_debug.cfs_rq:/.min_vruntime.79
11691 ± 3% -54.6% 5307 ± 78% sched_debug.cfs_rq:/.min_vruntime.82
21995 ± 49% -83.0% 3735 ± 67% sched_debug.cfs_rq:/.min_vruntime.92
16851 ± 11% -59.4% 6849 ± 72% sched_debug.cfs_rq:/.min_vruntime.95
21138 ± 51% -51.7% 10211 ± 36% sched_debug.cfs_rq:/.min_vruntime.avg
0.05 ± 17% -46.2% 0.02 ± 31% sched_debug.cfs_rq:/.nr_running.avg
0.21 ± 8% -26.6% 0.15 ± 16% sched_debug.cfs_rq:/.nr_running.stddev
1.65 ± 40% -59.2% 0.68 ± 20% sched_debug.cfs_rq:/.runnable_load_avg.avg
101.50 ± 96% -78.8% 21.50 ± 28% sched_debug.cfs_rq:/.runnable_load_avg.max
9.14 ± 85% -71.8% 2.57 ± 21% sched_debug.cfs_rq:/.runnable_load_avg.stddev
-25583 ±-20% -63.9% -9234 ±-45% sched_debug.cfs_rq:/.spread0.1
-18990 ±-29% -49.9% -9510 ±-52% sched_debug.cfs_rq:/.spread0.100
-32882 ±-53% -67.3% -10737 ±-42% sched_debug.cfs_rq:/.spread0.104
-18188 ±-31% -44.1% -10169 ±-38% sched_debug.cfs_rq:/.spread0.105
-31547 ±-55% -66.3% -10623 ±-34% sched_debug.cfs_rq:/.spread0.107
-27581 ±-43% -62.4% -10384 ±-34% sched_debug.cfs_rq:/.spread0.11
-22027 ±-16% -39.5% -13324 ±-37% sched_debug.cfs_rq:/.spread0.111
-18752 ±-25% -60.0% -7508 ±-55% sched_debug.cfs_rq:/.spread0.112
-35576 ±-48% -72.7% -9707 ±-36% sched_debug.cfs_rq:/.spread0.113
-32611 ±-53% -71.2% -9377 ±-48% sched_debug.cfs_rq:/.spread0.114
-23140 ±-38% -53.5% -10766 ±-55% sched_debug.cfs_rq:/.spread0.115
-36650 ±-44% -65.7% -12585 ±-38% sched_debug.cfs_rq:/.spread0.116
-17970 ±-25% -38.3% -11093 ±-24% sched_debug.cfs_rq:/.spread0.119
-18769 ±-30% -43.2% -10669 ±-44% sched_debug.cfs_rq:/.spread0.121
-18368 ±-21% -32.9% -12328 ±-39% sched_debug.cfs_rq:/.spread0.122
-30913 ±-61% -63.3% -11341 ±-53% sched_debug.cfs_rq:/.spread0.126
-21803 ±-26% -48.0% -11333 ±-36% sched_debug.cfs_rq:/.spread0.127
-21165 ±-24% -67.7% -6840 ±-85% sched_debug.cfs_rq:/.spread0.128
-26241 ±-26% -41.2% -15428 ±-18% sched_debug.cfs_rq:/.spread0.129
-32533 ±-55% -62.2% -12312 ±-38% sched_debug.cfs_rq:/.spread0.130
-24553 ±-32% -60.5% -9708 ±-43% sched_debug.cfs_rq:/.spread0.131
-18241 ±-22% -48.1% -9475 ±-58% sched_debug.cfs_rq:/.spread0.132
-23675 ±-43% -69.3% -7265 ±-75% sched_debug.cfs_rq:/.spread0.133
-31623 ±-58% -57.5% -13450 ±-33% sched_debug.cfs_rq:/.spread0.134
-18208 ±-26% -41.8% -10590 ±-48% sched_debug.cfs_rq:/.spread0.135
-19697 ±-30% -45.2% -10793 ±-33% sched_debug.cfs_rq:/.spread0.136
-23217 ±-39% -60.6% -9144 ±-46% sched_debug.cfs_rq:/.spread0.137
-22219 ±-20% -44.0% -12439 ±-20% sched_debug.cfs_rq:/.spread0.138
-29239 ±-61% -66.0% -9946 ±-59% sched_debug.cfs_rq:/.spread0.139
-28928 ±-28% -70.5% -8519 ±-48% sched_debug.cfs_rq:/.spread0.14
-18983 ±-14% -59.1% -7765 ±-64% sched_debug.cfs_rq:/.spread0.140
-19527 ± -7% -32.4% -13204 ±-26% sched_debug.cfs_rq:/.spread0.141
-18157 ±-27% -49.8% -9109 ±-36% sched_debug.cfs_rq:/.spread0.142
-23564 ±-34% -53.2% -11036 ±-56% sched_debug.cfs_rq:/.spread0.143
-23378 ±-22% -67.8% -7528 ±-65% sched_debug.cfs_rq:/.spread0.15
-18714 ±-36% -62.8% -6960 ±-45% sched_debug.cfs_rq:/.spread0.16
-18073 ± -8% -46.1% -9735 ±-39% sched_debug.cfs_rq:/.spread0.17
-25957 ±-34% -72.3% -7199 ±-77% sched_debug.cfs_rq:/.spread0.2
-27734 ±-48% -74.4% -7107 ±-77% sched_debug.cfs_rq:/.spread0.20
-21427 ± -9% -63.3% -7863 ±-88% sched_debug.cfs_rq:/.spread0.21
-18151 ±-47% -80.5% -3539 ±-246% sched_debug.cfs_rq:/.spread0.23
-20749 ±-12% -67.5% -6752 ±-52% sched_debug.cfs_rq:/.spread0.24
-20483 ±-13% -53.6% -9500 ±-27% sched_debug.cfs_rq:/.spread0.25
-22052 ±-15% -75.0% -5518 ±-67% sched_debug.cfs_rq:/.spread0.26
-19597 ±-17% -59.5% -7929 ±-51% sched_debug.cfs_rq:/.spread0.28
-15827 ±-46% -73.5% -4189 ±-54% sched_debug.cfs_rq:/.spread0.30
-20549 ±-39% -72.5% -5651 ±-73% sched_debug.cfs_rq:/.spread0.31
-16967 ±-28% -62.7% -6336 ±-66% sched_debug.cfs_rq:/.spread0.32
-17327 ± -7% -61.1% -6744 ±-89% sched_debug.cfs_rq:/.spread0.33
-20174 ±-15% -58.2% -8435 ±-41% sched_debug.cfs_rq:/.spread0.36
-28004 ±-73% -72.6% -7665 ±-51% sched_debug.cfs_rq:/.spread0.37
-18462 ±-21% -38.8% -11292 ±-35% sched_debug.cfs_rq:/.spread0.4
-19841 ±-29% -50.0% -9922 ±-38% sched_debug.cfs_rq:/.spread0.40
-18190 ± -7% -43.8% -10231 ±-28% sched_debug.cfs_rq:/.spread0.41
-20178 ±-14% -59.4% -8185 ±-57% sched_debug.cfs_rq:/.spread0.45
-18888 ±-11% -55.4% -8416 ±-44% sched_debug.cfs_rq:/.spread0.46
-21537 ±-21% -57.4% -9175 ±-70% sched_debug.cfs_rq:/.spread0.47
-16341 ±-14% -93.0% -1142 ±-867% sched_debug.cfs_rq:/.spread0.48
-18314 ± -7% -61.1% -7120 ±-62% sched_debug.cfs_rq:/.spread0.5
-16238 ±-23% -44.3% -9040 ±-41% sched_debug.cfs_rq:/.spread0.50
-27605 ±-56% -71.0% -8018 ±-53% sched_debug.cfs_rq:/.spread0.51
-26365 ±-42% -70.5% -7781 ±-53% sched_debug.cfs_rq:/.spread0.52
-25178 ±-19% -70.7% -7377 ±-65% sched_debug.cfs_rq:/.spread0.53
-17872 ±-23% -68.5% -5635 ±-86% sched_debug.cfs_rq:/.spread0.55
-16539 ±-23% -38.3% -10208 ±-40% sched_debug.cfs_rq:/.spread0.6
-20649 ±-23% -56.1% -9069 ±-55% sched_debug.cfs_rq:/.spread0.60
-17745 ±-24% -43.5% -10019 ±-35% sched_debug.cfs_rq:/.spread0.61
-18436 ±-32% -71.1% -5332 ±-144% sched_debug.cfs_rq:/.spread0.62
-20372 ±-20% -65.8% -6960 ±-77% sched_debug.cfs_rq:/.spread0.63
-16717 ±-17% -61.0% -6520 ±-79% sched_debug.cfs_rq:/.spread0.64
-19957 ±-37% -54.3% -9119 ±-45% sched_debug.cfs_rq:/.spread0.68
-21056 ±-11% -63.2% -7756 ±-90% sched_debug.cfs_rq:/.spread0.69
-30470 ±-48% -79.6% -6229 ±-75% sched_debug.cfs_rq:/.spread0.7
-20418 ±-14% -54.4% -9310 ±-42% sched_debug.cfs_rq:/.spread0.70
-20792 ±-24% -31.8% -14187 ±-15% sched_debug.cfs_rq:/.spread0.71
-24760 ±-19% -37.0% -15605 ±-30% sched_debug.cfs_rq:/.spread0.74
-25495 ±-21% -36.9% -16078 ±-15% sched_debug.cfs_rq:/.spread0.75
-25503 ±-16% -56.2% -11178 ±-24% sched_debug.cfs_rq:/.spread0.76
-32196 ±-52% -59.9% -12917 ±-52% sched_debug.cfs_rq:/.spread0.77
-29920 ±-29% -59.7% -12070 ±-35% sched_debug.cfs_rq:/.spread0.8
-30688 ±-41% -51.8% -14805 ±-30% sched_debug.cfs_rq:/.spread0.82
-20513 ±-22% -43.8% -11518 ±-31% sched_debug.cfs_rq:/.spread0.87
-27216 ±-26% -60.3% -10800 ±-33% sched_debug.cfs_rq:/.spread0.88
-26359 ±-63% -107.2% 1904 ±875% sched_debug.cfs_rq:/.spread0.9
-30731 ±-56% -51.6% -14887 ±-21% sched_debug.cfs_rq:/.spread0.90
-33391 ±-47% -58.6% -13809 ±-10% sched_debug.cfs_rq:/.spread0.91
-20385 ± -8% -19.7% -16376 ±-18% sched_debug.cfs_rq:/.spread0.92
-26920 ±-62% -74.0% -7005 ±-33% sched_debug.cfs_rq:/.spread0.93
-34217 ±-54% -59.2% -13959 ±-35% sched_debug.cfs_rq:/.spread0.94
-25528 ±-52% -48.0% -13262 ±-31% sched_debug.cfs_rq:/.spread0.95
-25938 ±-23% -66.5% -8680 ±-55% sched_debug.cfs_rq:/.spread0.98
-21239 ± -9% -53.4% -9900 ±-30% sched_debug.cfs_rq:/.spread0.avg
-41851 ±-30% -52.1% -20048 ± -4% sched_debug.cfs_rq:/.spread0.min
104.25 ± 18% -69.5% 31.75 ± 65% sched_debug.cfs_rq:/.util_avg.113
13.33 ±136% +509.4% 81.25 ± 58% sched_debug.cfs_rq:/.util_avg.140
223.25 ±141% -91.5% 19.00 ± 30% sched_debug.cfs_rq:/.util_avg.45
224.25 ± 54% -78.4% 48.50 ± 51% sched_debug.cfs_rq:/.util_avg.54
120.00 ± 71% -78.8% 25.50 ±116% sched_debug.cfs_rq:/.util_avg.55
7.25 ±102% +2924.1% 219.25 ± 92% sched_debug.cfs_rq:/.util_avg.60
58.50 ±100% -94.4% 3.25 ±173% sched_debug.cfs_rq:/.util_avg.79
65.25 ± 44% -78.9% 13.75 ±116% sched_debug.cfs_rq:/.util_avg.98
994.50 ± 2% -15.1% 844.00 ± 9% sched_debug.cfs_rq:/.util_avg.max
550561 ± 35% +57.8% 868662 ± 26% sched_debug.cpu.avg_idle.92
6.12 ± 29% -28.4% 4.38 ± 8% sched_debug.cpu.clock.stddev
6.12 ± 29% -28.4% 4.38 ± 8% sched_debug.cpu.clock_task.stddev
3.32 ± 85% -64.5% 1.18 ± 32% sched_debug.cpu.load.avg
27.19 ±123% -74.7% 6.89 ± 15% sched_debug.cpu.load.stddev
0.00 ± 15% -36.2% 0.00 ± 33% sched_debug.cpu.next_balance.stddev
0.04 ± 16% -33.3% 0.03 ± 35% sched_debug.cpu.nr_running.avg
0.20 ± 8% -18.9% 0.16 ± 18% sched_debug.cpu.nr_running.stddev
363.50 ± 37% -35.1% 235.75 ± 19% sched_debug.cpu.nr_switches.108
666.00 ± 18% -36.8% 421.00 ± 24% sched_debug.cpu.nr_switches.11
246.25 ± 16% +67.7% 413.00 ± 19% sched_debug.cpu.nr_switches.119
1400 ±102% -74.0% 364.75 ± 6% sched_debug.cpu.nr_switches.21
864.00 ± 15% -42.4% 498.00 ± 20% sched_debug.cpu.nr_switches.5
652.50 ± 4% -36.8% 412.25 ± 23% sched_debug.cpu.nr_switches.57
740.50 ± 75% -66.3% 249.25 ± 30% sched_debug.cpu.nr_switches.75
637.75 ± 64% -64.7% 225.25 ± 20% sched_debug.cpu.nr_switches.92
-16.00 ±-158% -104.2% 0.67 ±187% sched_debug.cpu.nr_uninterruptible.14
-1.67 ±-149% -160.0% 1.00 ±122% sched_debug.cpu.nr_uninterruptible.17
-6.75 ±-46% -81.5% -1.25 ±-34% sched_debug.cpu.nr_uninterruptible.18
1.50 ±191% -200.0% -1.50 ±-74% sched_debug.cpu.nr_uninterruptible.26
0.67 ±187% -587.5% -3.25 ±-59% sched_debug.cpu.nr_uninterruptible.57
-0.25 ±-173% +800.0% -2.25 ±-72% sched_debug.cpu.nr_uninterruptible.58
0.75 ±317% -600.0% -3.75 ±-39% sched_debug.cpu.nr_uninterruptible.67
-1.67 ±-28% -160.0% 1.00 ± 81% sched_debug.cpu.nr_uninterruptible.68
-1.25 ±-34% -80.0% -0.25 ±-331% sched_debug.cpu.nr_uninterruptible.69
5.00 ± 28% -100.0% 0.00 ± 0% sched_debug.cpu.nr_uninterruptible.79
1.75 ± 47% -100.0% 0.00 ± 0% sched_debug.cpu.nr_uninterruptible.90
365.75 ± 37% -35.3% 236.50 ± 19% sched_debug.cpu.sched_count.108
672.25 ± 18% -26.4% 494.50 ± 25% sched_debug.cpu.sched_count.11
248.00 ± 16% +67.1% 414.50 ± 19% sched_debug.cpu.sched_count.119
1497 ± 94% -75.4% 368.25 ± 6% sched_debug.cpu.sched_count.21
954.00 ± 36% -36.9% 602.25 ± 25% sched_debug.cpu.sched_count.45
538.75 ± 16% +100.9% 1082 ± 59% sched_debug.cpu.sched_count.47
871.25 ± 15% -42.4% 501.50 ± 20% sched_debug.cpu.sched_count.5
1057 ± 55% -60.8% 414.75 ± 23% sched_debug.cpu.sched_count.57
742.00 ± 75% -66.3% 250.00 ± 30% sched_debug.cpu.sched_count.75
639.25 ± 64% -64.6% 226.25 ± 20% sched_debug.cpu.sched_count.92
170.25 ± 36% -36.4% 108.25 ± 17% sched_debug.cpu.sched_goidle.108
316.75 ± 16% -36.5% 201.00 ± 25% sched_debug.cpu.sched_goidle.11
114.75 ± 16% +66.0% 190.50 ± 18% sched_debug.cpu.sched_goidle.119
87.00 ± 18% +33.0% 115.75 ± 23% sched_debug.cpu.sched_goidle.126
370.75 ± 50% -53.5% 172.25 ± 4% sched_debug.cpu.sched_goidle.21
246.50 ± 16% +98.0% 488.00 ± 68% sched_debug.cpu.sched_goidle.47
420.50 ± 14% -43.5% 237.50 ± 20% sched_debug.cpu.sched_goidle.5
270.25 ± 25% -33.4% 180.00 ± 25% sched_debug.cpu.sched_goidle.55
304.50 ± 6% -35.6% 196.25 ± 24% sched_debug.cpu.sched_goidle.57
361.75 ± 76% -68.1% 115.50 ± 29% sched_debug.cpu.sched_goidle.75
40.25 ± 26% +614.3% 287.50 ±118% sched_debug.cpu.ttwu_count.119
41.50 ± 58% +197.0% 123.25 ± 89% sched_debug.cpu.ttwu_count.133
917.25 ± 69% -83.7% 149.50 ± 78% sched_debug.cpu.ttwu_count.22
187.50 ± 82% +545.7% 1210 ± 78% sched_debug.cpu.ttwu_count.36
178.75 ± 27% +228.0% 586.25 ± 91% sched_debug.cpu.ttwu_count.44
133.75 ± 56% +212.5% 418.00 ± 74% sched_debug.cpu.ttwu_count.47
107.25 ± 29% +434.3% 573.00 ± 58% sched_debug.cpu.ttwu_count.48
25.25 ± 19% +36.6% 34.50 ± 16% sched_debug.cpu.ttwu_count.88
25.25 ± 30% -42.6% 14.50 ± 14% sched_debug.cpu.ttwu_local.110
19.50 ± 20% +43.6% 28.00 ± 23% sched_debug.cpu.ttwu_local.117
377.00 ±139% -92.7% 27.50 ± 10% sched_debug.cpu.ttwu_local.21
30.00 ± 46% +104.2% 61.25 ± 44% sched_debug.cpu.ttwu_local.36
151.25 ± 39% -71.6% 43.00 ± 21% sched_debug.cpu.ttwu_local.45
40.25 ± 43% +87.0% 75.25 ± 27% sched_debug.cpu.ttwu_local.48
0.00 ± 23% -61.7% 0.00 ±106% sched_debug.rt_rq:/.rt_time.stddev
lkp-hsx02: Brickland Haswell-EX
Memory: 128G
lkp-hsx04: Brickland Haswell-EX
Memory: 512G
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang
[lkp] [perf/core] 17fcdff68f: +86.7% aim9.brk_test.ops_per_sec
by kernel test robot
Hi, Kan,
We have finally enabled testing and reporting support for Intel internal
trees; this is the first performance report for an Intel internal git
tree.
The report shows a performance increase, so it is good news in general.
However, a dmesg.xz from an i386 platform containing a perf-related
kernel bug was found as well; that dmesg is attached to this email. It
may not be related to your patch, but please take a look.
FYI, we noticed the below changes on
ssh://lab_lkp@git-amr-3.devtools.intel.com:29418/kan_lkp-lkp lkp
commit 17fcdff68f9a69f8beb7600c0803c12efed503f1 ("perf/core: find auxiliary events in running pmus list")
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/tbox_group/test/testcase/testtime:
gcc-4.9/performance/x86_64-rhel/debian-x86_64-2015-02-07.cgz/lkp-hsx04/all/aim9/5s
commit:
b7b269064ee1e3acbf738d122f599d1642c2d1f5
17fcdff68f9a69f8beb7600c0803c12efed503f1
b7b269064ee1e3ac 17fcdff68f9a69f8beb7600c08
---------------- --------------------------
%stddev %change %stddev
\ | \
2050637 ± 4% +86.7% 3828182 ± 0% aim9.brk_test.ops_per_sec
2265615 ± 2% +2.7% 2327160 ± 0% aim9.fifo_test.ops_per_sec
602225 ± 0% +14.4% 689002 ± 0% aim9.page_test.ops_per_sec
352111 ± 0% +1.0% 355662 ± 0% aim9.tcp_test.ops_per_sec
7599375 ± 0% +5.3% 8005468 ± 0% aim9.time.minor_page_faults
6303 ± 3% -27.1% 4596 ± 6% cpuidle.C1-HSW.usage
6366 ± 1% +23.5% 7863 ± 9% meminfo.AnonHugePages
147.86 ± 0% -1.7% 145.34 ± 1% turbostat.RAMWatt
6213 ± 89% -87.8% 760.67 ±126% latency_stats.max.load_module.SYSC_finit_module.SyS_finit_module.entry_SYSCALL_64_fastpath
6798 ± 83% -80.0% 1356 ±132% latency_stats.sum.load_module.SYSC_finit_module.SyS_finit_module.entry_SYSCALL_64_fastpath
1.52 ± 1% +10.7% 1.68 ± 2% perf-profile.cycles-pp.rw_verify_area.vfs_read.sys_read.entry_SYSCALL_64_fastpath
1.22 ± 2% +15.7% 1.42 ± 4% perf-profile.cycles-pp.security_file_permission.rw_verify_area.vfs_read.sys_read.entry_SYSCALL_64_fastpath
142.10 ± 28% -40.1% 85.13 ± 49% sched_debug.cfs_rq:/.exec_clock.103
106.04 ± 16% +61.9% 171.66 ± 4% sched_debug.cfs_rq:/.exec_clock.12
100.69 ± 45% +71.0% 172.14 ± 33% sched_debug.cfs_rq:/.exec_clock.125
63.68 ± 28% +62.1% 103.24 ± 17% sched_debug.cfs_rq:/.exec_clock.131
71.39 ± 10% +52.4% 108.80 ± 23% sched_debug.cfs_rq:/.exec_clock.133
58.02 ± 16% +125.1% 130.62 ± 25% sched_debug.cfs_rq:/.exec_clock.134
323.84 ± 64% +9241.1% 30250 ±146% sched_debug.cfs_rq:/.exec_clock.36
120.15 ± 26% +5403.8% 6613 ±168% sched_debug.cfs_rq:/.exec_clock.45
106.21 ± 33% +130.2% 244.51 ± 18% sched_debug.cfs_rq:/.exec_clock.46
76.77 ± 26% +92.5% 147.81 ± 45% sched_debug.cfs_rq:/.exec_clock.66
150.36 ± 21% -41.4% 88.12 ± 46% sched_debug.cfs_rq:/.exec_clock.90
6864 ± 74% +190.7% 19957 ± 32% sched_debug.cfs_rq:/.min_vruntime.101
27696 ± 57% -52.3% 13224 ± 28% sched_debug.cfs_rq:/.min_vruntime.27
15116 ± 34% +1818.6% 290027 ±135% sched_debug.cfs_rq:/.min_vruntime.36
27852 ± 46% -45.1% 15291 ± 29% sched_debug.cfs_rq:/.min_vruntime.37
18947 ± 11% -49.4% 9587 ± 15% sched_debug.cfs_rq:/.min_vruntime.75
25747 ± 47% -82.6% 4484 ± 45% sched_debug.cfs_rq:/.min_vruntime.92
9540 ± 54% +73.8% 16579 ± 42% sched_debug.cfs_rq:/.min_vruntime.93
3094 ± 72% -53.5% 1438 ± 36% sched_debug.cfs_rq:/.min_vruntime.min
14.50 ± 51% +327.6% 62.00 ± 30% sched_debug.cfs_rq:/.util_avg.103
77.25 ± 59% -97.4% 2.00 ± 61% sched_debug.cfs_rq:/.util_avg.105
57.50 ± 70% -77.0% 13.25 ±138% sched_debug.cfs_rq:/.util_avg.19
83.25 ± 65% -92.8% 6.00 ± 13% sched_debug.cfs_rq:/.util_avg.49
16.25 ±101% -84.6% 2.50 ±150% sched_debug.cfs_rq:/.util_avg.74
11286 ± 15% -35.9% 7239 ± 18% sched_debug.cpu.nr_load_updates.19
7926 ± 16% -14.3% 6792 ± 18% sched_debug.cpu.nr_load_updates.22
9021 ± 13% -23.0% 6946 ± 24% sched_debug.cpu.nr_load_updates.26
9132 ± 6% -23.6% 6980 ± 17% sched_debug.cpu.nr_load_updates.33
9410 ± 5% -31.3% 6464 ± 19% sched_debug.cpu.nr_load_updates.70
7319 ± 21% -29.3% 5174 ± 25% sched_debug.cpu.nr_load_updates.92
824.25 ± 13% +108.9% 1721 ± 25% sched_debug.cpu.nr_switches.132
978.25 ± 15% +57.0% 1536 ± 19% sched_debug.cpu.nr_switches.134
4783 ± 23% +95.4% 9347 ± 10% sched_debug.cpu.nr_switches.3
7000 ± 36% -49.2% 3554 ± 34% sched_debug.cpu.nr_switches.49
3736 ± 4% +146.1% 9196 ± 40% sched_debug.cpu.nr_switches.51
4614 ± 39% +65.5% 7634 ± 27% sched_debug.cpu.nr_switches.6
3774 ± 20% +40.8% 5314 ± 12% sched_debug.cpu.nr_switches.62
-1.50 ±-74% +350.0% -6.75 ±-42% sched_debug.cpu.nr_uninterruptible.25
0.00 ± 2% -Inf% -5.50 ±-45% sched_debug.cpu.nr_uninterruptible.31
-1.00 ±-100% +725.0% -8.25 ±-48% sched_debug.cpu.nr_uninterruptible.33
-3.50 ±-58% -114.3% 0.50 ±300% sched_debug.cpu.nr_uninterruptible.49
0.75 ±145% -800.0% -5.25 ±-38% sched_debug.cpu.nr_uninterruptible.54
-1.25 ±-163% -340.0% 3.00 ± 97% sched_debug.cpu.nr_uninterruptible.55
-1.00 ±-339% -650.0% 5.50 ±174% sched_debug.cpu.nr_uninterruptible.57
1.00 ± 81% -400.0% -3.00 ±-40% sched_debug.cpu.nr_uninterruptible.59
1.00 ±-100% +175.0% 2.75 ± 53% sched_debug.cpu.nr_uninterruptible.77
1.75 ± 62% -157.1% -1.00 ±-70% sched_debug.cpu.nr_uninterruptible.97
6.37 ± 7% +249.7% 22.29 ± 80% sched_debug.cpu.nr_uninterruptible.max
639.75 ± 19% +140.6% 1539 ± 28% sched_debug.cpu.sched_count.132
722.25 ± 18% +80.8% 1305 ± 22% sched_debug.cpu.sched_count.134
752.00 ± 31% +76.0% 1323 ± 25% sched_debug.cpu.sched_count.135
700.00 ± 16% +61.2% 1128 ± 31% sched_debug.cpu.sched_count.136
3216 ± 50% +153.8% 8164 ± 6% sched_debug.cpu.sched_count.3
6091 ± 48% -49.3% 3089 ± 41% sched_debug.cpu.sched_count.49
3188 ± 22% +50.7% 4805 ± 18% sched_debug.cpu.sched_count.62
302.50 ± 19% +65.3% 500.00 ± 25% sched_debug.cpu.sched_goidle.127
264.00 ± 16% +142.0% 639.00 ± 29% sched_debug.cpu.sched_goidle.132
302.25 ± 15% +82.4% 551.25 ± 25% sched_debug.cpu.sched_goidle.134
322.00 ± 31% +69.2% 544.75 ± 30% sched_debug.cpu.sched_goidle.135
287.75 ± 10% +65.1% 475.00 ± 35% sched_debug.cpu.sched_goidle.136
303.75 ± 14% +87.2% 568.75 ± 58% sched_debug.cpu.sched_goidle.138
304.25 ± 9% +60.0% 486.75 ± 38% sched_debug.cpu.sched_goidle.142
3094 ± 47% -51.2% 1510 ± 22% sched_debug.cpu.sched_goidle.24
1557 ± 50% +159.6% 4044 ± 6% sched_debug.cpu.sched_goidle.3
2955 ± 50% -48.9% 1511 ± 39% sched_debug.cpu.sched_goidle.49
1545 ± 23% +50.5% 2325 ± 18% sched_debug.cpu.sched_goidle.62
1959 ± 42% +87.8% 3679 ± 11% sched_debug.cpu.ttwu_count.1
1611 ± 75% -74.6% 409.00 ± 67% sched_debug.cpu.ttwu_count.114
641.25 ± 98% +185.1% 1828 ± 69% sched_debug.cpu.ttwu_count.128
404.25 ± 59% +309.3% 1654 ± 79% sched_debug.cpu.ttwu_count.130
296.00 ± 28% +253.5% 1046 ± 62% sched_debug.cpu.ttwu_count.131
358.00 ± 30% +211.7% 1116 ± 54% sched_debug.cpu.ttwu_count.133
286.75 ± 22% +278.2% 1084 ± 82% sched_debug.cpu.ttwu_count.134
301.00 ± 31% +423.8% 1576 ± 78% sched_debug.cpu.ttwu_count.137
2422 ± 25% -50.0% 1212 ± 36% sched_debug.cpu.ttwu_count.24
1238 ± 75% +135.6% 2917 ± 62% sched_debug.cpu.ttwu_count.3
1091 ± 15% +98.3% 2163 ± 17% sched_debug.cpu.ttwu_count.61
145.25 ± 35% +102.2% 293.75 ± 42% sched_debug.cpu.ttwu_local.132
142.75 ± 34% +83.4% 261.75 ± 20% sched_debug.cpu.ttwu_local.135
1029 ± 42% -71.4% 294.00 ± 18% sched_debug.cpu.ttwu_local.20
1082 ± 94% -83.9% 174.25 ± 20% sched_debug.cpu.ttwu_local.37
775.25 ±107% -76.7% 181.00 ± 26% sched_debug.cpu.ttwu_local.39
440.50 ± 40% -49.9% 220.75 ± 54% sched_debug.cpu.ttwu_local.42
278.25 ± 38% +198.8% 831.50 ± 55% sched_debug.cpu.ttwu_local.47
249.00 ± 16% -39.1% 151.75 ± 22% sched_debug.cpu.ttwu_local.52
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/tbox_group/test/testcase/testtime:
gcc-4.9/performance/x86_64-rhel/debian-x86_64-2015-02-07.cgz/ivb43/all/aim9/5s
commit:
b7b269064ee1e3acbf738d122f599d1642c2d1f5
17fcdff68f9a69f8beb7600c0803c12efed503f1
b7b269064ee1e3ac 17fcdff68f9a69f8beb7600c08
---------------- --------------------------
%stddev %change %stddev
\ | \
2481150 ± 0% +52.0% 3770425 ± 0% aim9.brk_test.ops_per_sec
774700 ± 0% -5.2% 734250 ± 0% aim9.creat-clo.ops_per_sec
1252315 ± 0% +2.7% 1285975 ± 0% aim9.dgram_pipe.ops_per_sec
1399630 ± 1% -2.8% 1360896 ± 0% aim9.disk_cp.ops_per_sec
2136576 ± 0% -2.3% 2087936 ± 0% aim9.disk_wrt.ops_per_sec
642005 ± 0% +7.2% 688330 ± 1% aim9.page_test.ops_per_sec
362.65 ± 0% +1.5% 367.95 ± 0% aim9.shell_rtns_2.ops_per_sec
363.67 ± 0% +2.1% 371.20 ± 0% aim9.shell_rtns_3.ops_per_sec
193.02 ± 0% -2.4% 188.43 ± 0% aim9.sieve.ops_per_sec
1441050 ± 0% -2.6% 1403875 ± 0% aim9.stream_pipe.ops_per_sec
2092365 ± 0% -2.3% 2044800 ± 0% aim9.sync_disk_wrt.ops_per_sec
527720 ± 1% -6.3% 494415 ± 3% aim9.udp_test.ops_per_sec
948.00 ± 17% -31.3% 651.50 ± 9% proc-vmstat.numa_pte_updates
51688 ± 5% +16.6% 60292 ± 7% softirqs.RCU
3.56 ± 1% -3.0% 3.45 ± 1% turbostat.RAMWatt
40801038 ± 15% +67.1% 68183060 ± 22% cpuidle.C1E-IVT.time
30446 ± 8% -23.6% 23262 ± 21% cpuidle.C1E-IVT.usage
12724 ± 6% -10.0% 11455 ± 6% slabinfo.radix_tree_node.active_objs
12813 ± 6% -9.4% 11613 ± 7% slabinfo.radix_tree_node.num_objs
1.65 ± 4% -10.3% 1.48 ± 1% perf-profile.cycles-pp.find_get_entry.find_lock_entry.shmem_getpage_gfp.shmem_write_begin.generic_perform_write
0.98 ± 3% -9.9% 0.89 ± 6% perf-profile.cycles-pp.free_hot_cold_page.free_hot_cold_page_list.release_pages.__pagevec_release.shmem_undo_range
1.95 ± 2% -7.4% 1.81 ± 3% perf-profile.cycles-pp.truncate_inode_page.shmem_undo_range.shmem_truncate_range.shmem_evict_inode.evict
916.81 ± 46% -68.6% 288.12 ± 50% sched_debug.cfs_rq:/.exec_clock.11
247.41 ± 44% +178.3% 688.51 ± 18% sched_debug.cfs_rq:/.exec_clock.13
470.48 ± 14% +185.8% 1344 ± 75% sched_debug.cfs_rq:/.exec_clock.14
397.15 ± 43% +8238.1% 33115 ±158% sched_debug.cfs_rq:/.exec_clock.16
314.47 ± 51% +115.7% 678.41 ± 13% sched_debug.cfs_rq:/.exec_clock.17
307.91 ± 41% +377.4% 1469 ± 97% sched_debug.cfs_rq:/.exec_clock.21
338.97 ± 54% +87.9% 636.85 ± 25% sched_debug.cfs_rq:/.exec_clock.22
288.60 ± 44% +122.3% 641.55 ± 18% sched_debug.cfs_rq:/.exec_clock.23
744.66 ± 21% -61.2% 288.72 ± 44% sched_debug.cfs_rq:/.exec_clock.3
174.12 ± 54% +104.6% 356.25 ± 44% sched_debug.cfs_rq:/.exec_clock.36
168.93 ± 43% +74.8% 295.31 ± 25% sched_debug.cfs_rq:/.exec_clock.37
176.70 ± 51% +113.7% 377.57 ± 35% sched_debug.cfs_rq:/.exec_clock.46
188.10 ± 44% +94.9% 366.58 ± 42% sched_debug.cfs_rq:/.exec_clock.47
567.29 ± 16% -57.5% 241.00 ± 33% sched_debug.cfs_rq:/.exec_clock.8
13087 ±151% -97.4% 336.14 ± 72% sched_debug.cfs_rq:/.exec_clock.9
75.75 ± 95% -38.0% 47.00 ± -2% sched_debug.cfs_rq:/.load.14
22709 ± 60% -61.4% 8765 ± 21% sched_debug.cfs_rq:/.min_vruntime.11
13014 ± 30% +35.2% 17599 ± 19% sched_debug.cfs_rq:/.min_vruntime.14
10710 ± 45% +984.7% 116183 ±133% sched_debug.cfs_rq:/.min_vruntime.16
9906 ± 33% +51.5% 15007 ± 11% sched_debug.cfs_rq:/.min_vruntime.22
101031 ±136% -92.0% 8119 ± 35% sched_debug.cfs_rq:/.min_vruntime.9
2535 ± 30% +50.8% 3823 ± 28% sched_debug.cfs_rq:/.min_vruntime.min
3.25 ± 39% -100.0% 0.00 ± 0% sched_debug.cfs_rq:/.runnable_load_avg.11
18.25 ±103% +543.8% 117.50 ±124% sched_debug.cfs_rq:/.util_avg.1
132.50 ± 23% -84.2% 21.00 ± 81% sched_debug.cfs_rq:/.util_avg.11
14.00 ± 86% -85.7% 2.00 ±117% sched_debug.cfs_rq:/.util_avg.27
39.50 ± 46% -87.3% 5.00 ±110% sched_debug.cfs_rq:/.util_avg.32
15.00 ±107% +175.0% 41.25 ± 41% sched_debug.cfs_rq:/.util_avg.45
68.50 ±105% -86.5% 9.25 ±111% sched_debug.cfs_rq:/.util_avg.9
862727 ± 11% +15.3% 994974 ± 0% sched_debug.cpu.avg_idle.9
3.25 ± 39% -100.0% 0.00 ± 0% sched_debug.cpu.cpu_load[0].11
3.25 ± 39% -100.0% 0.00 ± 0% sched_debug.cpu.cpu_load[1].11
5367 ±156% -95.7% 232.00 ± 0% sched_debug.cpu.curr->pid.14
39907 ± 5% -32.6% 26909 ± 19% sched_debug.cpu.curr->pid.max
5971 ± 4% -30.0% 4180 ± 22% sched_debug.cpu.curr->pid.stddev
75.75 ± 95% -38.0% 47.00 ± -2% sched_debug.cpu.load.14
5398 ± 20% +75.1% 9452 ± 6% sched_debug.cpu.nr_load_updates.13
5524 ± 24% +69.4% 9361 ± 14% sched_debug.cpu.nr_load_updates.14
4971 ± 25% +722.6% 40893 ±127% sched_debug.cpu.nr_load_updates.16
4843 ± 23% +91.7% 9284 ± 8% sched_debug.cpu.nr_load_updates.17
4797 ± 29% +52.1% 7296 ± 6% sched_debug.cpu.nr_load_updates.19
5189 ± 18% +86.4% 9673 ± 33% sched_debug.cpu.nr_load_updates.21
5483 ± 20% +40.6% 7712 ± 4% sched_debug.cpu.nr_load_updates.22
8986 ± 17% -30.8% 6218 ± 27% sched_debug.cpu.nr_load_updates.3
3513 ± 9% +31.9% 4634 ± 33% sched_debug.cpu.nr_load_updates.35
3498 ± 18% +34.8% 4714 ± 16% sched_debug.cpu.nr_load_updates.39
3579 ± 16% +29.6% 4638 ± 15% sched_debug.cpu.nr_load_updates.41
3194 ± 24% +41.5% 4518 ± 19% sched_debug.cpu.nr_load_updates.45
3280 ± 20% +31.6% 4317 ± 18% sched_debug.cpu.nr_load_updates.47
20835 ± 98% -68.9% 6475 ± 28% sched_debug.cpu.nr_load_updates.9
2724 ± 11% +42.9% 3893 ± 24% sched_debug.cpu.nr_load_updates.min
21756 ± 28% -45.2% 11913 ± 12% sched_debug.cpu.nr_switches.0
14276 ± 7% -30.1% 9984 ± 24% sched_debug.cpu.nr_switches.1
7747 ± 44% +196.7% 22985 ± 27% sched_debug.cpu.nr_switches.13
8340 ± 49% +168.1% 22363 ± 17% sched_debug.cpu.nr_switches.14
7977 ± 47% +111.5% 16876 ± 10% sched_debug.cpu.nr_switches.15
6997 ± 57% +197.9% 20848 ± 23% sched_debug.cpu.nr_switches.16
6659 ± 52% +223.0% 21508 ± 20% sched_debug.cpu.nr_switches.17
6641 ± 51% +304.1% 26837 ± 30% sched_debug.cpu.nr_switches.18
8203 ± 53% +101.4% 16522 ± 17% sched_debug.cpu.nr_switches.20
20877 ± 40% -64.1% 7489 ± 48% sched_debug.cpu.nr_switches.3
21043 ± 41% -62.7% 7843 ± 25% sched_debug.cpu.nr_switches.4
22706 ± 38% -66.3% 7658 ± 35% sched_debug.cpu.nr_switches.5
19736 ± 13% -71.8% 5561 ± 34% sched_debug.cpu.nr_switches.8
30402 ± 62% -74.0% 7902 ± 44% sched_debug.cpu.nr_switches.9
-5.25 ±-20% -142.9% 2.25 ± 65% sched_debug.cpu.nr_uninterruptible.15
1.25 ±153% -540.0% -5.50 ±-15% sched_debug.cpu.nr_uninterruptible.2
-3.25 ±-111% -184.6% 2.75 ± 86% sched_debug.cpu.nr_uninterruptible.23
0.75 ±404% -33.3% 0.50 ±100% sched_debug.cpu.nr_uninterruptible.25
1.00 ±122% -400.0% -3.00 ± 33% sched_debug.cpu.nr_uninterruptible.35
1.50 ±100% +116.7% 3.25 ± 76% sched_debug.cpu.nr_uninterruptible.44
1.00 ± 70% +250.0% 3.50 ± 47% sched_debug.cpu.nr_uninterruptible.46
-0.25 ±-655% +1500.0% -4.00 ±-35% sched_debug.cpu.nr_uninterruptible.8
32037 ± 52% -73.5% 8479 ± 54% sched_debug.cpu.sched_count.11
6307 ± 54% +226.7% 20606 ± 36% sched_debug.cpu.sched_count.13
7007 ± 54% +118.1% 15280 ± 10% sched_debug.cpu.sched_count.15
5794 ± 58% +240.8% 19750 ± 24% sched_debug.cpu.sched_count.16
5758 ± 60% +249.0% 20095 ± 26% sched_debug.cpu.sched_count.17
5758 ± 62% +334.9% 25046 ± 33% sched_debug.cpu.sched_count.18
8738 ± 59% +271.1% 32427 ± 72% sched_debug.cpu.sched_count.21
6669 ± 56% +111.1% 14075 ± 14% sched_debug.cpu.sched_count.23
22514 ± 20% -71.3% 6456 ± 55% sched_debug.cpu.sched_count.3
19812 ± 45% -68.7% 6200 ± 38% sched_debug.cpu.sched_count.4
18619 ± 13% -74.8% 4699 ± 40% sched_debug.cpu.sched_count.8
39206 ± 87% -82.1% 7009 ± 47% sched_debug.cpu.sched_count.9
8214 ± 39% -59.8% 3299 ± 35% sched_debug.cpu.sched_goidle.0
15857 ± 52% -75.1% 3943 ± 59% sched_debug.cpu.sched_goidle.11
2895 ± 57% +250.9% 10160 ± 35% sched_debug.cpu.sched_goidle.13
3504 ± 55% +186.1% 10026 ± 16% sched_debug.cpu.sched_goidle.14
3312 ± 57% +126.8% 7512 ± 11% sched_debug.cpu.sched_goidle.15
2695 ± 62% +261.1% 9733 ± 24% sched_debug.cpu.sched_goidle.16
2716 ± 62% +265.6% 9932 ± 27% sched_debug.cpu.sched_goidle.17
2726 ± 64% +354.9% 12400 ± 33% sched_debug.cpu.sched_goidle.18
3348 ± 66% +131.9% 7763 ± 18% sched_debug.cpu.sched_goidle.20
3182 ± 56% +117.7% 6928 ± 13% sched_debug.cpu.sched_goidle.23
9258 ± 44% -67.2% 3039 ± 59% sched_debug.cpu.sched_goidle.3
9756 ± 45% -70.0% 2925 ± 40% sched_debug.cpu.sched_goidle.4
10530 ± 40% -70.2% 3135 ± 40% sched_debug.cpu.sched_goidle.5
9089 ± 12% -75.9% 2188 ± 41% sched_debug.cpu.sched_goidle.8
14612 ± 65% -77.3% 3324 ± 48% sched_debug.cpu.sched_goidle.9
5540 ± 30% -58.9% 2275 ± 39% sched_debug.cpu.ttwu_count.10
2323 ± 52% +237.2% 7832 ± 17% sched_debug.cpu.ttwu_count.15
2063 ± 51% +234.4% 6900 ± 30% sched_debug.cpu.ttwu_count.16
2166 ± 64% +170.7% 5865 ± 27% sched_debug.cpu.ttwu_count.17
5773 ± 22% -47.6% 3023 ± 30% sched_debug.cpu.ttwu_count.2
3042 ± 31% +118.6% 6651 ± 55% sched_debug.cpu.ttwu_count.20
2207 ± 51% +190.0% 6401 ± 16% sched_debug.cpu.ttwu_count.21
2033 ± 70% +166.4% 5416 ± 24% sched_debug.cpu.ttwu_count.23
7394 ± 46% -73.7% 1946 ± 28% sched_debug.cpu.ttwu_count.3
2153 ± 39% -49.2% 1094 ± 74% sched_debug.cpu.ttwu_count.34
4624 ± 27% -52.8% 2184 ± 27% sched_debug.cpu.ttwu_count.5
5181 ± 21% -52.5% 2463 ± 62% sched_debug.cpu.ttwu_count.7
5479 ± 24% -69.3% 1683 ± 27% sched_debug.cpu.ttwu_count.8
1486 ± 17% -23.8% 1132 ± 12% sched_debug.cpu.ttwu_local.1
2345 ± 39% -54.4% 1068 ± 47% sched_debug.cpu.ttwu_local.10
549.50 ± 41% +205.5% 1678 ± 60% sched_debug.cpu.ttwu_local.13
552.50 ± 31% +221.7% 1777 ± 49% sched_debug.cpu.ttwu_local.16
454.00 ± 57% +314.3% 1881 ± 48% sched_debug.cpu.ttwu_local.18
617.00 ± 38% +73.7% 1071 ± 26% sched_debug.cpu.ttwu_local.19
1650 ± 39% -45.1% 905.50 ± 36% sched_debug.cpu.ttwu_local.2
482.50 ± 46% +231.3% 1598 ± 32% sched_debug.cpu.ttwu_local.23
400.00 ± 11% +35.1% 540.50 ± 27% sched_debug.cpu.ttwu_local.25
2585 ± 57% -76.7% 602.25 ± 6% sched_debug.cpu.ttwu_local.3
1629 ± 7% -38.0% 1010 ± 40% sched_debug.cpu.ttwu_local.5
1789 ± 39% -62.2% 677.00 ± 40% sched_debug.cpu.ttwu_local.8
2223 ± 24% -48.4% 1146 ± 52% sched_debug.cpu.ttwu_local.9
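(How to read the table above: each row is mean ± stddev% for the base commit,
the relative change, mean ± stddev% for the tested commit, then the metric
name. Rows with very large stddev, e.g. ±158%, are dominated by run-to-run
noise rather than a real regression.)

These counters come from the scheduler's debug interface; the cfs_rq metrics
are the per-runqueue ".exec_clock"/".min_vruntime" fields and the cpu metrics
the per-cpu "nr_switches" etc. fields. To eyeball the same numbers on a box of
your own (a minimal sketch, assuming CONFIG_SCHED_DEBUG is enabled; on kernels
of this vintage the file lives in /proc):

    # dump the current per-cpu and per-cfs_rq scheduler statistics
    cat /proc/sched_debug

    # take two snapshots 10s apart to see how e.g. nr_switches moves
    grep nr_switches /proc/sched_debug > before
    sleep 10
    grep nr_switches /proc/sched_debug > after
    diff before after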
=========================================================================================
compiler/kconfig/rootfs/sleep/tbox_group/testcase:
gcc-5/i386-randconfig-w0-02200014/yocto-minimal-i386.cgz/1/vm-kbuild-yocto-i386/boot
commit:
b7b269064ee1e3acbf738d122f599d1642c2d1f5
17fcdff68f9a69f8beb7600c0803c12efed503f1
b7b269064ee1e3ac 17fcdff68f9a69f8beb7600c08
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
1:2 -50% :2 dmesg.BUG:unable_to_handle_kernel
1:2 -50% :2 dmesg.EIP_is_at_perf_prepare_sample
1:2 -50% :2 dmesg.Kernel_panic-not_syncing:Fatal_exception
1:2 -50% :2 dmesg.Oops
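(Reading the fail:runs matrices: "1:2" means the pattern was hit in 1 of 2
runs of that commit, a bare ":2" means 0 of 2, and %reproduction is the change
in hit rate between the two commits. So the rows above say the
perf_prepare_sample oops reproduced on the first commit but not on the
second.)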
=========================================================================================
compiler/kconfig/rootfs/sleep/tbox_group/testcase:
gcc-5/x86_64-randconfig-w0-02192129/quantal-core-x86_64.cgz/1/vm-vp-quantal-x86_64/boot
commit:
b7b269064ee1e3acbf738d122f599d1642c2d1f5
17fcdff68f9a69f8beb7600c0803c12efed503f1
b7b269064ee1e3ac 17fcdff68f9a69f8beb7600c08
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
:2 50% 1:2 dmesg.Kernel_panic-not_syncing:Out_of_memory_and_no_killable_processes
=========================================================================================
compiler/kconfig/rootfs/sleep/tbox_group/testcase:
gcc-5/i386-randconfig-w0-02192054/openwrt-i386.cgz/1/vm-intel12-openwrt-i386/boot
commit:
b7b269064ee1e3acbf738d122f599d1642c2d1f5
17fcdff68f9a69f8beb7600c0803c12efed503f1
b7b269064ee1e3ac 17fcdff68f9a69f8beb7600c08
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
1:2 -50% :2 dmesg.Kernel_panic-not_syncing:Out_of_memory_and_no_killable_processes
1:2 -50% :2 dmesg.Mem-Info
1:2 -50% :2 dmesg.invoked_oom-killer:gfp_mask=0x
=========================================================================================
compiler/kconfig/rootfs/sleep/tbox_group/testcase:
gcc-5/i386-randconfig-w0-02192022/yocto-minimal-i386.cgz/1/vm-kbuild-yocto-i386/boot
commit:
b7b269064ee1e3acbf738d122f599d1642c2d1f5
17fcdff68f9a69f8beb7600c0803c12efed503f1
b7b269064ee1e3ac 17fcdff68f9a69f8beb7600c08
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
:2 100% 2:2 last_state.is_incomplete_run
=========================================================================================
compiler/kconfig/rootfs/sleep/tbox_group/testcase:
gcc-5/i386-randconfig-w0-02191915/openwrt-i386.cgz/1/vm-lkp-wsx03-openwrt-i386/boot
commit:
b7b269064ee1e3acbf738d122f599d1642c2d1f5
17fcdff68f9a69f8beb7600c0803c12efed503f1
b7b269064ee1e3ac 17fcdff68f9a69f8beb7600c08
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
:2 50% 1:2 dmesg.page_allocation_failure:order:#,mode
:2 50% 1:2 dmesg.warn_alloc_failed+0x
=========================================================================================
compiler/kconfig/rootfs/sleep/tbox_group/testcase:
gcc-5/i386-randconfig-w0-02191633/yocto-minimal-i386.cgz/1/vm-kbuild-yocto-i386/boot
commit:
b7b269064ee1e3acbf738d122f599d1642c2d1f5
17fcdff68f9a69f8beb7600c0803c12efed503f1
b7b269064ee1e3ac 17fcdff68f9a69f8beb7600c08
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
:2 100% 2:46 dmesg.BUG:unable_to_handle_kernel
:2 100% 2:46 dmesg.EIP_is_at_perf_prepare_sample
:2 100% 2:46 dmesg.Kernel_panic-not_syncing:Fatal_exception
:2 100% 2:46 dmesg.Oops
=========================================================================================
compiler/kconfig/rootfs/sleep/tbox_group/testcase:
gcc-4.9/x86_64-rhel/debian-x86_64-2015-02-07.cgz/1/vm-client1-1G/boot
commit:
b7b269064ee1e3acbf738d122f599d1642c2d1f5
17fcdff68f9a69f8beb7600c0803c12efed503f1
b7b269064ee1e3ac 17fcdff68f9a69f8beb7600c08
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
2:2 -100% :2 kmsg.ACPI:No_IRQ_available_for_PCI_Interrupt_Link[LNKS].Try_pci=noacpi_or_acpi=off
2:2 -100% :2 kmsg.ACPI:PCI_Interrupt_Link[LNKA]enabled_at_IRQ
2:2 -100% :2 kmsg.ACPI:PCI_Interrupt_Link[LNKC]enabled_at_IRQ
2:2 -100% :2 kmsg.ACPI:PCI_Interrupt_Link[LNKD]enabled_at_IRQ
1:2 -50% :2 kmsg.APIC_calibration_not_consistent_with_PM-Timer:#ms_instead_of#ms
2:2 -100% :2 kmsg.Error:Driver'pcspkr'is_already_registered,aborting
2:2 -100% :2 kmsg.Measured#cycles_TSC_warp_between_CPUs,turning_off_TSC_clock
2:2 -100% :2 kmsg.TSC_synchronization[CPU##->CPU##]
2:2 -100% :2 kmsg.acpi_PNP0A03:#:fail_to_add_MMCONFIG_information,can't_access_extended_PCI_configuration_space_under_this_bridge
2:2 -100% :2 kmsg.i6300esb:Unexpected_close,not_stopping_watchdog
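The tbox_group names used in the sections above map to the following machines: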
lkp-hsx04: Brickland Haswell-EX
Memory: 512G
ivb43: Ivytown Ivy Bridge-EP
Memory: 64G
vm-lkp-wsx03-openwrt-i386: qemu-system-i386 -enable-kvm
Memory: 192M
vm-kbuild-yocto-i386: qemu-system-i386 -enable-kvm
Memory: 320M
vm-kbuild-1G: qemu-system-x86_64 -enable-kvm -cpu Haswell,+smep,+smap
Memory: 1G
vm-kbuild-yocto-x86_64: qemu-system-x86_64 -enable-kvm -cpu SandyBridge
Memory: 320M
vm-vp-quantal-x86_64: qemu-system-x86_64 -enable-kvm
Memory: 360M
vm-intel12-openwrt-i386: qemu-system-i386 -enable-kvm
Memory: 192M
vm-intel12-1G: qemu-system-x86_64 -enable-kvm -cpu Nehalem
Memory: 1G
vm-lkp-wsx03-1G: qemu-system-x86_64 -enable-kvm -cpu host
Memory: 1G
vm-client1-1G: qemu-system-x86_64 -enable-kvm
Memory: 1G
vm-lkp-wsx03-quantal-i386: qemu-system-i386 -enable-kvm
Memory: 360M
vm-lkp-wsx03-yocto-i386: qemu-system-i386 -enable-kvm
Memory: 320M
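To approximate one of the vm-* testboxes locally, the entries above expand to
something like the following (a minimal sketch for vm-kbuild-yocto-i386; the
kernel image, initrd path and append line are illustrative assumptions, not
the robot's exact command line):

    # -m matches the "Memory: 320M" entry; bzImage is a kernel built from
    # the commit under test, and the .cgz initrd is the rootfs named in the
    # section header above
    qemu-system-i386 -enable-kvm -m 320M -nographic \
        -kernel bzImage -initrd yocto-minimal-i386.cgz \
        -append 'console=ttyS0 root=/dev/ram0 rw'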
[*] bisect-good sample
[O] bisect-bad sample
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
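Here "lkp install" resolves the dependencies the job needs and "lkp run"
executes the job described by job.yaml; since the job file is the one attached
to the original report, the commands only reproduce this result when run
against the same commit and kernel config.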
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Huang Ying