[fentry] WARNING: CPU: 0 PID: 1 at kernel/trace/trace_functions_graph.c:224 ftrace_return_to_handler+0xd2/0x1a8()
by Fengguang Wu
FYI, we noticed the below changes on
git://flatbed.openfabrics.org/~amirv/linux.git for-netdev
commit de89e9380c82b0da22a6609f973dd6e9d67656e3 ("regression: Disable FENTRY")
+-----------------------------------------------------------------------------+------------+------------+
| | d068b02cfd | de89e9380c |
+-----------------------------------------------------------------------------+------------+------------+
| boot_successes | 0 | 0 |
| boot_failures | 10 | 10 |
| BUG:sleeping_function_called_from_invalid_context_at_kernel/locking/mutex.c | 7 | |
| backtrace:do_vfs_ioctl | 7 | |
| backtrace:SyS_ioctl | 7 | |
| Out_of_memory:Kill_process | 3 | |
| Kernel_panic-not_syncing:Out_of_memory_and_no_killable_processes | 2 | |
| backtrace:do_execve | 2 | |
| backtrace:SyS_execve | 2 | |
| backtrace:do_fork | 3 | |
| backtrace:SyS_clone | 3 | |
| backtrace:do_sys_open | 1 | |
| backtrace:SyS_open | 1 | |
| backtrace:pgd_alloc | 1 | |
| backtrace:mm_init | 1 | |
| backtrace:SYSC_reboot | 1 | |
| backtrace:SyS_reboot | 1 | |
| WARNING:at_kernel/trace/trace_functions_graph.c:ftrace_return_to_handler() | 0 | 10 |
| backtrace:register_tracer | 0 | 10 |
| backtrace:init_graph_trace | 0 | 10 |
| backtrace:kernel_init_freeable | 0 | 10 |
+-----------------------------------------------------------------------------+------------+------------+
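For readers new to these tables: each cell counts how many of the sampled
boots on that commit hit the normalized dmesg pattern in the left column
(here, 10 boots per commit). A rough sketch of the tallying, assuming a
hypothetical layout with one saved serial log per boot:
----------------------------------------------------------------------------
#!/bin/bash
for commit in d068b02cfd de89e9380c; do
	# grep -l lists the logs containing the pattern; wc -l counts them
	n=$(grep -l 'ftrace_return_to_handler' result/$commit/*/dmesg | wc -l)
	echo "$commit: $n of 10 boots hit the warning"
done
----------------------------------------------------------------------------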
[ 10.644181] PASSED
[ 10.644938] Testing tracer function_graph:
[ 11.136016] ------------[ cut here ]------------
[ 11.140000] WARNING: CPU: 0 PID: 1 at kernel/trace/trace_functions_graph.c:224 ftrace_return_to_handler+0xd2/0x1a8()
[ 11.140000] Bad frame pointer: expected ffff880023603eb8, received ffff880023603ef0
[ 11.140000] from func __calc_delta return to ffffffff810e981c
[ 11.140000] Modules linked in:
[ 11.140000] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.17.0-rc7-01516-g8585242 #920
[ 11.140000] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
[ 11.140000] 0000000000000000 ffff880023603db8 ffffffff82d0d960 ffff880023603e00
[ 11.140000] ffff880023603df0 ffffffff810c3476 ffffffff81164abb ffffffff82d05448
[ 11.140000] ffff8800237d6238 ffff8800237d6180 0000000000000000 ffff880023603e58
[ 11.140000] Call Trace:
[ 11.140000] <IRQ> [<ffffffff82d0d960>] dump_stack+0x4d/0x66
[ 11.140000] [<ffffffff810c3476>] warn_slowpath_common+0x7f/0x98
[ 11.140000] [<ffffffff81164abb>] ? ftrace_return_to_handler+0xd2/0x1a8
[ 11.140000] [<ffffffff82d05448>] ? set_tsk_thread_flag+0x13/0x13
[ 11.140000] [<ffffffff810c34d7>] warn_slowpath_fmt+0x48/0x50
[ 11.140000] [<ffffffff810e9461>] ? __calc_delta+0x15/0xae
[ 11.140000] [<ffffffff81164abb>] ftrace_return_to_handler+0xd2/0x1a8
[ 11.140000] [<ffffffff810e981c>] ? sched_slice+0x74/0x87
[ 11.140000] [<ffffffff82d3485e>] ? ftrace_graph_caller+0x7e/0x7e
[ 11.140000] [<ffffffff810e981c>] sched_slice+0x74/0x87
[ 11.140000] [<ffffffff810e981c>] ? sched_slice+0x74/0x87
[ 11.140000] [<ffffffff82d34873>] return_to_handler+0x15/0x32
[ 11.140000] [<ffffffff82d3485e>] ? ftrace_graph_caller+0x7e/0x7e
[ 11.140000] [<ffffffff810ebe79>] task_tick_fair+0x88/0x109
[ 11.140000] [<ffffffff8107a7e3>] ? ftrace_write+0x23/0x4c
[ 11.140000] [<ffffffff82d3485e>] ftrace_graph_caller+0x7e/0x7e
[ 11.140000] [<ffffffff810e54be>] scheduler_tick+0x57/0x89
[ 11.140000] [<ffffffff82d3485e>] ftrace_graph_caller+0x7e/0x7e
[ 11.140000] [<ffffffff811195c8>] update_process_times+0x56/0x65
[ 11.140000] [<ffffffff82d3485e>] ftrace_graph_caller+0x7e/0x7e
[ 11.140000] [<ffffffff811240dd>] tick_periodic+0xa1/0xad
[ 11.140000] [<ffffffff82d3485e>] ftrace_graph_caller+0x7e/0x7e
[ 11.140000] [<ffffffff8112427d>] tick_handle_periodic+0x26/0x60
[ 11.140000] [<ffffffff81089300>] ? __virt_addr_valid+0x66/0x8d
[ 11.140000] [<ffffffff82d3485e>] ftrace_graph_caller+0x7e/0x7e
[ 11.140000] [<ffffffff8107556a>] local_apic_timer_interrupt+0x54/0x57
[ 11.140000] [<ffffffff82d3485e>] ftrace_graph_caller+0x7e/0x7e
[ 11.140000] [<ffffffff82d3501d>] smp_apic_timer_interrupt+0x2b/0x3c
[ 11.140000] [<ffffffff82d33082>] apic_timer_interrupt+0x72/0x80
[ 11.140000] <EOI> [<ffffffff81080ac3>] ? pvclock_clocksource_read+0x76/0xa0
[ 11.140000] [<ffffffff81164d97>] ? trace_graph_entry+0x175/0x18b
[ 11.140000] [<ffffffff8108937b>] ? __phys_addr_symbol+0x4/0x24
[ 11.140000] [<ffffffff8107ff56>] kvm_clock_read+0x27/0x31
[ 11.140000] [<ffffffff81051ebc>] paravirt_sched_clock+0x9/0xd
[ 11.140000] [<ffffffff81052506>] sched_clock+0x9/0xb
[ 11.140000] [<ffffffff8114fda6>] trace_clock_local+0x11/0x1b
[ 11.140000] [<ffffffff81164916>] ftrace_push_return_trace+0x76/0x149
[ 11.140000] [<ffffffff8107a7e3>] ? ftrace_write+0x23/0x4c
[ 11.140000] [<ffffffff8108937b>] ? __phys_addr_symbol+0x4/0x24
[ 11.140000] [<ffffffff8107af30>] prepare_ftrace_return+0xa2/0xb5
[ 11.140000] [<ffffffff8108937b>] ? __phys_addr_symbol+0x4/0x24
[ 11.140000] [<ffffffff82d34833>] ftrace_graph_caller+0x53/0x7e
[ 11.140000] [<ffffffff810f9873>] ? trace_hardirqs_on_caller+0x18f/0x1ab
[ 11.140000] [<ffffffff81164f4a>] ? trace_graph_return+0x8e/0x97
[ 11.140000] [<ffffffff8107a7e3>] ? ftrace_write+0x23/0x4c
[ 11.140000] [<ffffffff811420f6>] ? audit_find_rule+0x12/0x108
[ 11.140000] [<ffffffff811420f1>] ? audit_find_rule+0xd/0x108
[ 11.140000] [<ffffffff81089380>] ? __phys_addr_symbol+0x9/0x24
[ 11.140000] [<ffffffff81089380>] ? __phys_addr_symbol+0x9/0x24
[ 11.140000] [<ffffffff82d3485e>] ftrace_graph_caller+0x7e/0x7e
[ 11.140000] [<ffffffff8107aca9>] ftrace_replace_code+0x1d6/0x331
[ 11.140000] [<ffffffff81164ebc>] ? trace_graph_function+0x78/0x78
[ 11.140000] [<ffffffff811525f1>] ftrace_modify_all_code+0x41/0xca
[ 11.140000] [<ffffffff8107ae19>] arch_ftrace_update_code+0x15/0x1e
[ 11.140000] [<ffffffff81152b2c>] ftrace_run_update_code+0x2b/0x1b2
[ 11.140000] [<ffffffff81164ebc>] ? trace_graph_function+0x78/0x78
[ 11.140000] [<ffffffff81152ce5>] ftrace_startup_enable+0x32/0x34
[ 11.140000] [<ffffffff81152e25>] ftrace_startup+0x13e/0x148
[ 11.140000] [<ffffffff8115443b>] register_ftrace_graph+0x235/0x252
[ 11.140000] [<ffffffff8115f982>] ? ftrace_dump+0x241/0x241
[ 11.140000] [<ffffffff8441060b>] trace_selftest_startup_function_graph+0x57/0xe3
[ 11.140000] [<ffffffff8115eeac>] register_tracer+0x165/0x288
[ 11.140000] [<ffffffff84410c87>] ? init_graph_debugfs+0x2f/0x2f
[ 11.140000] [<ffffffff84410ced>] init_graph_trace+0x66/0x68
[ 11.140000] [<ffffffff8100216d>] do_one_initcall+0xee/0x17e
[ 11.140000] [<ffffffff810db300>] ? parse_args+0xfc/0x2b8
[ 11.140000] [<ffffffff843e80bf>] kernel_init_freeable+0x1e2/0x26f
[ 11.140000] [<ffffffff82cfcab5>] ? rest_init+0xc9/0xc9
[ 11.140000] [<ffffffff82cfcac3>] kernel_init+0xe/0xdf
[ 11.140000] [<ffffffff82d3203c>] ret_from_fork+0x7c/0xb0
[ 11.140000] [<ffffffff82cfcab5>] ? rest_init+0xc9/0xc9
[ 11.140000] ---[ end trace 94c1e7868964d6b1 ]---
[ 11.140000] CPU: 0 PID: 1 Comm: swapper/0 Tainted: G W 3.17.0-rc7-01516-g8585242 #920
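The warning fires in the function_graph tracer's return path when the frame
pointer saved at function entry does not match the one seen at return. The
boot-time selftest above triggers it; the same tracer can also be exercised
at runtime through tracefs (a quick sketch, assuming debugfs is mounted at
the usual location and run as root):
----------------------------------------------------------------------------
#!/bin/bash
cd /sys/kernel/debug/tracing
echo function_graph > current_tracer   # start graph tracing
sleep 1
head -20 trace                         # sample the captured call graph
echo nop > current_tracer              # stop tracing again
----------------------------------------------------------------------------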
The "BUG: sleeping function called" in the parent commit "disappeared"
in commit de89e93 because it happens in much later time:
[ 322.056221] smpboot: CPU 1 is now offline
[ 322.081569] bnx2fc: CPU 1 offline: Remove Rx thread
[ 322.088300] CPU 1 offline: Remove Rx thread
mount.nfs: Connection timed out
run-parts: /etc/kernel-tests/99-trinity exited with return code 32
[ 362.720433] BUG: sleeping function called from invalid context at kernel/locking/mutex.c:583
[ 362.733241] in_atomic(): 1, irqs_disabled(): 0, pid: 6357, name: reboot
[ 362.734867] 2 locks held by reboot/6357:
[ 362.735995] #0: (rtnl_mutex){+.+.+.}, at: [<ffffffff829d5b5d>] rtnl_lock+0x17/0x19
[ 362.738773] #1: (rcu_read_lock){......}, at: [<ffffffff810dca0e>] rcu_lock_acquire+0x0/0x20
[ 362.741686] CPU: 0 PID: 6357 Comm: reboot Not tainted 3.17.0-rc6-01531-gd068b02 #1333
[ 362.750958] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
[ 362.752463] 0000000000000000 ffff88001858fa30 ffffffff82d0d3bf 00000000000018d5
[ 362.754976] ffff88001858fa50 ffffffff810e1e35 ffffffff83fc43c0 0000000000000000
[ 362.757523] ffff88001858fad0 ffffffff82d2e0f7 000000000052001f 0000000000000001
[ 362.760103] Call Trace:
[ 362.782376] [<ffffffff82d0d3bf>] dump_stack+0x4d/0x66
[ 362.793968] [<ffffffff810e1e35>] __might_sleep+0x125/0x12a
[ 362.795422] [<ffffffff82d2e0f7>] mutex_lock_nested+0x3e/0x3ec
[ 362.796958] [<ffffffff81d60f4e>] cxgbi_device_find_by_netdev+0x63/0x102
[ 362.798562] [<ffffffff81d60f4e>] ? cxgbi_device_find_by_netdev+0x63/0x102
[ 362.800309] [<ffffffff81d68648>] cxgbi_inet6addr_handler+0x3c/0x83
[ 362.801817] [<ffffffff810dc923>] notifier_call_chain+0x6d/0x93
[ 362.803274] [<ffffffff810dcdc7>] __atomic_notifier_call_chain+0x4c/0x79
[ 362.804991] [<ffffffff810dce08>] atomic_notifier_call_chain+0x14/0x16
[ 362.806572] [<ffffffff82b239b6>] inet6addr_notifier_call_chain+0x1b/0x1d
[ 362.808348] [<ffffffff82aefb5c>] addrconf_ifdown+0x28d/0x321
[ 362.809759] [<ffffffff82af0d7e>] addrconf_notify+0x717/0x7d5
[ 362.811192] [<ffffffff810c6ce5>] ? __local_bh_enable_ip+0xaf/0xb4
[ 362.812797] [<ffffffff82d30fe6>] ? _raw_spin_unlock_bh+0x35/0x38
[ 362.815980] [<ffffffff82af80bb>] ? fib6_run_gc+0xc4/0xcb
[ 362.817453] [<ffffffff810dc923>] notifier_call_chain+0x6d/0x93
[ 362.818944] [<ffffffff810dc923>] ? notifier_call_chain+0x6d/0x93
[ 362.820545] [<ffffffff810dc96d>] raw_notifier_call_chain+0x14/0x16
[ 362.822095] [<ffffffff829c52bb>] call_netdevice_notifiers_info+0x52/0x59
[ 362.823725] [<ffffffff829c52d5>] call_netdevice_notifiers+0x13/0x15
[ 362.825368] [<ffffffff829cc19e>] __dev_notify_flags+0x53/0x81
[ 362.826838] [<ffffffff829cc814>] dev_change_flags+0x4e/0x59
[ 362.828374] [<ffffffff82aa877a>] devinet_ioctl+0x267/0x566
[ 362.829731] [<ffffffff82aa91f7>] inet_ioctl+0x86/0xa2
[ 362.831085] [<ffffffff82aa91f7>] ? inet_ioctl+0x86/0xa2
[ 362.832482] [<ffffffff829b25cc>] sock_do_ioctl+0x25/0x42
[ 362.833941] [<ffffffff829b2ad8>] sock_ioctl+0x213/0x21f
[ 362.835346] [<ffffffff811e532f>] vfs_ioctl+0x18/0x34
[ 362.836692] [<ffffffff811e5b9a>] do_vfs_ioctl+0x39a/0x442
[ 362.838237] [<ffffffff811e5c99>] SyS_ioctl+0x57/0x79
[ 362.839545] [<ffffffff82d31d50>] tracesys+0xdd/0xe2
[ 364.099570] Unregister pv shared memory for cpu 0
[ 364.137326] reboot: Restarting system
[ 364.138391] reboot: machine restart
Thanks,
Fengguang
[x86, this_cpu_ptr] BUG: unable to handle kernel paging request at 000000000000d968
by Fengguang Wu
Hi Christoph,
FYI, we noticed the below changes on
git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit 89cbc76768c2fa4ed95545bf961f3a14ddfeed21 ("x86: Replace __get_cpu_var uses")
532d0d0690d1532 89cbc76768c2fa4ed95545bf9
--------------- -------------------------
0 +Inf% 0 ±50% dmesg.BUG:unable_to_handle_kernel
0 +Inf% 1 ± 0% last_state.running
0 +Inf% 1 ± 0% last_state.is_incomplete_run
run: /lkp/lkp/src/monitors/wrapper numa-vmstat {}
run: /lkp/lkp/src/monitors/wrapper numa-meminfo {}
run: /lkp/lkp/src/monitors/w
[ 45.378447] BUG: unable to handle kernel paging request at 000000000000d968
[ 45.379430] IP:PROGRESS CODE: V3020003 I0
Loading PEIM at 0x000FFFD57C0 EntryPoint=0x000FFFD673C
Thanks,
Fengguang
[x86, kaslr] PANIC: early exception 0e rip 10:ffffffff842122df error 0 cr2 ffffffff829c9798
by Fengguang Wu
Hi Kees,
There are a number of oopses that bisect to commit
82fa9637a2ba285bcc7c5050c73010b2c1b3d803 ("x86, kaslr: Select random
position from e820 maps"). Unfortunately they are mostly hard to
reproduce (often only 1 out of 100 boots is bad). However, this one is
very reproducible, so I hope it can serve as a good case for debugging
this issue.
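As a rough illustration of how good and bad commits are told apart at such
low failure rates, the sketch below boots a kernel 100 times with the
reproduction script from the end of this report (saved here as
reproduce.sh, a name of my choosing) and counts the failing boots:
----------------------------------------------------------------------------
#!/bin/bash
kernel=$1
fails=0
for i in $(seq 1 100); do
	# timeout covers boot hangs; the serial console goes to stdout
	timeout 300 ./reproduce.sh "$kernel" > boot-$i.log 2>&1
	grep -q 'PANIC: early exception' boot-$i.log && fails=$((fails + 1))
done
echo "$fails/100 boots hit the early exception"
----------------------------------------------------------------------------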
+------------------------------------------+------------+------------+---------------+
| | 5bfce5ef55 | 82fa9637a2 | next-20140919 |
+------------------------------------------+------------+------------+---------------+
| boot_successes | 1000 | 6 | 49 |
| boot_failures | 0 | 894 | 4 |
| PANIC:early_exception | 0 | 894 | 2 |
| BUG:kernel_boot_hang | 0 | 894 | 2 |
| general_protection_fault | 0 | 0 | 1 |
| RIP:__lock_acquire | 0 | 0 | 1 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 0 | 2 |
| backtrace:free_reserved_area | 0 | 0 | 1 |
| backtrace:free_init_pages | 0 | 0 | 1 |
| backtrace:populate_rootfs | 0 | 0 | 1 |
| backtrace:kernel_init_freeable | 0 | 0 | 1 |
| backtrace:kvm_get_tsc_khz | 0 | 0 | 2 |
| backtrace:kvmclock_init | 0 | 0 | 2 |
| BUG:unable_to_handle_kernel | 0 | 0 | 1 |
| Oops | 0 | 0 | 1 |
| RIP:setup_real_mode | 0 | 0 | 1 |
+------------------------------------------+------------+------------+---------------+
[ 0.000000] BRK [0x06bd4000, 0x06bd4fff] PGTABLE
[ 0.000000] BRK [0x06bd5000, 0x06bd5fff] PGTABLE
[ 0.000000] BRK [0x06bd6000, 0x06bd6fff] PGTABLE
PANIC: early exception 0e rip 10:ffffffff842122df error 0 cr2 ffffffff829c9798
[ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 3.12.0-rc4-00005-g82fa963 #1
PANIC: early exception 0e rip 10:ffffffff842126f6 error 0 cr2 ffffffff829c9798
BUG: kernel boot hang
Elapsed time: 305
qemu-system-x86_64 -cpu kvm64 -enable-kvm -kernel /kernel/x86_64-randconfig-ib1-09191856/82fa9637a2ba285bcc7c5050c73010b2c1b3d803/vmlinuz-3.12.0-rc4-00005-g82fa963 -append 'hung_task_panic=1 earlyprintk=ttyS0,115200 debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal root=/dev/ram0 rw link=/kbuild-tests/run-queue/kvm/x86_64-randconfig-ib1-09191856/next:master:82fa9637a2ba285bcc7c5050c73010b2c1b3d803:bisect-linux-5/.vmlinuz-82fa9637a2ba285bcc7c5050c73010b2c1b3d803-20140920200809-480-ivb41 branch=next/master BOOT_IMAGE=/kernel/x86_64-randconfig-ib1-09191856/82fa9637a2ba285bcc7c5050c73010b2c1b3d803/vmlinuz-3.12.0-rc4-00005-g82fa963 drbd.minor_count=8' -initrd /kernel-tests/initrd/yocto-minimal-x86_64.cgz -m 320 -smp 1 -net nic,vlan=1,model=e1000 -net user,vlan=1 -boot order=nc -no-reboot -watchdog i6300esb -rtc base=localtime -pidfile /dev/shm/kboot/pid-yocto-ivb41-24 -serial file:/dev/shm/kboot/serial-yocto-ivb41-24 -daemonize -display none -monitor null
git bisect start v3.14 v3.13 --
git bisect bad 494479038d97f1b9f76fc633a360a681acdf035c # 02:20 373- 19 Merge tag 'pinctrl-v3.14-2' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl
git bisect bad 1b17366d695c8ab03f98d0155357e97a427e1dce # 02:33 310- 11 Merge branch 'next' of git://git.kernel.org/pub/scm/linux/kernel/git/benh/powerpc
git bisect bad 60eaa0190f6b39dce18eb1975d9773ed8bc9a534 # 02:44 153- 7 Merge tag 'trace-3.14' of git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/linux-trace
git bisect good 74e8ee8262c3f93bbc41804037b43f07b95897bb # 03:21 900+ 7 Merge branch 'x86-platform-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
git bisect bad 02d0a752460ea5dab34ce36c9ddc9c682e846a0d # 03:33 273- 8 Merge branch 'i2c/for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux
git bisect bad 82b51734b4f228c76b6064b6e899d9d3d4c17c1a # 03:47 378- 16 Merge tag 'arm64-upstream' of git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux
git bisect bad f4bcd8ccddb02833340652e9f46f5127828eb79d # 03:57 158- 1 Merge branch 'x86-kaslr-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
git bisect good fab5669d556200c4dd119af705bff14085845d1e # 04:21 900+ 0 Merge branch 'x86-ras-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
git bisect good 7fe67a1180db49d41a3f764c379a08f8e31580ec # 05:31 900+ 4 Merge branch 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
git bisect bad 6145cfe394a7f138f6b64491c5663f97dba12450 # 05:50 3- 4 x86, kaslr: Raise the maximum virtual address to -1 GiB on x86_64
git bisect good 8ab3820fd5b2896d66da7bb2a906bc382e63e7bc # 20:01 900+ 0 x86, kaslr: Return location from decompress_kernel
git bisect bad 82fa9637a2ba285bcc7c5050c73010b2c1b3d803 # 20:13 0- 32 x86, kaslr: Select random position from e820 maps
git bisect good 5bfce5ef55cbe78ee2ee6e97f2e26a8a582008f3 # 22:04 900+ 0 x86, kaslr: Provide randomness functions
# first bad commit: [82fa9637a2ba285bcc7c5050c73010b2c1b3d803] x86, kaslr: Select random position from e820 maps
git bisect good 5bfce5ef55cbe78ee2ee6e97f2e26a8a582008f3 # 22:18 1000+ 0 x86, kaslr: Provide randomness functions
git bisect bad 6a10bca9b608df445baa23c3bfafc510d93d425b # 22:31 0- 4 Add linux-next specific files for 20140919
git bisect bad 46be7b73e82453447cd97b3440d523159eab09f8 # 22:44 169- 13 Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs
git bisect bad 6a10bca9b608df445baa23c3bfafc510d93d425b # 22:44 0- 4 Add linux-next specific files for 20140919
This script may reproduce the error.
----------------------------------------------------------------------------
#!/bin/bash
kernel=$1
kvm=(
qemu-system-x86_64
-cpu kvm64
-enable-kvm
-kernel $kernel
-m 320
-smp 1
-net nic,vlan=1,model=e1000
-net user,vlan=1
-boot order=nc
-no-reboot
-watchdog i6300esb
-rtc base=localtime
-serial stdio
-display none
-monitor null
)
append=(
hung_task_panic=1
earlyprintk=ttyS0,115200
debug
apic=debug
sysrq_always_enabled
rcupdate.rcu_cpu_stall_timeout=100
panic=-1
softlockup_panic=1
nmi_watchdog=panic
oops=panic
load_ramdisk=2
prompt_ramdisk=0
console=ttyS0,115200
console=tty0
vga=normal
root=/dev/ram0
rw
drbd.minor_count=8
)
"${kvm[@]}" --append "${append[*]}"
----------------------------------------------------------------------------
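Usage is simply ./reproduce.sh <path-to-bzImage>. In principle the whole
bisection above can also be driven unattended with git bisect run — a
minimal sketch, not our actual tooling; the build step and the log grep are
stand-ins:
----------------------------------------------------------------------------
#!/bin/bash
git bisect start v3.14 v3.13 --
git bisect run sh -c '
	make -j$(nproc) bzImage || exit 125          # 125 tells bisect to skip
	timeout 300 ./reproduce.sh arch/x86/boot/bzImage > boot.log 2>&1
	! grep -q "PANIC: early exception" boot.log  # non-zero exit = bad
'
----------------------------------------------------------------------------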
Thanks,
Fengguang
[x86] double fault: ffff [#1]
by Fengguang Wu
Hi Andi,
Here is another bisect result with a much earlier oops.
commit b8a868e9ea876a1b40020397305533c095921d7a
Author: Andi Kleen <ak(a)linux.intel.com>
AuthorDate: Wed Apr 23 13:26:20 2014 -0700
Commit: Andi Kleen <ak(a)linux.intel.com>
CommitDate: Fri Oct 3 15:19:56 2014 -0700
x86: Add support for rd/wr fs/gs base
...
v2: Change to save/restore GS instead of using swapgs
based on the value. Large scale changes.
Signed-off-by: Andi Kleen <ak(a)linux.intel.com>
Attached is the dmesg for the parent commit, too, to help confirm whether this is just noise.
+-------------------------------------------------------+------------+------------+------------------+
| | 598d570a05 | b8a868e9ea | v3.17-rc7_100409 |
+-------------------------------------------------------+------------+------------+------------------+
| boot_successes | 207 | 24 | 17 |
| boot_failures | 3 | 46 | 4 |
| BUG:kernel_boot_crashed | 3 | | |
| double_fault:ffff | 0 | 39 | 4 |
| RIP:trace_hardirqs_off_thunk | 0 | 22 | 1 |
| BUG:unable_to_handle_kernel | 0 | 17 | 1 |
| Oops | 0 | 15 | 1 |
| RIP:show_stack_log_lvl | 0 | 14 | 1 |
| Kernel_panic-not_syncing:Fatal_exception_in_interrupt | 0 | 20 | 2 |
| RIP:trace_hardirqs_off_caller | 0 | 6 | 1 |
| BUG:kernel_boot_hang | 0 | 23 | 2 |
| backtrace:rescuer_thread | 0 | 1 | |
+-------------------------------------------------------+------------+------------+------------------+
[ 0.212404] regulator-dummy: no parameters
[ 0.215602] NET: Registered protocol family 16
[ 0.215602] NET: Registered protocol family 16
[ 0.232049] double fault: ffff [#1]
[ 0.232049] double fault: ffff [#1] PREEMPT PREEMPT SMP SMP DEBUG_PAGEALLOCDEBUG_PAGEALLOC
[ 0.233000] CPU: 1 PID: 0 Comm: swapper/1 Not tainted 3.17.0-rc7-00004-gb8a868e #1
[ 0.233000] CPU: 1 PID: 0 Comm: swapper/1 Not tainted 3.17.0-rc7-00004-gb8a868e #1
[ 0.233000] task: ffff8800121ad500 ti: ffff88001234c000 task.ti: ffff88001234c000
[ 0.233000] task: ffff8800121ad500 ti: ffff88001234c000 task.ti: ffff88001234c000
[ 0.233000] RIP: 0010:[<ffffffff818273f5>]
[ 0.233000] RIP: 0010:[<ffffffff818273f5>] [<ffffffff818273f5>] trace_hardirqs_off_thunk+0x35/0x3c
[<ffffffff818273f5>] trace_hardirqs_off_thunk+0x35/0x3c
[ 0.233000] RSP: 0000:ffff880012500000 EFLAGS: 00010086
[ 0.233000] RSP: 0000:ffff880012500000 EFLAGS: 00010086
[ 0.233000] RAX: 0000000082767ff0 RBX: 0000000000000001 RCX: ffffffff82767ff0
[ 0.233000] RAX: 0000000082767ff0 RBX: 0000000000000001 RCX: ffffffff82767ff0
[ 0.233000] RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffffffff82769326
[ 0.233000] RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffffffff82769326
[ 0.233000] RBP: ffff880012500110 R08: 0000000000000000 R09: 0000000000000001
[ 0.233000] RBP: ffff880012500110 R08: 0000000000000000 R09: 0000000000000001
[ 0.233000] R10: ffff880012503e98 R11: ffffffff838c7fb0 R12: 0000000000013ec0
[ 0.233000] R10: ffff880012503e98 R11: ffffffff838c7fb0 R12: 0000000000013ec0
[ 0.233000] R13: 0000000000013ec0 R14: 0000000000000000 R15: 0000000000000000
[ 0.233000] R13: 0000000000013ec0 R14: 0000000000000000 R15: 0000000000000000
BUG: kernel boot hang
Elapsed time: 305
qemu-system-x86_64 -cpu kvm64 -enable-kvm -kernel /kernel/x86_64-randconfig-hsb0-10041007/b8a868e9ea876a1b40020397305533c095921d7a/vmlinuz-3.17.0-rc7-00004-gb8a868e -append 'hung_task_panic=1 earlyprintk=ttyS0,115200 debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal root=/dev/ram0 rw link=/kbuild-tests/run-queue/kvm/x86_64-randconfig-hsb0-10041007/linux-devel:devel-hourly-2014100409:b8a868e9ea876a1b40020397305533c095921d7a:bisect-linux-3/.vmlinuz-b8a868e9ea876a1b40020397305533c095921d7a-20141004114524-27-lkp-nex04 branch=linux-devel/devel-hourly-2014100409 BOOT_IMAGE=/kernel/x86_64-randconfig-hsb0-10041007/b8a868e9ea876a1b40020397305533c095921d7a/vmlinuz-3.17.0-rc7-00004-gb8a868e drbd.minor_count=8' -initrd /kernel-tests/initrd/quantal-core-x86_64.cgz -m 320 -smp 2 -net nic,vlan=1,model=e1000 -net user,vlan=1 -boot order=nc -no-reboot -watchdog i6300esb -rtc base=localtime -pidfile /dev/shm/kboot/pid-quantal-lkp-nex04-129 -serial file:/dev/shm/kboot/serial-quantal-lkp-nex04-129 -daemonize -display none -monitor null
git bisect start cd94c0185489f0cddcbfa80e86258bb4609b6e1d fe82dcec644244676d55a1384c958d5f67979adb --
git bisect good a3f8f887e2a36003026b5d3effdd418d507cf7c8 # 11:19 70+ 0 Merge 'spi/for-linus' into devel-hourly-2014100409
git bisect good 2e6b2ee6893da99e9a156087aee97f90b0c8429c # 11:26 70+ 0 Merge 'trace/for-next' into devel-hourly-2014100409
git bisect good 7ea38a3061ab82dcfb705c3f36f55ddd8c351c83 # 11:31 70+ 1 Merge 'usb/usb-next' into devel-hourly-2014100409
git bisect bad 6d3ac51e9688b5cc2d4a63ba5f2e9b057db75ac8 # 11:35 0- 24 Merge 'driver-core/driver-core-next' into devel-hourly-2014100409
git bisect bad a33db30bb2f4d53f325e1c465f594894f559843c # 11:39 0- 1 Merge 'ak/x86/fsgs-2' into devel-hourly-2014100409
git bisect bad b8a868e9ea876a1b40020397305533c095921d7a # 11:46 0- 23 x86: Add support for rd/wr fs/gs base
git bisect good a0b0be64599f50dc2c9fa85734026701221f186a # 11:53 70+ 1 x86: Naturally align the debug IST stack
git bisect good 598d570a05cd31500fb15a843a92f68ddb1b3618 # 11:59 70+ 0 x86: Add intrinsics/macros for new rd/wr fs/gs base instructions
# first bad commit: [b8a868e9ea876a1b40020397305533c095921d7a] x86: Add support for rd/wr fs/gs base
git bisect good 598d570a05cd31500fb15a843a92f68ddb1b3618 # 12:05 210+ 3 x86: Add intrinsics/macros for new rd/wr fs/gs base instructions
git bisect bad cd94c0185489f0cddcbfa80e86258bb4609b6e1d # 12:05 0- 4 0day head guard for 'devel-hourly-2014100409'
git bisect good 126d4576cb73c8a440adc37c129589cd66051bcc # 12:11 210+ 7 Merge branch 'i2c/for-current' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux
git bisect good 2e1d004b9645628c64a2db55ef6b81fadf5e6e91 # 12:19 210+ 2 Add linux-next specific files for 20141003
This script may reproduce the error.
----------------------------------------------------------------------------
#!/bin/bash
kernel=$1
kvm=(
qemu-system-x86_64
-cpu kvm64
-enable-kvm
-kernel $kernel
-m 320
-smp 2
-net nic,vlan=1,model=e1000
-net user,vlan=1
-boot order=nc
-no-reboot
-watchdog i6300esb
-rtc base=localtime
-serial stdio
-display none
-monitor null
)
append=(
hung_task_panic=1
earlyprintk=ttyS0,115200
debug
apic=debug
sysrq_always_enabled
rcupdate.rcu_cpu_stall_timeout=100
panic=-1
softlockup_panic=1
nmi_watchdog=panic
oops=panic
load_ramdisk=2
prompt_ramdisk=0
console=ttyS0,115200
console=tty0
vga=normal
root=/dev/ram0
rw
drbd.minor_count=8
)
"${kvm[@]}" --append "${append[*]}"
----------------------------------------------------------------------------
Thanks,
Fengguang
[perf/x86/intel] WARNING: CPU: 48 PID: 7540 at arch/x86/kernel/cpu/perf_event_intel_ds.c:316 reserve_ds_buffers+0x208/0x430()
by Fengguang Wu
Hi David,
FYI, we noticed the below changes on commit
4485154138f6ffa5b252cb490aba3e8eb30124e4 ("perf/x86/intel: Avoid spamming kernel log for BTS buffer failure")
+---------------------------------------------------------------------------+------------+------------+
| | 338b522ca4 | 4485154138 |
+---------------------------------------------------------------------------+------------+------------+
| boot_successes | 3 | 5 |
| boot_failures | 21 | 15 |
| page_allocation_failure:order:,mode | 11 | |
| backtrace:reserve_ds_buffers | 11 | 5 |
| backtrace:x86_pmu_event_init | 11 | 5 |
| backtrace:perf_init_event | 11 | 5 |
| backtrace:SYSC_perf_event_open | 11 | 5 |
| backtrace:SyS_perf_event_open | 11 | 5 |
| BUG:kernel_test_crashed | 10 | 10 |
| WARNING:at_arch/x86/kernel/cpu/perf_event_intel_ds.c:reserve_ds_buffers() | 0 | 5 |
+---------------------------------------------------------------------------+------------+------------+
2014-10-03 01:27:35 ./usemem --runtime 300 -t 64 -f /tmp/vm-scalability/sparse-mremap-xread-rand-mt-60 -E --prealloc --readonly --random 17179869184
2014-10-03 01:27:36 ./usemem --runtime 300 -t 64 -f /tmp/vm-scalability/sparse-mremap-xread-rand-mt-62 -E --prealloc --readonly --random 17179869184
[ 192.496567] ------------[ cut here ]------------
[ 192.505045] WARNING: CPU: 48 PID: 7540 at arch/x86/kernel/cpu/perf_event_intel_ds.c:316 reserve_ds_buffers+0x208/0x430()
[ 192.524168] alloc_bts_buffer: BTS buffer allocation failure
[ 192.533339] Modules linked in: loop ipmi_watchdog nfsd auth_rpcgss dm_mod fuse sg sd_mod crct10dif_generic sr_mod crc_t10dif cdrom crct10dif_common ata_generic pata_acpi mgag200 syscopyarea snd_pcm sysfillrect sysimgblt snd_timer ttm snd drm_kms_helper soundcore ata_piix pcspkr i7core_edac drm libata megaraid_sas i2c_i801 edac_core ipmi_si ipmi_msghandler acpi_cpufreq
[ 192.581450] CPU: 48 PID: 7540 Comm: perf Not tainted 3.16.0-rc2-00011-g4485154 #1
[ 192.592393] Hardware name: Quanta MP Server/QSSC-S4R, BIOS QSSC-S4R.QCI.01.00.0028.090220101116 09/02/2010
[ 192.606455] 0000000000000009 ffff88005de53ce8 ffffffff81844e55 ffff88005de53d30
[ 192.618370] ffff88005de53d20 ffffffff8106de6d 0000000000000000 0000000000010528
[ 192.630786] 0000000000000000 0000000000000000 0000000000000000 ffff88005de53d80
[ 192.642465] Call Trace:
[ 192.647913] [<ffffffff81844e55>] dump_stack+0x4d/0x66
[ 192.656643] [<ffffffff8106de6d>] warn_slowpath_common+0x7d/0xa0
[ 192.666961] [<ffffffff8106dedc>] warn_slowpath_fmt+0x4c/0x50
[ 192.677047] [<ffffffff81030863>] ? reserve_ds_buffers+0x2b3/0x430
[ 192.687420] [<ffffffff810307b8>] reserve_ds_buffers+0x208/0x430
[ 192.697838] [<ffffffff8118abec>] ? handle_mm_fault+0x8ac/0xf70
[ 192.707522] [<ffffffff8102a354>] x86_pmu_event_init+0x434/0x450
[ 192.717351] [<ffffffff8115241b>] perf_init_event+0xdb/0x140
[ 192.727292] [<ffffffff81152808>] perf_event_alloc+0x388/0x440
[ 192.738117] [<ffffffff81152c60>] SYSC_perf_event_open+0x3a0/0xd20
[ 192.750511] [<ffffffff81153979>] SyS_perf_event_open+0x9/0x10
[ 192.761657] [<ffffffff8184da29>] system_call_fastpath+0x16/0x1b
[ 192.771904] ---[ end trace 6d601571820ebee0 ]---
4331667464 bytes / 300293184 usecs = 14086 KB/s
Thanks,
Fengguang
[PCI/MSI] WARNING: CPU: 0 PID: 1 at drivers/pci/msi.c:1034 pci_enable_msi_range+0x2c1/0x2f0()
by Fengguang Wu
Hi Alexander,
FYI, we noticed the below changes on
git://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci.git pci/msi
commit 4ab2ee7c77875669230a797a3ed2228990f82545 ("PCI/MSI: Rename pci_msi_check_device() to pci_msi_supported()")
+-----------------------------------------------------+------------+------------+
| | 888215f389 | 4ab2ee7c77 |
+-----------------------------------------------------+------------+------------+
| boot_successes | 15 | 2 |
| early-boot-hang | 1 | |
| boot_failures | 0 | 13 |
| WARNING:at_drivers/pci/msi.c:pci_enable_msi_range() | 0 | 5 |
| WARNING:at_fs/sysfs/dir.c:sysfs_warn_dup() | 0 | 5 |
| backtrace:ip_auto_config | 0 | 5 |
| backtrace:kernel_init_freeable | 0 | 5 |
| BUG:kernel_test_crashed | 0 | 8 |
+-----------------------------------------------------+------------+------------+
[ 5.410055] evm: HMAC attrs: 0x1
[ 5.413893] rtc_cmos 00:01: setting system clock to 2014-09-25 00:13:24 UTC (1411604004)
[ 5.447469] ------------[ cut here ]------------
[ 5.452212] WARNING: CPU: 0 PID: 1 at drivers/pci/msi.c:1034 pci_enable_msi_range+0x2c1/0x2f0()
[ 5.462479] Modules linked in:
[ 5.465712] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.17.0-rc2-00011-g4ab2ee7 #1
[ 5.473467] Hardware name: Supermicro X7DB8/X7DB8, BIOS 6.00 04/28/2006
[ 5.480202] 0000000000000009 ffff880139ab7c00 ffffffff8185b75a 0000000000000000
[ 5.488043] ffff880139ab7c38 ffffffff8107056d 0000000000000001 ffff88007f8ec000
[ 5.495879] ffff88013974e098 ffff88013974e000 ffff88007f8ecdb8 ffff880139ab7c48
[ 5.503722] Call Trace:
[ 5.506293] [<ffffffff8185b75a>] dump_stack+0x4d/0x66
[ 5.511552] [<ffffffff8107056d>] warn_slowpath_common+0x7d/0xa0
[ 5.517675] [<ffffffff8107064a>] warn_slowpath_null+0x1a/0x20
[ 5.523631] [<ffffffff81449961>] pci_enable_msi_range+0x2c1/0x2f0
[ 5.529932] [<ffffffff815b490a>] e1000_open+0x22a/0x4b0
[ 5.535368] [<ffffffff8172d466>] __dev_open+0xb6/0x130
[ 5.540712] [<ffffffff8172d77d>] __dev_change_flags+0x9d/0x160
[ 5.546754] [<ffffffff8172d869>] dev_change_flags+0x29/0x70
[ 5.552534] [<ffffffff81e5707e>] ip_auto_config+0x1c4/0xe7f
[ 5.558314] [<ffffffff81e56eba>] ? root_nfs_parse_addr+0xba/0xba
[ 5.564529] [<ffffffff81405f1e>] ? kasprintf+0x3e/0x40
[ 5.569876] [<ffffffff81e56eba>] ? root_nfs_parse_addr+0xba/0xba
[ 5.576091] [<ffffffff81002130>] do_one_initcall+0xc0/0x1f0
[ 5.581871] [<ffffffff81002130>] ? do_one_initcall+0xc0/0x1f0
[ 5.587826] [<ffffffff8108f000>] ? parse_args+0x180/0x4e0
[ 5.593436] [<ffffffff81df511b>] kernel_init_freeable+0x1b3/0x23b
[ 5.599735] [<ffffffff8184ed70>] ? rest_init+0x90/0x90
[ 5.605079] [<ffffffff8184ed7e>] kernel_init+0xe/0xf0
[ 5.610341] [<ffffffff8186437c>] ret_from_fork+0x7c/0xb0
[ 5.615861] [<ffffffff8184ed70>] ? rest_init+0x90/0x90
[ 5.621207] ---[ end trace 3e09fff841f00690 ]---
[ 5.625958] e1000e 0000:06:00.0: irq 27 for MSI/MSI-X
Thanks,
Fengguang
[compiler/gcc4] a9f180345f5: -100.0% last_state.is_incomplete_run
by Fengguang Wu
Hi Steven,
FYI, we noticed that your commit a9f180345f5378ac87d80ed0bea55ba421d83859
("compiler/gcc4: Make quirk for asm_volatile_goto() unconditional") fixed
a number of machine boot failures in our LKP test farm. This is really helpful!
Our gcc version is 4.9.1 (Debian 4.9.1-11).
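Whether a tree already carries the unconditional quirk can be checked with
a quick grep of the compiler header (assuming a kernel source checkout in
$KSRC):
----------------------------------------------------------------------------
#!/bin/bash
# With the fix applied, the asm_volatile_goto() workaround is defined
# unconditionally rather than being gated on a gcc version check.
grep -n asm_volatile_goto "$KSRC"/include/linux/compiler-gcc4.h
----------------------------------------------------------------------------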
569d6557ab957d6 a9f180345f5378ac87d80ed0b
--------------- -------------------------
%stddev %change %stddev
\ | /
1 ± 0% -100.0% 0 ± 0% lkp-st02/dd-write/5m-11HDD-RAID5-cfq-btrfs-100dd
1 ± 0% -100.0% 0 ± 0% TOTAL dmesg.kernel_BUG_at_fs/nfs/pagelist.c
569d6557ab957d6 a9f180345f5378ac87d80ed0b
--------------- -------------------------
1 ± 0% -100.0% 0 ± 0% lkp-st02/dd-write/5m-11HDD-RAID5-cfq-btrfs-100dd
1 ± 0% -100.0% 0 ± 0% TOTAL dmesg.Kernel_panic-not_syncing:Fatal_exception
569d6557ab957d6 a9f180345f5378ac87d80ed0b
--------------- -------------------------
1 ± 0% -100.0% 0 ± 0% lkp-st02/dd-write/5m-11HDD-RAID5-cfq-btrfs-100dd
1 ± 0% -100.0% 0 ± 0% TOTAL dmesg.invalid_opcode
569d6557ab957d6 a9f180345f5378ac87d80ed0b
--------------- -------------------------
1 ± 0% -100.0% 0 ± 0% ivb42/will-it-scale/futex4
1 ± 0% -100.0% 0 ± 0% ivb44/fsmark/1x-1t-1HDD-xfs-4M-60G-NoSync
1 ± 0% -100.0% 0 ± 0% ivb44/fsmark/1x-64t-1BRD_48G-btrfs-4M-40G-fsyncBeforeClose
1 ± 0% -100.0% 0 ± 0% lkp-bdw01/blogbench/1SSD-btrfs
1 ± 0% -100.0% 0 ± 0% lkp-hsw01/vm-scalability/300s-anon-rx-rand-mt
1 ± 0% -100.0% 0 ± 0% lkp-sbx04/will-it-scale/futex3
1 ± 0% -100.0% 0 ± 0% lkp-sbx04/will-it-scale/page_fault3
1 ± 0% -100.0% 0 ± 0% lkp-st02/dd-write/5m-11HDD-RAID5-cfq-btrfs-100dd
8 ± 0% -100.0% 0 ± 0% TOTAL last_state.is_incomplete_run
569d6557ab957d6 a9f180345f5378ac87d80ed0b
--------------- -------------------------
1 ± 0% -100.0% 0 ± 0% ivb42/will-it-scale/futex4
1 ± 0% -100.0% 0 ± 0% ivb44/fsmark/1x-1t-1HDD-xfs-4M-60G-NoSync
1 ± 0% -100.0% 0 ± 0% ivb44/fsmark/1x-64t-1BRD_48G-btrfs-4M-40G-fsyncBeforeClose
1 ± 0% -100.0% 0 ± 0% lkp-bdw01/blogbench/1SSD-btrfs
1 ± 0% -100.0% 0 ± 0% lkp-hsw01/vm-scalability/300s-anon-rx-rand-mt
1 ± 0% -100.0% 0 ± 0% lkp-sbx04/will-it-scale/futex3
1 ± 0% -100.0% 0 ± 0% lkp-sbx04/will-it-scale/page_fault3
1 ± 0% -100.0% 0 ± 0% lkp-st02/dd-write/5m-11HDD-RAID5-cfq-btrfs-100dd
8 ± 0% -100.0% 0 ± 0% TOTAL last_state.booting
Thanks,
Fengguang
[mm/slab] BUG: unable to handle kernel paging request at 00010023
by Fengguang Wu
Hi Joonsoo,
0day kernel testing robot got the below dmesg and the first bad commit is
git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit 36fbfebe776eb5871d61e7a755c9feb1c96cc4aa
Author: Joonsoo Kim <iamjoonsoo.kim(a)lge.com>
AuthorDate: Tue Sep 23 11:52:35 2014 +1000
Commit: Stephen Rothwell <sfr(a)canb.auug.org.au>
CommitDate: Tue Sep 23 11:52:35 2014 +1000
mm/slab: support slab merge
Slab merge is a good feature to reduce fragmentation. If a newly created
slab cache has a similar size and properties to an existent one, this
feature reuses it rather than creating a new one. As a result, objects are
packed into fewer slabs, so fragmentation is reduced.
Below is the result of my testing.
* After boot, sleep 20; cat /proc/meminfo | grep Slab
<Before>
Slab: 25136 kB
<After>
Slab: 24364 kB
We can save 3% of the memory used by slab.
To support this feature in SLAB, we need to implement the SLAB-specific
kmem_cache_flag() and __kmem_cache_alias(), because SLUB implements some
SLUB-specific processing related to debug flags and object size changes in
these functions.
Signed-off-by: Joonsoo Kim <iamjoonsoo.kim(a)lge.com>
Cc: Christoph Lameter <cl(a)linux.com>
Cc: Pekka Enberg <penberg(a)kernel.org>
Cc: David Rientjes <rientjes(a)google.com>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
+----------------------------------------------+------------+------------+---------------+
| | 8c087489b8 | 36fbfebe77 | next-20140923 |
+----------------------------------------------+------------+------------+---------------+
| boot_successes | 60 | 0 | 1 |
| boot_failures | 0 | 20 | 314 |
| BUG:unable_to_handle_kernel | 0 | 20 | 312 |
| Oops | 0 | 20 | 312 |
| EIP_is_at_kernfs_link_sibling | 0 | 4 | 14 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 20 | 312 |
| backtrace:acpi_bus_scan | 0 | 4 | 14 |
| backtrace:acpi_scan_init | 0 | 20 | 45 |
| backtrace:acpi_init | 0 | 20 | 45 |
| backtrace:kernel_init_freeable | 0 | 20 | 312 |
| EIP_is_at_kernfs_add_one | 0 | 16 | 298 |
| backtrace:kobject_add_internal | 0 | 16 | 31 |
| backtrace:kobject_init_and_add | 0 | 16 | 31 |
| backtrace:acpi_scan_add_handler_with_hotplug | 0 | 16 | 31 |
| backtrace:acpi_pci_root_init | 0 | 16 | 31 |
| backtrace:tty_register_driver | 0 | 0 | 106 |
| backtrace:pty_init | 0 | 0 | 106 |
| backtrace:acpi_bus_register_driver | 0 | 0 | 1 |
| backtrace:acpi_button_driver_init | 0 | 0 | 1 |
| BUG:kernel_boot_crashed | 0 | 0 | 1 |
| BUG:kernel_test_crashed | 0 | 0 | 1 |
| backtrace:subsys_system_register | 0 | 0 | 160 |
| backtrace:container_dev_init | 0 | 0 | 160 |
| backtrace:driver_init | 0 | 0 | 160 |
+----------------------------------------------+------------+------------+---------------+
[ 0.463788] ACPI: (supports S0 S5)
[ 0.464003] ACPI: Using IOAPIC for interrupt routing
[ 0.464738] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
[ 0.466034] BUG: unable to handle kernel paging request at 00010023
[ 0.466989] IP: [<c117dcf9>] kernfs_add_one+0x89/0x130
[ 0.467812] *pdpt = 0000000000000000 *pde = f000ff53f000ff53
[ 0.468000] Oops: 0002 [#1] SMP
[ 0.468000] Modules linked in:
[ 0.468000] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 3.17.0-rc6-00089-g36fbfeb #1
[ 0.468000] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
[ 0.468000] task: d303ec90 ti: d3040000 task.ti: d3040000
[ 0.468000] EIP: 0060:[<c117dcf9>] EFLAGS: 00010286 CPU: 0
[ 0.468000] EIP is at kernfs_add_one+0x89/0x130
[ 0.468000] EAX: 542572cb EBX: 00010003 ECX: 00000008 EDX: 2c8de598
[ 0.468000] ESI: d311de10 EDI: d311de70 EBP: d3041dd8 ESP: d3041db4
[ 0.468000] DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0068
[ 0.468000] CR0: 8005003b CR2: 00010023 CR3: 01a8a000 CR4: 000006f0
[ 0.468000] Stack:
[ 0.468000] d3006f00 00000202 d311de70 d311de10 d3041dd8 c117dba0 d311de10 c159a5c0
[ 0.468000] c1862a00 d3041df0 c117f0f2 00000000 c18629f4 d311de70 00000000 d3041e2c
[ 0.468000] c117f8b5 00001000 00000000 c159a5c0 c18629f4 00000000 00000001 c1862a00
[ 0.468000] Call Trace:
[ 0.468000] [<c117dba0>] ? kernfs_new_node+0x30/0x40
[ 0.468000] [<c117f0f2>] __kernfs_create_file+0x92/0xc0
[ 0.468000] [<c117f8b5>] sysfs_add_file_mode_ns+0x95/0x190
[ 0.468000] [<c117f9d7>] sysfs_create_file_ns+0x27/0x40
[ 0.468000] [<c1252ef6>] kobject_add_internal+0x136/0x2c0
[ 0.468000] [<c125e360>] ? kvasprintf+0x40/0x50
[ 0.468000] [<c1252a92>] ? kobject_set_name_vargs+0x42/0x60
[ 0.468000] [<c12530b5>] kobject_init_and_add+0x35/0x50
[ 0.468000] [<c12ad04f>] acpi_sysfs_add_hotplug_profile+0x24/0x4a
[ 0.468000] [<c12a7280>] acpi_scan_add_handler_with_hotplug+0x21/0x28
[ 0.468000] [<c18df524>] acpi_pci_root_init+0x20/0x22
[ 0.468000] [<c18df0e1>] acpi_scan_init+0x24/0x16d
[ 0.468000] [<c18def73>] acpi_init+0x20c/0x224
[ 0.468000] [<c18ded67>] ? acpi_sleep_init+0xab/0xab
[ 0.468000] [<c100041e>] do_one_initcall+0x7e/0x1b0
[ 0.468000] [<c18ded67>] ? acpi_sleep_init+0xab/0xab
[ 0.468000] [<c18b24ba>] ? repair_env_string+0x12/0x54
[ 0.468000] [<c18b24a8>] ? initcall_blacklist+0x7c/0x7c
[ 0.468000] [<c105e100>] ? parse_args+0x160/0x3f0
[ 0.468000] [<c18b2bd1>] kernel_init_freeable+0xfc/0x179
[ 0.468000] [<c156782b>] kernel_init+0xb/0xd0
[ 0.468000] [<c1574601>] ret_from_kernel_thread+0x21/0x30
[ 0.468000] [<c1567820>] ? rest_init+0xb0/0xb0
[ 0.468000] Code: 26 00 83 e1 10 75 5b 8b 46 24 e8 b3 ea ff ff 89 46 38 89 f0 e8 d9 f9 ff ff 85 c0 89 c3 75 ca 8b 5f 5c 85 db 74 11 e8 97 90 f1 ff <89> 43 20 89 53 24 89 43 28 89 53 2c b8 c0 2f 85 c1 e8 d1 36 3f
[ 0.468000] EIP: [<c117dcf9>] kernfs_add_one+0x89/0x130 SS:ESP 0068:d3041db4
[ 0.468000] CR2: 0000000000010023
[ 0.468000] ---[ end trace 4fa173691404b63f ]---
[ 0.468000] Kernel panic - not syncing: Fatal exception
git bisect start 55f21306900abf9f9d2a087a127ff49c6d388ad2 0f33be009b89d2268e94194dc4fd01a7851b6d51 --
git bisect good 18c13e2d9b75e2760e6520f2fde00401192956f3 # 17:56 20+ 0 Merge remote-tracking branch 'bluetooth/master'
git bisect good abf79495f38ba66f750566b3f0a8da8dd94b4dc3 # 18:03 20+ 0 Merge remote-tracking branch 'ftrace/for-next'
git bisect good 0bed22034e26a3c37ee4407fccffa8c095d5e144 # 18:09 20+ 0 Merge remote-tracking branch 'pinctrl/for-next'
git bisect good 15c9281a15ed7718868d115d4d00619b0b7a2624 # 18:14 20+ 0 Merge remote-tracking branch 'clk/clk-next'
git bisect good 50939531dea1b913b7fa29f9bbc69feafefd090c # 18:23 20+ 0 Merge branch 'rd-docs/master'
git bisect good aa881e3c5e87c8aa23519f40554897d56f32b935 # 18:49 20+ 0 Merge remote-tracking branch 'powerpc-mpe/next'
git bisect bad 81b63d14db32bd7706c955d1e04e65b152b2277a # 18:57 0- 2 Merge branch 'akpm-current/current'
git bisect bad f313ca82d72066a3c44fd6c66cee57b25de43aa9 # 19:31 0- 1 introduce-dump_vma-fix
git bisect good 2c5fe9213048c5640b8e46407f5614038c03ad93 # 20:16 20+ 0 mm: fix kmemcheck.c build errors
git bisect bad 69454f8be7f621ac8c3c6c9763bb70e116988942 # 20:35 0- 18 block_dev: implement readpages() to optimize sequential read
git bisect bad 66a31d528a1e3d483be2b1c993ec1268412f0074 # 20:40 0- 1 memory-hotplug-add-sysfs-zones_online_to-attribute-fix-2
git bisect good 5e8acb68610c077b08cb3f16305aa3cc22e5d2a8 # 21:07 20+ 0 kernel/kthread.c: partial revert of 81c98869faa5 ("kthread: ensure locality of task_struct allocations")
git bisect bad 36fbfebe776eb5871d61e7a755c9feb1c96cc4aa # 22:06 0- 10 mm/slab: support slab merge
git bisect good 11e57381eced875ef5a6fea4005fdf72b6f68eff # 22:19 20+ 0 mm/slab_common: commonize slab merge logic
git bisect good 8c087489b8a32b9235f7f9417390c62d93aba522 # 22:24 20+ 0 mm/slab_common: fix build failure if CONFIG_SLUB
# first bad commit: [36fbfebe776eb5871d61e7a755c9feb1c96cc4aa] mm/slab: support slab merge
git bisect good 8c087489b8a32b9235f7f9417390c62d93aba522 # 22:26 60+ 0 mm/slab_common: fix build failure if CONFIG_SLUB
git bisect bad 55f21306900abf9f9d2a087a127ff49c6d388ad2 # 22:26 0- 314 Add linux-next specific files for 20140923
git bisect good f4cb707e7ad9727a046b463232f2de166e327d3e # 22:32 60+ 0 Merge tag 'pm+acpi-3.17-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
git bisect bad 4d8426f9ac601db2a64fa7be64051d02b9c9fe01 # 22:36 0- 60 Add linux-next specific files for 20140926
This script may reproduce the error.
----------------------------------------------------------------------------
#!/bin/bash
kernel=$1
kvm=(
qemu-system-x86_64
-cpu kvm64
-enable-kvm
-kernel $kernel
-m 320
-smp 1
-net nic,vlan=1,model=e1000
-net user,vlan=1
-boot order=nc
-no-reboot
-watchdog i6300esb
-rtc base=localtime
-serial stdio
-display none
-monitor null
)
append=(
hung_task_panic=1
earlyprintk=ttyS0,115200
debug
apic=debug
sysrq_always_enabled
rcupdate.rcu_cpu_stall_timeout=100
panic=-1
softlockup_panic=1
nmi_watchdog=panic
oops=panic
load_ramdisk=2
prompt_ramdisk=0
console=ttyS0,115200
console=tty0
vga=normal
root=/dev/ram0
rw
drbd.minor_count=8
)
"${kvm[@]}" --append "${append[*]}"
----------------------------------------------------------------------------
Thanks,
Fengguang
[net] 4ed2d765dfa: ltp.recv01.4.TFAIL
by Fengguang Wu
Hi Willem,
FYI, we noticed the below LTP failures on commit
4ed2d765dfaccff5ebdac68e2064b59125033a3b ("net-timestamp: TCP timestamping")
test case: nhm-white/ltp/syscalls
nhm-white is a Nehalem box with 6G memory.
e7fd2885385157d 4ed2d765dfaccff5ebdac68e2
--------------- -------------------------
0 +Inf% 1 ± 0% TOTAL ltp.recv01.4.TFAIL
0 +Inf% 1 ± 0% TOTAL ltp.recvfrom01.4.TFAIL
0 +Inf% 1 ± 0% TOTAL ltp.recvfrom01.6.TFAIL
0 +Inf% 1 ± 0% TOTAL ltp.recvmsg01.9.TFAIL
ltp.recvmsg01.9.TFAIL
1 O+-O-O--O-O--O--O-O--O-O--O----O--O-O--O-O--O----O--O-O--O--O-O--O-O--O
| |
| |
0.8 ++ |
| |
| |
0.6 ++ |
| |
0.4 ++ |
| |
| |
0.2 ++ |
| |
| |
0 ++---------------------------O-----------------O----------------------+
[*] bisect-good sample
[O] bisect-bad sample
To reproduce:
apt-get install ruby ruby-oj
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/setup-local job.yaml # the job file attached in this email
bin/run-local job.yaml
Thanks,
Fengguang
[loopback] b17c706987f: +175.7% netperf.Throughput_Mbps
by Fengguang Wu
Hi Daniel,
FYI, we noticed a nice performance improvement in commit
b17c706987fa6f28bdc1771c8266e7a69e22adcb ("loopback: sctp: add NETIF_F_SCTP_CSUM to device features")
test case: lkp-nex04/netperf/300s-200%-10K-SCTP_STREAM_MANY
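The commit adds NETIF_F_SCTP_CSUM to the loopback device's features, so
SCTP checksums are no longer computed in software on lo (packets never
leave the host). The flag can be verified from userspace — a quick check,
assuming ethtool is installed:
----------------------------------------------------------------------------
#!/bin/bash
ethtool -k lo | grep -i sctp
# expected on a patched kernel: tx-checksum-sctp: on
----------------------------------------------------------------------------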
72f8e06f3ea022d b17c706987fa6f28bdc1771c8
--------------- -------------------------
%stddev %change %stddev
\ | /
664 ± 0% +175.7% 1832 ± 0% TOTAL netperf.Throughput_Mbps
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[63]:/.nr_running
947669 ± 2% +681.8% 7408572 ± 1% TOTAL sched_debug.cfs_rq[63]:/.min_vruntime
19701 ± 3% +2814.0% 574098 ± 1% TOTAL sched_debug.cpu#63.ttwu_local
41754 ± 1% -99.5% 200 ±43% TOTAL softirqs.HRTIMER
5 ±20% +400.0% 29 ± 2% TOTAL sched_debug.cpu#63.cpu_load[4]
2.59 ± 1% -100.0% 0.00 ± 0% TOTAL perf-profile.cpu-cycles.intel_idle.cpuidle_enter_state.cpuidle_idle_call.arch_cpu_idle.cpu_startup_entry
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#63.nr_running
72 ±48% -95.8% 3 ±42% TOTAL sched_debug.cfs_rq[62]:/.blocked_load_avg
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[62]:/.nr_running
0.24 ± 7% +2565.6% 6.50 ± 4% TOTAL perf-profile.cpu-cycles.sctp_transport_timeout.sctp_transport_reset_timers.sctp_outq_flush.sctp_outq_uncork.sctp_cmd_interpreter
0.22 ± 3% +1442.3% 3.42 ± 2% TOTAL perf-profile.cpu-cycles._raw_spin_lock_irqsave.get_page_from_freelist.__alloc_pages_nodemask.kmalloc_large_node.__kmalloc_node_track_caller
0.27 ± 6% +1100.0% 3.29 ± 2% TOTAL perf-profile.cpu-cycles._raw_spin_lock.free_one_page.__free_pages_ok.__free_pages.__free_memcg_kmem_pages
0.04 ±10% +8463.2% 3.25 ± 2% TOTAL perf-profile.cpu-cycles.lock_timer_base.isra.35.mod_timer.sctp_transport_reset_timers.sctp_outq_flush.sctp_outq_uncork
0.04 ±13% +6227.8% 2.28 ± 3% TOTAL perf-profile.cpu-cycles.memcpy.sctp_packet_transmit_chunk.sctp_outq_flush.sctp_outq_uncork.sctp_cmd_interpreter
0.00 +Inf% 1.97 ± 8% TOTAL perf-profile.cpu-cycles._raw_spin_lock_irqsave.mod_timer.sctp_transport_reset_timers.sctp_outq_flush.sctp_outq_uncork
11 ±44% +9151.8% 1036 ±31% TOTAL sched_debug.cfs_rq[62]:/.nr_spread_over
1.15 ± 7% -94.4% 0.06 ± 7% TOTAL perf-profile.cpu-cycles._raw_spin_lock_bh.lock_sock_nested.sctp_recvmsg.sock_common_recvmsg.sock_recvmsg
33688875 ± 2% -89.6% 3504734 ±18% TOTAL cpuidle.C1-NHM.time
281217 ± 2% -90.9% 25698 ±41% TOTAL cpuidle.C1-NHM.usage
45558795 ± 0% -99.9% 27898 ±39% TOTAL cpuidle.C3-NHM.usage
39.61 ± 0% -99.7% 0.11 ±30% TOTAL turbostat.%c1
5.60 ± 0% -94.0% 0.34 ± 1% TOTAL turbostat.%c3
876992 ± 3% +736.3% 7333987 ± 1% TOTAL sched_debug.cfs_rq[62]:/.min_vruntime
19686 ± 2% +2810.9% 573039 ± 1% TOTAL sched_debug.cpu#62.ttwu_local
5 ±25% +403.4% 29 ± 3% TOTAL sched_debug.cpu#62.cpu_load[4]
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#62.nr_running
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[61]:/.nr_running
8 ±22% +13481.4% 1168 ±36% TOTAL sched_debug.cfs_rq[61]:/.nr_spread_over
899343 ± 6% +715.7% 7335525 ± 2% TOTAL sched_debug.cfs_rq[61]:/.min_vruntime
19673 ± 4% +2790.5% 568655 ± 2% TOTAL sched_debug.cpu#61.ttwu_local
989853 ± 2% -99.2% 7480 ±48% TOTAL sched_debug.cpu#61.sched_goidle
4 ±40% +581.0% 28 ± 3% TOTAL sched_debug.cpu#61.cpu_load[4]
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#61.nr_running
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[60]:/.nr_running
893662 ± 5% +724.8% 7370847 ± 0% TOTAL sched_debug.cfs_rq[60]:/.min_vruntime
19607 ± 3% +2827.4% 573986 ± 1% TOTAL sched_debug.cpu#60.ttwu_local
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#60.nr_running
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[59]:/.nr_running
86308 ±18% +7142.5% 6250836 ± 8% TOTAL sched_debug.cfs_rq[59]:/.max_vruntime
899110 ± 4% +717.5% 7350607 ± 0% TOTAL sched_debug.cfs_rq[59]:/.min_vruntime
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#0.nr_running
86308 ±18% +7142.5% 6250836 ± 8% TOTAL sched_debug.cfs_rq[59]:/.MIN_vruntime
20221 ± 4% +2739.2% 574120 ± 1% TOTAL sched_debug.cpu#59.ttwu_local
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#59.nr_running
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[58]:/.nr_running
370050 ± 3% -98.8% 4378 ±24% TOTAL sched_debug.cpu#0.sched_goidle
32479 ± 4% +1698.6% 584171 ± 1% TOTAL sched_debug.cpu#0.ttwu_local
947194 ± 4% +676.5% 7354983 ± 1% TOTAL sched_debug.cfs_rq[58]:/.min_vruntime
19 ±12% +26074.0% 5025 ±32% TOTAL sched_debug.cfs_rq[0]:/.nr_spread_over
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[0]:/.nr_running
20721 ± 4% +2663.3% 572601 ± 1% TOTAL sched_debug.cpu#58.ttwu_local
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#58.nr_running
0 ± 0% +Inf% 1 ±33% TOTAL sched_debug.cfs_rq[57]:/.nr_running
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#1.nr_running
964313 ± 4% +662.7% 7355115 ± 0% TOTAL sched_debug.cfs_rq[57]:/.min_vruntime
21834 ± 6% +2531.4% 574540 ± 1% TOTAL sched_debug.cpu#57.ttwu_local
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#57.nr_running
354246 ± 1% -98.7% 4745 ±38% TOTAL sched_debug.cpu#1.sched_goidle
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[56]:/.nr_running
28829 ± 5% +1914.4% 580717 ± 0% TOTAL sched_debug.cpu#1.ttwu_local
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[1]:/.nr_running
960131 ± 3% +667.5% 7368750 ± 1% TOTAL sched_debug.cfs_rq[56]:/.min_vruntime
21418 ± 2% +2584.0% 574857 ± 0% TOTAL sched_debug.cpu#56.ttwu_local
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#2.nr_running
4 ±17% +534.8% 29 ± 3% TOTAL sched_debug.cpu#56.cpu_load[4]
5 ±18% +433.3% 28 ± 2% TOTAL sched_debug.cpu#56.cpu_load[3]
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#56.nr_running
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[55]:/.nr_running
71260 ±42% +8857.8% 6383312 ± 6% TOTAL sched_debug.cfs_rq[55]:/.max_vruntime
935340 ± 4% +686.3% 7354731 ± 1% TOTAL sched_debug.cfs_rq[55]:/.min_vruntime
363420 ± 1% -98.9% 3993 ±40% TOTAL sched_debug.cpu#2.sched_goidle
71260 ±42% +8857.8% 6383312 ± 6% TOTAL sched_debug.cfs_rq[55]:/.MIN_vruntime
19268 ± 3% +2867.0% 571688 ± 2% TOTAL sched_debug.cpu#55.ttwu_local
29222 ± 4% +1889.9% 581489 ± 0% TOTAL sched_debug.cpu#2.ttwu_local
5 ±33% +457.7% 29 ± 2% TOTAL sched_debug.cpu#55.cpu_load[4]
21 ±10% +5539.8% 1218 ±19% TOTAL sched_debug.cfs_rq[2]:/.nr_spread_over
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[2]:/.nr_running
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#55.nr_running
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[54]:/.nr_running
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#3.nr_running
45064 ±18% +14562.1% 6607460 ± 5% TOTAL sched_debug.cfs_rq[54]:/.max_vruntime
934539 ± 6% +685.1% 7337176 ± 1% TOTAL sched_debug.cfs_rq[54]:/.min_vruntime
45064 ±18% +14562.1% 6607460 ± 5% TOTAL sched_debug.cfs_rq[54]:/.MIN_vruntime
19262 ± 6% +2872.1% 572494 ± 1% TOTAL sched_debug.cpu#54.ttwu_local
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#54.nr_running
356613 ± 1% -99.2% 2934 ±29% TOTAL sched_debug.cpu#3.sched_goidle
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[53]:/.nr_running
27577 ± 5% +2003.9% 580222 ± 1% TOTAL sched_debug.cpu#3.ttwu_local
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[3]:/.nr_running
933523 ± 4% +686.2% 7339367 ± 1% TOTAL sched_debug.cfs_rq[53]:/.min_vruntime
19832 ± 3% +2803.2% 575758 ± 1% TOTAL sched_debug.cpu#53.ttwu_local
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#4.nr_running
5 ±46% +410.7% 28 ± 4% TOTAL sched_debug.cpu#53.cpu_load[3]
6 ±47% +354.8% 28 ± 5% TOTAL sched_debug.cpu#53.cpu_load[0]
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#53.nr_running
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[52]:/.nr_running
923091 ± 3% +698.0% 7366148 ± 2% TOTAL sched_debug.cfs_rq[52]:/.min_vruntime
348410 ± 2% -98.8% 4230 ±39% TOTAL sched_debug.cpu#4.sched_goidle
19701 ± 3% +2820.0% 575293 ± 1% TOTAL sched_debug.cpu#52.ttwu_local
26645 ± 4% +2076.8% 580019 ± 1% TOTAL sched_debug.cpu#4.ttwu_local
178678 ±25% +3627.5% 6660167 ± 3% TOTAL sched_debug.cfs_rq[4]:/.MIN_vruntime
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#52.nr_running
178678 ±25% +3627.5% 6660167 ± 3% TOTAL sched_debug.cfs_rq[4]:/.max_vruntime
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[4]:/.nr_running
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[51]:/.nr_running
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#5.nr_running
929685 ± 4% +688.0% 7325719 ± 1% TOTAL sched_debug.cfs_rq[51]:/.min_vruntime
19761 ± 4% +2813.8% 575800 ± 1% TOTAL sched_debug.cpu#51.ttwu_local
5 ±37% +461.5% 29 ± 2% TOTAL sched_debug.cpu#51.cpu_load[4]
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#51.nr_running
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[50]:/.nr_running
26189 ± 3% +2110.9% 579032 ± 0% TOTAL sched_debug.cpu#5.ttwu_local
963932 ± 3% +657.4% 7301196 ± 1% TOTAL sched_debug.cfs_rq[50]:/.min_vruntime
23557 ± 2% +2326.4% 571608 ± 2% TOTAL sched_debug.cpu#23.ttwu_local
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[5]:/.nr_running
20098 ± 5% +2752.9% 573394 ± 1% TOTAL sched_debug.cpu#50.ttwu_local
6 ±34% +380.0% 28 ± 1% TOTAL sched_debug.cpu#50.cpu_load[4]
6 ±35% +343.8% 28 ± 1% TOTAL sched_debug.cpu#50.cpu_load[3]
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#6.nr_running
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#50.nr_running
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[49]:/.nr_running
16 ± 8% +4700.0% 768 ±49% TOTAL sched_debug.cfs_rq[49]:/.nr_spread_over
25690 ± 2% +2152.9% 578774 ± 1% TOTAL sched_debug.cpu#6.ttwu_local
988774 ± 1% +643.1% 7347606 ± 1% TOTAL sched_debug.cfs_rq[49]:/.min_vruntime
17 ±16% +6484.7% 1119 ±36% TOTAL sched_debug.cfs_rq[6]:/.nr_spread_over
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[6]:/.nr_running
21655 ± 3% +2554.9% 574933 ± 1% TOTAL sched_debug.cpu#49.ttwu_local
917931 ± 0% -99.2% 7364 ±47% TOTAL sched_debug.cpu#49.sched_goidle
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#49.nr_running
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#7.nr_running
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[48]:/.nr_running
963896 ± 5% +662.9% 7353616 ± 1% TOTAL sched_debug.cfs_rq[48]:/.min_vruntime
20962 ± 5% +2638.5% 574061 ± 1% TOTAL sched_debug.cpu#48.ttwu_local
342563 ± 2% -99.0% 3297 ±40% TOTAL sched_debug.cpu#7.sched_goidle
25834 ± 4% +2138.5% 578300 ± 0% TOTAL sched_debug.cpu#7.ttwu_local
4 ±48% +500.0% 28 ± 1% TOTAL sched_debug.cpu#48.cpu_load[3]
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[7]:/.nr_running
4 ±44% +513.0% 28 ± 1% TOTAL sched_debug.cpu#48.cpu_load[0]
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#48.nr_running
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#8.nr_running
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[47]:/.nr_running
1026500 ± 4% +606.8% 7255532 ± 1% TOTAL sched_debug.cfs_rq[47]:/.min_vruntime
21776 ± 3% +2530.7% 572873 ± 1% TOTAL sched_debug.cpu#47.ttwu_local
940486 ± 1% -99.3% 6905 ±46% TOTAL sched_debug.cpu#47.sched_goidle
375887 ± 2% -98.8% 4502 ±40% TOTAL sched_debug.cpu#8.sched_goidle
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#47.nr_running
46 ±43% -94.0% 2 ±41% TOTAL sched_debug.cfs_rq[46]:/.blocked_load_avg
27667 ± 3% +1978.0% 574926 ± 1% TOTAL sched_debug.cpu#8.ttwu_local
6 ±34% +370.0% 28 ± 1% TOTAL sched_debug.cfs_rq[46]:/.runnable_load_avg
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[46]:/.nr_running
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[8]:/.nr_running
973194 ± 3% +646.5% 7264433 ± 1% TOTAL sched_debug.cfs_rq[46]:/.min_vruntime
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#9.nr_running
22161 ± 2% +2482.1% 572230 ± 1% TOTAL sched_debug.cpu#46.ttwu_local
4 ±15% +525.0% 30 ± 2% TOTAL sched_debug.cpu#46.cpu_load[4]
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#46.nr_running
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[45]:/.nr_running
359971 ± 2% -99.1% 3085 ±41% TOTAL sched_debug.cpu#9.sched_goidle
1018595 ± 4% +611.1% 7243421 ± 1% TOTAL sched_debug.cfs_rq[45]:/.min_vruntime
27131 ± 2% +2017.8% 574580 ± 1% TOTAL sched_debug.cpu#9.ttwu_local
134404 ±31% +4758.2% 6529609 ± 8% TOTAL sched_debug.cfs_rq[9]:/.MIN_vruntime
22294 ± 4% +2467.1% 572326 ± 1% TOTAL sched_debug.cpu#45.ttwu_local
134404 ±31% +4758.2% 6529609 ± 8% TOTAL sched_debug.cfs_rq[9]:/.max_vruntime
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[9]:/.nr_running
5 ±22% +455.6% 30 ± 2% TOTAL sched_debug.cpu#45.cpu_load[4]
0 ± 0% +Inf% 2 ±18% TOTAL sched_debug.cpu#45.nr_running
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[44]:/.nr_running
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#10.nr_running
360383 ± 1% -98.8% 4421 ±45% TOTAL sched_debug.cpu#10.sched_goidle
27338 ± 3% +2000.0% 574125 ± 1% TOTAL sched_debug.cpu#10.ttwu_local
1021430 ± 4% +609.5% 7247197 ± 1% TOTAL sched_debug.cfs_rq[44]:/.min_vruntime
231918 ±24% +2549.3% 6144271 ± 8% TOTAL sched_debug.cfs_rq[10]:/.MIN_vruntime
231918 ±24% +2549.3% 6144271 ± 8% TOTAL sched_debug.cfs_rq[10]:/.max_vruntime
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[10]:/.nr_running
22562 ± 3% +2442.1% 573549 ± 1% TOTAL sched_debug.cpu#44.ttwu_local
5 ±25% +492.0% 29 ± 1% TOTAL sched_debug.cpu#44.cpu_load[4]
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#11.nr_running
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#44.nr_running
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[43]:/.nr_running
1019742 ± 2% +614.0% 7280699 ± 1% TOTAL sched_debug.cfs_rq[43]:/.min_vruntime
366743 ± 3% -98.8% 4544 ±48% TOTAL sched_debug.cpu#11.sched_goidle
22781 ± 3% +2411.2% 572090 ± 1% TOTAL sched_debug.cpu#43.ttwu_local
27283 ± 5% +1994.9% 571569 ± 1% TOTAL sched_debug.cpu#11.ttwu_local
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#43.nr_running
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[11]:/.nr_running
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[42]:/.nr_running
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#12.nr_running
1013625 ± 3% +610.3% 7199592 ± 1% TOTAL sched_debug.cfs_rq[42]:/.min_vruntime
23285 ± 2% +2357.3% 572199 ± 1% TOTAL sched_debug.cpu#42.ttwu_local
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#42.nr_running
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[41]:/.nr_running
363201 ± 5% -99.2% 3010 ±32% TOTAL sched_debug.cpu#12.sched_goidle
1026049 ± 2% +605.1% 7234511 ± 2% TOTAL sched_debug.cfs_rq[41]:/.min_vruntime
26558 ± 1% +2059.8% 573625 ± 1% TOTAL sched_debug.cpu#12.ttwu_local
24712 ± 3% +2225.8% 574772 ± 1% TOTAL sched_debug.cpu#41.ttwu_local
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[12]:/.nr_running
908047 ± 0% -99.3% 6771 ±46% TOTAL sched_debug.cpu#41.sched_goidle
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#41.nr_running
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[40]:/.nr_running
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#13.nr_running
1045087 ± 1% +592.6% 7238603 ± 1% TOTAL sched_debug.cfs_rq[40]:/.min_vruntime
23905 ± 4% +2300.9% 573931 ± 1% TOTAL sched_debug.cpu#40.ttwu_local
362073 ± 4% -99.0% 3783 ±43% TOTAL sched_debug.cpu#13.sched_goidle
6 ±45% +383.3% 29 ± 0% TOTAL sched_debug.cpu#40.cpu_load[4]
27170 ± 6% +2006.6% 572362 ± 1% TOTAL sched_debug.cpu#13.ttwu_local
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#40.nr_running
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[13]:/.nr_running
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[39]:/.nr_running
956859 ± 2% +664.8% 7318047 ± 1% TOTAL sched_debug.cfs_rq[39]:/.min_vruntime
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#14.nr_running
22156 ± 6% +2515.2% 579425 ± 1% TOTAL sched_debug.cpu#39.ttwu_local
5 ±20% +400.0% 29 ± 2% TOTAL sched_debug.cpu#39.cpu_load[4]
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#39.nr_running
342979 ± 2% -99.1% 3153 ±32% TOTAL sched_debug.cpu#14.sched_goidle
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[38]:/.nr_running
25831 ± 2% +2108.8% 570574 ± 1% TOTAL sched_debug.cpu#14.ttwu_local
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[14]:/.nr_running
998185 ± 4% +630.6% 7292251 ± 1% TOTAL sched_debug.cfs_rq[38]:/.min_vruntime
21969 ± 3% +2536.5% 579224 ± 1% TOTAL sched_debug.cpu#38.ttwu_local
0 ± 0% +Inf% 2 ±18% TOTAL sched_debug.cpu#15.nr_running
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#38.nr_running
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[37]:/.nr_running
8 ±22% +7077.3% 631 ±47% TOTAL sched_debug.cfs_rq[37]:/.nr_spread_over
976930 ± 2% +653.4% 7359797 ± 1% TOTAL sched_debug.cfs_rq[37]:/.min_vruntime
365758 ± 2% -98.9% 3877 ±47% TOTAL sched_debug.cpu#15.sched_goidle
22211 ± 5% +2503.0% 578152 ± 0% TOTAL sched_debug.cpu#37.ttwu_local
25988 ± 4% +2107.9% 573784 ± 1% TOTAL sched_debug.cpu#15.ttwu_local
4 ±23% +559.1% 29 ± 2% TOTAL sched_debug.cpu#37.cpu_load[4]
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[15]:/.nr_running
5 ±14% +442.3% 28 ± 2% TOTAL sched_debug.cpu#37.cpu_load[3]
5 ± 9% +414.8% 27 ± 2% TOTAL sched_debug.cpu#37.cpu_load[2]
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#37.nr_running
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[36]:/.nr_running
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#16.nr_running
10 ±17% +6420.8% 691 ±26% TOTAL sched_debug.cfs_rq[36]:/.nr_spread_over
951225 ± 5% +672.6% 7349084 ± 1% TOTAL sched_debug.cfs_rq[36]:/.min_vruntime
22668 ± 5% +2450.8% 578229 ± 1% TOTAL sched_debug.cpu#36.ttwu_local
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#36.nr_running
25231 ± 3% +2177.7% 574692 ± 1% TOTAL sched_debug.cpu#16.ttwu_local
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[35]:/.nr_running
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[16]:/.nr_running
969215 ± 3% +658.8% 7354084 ± 2% TOTAL sched_debug.cfs_rq[35]:/.min_vruntime
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#17.nr_running
22948 ± 6% +2424.6% 579345 ± 1% TOTAL sched_debug.cpu#35.ttwu_local
4 ±44% +615.0% 28 ± 3% TOTAL sched_debug.cpu#35.cpu_load[4]
4 ±48% +518.2% 27 ± 4% TOTAL sched_debug.cpu#35.cpu_load[0]
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#35.nr_running
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[34]:/.nr_running
24619 ± 2% +2234.0% 574630 ± 1% TOTAL sched_debug.cpu#17.ttwu_local
985770 ± 4% +646.0% 7353877 ± 1% TOTAL sched_debug.cfs_rq[34]:/.min_vruntime
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[17]:/.nr_running
23297 ± 6% +2389.1% 579887 ± 1% TOTAL sched_debug.cpu#34.ttwu_local
4 ±42% +534.8% 29 ± 2% TOTAL sched_debug.cpu#34.cpu_load[4]
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#18.nr_running
5 ±41% +429.6% 28 ± 3% TOTAL sched_debug.cpu#34.cpu_load[3]
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#34.nr_running
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[33]:/.nr_running
11 ±39% +7127.3% 795 ±29% TOTAL sched_debug.cfs_rq[33]:/.nr_spread_over
952580 ± 1% +670.8% 7342099 ± 1% TOTAL sched_debug.cfs_rq[33]:/.min_vruntime
24246 ± 4% +2265.3% 573496 ± 1% TOTAL sched_debug.cpu#18.ttwu_local
24645 ± 6% +2249.7% 579112 ± 1% TOTAL sched_debug.cpu#33.ttwu_local
933626 ± 1% -99.3% 6613 ±47% TOTAL sched_debug.cpu#33.sched_goidle
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[18]:/.nr_running
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#33.nr_running
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[32]:/.nr_running
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#19.nr_running
950196 ± 2% +670.3% 7319391 ± 2% TOTAL sched_debug.cfs_rq[32]:/.min_vruntime
25033 ± 5% +2211.6% 578682 ± 1% TOTAL sched_debug.cpu#32.ttwu_local
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#32.nr_running
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[31]:/.nr_running
23931 ± 3% +2304.2% 575367 ± 1% TOTAL sched_debug.cpu#19.ttwu_local
23792 ± 1% +2309.0% 573173 ± 1% TOTAL sched_debug.cpu#31.ttwu_local
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[19]:/.nr_running
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#31.nr_running
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[30]:/.nr_running
45.19 ± 0% -100.0% 0.00 ± 0% TOTAL perf-profile.cpu-cycles.__crc32c_le.chksum_update.crypto_shash_update.crc32c.sctp_csum_update
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#20.nr_running
23990 ± 2% +2299.0% 575544 ± 1% TOTAL sched_debug.cpu#30.ttwu_local
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#30.nr_running
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[29]:/.nr_running
333744 ± 2% -99.1% 3032 ±30% TOTAL sched_debug.cpu#20.sched_goidle
23769 ± 2% +2321.8% 575648 ± 1% TOTAL sched_debug.cpu#20.ttwu_local
23963 ± 2% +2289.1% 572507 ± 1% TOTAL sched_debug.cpu#29.ttwu_local
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[20]:/.nr_running
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#29.nr_running
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[28]:/.nr_running
16 ±16% +14806.2% 2385 ±48% TOTAL sched_debug.cfs_rq[28]:/.nr_spread_over
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#21.nr_running
23955 ± 2% +2294.9% 573707 ± 1% TOTAL sched_debug.cpu#28.ttwu_local
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#28.nr_running
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[27]:/.nr_running
200672 ±41% +3130.2% 6482112 ±11% TOTAL sched_debug.cfs_rq[27]:/.max_vruntime
333344 ± 3% -99.4% 2160 ±44% TOTAL sched_debug.cpu#21.sched_goidle
200672 ±41% +3130.2% 6482112 ±11% TOTAL sched_debug.cfs_rq[27]:/.MIN_vruntime
24388 ± 2% +2254.3% 574189 ± 1% TOTAL sched_debug.cpu#27.ttwu_local
23766 ± 2% +2316.8% 574378 ± 1% TOTAL sched_debug.cpu#21.ttwu_local
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#27.nr_running
16 ±23% +8230.9% 1349 ±31% TOTAL sched_debug.cfs_rq[21]:/.nr_spread_over
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[21]:/.nr_running
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[26]:/.nr_running
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#22.nr_running
24877 ± 2% +2200.2% 572209 ± 1% TOTAL sched_debug.cpu#26.ttwu_local
333909 ± 4% -98.5% 4876 ±31% TOTAL sched_debug.cpu#26.sched_goidle
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#26.nr_running
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[25]:/.nr_running
21 ±22% +8784.9% 1883 ±35% TOTAL sched_debug.cfs_rq[25]:/.nr_spread_over
24697 ± 4% +2214.3% 571582 ± 1% TOTAL sched_debug.cpu#25.ttwu_local
24175 ± 5% +2283.3% 576162 ± 1% TOTAL sched_debug.cpu#22.ttwu_local
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#25.nr_running
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[22]:/.nr_running
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[24]:/.nr_running
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#23.nr_running
25076 ± 2% +2194.8% 575457 ± 1% TOTAL sched_debug.cpu#24.ttwu_local
336969 ± 2% -99.1% 3193 ±49% TOTAL sched_debug.cpu#24.sched_goidle
0 ± 0% +Inf% 2 ± 0% TOTAL sched_debug.cpu#24.nr_running
0 ± 0% +Inf% 1 ± 0% TOTAL sched_debug.cfs_rq[23]:/.nr_running
420529 ±14% -80.8% 80883 ± 6% TOTAL sched_debug.cpu#24.avg_idle
35 ± 1% +394.3% 174 ± 0% TOTAL vmstat.procs.r
320932 ±15% -77.7% 71461 ± 8% TOTAL sched_debug.cpu#26.avg_idle
295750 ±21% -76.1% 70793 ±17% TOTAL sched_debug.cpu#2.avg_idle
623387 ± 0% -79.5% 127567 ± 1% TOTAL softirqs.SCHED
6 ±42% +320.6% 28 ± 1% TOTAL sched_debug.cpu#40.cpu_load[3]
6 ±33% +376.7% 28 ± 3% TOTAL sched_debug.cpu#51.cpu_load[3]
6 ±27% +358.1% 28 ± 4% TOTAL sched_debug.cpu#62.cpu_load[3]
6 ±27% +358.1% 28 ± 2% TOTAL sched_debug.cpu#59.cpu_load[4]
5 ±20% +403.4% 29 ± 2% TOTAL sched_debug.cpu#44.cpu_load[3]
5 ±25% +437.0% 29 ± 2% TOTAL sched_debug.cpu#38.cpu_load[4]
6 ±21% +361.3% 28 ± 3% TOTAL sched_debug.cpu#46.cpu_load[1]
6 ±10% +380.0% 28 ± 4% TOTAL sched_debug.cpu#46.cpu_load[2]
6 ±18% +364.5% 28 ± 2% TOTAL sched_debug.cpu#44.cpu_load[2]
6 ±32% +364.5% 28 ± 1% TOTAL sched_debug.cpu#36.cpu_load[4]
6 ±21% +358.1% 28 ± 2% TOTAL sched_debug.cpu#44.cpu_load[1]
6 ±29% +354.8% 28 ± 3% TOTAL sched_debug.cpu#44.cpu_load[0]
6 ±44% +361.3% 28 ± 1% TOTAL sched_debug.cpu#32.cpu_load[4]
6 ±33% +373.3% 28 ± 2% TOTAL sched_debug.cpu#55.cpu_load[3]
6 ±28% +323.5% 28 ± 4% TOTAL sched_debug.cpu#46.cpu_load[0]
5 ± 6% +406.9% 29 ± 3% TOTAL sched_debug.cpu#46.cpu_load[3]
340647 ±14% -77.1% 77839 ±14% TOTAL sched_debug.cpu#25.avg_idle
6 ±18% +376.7% 28 ± 2% TOTAL sched_debug.cpu#38.cpu_load[3]
4 ±37% +508.7% 28 ± 2% TOTAL sched_debug.cpu#35.cpu_load[3]
6 ±29% +314.7% 28 ± 4% TOTAL sched_debug.cpu#56.cpu_load[0]
6 ±15% +346.9% 28 ± 2% TOTAL sched_debug.cpu#63.cpu_load[3]
7 ±47% +305.7% 28 ± 1% TOTAL sched_debug.cpu#32.cpu_load[3]
6 ±17% +317.6% 28 ± 3% TOTAL sched_debug.cpu#52.cpu_load[4]
6 ±15% +343.8% 28 ± 3% TOTAL sched_debug.cpu#56.cpu_load[2]
6 ±37% +337.5% 28 ± 3% TOTAL sched_debug.cpu#55.cpu_load[2]
6 ±23% +343.8% 28 ± 1% TOTAL sched_debug.cpu#39.cpu_load[3]
6 ±29% +351.6% 28 ± 5% TOTAL sched_debug.cpu#34.cpu_load[1]
6 ±31% +354.8% 28 ± 4% TOTAL sched_debug.cpu#34.cpu_load[2]
6 ±42% +358.1% 28 ± 5% TOTAL sched_debug.cpu#53.cpu_load[1]
6 ±40% +373.3% 28 ± 3% TOTAL sched_debug.cpu#53.cpu_load[2]
6 ±28% +327.3% 28 ± 1% TOTAL sched_debug.cpu#36.cpu_load[3]
6 ±20% +327.3% 28 ± 2% TOTAL sched_debug.cpu#56.cpu_load[1]
7 ±29% +297.1% 27 ± 1% TOTAL sched_debug.cpu#50.cpu_load[2]
7 ±44% +265.8% 27 ± 1% TOTAL sched_debug.cpu#32.cpu_load[2]
317802 ± 9% -77.7% 70785 ± 9% TOTAL sched_debug.cpu#4.avg_idle
346844 ±10% -77.6% 77616 ± 8% TOTAL sched_debug.cpu#16.avg_idle
4 ±40% +475.0% 27 ± 2% TOTAL sched_debug.cpu#35.cpu_load[1]
5 ±35% +452.0% 27 ± 2% TOTAL sched_debug.cfs_rq[48]:/.runnable_load_avg
4 ±40% +470.8% 27 ± 2% TOTAL sched_debug.cpu#35.cpu_load[2]
5 ±18% +385.7% 27 ± 4% TOTAL sched_debug.cpu#37.cpu_load[0]
5 ± 6% +369.0% 27 ± 2% TOTAL sched_debug.cpu#37.cpu_load[1]
366939 ±13% -78.7% 78115 ± 9% TOTAL sched_debug.cpu#30.avg_idle
1023 ±37% +323.5% 4333 ± 2% TOTAL sched_debug.cpu#48.curr->pid
327585 ± 9% -77.2% 74734 ±11% TOTAL sched_debug.cpu#17.avg_idle
12595 ± 0% -76.7% 2929 ± 0% TOTAL uptime.idle
338343 ±14% -78.4% 73057 ± 5% TOTAL sched_debug.cpu#31.avg_idle
6 ±37% +353.1% 29 ± 4% TOTAL sched_debug.cpu#58.cpu_load[4]
7 ±23% +322.9% 29 ± 3% TOTAL sched_debug.cpu#47.cpu_load[4]
7 ±28% +316.7% 30 ± 2% TOTAL sched_debug.cpu#42.cpu_load[4]
316565 ±15% -74.7% 80204 ±11% TOTAL sched_debug.cpu#23.avg_idle
295009 ±22% -75.0% 73750 ± 9% TOTAL sched_debug.cpu#22.avg_idle
19299589 ± 1% +316.6% 80411460 ± 1% TOTAL proc-vmstat.pgalloc_dma32
6 ±26% +311.8% 28 ± 4% TOTAL sched_debug.cpu#62.cpu_load[2]
6 ±21% +393.3% 29 ± 2% TOTAL sched_debug.cpu#45.cpu_load[3]
7 ±18% +311.4% 28 ± 2% TOTAL sched_debug.cpu#45.cpu_load[2]
7 ±10% +286.8% 29 ± 1% TOTAL sched_debug.cfs_rq[45]:/.runnable_load_avg
8 ±27% +234.9% 28 ± 2% TOTAL sched_debug.cpu#42.cpu_load[1]
6 ±19% +323.5% 28 ± 4% TOTAL sched_debug.cpu#60.cpu_load[4]
8 ±26% +265.0% 29 ± 2% TOTAL sched_debug.cpu#42.cpu_load[3]
7 ±13% +297.2% 28 ± 5% TOTAL sched_debug.cfs_rq[38]:/.runnable_load_avg
8 ±27% +234.9% 28 ± 2% TOTAL sched_debug.cpu#42.cpu_load[2]
6 ±40% +364.5% 28 ± 4% TOTAL sched_debug.cpu#54.cpu_load[4]
306463 ±16% -72.5% 84417 ±11% TOTAL sched_debug.cpu#7.avg_idle
294085 ±13% -74.4% 75243 ±11% TOTAL sched_debug.cpu#58.avg_idle
295178 ± 8% -74.8% 74330 ±13% TOTAL sched_debug.cpu#19.avg_idle
310735 ±12% -74.4% 79628 ±23% TOTAL sched_debug.cpu#12.avg_idle
7 ±21% +275.7% 27 ± 3% TOTAL sched_debug.cpu#59.cpu_load[3]
7 ±30% +277.8% 27 ± 5% TOTAL sched_debug.cpu#62.cpu_load[1]
7 ±40% +300.0% 28 ± 0% TOTAL sched_debug.cpu#40.cpu_load[2]
6 ±34% +314.7% 28 ± 2% TOTAL sched_debug.cpu#51.cpu_load[2]
7 ±49% +280.6% 27 ± 4% TOTAL sched_debug.cfs_rq[40]:/.runnable_load_avg
7 ±34% +297.1% 27 ± 4% TOTAL sched_debug.cpu#55.cpu_load[1]
8 ±20% +245.0% 27 ± 1% TOTAL sched_debug.cpu#36.cpu_load[1]
7 ±23% +286.1% 27 ± 1% TOTAL sched_debug.cpu#36.cpu_load[2]
7 ±16% +288.9% 28 ± 3% TOTAL sched_debug.cpu#52.cpu_load[3]
7 ±13% +278.4% 28 ± 2% TOTAL sched_debug.cpu#63.cpu_load[2]
7 ±23% +288.9% 28 ± 2% TOTAL sched_debug.cpu#39.cpu_load[2]
7 ±28% +271.1% 28 ± 1% TOTAL sched_debug.cpu#39.cpu_load[1]
7 ±18% +288.9% 28 ± 5% TOTAL sched_debug.cfs_rq[63]:/.runnable_load_avg
6 ±28% +302.9% 27 ± 2% TOTAL sched_debug.cfs_rq[37]:/.runnable_load_avg
7 ±30% +255.3% 27 ± 6% TOTAL sched_debug.cpu#38.cpu_load[0]
7 ±25% +283.3% 27 ± 4% TOTAL sched_debug.cpu#38.cpu_load[1]
6 ±22% +324.2% 28 ± 3% TOTAL sched_debug.cpu#38.cpu_load[2]
6 ±25% +308.8% 27 ± 1% TOTAL sched_debug.cfs_rq[56]:/.runnable_load_avg
7 ±38% +267.6% 27 ± 5% TOTAL sched_debug.cpu#62.cpu_load[0]
317966 ±13% -74.2% 82042 ± 9% TOTAL sched_debug.cpu#27.avg_idle
296746 ±16% -74.7% 75078 ± 9% TOTAL sched_debug.cpu#14.avg_idle
298670 ±15% -73.3% 79831 ±22% TOTAL sched_debug.cpu#6.avg_idle
6 ±32% +302.9% 27 ± 5% TOTAL sched_debug.cpu#34.cpu_load[0]
7 ±10% +260.5% 27 ± 3% TOTAL sched_debug.cpu#52.cpu_load[2]
6 ±23% +297.1% 27 ± 0% TOTAL sched_debug.cfs_rq[61]:/.runnable_load_avg
8 ±39% +216.3% 27 ± 1% TOTAL sched_debug.cpu#32.cpu_load[1]
7 ±30% +252.6% 26 ± 2% TOTAL sched_debug.cpu#50.cpu_load[0]
6 ±38% +302.9% 27 ± 1% TOTAL sched_debug.cpu#40.cpu_load[1]
7 ±16% +283.3% 27 ± 2% TOTAL sched_debug.cpu#63.cpu_load[1]
7 ±27% +255.3% 27 ± 2% TOTAL sched_debug.cpu#50.cpu_load[1]
6 ±44% +335.5% 27 ± 3% TOTAL sched_debug.cfs_rq[35]:/.runnable_load_avg
6 ±39% +300.0% 27 ± 2% TOTAL sched_debug.cpu#40.cpu_load[0]
294207 ±16% -73.9% 76789 ±18% TOTAL sched_debug.cpu#20.avg_idle
1454 ±47% +193.6% 4270 ± 1% TOTAL sched_debug.cpu#32.curr->pid
274384 ±28% -64.9% 96227 ±33% TOTAL sched_debug.cpu#33.avg_idle
287321 ± 8% -73.1% 77157 ±10% TOTAL sched_debug.cpu#3.avg_idle
7 ±10% +255.3% 27 ± 4% TOTAL sched_debug.cpu#52.cpu_load[1]
1284 ±29% +228.6% 4222 ± 0% TOTAL sched_debug.cpu#43.curr->pid
349008 ±15% -74.2% 90084 ± 6% TOTAL sched_debug.cpu#21.avg_idle
9 ±11% +230.6% 32 ± 4% TOTAL sched_debug.cpu#49.cpu_load[4]
7 ±30% +281.6% 29 ± 2% TOTAL sched_debug.cpu#43.cpu_load[2]
7 ±29% +297.3% 29 ± 2% TOTAL sched_debug.cpu#43.cpu_load[3]
6 ±38% +323.5% 28 ± 4% TOTAL sched_debug.cpu#54.cpu_load[3]
8 ±29% +223.3% 27 ±10% TOTAL sched_debug.cfs_rq[58]:/.runnable_load_avg
7 ±18% +271.8% 29 ± 2% TOTAL sched_debug.cpu#47.cpu_load[3]
7 ±31% +322.9% 29 ± 2% TOTAL sched_debug.cpu#43.cpu_load[4]
7 ±32% +283.8% 28 ± 4% TOTAL sched_debug.cpu#58.cpu_load[3]
9 ±25% +222.2% 29 ± 3% TOTAL sched_debug.cpu#42.cpu_load[0]
291808 ±12% -72.2% 81207 ±12% TOTAL sched_debug.cpu#0.avg_idle
12900 ± 6% +248.9% 45016 ± 0% TOTAL sched_debug.cfs_rq[61]:/.avg->runnable_avg_sum
282 ± 6% +247.8% 982 ± 0% TOTAL sched_debug.cfs_rq[61]:/.tg_runnable_contrib
307360 ±20% -75.2% 76165 ± 7% TOTAL sched_debug.cpu#13.avg_idle
1213 ±36% +252.3% 4273 ± 0% TOTAL sched_debug.cpu#35.curr->pid
2108032 ± 5% -71.2% 607806 ± 1% TOTAL sched_debug.cpu#34.sched_count
2130952 ± 9% -71.4% 609735 ± 1% TOTAL sched_debug.cpu#33.sched_count
7 ±39% +268.4% 28 ± 3% TOTAL sched_debug.cpu#54.cpu_load[0]
7 ±39% +278.4% 28 ± 3% TOTAL sched_debug.cfs_rq[53]:/.runnable_load_avg
8 ±23% +223.3% 27 ± 2% TOTAL sched_debug.cpu#36.cpu_load[0]
7 ±20% +286.1% 27 ± 4% TOTAL sched_debug.cpu#60.cpu_load[3]
7 ±21% +265.8% 27 ± 4% TOTAL sched_debug.cfs_rq[50]:/.runnable_load_avg
7 ±31% +276.3% 28 ± 2% TOTAL sched_debug.cpu#45.cpu_load[0]
8 ±24% +226.2% 27 ± 2% TOTAL sched_debug.cfs_rq[55]:/.runnable_load_avg
6 ±36% +320.6% 28 ± 4% TOTAL sched_debug.cfs_rq[44]:/.runnable_load_avg
7 ±26% +281.1% 28 ± 2% TOTAL sched_debug.cpu#51.cpu_load[1]
7 ±27% +265.8% 27 ± 4% TOTAL sched_debug.cpu#51.cpu_load[0]
8 ±27% +233.3% 28 ± 2% TOTAL sched_debug.cpu#39.cpu_load[0]
7 ±21% +276.3% 28 ± 2% TOTAL sched_debug.cpu#45.cpu_load[1]
293 ± 8% +234.7% 980 ± 0% TOTAL sched_debug.cfs_rq[56]:/.tg_runnable_contrib
2074224 ± 3% -70.6% 609675 ± 2% TOTAL sched_debug.cpu#63.sched_count
2127419 ± 6% -71.7% 601604 ± 1% TOTAL sched_debug.cpu#46.sched_count
2115949 ± 3% -65.2% 737051 ±35% TOTAL sched_debug.cpu#36.sched_count
13414 ± 8% +235.7% 45027 ± 0% TOTAL sched_debug.cfs_rq[56]:/.avg->runnable_avg_sum
267808 ±13% -72.1% 74782 ±10% TOTAL sched_debug.cpu#5.avg_idle
2069493 ± 8% -68.4% 654987 ±18% TOTAL sched_debug.cpu#45.sched_count
14044 ±11% +220.7% 45039 ± 0% TOTAL sched_debug.cfs_rq[55]:/.avg->runnable_avg_sum
2053686 ± 4% -70.7% 602700 ± 2% TOTAL sched_debug.cpu#52.sched_count
289450 ±13% -68.8% 90401 ±15% TOTAL sched_debug.cpu#1.avg_idle
307 ±11% +219.5% 980 ± 0% TOTAL sched_debug.cfs_rq[55]:/.tg_runnable_contrib
2002864 ± 8% -69.9% 602844 ± 1% TOTAL sched_debug.cpu#40.sched_count
2035365 ± 2% -70.3% 603651 ± 1% TOTAL sched_debug.cpu#47.sched_count
9 ±24% +197.8% 26 ± 1% TOTAL sched_debug.cfs_rq[59]:/.runnable_load_avg
8 ±24% +226.8% 26 ± 6% TOTAL sched_debug.cpu#60.cpu_load[0]
8 ±23% +235.0% 26 ± 4% TOTAL sched_debug.cpu#59.cpu_load[1]
7 ±19% +252.6% 26 ± 4% TOTAL sched_debug.cpu#60.cpu_load[1]
8 ±23% +234.1% 27 ± 3% TOTAL sched_debug.cpu#59.cpu_load[2]
8 ±36% +209.1% 27 ± 2% TOTAL sched_debug.cfs_rq[32]:/.runnable_load_avg
7 ±20% +253.8% 27 ± 2% TOTAL sched_debug.cpu#63.cpu_load[0]
8 ±16% +206.8% 27 ± 4% TOTAL sched_debug.cfs_rq[52]:/.runnable_load_avg
7 ±19% +257.9% 27 ± 4% TOTAL sched_debug.cpu#60.cpu_load[2]
2008711 ± 1% -70.0% 602980 ± 1% TOTAL sched_debug.cpu#61.nr_switches
2100707 ± 7% -71.1% 606142 ± 2% TOTAL sched_debug.cpu#53.sched_count
2046800 ± 1% -65.3% 710307 ±28% TOTAL sched_debug.cpu#60.sched_count
2091856 ± 3% -70.6% 614687 ± 3% TOTAL sched_debug.cpu#38.sched_count
2062814 ± 3% -70.7% 604560 ± 2% TOTAL sched_debug.cpu#54.sched_count
13554 ±11% +232.3% 45048 ± 0% TOTAL sched_debug.cfs_rq[35]:/.avg->runnable_avg_sum
296 ±11% +231.2% 982 ± 0% TOTAL sched_debug.cfs_rq[35]:/.tg_runnable_contrib
1474 ±29% +188.2% 4249 ± 0% TOTAL sched_debug.cpu#62.curr->pid
2049101 ± 6% -70.4% 607303 ± 2% TOTAL sched_debug.cpu#41.sched_count
2167538 ± 9% -71.9% 609753 ± 2% TOTAL sched_debug.cpu#39.sched_count
2036680 ± 3% -70.3% 605379 ± 1% TOTAL sched_debug.cpu#51.sched_count
2017924 ± 1% -70.0% 604898 ± 0% TOTAL sched_debug.cpu#59.sched_count
14509 ±13% +210.5% 45046 ± 0% TOTAL sched_debug.cfs_rq[63]:/.avg->runnable_avg_sum
2076058 ± 5% -69.2% 640162 ±10% TOTAL sched_debug.cpu#37.sched_count
1225 ±36% +253.2% 4329 ± 3% TOTAL sched_debug.cpu#53.curr->pid
1148 ±27% +268.3% 4230 ± 1% TOTAL sched_debug.cpu#61.curr->pid
14570 ±13% +209.0% 45024 ± 0% TOTAL sched_debug.cfs_rq[34]:/.avg->runnable_avg_sum
2020274 ± 0% -69.8% 610215 ± 2% TOTAL sched_debug.cpu#62.nr_switches
319 ±13% +208.0% 982 ± 0% TOTAL sched_debug.cfs_rq[34]:/.tg_runnable_contrib
317 ±13% +208.1% 978 ± 0% TOTAL sched_debug.cfs_rq[63]:/.tg_runnable_contrib
2042951 ± 1% -69.7% 618701 ± 2% TOTAL sched_debug.cpu#62.sched_count
2058101 ± 7% -70.8% 600726 ± 1% TOTAL sched_debug.cpu#44.sched_count
1984924 ± 1% -69.5% 604880 ± 0% TOTAL sched_debug.cpu#59.nr_switches
1971315 ± 1% -69.3% 604542 ± 2% TOTAL sched_debug.cpu#54.nr_switches
2014162 ± 6% -65.2% 700760 ±28% TOTAL sched_debug.cpu#42.sched_count
1953776 ± 2% -68.8% 609656 ± 2% TOTAL sched_debug.cpu#63.nr_switches
2055362 ± 7% -65.1% 717767 ±33% TOTAL sched_debug.cpu#43.sched_count
1231 ±29% +249.4% 4303 ± 2% TOTAL sched_debug.cpu#50.curr->pid
313 ±10% +213.7% 982 ± 0% TOTAL sched_debug.cfs_rq[62]:/.tg_runnable_contrib
7 ±13% +255.3% 27 ± 6% TOTAL sched_debug.cpu#52.cpu_load[0]
8 ±31% +202.3% 26 ± 3% TOTAL sched_debug.cpu#32.cpu_load[0]
1991792 ± 1% -69.3% 611077 ± 1% TOTAL sched_debug.cpu#60.nr_switches
14342 ±10% +214.0% 45031 ± 0% TOTAL sched_debug.cfs_rq[62]:/.avg->runnable_avg_sum
1942697 ± 1% -69.0% 602682 ± 2% TOTAL sched_debug.cpu#52.nr_switches
1950644 ± 1% -69.2% 601592 ± 1% TOTAL sched_debug.cpu#46.nr_switches
1992472 ± 1% -61.6% 765257 ±25% TOTAL sched_debug.cpu#50.sched_count
8 ±20% +241.5% 28 ± 4% TOTAL sched_debug.cpu#58.cpu_load[1]
9 ±12% +215.6% 28 ± 2% TOTAL sched_debug.cfs_rq[39]:/.runnable_load_avg
7 ±31% +283.8% 28 ± 2% TOTAL sched_debug.cpu#54.cpu_load[2]
8 ±22% +238.1% 28 ± 2% TOTAL sched_debug.cpu#47.cpu_load[2]
8 ±26% +250.0% 28 ± 4% TOTAL sched_debug.cpu#58.cpu_load[2]
13572 ±12% +232.0% 45056 ± 0% TOTAL sched_debug.cfs_rq[37]:/.avg->runnable_avg_sum
2047201 ± 6% -67.2% 670954 ±18% TOTAL sched_debug.cpu#35.sched_count
297 ±12% +230.8% 983 ± 0% TOTAL sched_debug.cfs_rq[37]:/.tg_runnable_contrib
1955174 ± 2% -69.1% 603605 ± 1% TOTAL sched_debug.cpu#55.nr_switches
1971669 ± 5% -69.1% 608414 ± 1% TOTAL sched_debug.cpu#49.sched_count
1939844 ± 1% -68.8% 605845 ± 1% TOTAL sched_debug.cpu#36.nr_switches
1949899 ± 1% -68.9% 606124 ± 2% TOTAL sched_debug.cpu#53.nr_switches
315 ±12% +211.7% 984 ± 0% TOTAL sched_debug.cfs_rq[51]:/.tg_runnable_contrib
12 ±27% +166.7% 32 ± 6% TOTAL sched_debug.cpu#49.cpu_load[2]
11 ±21% +184.2% 32 ± 5% TOTAL sched_debug.cpu#49.cpu_load[3]
14438 ±12% +212.0% 45043 ± 0% TOTAL sched_debug.cfs_rq[51]:/.avg->runnable_avg_sum
2001341 ± 4% -66.4% 671981 ±17% TOTAL sched_debug.cpu#57.sched_count
303710 ±17% -70.0% 91263 ±13% TOTAL sched_debug.cpu#9.avg_idle
1900887 ± 1% -68.4% 600434 ± 1% TOTAL sched_debug.cpu#45.nr_switches
14272 ±12% +215.8% 45065 ± 0% TOTAL sched_debug.cfs_rq[59]:/.avg->runnable_avg_sum
1937164 ± 0% -68.5% 609385 ± 1% TOTAL sched_debug.cpu#37.nr_switches
1974368 ± 1% -69.1% 609730 ± 2% TOTAL sched_debug.cpu#39.nr_switches
1973848 ± 2% -62.6% 738151 ±35% TOTAL sched_debug.cpu#58.sched_count
1912735 ± 1% -68.5% 603190 ± 2% TOTAL sched_debug.cpu#50.nr_switches
1931643 ± 2% -68.6% 607250 ± 2% TOTAL sched_debug.cpu#58.nr_switches
1377 ±14% +216.7% 4363 ± 1% TOTAL sched_debug.cpu#49.curr->pid
312 ±12% +214.1% 981 ± 0% TOTAL sched_debug.cfs_rq[59]:/.tg_runnable_contrib
1944033 ± 1% -68.9% 605363 ± 1% TOTAL sched_debug.cpu#51.nr_switches
1915699 ± 1% -68.1% 610629 ± 1% TOTAL sched_debug.cpu#35.nr_switches
15047 ±12% +199.1% 45011 ± 0% TOTAL sched_debug.cfs_rq[32]:/.avg->runnable_avg_sum
46418 ± 2% +213.2% 145374 ± 0% TOTAL sched_debug.cfs_rq[62]:/.exec_clock
1924241 ± 2% -68.3% 609789 ± 2% TOTAL sched_debug.cpu#38.nr_switches
1912400 ± 1% -68.4% 603629 ± 1% TOTAL sched_debug.cpu#47.nr_switches
329 ±12% +197.8% 979 ± 0% TOTAL sched_debug.cfs_rq[32]:/.tg_runnable_contrib
1895051 ± 2% -68.3% 600708 ± 1% TOTAL sched_debug.cpu#44.nr_switches
13983 ±14% +222.1% 45044 ± 0% TOTAL sched_debug.cfs_rq[44]:/.avg->runnable_avg_sum
1901704 ± 1% -67.9% 609715 ± 1% TOTAL sched_debug.cpu#33.nr_switches
14514 ± 2% +214.1% 45583 ± 0% TOTAL sched_debug.cfs_rq[33]:/.avg->runnable_avg_sum
306 ±14% +221.0% 982 ± 0% TOTAL sched_debug.cfs_rq[44]:/.tg_runnable_contrib
1926663 ± 0% -68.2% 613403 ± 2% TOTAL sched_debug.cpu#48.sched_count
317 ± 2% +212.6% 993 ± 0% TOTAL sched_debug.cfs_rq[33]:/.tg_runnable_contrib
1203 ±34% +258.3% 4310 ± 1% TOTAL sched_debug.cpu#56.curr->pid
46837 ± 2% +210.4% 145375 ± 0% TOTAL sched_debug.cfs_rq[60]:/.exec_clock
1874381 ± 1% -67.9% 600905 ± 1% TOTAL sched_debug.cpu#42.nr_switches
8 ±16% +238.1% 28 ± 3% TOTAL sched_debug.cfs_rq[47]:/.runnable_load_avg
9 ±12% +200.0% 27 ± 4% TOTAL sched_debug.cpu#58.cpu_load[0]
9 ±24% +213.3% 28 ± 2% TOTAL sched_debug.cpu#47.cpu_load[1]
9 ±28% +200.0% 28 ± 1% TOTAL sched_debug.cpu#47.cpu_load[0]
8 ±34% +239.0% 27 ± 4% TOTAL sched_debug.cpu#55.cpu_load[0]
8 ±15% +222.7% 28 ± 3% TOTAL sched_debug.cfs_rq[36]:/.runnable_load_avg
8 ±23% +246.3% 28 ± 3% TOTAL sched_debug.cpu#43.cpu_load[1]
8 ±22% +250.0% 28 ± 2% TOTAL sched_debug.cfs_rq[43]:/.runnable_load_avg
9 ±37% +213.3% 28 ± 4% TOTAL sched_debug.cfs_rq[34]:/.runnable_load_avg
7 ±32% +268.4% 28 ± 3% TOTAL sched_debug.cpu#54.cpu_load[1]
9 ±37% +208.9% 27 ± 4% TOTAL sched_debug.cfs_rq[51]:/.runnable_load_avg
1916043 ± 2% -68.3% 607993 ± 2% TOTAL sched_debug.cpu#57.nr_switches
265392 ±17% -64.3% 94795 ±22% TOTAL sched_debug.cpu#43.avg_idle
1889707 ± 1% -68.3% 599744 ± 1% TOTAL sched_debug.cpu#43.nr_switches
9 ±29% +227.1% 31 ± 3% TOTAL sched_debug.cpu#33.cpu_load[4]
39 ±46% +117.4% 84 ±32% TOTAL sched_debug.cfs_rq[17]:/.tg_load_contrib
1867119 ± 0% -67.6% 605812 ± 1% TOTAL sched_debug.cpu#49.nr_switches
1889889 ± 1% -67.8% 607783 ± 1% TOTAL sched_debug.cpu#34.nr_switches
47035 ± 4% +209.1% 145403 ± 0% TOTAL sched_debug.cfs_rq[61]:/.exec_clock
1860154 ± 1% -67.6% 602684 ± 2% TOTAL sched_debug.cpu#48.nr_switches
325 ±10% +202.2% 982 ± 0% TOTAL sched_debug.cfs_rq[39]:/.tg_runnable_contrib
66798546 ± 0% +206.8% 2.049e+08 ± 0% TOTAL softirqs.NET_RX
13894 ±13% +224.1% 45029 ± 0% TOTAL sched_debug.cfs_rq[48]:/.avg->runnable_avg_sum
14866 ±10% +202.7% 44994 ± 0% TOTAL sched_debug.cfs_rq[39]:/.avg->runnable_avg_sum
14874 ±11% +207.3% 45713 ± 0% TOTAL sched_debug.cfs_rq[57]:/.avg->runnable_avg_sum
1893238 ± 1% -67.9% 608368 ± 1% TOTAL sched_debug.cpu#56.sched_count
304 ±13% +222.6% 981 ± 0% TOTAL sched_debug.cfs_rq[48]:/.tg_runnable_contrib
1850633 ± 0% -67.2% 606443 ± 2% TOTAL sched_debug.cpu#41.nr_switches
273840 ±22% -69.2% 84378 ± 8% TOTAL sched_debug.cpu#60.avg_idle
325 ±12% +206.1% 996 ± 0% TOTAL sched_debug.cfs_rq[57]:/.tg_runnable_contrib
14957 ±12% +201.5% 45098 ± 0% TOTAL sched_debug.cfs_rq[58]:/.avg->runnable_avg_sum
47628 ± 2% +205.5% 145491 ± 0% TOTAL sched_debug.cfs_rq[51]:/.exec_clock
271141 ±15% -64.6% 96084 ±33% TOTAL sched_debug.cpu#47.avg_idle
47097 ± 2% +208.9% 145500 ± 0% TOTAL sched_debug.cfs_rq[59]:/.exec_clock
1395 ±13% +210.6% 4334 ± 2% TOTAL sched_debug.cpu#55.curr->pid
15193 ±10% +196.8% 45096 ± 0% TOTAL sched_debug.cfs_rq[50]:/.avg->runnable_avg_sum
326 ±12% +200.6% 981 ± 0% TOTAL sched_debug.cfs_rq[58]:/.tg_runnable_contrib
315 ± 8% +211.4% 982 ± 0% TOTAL sched_debug.cfs_rq[38]:/.tg_runnable_contrib
14456 ± 8% +211.3% 45004 ± 0% TOTAL sched_debug.cfs_rq[38]:/.avg->runnable_avg_sum
47896 ± 3% +203.5% 145372 ± 0% TOTAL sched_debug.cfs_rq[54]:/.exec_clock
332 ±10% +195.7% 984 ± 0% TOTAL sched_debug.cfs_rq[50]:/.tg_runnable_contrib
1865184 ± 2% -67.4% 608353 ± 1% TOTAL sched_debug.cpu#56.nr_switches
47722 ± 2% +204.7% 145411 ± 0% TOTAL sched_debug.cfs_rq[55]:/.exec_clock
47334 ± 2% +207.3% 145475 ± 0% TOTAL sched_debug.cfs_rq[52]:/.exec_clock
15320 ± 5% +193.8% 45011 ± 0% TOTAL sched_debug.cfs_rq[46]:/.avg->runnable_avg_sum
48192 ± 1% +201.6% 145362 ± 0% TOTAL sched_debug.cfs_rq[63]:/.exec_clock
48037 ± 1% +202.7% 145411 ± 0% TOTAL sched_debug.cfs_rq[32]:/.exec_clock
1497 ±19% +193.5% 4395 ± 5% TOTAL sched_debug.cpu#52.curr->pid
1846875 ± 0% -67.1% 607481 ± 1% TOTAL sched_debug.cpu#32.nr_switches
287922 ±23% -70.6% 84722 ±16% TOTAL sched_debug.cpu#11.avg_idle
48162 ± 3% +202.0% 145448 ± 0% TOTAL sched_debug.cfs_rq[36]:/.exec_clock
335 ± 5% +192.7% 982 ± 0% TOTAL sched_debug.cfs_rq[46]:/.tg_runnable_contrib
47591 ± 2% +205.6% 145417 ± 0% TOTAL sched_debug.cfs_rq[53]:/.exec_clock
270817 ±22% -66.3% 91254 ±13% TOTAL sched_debug.cpu#63.avg_idle
15169 ±10% +196.9% 45031 ± 0% TOTAL sched_debug.cfs_rq[36]:/.avg->runnable_avg_sum
332 ±10% +196.3% 984 ± 0% TOTAL sched_debug.cfs_rq[36]:/.tg_runnable_contrib
9 ±31% +195.7% 27 ± 3% TOTAL sched_debug.cfs_rq[54]:/.runnable_load_avg
1552 ±27% +175.8% 4281 ± 2% TOTAL sched_debug.cpu#60.curr->pid
49056 ± 1% +199.2% 146799 ± 0% TOTAL sched_debug.cfs_rq[33]:/.exec_clock
1431 ±22% +201.8% 4321 ± 1% TOTAL sched_debug.cpu#34.curr->pid
48712 ± 2% +198.8% 145545 ± 0% TOTAL sched_debug.cfs_rq[50]:/.exec_clock
48311 ± 1% +201.0% 145402 ± 0% TOTAL sched_debug.cfs_rq[39]:/.exec_clock
332 ± 9% +201.0% 999 ± 0% TOTAL sched_debug.cfs_rq[49]:/.tg_runnable_contrib
226926 ±13% -62.6% 84981 ±24% TOTAL sched_debug.cpu#38.avg_idle
15180 ± 9% +201.5% 45763 ± 0% TOTAL sched_debug.cfs_rq[49]:/.avg->runnable_avg_sum
48719 ± 2% +198.5% 145438 ± 0% TOTAL sched_debug.cfs_rq[56]:/.exec_clock
1795031 ± 0% -66.4% 602823 ± 1% TOTAL sched_debug.cpu#40.nr_switches
48496 ± 2% +199.9% 145443 ± 0% TOTAL sched_debug.cfs_rq[58]:/.exec_clock
15260 ±11% +195.2% 45042 ± 0% TOTAL sched_debug.cfs_rq[40]:/.avg->runnable_avg_sum
285472 ±25% -68.5% 89860 ±28% TOTAL sched_debug.cpu#15.avg_idle
334 ±11% +194.3% 982 ± 0% TOTAL sched_debug.cfs_rq[40]:/.tg_runnable_contrib
49933 ± 2% +194.4% 147012 ± 0% TOTAL sched_debug.cfs_rq[57]:/.exec_clock
49101 ± 2% +196.1% 145384 ± 0% TOTAL sched_debug.cfs_rq[37]:/.exec_clock
49003 ± 1% +196.8% 145444 ± 0% TOTAL sched_debug.cfs_rq[35]:/.exec_clock
1390 ±22% +206.7% 4265 ± 0% TOTAL sched_debug.cpu#37.curr->pid
48581 ± 3% +199.4% 145429 ± 0% TOTAL sched_debug.cfs_rq[48]:/.exec_clock
1630 ±25% +160.5% 4247 ± 2% TOTAL sched_debug.cpu#47.curr->pid
49123 ± 2% +196.0% 145404 ± 0% TOTAL sched_debug.cfs_rq[46]:/.exec_clock
266050 ±20% -59.4% 107922 ±24% TOTAL sched_debug.cpu#52.avg_idle
14978 ±10% +200.9% 45075 ± 0% TOTAL sched_debug.cfs_rq[45]:/.avg->runnable_avg_sum
1417 ±21% +204.3% 4312 ± 2% TOTAL sched_debug.cpu#39.curr->pid
12 ±26% +160.7% 31 ± 9% TOTAL sched_debug.cpu#49.cpu_load[1]
327 ±10% +198.7% 978 ± 0% TOTAL sched_debug.cfs_rq[45]:/.tg_runnable_contrib
270121 ±17% -61.7% 103375 ±20% TOTAL sched_debug.cpu#32.avg_idle
50428 ± 2% +189.1% 145774 ± 0% TOTAL sched_debug.cfs_rq[42]:/.exec_clock
1574 ±34% +166.5% 4194 ± 1% TOTAL sched_debug.cpu#45.curr->pid
50663 ± 1% +189.6% 146717 ± 0% TOTAL sched_debug.cfs_rq[49]:/.exec_clock
49273 ± 2% +195.3% 145509 ± 0% TOTAL sched_debug.cfs_rq[34]:/.exec_clock
15310 ±13% +194.0% 45007 ± 0% TOTAL sched_debug.cfs_rq[60]:/.avg->runnable_avg_sum
236078 ±11% -65.9% 80473 ±14% TOTAL sched_debug.cpu#41.avg_idle
49673 ± 2% +192.7% 145415 ± 0% TOTAL sched_debug.cfs_rq[38]:/.exec_clock
8 ±26% +235.0% 26 ± 5% TOTAL sched_debug.cpu#59.cpu_load[0]
50433 ± 2% +188.4% 145447 ± 0% TOTAL sched_debug.cfs_rq[45]:/.exec_clock
15213 ± 5% +196.2% 45068 ± 0% TOTAL sched_debug.cfs_rq[52]:/.avg->runnable_avg_sum
334 ±13% +193.4% 980 ± 0% TOTAL sched_debug.cfs_rq[60]:/.tg_runnable_contrib
50577 ± 1% +187.9% 145609 ± 0% TOTAL sched_debug.cfs_rq[43]:/.exec_clock
249630 ±15% -65.3% 86741 ± 5% TOTAL sched_debug.cpu#8.avg_idle
332 ± 5% +194.9% 980 ± 0% TOTAL sched_debug.cfs_rq[52]:/.tg_runnable_contrib
235932 ± 9% -62.3% 88954 ±19% TOTAL sched_debug.cpu#34.avg_idle
50890 ± 2% +185.7% 145384 ± 0% TOTAL sched_debug.cfs_rq[47]:/.exec_clock
50930 ± 1% +185.7% 145517 ± 0% TOTAL sched_debug.cfs_rq[40]:/.exec_clock
14328 ±16% +214.3% 45037 ± 0% TOTAL sched_debug.cfs_rq[53]:/.avg->runnable_avg_sum
16062 ± 9% +180.4% 45031 ± 0% TOTAL sched_debug.cfs_rq[47]:/.avg->runnable_avg_sum
51840 ± 1% +183.3% 146858 ± 0% TOTAL sched_debug.cfs_rq[41]:/.exec_clock
1499 ±20% +187.0% 4301 ± 1% TOTAL sched_debug.cpu#51.curr->pid
50662 ± 2% +187.3% 145570 ± 0% TOTAL sched_debug.cfs_rq[44]:/.exec_clock
269032 ± 7% -66.8% 89230 ±17% TOTAL sched_debug.cpu#59.avg_idle
313 ±16% +212.5% 980 ± 0% TOTAL sched_debug.cfs_rq[53]:/.tg_runnable_contrib
16303 ±17% +176.2% 45035 ± 0% TOTAL sched_debug.cfs_rq[54]:/.avg->runnable_avg_sum
356 ±17% +174.9% 979 ± 0% TOTAL sched_debug.cfs_rq[54]:/.tg_runnable_contrib
351 ± 9% +178.0% 977 ± 0% TOTAL sched_debug.cfs_rq[47]:/.tg_runnable_contrib
15587 ±11% +189.2% 45070 ± 0% TOTAL sched_debug.cfs_rq[43]:/.avg->runnable_avg_sum
341 ±11% +187.8% 982 ± 0% TOTAL sched_debug.cfs_rq[43]:/.tg_runnable_contrib
99841369 ± 0% +180.1% 2.796e+08 ± 1% TOTAL numa-numastat.node2.local_node
99843858 ± 0% +180.1% 2.796e+08 ± 1% TOTAL numa-numastat.node2.numa_hit
10 ±17% +196.2% 30 ± 3% TOTAL sched_debug.cpu#57.cpu_load[4]
11 ±12% +189.1% 31 ± 8% TOTAL sched_debug.cpu#41.cpu_load[4]
1560 ±11% +175.1% 4292 ± 1% TOTAL sched_debug.cpu#59.curr->pid
49642548 ± 0% +177.8% 1.379e+08 ± 1% TOTAL numa-vmstat.node2.numa_local
8 ±28% +220.5% 28 ± 4% TOTAL sched_debug.cpu#43.cpu_load[0]
49698361 ± 0% +177.6% 1.38e+08 ± 1% TOTAL numa-vmstat.node2.numa_hit
219871 ±18% -57.9% 92485 ±15% TOTAL sched_debug.cpu#62.avg_idle
50123901 ± 0% +178.0% 1.393e+08 ± 1% TOTAL numa-vmstat.node0.numa_local
50127863 ± 0% +177.9% 1.393e+08 ± 1% TOTAL numa-vmstat.node0.numa_hit
16250 ± 5% +178.5% 45253 ± 0% TOTAL sched_debug.cfs_rq[42]:/.avg->runnable_avg_sum
49861680 ± 0% +176.2% 1.377e+08 ± 1% TOTAL numa-vmstat.node3.numa_local
355 ± 5% +177.0% 985 ± 0% TOTAL sched_debug.cfs_rq[42]:/.tg_runnable_contrib
49917445 ± 0% +176.0% 1.378e+08 ± 1% TOTAL numa-vmstat.node3.numa_hit
1564 ±12% +175.2% 4304 ± 2% TOTAL sched_debug.cpu#54.curr->pid
49911338 ± 0% +175.9% 1.377e+08 ± 1% TOTAL numa-vmstat.node1.numa_local
248524 ±15% -58.0% 104482 ±29% TOTAL sched_debug.cpu#46.avg_idle
49966794 ± 0% +175.7% 1.377e+08 ± 1% TOTAL numa-vmstat.node1.numa_hit
1429 ±20% +198.6% 4269 ± 0% TOTAL sched_debug.cpu#44.curr->pid
16484 ± 8% +175.8% 45465 ± 0% TOTAL sched_debug.cfs_rq[41]:/.avg->runnable_avg_sum
361 ± 8% +174.8% 992 ± 0% TOTAL sched_debug.cfs_rq[41]:/.tg_runnable_contrib
1453 ±23% +197.1% 4318 ± 1% TOTAL sched_debug.cpu#57.curr->pid
1424 ±28% +213.8% 4470 ± 5% TOTAL sched_debug.cpu#63.curr->pid
1420 ±23% +201.2% 4278 ± 3% TOTAL sched_debug.cpu#58.curr->pid
211265 ±21% -57.9% 88934 ±31% TOTAL sched_debug.cpu#39.avg_idle
37 ±32% +129.8% 86 ±10% TOTAL sched_debug.cfs_rq[3]:/.tg_load_contrib
2788857 ± 1% +163.2% 7340919 ± 1% TOTAL sched_debug.cfs_rq[16]:/.min_vruntime
224622 ±17% -59.3% 91521 ±15% TOTAL sched_debug.cpu#35.avg_idle
2764205 ± 1% +161.1% 7216910 ± 1% TOTAL sched_debug.cfs_rq[8]:/.min_vruntime
2819278 ± 1% +156.7% 7238266 ± 1% TOTAL sched_debug.cfs_rq[10]:/.min_vruntime
2837162 ± 1% +159.3% 7357411 ± 1% TOTAL sched_debug.cfs_rq[24]:/.min_vruntime
2817770 ± 1% +160.8% 7349390 ± 1% TOTAL sched_debug.cfs_rq[17]:/.min_vruntime
12 ±31% +160.0% 31 ± 4% TOTAL sched_debug.cpu#33.cpu_load[3]
2839736 ± 1% +154.5% 7225910 ± 1% TOTAL sched_debug.cfs_rq[13]:/.min_vruntime
37 ±38% +86.6% 69 ±45% TOTAL sched_debug.cfs_rq[27]:/.tg_load_contrib
1826 ±15% +133.4% 4262 ± 0% TOTAL sched_debug.cpu#36.curr->pid
2880692 ± 3% +154.0% 7316909 ± 1% TOTAL sched_debug.cfs_rq[0]:/.min_vruntime
1543 ±16% +175.8% 4257 ± 0% TOTAL sched_debug.cpu#46.curr->pid
2903723 ± 1% +153.0% 7346423 ± 1% TOTAL sched_debug.cfs_rq[20]:/.min_vruntime
2853451 ± 1% +156.8% 7328372 ± 1% TOTAL sched_debug.cfs_rq[18]:/.min_vruntime
2881811 ± 2% +154.8% 7342530 ± 0% TOTAL sched_debug.cfs_rq[25]:/.min_vruntime
3.22 ± 1% +149.8% 8.05 ± 3% TOTAL perf-profile.cpu-cycles.copy_user_generic_string.sctp_user_addto_chunk.sctp_datamsg_from_user.sctp_sendmsg.inet_sendmsg
2848635 ± 1% +153.7% 7225776 ± 1% TOTAL sched_debug.cfs_rq[12]:/.min_vruntime
2878875 ± 2% +155.2% 7346322 ± 1% TOTAL sched_debug.cfs_rq[2]:/.min_vruntime
222782 ±27% -61.7% 85268 ± 9% TOTAL sched_debug.cpu#56.avg_idle
2831503 ± 1% +154.5% 7205461 ± 1% TOTAL sched_debug.cfs_rq[11]:/.min_vruntime
2835663 ± 1% +154.6% 7218527 ± 1% TOTAL sched_debug.cfs_rq[9]:/.min_vruntime
1704 ±26% +152.9% 4309 ± 1% TOTAL sched_debug.cpu#41.curr->pid
12 ±35% +130.6% 28 ± 2% TOTAL sched_debug.cfs_rq[48]:/.load
2844115 ± 0% +154.4% 7235577 ± 1% TOTAL sched_debug.cfs_rq[15]:/.min_vruntime
1665 ±28% +159.1% 4315 ± 1% TOTAL sched_debug.cpu#33.curr->pid
2892891 ± 1% +154.2% 7353510 ± 1% TOTAL sched_debug.cfs_rq[19]:/.min_vruntime
2901579 ± 2% +152.7% 7332677 ± 1% TOTAL sched_debug.cfs_rq[22]:/.min_vruntime
2896475 ± 2% +154.3% 7366784 ± 0% TOTAL sched_debug.cfs_rq[23]:/.min_vruntime
2911824 ± 1% +151.7% 7327805 ± 1% TOTAL sched_debug.cfs_rq[1]:/.min_vruntime
2966470 ± 1% +147.9% 7354082 ± 1% TOTAL sched_debug.cfs_rq[29]:/.min_vruntime
2884101 ± 0% +150.3% 7219627 ± 1% TOTAL sched_debug.cfs_rq[14]:/.min_vruntime
2925842 ± 1% +151.9% 7369360 ± 1% TOTAL sched_debug.cfs_rq[31]:/.min_vruntime
2902721 ± 1% +152.7% 7334172 ± 1% TOTAL sched_debug.cfs_rq[21]:/.min_vruntime
2924791 ± 2% +150.8% 7336302 ± 1% TOTAL sched_debug.cfs_rq[26]:/.min_vruntime
11 ±26% +154.2% 30 ± 2% TOTAL sched_debug.cpu#57.cpu_load[3]
2910713 ± 1% +151.6% 7322791 ± 1% TOTAL sched_debug.cfs_rq[3]:/.min_vruntime
2952231 ± 2% +149.6% 7369935 ± 0% TOTAL sched_debug.cfs_rq[27]:/.min_vruntime
1327 ±42% +219.3% 4239 ± 0% TOTAL sched_debug.cpu#40.curr->pid
2975600 ± 0% +146.9% 7348061 ± 1% TOTAL sched_debug.cfs_rq[28]:/.min_vruntime
2927020 ± 2% +150.3% 7326407 ± 1% TOTAL sched_debug.cfs_rq[5]:/.min_vruntime
2937147 ± 0% +148.1% 7287431 ± 1% TOTAL sched_debug.cfs_rq[7]:/.min_vruntime
12 ±16% +151.6% 32 ±12% TOTAL sched_debug.cpu#49.cpu_load[0]
2972203 ± 1% +146.7% 7331240 ± 0% TOTAL sched_debug.cfs_rq[30]:/.min_vruntime
2917430 ± 1% +149.9% 7291729 ± 1% TOTAL sched_debug.cfs_rq[4]:/.min_vruntime
9 ±19% +177.6% 27 ± 4% TOTAL sched_debug.cfs_rq[60]:/.runnable_load_avg
2903992 ± 2% +150.8% 7282050 ± 1% TOTAL sched_debug.cfs_rq[6]:/.min_vruntime
37 ±31% +113.3% 80 ±23% TOTAL sched_debug.cfs_rq[29]:/.tg_load_contrib
230273 ±17% -51.7% 111201 ±32% TOTAL sched_debug.cpu#50.avg_idle
0.70 ± 1% +140.3% 1.68 ± 3% TOTAL perf-profile.cpu-cycles.get_page_from_freelist.__alloc_pages_nodemask.kmalloc_large_node.__kmalloc_node_track_caller.__kmalloc_reserve
206691 ±15% -57.8% 87201 ± 8% TOTAL sched_debug.cpu#61.avg_idle
259396 ± 5% +122.7% 577766 ± 1% TOTAL sched_debug.cpu#62.ttwu_count
12 ±34% +134.9% 29 ± 3% TOTAL sched_debug.cpu#57.cpu_load[2]
269666 ±16% -57.7% 114191 ±17% TOTAL sched_debug.cpu#36.avg_idle
14 ±36% +119.7% 31 ± 5% TOTAL sched_debug.cpu#33.cpu_load[2]
1706 ±16% +149.8% 4262 ± 0% TOTAL sched_debug.cpu#38.curr->pid
242840 ± 4% -54.8% 109685 ±27% TOTAL sched_debug.cpu#45.avg_idle
262430 ± 5% +120.6% 578956 ± 1% TOTAL sched_debug.cpu#60.ttwu_count
139692 ± 4% -54.6% 63480 ± 2% TOTAL proc-vmstat.numa_hint_faults
12 ±19% +148.4% 31 ± 9% TOTAL sched_debug.cpu#41.cpu_load[3]
264847 ± 9% +116.3% 572756 ± 2% TOTAL sched_debug.cpu#61.ttwu_count
0.50 ± 7% +122.0% 1.11 ±12% TOTAL perf-profile.cpu-cycles.sctp_sendmsg.inet_sendmsg.sock_sendmsg.___sys_sendmsg.__sys_sendmsg
30 ±12% +115.3% 64 ±46% TOTAL sched_debug.cfs_rq[19]:/.tg_load_contrib
11 ±28% +139.0% 28 ± 5% TOTAL sched_debug.cfs_rq[42]:/.runnable_load_avg
263882 ± 5% +119.9% 580288 ± 1% TOTAL sched_debug.cpu#59.ttwu_count
15 ±41% +105.3% 30 ± 7% TOTAL sched_debug.cpu#33.cpu_load[1]
15 ±41% +97.5% 31 ±10% TOTAL sched_debug.cpu#33.cpu_load[0]
232649 ± 8% -53.9% 107165 ± 9% TOTAL sched_debug.cpu#37.avg_idle
1870 ±19% +126.4% 4234 ± 0% TOTAL sched_debug.cpu#42.curr->pid
276936 ± 2% +110.2% 582259 ± 1% TOTAL sched_debug.cpu#32.ttwu_count
32 ±13% +116.7% 70 ±31% TOTAL sched_debug.cfs_rq[24]:/.tg_load_contrib
3.48 ± 0% +111.1% 7.35 ± 1% TOTAL perf-profile.cpu-cycles.copy_user_generic_string.skb_copy_datagram_iovec.sctp_recvmsg.sock_common_recvmsg.sock_recvmsg
276884 ± 2% +111.1% 584406 ± 1% TOTAL sched_debug.cpu#33.ttwu_count
278248 ± 4% +107.0% 575968 ± 2% TOTAL sched_debug.cpu#55.ttwu_count
276813 ± 5% +109.4% 579782 ± 1% TOTAL sched_debug.cpu#51.ttwu_count
274908 ± 2% +110.4% 578527 ± 1% TOTAL sched_debug.cpu#52.ttwu_count
276883 ± 5% +108.8% 578248 ± 1% TOTAL sched_debug.cpu#53.ttwu_count
13 ±48% +127.3% 30 ± 8% TOTAL sched_debug.cpu#57.cpu_load[0]
12 ±42% +130.2% 29 ± 3% TOTAL sched_debug.cpu#57.cpu_load[1]
12 ±47% +159.0% 31 ±13% TOTAL sched_debug.cfs_rq[41]:/.runnable_load_avg
96427 ± 3% -51.4% 46839 ± 0% TOTAL proc-vmstat.numa_hint_faults_local
14 ±23% +113.5% 31 ±11% TOTAL sched_debug.cpu#41.cpu_load[2]
281119 ± 3% +105.5% 577638 ± 1% TOTAL sched_debug.cpu#63.ttwu_count
279804 ± 7% +105.6% 575372 ± 1% TOTAL sched_debug.cpu#54.ttwu_count
285799 ± 3% +104.0% 583075 ± 1% TOTAL sched_debug.cpu#39.ttwu_count
291642 ±21% -55.2% 130660 ±18% TOTAL sched_debug.cpu#40.avg_idle
140 ±31% -57.1% 60 ±44% TOTAL sched_debug.cfs_rq[43]:/.tg_load_contrib
13879 ± 0% +104.3% 28362 ± 0% TOTAL proc-vmstat.pgactivate
251355 ±27% -49.9% 126023 ±33% TOTAL sched_debug.cpu#42.avg_idle
284829 ± 5% +104.2% 581719 ± 1% TOTAL sched_debug.cpu#36.ttwu_count
287754 ± 5% +101.5% 579892 ± 1% TOTAL sched_debug.cpu#57.ttwu_count
290676 ± 4% +98.4% 576782 ± 1% TOTAL sched_debug.cpu#50.ttwu_count
15 ±34% +92.4% 30 ±11% TOTAL sched_debug.cfs_rq[33]:/.runnable_load_avg
28 ±12% +100.7% 57 ±30% TOTAL sched_debug.cfs_rq[31]:/.tg_load_contrib
287909 ± 4% +101.5% 580220 ± 0% TOTAL sched_debug.cpu#56.ttwu_count
281438 ± 6% +105.0% 576839 ± 1% TOTAL sched_debug.cpu#58.ttwu_count
223246 ±22% -59.3% 90847 ±16% TOTAL sched_debug.cpu#57.avg_idle
292834 ± 2% +98.9% 582416 ± 1% TOTAL sched_debug.cpu#35.ttwu_count
299007 ± 1% +93.8% 579492 ± 1% TOTAL sched_debug.cpu#49.ttwu_count
296051 ± 3% +96.4% 581581 ± 0% TOTAL sched_debug.cpu#37.ttwu_count
155938 ± 3% -48.9% 79614 ± 2% TOTAL proc-vmstat.numa_pte_updates
292818 ± 4% +99.2% 583153 ± 1% TOTAL sched_debug.cpu#34.ttwu_count
291339 ± 3% +97.8% 576203 ± 1% TOTAL sched_debug.cpu#46.ttwu_count
1173228 ±19% -48.8% 601063 ± 1% TOTAL sched_debug.cpu#8.sched_count
291043 ± 6% +98.2% 576756 ± 1% TOTAL sched_debug.cpu#48.ttwu_count
32644 ± 0% +92.8% 62936 ± 0% TOTAL sched_debug.cfs_rq[0]:/.tg->runnable_avg
32648 ± 0% +92.8% 62937 ± 0% TOTAL sched_debug.cfs_rq[1]:/.tg->runnable_avg
32650 ± 0% +92.8% 62937 ± 0% TOTAL sched_debug.cfs_rq[2]:/.tg->runnable_avg
32654 ± 0% +92.7% 62937 ± 0% TOTAL sched_debug.cfs_rq[3]:/.tg->runnable_avg
32665 ± 0% +92.7% 62937 ± 0% TOTAL sched_debug.cfs_rq[4]:/.tg->runnable_avg
32674 ± 0% +92.6% 62938 ± 0% TOTAL sched_debug.cfs_rq[5]:/.tg->runnable_avg
32689 ± 0% +92.5% 62938 ± 0% TOTAL sched_debug.cfs_rq[8]:/.tg->runnable_avg
32678 ± 0% +92.6% 62937 ± 0% TOTAL sched_debug.cfs_rq[6]:/.tg->runnable_avg
32690 ± 0% +92.5% 62938 ± 0% TOTAL sched_debug.cfs_rq[10]:/.tg->runnable_avg
32691 ± 0% +92.5% 62938 ± 0% TOTAL sched_debug.cfs_rq[9]:/.tg->runnable_avg
32686 ± 0% +92.6% 62937 ± 0% TOTAL sched_debug.cfs_rq[7]:/.tg->runnable_avg
32692 ± 0% +92.5% 62938 ± 0% TOTAL sched_debug.cfs_rq[11]:/.tg->runnable_avg
32696 ± 0% +92.5% 62938 ± 0% TOTAL sched_debug.cfs_rq[12]:/.tg->runnable_avg
32701 ± 0% +92.5% 62938 ± 0% TOTAL sched_debug.cfs_rq[13]:/.tg->runnable_avg
32704 ± 0% +92.4% 62938 ± 0% TOTAL sched_debug.cfs_rq[14]:/.tg->runnable_avg
32702 ± 0% +92.5% 62938 ± 0% TOTAL sched_debug.cfs_rq[15]:/.tg->runnable_avg
32717 ± 0% +92.4% 62937 ± 0% TOTAL sched_debug.cfs_rq[18]:/.tg->runnable_avg
32707 ± 0% +92.4% 62938 ± 0% TOTAL sched_debug.cfs_rq[16]:/.tg->runnable_avg
32713 ± 0% +92.4% 62937 ± 0% TOTAL sched_debug.cfs_rq[17]:/.tg->runnable_avg
32722 ± 0% +92.3% 62938 ± 0% TOTAL sched_debug.cfs_rq[19]:/.tg->runnable_avg
32727 ± 0% +92.3% 62938 ± 0% TOTAL sched_debug.cfs_rq[20]:/.tg->runnable_avg
32732 ± 0% +92.3% 62938 ± 0% TOTAL sched_debug.cfs_rq[21]:/.tg->runnable_avg
32740 ± 0% +92.2% 62938 ± 0% TOTAL sched_debug.cfs_rq[24]:/.tg->runnable_avg
32739 ± 0% +92.2% 62938 ± 0% TOTAL sched_debug.cfs_rq[23]:/.tg->runnable_avg
32743 ± 0% +92.2% 62938 ± 0% TOTAL sched_debug.cfs_rq[26]:/.tg->runnable_avg
32739 ± 0% +92.2% 62938 ± 0% TOTAL sched_debug.cfs_rq[22]:/.tg->runnable_avg
32743 ± 0% +92.2% 62937 ± 0% TOTAL sched_debug.cfs_rq[27]:/.tg->runnable_avg
32746 ± 0% +92.2% 62938 ± 0% TOTAL sched_debug.cfs_rq[25]:/.tg->runnable_avg
32751 ± 0% +92.2% 62938 ± 0% TOTAL sched_debug.cfs_rq[29]:/.tg->runnable_avg
32751 ± 0% +92.2% 62938 ± 0% TOTAL sched_debug.cfs_rq[30]:/.tg->runnable_avg
32747 ± 0% +92.2% 62938 ± 0% TOTAL sched_debug.cfs_rq[28]:/.tg->runnable_avg
32752 ± 0% +92.2% 62939 ± 0% TOTAL sched_debug.cfs_rq[31]:/.tg->runnable_avg
32752 ± 0% +92.2% 62939 ± 0% TOTAL sched_debug.cfs_rq[32]:/.tg->runnable_avg
32759 ± 0% +92.1% 62939 ± 0% TOTAL sched_debug.cfs_rq[33]:/.tg->runnable_avg
32770 ± 0% +92.1% 62939 ± 0% TOTAL sched_debug.cfs_rq[37]:/.tg->runnable_avg
32757 ± 0% +92.1% 62939 ± 0% TOTAL sched_debug.cfs_rq[34]:/.tg->runnable_avg
32766 ± 0% +92.1% 62939 ± 0% TOTAL sched_debug.cfs_rq[36]:/.tg->runnable_avg
32765 ± 0% +92.1% 62939 ± 0% TOTAL sched_debug.cfs_rq[35]:/.tg->runnable_avg
32767 ± 0% +92.1% 62939 ± 0% TOTAL sched_debug.cfs_rq[38]:/.tg->runnable_avg
32774 ± 0% +92.0% 62939 ± 0% TOTAL sched_debug.cfs_rq[41]:/.tg->runnable_avg
32775 ± 0% +92.0% 62939 ± 0% TOTAL sched_debug.cfs_rq[42]:/.tg->runnable_avg
32774 ± 0% +92.0% 62939 ± 0% TOTAL sched_debug.cfs_rq[39]:/.tg->runnable_avg
32779 ± 0% +92.0% 62939 ± 0% TOTAL sched_debug.cfs_rq[43]:/.tg->runnable_avg
32773 ± 0% +92.0% 62939 ± 0% TOTAL sched_debug.cfs_rq[40]:/.tg->runnable_avg
32822 ± 0% +91.8% 62940 ± 0% TOTAL sched_debug.cfs_rq[63]:/.tg->runnable_avg
32820 ± 0% +91.8% 62940 ± 0% TOTAL sched_debug.cfs_rq[62]:/.tg->runnable_avg
32793 ± 0% +91.9% 62939 ± 0% TOTAL sched_debug.cfs_rq[47]:/.tg->runnable_avg
32796 ± 0% +91.9% 62940 ± 0% TOTAL sched_debug.cfs_rq[48]:/.tg->runnable_avg
32818 ± 0% +91.8% 62940 ± 0% TOTAL sched_debug.cfs_rq[61]:/.tg->runnable_avg
32821 ± 0% +91.8% 62940 ± 0% TOTAL sched_debug.cfs_rq[60]:/.tg->runnable_avg
32790 ± 0% +91.9% 62939 ± 0% TOTAL sched_debug.cfs_rq[44]:/.tg->runnable_avg
32791 ± 0% +91.9% 62939 ± 0% TOTAL sched_debug.cfs_rq[46]:/.tg->runnable_avg
32819 ± 0% +91.8% 62941 ± 0% TOTAL sched_debug.cfs_rq[59]:/.tg->runnable_avg
32793 ± 0% +91.9% 62939 ± 0% TOTAL sched_debug.cfs_rq[45]:/.tg->runnable_avg
32812 ± 0% +91.8% 62940 ± 0% TOTAL sched_debug.cfs_rq[56]:/.tg->runnable_avg
32817 ± 0% +91.8% 62940 ± 0% TOTAL sched_debug.cfs_rq[57]:/.tg->runnable_avg
32809 ± 0% +91.8% 62940 ± 0% TOTAL sched_debug.cfs_rq[52]:/.tg->runnable_avg
32811 ± 0% +91.8% 62940 ± 0% TOTAL sched_debug.cfs_rq[55]:/.tg->runnable_avg
32801 ± 0% +91.9% 62940 ± 0% TOTAL sched_debug.cfs_rq[49]:/.tg->runnable_avg
32822 ± 0% +91.8% 62941 ± 0% TOTAL sched_debug.cfs_rq[58]:/.tg->runnable_avg
32808 ± 0% +91.8% 62940 ± 0% TOTAL sched_debug.cfs_rq[54]:/.tg->runnable_avg
32812 ± 0% +91.8% 62940 ± 0% TOTAL sched_debug.cfs_rq[53]:/.tg->runnable_avg
32809 ± 0% +91.8% 62940 ± 0% TOTAL sched_debug.cfs_rq[51]:/.tg->runnable_avg
32804 ± 0% +91.9% 62940 ± 0% TOTAL sched_debug.cfs_rq[50]:/.tg->runnable_avg
309179 ± 2% +87.8% 580793 ± 1% TOTAL sched_debug.cpu#41.ttwu_count
81 ±44% -62.2% 30 ± 4% TOTAL sched_debug.cfs_rq[62]:/.tg_load_contrib
5254994 ± 0% +89.6% 9963624 ± 0% TOTAL softirqs.TIMER
3.69 ± 1% +86.0% 6.87 ± 4% TOTAL perf-profile.cpu-cycles.memcpy.sctp_outq_flush.sctp_outq_uncork.sctp_cmd_interpreter.sctp_do_sm
308199 ± 4% +86.7% 575358 ± 1% TOTAL sched_debug.cpu#42.ttwu_count
305544 ± 4% +90.6% 582390 ± 1% TOTAL sched_debug.cpu#38.ttwu_count
309614 ± 1% +86.5% 577365 ± 1% TOTAL sched_debug.cpu#43.ttwu_count
14 ±33% +98.6% 28 ± 2% TOTAL sched_debug.cpu#35.load
309719 ± 5% +85.8% 575506 ± 1% TOTAL sched_debug.cpu#45.ttwu_count
1081161 ± 1% -46.1% 582374 ± 1% TOTAL sched_debug.cpu#29.ttwu_count
316152 ± 3% +82.7% 577646 ± 1% TOTAL sched_debug.cpu#40.ttwu_count
27 ± 8% +107.9% 57 ±49% TOTAL sched_debug.cfs_rq[9]:/.tg_load_contrib
1083438 ± 0% -45.9% 586638 ± 0% TOTAL sched_debug.cpu#7.ttwu_count
315864 ± 5% +82.6% 576755 ± 1% TOTAL sched_debug.cpu#47.ttwu_count
31 ± 7% +69.2% 52 ±36% TOTAL sched_debug.cfs_rq[13]:/.tg_load_contrib
1084154 ± 1% -45.7% 588437 ± 1% TOTAL sched_debug.cpu#30.ttwu_count
311962 ± 5% +84.9% 576905 ± 1% TOTAL sched_debug.cpu#44.ttwu_count
1060523 ± 1% -45.1% 582533 ± 1% TOTAL sched_debug.cpu#21.ttwu_count
1054699 ± 1% -45.1% 579000 ± 1% TOTAL sched_debug.cpu#14.ttwu_count
17 ±22% +70.8% 30 ± 2% TOTAL sched_debug.cpu#8.cpu_load[4]
15 ±26% +103.9% 31 ±12% TOTAL sched_debug.cpu#41.cpu_load[1]
28 ± 4% +85.4% 53 ±38% TOTAL sched_debug.cfs_rq[30]:/.tg_load_contrib
1075268 ± 1% -45.5% 586405 ± 1% TOTAL sched_debug.cpu#28.ttwu_count
1059908 ± 2% -44.9% 584032 ± 1% TOTAL sched_debug.cpu#31.ttwu_count
1068780 ± 1% -45.5% 582915 ± 1% TOTAL sched_debug.cpu#27.ttwu_count
1068310 ± 0% -44.9% 588354 ± 1% TOTAL sched_debug.cpu#5.ttwu_count
1055705 ± 2% -45.0% 580123 ± 2% TOTAL sched_debug.cpu#23.ttwu_count
1054570 ± 1% -44.7% 583394 ± 1% TOTAL sched_debug.cpu#20.ttwu_count
254366 ±43% -59.3% 103437 ±13% TOTAL sched_debug.cpu#54.avg_idle
1063589 ± 2% -44.7% 587718 ± 1% TOTAL sched_debug.cpu#6.ttwu_count
1058139 ± 3% -44.6% 585929 ± 1% TOTAL sched_debug.cpu#22.ttwu_count
1067145 ± 1% -44.9% 588220 ± 1% TOTAL sched_debug.cpu#4.ttwu_count
1057669 ± 0% -44.3% 588982 ± 1% TOTAL sched_debug.cpu#1.ttwu_count
1059310 ± 1% -44.3% 589806 ± 1% TOTAL sched_debug.cpu#3.ttwu_count
1043843 ± 1% -44.2% 581948 ± 1% TOTAL sched_debug.cpu#18.ttwu_count
1031973 ± 1% -43.9% 579237 ± 1% TOTAL sched_debug.cpu#11.ttwu_count
18 ±24% +70.7% 31 ±13% TOTAL sched_debug.cpu#41.cpu_load[0]
1047788 ± 2% -44.3% 583196 ± 1% TOTAL sched_debug.cpu#19.ttwu_count
1044163 ± 2% -44.2% 583017 ± 1% TOTAL sched_debug.cpu#26.ttwu_count
1288766 ±38% -52.8% 608462 ± 1% TOTAL sched_debug.cpu#1.sched_count
1032898 ± 1% -43.8% 579984 ± 1% TOTAL sched_debug.cpu#13.ttwu_count
1049631 ± 1% -43.7% 590788 ± 0% TOTAL sched_debug.cpu#2.ttwu_count
1034514 ± 1% -43.7% 582699 ± 1% TOTAL sched_debug.cpu#15.ttwu_count
16 ±20% +75.0% 29 ± 4% TOTAL sched_debug.cpu#9.cpu_load[0]
1023516 ± 1% -43.1% 582541 ± 1% TOTAL sched_debug.cpu#17.ttwu_count
1039799 ± 3% -44.3% 579547 ± 2% TOTAL sched_debug.cpu#25.ttwu_count
1302779 ±36% -54.0% 598642 ± 1% TOTAL sched_debug.cpu#9.sched_count
1012139 ± 0% -42.5% 581867 ± 1% TOTAL sched_debug.cpu#9.ttwu_count
1032088 ± 2% -43.7% 581367 ± 1% TOTAL sched_debug.cpu#12.ttwu_count
1015497 ± 1% -42.6% 582939 ± 1% TOTAL sched_debug.cpu#16.ttwu_count
1023139 ± 1% -43.1% 582533 ± 1% TOTAL sched_debug.cpu#10.ttwu_count
1039034 ± 1% -42.7% 595606 ± 1% TOTAL sched_debug.cpu#0.ttwu_count
1125131 ±22% -46.7% 600109 ± 2% TOTAL sched_debug.cpu#12.sched_count
9019 ± 6% -42.0% 5232 ±10% TOTAL proc-vmstat.pgmigrate_success
9019 ± 6% -42.0% 5232 ±10% TOTAL proc-vmstat.numa_pages_migrated
1020221 ± 2% -42.6% 586085 ± 1% TOTAL sched_debug.cpu#24.ttwu_count
18 ± 9% +69.2% 30 ± 2% TOTAL sched_debug.cpu#9.cpu_load[4]
19 ±14% +84.8% 36 ±34% TOTAL sched_debug.cfs_rq[38]:/.load
30 ±12% +56.7% 47 ±18% TOTAL sched_debug.cfs_rq[25]:/.tg_load_contrib
17 ±13% +62.5% 28 ± 3% TOTAL sched_debug.cpu#1.cpu_load[0]
17 ±15% +64.4% 28 ± 1% TOTAL sched_debug.cpu#22.cpu_load[3]
19 ±16% +52.1% 29 ± 2% TOTAL sched_debug.cpu#25.cpu_load[3]
991690 ± 0% -41.2% 582981 ± 1% TOTAL sched_debug.cpu#8.ttwu_count
18 ±12% +60.2% 29 ± 4% TOTAL sched_debug.cpu#15.cpu_load[4]
18 ±22% +66.7% 30 ± 2% TOTAL sched_debug.cpu#8.cpu_load[3]
17 ±11% +72.4% 30 ± 2% TOTAL sched_debug.cpu#9.cpu_load[3]
21 ±26% +70.5% 35 ±38% TOTAL sched_debug.cpu#61.load
19 ±25% +56.8% 29 ± 2% TOTAL sched_debug.cpu#43.load
18 ±16% +52.7% 27 ± 1% TOTAL sched_debug.cpu#30.cpu_load[2]
18 ±18% +56.0% 28 ± 3% TOTAL sched_debug.cpu#26.cpu_load[4]
17 ±17% +59.1% 28 ± 3% TOTAL sched_debug.cpu#22.cpu_load[1]
17 ± 4% +58.0% 27 ± 4% TOTAL sched_debug.cpu#3.cpu_load[0]
17 ±16% +55.1% 27 ± 3% TOTAL sched_debug.cpu#22.cpu_load[0]
18 ±19% +56.7% 28 ± 4% TOTAL sched_debug.cpu#26.cpu_load[3]
17 ±17% +64.0% 28 ± 2% TOTAL sched_debug.cpu#22.cpu_load[2]
2664 ±15% +63.7% 4362 ± 1% TOTAL sched_debug.cpu#20.curr->pid
19 ±14% +56.6% 31 ± 3% TOTAL sched_debug.cfs_rq[45]:/.load
71 ±43% -52.1% 34 ±13% TOTAL sched_debug.cfs_rq[47]:/.tg_load_contrib
13306 ± 0% +61.5% 21495 ± 0% TOTAL proc-vmstat.nr_shmem
53249 ± 0% +61.5% 85996 ± 0% TOTAL meminfo.Shmem
18 ± 8% +58.7% 29 ± 1% TOTAL sched_debug.cpu#11.cpu_load[3]
18 ±12% +57.1% 28 ± 1% TOTAL sched_debug.cpu#30.cpu_load[3]
17 ±22% +65.2% 29 ± 3% TOTAL sched_debug.cfs_rq[9]:/.runnable_load_avg
19 ± 6% +54.7% 29 ± 1% TOTAL sched_debug.cpu#23.cpu_load[4]
17 ±15% +69.8% 29 ± 3% TOTAL sched_debug.cpu#9.cpu_load[2]
18 ± 7% +59.1% 29 ± 2% TOTAL sched_debug.cpu#11.cpu_load[4]
17 ±17% +70.6% 29 ± 4% TOTAL sched_debug.cpu#9.cpu_load[1]
18 ± 8% +55.4% 28 ± 3% TOTAL sched_debug.cpu#1.cpu_load[1]
18 ± 7% +55.3% 29 ± 3% TOTAL sched_debug.cpu#1.cpu_load[2]
19 ±14% +54.7% 29 ± 1% TOTAL sched_debug.cpu#25.cpu_load[4]
17 ±14% +65.5% 28 ± 1% TOTAL sched_debug.cpu#22.cpu_load[4]
18 ± 9% +53.8% 28 ± 4% TOTAL sched_debug.cfs_rq[13]:/.runnable_load_avg
19 ±11% +54.7% 29 ± 5% TOTAL sched_debug.cpu#15.cpu_load[3]
18 ±12% +55.4% 28 ± 1% TOTAL sched_debug.cfs_rq[23]:/.runnable_load_avg
18 ± 2% +57.1% 28 ± 3% TOTAL sched_debug.cpu#3.cpu_load[3]
18 ± 2% +60.4% 29 ± 2% TOTAL sched_debug.cpu#3.cpu_load[4]
17 ± 2% +60.7% 28 ± 3% TOTAL sched_debug.cpu#3.cpu_load[2]
2777 ±10% +60.7% 4463 ± 7% TOTAL sched_debug.cpu#27.curr->pid
2744 ±13% +58.3% 4342 ± 3% TOTAL sched_debug.cpu#22.curr->pid
988578 ±10% -38.9% 604501 ± 1% TOTAL sched_debug.cpu#24.sched_count
1083808 ±31% -42.7% 621044 ± 7% TOTAL sched_debug.cpu#6.sched_count
19 ±12% +53.1% 29 ± 4% TOTAL sched_debug.cpu#19.cpu_load[3]
19 ±13% +53.6% 29 ± 4% TOTAL sched_debug.cpu#19.cpu_load[4]
966177 ±24% -37.8% 600790 ± 2% TOTAL sched_debug.cpu#27.sched_count
1146030 ±46% -46.9% 608122 ± 1% TOTAL sched_debug.cpu#2.sched_count
18 ± 3% +55.6% 28 ± 3% TOTAL sched_debug.cpu#3.cpu_load[1]
19 ±19% +41.4% 28 ± 2% TOTAL sched_debug.cpu#5.cpu_load[0]
17 ± 9% +58.4% 28 ± 5% TOTAL sched_debug.cfs_rq[3]:/.runnable_load_avg
18 ±19% +52.2% 28 ± 5% TOTAL sched_debug.cpu#26.cpu_load[2]
18 ±17% +50.5% 28 ± 3% TOTAL sched_debug.cpu#30.cpu_load[0]
18 ±17% +50.5% 27 ± 2% TOTAL sched_debug.cpu#30.cpu_load[1]
19 ±11% +49.0% 28 ± 4% TOTAL sched_debug.cpu#15.cpu_load[2]
18 ±12% +56.7% 28 ± 1% TOTAL sched_debug.cpu#13.cpu_load[0]
20 ±13% +38.0% 27 ± 3% TOTAL sched_debug.cfs_rq[31]:/.runnable_load_avg
20 ±18% +40.0% 28 ± 2% TOTAL sched_debug.cpu#5.cpu_load[1]
18 ±22% +56.5% 28 ± 5% TOTAL sched_debug.cpu#8.cpu_load[0]
19 ±16% +49.0% 28 ± 2% TOTAL sched_debug.cpu#25.cpu_load[2]
1040164 ±27% -37.2% 653364 ±10% TOTAL sched_debug.cpu#19.sched_count
80 ±29% -45.9% 43 ±26% TOTAL sched_debug.cfs_rq[37]:/.tg_load_contrib
2953 ±10% +47.1% 4345 ± 2% TOTAL sched_debug.cpu#14.curr->pid
923395 ± 6% -24.6% 695896 ±17% TOTAL sched_debug.cpu#21.sched_count
2800 ± 5% +52.5% 4269 ± 1% TOTAL sched_debug.cpu#11.curr->pid
2746 ±15% +57.0% 4312 ± 2% TOTAL sched_debug.cpu#8.curr->pid
20 ±12% +44.1% 29 ± 1% TOTAL sched_debug.cpu#5.cpu_load[4]
18 ±13% +58.1% 29 ± 1% TOTAL sched_debug.cpu#13.cpu_load[2]
18 ± 7% +54.8% 28 ± 2% TOTAL sched_debug.cpu#11.cpu_load[1]
18 ±13% +56.5% 28 ± 1% TOTAL sched_debug.cpu#13.cpu_load[1]
18 ±22% +60.9% 29 ± 5% TOTAL sched_debug.cpu#8.cpu_load[2]
19 ± 6% +47.9% 28 ± 2% TOTAL sched_debug.cpu#7.cpu_load[2]
19 ±12% +51.6% 28 ± 2% TOTAL sched_debug.cpu#11.cpu_load[0]
18 ±12% +58.7% 29 ± 3% TOTAL sched_debug.cfs_rq[11]:/.runnable_load_avg
18 ±11% +62.6% 29 ± 5% TOTAL sched_debug.cfs_rq[1]:/.runnable_load_avg
19 ± 5% +50.5% 28 ± 1% TOTAL sched_debug.cpu#23.cpu_load[3]
19 ±21% +48.0% 29 ± 3% TOTAL sched_debug.cpu#27.cpu_load[4]
18 ±12% +57.0% 29 ± 2% TOTAL sched_debug.cpu#30.cpu_load[4]
18 ± 7% +53.2% 28 ± 2% TOTAL sched_debug.cpu#11.cpu_load[2]
19 ± 4% +58.9% 30 ± 9% TOTAL sched_debug.cfs_rq[2]:/.runnable_load_avg
1101084 ±30% -44.5% 611225 ± 2% TOTAL sched_debug.cpu#4.sched_count
2942 ±10% +44.9% 4263 ± 2% TOTAL sched_debug.cpu#7.curr->pid
2905 ± 7% +53.9% 4471 ± 2% TOTAL sched_debug.cpu#23.curr->pid
1035532 ±19% -41.9% 601372 ± 1% TOTAL sched_debug.cpu#15.sched_count
663 ± 7% +48.7% 987 ± 0% TOTAL sched_debug.cfs_rq[8]:/.tg_runnable_contrib
30382 ± 7% +49.0% 45275 ± 0% TOTAL sched_debug.cfs_rq[8]:/.avg->runnable_avg_sum
30719 ± 7% +46.5% 44998 ± 0% TOTAL sched_debug.cfs_rq[15]:/.avg->runnable_avg_sum
671 ± 7% +45.8% 979 ± 0% TOTAL sched_debug.cfs_rq[15]:/.tg_runnable_contrib
20 ± 9% +46.1% 29 ± 2% TOTAL sched_debug.cpu#18.cpu_load[4]
20 ± 6% +45.6% 30 ± 4% TOTAL sched_debug.cpu#1.cpu_load[4]
19 ± 9% +52.0% 29 ± 2% TOTAL sched_debug.cpu#14.cpu_load[4]
19 ±10% +57.3% 30 ± 2% TOTAL sched_debug.cpu#13.cpu_load[4]
19 ±12% +55.2% 29 ± 2% TOTAL sched_debug.cpu#13.cpu_load[3]
20 ±13% +35.6% 27 ± 1% TOTAL sched_debug.cfs_rq[22]:/.runnable_load_avg
19 ±11% +43.8% 27 ± 2% TOTAL sched_debug.cfs_rq[5]:/.runnable_load_avg
19 ±13% +42.7% 27 ± 3% TOTAL sched_debug.cpu#31.cpu_load[0]
1045 ± 1% +52.1% 1589 ± 7% TOTAL numa-vmstat.node0.nr_alloc_batch
737654 ± 1% +50.3% 1108371 ± 0% TOTAL softirqs.RCU
2824 ±15% +54.7% 4370 ± 1% TOTAL sched_debug.cpu#31.curr->pid
2900 ± 3% +48.4% 4303 ± 1% TOTAL sched_debug.cpu#17.curr->pid
1117 ± 2% +42.8% 1595 ± 6% TOTAL numa-vmstat.node3.nr_alloc_batch
31475 ± 6% +43.6% 45205 ± 0% TOTAL sched_debug.cfs_rq[26]:/.avg->runnable_avg_sum
2819 ±13% +51.7% 4276 ± 2% TOTAL sched_debug.cpu#26.curr->pid
31215 ± 4% +44.3% 45055 ± 0% TOTAL sched_debug.cfs_rq[30]:/.avg->runnable_avg_sum
30904 ± 4% +46.4% 45232 ± 0% TOTAL sched_debug.cfs_rq[10]:/.avg->runnable_avg_sum
675 ± 4% +45.9% 985 ± 0% TOTAL sched_debug.cfs_rq[10]:/.tg_runnable_contrib
687 ± 6% +42.7% 981 ± 0% TOTAL sched_debug.cfs_rq[26]:/.tg_runnable_contrib
19 ± 3% +47.9% 28 ± 3% TOTAL sched_debug.cpu#4.cpu_load[1]
21 ±18% +32.1% 28 ± 2% TOTAL sched_debug.cpu#31.cpu_load[2]
19 ±12% +44.9% 28 ± 5% TOTAL sched_debug.cpu#15.cpu_load[1]
18 ±21% +57.6% 29 ± 6% TOTAL sched_debug.cpu#8.cpu_load[1]
19 ±17% +47.4% 28 ± 3% TOTAL sched_debug.cpu#25.cpu_load[0]
19 ± 3% +49.0% 28 ± 6% TOTAL sched_debug.cpu#4.cpu_load[0]
20 ±13% +42.0% 28 ± 1% TOTAL sched_debug.cpu#5.cpu_load[3]
20 ±10% +41.0% 28 ± 4% TOTAL sched_debug.cfs_rq[15]:/.runnable_load_avg
19 ± 4% +45.9% 28 ± 2% TOTAL sched_debug.cpu#4.cpu_load[2]
19 ± 7% +47.4% 28 ± 2% TOTAL sched_debug.cpu#23.cpu_load[2]
18 ± 3% +47.9% 27 ± 3% TOTAL sched_debug.cpu#7.cpu_load[0]
19 ±13% +46.9% 28 ± 6% TOTAL sched_debug.cpu#15.cpu_load[0]
19 ± 8% +44.8% 27 ± 2% TOTAL sched_debug.cpu#23.cpu_load[1]
19 ±16% +45.4% 28 ± 1% TOTAL sched_debug.cpu#25.cpu_load[1]
19 ±11% +40.8% 27 ± 1% TOTAL sched_debug.cpu#23.cpu_load[0]
20 ±14% +39.6% 28 ± 1% TOTAL sched_debug.cpu#5.cpu_load[2]
19 ±14% +41.4% 28 ± 2% TOTAL sched_debug.cpu#31.cpu_load[1]
19 ± 6% +45.8% 28 ± 2% TOTAL sched_debug.cpu#7.cpu_load[1]
20 ±13% +40.6% 28 ± 3% TOTAL sched_debug.cfs_rq[25]:/.runnable_load_avg
2943 ± 8% +45.8% 4289 ± 0% TOTAL sched_debug.cpu#1.curr->pid
2703 ±11% +59.2% 4302 ± 1% TOTAL sched_debug.cpu#4.curr->pid
2989 ±13% +42.7% 4265 ± 2% TOTAL sched_debug.cpu#24.curr->pid
3048 ± 8% +42.7% 4349 ± 1% TOTAL sched_debug.cpu#3.curr->pid
30576 ± 6% +47.4% 45060 ± 0% TOTAL sched_debug.cfs_rq[22]:/.avg->runnable_avg_sum
682 ± 4% +43.6% 979 ± 0% TOTAL sched_debug.cfs_rq[30]:/.tg_runnable_contrib
1100 ± 3% +49.4% 1643 ± 4% TOTAL numa-vmstat.node1.nr_alloc_batch
4389 ± 1% +46.7% 6439 ± 1% TOTAL proc-vmstat.nr_alloc_batch
668 ± 6% +46.9% 982 ± 0% TOTAL sched_debug.cfs_rq[22]:/.tg_runnable_contrib
3115 ± 9% +38.4% 4310 ± 1% TOTAL sched_debug.cpu#5.curr->pid
695 ± 4% +41.8% 985 ± 0% TOTAL sched_debug.cfs_rq[18]:/.tg_runnable_contrib
31812 ± 4% +42.5% 45324 ± 0% TOTAL sched_debug.cfs_rq[18]:/.avg->runnable_avg_sum
1157 ± 4% +40.2% 1622 ± 7% TOTAL numa-vmstat.node2.nr_alloc_batch
19 ± 6% +51.6% 28 ± 2% TOTAL sched_debug.cpu#7.cpu_load[3]
19 ± 6% +51.5% 29 ± 1% TOTAL sched_debug.cpu#2.cpu_load[4]
19 ± 6% +53.7% 29 ± 2% TOTAL sched_debug.cpu#7.cpu_load[4]
19 ±13% +51.0% 29 ± 3% TOTAL sched_debug.cpu#19.cpu_load[2]
19 ±10% +49.5% 29 ± 4% TOTAL sched_debug.cpu#2.cpu_load[2]
19 ±13% +50.0% 28 ± 4% TOTAL sched_debug.cpu#19.cpu_load[1]
19 ± 6% +49.5% 29 ± 3% TOTAL sched_debug.cpu#2.cpu_load[3]
20 ± 8% +45.5% 29 ± 1% TOTAL sched_debug.cpu#18.cpu_load[3]
19 ± 5% +51.0% 29 ± 2% TOTAL sched_debug.cpu#20.cpu_load[4]
20 ± 7% +41.2% 28 ± 2% TOTAL sched_debug.cpu#18.cpu_load[2]
20 ± 7% +47.0% 29 ± 2% TOTAL sched_debug.cpu#4.cpu_load[4]
20 ± 5% +45.0% 29 ± 2% TOTAL sched_debug.cpu#4.cpu_load[3]
19 ±16% +50.5% 28 ± 1% TOTAL sched_debug.cpu#27.cpu_load[3]
19 ±12% +44.9% 28 ± 4% TOTAL sched_debug.cpu#16.cpu_load[0]
19 ± 5% +48.5% 29 ± 3% TOTAL sched_debug.cpu#1.cpu_load[3]
19 ± 6% +50.0% 29 ± 1% TOTAL sched_debug.cpu#20.cpu_load[3]
2924 ±10% +48.1% 4330 ± 2% TOTAL sched_debug.cpu#13.curr->pid
2942 ± 7% +45.9% 4291 ± 2% TOTAL sched_debug.cpu#15.curr->pid
2957 ± 5% +44.4% 4269 ± 1% TOTAL sched_debug.cpu#9.curr->pid
2930 ±12% +47.6% 4326 ± 3% TOTAL sched_debug.cpu#16.curr->pid
684 ± 7% +44.1% 986 ± 0% TOTAL sched_debug.cfs_rq[27]:/.tg_runnable_contrib
32158 ± 4% +40.1% 45057 ± 0% TOTAL sched_debug.cfs_rq[24]:/.avg->runnable_avg_sum
2949 ± 9% +46.3% 4315 ± 2% TOTAL sched_debug.cpu#19.curr->pid
20 ± 9% +46.5% 29 ± 1% TOTAL sched_debug.cpu#16.cpu_load[4]
19 ±14% +51.0% 29 ± 5% TOTAL sched_debug.cpu#19.cpu_load[0]
31356 ± 7% +43.8% 45098 ± 0% TOTAL sched_debug.cfs_rq[27]:/.avg->runnable_avg_sum
33090 ± 7% +36.2% 45061 ± 0% TOTAL sched_debug.cfs_rq[29]:/.avg->runnable_avg_sum
721 ± 7% +35.7% 979 ± 0% TOTAL sched_debug.cfs_rq[29]:/.tg_runnable_contrib
31372 ± 5% +43.7% 45081 ± 0% TOTAL sched_debug.cfs_rq[11]:/.avg->runnable_avg_sum
701 ± 4% +39.8% 980 ± 0% TOTAL sched_debug.cfs_rq[24]:/.tg_runnable_contrib
3005 ± 6% +44.1% 4331 ± 2% TOTAL sched_debug.cpu#28.curr->pid
722 ± 5% +36.2% 983 ± 0% TOTAL sched_debug.cfs_rq[28]:/.tg_runnable_contrib
32215 ± 6% +40.3% 45183 ± 0% TOTAL sched_debug.cfs_rq[19]:/.avg->runnable_avg_sum
32108 ± 3% +40.5% 45102 ± 0% TOTAL sched_debug.cfs_rq[13]:/.avg->runnable_avg_sum
703 ± 6% +39.9% 983 ± 0% TOTAL sched_debug.cfs_rq[19]:/.tg_runnable_contrib
33049 ± 5% +36.4% 45063 ± 0% TOTAL sched_debug.cfs_rq[28]:/.avg->runnable_avg_sum
685 ± 5% +43.1% 981 ± 0% TOTAL sched_debug.cfs_rq[11]:/.tg_runnable_contrib
702 ± 3% +39.7% 981 ± 0% TOTAL sched_debug.cfs_rq[13]:/.tg_runnable_contrib
32365 ± 1% +39.3% 45086 ± 0% TOTAL sched_debug.cfs_rq[20]:/.avg->runnable_avg_sum
706 ± 1% +38.9% 982 ± 0% TOTAL sched_debug.cfs_rq[20]:/.tg_runnable_contrib
1120854 ±41% -45.0% 616182 ± 4% TOTAL sched_debug.cpu#5.sched_count
31452 ± 3% +44.5% 45437 ± 1% TOTAL sched_debug.cfs_rq[9]:/.avg->runnable_avg_sum
32534 ± 5% +38.4% 45040 ± 0% TOTAL sched_debug.cfs_rq[31]:/.avg->runnable_avg_sum
687 ± 3% +44.1% 990 ± 1% TOTAL sched_debug.cfs_rq[9]:/.tg_runnable_contrib
702 ± 7% +40.4% 985 ± 0% TOTAL sched_debug.cfs_rq[25]:/.tg_runnable_contrib
694 ± 5% +41.6% 983 ± 0% TOTAL sched_debug.cfs_rq[14]:/.tg_runnable_contrib
19 ±24% +50.0% 28 ± 7% TOTAL sched_debug.cfs_rq[8]:/.runnable_load_avg
19 ± 9% +46.9% 28 ± 3% TOTAL sched_debug.cpu#20.cpu_load[2]
19 ±12% +42.4% 28 ± 4% TOTAL sched_debug.cpu#16.cpu_load[1]
19 ±10% +47.5% 29 ± 6% TOTAL sched_debug.cpu#2.cpu_load[1]
19 ±14% +43.9% 28 ± 2% TOTAL sched_debug.cpu#20.cpu_load[0]
20 ±13% +33.7% 27 ± 2% TOTAL sched_debug.cpu#24.cpu_load[1]
20 ±13% +50.0% 30 ±11% TOTAL sched_debug.cpu#2.cpu_load[0]
18 ±16% +48.9% 28 ± 3% TOTAL sched_debug.cpu#27.cpu_load[2]
18 ±19% +46.8% 27 ± 5% TOTAL sched_debug.cpu#26.cpu_load[1]
19 ±14% +43.3% 27 ± 4% TOTAL sched_debug.cfs_rq[30]:/.runnable_load_avg
20 ± 6% +41.6% 28 ± 4% TOTAL sched_debug.cfs_rq[4]:/.runnable_load_avg
20 ±11% +40.0% 28 ± 2% TOTAL sched_debug.cfs_rq[16]:/.runnable_load_avg
20 ± 4% +36.3% 27 ± 2% TOTAL sched_debug.cfs_rq[20]:/.runnable_load_avg
19 ±13% +44.9% 28 ± 1% TOTAL sched_debug.cpu#16.cpu_load[2]
710 ± 5% +38.1% 981 ± 0% TOTAL sched_debug.cfs_rq[31]:/.tg_runnable_contrib
32116 ± 7% +40.5% 45109 ± 0% TOTAL sched_debug.cfs_rq[25]:/.avg->runnable_avg_sum
31789 ± 5% +41.7% 45041 ± 0% TOTAL sched_debug.cfs_rq[14]:/.avg->runnable_avg_sum
31727 ± 4% +42.1% 45080 ± 0% TOTAL sched_debug.cfs_rq[17]:/.avg->runnable_avg_sum
696 ± 6% +41.2% 983 ± 0% TOTAL sched_debug.cfs_rq[0]:/.tg_runnable_contrib
31871 ± 6% +41.4% 45051 ± 0% TOTAL sched_debug.cfs_rq[0]:/.avg->runnable_avg_sum
32292 ± 5% +39.4% 45011 ± 0% TOTAL sched_debug.cfs_rq[7]:/.avg->runnable_avg_sum
699 ± 3% +40.5% 982 ± 0% TOTAL sched_debug.cfs_rq[16]:/.tg_runnable_contrib
706 ± 5% +39.0% 981 ± 0% TOTAL sched_debug.cfs_rq[7]:/.tg_runnable_contrib
3025 ±16% +43.2% 4334 ± 2% TOTAL sched_debug.cpu#10.curr->pid
32043 ± 3% +40.6% 45060 ± 0% TOTAL sched_debug.cfs_rq[16]:/.avg->runnable_avg_sum
705 ± 4% +39.8% 985 ± 0% TOTAL sched_debug.cfs_rq[3]:/.tg_runnable_contrib
32260 ± 4% +39.8% 45100 ± 0% TOTAL sched_debug.cfs_rq[3]:/.avg->runnable_avg_sum
694 ± 4% +41.3% 980 ± 0% TOTAL sched_debug.cfs_rq[17]:/.tg_runnable_contrib
32722 ± 3% +38.1% 45195 ± 0% TOTAL sched_debug.cfs_rq[2]:/.avg->runnable_avg_sum
2869 ±11% +52.0% 4362 ± 3% TOTAL sched_debug.cpu#30.curr->pid
698 ± 5% +40.6% 982 ± 0% TOTAL sched_debug.cfs_rq[23]:/.tg_runnable_contrib
32007 ± 5% +40.7% 45031 ± 0% TOTAL sched_debug.cfs_rq[23]:/.avg->runnable_avg_sum
21 ± 7% +34.3% 29 ± 3% TOTAL sched_debug.cfs_rq[18]:/.runnable_load_avg
20 ±11% +40.8% 29 ± 2% TOTAL sched_debug.cpu#24.cpu_load[3]
21 ±10% +34.6% 28 ± 4% TOTAL sched_debug.cpu#29.cpu_load[3]
21 ±10% +36.8% 29 ± 3% TOTAL sched_debug.cpu#29.cpu_load[4]
21 ± 9% +39.0% 29 ± 1% TOTAL sched_debug.cpu#24.cpu_load[4]
21 ±12% +34.9% 28 ± 5% TOTAL sched_debug.cpu#29.cpu_load[2]
20 ±11% +41.7% 29 ± 2% TOTAL sched_debug.cpu#21.cpu_load[3]
20 ±10% +45.0% 29 ± 2% TOTAL sched_debug.cpu#14.cpu_load[3]
20 ±15% +38.2% 28 ± 3% TOTAL sched_debug.cpu#14.cpu_load[1]
20 ±11% +45.0% 29 ± 2% TOTAL sched_debug.cpu#16.cpu_load[3]
23 ±10% +25.2% 28 ± 2% TOTAL sched_debug.cpu#55.load
21 ± 6% +34.6% 28 ± 2% TOTAL sched_debug.cpu#28.cpu_load[4]
20 ± 6% +38.8% 28 ± 3% TOTAL sched_debug.cfs_rq[7]:/.runnable_load_avg
20 ±11% +44.1% 29 ± 1% TOTAL sched_debug.cpu#21.cpu_load[4]
2959 ± 8% +45.8% 4315 ± 1% TOTAL sched_debug.cpu#6.curr->pid
33054 ± 2% +36.8% 45208 ± 0% TOTAL sched_debug.cfs_rq[1]:/.avg->runnable_avg_sum
715 ± 2% +37.4% 983 ± 0% TOTAL sched_debug.cfs_rq[2]:/.tg_runnable_contrib
32267 ± 5% +39.7% 45074 ± 0% TOTAL sched_debug.cfs_rq[4]:/.avg->runnable_avg_sum
705 ± 4% +38.9% 979 ± 0% TOTAL sched_debug.cfs_rq[4]:/.tg_runnable_contrib
2983 ±12% +47.3% 4394 ± 2% TOTAL sched_debug.cpu#2.curr->pid
32679 ± 5% +37.9% 45073 ± 0% TOTAL sched_debug.cfs_rq[6]:/.avg->runnable_avg_sum
722 ± 2% +35.8% 981 ± 0% TOTAL sched_debug.cfs_rq[1]:/.tg_runnable_contrib
2841 ±20% +54.9% 4399 ± 1% TOTAL sched_debug.cpu#0.curr->pid
32684 ± 5% +37.8% 45024 ± 0% TOTAL sched_debug.cfs_rq[21]:/.avg->runnable_avg_sum
714 ± 5% +37.3% 980 ± 0% TOTAL sched_debug.cfs_rq[6]:/.tg_runnable_contrib
22 ±10% +33.3% 29 ± 1% TOTAL sched_debug.cpu#12.cpu_load[4]
1044460 ±29% -42.8% 597073 ± 1% TOTAL sched_debug.cpu#31.sched_count
2962 ±10% +45.6% 4314 ± 2% TOTAL sched_debug.cpu#21.curr->pid
107755 ± 0% +35.8% 146314 ± 0% TOTAL sched_debug.cfs_rq[16]:/.exec_clock
3040 ±14% +45.8% 4431 ± 3% TOTAL sched_debug.cpu#18.curr->pid
713 ± 5% +37.3% 980 ± 0% TOTAL sched_debug.cfs_rq[21]:/.tg_runnable_contrib
107444 ± 0% +36.2% 146370 ± 0% TOTAL sched_debug.cfs_rq[8]:/.exec_clock
33921 ± 7% +32.9% 45070 ± 0% TOTAL sched_debug.cfs_rq[5]:/.avg->runnable_avg_sum
19 ±16% +41.7% 27 ± 4% TOTAL sched_debug.cpu#27.cpu_load[0]
18 ±16% +46.8% 27 ± 4% TOTAL sched_debug.cpu#27.cpu_load[1]
20 ±13% +35.3% 27 ± 4% TOTAL sched_debug.cpu#24.cpu_load[0]
110484 ± 1% +34.1% 148181 ± 0% TOTAL sched_debug.cfs_rq[10]:/.exec_clock
108447 ± 0% +34.7% 146038 ± 0% TOTAL sched_debug.cfs_rq[17]:/.exec_clock
741 ± 7% +32.3% 980 ± 0% TOTAL sched_debug.cfs_rq[5]:/.tg_runnable_contrib
3131 ±10% +40.9% 4412 ± 5% TOTAL sched_debug.cpu#29.curr->pid
108894 ± 1% +33.9% 145821 ± 0% TOTAL sched_debug.cfs_rq[24]:/.exec_clock
108897 ± 0% +34.3% 146281 ± 0% TOTAL sched_debug.cfs_rq[9]:/.exec_clock
110117 ± 2% +32.5% 145856 ± 0% TOTAL sched_debug.cfs_rq[25]:/.exec_clock
109871 ± 1% +32.8% 145922 ± 0% TOTAL sched_debug.cfs_rq[15]:/.exec_clock
110887 ± 1% +32.8% 147227 ± 0% TOTAL sched_debug.cfs_rq[12]:/.exec_clock
20 ±13% +40.2% 28 ± 4% TOTAL sched_debug.cpu#0.cpu_load[3]
21 ± 7% +30.5% 27 ± 2% TOTAL sched_debug.cfs_rq[28]:/.runnable_load_avg
20 ± 7% +37.9% 28 ± 1% TOTAL sched_debug.cpu#18.cpu_load[1]
21 ± 9% +30.6% 28 ± 2% TOTAL sched_debug.cfs_rq[29]:/.runnable_load_avg
19 ±12% +43.4% 28 ± 1% TOTAL sched_debug.cpu#20.cpu_load[1]
20 ± 9% +38.8% 28 ± 2% TOTAL sched_debug.cpu#21.cpu_load[2]
21 ± 5% +33.0% 28 ± 2% TOTAL sched_debug.cpu#28.cpu_load[3]
21 ± 5% +32.4% 27 ± 2% TOTAL sched_debug.cpu#28.cpu_load[1]
20 ± 5% +33.7% 27 ± 2% TOTAL sched_debug.cpu#28.cpu_load[2]
21 ± 9% +29.0% 27 ± 5% TOTAL sched_debug.cpu#29.cpu_load[0]
21 ±13% +35.2% 28 ± 3% TOTAL sched_debug.cpu#24.cpu_load[2]
21 ± 9% +34.3% 28 ± 1% TOTAL sched_debug.cpu#18.cpu_load[0]
3098 ±11% +39.9% 4335 ± 2% TOTAL sched_debug.cpu#25.curr->pid
109876 ± 0% +33.3% 146491 ± 0% TOTAL sched_debug.cfs_rq[11]:/.exec_clock
268016 ± 3% +32.8% 355948 ± 9% TOTAL meminfo.Committed_AS
109686 ± 1% +33.3% 146181 ± 0% TOTAL sched_debug.cfs_rq[13]:/.exec_clock
109930 ± 1% +33.3% 146529 ± 0% TOTAL sched_debug.cfs_rq[18]:/.exec_clock
110814 ± 2% +31.8% 146073 ± 0% TOTAL sched_debug.cfs_rq[22]:/.exec_clock
111231 ± 1% +31.2% 145955 ± 0% TOTAL sched_debug.cfs_rq[26]:/.exec_clock
792742 ± 2% -24.3% 599925 ± 1% TOTAL sched_debug.cpu#8.nr_switches
20 ±13% +40.6% 28 ± 2% TOTAL sched_debug.cpu#14.cpu_load[2]
22 ±11% +30.4% 29 ± 1% TOTAL sched_debug.cpu#12.cpu_load[3]
22 ±10% +32.7% 29 ± 3% TOTAL sched_debug.cfs_rq[21]:/.runnable_load_avg
22 ±11% +26.5% 28 ± 1% TOTAL sched_debug.cpu#12.cpu_load[2]
20 ±11% +43.1% 29 ± 5% TOTAL sched_debug.cpu#0.cpu_load[4]
22 ±11% +29.1% 28 ± 2% TOTAL sched_debug.cpu#12.cpu_load[1]
22 ±21% +30.0% 28 ± 2% TOTAL sched_debug.cpu#17.cpu_load[3]
896719 ±18% -31.3% 615816 ± 4% TOTAL sched_debug.cpu#30.sched_count
110467 ± 1% +32.0% 145847 ± 0% TOTAL sched_debug.cfs_rq[19]:/.exec_clock
110856 ± 1% +31.7% 145977 ± 0% TOTAL sched_debug.cfs_rq[20]:/.exec_clock
112749 ± 1% +32.1% 148960 ± 0% TOTAL sched_debug.cfs_rq[2]:/.exec_clock
110628 ± 1% +32.0% 146083 ± 0% TOTAL sched_debug.cfs_rq[23]:/.exec_clock
116500 ± 1% +31.0% 152586 ± 0% TOTAL sched_debug.cfs_rq[0]:/.exec_clock
33803 ± 6% +33.2% 45033 ± 0% TOTAL sched_debug.cfs_rq[12]:/.avg->runnable_avg_sum
738 ± 6% +32.9% 981 ± 0% TOTAL sched_debug.cfs_rq[12]:/.tg_runnable_contrib
111685 ± 0% +30.4% 145683 ± 0% TOTAL sched_debug.cfs_rq[4]:/.exec_clock
111045 ± 1% +31.3% 145845 ± 0% TOTAL sched_debug.cfs_rq[21]:/.exec_clock
111156 ± 0% +31.3% 145923 ± 0% TOTAL sched_debug.cfs_rq[14]:/.exec_clock
111731 ± 0% +30.3% 145623 ± 0% TOTAL sched_debug.cfs_rq[5]:/.exec_clock
111147 ± 1% +31.0% 145641 ± 0% TOTAL sched_debug.cfs_rq[6]:/.exec_clock
3172 ± 3% +33.8% 4245 ± 0% TOTAL sched_debug.cpu#12.curr->pid
111847 ± 0% +30.4% 145836 ± 0% TOTAL sched_debug.cfs_rq[1]:/.exec_clock
20 ±39% +43.1% 29 ± 3% TOTAL sched_debug.cpu#63.load
111241 ± 0% +31.0% 145727 ± 0% TOTAL sched_debug.cfs_rq[3]:/.exec_clock
111457 ± 1% +30.7% 145630 ± 0% TOTAL sched_debug.cfs_rq[31]:/.exec_clock
112620 ± 1% +29.4% 145693 ± 0% TOTAL sched_debug.cfs_rq[27]:/.exec_clock
112963 ± 1% +28.9% 145645 ± 0% TOTAL sched_debug.cfs_rq[29]:/.exec_clock
112421 ± 0% +29.6% 145673 ± 0% TOTAL sched_debug.cfs_rq[7]:/.exec_clock
7338 ±11% +23.3% 9050 ± 7% TOTAL numa-vmstat.node3.nr_anon_pages
29353 ±11% +23.3% 36204 ± 7% TOTAL numa-meminfo.node3.AnonPages
766344 ± 4% -22.2% 596356 ± 1% TOTAL sched_debug.cpu#12.nr_switches
112871 ± 0% +29.0% 145613 ± 0% TOTAL sched_debug.cfs_rq[28]:/.exec_clock
785724 ± 2% -22.0% 612543 ± 0% TOTAL sched_debug.cpu#0.nr_switches
770632 ± 2% -22.3% 598517 ± 1% TOTAL sched_debug.cpu#15.nr_switches
113328 ± 0% +28.5% 145614 ± 0% TOTAL sched_debug.cfs_rq[30]:/.exec_clock
19 ±16% +40.2% 27 ± 2% TOTAL sched_debug.cfs_rq[0]:/.runnable_load_avg
19 ±23% +39.2% 27 ± 4% TOTAL sched_debug.cfs_rq[26]:/.runnable_load_avg
763775 ± 3% -21.8% 596955 ± 1% TOTAL sched_debug.cpu#13.nr_switches
21 ±10% +30.6% 28 ± 4% TOTAL sched_debug.cpu#29.cpu_load[1]
20 ±10% +36.5% 28 ± 3% TOTAL sched_debug.cpu#21.cpu_load[1]
21 ±18% +31.1% 27 ± 1% TOTAL sched_debug.cpu#0.cpu_load[1]
22 ± 4% +30.0% 28 ± 4% TOTAL sched_debug.cpu#52.load
21 ±11% +30.8% 28 ± 3% TOTAL sched_debug.cpu#21.cpu_load[0]
21 ± 6% +28.4% 28 ± 3% TOTAL sched_debug.cpu#28.cpu_load[0]
21 ±19% +31.8% 28 ± 2% TOTAL sched_debug.cpu#17.cpu_load[2]
19 ±20% +45.8% 28 ± 3% TOTAL sched_debug.cpu#26.cpu_load[0]
22 ±16% +27.3% 28 ± 5% TOTAL sched_debug.cfs_rq[27]:/.runnable_load_avg
21 ±17% +29.4% 28 ± 2% TOTAL sched_debug.cpu#17.cpu_load[0]
21 ±12% +32.1% 28 ± 2% TOTAL sched_debug.cfs_rq[24]:/.runnable_load_avg
20 ±17% +34.6% 28 ± 2% TOTAL sched_debug.cpu#0.cpu_load[2]
773695 ± 2% -22.5% 599935 ± 1% TOTAL sched_debug.cpu#11.nr_switches
760832 ± 2% -21.3% 598498 ± 1% TOTAL sched_debug.cpu#9.nr_switches
23 ±17% +20.3% 28 ± 5% TOTAL sched_debug.cfs_rq[52]:/.load
761399 ± 1% -21.2% 600107 ± 1% TOTAL sched_debug.cpu#10.nr_switches
770683 ± 1% -21.1% 607908 ± 1% TOTAL sched_debug.cpu#2.nr_switches
757803 ± 4% -20.7% 600923 ± 1% TOTAL sched_debug.cpu#6.nr_switches
750665 ± 1% -19.1% 606952 ± 0% TOTAL sched_debug.cpu#1.nr_switches
744077 ± 2% -19.6% 598358 ± 0% TOTAL sched_debug.cpu#16.nr_switches
753828 ± 1% -19.6% 605817 ± 1% TOTAL sched_debug.cpu#3.nr_switches
742810 ± 2% -18.9% 602251 ± 0% TOTAL sched_debug.cpu#5.nr_switches
732991 ± 2% -18.5% 597448 ± 1% TOTAL sched_debug.cpu#17.nr_switches
725349 ± 2% -17.9% 595187 ± 1% TOTAL sched_debug.cpu#14.nr_switches
21 ±18% +29.6% 28 ± 2% TOTAL sched_debug.cpu#17.cpu_load[1]
22 ±14% +28.2% 28 ± 2% TOTAL sched_debug.cpu#12.cpu_load[0]
736992 ± 2% -17.8% 605506 ± 1% TOTAL sched_debug.cpu#4.nr_switches
724402 ± 1% -16.7% 603369 ± 0% TOTAL sched_debug.cpu#7.nr_switches
712130 ± 2% -15.5% 602079 ± 1% TOTAL sched_debug.cpu#24.nr_switches
715409 ± 5% -16.1% 600473 ± 2% TOTAL sched_debug.cpu#22.nr_switches
169672 ± 0% +19.0% 201867 ± 0% TOTAL meminfo.Active(anon)
42418 ± 0% +18.9% 50446 ± 0% TOTAL proc-vmstat.nr_active_anon
710553 ± 3% -16.0% 597062 ± 1% TOTAL sched_debug.cpu#31.nr_switches
723251 ± 2% -17.1% 599549 ± 1% TOTAL sched_debug.cpu#18.nr_switches
706093 ± 4% -14.9% 601183 ± 1% TOTAL sched_debug.cpu#26.nr_switches
704669 ± 2% -14.9% 599338 ± 1% TOTAL sched_debug.cpu#20.nr_switches
703753 ± 3% -15.3% 596238 ± 1% TOTAL sched_debug.cpu#21.nr_switches
702440 ± 5% -15.1% 596380 ± 1% TOTAL sched_debug.cpu#25.nr_switches
702660 ± 4% -14.7% 599141 ± 1% TOTAL sched_debug.cpu#19.nr_switches
22 ±14% +28.2% 28 ± 4% TOTAL sched_debug.cfs_rq[12]:/.runnable_load_avg
22 ±22% +32.4% 29 ± 1% TOTAL sched_debug.cpu#34.load
701182 ± 4% -14.6% 598606 ± 1% TOTAL sched_debug.cpu#23.nr_switches
0.23 ± 5% -15.8% 0.19 ± 6% TOTAL turbostat.%pc3
238557 ± 0% +13.5% 270767 ± 0% TOTAL meminfo.Active
135003 ± 0% +12.7% 152182 ± 0% TOTAL sched_debug.cpu#56.nr_load_updates
135518 ± 1% +12.2% 152028 ± 0% TOTAL sched_debug.cpu#63.nr_load_updates
668473 ± 4% -10.5% 598372 ± 1% TOTAL sched_debug.cpu#29.nr_switches
137963 ± 0% +11.9% 154381 ± 0% TOTAL sched_debug.cpu#57.nr_load_updates
135802 ± 0% +12.1% 152194 ± 0% TOTAL sched_debug.cpu#58.nr_load_updates
673342 ± 3% -10.3% 604167 ± 2% TOTAL sched_debug.cpu#30.nr_switches
136075 ± 0% +11.8% 152117 ± 0% TOTAL sched_debug.cpu#60.nr_load_updates
136265 ± 0% +11.6% 152124 ± 0% TOTAL sched_debug.cpu#59.nr_load_updates
136657 ± 0% +11.5% 152399 ± 0% TOTAL sched_debug.cpu#48.nr_load_updates
672367 ± 4% -10.6% 600775 ± 2% TOTAL sched_debug.cpu#27.nr_switches
136400 ± 0% +11.6% 152176 ± 0% TOTAL sched_debug.cpu#61.nr_load_updates
136146 ± 0% +11.7% 152103 ± 0% TOTAL sched_debug.cpu#62.nr_load_updates
1848 ± 2% +10.9% 2049 ± 5% TOTAL numa-meminfo.node0.Mapped
136683 ± 0% +11.4% 152199 ± 0% TOTAL sched_debug.cpu#55.nr_load_updates
137175 ± 0% +11.0% 152310 ± 0% TOTAL sched_debug.cpu#51.nr_load_updates
139156 ± 0% +10.8% 154236 ± 0% TOTAL sched_debug.cpu#49.nr_load_updates
137076 ± 0% +11.1% 152253 ± 0% TOTAL sched_debug.cpu#53.nr_load_updates
672781 ± 3% -10.5% 601982 ± 1% TOTAL sched_debug.cpu#28.nr_switches
136873 ± 1% +11.2% 152248 ± 0% TOTAL sched_debug.cpu#52.nr_load_updates
137601 ± 0% +10.8% 152404 ± 0% TOTAL sched_debug.cpu#50.nr_load_updates
137409 ± 0% +10.8% 152228 ± 0% TOTAL sched_debug.cpu#54.nr_load_updates
608198 ± 1% +5895.2% 36462984 ± 0% TOTAL time.voluntary_context_switches
62.13 ± 1% +247.4% 215.86 ± 1% TOTAL time.user_time
560843 ± 0% -53.2% 262235 ± 0% TOTAL vmstat.system.cs
54.79 ± 0% +81.7% 99.56 ± 0% TOTAL turbostat.%c0
38022 ± 0% +76.6% 67144 ± 0% TOTAL vmstat.system.in
2726 ± 0% +68.6% 4596 ± 0% TOTAL time.percent_of_cpu_this_job_got
8155 ± 0% +67.2% 13638 ± 0% TOTAL time.system_time
951709 ± 0% +53.4% 1459554 ± 0% TOTAL time.involuntary_context_switches
227030 ± 1% -16.1% 190393 ± 0% TOTAL time.minor_page_faults
qperf.sctp.bw

  [ASCII plot damaged by whitespace collapsing; recoverable information:
   bisect-good samples ([*]) cluster around 3e+08-4e+08 bytes/sec,
   bisect-bad samples ([O]) around 1e+09-1.25e+09 bytes/sec]
To reproduce:

        apt-get install ruby ruby-oj
        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/setup-local job.yaml   # the job file attached in this email
        bin/run-local job.yaml
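
For unattended runs, the same steps can be wrapped in a small shell script.
This is only a sketch: it assumes the job.yaml attached to this email has
been saved in the directory the script is started from, and that apt-get
and git are available.

        #!/bin/sh
        set -e

        # The LKP harness is written in Ruby and needs the oj gem.
        apt-get install -y ruby ruby-oj

        # Fetch the test harness.
        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests

        # job.yaml was saved one directory up (assumption noted above).
        bin/setup-local ../job.yaml   # install this job's dependencies
        bin/run-local ../job.yaml     # run the benchmark and collect results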
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Fengguang
_______________________________________________
LKP mailing list
LKP@linux.intel.com