b05435836e ("uaccess: Use user_access_ok() in .."): WARNING: CPU: 0 PID: 8 at arch/x86/include/asm/uaccess.h:718 strnlen_user
by kernel test robot
Greetings,
The 0day kernel testing robot got the below dmesg, and the first bad commit is:
https://github.com/0day-ci/linux/commits/Masami-Hiramatsu/tracing-probes-...
commit b05435836e5889b0999d6e2079b9e940cb147958
Author: Masami Hiramatsu <mhiramat@kernel.org>
AuthorDate: Thu Feb 28 21:31:02 2019 +0900
Commit: 0day robot <lkp@intel.com>
CommitDate: Sun Mar 3 15:00:16 2019 +0800
uaccess: Use user_access_ok() in user_access_begin()
Use user_access_ok() instead of access_ok() in user_access_begin()
to validate that the access context is a user context. This also allows
us to use the (generic) strncpy_from_user() and strnlen_user() in atomic
context, by setting USER_DS and disabling page faults.
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
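A minimal sketch of the pattern the commit message describes (editor's
illustration, assuming the 5.0-era <linux/uaccess.h> interfaces; the
function below is hypothetical and not code from the patch):

#include <linux/uaccess.h>

/* Copy a user string from atomic context. With user_access_begin()
 * validating via user_access_ok(), the generic strncpy_from_user()
 * is meant to be usable like this. */
static long copy_user_string_atomic(char *dst, const char __user *src,
                                    long count)
{
        mm_segment_t old_fs = get_fs();
        long ret;

        set_fs(USER_DS);        /* restrict checks to user addresses */
        pagefault_disable();    /* we must not sleep on a fault here */
        ret = strncpy_from_user(dst, src, count);
        pagefault_enable();
        set_fs(old_fs);

        return ret;
}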
eab217f4e1 uaccess: Add user_access_ok()
b05435836e uaccess: Use user_access_ok() in user_access_begin()
6142c3f96d selftests/ftrace: Add user-memory access syntax testcase
+-----------------------------------------------------------------+------------+------------+------------+
| | eab217f4e1 | b05435836e | 6142c3f96d |
+-----------------------------------------------------------------+------------+------------+------------+
| boot_successes | 31 | 0 | 0 |
| boot_failures | 0 | 11 | 11 |
| WARNING:at_arch/x86/include/asm/uaccess.h:#strnlen_user/0x | 0 | 11 | 11 |
| EIP:strnlen_user | 0 | 11 | 11 |
| WARNING:at_arch/x86/include/asm/uaccess.h:#strncpy_from_user/0x | 0 | 11 | 11 |
| EIP:strncpy_from_user | 0 | 11 | 11 |
+-----------------------------------------------------------------+------------+------------+------------+
[ 0.136477] Speculative Store Bypass: Vulnerable
[ 0.136983] L1TF: Kernel not compiled for PAE. No mitigation for L1TF
[ 0.137965] Performance Events: unsupported p6 CPU model 60 no PMU driver, software events only.
[ 0.138727] NMI watchdog: Perf NMI watchdog permanently disabled
[ 0.139303] TSC deadline timer enabled
[ 0.139748] WARNING: CPU: 0 PID: 8 at arch/x86/include/asm/uaccess.h:718 strnlen_user+0x45/0x110
[ 0.140691] Modules linked in:
[ 0.140960] CPU: 0 PID: 8 Comm: kdevtmpfs Not tainted 5.0.0-rc8-00253-gb054358 #2
[ 0.141617] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 0.141908] EIP: strnlen_user+0x45/0x110
[ 0.141908] Code: 00 00 00 66 90 55 89 e5 57 56 53 8b 1d f4 3e 5e 89 83 ec 04 8b 8b 9c 0d 00 00 39 c1 76 d7 89 ce 29 c6 81 f9 00 00 00 80 74 0b <0f> 0b 8d b4 26 00 00 00 00 66 90 8b 1d f4 3e 5e 89 3b 8b 9c 0d 00
[ 0.141908] EAX: 894c4ebe EBX: 9f468000 ECX: ffffffff EDX: 00001000
[ 0.141908] ESI: 76b3b141 EDI: 00001000 EBP: 9f471ec8 ESP: 9f471eb8
[ 0.141908] DS: 007b ES: 007b FS: 0000 GS: 00e0 SS: 0068 EFLAGS: 00210206
[ 0.141908] CR0: 80050033 CR2: ffffffff CR3: 09851000 CR4: 001406d0
[ 0.141908] Call Trace:
[ 0.141908] strndup_user+0xf/0x50
[ 0.141908] ksys_mount+0x1e/0xb0
[ 0.141908] devtmpfsd+0x5b/0x2e0
[ 0.141908] ? sched_clock_cpu+0xc6/0xe0
[ 0.141908] ? find_held_lock+0x27/0xa0
[ 0.141908] ? __kthread_parkme+0x1f/0x70
[ 0.141908] kthread+0xf7/0x100
[ 0.141908] ? dev_mount+0x20/0x20
[ 0.141908] ? kthread_create_on_node+0x20/0x20
[ 0.141908] ret_from_fork+0x2e/0x40
[ 0.141908] random: get_random_bytes called from init_oops_id+0x23/0x50 with crng_init=0
[ 0.141908] ---[ end trace 42d84f21bd4a5482 ]---
[ 0.141987] WARNING: CPU: 0 PID: 8 at arch/x86/include/asm/uaccess.h:718 strncpy_from_user+0x36/0x1a0
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start a2654a48fee7323d7f70e4502b67ebf3510328ac 5908e6b738e3357af42c10e1183753c70a0117a9 --
git bisect bad 9ea96687a8b3649fb6c2737cebbec4b9713ea5ef # 20:14 B 0 3 28 10 Merge 'peterz-queue/x86/build' into devel-hourly-2019030316
git bisect bad e0defb06880cf6d14d38b28b696588e9148d5fe5 # 20:37 B 0 11 26 0 Merge 'linux-review/Gustavo-A-R-Silva/watchdog-sbc60xxwdt-Mark-expected-switch-fall-through/20190227-225320' into devel-hourly-2019030316
git bisect bad 781c451f793262cf2e8f54e0659dc1b327ae1c0a # 20:48 B 0 9 24 0 Merge 'linux-review/Gustavo-A-R-Silva/watchdog-sc520_wdt-Mark-expected-switch-fall-through/20190228-060527' into devel-hourly-2019030316
git bisect good 3992d158c795d12ba1d7ced6529291dfbe8015a6 # 21:08 G 11 0 0 0 Merge 'martinetd-linux/9p-test' into devel-hourly-2019030316
git bisect good 4dbe22e1e849a091acb99e2053da4298034ddf34 # 21:27 G 11 0 0 0 Merge 'linux-review/Alexandru-Ardelean/dmaengine-axi-dmac-Split-too-large-segments/20190301-050350' into devel-hourly-2019030316
git bisect bad f8c2d134418c2a325d7a895b7f3b86a06735fbd1 # 21:38 B 0 11 37 11 Merge 'linux-review/Masami-Hiramatsu/tracing-probes-uaccess-Add-support-user-space-access/20190303-150015' into devel-hourly-2019030316
git bisect good d5306055019293f131169de5d007670bd8932ade # 21:52 G 11 0 0 0 Merge 'acpi/x86' into devel-hourly-2019030316
git bisect good 3f3adad9b924f113b7803e67b171244d654cecdb # 22:08 G 10 0 0 0 Merge 'linux-review/Jonas-Karlman/ARM-dts-rockchip-Enable-HDMI-CEC-on-rk3288-tinker-s/20190226-083718' into devel-hourly-2019030316
git bisect good ae7c7c7d57817e232b8c6ecd63c77d500986ed89 # 22:30 G 11 0 0 0 Merge 'linux-xen/linux-next' into devel-hourly-2019030316
git bisect bad d04b303f3356b881694f446609ce63ebf21bd7c8 # 22:41 B 0 6 21 0 uaccess: Add non-pagefault user-space read functions
git bisect bad b05435836e5889b0999d6e2079b9e940cb147958 # 22:50 B 0 11 26 0 uaccess: Use user_access_ok() in user_access_begin()
git bisect good eab217f4e15736a45e4f808d7e2ed686ae0cfecc # 23:06 G 11 0 0 0 uaccess: Add user_access_ok()
# first bad commit: [b05435836e5889b0999d6e2079b9e940cb147958] uaccess: Use user_access_ok() in user_access_begin()
git bisect good eab217f4e15736a45e4f808d7e2ed686ae0cfecc # 23:08 G 31 0 0 0 uaccess: Add user_access_ok()
# extra tests with debug options
git bisect bad b05435836e5889b0999d6e2079b9e940cb147958 # 23:19 B 0 11 26 0 uaccess: Use user_access_ok() in user_access_begin()
# extra tests on HEAD of linux-devel/devel-hourly-2019030316
git bisect bad a2654a48fee7323d7f70e4502b67ebf3510328ac # 23:19 B 0 13 31 0 0day head guard for 'devel-hourly-2019030316'
# extra tests on tree/branch linux-review/Masami-Hiramatsu/tracing-probes-uaccess-Add-support-user-space-access/20190303-150015
git bisect bad 6142c3f96d76ed2bc5385ffdf023551ad5d63d07 # 23:32 B 0 10 25 0 selftests/ftrace: Add user-memory access syntax testcase
# extra tests with first bad commit reverted
git bisect good 8ce33c7d099a73baf6bd62c8ed87afafd6b34f9c # 23:45 G 11 0 0 0 Revert "uaccess: Use user_access_ok() in user_access_begin()"
---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/lkp              Intel Corporation
5390131643 ("softirq: Allow to soft interrupt vector-specific .."): WARNING: possible recursive locking detected
by kernel test robot
Greetings,
The 0day kernel testing robot got the below dmesg, and the first bad commit is:
https://git.kernel.org/pub/scm/linux/kernel/git/frederic/linux-dynticks.git softirq/soft-interruptible-v2
commit 5390131643c07a2503337832d27b53c927f4c530
Author: Frederic Weisbecker <frederic@kernel.org>
AuthorDate: Sat Feb 9 04:33:53 2019 +0100
Commit: Frederic Weisbecker <frederic@kernel.org>
CommitDate: Thu Feb 28 04:46:17 2019 +0100
softirq: Allow to soft interrupt vector-specific masked contexts
Remove the old protections that prevented softirqs from interrupting any
softirq-disabled context. Now that we can disable specific vectors over
a given piece of code, we want to be able to soft-interrupt those places
with the other, unmasked vectors.
Reviewed-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Frederic Weisbecker <frederic@kernel.org>
Cc: Mauro Carvalho Chehab <mchehab+samsung@kernel.org>
Cc: Joel Fernandes <joel@joelfernandes.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Pavan Kondeti <pkondeti@codeaurora.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: David S. Miller <davem@davemloft.net>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
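To make the intent concrete, a hypothetical sketch (editor's
illustration; the helper names and the BIT()-mask convention are
assumptions modelled on this series, not taken verbatim from it):

/* Mask only the NET_RX vector across the critical section. After this
 * commit, a pending TIMER softirq may still interrupt the section;
 * the old all-or-nothing protection forbade any softirq here. */
local_bh_disable_mask(BIT(NET_RX_SOFTIRQ));   /* assumed helper */
/* ... code that must not race with NET_RX handlers ... */
local_bh_enable_mask(BIT(NET_RX_SOFTIRQ));    /* assumed helper */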
b83f2fee41 locking/lockdep: Branch the new vec-finegrained softirq masking to lockdep
5390131643 softirq: Allow to soft interrupt vector-specific masked contexts
63c8902805 net: Make softirq vector masking finegrained on release_sock()
+---------------------------------------------+------------+------------+------------+
| | b83f2fee41 | 5390131643 | 63c8902805 |
+---------------------------------------------+------------+------------+------------+
| boot_successes | 804 | 244 | 254 |
| boot_failures | 166 | 68 | 56 |
| BUG:kernel_timeout_in_torture_test_stage | 166 | 58 | 47 |
| WARNING:possible_recursive_locking_detected | 0 | 10 | 9 |
| RIP:write_comp_data | 0 | 2 | |
| BUG:unable_to_handle_kernel | 0 | 2 | |
| Oops:#[##] | 0 | 2 | |
| RIP:rb_erase_cached | 0 | 2 | |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 2 | |
| RIP:lock_acquire | 0 | 2 | 2 |
| RIP:lock_release | 0 | 1 | 2 |
| RIP:dput | 0 | 1 | |
| RIP:__dentry_kill | 0 | 1 | |
| RIP:do_raw_spin_trylock | 0 | 1 | |
| RIP:preempt_count_add | 0 | 1 | |
| RIP:__sanitizer_cov_trace_pc | 0 | 1 | |
| RIP:__schedule | 0 | 0 | 1 |
| RIP:__sanitizer_cov_trace_const_cmp4 | 0 | 0 | 1 |
| RIP:__fentry | 0 | 0 | 1 |
| RIP:__d_drop | 0 | 0 | 1 |
| RIP:dentry_unlink_inode | 0 | 0 | 1 |
+---------------------------------------------+------------+------------+------------+
[ 220.913603] random: trinity: uninitialized urandom read (4 bytes read)
[ 221.047791] init: mounted-proc main process (167) terminated with status 1
mountall: Event failed
[ 221.081179]
[ 221.081452] ============================================
[ 221.082220] WARNING: possible recursive locking detected
[ 221.082985] 5.0.0-rc2-00261-g5390131 #1 Not tainted
[ 221.083690] --------------------------------------------
[ 221.084466] sh/168 is trying to acquire lock:
[ 221.085109] (____ptrval____) (&p->pi_lock){-...-.........-.....-.}, at: try_to_wake_up+0x41/0x302
[ 221.086374]
[ 221.086374] but task is already holding lock:
[ 221.087219] (____ptrval____) (&p->pi_lock){-...-.........-.....-.}, at: try_to_wake_up+0x41/0x302
[ 221.088525]
[ 221.088525] other info that might help us debug this:
[ 221.089480] Possible unsafe locking scenario:
[ 221.089480]
[ 221.090308] CPU0
[ 221.090665] ----
[ 221.091025] lock(&p->pi_lock);
[ 221.091505] lock(&p->pi_lock);
[ 221.091986]
[ 221.091986] *** DEADLOCK ***
[ 221.091986]
[ 221.092767] May be due to missing lock nesting notation
[ 221.092767]
[ 221.093672] 3 locks held by sh/168:
[ 221.094133] #0: (____ptrval____) (&p->pi_lock){-...-.........-.....-.}, at: try_to_wake_up+0x41/0x302
[ 221.095327] #1: (____ptrval____) (&rq->lock){-...-.........-.....-.}, at: try_to_wake_up+0x1f3/0x302
[ 221.096560] #2: (____ptrval____) (rcu_read_lock){......................}, at: update_curr+0x332/0x48a
[ 221.097912]
[ 221.097912] stack backtrace:
[ 221.098483] CPU: 0 PID: 168 Comm: sh Not tainted 5.0.0-rc2-00261-g5390131 #1
[ 221.099388] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 221.100459] Call Trace:
[ 221.100785] <IRQ>
[ 221.101071] __lock_acquire+0x80b/0xf1c
[ 221.101574] ? mark_lock+0x2f/0x517
[ 221.102063] lock_acquire+0x123/0x14d
[ 221.102595] ? try_to_wake_up+0x41/0x302
[ 221.103173] _raw_spin_lock_irqsave+0x38/0x47
[ 221.103801] ? try_to_wake_up+0x41/0x302
[ 221.104374] try_to_wake_up+0x41/0x302
[ 221.104928] rcu_read_unlock_special+0x52/0x89
[ 221.105583] __rcu_read_unlock+0x58/0xaa
[ 221.106180] update_curr+0x455/0x48a
[ 221.106878] enqueue_entity+0x42/0x9e4
[ 221.107577] enqueue_task_fair+0x22/0x2d
[ 221.108167] try_to_wake_up+0x27a/0x302
[ 221.108725] raise_softirq+0x2e/0x47
[ 221.109257] rcu_do_batch+0xa6c/0xa90
[ 221.109957] rcu_process_callbacks+0x321/0x46c
[ 221.110589] __do_softirq+0x26f/0x54f
[ 221.111126] irq_exit+0x8e/0xaa
[ 221.111591] smp_apic_timer_interrupt+0x28d/0x29e
[ 221.112226] apic_timer_interrupt+0xf/0x20
[ 221.112753] </IRQ>
[ 221.113048] RIP: 0010:write_comp_data+0x1c/0x66
[ 221.113633] Code: c7 c1 28 b5 62 82 5d 41 5c e9 11 af 17 00 4c 8b 14 25 40 b0 c4 82 8b 05 3c 3c a7 01 a9 00 01 1f 00 75 50 41 8b 82 18 10 00 00 <83> f8 03 75 44 49 8b 82 20 10 00 00 45 8b 92 1c 10 00 00 4c 8b 00
[ 221.115984] RSP: 0018:ffff888016d6fce0 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff13
[ 221.116953] RAX: 0000000000000000 RBX: ffffffff82e46638 RCX: ffffffff811f4b7f
[ 221.117875] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000005
[ 221.118774] RBP: 0000000000000000 R08: 0000100000000000 R09: ffff888016d6fc20
[ 221.119681] R10: ffff888016caa000 R11: 0000000000000000 R12: 0000000000000000
[ 221.120588] R13: 0000000000000000 R14: ffff888016c67a90 R15: ffff8880174f5fe0
[ 221.121503] ? ftrace_likely_update+0x20/0x63
[ 221.122092] ftrace_likely_update+0x20/0x63
[ 221.122698] do_raw_spin_unlock+0xc3/0xec
[ 221.123287] _raw_spin_unlock+0x25/0x6d
[ 221.123846] dentry_unlink_inode+0x91/0x116
[ 221.124456] __dentry_kill+0x1d6/0x2b6
[ 221.125013] ? dput+0x27/0x5bc
[ 221.125542] dentry_kill+0x43e/0x49b
[ 221.126090] ? dput+0x27/0x5bc
[ 221.126625] dput+0x5a5/0x5bc
[ 221.127158] __fput+0x2e6/0x3a0
[ 221.127631] task_work_run+0xbe/0xe7
[ 221.128160] do_exit+0x79b/0x12a3
[ 221.128651] ? ftrace_likely_update+0x54/0x63
[ 221.129287] do_group_exit+0x14a/0x14a
[ 221.129833] __x64_sys_exit_group+0x1d/0x1d
[ 221.130446] do_syscall_64+0xb7/0x41e
[ 221.130993] entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 221.131719] RIP: 0033:0x7feb0716b408
[ 221.132241] Code: Bad RIP value.
[ 221.132701] RSP: 002b:00007ffddbc346e8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
[ 221.133755] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007feb0716b408
[ 221.134731] RDX: 0000000000000000 RSI: 000000000000003c RDI: 0000000000000000
[ 221.135729] RBP: 00007feb0745f820 R08: 00000000000000e7 R09: ffffffffffffffa0
[ 221.136770] R10: 0000000000000000 R11: 0000000000000246 R12: 00007feb0745f820
[ 221.137800] R13: 0000000000000001 R14: 0000000000b1fea8 R15: 0000000000b28c28
[ 221.138884] BUG: unable to handle kernel NULL pointer dereference at 0000000000000000
[ 221.139962] #PF error: [normal kernel read fault]
[ 221.140629] PGD 0 P4D 0
[ 221.141029] Oops: 0000 [#1] PREEMPT DEBUG_PAGEALLOC PTI
[ 221.141796] CPU: 0 PID: 168 Comm: sh Not tainted 5.0.0-rc2-00261-g5390131 #1
[ 221.142803] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 221.143972] RIP: 0010:rb_erase_cached+0xd3/0x2c6
[ 221.144630] Code: 74 08 48 89 d0 48 89 fa eb ef 4c 8b 4a 08 4c 89 48 10 48 89 4a 08 48 8b 39 83 e7 01 48 09 d7 48 89 39 49 8b 78 10 48 89 7a 10 <48> 8b 0f 83 e1 01 48 09 d1 48 89 0f 49 8b 08 48 89 cf 48 83 e7 fc
[ 221.147323] RSP: 0018:ffff888016d6fc08 EFLAGS: 00010086
[ 221.148078] RAX: ffff888000118060 RBX: ffff888000118048 RCX: ffff888000118060
[ 221.149072] RDX: ffff88800011a060 RSI: ffffffff82c67bd8 RDI: 0000000000000000
[ 221.150058] RBP: ffffffff82c67ba8 R08: ffff888000118060 R09: 0000000000000000
[ 221.151088] R10: ffff888016caa000 R11: 0000000000000000 R12: 0000000000000000
[ 221.152116] R13: ffffffff82c67ba8 R14: 0000000000000001 R15: ffff8880174f5fe0
[ 221.153107] FS: 00007feb07ab2700(0000) GS:ffffffff82c47000(0000) knlGS:0000000000000000
[ 221.154215] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 221.154986] CR2: 0000000000000000 CR3: 0000000016c2a000 CR4: 00000000000006f0
[ 221.155935] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 221.156887] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 221.157818] Call Trace:
[ 221.158175] set_next_entity+0x297/0x40c
[ 221.158716] pick_next_task_fair+0xda/0xe8
[ 221.159287] __schedule+0x5d9/0xd49
[ 221.159764] ? ___preempt_schedule+0x16/0x18
[ 221.160357] preempt_schedule_common+0x55/0x8b
[ 221.160981] ___preempt_schedule+0x16/0x18
[ 221.161552] ? _raw_spin_unlock+0x61/0x6d
[ 221.162113] _raw_spin_unlock+0x6a/0x6d
[ 221.162647] dentry_unlink_inode+0x91/0x116
[ 221.163237] __dentry_kill+0x1d6/0x2b6
[ 221.163764] ? dput+0x27/0x5bc
[ 221.164287] dentry_kill+0x43e/0x49b
[ 221.164790] ? dput+0x27/0x5bc
[ 221.165304] dput+0x5a5/0x5bc
[ 221.165810] __fput+0x2e6/0x3a0
[ 221.166267] task_work_run+0xbe/0xe7
[ 221.166775] do_exit+0x79b/0x12a3
[ 221.167255] ? ftrace_likely_update+0x54/0x63
[ 221.167872] do_group_exit+0x14a/0x14a
[ 221.168377] __x64_sys_exit_group+0x1d/0x1d
[ 221.168937] do_syscall_64+0xb7/0x41e
[ 221.169423] entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 221.170090] RIP: 0033:0x7feb0716b408
[ 221.170585] Code: Bad RIP value.
[ 221.171015] RSP: 002b:00007ffddbc346e8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
[ 221.172005] RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007feb0716b408
[ 221.172949] RDX: 0000000000000000 RSI: 000000000000003c RDI: 0000000000000000
[ 221.173872] RBP: 00007feb0745f820 R08: 00000000000000e7 R09: ffffffffffffffa0
[ 221.174779] R10: 0000000000000000 R11: 0000000000000246 R12: 00007feb0745f820
[ 221.175701] R13: 0000000000000001 R14: 0000000000b1fea8 R15: 0000000000b28c28
[ 221.176612] CR2: 0000000000000000
[ 221.177056] ---[ end trace aca2e594a080558f ]---
[ 221.177656] RIP: 0010:rb_erase_cached+0xd3/0x2c6
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 4b58bc34885931fa961040f55c2d1433c9c7fd89 5908e6b738e3357af42c10e1183753c70a0117a9 --
git bisect bad 96f690f08cd1b8cb7aeb981fc9b314242ff2a118 # 10:11 B 0 1 15 0 Merge 'linux-review/Aneesh-Kumar-K-V/powerpc-hugetlb-Handle-mmap_min_addr-correctly-in-get_unmapped_area-callback/20190226-143306' into devel-hourly-2019030107
git bisect bad 7d9e67b645ccc972f54f053ca040f65714cb8a8e # 11:06 B 59 1 59 59 Merge 'dinguyen/socfpga_for_next_v5.1_defconfig_v2' into devel-hourly-2019030107
git bisect bad b54bb80db87913a086fb0850e9c6f625877d038d # 11:32 B 106 1 106 106 Merge 'linux-review/Alexandru-Ardelean/dmaengine-axi-dmac-don-t-check-the-number-of-frames-for-alignment/20190226-220704' into devel-hourly-2019030107
git bisect bad bf9e8a7ce8e0a69bc3af454ccf9cb069ccbcfae5 # 12:04 B 50 1 50 51 Merge 'linux-review/Louis-Taylor/wusb-use-correct-format-characters/20190301-010018' into devel-hourly-2019030107
git bisect good 5bc75baebe423845271170f9c38f7da2e3f1550f # 13:06 G 310 0 26 26 Merge 'linux-review/Julius-Niedworok/mac80211-Use-IFF_ECHO-to-force-delivery-of-tx_status-frames/20190227-072204' into devel-hourly-2019030107
git bisect bad 259b018f91428f4f6b9911efc84dbc80e35a4cc7 # 13:25 B 16 1 16 16 Merge 'acpi/next' into devel-hourly-2019030107
git bisect good 2a6cbc50fe47492b9bd8c405276d83ae7a48d904 # 14:31 G 310 0 29 29 Merge 'linux-review/Alexis-Ballier/arm64-dts-rockchip-Add-support-for-the-Orange-Pi-RK3399-board/20190225-184655' into devel-hourly-2019030107
git bisect good 1cdcc6ed79a8277eb9fbdd51447906ae20b06dce # 15:18 G 304 0 52 52 Merge 'ak/perf/streams-3' into devel-hourly-2019030107
git bisect bad 197f8bd789513e8f137f1bb655c5834612b3d60a # 15:42 B 150 1 150 150 Merge 'dynticks/softirq/soft-interruptible-v2-0day-1' into devel-hourly-2019030107
git bisect good 9ed00dab9e378c196a240b099f3c9662fcad87c9 # 16:41 G 300 0 47 47 softirq: Introduce disabled softirq vectors bits
git bisect good 5f13b0ac590d0c7b5c654a9a42f8c93cebcc759e # 17:28 G 300 0 51 52 softirq: Prepare for mixing all/per-vector masking
git bisect bad 5390131643c07a2503337832d27b53c927f4c530 # 17:50 B 69 1 0 0 softirq: Allow to soft interrupt vector-specific masked contexts
git bisect good 2f36bd8eea1f9e9987f5acc681e7fef2295f1c41 # 18:51 G 301 0 54 54 locking/lockdep: Remove redundant softirqs on check
git bisect good b83f2fee4154a1f18cec544d86e128afd18a2499 # 19:53 G 300 0 50 50 locking/lockdep: Branch the new vec-finegrained softirq masking to lockdep
# first bad commit: [5390131643c07a2503337832d27b53c927f4c530] softirq: Allow to soft interrupt vector-specific masked contexts
git bisect good b83f2fee4154a1f18cec544d86e128afd18a2499 # 21:00 G 907 0 103 153 locking/lockdep: Branch the new vec-finegrained softirq masking to lockdep
# extra tests with debug options
git bisect bad 5390131643c07a2503337832d27b53c927f4c530 # 21:21 B 39 1 1 1 softirq: Allow to soft interrupt vector-specific masked contexts
# extra tests on HEAD of linux-devel/devel-hourly-2019030107
git bisect bad 4b58bc34885931fa961040f55c2d1433c9c7fd89 # 21:22 B 1 4 0 340 0day head guard for 'devel-hourly-2019030107'
# extra tests on tree/branch dynticks/softirq/soft-interruptible-v2
git bisect bad 63c89028058f5920d4b5a9d38452fa4623469583 # 21:49 B 35 1 0 0 net: Make softirq vector masking finegrained on release_sock()
# extra tests with first bad commit reverted
git bisect good 4ccf96e1ccd8929012710d29e5df38d9d9e0d124 # 22:54 G 305 0 32 32 Revert "softirq: Allow to soft interrupt vector-specific masked contexts"
---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/lkp              Intel Corporation
[page cache] eb797a8ee0: vm-scalability.throughput -16.5% regression
by kernel test robot
Greetings,
FYI, we noticed a -16.5% regression of vm-scalability.throughput due to commit:
commit: eb797a8ee0ab4cd03df556980ce7bf167cadaa50 ("page cache: Rearrange address_space")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: vm-scalability
on test machine: 80 threads Skylake with 64G memory
with the following parameters:
runtime: 300s
test: small-allocs
cpufreq_governor: performance
test-description: The motivation behind this suite is to exercise functions and regions of the Linux kernel's mm/ subsystem which are of interest to us.
test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/
In addition, the commit also has a significant impact on the following tests:
+------------------+----------------------------------------------------------------------+
| testcase: change | unixbench: unixbench.score 20.9% improvement |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 64G memory |
| test parameters | cpufreq_governor=performance |
| | nr_task=100% |
| | runtime=300s |
| | test=shell8 |
| | ucode=0xb00002e |
+------------------+----------------------------------------------------------------------+
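A pure field rearrangement can swing benchmarks both ways because it
changes which members share a 64-byte cache line. A rough illustration
(editor's sketch, not the actual struct address_space layout or the
diff in eb797a8ee0):

#include <linux/spinlock.h>
#include <linux/rbtree.h>

struct layout_example {
        spinlock_t      lock;        /* hot: bounces between CPUs  */
        unsigned long   nr_entries;  /* hot: read on most lookups  */
        /* Keeping cold metadata off the contended first line avoids
         * false sharing, but the best ordering depends on the access
         * pattern, so one workload can win while another loses. */
        struct rb_root  cold_tree;
};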
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/tbox_group/test/testcase:
gcc-7/performance/x86_64-rhel-7.2/debian-x86_64-2018-04-03.cgz/300s/lkp-skl-2sp2/small-allocs/vm-scalability
commit:
f32f004cdd ("ida: Convert to XArray")
eb797a8ee0 ("page cache: Rearrange address_space")
f32f004cddf86d63 eb797a8ee0ab4cd03df556980c
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
3:4 -13% 3:4 perf-profile.calltrace.cycles-pp.error_entry
3:4 -12% 3:4 perf-profile.children.cycles-pp.error_entry
1:4 -6% 1:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
235891 -16.5% 197085 vm-scalability.median
18881481 -16.5% 15769081 vm-scalability.throughput
316.19 +14.4% 361.58 vm-scalability.time.elapsed_time
316.19 +14.4% 361.58 vm-scalability.time.elapsed_time.max
22924 +15.9% 26576 vm-scalability.time.system_time
3254041 ± 9% +36.4% 4437311 ± 3% vm-scalability.time.voluntary_context_switches
277831 ± 3% +9.5% 304359 interrupts.CAL:Function_call_interrupts
102367 ± 2% +10.1% 112694 ± 2% meminfo.Shmem
6.67 ± 5% -0.9 5.76 ± 2% mpstat.cpu.usr%
0.49 -5.0% 0.46 pmeter.Average_Active_Power
38678749 -12.1% 34005251 pmeter.performance_per_watt
2621420 ± 38% +59.2% 4173292 ± 3% turbostat.C1
62964314 ± 10% +18.7% 74735975 turbostat.IRQ
20821 ± 10% +20.6% 25103 ± 2% vmstat.system.cs
192700 ± 8% +5.9% 204006 vmstat.system.in
76742 +3.7% 79550 proc-vmstat.nr_active_anon
25578 ± 2% +10.1% 28154 ± 2% proc-vmstat.nr_shmem
76742 +3.7% 79550 proc-vmstat.nr_zone_active_anon
34211075 ± 14% +28.7% 44023085 ± 3% cpuidle.C1.time
2628057 ± 37% +59.1% 4179955 ± 3% cpuidle.C1.usage
200488 ± 20% +75.5% 351836 ± 13% cpuidle.POLL.time
57706 ± 49% +93.1% 111419 ± 18% cpuidle.POLL.usage
2.08 ± 14% +20.7% 2.51 ± 4% sched_debug.cfs_rq:/.nr_spread_over.avg
50.05 ± 36% -38.0% 31.04 ± 24% sched_debug.cpu.cpu_load[1].max
0.25 ± 14% +17.4% 0.29 ± 7% sched_debug.cpu.nr_running.stddev
42905 ± 12% +26.2% 54143 ± 3% sched_debug.cpu.nr_switches.avg
19996 ± 17% +43.0% 28586 ± 7% sched_debug.cpu.nr_switches.min
43774 ± 12% +26.5% 55370 ± 3% sched_debug.cpu.sched_count.avg
20285 ± 17% +43.1% 29024 ± 8% sched_debug.cpu.sched_count.min
19949 ± 12% +28.0% 25530 ± 3% sched_debug.cpu.sched_goidle.avg
9260 ± 16% +44.9% 13422 ± 8% sched_debug.cpu.sched_goidle.min
23108 ± 12% +26.0% 29110 ± 3% sched_debug.cpu.ttwu_count.avg
25905 ± 11% +25.8% 32593 ± 2% sched_debug.cpu.ttwu_count.max
21683 ± 12% +26.0% 27323 ± 3% sched_debug.cpu.ttwu_count.min
73.74 -3.6 70.12 perf-stat.cache-miss-rate%
2.83e+10 ± 2% +5.5% 2.985e+10 perf-stat.cache-misses
3.838e+10 +10.9% 4.257e+10 perf-stat.cache-references
6787110 ± 9% +35.5% 9197959 ± 3% perf-stat.context-switches
3.17 +11.1% 3.52 perf-stat.cpi
7.609e+13 +14.5% 8.715e+13 perf-stat.cpu-cycles
21380 +11.1% 23753 perf-stat.cpu-migrations
0.08 -0.0 0.07 perf-stat.dTLB-load-miss-rate%
6.634e+12 +3.8% 6.888e+12 perf-stat.dTLB-loads
0.13 -0.0 0.12 perf-stat.dTLB-store-miss-rate%
1.135e+09 -5.6% 1.071e+09 perf-stat.dTLB-store-misses
2.184e+09 -2.3% 2.135e+09 perf-stat.iTLB-load-misses
2.404e+13 +3.1% 2.479e+13 perf-stat.instructions
11006 +5.5% 11614 perf-stat.instructions-per-iTLB-miss
0.32 -10.0% 0.28 perf-stat.ipc
2.898e+09 +15.2% 3.337e+09 ± 2% perf-stat.node-load-misses
7.235e+08 +19.1% 8.615e+08 ± 4% perf-stat.node-store-misses
4975 +5.3% 5236 perf-stat.path-length
1.10 -0.1 0.97 ± 2% perf-profile.calltrace.cycles-pp.rwsem_spin_on_owner.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.vma_link
0.68 ± 2% -0.1 0.59 perf-profile.calltrace.cycles-pp.vma_interval_tree_insert.vma_link.mmap_region.do_mmap.vm_mmap_pgoff
0.59 ± 3% -0.0 0.55 ± 2% perf-profile.calltrace.cycles-pp.swapgs_restore_regs_and_return_to_usermode
1.04 ± 5% +0.1 1.19 ± 3% perf-profile.calltrace.cycles-pp.task_numa_work.task_work_run.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.04 ± 5% +0.1 1.19 ± 3% perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.04 ± 5% +0.1 1.19 ± 3% perf-profile.calltrace.cycles-pp.task_work_run.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.64 ± 2% +0.4 2.00 perf-profile.calltrace.cycles-pp.page_fault
1.57 ± 2% +0.4 1.93 perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault
1.59 ± 2% +0.4 1.96 perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
0.65 +0.5 1.19 ± 3% perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.00 +1.0 1.03 ± 4% perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
1.10 -0.1 0.97 ± 2% perf-profile.children.cycles-pp.rwsem_spin_on_owner
0.68 ± 2% -0.1 0.59 perf-profile.children.cycles-pp.vma_interval_tree_insert
0.62 ± 2% -0.1 0.55 ± 2% perf-profile.children.cycles-pp.native_irq_return_iret
0.47 -0.1 0.41 perf-profile.children.cycles-pp.sync_regs
0.32 ± 4% -0.1 0.27 ± 3% perf-profile.children.cycles-pp.get_unmapped_area
0.26 ± 3% -0.1 0.21 ± 3% perf-profile.children.cycles-pp.__perf_sw_event
0.21 ± 4% -0.0 0.17 ± 5% perf-profile.children.cycles-pp.find_vma
0.27 ± 3% -0.0 0.22 ± 3% perf-profile.children.cycles-pp.unmapped_area_topdown
0.11 ± 42% -0.0 0.07 ± 7% perf-profile.children.cycles-pp.syscall_return_via_sysret
0.29 ± 2% -0.0 0.24 ± 3% perf-profile.children.cycles-pp.arch_get_unmapped_area_topdown
0.19 ± 4% -0.0 0.15 ± 2% perf-profile.children.cycles-pp.___perf_sw_event
0.13 ± 3% -0.0 0.10 ± 4% perf-profile.children.cycles-pp.vma_policy_mof
0.59 ± 3% -0.0 0.56 ± 2% perf-profile.children.cycles-pp.swapgs_restore_regs_and_return_to_usermode
0.16 ± 2% -0.0 0.13 perf-profile.children.cycles-pp.__rb_insert_augmented
0.19 ± 2% -0.0 0.17 ± 4% perf-profile.children.cycles-pp.perf_event_mmap
0.08 ± 10% -0.0 0.06 ± 6% perf-profile.children.cycles-pp.vmacache_find
0.08 ± 10% -0.0 0.06 ± 6% perf-profile.children.cycles-pp.__fget
0.08 -0.0 0.07 ± 6% perf-profile.children.cycles-pp.d_path
0.09 ± 4% +0.0 0.13 ± 3% perf-profile.children.cycles-pp.__vma_link_rb
0.01 ±173% +0.1 0.12 ± 4% perf-profile.children.cycles-pp.osq_unlock
1.19 ± 3% +0.2 1.36 ± 2% perf-profile.children.cycles-pp.exit_to_usermode_loop
1.19 ± 3% +0.2 1.36 ± 2% perf-profile.children.cycles-pp.task_work_run
1.19 ± 3% +0.2 1.36 ± 2% perf-profile.children.cycles-pp.task_numa_work
1.66 ± 2% +0.4 2.01 perf-profile.children.cycles-pp.page_fault
1.59 ± 2% +0.4 1.96 perf-profile.children.cycles-pp.do_page_fault
1.58 ± 2% +0.4 1.95 perf-profile.children.cycles-pp.__do_page_fault
0.66 +0.5 1.20 ± 4% perf-profile.children.cycles-pp.handle_mm_fault
0.47 +0.6 1.04 ± 4% perf-profile.children.cycles-pp.__handle_mm_fault
1.09 -0.1 0.96 ± 2% perf-profile.self.cycles-pp.rwsem_spin_on_owner
0.67 ± 2% -0.1 0.59 ± 2% perf-profile.self.cycles-pp.vma_interval_tree_insert
0.62 ± 2% -0.1 0.55 ± 3% perf-profile.self.cycles-pp.native_irq_return_iret
0.46 -0.1 0.40 perf-profile.self.cycles-pp.sync_regs
0.43 -0.1 0.37 ± 2% perf-profile.self.cycles-pp.swapgs_restore_regs_and_return_to_usermode
0.30 ± 5% -0.1 0.25 ± 3% perf-profile.self.cycles-pp.__do_page_fault
0.11 ± 42% -0.0 0.07 ± 7% perf-profile.self.cycles-pp.syscall_return_via_sysret
0.27 ± 3% -0.0 0.22 ± 3% perf-profile.self.cycles-pp.unmapped_area_topdown
0.17 ± 3% -0.0 0.13 ± 3% perf-profile.self.cycles-pp.___perf_sw_event
0.14 ± 8% -0.0 0.11 perf-profile.self.cycles-pp.mmap_region
0.18 ± 2% -0.0 0.15 ± 2% perf-profile.self.cycles-pp.handle_mm_fault
0.15 -0.0 0.12 ± 4% perf-profile.self.cycles-pp.__rb_insert_augmented
0.12 ± 4% -0.0 0.10 ± 7% perf-profile.self.cycles-pp.find_vma
0.08 ± 5% -0.0 0.06 perf-profile.self.cycles-pp.vma_policy_mof
0.08 ± 8% -0.0 0.06 ± 6% perf-profile.self.cycles-pp.vmacache_find
0.08 ± 10% -0.0 0.06 ± 6% perf-profile.self.cycles-pp.__fget
0.07 ± 6% -0.0 0.05 ± 8% perf-profile.self.cycles-pp.__perf_sw_event
0.06 -0.0 0.05 perf-profile.self.cycles-pp.d_path
0.08 ± 5% +0.0 0.10 ± 7% perf-profile.self.cycles-pp.down_write
0.11 ± 4% +0.0 0.16 ± 2% perf-profile.self.cycles-pp.up_write
0.00 +0.1 0.08 ± 5% perf-profile.self.cycles-pp.__vma_link_rb
0.18 ± 9% +0.1 0.28 ± 6% perf-profile.self.cycles-pp.rwsem_down_write_failed
0.01 ±173% +0.1 0.12 ± 4% perf-profile.self.cycles-pp.osq_unlock
1.05 ± 4% +0.2 1.26 ± 2% perf-profile.self.cycles-pp.task_numa_work
0.35 ± 2% +0.6 0.95 ± 5% perf-profile.self.cycles-pp.__handle_mm_fault
vm-scalability.time.system_time
30000 +-+-----------------------------------------------------------------+
| O O |
25000 O-O O O O O O O O O O O O O O O O O O O O O |
|.+..+.+..+.+..+.+.+..+.+..+.+..+.+ .+..+.+.+..+.+..+.+..+.|
| : + + |
20000 +-+ : : : |
| : :: : |
15000 +-+ : : : : |
| : : : : |
10000 +-+ : : : : |
| : : : : |
| : : : : |
5000 +-+ :: :: |
| : : |
0 +-+-----------------------------------------------------------------+
vm-scalability.time.elapsed_time
400 +-+-------------------------------------------------------------------+
O O O O O O O O O O O O O O O O O O O O O O O O |
350 +-+ |
300 +-++.+..+.+..+.+..+.+..+.+..+.+..+.+ + +..+.+..+.+..+.+..+.+..+.|
| : : : |
250 +-+ : :: : |
| : : : : |
200 +-+ : : : : |
| : : : : |
150 +-+ : : : : |
100 +-+ : : : : |
| : : : : |
50 +-+ :: :: |
| : : |
0 +-+-------------------------------------------------------------------+
vm-scalability.time.elapsed_time.max
400 +-+-------------------------------------------------------------------+
O O O O O O O O O O O O O O O O O O O O O O O O |
350 +-+ |
300 +-++.+..+.+..+.+..+.+..+.+..+.+..+.+ + +..+.+..+.+..+.+..+.+..+.|
| : : : |
250 +-+ : :: : |
| : : : : |
200 +-+ : : : : |
| : : : : |
150 +-+ : : : : |
100 +-+ : : : : |
| : : : : |
50 +-+ :: :: |
| : : |
0 +-+-------------------------------------------------------------------+
vm-scalability.throughput
2e+07 +-+---------------------------------------------------------------+
1.8e+07 +-+..+.+.+..+.+..+.+.+..+.+.+..+.+ + +..+.+..+.+.+..+.+.+..+.|
| : : : |
1.6e+07 O-O O O O O O O O O O O O O O O O O O O O O O O |
1.4e+07 +-+ : :: : |
| : : : : |
1.2e+07 +-+ : : : : |
1e+07 +-+ : : : : |
8e+06 +-+ : : : : |
| : : : : |
6e+06 +-+ : : : : |
4e+06 +-+ :: : |
| : : |
2e+06 +-+ : : |
0 +-+---------------------------------------------------------------+
vm-scalability.median
250000 +-+----------------------------------------------------------------+
|.+..+.+..+.+.+..+.+..+.+.+..+ + + +.+..+.+..+.+.+..+.+..+.|
| : : : |
200000 O-O O O O O O O O O O O O O O O O O O O O O O O |
| : :: : |
| : : : : |
150000 +-+ : : : : |
| : : : : |
100000 +-+ : : : : |
| : : : : |
| : : : : |
50000 +-+ : :: |
| : : |
| : : |
0 +-+----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
***************************************************************************************************
lkp-bdw-ep3b: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 64G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.2/100%/debian-x86_64-2018-04-03.cgz/300s/lkp-bdw-ep3b/shell8/unixbench/0xb00002e
commit:
f32f004cdd ("ida: Convert to XArray")
eb797a8ee0 ("page cache: Rearrange address_space")
f32f004cddf86d63 eb797a8ee0ab4cd03df556980c
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
:4 25% 1:4 dmesg.WARNING:at_ip___perf_sw_event/0x
:4 25% 1:4 dmesg.WARNING:at_ip__slab_free/0x
:4 25% 1:4 dmesg.WARNING:at_ip_filename_lookup/0x
1:4 -25% :4 dmesg.WARNING:at_ip_fsnotify/0x
1:4 -25% :4 dmesg.WARNING:at_ip_perf_event_mmap_output/0x
1:4 9% 2:4 perf-profile.children.cycles-pp.error_entry
1:4 7% 1:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
39940 +20.9% 48288 unixbench.score
641.38 +1.0% 647.51 unixbench.time.elapsed_time
641.38 +1.0% 647.51 unixbench.time.elapsed_time.max
24220392 +29.1% 31263367 ± 2% unixbench.time.involuntary_context_switches
1.836e+09 +20.8% 2.217e+09 unixbench.time.minor_page_faults
7052 +2.1% 7200 unixbench.time.percent_of_cpu_this_job_got
36337 -2.7% 35347 unixbench.time.system_time
8893 +26.8% 11279 ± 2% unixbench.time.user_time
85632909 +4.3% 89281018 unixbench.time.voluntary_context_switches
15301176 +21.7% 18622462 unixbench.workload
42884872 +19.0% 51033794 softirqs.RCU
43387938 ± 6% -12.9% 37797790 ± 2% cpuidle.C1.usage
1155156 ± 4% -24.0% 878414 ± 4% cpuidle.POLL.usage
152008 ± 9% +15.2% 175048 ± 7% meminfo.DirectMap4k
255196 +13.6% 289806 meminfo.Shmem
100.25 +26.7% 127.00 ± 2% vmstat.procs.r
337268 +3.6% 349315 vmstat.system.cs
19.30 -1.7 17.59 mpstat.cpu.idle%
0.01 ± 30% +0.0 0.02 ± 35% mpstat.cpu.iowait%
15.54 +3.6 19.11 mpstat.cpu.usr%
7.086e+08 +20.8% 8.56e+08 ± 2% numa-numastat.node0.local_node
7.086e+08 +20.8% 8.561e+08 ± 2% numa-numastat.node0.numa_hit
7.053e+08 +21.3% 8.558e+08 numa-numastat.node1.local_node
7.053e+08 +21.3% 8.558e+08 numa-numastat.node1.numa_hit
2274 +2.1% 2322 turbostat.Avg_MHz
43383925 ± 6% -12.9% 37793914 ± 2% turbostat.C1
14.41 ± 2% -17.4% 11.90 ± 7% turbostat.CPU%c1
243.24 +2.2% 248.48 turbostat.PkgWatt
14.16 +8.1% 15.30 turbostat.RAMWatt
6807 ± 4% +48.1% 10084 ± 5% slabinfo.files_cache.active_objs
6807 ± 4% +48.2% 10090 ± 5% slabinfo.files_cache.num_objs
3616 +23.8% 4475 ± 5% slabinfo.kmalloc-256.active_objs
3616 +23.8% 4475 ± 5% slabinfo.kmalloc-256.num_objs
17134 ± 4% +13.8% 19493 ± 2% slabinfo.proc_inode_cache.active_objs
17134 ± 4% +14.0% 19524 ± 2% slabinfo.proc_inode_cache.num_objs
3393 ± 6% +17.7% 3994 ± 3% slabinfo.sock_inode_cache.active_objs
3393 ± 6% +17.7% 3994 ± 3% slabinfo.sock_inode_cache.num_objs
1241 ± 8% +15.0% 1427 ± 6% slabinfo.task_group.active_objs
1241 ± 8% +15.0% 1427 ± 6% slabinfo.task_group.num_objs
713963 ± 14% -20.9% 564405 ± 4% numa-meminfo.node0.FilePages
20539 ± 9% -19.7% 16496 ± 15% numa-meminfo.node0.Mapped
1281514 ± 9% -10.6% 1145886 ± 4% numa-meminfo.node0.MemUsed
683.75 ± 11% +54.1% 1053 ± 12% numa-meminfo.node0.Mlocked
48239 ± 4% -19.8% 38686 ± 17% numa-meminfo.node0.SReclaimable
669804 ± 15% +27.8% 856173 ± 3% numa-meminfo.node1.FilePages
4719 ± 56% +227.5% 15456 ± 43% numa-meminfo.node1.Inactive
4644 ± 57% +232.0% 15419 ± 43% numa-meminfo.node1.Inactive(anon)
14609 ± 12% +47.2% 21499 ± 11% numa-meminfo.node1.Mapped
40126 ± 6% +29.6% 52000 ± 12% numa-meminfo.node1.SReclaimable
119605 ± 96% +131.3% 276593 ± 3% numa-meminfo.node1.Shmem
130207 +18.0% 153583 ± 5% numa-meminfo.node1.Slab
547652 ± 4% +5.4% 577200 ± 4% numa-meminfo.node1.Unevictable
178549 ± 14% -21.0% 141123 ± 4% numa-vmstat.node0.nr_file_pages
5179 ± 9% -20.3% 4130 ± 16% numa-vmstat.node0.nr_mapped
170.50 ± 12% +54.5% 263.50 ± 13% numa-vmstat.node0.nr_mlock
12061 ± 4% -19.8% 9670 ± 17% numa-vmstat.node0.nr_slab_reclaimable
3.542e+08 +21.5% 4.304e+08 ± 2% numa-vmstat.node0.numa_hit
3.542e+08 +21.5% 4.304e+08 ± 2% numa-vmstat.node0.numa_local
167459 ± 15% +27.9% 214111 ± 3% numa-vmstat.node1.nr_file_pages
1150 ± 57% +234.3% 3845 ± 44% numa-vmstat.node1.nr_inactive_anon
3673 ± 11% +49.4% 5488 ± 13% numa-vmstat.node1.nr_mapped
29911 ± 96% +131.3% 69188 ± 3% numa-vmstat.node1.nr_shmem
10036 ± 6% +29.5% 13000 ± 12% numa-vmstat.node1.nr_slab_reclaimable
22541 +12.7% 25401 ± 3% numa-vmstat.node1.nr_slab_unreclaimable
136898 ± 4% +5.4% 144317 ± 4% numa-vmstat.node1.nr_unevictable
1150 ± 57% +234.3% 3845 ± 44% numa-vmstat.node1.nr_zone_inactive_anon
136898 ± 4% +5.4% 144317 ± 4% numa-vmstat.node1.nr_zone_unevictable
3.525e+08 +22.0% 4.301e+08 ± 2% numa-vmstat.node1.numa_hit
3.523e+08 +22.0% 4.299e+08 ± 2% numa-vmstat.node1.numa_local
149893 +6.3% 159383 proc-vmstat.nr_active_anon
345938 +2.7% 355212 proc-vmstat.nr_file_pages
6075 +9.3% 6643 proc-vmstat.nr_inactive_anon
30490 +2.2% 31159 proc-vmstat.nr_kernel_stack
8870 +8.1% 9585 proc-vmstat.nr_mapped
9254 +2.3% 9471 proc-vmstat.nr_page_table_pages
63803 +13.6% 72460 proc-vmstat.nr_shmem
22097 +2.6% 22671 proc-vmstat.nr_slab_reclaimable
47965 +6.9% 51274 proc-vmstat.nr_slab_unreclaimable
149893 +6.3% 159383 proc-vmstat.nr_zone_active_anon
6075 +9.3% 6643 proc-vmstat.nr_zone_inactive_anon
1.414e+09 +21.1% 1.712e+09 proc-vmstat.numa_hit
1.414e+09 +21.1% 1.712e+09 proc-vmstat.numa_local
47447 +10.8% 52569 proc-vmstat.pgactivate
1.487e+09 +21.1% 1.801e+09 proc-vmstat.pgalloc_normal
1.846e+09 +20.7% 2.228e+09 proc-vmstat.pgfault
1.487e+09 +21.1% 1.801e+09 proc-vmstat.pgfree
63795 +20.6% 76962 proc-vmstat.thp_deferred_split_page
63790 +20.6% 76940 proc-vmstat.thp_fault_alloc
26894638 +20.9% 32515246 proc-vmstat.unevictable_pgs_culled
1.556e+13 +4.9% 1.632e+13 perf-stat.branch-instructions
1.54 +0.2 1.75 perf-stat.branch-miss-rate%
2.399e+11 +19.4% 2.863e+11 perf-stat.branch-misses
1.034e+11 +17.1% 1.21e+11 ± 2% perf-stat.cache-misses
9.784e+11 +18.6% 1.161e+12 perf-stat.cache-references
2.18e+08 +4.6% 2.281e+08 perf-stat.context-switches
1.60 -1.0% 1.59 perf-stat.cpi
1.281e+14 +3.2% 1.322e+14 perf-stat.cpu-cycles
46095973 +22.0% 56220581 perf-stat.cpu-migrations
0.19 +0.0 0.23 ± 2% perf-stat.dTLB-load-miss-rate%
4.148e+10 +23.3% 5.113e+10 ± 2% perf-stat.dTLB-load-misses
2.213e+13 +2.1% 2.261e+13 perf-stat.dTLB-loads
5.537e+09 +22.2% 6.767e+09 ± 2% perf-stat.dTLB-store-misses
8.001e+12 +20.3% 9.625e+12 perf-stat.dTLB-stores
2.034e+10 +20.1% 2.443e+10 perf-stat.iTLB-load-misses
1.3e+10 +17.1% 1.522e+10 perf-stat.iTLB-loads
7.991e+13 +4.3% 8.332e+13 perf-stat.instructions
3929 -13.2% 3411 perf-stat.instructions-per-iTLB-miss
1.81e+09 +20.7% 2.184e+09 perf-stat.minor-faults
91.68 -2.3 89.39 perf-stat.node-load-miss-rate%
3.395e+10 +14.5% 3.889e+10 perf-stat.node-load-misses
3.083e+09 +49.7% 4.616e+09 ± 3% perf-stat.node-loads
1.018e+10 +20.2% 1.224e+10 ± 2% perf-stat.node-store-misses
8.896e+09 +21.3% 1.079e+10 ± 2% perf-stat.node-stores
1.81e+09 +20.7% 2.184e+09 perf-stat.page-faults
5222880 -14.3% 4475405 perf-stat.path-length
1942 ± 7% +37.3% 2665 ± 18% sched_debug.cfs_rq:/.exec_clock.stddev
47.72 ± 7% +18.8% 56.67 ± 2% sched_debug.cfs_rq:/.load_avg.avg
4.95 ± 9% +44.0% 7.14 ± 7% sched_debug.cfs_rq:/.load_avg.min
265960 ± 8% +36.3% 362493 ± 15% sched_debug.cfs_rq:/.min_vruntime.stddev
0.33 ± 6% -11.5% 0.29 ± 5% sched_debug.cfs_rq:/.nr_running.stddev
2.77 ± 7% +88.4% 5.23 ± 3% sched_debug.cfs_rq:/.nr_spread_over.avg
7.18 ± 5% +79.1% 12.86 ± 15% sched_debug.cfs_rq:/.nr_spread_over.max
0.27 ± 52% +358.3% 1.25 ± 25% sched_debug.cfs_rq:/.nr_spread_over.min
1.57 ± 2% +40.9% 2.21 ± 2% sched_debug.cfs_rq:/.nr_spread_over.stddev
27.36 ± 5% +13.2% 30.98 ± 6% sched_debug.cfs_rq:/.removed.load_avg.avg
1263 ± 5% +14.1% 1442 ± 6% sched_debug.cfs_rq:/.removed.runnable_sum.avg
265962 ± 8% +36.3% 362499 ± 15% sched_debug.cfs_rq:/.spread0.stddev
8169 ± 28% +64.2% 13411 ± 21% sched_debug.cpu.avg_idle.min
298.80 ± 11% +57.4% 470.43 ± 28% sched_debug.cpu.cpu_load[0].max
7.46 ± 5% +34.4% 10.03 ± 12% sched_debug.cpu.cpu_load[1].avg
212.55 ± 15% +52.5% 324.14 ± 17% sched_debug.cpu.cpu_load[1].max
25.69 ± 9% +52.4% 39.15 ± 19% sched_debug.cpu.cpu_load[1].stddev
7.18 ± 3% +32.7% 9.53 ± 5% sched_debug.cpu.cpu_load[2].avg
151.27 ± 16% +51.9% 229.80 ± 11% sched_debug.cpu.cpu_load[2].max
19.45 ± 11% +50.2% 29.22 ± 10% sched_debug.cpu.cpu_load[2].stddev
6.92 ± 2% +29.5% 8.96 ± 4% sched_debug.cpu.cpu_load[3].avg
106.45 ± 23% +52.6% 162.50 ± 15% sched_debug.cpu.cpu_load[3].max
14.94 ± 15% +48.0% 22.10 ± 10% sched_debug.cpu.cpu_load[3].stddev
6.45 ± 2% +25.9% 8.12 ± 6% sched_debug.cpu.cpu_load[4].avg
79.82 ± 34% +56.2% 124.64 ± 25% sched_debug.cpu.cpu_load[4].max
11.70 ± 23% +47.3% 17.23 ± 17% sched_debug.cpu.cpu_load[4].stddev
422565 ± 24% +22.7% 518276 ± 15% sched_debug.cpu.load.max
0.00 ± 3% +23.4% 0.00 ± 14% sched_debug.cpu.next_balance.stddev
1.14 ± 5% +31.4% 1.50 ± 7% sched_debug.cpu.nr_running.avg
3.16 ± 2% +38.1% 4.36 ± 9% sched_debug.cpu.nr_running.max
0.71 ± 2% +33.7% 0.95 ± 7% sched_debug.cpu.nr_running.stddev
39284 ± 7% +47.6% 57979 ± 20% sched_debug.cpu.nr_switches.stddev
1658 ± 12% +73.9% 2884 ± 11% sched_debug.cpu.nr_uninterruptible.max
-1902 +212.1% -5938 sched_debug.cpu.nr_uninterruptible.min
716.66 ± 9% +107.5% 1487 ± 30% sched_debug.cpu.nr_uninterruptible.stddev
43221 ± 7% +46.5% 63340 ± 18% sched_debug.cpu.sched_count.stddev
351785 ± 3% -14.1% 302212 sched_debug.cpu.sched_goidle.min
21765 ± 7% +51.5% 32969 ± 24% sched_debug.cpu.sched_goidle.stddev
148360 +23.9% 183753 ± 2% sched_debug.cpu.ttwu_local.avg
152765 +24.4% 190065 ± 2% sched_debug.cpu.ttwu_local.max
2051 ± 10% +87.1% 3838 ± 32% sched_debug.cpu.ttwu_local.stddev
0.00 ± 25% +2.1e+05% 0.56 ± 46% sched_debug.rt_rq:/.rt_time.avg
0.02 ± 25% +2.1e+05% 49.46 ± 46% sched_debug.rt_rq:/.rt_time.max
0.00 ± 25% +2.1e+05% 5.24 ± 46% sched_debug.rt_rq:/.rt_time.stddev
20.84 ± 2% -4.4 16.42 ± 3% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.unlink_file_vma
21.69 ± 2% -4.4 17.32 ± 2% perf-profile.calltrace.cycles-pp.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.unlink_file_vma.free_pgtables
17.27 ± 2% -3.4 13.86 ± 3% perf-profile.calltrace.cycles-pp.call_rwsem_down_write_failed.down_write.unlink_file_vma.free_pgtables.exit_mmap
17.44 -3.4 14.05 ± 3% perf-profile.calltrace.cycles-pp.down_write.unlink_file_vma.free_pgtables.exit_mmap.mmput
36.96 -3.1 33.88 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
36.94 -3.1 33.87 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
12.69 -2.2 10.53 ± 2% perf-profile.calltrace.cycles-pp.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
12.71 -2.2 10.55 ± 2% perf-profile.calltrace.cycles-pp.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
13.36 -2.0 11.41 perf-profile.calltrace.cycles-pp.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64
8.96 ± 2% -1.9 7.03 ± 3% perf-profile.calltrace.cycles-pp.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.__vma_adjust.__split_vma
8.63 ± 2% -1.9 6.71 ± 3% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.__vma_adjust
13.61 -1.9 11.71 perf-profile.calltrace.cycles-pp.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
8.11 -1.8 6.36 ± 3% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.copy_process
8.44 -1.7 6.70 ± 2% perf-profile.calltrace.cycles-pp.call_rwsem_down_write_failed.down_write.copy_process._do_fork.do_syscall_64
8.44 -1.7 6.70 ± 2% perf-profile.calltrace.cycles-pp.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.copy_process._do_fork
8.50 -1.7 6.76 ± 2% perf-profile.calltrace.cycles-pp.down_write.copy_process._do_fork.do_syscall_64.entry_SYSCALL_64_after_hwframe
8.70 ± 2% -1.7 7.01 ± 3% perf-profile.calltrace.cycles-pp.unlink_file_vma.free_pgtables.exit_mmap.mmput.flush_old_exec
9.06 -1.6 7.47 ± 2% perf-profile.calltrace.cycles-pp.unlink_file_vma.free_pgtables.exit_mmap.mmput.do_exit
9.36 ± 2% -1.6 7.77 ± 2% perf-profile.calltrace.cycles-pp.free_pgtables.exit_mmap.mmput.flush_old_exec.load_elf_binary
9.71 -1.5 8.21 ± 2% perf-profile.calltrace.cycles-pp.free_pgtables.exit_mmap.mmput.do_exit.do_group_exit
6.71 ± 2% -1.5 5.26 ± 3% perf-profile.calltrace.cycles-pp.call_rwsem_down_write_failed.down_write.__vma_adjust.__split_vma.mprotect_fixup
6.77 ± 2% -1.4 5.32 ± 3% perf-profile.calltrace.cycles-pp.down_write.__vma_adjust.__split_vma.mprotect_fixup.do_mprotect_pkey
9.54 -1.3 8.23 ± 3% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
10.17 -1.3 8.86 ± 4% perf-profile.calltrace.cycles-pp.secondary_startup_64
9.31 -1.3 8.01 ± 4% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
10.04 -1.3 8.74 ± 3% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
7.63 ± 2% -1.3 6.33 ± 2% perf-profile.calltrace.cycles-pp.__vma_adjust.__split_vma.mprotect_fixup.do_mprotect_pkey.__x64_sys_mprotect
10.04 -1.3 8.75 ± 3% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
10.04 -1.3 8.75 ± 3% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
7.34 -1.3 6.05 ± 2% perf-profile.calltrace.cycles-pp.do_munmap.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
7.71 ± 2% -1.3 6.43 ± 2% perf-profile.calltrace.cycles-pp.__split_vma.mprotect_fixup.do_mprotect_pkey.__x64_sys_mprotect.do_syscall_64
10.84 -1.2 9.62 perf-profile.calltrace.cycles-pp.copy_process._do_fork.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_fork
8.01 ± 2% -1.2 6.80 ± 2% perf-profile.calltrace.cycles-pp.mprotect_fixup.do_mprotect_pkey.__x64_sys_mprotect.do_syscall_64.entry_SYSCALL_64_after_hwframe
8.09 ± 2% -1.2 6.90 ± 2% perf-profile.calltrace.cycles-pp.do_mprotect_pkey.__x64_sys_mprotect.do_syscall_64.entry_SYSCALL_64_after_hwframe
8.10 ± 2% -1.2 6.90 ± 2% perf-profile.calltrace.cycles-pp.__x64_sys_mprotect.do_syscall_64.entry_SYSCALL_64_after_hwframe
11.17 -1.2 9.99 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__libc_fork
11.16 -1.2 9.99 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_fork
11.15 -1.2 9.98 perf-profile.calltrace.cycles-pp._do_fork.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_fork
11.80 -1.1 10.73 perf-profile.calltrace.cycles-pp.__libc_fork
4.28 -1.1 3.21 ± 3% perf-profile.calltrace.cycles-pp.osq_lock.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.vma_link
4.44 ± 2% -1.0 3.47 ± 2% perf-profile.calltrace.cycles-pp.call_rwsem_down_write_failed.down_write.unlink_file_vma.free_pgtables.unmap_region
4.45 -1.0 3.49 ± 2% perf-profile.calltrace.cycles-pp.down_write.unlink_file_vma.free_pgtables.unmap_region.do_munmap
11.18 -1.0 10.22 perf-profile.calltrace.cycles-pp.exit_mmap.mmput.flush_old_exec.load_elf_binary.search_binary_handler
4.48 -1.0 3.52 ± 2% perf-profile.calltrace.cycles-pp.unlink_file_vma.free_pgtables.unmap_region.do_munmap.mmap_region
11.19 -1.0 10.23 perf-profile.calltrace.cycles-pp.mmput.flush_old_exec.load_elf_binary.search_binary_handler.__do_execve_file
4.43 -0.9 3.48 ± 8% perf-profile.calltrace.cycles-pp.call_rwsem_down_write_failed.down_write.vma_link.mmap_region.do_mmap
4.43 -0.9 3.48 ± 8% perf-profile.calltrace.cycles-pp.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.vma_link.mmap_region
4.67 -0.9 3.73 ± 2% perf-profile.calltrace.cycles-pp.free_pgtables.unmap_region.do_munmap.mmap_region.do_mmap
4.45 -0.9 3.51 ± 9% perf-profile.calltrace.cycles-pp.down_write.vma_link.mmap_region.do_mmap.vm_mmap_pgoff
11.30 -0.9 10.38 perf-profile.calltrace.cycles-pp.flush_old_exec.load_elf_binary.search_binary_handler.__do_execve_file.__x64_sys_execve
4.85 -0.9 3.98 ± 2% perf-profile.calltrace.cycles-pp.unmap_region.do_munmap.mmap_region.do_mmap.vm_mmap_pgoff
5.40 -0.8 4.57 ± 2% perf-profile.calltrace.cycles-pp.vma_link.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
2.25 -0.5 1.78 ± 3% perf-profile.calltrace.cycles-pp.down_write.__vma_adjust.__split_vma.do_munmap.mmap_region
2.25 -0.5 1.78 ± 3% perf-profile.calltrace.cycles-pp.call_rwsem_down_write_failed.down_write.__vma_adjust.__split_vma.do_munmap
2.42 -0.4 1.98 ± 2% perf-profile.calltrace.cycles-pp.__vma_adjust.__split_vma.do_munmap.mmap_region.do_mmap
2.43 -0.4 1.99 ± 2% perf-profile.calltrace.cycles-pp.__split_vma.do_munmap.mmap_region.do_mmap.vm_mmap_pgoff
13.17 -0.3 12.82 perf-profile.calltrace.cycles-pp.exit_mmap.mmput.do_exit.do_group_exit.__x64_sys_exit_group
13.17 -0.3 12.83 perf-profile.calltrace.cycles-pp.mmput.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
2.69 ± 2% -0.1 2.56 perf-profile.calltrace.cycles-pp.filemap_map_pages.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
0.56 +0.2 0.72 ± 2% perf-profile.calltrace.cycles-pp._IO_default_xsputn
0.63 +0.2 0.78 ± 3% perf-profile.calltrace.cycles-pp.copy_strings.__do_execve_file.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.75 +0.2 0.92 ± 2% perf-profile.calltrace.cycles-pp.wp_page_copy.do_wp_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
0.82 +0.2 1.01 ± 2% perf-profile.calltrace.cycles-pp.do_wp_page.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
0.94 ± 2% +0.2 1.16 perf-profile.calltrace.cycles-pp._dl_addr
0.64 ± 2% +0.2 0.87 ± 4% perf-profile.calltrace.cycles-pp.ret_from_fork
0.63 ± 2% +0.2 0.87 ± 4% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
0.67 ± 2% +0.2 0.91 ± 4% perf-profile.calltrace.cycles-pp.page_remove_rmap.unmap_page_range.unmap_vmas.exit_mmap.mmput
0.87 +0.2 1.12 ± 2% perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.88 +0.2 1.13 perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.setlocale
0.70 ± 2% +0.3 0.96 ± 3% perf-profile.calltrace.cycles-pp.arch_tlb_finish_mmu.tlb_finish_mmu.exit_mmap.mmput.flush_old_exec
1.10 +0.3 1.37 ± 2% perf-profile.calltrace.cycles-pp.__strcoll_l
0.70 ± 2% +0.3 0.97 ± 3% perf-profile.calltrace.cycles-pp.tlb_finish_mmu.exit_mmap.mmput.flush_old_exec.load_elf_binary
1.04 +0.3 1.32 ± 2% perf-profile.calltrace.cycles-pp.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.setlocale
0.89 +0.3 1.17 ± 3% perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.exit_mmap.mmput.flush_old_exec
0.95 +0.3 1.25 ± 3% perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.mmput.flush_old_exec.load_elf_binary
1.15 +0.3 1.46 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.setlocale
4.73 +0.3 5.04 perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
1.15 +0.3 1.46 ± 2% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.setlocale
1.07 +0.3 1.39 perf-profile.calltrace.cycles-pp.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe.mmap64
1.09 ± 2% +0.3 1.42 perf-profile.calltrace.cycles-pp.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe.mmap64
1.11 ± 2% +0.3 1.45 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.mmap64
1.11 +0.3 1.44 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.mmap64
0.26 ±100% +0.3 0.60 perf-profile.calltrace.cycles-pp.copy_page_range.copy_process._do_fork.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.85 +0.3 5.19 perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.39 ± 57% +0.4 0.74 ± 3% perf-profile.calltrace.cycles-pp.wait4
1.20 +0.4 1.56 perf-profile.calltrace.cycles-pp.mmap64
13.89 +0.4 14.25 perf-profile.calltrace.cycles-pp.load_elf_binary.search_binary_handler.__do_execve_file.__x64_sys_execve.do_syscall_64
13.91 +0.4 14.28 perf-profile.calltrace.cycles-pp.search_binary_handler.__do_execve_file.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.78 ± 2% +0.4 2.19 perf-profile.calltrace.cycles-pp.vfprintf.__vsnprintf_chk
0.55 ± 3% +0.4 0.99 ± 3% perf-profile.calltrace.cycles-pp.mmap_region.do_mmap.vm_mmap_pgoff.elf_map.load_elf_binary
0.60 ± 4% +0.5 1.06 ± 3% perf-profile.calltrace.cycles-pp.do_mmap.vm_mmap_pgoff.elf_map.load_elf_binary.search_binary_handler
0.66 ± 3% +0.5 1.12 ± 2% perf-profile.calltrace.cycles-pp.vm_mmap_pgoff.elf_map.load_elf_binary.search_binary_handler.__do_execve_file
5.43 +0.5 5.91 perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault
5.45 +0.5 5.92 perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
0.00 +0.5 0.51 perf-profile.calltrace.cycles-pp.page_fault.setlocale
0.00 +0.5 0.51 ± 3% perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
5.58 +0.5 6.09 perf-profile.calltrace.cycles-pp.page_fault
0.00 +0.5 0.52 ± 4% perf-profile.calltrace.cycles-pp.smpboot_thread_fn.kthread.ret_from_fork
0.00 +0.5 0.53 ± 2% perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
1.37 ± 2% +0.5 1.90 ± 5% perf-profile.calltrace.cycles-pp.arch_tlb_finish_mmu.tlb_finish_mmu.exit_mmap.mmput.do_exit
1.38 ± 2% +0.5 1.90 ± 5% perf-profile.calltrace.cycles-pp.tlb_finish_mmu.exit_mmap.mmput.do_exit.do_group_exit
0.00 +0.5 0.53 ± 3% perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
0.00 +0.5 0.54 ± 2% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
1.73 ± 2% +0.5 2.26 ± 3% perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.exit_mmap.mmput.do_exit
0.00 +0.5 0.54 ± 3% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
0.00 +0.5 0.54 ± 3% perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
2.69 +0.5 3.24 perf-profile.calltrace.cycles-pp.__vsnprintf_chk
1.80 ± 2% +0.6 2.35 ± 3% perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.mmput.do_exit.do_group_exit
0.00 +0.6 0.56 ± 2% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
0.00 +0.6 0.56 ± 3% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read
2.46 +0.6 3.03 ± 2% perf-profile.calltrace.cycles-pp.setlocale
0.00 +0.6 0.58 ± 3% perf-profile.calltrace.cycles-pp.write
0.00 +0.6 0.58 ± 2% perf-profile.calltrace.cycles-pp.do_faccessat.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.12 ±173% +0.6 0.71 ± 4% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.wait4
0.12 ±173% +0.6 0.71 ± 4% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.wait4
0.00 +0.6 0.61 ± 3% perf-profile.calltrace.cycles-pp.read
0.00 +0.6 0.61 ± 5% perf-profile.calltrace.cycles-pp.do_wait.kernel_wait4.__do_sys_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.6 0.62 ± 5% perf-profile.calltrace.cycles-pp.kernel_wait4.__do_sys_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe.wait4
0.00 +0.6 0.63 ± 4% perf-profile.calltrace.cycles-pp.__do_sys_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe.wait4
1.63 ± 2% +0.7 2.31 ± 5% perf-profile.calltrace.cycles-pp.release_pages.tlb_flush_mmu_free.arch_tlb_finish_mmu.tlb_finish_mmu.exit_mmap
0.00 +0.7 0.74 ± 3% perf-profile.calltrace.cycles-pp.vma_link.mmap_region.do_mmap.vm_mmap_pgoff.elf_map
0.12 ±173% +0.8 0.91 ± 3% perf-profile.calltrace.cycles-pp.do_munmap.vm_munmap.elf_map.load_elf_binary.search_binary_handler
2.05 ± 2% +0.8 2.84 ± 4% perf-profile.calltrace.cycles-pp.tlb_flush_mmu_free.arch_tlb_finish_mmu.tlb_finish_mmu.exit_mmap.mmput
0.13 ±173% +0.8 0.92 ± 4% perf-profile.calltrace.cycles-pp.vm_munmap.elf_map.load_elf_binary.search_binary_handler.__do_execve_file
15.67 +0.8 16.48 perf-profile.calltrace.cycles-pp.__do_execve_file.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
15.68 +0.8 16.50 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
15.68 +0.8 16.50 perf-profile.calltrace.cycles-pp.__x64_sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
15.68 +0.8 16.50 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.execve
15.71 +0.8 16.53 perf-profile.calltrace.cycles-pp.execve
1.16 ± 3% +0.9 2.05 ± 3% perf-profile.calltrace.cycles-pp.elf_map.load_elf_binary.search_binary_handler.__do_execve_file.__x64_sys_execve
42.48 -8.6 33.85 ± 3% perf-profile.children.cycles-pp.osq_lock
44.24 -8.5 35.76 ± 2% perf-profile.children.cycles-pp.rwsem_down_write_failed
44.25 -8.5 35.78 ± 2% perf-profile.children.cycles-pp.call_rwsem_down_write_failed
44.77 -8.4 36.37 ± 2% perf-profile.children.cycles-pp.down_write
22.58 -4.1 18.50 ± 2% perf-profile.children.cycles-pp.unlink_file_vma
24.10 -3.9 20.25 ± 2% perf-profile.children.cycles-pp.free_pgtables
13.81 -1.8 11.97 perf-profile.children.cycles-pp.ksys_mmap_pgoff
69.44 -1.8 67.68 perf-profile.children.cycles-pp.do_syscall_64
69.50 -1.8 67.74 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
10.31 ± 2% -1.5 8.80 ± 2% perf-profile.children.cycles-pp.__vma_adjust
13.93 -1.5 12.43 perf-profile.children.cycles-pp.mmap_region
10.40 -1.5 8.90 ± 2% perf-profile.children.cycles-pp.__split_vma
14.23 -1.4 12.80 perf-profile.children.cycles-pp.do_mmap
14.44 -1.4 13.06 perf-profile.children.cycles-pp.vm_mmap_pgoff
9.67 -1.3 8.35 ± 4% perf-profile.children.cycles-pp.cpuidle_enter_state
24.58 -1.3 23.26 perf-profile.children.cycles-pp.mmput
24.55 -1.3 23.24 perf-profile.children.cycles-pp.exit_mmap
9.44 -1.3 8.12 ± 4% perf-profile.children.cycles-pp.intel_idle
10.17 -1.3 8.86 ± 4% perf-profile.children.cycles-pp.secondary_startup_64
10.17 -1.3 8.86 ± 4% perf-profile.children.cycles-pp.cpu_startup_entry
10.17 -1.3 8.86 ± 4% perf-profile.children.cycles-pp.do_idle
10.04 -1.3 8.75 ± 3% perf-profile.children.cycles-pp.start_secondary
11.07 -1.2 9.82 perf-profile.children.cycles-pp.copy_process
8.02 ± 2% -1.2 6.80 ± 2% perf-profile.children.cycles-pp.mprotect_fixup
11.38 -1.2 10.18 perf-profile.children.cycles-pp._do_fork
8.10 ± 2% -1.2 6.90 ± 2% perf-profile.children.cycles-pp.do_mprotect_pkey
8.10 ± 2% -1.2 6.90 ± 2% perf-profile.children.cycles-pp.__x64_sys_mprotect
11.84 -1.1 10.77 perf-profile.children.cycles-pp.__libc_fork
11.51 -0.9 10.57 perf-profile.children.cycles-pp.flush_old_exec
8.20 -0.8 7.44 perf-profile.children.cycles-pp.do_munmap
5.36 -0.6 4.79 perf-profile.children.cycles-pp.unmap_region
5.70 ± 2% -0.5 5.21 perf-profile.children.cycles-pp.filemap_map_pages
5.81 -0.4 5.39 perf-profile.children.cycles-pp.vma_link
0.29 ± 3% -0.1 0.15 perf-profile.children.cycles-pp.radix_tree_next_chunk
0.39 ± 3% -0.0 0.36 perf-profile.children.cycles-pp.time
0.16 ± 4% -0.0 0.13 perf-profile.children.cycles-pp.find_busiest_group
0.30 -0.0 0.29 perf-profile.children.cycles-pp.load_balance
0.23 -0.0 0.21 ± 2% perf-profile.children.cycles-pp.__strcasecmp
0.17 -0.0 0.15 ± 3% perf-profile.children.cycles-pp.fopen
0.05 +0.0 0.06 perf-profile.children.cycles-pp.__put_task_struct
0.05 +0.0 0.06 perf-profile.children.cycles-pp.vm_brk_flags
0.05 +0.0 0.06 perf-profile.children.cycles-pp.__perf_event__output_id_sample
0.05 +0.0 0.06 perf-profile.children.cycles-pp.security_mmap_addr
0.05 +0.0 0.06 perf-profile.children.cycles-pp.selinux_file_open
0.06 +0.0 0.07 perf-profile.children.cycles-pp.__switch_to
0.07 ± 5% +0.0 0.09 ± 5% perf-profile.children.cycles-pp.__x64_sys_pipe
0.07 ± 5% +0.0 0.09 ± 5% perf-profile.children.cycles-pp.do_pipe2
0.07 ± 5% +0.0 0.09 ± 5% perf-profile.children.cycles-pp.unmap_single_vma
0.07 +0.0 0.08 ± 5% perf-profile.children.cycles-pp.__errno_location
0.05 +0.0 0.06 ± 6% perf-profile.children.cycles-pp.mem_cgroup_throttle_swaprate
0.05 ± 8% +0.0 0.07 ± 7% perf-profile.children.cycles-pp.perf_event_task_output
0.05 ± 8% +0.0 0.07 ± 7% perf-profile.children.cycles-pp.load_elf_phdrs
0.17 ± 2% +0.0 0.18 ± 2% perf-profile.children.cycles-pp.sched_ttwu_pending
0.06 ± 6% +0.0 0.08 ± 6% perf-profile.children.cycles-pp.cpumask_next
0.06 +0.0 0.07 ± 5% perf-profile.children.cycles-pp.kfree
0.08 ± 6% +0.0 0.09 perf-profile.children.cycles-pp.do_signal
0.08 ± 6% +0.0 0.09 perf-profile.children.cycles-pp.memcpy
0.08 ± 5% +0.0 0.09 ± 4% perf-profile.children.cycles-pp.__pipe
0.08 +0.0 0.10 ± 5% perf-profile.children.cycles-pp.vfs_getattr
0.08 ± 5% +0.0 0.09 ± 4% perf-profile.children.cycles-pp.security_inode_getattr
0.08 ± 5% +0.0 0.09 ± 4% perf-profile.children.cycles-pp.avc_has_perm
0.07 ± 6% +0.0 0.08 ± 5% perf-profile.children.cycles-pp.fsnotify
0.07 +0.0 0.09 ± 5% perf-profile.children.cycles-pp.__tsearch
0.07 ± 7% +0.0 0.08 perf-profile.children.cycles-pp.memchr
0.05 ± 8% +0.0 0.07 ± 6% perf-profile.children.cycles-pp.file_has_perm
0.05 ± 8% +0.0 0.07 ± 6% perf-profile.children.cycles-pp.__alloc_fd
0.05 +0.0 0.07 ± 7% perf-profile.children.cycles-pp.free_unref_page_commit
0.05 +0.0 0.07 ± 7% perf-profile.children.cycles-pp.up_read
0.29 +0.0 0.30 ± 2% perf-profile.children.cycles-pp.save_stack_trace_tsk
0.13 ± 3% +0.0 0.14 ± 3% perf-profile.children.cycles-pp.pipe_wait
0.06 +0.0 0.08 ± 6% perf-profile.children.cycles-pp.__put_anon_vma
0.06 ± 11% +0.0 0.08 ± 6% perf-profile.children.cycles-pp.wake_up_page_bit
0.05 ± 9% +0.0 0.07 perf-profile.children.cycles-pp.dup_fd
0.19 ± 2% +0.0 0.21 perf-profile.children.cycles-pp.dequeue_entity
0.10 ± 5% +0.0 0.11 ± 3% perf-profile.children.cycles-pp.move_queued_task
0.10 ± 7% +0.0 0.12 ± 3% perf-profile.children.cycles-pp.__list_add_valid
0.08 ± 6% +0.0 0.09 ± 4% perf-profile.children.cycles-pp.find_next_bit
0.07 ± 5% +0.0 0.09 ± 7% perf-profile.children.cycles-pp.arch_dup_task_struct
0.11 +0.0 0.13 ± 3% perf-profile.children.cycles-pp.find_get_entry
0.07 ± 10% +0.0 0.09 ± 4% perf-profile.children.cycles-pp.cp_new_stat
0.07 ± 7% +0.0 0.08 ± 5% perf-profile.children.cycles-pp.load_new_mm_cr3
0.07 ± 7% +0.0 0.08 ± 5% perf-profile.children.cycles-pp.set_next_entity
0.07 ± 6% +0.0 0.09 ± 5% perf-profile.children.cycles-pp.__install_special_mapping
0.06 ± 6% +0.0 0.08 perf-profile.children.cycles-pp._IO_setb
0.07 ± 6% +0.0 0.09 ± 5% perf-profile.children.cycles-pp.trailing_symlink
0.06 ± 6% +0.0 0.08 perf-profile.children.cycles-pp.setup_new_exec
0.06 ± 6% +0.0 0.08 perf-profile.children.cycles-pp.perf_event_task
0.05 +0.0 0.07 ± 6% perf-profile.children.cycles-pp.__legitimize_mnt
0.05 ± 8% +0.0 0.07 perf-profile.children.cycles-pp.vma_gap_callbacks_rotate
0.05 +0.0 0.07 ± 6% perf-profile.children.cycles-pp.security_file_free
0.05 ± 8% +0.0 0.07 perf-profile.children.cycles-pp.down_write_killable
0.08 ± 5% +0.0 0.10 ± 4% perf-profile.children.cycles-pp.available_idle_cpu
0.08 ± 5% +0.0 0.10 ± 4% perf-profile.children.cycles-pp._vm_normal_page
0.08 ± 5% +0.0 0.10 perf-profile.children.cycles-pp.expand_downwards
0.06 ± 7% +0.0 0.08 ± 6% perf-profile.children.cycles-pp._copy_to_user
0.06 +0.0 0.08 ± 5% perf-profile.children.cycles-pp.__unlock_page_memcg
0.06 ± 7% +0.0 0.08 ± 6% perf-profile.children.cycles-pp.security_file_alloc
0.17 ± 2% +0.0 0.19 ± 4% perf-profile.children.cycles-pp.osq_unlock
0.10 ± 5% +0.0 0.11 ± 4% perf-profile.children.cycles-pp.finish_fault
0.09 ± 4% +0.0 0.11 ± 3% perf-profile.children.cycles-pp.may_open
0.09 ± 4% +0.0 0.11 ± 3% perf-profile.children.cycles-pp.perf_output_copy
0.10 ± 4% +0.0 0.12 ± 3% perf-profile.children.cycles-pp.selinux_mmap_file
0.07 ± 5% +0.0 0.09 ± 4% perf-profile.children.cycles-pp.try_charge
0.07 ± 5% +0.0 0.09 ± 4% perf-profile.children.cycles-pp.___slab_alloc
0.08 ± 6% +0.0 0.10 ± 5% perf-profile.children.cycles-pp.__pthread_once_slow
0.07 +0.0 0.09 perf-profile.children.cycles-pp.move_page_tables
0.07 +0.0 0.09 perf-profile.children.cycles-pp.touch_atime
0.06 ± 6% +0.0 0.08 ± 5% perf-profile.children.cycles-pp.queue_work_on
0.09 ± 7% +0.0 0.11 perf-profile.children.cycles-pp.filp_close
0.08 ± 8% +0.0 0.10 perf-profile.children.cycles-pp.selinux_vm_enough_memory
0.09 ± 4% +0.0 0.11 ± 4% perf-profile.children.cycles-pp.sync_regs
0.08 +0.0 0.10 perf-profile.children.cycles-pp.__inode_security_revalidate
0.07 ± 6% +0.0 0.09 ± 4% perf-profile.children.cycles-pp.prepend_name
0.06 ± 6% +0.0 0.08 ± 5% perf-profile.children.cycles-pp.complete_walk
0.06 +0.0 0.08 ± 8% perf-profile.children.cycles-pp.unlock_page_memcg
0.06 ± 11% +0.0 0.08 perf-profile.children.cycles-pp.cred_has_capability
0.06 +0.0 0.08 perf-profile.children.cycles-pp.security_file_open
0.06 +0.0 0.08 perf-profile.children.cycles-pp.__queue_work
0.05 ± 8% +0.0 0.07 ± 11% perf-profile.children.cycles-pp.copyin
0.06 ± 9% +0.0 0.08 ± 6% perf-profile.children.cycles-pp.entry_SYSCALL_64_stage2
0.06 ± 9% +0.0 0.08 ± 6% perf-profile.children.cycles-pp.simple_write_begin
0.06 ± 7% +0.0 0.08 ± 5% perf-profile.children.cycles-pp.security_file_permission
0.14 ± 5% +0.0 0.16 ± 2% perf-profile.children.cycles-pp.switch_mm_irqs_off
0.13 +0.0 0.15 ± 2% perf-profile.children.cycles-pp.__anon_vma_prepare
0.10 ± 7% +0.0 0.12 ± 3% perf-profile.children.cycles-pp.migration_cpu_stop
0.09 ± 7% +0.0 0.11 ± 3% perf-profile.children.cycles-pp._IO_file_close
0.09 +0.0 0.11 ± 3% perf-profile.children.cycles-pp.__call_rcu
0.07 +0.0 0.09 ± 4% perf-profile.children.cycles-pp.kmem_cache_alloc_trace
0.11 ± 4% +0.0 0.13 perf-profile.children.cycles-pp.do_brk_flags
0.10 ± 8% +0.0 0.12 ± 4% perf-profile.children.cycles-pp.prepare_creds
0.09 +0.0 0.11 ± 3% perf-profile.children.cycles-pp.put_task_stack
0.08 ± 5% +0.0 0.11 ± 4% perf-profile.children.cycles-pp.__slab_alloc
0.09 ± 5% +0.0 0.11 ± 4% perf-profile.children.cycles-pp.__x64_sys_brk
0.08 ± 5% +0.0 0.10 perf-profile.children.cycles-pp.map_vdso
0.11 ± 7% +0.0 0.14 ± 3% perf-profile.children.cycles-pp.get_zeroed_page
0.08 ± 5% +0.0 0.11 ± 4% perf-profile.children.cycles-pp.anon_vma_interval_tree_remove
0.12 ± 3% +0.0 0.15 ± 2% perf-profile.children.cycles-pp.__libc_sigaction
0.11 ± 4% +0.0 0.13 ± 5% perf-profile.children.cycles-pp.cpu_stopper_thread
0.10 ± 5% +0.0 0.12 ± 5% perf-profile.children.cycles-pp.prepare_binprm
0.09 +0.0 0.11 ± 4% perf-profile.children.cycles-pp.__d_alloc
0.09 ± 4% +0.0 0.11 ± 3% perf-profile.children.cycles-pp.simple_lookup
0.08 ± 5% +0.0 0.11 ± 4% perf-profile.children.cycles-pp.vm_area_alloc
0.04 ± 57% +0.0 0.06 ± 6% perf-profile.children.cycles-pp.alloc_pid
0.11 ± 4% +0.0 0.13 ± 3% perf-profile.children.cycles-pp.brk
0.08 ± 5% +0.0 0.11 ± 4% perf-profile.children.cycles-pp.memset_erms
0.11 ± 3% +0.0 0.14 ± 6% perf-profile.children.cycles-pp.__get_user_8
0.09 ± 5% +0.0 0.11 ± 3% perf-profile.children.cycles-pp.evict
0.15 ± 3% +0.0 0.17 ± 2% perf-profile.children.cycles-pp.avc_has_perm_noaudit
0.13 ± 3% +0.0 0.15 ± 3% perf-profile.children.cycles-pp.sched_move_task
0.11 ± 7% +0.0 0.14 ± 3% perf-profile.children.cycles-pp.__dentry_kill
0.10 ± 4% +0.0 0.13 ± 5% perf-profile.children.cycles-pp.unmapped_area_topdown
0.10 ± 4% +0.0 0.12 ± 4% perf-profile.children.cycles-pp.__snprintf_chk
0.09 ± 4% +0.0 0.11 ± 7% perf-profile.children.cycles-pp.select_idle_sibling
0.08 ± 5% +0.0 0.11 ± 8% perf-profile.children.cycles-pp.lockref_get_not_dead
0.07 +0.0 0.10 ± 4% perf-profile.children.cycles-pp.d_add
0.04 ± 57% +0.0 0.07 ± 7% perf-profile.children.cycles-pp.do_huge_pmd_anonymous_page
0.11 +0.0 0.14 ± 3% perf-profile.children.cycles-pp.fput
0.11 ± 6% +0.0 0.14 ± 3% perf-profile.children.cycles-pp._exit
0.12 ± 6% +0.0 0.15 ± 5% perf-profile.children.cycles-pp.security_vm_enough_memory_mm
0.12 ± 3% +0.0 0.15 ± 2% perf-profile.children.cycles-pp.free_pgd_range
0.11 ± 4% +0.0 0.14 ± 3% perf-profile.children.cycles-pp.vm_area_dup
0.10 ± 7% +0.0 0.13 perf-profile.children.cycles-pp.vma_interval_tree_augment_rotate
0.10 ± 5% +0.0 0.12 ± 4% perf-profile.children.cycles-pp.rcu_all_qs
0.09 ± 4% +0.0 0.12 ± 3% perf-profile.children.cycles-pp.munmap
0.08 ± 6% +0.0 0.11 ± 4% perf-profile.children.cycles-pp.__mmdrop
0.07 ± 6% +0.0 0.10 ± 4% perf-profile.children.cycles-pp.truncate_inode_pages_range
0.04 ± 57% +0.0 0.07 ± 6% perf-profile.children.cycles-pp.grab_cache_page_write_begin
0.12 ± 3% +0.0 0.15 ± 2% perf-profile.children.cycles-pp.__lru_cache_add
0.11 ± 4% +0.0 0.15 ± 3% perf-profile.children.cycles-pp.prepend_path
0.14 ± 3% +0.0 0.17 ± 6% perf-profile.children.cycles-pp.security_mmap_file
0.14 ± 3% +0.0 0.17 ± 7% perf-profile.children.cycles-pp.mem_cgroup_try_charge
0.12 ± 3% +0.0 0.15 ± 9% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
0.13 +0.0 0.16 ± 2% perf-profile.children.cycles-pp.lockref_put_or_lock
0.14 ± 6% +0.0 0.17 ± 4% perf-profile.children.cycles-pp.kernel_read
0.12 ± 3% +0.0 0.15 ± 3% perf-profile.children.cycles-pp.__put_user_4
0.09 +0.0 0.12 ± 8% perf-profile.children.cycles-pp.file_free_rcu
0.11 ± 4% +0.0 0.15 ± 5% perf-profile.children.cycles-pp.__x64_sys_close
0.20 ± 3% +0.0 0.23 ± 2% perf-profile.children.cycles-pp.update_load_avg
0.16 +0.0 0.20 ± 5% perf-profile.children.cycles-pp.vfs_statx_fd
0.17 ± 3% +0.0 0.20 ± 3% perf-profile.children.cycles-pp.d_path
0.15 ± 3% +0.0 0.19 ± 3% perf-profile.children.cycles-pp.stop_one_cpu
0.16 +0.0 0.20 ± 2% perf-profile.children.cycles-pp.anon_vma_clone
0.16 ± 2% +0.0 0.20 ± 2% perf-profile.children.cycles-pp.vma_compute_subtree_gap
0.16 ± 2% +0.0 0.20 ± 2% perf-profile.children.cycles-pp.getenv
0.15 ± 2% +0.0 0.18 ± 2% perf-profile.children.cycles-pp._init
0.14 ± 5% +0.0 0.18 ± 2% perf-profile.children.cycles-pp.__pud_alloc
0.12 ± 4% +0.0 0.16 ± 4% perf-profile.children.cycles-pp.__getrlimit
0.09 +0.0 0.12 ± 4% perf-profile.children.cycles-pp.wake_q_add
0.30 ± 2% +0.0 0.34 ± 3% perf-profile.children.cycles-pp.strchrnul
0.18 ± 2% +0.0 0.21 ± 3% perf-profile.children.cycles-pp.open_exec
0.14 +0.0 0.18 ± 2% perf-profile.children.cycles-pp.shift_arg_pages
0.14 ± 5% +0.0 0.18 ± 2% perf-profile.children.cycles-pp.preempt_schedule_common
0.19 ± 4% +0.0 0.23 perf-profile.children.cycles-pp.getname_flags
0.17 ± 4% +0.0 0.21 ± 2% perf-profile.children.cycles-pp.arch_get_unmapped_area_topdown
0.14 ± 3% +0.0 0.18 ± 5% perf-profile.children.cycles-pp.__might_fault
0.15 ± 2% +0.0 0.18 ± 2% perf-profile.children.cycles-pp.perf_event_mmap_output
0.13 ± 6% +0.0 0.17 ± 5% perf-profile.children.cycles-pp.dentry_kill
0.13 ± 5% +0.0 0.17 ± 4% perf-profile.children.cycles-pp.copy_user_generic_unrolled
0.13 ± 3% +0.0 0.17 ± 3% perf-profile.children.cycles-pp.do_unlinkat
0.11 +0.0 0.15 ± 2% perf-profile.children.cycles-pp.inode_permission
0.14 ± 5% +0.0 0.18 perf-profile.children.cycles-pp.free_pcppages_bulk
0.08 +0.0 0.12 ± 11% perf-profile.children.cycles-pp.rcu_cblist_dequeue
0.23 ± 2% +0.0 0.27 ± 4% perf-profile.children.cycles-pp.ptep_clear_flush
0.18 ± 4% +0.0 0.22 perf-profile.children.cycles-pp.change_protection_range
0.16 ± 2% +0.0 0.20 ± 4% perf-profile.children.cycles-pp.memcpy_erms
0.15 ± 3% +0.0 0.20 ± 4% perf-profile.children.cycles-pp.pagecache_get_page
0.14 ± 3% +0.0 0.18 ± 4% perf-profile.children.cycles-pp.legitimize_path
0.39 ± 2% +0.0 0.43 ± 3% perf-profile.children.cycles-pp.malloc
0.54 +0.0 0.58 perf-profile.children.cycles-pp.ttwu_do_activate
0.25 ± 3% +0.0 0.29 perf-profile.children.cycles-pp.dequeue_task_fair
0.19 ± 2% +0.0 0.23 ± 3% perf-profile.children.cycles-pp._copy_from_user
0.18 ± 2% +0.0 0.22 ± 3% perf-profile.children.cycles-pp.vmacache_find
0.18 ± 2% +0.0 0.22 perf-profile.children.cycles-pp.open64
0.14 ± 3% +0.0 0.18 perf-profile.children.cycles-pp.lock_page_memcg
0.01 ±173% +0.0 0.05 ± 9% perf-profile.children.cycles-pp.__follow_mount_rcu
0.21 ± 6% +0.0 0.26 ± 3% perf-profile.children.cycles-pp._IO_file_xsputn
0.17 ± 3% +0.0 0.21 ± 3% perf-profile.children.cycles-pp.signal_wake_up_state
0.14 ± 3% +0.0 0.18 ± 3% perf-profile.children.cycles-pp.count
0.13 ± 3% +0.0 0.18 ± 2% perf-profile.children.cycles-pp.unlinkat
0.11 ± 7% +0.0 0.15 ± 9% perf-profile.children.cycles-pp.__get_vm_area_node
0.09 ± 4% +0.0 0.14 ± 13% perf-profile.children.cycles-pp.alloc_vmap_area
0.20 ± 2% +0.0 0.24 ± 3% perf-profile.children.cycles-pp.__vma_link_rb
0.17 ± 2% +0.0 0.21 ± 3% perf-profile.children.cycles-pp.schedule_tail
0.14 ± 3% +0.0 0.19 ± 4% perf-profile.children.cycles-pp.__rb_erase_color
0.01 ±173% +0.0 0.06 ± 7% perf-profile.children.cycles-pp.insert_vm_struct
0.01 ±173% +0.0 0.06 ± 7% perf-profile.children.cycles-pp.put_prev_entity
0.19 ± 2% +0.0 0.24 ± 2% perf-profile.children.cycles-pp.copy_strings_kernel
0.19 ± 2% +0.0 0.24 ± 5% perf-profile.children.cycles-pp.setup_arg_pages
0.17 ± 4% +0.0 0.22 perf-profile.children.cycles-pp.__pmd_alloc
0.15 ± 3% +0.0 0.19 ± 5% perf-profile.children.cycles-pp.unlazy_walk
0.14 ± 5% +0.0 0.19 ± 4% perf-profile.children.cycles-pp.__get_free_pages
0.01 ±173% +0.0 0.06 perf-profile.children.cycles-pp.__free_pages_ok
0.15 +0.0 0.20 ± 4% perf-profile.children.cycles-pp.mark_page_accessed
0.15 ± 2% +0.0 0.20 ± 3% perf-profile.children.cycles-pp.d_alloc
0.57 +0.0 0.61 perf-profile.children.cycles-pp.enqueue_entity
0.23 +0.0 0.28 ± 4% perf-profile.children.cycles-pp.__do_sys_newfstat
0.16 ± 5% +0.0 0.21 ± 6% perf-profile.children.cycles-pp.__generic_file_write_iter
0.15 ± 7% +0.0 0.20 ± 5% perf-profile.children.cycles-pp.generic_perform_write
0.00 +0.1 0.05 perf-profile.children.cycles-pp.vma_merge
0.00 +0.1 0.05 perf-profile.children.cycles-pp.free_one_page
0.00 +0.1 0.05 perf-profile.children.cycles-pp.rcu_segcblist_enqueue
0.00 +0.1 0.05 perf-profile.children.cycles-pp.__fget_light
0.17 ± 7% +0.1 0.22 ± 5% perf-profile.children.cycles-pp.generic_file_write_iter
0.24 ± 2% +0.1 0.30 ± 3% perf-profile.children.cycles-pp.anon_vma_fork
0.19 ± 3% +0.1 0.24 ± 4% perf-profile.children.cycles-pp.mem_cgroup_try_charge_delay
0.19 +0.1 0.24 ± 3% perf-profile.children.cycles-pp.finish_task_switch
0.18 ± 2% +0.1 0.24 ± 3% perf-profile.children.cycles-pp.__send_signal
0.18 ± 2% +0.1 0.23 ± 2% perf-profile.children.cycles-pp.__pagevec_lru_add_fn
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.__pagevec_release
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.__perf_event_header__init_id
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.check_preempt_curr
0.21 ± 2% +0.1 0.27 ± 5% perf-profile.children.cycles-pp.autoremove_wake_function
0.21 ± 3% +0.1 0.27 ± 3% perf-profile.children.cycles-pp.generic_file_read_iter
0.30 ± 2% +0.1 0.36 ± 3% perf-profile.children.cycles-pp.wake_up_new_task
0.25 +0.1 0.30 perf-profile.children.cycles-pp.get_unmapped_area
0.19 ± 3% +0.1 0.24 ± 4% perf-profile.children.cycles-pp.get_user_arg_ptr
0.17 ± 2% +0.1 0.22 ± 3% perf-profile.children.cycles-pp.pgd_alloc
0.00 +0.1 0.05 ± 9% perf-profile.children.cycles-pp.page_add_new_anon_rmap
0.21 ± 2% +0.1 0.26 ± 4% perf-profile.children.cycles-pp.__might_sleep
0.31 +0.1 0.36 perf-profile.children.cycles-pp.native_flush_tlb_one_user
0.26 ± 3% +0.1 0.31 ± 2% perf-profile.children.cycles-pp.do_open_execat
0.20 ± 4% +0.1 0.25 ± 7% perf-profile.children.cycles-pp.__vmalloc_node_range
0.00 +0.1 0.06 ± 7% perf-profile.children.cycles-pp.__mod_node_page_state
0.00 +0.1 0.06 ± 7% perf-profile.children.cycles-pp.anon_vma_interval_tree_insert
0.22 +0.1 0.28 ± 4% perf-profile.children.cycles-pp.__wake_up_common_lock
0.22 ± 3% +0.1 0.28 ± 4% perf-profile.children.cycles-pp.do_notify_parent
0.28 ± 2% +0.1 0.34 ± 5% perf-profile.children.cycles-pp.__rb_insert_augmented
0.23 ± 2% +0.1 0.29 perf-profile.children.cycles-pp.strlen
0.19 ± 2% +0.1 0.25 ± 5% perf-profile.children.cycles-pp.copyout
0.19 ± 4% +0.1 0.25 ± 3% perf-profile.children.cycles-pp.mm_init
0.64 +0.1 0.70 perf-profile.children.cycles-pp.enqueue_task_fair
0.29 ± 2% +0.1 0.35 ± 3% perf-profile.children.cycles-pp.___perf_sw_event
0.24 ± 2% +0.1 0.30 ± 3% perf-profile.children.cycles-pp.terminate_walk
0.26 ± 3% +0.1 0.33 perf-profile.children.cycles-pp.free_unref_page_list
0.23 +0.1 0.30 ± 5% perf-profile.children.cycles-pp.__wake_up_common
0.32 +0.1 0.38 perf-profile.children.cycles-pp.__perf_sw_event
0.28 ± 2% +0.1 0.35 ± 4% perf-profile.children.cycles-pp.get_user_pages_remote
0.25 ± 2% +0.1 0.32 ± 4% perf-profile.children.cycles-pp.pipe_write
0.22 ± 3% +0.1 0.29 ± 5% perf-profile.children.cycles-pp.copy_page_to_iter
0.23 ± 2% +0.1 0.29 ± 2% perf-profile.children.cycles-pp._IO_file_open
0.32 ± 3% +0.1 0.38 ± 2% perf-profile.children.cycles-pp._fini
0.28 +0.1 0.34 ± 4% perf-profile.children.cycles-pp.__get_user_pages
0.34 ± 2% +0.1 0.41 perf-profile.children.cycles-pp.unlock_page
0.30 ± 2% +0.1 0.36 ± 4% perf-profile.children.cycles-pp.__pte_alloc
0.32 +0.1 0.39 perf-profile.children.cycles-pp.vma_interval_tree_remove
0.29 ± 2% +0.1 0.36 ± 3% perf-profile.children.cycles-pp.pipe_read
0.23 +0.1 0.30 ± 2% perf-profile.children.cycles-pp.do_dentry_open
0.10 ± 5% +0.1 0.16 ± 10% perf-profile.children.cycles-pp.run_ksoftirqd
0.22 ± 3% +0.1 0.29 ± 3% perf-profile.children.cycles-pp.d_alloc_parallel
0.39 +0.1 0.46 ± 3% perf-profile.children.cycles-pp.__fxstat64
0.34 +0.1 0.41 perf-profile.children.cycles-pp.flush_tlb_func_common
0.27 ± 3% +0.1 0.34 ± 3% perf-profile.children.cycles-pp.__d_lookup_rcu
0.34 ± 2% +0.1 0.41 ± 2% perf-profile.children.cycles-pp._IO_padn
0.16 ± 2% +0.1 0.23 ± 3% perf-profile.children.cycles-pp.release_task
0.24 +0.1 0.31 ± 3% perf-profile.children.cycles-pp.__lookup_slow
0.00 +0.1 0.07 ± 11% perf-profile.children.cycles-pp.do_prlimit
0.32 +0.1 0.40 ± 2% perf-profile.children.cycles-pp.selinux_inode_permission
0.39 ± 2% +0.1 0.46 ± 2% perf-profile.children.cycles-pp.find_vma
0.00 +0.1 0.08 ± 6% perf-profile.children.cycles-pp.__x64_sys_getrlimit
0.59 +0.1 0.67 ± 2% perf-profile.children.cycles-pp.schedule
0.28 +0.1 0.36 ± 2% perf-profile.children.cycles-pp.__list_del_entry_valid
0.35 ± 2% +0.1 0.42 ± 2% perf-profile.children.cycles-pp.security_inode_permission
0.32 ± 3% +0.1 0.40 ± 3% perf-profile.children.cycles-pp.__do_sys_newstat
0.27 ± 3% +0.1 0.35 ± 2% perf-profile.children.cycles-pp.lookup_slow
0.33 ± 2% +0.1 0.41 ± 2% perf-profile.children.cycles-pp._cond_resched
0.32 +0.1 0.40 ± 3% perf-profile.children.cycles-pp.remove_vma
0.30 ± 2% +0.1 0.39 ± 2% perf-profile.children.cycles-pp.perf_iterate_sb
0.13 ± 5% +0.1 0.21 ± 14% perf-profile.children.cycles-pp.find_vmap_area
0.33 ± 3% +0.1 0.41 ± 3% perf-profile.children.cycles-pp.vfs_statx
0.39 ± 2% +0.1 0.47 ± 2% perf-profile.children.cycles-pp._IO_fwrite
0.29 ± 2% +0.1 0.38 ± 3% perf-profile.children.cycles-pp.exit_to_usermode_loop
0.18 ± 8% +0.1 0.27 ± 12% perf-profile.children.cycles-pp.__vunmap
0.43 +0.1 0.52 ± 2% perf-profile.children.cycles-pp.flush_tlb_mm_range
0.37 ± 3% +0.1 0.46 ± 4% perf-profile.children.cycles-pp.___might_sleep
0.40 ± 3% +0.1 0.49 ± 2% perf-profile.children.cycles-pp.__slab_free
0.29 +0.1 0.38 ± 2% perf-profile.children.cycles-pp.create_elf_tables
0.18 ± 7% +0.1 0.27 ± 10% perf-profile.children.cycles-pp.free_work
0.37 ± 2% +0.1 0.46 perf-profile.children.cycles-pp.swapgs_restore_regs_and_return_to_usermode
0.18 ± 4% +0.1 0.28 ± 4% perf-profile.children.cycles-pp.wait_consider_task
0.39 ± 2% +0.1 0.48 ± 3% perf-profile.children.cycles-pp.__xstat64
0.39 +0.1 0.48 ± 2% perf-profile.children.cycles-pp.pte_alloc_one
0.29 ± 2% +0.1 0.39 ± 2% perf-profile.children.cycles-pp.__alloc_file
0.40 ± 2% +0.1 0.49 ± 2% perf-profile.children.cycles-pp.unlink_anon_vmas
0.32 ± 2% +0.1 0.41 ± 3% perf-profile.children.cycles-pp.alloc_empty_file
0.44 +0.1 0.54 perf-profile.children.cycles-pp.page_add_file_rmap
0.41 ± 2% +0.1 0.51 ± 2% perf-profile.children.cycles-pp.sched_exec
0.39 +0.1 0.48 perf-profile.children.cycles-pp.copy_page
0.31 ± 3% +0.1 0.41 ± 3% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
0.52 +0.1 0.62 ± 3% perf-profile.children.cycles-pp.native_irq_return_iret
0.20 ± 6% +0.1 0.30 ± 9% perf-profile.children.cycles-pp.process_one_work
0.36 ± 2% +0.1 0.47 ± 3% perf-profile.children.cycles-pp.__fput
0.51 +0.1 0.62 ± 2% perf-profile.children.cycles-pp.copy_page_range
0.40 ± 2% +0.1 0.50 ± 3% perf-profile.children.cycles-pp.__clear_user
0.36 ± 2% +0.1 0.47 perf-profile.children.cycles-pp.__x64_sys_munmap
0.35 +0.1 0.46 perf-profile.children.cycles-pp.strnlen_user
0.23 ± 5% +0.1 0.34 ± 9% perf-profile.children.cycles-pp.worker_thread
0.48 ± 2% +0.1 0.59 ± 2% perf-profile.children.cycles-pp.syscall_return_via_sysret
0.40 ± 2% +0.1 0.51 ± 3% perf-profile.children.cycles-pp.dput
0.44 +0.1 0.55 ± 4% perf-profile.children.cycles-pp.lookup_fast
1.14 +0.1 1.25 ± 4% perf-profile.children.cycles-pp.__softirqentry_text_start
0.42 ± 2% +0.1 0.54 ± 2% perf-profile.children.cycles-pp.free_pages_and_swap_cache
1.08 ± 2% +0.1 1.20 ± 2% perf-profile.children.cycles-pp.__sched_text_start
0.51 +0.1 0.63 ± 2% perf-profile.children.cycles-pp.ksys_read
0.40 +0.1 0.52 ± 4% perf-profile.children.cycles-pp.smpboot_thread_fn
0.46 +0.1 0.58 ± 3% perf-profile.children.cycles-pp.write
0.49 +0.1 0.62 ± 3% perf-profile.children.cycles-pp.read
0.42 ± 2% +0.1 0.55 ± 3% perf-profile.children.cycles-pp.__vfs_write
0.44 +0.1 0.57 ± 2% perf-profile.children.cycles-pp.task_work_run
0.99 ± 2% +0.1 1.12 ± 4% perf-profile.children.cycles-pp.rcu_process_callbacks
0.55 ± 2% +0.1 0.67 ± 2% perf-profile.children.cycles-pp.kmem_cache_free
0.54 +0.1 0.66 ± 3% perf-profile.children.cycles-pp.__vfs_read
0.62 +0.1 0.75 ± 3% perf-profile.children.cycles-pp.select_task_rq_fair
0.46 +0.1 0.59 ± 4% perf-profile.children.cycles-pp.vfs_write
0.45 ± 2% +0.1 0.58 ± 2% perf-profile.children.cycles-pp.do_faccessat
0.47 +0.1 0.60 ± 3% perf-profile.children.cycles-pp.ksys_write
0.15 ± 6% +0.1 0.28 ± 7% perf-profile.children.cycles-pp.queued_write_lock_slowpath
0.14 ± 8% +0.1 0.27 ± 9% perf-profile.children.cycles-pp.queued_read_lock_slowpath
0.51 ± 3% +0.1 0.65 perf-profile.children.cycles-pp.kmem_cache_alloc
0.58 +0.1 0.72 ± 2% perf-profile.children.cycles-pp.__entry_SYSCALL_64_trampoline
0.62 +0.1 0.77 perf-profile.children.cycles-pp.vfs_read
0.63 +0.2 0.78 ± 2% perf-profile.children.cycles-pp.perf_event_mmap
0.62 +0.2 0.77 ± 2% perf-profile.children.cycles-pp.alloc_pages_vma
0.35 ± 4% +0.2 0.51 ± 4% perf-profile.children.cycles-pp.lru_add_drain
0.35 ± 3% +0.2 0.52 ± 5% perf-profile.children.cycles-pp.lru_add_drain_cpu
0.60 ± 2% +0.2 0.78 ± 2% perf-profile.children.cycles-pp.path_lookupat
0.62 ± 2% +0.2 0.80 ± 2% perf-profile.children.cycles-pp.filename_lookup
0.43 ± 2% +0.2 0.61 ± 3% perf-profile.children.cycles-pp.pagevec_lru_move_fn
0.64 +0.2 0.83 ± 3% perf-profile.children.cycles-pp.walk_component
0.75 ± 2% +0.2 0.95 ± 2% perf-profile.children.cycles-pp._IO_default_xsputn
0.86 +0.2 1.07 ± 3% perf-profile.children.cycles-pp.clear_page_erms
0.84 +0.2 1.04 ± 3% perf-profile.children.cycles-pp.copy_strings
0.41 ± 4% +0.2 0.62 ± 5% perf-profile.children.cycles-pp.do_wait
0.43 ± 5% +0.2 0.64 ± 5% perf-profile.children.cycles-pp.__do_sys_wait4
0.42 ± 4% +0.2 0.64 ± 5% perf-profile.children.cycles-pp.kernel_wait4
0.95 ± 2% +0.2 1.16 perf-profile.children.cycles-pp._dl_addr
0.95 +0.2 1.17 ± 2% perf-profile.children.cycles-pp.wp_page_copy
0.80 +0.2 1.02 ± 2% perf-profile.children.cycles-pp.link_path_walk
0.93 +0.2 1.16 perf-profile.children.cycles-pp.vma_interval_tree_insert
0.95 +0.2 1.18 perf-profile.children.cycles-pp.alloc_set_pte
0.51 ± 3% +0.2 0.74 ± 3% perf-profile.children.cycles-pp.wait4
0.63 ± 2% +0.2 0.87 ± 4% perf-profile.children.cycles-pp.kthread
1.04 +0.2 1.27 ± 2% perf-profile.children.cycles-pp.do_wp_page
1.29 +0.2 1.54 ± 3% perf-profile.children.cycles-pp.wake_up_q
2.06 +0.3 2.33 ± 2% perf-profile.children.cycles-pp.up_write
9.02 +0.3 9.29 perf-profile.children.cycles-pp.handle_mm_fault
1.53 +0.3 1.79 ± 2% perf-profile.children.cycles-pp.rwsem_wake
0.81 +0.3 1.08 ± 3% perf-profile.children.cycles-pp.ret_from_fork
1.53 +0.3 1.81 ± 2% perf-profile.children.cycles-pp.call_rwsem_wake
1.10 +0.3 1.38 ± 2% perf-profile.children.cycles-pp.__strcoll_l
0.81 ± 2% +0.3 1.08 ± 4% perf-profile.children.cycles-pp._raw_spin_lock
0.93 +0.3 1.23 perf-profile.children.cycles-pp.rwsem_spin_on_owner
1.24 +0.3 1.54 ± 3% perf-profile.children.cycles-pp.get_page_from_freelist
1.35 +0.3 1.69 ± 3% perf-profile.children.cycles-pp.__alloc_pages_nodemask
1.68 +0.4 2.04 ± 3% perf-profile.children.cycles-pp.try_to_wake_up
1.22 +0.4 1.58 perf-profile.children.cycles-pp.mmap64
14.16 +0.4 14.53 perf-profile.children.cycles-pp.load_elf_binary
1.05 ± 2% +0.4 1.42 ± 4% perf-profile.children.cycles-pp.page_remove_rmap
14.19 +0.4 14.56 perf-profile.children.cycles-pp.search_binary_handler
2.09 ± 2% +0.4 2.52 perf-profile.children.cycles-pp.vfprintf
9.80 +0.5 10.26 perf-profile.children.cycles-pp.__do_page_fault
9.82 +0.5 10.29 perf-profile.children.cycles-pp.do_page_fault
1.27 ± 2% +0.5 1.79 ± 5% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
1.96 +0.5 2.49 ± 2% perf-profile.children.cycles-pp.path_openat
2.01 +0.5 2.54 perf-profile.children.cycles-pp.do_sys_open
1.98 +0.5 2.52 ± 2% perf-profile.children.cycles-pp.do_filp_open
0.86 ± 2% +0.5 1.40 ± 2% perf-profile.children.cycles-pp.vm_munmap
10.22 +0.6 10.78 perf-profile.children.cycles-pp.page_fault
2.73 +0.6 3.29 perf-profile.children.cycles-pp.__vsnprintf_chk
2.76 +0.6 3.39 ± 2% perf-profile.children.cycles-pp.setlocale
1.77 ± 2% +0.7 2.50 ± 4% perf-profile.children.cycles-pp.release_pages
1.47 +0.8 2.27 ± 6% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
2.12 ± 2% +0.8 2.93 ± 4% perf-profile.children.cycles-pp.tlb_flush_mmu_free
15.71 +0.8 16.53 perf-profile.children.cycles-pp.execve
16.01 +0.8 16.84 perf-profile.children.cycles-pp.__do_execve_file
16.02 +0.8 16.86 perf-profile.children.cycles-pp.__x64_sys_execve
2.24 ± 2% +0.8 3.09 ± 4% perf-profile.children.cycles-pp.arch_tlb_finish_mmu
2.25 ± 2% +0.8 3.10 ± 4% perf-profile.children.cycles-pp.tlb_finish_mmu
2.77 +0.9 3.62 ± 3% perf-profile.children.cycles-pp.unmap_page_range
2.89 +0.9 3.77 ± 3% perf-profile.children.cycles-pp.unmap_vmas
1.18 ± 3% +0.9 2.09 ± 2% perf-profile.children.cycles-pp.elf_map
41.69 -8.5 33.23 ± 3% perf-profile.self.cycles-pp.osq_lock
9.44 -1.3 8.12 ± 4% perf-profile.self.cycles-pp.intel_idle
4.14 ± 2% -0.6 3.53 perf-profile.self.cycles-pp.filemap_map_pages
0.54 ± 2% -0.2 0.37 perf-profile.self.cycles-pp.rwsem_down_write_failed
0.28 ± 3% -0.1 0.15 ± 2% perf-profile.self.cycles-pp.radix_tree_next_chunk
0.13 ± 6% -0.0 0.11 ± 4% perf-profile.self.cycles-pp.find_busiest_group
0.05 +0.0 0.06 perf-profile.self.cycles-pp.anon_vma_fork
0.05 +0.0 0.06 perf-profile.self.cycles-pp.__call_rcu
0.05 +0.0 0.06 perf-profile.self.cycles-pp.__unlock_page_memcg
0.06 +0.0 0.07 perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.06 +0.0 0.07 perf-profile.self.cycles-pp.do_wp_page
0.06 +0.0 0.07 perf-profile.self.cycles-pp.kfree
0.06 +0.0 0.07 perf-profile.self.cycles-pp.__switch_to
0.07 ± 5% +0.0 0.08 ± 5% perf-profile.self.cycles-pp.find_next_bit
0.07 +0.0 0.08 ± 5% perf-profile.self.cycles-pp.__tsearch
0.05 ± 8% +0.0 0.07 ± 7% perf-profile.self.cycles-pp.path_openat
0.05 +0.0 0.06 ± 6% perf-profile.self.cycles-pp.d_alloc_parallel
0.05 +0.0 0.06 ± 6% perf-profile.self.cycles-pp.unlock_page_memcg
0.05 +0.0 0.06 ± 6% perf-profile.self.cycles-pp.up_read
0.06 +0.0 0.07 ± 5% perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
0.06 ± 6% +0.0 0.08 ± 6% perf-profile.self.cycles-pp.__perf_sw_event
0.06 ± 7% +0.0 0.07 perf-profile.self.cycles-pp.strncpy_from_user
0.09 ± 4% +0.0 0.10 ± 4% perf-profile.self.cycles-pp.update_load_avg
0.07 +0.0 0.09 ± 5% perf-profile.self.cycles-pp.find_get_entry
0.05 +0.0 0.07 ± 7% perf-profile.self.cycles-pp.__d_alloc
0.05 ± 9% +0.0 0.07 perf-profile.self.cycles-pp.create_elf_tables
0.17 ± 3% +0.0 0.18 ± 2% perf-profile.self.cycles-pp.osq_unlock
0.07 ± 11% +0.0 0.09 perf-profile.self.cycles-pp.__alloc_pages_nodemask
0.07 ± 5% +0.0 0.09 perf-profile.self.cycles-pp.getenv
0.07 ± 11% +0.0 0.09 perf-profile.self.cycles-pp._vm_normal_page
0.11 ± 7% +0.0 0.12 ± 4% perf-profile.self.cycles-pp.__sched_text_start
0.09 ± 4% +0.0 0.11 ± 6% perf-profile.self.cycles-pp._cond_resched
0.09 ± 4% +0.0 0.11 perf-profile.self.cycles-pp.__list_add_valid
0.08 ± 5% +0.0 0.10 perf-profile.self.cycles-pp.perf_iterate_sb
0.07 ± 7% +0.0 0.08 ± 5% perf-profile.self.cycles-pp.__vma_adjust
0.08 ± 8% +0.0 0.10 ± 4% perf-profile.self.cycles-pp.sync_regs
0.07 ± 7% +0.0 0.08 ± 5% perf-profile.self.cycles-pp.load_new_mm_cr3
0.06 ± 6% +0.0 0.08 perf-profile.self.cycles-pp._IO_setb
0.05 +0.0 0.07 ± 6% perf-profile.self.cycles-pp.__legitimize_mnt
0.08 ± 5% +0.0 0.10 ± 4% perf-profile.self.cycles-pp.__libc_fork
0.08 ± 5% +0.0 0.10 ± 4% perf-profile.self.cycles-pp.change_protection_range
0.06 +0.0 0.08 ± 5% perf-profile.self.cycles-pp.memchr
0.08 ± 6% +0.0 0.10 ± 5% perf-profile.self.cycles-pp.anon_vma_interval_tree_remove
0.07 ± 12% +0.0 0.09 ± 9% perf-profile.self.cycles-pp.try_charge
0.08 ± 5% +0.0 0.10 ± 4% perf-profile.self.cycles-pp.inode_permission
0.07 +0.0 0.09 perf-profile.self.cycles-pp.avc_has_perm
0.08 +0.0 0.10 ± 7% perf-profile.self.cycles-pp.free_pgd_range
0.07 ± 7% +0.0 0.09 ± 5% perf-profile.self.cycles-pp.prepend_name
0.05 ± 8% +0.0 0.07 ± 5% perf-profile.self.cycles-pp.entry_SYSCALL_64_stage2
0.08 ± 10% +0.0 0.10 ± 4% perf-profile.self.cycles-pp.available_idle_cpu
0.10 ± 8% +0.0 0.12 ± 4% perf-profile.self.cycles-pp.unlink_anon_vmas
0.10 ± 7% +0.0 0.12 ± 6% perf-profile.self.cycles-pp.copy_user_generic_unrolled
0.08 ± 8% +0.0 0.10 ± 4% perf-profile.self.cycles-pp.memset_erms
0.07 ± 5% +0.0 0.10 ± 5% perf-profile.self.cycles-pp.rcu_all_qs
0.08 ± 6% +0.0 0.10 ± 4% perf-profile.self.cycles-pp.lockref_get_not_dead
0.06 ± 9% +0.0 0.08 ± 5% perf-profile.self.cycles-pp.memcpy
0.08 ± 5% +0.0 0.10 perf-profile.self.cycles-pp.fput
0.12 ± 6% +0.0 0.15 ± 4% perf-profile.self.cycles-pp.do_syscall_64
0.07 ± 5% +0.0 0.10 ± 4% perf-profile.self.cycles-pp.__snprintf_chk
0.04 ± 57% +0.0 0.06 ± 6% perf-profile.self.cycles-pp.vm_area_dup
0.11 ± 4% +0.0 0.13 ± 3% perf-profile.self.cycles-pp.__get_user_8
0.08 ± 5% +0.0 0.11 ± 7% perf-profile.self.cycles-pp.file_free_rcu
0.08 +0.0 0.11 ± 4% perf-profile.self.cycles-pp.__vma_link_rb
0.13 ± 5% +0.0 0.16 ± 5% perf-profile.self.cycles-pp.copy_page_range
0.11 ± 3% +0.0 0.14 ± 3% perf-profile.self.cycles-pp.perf_event_mmap
0.18 ± 2% +0.0 0.21 ± 2% perf-profile.self.cycles-pp.find_vma
0.14 +0.0 0.17 ± 2% perf-profile.self.cycles-pp.copy_process
0.10 ± 4% +0.0 0.13 perf-profile.self.cycles-pp.link_path_walk
0.08 +0.0 0.11 ± 4% perf-profile.self.cycles-pp.__fput
0.08 ± 8% +0.0 0.11 ± 4% perf-profile.self.cycles-pp.free_pcppages_bulk
0.04 ± 57% +0.0 0.07 ± 7% perf-profile.self.cycles-pp.do_mmap
0.04 ± 57% +0.0 0.07 ± 7% perf-profile.self.cycles-pp.kmem_cache_alloc_trace
0.10 ± 4% +0.0 0.13 ± 3% perf-profile.self.cycles-pp.vma_interval_tree_augment_rotate
0.15 ± 2% +0.0 0.18 ± 2% perf-profile.self.cycles-pp.vma_compute_subtree_gap
0.14 ± 3% +0.0 0.17 ± 4% perf-profile.self.cycles-pp.avc_has_perm_noaudit
0.12 +0.0 0.15 ± 2% perf-profile.self.cycles-pp.lock_page_memcg
0.11 ± 4% +0.0 0.15 ± 2% perf-profile.self.cycles-pp._init
0.10 ± 5% +0.0 0.13 ± 3% perf-profile.self.cycles-pp.unmapped_area_topdown
0.11 +0.0 0.14 ± 3% perf-profile.self.cycles-pp.lockref_put_or_lock
0.03 ±100% +0.0 0.06 perf-profile.self.cycles-pp.unmap_vmas
0.18 +0.0 0.21 ± 4% perf-profile.self.cycles-pp.handle_mm_fault
0.09 +0.0 0.12 ± 4% perf-profile.self.cycles-pp.wake_q_add
0.15 ± 5% +0.0 0.18 ± 2% perf-profile.self.cycles-pp.kmem_cache_free
0.08 ± 5% +0.0 0.12 ± 5% perf-profile.self.cycles-pp.queued_write_lock_slowpath
0.18 ± 3% +0.0 0.22 ± 3% perf-profile.self.cycles-pp._IO_file_xsputn
0.17 ± 2% +0.0 0.21 ± 3% perf-profile.self.cycles-pp.strchrnul
0.16 ± 5% +0.0 0.20 ± 4% perf-profile.self.cycles-pp.memcpy_erms
0.12 ± 3% +0.0 0.16 ± 4% perf-profile.self.cycles-pp.__rb_erase_color
0.12 ± 7% +0.0 0.15 ± 3% perf-profile.self.cycles-pp.__pagevec_lru_add_fn
0.17 ± 2% +0.0 0.21 ± 2% perf-profile.self.cycles-pp.vmacache_find
0.08 +0.0 0.12 ± 11% perf-profile.self.cycles-pp.rcu_cblist_dequeue
0.16 +0.0 0.20 perf-profile.self.cycles-pp.mmap_region
0.14 +0.0 0.18 ± 2% perf-profile.self.cycles-pp.mark_page_accessed
0.07 ± 6% +0.0 0.11 ± 11% perf-profile.self.cycles-pp.queued_read_lock_slowpath
0.21 ± 2% +0.0 0.26 ± 4% perf-profile.self.cycles-pp.get_page_from_freelist
0.17 ± 2% +0.0 0.22 perf-profile.self.cycles-pp.malloc
0.16 ± 2% +0.0 0.21 ± 2% perf-profile.self.cycles-pp.strlen
0.01 ±173% +0.0 0.06 ± 7% perf-profile.self.cycles-pp.down_read_trylock
0.01 ±173% +0.0 0.06 ± 7% perf-profile.self.cycles-pp.__errno_location
0.17 ± 2% +0.0 0.22 ± 3% perf-profile.self.cycles-pp.selinux_inode_permission
0.26 +0.0 0.30 ± 5% perf-profile.self.cycles-pp.__rb_insert_augmented
0.18 ± 2% +0.0 0.23 ± 5% perf-profile.self.cycles-pp.__might_sleep
0.00 +0.1 0.05 perf-profile.self.cycles-pp.load_elf_binary
0.00 +0.1 0.05 perf-profile.self.cycles-pp.page_fault
0.00 +0.1 0.05 perf-profile.self.cycles-pp.pagevec_lru_move_fn
0.00 +0.1 0.05 perf-profile.self.cycles-pp.flush_tlb_mm_range
0.00 +0.1 0.05 perf-profile.self.cycles-pp.__inode_security_revalidate
0.00 +0.1 0.05 perf-profile.self.cycles-pp.dup_fd
0.00 +0.1 0.05 perf-profile.self.cycles-pp.acct_collect
0.00 +0.1 0.05 perf-profile.self.cycles-pp.vma_merge
0.00 +0.1 0.05 perf-profile.self.cycles-pp.rcu_segcblist_enqueue
0.24 ± 2% +0.1 0.30 ± 2% perf-profile.self.cycles-pp.___perf_sw_event
0.00 +0.1 0.05 ± 8% perf-profile.self.cycles-pp.mem_cgroup_throttle_swaprate
0.00 +0.1 0.05 ± 8% perf-profile.self.cycles-pp.anon_vma_interval_tree_insert
0.44 ± 2% +0.1 0.49 ± 4% perf-profile.self.cycles-pp.down_write
0.00 +0.1 0.05 ± 9% perf-profile.self.cycles-pp.__might_fault
0.00 +0.1 0.05 ± 9% perf-profile.self.cycles-pp.__mod_node_page_state
0.22 +0.1 0.28 ± 3% perf-profile.self.cycles-pp.__vsnprintf_chk
0.15 ± 2% +0.1 0.21 ± 5% perf-profile.self.cycles-pp.__alloc_file
0.30 +0.1 0.36 perf-profile.self.cycles-pp.native_flush_tlb_one_user
0.18 ± 4% +0.1 0.24 ± 3% perf-profile.self.cycles-pp.__do_page_fault
0.31 ± 2% +0.1 0.37 perf-profile.self.cycles-pp._IO_padn
0.33 ± 3% +0.1 0.40 perf-profile.self.cycles-pp.unlock_page
0.21 ± 3% +0.1 0.28 perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.20 ± 2% +0.1 0.27 ± 4% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
0.36 +0.1 0.43 ± 2% perf-profile.self.cycles-pp.page_add_file_rmap
0.31 +0.1 0.38 perf-profile.self.cycles-pp.vma_interval_tree_remove
0.27 ± 3% +0.1 0.34 ± 3% perf-profile.self.cycles-pp.__d_lookup_rcu
0.28 +0.1 0.35 ± 3% perf-profile.self.cycles-pp.__list_del_entry_valid
0.41 +0.1 0.49 ± 3% perf-profile.self.cycles-pp.select_task_rq_fair
0.34 +0.1 0.42 perf-profile.self.cycles-pp.swapgs_restore_regs_and_return_to_usermode
0.35 ± 3% +0.1 0.43 ± 3% perf-profile.self.cycles-pp.___might_sleep
0.29 ± 3% +0.1 0.38 perf-profile.self.cycles-pp.kmem_cache_alloc
0.36 ± 3% +0.1 0.45 perf-profile.self.cycles-pp._IO_fwrite
0.44 +0.1 0.53 perf-profile.self.cycles-pp.__handle_mm_fault
0.39 ± 3% +0.1 0.48 ± 2% perf-profile.self.cycles-pp.__slab_free
0.32 ± 2% +0.1 0.41 perf-profile.self.cycles-pp.alloc_set_pte
0.38 +0.1 0.47 perf-profile.self.cycles-pp.copy_page
0.52 +0.1 0.62 ± 3% perf-profile.self.cycles-pp.native_irq_return_iret
0.34 ± 2% +0.1 0.45 ± 2% perf-profile.self.cycles-pp.strnlen_user
0.41 ± 2% +0.1 0.52 ± 2% perf-profile.self.cycles-pp.free_pages_and_swap_cache
0.47 ± 2% +0.1 0.59 ± 2% perf-profile.self.cycles-pp.syscall_return_via_sysret
0.51 +0.1 0.64 ± 4% perf-profile.self.cycles-pp._raw_spin_lock
0.53 +0.1 0.66 perf-profile.self.cycles-pp.__entry_SYSCALL_64_trampoline
0.65 ± 2% +0.2 0.82 perf-profile.self.cycles-pp._IO_default_xsputn
0.84 +0.2 1.05 ± 3% perf-profile.self.cycles-pp.clear_page_erms
0.91 ± 2% +0.2 1.13 perf-profile.self.cycles-pp._dl_addr
0.91 +0.2 1.14 perf-profile.self.cycles-pp.vma_interval_tree_insert
1.08 +0.3 1.35 ± 2% perf-profile.self.cycles-pp.__strcoll_l
0.91 +0.3 1.20 perf-profile.self.cycles-pp.rwsem_spin_on_owner
0.92 ± 2% +0.3 1.26 ± 4% perf-profile.self.cycles-pp.page_remove_rmap
1.33 +0.4 1.72 ± 2% perf-profile.self.cycles-pp.unmap_page_range
1.75 ± 2% +0.4 2.18 perf-profile.self.cycles-pp.vfprintf
1.23 ± 2% +0.5 1.73 ± 5% perf-profile.self.cycles-pp.release_pages
1.47 +0.8 2.26 ± 6% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[sched/fair] 2c83362734: pft.faults_per_sec_per_cpu -41.4% regression
by kernel test robot
Greetings,
FYI, we noticed a -41.4% regression of pft.faults_per_sec_per_cpu due to commit:
commit: 2c83362734dad8e48ccc0710b5cd2436a0323893 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: pft
on test machine: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 64G memory
with the following parameters:
runtime: 300s
nr_task: 50%
cpufreq_governor: performance
ucode: 0xb00002e
test-description: Pft is the page fault test microbenchmark.
test-url: https://github.com/gormanm/pft
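As a rough illustration of what pft measures (a sketch only, not the
actual pft source; the mapping size and single-threaded timing are
assumptions), a worker touches one byte per page of a fresh anonymous
mapping, so every touch takes a minor page fault:

/* pft-style page fault microbenchmark sketch (illustrative only) */
#include <stdio.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	size_t sz = 1UL << 30;			/* 1 GiB, assumed size */
	long pagesz = sysconf(_SC_PAGESIZE);
	struct timespec t0, t1;

	char *buf = mmap(NULL, sz, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (size_t off = 0; off < sz; off += pagesz)
		buf[off] = 1;			/* first touch => minor fault */
	clock_gettime(CLOCK_MONOTONIC, &t1);

	double secs = (t1.tv_sec - t0.tv_sec) +
		      (t1.tv_nsec - t0.tv_nsec) / 1e9;
	printf("%.0f faults/sec\n", (double)(sz / pagesz) / secs);
	munmap(buf, sz);
	return 0;
}

pft runs many such workers in parallel; faults_per_sec_per_cpu roughly
corresponds to this rate aggregated over workers and divided by the
number of CPUs used.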
In addition, the commit also has a significant impact on the following tests:
+------------------+--------------------------------------------------------------------------+
| testcase: change | stream: |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters | array_size=10000000 |
| | cpufreq_governor=performance |
| | nr_threads=25% |
| | omp=true |
| | ucode=0xb00002e |
+------------------+--------------------------------------------------------------------------+
| testcase: change | reaim: reaim.jobs_per_min 1.3% improvement |
| test machine | 72 threads Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz with 256G memory |
| test parameters | cpufreq_governor=performance |
| | nr_job=3000 |
| | nr_task=100% |
| | runtime=300s |
| | test=custom |
| | ucode=0x3d |
+------------------+--------------------------------------------------------------------------+
| testcase: change | stream: stream.add_bandwidth_MBps -32.0% regression |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters | array_size=10000000 |
| | cpufreq_governor=performance |
| | nr_threads=50% |
| | omp=true |
| | ucode=0xb00002e |
+------------------+--------------------------------------------------------------------------+
| testcase: change | plzip: |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters | cpufreq_governor=performance |
| | nr_threads=100% |
| | ucode=0xb00002e |
+------------------+--------------------------------------------------------------------------+
| testcase: change | reaim: reaim.jobs_per_min -11.9% regression |
| test machine | 192 threads Intel(R) Xeon(R) CPU E7-8890 v4 @ 2.20GHz with 512G memory |
| test parameters | cpufreq_governor=performance |
| | nr_task=100% |
| | runtime=300s |
| | test=all_utime |
| | ucode=0xb00002e |
+------------------+--------------------------------------------------------------------------+
| testcase: change | hackbench: hackbench.throughput -7.3% regression |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 64G memory |
| test parameters | cpufreq_governor=performance |
| | ipc=pipe |
| | mode=process |
| | nr_threads=1600% |
| | ucode=0xb00002e |
+------------------+--------------------------------------------------------------------------+
| testcase: change | reaim: reaim.std_dev_percent 11.4% undefined |
| test machine | 104 threads Intel(R) Xeon(R) Platinum 8170 CPU @ 2.10GHz with 64G memory |
| test parameters | cpufreq_governor=performance |
| | nr_task=100% |
| | runtime=300s |
| | test=custom |
| | ucode=0x200004d |
+------------------+--------------------------------------------------------------------------+
| testcase: change | reaim: boot-time.boot 95.3% regression |
| test machine | 104 threads Intel(R) Xeon(R) Platinum 8170 CPU @ 2.10GHz with 64G memory |
| test parameters | cpufreq_governor=performance |
| | nr_task=100% |
| | runtime=300s |
| | test=alltests |
| | ucode=0x200004d |
+------------------+--------------------------------------------------------------------------+
| testcase: change | pft: pft.faults_per_sec_per_cpu -42.7% regression |
| test machine | 104 threads Intel(R) Xeon(R) Platinum 8170 CPU @ 2.10GHz with 64G memory |
| test parameters | cpufreq_governor=performance |
| | nr_task=50% |
| | runtime=300s |
| | ucode=0x200004d |
+------------------+--------------------------------------------------------------------------+
| testcase: change | stream: stream.add_bandwidth_MBps -28.8% regression |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters | array_size=50000000 |
| | cpufreq_governor=performance |
| | nr_threads=50% |
| | omp=true |
+------------------+--------------------------------------------------------------------------+
| testcase: change | stream: stream.add_bandwidth_MBps -30.6% regression |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters | array_size=10000000 |
| | cpufreq_governor=performance |
| | nr_threads=50% |
| | omp=true |
+------------------+--------------------------------------------------------------------------+
| testcase: change | pft: pft.faults_per_sec_per_cpu -42.5% regression |
| test machine | 104 threads Intel(R) Xeon(R) Platinum 8170 CPU @ 2.10GHz with 64G memory |
| test parameters | cpufreq_governor=performance |
| | nr_task=50% |
| | runtime=300s |
+------------------+--------------------------------------------------------------------------+
| testcase: change | reaim: reaim.child_systime -1.4% undefined |
| test machine | 144 threads Intel(R) Xeon(R) CPU E7-8890 v3 @ 2.50GHz with 512G memory |
| test parameters | cpufreq_governor=performance |
| | iterations=30 |
| | nr_task=1600% |
| | test=compute |
+------------------+--------------------------------------------------------------------------+
| testcase: change | stress-ng: stress-ng.fifo.ops_per_sec 76.2% improvement |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters | class=pipe |
| | cpufreq_governor=performance |
| | nr_threads=100% |
| | testtime=1s |
+------------------+--------------------------------------------------------------------------+
| testcase: change | stress-ng: stress-ng.tsearch.ops_per_sec -17.1% regression |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters | class=cpu |
| | cpufreq_governor=performance |
| | nr_threads=100% |
| | testtime=1s |
+------------------+--------------------------------------------------------------------------+
Details are as follows:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
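For context on the change being bisected: the commit subject says the
fair scheduler now considers SD_NUMA when selecting the most idle group
to schedule on. A standalone, heavily simplified sketch of that kind of
heuristic (assumed names and numbers; the actual patch is in
kernel/sched/fair.c and differs in detail) is to stop chasing the
globally most idle group across NUMA nodes unless the remote group
beats the local one by the domain's imbalance margin:

/* Conceptual sketch only -- not the kernel implementation. */
#include <stdbool.h>
#include <stdio.h>

struct group {
	unsigned long load;	/* aggregate load of the CPU group */
	bool numa;		/* choosing this group crosses a NUMA node */
};

static const struct group *pick_group(const struct group *local,
				      const struct group *remote,
				      unsigned int imbalance_pct)
{
	/* Within a node: simply take the least loaded group. */
	if (!remote->numa)
		return remote->load < local->load ? remote : local;

	/*
	 * Across nodes: require the remote group to beat the local one
	 * by the imbalance margin before giving up memory locality.
	 */
	if (remote->load * imbalance_pct < local->load * 100)
		return remote;
	return local;
}

int main(void)
{
	struct group local = { .load = 110, .numa = false };
	struct group remote = { .load = 100, .numa = true };

	/* With a 125% margin, a ~10% load advantage is not enough. */
	printf("picked %s group\n",
	       pick_group(&local, &remote, 125) == &local ? "local" : "remote");
	return 0;
}

Such a bias keeps tasks and their memory on the local node, which is
consistent with the numa-meminfo deltas below (allocations concentrating
on node0 while node1 drains) and would explain why NUMA-bandwidth-bound
workloads like pft and stream can regress when the spreading they
previously got is withheld.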
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.2/50%/debian-x86_64-2018-04-03.cgz/300s/lkp-bdw-ep3/pft/0xb00002e
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
2:4 -50% :4 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=_cond_resched/0x
:4 25% 1:4 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=schedule_tail/0x
%stddev %change %stddev
\ | \
250875 -41.4% 146900 pft.faults_per_sec_per_cpu
7127548 -32.9% 4779214 pft.time.minor_page_faults
3533 +14.5% 4047 pft.time.percent_of_cpu_this_job_got
10444 +13.3% 11828 pft.time.system_time
189.79 +84.8% 350.70 ± 5% pft.time.user_time
105380 ± 2% +31.5% 138536 ± 5% pft.time.voluntary_context_switches
6180331 ± 9% -60.3% 2451607 ± 34% numa-numastat.node1.local_node
6187616 ± 9% -60.3% 2455998 ± 34% numa-numastat.node1.numa_hit
58.50 -9.8% 52.75 vmstat.cpu.id
35.75 +14.0% 40.75 vmstat.procs.r
59.07 -6.1 52.99 mpstat.cpu.idle%
39.94 +5.5 45.45 mpstat.cpu.sys%
0.93 +0.6 1.54 ± 5% mpstat.cpu.usr%
147799 ± 6% +13.9% 168315 ± 5% cpuidle.C3.usage
1.532e+10 ± 3% -11.9% 1.35e+10 cpuidle.C6.time
15901515 ± 3% -12.4% 13931328 cpuidle.C6.usage
1.062e+08 ± 9% +123.1% 2.369e+08 ± 14% cpuidle.POLL.time
2491 ± 8% -28.0% 1793 ± 10% slabinfo.eventpoll_epi.active_objs
2491 ± 8% -28.0% 1793 ± 10% slabinfo.eventpoll_epi.num_objs
4332 ± 7% -27.9% 3125 ± 10% slabinfo.eventpoll_pwq.active_objs
4332 ± 7% -27.9% 3125 ± 10% slabinfo.eventpoll_pwq.num_objs
4601 ± 2% -16.0% 3866 ± 2% slabinfo.mm_struct.active_objs
4662 ± 2% -15.7% 3930 ± 2% slabinfo.mm_struct.num_objs
5396205 ± 2% +18.2% 6378310 meminfo.Active
5333263 ± 2% +18.4% 6316138 meminfo.Active(anon)
2054416 ± 5% +22.7% 2521118 ± 16% meminfo.AnonHugePages
5180518 ± 2% +18.7% 6147856 meminfo.AnonPages
4.652e+08 ± 2% +16.4% 5.416e+08 meminfo.Committed_AS
6863577 +13.8% 7807920 meminfo.Memused
9381 ± 2% +11.7% 10480 ± 7% meminfo.PageTables
1203 +15.0% 1383 turbostat.Avg_MHz
43.02 +6.3 49.30 turbostat.Busy%
147383 ± 6% +14.1% 168181 ± 5% turbostat.C3
15900108 ± 3% -12.4% 13931193 turbostat.C6
56.94 -6.2 50.74 turbostat.C6%
33.20 -47.1% 17.57 ± 4% turbostat.CPU%c1
23.69 ± 4% +39.5% 33.05 ± 2% turbostat.CPU%c6
186.47 -9.2% 169.38 turbostat.PkgWatt
18.43 -19.6% 14.82 turbostat.RAMWatt
715561 ± 15% +70.3% 1218740 ± 16% numa-vmstat.node0.nr_active_anon
701119 ± 15% +69.9% 1190948 ± 15% numa-vmstat.node0.nr_anon_pages
544.25 ± 20% +80.0% 979.50 ± 20% numa-vmstat.node0.nr_anon_transparent_hugepages
1240 ± 15% +37.7% 1709 ± 16% numa-vmstat.node0.nr_page_table_pages
715526 ± 15% +70.3% 1218728 ± 16% numa-vmstat.node0.nr_zone_active_anon
636795 ± 10% -44.9% 350831 ± 53% numa-vmstat.node1.nr_active_anon
614709 ± 11% -45.1% 337557 ± 54% numa-vmstat.node1.nr_anon_pages
636843 ± 11% -44.9% 350830 ± 53% numa-vmstat.node1.nr_zone_active_anon
3520461 ± 10% -41.3% 2066299 ± 34% numa-vmstat.node1.numa_hit
3380522 ± 10% -42.9% 1929309 ± 37% numa-vmstat.node1.numa_local
2860563 ± 14% +73.7% 4970140 ± 14% numa-meminfo.node0.Active
2829095 ± 14% +74.6% 4938791 ± 14% numa-meminfo.node0.Active(anon)
1029787 ± 15% +90.1% 1957511 ± 16% numa-meminfo.node0.AnonHugePages
2782901 ± 14% +73.2% 4820952 ± 13% numa-meminfo.node0.AnonPages
3598217 ± 11% +59.0% 5721834 ± 12% numa-meminfo.node0.MemUsed
4779 ± 14% +39.3% 6658 ± 13% numa-meminfo.node0.PageTables
2711469 ± 11% -48.3% 1401167 ± 53% numa-meminfo.node1.Active
2679996 ± 11% -48.9% 1370343 ± 55% numa-meminfo.node1.Active(anon)
1104830 ± 11% -52.3% 527490 ± 43% numa-meminfo.node1.AnonHugePages
2592998 ± 13% -50.4% 1285649 ± 54% numa-meminfo.node1.AnonPages
3441628 ± 9% -39.6% 2078804 ± 37% numa-meminfo.node1.MemUsed
1348502 ± 3% +15.9% 1562690 ± 2% proc-vmstat.nr_active_anon
1313538 ± 3% +16.0% 1523248 ± 2% proc-vmstat.nr_anon_pages
993.00 ± 7% +20.4% 1195 ± 4% proc-vmstat.nr_anon_transparent_hugepages
1484488 -1.4% 1464054 proc-vmstat.nr_dirty_background_threshold
2972608 -1.4% 2931689 proc-vmstat.nr_dirty_threshold
14732846 -1.4% 14528187 proc-vmstat.nr_free_pages
2334 ± 4% +11.9% 2611 ± 3% proc-vmstat.nr_page_table_pages
39493 -2.2% 38606 proc-vmstat.nr_slab_unreclaimable
1348499 ± 3% +15.9% 1562687 ± 2% proc-vmstat.nr_zone_active_anon
11707 ± 19% -79.6% 2390 ±105% proc-vmstat.numa_hint_faults
5736 ± 68% -68.8% 1789 ±122% proc-vmstat.numa_hint_faults_local
12846700 -31.1% 8854558 proc-vmstat.numa_hit
834.00 ± 15% -55.0% 375.50 ± 29% proc-vmstat.numa_huge_pte_updates
12829442 -31.1% 8837365 proc-vmstat.numa_local
29698 ± 16% -71.4% 8480 ± 72% proc-vmstat.numa_pages_migrated
464744 ± 17% -57.4% 197920 ± 31% proc-vmstat.numa_pte_updates
2.591e+09 -33.0% 1.736e+09 proc-vmstat.pgalloc_normal
7958915 -29.7% 5591702 proc-vmstat.pgfault
2.589e+09 -33.0% 1.735e+09 proc-vmstat.pgfree
29698 ± 16% -71.4% 8480 ± 72% proc-vmstat.pgmigrate_success
5041287 -33.0% 3378764 proc-vmstat.thp_deferred_split_page
5044208 -33.0% 3379878 proc-vmstat.thp_fault_alloc
495.50 ± 58% -64.8% 174.50 ± 4% interrupts.35:PCI-MSI.3145732-edge.eth0-TxRx-3
3476 ± 10% -14.1% 2986 interrupts.CPU1.CAL:Function_call_interrupts
40310 ± 8% -50.4% 20001 ± 37% interrupts.CPU1.RES:Rescheduling_interrupts
3458 ± 12% -13.5% 2992 ± 2% interrupts.CPU13.CAL:Function_call_interrupts
495.50 ± 58% -64.8% 174.50 ± 4% interrupts.CPU14.35:PCI-MSI.3145732-edge.eth0-TxRx-3
232.75 ± 37% +199.9% 698.00 ± 59% interrupts.CPU17.RES:Rescheduling_interrupts
372.75 ± 61% +226.2% 1215 ± 41% interrupts.CPU19.RES:Rescheduling_interrupts
3428 ± 10% -12.0% 3016 ± 3% interrupts.CPU2.CAL:Function_call_interrupts
16318 ± 14% -79.4% 3366 ± 18% interrupts.CPU2.RES:Rescheduling_interrupts
112.50 ± 39% +573.1% 757.25 ± 42% interrupts.CPU20.RES:Rescheduling_interrupts
103.75 ± 37% +1322.2% 1475 ± 34% interrupts.CPU21.RES:Rescheduling_interrupts
1046 ± 41% +3220.0% 34735 ± 46% interrupts.CPU22.RES:Rescheduling_interrupts
485.25 ± 36% +3116.5% 15608 ±125% interrupts.CPU23.RES:Rescheduling_interrupts
404.75 ± 48% -81.6% 74.50 ± 55% interrupts.CPU29.RES:Rescheduling_interrupts
12888 ± 17% -73.6% 3399 ± 57% interrupts.CPU3.RES:Rescheduling_interrupts
341.25 ± 47% -78.7% 72.75 ± 77% interrupts.CPU31.RES:Rescheduling_interrupts
290.75 ± 29% -65.9% 99.25 ± 99% interrupts.CPU34.RES:Rescheduling_interrupts
3520 ± 7% -28.8% 2507 ± 30% interrupts.CPU35.CAL:Function_call_interrupts
238.75 ± 50% -75.6% 58.25 ± 35% interrupts.CPU35.RES:Rescheduling_interrupts
285.50 ± 66% -87.3% 36.25 ± 70% interrupts.CPU36.RES:Rescheduling_interrupts
3520 ± 9% -22.8% 2716 ± 16% interrupts.CPU37.CAL:Function_call_interrupts
303.00 ± 55% -81.2% 57.00 ±101% interrupts.CPU38.RES:Rescheduling_interrupts
261.75 ± 68% -83.2% 44.00 ± 81% interrupts.CPU39.RES:Rescheduling_interrupts
9633 ± 7% -79.9% 1935 ± 41% interrupts.CPU4.RES:Rescheduling_interrupts
169.75 ± 47% -71.4% 48.50 ± 67% interrupts.CPU41.RES:Rescheduling_interrupts
230.25 ± 52% -73.4% 61.25 ± 92% interrupts.CPU42.RES:Rescheduling_interrupts
3426 ± 10% -11.4% 3036 interrupts.CPU6.CAL:Function_call_interrupts
4643 ± 20% -77.7% 1036 ± 13% interrupts.CPU6.RES:Rescheduling_interrupts
237.75 ± 49% -79.7% 48.25 ± 82% interrupts.CPU66.RES:Rescheduling_interrupts
231.00 ± 64% -89.0% 25.50 ± 28% interrupts.CPU69.RES:Rescheduling_interrupts
3432 ± 10% -15.0% 2918 ± 4% interrupts.CPU7.CAL:Function_call_interrupts
4134 ± 13% -64.2% 1481 ± 43% interrupts.CPU7.RES:Rescheduling_interrupts
96.75 ± 51% -80.4% 19.00 ± 46% interrupts.CPU75.RES:Rescheduling_interrupts
3453 ± 11% -12.7% 3015 interrupts.CPU8.CAL:Function_call_interrupts
18.33 ± 5% -13.3 5.08 ± 4% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
19.33 ± 3% -12.0 7.38 ± 4% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
19.70 ± 3% -11.9 7.76 ± 7% perf-profile.calltrace.cycles-pp.secondary_startup_64
19.40 ± 3% -11.9 7.50 ± 4% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
19.40 ± 3% -11.9 7.50 ± 4% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
19.40 ± 3% -11.9 7.50 ± 4% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
0.55 ± 4% +0.1 0.64 ± 2% perf-profile.calltrace.cycles-pp.clear_huge_page
0.95 +0.1 1.06 ± 2% perf-profile.calltrace.cycles-pp.___might_sleep.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
0.56 ± 4% +0.3 0.84 ± 8% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.clear_page_erms.clear_huge_page
0.62 ± 3% +0.3 0.94 ± 8% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.clear_page_erms.clear_huge_page.do_huge_pmd_anonymous_page
0.60 ± 4% +0.3 0.93 perf-profile.calltrace.cycles-pp._cond_resched.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
0.63 ± 3% +0.3 0.96 ± 8% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.clear_page_erms.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault
1.17 ± 3% +0.3 1.50 ± 2% perf-profile.calltrace.cycles-pp.__free_pages_ok.release_pages.tlb_flush_mmu_free.unmap_page_range.unmap_vmas
0.38 ± 57% +0.4 0.76 ± 9% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.clear_page_erms
1.24 ± 3% +0.4 1.65 perf-profile.calltrace.cycles-pp.release_pages.tlb_flush_mmu_free.unmap_page_range.unmap_vmas.exit_mmap
1.24 ± 2% +0.4 1.66 perf-profile.calltrace.cycles-pp.tlb_flush_mmu_free.unmap_page_range.unmap_vmas.exit_mmap.mmput
1.31 ± 2% +0.5 1.79 perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.mmput.do_exit.do_group_exit
1.31 ± 2% +0.5 1.78 perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.exit_mmap.mmput.do_exit
1.34 ± 2% +0.5 1.81 perf-profile.calltrace.cycles-pp.mmput.do_exit.do_group_exit.__wake_up_parent.do_syscall_64
1.34 ± 2% +0.5 1.81 perf-profile.calltrace.cycles-pp.exit_mmap.mmput.do_exit.do_group_exit.__wake_up_parent
1.35 ± 3% +0.5 1.83 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
1.35 ± 3% +0.5 1.83 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.34 ± 3% +0.5 1.82 perf-profile.calltrace.cycles-pp.__wake_up_parent.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.34 ± 3% +0.5 1.82 perf-profile.calltrace.cycles-pp.do_group_exit.__wake_up_parent.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.34 ± 3% +0.5 1.82 perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.__wake_up_parent.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.6 0.57 ± 2% perf-profile.calltrace.cycles-pp.___might_sleep
0.00 +0.6 0.64 ± 10% perf-profile.calltrace.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
0.00 +0.7 0.67 ± 10% perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt
0.00 +0.7 0.68 ± 10% perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
1.80 ± 2% +0.7 2.54 ± 2% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
1.80 ± 2% +0.7 2.54 ± 2% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
0.00 +0.9 0.88 ± 5% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.get_page_from_freelist.__alloc_pages_nodemask.do_huge_pmd_anonymous_page
0.00 +0.9 0.90 ± 5% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.get_page_from_freelist.__alloc_pages_nodemask.do_huge_pmd_anonymous_page.__handle_mm_fault
0.66 ± 62% +1.2 1.88 ± 34% perf-profile.calltrace.cycles-pp.poll_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
69.36 +8.7 78.09 perf-profile.calltrace.cycles-pp.clear_page_erms.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
72.75 +10.1 82.87 perf-profile.calltrace.cycles-pp.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
74.99 +10.9 85.91 perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
74.91 +10.9 85.84 perf-profile.calltrace.cycles-pp.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
75.01 +10.9 85.95 perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
75.10 +11.0 86.06 perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
75.10 +11.0 86.06 perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault
75.11 +11.0 86.08 perf-profile.calltrace.cycles-pp.page_fault
103.91 -34.9% 67.64 ± 3% perf-stat.i.MPKI
1.482e+09 +13.6% 1.684e+09 ± 5% perf-stat.i.branch-instructions
9762787 +14.7% 11201300 ± 2% perf-stat.i.branch-misses
7.38 ± 2% +1.0 8.41 perf-stat.i.cache-miss-rate%
45987730 ± 3% -19.9% 36831773 ± 2% perf-stat.i.cache-misses
6.184e+08 -29.2% 4.378e+08 perf-stat.i.cache-references
1.035e+11 +16.0% 1.2e+11 perf-stat.i.cpu-cycles
35.49 ± 2% -24.7% 26.73 ± 8% perf-stat.i.cpu-migrations
1053398 +19.7% 1260826 perf-stat.i.dTLB-load-misses
1.728e+09 +15.0% 1.987e+09 ± 5% perf-stat.i.dTLB-loads
0.04 ± 3% +0.0 0.04 ± 2% perf-stat.i.dTLB-store-miss-rate%
592773 -13.0% 515699 perf-stat.i.dTLB-store-misses
1.69e+09 -22.2% 1.314e+09 perf-stat.i.dTLB-stores
49.34 ± 2% +24.3 73.67 perf-stat.i.iTLB-load-miss-rate%
462159 +19.0% 550118 perf-stat.i.iTLB-load-misses
481420 ± 3% -56.1% 211333 ± 3% perf-stat.i.iTLB-loads
6.07e+09 +11.9% 6.793e+09 ± 4% perf-stat.i.instructions
13676 -8.6% 12500 ± 4% perf-stat.i.instructions-per-iTLB-miss
25994 -29.2% 18413 perf-stat.i.minor-faults
12.62 ± 5% -3.1 9.52 ± 10% perf-stat.i.node-load-miss-rate%
813594 ± 8% -45.3% 445188 ± 10% perf-stat.i.node-load-misses
6036541 ± 3% -29.9% 4234277 perf-stat.i.node-loads
2.63 ± 15% -2.0 0.66 ± 32% perf-stat.i.node-store-miss-rate%
488029 ± 37% -68.5% 153814 ± 34% perf-stat.i.node-store-misses
23876394 -3.5% 23039440 perf-stat.i.node-stores
25995 -29.2% 18414 perf-stat.i.page-faults
101.89 -36.6% 64.58 ± 3% perf-stat.overall.MPKI
7.44 +1.0 8.41 perf-stat.overall.cache-miss-rate%
2251 +44.8% 3258 perf-stat.overall.cycles-between-cache-misses
0.04 +0.0 0.04 ± 2% perf-stat.overall.dTLB-store-miss-rate%
48.98 ± 2% +23.3 72.25 perf-stat.overall.iTLB-load-miss-rate%
11.86 ± 4% -2.3 9.51 ± 10% perf-stat.overall.node-load-miss-rate%
1.99 ± 36% -1.3 0.66 ± 33% perf-stat.overall.node-store-miss-rate%
1.477e+09 +13.6% 1.677e+09 ± 5% perf-stat.ps.branch-instructions
9696637 +14.7% 11121372 ± 2% perf-stat.ps.branch-misses
45824683 ± 3% -19.9% 36699436 ± 2% perf-stat.ps.cache-misses
6.162e+08 -29.2% 4.362e+08 perf-stat.ps.cache-references
1.031e+11 +16.0% 1.195e+11 perf-stat.ps.cpu-cycles
35.37 ± 2% -24.7% 26.64 ± 8% perf-stat.ps.cpu-migrations
1049561 +19.7% 1256350 perf-stat.ps.dTLB-load-misses
1.721e+09 +15.0% 1.979e+09 ± 5% perf-stat.ps.dTLB-loads
590625 -13.0% 513863 perf-stat.ps.dTLB-store-misses
1.684e+09 -22.2% 1.31e+09 perf-stat.ps.dTLB-stores
460416 +19.1% 548246 perf-stat.ps.iTLB-load-misses
479698 ± 3% -56.1% 210661 ± 3% perf-stat.ps.iTLB-loads
6.047e+09 +11.9% 6.766e+09 ± 4% perf-stat.ps.instructions
25909 -29.2% 18350 perf-stat.ps.minor-faults
810622 ± 7% -45.3% 443783 ± 10% perf-stat.ps.node-load-misses
6015816 ± 3% -29.9% 4219461 perf-stat.ps.node-loads
486412 ± 37% -68.5% 153252 ± 33% perf-stat.ps.node-store-misses
23792569 -3.5% 22959184 perf-stat.ps.node-stores
25909 -29.2% 18349 perf-stat.ps.page-faults
1.843e+12 +10.8% 2.043e+12 ± 4% perf-stat.total.instructions
40.36 ±173% -100.0% 0.00 ± 5% sched_debug.cfs_rq:/.MIN_vruntime.stddev
49736 +54.0% 76581 ± 9% sched_debug.cfs_rq:/.exec_clock.avg
81364 ± 8% +74.4% 141938 ± 4% sched_debug.cfs_rq:/.exec_clock.max
10337 ± 14% +266.4% 37876 ± 42% sched_debug.cfs_rq:/.exec_clock.stddev
11225 ± 10% +25.9% 14136 ± 13% sched_debug.cfs_rq:/.load.avg
40.36 ±173% -100.0% 0.00 ± 5% sched_debug.cfs_rq:/.max_vruntime.stddev
2067637 +61.7% 3344320 ± 9% sched_debug.cfs_rq:/.min_vruntime.avg
3187109 ± 6% +93.2% 6157097 ± 4% sched_debug.cfs_rq:/.min_vruntime.max
402561 ± 13% +311.1% 1655080 ± 42% sched_debug.cfs_rq:/.min_vruntime.stddev
0.42 ± 8% +26.9% 0.53 ± 4% sched_debug.cfs_rq:/.nr_running.avg
1.53 ± 10% +58.3% 2.42 ± 14% sched_debug.cfs_rq:/.nr_spread_over.avg
1.13 ± 24% +63.7% 1.85 ± 19% sched_debug.cfs_rq:/.nr_spread_over.stddev
3.41 ± 33% -57.7% 1.45 ±111% sched_debug.cfs_rq:/.removed.load_avg.avg
201.45 -58.0% 84.54 ±100% sched_debug.cfs_rq:/.removed.load_avg.max
25.53 ± 17% -57.7% 10.80 ±103% sched_debug.cfs_rq:/.removed.load_avg.stddev
156.58 ± 33% -57.5% 66.54 ±111% sched_debug.cfs_rq:/.removed.runnable_sum.avg
9231 -57.8% 3894 ±100% sched_debug.cfs_rq:/.removed.runnable_sum.max
1170 ± 17% -57.6% 496.50 ±103% sched_debug.cfs_rq:/.removed.runnable_sum.stddev
9.87 ± 2% +27.1% 12.55 ± 6% sched_debug.cfs_rq:/.runnable_load_avg.avg
11202 ± 10% +25.8% 14092 ± 13% sched_debug.cfs_rq:/.runnable_weight.avg
1717275 ± 24% +109.4% 3595584 ± 21% sched_debug.cfs_rq:/.spread0.max
-313871 +414.7% -1615516 sched_debug.cfs_rq:/.spread0.min
402568 ± 13% +311.1% 1655086 ± 42% sched_debug.cfs_rq:/.spread0.stddev
441.98 ± 8% +19.5% 528.20 ± 7% sched_debug.cfs_rq:/.util_avg.avg
152317 +19.8% 182542 sched_debug.cpu.clock.avg
152324 +19.8% 182549 sched_debug.cpu.clock.max
152309 +19.8% 182535 sched_debug.cpu.clock.min
152317 +19.8% 182542 sched_debug.cpu.clock_task.avg
152324 +19.8% 182549 sched_debug.cpu.clock_task.max
152309 +19.8% 182535 sched_debug.cpu.clock_task.min
9.55 +7.5% 10.27 ± 4% sched_debug.cpu.cpu_load[1].avg
89744 +27.7% 114579 sched_debug.cpu.nr_load_updates.avg
105109 ± 2% +46.9% 154390 ± 7% sched_debug.cpu.nr_load_updates.max
7386 ± 13% +245.2% 25495 ± 34% sched_debug.cpu.nr_load_updates.stddev
6176 +32.2% 8164 ± 4% sched_debug.cpu.nr_switches.avg
33527 ± 10% +167.9% 89818 ± 23% sched_debug.cpu.nr_switches.max
5721 ± 7% +107.1% 11850 ± 23% sched_debug.cpu.nr_switches.stddev
6.70 ± 19% +122.0% 14.88 ± 40% sched_debug.cpu.nr_uninterruptible.max
2.62 ± 8% +36.4% 3.58 ± 7% sched_debug.cpu.nr_uninterruptible.stddev
5865 +35.3% 7937 ± 4% sched_debug.cpu.sched_count.avg
45223 ± 23% +103.6% 92061 ± 23% sched_debug.cpu.sched_count.max
6871 ± 10% +83.8% 12631 ± 20% sched_debug.cpu.sched_count.stddev
2580 ± 2% +33.7% 3449 ± 6% sched_debug.cpu.sched_goidle.avg
15649 ± 9% +169.3% 42137 ± 30% sched_debug.cpu.sched_goidle.max
2697 ± 7% +106.9% 5582 ± 28% sched_debug.cpu.sched_goidle.stddev
2494 ± 2% +40.6% 3507 ± 4% sched_debug.cpu.ttwu_count.avg
20565 ± 18% +174.4% 56434 ± 39% sched_debug.cpu.ttwu_count.max
3385 ± 9% +112.6% 7196 ± 31% sched_debug.cpu.ttwu_count.stddev
856.38 +60.8% 1377 ± 3% sched_debug.cpu.ttwu_local.avg
2955 ± 13% +147.0% 7301 ± 28% sched_debug.cpu.ttwu_local.max
550.87 ± 4% +61.0% 886.64 ± 19% sched_debug.cpu.ttwu_local.stddev
152311 +19.8% 182536 sched_debug.cpu_clk
152311 +19.8% 182536 sched_debug.ktime
155210 +19.5% 185434 sched_debug.sched_clk
30209 ± 6% -41.7% 17601 ± 15% softirqs.CPU1.SCHED
94445 ± 9% +28.2% 121092 ± 5% softirqs.CPU1.TIMER
8720 ± 16% -30.8% 6038 ± 15% softirqs.CPU10.SCHED
91259 ± 7% +45.3% 132597 ± 12% softirqs.CPU10.TIMER
9600 ± 10% -37.6% 5988 ± 24% softirqs.CPU11.SCHED
93321 ± 6% +40.7% 131338 ± 14% softirqs.CPU11.TIMER
93395 ± 8% +39.0% 129851 ± 12% softirqs.CPU12.TIMER
9114 ± 11% -34.2% 6001 ± 13% softirqs.CPU13.SCHED
92702 ± 8% +44.7% 134175 ± 10% softirqs.CPU13.TIMER
98912 ± 8% +31.0% 129598 ± 11% softirqs.CPU14.TIMER
91279 ± 7% +35.5% 123650 ± 8% softirqs.CPU15.TIMER
3119 ± 6% +152.6% 7878 ± 73% softirqs.CPU16.RCU
94832 ± 7% +55.9% 147813 ± 9% softirqs.CPU16.TIMER
94741 ± 7% +38.2% 130953 ± 13% softirqs.CPU17.TIMER
89710 ± 7% +48.8% 133512 ± 12% softirqs.CPU18.TIMER
95107 ± 7% +41.2% 134251 ± 11% softirqs.CPU19.TIMER
17569 ± 3% -51.5% 8516 ± 17% softirqs.CPU2.SCHED
101951 ± 8% +34.6% 137209 ± 7% softirqs.CPU2.TIMER
95682 ± 5% +43.4% 137183 ± 8% softirqs.CPU20.TIMER
90560 ± 18% +47.5% 133620 ± 10% softirqs.CPU21.TIMER
8858 ± 9% +171.4% 24041 ± 37% softirqs.CPU22.SCHED
82796 ± 11% -32.9% 55594 ± 22% softirqs.CPU22.TIMER
92560 ± 8% -30.4% 64376 ± 14% softirqs.CPU23.TIMER
89890 ± 11% -31.9% 61223 ± 18% softirqs.CPU25.TIMER
82728 ± 11% -28.7% 58954 ± 21% softirqs.CPU26.TIMER
7585 ± 5% +65.0% 12517 ± 50% softirqs.CPU27.SCHED
87967 ± 7% -22.7% 68014 ± 20% softirqs.CPU29.TIMER
3847 ± 11% +78.0% 6847 ± 48% softirqs.CPU3.RCU
15118 ± 6% -51.1% 7400 ± 19% softirqs.CPU3.SCHED
99241 ± 8% +33.0% 131954 ± 13% softirqs.CPU3.TIMER
5273 ± 70% -52.9% 2481 ± 13% softirqs.CPU30.RCU
86738 ± 4% -26.9% 63403 ± 20% softirqs.CPU30.TIMER
87717 ± 4% -19.7% 70426 ± 11% softirqs.CPU33.TIMER
91009 ± 8% -26.6% 66826 ± 30% softirqs.CPU37.TIMER
91238 ± 6% -34.6% 59629 ± 35% softirqs.CPU39.TIMER
13630 ± 6% -48.8% 6975 ± 14% softirqs.CPU4.SCHED
98325 ± 9% +36.5% 134220 ± 10% softirqs.CPU4.TIMER
7882 ± 58% -37.7% 4910 ± 76% softirqs.CPU40.RCU
91734 ± 7% -41.1% 54044 ± 24% softirqs.CPU40.TIMER
7837 ± 57% -64.0% 2820 ± 18% softirqs.CPU42.RCU
90928 ± 7% -29.9% 63731 ± 23% softirqs.CPU42.TIMER
92238 ± 5% -36.2% 58803 ± 27% softirqs.CPU43.TIMER
9727 ± 9% -39.0% 5931 ± 17% softirqs.CPU45.SCHED
95857 ± 5% +35.5% 129846 ± 10% softirqs.CPU45.TIMER
9358 ± 8% -36.5% 5941 ± 21% softirqs.CPU46.SCHED
92711 ± 10% +43.7% 133268 ± 10% softirqs.CPU46.TIMER
9920 ± 9% -37.9% 6161 ± 22% softirqs.CPU47.SCHED
97785 ± 9% +36.4% 133427 ± 9% softirqs.CPU47.TIMER
9119 ± 13% -31.4% 6255 ± 18% softirqs.CPU48.SCHED
96161 ± 12% +39.8% 134472 ± 9% softirqs.CPU48.TIMER
95489 ± 12% +36.7% 130507 ± 10% softirqs.CPU49.TIMER
95172 ± 8% +34.9% 128353 ± 5% softirqs.CPU5.TIMER
95302 ± 9% +39.9% 133360 ± 11% softirqs.CPU50.TIMER
93574 ± 13% +47.3% 137842 ± 8% softirqs.CPU51.TIMER
3339 ± 6% +102.3% 6754 ± 50% softirqs.CPU52.RCU
96278 ± 5% +35.0% 129973 ± 10% softirqs.CPU53.TIMER
3243 ± 11% +95.5% 6339 ± 60% softirqs.CPU54.RCU
7818 ± 22% -35.3% 5060 ± 11% softirqs.CPU54.SCHED
97165 ± 4% +35.0% 131219 ± 12% softirqs.CPU54.TIMER
99534 ± 8% +33.4% 132761 ± 11% softirqs.CPU55.TIMER
93342 ± 6% +37.6% 128456 ± 13% softirqs.CPU56.TIMER
3104 ± 5% +176.2% 8574 ± 51% softirqs.CPU57.RCU
8598 ± 19% -43.3% 4871 ± 17% softirqs.CPU57.SCHED
96358 ± 5% +34.3% 129405 ± 14% softirqs.CPU57.TIMER
90709 ± 6% +45.1% 131643 ± 10% softirqs.CPU58.TIMER
94807 ± 6% +31.0% 124229 ± 10% softirqs.CPU59.TIMER
10284 ± 8% -34.9% 6694 ± 12% softirqs.CPU6.SCHED
94038 ± 11% +44.9% 136217 ± 10% softirqs.CPU6.TIMER
95144 ± 5% +37.2% 130541 ± 13% softirqs.CPU60.TIMER
95772 ± 8% +33.9% 128220 ± 13% softirqs.CPU61.TIMER
96450 ± 8% +34.0% 129241 ± 13% softirqs.CPU62.TIMER
2935 ± 8% +104.4% 5999 ± 58% softirqs.CPU63.RCU
93061 ± 7% +43.2% 133226 ± 9% softirqs.CPU63.TIMER
95785 ± 10% +38.0% 132228 ± 10% softirqs.CPU64.TIMER
87927 ± 20% +51.6% 133269 ± 8% softirqs.CPU65.TIMER
87499 ± 10% -38.1% 54161 ± 26% softirqs.CPU66.TIMER
91805 ± 9% -29.9% 64360 ± 25% softirqs.CPU67.TIMER
92714 ± 8% -29.6% 65310 ± 23% softirqs.CPU68.TIMER
87340 ± 7% -31.8% 59600 ± 28% softirqs.CPU69.TIMER
3716 ± 5% +86.8% 6942 ± 61% softirqs.CPU7.RCU
10113 ± 11% -41.6% 5907 ± 9% softirqs.CPU7.SCHED
92161 ± 9% +43.3% 132028 ± 11% softirqs.CPU7.TIMER
91161 ± 17% -27.6% 66000 ± 24% softirqs.CPU73.TIMER
85233 ± 9% -19.1% 68973 ± 14% softirqs.CPU75.TIMER
94828 ± 2% -32.7% 63848 ± 22% softirqs.CPU77.TIMER
89287 ± 7% -29.9% 62592 ± 27% softirqs.CPU78.TIMER
90874 ± 9% -24.0% 69092 ± 24% softirqs.CPU80.TIMER
7403 ± 58% -68.8% 2307 ± 12% softirqs.CPU82.RCU
90855 ± 10% -27.9% 65495 ± 25% softirqs.CPU82.TIMER
90678 ± 11% -28.8% 64580 ± 12% softirqs.CPU84.TIMER
90188 ± 15% -21.7% 70622 ± 3% softirqs.CPU85.TIMER
7438 ± 57% -37.8% 4626 ± 74% softirqs.CPU86.RCU
89728 ± 12% -35.7% 57665 ± 17% softirqs.CPU86.TIMER
5097 ± 72% -52.3% 2430 ± 5% softirqs.CPU87.RCU
88927 ± 9% -34.6% 58201 ± 25% softirqs.CPU87.TIMER
3488 ± 8% +98.4% 6921 ± 45% softirqs.CPU9.RCU
88673 ± 12% +43.7% 127443 ± 12% softirqs.CPU9.TIMER
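
A quick cross-check of the derived perf-stat columns in the table above. The MPKI
column here appears to be cache references per 1000 instructions rather than misses
(only that definition reproduces the reported 101.89; a miss-based MPKI would come
out near 7.6). A minimal Python sketch, using the base-commit perf-stat.ps rows
copied from the table:

    # Values copied from the perf-stat.ps rows above (base commit).
    cache_refs   = 6.162e8    # perf-stat.ps.cache-references
    cache_misses = 45824683   # perf-stat.ps.cache-misses
    cycles       = 1.031e11   # perf-stat.ps.cpu-cycles
    instructions = 6.047e9    # perf-stat.ps.instructions
    itlb_misses  = 460416     # perf-stat.ps.iTLB-load-misses
    itlb_loads   = 479698     # perf-stat.ps.iTLB-loads

    print(cache_refs / instructions * 1e3)                 # ~101.9 -> perf-stat.overall.MPKI
    print(cycles / cache_misses)                           # ~2250  -> cycles-between-cache-misses
    print(itlb_misses / (itlb_misses + itlb_loads) * 100)  # ~49.0  -> iTLB-load-miss-rate%

The same arithmetic reproduces the post-commit column (4.362e8 / 6.766e9 * 1e3 is
roughly the reported 64.58 MPKI), so the comparison rows are self-consistent.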
pft.faults_per_sec_per_cpu
300000 +-+----------------------------------------------------------------+
| |
250000 +-+.++.+.+.+.++.+ +.+.++.+.+.+.+ |
| : : |
| : : |
200000 +-+ : : |
| : : |
150000 O-O O O O O OO O:O:O O OO O O OO O O O OO O O O O OO O O O OO O O
| : : |
100000 +-+ : : |
| : : |
| :: |
50000 +-+ : |
| : |
0 +-+-O--------------------------O-----------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
***************************************************************************************************
lkp-hsw-ep2: 72 threads Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz with 256G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_job/nr_task/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.2/3000/100%/debian-x86_64-2018-04-03.cgz/300s/lkp-hsw-ep2/custom/reaim/0x3d
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
1:4 -25% :4 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=sched_slice/0x
%stddev %change %stddev
\ | \
389.13 -2.7% 378.77 reaim.child_systime
540687 +1.3% 547673 reaim.jobs_per_min
7509 +1.3% 7606 reaim.jobs_per_min_child
544939 +1.2% 551502 reaim.max_jobs_per_min
23.92 -1.3% 23.62 reaim.parent_time
0.68 ± 2% +21.6% 0.83 reaim.std_dev_percent
0.16 ± 2% +20.1% 0.19 reaim.std_dev_time
311.40 -1.2% 307.71 reaim.time.elapsed_time
311.40 -1.2% 307.71 reaim.time.elapsed_time.max
5096562 -16.0% 4280492 reaim.time.involuntary_context_switches
4694 -2.7% 4569 reaim.time.system_time
21203641 +2.1% 21642097 reaim.time.voluntary_context_switches
169548 +1.9% 172739 vmstat.system.cs
5.805e+08 +14.5% 6.645e+08 cpuidle.C1E.time
5649746 +14.9% 6493131 cpuidle.C1E.usage
114430 ± 4% -6.4% 107089 softirqs.CPU33.TIMER
116034 -8.8% 105833 ± 2% softirqs.CPU35.TIMER
13118 +13.4% 14873 meminfo.PageTables
49287 ± 5% +21.9% 60093 ± 7% meminfo.Shmem
7913 +12.1% 8874 meminfo.max_used_kB
7969 ± 42% +115.6% 17180 ± 5% numa-meminfo.node0.Inactive(anon)
12094 ± 15% +35.1% 16333 ± 3% numa-meminfo.node0.Mapped
10024 ± 34% -82.7% 1739 ± 48% numa-meminfo.node1.Inactive(anon)
6709 ± 5% +30.3% 8745 ± 8% numa-meminfo.node1.PageTables
5649886 +14.9% 6493215 turbostat.C1E
2.58 +0.4 2.98 turbostat.C1E%
0.29 -58.6% 0.12 turbostat.CPU%c3
2.18 ± 2% +10.3% 2.41 ± 2% turbostat.Pkg%pc2
13.25 -3.8% 12.76 turbostat.RAMWatt
1993 ± 42% +115.1% 4288 ± 4% numa-vmstat.node0.nr_inactive_anon
3125 ± 15% +32.9% 4154 ± 2% numa-vmstat.node0.nr_mapped
1993 ± 42% +115.1% 4288 ± 4% numa-vmstat.node0.nr_zone_inactive_anon
2498 ± 34% -82.9% 426.75 ± 47% numa-vmstat.node1.nr_inactive_anon
1670 ± 5% +31.9% 2202 ± 9% numa-vmstat.node1.nr_page_table_pages
2498 ± 34% -82.9% 426.75 ± 47% numa-vmstat.node1.nr_zone_inactive_anon
2027 ± 2% -21.6% 1590 ± 8% slabinfo.eventpoll_epi.active_objs
2027 ± 2% -21.6% 1590 ± 8% slabinfo.eventpoll_epi.num_objs
3548 ± 2% -22.0% 2769 ± 9% slabinfo.eventpoll_pwq.active_objs
3548 ± 2% -22.0% 2769 ± 9% slabinfo.eventpoll_pwq.num_objs
662.25 ± 6% -19.6% 532.50 ± 3% slabinfo.file_lock_cache.active_objs
662.25 ± 6% -19.6% 532.50 ± 3% slabinfo.file_lock_cache.num_objs
933.75 ± 2% -23.5% 714.00 ± 2% slabinfo.names_cache.active_objs
938.50 ± 2% -23.9% 714.00 ± 2% slabinfo.names_cache.num_objs
93741 +3.5% 97068 proc-vmstat.nr_active_anon
84153 +1.7% 85581 proc-vmstat.nr_anon_pages
235621 +1.1% 238317 proc-vmstat.nr_file_pages
4495 +5.3% 4732 proc-vmstat.nr_inactive_anon
15557 +2.7% 15981 proc-vmstat.nr_kernel_stack
6662 +5.9% 7053 proc-vmstat.nr_mapped
3245 +13.7% 3692 proc-vmstat.nr_page_table_pages
12326 ± 5% +21.9% 15022 ± 7% proc-vmstat.nr_shmem
44278 -1.3% 43703 proc-vmstat.nr_slab_unreclaimable
93741 +3.5% 97068 proc-vmstat.nr_zone_active_anon
4495 +5.3% 4732 proc-vmstat.nr_zone_inactive_anon
158417 ± 11% -20.5% 125975 ± 3% proc-vmstat.numa_pte_updates
2340 -100.0% 0.00 sched_debug.cfs_rq:/.load.min
0.74 -17.1% 0.61 ± 3% sched_debug.cfs_rq:/.nr_running.avg
0.17 -100.0% 0.00 sched_debug.cfs_rq:/.nr_running.min
1.83 ± 22% -100.0% 0.00 sched_debug.cfs_rq:/.runnable_load_avg.min
25.56 ± 51% +131.9% 59.28 ± 32% sched_debug.cfs_rq:/.runnable_load_avg.stddev
2339 -100.0% 0.00 sched_debug.cfs_rq:/.runnable_weight.min
741.21 -19.7% 595.16 ± 2% sched_debug.cfs_rq:/.util_avg.avg
144.54 ± 16% +38.7% 200.54 ± 27% sched_debug.cpu.cpu_load[2].max
15.92 ± 13% +33.3% 21.22 ± 18% sched_debug.cpu.cpu_load[3].stddev
3.08 ± 18% -44.6% 1.71 ± 46% sched_debug.cpu.cpu_load[4].min
23537 ± 2% -16.0% 19781 ± 4% sched_debug.cpu.curr->pid.avg
4236 -100.0% 0.00 sched_debug.cpu.curr->pid.min
2340 -100.0% 0.00 sched_debug.cpu.load.min
0.89 ± 2% -15.7% 0.75 ± 5% sched_debug.cpu.nr_running.avg
0.17 -100.0% 0.00 sched_debug.cpu.nr_running.min
128.17 ± 5% +10.5% 141.67 ± 4% sched_debug.cpu.nr_uninterruptible.max
-143.46 -9.9% -129.21 sched_debug.cpu.nr_uninterruptible.min
7407 ± 7% +16.7% 8643 ± 6% sched_debug.cpu.sched_count.stddev
0.00 ± 5% -44.0% 0.00 ± 14% sched_debug.rt_rq:/.rt_time.avg
0.04 ± 7% -42.0% 0.02 ± 57% sched_debug.rt_rq:/.rt_time.max
0.00 ± 8% -36.3% 0.00 ± 40% sched_debug.rt_rq:/.rt_time.stddev
2.198e+10 +1.1% 2.223e+10 perf-stat.i.branch-instructions
2.02 -0.0 1.99 perf-stat.i.branch-miss-rate%
2.44 -0.5 1.89 perf-stat.i.cache-miss-rate%
84674052 -20.1% 67690215 perf-stat.i.cache-misses
3.972e+09 +1.0% 4.013e+09 perf-stat.i.cache-references
171277 +1.8% 174409 perf-stat.i.context-switches
4613 ± 3% +20.8% 5574 ± 3% perf-stat.i.cycles-between-cache-misses
1.248e+10 +1.3% 1.264e+10 perf-stat.i.dTLB-loads
10489856 ± 2% -6.8% 9775230 ± 2% perf-stat.i.dTLB-store-misses
61.17 -0.5 60.69 perf-stat.i.iTLB-load-miss-rate%
1.04e+11 +1.1% 1.052e+11 perf-stat.i.instructions
79.97 -1.9 78.03 perf-stat.i.node-load-miss-rate%
54374000 -26.7% 39866795 perf-stat.i.node-load-misses
5262627 -5.0% 4998608 perf-stat.i.node-loads
47.13 -1.5 45.63 perf-stat.i.node-store-miss-rate%
11256870 -13.1% 9780571 perf-stat.i.node-store-misses
12952272 -6.5% 12106552 perf-stat.i.node-stores
2.13 -0.4 1.69 perf-stat.overall.cache-miss-rate%
1806 +25.2% 2262 perf-stat.overall.cycles-between-cache-misses
0.08 ± 2% -0.0 0.07 ± 3% perf-stat.overall.dTLB-store-miss-rate%
57.16 -0.4 56.72 perf-stat.overall.iTLB-load-miss-rate%
6126 +2.6% 6282 perf-stat.overall.instructions-per-iTLB-miss
0.68 +1.0% 0.69 perf-stat.overall.ipc
91.17 -2.3 88.86 perf-stat.overall.node-load-miss-rate%
46.50 -1.8 44.68 perf-stat.overall.node-store-miss-rate%
2.191e+10 +1.1% 2.215e+10 perf-stat.ps.branch-instructions
84380063 -20.1% 67448069 perf-stat.ps.cache-misses
3.959e+09 +1.0% 4e+09 perf-stat.ps.cache-references
170674 +1.8% 173772 perf-stat.ps.context-switches
1.244e+10 +1.3% 1.259e+10 perf-stat.ps.dTLB-loads
10454233 ± 2% -6.8% 9740884 ± 2% perf-stat.ps.dTLB-store-misses
1.037e+11 +1.1% 1.048e+11 perf-stat.ps.instructions
54182937 -26.7% 39721849 perf-stat.ps.node-load-misses
5244550 -5.0% 4980894 perf-stat.ps.node-loads
11218214 -13.1% 9746115 perf-stat.ps.node-store-misses
12908974 -6.5% 12064855 perf-stat.ps.node-stores
252.25 ± 40% -32.0% 171.50 ± 4% interrupts.40:PCI-MSI.1572869-edge.eth0-TxRx-5
10875 ± 4% -15.8% 9161 ± 8% interrupts.CPU0.RES:Rescheduling_interrupts
7594 ± 2% -23.1% 5838 ± 6% interrupts.CPU1.RES:Rescheduling_interrupts
7393 -35.6% 4762 ± 3% interrupts.CPU10.RES:Rescheduling_interrupts
7422 ± 13% -33.1% 4965 ± 8% interrupts.CPU11.RES:Rescheduling_interrupts
7029 ± 2% -35.2% 4558 ± 2% interrupts.CPU12.RES:Rescheduling_interrupts
7067 ± 2% -36.0% 4526 interrupts.CPU13.RES:Rescheduling_interrupts
7201 ± 2% -34.6% 4709 interrupts.CPU14.RES:Rescheduling_interrupts
7104 ± 2% -36.1% 4537 ± 5% interrupts.CPU15.RES:Rescheduling_interrupts
7208 -35.5% 4646 ± 4% interrupts.CPU16.RES:Rescheduling_interrupts
7127 -33.9% 4709 ± 9% interrupts.CPU17.RES:Rescheduling_interrupts
7395 ± 3% -36.3% 4709 ± 2% interrupts.CPU18.RES:Rescheduling_interrupts
7005 -34.4% 4596 ± 3% interrupts.CPU19.RES:Rescheduling_interrupts
7853 ± 3% -31.2% 5406 ± 8% interrupts.CPU2.RES:Rescheduling_interrupts
7205 -31.2% 4958 ± 8% interrupts.CPU20.RES:Rescheduling_interrupts
7137 -34.3% 4686 ± 3% interrupts.CPU21.RES:Rescheduling_interrupts
6968 -32.7% 4691 interrupts.CPU22.RES:Rescheduling_interrupts
6914 -30.8% 4787 ± 3% interrupts.CPU23.RES:Rescheduling_interrupts
6922 -32.8% 4651 ± 2% interrupts.CPU24.RES:Rescheduling_interrupts
7184 ± 2% -35.8% 4615 ± 2% interrupts.CPU25.RES:Rescheduling_interrupts
7341 ± 2% -36.9% 4635 ± 3% interrupts.CPU26.RES:Rescheduling_interrupts
7230 -34.7% 4720 ± 4% interrupts.CPU27.RES:Rescheduling_interrupts
7205 ± 2% -31.4% 4939 ± 7% interrupts.CPU28.RES:Rescheduling_interrupts
7064 -33.9% 4667 ± 3% interrupts.CPU29.RES:Rescheduling_interrupts
8138 ± 11% -36.3% 5183 ± 6% interrupts.CPU3.RES:Rescheduling_interrupts
7003 -33.7% 4644 ± 2% interrupts.CPU30.RES:Rescheduling_interrupts
7189 ± 3% -37.2% 4512 ± 3% interrupts.CPU31.RES:Rescheduling_interrupts
7236 ± 4% -35.7% 4654 interrupts.CPU32.RES:Rescheduling_interrupts
7071 -35.4% 4571 interrupts.CPU33.RES:Rescheduling_interrupts
7111 ± 2% -36.1% 4540 interrupts.CPU34.RES:Rescheduling_interrupts
7196 ± 2% -33.4% 4791 ± 4% interrupts.CPU35.RES:Rescheduling_interrupts
6758 -34.0% 4462 interrupts.CPU36.RES:Rescheduling_interrupts
6853 -35.7% 4403 ± 4% interrupts.CPU37.RES:Rescheduling_interrupts
7193 ± 3% -34.8% 4691 interrupts.CPU38.RES:Rescheduling_interrupts
7197 ± 5% -35.2% 4660 ± 3% interrupts.CPU39.RES:Rescheduling_interrupts
7753 -31.2% 5333 ± 7% interrupts.CPU4.RES:Rescheduling_interrupts
7031 -33.0% 4711 ± 3% interrupts.CPU40.RES:Rescheduling_interrupts
6888 -35.7% 4431 interrupts.CPU41.RES:Rescheduling_interrupts
6914 ± 2% -32.8% 4644 ± 3% interrupts.CPU42.RES:Rescheduling_interrupts
7186 ± 6% -37.4% 4498 interrupts.CPU43.RES:Rescheduling_interrupts
7018 ± 2% -36.5% 4458 ± 2% interrupts.CPU44.RES:Rescheduling_interrupts
7083 ± 3% -37.4% 4430 ± 2% interrupts.CPU45.RES:Rescheduling_interrupts
7044 ± 3% -35.8% 4521 ± 2% interrupts.CPU46.RES:Rescheduling_interrupts
6916 -34.3% 4546 interrupts.CPU47.RES:Rescheduling_interrupts
7012 ± 3% -36.6% 4443 ± 2% interrupts.CPU48.RES:Rescheduling_interrupts
6966 -35.3% 4506 ± 3% interrupts.CPU49.RES:Rescheduling_interrupts
252.25 ± 40% -32.0% 171.50 ± 4% interrupts.CPU5.40:PCI-MSI.1572869-edge.eth0-TxRx-5
7180 ± 2% -33.7% 4760 ± 3% interrupts.CPU5.RES:Rescheduling_interrupts
6969 -35.1% 4523 interrupts.CPU50.RES:Rescheduling_interrupts
7000 -34.5% 4587 interrupts.CPU51.RES:Rescheduling_interrupts
7039 ± 2% -34.0% 4643 ± 3% interrupts.CPU52.RES:Rescheduling_interrupts
6917 -35.8% 4443 ± 2% interrupts.CPU53.RES:Rescheduling_interrupts
7059 ± 2% -37.2% 4430 ± 2% interrupts.CPU54.RES:Rescheduling_interrupts
6918 -33.8% 4580 ± 2% interrupts.CPU55.RES:Rescheduling_interrupts
7187 -35.1% 4662 ± 3% interrupts.CPU56.RES:Rescheduling_interrupts
6875 -33.5% 4568 interrupts.CPU57.RES:Rescheduling_interrupts
7008 -34.7% 4574 ± 3% interrupts.CPU58.RES:Rescheduling_interrupts
6891 -32.0% 4683 ± 3% interrupts.CPU59.RES:Rescheduling_interrupts
7151 ± 4% -33.2% 4780 ± 2% interrupts.CPU6.RES:Rescheduling_interrupts
6903 -30.9% 4767 ± 7% interrupts.CPU60.RES:Rescheduling_interrupts
7063 -33.4% 4706 ± 3% interrupts.CPU61.RES:Rescheduling_interrupts
6855 ± 3% -33.9% 4532 ± 4% interrupts.CPU62.RES:Rescheduling_interrupts
7054 -35.1% 4578 ± 2% interrupts.CPU63.RES:Rescheduling_interrupts
7195 ± 2% -37.0% 4531 ± 2% interrupts.CPU64.RES:Rescheduling_interrupts
7040 ± 2% -35.3% 4556 ± 4% interrupts.CPU65.RES:Rescheduling_interrupts
6848 -33.7% 4538 ± 2% interrupts.CPU66.RES:Rescheduling_interrupts
7026 ± 2% -35.4% 4538 interrupts.CPU67.RES:Rescheduling_interrupts
7055 -34.2% 4645 ± 2% interrupts.CPU68.RES:Rescheduling_interrupts
6876 -33.4% 4576 interrupts.CPU69.RES:Rescheduling_interrupts
7598 ± 5% -35.5% 4898 ± 2% interrupts.CPU7.RES:Rescheduling_interrupts
7097 ± 2% -35.9% 4548 ± 5% interrupts.CPU70.RES:Rescheduling_interrupts
6982 -35.7% 4488 interrupts.CPU71.RES:Rescheduling_interrupts
7378 ± 2% -36.2% 4705 ± 2% interrupts.CPU8.RES:Rescheduling_interrupts
7349 -34.9% 4784 interrupts.CPU9.RES:Rescheduling_interrupts
83.75 ± 11% +52.5% 127.75 ± 18% interrupts.IWI:IRQ_work_interrupts
516736 -34.1% 340755 interrupts.RES:Rescheduling_interrupts
22.25 -1.3 20.95 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
22.22 -1.3 20.93 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.69 ± 5% -1.1 2.60 ± 3% perf-profile.calltrace.cycles-pp.search_binary_handler.do_execveat_common.sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.69 ± 5% -1.1 2.59 ± 3% perf-profile.calltrace.cycles-pp.load_elf_binary.search_binary_handler.do_execveat_common.sys_execve.do_syscall_64
1.71 ± 4% -0.8 0.88 ± 4% perf-profile.calltrace.cycles-pp.apic_timer_interrupt
1.64 ± 4% -0.8 0.84 ± 4% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt
1.02 -0.8 0.27 ±100% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
1.25 ± 3% -0.6 0.60 ± 9% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
3.74 ± 2% -0.5 3.20 perf-profile.calltrace.cycles-pp.copy_page_range.copy_process._do_fork.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.06 ± 13% -0.5 1.56 ± 3% perf-profile.calltrace.cycles-pp.flush_old_exec.load_elf_binary.search_binary_handler.do_execveat_common.sys_execve
1.86 ± 3% -0.5 1.36 ± 2% perf-profile.calltrace.cycles-pp._do_fork.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.97 ± 12% -0.5 1.51 ± 3% perf-profile.calltrace.cycles-pp.mmput.flush_old_exec.load_elf_binary.search_binary_handler.do_execveat_common
1.96 ± 12% -0.5 1.50 ± 3% perf-profile.calltrace.cycles-pp.exit_mmap.mmput.flush_old_exec.load_elf_binary.search_binary_handler
1.70 ± 3% -0.4 1.27 ± 2% perf-profile.calltrace.cycles-pp.copy_process._do_fork.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.29 ± 7% -0.4 0.87 ± 16% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
1.29 ± 7% -0.4 0.87 ± 16% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry
4.20 ± 3% -0.4 3.80 ± 3% perf-profile.calltrace.cycles-pp.execve
1.21 ± 6% -0.4 0.85 ± 2% perf-profile.calltrace.cycles-pp.setlocale
3.81 ± 3% -0.4 3.46 ± 3% perf-profile.calltrace.cycles-pp.do_execveat_common.sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
3.84 ± 3% -0.4 3.49 ± 3% perf-profile.calltrace.cycles-pp.sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
3.84 ± 3% -0.3 3.50 ± 3% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.execve
3.84 ± 3% -0.3 3.50 ± 3% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
0.73 ± 6% -0.2 0.54 perf-profile.calltrace.cycles-pp._dl_addr
0.91 ± 3% -0.2 0.75 ± 4% perf-profile.calltrace.cycles-pp.ret_from_fork
0.89 ± 3% -0.1 0.74 ± 4% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
0.68 ± 4% -0.1 0.60 ± 5% perf-profile.calltrace.cycles-pp.__pte_alloc.copy_page_range.copy_process._do_fork.do_syscall_64
0.72 ± 3% +0.1 0.79 ± 4% perf-profile.calltrace.cycles-pp.read
0.56 ± 7% +0.1 0.66 perf-profile.calltrace.cycles-pp.do_brk_flags.sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk
0.54 ± 6% +0.1 0.64 ± 4% perf-profile.calltrace.cycles-pp.kmem_cache_free.unlink_anon_vmas.free_pgtables.exit_mmap.mmput
0.91 ± 3% +0.1 1.03 perf-profile.calltrace.cycles-pp.anon_vma_interval_tree_insert.anon_vma_clone.anon_vma_fork.copy_process._do_fork
0.56 ± 3% +0.1 0.68 ± 4% perf-profile.calltrace.cycles-pp.close
0.70 ± 6% +0.1 0.82 ± 4% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.kill
0.71 ± 6% +0.1 0.83 ± 4% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.kill
0.96 ± 6% +0.1 1.11 ± 3% perf-profile.calltrace.cycles-pp.kill
0.83 ± 3% +0.2 0.98 ± 2% perf-profile.calltrace.cycles-pp.free_pages_and_swap_cache.tlb_flush_mmu_free.arch_tlb_finish_mmu.tlb_finish_mmu.exit_mmap
0.94 +0.2 1.09 ± 2% perf-profile.calltrace.cycles-pp.__vfs_write.vfs_write.sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.05 +0.2 1.22 ± 2% perf-profile.calltrace.cycles-pp.vfs_write.sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
0.54 ± 4% +0.2 0.71 ± 3% perf-profile.calltrace.cycles-pp.tlb_finish_mmu.unmap_region.do_munmap.sys_brk.do_syscall_64
0.54 ± 4% +0.2 0.71 ± 3% perf-profile.calltrace.cycles-pp.arch_tlb_finish_mmu.tlb_finish_mmu.unmap_region.do_munmap.sys_brk
1.12 +0.2 1.29 ± 3% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
1.10 +0.2 1.27 ± 3% perf-profile.calltrace.cycles-pp.sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
1.13 +0.2 1.31 ± 2% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
0.71 ± 6% +0.2 0.91 ± 3% perf-profile.calltrace.cycles-pp.unmap_region.do_munmap.sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.75 ± 5% +0.2 0.96 ± 3% perf-profile.calltrace.cycles-pp.do_munmap.sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk
1.86 ± 2% +0.2 2.08 ± 3% perf-profile.calltrace.cycles-pp.anon_vma_clone.anon_vma_fork.copy_process._do_fork.do_syscall_64
3.38 ± 3% +0.3 3.65 perf-profile.calltrace.cycles-pp.free_pgtables.exit_mmap.mmput.do_exit.do_group_exit
1.72 ± 2% +0.3 1.98 perf-profile.calltrace.cycles-pp.write
1.93 ± 2% +0.3 2.23 perf-profile.calltrace.cycles-pp.page_remove_rmap.unmap_page_range.unmap_vmas.exit_mmap.mmput
0.28 ±100% +0.3 0.60 ± 4% perf-profile.calltrace.cycles-pp.do_notify_parent.do_exit.do_group_exit.__wake_up_parent.do_syscall_64
2.17 ± 4% +0.3 2.49 ± 2% perf-profile.calltrace.cycles-pp.unlink_anon_vmas.free_pgtables.exit_mmap.mmput.do_exit
0.26 ±100% +0.3 0.59 ± 4% perf-profile.calltrace.cycles-pp.__slab_free.kmem_cache_free.unlink_anon_vmas.free_pgtables.exit_mmap
1.44 ± 6% +0.3 1.78 perf-profile.calltrace.cycles-pp.sys_brk.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk
1.47 ± 6% +0.3 1.81 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.brk
2.81 ± 3% +0.4 3.16 perf-profile.calltrace.cycles-pp.release_pages.tlb_flush_mmu_free.arch_tlb_finish_mmu.tlb_finish_mmu.exit_mmap
1.48 ± 6% +0.4 1.84 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.brk
2.75 +0.4 3.10 ± 3% perf-profile.calltrace.cycles-pp.anon_vma_fork.copy_process._do_fork.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.71 ± 4% +0.4 2.09 perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
5.92 ± 2% +0.4 6.30 perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.mmput.do_exit.do_group_exit
1.73 ± 4% +0.4 2.11 perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat
1.86 ± 4% +0.4 2.25 perf-profile.calltrace.cycles-pp.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat
5.77 +0.4 6.16 perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.exit_mmap.mmput.do_exit
1.89 ± 4% +0.4 2.28 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.creat
0.13 ±173% +0.4 0.53 ± 2% perf-profile.calltrace.cycles-pp.remove_vma.exit_mmap.mmput.do_exit.do_group_exit
1.89 ± 4% +0.4 2.30 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.creat
0.27 ±100% +0.4 0.69 perf-profile.calltrace.cycles-pp.wait_consider_task.do_wait.kernel_wait4.SYSC_wait4.do_syscall_64
1.97 ± 6% +0.4 2.40 perf-profile.calltrace.cycles-pp.brk
2.36 ± 4% +0.5 2.81 perf-profile.calltrace.cycles-pp.creat
0.13 ±173% +0.5 0.60 ± 8% perf-profile.calltrace.cycles-pp.down_write.anon_vma_fork.copy_process._do_fork.do_syscall_64
0.67 ± 4% +0.5 1.16 ± 4% perf-profile.calltrace.cycles-pp.queued_read_lock_slowpath.do_wait.kernel_wait4.SYSC_wait4.do_syscall_64
3.65 ± 2% +0.5 4.15 perf-profile.calltrace.cycles-pp.tlb_flush_mmu_free.arch_tlb_finish_mmu.tlb_finish_mmu.exit_mmap.mmput
0.13 ±173% +0.5 0.63 perf-profile.calltrace.cycles-pp.release_task.wait_consider_task.do_wait.kernel_wait4.SYSC_wait4
3.68 ± 2% +0.5 4.20 perf-profile.calltrace.cycles-pp.tlb_finish_mmu.exit_mmap.mmput.do_exit.do_group_exit
3.67 ± 2% +0.5 4.19 perf-profile.calltrace.cycles-pp.arch_tlb_finish_mmu.tlb_finish_mmu.exit_mmap.mmput.do_exit
1.44 ± 2% +0.7 2.13 perf-profile.calltrace.cycles-pp.do_wait.kernel_wait4.SYSC_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.45 ± 2% +0.7 2.15 perf-profile.calltrace.cycles-pp.kernel_wait4.SYSC_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe.wait
1.47 ± 2% +0.7 2.17 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.wait
1.46 ± 2% +0.7 2.17 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.wait
1.45 ± 2% +0.7 2.16 perf-profile.calltrace.cycles-pp.SYSC_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe.wait
1.69 ± 2% +0.7 2.43 perf-profile.calltrace.cycles-pp.wait
11.98 +0.7 12.73 perf-profile.calltrace.cycles-pp.__libc_fork
9.99 ± 2% +0.8 10.77 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__libc_fork
9.99 ± 2% +0.8 10.77 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_fork
9.98 ± 2% +0.8 10.77 perf-profile.calltrace.cycles-pp._do_fork.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_fork
9.35 +0.8 10.18 perf-profile.calltrace.cycles-pp.copy_process._do_fork.do_syscall_64.entry_SYSCALL_64_after_hwframe.__libc_fork
27.80 +0.8 28.64 perf-profile.calltrace.cycles-pp.secondary_startup_64
0.00 +0.9 0.86 ± 3% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath.queued_read_lock_slowpath.do_wait.kernel_wait4.SYSC_wait4
27.36 +0.9 28.29 perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
27.36 +0.9 28.29 perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
27.35 +0.9 28.29 perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
26.38 +1.0 27.42 perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
13.81 +1.2 14.97 perf-profile.calltrace.cycles-pp.exit_mmap.mmput.do_exit.do_group_exit.__wake_up_parent
13.84 +1.2 15.00 perf-profile.calltrace.cycles-pp.mmput.do_exit.do_group_exit.__wake_up_parent.do_syscall_64
15.92 +1.3 17.21 perf-profile.calltrace.cycles-pp.__wake_up_parent.do_syscall_64.entry_SYSCALL_64_after_hwframe
15.92 +1.3 17.21 perf-profile.calltrace.cycles-pp.do_group_exit.__wake_up_parent.do_syscall_64.entry_SYSCALL_64_after_hwframe
15.92 +1.3 17.21 perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.__wake_up_parent.do_syscall_64.entry_SYSCALL_64_after_hwframe
24.25 +1.3 25.56 perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
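
For reference, the %change column throughout these tables is the relative delta
between the two commits' mean values; a worked example against the
reaim.jobs_per_min row above:

    # reaim.jobs_per_min from the table above: 24d0c1d6e6 vs 2c83362734.
    base, patched = 540687, 547673
    print(f"{(patched - base) / base * 100:+.1f}%")   # +1.3%, as reported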
***************************************************************************************************
lkp-bdw-ep4: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
=========================================================================================
array_size/compiler/cpufreq_governor/kconfig/nr_threads/omp/rootfs/tbox_group/testcase/ucode:
10000000/gcc-7/performance/x86_64-rhel-7.2/50%/true/debian-x86_64-2018-04-03.cgz/lkp-bdw-ep4/stream/0xb00002e
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
%stddev %change %stddev
\ | \
70910 ± 3% -32.0% 48230 ± 2% stream.add_bandwidth_MBps
76061 -39.0% 46386 ± 3% stream.copy_bandwidth_MBps
67937 ± 2% -35.7% 43683 ± 2% stream.scale_bandwidth_MBps
52.42 +47.9% 77.52 ± 2% stream.time.user_time
74940 -33.4% 49941 ± 2% stream.triad_bandwidth_MBps
19.95 ± 9% +3.4 23.35 ± 2% mpstat.cpu.usr%
3192799 ± 31% +124.6% 7172184 ± 16% cpuidle.C1E.time
23982 ± 9% +47.3% 35332 ± 28% cpuidle.C1E.usage
91178 ± 8% +43.2% 130560 ± 4% meminfo.AnonHugePages
669014 ± 11% -20.3% 533101 meminfo.max_used_kB
643.25 ± 15% +408.3% 3269 ±129% softirqs.CPU16.RCU
10783 ± 4% +13.6% 12250 ± 10% softirqs.CPU46.TIMER
84.00 -4.2% 80.50 vmstat.cpu.id
15.00 ± 8% +23.3% 18.50 ± 2% vmstat.cpu.us
5888 +3.3% 6081 proc-vmstat.nr_mapped
985.50 ± 2% +7.6% 1060 ± 2% proc-vmstat.nr_page_table_pages
81826 +2.9% 84193 proc-vmstat.numa_hit
64731 +3.6% 67074 proc-vmstat.numa_local
20069 ± 12% +60.3% 32178 ± 32% turbostat.C1E
1.15 ± 31% +1.2 2.33 ± 17% turbostat.C1E%
34.77 ± 4% -24.8% 26.13 ± 2% turbostat.CPU%c1
30.82 ± 16% +25.9% 38.80 ± 9% turbostat.CPU%c6
289644 ± 6% +17.8% 341157 ± 7% turbostat.IRQ
24.77 ± 6% -10.6% 22.14 ± 2% turbostat.RAMWatt
672.00 ± 9% +46.4% 984.00 ± 22% slabinfo.dmaengine-unmap-16.active_objs
672.00 ± 9% +46.4% 984.00 ± 22% slabinfo.dmaengine-unmap-16.num_objs
2128 ± 9% -25.9% 1576 ± 9% slabinfo.eventpoll_epi.active_objs
2128 ± 9% -25.9% 1576 ± 9% slabinfo.eventpoll_epi.num_objs
3724 ± 9% -25.9% 2758 ± 9% slabinfo.eventpoll_pwq.active_objs
3724 ± 9% -25.9% 2758 ± 9% slabinfo.eventpoll_pwq.num_objs
760.00 ± 6% -17.1% 630.00 ± 2% slabinfo.file_lock_cache.active_objs
760.00 ± 6% -17.1% 630.00 ± 2% slabinfo.file_lock_cache.num_objs
7771 ± 60% +112.3% 16501 numa-meminfo.node0.Inactive(anon)
8035 ± 58% +111.6% 17005 ± 2% numa-meminfo.node0.Shmem
93367 ± 2% +12.8% 105291 numa-meminfo.node0.Slab
21669 +8.8% 23569 ± 4% numa-meminfo.node1.Active(file)
9225 ± 51% -94.7% 493.50 ± 62% numa-meminfo.node1.Inactive(anon)
31028 ± 6% -10.9% 27656 numa-meminfo.node1.SReclaimable
62893 ± 4% -13.6% 54313 ± 7% numa-meminfo.node1.SUnreclaim
9625 ± 49% -92.8% 690.00 ± 51% numa-meminfo.node1.Shmem
93922 -12.7% 81969 ± 5% numa-meminfo.node1.Slab
1942 ± 61% +112.4% 4125 numa-vmstat.node0.nr_inactive_anon
2007 ± 59% +111.7% 4250 ± 2% numa-vmstat.node0.nr_shmem
1942 ± 61% +112.4% 4125 numa-vmstat.node0.nr_zone_inactive_anon
5390 +9.3% 5892 ± 4% numa-vmstat.node1.nr_active_file
2305 ± 51% -94.7% 123.25 ± 62% numa-vmstat.node1.nr_inactive_anon
2404 ± 49% -92.8% 172.00 ± 51% numa-vmstat.node1.nr_shmem
15666 ± 4% -13.3% 13579 ± 7% numa-vmstat.node1.nr_slab_unreclaimable
5390 +9.3% 5892 ± 4% numa-vmstat.node1.nr_zone_active_file
2305 ± 51% -94.7% 123.25 ± 62% numa-vmstat.node1.nr_zone_inactive_anon
11.33 ±120% -11.3 0.00 perf-profile.calltrace.cycles-pp.__vfs_read.vfs_read.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
9.83 ±140% -9.8 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read
9.83 ±140% -9.8 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
9.83 ±140% -9.8 0.00 perf-profile.calltrace.cycles-pp.read
9.83 ±140% -9.8 0.00 perf-profile.calltrace.cycles-pp.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
9.83 ±140% -9.8 0.00 perf-profile.calltrace.cycles-pp.vfs_read.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
20.83 ±103% -8.3 12.50 ±173% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
20.83 ±103% -8.3 12.50 ±173% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
20.83 ±103% -8.3 12.50 ±173% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
20.33 ±103% -7.8 12.50 ±173% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
20.33 ±103% -7.8 12.50 ±173% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
12.83 ±108% -0.3 12.50 ±173% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
12.83 ±108% -0.3 12.50 ±173% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
27016 ± 6% -12.9% 23538 ± 5% sched_debug.cfs_rq:/.min_vruntime.max
2243 ± 40% -47.7% 1173 ± 35% sched_debug.cfs_rq:/.min_vruntime.min
-692.28 +943.1% -7221 sched_debug.cfs_rq:/.spread0.avg
19808 ± 11% -47.9% 10321 ± 61% sched_debug.cfs_rq:/.spread0.max
-5010 +140.7% -12059 sched_debug.cfs_rq:/.spread0.min
2.18 ± 9% -12.5% 1.91 ± 8% sched_debug.cpu.cpu_load[0].avg
160.00 ± 22% -42.2% 92.50 ± 25% sched_debug.cpu.cpu_load[4].max
18.25 ± 22% -37.0% 11.50 ± 14% sched_debug.cpu.cpu_load[4].stddev
282.50 ± 11% -16.2% 236.72 ± 6% sched_debug.cpu.curr->pid.avg
642.46 ± 4% -7.3% 595.31 sched_debug.cpu.curr->pid.stddev
229.75 -19.0% 186.00 ± 9% sched_debug.cpu.nr_switches.min
1754 ± 11% +30.2% 2284 ± 8% sched_debug.cpu.nr_switches.stddev
5.25 ± 15% +128.6% 12.00 ± 20% sched_debug.cpu.nr_uninterruptible.max
2.38 ± 13% +50.4% 3.57 ± 19% sched_debug.cpu.nr_uninterruptible.stddev
1.56 ±171% +602.6% 10.93 ± 52% sched_debug.cpu.ttwu_count.avg
9.75 ±161% +2407.7% 244.50 ± 51% sched_debug.cpu.ttwu_count.max
3.53 ±167% +1270.9% 48.37 ± 50% sched_debug.cpu.ttwu_count.stddev
3.14 ± 12% +41.1% 4.43 ± 17% perf-stat.i.cpi
0.32 ± 13% -24.8% 0.24 ± 12% perf-stat.i.ipc
14784 ± 4% -25.8% 10970 ± 16% perf-stat.i.minor-faults
37.43 ± 11% -32.1 5.36 ± 27% perf-stat.i.node-load-miss-rate%
1700226 ± 23% -86.3% 233356 ± 9% perf-stat.i.node-load-misses
2814735 ± 20% +183.3% 7973698 ± 61% perf-stat.i.node-loads
36.40 ± 15% -34.6 1.80 ± 79% perf-stat.i.node-store-miss-rate%
3649361 ± 35% -95.2% 175235 ± 54% perf-stat.i.node-store-misses
14790 ± 4% -25.8% 10971 ± 16% perf-stat.i.page-faults
3.14 ± 12% +45.8% 4.58 ± 20% perf-stat.overall.cpi
0.32 ± 13% -29.6% 0.23 ± 20% perf-stat.overall.ipc
37.43 ± 11% -33.1 4.30 ± 57% perf-stat.overall.node-load-miss-rate%
36.40 ± 15% -34.8 1.58 ± 99% perf-stat.overall.node-store-miss-rate%
7440 ± 4% -15.4% 6296 ± 6% perf-stat.ps.minor-faults
855845 ± 24% -83.9% 138073 ± 21% perf-stat.ps.node-load-misses
1416749 ± 20% +257.9% 5070429 ± 69% perf-stat.ps.node-loads
1837180 ± 35% -94.4% 103194 ± 56% perf-stat.ps.node-store-misses
7443 ± 4% -15.4% 6297 ± 6% perf-stat.ps.page-faults
2645 ± 6% +21.1% 3205 ± 8% interrupts.CPU0.LOC:Local_timer_interrupts
2659 ± 5% +19.9% 3189 ± 8% interrupts.CPU1.LOC:Local_timer_interrupts
2690 ± 4% +21.8% 3278 ± 10% interrupts.CPU10.LOC:Local_timer_interrupts
2649 ± 5% +22.0% 3233 ± 8% interrupts.CPU11.LOC:Local_timer_interrupts
2665 ± 5% +20.0% 3198 ± 8% interrupts.CPU12.LOC:Local_timer_interrupts
2697 ± 7% +18.6% 3200 ± 8% interrupts.CPU13.LOC:Local_timer_interrupts
2668 ± 5% +19.6% 3192 ± 8% interrupts.CPU14.LOC:Local_timer_interrupts
2656 ± 5% +21.3% 3223 ± 7% interrupts.CPU15.LOC:Local_timer_interrupts
2652 ± 5% +20.6% 3199 ± 8% interrupts.CPU16.LOC:Local_timer_interrupts
2680 ± 4% +20.4% 3227 ± 8% interrupts.CPU17.LOC:Local_timer_interrupts
2653 ± 5% +21.5% 3224 ± 9% interrupts.CPU18.LOC:Local_timer_interrupts
2685 ± 5% +20.2% 3228 ± 8% interrupts.CPU19.LOC:Local_timer_interrupts
2676 ± 5% +20.7% 3229 ± 9% interrupts.CPU2.LOC:Local_timer_interrupts
2665 ± 5% +20.6% 3215 ± 9% interrupts.CPU20.LOC:Local_timer_interrupts
2681 ± 5% +19.3% 3198 ± 8% interrupts.CPU21.LOC:Local_timer_interrupts
2653 ± 5% +20.8% 3206 ± 8% interrupts.CPU22.LOC:Local_timer_interrupts
2643 ± 6% +22.8% 3245 ± 8% interrupts.CPU23.LOC:Local_timer_interrupts
2654 ± 5% +21.7% 3231 ± 7% interrupts.CPU24.LOC:Local_timer_interrupts
2649 ± 5% +21.8% 3226 ± 7% interrupts.CPU25.LOC:Local_timer_interrupts
2707 ± 7% +20.0% 3248 ± 7% interrupts.CPU26.LOC:Local_timer_interrupts
2701 ± 6% +20.2% 3247 ± 8% interrupts.CPU27.LOC:Local_timer_interrupts
2701 ± 6% +18.8% 3208 ± 8% interrupts.CPU28.LOC:Local_timer_interrupts
2702 ± 5% +18.4% 3201 ± 8% interrupts.CPU29.LOC:Local_timer_interrupts
2688 ± 6% +18.8% 3193 ± 8% interrupts.CPU30.LOC:Local_timer_interrupts
2641 ± 6% +21.5% 3208 ± 8% interrupts.CPU31.LOC:Local_timer_interrupts
2666 ± 4% +20.4% 3210 ± 8% interrupts.CPU32.LOC:Local_timer_interrupts
2639 ± 5% +21.4% 3204 ± 8% interrupts.CPU33.LOC:Local_timer_interrupts
2656 ± 5% +22.2% 3246 ± 8% interrupts.CPU34.LOC:Local_timer_interrupts
2664 ± 5% +21.8% 3245 ± 8% interrupts.CPU35.LOC:Local_timer_interrupts
2680 ± 6% +20.9% 3241 ± 8% interrupts.CPU36.LOC:Local_timer_interrupts
2650 ± 6% +22.4% 3245 ± 7% interrupts.CPU37.LOC:Local_timer_interrupts
2653 ± 6% +20.8% 3206 ± 8% interrupts.CPU38.LOC:Local_timer_interrupts
2653 ± 5% +21.6% 3226 ± 8% interrupts.CPU39.LOC:Local_timer_interrupts
2650 ± 5% +23.6% 3276 ± 6% interrupts.CPU4.LOC:Local_timer_interrupts
2679 ± 6% +19.6% 3204 ± 8% interrupts.CPU40.LOC:Local_timer_interrupts
2649 ± 5% +20.9% 3204 ± 8% interrupts.CPU41.LOC:Local_timer_interrupts
2666 ± 6% +20.2% 3204 ± 8% interrupts.CPU42.LOC:Local_timer_interrupts
2699 ± 6% +18.9% 3209 ± 8% interrupts.CPU43.LOC:Local_timer_interrupts
2685 ± 8% +21.2% 3254 ± 9% interrupts.CPU44.LOC:Local_timer_interrupts
2718 ± 8% +18.9% 3233 ± 8% interrupts.CPU45.LOC:Local_timer_interrupts
2644 ± 6% +21.0% 3200 ± 9% interrupts.CPU46.LOC:Local_timer_interrupts
2672 ± 5% +19.8% 3201 ± 9% interrupts.CPU47.LOC:Local_timer_interrupts
2662 ± 5% +21.0% 3220 ± 9% interrupts.CPU48.LOC:Local_timer_interrupts
2676 ± 5% +20.2% 3216 ± 8% interrupts.CPU49.LOC:Local_timer_interrupts
2655 ± 5% +20.5% 3200 ± 8% interrupts.CPU5.LOC:Local_timer_interrupts
623.75 +14.3% 712.75 ± 11% interrupts.CPU50.CAL:Function_call_interrupts
2672 ± 6% +19.4% 3191 ± 8% interrupts.CPU50.LOC:Local_timer_interrupts
2653 ± 5% +20.4% 3194 ± 8% interrupts.CPU51.LOC:Local_timer_interrupts
2640 ± 6% +20.9% 3193 ± 8% interrupts.CPU52.LOC:Local_timer_interrupts
2665 ± 6% +19.9% 3196 ± 8% interrupts.CPU53.LOC:Local_timer_interrupts
2657 ± 5% +20.0% 3189 ± 8% interrupts.CPU54.LOC:Local_timer_interrupts
2653 ± 5% +21.1% 3212 ± 9% interrupts.CPU55.LOC:Local_timer_interrupts
2659 ± 5% +20.0% 3192 ± 8% interrupts.CPU56.LOC:Local_timer_interrupts
208.75 ± 59% -84.7% 32.00 ±173% interrupts.CPU56.TLB:TLB_shootdowns
2657 ± 5% +20.2% 3194 ± 8% interrupts.CPU57.LOC:Local_timer_interrupts
2638 ± 6% +21.0% 3192 ± 8% interrupts.CPU58.LOC:Local_timer_interrupts
2667 ± 7% +19.7% 3194 ± 8% interrupts.CPU59.LOC:Local_timer_interrupts
2649 ± 5% +21.0% 3205 ± 8% interrupts.CPU6.LOC:Local_timer_interrupts
236.00 ± 66% -86.2% 32.50 ±169% interrupts.CPU6.TLB:TLB_shootdowns
2677 ± 6% +19.0% 3186 ± 8% interrupts.CPU60.LOC:Local_timer_interrupts
2674 ± 6% +19.3% 3191 ± 8% interrupts.CPU61.LOC:Local_timer_interrupts
2658 ± 5% +20.2% 3194 ± 8% interrupts.CPU62.LOC:Local_timer_interrupts
2666 ± 5% +20.8% 3221 ± 9% interrupts.CPU63.LOC:Local_timer_interrupts
2690 ± 6% +19.9% 3225 ± 10% interrupts.CPU64.LOC:Local_timer_interrupts
2653 ± 5% +20.4% 3194 ± 8% interrupts.CPU65.LOC:Local_timer_interrupts
652.50 ± 9% +21.9% 795.25 ± 18% interrupts.CPU66.CAL:Function_call_interrupts
2669 ± 5% +19.7% 3196 ± 8% interrupts.CPU66.LOC:Local_timer_interrupts
2655 ± 5% +20.2% 3191 ± 8% interrupts.CPU67.LOC:Local_timer_interrupts
2665 ± 5% +20.4% 3207 ± 8% interrupts.CPU68.LOC:Local_timer_interrupts
614.50 +27.0% 780.50 ± 25% interrupts.CPU69.CAL:Function_call_interrupts
2666 ± 5% +21.2% 3232 ± 9% interrupts.CPU69.LOC:Local_timer_interrupts
2678 ± 6% +21.1% 3243 ± 7% interrupts.CPU7.LOC:Local_timer_interrupts
657.50 ± 9% +24.1% 815.75 ± 21% interrupts.CPU70.CAL:Function_call_interrupts
2676 ± 6% +19.9% 3208 ± 8% interrupts.CPU70.LOC:Local_timer_interrupts
657.00 ± 9% +24.2% 815.75 ± 21% interrupts.CPU71.CAL:Function_call_interrupts
2674 ± 6% +19.5% 3194 ± 8% interrupts.CPU71.LOC:Local_timer_interrupts
2653 ± 5% +21.4% 3221 ± 9% interrupts.CPU72.LOC:Local_timer_interrupts
2670 ± 6% +20.0% 3204 ± 8% interrupts.CPU73.LOC:Local_timer_interrupts
2669 ± 6% +19.8% 3197 ± 9% interrupts.CPU74.LOC:Local_timer_interrupts
652.00 ± 9% +25.3% 816.75 ± 21% interrupts.CPU75.CAL:Function_call_interrupts
2641 ± 5% +21.4% 3205 ± 8% interrupts.CPU75.LOC:Local_timer_interrupts
625.00 +25.8% 786.25 ± 16% interrupts.CPU76.CAL:Function_call_interrupts
2660 ± 5% +20.6% 3208 ± 8% interrupts.CPU76.LOC:Local_timer_interrupts
656.75 ± 9% +20.6% 791.75 ± 17% interrupts.CPU77.CAL:Function_call_interrupts
2665 ± 5% +19.6% 3187 ± 8% interrupts.CPU77.LOC:Local_timer_interrupts
657.00 ± 9% +21.3% 796.75 ± 18% interrupts.CPU78.CAL:Function_call_interrupts
2666 ± 5% +19.7% 3190 ± 8% interrupts.CPU78.LOC:Local_timer_interrupts
2660 ± 5% +19.9% 3190 ± 8% interrupts.CPU79.LOC:Local_timer_interrupts
648.75 ± 7% +19.2% 773.00 ± 16% interrupts.CPU8.CAL:Function_call_interrupts
2647 ± 5% +21.3% 3211 ± 7% interrupts.CPU8.LOC:Local_timer_interrupts
2671 ± 6% +19.5% 3191 ± 8% interrupts.CPU80.LOC:Local_timer_interrupts
2660 ± 5% +20.9% 3217 ± 8% interrupts.CPU81.LOC:Local_timer_interrupts
2639 ± 6% +21.6% 3209 ± 8% interrupts.CPU82.LOC:Local_timer_interrupts
2652 ± 5% +21.5% 3223 ± 7% interrupts.CPU83.LOC:Local_timer_interrupts
2656 ± 5% +20.5% 3201 ± 8% interrupts.CPU84.LOC:Local_timer_interrupts
2678 ± 4% +19.8% 3207 ± 8% interrupts.CPU85.LOC:Local_timer_interrupts
2659 ± 5% +20.7% 3209 ± 8% interrupts.CPU86.LOC:Local_timer_interrupts
656.25 ± 9% +24.8% 819.00 ± 22% interrupts.CPU87.CAL:Function_call_interrupts
2656 ± 5% +20.5% 3200 ± 8% interrupts.CPU87.LOC:Local_timer_interrupts
2655 ± 5% +20.9% 3211 ± 8% interrupts.CPU9.LOC:Local_timer_interrupts
234559 ± 5% +20.5% 282643 ± 8% interrupts.LOC:Local_timer_interrupts
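
For context, the patch compared in these sections is identified above only by its
subject ("sched/fair: Consider SD_NUMA when selecting the most idle group to
schedule on"). Below is a toy, self-contained sketch of that selection idea; it is
not the kernel implementation, and the Group/pick_idlest names and the flag value
are hypothetical:

    # Toy model: among comparably idle groups, prefer the one on the
    # task's preferred NUMA node when the domain has SD_NUMA set.
    from dataclasses import dataclass

    SD_NUMA = 0x0400  # illustrative flag value only

    @dataclass
    class Group:
        node: int
        idle_cpus: int

    def pick_idlest(groups, domain_flags, preferred_node):
        def key(g):
            on_pref = bool(domain_flags & SD_NUMA) and g.node == preferred_node
            return (g.idle_cpus, on_pref)   # idle count first, NUMA tie-break second
        return max(groups, key=key)

    groups = [Group(node=0, idle_cpus=4), Group(node=1, idle_cpus=4)]
    print(pick_idlest(groups, SD_NUMA, preferred_node=1))  # tie resolves to node 1

A NUMA-aware tie-break of this kind trades some wakeup spreading for locality,
which is consistent with the mixed results above: fewer rescheduling interrupts
and node-load misses, but lower bandwidth in the memory-bound stream runs.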
***************************************************************************************************
lkp-bdw-ep6: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_threads/rootfs/tbox_group/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.2/100%/debian-x86_64-2018-04-03.cgz/lkp-bdw-ep6/plzip/0xb00002e
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
%stddev %change %stddev
\ | \
4674851 -5.8% 4402603 plzip.time.minor_page_faults
187.93 -1.7% 184.66 plzip.time.system_time
236915 +3.3% 244744 plzip.time.voluntary_context_switches
20188778 ± 7% +62.1% 32719513 ± 32% cpuidle.POLL.time
0.00 -0.0 0.00 ± 24% mpstat.cpu.soft%
11424 ± 49% -62.2% 4321 ±169% numa-numastat.node0.other_node
2223 -2.7% 2163 vmstat.system.cs
199507 -6.3% 186973 vmstat.system.in
11254 +22.5% 13784 ± 16% numa-meminfo.node0.Mapped
75.00 +544.0% 483.00 ± 81% numa-meminfo.node0.Mlocked
75.00 +544.0% 483.00 ± 81% numa-meminfo.node0.Unevictable
38772 ± 5% -11.6% 34263 ± 10% numa-meminfo.node1.SReclaimable
2793 +25.4% 3502 ± 14% numa-vmstat.node0.nr_mapped
18.50 ± 2% +550.0% 120.25 ± 82% numa-vmstat.node0.nr_mlock
18.50 ± 2% +550.0% 120.25 ± 82% numa-vmstat.node0.nr_unevictable
18.50 ± 2% +550.0% 120.25 ± 82% numa-vmstat.node0.nr_zone_unevictable
11543 ± 49% -60.8% 4519 ±159% numa-vmstat.node0.numa_other
9693 ± 5% -11.6% 8566 ± 10% numa-vmstat.node1.nr_slab_reclaimable
3008454 -9.0% 2737405 proc-vmstat.numa_hint_faults
2421331 -7.5% 2240371 proc-vmstat.numa_hint_faults_local
185494 -11.9% 163404 proc-vmstat.numa_huge_pte_updates
5591805 ± 3% -12.4% 4899638 ± 3% proc-vmstat.numa_pages_migrated
98610869 -11.8% 86933972 proc-vmstat.numa_pte_updates
5570737 -4.9% 5295155 proc-vmstat.pgfault
5591805 ± 3% -12.4% 4899638 ± 3% proc-vmstat.pgmigrate_success
2358 ± 3% -24.9% 1770 ± 14% slabinfo.eventpoll_epi.active_objs
2358 ± 3% -24.9% 1770 ± 14% slabinfo.eventpoll_epi.num_objs
4127 ± 3% -25.6% 3071 ± 15% slabinfo.eventpoll_pwq.active_objs
4127 ± 3% -25.6% 3071 ± 15% slabinfo.eventpoll_pwq.num_objs
767.50 ± 5% -16.6% 640.00 ± 10% slabinfo.file_lock_cache.active_objs
767.50 ± 5% -16.6% 640.00 ± 10% slabinfo.file_lock_cache.num_objs
548.00 ± 12% -13.4% 474.75 ± 17% slabinfo.skbuff_fclone_cache.active_objs
548.00 ± 12% -13.4% 474.75 ± 17% slabinfo.skbuff_fclone_cache.num_objs
3424 -13.7% 2955 ± 8% slabinfo.sock_inode_cache.active_objs
3424 -13.7% 2955 ± 8% slabinfo.sock_inode_cache.num_objs
1.38 +8.7% 1.50 ± 6% sched_debug.cfs_rq:/.nr_spread_over.avg
123441 ± 32% -87.5% 15420 ± 31% sched_debug.cfs_rq:/.spread0.avg
310956 ± 19% -40.1% 186343 ± 7% sched_debug.cfs_rq:/.spread0.max
29.67 ± 5% -7.4% 27.46 ± 5% sched_debug.cpu.cpu_load[1].max
53.50 ± 8% -16.1% 44.88 ± 16% sched_debug.cpu.cpu_load[4].max
4.96 ± 7% -16.0% 4.17 ± 17% sched_debug.cpu.cpu_load[4].stddev
3942 ± 15% -36.1% 2519 ± 42% sched_debug.cpu.curr->pid.min
312.17 ± 4% -12.6% 272.83 ± 4% sched_debug.cpu.sched_goidle.stddev
5485 +31.8% 7231 ± 15% sched_debug.cpu.ttwu_count.max
829.33 ± 6% -10.5% 742.58 ± 12% sched_debug.cpu.ttwu_count.min
3789 ± 3% +38.5% 5249 ± 18% sched_debug.cpu.ttwu_local.max
361.67 ± 16% -24.7% 272.42 ± 9% sched_debug.cpu.ttwu_local.min
42735 ± 4% -9.5% 38655 ± 4% softirqs.CPU1.RCU
43986 -11.1% 39106 ± 6% softirqs.CPU11.RCU
138451 ± 3% -6.2% 129877 ± 3% softirqs.CPU11.TIMER
43616 ± 3% -11.3% 38683 ± 8% softirqs.CPU14.RCU
46391 ± 3% -15.0% 39420 ± 4% softirqs.CPU2.RCU
46679 ± 2% -10.4% 41843 ± 2% softirqs.CPU35.RCU
54405 ± 9% -13.0% 47347 ± 11% softirqs.CPU36.RCU
53662 ± 5% -16.5% 44815 ± 5% softirqs.CPU37.RCU
51563 ± 11% -17.6% 42505 ± 5% softirqs.CPU39.RCU
146287 ± 3% -7.7% 135015 ± 3% softirqs.CPU4.TIMER
51810 ± 8% -12.9% 45103 ± 11% softirqs.CPU41.RCU
144965 ± 3% -7.2% 134488 ± 2% softirqs.CPU5.TIMER
36399 ± 2% +19.8% 43611 ± 2% softirqs.CPU66.RCU
54540 -17.2% 45175 ± 2% softirqs.CPU68.RCU
45027 -10.8% 40167 ± 6% softirqs.CPU7.RCU
46747 ± 10% -20.2% 37293 ± 10% softirqs.CPU77.RCU
42610 ± 2% -10.8% 38011 ± 7% softirqs.CPU83.RCU
43534 ± 3% -10.6% 38935 ± 9% softirqs.CPU87.RCU
45144 ± 4% -8.2% 41426 ± 5% softirqs.CPU9.RCU
2207 -2.7% 2149 perf-stat.i.context-switches
132.50 -6.8% 123.43 ± 2% perf-stat.i.cpu-migrations
0.04 -0.0 0.03 ± 2% perf-stat.i.dTLB-store-miss-rate%
2870039 -7.1% 2665015 perf-stat.i.dTLB-store-misses
69.91 +2.8 72.76 ± 2% perf-stat.i.iTLB-load-miss-rate%
593399 -13.4% 513752 ± 3% perf-stat.i.iTLB-loads
15830 -5.4% 14979 perf-stat.i.minor-faults
5.76 ± 6% -1.1 4.63 ± 12% perf-stat.i.node-load-miss-rate%
29280168 ± 5% -22.1% 22809166 ± 7% perf-stat.i.node-load-misses
5.96 ± 8% -1.4 4.52 ± 11% perf-stat.i.node-store-miss-rate%
4992121 ± 7% -27.5% 3621719 ± 7% perf-stat.i.node-store-misses
15830 -5.4% 14979 perf-stat.i.page-faults
0.04 -0.0 0.03 perf-stat.overall.dTLB-store-miss-rate%
5.46 ± 5% -1.2 4.24 ± 7% perf-stat.overall.node-load-miss-rate%
5.68 ± 7% -1.6 4.12 ± 8% perf-stat.overall.node-store-miss-rate%
2201 -2.6% 2143 perf-stat.ps.context-switches
132.11 -6.8% 123.07 ± 2% perf-stat.ps.cpu-migrations
2861551 -7.1% 2657323 perf-stat.ps.dTLB-store-misses
591695 -13.4% 512284 ± 3% perf-stat.ps.iTLB-loads
15785 -5.4% 14937 perf-stat.ps.minor-faults
29194092 ± 5% -22.1% 22742509 ± 7% perf-stat.ps.node-load-misses
4977398 ± 7% -27.4% 3611184 ± 7% perf-stat.ps.node-store-misses
15785 -5.4% 14937 perf-stat.ps.page-faults
37.67 -3.5 34.22 ± 3% perf-profile.calltrace.cycles-pp.apic_timer_interrupt
36.20 -3.2 32.98 ± 3% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt
30.64 -2.6 28.08 ± 3% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
15.44 ± 8% -2.2 13.23 ± 5% perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt
16.59 ± 7% -2.2 14.40 ± 4% perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
14.94 ± 8% -2.1 12.84 ± 5% perf-profile.calltrace.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
23.31 ± 2% -2.1 21.23 ± 3% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
11.29 ± 5% -1.7 9.60 ± 5% perf-profile.calltrace.cycles-pp.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
8.27 ± 2% -1.0 7.23 ± 4% perf-profile.calltrace.cycles-pp.task_tick_fair.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer
12.36 -0.6 11.71 ± 4% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
12.36 -0.6 11.71 ± 4% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.88 -0.6 3.31 ± 4% perf-profile.calltrace.cycles-pp.vfs_write.sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.86 ± 2% -0.6 3.30 ± 4% perf-profile.calltrace.cycles-pp.pipe_write.__vfs_write.vfs_write.sys_write.do_syscall_64
3.88 -0.6 3.32 ± 4% perf-profile.calltrace.cycles-pp.sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.86 ± 2% -0.6 3.31 ± 3% perf-profile.calltrace.cycles-pp.__vfs_write.vfs_write.sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.08 -0.5 3.56 ± 2% perf-profile.calltrace.cycles-pp.update_load_avg.task_tick_fair.scheduler_tick.update_process_times.tick_sched_handle
2.81 ± 2% -0.4 2.39 ± 7% perf-profile.calltrace.cycles-pp.copy_page_from_iter.pipe_write.__vfs_write.vfs_write.sys_write
2.62 ± 4% -0.4 2.21 ± 6% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyin.copy_page_from_iter.pipe_write.__vfs_write
2.62 ± 4% -0.4 2.21 ± 6% perf-profile.calltrace.cycles-pp.copyin.copy_page_from_iter.pipe_write.__vfs_write.vfs_write
0.87 ± 6% -0.4 0.47 ± 57% perf-profile.calltrace.cycles-pp.native_apic_msr_eoi_write.smp_apic_timer_interrupt.apic_timer_interrupt
0.83 ± 7% -0.4 0.44 ± 58% perf-profile.calltrace.cycles-pp.account_user_time.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
3.40 ± 3% -0.3 3.12 ± 2% perf-profile.calltrace.cycles-pp.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
3.00 ± 4% -0.3 2.71 perf-profile.calltrace.cycles-pp.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
1.44 ± 2% -0.2 1.29 ± 10% perf-profile.calltrace.cycles-pp.update_curr.task_tick_fair.scheduler_tick.update_process_times.tick_sched_handle
0.77 ± 3% -0.1 0.62 ± 12% perf-profile.calltrace.cycles-pp.__remove_hrtimer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
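
Note: for rows whose values are already percentages (the
perf-profile.calltrace cycles above, and the *-rate% perf-stat rows), the
middle column is an absolute difference in percentage points, not a
relative %change: e.g. apic_timer_interrupt goes 37.67 -> 34.22 and is
printed as "-3.5". A one-line sketch of that convention:

    # Hedged sketch: percentage-point delta for rows that are themselves
    # percentages (perf-profile cycles, *-miss-rate%, ...).
    def pp_delta(old, new):
        return new - old

    # perf-stat.i.iTLB-load-miss-rate%: 69.91 -> 72.76, printed "+2.8"
    print(f"{pp_delta(69.91, 72.76):+.2f}")   # +2.85
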
545.00 ± 25% -41.9% 316.75 ± 47% interrupts.36:IR-PCI-MSI.1572867-edge.eth0-TxRx-2
203.50 ± 9% +45.3% 295.75 ± 24% interrupts.39:IR-PCI-MSI.1572870-edge.eth0-TxRx-5
38166429 -11.0% 33961585 interrupts.CAL:Function_call_interrupts
432615 -10.9% 385659 interrupts.CPU0.CAL:Function_call_interrupts
434222 -11.2% 385591 interrupts.CPU1.CAL:Function_call_interrupts
432253 ± 2% -10.6% 386505 interrupts.CPU10.CAL:Function_call_interrupts
432492 ± 2% -11.0% 384996 interrupts.CPU11.CAL:Function_call_interrupts
450925 ± 2% -7.4% 417361 ± 2% interrupts.CPU11.TLB:TLB_shootdowns
431152 -10.4% 386198 interrupts.CPU12.CAL:Function_call_interrupts
7893 -37.4% 4939 ± 34% interrupts.CPU12.NMI:Non-maskable_interrupts
7893 -37.4% 4939 ± 34% interrupts.CPU12.PMI:Performance_monitoring_interrupts
431989 ± 2% -10.8% 385478 interrupts.CPU13.CAL:Function_call_interrupts
432141 -10.8% 385372 interrupts.CPU14.CAL:Function_call_interrupts
430309 -10.2% 386216 interrupts.CPU15.CAL:Function_call_interrupts
545.00 ± 25% -41.9% 316.75 ± 47% interrupts.CPU16.36:IR-PCI-MSI.1572867-edge.eth0-TxRx-2
432466 -10.4% 387465 interrupts.CPU16.CAL:Function_call_interrupts
432582 -11.0% 384925 interrupts.CPU17.CAL:Function_call_interrupts
434853 -11.0% 386855 interrupts.CPU18.CAL:Function_call_interrupts
203.50 ± 9% +45.3% 295.75 ± 24% interrupts.CPU19.39:IR-PCI-MSI.1572870-edge.eth0-TxRx-5
432706 -10.6% 386950 interrupts.CPU19.CAL:Function_call_interrupts
434387 -12.4% 380661 interrupts.CPU2.CAL:Function_call_interrupts
453229 -9.2% 411516 interrupts.CPU2.TLB:TLB_shootdowns
435346 -11.3% 386104 interrupts.CPU20.CAL:Function_call_interrupts
454122 -7.9% 418229 ± 2% interrupts.CPU20.TLB:TLB_shootdowns
434493 -10.9% 387125 interrupts.CPU21.CAL:Function_call_interrupts
621.00 ± 11% +64.5% 1021 ± 20% interrupts.CPU21.RES:Rescheduling_interrupts
425823 ± 3% -9.4% 385897 interrupts.CPU22.CAL:Function_call_interrupts
433468 -11.0% 385821 interrupts.CPU23.CAL:Function_call_interrupts
434913 -11.2% 386097 interrupts.CPU24.CAL:Function_call_interrupts
434374 -11.2% 385692 interrupts.CPU25.CAL:Function_call_interrupts
433527 -10.6% 387596 interrupts.CPU26.CAL:Function_call_interrupts
434308 -11.3% 385115 interrupts.CPU27.CAL:Function_call_interrupts
431181 -10.9% 384326 interrupts.CPU28.CAL:Function_call_interrupts
434448 -11.5% 384490 interrupts.CPU29.CAL:Function_call_interrupts
453474 ± 2% -8.0% 417056 interrupts.CPU29.TLB:TLB_shootdowns
433938 -11.1% 385867 interrupts.CPU3.CAL:Function_call_interrupts
2739 ± 14% -41.8% 1594 ± 27% interrupts.CPU3.RES:Rescheduling_interrupts
433773 -11.0% 386014 interrupts.CPU30.CAL:Function_call_interrupts
434250 -11.1% 385917 interrupts.CPU31.CAL:Function_call_interrupts
436173 -11.6% 385778 interrupts.CPU32.CAL:Function_call_interrupts
455271 ± 2% -8.0% 418794 ± 2% interrupts.CPU32.TLB:TLB_shootdowns
435787 -11.4% 386189 interrupts.CPU33.CAL:Function_call_interrupts
436438 -11.5% 386321 interrupts.CPU34.CAL:Function_call_interrupts
433917 -11.4% 384653 interrupts.CPU35.CAL:Function_call_interrupts
434362 -10.8% 387306 interrupts.CPU36.CAL:Function_call_interrupts
434686 -11.3% 385416 interrupts.CPU37.CAL:Function_call_interrupts
453983 ± 2% -7.8% 418472 interrupts.CPU37.TLB:TLB_shootdowns
433245 -10.7% 386871 interrupts.CPU38.CAL:Function_call_interrupts
430705 -10.0% 387585 interrupts.CPU39.CAL:Function_call_interrupts
434411 -11.2% 385547 interrupts.CPU4.CAL:Function_call_interrupts
1251 ± 9% +90.3% 2380 ± 32% interrupts.CPU4.RES:Rescheduling_interrupts
434631 -11.1% 386349 interrupts.CPU40.CAL:Function_call_interrupts
436453 -11.6% 385886 interrupts.CPU41.CAL:Function_call_interrupts
436514 -11.5% 386455 interrupts.CPU42.CAL:Function_call_interrupts
432629 ± 2% -11.0% 385234 interrupts.CPU43.CAL:Function_call_interrupts
1982 ± 40% -39.4% 1201 ± 44% interrupts.CPU43.RES:Rescheduling_interrupts
438403 -11.0% 390257 interrupts.CPU44.CAL:Function_call_interrupts
437673 -10.6% 391240 interrupts.CPU45.CAL:Function_call_interrupts
436956 -10.0% 393070 interrupts.CPU46.CAL:Function_call_interrupts
436535 ± 2% -10.3% 391586 interrupts.CPU47.CAL:Function_call_interrupts
436404 ± 2% -10.3% 391331 interrupts.CPU48.CAL:Function_call_interrupts
440924 -11.0% 392351 interrupts.CPU49.CAL:Function_call_interrupts
455336 ± 2% -8.0% 418988 ± 2% interrupts.CPU49.TLB:TLB_shootdowns
429597 -10.2% 385817 interrupts.CPU5.CAL:Function_call_interrupts
2657 ± 16% -57.7% 1125 ± 38% interrupts.CPU5.RES:Rescheduling_interrupts
432290 -10.8% 385504 interrupts.CPU50.CAL:Function_call_interrupts
430785 ± 2% -10.2% 386960 interrupts.CPU51.CAL:Function_call_interrupts
433428 ± 2% -11.0% 385627 interrupts.CPU52.CAL:Function_call_interrupts
452795 ± 2% -7.6% 418273 interrupts.CPU52.TLB:TLB_shootdowns
432275 -10.8% 385758 interrupts.CPU53.CAL:Function_call_interrupts
432401 -10.8% 385721 interrupts.CPU54.CAL:Function_call_interrupts
433685 -11.2% 384984 interrupts.CPU55.CAL:Function_call_interrupts
434907 -11.3% 385865 interrupts.CPU56.CAL:Function_call_interrupts
454111 ± 2% -7.7% 419097 ± 2% interrupts.CPU56.TLB:TLB_shootdowns
433831 ± 2% -11.6% 383393 interrupts.CPU57.CAL:Function_call_interrupts
452988 ± 2% -8.1% 416171 interrupts.CPU57.TLB:TLB_shootdowns
434640 -11.0% 386694 interrupts.CPU58.CAL:Function_call_interrupts
433779 -11.1% 385790 ± 2% interrupts.CPU59.CAL:Function_call_interrupts
433594 -11.3% 384456 ± 2% interrupts.CPU6.CAL:Function_call_interrupts
433902 -11.5% 383949 interrupts.CPU60.CAL:Function_call_interrupts
453607 -8.1% 416661 interrupts.CPU60.TLB:TLB_shootdowns
432735 -10.7% 386644 interrupts.CPU61.CAL:Function_call_interrupts
430100 ± 2% -10.3% 385648 interrupts.CPU62.CAL:Function_call_interrupts
1361 ± 17% -46.0% 734.75 ± 21% interrupts.CPU62.RES:Rescheduling_interrupts
430135 ± 2% -10.4% 385243 interrupts.CPU63.CAL:Function_call_interrupts
433506 -11.4% 384298 interrupts.CPU64.CAL:Function_call_interrupts
428442 -10.6% 383084 interrupts.CPU65.CAL:Function_call_interrupts
934.00 +41.7% 1323 ± 6% interrupts.CPU65.RES:Rescheduling_interrupts
429851 -10.2% 385861 ± 2% interrupts.CPU66.CAL:Function_call_interrupts
432662 -10.6% 386768 interrupts.CPU67.CAL:Function_call_interrupts
1137 ± 30% -37.3% 713.25 ± 22% interrupts.CPU67.RES:Rescheduling_interrupts
435998 -11.2% 387345 interrupts.CPU68.CAL:Function_call_interrupts
433910 -11.1% 385547 ± 2% interrupts.CPU69.CAL:Function_call_interrupts
433665 -11.1% 385376 interrupts.CPU7.CAL:Function_call_interrupts
433964 -11.2% 385200 interrupts.CPU70.CAL:Function_call_interrupts
434280 -11.2% 385800 interrupts.CPU71.CAL:Function_call_interrupts
436755 -12.1% 383884 interrupts.CPU72.CAL:Function_call_interrupts
434534 -11.1% 386259 interrupts.CPU73.CAL:Function_call_interrupts
435838 -11.3% 386437 interrupts.CPU74.CAL:Function_call_interrupts
849.50 +22.9% 1044 ± 15% interrupts.CPU74.RES:Rescheduling_interrupts
431545 -11.2% 383340 interrupts.CPU75.CAL:Function_call_interrupts
435028 -11.6% 384709 interrupts.CPU76.CAL:Function_call_interrupts
434623 -11.7% 383907 interrupts.CPU77.CAL:Function_call_interrupts
432569 -11.0% 384848 interrupts.CPU78.CAL:Function_call_interrupts
433842 -12.4% 379948 ± 3% interrupts.CPU79.CAL:Function_call_interrupts
453571 -9.3% 411502 ± 3% interrupts.CPU79.TLB:TLB_shootdowns
434907 -11.6% 384337 ± 2% interrupts.CPU8.CAL:Function_call_interrupts
435116 -11.4% 385431 interrupts.CPU80.CAL:Function_call_interrupts
677.50 ± 25% -24.1% 514.00 ± 28% interrupts.CPU80.RES:Rescheduling_interrupts
432192 -11.0% 384794 interrupts.CPU81.CAL:Function_call_interrupts
433320 -10.7% 387090 interrupts.CPU82.CAL:Function_call_interrupts
1174 ± 9% -50.1% 586.50 ± 29% interrupts.CPU82.RES:Rescheduling_interrupts
435824 -11.9% 384143 interrupts.CPU83.CAL:Function_call_interrupts
434477 -11.0% 386568 interrupts.CPU84.CAL:Function_call_interrupts
432705 -10.4% 387511 interrupts.CPU85.CAL:Function_call_interrupts
434758 ± 2% -11.5% 384778 interrupts.CPU86.CAL:Function_call_interrupts
454777 ± 2% -7.9% 418646 interrupts.CPU86.TLB:TLB_shootdowns
432224 -11.3% 383338 interrupts.CPU87.CAL:Function_call_interrupts
432930 -11.2% 384537 interrupts.CPU9.CAL:Function_call_interrupts
***************************************************************************************************
lkp-bdw-ex1: 192 threads Intel(R) Xeon(R) CPU E7-8890 v4 @ 2.20GHz with 512G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.2/100%/debian-x86_64-2018-04-03.cgz/300s/lkp-bdw-ex1/all_utime/reaim/0xb00002e
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
1:4 -25% :4 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=schedule_tail/0x
%stddev %change %stddev
\ | \
0.08 -10.2% 0.07 reaim.child_systime
753802 -11.9% 663774 reaim.jobs_per_min
3926 -11.9% 3457 reaim.jobs_per_min_child
97.65 -3.8% 93.97 reaim.jti
806247 -6.3% 755738 reaim.max_jobs_per_min
1.56 +13.6% 1.78 reaim.parent_time
1.88 ± 3% +194.4% 5.53 ± 4% reaim.std_dev_percent
0.03 ± 3% +217.2% 0.09 ± 4% reaim.std_dev_time
104138 +86.0% 193748 reaim.time.involuntary_context_switches
1074884 +2.8% 1105377 reaim.time.minor_page_faults
7721 -6.3% 7236 reaim.time.percent_of_cpu_this_job_got
23369 -6.5% 21847 reaim.time.user_time
66048 -1.7% 64924 reaim.time.voluntary_context_switches
1612800 -5.7% 1521600 reaim.workload
1.448e+08 ± 4% +75.9% 2.548e+08 ± 27% cpuidle.POLL.time
449918 ± 4% -25.3% 336278 ± 15% numa-numastat.node1.local_node
462355 ± 2% -23.2% 354953 ± 14% numa-numastat.node1.numa_hit
59.00 +5.1% 62.00 vmstat.cpu.id
40.00 -7.5% 37.00 vmstat.cpu.us
2033 +18.8% 2415 vmstat.system.cs
1092 -5.8% 1029 turbostat.Avg_MHz
12.57 +17.7% 14.79 turbostat.CPU%c1
2.17 ± 5% -26.9% 1.59 ± 13% turbostat.Pkg%pc6
20.95 +7.4% 22.50 ± 4% turbostat.RAMWatt
22388 ± 34% -62.8% 8322 ± 85% numa-vmstat.node1.nr_active_anon
6845 ± 15% -38.8% 4189 ± 10% numa-vmstat.node1.nr_slab_reclaimable
17978 ± 3% -26.0% 13303 ± 10% numa-vmstat.node1.nr_slab_unreclaimable
22388 ± 34% -62.8% 8322 ± 85% numa-vmstat.node1.nr_zone_active_anon
2740 ± 62% -95.1% 135.50 ± 77% numa-vmstat.node3.nr_inactive_anon
2921 ± 59% -91.1% 259.00 ± 31% numa-vmstat.node3.nr_shmem
2740 ± 62% -95.1% 135.50 ± 77% numa-vmstat.node3.nr_zone_inactive_anon
101194 ± 30% -53.7% 46870 ± 62% numa-meminfo.node1.Active
89611 ± 34% -62.9% 33266 ± 85% numa-meminfo.node1.Active(anon)
566915 ± 3% -18.3% 463403 ± 8% numa-meminfo.node1.MemUsed
27379 ± 15% -38.8% 16759 ± 10% numa-meminfo.node1.SReclaimable
71912 ± 3% -26.0% 53214 ± 10% numa-meminfo.node1.SUnreclaim
99292 ± 5% -29.5% 69973 ± 8% numa-meminfo.node1.Slab
10962 ± 62% -95.0% 543.25 ± 77% numa-meminfo.node3.Inactive(anon)
11687 ± 59% -91.1% 1037 ± 31% numa-meminfo.node3.Shmem
77999 -3.5% 75239 proc-vmstat.nr_slab_unreclaimable
38308 ± 13% +242.0% 131006 ± 2% proc-vmstat.numa_hint_faults
780.75 ±100% +571.6% 5243 ± 75% proc-vmstat.numa_hint_faults_local
1910995 +2.1% 1950408 proc-vmstat.numa_hit
1854894 +2.1% 1894321 proc-vmstat.numa_local
40180 ± 21% +204.9% 122514 ± 4% proc-vmstat.numa_pages_migrated
145766 ± 44% +152.1% 367445 proc-vmstat.numa_pte_updates
2082332 +1.5% 2113859 proc-vmstat.pgalloc_normal
2106944 +1.5% 2139589 proc-vmstat.pgfault
2018610 +1.6% 2050051 proc-vmstat.pgfree
40180 ± 21% +204.9% 122514 ± 4% proc-vmstat.pgmigrate_success
9772 ± 7% +48.0% 14465 ± 12% softirqs.CPU0.SCHED
62268 +13.2% 70466 softirqs.CPU0.TIMER
14263 ± 6% +19.6% 17059 ± 7% softirqs.CPU148.RCU
53614 +10.2% 59094 ± 6% softirqs.CPU148.TIMER
15228 ± 3% +19.8% 18246 ± 14% softirqs.CPU37.RCU
13686 ± 9% +19.4% 16346 ± 7% softirqs.CPU53.RCU
14747 ± 9% +15.4% 17023 ± 7% softirqs.CPU54.RCU
14975 ± 4% +20.3% 18017 ± 18% softirqs.CPU55.RCU
14654 ± 4% +12.7% 16510 ± 9% softirqs.CPU59.RCU
14608 ± 4% +9.0% 15927 ± 8% softirqs.CPU60.RCU
14141 +13.6% 16067 ± 7% softirqs.CPU61.RCU
13788 ± 3% +28.1% 17667 ± 19% softirqs.CPU63.RCU
14460 ± 7% -16.3% 12105 ± 9% softirqs.CPU96.RCU
3871 ± 3% -23.7% 2953 ± 7% slabinfo.biovec-64.active_objs
3871 ± 3% -23.7% 2953 ± 7% slabinfo.biovec-64.num_objs
4456 ± 4% -66.5% 1490 ± 3% slabinfo.buffer_head.active_objs
4456 ± 4% -66.5% 1490 ± 3% slabinfo.buffer_head.num_objs
9327 ± 7% -49.8% 4681 ± 20% slabinfo.eventpoll_epi.active_objs
9327 ± 7% -49.8% 4681 ± 20% slabinfo.eventpoll_epi.num_objs
8161 ± 7% -49.8% 4096 ± 20% slabinfo.eventpoll_pwq.active_objs
8161 ± 7% -49.8% 4096 ± 20% slabinfo.eventpoll_pwq.num_objs
870.00 ± 10% -22.0% 678.25 ± 5% slabinfo.file_lock_cache.active_objs
870.00 ± 10% -22.0% 678.25 ± 5% slabinfo.file_lock_cache.num_objs
11826 -12.3% 10377 ± 4% slabinfo.shmem_inode_cache.active_objs
11826 -12.3% 10377 ± 4% slabinfo.shmem_inode_cache.num_objs
7919 ± 2% -16.7% 6596 ± 2% slabinfo.sighand_cache.active_objs
8073 ± 2% -18.0% 6616 ± 2% slabinfo.sighand_cache.num_objs
13382 -20.2% 10683 slabinfo.signal_cache.active_objs
13441 -20.4% 10695 slabinfo.signal_cache.num_objs
9837 +26.3% 12423 slabinfo.sigqueue.active_objs
9837 +26.3% 12423 slabinfo.sigqueue.num_objs
6269 ± 3% -20.0% 5015 ± 6% slabinfo.sock_inode_cache.active_objs
6269 ± 3% -20.0% 5015 ± 6% slabinfo.sock_inode_cache.num_objs
16.88 ± 23% -8.9 7.94 ± 20% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
16.79 ± 23% -8.9 7.88 ± 20% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry
11.45 ± 34% -7.5 3.90 ± 28% perf-profile.calltrace.cycles-pp.irq_enter.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle
11.06 ± 35% -7.4 3.63 ± 29% perf-profile.calltrace.cycles-pp.tick_irq_enter.irq_enter.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
9.53 ± 42% -6.6 2.90 ± 22% perf-profile.calltrace.cycles-pp.tick_do_update_jiffies64.tick_irq_enter.irq_enter.smp_apic_timer_interrupt.apic_timer_interrupt
9.49 ± 42% -6.6 2.88 ± 22% perf-profile.calltrace.cycles-pp._raw_spin_lock.tick_do_update_jiffies64.tick_irq_enter.irq_enter.smp_apic_timer_interrupt
8.59 ± 41% -6.2 2.44 ± 29% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.tick_do_update_jiffies64.tick_irq_enter.irq_enter
6.58 ± 4% -1.5 5.11 ± 11% perf-profile.calltrace.cycles-pp.apic_timer_interrupt
6.38 ± 4% -1.4 4.93 ± 11% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt
5.84 ± 4% -1.4 4.48 ± 12% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
4.18 ± 6% -1.0 3.16 ± 12% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
2.96 ± 3% -0.8 2.14 ± 12% perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
2.70 ± 4% -0.8 1.91 ± 13% perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt
3.17 ± 3% -0.8 2.40 ± 14% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle
2.67 ± 4% -0.8 1.90 ± 13% perf-profile.calltrace.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
1.98 ± 7% -0.8 1.20 ± 13% perf-profile.calltrace.cycles-pp.task_tick_fair.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer
2.38 ± 5% -0.7 1.65 ± 13% perf-profile.calltrace.cycles-pp.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
1.97 ± 4% -0.5 1.49 ± 12% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
1.59 ± 7% -0.5 1.13 ± 17% perf-profile.calltrace.cycles-pp.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle
1.39 ± 5% -0.4 0.97 ± 17% perf-profile.calltrace.cycles-pp.__tick_nohz_idle_enter.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
0.68 ± 8% -0.4 0.29 ±100% perf-profile.calltrace.cycles-pp.ktime_get_update_offsets_now.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
1.19 ± 6% -0.4 0.81 ± 18% perf-profile.calltrace.cycles-pp.tick_nohz_stop_sched_tick.__tick_nohz_idle_enter.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
0.82 ± 6% -0.2 0.62 ± 11% perf-profile.calltrace.cycles-pp.cpuidle_enter_state
86.66 +3.0 89.68 perf-profile.calltrace.cycles-pp.secondary_startup_64
86.09 +3.1 89.20 perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
86.09 +3.1 89.20 perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
86.09 +3.1 89.20 perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
84.14 +3.5 87.62 perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
9.63 ± 15% +5.5 15.11 ± 20% perf-profile.calltrace.cycles-pp.poll_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
56.81 ± 6% +7.1 63.92 ± 2% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
0.00 ± 22% +4.8e+24% 71.37 ±105% sched_debug.cfs_rq:/.MIN_vruntime.stddev
612.74 ± 12% +597.4% 4272 ± 9% sched_debug.cfs_rq:/.exec_clock.stddev
433198 ± 28% -50.6% 214019 ± 58% sched_debug.cfs_rq:/.load.max
0.00 ± 22% +4.8e+24% 71.37 ±105% sched_debug.cfs_rq:/.max_vruntime.stddev
72913 ± 11% +344.0% 323724 ± 12% sched_debug.cfs_rq:/.min_vruntime.stddev
370.32 ± 23% -45.8% 200.85 ± 64% sched_debug.cfs_rq:/.runnable_load_avg.max
432768 ± 28% -52.6% 204920 ± 65% sched_debug.cfs_rq:/.runnable_weight.max
70393 ±132% -862.4% -536655 sched_debug.cfs_rq:/.spread0.avg
333239 ± 42% -70.2% 99452 ± 81% sched_debug.cfs_rq:/.spread0.max
-139349 +878.6% -1363651 sched_debug.cfs_rq:/.spread0.min
72941 ± 11% +344.6% 324292 ± 12% sched_debug.cfs_rq:/.spread0.stddev
276.49 ± 35% +61.9% 447.55 ± 28% sched_debug.cfs_rq:/.util_avg.avg
109.18 ± 13% +77.2% 193.51 ± 19% sched_debug.cfs_rq:/.util_avg.stddev
24.69 ± 30% -43.1% 14.05 ± 18% sched_debug.cpu.clock.stddev
24.69 ± 30% -43.1% 14.05 ± 18% sched_debug.cpu.clock_task.stddev
304.28 ± 18% -42.8% 173.94 ± 59% sched_debug.cpu.cpu_load[3].max
22.83 ± 18% -40.2% 13.66 ± 54% sched_debug.cpu.cpu_load[3].stddev
2.88 ± 9% +14.8% 3.30 ± 7% sched_debug.cpu.cpu_load[4].avg
231.47 ± 14% -35.1% 150.19 ± 45% sched_debug.cpu.cpu_load[4].max
17.23 ± 14% -31.6% 11.79 ± 43% sched_debug.cpu.cpu_load[4].stddev
433198 ± 28% -50.6% 214019 ± 58% sched_debug.cpu.load.max
37141 ± 29% -47.0% 19685 ± 56% sched_debug.cpu.load.stddev
0.00 ± 29% -26.6% 0.00 ± 16% sched_debug.cpu.next_balance.stddev
1243 ± 7% +228.3% 4082 ± 12% sched_debug.cpu.nr_load_updates.stddev
0.15 ± 21% +37.7% 0.20 ± 31% sched_debug.cpu.nr_running.stddev
926.31 ± 7% -15.7% 780.48 ± 4% sched_debug.cpu.nr_switches.min
1739 ± 8% +24.6% 2167 ± 8% sched_debug.cpu.nr_switches.stddev
-6.12 +51.0% -9.25 sched_debug.cpu.nr_uninterruptible.min
3408 ± 12% +120.8% 7525 ± 20% sched_debug.cpu.sched_goidle.max
187.60 ± 10% -23.7% 143.10 ± 10% sched_debug.cpu.sched_goidle.min
501.83 ± 2% +49.4% 749.50 ± 9% sched_debug.cpu.sched_goidle.stddev
4105 ± 11% +145.9% 10094 ± 12% sched_debug.cpu.ttwu_count.max
591.57 ± 5% +66.5% 985.13 ± 11% sched_debug.cpu.ttwu_count.stddev
177.60 ± 11% +60.5% 285.13 ± 7% sched_debug.cpu.ttwu_local.stddev
0.02 ± 27% +33.0% 0.03 ± 12% sched_debug.rt_rq:/.rt_time.max
0.00 ± 20% +51.7% 0.00 ± 17% sched_debug.rt_rq:/.rt_time.stddev
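
Note: the "+4.8e+24%" on the MIN_vruntime/max_vruntime stddev rows above
is a divide-by-near-zero artifact, not a real regression: the base value
prints as 0.00 but is merely tiny, so the relative change explodes.
Reusing the pct_change sketch from earlier, with the base assumed to be
~1.5e-21 (back-computed for illustration; the report only shows 0.00):

    # Illustration only: the base of 1.5e-21 is an assumption recovered
    # from the printed +4.8e+24%, not a value taken from the report.
    def pct_change(old, new):
        return (new - old) / old * 100.0

    print(f"{pct_change(1.5e-21, 71.37):.1e}")   # ~4.8e+24
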
6.249e+09 -4.9% 5.943e+09 perf-stat.i.branch-instructions
733533 ± 4% -12.9% 638917 ± 3% perf-stat.i.cache-misses
2001 +19.5% 2392 perf-stat.i.context-switches
2.108e+11 -5.9% 1.983e+11 perf-stat.i.cpu-cycles
124.23 +51.2% 187.86 perf-stat.i.cpu-migrations
404084 ± 2% -17.4% 333693 ± 3% perf-stat.i.cycles-between-cache-misses
4.2e+10 -5.1% 3.984e+10 perf-stat.i.dTLB-loads
0.06 ± 3% -0.0 0.06 ± 2% perf-stat.i.dTLB-store-miss-rate%
787597 -2.2% 770036 perf-stat.i.dTLB-store-misses
2.75e+10 -5.2% 2.606e+10 perf-stat.i.dTLB-stores
83.86 -4.9 78.94 perf-stat.i.iTLB-load-miss-rate%
989375 +1.9% 1008403 perf-stat.i.iTLB-load-misses
153254 ± 6% +31.7% 201890 ± 2% perf-stat.i.iTLB-loads
1.251e+11 -5.3% 1.184e+11 perf-stat.i.instructions
406474 ± 2% -13.6% 351010 perf-stat.i.instructions-per-iTLB-miss
6821 +1.9% 6953 perf-stat.i.minor-faults
279978 -9.6% 253077 ± 2% perf-stat.i.node-load-misses
6824 +1.9% 6954 perf-stat.i.page-faults
0.63 ± 2% +7.1% 0.68 perf-stat.overall.MPKI
0.26 +0.0 0.28 ± 3% perf-stat.overall.branch-miss-rate%
0.94 ± 6% -0.1 0.80 ± 3% perf-stat.overall.cache-miss-rate%
284356 ± 4% +8.2% 307605 ± 3% perf-stat.overall.cycles-between-cache-misses
0.01 +0.0 0.01 perf-stat.overall.dTLB-load-miss-rate%
0.00 +0.0 0.00 perf-stat.overall.dTLB-store-miss-rate%
86.63 -3.2 83.38 perf-stat.overall.iTLB-load-miss-rate%
124752 -7.0% 115987 perf-stat.overall.instructions-per-iTLB-miss
6.168e+09 -4.8% 5.869e+09 perf-stat.ps.branch-instructions
734409 ± 4% -13.1% 638302 ± 2% perf-stat.ps.cache-misses
1996 +19.2% 2380 perf-stat.ps.context-switches
2.084e+11 -5.9% 1.962e+11 perf-stat.ps.cpu-cycles
123.49 +50.6% 185.98 perf-stat.ps.cpu-migrations
4.154e+10 -5.1% 3.942e+10 perf-stat.ps.dTLB-loads
789367 -2.3% 771264 perf-stat.ps.dTLB-store-misses
2.72e+10 -5.2% 2.579e+10 perf-stat.ps.dTLB-stores
990459 +1.9% 1008890 perf-stat.ps.iTLB-load-misses
152951 ± 6% +31.5% 201083 ± 2% perf-stat.ps.iTLB-loads
1.236e+11 -5.3% 1.17e+11 perf-stat.ps.instructions
6818 +1.9% 6949 perf-stat.ps.minor-faults
280027 -9.8% 252575 ± 2% perf-stat.ps.node-load-misses
6819 +1.9% 6949 perf-stat.ps.page-faults
3.747e+13 -5.6% 3.536e+13 perf-stat.total.instructions
156.25 ± 4% +253.4% 552.25 ± 78% interrupts.132:PCI-MSI.1574914-edge.eth3-TxRx-2
2522 ± 24% +31.5% 3317 ± 3% interrupts.CPU0.NMI:Non-maskable_interrupts
2522 ± 24% +31.5% 3317 ± 3% interrupts.CPU0.PMI:Performance_monitoring_interrupts
9419 ± 4% +114.1% 20162 ± 15% interrupts.CPU0.RES:Rescheduling_interrupts
2454 ± 23% +27.4% 3126 ± 5% interrupts.CPU10.NMI:Non-maskable_interrupts
2454 ± 23% +27.4% 3126 ± 5% interrupts.CPU10.PMI:Performance_monitoring_interrupts
2438 ± 23% +44.2% 3516 ± 22% interrupts.CPU11.NMI:Non-maskable_interrupts
2438 ± 23% +44.2% 3516 ± 22% interrupts.CPU11.PMI:Performance_monitoring_interrupts
2807 ± 2% +11.9% 3141 ± 4% interrupts.CPU119.NMI:Non-maskable_interrupts
2807 ± 2% +11.9% 3141 ± 4% interrupts.CPU119.PMI:Performance_monitoring_interrupts
2800 ± 2% +12.0% 3135 ± 5% interrupts.CPU12.NMI:Non-maskable_interrupts
2800 ± 2% +12.0% 3135 ± 5% interrupts.CPU12.PMI:Performance_monitoring_interrupts
2800 +13.9% 3190 ± 6% interrupts.CPU120.NMI:Non-maskable_interrupts
2800 +13.9% 3190 ± 6% interrupts.CPU120.PMI:Performance_monitoring_interrupts
62.25 ± 47% +101.6% 125.50 ± 39% interrupts.CPU121.RES:Rescheduling_interrupts
2800 ± 2% +12.2% 3143 ± 5% interrupts.CPU13.NMI:Non-maskable_interrupts
2800 ± 2% +12.2% 3143 ± 5% interrupts.CPU13.PMI:Performance_monitoring_interrupts
2770 +14.5% 3171 ± 6% interrupts.CPU131.NMI:Non-maskable_interrupts
2770 +14.5% 3171 ± 6% interrupts.CPU131.PMI:Performance_monitoring_interrupts
63.00 ± 46% +174.2% 172.75 ± 33% interrupts.CPU133.RES:Rescheduling_interrupts
73.75 ± 27% +1000.7% 811.75 ±142% interrupts.CPU134.RES:Rescheduling_interrupts
118.25 ± 22% +35.7% 160.50 ± 16% interrupts.CPU137.RES:Rescheduling_interrupts
108.50 ± 35% +241.0% 370.00 ± 44% interrupts.CPU141.RES:Rescheduling_interrupts
2791 ± 2% +22.5% 3420 ± 5% interrupts.CPU145.NMI:Non-maskable_interrupts
2791 ± 2% +22.5% 3420 ± 5% interrupts.CPU145.PMI:Performance_monitoring_interrupts
2776 ± 2% +14.8% 3188 ± 4% interrupts.CPU15.NMI:Non-maskable_interrupts
2776 ± 2% +14.8% 3188 ± 4% interrupts.CPU15.PMI:Performance_monitoring_interrupts
2813 ± 2% +15.5% 3248 ± 4% interrupts.CPU16.NMI:Non-maskable_interrupts
2813 ± 2% +15.5% 3248 ± 4% interrupts.CPU16.PMI:Performance_monitoring_interrupts
124.50 ± 40% +134.5% 292.00 ± 38% interrupts.CPU16.RES:Rescheduling_interrupts
2801 ± 2% +11.4% 3119 ± 3% interrupts.CPU17.NMI:Non-maskable_interrupts
2801 ± 2% +11.4% 3119 ± 3% interrupts.CPU17.PMI:Performance_monitoring_interrupts
2802 ± 2% +17.4% 3288 ± 9% interrupts.CPU183.NMI:Non-maskable_interrupts
2802 ± 2% +17.4% 3288 ± 9% interrupts.CPU183.PMI:Performance_monitoring_interrupts
44.00 ± 31% +383.5% 212.75 ± 77% interrupts.CPU183.RES:Rescheduling_interrupts
2809 +17.5% 3301 ± 10% interrupts.CPU189.NMI:Non-maskable_interrupts
2809 +17.5% 3301 ± 10% interrupts.CPU189.PMI:Performance_monitoring_interrupts
2848 ± 3% +11.6% 3179 interrupts.CPU19.NMI:Non-maskable_interrupts
2848 ± 3% +11.6% 3179 interrupts.CPU19.PMI:Performance_monitoring_interrupts
82.00 ± 45% +145.7% 201.50 ± 53% interrupts.CPU190.RES:Rescheduling_interrupts
2797 ± 2% +25.5% 3509 ± 8% interrupts.CPU191.NMI:Non-maskable_interrupts
2797 ± 2% +25.5% 3509 ± 8% interrupts.CPU191.PMI:Performance_monitoring_interrupts
156.25 ± 4% +253.4% 552.25 ± 78% interrupts.CPU2.132:PCI-MSI.1574914-edge.eth3-TxRx-2
668.50 ± 16% +290.0% 2607 ± 31% interrupts.CPU2.RES:Rescheduling_interrupts
2852 +14.2% 3256 ± 7% interrupts.CPU23.NMI:Non-maskable_interrupts
2852 +14.2% 3256 ± 7% interrupts.CPU23.PMI:Performance_monitoring_interrupts
190.50 ± 18% +169.8% 514.00 ± 91% interrupts.CPU26.RES:Rescheduling_interrupts
570.25 ± 19% +109.6% 1195 ± 15% interrupts.CPU3.RES:Rescheduling_interrupts
87.25 ± 29% +769.6% 758.75 ±119% interrupts.CPU35.RES:Rescheduling_interrupts
96.00 ± 27% +182.0% 270.75 ± 33% interrupts.CPU38.RES:Rescheduling_interrupts
1619 ± 96% -90.1% 160.00 ±101% interrupts.CPU49.RES:Rescheduling_interrupts
157.75 ± 25% -38.5% 97.00 ± 29% interrupts.CPU59.RES:Rescheduling_interrupts
140.50 ± 33% -46.6% 75.00 ± 67% interrupts.CPU62.RES:Rescheduling_interrupts
152.25 ± 39% +160.8% 397.00 ± 34% interrupts.CPU72.RES:Rescheduling_interrupts
176.25 ± 29% -50.1% 88.00 ± 33% interrupts.CPU86.RES:Rescheduling_interrupts
60.50 ± 47% +114.0% 129.50 ± 35% interrupts.CPU87.RES:Rescheduling_interrupts
2799 ± 2% +23.8% 3466 ± 7% interrupts.CPU89.NMI:Non-maskable_interrupts
2799 ± 2% +23.8% 3466 ± 7% interrupts.CPU89.PMI:Performance_monitoring_interrupts
2813 ± 3% +17.3% 3300 ± 8% interrupts.CPU95.NMI:Non-maskable_interrupts
2813 ± 3% +17.3% 3300 ± 8% interrupts.CPU95.PMI:Performance_monitoring_interrupts
2834 +14.9% 3256 ± 3% interrupts.CPU96.NMI:Non-maskable_interrupts
2834 +14.9% 3256 ± 3% interrupts.CPU96.PMI:Performance_monitoring_interrupts
51108 +36.3% 69684 ± 2% interrupts.RES:Rescheduling_interrupts
***************************************************************************************************
lkp-bdw-ep3: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 64G memory
=========================================================================================
compiler/cpufreq_governor/ipc/kconfig/mode/nr_threads/rootfs/tbox_group/testcase/ucode:
gcc-7/performance/pipe/x86_64-rhel-7.2/process/1600%/debian-x86_64-2018-04-03.cgz/lkp-bdw-ep3/hackbench/0xb00002e
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:4 25% 1:4 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=sched_slice/0x
1:4 -25% :4 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=schedule/0x
1:4 -25% :4 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=schedule_tail/0x
%stddev %change %stddev
\ | \
494809 -7.3% 458896 hackbench.throughput
612.95 +1.8% 624.20 hackbench.time.elapsed_time
612.95 +1.8% 624.20 hackbench.time.elapsed_time.max
4.886e+08 +88.4% 9.206e+08 ± 2% hackbench.time.involuntary_context_switches
48035824 -4.2% 46000152 hackbench.time.minor_page_faults
7176 +1.1% 7255 hackbench.time.percent_of_cpu_this_job_got
36530 +3.8% 37917 hackbench.time.system_time
7459 -1.2% 7372 hackbench.time.user_time
8.862e+08 +62.7% 1.442e+09 hackbench.time.voluntary_context_switches
2.534e+09 -4.2% 2.429e+09 hackbench.workload
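
As a sanity check, the hackbench time.* context-switch counters above are
consistent with perf-stat.total.context-switches further down, assuming
both count the same scheduler events: voluntary + involuntary switches
land within ~0.5% of the perf-stat totals for both kernels:

    # Cross-check with values copied from this report.
    for label, vol, invol, perf_total in [
        ("24d0c1d6e6", 8.862e8, 4.886e8, 1.378e9),
        ("2c83362734", 1.442e9, 9.206e8, 2.371e9),
    ]:
        s = vol + invol
        print(label, f"{s:.4e}", f"{(s - perf_total) / perf_total:+.2%}")

    # -> 1.3748e+09 -0.23% and 2.3626e+09 -0.35%
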
2794 ± 8% -9.4% 2530 ± 2% boot-time.idle
62869 ± 8% -27.5% 45585 ± 26% numa-meminfo.node1.SReclaimable
15712 ± 8% -27.4% 11404 ± 26% numa-vmstat.node1.nr_slab_reclaimable
66.75 +2.6% 68.50 vmstat.cpu.sy
2236124 +69.2% 3782640 vmstat.system.cs
151493 +64.8% 249683 vmstat.system.in
25739582 +28.3% 33017943 ± 17% cpuidle.C1.time
2600683 +89.6% 4931044 ± 21% cpuidle.C1.usage
1.177e+08 -23.5% 90006774 ± 3% cpuidle.C3.time
400081 -26.3% 294861 cpuidle.C3.usage
28889782 ± 37% +210.6% 89736479 ± 39% cpuidle.POLL.time
32231 ± 2% +139.5% 77205 ± 21% cpuidle.POLL.usage
2274 +1.7% 2314 turbostat.Avg_MHz
2598175 +89.7% 4928921 ± 21% turbostat.C1
399873 -26.3% 294659 turbostat.C3
0.22 -0.1 0.16 ± 2% turbostat.C3%
5.81 ± 2% -10.8% 5.18 turbostat.CPU%c1
0.12 -33.3% 0.08 ± 15% turbostat.CPU%c3
93461424 +67.4% 1.564e+08 ± 2% turbostat.IRQ
2.65 ± 6% +52.8% 4.05 ± 8% turbostat.Pkg%pc2
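
The turbostat C-state entry counts appear to be read from the same
cpuidle counters reported above: turbostat.C1 (2598175) tracks
cpuidle.C1.usage (2600683) to within ~0.1%, and likewise turbostat.C3
(399873) vs cpuidle.C3.usage (400081). A one-line check:

    # Relative gap between turbostat.C1 and cpuidle.C1.usage:
    print(f"{(2598175 - 2600683) / 2600683:+.2%}")   # -0.10%
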
1541845 +14.3% 1762636 meminfo.Active
1480497 +14.9% 1701349 meminfo.Active(anon)
1440429 ± 2% +11.9% 1612079 meminfo.AnonPages
37373539 ± 2% +16.5% 43542569 meminfo.Committed_AS
18778 +11.2% 20883 ± 3% meminfo.Inactive(anon)
510694 ± 2% +14.5% 584945 meminfo.KernelStack
6655144 +12.7% 7497435 meminfo.Memused
1212773 +16.3% 1410085 meminfo.PageTables
1202917 +12.2% 1349397 meminfo.SUnreclaim
54584 ± 12% +64.5% 89814 ± 10% meminfo.Shmem
1318488 +11.2% 1465692 meminfo.Slab
55493 ± 2% -28.9% 39468 ± 2% meminfo.max_used_kB
369842 ± 2% +14.7% 424031 proc-vmstat.nr_active_anon
359829 ± 2% +11.7% 401857 proc-vmstat.nr_anon_pages
1491001 -1.4% 1470310 proc-vmstat.nr_dirty_background_threshold
2985649 -1.4% 2944217 proc-vmstat.nr_dirty_threshold
234629 +3.7% 243333 proc-vmstat.nr_file_pages
14800700 -1.4% 14593509 proc-vmstat.nr_free_pages
4698 +10.9% 5209 ± 3% proc-vmstat.nr_inactive_anon
510506 ± 2% +14.3% 583465 proc-vmstat.nr_kernel_stack
6228 +2.9% 6408 proc-vmstat.nr_mapped
303038 ± 2% +15.9% 351178 proc-vmstat.nr_page_table_pages
13683 ± 13% +63.7% 22404 ± 11% proc-vmstat.nr_shmem
301079 ± 2% +12.1% 337558 proc-vmstat.nr_slab_unreclaimable
369842 ± 2% +14.7% 424031 proc-vmstat.nr_zone_active_anon
4698 +10.9% 5209 ± 3% proc-vmstat.nr_zone_inactive_anon
403.75 ±123% +1057.7% 4674 ±122% proc-vmstat.numa_hint_faults
47.25 ± 81% +4297.4% 2077 ±112% proc-vmstat.numa_hint_faults_local
6.147e+08 -7.3% 5.701e+08 proc-vmstat.numa_hit
6.147e+08 -7.3% 5.701e+08 proc-vmstat.numa_local
11583 ± 16% +71.6% 19875 ± 14% proc-vmstat.pgactivate
6.224e+08 -7.2% 5.774e+08 proc-vmstat.pgalloc_normal
48581147 -3.7% 46773973 proc-vmstat.pgfault
6.221e+08 -7.2% 5.773e+08 proc-vmstat.pgfree
400811 ± 7% -48.8% 205202 ± 51% sched_debug.cfs_rq:/.load.max
63923 ± 21% -56.9% 27565 ± 49% sched_debug.cfs_rq:/.load.stddev
473.56 ± 9% -34.1% 311.94 ± 18% sched_debug.cfs_rq:/.load_avg.max
70.01 ± 13% -36.1% 44.72 ± 20% sched_debug.cfs_rq:/.load_avg.stddev
24050559 +66.1% 39942558 ± 2% sched_debug.cfs_rq:/.min_vruntime.avg
27186152 +150.9% 68197622 ± 4% sched_debug.cfs_rq:/.min_vruntime.max
22417469 -26.5% 16476834 ± 5% sched_debug.cfs_rq:/.min_vruntime.min
934692 ± 9% +2357.1% 22966409 ± 6% sched_debug.cfs_rq:/.min_vruntime.stddev
0.42 ± 4% -28.3% 0.30 ± 9% sched_debug.cfs_rq:/.nr_running.stddev
10.48 ± 24% -37.7% 6.53 ± 11% sched_debug.cfs_rq:/.runnable_load_avg.avg
343.08 ± 13% -66.5% 114.81 ± 78% sched_debug.cfs_rq:/.runnable_load_avg.max
43.75 ± 18% -64.0% 15.76 ± 57% sched_debug.cfs_rq:/.runnable_load_avg.stddev
399576 ± 7% -49.9% 200210 ± 54% sched_debug.cfs_rq:/.runnable_weight.max
63786 ± 21% -57.9% 26851 ± 52% sched_debug.cfs_rq:/.runnable_weight.stddev
3479430 ± 23% +1081.6% 41111920 ± 35% sched_debug.cfs_rq:/.spread0.max
934513 ± 9% +2358.3% 22973201 ± 6% sched_debug.cfs_rq:/.spread0.stddev
329009 ± 9% +75.8% 578522 ± 9% sched_debug.cpu.avg_idle.avg
100.19 ± 17% +339.0% 439.87 ± 71% sched_debug.cpu.clock.stddev
100.19 ± 17% +339.0% 439.87 ± 71% sched_debug.cpu.clock_task.stddev
255.50 ± 31% -43.7% 143.92 ± 61% sched_debug.cpu.cpu_load[0].max
31.10 ± 29% -40.5% 18.51 ± 48% sched_debug.cpu.cpu_load[0].stddev
239.47 ± 25% -49.9% 119.92 ± 31% sched_debug.cpu.cpu_load[2].max
29.44 ± 24% -46.2% 15.84 ± 24% sched_debug.cpu.cpu_load[2].stddev
225.33 ± 23% -50.8% 110.81 ± 30% sched_debug.cpu.cpu_load[3].max
27.43 ± 23% -45.6% 14.91 ± 22% sched_debug.cpu.cpu_load[3].stddev
210.61 ± 21% -45.9% 114.03 ± 30% sched_debug.cpu.cpu_load[4].max
25.26 ± 21% -40.7% 14.98 ± 20% sched_debug.cpu.cpu_load[4].stddev
22185 ± 8% -20.7% 17584 ± 12% sched_debug.cpu.curr->pid.stddev
401289 ± 7% -38.3% 247619 ± 32% sched_debug.cpu.load.max
63993 ± 21% -51.8% 30820 ± 28% sched_debug.cpu.load.stddev
0.00 ± 25% +314.4% 0.00 ± 65% sched_debug.cpu.next_balance.stddev
2161 ± 6% +73.4% 3748 ± 3% sched_debug.cpu.nr_load_updates.stddev
1.80 ± 44% +1219.7% 23.78 ± 53% sched_debug.cpu.nr_running.avg
18.56 ± 32% +447.0% 101.50 ± 63% sched_debug.cpu.nr_running.max
3.54 ± 40% +673.8% 27.43 ± 58% sched_debug.cpu.nr_running.stddev
7777523 +69.6% 13189331 sched_debug.cpu.nr_switches.avg
8892283 +187.0% 25524847 sched_debug.cpu.nr_switches.max
6863956 ± 2% -34.3% 4508011 ± 4% sched_debug.cpu.nr_switches.min
399499 ± 6% +1992.9% 8360950 ± 2% sched_debug.cpu.nr_switches.stddev
0.25 ± 52% +8572.4% 21.46 ± 62% sched_debug.cpu.nr_uninterruptible.avg
889.83 ± 4% +243.9% 3060 ± 4% sched_debug.cpu.nr_uninterruptible.max
-802.22 +323.8% -3399 sched_debug.cpu.nr_uninterruptible.min
348.15 ± 3% +506.4% 2111 ± 3% sched_debug.cpu.nr_uninterruptible.stddev
584593 ± 2% +14.5% 669643 slabinfo.anon_vma.active_objs
13084 ± 2% +14.5% 14982 slabinfo.anon_vma.active_slabs
601913 ± 2% +14.5% 689224 slabinfo.anon_vma.num_objs
13084 ± 2% +14.5% 14982 slabinfo.anon_vma.num_slabs
63008 +10.7% 69740 slabinfo.cred_jar.active_objs
1505 +10.9% 1669 slabinfo.cred_jar.active_slabs
63231 +10.9% 70129 slabinfo.cred_jar.num_objs
1505 +10.9% 1669 slabinfo.cred_jar.num_slabs
2494 ± 7% -20.4% 1985 ± 17% slabinfo.eventpoll_epi.active_objs
2494 ± 7% -20.4% 1985 ± 17% slabinfo.eventpoll_epi.num_objs
4365 ± 7% -20.4% 3474 ± 17% slabinfo.eventpoll_pwq.active_objs
4365 ± 7% -20.4% 3474 ± 17% slabinfo.eventpoll_pwq.num_objs
777.00 ± 8% -22.1% 605.00 ± 11% slabinfo.file_lock_cache.active_objs
777.00 ± 8% -22.1% 605.00 ± 11% slabinfo.file_lock_cache.num_objs
4667 ± 5% -9.5% 4226 ± 2% slabinfo.kmalloc-128.active_objs
4667 ± 5% -9.5% 4226 ± 2% slabinfo.kmalloc-128.num_objs
63873 ± 2% +10.2% 70416 ± 3% slabinfo.kmalloc-96.active_objs
36770 ± 2% +13.3% 41645 slabinfo.mm_struct.active_objs
2448 ± 2% +13.3% 2774 slabinfo.mm_struct.active_slabs
39182 ± 2% +13.3% 44397 slabinfo.mm_struct.num_objs
2448 ± 2% +13.3% 2774 slabinfo.mm_struct.num_slabs
1239866 ± 2% +15.4% 1430445 slabinfo.pid.active_objs
23099 ± 2% +16.9% 27003 slabinfo.pid.active_slabs
1478380 ± 2% +16.9% 1728215 slabinfo.pid.num_objs
23099 ± 2% +16.9% 27003 slabinfo.pid.num_slabs
347835 -12.6% 304030 ± 2% slabinfo.selinux_file_security.active_objs
1364 -12.7% 1191 ± 3% slabinfo.selinux_file_security.active_slabs
349266 -12.6% 305089 ± 3% slabinfo.selinux_file_security.num_objs
1364 -12.7% 1191 ± 3% slabinfo.selinux_file_security.num_slabs
41509 +16.7% 48454 slabinfo.sighand_cache.active_objs
2772 +17.0% 3243 slabinfo.sighand_cache.active_slabs
41590 +17.0% 48658 slabinfo.sighand_cache.num_objs
2772 +17.0% 3243 slabinfo.sighand_cache.num_slabs
44791 ± 2% +15.0% 51511 slabinfo.signal_cache.active_objs
1498 ± 2% +15.3% 1726 slabinfo.signal_cache.active_slabs
44952 ± 2% +15.3% 51811 slabinfo.signal_cache.num_objs
1498 ± 2% +15.3% 1726 slabinfo.signal_cache.num_slabs
39448 ± 2% +17.6% 46374 slabinfo.task_struct.active_objs
13155 ± 2% +17.7% 15484 slabinfo.task_struct.active_slabs
39467 ± 2% +17.7% 46452 slabinfo.task_struct.num_objs
13155 ± 2% +17.7% 15484 slabinfo.task_struct.num_slabs
886246 ± 2% +16.0% 1027892 slabinfo.vm_area_struct.active_objs
22632 ± 2% +15.6% 26163 slabinfo.vm_area_struct.active_slabs
905325 ± 2% +15.6% 1046529 slabinfo.vm_area_struct.num_objs
22632 ± 2% +15.6% 26163 slabinfo.vm_area_struct.num_slabs
36.79 ± 6% -36.8 0.00 perf-profile.calltrace.cycles-pp.__GI___libc_write
29.71 ± 8% -29.7 0.00 perf-profile.calltrace.cycles-pp.__GI___libc_read
24.00 ± 6% -24.0 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__GI___libc_write
23.48 ± 6% -23.5 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_write
22.75 ± 6% -22.8 0.00 perf-profile.calltrace.cycles-pp.sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_write
21.26 ± 6% -21.3 0.00 perf-profile.calltrace.cycles-pp.vfs_write.sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_write
16.73 ± 8% -16.7 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__GI___libc_read
16.22 ± 8% -16.2 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_read
15.47 ± 8% -15.5 0.00 perf-profile.calltrace.cycles-pp.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_read
14.40 ± 8% -14.4 0.00 perf-profile.calltrace.cycles-pp.vfs_read.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.__GI___libc_read
3.88 ± 3% -1.7 2.15 ± 6% perf-profile.calltrace.cycles-pp.mutex_unlock.pipe_write.__vfs_write.vfs_write.sys_write
0.65 ± 11% +0.3 0.97 ± 28% perf-profile.calltrace.cycles-pp.enqueue_entity.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.autoremove_wake_function
0.68 ± 11% +0.4 1.05 ± 7% perf-profile.calltrace.cycles-pp.__fget_light.__fdget_pos.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.22 ± 7% +0.4 2.65 ± 9% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyout.copy_page_to_iter.pipe_read.__vfs_read
1.10 ± 8% +0.5 1.55 ± 11% perf-profile.calltrace.cycles-pp.touch_atime.pipe_read.__vfs_read.vfs_read.sys_read
0.42 ± 59% +0.6 0.97 ± 25% perf-profile.calltrace.cycles-pp.switch_mm_irqs_off.__schedule.schedule.pipe_wait.pipe_read
0.28 ±101% +0.6 0.86 ± 25% perf-profile.calltrace.cycles-pp.switch_mm_irqs_off.__schedule.schedule.exit_to_usermode_loop.do_syscall_64
1.18 ± 10% +0.6 1.79 ± 36% perf-profile.calltrace.cycles-pp.dequeue_task_fair.__schedule.schedule.pipe_wait.pipe_read
1.37 ± 11% +0.6 2.00 ± 31% perf-profile.calltrace.cycles-pp.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.autoremove_wake_function.__wake_up_common
0.00 +0.6 0.64 ± 7% perf-profile.calltrace.cycles-pp.__lock_text_start.__wake_up_common_lock.pipe_write.__vfs_write.vfs_write
1.47 ± 11% +0.7 2.17 ± 30% perf-profile.calltrace.cycles-pp.ttwu_do_activate.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
0.30 ±101% +0.7 0.99 ± 29% perf-profile.calltrace.cycles-pp.__switch_to
0.00 +0.7 0.73 ± 20% perf-profile.calltrace.cycles-pp.pick_next_task_fair.__schedule.schedule.exit_to_usermode_loop.do_syscall_64
0.00 +0.7 0.73 ± 20% perf-profile.calltrace.cycles-pp.cpumask_next_wrap.select_idle_sibling.select_task_rq_fair.try_to_wake_up.autoremove_wake_function
0.00 +0.8 0.78 ± 10% perf-profile.calltrace.cycles-pp.__indirect_thunk_start
0.00 +1.0 0.96 ± 10% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_stage2
1.28 ± 15% +1.0 2.28 ± 26% perf-profile.calltrace.cycles-pp.__schedule.schedule.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.29 ± 15% +1.0 2.29 ± 26% perf-profile.calltrace.cycles-pp.schedule.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.32 ± 15% +1.0 2.35 ± 26% perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.1 1.13 ± 8% perf-profile.calltrace.cycles-pp.__fdget_pos.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.1 1.15 ± 11% perf-profile.calltrace.cycles-pp.__fdget_pos.sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
3.71 ± 9% +1.5 5.17 ± 35% perf-profile.calltrace.cycles-pp.__schedule.schedule.pipe_wait.pipe_read.__vfs_read
3.80 ± 9% +1.5 5.28 ± 35% perf-profile.calltrace.cycles-pp.schedule.pipe_wait.pipe_read.__vfs_read.vfs_read
4.22 ± 9% +1.7 5.91 ± 35% perf-profile.calltrace.cycles-pp.pipe_wait.pipe_read.__vfs_read.vfs_read.sys_read
2.54 ± 18% +1.8 4.29 ± 34% perf-profile.calltrace.cycles-pp.idle_cpu.select_idle_sibling.select_task_rq_fair.try_to_wake_up.autoremove_wake_function
4.11 ± 9% +2.2 6.30 ± 32% perf-profile.calltrace.cycles-pp.select_task_rq_fair.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
14.37 +2.4 16.75 ± 5% perf-profile.calltrace.cycles-pp.pipe_read.__vfs_read.vfs_read.sys_read.do_syscall_64
3.42 ± 17% +2.5 5.88 ± 31% perf-profile.calltrace.cycles-pp.select_idle_sibling.select_task_rq_fair.try_to_wake_up.autoremove_wake_function.__wake_up_common
15.21 +2.5 17.68 ± 4% perf-profile.calltrace.cycles-pp.__vfs_read.vfs_read.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
9.46 ± 8% +2.5 11.97 ± 28% perf-profile.calltrace.cycles-pp.__wake_up_common_lock.pipe_write.__vfs_write.vfs_write.sys_write
7.24 ± 11% +2.8 10.03 ± 33% perf-profile.calltrace.cycles-pp.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.pipe_write
7.63 ± 10% +2.8 10.43 ± 33% perf-profile.calltrace.cycles-pp.__wake_up_common.__wake_up_common_lock.pipe_write.__vfs_write.vfs_write
7.29 ± 10% +2.8 10.12 ± 33% perf-profile.calltrace.cycles-pp.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.pipe_write.__vfs_write
1.45 ± 29% +8.8 10.29 ± 14% perf-profile.calltrace.cycles-pp._entry_trampoline
1.54 ± 27% +9.0 10.56 ± 14% perf-profile.calltrace.cycles-pp.syscall_return_via_sysret
6.32 ± 18% +17.4 23.75 perf-profile.calltrace.cycles-pp.vfs_read.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
6.63 ± 19% +18.7 25.34 perf-profile.calltrace.cycles-pp.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
10.10 ± 17% +20.3 30.45 ± 3% perf-profile.calltrace.cycles-pp.vfs_write.sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
10.36 ± 17% +21.7 32.02 ± 2% perf-profile.calltrace.cycles-pp.sys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
19.46 ± 17% +42.8 62.30 ± 2% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
19.69 ± 17% +44.0 63.67 ± 2% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
17.10 -21.3% 13.46 ± 2% perf-stat.i.MPKI
4.035e+10 -17.5% 3.33e+10 ± 3% perf-stat.i.branch-instructions
2.14 -0.2 1.95 perf-stat.i.branch-miss-rate%
7.194e+08 -18.6% 5.855e+08 ± 3% perf-stat.i.branch-misses
12.70 -1.4 11.33 perf-stat.i.cache-miss-rate%
1.014e+08 -11.3% 89967376 ± 3% perf-stat.i.cache-misses
3519724 +39.3% 4901677 ± 3% perf-stat.i.context-switches
2.06 ± 2% -6.7% 1.92 perf-stat.i.cpi
3.139e+11 -16.5% 2.622e+11 ± 3% perf-stat.i.cpu-cycles
113205 ± 2% +72.7% 195554 ± 5% perf-stat.i.cpu-migrations
0.47 +0.1 0.53 ± 3% perf-stat.i.dTLB-load-miss-rate%
1.716e+08 +16.6% 2.001e+08 ± 5% perf-stat.i.dTLB-load-misses
6.393e+10 -18.1% 5.234e+10 ± 3% perf-stat.i.dTLB-loads
8147741 +33.5% 10877323 ± 3% perf-stat.i.dTLB-store-misses
4.046e+10 -18.6% 3.294e+10 ± 3% perf-stat.i.dTLB-stores
55.06 +1.1 56.14 perf-stat.i.iTLB-load-miss-rate%
2.161e+08 -8.0% 1.989e+08 ± 3% perf-stat.i.iTLB-load-misses
1.927e+08 -14.7% 1.643e+08 ± 4% perf-stat.i.iTLB-loads
2.102e+11 -17.7% 1.731e+11 ± 3% perf-stat.i.instructions
1069 ± 2% -8.4% 979.54 ± 3% perf-stat.i.instructions-per-iTLB-miss
0.56 +3.8% 0.58 perf-stat.i.ipc
124005 -22.0% 96666 ± 3% perf-stat.i.minor-faults
277317 -17.9% 227661 ± 3% perf-stat.i.msec
63.61 -8.3 55.35 ± 2% perf-stat.i.node-load-miss-rate%
14378837 ± 2% -5.7% 13566343 ± 3% perf-stat.i.node-loads
21.74 ± 2% -4.3 17.40 perf-stat.i.node-store-miss-rate%
5331671 -16.6% 4445492 ± 3% perf-stat.i.node-store-misses
124005 -22.0% 96666 ± 3% perf-stat.i.page-faults
3.76 +23.4% 4.64 perf-stat.overall.MPKI
1.78 -0.0 1.76 perf-stat.overall.branch-miss-rate%
12.84 -1.6 11.22 perf-stat.overall.cache-miss-rate%
1.49 +1.5% 1.52 perf-stat.overall.cpi
0.27 +0.1 0.38 ± 5% perf-stat.overall.dTLB-load-miss-rate%
0.02 +0.0 0.03 perf-stat.overall.dTLB-store-miss-rate%
52.87 +1.9 54.77 perf-stat.overall.iTLB-load-miss-rate%
972.65 -10.6% 869.88 perf-stat.overall.instructions-per-iTLB-miss
0.67 -1.4% 0.66 perf-stat.overall.ipc
55.35 +1.8 57.18 perf-stat.overall.node-load-miss-rate%
23.25 -2.9 20.38 perf-stat.overall.node-store-miss-rate%
32464 +6.1% 34453 perf-stat.overall.path-length
1.579e+13 +2.0% 1.61e+13 perf-stat.total.branch-instructions
3.969e+10 +9.6% 4.352e+10 perf-stat.total.cache-misses
3.09e+11 +25.5% 3.879e+11 perf-stat.total.cache-references
1.378e+09 +72.1% 2.371e+09 perf-stat.total.context-switches
1.229e+14 +3.2% 1.268e+14 perf-stat.total.cpu-cycles
44320452 ± 2% +113.8% 94752867 ± 7% perf-stat.total.cpu-migrations
6.719e+10 +44.1% 9.685e+10 ± 5% perf-stat.total.dTLB-load-misses
2.502e+13 +1.1% 2.531e+13 perf-stat.total.dTLB-loads
3.189e+09 +65.0% 5.262e+09 perf-stat.total.dTLB-store-misses
8.459e+10 +13.7% 9.621e+10 perf-stat.total.iTLB-load-misses
7.542e+10 +5.3% 7.945e+10 perf-stat.total.iTLB-loads
8.228e+13 +1.7% 8.368e+13 perf-stat.total.instructions
48538538 -3.7% 46749483 perf-stat.total.minor-faults
1.086e+08 +1.4% 1.101e+08 perf-stat.total.msec
6.976e+09 +25.7% 8.768e+09 ± 3% perf-stat.total.node-load-misses
5.627e+09 +16.6% 6.562e+09 perf-stat.total.node-loads
2.087e+09 +3.0% 2.15e+09 perf-stat.total.node-store-misses
6.891e+09 +21.9% 8.4e+09 perf-stat.total.node-stores
48538482 -3.7% 46749452 perf-stat.total.page-faults
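
Several perf-stat.overall rows are derived metrics whose definitions can
be reconstructed from the raw totals in this table. A hedged sketch (the
exact lkp formulas are assumptions, but the arithmetic reproduces the
printed base-commit values to within rounding and per-run averaging):

    # Raw totals for 24d0c1d6e6, copied from the rows above:
    instructions      = 8.228e13   # perf-stat.total.instructions
    itlb_load_misses  = 8.459e10   # perf-stat.total.iTLB-load-misses
    cache_references  = 3.09e11    # perf-stat.total.cache-references
    workload          = 2.534e9    # hackbench.workload
    node_store_misses = 2.087e9    # perf-stat.total.node-store-misses
    node_stores       = 6.891e9    # perf-stat.total.node-stores

    print(instructions / itlb_load_misses)   # ~972.7 (instructions-per-iTLB-miss: 972.65)
    print(instructions / workload)           # ~32470 (path-length: 32464)
    # MPKI here matches cache-references, not cache-misses, per kilo-instruction:
    print(cache_references / instructions * 1e3)   # ~3.76 (MPKI: 3.76)
    print(node_store_misses / (node_store_misses + node_stores) * 100)
                                       # ~23.2 (node-store-miss-rate%: 23.25)
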
44380 ± 9% +36.4% 60536 ± 3% softirqs.CPU0.RCU
42700 +26.8% 54157 softirqs.CPU0.SCHED
41219 ± 3% +51.8% 62583 ± 3% softirqs.CPU1.RCU
40291 ± 2% +53.7% 61915 ± 2% softirqs.CPU10.RCU
39712 ± 2% +60.0% 63530 ± 2% softirqs.CPU11.RCU
39626 +58.3% 62721 softirqs.CPU12.RCU
39746 ± 3% +56.8% 62327 softirqs.CPU13.RCU
40013 ± 2% +62.5% 65013 ± 4% softirqs.CPU14.RCU
44176 ± 8% +49.0% 65821 ± 3% softirqs.CPU15.RCU
43495 ± 8% +50.4% 65412 ± 2% softirqs.CPU16.RCU
41320 +53.6% 63484 softirqs.CPU17.RCU
41735 +60.7% 67060 ± 7% softirqs.CPU18.RCU
44197 ± 9% +51.0% 66754 ± 2% softirqs.CPU19.RCU
40193 ± 3% +52.0% 61104 ± 2% softirqs.CPU2.RCU
44564 ± 10% +52.5% 67952 ± 6% softirqs.CPU20.RCU
42527 +55.1% 65966 ± 3% softirqs.CPU21.RCU
42153 +58.6% 66857 ± 5% softirqs.CPU22.RCU
42327 ± 3% +68.9% 71495 ± 5% softirqs.CPU23.RCU
42306 ± 2% +59.1% 67318 ± 9% softirqs.CPU24.RCU
41475 ± 2% +66.9% 69214 ± 14% softirqs.CPU25.RCU
41655 ± 2% +60.5% 66866 ± 3% softirqs.CPU26.RCU
41766 ± 2% +51.7% 63373 softirqs.CPU27.RCU
41548 ± 2% +60.0% 66490 ± 5% softirqs.CPU28.RCU
44279 ± 6% +50.4% 66601 ± 9% softirqs.CPU29.RCU
42001 ± 8% +51.4% 63605 ± 5% softirqs.CPU3.RCU
40890 ± 6% +50.8% 61645 ± 2% softirqs.CPU30.RCU
40491 ± 2% +53.8% 62280 ± 3% softirqs.CPU31.RCU
41471 ± 2% +56.5% 64894 ± 4% softirqs.CPU32.RCU
40349 ± 2% +62.0% 65360 ± 5% softirqs.CPU33.RCU
40479 ± 2% +62.5% 65781 ± 3% softirqs.CPU34.RCU
42319 ± 10% +48.6% 62876 ± 3% softirqs.CPU35.RCU
41799 +55.1% 64841 ± 3% softirqs.CPU36.RCU
43731 ± 8% +54.5% 67558 ± 3% softirqs.CPU37.RCU
41031 ± 3% +73.3% 71116 ± 8% softirqs.CPU38.RCU
42978 ± 6% +53.1% 65803 ± 7% softirqs.CPU39.RCU
39754 ± 2% +55.0% 61636 ± 3% softirqs.CPU4.RCU
46367 ± 9% +41.4% 65558 ± 9% softirqs.CPU40.RCU
40343 ± 2% +59.9% 64513 ± 3% softirqs.CPU41.RCU
42227 ± 3% +55.1% 65511 ± 4% softirqs.CPU42.RCU
40727 ± 3% +65.6% 67457 ± 3% softirqs.CPU43.RCU
40253 ± 4% +45.8% 58700 ± 2% softirqs.CPU44.RCU
40121 ± 2% +52.7% 61278 ± 2% softirqs.CPU45.RCU
40516 ± 3% +51.1% 61219 ± 2% softirqs.CPU46.RCU
41106 ± 10% +48.5% 61035 ± 6% softirqs.CPU47.RCU
39554 +58.7% 62780 ± 6% softirqs.CPU48.RCU
38899 +53.9% 59879 ± 2% softirqs.CPU49.RCU
39677 ± 2% +59.7% 63369 ± 6% softirqs.CPU5.RCU
42427 ± 9% +47.9% 62754 softirqs.CPU50.RCU
41751 ± 7% +47.8% 61711 ± 3% softirqs.CPU51.RCU
39992 +65.4% 66128 ± 7% softirqs.CPU52.RCU
39026 +54.8% 60393 ± 3% softirqs.CPU53.RCU
42161 ± 10% +47.6% 62218 ± 7% softirqs.CPU54.RCU
40080 ± 3% +60.2% 64206 ± 6% softirqs.CPU55.RCU
39427 ± 3% +57.5% 62094 softirqs.CPU56.RCU
39673 ± 3% +65.7% 65735 ± 7% softirqs.CPU57.RCU
40865 +53.4% 62669 ± 3% softirqs.CPU58.RCU
40346 +55.4% 62713 ± 3% softirqs.CPU59.RCU
39668 ± 2% +58.5% 62879 ± 2% softirqs.CPU6.RCU
39930 ± 2% +68.1% 67122 ± 5% softirqs.CPU60.RCU
40045 ± 2% +55.1% 62093 ± 2% softirqs.CPU61.RCU
40057 +57.6% 63145 ± 5% softirqs.CPU62.RCU
40579 +57.4% 63889 ± 7% softirqs.CPU63.RCU
40947 ± 2% +51.6% 62075 ± 2% softirqs.CPU64.RCU
42989 ± 8% +45.6% 62592 softirqs.CPU65.RCU
44393 ± 8% +53.9% 68301 ± 6% softirqs.CPU66.RCU
41370 ± 3% +57.9% 65307 ± 5% softirqs.CPU67.RCU
41532 +59.4% 66200 ± 2% softirqs.CPU68.RCU
40604 ± 3% +58.8% 64470 ± 6% softirqs.CPU69.RCU
39822 +53.2% 61021 softirqs.CPU7.RCU
40977 ± 2% +58.1% 64805 ± 6% softirqs.CPU70.RCU
42698 ± 7% +52.4% 65068 ± 6% softirqs.CPU71.RCU
41420 +70.9% 70802 ± 3% softirqs.CPU72.RCU
41458 +52.1% 63040 ± 2% softirqs.CPU73.RCU
41010 ± 2% +64.8% 67576 ± 7% softirqs.CPU74.RCU
38997 ± 9% +58.5% 61793 ± 5% softirqs.CPU75.RCU
37004 ± 3% +55.5% 57533 ± 2% softirqs.CPU76.RCU
37324 ± 2% +61.2% 60153 ± 6% softirqs.CPU77.RCU
40007 ± 8% +45.9% 58373 ± 3% softirqs.CPU78.RCU
37264 ± 4% +52.2% 56698 ± 3% softirqs.CPU79.RCU
39957 ± 2% +54.1% 61588 ± 2% softirqs.CPU8.RCU
38811 ± 2% +51.8% 58931 ± 2% softirqs.CPU80.RCU
40186 ± 6% +53.6% 61730 ± 6% softirqs.CPU81.RCU
37439 ± 2% +77.3% 66389 ± 6% softirqs.CPU82.RCU
37454 ± 3% +62.9% 61016 ± 8% softirqs.CPU83.RCU
39574 ± 7% +49.7% 59255 ± 3% softirqs.CPU84.RCU
36603 ± 2% +77.5% 64978 ± 7% softirqs.CPU85.RCU
40354 ± 8% +46.8% 59245 ± 5% softirqs.CPU86.RCU
37744 ± 3% +65.6% 62495 ± 5% softirqs.CPU87.RCU
39090 +58.6% 62000 softirqs.CPU9.RCU
3594895 +56.1% 5612323 softirqs.RCU
500558 +17.5% 588306 ± 2% softirqs.SCHED
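
The per-CPU softirq rows sum to the family totals above: softirqs.RCU
3594895 over 88 threads is ~40.9k per CPU, consistent with the ~37k-46k
per-CPU RCU rows, and the +56.1% on the total matches the +45..77%
spread of the per-CPU changes:

    # Per-CPU mean implied by the softirqs.RCU total on this 88-thread box:
    print(3594895 / 88)   # ~40851
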
555.75 ± 24% -31.4% 381.50 ± 16% interrupts.34:PCI-MSI.3145731-edge.eth0-TxRx-2
352056 +19.6% 421007 ± 2% interrupts.CAL:Function_call_interrupts
3999 ± 3% +18.8% 4752 ± 6% interrupts.CPU0.CAL:Function_call_interrupts
4038 +19.2% 4812 ± 2% interrupts.CPU1.CAL:Function_call_interrupts
3898 ± 7% +23.6% 4816 ± 3% interrupts.CPU10.CAL:Function_call_interrupts
2771 ± 32% +160.5% 7220 ± 17% interrupts.CPU10.NMI:Non-maskable_interrupts
2771 ± 32% +160.5% 7220 ± 17% interrupts.CPU10.PMI:Performance_monitoring_interrupts
3957 ± 4% +17.7% 4657 ± 7% interrupts.CPU11.CAL:Function_call_interrupts
2930 ± 28% +145.8% 7204 ± 18% interrupts.CPU11.NMI:Non-maskable_interrupts
2930 ± 28% +145.8% 7204 ± 18% interrupts.CPU11.PMI:Performance_monitoring_interrupts
3996 +21.3% 4848 ± 2% interrupts.CPU12.CAL:Function_call_interrupts
2777 ± 32% +159.5% 7207 ± 18% interrupts.CPU12.NMI:Non-maskable_interrupts
2777 ± 32% +159.5% 7207 ± 18% interrupts.CPU12.PMI:Performance_monitoring_interrupts
555.75 ± 24% -31.4% 381.50 ± 16% interrupts.CPU13.34:PCI-MSI.3145731-edge.eth0-TxRx-2
4059 +18.7% 4819 ± 3% interrupts.CPU13.CAL:Function_call_interrupts
2781 ± 32% +159.2% 7207 ± 18% interrupts.CPU13.NMI:Non-maskable_interrupts
2781 ± 32% +159.2% 7207 ± 18% interrupts.CPU13.PMI:Performance_monitoring_interrupts
3994 ± 2% +21.0% 4834 ± 3% interrupts.CPU14.CAL:Function_call_interrupts
2773 ± 32% +168.6% 7448 ± 12% interrupts.CPU14.NMI:Non-maskable_interrupts
2773 ± 32% +168.6% 7448 ± 12% interrupts.CPU14.PMI:Performance_monitoring_interrupts
4047 +19.2% 4823 ± 3% interrupts.CPU15.CAL:Function_call_interrupts
2774 ± 32% +159.6% 7201 ± 18% interrupts.CPU15.NMI:Non-maskable_interrupts
2774 ± 32% +159.6% 7201 ± 18% interrupts.CPU15.PMI:Performance_monitoring_interrupts
4056 +19.4% 4844 ± 3% interrupts.CPU16.CAL:Function_call_interrupts
2781 ± 32% +158.8% 7198 ± 18% interrupts.CPU16.NMI:Non-maskable_interrupts
2781 ± 32% +158.8% 7198 ± 18% interrupts.CPU16.PMI:Performance_monitoring_interrupts
3921 ± 5% +23.2% 4829 ± 2% interrupts.CPU17.CAL:Function_call_interrupts
3303 ± 31% +118.2% 7209 ± 18% interrupts.CPU17.NMI:Non-maskable_interrupts
3303 ± 31% +118.2% 7209 ± 18% interrupts.CPU17.PMI:Performance_monitoring_interrupts
4048 +18.8% 4808 ± 3% interrupts.CPU18.CAL:Function_call_interrupts
3315 ± 31% +117.3% 7204 ± 18% interrupts.CPU18.NMI:Non-maskable_interrupts
3315 ± 31% +117.3% 7204 ± 18% interrupts.CPU18.PMI:Performance_monitoring_interrupts
4056 +18.4% 4803 ± 3% interrupts.CPU19.CAL:Function_call_interrupts
3316 ± 31% +117.2% 7204 ± 18% interrupts.CPU19.NMI:Non-maskable_interrupts
3316 ± 31% +117.2% 7204 ± 18% interrupts.CPU19.PMI:Performance_monitoring_interrupts
4021 ± 2% +20.5% 4844 ± 2% interrupts.CPU2.CAL:Function_call_interrupts
3307 ± 31% +117.9% 7204 ± 18% interrupts.CPU20.NMI:Non-maskable_interrupts
3307 ± 31% +117.9% 7204 ± 18% interrupts.CPU20.PMI:Performance_monitoring_interrupts
4039 +19.5% 4824 ± 3% interrupts.CPU21.CAL:Function_call_interrupts
3307 ± 31% +117.9% 7206 ± 18% interrupts.CPU21.NMI:Non-maskable_interrupts
3307 ± 31% +117.9% 7206 ± 18% interrupts.CPU21.PMI:Performance_monitoring_interrupts
4018 +17.6% 4725 ± 4% interrupts.CPU22.CAL:Function_call_interrupts
3306 ± 31% +117.1% 7179 ± 19% interrupts.CPU22.NMI:Non-maskable_interrupts
3306 ± 31% +117.1% 7179 ± 19% interrupts.CPU22.PMI:Performance_monitoring_interrupts
4060 +18.1% 4794 interrupts.CPU23.CAL:Function_call_interrupts
3853 ± 23% +86.0% 7169 ± 19% interrupts.CPU23.NMI:Non-maskable_interrupts
3853 ± 23% +86.0% 7169 ± 19% interrupts.CPU23.PMI:Performance_monitoring_interrupts
4009 ± 2% +20.7% 4838 ± 3% interrupts.CPU24.CAL:Function_call_interrupts
3840 ± 23% +86.7% 7172 ± 19% interrupts.CPU24.NMI:Non-maskable_interrupts
3840 ± 23% +86.7% 7172 ± 19% interrupts.CPU24.PMI:Performance_monitoring_interrupts
3886 ± 5% +22.9% 4775 ± 3% interrupts.CPU25.CAL:Function_call_interrupts
3874 ± 23% +85.2% 7175 ± 19% interrupts.CPU25.NMI:Non-maskable_interrupts
3874 ± 23% +85.2% 7175 ± 19% interrupts.CPU25.PMI:Performance_monitoring_interrupts
4008 +19.2% 4779 ± 4% interrupts.CPU26.CAL:Function_call_interrupts
3848 ± 23% +86.4% 7173 ± 19% interrupts.CPU26.NMI:Non-maskable_interrupts
3848 ± 23% +86.4% 7173 ± 19% interrupts.CPU26.PMI:Performance_monitoring_interrupts
4051 +19.6% 4846 ± 2% interrupts.CPU27.CAL:Function_call_interrupts
3307 ± 31% +116.9% 7175 ± 19% interrupts.CPU27.NMI:Non-maskable_interrupts
3307 ± 31% +116.9% 7175 ± 19% interrupts.CPU27.PMI:Performance_monitoring_interrupts
3885 ± 6% +23.4% 4793 interrupts.CPU28.CAL:Function_call_interrupts
3305 ± 31% +117.4% 7186 ± 18% interrupts.CPU28.NMI:Non-maskable_interrupts
3305 ± 31% +117.4% 7186 ± 18% interrupts.CPU28.PMI:Performance_monitoring_interrupts
3303 ± 31% +117.4% 7182 ± 19% interrupts.CPU29.NMI:Non-maskable_interrupts
3303 ± 31% +117.4% 7182 ± 19% interrupts.CPU29.PMI:Performance_monitoring_interrupts
4041 +19.6% 4831 ± 2% interrupts.CPU3.CAL:Function_call_interrupts
3753 ± 13% +30.1% 4882 ± 2% interrupts.CPU30.CAL:Function_call_interrupts
3848 ± 23% +86.4% 7173 ± 19% interrupts.CPU30.NMI:Non-maskable_interrupts
3848 ± 23% +86.4% 7173 ± 19% interrupts.CPU30.PMI:Performance_monitoring_interrupts
3954 +22.5% 4845 ± 2% interrupts.CPU31.CAL:Function_call_interrupts
3851 ± 24% +86.1% 7165 ± 19% interrupts.CPU31.NMI:Non-maskable_interrupts
3851 ± 24% +86.1% 7165 ± 19% interrupts.CPU31.PMI:Performance_monitoring_interrupts
4032 +19.4% 4814 ± 2% interrupts.CPU32.CAL:Function_call_interrupts
3898 ± 6% +23.8% 4825 ± 3% interrupts.CPU33.CAL:Function_call_interrupts
3919 ± 2% +20.4% 4718 ± 6% interrupts.CPU34.CAL:Function_call_interrupts
3863 ± 24% +85.7% 7173 ± 19% interrupts.CPU34.NMI:Non-maskable_interrupts
3863 ± 24% +85.7% 7173 ± 19% interrupts.CPU34.PMI:Performance_monitoring_interrupts
3993 +16.2% 4641 ± 5% interrupts.CPU35.CAL:Function_call_interrupts
3855 ± 24% +85.9% 7168 ± 19% interrupts.CPU35.NMI:Non-maskable_interrupts
3855 ± 24% +85.9% 7168 ± 19% interrupts.CPU35.PMI:Performance_monitoring_interrupts
3911 ± 2% +23.2% 4817 ± 2% interrupts.CPU36.CAL:Function_call_interrupts
3856 ± 24% +85.9% 7171 ± 19% interrupts.CPU36.NMI:Non-maskable_interrupts
3856 ± 24% +85.9% 7171 ± 19% interrupts.CPU36.PMI:Performance_monitoring_interrupts
3928 ± 4% +22.3% 4802 ± 2% interrupts.CPU37.CAL:Function_call_interrupts
4026 +15.0% 4629 ± 6% interrupts.CPU38.CAL:Function_call_interrupts
4034 +19.7% 4829 interrupts.CPU39.CAL:Function_call_interrupts
4036 +19.1% 4805 ± 3% interrupts.CPU4.CAL:Function_call_interrupts
4031 +20.5% 4856 ± 2% interrupts.CPU40.CAL:Function_call_interrupts
4035 +17.9% 4758 ± 5% interrupts.CPU41.CAL:Function_call_interrupts
3919 ± 4% +20.0% 4704 ± 6% interrupts.CPU42.CAL:Function_call_interrupts
3950 +19.2% 4710 ± 4% interrupts.CPU43.CAL:Function_call_interrupts
3931 ± 7% +22.5% 4816 ± 3% interrupts.CPU44.CAL:Function_call_interrupts
2836 ± 32% +119.7% 6232 ± 28% interrupts.CPU44.NMI:Non-maskable_interrupts
2836 ± 32% +119.7% 6232 ± 28% interrupts.CPU44.PMI:Performance_monitoring_interrupts
4018 +20.5% 4841 ± 2% interrupts.CPU45.CAL:Function_call_interrupts
2770 ± 32% +124.5% 6218 ± 28% interrupts.CPU45.NMI:Non-maskable_interrupts
2770 ± 32% +124.5% 6218 ± 28% interrupts.CPU45.PMI:Performance_monitoring_interrupts
4043 +19.1% 4817 ± 3% interrupts.CPU46.CAL:Function_call_interrupts
2778 ± 32% +102.1% 5616 ± 42% interrupts.CPU46.NMI:Non-maskable_interrupts
2778 ± 32% +102.1% 5616 ± 42% interrupts.CPU46.PMI:Performance_monitoring_interrupts
4064 +19.3% 4846 ± 3% interrupts.CPU47.CAL:Function_call_interrupts
2771 ± 32% +138.2% 6600 ± 35% interrupts.CPU47.NMI:Non-maskable_interrupts
2771 ± 32% +138.2% 6600 ± 35% interrupts.CPU47.PMI:Performance_monitoring_interrupts
4037 +19.5% 4825 ± 3% interrupts.CPU48.CAL:Function_call_interrupts
2774 ± 32% +102.6% 5622 ± 42% interrupts.CPU48.NMI:Non-maskable_interrupts
2774 ± 32% +102.6% 5622 ± 42% interrupts.CPU48.PMI:Performance_monitoring_interrupts
4054 +18.6% 4809 ± 3% interrupts.CPU49.CAL:Function_call_interrupts
2769 ± 32% +102.9% 5618 ± 42% interrupts.CPU49.NMI:Non-maskable_interrupts
2769 ± 32% +102.9% 5618 ± 42% interrupts.CPU49.PMI:Performance_monitoring_interrupts
4036 +15.9% 4680 ± 6% interrupts.CPU5.CAL:Function_call_interrupts
4056 +19.0% 4826 ± 3% interrupts.CPU50.CAL:Function_call_interrupts
2770 ± 32% +138.3% 6600 ± 35% interrupts.CPU50.NMI:Non-maskable_interrupts
2770 ± 32% +138.3% 6600 ± 35% interrupts.CPU50.PMI:Performance_monitoring_interrupts
4039 +19.6% 4832 ± 3% interrupts.CPU51.CAL:Function_call_interrupts
2774 ± 32% +137.9% 6602 ± 35% interrupts.CPU51.NMI:Non-maskable_interrupts
2774 ± 32% +137.9% 6602 ± 35% interrupts.CPU51.PMI:Performance_monitoring_interrupts
4059 +18.5% 4811 ± 3% interrupts.CPU52.CAL:Function_call_interrupts
2768 ± 32% +138.5% 6602 ± 35% interrupts.CPU52.NMI:Non-maskable_interrupts
2768 ± 32% +138.5% 6602 ± 35% interrupts.CPU52.PMI:Performance_monitoring_interrupts
4040 +18.2% 4775 ± 3% interrupts.CPU53.CAL:Function_call_interrupts
2768 ± 32% +138.4% 6599 ± 35% interrupts.CPU53.NMI:Non-maskable_interrupts
2768 ± 32% +138.4% 6599 ± 35% interrupts.CPU53.PMI:Performance_monitoring_interrupts
3982 ± 4% +21.5% 4839 ± 2% interrupts.CPU54.CAL:Function_call_interrupts
4036 ± 2% +16.0% 4681 ± 7% interrupts.CPU55.CAL:Function_call_interrupts
4002 ± 2% +17.5% 4701 ± 7% interrupts.CPU56.CAL:Function_call_interrupts
3319 ± 31% +117.0% 7201 ± 18% interrupts.CPU56.NMI:Non-maskable_interrupts
3319 ± 31% +117.0% 7201 ± 18% interrupts.CPU56.PMI:Performance_monitoring_interrupts
4049 +18.9% 4816 ± 4% interrupts.CPU57.CAL:Function_call_interrupts
3312 ± 31% +117.3% 7197 ± 18% interrupts.CPU57.NMI:Non-maskable_interrupts
3312 ± 31% +117.3% 7197 ± 18% interrupts.CPU57.PMI:Performance_monitoring_interrupts
3973 ± 3% +22.3% 4861 ± 2% interrupts.CPU58.CAL:Function_call_interrupts
3314 ± 31% +139.8% 7948 interrupts.CPU58.NMI:Non-maskable_interrupts
3314 ± 31% +139.8% 7948 interrupts.CPU58.PMI:Performance_monitoring_interrupts
4046 +20.2% 4861 ± 2% interrupts.CPU59.CAL:Function_call_interrupts
2779 ± 32% +158.8% 7191 ± 18% interrupts.CPU59.NMI:Non-maskable_interrupts
2779 ± 32% +158.8% 7191 ± 18% interrupts.CPU59.PMI:Performance_monitoring_interrupts
4030 +20.2% 4845 ± 2% interrupts.CPU6.CAL:Function_call_interrupts
3315 ± 32% +117.2% 7199 ± 18% interrupts.CPU6.NMI:Non-maskable_interrupts
3315 ± 32% +117.2% 7199 ± 18% interrupts.CPU6.PMI:Performance_monitoring_interrupts
4047 +19.0% 4817 ± 3% interrupts.CPU60.CAL:Function_call_interrupts
2782 ± 32% +158.6% 7194 ± 18% interrupts.CPU60.NMI:Non-maskable_interrupts
2782 ± 32% +158.6% 7194 ± 18% interrupts.CPU60.PMI:Performance_monitoring_interrupts
3935 ± 5% +22.3% 4812 ± 3% interrupts.CPU61.CAL:Function_call_interrupts
2775 ± 32% +159.4% 7200 ± 18% interrupts.CPU61.NMI:Non-maskable_interrupts
2775 ± 32% +159.4% 7200 ± 18% interrupts.CPU61.PMI:Performance_monitoring_interrupts
4055 +19.9% 4861 ± 2% interrupts.CPU62.CAL:Function_call_interrupts
2776 ± 32% +159.3% 7198 ± 18% interrupts.CPU62.NMI:Non-maskable_interrupts
2776 ± 32% +159.3% 7198 ± 18% interrupts.CPU62.PMI:Performance_monitoring_interrupts
4055 +19.0% 4827 ± 3% interrupts.CPU63.CAL:Function_call_interrupts
2781 ± 32% +158.8% 7197 ± 18% interrupts.CPU63.NMI:Non-maskable_interrupts
2781 ± 32% +158.8% 7197 ± 18% interrupts.CPU63.PMI:Performance_monitoring_interrupts
4044 +16.9% 4729 ± 5% interrupts.CPU64.CAL:Function_call_interrupts
2770 ± 32% +159.7% 7195 ± 18% interrupts.CPU64.NMI:Non-maskable_interrupts
2770 ± 32% +159.7% 7195 ± 18% interrupts.CPU64.PMI:Performance_monitoring_interrupts
4040 +19.6% 4833 ± 3% interrupts.CPU65.CAL:Function_call_interrupts
2782 ± 32% +158.9% 7204 ± 18% interrupts.CPU65.NMI:Non-maskable_interrupts
2782 ± 32% +158.9% 7204 ± 18% interrupts.CPU65.PMI:Performance_monitoring_interrupts
4047 +15.9% 4691 ± 6% interrupts.CPU66.CAL:Function_call_interrupts
2778 ± 32% +158.0% 7168 ± 19% interrupts.CPU66.NMI:Non-maskable_interrupts
2778 ± 32% +158.0% 7168 ± 19% interrupts.CPU66.PMI:Performance_monitoring_interrupts
4069 +13.7% 4626 ± 4% interrupts.CPU67.CAL:Function_call_interrupts
2783 ± 31% +157.4% 7165 ± 19% interrupts.CPU67.NMI:Non-maskable_interrupts
2783 ± 31% +157.4% 7165 ± 19% interrupts.CPU67.PMI:Performance_monitoring_interrupts
4045 +19.9% 4851 ± 2% interrupts.CPU68.CAL:Function_call_interrupts
2779 ± 32% +157.8% 7166 ± 19% interrupts.CPU68.NMI:Non-maskable_interrupts
2779 ± 32% +157.8% 7166 ± 19% interrupts.CPU68.PMI:Performance_monitoring_interrupts
3897 ± 5% +23.9% 4828 ± 3% interrupts.CPU69.CAL:Function_call_interrupts
2784 ± 31% +157.6% 7171 ± 19% interrupts.CPU69.NMI:Non-maskable_interrupts
2784 ± 31% +157.6% 7171 ± 19% interrupts.CPU69.PMI:Performance_monitoring_interrupts
4045 +19.7% 4841 ± 2% interrupts.CPU7.CAL:Function_call_interrupts
2774 ± 32% +159.5% 7200 ± 18% interrupts.CPU7.NMI:Non-maskable_interrupts
2774 ± 32% +159.5% 7200 ± 18% interrupts.CPU7.PMI:Performance_monitoring_interrupts
4040 +17.1% 4730 ± 5% interrupts.CPU70.CAL:Function_call_interrupts
2784 ± 32% +157.7% 7174 ± 18% interrupts.CPU70.NMI:Non-maskable_interrupts
2784 ± 32% +157.7% 7174 ± 18% interrupts.CPU70.PMI:Performance_monitoring_interrupts
4051 +19.3% 4833 ± 2% interrupts.CPU71.CAL:Function_call_interrupts
2784 ± 32% +157.5% 7169 ± 19% interrupts.CPU71.NMI:Non-maskable_interrupts
2784 ± 32% +157.5% 7169 ± 19% interrupts.CPU71.PMI:Performance_monitoring_interrupts
2785 ± 33% +159.1% 7217 ± 17% interrupts.CPU72.NMI:Non-maskable_interrupts
2785 ± 33% +159.1% 7217 ± 17% interrupts.CPU72.PMI:Performance_monitoring_interrupts
3999 ± 3% +17.1% 4682 ± 5% interrupts.CPU73.CAL:Function_call_interrupts
2780 ± 32% +157.8% 7169 ± 19% interrupts.CPU73.NMI:Non-maskable_interrupts
2780 ± 32% +157.8% 7169 ± 19% interrupts.CPU73.PMI:Performance_monitoring_interrupts
3834 ± 8% +26.8% 4861 ± 2% interrupts.CPU74.CAL:Function_call_interrupts
2778 ± 32% +158.4% 7177 ± 19% interrupts.CPU74.NMI:Non-maskable_interrupts
2778 ± 32% +158.4% 7177 ± 19% interrupts.CPU74.PMI:Performance_monitoring_interrupts
4063 +19.1% 4840 ± 3% interrupts.CPU75.CAL:Function_call_interrupts
3331 ± 32% +115.3% 7171 ± 19% interrupts.CPU75.NMI:Non-maskable_interrupts
3331 ± 32% +115.3% 7171 ± 19% interrupts.CPU75.PMI:Performance_monitoring_interrupts
4021 +19.4% 4801 ± 3% interrupts.CPU76.CAL:Function_call_interrupts
3461 ± 26% +108.5% 7214 ± 18% interrupts.CPU76.NMI:Non-maskable_interrupts
3461 ± 26% +108.5% 7214 ± 18% interrupts.CPU76.PMI:Performance_monitoring_interrupts
3984 ± 3% +20.9% 4819 ± 3% interrupts.CPU77.CAL:Function_call_interrupts
3653 ± 39% +96.4% 7176 ± 19% interrupts.CPU77.NMI:Non-maskable_interrupts
3653 ± 39% +96.4% 7176 ± 19% interrupts.CPU77.PMI:Performance_monitoring_interrupts
4024 +18.9% 4786 ± 4% interrupts.CPU78.CAL:Function_call_interrupts
2782 ± 32% +158.1% 7181 ± 19% interrupts.CPU78.NMI:Non-maskable_interrupts
2782 ± 32% +158.1% 7181 ± 19% interrupts.CPU78.PMI:Performance_monitoring_interrupts
3897 ± 3% +22.8% 4787 ± 2% interrupts.CPU79.CAL:Function_call_interrupts
2784 ± 32% +157.7% 7174 ± 19% interrupts.CPU79.NMI:Non-maskable_interrupts
2784 ± 32% +157.7% 7174 ± 19% interrupts.CPU79.PMI:Performance_monitoring_interrupts
4024 ± 2% +19.5% 4808 ± 3% interrupts.CPU8.CAL:Function_call_interrupts
2770 ± 32% +159.9% 7201 ± 18% interrupts.CPU8.NMI:Non-maskable_interrupts
2770 ± 32% +159.9% 7201 ± 18% interrupts.CPU8.PMI:Performance_monitoring_interrupts
3710 ± 10% +30.5% 4840 ± 2% interrupts.CPU80.CAL:Function_call_interrupts
3323 ± 32% +115.8% 7172 ± 19% interrupts.CPU80.NMI:Non-maskable_interrupts
3323 ± 32% +115.8% 7172 ± 19% interrupts.CPU80.PMI:Performance_monitoring_interrupts
3997 +18.4% 4733 ± 2% interrupts.CPU81.CAL:Function_call_interrupts
3311 ± 32% +117.4% 7198 ± 18% interrupts.CPU81.NMI:Non-maskable_interrupts
3311 ± 32% +117.4% 7198 ± 18% interrupts.CPU81.PMI:Performance_monitoring_interrupts
4049 +17.6% 4763 ± 2% interrupts.CPU82.CAL:Function_call_interrupts
3319 ± 32% +116.1% 7172 ± 19% interrupts.CPU82.NMI:Non-maskable_interrupts
3319 ± 32% +116.1% 7172 ± 19% interrupts.CPU82.PMI:Performance_monitoring_interrupts
4013 +20.3% 4826 ± 2% interrupts.CPU83.CAL:Function_call_interrupts
3318 ± 32% +116.9% 7199 ± 18% interrupts.CPU83.NMI:Non-maskable_interrupts
3318 ± 32% +116.9% 7199 ± 18% interrupts.CPU83.PMI:Performance_monitoring_interrupts
4040 +19.6% 4834 ± 2% interrupts.CPU84.CAL:Function_call_interrupts
3345 ± 32% +114.5% 7175 ± 19% interrupts.CPU84.NMI:Non-maskable_interrupts
3345 ± 32% +114.5% 7175 ± 19% interrupts.CPU84.PMI:Performance_monitoring_interrupts
3991 ± 2% +19.8% 4781 ± 4% interrupts.CPU86.CAL:Function_call_interrupts
3350 ± 33% +113.7% 7162 ± 19% interrupts.CPU86.NMI:Non-maskable_interrupts
3350 ± 33% +113.7% 7162 ± 19% interrupts.CPU86.PMI:Performance_monitoring_interrupts
3326 ± 32% +117.0% 7216 ± 17% interrupts.CPU87.NMI:Non-maskable_interrupts
3326 ± 32% +117.0% 7216 ± 17% interrupts.CPU87.PMI:Performance_monitoring_interrupts
4058 +19.1% 4835 ± 3% interrupts.CPU9.CAL:Function_call_interrupts
2782 ± 32% +159.5% 7220 ± 18% interrupts.CPU9.NMI:Non-maskable_interrupts
2782 ± 32% +159.5% 7220 ± 18% interrupts.CPU9.PMI:Performance_monitoring_interrupts
51.00 ± 59% +150.0% 127.50 ± 23% interrupts.IWI:IRQ_work_interrupts
283310 ± 21% +114.7% 608405 ± 20% interrupts.NMI:Non-maskable_interrupts
283310 ± 21% +114.7% 608405 ± 20% interrupts.PMI:Performance_monitoring_interrupts
38196883 +161.0% 99690427 ± 2% interrupts.RES:Rescheduling_interrupts
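A note on reading the metric tables in this report: each row gives the base-kernel mean on the left, the patched-kernel mean on the right, and %change as the relative delta between the two; %stddev is the run-to-run variation of each side. A minimal sketch of that arithmetic (assumed semantics, not part of the lkp tooling), checked against the interrupts.RES row directly above:

    # %change = relative delta of the patched mean vs. the base mean
    def pct_change(base, patched):
        return (patched - base) / base * 100.0

    # interrupts.RES:Rescheduling_interrupts: 38196883 -> 99690427
    print(f"{pct_change(38196883, 99690427):+.1f}%")   # prints +161.0%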
***************************************************************************************************
lkp-skl-2sp4: 104 threads Intel(R) Xeon(R) Platinum 8170 CPU @ 2.10GHz with 64G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.2/100%/debian-x86_64-2018-04-03.cgz/300s/lkp-skl-2sp4/custom/reaim/0x200004d
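The slash-separated header above names the test parameters and the line beneath it gives their values in the same order; a small illustrative pairing in Python (hypothetical snippet, not lkp code):

    # Zip the parameter names with their values from the two lines above.
    keys = ("compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/"
            "tbox_group/test/testcase/ucode").split("/")
    vals = ("gcc-7/performance/x86_64-rhel-7.2/100%/"
            "debian-x86_64-2018-04-03.cgz/300s/lkp-skl-2sp4/"
            "custom/reaim/0x200004d").split("/")
    params = dict(zip(keys, vals))
    print(params["testcase"], params["nr_task"], params["runtime"])
    # reaim 100% 300s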
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
2:4 -50% :4 kmsg.do_IRQ:#No_irq_handler_for_vector
:4 25% 1:4 dmesg.WARNING:at#for_ip_error_entry/0x
:4 25% 1:4 dmesg.WARNING:at#for_ip_ret_from_intr/0x
:4 25% 1:4 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=__slab_free/0x
2:4 17% 2:4 perf-profile.calltrace.cycles-pp.error_entry
2:4 24% 3:4 perf-profile.children.cycles-pp.error_entry
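In the fail:runs block above, each entry counts how many of the runs hit the given kernel message (a blank count, as in ":4", evidently means zero), and %reproduction is the change in the hit rate, in percentage points. A short sketch of that arithmetic (assumed semantics, not lkp code), checked against the kmsg.do_IRQ row:

    # %reproduction = change in the fraction of runs hitting the message
    def reproduction_delta(base_fail, base_runs, new_fail, new_runs):
        return (new_fail / new_runs - base_fail / base_runs) * 100.0

    # "2:4  -50%  :4": seen in 2 of 4 base runs, 0 of 4 patched runs
    assert reproduction_delta(2, 4, 0, 4) == -50.0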
%stddev %change %stddev
\ | \
4.77 +11.4% 5.32 ± 2% reaim.std_dev_percent
0.03 +8.2% 0.03 reaim.std_dev_time
2025938 -10.6% 1811037 reaim.time.involuntary_context_switches
8797312 +1.0% 8887670 reaim.time.voluntary_context_switches
9078 +12.0% 10169 meminfo.PageTables
2269 +11.6% 2531 proc-vmstat.nr_page_table_pages
41893 +6.7% 44683 ± 4% softirqs.CPU98.RCU
4.65 ± 2% -15.3% 3.94 ± 10% turbostat.Pkg%pc6
178.24 +2.0% 181.79 turbostat.PkgWatt
12123 ± 18% +33.1% 16134 ± 3% numa-meminfo.node0.Mapped
41009 ± 5% +12.4% 46109 ± 4% numa-meminfo.node0.SReclaimable
131166 ± 5% +4.8% 137483 ± 5% numa-meminfo.node0.SUnreclaim
172176 ± 2% +6.6% 183593 ± 3% numa-meminfo.node0.Slab
3014 ± 18% +34.4% 4052 ± 2% numa-vmstat.node0.nr_mapped
10257 ± 6% +12.4% 11530 ± 4% numa-vmstat.node0.nr_slab_reclaimable
32780 ± 5% +4.9% 34383 ± 5% numa-vmstat.node0.nr_slab_unreclaimable
3664 ± 16% -25.6% 2725 ± 2% numa-vmstat.node1.nr_mapped
1969 ± 4% +2.7% 2022 ± 5% slabinfo.kmalloc-4096.active_objs
5584 +9.8% 6130 ± 3% slabinfo.mm_struct.active_objs
5614 +9.7% 6158 ± 3% slabinfo.mm_struct.num_objs
877.00 +11.5% 978.25 ± 3% slabinfo.names_cache.active_objs
877.00 +11.5% 978.25 ± 3% slabinfo.names_cache.num_objs
1495 -9.8% 1348 ± 5% slabinfo.nsproxy.active_objs
1495 -9.8% 1348 ± 5% slabinfo.nsproxy.num_objs
1313 ± 15% +27.1% 1668 ± 2% sched_debug.cpu.nr_load_updates.stddev
2664 +44.4% 3847 ± 14% sched_debug.cpu.nr_switches.stddev
0.02 ± 77% -89.5% 0.00 ±223% sched_debug.cpu.nr_uninterruptible.avg
81.22 ± 12% +30.5% 106.02 ± 8% sched_debug.cpu.nr_uninterruptible.max
30.10 +53.8% 46.29 ± 7% sched_debug.cpu.nr_uninterruptible.stddev
2408 ± 3% +52.2% 3665 ± 18% sched_debug.cpu.sched_count.stddev
1066 ± 6% +68.1% 1792 ± 17% sched_debug.cpu.sched_goidle.stddev
1413 ± 3% +36.5% 1929 ± 19% sched_debug.cpu.ttwu_count.stddev
553.65 ± 2% -8.6% 505.96 ± 2% sched_debug.cpu.ttwu_local.stddev
3.19 -0.2 2.99 ± 2% perf-stat.i.branch-miss-rate%
7.45 ± 2% -0.7 6.79 ± 2% perf-stat.i.cache-miss-rate%
56644639 -11.5% 50150131 perf-stat.i.cache-misses
5.795e+10 +1.5% 5.883e+10 perf-stat.i.cpu-cycles
0.33 -0.0 0.29 ± 3% perf-stat.i.dTLB-load-miss-rate%
0.10 -0.0 0.09 ± 3% perf-stat.i.dTLB-store-miss-rate%
44.46 -1.5 42.99 ± 2% perf-stat.i.iTLB-load-miss-rate%
3761774 +5.5% 3969797 perf-stat.i.iTLB-load-misses
79.67 -4.1 75.54 perf-stat.i.node-load-miss-rate%
16697015 -17.2% 13819973 perf-stat.i.node-load-misses
2830435 -2.3% 2764205 perf-stat.i.node-loads
60.87 ± 2% -6.0 54.85 ± 3% perf-stat.i.node-store-miss-rate%
3631713 -15.0% 3088593 ± 3% perf-stat.i.node-store-misses
2047418 +7.2% 2195246 perf-stat.i.node-stores
6.61 -0.9 5.71 perf-stat.overall.cache-miss-rate%
44.36 +0.7 45.06 perf-stat.overall.iTLB-load-miss-rate%
11359 -3.1% 11011 perf-stat.overall.instructions-per-iTLB-miss
85.51 -2.2 83.33 perf-stat.overall.node-load-miss-rate%
63.95 -5.5 58.44 perf-stat.overall.node-store-miss-rate%
1.782e+10 -13.1% 1.548e+10 perf-stat.total.cache-misses
5160156 -1.5% 5081874 perf-stat.total.cpu-migrations
1.184e+09 +3.5% 1.225e+09 perf-stat.total.iTLB-load-misses
5.254e+09 -18.8% 4.266e+09 perf-stat.total.node-load-misses
8.905e+08 -4.2% 8.532e+08 perf-stat.total.node-loads
1.143e+09 -16.6% 9.537e+08 ± 3% perf-stat.total.node-store-misses
6.442e+08 +5.2% 6.776e+08 perf-stat.total.node-stores
2.51 -0.3 2.23 perf-profile.calltrace.cycles-pp.copy_page_range.copy_process._do_fork.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.07 ± 2% -0.2 0.84 ± 4% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.release_pages.tlb_flush_mmu_free.arch_tlb_finish_mmu.tlb_finish_mmu
1.04 ± 2% -0.2 0.81 ± 4% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.release_pages.tlb_flush_mmu_free.arch_tlb_finish_mmu
1.37 ± 3% -0.2 1.21 ± 9% perf-profile.calltrace.cycles-pp.irq_enter.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle
0.88 ± 6% -0.1 0.77 ± 6% perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.mmput.flush_old_exec.load_elf_binary
0.81 ± 6% -0.1 0.72 ± 6% perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.exit_mmap.mmput.flush_old_exec
0.60 ± 8% +0.1 0.72 ± 16% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath.queued_read_lock_slowpath.do_wait.kernel_wait4.SYSC_wait4
0.17 ±141% +0.4 0.57 ± 3% perf-profile.calltrace.cycles-pp.queued_write_lock_slowpath.do_exit.do_group_exit.__wake_up_parent.do_syscall_64
1.29 ± 10% +0.4 1.70 ± 12% perf-profile.calltrace.cycles-pp.do_wait.kernel_wait4.SYSC_wait4.do_syscall_64.entry_SYSCALL_64_after_hwframe
25.70 +1.0 26.66 perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
4.15 ± 23% -0.9 3.20 ± 7% perf-profile.children.cycles-pp._raw_spin_lock
14.46 -0.5 14.00 ± 2% perf-profile.children.cycles-pp.exit_mmap
10.89 -0.5 10.43 perf-profile.children.cycles-pp._do_fork
2.46 -0.4 2.09 ± 3% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
3.90 -0.3 3.56 ± 5% perf-profile.children.cycles-pp.apic_timer_interrupt
3.87 -0.3 3.53 ± 5% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
10.20 -0.3 9.88 perf-profile.children.cycles-pp.copy_process
2.53 -0.3 2.24 ± 2% perf-profile.children.cycles-pp.copy_page_range
1.64 ± 5% -0.2 1.41 ± 2% perf-profile.children.cycles-pp.get_page_from_freelist
1.73 ± 4% -0.2 1.51 perf-profile.children.cycles-pp.__alloc_pages_nodemask
1.29 ± 3% -0.2 1.10 ± 4% perf-profile.children.cycles-pp.__schedule
1.22 ± 8% -0.2 1.04 ± 5% perf-profile.children.cycles-pp.ret_from_fork
0.59 ± 3% -0.1 0.45 ± 5% perf-profile.children.cycles-pp.pick_next_task_fair
0.43 ± 2% -0.1 0.30 ± 6% perf-profile.children.cycles-pp.free_pgd_range
0.64 -0.1 0.51 ± 4% perf-profile.children.cycles-pp.wake_up_new_task
0.30 -0.1 0.18 ± 6% perf-profile.children.cycles-pp.free_unref_page
0.98 ± 2% -0.1 0.86 ± 2% perf-profile.children.cycles-pp.rcu_process_callbacks
0.50 ± 4% -0.1 0.38 ± 4% perf-profile.children.cycles-pp.load_balance
0.51 ± 4% -0.1 0.40 ± 5% perf-profile.children.cycles-pp.__list_del_entry_valid
1.22 -0.1 1.10 ± 4% perf-profile.children.cycles-pp.__softirqentry_text_start
0.39 ± 9% -0.1 0.28 ± 7% perf-profile.children.cycles-pp.schedule_tail
0.29 -0.1 0.18 ± 4% perf-profile.children.cycles-pp.free_pcppages_bulk
0.35 ± 4% -0.1 0.25 ± 3% perf-profile.children.cycles-pp.do_task_dead
0.88 -0.1 0.78 ± 2% perf-profile.children.cycles-pp.select_task_rq_fair
0.35 -0.1 0.26 ± 4% perf-profile.children.cycles-pp.free_unref_page_commit
0.13 ± 7% -0.1 0.04 ± 58% perf-profile.children.cycles-pp.sched_ttwu_pending
1.04 -0.1 0.96 ± 3% perf-profile.children.cycles-pp.kmem_cache_free
0.70 ± 2% -0.1 0.62 ± 3% perf-profile.children.cycles-pp.__pte_alloc
0.21 ± 3% -0.1 0.14 ± 7% perf-profile.children.cycles-pp.idle_cpu
0.32 ± 6% -0.1 0.25 ± 6% perf-profile.children.cycles-pp.find_busiest_group
0.36 ± 4% -0.1 0.29 ± 5% perf-profile.children.cycles-pp.finish_task_switch
0.97 -0.1 0.91 perf-profile.children.cycles-pp.anon_vma_interval_tree_insert
0.32 -0.1 0.25 ± 6% perf-profile.children.cycles-pp.mm_init
0.28 ± 3% -0.1 0.22 ± 4% perf-profile.children.cycles-pp.pgd_alloc
0.14 ± 10% -0.1 0.08 ± 8% perf-profile.children.cycles-pp.free_one_page
0.56 ± 4% -0.1 0.50 ± 4% perf-profile.children.cycles-pp.schedule
0.23 ± 2% -0.1 0.18 ± 8% perf-profile.children.cycles-pp.__get_free_pages
0.81 ± 2% -0.1 0.75 ± 3% perf-profile.children.cycles-pp.__slab_free
0.31 ± 9% -0.1 0.25 ± 7% perf-profile.children.cycles-pp.__put_user_4
0.19 ± 2% -0.1 0.14 ± 5% perf-profile.children.cycles-pp.dup_userfaultfd
0.15 ± 6% -0.1 0.10 ± 9% perf-profile.children.cycles-pp.__free_pages_ok
2.25 -0.0 2.20 perf-profile.children.cycles-pp.anon_vma_clone
0.08 ± 5% -0.0 0.04 ± 60% perf-profile.children.cycles-pp.unfreeze_partials
1.00 -0.0 0.96 perf-profile.children.cycles-pp.sys_write
0.20 ± 4% -0.0 0.16 ± 13% perf-profile.children.cycles-pp.devkmsg_write
0.20 ± 4% -0.0 0.16 ± 13% perf-profile.children.cycles-pp.printk_emit
0.21 ± 3% -0.0 0.18 ± 6% perf-profile.children.cycles-pp.update_curr
0.89 -0.0 0.86 ± 2% perf-profile.children.cycles-pp.__vfs_write
0.09 -0.0 0.06 ± 11% perf-profile.children.cycles-pp.new_slab
0.16 ± 7% -0.0 0.13 ± 11% perf-profile.children.cycles-pp.__mmdrop
0.09 ± 9% -0.0 0.06 ± 13% perf-profile.children.cycles-pp.put_cpu_partial
0.44 -0.0 0.41 perf-profile.children.cycles-pp.remove_vma
0.52 -0.0 0.49 ± 3% perf-profile.children.cycles-pp.enqueue_task_fair
0.07 -0.0 0.04 ± 57% perf-profile.children.cycles-pp.lapic_next_deadline
0.20 ± 4% -0.0 0.18 ± 6% perf-profile.children.cycles-pp.__put_task_struct
0.17 ± 7% -0.0 0.15 ± 5% perf-profile.children.cycles-pp.__lock_text_start
0.14 ± 5% -0.0 0.12 ± 5% perf-profile.children.cycles-pp.do_send_sig_info
0.09 -0.0 0.07 perf-profile.children.cycles-pp.entry_SYSCALL_64_stage2
0.13 ± 6% -0.0 0.11 ± 3% perf-profile.children.cycles-pp.put_task_stack
0.09 ± 9% -0.0 0.07 ± 5% perf-profile.children.cycles-pp.__queue_work
0.14 ± 3% -0.0 0.12 ± 5% perf-profile.children.cycles-pp.unmap_single_vma
0.10 -0.0 0.09 ± 4% perf-profile.children.cycles-pp.tcp_sendmsg_locked
0.28 ± 4% +0.0 0.31 ± 2% perf-profile.children.cycles-pp.vfs_statx
0.26 ± 4% +0.0 0.29 ± 3% perf-profile.children.cycles-pp.SYSC_newstat
0.56 ± 3% +0.0 0.59 ± 3% perf-profile.children.cycles-pp.elf_map
0.45 ± 5% +0.0 0.48 ± 3% perf-profile.children.cycles-pp.__wake_up_common
1.12 +0.2 1.32 ± 3% perf-profile.children.cycles-pp.queued_read_lock_slowpath
1.26 ± 2% +0.2 1.49 ± 3% perf-profile.children.cycles-pp.queued_write_lock_slowpath
2.16 ± 2% +0.2 2.39 ± 2% perf-profile.children.cycles-pp.do_wait
2.19 ± 2% +0.2 2.43 ± 2% perf-profile.children.cycles-pp.SYSC_wait4
2.18 ± 2% +0.2 2.42 ± 2% perf-profile.children.cycles-pp.kernel_wait4
25.95 +1.0 26.92 perf-profile.children.cycles-pp.intel_idle
1.46 -0.1 1.31 ± 3% perf-profile.self.cycles-pp.copy_page_range
0.51 ± 4% -0.1 0.40 ± 5% perf-profile.self.cycles-pp.__list_del_entry_valid
0.21 ± 3% -0.1 0.14 ± 7% perf-profile.self.cycles-pp.idle_cpu
0.96 -0.1 0.89 perf-profile.self.cycles-pp.anon_vma_interval_tree_insert
0.19 -0.1 0.14 ± 3% perf-profile.self.cycles-pp.dup_userfaultfd
0.35 ± 2% -0.0 0.30 ± 5% perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.25 ± 5% -0.0 0.20 ± 6% perf-profile.self.cycles-pp.find_busiest_group
0.28 ± 2% -0.0 0.24 ± 3% perf-profile.self.cycles-pp.unlink_anon_vmas
0.57 -0.0 0.53 ± 3% perf-profile.self.cycles-pp.select_task_rq_fair
0.15 ± 3% -0.0 0.11 ± 7% perf-profile.self.cycles-pp.free_pcppages_bulk
0.55 -0.0 0.52 ± 2% perf-profile.self.cycles-pp.__slab_free
0.09 -0.0 0.07 perf-profile.self.cycles-pp.entry_SYSCALL_64_stage2
0.07 ± 11% -0.0 0.05 perf-profile.self.cycles-pp.update_rq_clock
0.14 ± 6% +0.0 0.16 perf-profile.self.cycles-pp.handle_mm_fault
25.94 +1.0 26.91 perf-profile.self.cycles-pp.intel_idle
2592 ± 11% -20.4% 2062 ± 11% interrupts.CPU1.RES:Rescheduling_interrupts
1584 -20.9% 1254 ± 8% interrupts.CPU10.RES:Rescheduling_interrupts
1405 ± 3% -21.0% 1110 ± 2% interrupts.CPU100.RES:Rescheduling_interrupts
1420 ± 4% -23.8% 1082 ± 10% interrupts.CPU101.RES:Rescheduling_interrupts
1421 ± 2% -19.7% 1141 ± 11% interrupts.CPU102.RES:Rescheduling_interrupts
1501 ± 27% +35.5% 2033 ± 8% interrupts.CPU103.NMI:Non-maskable_interrupts
1501 ± 27% +35.5% 2033 ± 8% interrupts.CPU103.PMI:Performance_monitoring_interrupts
1394 -23.0% 1074 ± 5% interrupts.CPU103.RES:Rescheduling_interrupts
1566 -19.1% 1266 ± 11% interrupts.CPU11.RES:Rescheduling_interrupts
1531 -17.2% 1267 ± 7% interrupts.CPU12.RES:Rescheduling_interrupts
1559 ± 2% -22.6% 1206 ± 8% interrupts.CPU13.RES:Rescheduling_interrupts
1503 -23.4% 1151 ± 6% interrupts.CPU15.RES:Rescheduling_interrupts
1584 ± 3% -24.2% 1201 ± 8% interrupts.CPU16.RES:Rescheduling_interrupts
1528 ± 6% -18.8% 1240 ± 13% interrupts.CPU17.RES:Rescheduling_interrupts
1518 ± 3% -21.1% 1197 ± 6% interrupts.CPU18.RES:Rescheduling_interrupts
2303 ± 11% -19.5% 1854 interrupts.CPU19.NMI:Non-maskable_interrupts
2303 ± 11% -19.5% 1854 interrupts.CPU19.PMI:Performance_monitoring_interrupts
1457 ± 4% -18.0% 1194 ± 4% interrupts.CPU19.RES:Rescheduling_interrupts
1884 ± 5% -15.3% 1596 ± 4% interrupts.CPU2.RES:Rescheduling_interrupts
1543 ± 4% -22.9% 1189 ± 7% interrupts.CPU20.RES:Rescheduling_interrupts
1480 -19.4% 1193 ± 5% interrupts.CPU21.RES:Rescheduling_interrupts
1492 ± 2% -17.5% 1231 ± 6% interrupts.CPU22.RES:Rescheduling_interrupts
1482 ± 2% -17.0% 1230 ± 7% interrupts.CPU24.RES:Rescheduling_interrupts
1434 ± 3% -17.4% 1184 ± 6% interrupts.CPU25.RES:Rescheduling_interrupts
1568 ± 4% -12.7% 1368 ± 4% interrupts.CPU26.RES:Rescheduling_interrupts
1544 -16.5% 1289 ± 3% interrupts.CPU27.RES:Rescheduling_interrupts
1486 ± 3% -16.6% 1238 ± 5% interrupts.CPU29.RES:Rescheduling_interrupts
1856 ± 3% -14.3% 1591 ± 8% interrupts.CPU3.RES:Rescheduling_interrupts
1507 -18.8% 1224 ± 9% interrupts.CPU30.RES:Rescheduling_interrupts
1561 ± 2% -19.9% 1250 ± 3% interrupts.CPU31.RES:Rescheduling_interrupts
1551 -23.4% 1187 ± 3% interrupts.CPU32.RES:Rescheduling_interrupts
1449 ± 4% -16.6% 1208 ± 9% interrupts.CPU33.RES:Rescheduling_interrupts
1521 ± 2% -21.6% 1193 ± 6% interrupts.CPU34.RES:Rescheduling_interrupts
1644 ± 15% -26.2% 1212 ± 7% interrupts.CPU35.RES:Rescheduling_interrupts
1498 -22.5% 1161 ± 2% interrupts.CPU36.RES:Rescheduling_interrupts
1487 ± 3% -19.8% 1192 ± 4% interrupts.CPU37.RES:Rescheduling_interrupts
1538 ± 4% -25.0% 1153 ± 5% interrupts.CPU38.RES:Rescheduling_interrupts
1486 -20.6% 1181 ± 7% interrupts.CPU39.RES:Rescheduling_interrupts
1488 ± 2% -20.2% 1187 ± 3% interrupts.CPU40.RES:Rescheduling_interrupts
1503 -22.3% 1168 ± 10% interrupts.CPU41.RES:Rescheduling_interrupts
1560 ± 23% +24.5% 1942 ± 4% interrupts.CPU43.NMI:Non-maskable_interrupts
1560 ± 23% +24.5% 1942 ± 4% interrupts.CPU43.PMI:Performance_monitoring_interrupts
1654 ± 7% -26.5% 1216 ± 4% interrupts.CPU43.RES:Rescheduling_interrupts
1501 ± 5% -23.5% 1148 ± 4% interrupts.CPU44.RES:Rescheduling_interrupts
1473 ± 3% -21.0% 1164 ± 7% interrupts.CPU45.RES:Rescheduling_interrupts
1424 ± 3% -18.0% 1167 ± 6% interrupts.CPU46.RES:Rescheduling_interrupts
1481 -25.1% 1109 ± 4% interrupts.CPU47.RES:Rescheduling_interrupts
1436 -19.8% 1152 ± 4% interrupts.CPU48.RES:Rescheduling_interrupts
1688 ± 2% -20.2% 1347 ± 8% interrupts.CPU5.RES:Rescheduling_interrupts
1440 ± 2% -20.9% 1139 ± 7% interrupts.CPU50.RES:Rescheduling_interrupts
1462 -23.5% 1118 ± 7% interrupts.CPU51.RES:Rescheduling_interrupts
1410 ± 2% -14.7% 1203 ± 5% interrupts.CPU52.RES:Rescheduling_interrupts
1524 ± 2% -24.6% 1149 ± 5% interrupts.CPU53.RES:Rescheduling_interrupts
1438 -16.5% 1201 ± 9% interrupts.CPU54.RES:Rescheduling_interrupts
1454 ± 2% -19.5% 1170 ± 6% interrupts.CPU55.RES:Rescheduling_interrupts
1468 -20.1% 1173 ± 4% interrupts.CPU56.RES:Rescheduling_interrupts
1461 -20.6% 1159 ± 4% interrupts.CPU57.RES:Rescheduling_interrupts
1410 ± 2% -18.1% 1155 ± 4% interrupts.CPU58.RES:Rescheduling_interrupts
1452 ± 3% -19.0% 1176 ± 6% interrupts.CPU59.RES:Rescheduling_interrupts
1621 ± 4% -16.3% 1357 ± 5% interrupts.CPU6.RES:Rescheduling_interrupts
1455 ± 2% -22.7% 1124 ± 8% interrupts.CPU60.RES:Rescheduling_interrupts
1491 ± 3% -25.8% 1106 ± 11% interrupts.CPU61.RES:Rescheduling_interrupts
1401 -18.4% 1143 ± 3% interrupts.CPU62.RES:Rescheduling_interrupts
1429 -19.4% 1152 ± 9% interrupts.CPU63.RES:Rescheduling_interrupts
1437 -22.8% 1109 ± 8% interrupts.CPU64.RES:Rescheduling_interrupts
1499 -26.4% 1104 ± 7% interrupts.CPU65.RES:Rescheduling_interrupts
1485 ± 4% -23.0% 1144 ± 7% interrupts.CPU66.RES:Rescheduling_interrupts
1405 ± 3% -19.0% 1138 ± 8% interrupts.CPU67.RES:Rescheduling_interrupts
1492 ± 2% -22.4% 1159 ± 12% interrupts.CPU68.RES:Rescheduling_interrupts
1435 ± 4% -19.9% 1149 ± 14% interrupts.CPU69.RES:Rescheduling_interrupts
1625 ± 3% -15.6% 1371 ± 6% interrupts.CPU7.RES:Rescheduling_interrupts
1480 ± 3% -21.4% 1164 ± 12% interrupts.CPU70.RES:Rescheduling_interrupts
2355 ± 10% -30.9% 1627 ± 25% interrupts.CPU71.NMI:Non-maskable_interrupts
2355 ± 10% -30.9% 1627 ± 25% interrupts.CPU71.PMI:Performance_monitoring_interrupts
1428 ± 3% -19.4% 1151 ± 8% interrupts.CPU71.RES:Rescheduling_interrupts
1427 ± 2% -19.7% 1145 ± 9% interrupts.CPU72.RES:Rescheduling_interrupts
1452 ± 4% -17.5% 1198 ± 7% interrupts.CPU73.RES:Rescheduling_interrupts
1419 ± 2% -19.0% 1149 ± 6% interrupts.CPU74.RES:Rescheduling_interrupts
1441 ± 2% -18.4% 1176 ± 9% interrupts.CPU75.RES:Rescheduling_interrupts
1435 ± 3% -16.1% 1204 ± 6% interrupts.CPU76.RES:Rescheduling_interrupts
1445 -22.2% 1124 ± 6% interrupts.CPU77.RES:Rescheduling_interrupts
1481 ± 4% -23.8% 1128 ± 8% interrupts.CPU78.RES:Rescheduling_interrupts
1392 -20.7% 1104 ± 9% interrupts.CPU79.RES:Rescheduling_interrupts
1621 ± 4% -22.7% 1252 ± 9% interrupts.CPU8.RES:Rescheduling_interrupts
1478 ± 2% -24.3% 1118 ± 6% interrupts.CPU80.RES:Rescheduling_interrupts
1481 ± 4% -23.2% 1137 ± 8% interrupts.CPU81.RES:Rescheduling_interrupts
1453 ± 2% -20.8% 1151 ± 4% interrupts.CPU82.RES:Rescheduling_interrupts
1431 -22.5% 1110 ± 10% interrupts.CPU83.RES:Rescheduling_interrupts
1477 ± 4% -25.9% 1094 ± 7% interrupts.CPU84.RES:Rescheduling_interrupts
1467 ± 2% -21.4% 1153 ± 6% interrupts.CPU85.RES:Rescheduling_interrupts
1427 ± 3% -20.1% 1140 ± 12% interrupts.CPU86.RES:Rescheduling_interrupts
1512 ± 5% -25.5% 1126 ± 5% interrupts.CPU87.RES:Rescheduling_interrupts
1409 -20.8% 1115 ± 5% interrupts.CPU88.RES:Rescheduling_interrupts
1408 ± 2% -18.8% 1143 ± 5% interrupts.CPU89.RES:Rescheduling_interrupts
1659 ± 2% -22.0% 1294 ± 11% interrupts.CPU9.RES:Rescheduling_interrupts
1475 ± 3% -23.7% 1126 ± 5% interrupts.CPU90.RES:Rescheduling_interrupts
1444 -22.8% 1114 ± 7% interrupts.CPU91.RES:Rescheduling_interrupts
1442 ± 6% -21.3% 1135 ± 7% interrupts.CPU92.RES:Rescheduling_interrupts
1466 ± 2% -21.7% 1148 ± 2% interrupts.CPU93.RES:Rescheduling_interrupts
1413 ± 2% -17.7% 1163 ± 6% interrupts.CPU94.RES:Rescheduling_interrupts
1611 ± 3% -29.8% 1131 ± 5% interrupts.CPU95.RES:Rescheduling_interrupts
1406 -20.1% 1123 ± 5% interrupts.CPU96.RES:Rescheduling_interrupts
1386 ± 3% -20.0% 1109 ± 9% interrupts.CPU97.RES:Rescheduling_interrupts
1406 ± 4% -21.6% 1102 ± 7% interrupts.CPU98.RES:Rescheduling_interrupts
1379 ± 4% -18.9% 1118 ± 8% interrupts.CPU99.RES:Rescheduling_interrupts
163356 -19.1% 132229 interrupts.RES:Rescheduling_interrupts
***************************************************************************************************
lkp-skl-2sp4: 104 threads Intel(R) Xeon(R) Platinum 8170 CPU @ 2.10GHz with 64G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.2/100%/debian-x86_64-2018-04-03.cgz/300s/lkp-skl-2sp4/alltests/reaim/0x200004d
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
1:4 -25% :4 dmesg.WARNING:at#for_ip_error_entry/0x
:4 25% 1:4 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=__slab_free/0x
1:4 9% 1:4 perf-profile.children.cycles-pp.error_entry
0:4 4% 0:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
8.37 -4.0% 8.03 reaim.child_systime
0.42 +2.4% 0.43 reaim.std_dev_percent
294600 -7.7% 271774 reaim.time.involuntary_context_switches
329.71 -4.0% 316.42 reaim.time.system_time
1675279 +1.3% 1696674 reaim.time.voluntary_context_switches
4.516e+08 ± 6% +17.8% 5.322e+08 ± 10% cpuidle.POLL.time
200189 ± 3% +10.4% 220923 meminfo.AnonHugePages
154.84 +0.8% 156.13 turbostat.PkgWatt
64.74 ± 20% +95.3% 126.45 ± 19% boot-time.boot
6459 ± 21% +99.3% 12874 ± 20% boot-time.idle
40.50 ± 43% +96.3% 79.50 ± 32% numa-vmstat.node1.nr_anon_transparent_hugepages
12122 ± 7% -20.2% 9674 ± 14% numa-vmstat.node1.nr_slab_reclaimable
153053 ± 3% +10.6% 169298 ± 7% numa-meminfo.node0.Slab
84083 ± 42% +95.4% 164331 ± 32% numa-meminfo.node1.AnonHugePages
48467 ± 7% -20.2% 38679 ± 14% numa-meminfo.node1.SReclaimable
97.25 ± 3% +10.8% 107.75 proc-vmstat.nr_anon_transparent_hugepages
26303 -1.8% 25827 proc-vmstat.nr_kernel_stack
4731 -5.2% 4485 ± 3% proc-vmstat.nr_page_table_pages
2463 ±134% -91.1% 220.50 ± 98% proc-vmstat.numa_hint_faults
16885 ± 6% -9.6% 15258 ± 4% softirqs.CPU32.RCU
30855 ± 13% -14.3% 26453 softirqs.CPU4.TIMER
17409 ± 7% -12.5% 15231 softirqs.CPU51.RCU
18157 ± 9% -12.9% 15809 ± 2% softirqs.CPU55.RCU
16588 ± 10% -11.6% 14662 ± 2% softirqs.CPU78.RCU
17307 ± 7% -12.6% 15130 ± 2% softirqs.CPU87.RCU
17146 ± 9% -13.2% 14880 ± 2% softirqs.CPU90.RCU
2593 ± 6% -17.3% 2145 ± 10% slabinfo.Acpi-ParseExt.active_objs
2593 ± 6% -17.3% 2145 ± 10% slabinfo.Acpi-ParseExt.num_objs
682.50 ± 6% -17.1% 565.50 ± 5% slabinfo.bdev_cache.active_objs
682.50 ± 6% -17.1% 565.50 ± 5% slabinfo.bdev_cache.num_objs
6102 ± 5% -30.6% 4234 ± 13% slabinfo.eventpoll_epi.active_objs
6102 ± 5% -30.6% 4234 ± 13% slabinfo.eventpoll_epi.num_objs
5340 ± 5% -30.6% 3704 ± 13% slabinfo.eventpoll_pwq.active_objs
5340 ± 5% -30.6% 3704 ± 13% slabinfo.eventpoll_pwq.num_objs
1018 ± 9% -17.6% 839.00 ± 2% slabinfo.file_lock_cache.active_objs
1018 ± 9% -17.6% 839.00 ± 2% slabinfo.file_lock_cache.num_objs
3359 ± 7% -12.7% 2933 ± 8% slabinfo.fsnotify_mark_connector.active_objs
3359 ± 7% -12.7% 2933 ± 8% slabinfo.fsnotify_mark_connector.num_objs
1485 ± 3% -10.9% 1323 ± 6% slabinfo.nsproxy.active_objs
1485 ± 3% -10.9% 1323 ± 6% slabinfo.nsproxy.num_objs
1.67 -0.0 1.64 perf-stat.branch-miss-rate%
1.383e+10 -1.6% 1.361e+10 perf-stat.branch-misses
7.52 -1.0 6.54 perf-stat.cache-miss-rate%
3.062e+09 -13.7% 2.643e+09 perf-stat.cache-misses
1.041e+13 +2.4% 1.066e+13 perf-stat.cpu-cycles
744682 -1.2% 735818 perf-stat.cpu-migrations
4.046e+08 +3.2% 4.177e+08 perf-stat.iTLB-load-misses
4.821e+08 +3.3% 4.98e+08 perf-stat.iTLB-loads
16668 -2.8% 16200 perf-stat.instructions-per-iTLB-miss
87.92 -2.1 85.80 perf-stat.node-load-miss-rate%
8.016e+08 -20.8% 6.351e+08 perf-stat.node-load-misses
1.102e+08 -4.6% 1.051e+08 perf-stat.node-loads
59.53 -6.5 53.07 perf-stat.node-store-miss-rate%
1.435e+08 -24.7% 1.081e+08 ± 2% perf-stat.node-store-misses
97539783 -2.0% 95550500 perf-stat.node-stores
552.25 ± 27% +67.3% 923.75 ± 24% interrupts.CPU10.NMI:Non-maskable_interrupts
552.25 ± 27% +67.3% 923.75 ± 24% interrupts.CPU10.PMI:Performance_monitoring_interrupts
455.50 +32.9% 605.50 ± 19% interrupts.CPU15.RES:Rescheduling_interrupts
361.75 ± 6% +58.9% 574.75 ± 23% interrupts.CPU26.RES:Rescheduling_interrupts
321.25 ± 7% +22.0% 392.00 ± 6% interrupts.CPU30.RES:Rescheduling_interrupts
278.25 ± 9% +18.1% 328.75 ± 13% interrupts.CPU41.RES:Rescheduling_interrupts
746.75 ± 11% +60.5% 1198 ± 37% interrupts.CPU44.NMI:Non-maskable_interrupts
746.75 ± 11% +60.5% 1198 ± 37% interrupts.CPU44.PMI:Performance_monitoring_interrupts
645.25 ± 32% +43.0% 922.50 ± 13% interrupts.CPU47.NMI:Non-maskable_interrupts
645.25 ± 32% +43.0% 922.50 ± 13% interrupts.CPU47.PMI:Performance_monitoring_interrupts
631.25 ± 23% +37.4% 867.25 ± 12% interrupts.CPU58.NMI:Non-maskable_interrupts
631.25 ± 23% +37.4% 867.25 ± 12% interrupts.CPU58.PMI:Performance_monitoring_interrupts
713.50 ± 12% +22.2% 871.75 ± 10% interrupts.CPU65.NMI:Non-maskable_interrupts
713.50 ± 12% +22.2% 871.75 ± 10% interrupts.CPU65.PMI:Performance_monitoring_interrupts
620.00 ± 14% +95.4% 1211 ± 56% interrupts.CPU72.NMI:Non-maskable_interrupts
620.00 ± 14% +95.4% 1211 ± 56% interrupts.CPU72.PMI:Performance_monitoring_interrupts
620.75 ± 30% +72.8% 1072 ± 33% interrupts.CPU83.NMI:Non-maskable_interrupts
620.75 ± 30% +72.8% 1072 ± 33% interrupts.CPU83.PMI:Performance_monitoring_interrupts
779.83 ± 4% +53.9% 1200 ± 16% sched_debug.cfs_rq:/.exec_clock.stddev
43531 ± 4% +69.1% 73628 ± 7% sched_debug.cfs_rq:/.min_vruntime.stddev
1.00 +9.3% 1.09 ± 3% sched_debug.cfs_rq:/.nr_spread_over.avg
2.39 ± 75% +256.7% 8.54 ± 53% sched_debug.cfs_rq:/.removed.load_avg.avg
16.68 ± 62% +171.4% 45.26 ± 35% sched_debug.cfs_rq:/.removed.load_avg.stddev
110.67 ± 75% +254.3% 392.14 ± 53% sched_debug.cfs_rq:/.removed.runnable_sum.avg
771.19 ± 62% +170.1% 2082 ± 35% sched_debug.cfs_rq:/.removed.runnable_sum.stddev
43530 ± 4% +69.1% 73628 ± 7% sched_debug.cfs_rq:/.spread0.stddev
216468 ± 5% +36.6% 295649 ± 6% sched_debug.cpu.clock.avg
216487 ± 5% +36.6% 295669 ± 6% sched_debug.cpu.clock.max
216210 ± 5% +36.6% 295436 ± 6% sched_debug.cpu.clock.min
216468 ± 5% +36.6% 295649 ± 6% sched_debug.cpu.clock_task.avg
216487 ± 5% +36.6% 295669 ± 6% sched_debug.cpu.clock_task.max
216210 ± 5% +36.6% 295436 ± 6% sched_debug.cpu.clock_task.min
1593 ± 11% -7.6% 1473 ± 5% sched_debug.cpu.curr->pid.avg
34247 +11.9% 38310 ± 6% sched_debug.cpu.nr_switches.max
2145 ± 2% +19.3% 2559 ± 5% sched_debug.cpu.nr_switches.stddev
31588 +16.2% 36707 ± 6% sched_debug.cpu.sched_count.max
1791 ± 3% +27.7% 2288 ± 7% sched_debug.cpu.sched_count.stddev
13661 +18.6% 16205 ± 8% sched_debug.cpu.sched_goidle.max
838.65 ± 4% +25.7% 1054 ± 9% sched_debug.cpu.sched_goidle.stddev
12457 ± 3% +33.1% 16586 ± 13% sched_debug.cpu.ttwu_count.max
887.04 ± 3% +37.4% 1218 ± 13% sched_debug.cpu.ttwu_count.stddev
264.20 ± 4% +25.9% 332.56 ± 11% sched_debug.cpu.ttwu_local.stddev
216473 ± 5% +36.6% 295655 ± 6% sched_debug.cpu_clk
216473 ± 5% +36.6% 295655 ± 6% sched_debug.ktime
217713 ± 5% +36.3% 296841 ± 6% sched_debug.sched_clk
0.54 ± 4% +0.1 0.67 ± 8% perf-profile.calltrace.cycles-pp.ktime_get_update_offsets_now.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
1.60 ± 5% +0.3 1.89 ± 9% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle
1.40 ± 12% +0.9 2.29 ± 17% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.tick_do_update_jiffies64.tick_irq_enter.irq_enter
1.60 ± 12% +1.0 2.60 ± 17% perf-profile.calltrace.cycles-pp._raw_spin_lock.tick_do_update_jiffies64.tick_irq_enter.irq_enter.smp_apic_timer_interrupt
1.61 ± 12% +1.0 2.62 ± 17% perf-profile.calltrace.cycles-pp.tick_do_update_jiffies64.tick_irq_enter.irq_enter.smp_apic_timer_interrupt.apic_timer_interrupt
3.06 ± 6% +1.3 4.36 ± 12% perf-profile.calltrace.cycles-pp.tick_irq_enter.irq_enter.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
3.13 ± 5% +1.3 4.45 ± 12% perf-profile.calltrace.cycles-pp.irq_enter.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle
5.58 ± 2% +1.7 7.32 ± 12% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry
5.60 ± 2% +1.8 7.36 ± 12% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
0.07 ± 17% +0.0 0.10 ± 8% perf-profile.children.cycles-pp.copy_user_generic_unrolled
0.10 ± 4% +0.0 0.13 ± 11% perf-profile.children.cycles-pp.arch_get_unmapped_area_topdown
0.10 ± 4% +0.0 0.14 ± 12% perf-profile.children.cycles-pp.vfs_statx
0.09 ± 4% +0.0 0.12 ± 13% perf-profile.children.cycles-pp.SYSC_newstat
0.06 ± 14% +0.0 0.09 ± 24% perf-profile.children.cycles-pp.__vmalloc_node_range
0.11 ± 14% +0.0 0.14 ± 25% perf-profile.children.cycles-pp.tlb_flush_mmu_tlbonly
0.06 ± 63% +0.0 0.10 ± 18% perf-profile.children.cycles-pp.terminate_walk
0.00 +0.1 0.06 ± 14% perf-profile.children.cycles-pp.find_vmap_area
0.01 ±173% +0.1 0.08 ± 11% perf-profile.children.cycles-pp.remove_vm_area
0.00 +0.1 0.07 ± 38% perf-profile.children.cycles-pp.__get_vm_area_node
0.18 ± 11% +0.1 0.26 ± 15% perf-profile.children.cycles-pp.creat
0.58 ± 5% +0.1 0.73 ± 8% perf-profile.children.cycles-pp.ktime_get_update_offsets_now
1.57 ± 11% +1.0 2.52 ± 16% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
1.66 ± 14% +1.0 2.67 ± 16% perf-profile.children.cycles-pp.tick_do_update_jiffies64
2.28 ± 7% +1.1 3.35 ± 14% perf-profile.children.cycles-pp._raw_spin_lock
3.15 ± 7% +1.3 4.44 ± 12% perf-profile.children.cycles-pp.tick_irq_enter
3.21 ± 6% +1.3 4.53 ± 12% perf-profile.children.cycles-pp.irq_enter
6.39 +1.8 8.22 ± 11% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
6.42 +1.9 8.29 ± 11% perf-profile.children.cycles-pp.apic_timer_interrupt
0.07 ± 13% +0.0 0.08 ± 13% perf-profile.self.cycles-pp.copy_user_generic_unrolled
0.12 ± 22% +0.0 0.16 ± 21% perf-profile.self.cycles-pp.do_syscall_64
0.27 ± 8% +0.1 0.35 ± 12% perf-profile.self.cycles-pp.tick_irq_enter
0.55 ± 4% +0.1 0.69 ± 9% perf-profile.self.cycles-pp.ktime_get_update_offsets_now
1.57 ± 11% +1.0 2.52 ± 16% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
***************************************************************************************************
lkp-skl-2sp4: 104 threads Intel(R) Xeon(R) Platinum 8170 CPU @ 2.10GHz with 64G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.2/50%/debian-x86_64-2018-04-03.cgz/300s/lkp-skl-2sp4/pft/0x200004d
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:4 25% 1:4 dmesg.WARNING:at#for_ip_error_entry/0x
1:4 -25% :4 dmesg.WARNING:at#for_ip_ret_from_intr/0x
%stddev %change %stddev
\ | \
230070 -42.7% 131828 pft.faults_per_sec_per_cpu
24154 -4.1% 23160 ± 3% pft.time.involuntary_context_switches
7817260 -32.7% 5262150 pft.time.minor_page_faults
4048 +18.0% 4778 pft.time.percent_of_cpu_this_job_got
11917 +16.9% 13933 pft.time.system_time
250.86 +73.7% 435.77 ± 10% pft.time.user_time
142515 +59.4% 227173 ± 5% pft.time.voluntary_context_switches
7334244 ± 7% -64.3% 2618153 ± 65% numa-numastat.node1.local_node
7344576 ± 7% -64.2% 2628307 ± 65% numa-numastat.node1.numa_hit
59.50 -11.3% 52.75 vmstat.cpu.id
39.75 +20.1% 47.75 vmstat.procs.r
59.92 -6.7 53.19 mpstat.cpu.idle%
0.00 ± 26% +0.0 0.01 ± 7% mpstat.cpu.soft%
39.08 +6.1 45.22 mpstat.cpu.sys%
0.99 +0.6 1.58 ± 10% mpstat.cpu.usr%
4673420 ± 11% +30.2% 6084101 ± 13% cpuidle.C1.time
2.712e+08 ± 5% +30.1% 3.529e+08 ± 2% cpuidle.C1E.time
624111 ± 2% +36.3% 850658 ± 4% cpuidle.C1E.usage
1.791e+10 -12.1% 1.574e+10 cpuidle.C6.time
18607622 -13.1% 16169546 cpuidle.C6.usage
1.361e+08 ± 19% +118.3% 2.971e+08 ± 13% cpuidle.POLL.time
5103004 ± 2% +26.6% 6459744 meminfo.Active
5067740 ± 2% +27.0% 6436925 meminfo.Active(anon)
756326 ± 8% +124.1% 1695158 ± 25% meminfo.AnonHugePages
4887369 ± 3% +26.7% 6193791 meminfo.AnonPages
5.41e+08 +19.0% 6.437e+08 meminfo.Committed_AS
6892 +33.2% 9179 ± 10% meminfo.PageTables
574999 ± 9% +115.7% 1240273 ± 28% numa-vmstat.node0.nr_active_anon
554302 ± 7% +116.5% 1200168 ± 29% numa-vmstat.node0.nr_anon_pages
574609 ± 9% +115.8% 1240281 ± 28% numa-vmstat.node0.nr_zone_active_anon
10839 ± 9% -25.3% 8092 ± 8% numa-vmstat.node1.nr_slab_reclaimable
4140994 ± 7% -49.5% 2091526 ± 62% numa-vmstat.node1.numa_hit
3996598 ± 7% -51.3% 1947593 ± 68% numa-vmstat.node1.numa_local
2314942 ± 12% +115.3% 4985067 ± 27% numa-meminfo.node0.Active
2297543 ± 13% +116.5% 4973510 ± 27% numa-meminfo.node0.Active(anon)
359259 ± 13% +249.8% 1256563 ± 42% numa-meminfo.node0.AnonHugePages
2191741 ± 12% +118.8% 4794936 ± 28% numa-meminfo.node0.AnonPages
3351366 ± 9% +80.5% 6050637 ± 27% numa-meminfo.node0.MemUsed
3133 ± 9% +91.1% 5990 ± 32% numa-meminfo.node0.PageTables
43356 ± 9% -25.3% 32370 ± 8% numa-meminfo.node1.SReclaimable
1177 +15.6% 1360 turbostat.Avg_MHz
42.29 +6.6 48.84 turbostat.Busy%
602767 ± 3% +35.6% 817443 ± 2% turbostat.C1E
0.86 ± 6% +0.3 1.11 turbostat.C1E%
18604760 -13.1% 16166704 turbostat.C6
57.00 -6.9 50.14 turbostat.C6%
56.09 -12.8% 48.90 ± 4% turbostat.CPU%c1
255.40 -8.2% 234.42 turbostat.PkgWatt
65.03 -17.0% 53.98 turbostat.RAMWatt
6202 ± 2% -28.7% 4424 ± 6% slabinfo.eventpoll_epi.active_objs
6202 ± 2% -28.7% 4424 ± 6% slabinfo.eventpoll_epi.num_objs
5427 ± 2% -28.7% 3871 ± 6% slabinfo.eventpoll_pwq.active_objs
5427 ± 2% -28.7% 3871 ± 6% slabinfo.eventpoll_pwq.num_objs
14969 ± 4% -21.6% 11739 ± 3% slabinfo.files_cache.active_objs
15026 ± 4% -21.3% 11821 ± 3% slabinfo.files_cache.num_objs
5869 -25.8% 4356 ± 11% slabinfo.mm_struct.active_objs
5933 -25.8% 4402 ± 12% slabinfo.mm_struct.num_objs
6573 ± 2% -19.3% 5303 ± 12% slabinfo.sighand_cache.active_objs
6602 ± 2% -19.1% 5339 ± 12% slabinfo.sighand_cache.num_objs
1424 ± 6% -17.5% 1175 ± 3% slabinfo.skbuff_fclone_cache.active_objs
1424 ± 6% -17.5% 1175 ± 3% slabinfo.skbuff_fclone_cache.num_objs
1804 ± 4% -12.8% 1573 ± 8% slabinfo.task_group.active_objs
1804 ± 4% -12.8% 1573 ± 8% slabinfo.task_group.num_objs
74.90 -7.1 67.78 perf-stat.cache-miss-rate%
1.917e+10 -30.7% 1.329e+10 ± 2% perf-stat.cache-misses
2.559e+10 -23.4% 1.96e+10 ± 2% perf-stat.cache-references
3.608e+13 +15.1% 4.153e+13 perf-stat.cpu-cycles
10929 -21.3% 8606 ± 15% perf-stat.cpu-migrations
0.08 -0.0 0.05 ± 13% perf-stat.dTLB-store-miss-rate%
1.187e+08 -48.3% 61361329 ± 7% perf-stat.dTLB-store-misses
1.41e+11 -12.2% 1.238e+11 ± 6% perf-stat.dTLB-stores
2.149e+08 -30.2% 1.5e+08 ± 7% perf-stat.iTLB-loads
8598974 -29.6% 6054449 perf-stat.minor-faults
1.421e+08 ± 3% -21.4% 1.117e+08 ± 6% perf-stat.node-load-misses
1.5e+09 -27.5% 1.088e+09 perf-stat.node-loads
0.85 ± 70% -0.5 0.38 ± 13% perf-stat.node-store-miss-rate%
98747708 ± 70% -67.5% 32046709 ± 13% perf-stat.node-store-misses
1.146e+10 -26.0% 8.481e+09 perf-stat.node-stores
8598995 -29.6% 6054463 perf-stat.page-faults
1300724 ± 2% +22.6% 1594074 proc-vmstat.nr_active_anon
1256932 ± 2% +22.5% 1539865 proc-vmstat.nr_anon_pages
406.25 ± 8% +79.9% 731.00 ± 14% proc-vmstat.nr_anon_transparent_hugepages
1461780 -2.2% 1429319 proc-vmstat.nr_dirty_background_threshold
2927135 -2.2% 2862134 proc-vmstat.nr_dirty_threshold
14564117 -1.9% 14293825 proc-vmstat.nr_free_pages
8545 +6.6% 9109 ± 2% proc-vmstat.nr_mapped
1771 ± 2% +23.5% 2188 ± 5% proc-vmstat.nr_page_table_pages
1300723 ± 2% +22.6% 1594070 proc-vmstat.nr_zone_active_anon
13839779 -30.8% 9576432 proc-vmstat.numa_hit
13819218 -30.9% 9555852 proc-vmstat.numa_local
2.725e+09 -32.6% 1.837e+09 proc-vmstat.pgalloc_normal
8634919 -29.6% 6080480 proc-vmstat.pgfault
2.725e+09 -32.6% 1.836e+09 proc-vmstat.pgfree
5304368 -32.6% 3574443 proc-vmstat.thp_deferred_split_page
5305872 -32.6% 3575998 proc-vmstat.thp_fault_alloc
402915 +1.3% 408011 interrupts.CAL:Function_call_interrupts
186176 ± 6% -21.8% 145644 ± 12% interrupts.CPU0.RES:Rescheduling_interrupts
12448 ± 16% -71.5% 3551 ± 58% interrupts.CPU2.RES:Rescheduling_interrupts
334.25 ± 58% +181.7% 941.50 ± 24% interrupts.CPU21.RES:Rescheduling_interrupts
202.75 ± 50% +1023.3% 2277 ± 98% interrupts.CPU22.RES:Rescheduling_interrupts
138.50 ± 59% +1086.3% 1643 ± 36% interrupts.CPU23.RES:Rescheduling_interrupts
179.00 ± 55% +910.3% 1808 ±106% interrupts.CPU25.RES:Rescheduling_interrupts
485.50 ± 29% +8854.8% 43475 ± 37% interrupts.CPU26.RES:Rescheduling_interrupts
248.75 ± 38% +1876.2% 4915 ± 52% interrupts.CPU27.RES:Rescheduling_interrupts
116.75 ± 12% +297.9% 464.50 ± 11% interrupts.CPU29.RES:Rescheduling_interrupts
8061 ± 28% -54.5% 3669 ± 61% interrupts.CPU3.RES:Rescheduling_interrupts
3674 ± 6% -57.9% 1546 ± 95% interrupts.CPU31.NMI:Non-maskable_interrupts
3674 ± 6% -57.9% 1546 ± 95% interrupts.CPU31.PMI:Performance_monitoring_interrupts
79.25 ± 40% -56.8% 34.25 ± 97% interrupts.CPU50.RES:Rescheduling_interrupts
86.75 ± 56% -77.8% 19.25 ± 71% interrupts.CPU51.RES:Rescheduling_interrupts
669.25 ± 78% -73.4% 177.75 ±155% interrupts.CPU53.RES:Rescheduling_interrupts
498.25 ± 80% -95.1% 24.50 ± 37% interrupts.CPU55.RES:Rescheduling_interrupts
238.00 ± 58% -82.7% 41.25 ± 81% interrupts.CPU58.RES:Rescheduling_interrupts
278.50 ± 28% -92.8% 20.00 ± 52% interrupts.CPU59.RES:Rescheduling_interrupts
256.75 ± 47% -90.4% 24.75 ± 50% interrupts.CPU60.RES:Rescheduling_interrupts
225.25 ± 71% -91.2% 19.75 ± 27% interrupts.CPU61.RES:Rescheduling_interrupts
236.00 ± 92% -88.2% 27.75 ± 80% interrupts.CPU63.RES:Rescheduling_interrupts
171.25 ± 73% -91.2% 15.00 ± 22% interrupts.CPU64.RES:Rescheduling_interrupts
239.00 ± 36% -76.4% 56.50 ±130% interrupts.CPU65.RES:Rescheduling_interrupts
196.75 ± 51% -89.8% 20.00 ± 15% interrupts.CPU66.RES:Rescheduling_interrupts
196.50 ± 53% -78.1% 43.00 ±111% interrupts.CPU70.RES:Rescheduling_interrupts
191.00 ± 45% -90.2% 18.75 ± 45% interrupts.CPU71.RES:Rescheduling_interrupts
203.25 ± 81% -93.7% 12.75 ± 23% interrupts.CPU72.RES:Rescheduling_interrupts
103.25 ± 59% -78.9% 21.75 ± 24% interrupts.CPU73.RES:Rescheduling_interrupts
111.25 ± 79% -80.9% 21.25 ± 76% interrupts.CPU74.RES:Rescheduling_interrupts
93.50 ±106% -78.1% 20.50 ± 67% interrupts.CPU78.RES:Rescheduling_interrupts
400.50 ±155% -97.0% 12.00 ± 21% interrupts.CPU81.RES:Rescheduling_interrupts
347.50 ±156% -96.3% 13.00 ± 75% interrupts.CPU82.RES:Rescheduling_interrupts
285.00 ±149% -96.7% 9.50 ± 49% interrupts.CPU83.RES:Rescheduling_interrupts
265.00 ±136% -94.2% 15.25 ± 48% interrupts.CPU84.RES:Rescheduling_interrupts
153.50 ±145% -89.7% 15.75 ± 63% interrupts.CPU85.RES:Rescheduling_interrupts
167.00 ±101% -91.0% 15.00 ± 54% interrupts.CPU87.RES:Rescheduling_interrupts
81.50 ± 79% -87.4% 10.25 ± 78% interrupts.CPU88.RES:Rescheduling_interrupts
114.00 ± 92% -79.2% 23.75 ±103% interrupts.CPU98.RES:Rescheduling_interrupts
54567 ± 11% +54.4% 84277 ± 12% sched_debug.cfs_rq:/.exec_clock.avg
102845 ± 10% +41.8% 145786 ± 2% sched_debug.cfs_rq:/.exec_clock.max
12134 ± 15% +286.2% 46865 ± 36% sched_debug.cfs_rq:/.exec_clock.stddev
190887 ± 88% -83.1% 32327 ± 43% sched_debug.cfs_rq:/.load.max
26347 ± 56% -61.1% 10240 ± 19% sched_debug.cfs_rq:/.load.stddev
2704219 ± 11% +61.5% 4367328 ± 12% sched_debug.cfs_rq:/.min_vruntime.avg
4650229 ± 8% +60.8% 7478802 ± 3% sched_debug.cfs_rq:/.min_vruntime.max
546421 ± 15% +343.4% 2422973 ± 37% sched_debug.cfs_rq:/.min_vruntime.stddev
0.41 ± 4% +38.1% 0.56 ± 8% sched_debug.cfs_rq:/.nr_running.avg
1.65 ± 14% +40.8% 2.32 ± 21% sched_debug.cfs_rq:/.nr_spread_over.avg
1.07 ± 23% +49.1% 1.59 ± 20% sched_debug.cfs_rq:/.nr_spread_over.stddev
8.96 ± 5% +21.6% 10.90 ± 6% sched_debug.cfs_rq:/.runnable_load_avg.avg
12.33 ± 22% -28.3% 8.84 ± 6% sched_debug.cfs_rq:/.runnable_load_avg.stddev
188871 ± 89% -83.6% 30944 ± 47% sched_debug.cfs_rq:/.runnable_weight.max
26233 ± 57% -61.0% 10219 ± 20% sched_debug.cfs_rq:/.runnable_weight.stddev
3417321 ± 11% +37.6% 4701239 ± 20% sched_debug.cfs_rq:/.spread0.max
-122012 +1597.6% -2071325 sched_debug.cfs_rq:/.spread0.min
546405 ± 15% +343.4% 2422990 ± 37% sched_debug.cfs_rq:/.spread0.stddev
425.53 ± 5% +34.3% 571.59 ± 6% sched_debug.cfs_rq:/.util_avg.avg
172409 ± 17% -51.9% 83000 ± 34% sched_debug.cpu.avg_idle.min
15.80 ± 35% -55.0% 7.11 ± 31% sched_debug.cpu.clock.stddev
15.80 ± 35% -55.0% 7.11 ± 31% sched_debug.cpu.clock_task.stddev
11.99 ± 20% -22.2% 9.34 ± 4% sched_debug.cpu.cpu_load[0].stddev
25155 ± 9% -18.1% 20603 sched_debug.cpu.curr->pid.max
11971 ± 8% -16.4% 10012 ± 2% sched_debug.cpu.curr->pid.stddev
139384 ± 77% -46.1% 75152 ±117% sched_debug.cpu.load.max
110514 ± 11% +33.6% 147689 ± 4% sched_debug.cpu.nr_load_updates.max
9429 ± 13% +240.3% 32086 ± 33% sched_debug.cpu.nr_load_updates.stddev
6703 ± 11% +31.8% 8832 ± 9% sched_debug.cpu.nr_switches.avg
42299 ± 4% +297.2% 168004 ± 44% sched_debug.cpu.nr_switches.max
6781 ± 7% +161.4% 17726 ± 37% sched_debug.cpu.nr_switches.stddev
16.50 ± 22% +35.1% 22.29 ± 9% sched_debug.cpu.nr_uninterruptible.max
5906 ± 12% +37.3% 8110 ± 8% sched_debug.cpu.sched_count.avg
41231 +301.8% 165649 ± 45% sched_debug.cpu.sched_count.max
6649 ± 4% +162.1% 17430 ± 38% sched_debug.cpu.sched_count.stddev
2794 ± 14% +37.8% 3849 ± 11% sched_debug.cpu.sched_goidle.avg
19770 ± 5% +318.5% 82741 ± 45% sched_debug.cpu.sched_goidle.max
3237 ± 7% +168.2% 8684 ± 39% sched_debug.cpu.sched_goidle.stddev
2707 ± 14% +42.0% 3845 ± 10% sched_debug.cpu.ttwu_count.avg
24144 ± 4% +222.3% 77807 ± 43% sched_debug.cpu.ttwu_count.max
3710 ± 6% +149.1% 9243 ± 35% sched_debug.cpu.ttwu_count.stddev
890.11 ± 12% +63.4% 1454 ± 3% sched_debug.cpu.ttwu_local.avg
3405 ± 20% +65.9% 5649 ± 9% sched_debug.cpu.ttwu_local.max
599.71 ± 10% +47.1% 881.99 ± 12% sched_debug.cpu.ttwu_local.stddev
101950 ± 5% -22.2% 79276 ± 13% softirqs.CPU0.SCHED
76082 ± 8% +65.3% 125732 ± 20% softirqs.CPU10.TIMER
87879 ± 13% -33.2% 58674 ± 30% softirqs.CPU102.TIMER
84916 ± 12% -28.8% 60445 ± 28% softirqs.CPU103.TIMER
79117 ± 7% +60.4% 126925 ± 19% softirqs.CPU11.TIMER
80478 ± 7% +57.9% 127096 ± 19% softirqs.CPU12.TIMER
79602 ± 8% +58.9% 126500 ± 19% softirqs.CPU13.TIMER
76463 ± 9% +63.9% 125308 ± 21% softirqs.CPU14.TIMER
71283 ± 15% +76.9% 126120 ± 21% softirqs.CPU15.TIMER
75177 ± 10% +71.9% 129197 ± 19% softirqs.CPU16.TIMER
75260 ± 13% +69.9% 127848 ± 20% softirqs.CPU17.TIMER
78227 ± 14% +49.7% 117133 ± 28% softirqs.CPU18.TIMER
76725 ± 15% +60.8% 123342 ± 24% softirqs.CPU19.TIMER
14186 ± 12% -60.0% 5675 ± 37% softirqs.CPU2.SCHED
78243 ± 12% +57.5% 123220 ± 23% softirqs.CPU20.TIMER
74069 ± 13% +66.7% 123496 ± 24% softirqs.CPU21.TIMER
72445 ± 8% +71.5% 124276 ± 23% softirqs.CPU24.TIMER
71429 ± 7% +69.8% 121306 ± 23% softirqs.CPU25.TIMER
6838 ± 11% +291.4% 26765 ± 31% softirqs.CPU26.SCHED
84914 ± 11% -44.7% 46972 ± 35% softirqs.CPU26.TIMER
85536 ± 11% -37.9% 53112 ± 36% softirqs.CPU27.TIMER
92319 ± 11% -39.7% 55695 ± 41% softirqs.CPU29.TIMER
11818 ± 11% -50.4% 5865 ± 27% softirqs.CPU3.SCHED
91562 ± 6% -37.7% 57040 ± 41% softirqs.CPU30.TIMER
94268 ± 6% -43.9% 52887 ± 45% softirqs.CPU31.TIMER
93396 ± 4% -40.8% 55292 ± 45% softirqs.CPU32.TIMER
6065 ± 24% +65.3% 10023 ± 19% softirqs.CPU34.SCHED
5558 ± 18% +49.6% 8313 ± 19% softirqs.CPU35.SCHED
5181 ± 19% +91.7% 9930 ± 23% softirqs.CPU36.SCHED
10638 ± 5% -45.1% 5843 ± 17% softirqs.CPU4.SCHED
82385 ± 7% -30.8% 57037 ± 20% softirqs.CPU42.TIMER
85276 ± 10% -42.1% 49344 ± 47% softirqs.CPU45.TIMER
90182 ± 11% -38.8% 55156 ± 54% softirqs.CPU48.TIMER
9407 ± 11% -33.4% 6268 ± 36% softirqs.CPU5.SCHED
86739 ± 5% +50.2% 130246 ± 16% softirqs.CPU5.TIMER
90646 ± 10% -40.1% 54319 ± 46% softirqs.CPU51.TIMER
8726 ± 21% -54.2% 3998 ± 40% softirqs.CPU55.SCHED
77984 ± 6% +59.3% 124232 ± 21% softirqs.CPU55.TIMER
8399 ± 18% -53.5% 3905 ± 25% softirqs.CPU57.SCHED
8031 ± 17% -39.5% 4859 ± 35% softirqs.CPU58.SCHED
77083 ± 15% +56.7% 120805 ± 18% softirqs.CPU58.TIMER
8371 ± 17% -50.9% 4107 ± 37% softirqs.CPU59.SCHED
80820 ± 10% +54.3% 124710 ± 21% softirqs.CPU59.TIMER
84277 ± 10% +51.3% 127533 ± 11% softirqs.CPU6.TIMER
77675 ± 7% +57.8% 122547 ± 19% softirqs.CPU60.TIMER
79746 ± 7% +56.4% 124719 ± 19% softirqs.CPU61.TIMER
75401 ± 16% +64.4% 123975 ± 21% softirqs.CPU62.TIMER
80622 ± 9% +54.4% 124509 ± 20% softirqs.CPU63.TIMER
78498 ± 7% +57.4% 123570 ± 22% softirqs.CPU64.TIMER
78854 ± 11% +58.7% 125157 ± 19% softirqs.CPU65.TIMER
78398 ± 10% +56.5% 122679 ± 22% softirqs.CPU66.TIMER
70518 ± 14% +77.2% 124944 ± 21% softirqs.CPU67.TIMER
76676 ± 18% +65.8% 127094 ± 20% softirqs.CPU68.TIMER
79804 ± 13% +59.8% 127489 ± 19% softirqs.CPU69.TIMER
87607 ± 7% +47.0% 128769 ± 16% softirqs.CPU7.TIMER
72877 ± 11% +64.6% 119922 ± 24% softirqs.CPU74.TIMER
72901 ± 8% +69.5% 123554 ± 22% softirqs.CPU77.TIMER
89804 ± 9% -49.5% 45318 ± 45% softirqs.CPU78.TIMER
83519 ± 4% +44.1% 120383 ± 16% softirqs.CPU8.TIMER
88956 ± 14% -37.8% 55317 ± 39% softirqs.CPU81.TIMER
85839 ± 10% -40.4% 51145 ± 51% softirqs.CPU83.TIMER
5980 ± 25% +65.2% 9881 ± 13% softirqs.CPU86.SCHED
5232 ± 15% +68.0% 8790 ± 17% softirqs.CPU87.SCHED
79808 ± 6% +57.2% 125474 ± 20% softirqs.CPU9.TIMER
19.41 ± 4% -16.0 3.44 ± 9% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
21.19 ± 2% -15.3 5.84 ± 17% perf-profile.calltrace.cycles-pp.secondary_startup_64
20.82 ± 2% -15.2 5.66 ± 19% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
20.82 ± 2% -15.2 5.66 ± 19% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
20.77 ± 2% -15.2 5.61 ± 19% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
20.81 ± 2% -15.2 5.66 ± 19% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
1.23 ± 2% +0.2 1.46 ± 2% perf-profile.calltrace.cycles-pp.tlb_flush_mmu_free.unmap_page_range.unmap_vmas.exit_mmap.mmput
1.22 ± 2% +0.2 1.45 ± 2% perf-profile.calltrace.cycles-pp.release_pages.tlb_flush_mmu_free.unmap_page_range.unmap_vmas.exit_mmap
0.59 ± 5% +0.4 0.95 ± 2% perf-profile.calltrace.cycles-pp.___might_sleep
1.39 +0.5 1.89 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
1.39 +0.5 1.89 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.38 +0.5 1.88 perf-profile.calltrace.cycles-pp.__wake_up_parent.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.38 +0.5 1.88 perf-profile.calltrace.cycles-pp.do_group_exit.__wake_up_parent.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.38 +0.5 1.88 perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.__wake_up_parent.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.37 +0.5 1.87 perf-profile.calltrace.cycles-pp.exit_mmap.mmput.do_exit.do_group_exit.__wake_up_parent
1.37 +0.5 1.88 perf-profile.calltrace.cycles-pp.mmput.do_exit.do_group_exit.__wake_up_parent.do_syscall_64
1.33 +0.5 1.84 perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.exit_mmap.mmput.do_exit
1.34 +0.5 1.85 perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.mmput.do_exit.do_group_exit
1.45 ± 2% +0.5 2.00 perf-profile.calltrace.cycles-pp._cond_resched.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
0.93 ± 2% +0.6 1.50 ± 3% perf-profile.calltrace.cycles-pp.clear_huge_page
0.28 ±173% +1.0 1.25 ± 44% perf-profile.calltrace.cycles-pp.poll_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
1.71 ± 2% +1.0 2.73 perf-profile.calltrace.cycles-pp.___might_sleep.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
0.70 ± 9% +1.1 1.81 ± 2% perf-profile.calltrace.cycles-pp.clear_page_erms
0.83 ± 7% +1.8 2.62 ± 6% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.get_page_from_freelist.__alloc_pages_nodemask.do_huge_pmd_anonymous_page
0.83 ± 7% +1.8 2.63 ± 6% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.get_page_from_freelist.__alloc_pages_nodemask.do_huge_pmd_anonymous_page.__handle_mm_fault
2.51 ± 2% +2.2 4.69 ± 4% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
2.52 ± 2% +2.2 4.71 ± 4% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
61.29 +8.6 69.94 perf-profile.calltrace.cycles-pp.clear_page_erms.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
69.75 +10.3 80.06 perf-profile.calltrace.cycles-pp.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
72.75 +12.6 85.32 perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
72.78 +12.6 85.35 perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
72.64 +12.6 85.23 perf-profile.calltrace.cycles-pp.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
72.84 +12.6 85.45 perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault
72.84 +12.6 85.45 perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
72.84 +12.6 85.47 perf-profile.calltrace.cycles-pp.page_fault
19.73 ± 3% -16.2 3.58 ± 10% perf-profile.children.cycles-pp.intel_idle
21.19 ± 2% -15.3 5.84 ± 17% perf-profile.children.cycles-pp.secondary_startup_64
21.19 ± 2% -15.3 5.84 ± 17% perf-profile.children.cycles-pp.cpu_startup_entry
21.19 ± 2% -15.3 5.84 ± 17% perf-profile.children.cycles-pp.do_idle
21.16 ± 2% -15.3 5.82 ± 17% perf-profile.children.cycles-pp.cpuidle_enter_state
20.82 ± 2% -15.2 5.66 ± 19% perf-profile.children.cycles-pp.start_secondary
0.37 ± 18% -0.2 0.18 ± 74% perf-profile.children.cycles-pp.start_kernel
0.10 ± 15% -0.1 0.04 ±107% perf-profile.children.cycles-pp.read
0.18 ± 6% -0.0 0.14 ± 5% perf-profile.children.cycles-pp.native_irq_return_iret
0.07 ± 7% +0.0 0.08 perf-profile.children.cycles-pp.__list_del_entry_valid
0.08 ± 5% +0.0 0.11 ± 4% perf-profile.children.cycles-pp.__lru_cache_add
0.08 ± 5% +0.0 0.11 ± 4% perf-profile.children.cycles-pp.pagevec_lru_move_fn
0.07 ± 13% +0.0 0.09 ± 11% perf-profile.children.cycles-pp.cmd_stat
0.07 ± 13% +0.0 0.10 ± 9% perf-profile.children.cycles-pp.__run_perf_stat
0.07 ± 13% +0.0 0.10 ± 9% perf-profile.children.cycles-pp.process_interval
0.07 ± 13% +0.0 0.10 ± 9% perf-profile.children.cycles-pp.read_counters
0.07 ± 13% +0.0 0.10 ± 9% perf-profile.children.cycles-pp.perf_evsel__read_counter
0.07 ± 13% +0.0 0.10 ± 9% perf-profile.children.cycles-pp.__read_nocancel
0.06 ± 15% +0.0 0.09 ± 7% perf-profile.children.cycles-pp.perf_event_read
0.06 ± 20% +0.0 0.10 ± 9% perf-profile.children.cycles-pp.perf_read
0.06 ± 15% +0.0 0.09 ± 4% perf-profile.children.cycles-pp.smp_call_function_single
0.03 ±100% +0.0 0.07 ± 7% perf-profile.children.cycles-pp.__pagevec_lru_add_fn
0.09 ± 9% +0.0 0.14 ± 3% perf-profile.children.cycles-pp.pte_alloc_one
0.00 +0.1 0.05 perf-profile.children.cycles-pp.___perf_sw_event
0.06 ± 20% +0.1 0.11 ± 25% perf-profile.children.cycles-pp.ktime_get
0.00 +0.1 0.06 ± 7% perf-profile.children.cycles-pp.__perf_sw_event
0.01 ±173% +0.1 0.08 ± 11% perf-profile.children.cycles-pp.__perf_event_read_value
0.13 ± 6% +0.1 0.23 ± 5% perf-profile.children.cycles-pp.__put_compound_page
0.40 +0.1 0.50 perf-profile.children.cycles-pp.rcu_all_qs
0.11 ± 7% +0.1 0.22 ± 5% perf-profile.children.cycles-pp.__page_cache_release
0.03 ±100% +0.1 0.17 ± 4% perf-profile.children.cycles-pp.free_transhuge_page
0.22 ± 6% +0.2 0.42 perf-profile.children.cycles-pp.free_one_page
1.23 ± 2% +0.2 1.46 ± 2% perf-profile.children.cycles-pp.tlb_flush_mmu_free
1.23 ± 2% +0.2 1.47 ± 2% perf-profile.children.cycles-pp.release_pages
0.06 ± 11% +0.3 0.33 ± 11% perf-profile.children.cycles-pp.deferred_split_huge_page
0.09 ± 4% +0.3 0.37 ± 9% perf-profile.children.cycles-pp.zap_huge_pmd
1.38 +0.5 1.88 perf-profile.children.cycles-pp.__wake_up_parent
1.38 +0.5 1.88 perf-profile.children.cycles-pp.do_group_exit
1.38 +0.5 1.88 perf-profile.children.cycles-pp.do_exit
1.37 +0.5 1.88 perf-profile.children.cycles-pp.mmput
1.37 +0.5 1.88 perf-profile.children.cycles-pp.exit_mmap
1.33 ± 2% +0.5 1.84 perf-profile.children.cycles-pp.unmap_page_range
1.34 +0.5 1.85 perf-profile.children.cycles-pp.unmap_vmas
1.75 +0.6 2.37 perf-profile.children.cycles-pp._cond_resched
0.54 ± 63% +0.7 1.25 ± 44% perf-profile.children.cycles-pp.poll_idle
2.31 ± 3% +1.4 3.69 perf-profile.children.cycles-pp.___might_sleep
2.61 ± 2% +2.2 4.83 ± 4% perf-profile.children.cycles-pp.get_page_from_freelist
2.62 ± 2% +2.2 4.85 ± 4% perf-profile.children.cycles-pp.__alloc_pages_nodemask
1.06 ± 6% +2.3 3.36 ± 4% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
1.69 ± 11% +2.5 4.17 ± 10% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
62.42 +9.8 72.18 perf-profile.children.cycles-pp.clear_page_erms
70.69 +10.9 81.56 perf-profile.children.cycles-pp.clear_huge_page
72.77 +12.6 85.33 perf-profile.children.cycles-pp.__handle_mm_fault
72.79 +12.6 85.36 perf-profile.children.cycles-pp.handle_mm_fault
72.64 +12.6 85.23 perf-profile.children.cycles-pp.do_huge_pmd_anonymous_page
72.86 +12.6 85.47 perf-profile.children.cycles-pp.__do_page_fault
72.86 +12.6 85.47 perf-profile.children.cycles-pp.do_page_fault
72.86 +12.6 85.48 perf-profile.children.cycles-pp.page_fault
19.73 ± 3% -16.2 3.58 ± 10% perf-profile.self.cycles-pp.intel_idle
0.78 ± 3% -0.2 0.60 ± 6% perf-profile.self.cycles-pp.__free_pages_ok
0.18 ± 6% -0.0 0.14 ± 5% perf-profile.self.cycles-pp.native_irq_return_iret
0.07 ± 7% +0.0 0.08 perf-profile.self.cycles-pp.__list_del_entry_valid
0.06 ± 15% +0.0 0.09 ± 4% perf-profile.self.cycles-pp.smp_call_function_single
0.03 ±102% +0.1 0.10 ± 29% perf-profile.self.cycles-pp.ktime_get
0.39 +0.1 0.49 perf-profile.self.cycles-pp.rcu_all_qs
1.62 +0.3 1.97 ± 2% perf-profile.self.cycles-pp.get_page_from_freelist
1.50 +0.6 2.08 perf-profile.self.cycles-pp._cond_resched
6.08 +0.7 6.78 ± 2% perf-profile.self.cycles-pp.clear_huge_page
0.54 ± 64% +0.7 1.25 ± 44% perf-profile.self.cycles-pp.poll_idle
2.29 ± 3% +1.4 3.66 perf-profile.self.cycles-pp.___might_sleep
1.69 ± 11% +2.5 4.17 ± 10% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
61.81 +9.9 71.69 perf-profile.self.cycles-pp.clear_page_erms
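
Two delta conventions appear in the tables above (and throughout the rest of this
report): ordinary metrics show a relative change, while the perf-profile cycles-pp
rows show an absolute difference in percentage points, since those values are
already percentages of total cycles. A minimal sketch (Python; the numbers are
copied from the softirqs.CPU9.TIMER and intel_idle calltrace rows above) that
reproduces both:

    # Relative change: softirqs.CPU9.TIMER went 79808 -> 125474, printed "+57.2%"
    old, new = 79808, 125474
    print("%+.1f%%" % ((new - old) / old * 100))   # +57.2%

    # Percentage-point delta: intel_idle went 19.41% -> 3.44% of cycles, printed "-16.0"
    old_pct, new_pct = 19.41, 3.44
    print("%+.1f" % (new_pct - old_pct))           # -16.0
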
***************************************************************************************************
lkp-bdw-ep4: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
=========================================================================================
array_size/compiler/cpufreq_governor/kconfig/nr_threads/omp/rootfs/tbox_group/testcase:
50000000/gcc-7/performance/x86_64-rhel-7.2/50%/true/debian-x86_64-2018-04-03.cgz/lkp-bdw-ep4/stream
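
The two slash-separated header lines above pair each job parameter with its value
positionally. A throwaway sketch that expands them into a readable mapping (the
names keys/vals are mine; this relies on the values themselves containing no
slashes, which holds for this job):

    keys = ("array_size/compiler/cpufreq_governor/kconfig/nr_threads/"
            "omp/rootfs/tbox_group/testcase").split("/")
    vals = ("50000000/gcc-7/performance/x86_64-rhel-7.2/50%/true/"
            "debian-x86_64-2018-04-03.cgz/lkp-bdw-ep4/stream").split("/")
    for k, v in zip(keys, vals):
        print(f"{k:>16} = {v}")   # e.g. array_size = 50000000 ... testcase = stream
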
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
%stddev %change %stddev
\ | \
70867 ± 16% -28.8% 50456 ± 6% stream.add_bandwidth_MBps
11358 ± 9% +13.4% 12879 stream.time.minor_page_faults
263.61 ± 16% +36.1% 358.87 stream.time.user_time
9477 ± 14% -22.8% 7312 ± 6% stream.time.voluntary_context_switches
34.65 -2.3% 33.85 ± 2% boot-time.boot
14973 ± 5% -7.9% 13783 ± 6% softirqs.CPU6.TIMER
67.50 -3.7% 65.00 vmstat.cpu.id
31.50 ± 2% +7.9% 34.00 ± 3% vmstat.cpu.us
3619 ± 15% -29.4% 2555 ± 7% vmstat.system.cs
2863126 ± 6% -15.5% 2419307 ± 2% cpuidle.C3.time
8360 ± 6% -15.7% 7050 ± 2% cpuidle.C3.usage
4.188e+08 ± 14% +32.5% 5.548e+08 ± 2% cpuidle.C6.time
428117 ± 14% +31.9% 564534 ± 2% cpuidle.C6.usage
3694 ± 23% +130.5% 8515 ± 73% proc-vmstat.numa_hint_faults
98740 ± 2% +8.8% 107468 ± 3% proc-vmstat.numa_hit
81603 ± 2% +10.7% 90367 ± 4% proc-vmstat.numa_local
48999 ± 5% +25.0% 61261 ± 11% proc-vmstat.pgfault
7862 ± 6% -14.0% 6758 ± 2% turbostat.C3
0.38 ± 17% -0.1 0.24 ± 4% turbostat.C3%
428197 ± 14% +32.2% 566126 ± 2% turbostat.C6
19.93 ± 28% +53.7% 30.64 ± 5% turbostat.CPU%c6
960324 ± 12% +23.7% 1187618 turbostat.IRQ
2392 ± 3% -26.4% 1760 ± 8% slabinfo.eventpoll_epi.active_objs
2392 ± 3% -26.4% 1760 ± 8% slabinfo.eventpoll_epi.num_objs
4186 ± 3% -26.4% 3080 ± 8% slabinfo.eventpoll_pwq.active_objs
4186 ± 3% -26.4% 3080 ± 8% slabinfo.eventpoll_pwq.num_objs
650.00 ± 7% -13.2% 564.50 ± 3% slabinfo.file_lock_cache.active_objs
650.00 ± 7% -13.2% 564.50 ± 3% slabinfo.file_lock_cache.num_objs
28794 ± 14% -24.0% 21889 ± 5% sched_debug.cfs_rq:/.min_vruntime.max
4667 ± 33% -60.6% 1837 ± 65% sched_debug.cfs_rq:/.min_vruntime.min
3081 ± 24% -73.8% 808.08 ±142% sched_debug.cfs_rq:/.spread0.avg
-2493 +70.7% -4255 sched_debug.cfs_rq:/.spread0.min
295.31 ± 3% -16.9% 245.52 ± 11% sched_debug.cfs_rq:/.util_avg.stddev
0.00 ± 8% +19.3% 0.00 ± 7% sched_debug.cpu.next_balance.stddev
240.50 ± 4% -14.9% 204.75 ± 11% sched_debug.cpu.nr_switches.min
2.62 ± 3% +32.0% 3.46 ± 7% sched_debug.cpu.nr_uninterruptible.stddev
3.769e+10 ± 5% -7.8% 3.476e+10 ± 4% perf-stat.branch-instructions
9.742e+09 +6.0% 1.032e+10 ± 4% perf-stat.cache-references
28360 ± 6% -14.5% 24239 ± 8% perf-stat.context-switches
4.31 ± 20% +40.5% 6.06 perf-stat.cpi
8.342e+11 ± 17% +34.0% 1.118e+12 ± 4% perf-stat.cpu-cycles
451.50 ± 10% -27.8% 326.00 ± 8% perf-stat.cpu-migrations
0.02 ± 19% +0.0 0.03 ± 8% perf-stat.dTLB-load-miss-rate%
9450196 ± 21% +41.5% 13368418 ± 4% perf-stat.dTLB-load-misses
0.24 ± 17% -31.3% 0.17 perf-stat.ipc
47611 ± 7% +22.9% 58520 ± 10% perf-stat.minor-faults
47365 ± 6% +23.6% 58532 ± 10% perf-stat.page-faults
1.19 ±173% +5.6 6.76 ± 54% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.read
1.19 ±173% +5.6 6.76 ± 54% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
1.19 ±173% +5.6 6.76 ± 54% perf-profile.calltrace.cycles-pp.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
1.19 ±173% +5.6 6.76 ± 54% perf-profile.calltrace.cycles-pp.vfs_read.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe.read
0.00 +6.6 6.55 ± 54% perf-profile.calltrace.cycles-pp.kstat_irqs_cpu.show_interrupts.seq_read.proc_reg_read.__vfs_read
1.19 ±173% +6.7 7.84 ± 43% perf-profile.calltrace.cycles-pp.read
1.19 ±173% +10.0 11.20 ± 56% perf-profile.calltrace.cycles-pp.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.19 ±173% +10.0 11.20 ± 56% perf-profile.calltrace.cycles-pp.vfs_read.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.78 ±173% +10.2 11.99 ± 52% perf-profile.calltrace.cycles-pp.show_interrupts.seq_read.proc_reg_read.__vfs_read.vfs_read
2.38 ±173% +14.5 16.87 ± 38% perf-profile.calltrace.cycles-pp.proc_reg_read.__vfs_read.vfs_read.sys_read.do_syscall_64
2.38 ±173% +14.5 16.87 ± 38% perf-profile.calltrace.cycles-pp.seq_read.proc_reg_read.__vfs_read.vfs_read.sys_read
2.38 ±173% +15.6 17.96 ± 38% perf-profile.calltrace.cycles-pp.__vfs_read.vfs_read.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +6.5 6.55 ± 54% perf-profile.children.cycles-pp.kstat_irqs_cpu
1.19 ±173% +6.7 7.84 ± 43% perf-profile.children.cycles-pp.read
1.78 ±173% +10.2 11.98 ± 52% perf-profile.children.cycles-pp.show_interrupts
2.38 ±173% +14.5 16.87 ± 38% perf-profile.children.cycles-pp.proc_reg_read
2.38 ±173% +15.6 17.95 ± 38% perf-profile.children.cycles-pp.sys_read
2.38 ±173% +15.6 17.95 ± 38% perf-profile.children.cycles-pp.vfs_read
2.38 ±173% +15.6 17.95 ± 38% perf-profile.children.cycles-pp.__vfs_read
2.38 ±173% +15.6 17.95 ± 38% perf-profile.children.cycles-pp.seq_read
17.64 ± 77% +28.9 46.59 ± 26% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
17.64 ± 77% +28.9 46.59 ± 26% perf-profile.children.cycles-pp.do_syscall_64
8103 ± 15% +30.9% 10608 interrupts.CPU0.LOC:Local_timer_interrupts
8095 ± 15% +30.5% 10561 interrupts.CPU1.LOC:Local_timer_interrupts
8078 ± 15% +31.3% 10610 interrupts.CPU10.LOC:Local_timer_interrupts
8026 ± 14% +32.0% 10598 interrupts.CPU11.LOC:Local_timer_interrupts
8103 ± 15% +30.9% 10603 interrupts.CPU12.LOC:Local_timer_interrupts
8070 ± 15% +31.3% 10593 interrupts.CPU13.LOC:Local_timer_interrupts
8068 ± 15% +32.4% 10678 ± 2% interrupts.CPU14.LOC:Local_timer_interrupts
8067 ± 15% +31.5% 10609 interrupts.CPU15.LOC:Local_timer_interrupts
8103 ± 15% +30.7% 10588 interrupts.CPU16.LOC:Local_timer_interrupts
8192 ± 16% +29.3% 10596 interrupts.CPU17.LOC:Local_timer_interrupts
8099 ± 15% +31.2% 10628 interrupts.CPU18.LOC:Local_timer_interrupts
8061 ± 15% +31.6% 10604 interrupts.CPU19.LOC:Local_timer_interrupts
8114 ± 15% +30.8% 10610 interrupts.CPU2.LOC:Local_timer_interrupts
8086 ± 15% +31.2% 10610 interrupts.CPU20.LOC:Local_timer_interrupts
8088 ± 15% +31.4% 10629 interrupts.CPU21.LOC:Local_timer_interrupts
8064 ± 16% +31.3% 10591 interrupts.CPU22.LOC:Local_timer_interrupts
8069 ± 15% +30.5% 10530 interrupts.CPU23.LOC:Local_timer_interrupts
8079 ± 15% +31.3% 10611 interrupts.CPU24.LOC:Local_timer_interrupts
8089 ± 15% +31.0% 10598 interrupts.CPU25.LOC:Local_timer_interrupts
8114 ± 15% +30.6% 10597 interrupts.CPU26.LOC:Local_timer_interrupts
8093 ± 15% +31.3% 10630 interrupts.CPU27.LOC:Local_timer_interrupts
8092 ± 15% +30.9% 10589 interrupts.CPU28.LOC:Local_timer_interrupts
8084 ± 15% +31.1% 10599 interrupts.CPU29.LOC:Local_timer_interrupts
8083 ± 15% +30.5% 10547 interrupts.CPU3.LOC:Local_timer_interrupts
8096 ± 15% +31.2% 10624 interrupts.CPU30.LOC:Local_timer_interrupts
8154 ± 15% +31.0% 10680 ± 2% interrupts.CPU31.LOC:Local_timer_interrupts
8129 ± 15% +30.7% 10628 interrupts.CPU32.LOC:Local_timer_interrupts
8096 ± 15% +31.9% 10676 ± 2% interrupts.CPU33.LOC:Local_timer_interrupts
8119 ± 15% +30.8% 10620 interrupts.CPU34.LOC:Local_timer_interrupts
8085 ± 15% +31.2% 10612 interrupts.CPU35.LOC:Local_timer_interrupts
8083 ± 15% +31.2% 10602 interrupts.CPU36.LOC:Local_timer_interrupts
819.25 ± 35% +162.8% 2153 ± 47% interrupts.CPU37.CAL:Function_call_interrupts
8100 ± 15% +31.5% 10655 interrupts.CPU37.LOC:Local_timer_interrupts
32.75 ±173% +4387.0% 1469 ± 71% interrupts.CPU37.TLB:TLB_shootdowns
8085 ± 15% +31.2% 10612 interrupts.CPU38.LOC:Local_timer_interrupts
985.25 ± 24% +124.5% 2211 ± 52% interrupts.CPU39.CAL:Function_call_interrupts
8154 ± 15% +30.3% 10625 interrupts.CPU39.LOC:Local_timer_interrupts
162.75 ±110% +852.4% 1550 ± 74% interrupts.CPU39.TLB:TLB_shootdowns
8171 ± 14% +29.8% 10603 interrupts.CPU4.LOC:Local_timer_interrupts
8088 ± 15% +31.2% 10609 interrupts.CPU40.LOC:Local_timer_interrupts
8091 ± 15% +31.6% 10652 ± 2% interrupts.CPU41.LOC:Local_timer_interrupts
8097 ± 15% +30.9% 10601 interrupts.CPU42.LOC:Local_timer_interrupts
8105 ± 15% +30.7% 10595 ± 2% interrupts.CPU43.LOC:Local_timer_interrupts
8069 ± 15% +31.3% 10593 interrupts.CPU44.LOC:Local_timer_interrupts
8075 ± 15% +31.7% 10631 interrupts.CPU45.LOC:Local_timer_interrupts
8070 ± 15% +31.3% 10596 interrupts.CPU46.LOC:Local_timer_interrupts
8093 ± 15% +31.2% 10617 interrupts.CPU47.LOC:Local_timer_interrupts
8085 ± 15% +30.9% 10586 interrupts.CPU48.LOC:Local_timer_interrupts
8093 ± 15% +31.8% 10668 interrupts.CPU49.LOC:Local_timer_interrupts
2611 ± 26% -41.5% 1526 ± 15% interrupts.CPU5.CAL:Function_call_interrupts
8078 ± 15% +31.1% 10592 interrupts.CPU5.LOC:Local_timer_interrupts
1801 ± 23% -51.2% 878.50 ± 27% interrupts.CPU5.TLB:TLB_shootdowns
8082 ± 15% +31.1% 10596 interrupts.CPU50.LOC:Local_timer_interrupts
8093 ± 15% +31.3% 10628 interrupts.CPU51.LOC:Local_timer_interrupts
8075 ± 15% +31.3% 10600 interrupts.CPU52.LOC:Local_timer_interrupts
8057 ± 15% +31.6% 10602 interrupts.CPU53.LOC:Local_timer_interrupts
8079 ± 15% +30.8% 10566 interrupts.CPU54.LOC:Local_timer_interrupts
8068 ± 15% +31.3% 10596 interrupts.CPU55.LOC:Local_timer_interrupts
8085 ± 15% +31.0% 10593 interrupts.CPU56.LOC:Local_timer_interrupts
8059 ± 15% +31.6% 10605 interrupts.CPU57.LOC:Local_timer_interrupts
8069 ± 15% +31.3% 10599 interrupts.CPU58.LOC:Local_timer_interrupts
8065 ± 15% +31.5% 10608 interrupts.CPU59.LOC:Local_timer_interrupts
8077 ± 15% +31.2% 10600 interrupts.CPU6.LOC:Local_timer_interrupts
8079 ± 15% +31.3% 10607 interrupts.CPU60.LOC:Local_timer_interrupts
8155 ± 15% +30.3% 10628 interrupts.CPU61.LOC:Local_timer_interrupts
8090 ± 15% +31.4% 10633 interrupts.CPU62.LOC:Local_timer_interrupts
8127 ± 16% +30.5% 10603 interrupts.CPU63.LOC:Local_timer_interrupts
8091 ± 15% +31.8% 10664 interrupts.CPU64.LOC:Local_timer_interrupts
8090 ± 16% +31.1% 10604 interrupts.CPU65.LOC:Local_timer_interrupts
8078 ± 15% +32.6% 10714 interrupts.CPU66.LOC:Local_timer_interrupts
8090 ± 15% +31.2% 10611 interrupts.CPU67.LOC:Local_timer_interrupts
8087 ± 15% +31.1% 10606 interrupts.CPU68.LOC:Local_timer_interrupts
8059 ± 15% +31.4% 10588 interrupts.CPU69.LOC:Local_timer_interrupts
7999 ± 16% +32.6% 10610 interrupts.CPU7.LOC:Local_timer_interrupts
8069 ± 15% +31.7% 10625 interrupts.CPU70.LOC:Local_timer_interrupts
8074 ± 15% +31.1% 10586 interrupts.CPU71.LOC:Local_timer_interrupts
8076 ± 15% +31.3% 10602 interrupts.CPU72.LOC:Local_timer_interrupts
8112 ± 16% +30.9% 10618 interrupts.CPU73.LOC:Local_timer_interrupts
8075 ± 15% +31.7% 10635 interrupts.CPU74.LOC:Local_timer_interrupts
8075 ± 15% +31.2% 10595 interrupts.CPU75.LOC:Local_timer_interrupts
8077 ± 15% +31.2% 10598 interrupts.CPU76.LOC:Local_timer_interrupts
8191 ± 14% +30.4% 10682 interrupts.CPU77.LOC:Local_timer_interrupts
8025 ± 16% +32.2% 10609 interrupts.CPU78.LOC:Local_timer_interrupts
8086 ± 15% +31.3% 10615 interrupts.CPU79.LOC:Local_timer_interrupts
8072 ± 15% +31.3% 10600 interrupts.CPU8.LOC:Local_timer_interrupts
3.00 ±137% +1808.3% 57.25 ± 87% interrupts.CPU8.RES:Rescheduling_interrupts
8085 ± 15% +32.1% 10681 interrupts.CPU80.LOC:Local_timer_interrupts
8088 ± 15% +31.2% 10608 interrupts.CPU81.LOC:Local_timer_interrupts
8082 ± 15% +31.5% 10625 interrupts.CPU82.LOC:Local_timer_interrupts
8092 ± 15% +31.2% 10620 interrupts.CPU83.LOC:Local_timer_interrupts
8110 ± 15% +30.7% 10600 interrupts.CPU84.LOC:Local_timer_interrupts
8082 ± 15% +31.4% 10623 interrupts.CPU85.LOC:Local_timer_interrupts
8088 ± 15% +31.0% 10597 interrupts.CPU86.LOC:Local_timer_interrupts
8081 ± 15% +31.2% 10600 interrupts.CPU87.LOC:Local_timer_interrupts
8082 ± 15% +31.1% 10598 interrupts.CPU9.LOC:Local_timer_interrupts
207.25 ±170% +301.3% 831.75 ± 52% interrupts.CPU9.TLB:TLB_shootdowns
711846 ± 15% +31.2% 933907 interrupts.LOC:Local_timer_interrupts
5731 ± 21% +26.1% 7229 ± 16% interrupts.RES:Rescheduling_interrupts
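
The per-CPU LOC rows above aggregate cleanly: with 88 CPUs each taking roughly
8,090 local timer interrupts before and roughly 10,600 after, the products land
close to the interrupts.LOC:Local_timer_interrupts totals. A rough cross-check
(per-CPU means vary slightly, so these are approximations, not exact sums):

    cpus = 88
    print(cpus * 8090)    # 711920, vs the reported total of 711846
    print(cpus * 10600)   # 932800, vs the reported total of 933907
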
***************************************************************************************************
lkp-bdw-ep4: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
=========================================================================================
array_size/compiler/cpufreq_governor/kconfig/nr_threads/omp/rootfs/tbox_group/testcase:
10000000/gcc-7/performance/x86_64-rhel-7.2/50%/true/debian-x86_64-2018-04-03.cgz/lkp-bdw-ep4/stream
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:4 25% 1:4 dmesg.WARNING:at#for_ip_swapgs_restore_regs_and_return_to_usermode/0x
:4 25% 1:4 dmesg.WARNING:stack_recursion
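
For readers unfamiliar with the fail:runs block just above: each side counts
failing runs out of total runs, an omitted count reads as zero, and %reproduction
appears to be the difference between the two failure rates (the same formula also
matches the "-25%" rows later in this report). A quick check against the first
row, where the parent failed 0 of 4 runs and the patched kernel 1 of 4:

    runs = 4
    fail_old, fail_new = 0, 1              # ":4" reads as 0 failures in 4 runs
    print(f"{(fail_new - fail_old) / runs * 100:.0f}%")   # 25%
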
%stddev %change %stddev
\ | \
68287 -30.6% 47413 ± 3% stream.add_bandwidth_MBps
74377 ± 3% -38.8% 45504 ± 3% stream.copy_bandwidth_MBps
71751 -39.8% 43172 ± 4% stream.scale_bandwidth_MBps
51.82 +51.5% 78.49 ± 3% stream.time.user_time
221.50 ± 45% +431.6% 1177 ± 35% stream.time.voluntary_context_switches
74941 -34.4% 49156 ± 3% stream.triad_bandwidth_MBps
1792 ± 9% +28.3% 2299 ± 14% numa-meminfo.node0.PageTables
447.25 ± 9% +31.8% 589.50 ± 15% numa-vmstat.node0.nr_page_table_pages
983.50 +4.8% 1031 ± 2% proc-vmstat.nr_page_table_pages
18576 ± 24% -29.5% 13093 ± 19% softirqs.CPU44.TIMER
6562 ± 14% -31.8% 4477 ± 44% turbostat.C1
35.21 ± 11% -31.6% 24.09 ± 9% turbostat.CPU%c1
2376 ± 12% -27.3% 1728 ± 10% slabinfo.eventpoll_epi.active_objs
2376 ± 12% -27.3% 1728 ± 10% slabinfo.eventpoll_epi.num_objs
4158 ± 12% -27.3% 3024 ± 10% slabinfo.eventpoll_pwq.active_objs
4158 ± 12% -27.3% 3024 ± 10% slabinfo.eventpoll_pwq.num_objs
3904 ± 5% -9.2% 3544 ± 3% slabinfo.skbuff_head_cache.active_objs
1475 ± 4% -13.1% 1282 ± 10% slabinfo.task_group.active_objs
1475 ± 4% -13.1% 1282 ± 10% slabinfo.task_group.num_objs
40.50 ± 50% +210.5% 125.75 ± 52% interrupts.CPU1.RES:Rescheduling_interrupts
1.00 +7075.0% 71.75 ±141% interrupts.CPU22.RES:Rescheduling_interrupts
150.25 ± 12% -57.4% 64.00 ±100% interrupts.CPU27.TLB:TLB_shootdowns
166.50 ± 19% -61.6% 64.00 ±100% interrupts.CPU28.TLB:TLB_shootdowns
160.50 ± 21% -60.1% 64.00 ±100% interrupts.CPU40.TLB:TLB_shootdowns
174.25 ± 19% -62.7% 65.00 ±100% interrupts.CPU74.TLB:TLB_shootdowns
149.25 ± 5% -57.1% 64.00 ±100% interrupts.CPU82.TLB:TLB_shootdowns
7124 ± 14% -29.4% 5030 ± 3% interrupts.TLB:TLB_shootdowns
4984 ± 45% -78.7% 1060 ± 46% sched_debug.cfs_rq:/.min_vruntime.min
25.89 ± 57% -66.2% 8.74 ±111% sched_debug.cfs_rq:/.removed.load_avg.avg
152.68 ± 29% -57.6% 64.71 ±103% sched_debug.cfs_rq:/.removed.load_avg.stddev
1199 ± 56% -66.4% 402.43 ±111% sched_debug.cfs_rq:/.removed.runnable_sum.avg
7076 ± 29% -57.9% 2981 ±103% sched_debug.cfs_rq:/.removed.runnable_sum.stddev
2404 ± 88% -368.5% -6456 sched_debug.cfs_rq:/.spread0.avg
20903 ± 15% -52.0% 10029 ± 29% sched_debug.cfs_rq:/.spread0.max
-2008 +507.1% -12193 sched_debug.cfs_rq:/.spread0.min
225.00 ± 8% -16.7% 187.50 ± 15% sched_debug.cpu.nr_switches.min
1.356e+10 ± 9% -28.4% 9.705e+09 ± 17% perf-stat.branch-instructions
3.35 ± 4% +69.5% 5.67 ± 9% perf-stat.cpi
1694005 ± 15% -32.0% 1152591 ± 21% perf-stat.iTLB-loads
6.601e+10 ± 9% -24.9% 4.959e+10 ± 17% perf-stat.instructions
0.30 ± 4% -40.5% 0.18 ± 10% perf-stat.ipc
39.73 ± 4% -36.0 3.70 ± 35% perf-stat.node-load-miss-rate%
18391383 ± 6% -90.9% 1670191 ± 34% perf-stat.node-load-misses
27864581 ± 2% +56.6% 43648277 ± 3% perf-stat.node-loads
40.45 ± 4% -38.9 1.51 ± 85% perf-stat.node-store-miss-rate%
44193260 ± 5% -96.7% 1470898 ± 85% perf-stat.node-store-misses
65038165 ± 3% +48.9% 96838822 ± 4% perf-stat.node-stores
7.68 ±133% -4.9 2.78 ±173% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__ioctl.perf_evlist__disable.cmd_record.run_builtin
7.68 ±133% -4.9 2.78 ±173% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__ioctl.perf_evlist__disable.cmd_record
7.68 ±133% -4.9 2.78 ±173% perf-profile.calltrace.cycles-pp.__ioctl.perf_evlist__disable.cmd_record.run_builtin.main
7.68 ±133% -4.9 2.78 ±173% perf-profile.calltrace.cycles-pp.sys_ioctl.do_syscall_64.entry_SYSCALL_64_after_hwframe.__ioctl.perf_evlist__disable
7.68 ±133% -4.9 2.78 ±173% perf-profile.calltrace.cycles-pp.perf_evlist__disable.cmd_record.run_builtin.main.generic_start_main
4.37 ±111% -4.4 0.00 perf-profile.calltrace.cycles-pp.__mutex_lock.show_interrupts.seq_read.proc_reg_read.__vfs_read
4.37 ±111% -4.4 0.00 perf-profile.calltrace.cycles-pp.mutex_spin_on_owner.__mutex_lock.show_interrupts.seq_read.proc_reg_read
4.33 ±109% -4.3 0.00 perf-profile.calltrace.cycles-pp.seq_printf.show_interrupts.seq_read.proc_reg_read.__vfs_read
4.33 ±109% -4.3 0.00 perf-profile.calltrace.cycles-pp.seq_vprintf.seq_printf.show_interrupts.seq_read.proc_reg_read
4.33 ±109% -4.3 0.00 perf-profile.calltrace.cycles-pp.vsnprintf.seq_vprintf.seq_printf.show_interrupts.seq_read
13.76 ± 91% -3.8 10.00 ±173% perf-profile.calltrace.cycles-pp.proc_reg_read.__vfs_read.vfs_read.sys_read.do_syscall_64
13.76 ± 91% -3.8 10.00 ±173% perf-profile.calltrace.cycles-pp.seq_read.proc_reg_read.__vfs_read.vfs_read.sys_read
12.31 ± 90% -2.3 10.00 ±173% perf-profile.calltrace.cycles-pp.show_interrupts.seq_read.proc_reg_read.__vfs_read.vfs_read
3.51 ±103% -1.2 2.31 ±173% perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64
3.51 ±103% -1.2 2.31 ±173% perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
7.27 ±104% +2.7 10.00 ±173% perf-profile.calltrace.cycles-pp.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
7.27 ±104% +2.7 10.00 ±173% perf-profile.calltrace.cycles-pp.vfs_read.sys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
5.78 ±105% -5.8 0.00 perf-profile.children.cycles-pp.seq_printf
5.78 ±105% -5.8 0.00 perf-profile.children.cycles-pp.seq_vprintf
5.78 ±105% -5.8 0.00 perf-profile.children.cycles-pp.vsnprintf
7.68 ±133% -4.9 2.78 ±173% perf-profile.children.cycles-pp.perf_evlist__disable
14.46 ± 83% -4.5 10.00 ±173% perf-profile.children.cycles-pp.seq_read
4.37 ±111% -4.4 0.00 perf-profile.children.cycles-pp.__mutex_lock
4.37 ±111% -4.4 0.00 perf-profile.children.cycles-pp.mutex_spin_on_owner
13.76 ± 91% -3.8 10.00 ±173% perf-profile.children.cycles-pp.proc_reg_read
12.31 ± 90% -2.3 10.00 ±173% perf-profile.children.cycles-pp.show_interrupts
3.51 ±103% -1.2 2.31 ±173% perf-profile.children.cycles-pp.do_filp_open
3.51 ±103% -1.2 2.31 ±173% perf-profile.children.cycles-pp.path_openat
4.37 ±111% -4.4 0.00 perf-profile.self.cycles-pp.mutex_spin_on_owner
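
The perf-stat rows above are internally consistent: ipc is the reciprocal of cpi,
so the -40.5% drop in perf-stat.ipc is simply the +69.5% rise in perf-stat.cpi
seen from the other side. A one-line sanity check with the values from this table:

    for cpi in (3.35, 5.67):                     # perf-stat.cpi, parent vs patched
        print(f"cpi={cpi}  ipc={1/cpi:.2f}")     # ipc=0.30 and ipc=0.18, matching perf-stat.ipc
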
***************************************************************************************************
lkp-skl-2sp4: 104 threads Intel(R) Xeon(R) Platinum 8170 CPU @ 2.10GHz with 64G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/testcase:
gcc-7/performance/x86_64-rhel-7.2/50%/debian-x86_64-2018-04-03.cgz/300s/lkp-skl-2sp4/pft
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
1:4 -25% :4 dmesg.WARNING:at#for_ip_apic_timer_interrupt/0x
%stddev %change %stddev
\ | \
230747 -42.5% 132607 pft.faults_per_sec_per_cpu
7861494 -33.0% 5265420 pft.time.minor_page_faults
4057 +17.5% 4767 pft.time.percent_of_cpu_this_job_got
11947 +16.2% 13877 pft.time.system_time
249.55 +86.6% 465.61 ± 7% pft.time.user_time
143006 +47.0% 210176 ± 4% pft.time.voluntary_context_switches
6501752 ± 9% -40.5% 3870149 ± 34% numa-numastat.node0.local_node
6515543 ± 9% -40.4% 3881040 ± 34% numa-numastat.node0.numa_hit
59.50 -10.1% 53.50 vmstat.cpu.id
39.75 ± 2% +20.1% 47.75 vmstat.procs.r
5101 ± 2% +31.8% 6723 ± 9% vmstat.system.cs
59.84 -6.1 53.74 mpstat.cpu.idle%
0.00 ± 61% +0.0 0.01 ± 33% mpstat.cpu.soft%
39.14 +5.4 44.54 mpstat.cpu.sys%
0.99 +0.7 1.67 ± 7% mpstat.cpu.usr%
32208 ± 14% +37.3% 44213 ± 4% numa-meminfo.node0.SReclaimable
411790 ± 16% +161.6% 1077435 ± 17% numa-meminfo.node1.AnonHugePages
3769 ± 13% +29.0% 4860 ± 9% numa-meminfo.node1.PageTables
41429 ± 11% -31.3% 28458 ± 7% numa-meminfo.node1.SReclaimable
45429 ± 49% -59.8% 18251 ±108% numa-meminfo.node1.Shmem
14247 -16.1% 11952 ± 4% slabinfo.files_cache.active_objs
14326 -15.5% 12108 ± 4% slabinfo.files_cache.num_objs
5797 -18.6% 4718 ± 8% slabinfo.mm_struct.active_objs
5863 -18.2% 4797 ± 8% slabinfo.mm_struct.num_objs
1193 ± 3% -10.7% 1066 ± 2% slabinfo.nsproxy.active_objs
1193 ± 3% -10.7% 1066 ± 2% slabinfo.nsproxy.num_objs
6180787 ± 3% -15.2% 5242821 ± 14% cpuidle.C1.time
342482 ± 4% +37.0% 469078 ± 7% cpuidle.C1.usage
2.939e+08 ± 3% +31.0% 3.85e+08 ± 3% cpuidle.C1E.time
848558 ± 6% +30.9% 1110683 ± 2% cpuidle.C1E.usage
1.796e+10 -10.1% 1.615e+10 ± 2% cpuidle.C6.time
18678065 -10.9% 16650357 cpuidle.C6.usage
1.116e+08 ± 11% +128.8% 2.554e+08 ± 10% cpuidle.POLL.time
6702 ± 3% +38.5% 9283 cpuidle.POLL.usage
5173027 ± 2% +19.2% 6166000 ± 2% meminfo.Active
5128144 ± 2% +19.4% 6121960 ± 2% meminfo.Active(anon)
733868 +130.3% 1690255 ± 33% meminfo.AnonHugePages
4988775 ± 2% +19.2% 5948484 ± 3% meminfo.AnonPages
5.318e+08 +18.8% 6.319e+08 meminfo.Committed_AS
1082 ± 84% +87.0% 2024 ± 9% meminfo.Mlocked
7070 +26.5% 8941 ± 12% meminfo.PageTables
1083 ± 84% +87.1% 2027 ± 9% meminfo.Unevictable
8051 ± 14% +37.3% 11053 ± 4% numa-vmstat.node0.nr_slab_reclaimable
3703595 ± 7% -43.4% 2095655 ± 15% numa-vmstat.node0.numa_hit
3689371 ± 7% -43.5% 2084541 ± 15% numa-vmstat.node0.numa_local
191.75 ± 16% +164.5% 507.25 ± 34% numa-vmstat.node1.nr_anon_transparent_hugepages
11353 ± 49% -59.8% 4565 ±108% numa-vmstat.node1.nr_shmem
10358 ± 11% -31.3% 7114 ± 7% numa-vmstat.node1.nr_slab_reclaimable
4161642 ± 6% -12.2% 3653839 ± 9% numa-vmstat.node1.numa_hit
4020032 ± 6% -12.7% 3508992 ± 9% numa-vmstat.node1.numa_local
1177 +14.0% 1341 turbostat.Avg_MHz
42.29 +5.9 48.23 turbostat.Busy%
340371 ± 4% +37.3% 467253 ± 7% turbostat.C1
842290 ± 6% +31.3% 1105626 ± 2% turbostat.C1E
0.93 ± 3% +0.3 1.21 ± 4% turbostat.C1E%
18676469 -10.9% 16648172 turbostat.C6
56.92 -6.2 50.68 turbostat.C6%
56.18 -13.0% 48.89 ± 3% turbostat.CPU%c1
255.16 -8.8% 232.68 turbostat.PkgWatt
65.13 -17.8% 53.51 turbostat.RAMWatt
25420 ± 11% -32.7% 17115 ± 18% softirqs.CPU1.SCHED
16168 ± 10% -56.0% 7113 ± 22% softirqs.CPU2.SCHED
91499 ± 10% +18.0% 107985 ± 15% softirqs.CPU29.TIMER
13561 ± 16% -51.4% 6584 ± 16% softirqs.CPU3.SCHED
84774 ± 6% +27.2% 107811 ± 18% softirqs.CPU30.TIMER
82435 ± 8% +24.5% 102670 ± 21% softirqs.CPU31.TIMER
77710 ± 13% +35.6% 105402 ± 20% softirqs.CPU34.TIMER
11043 ± 8% -41.2% 6493 ± 19% softirqs.CPU4.SCHED
87767 ± 4% +22.1% 107187 ± 15% softirqs.CPU48.TIMER
10229 ± 10% -40.4% 6098 ± 21% softirqs.CPU5.SCHED
86042 ± 4% +22.1% 105097 ± 17% softirqs.CPU80.TIMER
86912 ± 5% +23.7% 107494 ± 16% softirqs.CPU81.TIMER
82985 ± 10% +32.6% 110071 ± 13% softirqs.CPU87.TIMER
79387 ± 12% +40.0% 111162 ± 13% softirqs.CPU90.TIMER
82466 ± 13% +39.0% 114662 ± 11% softirqs.CPU93.TIMER
6797 ± 31% -45.9% 3674 ± 34% softirqs.CPU99.SCHED
355628 ± 2% +14.3% 406464 ± 6% softirqs.RCU
8446116 +9.7% 9267049 softirqs.TIMER
1262046 +22.3% 1542932 proc-vmstat.nr_active_anon
1228899 +22.1% 1500715 proc-vmstat.nr_anon_pages
356.25 ± 9% +100.1% 712.75 ± 3% proc-vmstat.nr_anon_transparent_hugepages
1473503 -1.8% 1446646 proc-vmstat.nr_dirty_background_threshold
2950611 -1.8% 2896831 proc-vmstat.nr_dirty_threshold
14627049 -1.8% 14358270 proc-vmstat.nr_free_pages
269.75 ± 84% +87.8% 506.50 ± 8% proc-vmstat.nr_mlock
1758 +21.4% 2136 ± 2% proc-vmstat.nr_page_table_pages
18410 -1.3% 18167 proc-vmstat.nr_slab_reclaimable
52708 -2.6% 51354 proc-vmstat.nr_slab_unreclaimable
270.00 ± 84% +87.8% 507.00 ± 8% proc-vmstat.nr_unevictable
1262043 +22.3% 1542927 proc-vmstat.nr_zone_active_anon
270.00 ± 84% +87.8% 507.00 ± 8% proc-vmstat.nr_zone_unevictable
13918060 -31.1% 9583409 proc-vmstat.numa_hit
13897019 -31.2% 9562209 proc-vmstat.numa_local
2.743e+09 -32.9% 1.841e+09 proc-vmstat.pgalloc_normal
8687624 -30.0% 6084756 proc-vmstat.pgfault
2.742e+09 -32.9% 1.841e+09 proc-vmstat.pgfree
5339231 -32.9% 3584118 proc-vmstat.thp_deferred_split_page
5339481 -32.9% 3584076 proc-vmstat.thp_fault_alloc
4.798e+11 +20.9% 5.802e+11 ± 7% perf-stat.branch-instructions
0.45 ± 4% -0.1 0.37 ± 3% perf-stat.branch-miss-rate%
73.80 -8.9 64.94 perf-stat.cache-miss-rate%
1.921e+10 -30.4% 1.337e+10 ± 2% perf-stat.cache-misses
2.603e+10 -20.9% 2.059e+10 ± 2% perf-stat.cache-references
1527973 ± 2% +34.0% 2047421 ± 9% perf-stat.context-switches
3.625e+13 +15.2% 4.178e+13 perf-stat.cpu-cycles
11029 -14.8% 9402 ± 5% perf-stat.cpu-migrations
5.477e+11 +18.3% 6.481e+11 ± 5% perf-stat.dTLB-loads
0.08 -0.0 0.05 ± 8% perf-stat.dTLB-store-miss-rate%
1.202e+08 -48.5% 61867822 ± 4% perf-stat.dTLB-store-misses
1.521e+11 -11.0% 1.354e+11 ± 4% perf-stat.dTLB-stores
22.62 +10.0 32.67 ± 10% perf-stat.iTLB-load-miss-rate%
63435492 +20.8% 76661143 ± 8% perf-stat.iTLB-load-misses
2.17e+08 -26.9% 1.586e+08 ± 7% perf-stat.iTLB-loads
1.91e+12 +16.2% 2.219e+12 ± 4% perf-stat.instructions
8642807 -30.0% 6049660 perf-stat.minor-faults
8.59 ± 2% -0.7 7.90 ± 6% perf-stat.node-load-miss-rate%
1.413e+08 ± 3% -34.4% 92653965 ± 6% perf-stat.node-load-misses
1.504e+09 -28.1% 1.081e+09 perf-stat.node-loads
1.152e+10 -27.4% 8.37e+09 perf-stat.node-stores
8642829 -30.0% 6049673 perf-stat.page-faults
611362 ± 11% +11.6% 682342 interrupts.CAL:Function_call_interrupts
34040 ± 11% -33.7% 22553 ± 16% interrupts.CPU1.RES:Rescheduling_interrupts
94.00 ±115% -82.7% 16.25 ± 84% interrupts.CPU103.RES:Rescheduling_interrupts
5539 ± 10% +18.1% 6544 interrupts.CPU16.CAL:Function_call_interrupts
5560 ± 9% +18.5% 6589 interrupts.CPU17.CAL:Function_call_interrupts
15417 ± 18% -66.0% 5237 ± 47% interrupts.CPU2.RES:Rescheduling_interrupts
150.00 ± 74% +346.3% 669.50 ± 32% interrupts.CPU23.RES:Rescheduling_interrupts
145.25 ± 65% +430.1% 770.00 ± 61% interrupts.CPU24.RES:Rescheduling_interrupts
522.00 ± 75% +3786.5% 20287 ± 52% interrupts.CPU26.RES:Rescheduling_interrupts
268.00 ± 57% +1782.2% 5044 ± 80% interrupts.CPU27.RES:Rescheduling_interrupts
129.25 ± 48% +713.9% 1052 ± 82% interrupts.CPU28.RES:Rescheduling_interrupts
101.75 ± 31% +397.8% 506.50 ± 41% interrupts.CPU29.RES:Rescheduling_interrupts
9029 ± 10% -72.3% 2498 ± 65% interrupts.CPU3.RES:Rescheduling_interrupts
5206 ± 16% +28.1% 6672 interrupts.CPU36.CAL:Function_call_interrupts
5821 ± 12% -73.2% 1561 ± 51% interrupts.CPU4.RES:Rescheduling_interrupts
83.00 ± 17% -32.5% 56.00 ± 29% interrupts.CPU40.RES:Rescheduling_interrupts
4349 ± 30% -80.8% 836.00 ± 50% interrupts.CPU5.RES:Rescheduling_interrupts
2458 ± 44% -68.3% 780.00 ±114% interrupts.CPU55.NMI:Non-maskable_interrupts
2458 ± 44% -68.3% 780.00 ±114% interrupts.CPU55.PMI:Performance_monitoring_interrupts
3153 ± 48% -78.7% 673.25 ± 74% interrupts.CPU56.NMI:Non-maskable_interrupts
3153 ± 48% -78.7% 673.25 ± 74% interrupts.CPU56.PMI:Performance_monitoring_interrupts
3769 ± 13% -80.7% 728.75 ± 48% interrupts.CPU6.RES:Rescheduling_interrupts
336.75 ±126% -91.2% 29.50 ± 95% interrupts.CPU74.RES:Rescheduling_interrupts
1766 ± 54% -65.7% 606.50 ± 54% interrupts.CPU8.RES:Rescheduling_interrupts
30.00 ± 28% +432.5% 159.75 ± 92% interrupts.CPU85.RES:Rescheduling_interrupts
320.65 ±173% +2126.1% 7138 ±100% sched_debug.cfs_rq:/.MIN_vruntime.avg
60894 +24.1% 75590 ± 14% sched_debug.cfs_rq:/.exec_clock.avg
13190 ± 9% +144.9% 32297 ± 29% sched_debug.cfs_rq:/.exec_clock.stddev
320.65 ±173% +2126.1% 7138 ±100% sched_debug.cfs_rq:/.max_vruntime.avg
3022932 +29.2% 3906103 ± 13% sched_debug.cfs_rq:/.min_vruntime.avg
5123973 ± 16% +40.1% 7181169 ± 4% sched_debug.cfs_rq:/.min_vruntime.max
593877 ± 8% +180.2% 1664130 ± 28% sched_debug.cfs_rq:/.min_vruntime.stddev
0.40 ± 6% +45.8% 0.59 ± 3% sched_debug.cfs_rq:/.nr_running.avg
1.71 ± 2% +67.3% 2.86 ± 19% sched_debug.cfs_rq:/.nr_spread_over.avg
1.29 ± 21% +52.4% 1.97 ± 15% sched_debug.cfs_rq:/.nr_spread_over.stddev
9.02 ± 3% +24.5% 11.23 ± 3% sched_debug.cfs_rq:/.runnable_load_avg.avg
28.38 ± 16% -30.8% 19.62 sched_debug.cfs_rq:/.runnable_load_avg.max
11.55 ± 11% -26.2% 8.53 ± 3% sched_debug.cfs_rq:/.runnable_load_avg.stddev
3730816 ± 19% +54.5% 5765499 sched_debug.cfs_rq:/.spread0.max
593850 ± 8% +180.2% 1664097 ± 28% sched_debug.cfs_rq:/.spread0.stddev
406.87 ± 7% +48.3% 603.22 ± 3% sched_debug.cfs_rq:/.util_avg.avg
28.33 ± 16% -30.6% 19.67 sched_debug.cpu.cpu_load[0].max
11.23 ± 9% -19.2% 9.07 sched_debug.cpu.cpu_load[0].stddev
48.62 ± 23% -59.1% 19.88 sched_debug.cpu.cpu_load[1].max
12.53 ± 7% -28.2% 9.00 sched_debug.cpu.cpu_load[1].stddev
67.50 ± 44% -68.6% 21.21 ± 11% sched_debug.cpu.cpu_load[2].max
14.14 ± 24% -36.6% 8.97 sched_debug.cpu.cpu_load[2].stddev
121.54 ±111% -81.0% 23.08 ± 12% sched_debug.cpu.cpu_load[3].max
19.34 ± 71% -53.4% 9.01 sched_debug.cpu.cpu_load[3].stddev
139.17 ±118% -80.4% 27.33 ± 14% sched_debug.cpu.cpu_load[4].max
20.88 ± 79% -55.4% 9.30 ± 3% sched_debug.cpu.cpu_load[4].stddev
27826 -25.9% 20621 sched_debug.cpu.curr->pid.max
12953 ± 3% -21.4% 10185 sched_debug.cpu.curr->pid.stddev
0.00 ± 17% +24.4% 0.00 ± 12% sched_debug.cpu.next_balance.stddev
122120 ± 3% +15.7% 141347 ± 2% sched_debug.cpu.nr_load_updates.max
10094 ± 7% +127.8% 22993 ± 25% sched_debug.cpu.nr_load_updates.stddev
0.37 ± 6% +15.8% 0.43 sched_debug.cpu.nr_running.avg
7428 ± 2% +23.6% 9182 ± 7% sched_debug.cpu.nr_switches.avg
42101 ± 7% +105.4% 86497 ± 25% sched_debug.cpu.nr_switches.max
6842 ± 3% +82.1% 12458 ± 10% sched_debug.cpu.nr_switches.stddev
6724 +24.8% 8389 ± 7% sched_debug.cpu.sched_count.avg
40147 ± 7% +109.1% 83942 ± 26% sched_debug.cpu.sched_count.max
6608 ± 3% +83.2% 12108 ± 11% sched_debug.cpu.sched_count.stddev
3230 +19.5% 3861 ± 6% sched_debug.cpu.sched_goidle.avg
19924 ± 7% +110.0% 41838 ± 26% sched_debug.cpu.sched_goidle.max
3301 ± 3% +78.9% 5906 ± 12% sched_debug.cpu.sched_goidle.stddev
3123 ± 2% +29.1% 4032 ± 7% sched_debug.cpu.ttwu_count.avg
27111 ± 20% +105.9% 55815 ± 34% sched_debug.cpu.ttwu_count.max
4227 ± 10% +83.8% 7771 ± 11% sched_debug.cpu.ttwu_count.stddev
992.95 +56.0% 1548 ± 10% sched_debug.cpu.ttwu_local.avg
3551 ± 15% +159.8% 9228 ± 48% sched_debug.cpu.ttwu_local.max
194.04 ± 71% +90.8% 370.25 ± 9% sched_debug.cpu.ttwu_local.min
603.41 ± 7% +114.2% 1292 ± 45% sched_debug.cpu.ttwu_local.stddev
19.92 -16.7 3.21 ± 15% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
21.42 -16.0 5.42 ± 8% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
21.46 -16.0 5.49 ± 8% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
21.46 -16.0 5.49 ± 8% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
21.46 -16.0 5.49 ± 8% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
21.74 ± 2% -15.9 5.86 ± 6% perf-profile.calltrace.cycles-pp.secondary_startup_64
1.17 ± 3% +0.3 1.43 ± 2% perf-profile.calltrace.cycles-pp.release_pages.tlb_flush_mmu_free.unmap_page_range.unmap_vmas.exit_mmap
1.17 ± 3% +0.3 1.44 ± 2% perf-profile.calltrace.cycles-pp.tlb_flush_mmu_free.unmap_page_range.unmap_vmas.exit_mmap.mmput
0.55 +0.4 0.95 ± 2% perf-profile.calltrace.cycles-pp.___might_sleep
1.43 ± 2% +0.5 1.97 perf-profile.calltrace.cycles-pp._cond_resched.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
1.31 ± 3% +0.6 1.88 ± 3% perf-profile.calltrace.cycles-pp.__wake_up_parent.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.31 ± 3% +0.6 1.88 ± 3% perf-profile.calltrace.cycles-pp.do_group_exit.__wake_up_parent.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.31 ± 3% +0.6 1.88 ± 3% perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.__wake_up_parent.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.30 ± 3% +0.6 1.88 ± 3% perf-profile.calltrace.cycles-pp.mmput.do_exit.do_group_exit.__wake_up_parent.do_syscall_64
1.29 ± 3% +0.6 1.88 ± 3% perf-profile.calltrace.cycles-pp.exit_mmap.mmput.do_exit.do_group_exit.__wake_up_parent
1.32 ± 3% +0.6 1.90 ± 3% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
1.32 ± 3% +0.6 1.90 ± 3% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.26 ± 3% +0.6 1.85 ± 3% perf-profile.calltrace.cycles-pp.unmap_vmas.exit_mmap.mmput.do_exit.do_group_exit
1.26 ± 3% +0.6 1.85 ± 3% perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.exit_mmap.mmput.do_exit
0.92 ± 2% +0.6 1.54 perf-profile.calltrace.cycles-pp.clear_huge_page
0.55 ±102% +0.9 1.42 ± 11% perf-profile.calltrace.cycles-pp.poll_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
0.69 ± 4% +1.0 1.69 ± 2% perf-profile.calltrace.cycles-pp.clear_page_erms
1.66 ± 2% +1.1 2.72 perf-profile.calltrace.cycles-pp.___might_sleep.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
0.77 ± 4% +1.7 2.44 ± 4% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.get_page_from_freelist.__alloc_pages_nodemask.do_huge_pmd_anonymous_page.__handle_mm_fault
0.76 ± 4% +1.7 2.43 ± 4% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.get_page_from_freelist.__alloc_pages_nodemask.do_huge_pmd_anonymous_page
2.43 ± 2% +2.1 4.53 ± 2% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
2.44 ± 2% +2.1 4.55 ± 2% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
60.91 +9.2 70.07 perf-profile.calltrace.cycles-pp.clear_page_erms.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
69.30 +10.9 80.25 perf-profile.calltrace.cycles-pp.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault
72.23 +13.1 85.35 perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
72.25 +13.1 85.38 perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
72.11 +13.1 85.25 perf-profile.calltrace.cycles-pp.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
72.31 +13.2 85.48 perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
72.31 +13.2 85.48 perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault
72.32 +13.2 85.50 perf-profile.calltrace.cycles-pp.page_fault
20.15 ± 2% -16.6 3.55 ± 4% perf-profile.children.cycles-pp.intel_idle
21.46 -16.0 5.49 ± 8% perf-profile.children.cycles-pp.start_secondary
21.71 ± 2% -15.9 5.82 ± 6% perf-profile.children.cycles-pp.cpuidle_enter_state
21.74 ± 2% -15.9 5.86 ± 6% perf-profile.children.cycles-pp.do_idle
21.74 ± 2% -15.9 5.86 ± 6% perf-profile.children.cycles-pp.secondary_startup_64
21.74 ± 2% -15.9 5.86 ± 6% perf-profile.children.cycles-pp.cpu_startup_entry
0.14 ± 16% -0.1 0.08 ± 10% perf-profile.children.cycles-pp.read
0.17 ± 6% -0.0 0.15 ± 4% perf-profile.children.cycles-pp.native_irq_return_iret
0.07 ± 6% +0.0 0.08 ± 5% perf-profile.children.cycles-pp.__list_del_entry_valid
0.08 ± 17% +0.0 0.11 ± 9% perf-profile.children.cycles-pp.cmd_stat
0.08 ± 17% +0.0 0.11 ± 9% perf-profile.children.cycles-pp.__run_perf_stat
0.08 ± 17% +0.0 0.11 ± 9% perf-profile.children.cycles-pp.process_interval
0.08 ± 17% +0.0 0.11 ± 9% perf-profile.children.cycles-pp.read_counters
0.07 ± 17% +0.0 0.10 ± 10% perf-profile.children.cycles-pp.perf_event_read
0.08 ± 19% +0.0 0.11 ± 9% perf-profile.children.cycles-pp.perf_evsel__read_counter
0.08 ± 19% +0.0 0.11 ± 9% perf-profile.children.cycles-pp.__read_nocancel
0.08 ± 16% +0.0 0.11 ± 11% perf-profile.children.cycles-pp.perf_read
0.07 ± 15% +0.0 0.11 ± 7% perf-profile.children.cycles-pp.smp_call_function_single
0.03 ±100% +0.0 0.06 ± 6% perf-profile.children.cycles-pp.__pagevec_lru_add_fn
0.57 ± 5% +0.0 0.61 ± 4% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.00 +0.1 0.05 perf-profile.children.cycles-pp.___perf_sw_event
0.08 ± 8% +0.1 0.13 ± 3% perf-profile.children.cycles-pp.pte_alloc_one
0.00 +0.1 0.06 ± 7% perf-profile.children.cycles-pp.__perf_sw_event
0.05 ± 62% +0.1 0.12 ± 19% perf-profile.children.cycles-pp.ktime_get
0.40 ± 2% +0.1 0.49 ± 2% perf-profile.children.cycles-pp.rcu_all_qs
0.13 ± 3% +0.1 0.25 ± 17% perf-profile.children.cycles-pp.__put_compound_page
0.10 ± 8% +0.1 0.23 ± 18% perf-profile.children.cycles-pp.__page_cache_release
0.00 +0.2 0.16 ± 4% perf-profile.children.cycles-pp.free_transhuge_page
0.20 ± 4% +0.2 0.41 ± 5% perf-profile.children.cycles-pp.free_one_page
1.17 ± 3% +0.3 1.44 ± 2% perf-profile.children.cycles-pp.tlb_flush_mmu_free
1.18 ± 3% +0.3 1.46 ± 2% perf-profile.children.cycles-pp.release_pages
0.07 ± 6% +0.3 0.39 ± 11% perf-profile.children.cycles-pp.zap_huge_pmd
0.00 +0.3 0.34 ± 12% perf-profile.children.cycles-pp.deferred_split_huge_page
1.30 ± 3% +0.6 1.88 ± 3% perf-profile.children.cycles-pp.mmput
1.31 ± 3% +0.6 1.89 ± 3% perf-profile.children.cycles-pp.__wake_up_parent
1.31 ± 3% +0.6 1.89 ± 3% perf-profile.children.cycles-pp.do_group_exit
1.31 ± 3% +0.6 1.89 ± 3% perf-profile.children.cycles-pp.do_exit
1.30 ± 3% +0.6 1.88 ± 3% perf-profile.children.cycles-pp.exit_mmap
1.27 ± 3% +0.6 1.85 ± 3% perf-profile.children.cycles-pp.unmap_vmas
1.26 ± 3% +0.6 1.85 ± 3% perf-profile.children.cycles-pp.unmap_page_range
1.71 +0.6 2.34 perf-profile.children.cycles-pp._cond_resched
0.67 ± 73% +0.8 1.42 ± 11% perf-profile.children.cycles-pp.poll_idle
2.22 ± 2% +1.5 3.67 perf-profile.children.cycles-pp.___might_sleep
2.54 ± 2% +2.2 4.69 ± 2% perf-profile.children.cycles-pp.__alloc_pages_nodemask
2.52 ± 2% +2.2 4.68 ± 2% perf-profile.children.cycles-pp.get_page_from_freelist
0.97 ± 4% +2.2 3.19 ± 5% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
1.57 ± 14% +2.3 3.85 ± 10% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
62.03 +10.2 72.23 perf-profile.children.cycles-pp.clear_page_erms
70.21 +11.6 81.78 perf-profile.children.cycles-pp.clear_huge_page
72.25 +13.1 85.37 perf-profile.children.cycles-pp.__handle_mm_fault
72.27 +13.1 85.40 perf-profile.children.cycles-pp.handle_mm_fault
72.11 +13.1 85.26 perf-profile.children.cycles-pp.do_huge_pmd_anonymous_page
72.33 +13.2 85.50 perf-profile.children.cycles-pp.do_page_fault
72.33 +13.2 85.50 perf-profile.children.cycles-pp.__do_page_fault
72.34 +13.2 85.51 perf-profile.children.cycles-pp.page_fault
20.15 ± 2% -16.6 3.55 ± 4% perf-profile.self.cycles-pp.intel_idle
0.78 ± 4% -0.2 0.59 ± 4% perf-profile.self.cycles-pp.__free_pages_ok
0.17 ± 6% -0.0 0.15 ± 4% perf-profile.self.cycles-pp.native_irq_return_iret
0.07 ± 6% +0.0 0.08 ± 5% perf-profile.self.cycles-pp.__list_del_entry_valid
0.07 ± 15% +0.0 0.11 ± 8% perf-profile.self.cycles-pp.smp_call_function_single
0.03 ±100% +0.1 0.11 ± 23% perf-profile.self.cycles-pp.ktime_get
0.39 ± 2% +0.1 0.48 ± 2% perf-profile.self.cycles-pp.rcu_all_qs
1.61 +0.4 2.00 perf-profile.self.cycles-pp.get_page_from_freelist
1.46 +0.6 2.07 perf-profile.self.cycles-pp._cond_resched
0.66 ± 73% +0.8 1.42 ± 11% perf-profile.self.cycles-pp.poll_idle
6.08 +0.8 6.88 perf-profile.self.cycles-pp.clear_huge_page
2.19 +1.5 3.65 perf-profile.self.cycles-pp.___might_sleep
1.57 ± 14% +2.3 3.85 ± 10% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
61.40 +10.3 71.66 perf-profile.self.cycles-pp.clear_page_erms
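
Throughout the report, "± N%" is the run-to-run standard deviation expressed as a
percentage of the mean (each result in this section comes from 4 runs, per the
fail:runs block); where no ± is printed, the deviation rounded away. A sketch of
the computation on made-up run values -- the four numbers below are illustrative,
not taken from this report, and whether lkp uses sample or population stddev is an
assumption here:

    import statistics
    runs = [230000, 235000, 228000, 230000]      # hypothetical per-run results
    mean = statistics.mean(runs)
    rel_stddev = statistics.pstdev(runs) / mean * 100   # population stddev, as a % of mean
    print(f"{mean:.0f} +- {rel_stddev:.0f}%")           # 230750 +- 1%
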
***************************************************************************************************
lkp-hsx04: 144 threads Intel(R) Xeon(R) CPU E7-8890 v3 @ 2.50GHz with 512G memory
=========================================================================================
compiler/cpufreq_governor/iterations/kconfig/nr_task/rootfs/tbox_group/test/testcase:
gcc-7/performance/30/x86_64-rhel-7.2/1600%/debian-x86_64-2016-08-31.cgz/lkp-hsx04/compute/reaim
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
1:4 -25% :4 dmesg.WARNING:at#for_ip_error_entry/0x
1:4 -25% :4 dmesg.WARNING:at#for_ip_retint_user/0x
1:4 -25% :4 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=__slab_free/0x
:4 50% 2:4 dmesg.WARNING:stack_going_in_the_wrong_direction?ip=schedule_tail/0x
6:4 0% 6:4 perf-profile.calltrace.cycles-pp.error_entry
7:4 -1% 7:4 perf-profile.children.cycles-pp.error_entry
5:4 1% 5:4 perf-profile.self.cycles-pp.error_entry
%stddev %change %stddev
\ | \
124.00 -1.4% 122.25 reaim.child_systime
87.40 -9.5% 79.08 reaim.jti
12.09 ± 2% +69.5% 20.49 ± 2% reaim.std_dev_percent
1.53 +50.7% 2.30 ± 2% reaim.std_dev_time
17534460 -5.9% 16498575 reaim.time.involuntary_context_switches
3734 -1.4% 3683 reaim.time.system_time
49689016 ± 17% +26.1% 62671778 ± 6% cpuidle.POLL.time
0.39 ± 3% -15.5% 0.33 ± 10% turbostat.Pkg%pc6
70.96 -2.2% 69.38 turbostat.RAMWatt
1695 -10.0% 1525 vmstat.procs.r
62117 -3.2% 60144 vmstat.system.cs
7903 ± 4% -56.9% 3405 ± 24% slabinfo.eventpoll_epi.active_objs
7903 ± 4% -56.9% 3405 ± 24% slabinfo.eventpoll_epi.num_objs
6915 ± 4% -56.9% 2979 ± 24% slabinfo.eventpoll_pwq.active_objs
6915 ± 4% -56.9% 2979 ± 24% slabinfo.eventpoll_pwq.num_objs
1872863 -15.2% 1587464 ± 2% meminfo.Active
1789974 -15.8% 1506890 meminfo.Active(anon)
1680128 -17.4% 1388186 meminfo.AnonPages
3268127 -13.1% 2840098 meminfo.Committed_AS
1855 -58.7% 766.75 meminfo.Mlocked
1861 -58.5% 773.25 meminfo.Unevictable
1.69 -0.2 1.52 perf-stat.cache-miss-rate%
2.587e+10 -10.6% 2.313e+10 perf-stat.cache-misses
30767306 -3.3% 29760895 perf-stat.context-switches
5874734 -14.8% 5007295 perf-stat.cpu-migrations
71.33 -0.8 70.58 perf-stat.node-load-miss-rate%
8.155e+09 -10.6% 7.291e+09 perf-stat.node-load-misses
3.277e+09 -7.3% 3.039e+09 perf-stat.node-loads
2.765e+09 -11.5% 2.448e+09 ± 2% perf-stat.node-store-misses
1.519e+10 -13.1% 1.32e+10 perf-stat.node-stores
396395 ± 4% -18.4% 323621 ± 7% numa-meminfo.node1.AnonPages
281797 ± 8% -9.8% 254296 ± 3% numa-meminfo.node1.Inactive(file)
521557 ± 6% -17.7% 429170 ± 5% numa-meminfo.node2.Active
500694 ± 7% -18.6% 407636 ± 5% numa-meminfo.node2.Active(anon)
478540 ± 8% -17.2% 396013 ± 12% numa-meminfo.node3.Active
459279 ± 7% -17.9% 377244 ± 13% numa-meminfo.node3.Active(anon)
32735 ± 41% -97.8% 716.00 ±100% numa-meminfo.node3.AnonHugePages
434193 ± 4% -23.8% 330780 ± 11% numa-meminfo.node3.AnonPages
285808 ± 5% -10.7% 255254 ± 4% numa-meminfo.node3.Inactive
280289 ± 6% -9.2% 254380 ± 3% numa-meminfo.node3.Inactive(file)
445067 -15.6% 375712 proc-vmstat.nr_active_anon
417525 -17.1% 346011 proc-vmstat.nr_anon_pages
56599 -5.0% 53752 proc-vmstat.nr_kernel_stack
463.50 -58.9% 190.50 proc-vmstat.nr_mlock
100103 -2.7% 97448 proc-vmstat.nr_slab_unreclaimable
465.00 -58.7% 192.25 proc-vmstat.nr_unevictable
445067 -15.6% 375712 proc-vmstat.nr_zone_active_anon
465.00 -58.7% 192.25 proc-vmstat.nr_zone_unevictable
195.25 ± 56% +729.4% 1619 ± 87% proc-vmstat.numa_hint_faults_local
7687 ± 3% +4.5% 8036 ± 2% proc-vmstat.numa_pte_updates
634.12 ± 6% -26.8% 464.15 ± 12% sched_debug.cfs_rq:/.exec_clock.stddev
49884777 ± 5% +21.8% 60745074 sched_debug.cfs_rq:/.min_vruntime.avg
56196226 ± 6% +25.2% 70352699 sched_debug.cfs_rq:/.min_vruntime.max
43890810 ± 4% +15.7% 50767701 ± 3% sched_debug.cfs_rq:/.min_vruntime.min
2523222 ± 6% +98.7% 5013049 ± 19% sched_debug.cfs_rq:/.min_vruntime.stddev
1.56 ± 5% +12.6% 1.76 ± 4% sched_debug.cfs_rq:/.nr_spread_over.avg
-5929434 +78.4% -10580036 sched_debug.cfs_rq:/.spread0.min
2487301 ± 6% +101.3% 5007639 ± 19% sched_debug.cfs_rq:/.spread0.stddev
243.96 ± 2% -10.0% 219.50 sched_debug.cfs_rq:/.util_avg.stddev
670504 ± 5% -15.4% 566996 ± 6% sched_debug.cpu.avg_idle.min
58664 ± 11% +40.9% 82632 ± 12% sched_debug.cpu.avg_idle.stddev
470.51 ± 17% -47.7% 246.06 ± 20% sched_debug.cpu.clock.stddev
470.51 ± 17% -47.7% 246.06 ± 20% sched_debug.cpu.clock_task.stddev
7233 ± 9% +18.2% 8551 ± 12% sched_debug.cpu.load.avg
137222 ± 62% +119.4% 301049 ± 35% sched_debug.cpu.load.max
15787 ± 43% +85.8% 29335 ± 30% sched_debug.cpu.load.stddev
0.00 ± 17% -35.3% 0.00 ± 16% sched_debug.cpu.next_balance.stddev
169.75 ± 13% -25.0% 127.31 ± 8% sched_debug.cpu.sched_goidle.min
3.38 +39.4% 4.71 ± 15% sched_debug.rt_rq:/.rt_runtime.stddev
131.75 ± 23% -58.3% 55.00 ± 28% numa-vmstat.node0.nr_mlock
132.00 ± 23% -58.1% 55.25 ± 27% numa-vmstat.node0.nr_unevictable
132.00 ± 23% -58.1% 55.25 ± 27% numa-vmstat.node0.nr_zone_unevictable
98455 ± 4% -18.2% 80516 ± 6% numa-vmstat.node1.nr_anon_pages
319.25 ±142% -100.0% 0.00 numa-vmstat.node1.nr_dirtied
70449 ± 8% -9.8% 63574 ± 3% numa-vmstat.node1.nr_inactive_file
99.50 ± 3% -60.1% 39.75 numa-vmstat.node1.nr_mlock
100.00 ± 4% -60.2% 39.75 numa-vmstat.node1.nr_unevictable
319.25 ±142% -100.0% 0.00 numa-vmstat.node1.nr_written
70449 ± 8% -9.8% 63574 ± 3% numa-vmstat.node1.nr_zone_inactive_file
100.00 ± 4% -60.2% 39.75 numa-vmstat.node1.nr_zone_unevictable
124407 ± 7% -18.3% 101653 ± 5% numa-vmstat.node2.nr_active_anon
124405 ± 7% -18.3% 101652 ± 5% numa-vmstat.node2.nr_zone_active_anon
114276 ± 7% -17.8% 93899 ± 12% numa-vmstat.node3.nr_active_anon
108032 ± 4% -23.8% 82312 ± 11% numa-vmstat.node3.nr_anon_pages
70071 ± 6% -9.2% 63594 ± 3% numa-vmstat.node3.nr_inactive_file
134.00 ± 21% -64.6% 47.50 ± 28% numa-vmstat.node3.nr_mlock
135.00 ± 21% -64.6% 47.75 ± 28% numa-vmstat.node3.nr_unevictable
114275 ± 7% -17.8% 93901 ± 12% numa-vmstat.node3.nr_zone_active_anon
70071 ± 6% -9.2% 63594 ± 3% numa-vmstat.node3.nr_zone_inactive_file
135.00 ± 21% -64.6% 47.75 ± 28% numa-vmstat.node3.nr_zone_unevictable
1.411e+09 ± 8% -3.3e+08 1.077e+09 ± 14% syscalls.sys_brk.noise.100%
1.42e+09 ± 7% -3.3e+08 1.086e+09 ± 14% syscalls.sys_brk.noise.2%
1.416e+09 ± 7% -3.3e+08 1.082e+09 ± 14% syscalls.sys_brk.noise.25%
1.42e+09 ± 7% -3.3e+08 1.085e+09 ± 14% syscalls.sys_brk.noise.5%
1.414e+09 ± 8% -3.3e+08 1.08e+09 ± 14% syscalls.sys_brk.noise.50%
1.413e+09 ± 8% -3.3e+08 1.079e+09 ± 14% syscalls.sys_brk.noise.75%
4.046e+09 ± 13% -1.3e+09 2.793e+09 ± 6% syscalls.sys_newstat.noise.100%
4.119e+09 ± 12% -1.3e+09 2.868e+09 ± 6% syscalls.sys_newstat.noise.2%
4.101e+09 ± 12% -1.3e+09 2.849e+09 ± 6% syscalls.sys_newstat.noise.25%
4.117e+09 ± 12% -1.3e+09 2.866e+09 ± 6% syscalls.sys_newstat.noise.5%
4.08e+09 ± 12% -1.3e+09 2.828e+09 ± 6% syscalls.sys_newstat.noise.50%
4.062e+09 ± 12% -1.3e+09 2.811e+09 ± 6% syscalls.sys_newstat.noise.75%
1.541e+11 ± 10% -4.7e+10 1.072e+11 ± 15% syscalls.sys_read.noise.100%
1.541e+11 ± 10% -4.7e+10 1.073e+11 ± 15% syscalls.sys_read.noise.2%
1.541e+11 ± 10% -4.7e+10 1.073e+11 ± 15% syscalls.sys_read.noise.25%
1.541e+11 ± 10% -4.7e+10 1.073e+11 ± 15% syscalls.sys_read.noise.5%
1.541e+11 ± 10% -4.7e+10 1.073e+11 ± 15% syscalls.sys_read.noise.50%
1.541e+11 ± 10% -4.7e+10 1.072e+11 ± 15% syscalls.sys_read.noise.75%
130453 ± 16% -69.2% 40150 ±103% syscalls.sys_rt_sigaction.max
19777092 ± 4% -1.3e+07 6543904 ±100% syscalls.sys_rt_sigaction.noise.100%
27560343 ± 2% -1.7e+07 10538623 ±100% syscalls.sys_rt_sigaction.noise.2%
26718095 ± 3% -1.7e+07 9971159 ±100% syscalls.sys_rt_sigaction.noise.25%
27550355 ± 2% -1.7e+07 10510393 ±100% syscalls.sys_rt_sigaction.noise.5%
24718035 ± 3% -1.6e+07 8356079 ±100% syscalls.sys_rt_sigaction.noise.50%
22249116 ± 4% -1.5e+07 7149959 ±100% syscalls.sys_rt_sigaction.noise.75%
27266292 ± 11% -1.6e+07 11532735 ±100% syscalls.sys_times.noise.100%
32337364 ± 9% -1.8e+07 14209280 ±100% syscalls.sys_times.noise.2%
31159606 ± 9% -1.8e+07 13621578 ±100% syscalls.sys_times.noise.25%
32279406 ± 9% -1.8e+07 14182805 ±100% syscalls.sys_times.noise.5%
30086951 ± 9% -1.7e+07 13027260 ±100% syscalls.sys_times.noise.50%
28978220 ± 10% -1.7e+07 12426543 ±100% syscalls.sys_times.noise.75%
4922 ± 13% -12.0% 4333 interrupts.CPU102.CAL:Function_call_interrupts
15763 ± 12% -17.4% 13021 ± 3% interrupts.CPU104.RES:Rescheduling_interrupts
4930 ± 14% -12.3% 4325 interrupts.CPU11.CAL:Function_call_interrupts
4910 ± 13% -12.1% 4318 interrupts.CPU118.CAL:Function_call_interrupts
4935 ± 14% -17.6% 4064 ± 10% interrupts.CPU119.CAL:Function_call_interrupts
4926 ± 13% -12.0% 4332 interrupts.CPU120.CAL:Function_call_interrupts
4838 ± 32% +34.0% 6483 ± 21% interrupts.CPU122.NMI:Non-maskable_interrupts
4838 ± 32% +34.0% 6483 ± 21% interrupts.CPU122.PMI:Performance_monitoring_interrupts
4913 ± 14% -17.4% 4058 ± 8% interrupts.CPU123.CAL:Function_call_interrupts
4907 ± 13% -11.7% 4330 interrupts.CPU124.CAL:Function_call_interrupts
14843 ± 3% -10.8% 13246 ± 3% interrupts.CPU126.RES:Rescheduling_interrupts
15606 -17.6% 12854 ± 4% interrupts.CPU130.RES:Rescheduling_interrupts
15198 ± 8% -14.3% 13028 interrupts.CPU131.RES:Rescheduling_interrupts
4902 ± 14% -12.6% 4286 interrupts.CPU134.CAL:Function_call_interrupts
4878 ± 14% -12.2% 4285 interrupts.CPU14.CAL:Function_call_interrupts
4945 ± 14% -13.1% 4297 interrupts.CPU140.CAL:Function_call_interrupts
15827 ± 4% -13.6% 13669 ± 5% interrupts.CPU141.RES:Rescheduling_interrupts
4786 ± 12% -14.4% 4097 ± 7% interrupts.CPU18.CAL:Function_call_interrupts
15216 ± 12% -15.3% 12883 ± 2% interrupts.CPU18.RES:Rescheduling_interrupts
15525 ± 5% -15.0% 13200 ± 2% interrupts.CPU19.RES:Rescheduling_interrupts
14771 ± 3% -7.8% 13620 interrupts.CPU2.RES:Rescheduling_interrupts
15066 ± 7% -10.6% 13468 ± 5% interrupts.CPU20.RES:Rescheduling_interrupts
4822 ± 10% -9.7% 4352 interrupts.CPU21.CAL:Function_call_interrupts
4908 ± 14% -12.3% 4305 interrupts.CPU26.CAL:Function_call_interrupts
4944 ± 31% +54.2% 7623 ± 5% interrupts.CPU27.NMI:Non-maskable_interrupts
4944 ± 31% +54.2% 7623 ± 5% interrupts.CPU27.PMI:Performance_monitoring_interrupts
4947 ± 13% -14.2% 4246 ± 4% interrupts.CPU3.CAL:Function_call_interrupts
4031 ± 5% +73.1% 6977 ± 22% interrupts.CPU3.NMI:Non-maskable_interrupts
4031 ± 5% +73.1% 6977 ± 22% interrupts.CPU3.PMI:Performance_monitoring_interrupts
15219 ± 4% -10.9% 13568 ± 5% interrupts.CPU3.RES:Rescheduling_interrupts
4849 ± 32% +54.9% 7510 ± 8% interrupts.CPU30.NMI:Non-maskable_interrupts
4849 ± 32% +54.9% 7510 ± 8% interrupts.CPU30.PMI:Performance_monitoring_interrupts
15365 ± 5% -10.0% 13833 ± 6% interrupts.CPU33.RES:Rescheduling_interrupts
4758 ± 21% +63.9% 7797 interrupts.CPU5.NMI:Non-maskable_interrupts
4758 ± 21% +63.9% 7797 interrupts.CPU5.PMI:Performance_monitoring_interrupts
4937 ± 14% -12.5% 4321 interrupts.CPU56.CAL:Function_call_interrupts
4932 ± 14% -12.0% 4340 interrupts.CPU58.CAL:Function_call_interrupts
4935 ± 14% -11.9% 4347 interrupts.CPU60.CAL:Function_call_interrupts
4836 ± 32% +35.3% 6542 ± 21% interrupts.CPU60.NMI:Non-maskable_interrupts
4836 ± 32% +35.3% 6542 ± 21% interrupts.CPU60.PMI:Performance_monitoring_interrupts
4867 ± 14% -12.5% 4260 interrupts.CPU62.CAL:Function_call_interrupts
4922 ± 14% -12.0% 4333 interrupts.CPU64.CAL:Function_call_interrupts
15118 ± 7% -14.0% 13008 ± 8% interrupts.CPU64.RES:Rescheduling_interrupts
4922 ± 13% -12.1% 4329 interrupts.CPU65.CAL:Function_call_interrupts
15324 ± 9% -12.5% 13415 ± 3% interrupts.CPU67.RES:Rescheduling_interrupts
4884 ± 14% -13.0% 4248 interrupts.CPU71.CAL:Function_call_interrupts
4890 ± 14% -11.8% 4311 interrupts.CPU77.CAL:Function_call_interrupts
4889 ± 13% -11.4% 4330 interrupts.CPU80.CAL:Function_call_interrupts
14898 ± 3% -9.4% 13504 ± 2% interrupts.CPU80.RES:Rescheduling_interrupts
15793 ± 12% -13.6% 13651 ± 3% interrupts.CPU83.RES:Rescheduling_interrupts
14835 ± 3% -9.2% 13466 ± 3% interrupts.CPU85.RES:Rescheduling_interrupts
4831 ± 32% +41.0% 6809 ± 15% interrupts.CPU86.NMI:Non-maskable_interrupts
4831 ± 32% +41.0% 6809 ± 15% interrupts.CPU86.PMI:Performance_monitoring_interrupts
15141 ± 11% -14.7% 12921 ± 2% interrupts.CPU91.RES:Rescheduling_interrupts
4919 ± 13% -12.0% 4328 interrupts.CPU94.CAL:Function_call_interrupts
15256 ± 11% -10.1% 13721 ± 6% interrupts.CPU94.RES:Rescheduling_interrupts
15869 ± 7% -13.2% 13771 ± 4% interrupts.CPU96.RES:Rescheduling_interrupts
4919 ± 13% -12.3% 4316 interrupts.CPU97.CAL:Function_call_interrupts
4.50 ± 3% -0.3 4.23 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.execve
4.50 ± 3% -0.3 4.23 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
4.48 ± 3% -0.3 4.22 perf-profile.calltrace.cycles-pp.do_execveat_common.sys_execve.do_syscall_64.entry_SYSCALL_64_after_hwframe.execve
4.55 ± 3% -0.2 4.35 perf-profile.calltrace.cycles-pp._do_fork.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.45 ± 2% -0.1 2.31 perf-profile.calltrace.cycles-pp.__libc_fork
1.67 ± 2% -0.1 1.55 ± 2% perf-profile.calltrace.cycles-pp.free_pgtables.exit_mmap.mmput.flush_old_exec.load_elf_binary
1.22 ± 6% -0.1 1.13 ± 6% perf-profile.calltrace.cycles-pp.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
2.17 -0.1 2.10 perf-profile.calltrace.cycles-pp.filemap_map_pages.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
1.60 -0.1 1.53 perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.69 ± 4% -0.1 0.62 ± 2% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.sync
0.69 ± 4% -0.1 0.63 ± 3% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.sync
0.68 ± 4% -0.1 0.62 ± 2% perf-profile.calltrace.cycles-pp.sys_sync.do_syscall_64.entry_SYSCALL_64_after_hwframe.sync
0.62 ± 4% -0.1 0.56 ± 3% perf-profile.calltrace.cycles-pp.iterate_supers.sys_sync.do_syscall_64.entry_SYSCALL_64_after_hwframe.sync
0.92 ± 2% -0.0 0.88 perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.71 ± 3% -0.0 0.67 perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe.setlocale
0.72 -0.0 0.68 ± 3% perf-profile.calltrace.cycles-pp.rcu_process_callbacks.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
0.66 ± 5% +0.0 0.71 ± 2% perf-profile.calltrace.cycles-pp.__pagevec_lru_add_fn.pagevec_lru_move_fn.__lru_cache_add.__handle_mm_fault.handle_mm_fault
1.23 ± 3% +0.1 1.30 ± 2% perf-profile.calltrace.cycles-pp.__lru_cache_add.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
1.06 ± 4% +0.1 1.16 perf-profile.calltrace.cycles-pp.__perf_sw_event.__do_page_fault.do_page_fault.page_fault
1.50 ± 3% +0.1 1.61 ± 4% perf-profile.calltrace.cycles-pp.task_tick_fair.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer
1.89 +0.1 2.02 ± 2% perf-profile.calltrace.cycles-pp.syscall_return_via_sysret
12.85 +0.2 13.05 perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
13.47 +0.2 13.67 perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
16.66 +0.2 16.91 perf-profile.calltrace.cycles-pp.page_fault
16.49 +0.3 16.75 perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
16.46 +0.3 16.72 perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault
0.26 ±100% +0.3 0.54 ± 3% perf-profile.calltrace.cycles-pp.selinux_vm_enough_memory.security_vm_enough_memory_mm.do_brk_flags.sys_brk.do_syscall_64
0.00 +0.5 0.51 ± 2% perf-profile.calltrace.cycles-pp.__xstat64
1.24 ± 16% -0.3 0.90 ± 10% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
4.98 ± 2% -0.2 4.80 perf-profile.children.cycles-pp.unmap_vmas
1.90 ± 4% -0.1 1.76 ± 5% perf-profile.children.cycles-pp.__softirqentry_text_start
2.45 ± 2% -0.1 2.31 perf-profile.children.cycles-pp.__libc_fork
2.38 -0.1 2.27 perf-profile.children.cycles-pp.do_mmap
2.73 -0.1 2.63 perf-profile.children.cycles-pp.free_pgtables
2.10 -0.1 2.00 perf-profile.children.cycles-pp.mmap_region
2.54 -0.1 2.45 perf-profile.children.cycles-pp.vm_mmap_pgoff
0.74 ± 3% -0.1 0.66 ± 3% perf-profile.children.cycles-pp.wake_up_new_task
2.11 -0.1 2.03 perf-profile.children.cycles-pp.path_openat
2.12 -0.1 2.04 perf-profile.children.cycles-pp.do_filp_open
0.69 ± 3% -0.1 0.62 ± 2% perf-profile.children.cycles-pp.sys_sync
0.36 ± 4% -0.1 0.30 ± 5% perf-profile.children.cycles-pp.update_rq_clock
0.62 ± 4% -0.1 0.57 ± 3% perf-profile.children.cycles-pp.iterate_supers
0.15 ± 6% -0.0 0.11 ± 9% perf-profile.children.cycles-pp.mark_page_accessed
0.09 ± 11% -0.0 0.06 ± 13% perf-profile.children.cycles-pp.alloc_vmap_area
0.78 ± 2% -0.0 0.75 perf-profile.children.cycles-pp.lookup_fast
0.11 ± 6% -0.0 0.08 ± 17% perf-profile.children.cycles-pp.__get_vm_area_node
0.15 ± 7% -0.0 0.12 ± 5% perf-profile.children.cycles-pp.__vunmap
0.15 ± 5% -0.0 0.12 ± 8% perf-profile.children.cycles-pp.__d_alloc
0.15 ± 5% -0.0 0.13 ± 5% perf-profile.children.cycles-pp.free_work
0.31 ± 3% -0.0 0.29 perf-profile.children.cycles-pp.__update_load_avg_se
0.10 ± 7% +0.0 0.12 ± 7% perf-profile.children.cycles-pp.generic_permission
0.10 ± 8% +0.0 0.12 ± 6% perf-profile.children.cycles-pp.percpu_counter_add_batch
0.10 ± 14% +0.0 0.14 ± 10% perf-profile.children.cycles-pp.__do_fault
0.47 ± 3% +0.0 0.51 ± 2% perf-profile.children.cycles-pp.__xstat64
0.01 ±173% +0.0 0.05 ± 9% perf-profile.children.cycles-pp.__alloc_fd
0.68 +0.0 0.73 ± 3% perf-profile.children.cycles-pp.sys_read
0.18 ± 4% +0.1 0.23 ± 16% perf-profile.children.cycles-pp.get_mem_cgroup_from_mm
0.41 ± 6% +0.1 0.48 ± 8% perf-profile.children.cycles-pp.mem_cgroup_try_charge
2.12 ± 3% +0.1 2.20 perf-profile.children.cycles-pp.vfs_statx
1.60 ± 2% +0.1 1.70 ± 3% perf-profile.children.cycles-pp.pagevec_lru_move_fn
1.64 ± 2% +0.1 1.75 ± 3% perf-profile.children.cycles-pp.task_tick_fair
1.21 ± 4% +0.1 1.31 perf-profile.children.cycles-pp.__perf_sw_event
0.35 ± 9% +0.1 0.46 ± 4% perf-profile.children.cycles-pp.wait_consider_task
2.10 ± 2% +0.1 2.22 perf-profile.children.cycles-pp.syscall_return_via_sysret
3.41 ± 2% +0.1 3.54 perf-profile.children.cycles-pp.tlb_flush_mmu_free
0.75 ± 4% +0.1 0.89 ± 4% perf-profile.children.cycles-pp.SYSC_wait4
0.75 ± 4% +0.1 0.89 ± 4% perf-profile.children.cycles-pp.kernel_wait4
0.70 ± 5% +0.1 0.84 ± 5% perf-profile.children.cycles-pp.do_wait
5.25 +0.2 5.42 perf-profile.children.cycles-pp.alloc_pages_vma
19.59 +0.2 19.82 perf-profile.children.cycles-pp.do_page_fault
1.23 ± 16% -0.3 0.90 ± 10% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
3.01 -0.1 2.90 perf-profile.self.cycles-pp.unmap_page_range
0.32 ± 6% -0.1 0.26 ± 6% perf-profile.self.cycles-pp.update_rq_clock
0.19 ± 13% -0.0 0.15 ± 7% perf-profile.self.cycles-pp.scheduler_tick
1.20 ± 2% -0.0 1.16 perf-profile.self.cycles-pp._dl_addr
0.84 -0.0 0.81 perf-profile.self.cycles-pp.copy_page_range
0.15 ± 7% -0.0 0.11 ± 9% perf-profile.self.cycles-pp.mark_page_accessed
0.09 -0.0 0.07 ± 16% perf-profile.self.cycles-pp.__d_alloc
0.15 ± 3% -0.0 0.14 ± 6% perf-profile.self.cycles-pp.lru_cache_add_active_or_unevictable
0.22 ± 8% +0.0 0.26 ± 5% perf-profile.self.cycles-pp.__perf_sw_event
0.18 ± 4% +0.0 0.23 ± 16% perf-profile.self.cycles-pp.get_mem_cgroup_from_mm
0.07 +0.1 0.12 ± 4% perf-profile.self.cycles-pp.queued_write_lock_slowpath
0.69 ± 5% +0.1 0.76 ± 2% perf-profile.self.cycles-pp.__pagevec_lru_add_fn
0.69 ± 7% +0.1 0.78 ± 4% perf-profile.self.cycles-pp.__vma_adjust
0.05 ± 62% +0.1 0.14 ± 20% perf-profile.self.cycles-pp.wait_consider_task
2.10 ± 2% +0.1 2.22 perf-profile.self.cycles-pp.syscall_return_via_sysret
***************************************************************************************************
lkp-bdw-ep6: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
=========================================================================================
class/compiler/cpufreq_governor/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime:
pipe/gcc-7/performance/x86_64-rhel-7.2/100%/debian-x86_64-2016-08-31.cgz/lkp-bdw-ep6/stress-ng/1s
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
%stddev %change %stddev
\ | \
3463075 ± 10% +76.2% 6101037 ± 32% stress-ng.fifo.ops
3460753 ± 10% +76.2% 6096158 ± 32% stress-ng.fifo.ops_per_sec
22252433 ± 4% -41.9% 12934864 ± 38% cpuidle.C1.time
1240735 ± 5% -41.6% 724182 ± 38% cpuidle.C1.usage
9537 ± 12% +72.0% 16402 ± 19% sched_debug.cpu.nr_switches.max
1567 ± 5% +28.0% 2006 ± 11% sched_debug.cpu.nr_switches.stddev
1239038 ± 5% -41.7% 722719 ± 38% turbostat.C1
3.32 ± 5% -1.4 1.89 ± 41% turbostat.C1%
696934 ± 3% +7.9% 751814 ± 6% turbostat.IRQ
2473 ± 9% -25.8% 1834 ± 10% slabinfo.eventpoll_epi.active_objs
2473 ± 9% -25.8% 1834 ± 10% slabinfo.eventpoll_epi.num_objs
4267 ± 9% -25.8% 3165 ± 10% slabinfo.eventpoll_pwq.active_objs
4267 ± 9% -25.8% 3165 ± 10% slabinfo.eventpoll_pwq.num_objs
316.70 ± 42% +286.3% 1223 ± 88% interrupts.CPU1.RES:Rescheduling_interrupts
214.30 ± 35% +225.2% 697.00 ± 67% interrupts.CPU14.RES:Rescheduling_interrupts
249.60 ± 36% +241.5% 852.30 ± 82% interrupts.CPU15.RES:Rescheduling_interrupts
280.70 ± 45% +166.1% 746.90 ± 68% interrupts.CPU17.RES:Rescheduling_interrupts
290.10 ± 31% +178.8% 808.70 ± 88% interrupts.CPU2.RES:Rescheduling_interrupts
221.70 ± 27% +239.7% 753.10 ± 63% interrupts.CPU20.RES:Rescheduling_interrupts
256.40 ± 28% +174.5% 703.90 ± 97% interrupts.CPU49.RES:Rescheduling_interrupts
22379 ± 11% +107.5% 46443 ± 20% interrupts.RES:Rescheduling_interrupts
***************************************************************************************************
lkp-bdw-ep6: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
=========================================================================================
class/compiler/cpufreq_governor/kconfig/nr_threads/rootfs/tbox_group/testcase/testtime:
cpu/gcc-7/performance/x86_64-rhel-7.2/100%/debian-x86_64-2016-08-31.cgz/lkp-bdw-ep6/stress-ng/1s
commit:
24d0c1d6e6 ("sched/fair: Do not migrate due to a sync wakeup on exit")
2c83362734 ("sched/fair: Consider SD_NUMA when selecting the most idle group to schedule on")
24d0c1d6e65f635b 2c83362734dad8e48ccc0710b5c
---------------- ---------------------------
%stddev %change %stddev
\ | \
189975 -25.8% 140873 ± 7% stress-ng.context.ops
190032 -25.9% 140895 ± 7% stress-ng.context.ops_per_sec
180580 -15.3% 152874 ± 9% stress-ng.hsearch.ops
180604 -15.4% 152852 ± 9% stress-ng.hsearch.ops_per_sec
47965 +6.3% 50971 stress-ng.time.involuntary_context_switches
4076749 +6.0% 4319630 stress-ng.time.minor_page_faults
6259 ± 3% -8.8% 5706 ± 2% stress-ng.time.percent_of_cpu_this_job_got
1601 -8.3% 1468 stress-ng.time.user_time
836.00 -17.3% 691.50 ± 3% stress-ng.tsearch.ops
806.28 -17.1% 668.40 ± 3% stress-ng.tsearch.ops_per_sec
103796 ± 48% -49.0% 52979 ± 9% meminfo.AnonHugePages
54.87 ± 3% -6.2 48.67 ± 6% mpstat.cpu.usr%
64134 ± 45% -62.1% 24298 ± 56% numa-meminfo.node0.AnonHugePages
3.66 ±105% -3.7 0.00 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
3.66 ±105% -3.7 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
338.42 ± 4% -10.1% 304.30 sched_debug.cfs_rq:/.util_avg.avg
10253 ± 21% +66.7% 17088 ± 6% sched_debug.cpu.nr_switches.max
1602 ± 7% +25.6% 2013 ± 5% sched_debug.cpu.nr_switches.stddev
2071 ± 15% -18.7% 1683 ± 8% slabinfo.eventpoll_epi.num_objs
3605 ± 15% -19.2% 2912 ± 8% slabinfo.eventpoll_pwq.num_objs
490.00 ± 10% -28.0% 353.00 ± 3% slabinfo.file_lock_cache.num_objs
11628 ± 3% -8.3% 10665 ± 2% slabinfo.kmalloc-96.active_objs
33103068 ± 5% -14.1% 28438786 ± 7% cpuidle.C3.time
141695 ± 5% -8.0% 130380 ± 6% cpuidle.C3.usage
6.907e+08 ± 10% +29.0% 8.912e+08 ± 11% cpuidle.C6.time
642637 ± 11% +20.4% 773937 ± 8% cpuidle.C6.usage
3162428 ± 53% +81.0% 5724968 ± 26% cpuidle.POLL.time
22657 ± 2% -9.2% 20570 softirqs.CPU46.TIMER
22637 ± 3% -7.4% 20972 ± 4% softirqs.CPU5.TIMER
21862 ± 3% -7.9% 20139 ± 2% softirqs.CPU52.TIMER
22165 ± 4% -9.8% 19993 softirqs.CPU54.TIMER
21851 ± 4% -8.6% 19977 ± 2% softirqs.CPU59.TIMER
9771 +2.0% 9966 proc-vmstat.nr_mapped
2767 ± 2% -5.0% 2629 ± 2% proc-vmstat.nr_page_table_pages
1602 ± 13% +188.0% 4614 ± 27% proc-vmstat.numa_hint_faults
379391 ± 2% -38.9% 231826 ± 16% proc-vmstat.numa_pte_updates
4840033 +4.8% 5074161 proc-vmstat.pgfault
2502 ± 80% -81.7% 457.50 ± 7% proc-vmstat.thp_fault_alloc
2011 ± 2% -8.6% 1839 ± 3% turbostat.Avg_MHz
1.12 ± 7% -0.2 0.95 ± 11% turbostat.C3%
644225 ± 10% +20.2% 774089 ± 8% turbostat.C6
23.43 ± 9% +6.3 29.72 ± 7% turbostat.C6%
12.89 ± 10% +17.2% 15.10 ± 7% turbostat.CPU%c1
14.33 ± 6% +28.9% 18.47 ± 11% turbostat.CPU%c6
220.42 -4.7% 210.07 ± 2% turbostat.PkgWatt
1.206e+11 ± 7% -22.6% 9.329e+10 ± 9% perf-stat.branch-instructions
4.75 ± 5% +0.5 5.25 perf-stat.branch-miss-rate%
5.708e+09 -14.0% 4.908e+09 ± 10% perf-stat.branch-misses
1.676e+09 ± 19% -24.5% 1.266e+09 ± 4% perf-stat.cache-references
1.017e+12 ± 3% -16.3% 8.519e+11 ± 3% perf-stat.cpu-cycles
0.02 ± 3% +0.0 0.03 ± 5% perf-stat.dTLB-load-miss-rate%
1.196e+11 ± 5% -16.7% 9.964e+10 ± 5% perf-stat.dTLB-loads
0.01 ± 4% +0.0 0.01 ± 4% perf-stat.dTLB-store-miss-rate%
5273411 ± 5% +9.0% 5746770 ± 3% perf-stat.dTLB-store-misses
4.876e+10 ± 8% -21.0% 3.853e+10 ± 4% perf-stat.dTLB-stores
42.05 ± 2% -4.0 38.06 perf-stat.iTLB-load-miss-rate%
2.264e+08 ± 3% -31.9% 1.542e+08 ± 7% perf-stat.iTLB-load-misses
3.119e+08 -19.6% 2.508e+08 ± 6% perf-stat.iTLB-loads
6.53e+11 ± 6% -19.6% 5.253e+11 ± 5% perf-stat.instructions
13642943 ± 9% -14.0% 11738663 ± 4% perf-stat.node-load-misses
3805381 ± 34% -32.1% 2584140 ± 8% perf-stat.node-stores
30946 ± 6% -15.8% 26047 ± 7% interrupts.CPU0.LOC:Local_timer_interrupts
1326 ± 8% +22.0% 1618 ± 12% interrupts.CPU1.RES:Rescheduling_interrupts
271.50 ± 4% -9.8% 245.00 ± 5% interrupts.CPU12.RES:Rescheduling_interrupts
31204 ± 4% -13.8% 26901 ± 3% interrupts.CPU13.LOC:Local_timer_interrupts
407.50 ± 25% -41.7% 237.75 ± 17% interrupts.CPU13.RES:Rescheduling_interrupts
37.75 ± 61% +341.7% 166.75 ± 68% interrupts.CPU14.34:IR-PCI-MSI.1572865-edge.eth0-TxRx-0
527.00 ± 24% +45.3% 765.75 ± 13% interrupts.CPU2.RES:Rescheduling_interrupts
359.50 ± 14% +67.1% 600.75 ± 18% interrupts.CPU22.RES:Rescheduling_interrupts
350.50 ± 25% -42.4% 201.75 ± 3% interrupts.CPU27.RES:Rescheduling_interrupts
318.25 ± 23% +90.7% 606.75 ± 25% interrupts.CPU3.RES:Rescheduling_interrupts
6321 ± 7% +36.9% 8655 ± 4% interrupts.CPU30.CAL:Function_call_interrupts
3534 ± 13% +74.1% 6154 ± 6% interrupts.CPU30.TLB:TLB_shootdowns
287.75 ± 11% -28.7% 205.25 ± 6% interrupts.CPU31.RES:Rescheduling_interrupts
326.75 ± 13% -26.9% 239.00 ± 16% interrupts.CPU32.RES:Rescheduling_interrupts
265.25 ± 7% -22.9% 204.50 ± 14% interrupts.CPU35.RES:Rescheduling_interrupts
6704 ± 13% +21.5% 8146 ± 7% interrupts.CPU36.CAL:Function_call_interrupts
3893 ± 20% +46.3% 5696 ± 8% interrupts.CPU36.TLB:TLB_shootdowns
6798 ± 18% +25.9% 8555 ± 5% interrupts.CPU37.CAL:Function_call_interrupts
273.00 ± 19% -30.1% 190.75 ± 11% interrupts.CPU37.RES:Rescheduling_interrupts
4075 ± 29% +48.7% 6058 ± 7% interrupts.CPU37.TLB:TLB_shootdowns
303.75 ± 12% -38.5% 186.75 ± 22% interrupts.CPU41.RES:Rescheduling_interrupts
6346 ± 9% +46.8% 9315 ± 12% interrupts.CPU42.CAL:Function_call_interrupts
3690 ± 15% +82.6% 6736 ± 20% interrupts.CPU42.TLB:TLB_shootdowns
255.75 ± 10% -37.8% 159.00 ± 12% interrupts.CPU43.RES:Rescheduling_interrupts
7579 ± 9% -25.8% 5620 ± 33% interrupts.CPU50.CAL:Function_call_interrupts
398.75 ± 47% -50.3% 198.00 ± 22% interrupts.CPU51.RES:Rescheduling_interrupts
31904 ± 5% -13.3% 27660 ± 9% interrupts.CPU52.LOC:Local_timer_interrupts
7487 ± 11% -19.2% 6048 ± 9% interrupts.CPU54.CAL:Function_call_interrupts
4747 ± 18% -26.8% 3474 ± 16% interrupts.CPU54.TLB:TLB_shootdowns
7486 ± 17% -30.7% 5187 ± 11% interrupts.CPU57.CAL:Function_call_interrupts
270.75 ± 11% -16.9% 225.00 ± 11% interrupts.CPU57.RES:Rescheduling_interrupts
4782 ± 28% -46.9% 2537 ± 31% interrupts.CPU57.TLB:TLB_shootdowns
238.25 ± 7% +12.7% 268.50 ± 5% interrupts.CPU63.RES:Rescheduling_interrupts
286.75 ± 10% -29.6% 202.00 ± 10% interrupts.CPU64.RES:Rescheduling_interrupts
290.25 ± 14% -41.3% 170.25 ± 29% interrupts.CPU65.RES:Rescheduling_interrupts
6570 ± 31% +30.0% 8538 ± 4% interrupts.CPU66.CAL:Function_call_interrupts
3829 ± 58% +60.1% 6130 ± 7% interrupts.CPU66.TLB:TLB_shootdowns
267.75 ± 18% -34.9% 174.25 ± 22% interrupts.CPU67.RES:Rescheduling_interrupts
257.00 ± 12% -30.4% 178.75 ± 20% interrupts.CPU68.RES:Rescheduling_interrupts
4377 ± 28% +47.4% 6450 ± 17% interrupts.CPU68.TLB:TLB_shootdowns
244.50 ± 17% -22.4% 189.75 interrupts.CPU69.RES:Rescheduling_interrupts
4263 ± 30% +41.9% 6050 ± 7% interrupts.CPU69.TLB:TLB_shootdowns
262.50 ± 17% -28.9% 186.75 ± 9% interrupts.CPU71.RES:Rescheduling_interrupts
6230 ± 10% +43.2% 8922 ± 12% interrupts.CPU72.CAL:Function_call_interrupts
30763 ± 6% -11.1% 27340 ± 7% interrupts.CPU72.LOC:Local_timer_interrupts
3562 ± 18% +84.8% 6584 ± 19% interrupts.CPU72.TLB:TLB_shootdowns
259.25 ± 2% -19.7% 208.25 ± 10% interrupts.CPU74.RES:Rescheduling_interrupts
237.75 ± 21% -24.5% 179.50 ± 11% interrupts.CPU76.RES:Rescheduling_interrupts
297.25 ± 29% -40.2% 177.75 ± 9% interrupts.CPU79.RES:Rescheduling_interrupts
271.75 ± 14% -33.0% 182.00 ± 10% interrupts.CPU80.RES:Rescheduling_interrupts
221.25 ± 18% -23.1% 170.25 ± 13% interrupts.CPU81.RES:Rescheduling_interrupts
323.75 ± 25% -34.7% 211.50 ± 9% interrupts.CPU86.RES:Rescheduling_interrupts
308.00 ± 17% -26.2% 227.25 ± 31% interrupts.CPU87.RES:Rescheduling_interrupts
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
6d571f333f ("tty/serial: Add a serial port simulator"): BUG: spinlock bad magic on CPU#0, ksoftirqd/0/7
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://github.com/cminyard/linux-ipmi serialsim
commit 6d571f333f663ea940d0bdce381bdbed6a202260
Author: Corey Minyard <cminyard(a)mvista.com>
AuthorDate: Thu Feb 28 09:32:32 2019 -0600
Commit: Corey Minyard <cminyard(a)mvista.com>
CommitDate: Thu Feb 28 12:52:52 2019 -0600
tty/serial: Add a serial port simulator
This creates simulated serial ports, both as echo devices and pipe
devices. The driver reasonably approximates the serial port speed
and simulates some modem control lines. It allows error injection
and direct control of modem control lines.
Signed-off-by: Corey Minyard <cminyard(a)mvista.com>
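For readers unfamiliar with the driver, a hedged userspace sketch of what the new ports look like in practice. The device names are taken from the boot log below; the termios raw-mode setup a real program would need is omitted, and none of this is the driver's own code:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char buf[16];
	int fd = open("/dev/ttyEcho0", O_RDWR | O_NOCTTY);

	if (fd < 0)
		return 1;
	/* An echo device loops written data straight back; the pipe
	 * devices (ttyPipeA<n> and ttyPipeB<n>) connect two ports instead. */
	write(fd, "ping", 4);
	ssize_t n = read(fd, buf, sizeof(buf));
	if (n > 0)
		printf("echoed %zd bytes\n", n);
	close(fd);
	return 0;
}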
df3865f8f5 Merge branch 'i2c/for-current' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux
6d571f333f tty/serial: Add a serial port simulator
e891860b50 tty/serial: Add a serial port simulator
+-------------------------------------------------------+------------+------------+------------+
| | df3865f8f5 | 6d571f333f | e891860b50 |
+-------------------------------------------------------+------------+------------+------------+
| boot_successes | 20 | 0 | 0 |
| boot_failures | 13 | 11 | 11 |
| BUG:kernel_hang_in_boot-around-mounting-root_stage | 11 | | |
| BUG:kernel_timeout_in_boot-around-mounting-root_stage | 2 | | |
| BUG:spinlock_bad_magic_on_CPU | 0 | 11 | 11 |
+-------------------------------------------------------+------------+------------+------------+
[ 1.864058] ttyEcho0 at I/O 0x1 (irq = 0, base_baud = 0) is a SerialEcho
[ 1.864984] ttyEcho1 at I/O 0x1 (irq = 0, base_baud = 0) is a SerialEcho
[ 1.865921] ttyEcho2 at I/O 0x1 (irq = 0, base_baud = 0) is a SerialEcho
[ 1.866851] ttyEcho3 at I/O 0x1 (irq = 0, base_baud = 0) is a SerialEcho
[ 1.867796] ttyPipeA0 at I/O 0x1 (irq = 0, base_baud = 0) is a SerialPipeA
[ 1.868742] BUG: spinlock bad magic on CPU#0, ksoftirqd/0/7
[ 1.869496] lock: 0xffff8880003162d8, .magic: 00000000, .owner: <none>/-1, .owner_cpu: 0
[ 1.869893] CPU: 0 PID: 7 Comm: ksoftirqd/0 Tainted: G T 5.0.0-rc5-00359-g6d571f3 #2
[ 1.869893] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 1.869893] Call Trace:
[ 1.869893] do_raw_spin_lock+0x25/0x90
[ 1.869893] serialsim_null_modem_lock_irq+0xc/0x60
[ 1.869893] mctrl_tasklet+0xc/0x30
[ 1.869893] tasklet_action_common+0x49/0x90
[ 1.869893] __do_softirq+0x143/0x30a
[ 1.869893] ? smpboot_thread_fn+0x2e/0x190
[ 1.869893] ? smpboot_thread_fn+0x11b/0x190
[ 1.869893] ? smpboot_thread_fn+0xee/0x190
[ 1.869893] run_ksoftirqd+0x17/0x40
[ 1.869893] smpboot_thread_fn+0x172/0x190
[ 1.869893] kthread+0x117/0x120
[ 1.869893] ? sort_range+0x30/0x30
[ 1.869893] ? kthread_delayed_work_timer_fn+0x80/0x80
[ 1.869893] ret_from_fork+0x24/0x30
[ 1.881499] ttyPipeB0 at I/O 0x1 (irq = 0, base_baud = 0) is a SerialPipeB
[ 1.882466] ttyPipeA1 at I/O 0x1 (irq = 0, base_baud = 0) is a SerialPipeA
[ 1.883423] ttyPipeB1 at I/O 0x1 (irq = 0, base_baud = 0) is a SerialPipeB
[ 1.884377] ttyPipeA2 at I/O 0x1 (irq = 0, base_baud = 0) is a SerialPipeA
[ 1.885343] ttyPipeB2 at I/O 0x1 (irq = 0, base_baud = 0) is a SerialPipeB
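The ".magic: 00000000" in the splat is the telltale of a spinlock taken before spin_lock_init() ever ran: with CONFIG_DEBUG_SPINLOCK, do_raw_spin_lock() checks a magic value that only initialization sets, and here it is still zero while the mctrl tasklet is already running. A minimal sketch of the usual fix, with hypothetical names standing in for the real serialsim structures:

#include <linux/interrupt.h>
#include <linux/spinlock.h>

/* Illustrative stand-ins, not the actual serialsim code. */
struct serialsim_intf {
	spinlock_t mctrl_lock;               /* guards simulated modem lines */
	struct tasklet_struct mctrl_tasklet;
};

static void mctrl_tasklet_fn(unsigned long data)
{
	struct serialsim_intf *intf = (struct serialsim_intf *)data;
	unsigned long flags;

	spin_lock_irqsave(&intf->mctrl_lock, flags);
	/* ... update simulated modem-control lines ... */
	spin_unlock_irqrestore(&intf->mctrl_lock, flags);
}

static int serialsim_setup_one(struct serialsim_intf *intf)
{
	/* Must run before the tasklet can be scheduled; this is also
	 * what sets the .magic field the debug check looks for. */
	spin_lock_init(&intf->mctrl_lock);
	tasklet_init(&intf->mctrl_tasklet, mctrl_tasklet_fn,
		     (unsigned long)intf);
	return 0;
}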
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 0a894d80d8b8215bd1647f9de146afd05bddb3c2 5908e6b738e3357af42c10e1183753c70a0117a9 --
git bisect good ee28ad49620fd13b8b11fe35b92ab37eeff837fe # 19:20 G 11 0 6 6 Merge 'linux-review/Alexandru-Ardelean/dmaengine-axi-dmac-extend-support-for-ZynqMP-arch/20190301-051152' into devel-catchup-201903011633
git bisect bad 3561708fe91c3d120f72f835881038febcae7c40 # 19:43 B 0 11 26 0 Merge 'jpirko-mlxsw/net_next_queue' into devel-catchup-201903011633
git bisect bad e7d88c14e6afad2f171dc126431951411396b3e6 # 20:20 B 0 6 21 0 Merge 'linux-review/Ezequiel-Garcia/gspca-Check-device-presence-before-accessing-hardware/20190301-025824' into devel-catchup-201903011633
git bisect good e8b5675d08177137c40dbb379066749753f085bc # 21:09 G 10 0 0 0 Merge 'linux-review/Andrew-Lunn/net-dsa-mv88e6xxx-Fix-u64-statistics/20190301-035622' into devel-catchup-201903011633
git bisect good 51178497f247f9bcbf09f58eea8bb95beb83d392 # 21:56 G 11 0 5 5 Merge 'linux-review/David-Ahern/selftests-rtnetlink-use-internal-netns-switch-for-ip-commands/20190301-033905' into devel-catchup-201903011633
git bisect good 16abfeee0bcd7f593aa49a84ac0d249ab0ddc8f5 # 22:37 G 10 0 0 0 Merge 'linux-review/Stanislaw-Gruszka/mt76x02-fix-hdr-pointer-in-write-txwi-for-USB/20190301-031418' into devel-catchup-201903011633
git bisect bad 969651c455541239c9a9994b75cd05824f82ef5e # 23:18 B 0 11 26 0 Merge 'ipmi/serialsim' into devel-catchup-201903011633
git bisect bad 6d571f333f663ea940d0bdce381bdbed6a202260 # 00:14 B 0 11 26 0 tty/serial: Add a serial port simulator
# first bad commit: [6d571f333f663ea940d0bdce381bdbed6a202260] tty/serial: Add a serial port simulator
git bisect good df3865f8f56879b7e9f0ca47fa7bc5f2252df6d3 # 00:56 G 33 0 13 13 Merge branch 'i2c/for-current' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux
# extra tests with debug options
git bisect bad 6d571f333f663ea940d0bdce381bdbed6a202260 # 01:20 B 0 10 25 0 tty/serial: Add a serial port simulator
# extra tests on HEAD of linux-devel/devel-catchup-201903011633
git bisect bad 0a894d80d8b8215bd1647f9de146afd05bddb3c2 # 01:20 B 0 13 31 0 0day head guard for 'devel-catchup-201903011633'
# extra tests on tree/branch ipmi/serialsim
git bisect bad e891860b503937d59412294343e86c81ccd25f0a # 02:03 B 0 11 26 0 tty/serial: Add a serial port simulator
# extra tests with first bad commit reverted
git bisect good 801094972f44e9a6bbfd645593311d5d187921f5 # 02:23 G 11 0 6 6 Revert "tty/serial: Add a serial port simulator"
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/lkp Intel Corporation
[platform/x86] 5b4b72cb33: WARNING:Could_NOT_find_tracepoint_structs_for_some_tracepoints
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 5b4b72cb33f57ece6e4ce5909a1f1292d58c649b ("platform/x86: add sep and socwatch drivers without socperf.")
2019-01-12 03:41:41
in testcase: trinity
with the following parameters:
runtime: 300s
test-description: Trinity is a Linux system call fuzz tester.
test-url: http://codemonkey.org.uk/projects/trinity/
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 3G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+----------------------------------------------------------------+----------+------------+
| | v5.0-rc1 | 5b4b72cb33 |
+----------------------------------------------------------------+----------+------------+
| boot_successes | 65 | 3 |
| boot_failures | 1 | 25 |
| BUG:kernel_in_stage | 1 | 2 |
| WARNING:Could_NOT_find_tracepoint_structs_for_some_tracepoints | 0 | 23 |
+----------------------------------------------------------------+----------+------------+
[ 12.488618] WARNING: Could NOT find tracepoint structs for some tracepoints!
[ 12.490022] -----------------------------------------
[ 12.491091] OK: LOADED SoC Watch Driver
[ 12.491958] -----------------------------------------
[ 12.497743] fake-fmc-carrier: mezzanine 0
[ 12.498675] Manufacturer: fake-vendor
[ 12.499605] Product name: fake-design-for-testing
[ 12.502389] fmc fake-design-for-testing-f001: Driver has no ID: matches all
[ 12.503812] fmc_trivial: probe of fake-design-for-testing-f001 failed with error -95
[ 12.505512] fmc fake-design-for-testing-f001: Driver has no ID: matches all
[ 12.507008] fmc_chardev fake-design-for-testing-f001: Created misc device "fake-design-for-testing-f001"
[ 12.509634] gnss: GNSS driver registered with major 506
[ 12.514266] pktgen: Packet Generator for packet performance testing. Version: 2.75
[ 12.516031] NET: Registered protocol family 26
[ 12.518228] xt_time: kernel timezone is -0000
[ 12.519292] ipip: IPv4 and MPLS over IPv4 tunneling driver
[ 12.520824] gre: GRE over IPv4 demultiplexor driver
[ 12.521879] ip_gre: GRE over IPv4 tunneling driver
[ 12.524078] Initializing XFRM netlink socket
[ 12.526181] NET: Registered protocol family 10
[ 12.529164] Segment Routing with IPv6
[ 12.530240] mip6: Mobile IPv6
[ 12.531333] sit: IPv6, IPv4 and MPLS over IPv4 tunneling driver
[ 12.533425] ip6_gre: GRE over IPv6 tunneling driver
[ 12.535596] bpfilter: Loaded bpfilter_umh pid 212
Started bpfilter
[ 12.537446] NET: Registered protocol family 15
[ 12.538431] NET: Registered protocol family 9
[ 12.539425] X25: Linux Version 0.2
[ 12.541435] NET: Registered protocol family 6
[ 12.545388] NET: Registered protocol family 11
[ 12.546415] NET: Registered protocol family 3
[ 12.547401] can: controller area network core (rev 20170425 abi 9)
[ 12.548845] NET: Registered protocol family 29
[ 12.549839] can: broadcast manager protocol (rev 20170425 t)
[ 12.551527] NET: Registered protocol family 41
[ 12.552608] lec:lane_module_init: lec.c: initialized
[ 12.553680] NET4: DECnet for Linux: V.2.5.68s (C) 1995-2003 Linux DECnet Project Team
[ 12.555737] DECnet: Routing cache hash table of 1024 buckets, 8Kbytes
[ 12.557057] NET: Registered protocol family 12
[ 12.558065] NET: Registered protocol family 35
[ 12.567921] DCCP: Activated CCID 2 (TCP-like)
[ 12.569237] DCCP: Activated CCID 3 (TCP-Friendly Rate Control)
[ 12.571422] tipc: Activated (version 2.0.0)
[ 12.572582] NET: Registered protocol family 30
[ 12.573677] tipc: Started in single node mode
[ 12.574982] NET: Registered protocol family 43
[ 12.576248] 9pnet: Installing 9P2000 support
[ 12.577248] NET: Registered protocol family 37
[ 12.578546] NET: Registered protocol family 36
[ 12.579730] Key type dns_resolver registered
[ 12.580701] Key type ceph registered
[ 12.581883] libceph: loaded (mon/osd proto 15/24)
[ 12.582953] openvswitch: Open vSwitch switching datapath
[ 12.584679] mpls_gso: MPLS GSO support
[ 12.586477] ... APIC ID: 00000000 (0)
[ 12.587398] ... APIC VERSION: 01050014
[ 12.588263] 0000000000000000000000000000000000000000000000000000000000000000
[ 12.588966] 0000000000000000000000000000000000000000000000000000000000000000
[ 12.588966] 0000000000000000000000000000000000000000000000000000000000001000
[ 12.592474] number of MP IRQ sources: 15.
[ 12.593377] number of IO-APIC #0 registers: 24.
[ 12.594365] testing the IO APIC.......................
[ 12.595460] IO APIC #0......
[ 12.596181] .... register #00: 00000000
[ 12.597046] ....... : physical APIC id: 00
[ 12.597999] ....... : Delivery Type: 0
[ 12.598896] ....... : LTS : 0
[ 12.599800] .... register #01: 00170011
[ 12.600673] ....... : max redirection entries: 17
[ 12.601743] ....... : PRQ implemented: 0
[ 12.602693] ....... : IO APIC version: 11
[ 12.603653] .... register #02: 00000000
[ 12.604522] ....... : arbitration: 00
[ 12.605422] .... IRQ redirection table:
[ 12.606294] IOAPIC 0:
[ 12.606918] pin00, disabled, edge , high, V(00), IRR(0), S(0), physical, D(00), M(0)
[ 12.608603] pin01, enabled , edge , high, V(22), IRR(0), S(0), logical , D(01), M(0)
[ 12.610282] pin02, enabled , edge , high, V(30), IRR(0), S(0), logical , D(01), M(0)
[ 12.611967] pin03, disabled, edge , high, V(00), IRR(0), S(0), physical, D(00), M(0)
[ 12.613646] pin04, enabled , edge , high, V(23), IRR(0), S(0), logical , D(01), M(0)
[ 12.615329] pin05, disabled, edge , high, V(00), IRR(0), S(0), physical, D(00), M(0)
[ 12.617014] pin06, disabled, edge , high, V(00), IRR(0), S(0), physical, D(00), M(0)
[ 12.618707] pin07, disabled, edge , high, V(00), IRR(0), S(0), physical, D(00), M(0)
[ 12.620385] pin08, disabled, edge , high, V(00), IRR(0), S(0), physical, D(00), M(0)
[ 12.622063] pin09, enabled , level, high, V(20), IRR(0), S(0), logical , D(01), M(0)
[ 12.623741] pin0a, disabled, edge , high, V(00), IRR(0), S(0), physical, D(00), M(0)
[ 12.625428] pin0b, disabled, edge , high, V(00), IRR(0), S(0), physical, D(00), M(0)
[ 12.627099] pin0c, enabled , edge , high, V(21), IRR(0), S(0), logical , D(01), M(0)
[ 12.628787] pin0d, disabled, edge , high, V(00), IRR(0), S(0), physical, D(00), M(0)
[ 12.630469] pin0e, disabled, edge , high, V(00), IRR(0), S(0), physical, D(00), M(0)
[ 12.632160] pin0f, disabled, edge , high, V(00), IRR(0), S(0), physical, D(00), M(0)
[ 12.633828] pin10, disabled, edge , high, V(00), IRR(0), S(0), physical, D(00), M(0)
[ 12.635518] pin11, disabled, edge , high, V(00), IRR(0), S(0), physical, D(00), M(0)
[ 12.637198] pin12, disabled, edge , high, V(00), IRR(0), S(0), physical, D(00), M(0)
[ 12.638879] pin13, disabled, edge , high, V(00), IRR(0), S(0), physical, D(00), M(0)
[ 12.640549] pin14, disabled, edge , high, V(00), IRR(0), S(0), physical, D(00), M(0)
[ 12.642235] pin15, disabled, edge , high, V(00), IRR(0), S(0), physical, D(00), M(0)
[ 12.643910] pin16, disabled, edge , high, V(00), IRR(0), S(0), physical, D(00), M(0)
[ 12.645614] pin17, disabled, edge , high, V(00), IRR(0), S(0), physical, D(00), M(0)
[ 12.647288] IRQ to pin mappings:
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
eywa
[selftests] 6ea3dfe1e0: kernel_selftests.tpm2.test_smoke.sh.fail
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 6ea3dfe1e0732c5bd3be1e073690b06a83c03c25 ("selftests: add TPM 2.0 tests")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
in testcase: kernel_selftests
with the following parameters:
group: kselftests-03
test-description: The kernel contains a set of "self tests" under the tools/testing/selftests/ directory. These are intended to be small unit tests to exercise individual code paths in the kernel.
test-url: https://www.kernel.org/doc/Documentation/kselftest.txt
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 4G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
KERNEL SELFTESTS: linux_headers_dir is /usr/src/linux-headers-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25
2019-02-27 15:15:21 ln -sf /usr/bin/clang-7 /usr/bin/clang
2019-02-27 15:15:21 ln -sf /usr/bin/llc-7 /usr/bin/llc
2019-02-27 15:15:21 make run_tests -C timers
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/timers'
gcc -O3 -Wl,-no-as-needed -Wall -lrt -lpthread -lm posix_timers.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/timers/posix_timers
gcc -O3 -Wl,-no-as-needed -Wall -lrt -lpthread -lm nanosleep.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/timers/nanosleep
gcc -O3 -Wl,-no-as-needed -Wall -lrt -lpthread -lm nsleep-lat.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/timers/nsleep-lat
gcc -O3 -Wl,-no-as-needed -Wall -lrt -lpthread -lm set-timer-lat.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/timers/set-timer-lat
gcc -O3 -Wl,-no-as-needed -Wall -lrt -lpthread -lm mqueue-lat.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/timers/mqueue-lat
gcc -O3 -Wl,-no-as-needed -Wall -lrt -lpthread -lm inconsistency-check.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/timers/inconsistency-check
gcc -O3 -Wl,-no-as-needed -Wall -lrt -lpthread -lm raw_skew.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/timers/raw_skew
gcc -O3 -Wl,-no-as-needed -Wall -lrt -lpthread -lm threadtest.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/timers/threadtest
gcc -O3 -Wl,-no-as-needed -Wall -lrt -lpthread -lm rtcpie.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/timers/rtcpie
gcc -O3 -Wl,-no-as-needed -Wall -lrt -lpthread -lm alarmtimer-suspend.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/timers/alarmtimer-suspend
gcc -O3 -Wl,-no-as-needed -Wall -lrt -lpthread -lm valid-adjtimex.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/timers/valid-adjtimex
gcc -O3 -Wl,-no-as-needed -Wall -lrt -lpthread -lm adjtick.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/timers/adjtick
gcc -O3 -Wl,-no-as-needed -Wall -lrt -lpthread -lm change_skew.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/timers/change_skew
gcc -O3 -Wl,-no-as-needed -Wall -lrt -lpthread -lm skew_consistency.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/timers/skew_consistency
gcc -O3 -Wl,-no-as-needed -Wall -lrt -lpthread -lm clocksource-switch.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/timers/clocksource-switch
gcc -O3 -Wl,-no-as-needed -Wall -lrt -lpthread -lm freq-step.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/timers/freq-step
gcc -O3 -Wl,-no-as-needed -Wall -lrt -lpthread -lm leap-a-day.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/timers/leap-a-day
gcc -O3 -Wl,-no-as-needed -Wall -lrt -lpthread -lm leapcrash.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/timers/leapcrash
gcc -O3 -Wl,-no-as-needed -Wall -lrt -lpthread -lm set-tai.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/timers/set-tai
gcc -O3 -Wl,-no-as-needed -Wall -lrt -lpthread -lm set-2038.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/timers/set-2038
gcc -O3 -Wl,-no-as-needed -Wall -lrt -lpthread -lm set-tz.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/timers/set-tz
TAP version 13
selftests: timers: posix_timers
========================================
Testing posix timers. False negative may happen on CPU execution
based timers if other threads run on the CPU...
Check itimer virtual... [OK]
Check itimer prof... [OK]
Check itimer real... [OK]
Check timer_create() per thread... [OK]
Check timer_create() per process... [OK]
Pass 0 Fail 0 Xfail 0 Xpass 0 Skip 0 Error 0
1..0
ok 1..1 selftests: timers: posix_timers [PASS]
selftests: timers: nanosleep
========================================
Nanosleep CLOCK_REALTIME [OK]
Nanosleep CLOCK_MONOTONIC [OK]
Nanosleep CLOCK_MONOTONIC_RAW [UNSUPPORTED]
Nanosleep CLOCK_REALTIME_COARSE [UNSUPPORTED]
Nanosleep CLOCK_MONOTONIC_COARSE [UNSUPPORTED]
Nanosleep CLOCK_BOOTTIME [OK]
Nanosleep CLOCK_REALTIME_ALARM [OK]
Nanosleep CLOCK_BOOTTIME_ALARM [OK]
Nanosleep CLOCK_TAI [OK]
Pass 0 Fail 0 Xfail 0 Xpass 0 Skip 0 Error 0
1..0
ok 1..2 selftests: timers: nanosleep [PASS]
selftests: timers: nsleep-lat
========================================
nsleep latency CLOCK_REALTIME [OK]
nsleep latency CLOCK_MONOTONIC [OK]
nsleep latency CLOCK_MONOTONIC_RAW [UNSUPPORTED]
nsleep latency CLOCK_REALTIME_COARSE [UNSUPPORTED]
nsleep latency CLOCK_MONOTONIC_COARSE [UNSUPPORTED]
nsleep latency CLOCK_BOOTTIME [OK]
nsleep latency CLOCK_REALTIME_ALARM [OK]
nsleep latency CLOCK_BOOTTIME_ALARM [OK]
nsleep latency CLOCK_TAI [OK]
Pass 0 Fail 0 Xfail 0 Xpass 0 Skip 0 Error 0
1..0
ok 1..3 selftests: timers: nsleep-lat [PASS]
selftests: timers: set-timer-lat
========================================
Setting timers for every 1 seconds
CLOCK_REALTIME ABSTIME PERIODIC timer fired early: 0 : [OK]
CLOCK_REALTIME ABSTIME PERIODIC max latency: 112099 ns : [OK]
CLOCK_REALTIME RELTIME PERIODIC timer fired early: 0 : [OK]
CLOCK_REALTIME RELTIME PERIODIC max latency: 82088 ns : [OK]
CLOCK_REALTIME ABSTIME ONE-SHOT timer fired early: 0 : [OK]
CLOCK_REALTIME ABSTIME ONE-SHOT max latency: 56827 ns : [OK]
CLOCK_REALTIME ABSTIME ONE-SHOT count: 1 : [OK]
CLOCK_REALTIME RELTIME ONE-SHOT timer fired early: 0 : [OK]
CLOCK_REALTIME RELTIME ONE-SHOT max latency: 73434 ns : [OK]
CLOCK_REALTIME RELTIME ONE-SHOT count: 1 : [OK]
CLOCK_MONOTONIC ABSTIME PERIODIC timer fired early: 0 : [OK]
CLOCK_MONOTONIC ABSTIME PERIODIC max latency: 90264 ns : [OK]
CLOCK_MONOTONIC RELTIME PERIODIC timer fired early: 0 : [OK]
CLOCK_MONOTONIC RELTIME PERIODIC max latency: 90327 ns : [OK]
CLOCK_MONOTONIC ABSTIME ONE-SHOT timer fired early: 0 : [OK]
CLOCK_MONOTONIC ABSTIME ONE-SHOT max latency: 75888 ns : [OK]
CLOCK_MONOTONIC ABSTIME ONE-SHOT count: 1 : [OK]
CLOCK_MONOTONIC RELTIME ONE-SHOT timer fired early: 0 : [OK]
CLOCK_MONOTONIC RELTIME ONE-SHOT max latency: 64950 ns : [OK]
CLOCK_MONOTONIC RELTIME ONE-SHOT count: 1 : [OK]
CLOCK_BOOTTIME ABSTIME PERIODIC timer fired early: 0 : [OK]
CLOCK_BOOTTIME ABSTIME PERIODIC max latency: 83457 ns : [OK]
CLOCK_BOOTTIME RELTIME PERIODIC timer fired early: 0 : [OK]
CLOCK_BOOTTIME RELTIME PERIODIC max latency: 118242 ns : [OK]
CLOCK_BOOTTIME ABSTIME ONE-SHOT timer fired early: 0 : [OK]
CLOCK_BOOTTIME ABSTIME ONE-SHOT max latency: 101604 ns : [OK]
CLOCK_BOOTTIME ABSTIME ONE-SHOT count: 1 : [OK]
CLOCK_BOOTTIME RELTIME ONE-SHOT timer fired early: 0 : [OK]
CLOCK_BOOTTIME RELTIME ONE-SHOT max latency: 64288 ns : [OK]
CLOCK_BOOTTIME RELTIME ONE-SHOT count: 1 : [OK]
CLOCK_REALTIME_ALARM ABSTIME PERIODIC timer fired early: 0 : [OK]
CLOCK_REALTIME_ALARM ABSTIME PERIODIC max latency: 88942 ns : [OK]
CLOCK_REALTIME_ALARM RELTIME PERIODIC timer fired early: 0 : [OK]
CLOCK_REALTIME_ALARM RELTIME PERIODIC max latency: 92606 ns : [OK]
CLOCK_REALTIME_ALARM ABSTIME ONE-SHOT timer fired early: 0 : [OK]
CLOCK_REALTIME_ALARM ABSTIME ONE-SHOT max latency: 79689 ns : [OK]
CLOCK_REALTIME_ALARM ABSTIME ONE-SHOT count: 1 : [OK]
CLOCK_REALTIME_ALARM RELTIME ONE-SHOT timer fired early: 0 : [OK]
CLOCK_REALTIME_ALARM RELTIME ONE-SHOT max latency: 73682 ns : [OK]
CLOCK_REALTIME_ALARM RELTIME ONE-SHOT count: 1 : [OK]
CLOCK_BOOTTIME_ALARM ABSTIME PERIODIC timer fired early: 0 : [OK]
CLOCK_BOOTTIME_ALARM ABSTIME PERIODIC max latency: 111494 ns : [OK]
CLOCK_BOOTTIME_ALARM RELTIME PERIODIC timer fired early: 0 : [OK]
CLOCK_BOOTTIME_ALARM RELTIME PERIODIC max latency: 106944 ns : [OK]
CLOCK_BOOTTIME_ALARM ABSTIME ONE-SHOT timer fired early: 0 : [OK]
CLOCK_BOOTTIME_ALARM ABSTIME ONE-SHOT max latency: 96530 ns : [OK]
CLOCK_BOOTTIME_ALARM ABSTIME ONE-SHOT count: 1 : [OK]
CLOCK_BOOTTIME_ALARM RELTIME ONE-SHOT timer fired early: 0 : [OK]
CLOCK_BOOTTIME_ALARM RELTIME ONE-SHOT max latency: 84931 ns : [OK]
CLOCK_BOOTTIME_ALARM RELTIME ONE-SHOT count: 1 : [OK]
CLOCK_TAI ABSTIME PERIODIC timer fired early: 0 : [OK]
CLOCK_TAI ABSTIME PERIODIC max latency: 96775 ns : [OK]
CLOCK_TAI RELTIME PERIODIC timer fired early: 0 : [OK]
CLOCK_TAI RELTIME PERIODIC max latency: 84091 ns : [OK]
CLOCK_TAI ABSTIME ONE-SHOT timer fired early: 0 : [OK]
CLOCK_TAI ABSTIME ONE-SHOT max latency: 82905 ns : [OK]
CLOCK_TAI ABSTIME ONE-SHOT count: 1 : [OK]
CLOCK_TAI RELTIME ONE-SHOT timer fired early: 0 : [OK]
CLOCK_TAI RELTIME ONE-SHOT max latency: 147295 ns : [OK]
CLOCK_TAI RELTIME ONE-SHOT count: 1 : [OK]
Pass 0 Fail 0 Xfail 0 Xpass 0 Skip 0 Error 0
1..0
ok 1..4 selftests: timers: set-timer-lat [PASS]
selftests: timers: mqueue-lat
========================================
Mqueue latency : [OK]
Pass 0 Fail 0 Xfail 0 Xpass 0 Skip 0 Error 0
1..0
ok 1..5 selftests: timers: mqueue-lat [PASS]
selftests: timers: inconsistency-check
========================================
Consistent CLOCK_REALTIME [OK]
Consistent CLOCK_MONOTONIC [OK]
Consistent CLOCK_PROCESS_CPUTIME_ID [OK]
Consistent CLOCK_THREAD_CPUTIME_ID [OK]
Consistent CLOCK_MONOTONIC_RAW [OK]
Consistent CLOCK_REALTIME_COARSE [OK]
Consistent CLOCK_MONOTONIC_COARSE [OK]
Consistent CLOCK_BOOTTIME [OK]
Consistent CLOCK_REALTIME_ALARM [OK]
Consistent CLOCK_BOOTTIME_ALARM [OK]
Consistent CLOCK_TAI [OK]
Pass 0 Fail 0 Xfail 0 Xpass 0 Skip 0 Error 0
1..0
ok 1..6 selftests: timers: inconsistency-check [PASS]
selftests: timers: raw_skew
========================================
Estimating clock drift: 0.0(est) 0.0(act) [OK]
Pass 0 Fail 0 Xfail 0 Xpass 0 Skip 0 Error 0
1..0
ok 1..7 selftests: timers: raw_skew [PASS]
selftests: timers: threadtest
========================================
Wed, 27 Feb 2019 15:23:51 +0800
Testing consistency with 8 threads for 30 seconds: [OK]
Pass 0 Fail 0 Xfail 0 Xpass 0 Skip 0 Error 0
1..0
ok 1..8 selftests: timers: threadtest [PASS]
selftests: timers: rtcpie
========================================
Periodic IRQ rate is 1024Hz.
Counting 20 interrupts at:
2Hz: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
4Hz: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
8Hz: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
16Hz: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
32Hz: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
64Hz:
PIE delta error: 0.017446 should be close to 0.015625
not ok 1..9 selftests: timers: rtcpie [FAIL]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/timers'
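For reference, the rtcpie failure above is a tolerance miss rather than a crash: at a 64 Hz periodic-interrupt rate the expected mean delta between interrupts is 1/64 s = 0.015625 s, and the measured 0.017446 s is roughly 12% high. In a 2-vCPU qemu guest that much RTC PIE jitter is plausibly scheduler noise rather than anything the commit under test introduced.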
2019-02-27 15:24:40 make run_tests -C tpm2
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/tpm2'
TAP version 13
selftests: tpm2: test_smoke.sh
========================================
test_seal_with_auth (tpm2_tests.SmokeTest) ... ERROR
test_seal_with_policy (tpm2_tests.SmokeTest) ... ERROR
test_seal_with_too_long_auth (tpm2_tests.SmokeTest) ... ERROR
test_too_short_cmd (tpm2_tests.SmokeTest) ... ERROR
test_unseal_with_wrong_auth (tpm2_tests.SmokeTest) ... ERROR
test_unseal_with_wrong_policy (tpm2_tests.SmokeTest) ... ERROR
======================================================================
ERROR: test_seal_with_auth (tpm2_tests.SmokeTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "tpm2_tests.py", line 15, in setUp
self.client = tpm2.Client()
File "tpm2.py", line 360, in __init__
self.tpm = open('/dev/tpm0', 'r+b')
IOError: [Errno 2] No such file or directory: '/dev/tpm0'
======================================================================
ERROR: test_seal_with_policy (tpm2_tests.SmokeTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "tpm2_tests.py", line 15, in setUp
self.client = tpm2.Client()
File "tpm2.py", line 360, in __init__
self.tpm = open('/dev/tpm0', 'r+b')
IOError: [Errno 2] No such file or directory: '/dev/tpm0'
======================================================================
ERROR: test_seal_with_too_long_auth (tpm2_tests.SmokeTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "tpm2_tests.py", line 15, in setUp
self.client = tpm2.Client()
File "tpm2.py", line 360, in __init__
self.tpm = open('/dev/tpm0', 'r+b')
IOError: [Errno 2] No such file or directory: '/dev/tpm0'
======================================================================
ERROR: test_too_short_cmd (tpm2_tests.SmokeTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "tpm2_tests.py", line 15, in setUp
self.client = tpm2.Client()
File "tpm2.py", line 360, in __init__
self.tpm = open('/dev/tpm0', 'r+b')
IOError: [Errno 2] No such file or directory: '/dev/tpm0'
======================================================================
ERROR: test_unseal_with_wrong_auth (tpm2_tests.SmokeTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "tpm2_tests.py", line 15, in setUp
self.client = tpm2.Client()
File "tpm2.py", line 360, in __init__
self.tpm = open('/dev/tpm0', 'r+b')
IOError: [Errno 2] No such file or directory: '/dev/tpm0'
======================================================================
ERROR: test_unseal_with_wrong_policy (tpm2_tests.SmokeTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "tpm2_tests.py", line 15, in setUp
self.client = tpm2.Client()
File "tpm2.py", line 360, in __init__
self.tpm = open('/dev/tpm0', 'r+b')
IOError: [Errno 2] No such file or directory: '/dev/tpm0'
----------------------------------------------------------------------
Ran 6 tests in 0.001s
FAILED (errors=6)
not ok 1..1 selftests: tpm2: test_smoke.sh [FAIL]
selftests: tpm2: test_space.sh
========================================
test_flush_context (tpm2_tests.SpaceTest) ... ERROR
test_get_handles (tpm2_tests.SpaceTest) ... ERROR
test_invalid_cc (tpm2_tests.SpaceTest) ... ERROR
test_make_two_spaces (tpm2_tests.SpaceTest) ... ERROR
======================================================================
ERROR: test_flush_context (tpm2_tests.SpaceTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "tpm2_tests.py", line 183, in test_flush_context
space1 = tpm2.Client(tpm2.Client.FLAG_SPACE)
File "tpm2.py", line 362, in __init__
self.tpm = open('/dev/tpmrm0', 'r+b')
IOError: [Errno 2] No such file or directory: '/dev/tpmrm0'
======================================================================
ERROR: test_get_handles (tpm2_tests.SpaceTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "tpm2_tests.py", line 193, in test_get_handles
space1 = tpm2.Client(tpm2.Client.FLAG_SPACE)
File "tpm2.py", line 362, in __init__
self.tpm = open('/dev/tpmrm0', 'r+b')
IOError: [Errno 2] No such file or directory: '/dev/tpmrm0'
======================================================================
ERROR: test_invalid_cc (tpm2_tests.SpaceTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "tpm2_tests.py", line 212, in test_invalid_cc
space1 = tpm2.Client(tpm2.Client.FLAG_SPACE)
File "tpm2.py", line 362, in __init__
self.tpm = open('/dev/tpmrm0', 'r+b')
IOError: [Errno 2] No such file or directory: '/dev/tpmrm0'
======================================================================
ERROR: test_make_two_spaces (tpm2_tests.SpaceTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "tpm2_tests.py", line 169, in test_make_two_spaces
space1 = tpm2.Client(tpm2.Client.FLAG_SPACE)
File "tpm2.py", line 362, in __init__
self.tpm = open('/dev/tpmrm0', 'r+b')
IOError: [Errno 2] No such file or directory: '/dev/tpmrm0'
----------------------------------------------------------------------
Ran 4 tests in 0.002s
FAILED (errors=4)
not ok 1..2 selftests: tpm2: test_space.sh [FAIL]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/tpm2'
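All ten tpm2 errors above are the same setUp failure: the qemu guest exposes no TPM character device at all, so the harness dies on open() with ENOENT before a single TPM command is issued. A small C probe reproducing exactly the check the Python code performs (device paths taken from the tracebacks above):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	static const char *const devs[] = { "/dev/tpm0", "/dev/tpmrm0" };

	for (int i = 0; i < 2; i++) {
		int fd = open(devs[i], O_RDWR);

		if (fd < 0) {
			/* The selftest's IOError maps to this errno. */
			printf("%s: %s\n", devs[i], strerror(errno));
			continue;
		}
		printf("%s: present\n", devs[i]);
		close(fd);
	}
	return 0;
}

So the FAILs look like a missing test dependency (no TPM, virtual or otherwise, attached to the VM) rather than a fault in the new tests themselves; presumably something like a swtpm-backed tpm-tis device in qemu would provide /dev/tpm0 and let the smoke tests actually run.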
uevent test: not in Makefile
2019-02-27 15:24:40 make TARGETS=uevent
make[1]: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/uevent'
gcc -Wl,-no-as-needed -Wall uevent_filtering.c -o uevent_filtering
make[1]: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/uevent'
2019-02-27 15:24:41 make run_tests -C uevent
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/uevent'
TAP version 13
selftests: uevent: uevent_filtering
========================================
add@/devices/virtual/mem/full ACTION=add DEVPATH=/devices/virtual/mem/full SUBSYSTEM=mem SYNTH_UUID=0 MAJOR=1 MINOR=7 DEVNAME=full DEVMODE=0666 SEQNUM=1697
add@/devices/virtual/mem/full ACTION=add DEVPATH=/devices/virtual/mem/full SUBSYSTEM=mem SYNTH_UUID=0 MAJOR=1 MINOR=7 DEVNAME=full DEVMODE=0666 SEQNUM=1710
No buffer space available - Failed to receive uevent
add@/devices/virtual/mem/full ACTION=add DEVPATH=/devices/virtual/mem/full SUBSYSTEM=mem SYNTH_UUID=0 MAJOR=1 MINOR=7 DEVNAME=full DEVMODE=0666 SEQNUM=1746
add@/devices/virtual/mem/full ACTION=add DEVPATH=/devices/virtual/mem/full SUBSYSTEM=mem SYNTH_UUID=0 MAJOR=1 MINOR=7 DEVNAME=full DEVMODE=0666 SEQNUM=1756
add@/devices/virtual/mem/full ACTION=add DEVPATH=/devices/virtual/mem/full SUBSYSTEM=mem SYNTH_UUID=0 MAJOR=1 MINOR=7 DEVNAME=full DEVMODE=0666 SEQNUM=1769
[==========] Running 1 tests from 1 test cases.
[ RUN ] global.uevent_filtering
[ OK ] global.uevent_filtering
[==========] 1 / 1 tests passed.
[ PASSED ]
ok 1..1 selftests: uevent: uevent_filtering [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/uevent'
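The one hiccup above, "No buffer space available", is ENOBUFS on the NETLINK_KOBJECT_UEVENT socket: the kernel dropped an event burst before the listener drained it. A sketch of the usual mitigation, an enlarged receive buffer (the 2 MiB figure is an arbitrary example, not what uevent_filtering.c does):

    /* sketch: subscribe to kernel uevents with an enlarged receive
     * buffer so event bursts do not die with ENOBUFS */
    #include <linux/netlink.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int open_uevent_socket(void)
    {
            struct sockaddr_nl addr;
            int sz = 2 * 1024 * 1024;   /* example size, not the test's */
            int fd = socket(AF_NETLINK, SOCK_RAW | SOCK_CLOEXEC,
                            NETLINK_KOBJECT_UEVENT);

            if (fd < 0)
                    return -1;
            /* SO_RCVBUFFORCE needs CAP_NET_ADMIN; fall back if denied */
            if (setsockopt(fd, SOL_SOCKET, SO_RCVBUFFORCE, &sz, sizeof(sz)))
                    setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &sz, sizeof(sz));
            memset(&addr, 0, sizeof(addr));
            addr.nl_family = AF_NETLINK;
            addr.nl_groups = 1;         /* kernel uevent multicast group */
            if (bind(fd, (struct sockaddr *)&addr, sizeof(addr))) {
                    close(fd);
                    return -1;
            }
            return fd;
    }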
2019-02-27 15:24:43 make run_tests -C user
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/user'
TAP version 13
selftests: user: test_user_copy.sh
========================================
user_copy: ok
ok 1..1 selftests: user: test_user_copy.sh [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/user'
vDSO test: not in Makefile
2019-02-27 15:24:43 make TARGETS=vDSO
make[1]: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/vDSO'
gcc -std=gnu99 vdso_test.c parse_vdso.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/vDSO/vdso_test
gcc -std=gnu99 -nostdlib -fno-asynchronous-unwind-tables -fno-stack-protector \
vdso_standalone_test_x86.c parse_vdso.c \
-o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/vDSO/vdso_standalone_test_x86
make[1]: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/vDSO'
2019-02-27 15:24:43 make run_tests -C vDSO
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/vDSO'
TAP version 13
selftests: vDSO: vdso_test
========================================
The time is 1551252283.583175
ok 1..1 selftests: vDSO: vdso_test [PASS]
selftests: vDSO: vdso_standalone_test_x86
========================================
The time is 1551252283.593524
ok 1..2 selftests: vDSO: vdso_standalone_test_x86 [PASS]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/vDSO'
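Both vDSO binaries above link parse_vdso.c, whose two entry points do the heavy lifting. A trimmed sketch of how vdso_test produces its "The time is ..." line (error handling reduced; the extern declarations mirror parse_vdso.c):

    /* sketch of vdso_test's core, using the helpers from parse_vdso.c */
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/auxv.h>
    #include <sys/time.h>

    extern void vdso_init_from_sysinfo_ehdr(uintptr_t base);
    extern void *vdso_sym(const char *version, const char *name);

    int main(void)
    {
            typedef long (*gtod_t)(struct timeval *, struct timezone *);
            unsigned long base = getauxval(AT_SYSINFO_EHDR);
            gtod_t gtod;
            struct timeval tv;

            if (!base)
                    return 1;               /* no vDSO mapped */
            vdso_init_from_sysinfo_ehdr(base);
            gtod = (gtod_t)vdso_sym("LINUX_2.6", "__vdso_gettimeofday");
            if (!gtod)
                    return 1;
            gtod(&tv, 0);
            printf("The time is %ld.%06ld\n",
                   (long)tv.tv_sec, (long)tv.tv_usec);
            return 0;
    }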
2019-02-27 15:24:43 make run_tests -C vm
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/vm'
make ARCH=x86 -C ../../../.. headers_install
make[1]: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25'
HOSTCC scripts/basic/fixdep
WRAP arch/x86/include/generated/uapi/asm/bpf_perf_event.h
WRAP arch/x86/include/generated/uapi/asm/poll.h
SYSTBL arch/x86/include/generated/asm/syscalls_32.h
SYSHDR arch/x86/include/generated/uapi/asm/unistd_32.h
SYSHDR arch/x86/include/generated/uapi/asm/unistd_64.h
SYSHDR arch/x86/include/generated/uapi/asm/unistd_x32.h
HOSTCC arch/x86/tools/relocs_32.o
HOSTCC arch/x86/tools/relocs_64.o
HOSTCC arch/x86/tools/relocs_common.o
HOSTLD arch/x86/tools/relocs
UPD include/generated/uapi/linux/version.h
HOSTCC scripts/unifdef
INSTALL usr/include/asm-generic/ (37 files)
INSTALL usr/include/drm/ (26 files)
INSTALL usr/include/linux/ (503 files)
INSTALL usr/include/linux/android/ (2 files)
INSTALL usr/include/linux/byteorder/ (2 files)
INSTALL usr/include/linux/caif/ (2 files)
INSTALL usr/include/linux/can/ (6 files)
INSTALL usr/include/linux/cifs/ (1 file)
INSTALL usr/include/linux/dvb/ (8 files)
INSTALL usr/include/linux/genwqe/ (1 file)
INSTALL usr/include/linux/hdlc/ (1 file)
INSTALL usr/include/linux/hsi/ (2 files)
INSTALL usr/include/linux/iio/ (2 files)
INSTALL usr/include/linux/isdn/ (1 file)
INSTALL usr/include/linux/mmc/ (1 file)
INSTALL usr/include/linux/netfilter/ (88 files)
INSTALL usr/include/linux/netfilter/ipset/ (4 files)
INSTALL usr/include/linux/netfilter_arp/ (2 files)
INSTALL usr/include/linux/netfilter_bridge/ (17 files)
INSTALL usr/include/linux/netfilter_ipv4/ (9 files)
INSTALL usr/include/linux/netfilter_ipv6/ (13 files)
INSTALL usr/include/linux/nfsd/ (5 files)
INSTALL usr/include/linux/raid/ (2 files)
INSTALL usr/include/linux/sched/ (1 file)
INSTALL usr/include/linux/spi/ (1 file)
INSTALL usr/include/linux/sunrpc/ (1 file)
INSTALL usr/include/linux/tc_act/ (15 files)
INSTALL usr/include/linux/tc_ematch/ (5 files)
INSTALL usr/include/linux/usb/ (13 files)
INSTALL usr/include/linux/wimax/ (1 file)
INSTALL usr/include/misc/ (2 files)
INSTALL usr/include/mtd/ (5 files)
INSTALL usr/include/rdma/ (25 files)
INSTALL usr/include/rdma/hfi/ (2 files)
INSTALL usr/include/scsi/ (5 files)
INSTALL usr/include/scsi/fc/ (4 files)
INSTALL usr/include/sound/ (16 files)
INSTALL usr/include/video/ (3 files)
INSTALL usr/include/xen/ (4 files)
INSTALL usr/include/asm/ (62 files)
make[1]: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25'
gcc -Wall -I ../../../../usr/include compaction_test.c -lrt -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/vm/compaction_test
gcc -Wall -I ../../../../usr/include gup_benchmark.c -lrt -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/vm/gup_benchmark
gcc -Wall -I ../../../../usr/include hugepage-mmap.c -lrt -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/vm/hugepage-mmap
gcc -Wall -I ../../../../usr/include hugepage-shm.c -lrt -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/vm/hugepage-shm
gcc -Wall -I ../../../../usr/include map_hugetlb.c -lrt -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/vm/map_hugetlb
gcc -Wall -I ../../../../usr/include map_fixed_noreplace.c -lrt -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/vm/map_fixed_noreplace
gcc -Wall -I ../../../../usr/include map_populate.c -lrt -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/vm/map_populate
gcc -Wall -I ../../../../usr/include mlock-random-test.c -lrt -lcap -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/vm/mlock-random-test
gcc -Wall -I ../../../../usr/include mlock2-tests.c -lrt -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/vm/mlock2-tests
gcc -Wall -I ../../../../usr/include on-fault-limit.c -lrt -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/vm/on-fault-limit
gcc -Wall -I ../../../../usr/include thuge-gen.c -lrt -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/vm/thuge-gen
gcc -Wall -I ../../../../usr/include transhuge-stress.c -lrt -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/vm/transhuge-stress
gcc -Wall -I ../../../../usr/include userfaultfd.c -lrt -lpthread -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/vm/userfaultfd
gcc -Wall -I ../../../../usr/include va_128TBswitch.c -lrt -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/vm/va_128TBswitch
gcc -Wall -I ../../../../usr/include virtual_address_range.c -lrt -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/vm/virtual_address_range
TAP version 13
selftests: vm: run_vmtests
========================================
---------------------
running hugepage-mmap
---------------------
Returned address is 0x7ff0b7a00000
First hex is 0
First hex is 3020100
[PASS]
--------------------
running hugepage-shm
--------------------
shmid: 0x0
shmaddr: 0x7f5d81800000
Starting the writes:
................................................................................................................................................................................................................................................................
Starting the Check...Done.
[PASS]
-------------------
running map_hugetlb
-------------------
Returned address is 0x7ff874000000
First hex is 0
First hex is 3020100
[PASS]
NOTE: The above hugetlb tests provide minimal coverage. Use
https://github.com/libhugetlbfs/libhugetlbfs.git for
hugetlb regression testing.
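For reference, the whole of map_hugetlb fits in a few lines: an anonymous MAP_HUGETLB mapping plus a write to fault it in (a sketch; the real test also verifies the first bytes, which is where the "First hex" lines come from):

    /* sketch: anonymous hugepage mapping, as map_hugetlb does */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    #define LENGTH (2UL * 1024 * 1024)      /* one 2 MB hugepage */

    int main(void)
    {
            void *p = mmap(NULL, LENGTH, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

            if (p == MAP_FAILED) {
                    perror("mmap");         /* no hugepages reserved? */
                    return 1;
            }
            memset(p, 0, LENGTH);           /* fault the hugepage in */
            printf("Returned address is %p\n", p);
            munmap(p, LENGTH);
            return 0;
    }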
-------------------
running userfaultfd
-------------------
nr_pages: 32768, nr_pages_per_cpu: 16384
bounces: 31, mode: rnd racing ver poll, userfaults: 2703 2888
bounces: 30, mode: racing ver poll, userfaults: 5001 2425
bounces: 29, mode: rnd ver poll, userfaults: 4419 4637
bounces: 28, mode: ver poll, userfaults: 15930 6292
bounces: 27, mode: rnd racing poll, userfaults: 4262 2689
bounces: 26, mode: racing poll, userfaults: 8537 3080
bounces: 25, mode: rnd poll, userfaults: 6019 4104
bounces: 24, mode: poll, userfaults: 8587 5465
bounces: 23, mode: rnd racing ver, userfaults: 1721 7204
bounces: 22, mode: racing ver, userfaults: 2549 2129
bounces: 21, mode: rnd ver, userfaults: 4066 3737
bounces: 20, mode: ver, userfaults: 6170 5100
bounces: 19, mode: rnd racing, userfaults: 4268 4094
bounces: 18, mode: racing, userfaults: 2870 2494
bounces: 17, mode: rnd, userfaults: 6824 6297
bounces: 16, mode:, userfaults: 6829 6360
bounces: 15, mode: rnd racing ver poll, userfaults: 3115 2918
bounces: 14, mode: racing ver poll, userfaults: 6667 1434
bounces: 13, mode: rnd ver poll, userfaults: 4019 5395
bounces: 12, mode: ver poll, userfaults: 2491 4733
bounces: 11, mode: rnd racing poll, userfaults: 3615 3008
bounces: 10, mode: racing poll, userfaults: 2960 4775
bounces: 9, mode: rnd poll, userfaults: 4541 4587
bounces: 8, mode: poll, userfaults: 6640 6258
bounces: 7, mode: rnd racing ver, userfaults: 4449 3356
bounces: 6, mode: racing ver, userfaults: 5784 5127
bounces: 5, mode: rnd ver, userfaults: 6475 5466
bounces: 4, mode: ver, userfaults: 7559 7026
bounces: 3, mode: rnd racing, userfaults: 1727 5211
bounces: 2, mode: racing, userfaults: 4792 4723
bounces: 1, mode: rnd, userfaults: 5464 5139
bounces: 0, mode:, userfaults: 8359 8239
testing UFFDIO_ZEROPAGE: done.
testing signal delivery: done.
testing events (fork, remap, remove): userfaults: 32768
[PASS]
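The userfaults counters above are the number of times the test's monitor thread serviced a missing-page fault. The core register/resolve sequence, stripped of the bounce/racing machinery and error checks (a sketch, not the selftest):

    /* sketch: register a range and resolve a missing-page fault with
     * UFFDIO_COPY; the real test does this from a monitor thread that
     * poll()s the fd and read()s one struct uffd_msg per fault */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <linux/userfaultfd.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
            long page = sysconf(_SC_PAGESIZE);
            int uffd = syscall(__NR_userfaultfd, O_CLOEXEC | O_NONBLOCK);
            struct uffdio_api api = { .api = UFFD_API };
            char *dst, *src;

            ioctl(uffd, UFFDIO_API, &api);
            dst = mmap(NULL, page, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            struct uffdio_register reg = {
                    .range = { .start = (unsigned long)dst, .len = page },
                    .mode  = UFFDIO_REGISTER_MODE_MISSING,
            };
            ioctl(uffd, UFFDIO_REGISTER, &reg);

            src = mmap(NULL, page, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            memset(src, 0xaa, page);
            struct uffdio_copy copy = {
                    .dst = (unsigned long)dst,
                    .src = (unsigned long)src,
                    .len = page,
            };
            ioctl(uffd, UFFDIO_COPY, &copy);    /* fills dst, wakes faulters */
            return 0;
    }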
---------------------------
running userfaultfd_hugetlb
---------------------------
nr_pages: 64, nr_pages_per_cpu: 32
bounces: 31, mode: rnd racing ver poll, userfaults: 11 14
bounces: 30, mode: racing ver poll, userfaults: 12 13
bounces: 29, mode: rnd ver poll, userfaults: 16 7
bounces: 28, mode: ver poll, userfaults: 5 2
bounces: 27, mode: rnd racing poll, userfaults: 17 10
bounces: 26, mode: racing poll, userfaults: 10 17
bounces: 25, mode: rnd poll, userfaults: 18 13
bounces: 24, mode: poll, userfaults: 16 4
bounces: 23, mode: rnd racing ver, userfaults: 30 5
bounces: 22, mode: racing ver, userfaults: 25 15
bounces: 21, mode: rnd ver, userfaults: 19 19
bounces: 20, mode: ver, userfaults: 18 11
bounces: 19, mode: rnd racing, userfaults: 24 6
bounces: 18, mode: racing, userfaults: 15 9
bounces: 17, mode: rnd, userfaults: 21 10
bounces: 16, mode:, userfaults: 14 13
bounces: 15, mode: rnd racing ver poll, userfaults: 23 8
bounces: 14, mode: racing ver poll, userfaults: 17 7
bounces: 13, mode: rnd ver poll, userfaults: 19 12
bounces: 12, mode: ver poll, userfaults: 18 12
bounces: 11, mode: rnd racing poll, userfaults: 17 10
bounces: 10, mode: racing poll, userfaults: 16 3
bounces: 9, mode: rnd poll, userfaults: 14 7
bounces: 8, mode: poll, userfaults: 19 13
bounces: 7, mode: rnd racing ver, userfaults: 19 11
bounces: 6, mode: racing ver, userfaults: 21 5
bounces: 5, mode: rnd ver, userfaults: 18 18
bounces: 4, mode: ver, userfaults: 19 11
bounces: 3, mode: rnd racing, userfaults: 17 11
bounces: 2, mode: racing, userfaults: 13 10
bounces: 1, mode: rnd, userfaults: 22 15
bounces: 0, mode:, userfaults: 17 12
testing UFFDIO_ZEROPAGE: done.
testing signal delivery: done.
testing events (fork, remap, remove): userfaults: 64
[PASS]
-------------------------
running userfaultfd_shmem
-------------------------
nr_pages: 32768, nr_pages_per_cpu: 16384
bounces: 31, mode: rnd racing ver poll, userfaults: 4874 832
bounces: 30, mode: racing ver poll, userfaults: 4198 3987
bounces: 29, mode: rnd ver poll, userfaults: 4214 4832
bounces: 28, mode: ver poll, userfaults: 6352 6804
bounces: 27, mode: rnd racing poll, userfaults: 3565 2613
bounces: 26, mode: racing poll, userfaults: 5095 2547
bounces: 25, mode: rnd poll, userfaults: 4706 5501
bounces: 24, mode: poll, userfaults: 5077 5306
bounces: 23, mode: rnd racing ver, userfaults: 3660 3855
bounces: 22, mode: racing ver, userfaults: 3990 1906
bounces: 21, mode: rnd ver, userfaults: 6546 6123
bounces: 20, mode: ver, userfaults: 4790 4657
bounces: 19, mode: rnd racing, userfaults: 4831 4642
bounces: 18, mode: racing, userfaults: 3696 2936
bounces: 17, mode: rnd, userfaults: 6943 5752
bounces: 16, mode:, userfaults: 6531 6499
bounces: 15, mode: rnd racing ver poll, userfaults: 4303 3680
bounces: 14, mode: racing ver poll, userfaults: 3082 4979
bounces: 13, mode: rnd ver poll, userfaults: 5504 9595
bounces: 12, mode: ver poll, userfaults: 6365 6848
bounces: 11, mode: rnd racing poll, userfaults: 4572 4120
bounces: 10, mode: racing poll, userfaults: 3318 5788
bounces: 9, mode: rnd poll, userfaults: 4396 4692
bounces: 8, mode: poll, userfaults: 6163 6676
bounces: 7, mode: rnd racing ver, userfaults: 3148 3606
bounces: 6, mode: racing ver, userfaults: 3530 3421
bounces: 5, mode: rnd ver, userfaults: 5903 5757
bounces: 4, mode: ver, userfaults: 4970 4901
bounces: 3, mode: rnd racing, userfaults: 3964 3317
bounces: 2, mode: racing, userfaults: 4445 4453
bounces: 1, mode: rnd, userfaults: 5309 6216
bounces: 0, mode:, userfaults: 5058 4684
testing UFFDIO_ZEROPAGE: done.
testing signal delivery: done.
testing events (fork, remap, remove): userfaults: 32768
[PASS]
-----------------------
running compaction_test
-----------------------
[ignored_by_lkp]
[PASS]
----------------------
running on-fault-limit
----------------------
[PASS]
--------------------
running map_populate
--------------------
[PASS]
--------------------
running mlock2-tests
--------------------
Failed to make faulted page unevictable
Failed to make faulted page unevictable
Failed to make present page unevictable
[FAIL]
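The three failures are the test's unevictable-LRU checks: after locking a region it expects the faulted pages to carry the unevictable flag, and here they do not. The call under test, via a raw syscall since older glibc lacks an mlock2() wrapper (a sketch):

    /* sketch: lock-on-fault, as mlock2-tests exercises */
    #define _GNU_SOURCE
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #ifndef MLOCK_ONFAULT
    #define MLOCK_ONFAULT 0x01
    #endif

    int lock_on_fault(void *addr, size_t len)
    {
            /* pages get locked as they fault in, not eagerly */
            return syscall(__NR_mlock2, addr, len, MLOCK_ONFAULT);
    }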
-----------------------------
running virtual_address_range
-----------------------------
[PASS]
-----------------------------
running virtual address 128TB switch test
-----------------------------
[ignored_by_lkp]
[PASS]
not ok 1..1 selftests: vm: run_vmtests [FAIL]
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/vm'
watchdog test: not in Makefile
2019-02-27 15:25:16 make TARGETS=watchdog
make[1]: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/watchdog'
gcc watchdog-test.c -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/watchdog/watchdog-test
make[1]: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/watchdog'
ignored_by_lkp watchdog test
ignored_by_lkp x86.mov_ss_trap test
2019-02-27 15:25:16 make run_tests -C x86
make: Entering directory '/usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/x86'
gcc -m32 -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/x86/single_step_syscall_32 -O2 -g -std=gnu99 -pthread -Wall -no-pie -DCAN_BUILD_32 -DCAN_BUILD_64 single_step_syscall.c -lrt -ldl -lm
gcc -m32 -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/x86/sysret_ss_attrs_32 -O2 -g -std=gnu99 -pthread -Wall -no-pie -DCAN_BUILD_32 -DCAN_BUILD_64 sysret_ss_attrs.c -lrt -ldl -lm
gcc -m32 -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/x86/syscall_nt_32 -O2 -g -std=gnu99 -pthread -Wall -no-pie -DCAN_BUILD_32 -DCAN_BUILD_64 syscall_nt.c -lrt -ldl -lm
gcc -m32 -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/x86/test_mremap_vdso_32 -O2 -g -std=gnu99 -pthread -Wall -no-pie -DCAN_BUILD_32 -DCAN_BUILD_64 test_mremap_vdso.c -lrt -ldl -lm
gcc -m32 -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/x86/check_initial_reg_state_32 -O2 -g -std=gnu99 -pthread -Wall -no-pie -Wl,-ereal_start -static -DCAN_BUILD_32 -DCAN_BUILD_64 check_initial_reg_state.c -lrt -ldl -lm
gcc -m32 -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/x86/sigreturn_32 -O2 -g -std=gnu99 -pthread -Wall -no-pie -DCAN_BUILD_32 -DCAN_BUILD_64 sigreturn.c -lrt -ldl -lm
gcc -m32 -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/x86/iopl_32 -O2 -g -std=gnu99 -pthread -Wall -no-pie -DCAN_BUILD_32 -DCAN_BUILD_64 iopl.c -lrt -ldl -lm
gcc -m32 -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/x86/mpx-mini-test_32 -O2 -g -std=gnu99 -pthread -Wall -no-pie -DCAN_BUILD_32 -DCAN_BUILD_64 mpx-mini-test.c -lrt -ldl -lm
gcc -m32 -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/x86/ioperm_32 -O2 -g -std=gnu99 -pthread -Wall -no-pie -DCAN_BUILD_32 -DCAN_BUILD_64 ioperm.c -lrt -ldl -lm
gcc -m32 -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/x86/protection_keys_32 -O2 -g -std=gnu99 -pthread -Wall -no-pie -DCAN_BUILD_32 -DCAN_BUILD_64 protection_keys.c -lrt -ldl -lm
gcc -m32 -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/x86/test_vdso_32 -O2 -g -std=gnu99 -pthread -Wall -no-pie -DCAN_BUILD_32 -DCAN_BUILD_64 test_vdso.c -lrt -ldl -lm
gcc -m32 -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/x86/test_vsyscall_32 -O2 -g -std=gnu99 -pthread -Wall -no-pie -DCAN_BUILD_32 -DCAN_BUILD_64 test_vsyscall.c -lrt -ldl -lm
gcc -m32 -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/x86/entry_from_vm86_32 -O2 -g -std=gnu99 -pthread -Wall -no-pie -DCAN_BUILD_32 -DCAN_BUILD_64 entry_from_vm86.c -lrt -ldl -lm
gcc -m32 -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/x86/syscall_arg_fault_32 -O2 -g -std=gnu99 -pthread -Wall -no-pie -DCAN_BUILD_32 -DCAN_BUILD_64 syscall_arg_fault.c -lrt -ldl -lm
gcc -m32 -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/x86/test_syscall_vdso_32 -O2 -g -std=gnu99 -pthread -Wall -no-pie -DCAN_BUILD_32 -DCAN_BUILD_64 test_syscall_vdso.c thunks_32.S -lrt -ldl -lm
gcc -m32 -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/x86/unwind_vdso_32 -O2 -g -std=gnu99 -pthread -Wall -no-pie -DCAN_BUILD_32 -DCAN_BUILD_64 unwind_vdso.c -lrt -ldl -lm
gcc -m32 -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/x86/test_FCMOV_32 -O2 -g -std=gnu99 -pthread -Wall -no-pie -DCAN_BUILD_32 -DCAN_BUILD_64 test_FCMOV.c -lrt -ldl -lm
gcc -m32 -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/x86/test_FCOMI_32 -O2 -g -std=gnu99 -pthread -Wall -no-pie -DCAN_BUILD_32 -DCAN_BUILD_64 test_FCOMI.c -lrt -ldl -lm
gcc -m32 -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/x86/test_FISTTP_32 -O2 -g -std=gnu99 -pthread -Wall -no-pie -DCAN_BUILD_32 -DCAN_BUILD_64 test_FISTTP.c -lrt -ldl -lm
gcc -m32 -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/x86/vdso_restorer_32 -O2 -g -std=gnu99 -pthread -Wall -no-pie -DCAN_BUILD_32 -DCAN_BUILD_64 vdso_restorer.c -lrt -ldl -lm
gcc -m32 -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/x86/ldt_gdt_32 -O2 -g -std=gnu99 -pthread -Wall -no-pie -DCAN_BUILD_32 -DCAN_BUILD_64 ldt_gdt.c -lrt -ldl -lm
gcc -m32 -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/x86/ptrace_syscall_32 -O2 -g -std=gnu99 -pthread -Wall -no-pie -DCAN_BUILD_32 -DCAN_BUILD_64 ptrace_syscall.c raw_syscall_helper_32.S -lrt -ldl -lm
gcc -m64 -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/x86/single_step_syscall_64 -O2 -g -std=gnu99 -pthread -Wall -no-pie -DCAN_BUILD_32 -DCAN_BUILD_64 single_step_syscall.c -lrt -ldl
gcc -m64 -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/x86/sysret_ss_attrs_64 -O2 -g -std=gnu99 -pthread -Wall -no-pie -DCAN_BUILD_32 -DCAN_BUILD_64 sysret_ss_attrs.c thunks.S -lrt -ldl
gcc -m64 -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/x86/syscall_nt_64 -O2 -g -std=gnu99 -pthread -Wall -no-pie -DCAN_BUILD_32 -DCAN_BUILD_64 syscall_nt.c -lrt -ldl
gcc -m64 -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/x86/test_mremap_vdso_64 -O2 -g -std=gnu99 -pthread -Wall -no-pie -DCAN_BUILD_32 -DCAN_BUILD_64 test_mremap_vdso.c -lrt -ldl
gcc -m64 -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/x86/check_initial_reg_state_64 -O2 -g -std=gnu99 -pthread -Wall -no-pie -Wl,-ereal_start -static -DCAN_BUILD_32 -DCAN_BUILD_64 check_initial_reg_state.c -lrt -ldl
gcc -m64 -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/x86/sigreturn_64 -O2 -g -std=gnu99 -pthread -Wall -no-pie -DCAN_BUILD_32 -DCAN_BUILD_64 sigreturn.c -lrt -ldl
gcc -m64 -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/x86/iopl_64 -O2 -g -std=gnu99 -pthread -Wall -no-pie -DCAN_BUILD_32 -DCAN_BUILD_64 iopl.c -lrt -ldl
gcc -m64 -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/x86/mpx-mini-test_64 -O2 -g -std=gnu99 -pthread -Wall -no-pie -DCAN_BUILD_32 -DCAN_BUILD_64 mpx-mini-test.c -lrt -ldl
gcc -m64 -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/x86/ioperm_64 -O2 -g -std=gnu99 -pthread -Wall -no-pie -DCAN_BUILD_32 -DCAN_BUILD_64 ioperm.c -lrt -ldl
gcc -m64 -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/x86/protection_keys_64 -O2 -g -std=gnu99 -pthread -Wall -no-pie -DCAN_BUILD_32 -DCAN_BUILD_64 protection_keys.c -lrt -ldl
gcc -m64 -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/x86/test_vdso_64 -O2 -g -std=gnu99 -pthread -Wall -no-pie -DCAN_BUILD_32 -DCAN_BUILD_64 test_vdso.c -lrt -ldl
gcc -m64 -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/x86/test_vsyscall_64 -O2 -g -std=gnu99 -pthread -Wall -no-pie -DCAN_BUILD_32 -DCAN_BUILD_64 test_vsyscall.c -lrt -ldl
gcc -m64 -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/x86/fsgsbase_64 -O2 -g -std=gnu99 -pthread -Wall -no-pie -DCAN_BUILD_32 -DCAN_BUILD_64 fsgsbase.c -lrt -ldl
gcc -m64 -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/x86/sysret_rip_64 -O2 -g -std=gnu99 -pthread -Wall -no-pie -DCAN_BUILD_32 -DCAN_BUILD_64 sysret_rip.c -lrt -ldl
gcc -m64 -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/x86/ldt_gdt_64 -O2 -g -std=gnu99 -pthread -Wall -no-pie -DCAN_BUILD_32 -DCAN_BUILD_64 ldt_gdt.c -lrt -ldl
gcc -m64 -o /usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/x86/ptrace_syscall_64 -O2 -g -std=gnu99 -pthread -Wall -no-pie -DCAN_BUILD_32 -DCAN_BUILD_64 ptrace_syscall.c -lrt -ldl
TAP version 13
selftests: x86: single_step_syscall_32
========================================
[RUN] Set TF and check nop
[OK] Survived with TF set and 14 traps
[RUN] Set TF and check int80
[OK] Survived with TF set and 14 traps
[RUN] Set TF and check a fast syscall
[OK] Survived with TF set and 43 traps
[RUN] Fast syscall with TF cleared
[OK] Nothing unexpected happened
ok 1..1 selftests: x86: single_step_syscall_32 [PASS]
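The trap counts above come from arming EFLAGS.TF from userspace and letting every instruction (and the syscall path) raise SIGTRAP. A bare-bones 64-bit rendition of the trick (a sketch; the selftest's own helpers differ):

    /* sketch (64-bit): arm EFLAGS.TF, count SIGTRAPs, disarm */
    #include <signal.h>
    #include <stdio.h>

    static volatile sig_atomic_t traps;

    static void on_trap(int sig) { (void)sig; traps++; }

    int main(void)
    {
            signal(SIGTRAP, on_trap);
            asm volatile ("pushf\n\t"
                          "orl $0x100, (%%rsp)\n\t"   /* set TF */
                          "popf" : : : "memory", "cc");
            asm volatile ("nop");                     /* one trap per insn */
            asm volatile ("pushf\n\t"
                          "andl $~0x100, (%%rsp)\n\t" /* clear TF */
                          "popf" : : : "memory", "cc");
            printf("Survived with TF set and %d traps\n", (int)traps);
            return 0;
    }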
selftests: x86: sysret_ss_attrs_32
========================================
[RUN] Syscalls followed by SS validation
[OK] We survived
ok 1..2 selftests: x86: sysret_ss_attrs_32 [PASS]
selftests: x86: syscall_nt_32
========================================
[RUN] Set NT and issue a syscall
[OK] The syscall worked and flags are still set
[RUN] Set NT|TF and issue a syscall
[OK] The syscall worked and flags are still set
ok 1..3 selftests: x86: syscall_nt_32 [PASS]
selftests: x86: test_mremap_vdso_32
========================================
AT_SYSINFO_EHDR is 0xf7ee7000
[NOTE] Moving vDSO: [0xf7ee7000, 0xf7ee8000] -> [0xf7f0f000, 0xf7f10000]
[OK]
ok 1..4 selftests: x86: test_mremap_vdso_32 [PASS]
selftests: x86: check_initial_reg_state_32
========================================
[OK] All GPRs except SP are 0
[OK] FLAGS is 0x202
ok 1..5 selftests: x86: check_initial_reg_state_32 [PASS]
selftests: x86: sigreturn_32
========================================
[OK] set_thread_area refused 16-bit data
[OK] set_thread_area refused 16-bit data
[RUN] Valid sigreturn: 64-bit CS (33), 32-bit SS (2b, GDT)
[OK] all registers okay
[RUN] Valid sigreturn: 32-bit CS (23), 32-bit SS (2b, GDT)
[OK] all registers okay
[RUN] Valid sigreturn: 16-bit CS (37), 32-bit SS (2b, GDT)
[OK] all registers okay
[RUN] Valid sigreturn: 64-bit CS (33), 16-bit SS (3f)
[OK] all registers okay
[RUN] Valid sigreturn: 32-bit CS (23), 16-bit SS (3f)
[OK] all registers okay
[RUN] Valid sigreturn: 16-bit CS (37), 16-bit SS (3f)
[OK] all registers okay
[RUN] 64-bit CS (33), bogus SS (47)
[OK] Got #GP(0x0) (i.e. Segmentation fault)
[RUN] 32-bit CS (23), bogus SS (47)
[OK] Got #GP(0x0) (i.e. Segmentation fault)
[RUN] 16-bit CS (37), bogus SS (47)
[OK] Got #GP(0x0) (i.e. Segmentation fault)
[RUN] 64-bit CS (33), bogus SS (23)
[OK] Got #GP(0x20) (i.e. GDT index 4, Segmentation fault)
[RUN] 32-bit CS (23), bogus SS (23)
[OK] Got #GP(0x20) (i.e. GDT index 4, Segmentation fault)
[RUN] 16-bit CS (37), bogus SS (23)
[OK] Got #GP(0x20) (i.e. GDT index 4, Segmentation fault)
[RUN] 32-bit CS (4f), bogus SS (2b)
[OK] Got #NP(0x4c) (i.e. LDT index 9, Bus error)
[RUN] 32-bit CS (23), bogus SS (57)
[OK] Got #GP(0x0) (i.e. Segmentation fault)
ok 1..6 selftests: x86: sigreturn_32 [PASS]
selftests: x86: iopl_32
========================================
child: set IOPL to 3
[RUN] child: write to 0x80
[OK] Child succeeded
[RUN] parent: write to 0x80 (should fail)
[OK] write was denied
iopl(3)
Drop privileges
[RUN] iopl(3) unprivileged but with IOPL==3
[RUN] iopl(0) unprivileged
[RUN] iopl(3) unprivileged
[OK] Failed as expected
ok 1..7 selftests: x86: iopl_32 [PASS]
selftests: x86: mpx-mini-test_32
========================================
processor lacks MPX XSTATE(s), can not run MPX tests
XSAVE is supported by HW & OS
XSAVE processor supported state mask: 0x7
XSAVE OS supported state mask: 0x7
ok 1..8 selftests: x86: mpx-mini-test_32 [PASS]
selftests: x86: ioperm_32
========================================
[OK] outb to 0x80 failed
[OK] outb to 0xed failed
[RUN] enable 0x80
[OK] outb to 0x80 worked
[OK] outb to 0xed failed
[RUN] disable 0x80
[OK] outb to 0x80 failed
[OK] outb to 0xed failed
[RUN] child: check that we inherited permissions
[OK] outb to 0x80 worked
[OK] outb to 0xed failed
[OK] outb to 0x80 failed
[OK] outb to 0xed failed
[RUN] enable 0x80
[OK] outb to 0x80 worked
[OK] outb to 0xed failed
[RUN] disable 0x80
[OK] outb to 0x80 failed
[OK] outb to 0xed failed
[OK] Child succeeded
Drop privileges
[RUN] disable 0x80
[OK] it worked
[RUN] enable 0x80 again
[OK] it failed
ok 1..9 selftests: x86: ioperm_32 [PASS]
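The ioperm sequence is port-bitmap bookkeeping: grant port 0x80, show outb works there but still faults on 0xed, revoke, and confirm the child inherited the bitmap across fork. The privileged core of it (a sketch; needs root):

    /* sketch: enable a single I/O port and write to it */
    #include <stdio.h>
    #include <sys/io.h>

    int main(void)
    {
            if (ioperm(0x80, 1, 1)) {       /* enable just port 0x80 */
                    perror("ioperm");
                    return 1;
            }
            outb(0, 0x80);                  /* allowed now */
            /* outb(0, 0xed) here would still fault: not in the bitmap */
            ioperm(0x80, 1, 0);             /* and disable again */
            return 0;
    }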
selftests: x86: protection_keys_32
========================================
has pku: 0
running PKEY tests for unsupported CPU/OS
ok 1..10 selftests: x86: protection_keys_32 [PASS]
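"has pku: 0" means this CPU advertises no protection keys, so the suite only runs its unsupported-hardware path. On PKU hardware the interesting pair is pkey_alloc plus pkey_mprotect (a sketch; glibc >= 2.27 wrappers assumed):

    /* sketch: allocate a key denying access and tag pages with it */
    #define _GNU_SOURCE
    #include <sys/mman.h>

    int deny_access(void *addr, size_t len)
    {
            int pkey = pkey_alloc(0, PKEY_DISABLE_ACCESS);

            if (pkey < 0)
                    return -1;              /* no pku, as on this box */
            /* loads/stores to these pages now fault until the thread
             * re-enables the key via pkey_set() */
            return pkey_mprotect(addr, len, PROT_READ | PROT_WRITE, pkey);
    }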
selftests: x86: test_vdso_32
========================================
Warning: failed to find getcpu in vDSO
[RUN] Testing clock_gettime for clock CLOCK_REALTIME (0)...
1551252325.597246296 1551252325.597252343 1551252325.597252973
[RUN] Testing clock_gettime for clock CLOCK_MONOTONIC (1)...
632.985155857 632.985156269 632.985156737
[RUN] Testing clock_gettime for clock CLOCK_PROCESS_CPUTIME_ID (2)...
0.001371749 0.001372886 0.001373730
[RUN] Testing clock_gettime for clock CLOCK_THREAD_CPUTIME_ID (3)...
0.001376964 0.001377806 0.001378641
[RUN] Testing clock_gettime for clock CLOCK_MONOTONIC_RAW (4)...
632.577313626 632.577314342 632.577315071
[RUN] Testing clock_gettime for clock CLOCK_REALTIME_COARSE (5)...
1551252325.596853650 1551252325.596853650 1551252325.596853650
[RUN] Testing clock_gettime for clock CLOCK_MONOTONIC_COARSE (6)...
632.984752162 632.984752162 632.984752162
[RUN] Testing clock_gettime for clock CLOCK_BOOTTIME (7)...
632.985178930 632.985179662 632.985180406
[RUN] Testing clock_gettime for clock CLOCK_REALTIME_ALARM (8)...
1551252325.597284486 1551252325.597285249 1551252325.597286030
[RUN] Testing clock_gettime for clock CLOCK_BOOTTIME_ALARM (9)...
632.985187015 632.985187786 632.985188568
[RUN] Testing clock_gettime for clock CLOCK_SGI_CYCLE (10)...
[OK] No such clock.
[RUN] Testing clock_gettime for clock CLOCK_TAI (11)...
1551252325.597295127 1551252325.597295489 1551252325.597295966
[RUN] Testing clock_gettime for clock invalid (-1)...
[OK] No such clock.
[RUN] Testing clock_gettime for clock invalid (-2147483648)...
[OK] No such clock.
[RUN] Testing clock_gettime for clock invalid (2147483647)...
[OK] No such clock.
[RUN] Testing gettimeofday...
1551252325.597304 1551252325.597304 1551252325.597305
[OK] timezones match: minuteswest=-480, dsttime=0
[RUN] Testing getcpu...
[OK] CPU 0: syscall: cpu 0, node 0
[OK] CPU 1: syscall: cpu 1, node 0
ok 1..11 selftests: x86: test_vdso_32 [PASS]
selftests: x86: test_vsyscall_32
========================================
[NOTE] failed to find getcpu in vDSO
[RUN] test gettimeofday()
vDSO time offsets: 0.000005 0.000000
[OK] vDSO gettimeofday()'s timeval was okay
[RUN] test time()
[OK] vDSO time() is okay
[RUN] getcpu() on CPU 0
[RUN] getcpu() on CPU 1
ok 1..12 selftests: x86: test_vsyscall_32 [PASS]
selftests: x86: entry_from_vm86_32
========================================
[RUN] #BR from vm86 mode
[SKIP] vm86 not supported
[RUN] SYSENTER from vm86 mode
[SKIP] vm86 not supported
[RUN] SYSCALL from vm86 mode
[SKIP] vm86 not supported
[RUN] STI with VIP set from vm86 mode
[SKIP] vm86 not supported
[RUN] POPF with VIP set and IF clear from vm86 mode
[SKIP] vm86 not supported
[RUN] POPF with VIP and IF set from vm86 mode
[SKIP] vm86 not supported
[RUN] POPF with VIP clear and IF set from vm86 mode
[SKIP] vm86 not supported
[RUN] INT3 from vm86 mode
[SKIP] vm86 not supported
[RUN] int80 from vm86 mode
[SKIP] vm86 not supported
[RUN] UMIP tests from vm86 mode
[SKIP] vm86 not supported
[INFO] Result from SMSW:[0x0000]
[INFO] Result from SIDT: limit[0x0000]base[0x00000000]
[INFO] Result from SGDT: limit[0x0000]base[0x00000000]
[PASS] All the results from SMSW are identical.
[PASS] All the results from SGDT are identical.
[PASS] All the results from SIDT are identical.
[RUN] STR instruction from vm86 mode
[SKIP] vm86 not supported
[RUN] SLDT instruction from vm86 mode
[SKIP] vm86 not supported
[RUN] Execute null pointer from vm86 mode
[SKIP] vm86 not supported
ok 1..13 selftests: x86: entry_from_vm86_32 [PASS]
selftests: x86: syscall_arg_fault_32
========================================
[RUN] SYSENTER with invalid state
[OK] Seems okay
[RUN] SYSCALL with invalid state
[SKIP] Illegal instruction
ok 1..14 selftests: x86: syscall_arg_fault_32 [PASS]
selftests: x86: test_syscall_vdso_32
========================================
[RUN] Executing 6-argument 32-bit syscall via VDSO
[WARN] Flags before=0000000000200ed7 id 0 00 o d i s z 0 a 0 p 1 c
[WARN] Flags after=0000000000200606 id 0 00 d i 0 0 p 1
[WARN] Flags change=00000000000008d1 0 00 o s z 0 a 0 0 c
[OK] Arguments are preserved across syscall
[NOTE] R11 has changed:0000000000200606 - assuming clobbered by SYSRET insn
[OK] R8..R15 did not leak kernel data
[RUN] Executing 6-argument 32-bit syscall via INT 80
[OK] Arguments are preserved across syscall
[OK] R8..R15 did not leak kernel data
[RUN] Executing 6-argument 32-bit syscall via VDSO
[WARN] Flags before=0000000000200ed7 id 0 00 o d i s z 0 a 0 p 1 c
[WARN] Flags after=0000000000200606 id 0 00 d i 0 0 p 1
[WARN] Flags change=00000000000008d1 0 00 o s z 0 a 0 0 c
[OK] Arguments are preserved across syscall
[NOTE] R11 has changed:0000000000200606 - assuming clobbered by SYSRET insn
[OK] R8..R15 did not leak kernel data
[RUN] Executing 6-argument 32-bit syscall via INT 80
[OK] Arguments are preserved across syscall
[OK] R8..R15 did not leak kernel data
[RUN] Running tests under ptrace
ok 1..15 selftests: x86: test_syscall_vdso_32 [PASS]
selftests: x86: unwind_vdso_32
========================================
AT_SYSINFO is 0xf7fbd940
[OK] AT_SYSINFO maps to linux-gate.so.1, loaded at 0xf7fbd000
[RUN] Set TF and check a fast syscall
In vsyscall at 0xf7fbd940, returning to 0xf7da9877
SIGTRAP at 0xf7fbd940
0xf7fbd940
0xf7da9877
[OK] NR = 20, args = 1, 2, 3, 4, 5, 6
SIGTRAP at 0xf7fbd941
0xf7fbd941
0xf7da9877
[OK] NR = 20, args = 1, 2, 3, 4, 5, 6
SIGTRAP at 0xf7fbd942
0xf7fbd942
0xf7da9877
[OK] NR = 20, args = 1, 2, 3, 4, 5, 6
SIGTRAP at 0xf7fbd943
0xf7fbd943
0xf7da9877
[OK] NR = 20, args = 1, 2, 3, 4, 5, 6
SIGTRAP at 0xf7fbd945
0xf7fbd945
0xf7da9877
[OK] NR = 20, args = 1, 2, 3, 4, 5, 6
SIGTRAP at 0xf7fbd94a
0xf7fbd94a
0xf7da9877
[OK] NR = 10002, args = 1, 2, 3, 4, 5, 6
SIGTRAP at 0xf7fbd94b
0xf7fbd94b
0xf7da9877
[OK] NR = 10002, args = 1, 2, 3, 4, 5, 6
SIGTRAP at 0xf7fbd94c
0xf7fbd94c
0xf7da9877
[OK] NR = 10002, args = 1, 2, 3, 4, 5, 6
Vsyscall is done
[OK] All is well
ok 1..16 selftests: x86: unwind_vdso_32 [PASS]
selftests: x86: test_FCMOV_32
========================================
[RUN] Testing fcmovCC instructions
[OK] fcmovCC
ok 1..17 selftests: x86: test_FCMOV_32 [PASS]
selftests: x86: test_FCOMI_32
========================================
[RUN] Testing f[u]comi[p] instructions
[OK] f[u]comi[p]
ok 1..18 selftests: x86: test_FCOMI_32 [PASS]
selftests: x86: test_FISTTP_32
========================================
[RUN] Testing fisttp instructions
[OK] fisttp
ok 1..19 selftests: x86: test_FISTTP_32 [PASS]
selftests: x86: vdso_restorer_32
========================================
[OK] SA_SIGINFO handler returned successfully
[OK] !SA_SIGINFO handler returned successfully
ok 1..20 selftests: x86: vdso_restorer_32 [PASS]
selftests: x86: ldt_gdt_32
========================================
[NOTE] set_thread_area is available; will use GDT index 13
[OK] LDT entry 0 has AR 0x0040FB00 and limit 0x0000000A
[OK] LDT entry 0 has AR 0x00C0FB00 and limit 0x0000AFFF
[OK] LDT entry 1 is invalid
[OK] LDT entry 2 has AR 0x00C0FB00 and limit 0x0000AFFF
[OK] LDT entry 1 is invalid
[OK] LDT entry 2 has AR 0x00C0FB00 and limit 0x0000AFFF
[OK] LDT entry 2 has AR 0x00D0FB00 and limit 0x0000AFFF
[OK] LDT entry 2 has AR 0x00D07B00 and limit 0x0000AFFF
[OK] LDT entry 2 has AR 0x00907B00 and limit 0x0000AFFF
[OK] LDT entry 2 has AR 0x00D07300 and limit 0x0000AFFF
[OK] LDT entry 2 has AR 0x00D07100 and limit 0x0000AFFF
[OK] LDT entry 2 has AR 0x00D07500 and limit 0x0000AFFF
[OK] LDT entry 2 has AR 0x00507700 and limit 0x0000000A
[OK] LDT entry 2 has AR 0x00507F00 and limit 0x0000000A
[OK] LDT entry 2 has AR 0x00507D00 and limit 0x0000000A
[OK] LDT entry 2 has AR 0x00507B00 and limit 0x0000000A
[OK] LDT entry 2 has AR 0x00507900 and limit 0x0000000A
[RUN] Test fork
[OK] LDT entry 2 has AR 0x00507900 and limit 0x0000000A
[OK] LDT entry 1 is invalid
[OK] LDT entry 0 is invalid
[NOTE] set_thread_area is available; will use GDT index 13
[OK] LDT entry 0 has AR 0x0040FB00 and limit 0x0000000A
[OK] LDT entry 0 has AR 0x00C0FB00 and limit 0x0000AFFF
[OK] LDT entry 1 is invalid
[OK] LDT entry 2 has AR 0x00C0FB00 and limit 0x0000AFFF
[OK] LDT entry 1 is invalid
[OK] LDT entry 2 has AR 0x00C0FB00 and limit 0x0000AFFF
[OK] LDT entry 2 has AR 0x00D0FB00 and limit 0x0000AFFF
[OK] LDT entry 2 has AR 0x00D07B00 and limit 0x0000AFFF
[OK] LDT entry 2 has AR 0x00907B00 and limit 0x0000AFFF
[OK] LDT entry 2 has AR 0x00D07300 and limit 0x0000AFFF
[OK] LDT entry 2 has AR 0x00D07100 and limit 0x0000AFFF
[OK] LDT entry 2 has AR 0x00D07500 and limit 0x0000AFFF
[OK] LDT entry 2 has AR 0x00507700 and limit 0x0000000A
[OK] LDT entry 2 has AR 0x00507F00 and limit 0x0000000A
[OK] LDT entry 2 has AR 0x00507D00 and limit 0x0000000A
[OK] LDT entry 2 has AR 0x00507B00 and limit 0x0000000A
[OK] LDT entry 2 has AR 0x00507900 and limit 0x0000000A
[RUN] Test fork
[OK] Child succeeded
[RUN] Test size
[DONE] Size test
[OK] modify_ldt failure 22
[OK] LDT entry 0 has AR 0x0000F300 and limit 0x00000000
[OK] LDT entry 0 has AR 0x00007300 and limit 0x00000000
[OK] LDT entry 0 has AR 0x0000F100 and limit 0x00000000
[OK] LDT entry 0 has AR 0x00007300 and limit 0x00000000
[OK] LDT entry 0 has AR 0x00007100 and limit 0x00000001
[OK] LDT entry 0 has AR 0x00007100 and limit 0x00000000
[OK] LDT entry 0 is invalid
[OK] LDT entry 0 has AR 0x0040F300 and limit 0x000FFFFF
[OK] GDT entry 13 has AR 0x0040F300 and limit 0x000FFFFF
[OK] LDT entry 0 has AR 0x00C0F300 and limit 0xFFFFFFFF
[OK] GDT entry 13 has AR 0x00C0F300 and limit 0xFFFFFFFF
[OK] LDT entry 0 has AR 0x00C0F100 and limit 0xFFFFFFFF
[OK] GDT entry 13 has AR 0x00C0F100 and limit 0xFFFFFFFF
[OK] LDT entry 0 has AR 0x00C0F700 and limit 0xFFFFFFFF
[OK] GDT entry 13 has AR 0x00C0F700 and limit 0xFFFFFFFF
[OK] LDT entry 0 has AR 0x00C0F500 and limit 0xFFFFFFFF
[OK] GDT entry 13 has AR 0x00C0F500 and limit 0xFFFFFFFF
[OK] LDT entry 0 is invalid
[RUN] Cross-CPU LDT invalidation
[OK] All 5 iterations succeeded
[RUN] Test exec
[OK] LDT entry 0 has AR 0x0040FB00 and limit 0x0000002A
[OK] Child succeeded
[OK] Invalidate DS with set_thread_area: new DS = 0x0
[OK] Invalidate ES with set_thread_area: new ES = 0x0
[OK] Invalidate FS with set_thread_area: new FS = 0x0
[OK] Invalidate GS with set_thread_area: new GS = 0x0
ok 1..21 selftests: x86: ldt_gdt_32 [PASS]
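Every "LDT entry N has AR ... and limit ..." line above is the test writing a descriptor with modify_ldt(2) and reading back what the kernel actually installed. The write side, for one illustrative segment (a sketch; the field values are examples, not the test's):

    /* sketch: install one small 32-bit data segment in LDT slot 0 */
    #define _GNU_SOURCE
    #include <asm/ldt.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int install_ldt_entry(void)
    {
            struct user_desc desc = {
                    .entry_number    = 0,
                    .base_addr       = 0,
                    .limit           = 10,  /* byte-granular, tiny */
                    .seg_32bit       = 1,
                    .contents        = 0,   /* data, expand-up */
                    .read_exec_only  = 0,
                    .limit_in_pages  = 0,
                    .seg_not_present = 0,
                    .useable         = 0,
            };

            /* 0x11 = write entry using the modern descriptor format */
            return syscall(SYS_modify_ldt, 0x11, &desc, sizeof(desc));
    }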
selftests: x86: ptrace_syscall_32
========================================
[RUN] Check int80 return regs
[OK] getpid() preserves regs
[OK] kill(getpid(), SIGUSR1) preserves regs
[RUN] Check AT_SYSINFO return regs
[OK] getpid() preserves regs
[OK] kill(getpid(), SIGUSR1) preserves regs
[RUN] ptrace-induced syscall restart
[RUN] SYSEMU
[OK] Initial nr and args are correct
[RUN] Restart the syscall (ip = 0xf7f0b949)
[OK] Restarted nr and args are correct
[RUN] Change nr and args and restart the syscall (ip = 0xf7f0b949)
[OK] Replacement nr and args are correct
[OK] Child exited cleanly
[RUN] kernel syscall restart under ptrace
[RUN] SYSCALL
[OK] Initial nr and args are correct
[RUN] SYSCALL
[OK] Args after SIGUSR1 are correct (ax = -514)
[OK] Child got SIGUSR1
[RUN] Step again
[OK] pause(2) restarted correctly
ok 1..22 selftests: x86: ptrace_syscall_32 [PASS]
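The restart checks work by parking the child at every syscall boundary and rewriting its registers. A skeleton of the tracer loop (a sketch; the child is assumed to have called PTRACE_TRACEME and raised SIGSTOP first):

    /* sketch: stop the child at each syscall entry/exit and peek at
     * its registers */
    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <sys/user.h>
    #include <sys/wait.h>

    int trace_syscalls(pid_t pid)
    {
            struct user_regs_struct regs;
            int status;

            waitpid(pid, &status, 0);               /* initial stop */
            for (;;) {
                    if (ptrace(PTRACE_SYSCALL, pid, 0, 0))
                            return -1;
                    waitpid(pid, &status, 0);
                    if (WIFEXITED(status))
                            return 0;
                    ptrace(PTRACE_GETREGS, pid, 0, &regs);
                    /* regs.orig_rax is the nr; PTRACE_SETREGS after
                     * editing is how the restart cases fiddle nr/args */
            }
    }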
selftests: x86: single_step_syscall_64
========================================
[RUN] Set TF and check nop
[OK] Survived with TF set and 9 traps
[RUN] Set TF and check syscall-less opportunistic sysret
[OK] Survived with TF set and 12 traps
[RUN] Set TF and check int80
[OK] Survived with TF set and 9 traps
[RUN] Set TF and check a fast syscall
[OK] Survived with TF set and 22 traps
[RUN] Fast syscall with TF cleared
[OK] Nothing unexpected happened
ok 1..23 selftests: x86: single_step_syscall_64 [PASS]
selftests: x86: sysret_ss_attrs_64
========================================
[RUN] Syscalls followed by SS validation
[OK] We survived
ok 1..24 selftests: x86: sysret_ss_attrs_64 [PASS]
selftests: x86: syscall_nt_64
========================================
[RUN] Set NT and issue a syscall
[OK] The syscall worked and flags are still set
[RUN] Set NT|TF and issue a syscall
[OK] The syscall worked and flags are still set
ok 1..25 selftests: x86: syscall_nt_64 [PASS]
selftests: x86: test_mremap_vdso_64
========================================
AT_SYSINFO_EHDR is 0x7ffe44b92000
[NOTE] Moving vDSO: [0x7ffe44b92000, 0x7ffe44b93000] -> [0x7fb552928000, 0x7fb552929000]
[OK]
ok 1..26 selftests: x86: test_mremap_vdso_64 [PASS]
selftests: x86: check_initial_reg_state_64
========================================
[OK] All GPRs except SP are 0
[OK] FLAGS is 0x202
ok 1..27 selftests: x86: check_initial_reg_state_64 [PASS]
selftests: x86: sigreturn_64
========================================
[OK] set_thread_area refused 16-bit data
[OK] set_thread_area refused 16-bit data
[RUN] Valid sigreturn: 64-bit CS (33), 32-bit SS (2b, GDT)
[OK] all registers okay
[RUN] Valid sigreturn: 32-bit CS (23), 32-bit SS (2b, GDT)
[NOTE] SP: 8badf00d5aadc0de -> 5aadc0de
[OK] all registers okay
[RUN] Valid sigreturn: 16-bit CS (37), 32-bit SS (2b, GDT)
[NOTE] SP: 8badf00d5aadc0de -> 5aadc0de
[OK] all registers okay
[RUN] Valid sigreturn: 64-bit CS (33), 16-bit SS (3f)
[OK] all registers okay
[RUN] Valid sigreturn: 32-bit CS (23), 16-bit SS (3f)
[NOTE] SP: 8badf00d5aadc0de -> 5aadc0de
[OK] all registers okay
[RUN] Valid sigreturn: 16-bit CS (37), 16-bit SS (3f)
[NOTE] SP: 8badf00d5aadc0de -> 5aadc0de
[OK] all registers okay
[RUN] Valid sigreturn: 32-bit CS (23), 32-bit SS (2b, GDT)
Corrupting SS on return to 64-bit mode
[NOTE] SP: 8badf00d5aadc0de -> 5aadc0de
[OK] all registers okay
[RUN] Valid sigreturn: 32-bit CS (23), 16-bit SS (3f)
Corrupting SS on return to 64-bit mode
[NOTE] SP: 8badf00d5aadc0de -> 5aadc0de
[OK] all registers okay
[RUN] 64-bit CS (33), bogus SS (47)
[OK] Got #GP(0x0) (i.e. Segmentation fault)
[RUN] 32-bit CS (23), bogus SS (47)
[OK] Got #GP(0x0) (i.e. Segmentation fault)
[RUN] 16-bit CS (37), bogus SS (47)
[OK] Got #GP(0x0) (i.e. Segmentation fault)
[RUN] 64-bit CS (33), bogus SS (33)
[OK] Got #GP(0x30) (i.e. GDT index 6, Segmentation fault)
[RUN] 32-bit CS (23), bogus SS (33)
[OK] Got #GP(0x30) (i.e. GDT index 6, Segmentation fault)
[RUN] 16-bit CS (37), bogus SS (33)
[OK] Got #GP(0x30) (i.e. GDT index 6, Segmentation fault)
[RUN] 32-bit CS (4f), bogus SS (2b)
[OK] Got #NP(0x4c) (i.e. LDT index 9, Bus error)
[RUN] 32-bit CS (23), bogus SS (57)
[OK] Got #GP(0x0) (i.e. Segmentation fault)
[RUN] Clear UC_STRICT_RESTORE_SS and corrupt SS
[OK] It worked
ok 1..28 selftests: x86: sigreturn_64 [PASS]
selftests: x86: iopl_64
========================================
child: set IOPL to 3
[RUN] child: write to 0x80
[OK] Child succeeded
[RUN] parent: write to 0x80 (should fail)
[OK] write was denied
iopl(3)
Drop privileges
[RUN] iopl(3) unprivileged but with IOPL==3
[RUN] iopl(0) unprivileged
[RUN] iopl(3) unprivileged
[OK] Failed as expected
ok 1..29 selftests: x86: iopl_64 [PASS]
selftests: x86: mpx-mini-test_64
========================================
processor lacks MPX XSTATE(s), can not run MPX tests
XSAVE is supported by HW & OS
XSAVE processor supported state mask: 0x7
XSAVE OS supported state mask: 0x7
ok 1..30 selftests: x86: mpx-mini-test_64 [PASS]
selftests: x86: ioperm_64
========================================
[OK] outb to 0x80 failed
[OK] outb to 0xed failed
[RUN] enable 0x80
[OK] outb to 0x80 worked
[OK] outb to 0xed failed
[RUN] disable 0x80
[OK] outb to 0x80 failed
[OK] outb to 0xed failed
[RUN] child: check that we inherited permissions
[OK] outb to 0x80 worked
[OK] outb to 0xed failed
[OK] outb to 0x80 failed
[OK] outb to 0xed failed
[RUN] enable 0x80
[OK] outb to 0x80 worked
[OK] outb to 0xed failed
[RUN] disable 0x80
[OK] outb to 0x80 failed
[OK] outb to 0xed failed
[OK] Child succeeded
Drop privileges
[RUN] disable 0x80
[OK] it worked
[RUN] enable 0x80 again
[OK] it failed
ok 1..31 selftests: x86: ioperm_64 [PASS]
selftests: x86: protection_keys_64
========================================
has pku: 0
running PKEY tests for unsupported CPU/OS
ok 1..32 selftests: x86: protection_keys_64 [PASS]
selftests: x86: test_vdso_64
========================================
[RUN] Testing clock_gettime for clock CLOCK_REALTIME (0)...
1551252326.064454763 1551252326.064460618 1551252326.064461264
[RUN] Testing clock_gettime for clock CLOCK_MONOTONIC (1)...
633.452362833 633.452363285 633.452363750
[RUN] Testing clock_gettime for clock CLOCK_PROCESS_CPUTIME_ID (2)...
0.000703738 0.000705015 0.000705910
[RUN] Testing clock_gettime for clock CLOCK_THREAD_CPUTIME_ID (3)...
0.000708187 0.000709083 0.000709956
[RUN] Testing clock_gettime for clock CLOCK_MONOTONIC_RAW (4)...
633.044519105 633.044519915 633.044520670
[RUN] Testing clock_gettime for clock CLOCK_REALTIME_COARSE (5)...
1551252326.061853650 1551252326.061853650 1551252326.061853650
[RUN] Testing clock_gettime for clock CLOCK_MONOTONIC_COARSE (6)...
633.449752162 633.449752162 633.449752162
[RUN] Testing clock_gettime for clock CLOCK_BOOTTIME (7)...
633.452383428 633.452384219 633.452385002
[RUN] Testing clock_gettime for clock CLOCK_REALTIME_ALARM (8)...
1551252326.064488798 1551252326.064489620 1551252326.064490434
[RUN] Testing clock_gettime for clock CLOCK_BOOTTIME_ALARM (9)...
633.452390937 633.452391757 633.452392569
[RUN] Testing clock_gettime for clock CLOCK_SGI_CYCLE (10)...
[OK] No such clock.
[RUN] Testing clock_gettime for clock CLOCK_TAI (11)...
1551252326.064499465 1551252326.064499907 1551252326.064500337
[RUN] Testing clock_gettime for clock invalid (-1)...
[OK] No such clock.
[RUN] Testing clock_gettime for clock invalid (-2147483648)...
[OK] No such clock.
[RUN] Testing clock_gettime for clock invalid (2147483647)...
[OK] No such clock.
[RUN] Testing gettimeofday...
1551252326.064508 1551252326.064508 1551252326.064509
[OK] timezones match: minuteswest=-480, dsttime=0
[RUN] Testing getcpu...
[OK] CPU 0: syscall: cpu 0, node 0 vdso: cpu 0, node 0 vsyscall: cpu 0, node 0
[OK] CPU 1: syscall: cpu 1, node 0 vdso: cpu 1, node 0 vsyscall: cpu 1, node 0
ok 1..33 selftests: x86: test_vdso_64 [PASS]
selftests: x86: test_vsyscall_64
========================================
vsyscall map: ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall]
vsyscall permissions are r-x
[RUN] test gettimeofday()
vDSO time offsets: 0.000005 0.000002
[OK] vDSO gettimeofday()'s timeval was okay
vsyscall time offsets: 0.000006 0.000001
[OK] vsyscall gettimeofday()'s timeval was okay
[RUN] test time()
[OK] vDSO time() is okay
[OK] vsyscall time() is okay
[RUN] getcpu() on CPU 0
[OK] vDSO reported correct CPU
[OK] vDSO reported correct node
[OK] vsyscall reported correct CPU
[OK] vsyscall reported correct node
[RUN] getcpu() on CPU 1
[OK] vDSO reported correct CPU
[OK] vDSO reported correct node
[OK] vsyscall reported correct CPU
[OK] vsyscall reported correct node
[RUN] Checking read access to the vsyscall page
[OK] got expected result
[RUN] checking that vsyscalls are emulated
[OK] vsyscalls are emulated (1 instructions in vsyscall page)
ok 1..34 selftests: x86: test_vsyscall_64 [PASS]
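The fsgsbase_64 run that follows pairs ARCH_SET_GS with forced reschedules and checks that GSBASE and the GS selector survive. The syscall pair at the center of it (a sketch):

    /* sketch: the ARCH_SET_GS/ARCH_GET_GS round trip */
    #define _GNU_SOURCE
    #include <asm/prctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int set_and_check_gsbase(unsigned long want)
    {
            unsigned long got = 0;

            if (syscall(SYS_arch_prctl, ARCH_SET_GS, want))
                    return -1;
            if (syscall(SYS_arch_prctl, ARCH_GET_GS, &got))
                    return -1;
            return got == want ? 0 : -1;    /* did GSBASE survive? */
    }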
selftests: x86: fsgsbase_64
========================================
[RUN] ARCH_SET_GS to 0x0
[OK] GSBASE was set as expected (selector 0x0)
[OK] ARCH_GET_GS worked as expected (selector 0x0)
[RUN] ARCH_SET_GS to 0x1
[OK] GSBASE was set as expected (selector 0x0)
[OK] ARCH_GET_GS worked as expected (selector 0x0)
[RUN] ARCH_SET_GS to 0x200000000
[OK] GSBASE was set as expected (selector 0x0)
[OK] ARCH_GET_GS worked as expected (selector 0x0)
[RUN] ARCH_SET_GS to 0x0
[OK] GSBASE was set as expected (selector 0x0)
[OK] ARCH_GET_GS worked as expected (selector 0x0)
[RUN] ARCH_SET_GS to 0x200000000
[OK] GSBASE was set as expected (selector 0x0)
[OK] ARCH_GET_GS worked as expected (selector 0x0)
[RUN] ARCH_SET_GS to 0x1
[OK] GSBASE was set as expected (selector 0x0)
[OK] ARCH_GET_GS worked as expected (selector 0x0)
[RUN] ARCH_SET_GS to 0x0 then mov 0 to %gs
[OK] GSBASE is 0x0
[RUN] ARCH_SET_GS to 0x1 then mov 0 to %gs
[OK] GSBASE is 0x0
[RUN] ARCH_SET_GS to 0x200000000 then mov 0 to %gs
[OK] GSBASE is 0x0
[RUN] ARCH_SET_GS to 0x0 then mov 0 to %gs and schedule
[OK] GSBASE is 0x0
[RUN] ARCH_SET_GS to 0x1 then mov 0 to %gs and schedule
[OK] GSBASE is 0x0
[RUN] ARCH_SET_GS to 0x200000000 then mov 0 to %gs and schedule
[OK] GSBASE is 0x0
[RUN] ARCH_SET_GS(0x0), then schedule to 0x0
other thread: ARCH_SET_GS(0x0) -- sel is 0x0
[OK] GS/BASE remained 0x0/0x0
[RUN] ARCH_SET_GS(0x0), then schedule to 0x0
Before schedule, set selector to 0x1
other thread: ARCH_SET_GS(0x0) -- sel is 0x0
[OK] GS/BASE remained 0x1/0x0
[RUN] ARCH_SET_GS(0x0), then schedule to 0x0
Before schedule, set selector to 0x2
other thread: ARCH_SET_GS(0x0) -- sel is 0x0
[OK] GS/BASE remained 0x2/0x0
[RUN] ARCH_SET_GS(0x0), then schedule to 0x0
Before schedule, set selector to 0x3
other thread: ARCH_SET_GS(0x0) -- sel is 0x0
[OK] GS/BASE remained 0x3/0x0
[RUN] ARCH_SET_GS(0x0), then schedule to 0x0
Before schedule, set selector to 0x2b
other thread: ARCH_SET_GS(0x0) -- sel is 0x0
[OK] GS/BASE remained 0x2b/0x0
[RUN] ARCH_SET_GS(0x0), then schedule to 0xa1fa5f343cb85fa4
other thread: ARCH_SET_GS(0x0) and clear gs -- sel is 0x0
[OK] GS/BASE remained 0x0/0x0
[RUN] ARCH_SET_GS(0x0), then schedule to 0xa1fa5f343cb85fa4
Before schedule, set selector to 0x1
other thread: ARCH_SET_GS(0x0) and clear gs -- sel is 0x0
[OK] GS/BASE remained 0x1/0x0
[RUN] ARCH_SET_GS(0x0), then schedule to 0xa1fa5f343cb85fa4
Before schedule, set selector to 0x2
other thread: ARCH_SET_GS(0x0) and clear gs -- sel is 0x0
[OK] GS/BASE remained 0x2/0x0
[RUN] ARCH_SET_GS(0x0), then schedule to 0xa1fa5f343cb85fa4
Before schedule, set selector to 0x3
other thread: ARCH_SET_GS(0x0) and clear gs -- sel is 0x0
[OK] GS/BASE remained 0x3/0x0
[RUN] ARCH_SET_GS(0x0), then schedule to 0xa1fa5f343cb85fa4
Before schedule, set selector to 0x2b
other thread: ARCH_SET_GS(0x0) and clear gs -- sel is 0x0
[OK] GS/BASE remained 0x2b/0x0
[RUN] ARCH_SET_GS(0x0), then schedule to 0x1
other thread: ARCH_SET_GS(0x1) -- sel is 0x0
[OK] GS/BASE remained 0x0/0x0
[RUN] ARCH_SET_GS(0x0), then schedule to 0x1
Before schedule, set selector to 0x1
other thread: ARCH_SET_GS(0x1) -- sel is 0x0
[OK] GS/BASE remained 0x1/0x0
[RUN] ARCH_SET_GS(0x0), then schedule to 0x1
Before schedule, set selector to 0x2
other thread: ARCH_SET_GS(0x1) -- sel is 0x0
[OK] GS/BASE remained 0x2/0x0
[RUN] ARCH_SET_GS(0x0), then schedule to 0x1
Before schedule, set selector to 0x3
other thread: ARCH_SET_GS(0x1) -- sel is 0x0
[OK] GS/BASE remained 0x3/0x0
[RUN] ARCH_SET_GS(0x0), then schedule to 0x1
Before schedule, set selector to 0x2b
other thread: ARCH_SET_GS(0x1) -- sel is 0x0
[OK] GS/BASE remained 0x2b/0x0
[RUN] ARCH_SET_GS(0x0), then schedule to 0x200000000
other thread: ARCH_SET_GS(0x200000000) -- sel is 0x0
[OK] GS/BASE remained 0x0/0x0
[RUN] ARCH_SET_GS(0x0), then schedule to 0x200000000
Before schedule, set selector to 0x1
other thread: ARCH_SET_GS(0x200000000) -- sel is 0x0
[OK] GS/BASE remained 0x1/0x0
[RUN] ARCH_SET_GS(0x0), then schedule to 0x200000000
Before schedule, set selector to 0x2
other thread: ARCH_SET_GS(0x200000000) -- sel is 0x0
[OK] GS/BASE remained 0x2/0x0
[RUN] ARCH_SET_GS(0x0), then schedule to 0x200000000
Before schedule, set selector to 0x3
other thread: ARCH_SET_GS(0x200000000) -- sel is 0x0
[OK] GS/BASE remained 0x3/0x0
[RUN] ARCH_SET_GS(0x0), then schedule to 0x200000000
Before schedule, set selector to 0x2b
other thread: ARCH_SET_GS(0x200000000) -- sel is 0x0
[OK] GS/BASE remained 0x2b/0x0
[RUN] ARCH_SET_GS(0x0) and clear gs, then schedule to 0x0
other thread: ARCH_SET_GS(0x0) -- sel is 0x0
[OK] GS/BASE remained 0x0/0x0
[RUN] ARCH_SET_GS(0x0) and clear gs, then schedule to 0x0
Before schedule, set selector to 0x1
other thread: ARCH_SET_GS(0x0) -- sel is 0x0
[OK] GS/BASE remained 0x1/0x0
[RUN] ARCH_SET_GS(0x0) and clear gs, then schedule to 0x0
Before schedule, set selector to 0x2
other thread: ARCH_SET_GS(0x0) -- sel is 0x0
[OK] GS/BASE remained 0x2/0x0
[RUN] ARCH_SET_GS(0x0) and clear gs, then schedule to 0x0
Before schedule, set selector to 0x3
other thread: ARCH_SET_GS(0x0) -- sel is 0x0
[OK] GS/BASE remained 0x3/0x0
[RUN] ARCH_SET_GS(0x0) and clear gs, then schedule to 0x0
Before schedule, set selector to 0x2b
other thread: ARCH_SET_GS(0x0) -- sel is 0x0
[OK] GS/BASE remained 0x2b/0x0
[RUN] ARCH_SET_GS(0x0) and clear gs, then schedule to 0xa1fa5f343cb85fa4
other thread: ARCH_SET_GS(0x0) and clear gs -- sel is 0x0
[OK] GS/BASE remained 0x0/0x0
[RUN] ARCH_SET_GS(0x0) and clear gs, then schedule to 0xa1fa5f343cb85fa4
Before schedule, set selector to 0x1
other thread: ARCH_SET_GS(0x0) and clear gs -- sel is 0x0
[OK] GS/BASE remained 0x1/0x0
[RUN] ARCH_SET_GS(0x0) and clear gs, then schedule to 0xa1fa5f343cb85fa4
Before schedule, set selector to 0x2
other thread: ARCH_SET_GS(0x0) and clear gs -- sel is 0x0
[OK] GS/BASE remained 0x2/0x0
[RUN] ARCH_SET_GS(0x0) and clear gs, then schedule to 0xa1fa5f343cb85fa4
Before schedule, set selector to 0x3
other thread: ARCH_SET_GS(0x0) and clear gs -- sel is 0x0
[OK] GS/BASE remained 0x3/0x0
[RUN] ARCH_SET_GS(0x0) and clear gs, then schedule to 0xa1fa5f343cb85fa4
Before schedule, set selector to 0x2b
other thread: ARCH_SET_GS(0x0) and clear gs -- sel is 0x0
[OK] GS/BASE remained 0x2b/0x0
[RUN] ARCH_SET_GS(0x0) and clear gs, then schedule to 0x1
other thread: ARCH_SET_GS(0x1) -- sel is 0x0
[OK] GS/BASE remained 0x0/0x0
[RUN] ARCH_SET_GS(0x0) and clear gs, then schedule to 0x1
Before schedule, set selector to 0x1
other thread: ARCH_SET_GS(0x1) -- sel is 0x0
[OK] GS/BASE remained 0x1/0x0
[RUN] ARCH_SET_GS(0x0) and clear gs, then schedule to 0x1
Before schedule, set selector to 0x2
other thread: ARCH_SET_GS(0x1) -- sel is 0x0
[OK] GS/BASE remained 0x2/0x0
[RUN] ARCH_SET_GS(0x0) and clear gs, then schedule to 0x1
Before schedule, set selector to 0x3
other thread: ARCH_SET_GS(0x1) -- sel is 0x0
[OK] GS/BASE remained 0x3/0x0
[RUN] ARCH_SET_GS(0x0) and clear gs, then schedule to 0x1
Before schedule, set selector to 0x2b
other thread: ARCH_SET_GS(0x1) -- sel is 0x0
[OK] GS/BASE remained 0x2b/0x0
[RUN] ARCH_SET_GS(0x0) and clear gs, then schedule to 0x200000000
other thread: ARCH_SET_GS(0x200000000) -- sel is 0x0
[OK] GS/BASE remained 0x0/0x0
[RUN] ARCH_SET_GS(0x0) and clear gs, then schedule to 0x200000000
Before schedule, set selector to 0x1
other thread: ARCH_SET_GS(0x200000000) -- sel is 0x0
[OK] GS/BASE remained 0x1/0x0
[RUN] ARCH_SET_GS(0x0) and clear gs, then schedule to 0x200000000
Before schedule, set selector to 0x2
other thread: ARCH_SET_GS(0x200000000) -- sel is 0x0
[OK] GS/BASE remained 0x2/0x0
[RUN] ARCH_SET_GS(0x0) and clear gs, then schedule to 0x200000000
Before schedule, set selector to 0x3
other thread: ARCH_SET_GS(0x200000000) -- sel is 0x0
[OK] GS/BASE remained 0x3/0x0
[RUN] ARCH_SET_GS(0x0) and clear gs, then schedule to 0x200000000
Before schedule, set selector to 0x2b
other thread: ARCH_SET_GS(0x200000000) -- sel is 0x0
[OK] GS/BASE remained 0x2b/0x0
[RUN] ARCH_SET_GS(0x1), then schedule to 0x0
other thread: ARCH_SET_GS(0x0) -- sel is 0x0
[OK] GS/BASE remained 0x0/0x1
[RUN] ARCH_SET_GS(0x1), then schedule to 0x0
Before schedule, set selector to 0x1
other thread: ARCH_SET_GS(0x0) -- sel is 0x0
[OK] GS/BASE remained 0x1/0x0
[RUN] ARCH_SET_GS(0x1), then schedule to 0x0
Before schedule, set selector to 0x2
other thread: ARCH_SET_GS(0x0) -- sel is 0x0
[OK] GS/BASE remained 0x2/0x0
[RUN] ARCH_SET_GS(0x1), then schedule to 0x0
Before schedule, set selector to 0x3
other thread: ARCH_SET_GS(0x0) -- sel is 0x0
[OK] GS/BASE remained 0x3/0x0
[RUN] ARCH_SET_GS(0x1), then schedule to 0x0
Before schedule, set selector to 0x2b
other thread: ARCH_SET_GS(0x0) -- sel is 0x0
[OK] GS/BASE remained 0x2b/0x0
[RUN] ARCH_SET_GS(0x1), then schedule to 0xa1fa5f343cb85fa4
other thread: ARCH_SET_GS(0x0) and clear gs -- sel is 0x0
[OK] GS/BASE remained 0x0/0x1
[RUN] ARCH_SET_GS(0x1), then schedule to 0xa1fa5f343cb85fa4
Before schedule, set selector to 0x1
other thread: ARCH_SET_GS(0x0) and clear gs -- sel is 0x0
[OK] GS/BASE remained 0x1/0x0
[RUN] ARCH_SET_GS(0x1), then schedule to 0xa1fa5f343cb85fa4
Before schedule, set selector to 0x2
other thread: ARCH_SET_GS(0x0) and clear gs -- sel is 0x0
[OK] GS/BASE remained 0x2/0x0
[RUN] ARCH_SET_GS(0x1), then schedule to 0xa1fa5f343cb85fa4
Before schedule, set selector to 0x3
other thread: ARCH_SET_GS(0x0) and clear gs -- sel is 0x0
[OK] GS/BASE remained 0x3/0x0
[RUN] ARCH_SET_GS(0x1), then schedule to 0xa1fa5f343cb85fa4
Before schedule, set selector to 0x2b
other thread: ARCH_SET_GS(0x0) and clear gs -- sel is 0x0
[OK] GS/BASE remained 0x2b/0x0
[RUN] ARCH_SET_GS(0x1), then schedule to 0x1
other thread: ARCH_SET_GS(0x1) -- sel is 0x0
[OK] GS/BASE remained 0x0/0x1
[RUN] ARCH_SET_GS(0x1), then schedule to 0x1
Before schedule, set selector to 0x1
other thread: ARCH_SET_GS(0x1) -- sel is 0x0
[OK] GS/BASE remained 0x1/0x0
[RUN] ARCH_SET_GS(0x1), then schedule to 0x1
Before schedule, set selector to 0x2
other thread: ARCH_SET_GS(0x1) -- sel is 0x0
[OK] GS/BASE remained 0x2/0x0
[RUN] ARCH_SET_GS(0x1), then schedule to 0x1
Before schedule, set selector to 0x3
other thread: ARCH_SET_GS(0x1) -- sel is 0x0
[OK] GS/BASE remained 0x3/0x0
[RUN] ARCH_SET_GS(0x1), then schedule to 0x1
Before schedule, set selector to 0x2b
other thread: ARCH_SET_GS(0x1) -- sel is 0x0
[OK] GS/BASE remained 0x2b/0x0
[RUN] ARCH_SET_GS(0x1), then schedule to 0x200000000
other thread: ARCH_SET_GS(0x200000000) -- sel is 0x0
[OK] GS/BASE remained 0x0/0x1
[RUN] ARCH_SET_GS(0x1), then schedule to 0x200000000
Before schedule, set selector to 0x1
other thread: ARCH_SET_GS(0x200000000) -- sel is 0x0
[OK] GS/BASE remained 0x1/0x0
[RUN] ARCH_SET_GS(0x1), then schedule to 0x200000000
Before schedule, set selector to 0x2
other thread: ARCH_SET_GS(0x200000000) -- sel is 0x0
[OK] GS/BASE remained 0x2/0x0
[RUN] ARCH_SET_GS(0x1), then schedule to 0x200000000
Before schedule, set selector to 0x3
other thread: ARCH_SET_GS(0x200000000) -- sel is 0x0
[OK] GS/BASE remained 0x3/0x0
[RUN] ARCH_SET_GS(0x1), then schedule to 0x200000000
Before schedule, set selector to 0x2b
other thread: ARCH_SET_GS(0x200000000) -- sel is 0x0
[OK] GS/BASE remained 0x2b/0x0
[RUN] ARCH_SET_GS(0x200000000), then schedule to 0x0
other thread: ARCH_SET_GS(0x0) -- sel is 0x0
[OK] GS/BASE remained 0x0/0x200000000
[RUN] ARCH_SET_GS(0x200000000), then schedule to 0x0
Before schedule, set selector to 0x1
other thread: ARCH_SET_GS(0x0) -- sel is 0x0
[OK] GS/BASE remained 0x1/0x0
[RUN] ARCH_SET_GS(0x200000000), then schedule to 0x0
Before schedule, set selector to 0x2
other thread: ARCH_SET_GS(0x0) -- sel is 0x0
[OK] GS/BASE remained 0x2/0x0
[RUN] ARCH_SET_GS(0x200000000), then schedule to 0x0
Before schedule, set selector to 0x3
other thread: ARCH_SET_GS(0x0) -- sel is 0x0
[OK] GS/BASE remained 0x3/0x0
[RUN] ARCH_SET_GS(0x200000000), then schedule to 0x0
Before schedule, set selector to 0x2b
other thread: ARCH_SET_GS(0x0) -- sel is 0x0
[OK] GS/BASE remained 0x2b/0x0
[RUN] ARCH_SET_GS(0x200000000), then schedule to 0xa1fa5f343cb85fa4
other thread: ARCH_SET_GS(0x0) and clear gs -- sel is 0x0
[OK] GS/BASE remained 0x0/0x200000000
[RUN] ARCH_SET_GS(0x200000000), then schedule to 0xa1fa5f343cb85fa4
Before schedule, set selector to 0x1
other thread: ARCH_SET_GS(0x0) and clear gs -- sel is 0x0
[OK] GS/BASE remained 0x1/0x0
[RUN] ARCH_SET_GS(0x200000000), then schedule to 0xa1fa5f343cb85fa4
Before schedule, set selector to 0x2
other thread: ARCH_SET_GS(0x0) and clear gs -- sel is 0x0
[OK] GS/BASE remained 0x2/0x0
[RUN] ARCH_SET_GS(0x200000000), then schedule to 0xa1fa5f343cb85fa4
Before schedule, set selector to 0x3
other thread: ARCH_SET_GS(0x0) and clear gs -- sel is 0x0
[OK] GS/BASE remained 0x3/0x0
[RUN] ARCH_SET_GS(0x200000000), then schedule to 0xa1fa5f343cb85fa4
Before schedule, set selector to 0x2b
other thread: ARCH_SET_GS(0x0) and clear gs -- sel is 0x0
[OK] GS/BASE remained 0x2b/0x0
[RUN] ARCH_SET_GS(0x200000000), then schedule to 0x1
other thread: ARCH_SET_GS(0x1) -- sel is 0x0
[OK] GS/BASE remained 0x0/0x200000000
[RUN] ARCH_SET_GS(0x200000000), then schedule to 0x1
Before schedule, set selector to 0x1
other thread: ARCH_SET_GS(0x1) -- sel is 0x0
[OK] GS/BASE remained 0x1/0x0
[RUN] ARCH_SET_GS(0x200000000), then schedule to 0x1
Before schedule, set selector to 0x2
other thread: ARCH_SET_GS(0x1) -- sel is 0x0
[OK] GS/BASE remained 0x2/0x0
[RUN] ARCH_SET_GS(0x200000000), then schedule to 0x1
Before schedule, set selector to 0x3
other thread: ARCH_SET_GS(0x1) -- sel is 0x0
[OK] GS/BASE remained 0x3/0x0
[RUN] ARCH_SET_GS(0x200000000), then schedule to 0x1
Before schedule, set selector to 0x2b
other thread: ARCH_SET_GS(0x1) -- sel is 0x0
[OK] GS/BASE remained 0x2b/0x0
[RUN] ARCH_SET_GS(0x200000000), then schedule to 0x200000000
other thread: ARCH_SET_GS(0x200000000) -- sel is 0x0
[OK] GS/BASE remained 0x0/0x200000000
[RUN] ARCH_SET_GS(0x200000000), then schedule to 0x200000000
Before schedule, set selector to 0x1
other thread: ARCH_SET_GS(0x200000000) -- sel is 0x0
[OK] GS/BASE remained 0x1/0x0
[RUN] ARCH_SET_GS(0x200000000), then schedule to 0x200000000
Before schedule, set selector to 0x2
other thread: ARCH_SET_GS(0x200000000) -- sel is 0x0
[OK] GS/BASE remained 0x2/0x0
[RUN] ARCH_SET_GS(0x200000000), then schedule to 0x200000000
Before schedule, set selector to 0x3
other thread: ARCH_SET_GS(0x200000000) -- sel is 0x0
[OK] GS/BASE remained 0x3/0x0
[RUN] ARCH_SET_GS(0x200000000), then schedule to 0x200000000
Before schedule, set selector to 0x2b
other thread: ARCH_SET_GS(0x200000000) -- sel is 0x0
[OK] GS/BASE remained 0x2b/0x0
[RUN] ARCH_SET_GS(0), clear gs, then manipulate GSBASE in a different thread
other thread: using LDT slot 0
[OK] GSBASE remained 0
ok 1..35 selftests: x86: fsgsbase_64 [PASS]
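The [RUN] lines above all follow one pattern: the main thread programs a GS base via arch_prctl(2) (optionally loading a selector afterwards), schedules against a second thread that sets its own GS state, and then checks that its selector and base survived the context switch. A minimal sketch of the primitives involved, assuming the standard x86-64 ARCH_SET_GS/ARCH_GET_GS interface; this illustrates the pattern rather than reproducing the selftest's exact code:

    #include <asm/prctl.h>      /* ARCH_SET_GS, ARCH_GET_GS */
    #include <sys/syscall.h>
    #include <unistd.h>

    static void set_gs_base(unsigned long base)
    {
        /* glibc has no arch_prctl wrapper; invoke the syscall directly. */
        syscall(SYS_arch_prctl, ARCH_SET_GS, base);
    }

    static unsigned long get_gs_base(void)
    {
        unsigned long base;

        syscall(SYS_arch_prctl, ARCH_GET_GS, &base);
        return base;
    }

    static unsigned short get_gs_sel(void)
    {
        unsigned short sel;

        /* Read the live GS selector straight from the register. */
        asm volatile ("mov %%gs, %0" : "=rm" (sel));
        return sel;
    }

Each '[OK] GS/BASE remained X/Y' line is the post-schedule comparison of these values against what was set before yielding; any difference would mean the other thread's GS state leaked across the switch.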
selftests: x86: sysret_rip_64
========================================
[RUN] sigreturn to 0x800000000000
[OK] Got SIGSEGV at RIP=0x800000000000
[RUN] sigreturn to 0x1000000000000
[OK] Got SIGSEGV at RIP=0x1000000000000
[RUN] sigreturn to 0x2000000000000
[OK] Got SIGSEGV at RIP=0x2000000000000
[RUN] sigreturn to 0x4000000000000
[OK] Got SIGSEGV at RIP=0x4000000000000
[RUN] sigreturn to 0x8000000000000
[OK] Got SIGSEGV at RIP=0x8000000000000
[RUN] sigreturn to 0x10000000000000
[OK] Got SIGSEGV at RIP=0x10000000000000
[RUN] sigreturn to 0x20000000000000
[OK] Got SIGSEGV at RIP=0x20000000000000
[RUN] sigreturn to 0x40000000000000
[OK] Got SIGSEGV at RIP=0x40000000000000
[RUN] sigreturn to 0x80000000000000
[OK] Got SIGSEGV at RIP=0x80000000000000
[RUN] sigreturn to 0x100000000000000
[OK] Got SIGSEGV at RIP=0x100000000000000
[RUN] sigreturn to 0x200000000000000
[OK] Got SIGSEGV at RIP=0x200000000000000
[RUN] sigreturn to 0x400000000000000
[OK] Got SIGSEGV at RIP=0x400000000000000
[RUN] sigreturn to 0x800000000000000
[OK] Got SIGSEGV at RIP=0x800000000000000
[RUN] sigreturn to 0x1000000000000000
[OK] Got SIGSEGV at RIP=0x1000000000000000
[RUN] sigreturn to 0x2000000000000000
[OK] Got SIGSEGV at RIP=0x2000000000000000
[RUN] sigreturn to 0x4000000000000000
[OK] Got SIGSEGV at RIP=0x4000000000000000
[RUN] sigreturn to 0x8000000000000000
[OK] Got SIGSEGV at RIP=0x8000000000000000
[RUN] Trying a SYSCALL that falls through to 0x7fffffffe000
[OK] We survived
[RUN] Trying a SYSCALL that falls through to 0x7ffffffff000
[OK] We survived
[RUN] Trying a SYSCALL that falls through to 0x800000000000
[OK] mremap to 0x7ffffffff000 failed
[RUN] Trying a SYSCALL that falls through to 0xfffffffff000
[OK] mremap to 0xffffffffe000 failed
[RUN] Trying a SYSCALL that falls through to 0x1000000000000
[OK] mremap to 0xfffffffff000 failed
[RUN] Trying a SYSCALL that falls through to 0x1fffffffff000
[OK] mremap to 0x1ffffffffe000 failed
[RUN] Trying a SYSCALL that falls through to 0x2000000000000
[OK] mremap to 0x1fffffffff000 failed
[RUN] Trying a SYSCALL that falls through to 0x3fffffffff000
[OK] mremap to 0x3ffffffffe000 failed
[RUN] Trying a SYSCALL that falls through to 0x4000000000000
[OK] mremap to 0x3fffffffff000 failed
[RUN] Trying a SYSCALL that falls through to 0x7fffffffff000
[OK] mremap to 0x7ffffffffe000 failed
[RUN] Trying a SYSCALL that falls through to 0x8000000000000
[OK] mremap to 0x7fffffffff000 failed
[RUN] Trying a SYSCALL that falls through to 0xffffffffff000
[OK] mremap to 0xfffffffffe000 failed
[RUN] Trying a SYSCALL that falls through to 0x10000000000000
[OK] mremap to 0xffffffffff000 failed
[RUN] Trying a SYSCALL that falls through to 0x1ffffffffff000
[OK] mremap to 0x1fffffffffe000 failed
[RUN] Trying a SYSCALL that falls through to 0x20000000000000
[OK] mremap to 0x1ffffffffff000 failed
[RUN] Trying a SYSCALL that falls through to 0x3ffffffffff000
[OK] mremap to 0x3fffffffffe000 failed
[RUN] Trying a SYSCALL that falls through to 0x40000000000000
[OK] mremap to 0x3ffffffffff000 failed
[RUN] Trying a SYSCALL that falls through to 0x7ffffffffff000
[OK] mremap to 0x7fffffffffe000 failed
[RUN] Trying a SYSCALL that falls through to 0x80000000000000
[OK] mremap to 0x7ffffffffff000 failed
[RUN] Trying a SYSCALL that falls through to 0xfffffffffff000
[OK] mremap to 0xffffffffffe000 failed
[RUN] Trying a SYSCALL that falls through to 0x100000000000000
[OK] mremap to 0xfffffffffff000 failed
[RUN] Trying a SYSCALL that falls through to 0x1fffffffffff000
[OK] mremap to 0x1ffffffffffe000 failed
[RUN] Trying a SYSCALL that falls through to 0x200000000000000
[OK] mremap to 0x1fffffffffff000 failed
[RUN] Trying a SYSCALL that falls through to 0x3fffffffffff000
[OK] mremap to 0x3ffffffffffe000 failed
[RUN] Trying a SYSCALL that falls through to 0x400000000000000
[OK] mremap to 0x3fffffffffff000 failed
[RUN] Trying a SYSCALL that falls through to 0x7fffffffffff000
[OK] mremap to 0x7ffffffffffe000 failed
[RUN] Trying a SYSCALL that falls through to 0x800000000000000
[OK] mremap to 0x7fffffffffff000 failed
[RUN] Trying a SYSCALL that falls through to 0xffffffffffff000
[OK] mremap to 0xfffffffffffe000 failed
[RUN] Trying a SYSCALL that falls through to 0x1000000000000000
[OK] mremap to 0xffffffffffff000 failed
[RUN] Trying a SYSCALL that falls through to 0x1ffffffffffff000
[OK] mremap to 0x1fffffffffffe000 failed
[RUN] Trying a SYSCALL that falls through to 0x2000000000000000
[OK] mremap to 0x1ffffffffffff000 failed
[RUN] Trying a SYSCALL that falls through to 0x3ffffffffffff000
[OK] mremap to 0x3fffffffffffe000 failed
[RUN] Trying a SYSCALL that falls through to 0x4000000000000000
[OK] mremap to 0x3ffffffffffff000 failed
[RUN] Trying a SYSCALL that falls through to 0x7ffffffffffff000
[OK] mremap to 0x7fffffffffffe000 failed
[RUN] Trying a SYSCALL that falls through to 0x8000000000000000
[OK] mremap to 0x7ffffffffffff000 failed
ok 1..36 selftests: x86: sysret_rip_64 [PASS]
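The sigreturn cases above exercise one trick: a signal handler rewrites the saved RIP in its ucontext, so the subsequent sigreturn tries to resume at a chosen non-canonical address, and the expected result is a clean SIGSEGV at exactly that RIP. A minimal sketch of the mechanism, assuming the x86-64 REG_RIP index from <ucontext.h>; the real test additionally catches the SIGSEGV so it can verify the faulting address and continue:

    #define _GNU_SOURCE
    #include <signal.h>
    #include <ucontext.h>

    static unsigned long target_rip = 0x800000000000UL; /* non-canonical */

    static void sigusr1(int sig, siginfo_t *si, void *ctx_void)
    {
        ucontext_t *ctx = ctx_void;

        /* sigreturn will attempt to resume execution here. */
        ctx->uc_mcontext.gregs[REG_RIP] = target_rip;
    }

    int main(void)
    {
        struct sigaction sa = {
            .sa_sigaction = sigusr1,
            .sa_flags = SA_SIGINFO,
        };

        sigaction(SIGUSR1, &sa, NULL);
        raise(SIGUSR1);     /* expect SIGSEGV at target_rip */
        return 0;
    }

The 'SYSCALL that falls through' cases appear to probe the complementary path: code remapped right up against the address-space limit so that a syscall's return lands on the next page, where a refused mremap ('mremap to ... failed') is itself the expected, passing outcome.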
selftests: x86: ldt_gdt_64
========================================
[NOTE] set_thread_area is available; will use GDT index 12
[OK] LDT entry 0 has AR 0x0040FB00 and limit 0x0000000A
[OK] LDT entry 0 has AR 0x00C0FB00 and limit 0x0000AFFF
[OK] LDT entry 1 is invalid
[OK] LDT entry 2 has AR 0x00C0FB00 and limit 0x0000AFFF
[OK] LDT entry 1 is invalid
[OK] LDT entry 2 has AR 0x00C0FB00 and limit 0x0000AFFF
[OK] LDT entry 2 has AR 0x00D0FB00 and limit 0x0000AFFF
[OK] LDT entry 2 has AR 0x00D07B00 and limit 0x0000AFFF
[OK] LDT entry 2 has AR 0x00907B00 and limit 0x0000AFFF
[OK] LDT entry 2 has AR 0x00D07300 and limit 0x0000AFFF
[OK] LDT entry 2 has AR 0x00D07100 and limit 0x0000AFFF
[OK] LDT entry 2 has AR 0x00D07500 and limit 0x0000AFFF
[OK] LDT entry 2 has AR 0x00507700 and limit 0x0000000A
[OK] LDT entry 2 has AR 0x00507F00 and limit 0x0000000A
[OK] LDT entry 2 has AR 0x00507D00 and limit 0x0000000A
[OK] LDT entry 2 has AR 0x00507B00 and limit 0x0000000A
[OK] LDT entry 2 has AR 0x00507900 and limit 0x0000000A
[OK] LDT entry 2 has AR 0x00507900 and limit 0x0000000A
[RUN] Test fork
[OK] LDT entry 2 has AR 0x00507900 and limit 0x0000000A
[OK] LDT entry 1 is invalid
[OK] LDT entry 0 is invalid
[NOTE] set_thread_area is available; will use GDT index 12
[OK] LDT entry 0 has AR 0x0040FB00 and limit 0x0000000A
[OK] LDT entry 0 has AR 0x00C0FB00 and limit 0x0000AFFF
[OK] LDT entry 1 is invalid
[OK] LDT entry 2 has AR 0x00C0FB00 and limit 0x0000AFFF
[OK] LDT entry 1 is invalid
[OK] LDT entry 2 has AR 0x00C0FB00 and limit 0x0000AFFF
[OK] LDT entry 2 has AR 0x00D0FB00 and limit 0x0000AFFF
[OK] LDT entry 2 has AR 0x00D07B00 and limit 0x0000AFFF
[OK] LDT entry 2 has AR 0x00907B00 and limit 0x0000AFFF
[OK] LDT entry 2 has AR 0x00D07300 and limit 0x0000AFFF
[OK] LDT entry 2 has AR 0x00D07100 and limit 0x0000AFFF
[OK] LDT entry 2 has AR 0x00D07500 and limit 0x0000AFFF
[OK] LDT entry 2 has AR 0x00507700 and limit 0x0000000A
[OK] LDT entry 2 has AR 0x00507F00 and limit 0x0000000A
[OK] LDT entry 2 has AR 0x00507D00 and limit 0x0000000A
[OK] LDT entry 2 has AR 0x00507B00 and limit 0x0000000A
[OK] LDT entry 2 has AR 0x00507900 and limit 0x0000000A
[OK] LDT entry 2 has AR 0x00507900 and limit 0x0000000A
[RUN] Test fork
[OK] Child succeeded
[RUN] Test size
[DONE] Size test
[OK] modify_ldt failure 22
[OK] LDT entry 0 has AR 0x0000F300 and limit 0x00000000
[OK] LDT entry 0 has AR 0x00007300 and limit 0x00000000
[OK] LDT entry 0 has AR 0x0000F100 and limit 0x00000000
[OK] LDT entry 0 has AR 0x00007300 and limit 0x00000000
[OK] LDT entry 0 has AR 0x00007100 and limit 0x00000001
[OK] LDT entry 0 has AR 0x00007100 and limit 0x00000000
[OK] LDT entry 0 is invalid
[OK] LDT entry 0 has AR 0x0040F300 and limit 0x000FFFFF
[OK] LDT entry 0 has AR 0x00C0F300 and limit 0xFFFFFFFF
[OK] LDT entry 0 has AR 0x00C0F100 and limit 0xFFFFFFFF
[OK] LDT entry 0 has AR 0x00C0F700 and limit 0xFFFFFFFF
[OK] LDT entry 0 has AR 0x00C0F500 and limit 0xFFFFFFFF
[OK] LDT entry 0 is invalid
[RUN] Cross-CPU LDT invalidation
[OK] All 5 iterations succeeded
[RUN] Test exec
[OK] LDT entry 0 has AR 0x0040FB00 and limit 0x0000002A
[OK] Child succeeded
[OK] Invalidate DS with set_thread_area: new DS = 0x0
[OK] Invalidate ES with set_thread_area: new ES = 0x0
[OK] Invalidate FS with set_thread_area: new FS = 0x0
[OK] New FSBASE was zero
[OK] Invalidate GS with set_thread_area: new GS = 0x0
[OK] New GSBASE was zero
ok 1..37 selftests: x86: ldt_gdt_64 [PASS]
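Each 'LDT entry N has AR ... and limit ...' line comes from installing a descriptor with modify_ldt(2) and reading the access-rights byte and limit back (the selftest uses the lar/lsl instructions on the matching selector). A minimal sketch of the install half, assuming struct user_desc from <asm/ldt.h>; the field values are illustrative:

    #include <asm/ldt.h>        /* struct user_desc */
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static int install_ldt_entry(unsigned int index, unsigned int base,
                                 unsigned int limit, int limit_in_pages)
    {
        struct user_desc desc;

        memset(&desc, 0, sizeof(desc));
        desc.entry_number   = index;
        desc.base_addr      = base;
        desc.limit          = limit;
        desc.seg_32bit      = 1;
        desc.limit_in_pages = limit_in_pages;

        /* func 0x11 is the modern 'write entry' variant of modify_ldt. */
        return syscall(SYS_modify_ldt, 0x11, &desc, sizeof(desc));
    }

The selector corresponding to entry N is (N << 3) | 7: the table-indicator bit set to select the LDT, with RPL 3.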
selftests: x86: ptrace_syscall_64
========================================
[RUN] Check int80 return regs
[OK] getpid() preserves regs
[OK] kill(getpid(), SIGUSR1) preserves regs
[RUN] ptrace-induced syscall restart
[RUN] SYSEMU
[OK] Initial nr and args are correct
[RUN] Restart the syscall (ip = 0x7f395f0aa309)
[OK] Restarted nr and args are correct
[RUN] Change nr and args and restart the syscall (ip = 0x7f395f0aa309)
[OK] Replacement nr and args are correct
[OK] Child exited cleanly
[RUN] kernel syscall restart under ptrace
[RUN] SYSCALL
[OK] Initial nr and args are correct
[RUN] SYSCALL
[OK] Args after SIGUSR1 are correct (ax = -514)
[OK] Child got SIGUSR1
[RUN] Step again
[OK] pause(2) restarted correctly
ok 1..38 selftests: x86: ptrace_syscall_64 [PASS]
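The SYSEMU pass above relies on the tracer stopping the child at syscall entry without the kernel executing the syscall, then reading (and optionally rewriting) the register set before restarting. A minimal sketch of the inspection step, assuming the x86-64 struct user_regs_struct; fork/attach plumbing and error handling are omitted:

    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <sys/user.h>
    #include <sys/wait.h>

    /* Run 'pid' to its next syscall entry, suppress the syscall itself,
     * and report which syscall number the child attempted. */
    static long peek_suppressed_syscall(pid_t pid)
    {
        struct user_regs_struct regs;

        ptrace(PTRACE_SYSEMU, pid, 0, 0);
        waitpid(pid, NULL, 0);
        ptrace(PTRACE_GETREGS, pid, 0, &regs);
        return regs.orig_rax;
    }

Restarting the syscall (the 'ip = 0x...' lines) is then typically a PTRACE_SETREGS with the instruction pointer moved back over the two-byte syscall instruction and rax reloaded with the desired syscall number, so the child re-enters the syscall on resume.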
make: Leaving directory '/usr/src/perf_selftests-x86_64-rhel-7.2-6ea3dfe1e0732c5bd3be1e073690b06a83c03c25/tools/testing/selftests/x86'
ignored_by_lkp zram test
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen
[drm/amd/display] e1be4cb583: BUG:pagefault_on_kernel_address#in_non-whitelisted_uaccess
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: e1be4cb583800db36ed7f6303f7a8c205be24ceb ("drm/amd/display: Use memset to initialize variables in fill_plane_dcc_attributes")
git://people.freedesktop.org/~agd5f/linux.git amd-staging-drm-next
in testcase: trinity
with the following parameters:
runtime: 300s
test-description: Trinity is a Linux system call fuzz tester.
test-url: http://codemonkey.org.uk/projects/trinity/
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 1G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+------------------------------------------------------------+------------+------------+
| | a0c15edcb4 | e1be4cb583 |
+------------------------------------------------------------+------------+------------+
| boot_successes | 4 | 6 |
| boot_failures | 2 | 8 |
| BUG:kernel_in_stage | 2 | 2 |
| BUG:pagefault_on_kernel_address#in_non-whitelisted_uaccess | 0 | 4 |
| BUG:unable_to_handle_kernel | 0 | 4 |
| Oops:#[##] | 0 | 4 |
| RIP:strncpy_from_user | 0 | 4 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 4 |
| BUG:soft_lockup-CPU##stuck_for#s | 0 | 2 |
| RIP:update_stack_state | 0 | 1 |
| Kernel_panic-not_syncing:softlockup:hung_tasks | 0 | 2 |
| RIP:ftrace_likely_update | 0 | 1 |
+------------------------------------------------------------+------------+------------+
[ 3.376773] BUG: pagefault on kernel address 0xffff888030b40000 in non-whitelisted uaccess
[ 3.377644] BUG: unable to handle kernel paging request at ffff888030b40000
[ 3.377644] #PF error: [normal kernel read fault]
[ 3.377644] PGD 23200067 P4D 23200067 PUD 23201067 PMD 3b17a067 PTE 800fffffcf4bf060
[ 3.377644] Oops: 0000 [#1] DEBUG_PAGEALLOC KASAN
[ 3.377644] CPU: 0 PID: 1 Comm: swapper Not tainted 5.0.0-rc1-00282-ge1be4cb #84
[ 3.377644] RIP: 0010:strncpy_from_user+0x1d7/0x31d
[ 3.377644] Code: ff 4d 39 fc 4d 89 e8 4d 0f 46 fc 4c 89 7d d0 45 31 ff 48 8b 55 d0 4c 29 fa 48 83 fa 07 0f 86 1f 01 00 00 4c 89 45 c0 45 31 c9 <4e> 8b 34 3b 45 85 c9 48 c7 c7 a0 05 59 a3 44 89 4d c8 40 0f 95 c6
[ 3.377644] RSP: 0000:ffff888000397938 EFLAGS: 00010246
[ 3.377644] RAX: dffffc0000000000 RBX: ffff888030b3ffba RCX: 0000000000000000
[ 3.377644] RDX: 0000000000000fa0 RSI: 0000000000000000 RDI: ffffffffa35905b8
[ 3.377644] RBP: ffff888000397978 R08: ffff8880250f5560 R09: 0000000000000000
[ 3.377644] R10: ffffed1004a1ec9f R11: 0000000000000001 R12: 0000000000000fe0
[ 3.377644] R13: ffff8880250f5520 R14: 0000000000000000 R15: 0000000000000040
[ 3.377644] FS: 0000000000000000(0000) GS:ffffffffa2e5c000(0000) knlGS:0000000000000000
[ 3.377644] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 3.377644] CR2: ffff888030b40000 CR3: 0000000021c20000 CR4: 00000000000406f0
[ 3.377644] Call Trace:
[ 3.377644] getname_flags+0x15c/0x518
[ 3.377644] user_path_at_empty+0x23/0x3d
[ 3.377644] vfs_statx+0xd8/0x13f
[ 3.377644] ? vfs_statx_fd+0x6e/0x6e
[ 3.377644] clean_path+0xa0/0x153
[ 3.377644] ? clean_path+0x47/0x47
[ 3.377644] ? do_collect+0xe6/0xe6
[ 3.377644] ? __list_add_valid+0x124/0x14d
[ 3.377644] do_name+0x16f/0x6b0
[ 3.377644] write_buffer+0x89/0xc4
[ 3.377644] flush_buffer+0x6f/0x198
[ 3.377644] __gunzip+0x9a4/0xc07
[ 3.377644] ? bunzip2+0xe89/0xe89
[ 3.377644] ? write_buffer+0xc4/0xc4
[ 3.377644] gunzip+0x19/0x40
[ 3.377644] ? initrd_load+0xad/0xad
[ 3.377644] unpack_to_rootfs+0x2da/0x775
[ 3.377644] ? initrd_load+0xad/0xad
[ 3.377644] ? maybe_link+0x504/0x504
[ 3.377644] populate_rootfs+0x18a/0x4b3
[ 3.377644] ? unpack_to_rootfs+0x775/0x775
[ 3.377644] do_one_initcall+0x14a/0x30e
[ 3.377644] ? __wake_up_locked_key_bookmark+0x1d/0x1d
[ 3.377644] ? initcall_blacklisted+0x13f/0x13f
[ 3.377644] ? __might_sleep+0x1bc/0x1c9
[ 3.377644] ? kasan_unpoison_shadow+0x19/0x3a
[ 3.377644] kernel_init_freeable+0x6b1/0x7d4
[ 3.377644] ? schedule_tail+0xaa/0xae
[ 3.377644] ? rest_init+0xc7/0xc7
[ 3.377644] kernel_init+0x11/0x105
[ 3.377644] ? rest_init+0xc7/0xc7
[ 3.377644] ret_from_fork+0x24/0x30
[ 3.377644] Modules linked in:
[ 3.377644] CR2: ffff888030b40000
[ 3.377644] _warn_unseeded_randomness: 3 callbacks suppressed
[ 3.377644] random: get_random_bytes called from init_oops_id+0x26/0x37 with crng_init=0
[ 3.377644] ---[ end trace 23878c06f4fcb65e ]---
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
lkp
[sg] d6d8ff1c80: INFO:rcu_sched_detected_stalls_on_CPUs/tasks
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: d6d8ff1c80f31da7a832e5a45d85b4197abff295 ("sg: Convert sg_index_idr to XArray")
git://git.infradead.org/users/willy/linux-dax.git xarray-conv
in testcase: locktorture
with the following parameters:
runtime: 300s
test: default
test-description: This torture test consists of creating a number of kernel threads which acquire the lock and hold it for a specific amount of time, thus simulating different critical region behaviors.
test-url: https://www.kernel.org/doc/Documentation/locking/locktorture.txt
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 2G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+----------------------------------------------------+------------+------------+
| | 4f588eb2e9 | d6d8ff1c80 |
+----------------------------------------------------+------------+------------+
| boot_successes | 4 | 0 |
| boot_failures | 0 | 4 |
| INFO:rcu_sched_detected_stalls_on_CPUs/tasks | 0 | 4 |
| RIP:queued_spin_lock_slowpath | 0 | 4 |
| BUG:kernel_hang_in_boot-around-mounting-root_stage | 0 | 4 |
+----------------------------------------------------+------------+------------+
[ 105.608544] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
[ 105.612537] rcu: 0-...0: (7 ticks this GP) idle=406/1/0x4000000000000000 softirq=4060/4061 fqs=12500
[ 105.612537] rcu: (detected by 1, t=25002 jiffies, g=25, q=19)
[ 105.612537] Sending NMI from CPU 1 to CPUs 0:
[ 105.612537] NMI backtrace for cpu 0
[ 105.612537] CPU: 0 PID: 7 Comm: kworker/u4:0 Not tainted 5.0.0-rc5-00189-gd6d8ff1 #1
[ 105.612537] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 105.612537] Workqueue: events_unbound async_run_entry_fn
[ 105.612537] RIP: 0010:queued_spin_lock_slowpath+0x16f/0x17a
[ 105.612537] Code: 0f b1 37 74 15 eb e9 48 8b 0a 48 85 c9 75 04 f3 90 eb f4 c7 41 08 01 00 00 00 65 ff 0d b5 b4 e9 7e c3 8b 07 85 c0 74 04 f3 90 <eb> f6 f0 0f b1 17 85 c0 75 ee c3 66 66 66 66 90 48 c7 47 18 00 00
[ 105.612537] RSP: 0018:ffffc90000353c80 EFLAGS: 00000002
[ 105.612537] RAX: 0000000000000001 RBX: ffff88806ab10900 RCX: 0000000000000000
[ 105.612537] RDX: 0000000000000001 RSI: 0000000000000001 RDI: ffffffff8252fa30
[ 105.612537] RBP: ffff88806ada8480 R08: ffff88806ba248a0 R09: 0000000000000000
[ 105.612537] R10: 00000000ffffffe0 R11: 0000000000000000 R12: ffff88806a641000
[ 105.612537] R13: ffff88806a640968 R14: ffff88806a640800 R15: ffff88806a640968
[ 105.612537] FS: 0000000000000000(0000) GS:ffff88806ba00000(0000) knlGS:0000000000000000
[ 105.612537] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 105.612537] CR2: 00007fffb7473e08 CR3: 000000000240e000 CR4: 00000000000006f0
[ 105.612537] Call Trace:
[ 105.612537] sg_add_device+0x112/0x41a
[ 105.612537] device_add+0x4b7/0x59f
[ 105.612537] ? __pm_runtime_resume+0x79/0x85
[ 105.612537] scsi_sysfs_add_sdev+0x130/0x223
[ 105.612537] scsi_probe_and_add_lun+0xa84/0xac0
[ 105.612537] __scsi_add_device+0xd4/0x128
[ 105.612537] ata_scsi_scan_host+0x86/0x173
[ 105.612537] async_run_entry_fn+0x6f/0x12f
[ 105.612537] process_one_work+0x1cf/0x327
[ 105.612537] ? worker_thread+0x242/0x2c4
[ 105.612537] worker_thread+0x1e6/0x2c4
[ 105.612537] ? rescuer_thread+0x2bd/0x2bd
[ 105.612537] kthread+0x121/0x129
[ 105.612537] ? kthread_park+0x76/0x76
[ 105.612537] ret_from_fork+0x3a/0x50
BUG: kernel hang in boot-around-mounting-root stage
Elapsed time: 320
qemu-img create -f qcow2 disk-vm-snb-2G-443-0 256G
qemu-img create -f qcow2 disk-vm-snb-2G-443-1 256G
qemu-img create -f qcow2 disk-vm-snb-2G-443-2 256G
qemu-img create -f qcow2 disk-vm-snb-2G-443-3 256G
qemu-img create -f qcow2 disk-vm-snb-2G-443-4 256G
qemu-img create -f qcow2 disk-vm-snb-2G-443-5 256G
qemu-img create -f qcow2 disk-vm-snb-2G-443-6 256G
kvm=(
qemu-system-x86_64
-enable-kvm
-cpu SandyBridge
-kernel $kernel
-initrd initrd-vm-snb-2G-443
-m 2048
-smp 2
-device e1000,netdev=net0
-netdev user,id=net0,hostfwd=tcp::23550-:22
-boot order=nc
-no-reboot
-watchdog i6300esb
-watchdog-action debug
-rtc base=localtime
-drive file=disk-vm-snb-2G-443-0,media=disk,if=virtio
-drive file=disk-vm-snb-2G-443-1,media=disk,if=virtio
-drive file=disk-vm-snb-2G-443-2,media=disk,if=virtio
-drive file=disk-vm-snb-2G-443-3,media=disk,if=virtio
-drive file=disk-vm-snb-2G-443-4,media=disk,if=virtio
-drive file=disk-vm-snb-2G-443-5,media=disk,if=virtio
-drive file=disk-vm-snb-2G-443-6,media=disk,if=virtio
-serial stdio
-display none
-monitor null
)
append=(
ip=::::vm-snb-2G-443::dhcp
root=/dev/ram0
user=lkp
job=/job-script
ARCH=x86_64
kconfig=x86_64-lkp
branch=dax/xarray-conv
commit=d6d8ff1c80f31da7a832e5a45d85b4197abff295
BOOT_IMAGE=/pkg/linux/x86_64-lkp/gcc-7/d6d8ff1c80f31da7a832e5a45d85b4197abff295/vmlinuz-5.0.0-rc5-00189-gd6d8ff1
max_uptime=1500
RESULT_ROOT=/result/locktorture/300s-default/vm-snb-2G/debian-x86_64-2018-04-03.cgz/x86_64-lkp/gcc-7/d6d8ff1c80f31da7a832e5a45d85b4197abff295/3
result_service=tmpfs
debug
apic=debug
sysrq_always_enabled
rcupdate.rcu_cpu_stall_timeout=100
net.ifnames=0
printk.devkmsg=on
panic=-1
softlockup_panic=1
nmi_watchdog=panic
oops=panic
load_ramdisk=2
prompt_ramdisk=0
drbd.minor_count=8
systemd.log_level=err
)
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz
bin/lkp qemu -k <bzImage> -m modules.cgz job-script # job-script is attached in this email
Thanks,
lkp
d697415be6: WARNING:at_kernel/sched/sched.h:#sched_cpu_dying
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: d697415be692349156bca1733648c7cb7f8dad77 ("Core scheduling")
git://bee.sh.intel.com/git/aubrey/fast_idle.git core_scheduling
in testcase: locktorture
with the following parameters:
runtime: 300s
test: cpuhotplug
test-description: This torture test consists of creating a number of kernel threads which acquire the lock and hold it for a specific amount of time, thus simulating different critical region behaviors.
test-url: https://www.kernel.org/doc/Documentation/locking/locktorture.txt
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 2G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+---------------------------------------------------------+------------+------------+
| | 9dd58fe7f8 | d697415be6 |
+---------------------------------------------------------+------------+------------+
| boot_successes | 2 | 4 |
| boot_failures | 2 | 8 |
| BUG:kernel_in_stage | 1 | |
| invoked_oom-killer:gfp_mask=0x | 1 | 4 |
| Mem-Info | 1 | 4 |
| Out_of_memory_and_no_killable_processes | 1 | 4 |
| Kernel_panic-not_syncing:System_is_deadlocked_on_memory | 1 | 4 |
| WARNING:at_kernel/sched/sched.h:#sched_cpu_dying | 0 | 4 |
| RIP:sched_cpu_dying | 0 | 4 |
+---------------------------------------------------------+------------+------------+
[ 106.996450] WARNING: CPU: 1 PID: 15 at kernel/sched/sched.h:1777 sched_cpu_dying+0x43a/0x4b0
[ 106.997907] Modules linked in: locktorture torture virtio_blk ppdev crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel bochs_drm snd_pcm ttm snd_timer drm_kms_helper snd soundcore joydev pcspkr drm serio_raw i6300esb parport_pc parport ata_generic floppy virtio_pci virtio_ring virtio i2c_piix4 qemu_fw_cfg pata_acpi
[ 107.001989] CPU: 1 PID: 15 Comm: migration/1 Not tainted 5.0.0-rc8-00542-gd697415 #22
[ 107.003100] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 107.004303] RIP: 0010:sched_cpu_dying+0x43a/0x4b0
[ 107.004993] Code: fc ff ff 48 8b bb 50 0c 00 00 e9 2d fc ff ff 48 89 df e8 29 78 ff ff 48 8b 45 c0 48 89 45 b8 8b 45 c8 89 45 b4 e9 5f fc ff ff <0f> 0b e9 f4 fc ff ff 48 8d 55 c0 48 c7 c6 c0 f0 24 b9 4c 89 ff 48
[ 107.007648] RSP: 0018:ffffb4598039bd78 EFLAGS: 00010087
[ 107.008417] RAX: ffff99fdff0e5a00 RBX: ffff99fddcf22ec0 RCX: dead000000000200
[ 107.009433] RDX: ffff99fddcf23910 RSI: ffff99fddcf23910 RDI: ffff99fdff0e5ab0
[ 107.010462] RBP: ffffb4598039bdd0 R08: ffff99fddcf23910 R09: 000000000004ce13
[ 107.011482] R10: ffffb4598039bcc8 R11: 00000000004fa1e0 R12: 0000000000022ec0
[ 107.012502] R13: ffff99fdff0e5a00 R14: ffffffffb8e137e0 R15: ffff99fddcf22ec0
[ 107.013522] FS: 0000000000000000(0000) GS:ffff99fddcf00000(0000) knlGS:0000000000000000
[ 107.014673] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 107.015492] CR2: 00000000004215e0 CR3: 000000007f1ca000 CR4: 00000000000406e0
[ 107.016510] Call Trace:
[ 107.017564] ? sched_cpu_starting+0x1b0/0x1b0
[ 107.018402] cpuhp_invoke_callback+0x86/0x5d0
[ 107.019056] ? cpu_disable_common+0x1cf/0x1f0
[ 107.019749] take_cpu_down+0x60/0x90
[ 107.020173] multi_cpu_stop+0x68/0xe0
[ 107.020606] ? cpu_stopper_thread+0x100/0x100
[ 107.021123] cpu_stopper_thread+0x94/0x100
[ 107.021609] ? smpboot_thread_fn+0x2f/0x1e0
[ 107.022105] ? smpboot_thread_fn+0x74/0x1e0
[ 107.022595] ? smpboot_thread_fn+0x14e/0x1e0
[ 107.023104] smpboot_thread_fn+0x149/0x1e0
[ 107.023623] ? sort_range+0x20/0x20
[ 107.024042] kthread+0x11e/0x140
[ 107.024492] ? kthread_park+0x90/0x90
[ 107.024927] ret_from_fork+0x35/0x40
[ 107.025381] ---[ end trace a876bdc6aab2744c ]---
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz
bin/lkp qemu -k <bzImage> -m modules.cgz job-script # job-script is attached in this email
Thanks,
lkp