[RFC] replace ->get_poll_head with a waitqueue pointer in struct file
by Christoph Hellwig
Introducing the new poll methods turned up a regression in the
will-it-scale ltp tests. One reason for that is that indirect function
calls have become very expensive with the Spectre mitigations. I'm still
waiting for better numbers, but this series has shown a 5% improvement
in ops per second so far, while for the get_poll_head addition we had
regressions of 3.7% or 8.8% depending on the measurement.
This series removes the get_poll_head method again and instead stores an
optional wait_queue_head pointer in struct file; if that pointer is set,
the poll_mask method can be used on it. The only complication is the
networking poll code, which not only does some interesting gymnastics to
get at the wait queue pointer, but also has a mode to do hardware polling
before waiting for an event from poll or epoll. Because of that this
series has a few net prep patches that need careful review.
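To make the design concrete, here is a minimal stand-alone C sketch of the
dispatch (illustrative only; the field and helper names below are made up
and are not the ones used in the series):

typedef unsigned int __poll_t;

struct wait_queue_head { int dummy; };          /* stand-in type */
struct poll_table_struct;                       /* opaque in this sketch */

struct file;
struct file_operations {
    __poll_t (*poll)(struct file *, struct poll_table_struct *);
    __poll_t (*poll_mask)(struct file *, __poll_t events);
};

struct file {
    const struct file_operations *f_op;
    struct wait_queue_head *f_poll_head;        /* NULL if not supported */
};

/* wait registration elided in this sketch */
static void poll_wait_on(struct wait_queue_head *wq,
                         struct poll_table_struct *pt)
{
    (void)wq;
    (void)pt;
}

static __poll_t vfs_poll_sketch(struct file *file, __poll_t events,
                                struct poll_table_struct *pt)
{
    if (file->f_poll_head) {
        /* the wait queue comes straight from struct file, so no extra
         * ->get_poll_head indirect call is needed */
        poll_wait_on(file->f_poll_head, pt);
        return file->f_op->poll_mask(file, events);
    }
    /* legacy path for everything that still implements ->poll */
    return file->f_op->poll(file, pt);
}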
29b848451d ("<linux/taggedptr.h>: Introduce tagged pointer"): WARNING: lock held when returning to user space!
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://github.com/0day-ci/linux/commits/Gao-Xiang/linux-taggedptr-h-Intr...
commit 29b848451ddc66bd7f72780d70d7022f0eed9931
Author: Gao Xiang <gaoxiang25(a)huawei.com>
AuthorDate: Thu Jun 28 17:06:29 2018 +0800
Commit: 0day robot <lkp(a)intel.com>
CommitDate: Thu Jun 28 17:55:56 2018 +0800
<linux/taggedptr.h>: Introduce tagged pointer
Currently the kernel has scattered tagged-pointer usages hand-coded
in plain C, without a single portable set of functions that makes the
tagged pointer explicit and wraps that hand-rolled code, so meaningless
magic masks are spread all over the tree.
Therefore, this patch introduces simple generic helpers to fold
tags into a pointer value. It currently reuses the last 2 bits
of the pointer for tags, which is safe on all modern platforms.
In addition, it will also be used by the upcoming EROFS filesystem,
which relies heavily on the tagged pointer approach for high
performance and to reduce extra memory allocations.
Refer to:
https://en.wikipedia.org/wiki/Tagged_pointer
To: Alexander Viro <viro(a)zeniv.linux.org.uk>
Cc: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Cc: Kate Stewart <kstewart(a)linuxfoundation.org>
Cc: Philippe Ombredanne <pombredanne(a)nexb.com>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: Chao Yu <yuchao0(a)huawei.com>
Cc: Miao Xie <miaoxie(a)huawei.com>
Cc: linux-fsdevel(a)vger.kernel.org
Cc: linux-erofs(a)lists.ozlabs.org
Cc: linux-kernel(a)vger.kernel.org
Signed-off-by: Gao Xiang <gaoxiang25(a)huawei.com>
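For illustration, folding a 2-bit tag into the alignment-guaranteed low
bits of a pointer works roughly like the stand-alone sketch below (the
names are invented here; the real <linux/taggedptr.h> interface may
differ):

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define TAG_MASK 0x3UL                 /* last 2 bits, free due to alignment */

static uintptr_t tagptr_fold(void *ptr, unsigned int tag)
{
    /* only works for pointers that are at least 4-byte aligned */
    assert(((uintptr_t)ptr & TAG_MASK) == 0);
    assert(tag <= TAG_MASK);
    return (uintptr_t)ptr | tag;
}

static void *tagptr_ptr(uintptr_t t)
{
    return (void *)(t & ~TAG_MASK);    /* mask the tag off again */
}

static unsigned int tagptr_tag(uintptr_t t)
{
    return (unsigned int)(t & TAG_MASK);
}

int main(void)
{
    int x = 42;
    uintptr_t t = tagptr_fold(&x, 2);

    printf("ptr=%p tag=%u value=%d\n",
           tagptr_ptr(t), tagptr_tag(t), *(int *)tagptr_ptr(t));
    return 0;
}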
f57494321c Merge tag 'xfs-4.18-fixes-2' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux
29b848451d <linux/taggedptr.h>: Introduce tagged pointer
+--------------------------------------------------+------------+------------+
| | f57494321c | 29b848451d |
+--------------------------------------------------+------------+------------+
| boot_successes | 33 | 0 |
| boot_failures | 0 | 26 |
| WARNING:lock_held_when_returning_to_user_space | 0 | 26 |
| is_leaving_the_kernel_with_locks_still_held | 0 | 26 |
| INFO:task_blocked_for_more_than#seconds | 0 | 26 |
| Kernel_panic-not_syncing:hung_task:blocked_tasks | 0 | 22 |
| BUG:kernel_hang_in_test_stage | 0 | 4 |
+--------------------------------------------------+------------+------------+
[ 1.797604] random: mktemp: uninitialized urandom read (6 bytes read)
[ 1.799577] mktemp (122) used greatest stack depth: 6456 bytes left
[ 1.803144] random: mktemp: uninitialized urandom read (6 bytes read)
[ 1.805630]
[ 1.805918] ================================================
[ 1.806501] WARNING: lock held when returning to user space!
[ 1.807382] 4.18.0-rc2-00133-g29b8484 #326 Not tainted
[ 1.808241] ------------------------------------------------
[ 1.809178] lock/124 is leaving the kernel with locks still held!
[ 1.810169] 1 lock held by lock/124:
[ 1.810808] #0: (ptrval) (&f->f_pos_lock){....}, at: __fdget_pos+0x30/0x40
[ 246.752095] INFO: task lock:124 blocked for more than 120 seconds.
[ 246.753185] Not tainted 4.18.0-rc2-00133-g29b8484 #326
[ 246.754130] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 246.755432] lock D 6676 124 119 0x00000000
[ 246.756353] Call Trace:
[ 246.756897] __schedule+0x4fa/0xde0
[ 246.757538] ? sched_clock_cpu+0x3d/0x1b0
[ 246.759022] schedule+0x3a/0xb0
[ 246.759593] ? preempt_count_sub+0x54/0x80
[ 246.760308] schedule_preempt_disabled+0x17/0x30
[ 246.761073] __mutex_lock+0x2ba/0x980
[ 246.761742] ? __fdget_pos+0x30/0x40
[ 246.762385] mutex_lock_nested+0x25/0x30
[ 246.763085] ? __fdget_pos+0x30/0x40
[ 246.763734] __fdget_pos+0x30/0x40
[ 246.764349] ksys_write+0x1e/0xb0
[ 246.764978] sys_write+0x16/0x20
[ 246.765581] do_int80_syscall_32+0x8f/0x200
[ 246.766308] entry_INT80_32+0x2f/0x2f
[ 246.766959] EIP: 0xb7e89c9e
[ 246.767497] Code: 31 d2 29 c2 65 89 11 83 c8 ff eb e2 65 83 3d 0c 00 00 00 00 75 1d 53 8b 54 24 10 8b 4c 24 0c 8b 5c 24 08 b8 04 00 00 00 cd 80 <5b> 3d 01 f0 ff ff 73 2d c3 e8 fd 12 03 00 50 53 8b 54 24 14 8b 4c
[ 246.770378] EAX: ffffffda EBX: 00000004 ECX: bfe54664 EDX: 00000004
[ 246.771349] ESI: bfe54664 EDI: bfe547b8 EBP: bfe54678 ESP: bfe5464c
[ 246.772343] DS: 007b ES: 007b FS: 0000 GS: 0000 SS: 007b EFLAGS: 00000246
[ 246.773424] INFO: lockdep is turned off.
[ 246.774093] NMI backtrace for cpu 0
[ 246.787787] CPU: 0 PID: 19 Comm: khungtaskd Not tainted 4.18.0-rc2-00133-g29b8484 #326
[ 246.789092] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 246.790458] Call Trace:
[ 246.790983] dump_stack+0x16/0x26
[ 246.791599] nmi_cpu_backtrace+0xa4/0xb0
[ 246.792303] ? preempt_count_add+0x3c/0x60
[ 246.793032] nmi_trigger_cpumask_backtrace+0x147/0x190
[ 246.793893] ? debug_show_all_locks+0x112/0x120
[ 246.794653] ? watchdog+0x2d0/0x520
[ 246.795270] arch_trigger_cpumask_backtrace+0x15/0x20
[ 246.796061] watchdog+0x30f/0x520
[ 246.796615] kthread+0x10f/0x140
[ 246.797170] ? reset_hung_task_detector+0x20/0x20
[ 246.797894] ? kthread_flush_work_fn+0x20/0x20
[ 246.798581] ret_from_fork+0x2e/0x38
[ 246.800016] Kernel panic - not syncing: hung_task: blocked tasks
[ 246.814001] CPU: 0 PID: 19 Comm: khungtaskd Not tainted 4.18.0-rc2-00133-g29b8484 #326
[ 246.815271] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 246.816649] Call Trace:
[ 246.817164] dump_stack+0x16/0x26
[ 246.817786] panic+0x90/0x1ce
[ 246.818352] watchdog+0x31b/0x520
[ 246.818979] kthread+0x10f/0x140
[ 246.819582] ? reset_hung_task_detector+0x20/0x20
[ 246.820394] ? kthread_flush_work_fn+0x20/0x20
[ 246.821164] ret_from_fork+0x2e/0x38
[ 246.821824] Kernel Offset: 0xb800000 from 0xc1000000 (relocation range: 0xc0000000-0xd07dffff)
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 1f59bd02fb3281d50632ad41771b9c4533397a6d 7daf201d7fe8334e2d2364d4e8ed3394ec9af819 --
git bisect good 55b0ee6dc5af2a9ebca9b4bec22b8397d0b66925 # 18:39 G 10 0 4 5 Merge 'linux-review/Uwe-Kleine-K-nig/siox-don-t-create-a-thread-without-starting-it/20180628-174207' into devel-catchup-201806281801
git bisect bad 76b4975ef4ff9c48c111d5cb8c917670e9be9fc5 # 18:46 B 0 2 16 0 Merge 'linux-review/Gao-Xiang/linux-taggedptr-h-Introduce-tagged-pointer/20180628-175554' into devel-catchup-201806281801
git bisect good fbd495a21b0b545ebc9d13368261f9726e50609a # 18:59 G 10 0 3 4 Merge 'linux-review/Laurentiu-Tudor/mmc-sdhci-of-esdhc-set-proper-dma-mask-for-ls104x-chips/20180628-175113' into devel-catchup-201806281801
git bisect bad 29b848451ddc66bd7f72780d70d7022f0eed9931 # 19:08 B 0 9 34 11 <linux/taggedptr.h>: Introduce tagged pointer
# first bad commit: [29b848451ddc66bd7f72780d70d7022f0eed9931] <linux/taggedptr.h>: Introduce tagged pointer
git bisect good f57494321cbf5b1e7769b6135407d2995a369e28 # 19:21 G 31 0 9 10 Merge tag 'xfs-4.18-fixes-2' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux
# extra tests with debug options
git bisect bad 29b848451ddc66bd7f72780d70d7022f0eed9931 # 19:34 B 0 1 15 0 <linux/taggedptr.h>: Introduce tagged pointer
# extra tests on HEAD of linux-devel/devel-catchup-201806281801
git bisect bad 1f59bd02fb3281d50632ad41771b9c4533397a6d # 19:40 B 0 29 46 0 0day head guard for 'devel-catchup-201806281801'
# extra tests on tree/branch linux-review/Gao-Xiang/linux-taggedptr-h-Introduce-tagged-pointer/20180628-175554
git bisect bad 29b848451ddc66bd7f72780d70d7022f0eed9931 # 19:59 B 0 26 40 0 <linux/taggedptr.h>: Introduce tagged pointer
# extra tests with first bad commit reverted
git bisect good 1e0167cd2de5818c3783ad180087fefa31a8dfc9 # 20:16 G 11 0 2 2 Revert "<linux/taggedptr.h>: Introduce tagged pointer"
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/lkp Intel Corporation
[lkp-robot] 72041f9847 BUG: kernel hang in early-boot stage, last printk: early console in setup code
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
git://anongit.freedesktop.org/drm-intel topic/core-for-CI
commit 72041f9847abb05b9d4d7dea17631b579191ca99
Author: Daniel Vetter <daniel.vetter(a)ffwll.ch>
AuthorDate: Tue Mar 20 17:02:58 2018 +0100
Commit: Daniel Vetter <daniel.vetter(a)ffwll.ch>
CommitDate: Thu May 31 13:34:55 2018 +0200
RFC: debugobjects: capture stack traces at _init() time
Sometimes it's really easy to know which object has gone boom and
where the offending code is, and sometimes it's really hard. One case
we're trying to hunt down is when module unload catches a live debug
object, in a module with lots of them.
Capture a full stack trace from debug_object_init() and dump it when
there's a problem.
FIXME: Should we have a separate Kconfig knob for the backtraces,
since they're quite expensive? Atm I'm just selecting it for the
general debug object stuff.
v2: Drop the locks for gathering & storing the backtrace. This is
required because depot_save_stack can call free_pages (to drop its
preallocation), which can call debug_check_no_obj_freed, which will
recurse into the db->lock spinlocks.
Cc: Philippe Ombredanne <pombredanne(a)nexb.com>
Cc: Greg Kroah-Hartman <gregkh(a)linuxfoundation.org>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: Kate Stewart <kstewart(a)linuxfoundation.org>
Cc: Daniel Vetter <daniel.vetter(a)ffwll.ch>
Cc: Waiman Long <longman(a)redhat.com>
Acked-by-for-CI-testing: Chris Wilson <chris(a)chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter(a)intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180320160258.11381-1-dani...
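A rough user-space model of that idea, using glibc's backtrace() rather
than the kernel stackdepot interfaces the patch actually builds on:
record the call chain when the object is initialised and only dump it
once the object turns out to be a problem.

#include <execinfo.h>
#include <stdio.h>

#define MAX_FRAMES 16

struct debug_obj {
    void *owner;                       /* the object being tracked */
    void *init_stack[MAX_FRAMES];      /* call chain captured at init time */
    int   init_frames;
};

static void debug_object_init_sketch(struct debug_obj *dob, void *owner)
{
    dob->owner = owner;
    dob->init_frames = backtrace(dob->init_stack, MAX_FRAMES);
}

static void debug_object_report_sketch(const struct debug_obj *dob)
{
    fprintf(stderr, "object %p still live, initialised from:\n", dob->owner);
    /* the stored trace is only printed when there is actually a problem */
    backtrace_symbols_fd(dob->init_stack, dob->init_frames, 2 /* stderr */);
}

int main(void)
{
    static int some_object;
    struct debug_obj dob;

    debug_object_init_sketch(&dob, &some_object);
    debug_object_report_sketch(&dob);  /* pretend teardown found it live */
    return 0;
}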
6e56dae35f kernel/panic: Repeat the line and caller information at the end of the OOPS
72041f9847 RFC: debugobjects: capture stack traces at _init() time
b1ee8b0d94 Documentation: e100: Fix docs build error
+-----------------------------------------------------------------------------+------------+------------+------------+
| | 6e56dae35f | 72041f9847 | b1ee8b0d94 |
+-----------------------------------------------------------------------------+------------+------------+------------+
| boot_successes | 33 | 0 | 0 |
| boot_failures | 0 | 15 | 15 |
| BUG:kernel_hang_in_early-boot_stage,last_printk:early_console_in_setup_code | 0 | 15 | 15 |
+-----------------------------------------------------------------------------+------------+------------+------------+
early console in setup code
BUG: kernel hang in early-boot stage, last printk: early console in setup code
Linux version 4.17.0-rc7-00007-g72041f9 #1
Command line: root=/dev/ram0 hung_task_panic=1 debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 net.ifnames=0 printk.devkmsg=on panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 drbd.minor_count=8 systemd.log_level=err ignore_loglevel console=tty0 earlyprintk=ttyS0,115200 console=ttyS0,115200 vga=normal rw link=/kbuild-tests/run-queue/yocto-ivb41/x86_64-randconfig-s2-06231218/linux-devel:devel-catchup-201806231126:72041f9847abb05b9d4d7dea17631b579191ca99:bisect-linux-30/.vmlinuz-72041f9847abb05b9d4d7dea17631b579191ca99-20180628161416-11:yocto-ivb41-109 branch=linux-devel/devel-catchup-201806231126 BOOT_IMAGE=/pkg/linux/x86_64-randconfig-s2-06231218/gcc-6/72041f9847abb05b9d4d7dea17631b579191ca99/vmlinuz-4.17.0-rc7-00007-g72041f9 drbd.minor_count=8 rcuperf.shutdown=0
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start b0d0dca1dd65be7ce2dcb1a4904bffce4806957b ce397d215ccd07b8ae3f71db689aedb85d56ab40 --
git bisect good fe789f020d3b3b46f6603278462f244cc75e066f # 08:29 G 11 0 11 13 Merge 'linux-review/Stafford-Horne/Fix-GCC-Wstringop-truncation-warnings/20180623-101155' into devel-catchup-201806231126
git bisect good e9f8304a2ad9d5498841cdc3a82d015bd25c46d5 # 08:43 G 11 0 11 11 Merge 'platform-drivers-x86/review-dvhart' into devel-catchup-201806231126
git bisect bad ec08f5ad440d3bad127fa5850900336cca324509 # 09:08 B 0 11 24 0 Merge 'linux-review/Palmer-Dabbelt/MAINTAINERS-Add-Daniel-Lustig-as-a-LKMM-reviewer/20180623-051937' into devel-catchup-201806231126
git bisect bad b3e16d367003523579bcad96766d7a9e28f7e70a # 09:35 B 0 4 17 0 Merge 'drm-tip/drm-tip' into devel-catchup-201806231126
git bisect good c238ad62588981b3e83e1ba41bf5ec57b8875dd4 # 10:10 G 11 0 0 0 drm/i915/psr: fix copy-paste error with setting of tp2_wakeup_time_us
git bisect good a21daa88d4f08c959a36ad9760df045407a080e5 # 10:54 G 11 0 0 0 drm/amdgpu: Use correct enum to set powergating state
git bisect good e2810a7167df14c762e085fae5aade38425b71bf # 11:38 G 11 0 0 2 drm/rockchip: vop: split out core clock enablement into separate functions
git bisect good 8819c5ee1c023a4af292a54b3a6fcf9250005bf3 # 12:36 G 11 0 0 0 Merge remote-tracking branch 'sound/for-linus' into drm-tip
git bisect good f159f390e1287a27abf4b9b7b6ce729c020c83d9 # 13:36 G 11 0 0 0 Merge remote-tracking branch 'sound/for-next' into drm-tip
git bisect bad 7d94a8e74a759c96d6feb6c438f5254f209fddf6 # 14:40 B 0 11 24 0 net/sch_generic: Shut up noise
git bisect good 929a683bdff76cd9ea0ea39cc160f8f7e4540fe0 # 15:02 G 11 0 0 0 lockdep: Up MAX_LOCKDEP_CHAINS
git bisect good 6e56dae35f697cf41c39d4ba370ff5b8bda8c5c7 # 15:33 G 10 0 0 0 kernel/panic: Repeat the line and caller information at the end of the OOPS
git bisect bad e2ea2db1734a0e38b89e4d706b5f9ad9f73b1543 # 15:57 B 0 3 16 0 mei: discard messages from not connected client during power down.
git bisect bad 72041f9847abb05b9d4d7dea17631b579191ca99 # 16:20 B 0 1 14 0 RFC: debugobjects: capture stack traces at _init() time
# first bad commit: [72041f9847abb05b9d4d7dea17631b579191ca99] RFC: debugobjects: capture stack traces at _init() time
git bisect good 6e56dae35f697cf41c39d4ba370ff5b8bda8c5c7 # 16:34 G 31 0 0 2 kernel/panic: Repeat the line and caller information at the end of the OOPS
# extra tests with debug options
git bisect bad 72041f9847abb05b9d4d7dea17631b579191ca99 # 17:13 B 0 2 15 0 RFC: debugobjects: capture stack traces at _init() time
# extra tests on HEAD of linux-devel/devel-catchup-201806231126
git bisect bad b0d0dca1dd65be7ce2dcb1a4904bffce4806957b # 17:18 B 0 13 29 0 0day head guard for 'devel-catchup-201806231126'
# extra tests on tree/branch drm-intel/topic/core-for-CI
git bisect bad b1ee8b0d945633e4165f9e160af4cda8be6497f5 # 18:04 B 0 2 15 0 Documentation: e100: Fix docs build error
# extra tests with first bad commit reverted
git bisect good 13dbd0ce953185223d4fb4ab4c0d1e6a92adb4bc # 18:30 G 10 0 0 0 Revert "RFC: debugobjects: capture stack traces at _init() time"
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/lkp Intel Corporation
[lkp-robot] 00eebefa74 [ 26.613362] Out of memory: Kill process 628 (trinity-c1) score 602 or sacrifice child
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://github.com/0day-ci/linux/commits/Amritha-Nambiar/Symmetric-queue-...
commit 00eebefa747d87360baaa46e15157b0cddce5607
Author: Amritha Nambiar <amritha.nambiar(a)intel.com>
AuthorDate: Wed Jun 27 15:31:23 2018 -0700
Commit: 0day robot <lkp(a)intel.com>
CommitDate: Thu Jun 28 11:36:07 2018 +0800
net: Use static_key for XPS maps
Use static_key for XPS maps to reduce the cost of extra map checks,
similar to how it is used for RPS and RFS. This includes static_key
'xps_needed' for XPS and another for 'xps_rxqs_needed' for XPS using
Rx queues map.
Signed-off-by: Amritha Nambiar <amritha.nambiar(a)intel.com>
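The optimisation boils down to guarding the per-device map dereference
with a cheap global key that is only enabled while at least one map is
configured. Below is a stand-alone model of the semantics; the real
patch uses the kernel's static_key/jump-label machinery, so the disabled
case is a patched no-op branch rather than the plain counter used here.

#include <stdio.h>

struct xps_map { int queue; };

static int xps_needed;                  /* models the static_key refcount */

struct net_device {
    struct xps_map *xps_map;            /* NULL while XPS is not configured */
};

static void xps_map_set(struct net_device *dev, struct xps_map *map)
{
    if (map && !dev->xps_map)
        xps_needed++;                   /* static_key_slow_inc() in the kernel */
    else if (!map && dev->xps_map)
        xps_needed--;                   /* static_key_slow_dec() in the kernel */
    dev->xps_map = map;
}

static int pick_tx_queue(struct net_device *dev, int fallback)
{
    /* fast path: skip the map check entirely while no map exists anywhere */
    if (xps_needed && dev->xps_map)
        return dev->xps_map->queue;
    return fallback;
}

int main(void)
{
    struct net_device dev = { 0 };
    struct xps_map map = { .queue = 3 };

    printf("no map:   queue %d\n", pick_tx_queue(&dev, 0));
    xps_map_set(&dev, &map);
    printf("with map: queue %d\n", pick_tx_queue(&dev, 0));
    return 0;
}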
b156e46242 net: Refactor XPS for CPUs and Rx queues
00eebefa74 net: Use static_key for XPS maps
369ff28773 Documentation: Add explanation for XPS using Rx-queue(s) map
+------------------------------------------------------------------+------------+------------+------------+
| | b156e46242 | 00eebefa74 | 369ff28773 |
+------------------------------------------------------------------+------------+------------+------------+
| boot_successes | 35 | 4 | 4 |
| boot_failures | 0 | 10 | 10 |
| WARNING:at_kernel/jump_label.c:#__static_key_slow_dec_cpuslocked | 0 | 10 | 10 |
| EIP:__static_key_slow_dec_cpuslocked | 0 | 10 | 10 |
+------------------------------------------------------------------+------------+------------+------------+
[ 15.905591] sock: process `trinity-main' is using obsolete setsockopt SO_BSDCOMPAT
[ 15.945109] VFS: Warning: trinity-c2 using old stat() call. Recompile your binary.
[child0:627] init_module (128) returned ENOSYS, marking as inactive.
[child0:623] vm86old (113) returned ENOSYS, marking as inactive.
[ 26.471491] CPU: 0 PID: 628 Comm: trinity-c1 Tainted: G T 4.18.0-rc2-00152-g00eebef #1
[ 26.473554] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 26.475538] Call Trace:
[ 26.476374] dump_stack+0x58/0x76
[ 26.477319] dump_header+0x64/0x29c
[ 26.478231] ? ___ratelimit+0x7a/0x110
[ 26.479081] oom_kill_process+0x20d/0x450
[ 26.479973] ? oom_badness+0x78/0x180
[ 26.480949] ? out_of_memory+0x27e/0x3c0
[ 26.481981] out_of_memory+0xbc/0x3c0
[ 26.482970] ? out_of_memory+0x1ec/0x3c0
[ 26.483971] ? __alloc_pages_nodemask+0x783/0xc40
[ 26.485070] __alloc_pages_nodemask+0xae5/0xc40
[ 26.486173] shmem_alloc_page+0x4a/0x60
[ 26.487122] shmem_getpage_gfp+0x3d9/0x8b0
[ 26.488073] ? __perf_sw_event+0x43/0x80
[ 26.489017] ? __lock_acquire+0x4e6/0x920
[ 26.490020] ? kvm_read_and_reset_pf_reason+0x30/0x30
[ 26.491172] shmem_getpage+0x34/0x40
[ 26.492139] shmem_write_begin+0x3a/0x80
[ 26.493151] generic_perform_write+0x9d/0x190
[ 26.494103] __generic_file_write_iter+0x1b9/0x200
[ 26.495108] generic_file_write_iter+0x22b/0x350
[ 26.496113] do_iter_readv_writev+0x1a4/0x1c0
[ 26.497085] do_iter_write+0x95/0x1f0
[ 26.497971] ? vfs_writev+0xc5/0xf0
[ 26.498918] vfs_writev+0x7c/0xf0
[ 26.499812] ? mutex_lock_nested+0x20/0x30
[ 26.500805] ? __fdget_pos+0x36/0x40
[ 26.501734] do_writev+0x51/0xc0
[ 26.502640] sys_writev+0x1b/0x20
[ 26.503561] do_fast_syscall_32+0xa1/0x210
[ 26.504595] entry_SYSENTER_32+0x4e/0x7c
[ 26.505602] EIP: 0xb7f94d61
[ 26.506414] Code: c1 16 f6 ff ff 89 e5 8b 55 08 85 d2 8b 81 5c cd ff ff 74 02 89 02 5d c3 8b 0c 24 c3 8b 1c 24 c3 90 51 52 55 89 e5 0f 34 cd 80 <5d> 5a 59 c3 90 90 90 90 8d 76 00 58 b8 77 00 00 00 cd 80 90 8d 76
[ 26.510278] EAX: ffffffda EBX: 00000142 ECX: 08d41a18 EDX: 0000009d
[ 26.511591] ESI: fffffff7 EDI: 00000000 EBP: 0000005c ESP: bfed984c
[ 26.512931] DS: 007b ES: 007b FS: 0000 GS: 0033 SS: 007b EFLAGS: 00000292
[child0:623] pkey_alloc (381) returned ENOSYS, marking as inactive.
[ 26.517606] active_file:0 inactive_file:0 isolated_file:0
[ 26.517606] unevictable:5422 dirty:0 writeback:0 unstable:0
[ 26.517606] slab_reclaimable:4423 slab_unreclaimable:2781
[ 26.517606] mapped:16907 shmem:35233 pagetables:465 bounce:0
[ 26.517606] free:657 free_pcp:84 free_cma:0
[child0:623] vm86 (166) returned ENOSYS, marking as inactive.
[ 26.529672] DMA free:916kB min:132kB low:164kB high:196kB active_anon:0kB inactive_anon:14976kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15916kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[ 26.534659] lowmem_reserve[]: 0 198 198 198
[ 26.535625] Normal free:1712kB min:1732kB low:2164kB high:2596kB active_anon:20848kB inactive_anon:120632kB active_file:0kB inactive_file:0kB unevictable:21688kB writepending:0kB present:245616kB managed:207932kB mlocked:944kB kernel_stack:744kB pagetables:1860kB bounce:0kB free_pcp:336kB local_pcp:336kB free_cma:0kB
[ 26.541696] lowmem_reserve[]: 0 0 0 0
[ 26.542721] DMA: 1*4kB (U) 2*8kB (UM) 0*16kB 2*32kB (UM) 3*64kB (UM) 1*128kB (U) 2*256kB (UM) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 916kB
[ 26.545330] Normal: 0*4kB 0*8kB 1*16kB (M) 29*32kB (UE) 12*64kB (E) 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 1712kB
[ 26.547861] 40422 total pagecache pages
[ 26.549061] 65402 pages RAM
[ 26.549936] 0 pages HighMem/MovableOnly
[ 26.550842] 9440 pages reserved
[ 26.551655] 0 pages cma reserved
[ 26.552465] 0 pages hwpoisoned
[ 26.553228] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 04f34c66133417727f849c279b4563b82025e155 7daf201d7fe8334e2d2364d4e8ed3394ec9af819 --
git bisect good 7790ba463304604ea25a34b0d8711bfb7bcf4362 # 14:31 G 10 0 4 4 Merge 'linux-review/Brad-Love/em28xx-Disconnect-oops-fix-and-cleanup/20180628-011131' into devel-spot-201806281137
git bisect good 8fcdae7f8c5b20936b18f967085f57d8abc5d9a1 # 14:54 G 11 0 4 4 Merge 'nsekhar-davinci/acked' into devel-spot-201806281137
git bisect good 353902890c19a1b02a9c7d5d9479b945ceb8b879 # 15:09 G 11 0 2 2 Merge 'gfs2/iomap-write' into devel-spot-201806281137
git bisect good 13ee9874c94191bd9f19b742a96e11ccef50af4b # 15:41 G 10 0 1 1 Merge 'linux-review/Chengguang-Xu/code-cleanups-for-btrfs_get_acl/20180627-122449' into devel-spot-201806281137
git bisect good 1024f3d25d091f05e0000420f29466b2a3fb821a # 15:59 G 10 0 2 2 Merge 'linux-review/Anson-Huang/nvmem-imx-ocotp-add-support-for-imx6sll/20180627-113547' into devel-spot-201806281137
git bisect bad 3a06c6e80fd08bf1ae6b96564589498aa3867b80 # 16:30 B 1 1 1 1 Merge 'linux-review/Amritha-Nambiar/Symmetric-queue-selection-using-XPS-for-Rx-queues/20180628-113603' into devel-spot-201806281137
git bisect good 1a8e39e0420f82a0ce29bf636e1eea75170db7bf # 16:46 G 11 0 1 1 Merge 'linux-review/Alexei-Starovoitov/bpfilter-include-bpfilter_umh-in-assembly-instead-of-using-objcopy/20180627-111633' into devel-spot-201806281137
git bisect bad 00eebefa747d87360baaa46e15157b0cddce5607 # 17:05 B 0 2 15 1 net: Use static_key for XPS maps
git bisect good b156e4624229e1a1d7d2fc45c7bfc687da82108c # 17:19 G 11 0 1 1 net: Refactor XPS for CPUs and Rx queues
# first bad commit: [00eebefa747d87360baaa46e15157b0cddce5607] net: Use static_key for XPS maps
git bisect good b156e4624229e1a1d7d2fc45c7bfc687da82108c # 17:24 G 33 0 5 6 net: Refactor XPS for CPUs and Rx queues
# extra tests with debug options
git bisect bad 00eebefa747d87360baaa46e15157b0cddce5607 # 17:35 B 1 5 1 1 net: Use static_key for XPS maps
# extra tests on HEAD of linux-devel/devel-spot-201806281137
git bisect bad 04f34c66133417727f849c279b4563b82025e155 # 17:41 B 0 12 28 0 0day head guard for 'devel-spot-201806281137'
# extra tests on tree/branch linux-review/Amritha-Nambiar/Symmetric-queue-selection-using-XPS-for-Rx-queues/20180628-113603
git bisect bad 369ff2877369109a5faee4ce6b414d0104eaa381 # 18:05 B 0 1 14 0 Documentation: Add explanation for XPS using Rx-queue(s) map
# extra tests with first bad commit reverted
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/lkp Intel Corporation
[lkp-robot] b1ff47aace [ 22.758937] WARNING: CPU: 0 PID: 566 at kernel/jump_label.c:388 __jump_label_update
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://github.com/0day-ci/linux/commits/Ard-Biesheuvel/add-support-for-r...
commit b1ff47aacea95e5be1bedf2aee740395b52f4591
Author: Ard Biesheuvel <ard.biesheuvel(a)linaro.org>
AuthorDate: Wed Jun 27 18:06:04 2018 +0200
Commit: 0day robot <lkp(a)intel.com>
CommitDate: Thu Jun 28 02:12:48 2018 +0800
x86/kernel: jump_table: use relative references
Similar to the arm64 case, 64-bit x86 can benefit from using 32-bit
relative references rather than 64-bit absolute ones when emitting
struct jump_entry instances. Not only does this reduce the memory
footprint of the entries themselves by 50%, it also removes the need
for carrying relocation metadata on relocatable builds (i.e., for KASLR),
which saves a fair chunk of .init space as well (although the savings
are not as dramatic as on arm64).
Signed-off-by: Ard Biesheuvel <ard.biesheuvel(a)linaro.org>
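The saving comes from storing each reference as a 32-bit offset relative
to the entry itself; the absolute address is recomputed at runtime, and
because entry and target move together the offset needs no relocation
when the image is shifted for KASLR. A stand-alone illustration (not the
actual struct jump_entry layout from the patch):

#include <stdint.h>
#include <stdio.h>

struct rel_entry {
    int32_t code;        /* target address, stored relative to &entry->code */
};

/* static, so the entry lives inside the image, near the code it points at */
static struct rel_entry entry;

static void some_code(void) { }

static void rel_entry_set(struct rel_entry *e, const void *target)
{
    e->code = (int32_t)((intptr_t)target - (intptr_t)&e->code);
}

static const void *rel_entry_get(const struct rel_entry *e)
{
    return (const void *)((intptr_t)&e->code + e->code);
}

int main(void)
{
    rel_entry_set(&entry, (const void *)some_code);
    printf("entry is %zu bytes; target recovered: %s\n",
           sizeof(entry),
           rel_entry_get(&entry) == (const void *)some_code ? "yes" : "no");
    return 0;
}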
1843c4017f x86: jump_label: switch to jump_entry accessors
b1ff47aace x86/kernel: jump_table: use relative references
+-----------------------------------------------------+------------+------------+
| | 1843c4017f | b1ff47aace |
+-----------------------------------------------------+------------+------------+
| boot_successes | 172 | 52 |
| boot_failures | 0 | 10 |
| WARNING:at_kernel/jump_label.c:#__jump_label_update | 0 | 9 |
| RIP:__jump_label_update | 0 | 10 |
+-----------------------------------------------------+------------+------------+
[ 21.372732] rcu-perf: rcu_perf_writer 0 has 100 measurements
[ 21.480073] Dumping ftrace buffer:
[ 21.480733] (ftrace buffer empty)
[ 21.481338] rcu-perf: Test complete
01 00 00 00 50 00 00 00 0a 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 32 be d2 0f 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 02 00 00 00 00 00 00 00 40 00 00 00 00 00 35 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 00 00 00 00 00 00 00 02 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 15 00 00 00 00 00 00 00 00 00 00 00 00
[ 22.758937] WARNING: CPU: 0 PID: 566 at kernel/jump_label.c:388 __jump_label_update+0xca/0x140
[ 22.761369] Modules linked in:
[ 22.761883] CPU: 0 PID: 566 Comm: trinity-main Not tainted 4.18.0-rc2-00124-gb1ff47a #1
[ 22.763202] RIP: 0010:__jump_label_update+0xca/0x140
[ 22.764022] Code: 00 31 d2 be 01 00 00 00 48 c7 c7 98 88 73 b4 c6 05 49 95 f1 00 01 e8 95 42 fd ff 48 63 33 48 c7 c7 81 3b 46 b4 e8 96 6b ef ff <0f> 0b b9 01 00 00 00 31 d2 be 01 00 00 00 48 c7 c7 68 88 73 b4 e8
[ 22.767287] RSP: 0018:ffff88001caf3ca0 EFLAGS: 00010286
[ 22.768148] RAX: 0000000000000022 RBX: ffffffffb46e7f14 RCX: 0000000000000000
[ 22.769309] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffffb47202c8
[ 22.770478] RBP: ffffffffb46e0108 R08: 0000000055185df5 R09: 0000000000000000
[ 22.771631] R10: ffff88001caf3cd8 R11: 0000000000000000 R12: ffffffffb46ee79c
[ 22.772783] R13: ffffffffb46e7f1c R14: ffffffffb46e7f28 R15: ffffffffb46e7f1c
[ 22.773936] FS: 00000000020da880(0000) GS:ffff88001f800000(0000) knlGS:0000000000000000
[ 22.775257] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 22.776184] CR2: 0000000002289068 CR3: 0000000018d4e002 CR4: 00000000001606b0
[ 22.777341] Call Trace:
[ 22.777765] static_key_slow_inc_cpuslocked+0xbe/0xf0
[ 22.778618] ? try_to_run_init_process+0x30/0x30
[ 22.779380] tracepoint_probe_register_prio+0x2fa/0x3d0
[ 22.780234] perf_trace_event_init+0x18d/0x240
[ 22.780969] perf_trace_init+0x5d/0xa0
[ 22.781599] perf_tp_event_init+0x1b/0x40
[ 22.782273] perf_try_init_event+0x6e/0xe0
[ 22.782954] perf_event_alloc+0x757/0xd80
[ 22.783628] __do_sys_perf_event_open+0x396/0xef0
[ 22.784407] ? lock_acquire+0x1cf/0x200
[ 22.785037] ? find_held_lock+0x2d/0x90
[ 22.785680] do_syscall_64+0x2e6/0x330
[ 22.786324] entry_SYSCALL_64_after_hwframe+0x49/0xbe
[ 22.787146] RIP: 0033:0x457389
[ 22.787661] Code: 00 f3 c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 40 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 0f 83 2b 84 00 00 c3 66 2e 0f 1f 84 00 00 00 00
[ 22.790919] RSP: 002b:00007ffd6f19ee58 EFLAGS: 00000246 ORIG_RAX: 000000000000012a
[ 22.792142] RAX: ffffffffffffffda RBX: 00007fa51074b000 RCX: 0000000000457389
[ 22.793288] RDX: ffffffffffffffff RSI: 0000000000000000 RDI: 0000000002289d70
[ 22.794458] RBP: 0000000000000000 R08: 0000000000000008 R09: 0000000000000123
[ 22.795611] R10: ffffffffffffffff R11: 0000000000000246 R12: 0000000000000007
[ 22.796762] R13: 00000000004bab40 R14: 0000000000000072 R15: ffffffffffffffff
[ 22.797919] irq event stamp: 680132
[ 22.798525] hardirqs last enabled at (680131): [<ffffffffb3727103>] console_unlock+0x583/0x630
[ 22.799926] hardirqs last disabled at (680132): [<ffffffffb3e0100f>] error_entry+0x7f/0x100
[ 22.801263] softirqs last enabled at (672910): [<ffffffffb4000550>] __do_softirq+0x550/0x5bc
[ 22.802654] softirqs last disabled at (672881): [<ffffffffb36c4917>] irq_exit+0x77/0xa0
[ 22.803943] ---[ end trace 0765a1c867ed6a08 ]---
01 00
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 79fe3ae1246048dfeff59316c9f6bcdadb7d67bb 7daf201d7fe8334e2d2364d4e8ed3394ec9af819 --
git bisect bad 9f415510a2ff51a2f3ad1920169a2f25d5fe41fa # 05:24 B 22 2 0 0 Merge 'linux-review/Toshi-Kani/fix-free-pmd-pte-page-handlings-on-x86/20180628-012915' into devel-catchup-201806280251
git bisect bad 77bfb873850dcccd8c703a5016b7a57c668fc860 # 06:02 B 7 1 0 0 Merge 'drm-tip/drm-tip' into devel-catchup-201806280251
git bisect good 2c35829a988ee20c108aa07ac811f58332b5875b # 06:37 G 53 0 0 0 Merge 'linux-review/Michael-Straube/staging-rtl8723bs-fix-comparsion-to-NULL-coding-style/20180628-024336' into devel-catchup-201806280251
git bisect bad be592644c45daf187fe34ff25ec5726f0b669bf5 # 07:11 B 24 3 0 0 Merge 'linux-review/Ard-Biesheuvel/add-support-for-relative-references-in-jump-tables/20180628-021246' into devel-catchup-201806280251
git bisect good cb521439b399b4894339d92df553de497a849f78 # 07:31 G 55 0 3 3 Merge 'linux-review/Logan-Gunthorpe/PCI-Make-specifying-PCI-devices-in-kernel-parameters-reusable/20180628-021138' into devel-catchup-201806280251
git bisect good cc413955c6bb8d5708484cfb9d74349216203eb6 # 07:49 G 54 0 1 1 Merge 'linux-review/Chang-S-Bae/x86-infrastructure-to-enable-FSGSBASE/20180628-021556' into devel-catchup-201806280251
git bisect good a59a2d81d82abf2f670d9567b1737533a32938c2 # 08:03 G 58 0 0 0 arm64/kernel: jump_label: switch to relative references
git bisect bad b1ff47aacea95e5be1bedf2aee740395b52f4591 # 08:21 B 5 1 0 0 x86/kernel: jump_table: use relative references
git bisect good 1843c4017f9e9098dd8e5aee648de2cd10bda887 # 08:43 G 54 0 0 1 x86: jump_label: switch to jump_entry accessors
# first bad commit: [b1ff47aacea95e5be1bedf2aee740395b52f4591] x86/kernel: jump_table: use relative references
git bisect good 1843c4017f9e9098dd8e5aee648de2cd10bda887 # 08:54 G 168 0 3 4 x86: jump_label: switch to jump_entry accessors
# extra tests with debug options
git bisect bad b1ff47aacea95e5be1bedf2aee740395b52f4591 # 09:12 B 7 1 0 0 x86/kernel: jump_table: use relative references
# extra tests on HEAD of linux-devel/devel-catchup-201806280251
git bisect bad 79fe3ae1246048dfeff59316c9f6bcdadb7d67bb # 09:12 B 296 43 0 5 0day head guard for 'devel-catchup-201806280251'
# extra tests on tree/branch linux-review/Ard-Biesheuvel/add-support-for-relative-references-in-jump-tables/20180628-021246
git bisect bad b1ff47aacea95e5be1bedf2aee740395b52f4591 # 09:14 B 48 9 0 1 x86/kernel: jump_table: use relative references
# extra tests with first bad commit reverted
git bisect good 37fde0bf3fc667bfda03658592eb5b77d09fa253 # 09:34 G 58 0 0 0 Revert "x86/kernel: jump_table: use relative references"
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/lkp Intel Corporation
[lkp-robot] [fs] 3deb642f0d: will-it-scale.per_process_ops -8.8% regression
by kernel test robot
Greetings,
FYI, we noticed a -8.8% regression of will-it-scale.per_process_ops due to commit:
commit: 3deb642f0de4c14f37437dd247f9c77839f043f8 ("fs: introduce new ->get_poll_head and ->poll_mask methods")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: will-it-scale
on test machine: 32 threads Intel(R) Xeon(R) CPU E5-2680 0 @ 2.70GHz with 64G memory
with following parameters:
test: poll2
cpufreq_governor: performance
test-description: Will It Scale takes a testcase and runs it from 1 through to n parallel copies to see if the testcase will scale. It builds both a process and threads based test in order to see any differences between the two.
test-url: https://github.com/antonblanchard/will-it-scale
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/tbox_group/test/testcase:
gcc-7/performance/x86_64-rhel-7.2/debian-x86_64-2016-08-31.cgz/lkp-sb03/poll2/will-it-scale
commit:
9965ed174e ("fs: add new vfs_poll and file_can_poll helpers")
3deb642f0d ("fs: introduce new ->get_poll_head and ->poll_mask methods")
9965ed174e7d3889 3deb642f0de4c14f37437dd247
---------------- --------------------------
%stddev %change %stddev
\ | \
501456 -8.8% 457120 will-it-scale.per_process_ops
244715 -2.3% 238978 will-it-scale.per_thread_ops
0.53 ± 2% +2.4% 0.54 will-it-scale.scalability
310.44 +0.0% 310.44 will-it-scale.time.elapsed_time
310.44 +0.0% 310.44 will-it-scale.time.elapsed_time.max
15775 ± 5% +13.0% 17818 ± 10% will-it-scale.time.involuntary_context_switches
9911 +0.2% 9931 will-it-scale.time.maximum_resident_set_size
17178 +0.2% 17218 will-it-scale.time.minor_page_faults
4096 +0.0% 4096 will-it-scale.time.page_size
806.50 +0.1% 807.00 will-it-scale.time.percent_of_cpu_this_job_got
2330 +0.1% 2332 will-it-scale.time.system_time
175.19 -1.2% 173.14 ± 2% will-it-scale.time.user_time
1648 ± 15% +6.5% 1755 ± 13% will-it-scale.time.voluntary_context_switches
52370410 -6.4% 49024375 will-it-scale.workload
109841 ± 7% +6.4% 116842 ± 12% interrupts.CAL:Function_call_interrupts
25.62 +0.3% 25.69 boot-time.boot
15.04 +0.2% 15.07 boot-time.dhcp
766.39 +0.7% 771.95 boot-time.idle
15.70 +0.2% 15.73 boot-time.kernel_boot
8462 ± 22% +1.6% 8594 ± 33% softirqs.NET_RX
481569 -8.2% 442215 ± 4% softirqs.RCU
759091 -0.4% 756380 softirqs.SCHED
4791321 ± 2% -1.3% 4729715 ± 2% softirqs.TIMER
49.53 -0.0 49.49 mpstat.cpu.idle%
0.00 ± 14% +0.0 0.00 ± 16% mpstat.cpu.iowait%
0.02 ± 68% +0.0 0.07 ±111% mpstat.cpu.soft%
45.39 +0.3 45.70 mpstat.cpu.sys%
5.05 -0.3 4.74 mpstat.cpu.usr%
192.00 +0.0% 192.00 vmstat.memory.buff
1304198 -0.0% 1303934 vmstat.memory.cache
64246863 +0.0% 64247669 vmstat.memory.free
0.00 -100.0% 0.00 vmstat.procs.b
16.00 +0.0% 16.00 vmstat.procs.r
1882 ± 4% +12.3% 2112 ± 7% vmstat.system.cs
32654 +0.2% 32735 vmstat.system.in
0.00 -100.0% 0.00 numa-numastat.node0.interleave_hit
231408 ± 9% -20.9% 183107 ± 36% numa-numastat.node0.local_node
232476 ± 9% -19.7% 186744 ± 36% numa-numastat.node0.numa_hit
1070 ±164% +239.7% 3637 ± 62% numa-numastat.node0.other_node
0.00 -100.0% 0.00 numa-numastat.node1.interleave_hit
402582 ± 5% +12.0% 450977 ± 14% numa-numastat.node1.local_node
407775 ± 5% +11.2% 453631 ± 14% numa-numastat.node1.numa_hit
5193 ± 34% -48.9% 2656 ± 88% numa-numastat.node1.other_node
2026939 ± 2% +6.0% 2149119 ± 2% cpuidle.C1.time
93645 ± 5% +8.9% 101940 ± 9% cpuidle.C1.usage
5600995 ± 4% +3.3% 5783955 cpuidle.C1E.time
27629 ± 4% +7.5% 29709 ± 12% cpuidle.C1E.usage
4580663 +1.6% 4654736 ± 3% cpuidle.C3.time
14432 ± 2% +2.1% 14740 ± 2% cpuidle.C3.usage
0.00 -100.0% 0.00 cpuidle.C6.time
0.00 -100.0% 0.00 cpuidle.C6.usage
4.874e+09 +0.0% 4.876e+09 cpuidle.C7.time
4979280 -0.0% 4978987 cpuidle.C7.usage
20767 ± 2% +4.6% 21732 ± 5% cpuidle.POLL.time
1874 ± 4% +14.4% 2144 ± 14% cpuidle.POLL.usage
1573 -0.0% 1573 turbostat.Avg_MHz
51.07 -0.0 51.04 turbostat.Busy%
3088 +0.1% 3089 turbostat.Bzy_MHz
91251 ± 5% +8.7% 99209 ± 9% turbostat.C1
0.02 +0.0 0.02 turbostat.C1%
27387 ± 4% +7.5% 29434 ± 12% turbostat.C1E
0.06 ± 7% +0.0 0.06 turbostat.C1E%
14343 ± 2% +2.1% 14650 ± 2% turbostat.C3
0.05 ± 9% +0.0 0.05 ± 9% turbostat.C3%
4978589 -0.0% 4978359 turbostat.C7
48.83 +0.0 48.86 turbostat.C7%
21.60 +0.1% 21.63 turbostat.CPU%c1
27.25 +0.0% 27.26 turbostat.CPU%c3
0.07 ± 5% -3.4% 0.07 turbostat.CPU%c7
121.94 +0.9% 123.02 turbostat.CorWatt
54.75 +2.3% 56.00 ± 4% turbostat.CoreTmp
10226202 +0.2% 10250629 turbostat.IRQ
16.61 -0.1% 16.60 turbostat.Pkg%pc2
0.11 ± 35% -4.8% 0.10 ± 23% turbostat.Pkg%pc3
55.00 ± 3% +1.4% 55.75 ± 4% turbostat.PkgTmp
149.27 +0.7% 150.35 turbostat.PkgWatt
2693 +0.0% 2694 turbostat.TSC_MHz
85638 +1.2% 86662 meminfo.Active
85438 +1.2% 86462 meminfo.Active(anon)
48745 +0.2% 48823 meminfo.AnonHugePages
80292 -0.3% 80037 meminfo.AnonPages
1252146 +0.0% 1252312 meminfo.Cached
170618 ± 7% -2.1% 166957 ± 15% meminfo.CmaFree
204800 +0.0% 204800 meminfo.CmaTotal
32955004 -0.0% 32954996 meminfo.CommitLimit
238145 ± 4% -3.0% 231085 ± 2% meminfo.Committed_AS
62914560 +0.0% 62914560 meminfo.DirectMap1G
6091264 -0.1% 6087168 meminfo.DirectMap2M
159920 ± 7% +2.6% 164016 ± 14% meminfo.DirectMap4k
2048 +0.0% 2048 meminfo.Hugepagesize
9854 -0.4% 9816 meminfo.Inactive
9707 -0.4% 9669 meminfo.Inactive(anon)
8421 +0.3% 8448 meminfo.KernelStack
25445 -0.2% 25385 meminfo.Mapped
63912919 +0.0% 63913474 meminfo.MemAvailable
64246806 +0.0% 64247582 meminfo.MemFree
65910008 -0.0% 65909996 meminfo.MemTotal
1221 ± 57% -100.0% 0.00 meminfo.Mlocked
4138 -0.4% 4122 meminfo.PageTables
52017 -0.8% 51577 meminfo.SReclaimable
73138 -0.9% 72470 meminfo.SUnreclaim
15475 ± 3% +3.6% 16025 ± 3% meminfo.Shmem
125156 -0.9% 124048 meminfo.Slab
1236713 -0.0% 1236135 meminfo.Unevictable
3.436e+10 +0.0% 3.436e+10 meminfo.VmallocTotal
4.321e+12 ± 2% +8.0% 4.667e+12 ± 3% perf-stat.branch-instructions
0.27 ± 2% -0.0 0.25 perf-stat.branch-miss-rate%
1.17e+10 ± 5% -0.7% 1.161e+10 ± 4% perf-stat.branch-misses
8.85 ± 5% +0.2 9.05 ± 8% perf-stat.cache-miss-rate%
2.157e+08 ± 6% -2.4% 2.106e+08 ± 4% perf-stat.cache-misses
2.436e+09 ± 3% -3.9% 2.342e+09 ± 8% perf-stat.cache-references
575516 ± 4% +13.6% 653927 ± 8% perf-stat.context-switches
0.79 ± 2% -6.7% 0.74 ± 2% perf-stat.cpi
1.561e+13 +1.9% 1.59e+13 perf-stat.cpu-cycles
9256 -1.5% 9117 ± 4% perf-stat.cpu-migrations
0.73 ± 13% -0.3 0.44 ± 39% perf-stat.dTLB-load-miss-rate%
3.463e+10 ± 15% -30.7% 2.401e+10 ± 34% perf-stat.dTLB-load-misses
4.709e+12 ± 2% +15.7% 5.451e+12 ± 4% perf-stat.dTLB-loads
0.08 ± 59% -0.1 0.02 ± 41% perf-stat.dTLB-store-miss-rate%
2.096e+09 ± 61% -63.9% 7.557e+08 ± 37% perf-stat.dTLB-store-misses
2.745e+12 +20.0% 3.294e+12 ± 3% perf-stat.dTLB-stores
83.18 ± 2% +2.4 85.55 perf-stat.iTLB-load-miss-rate%
2.08e+09 ± 5% -7.5% 1.924e+09 ± 3% perf-stat.iTLB-load-misses
4.216e+08 ± 15% -22.9% 3.251e+08 ± 7% perf-stat.iTLB-loads
1.973e+13 ± 3% +9.2% 2.154e+13 ± 3% perf-stat.instructions
9503 ± 3% +17.9% 11203 ± 4% perf-stat.instructions-per-iTLB-miss
1.26 ± 2% +7.2% 1.35 ± 2% perf-stat.ipc
778803 -0.0% 778741 perf-stat.minor-faults
27.27 ± 5% +0.4 27.63 ± 8% perf-stat.node-load-miss-rate%
20008915 ± 18% -0.7% 19861107 ± 14% perf-stat.node-load-misses
53683432 ± 22% -3.6% 51734389 ± 5% perf-stat.node-loads
21.47 ± 6% +2.5 23.93 ± 9% perf-stat.node-store-miss-rate%
44312543 ± 7% +14.7% 50804799 ± 16% perf-stat.node-store-misses
1.619e+08 -0.9% 1.605e+08 ± 7% perf-stat.node-stores
778804 -0.0% 778753 perf-stat.page-faults
376850 ± 3% +16.6% 439328 ± 3% perf-stat.path-length
21359 +1.2% 21613 proc-vmstat.nr_active_anon
20080 -0.4% 20009 proc-vmstat.nr_anon_pages
1594833 +0.0% 1594853 proc-vmstat.nr_dirty_background_threshold
3193566 +0.0% 3193607 proc-vmstat.nr_dirty_threshold
313080 +0.0% 313125 proc-vmstat.nr_file_pages
42650 ± 7% -2.1% 41736 ± 15% proc-vmstat.nr_free_cma
16061673 +0.0% 16061877 proc-vmstat.nr_free_pages
2426 -0.3% 2418 proc-vmstat.nr_inactive_anon
22991 -0.5% 22885 proc-vmstat.nr_indirectly_reclaimable
8434 +0.2% 8453 proc-vmstat.nr_kernel_stack
6502 -0.2% 6486 proc-vmstat.nr_mapped
304.50 ± 57% -100.0% 0.00 proc-vmstat.nr_mlock
1033 -0.4% 1029 proc-vmstat.nr_page_table_pages
3864 ± 3% +3.7% 4005 ± 3% proc-vmstat.nr_shmem
13003 -0.8% 12893 proc-vmstat.nr_slab_reclaimable
18283 -0.9% 18116 proc-vmstat.nr_slab_unreclaimable
309177 -0.0% 309033 proc-vmstat.nr_unevictable
21431 +1.2% 21694 proc-vmstat.nr_zone_active_anon
2426 -0.3% 2418 proc-vmstat.nr_zone_inactive_anon
309178 -0.0% 309033 proc-vmstat.nr_zone_unevictable
1552 ± 11% +2.8% 1596 ± 6% proc-vmstat.numa_hint_faults
1424 ± 13% +1.7% 1448 ± 7% proc-vmstat.numa_hint_faults_local
663868 +0.4% 666440 proc-vmstat.numa_hit
657596 +0.4% 660136 proc-vmstat.numa_local
6271 +0.5% 6303 proc-vmstat.numa_other
1920 ± 9% +1.1% 1941 ± 5% proc-vmstat.numa_pte_updates
1392 ± 15% +14.5% 1593 ± 11% proc-vmstat.pgactivate
221716 ± 5% +10.7% 245443 ± 16% proc-vmstat.pgalloc_movable
449628 ± 2% -4.6% 428813 ± 9% proc-vmstat.pgalloc_normal
798885 +0.1% 799772 proc-vmstat.pgfault
664270 +0.5% 667803 proc-vmstat.pgfree
60396 ± 20% -5.5% 57070 ± 49% numa-meminfo.node0.Active
60246 ± 20% -5.4% 56970 ± 49% numa-meminfo.node0.Active(anon)
38993 ± 24% -6.4% 36496 ± 57% numa-meminfo.node0.AnonHugePages
56097 ± 19% -1.3% 55376 ± 50% numa-meminfo.node0.AnonPages
627504 ± 4% -1.8% 616208 ± 4% numa-meminfo.node0.FilePages
6886 ± 43% +5.6% 7272 ± 46% numa-meminfo.node0.Inactive
6848 ± 45% +5.1% 7199 ± 47% numa-meminfo.node0.Inactive(anon)
4904 ± 6% -9.8% 4422 ± 4% numa-meminfo.node0.KernelStack
13902 ± 14% -0.3% 13858 ± 14% numa-meminfo.node0.Mapped
32051702 +0.0% 32066302 numa-meminfo.node0.MemFree
32914928 +0.0% 32914928 numa-meminfo.node0.MemTotal
863224 ± 3% -1.7% 848624 numa-meminfo.node0.MemUsed
2517 ± 4% -6.2% 2362 ± 24% numa-meminfo.node0.PageTables
28789 ± 8% -5.2% 27293 ± 5% numa-meminfo.node0.SReclaimable
39196 ± 2% +1.5% 39800 ± 3% numa-meminfo.node0.SUnreclaim
11216 ± 30% -19.7% 9009 ± 26% numa-meminfo.node0.Shmem
67986 ± 3% -1.3% 67094 ± 3% numa-meminfo.node0.Slab
616100 ± 4% -1.5% 607027 ± 3% numa-meminfo.node0.Unevictable
25236 ± 46% +17.2% 29585 ± 93% numa-meminfo.node1.Active
25186 ± 47% +17.1% 29485 ± 94% numa-meminfo.node1.Active(anon)
9758 ±100% +26.3% 12322 ±173% numa-meminfo.node1.AnonHugePages
24194 ± 45% +1.9% 24652 ±112% numa-meminfo.node1.AnonPages
624823 ± 4% +1.8% 636288 ± 3% numa-meminfo.node1.FilePages
2967 ±102% -14.3% 2543 ±132% numa-meminfo.node1.Inactive
2858 ±108% -13.6% 2468 ±138% numa-meminfo.node1.Inactive(anon)
3519 ± 7% +14.4% 4026 ± 3% numa-meminfo.node1.KernelStack
11548 ± 17% -0.2% 11525 ± 18% numa-meminfo.node1.Mapped
32195068 -0.0% 32181267 numa-meminfo.node1.MemFree
32995080 -0.0% 32995068 numa-meminfo.node1.MemTotal
800010 ± 3% +1.7% 813800 numa-meminfo.node1.MemUsed
1617 ± 6% +8.7% 1758 ± 32% numa-meminfo.node1.PageTables
23225 ± 10% +4.5% 24281 ± 6% numa-meminfo.node1.SReclaimable
33937 ± 4% -3.7% 32668 ± 7% numa-meminfo.node1.SUnreclaim
4248 ± 71% +65.0% 7008 ± 31% numa-meminfo.node1.Shmem
57163 ± 5% -0.4% 56950 ± 5% numa-meminfo.node1.Slab
620612 ± 3% +1.4% 629107 ± 3% numa-meminfo.node1.Unevictable
65548 -0.5% 65250 slabinfo.Acpi-Namespace.active_objs
65549 -0.5% 65251 slabinfo.Acpi-Namespace.num_objs
81755 -0.2% 81607 slabinfo.Acpi-Operand.active_objs
81760 -0.2% 81620 slabinfo.Acpi-Operand.num_objs
879.75 ± 16% +21.7% 1071 ± 6% slabinfo.Acpi-State.active_objs
879.75 ± 16% +21.7% 1071 ± 6% slabinfo.Acpi-State.num_objs
1912 ± 20% +26.7% 2422 ± 5% slabinfo.avtab_node.active_objs
1912 ± 20% +26.7% 2422 ± 5% slabinfo.avtab_node.num_objs
57919 -1.9% 56815 slabinfo.dentry.active_objs
58456 -2.2% 57179 slabinfo.dentry.num_objs
8847 ± 3% -4.4% 8454 ± 2% slabinfo.filp.active_objs
9191 ± 3% -4.6% 8770 ± 2% slabinfo.filp.num_objs
45451 -0.5% 45226 slabinfo.inode_cache.active_objs
45637 -0.5% 45406 slabinfo.inode_cache.num_objs
2382 -2.6% 2321 ± 4% slabinfo.kmalloc-1024.active_objs
3630 -7.8% 3347 ± 6% slabinfo.kmalloc-2048.active_objs
3637 -7.7% 3358 ± 6% slabinfo.kmalloc-2048.num_objs
22142 ± 7% -3.0% 21477 ± 5% slabinfo.kmalloc-32.active_objs
22169 ± 7% -2.8% 21540 ± 5% slabinfo.kmalloc-32.num_objs
5657 ± 6% -7.8% 5214 ± 4% slabinfo.kmalloc-512.active_objs
5802 ± 7% -9.9% 5227 ± 4% slabinfo.kmalloc-512.num_objs
24097 +0.9% 24321 slabinfo.kmalloc-64.active_objs
24299 +0.8% 24492 slabinfo.kmalloc-64.num_objs
6576 ± 4% -2.0% 6445 ± 2% slabinfo.kmalloc-96.active_objs
6598 ± 4% -2.2% 6453 ± 2% slabinfo.kmalloc-96.num_objs
942.25 ± 7% -9.0% 857.75 ± 3% slabinfo.nsproxy.active_objs
942.25 ± 7% -9.0% 857.75 ± 3% slabinfo.nsproxy.num_objs
21213 ± 5% +4.1% 22093 ± 3% slabinfo.pid.active_objs
21226 ± 5% +4.1% 22093 ± 3% slabinfo.pid.num_objs
672.00 ± 16% +23.8% 832.00 ± 9% slabinfo.pool_workqueue.active_objs
672.00 ± 16% +23.8% 832.00 ± 9% slabinfo.pool_workqueue.num_objs
544.00 ± 18% +19.1% 648.00 ± 6% slabinfo.scsi_sense_cache.active_objs
544.00 ± 18% +19.1% 648.00 ± 6% slabinfo.scsi_sense_cache.num_objs
982.75 ± 8% -7.3% 911.50 ± 9% slabinfo.task_group.active_objs
982.75 ± 8% -7.3% 911.50 ± 9% slabinfo.task_group.num_objs
16136 ± 12% +5.1% 16966 ± 5% slabinfo.vm_area_struct.active_objs
16147 ± 12% +5.4% 17017 ± 5% slabinfo.vm_area_struct.num_objs
15065 ± 20% -5.5% 14242 ± 49% numa-vmstat.node0.nr_active_anon
14031 ± 19% -1.3% 13844 ± 50% numa-vmstat.node0.nr_anon_pages
156875 ± 4% -1.8% 154050 ± 4% numa-vmstat.node0.nr_file_pages
8012903 +0.0% 8016566 numa-vmstat.node0.nr_free_pages
1714 ± 44% +5.0% 1799 ± 47% numa-vmstat.node0.nr_inactive_anon
10082 ± 10% +17.2% 11813 ± 14% numa-vmstat.node0.nr_indirectly_reclaimable
4913 ± 6% -10.0% 4422 ± 4% numa-vmstat.node0.nr_kernel_stack
3515 ± 14% -0.4% 3500 ± 14% numa-vmstat.node0.nr_mapped
127.50 ± 57% -100.0% 0.00 numa-vmstat.node0.nr_mlock
629.00 ± 4% -6.2% 590.25 ± 24% numa-vmstat.node0.nr_page_table_pages
2803 ± 30% -19.7% 2251 ± 26% numa-vmstat.node0.nr_shmem
7197 ± 8% -5.2% 6823 ± 5% numa-vmstat.node0.nr_slab_reclaimable
9798 ± 2% +1.5% 9949 ± 3% numa-vmstat.node0.nr_slab_unreclaimable
154024 ± 4% -1.5% 151756 ± 3% numa-vmstat.node0.nr_unevictable
15065 ± 20% -5.5% 14242 ± 49% numa-vmstat.node0.nr_zone_active_anon
1714 ± 44% +5.0% 1799 ± 47% numa-vmstat.node0.nr_zone_inactive_anon
154024 ± 4% -1.5% 151756 ± 3% numa-vmstat.node0.nr_zone_unevictable
474897 ± 2% -8.1% 436556 ± 7% numa-vmstat.node0.numa_hit
166664 +0.1% 166790 numa-vmstat.node0.numa_interleave
473733 ± 2% -8.7% 432687 ± 6% numa-vmstat.node0.numa_local
1163 ±147% +232.5% 3868 ± 58% numa-vmstat.node0.numa_other
6302 ± 46% +17.1% 7378 ± 94% numa-vmstat.node1.nr_active_anon
6058 ± 45% +1.9% 6171 ±112% numa-vmstat.node1.nr_anon_pages
156204 ± 4% +1.8% 159071 ± 3% numa-vmstat.node1.nr_file_pages
42643 ± 7% -2.1% 41729 ± 15% numa-vmstat.node1.nr_free_cma
8048742 -0.0% 8045285 numa-vmstat.node1.nr_free_pages
714.25 ±108% -13.7% 616.75 ±138% numa-vmstat.node1.nr_inactive_anon
12908 ± 8% -14.2% 11071 ± 15% numa-vmstat.node1.nr_indirectly_reclaimable
3522 ± 7% +14.5% 4031 ± 3% numa-vmstat.node1.nr_kernel_stack
2995 ± 17% -0.3% 2987 ± 18% numa-vmstat.node1.nr_mapped
176.00 ± 57% -100.0% 0.00 numa-vmstat.node1.nr_mlock
404.00 ± 6% +8.9% 440.00 ± 32% numa-vmstat.node1.nr_page_table_pages
1060 ± 71% +65.1% 1751 ± 31% numa-vmstat.node1.nr_shmem
5806 ± 10% +4.5% 6070 ± 6% numa-vmstat.node1.nr_slab_reclaimable
8483 ± 4% -3.7% 8166 ± 7% numa-vmstat.node1.nr_slab_unreclaimable
155152 ± 3% +1.4% 157276 ± 3% numa-vmstat.node1.nr_unevictable
6361 ± 46% +17.0% 7440 ± 93% numa-vmstat.node1.nr_zone_active_anon
714.25 ±108% -13.7% 616.75 ±138% numa-vmstat.node1.nr_zone_inactive_anon
155152 ± 3% +1.4% 157276 ± 3% numa-vmstat.node1.nr_zone_unevictable
460308 ± 2% +8.1% 497743 ± 5% numa-vmstat.node1.numa_hit
166767 -0.1% 166558 numa-vmstat.node1.numa_interleave
286720 ± 4% +14.0% 326994 ± 8% numa-vmstat.node1.numa_local
173587 -1.6% 170748 numa-vmstat.node1.numa_other
28.60 ± 75% -65.3% 9.93 ±101% sched_debug.cfs_rq:/.MIN_vruntime.avg
815.14 ± 73% -61.0% 317.83 ±101% sched_debug.cfs_rq:/.MIN_vruntime.max
0.00 +0.0% 0.00 sched_debug.cfs_rq:/.MIN_vruntime.min
149.28 ± 73% -63.0% 55.30 ±101% sched_debug.cfs_rq:/.MIN_vruntime.stddev
50550 +0.2% 50644 sched_debug.cfs_rq:/.exec_clock.avg
106670 ± 3% +1.2% 107917 ± 3% sched_debug.cfs_rq:/.exec_clock.max
18301 ± 32% +1.9% 18652 ± 34% sched_debug.cfs_rq:/.exec_clock.min
20049 ± 3% +1.1% 20268 ± 2% sched_debug.cfs_rq:/.exec_clock.stddev
47381 ± 16% +2.2% 48421 ± 6% sched_debug.cfs_rq:/.load.avg
230674 ± 35% -16.9% 191596 ± 37% sched_debug.cfs_rq:/.load.max
5143 -1.7% 5053 sched_debug.cfs_rq:/.load.min
66397 ± 25% -7.2% 61609 ± 19% sched_debug.cfs_rq:/.load.stddev
54.48 ± 11% +23.6% 67.35 ± 9% sched_debug.cfs_rq:/.load_avg.avg
301.38 ± 3% +7.9% 325.33 ± 7% sched_debug.cfs_rq:/.load_avg.max
6.21 ± 11% -2.0% 6.08 ± 14% sched_debug.cfs_rq:/.load_avg.min
87.81 ± 9% +18.2% 103.80 ± 6% sched_debug.cfs_rq:/.load_avg.stddev
28.62 ± 75% -65.3% 9.93 ±101% sched_debug.cfs_rq:/.max_vruntime.avg
815.64 ± 73% -61.0% 317.83 ±101% sched_debug.cfs_rq:/.max_vruntime.max
0.00 +0.0% 0.00 sched_debug.cfs_rq:/.max_vruntime.min
149.37 ± 73% -63.0% 55.30 ±101% sched_debug.cfs_rq:/.max_vruntime.stddev
975783 +0.2% 977424 sched_debug.cfs_rq:/.min_vruntime.avg
1409368 ± 4% -0.6% 1401106 ± 3% sched_debug.cfs_rq:/.min_vruntime.max
448106 ± 24% +1.7% 455768 ± 23% sched_debug.cfs_rq:/.min_vruntime.min
251304 ± 6% -2.7% 244544 ± 3% sched_debug.cfs_rq:/.min_vruntime.stddev
0.59 ± 5% +1.7% 0.60 sched_debug.cfs_rq:/.nr_running.avg
1.04 ± 6% -4.0% 1.00 sched_debug.cfs_rq:/.nr_running.max
0.17 +0.0% 0.17 sched_debug.cfs_rq:/.nr_running.min
0.38 ± 3% +0.1% 0.38 sched_debug.cfs_rq:/.nr_running.stddev
0.92 ± 3% -6.4% 0.86 sched_debug.cfs_rq:/.nr_spread_over.avg
2.62 ± 11% -34.9% 1.71 ± 22% sched_debug.cfs_rq:/.nr_spread_over.max
0.83 +0.0% 0.83 sched_debug.cfs_rq:/.nr_spread_over.min
0.35 ± 21% -55.4% 0.15 ± 42% sched_debug.cfs_rq:/.nr_spread_over.stddev
2.67 ± 99% +408.5% 13.56 ± 43% sched_debug.cfs_rq:/.removed.load_avg.avg
85.33 ± 99% +98.6% 169.50 sched_debug.cfs_rq:/.removed.load_avg.max
14.85 ±100% +199.0% 44.39 ± 22% sched_debug.cfs_rq:/.removed.load_avg.stddev
123.37 ±100% +405.6% 623.75 ± 43% sched_debug.cfs_rq:/.removed.runnable_sum.avg
3947 ±100% +97.6% 7801 sched_debug.cfs_rq:/.removed.runnable_sum.max
686.90 ±100% +197.3% 2041 ± 22% sched_debug.cfs_rq:/.removed.runnable_sum.stddev
1.05 ±100% +357.8% 4.82 ± 42% sched_debug.cfs_rq:/.removed.util_avg.avg
33.67 ±100% +99.4% 67.12 ± 20% sched_debug.cfs_rq:/.removed.util_avg.max
5.86 ±100% +173.8% 16.04 ± 31% sched_debug.cfs_rq:/.removed.util_avg.stddev
36.99 ± 10% +6.4% 39.35 ± 2% sched_debug.cfs_rq:/.runnable_load_avg.avg
140.17 ± 2% -0.1% 140.00 sched_debug.cfs_rq:/.runnable_load_avg.max
4.96 -2.5% 4.83 sched_debug.cfs_rq:/.runnable_load_avg.min
48.16 ± 7% +4.3% 50.23 sched_debug.cfs_rq:/.runnable_load_avg.stddev
46667 ± 16% +2.0% 47609 ± 7% sched_debug.cfs_rq:/.runnable_weight.avg
225541 ± 35% -17.5% 186040 ± 40% sched_debug.cfs_rq:/.runnable_weight.max
5143 -1.7% 5053 sched_debug.cfs_rq:/.runnable_weight.min
65291 ± 25% -7.4% 60452 ± 21% sched_debug.cfs_rq:/.runnable_weight.stddev
0.02 ±173% -100.0% 0.00 sched_debug.cfs_rq:/.spread.avg
0.50 ±173% -100.0% 0.00 sched_debug.cfs_rq:/.spread.max
0.09 ±173% -100.0% 0.00 sched_debug.cfs_rq:/.spread.stddev
-51577 -21.7% -40360 sched_debug.cfs_rq:/.spread0.avg
382004 ± 29% +0.3% 383321 ± 26% sched_debug.cfs_rq:/.spread0.max
-579250 -3.0% -562009 sched_debug.cfs_rq:/.spread0.min
251311 ± 6% -2.7% 244551 ± 3% sched_debug.cfs_rq:/.spread0.stddev
598.80 ± 2% +1.6% 608.31 sched_debug.cfs_rq:/.util_avg.avg
1173 ± 4% -5.1% 1113 ± 2% sched_debug.cfs_rq:/.util_avg.max
202.08 ± 9% +0.1% 202.29 ± 10% sched_debug.cfs_rq:/.util_avg.min
342.36 ± 2% -1.5% 337.33 sched_debug.cfs_rq:/.util_avg.stddev
323.89 ± 8% -10.1% 291.09 ± 3% sched_debug.cfs_rq:/.util_est_enqueued.avg
716.25 -8.9% 652.42 ± 9% sched_debug.cfs_rq:/.util_est_enqueued.max
46.08 ± 14% +11.2% 51.25 ± 4% sched_debug.cfs_rq:/.util_est_enqueued.min
276.83 -12.0% 243.64 ± 9% sched_debug.cfs_rq:/.util_est_enqueued.stddev
815280 ± 4% -1.5% 802720 ± 3% sched_debug.cpu.avg_idle.avg
993248 -0.8% 985236 sched_debug.cpu.avg_idle.max
268671 ± 37% -10.7% 240022 ± 34% sched_debug.cpu.avg_idle.min
181491 ± 20% +2.7% 186451 ± 13% sched_debug.cpu.avg_idle.stddev
176552 -0.0% 176542 sched_debug.cpu.clock.avg
176555 -0.0% 176545 sched_debug.cpu.clock.max
176548 -0.0% 176538 sched_debug.cpu.clock.min
1.58 ± 7% +15.3% 1.82 ± 11% sched_debug.cpu.clock.stddev
176552 -0.0% 176542 sched_debug.cpu.clock_task.avg
176555 -0.0% 176545 sched_debug.cpu.clock_task.max
176548 -0.0% 176538 sched_debug.cpu.clock_task.min
1.58 ± 7% +15.3% 1.82 ± 11% sched_debug.cpu.clock_task.stddev
27.78 +0.8% 28.01 sched_debug.cpu.cpu_load[0].avg
143.88 ± 2% +2.2% 147.08 ± 6% sched_debug.cpu.cpu_load[0].max
4.96 -2.5% 4.83 sched_debug.cpu.cpu_load[0].min
39.72 +1.8% 40.46 ± 3% sched_debug.cpu.cpu_load[0].stddev
29.30 ± 5% -1.4% 28.89 sched_debug.cpu.cpu_load[1].avg
190.75 ± 7% -2.3% 186.29 ± 2% sched_debug.cpu.cpu_load[1].max
4.96 -2.5% 4.83 sched_debug.cpu.cpu_load[1].min
44.19 ± 8% -1.6% 43.47 sched_debug.cpu.cpu_load[1].stddev
29.46 ± 3% -1.3% 29.07 sched_debug.cpu.cpu_load[2].avg
212.54 ± 5% -2.5% 207.17 sched_debug.cpu.cpu_load[2].max
5.04 -0.8% 5.00 ± 2% sched_debug.cpu.cpu_load[2].min
46.52 ± 6% -1.9% 45.63 sched_debug.cpu.cpu_load[2].stddev
29.15 ± 2% -0.6% 28.98 sched_debug.cpu.cpu_load[3].avg
220.46 ± 3% -2.0% 215.96 sched_debug.cpu.cpu_load[3].max
5.38 ± 2% -1.6% 5.29 ± 6% sched_debug.cpu.cpu_load[3].min
47.42 ± 3% -1.3% 46.82 sched_debug.cpu.cpu_load[3].stddev
28.54 +0.3% 28.63 sched_debug.cpu.cpu_load[4].avg
221.79 -1.2% 219.12 sched_debug.cpu.cpu_load[4].max
4.96 +0.8% 5.00 ± 4% sched_debug.cpu.cpu_load[4].min
47.78 -0.4% 47.61 sched_debug.cpu.cpu_load[4].stddev
3147 -0.4% 3133 sched_debug.cpu.curr->pid.avg
5202 +0.0% 5203 sched_debug.cpu.curr->pid.max
1417 -0.0% 1416 sched_debug.cpu.curr->pid.min
1416 +0.4% 1422 sched_debug.cpu.curr->pid.stddev
33530 ± 7% -3.5% 32352 sched_debug.cpu.load.avg
191933 ± 40% -21.7% 150339 ± 2% sched_debug.cpu.load.max
5143 -1.7% 5053 sched_debug.cpu.load.min
50686 ± 25% -13.2% 43983 sched_debug.cpu.load.stddev
500000 +0.0% 500000 sched_debug.cpu.max_idle_balance_cost.avg
500000 +0.0% 500000 sched_debug.cpu.max_idle_balance_cost.max
500000 +0.0% 500000 sched_debug.cpu.max_idle_balance_cost.min
4294 -0.0% 4294 sched_debug.cpu.next_balance.avg
4294 -0.0% 4294 sched_debug.cpu.next_balance.max
4294 -0.0% 4294 sched_debug.cpu.next_balance.min
0.00 ± 31% +0.8% 0.00 ± 31% sched_debug.cpu.next_balance.stddev
156192 -0.0% 156167 sched_debug.cpu.nr_load_updates.avg
160511 -0.2% 160148 sched_debug.cpu.nr_load_updates.max
155075 -0.9% 153755 sched_debug.cpu.nr_load_updates.min
983.74 ± 3% +4.1% 1024 ± 3% sched_debug.cpu.nr_load_updates.stddev
0.55 ± 2% -0.2% 0.55 sched_debug.cpu.nr_running.avg
1.46 ± 9% -2.9% 1.42 ± 5% sched_debug.cpu.nr_running.max
0.17 +0.0% 0.17 sched_debug.cpu.nr_running.min
0.40 ± 4% +0.7% 0.41 sched_debug.cpu.nr_running.stddev
9103 ± 2% +5.2% 9573 ± 4% sched_debug.cpu.nr_switches.avg
39036 ± 9% -3.0% 37872 ± 11% sched_debug.cpu.nr_switches.max
1631 ± 27% +19.6% 1951 ± 10% sched_debug.cpu.nr_switches.min
8464 ± 12% -1.0% 8378 ± 8% sched_debug.cpu.nr_switches.stddev
0.00 ±110% -100.0% 0.00 sched_debug.cpu.nr_uninterruptible.avg
7.25 ± 8% +5.7% 7.67 ± 11% sched_debug.cpu.nr_uninterruptible.max
-8.54 +2.0% -8.71 sched_debug.cpu.nr_uninterruptible.min
3.44 ± 14% +10.2% 3.78 ± 8% sched_debug.cpu.nr_uninterruptible.stddev
10856 ± 4% -1.0% 10746 ± 4% sched_debug.cpu.sched_count.avg
114419 ± 16% -21.2% 90112 ± 19% sched_debug.cpu.sched_count.max
876.04 ± 36% +16.6% 1021 ± 21% sched_debug.cpu.sched_count.min
20384 ± 14% -15.8% 17163 ± 10% sched_debug.cpu.sched_count.stddev
3022 ± 3% +2.9% 3109 ± 5% sched_debug.cpu.sched_goidle.avg
14082 ± 10% +9.1% 15363 ± 15% sched_debug.cpu.sched_goidle.max
319.96 ± 48% +10.4% 353.33 ± 25% sched_debug.cpu.sched_goidle.min
3191 ± 11% -0.7% 3168 ± 13% sched_debug.cpu.sched_goidle.stddev
3462 ± 3% +6.8% 3697 ± 5% sched_debug.cpu.ttwu_count.avg
15960 ± 11% -5.7% 15054 ± 5% sched_debug.cpu.ttwu_count.max
677.54 ± 12% -7.4% 627.62 ± 32% sched_debug.cpu.ttwu_count.min
3454 ± 12% -1.8% 3392 ± 3% sched_debug.cpu.ttwu_count.stddev
1414 ± 5% +10.8% 1567 ± 5% sched_debug.cpu.ttwu_local.avg
11015 ± 16% +13.8% 12535 ± 17% sched_debug.cpu.ttwu_local.max
178.12 ± 13% +7.0% 190.58 ± 12% sched_debug.cpu.ttwu_local.min
2147 ± 10% +19.7% 2570 ± 10% sched_debug.cpu.ttwu_local.stddev
176549 -0.0% 176539 sched_debug.cpu_clk
996147 +0.0% 996147 sched_debug.dl_rq:.dl_bw->bw.avg
996147 +0.0% 996147 sched_debug.dl_rq:.dl_bw->bw.max
996147 +0.0% 996147 sched_debug.dl_rq:.dl_bw->bw.min
4.295e+09 -0.0% 4.295e+09 sched_debug.jiffies
176549 -0.0% 176539 sched_debug.ktime
0.01 -50.0% 0.00 ± 99% sched_debug.rt_rq:/.rt_nr_migratory.avg
0.17 -50.0% 0.08 ± 99% sched_debug.rt_rq:/.rt_nr_migratory.max
0.03 -50.0% 0.01 ±100% sched_debug.rt_rq:/.rt_nr_migratory.stddev
0.01 -50.0% 0.00 ± 99% sched_debug.rt_rq:/.rt_nr_running.avg
0.17 -50.0% 0.08 ± 99% sched_debug.rt_rq:/.rt_nr_running.max
0.03 -50.0% 0.01 ±100% sched_debug.rt_rq:/.rt_nr_running.stddev
950.00 +0.0% 950.00 sched_debug.rt_rq:/.rt_runtime.avg
950.00 +0.0% 950.00 sched_debug.rt_rq:/.rt_runtime.max
950.00 +0.0% 950.00 sched_debug.rt_rq:/.rt_runtime.min
0.05 ± 14% +5.3% 0.05 ± 14% sched_debug.rt_rq:/.rt_time.avg
1.55 ± 14% +5.2% 1.63 ± 13% sched_debug.rt_rq:/.rt_time.max
0.27 ± 14% +5.2% 0.28 ± 13% sched_debug.rt_rq:/.rt_time.stddev
176870 -0.0% 176859 sched_debug.sched_clk
1.00 +0.0% 1.00 sched_debug.sched_clock_stable()
4118331 +0.0% 4118331 sched_debug.sysctl_sched.sysctl_sched_features
24.00 +0.0% 24.00 sched_debug.sysctl_sched.sysctl_sched_latency
3.00 +0.0% 3.00 sched_debug.sysctl_sched.sysctl_sched_min_granularity
1.00 +0.0% 1.00 sched_debug.sysctl_sched.sysctl_sched_tunable_scaling
4.00 +0.0% 4.00 sched_debug.sysctl_sched.sysctl_sched_wakeup_granularity
29.43 ± 31% -2.3 27.12 ± 35% perf-profile.calltrace.cycles-pp.secondary_startup_64
25.83 ± 40% -1.2 24.65 ± 39% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
25.83 ± 40% -1.2 24.65 ± 39% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
25.83 ± 40% -1.2 24.65 ± 39% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
3.60 ± 70% -1.1 2.46 ±107% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_kernel.secondary_startup_64
3.60 ± 70% -1.1 2.46 ±107% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_kernel.secondary_startup_64
3.60 ± 70% -1.1 2.46 ±107% perf-profile.calltrace.cycles-pp.start_kernel.secondary_startup_64
3.59 ± 71% -1.1 2.46 ±107% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_kernel.secondary_startup_64
3.53 ± 70% -1.1 2.40 ±107% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_kernel
25.33 ± 40% -1.0 24.35 ± 39% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
1.75 ± 12% -0.9 0.87 ± 14% perf-profile.calltrace.cycles-pp.fput
23.86 ± 41% -0.5 23.37 ± 40% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
1.29 ± 31% -0.5 0.80 ± 24% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
1.28 ± 31% -0.5 0.80 ± 24% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry
0.59 ± 62% -0.4 0.14 ±173% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle
0.32 ±102% -0.3 0.00 perf-profile.calltrace.cycles-pp.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle
0.31 ±100% -0.3 0.00 perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
22.27 ± 14% -0.2 22.07 ± 13% perf-profile.calltrace.cycles-pp.fput.do_sys_poll.__x64_sys_poll.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.15 ±173% -0.2 0.00 perf-profile.calltrace.cycles-pp.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state
22.97 ± 13% -0.1 22.83 ± 13% perf-profile.calltrace.cycles-pp.__fget.__fget_light.do_sys_poll.__x64_sys_poll.do_syscall_64
24.80 ± 13% -0.1 24.68 ± 13% perf-profile.calltrace.cycles-pp.__fget_light.do_sys_poll.__x64_sys_poll.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.89 ± 14% -0.0 0.84 ± 12% perf-profile.calltrace.cycles-pp.__fget_light.__fget_light.do_sys_poll.__x64_sys_poll.do_syscall_64
0.86 ± 12% -0.0 0.83 ± 16% perf-profile.calltrace.cycles-pp.__fdget.do_sys_poll.__x64_sys_poll.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.86 ± 16% +0.0 1.86 ± 12% perf-profile.calltrace.cycles-pp.syscall_return_via_sysret
0.94 ± 10% +0.0 0.94 ± 23% perf-profile.calltrace.cycles-pp.__kmalloc.do_sys_poll.__x64_sys_poll.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.64 ± 13% +0.0 0.68 ± 20% perf-profile.calltrace.cycles-pp.kfree.do_sys_poll.__x64_sys_poll.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.08 ± 13% +0.1 2.20 ± 10% perf-profile.calltrace.cycles-pp.copy_user_generic_string._copy_from_user.do_sys_poll.__x64_sys_poll.do_syscall_64
2.38 ± 12% +0.2 2.58 ± 10% perf-profile.calltrace.cycles-pp._copy_from_user.do_sys_poll.__x64_sys_poll.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.8 0.83 ± 13% perf-profile.calltrace.cycles-pp.vfs_poll
0.00 +1.6 1.64 ± 12% perf-profile.calltrace.cycles-pp.vfs_poll.do_sys_poll.__x64_sys_poll.do_syscall_64.entry_SYSCALL_64_after_hwframe
64.39 ± 13% +2.4 66.80 ± 13% perf-profile.calltrace.cycles-pp.do_sys_poll.__x64_sys_poll.do_syscall_64.entry_SYSCALL_64_after_hwframe
64.61 ± 13% +2.4 67.05 ± 13% perf-profile.calltrace.cycles-pp.__x64_sys_poll.do_syscall_64.entry_SYSCALL_64_after_hwframe
65.59 ± 13% +2.5 68.06 ± 13% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
65.39 ± 13% +2.5 67.86 ± 13% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
29.45 ± 31% -2.3 27.12 ± 35% perf-profile.children.cycles-pp.do_idle
29.43 ± 31% -2.3 27.12 ± 35% perf-profile.children.cycles-pp.secondary_startup_64
29.43 ± 31% -2.3 27.12 ± 35% perf-profile.children.cycles-pp.cpu_startup_entry
28.97 ± 31% -2.1 26.84 ± 35% perf-profile.children.cycles-pp.cpuidle_enter_state
27.40 ± 32% -1.6 25.77 ± 35% perf-profile.children.cycles-pp.intel_idle
25.83 ± 40% -1.2 24.65 ± 39% perf-profile.children.cycles-pp.start_secondary
3.60 ± 70% -1.1 2.46 ±107% perf-profile.children.cycles-pp.start_kernel
24.02 ± 14% -1.1 22.95 ± 13% perf-profile.children.cycles-pp.fput
1.52 ± 25% -0.4 1.08 ± 13% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
1.53 ± 25% -0.4 1.09 ± 13% perf-profile.children.cycles-pp.apic_timer_interrupt
0.89 ± 22% -0.2 0.69 ± 9% perf-profile.children.cycles-pp.hrtimer_interrupt
0.52 ± 32% -0.2 0.32 ± 27% perf-profile.children.cycles-pp.irq_exit
25.69 ± 13% -0.2 25.52 ± 13% perf-profile.children.cycles-pp.__fget_light
0.43 ± 31% -0.2 0.27 ± 31% perf-profile.children.cycles-pp.__softirqentry_text_start
0.35 ± 24% -0.1 0.21 ± 28% perf-profile.children.cycles-pp.menu_select
22.97 ± 13% -0.1 22.83 ± 13% perf-profile.children.cycles-pp.__fget
0.66 ± 20% -0.1 0.53 ± 10% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.41 ± 31% -0.1 0.29 ± 19% perf-profile.children.cycles-pp.io_serial_in
0.60 ± 22% -0.1 0.48 ± 13% perf-profile.children.cycles-pp.irq_work_run_list
0.58 ± 22% -0.1 0.47 ± 14% perf-profile.children.cycles-pp.irq_work_interrupt
0.58 ± 22% -0.1 0.47 ± 14% perf-profile.children.cycles-pp.smp_irq_work_interrupt
0.58 ± 22% -0.1 0.47 ± 14% perf-profile.children.cycles-pp.irq_work_run
0.58 ± 22% -0.1 0.47 ± 14% perf-profile.children.cycles-pp.printk
0.58 ± 22% -0.1 0.47 ± 14% perf-profile.children.cycles-pp.vprintk_emit
0.59 ± 19% -0.1 0.48 ± 12% perf-profile.children.cycles-pp.serial8250_console_write
0.61 ± 19% -0.1 0.50 ± 12% perf-profile.children.cycles-pp.console_unlock
0.58 ± 19% -0.1 0.47 ± 12% perf-profile.children.cycles-pp.serial8250_console_putchar
0.58 ± 19% -0.1 0.48 ± 12% perf-profile.children.cycles-pp.wait_for_xmitr
0.58 ± 20% -0.1 0.48 ± 12% perf-profile.children.cycles-pp.uart_console_write
0.10 ± 30% -0.1 0.02 ±173% perf-profile.children.cycles-pp.lapic_next_deadline
0.16 ± 23% -0.1 0.09 ± 27% perf-profile.children.cycles-pp.tick_nohz_next_event
0.18 ± 24% -0.1 0.12 ± 21% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
0.11 ± 24% -0.1 0.06 ± 60% perf-profile.children.cycles-pp.get_next_timer_interrupt
0.13 ± 26% -0.1 0.07 ± 27% perf-profile.children.cycles-pp.clockevents_program_event
0.10 ± 9% -0.1 0.04 ±103% perf-profile.children.cycles-pp.run_timer_softirq
0.40 ± 15% -0.1 0.34 ± 15% perf-profile.children.cycles-pp.tick_sched_timer
0.14 ± 34% -0.1 0.08 ± 19% perf-profile.children.cycles-pp.native_write_msr
0.36 ± 16% -0.1 0.30 ± 18% perf-profile.children.cycles-pp.tick_sched_handle
0.17 ± 31% -0.1 0.12 ± 7% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.06 ± 60% -0.0 0.01 ±173% perf-profile.children.cycles-pp.irq_enter
0.08 ± 29% -0.0 0.03 ±105% perf-profile.children.cycles-pp.find_busiest_group
0.07 ± 24% -0.0 0.03 ±102% perf-profile.children.cycles-pp.__next_timer_interrupt
0.33 ± 16% -0.0 0.28 ± 14% perf-profile.children.cycles-pp.update_process_times
0.06 ± 66% -0.0 0.01 ±173% perf-profile.children.cycles-pp.sched_clock_cpu
0.16 ± 44% -0.0 0.12 ± 33% perf-profile.children.cycles-pp.rebalance_domains
0.08 ± 37% -0.0 0.04 ± 59% perf-profile.children.cycles-pp.native_irq_return_iret
0.06 ± 60% -0.0 0.02 ±173% perf-profile.children.cycles-pp.update_blocked_averages
0.04 ±107% -0.0 0.00 perf-profile.children.cycles-pp.native_sched_clock
0.06 ± 28% -0.0 0.03 ±100% perf-profile.children.cycles-pp.rcu_check_callbacks
0.06 ± 28% -0.0 0.03 ±100% perf-profile.children.cycles-pp.read_tsc
0.04 ± 59% -0.0 0.01 ±173% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
0.11 ± 24% -0.0 0.08 ± 67% perf-profile.children.cycles-pp.load_balance
0.86 ± 12% -0.0 0.83 ± 16% perf-profile.children.cycles-pp.__fdget
0.03 ±100% -0.0 0.00 perf-profile.children.cycles-pp._raw_spin_lock_irq
0.20 ± 11% -0.0 0.18 ± 18% perf-profile.children.cycles-pp.scheduler_tick
0.12 ± 24% -0.0 0.09 ± 23% perf-profile.children.cycles-pp.ktime_get
0.04 ±103% -0.0 0.01 ±173% perf-profile.children.cycles-pp.sched_clock
0.04 ±102% -0.0 0.01 ±173% perf-profile.children.cycles-pp.tick_irq_enter
0.02 ±173% -0.0 0.00 perf-profile.children.cycles-pp._raw_spin_trylock
0.03 ±105% -0.0 0.01 ±173% perf-profile.children.cycles-pp._raw_spin_lock
0.06 ± 64% -0.0 0.04 ±107% perf-profile.children.cycles-pp.run_rebalance_domains
0.15 ± 69% -0.0 0.13 ±110% perf-profile.children.cycles-pp.memcpy
0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp.tick_nohz_irq_exit
0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp.intel_pmu_disable_all
0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp.find_next_bit
0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp.__remove_hrtimer
0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp.rcu_idle_exit
0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp.timerqueue_del
0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp.call_function_interrupt
0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp.smp_call_function_interrupt
0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp.ktime_get_update_offsets_now
0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp._cond_resched
0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp.try_to_wake_up
0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp.call_timer_fn
0.16 ± 62% -0.0 0.14 ±115% perf-profile.children.cycles-pp.ret_from_fork
0.16 ± 62% -0.0 0.14 ±115% perf-profile.children.cycles-pp.kthread
0.14 ± 69% -0.0 0.12 ±109% perf-profile.children.cycles-pp.fb_flashcursor
0.14 ± 69% -0.0 0.12 ±109% perf-profile.children.cycles-pp.bit_cursor
0.14 ± 69% -0.0 0.12 ±109% perf-profile.children.cycles-pp.soft_cursor
0.14 ± 69% -0.0 0.12 ±109% perf-profile.children.cycles-pp.mga_dirty_update
0.12 ± 15% -0.0 0.11 ± 24% perf-profile.children.cycles-pp.__might_sleep
0.15 ± 62% -0.0 0.14 ±115% perf-profile.children.cycles-pp.worker_thread
0.15 ± 62% -0.0 0.14 ±115% perf-profile.children.cycles-pp.process_one_work
0.12 ± 13% -0.0 0.11 ± 15% perf-profile.children.cycles-pp.___might_sleep
0.02 ±173% -0.0 0.01 ±173% perf-profile.children.cycles-pp.ksys_read
0.01 ±173% +0.0 0.01 ±173% perf-profile.children.cycles-pp.vfs_read
0.01 ±173% +0.0 0.01 ±173% perf-profile.children.cycles-pp.__vfs_read
0.01 ±173% +0.0 0.01 ±173% perf-profile.children.cycles-pp.wake_up_klogd_work_func
0.07 ± 17% +0.0 0.07 ± 11% perf-profile.children.cycles-pp.__indirect_thunk_start
0.24 ± 23% +0.0 0.24 ± 20% perf-profile.children.cycles-pp.kmalloc_slab
1.86 ± 16% +0.0 1.87 ± 12% perf-profile.children.cycles-pp.syscall_return_via_sysret
0.12 ± 9% +0.0 0.12 ± 27% perf-profile.children.cycles-pp.entry_SYSCALL_64_stage2
0.00 +0.0 0.01 ±173% perf-profile.children.cycles-pp.perf_evsel__read_counter
0.00 +0.0 0.01 ±173% perf-profile.children.cycles-pp.__libc_read
0.00 +0.0 0.01 ±173% perf-profile.children.cycles-pp.perf_read
0.00 +0.0 0.01 ±173% perf-profile.children.cycles-pp.smp_call_function_single
0.00 +0.0 0.01 ±173% perf-profile.children.cycles-pp.perf_event_read
0.00 +0.0 0.01 ±173% perf-profile.children.cycles-pp.cpumask_next_and
0.00 +0.0 0.01 ±173% perf-profile.children.cycles-pp.find_next_and_bit
0.00 +0.0 0.01 ±173% perf-profile.children.cycles-pp.cmd_stat
0.00 +0.0 0.01 ±173% perf-profile.children.cycles-pp.__run_perf_stat
0.00 +0.0 0.01 ±173% perf-profile.children.cycles-pp.process_interval
0.00 +0.0 0.01 ±173% perf-profile.children.cycles-pp.read_counters
0.00 +0.0 0.02 ±173% perf-profile.children.cycles-pp.__libc_start_main
0.00 +0.0 0.02 ±173% perf-profile.children.cycles-pp.main
0.00 +0.0 0.02 ±173% perf-profile.children.cycles-pp.handle_internal_command
0.00 +0.0 0.02 ±173% perf-profile.children.cycles-pp.run_builtin
0.17 ± 47% +0.0 0.19 ± 24% perf-profile.children.cycles-pp.delay_tsc
0.69 ± 11% +0.0 0.71 ± 20% perf-profile.children.cycles-pp.kfree
0.05 ± 60% +0.0 0.07 ± 31% perf-profile.children.cycles-pp.task_tick_fair
0.99 ± 10% +0.0 1.02 ± 21% perf-profile.children.cycles-pp.__kmalloc
0.00 +0.0 0.03 ±100% perf-profile.children.cycles-pp.update_load_avg
0.22 ± 15% +0.0 0.27 ± 23% perf-profile.children.cycles-pp.__might_fault
2.09 ± 13% +0.1 2.21 ± 10% perf-profile.children.cycles-pp.copy_user_generic_string
2.48 ± 12% +0.2 2.68 ± 10% perf-profile.children.cycles-pp._copy_from_user
64.39 ± 13% +2.4 66.81 ± 13% perf-profile.children.cycles-pp.do_sys_poll
64.71 ± 13% +2.4 67.15 ± 13% perf-profile.children.cycles-pp.__x64_sys_poll
0.00 +2.5 2.47 ± 13% perf-profile.children.cycles-pp.vfs_poll
65.67 ± 13% +2.5 68.14 ± 13% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
65.49 ± 13% +2.5 67.96 ± 13% perf-profile.children.cycles-pp.do_syscall_64
27.38 ± 32% -1.6 25.76 ± 35% perf-profile.self.cycles-pp.intel_idle
23.88 ± 14% -1.0 22.87 ± 13% perf-profile.self.cycles-pp.fput
22.78 ± 13% -0.2 22.60 ± 13% perf-profile.self.cycles-pp.__fget
0.41 ± 31% -0.1 0.29 ± 19% perf-profile.self.cycles-pp.io_serial_in
0.12 ± 30% -0.1 0.06 ± 62% perf-profile.self.cycles-pp.menu_select
0.14 ± 34% -0.1 0.08 ± 19% perf-profile.self.cycles-pp.native_write_msr
0.08 ± 37% -0.0 0.04 ± 59% perf-profile.self.cycles-pp.native_irq_return_iret
0.04 ±107% -0.0 0.00 perf-profile.self.cycles-pp.native_sched_clock
0.06 ± 70% -0.0 0.03 ±100% perf-profile.self.cycles-pp.__softirqentry_text_start
0.05 ± 62% -0.0 0.01 ±173% perf-profile.self.cycles-pp.find_busiest_group
0.06 ± 28% -0.0 0.03 ±100% perf-profile.self.cycles-pp.read_tsc
0.03 ±105% -0.0 0.00 perf-profile.self.cycles-pp.do_idle
0.03 ±102% -0.0 0.00 perf-profile.self.cycles-pp.__next_timer_interrupt
0.04 ± 58% -0.0 0.01 ±173% perf-profile.self.cycles-pp.run_timer_softirq
0.06 ± 63% -0.0 0.03 ±100% perf-profile.self.cycles-pp.ktime_get
0.03 ±100% -0.0 0.00 perf-profile.self.cycles-pp._raw_spin_lock_irq
0.85 ± 11% -0.0 0.83 ± 16% perf-profile.self.cycles-pp.__fdget
0.03 ±105% -0.0 0.01 ±173% perf-profile.self.cycles-pp._raw_spin_lock
0.15 ± 69% -0.0 0.13 ±110% perf-profile.self.cycles-pp.memcpy
0.03 ±102% -0.0 0.01 ±173% perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
0.03 ±102% -0.0 0.01 ±173% perf-profile.self.cycles-pp.rcu_check_callbacks
0.01 ±173% -0.0 0.00 perf-profile.self.cycles-pp.irq_exit
0.01 ±173% -0.0 0.00 perf-profile.self.cycles-pp.perf_mux_hrtimer_handler
0.01 ±173% -0.0 0.00 perf-profile.self.cycles-pp.find_next_bit
0.01 ±173% -0.0 0.00 perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.69 ± 11% -0.0 0.68 ± 14% perf-profile.self.cycles-pp.kfree
0.12 ± 13% -0.0 0.11 ± 15% perf-profile.self.cycles-pp.___might_sleep
0.11 ± 15% -0.0 0.11 ± 24% perf-profile.self.cycles-pp.__might_sleep
0.07 ± 62% -0.0 0.07 ± 12% perf-profile.self.cycles-pp.cpuidle_enter_state
0.21 ± 18% -0.0 0.20 ± 14% perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.01 ±173% +0.0 0.01 ±173% perf-profile.self.cycles-pp.update_blocked_averages
0.07 ± 17% +0.0 0.07 ± 11% perf-profile.self.cycles-pp.__indirect_thunk_start
0.24 ± 23% +0.0 0.24 ± 20% perf-profile.self.cycles-pp.kmalloc_slab
1.86 ± 16% +0.0 1.87 ± 12% perf-profile.self.cycles-pp.syscall_return_via_sysret
0.12 ± 9% +0.0 0.12 ± 27% perf-profile.self.cycles-pp.entry_SYSCALL_64_stage2
0.00 +0.0 0.01 ±173% perf-profile.self.cycles-pp.smp_call_function_single
0.10 ± 37% +0.0 0.12 ± 16% perf-profile.self.cycles-pp.__might_fault
0.17 ± 47% +0.0 0.19 ± 24% perf-profile.self.cycles-pp.delay_tsc
0.30 ± 12% +0.0 0.33 ± 18% perf-profile.self.cycles-pp.__x64_sys_poll
0.67 ± 5% +0.0 0.69 ± 23% perf-profile.self.cycles-pp.__kmalloc
0.18 ± 12% +0.0 0.23 ± 17% perf-profile.self.cycles-pp._copy_from_user
0.76 ± 10% +0.1 0.81 ± 11% perf-profile.self.cycles-pp.do_syscall_64
2.62 ± 15% +0.1 2.68 ± 9% perf-profile.self.cycles-pp.__fget_light
2.08 ± 13% +0.1 2.16 ± 12% perf-profile.self.cycles-pp.copy_user_generic_string
11.46 ± 14% +1.0 12.45 ± 13% perf-profile.self.cycles-pp.do_sys_poll
0.00 +2.5 2.47 ± 13% perf-profile.self.cycles-pp.vfs_poll
will-it-scale.per_process_ops
540000 +-+----------------------------------------------------------------+
530000 +-+ +. |
|+.+ ++.++.++.+++.++.+ .++.+ .++.+++.++.++.++.++.+++. .++. + |
520000 +-+ + + ++ + : |
510000 +-+ : |
| :+. |
500000 +-+ + +|
490000 +-+ |
480000 +-+ |
| |
470000 +-+ |
460000 OO+OO O O O O OO OO OOO OO O O OO |
| O OO OO OO O OO O O OO |
450000 +-+ O O OO |
440000 +-+----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Xiaolong
3 years, 10 months
00eebefa74 ("net: Use static_key for XPS maps"): Out of memory: Kill process 628 (trinity-c1) score 602 or sacrifice child
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://github.com/0day-ci/linux/commits/Amritha-Nambiar/Symmetric-queue-...
commit 00eebefa747d87360baaa46e15157b0cddce5607
Author: Amritha Nambiar <amritha.nambiar(a)intel.com>
AuthorDate: Wed Jun 27 15:31:23 2018 -0700
Commit: 0day robot <lkp(a)intel.com>
CommitDate: Thu Jun 28 11:36:07 2018 +0800
net: Use static_key for XPS maps
Use static_key for XPS maps to reduce the cost of the extra map checks,
similar to how it is done for RPS and RFS. This adds static_key
'xps_needed' for XPS and another, 'xps_rxqs_needed', for XPS using the
Rx queues map.
Signed-off-by: Amritha Nambiar <amritha.nambiar(a)intel.com>
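For reference, a minimal sketch of the static_key pattern the commit
message describes, assuming the usual jump-label API; this is not the
actual net/core/dev.c change, and the install/pick-queue helpers below
are placeholders for illustration only:

/*
 * Sketch of the xps_needed / xps_rxqs_needed static keys.  Helper
 * function names are hypothetical; only the jump-label calls are the
 * real kernel API.
 */
#include <linux/jump_label.h>

DEFINE_STATIC_KEY_FALSE(xps_needed);		/* any XPS map installed?   */
DEFINE_STATIC_KEY_FALSE(xps_rxqs_needed);	/* any Rx-queue based map?  */

static void example_install_xps_map(bool rxqs_map)
{
	/* Taken when the first map is configured (e.g. via sysfs). */
	static_branch_inc(&xps_needed);
	if (rxqs_map)
		static_branch_inc(&xps_rxqs_needed);
}

static int example_pick_tx_queue(void)
{
	/*
	 * Hot path: with no maps configured the static branch is patched
	 * to fall through, so the lookup costs nothing per packet.
	 */
	if (!static_branch_unlikely(&xps_needed))
		return -1;	/* fall back to default queue selection */

	/* ... rcu_dereference() the XPS map and pick a queue ... */
	return 0;
}

The point of the pattern is that the common case (no XPS maps) becomes a
patched-out branch rather than a per-packet pointer check.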
b156e46242 net: Refactor XPS for CPUs and Rx queues
00eebefa74 net: Use static_key for XPS maps
369ff28773 Documentation: Add explanation for XPS using Rx-queue(s) map
+------------------------------------------------------------------+------------+------------+------------+
| | b156e46242 | 00eebefa74 | 369ff28773 |
+------------------------------------------------------------------+------------+------------+------------+
| boot_successes | 35 | 4 | 4 |
| boot_failures | 0 | 10 | 10 |
| WARNING:at_kernel/jump_label.c:#__static_key_slow_dec_cpuslocked | 0 | 10 | 10 |
| EIP:__static_key_slow_dec_cpuslocked | 0 | 10 | 10 |
+------------------------------------------------------------------+------------+------------+------------+
[ 15.905591] sock: process `trinity-main' is using obsolete setsockopt SO_BSDCOMPAT
[ 15.945109] VFS: Warning: trinity-c2 using old stat() call. Recompile your binary.
[child0:627] init_module (128) returned ENOSYS, marking as inactive.
[child0:623] vm86old (113) returned ENOSYS, marking as inactive.
[ 26.471491] CPU: 0 PID: 628 Comm: trinity-c1 Tainted: G T 4.18.0-rc2-00152-g00eebef #1
[ 26.473554] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 26.475538] Call Trace:
[ 26.476374] dump_stack+0x58/0x76
[ 26.477319] dump_header+0x64/0x29c
[ 26.478231] ? ___ratelimit+0x7a/0x110
[ 26.479081] oom_kill_process+0x20d/0x450
[ 26.479973] ? oom_badness+0x78/0x180
[ 26.480949] ? out_of_memory+0x27e/0x3c0
[ 26.481981] out_of_memory+0xbc/0x3c0
[ 26.482970] ? out_of_memory+0x1ec/0x3c0
[ 26.483971] ? __alloc_pages_nodemask+0x783/0xc40
[ 26.485070] __alloc_pages_nodemask+0xae5/0xc40
[ 26.486173] shmem_alloc_page+0x4a/0x60
[ 26.487122] shmem_getpage_gfp+0x3d9/0x8b0
[ 26.488073] ? __perf_sw_event+0x43/0x80
[ 26.489017] ? __lock_acquire+0x4e6/0x920
[ 26.490020] ? kvm_read_and_reset_pf_reason+0x30/0x30
[ 26.491172] shmem_getpage+0x34/0x40
[ 26.492139] shmem_write_begin+0x3a/0x80
[ 26.493151] generic_perform_write+0x9d/0x190
[ 26.494103] __generic_file_write_iter+0x1b9/0x200
[ 26.495108] generic_file_write_iter+0x22b/0x350
[ 26.496113] do_iter_readv_writev+0x1a4/0x1c0
[ 26.497085] do_iter_write+0x95/0x1f0
[ 26.497971] ? vfs_writev+0xc5/0xf0
[ 26.498918] vfs_writev+0x7c/0xf0
[ 26.499812] ? mutex_lock_nested+0x20/0x30
[ 26.500805] ? __fdget_pos+0x36/0x40
[ 26.501734] do_writev+0x51/0xc0
[ 26.502640] sys_writev+0x1b/0x20
[ 26.503561] do_fast_syscall_32+0xa1/0x210
[ 26.504595] entry_SYSENTER_32+0x4e/0x7c
[ 26.505602] EIP: 0xb7f94d61
[ 26.506414] Code: c1 16 f6 ff ff 89 e5 8b 55 08 85 d2 8b 81 5c cd ff ff 74 02 89 02 5d c3 8b 0c 24 c3 8b 1c 24 c3 90 51 52 55 89 e5 0f 34 cd 80 <5d> 5a 59 c3 90 90 90 90 8d 76 00 58 b8 77 00 00 00 cd 80 90 8d 76
[ 26.510278] EAX: ffffffda EBX: 00000142 ECX: 08d41a18 EDX: 0000009d
[ 26.511591] ESI: fffffff7 EDI: 00000000 EBP: 0000005c ESP: bfed984c
[ 26.512931] DS: 007b ES: 007b FS: 0000 GS: 0033 SS: 007b EFLAGS: 00000292
[child0:623] pkey_alloc (381) returned ENOSYS, marking as inactive.
[ 26.517606] active_file:0 inactive_file:0 isolated_file:0
[ 26.517606] unevictable:5422 dirty:0 writeback:0 unstable:0
[ 26.517606] slab_reclaimable:4423 slab_unreclaimable:2781
[ 26.517606] mapped:16907 shmem:35233 pagetables:465 bounce:0
[ 26.517606] free:657 free_pcp:84 free_cma:0
[child0:623] vm86 (166) returned ENOSYS, marking as inactive.
[ 26.529672] DMA free:916kB min:132kB low:164kB high:196kB active_anon:0kB inactive_anon:14976kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15916kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[ 26.534659] lowmem_reserve[]: 0 198 198 198
[ 26.535625] Normal free:1712kB min:1732kB low:2164kB high:2596kB active_anon:20848kB inactive_anon:120632kB active_file:0kB inactive_file:0kB unevictable:21688kB writepending:0kB present:245616kB managed:207932kB mlocked:944kB kernel_stack:744kB pagetables:1860kB bounce:0kB free_pcp:336kB local_pcp:336kB free_cma:0kB
[ 26.541696] lowmem_reserve[]: 0 0 0 0
[ 26.542721] DMA: 1*4kB (U) 2*8kB (UM) 0*16kB 2*32kB (UM) 3*64kB (UM) 1*128kB (U) 2*256kB (UM) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 916kB
[ 26.545330] Normal: 0*4kB 0*8kB 1*16kB (M) 29*32kB (UE) 12*64kB (E) 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 1712kB
[ 26.547861] 40422 total pagecache pages
[ 26.549061] 65402 pages RAM
[ 26.549936] 0 pages HighMem/MovableOnly
[ 26.550842] 9440 pages reserved
[ 26.551655] 0 pages cma reserved
[ 26.552465] 0 pages hwpoisoned
[ 26.553228] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 04f34c66133417727f849c279b4563b82025e155 7daf201d7fe8334e2d2364d4e8ed3394ec9af819 --
git bisect good 7790ba463304604ea25a34b0d8711bfb7bcf4362 # 14:31 G 10 0 4 4 Merge 'linux-review/Brad-Love/em28xx-Disconnect-oops-fix-and-cleanup/20180628-011131' into devel-spot-201806281137
git bisect good 8fcdae7f8c5b20936b18f967085f57d8abc5d9a1 # 14:54 G 11 0 4 4 Merge 'nsekhar-davinci/acked' into devel-spot-201806281137
git bisect good 353902890c19a1b02a9c7d5d9479b945ceb8b879 # 15:09 G 11 0 2 2 Merge 'gfs2/iomap-write' into devel-spot-201806281137
git bisect good 13ee9874c94191bd9f19b742a96e11ccef50af4b # 15:41 G 10 0 1 1 Merge 'linux-review/Chengguang-Xu/code-cleanups-for-btrfs_get_acl/20180627-122449' into devel-spot-201806281137
git bisect good 1024f3d25d091f05e0000420f29466b2a3fb821a # 15:59 G 10 0 2 2 Merge 'linux-review/Anson-Huang/nvmem-imx-ocotp-add-support-for-imx6sll/20180627-113547' into devel-spot-201806281137
git bisect bad 3a06c6e80fd08bf1ae6b96564589498aa3867b80 # 16:30 B 1 1 1 1 Merge 'linux-review/Amritha-Nambiar/Symmetric-queue-selection-using-XPS-for-Rx-queues/20180628-113603' into devel-spot-201806281137
git bisect good 1a8e39e0420f82a0ce29bf636e1eea75170db7bf # 16:46 G 11 0 1 1 Merge 'linux-review/Alexei-Starovoitov/bpfilter-include-bpfilter_umh-in-assembly-instead-of-using-objcopy/20180627-111633' into devel-spot-201806281137
git bisect bad 00eebefa747d87360baaa46e15157b0cddce5607 # 17:05 B 0 2 15 1 net: Use static_key for XPS maps
git bisect good b156e4624229e1a1d7d2fc45c7bfc687da82108c # 17:19 G 11 0 1 1 net: Refactor XPS for CPUs and Rx queues
# first bad commit: [00eebefa747d87360baaa46e15157b0cddce5607] net: Use static_key for XPS maps
git bisect good b156e4624229e1a1d7d2fc45c7bfc687da82108c # 17:24 G 33 0 5 6 net: Refactor XPS for CPUs and Rx queues
# extra tests with debug options
git bisect bad 00eebefa747d87360baaa46e15157b0cddce5607 # 17:35 B 1 5 1 1 net: Use static_key for XPS maps
# extra tests on HEAD of linux-devel/devel-spot-201806281137
git bisect bad 04f34c66133417727f849c279b4563b82025e155 # 17:41 B 0 12 28 0 0day head guard for 'devel-spot-201806281137'
# extra tests on tree/branch linux-review/Amritha-Nambiar/Symmetric-queue-selection-using-XPS-for-Rx-queues/20180628-113603
git bisect bad 369ff2877369109a5faee4ce6b414d0104eaa381 # 18:05 B 0 1 14 0 Documentation: Add explanation for XPS using Rx-queue(s) map
# extra tests with first bad commit reverted
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/lkp Intel Corporation
3 years, 10 months
[lkp-robot] 02a5c550b2 BUG: kernel reboot-without-warning in test stage
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit 02a5c550b2738f2bfea8e1e00aa75944d71c9e18
Author: Paul E. McKenney <paulmck(a)linux.vnet.ibm.com>
AuthorDate: Wed Nov 2 17:25:06 2016 -0700
Commit: Paul E. McKenney <paulmck(a)linux.vnet.ibm.com>
CommitDate: Mon Jan 23 11:44:18 2017 -0800
rcu: Abstract extended quiescent state determination
This commit is the fourth step towards full abstraction of all accesses
to the ->dynticks counter, implementing previously open-coded checks and
comparisons in new rcu_dynticks_in_eqs() and rcu_dynticks_in_eqs_since()
functions. This abstraction will ease changes to the ->dynticks counter
operation.
Signed-off-by: Paul E. McKenney <paulmck(a)linux.vnet.ibm.com>
Reviewed-by: Josh Triplett <josh(a)joshtriplett.org>
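For reference, a minimal sketch of the abstraction described above; this
is not the exact kernel/rcu/tree.c code, and rcu_dynticks_snap() plus
the even-counter-means-EQS convention are assumptions based on the
->dynticks scheme of that era:

/*
 * Sketch only: assumes ->dynticks is incremented on every idle
 * entry/exit, so an even snapshot means the CPU is in an extended
 * quiescent state (EQS).
 */
static bool rcu_dynticks_in_eqs(int snap)
{
	return !(snap & 0x1);		/* even counter value => in EQS */
}

static bool rcu_dynticks_in_eqs_since(struct rcu_dynticks *rdtp, int snap)
{
	/* Any change since the snapshot means the CPU passed through an EQS. */
	return snap != rcu_dynticks_snap(rdtp);
}

Callers that previously open-coded the "& 0x1" test or compared raw
snapshots can then use these helpers, so later changes to the counter
representation only touch one place.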
2625d469ba rcu: Abstract dynticks extended quiescent state enter/exit operations
02a5c550b2 rcu: Abstract extended quiescent state determination
6f0d349d92 Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
deb5571a33 Add linux-next specific files for 20180625
+------------------------------------------------------------------+------------+------------+------------+---------------+
| | 2625d469ba | 02a5c550b2 | 6f0d349d92 | next-20180625 |
+------------------------------------------------------------------+------------+------------+------------+---------------+
| boot_successes | 1028 | 464 | 62 | 60 |
| boot_failures | 1 | 17 | 29 | 38 |
| WARNING:at_mm/page_alloc.c:#__alloc_pages_nodemask | 1 | | | |
| BUG:kernel_reboot-without-warning_in_test_stage | 0 | 17 | 1 | 1 |
| invoked_oom-killer:gfp_mask=0x | 0 | 0 | 28 | 36 |
| Mem-Info | 0 | 0 | 28 | 36 |
| Kernel_panic-not_syncing:Out_of_memory_and_no_killable_processes | 0 | 0 | 28 | 36 |
| WARNING:stack_recursion | 0 | 0 | 0 | 1 |
| WARNING:at#for_ip_swapgs_restore_regs_and_return_to_usermode/0x | 0 | 0 | 0 | 1 |
| WARNING:stack_going_in_the_wrong_direction?ip=__schedule/0x | 0 | 0 | 0 | 1 |
+------------------------------------------------------------------+------------+------------+------------+---------------+
[main] Setsockopt(0 1 68b000 10) on fd 377 [2:3:59]
[main] Setsockopt(10e 3 68b000 4) on fd 378 [16:3:10]
[main] Setsockopt(1 2d 68b000 73) on fd 380 [1:5:1]
[main] Setsockopt(1 c 68b000 ee) on fd 381 [2:1:6]
[main] 375 sockets created based on info from socket cachefile.
BUG: kernel reboot-without-warning in test stage
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start v4.11 v4.10 --
git bisect bad ce70df089143c49385b4f32f39d41fb50fbf6a7c # 18:32 B 0 1 17 1 mm, gup: fix typo in gup_p4d_range()
git bisect bad 94eae8034002401d71ae950106659e16add36e77 # 18:47 B 19 1 0 8 Merge tag 'platform-drivers-x86-v4.11-1' of git://git.infradead.org/linux-platform-drivers-x86
git bisect bad 7bb033829ef3ecfc491c0ed0197966e8f197fbdc # 18:58 B 1 1 0 0 Merge tag 'rodata-v4.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux
git bisect bad f790bd9c8e826434ab6c326b225276ed0f73affe # 19:50 B 71 1 0 0 Merge tag 'regulator-v4.11' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regulator
git bisect bad 4cee9fe53e4d181b608c7758090ed492b45d6801 # 20:16 B 29 1 0 1 Merge branch 'x86-apic-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
git bisect good 575260e3f8f8ac72dc0c41a4a20190d1a5f2b887 # 00:08 G 580 0 1 1 Merge branch 'core-debugobjects-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
git bisect bad 7f4eb0a6d5a76ee054acd7255c05b8d5ca31c5d9 # 00:37 B 39 1 1 1 Merge branch 'perf-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
git bisect bad f7458a5d631df2ecdbfe4a606053aba19913cc41 # 00:47 B 36 1 0 1 Merge branch 'core-rcu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
git bisect good 8dc79888a792f6c365c2a26903e49ff919e72488 # 02:47 G 580 0 0 0 rcu: Add lockdep checks to synchronous expedited primitives
git bisect good 7d025948e4982ee3fa741c0fa56385c8b4a7072d # 04:14 G 571 0 0 0 torture: Enable DEBUG_OBJECTS_RCU_HEAD for Tiny RCU
git bisect bad 38d30b336ccf8ee98e0e494a13738a0fade5a5e6 # 04:50 B 77 1 0 0 rcu: Adjust FQS offline checks for exact online-CPU detection
git bisect good 2625d469baeef3aabdfe122572e00c517e2d9451 # 07:46 G 572 0 0 1 rcu: Abstract dynticks extended quiescent state enter/exit operations
git bisect bad 3a19b46a5c17b12ef0691df19c676ba3da330a57 # 07:52 B 0 1 15 0 rcu: Check cond_resched_rcu_qs() state less often to reduce GP overhead
git bisect bad 02a5c550b2738f2bfea8e1e00aa75944d71c9e18 # 07:59 B 0 1 18 2 rcu: Abstract extended quiescent state determination
# first bad commit: [02a5c550b2738f2bfea8e1e00aa75944d71c9e18] rcu: Abstract extended quiescent state determination
git bisect good 2625d469baeef3aabdfe122572e00c517e2d9451 # 11:00 G 1000 0 2 3 rcu: Abstract dynticks extended quiescent state enter/exit operations
# extra tests on HEAD of linux-devel/devel-catchup-201806242320
git bisect bad 21f7550fbc2ac5e373767674593a34350af2ad59 # 11:01 B 10 2 0 1 0day head guard for 'devel-catchup-201806242320'
# extra tests on tree/branch linus/master
git bisect bad 6f0d349d922ba44e4348a17a78ea51b7135965b1 # 11:20 B 14 1 0 28 Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
# extra tests on tree/branch linux-next/master
git bisect bad deb5571a333c08f20bee8cb1324644f774b27a66 # 11:21 B 0 1 94 38 Add linux-next specific files for 20180625
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/lkp Intel Corporation
3 years, 10 months