Re: [LKP] [rcu] kernel BUG at include/linux/pagemap.h:149!
by Frederic Weisbecker
On Fri, Sep 11, 2015 at 10:19:47AM +0800, Boqun Feng wrote:
> Subject: [PATCH 01/27] rcu: Don't disable preemption for Tiny and Tree RCU
> readers
>
> Because preempt_disable() maps to barrier() for non-debug builds,
> it forces the compiler to spill and reload registers. Because Tree
> RCU and Tiny RCU now only appear in CONFIG_PREEMPT=n builds, these
> barrier() instances generate needless extra code for each instance of
> rcu_read_lock() and rcu_read_unlock(). This extra code slows down Tree
> RCU and bloats Tiny RCU.
>
> This commit therefore removes the preempt_disable() and preempt_enable()
> from the non-preemptible implementations of __rcu_read_lock() and
> __rcu_read_unlock(), respectively.
>
> For debug purposes, preempt_disable() and preempt_enable() are still
> kept if CONFIG_PREEMPT_COUNT=y, which makes the detection of sleeping
> inside atomic sections still work in non-preemptible kernels.
>
> Signed-off-by: Boqun Feng <boqun.feng(a)gmail.com>
> Signed-off-by: Paul E. McKenney <paulmck(a)linux.vnet.ibm.com>
> ---
> include/linux/rcupdate.h | 6 ++++--
> include/linux/rcutiny.h | 1 +
> kernel/rcu/tree.c | 9 +++++++++
> 3 files changed, 14 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> index d63bb77..6c3cece 100644
> --- a/include/linux/rcupdate.h
> +++ b/include/linux/rcupdate.h
> @@ -297,12 +297,14 @@ void synchronize_rcu(void);
>
> static inline void __rcu_read_lock(void)
> {
> - preempt_disable();
> + if (IS_ENABLED(CONFIG_PREEMPT_COUNT))
> + preempt_disable();
preempt_disable() is a no-op when !CONFIG_PREEMPT_COUNT, right?
Or rather it's a barrier(), which rcu_read_lock() implies anyway.
So perhaps we can get rid of the IS_ENABLED() check?
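For reference, a simplified sketch of the definitions in question (my
reading of include/linux/preempt.h from around that time; check the tree
for the exact form):

#ifdef CONFIG_PREEMPT_COUNT
#define preempt_disable() \
do { \
        preempt_count_inc(); \
        barrier(); \
} while (0)
#else
/* No preempt counter to maintain; only the compiler barrier remains. */
#define preempt_disable()       barrier()
#endif

So even without the IS_ENABLED() guard, the !CONFIG_PREEMPT_COUNT case
would cost nothing beyond the barrier() that rcu_read_lock() needs anyway.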
[lkp] [x86] db23da8b95: BUG: using __this_cpu_add_return() in preemptible [00000000] code: init/1
by kernel test robot
FYI, we noticed the below changes on
https://github.com/0day-ci/linux Paolo-Bonzini/context_tracking-remove-duplicate-enabled-check/20151028-094317
commit db23da8b95ece9b57d4cfd63f5ee10502f1af0c8 ("x86: context_tracking: avoid irq_save/irq_restore on kernel entry and exit")
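The pattern that appears to trip the debug check can be sketched as
follows (illustrative only, not the actual patch; ct_count is a
hypothetical per-cpu variable):

static DEFINE_PER_CPU(int, ct_count);   /* hypothetical */

static void ct_exit_sketch(void)
{
        /*
         * This used to run between local_irq_save() and
         * local_irq_restore(), which pinned the task to its CPU.  With
         * that pair removed, the per-cpu access below can run
         * preemptible, and CONFIG_DEBUG_PREEMPT's
         * __this_cpu_preempt_check() reports the "BUG: using
         * __this_cpu_add() in preemptible code" seen in the dmesg below.
         */
        __this_cpu_add(ct_count, 1);
}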
+------------------------------------------------------------------+------------+------------+
| | 66b6c205f3 | db23da8b95 |
+------------------------------------------------------------------+------------+------------+
| boot_successes | 0 | 0 |
| boot_failures | 17 | 16 |
| invoked_oom-killer:gfp_mask=0x | 11 | 11 |
| Mem-Info | 11 | 11 |
| Kernel_panic-not_syncing:Out_of_memory_and_no_killable_processes | 11 | 11 |
| backtrace:reg_todo | 9 | 11 |
| BUG:kernel_boot_hang | 2 | 3 |
| IP-Config:Auto-configuration_of_network_failed | 4 | |
| backtrace:_do_fork | 2 | |
| BUG:using__this_cpu_add_return()in_preemptible[#]code:init | 0 | 2 |
| BUG:using__this_cpu_read()in_preemptible[#]code:init | 0 | 2 |
| BUG:using__this_cpu_write()in_preemptible[#]code:init | 0 | 2 |
| BUG:using__this_cpu_add()in_preemptible[#]code:init | 0 | 2 |
| WARNING:at_arch/x86/entry/common.c:#syscall_return_slowpath() | 0 | 1 |
| BUG:spinlock_recursion_on_CPU | 0 | 1 |
| BUG:using__this_cpu_add_return()in_preemptible[#]code:systemd | 0 | 1 |
| BUG:using__this_cpu_read()in_preemptible[#]code:systemd | 0 | 1 |
| BUG:using__this_cpu_write()in_preemptible[#]code:systemd | 0 | 1 |
| BUG:using__this_cpu_add()in_preemptible[#]code:systemd | 0 | 1 |
| BUG:spinlock_cpu_recursion_on_CPU | 0 | 1 |
+------------------------------------------------------------------+------------+------------+
[ 7.337137] irq: no irq domain found for /testcase-data/interrupts/intc0 !
[ 7.339035] ### dt-test ### end of unittest - 110 passed, 0 failed
[ 19.381542] Freeing unused kernel memory: 1528K (ffffffff820e6000 - ffffffff82264000)
[ 19.383048] BUG: using __this_cpu_add_return() in preemptible [00000000] code: init/1
[ 19.384165] caller is __this_cpu_preempt_check+0x13/0x20
[ 19.384928] CPU: 1 PID: 1 Comm: init Not tainted 4.3.0-rc3-00098-gdb23da8 #619
[ 19.385942] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 19.387202] ffffffff81e7df04 ffff880035c8be68 ffffffff8142e66c 0000000000000001
[ 19.388396] ffff880035c8be90 ffffffff8145a506 0000000000000001 ffff880035c8c000
[ 19.389521] 00000000c000003e ffff880035c8bea0 ffffffff8145a543 ffff880035c8beb0
[ 19.390673] Call Trace:
[ 19.391041] [<ffffffff8142e66c>] dump_stack+0x4e/0x82
[ 19.391764] [<ffffffff8145a506>] check_preemption_disabled+0xf6/0x100
[ 19.392683] [<ffffffff8145a543>] __this_cpu_preempt_check+0x13/0x20
[ 19.395059] [<ffffffff81185a00>] context_tracking_recursion_enter+0x10/0x80
[ 19.395950] [<ffffffff81185b9e>] __context_tracking_exit+0xe/0x90
[ 19.396761] [<ffffffff81001074>] enter_from_user_mode+0x24/0x60
[ 19.397518] [<ffffffff8100118e>] syscall_trace_enter_phase1+0xde/0x140
[ 19.398348] [<ffffffff8145a543>] ? __this_cpu_preempt_check+0x13/0x20
[ 19.399169] [<ffffffff81185ae8>] ? __context_tracking_enter+0x78/0xc0
[ 19.400032] [<ffffffff810014be>] ? prepare_exit_to_usermode+0xee/0x100
[ 19.400861] [<ffffffff81ac6474>] tracesys+0xd/0x44
[ 19.401471] BUG: using __this_cpu_read() in preemptible [00000000] code: init/1
[ 19.402440] caller is __this_cpu_preempt_check+0x13/0x20
[ 19.403166] CPU: 1 PID: 1 Comm: init Not tainted 4.3.0-rc3-00098-gdb23da8 #619
[ 19.404081] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 19.405296] ffffffff81e7df04 ffff880035c8be78 ffffffff8142e66c 0000000000000001
[ 19.406612] ffff880035c8bea0 ffffffff8145a506 0000000000000001 ffff880035c8c000
[ 19.407620] 00000000c000003e ffff880035c8beb0 ffffffff8145a543 ffff880035c8bed0
[ 19.413524] Call Trace:
[ 19.413860] [<ffffffff8142e66c>] dump_stack+0x4e/0x82
[ 19.414501] [<ffffffff8145a506>] check_preemption_disabled+0xf6/0x100
[ 19.415331] [<ffffffff8145a543>] __this_cpu_preempt_check+0x13/0x20
[ 19.416136] [<ffffffff81185bb3>] __context_tracking_exit+0x23/0x90
[ 19.416974] [<ffffffff81001074>] enter_from_user_mode+0x24/0x60
[ 19.417727] [<ffffffff8100118e>] syscall_trace_enter_phase1+0xde/0x140
[ 19.419426] [<ffffffff8145a543>] ? __this_cpu_preempt_check+0x13/0x20
[ 19.422561] [<ffffffff81185ae8>] ? __context_tracking_enter+0x78/0xc0
[ 19.425343] [<ffffffff810014be>] ? prepare_exit_to_usermode+0xee/0x100
[ 19.427593] [<ffffffff81ac6474>] tracesys+0xd/0x44
[ 19.429644] BUG: using __this_cpu_read() in preemptible [00000000] code: init/1
[ 19.431429] caller is __this_cpu_preempt_check+0x13/0x20
[ 19.432867] CPU: 1 PID: 1 Comm: init Not tainted 4.3.0-rc3-00098-gdb23da8 #619
[ 19.434629] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 19.440855] ffffffff81e7df04 ffff880035c8be78 ffffffff8142e66c 0000000000000001
[ 19.441993] ffff880035c8bea0 ffffffff8145a506 0000000000000001 0000000000000001
[ 19.443278] 00000000c000003e ffff880035c8beb0 ffffffff8145a543 ffff880035c8bed0
[ 19.445377] Call Trace:
[ 19.446012] [<ffffffff8142e66c>] dump_stack+0x4e/0x82
[ 19.447491] [<ffffffff8145a506>] check_preemption_disabled+0xf6/0x100
[ 19.453779] [<ffffffff8145a543>] __this_cpu_preempt_check+0x13/0x20
[ 19.454589] [<ffffffff81185be4>] __context_tracking_exit+0x54/0x90
[ 19.455382] [<ffffffff81001074>] enter_from_user_mode+0x24/0x60
[ 19.456234] [<ffffffff8100118e>] syscall_trace_enter_phase1+0xde/0x140
[ 19.459339] [<ffffffff8145a543>] ? __this_cpu_preempt_check+0x13/0x20
[ 19.460225] [<ffffffff81185ae8>] ? __context_tracking_enter+0x78/0xc0
[ 19.461040] [<ffffffff810014be>] ? prepare_exit_to_usermode+0xee/0x100
[ 19.461864] [<ffffffff81ac6474>] tracesys+0xd/0x44
[ 19.462482] BUG: using __this_cpu_write() in preemptible [00000000] code: init/1
[ 19.494831] caller is __this_cpu_preempt_check+0x13/0x20
[ 19.495526] CPU: 1 PID: 1 Comm: init Not tainted 4.3.0-rc3-00098-gdb23da8 #619
[ 19.496449] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 19.497566] ffffffff81e7df04 ffff880035c8be78 ffffffff8142e66c 0000000000000001
[ 19.504200] ffff880035c8bea0 ffffffff8145a506 0000000000000001 0000000000000001
[ 19.505328] 00000000c000003e ffff880035c8beb0 ffffffff8145a543 ffff880035c8bed0
[ 19.506465] Call Trace:
[ 19.506831] [<ffffffff8142e66c>] dump_stack+0x4e/0x82
[ 19.507557] [<ffffffff8145a506>] check_preemption_disabled+0xf6/0x100
[ 19.508484] [<ffffffff8145a543>] __this_cpu_preempt_check+0x13/0x20
[ 19.509443] [<ffffffff81185c0f>] __context_tracking_exit+0x7f/0x90
[ 19.510286] [<ffffffff81001074>] enter_from_user_mode+0x24/0x60
[ 19.511041] [<ffffffff8100118e>] syscall_trace_enter_phase1+0xde/0x140
[ 19.511867] [<ffffffff8145a543>] ? __this_cpu_preempt_check+0x13/0x20
[ 19.512673] [<ffffffff81185ae8>] ? __context_tracking_enter+0x78/0xc0
[ 19.513499] [<ffffffff810014be>] ? prepare_exit_to_usermode+0xee/0x100
[ 19.514328] [<ffffffff81ac6474>] tracesys+0xd/0x44
[ 19.514944] BUG: using __this_cpu_add() in preemptible [00000000] code: init/1
[ 19.516021] caller is __this_cpu_preempt_check+0x13/0x20
[ 19.518282] CPU: 1 PID: 1 Comm: init Not tainted 4.3.0-rc3-00098-gdb23da8 #619
[ 19.523966] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 19.525080] ffffffff81e7df04 ffff880035c8be78 ffffffff8142e66c 0000000000000001
[ 19.526085] ffff880035c8bea0 ffffffff8145a506 0000000000000001 0000000000000001
[ 19.530561] 00000000c000003e ffff880035c8beb0 ffffffff8145a543 ffff880035c8bed0
[ 19.531685] Call Trace:
[ 19.532080] [<ffffffff8142e66c>] dump_stack+0x4e/0x82
[ 19.532843] [<ffffffff8145a506>] check_preemption_disabled+0xf6/0x100
[ 19.534251] [<ffffffff8145a543>] __this_cpu_preempt_check+0x13/0x20
[ 19.535690] [<ffffffff81185bcc>] __context_tracking_exit+0x3c/0x90
[ 19.538940] [<ffffffff81001074>] enter_from_user_mode+0x24/0x60
[ 19.540965] [<ffffffff8100118e>] syscall_trace_enter_phase1+0xde/0x140
[ 19.543138] [<ffffffff8145a543>] ? __this_cpu_preempt_check+0x13/0x20
[ 19.545032] [<ffffffff81185ae8>] ? __context_tracking_enter+0x78/0xc0
[ 19.550517] [<ffffffff810014be>] ? prepare_exit_to_usermode+0xee/0x100
[ 19.551537] [<ffffffff81ac6474>] tracesys+0xd/0x44
[ 19.552306] BUG: using __this_cpu_add_return() in preemptible [00000000] code: init/1
[ 19.553574] caller is __this_cpu_preempt_check+0x13/0x20
[ 19.554411] CPU: 1 PID: 1 Comm: init Not tainted 4.3.0-rc3-00098-gdb23da8 #619
[ 19.555526] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 19.560755] ffffffff81e7df04 ffff880035c8be68 ffffffff8142e66c 0000000000000001
[ 19.569900] ffff880035c8be90 ffffffff8145a506 0000000000000001 ffff880035c8c000
[ 19.571026] 00000000c000003e ffff880035c8bea0 ffffffff8145a543 ffff880035c8beb0
[ 19.572148] Call Trace:
[ 19.572509] [<ffffffff8142e66c>] dump_stack+0x4e/0x82
[ 19.573712] [<ffffffff8145a506>] check_preemption_disabled+0xf6/0x100
[ 19.576323] [<ffffffff8145a543>] __this_cpu_preempt_check+0x13/0x20
[ 19.578862] [<ffffffff81185a00>] context_tracking_recursion_enter+0x10/0x80
[ 19.581389] [<ffffffff81185b9e>] __context_tracking_exit+0xe/0x90
[ 19.582989] [<ffffffff81001074>] enter_from_user_mode+0x24/0x60
[ 19.584714] [<ffffffff8100118e>] syscall_trace_enter_phase1+0xde/0x140
[ 19.586804] [<ffffffff8145a543>] ? __this_cpu_preempt_check+0x13/0x20
[ 19.588899] [<ffffffff81185ae8>] ? __context_tracking_enter+0x78/0xc0
[ 19.591601] [<ffffffff810014be>] ? prepare_exit_to_usermode+0xee/0x100
[ 19.594353] [<ffffffff81ac6474>] tracesys+0xd/0x44
[ 19.596328] BUG: using __this_cpu_read() in preemptible [00000000] code: init/1
[ 19.603832] caller is __this_cpu_preempt_check+0x13/0x20
[ 19.604582] CPU: 1 PID: 1 Comm: init Not tainted 4.3.0-rc3-00098-gdb23da8 #619
[ 19.605594] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 19.610225] ffffffff81e7df04 ffff880035c8be78 ffffffff8142e66c 0000000000000001
[ 19.611348] ffff880035c8bea0 ffffffff8145a506 0000000000000001 ffff880035c8c000
[ 19.612468] 00000000c000003e ffff880035c8beb0 ffffffff8145a543 ffff880035c8bed0
[ 19.613632] Call Trace:
[ 19.613997] [<ffffffff8142e66c>] dump_stack+0x4e/0x82
[ 19.614720] [<ffffffff8145a506>] check_preemption_disabled+0xf6/0x100
[ 19.615640] [<ffffffff8145a543>] __this_cpu_preempt_check+0x13/0x20
[ 19.620345] [<ffffffff81185bb3>] __context_tracking_exit+0x23/0x90
[ 19.624037] [<ffffffff81001074>] enter_from_user_mode+0x24/0x60
[ 19.626562] [<ffffffff8100118e>] syscall_trace_enter_phase1+0xde/0x140
[ 19.628568] [<ffffffff8145a543>] ? __this_cpu_preempt_check+0x13/0x20
[ 19.631070] [<ffffffff81185ae8>] ? __context_tracking_enter+0x78/0xc0
[ 19.633781] [<ffffffff810014be>] ? prepare_exit_to_usermode+0xee/0x100
[ 19.635675] [<ffffffff81ac6474>] tracesys+0xd/0x44
[ 19.637762] BUG: using __this_cpu_read() in preemptible [00000000] code: init/1
[ 19.640682] caller is __this_cpu_preempt_check+0x13/0x20
[ 19.647057] CPU: 1 PID: 1 Comm: init Not tainted 4.3.0-rc3-00098-gdb23da8 #619
[ 19.648083] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 19.649324] ffffffff81e7df04 ffff880035c8be78 ffffffff8142e66c 0000000000000001
[ 19.650517] ffff880035c8bea0 ffffffff8145a506 0000000000000001 0000000000000001
[ 19.651639] 00000000c000003e ffff880035c8beb0 ffffffff8145a543 ffff880035c8bed0
[ 19.652850] Call Trace:
[ 19.659956] [<ffffffff8142e66c>] dump_stack+0x4e/0x82
[ 19.660687] [<ffffffff8145a506>] check_preemption_disabled+0xf6/0x100
[ 19.661610] [<ffffffff8145a543>] __this_cpu_preempt_check+0x13/0x20
[ 19.662510] [<ffffffff81185be4>] __context_tracking_exit+0x54/0x90
[ 19.664210] [<ffffffff81001074>] enter_from_user_mode+0x24/0x60
[ 19.666667] [<ffffffff8100118e>] syscall_trace_enter_phase1+0xde/0x140
[ 19.669243] [<ffffffff8145a543>] ? __this_cpu_preempt_check+0x13/0x20
[ 19.671987] [<ffffffff81185ae8>] ? __context_tracking_enter+0x78/0xc0
[ 19.676125] [<ffffffff810014be>] ? prepare_exit_to_usermode+0xee/0x100
[ 19.678849] [<ffffffff81ac6474>] tracesys+0xd/0x44
[ 19.680956] BUG: using __this_cpu_write() in preemptible [00000000] code: init/1
[ 19.683670] caller is __this_cpu_preempt_check+0x13/0x20
[ 19.685634] CPU: 1 PID: 1 Comm: init Not tainted 4.3.0-rc3-00098-gdb23da8 #619
[ 19.693507] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 19.695108] ffffffff81e7df04 ffff880035c8be78 ffffffff8142e66c 0000000000000001
[ 19.696327] ffff880035c8bea0 ffffffff8145a506 0000000000000001 0000000000000001
[ 19.699736] 00000000c000003e ffff880035c8beb0 ffffffff8145a543 ffff880035c8bed0
[ 19.705888] Call Trace:
[ 19.706211] [<ffffffff8142e66c>] dump_stack+0x4e/0x82
[ 19.706891] [<ffffffff8145a506>] check_preemption_disabled+0xf6/0x100
[ 19.707709] [<ffffffff8145a543>] __this_cpu_preempt_check+0x13/0x20
[ 19.708522] [<ffffffff81185c0f>] __context_tracking_exit+0x7f/0x90
[ 19.709322] [<ffffffff81001074>] enter_from_user_mode+0x24/0x60
[ 19.710110] [<ffffffff8100118e>] syscall_trace_enter_phase1+0xde/0x140
[ 19.710950] [<ffffffff8145a543>] ? __this_cpu_preempt_check+0x13/0x20
[ 19.711768] [<ffffffff81185ae8>] ? __context_tracking_enter+0x78/0xc0
[ 19.712592] [<ffffffff810014be>] ? prepare_exit_to_usermode+0xee/0x100
[ 19.713453] [<ffffffff81ac6474>] tracesys+0xd/0x44
[ 19.732126] BUG: using __this_cpu_add() in preemptible [00000000] code: init/1
[ 19.733062] caller is __this_cpu_preempt_check+0x13/0x20
[ 19.733728] CPU: 1 PID: 1 Comm: init Not tainted 4.3.0-rc3-00098-gdb23da8 #619
[ 19.734644] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 19.735755] ffffffff81e7df04 ffff880035c8be78 ffffffff8142e66c 0000000000000001
[ 19.736791] ffff880035c8bea0 ffffffff8145a506 0000000000000001 0000000000000001
[ 19.737793] 00000000c000003e ffff880035c8beb0 ffffffff8145a543 ffff880035c8bed0
[ 19.738794] Call Trace:
[ 19.739117] [<ffffffff8142e66c>] dump_stack+0x4e/0x82
[ 19.739795] [<ffffffff8145a506>] check_preemption_disabled+0xf6/0x100
[ 19.775557] [<ffffffff8145a543>] __this_cpu_preempt_check+0x13/0x20
[ 19.776384] [<ffffffff81185bcc>] __context_tracking_exit+0x3c/0x90
[ 19.777188] [<ffffffff81001074>] enter_from_user_mode+0x24/0x60
[ 19.777953] [<ffffffff8100118e>] syscall_trace_enter_phase1+0xde/0x140
[ 19.778782] [<ffffffff8145a543>] ? __this_cpu_preempt_check+0x13/0x20
[ 19.779607] [<ffffffff81185ae8>] ? __context_tracking_enter+0x78/0xc0
[ 19.780458] [<ffffffff810014be>] ? prepare_exit_to_usermode+0xee/0x100
[ 19.781302] [<ffffffff81ac6474>] tracesys+0xd/0x44
[ 19.811081] systemd[1]: RTC configured in localtime, applying delta of 480 minutes to system time.
[ 19.813386] random: systemd urandom read with 3 bits of entropy available
[ 19.839736] BUG: spinlock cpu recursion on CPU#1, systemd/1
[ 19.843531] lock: 0xffff880035c848b8, .magic: dead4ead, .owner: <none>/-1, .owner_cpu: 1
[ 19.850084] CPU: 1 PID: 1 Comm: systemd Not tainted 4.3.0-rc3-00098-gdb23da8 #619
[ 19.851134] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 19.852373] ffff880035c848b8 ffff880037d03ea0 ffffffff8142e66c ffffffffffffffff
[ 19.856601] ffff880037d03ec0 ffffffff81129bb0 ffff880035c848b8 ffffffff81e4c2ab
[ 19.857715] ffff880037d03ee0 ffffffff81129c21 ffff880035c848b8 ffff880035c848b8
[ 19.858833] Call Trace:
[ 19.859187] <IRQ> [<ffffffff8142e66c>] dump_stack+0x4e/0x82
[ 19.866732] [<ffffffff81129bb0>] spin_dump+0x80/0xd0
[ 19.867457] [<ffffffff81129c21>] spin_bug+0x21/0x30
[ 19.868167] [<ffffffff81129d98>] do_raw_spin_lock+0x108/0x120
[ 19.868995] [<ffffffff81ac585e>] _raw_spin_lock+0x3e/0x50
[ 19.876474] [<ffffffff81111d6f>] ? vtime_account_user+0x1f/0xa0
Thanks,
Ying Huang
[lkp] [mm, page_alloc] 43993977ba: +88% OOM possibility
by kernel test robot
FYI, we noticed the below changes on
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit 43993977baecd838d66ccabc7f682342fc6ff635 ("mm, page_alloc: distinguish between being unable to sleep, unwilling to sleep and avoiding waking kswapd")
We found that the probability of OOM increased by 88% in a virtual machine with 1G of memory.
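For context, the series replaces the single "can this allocation sleep"
bit with a finer-grained scheme, modelled here with hypothetical
stand-in flags (the kernel's real names and values differ; see the
commit):

/* Hypothetical stand-ins, not the kernel's actual gfp flags: */
enum gfp_sketch {
        SK_GFP_KSWAPD_RECLAIM = 1 << 0, /* may wake kswapd */
        SK_GFP_DIRECT_RECLAIM = 1 << 1, /* may sleep in direct reclaim */
        SK_GFP_RECLAIM = SK_GFP_KSWAPD_RECLAIM | SK_GFP_DIRECT_RECLAIM,
        SK_GFP_NOWAIT  = SK_GFP_KSWAPD_RECLAIM, /* cannot sleep at all */
};

A call site that loses its direct-reclaim bit in such a conversion fails
faster under memory pressure, which is one way the higher OOM rate below
could arise.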
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/disk/fs/test:
vm-kbuild-1G/xfstests/debian-x86_64-2015-02-07.cgz/x86_64-allyesdebian/gcc-4.9/4HDD/btrfs/generic-mid
commit:
74fad8a3a917b9e0a407af8a4150c61f7b836591
43993977baecd838d66ccabc7f682342fc6ff635
74fad8a3a917b9e0 43993977baecd838d66ccabc7f
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
1:24 -4% :24 xfstests.generic.192.fail
1:24 -4% :24 xfstests.nr_fail
:24 88% 21:24 dmesg.Mem-Info
:24 62% 15:24 dmesg.page_allocation_failure:order:#,mode
:24 88% 21:24 dmesg.warn_alloc_failed+0x
:24 75% 18:24 last_state.is_incomplete_run
1:24 -4% :24 last_state.xfstests.exit_code.1
:24 54% 13:24 last_state.xfstests.exit_code.143
:24 71% 17:24 kmsg.SLAB:Unable_to_allocate_memory_on_node#(gfp=#)
1:24 -4% :24 kmsg.TDH<#>
1:24 -4% :24 kmsg.TDH<c7>
1:24 -4% :24 kmsg.TDT<#>
1:24 -4% :24 kmsg.TDT<c7>
1:24 -4% :24 kmsg.Tx_Queue<#>
1:24 -4% :24 kmsg.buffer_info[next_to_clean]
1:24 -4% :24 kmsg.e1000#:#:#eth0:Detected_Tx_Unit_Hang
1:24 -4% :24 kmsg.jiffies<#>
1:24 -4% :24 kmsg.jiffies<#c5c>
1:24 -4% :24 kmsg.next_to_clean<#>
1:24 -4% :24 kmsg.next_to_clean<c7>
1:24 -4% :24 kmsg.next_to_use<#>
1:24 -4% :24 kmsg.next_to_use<c9>
1:24 -4% :24 kmsg.next_to_watch.status<#>
1:24 -4% :24 kmsg.next_to_watch<#>
1:24 -4% :24 kmsg.next_to_watch<c8>
1:24 -4% :24 kmsg.time_stamp<#>
1:24 -4% :24 kmsg.time_stamp<#afd>
vm-kbuild-1G: qemu-system-x86_64 -enable-kvm -cpu Haswell,+smep,+smap
Memory: 1G
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang
[lkp] [printk] 92ecc53f15: RIP: 0010:[<ffffffff834e7e45>] [<ffffffff834e7e45>] vprintk_emit+0x155/0x560
by kernel test robot
FYI, we noticed the below changes on
https://github.com/0day-ci/linux David-Howells/printk-Don-t-discard-earlier-unprinted-messages-to-make-space/20151022-181856
commit 92ecc53f15ad184d46e568b9b502291b61ae2837 ("printk: Don't discard earlier unprinted messages to make space")
This patch may not be the root cause, but it may reveal other issues,
so we are still sending this report out.
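As a toy model of why refusing to discard can hang the producer
(illustrative only; nothing below is kernel code): if the ring keeps old
records when full, a printk caller has to wait for the slow console to
drain a slot, and waiting long enough looks exactly like the
vprintk_emit() softlockup in the panic below.

#include <stdbool.h>
#include <stddef.h>

#define LOG_SLOTS 8

struct ring {
        int buf[LOG_SLOTS];
        size_t head, tail;      /* head: next read, tail: next write */
};

static bool ring_full(const struct ring *r)
{
        return r->tail - r->head == LOG_SLOTS;
}

/* Old behaviour: drop the oldest record to make space; never blocks. */
static void log_drop_oldest(struct ring *r, int msg)
{
        if (ring_full(r))
                r->head++;              /* discard the earliest message */
        r->buf[r->tail++ % LOG_SLOTS] = msg;
}

/* Patched behaviour: keep everything; the caller must retry until the
 * consumer drains a slot -- a potential busy-wait on a slow console. */
static bool log_keep_all(struct ring *r, int msg)
{
        if (ring_full(r))
                return false;
        r->buf[r->tail++ % LOG_SLOTS] = msg;
        return true;
}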
+---------------------------------------------------+----------+------------+
| | v4.3-rc6 | 92ecc53f15 |
+---------------------------------------------------+----------+------------+
| boot_successes | 0 | 0 |
| boot_failures | 283 | 61 |
| BUG:kernel_boot_oversize | 274 | |
| BUG:kernel_boot_crashed | 5 | 11 |
| BUG:kernel_early-boot_crashed_Decompressing_Linux | 4 | |
| RIP:of_unittest | 0 | 14 |
| Kernel_panic-not_syncing:softlockup:hung_tasks | 0 | 30 |
| backtrace:kernel_init_freeable | 0 | 30 |
| backtrace:apic_timer_interrupt | 0 | 11 |
| RIP:lock_acquire | 0 | 4 |
| backtrace:of_unittest | 0 | 19 |
| RIP:vprintk_emit | 0 | 21 |
| RIP:__mutex_unlock_slowpath | 0 | 2 |
| RIP:_raw_spin_unlock_irqrestore | 0 | 5 |
| RIP:printk | 0 | 1 |
| RIP:mutex_lock_nested | 0 | 1 |
| RIP:__might_sleep | 0 | 1 |
| RIP:of_overlay_destroy | 0 | 1 |
+---------------------------------------------------+----------+------------+
[ 14.088391] ### dt-test ### of_unittest_destroy_tracked_overlays: overlay destroy failed for #6
[ 14.089431] of_overlay_destroy: Could not find overlay #6
[ 14.090238] ### dt-test ### of_unittest_destroy_tracked_overlays: overlay destroy failed for #6
[ 40.070004] Modules linked in:
[ 40.070004] RIP: 0010:[<ffffffff834e7e45>] [<ffffffff834e7e45>] vprintk_emit+0x155/0x560
[ 40.070004] Stack:
[ 40.070004] ffffffff847b3fc2 0000000000000053 0000000000000000 ffffffff83a6ee28
[ 40.070004] 0000000000000246 0000000000000006 0000000000000000 0000000000000000
[ 40.070004] 0000000000000006 ffff880015966198 0000000000000001 0000000000000000
[ 40.070004] Call Trace:
[ 40.070004] [<ffffffff834e839f>] vprintk_default+0x1f/0x30
[ 40.070004] [<ffffffff8356a172>] printk+0x48/0x50
[ 40.070004] [<ffffffff83c03eeb>] of_unittest+0x24f3/0x255c
[ 40.070004] [<ffffffff834002c9>] ? do_one_initcall+0x89/0x1f0
[ 40.070004] [<ffffffff83c019f8>] ? of_unittest_check_tree_linkage+0x93/0x93
[ 40.070004] [<ffffffff834002d9>] do_one_initcall+0x99/0x1f0
[ 40.070004] [<ffffffff834b7600>] ? parse_args+0x1f0/0x410
[ 40.070004] [<ffffffff834ae012>] ? __usermodehelper_set_disable_depth+0x42/0x50
[ 40.070004] [<ffffffff83bc3168>] kernel_init_freeable+0x119/0x19c
[ 40.070004] [<ffffffff838737f0>] ? rest_init+0xd0/0xd0
[ 40.070004] [<ffffffff838737fe>] kernel_init+0xe/0xe0
[ 40.070004] [<ffffffff8387b35f>] ret_from_fork+0x3f/0x70
[ 40.070004] [<ffffffff838737f0>] ? rest_init+0xd0/0xd0
[ 40.070004] Code: dc ff ff 44 8b 75 d0 41 01 c6 48 c7 c7 e0 d9 af 83 c7 05 63 5a 61 00 ff ff ff ff e8 26 28 39 00 e8 f1 3a ff ff 48 8b 7d c0 57 9d <0f> 1f 44 00 00 80 7d cf 00 44 89 f2 0f 84 48 01 00 00 48 83 c4
[ 40.070004] Kernel panic - not syncing: softlockup: hung tasks
[ 40.070004] CPU: 0 PID: 1 Comm: swapper/0 Tainted: G L 4.3.0-rc6-00001-g92ecc53 #1
[ 40.070004] ffff880014803ea8 ffff880014803e18 ffffffff83670239 ffffffff83a2bc63
[ 40.070004] ffff880014803e98 ffffffff83569cde 0000000000000008 ffff880014803ea8
[ 40.070004] ffff880014803e40 ffff880014803e98 0000000000000000 0000000000000000
[ 40.070004] Call Trace:
[ 40.070004] <IRQ> [<ffffffff83670239>] dump_stack+0x4b/0x72
[ 40.070004] [<ffffffff83569cde>] panic+0xc6/0x203
[ 40.070004] [<ffffffff8352814c>] watchdog_timer_fn+0x1ec/0x1f0
[ 40.070004] [<ffffffff83527f60>] ? watchdog+0x40/0x40
[ 40.070004] [<ffffffff834fc4e7>] hrtimer_run_queues+0x127/0x2d0
[ 40.070004] [<ffffffff834fb8e7>] update_process_times+0x27/0x60
[ 40.070004] [<ffffffff8350a9f0>] tick_nohz_handler+0x70/0xe0
[ 40.070004] [<ffffffff834314a8>] local_apic_timer_interrupt+0x38/0x60
[ 40.070004] [<ffffffff8387d6ad>] smp_apic_timer_interrupt+0x3d/0x50
[ 40.070004] [<ffffffff8387bd64>] apic_timer_interrupt+0x84/0x90
[ 40.070004] <EOI> [<ffffffff834e7e3a>] ? vprintk_emit+0x14a/0x560
[ 40.070004] [<ffffffff834e7e45>] ? vprintk_emit+0x155/0x560
[ 40.070004] [<ffffffff834e839f>] vprintk_default+0x1f/0x30
[ 40.070004] [<ffffffff8356a172>] printk+0x48/0x50
[ 40.070004] [<ffffffff83c03eeb>] of_unittest+0x24f3/0x255c
[ 40.070004] [<ffffffff834002c9>] ? do_one_initcall+0x89/0x1f0
[ 40.070004] [<ffffffff83c019f8>] ? of_unittest_check_tree_linkage+0x93/0x93
[ 40.070004] [<ffffffff834002d9>] do_one_initcall+0x99/0x1f0
[ 40.070004] [<ffffffff834b7600>] ? parse_args+0x1f0/0x410
[ 40.070004] [<ffffffff834ae012>] ? __usermodehelper_set_disable_depth+0x42/0x50
[ 40.070004] [<ffffffff83bc3168>] kernel_init_freeable+0x119/0x19c
[ 40.070004] [<ffffffff838737f0>] ? rest_init+0xd0/0xd0
[ 40.070004] [<ffffffff838737fe>] kernel_init+0xe/0xe0
[ 40.070004] [<ffffffff8387b35f>] ret_from_fork+0x3f/0x70
[ 40.070004] [<ffffffff838737f0>] ? rest_init+0xd0/0xd0
[ 40.070004] Kernel Offset: 0x2400000 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffffbfffffff)
Elapsed time: 60
Thanks,
Ying Huang
[lkp] [Add htu21 meas] 2b5c53d2c9:
by kernel test robot
FYI, we noticed the below changes on
https://git.kernel.org/pub/scm/linux/kernel/git/jic23/iio.git testing
commit 2b5c53d2c958bda92310d1b371a9314f4aa8c274 ("Add htu21 meas-spec driver support")
It appears there is a name conflict between your driver and an existing driver.
[ 21.992858] Error: Driver 'htu21' is already registered, aborting...
[ 21.992858] Error: Driver 'htu21' is already registered, aborting...
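A minimal skeleton of one way out (illustrative, not the submitted
driver; "htu21_meas" is a hypothetical replacement name):
driver_register() rejects a second driver whose .name matches one
already registered on the bus, which is what produces the error above,
so the new driver needs a bus-unique name.

#include <linux/module.h>
#include <linux/i2c.h>

static int htu21_sketch_probe(struct i2c_client *client,
                              const struct i2c_device_id *id)
{
        return 0;       /* a real driver would register its device here */
}

static const struct i2c_device_id htu21_sketch_id[] = {
        { "htu21_meas", 0 },    /* hypothetical: avoids existing "htu21" */
        { }
};
MODULE_DEVICE_TABLE(i2c, htu21_sketch_id);

static struct i2c_driver htu21_sketch_driver = {
        .driver = {
                .name = "htu21_meas", /* must not clash with a registered name */
        },
        .probe    = htu21_sketch_probe,
        .id_table = htu21_sketch_id,
};
module_i2c_driver(htu21_sketch_driver);

MODULE_LICENSE("GPL");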
Thanks,
Ying Huang
[lkp] [virtio_ring] 46d2219cb0: Buffer I/O error on dev vda, logical block 0, async page read
by kernel test robot
FYI, we noticed the below changes on
https://git.kernel.org/pub/scm/linux/kernel/git/luto/linux.git virtio_dma
commit 46d2219cb027a0ccee90b980b26a1bb46d186a87 ("virtio_ring: Support DMA APIs")
+----------------------+------------+------------+
| | a2b5cd8102 | 46d2219cb0 |
+----------------------+------------+------------+
| boot_successes | 15 | 0 |
| boot_failures | 0 | 14 |
| BUG:kernel_boot_hang | 0 | 14 |
+----------------------+------------+------------+
[ 1.295643] blk_update_request: I/O error, dev vda, sector 0
[ 1.297189] Buffer I/O error on dev vda, logical block 0, async page read
[ 2.134088] tsc: Refined TSC clocksource calibration: 2693.503 MHz
[ 2.136924] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x26d3451f606, max_idle_ns: 440795333933 ns
Elapsed time: 340
BUG: kernel boot hang
qemu-system-x86_64 -enable-kvm -cpu SandyBridge -kernel /pkg/linux/x86_64-rhel/gcc-4.9/46d2219cb027a0ccee90b980b26a1bb46d186a87/vmlinuz-4.3.0-rc7-00002-g46d2219 -append 'root=/dev/ram0 user=lkp job=/lkp/scheduled/vm-kbuild-yocto-x86_64-40/bisect_boot-1-yocto-minimal-x86_64.cgz-x86_64-rhel-46d2219cb027a0ccee90b980b26a1bb46d186a87-20151028-61392-107kks6-0.yaml ARCH=x86_64 kconfig=x86_64-rhel branch=linux-devel/devel-hourly-2015102814 commit=46d2219cb027a0ccee90b980b26a1bb46d186a87 BOOT_IMAGE=/pkg/linux/x86_64-rhel/gcc-4.9/46d2219cb027a0ccee90b980b26a1bb46d186a87/vmlinuz-4.3.0-rc7-00002-g46d2219 max_uptime=600 RESULT_ROOT=/result/boot/1/vm-kbuild-yocto-x86_64/yocto-minimal-x86_64.cgz/x86_64-rhel/gcc-4.9/46d2219cb027a0ccee90b980b26a1bb46d186a87/0 LKP_SERVER=inn earlyprintk=ttyS0,115200 systemd.log_level=err debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal rw ip=::::vm-kbuild-yocto-x86_64-40::dhcp drbd.minor_count=8' -initrd /fs/sdc1/initrd-vm-kbuild-yocto-x86_64-40 -m 320 -smp 1 -device e1000,netdev=net0 -netdev user,id=net0 -boot order=nc -no-reboot -watchdog i6300esb -rtc base=localtime -drive file=/fs/sdc1/disk0-vm-kbuild-yocto-x86_64-40,media=disk,if=virtio -pidfile /dev/shm/kboot/pid-vm-kbuild-yocto-x86_64-40 -serial file:/dev/shm/kboot/serial-vm-kbuild-yocto-x86_64-40 -daemonize -display none -monitor null
Thanks,
Ying Huang
[lkp] [sched/fair] 1746babbb1: +2.3% unixbench.score
by kernel test robot
FYI, we noticed the below changes on
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit 1746babbb15594ba2d8d8196589bbbc2b5ff51c9 ("sched/fair: Have task_move_group_fair() also detach entity load from the old runqueue")
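The mechanism in the commit title can be sketched like this (invented
types and names, for illustration only; see the commit for the real
code): a task's tracked load must follow it when it changes task group,
i.e. detach from the old cfs_rq before attaching to the new one,
otherwise the old runqueue keeps phantom load and skews balancing.

struct sketch_rq { long load_avg; };
struct sketch_task { long load_avg; struct sketch_rq *rq; };

static void move_task_group(struct sketch_task *p, struct sketch_rq *new_rq)
{
        p->rq->load_avg -= p->load_avg; /* detach from the old runqueue */
        p->rq = new_rq;
        p->rq->load_avg += p->load_avg; /* attach to the new runqueue */
}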
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/test:
lituya/unixbench/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/shell8
commit:
50a2a3b246149d041065a67ccb3e98145f780a2f
1746babbb15594ba2d8d8196589bbbc2b5ff51c9
50a2a3b246149d04 1746babbb15594ba2d8d819658
---------------- --------------------------
%stddev %change %stddev
\ | \
25166 ± 0% +2.3% 25756 ± 0% unixbench.score
9026404 ± 0% +3.9% 9376242 ± 0% unixbench.time.involuntary_context_switches
4.683e+08 ± 0% +1.8% 4.765e+08 ± 0% unixbench.time.minor_page_faults
949.00 ± 0% +3.0% 977.25 ± 0% unixbench.time.percent_of_cpu_this_job_got
2286 ± 0% +2.8% 2349 ± 0% unixbench.time.system_time
1305 ± 0% +3.3% 1348 ± 0% unixbench.time.user_time
31346 ± 8% -10.7% 27987 ± 1% cpuidle.POLL.time
2679 ± 8% +179.4% 7486 ± 40% latency_stats.max.wait_on_page_bit.filemap_fdatawait_range.filemap_fdatawait.sync_inodes_sb.sync_inodes_one_sb.iterate_supers.sys_sync.entry_SYSCALL_64_fastpath
107334 ± 22% +251.7% 377450 ± 37% latency_stats.sum.walk_component.path_lookupat.filename_lookup.user_path_at_empty.vfs_fstatat.SYSC_newstat.SyS_newstat.entry_SYSCALL_64_fastpath
979.25 ± 1% +17.5% 1150 ± 2% slabinfo.files_cache.active_objs
979.25 ± 1% +17.5% 1150 ± 2% slabinfo.files_cache.num_objs
33.25 ± 1% +14.3% 38.00 ± 0% vmstat.procs.r
25287 ± 0% +5.2% 26600 ± 0% vmstat.system.in
61.09 ± 0% +2.9% 62.85 ± 0% turbostat.%Busy
2011 ± 0% +2.9% 2069 ± 0% turbostat.Avg_MHz
2.58 ± 2% -10.4% 2.32 ± 4% turbostat.CPU%c3
1.16 ± 28% +30.9% 1.52 ± 0% turbostat.RAMWatt
1363139 ± 2% +390.2% 6682436 ± 5% sched_debug.cfs_rq[0]:/.min_vruntime
508.25 ± 35% -30.8% 351.50 ± 8% sched_debug.cfs_rq[0]:/.tg_load_avg_contrib
1483779 ± 3% +248.1% 5164998 ± 0% sched_debug.cfs_rq[10]:/.min_vruntime
120613 ± 52% -1358.2% -1517619 ±-26% sched_debug.cfs_rq[10]:/.spread0
1481440 ± 3% +243.7% 5092246 ± 2% sched_debug.cfs_rq[11]:/.min_vruntime
118271 ± 54% -1444.7% -1590395 ±-26% sched_debug.cfs_rq[11]:/.spread0
1455329 ± 0% +252.8% 5133998 ± 1% sched_debug.cfs_rq[12]:/.min_vruntime
92159 ± 30% -1780.4% -1548645 ±-27% sched_debug.cfs_rq[12]:/.spread0
536.00 ± 18% -24.9% 402.50 ± 18% sched_debug.cfs_rq[13]:/.load_avg
1451734 ± 0% +254.5% 5146739 ± 0% sched_debug.cfs_rq[13]:/.min_vruntime
88561 ± 34% -1834.3% -1535905 ±-24% sched_debug.cfs_rq[13]:/.spread0
326.25 ± 21% +73.2% 565.00 ± 20% sched_debug.cfs_rq[14]:/.load_avg
1479346 ± 3% +246.9% 5132386 ± 0% sched_debug.cfs_rq[14]:/.min_vruntime
47.67 ±141% +472.7% 273.00 ± 42% sched_debug.cfs_rq[14]:/.removed_load_avg
46.00 ±141% +467.9% 261.25 ± 47% sched_debug.cfs_rq[14]:/.removed_util_avg
116171 ± 55% -1434.5% -1550271 ±-25% sched_debug.cfs_rq[14]:/.spread0
777.75 ± 9% +35.8% 1056 ± 14% sched_debug.cfs_rq[14]:/.util_avg
1451847 ± 0% +252.2% 5113656 ± 1% sched_debug.cfs_rq[15]:/.min_vruntime
88669 ± 29% -1869.5% -1569030 ±-28% sched_debug.cfs_rq[15]:/.spread0
1346003 ± 0% +413.1% 6906332 ± 0% sched_debug.cfs_rq[1]:/.min_vruntime
347.50 ± 3% -10.0% 312.75 ± 4% sched_debug.cfs_rq[1]:/.tg_load_avg_contrib
1362040 ± 2% +408.4% 6924492 ± 0% sched_debug.cfs_rq[2]:/.min_vruntime
1363305 ± 2% +394.7% 6744667 ± 4% sched_debug.cfs_rq[3]:/.min_vruntime
1.50 ±100% +400.0% 7.50 ± 22% sched_debug.cfs_rq[3]:/.nr_spread_over
1346931 ± 0% +398.4% 6712511 ± 5% sched_debug.cfs_rq[4]:/.min_vruntime
1345064 ± 0% +416.2% 6942624 ± 0% sched_debug.cfs_rq[5]:/.min_vruntime
36.00 ± 9% -36.1% 23.00 ± 21% sched_debug.cfs_rq[5]:/.runnable_load_avg
-18089 ±-155% -1537.8% 260083 ±142% sched_debug.cfs_rq[5]:/.spread0
1362865 ± 2% +405.4% 6887733 ± 0% sched_debug.cfs_rq[6]:/.min_vruntime
1343531 ± 0% +397.9% 6689576 ± 5% sched_debug.cfs_rq[7]:/.min_vruntime
45.75 ± 22% -51.9% 22.00 ± 50% sched_debug.cfs_rq[8]:/.load
1479661 ± 3% +246.4% 5125934 ± 0% sched_debug.cfs_rq[8]:/.min_vruntime
116500 ± 18% -1436.2% -1556673 ±-22% sched_debug.cfs_rq[8]:/.spread0
37.50 ± 28% -50.7% 18.50 ± 60% sched_debug.cfs_rq[9]:/.load
1453951 ± 0% +254.7% 5156506 ± 0% sched_debug.cfs_rq[9]:/.min_vruntime
90788 ± 30% -1780.9% -1526106 ±-25% sched_debug.cfs_rq[9]:/.spread0
258966 ± 12% +33.3% 345105 ± 12% sched_debug.cpu#12.avg_idle
0.25 ±173% +800.0% 2.25 ± 57% sched_debug.cpu#14.nr_running
216317 ± 17% +28.8% 278519 ± 11% sched_debug.cpu#4.avg_idle
-210.00 ± -3% +15.5% -242.50 ± -8% sched_debug.cpu#5.nr_uninterruptible
182159 ± 15% +54.5% 281450 ± 12% sched_debug.cpu#6.avg_idle
40.25 ± 33% -32.3% 27.25 ± 23% sched_debug.cpu#7.cpu_load[2]
41.50 ± 17% -67.5% 13.50 ± 64% sched_debug.cpu#9.load
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/test:
nhm-white2/unixbench/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/shell8
commit:
50a2a3b246149d041065a67ccb3e98145f780a2f
1746babbb15594ba2d8d8196589bbbc2b5ff51c9
50a2a3b246149d04 1746babbb15594ba2d8d819658
---------------- --------------------------
%stddev %change %stddev
\ | \
1319 ± 0% +1.4% 1337 ± 0% unixbench.time.system_time
856.00 ± 0% +1.3% 867.52 ± 0% unixbench.time.user_time
31791175 ±141% -34.2% 20924107 ± 51% latency_stats.avg.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
4.97 ± 4% -9.7% 4.48 ± 5% turbostat.CPU%c3
1739900 ± 3% +23.5% 2148246 ± 7% cpuidle.C1-NHM.usage
26162 ± 4% +33.4% 34904 ± 7% cpuidle.POLL.usage
882905 ± 7% +514.3% 5423689 ± 7% sched_debug.cfs_rq[0]:/.min_vruntime
878971 ± 6% +539.3% 5618859 ± 5% sched_debug.cfs_rq[1]:/.min_vruntime
-3940 ±-545% -5052.2% 195127 ±120% sched_debug.cfs_rq[1]:/.spread0
95.00 ± 8% -43.2% 54.00 ± 42% sched_debug.cfs_rq[2]:/.load
878054 ± 6% +507.4% 5333052 ± 6% sched_debug.cfs_rq[2]:/.min_vruntime
878105 ± 5% +524.5% 5483875 ± 6% sched_debug.cfs_rq[3]:/.min_vruntime
883009 ± 7% +423.6% 4623117 ± 5% sched_debug.cfs_rq[4]:/.min_vruntime
87.47 ±4504% -9.2e+05% -800722 ±-23% sched_debug.cfs_rq[4]:/.spread0
880682 ± 7% +432.6% 4690114 ± 5% sched_debug.cfs_rq[5]:/.min_vruntime
6.75 ± 16% +85.2% 12.50 ± 25% sched_debug.cfs_rq[5]:/.nr_spread_over
-2242 ±-737% +32626.7% -733759 ±-40% sched_debug.cfs_rq[5]:/.spread0
883065 ± 7% +416.0% 4556427 ± 7% sched_debug.cfs_rq[6]:/.min_vruntime
138.28 ±14053% -6.3e+05% -867453 ±-44% sched_debug.cfs_rq[6]:/.spread0
884890 ± 5% +419.8% 4599371 ± 5% sched_debug.cfs_rq[7]:/.min_vruntime
7.50 ± 35% +60.0% 12.00 ± 30% sched_debug.cfs_rq[7]:/.nr_spread_over
1960 ±821% -42164.6% -824542 ±-45% sched_debug.cfs_rq[7]:/.spread0
-186.50 ±-16% -40.6% -110.75 ±-13% sched_debug.cpu#0.nr_uninterruptible
91.00 ± 23% -38.7% 55.75 ± 28% sched_debug.cpu#2.load
400614 ± 7% -30.4% 278901 ± 27% sched_debug.cpu#6.avg_idle
323918 ± 45% +96.5% 636444 ± 64% sched_debug.cpu#6.sched_goidle
0.41 ± 69% +3122.5% 13.28 ±163% sched_debug.rt_rq[0]:/.rt_time
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/test:
nhm-white/unixbench/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/shell8
commit:
50a2a3b246149d041065a67ccb3e98145f780a2f
1746babbb15594ba2d8d8196589bbbc2b5ff51c9
50a2a3b246149d04 1746babbb15594ba2d8d819658
---------------- --------------------------
%stddev %change %stddev
\ | \
563.50 ± 3% +3.3% 582.25 ± 0% unixbench.time.percent_of_cpu_this_job_got
1318 ± 0% +1.5% 1338 ± 0% unixbench.time.system_time
854.73 ± 0% +1.3% 866.10 ± 0% unixbench.time.user_time
6687472 ± 0% -1.3% 6602598 ± 0% unixbench.time.voluntary_context_switches
5085 ± 2% +143.2% 12369 ± 58% latency_stats.max.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
1878596 ± 4% +13.2% 2127099 ± 6% cpuidle.C1-NHM.usage
191783 ± 4% +38.6% 265815 ± 18% cpuidle.POLL.time
24853 ± 2% +42.7% 35470 ± 8% cpuidle.POLL.usage
72.67 ± 3% +3.5% 75.22 ± 0% turbostat.%Busy
2123 ± 3% +3.5% 2197 ± 0% turbostat.Avg_MHz
5.22 ± 3% -8.9% 4.75 ± 0% turbostat.CPU%c3
441.00 ± 16% -23.0% 339.75 ± 2% sched_debug.cfs_rq[0]:/.load_avg
908244 ± 5% +503.6% 5481834 ± 6% sched_debug.cfs_rq[0]:/.min_vruntime
10.75 ± 13% +53.5% 16.50 ± 20% sched_debug.cfs_rq[0]:/.nr_spread_over
910039 ± 5% +512.9% 5577184 ± 3% sched_debug.cfs_rq[1]:/.min_vruntime
915823 ± 4% +523.4% 5709506 ± 0% sched_debug.cfs_rq[2]:/.min_vruntime
544.25 ± 10% -24.3% 412.00 ± 14% sched_debug.cfs_rq[3]:/.load_avg
911018 ± 5% +495.7% 5427014 ± 5% sched_debug.cfs_rq[3]:/.min_vruntime
8.00 ± 34% +46.9% 11.75 ± 11% sched_debug.cfs_rq[3]:/.nr_spread_over
193.00 ± 31% -65.3% 67.00 ±100% sched_debug.cfs_rq[3]:/.removed_load_avg
190.25 ± 29% -64.8% 67.00 ±100% sched_debug.cfs_rq[3]:/.removed_util_avg
1111 ± 7% -10.0% 1000 ± 5% sched_debug.cfs_rq[3]:/.util_avg
912227 ± 5% +415.3% 4700282 ± 1% sched_debug.cfs_rq[4]:/.min_vruntime
3970 ± 50% -19783.8% -781587 ±-36% sched_debug.cfs_rq[4]:/.spread0
917396 ± 5% +410.3% 4681186 ± 3% sched_debug.cfs_rq[5]:/.min_vruntime
9137 ±125% -8862.6% -800697 ±-52% sched_debug.cfs_rq[5]:/.spread0
923564 ± 4% +417.5% 4779073 ± 0% sched_debug.cfs_rq[6]:/.min_vruntime
15304 ± 75% -4692.4% -702818 ±-46% sched_debug.cfs_rq[6]:/.spread0
916876 ± 5% +405.9% 4638348 ± 2% sched_debug.cfs_rq[7]:/.min_vruntime
8613 ± 95% -9893.4% -843599 ±-48% sched_debug.cfs_rq[7]:/.spread0
91.00 ± 13% +123.9% 203.75 ± 51% sched_debug.cpu#0.load
-194.75 ±-18% -47.5% -102.25 ±-49% sched_debug.cpu#0.nr_uninterruptible
85.50 ± 17% +94.2% 166.00 ± 31% sched_debug.cpu#1.load
1.00 ±122% +200.0% 3.00 ± 33% sched_debug.cpu#1.nr_running
-152.50 ±-19% -43.1% -86.75 ±-39% sched_debug.cpu#1.nr_uninterruptible
13309 ± 24% -42.6% 7642 ± 34% sched_debug.cpu#2.curr->pid
2535607 ± 57% -58.2% 1061081 ± 0% sched_debug.cpu#2.nr_switches
-174.50 ±-29% -63.2% -64.25 ±-29% sched_debug.cpu#2.nr_uninterruptible
2536010 ± 57% -58.1% 1061543 ± 0% sched_debug.cpu#2.sched_count
601197 ± 44% -49.3% 304897 ± 0% sched_debug.cpu#2.sched_goidle
203.75 ± 4% -70.2% 60.75 ± 26% sched_debug.cpu#4.nr_uninterruptible
77.25 ± 5% -16.8% 64.25 ± 9% sched_debug.cpu#5.cpu_load[2]
79.00 ± 6% -19.0% 64.00 ± 7% sched_debug.cpu#5.cpu_load[3]
78.25 ± 7% -16.0% 65.75 ± 6% sched_debug.cpu#5.cpu_load[4]
85.50 ± 12% -28.1% 61.50 ± 9% sched_debug.cpu#7.cpu_load[1]
83.25 ± 10% -26.1% 61.50 ± 13% sched_debug.cpu#7.cpu_load[2]
78.25 ± 7% -20.8% 62.00 ± 12% sched_debug.cpu#7.cpu_load[3]
75.75 ± 4% -15.2% 64.25 ± 7% sched_debug.cpu#7.cpu_load[4]
1390226 ± 60% +56.0% 2168907 ± 57% sched_debug.cpu#7.nr_switches
1390620 ± 60% +56.0% 2169299 ± 57% sched_debug.cpu#7.sched_count
370003 ± 57% +59.8% 591403 ± 56% sched_debug.cpu#7.sched_goidle
614114 ± 68% +62.3% 996936 ± 63% sched_debug.cpu#7.ttwu_count
431156 ± 97% +87.5% 808605 ± 78% sched_debug.cpu#7.ttwu_local
6.61 ±157% -99.9% 0.01 ± 83% sched_debug.rt_rq[6]:/.rt_time
lituya: Grantley Haswell
Memory: 16G
nhm-white2: Nehalem
Memory: 4G
nhm-white: Nehalem
Memory: 6G
unixbench.time.system_time
1400 ++-------------------------------------------------------------------+
O O O O O O O O O O O O O O |
1380 ++ O O O |
1360 ++ |
| |
1340 ++ O O O O O O |
| O O O O |
1320 ++ *..*.*..*
| : |
1300 ++ : |
1280 ++ : |
*..*.*..*.*..*.*..*.*..*.. *..* |
1260 ++ *.*..*.*..*. .*. + |
| *. *..*..*.*..* |
1240 ++-------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang
[lkp] [sched/fair] fde7d22e01: -25.1% unixbench.score
by kernel test robot
FYI, we noticed the below changes on
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit fde7d22e01aa0d252fc5c95fa11f0dac35a4dd59 ("sched/fair: Fix overly small weight for interactive group entities")
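A toy model of the failure mode named in the title (invented names, not
the kernel's actual formula; the real code is calc_cfs_shares() in
kernel/sched/fair.c): deriving a group's share from its decayed load
average makes a mostly-sleeping, interactive group nearly weightless at
the moment it wakes, whereas the instantaneous queue weight does not.

static long shares_from_avg(long grp_load_avg, long others, long tg_shares)
{
        /* A just-woken interactive group has grp_load_avg near 0, so
         * its share collapses even though it is runnable right now. */
        return tg_shares * grp_load_avg / (grp_load_avg + others + 1);
}

static long shares_from_weight(long grp_weight, long others, long tg_shares)
{
        /* The instantaneous queue weight reflects what is runnable now. */
        return tg_shares * grp_weight / (grp_weight + others + 1);
}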
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/test:
nhm-white2/unixbench/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/shell1
commit:
9babcd7929bc8967ae3bb6093f603b93c2f9958f
fde7d22e01aa0d252fc5c95fa11f0dac35a4dd59
9babcd7929bc8967 fde7d22e01aa0d252fc5c95fa1
---------------- --------------------------
%stddev %change %stddev
\ | \
10085 ± 0% -4.4% 9639 ± 0% unixbench.score
2689102 ± 0% -13.5% 2326547 ± 0% unixbench.time.involuntary_context_switches
7382 ± 0% +0.7% 7433 ± 0% unixbench.time.maximum_resident_set_size
1.428e+08 ± 0% -3.9% 1.373e+08 ± 0% unixbench.time.minor_page_faults
368.25 ± 0% -9.2% 334.50 ± 5% unixbench.time.percent_of_cpu_this_job_got
898.97 ± 0% -4.4% 859.05 ± 0% unixbench.time.system_time
496.35 ± 0% -4.1% 475.79 ± 0% unixbench.time.user_time
4537090 ± 0% -4.1% 4352615 ± 0% unixbench.time.voluntary_context_switches
2689102 ± 0% -13.5% 2326547 ± 0% time.involuntary_context_switches
1795 ± 0% +10.4% 1982 ± 6% uptime.idle
2570 ± 2% +40.3% 3607 ± 3% slabinfo.kmalloc-128.active_objs
2607 ± 2% +41.6% 3692 ± 3% slabinfo.kmalloc-128.num_objs
54942 ± 0% -16.0% 46156 ± 6% vmstat.system.cs
16089 ± 0% -19.9% 12893 ± 8% vmstat.system.in
47.77 ± 0% -8.8% 43.56 ± 5% turbostat.%Busy
1371 ± 0% -8.8% 1250 ± 5% turbostat.Avg_MHz
7.99 ± 0% +69.4% 13.55 ± 32% turbostat.CPU%c6
0.00 ± -1% +Inf% 11920829 ±119% latency_stats.avg.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
761.75 ± 71% +2.8e+05% 2145589 ±105% latency_stats.avg.wait_on_page_bit.filemap_fdatawait_range.filemap_fdatawait.sync_inodes_sb.sync_inodes_one_sb.iterate_supers.sys_sync.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 19732421 ±101% latency_stats.max.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
4823 ±107% +4.2e+05% 20091553 ±101% latency_stats.max.wait_on_page_bit.filemap_fdatawait_range.filemap_fdatawait.sync_inodes_sb.sync_inodes_one_sb.iterate_supers.sys_sync.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 24618851 ±108% latency_stats.sum.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
9025 ± 73% +2.3e+05% 20945539 ±101% latency_stats.sum.wait_on_page_bit.filemap_fdatawait_range.filemap_fdatawait.sync_inodes_sb.sync_inodes_one_sb.iterate_supers.sys_sync.entry_SYSCALL_64_fastpath
66400760 ± 0% -11.6% 58707782 ± 1% cpuidle.C1-NHM.time
3829370 ± 0% -20.1% 3060906 ± 6% cpuidle.C1-NHM.usage
1.065e+08 ± 0% -22.5% 82521983 ± 0% cpuidle.C1E-NHM.time
1309487 ± 0% -28.4% 937861 ± 0% cpuidle.C1E-NHM.usage
7.872e+08 ± 0% +23.1% 9.694e+08 ± 0% cpuidle.C3-NHM.time
129395 ± 4% +22.0% 157908 ± 5% cpuidle.POLL.time
18511 ± 2% -20.0% 14807 ± 5% cpuidle.POLL.usage
67769 ± 0% -19.4% 54588 ± 13% sched_debug.cfs_rq[0]:/.exec_clock
49.25 ± 26% -82.7% 8.50 ± 39% sched_debug.cfs_rq[0]:/.load
1445770 ± 2% +129.0% 3310199 ± 16% sched_debug.cfs_rq[0]:/.min_vruntime
37.00 ± 25% -85.1% 5.50 ± 15% sched_debug.cfs_rq[0]:/.runnable_load_avg
65.50 ± 6% -87.0% 8.50 ± 54% sched_debug.cfs_rq[1]:/.load
1414810 ± 3% +148.4% 3514441 ± 14% sched_debug.cfs_rq[1]:/.min_vruntime
43.25 ± 15% -83.2% 7.25 ± 39% sched_debug.cfs_rq[1]:/.runnable_load_avg
-30963 ±-82% -759.6% 204224 ± 47% sched_debug.cfs_rq[1]:/.spread0
142.50 ± 88% -87.7% 17.50 ± 82% sched_debug.cfs_rq[2]:/.load
1404786 ± 3% +146.5% 3462555 ± 14% sched_debug.cfs_rq[2]:/.min_vruntime
106.75 ±108% -96.0% 4.25 ± 56% sched_debug.cfs_rq[2]:/.runnable_load_avg
3.50 ± 47% -85.7% 0.50 ±100% sched_debug.cfs_rq[2]:/.tg_load_avg_contrib
100.75 ± 23% -83.6% 16.50 ± 86% sched_debug.cfs_rq[3]:/.load
1463930 ± 1% +140.4% 3519907 ± 14% sched_debug.cfs_rq[3]:/.min_vruntime
10.00 ± 18% -50.0% 5.00 ± 44% sched_debug.cfs_rq[3]:/.nr_spread_over
51.50 ± 10% -86.4% 7.00 ± 36% sched_debug.cfs_rq[3]:/.runnable_load_avg
18144 ±239% +1055.5% 209666 ± 41% sched_debug.cfs_rq[3]:/.spread0
134.50 ± 91% -90.1% 13.25 ± 11% sched_debug.cfs_rq[4]:/.load
675319 ± 0% +388.5% 3299162 ± 14% sched_debug.cfs_rq[4]:/.min_vruntime
37.25 ± 33% -84.6% 5.75 ± 25% sched_debug.cfs_rq[4]:/.runnable_load_avg
-770467 ± -4% -98.6% -11087 ±-722% sched_debug.cfs_rq[4]:/.spread0
662789 ± 2% +406.9% 3359566 ± 14% sched_debug.cfs_rq[5]:/.min_vruntime
-783000 ± -3% -106.3% 49307 ±234% sched_debug.cfs_rq[5]:/.spread0
42.50 ± 35% -74.7% 10.75 ± 63% sched_debug.cfs_rq[6]:/.load
675333 ± 4% +391.3% 3318169 ± 14% sched_debug.cfs_rq[6]:/.min_vruntime
176.75 ± 36% -76.4% 41.67 ±141% sched_debug.cfs_rq[6]:/.removed_load_avg
175.25 ± 35% -76.2% 41.67 ±141% sched_debug.cfs_rq[6]:/.removed_util_avg
35.75 ± 22% -85.3% 5.25 ± 15% sched_debug.cfs_rq[6]:/.runnable_load_avg
-770458 ± -7% -101.0% 7903 ±1955% sched_debug.cfs_rq[6]:/.spread0
77.50 ± 54% -81.6% 14.25 ± 15% sched_debug.cfs_rq[7]:/.load
680645 ± 2% +395.2% 3370439 ± 14% sched_debug.cfs_rq[7]:/.min_vruntime
33.00 ± 37% -83.3% 5.50 ± 39% sched_debug.cfs_rq[7]:/.runnable_load_avg
-765169 ± -2% -107.9% 60166 ±155% sched_debug.cfs_rq[7]:/.spread0
40.25 ± 23% -88.2% 4.75 ± 17% sched_debug.cpu#0.cpu_load[0]
37.00 ± 11% -87.2% 4.75 ± 27% sched_debug.cpu#0.cpu_load[1]
35.00 ± 12% -86.4% 4.75 ± 27% sched_debug.cpu#0.cpu_load[2]
34.00 ± 13% -83.8% 5.50 ± 20% sched_debug.cpu#0.cpu_load[3]
35.50 ± 17% -81.0% 6.75 ± 12% sched_debug.cpu#0.cpu_load[4]
50.25 ± 50% -85.1% 7.50 ± 46% sched_debug.cpu#0.load
-21.00 ±-44% +504.8% -127.00 ±-32% sched_debug.cpu#0.nr_uninterruptible
344270 ± 14% +40.3% 482926 ± 5% sched_debug.cpu#1.avg_idle
31.25 ± 39% -80.8% 6.00 ± 26% sched_debug.cpu#1.cpu_load[0]
30.50 ± 20% -80.3% 6.00 ± 11% sched_debug.cpu#1.cpu_load[1]
31.50 ± 15% -77.8% 7.00 ± 17% sched_debug.cpu#1.cpu_load[2]
33.00 ± 13% -75.0% 8.25 ± 27% sched_debug.cpu#1.cpu_load[3]
34.50 ± 9% -76.1% 8.25 ± 23% sched_debug.cpu#1.cpu_load[4]
61.75 ± 8% -87.0% 8.00 ± 57% sched_debug.cpu#1.load
1826264 ± 61% -65.5% 629410 ± 9% sched_debug.cpu#1.nr_switches
-29.25 ±-54% +309.4% -119.75 ±-23% sched_debug.cpu#1.nr_uninterruptible
1826525 ± 61% -65.5% 629747 ± 9% sched_debug.cpu#1.sched_count
763177 ± 64% -68.1% 243350 ± 7% sched_debug.cpu#1.sched_goidle
818609 ± 68% -71.5% 233331 ± 9% sched_debug.cpu#1.ttwu_count
665884 ± 85% -87.5% 83067 ± 11% sched_debug.cpu#1.ttwu_local
41.00 ± 9% -90.2% 4.00 ± 30% sched_debug.cpu#2.cpu_load[0]
40.75 ± 7% -90.2% 4.00 ± 17% sched_debug.cpu#2.cpu_load[1]
38.25 ± 7% -86.9% 5.00 ± 14% sched_debug.cpu#2.cpu_load[2]
37.50 ± 7% -84.0% 6.00 ± 26% sched_debug.cpu#2.cpu_load[3]
36.75 ± 7% -80.3% 7.25 ± 32% sched_debug.cpu#2.cpu_load[4]
-24.00 ±-40% +410.4% -122.50 ±-28% sched_debug.cpu#2.nr_uninterruptible
39.50 ± 35% -87.3% 5.00 ± 34% sched_debug.cpu#3.cpu_load[0]
38.25 ± 21% -86.3% 5.25 ± 24% sched_debug.cpu#3.cpu_load[1]
38.75 ± 18% -84.5% 6.00 ± 23% sched_debug.cpu#3.cpu_load[2]
39.50 ± 15% -82.9% 6.75 ± 46% sched_debug.cpu#3.cpu_load[3]
40.00 ± 12% -81.9% 7.25 ± 48% sched_debug.cpu#3.cpu_load[4]
89.75 ± 38% -81.1% 17.00 ± 95% sched_debug.cpu#3.load
-42.25 ±-30% +203.0% -128.00 ±-18% sched_debug.cpu#3.nr_uninterruptible
259068 ± 1% -12.3% 227166 ± 8% sched_debug.cpu#3.ttwu_count
97491 ± 0% -16.6% 81303 ± 9% sched_debug.cpu#3.ttwu_local
39.00 ± 26% -85.9% 5.50 ± 20% sched_debug.cpu#4.cpu_load[0]
39.50 ± 19% -86.1% 5.50 ± 20% sched_debug.cpu#4.cpu_load[1]
40.00 ± 18% -83.8% 6.50 ± 49% sched_debug.cpu#4.cpu_load[2]
40.00 ± 16% -83.1% 6.75 ± 53% sched_debug.cpu#4.cpu_load[3]
39.25 ± 10% -81.5% 7.25 ± 29% sched_debug.cpu#4.cpu_load[4]
211.00 ±123% -93.4% 14.00 ± 13% sched_debug.cpu#4.load
103219 ± 0% +16.8% 120564 ± 7% sched_debug.cpu#4.nr_load_updates
17.00 ± 71% +648.5% 127.25 ± 18% sched_debug.cpu#4.nr_uninterruptible
53.00 ± 17% -92.5% 4.00 ± 81% sched_debug.cpu#5.cpu_load[0]
54.50 ± 24% -74.8% 13.75 ±103% sched_debug.cpu#5.cpu_load[1]
48.50 ± 21% -70.6% 14.25 ± 76% sched_debug.cpu#5.cpu_load[2]
44.00 ± 13% -68.2% 14.00 ± 58% sched_debug.cpu#5.cpu_load[3]
41.25 ± 12% -69.1% 12.75 ± 47% sched_debug.cpu#5.cpu_load[4]
101.75 ± 32% -86.2% 14.00 ± 64% sched_debug.cpu#5.load
34.00 ± 40% +246.3% 117.75 ± 15% sched_debug.cpu#5.nr_uninterruptible
1117882 ± 92% -93.2% 76324 ± 12% sched_debug.cpu#5.ttwu_local
28.00 ± 46% -87.5% 3.50 ± 31% sched_debug.cpu#6.cpu_load[0]
31.75 ± 18% -85.8% 4.50 ± 19% sched_debug.cpu#6.cpu_load[1]
32.75 ± 5% -84.7% 5.00 ± 14% sched_debug.cpu#6.cpu_load[2]
34.00 ± 8% -78.7% 7.25 ± 20% sched_debug.cpu#6.cpu_load[3]
36.00 ± 11% -76.4% 8.50 ± 24% sched_debug.cpu#6.cpu_load[4]
46.00 ± 37% -80.4% 9.00 ± 84% sched_debug.cpu#6.load
30.25 ± 44% +318.2% 126.50 ± 16% sched_debug.cpu#6.nr_uninterruptible
1182298 ± 92% -71.7% 334147 ±135% sched_debug.cpu#6.ttwu_local
42.00 ± 33% -85.7% 6.00 ± 11% sched_debug.cpu#7.cpu_load[0]
40.50 ± 20% -85.8% 5.75 ± 14% sched_debug.cpu#7.cpu_load[1]
39.75 ± 11% -85.5% 5.75 ± 7% sched_debug.cpu#7.cpu_load[2]
40.25 ± 7% -82.6% 7.00 ± 10% sched_debug.cpu#7.cpu_load[3]
39.50 ± 7% -81.6% 7.25 ± 11% sched_debug.cpu#7.cpu_load[4]
103791 ± 0% +17.2% 121689 ± 8% sched_debug.cpu#7.nr_load_updates
32.00 ± 35% +281.2% 122.00 ± 26% sched_debug.cpu#7.nr_uninterruptible
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/test:
nhm-white2/unixbench/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/spawn
commit:
9babcd7929bc8967ae3bb6093f603b93c2f9958f
fde7d22e01aa0d252fc5c95fa11f0dac35a4dd59
9babcd7929bc8967 fde7d22e01aa0d252fc5c95fa1
---------------- --------------------------
%stddev %change %stddev
\ | \
4166 ± 0% -25.1% 3121 ± 0% unixbench.score
341678 ± 2% -44.4% 189815 ± 24% unixbench.time.involuntary_context_switches
1.347e+08 ± 0% -27.4% 97844330 ± 0% unixbench.time.minor_page_faults
313.50 ± 10% -13.2% 272.25 ± 0% unixbench.time.percent_of_cpu_this_job_got
720.99 ± 0% -26.1% 532.63 ± 0% unixbench.time.system_time
11508855 ± 0% -27.6% 8332095 ± 0% unixbench.time.voluntary_context_switches
253.34 ± 9% -15.1% 215.19 ± 0% uptime.boot
2925 ± 4% +34.6% 3938 ± 1% slabinfo.anon_vma.active_objs
2925 ± 4% +36.3% 3988 ± 1% slabinfo.anon_vma.num_objs
270278 ± 0% -29.2% 191227 ± 0% softirqs.RCU
327794 ± 0% -42.4% 188830 ± 0% softirqs.SCHED
398497 ± 0% -27.0% 290848 ± 0% softirqs.TIMER
42.46 ± 10% -12.4% 37.17 ± 0% turbostat.%Busy
1239 ± 10% -14.7% 1056 ± 0% turbostat.Avg_MHz
2.17 ± 3% +245.4% 7.49 ± 2% turbostat.CPU%c3
1.432e+08 ± 0% -26.7% 1.05e+08 ± 0% proc-vmstat.numa_hit
1.432e+08 ± 0% -26.7% 1.05e+08 ± 0% proc-vmstat.numa_local
1.223e+08 ± 0% -26.6% 89790760 ± 0% proc-vmstat.pgalloc_dma32
41596743 ± 0% -26.4% 30615786 ± 0% proc-vmstat.pgalloc_normal
1.342e+08 ± 0% -27.3% 97507136 ± 0% proc-vmstat.pgfault
1.639e+08 ± 0% -26.5% 1.204e+08 ± 0% proc-vmstat.pgfree
341678 ± 2% -44.4% 189815 ± 24% time.involuntary_context_switches
1.347e+08 ± 0% -27.4% 97844330 ± 0% time.minor_page_faults
313.50 ± 10% -13.2% 272.25 ± 0% time.percent_of_cpu_this_job_got
720.99 ± 0% -26.1% 532.63 ± 0% time.system_time
11.87 ± 1% -30.3% 8.27 ± 1% time.user_time
11508855 ± 0% -27.6% 8332095 ± 0% time.voluntary_context_switches
2.624e+08 ± 1% -57.0% 1.128e+08 ± 1% cpuidle.C1-NHM.time
6717622 ± 2% -30.1% 4698658 ± 14% cpuidle.C1-NHM.usage
1.212e+08 ± 0% +46.2% 1.772e+08 ± 1% cpuidle.C1E-NHM.time
1704312 ± 1% +88.5% 3213346 ± 1% cpuidle.C1E-NHM.usage
83802866 ± 3% +136.9% 1.985e+08 ± 2% cpuidle.C3-NHM.time
215374 ± 0% +454.0% 1193095 ± 1% cpuidle.C3-NHM.usage
152900 ± 12% +84.5% 282080 ± 37% cpuidle.POLL.time
42421 ± 3% -37.0% 26731 ± 21% cpuidle.POLL.usage
44911227 ± 29% -96.6% 1536687 ±100% latency_stats.avg.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
3072074 ± 62% -100.0% 609.50 ± 60% latency_stats.avg.wait_on_page_bit.filemap_fdatawait_range.filemap_fdatawait.sync_inodes_sb.sync_inodes_one_sb.iterate_supers.sys_sync.entry_SYSCALL_64_fastpath
44925 ± 11% -79.7% 9129 ± 2% latency_stats.hits.call_rwsem_down_write_failed.copy_process._do_fork.SyS_clone.entry_SYSCALL_64_fastpath
49632519 ± 22% -96.9% 1536687 ±100% latency_stats.max.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
31980111 ± 70% -100.0% 3453 ± 83% latency_stats.max.wait_on_page_bit.filemap_fdatawait_range.filemap_fdatawait.sync_inodes_sb.sync_inodes_one_sb.iterate_supers.sys_sync.entry_SYSCALL_64_fastpath
92953 ± 11% -80.7% 17962 ± 4% latency_stats.sum.call_rwsem_down_write_failed.copy_process._do_fork.SyS_clone.entry_SYSCALL_64_fastpath
56601104 ± 26% -97.3% 1536687 ±100% latency_stats.sum.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
36953594 ± 62% -100.0% 6067 ± 60% latency_stats.sum.wait_on_page_bit.filemap_fdatawait_range.filemap_fdatawait.sync_inodes_sb.sync_inodes_one_sb.iterate_supers.sys_sync.entry_SYSCALL_64_fastpath
66.00 ± 38% -98.9% 0.75 ±173% sched_debug.cfs_rq[0]:/.load
2149597 ± 20% +342.4% 9510654 ± 16% sched_debug.cfs_rq[0]:/.min_vruntime
37.25 ± 31% -98.7% 0.50 ±173% sched_debug.cfs_rq[0]:/.runnable_load_avg
35.00 ± 24% -87.1% 4.50 ± 57% sched_debug.cfs_rq[1]:/.load
2156450 ± 5% +356.4% 9840991 ± 2% sched_debug.cfs_rq[1]:/.min_vruntime
31.75 ± 26% -95.3% 1.50 ± 33% sched_debug.cfs_rq[1]:/.runnable_load_avg
70.50 ± 48% -99.6% 0.25 ±173% sched_debug.cfs_rq[2]:/.load
1896064 ± 8% +412.1% 9708867 ± 7% sched_debug.cfs_rq[2]:/.min_vruntime
10.75 ± 35% -55.8% 4.75 ± 22% sched_debug.cfs_rq[2]:/.nr_spread_over
43.25 ± 30% -98.3% 0.75 ±110% sched_debug.cfs_rq[2]:/.runnable_load_avg
131.00 ± 92% -99.2% 1.00 ± 0% sched_debug.cfs_rq[3]:/.load
1980384 ± 8% +400.7% 9915445 ± 8% sched_debug.cfs_rq[3]:/.min_vruntime
105.75 ±101% -99.3% 0.75 ± 57% sched_debug.cfs_rq[3]:/.runnable_load_avg
558090 ± 29% +1264.0% 7612282 ± 1% sched_debug.cfs_rq[4]:/.min_vruntime
338.25 ± 35% -81.4% 62.75 ±172% sched_debug.cfs_rq[4]:/.removed_load_avg
338.25 ± 35% -81.4% 62.75 ±172% sched_debug.cfs_rq[4]:/.removed_util_avg
154.75 ±133% -98.1% 3.00 ± 94% sched_debug.cfs_rq[5]:/.load
437512 ± 35% +1695.5% 7855731 ± 6% sched_debug.cfs_rq[5]:/.min_vruntime
149.50 ±136% -99.3% 1.00 ±122% sched_debug.cfs_rq[5]:/.runnable_load_avg
63.75 ± 62% -98.0% 1.25 ±173% sched_debug.cfs_rq[6]:/.load
499326 ± 13% +1476.5% 7872108 ± 5% sched_debug.cfs_rq[6]:/.min_vruntime
8.75 ± 40% -57.1% 3.75 ± 22% sched_debug.cfs_rq[6]:/.nr_spread_over
31.00 ± 21% -95.2% 1.50 ± 74% sched_debug.cfs_rq[6]:/.runnable_load_avg
40.50 ± 53% -96.3% 1.50 ±110% sched_debug.cfs_rq[7]:/.load
413625 ± 23% +1860.3% 8108224 ± 10% sched_debug.cfs_rq[7]:/.min_vruntime
32.25 ± 46% -100.0% 0.00 ± 0% sched_debug.cfs_rq[7]:/.runnable_load_avg
475552 ± 23% -58.7% 196603 ± 40% sched_debug.cpu#0.avg_idle
121618 ± 12% -12.6% 106321 ± 0% sched_debug.cpu#0.clock
121618 ± 12% -12.6% 106321 ± 0% sched_debug.cpu#0.clock_task
32.75 ± 65% -98.5% 0.50 ±173% sched_debug.cpu#0.cpu_load[0]
46.75 ± 55% -88.2% 5.50 ±118% sched_debug.cpu#0.cpu_load[1]
55.00 ± 48% -79.1% 11.50 ± 91% sched_debug.cpu#0.cpu_load[2]
58.75 ± 40% -74.5% 15.00 ± 32% sched_debug.cpu#0.cpu_load[3]
60.00 ± 34% -73.8% 15.75 ± 14% sched_debug.cpu#0.cpu_load[4]
121618 ± 12% -12.6% 106322 ± 0% sched_debug.cpu#1.clock
121618 ± 12% -12.6% 106322 ± 0% sched_debug.cpu#1.clock_task
32.00 ± 61% -97.7% 0.75 ±110% sched_debug.cpu#1.cpu_load[0]
49.25 ± 67% -90.4% 4.75 ± 89% sched_debug.cpu#1.cpu_load[1]
60.00 ± 54% -76.7% 14.00 ± 82% sched_debug.cpu#1.cpu_load[2]
65.00 ± 40% -71.2% 18.75 ± 71% sched_debug.cpu#1.cpu_load[3]
64.25 ± 28% -70.0% 19.25 ± 55% sched_debug.cpu#1.cpu_load[4]
1121772 ± 17% -38.2% 693272 ± 6% sched_debug.cpu#1.nr_switches
1121837 ± 17% -38.2% 693317 ± 6% sched_debug.cpu#1.sched_count
465414 ± 24% -33.7% 308355 ± 7% sched_debug.cpu#1.sched_goidle
360318 ± 36% -52.9% 169582 ± 12% sched_debug.cpu#1.ttwu_count
209570 ± 74% -86.5% 28358 ± 5% sched_debug.cpu#1.ttwu_local
121618 ± 12% -12.6% 106322 ± 0% sched_debug.cpu#2.clock
121618 ± 12% -12.6% 106322 ± 0% sched_debug.cpu#2.clock_task
31.75 ± 48% -98.4% 0.50 ±100% sched_debug.cpu#2.cpu_load[0]
53.25 ± 28% -58.7% 22.00 ± 28% sched_debug.cpu#2.cpu_load[3]
53.50 ± 25% -71.0% 15.50 ± 13% sched_debug.cpu#2.cpu_load[4]
1683 ±118% +210.7% 5231 ± 29% sched_debug.cpu#2.curr->pid
56.25 ± 62% -98.7% 0.75 ± 57% sched_debug.cpu#2.load
121619 ± 12% -12.6% 106322 ± 0% sched_debug.cpu#3.clock
121619 ± 12% -12.6% 106322 ± 0% sched_debug.cpu#3.clock_task
27.50 ± 9% -100.0% 0.00 ± 0% sched_debug.cpu#3.cpu_load[0]
33.75 ± 13% -81.5% 6.25 ± 93% sched_debug.cpu#3.cpu_load[1]
8.75 ±131% -351.4% -22.00 ±-43% sched_debug.cpu#3.nr_uninterruptible
737612 ± 58% -76.7% 172042 ±144% sched_debug.cpu#3.ttwu_local
121617 ± 12% -12.6% 106322 ± 0% sched_debug.cpu#4.clock
121617 ± 12% -12.6% 106322 ± 0% sched_debug.cpu#4.clock_task
35.75 ± 28% -98.6% 0.50 ±173% sched_debug.cpu#4.cpu_load[0]
33.50 ± 29% -60.4% 13.25 ± 98% sched_debug.cpu#4.cpu_load[1]
51.00 ± 20% -49.5% 25.75 ± 57% sched_debug.cpu#4.cpu_load[4]
4.25 ±126% +282.4% 16.25 ± 45% sched_debug.cpu#4.nr_uninterruptible
99233 ± 29% -70.2% 29599 ± 4% sched_debug.cpu#4.ttwu_local
121611 ± 12% -12.6% 106322 ± 0% sched_debug.cpu#5.clock
121611 ± 12% -12.6% 106322 ± 0% sched_debug.cpu#5.clock_task
26.25 ± 93% -97.1% 0.75 ±173% sched_debug.cpu#5.cpu_load[0]
27.25 ± 51% -97.2% 0.75 ±110% sched_debug.cpu#5.cpu_load[1]
32.25 ± 30% -88.4% 3.75 ± 78% sched_debug.cpu#5.cpu_load[2]
37.50 ± 28% -76.7% 8.75 ± 50% sched_debug.cpu#5.cpu_load[3]
40.25 ± 30% -70.8% 11.75 ± 25% sched_debug.cpu#5.cpu_load[4]
1884512 ±118% -73.6% 497744 ± 10% sched_debug.cpu#5.nr_switches
1884576 ±118% -73.6% 497787 ± 10% sched_debug.cpu#5.sched_count
886706 ±128% -78.7% 188449 ± 10% sched_debug.cpu#5.ttwu_count
783005 ±147% -96.4% 27830 ± 1% sched_debug.cpu#5.ttwu_local
121618 ± 12% -12.6% 106323 ± 0% sched_debug.cpu#6.clock
121618 ± 12% -12.6% 106323 ± 0% sched_debug.cpu#6.clock_task
60.50 ± 38% -64.5% 21.50 ±137% sched_debug.cpu#6.cpu_load[2]
55.00 ± 21% -58.6% 22.75 ± 95% sched_debug.cpu#6.cpu_load[3]
52.75 ± 22% -56.4% 23.00 ± 68% sched_debug.cpu#6.cpu_load[4]
35564 ± 18% +43.9% 51159 ± 18% sched_debug.cpu#6.nr_load_updates
3.50 ±107% +435.7% 18.75 ± 17% sched_debug.cpu#6.nr_uninterruptible
478682 ± 24% -49.7% 240698 ± 33% sched_debug.cpu#7.avg_idle
121618 ± 12% -12.6% 106323 ± 0% sched_debug.cpu#7.clock
121618 ± 12% -12.6% 106323 ± 0% sched_debug.cpu#7.clock_task
53.00 ± 60% -67.5% 17.25 ± 54% sched_debug.cpu#7.cpu_load[3]
53.25 ± 49% -77.9% 11.75 ± 29% sched_debug.cpu#7.cpu_load[4]
46.50 ± 41% -95.0% 2.33 ± 80% sched_debug.cpu#7.load
3.25 ± 79% +569.2% 21.75 ± 17% sched_debug.cpu#7.nr_uninterruptible
121618 ± 12% -12.6% 106321 ± 0% sched_debug.cpu_clk
121450 ± 12% -12.6% 106155 ± 0% sched_debug.ktime
15.36 ±127% -92.3% 1.18 ±172% sched_debug.rt_rq[3]:/.rt_time
8.25 ±173% -100.0% 0.00 ±107% sched_debug.rt_rq[5]:/.rt_time
121618 ± 12% -12.6% 106321 ± 0% sched_debug.sched_clk
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/test:
nhm-white/unixbench/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/spawn
commit:
9babcd7929bc8967ae3bb6093f603b93c2f9958f
fde7d22e01aa0d252fc5c95fa11f0dac35a4dd59
9babcd7929bc8967 fde7d22e01aa0d252fc5c95fa1
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \
4164 ± 0% -25.1% 3117 ± 0% unixbench.score
320351 ± 5% -24.0% 243618 ± 15% unixbench.time.involuntary_context_switches
1.342e+08 ± 0% -27.2% 97704510 ± 0% unixbench.time.minor_page_faults
366.00 ± 0% -27.7% 264.50 ± 5% unixbench.time.percent_of_cpu_this_job_got
719.84 ± 0% -26.0% 532.61 ± 0% unixbench.time.system_time
11479915 ± 0% -27.9% 8278678 ± 1% unixbench.time.voluntary_context_switches
981.75 ± 1% +23.9% 1216 ± 5% uptime.idle
121209 ± 4% -13.3% 105117 ± 5% vmstat.system.cs
2926 ± 3% +29.0% 3776 ± 0% slabinfo.anon_vma.active_objs
2926 ± 3% +31.2% 3839 ± 0% slabinfo.anon_vma.num_objs
1677 ± 0% +10.9% 1859 ± 3% slabinfo.proc_inode_cache.num_objs
268865 ± 0% -28.5% 192157 ± 0% softirqs.RCU
325184 ± 0% -41.9% 188773 ± 0% softirqs.SCHED
394097 ± 0% -26.6% 289277 ± 0% softirqs.TIMER
49.49 ± 0% -26.9% 36.18 ± 5% turbostat.%Busy
1446 ± 0% -29.0% 1026 ± 5% turbostat.Avg_MHz
37.25 ± 0% -19.0% 30.18 ± 4% turbostat.CPU%c1
2.35 ± 4% +219.0% 7.50 ± 7% turbostat.CPU%c3
10.91 ± 4% +139.5% 26.13 ± 12% turbostat.CPU%c6
1.439e+08 ± 0% -27.2% 1.048e+08 ± 0% proc-vmstat.numa_hit
1.439e+08 ± 0% -27.2% 1.048e+08 ± 0% proc-vmstat.numa_local
83056815 ± 0% -27.2% 60467213 ± 0% proc-vmstat.pgalloc_dma32
81595838 ± 0% -26.9% 59667061 ± 1% proc-vmstat.pgalloc_normal
1.341e+08 ± 0% -27.3% 97442961 ± 0% proc-vmstat.pgfault
1.646e+08 ± 0% -27.0% 1.201e+08 ± 1% proc-vmstat.pgfree
320351 ± 5% -24.0% 243618 ± 15% time.involuntary_context_switches
1.342e+08 ± 0% -27.2% 97704510 ± 0% time.minor_page_faults
366.00 ± 0% -27.7% 264.50 ± 5% time.percent_of_cpu_this_job_got
719.84 ± 0% -26.0% 532.61 ± 0% time.system_time
11.81 ± 0% -29.9% 8.28 ± 1% time.user_time
11479915 ± 0% -27.9% 8278678 ± 1% time.voluntary_context_switches
2.548e+08 ± 0% -56.5% 1.108e+08 ± 5% cpuidle.C1-NHM.time
6328394 ± 6% -25.1% 4741683 ± 10% cpuidle.C1-NHM.usage
1.224e+08 ± 3% +43.2% 1.754e+08 ± 1% cpuidle.C1E-NHM.time
1707009 ± 1% +85.3% 3162580 ± 2% cpuidle.C1E-NHM.usage
83590493 ± 3% +138.9% 1.997e+08 ± 4% cpuidle.C3-NHM.time
214120 ± 2% +450.7% 1179146 ± 3% cpuidle.C3-NHM.usage
3.569e+08 ± 3% +62.6% 5.805e+08 ± 15% cpuidle.C6-NHM.time
134404 ± 1% +33.0% 178770 ± 3% cpuidle.C6-NHM.usage
96172 ±162% -99.8% 226.50 ± 67% latency_stats.avg.rpc_wait_bit_killable.__rpc_execute.rpc_execute.rpc_run_task.nfs4_call_sync_sequence.[nfsv4]._nfs4_do_setattr.[nfsv4].nfs4_do_setattr.[nfsv4].nfs4_do_open.[nfsv4].nfs4_atomic_open.[nfsv4].nfs_atomic_open.path_openat.do_filp_open
34142 ±108% -99.5% 162.75 ± 49% latency_stats.avg.rpc_wait_bit_killable.__rpc_execute.rpc_execute.rpc_run_task.nfs4_call_sync_sequence.[nfsv4]._nfs4_do_setattr.[nfsv4].nfs4_do_setattr.[nfsv4].nfs4_proc_setattr.[nfsv4].nfs_setattr.notify_change.chmod_common.SyS_fchmodat
65931 ±146% -99.7% 181.25 ± 52% latency_stats.avg.rpc_wait_bit_killable.__rpc_execute.rpc_execute.rpc_run_task.nfs4_call_sync_sequence.[nfsv4]._nfs4_do_setattr.[nfsv4].nfs4_do_setattr.[nfsv4].nfs4_proc_setattr.[nfsv4].nfs_setattr.notify_change.chown_common.SyS_fchownat
83879 ±154% -99.5% 425.75 ± 43% latency_stats.avg.rpc_wait_bit_killable.__rpc_wait_for_completion_task.nfs4_run_open_task.[nfsv4]._nfs4_open_and_get_state.[nfsv4].nfs4_do_open.[nfsv4].nfs4_atomic_open.[nfsv4].nfs_atomic_open.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
41163 ± 7% -77.9% 9110 ± 9% latency_stats.hits.call_rwsem_down_write_failed.copy_process._do_fork.SyS_clone.entry_SYSCALL_64_fastpath
244420 ±165% -99.9% 302.75 ± 75% latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_execute.rpc_run_task.nfs4_call_sync_sequence.[nfsv4]._nfs4_do_setattr.[nfsv4].nfs4_do_setattr.[nfsv4].nfs4_do_open.[nfsv4].nfs4_atomic_open.[nfsv4].nfs_atomic_open.path_openat.do_filp_open
34142 ±108% -99.5% 162.75 ± 49% latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_execute.rpc_run_task.nfs4_call_sync_sequence.[nfsv4]._nfs4_do_setattr.[nfsv4].nfs4_do_setattr.[nfsv4].nfs4_proc_setattr.[nfsv4].nfs_setattr.notify_change.chmod_common.SyS_fchmodat
65931 ±146% -99.7% 181.25 ± 52% latency_stats.max.rpc_wait_bit_killable.__rpc_execute.rpc_execute.rpc_run_task.nfs4_call_sync_sequence.[nfsv4]._nfs4_do_setattr.[nfsv4].nfs4_do_setattr.[nfsv4].nfs4_proc_setattr.[nfsv4].nfs_setattr.notify_change.chown_common.SyS_fchownat
190909 ±146% -99.5% 937.25 ± 84% latency_stats.max.rpc_wait_bit_killable.__rpc_wait_for_completion_task.nfs4_run_open_task.[nfsv4]._nfs4_open_and_get_state.[nfsv4].nfs4_do_open.[nfsv4].nfs4_atomic_open.[nfsv4].nfs_atomic_open.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
486429 ±166% +872.7% 4731410 ±173% latency_stats.max.wait_on_page_bit.filemap_fdatawait_range.filemap_fdatawait.sync_inodes_sb.sync_inodes_one_sb.iterate_supers.sys_sync.entry_SYSCALL_64_fastpath
85615 ± 7% -78.8% 18177 ± 6% latency_stats.sum.call_rwsem_down_write_failed.copy_process._do_fork.SyS_clone.entry_SYSCALL_64_fastpath
288518 ±162% -99.8% 679.75 ± 67% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_execute.rpc_run_task.nfs4_call_sync_sequence.[nfsv4]._nfs4_do_setattr.[nfsv4].nfs4_do_setattr.[nfsv4].nfs4_do_open.[nfsv4].nfs4_atomic_open.[nfsv4].nfs_atomic_open.path_openat.do_filp_open
34142 ±108% -99.5% 162.75 ± 49% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_execute.rpc_run_task.nfs4_call_sync_sequence.[nfsv4]._nfs4_do_setattr.[nfsv4].nfs4_do_setattr.[nfsv4].nfs4_proc_setattr.[nfsv4].nfs_setattr.notify_change.chmod_common.SyS_fchmodat
65931 ±146% -99.7% 181.25 ± 52% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_execute.rpc_run_task.nfs4_call_sync_sequence.[nfsv4]._nfs4_do_setattr.[nfsv4].nfs4_do_setattr.[nfsv4].nfs4_proc_setattr.[nfsv4].nfs_setattr.notify_change.chown_common.SyS_fchownat
335516 ±154% -99.5% 1703 ± 43% latency_stats.sum.rpc_wait_bit_killable.__rpc_wait_for_completion_task.nfs4_run_open_task.[nfsv4]._nfs4_open_and_get_state.[nfsv4].nfs4_do_open.[nfsv4].nfs4_atomic_open.[nfsv4].nfs_atomic_open.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
729856 ±167% +796.0% 6539353 ±173% latency_stats.sum.wait_on_page_bit.filemap_fdatawait_range.filemap_fdatawait.sync_inodes_sb.sync_inodes_one_sb.iterate_supers.sys_sync.entry_SYSCALL_64_fastpath
32481 ± 7% -19.1% 26264 ± 8% sched_debug.cfs_rq[0]:/.exec_clock
143.25 ± 79% -99.7% 0.50 ±173% sched_debug.cfs_rq[0]:/.load
6.00 ±115% -91.7% 0.50 ±100% sched_debug.cfs_rq[0]:/.load_avg
1721596 ± 34% +454.0% 9537654 ± 10% sched_debug.cfs_rq[0]:/.min_vruntime
55.25 ± 33% -100.0% 0.00 ± 0% sched_debug.cfs_rq[0]:/.runnable_load_avg
6.00 ±115% -91.7% 0.50 ±100% sched_debug.cfs_rq[0]:/.tg_load_avg_contrib
32739 ± 1% -28.6% 23382 ± 8% sched_debug.cfs_rq[1]:/.exec_clock
77.00 ± 22% -99.4% 0.50 ±100% sched_debug.cfs_rq[1]:/.load
2158421 ± 4% +308.7% 8820555 ± 8% sched_debug.cfs_rq[1]:/.min_vruntime
53.25 ± 18% -100.0% 0.00 ± 0% sched_debug.cfs_rq[1]:/.runnable_load_avg
31653 ± 4% -23.9% 24092 ± 7% sched_debug.cfs_rq[2]:/.exec_clock
133.50 ± 68% -98.1% 2.50 ±103% sched_debug.cfs_rq[2]:/.load
2008812 ± 9% +385.1% 9744763 ± 6% sched_debug.cfs_rq[2]:/.min_vruntime
103.00 ±100% -98.1% 2.00 ±117% sched_debug.cfs_rq[2]:/.runnable_load_avg
30993 ± 2% -23.6% 23673 ± 7% sched_debug.cfs_rq[3]:/.exec_clock
92.75 ± 42% -98.1% 1.75 ± 74% sched_debug.cfs_rq[3]:/.load
1881985 ± 4% +401.4% 9436172 ± 5% sched_debug.cfs_rq[3]:/.min_vruntime
55.50 ± 30% -99.1% 0.50 ±100% sched_debug.cfs_rq[3]:/.runnable_load_avg
24562 ± 1% -17.2% 20346 ± 4% sched_debug.cfs_rq[4]:/.exec_clock
112.25 ± 90% -86.6% 15.00 ±169% sched_debug.cfs_rq[4]:/.load
518402 ± 13% +1427.4% 7917987 ± 4% sched_debug.cfs_rq[4]:/.min_vruntime
7.50 ± 24% -56.7% 3.25 ± 13% sched_debug.cfs_rq[4]:/.nr_spread_over
105.50 ± 95% -99.5% 0.50 ±100% sched_debug.cfs_rq[4]:/.runnable_load_avg
24648 ± 1% -26.5% 18104 ± 9% sched_debug.cfs_rq[5]:/.exec_clock
57.50 ± 23% -97.4% 1.50 ±137% sched_debug.cfs_rq[5]:/.load
600129 ± 21% +1067.3% 7005079 ± 8% sched_debug.cfs_rq[5]:/.min_vruntime
4.75 ± 31% +89.5% 9.00 ± 24% sched_debug.cfs_rq[5]:/.nr_spread_over
49.50 ± 25% -89.4% 5.25 ±162% sched_debug.cfs_rq[5]:/.runnable_load_avg
24460 ± 4% -25.3% 18266 ± 10% sched_debug.cfs_rq[6]:/.exec_clock
93.25 ± 36% -97.9% 2.00 ±117% sched_debug.cfs_rq[6]:/.load
517932 ± 29% +1307.8% 7291246 ± 11% sched_debug.cfs_rq[6]:/.min_vruntime
65.00 ± 17% -99.6% 0.25 ±173% sched_debug.cfs_rq[6]:/.runnable_load_avg
73.50 ± 24% -94.1% 4.33 ± 57% sched_debug.cfs_rq[7]:/.load
499653 ± 37% +1539.6% 8192385 ± 9% sched_debug.cfs_rq[7]:/.min_vruntime
51.25 ± 23% -95.4% 2.33 ± 20% sched_debug.cfs_rq[7]:/.runnable_load_avg
83.00 ± 25% -63.9% 30.00 ± 78% sched_debug.cpu#0.cpu_load[2]
90.25 ± 14% -70.4% 26.75 ± 47% sched_debug.cpu#0.cpu_load[3]
87.25 ± 8% -76.2% 20.75 ± 31% sched_debug.cpu#0.cpu_load[4]
206.75 ±106% -99.6% 0.75 ±110% sched_debug.cpu#0.load
119.75 ± 93% -100.0% 0.00 ± 0% sched_debug.cpu#1.cpu_load[0]
93.50 ± 62% -82.1% 16.75 ±159% sched_debug.cpu#1.cpu_load[1]
90.00 ± 36% -82.5% 15.75 ±120% sched_debug.cpu#1.cpu_load[2]
87.50 ± 18% -86.3% 12.00 ± 91% sched_debug.cpu#1.cpu_load[3]
81.25 ± 12% -88.9% 9.00 ± 49% sched_debug.cpu#1.cpu_load[4]
141.25 ± 90% -99.5% 0.75 ± 57% sched_debug.cpu#1.load
119.75 ± 87% -100.0% 0.00 ± 0% sched_debug.cpu#2.cpu_load[0]
79.25 ± 40% -65.3% 27.50 ±102% sched_debug.cpu#2.cpu_load[2]
75.00 ± 30% -73.7% 19.75 ±102% sched_debug.cpu#2.cpu_load[3]
74.25 ± 21% -81.1% 14.00 ± 92% sched_debug.cpu#2.cpu_load[4]
12421 ± 20% -63.2% 4575 ± 70% sched_debug.cpu#2.curr->pid
1417299 ± 56% -53.4% 659778 ± 6% sched_debug.cpu#2.nr_switches
-5.75 ±-39% +230.4% -19.00 ±-24% sched_debug.cpu#2.nr_uninterruptible
1417359 ± 56% -53.4% 659820 ± 6% sched_debug.cpu#2.sched_count
590769 ± 62% -50.5% 292590 ± 6% sched_debug.cpu#2.sched_goidle
518107 ± 81% -70.6% 152295 ± 8% sched_debug.cpu#2.ttwu_count
364621 ±117% -92.4% 27717 ± 7% sched_debug.cpu#2.ttwu_local
56.25 ± 26% -99.6% 0.25 ±173% sched_debug.cpu#3.cpu_load[0]
64.50 ± 20% -86.8% 8.50 ±159% sched_debug.cpu#3.cpu_load[1]
71.75 ± 19% -83.3% 12.00 ±115% sched_debug.cpu#3.cpu_load[2]
73.00 ± 15% -84.6% 11.25 ± 83% sched_debug.cpu#3.cpu_load[3]
72.50 ± 13% -88.3% 8.50 ± 67% sched_debug.cpu#3.cpu_load[4]
100.75 ± 26% -98.5% 1.50 ±100% sched_debug.cpu#3.load
62712 ± 8% -9.6% 56704 ± 5% sched_debug.cpu#3.nr_load_updates
1768336 ± 47% -62.9% 655383 ± 5% sched_debug.cpu#3.nr_switches
1768398 ± 47% -62.9% 655427 ± 5% sched_debug.cpu#3.sched_count
750094 ± 51% -61.4% 289776 ± 6% sched_debug.cpu#3.sched_goidle
710243 ± 62% -76.8% 164891 ± 9% sched_debug.cpu#3.ttwu_count
563179 ± 79% -95.1% 27631 ± 6% sched_debug.cpu#3.ttwu_local
48.75 ± 24% -99.0% 0.50 ±100% sched_debug.cpu#4.cpu_load[0]
72.75 ± 17% -62.9% 27.00 ± 72% sched_debug.cpu#4.cpu_load[3]
80.50 ± 18% -70.2% 24.00 ± 69% sched_debug.cpu#4.cpu_load[4]
8619 ± 37% -65.7% 2957 ± 56% sched_debug.cpu#4.curr->pid
112.00 ± 74% -87.1% 14.50 ±169% sched_debug.cpu#4.load
37154 ± 1% +18.1% 43893 ± 1% sched_debug.cpu#4.nr_load_updates
185296 ± 2% +18.8% 220143 ± 2% sched_debug.cpu#4.sched_goidle
217547 ± 1% -16.3% 181985 ± 7% sched_debug.cpu#4.ttwu_count
111756 ± 3% -76.3% 26520 ± 11% sched_debug.cpu#4.ttwu_local
118.00 ± 75% -100.0% 0.00 ± 0% sched_debug.cpu#5.cpu_load[1]
113.50 ± 51% -99.3% 0.75 ±173% sched_debug.cpu#5.cpu_load[2]
105.75 ± 34% -96.2% 4.00 ± 88% sched_debug.cpu#5.cpu_load[3]
96.00 ± 21% -91.4% 8.25 ± 58% sched_debug.cpu#5.cpu_load[4]
65.25 ± 0% -96.9% 2.00 ± 86% sched_debug.cpu#5.load
37623 ± 2% +48.8% 55980 ± 17% sched_debug.cpu#5.nr_load_updates
361205 ± 25% +48.7% 537025 ± 2% sched_debug.cpu#6.avg_idle
133.50 ± 83% -99.8% 0.25 ±173% sched_debug.cpu#6.cpu_load[0]
89.75 ± 63% -98.6% 1.25 ±131% sched_debug.cpu#6.cpu_load[1]
75.50 ± 44% -91.4% 6.50 ± 95% sched_debug.cpu#6.cpu_load[2]
72.25 ± 25% -78.9% 15.25 ± 68% sched_debug.cpu#6.cpu_load[3]
73.50 ± 16% -73.5% 19.50 ± 66% sched_debug.cpu#6.cpu_load[4]
93.00 ± 36% -97.6% 2.25 ±101% sched_debug.cpu#6.load
1488721 ±103% -69.5% 454376 ± 11% sched_debug.cpu#6.nr_switches
1488784 ±103% -69.5% 454418 ± 11% sched_debug.cpu#6.sched_count
676524 ±114% -76.2% 161114 ± 9% sched_debug.cpu#6.ttwu_count
567800 ±138% -95.3% 26653 ± 10% sched_debug.cpu#6.ttwu_local
63.00 ± 19% -60.3% 25.00 ± 73% sched_debug.cpu#7.cpu_load[4]
136.00 ± 78% -97.6% 3.25 ± 70% sched_debug.cpu#7.load
2557263 ± 76% -78.9% 538671 ± 10% sched_debug.cpu#7.nr_switches
2557333 ± 76% -78.9% 538711 ± 10% sched_debug.cpu#7.sched_count
1214320 ± 81% -85.5% 176506 ± 8% sched_debug.cpu#7.ttwu_count
1111314 ± 90% -97.6% 26445 ± 10% sched_debug.cpu#7.ttwu_local
0.47 ± 94% +6705.5% 32.01 ± 75% sched_debug.rt_rq[1]:/.rt_time
nhm-white2: Nehalem
Memory: 4G
nhm-white: Nehalem
Memory: 6G
unixbench.score
4500 ++-------------------------------------------------------------------+
*...*..*...*...*..* *...*...*..*...*...*..*...*..*...*...*..*...*
4000 ++ : : |
3500 ++ : : |
O O O O O O: O :O O O O O O O O O O O O |
3000 ++ : : |
2500 ++ : : |
| : : |
2000 ++ : : |
1500 ++ : : |
| : : |
1000 ++ : : |
500 ++ :: |
| : |
0 ++--------------------*----------------------------------------------+
unixbench.time.system_time
800 ++--------------------------------------------------------------------+
*...*..*...*...*..* *..*...*...*...*..*...*...*..*...*...*..*...*
700 ++ : : |
600 ++ : : |
| : : |
500 O+ O O O O O: O :O O O O O O O O O O O O |
| : : |
400 ++ : : |
| : : |
300 ++ : : |
200 ++ : : |
| : : |
100 ++ : : |
| : |
0 ++--------------------*-----------------------------------------------+
unixbench.time.minor_page_faults
1.4e+08 ++-*----------*---------*------*-------------*---*-------------*--+
*. *..*. * : *. *..*...*. *...*..*. *
1.2e+08 ++ : : |
| : : |
1e+08 O+ O O O O O: O :O O O O O O O O O O O O |
| : : |
8e+07 ++ : : |
| : : |
6e+07 ++ : : |
| : : |
4e+07 ++ : : |
| : : |
2e+07 ++ :: |
| : |
0 ++-------------------*--------------------------------------------+
unixbench.time.voluntary_context_switches
1.2e+07 ++-*------------------------*--*----------------------------------*
*. *..*...*..* *. *..*...*..*...*..*...*..*...*. |
1e+07 ++ : : |
| : : |
O O O O O O: O :O O O O O O O O O O O |
8e+06 ++ : : O |
| : : |
6e+06 ++ : : |
| : : |
4e+06 ++ : : |
| : : |
| : : |
2e+06 ++ :: |
| : |
0 ++-------------------*--------------------------------------------+
time.user_time
12 *+--*--*---*---*---*------*---*---*--*---*---*---*--*---*---*---*--*---*
| : : |
10 ++ : : |
| : : |
O O O O O: O :O O O O O O O O O O |
8 ++ O : : O O |
| : : |
6 ++ : : |
| : : |
4 ++ : : |
| : : |
| : : |
2 ++ :: |
| : |
0 ++--------------------*------------------------------------------------+
time.system_time
800 ++--------------------------------------------------------------------+
*...*..*...*...*..* *..*...*...*...*..*...*...*..*...*...*..*...*
700 ++ : : |
600 ++ : : |
| : : |
500 O+ O O O O O: O :O O O O O O O O O O O O |
| : : |
400 ++ : : |
| : : |
300 ++ : : |
200 ++ : : |
| : : |
100 ++ : : |
| : |
0 ++--------------------*-----------------------------------------------+
time.minor_page_faults
1.4e+08 ++-*----------*---------*------*-------------*---*-------------*--+
*. *..*. * : *. *..*...*. *...*..*. *
1.2e+08 ++ : : |
| : : |
1e+08 O+ O O O O O: O :O O O O O O O O O O O O |
| : : |
8e+07 ++ : : |
| : : |
6e+07 ++ : : |
| : : |
4e+07 ++ : : |
| : : |
2e+07 ++ :: |
| : |
0 ++-------------------*--------------------------------------------+
time.voluntary_context_switches
1.2e+07 ++-*------------------------*--*----------------------------------*
*. *..*...*..* *. *..*...*..*...*..*...*..*...*. |
1e+07 ++ : : |
| : : |
O O O O O O: O :O O O O O O O O O O O |
8e+06 ++ : : O |
| : : |
6e+06 ++ : : |
| : : |
4e+06 ++ : : |
| : : |
| : : |
2e+06 ++ :: |
| : |
0 ++-------------------*--------------------------------------------+
proc-vmstat.numa_hit
1.6e+08 ++----------------------------------------------------------------+
| .*... ..*..* *...*..*...*..*...*.. ..*..*...*..*...*..*
1.4e+08 *+ *..*. : : *. |
1.2e+08 ++ : : |
| : : |
1e+08 O+ O O O O O: O :O O O O O O O O O O O O |
| : : |
8e+07 ++ : : |
| : : |
6e+07 ++ : : |
4e+07 ++ : : |
| : : |
2e+07 ++ :: |
| : |
0 ++-------------------*--------------------------------------------+
proc-vmstat.numa_local
1.6e+08 ++----------------------------------------------------------------+
| .*... ..*..* *...*..*...*..*...*.. ..*..*...*..*...*..*
1.4e+08 *+ *..*. : : *. |
1.2e+08 ++ : : |
| : : |
1e+08 O+ O O O O O: O :O O O O O O O O O O O O |
| : : |
8e+07 ++ : : |
| : : |
6e+07 ++ : : |
4e+07 ++ : : |
| : : |
2e+07 ++ :: |
| : |
0 ++-------------------*--------------------------------------------+
proc-vmstat.pgalloc_dma32
9e+07 ++------------------------------------------------------------------+
*...*..*...*..*...* *...*..*...*..*...*...*..*...*..*...*..*...*
8e+07 ++ : : |
7e+07 ++ : : |
O O : :O |
6e+07 ++ O O O O: O : O O O O O O O O O O O |
5e+07 ++ : : |
| : : |
4e+07 ++ : : |
3e+07 ++ : : |
| : : |
2e+07 ++ : : |
1e+07 ++ :: |
| : |
0 ++-------------------*----------------------------------------------+
proc-vmstat.pgalloc_normal
9e+07 ++------------------------------------------------------------------+
*...*.. ..*..*...* *...*..*...*..*...*...*..*...*..*...*..*...*
8e+07 ++ *. : : |
7e+07 ++ : : |
| : : |
6e+07 O+ O O O O O: O :O O O O O O O O O O O O |
5e+07 ++ : : |
| : : |
4e+07 ++ : : |
3e+07 ++ : : |
| : : |
2e+07 ++ : : |
1e+07 ++ :: |
| : |
0 ++-------------------*----------------------------------------------+
proc-vmstat.pgfree
1.8e+08 ++----------------------------------------------------------------+
*..*...*..*...*..* *...*..*...*..*...*..*...*..*...*..*...*..*
1.6e+08 ++ : : |
1.4e+08 ++ : : |
| : : |
1.2e+08 O+ O O O O O: O :O O O O O O O O O O O O |
1e+08 ++ : : |
| : : |
8e+07 ++ : : |
6e+07 ++ : : |
| : : |
4e+07 ++ : : |
2e+07 ++ :: |
| : |
0 ++-------------------*--------------------------------------------+
proc-vmstat.pgfault
1.4e+08 ++-*---------------------------*-----------------*---------*------+
*. *..*...*..* *...*. *..*...*..*. *...*. *..*
1.2e+08 ++ : : |
| : : |
1e+08 O+ O O O O O: O :O O O O O O O O O O O O |
| : : |
8e+07 ++ : : |
| : : |
6e+07 ++ : : |
| : : |
4e+07 ++ : : |
| : : |
2e+07 ++ :: |
| : |
0 ++-------------------*--------------------------------------------+
slabinfo.anon_vma.num_objs
4500 ++-------------------------------------------------------------------+
O O |
4000 ++ O O O O O O O O O O O O O O O O O |
3500 ++ |
*... .*... .*.. *... |
3000 ++ *..*...*...*..* *...*... .. *. . .. *...*..*...*
2500 ++ : : *..* * |
| : : |
2000 ++ : : |
1500 ++ : : |
| : : |
1000 ++ : : |
500 ++ : : |
| : |
0 ++--------------------*----------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang
[lkp] [x86/setup] f5f3497cad: BUG: kernel boot crashed
by kernel test robot
FYI, we noticed the below changes on
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit f5f3497cad8c8416a74b9aaceb127908755d020a ("x86/setup: Extend low identity map to cover whole kernel range")
+------------------------------------------------+------------+------------+
| | 8a53554e12 | f5f3497cad |
+------------------------------------------------+------------+------------+
| boot_successes | 19 | 10 |
| boot_failures | 2 | 9 |
| IP-Config:Auto-configuration_of_network_failed | 2 | 2 |
| BUG:kernel_boot_crashed | 0 | 7 |
+------------------------------------------------+------------+------------+
[    0.053410] smpboot: CPU0: GenuineIntel QEMU Virtual CPU version 2.4.0 (family: 0x6, model: 0x6, stepping: 0x3)
[    0.056666] Performance Events: no PMU driver, software events only.
[    0.060520] CPU 1 irqstacks, hard=8a832000 soft=8a834000
[    0.061605] x86: Booting SMP configuration:
[    0.062445] .... node #0, CPUs:      #1
Elapsed time: 10
BUG: kernel boot crashed
qemu-system-i386 -enable-kvm -kernel /pkg/linux/i386-randconfig-i1-201543/gcc-4.9/f5f3497cad8c8416a74b9aaceb127908755d020a/vmlinuz-4.3.0-rc5-00039-gf5f3497 -append 'root=/dev/ram0 user=lkp job=/lkp/scheduled/vm-kbuild-yocto-i386-64/bisect_boot-1-yocto-minimal-i386.cgz-i386-randconfig-i1-201543-f5f3497cad8c8416a74b9aaceb127908755d020a-20151028-63408-uxh2ow-0.yaml ARCH=i386 kconfig=i386-randconfig-i1-201543 branch=linus/master commit=f5f3497cad8c8416a74b9aaceb127908755d020a BOOT_IMAGE=/pkg/linux/i386-randconfig-i1-201543/gcc-4.9/f5f3497cad8c8416a74b9aaceb127908755d020a/vmlinuz-4.3.0-rc5-00039-gf5f3497 max_uptime=600 RESULT_ROOT=/result/boot/1/vm-kbuild-yocto-i386/yocto-minimal-i386.cgz/i386-randconfig-i1-201543/gcc-4.9/f5f3497cad8c8416a74b9aaceb127908755d020a/0 LKP_SERVER=inn earlyprintk=ttyS0,115200 systemd.log_level=err debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal rw ip=::::vm-kbuild-yocto-i386-64::dhcp drbd.minor_count=8' -initrd /fs/sdh1/initrd-vm-kbuild-yocto-i386-64 -m 320 -smp 2 -device e1000,netdev=net0 -netdev user,id=net0 -boot order=nc -no-reboot -watchdog i6300esb -rtc base=localtime -drive file=/fs/sdh1/disk0-vm-kbuild-yocto-i386-64,media=disk,if=virtio -pidfile /dev/shm/kboot/pid-vm-kbuild-yocto-i386-64 -serial file:/dev/shm/kboot/serial-vm-kbuild-yocto-i386-64 -daemonize -display none -monitor null
Thanks,
Ying Huang
[lkp] [xfrm] 37ec89f091: BUG: sleeping function called from invalid context at kernel/locking/mutex.c:97
by kernel test robot
FYI, we noticed the below changes on
https://github.com/0day-ci/linux dan-streetman-canonical-com/xfrm-dst_entries_init-per-net-dst_ops/20151028-001910
commit 37ec89f091076963ebaa125622a564659f05b00a ("xfrm: dst_entries_init() per-net dst_ops")
+-----------------------------------------------------------------------------+------------+------------+
| | ca064bd893 | 37ec89f091 |
+-----------------------------------------------------------------------------+------------+------------+
| boot_successes | 15 | 18 |
| boot_failures | 0 | 11 |
| BUG:sleeping_function_called_from_invalid_context_at_kernel/locking/mutex.c | 0 | 10 |
| backtrace:__alloc_percpu_gfp | 0 | 8 |
| backtrace:__percpu_counter_init | 0 | 8 |
| backtrace:xfrm_net_init | 0 | 8 |
| backtrace:ops_init | 0 | 8 |
| backtrace:unshare_nsproxy_namespaces | 0 | 8 |
| backtrace:SyS_unshare | 0 | 8 |
| BUG:sleeping_function_called_from_invalid_context_at_mm/slub.c | 0 | 1 |
+-----------------------------------------------------------------------------+------------+------------+
[ 13.720801] sock: sock_set_timeout: `trinity-main' (pid 490) tries to set negative timeout
[ 13.729720] sock: sock_set_timeout: `trinity-main' (pid 490) tries to set negative timeout
[ 14.100559] BUG: sleeping function called from invalid context at kernel/locking/mutex.c:97
[ 14.103070] in_atomic(): 1, irqs_disabled(): 0, pid: 6133, name: trinity-c0
[ 14.104378] CPU: 0 PID: 6133 Comm: trinity-c0 Not tainted 4.2.0-06114-g37ec89f #1
[ 14.106168] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 14.108126] 0000000000000061 ffff88001352fcc8 ffffffff81894767 ffffffff81b733b8
[ 14.110213] ffff88001352fcd8 ffffffff81096fb8 ffff88001352fd00 ffffffff81097052
[ 14.112306] ffffffff81d1ff00 0000000000000000 0000000000000060 ffff88001352fd18
[ 14.114392] Call Trace:
[ 14.115183] [<ffffffff81894767>] dump_stack+0x4b/0x63
[ 14.116292] [<ffffffff81096fb8>] ___might_sleep+0xe8/0x130
[ 14.117447] [<ffffffff81097052>] __might_sleep+0x52/0xb0
[ 14.118648] [<ffffffff81899e00>] mutex_lock+0x20/0x50
[ 14.119755] [<ffffffff81184dab>] pcpu_alloc+0x47b/0x640
[ 14.120875] [<ffffffff81184f82>] __alloc_percpu_gfp+0x12/0x20
[ 14.122059] [<ffffffff8141b2b3>] __percpu_counter_init+0x23/0x80
[ 14.123270] [<ffffffff817f71f5>] xfrm_net_init+0x1f5/0x390
[ 14.124436] [<ffffffff81758001>] ops_init+0x41/0x120
[ 14.125525] [<ffffffff81758161>] setup_net+0x81/0x120
[ 14.126630] [<ffffffff81758999>] copy_net_ns+0x79/0x110
[ 14.127755] [<ffffffff81091969>] create_new_namespaces+0xf9/0x190
[ 14.128989] [<ffffffff81091b7a>] unshare_nsproxy_namespaces+0x5a/0xb0
[ 14.130258] [<ffffffff81071d68>] SyS_unshare+0x1a8/0x370
[ 14.131388] [<ffffffff8189edea>] ia32_do_call+0x1b/0x25
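The call trace decodes cleanly: percpu_counter_init() passes GFP_KERNEL down through __alloc_percpu_gfp() to pcpu_alloc(), which takes pcpu_alloc_mutex, a sleeping lock, so it must not run while in_atomic() is true. Below is a minimal sketch of that pattern and of the usual reordering fix; the demo_* names are hypothetical and this is not the actual xfrm code, which would need checking against the patch under test:

#include <linux/spinlock.h>
#include <linux/percpu_counter.h>

static DEFINE_SPINLOCK(demo_lock);              /* hypothetical lock */
static struct percpu_counter demo_counter;      /* hypothetical counter */

static int demo_init_buggy(void)
{
	int ret;

	spin_lock_bh(&demo_lock);       /* raises preempt_count: in_atomic() is now true */
	/*
	 * percpu_counter_init() -> __alloc_percpu_gfp() -> pcpu_alloc()
	 * -> mutex_lock(&pcpu_alloc_mutex), which may sleep: exactly
	 * the splat in the trace above.
	 */
	ret = percpu_counter_init(&demo_counter, 0, GFP_KERNEL);
	spin_unlock_bh(&demo_lock);
	return ret;
}

static int demo_init_fixed(void)
{
	/* Do the sleeping allocation first, outside any atomic section... */
	int ret = percpu_counter_init(&demo_counter, 0, GFP_KERNEL);

	if (ret)
		return ret;
	/* ...then take the lock only around the work that needs it. */
	spin_lock_bh(&demo_lock);
	/* publish state guarded by demo_lock here */
	spin_unlock_bh(&demo_lock);
	return 0;
}

This is only a sketch of the locking rule the splat enforces; the real fix belongs wherever the patch enters atomic context before calling dst_entries_init().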
Thanks,
Ying Huang