Re: [LKP] [rcu] kernel BUG at include/linux/pagemap.h:149!
by Frederic Weisbecker
On Fri, Sep 11, 2015 at 10:19:47AM +0800, Boqun Feng wrote:
> Subject: [PATCH 01/27] rcu: Don't disable preemption for Tiny and Tree RCU
> readers
>
> Because preempt_disable() maps to barrier() for non-debug builds,
> it forces the compiler to spill and reload registers. Because Tree
> RCU and Tiny RCU now only appear in CONFIG_PREEMPT=n builds, these
> barrier() instances generate needless extra code for each instance of
> rcu_read_lock() and rcu_read_unlock(). This extra code slows down Tree
> RCU and bloats Tiny RCU.
>
> This commit therefore removes the preempt_disable() and preempt_enable()
> from the non-preemptible implementations of __rcu_read_lock() and
> __rcu_read_unlock(), respectively.
>
> For debug purposes, preempt_disable() and preempt_enable() are still
> kept if CONFIG_PREEMPT_COUNT=y, which makes the detection of sleeping
> inside atomic sections still work in non-preemptible kernels.
>
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> ---
> include/linux/rcupdate.h | 6 ++++--
> include/linux/rcutiny.h | 1 +
> kernel/rcu/tree.c | 9 +++++++++
> 3 files changed, 14 insertions(+), 2 deletions(-)
>
> diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> index d63bb77..6c3cece 100644
> --- a/include/linux/rcupdate.h
> +++ b/include/linux/rcupdate.h
> @@ -297,12 +297,14 @@ void synchronize_rcu(void);
>
> static inline void __rcu_read_lock(void)
> {
> - preempt_disable();
> + if (IS_ENABLED(CONFIG_PREEMPT_COUNT))
> + preempt_disable();
preempt_disable() is a no-op when !CONFIG_PREEMPT_COUNT, right?
Or rather it's a barrier(), which is anyway implied by rcu_read_lock().
So perhaps we can get rid of the IS_ENABLED() check?
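For reference, this is the expansion under discussion, trimmed from
include/linux/preempt.h of this era (shape, not verbatim):

#ifdef CONFIG_PREEMPT_COUNT
#define preempt_disable() \
do { \
	preempt_count_inc(); \
	barrier(); \
} while (0)
#else
/*
 * !CONFIG_PREEMPT_COUNT: there is no counter to maintain, so only the
 * compiler barrier remains -- the register spill/reload cost that the
 * patch description refers to.
 */
#define preempt_disable()	barrier()
#define preempt_enable()	barrier()
#endif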
Extending the 0-day system with syzkaller?
by David Drysdale
Hi Fengguang / LKP-folk,
Quick question -- how easy is it to add extra builds/tests/checks to
your marvellous 0-day kbuild system?
The reason I ask is that I've recently been exploring syzkaller [1],
which is a system call fuzzer written by some of my colleagues here at
Google (cc'ed). Although it's fairly new, it has uncovered a bunch of
kernel bugs already [2] so I wondered if it might be a good candidate
for inclusion in the 0-day checks at some point.
(As an aside, I'm in the process of writing an article about syzkaller
for LWN, which might also expose it to more folk.)
What do you think?
Thanks,
David
[1] https://github.com/google/syzkaller
[2] https://github.com/google/syzkaller/wiki/Found-Bugs
[slab] a1fd55538c: WARNING: CPU: 0 PID: 0 at kernel/locking/lockdep.c:2601 trace_hardirqs_on_caller()
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit a1fd55538cae9f411059c9b067a3d48c41aa876b
Author: Jesper Dangaard Brouer <brouer@redhat.com>
AuthorDate: Thu Jan 28 09:47:16 2016 +1100
Commit: Stephen Rothwell <sfr@canb.auug.org.au>
CommitDate: Thu Jan 28 09:47:16 2016 +1100
slab: use slab_pre_alloc_hook in SLAB allocator shared with SLUB
Deduplicate code in SLAB allocator functions slab_alloc() and
slab_alloc_node() by using the slab_pre_alloc_hook() call, which is now
shared between SLUB and SLAB.
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Vladimir Davydov <vdavydov@virtuozzo.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
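For context, the shared hook the commit switches SLAB over to looks
roughly like this -- a sketch of the hook SLUB already used at the time
(trimmed, not verbatim), which the patch makes common so that
slab_alloc() and slab_alloc_node() can call it too:

static inline struct kmem_cache *slab_pre_alloc_hook(struct kmem_cache *s,
						     gfp_t flags)
{
	flags &= gfp_allowed_mask;
	lockdep_trace_alloc(flags);	/* lockdep/IRQ-tracing entry point */
	might_sleep_if(gfpflags_allow_blocking(flags));

	if (should_failslab(s->object_size, flags, s->flags))
		return NULL;

	/* memcg may substitute a per-cgroup cache */
	return memcg_kmem_get_cache(s, flags);
}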
+-----------------------------------------------------------------+------------+------------+---------------+
| | 074b6f53c3 | a1fd55538c | next-20160128 |
+-----------------------------------------------------------------+------------+------------+---------------+
| boot_successes | 40 | 0 | 0 |
| boot_failures | 52 | 26 | 19 |
| Kernel_panic-not_syncing:Attempted_to_kill_init!exitcode= | 52 | 26 | 14 |
| WARNING:at_kernel/locking/lockdep.c:#trace_hardirqs_on_caller() | 0 | 26 | 19 |
| backtrace:pcpu_mem_zalloc | 0 | 26 | 19 |
| backtrace:percpu_init_late | 0 | 26 | 19 |
| IP-Config:Auto-configuration_of_network_failed | 0 | 0 | 2 |
+-----------------------------------------------------------------+------------+------------+---------------+
[ 0.000000] Inode-cache hash table entries: 16384 (order: 5, 131072 bytes)
[ 0.000000] Memory: 194224K/261624K available (10816K kernel code, 5060K rwdata, 6628K rodata, 988K init, 33076K bss, 67400K reserved, 0K cma-reserved)
[ 0.000000] ------------[ cut here ]------------
[ 0.000000] WARNING: CPU: 0 PID: 0 at kernel/locking/lockdep.c:2601 trace_hardirqs_on_caller+0x341/0x380()
[ 0.000000] DEBUG_LOCKS_WARN_ON(unlikely(early_boot_irqs_disabled))
[ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 4.5.0-rc1-00069-ga1fd555 #1
[ 0.000000] ffffffff82403dd8 ffffffff82403d90 ffffffff813b937d ffffffff82403dc8
[ 0.000000] ffffffff810eb4d3 ffffffff812617cc 0000000000000001 ffff88000fcc50a8
[ 0.000000] ffff8800000984c0 00000000024000c0 ffffffff82403e28 ffffffff810eb5c7
[ 0.000000] Call Trace:
[ 0.000000] [<ffffffff813b937d>] dump_stack+0x27/0x3a
[ 0.000000] [<ffffffff810eb4d3>] warn_slowpath_common+0xa3/0x100
[ 0.000000] [<ffffffff812617cc>] ? cache_alloc_refill+0x7ac/0x910
[ 0.000000] [<ffffffff810eb5c7>] warn_slowpath_fmt+0x57/0x70
[ 0.000000] [<ffffffff81143e61>] trace_hardirqs_on_caller+0x341/0x380
[ 0.000000] [<ffffffff81143ebd>] trace_hardirqs_on+0x1d/0x30
[ 0.000000] [<ffffffff812617cc>] cache_alloc_refill+0x7ac/0x910
[ 0.000000] [<ffffffff8121df6a>] ? pcpu_mem_zalloc+0x5a/0xc0
[ 0.000000] [<ffffffff81261fce>] __kmalloc+0x24e/0x440
[ 0.000000] [<ffffffff8121df6a>] pcpu_mem_zalloc+0x5a/0xc0
[ 0.000000] [<ffffffff829213aa>] percpu_init_late+0x4d/0xbb
[ 0.000000] [<ffffffff828f41c9>] start_kernel+0x30b/0x6e1
[ 0.000000] [<ffffffff828f3120>] ? early_idt_handler_array+0x120/0x120
[ 0.000000] [<ffffffff828f332f>] x86_64_start_reservations+0x46/0x4f
[ 0.000000] [<ffffffff828f34d4>] x86_64_start_kernel+0x19c/0x1b2
[ 0.000000] ---[ end trace cb88537fdc8fa200 ]---
[ 0.000000] Running RCU self tests
git bisect start 888c8375131656144c1605071eab2eb6ac49abc3 92e963f50fc74041b5e9e744c330dca48e04f08d --
git bisect good f664e02a71d85691fc33f116bae3eb7f0debd194 # 17:19 17+ 13 Merge remote-tracking branch 'kbuild/for-next'
git bisect good c7173552fb5efc15dd092d3a90b5d6ad0f3d9421 # 17:35 17+ 2 Merge remote-tracking branch 'audit/next'
git bisect good bd605d2e3cc724606fa7c0fd3d5d90276f07e979 # 17:47 17+ 2 Merge remote-tracking branch 'extcon/extcon-next'
git bisect good 108776431802ced1ca8ba38a9765ef81c48513de # 18:06 17+ 5 Merge remote-tracking branch 'llvmlinux/for-next'
git bisect good 56f1389517d2470a8abdb661c97d6ef640ca8cf3 # 18:30 17+ 3 Merge remote-tracking branch 'coresight/next'
git bisect bad 3cb196d8ee7f94b78c3d609bb91f5b175b3841d8 # 19:17 0- 8 Merge branch 'akpm-current/current'
git bisect good 49d5623e2407b26b532ca24f49d778b5b6fedb22 # 19:48 22+ 0 Merge remote-tracking branch 'rtc/rtc-next'
git bisect bad 8ccfb34d7450299714a9a590a764934397a818c6 # 20:06 0- 22 mm: filemap: avoid unnecessary calls to lock_page when waiting for IO to complete during a read
git bisect good d9dc8f2de4f863bef9a303b2cbae0bbd1c9dfceb # 20:32 22+ 22 ocfs2: add feature document for online file check
git bisect bad ebea6ceb9754b02bcab987af96c64782c665aa91 # 20:56 0- 18 mm/slab: remove object status buffer for DEBUG_SLAB_LEAK
git bisect bad 24d88722c03b13ef63b3b631f81454a63ac26cc4 # 21:06 0- 22 mm: kmemcheck skip object if slab allocation failed
git bisect good 1fc2d06fe0cfca10e571e2e444a4a37693495502 # 21:26 22+ 22 ocfs2/dlm: move lock to the tail of grant queue while doing in-place convert
git bisect good 3355ee84b3d96c7c30923d0bba228b0b7aa380d2 # 21:33 21+ 8 slub: cleanup code for kmem cgroup support to kmem_cache_free_bulk
git bisect good 074b6f53c320a81e975c0b5dd79daa5e78a711ba # 21:39 22+ 24 mm: fault-inject take over bootstrap kmem_cache check
git bisect bad a1fd55538cae9f411059c9b067a3d48c41aa876b # 21:49 0- 26 slab: use slab_pre_alloc_hook in SLAB allocator shared with SLUB
# first bad commit: [a1fd55538cae9f411059c9b067a3d48c41aa876b] slab: use slab_pre_alloc_hook in SLAB allocator shared with SLUB
git bisect good 074b6f53c320a81e975c0b5dd79daa5e78a711ba # 21:53 66+ 52 mm: fault-inject take over bootstrap kmem_cache check
# extra tests with DEBUG_INFO
git bisect bad a1fd55538cae9f411059c9b067a3d48c41aa876b # 22:00 0- 36 slab: use slab_pre_alloc_hook in SLAB allocator shared with SLUB
# extra tests on HEAD of linux-next/master
git bisect bad 888c8375131656144c1605071eab2eb6ac49abc3 # 22:00 0- 19 Add linux-next specific files for 20160128
# extra tests on tree/branch linux-next/master
git bisect bad 888c8375131656144c1605071eab2eb6ac49abc3 # 22:00 0- 19 Add linux-next specific files for 20160128
# extra tests with first bad commit reverted
git bisect good fea4cd9180f321dd12ec9a7932a9bfb32bfaf4c4 # 22:32 66+ 30 Revert "slab: use slab_pre_alloc_hook in SLAB allocator shared with SLUB"
# extra tests on tree/branch linus/master
git bisect good 03c21cb775a313f1ff19be59c5d02df3e3526471 # 22:52 65+ 67 Merge tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost
# extra tests on tree/branch linux-next/master
git bisect bad 888c8375131656144c1605071eab2eb6ac49abc3 # 22:52 0- 19 Add linux-next specific files for 20160128
This script may reproduce the error.
----------------------------------------------------------------------------
#!/bin/bash
kernel=$1
initrd=yocto-minimal-x86_64.cgz
wget --no-clobber https://github.com/fengguang/reproduce-kernel-bug/raw/master/initrd/$initrd
kvm=(
qemu-system-x86_64
-enable-kvm
-cpu Haswell,+smep,+smap
-kernel $kernel
-initrd $initrd
-m 256
-smp 1
-device e1000,netdev=net0
-netdev user,id=net0
-boot order=nc
-no-reboot
-watchdog i6300esb
-rtc base=localtime
-serial stdio
-display none
-monitor null
)
append=(
hung_task_panic=1
earlyprintk=ttyS0,115200
systemd.log_level=err
debug
apic=debug
sysrq_always_enabled
rcupdate.rcu_cpu_stall_timeout=100
panic=-1
softlockup_panic=1
nmi_watchdog=panic
oops=panic
load_ramdisk=2
prompt_ramdisk=0
console=ttyS0,115200
console=tty0
vga=normal
root=/dev/ram0
rw
drbd.minor_count=8
)
"${kvm[@]}" --append "${append[*]}"
----------------------------------------------------------------------------
---
0-DAY kernel test infrastructure              Open Source Technology Center
https://lists.01.org/pipermail/lkp            Intel Corporation
[i2c-mux] 3489746df0: WARNING: CPU: 0 PID: 1 at drivers/base/devres.c:888 devm_kfree()
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://github.com/0day-ci/linux Peter-Rosin/i2c-mux-cleanup-and-locking-update/20160106-000205
commit 3489746df00f2f18b1a6dc2ba4de263e980cfbb0
Author: Peter Rosin <peda@axentia.se>
AuthorDate: Tue Jan 5 16:57:13 2016 +0100
Commit: 0day robot <fengguang.wu@intel.com>
CommitDate: Wed Jan 6 00:02:09 2016 +0800
i2c-mux: move the slave side adapter management to i2c_mux_core
All muxes have slave side adapters, many have some arbitrary number of
them. Handle this in the mux core, so that drivers are simplified.
Add i2c_mux_reserve_adapter that can be used when it is known in advance
how many child adapters are to be added. This avoids reallocating
memory.
Drop i2c_del_mux_adapter and replace it with i2c_del_mux_adapters, since
no mux driver is dynamically deleting individual child adapters anyway.
Signed-off-by: Peter Rosin <peda@axentia.se>
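For driver writers following the series, usage would look roughly like
the sketch below, inferred from the commit message and the unittest
backtrace further down. The prototypes are guesses at the series' API
shape, not a stable interface; i2c_mux_add_adapter and the chan
parameter in particular are hypothetical:

struct i2c_mux_core;

/* assumed per the commit message / backtrace: */
int i2c_mux_reserve_adapters(struct i2c_mux_core *muxc, int num);
int i2c_mux_add_adapter(struct i2c_mux_core *muxc, unsigned int chan); /* hypothetical */
void i2c_del_mux_adapters(struct i2c_mux_core *muxc);

static int mymux_add_children(struct i2c_mux_core *muxc, int nchildren)
{
	int i, ret;

	/* reserve up front so the core never has to reallocate */
	ret = i2c_mux_reserve_adapters(muxc, nchildren);
	if (ret)
		return ret;

	for (i = 0; i < nchildren; i++) {
		ret = i2c_mux_add_adapter(muxc, i);
		if (ret) {
			/* the core tears down all children at once */
			i2c_del_mux_adapters(muxc);
			return ret;
		}
	}
	return 0;
}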
+------------------------------------------------------------------+------------+------------+------------+
| | f3ce0531d6 | 3489746df0 | 328de02ee7 |
+------------------------------------------------------------------+------------+------------+------------+
| boot_successes | 16 | 0 | 0 |
| boot_failures | 76 | 22 | 43 |
| Out_of_memory:Kill_process | 46 | | |
| Kernel_panic-not_syncing:Out_of_memory_and_no_killable_processes | 2 | | |
| Kernel_panic-not_syncing:Attempted_to_kill_init!exitcode= | 30 | 21 | 13 |
| WARNING:at_drivers/base/devres.c:#devm_kfree() | 0 | 22 | 43 |
| backtrace:of_unittest | 0 | 22 | 43 |
| backtrace:kernel_init_freeable | 0 | 22 | 43 |
+------------------------------------------------------------------+------------+------------+------------+
[ 17.872856] overlay_removal_is_ok: overlay #5 is not topmost
[ 17.876661] of_overlay_destroy: removal check failed for overlay #5
[ 17.881163] ------------[ cut here ]------------
[ 17.882437] WARNING: CPU: 0 PID: 1 at drivers/base/devres.c:888 devm_kfree+0x61/0x76()
[ 17.884828] CPU: 0 PID: 1 Comm: swapper Not tainted 4.4.0-rc3-00086-g3489746 #1
[ 17.886694] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 17.889121] 0000000000000000 ffff88000fa3bb60 ffffffff8177aefa ffff88000fa3bb98
[ 17.891562] ffffffff81119965 ffffffff81d0a80c 0000000000000001 00000000fffffffe
[ 17.893975] ffff88000c7e1698 ffff88000c757820 ffff88000fa3bba8 ffffffff81119a50
[ 17.899218] Call Trace:
[ 17.900013] [<ffffffff8177aefa>] dump_stack+0x19/0x1b
[ 17.901286] [<ffffffff81119965>] warn_slowpath_common+0xb6/0xcf
[ 17.902684] [<ffffffff81d0a80c>] ? devm_kfree+0x61/0x76
[ 17.903953] [<ffffffff81119a50>] warn_slowpath_null+0x1a/0x1c
[ 17.905343] [<ffffffff81d0a80c>] devm_kfree+0x61/0x76
[ 17.906657] [<ffffffff8252ad4c>] i2c_mux_reserve_adapters+0xae/0xbf
[ 17.908324] [<ffffffff82842105>] unittest_i2c_mux_probe+0x1af/0x234
[ 17.911148] [<ffffffff82841f56>] ? unittest_i2c_mux_remove+0x90/0x90
[ 17.912678] [<ffffffff82526726>] i2c_device_probe+0x370/0x3b9
[ 17.914076] [<ffffffff81d05602>] driver_probe_device+0x2c5/0x6db
[ 17.915554] [<ffffffff81d05ab7>] __driver_attach+0x9f/0xd5
[ 17.917142] [<ffffffff81d05a18>] ? driver_probe_device+0x6db/0x6db
[ 17.918704] [<ffffffff81d02f3a>] bus_for_each_dev+0x91/0xa9
[ 17.920106] [<ffffffff81d05dc4>] driver_attach+0x1e/0x20
[ 17.921441] [<ffffffff81d03bd7>] bus_add_driver+0x1f7/0x36d
[ 17.922866] [<ffffffff81d06a93>] driver_register+0x10d/0x17f
[ 17.924278] [<ffffffff82528714>] i2c_register_driver+0xb1/0x147
[ 17.925762] [<ffffffff84f32561>] of_unittest_overlay+0xf05/0x1304
[ 17.950459] [<ffffffff84f339e8>] of_unittest+0x1088/0x10b6
[ 17.951884] [<ffffffff817b1043>] ? debug_check_no_obj_freed+0x26/0x28
[ 17.953463] [<ffffffff8125a594>] ? kfree+0x2c7/0x307
[ 17.954749] [<ffffffff84e87520>] ? do_one_initcall+0xf8/0x26c
[ 17.956209] [<ffffffff84f32960>] ? of_unittest_overlay+0x1304/0x1304
[ 17.958778] [<ffffffff84e875a1>] do_one_initcall+0x179/0x26c
[ 17.960190] [<ffffffff84e877f7>] kernel_init_freeable+0x163/0x22b
[ 17.961705] [<ffffffff82e3cc03>] ? rest_init+0x7a/0x7a
[ 17.963019] [<ffffffff82e3cc11>] kernel_init+0xe/0x13f
[ 17.964374] [<ffffffff82e4bfcf>] ret_from_fork+0x3f/0x70
[ 17.965704] [<ffffffff82e3cc03>] ? rest_init+0x7a/0x7a
[ 17.967146] ---[ end trace 3b2b863e959dd3c5 ]---
[ 17.968973] i2c i2c-0: Added multiplexed i2c bus 1
git bisect start 328de02ee7475df3b3e9849542e8d15adaaaa894 168309855a7d1e16db751e9c647119fe2d2dc878 --
git bisect bad 6115822e890d81471eda3c35bf203bf0e1cf2726 # 01:23 0- 18 Merge 'linux-review/Janusz-Dziedzic/mac80211-check-requested-flags-in-ieee80211_tx_prepare_skb/20160105-183905' into devel-spot-201601060011
git bisect bad 579e9f8621453bc381b04a6ad787e9c113b1eeaa # 01:34 0- 2 Merge 'peterz-queue/x86/mm' into devel-spot-201601060011
git bisect bad c13a4bf22a1ca649904deec37080e617fabc661f # 01:47 0- 17 Merge 'linux-review/Adam-Thomson/ASoC-da7219-Correct-BCLK-inversion-for-DSP-DAI-format-mode/20160105-230859' into devel-spot-201601060011
git bisect bad 8cac6f8fb02d6f55c25415e705410fc070a588ec # 01:55 0- 6 Merge 'cgroup/for-4.5' into devel-spot-201601060011
git bisect bad 4dd116948235f2da4e8d88f7df549b8311b16358 # 02:10 0- 4 Merge 'linux-review/Craig-Gallek/soreuseport-change-consume_skb-to-kfree_skb-in-error-case/20160105-235855' into devel-spot-201601060011
git bisect good bdca81bda03d112d0a2f7e972a6da677ef8bedf3 # 02:19 22+ 24 0day base guard for 'devel-spot-201601060011'
git bisect bad b5f2c558c3f81a5ddb89b1b34089d3a4357d05a1 # 02:26 0- 7 Merge 'linux-review/Peter-Rosin/i2c-mux-cleanup-and-locking-update/20160106-000205' into devel-spot-201601060011
git bisect good 5398b7ef6cdbc88581457b26b924081fc53d1de8 # 02:34 22+ 22 i2c: xlr: add interrupt support for Sigma Designs chips
git bisect good fcd0f469776ea6a2bef9c8ee535053a644c3d01a # 02:50 21+ 18 Merge branch 'i2c/for-4.5' into i2c/for-next
git bisect good bdcadbaaf804f56ee4060aada6c0162c91a540ec # 02:59 22+ 15 Merge branch 'i2c/for-4.5' into i2c/for-next
git bisect good 4222e8c152a36c482959d975d714cfbd0ccbd1e6 # 03:04 22+ 22 i2c-mux: add common core data for every mux instance
git bisect bad 3489746df00f2f18b1a6dc2ba4de263e980cfbb0 # 03:14 0- 3 i2c-mux: move the slave side adapter management to i2c_mux_core
git bisect good f3ce0531d6bf0bed5682a36cdf2e8bf044f3b67f # 03:46 22+ 11 i2c-mux: move select and deselect ops to i2c_mux_core
# first bad commit: [3489746df00f2f18b1a6dc2ba4de263e980cfbb0] i2c-mux: move the slave side adapter management to i2c_mux_core
git bisect good f3ce0531d6bf0bed5682a36cdf2e8bf044f3b67f # 03:49 66+ 76 i2c-mux: move select and deselect ops to i2c_mux_core
# extra tests with DEBUG_INFO
git bisect bad 3489746df00f2f18b1a6dc2ba4de263e980cfbb0 # 04:00 0- 17 i2c-mux: move the slave side adapter management to i2c_mux_core
# extra tests on HEAD of linux-devel/devel-spot-201601060011
git bisect bad 328de02ee7475df3b3e9849542e8d15adaaaa894 # 04:00 0- 43 0day head guard for 'devel-spot-201601060011'
# extra tests on tree/branch linux-review/Peter-Rosin/i2c-mux-cleanup-and-locking-update/20160106-000205
git bisect bad b1d2a9bab92639356ad5a2fda8ea046d1216c11c # 04:15 0- 14 i2c-mux: relax locking of the top i2c adapter during i2c controlled muxing
# extra tests on tree/branch linus/master
git bisect good 168309855a7d1e16db751e9c647119fe2d2dc878 # 04:19 66+ 165 Linux 4.4-rc8
# extra tests on tree/branch linux-next/master
git bisect good 8ef79cd05e6894c01ab9b41aa918a402fa8022a7 # 04:34 66+ 58 Add linux-next specific files for 20160105
This script may reproduce the error.
----------------------------------------------------------------------------
#!/bin/bash
kernel=$1
initrd=yocto-minimal-x86_64.cgz
wget --no-clobber https://github.com/fengguang/reproduce-kernel-bug/raw/master/initrd/$initrd
kvm=(
qemu-system-x86_64
-enable-kvm
-cpu Haswell,+smep,+smap
-kernel $kernel
-initrd $initrd
-m 256
-smp 1
-device e1000,netdev=net0
-netdev user,id=net0
-boot order=nc
-no-reboot
-watchdog i6300esb
-rtc base=localtime
-serial stdio
-display none
-monitor null
)
append=(
hung_task_panic=1
earlyprintk=ttyS0,115200
systemd.log_level=err
debug
apic=debug
sysrq_always_enabled
rcupdate.rcu_cpu_stall_timeout=100
panic=-1
softlockup_panic=1
nmi_watchdog=panic
oops=panic
load_ramdisk=2
prompt_ramdisk=0
console=ttyS0,115200
console=tty0
vga=normal
root=/dev/ram0
rw
drbd.minor_count=8
)
"${kvm[@]}" --append "${append[*]}"
----------------------------------------------------------------------------
---
0-DAY kernel test infrastructure              Open Source Technology Center
https://lists.01.org/pipermail/lkp            Intel Corporation
[lkp] [pipe] 759c01142a: -51.5% hackbench.throughput
by kernel test robot
FYI, we noticed the below changes on
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit 759c01142a5d0f364a462346168a56de28a80f52 ("pipe: limit the per-user amount of pages allocated in pipes")
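The mechanism the commit introduces is, roughly, the following -- a
sketch from memory of the mainline change, trimmed and not verbatim:
each user's pipe buffer pages are counted, new pipes fall back to a
single buffer once a soft limit is reached, and pipe creation fails
past a hard limit. With hackbench driving 1600% threads over pipes,
one user quickly hits these limits, which plausibly explains the
throughput drop below.

/* fs/pipe.c (sketch): new sysctls and per-user accounting */
unsigned long pipe_user_pages_hard;	/* 0 == unlimited */
unsigned long pipe_user_pages_soft = PIPE_DEF_BUFFERS * INR_OPEN_CUR;

static void account_pipe_buffers(struct user_struct *user,
				 unsigned long old, unsigned long new)
{
	atomic_long_add(new - old, &user->pipe_bufs);
}

static bool too_many_pipe_buffers_soft(struct user_struct *user)
{
	return pipe_user_pages_soft &&
	       atomic_long_read(&user->pipe_bufs) >= pipe_user_pages_soft;
}

/* alloc_pipe_info() then drops to one buffer over the soft limit and
 * refuses the allocation entirely over the hard limit. */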
=========================================================================================
compiler/cpufreq_governor/ipc/kconfig/mode/nr_threads/rootfs/tbox_group/testcase:
gcc-4.9/performance/pipe/x86_64-rhel/threads/1600%/debian-x86_64-2015-02-07.cgz/lkp-wsx02/hackbench
commit:
558041d8d21b48287224dd0e32cf19563c77607c
759c01142a5d0f364a462346168a56de28a80f52
558041d8d21b4828 759c01142a5d0f364a46234616
---------------- --------------------------
%stddev %change %stddev
\ | \
219967 ± 4% -51.5% 106583 ± 1% hackbench.throughput
5.319e+08 ± 11% -19.7% 4.273e+08 ± 0% hackbench.time.involuntary_context_switches
2422668 ± 9% -36.4% 1540731 ± 2% hackbench.time.minor_page_faults
7127 ± 11% +9.4% 7800 ± 0% hackbench.time.percent_of_cpu_this_job_got
42924 ± 9% +11.9% 48049 ± 1% hackbench.time.system_time
1665 ± 11% -49.3% 844.05 ± 1% hackbench.time.user_time
534.25 ± 55% +298.5% 2129 ± 67% numa-meminfo.node3.AnonHugePages
8581 ± 57% -48.4% 4427 ± 1% uptime.idle
5705345 ± 13% +58.8% 9061490 ± 5% softirqs.RCU
279868 ± 8% +20.4% 336916 ± 1% softirqs.SCHED
26.75 ± 23% +245.8% 92.50 ± 39% vmstat.procs.b
9809 ± 13% -64.7% 3463 ± 7% vmstat.procs.r
284320 ± 13% -45.9% 153686 ± 1% vmstat.system.in
114.00 ± 10% -50.9% 56.00 ± 0% time.file_system_outputs
5.319e+08 ± 11% -19.7% 4.273e+08 ± 0% time.involuntary_context_switches
2422668 ± 9% -36.4% 1540731 ± 2% time.minor_page_faults
1665 ± 11% -49.3% 844.05 ± 1% time.user_time
89.86 ± 10% +8.9% 97.84 ± 0% turbostat.%Busy
2270 ± 10% +8.9% 2471 ± 0% turbostat.Avg_MHz
1.45 ± 8% -52.6% 0.69 ± 9% turbostat.CPU%c1
0.17 ± 13% -52.2% 0.08 ± 34% turbostat.CPU%c3
8.53 ±116% -83.7% 1.39 ± 3% turbostat.CPU%c6
63583716 ± 16% -98.1% 1183913 ± 53% numa-numastat.node0.local_node
63587769 ± 16% -98.1% 1189061 ± 53% numa-numastat.node0.numa_hit
69989955 ± 6% -96.8% 2239323 ± 8% numa-numastat.node1.local_node
69995282 ± 6% -96.8% 2244474 ± 8% numa-numastat.node1.numa_hit
56620071 ± 22% -97.3% 1520012 ± 32% numa-numastat.node2.local_node
56625301 ± 22% -97.3% 1522594 ± 31% numa-numastat.node2.numa_hit
63753303 ± 22% -96.1% 2508634 ± 14% numa-numastat.node3.local_node
63754612 ± 22% -96.1% 2511213 ± 14% numa-numastat.node3.numa_hit
32233121 ± 15% -97.4% 830659 ± 60% numa-vmstat.node0.numa_hit
32193958 ± 15% -97.5% 790185 ± 63% numa-vmstat.node0.numa_local
35020615 ± 10% -96.4% 1253289 ± 8% numa-vmstat.node1.numa_hit
34968563 ± 10% -96.6% 1201078 ± 9% numa-vmstat.node1.numa_local
30394713 ± 14% -97.2% 843134 ± 31% numa-vmstat.node2.numa_hit
30373607 ± 14% -97.3% 824676 ± 31% numa-vmstat.node2.numa_local
32334081 ± 21% -95.5% 1469250 ± 16% numa-vmstat.node3.numa_hit
32286373 ± 21% -95.6% 1420300 ± 16% numa-vmstat.node3.numa_local
1.868e+08 ± 11% -51.0% 91569694 ± 13% cpuidle.C1-NHM.time
1743049 ± 10% -34.6% 1139878 ± 11% cpuidle.C1-NHM.usage
1.918e+08 ± 12% -49.7% 96433830 ± 11% cpuidle.C1E-NHM.time
1243978 ± 16% -40.7% 738168 ± 12% cpuidle.C1E-NHM.usage
70404853 ± 8% -47.4% 37040591 ± 19% cpuidle.C3-NHM.time
4.723e+09 ±108% -81.6% 8.672e+08 ± 3% cpuidle.C6-NHM.time
311666 ± 18% -47.9% 162252 ± 15% cpuidle.C6-NHM.usage
2.456e+08 ± 18% -53.2% 1.15e+08 ± 9% cpuidle.POLL.time
260576 ± 21% -58.6% 107765 ± 17% cpuidle.POLL.usage
906770 ± 8% -19.4% 730530 ± 5% proc-vmstat.numa_hint_faults
2.54e+08 ± 11% -97.1% 7464740 ± 4% proc-vmstat.numa_hit
2.539e+08 ± 11% -97.1% 7449281 ± 4% proc-vmstat.numa_local
662291 ± 10% -33.4% 441016 ± 5% proc-vmstat.numa_pages_migrated
1203460 ± 6% -16.7% 1002984 ± 5% proc-vmstat.numa_pte_updates
5716982 ± 17% -97.2% 158482 ± 40% proc-vmstat.pgalloc_dma32
2.521e+08 ± 11% -96.3% 9338972 ± 3% proc-vmstat.pgalloc_normal
3627983 ± 4% -26.1% 2682860 ± 1% proc-vmstat.pgfault
2.576e+08 ± 11% -96.4% 9352801 ± 3% proc-vmstat.pgfree
662291 ± 10% -33.4% 441016 ± 5% proc-vmstat.pgmigrate_success
27845 ± 11% -79.8% 5624 ± 1% slabinfo.kmalloc-1024.active_objs
879.00 ± 11% -79.2% 183.00 ± 1% slabinfo.kmalloc-1024.active_slabs
28142 ± 11% -79.1% 5879 ± 0% slabinfo.kmalloc-1024.num_objs
879.00 ± 11% -79.2% 183.00 ± 1% slabinfo.kmalloc-1024.num_slabs
20288 ± 0% +129.9% 46639 ± 0% slabinfo.kmalloc-64.active_objs
317.00 ± 0% +131.4% 733.50 ± 0% slabinfo.kmalloc-64.active_slabs
20288 ± 0% +131.6% 46980 ± 0% slabinfo.kmalloc-64.num_objs
317.00 ± 0% +131.4% 733.50 ± 0% slabinfo.kmalloc-64.num_slabs
335.50 ± 9% -44.8% 185.25 ± 4% slabinfo.taskstats.active_objs
335.50 ± 9% -44.8% 185.25 ± 4% slabinfo.taskstats.num_objs
38.97 ± 29% +51.7% 59.11 ± 7% perf-profile.cycles-pp.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate
3.89 ± 60% +334.7% 16.93 ± 13% perf-profile.cycles-pp.__vfs_read.vfs_read.sys_read.entry_SYSCALL_64_fastpath
0.70 ± 30% +1969.3% 14.49 ± 11% perf-profile.cycles-pp.__wake_up_common.__wake_up_sync_key.pipe_read.__vfs_read.vfs_read
0.71 ± 30% +1971.0% 14.65 ± 11% perf-profile.cycles-pp.__wake_up_sync_key.pipe_read.__vfs_read.vfs_read.sys_read
0.69 ± 84% -92.3% 0.05 ± 63% perf-profile.cycles-pp._raw_spin_lock.__schedule.schedule.exit_to_usermode_loop.syscall_return_slowpath
34.58 ± 33% +57.1% 54.33 ± 7% perf-profile.cycles-pp._raw_spin_lock_irqsave.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.activate_task
39.61 ± 28% +53.7% 60.89 ± 7% perf-profile.cycles-pp.activate_task.ttwu_do_activate.try_to_wake_up.default_wake_function.autoremove_wake_function
0.70 ± 30% +1967.9% 14.47 ± 11% perf-profile.cycles-pp.autoremove_wake_function.__wake_up_common.__wake_up_sync_key.pipe_read.__vfs_read
0.70 ± 30% +1965.0% 14.46 ± 11% perf-profile.cycles-pp.default_wake_function.autoremove_wake_function.__wake_up_common.__wake_up_sync_key.pipe_read
38.99 ± 28% +54.5% 60.25 ± 7% perf-profile.cycles-pp.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up
39.45 ± 28% +53.9% 60.72 ± 7% perf-profile.cycles-pp.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up.default_wake_function
57.97 ± 20% +40.4% 81.35 ± 8% perf-profile.cycles-pp.entry_SYSCALL_64_fastpath
1.31 ± 60% -92.0% 0.10 ± 67% perf-profile.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.__schedule.schedule.exit_to_usermode_loop
2.54 ± 68% -81.8% 0.46 ± 60% perf-profile.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.try_to_wake_up.default_wake_function.autoremove_wake_function
34.38 ± 33% +55.7% 53.54 ± 6% perf-profile.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__account_scheduler_latency.enqueue_entity.enqueue_task_fair
3.67 ± 65% +354.5% 16.69 ± 11% perf-profile.cycles-pp.pipe_read.__vfs_read.vfs_read.sys_read.entry_SYSCALL_64_fastpath
5.22 ± 42% +250.6% 18.30 ± 21% perf-profile.cycles-pp.sys_read.entry_SYSCALL_64_fastpath
39.95 ± 28% +53.0% 61.10 ± 7% perf-profile.cycles-pp.ttwu_do_activate.constprop.86.try_to_wake_up.default_wake_function.autoremove_wake_function.__wake_up_common
4.70 ± 48% +282.5% 17.96 ± 19% perf-profile.cycles-pp.vfs_read.sys_read.entry_SYSCALL_64_fastpath
16323355 ± 13% -100.0% 0.00 ± -1% latency_stats.avg.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
1702613 ± 26% +288.1% 6607185 ± 61% latency_stats.avg.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
31.67 ± 74% +63999.7% 20298 ±149% latency_stats.avg.stop_two_cpus.migrate_swap.task_numa_migrate.numa_migrate_preferred.task_numa_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
3607008 ±163% -100.0% 384.25 ± 14% latency_stats.avg.wait_on_page_bit.__filemap_fdatawait_range.filemap_fdatawait_range.filemap_write_and_wait_range.nfs4_file_fsync.[nfsv4].vfs_fsync_range.vfs_fsync.nfs4_file_flush.[nfsv4].filp_close.do_dup2.SyS_dup2.entry_SYSCALL_64_fastpath
1042512 ± 15% +145.7% 2561649 ± 35% latency_stats.avg.wait_on_page_bit.__migration_entry_wait.migration_entry_wait.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
384229 ± 2% -76.9% 88748 ± 24% latency_stats.hits.call_rwsem_down_read_failed.do_exit.SyS_exit.entry_SYSCALL_64_fastpath
22676827 ± 14% +762.3% 1.955e+08 ± 0% latency_stats.hits.pipe_wait.pipe_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
108077 ± 11% -96.9% 3361 ± 8% latency_stats.hits.pipe_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
9791700 ± 80% +173.5% 26782186 ± 72% latency_stats.max.call_rwsem_down_read_failed.__do_page_fault.do_page_fault.page_fault.copy_page_from_iter.pipe_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
8696153 ± 89% +196.7% 25800599 ± 75% latency_stats.max.call_rwsem_down_read_failed.__do_page_fault.do_page_fault.page_fault.copy_page_to_iter.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
13849 ±173% -64.1% 4975 ±102% latency_stats.max.call_rwsem_down_write_failed.unlink_file_vma.free_pgtables.exit_mmap.mmput.flush_old_exec.load_elf_binary.search_binary_handler.do_execveat_common.SyS_execve.return_from_execve
42431494 ± 56% -100.0% 0.00 ± -1% latency_stats.max.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
11940618 ± 27% +124.4% 26793707 ± 72% latency_stats.max.pipe_wait.pipe_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
31.67 ± 74% +73210.5% 23215 ±124% latency_stats.max.stop_two_cpus.migrate_swap.task_numa_migrate.numa_migrate_preferred.task_numa_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
27276522 ±155% -100.0% 473.50 ± 18% latency_stats.max.wait_on_page_bit.__filemap_fdatawait_range.filemap_fdatawait_range.filemap_write_and_wait_range.nfs4_file_fsync.[nfsv4].vfs_fsync_range.vfs_fsync.nfs4_file_flush.[nfsv4].filp_close.do_dup2.SyS_dup2.entry_SYSCALL_64_fastpath
86431834 ±106% +708.2% 6.985e+08 ± 40% latency_stats.sum.call_rwsem_down_read_failed.__do_page_fault.do_page_fault.page_fault.copy_page_to_iter.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
14619 ±173% -65.9% 4980 ±102% latency_stats.sum.call_rwsem_down_write_failed.unlink_file_vma.free_pgtables.exit_mmap.mmput.flush_old_exec.load_elf_binary.search_binary_handler.do_execveat_common.SyS_execve.return_from_execve
44595961 ± 58% -100.0% 0.00 ± -1% latency_stats.sum.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
4.608e+08 ± 37% +202.9% 1.396e+09 ± 59% latency_stats.sum.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
2.164e+10 ± 13% +960.4% 2.294e+11 ± 5% latency_stats.sum.pipe_wait.pipe_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
1.117e+11 ± 16% -97.9% 2.338e+09 ± 29% latency_stats.sum.pipe_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
35826 ± 54% -70.9% 10422 ± 6% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_execute.rpc_run_task.nfs4_call_sync_sequence.[nfsv4]._nfs4_proc_access.[nfsv4].nfs4_proc_access.[nfsv4].nfs_do_access.nfs_permission.__inode_permission.inode_permission.link_path_walk
31.67 ± 74% +1.3e+05% 39883 ± 89% latency_stats.sum.stop_two_cpus.migrate_swap.task_numa_migrate.numa_migrate_preferred.task_numa_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
40133821 ±161% -100.0% 2693 ± 14% latency_stats.sum.wait_on_page_bit.__filemap_fdatawait_range.filemap_fdatawait_range.filemap_write_and_wait_range.nfs4_file_fsync.[nfsv4].vfs_fsync_range.vfs_fsync.nfs4_file_flush.[nfsv4].filp_close.do_dup2.SyS_dup2.entry_SYSCALL_64_fastpath
9.857e+09 ± 19% +462.0% 5.539e+10 ± 40% latency_stats.sum.wait_on_page_bit.__migration_entry_wait.migration_entry_wait.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
7.50 ± 54% +73.3% 13.00 ± 9% sched_debug.cfs_rq:/.load.17
12.00 ± 17% -35.4% 7.75 ± 14% sched_debug.cfs_rq:/.load.22
5.50 ± 41% +140.9% 13.25 ± 33% sched_debug.cfs_rq:/.load.23
6.25 ± 42% +100.0% 12.50 ± 23% sched_debug.cfs_rq:/.load.25
6.00 ± 51% +145.8% 14.75 ± 28% sched_debug.cfs_rq:/.load.29
5.75 ± 56% +121.7% 12.75 ± 29% sched_debug.cfs_rq:/.load.35
7.50 ± 74% +86.7% 14.00 ± 44% sched_debug.cfs_rq:/.load.43
11.75 ± 20% -34.0% 7.75 ± 34% sched_debug.cfs_rq:/.load.46
5.75 ± 39% +113.0% 12.25 ± 18% sched_debug.cfs_rq:/.load.53
12.75 ± 12% -31.4% 8.75 ± 9% sched_debug.cfs_rq:/.load.54
13.75 ± 31% -40.0% 8.25 ± 30% sched_debug.cfs_rq:/.load.58
27.00 ± 49% -58.3% 11.25 ± 20% sched_debug.cfs_rq:/.load_avg.24
28.25 ± 58% -61.9% 10.75 ± 20% sched_debug.cfs_rq:/.load_avg.26
22.25 ± 51% -51.7% 10.75 ± 13% sched_debug.cfs_rq:/.load_avg.28
20.75 ± 48% -53.0% 9.75 ± 15% sched_debug.cfs_rq:/.load_avg.62
182.64 ± 15% -41.5% 106.77 ± 18% sched_debug.cfs_rq:/.load_avg.max
1.14 ± 30% +65.7% 1.90 ± 18% sched_debug.cfs_rq:/.load_avg.min
30.53 ± 7% -39.5% 18.46 ± 20% sched_debug.cfs_rq:/.load_avg.stddev
61751943 ± 3% -29.5% 43522729 ± 13% sched_debug.cfs_rq:/.min_vruntime.1
63695496 ± 3% -25.5% 47470305 ± 13% sched_debug.cfs_rq:/.min_vruntime.13
64969515 ± 3% -27.9% 46847510 ± 14% sched_debug.cfs_rq:/.min_vruntime.17
65957991 ± 1% -27.8% 47648514 ± 13% sched_debug.cfs_rq:/.min_vruntime.21
64485085 ± 3% -26.1% 47642410 ± 13% sched_debug.cfs_rq:/.min_vruntime.25
64721403 ± 3% -28.9% 45992467 ± 16% sched_debug.cfs_rq:/.min_vruntime.29
65138437 ± 4% -29.3% 46039876 ± 12% sched_debug.cfs_rq:/.min_vruntime.33
66399877 ± 4% -26.6% 48722623 ± 12% sched_debug.cfs_rq:/.min_vruntime.37
67958189 ± 4% -29.4% 47968815 ± 16% sched_debug.cfs_rq:/.min_vruntime.41
67595662 ± 6% -27.4% 49096370 ± 13% sched_debug.cfs_rq:/.min_vruntime.45
67690444 ± 3% -27.6% 49000109 ± 12% sched_debug.cfs_rq:/.min_vruntime.49
64098981 ± 3% -26.2% 47277630 ± 14% sched_debug.cfs_rq:/.min_vruntime.5
67783223 ± 3% -27.4% 49216714 ± 12% sched_debug.cfs_rq:/.min_vruntime.53
66220648 ± 3% -25.3% 49481358 ± 11% sched_debug.cfs_rq:/.min_vruntime.57
68140688 ± 4% -30.7% 47207825 ± 16% sched_debug.cfs_rq:/.min_vruntime.61
65498493 ± 5% -25.8% 48590023 ± 13% sched_debug.cfs_rq:/.min_vruntime.65
65049958 ± 4% -28.2% 46691564 ± 18% sched_debug.cfs_rq:/.min_vruntime.69
68046848 ± 3% -26.8% 49835973 ± 13% sched_debug.cfs_rq:/.min_vruntime.73
66382113 ± 4% -26.9% 48557976 ± 18% sched_debug.cfs_rq:/.min_vruntime.77
65055617 ± 3% -28.7% 46408287 ± 15% sched_debug.cfs_rq:/.min_vruntime.9
64257203 ± 11% -19.4% 51816609 ± 3% sched_debug.cfs_rq:/.min_vruntime.avg
81981181 ± 6% -14.9% 69761399 ± 5% sched_debug.cfs_rq:/.min_vruntime.max
7.25 ± 39% +348.3% 32.50 ± 68% sched_debug.cfs_rq:/.nr_spread_over.0
6.75 ± 28% +622.2% 48.75 ± 58% sched_debug.cfs_rq:/.nr_spread_over.1
3.50 ± 51% +1342.9% 50.50 ± 45% sched_debug.cfs_rq:/.nr_spread_over.10
4.25 ±100% +4358.8% 189.50 ±143% sched_debug.cfs_rq:/.nr_spread_over.13
7.25 ± 86% +2000.0% 152.25 ±109% sched_debug.cfs_rq:/.nr_spread_over.14
9.00 ± 39% +933.3% 93.00 ±105% sched_debug.cfs_rq:/.nr_spread_over.17
6.50 ± 77% +826.9% 60.25 ± 67% sched_debug.cfs_rq:/.nr_spread_over.18
9.75 ± 67% +220.5% 31.25 ± 79% sched_debug.cfs_rq:/.nr_spread_over.20
9.25 ± 67% +281.1% 35.25 ± 65% sched_debug.cfs_rq:/.nr_spread_over.21
4.25 ± 45% +1517.6% 68.75 ± 53% sched_debug.cfs_rq:/.nr_spread_over.22
5.50 ± 68% +581.8% 37.50 ± 92% sched_debug.cfs_rq:/.nr_spread_over.24
5.50 ± 30% +754.5% 47.00 ± 73% sched_debug.cfs_rq:/.nr_spread_over.25
6.25 ± 23% +1660.0% 110.00 ± 39% sched_debug.cfs_rq:/.nr_spread_over.26
8.00 ± 12% +343.8% 35.50 ± 39% sched_debug.cfs_rq:/.nr_spread_over.29
3.00 ± 78% +600.0% 21.00 ± 25% sched_debug.cfs_rq:/.nr_spread_over.32
5.75 ± 93% +856.5% 55.00 ± 67% sched_debug.cfs_rq:/.nr_spread_over.36
4.25 ± 65% +670.6% 32.75 ± 30% sched_debug.cfs_rq:/.nr_spread_over.37
4.75 ± 47% +1300.0% 66.50 ± 63% sched_debug.cfs_rq:/.nr_spread_over.38
6.25 ± 81% +812.0% 57.00 ± 43% sched_debug.cfs_rq:/.nr_spread_over.4
8.25 ± 89% +169.7% 22.25 ± 69% sched_debug.cfs_rq:/.nr_spread_over.41
3.00 ±102% +5608.3% 171.25 ±140% sched_debug.cfs_rq:/.nr_spread_over.42
5.25 ± 47% +566.7% 35.00 ± 27% sched_debug.cfs_rq:/.nr_spread_over.44
4.00 ± 46% +662.5% 30.50 ± 28% sched_debug.cfs_rq:/.nr_spread_over.45
3.25 ±121% +2761.5% 93.00 ± 87% sched_debug.cfs_rq:/.nr_spread_over.46
4.25 ± 58% +758.8% 36.50 ±104% sched_debug.cfs_rq:/.nr_spread_over.47
6.50 ± 60% +323.1% 27.50 ± 52% sched_debug.cfs_rq:/.nr_spread_over.48
7.50 ± 38% +800.0% 67.50 ± 81% sched_debug.cfs_rq:/.nr_spread_over.49
6.75 ± 21% +303.7% 27.25 ± 43% sched_debug.cfs_rq:/.nr_spread_over.5
2.75 ± 30% +1736.4% 50.50 ± 56% sched_debug.cfs_rq:/.nr_spread_over.50
3.75 ± 66% +1220.0% 49.50 ±117% sched_debug.cfs_rq:/.nr_spread_over.52
2.75 ± 64% +854.5% 26.25 ± 79% sched_debug.cfs_rq:/.nr_spread_over.53
2.75 ± 97% +3218.2% 91.25 ± 57% sched_debug.cfs_rq:/.nr_spread_over.54
3.00 ± 40% +900.0% 30.00 ± 66% sched_debug.cfs_rq:/.nr_spread_over.56
2.25 ± 57% +1700.0% 40.50 ± 77% sched_debug.cfs_rq:/.nr_spread_over.57
3.50 ± 58% +2771.4% 100.50 ± 69% sched_debug.cfs_rq:/.nr_spread_over.58
4.00 ±107% +893.8% 39.75 ± 82% sched_debug.cfs_rq:/.nr_spread_over.59
6.00 ± 72% +1283.3% 83.00 ± 36% sched_debug.cfs_rq:/.nr_spread_over.6
6.50 ± 77% +384.6% 31.50 ± 78% sched_debug.cfs_rq:/.nr_spread_over.61
3.50 ± 51% +1571.4% 58.50 ± 73% sched_debug.cfs_rq:/.nr_spread_over.62
6.25 ± 52% +356.0% 28.50 ± 45% sched_debug.cfs_rq:/.nr_spread_over.64
7.75 ± 84% +106.5% 16.00 ± 41% sched_debug.cfs_rq:/.nr_spread_over.65
6.50 ± 72% +319.2% 27.25 ± 83% sched_debug.cfs_rq:/.nr_spread_over.68
8.50 ± 59% +329.4% 36.50 ± 57% sched_debug.cfs_rq:/.nr_spread_over.69
5.75 ± 56% +308.7% 23.50 ± 60% sched_debug.cfs_rq:/.nr_spread_over.72
3.50 ± 65% +400.0% 17.50 ± 52% sched_debug.cfs_rq:/.nr_spread_over.73
3.25 ± 25% +876.9% 31.75 ± 68% sched_debug.cfs_rq:/.nr_spread_over.75
4.00 ± 63% +23906.2% 960.25 ±169% sched_debug.cfs_rq:/.nr_spread_over.76
5.00 ± 70% +720.0% 41.00 ± 54% sched_debug.cfs_rq:/.nr_spread_over.77
3.50 ± 47% +600.0% 24.50 ± 73% sched_debug.cfs_rq:/.nr_spread_over.8
6.75 ± 48% +370.4% 31.75 ± 83% sched_debug.cfs_rq:/.nr_spread_over.9
19.08 ± 14% +232.7% 63.48 ± 32% sched_debug.cfs_rq:/.nr_spread_over.avg
13.50 ± 21% -38.9% 8.25 ± 21% sched_debug.cfs_rq:/.runnable_load_avg.62
17.34 ± 33% -48.7% 8.90 ± 22% sched_debug.cfs_rq:/.runnable_load_avg.stddev
1650786 ± 96% +215.7% 5211288 ± 32% sched_debug.cfs_rq:/.spread0.24
3735888 ± 25% -66.2% 1261722 ± 76% sched_debug.cfs_rq:/.spread0.32
6947361 ±181% -153.7% -3730893 ±-158% sched_debug.cfs_rq:/.spread0.41
822224 ± 6% +262.2% 2978378 ± 55% sched_debug.cpu.avg_idle.11
899425 ± 22% +115.1% 1934526 ± 63% sched_debug.cpu.avg_idle.27
902557 ± 11% +161.2% 2357181 ± 71% sched_debug.cpu.avg_idle.50
855811 ± 2% +90.8% 1632603 ± 74% sched_debug.cpu.avg_idle.51
968895 ± 7% +160.6% 2524849 ± 36% sched_debug.cpu.avg_idle.52
1175701 ± 48% +111.3% 2484589 ± 71% sched_debug.cpu.avg_idle.69
5.00 ± 44% +120.0% 11.00 ± 33% sched_debug.cpu.cpu_load[0].23
13.50 ± 21% -38.9% 8.25 ± 21% sched_debug.cpu.cpu_load[0].62
0.94 ± 25% +74.1% 1.64 ± 20% sched_debug.cpu.cpu_load[0].min
5.00 ± 44% +120.0% 11.00 ± 33% sched_debug.cpu.cpu_load[1].23
5.75 ± 31% +91.3% 11.00 ± 23% sched_debug.cpu.cpu_load[1].45
13.50 ± 21% -38.9% 8.25 ± 21% sched_debug.cpu.cpu_load[1].62
0.94 ± 25% +74.1% 1.64 ± 20% sched_debug.cpu.cpu_load[1].min
17.26 ± 33% -40.2% 10.33 ± 41% sched_debug.cpu.cpu_load[1].stddev
5.50 ± 32% +100.0% 11.00 ± 23% sched_debug.cpu.cpu_load[2].45
13.50 ± 21% -38.9% 8.25 ± 21% sched_debug.cpu.cpu_load[2].62
17.07 ± 33% -43.4% 9.66 ± 32% sched_debug.cpu.cpu_load[2].stddev
5.00 ± 42% +120.0% 11.00 ± 23% sched_debug.cpu.cpu_load[3].45
13.75 ± 18% -40.0% 8.25 ± 21% sched_debug.cpu.cpu_load[3].62
16.85 ± 31% -42.2% 9.75 ± 32% sched_debug.cpu.cpu_load[3].stddev
4.75 ± 50% +131.6% 11.00 ± 23% sched_debug.cpu.cpu_load[4].45
13.75 ± 18% -40.0% 8.25 ± 21% sched_debug.cpu.cpu_load[4].62
13.50 ± 19% -37.0% 8.50 ± 24% sched_debug.cpu.cpu_load[4].78
16.46 ± 29% -40.2% 9.85 ± 33% sched_debug.cpu.cpu_load[4].stddev
45774 ± 14% -26.5% 33656 ± 21% sched_debug.cpu.curr->pid.67
7.50 ± 52% +76.7% 13.25 ± 8% sched_debug.cpu.load.17
12.25 ± 25% -36.7% 7.75 ± 14% sched_debug.cpu.load.20
11.75 ± 18% -36.2% 7.50 ± 14% sched_debug.cpu.load.22
5.75 ± 37% +126.1% 13.00 ± 32% sched_debug.cpu.load.23
6.00 ± 42% +104.2% 12.25 ± 26% sched_debug.cpu.load.25
5.75 ± 56% +126.1% 13.00 ± 30% sched_debug.cpu.load.35
12.00 ± 17% -35.4% 7.75 ± 34% sched_debug.cpu.load.46
5.75 ± 39% +117.4% 12.50 ± 18% sched_debug.cpu.load.53
12.50 ± 12% -30.0% 8.75 ± 9% sched_debug.cpu.load.54
505539 ± 1% +644.1% 3761938 ± 74% sched_debug.cpu.max_idle_balance_cost.11
719417 ± 52% +395.3% 3563254 ± 71% sched_debug.cpu.max_idle_balance_cost.37
744737 ± 44% +202.3% 2251473 ± 75% sched_debug.cpu.max_idle_balance_cost.40
534942 ± 7% +266.8% 1962058 ± 46% sched_debug.cpu.max_idle_balance_cost.52
2113103 ± 65% -68.9% 656815 ± 41% sched_debug.cpu.max_idle_balance_cost.65
630785 ± 17% -19.5% 507892 ± 1% sched_debug.cpu.max_idle_balance_cost.7
148.50 ± 45% -83.5% 24.50 ± 19% sched_debug.cpu.nr_running.0
165.00 ± 45% -85.8% 23.50 ± 38% sched_debug.cpu.nr_running.10
75.75 ± 29% -71.6% 21.50 ± 25% sched_debug.cpu.nr_running.11
124.50 ± 85% -82.9% 21.25 ± 19% sched_debug.cpu.nr_running.12
141.75 ± 29% -84.0% 22.75 ± 26% sched_debug.cpu.nr_running.14
102.25 ± 78% -71.1% 29.50 ± 33% sched_debug.cpu.nr_running.15
125.25 ± 83% -82.6% 21.75 ± 21% sched_debug.cpu.nr_running.16
157.25 ± 39% -85.2% 23.25 ± 34% sched_debug.cpu.nr_running.18
159.00 ± 66% -83.6% 26.00 ± 27% sched_debug.cpu.nr_running.19
148.00 ± 24% -80.9% 28.25 ± 29% sched_debug.cpu.nr_running.2
117.75 ± 49% -81.7% 21.50 ± 28% sched_debug.cpu.nr_running.20
135.00 ± 32% -86.1% 18.75 ± 27% sched_debug.cpu.nr_running.22
71.25 ± 27% -55.1% 32.00 ± 27% sched_debug.cpu.nr_running.23
243.50 ± 71% -89.4% 25.75 ± 8% sched_debug.cpu.nr_running.24
186.00 ± 51% -89.1% 20.25 ± 24% sched_debug.cpu.nr_running.26
95.75 ± 59% -73.4% 25.50 ± 29% sched_debug.cpu.nr_running.27
239.25 ± 99% -92.1% 19.00 ± 15% sched_debug.cpu.nr_running.28
98.25 ± 54% -66.9% 32.50 ± 22% sched_debug.cpu.nr_running.3
172.50 ± 44% -87.5% 21.50 ± 43% sched_debug.cpu.nr_running.30
66.50 ± 25% -63.9% 24.00 ± 25% sched_debug.cpu.nr_running.31
133.25 ± 48% -82.2% 23.75 ± 18% sched_debug.cpu.nr_running.32
139.75 ± 37% -82.3% 24.75 ± 27% sched_debug.cpu.nr_running.34
86.25 ± 75% -75.4% 21.25 ± 23% sched_debug.cpu.nr_running.36
62.75 ± 31% -58.2% 26.25 ± 30% sched_debug.cpu.nr_running.39
100.00 ± 72% -79.0% 21.00 ± 21% sched_debug.cpu.nr_running.4
92.00 ± 64% -78.0% 20.25 ± 15% sched_debug.cpu.nr_running.40
155.50 ± 46% -83.8% 25.25 ± 45% sched_debug.cpu.nr_running.42
62.25 ± 39% -55.8% 27.50 ± 30% sched_debug.cpu.nr_running.43
113.50 ±102% -79.5% 23.25 ± 24% sched_debug.cpu.nr_running.44
140.00 ± 29% -86.4% 19.00 ± 31% sched_debug.cpu.nr_running.46
311.50 ±114% -91.3% 27.25 ± 20% sched_debug.cpu.nr_running.47
112.75 ± 91% -81.2% 21.25 ± 5% sched_debug.cpu.nr_running.48
138.50 ± 34% -84.7% 21.25 ± 31% sched_debug.cpu.nr_running.50
88.75 ± 67% -72.7% 24.25 ± 33% sched_debug.cpu.nr_running.51
105.50 ± 51% -79.9% 21.25 ± 14% sched_debug.cpu.nr_running.52
129.50 ± 29% -84.6% 20.00 ± 21% sched_debug.cpu.nr_running.54
122.25 ± 51% -80.2% 24.25 ± 12% sched_debug.cpu.nr_running.56
163.25 ± 45% -87.7% 20.00 ± 36% sched_debug.cpu.nr_running.58
92.25 ± 66% -71.5% 26.25 ± 13% sched_debug.cpu.nr_running.59
230.75 ±101% -91.9% 18.75 ± 23% sched_debug.cpu.nr_running.60
144.75 ± 42% -82.6% 25.25 ± 41% sched_debug.cpu.nr_running.62
58.25 ± 32% -55.4% 26.00 ± 31% sched_debug.cpu.nr_running.63
122.25 ± 47% -81.6% 22.50 ± 9% sched_debug.cpu.nr_running.64
150.00 ± 38% -74.5% 38.25 ± 76% sched_debug.cpu.nr_running.66
89.25 ± 62% -75.1% 22.25 ± 43% sched_debug.cpu.nr_running.67
116.00 ± 60% -78.7% 24.75 ± 10% sched_debug.cpu.nr_running.68
88.25 ± 22% -67.4% 28.75 ± 25% sched_debug.cpu.nr_running.7
134.25 ± 40% -81.8% 24.50 ± 40% sched_debug.cpu.nr_running.70
69.00 ± 38% -59.8% 27.75 ± 22% sched_debug.cpu.nr_running.71
119.75 ± 42% -80.4% 23.50 ± 16% sched_debug.cpu.nr_running.72
138.50 ± 38% -74.5% 35.25 ± 93% sched_debug.cpu.nr_running.74
86.50 ± 61% -73.7% 22.75 ± 32% sched_debug.cpu.nr_running.75
115.50 ± 62% -81.8% 21.00 ± 10% sched_debug.cpu.nr_running.76
127.00 ± 35% -83.5% 21.00 ± 37% sched_debug.cpu.nr_running.78
59.25 ± 40% -66.2% 20.00 ± 44% sched_debug.cpu.nr_running.79
97.50 ± 54% -74.1% 25.25 ± 10% sched_debug.cpu.nr_running.8
106.72 ± 14% -69.9% 32.15 ± 23% sched_debug.cpu.nr_running.avg
975.12 ± 47% -72.2% 271.24 ± 72% sched_debug.cpu.nr_running.max
0.83 ± 26% +65.9% 1.38 ± 26% sched_debug.cpu.nr_running.min
189.39 ± 34% -74.1% 49.14 ± 62% sched_debug.cpu.nr_running.stddev
8049365 ± 5% -9.5% 7281434 ± 5% sched_debug.cpu.nr_switches.21
8222072 ± 4% -11.3% 7293398 ± 2% sched_debug.cpu.nr_switches.25
8355303 ± 5% -13.3% 7241351 ± 4% sched_debug.cpu.nr_switches.29
8377252 ± 5% -12.2% 7358538 ± 3% sched_debug.cpu.nr_switches.33
8646522 ± 6% -12.8% 7543084 ± 3% sched_debug.cpu.nr_switches.37
8680679 ± 6% -13.5% 7512566 ± 3% sched_debug.cpu.nr_switches.41
8862414 ± 6% -15.0% 7533561 ± 4% sched_debug.cpu.nr_switches.45
8513418 ± 5% -12.4% 7453546 ± 3% sched_debug.cpu.nr_switches.49
8554847 ± 6% -12.6% 7480415 ± 4% sched_debug.cpu.nr_switches.53
8818690 ± 3% -15.0% 7492325 ± 2% sched_debug.cpu.nr_switches.57
8446431 ± 5% -12.2% 7416662 ± 3% sched_debug.cpu.nr_switches.61
8424264 ± 6% -11.2% 7483465 ± 3% sched_debug.cpu.nr_switches.65
8874420 ± 6% -14.6% 7581001 ± 4% sched_debug.cpu.nr_switches.73
8548326 ± 3% -12.7% 7462215 ± 3% sched_debug.cpu.nr_switches.77
9638903 ± 5% -17.4% 7963391 ± 4% sched_debug.cpu.nr_switches.max
896810 ± 21% -68.9% 279227 ± 18% sched_debug.cpu.nr_switches.stddev
116.00 ± 29% -115.7% -18.25 ±-446% sched_debug.cpu.nr_uninterruptible.10
333.00 ± 85% -104.9% -16.25 ±-454% sched_debug.cpu.nr_uninterruptible.14
132.00 ± 65% -76.7% 30.75 ±119% sched_debug.cpu.nr_uninterruptible.16
170.00 ± 57% -100.9% -1.50 ±-8373% sched_debug.cpu.nr_uninterruptible.20
201.00 ± 66% -81.1% 38.00 ±318% sched_debug.cpu.nr_uninterruptible.24
277.00 ± 44% -74.4% 71.00 ± 96% sched_debug.cpu.nr_uninterruptible.25
213.75 ± 60% -76.8% 49.50 ± 63% sched_debug.cpu.nr_uninterruptible.33
80.25 ± 41% -109.7% -7.75 ±-722% sched_debug.cpu.nr_uninterruptible.40
350.00 ±100% -107.5% -26.25 ±-523% sched_debug.cpu.nr_uninterruptible.46
177.00 ± 35% -60.7% 69.50 ± 65% sched_debug.cpu.nr_uninterruptible.56
223.00 ± 55% -58.6% 92.25 ± 84% sched_debug.cpu.nr_uninterruptible.58
260.50 ± 67% -313.2% -555.50 ±-124% sched_debug.cpu.nr_uninterruptible.59
290.25 ± 78% -98.7% 3.75 ±246% sched_debug.cpu.nr_uninterruptible.6
139.00 ± 40% -87.4% 17.50 ±226% sched_debug.cpu.nr_uninterruptible.66
94.25 ± 96% -136.1% -34.00 ±-331% sched_debug.cpu.nr_uninterruptible.68
134.50 ± 69% -128.6% -38.50 ±-288% sched_debug.cpu.nr_uninterruptible.69
160.25 ± 60% -84.9% 24.25 ±377% sched_debug.cpu.nr_uninterruptible.73
224.00 ± 30% -71.5% 63.75 ± 24% sched_debug.cpu.nr_uninterruptible.78
104.50 ±112% -128.7% -30.00 ±-179% sched_debug.cpu.nr_uninterruptible.8
93.00 ± 30% -128.2% -26.25 ±-149% sched_debug.cpu.nr_uninterruptible.9
68.65 ± 25% -77.4% 15.53 ±150% sched_debug.cpu.nr_uninterruptible.avg
8050079 ± 5% -9.5% 7282381 ± 5% sched_debug.cpu.sched_count.21
8223158 ± 4% -11.3% 7293920 ± 2% sched_debug.cpu.sched_count.25
8355745 ± 5% -13.3% 7241867 ± 4% sched_debug.cpu.sched_count.29
8377564 ± 5% -12.2% 7358616 ± 3% sched_debug.cpu.sched_count.33
8647489 ± 6% -12.8% 7543179 ± 3% sched_debug.cpu.sched_count.37
8680933 ± 6% -13.5% 7512654 ± 3% sched_debug.cpu.sched_count.41
8862638 ± 6% -15.0% 7533639 ± 4% sched_debug.cpu.sched_count.45
8513659 ± 5% -12.5% 7453634 ± 3% sched_debug.cpu.sched_count.49
8555109 ± 6% -12.6% 7480502 ± 4% sched_debug.cpu.sched_count.53
8818934 ± 3% -15.0% 7492403 ± 2% sched_debug.cpu.sched_count.57
8446695 ± 5% -12.2% 7416735 ± 3% sched_debug.cpu.sched_count.61
8424736 ± 6% -11.2% 7483931 ± 3% sched_debug.cpu.sched_count.65
8874653 ± 6% -14.6% 7581099 ± 4% sched_debug.cpu.sched_count.73
8548563 ± 3% -12.7% 7462306 ± 3% sched_debug.cpu.sched_count.77
1277740 ± 13% -28.3% 915765 ± 1% sched_debug.cpu.sched_count.stddev
23897 ± 33% -52.1% 11437 ± 45% sched_debug.cpu.sched_goidle.0
24028 ± 11% -55.3% 10738 ± 19% sched_debug.cpu.sched_goidle.10
23579 ± 16% -50.5% 11679 ± 45% sched_debug.cpu.sched_goidle.12
25676 ± 27% -55.2% 11511 ± 21% sched_debug.cpu.sched_goidle.13
29635 ± 45% -54.6% 13447 ± 30% sched_debug.cpu.sched_goidle.15
24179 ± 31% -45.1% 13266 ± 33% sched_debug.cpu.sched_goidle.16
19085 ± 16% -50.5% 9456 ± 29% sched_debug.cpu.sched_goidle.2
26332 ± 24% -51.3% 12826 ± 23% sched_debug.cpu.sched_goidle.20
19407 ± 23% -26.8% 14211 ± 18% sched_debug.cpu.sched_goidle.21
23725 ± 10% -60.5% 9362 ± 22% sched_debug.cpu.sched_goidle.22
20885 ± 8% -30.4% 14541 ± 33% sched_debug.cpu.sched_goidle.23
26971 ± 35% -61.1% 10498 ± 25% sched_debug.cpu.sched_goidle.24
21959 ± 16% -41.4% 12865 ± 22% sched_debug.cpu.sched_goidle.25
21776 ± 12% -46.1% 11742 ± 23% sched_debug.cpu.sched_goidle.27
21108 ± 27% -56.6% 9163 ± 29% sched_debug.cpu.sched_goidle.30
18235 ± 46% -57.1% 7826 ± 51% sched_debug.cpu.sched_goidle.32
18074 ± 14% -44.3% 10059 ± 40% sched_debug.cpu.sched_goidle.33
15392 ± 23% -37.3% 9657 ± 13% sched_debug.cpu.sched_goidle.34
21667 ± 26% -59.5% 8783 ± 13% sched_debug.cpu.sched_goidle.38
19652 ± 10% -42.0% 11404 ± 25% sched_debug.cpu.sched_goidle.4
16821 ± 13% -57.7% 7113 ± 14% sched_debug.cpu.sched_goidle.42
17884 ± 27% -62.7% 6664 ± 72% sched_debug.cpu.sched_goidle.44
22708 ± 46% -63.9% 8192 ± 58% sched_debug.cpu.sched_goidle.48
19401 ± 26% -42.1% 11228 ± 5% sched_debug.cpu.sched_goidle.49
16767 ± 19% -44.3% 9332 ± 20% sched_debug.cpu.sched_goidle.50
15964 ± 27% -48.2% 8269 ± 29% sched_debug.cpu.sched_goidle.52
14561 ± 17% -31.7% 9951 ± 8% sched_debug.cpu.sched_goidle.53
16277 ± 22% -67.9% 5232 ± 35% sched_debug.cpu.sched_goidle.54
23015 ± 35% -51.2% 11236 ± 24% sched_debug.cpu.sched_goidle.6
15897 ± 14% -48.0% 8264 ± 27% sched_debug.cpu.sched_goidle.62
22641 ± 16% -46.0% 12217 ± 20% sched_debug.cpu.sched_goidle.64
23159 ± 27% -47.9% 12060 ± 15% sched_debug.cpu.sched_goidle.68
20571 ± 10% -30.7% 14246 ± 29% sched_debug.cpu.sched_goidle.69
21750 ± 25% -54.6% 9880 ± 38% sched_debug.cpu.sched_goidle.70
13572 ± 12% -35.5% 8757 ± 29% sched_debug.cpu.sched_goidle.73
15307 ± 20% -36.8% 9677 ± 40% sched_debug.cpu.sched_goidle.78
17866 ± 7% -33.2% 11942 ± 29% sched_debug.cpu.sched_goidle.9
18857 ± 12% -40.4% 11240 ± 18% sched_debug.cpu.sched_goidle.avg
43849 ± 13% -36.0% 28085 ± 10% sched_debug.cpu.sched_goidle.max
6115 ± 23% -50.4% 3034 ± 27% sched_debug.cpu.sched_goidle.min
7313 ± 9% -32.8% 4915 ± 8% sched_debug.cpu.sched_goidle.stddev
4341077 ± 9% +14.6% 4976682 ± 4% sched_debug.cpu.ttwu_count.0
372672 ± 13% -70.4% 110439 ± 8% sched_debug.cpu.ttwu_count.stddev
2955563 ± 9% +53.1% 4523498 ± 4% sched_debug.cpu.ttwu_local.0
3131208 ± 7% +43.3% 4486241 ± 5% sched_debug.cpu.ttwu_local.1
3001199 ± 15% +44.8% 4346826 ± 4% sched_debug.cpu.ttwu_local.10
3166403 ± 7% +35.4% 4288123 ± 7% sched_debug.cpu.ttwu_local.11
3100602 ± 13% +38.3% 4288026 ± 4% sched_debug.cpu.ttwu_local.12
3391546 ± 5% +30.8% 4435978 ± 5% sched_debug.cpu.ttwu_local.13
2959588 ± 16% +44.0% 4261936 ± 4% sched_debug.cpu.ttwu_local.14
3267265 ± 9% +32.4% 4326510 ± 7% sched_debug.cpu.ttwu_local.15
3137496 ± 18% +38.5% 4345588 ± 5% sched_debug.cpu.ttwu_local.16
3356134 ± 7% +31.1% 4399395 ± 4% sched_debug.cpu.ttwu_local.17
3108355 ± 17% +36.7% 4248516 ± 4% sched_debug.cpu.ttwu_local.18
3303788 ± 10% +31.6% 4346891 ± 6% sched_debug.cpu.ttwu_local.19
2790792 ± 16% +60.4% 4476856 ± 4% sched_debug.cpu.ttwu_local.2
3208488 ± 16% +36.0% 4363319 ± 5% sched_debug.cpu.ttwu_local.20
3323090 ± 6% +33.4% 4432674 ± 8% sched_debug.cpu.ttwu_local.21
2920237 ± 17% +50.4% 4391940 ± 3% sched_debug.cpu.ttwu_local.22
3133540 ± 6% +38.6% 4342196 ± 10% sched_debug.cpu.ttwu_local.23
3082052 ± 15% +40.8% 4340369 ± 5% sched_debug.cpu.ttwu_local.24
3410451 ± 2% +30.2% 4440794 ± 5% sched_debug.cpu.ttwu_local.25
2965013 ± 15% +44.9% 4295580 ± 5% sched_debug.cpu.ttwu_local.26
3140505 ± 7% +40.7% 4420128 ± 7% sched_debug.cpu.ttwu_local.27
3107859 ± 16% +40.0% 4352409 ± 5% sched_debug.cpu.ttwu_local.28
3558229 ± 8% +23.2% 4383658 ± 5% sched_debug.cpu.ttwu_local.29
3056265 ± 8% +48.4% 4534490 ± 5% sched_debug.cpu.ttwu_local.3
3064582 ± 16% +41.9% 4348948 ± 2% sched_debug.cpu.ttwu_local.30
3209662 ± 6% +36.0% 4366452 ± 6% sched_debug.cpu.ttwu_local.31
3525265 ± 14% +29.9% 4578608 ± 2% sched_debug.cpu.ttwu_local.32
3649268 ± 5% +25.8% 4592214 ± 5% sched_debug.cpu.ttwu_local.33
3181985 ± 18% +44.3% 4590269 ± 4% sched_debug.cpu.ttwu_local.34
3503558 ± 6% +34.7% 4719199 ± 5% sched_debug.cpu.ttwu_local.35
3510966 ± 13% +24.9% 4384477 ± 4% sched_debug.cpu.ttwu_local.36
3801989 ± 7% +19.7% 4550261 ± 5% sched_debug.cpu.ttwu_local.37
3197749 ± 14% +34.7% 4306364 ± 0% sched_debug.cpu.ttwu_local.38
3452371 ± 6% +27.1% 4387164 ± 7% sched_debug.cpu.ttwu_local.39
3106534 ± 14% +38.5% 4301950 ± 3% sched_debug.cpu.ttwu_local.4
3410508 ± 14% +27.4% 4345186 ± 5% sched_debug.cpu.ttwu_local.40
3854125 ± 6% +18.4% 4562285 ± 6% sched_debug.cpu.ttwu_local.41
3210250 ± 14% +38.7% 4453936 ± 3% sched_debug.cpu.ttwu_local.42
3393670 ± 7% +30.2% 4419376 ± 6% sched_debug.cpu.ttwu_local.43
3504231 ± 13% +24.3% 4356255 ± 4% sched_debug.cpu.ttwu_local.44
3828003 ± 6% +18.7% 4542494 ± 6% sched_debug.cpu.ttwu_local.45
3285330 ± 14% +31.1% 4305974 ± 3% sched_debug.cpu.ttwu_local.46
3457275 ± 11% +28.7% 4449353 ± 6% sched_debug.cpu.ttwu_local.47
3352884 ± 17% +32.3% 4436488 ± 4% sched_debug.cpu.ttwu_local.48
3674474 ± 4% +22.3% 4495693 ± 5% sched_debug.cpu.ttwu_local.49
3448117 ± 5% +27.9% 4408576 ± 7% sched_debug.cpu.ttwu_local.5
3300328 ± 15% +32.4% 4368728 ± 4% sched_debug.cpu.ttwu_local.50
3359540 ± 6% +31.7% 4426052 ± 6% sched_debug.cpu.ttwu_local.51
3398675 ± 15% +32.1% 4488804 ± 5% sched_debug.cpu.ttwu_local.52
3744113 ± 7% +22.3% 4579796 ± 5% sched_debug.cpu.ttwu_local.53
3186989 ± 15% +40.5% 4477419 ± 2% sched_debug.cpu.ttwu_local.54
3578762 ± 8% +25.2% 4478923 ± 9% sched_debug.cpu.ttwu_local.55
3390671 ± 15% +30.9% 4437633 ± 4% sched_debug.cpu.ttwu_local.56
3860784 ± 2% +17.4% 4534386 ± 4% sched_debug.cpu.ttwu_local.57
3205789 ± 12% +37.7% 4414254 ± 5% sched_debug.cpu.ttwu_local.58
3395352 ± 7% +32.7% 4506999 ± 7% sched_debug.cpu.ttwu_local.59
2915811 ± 16% +45.4% 4240300 ± 1% sched_debug.cpu.ttwu_local.6
3562805 ± 14% +23.5% 4401745 ± 6% sched_debug.cpu.ttwu_local.60
3715936 ± 6% +21.2% 4503014 ± 5% sched_debug.cpu.ttwu_local.61
3204936 ± 14% +39.2% 4461506 ± 2% sched_debug.cpu.ttwu_local.62
3507592 ± 4% +28.1% 4492218 ± 6% sched_debug.cpu.ttwu_local.63
3202967 ± 13% +37.3% 4398303 ± 4% sched_debug.cpu.ttwu_local.64
3483280 ± 7% +30.7% 4554002 ± 5% sched_debug.cpu.ttwu_local.65
3138219 ± 15% +39.9% 4389905 ± 4% sched_debug.cpu.ttwu_local.66
3153951 ± 6% +41.6% 4466967 ± 5% sched_debug.cpu.ttwu_local.67
3229444 ± 12% +37.5% 4441388 ± 5% sched_debug.cpu.ttwu_local.68
3394709 ± 3% +32.3% 4489810 ± 5% sched_debug.cpu.ttwu_local.69
3230002 ± 7% +33.4% 4308068 ± 6% sched_debug.cpu.ttwu_local.7
3056815 ± 13% +43.4% 4382407 ± 3% sched_debug.cpu.ttwu_local.70
3223738 ± 6% +37.6% 4437307 ± 7% sched_debug.cpu.ttwu_local.71
3460274 ± 13% +29.3% 4473753 ± 4% sched_debug.cpu.ttwu_local.72
3860628 ± 6% +19.2% 4600629 ± 4% sched_debug.cpu.ttwu_local.73
3361754 ± 15% +31.7% 4427085 ± 4% sched_debug.cpu.ttwu_local.74
3375963 ± 9% +32.4% 4468587 ± 5% sched_debug.cpu.ttwu_local.75
3416899 ± 13% +30.6% 4462381 ± 4% sched_debug.cpu.ttwu_local.76
3725021 ± 2% +22.7% 4571798 ± 4% sched_debug.cpu.ttwu_local.77
3301968 ± 9% +35.0% 4458653 ± 2% sched_debug.cpu.ttwu_local.78
3438172 ± 10% +31.5% 4519809 ± 8% sched_debug.cpu.ttwu_local.79
3296672 ± 16% +29.4% 4267155 ± 6% sched_debug.cpu.ttwu_local.8
3480776 ± 7% +27.5% 4437332 ± 5% sched_debug.cpu.ttwu_local.9
3329473 ± 7% +33.0% 4429258 ± 4% sched_debug.cpu.ttwu_local.avg
2541283 ± 13% +56.8% 3983513 ± 5% sched_debug.cpu.ttwu_local.min
414648 ± 17% -57.3% 176872 ± 3% sched_debug.cpu.ttwu_local.stddev
0.00 ± 85% -79.4% 0.00 ± 10% sched_debug.rt_rq:/.rt_time.24
0.03 ±149% -96.4% 0.00 ± 27% sched_debug.rt_rq:/.rt_time.4
0.00 ± 58% -80.4% 0.00 ± 27% sched_debug.rt_rq:/.rt_time.9
lkp-wsx02: Westmere-EX
Memory: 128G
[ASCII trend plots over the commit range omitted here; "*" marks
samples from the parent commit and "O" samples from 759c01142a.
Panels: one untitled leading panel, softirqs.RCU, cpuidle.C6-NHM.time,
turbostat.Avg_MHz, turbostat.%Busy, turbostat.CPU%c6, four further
untitled panels, hackbench.throughput,
hackbench.time.percent_of_cpu_this_job_got, and
numa-numastat.node1.numa_hit (plot truncated in the archive).]
| + :* +: + :.** * |
|* ** *: * * * ** : * ** * : *. * : |
8e+07 +++ + : :: * + : * :: : : :: : : : ** + : * *
|: ** : : :: * * :: : *.* :: *.* : * * + |
6e+07 ++ : * :: :: :: :: * *|
| : : * * : : |
4e+07 *+ : : * * |
| : : |
| :: |
2e+07 ++ : |
O : |
0 +O-OO-OO-OOO-OO-OO-OO-OOO-----------------------------------------+
numa-numastat.node1.local_node
1.2e+08 ++----------------------------------------------------------------+
| |
1e+08 ++ * * * |
| + :* +: + :.** * |
|* ** *: * * * ** : * ** * : *. * : |
8e+07 +++ + : :: * + : * :: : : :: : : : ** + : * *
|: ** : : :: * * :: : *.* :: *.* : * * + |
6e+07 ++ : * :: :: :: :: * *|
| : : * * : : |
4e+07 *+ : : * * |
| : : |
| :: |
2e+07 ++ : |
O : |
0 +O-OO-OO-OOO-OO-OO-OO-OOO-----------------------------------------+
numa-numastat.node2.numa_hit
9e+07 ++------------------------------------------------------------------+
| * * * * |
8e+07 +*.* + : * **. :: **. :+ : |
7e+07 ++ * * *. : : * * : * * : : *.* .* |
|: : : * * *.* :: : : : : : * : * : *|
6e+07 ++ :: : :: : : :: : : : : + *. *.* : :*
5e+07 ++ * : :*.* : : : *: * * * :: |
* : : * : ** * :: |
4e+07 ++ : : :+ * |
3e+07 ++ :: * |
| :: |
2e+07 ++ :: |
1e+07 ++ : |
| : |
0 OO-OO-OO-OO-OO-OO-OO-OO-OO------------------------------------------+
numa-numastat.node2.local_node
9e+07 ++------------------------------------------------------------------+
| * * * * |
8e+07 +*.* + : * **. :: **. :+ : |
7e+07 ++ * * *. : : * * : * * : : *.* .* |
|: : : * * *.* :: : : : : : * : * : *|
6e+07 ++ :: : :: : : :: : : : : + *. *.* : :*
5e+07 ++ * : :*.* : : : *: * * * :: |
* : : * : ** * :: |
4e+07 ++ : : :+ * |
3e+07 ++ :: * |
| :: |
2e+07 ++ :: |
1e+07 ++ : |
| : |
0 OO-OO-OO-OO-OO-OO-OO-OO-OO------------------------------------------+
numa-numastat.node3.numa_hit
1e+08 ++------------------------------------------------*-----------------+
9e+07 ++ :: |
|* * * : : *|
8e+07 +++ * :: :: * : * * :|
7e+07 ++ * * :: * .* * : *. * : *. .* : ** : :+: : |
|: *.* : :: * + * : : : ** : : ** : : + : * : : |
6e+07 ++ :: * :* *.* :: : : : :: ** *.** *
5e+07 ++ : : : :: :: : : : |
4e+07 ++ * : : * : *: * |
* : : * * |
3e+07 ++ :: |
2e+07 ++ :: |
| : |
1e+07 ++ : |
0 OO-OO-OO-OO-OO-OO-OO-OO-OO------------------------------------------+
numa-numastat.node3.local_node
1e+08 ++------------------------------------------------*-----------------+
9e+07 ++ :: |
|* * * : : *|
8e+07 +++ * :: :: * : * * :|
7e+07 ++ * * :: * .* * : *. * : *. .* : ** : :+: : |
|: *.* : :: * + * : : : ** : : ** : : + : * : : |
6e+07 ++ :: * :* *.* :: : : : :: ** *.** *
5e+07 ++ : : : :: :: : : : |
4e+07 ++ * : : * : *: * |
* : : * * |
3e+07 ++ :: |
2e+07 ++ :: |
| : |
1e+07 ++ : |
0 OO-OO-OO-OO-OO-OO-OO-OO-OO------------------------------------------+
numa-vmstat.node0.numa_hit
5e+07 ++---------------*------------------------------------------------+
4.5e+07 ++ : |
| * : : .* .* |
4e+07 ++ * : * * : * * : * * : * * |
3.5e+07 +++ : +: : : :: * : : +: : : + * * * + : |
|* ** :* : : : : : * ** : * ** + : * : * *. |
3e+07 ++ :: : ** * : :: :: **. : : : *|
2.5e+07 ++ *: * :: :: :: ** * *
2e+07 *+ : : * * * |
| : : |
1.5e+07 ++ : : |
1e+07 ++ :: |
| : |
5e+06 ++ : |
0 OO-OO-OO-OOO-OO-OO-OO-OOO-----------------------------------------+
numa-vmstat.node0.numa_local
5e+07 ++---------------*------------------------------------------------+
4.5e+07 ++ : |
| * : : .* .* |
4e+07 ++ * : * * : * * : * * : * * |
3.5e+07 +++ : +: : : :: * : : +: : : + * * * + : |
|* ** :* : : : : : * ** : * ** + : * : * *. |
3e+07 ++ :: : ** * : :: :: **. : : : *|
2.5e+07 ++ *: * :: :: :: ** * *
2e+07 *+ : : * * * |
| : : |
1.5e+07 ++ : : |
1e+07 ++ :: |
| : |
5e+06 ++ : |
0 OO-OO-OO-OOO-OO-OO-OO-OOO-----------------------------------------+
numa-vmstat.node1.numa_hit
6e+07 ++------------------------------------------------------------------+
| .* .* * |
5e+07 ++ * : * : * : |
| : : : : + : |
|* .* : * * * * * : * * :*.* * |
4e+07 ++: * *.* : :: .* : :: * ::: * * :+ : *.** *. |
| : : : * : :: * : : :: :+ :: :+ * *.* + : **
3e+07 *+ * : : :: * :: * * * * * |
| : : : : |
2e+07 ++ : : * * |
| :: |
| :: |
1e+07 ++ :: |
O : |
0 +O-OO-OO-OO-OO-OO-OO-OO-OO------------------------------------------+
numa-vmstat.node1.numa_local
6e+07 ++------------------------------------------------------------------+
| .* .* * |
5e+07 ++ * : * : * : |
| : : : : + : |
|* .* : * * * * * : * * :*.* * |
4e+07 ++: * *.* : :: .* : :: * ::: * * :+ : *.** *. |
| : : : * : :: * : : :: :+ :: :+ * *.* + : **
3e+07 *+ * : : :: * :: * * * * * |
| : : : : |
2e+07 ++ : : * * |
| :: |
| :: |
1e+07 ++ :: |
O : |
0 +O-OO-OO-OO-OO-OO-OO-OO-OO------------------------------------------+
numa-vmstat.node2.numa_hit
5e+07 ++----------------------------------------------------------------+
4.5e+07 ++ * * * * |
|*. + : : * :: * :: * |
4e+07 ++ ** * * : + *.* * + *.* * :+ *. |
3.5e+07 *+ : *: * : * : * : : * .***.* * |
| : :: * :* : : : : : :: : * : **
3e+07 ++ * : :+ : +: : * * * : : :: |
2.5e+07 ++ : : * : * :: :.* :: |
2e+07 ++ : : * :: * * |
| : : * |
1.5e+07 ++ :: |
1e+07 ++ :: |
| : |
5e+06 ++ : |
0 OO-OO-OO-OOO-OO-OO-OO-OOO-----------------------------------------+
numa-vmstat.node2.numa_local
5e+07 ++----------------------------------------------------------------+
4.5e+07 ++ * * * * |
|*. + : : * :: * :: * |
4e+07 ++ ** * : + *.* * + *.* * :+ |
3.5e+07 *+ : ** * : * : * : : * .***.**.* |
| : :: * :* : : : : : :: : * : **
3e+07 ++ * : :+ : +: : * * * : : :: |
2.5e+07 ++ : : * : * :: :.* :: |
2e+07 ++ : : * :: * * |
| : : * |
1.5e+07 ++ :: |
1e+07 ++ :: |
| : |
5e+06 ++ : |
0 OO-OO-OO-OOO-OO-OO-OO-OOO-----------------------------------------+
numa-vmstat.node3.numa_hit
6e+07 ++------------------------------------------------------------------+
| |
5e+07 ++ * |
|* * * :: |
|:: * : : : : * *|
4e+07 ++: .** * :: * : : ** : : *.* * :* : * ::|
| ** : :+ * : : ::: *.* : : *.* : : * + : :: : : |
3e+07 *+ : : * ::.** *.* : * : * : : * : :: *.** *
| :: : :* :: : : :: * * |
2e+07 ++ : : : * : : |
| * : : * * |
| :: |
1e+07 ++ :: |
| : |
0 OO-OO-OO-OO-OO-OO-OO-OO-OO------------------------------------------+
numa-vmstat.node3.numa_local
6e+07 ++------------------------------------------------------------------+
| |
5e+07 ++ * |
|* * * : |
|:: * : : : : * *|
4e+07 ++: .** * :: * : : ** : : *.* * : * : * :|
| ** : :+ * : : ::: *.* : : *.* : : ::+ : :: : : |
3e+07 *+ : : * ::.** :.* : * : * : : * * : :: *.* : *
| :: : :* * :: : : :: * * * |
2e+07 ++ : : : * : : |
| * : : * * |
| :: |
1e+07 ++ :: |
| : |
0 OO-OO-OO-OO-OO-OO-OO-OO-OO------------------------------------------+
proc-vmstat.numa_hit
3.5e+08 ++----------------------------------------------------------------+
| *. |
3e+08 +*.**.**. * **.***. **.***. : * .* *|
|: * * .**. *.** : * : * *. .*** *.* :|
2.5e+08 ++ : * * : * : : : ** : : *
|: : : : : : : : :: |
2e+08 ++ : : :: :: :: * |
| : : :: : : |
1.5e+08 *+ : : * * * |
| : : |
1e+08 ++ :: |
| :: |
5e+07 ++ : |
| : |
0 OO-OO-OO-OOO-OO-OO-OO-OOO-----------------------------------------+
proc-vmstat.numa_local
3.5e+08 ++----------------------------------------------------------------+
| *. |
3e+08 +*.**.**. * **.***. **.***. : * .* *|
|: * * .**. *.** : * : * *. .*** *.* :|
2.5e+08 ++ : * * : * : : : ** : : *
|: : : : : : : : :: |
2e+08 ++ : : :: :: :: * |
| : : :: : : |
1.5e+08 *+ : : * * * |
| : : |
1e+08 ++ :: |
| :: |
5e+07 ++ : |
| : |
0 OO-OO-OO-OOO-OO-OO-OO-OOO-----------------------------------------+
proc-vmstat.pgalloc_normal
3.5e+08 ++----------------------------------------------------------------+
| |
3e+08 +*. .**. * **. * **. *.* *|
|: ** *** .* *.** :*.* * :*.* * *. .***.**.* :|
2.5e+08 ++ : * *.* : * : : : ** : : *
|: : : : : : : : :: |
2e+08 ++ : : :: :: :: * |
| : : :: : : |
1.5e+08 *+ : : * * * |
| : : |
1e+08 ++ :: |
| :: |
5e+07 ++ : |
| : |
0 OO-OO-OO-OOO-OO-OO-OO-OOO-----------------------------------------+
proc-vmstat.pgfree
3.5e+08 ++----------------------------------------------------------------+
|*. * * *.* |
3e+08 ++ **.**. * :*.***. :*.***. : : .* *|
|: * * .**. *.** : * : * *.* .*** *.* : |
2.5e+08 ++ : * * : * : : : * :: *
| : : : : : : : * |
2e+08 ++ : : :: :: :: |
| : : :: : : |
1.5e+08 *+ : : * * * |
| : : |
1e+08 ++ :: |
| :: |
5e+07 ++ : |
| : |
0 OO-OO-OO-OOO-OO-OO-OO-OOO-----------------------------------------+
slabinfo.taskstats.active_objs
450 ++--------------------------------------------------------------------+
| .* |
400 ++* *.* .* *. .*.* *. .**. * : *. * |
350 ++ *.* * * .* .* ** : ** * : ** * :.* **. : **. ::|
|: : * * :+ : *.: +: * :+ * * :|
300 *+ : : * : : * * * *
250 ++ : : : : |
| : : : |
200 ++OO OO OO OO:O: O OO O O |
150 O+ : : O |
| :: |
100 ++ :: |
50 ++ : |
| : |
0 ++------------*O--O---------------------------------------------------+
slabinfo.taskstats.num_objs
450 ++--------------------------------------------------------------------+
| .* |
400 ++* *.* .* *. .*.* *. .**. * : *. * |
350 ++ *.* * * .* .* ** : ** * : ** * :.* **. : **. ::|
|: : * * :+ : *.: +: * :+ * * :|
300 *+ : : * : : * * * *
250 ++ : : : : |
| : : : |
200 ++OO OO OO OO:O: O OO O O |
150 O+ : : O |
| :: |
100 ++ :: |
50 ++ : |
| : |
0 ++------------*O--O---------------------------------------------------+
kmsg.DMAR-IR:Failed_to_copy_IR_table_for_dmar0_from_previous_kernel
1 *+-*-*------------------------------------------*-*--*-----------*--*-*
| : : : : |
| : : : : |
0.8 ++ : : : : |
| : : : : |
| : : : : |
0.6 ++ : : : : |
| : : : : |
0.4 ++ : : : : |
| : : : : |
| : : : : |
0.2 ++ : : : : |
| : : : : |
| : : : : |
0 ++------*-*--*-*--*-*--*-*--*-*--*-*--*-*--*-*---------*--*-*--*------+
kmsg.DMAR-IR:Failed_to_copy_IR_table_for_dmar1_from_previous_kernel
1 *+-*-*------------------------------------------*-*--*-----------*--*-*
| : : : : |
| : : : : |
0.8 ++ : : : : |
| : : : : |
| : : : : |
0.6 ++ : : : : |
| : : : : |
0.4 ++ : : : : |
| : : : : |
| : : : : |
0.2 ++ : : : : |
| : : : : |
| : : : : |
0 ++------*-*--*-*--*-*--*-*--*-*--*-*--*-*--*-*---------*--*-*--*------+
kmsg.DMAR-IR:IRQ_remapping_was_enabled_on_dmar0_but_we_are_not_in_kdump_mode
1 ++-*-*--------------------------------------------*--*--------------*-*
| : : : : : |
| : : : : : |
0.8 ++: : : : : |
| : : : : : |
| : : : : : |
0.6 ++: : : : : |
| : : : : : |
0.4 ++ : : : : |
|: : : : : |
|: : : : : |
0.2 ++ : : : : |
| : : : : |
| : : : : |
0 *+------*-*--*-*--*-*--*-*--*-*--*-*--*-*--*-*--*------*--*-*--*-*----+
kmsg.DMAR-IR:IRQ_remapping_was_enabled_on_dmar1_but_we_are_not_in_kdump_mode
1 ++-*-*--------------------------------------------*--*--------------*-*
| : : : : : |
| : : : : : |
0.8 ++: : : : : |
| : : : : : |
| : : : : : |
0.6 ++: : : : : |
| : : : : : |
0.4 ++ : : : : |
|: : : : : |
|: : : : : |
0.2 ++ : : : : |
| : : : : |
| : : : : |
0 *+------*-*--*-*--*-*--*-*--*-*--*-*--*-*--*-*--*------*--*-*--*-*----+
kmsg.DHCP_BOOTP:Reply_not_for_us_op___xid___
1 ++--------------------------------------------------------------------*
| |
| :|
0.8 ++ :|
| :|
| : |
0.6 ++ : |
| : |
0.4 ++ : |
| : |
| : |
0.2 ++ : |
| : |
| : |
0 *+---*---*----*----*---*----*----*---*----*----*---*----*----*---*----+
[*] bisect-good sample
[O] bisect-bad sample
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang
[lkp] [kallsyms] bf2d2b07db: kasan: GPF could be caused by NULL-ptr deref or user memory accessgeneral protection fault: 0000 [#1] general protection fault: 0000 [#1] PREEMPT PREEMPT KASANKASAN
by kernel test robot
FYI, we noticed the below changes on
https://git.linaro.org/people/ard.biesheuvel/linux-arm arm64-kaslr-v4a
commit bf2d2b07db19001ae0bd55826025b0ba47fae0c2 ("kallsyms: add support for relative offsets in kallsyms address table")
+-----------------------------------------------------------+------------+------------+
| | 2c4d21df0f | bf2d2b07db |
+-----------------------------------------------------------+------------+------------+
| boot_successes | 10 | 0 |
| boot_failures | 6 | 36 |
| Kernel_panic-not_syncing:Attempted_to_kill_init!exitcode= | 2 | |
| IP-Config:Auto-configuration_of_network_failed | 4 | |
| general_protection_fault:#[##]PREEMPT_PREEMPT_KASANKASAN | 0 | 36 |
| general_protection_fault:#[##] | 0 | 36 |
| BUG:kernel_boot_hang | 0 | 36 |
+-----------------------------------------------------------+------------+------------+
[ 0.281636] kasan: CONFIG_KASAN_INLINE enabled
[ 0.282416] kasan: GPF could be caused by NULL-ptr deref or user memory access
[ 0.282416] kasan: GPF could be caused by NULL-ptr deref or user memory accessgeneral protection fault: 0000 [#1] general protection fault: 0000 [#1] PREEMPT PREEMPT KASANKASAN
[ 0.284561] Modules linked in:
[ 0.284561] Modules linked in:
[ 0.285136] CPU: 0 PID: 1 Comm: swapper Not tainted 4.5.0-rc1-00036-gbf2d2b0 #1
[ 0.285136] CPU: 0 PID: 1 Comm: swapper Not tainted 4.5.0-rc1-00036-gbf2d2b0 #1
[ 0.286438] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 0.286438] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 0.288000] task: ffff88000fcb0000 ti: ffff88000fcb8000 task.ti: ffff88000fcb8000
[ 0.288000] task: ffff88000fcb0000 ti: ffff88000fcb8000 task.ti: ffff88000fcb8000
[ 0.289287] RIP: 0010:[<ffffffff8120688c>]
Thanks,
Ying Huang
Re: [LKP] [slab] a1fd55538c: WARNING: CPU: 0 PID: 0 at kernel/locking/lockdep.c:2601 trace_hardirqs_on_caller()
by Jesper Dangaard Brouer
On Sat, 30 Jan 2016 02:09:30 -0500
Valdis.Kletnieks(a)vt.edu wrote:
> On Thu, 28 Jan 2016 18:47:49 +0100, Jesper Dangaard Brouer said:
> > I cannot reproduce below problem... have enabled all kind of debugging
> > and also lockdep.
> >
> > Can I get a version of the .config file used?
>
> I'm not the 0day bot, but my laptop hits the same issue at boot.
Thank you! I'm now able to reproduce, and I've found the issue. It only
happens for SLAB, and with FAILSLAB disabled.
The problem was introduced by the preceding patch:
http://ozlabs.org/~akpm/mmots/broken-out/mm-fault-inject-take-over-bootst...
which moved the check function:
static bool slab_should_failslab(struct kmem_cache *cachep, gfp_t flags)
{
	if (unlikely(cachep == kmem_cache))
		return false;

	return should_failslab(cachep->object_size, flags, cachep->flags);
}
into the fault-injection framework's should_failslab().
That change was wrong, as some very early boot code depends on SLAB
failing while it is still allocating from the bootstrap kmem_cache. SLUB
seems to handle this better.
In this case the percpu system has a workqueue function that calls
pcpu_extend_area_map(), which effectively probes the slab allocator and
depends on it failing until the allocator is fully ready.
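For illustration, here is a minimal sketch of the intended fix, assuming
the bootstrap check is simply restored in SLAB's allocation path before
the shared fault-injection hook is consulted (the helper name below is
hypothetical, and the exact placement in mm/slab.c may differ):

static inline bool slab_pre_alloc_should_fail(struct kmem_cache *cachep,
					      gfp_t flags)
{
	/*
	 * Hypothetical helper: never inject failures while still
	 * allocating from the bootstrap kmem_cache. Early boot code
	 * such as the percpu allocator's pcpu_extend_area_map()
	 * probes the slab allocator and relies on it failing cleanly
	 * until the allocator is fully up.
	 */
	if (unlikely(cachep == kmem_cache))
		return false;

	return should_failslab(cachep->object_size, flags, cachep->flags);
}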
I will fix up my patches, reverting this change, and let them go
through Andrew's quilt process.
Let me know if the linux-next tree needs an explicit fix.
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
Author of http://www.iptv-analyzer.org
LinkedIn: http://www.linkedin.com/in/brouer
[lkp] [locks] 7f3697e24d: +35.1% will-it-scale.per_thread_ops
by kernel test robot
FYI, we noticed the below changes on
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit 7f3697e24dc3820b10f445a4a7d914fc356012d1 ("locks: fix unlock when fcntl_setlk races with a close")
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/tbox_group/test/testcase:
gcc-4.9/performance/x86_64-rhel/debian-x86_64-2015-02-07.cgz/lkp-snb01/lock1/will-it-scale
commit:
9189922675ecca0fab38931d86b676e9d79602dc
7f3697e24dc3820b10f445a4a7d914fc356012d1
9189922675ecca0f 7f3697e24dc3820b10f445a4a7
---------------- --------------------------
%stddev %change %stddev
\ | \
2376432 ± 0% +2.1% 2427484 ± 0% will-it-scale.per_process_ops
807889 ± 0% +35.1% 1091496 ± 4% will-it-scale.per_thread_ops
22.08 ± 2% +89.1% 41.75 ± 5% will-it-scale.time.user_time
1238371 ± 14% +100.4% 2481345 ± 39% cpuidle.C1E-SNB.time
3098 ± 57% -66.6% 1035 ±171% numa-numastat.node1.other_node
379.25 ± 8% -21.4% 298.00 ± 12% numa-vmstat.node0.nr_alloc_batch
22.08 ± 2% +89.1% 41.75 ± 5% time.user_time
1795 ± 4% +7.5% 1930 ± 2% vmstat.system.cs
0.54 ± 5% +136.9% 1.28 ± 10% perf-profile.cycles.___might_sleep.__might_sleep.kmem_cache_alloc.locks_alloc_lock.__posix_lock_file
1.65 ± 57% +245.2% 5.70 ± 29% perf-profile.cycles.__fdget_raw.sys_fcntl.entry_SYSCALL_64_fastpath
1.58 ± 59% +248.3% 5.50 ± 31% perf-profile.cycles.__fget.__fget_light.__fdget_raw.sys_fcntl.entry_SYSCALL_64_fastpath
1.62 ± 58% +246.3% 5.63 ± 30% perf-profile.cycles.__fget_light.__fdget_raw.sys_fcntl.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 5.88 ± 11% perf-profile.cycles.__memset.locks_alloc_lock.__posix_lock_file.vfs_lock_file.do_lock_file_wait
2.50 ± 2% -100.0% 0.00 ± -1% perf-profile.cycles.__memset.locks_alloc_lock.__posix_lock_file.vfs_lock_file.fcntl_setlk
1.29 ± 4% +138.8% 3.09 ± 11% perf-profile.cycles.__memset.locks_alloc_lock.fcntl_setlk.sys_fcntl.entry_SYSCALL_64_fastpath
0.47 ± 9% +144.4% 1.16 ± 11% perf-profile.cycles.__might_fault.fcntl_setlk.sys_fcntl.entry_SYSCALL_64_fastpath
0.37 ± 12% +140.3% 0.90 ± 9% perf-profile.cycles.__might_sleep.__might_fault.fcntl_setlk.sys_fcntl.entry_SYSCALL_64_fastpath
0.86 ± 6% +137.7% 2.05 ± 10% perf-profile.cycles.__might_sleep.kmem_cache_alloc.locks_alloc_lock.__posix_lock_file.vfs_lock_file
0.61 ± 14% +56.8% 0.95 ± 14% perf-profile.cycles.__might_sleep.kmem_cache_alloc.locks_alloc_lock.fcntl_setlk.sys_fcntl
0.00 ± -1% +Inf% 39.84 ± 12% perf-profile.cycles.__posix_lock_file.vfs_lock_file.do_lock_file_wait.fcntl_setlk.sys_fcntl
16.44 ± 3% -100.0% 0.00 ± -1% perf-profile.cycles.__posix_lock_file.vfs_lock_file.fcntl_setlk.sys_fcntl.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 1.77 ± 11% perf-profile.cycles._raw_spin_lock.__posix_lock_file.vfs_lock_file.do_lock_file_wait.fcntl_setlk
59.34 ± 1% -72.4% 16.36 ± 33% perf-profile.cycles._raw_spin_lock.fcntl_setlk.sys_fcntl.entry_SYSCALL_64_fastpath
0.46 ± 11% +144.9% 1.13 ± 19% perf-profile.cycles.avc_has_perm.inode_has_perm.file_has_perm.selinux_file_fcntl.security_file_fcntl
0.87 ± 6% +103.2% 1.77 ± 12% perf-profile.cycles.avc_has_perm.inode_has_perm.file_has_perm.selinux_file_lock.security_file_lock
0.81 ± 4% +135.7% 1.90 ± 10% perf-profile.cycles.copy_user_generic_string.sys_fcntl.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 41.86 ± 12% perf-profile.cycles.do_lock_file_wait.part.29.fcntl_setlk.sys_fcntl.entry_SYSCALL_64_fastpath
0.88 ± 6% +127.8% 2.00 ± 9% perf-profile.cycles.entry_SYSCALL_64
0.86 ± 4% +122.6% 1.92 ± 12% perf-profile.cycles.entry_SYSCALL_64_after_swapgs
84.98 ± 0% -9.1% 77.20 ± 2% perf-profile.cycles.fcntl_setlk.sys_fcntl.entry_SYSCALL_64_fastpath
0.76 ± 10% +142.1% 1.84 ± 14% perf-profile.cycles.file_has_perm.selinux_file_fcntl.security_file_fcntl.sys_fcntl.entry_SYSCALL_64_fastpath
1.35 ± 4% +106.3% 2.78 ± 11% perf-profile.cycles.file_has_perm.selinux_file_lock.security_file_lock.fcntl_setlk.sys_fcntl
0.00 ± -1% +Inf% 0.89 ± 12% perf-profile.cycles.flock_to_posix_lock.fcntl_setlk.sys_fcntl.entry_SYSCALL_64_fastpath
6.90 ± 4% -48.6% 3.55 ± 27% perf-profile.cycles.fput.entry_SYSCALL_64_fastpath
0.51 ± 10% +140.5% 1.23 ± 16% perf-profile.cycles.inode_has_perm.isra.31.file_has_perm.selinux_file_fcntl.security_file_fcntl.sys_fcntl
0.98 ± 4% +97.7% 1.93 ± 11% perf-profile.cycles.inode_has_perm.isra.31.file_has_perm.selinux_file_lock.security_file_lock.fcntl_setlk
0.00 ± -1% +Inf% 6.56 ± 10% perf-profile.cycles.kmem_cache_alloc.locks_alloc_lock.__posix_lock_file.vfs_lock_file.do_lock_file_wait
2.75 ± 4% -100.0% 0.00 ± -1% perf-profile.cycles.kmem_cache_alloc.locks_alloc_lock.__posix_lock_file.vfs_lock_file.fcntl_setlk
1.53 ± 7% +119.7% 3.37 ± 13% perf-profile.cycles.kmem_cache_alloc.locks_alloc_lock.fcntl_setlk.sys_fcntl.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 1.79 ± 11% perf-profile.cycles.kmem_cache_free.locks_free_lock.__posix_lock_file.vfs_lock_file.do_lock_file_wait
0.46 ± 14% +257.0% 1.66 ± 11% perf-profile.cycles.kmem_cache_free.locks_free_lock.fcntl_setlk.sys_fcntl.entry_SYSCALL_64_fastpath
0.40 ± 7% +158.6% 1.05 ± 17% perf-profile.cycles.kmem_cache_free.locks_free_lock.locks_dispose_list.__posix_lock_file.vfs_lock_file
0.00 ± -1% +Inf% 0.96 ± 10% perf-profile.cycles.lg_local_lock.locks_insert_lock_ctx.__posix_lock_file.vfs_lock_file.do_lock_file_wait
0.00 ± -1% +Inf% 14.69 ± 10% perf-profile.cycles.locks_alloc_lock.__posix_lock_file.vfs_lock_file.do_lock_file_wait.fcntl_setlk
6.38 ± 3% -100.0% 0.00 ± -1% perf-profile.cycles.locks_alloc_lock.__posix_lock_file.vfs_lock_file.fcntl_setlk.sys_fcntl
3.28 ± 6% +127.1% 7.45 ± 12% perf-profile.cycles.locks_alloc_lock.fcntl_setlk.sys_fcntl.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 9.75 ± 13% perf-profile.cycles.locks_delete_lock_ctx.__posix_lock_file.vfs_lock_file.do_lock_file_wait.fcntl_setlk
3.61 ± 1% -100.0% 0.00 ± -1% perf-profile.cycles.locks_delete_lock_ctx.__posix_lock_file.vfs_lock_file.fcntl_setlk.sys_fcntl
0.00 ± -1% +Inf% 1.84 ± 11% perf-profile.cycles.locks_dispose_list.__posix_lock_file.vfs_lock_file.do_lock_file_wait.fcntl_setlk
0.00 ± -1% +Inf% 2.42 ± 10% perf-profile.cycles.locks_free_lock.__posix_lock_file.vfs_lock_file.do_lock_file_wait.fcntl_setlk
1.00 ± 3% -100.0% 0.00 ± -1% perf-profile.cycles.locks_free_lock.__posix_lock_file.vfs_lock_file.fcntl_setlk.sys_fcntl
0.63 ± 11% +224.1% 2.05 ± 10% perf-profile.cycles.locks_free_lock.fcntl_setlk.sys_fcntl.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 1.22 ± 14% perf-profile.cycles.locks_free_lock.locks_dispose_list.__posix_lock_file.vfs_lock_file.do_lock_file_wait
0.00 ± -1% +Inf% 6.17 ± 15% perf-profile.cycles.locks_insert_lock_ctx.__posix_lock_file.vfs_lock_file.do_lock_file_wait.fcntl_setlk
2.31 ± 6% -100.0% 0.00 ± -1% perf-profile.cycles.locks_insert_lock_ctx.__posix_lock_file.vfs_lock_file.fcntl_setlk.sys_fcntl
0.00 ± -1% +Inf% 8.96 ± 13% perf-profile.cycles.locks_unlink_lock_ctx.locks_delete_lock_ctx.__posix_lock_file.vfs_lock_file.do_lock_file_wait
3.27 ± 1% -100.0% 0.00 ± -1% perf-profile.cycles.locks_unlink_lock_ctx.locks_delete_lock_ctx.__posix_lock_file.vfs_lock_file.fcntl_setlk
53.88 ± 1% -79.7% 10.92 ± 46% perf-profile.cycles.native_queued_spin_lock_slowpath._raw_spin_lock.fcntl_setlk.sys_fcntl.entry_SYSCALL_64_fastpath
2.75 ± 0% +183.3% 7.79 ± 13% perf-profile.cycles.put_pid.locks_unlink_lock_ctx.locks_delete_lock_ctx.__posix_lock_file.vfs_lock_file
1.11 ± 9% +137.2% 2.63 ± 14% perf-profile.cycles.security_file_fcntl.sys_fcntl.entry_SYSCALL_64_fastpath
1.69 ± 4% +118.2% 3.69 ± 11% perf-profile.cycles.security_file_lock.fcntl_setlk.sys_fcntl.entry_SYSCALL_64_fastpath
0.91 ± 9% +139.0% 2.17 ± 14% perf-profile.cycles.selinux_file_fcntl.security_file_fcntl.sys_fcntl.entry_SYSCALL_64_fastpath
1.39 ± 4% +114.6% 2.97 ± 10% perf-profile.cycles.selinux_file_lock.security_file_lock.fcntl_setlk.sys_fcntl.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 41.12 ± 12% perf-profile.cycles.vfs_lock_file.do_lock_file_wait.fcntl_setlk.sys_fcntl.entry_SYSCALL_64_fastpath
17.04 ± 3% -100.0% 0.00 ± -1% perf-profile.cycles.vfs_lock_file.fcntl_setlk.sys_fcntl.entry_SYSCALL_64_fastpath
34.75 ±148% +132.4% 80.75 ± 82% sched_debug.cfs_rq:/.load.8
15.00 ± 9% +198.3% 44.75 ± 72% sched_debug.cfs_rq:/.load_avg.21
25.00 ± 29% +574.0% 168.50 ± 78% sched_debug.cfs_rq:/.load_avg.9
38.47 ± 5% +29.1% 49.65 ± 26% sched_debug.cfs_rq:/.load_avg.avg
63.17 ± 10% +44.3% 91.16 ± 36% sched_debug.cfs_rq:/.load_avg.stddev
893865 ± 12% -12.5% 782455 ± 0% sched_debug.cfs_rq:/.min_vruntime.25
18.25 ± 26% +52.1% 27.75 ± 25% sched_debug.cfs_rq:/.runnable_load_avg.9
-57635 ±-68% -196.4% 55548 ±130% sched_debug.cfs_rq:/.spread0.1
-802264 ±-25% -29.5% -565458 ±-49% sched_debug.cfs_rq:/.spread0.8
-804662 ±-25% -29.4% -567811 ±-48% sched_debug.cfs_rq:/.spread0.min
1233 ± 5% +30.9% 1614 ± 28% sched_debug.cfs_rq:/.tg_load_avg.0
1233 ± 5% +30.9% 1614 ± 28% sched_debug.cfs_rq:/.tg_load_avg.1
1228 ± 5% +30.3% 1601 ± 27% sched_debug.cfs_rq:/.tg_load_avg.10
1228 ± 5% +30.4% 1601 ± 27% sched_debug.cfs_rq:/.tg_load_avg.11
1228 ± 5% +30.3% 1601 ± 27% sched_debug.cfs_rq:/.tg_load_avg.12
1229 ± 5% +30.0% 1598 ± 27% sched_debug.cfs_rq:/.tg_load_avg.13
1228 ± 5% +30.1% 1598 ± 27% sched_debug.cfs_rq:/.tg_load_avg.14
1229 ± 5% +30.0% 1598 ± 27% sched_debug.cfs_rq:/.tg_load_avg.15
1226 ± 5% +30.3% 1598 ± 27% sched_debug.cfs_rq:/.tg_load_avg.16
1226 ± 5% +30.2% 1597 ± 27% sched_debug.cfs_rq:/.tg_load_avg.17
1227 ± 5% +30.1% 1595 ± 27% sched_debug.cfs_rq:/.tg_load_avg.18
1227 ± 5% +29.4% 1588 ± 26% sched_debug.cfs_rq:/.tg_load_avg.19
1233 ± 5% +30.4% 1609 ± 27% sched_debug.cfs_rq:/.tg_load_avg.2
1222 ± 5% +29.9% 1587 ± 26% sched_debug.cfs_rq:/.tg_load_avg.20
1223 ± 5% +24.2% 1519 ± 20% sched_debug.cfs_rq:/.tg_load_avg.21
1223 ± 5% +23.8% 1515 ± 20% sched_debug.cfs_rq:/.tg_load_avg.22
1223 ± 5% +23.9% 1515 ± 20% sched_debug.cfs_rq:/.tg_load_avg.23
1223 ± 5% +23.9% 1515 ± 20% sched_debug.cfs_rq:/.tg_load_avg.24
1223 ± 5% +23.5% 1511 ± 19% sched_debug.cfs_rq:/.tg_load_avg.25
1224 ± 5% +23.5% 1512 ± 19% sched_debug.cfs_rq:/.tg_load_avg.26
1223 ± 5% +23.1% 1506 ± 19% sched_debug.cfs_rq:/.tg_load_avg.27
1223 ± 5% +22.5% 1499 ± 19% sched_debug.cfs_rq:/.tg_load_avg.28
1224 ± 5% +22.5% 1499 ± 19% sched_debug.cfs_rq:/.tg_load_avg.29
1233 ± 5% +30.3% 1607 ± 27% sched_debug.cfs_rq:/.tg_load_avg.3
1223 ± 5% +22.2% 1495 ± 18% sched_debug.cfs_rq:/.tg_load_avg.30
1224 ± 5% +22.0% 1493 ± 19% sched_debug.cfs_rq:/.tg_load_avg.31
1234 ± 5% +30.0% 1604 ± 28% sched_debug.cfs_rq:/.tg_load_avg.4
1233 ± 5% +30.0% 1604 ± 28% sched_debug.cfs_rq:/.tg_load_avg.5
1231 ± 5% +30.3% 1604 ± 28% sched_debug.cfs_rq:/.tg_load_avg.6
1233 ± 5% +30.0% 1603 ± 27% sched_debug.cfs_rq:/.tg_load_avg.7
1231 ± 5% +30.1% 1601 ± 27% sched_debug.cfs_rq:/.tg_load_avg.8
1228 ± 5% +30.3% 1601 ± 27% sched_debug.cfs_rq:/.tg_load_avg.9
1228 ± 5% +27.8% 1569 ± 24% sched_debug.cfs_rq:/.tg_load_avg.avg
1246 ± 5% +30.7% 1628 ± 27% sched_debug.cfs_rq:/.tg_load_avg.max
1212 ± 5% +22.2% 1481 ± 19% sched_debug.cfs_rq:/.tg_load_avg.min
15.00 ± 9% +198.3% 44.75 ± 72% sched_debug.cfs_rq:/.tg_load_avg_contrib.21
25.00 ± 29% +574.0% 168.50 ± 78% sched_debug.cfs_rq:/.tg_load_avg_contrib.9
38.53 ± 5% +29.0% 49.71 ± 26% sched_debug.cfs_rq:/.tg_load_avg_contrib.avg
63.34 ± 10% +44.1% 91.30 ± 36% sched_debug.cfs_rq:/.tg_load_avg_contrib.stddev
532.25 ± 2% +8.5% 577.50 ± 6% sched_debug.cfs_rq:/.util_avg.15
210.75 ± 14% -14.4% 180.50 ± 4% sched_debug.cfs_rq:/.util_avg.29
450.00 ± 22% +50.7% 678.00 ± 18% sched_debug.cfs_rq:/.util_avg.9
955572 ± 4% -10.2% 857813 ± 5% sched_debug.cpu.avg_idle.6
23.99 ± 60% -76.2% 5.71 ± 24% sched_debug.cpu.clock.stddev
23.99 ± 60% -76.2% 5.71 ± 24% sched_debug.cpu.clock_task.stddev
2840 ± 37% -47.4% 1492 ± 65% sched_debug.cpu.curr->pid.25
34.75 ±148% +132.4% 80.75 ± 82% sched_debug.cpu.load.8
61776 ± 7% -7.1% 57380 ± 0% sched_debug.cpu.nr_load_updates.25
6543 ± 2% +20.4% 7879 ± 9% sched_debug.cpu.nr_switches.0
5256 ± 23% +177.1% 14566 ± 52% sched_debug.cpu.nr_switches.27
7915 ± 3% +8.7% 8605 ± 3% sched_debug.cpu.nr_switches.avg
-0.25 ±-519% +1900.0% -5.00 ±-24% sched_debug.cpu.nr_uninterruptible.12
2.00 ± 93% -125.0% -0.50 ±-300% sched_debug.cpu.nr_uninterruptible.24
17468 ± 14% +194.3% 51413 ± 75% sched_debug.cpu.sched_count.15
2112 ± 2% +20.8% 2552 ± 11% sched_debug.cpu.sched_goidle.0
2103 ± 34% +219.0% 6709 ± 55% sched_debug.cpu.sched_goidle.27
3159 ± 3% +8.2% 3418 ± 4% sched_debug.cpu.sched_goidle.avg
1323 ± 64% -72.7% 361.50 ± 15% sched_debug.cpu.ttwu_count.23
3264 ± 12% +94.4% 6347 ± 41% sched_debug.cpu.ttwu_count.27
3860 ± 3% +9.0% 4208 ± 3% sched_debug.cpu.ttwu_count.avg
2358 ± 3% +28.7% 3035 ± 9% sched_debug.cpu.ttwu_local.0
1110 ± 22% +54.6% 1716 ± 28% sched_debug.cpu.ttwu_local.27
1814 ± 8% +16.1% 2106 ± 5% sched_debug.cpu.ttwu_local.stddev
lkp-snb01: Sandy Bridge-EP
Memory: 32G
will-it-scale.per_thread_ops
1.2e+06 ++---------------------------------------------------------------+
| O |
1.15e+06 O+O O O O O O O O |
1.1e+06 ++ |
| O O O O O OO |
1.05e+06 ++ O O |
1e+06 ++ |
| |
950000 ++ |
900000 ++ |
| |
850000 ++ |
800000 *+*.*.*.*.*.*.*.*.*.*.*.*. .*.*. *.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*.*
| * * |
750000 ++---------------------------------------------------------------+
will-it-scale.time.user_time
50 ++---------------------------------------------------------------------+
| |
45 ++ O O O O O O |
O O O O |
| O O O O O |
40 ++ O O O O |
| |
35 ++ |
| |
30 ++ |
| |
| * |
25 ++ + + |
*.*.*.*..*.* *.*.*..*.*.*.*.*.*.*..*.*.*.*.*.*.*..*.*.*.*.*.*..*.*.*.*
20 ++---------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang
[lkp] [wext] 6a9a24baba: BUG: spinlock bad magic on CPU#0, swapper/0/1
by kernel test robot
FYI, we noticed the below changes on
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit 6a9a24baba28599b0bb8453ae6a6a06ad79701a0 ("wext: fix message delay/ordering")
[ 0.821194] e820: reserve RAM buffer [mem 0x167e0000-0x17ffffff]
[ 0.823299] NET: Registered protocol family 23
[ 0.823299] NET: Registered protocol family 23
[ 0.824526] BUG: spinlock bad magic on CPU#0, swapper/0/1
[ 0.824526] BUG: spinlock bad magic on CPU#0, swapper/0/1
[ 0.826041] lock: init_net+0xe78/0xf00, .magic: 00000000, .owner: swapper/0/1, .owner_cpu: 0
[ 0.826041] lock: init_net+0xe78/0xf00, .magic: 00000000, .owner: swapper/0/1, .owner_cpu: 0
[ 0.828388] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.4.0-03435-g6a9a24b #1
[ 0.828388] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.4.0-03435-g6a9a24b #1
[ 0.830342] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 0.830342] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 0.832868] ffffffff886ed7f8
[ 0.832868] ffffffff886ed7f8 ffff8800131b7d88 ffff8800131b7d88 ffffffff87eb8f86 ffffffff87eb8f86 ffff8800131b2040 ffff8800131b2040
[ 0.835143] ffff8800131b7da8
[ 0.835143] ffff8800131b7da8 ffffffff87d25897 ffffffff87d25897 ffffffff886ed7f8 ffffffff886ed7f8 ffffffff886ed7f8 ffffffff886ed7f8
[ 0.837376] ffff8800131b7dc0
[ 0.837376] ffff8800131b7dc0 ffffffff87d25b10 ffffffff87d25b10 0000000000000286 0000000000000286 ffff8800131b7de0 ffff8800131b7de0
[ 0.842451] Call Trace:
[ 0.842451] Call Trace:
[ 0.843040] [<ffffffff87eb8f86>] dump_stack+0x4b/0x63
[ 0.843040] [<ffffffff87eb8f86>] dump_stack+0x4b/0x63
[ 0.844370] [<ffffffff87d25897>] spin_dump+0x78/0xbb
[ 0.844370] [<ffffffff87d25897>] spin_dump+0x78/0xbb
[ 0.845662] [<ffffffff87d25b10>] do_raw_spin_unlock+0x64/0xba
[ 0.845662] [<ffffffff87d25b10>] do_raw_spin_unlock+0x64/0xba
[ 0.847114] [<ffffffff88123874>] _raw_spin_unlock_irqrestore+0x2c/0x5d
[ 0.847114] [<ffffffff88123874>] _raw_spin_unlock_irqrestore+0x2c/0x5d
[ 0.848654] [<ffffffff8807c341>] skb_dequeue+0x59/0x67
[ 0.848654] [<ffffffff8807c341>] skb_dequeue+0x59/0x67
[ 0.849874] [<ffffffff880edfe4>] wireless_nlevent_flush+0x54/0x8d
[ 0.849874] [<ffffffff880edfe4>] wireless_nlevent_flush+0x54/0x8d
[ 0.851321] [<ffffffff880ee02b>] wext_netdev_notifier_call+0xe/0x15
[ 0.851321] [<ffffffff880ee02b>] wext_netdev_notifier_call+0xe/0x15
[ 0.853020] [<ffffffff8808aa37>] register_netdevice_notifier+0x8a/0x1b9
[ 0.853020] [<ffffffff8808aa37>] register_netdevice_notifier+0x8a/0x1b9
[ 0.854689] [<ffffffff8895e4f6>] ? wext_pernet_init+0x4a/0x4a
[ 0.854689] [<ffffffff8895e4f6>] ? wext_pernet_init+0x4a/0x4a
[ 0.856159] [<ffffffff8895e506>] wireless_nlevent_init+0x10/0x22
[ 0.856159] [<ffffffff8895e506>] wireless_nlevent_init+0x10/0x22
[ 0.857556] [<ffffffff87c003d9>] do_one_initcall+0xb3/0x1fa
[ 0.857556] [<ffffffff87c003d9>] do_one_initcall+0xb3/0x1fa
[ 0.858853] [<ffffffff87cf5024>] ? parse_args+0x26b/0x4c4
[ 0.858853] [<ffffffff87cf5024>] ? parse_args+0x26b/0x4c4
[ 0.860344] [<ffffffff889170c2>] kernel_init_freeable+0x10f/0x194
[ 0.860344] [<ffffffff889170c2>] kernel_init_freeable+0x10f/0x194
[ 0.861889] [<ffffffff881168ac>] ? rest_init+0xcc/0xcc
[ 0.861889] [<ffffffff881168ac>] ? rest_init+0xcc/0xcc
[ 0.863210] [<ffffffff881168ba>] kernel_init+0xe/0xde
[ 0.863210] [<ffffffff881168ba>] kernel_init+0xe/0xde
[ 0.864587] [<ffffffff8812462f>] ret_from_fork+0x3f/0x70
[ 0.864587] [<ffffffff8812462f>] ret_from_fork+0x3f/0x70
[ 0.866178] [<ffffffff881168ac>] ? rest_init+0xcc/0xcc
[ 0.866178] [<ffffffff881168ac>] ? rest_init+0xcc/0xcc
[ 0.867927] HPET: 3 timers in total, 0 timers will be used for per-cpu timer
[ 0.867927] HPET: 3 timers in total, 0 timers will be used for per-cpu timer
[ 0.870286] clocksource: Switched to clocksource kvm-clock
Thanks,
Kernel Test Robot
[netfilter: nf_conntrack] b16c29191d: BUG: spinlock recursion on CPU#1, kworker/u4:2/89
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit b16c29191dc89bd877af99a7b04ce4866728a3e0
Author: Sasha Levin <sasha.levin(a)oracle.com>
AuthorDate: Mon Jan 18 19:23:51 2016 -0500
Commit: Pablo Neira Ayuso <pablo(a)netfilter.org>
CommitDate: Wed Jan 20 14:15:31 2016 +0100
netfilter: nf_conntrack: use safer way to lock all buckets
When we need to lock all buckets in the connection hashtable we'd attempt to
lock 1024 spinlocks, which is way more preemption levels than supported by
the kernel. Furthermore, this behavior was hidden by checking if lockdep is
enabled, and if it was - use only 8 buckets(!).
Fix this by using a global lock and synchronize all buckets on it when we
need to lock them all. This is pretty heavyweight, but is only done when we
need to resize the hashtable, and that doesn't happen often enough (or at all).
Signed-off-by: Sasha Levin <sasha.levin(a)oracle.com>
Acked-by: Jesper Dangaard Brouer <brouer(a)redhat.com>
Reviewed-by: Florian Westphal <fw(a)strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo(a)netfilter.org>
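For reference, a rough sketch of the locking scheme the patch describes
(one global spinlock plus a flag that single-bucket lockers check; the
names mirror the upstream patch, but treat this as an illustration
rather than the exact diff):

/* Taken for any single-bucket operation; waits while a resize holds
 * all buckets via the global lock. */
void nf_conntrack_lock(spinlock_t *lock)
{
	spin_lock(lock);
	while (unlikely(nf_conntrack_locks_all)) {
		spin_unlock(lock);
		/* Pairs with nf_conntrack_all_unlock(). */
		spin_lock(&nf_conntrack_locks_all_lock);
		spin_unlock(&nf_conntrack_locks_all_lock);
		spin_lock(lock);
	}
}

/* Lock all buckets, e.g. before resizing the hashtable. */
static void nf_conntrack_all_lock(void)
{
	int i;

	spin_lock(&nf_conntrack_locks_all_lock);
	nf_conntrack_locks_all = true;

	/* Drain each bucket lock so no owner is left inside. */
	for (i = 0; i < CONNTRACK_LOCKS; i++) {
		spin_lock(&nf_conntrack_locks[i]);
		spin_unlock(&nf_conntrack_locks[i]);
	}
}

static void nf_conntrack_all_unlock(void)
{
	nf_conntrack_locks_all = false;
	spin_unlock(&nf_conntrack_locks_all_lock);
}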
+-------------------------------+------------+------------+------------+
| | 35b815392a | b16c29191d | 9f59b59fdd |
+-------------------------------+------------+------------+------------+
| boot_successes | 910 | 118 | 38 |
| boot_failures | 0 | 2 | 8 |
| BUG:spinlock_recursion_on_CPU | 0 | 2 | 8 |
| backtrace:cleanup_net | 0 | 2 | 7 |
+-------------------------------+------------+------------+------------+
[child0:1965] uid changed! Was: 0, now 65535
[child1:1964] child exiting.
Bailing main loop. Exit reason: UID changed.
[ 24.822451] BUG: spinlock recursion on CPU#1, kworker/u4:2/89
[ 24.823358] lock: nf_conntrack_locks+0x0/0x12780, .magic: dead4ead, .owner: kworker/u4:2/89, .owner_cpu: 1
[ 24.824802] CPU: 1 PID: 89 Comm: kworker/u4:2 Not tainted 4.4.0-03418-gb16c291 #1
[ 24.825913] Workqueue: netns cleanup_net
[ 24.826552] ffffffff81e08880 ffff88001265fbe8 ffffffff813cf0c9 ffff8800125e6100
[ 24.827743] ffff88001265fc08 ffffffff810b37e8 ffffffff81e08880 ffffffff81e08880
[ 24.828930] ffff88001265fc38 ffffffff810b39aa ffffffff81e08898 ffffffff81e08880
[ 24.830114] Call Trace:
[ 24.830510] [<ffffffff813cf0c9>] dump_stack+0x4b/0x72
[ 24.831277] [<ffffffff810b37e8>] spin_dump+0x78/0xc0
[ 24.832043] [<ffffffff810b39aa>] do_raw_spin_lock+0x11a/0x150
[ 24.832922] [<ffffffff8185022d>] _raw_spin_lock+0x5d/0x80
[ 24.833753] [<ffffffff81701922>] ? nf_conntrack_lock+0x12/0x60
[ 24.834648] [<ffffffff81701922>] nf_conntrack_lock+0x12/0x60
[ 24.835353] caif:caif_disconnect_client(): nothing to disconnect
[ 24.849447] [<ffffffff8170bce2>] ctnl_untimeout+0x82/0xb0
[ 24.850371] [<ffffffff8170bd3b>] cttimeout_net_exit+0x2b/0x80
[ 24.851245] [<ffffffff816b4ff8>] ops_exit_list+0x38/0x60
[ 24.852153] [<ffffffff816b5dae>] cleanup_net+0x1ae/0x270
[ 24.852986] [<ffffffff81080b71>] process_one_work+0x1c1/0x500
[ 24.853879] [<ffffffff81080ae9>] ? process_one_work+0x139/0x500
[ 24.854805] [<ffffffff81080efe>] worker_thread+0x4e/0x490
[ 24.855649] [<ffffffff81080eb0>] ? process_one_work+0x500/0x500
[ 24.856571] [<ffffffff81080eb0>] ? process_one_work+0x500/0x500
[ 24.857507] [<ffffffff81087851>] kthread+0x101/0x120
[ 24.858285] [<ffffffff81087750>] ? kthread_stop+0x120/0x120
[ 24.859186] [<ffffffff8185146f>] ret_from_fork+0x3f/0x70
[ 24.860035] [<ffffffff81087750>] ? kthread_stop+0x120/0x120
[ 34.718759] init: tty4 main process (1966) terminated with status 1
[ 34.721009] init: tty4 main process ended, respawning
[ 34.730863] init: tty5 main process (1967) terminated with status 1
git bisect start 9f59b59fddd4ad4a398007059b179cc804e9a85f 92e963f50fc74041b5e9e744c330dca48e04f08d --
git bisect bad d2907ebfd7892cc6919288e306782d28b1aec054 # 11:25 14- 3 Merge 'linux-review/Nicolas-Ferre/spi-atmel-fix-gpio-chip-select-in-case-of-non-DT-platform/20160128-005054' into devel-spot-201601280923
git bisect good 2f451ad637b99e6073555ee77b4861ba4c0b33d8 # 11:37 120+ 0 Merge 'asoc/for-next' into devel-spot-201601280923
git bisect bad 5bf7f3913eb09a58a29f8c49d35e6d564521efc3 # 11:43 0- 1 Merge 'asoc/topic/ssm4567' into devel-spot-201601280923
git bisect good c0cd80413f024d2d2b0306cd73ff905211b2dfdf # 12:00 120+ 0 Merge 'spi/for-next' into devel-spot-201601280923
git bisect good cec507cd4a1097ff1db46a2138b5a63a34e19a4e # 12:25 119+ 0 Merge 'linux-review/Ross-Zwisler/ext2-ext4-Fix-issue-with-missing-journal-entry/20160128-030742' into devel-spot-201601280923
git bisect bad fab357e5a0b27fe98447194f4ff3b57cf161353a # 12:48 5- 5 Merge 'bluetooth/master' into devel-spot-201601280923
git bisect bad 41a751e8480e093999fed3af75cfd17ad3822f77 # 13:02 4- 8 Merge 'linux-review/Eric-Dumazet/tcp-beware-of-alignments-in-tcp_get_info/20160128-025549' into devel-spot-201601280923
git bisect bad 0e03f563a04207cc8e5db6afe63309a585995de7 # 13:21 3- 5 net: mvneta: sort the headers in alphabetic order
git bisect bad 8034e1efcb330d2aecef8cbf8a83f206270c1775 # 13:39 2- 3 Merge git://git.kernel.org/pub/scm/linux/kernel/git/pablo/nf
git bisect good 81e8f2e930fe76b9814c71b9d87c30760b5eb705 # 13:48 120+ 0 net: dp83640: Fix tx timestamp overflow handling.
git bisect good d6b3347bf178266259af64b1f27b5cf54acf62c8 # 14:19 120+ 0 netfilter: xt_TCPMSS: handle CHECKSUM_COMPLETE in tcpmss_tg6()
git bisect bad b16c29191dc89bd877af99a7b04ce4866728a3e0 # 16:32 75- 2 netfilter: nf_conntrack: use safer way to lock all buckets
git bisect good 35b815392a6b6c268baf3b63d7f2ba350597024f # 17:14 310+ 0 netfilter: nf_tables_netdev: fix error path in module initialization
# first bad commit: [b16c29191dc89bd877af99a7b04ce4866728a3e0] netfilter: nf_conntrack: use safer way to lock all buckets
git bisect good 35b815392a6b6c268baf3b63d7f2ba350597024f # 17:27 905+ 0 netfilter: nf_tables_netdev: fix error path in module initialization
# extra tests with DEBUG_INFO
git bisect bad b16c29191dc89bd877af99a7b04ce4866728a3e0 # 17:51 104- 4 netfilter: nf_conntrack: use safer way to lock all buckets
# extra tests on HEAD of linux-devel/devel-spot-201601280923
git bisect bad 9f59b59fddd4ad4a398007059b179cc804e9a85f # 17:51 0- 8 0day head guard for 'devel-spot-201601280923'
# extra tests on tree/branch linux-next/master
git bisect bad 888c8375131656144c1605071eab2eb6ac49abc3 # 18:04 16- 3 Add linux-next specific files for 20160128
# extra tests with first bad commit reverted
git bisect good 8aef8bd67e3bcbe1894bf46992346fc0a5ef4647 # 18:32 905+ 0 Revert "netfilter: nf_conntrack: use safer way to lock all buckets"
# extra tests on tree/branch linus/master
git bisect good 03c21cb775a313f1ff19be59c5d02df3e3526471 # 19:20 910+ 0 Merge tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost
# extra tests on tree/branch linux-next/master
git bisect bad 888c8375131656144c1605071eab2eb6ac49abc3 # 19:20 0- 64 Add linux-next specific files for 20160128
This script may reproduce the error.
----------------------------------------------------------------------------
#!/bin/bash
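# Usage: pass the kernel image (bzImage) to boot as the first argument.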
kernel=$1
initrd=quantal-core-x86_64.cgz
wget --no-clobber https://github.com/fengguang/reproduce-kernel-bug/raw/master/initrd/$initrd
kvm=(
	qemu-system-x86_64
	-enable-kvm
	-cpu kvm64
	-kernel $kernel
	-initrd $initrd
	-m 300
	-smp 2
	-device e1000,netdev=net0
	-netdev user,id=net0
	-boot order=nc
	-no-reboot
	-watchdog i6300esb
	-rtc base=localtime
	-serial stdio
	-display none
	-monitor null
)
append=(
	hung_task_panic=1
	earlyprintk=ttyS0,115200
	systemd.log_level=err
	debug
	apic=debug
	sysrq_always_enabled
	rcupdate.rcu_cpu_stall_timeout=100
	panic=-1
	softlockup_panic=1
	nmi_watchdog=panic
	oops=panic
	load_ramdisk=2
	prompt_ramdisk=0
	console=ttyS0,115200
	console=tty0
	vga=normal
	root=/dev/ram0
	rw
	drbd.minor_count=8
)
"${kvm[@]}" --append "${append[*]}"
----------------------------------------------------------------------------
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/lkp Intel Corporation