[block] WARNING: CPU: 0 PID: 99 at kernel/softirq.c:156 local_bh_enable()
by Jet Chen
Hi Zhiguo,
The 0day kernel testing robot got the dmesg below, and the first bad commit is
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit 2c575026fae6e63771bd2a4c1d407214a8096a89
Author: Hong Zhiguo <zhiguohong@tencent.com>
AuthorDate: Wed Nov 20 10:35:05 2013 -0700
Commit: Jens Axboe <axboe@kernel.dk>
CommitDate: Wed Nov 20 15:33:04 2013 -0700
Update of blkg_stat and blkg_rwstat may happen in bh context.
While u64_stats_fetch_retry is only preempt_disable on 32bit
UP system. This is not enough to avoid preemption by bh and
may read strange 64 bit value.
Signed-off-by: Hong Zhiguo <zhiguohong@tencent.com>
Acked-by: Tejun Heo <tj@kernel.org>
Cc: stable@kernel.org
Signed-off-by: Jens Axboe <axboe@kernel.dk>
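For reference, on a 32-bit UP kernel this change effectively brackets every stat read with local_bh_disable()/local_bh_enable(): the _bh variants of the u64_stats fetch helpers reduce to exactly that when there is no seqcount to read. A minimal sketch of the read side after the commit (simplified for illustration, types as in the kernel's blk-cgroup header, not the verbatim upstream code):

	/* Sketch of blkg_stat_read() after this commit (simplified). */
	static u64 blkg_stat_read_sketch(struct blkg_stat *stat)
	{
		unsigned int start;
		u64 v;

		do {
			/* local_bh_disable() on 32-bit UP, seqcount begin on 32-bit SMP */
			start = u64_stats_fetch_begin_bh(&stat->syncp);
			v = stat->cnt;
			/* u64_stats_fetch_retry_bh() does local_bh_enable() on 32-bit UP */
		} while (u64_stats_fetch_retry_bh(&stat->syncp, start));

		return v;
	}

That protects the 64-bit read against an update from bh context, but it also means the read path must now be safe to re-enable softirqs from, which is what the warning below is about.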
+-------------------------------------------------------+------------+------------+
| | 82023bb7f7 | 2c575026fa |
+-------------------------------------------------------+------------+------------+
| boot_successes | 794 | 183 |
| boot_failures | 106 | 117 |
| BUG:kernel_test_crashed | 33 | |
| BUG:kernel_boot_hang | 73 | 91 |
| WARNING:CPU:PID:at_kernel/softirq.c:local_bh_enable() | 0 | 117 |
| inconsistent_SOFTIRQ-ON-W-IN-SOFTIRQ-W_usage | 0 | 117 |
| backtrace:do_mount | 0 | 85 |
| backtrace:SyS_mount | 0 | 85 |
| backtrace:redo_fd_request | 0 | 117 |
| backtrace:floppy_work_workfn | 0 | 0 |
| backtrace:fd_timer_workfn | 0 | 0 |
+-------------------------------------------------------+------------+------------+
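In short: the local_bh_enable() warning and the accompanying "inconsistent SOFTIRQ-ON-W-IN-SOFTIRQ-W" lockdep report appear in all 117 failed boots of 2c575026fa and in none of the 106 failures seen on the good commit 82023bb7f7.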
[ 414.461526] power_supply test_battery: prop TEMP=26
[ 414.871358] power_supply test_battery: prop VOLTAGE_NOW=3300
[ 416.112247] ------------[ cut here ]------------
[ 416.113048] WARNING: CPU: 0 PID: 99 at kernel/softirq.c:156 local_bh_enable+0x39/0xc0()
[ 416.113048] CPU: 0 PID: 99 Comm: kworker/u2:1 Not tainted 3.12.0-10276-g2c57502 #1
[ 416.113048] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
[ 416.113048] Workqueue: floppy redo_fd_request
[ 416.113048] 00000000 00000000 d1599c7c c13f3d3f d1599cac c023551b c1990f08 00000000
[ 416.113048] 00000063 c1991108 0000009c c0239219 c0239219 c060f801 d1661a14 d1660000
[ 416.113048] d1599cbc c02355dd 00000009 00000000 d1599cc8 c0239219 d1599cf0 d1599cd8
[ 416.113048] Call Trace:
[ 416.113048] [<c13f3d3f>] dump_stack+0x16/0x18
[ 416.113048] [<c023551b>] warn_slowpath_common+0x6b/0x90
[ 416.113048] [<c0239219>] ? local_bh_enable+0x39/0xc0
[ 416.113048] [<c0239219>] ? local_bh_enable+0x39/0xc0
[ 416.113048] [<c060f801>] ? blkg_rwstat_read+0x41/0x50
[ 416.113048] [<c02355dd>] warn_slowpath_null+0x1d/0x20
[ 416.113048] [<c0239219>] local_bh_enable+0x39/0xc0
[ 416.113048] [<c060f801>] blkg_rwstat_read+0x41/0x50
[ 416.113048] [<c0611b65>] __cfq_set_active_queue+0x75/0x180
[ 416.113048] [<c0220ab3>] ? kvm_clock_read+0x13/0x20
[ 416.113048] [<c0207738>] ? sched_clock+0x8/0x10
[ 416.113048] [<c025b8b5>] ? sched_clock_local.constprop.2+0x15/0x150
[ 416.113048] [<c06132f3>] cfq_dispatch_requests+0x613/0x990
[ 416.113048] [<c0220ab3>] ? kvm_clock_read+0x13/0x20
[ 416.113048] [<c0207738>] ? sched_clock+0x8/0x10
[ 416.113048] [<c025b8b5>] ? sched_clock_local.constprop.2+0x15/0x150
[ 416.113048] [<c08bceff>] ? redo_fd_request+0x3f/0x1010
[ 416.113048] [<c05f7e1e>] blk_peek_request+0x17e/0x1b0
[ 416.113048] [<c140945d>] ? _raw_spin_lock_irq+0x6d/0x80
[ 416.113048] [<c05f7e59>] blk_fetch_request+0x9/0x20
[ 416.113048] [<c08bcf45>] redo_fd_request+0x85/0x1010
[ 416.113048] [<c024a1e0>] ? process_one_work+0x1c0/0x3d0
[ 416.113048] [<c024a24a>] process_one_work+0x22a/0x3d0
[ 416.113048] [<c024a1e0>] ? process_one_work+0x1c0/0x3d0
[ 416.113048] [<c024a5bf>] worker_thread+0x1cf/0x330
[ 416.113048] [<c024a3f0>] ? process_one_work+0x3d0/0x3d0
[ 416.113048] [<c025108a>] kthread+0xaa/0xb0
[ 416.113048] [<c140a637>] ret_from_kernel_thread+0x1b/0x28
[ 416.113048] [<c0250fe0>] ? __kthread_unpark+0x40/0x40
[ 416.113048] ---[ end trace 053edd2998e0f96b ]---
[ 438.998238] block nbd12: Attempted send on closed socket
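One way to read the trace (inferred from the call chain, so take it as a hedged interpretation): cfq samples the group stats via blkg_rwstat_read() from __cfq_set_active_queue() while the floppy driver is fetching requests under the queue lock taken with interrupts disabled (note the _raw_spin_lock_irq frame above blk_peek_request()). After the commit, that read path ends in local_bh_enable(), whose sanity check at kernel/softirq.c:156 in this tree is essentially WARN_ON_ONCE(in_irq() || irqs_disabled()), so calling it with hardirqs off produces exactly this splat on every affected boot.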
git bisect start v3.13 v3.12 --
git bisect good 3bad8bb5cd3048a67df43ac6b1e2f191f19d9ff0 # 12:35 300+ 46 Merge branch 'for-next' of git://git.samba.org/sfrench/cifs-2.6
git bisect bad dd0508093b79141e0044ca02f0acb6319f69f546 # 12:49 41- 1 Merge branch 'sched-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
git bisect bad 3f02ff5c2c69753666787ed125708d283a823ffb # 13:07 93- 2 Merge tag 'tty-3.13-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty
git bisect good 1ab231b274ba51a54acebec23c6aded0f3cdf54e # 13:31 300+ 30 Merge branch 'timers-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
git bisect bad 5ee540613db504a10e15fafaf4c08cac96aa1823 # 13:50 66- 5 Merge branch 'for-linus' of git://git.kernel.dk/linux-block
git bisect good 53c6de50262a8edd6932bb59a32db7b9d92f8d67 # 14:26 300+ 6 Merge branch 'x86/urgent' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
git bisect good 59fb2f0e9e30ad99a8bab0ff1efaf8f4a3b7105f # 14:50 300+ 57 Merge tag 'fbdev-fixes-3.13' of git://git.kernel.org/pub/scm/linux/kernel/git/tomba/linux
git bisect good ef1e4e32d595d3e6c9a6d3d2956f087d5886c5e5 # 15:17 300+ 32 Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mason/linux-btrfs
git bisect good 29be6345bbaec8502a70c4e2204d5818b48c4e8f # 19:00 300+ 40 Merge tag 'nfs-for-3.13-3' of git://git.linux-nfs.org/projects/trondmy/linux-nfs
git bisect bad e345d767f6530ec9cb0aabab7ea248072a9c6975 # 19:17 131- 8 Merge branch 'stable/for-jens-3.13-take-two' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip into for-linus
git bisect bad c170bbb45febc03ac4d34ba2b8bb55e06104b7e7 # 19:33 15- 2 block: submit_bio_wait() conversions
git bisect bad 2c575026fae6e63771bd2a4c1d407214a8096a89 # 19:57 41- 2 Update of blkg_stat and blkg_rwstat may happen in bh context. While u64_stats_fetch_retry is only preempt_disable on 32bit UP system. This is not enough to avoid preemption by bh and may read strange 64 bit value.
# first bad commit: [2c575026fae6e63771bd2a4c1d407214a8096a89] Update of blkg_stat and blkg_rwstat may happen in bh context. While u64_stats_fetch_retry is only preempt_disable on 32bit UP system. This is not enough to avoid preemption by bh and may read strange 64 bit value.
git bisect good 82023bb7f75b0052f40d3e74169d191c3e4e6286 # 20:59 900+ 106 Merge tag 'pm+acpi-2-3.13-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
git bisect bad b1cce620fd4b6864c92e7307be7839789f9c8be0 # 20:59 0- 5 Add linux-next specific files for 20140612
git bisect bad 6d87c225f5d82d29243dc124f1ffcbb0e14ec358 # 21:29 89- 6 Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/sage/ceph-client
git bisect bad f9801c532e045c1ab89801d0597353c5e2a55671 # 21:58 113- 8 Add linux-next specific files for 20140613
The script below may reproduce the error.
-----------------------------------------------------------------------------
#!/bin/bash
kernel=$1
kvm=(
qemu-system-x86_64 -cpu kvm64 -enable-kvm
-kernel $kernel
-smp 2
-m 256M
-net nic,vlan=0,macaddr=00:00:00:00:00:00,model=virtio
-net user,vlan=0
-net nic,vlan=1,model=e1000
-net user,vlan=1
-boot order=nc
-no-reboot
-watchdog i6300esb
-serial stdio
-display none
-monitor null
)
append=(
debug
sched_debug
apic=debug
ignore_loglevel
sysrq_always_enabled
panic=10
prompt_ramdisk=0
earlyprintk=ttyS0,115200
console=ttyS0,115200
console=tty0
vga=normal
root=/dev/ram0
rw
)
"${kvm[@]}" --append "${append[*]}"
-----------------------------------------------------------------------------
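The script takes the kernel image as its only argument. A hypothetical invocation (file names here are placeholders): save it as reproduce.sh, chmod +x reproduce.sh, and run ./reproduce.sh path/to/bzImage with a 32-bit build of the first bad commit. Note that this variant passes no -initrd to qemu, so it assumes the kernel can reach a root filesystem on its own (for example a built-in initramfs), unlike the script in the next report, which fetches one explicitly.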
Thanks,
Jet
[x86: kvm] WARNING: at arch/x86/kernel/pvclock.c:182 pvclock_init_vsyscall()
by Jet Chen
Hi Marcelo,
The 0day kernel testing robot got the dmesg below, and the first bad commit is
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit 3dc4f7cfb7441e5e0fed3a02fc81cdaabd28300a
Author: Marcelo Tosatti <mtosatti@redhat.com>
AuthorDate: Tue Nov 27 23:28:56 2012 -0200
Commit: Marcelo Tosatti <mtosatti@redhat.com>
CommitDate: Tue Nov 27 23:29:10 2012 -0200
x86: kvm guest: pvclock vsyscall support
Hook into generic pvclock vsyscall code, with the aim to
allow userspace to have visibility into pvclock data.
Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
+--------------------------------------------------------------+------------+------------+
| | 71056ae22d | 3dc4f7cfb7 |
+--------------------------------------------------------------+------------+------------+
| boot_successes | 60 | 0 |
| boot_failures | 0 | 20 |
| WARNING:at_arch/x86/kernel/pvclock.c:pvclock_init_vsyscall() | 0 | 20 |
| backtrace:pvclock_init_vsyscall | 0 | 20 |
| backtrace:warn_slowpath_null | 0 | 20 |
| backtrace:kvm_setup_vsyscall_timeinfo | 0 | 20 |
| backtrace:kvm_guest_init | 0 | 20 |
+--------------------------------------------------------------+------------+------------+
[ 0.000000] mapped IOAPIC to ffffffffff5f8000 (fec00000)
[ 0.000000] nr_irqs_gsi: 40
[ 0.000000] ------------[ cut here ]------------
[ 0.000000] WARNING: at arch/x86/kernel/pvclock.c:182 pvclock_init_vsyscall+0x21/0x59()
[ 0.000000] Hardware name: Bochs
[ 0.000000] Pid: 0, comm: swapper Not tainted 3.7.0-rc3-00112-g3dc4f7c #37
[ 0.000000] Call Trace:
[ 0.000000] [<ffffffff81062255>] warn_slowpath_common+0x7c/0x94
[ 0.000000] [<ffffffff81062343>] warn_slowpath_null+0x1a/0x1c
[ 0.000000] [<ffffffff81cdd7c9>] pvclock_init_vsyscall+0x21/0x59
[ 0.000000] [<ffffffff81cdd75b>] kvm_setup_vsyscall_timeinfo+0x59/0x8b
[ 0.000000] [<ffffffff81cdd4cf>] kvm_guest_init+0xe4/0xfe
[ 0.000000] [<ffffffff81cd2f9e>] setup_arch+0xa64/0xad9
[ 0.000000] [<ffffffff81cd0893>] start_kernel+0xdc/0x430
[ 0.000000] [<ffffffff81cd0120>] ? early_idt_handlers+0x120/0x120
[ 0.000000] [<ffffffff81cd02b4>] x86_64_start_reservations+0xb0/0xb3
[ 0.000000] [<ffffffff81cd03b9>] x86_64_start_kernel+0x102/0x10f
[ 0.000000] ---[ end trace c4149eb8555303de ]---
[ 0.000000] e820: [mem 0x14000000-0xfeffbfff] available for PCI devices
git bisect start 749387dc8d8270b279f27a0a794cdf4f4a4aa774 v3.7 --
git bisect bad e32795503de02da4e7e74a5e039cc268f6a0ecfb # 23:38 0- 13 Merge tags 'dt-for-linus', 'gpio-for-linus' and 'spi-for-linus' of git://git.secretlab.ca/git/linux-2.6
git bisect bad e777d192ffb9f2929d547a2f8a5f65b7db7a9552 # 00:25 0- 4 Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi
git bisect good 50851c6248e1a13c45d97c41f6ebcf716093aa5e # 00:43 20+ 0 Merge branch 'release' of git://git.kernel.org/pub/scm/linux/kernel/git/rzhang/linux
git bisect good 698d601224824bc1a5bf17f3d86be902e2aabff0 # 01:11 20+ 0 Merge tag 'drivers' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc
git bisect good 8d9ea7172edd2e52da26b9485b4c97969a0d2648 # 01:34 20+ 0 Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net
git bisect bad 66cdd0ceaf65a18996f561b770eedde1d123b019 # 01:57 0- 3 Merge tag 'kvm-3.8-1' of git://git.kernel.org/pub/scm/virt/kvm/kvm
git bisect good 3127f23f013eabe9b58132c05061684c49146ba3 # 02:12 20+ 0 Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/geert/linux-m68k
git bisect good c7708fac5a878d6e0f2de0aa19f9749cff4f707f # 02:31 20+ 0 Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux
git bisect good 896ea17d3da5f44b2625c9cda9874d7dfe447393 # 02:42 20+ 0 Merge tag 'stable/for-linus-3.8-rc0-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen
git bisect good 8455d79e2163997e479931b8d5b7e60a92cd2b86 # 02:49 20+ 0 KVM: PPC: Book3S HV: Run virtual core whenever any vcpus in it can run
git bisect bad 78c634402a1825f1f5bef13077f0985f3b8a3212 # 03:10 0- 6 kvm: deliver msi interrupts from irq handler
git bisect good 189e11731aa858597095fbe1e6d243bad26bd96b # 03:44 20+ 0 x86: pvclock: add note about rdtsc barriers
git bisect bad b48aa97e38206a84bf8485e7c553412274708ce5 # 04:09 0- 11 KVM: x86: require matched TSC offsets for master clock
git bisect bad 886b470cb14733a0286e365c77f1844c240c33a4 # 04:33 0- 13 KVM: x86: pass host_tsc to read_l1_tsc
git bisect good 71056ae22d43f58d7e0f793af18ace2eaf5b74eb # 04:58 20+ 0 x86: pvclock: generic pvclock vsyscall initialization
git bisect bad 51c19b4f5927f5a646e93d69f73c7e89ea14e737 # 05:16 0- 15 x86: vdso: pvclock gettime support
git bisect bad 3dc4f7cfb7441e5e0fed3a02fc81cdaabd28300a # 05:40 0- 6 x86: kvm guest: pvclock vsyscall support
# first bad commit: [3dc4f7cfb7441e5e0fed3a02fc81cdaabd28300a] x86: kvm guest: pvclock vsyscall support
git bisect good 71056ae22d43f58d7e0f793af18ace2eaf5b74eb # 05:42 60+ 0 x86: pvclock: generic pvclock vsyscall initialization
git bisect good d67a0e110187abd560a1de63fa172894a52839d5 # 05:45 60+ 0 morphologies: Enable CONFIG_FHANDLE
git bisect good 0e04c641b199435f3779454055f6a7de258ecdfc # 06:06 60+ 0 Merge tag 'dm-3.16-changes' of git://git.kernel.org/pub/scm/linux/kernel/git/device-mapper/linux-dm
git bisect good b1cce620fd4b6864c92e7307be7839789f9c8be0 # 06:09 60+ 0 Add linux-next specific files for 20140612
The script below may reproduce the error.
-----------------------------------------------------------------------------
#!/bin/bash
kernel=$1
initrd=yocto-minimal-x86_64.cgz
wget --no-clobber https://github.com/fengguang/reproduce-kernel-bug/blob/master/initrd/$initrd
kvm=(
qemu-system-x86_64 -cpu kvm64 -enable-kvm
-kernel $kernel
-initrd $initrd
-smp 2
-m 256M
-net nic,vlan=0,macaddr=00:00:00:00:00:00,model=virtio
-net user,vlan=0
-net nic,vlan=1,model=e1000
-net user,vlan=1
-boot order=nc
-no-reboot
-watchdog i6300esb
-serial stdio
-display none
-monitor null
)
append=(
debug
sched_debug
apic=debug
ignore_loglevel
sysrq_always_enabled
panic=10
prompt_ramdisk=0
earlyprintk=ttyS0,115200
console=ttyS0,115200
console=tty0
vga=normal
root=/dev/ram0
rw
)
"${kvm[@]}" --append "${append[*]}"
-----------------------------------------------------------------------------
Thanks,
Jet
[raid5] cf170f3fa45: +4.8% vmstat.io.bo
by Jet Chen
Hi Eivind,
FYI, we noticed the changes below on
git://neil.brown.name/md for-next
commit cf170f3fa451350e431314e1a0a52014fda4b2d6 ("raid5: avoid release list until last reference of the stripe")
test case: lkp-st02/dd-write/11HDD-RAID5-cfq-xfs-10dd
8b32bf5e37328c0 cf170f3fa451350e431314e1a
--------------- -------------------------
486996 ~ 0% +4.8% 510428 ~ 0% TOTAL vmstat.io.bo
17643 ~ 1% -17.3% 14599 ~ 0% TOTAL vmstat.system.in
11633 ~ 4% -56.7% 5039 ~ 0% TOTAL vmstat.system.cs
109 ~ 1% +6.5% 116 ~ 1% TOTAL iostat.sdb.rrqm/s
109 ~ 2% +5.1% 114 ~ 1% TOTAL iostat.sdc.rrqm/s
110 ~ 2% +5.5% 117 ~ 0% TOTAL iostat.sdj.rrqm/s
12077 ~ 0% +4.8% 12660 ~ 0% TOTAL iostat.sde.wrqm/s
48775 ~ 0% +4.8% 51125 ~ 0% TOTAL iostat.sde.wkB/s
12077 ~ 0% +4.8% 12659 ~ 0% TOTAL iostat.sdb.wrqm/s
12076 ~ 0% +4.8% 12659 ~ 0% TOTAL iostat.sdd.wrqm/s
12077 ~ 0% +4.8% 12660 ~ 0% TOTAL iostat.sdf.wrqm/s
48775 ~ 0% +4.8% 51121 ~ 0% TOTAL iostat.sdb.wkB/s
12078 ~ 0% +4.8% 12659 ~ 0% TOTAL iostat.sdj.wrqm/s
12078 ~ 0% +4.8% 12660 ~ 0% TOTAL iostat.sdi.wrqm/s
12076 ~ 0% +4.8% 12658 ~ 0% TOTAL iostat.sdg.wrqm/s
48774 ~ 0% +4.8% 51122 ~ 0% TOTAL iostat.sdd.wkB/s
48776 ~ 0% +4.8% 51128 ~ 0% TOTAL iostat.sdf.wkB/s
48780 ~ 0% +4.8% 51121 ~ 0% TOTAL iostat.sdj.wkB/s
48779 ~ 0% +4.8% 51128 ~ 0% TOTAL iostat.sdi.wkB/s
48773 ~ 0% +4.8% 51119 ~ 0% TOTAL iostat.sdg.wkB/s
486971 ~ 0% +4.8% 510409 ~ 0% TOTAL iostat.md0.wkB/s
12076 ~ 0% +4.8% 12657 ~ 0% TOTAL iostat.sdc.wrqm/s
12077 ~ 0% +4.8% 12659 ~ 0% TOTAL iostat.sdh.wrqm/s
1910 ~ 0% +4.8% 2001 ~ 0% TOTAL iostat.md0.w/s
110 ~ 2% +6.5% 117 ~ 1% TOTAL iostat.sdk.rrqm/s
12077 ~ 0% +4.8% 12659 ~ 0% TOTAL iostat.sdk.wrqm/s
48772 ~ 0% +4.8% 51115 ~ 0% TOTAL iostat.sdc.wkB/s
48776 ~ 0% +4.8% 51121 ~ 0% TOTAL iostat.sdh.wkB/s
48777 ~ 0% +4.8% 51121 ~ 0% TOTAL iostat.sdk.wkB/s
109 ~ 2% +3.3% 113 ~ 1% TOTAL iostat.sde.rrqm/s
4.28e+09 ~ 0% -4.1% 4.104e+09 ~ 0% TOTAL perf-stat.cache-misses
8.654e+10 ~ 0% +4.7% 9.058e+10 ~ 0% TOTAL perf-stat.L1-dcache-store-misses
3.549e+09 ~ 1% +3.7% 3.682e+09 ~ 0% TOTAL perf-stat.L1-dcache-prefetches
6.764e+11 ~ 0% +3.7% 7.011e+11 ~ 0% TOTAL perf-stat.dTLB-stores
6.759e+11 ~ 0% +3.7% 7.011e+11 ~ 0% TOTAL perf-stat.L1-dcache-stores
4.731e+10 ~ 0% +3.6% 4.903e+10 ~ 0% TOTAL perf-stat.L1-dcache-load-misses
3.017e+12 ~ 0% +3.5% 3.121e+12 ~ 0% TOTAL perf-stat.instructions
1.118e+12 ~ 0% +3.3% 1.156e+12 ~ 0% TOTAL perf-stat.dTLB-loads
1.117e+12 ~ 0% +3.2% 1.152e+12 ~ 0% TOTAL perf-stat.L1-dcache-loads
3.022e+12 ~ 0% +3.2% 3.119e+12 ~ 0% TOTAL perf-stat.iTLB-loads
5.613e+11 ~ 0% +3.2% 5.794e+11 ~ 0% TOTAL perf-stat.branch-instructions
5.62e+11 ~ 0% +3.1% 5.793e+11 ~ 0% TOTAL perf-stat.branch-loads
1.343e+09 ~ 0% +2.6% 1.378e+09 ~ 0% TOTAL perf-stat.LLC-store-misses
2.073e+10 ~ 0% +2.9% 2.133e+10 ~ 1% TOTAL perf-stat.LLC-loads
4.854e+10 ~ 0% +1.6% 4.931e+10 ~ 0% TOTAL perf-stat.cache-references
1.167e+10 ~ 0% +1.4% 1.183e+10 ~ 0% TOTAL perf-stat.L1-icache-load-misses
7068624 ~ 4% -56.4% 3078966 ~ 0% TOTAL perf-stat.context-switches
2.214e+09 ~ 1% -7.8% 2.041e+09 ~ 1% TOTAL perf-stat.LLC-load-misses
131433 ~ 0% -18.9% 106597 ~ 1% TOTAL perf-stat.cpu-migrations
Legend:
~XX% - standard deviation (percent)
[+-]XX% - percent change
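For example, the first row reads: vmstat.io.bo averaged 486996 on 8b32bf5e37328c0 and 510428 on cf170f3fa451350e431314e1a, both with roughly 0% run-to-run standard deviation, i.e. a +4.8% increase in block write-out throughput.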
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Jet
[rcu] 5057f55e543: -23.5% qperf.udp.recv_bw
by Jet Chen
Hi Paul,
FYI, we noticed the changes below on
git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git rcu/fixes
commit 5057f55e543b7859cfd26bc281291795eac93f8a ("rcu: Bind RCU grace-period kthreads if NO_HZ_FULL")
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
2.127e+09 ~ 0% -23.5% 1.628e+09 ~ 4% bens/qperf/600s
2.127e+09 ~ 0% -23.5% 1.628e+09 ~ 4% TOTAL qperf.udp.recv_bw
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
2.128e+09 ~ 0% -23.3% 1.633e+09 ~ 4% bens/qperf/600s
2.128e+09 ~ 0% -23.3% 1.633e+09 ~ 4% TOTAL qperf.udp.send_bw
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
2.101e+10 ~ 2% -18.7% 1.707e+10 ~ 2% bens/iperf/300s-tcp
2.101e+10 ~ 2% -18.7% 1.707e+10 ~ 2% TOTAL iperf.tcp.sender.bps
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
2.101e+10 ~ 2% -18.7% 1.707e+10 ~ 2% bens/iperf/300s-tcp
2.101e+10 ~ 2% -18.7% 1.707e+10 ~ 2% TOTAL iperf.tcp.receiver.bps
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
1.331e+09 ~ 2% -5.8% 1.255e+09 ~ 2% bens/qperf/600s
2.4e+09 ~ 6% -30.4% 1.671e+09 ~12% brickland3/qperf/600s
2.384e+09 ~ 7% -12.1% 2.096e+09 ~ 3% lkp-sb03/qperf/600s
6.115e+09 ~ 5% -17.9% 5.022e+09 ~ 6% TOTAL qperf.sctp.bw
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
2.83e+09 ~ 1% -12.5% 2.476e+09 ~ 3% bens/qperf/600s
2.83e+09 ~ 1% -12.5% 2.476e+09 ~ 3% TOTAL qperf.tcp.bw
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
2.272e+08 ~ 1% -13.3% 1.97e+08 ~ 2% bens/qperf/600s
2.272e+08 ~ 1% -13.3% 1.97e+08 ~ 2% TOTAL proc-vmstat.pgalloc_dma32
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
53062 ~ 2% -35.1% 34464 ~ 3% bens/qperf/600s
109531 ~13% +46.9% 160928 ~ 5% brickland3/qperf/600s
67902 ~ 1% +13.8% 77302 ~ 3% lkp-sb03/qperf/600s
230496 ~ 7% +18.3% 272694 ~ 4% TOTAL softirqs.RCU
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
80344 ~ 1% -26.2% 59325 ~ 2% bens/qperf/600s
80344 ~ 1% -26.2% 59325 ~ 2% TOTAL softirqs.SCHED
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
1036 ~ 4% -17.6% 853 ~ 4% brickland3/qperf/600s
1036 ~ 4% -17.6% 853 ~ 4% TOTAL proc-vmstat.nr_page_table_pages
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
48.12 ~ 0% -11.7% 42.46 ~ 6% brickland3/qperf/600s
48.12 ~ 0% -11.7% 42.46 ~ 6% TOTAL turbostat.%pc2
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
74689352 ~ 1% -13.3% 64771743 ~ 2% bens/qperf/600s
74689352 ~ 1% -13.3% 64771743 ~ 2% TOTAL proc-vmstat.pgalloc_normal
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
3.019e+08 ~ 1% -13.3% 2.618e+08 ~ 2% bens/qperf/600s
3.019e+08 ~ 1% -13.3% 2.618e+08 ~ 2% TOTAL proc-vmstat.pgfree
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
23538414 ~ 0% -12.9% 20506157 ~ 2% bens/qperf/600s
23538414 ~ 0% -12.9% 20506157 ~ 2% TOTAL proc-vmstat.numa_local
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
23538414 ~ 0% -12.9% 20506157 ~ 2% bens/qperf/600s
23538414 ~ 0% -12.9% 20506157 ~ 2% TOTAL proc-vmstat.numa_hit
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
12789 ~ 1% -10.9% 11391 ~ 2% bens/qperf/600s
12789 ~ 1% -10.9% 11391 ~ 2% TOTAL softirqs.HRTIMER
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
481253 ~ 0% -8.9% 438624 ~ 0% bens/qperf/600s
481253 ~ 0% -8.9% 438624 ~ 0% TOTAL softirqs.TIMER
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
1297 ~33% +565.9% 8640 ~ 7% bens/iperf/300s-tcp
2788 ~ 3% +588.8% 19204 ~ 4% bens/qperf/600s
1191 ~ 5% +1200.9% 15493 ~ 4% brickland3/qperf/600s
1135 ~26% +1195.9% 14709 ~ 4% lkp-sb03/qperf/600s
6411 ~13% +805.3% 58047 ~ 4% TOTAL time.involuntary_context_switches
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
72398 ~ 1% -5.4% 68503 ~ 0% bens/qperf/600s
8789 ~ 4% +22.3% 10749 ~15% lkp-sb03/qperf/600s
81187 ~ 1% -2.4% 79253 ~ 2% TOTAL vmstat.system.in
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
141174 ~ 1% -5.4% 133551 ~ 0% bens/qperf/600s
143982 ~ 1% -4.4% 137600 ~ 0% brickland3/qperf/600s
285156 ~ 1% -4.9% 271152 ~ 0% TOTAL vmstat.system.cs
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
42351859 ~ 0% -5.3% 40114932 ~ 0% bens/qperf/600s
43015383 ~ 1% -4.4% 41143092 ~ 0% brickland3/qperf/600s
85367242 ~ 1% -4.8% 81258025 ~ 0% TOTAL time.voluntary_context_switches
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
146 ~ 0% -2.2% 143 ~ 0% bens/qperf/600s
147 ~ 1% -4.8% 140 ~ 1% brickland3/qperf/600s
293 ~ 0% -3.5% 283 ~ 0% TOTAL time.percent_of_cpu_this_job_got
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
872 ~ 0% -2.3% 853 ~ 0% bens/qperf/600s
874 ~ 1% -4.6% 834 ~ 1% brickland3/qperf/600s
1747 ~ 0% -3.4% 1687 ~ 0% TOTAL time.system_time
Legend:
~XX% - standard deviation (percent)
[+-]XX% - percent change
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Jet
[rcu] 5057f55e543: +1307.4% time.involuntary_context_switches
by Jet Chen
TO: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
CC: LKML <linux-kernel@vger.kernel.org>
CC: lkp@01.org
FYI, we noticed the changes below on
git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git rcu/fixes
commit 5057f55e543b7859cfd26bc281291795eac93f8a ("rcu: Bind RCU grace-period kthreads if NO_HZ_FULL")
Here we highlight some major changes:
7946 ~13% +1307.4% 111846 ~ 2% TOTAL time.involuntary_context_switches
2.127e+09 ~ 0% -23.5% 1.628e+09 ~ 4% TOTAL qperf.udp.recv_bw
2.128e+09 ~ 0% -23.3% 1.633e+09 ~ 4% TOTAL qperf.udp.send_bw
2.101e+10 ~ 2% -18.7% 1.707e+10 ~ 2% TOTAL iperf.tcp.receiver.bps
2.101e+10 ~ 2% -18.7% 1.707e+10 ~ 2% TOTAL iperf.tcp.sender.bps
6.115e+09 ~ 5% -17.9% 5.022e+09 ~ 6% TOTAL qperf.sctp.bw
2.83e+09 ~ 1% -12.5% 2.476e+09 ~ 3% TOTAL qperf.tcp.bw
26070 ~ 0% -1.2% 25768 ~ 0% TOTAL will-it-scale.per_process_ops
0.40 ~ 0% +1.9% 0.41 ~ 1% TOTAL will-it-scale.scalability
1113433 ~ 7% +137.1% 2639530 ~ 3% TOTAL cpuidle.C4-ATM.time
786 ~20% +71.8% 1350 ~15% TOTAL cpuidle.POLL.usage
47106 ~ 8% +53.8% 72472 ~ 3% TOTAL cpuidle.C1E-ATM.usage
659 ~ 7% +50.6% 992 ~ 3% TOTAL cpuidle.C3-IVT.usage
3026 ~ 8% +42.9% 4324 ~ 3% TOTAL cpuidle.C4-ATM.usage
5062629 ~ 4% +26.6% 6408492 ~ 2% TOTAL cpuidle.C1E-ATM.time
402345 ~ 2% -18.9% 326211 ~ 0% TOTAL cpuidle.C6-IVT.usage
944 ~ 9% +17.0% 1105 ~ 8% TOTAL cpuidle.C1E-IVT.usage
403926 ~ 0% -17.3% 333985 ~ 1% TOTAL cpuidle.C7-SNB.usage
124425 ~ 0% -16.4% 103999 ~ 0% TOTAL cpuidle.C6-ATM.usage
Full comparison results:
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
2.127e+09 ~ 0% -23.5% 1.628e+09 ~ 4% bens/qperf/600s
2.127e+09 ~ 0% -23.5% 1.628e+09 ~ 4% TOTAL qperf.udp.recv_bw
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
2.128e+09 ~ 0% -23.3% 1.633e+09 ~ 4% bens/qperf/600s
2.128e+09 ~ 0% -23.3% 1.633e+09 ~ 4% TOTAL qperf.udp.send_bw
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
2.101e+10 ~ 2% -18.7% 1.707e+10 ~ 2% bens/iperf/300s-tcp
2.101e+10 ~ 2% -18.7% 1.707e+10 ~ 2% TOTAL iperf.tcp.receiver.bps
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
2.101e+10 ~ 2% -18.7% 1.707e+10 ~ 2% bens/iperf/300s-tcp
2.101e+10 ~ 2% -18.7% 1.707e+10 ~ 2% TOTAL iperf.tcp.sender.bps
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
1.331e+09 ~ 2% -5.8% 1.255e+09 ~ 2% bens/qperf/600s
2.4e+09 ~ 6% -30.4% 1.671e+09 ~12% brickland3/qperf/600s
2.384e+09 ~ 7% -12.1% 2.096e+09 ~ 3% lkp-sb03/qperf/600s
6.115e+09 ~ 5% -17.9% 5.022e+09 ~ 6% TOTAL qperf.sctp.bw
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
2.83e+09 ~ 1% -12.5% 2.476e+09 ~ 3% bens/qperf/600s
2.83e+09 ~ 1% -12.5% 2.476e+09 ~ 3% TOTAL qperf.tcp.bw
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
26070 ~ 0% -1.2% 25768 ~ 0% lkp-a03/will-it-scale/unlink1
26070 ~ 0% -1.2% 25768 ~ 0% TOTAL will-it-scale.per_process_ops
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
0.40 ~ 0% +1.9% 0.41 ~ 1% lkp-a03/will-it-scale/unlink1
0.40 ~ 0% +1.9% 0.41 ~ 1% TOTAL will-it-scale.scalability
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
5951 ~ 4% -61.0% 2318 ~ 4% lkp-a03/will-it-scale/unlink1
5951 ~ 4% -61.0% 2318 ~ 4% TOTAL slabinfo.kmalloc-256.active_objs
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
373 ~ 4% -60.7% 146 ~ 4% lkp-a03/will-it-scale/unlink1
373 ~ 4% -60.7% 146 ~ 4% TOTAL slabinfo.kmalloc-256.num_slabs
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
373 ~ 4% -60.7% 146 ~ 4% lkp-a03/will-it-scale/unlink1
373 ~ 4% -60.7% 146 ~ 4% TOTAL slabinfo.kmalloc-256.active_slabs
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
5990 ~ 4% -60.7% 2355 ~ 4% lkp-a03/will-it-scale/unlink1
5990 ~ 4% -60.7% 2355 ~ 4% TOTAL slabinfo.kmalloc-256.num_objs
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
2.272e+08 ~ 1% -13.3% 1.97e+08 ~ 2% bens/qperf/600s
1507825 ~ 0% -20.5% 1198788 ~ 0% lkp-a03/will-it-scale/unlink1
2.287e+08 ~ 1% -13.3% 1.982e+08 ~ 2% TOTAL proc-vmstat.pgalloc_dma32
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
1113433 ~ 7% +137.1% 2639530 ~ 3% lkp-a03/will-it-scale/unlink1
1113433 ~ 7% +137.1% 2639530 ~ 3% TOTAL cpuidle.C4-ATM.time
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
6269 ~ 4% -57.8% 2647 ~ 4% lkp-a03/will-it-scale/unlink1
6269 ~ 4% -57.8% 2647 ~ 4% TOTAL slabinfo.shmem_inode_cache.active_objs
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
6306 ~ 4% -57.4% 2688 ~ 4% lkp-a03/will-it-scale/unlink1
6306 ~ 4% -57.4% 2688 ~ 4% TOTAL slabinfo.shmem_inode_cache.num_objs
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
262 ~ 4% -57.5% 111 ~ 4% lkp-a03/will-it-scale/unlink1
262 ~ 4% -57.5% 111 ~ 4% TOTAL slabinfo.shmem_inode_cache.active_slabs
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
262 ~ 4% -57.5% 111 ~ 4% lkp-a03/will-it-scale/unlink1
262 ~ 4% -57.5% 111 ~ 4% TOTAL slabinfo.shmem_inode_cache.num_slabs
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
786 ~20% +71.8% 1350 ~15% lkp-sb03/qperf/600s
786 ~20% +71.8% 1350 ~15% TOTAL cpuidle.POLL.usage
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
0.23 ~ 7% -40.0% 0.14 ~ 5% lkp-sb03/qperf/600s
0.23 ~ 7% -40.0% 0.14 ~ 5% TOTAL turbostat.%c3
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
47106 ~ 8% +53.8% 72472 ~ 3% lkp-a03/will-it-scale/unlink1
47106 ~ 8% +53.8% 72472 ~ 3% TOTAL cpuidle.C1E-ATM.usage
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
1.14 ~28% -45.0% 0.63 ~43% lkp-sb03/qperf/600s
1.14 ~28% -45.0% 0.63 ~43% TOTAL perf-profile.cpu-cycles.copy_user_generic_string.skb_copy_datagram_iovec.tcp_recvmsg.inet_recvmsg.sock_aio_read
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
659 ~ 7% +50.6% 992 ~ 3% brickland3/qperf/600s
659 ~ 7% +50.6% 992 ~ 3% TOTAL cpuidle.C3-IVT.usage
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
3026 ~ 8% +42.9% 4324 ~ 3% lkp-a03/will-it-scale/unlink1
3026 ~ 8% +42.9% 4324 ~ 3% TOTAL cpuidle.C4-ATM.usage
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
258 ~ 9% -19.8% 207 ~13% brickland3/qperf/600s
258 ~ 9% -19.8% 207 ~13% TOTAL numa-vmstat.node1.nr_page_table_pages
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
1041 ~ 9% -19.3% 839 ~12% brickland3/qperf/600s
1041 ~ 9% -19.3% 839 ~12% TOTAL numa-meminfo.node1.PageTables
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
53062 ~ 2% -35.1% 34464 ~ 3% bens/qperf/600s
109531 ~13% +46.9% 160928 ~ 5% brickland3/qperf/600s
72773 ~ 1% +40.8% 102483 ~ 1% lkp-a03/will-it-scale/unlink1
67902 ~ 1% +13.8% 77302 ~ 3% lkp-sb03/qperf/600s
303269 ~ 6% +23.7% 375178 ~ 3% TOTAL softirqs.RCU
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
80344 ~ 1% -26.2% 59325 ~ 2% bens/qperf/600s
65012 ~ 0% -30.9% 44916 ~ 0% lkp-a03/will-it-scale/unlink1
145357 ~ 1% -28.3% 104241 ~ 1% TOTAL softirqs.SCHED
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
5062629 ~ 4% +26.6% 6408492 ~ 2% lkp-a03/will-it-scale/unlink1
5062629 ~ 4% +26.6% 6408492 ~ 2% TOTAL cpuidle.C1E-ATM.time
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
5015 ~ 0% -19.1% 4054 ~ 0% lkp-a03/will-it-scale/unlink1
5015 ~ 0% -19.1% 4054 ~ 0% TOTAL proc-vmstat.nr_slab_unreclaimable
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
402345 ~ 2% -18.9% 326211 ~ 0% brickland3/qperf/600s
402345 ~ 2% -18.9% 326211 ~ 0% TOTAL cpuidle.C6-IVT.usage
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
944 ~ 9% +17.0% 1105 ~ 8% brickland3/qperf/600s
944 ~ 9% +17.0% 1105 ~ 8% TOTAL cpuidle.C1E-IVT.usage
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
32956 ~ 3% +17.5% 38713 ~ 8% brickland3/qperf/600s
32956 ~ 3% +17.5% 38713 ~ 8% TOTAL numa-meminfo.node3.Active
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
19858 ~ 1% -18.1% 16267 ~ 0% lkp-a03/will-it-scale/unlink1
19858 ~ 1% -18.1% 16267 ~ 0% TOTAL meminfo.SUnreclaim
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
403926 ~ 0% -17.3% 333985 ~ 1% lkp-sb03/qperf/600s
403926 ~ 0% -17.3% 333985 ~ 1% TOTAL cpuidle.C7-SNB.usage
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
74689352 ~ 1% -13.3% 64771743 ~ 2% bens/qperf/600s
2539322 ~ 0% -20.6% 2015084 ~ 0% lkp-a03/will-it-scale/unlink1
77228675 ~ 1% -13.5% 66786827 ~ 2% TOTAL proc-vmstat.pgalloc_normal
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
3.019e+08 ~ 1% -13.3% 2.618e+08 ~ 2% bens/qperf/600s
4047283 ~ 0% -20.6% 3214087 ~ 0% lkp-a03/will-it-scale/unlink1
3.059e+08 ~ 1% -13.4% 2.65e+08 ~ 2% TOTAL proc-vmstat.pgfree
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
124425 ~ 0% -16.4% 103999 ~ 0% lkp-a03/will-it-scale/unlink1
124425 ~ 0% -16.4% 103999 ~ 0% TOTAL cpuidle.C6-ATM.usage
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
23538414 ~ 0% -12.9% 20506157 ~ 2% bens/qperf/600s
2496727 ~ 0% -19.5% 2010351 ~ 0% lkp-a03/will-it-scale/unlink1
26035141 ~ 0% -13.5% 22516509 ~ 2% TOTAL proc-vmstat.numa_hit
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
23538414 ~ 0% -12.9% 20506157 ~ 2% bens/qperf/600s
2496727 ~ 0% -19.5% 2010351 ~ 0% lkp-a03/will-it-scale/unlink1
26035141 ~ 0% -13.5% 22516509 ~ 2% TOTAL proc-vmstat.numa_local
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
4196 ~ 4% -18.1% 3438 ~ 4% brickland3/qperf/600s
4196 ~ 4% -18.1% 3438 ~ 4% TOTAL meminfo.PageTables
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
1036 ~ 4% -17.6% 853 ~ 4% brickland3/qperf/600s
1036 ~ 4% -17.6% 853 ~ 4% TOTAL proc-vmstat.nr_page_table_pages
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
5817789 ~ 8% +18.0% 6867050 ~ 7% lkp-sb03/qperf/600s
5817789 ~ 8% +18.0% 6867050 ~ 7% TOTAL meminfo.DirectMap2M
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
48.12 ~ 0% -11.7% 42.46 ~ 6% brickland3/qperf/600s
48.12 ~ 0% -11.7% 42.46 ~ 6% TOTAL turbostat.%pc2
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
6542 ~ 6% +12.6% 7364 ~ 4% brickland3/qperf/600s
6542 ~ 6% +12.6% 7364 ~ 4% TOTAL numa-meminfo.node3.Active(anon)
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
1636 ~ 6% +12.5% 1841 ~ 4% brickland3/qperf/600s
1636 ~ 6% +12.5% 1841 ~ 4% TOTAL numa-vmstat.node3.nr_active_anon
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
1415 ~ 3% +13.3% 1602 ~ 1% brickland3/qperf/600s
1415 ~ 3% +13.3% 1602 ~ 1% TOTAL numa-meminfo.node1.KernelStack
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
47493 ~ 4% -11.0% 42252 ~ 5% brickland3/qperf/600s
47493 ~ 4% -11.0% 42252 ~ 5% TOTAL numa-meminfo.node1.Slab
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
12789 ~ 1% -10.9% 11391 ~ 2% bens/qperf/600s
12789 ~ 1% -10.9% 11391 ~ 2% TOTAL softirqs.HRTIMER
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
1275 ~ 7% -12.2% 1120 ~ 4% brickland3/qperf/600s
1275 ~ 7% -12.2% 1120 ~ 4% TOTAL numa-vmstat.node1.nr_anon_pages
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
8813 ~ 4% -11.7% 7784 ~ 6% brickland3/qperf/600s
8813 ~ 4% -11.7% 7784 ~ 6% TOTAL numa-vmstat.node1.nr_slab_unreclaimable
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
35254 ~ 4% -11.7% 31138 ~ 6% brickland3/qperf/600s
35254 ~ 4% -11.7% 31138 ~ 6% TOTAL numa-meminfo.node1.SUnreclaim
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
5095 ~ 7% -12.1% 4478 ~ 4% brickland3/qperf/600s
5095 ~ 7% -12.1% 4478 ~ 4% TOTAL numa-meminfo.node1.AnonPages
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
6764 ~ 5% +11.5% 7542 ~ 4% brickland3/qperf/600s
6764 ~ 5% +11.5% 7542 ~ 4% TOTAL numa-meminfo.node3.AnonPages
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
1691 ~ 5% +11.4% 1884 ~ 4% brickland3/qperf/600s
1691 ~ 5% +11.4% 1884 ~ 4% TOTAL numa-vmstat.node3.nr_anon_pages
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
0.99 ~10% +16.0% 1.15 ~ 8% bens/qperf/600s
0.99 ~10% +16.0% 1.15 ~ 8% TOTAL perf-profile.cpu-cycles.tcp_rcv_established.tcp_v4_do_rcv.tcp_prequeue_process.tcp_recvmsg.inet_recvmsg
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
10946 ~ 2% -9.0% 9957 ~ 1% lkp-sb03/qperf/600s
10946 ~ 2% -9.0% 9957 ~ 1% TOTAL slabinfo.kmalloc-192.active_objs
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
0.78 ~10% +18.9% 0.92 ~ 7% lkp-sb03/qperf/600s
0.78 ~10% +18.9% 0.92 ~ 7% TOTAL perf-profile.cpu-cycles.enqueue_entity.enqueue_task_fair.enqueue_task.activate_task.ttwu_do_activate
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
4838 ~ 4% -7.7% 4465 ~ 2% brickland3/qperf/600s
4838 ~ 4% -7.7% 4465 ~ 2% TOTAL slabinfo.signal_cache.num_objs
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
4838 ~ 4% -7.7% 4465 ~ 2% brickland3/qperf/600s
4838 ~ 4% -7.7% 4465 ~ 2% TOTAL slabinfo.signal_cache.active_objs
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
10948 ~ 2% -8.9% 9977 ~ 1% lkp-sb03/qperf/600s
10948 ~ 2% -8.9% 9977 ~ 1% TOTAL slabinfo.kmalloc-192.num_objs
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
8138 ~ 7% -12.2% 7145 ~ 2% lkp-sb03/qperf/600s
8138 ~ 7% -12.2% 7145 ~ 2% TOTAL slabinfo.anon_vma.active_objs
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
8138 ~ 7% -12.2% 7145 ~ 2% lkp-sb03/qperf/600s
8138 ~ 7% -12.2% 7145 ~ 2% TOTAL slabinfo.anon_vma.num_objs
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
1877 ~ 0% -9.5% 1699 ~ 0% lkp-a03/will-it-scale/unlink1
1877 ~ 0% -9.5% 1699 ~ 0% TOTAL slabinfo.dentry.active_slabs
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
1877 ~ 0% -9.5% 1699 ~ 0% lkp-a03/will-it-scale/unlink1
1877 ~ 0% -9.5% 1699 ~ 0% TOTAL slabinfo.dentry.num_slabs
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
39391 ~ 0% -9.5% 35638 ~ 0% lkp-a03/will-it-scale/unlink1
39391 ~ 0% -9.5% 35638 ~ 0% TOTAL slabinfo.dentry.active_objs
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
39441 ~ 0% -9.5% 35702 ~ 0% lkp-a03/will-it-scale/unlink1
39441 ~ 0% -9.5% 35702 ~ 0% TOTAL slabinfo.dentry.num_objs
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
481253 ~ 0% -8.9% 438624 ~ 0% bens/qperf/600s
481253 ~ 0% -8.9% 438624 ~ 0% TOTAL softirqs.TIMER
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
1297 ~33% +565.9% 8640 ~ 7% bens/iperf/300s-tcp
2788 ~ 3% +588.8% 19204 ~ 4% bens/qperf/600s
1191 ~ 5% +1200.9% 15493 ~ 4% brickland3/qperf/600s
1535 ~ 9% +3404.4% 53799 ~ 0% lkp-a03/will-it-scale/unlink1
1135 ~26% +1195.9% 14709 ~ 4% lkp-sb03/qperf/600s
7946 ~13% +1307.4% 111846 ~ 2% TOTAL time.involuntary_context_switches
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
42351859 ~ 0% -5.3% 40114932 ~ 0% bens/qperf/600s
43015383 ~ 1% -4.4% 41143092 ~ 0% brickland3/qperf/600s
7205 ~14% +320.9% 30329 ~ 5% lkp-a03/will-it-scale/unlink1
85374447 ~ 1% -4.8% 81288354 ~ 0% TOTAL time.voluntary_context_switches
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
141174 ~ 1% -5.4% 133551 ~ 0% bens/qperf/600s
143982 ~ 1% -4.4% 137600 ~ 0% brickland3/qperf/600s
2199 ~ 3% +34.1% 2949 ~ 1% lkp-a03/will-it-scale/unlink1
287355 ~ 1% -4.6% 274101 ~ 0% TOTAL vmstat.system.cs
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
72398 ~ 1% -5.4% 68503 ~ 0% bens/qperf/600s
3708 ~ 0% +2.0% 3781 ~ 0% lkp-a03/will-it-scale/unlink1
8789 ~ 4% +22.3% 10749 ~15% lkp-sb03/qperf/600s
84895 ~ 1% -2.2% 83034 ~ 2% TOTAL vmstat.system.in
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
1.51 ~ 0% -5.3% 1.43 ~ 1% brickland3/qperf/600s
5.43 ~ 1% -2.8% 5.28 ~ 0% lkp-sb03/qperf/600s
6.94 ~ 1% -3.3% 6.71 ~ 0% TOTAL turbostat.%c0
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
146 ~ 0% -2.2% 143 ~ 0% bens/qperf/600s
147 ~ 1% -4.8% 140 ~ 1% brickland3/qperf/600s
293 ~ 0% -3.5% 283 ~ 0% TOTAL time.percent_of_cpu_this_job_got
71a9b26963f8c2d 5057f55e543b7859cfd26bc28
--------------- -------------------------
872 ~ 0% -2.3% 853 ~ 0% bens/qperf/600s
874 ~ 1% -4.6% 834 ~ 1% brickland3/qperf/600s
1747 ~ 0% -3.4% 1687 ~ 0% TOTAL time.system_time
Legend:
~XX% - standard deviation (percent)
[+-]XX% - percent change
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Jet