[of/platform] 4f7c8127d9: WARNING: CPU: 0 PID: 1 at kernel/locking/mutex.c:526 __mutex_lock_slowpath()
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
git://git.collabora.co.uk/git/user/tomeu/linux.git on-demand-probes-v7
commit 4f7c8127d91e752c59ed46f8b894d834740eefac
Author: Tomeu Vizoso <tomeu.vizoso(a)collabora.com>
AuthorDate: Tue Aug 11 10:07:10 2015 +0200
Commit: Tomeu Vizoso <tomeu.vizoso(a)collabora.com>
CommitDate: Fri Sep 11 10:46:01 2015 +0200
of/platform: Point to struct device from device node
When adding platform and AMBA devices, set the device node's device
member to point to it.
This speeds lookups considerably and is safe because we only create one
of these devices for any given device node.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso(a)collabora.com>
Series-changes: 5
- Set the pointer to struct device also for AMBA devices
- Unset the pointer when the device is about to be unregistered
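In other words, the node caches a back-pointer to the struct device created
for it, and the pointer is cleared again on removal. A rough sketch of the
idea based on the description above, not the actual diff (names assumed):
	/* struct device_node grows a back-pointer to its device */
	struct device *device;	/* the one platform/AMBA device for this node */
	/* when of_platform (or AMBA) code creates the device: */
	np->device = &pdev->dev;
	/* and before the device is unregistered (added in v5 of the series): */
	np->device = NULL;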
+------------------------------------------------------------+------------+------------+------------+
| | 2405e5d827 | 4f7c8127d9 | 0dd8ca95ba |
+------------------------------------------------------------+------------+------------+------------+
| boot_successes | 63 | 0 | 0 |
| boot_failures | 0 | 56 | 39 |
| WARNING:at_kernel/locking/mutex.c:#__mutex_lock_slowpath() | 0 | 56 | 39 |
| BUG:unable_to_handle_kernel | 0 | 56 | 39 |
| Oops | 0 | 56 | 39 |
| EIP_is_at__mutex_lock_slowpath | 0 | 56 | 39 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 56 | 39 |
| backtrace:of_unittest | 0 | 56 | 39 |
| backtrace:kernel_init_freeable | 0 | 56 | 39 |
+------------------------------------------------------------+------------+------------+------------+
[ 5.674028] /testcase-data/phandle-tests/consumer-a: arguments longer than property
[ 5.675552] irq: no irq domain found for /testcase-data/interrupts/intc0 !
[ 5.678686] ------------[ cut here ]------------
[ 5.679297] WARNING: CPU: 0 PID: 1 at kernel/locking/mutex.c:526 __mutex_lock_slowpath+0xa6/0x28d()
[ 5.680703] DEBUG_LOCKS_WARN_ON(l->magic != l)
[ 5.681326] Modules linked in:
[ 5.681854] CPU: 0 PID: 1 Comm: swapper Not tainted 4.2.0-next-20150911-00004-g4f7c812 #1
[ 5.682942] 00000000 c0057c90 c0057c64 c11197c9 c0057c80 c102c031 0000020e c1327581
[ 5.684028] d6001620 c0060000 c14da480 c0057c98 c102c06e 00000009 c0057c90 c140f4f4
[ 5.685154] c0057cac c0057cd4 c1327581 c1418698 0000020e c140f4f4 c14186d3 c14da480
[ 5.686367] Call Trace:
[ 5.686746] [<c11197c9>] dump_stack+0x16/0x18
[ 5.687408] [<c102c031>] warn_slowpath_common+0x79/0x90
[ 5.688139] [<c1327581>] ? __mutex_lock_slowpath+0xa6/0x28d
[ 5.688916] [<c102c06e>] warn_slowpath_fmt+0x26/0x2a
[ 5.689551] [<c1327581>] __mutex_lock_slowpath+0xa6/0x28d
[ 5.690208] [<c1040902>] ? __might_sleep+0x83/0x8b
[ 5.690821] [<c11c8311>] ? put_device+0xf/0x11
[ 5.691399] [<c1327786>] mutex_lock+0x1e/0x29
[ 5.691940] [<c11cb313>] device_release_driver+0x11/0x23
[ 5.692699] [<c11ca872>] bus_remove_device+0xb1/0xbe
[ 5.693316] [<c11c8aaa>] device_del+0x127/0x18d
[ 5.693891] [<c1328609>] ? _raw_spin_unlock+0x12/0x22
[ 5.694530] [<c1320c48>] ? klist_next+0x73/0x8d
[ 5.695084] [<c11c8313>] ? put_device+0x11/0x11
[ 5.695668] [<c11cc21d>] platform_device_del+0x13/0x52
[ 5.696300] [<c11cc267>] platform_device_unregister+0xb/0x15
[ 5.697009] [<c126d2b6>] of_platform_device_destroy+0x4b/0x67
[ 5.697743] [<c126d5d2>] of_platform_notify+0xa2/0xb6
[ 5.698384] [<c103e1b6>] notifier_call_chain+0x2d/0x46
[ 5.699014] [<c103e3d6>] __blocking_notifier_call_chain+0x2c/0x41
[ 5.699783] [<c103e3f7>] blocking_notifier_call_chain+0xc/0xe
[ 5.700507] [<c126da7d>] of_reconfig_notify+0x11/0x26
[ 5.701119] [<c126dab9>] of_property_notify+0x27/0x29
[ 5.708967] [<c126daf7>] __of_changeset_entry_notify+0x3c/0x8d
[ 5.709715] [<c126ca75>] ? __of_update_property_sysfs+0x2a/0x2e
[ 5.710467] [<c1327450>] ? __mutex_unlock_slowpath+0xc5/0xdb
[ 5.711155] [<c126e0e6>] of_changeset_apply+0x70/0x88
[ 5.711798] [<c1270bf3>] ? of_overlay_apply_one+0xc4/0x1a5
[ 5.712493] [<c1270fb3>] of_overlay_create+0x265/0x2db
[ 5.713125] [<c128f1e6>] of_unittest_apply_overlay+0x51/0xa8
[ 5.713911] [<c128f2fe>] of_unittest_apply_overlay_check+0x61/0xcb
[ 5.714686] [<c152c048>] of_unittest_overlay+0x141/0x792
[ 5.715349] [<c152d250>] of_unittest+0xbb7/0xbe0
[ 5.715927] [<c10aa6c1>] ? kfree+0xc3/0xcc
[ 5.716455] [<c10aa6c1>] ? kfree+0xc3/0xcc
[ 5.716960] [<c1000423>] ? do_one_initcall+0x79/0x145
[ 5.717600] [<c1000423>] ? do_one_initcall+0x79/0x145
[ 5.718216] [<c152c699>] ? of_unittest_overlay+0x792/0x792
[ 5.729154] [<c1000478>] do_one_initcall+0xce/0x145
[ 5.729797] [<c103d382>] ? parse_args+0x192/0x26e
[ 5.730405] [<c1507bd7>] ? kernel_init_freeable+0x11d/0x1b5
[ 5.731085] [<c1507bf7>] kernel_init_freeable+0x13d/0x1b5
[ 5.731768] [<c13224fe>] kernel_init+0x8/0xb0
[ 5.732313] [<c1328a40>] ret_from_kernel_thread+0x20/0x30
[ 5.732991] [<c13224f6>] ? rest_init+0x6f/0x6f
[ 5.733564] ---[ end trace d50f1d4221bfcf41 ]---
[ 5.734121] BUG: unable to handle kernel paging request at 6b6b6b6b
git bisect start 0dd8ca95ba44a0e5dc4b2db016f7acddbe03c0c5 64291f7db5bd8150a74ad2036f1037e6a0428df2 --
git bisect bad 74a7b91cd134ec253a69198c2ec71ce0f269d89b # 00:01 0- 16 Merge 'skn/build_test' into devel-spot-201509111851
git bisect bad ba832512ca50840d0f729bf3972cfd52edb2d0d5 # 00:01 0- 13 Merge 'krzk/defconfig-for-next' into devel-spot-201509111851
git bisect bad 4dcf2e574482399b84b3d934919abad8b832d748 # 00:02 0- 16 Merge 'efi/urgent' into devel-spot-201509111851
git bisect bad f1ef8ad54f2f325d271ff8caa10b160b0c9aeb9f # 00:02 0- 16 Merge 'platform-drivers-x86/testing' into devel-spot-201509111851
git bisect bad c87ebb1bb527d088298b6d114885bb5ef399641e # 00:02 0- 13 Merge 'tomeu/on-demand-probes-v7' into devel-spot-201509111851
git bisect good 191591d4ef5eefd9f2119844625cd97e6a76f005 # 00:04 20+ 0 0day base guard for 'devel-spot-201509111851'
git bisect good 4c82ac3c37363e8c4ded6a5fe1ec5fa756b34df3 # 00:11 22+ 0 xen-netback: respect user provided max_queues
git bisect good 653ebd75e9e469e99a40ab14128d915386dc78c6 # 00:23 22+ 0 rtc: pcf2127: use OFS flag to detect unreliable date and warn the user
git bisect good a794b4f3292160bb3fd0f1f90ec8df454e3b17b3 # 00:31 22+ 0 Merge tag 'for-linus-4.3' of git://git.code.sf.net/p/openipmi/linux-ipmi
git bisect good a8b6420a012c693415359b23e1f246fc11d30492 # 00:41 20+ 0 Merge remote-tracking branch 'libata/for-next'
git bisect good 9f045b5f6bc0a97dbbfafedc6d9b49296251afe6 # 00:51 22+ 0 Merge remote-tracking branch 'extcon/extcon-next'
git bisect good a7631f445c4b080b7873c5832d11a277223809fe # 01:01 22+ 2 Merge remote-tracking branch 'scsi/for-next'
git bisect good 2b5b7f6899eda1c168de2731951b271d10be2605 # 01:01 21+ 0 Merge remote-tracking branch 'livepatching/for-next'
git bisect good 3b1670dad7028f8a99cda6fda4dd8e565ff208a3 # 01:12 22+ 0 page-flags: introduce page flags policies wrt compound pages
git bisect good 9311c6028c0295c4b4c065fe7169fcb8226dd126 # 01:17 22+ 2 include/linux/poison.h: use POISON_POINTER_DELTA for poison pointers
git bisect good ce6cd5fd7ed87d6740a9af5fa62e7f0b758f6fb4 # 01:21 22+ 0 Merge branch 'akpm-current/current'
git bisect bad 4f7c8127d91e752c59ed46f8b894d834740eefac # 01:27 0- 18 of/platform: Point to struct device from device node
git bisect good e9da37cd43049f5133e5953fe4bc9f469ada7153 # 01:35 22+ 0 Add linux-next specific files for 20150911
git bisect good 577230c27d27539f29a35553f8824ce875da2956 # 01:40 22+ 0 driver core: Add pre_probe callback to bus_type
git bisect good 2405e5d8273b1e8cfb17c26c2b3da6f9d46c7433 # 01:40 24+ 0 ARM: amba: Move reading of periphid to pre_probe()
# first bad commit: [4f7c8127d91e752c59ed46f8b894d834740eefac] of/platform: Point to struct device from device node
git bisect good 2405e5d8273b1e8cfb17c26c2b3da6f9d46c7433 # 01:43 63+ 0 ARM: amba: Move reading of periphid to pre_probe()
# extra tests with DEBUG_INFO
git bisect bad 4f7c8127d91e752c59ed46f8b894d834740eefac # 01:47 0- 18 of/platform: Point to struct device from device node
# extra tests on HEAD of linux-devel/devel-spot-201509111851
git bisect bad 0dd8ca95ba44a0e5dc4b2db016f7acddbe03c0c5 # 01:47 0- 39 0day head guard for 'devel-spot-201509111851'
# extra tests on tree/branch tomeu/on-demand-probes-v7
git bisect bad 75620956721d4f7a7261313053b9e4f6ceeee12a # 01:51 0- 52 of/platform: Defer probes of registered devices
# extra tests with first bad commit reverted
# extra tests on tree/branch linus/master
git bisect good 64d1def7d33856824d2c5c6fd6d4579d4d54bb87 # 02:04 62+ 0 Merge tag 'sound-fix-4.3-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound
# extra tests on tree/branch linux-next/master
git bisect good e9da37cd43049f5133e5953fe4bc9f469ada7153 # 02:06 62+ 0 Add linux-next specific files for 20150911
---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/lkp                          Intel Corporation
[of/platform] 8ea9d35082: WARNING: CPU: 1 PID: 1 at fs/kernfs/dir.c:1276 kernfs_remove_by_name_ns()
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
git://git.collabora.co.uk/git/user/tomeu/linux.git on-demand-probes-v7
commit 8ea9d35082dd906c11679a77fbe35ef40e27a393
Author: Tomeu Vizoso <tomeu.vizoso(a)collabora.com>
AuthorDate: Tue Aug 11 10:07:10 2015 +0200
Commit: Tomeu Vizoso <tomeu.vizoso(a)collabora.com>
CommitDate: Thu Sep 10 16:54:14 2015 +0200
of/platform: Point to struct device from device node
When adding a platform device, set the device node's device member to
point to it.
This speeds lookups considerably and is safe because we only create one
platform device for any given device node.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso(a)collabora.com>
+--------------------------------------------------------+------------+------------+------------+
| | 1dd2ec2649 | 8ea9d35082 | ebe876e33f |
+--------------------------------------------------------+------------+------------+------------+
| boot_successes | 63 | 0 | 0 |
| boot_failures | 0 | 22 | 13 |
| WARNING:at_fs/kernfs/dir.c:#kernfs_remove_by_name_ns() | 0 | 22 | 13 |
| WARNING:at_fs/sysfs/dir.c:#sysfs_warn_dup() | 0 | 22 | 12 |
| backtrace:of_unittest | 0 | 22 | 13 |
| backtrace:kernel_init_freeable | 0 | 22 | 13 |
| BUG:unable_to_handle_kernel | 0 | 0 | 1 |
| Oops | 0 | 0 | 1 |
| EIP_is_at__device_release_driver | 0 | 0 | 1 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 0 | 1 |
+--------------------------------------------------------+------------+------------+------------+
[ 4.283141] ### dt-test ### FAIL of_unittest_platform_populate():812 device didn't get destroyed 'dev'
[ 4.283905] ### dt-test ### FAIL of_unittest_platform_populate():812 device didn't get destroyed 'dev'
[ 4.285495] ------------[ cut here ]------------
[ 4.285886] WARNING: CPU: 1 PID: 1 at fs/kernfs/dir.c:1276 kernfs_remove_by_name_ns+0x46/0xb6()
[ 4.286734] kernfs: can not remove 'driver', no directory
[ 4.287180] Modules linked in:
[ 4.287451] CPU: 1 PID: 1 Comm: swapper/0 Not tainted 4.2.0-next-20150910-00004-g8ea9d35 #2
[ 4.288134] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140531_083030-gandalf 04/01/2014
[ 4.288960] 00000001 00000000 00000001 8010dc10 8141e2bd 8010dc3c 00000001 81b0bf24
[ 4.289701] 8010dc2c 8106ae34 000004fc 8129dc5d 00000001 00000000 81aecea4 8010dc44
[ 4.290443] 8106ae76 00000009 8010dc3c 81b0bfe2 8010dc58 8010dc6c 8129dc5d 81b0bf24
[ 4.291183] Call Trace:
[ 4.291395] [<8141e2bd>] dump_stack+0x7b/0xf0
[ 4.291763] [<8106ae34>] warn_slowpath_common+0xb7/0xce
[ 4.292214] [<8129dc5d>] ? kernfs_remove_by_name_ns+0x46/0xb6
[ 4.292695] [<8106ae76>] warn_slowpath_fmt+0x2b/0x2f
[ 4.293114] [<8129dc5d>] kernfs_remove_by_name_ns+0x46/0xb6
[ 4.293578] [<812a0733>] sysfs_remove_link+0x36/0x3a
[ 4.293997] [<8155d555>] driver_sysfs_remove+0x45/0x49
[ 4.294429] [<8155e479>] __device_release_driver+0x61/0x184
[ 4.294892] [<8155e5bb>] device_release_driver+0x1f/0x2a
[ 4.295348] [<8155ccef>] bus_remove_device+0x198/0x1af
[ 4.295778] [<81559225>] device_del+0x1d9/0x285
[ 4.296162] [<81559225>] ? device_del+0x1d9/0x285
[ 4.296555] [<8155839a>] ? put_device+0x2f/0x2f
[ 4.296936] [<81561041>] platform_device_del+0x32/0x92
[ 4.297372] [<815610b1>] platform_device_unregister+0x10/0x1a
[ 4.297852] [<816df685>] of_platform_device_destroy+0x90/0xab
[ 4.298339] [<816dfc17>] of_platform_notify+0x120/0x134
[ 4.298786] [<81090dd8>] notifier_call_chain+0x32/0x7b
[ 4.299226] [<8109131a>] __blocking_notifier_call_chain+0x48/0x5d
[ 4.299728] [<81091340>] blocking_notifier_call_chain+0x11/0x13
[ 4.300225] [<816e05ba>] of_reconfig_notify+0x16/0x2b
[ 4.300645] [<816e061b>] of_property_notify+0x4c/0x54
[ 4.301074] [<816e0670>] __of_changeset_entry_notify+0x4d/0xb5
[ 4.301561] [<8177b731>] ? __mutex_unlock_slowpath+0x230/0x250
[ 4.302056] [<816e0fc2>] of_changeset_apply+0x11d/0x181
[ 4.302495] [<816dc647>] ? of_get_next_child+0x34/0x3b
[ 4.302926] [<816e67c0>] ? of_overlay_apply_one+0x137/0x2ab
[ 4.303395] [<816e6f8d>] of_overlay_create+0x3b2/0x43f
[ 4.303823] [<816e6f8d>] ? of_overlay_create+0x3b2/0x43f
[ 4.304273] [<81705c8a>] of_unittest_apply_overlay+0x71/0xf8
[ 4.304791] [<81705e28>] of_unittest_apply_overlay_check+0x7e/0x116
[ 4.305323] [<81eefad6>] of_unittest+0x1203/0x21d5
[ 4.305730] [<810c5520>] ? debug_check_no_locks_freed+0x108/0x120
[ 4.306243] [<811eb1dc>] ? kfree+0x433/0x460
[ 4.306601] [<81000545>] do_one_initcall+0x188/0x293
[ 4.307021] [<81eee8d3>] ? of_unittest_platform_populate+0x6a5/0x6a5
[ 4.307545] [<81000545>] ? do_one_initcall+0x188/0x293
[ 4.307980] [<8108f229>] ? parse_args+0x4c5/0x59b
[ 4.308377] [<81e8d5e3>] ? initcall_blacklist+0xe5/0xe5
[ 4.308821] [<81e8e173>] kernel_init_freeable+0x1c4/0x286
[ 4.309277] [<81e8e173>] ? kernel_init_freeable+0x1c4/0x286
[ 4.309743] [<8176ec9c>] kernel_init+0xe/0x13e
[ 4.310124] [<8177e981>] ret_from_kernel_thread+0x21/0x30
[ 4.310573] [<8176ec8e>] ? rest_init+0xab/0xab
[ 4.310950] ---[ end trace bd7d5a1540808323 ]---
[ 4.311355] ### dt-test ### FAIL of_unittest_apply_overlay_check():1226 overlay @"/testcase-data/overlay1" failed to create @"/testcase-data/overlay-node/test-bus/test-unittest1" enabled
git bisect start ebe876e33f433844a589613b9cc90fa53f046961 22dc312d56ba077db27a9798b340e7d161f1df05 --
git bisect bad 2486b1ca93d8761f3641b2a7cdb3ec170ec84aba # 01:40 0- 19 drm/tegra: Probe dpaux devices on demand
git bisect bad e9b51fa1c2d3e2559fde5a01f983b14fafe1aacf # 01:46 0- 3 of: add function to allow probing a device from a OF node
git bisect good fff867fe2684e3f6f76b21d6591c3fa6e5a82280 # 01:54 22+ 0 driver core: Add pre_probe callback to bus_type
git bisect good 1dd2ec2649d71e55fc3084f5fe7c96538e9b716a # 02:00 22+ 0 ARM: amba: Move reading of periphid to pre_probe()
git bisect bad 8ea9d35082dd906c11679a77fbe35ef40e27a393 # 02:05 0- 22 of/platform: Point to struct device from device node
# first bad commit: [8ea9d35082dd906c11679a77fbe35ef40e27a393] of/platform: Point to struct device from device node
git bisect good 1dd2ec2649d71e55fc3084f5fe7c96538e9b716a # 02:09 63+ 0 ARM: amba: Move reading of periphid to pre_probe()
# extra tests with DEBUG_INFO
git bisect bad 8ea9d35082dd906c11679a77fbe35ef40e27a393 # 02:13 0- 15 of/platform: Point to struct device from device node
# extra tests on HEAD of tomeu/on-demand-probes-v7
git bisect bad ebe876e33f433844a589613b9cc90fa53f046961 # 02:13 0- 13 of/platform: Defer probes of registered devices
# extra tests on tree/branch tomeu/on-demand-probes-v7
git bisect bad ebe876e33f433844a589613b9cc90fa53f046961 # 02:13 0- 13 of/platform: Defer probes of registered devices
# extra tests with first bad commit reverted
# extra tests on tree/branch linus/master
git bisect good b8889c4fc6ba03e289cec6a4d692f6f080a55e53 # 02:20 63+ 0 Merge tag 'tty-4.3-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty
# extra tests on tree/branch linux-next/master
git bisect good 22dc312d56ba077db27a9798b340e7d161f1df05 # 02:24 63+ 0 Add linux-next specific files for 20150910
This script may reproduce the error.
----------------------------------------------------------------------------
#!/bin/bash
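# Usage: reproduce.sh <kernel-image>, e.g. a bzImage built from the first bad commit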
kernel=$1
initrd=quantal-core-i386.cgz
wget --no-clobber https://github.com/fengguang/reproduce-kernel-bug/raw/master/initrd/$initrd
kvm=(
qemu-system-x86_64
-enable-kvm
-cpu kvm64
-kernel $kernel
-initrd $initrd
-m 300
-smp 2
-device e1000,netdev=net0
-netdev user,id=net0
-boot order=nc
-no-reboot
-watchdog i6300esb
-rtc base=localtime
-serial stdio
-display none
-monitor null
)
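# Kernel command line: the panic/watchdog knobs below make the guest panic
# (and, with -no-reboot, exit) quickly on any hang or oops, so a failing
# boot terminates the run instead of wedging.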
append=(
hung_task_panic=1
earlyprintk=ttyS0,115200
systemd.log_level=err
debug
apic=debug
sysrq_always_enabled
rcupdate.rcu_cpu_stall_timeout=100
panic=-1
softlockup_panic=1
nmi_watchdog=panic
oops=panic
load_ramdisk=2
prompt_ramdisk=0
console=ttyS0,115200
console=tty0
vga=normal
root=/dev/ram0
rw
drbd.minor_count=8
)
"${kvm[@]}" --append "${append[*]}"
----------------------------------------------------------------------------
---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/lkp                          Intel Corporation
[locktorture] 5f6a140f30: kernel BUG at drivers/base/driver.c:153!
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git dev.2015.09.01a
commit 5f6a140f30b4296ac6ef2dd275ed513c27180fbe
Author: Paul E. McKenney <paulmck(a)linux.vnet.ibm.com>
AuthorDate: Sat Aug 29 14:46:29 2015 -0700
Commit: Paul E. McKenney <paulmck(a)linux.vnet.ibm.com>
CommitDate: Mon Aug 31 14:37:46 2015 -0700
locktorture: Add torture tests for percpu_rwsem
This commit adds percpu_rwsem tests based on the earlier rwsem tests.
Signed-off-by: Paul E. McKenney <paulmck(a)linux.vnet.ibm.com>
Cc: Oleg Nesterov <oleg(a)redhat.com>
Cc: Davidlohr Bueso <dave(a)stgolabs.net>
+------------------------------------------+------------+------------+------------+
| | 04be76a9b0 | 5f6a140f30 | 325c0efb5f |
+------------------------------------------+------------+------------+------------+
| boot_successes | 0 | 0 | 0 |
| boot_failures | 66 | 22 | 19 |
| kernel_BUG_at_drivers/base/driver.c | 66 | 22 | |
| invalid_opcode | 66 | 22 | 19 |
| EIP_is_at_driver_register | 66 | 22 | |
| Kernel_panic-not_syncing:Fatal_exception | 66 | 22 | 19 |
| backtrace:tusb1210_driver_init | 66 | 22 | |
| backtrace:kernel_init_freeable | 66 | 22 | 19 |
| kernel_BUG_at_include/linux/pagemap.h | 0 | 0 | 19 |
| EIP_is_at_find_get_entry | 0 | 0 | 19 |
| backtrace:vfs_write | 0 | 0 | 19 |
| backtrace:SyS_write | 0 | 0 | 19 |
| backtrace:populate_rootfs | 0 | 0 | 19 |
+------------------------------------------+------------+------------+------------+
[ 1.548005] tsc: Refined TSC clocksource calibration: 2693.507 MHz
[ 1.548484] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x26d348cd811, max_idle_ns: 440795335366 ns
[ 1.549281] ------------[ cut here ]------------
[ 1.549643] kernel BUG at drivers/base/driver.c:153!
[ 1.550005] invalid opcode: 0000 [#1]
[ 1.550005] CPU: 0 PID: 1 Comm: swapper Not tainted 4.2.0-rc1-00064-g5f6a140 #2
[ 1.550005] task: cf440000 ti: cf43c000 task.ti: cf43c000
[ 1.550005] EIP: 0060:[<c1309078>] EFLAGS: 00010246 CPU: 0
[ 1.550005] EIP is at driver_register+0x98/0xd0
[ 1.550005] EAX: c1b19bac EBX: cf4d7a80 ECX: 00000000 EDX: c1b51b40
[ 1.550005] ESI: c1bcd884 EDI: c1ba72e2 EBP: cf43df28 ESP: cf43df24
[ 1.550005] DS: 007b ES: 007b FS: 0000 GS: 0000 SS: 0068
[ 1.550005] CR0: 80050033 CR2: 00000000 CR3: 01bf8000 CR4: 00040690
[ 1.550005] Stack:
[ 1.550005] c1435019 cf43df30 c1ba72ef cf43df90 c1b8ab5c cf43df48 c1b8a4bd 00000000
[ 1.550005] cffdbf00 cf43df7c c10906fd 00000000 00060006 cffdbf48 c1aeedb0 c1a69053
[ 1.550005] cffdbf47 cffdbf48 00000200 00000006 c1be7c1c 0000025e cf43dfa4 c1b8aca1
[ 1.550005] Call Trace:
[ 1.550005] [<c1435019>] ? ulpi_register_driver+0x19/0x30
[ 1.550005] [<c1ba72ef>] tusb1210_driver_init+0xd/0xf
[ 1.550005] [<c1b8ab5c>] do_one_initcall+0xcb/0x14d
[ 1.550005] [<c1b8a4bd>] ? repair_env_string+0x12/0x54
[ 1.550005] [<c10906fd>] ? parse_args+0x1bd/0x3c0
[ 1.550005] [<c1b8aca1>] ? kernel_init_freeable+0xc3/0x15b
[ 1.550005] [<c1b8acc1>] kernel_init_freeable+0xe3/0x15b
[ 1.550005] [<c178adb8>] kernel_init+0x8/0xc0
[ 1.550005] [<c1798a80>] ret_from_kernel_thread+0x20/0x30
[ 1.550005] [<c178adb0>] ? rest_init+0xb0/0xb0
[ 1.550005] Code: 85 c0 75 17 8b 43 3c 31 d2 e8 25 fe ec ff 8d 65 f8 89 f0 5b 5e 5d c3 8d 74 26 00 89 d8 e8 01 eb ff ff 8d 65 f8 89 f0 5b 5e 5d c3 <0f> 0b ff 33 68 58 f9 a8 c1 e8 f5 4d 48 00 59 8b 53 04 5e eb 92
[ 1.550005] EIP: [<c1309078>] driver_register+0x98/0xd0 SS:ESP 0068:cf43df24
[ 1.563040] ---[ end trace 23c59b24b9f0a1e9 ]---
[ 1.563391] Kernel panic - not syncing: Fatal exception
git bisect start 325c0efb5f89af4e5e3d0575ea1b042375dedf6b 64291f7db5bd8150a74ad2036f1037e6a0428df2 --
git bisect bad b570ee62511a07540aac9c8953dabee4f5e491f5 # 23:19 0- 26 Merge 'iommu/master' into devel-spot-201509091835
git bisect bad 7d4e51e4b65c3ca7b64634a1629453329b02d448 # 23:19 0- 26 Merge 'renesas-drivers/topic/gen3-integration-v8' into devel-spot-201509091835
git bisect good e5af4c798148137d0b22bf5ccc1e92523ec46f6d # 23:35 550+ 0 Merge 'platform-drivers-x86/testing' into devel-spot-201509091835
git bisect good 75520c559502b803aaa6c2906e40fdb61e2497ef # 23:43 550+ 0 Merge 'dynticks/sched/core' into devel-spot-201509091835
git bisect good 296e6185a1ec3cd3ccbba9477919a727b595e9e9 # 23:57 546+ 2 Merge 'renesas-drivers/topic/r8a7795-scif-v1' into devel-spot-201509091835
git bisect bad e53433a47052636a7625da8d08bb231d49b16be5 # 23:57 0- 26 Merge 'rcu/dev.2015.09.01a' into devel-spot-201509091835
git bisect good b25a776a7fbeb80c5cec9e17d6e8cc5ea2e9873a # 00:12 550+ 1 Merge 'renesas-drivers/topic/rcar-dmac-residue-v1' into devel-spot-201509091835
git bisect bad ba42cf33babc6f93e1a44c849fa5ea6e0ad4ce06 # 00:12 0- 22 rcu: Eliminate panic when silly boot-time fanout specified
git bisect bad 6c0797038c459497744235dc5255c1f12bd39f74 # 00:12 0- 22 locking/percpu-rwsem: Make use of the rcu_sync infrastructure
git bisect bad a88f9ff7d3984bb2c5f456777b0df17b7eeb766e # 00:12 0- 22 rcu_sync: Simplify rcu_sync using new rcu_sync_ops structure
git bisect bad 9a5d9131db0a16ae911236ed6aa793696c544eba # 00:12 0- 22 torture: Consolidate cond_resched_rcu_qs() into stutter_wait()
git bisect bad 5f6a140f30b4296ac6ef2dd275ed513c27180fbe # 00:12 0- 22 locktorture: Add torture tests for percpu_rwsem
# first bad commit: [5f6a140f30b4296ac6ef2dd275ed513c27180fbe] locktorture: Add torture tests for percpu_rwsem
git bisect bad 04be76a9b067f1f9ecc2605ad5ef26dc5ec01e41 # 00:12 0- 66 locktorture: Support rtmutex torturing
# extra tests with DEBUG_INFO
git bisect bad 5f6a140f30b4296ac6ef2dd275ed513c27180fbe # 00:18 0- 11 locktorture: Add torture tests for percpu_rwsem
# extra tests on HEAD of linux-devel/devel-spot-201509091835
git bisect bad 325c0efb5f89af4e5e3d0575ea1b042375dedf6b # 00:18 0- 19 0day head guard for 'devel-spot-201509091835'
# extra tests on tree/branch rcu/dev.2015.09.01a
git bisect bad caa7dd68d68438b256ee830f4aa6b2ddd04b4575 # 00:25 0- 7 locktorture: Fix module unwind when bad torture_type specified
# extra tests with first bad commit reverted
git bisect bad 219829903bebb1168928ed7aaf8cfd083ab98b26 # 00:54 0- 12 Revert "locktorture: Add torture tests for percpu_rwsem"
# extra tests on tree/branch linus/master
git bisect good 26d2177e977c912863ac04f6c1a967e793ca3a56 # 01:29 1010+ 1 Merge tag 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/dledford/rdma
# extra tests on tree/branch linux-next/master
git bisect good 72c9d8043fdc87832802de0b7a7129d6fc4c4c70 # 02:02 1010+ 0 Add linux-next specific files for 20150909
This script may reproduce the error.
----------------------------------------------------------------------------
#!/bin/bash
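# Usage: reproduce.sh <kernel-image>, e.g. a bzImage built at the bisected commit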
kernel=$1
initrd=yocto-minimal-i386.cgz
wget --no-clobber https://github.com/fengguang/reproduce-kernel-bug/raw/master/initrd/$initrd
kvm=(
qemu-system-x86_64
-enable-kvm
-cpu Haswell,+smep,+smap
-kernel $kernel
-initrd $initrd
-m 256
-smp 1
-device e1000,netdev=net0
-netdev user,id=net0
-boot order=nc
-no-reboot
-watchdog i6300esb
-rtc base=localtime
-serial stdio
-display none
-monitor null
)
append=(
hung_task_panic=1
earlyprintk=ttyS0,115200
systemd.log_level=err
debug
apic=debug
sysrq_always_enabled
rcupdate.rcu_cpu_stall_timeout=100
panic=-1
softlockup_panic=1
nmi_watchdog=panic
oops=panic
load_ramdisk=2
prompt_ramdisk=0
console=ttyS0,115200
console=tty0
vga=normal
root=/dev/ram0
rw
drbd.minor_count=8
)
"${kvm[@]}" --append "${append[*]}"
----------------------------------------------------------------------------
---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/lkp                          Intel Corporation
Re: [LKP] [rcu] kernel BUG at include/linux/pagemap.h:149!
by Paul E. McKenney
On Thu, Sep 10, 2015 at 06:25:13PM +0800, Boqun Feng wrote:
> Hi Fengguang,
>
> On Thu, Sep 10, 2015 at 08:57:08AM +0800, Fengguang Wu wrote:
> > Greetings,
> >
> > 0day kernel testing robot got the below dmesg and the first bad commit is
> >
> > https://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git dev.2015.09.01a
> >
> > commit d0a795e7964cca98fbefefef5e0c330b24d04f50
> > Author: Paul E. McKenney <paulmck(a)linux.vnet.ibm.com>
> > AuthorDate: Thu Jul 30 16:55:38 2015 -0700
> > Commit: Paul E. McKenney <paulmck(a)linux.vnet.ibm.com>
> > CommitDate: Mon Aug 31 14:38:03 2015 -0700
> >
> > rcu: Don't disable preemption for Tiny and Tree RCU readers
> >
> > Because preempt_disable() maps to barrier() for non-debug builds,
> > it forces the compiler to spill and reload registers. Because Tree
> > RCU and Tiny RCU now only appear in CONFIG_PREEMPT=n builds, these
> > barrier() instances generate needless extra code for each instance of
> > rcu_read_lock() and rcu_read_unlock(). This extra code slows down Tree
> > RCU and bloats Tiny RCU.
> >
> > This commit therefore removes the preempt_disable() and preempt_enable()
> > from the non-preemptible implementations of __rcu_read_lock() and
> > __rcu_read_unlock(), respectively.
> >
> > Signed-off-by: Paul E. McKenney <paulmck(a)linux.vnet.ibm.com>
> >
> > +------------------------------------------------+------------+------------+------------+
> > | | 2d0f6efd31 | d0a795e796 | d0a795e796 |
> > +------------------------------------------------+------------+------------+------------+
> > | boot_successes | 63 | 0 | 0 |
> > | boot_failures | 2 | 42 | 42 |
> > | IP-Config:Auto-configuration_of_network_failed | 2 | | |
> > | kernel_BUG_at_include/linux/pagemap.h | 0 | 42 | 42 |
> > | invalid_opcode | 0 | 42 | 42 |
> > | EIP_is_at_page_cache_get_speculative | 0 | 42 | 42 |
> > | Kernel_panic-not_syncing:Fatal_exception | 0 | 42 | 42 |
> > | backtrace:vfs_write | 0 | 42 | 42 |
> > | backtrace:SyS_write | 0 | 42 | 42 |
> > | backtrace:populate_rootfs | 0 | 42 | 42 |
> > | backtrace:kernel_init_freeable | 0 | 42 | 42 |
> > +------------------------------------------------+------------+------------+------------+
> >
> > dmesg for d0a795e796 and 2d0f6efd31 are both attached.
> >
> > [ 0.205937] PCI: CLS 0 bytes, default 32
> > [ 0.206554] Unpacking initramfs...
> > [ 0.208263] ------------[ cut here ]------------
> > [ 0.209011] kernel BUG at include/linux/pagemap.h:149!
>
> Code here is:
>
> #ifdef CONFIG_TINY_RCU
> # ifdef CONFIG_PREEMPT_COUNT
> VM_BUG_ON(!in_atomic()); <-- BUG triggered here.
> # endif
> ...
> #endif
>
> This indicates that CONFIG_TINY_RCU and CONFIG_PREEMPT_COUNT are both y.
> Normally, IIUC, such a combination is impossible or meaningless, because
> TINY_RCU is for !PREEMPT kernels. However, according to commit e8f7c70f4 ("sched:
> Make sleeping inside spinlock detection working in !CONFIG_PREEMPT"),
> maintaining preempt counts in !PREEMPT kernel makes sense for finding
> preempt-related bugs.
Good analysis, thank you!
> So a possible fix would be to keep counting preempt_count in
> rcu_read_lock and rcu_read_unlock if PREEMPT_COUNT is y, for debugging
> purposes:
>
> diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> index 07f9b95..887bf5f 100644
> --- a/include/linux/rcupdate.h
> +++ b/include/linux/rcupdate.h
> @@ -297,10 +297,16 @@ void synchronize_rcu(void);
>
> static inline void __rcu_read_lock(void)
> {
> +#ifdef CONFIG_PREEMPT_COUNT
> + preempt_disable();
> +#endif
We can save a line as follows:
	if (IS_ENABLED(CONFIG_PREEMPT_COUNT))
		preempt_disable();
This approach also has the advantage of letting the compiler look at
more of the code, so that compiler errors in strange combinations of
configurations are less likely to be missed.
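Spelled out, a sketch of the helpers with that suggestion applied (not
necessarily the patch that was eventually merged):

static inline void __rcu_read_lock(void)
{
	/*
	 * Keep preempt_count tracking alive in CONFIG_PREEMPT_COUNT=y
	 * (debug) builds; IS_ENABLED() is a compile-time constant, so
	 * this compiles away entirely when the option is off while the
	 * compiler still sees the body in every configuration.
	 */
	if (IS_ENABLED(CONFIG_PREEMPT_COUNT))
		preempt_disable();
}

static inline void __rcu_read_unlock(void)
{
	if (IS_ENABLED(CONFIG_PREEMPT_COUNT))
		preempt_enable();
}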
> }
>
> static inline void __rcu_read_unlock(void)
> {
> +#ifdef CONFIG_PREEMPT_COUNT
> + preempt_enable();
> +#endif
> }
>
> I did a simple boot test with the same configuration on an x86 guest
> and didn't see this error again.
>
> (Also add Frederic Weisbecker to CCed)
Would you like to send me a replacement patch?
Thanx, Paul
> Regards,
> Boqun
>
> > [ 0.209033] invalid opcode: 0000 [#1] DEBUG_PAGEALLOC
> > [ 0.209033] CPU: 0 PID: 1 Comm: swapper Not tainted 4.2.0-rc1-00078-gd0a795e #1
> > [ 0.209033] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140531_083030-gandalf 04/01/2014
> > [ 0.209033] task: 8b1aa040 ti: 8b1ac000 task.ti: 8b1ac000
> > [ 0.209033] EIP: 0060:[<7dccf506>] EFLAGS: 00010202 CPU: 0
> > [ 0.209033] EIP is at page_cache_get_speculative+0x6c/0x149
> > [ 0.209033] EAX: 00000001 EBX: 8ac00101 ECX: 00000000 EDX: 00000001
> > [ 0.209033] ESI: 8bb946e0 EDI: 00000001 EBP: 8b1adc7c ESP: 8b1adc70
> > [ 0.209033] DS: 007b ES: 007b FS: 0000 GS: 0000 SS: 0068
> > [ 0.209033] CR0: 8005003b CR2: ffffffff CR3: 069fb000 CR4: 00000690
> > [ 0.209033] Stack:
> > [ 0.209033] 8ac0012c 8bb946e0 00000000 8b1adc9c 7dcd1219 8ad5ac7c 00000007 00000002
> > [ 0.209033] 000200d2 00000000 00000000 8b1adcc0 7dcd136c 00000007 00000000 8ad5ac78
> > [ 0.209033] 0000000f 000200d2 00000000 00000000 8b1adcd4 7dcd3609 000200d2 000009e8
> > [ 0.209033] Call Trace:
> > [ 0.209033] [<7dcd1219>] find_get_entry+0xce/0x133
> > [ 0.209033] [<7dcd136c>] pagecache_get_page+0x1d/0x39d
> > [ 0.209033] [<7dcd3609>] grab_cache_page_write_begin+0x3a/0x68
> > [ 0.209033] [<7dd58353>] simple_write_begin+0x2d/0xbf
> > [ 0.209033] [<7dcd3703>] generic_perform_write+0xcc/0x26f
> > [ 0.209033] [<7dd4c3e5>] ? file_update_time+0x12b/0x135
> > [ 0.209033] [<7dcd3a79>] __generic_file_write_iter+0x1d3/0x233
> > [ 0.209033] [<7dcd3b2f>] generic_file_write_iter+0x56/0x128
> > [ 0.209033] [<7dd2d866>] __vfs_write+0x99/0x106
> > [ 0.209033] [<7dd2da75>] vfs_write+0xf7/0x136
> > [ 0.209033] [<7dd2dbbc>] SyS_write+0x61/0xa7
> > [ 0.209033] [<7e967bdd>] xwrite+0x23/0xa1
> > [ 0.209033] [<7e967d10>] do_copy+0xb5/0xf9
> > [ 0.209033] [<7e96781a>] write_buffer+0x1d/0x2c
> > [ 0.209033] [<7e9679c1>] flush_buffer+0x3c/0xb0
> > [ 0.209033] [<7e98e1dc>] gunzip+0x3a0/0x4b0
> > [ 0.209033] [<7e98de34>] ? bunzip2+0x60c/0x60c
> > [ 0.209033] [<7e98de3c>] ? nofill+0x8/0x8
> > [ 0.209033] [<7e96838a>] unpack_to_rootfs+0x1b0/0x2ee
> > [ 0.209033] [<7e967985>] ? error+0x2c/0x2c
> > [ 0.209033] [<7e967959>] ? do_start+0x1b/0x1b
> > [ 0.209033] [<7e968546>] populate_rootfs+0x7e/0x199
> > [ 0.209033] [<7e966f01>] do_one_initcall+0x130/0x216
> > [ 0.209033] [<7e966506>] ? repair_env_string+0x29/0x96
> > [ 0.209033] [<7e9684c8>] ? unpack_to_rootfs+0x2ee/0x2ee
> > [ 0.209033] [<7dc59bec>] ? parse_args+0x343/0x40a
> > [ 0.209033] [<7e9670be>] ? kernel_init_freeable+0xd7/0x1b4
> > [ 0.209033] [<7e9670de>] kernel_init_freeable+0xf7/0x1b4
> > [ 0.209033] [<7e2736cf>] kernel_init+0x9/0x139
> > [ 0.209033] [<7e281f00>] ret_from_kernel_thread+0x20/0x30
> > [ 0.209033] [<7e2736c6>] ? rest_init+0x11e/0x11e
> > [ 0.209033] Code: ff ff ff 7f b8 dc 98 77 7e 0f 94 c3 31 c9 0f b6 fb 89 fa e8 c7 50 fe ff 8b 04 bd dc 8a 7d 7e 40 84 db 89 04 bd dc 8a 7d 7e 74 02 <0f> 0b 8b 1e 31 c9 b8 a0 98 77 7e c1 eb 0f 83 e3 01 89 da e8 9c
> > [ 0.209033] EIP: [<7dccf506>] page_cache_get_speculative+0x6c/0x149 SS:ESP 0068:8b1adc70
> > [ 0.240927] ---[ end trace b15ce49b08a81922 ]---
> > [ 0.241403] Kernel panic - not syncing: Fatal exception
> >
> > git bisect start d0a795e7964cca98fbefefef5e0c330b24d04f50 d770e558e21961ad6cfdf0ff7df0eb5d7d4f0754 --
> > git bisect good 8ff4fbfd69a6c7b9598f8c1f2df34f89bac02c1a # 06:49 22+ 0 Merge branches 'fixes.2015.07.22a' and 'initexp.2015.08.04a' into HEAD
> > git bisect good 8611bc8cfdb809852e15c8c8786fe0fbd72e7da7 # 06:54 22+ 0 rcu_sync: Introduce rcu_sync_dtor()
> > git bisect good d74251b8bae07a4957c9f3ccecbdb7f84d790f38 # 07:03 22+ 0 locking/percpu-rwsem: Clean up the lockdep annotations in percpu_down_read()
> > git bisect good 51899e3fb9b8de48dff0ab1a9cc5250c6d4020ac # 07:09 22+ 0 rcu: Use rcu_callback_t in call_rcu*() and friends
> > git bisect good e8ee682bcce44c81d8363009b08f115e964dbd0b # 07:14 21+ 2 rcu: Use call_rcu_func_to to replace explicit type equivalents
> > git bisect good 2d0f6efd311165fe06cecb29475e89e550b92d8c # 07:22 22+ 0 rcu: Use rsp->expedited_wq instead of sync_rcu_preempt_exp_wq
> > # first bad commit: [d0a795e7964cca98fbefefef5e0c330b24d04f50] rcu: Don't disable preemption for Tiny and Tree RCU readers
> > git bisect good 2d0f6efd311165fe06cecb29475e89e550b92d8c # 07:26 63+ 2 rcu: Use rsp->expedited_wq instead of sync_rcu_preempt_exp_wq
> > # extra tests on HEAD of linux-devel/devel-spot-201509091835
> > git bisect bad 325c0efb5f89af4e5e3d0575ea1b042375dedf6b # 07:31 0- 2 0day head guard for 'devel-spot-201509091835'
> > # extra tests on tree/branch rcu/dev.2015.09.01a
> > git bisect bad caa7dd68d68438b256ee830f4aa6b2ddd04b4575 # 07:36 0- 11 locktorture: Fix module unwind when bad torture_type specified
> > # extra tests with first bad commit reverted
> > git bisect good 9ff21ff24e5557ad2c1bd72b63f969609e0d28f8 # 07:42 66+ 2 Revert "rcu: Don't disable preemption for Tiny and Tree RCU readers"
> > # extra tests on tree/branch linus/master
> > git bisect good b8889c4fc6ba03e289cec6a4d692f6f080a55e53 # 07:50 66+ 0 Merge tag 'tty-4.3-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty
> > # extra tests on tree/branch linux-next/master
> > git bisect good 72c9d8043fdc87832802de0b7a7129d6fc4c4c70 # 07:58 63+ 0 Add linux-next specific files for 20150909
> >
> >
>
Re: [LKP] [PATCH 2/3] rhashtable-test: retry insert operations in threads
by Herbert Xu
On Tue, Sep 01, 2015 at 03:56:18PM +0200, Phil Sutter wrote:
>
> Looking at rhashtable_test.c, I see the initial table size is 8 entries.
> 70% of that is 5.6 entries, so background expansion is started after the
> 6th entry has been added, right? Given there are 10 threads running
> which try to insert 50k entries at the same time, I don't think it's
> unlikely that three more entries are inserted before the background
> expansion completes.
Yes but in that case the GFP_ATOMIC allocation should work because
the table is so small anyway.
Cheers,
--
Email: Herbert Xu <herbert(a)gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
[lkp] [fs/file.c] 8a81252b77: 14.2% will-it-scale.per_thread_ops
by kernel test robot
FYI, we noticed the below changes on
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit 8a81252b774b53e628a8a0fe18e2b8fc236d92cc ("fs/file.c: don't acquire files->file_lock in fd_install()")
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/test:
xps/will-it-scale/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/open1
commit:
1af95de6f0119d5bde02d3a811a9f3a3661e954e
8a81252b774b53e628a8a0fe18e2b8fc236d92cc
1af95de6f0119d5b 8a81252b774b53e628a8a0fe18
---------------- --------------------------
%stddev %change %stddev
\ | \
581483 ± 2% +14.2% 663787 ± 2% will-it-scale.per_thread_ops
689.96 ± 0% -2.7% 671.31 ± 0% will-it-scale.time.system_time
17.86 ± 1% +59.8% 28.55 ± 0% will-it-scale.time.user_time
3521 ± 6% -11.2% 3125 ± 3% slabinfo.kmalloc-192.active_objs
17.86 ± 1% +59.8% 28.55 ± 0% time.user_time
8.75 ± 16% -51.4% 4.25 ± 50% sched_debug.cfs_rq[1]:/.nr_spread_over
5.50 ± 20% +95.5% 10.75 ± 35% sched_debug.cfs_rq[3]:/.nr_spread_over
473.25 ± 23% +45.7% 689.50 ± 25% sched_debug.cfs_rq[7]:/.utilization_load_avg
811992 ± 10% +11.6% 906272 ± 3% sched_debug.cpu#0.avg_idle
80.00 ± 17% +61.6% 129.25 ± 33% sched_debug.cpu#7.cpu_load[0]
1372 ± 19% +50.6% 2066 ± 13% sched_debug.cpu#7.curr->pid
20835 ± 26% +40.7% 29308 ± 18% sched_debug.cpu#7.ttwu_count
2.15 ± 2% +34.2% 2.88 ± 2% perf-profile.cpu-cycles.__alloc_fd.get_unused_fd_flags.do_sys_open.sys_open.system_call_fastpath
1.40 ± 3% -8.7% 1.28 ± 3% perf-profile.cpu-cycles.__slab_alloc.kmem_cache_alloc.get_empty_filp.path_openat.do_filp_open
0.96 ± 4% -8.4% 0.88 ± 2% perf-profile.cpu-cycles.dput.__fput.____fput.task_work_run.do_notify_resume
2.55 ± 4% +42.7% 3.63 ± 2% perf-profile.cpu-cycles.get_unused_fd_flags.do_sys_open.sys_open.system_call_fastpath
3.67 ± 4% -9.1% 3.34 ± 3% perf-profile.cpu-cycles.getname.do_sys_open.sys_open.system_call_fastpath
1.02 ± 7% +16.4% 1.19 ± 5% perf-profile.cpu-cycles.kmem_cache_free.putname.do_sys_open.sys_open.system_call_fastpath
1.45 ± 6% +22.8% 1.78 ± 5% perf-profile.cpu-cycles.path_init.path_openat.do_filp_open.do_sys_open.sys_open
1.19 ± 6% +25.4% 1.49 ± 4% perf-profile.cpu-cycles.putname.do_sys_open.sys_open.system_call_fastpath
1.71 ± 7% -14.0% 1.47 ± 4% perf-profile.cpu-cycles.security_file_free.__fput.____fput.task_work_run.do_notify_resume
xps: Nehalem
Memory: 4G
will-it-scale.time.user_time
35 ++---------------------------------------------------------------------+
O O O O O O O |
30 ++O O O O O O O O O O O |
| O O O O O |
25 ++ |
| .*.*..*.*.*..*.*..*.*..*.*..* |
20 *+*..*.*..*.*. + |
| *..*.*.*..*.*..*.*..*.*..*.*
15 ++ |
| |
10 ++ |
| |
5 ++ |
| |
0 ++---O----O------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang
[lkp] [drm/i915] aaf5ec2e51: [drm:gen8_irq_handler [i915]] *ERROR* The master control interrupt lied (SDE)!
by kernel test robot
FYI, we noticed the below changes on
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit aaf5ec2e51ab1d9c5e962b4728a1107ed3ff7a3e ("drm/i915: Handle HPD when it has actually occurred")
+--------------------------------+------------+------------+
| | 8df5dd57fd | aaf5ec2e51 |
+--------------------------------+------------+------------+
| boot_successes | 22 | 12 |
| boot_failures | 0 | 2 |
| invoked_oom-killer:gfp_mask=0x | 0 | 2 |
| Mem-Info | 0 | 2 |
| Out_of_memory:Kill_process | 0 | 2 |
+--------------------------------+------------+------------+
[ 12.566634] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
[ 12.574144] [drm] Driver supports precise vblank timestamp query.
[ 12.581256] vgaarb: device changed decodes: PCI:0000:00:02.0,olddecodes=io+mem,decodes=io+mem:owns=io+mem
[ 12.610468] [drm:gen8_irq_handler [i915]] *ERROR* The master control interrupt lied (SDE)!
[ 12.619867] [drm:gen8_irq_handler [i915]] *ERROR* The master control interrupt lied (SDE)!
[ 12.619869] fbcon: inteldrmfb (fb0) is primary device
[ 12.622546] ACPI: Video Device [GFX0] (multi-head: yes rom: no post: no)
[ 12.622727] input: Video Bus as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0A08:00/LNXVIDEO:00/input/input6
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Ying Huang
[lkp] [sched/fair] 7ea241afbf: 2.9% ebizzy.throughput.per_thread.min, +218.0% unixbench.score
by kernel test robot
FYI, we noticed the below changes on
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit 7ea241afbf4924c58d41078599f7a32ba49fb985 ("sched/fair: Clean up load average references")
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/nr_threads/iterations/duration:
ivb42/ebizzy/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/200%/100x/10s
commit:
139622343ef31941effc6de6a5a9320371a00e62
7ea241afbf4924c58d41078599f7a32ba49fb985
139622343ef31941 7ea241afbf4924c58d41078599
---------------- --------------------------
%stddev %change %stddev
\ | \
395.70 ± 0% +2.9% 407.00 ± 0% ebizzy.throughput.per_thread.min
139775 ± 2% +47.2% 205700 ± 1% ebizzy.time.voluntary_context_switches
15966 ± 7% -12.5% 13971 ± 5% slabinfo.kmalloc-256.active_objs
299932 ± 3% +7.5% 322466 ± 3% softirqs.SCHED
139775 ± 2% +47.2% 205700 ± 1% time.voluntary_context_switches
1466879 ± 35% +65.1% 2421889 ± 21% cpuidle.C1E-IVT.time
1718 ± 33% +194.8% 5066 ± 26% cpuidle.C1E-IVT.usage
6115647 ± 20% +106.2% 12609661 ± 22% cpuidle.C3-IVT.time
3897 ± 23% +287.7% 15107 ± 15% cpuidle.C3-IVT.usage
120886 ± 2% +50.3% 181748 ± 1% latency_stats.hits.call_rwsem_down_read_failed.__do_page_fault.do_page_fault.page_fault
54997679 ± 3% +48.2% 81517016 ± 1% latency_stats.sum.call_rwsem_down_read_failed.__do_page_fault.do_page_fault.page_fault
547359 ± 8% +87.7% 1027569 ± 5% latency_stats.sum.call_rwsem_down_write_failed.SyS_mprotect.entry_SYSCALL_64_fastpath
1640004 ± 7% +119.5% 3599195 ± 6% latency_stats.sum.call_rwsem_down_write_failed.vm_munmap.SyS_munmap.entry_SYSCALL_64_fastpath
620798 ± 1% +21.1% 751505 ± 1% proc-vmstat.numa_hint_faults_local
144389 ± 3% -35.1% 93722 ± 2% proc-vmstat.numa_pages_migrated
144389 ± 3% -35.1% 93722 ± 2% proc-vmstat.pgmigrate_success
923.20 ± 8% +105.6% 1898 ± 4% proc-vmstat.thp_fault_alloc
-1403365 ±-26% -88.5% -161208 ±-137% sched_debug.cfs_rq[10]:/.spread0
-1250955 ±-37% -92.4% -94665 ±-354% sched_debug.cfs_rq[11]:/.spread0
-565945 ±-23% -106.0% 33749 ±291% sched_debug.cfs_rq[24]:/.spread0
-788845 ±-32% -85.8% -111818 ±-297% sched_debug.cfs_rq[25]:/.spread0
-955073 ±-35% -92.7% -69316 ±-355% sched_debug.cfs_rq[26]:/.spread0
-1271910 ±-19% -84.9% -191443 ±-108% sched_debug.cfs_rq[27]:/.spread0
-994274 ±-44% -98.6% -13713 ±-2451% sched_debug.cfs_rq[28]:/.spread0
-1096179 ±-36% -81.3% -204573 ±-130% sched_debug.cfs_rq[29]:/.spread0
-1037623 ±-37% -93.9% -63744 ±-384% sched_debug.cfs_rq[2]:/.spread0
-1117584 ±-23% -87.8% -136548 ±-299% sched_debug.cfs_rq[30]:/.spread0
-1135662 ±-36% -91.6% -94995 ±-326% sched_debug.cfs_rq[31]:/.spread0
-1358698 ±-29% -94.1% -80724 ±-384% sched_debug.cfs_rq[32]:/.spread0
-1064343 ±-31% -102.8% 30158 ±1565% sched_debug.cfs_rq[33]:/.spread0
-1334859 ±-31% -91.1% -119317 ±-155% sched_debug.cfs_rq[34]:/.spread0
-1165532 ±-37% -93.4% -76739 ±-477% sched_debug.cfs_rq[35]:/.spread0
-1334812 ±-18% -84.4% -208008 ±-88% sched_debug.cfs_rq[3]:/.spread0
-1105021 ±-39% -98.1% -21519 ±-1720% sched_debug.cfs_rq[4]:/.spread0
-1213413 ±-30% -79.8% -245150 ±-102% sched_debug.cfs_rq[5]:/.spread0
-1209687 ±-26% -84.5% -187380 ±-223% sched_debug.cfs_rq[6]:/.spread0
-1217712 ±-33% -90.0% -121950 ±-268% sched_debug.cfs_rq[7]:/.spread0
-1404258 ±-27% -90.9% -127826 ±-243% sched_debug.cfs_rq[8]:/.spread0
-1237286 ±-23% -96.2% -47513 ±-969% sched_debug.cfs_rq[9]:/.spread0
805288 ± 3% -50.8% 395956 ± 20% sched_debug.cpu#0.avg_idle
816964 ± 4% -51.3% 397868 ± 16% sched_debug.cpu#1.avg_idle
-74.70 ±-54% -140.6% 30.30 ±156% sched_debug.cpu#1.nr_uninterruptible
777519 ± 5% -49.9% 389238 ± 13% sched_debug.cpu#10.avg_idle
783294 ± 5% -49.9% 392698 ± 18% sched_debug.cpu#11.avg_idle
565267 ± 9% -31.8% 385712 ± 13% sched_debug.cpu#12.avg_idle
551803 ± 8% -33.2% 368517 ± 14% sched_debug.cpu#13.avg_idle
590148 ± 8% -38.4% 363530 ± 12% sched_debug.cpu#14.avg_idle
573886 ± 6% -30.2% 400308 ± 12% sched_debug.cpu#15.avg_idle
582337 ± 9% -38.7% 356889 ± 14% sched_debug.cpu#16.avg_idle
604791 ± 9% -38.9% 369404 ± 13% sched_debug.cpu#17.avg_idle
578703 ± 8% -34.8% 377529 ± 11% sched_debug.cpu#19.avg_idle
774099 ± 6% -53.3% 361276 ± 14% sched_debug.cpu#2.avg_idle
608324 ± 10% -38.4% 374662 ± 11% sched_debug.cpu#20.avg_idle
588479 ± 8% -36.6% 373042 ± 11% sched_debug.cpu#21.avg_idle
564123 ± 9% -36.4% 358931 ± 11% sched_debug.cpu#22.avg_idle
589404 ± 10% -37.1% 370768 ± 16% sched_debug.cpu#23.avg_idle
878444 ± 3% -52.0% 421579 ± 17% sched_debug.cpu#24.avg_idle
586.40 ± 56% +493.9% 3482 ±130% sched_debug.cpu#24.sched_goidle
851203 ± 4% -51.9% 409072 ± 18% sched_debug.cpu#25.avg_idle
677.60 ± 52% +348.2% 3036 ± 97% sched_debug.cpu#25.sched_goidle
824295 ± 5% -51.4% 400388 ± 13% sched_debug.cpu#26.avg_idle
527.70 ± 24% +311.9% 2173 ± 59% sched_debug.cpu#26.sched_goidle
823033 ± 5% -49.2% 417989 ± 13% sched_debug.cpu#27.avg_idle
735.70 ± 21% +238.6% 2491 ± 77% sched_debug.cpu#27.sched_goidle
825555 ± 4% -50.1% 411834 ± 13% sched_debug.cpu#28.avg_idle
513.50 ± 21% +286.4% 1984 ± 24% sched_debug.cpu#28.sched_goidle
825524 ± 4% -51.5% 400111 ± 12% sched_debug.cpu#29.avg_idle
769852 ± 6% -49.7% 387088 ± 14% sched_debug.cpu#3.avg_idle
822581 ± 5% -52.2% 393485 ± 10% sched_debug.cpu#30.avg_idle
533.70 ± 28% +202.8% 1616 ± 8% sched_debug.cpu#30.sched_goidle
819353 ± 5% -50.0% 409589 ± 8% sched_debug.cpu#31.avg_idle
837948 ± 4% -50.7% 412842 ± 19% sched_debug.cpu#32.avg_idle
826642 ± 4% -51.9% 397593 ± 16% sched_debug.cpu#33.avg_idle
512.50 ± 25% +338.5% 2247 ± 46% sched_debug.cpu#33.sched_goidle
833929 ± 4% -51.4% 405238 ± 10% sched_debug.cpu#34.avg_idle
830692 ± 4% -50.7% 409409 ± 17% sched_debug.cpu#35.avg_idle
711361 ± 6% -41.8% 413699 ± 15% sched_debug.cpu#36.avg_idle
684821 ± 6% -44.1% 382697 ± 13% sched_debug.cpu#37.avg_idle
216.60 ± 17% -96.6% 7.40 ±330% sched_debug.cpu#37.nr_uninterruptible
1188 ± 14% +48.6% 1766 ± 32% sched_debug.cpu#37.sched_goidle
696736 ± 7% -45.5% 379597 ± 10% sched_debug.cpu#38.avg_idle
681896 ± 6% -42.8% 389861 ± 12% sched_debug.cpu#39.avg_idle
768155 ± 6% -47.6% 402778 ± 20% sched_debug.cpu#4.avg_idle
696896 ± 5% -44.0% 390348 ± 8% sched_debug.cpu#40.avg_idle
685253 ± 9% -44.6% 379667 ± 13% sched_debug.cpu#41.avg_idle
1126 ± 16% +56.4% 1760 ± 30% sched_debug.cpu#41.sched_goidle
706556 ± 6% -41.7% 411902 ± 24% sched_debug.cpu#42.avg_idle
135.30 ± 53% -98.6% 1.90 ±1768% sched_debug.cpu#42.nr_uninterruptible
691559 ± 8% -43.4% 391276 ± 8% sched_debug.cpu#43.avg_idle
686480 ± 7% -44.0% 384641 ± 16% sched_debug.cpu#44.avg_idle
690032 ± 8% -44.8% 380668 ± 11% sched_debug.cpu#45.avg_idle
151.20 ± 44% -80.0% 30.30 ±113% sched_debug.cpu#45.nr_uninterruptible
1079 ± 9% +71.6% 1853 ± 36% sched_debug.cpu#45.sched_goidle
678938 ± 8% -41.6% 396431 ± 12% sched_debug.cpu#46.avg_idle
1103 ± 13% +55.5% 1716 ± 21% sched_debug.cpu#46.sched_goidle
687628 ± 7% -45.1% 377761 ± 11% sched_debug.cpu#47.avg_idle
1183 ± 16% +37.6% 1629 ± 8% sched_debug.cpu#47.sched_goidle
756693 ± 7% -50.0% 378296 ± 10% sched_debug.cpu#5.avg_idle
765989 ± 3% -52.3% 365247 ± 11% sched_debug.cpu#6.avg_idle
773488 ± 5% -49.7% 389404 ± 14% sched_debug.cpu#7.avg_idle
777480 ± 6% -49.7% 390881 ± 19% sched_debug.cpu#8.avg_idle
765042 ± 5% -47.2% 404008 ± 13% sched_debug.cpu#9.avg_idle
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/iterations/nr_threads/disk/fs/filesize/test_size/sync_method/nr_directories/nr_files_per_directory:
lkp-ne04/fsmark/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/1x/32t/1HDD/f2fs/8K/400M/fsyncBeforeClose/16d/256fpd
commit:
139622343ef31941effc6de6a5a9320371a00e62
7ea241afbf4924c58d41078599f7a32ba49fb985
139622343ef31941 7ea241afbf4924c58d41078599
---------------- --------------------------
%stddev %change %stddev
\ | \
22.60 ± 2% -29.6% 15.90 ± 5% fsmark.time.percent_of_cpu_this_job_got
24.04 ± 1% -29.4% 16.96 ± 4% fsmark.time.system_time
498612 ± 0% -9.6% 450525 ± 1% fsmark.time.voluntary_context_switches
1226 ± 4% +23.8% 1517 ± 22% uptime.idle
47483 ± 5% +99.7% 94819 ± 4% proc-vmstat.pgalloc_dma32
353432 ± 0% -14.1% 303531 ± 1% proc-vmstat.pgalloc_normal
19757 ± 0% +18.4% 23392 ± 2% softirqs.BLOCK
31479 ± 0% -21.8% 24612 ± 8% softirqs.SCHED
12898 ± 0% -3.1% 12495 ± 1% vmstat.system.cs
1273 ± 0% -7.6% 1177 ± 2% vmstat.system.in
116583 ± 8% +105.5% 239619 ± 4% numa-numastat.node0.local_node
116584 ± 8% +105.5% 239619 ± 4% numa-numastat.node0.numa_hit
262990 ± 3% -47.7% 137594 ± 7% numa-numastat.node1.local_node
262992 ± 3% -47.7% 137595 ± 7% numa-numastat.node1.numa_hit
20635 ± 4% -12.6% 18028 ± 3% slabinfo.free_nid.active_objs
20635 ± 4% -12.3% 18095 ± 3% slabinfo.free_nid.num_objs
5361 ± 4% -14.0% 4609 ± 5% slabinfo.kmalloc-192.active_objs
5408 ± 5% -14.1% 4648 ± 5% slabinfo.kmalloc-192.num_objs
22.60 ± 2% -29.6% 15.90 ± 5% time.percent_of_cpu_this_job_got
24.04 ± 1% -29.4% 16.96 ± 4% time.system_time
0.99 ± 4% -26.6% 0.73 ± 6% time.user_time
498612 ± 0% -9.6% 450525 ± 1% time.voluntary_context_switches
59704529 ± 5% +100.9% 1.2e+08 ± 3% cpuidle.C1-NHM.time
152224 ± 6% +46.9% 223671 ± 5% cpuidle.C1-NHM.usage
16110901 ± 2% +58.4% 25517916 ± 2% cpuidle.C1E-NHM.time
15507 ± 3% +110.8% 32694 ± 8% cpuidle.C1E-NHM.usage
6.254e+08 ± 0% -17.7% 5.149e+08 ± 1% cpuidle.C3-NHM.time
345895 ± 1% -48.2% 179296 ± 15% cpuidle.C6-NHM.usage
1.81 ± 0% -24.4% 1.36 ± 3% turbostat.%Busy
37.90 ± 1% -8.4% 34.70 ± 1% turbostat.Avg_MHz
2102 ± 0% +20.8% 2540 ± 2% turbostat.Bzy_MHz
9.99 ± 2% +52.8% 15.26 ± 2% turbostat.CPU%c1
49.12 ± 1% -32.4% 33.20 ± 3% turbostat.CPU%c3
39.09 ± 0% +28.3% 50.17 ± 2% turbostat.CPU%c6
39.66 ± 1% -25.1% 29.71 ± 3% turbostat.Pkg%pc3
39778 ± 2% +29.1% 51342 ± 7% numa-vmstat.node0.nr_active_file
106178 ± 1% +17.8% 125067 ± 4% numa-vmstat.node0.nr_file_pages
65547 ± 0% +9.2% 71548 ± 2% numa-vmstat.node0.nr_inactive_file
10129 ± 2% +23.9% 12554 ± 5% numa-vmstat.node0.nr_slab_reclaimable
183370 ± 11% +37.9% 252900 ± 7% numa-vmstat.node0.numa_hit
122668 ± 17% +56.9% 192409 ± 9% numa-vmstat.node0.numa_local
31077 ± 2% -31.9% 21177 ± 16% numa-vmstat.node1.nr_active_file
92412 ± 2% -21.0% 72993 ± 7% numa-vmstat.node1.nr_file_pages
59578 ± 1% -14.1% 51194 ± 4% numa-vmstat.node1.nr_inactive_file
8812 ± 3% -29.1% 6248 ± 11% numa-vmstat.node1.nr_slab_reclaimable
271760 ± 7% -25.9% 201348 ± 9% numa-vmstat.node1.numa_hit
268099 ± 7% -26.4% 197434 ± 9% numa-vmstat.node1.numa_local
169923 ± 3% +31.0% 222560 ± 6% numa-meminfo.node0.Active
159034 ± 2% +29.1% 205387 ± 7% numa-meminfo.node0.Active(file)
424572 ± 2% +17.8% 500299 ± 4% numa-meminfo.node0.FilePages
264854 ± 2% +10.7% 293083 ± 3% numa-meminfo.node0.Inactive
262135 ± 0% +9.2% 286207 ± 2% numa-meminfo.node0.Inactive(file)
540928 ± 3% +18.0% 638125 ± 4% numa-meminfo.node0.MemUsed
40502 ± 2% +24.0% 50221 ± 5% numa-meminfo.node0.SReclaimable
142078 ± 3% -32.3% 96195 ± 15% numa-meminfo.node1.Active
124264 ± 2% -31.8% 84713 ± 16% numa-meminfo.node1.Active(file)
369559 ± 2% -21.0% 291978 ± 7% numa-meminfo.node1.FilePages
244345 ± 2% -15.4% 206692 ± 4% numa-meminfo.node1.Inactive
238277 ± 1% -14.1% 204782 ± 4% numa-meminfo.node1.Inactive(file)
495916 ± 3% -20.0% 396836 ± 7% numa-meminfo.node1.MemUsed
35242 ± 3% -29.1% 24993 ± 11% numa-meminfo.node1.SReclaimable
14192 ± 2% -37.4% 8877 ± 9% latency_stats.hits.call_rwsem_down_read_failed.is_checkpointed_node.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
17981 ± 2% -41.0% 10612 ± 9% latency_stats.hits.call_rwsem_down_read_failed.need_inode_block_update.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
17623 ± 2% -35.3% 11404 ± 7% latency_stats.hits.call_rwsem_down_write_failed.set_node_addr.[f2fs].f2fs_write_node_page.[f2fs].sync_node_pages.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
3983488 ± 3% -53.3% 1861229 ± 4% latency_stats.sum.call_rwsem_down_read_failed.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].wait_on_node_pages_writeback.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
2332611 ± 4% -67.8% 751118 ± 24% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].f2fs_write_node_page.[f2fs].sync_node_pages.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
5008528 ± 3% -43.3% 2842165 ± 9% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].new_node_page.[f2fs].new_inode_page.[f2fs].init_inode_metadata.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open.do_sys_open.SyS_open
4152411 ± 3% -55.7% 1838163 ± 14% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].f2fs_convert_inline_inode.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write
15229256 ± 3% -47.0% 8078403 ± 9% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write.SyS_write
1295965 ± 1% -46.8% 689847 ± 9% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].do_write_data_page.[f2fs].f2fs_write_data_page.[f2fs].__f2fs_writepage.[f2fs].write_cache_pages.f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range
9001985 ± 3% -50.4% 4463922 ± 10% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write
8696503 ± 2% -33.5% 5782368 ± 6% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].get_new_data_page.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open
267415 ± 6% -42.5% 153787 ± 9% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].update_inode_page.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open.do_sys_open.SyS_open
7268225 ± 3% -54.0% 3340379 ± 13% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].update_inode_page.[f2fs].f2fs_write_end.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write
864112 ± 2% -48.8% 442013 ± 10% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].write_data_page.[f2fs].do_write_data_page.[f2fs].f2fs_write_data_page.[f2fs].__f2fs_writepage.[f2fs].write_cache_pages.f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.f2fs_sync_file.[f2fs]
3467511 ± 2% -59.4% 1409453 ± 19% latency_stats.sum.call_rwsem_down_read_failed.is_checkpointed_node.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
3379634 ± 2% -64.5% 1199808 ± 22% latency_stats.sum.call_rwsem_down_read_failed.need_dentry_mark.[f2fs].sync_node_pages.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
4512610 ± 3% -69.2% 1387720 ± 24% latency_stats.sum.call_rwsem_down_read_failed.need_inode_block_update.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
857550 ± 4% -59.6% 346763 ± 6% latency_stats.sum.call_rwsem_down_write_failed.f2fs_submit_merged_bio.[f2fs].sync_node_pages.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
1765616 ± 11% -52.1% 846079 ± 11% latency_stats.sum.call_rwsem_down_write_failed.f2fs_submit_page_mbio.[f2fs].do_write_page.[f2fs].write_node_page.[f2fs].f2fs_write_node_page.[f2fs].sync_node_pages.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
7571006 ± 3% -48.6% 3890987 ± 10% latency_stats.sum.call_rwsem_down_write_failed.get_node_info.[f2fs].new_node_page.[f2fs].new_inode_page.[f2fs].init_inode_metadata.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open.do_sys_open.SyS_open
3009442 ± 3% -69.0% 934243 ± 24% latency_stats.sum.call_rwsem_down_write_failed.set_node_addr.[f2fs].f2fs_write_node_page.[f2fs].sync_node_pages.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
4321993 ± 3% -50.6% 2133341 ± 11% latency_stats.sum.call_rwsem_down_write_failed.set_node_addr.[f2fs].new_node_page.[f2fs].new_inode_page.[f2fs].init_inode_metadata.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open.do_sys_open.SyS_open
1.339e+09 ± 0% +8.0% 1.446e+09 ± 0% latency_stats.sum.submit_bio_wait.blkdev_issue_flush.f2fs_issue_flush.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
2056 ± 18% +83.8% 3781 ± 26% sched_debug.cfs_rq[0]:/.exec_clock
42614 ± 8% -19.8% 34187 ± 12% sched_debug.cfs_rq[0]:/.min_vruntime
41315 ± 10% -52.7% 19523 ± 27% sched_debug.cfs_rq[10]:/.min_vruntime
20336 ± 16% -50.9% 9988 ± 21% sched_debug.cfs_rq[11]:/.min_vruntime
40435 ± 4% -50.2% 20125 ± 18% sched_debug.cfs_rq[12]:/.min_vruntime
-2178 ±-177% +548.3% -14124 ±-24% sched_debug.cfs_rq[12]:/.spread0
835.50 ± 54% -59.8% 335.85 ± 34% sched_debug.cfs_rq[13]:/.exec_clock
40939 ± 5% -48.4% 21125 ± 16% sched_debug.cfs_rq[14]:/.min_vruntime
-1675 ±-307% +683.5% -13125 ±-19% sched_debug.cfs_rq[14]:/.spread0
21833 ± 23% -47.2% 11520 ± 34% sched_debug.cfs_rq[15]:/.min_vruntime
39175 ± 15% -34.0% 25844 ± 20% sched_debug.cfs_rq[1]:/.min_vruntime
39915 ± 10% -33.4% 26583 ± 24% sched_debug.cfs_rq[3]:/.min_vruntime
47827 ± 11% -30.8% 33108 ± 17% sched_debug.cfs_rq[4]:/.min_vruntime
37583 ± 6% -38.2% 23211 ± 21% sched_debug.cfs_rq[5]:/.min_vruntime
46823 ± 10% -27.8% 33813 ± 12% sched_debug.cfs_rq[6]:/.min_vruntime
37934 ± 14% -34.6% 24796 ± 20% sched_debug.cfs_rq[7]:/.min_vruntime
40049 ± 7% -44.9% 22064 ± 17% sched_debug.cfs_rq[8]:/.min_vruntime
-2565 ±-201% +373.9% -12157 ±-37% sched_debug.cfs_rq[8]:/.spread0
19575 ± 11% -48.3% 10114 ± 36% sched_debug.cfs_rq[9]:/.min_vruntime
34529 ± 2% +48.0% 51087 ± 18% sched_debug.cpu#0.nr_switches
-618.00 ± -5% +71.8% -1061 ± -3% sched_debug.cpu#0.nr_uninterruptible
38396 ± 6% +40.9% 54089 ± 17% sched_debug.cpu#0.sched_count
16001 ± 3% +41.9% 22701 ± 21% sched_debug.cpu#0.sched_goidle
49940 ± 1% +23.4% 61622 ± 6% sched_debug.cpu#0.ttwu_count
9244 ± 3% +67.3% 15466 ± 6% sched_debug.cpu#0.ttwu_local
9541 ± 10% -19.9% 7641 ± 13% sched_debug.cpu#1.nr_load_updates
-178.90 ±-16% -91.3% -15.60 ±-85% sched_debug.cpu#1.nr_uninterruptible
326.00 ± 9% -26.0% 241.30 ± 9% sched_debug.cpu#10.nr_uninterruptible
5750 ± 3% -24.0% 4372 ± 6% sched_debug.cpu#11.nr_load_updates
260.70 ± 13% -87.2% 33.40 ± 34% sched_debug.cpu#11.nr_uninterruptible
5654 ± 3% -23.0% 4355 ± 8% sched_debug.cpu#13.nr_load_updates
242.50 ± 11% -88.3% 28.40 ± 60% sched_debug.cpu#13.nr_uninterruptible
329.00 ± 9% -26.1% 243.20 ± 11% sched_debug.cpu#14.nr_uninterruptible
26609 ± 31% -55.9% 11748 ± 62% sched_debug.cpu#15.nr_switches
254.50 ± 13% -84.2% 40.10 ± 45% sched_debug.cpu#15.nr_uninterruptible
27018 ± 29% -56.0% 11875 ± 63% sched_debug.cpu#15.sched_count
11885 ± 34% -57.5% 5048 ± 71% sched_debug.cpu#15.sched_goidle
-189.30 ±-17% -77.4% -42.70 ±-53% sched_debug.cpu#2.nr_uninterruptible
2058 ± 9% +63.9% 3374 ± 27% sched_debug.cpu#2.ttwu_local
-183.20 ±-13% -88.3% -21.50 ±-79% sched_debug.cpu#3.nr_uninterruptible
13728 ± 15% -41.8% 7995 ± 36% sched_debug.cpu#3.ttwu_count
-262.40 ±-17% -82.5% -45.90 ±-60% sched_debug.cpu#4.nr_uninterruptible
8009 ± 2% -25.3% 5986 ± 15% sched_debug.cpu#5.nr_load_updates
-190.70 ±-17% -89.6% -19.80 ±-49% sched_debug.cpu#5.nr_uninterruptible
16503 ± 29% -48.6% 8486 ± 59% sched_debug.cpu#5.ttwu_count
-275.30 ±-14% -97.9% -5.80 ±-600% sched_debug.cpu#6.nr_uninterruptible
8320 ± 13% -27.0% 6076 ± 11% sched_debug.cpu#7.nr_load_updates
-177.30 ±-20% -88.3% -20.70 ±-71% sched_debug.cpu#7.nr_uninterruptible
2934 ± 23% -39.1% 1786 ± 31% sched_debug.cpu#7.ttwu_local
6143 ± 3% +12.7% 6926 ± 4% sched_debug.cpu#8.nr_load_updates
17122 ± 4% +59.3% 27268 ± 11% sched_debug.cpu#8.nr_switches
89.30 ± 27% +328.8% 382.90 ± 6% sched_debug.cpu#8.nr_uninterruptible
17144 ± 4% +59.5% 27340 ± 12% sched_debug.cpu#8.sched_count
7177 ± 5% +55.7% 11172 ± 13% sched_debug.cpu#8.sched_goidle
6429 ± 7% +77.3% 11396 ± 37% sched_debug.cpu#8.ttwu_count
1495 ± 14% +75.2% 2619 ± 40% sched_debug.cpu#8.ttwu_local
268.50 ± 15% -88.8% 30.10 ± 51% sched_debug.cpu#9.nr_uninterruptible
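Note: each latency_stats row above encodes a whole kernel call chain in the
metric name, with a "[module]" tag following the frame it annotates. A minimal
Python sketch of how such a name can be split back into frames (the helper
name is made up for illustration; this is not part of lkp-tests):

# Hypothetical helper: split an lkp latency_stats metric name into
# (stat kind, call chain). Frames are dot-separated; a "[module]" segment
# annotates the frame immediately before it.
def parse_latency_metric(name):
    parts = name.split(".")
    assert parts[0] == "latency_stats"
    stat = parts[1]                  # "sum", "hits", "avg" or "max"
    frames = []
    for seg in parts[2:]:
        if seg.startswith("[") and seg.endswith("]") and frames:
            frames[-1] += " " + seg  # reattach module tag, e.g. "[f2fs]"
        else:
            frames.append(seg)
    return stat, frames

stat, frames = parse_latency_metric(
    "latency_stats.sum.call_rwsem_down_read_failed"
    ".get_node_info.[f2fs].read_node_page.[f2fs]")
print(stat, frames)
# sum ['call_rwsem_down_read_failed', 'get_node_info [f2fs]', 'read_node_page [f2fs]']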
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/test:
nhm-white/unixbench/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/execl
commit:
139622343ef31941effc6de6a5a9320371a00e62
7ea241afbf4924c58d41078599f7a32ba49fb985
139622343ef31941 7ea241afbf4924c58d41078599
---------------- --------------------------
%stddev %change %stddev
\ | \
2777 ± 0% +84.8% 5133 ± 0% unixbench.score
597160 ± 3% +250.0% 2090062 ± 1% unixbench.time.involuntary_context_switches
68574556 ± 0% +56.5% 1.073e+08 ± 0% unixbench.time.minor_page_faults
177.50 ± 0% +98.5% 352.25 ± 0% unixbench.time.percent_of_cpu_this_job_got
279.21 ± 0% +104.8% 571.76 ± 0% unixbench.time.system_time
71.68 ± 0% +74.9% 125.39 ± 0% unixbench.time.user_time
78783 ± 1% +563.9% 523002 ± 0% unixbench.time.voluntary_context_switches
1365 ± 4% -26.9% 998.34 ± 2% uptime.idle
3526 ± 3% -22.2% 2744 ± 4% slabinfo.anon_vma.active_objs
3556 ± 3% -22.8% 2744 ± 4% slabinfo.anon_vma.num_objs
44543 ± 4% +61.8% 72077 ± 2% vmstat.system.cs
21416 ± 4% +17.5% 25174 ± 4% vmstat.system.in
107439 ± 1% +134.1% 251539 ± 0% softirqs.RCU
92680 ± 2% +234.7% 310164 ± 0% softirqs.SCHED
211188 ± 3% +78.9% 377818 ± 0% softirqs.TIMER
54857623 ± 0% +55.9% 85530499 ± 0% proc-vmstat.numa_hit
54857623 ± 0% +55.9% 85530499 ± 0% proc-vmstat.numa_local
27623940 ± 0% +56.0% 43094142 ± 0% proc-vmstat.pgalloc_dma32
27250105 ± 0% +55.8% 42446449 ± 0% proc-vmstat.pgalloc_normal
68704226 ± 0% +56.5% 1.075e+08 ± 0% proc-vmstat.pgfault
54870202 ± 0% +55.9% 85536588 ± 0% proc-vmstat.pgfree
597160 ± 3% +250.0% 2090062 ± 1% time.involuntary_context_switches
68574556 ± 0% +56.5% 1.073e+08 ± 0% time.minor_page_faults
177.50 ± 0% +98.5% 352.25 ± 0% time.percent_of_cpu_this_job_got
279.21 ± 0% +104.8% 571.76 ± 0% time.system_time
71.68 ± 0% +74.9% 125.39 ± 0% time.user_time
78783 ± 1% +563.9% 523002 ± 0% time.voluntary_context_switches
24.72 ± 0% +90.6% 47.10 ± 0% turbostat.%Busy
720.50 ± 0% +88.8% 1360 ± 0% turbostat.Avg_MHz
22.51 ± 0% -21.7% 17.62 ± 0% turbostat.CPU%c1
5.08 ± 1% +186.1% 14.52 ± 2% turbostat.CPU%c3
47.70 ± 0% -56.5% 20.76 ± 1% turbostat.CPU%c6
53.50 ± 3% +17.3% 62.75 ± 1% turbostat.CoreTmp
0.06 ± 20% +150.0% 0.15 ± 12% turbostat.Pkg%pc3
43137087 ± 2% +66.6% 71866807 ± 1% cpuidle.C1-NHM.time
3516387 ± 5% +31.2% 4612342 ± 4% cpuidle.C1-NHM.usage
809442 ± 2% +4382.4% 36282417 ± 1% cpuidle.C1E-NHM.time
6910 ± 2% +10307.4% 719174 ± 0% cpuidle.C1E-NHM.usage
68030588 ± 0% +195.6% 2.011e+08 ± 1% cpuidle.C3-NHM.time
76199 ± 1% +454.8% 422740 ± 0% cpuidle.C3-NHM.usage
1.091e+09 ± 0% -50.5% 5.397e+08 ± 0% cpuidle.C6-NHM.time
496983 ± 0% -32.1% 337347 ± 0% cpuidle.C6-NHM.usage
26119 ± 8% +428.6% 138058 ± 5% cpuidle.POLL.time
3461 ± 5% +708.4% 27983 ± 7% cpuidle.POLL.usage
224827 ± 49% -100.0% 0.00 ± -1% latency_stats.avg.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
3975 ± 2% +882.0% 39033 ± 0% latency_stats.hits.call_rwsem_down_write_failed.unlink_file_vma.free_pgtables.exit_mmap.mmput.flush_old_exec.load_elf_binary.search_binary_handler.do_execveat_common.SyS_execve.return_from_execve
1503 ± 4% +1188.0% 19365 ± 1% latency_stats.hits.call_rwsem_down_write_failed.unlink_file_vma.free_pgtables.unmap_region.do_munmap.mmap_region.do_mmap_pgoff.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.entry_SYSCALL_64_fastpath
816.00 ± 5% +1217.0% 10746 ± 1% latency_stats.hits.call_rwsem_down_write_failed.vma_adjust.__split_vma.do_munmap.mmap_region.do_mmap_pgoff.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.entry_SYSCALL_64_fastpath
3912 ± 1% +889.7% 38717 ± 1% latency_stats.hits.call_rwsem_down_write_failed.vma_adjust.__split_vma.split_vma.mprotect_fixup.SyS_mprotect.entry_SYSCALL_64_fastpath
1634 ± 2% +1222.3% 21609 ± 0% latency_stats.hits.call_rwsem_down_write_failed.vma_link.mmap_region.do_mmap_pgoff.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.entry_SYSCALL_64_fastpath
814.25 ± 1% +1516.2% 13160 ± 0% latency_stats.hits.call_rwsem_down_write_failed.vma_link.mmap_region.do_mmap_pgoff.vm_mmap_pgoff.vm_mmap.elf_map.load_elf_binary.search_binary_handler.do_execveat_common.SyS_execve.return_from_execve
23567 ± 2% +574.2% 158894 ± 0% latency_stats.hits.wait_on_page_bit_killable.__lock_page_or_retry.filemap_fault.__do_fault.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
40825 ± 1% +388.1% 199274 ± 0% latency_stats.hits.wait_on_page_bit_killable.__lock_page_or_retry.filemap_fault.__do_fault.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault.clear_user.padzero.load_elf_binary
6132 ± 41% -92.1% 484.00 ± 20% latency_stats.max.call_rwsem_down_write_failed.vma_link.mmap_region.do_mmap_pgoff.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.entry_SYSCALL_64_fastpath
224827 ± 49% -100.0% 0.00 ± -1% latency_stats.max.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
4308 ± 43% +273.2% 16079 ± 3% latency_stats.sum.call_rwsem_down_write_failed.unlink_file_vma.free_pgtables.unmap_region.do_munmap.vm_munmap.elf_map.load_elf_binary.search_binary_handler.do_execveat_common.SyS_execve.return_from_execve
102172 ± 10% +41.9% 144985 ± 1% latency_stats.sum.call_rwsem_down_write_failed.vma_adjust.__split_vma.split_vma.mprotect_fixup.SyS_mprotect.entry_SYSCALL_64_fastpath
8447 ± 28% +341.5% 37291 ± 2% latency_stats.sum.call_rwsem_down_write_failed.vma_link.mmap_region.do_mmap_pgoff.vm_mmap_pgoff.vm_mmap.elf_map.load_elf_binary.search_binary_handler.do_execveat_common.SyS_execve.return_from_execve
987847 ± 2% -31.9% 672427 ± 1% latency_stats.sum.do_wait.SyS_wait4.entry_SYSCALL_64_fastpath
224827 ± 49% -100.0% 0.00 ± -1% latency_stats.sum.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
2012933 ± 1% -45.2% 1102741 ± 2% latency_stats.sum.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
89887 ± 2% +282.0% 343413 ± 0% latency_stats.sum.wait_on_page_bit_killable.__lock_page_or_retry.filemap_fault.__do_fault.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
160304 ± 1% +177.6% 444992 ± 0% latency_stats.sum.wait_on_page_bit_killable.__lock_page_or_retry.filemap_fault.__do_fault.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault.clear_user.padzero.load_elf_binary
53720 ± 19% -73.0% 14481 ± 13% latency_stats.sum.walk_component.path_lookupat.filename_lookup.user_path_at_empty.SyS_access.entry_SYSCALL_64_fastpath
19815 ± 0% +59.6% 31628 ± 1% sched_debug.cfs_rq[0]:/.exec_clock
135827 ± 1% +77.9% 241641 ± 1% sched_debug.cfs_rq[0]:/.min_vruntime
373.50 ± 4% +20.6% 450.50 ± 5% sched_debug.cfs_rq[0]:/.tg_load_avg
16457 ± 5% +75.2% 28829 ± 2% sched_debug.cfs_rq[1]:/.exec_clock
126071 ± 6% +89.2% 238558 ± 2% sched_debug.cfs_rq[1]:/.min_vruntime
372.50 ± 3% +20.8% 450.00 ± 5% sched_debug.cfs_rq[1]:/.tg_load_avg
43.50 ± 14% +28.7% 56.00 ± 3% sched_debug.cfs_rq[1]:/.tg_load_avg_contrib
105.00 ± 29% +251.9% 369.50 ± 22% sched_debug.cfs_rq[1]:/.util_avg
16547 ± 4% +72.2% 28490 ± 4% sched_debug.cfs_rq[2]:/.exec_clock
43.50 ± 4% +54.6% 67.25 ± 10% sched_debug.cfs_rq[2]:/.load_avg
132026 ± 5% +78.0% 234955 ± 3% sched_debug.cfs_rq[2]:/.min_vruntime
30.00 ± 30% +125.8% 67.75 ± 26% sched_debug.cfs_rq[2]:/.runnable_load_avg
368.75 ± 2% +22.6% 452.00 ± 5% sched_debug.cfs_rq[2]:/.tg_load_avg
44.25 ± 7% +57.6% 69.75 ± 6% sched_debug.cfs_rq[2]:/.tg_load_avg_contrib
174.00 ± 28% +157.0% 447.25 ± 13% sched_debug.cfs_rq[2]:/.util_avg
15909 ± 6% +73.0% 27517 ± 4% sched_debug.cfs_rq[3]:/.exec_clock
122010 ± 5% +88.7% 230244 ± 4% sched_debug.cfs_rq[3]:/.min_vruntime
13.00 ± 18% -44.2% 7.25 ± 26% sched_debug.cfs_rq[3]:/.nr_spread_over
41.75 ± 20% +43.7% 60.00 ± 17% sched_debug.cfs_rq[3]:/.runnable_load_avg
366.00 ± 2% +24.0% 453.75 ± 5% sched_debug.cfs_rq[3]:/.tg_load_avg
132.00 ± 14% +148.7% 328.25 ± 24% sched_debug.cfs_rq[3]:/.util_avg
16257 ± 0% +57.0% 25519 ± 1% sched_debug.cfs_rq[4]:/.exec_clock
103545 ± 3% +106.0% 213258 ± 1% sched_debug.cfs_rq[4]:/.min_vruntime
366.75 ± 3% +23.3% 452.25 ± 5% sched_debug.cfs_rq[4]:/.tg_load_avg
140.25 ± 17% +168.1% 376.00 ± 24% sched_debug.cfs_rq[4]:/.util_avg
15851 ± 2% +58.2% 25080 ± 4% sched_debug.cfs_rq[5]:/.exec_clock
98491 ± 1% +115.4% 212187 ± 6% sched_debug.cfs_rq[5]:/.min_vruntime
14.75 ±143% +286.4% 57.00 ± 5% sched_debug.cfs_rq[5]:/.runnable_load_avg
370.50 ± 3% +20.6% 446.75 ± 4% sched_debug.cfs_rq[5]:/.tg_load_avg
125.00 ± 26% +228.0% 410.00 ± 15% sched_debug.cfs_rq[5]:/.util_avg
15689 ± 4% +60.7% 25219 ± 3% sched_debug.cfs_rq[6]:/.exec_clock
50.75 ± 11% +33.5% 67.75 ± 14% sched_debug.cfs_rq[6]:/.load_avg
98238 ± 4% +116.3% 212530 ± 4% sched_debug.cfs_rq[6]:/.min_vruntime
16.75 ± 78% +258.2% 60.00 ± 12% sched_debug.cfs_rq[6]:/.runnable_load_avg
369.75 ± 2% +21.1% 447.75 ± 3% sched_debug.cfs_rq[6]:/.tg_load_avg
50.75 ± 11% +33.0% 67.50 ± 14% sched_debug.cfs_rq[6]:/.tg_load_avg_contrib
158.75 ± 10% +201.3% 478.25 ± 5% sched_debug.cfs_rq[6]:/.util_avg
15371 ± 4% +60.3% 24635 ± 2% sched_debug.cfs_rq[7]:/.exec_clock
97547 ± 4% +111.0% 205787 ± 1% sched_debug.cfs_rq[7]:/.min_vruntime
7.00 ±102% +685.7% 55.00 ± 15% sched_debug.cfs_rq[7]:/.runnable_load_avg
370.75 ± 2% +21.0% 448.75 ± 4% sched_debug.cfs_rq[7]:/.tg_load_avg
875331 ± 5% -40.8% 518188 ± 5% sched_debug.cpu#0.avg_idle
21.25 ± 13% +158.8% 55.00 ± 34% sched_debug.cpu#0.cpu_load[0]
21.50 ± 9% +157.0% 55.25 ± 15% sched_debug.cpu#0.cpu_load[1]
18.75 ± 15% +190.7% 54.50 ± 6% sched_debug.cpu#0.cpu_load[2]
14.75 ± 21% +272.9% 55.00 ± 3% sched_debug.cpu#0.cpu_load[3]
11.25 ± 25% +397.8% 56.00 ± 5% sched_debug.cpu#0.cpu_load[4]
38804 ± 5% +52.5% 59171 ± 1% sched_debug.cpu#0.nr_load_updates
100572 ± 24% +218.9% 320711 ± 1% sched_debug.cpu#0.nr_switches
-666.50 ± -5% -69.5% -203.50 ±-17% sched_debug.cpu#0.nr_uninterruptible
100682 ± 24% +218.8% 321019 ± 1% sched_debug.cpu#0.sched_count
39169 ± 30% +176.8% 108420 ± 0% sched_debug.cpu#0.sched_goidle
42885 ± 3% +213.6% 134495 ± 6% sched_debug.cpu#0.ttwu_count
20783 ± 10% +341.5% 91760 ± 4% sched_debug.cpu#0.ttwu_local
828164 ± 12% -51.7% 400105 ± 8% sched_debug.cpu#1.avg_idle
11.75 ± 99% +406.4% 59.50 ± 20% sched_debug.cpu#1.cpu_load[0]
13.75 ± 47% +280.0% 52.25 ± 16% sched_debug.cpu#1.cpu_load[1]
13.50 ± 40% +268.5% 49.75 ± 13% sched_debug.cpu#1.cpu_load[2]
13.00 ± 35% +276.9% 49.00 ± 13% sched_debug.cpu#1.cpu_load[3]
12.25 ± 25% +312.2% 50.50 ± 13% sched_debug.cpu#1.cpu_load[4]
41410 ± 18% +42.2% 58885 ± 5% sched_debug.cpu#1.nr_load_updates
-713.25 ± -2% -71.7% -201.75 ±-13% sched_debug.cpu#1.nr_uninterruptible
862241 ± 1% -50.4% 427474 ± 7% sched_debug.cpu#2.avg_idle
25.25 ± 47% +167.3% 67.50 ± 27% sched_debug.cpu#2.cpu_load[0]
25.00 ± 24% +135.0% 58.75 ± 24% sched_debug.cpu#2.cpu_load[1]
21.00 ± 8% +165.5% 55.75 ± 23% sched_debug.cpu#2.cpu_load[2]
17.75 ± 8% +205.6% 54.25 ± 21% sched_debug.cpu#2.cpu_load[3]
16.50 ± 19% +221.2% 53.00 ± 17% sched_debug.cpu#2.cpu_load[4]
765.75 ± 55% +91.4% 1465 ± 21% sched_debug.cpu#2.curr->pid
38186 ± 9% +54.0% 58807 ± 8% sched_debug.cpu#2.nr_load_updates
-718.75 ± -3% -69.4% -220.00 ±-24% sched_debug.cpu#2.nr_uninterruptible
844650 ± 7% -50.8% 415306 ± 14% sched_debug.cpu#3.avg_idle
29.75 ± 30% +59.7% 47.50 ± 11% sched_debug.cpu#3.cpu_load[1]
22.25 ± 31% +136.0% 52.50 ± 12% sched_debug.cpu#3.cpu_load[2]
18.00 ± 30% +194.4% 53.00 ± 16% sched_debug.cpu#3.cpu_load[3]
14.50 ± 25% +269.0% 53.50 ± 13% sched_debug.cpu#3.cpu_load[4]
40826 ± 15% +54.2% 62962 ± 8% sched_debug.cpu#3.nr_load_updates
-718.50 ± -2% -68.1% -229.50 ± -6% sched_debug.cpu#3.nr_uninterruptible
960718 ± 3% -39.1% 584950 ± 2% sched_debug.cpu#4.avg_idle
29.50 ± 50% +89.8% 56.00 ± 5% sched_debug.cpu#4.cpu_load[0]
21.00 ± 34% +156.0% 53.75 ± 1% sched_debug.cpu#4.cpu_load[1]
13.50 ± 28% +290.7% 52.75 ± 1% sched_debug.cpu#4.cpu_load[2]
9.50 ± 15% +439.5% 51.25 ± 3% sched_debug.cpu#4.cpu_load[3]
8.75 ± 9% +477.1% 50.50 ± 4% sched_debug.cpu#4.cpu_load[4]
24965 ± 2% +69.1% 42227 ± 1% sched_debug.cpu#4.nr_load_updates
47274 ± 12% +423.0% 247264 ± 3% sched_debug.cpu#4.nr_switches
722.75 ± 3% -80.7% 139.25 ± 27% sched_debug.cpu#4.nr_uninterruptible
47352 ± 12% +422.7% 247523 ± 3% sched_debug.cpu#4.sched_count
15295 ± 19% +415.4% 78829 ± 4% sched_debug.cpu#4.sched_goidle
25360 ± 14% +313.0% 104737 ± 5% sched_debug.cpu#4.ttwu_count
11144 ± 7% +558.5% 73386 ± 3% sched_debug.cpu#4.ttwu_local
952020 ± 7% -41.9% 553128 ± 15% sched_debug.cpu#5.avg_idle
24.75 ± 58% +102.0% 50.00 ± 28% sched_debug.cpu#5.cpu_load[0]
22.50 ± 58% +122.2% 50.00 ± 17% sched_debug.cpu#5.cpu_load[1]
18.75 ± 53% +154.7% 47.75 ± 16% sched_debug.cpu#5.cpu_load[2]
15.00 ± 43% +221.7% 48.25 ± 13% sched_debug.cpu#5.cpu_load[3]
12.75 ± 32% +302.0% 51.25 ± 13% sched_debug.cpu#5.cpu_load[4]
396.33 ± 81% +167.8% 1061 ± 34% sched_debug.cpu#5.curr->pid
28469 ± 21% +68.0% 47826 ± 20% sched_debug.cpu#5.nr_load_updates
700.50 ± 1% -66.1% 237.50 ± 12% sched_debug.cpu#5.nr_uninterruptible
875687 ± 11% -31.1% 603325 ± 6% sched_debug.cpu#6.avg_idle
24.75 ± 42% +105.1% 50.75 ± 11% sched_debug.cpu#6.cpu_load[1]
18.75 ± 34% +166.7% 50.00 ± 3% sched_debug.cpu#6.cpu_load[2]
14.75 ± 23% +237.3% 49.75 ± 4% sched_debug.cpu#6.cpu_load[3]
12.25 ± 18% +312.2% 50.50 ± 5% sched_debug.cpu#6.cpu_load[4]
690.75 ± 2% -66.4% 232.25 ± 16% sched_debug.cpu#6.nr_uninterruptible
937693 ± 11% -41.7% 547028 ± 15% sched_debug.cpu#7.avg_idle
16.00 ± 24% +246.9% 55.50 ± 14% sched_debug.cpu#7.cpu_load[0]
16.50 ± 26% +231.8% 54.75 ± 11% sched_debug.cpu#7.cpu_load[1]
15.00 ± 24% +261.7% 54.25 ± 9% sched_debug.cpu#7.cpu_load[2]
12.75 ± 19% +321.6% 53.75 ± 8% sched_debug.cpu#7.cpu_load[3]
10.75 ± 10% +400.0% 53.75 ± 6% sched_debug.cpu#7.cpu_load[4]
268.25 ±115% +424.8% 1407 ± 11% sched_debug.cpu#7.curr->pid
699.50 ± 4% -65.3% 242.75 ± 6% sched_debug.cpu#7.nr_uninterruptible
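Note: every comparison row above follows one fixed layout, base commit on the
left, bisected commit on the right:

  <value> ± <stddev>% <change>% <value> ± <stddev>% <metric>

A minimal sketch of a parser for that layout, assumed from the rows shown here
(ROW and parse_row are made-up names, not lkp-tests code). It tolerates the
quirks visible in this report: scientific notation such as 1.339e+09, negative
means with negative stddev, and "+Inf%" in the change column:

import re

# Hypothetical parser for the 0-day comparison rows; not part of lkp-tests.
ROW = re.compile(
    r"^\s*(?P<base>-?[\d.e+]+)\s*±\s*(?P<bsd>-?\d+)%"
    r"\s+(?P<change>[+-](?:Inf|[\d.e+]+))%"
    r"\s+(?P<head>-?[\d.e+]+)\s*±\s*(?P<hsd>-?\d+)%"
    r"\s+(?P<metric>\S+)\s*$")

def parse_row(line):
    m = ROW.match(line)
    if m is None:
        return None
    base, head = float(m["base"]), float(m["head"])
    change = float("inf") if m["change"] == "+Inf" else float(m["change"])
    return m["metric"], base, change, head

print(parse_row("      2777 ±  0%     +84.8%       5133 ±  0%  unixbench.score"))
# ('unixbench.score', 2777.0, 84.8, 5133.0)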
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/test:
lituya/unixbench/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/spawn
commit:
139622343ef31941effc6de6a5a9320371a00e62
7ea241afbf4924c58d41078599f7a32ba49fb985
139622343ef31941 7ea241afbf4924c58d41078599
---------------- --------------------------
%stddev %change %stddev
\ | \
2860 ± 1% +218.0% 9098 ± 0% unixbench.score
61296 ± 35% +669.0% 471374 ± 0% unixbench.time.involuntary_context_switches
1.069e+08 ± 0% +155.7% 2.734e+08 ± 0% unixbench.time.minor_page_faults
178.75 ± 2% +270.8% 662.75 ± 0% unixbench.time.percent_of_cpu_this_job_got
353.52 ± 1% +266.3% 1294 ± 0% unixbench.time.system_time
5.79 ± 1% +226.7% 18.93 ± 0% unixbench.time.user_time
9029197 ± 1% +163.5% 23788571 ± 0% unixbench.time.voluntary_context_switches
3121 ± 3% -30.1% 2181 ± 2% uptime.idle
12143 ± 1% +66.6% 20232 ± 1% meminfo.KernelStack
3935 ± 0% +20.0% 4721 ± 1% meminfo.PageTables
61609 ± 0% +11.5% 68720 ± 0% meminfo.SUnreclaim
153064 ± 0% +167.8% 409918 ± 0% softirqs.RCU
136085 ± 1% +346.1% 607054 ± 0% softirqs.SCHED
239770 ± 0% +198.7% 716185 ± 0% softirqs.TIMER
7.00 ± 0% +14.3% 8.00 ± 0% vmstat.procs.r
106697 ± 2% +108.5% 222434 ± 0% vmstat.system.cs
16835 ± 1% +45.7% 24531 ± 0% vmstat.system.in
543.50 ± 5% +45718.6% 249024 ± 3% latency_stats.hits.call_rwsem_down_write_failed.copy_process._do_fork.SyS_clone.entry_SYSCALL_64_fastpath
4541178 ± 1% +156.9% 11665972 ± 0% latency_stats.hits.do_wait.SyS_wait4.entry_SYSCALL_64_fastpath
691.00 ± 6% +1.1e+05% 751793 ± 5% latency_stats.sum.call_rwsem_down_write_failed.copy_process._do_fork.SyS_clone.entry_SYSCALL_64_fastpath
1.362e+09 ± 0% -28.8% 9.696e+08 ± 0% latency_stats.sum.do_wait.SyS_wait4.entry_SYSCALL_64_fastpath
1097010 ± 5% -26.6% 804970 ± 1% latency_stats.sum.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
61296 ± 35% +669.0% 471374 ± 0% time.involuntary_context_switches
1.069e+08 ± 0% +155.7% 2.734e+08 ± 0% time.minor_page_faults
178.75 ± 2% +270.8% 662.75 ± 0% time.percent_of_cpu_this_job_got
353.52 ± 1% +266.3% 1294 ± 0% time.system_time
5.79 ± 1% +226.7% 18.93 ± 0% time.user_time
9029197 ± 1% +163.5% 23788571 ± 0% time.voluntary_context_switches
2.625e+08 ± 5% +251.6% 9.229e+08 ± 1% cpuidle.C1-HSW.time
4126487 ± 1% +229.4% 13591854 ± 0% cpuidle.C1-HSW.usage
3.189e+08 ± 0% -39.3% 1.934e+08 ± 6% cpuidle.C1E-HSW.time
3960503 ± 2% -26.1% 2925407 ± 1% cpuidle.C1E-HSW.usage
2.278e+08 ± 4% -74.7% 57534896 ± 6% cpuidle.C3-HSW.time
1418950 ± 4% -87.7% 174589 ± 7% cpuidle.C3-HSW.usage
2.01e+09 ± 3% -69.2% 6.181e+08 ± 2% cpuidle.C6-HSW.time
1172252 ± 3% -74.3% 301380 ± 6% cpuidle.C6-HSW.usage
6773 ± 3% -22.3% 5260 ± 3% slabinfo.anon_vma.active_objs
6807 ± 3% -22.7% 5260 ± 3% slabinfo.anon_vma.num_objs
2238 ± 2% +12.8% 2524 ± 2% slabinfo.signal_cache.active_objs
2319 ± 2% +19.4% 2770 ± 2% slabinfo.signal_cache.num_objs
817.25 ± 0% +77.5% 1450 ± 2% slabinfo.task_struct.active_objs
279.50 ± 0% +88.5% 526.75 ± 0% slabinfo.task_struct.active_slabs
839.50 ± 0% +88.4% 1581 ± 0% slabinfo.task_struct.num_objs
279.50 ± 0% +88.5% 526.75 ± 0% slabinfo.task_struct.num_slabs
755.75 ± 0% +67.1% 1263 ± 2% proc-vmstat.nr_kernel_stack
979.25 ± 0% +21.0% 1185 ± 1% proc-vmstat.nr_page_table_pages
15396 ± 0% +11.6% 17180 ± 0% proc-vmstat.nr_slab_unreclaimable
1.154e+08 ± 0% +153.7% 2.926e+08 ± 0% proc-vmstat.numa_hit
1.154e+08 ± 0% +153.7% 2.926e+08 ± 0% proc-vmstat.numa_local
18684015 ± 0% +152.4% 47155989 ± 0% proc-vmstat.pgalloc_dma32
1.171e+08 ± 0% +145.1% 2.871e+08 ± 0% proc-vmstat.pgalloc_normal
1.066e+08 ± 0% +154.9% 2.717e+08 ± 0% proc-vmstat.pgfault
1.358e+08 ± 0% +146.1% 3.342e+08 ± 0% proc-vmstat.pgfree
12.64 ± 2% +246.0% 43.74 ± 0% turbostat.%Busy
416.50 ± 2% +245.9% 1440 ± 0% turbostat.Avg_MHz
2.61 ± 2% -80.9% 0.50 ± 3% turbostat.CPU%c3
36.78 ± 3% -71.5% 10.48 ± 1% turbostat.CPU%c6
36.75 ± 3% +25.9% 46.25 ± 1% turbostat.CoreTmp
0.08 ± 23% -66.7% 0.03 ± 69% turbostat.Pkg%pc2
5.77 ± 35% -50.0% 2.88 ± 27% turbostat.Pkg%pc6
42.00 ± 1% +16.7% 49.00 ± 0% turbostat.PkgTmp
43.04 ± 1% +46.5% 63.07 ± 0% turbostat.PkgWatt
0.88 ± 1% +84.1% 1.62 ± 0% turbostat.RAMWatt
10808 ± 15% +176.5% 29888 ± 2% sched_debug.cfs_rq[0]:/.exec_clock
23.75 ± 87% -91.6% 2.00 ±117% sched_debug.cfs_rq[0]:/.load_avg
2906711 ± 21% -35.2% 1884311 ± 4% sched_debug.cfs_rq[0]:/.min_vruntime
0.25 ±173% +31000.0% 77.75 ±133% sched_debug.cfs_rq[0]:/.runnable_load_avg
23.75 ± 87% -91.6% 2.00 ±117% sched_debug.cfs_rq[0]:/.tg_load_avg_contrib
71.75 ± 51% -69.0% 22.25 ±140% sched_debug.cfs_rq[0]:/.util_avg
8159 ± 12% +146.5% 20109 ± 0% sched_debug.cfs_rq[10]:/.exec_clock
5.00 ±104% -100.0% 0.00 ± 0% sched_debug.cfs_rq[10]:/.load_avg
72414 ± 23% +334.4% 314569 ± 18% sched_debug.cfs_rq[10]:/.min_vruntime
-2834315 ±-21% -44.6% -1569770 ± -7% sched_debug.cfs_rq[10]:/.spread0
5.00 ±104% -100.0% 0.00 ± 0% sched_debug.cfs_rq[10]:/.tg_load_avg_contrib
6.00 ± 86% -87.5% 0.75 ± 57% sched_debug.cfs_rq[10]:/.util_avg
7813 ± 29% +160.2% 20328 ± 2% sched_debug.cfs_rq[11]:/.exec_clock
144056 ± 46% +204.9% 439226 ± 60% sched_debug.cfs_rq[11]:/.min_vruntime
-2762678 ±-20% -47.7% -1445117 ±-14% sched_debug.cfs_rq[11]:/.spread0
7289 ± 26% +177.3% 20210 ± 0% sched_debug.cfs_rq[12]:/.exec_clock
-2514451 ±-14% -35.8% -1613921 ± -5% sched_debug.cfs_rq[12]:/.spread0
8403 ± 28% +138.5% 20045 ± 0% sched_debug.cfs_rq[13]:/.exec_clock
68206 ± 31% +301.1% 273582 ± 4% sched_debug.cfs_rq[13]:/.min_vruntime
-2838530 ±-22% -43.3% -1610764 ± -4% sched_debug.cfs_rq[13]:/.spread0
8619 ± 30% +132.4% 20029 ± 0% sched_debug.cfs_rq[14]:/.exec_clock
11.50 ± 77% -87.0% 1.50 ±137% sched_debug.cfs_rq[14]:/.load_avg
83933 ± 49% +260.3% 302385 ± 12% sched_debug.cfs_rq[14]:/.min_vruntime
-2822808 ±-22% -44.0% -1581964 ± -6% sched_debug.cfs_rq[14]:/.spread0
11.50 ± 77% -87.0% 1.50 ±137% sched_debug.cfs_rq[14]:/.tg_load_avg_contrib
11.50 ± 70% -95.7% 0.50 ±173% sched_debug.cfs_rq[14]:/.util_avg
8121 ± 17% +153.7% 20606 ± 4% sched_debug.cfs_rq[15]:/.exec_clock
3.00 ± 62% -100.0% 0.00 ± 0% sched_debug.cfs_rq[15]:/.load_avg
107660 ± 73% +195.1% 317706 ± 25% sched_debug.cfs_rq[15]:/.min_vruntime
83.33 ±141% +576.5% 563.75 ± 19% sched_debug.cfs_rq[15]:/.removed_load_avg
83.33 ±141% +576.5% 563.75 ± 19% sched_debug.cfs_rq[15]:/.removed_util_avg
0.33 ±141% +9800.0% 33.00 ± 31% sched_debug.cfs_rq[15]:/.runnable_load_avg
-2799081 ±-24% -44.0% -1566646 ± -9% sched_debug.cfs_rq[15]:/.spread0
3.00 ± 62% -100.0% 0.00 ± 0% sched_debug.cfs_rq[15]:/.tg_load_avg_contrib
4.50 ± 48% -72.2% 1.25 ± 87% sched_debug.cfs_rq[15]:/.util_avg
8774 ± 9% +194.9% 25872 ± 7% sched_debug.cfs_rq[1]:/.exec_clock
12.00 ± -8% +718.8% 98.25 ±136% sched_debug.cfs_rq[1]:/.load
1.00 ± 0% +8300.0% 84.00 ±136% sched_debug.cfs_rq[1]:/.runnable_load_avg
8333 ± 6% +205.6% 25464 ± 7% sched_debug.cfs_rq[2]:/.exec_clock
10.00 ±167% +347.5% 44.75 ± 60% sched_debug.cfs_rq[2]:/.load
2512896 ± 4% -40.1% 1505666 ± 20% sched_debug.cfs_rq[2]:/.min_vruntime
9.50 ± 18% -73.7% 2.50 ± 60% sched_debug.cfs_rq[2]:/.nr_spread_over
3.00 ±141% +791.7% 26.75 ± 41% sched_debug.cfs_rq[2]:/.runnable_load_avg
8669 ± 16% +208.8% 26767 ± 1% sched_debug.cfs_rq[3]:/.exec_clock
2870913 ± 19% -40.0% 1721928 ± 11% sched_debug.cfs_rq[3]:/.min_vruntime
73.50 ± 69% -97.6% 1.75 ±142% sched_debug.cfs_rq[3]:/.util_avg
8368 ± 20% +219.9% 26769 ± 1% sched_debug.cfs_rq[4]:/.exec_clock
2765749 ± 27% -38.7% 1695670 ± 9% sched_debug.cfs_rq[4]:/.min_vruntime
105.00 ± 79% -97.9% 2.25 ± 96% sched_debug.cfs_rq[4]:/.util_avg
8653 ± 23% +196.3% 25644 ± 9% sched_debug.cfs_rq[5]:/.exec_clock
2933961 ± 34% -52.4% 1396567 ± 19% sched_debug.cfs_rq[5]:/.min_vruntime
58.00 ± 68% -79.3% 12.00 ±121% sched_debug.cfs_rq[5]:/.util_avg
9321 ± 11% +172.5% 25398 ± 6% sched_debug.cfs_rq[6]:/.exec_clock
15.25 ±158% +568.9% 102.00 ± 98% sched_debug.cfs_rq[6]:/.load
2973173 ± 24% -51.4% 1446280 ± 14% sched_debug.cfs_rq[6]:/.min_vruntime
1.50 ±173% +5416.7% 82.75 ±125% sched_debug.cfs_rq[6]:/.runnable_load_avg
9269 ± 7% +189.3% 26813 ± 1% sched_debug.cfs_rq[7]:/.exec_clock
5.00 ± 80% +265.0% 18.25 ± 18% sched_debug.cfs_rq[7]:/.load
2658153 ± 6% -29.1% 1884549 ± 7% sched_debug.cfs_rq[7]:/.min_vruntime
2.50 ±103% +500.0% 15.00 ± 16% sched_debug.cfs_rq[7]:/.runnable_load_avg
82.00 ± 66% -86.9% 10.75 ±119% sched_debug.cfs_rq[7]:/.util_avg
7970 ± 11% +153.1% 20169 ± 0% sched_debug.cfs_rq[8]:/.exec_clock
57884 ± 15% +440.6% 312939 ± 16% sched_debug.cfs_rq[8]:/.min_vruntime
-2848841 ±-22% -44.8% -1571391 ± -5% sched_debug.cfs_rq[8]:/.spread0
6321 ± 29% +215.4% 19935 ± 0% sched_debug.cfs_rq[9]:/.exec_clock
4.25 ± 45% -92.2% 0.33 ±141% sched_debug.cfs_rq[9]:/.load_avg
105874 ± 70% +159.0% 274232 ± 2% sched_debug.cfs_rq[9]:/.min_vruntime
-2800853 ±-20% -42.5% -1610100 ± -5% sched_debug.cfs_rq[9]:/.spread0
4.25 ± 45% -92.2% 0.33 ±141% sched_debug.cfs_rq[9]:/.tg_load_avg_contrib
684762 ± 30% -47.5% 359545 ± 25% sched_debug.cpu#0.avg_idle
0.50 ±100% +16200.0% 81.50 ±137% sched_debug.cpu#0.cpu_load[0]
1.25 ±131% +4200.0% 53.75 ±101% sched_debug.cpu#0.cpu_load[1]
2.25 ±101% +1822.2% 43.25 ± 70% sched_debug.cpu#0.cpu_load[2]
4.00 ±106% +956.2% 42.25 ± 54% sched_debug.cpu#0.cpu_load[3]
5.50 ±100% +695.5% 43.75 ± 42% sched_debug.cpu#0.cpu_load[4]
-22.25 ±-43% +1870.8% -438.50 ± -8% sched_debug.cpu#0.nr_uninterruptible
688905 ± 18% -61.7% 263718 ± 26% sched_debug.cpu#1.avg_idle
0.75 ±173% +2733.3% 21.25 ± 3% sched_debug.cpu#1.cpu_load[0]
0.75 ±173% +3266.7% 25.25 ± 43% sched_debug.cpu#1.cpu_load[1]
0.75 ±173% +3766.7% 29.00 ± 54% sched_debug.cpu#1.cpu_load[2]
0.50 ±173% +6450.0% 32.75 ± 39% sched_debug.cpu#1.cpu_load[3]
0.25 ±173% +15200.0% 38.25 ± 29% sched_debug.cpu#1.cpu_load[4]
897.00 ± 0% +2095.2% 19690 ± 22% sched_debug.cpu#1.curr->pid
12.00 ± -8% +1764.6% 223.75 ± 55% sched_debug.cpu#1.load
37685 ± 3% +52.3% 57377 ± 17% sched_debug.cpu#1.nr_load_updates
527300 ± 9% +266.7% 1933691 ± 82% sched_debug.cpu#1.nr_switches
-23.25 ±-82% +1451.6% -360.75 ±-10% sched_debug.cpu#1.nr_uninterruptible
527317 ± 9% +266.8% 1933971 ± 82% sched_debug.cpu#1.sched_count
255612 ± 10% +240.9% 871328 ± 91% sched_debug.cpu#1.sched_goidle
81533 ± 19% +819.0% 749326 ±111% sched_debug.cpu#1.ttwu_count
4112 ± 33% +14469.8% 599218 ±144% sched_debug.cpu#1.ttwu_local
982806 ± 1% -57.9% 413490 ± 18% sched_debug.cpu#10.avg_idle
0.25 ±173% +17000.0% 42.75 ± 36% sched_debug.cpu#10.cpu_load[1]
0.00 ± 0% +Inf% 52.00 ± 30% sched_debug.cpu#10.cpu_load[2]
0.00 ± 0% +Inf% 55.25 ± 18% sched_debug.cpu#10.cpu_load[3]
0.00 ± 0% +Inf% 56.25 ± 11% sched_debug.cpu#10.cpu_load[4]
5741 ±133% +405.2% 29004 ± 24% sched_debug.cpu#10.curr->pid
17466 ± 3% +50.9% 26360 ± 4% sched_debug.cpu#10.nr_load_updates
232630 ± 7% +169.9% 627781 ± 20% sched_debug.cpu#10.nr_switches
15.25 ± 18% +2372.1% 377.00 ± 9% sched_debug.cpu#10.nr_uninterruptible
232647 ± 7% +169.9% 628029 ± 20% sched_debug.cpu#10.sched_count
67648 ± 12% +261.1% 244266 ± 26% sched_debug.cpu#10.sched_goidle
169385 ± 10% +45.3% 246194 ± 26% sched_debug.cpu#10.ttwu_count
1023 ± 53% +13289.1% 137037 ± 48% sched_debug.cpu#10.ttwu_local
910969 ± 7% -46.0% 492369 ± 10% sched_debug.cpu#11.avg_idle
1.50 ±100% +6033.3% 92.00 ±111% sched_debug.cpu#11.cpu_load[0]
1.00 ±100% +7325.0% 74.25 ± 65% sched_debug.cpu#11.cpu_load[1]
0.50 ±100% +13550.0% 68.25 ± 39% sched_debug.cpu#11.cpu_load[2]
0.00 ± 0% +Inf% 67.75 ± 27% sched_debug.cpu#11.cpu_load[3]
0.00 ± 0% +Inf% 68.00 ± 16% sched_debug.cpu#11.cpu_load[4]
9930 ± 60% +198.8% 29669 ± 34% sched_debug.cpu#11.curr->pid
16703 ± 21% +58.0% 26386 ± 5% sched_debug.cpu#11.nr_load_updates
230368 ± 27% +147.5% 570050 ± 7% sched_debug.cpu#11.nr_switches
18.50 ± 28% +1802.7% 352.00 ± 7% sched_debug.cpu#11.nr_uninterruptible
230383 ± 27% +147.5% 570304 ± 7% sched_debug.cpu#11.sched_count
72828 ± 23% +196.9% 216221 ± 10% sched_debug.cpu#11.sched_goidle
602.00 ± 58% +16215.9% 98222 ± 0% sched_debug.cpu#11.ttwu_local
987614 ± 1% -51.1% 482482 ± 18% sched_debug.cpu#12.avg_idle
2.50 ±131% +920.0% 25.50 ± 29% sched_debug.cpu#12.cpu_load[0]
2.25 ±148% +1077.8% 26.50 ± 13% sched_debug.cpu#12.cpu_load[1]
2.00 ±173% +1637.5% 34.75 ± 28% sched_debug.cpu#12.cpu_load[2]
2.00 ±173% +2037.5% 42.75 ± 29% sched_debug.cpu#12.cpu_load[3]
2.00 ±173% +2312.5% 48.25 ± 25% sched_debug.cpu#12.cpu_load[4]
5424 ± 75% +524.4% 33866 ± 21% sched_debug.cpu#12.curr->pid
16.50 ± 32% +2397.0% 412.00 ± 8% sched_debug.cpu#12.nr_uninterruptible
932518 ± 11% -53.8% 430756 ± 25% sched_debug.cpu#13.avg_idle
5.00 ±-20% +430.0% 26.50 ± 32% sched_debug.cpu#13.cpu_load[0]
3.00 ±-33% +1083.3% 35.50 ± 50% sched_debug.cpu#13.cpu_load[1]
1.00 ±-100% +4450.0% 45.50 ± 40% sched_debug.cpu#13.cpu_load[2]
1.00 ±-100% +5000.0% 51.00 ± 26% sched_debug.cpu#13.cpu_load[3]
0.33 ±141% +15425.0% 51.75 ± 24% sched_debug.cpu#13.cpu_load[4]
7281 ± 99% +488.7% 42863 ± 19% sched_debug.cpu#13.curr->pid
17767 ± 15% +47.3% 26178 ± 2% sched_debug.cpu#13.nr_load_updates
239785 ± 26% +151.6% 603223 ± 16% sched_debug.cpu#13.nr_switches
17.00 ± 28% +2333.8% 413.75 ± 16% sched_debug.cpu#13.nr_uninterruptible
239799 ± 26% +151.7% 603464 ± 16% sched_debug.cpu#13.sched_count
74352 ± 25% +213.5% 233129 ± 21% sched_debug.cpu#13.sched_goidle
7980 ± 96% +1514.8% 128867 ± 41% sched_debug.cpu#13.ttwu_local
905514 ± 16% -49.9% 453914 ± 20% sched_debug.cpu#14.avg_idle
0.00 ± -1% +Inf% 71.25 ±109% sched_debug.cpu#14.cpu_load[1]
2289 ±100% +605.4% 16150 ± 60% sched_debug.cpu#14.curr->pid
18237 ± 15% +55.8% 28421 ± 15% sched_debug.cpu#14.nr_load_updates
266131 ± 27% +209.2% 822794 ± 57% sched_debug.cpu#14.nr_switches
19.50 ± 66% +1769.2% 364.50 ± 6% sched_debug.cpu#14.nr_uninterruptible
266148 ± 27% +209.2% 823029 ± 57% sched_debug.cpu#14.sched_count
90503 ± 26% +276.7% 340951 ± 68% sched_debug.cpu#14.sched_goidle
760.25 ± 28% +31235.3% 238226 ±101% sched_debug.cpu#14.ttwu_local
985776 ± 1% -46.1% 530905 ± 1% sched_debug.cpu#15.avg_idle
18500 ± 11% +41.1% 26107 ± 3% sched_debug.cpu#15.nr_load_updates
231923 ± 19% +139.4% 555216 ± 3% sched_debug.cpu#15.nr_switches
13.00 ± 49% +3103.8% 416.50 ± 4% sched_debug.cpu#15.nr_uninterruptible
231938 ± 19% +139.5% 555458 ± 3% sched_debug.cpu#15.sched_count
81257 ± 19% +158.3% 209886 ± 4% sched_debug.cpu#15.sched_goidle
145943 ± 21% +42.4% 207858 ± 2% sched_debug.cpu#15.ttwu_count
1637 ± 73% +5884.5% 97981 ± 0% sched_debug.cpu#15.ttwu_local
556737 ± 15% -48.3% 287729 ± 34% sched_debug.cpu#2.avg_idle
0.25 ±173% +35900.0% 90.00 ±113% sched_debug.cpu#2.cpu_load[0]
0.25 ±173% +27100.0% 68.00 ± 70% sched_debug.cpu#2.cpu_load[1]
0.25 ±173% +18600.0% 46.75 ± 57% sched_debug.cpu#2.cpu_load[2]
0.25 ±173% +15400.0% 38.75 ± 36% sched_debug.cpu#2.cpu_load[3]
0.25 ±173% +16000.0% 40.25 ± 26% sched_debug.cpu#2.cpu_load[4]
7241 ±108% +255.2% 25722 ± 41% sched_debug.cpu#2.curr->pid
10.00 ±167% +387.5% 48.75 ± 49% sched_debug.cpu#2.load
36328 ± 1% +46.4% 53193 ± 22% sched_debug.cpu#2.nr_load_updates
491179 ± 2% +284.9% 1890762 ± 85% sched_debug.cpu#2.nr_switches
-18.25 ±-35% +1646.6% -318.75 ±-18% sched_debug.cpu#2.nr_uninterruptible
491195 ± 2% +285.0% 1891050 ± 85% sched_debug.cpu#2.sched_count
238992 ± 2% +255.6% 849917 ± 94% sched_debug.cpu#2.sched_goidle
70908 ± 5% +967.6% 757003 ±110% sched_debug.cpu#2.ttwu_count
3960 ± 11% +14961.7% 596555 ±145% sched_debug.cpu#2.ttwu_local
514303 ± 17% -28.7% 366548 ± 6% sched_debug.cpu#3.avg_idle
0.50 ±100% +1900.0% 10.00 ± 52% sched_debug.cpu#3.cpu_load[0]
0.25 ±173% +13700.0% 34.50 ± 56% sched_debug.cpu#3.cpu_load[1]
0.25 ±173% +21000.0% 52.75 ± 61% sched_debug.cpu#3.cpu_load[2]
0.25 ±173% +21700.0% 54.50 ± 48% sched_debug.cpu#3.cpu_load[3]
0.25 ±173% +19400.0% 48.75 ± 32% sched_debug.cpu#3.cpu_load[4]
39036 ± 9% +22.6% 47867 ± 3% sched_debug.cpu#3.nr_load_updates
540531 ± 18% +80.4% 975275 ± 2% sched_debug.cpu#3.nr_switches
-17.00 ±-42% +1761.8% -316.50 ±-19% sched_debug.cpu#3.nr_uninterruptible
540549 ± 18% +80.5% 975568 ± 2% sched_debug.cpu#3.sched_count
264596 ± 18% +49.5% 395608 ± 3% sched_debug.cpu#3.sched_goidle
73196 ± 18% +265.1% 267276 ± 3% sched_debug.cpu#3.ttwu_count
4329 ± 33% +2142.7% 97090 ± 1% sched_debug.cpu#3.ttwu_local
0.50 ±100% +3850.0% 19.75 ± 10% sched_debug.cpu#4.cpu_load[0]
0.00 ± 0% +Inf% 42.25 ± 57% sched_debug.cpu#4.cpu_load[1]
0.25 ±173% +19400.0% 48.75 ± 29% sched_debug.cpu#4.cpu_load[2]
0.25 ±173% +20100.0% 50.50 ± 13% sched_debug.cpu#4.cpu_load[3]
0.25 ±173% +19300.0% 48.50 ± 14% sched_debug.cpu#4.cpu_load[4]
96.25 ±102% +46885.2% 45223 ± 20% sched_debug.cpu#4.curr->pid
-19.50 ±-19% +2055.1% -420.25 ±-12% sched_debug.cpu#4.nr_uninterruptible
0.00 ± 0% +Inf% 24.50 ± 26% sched_debug.cpu#5.cpu_load[0]
0.00 ± 0% +Inf% 31.25 ± 42% sched_debug.cpu#5.cpu_load[1]
0.00 ± 0% +Inf% 45.25 ± 52% sched_debug.cpu#5.cpu_load[2]
0.00 ± 0% +Inf% 48.75 ± 52% sched_debug.cpu#5.cpu_load[3]
0.00 ± 0% +Inf% 47.00 ± 49% sched_debug.cpu#5.cpu_load[4]
-13.00 ±-62% +3198.1% -428.75 ±-26% sched_debug.cpu#5.nr_uninterruptible
625680 ± 17% -50.0% 312954 ± 29% sched_debug.cpu#6.avg_idle
0.75 ±173% +2700.0% 21.00 ± 43% sched_debug.cpu#6.cpu_load[0]
0.50 ±173% +5150.0% 26.25 ± 53% sched_debug.cpu#6.cpu_load[1]
0.25 ±173% +12400.0% 31.25 ± 62% sched_debug.cpu#6.cpu_load[2]
0.25 ±173% +14500.0% 36.50 ± 53% sched_debug.cpu#6.cpu_load[3]
0.25 ±173% +16200.0% 40.75 ± 37% sched_debug.cpu#6.cpu_load[4]
5233 ±118% +627.5% 38070 ± 18% sched_debug.cpu#6.curr->pid
40072 ± 5% +29.4% 51833 ± 16% sched_debug.cpu#6.nr_load_updates
598048 ± 16% +180.1% 1674979 ± 76% sched_debug.cpu#6.nr_switches
-15.00 ±-14% +2550.0% -397.50 ±-11% sched_debug.cpu#6.nr_uninterruptible
598064 ± 16% +180.1% 1675255 ± 76% sched_debug.cpu#6.sched_count
293293 ± 16% +153.2% 742697 ± 86% sched_debug.cpu#6.sched_goidle
78730 ± 24% +733.7% 656394 ± 99% sched_debug.cpu#6.ttwu_count
4408 ± 27% +11122.5% 494717 ±139% sched_debug.cpu#6.ttwu_local
1.00 ±100% +7650.0% 77.50 ±138% sched_debug.cpu#7.cpu_load[0]
0.25 ±173% +20000.0% 50.25 ±100% sched_debug.cpu#7.cpu_load[1]
0.50 ±100% +8150.0% 41.25 ± 55% sched_debug.cpu#7.cpu_load[2]
1.00 ±100% +3550.0% 36.50 ± 36% sched_debug.cpu#7.cpu_load[3]
1.00 ±100% +3175.0% 32.75 ± 29% sched_debug.cpu#7.cpu_load[4]
4726 ± 95% +458.6% 26402 ± 62% sched_debug.cpu#7.curr->pid
4.50 ± 77% +1661.1% 79.25 ±135% sched_debug.cpu#7.load
37643 ± 7% +25.9% 47381 ± 1% sched_debug.cpu#7.nr_load_updates
528594 ± 7% +86.1% 983699 ± 2% sched_debug.cpu#7.nr_switches
-18.50 ±-71% +2395.9% -461.75 ±-13% sched_debug.cpu#7.nr_uninterruptible
528611 ± 7% +86.1% 983990 ± 2% sched_debug.cpu#7.sched_count
258262 ± 7% +54.1% 398087 ± 3% sched_debug.cpu#7.sched_goidle
85314 ± 17% +215.4% 269067 ± 2% sched_debug.cpu#7.ttwu_count
3050 ± 11% +3010.7% 94892 ± 0% sched_debug.cpu#7.ttwu_local
839878 ± 12% -36.1% 536832 ± 0% sched_debug.cpu#8.avg_idle
1.00 ±141% +3100.0% 32.00 ± 11% sched_debug.cpu#8.cpu_load[0]
0.67 ±141% +5487.5% 37.25 ± 24% sched_debug.cpu#8.cpu_load[1]
0.33 ±141% +15200.0% 51.00 ± 30% sched_debug.cpu#8.cpu_load[2]
0.33 ±141% +16925.0% 56.75 ± 18% sched_debug.cpu#8.cpu_load[3]
0.00 ± 0% +Inf% 54.75 ± 9% sched_debug.cpu#8.cpu_load[4]
6752 ±107% +445.4% 36830 ± 59% sched_debug.cpu#8.curr->pid
18369 ± 13% +46.4% 26887 ± 6% sched_debug.cpu#8.nr_load_updates
25.25 ± 35% +1522.8% 409.75 ± 4% sched_debug.cpu#8.nr_uninterruptible
925902 ± 7% -55.3% 413447 ± 30% sched_debug.cpu#9.avg_idle
9830 ±106% +291.6% 38498 ± 34% sched_debug.cpu#9.curr->pid
16022 ± 19% +63.2% 26154 ± 3% sched_debug.cpu#9.nr_load_updates
200156 ± 29% +208.0% 616466 ± 19% sched_debug.cpu#9.nr_switches
15.75 ± 57% +2385.7% 391.50 ± 3% sched_debug.cpu#9.nr_uninterruptible
200172 ± 29% +208.1% 616723 ± 19% sched_debug.cpu#9.sched_count
71186 ± 27% +236.3% 239419 ± 24% sched_debug.cpu#9.sched_goidle
126454 ± 32% +89.4% 239544 ± 25% sched_debug.cpu#9.ttwu_count
658.25 ± 25% +20542.2% 135877 ± 48% sched_debug.cpu#9.ttwu_local
0.05 ±157% +1202.7% 0.64 ±151% sched_debug.rt_rq[2]:/.rt_time
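Note: the turbostat columns above are internally consistent: the turbostat man
page defines Avg_MHz = %Busy/100 * Bzy_MHz, so the busy-frequency implied by
the lituya rows should come out roughly constant across both commits. A quick
check in plain Python, with the numbers copied from the table above:

# Recover the implied Bzy_MHz from (%Busy, Avg_MHz); a flat result means the
# Avg_MHz jump comes from utilization, not from clock changes.
rows = {"base": (12.64, 416.50), "head": (43.74, 1440.0)}
for commit, (busy_pct, avg_mhz) in rows.items():
    print(f"{commit}: implied Bzy_MHz ~ {avg_mhz / (busy_pct / 100):.0f}")
# base: implied Bzy_MHz ~ 3295
# head: implied Bzy_MHz ~ 3292   (the +245.9% Avg_MHz is pure utilization)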
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/test:
nhm-white/unixbench/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/spawn
commit:
139622343ef31941effc6de6a5a9320371a00e62
7ea241afbf4924c58d41078599f7a32ba49fb985
139622343ef31941 7ea241afbf4924c58d41078599
---------------- --------------------------
%stddev %change %stddev
\ | \
1374 ± 0% +200.3% 4128 ± 0% unixbench.score
271201 ± 6% +20.2% 326006 ± 6% unixbench.time.involuntary_context_switches
58873699 ± 0% +126.9% 1.336e+08 ± 0% unixbench.time.minor_page_faults
141.25 ± 0% +131.9% 327.50 ± 15% unixbench.time.percent_of_cpu_this_job_got
276.64 ± 0% +160.2% 719.79 ± 0% unixbench.time.system_time
4.53 ± 1% +147.8% 11.23 ± 1% unixbench.time.user_time
5029357 ± 0% +127.4% 11436751 ± 0% unixbench.time.voluntary_context_switches
6408 ± 0% +18.9% 7621 ± 9% meminfo.KernelStack
109692 ± 1% +145.6% 269408 ± 0% softirqs.RCU
100013 ± 1% +229.6% 329644 ± 0% softirqs.SCHED
164001 ± 1% +142.1% 397035 ± 1% softirqs.TIMER
19.79 ± 0% +124.9% 44.51 ± 15% turbostat.%Busy
574.00 ± 0% +126.4% 1299 ± 15% turbostat.Avg_MHz
17.39 ± 0% -86.7% 2.31 ± 3% turbostat.CPU%c3
52.50 ± 5% +23.3% 64.75 ± 5% turbostat.CoreTmp
0.13 ± 5% +411.5% 0.67 ± 98% turbostat.Pkg%pc3
72858341 ± 2% +256.6% 2.598e+08 ± 1% cpuidle.C1-NHM.time
4164450 ± 3% +52.0% 6329592 ± 6% cpuidle.C1-NHM.usage
1.037e+08 ± 1% +16.0% 1.203e+08 ± 1% cpuidle.C1E-NHM.time
2.502e+08 ± 0% -64.6% 88526888 ± 4% cpuidle.C3-NHM.time
1222733 ± 1% -81.5% 226236 ± 1% cpuidle.C3-NHM.usage
305573 ± 1% -46.6% 163112 ± 21% cpuidle.C6-NHM.usage
3632 ± 1% -22.6% 2811 ± 4% slabinfo.anon_vma.active_objs
3678 ± 1% -23.5% 2812 ± 4% slabinfo.anon_vma.num_objs
256.00 ± 0% +56.2% 400.00 ± 13% slabinfo.kmem_cache_node.active_objs
256.00 ± 0% +56.2% 400.00 ± 13% slabinfo.kmem_cache_node.num_objs
429.00 ± 0% +21.0% 519.00 ± 8% slabinfo.task_struct.active_objs
448.50 ± 0% +26.3% 566.50 ± 8% slabinfo.task_struct.num_objs
271201 ± 6% +20.2% 326006 ± 6% time.involuntary_context_switches
58873699 ± 0% +126.9% 1.336e+08 ± 0% time.minor_page_faults
141.25 ± 0% +131.9% 327.50 ± 15% time.percent_of_cpu_this_job_got
276.64 ± 0% +160.2% 719.79 ± 0% time.system_time
4.53 ± 1% +147.8% 11.23 ± 1% time.user_time
5029357 ± 0% +127.4% 11436751 ± 0% time.voluntary_context_switches
404.25 ± 1% +18.6% 479.50 ± 9% proc-vmstat.nr_kernel_stack
64399670 ± 0% +121.1% 1.424e+08 ± 0% proc-vmstat.numa_hit
64399670 ± 0% +121.1% 1.424e+08 ± 0% proc-vmstat.numa_local
37850172 ± 0% +116.8% 82071299 ± 0% proc-vmstat.pgalloc_dma32
37071781 ± 0% +118.0% 80813788 ± 0% proc-vmstat.pgalloc_normal
58883203 ± 0% +126.1% 1.331e+08 ± 0% proc-vmstat.pgfault
74915678 ± 0% +117.4% 1.629e+08 ± 0% proc-vmstat.pgfree
472.00 ±173% +3.5e+06% 16433923 ±100% latency_stats.avg.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
397.00 ± 43% +5.8e+05% 2304097 ±127% latency_stats.avg.wait_on_page_bit.filemap_fdatawait_range.filemap_fdatawait.sync_inodes_sb.sync_inodes_one_sb.iterate_supers.sys_sync.entry_SYSCALL_64_fastpath
309.75 ± 8% +21620.2% 67278 ± 5% latency_stats.hits.call_rwsem_down_write_failed.copy_process._do_fork.SyS_clone.entry_SYSCALL_64_fastpath
2519095 ± 0% +125.0% 5668481 ± 0% latency_stats.hits.do_wait.SyS_wait4.entry_SYSCALL_64_fastpath
472.00 ±173% +4.2e+06% 19909910 ±103% latency_stats.max.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
1592 ± 77% +1.1e+06% 17409287 ±112% latency_stats.max.wait_on_page_bit.filemap_fdatawait_range.filemap_fdatawait.sync_inodes_sb.sync_inodes_one_sb.iterate_supers.sys_sync.entry_SYSCALL_64_fastpath
911.75 ± 13% +15961.7% 146442 ± 6% latency_stats.sum.call_rwsem_down_write_failed.copy_process._do_fork.SyS_clone.entry_SYSCALL_64_fastpath
6.662e+08 ± 0% -25.2% 4.982e+08 ± 0% latency_stats.sum.do_wait.SyS_wait4.entry_SYSCALL_64_fastpath
472.00 ±173% +7e+06% 32926268 ±122% latency_stats.sum.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
3552 ± 47% +8.7e+05% 30738303 ±134% latency_stats.sum.wait_on_page_bit.filemap_fdatawait_range.filemap_fdatawait.sync_inodes_sb.sync_inodes_one_sb.iterate_supers.sys_sync.entry_SYSCALL_64_fastpath
19504 ± 5% +67.1% 32588 ± 16% sched_debug.cfs_rq[0]:/.exec_clock
5908647 ± 6% -70.0% 1774297 ± 33% sched_debug.cfs_rq[0]:/.min_vruntime
10.50 ± 30% +54.8% 16.25 ± 7% sched_debug.cfs_rq[0]:/.nr_spread_over
73.50 ± 29% -66.0% 25.00 ± 47% sched_debug.cfs_rq[0]:/.tg_load_avg
137.25 ± 28% -95.3% 6.50 ± 31% sched_debug.cfs_rq[0]:/.util_avg
13686 ± 4% +121.4% 30299 ± 13% sched_debug.cfs_rq[1]:/.exec_clock
6.50 ± 53% -96.2% 0.25 ±173% sched_debug.cfs_rq[1]:/.load_avg
3806197 ± 12% -48.7% 1952484 ± 11% sched_debug.cfs_rq[1]:/.min_vruntime
-2102463 ±-15% -108.5% 178185 ±457% sched_debug.cfs_rq[1]:/.spread0
74.75 ± 30% -69.9% 22.50 ± 56% sched_debug.cfs_rq[1]:/.tg_load_avg
6.75 ± 48% -96.3% 0.25 ±173% sched_debug.cfs_rq[1]:/.tg_load_avg_contrib
79.75 ± 83% -84.3% 12.50 ±133% sched_debug.cfs_rq[1]:/.util_avg
15033 ± 7% +109.0% 31425 ± 15% sched_debug.cfs_rq[2]:/.exec_clock
24.00 ±110% +226.0% 78.25 ± 68% sched_debug.cfs_rq[2]:/.load
5365005 ± 8% -60.5% 2117425 ± 10% sched_debug.cfs_rq[2]:/.min_vruntime
18.00 ± 16% -37.5% 11.25 ± 29% sched_debug.cfs_rq[2]:/.nr_spread_over
5.50 ±126% +800.0% 49.50 ± 32% sched_debug.cfs_rq[2]:/.runnable_load_avg
76.75 ± 29% -70.7% 22.50 ± 60% sched_debug.cfs_rq[2]:/.tg_load_avg
151.25 ± 29% -84.1% 24.00 ± 97% sched_debug.cfs_rq[2]:/.util_avg
17138 ± 6% +81.5% 31112 ± 18% sched_debug.cfs_rq[3]:/.exec_clock
13.75 ± 44% -87.3% 1.75 ±102% sched_debug.cfs_rq[3]:/.load_avg
5700138 ± 13% -63.6% 2073243 ± 9% sched_debug.cfs_rq[3]:/.min_vruntime
13.50 ±152% +235.2% 45.25 ± 29% sched_debug.cfs_rq[3]:/.runnable_load_avg
77.50 ± 28% -70.3% 23.00 ± 58% sched_debug.cfs_rq[3]:/.tg_load_avg
13.75 ± 44% -87.3% 1.75 ±102% sched_debug.cfs_rq[3]:/.tg_load_avg_contrib
159.50 ± 22% -96.1% 6.25 ±101% sched_debug.cfs_rq[3]:/.util_avg
10797 ± 11% +122.4% 24009 ± 19% sched_debug.cfs_rq[4]:/.exec_clock
62.00 ±173% +222.2% 199.75 ± 52% sched_debug.cfs_rq[4]:/.load
59.00 ±168% +190.7% 171.50 ± 70% sched_debug.cfs_rq[4]:/.runnable_load_avg
-5360985 ± -8% -76.8% -1242465 ±-44% sched_debug.cfs_rq[4]:/.spread0
80.50 ± 25% -71.1% 23.25 ± 57% sched_debug.cfs_rq[4]:/.tg_load_avg
13968 ± 12% +69.4% 23667 ± 17% sched_debug.cfs_rq[5]:/.exec_clock
319.25 ± 34% -79.0% 67.00 ± 31% sched_debug.cfs_rq[5]:/.load
176169 ± 36% +269.1% 650208 ± 16% sched_debug.cfs_rq[5]:/.min_vruntime
-5732646 ± -6% -80.4% -1124095 ±-57% sched_debug.cfs_rq[5]:/.spread0
81.00 ± 25% -69.1% 25.00 ± 49% sched_debug.cfs_rq[5]:/.tg_load_avg
15472 ± 14% +57.9% 24436 ± 18% sched_debug.cfs_rq[6]:/.exec_clock
20.25 ± 56% -88.9% 2.25 ± 79% sched_debug.cfs_rq[6]:/.load_avg
-5381953 ± -7% -78.9% -1136651 ±-57% sched_debug.cfs_rq[6]:/.spread0
82.75 ± 26% -62.8% 30.75 ± 66% sched_debug.cfs_rq[6]:/.tg_load_avg
20.25 ± 56% -88.9% 2.25 ± 79% sched_debug.cfs_rq[6]:/.tg_load_avg_contrib
11259 ± 20% +113.0% 23985 ± 20% sched_debug.cfs_rq[7]:/.exec_clock
7.50 ± 30% -66.7% 2.50 ± 72% sched_debug.cfs_rq[7]:/.load_avg
357472 ± 18% +76.1% 629483 ± 22% sched_debug.cfs_rq[7]:/.min_vruntime
-5551422 ± -6% -79.4% -1144823 ±-56% sched_debug.cfs_rq[7]:/.spread0
83.75 ± 26% -63.0% 31.00 ± 59% sched_debug.cfs_rq[7]:/.tg_load_avg
7.50 ± 30% -63.3% 2.75 ± 69% sched_debug.cfs_rq[7]:/.tg_load_avg_contrib
1.75 ± 47% +2085.7% 38.25 ± 39% sched_debug.cpu#0.cpu_load[1]
1.50 ± 33% +2900.0% 45.00 ± 37% sched_debug.cpu#0.cpu_load[2]
1.50 ± 33% +3516.7% 54.25 ± 37% sched_debug.cpu#0.cpu_load[3]
0.75 ±110% +7800.0% 59.25 ± 39% sched_debug.cpu#0.cpu_load[4]
694790 ± 8% +171.5% 1886697 ± 85% sched_debug.cpu#0.nr_switches
-17.00 ±-24% -88.2% -2.00 ±-348% sched_debug.cpu#0.nr_uninterruptible
694858 ± 8% +171.5% 1886801 ± 85% sched_debug.cpu#0.sched_count
81331 ± 11% +842.8% 766757 ±111% sched_debug.cpu#0.ttwu_count
6127 ± 2% +9956.2% 616216 ±142% sched_debug.cpu#0.ttwu_local
1.25 ±173% +3700.0% 47.50 ± 31% sched_debug.cpu#1.cpu_load[0]
1.00 ±173% +5825.0% 59.25 ± 37% sched_debug.cpu#1.cpu_load[1]
0.50 ±173% +15550.0% 78.25 ± 50% sched_debug.cpu#1.cpu_load[2]
0.25 ±173% +31600.0% 79.25 ± 47% sched_debug.cpu#1.cpu_load[3]
0.00 ± 0% +Inf% 72.25 ± 37% sched_debug.cpu#1.cpu_load[4]
1.00 ± 81% +4725.0% 48.25 ± 48% sched_debug.cpu#2.cpu_load[0]
1.00 ±122% +7075.0% 71.75 ± 66% sched_debug.cpu#2.cpu_load[1]
1.00 ±122% +7200.0% 73.00 ± 64% sched_debug.cpu#2.cpu_load[2]
1.00 ±122% +6700.0% 68.00 ± 55% sched_debug.cpu#2.cpu_load[3]
0.75 ±110% +8333.3% 63.25 ± 44% sched_debug.cpu#2.cpu_load[4]
639284 ± 7% +47.2% 940710 ± 11% sched_debug.cpu#2.nr_switches
-17.00 ±-55% -142.6% 7.25 ± 82% sched_debug.cpu#2.nr_uninterruptible
639320 ± 7% +47.2% 940777 ± 11% sched_debug.cpu#2.sched_count
316312 ± 7% +18.9% 376081 ± 9% sched_debug.cpu#2.sched_goidle
67699 ± 8% +294.1% 266835 ± 12% sched_debug.cpu#2.ttwu_count
5145 ± 13% +2007.3% 108427 ± 21% sched_debug.cpu#2.ttwu_local
1.50 ± 74% +2666.7% 41.50 ± 72% sched_debug.cpu#3.cpu_load[1]
1.25 ± 87% +3720.0% 47.75 ± 61% sched_debug.cpu#3.cpu_load[2]
1.00 ± 70% +4800.0% 49.00 ± 49% sched_debug.cpu#3.cpu_load[3]
1.00 ± 70% +4825.0% 49.25 ± 40% sched_debug.cpu#3.cpu_load[4]
49730 ± 4% +23.9% 61601 ± 7% sched_debug.cpu#3.nr_load_updates
690552 ± 8% +103.1% 1402384 ± 48% sched_debug.cpu#3.nr_switches
-14.25 ±-22% -100.0% 0.00 ± 10% sched_debug.cpu#3.nr_uninterruptible
690583 ± 8% +103.1% 1402456 ± 48% sched_debug.cpu#3.sched_count
342296 ± 8% +72.4% 590017 ± 56% sched_debug.cpu#3.sched_goidle
92922 ± 8% +452.5% 513410 ± 74% sched_debug.cpu#3.ttwu_count
4391 ± 8% +7913.1% 351853 ±112% sched_debug.cpu#3.ttwu_local
936024 ± 5% -44.0% 524020 ± 41% sched_debug.cpu#4.avg_idle
0.50 ±100% +21500.0% 108.00 ±108% sched_debug.cpu#4.cpu_load[0]
0.50 ±100% +16450.0% 82.75 ± 90% sched_debug.cpu#4.cpu_load[1]
1.00 ± 70% +7175.0% 72.75 ± 66% sched_debug.cpu#4.cpu_load[2]
1.25 ± 87% +5420.0% 69.00 ± 47% sched_debug.cpu#4.cpu_load[3]
1.25 ± 87% +5220.0% 66.50 ± 34% sched_debug.cpu#4.cpu_load[4]
1064 ±173% +528.6% 6693 ± 76% sched_debug.cpu#4.curr->pid
330965 ± 7% +87.0% 618850 ± 22% sched_debug.cpu#4.nr_switches
330992 ± 7% +87.0% 618907 ± 22% sched_debug.cpu#4.sched_count
6158 ± 11% +2116.7% 136520 ± 45% sched_debug.cpu#4.ttwu_local
2.25 ± 65% +1533.3% 36.75 ± 61% sched_debug.cpu#5.cpu_load[1]
1.00 ± 70% +4400.0% 45.00 ± 44% sched_debug.cpu#5.cpu_load[2]
0.75 ± 57% +6833.3% 52.00 ± 36% sched_debug.cpu#5.cpu_load[3]
1.25 ± 34% +4420.0% 56.50 ± 34% sched_debug.cpu#5.cpu_load[4]
319.25 ± 34% -79.0% 67.00 ± 31% sched_debug.cpu#5.load
15.75 ± 36% -61.9% 6.00 ± 88% sched_debug.cpu#5.nr_uninterruptible
972753 ± 4% -43.0% 554163 ± 20% sched_debug.cpu#6.avg_idle
8.00 ± 25% +387.5% 39.00 ± 28% sched_debug.cpu#6.cpu_load[0]
4.00 ± 25% +1350.0% 58.00 ± 43% sched_debug.cpu#6.cpu_load[1]
2.50 ± 20% +2590.0% 67.25 ± 42% sched_debug.cpu#6.cpu_load[2]
1.00 ± 0% +6550.0% 66.50 ± 39% sched_debug.cpu#6.cpu_load[3]
0.50 ±100% +12550.0% 63.25 ± 35% sched_debug.cpu#6.cpu_load[4]
401930 ± 13% +48.7% 597781 ± 16% sched_debug.cpu#6.nr_switches
28.75 ± 13% -89.6% 3.00 ±418% sched_debug.cpu#6.nr_uninterruptible
401955 ± 13% +48.7% 597858 ± 16% sched_debug.cpu#6.sched_count
130918 ± 18% +59.3% 208552 ± 14% sched_debug.cpu#6.sched_goidle
3553 ± 20% +2897.2% 106490 ± 22% sched_debug.cpu#6.ttwu_local
860564 ± 10% -40.1% 515607 ± 21% sched_debug.cpu#7.avg_idle
0.25 ±173% +15100.0% 38.00 ± 54% sched_debug.cpu#7.cpu_load[0]
0.25 ±173% +16400.0% 41.25 ± 54% sched_debug.cpu#7.cpu_load[1]
0.25 ±173% +18600.0% 46.75 ± 55% sched_debug.cpu#7.cpu_load[2]
0.25 ±173% +20100.0% 50.50 ± 52% sched_debug.cpu#7.cpu_load[3]
0.25 ±173% +21000.0% 52.75 ± 42% sched_debug.cpu#7.cpu_load[4]
606.50 ±173% +641.6% 4498 ± 92% sched_debug.cpu#7.curr->pid
62.25 ±173% +290.8% 243.25 ± 88% sched_debug.cpu#7.load
26365 ± 7% +60.8% 42384 ± 8% sched_debug.cpu#7.nr_load_updates
328555 ± 16% +356.1% 1498555 ± 97% sched_debug.cpu#7.nr_switches
328584 ± 16% +356.1% 1498636 ± 97% sched_debug.cpu#7.sched_count
135194 ± 9% +364.7% 628213 ±111% sched_debug.cpu#7.sched_goidle
181468 ± 23% +272.0% 675037 ±111% sched_debug.cpu#7.ttwu_count
3906 ± 23% +14334.8% 563894 ±136% sched_debug.cpu#7.ttwu_local
30.01 ± 46% -91.6% 2.51 ±121% sched_debug.rt_rq[5]:/.rt_time
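Note: the paired cpuidle ".time" and ".usage" rows above are total residency
and entry count per C-state, so time/usage gives the mean stay per entry. A
small sketch with the nhm-white C1 numbers from this table, assuming the
usual microsecond units of the cpuidle sysfs time counters:

# Mean C1 residency per entry, before and after the bisected commit.
c1 = {"base": (72858341, 4164450), "head": (2.598e8, 6329592)}  # (time, usage)
for commit, (time_us, usage) in c1.items():
    print(f"C1-NHM {commit}: {time_us / usage:.1f} us per entry")
# C1-NHM base: 17.5 us per entry
# C1-NHM head: 41.0 us per entry  (longer stays in the shallow state)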
ivb42: Ivytown Ivy Bridge-EP
Memory: 64G
lkp-ne04: Nehalem-EP
Memory: 12G
nhm-white: Nehalem
Memory: 6G
lituya: Grantley Haswell
Memory: 16G
4e+06 ++-----*---------*------------------------------------------------+
*** *.* * * * .* :*** * * * |
3.5e+06 ++: : * * * * ::*.* * :*** |
3e+06 ++: : * * + *** * .* * * * |
| : : * ** * * * * |
2.5e+06 ++: : |
| : : |
2e+06 ++: : |
|OOO O |
1.5e+06 O+ :O O O OO O OO O |
1e+06 ++ : O O O OO OO OO OO O O O
| : O O O O O O O O O O OOO OOOO O O |
500000 ++ : O O O O OO OO O OO O O|
| : |
0 ++-*--------------------------------------------------------------+
last_state.booting
1 ++--------------------------------------------------------------------*
| ..|
| .. |
0.8 ++ . |
| .. |
| .. |
0.6 ++ . |
| .. |
0.4 ++ . |
| .. |
| .. |
0.2 ++ . |
| .. |
| .. |
0 *+---------------------*-----------------------*----------------------+
last_state.is_incomplete_run
1 ++--------------------------------------------------------------------*
| ..|
| .. |
0.8 ++ . |
| .. |
| .. |
0.6 ++ . |
| .. |
0.4 ++ . |
| .. |
| .. |
0.2 ++ . |
| .. |
| .. |
0 *+---------------------*-----------------------*----------------------+
1.8e+07 ++----------------------------------------------------------------+
* * *. * ** * *.** |
1.6e+07 +*: : * *** *.** ***** ******.**** ** .* *** * |
1.4e+07 ++: : * * * * |
| : : |
1.2e+07 ++: : |
1e+07 ++: : OO O O O |
OOOOO OO O O OO OO OO OO O O O O O
8e+06 ++ : O O O O OO O OOOO OO O OOO OOOO OOOO OO O O OO|
6e+06 ++ : O O |
| : |
4e+06 ++ : |
2e+06 ++ : |
| : |
0 ++-*--------------------------------------------------------------+
9e+06 ++------------------------------------------------------------------+
*** **** .*** **.** **.* * * |
8e+06 ++: : ** ** ** * ** *. *** *. * * *. |
7e+06 ++: : * ** * * * ** |
| : : |
6e+06 ++: : |
5e+06 ++: : |
| : : O OO O O |
4e+06 ++ :: O OO O OO O O OO O O O
3e+06 OOOO:O OO O O O OO OO OOO O O O O O OO OOOO OOOO O |
| :: O O O O O O O O O|
2e+06 ++ : |
1e+06 ++ : |
| : |
0 ++-*----------------------------------------------------------------+
6e+06 ++------------------------------------------------------------------+
| |
5e+06 ++ * * |
*** ** ***.***** *.** ***.*** *** * |
| : : * * + * * *. * ***. * |
4e+06 ++: : * ** * * * |
| : : |
3e+06 ++: : |
| : : O OO O O |
2e+06 ++ :: O OO O OO O O OO O O O O
OOOO: OO O O O O OOOOO OOO OO O OOO OOOO OOOO OO O O OO|
| ::OO O |
1e+06 ++ : |
| : |
0 ++-*----------------------------------------------------------------+
6e+06 ++------------------------------------------------------------------+
| * |
5e+06 *+* * + : ** .* **.** |
|*: :***** ** ** *** ***** |
| : : + ** * **. |
4e+06 ++: : ** :**.* ** ** |
| : : * |
3e+06 ++: : |
O O::O O O |
2e+06 +O O: OOOOO OOO O O O OO O O OO O O
| :: O O O OOO OOO OO OOOOOOO OOOOO OOO OOO OO OOO|
| :: |
1e+06 ++ : |
| : |
0 ++-*----------------------------------------------------------------+
3.5e+06 ++----------------------------------------------------------------+
| * |
3e+06 *** *.******* .* :*** **.*** * |
| : : * * * *** * |
2.5e+06 ++: : + *** .* * |
| : : * **** *** ** |
2e+06 ++: : |
| : : |
1.5e+06 ++ :: |
OOOO OO O O O |
1e+06 ++ :O O O O OO OO O OO OO OOO O O O
| : O O O O O O O O O O OOO OOO |
500000 ++ : O O O O OO OO O OO O O O OO|
| : |
0 ++-*--------------------------------------------------------------+
1.2e+06 ++--*------*------------------------------------------------------+
* * :+ * ::*.***** **.* ** * |
1e+06 +*: : * *** * * : * ** *. |
| : : * *** * |
| : : * * .***** * |
800000 ++: : * ** |
| : : |
600000 ++: : |
| O:O O O |
400000 OO O O OOO O OO O O OO OO OO |
| : O O OO O OO OOOO OO OOOOOOOO OOOO OOO OOOOO OOOOO
| : |
200000 ++ : |
| : |
0 ++-*--------------------------------------------------------------+
cpuidle.C1-NHM.time
1.8e+08 ++----------------------------------------------------------------+
| O |
1.6e+08 OOOOO OOO O |
1.4e+08 ++ |
| O OOO OOOO OOOO O OO OOO OOO OO O OO|
1.2e+08 ++ OO O O OOO OO O O OOO OOOOOOOO O O
1e+08 ++ |
| |
8e+07 ++ |
6e+07 ++ * ****.*** ** |
| + *** ** |
4e+07 +* ** * ** * ** |
2e+07 *+* *.*** ***.* *** **.**** * |
| :: |
0 ++-*--------------------------------------------------------------+
cpuidle.C1E-NHM.time
3.5e+07 ++----------------------------------------------------------------+
| |
3e+07 OO O OO OO |
| OO O O O O OO O O
2.5e+07 ++ OO OOOOOOOO OO OOOO OOOOOOOO OOO OOOO OOOOOOO OOOO|
| O |
2e+07 ++ |
| * *. ** * |
1.5e+07 ++ * * .* ***** ** * * |
|** *.********.** *** **.* ** *** |
1e+07 *+: : * * |
| : : |
5e+06 ++ : |
| : |
0 ++-*--------------------------------------------------------------+
cpuidle.C1E-NHM.usage
45000 ++------------------------------------------------------------------+
| O O |
40000 ++ |
35000 OOOO O OO O O OOO OO OO O OO O O OO|
| O O OO O O OOO OO O O O O OO |
30000 ++ O OO O O OO OO O O O OOO O O O
25000 ++ |
| |
20000 ++ |
15000 ++ * .*******.***** .** |
|** ** ***. **** *.** ** .******* * |
10000 *+: : * * * * |
5000 ++ :: |
| :: |
0 ++-*----------------------------------------------------------------+
cpuidle.C3-NHM.time
7e+08 ++---*-*----*----*--------------------------------------------------+
*** :* *** **** *.******.*******. *** **. * ***. * |
6e+08 ++: : * * * * * |
| : : O O O |
5e+08 OOOO:OOOOOO OOOOOOO OOOOOO OOOOOOO OOOOOOO OOOOO OOOOO O OOOO O OOOO
| : : |
4e+08 ++: : |
| : : |
3e+08 ++ :: |
| :: |
2e+08 ++ :: |
| : |
1e+08 ++ : |
| : |
0 ++-*----------------------------------------------------------------+
cpuidle.C6-NHM.usage
400000 ++-----------------------------------------------------------------+
*** ********.*******. ******. ** * * |
350000 ++: : * * *** *.*******.* ** |
| : : |
300000 ++: : |
250000 ++: : |
OOOO:O OO O OO OO OO O O OO O |
200000 ++: : O O OO OO O O O
| :: O O O O OOOOO O O OO OO OOOO OOOO |
150000 ++ :: O O O OO O O OO|
100000 ++ :: |
| : |
50000 ++ : |
| : |
0 ++-*---------------------------------------------------------------+
turbostat.Bzy_MHz
3000 ++-------------------------------------------------------------------+
| |
2500 OOO OOOOOO OO O O O OOO OOOOO OOOOO OOOOOO OOO OO OOO O OOOOO OOO
| O O OO O O O OO O |
| ***.******.****** |
2000 *** *****.******.******.******.*** |
| : : |
1500 ++: : |
| : : |
1000 ++ : : |
| :: |
| :: |
500 ++ : |
| : |
0 ++--*----------------------------------------------------------------+
turbostat.CPU%c1
20 ++---------------------------------------------------------------------+
18 OO OOOOO OO |
| O |
16 ++ OO OOOOO OOOOO OOOOO OOOO OOOOO OOOOO O OOO OOOO OO OO OOO
14 ++ O O O O |
| |
12 ++ |
10 ++ **.*****.*****.*** |
8 ++ * * : |
*** ****.*****.*****.* ***.*** *.** |
6 ++: : |
4 ++ : : |
| :: |
2 ++ : |
0 ++--*------------------------------------------------------------------+
turbostat.CPU%c3
60 ++---------------------------------------------------------------------+
* * *** *** .* ***.* * *. ** *.* |
50 +*: : *.* * * * * * * * .** * *.* |
| : : ** ***.* ** ** |
| : : |
40 ++ : : |
OOO:O: OO O OO OO OO O O OOO O OO O OO O O OO OOOOO OOOOO O
30 ++ : OO O O O O OO OO OOO OO OO O O OO|
| :: |
20 ++ :: |
| :: |
| : |
10 ++ : |
| : |
0 ++--*------------------------------------------------------------------+
turbostat.CPU%c6
60 ++---------------------------------------------------------------------+
| |
50 ++ O O O O O OOOOO OOO OOOO OOOO OOOO OOOO OO OO O OO|
OOO OO OO O O OO OO OO O O O OOO O O O
| * |
40 *** ****.*****.*****.*****.** **.****.** **.*****.*** |
| : : * |
30 ++: : |
| : : |
20 ++ : : |
| :: |
| :: |
10 ++ : |
| : |
0 ++--*------------------------------------------------------------------+
turbostat.Pkg%pc3
60 ++---------------------------------------------------------------------+
| |
50 ++ * .* |
*** :***. **** :***. ***.** **. |
| : : * * ** * ** .* |
40 ++: : **.*****.***** ** |
| : : |
30 ++ : : OOO O OOO OOOOO OOOOO OOOO OOOOO OOOOO OOOOO OOOO OOOOO OOO
| O:OOOO O O |
20 OO :: O OO |
| :: |
| :: |
10 ++ : |
| : |
0 ++--*------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang