[lkp] [x86_64] ef7f0d6a6c: BUG: kernel early-boot crashed early console in setup code
by kernel test robot
FYI, we noticed the below changes on
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit ef7f0d6a6ca8c9e4b27d78895af86c2fbfaeedb2 ("x86_64: add KASan support")
+-----------------------------------------------------------+------------+------------+
| | 786a895991 | ef7f0d6a6c |
+-----------------------------------------------------------+------------+------------+
| boot_successes | 10 | 0 |
| boot_failures | 1 | 12 |
| BUG:kernel_test_crashed | 1 | |
| BUG:kernel_early-boot_crashed_early_console_in_setup_code | 0 | 12 |
+-----------------------------------------------------------+------------+------------+
This may be related to the specific kernel configuration file, which is attached.
early console in setup code
Elapsed time: 20
BUG: kernel early-boot crashed early console in setup code
Linux version 3.19.0-05243-gef7f0d6 #1
Command line: root=/dev/ram0 user=lkp job=/lkp/scheduled/vm-kbuild-4G-7/bisect_boot-1-debian-x86_64-2015-02-07.cgz-x86_64-randconfig-b0-07260806-ef7f0d6a6ca8c9e4b27d78895af86c2fbfaeedb2-20150810-121762-teagdv-0.yaml ARCH=x86_64 kconfig=x86_64-randconfig-b0-07260806 branch=linux-devel/devel-spot-201507260511 commit=ef7f0d6a6ca8c9e4b27d78895af86c2fbfaeedb2 BOOT_IMAGE=/pkg/linux/x86_64-randconfig-b0-07260806/gcc-4.9/ef7f0d6a6ca8c9e4b27d78895af86c2fbfaeedb2/vmlinuz-3.19.0-05243-gef7f0d6 max_uptime=600 RESULT_ROOT=/result/boot/1/vm-kbuild-4G/debian-x86_64-2015-02-07.cgz/x86_64-randconfig-b0-07260806/gcc-4.9/ef7f0d6a6ca8c9e4b27d78895af86c2fbfaeedb2/0 LKP_SERVER=inn earlyprintk=ttyS0,115200 systemd.log_level=err debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal rw ip=::::vm-kbuild-4G-7::dhcp
qemu-system-x86_64 -enable-kvm -cpu qemu64,+ssse3 -kernel /pkg/linux/x86_64-randconfig-b0-07260806/gcc-4.9/ef7f0d6a6ca8c9e4b27d78895af86c2fbfaeedb2/vmlinuz-3.19.0-05243-gef7f0d6 -append 'root=/dev/ram0 user=lkp job=/lkp/scheduled/vm-kbuild-4G-7/bisect_boot-1-debian-x86_64-2015-02-07.cgz-x86_64-randconfig-b0-07260806-ef7f0d6a6ca8c9e4b27d78895af86c2fbfaeedb2-20150810-121762-teagdv-0.yaml ARCH=x86_64 kconfig=x86_64-randconfig-b0-07260806 branch=linux-devel/devel-spot-201507260511 commit=ef7f0d6a6ca8c9e4b27d78895af86c2fbfaeedb2 BOOT_IMAGE=/pkg/linux/x86_64-randconfig-b0-07260806/gcc-4.9/ef7f0d6a6ca8c9e4b27d78895af86c2fbfaeedb2/vmlinuz-3.19.0-05243-gef7f0d6 max_uptime=600 RESULT_ROOT=/result/boot/1/vm-kbuild-4G/debian-x86_64-2015-02-07.cgz/x86_64-randconfig-b0-07260806/gcc-4.9/ef7f0d6a6ca8c9e4b27d78895af86c2fbfaeedb2/0 LKP_SERVER=inn earlyprintk=ttyS0,115200 systemd.log_level=err debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal rw ip=::::vm-kbuild-4G-7::dhcp' -initrd /fs/sde1/initrd-vm-kbuild-4G-7 -m 4096 -smp 4 -device e1000,netdev=net0 -netdev user,id=net0,hostfwd=tcp::23038-:22 -boot order=nc -no-reboot -watchdog i6300esb -rtc base=localtime -drive file=/fs/sde1/disk0-vm-kbuild-4G-7,media=disk,if=virtio -drive file=/fs/sde1/disk1-vm-kbuild-4G-7,media=disk,if=virtio -drive file=/fs/sde1/disk2-vm-kbuild-4G-7,media=disk,if=virtio -drive file=/fs/sde1/disk3-vm-kbuild-4G-7,media=disk,if=virtio -drive file=/fs/sde1/disk4-vm-kbuild-4G-7,media=disk,if=virtio -drive file=/fs/sde1/disk5-vm-kbuild-4G-7,media=disk,if=virtio -drive file=/fs/sde1/disk6-vm-kbuild-4G-7,media=disk,if=virtio -pidfile /dev/shm/kboot/pid-vm-kbuild-4G-7 -serial file:/dev/shm/kboot/serial-vm-kbuild-4G-7 -daemonize -display none -monitor null
Thanks,
Ying Huang
[lkp] [of/platform] 7ec0126d70: WARNING: CPU: 0 PID: 1 at fs/kernfs/dir.c:1253 kernfs_remove_by_name_ns+0x74/0x80()
by kernel test robot
FYI, we noticed the below changes on
git://git.collabora.co.uk/git/user/tomeu/linux.git on-demand-probes-v6
commit 7ec0126d70e7cf5029b717f3b3ecf48ee1d17930 ("of/platform: Point to struct device from device node")
+--------------------------------------------------------+------------+------------+
| | 2ffbf1ddf7 | 7ec0126d70 |
+--------------------------------------------------------+------------+------------+
| boot_successes | 101 | 0 |
| boot_failures | 10 | 21 |
| IP-Config:Auto-configuration_of_network_failed | 10 | 10 |
| WARNING:at_fs/kernfs/dir.c:#kernfs_remove_by_name_ns() | 0 | 21 |
| backtrace:of_unittest | 0 | 21 |
| backtrace:kernel_init_freeable | 0 | 21 |
| WARNING:at_lib/kobject.c:#kobject_put() | 0 | 7 |
| BUG:unable_to_handle_kernel | 0 | 7 |
| Oops | 0 | 7 |
| EIP_is_at_of_platform_device_destroy | 0 | 7 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 7 |
+--------------------------------------------------------+------------+------------+
[ 5.232210] ### dt-test ### FAIL of_unittest_platform_populate():812 device didn't get destroyed 'dev'
[ 5.232926] ### dt-test ### FAIL of_unittest_platform_populate():812 device didn't get destroyed 'dev'
[ 5.235261] ------------[ cut here ]------------
[ 5.235631] WARNING: CPU: 0 PID: 1 at fs/kernfs/dir.c:1253 kernfs_remove_by_name_ns+0x74/0x80()
[ 5.236451] kernfs: can not remove 'driver', no directory
[ 5.236864] CPU: 0 PID: 1 Comm: swapper Tainted: G S 4.2.0-rc6-next-20150810-00011-g7ec0126 #1
[ 5.237610] 94437cb8 94437cb8 94437c8c 81cc5f20 94437ca8 8103fc40 000004e5 811311b4
[ 5.238300] 00000000 8209cf55 823f45c0 94437cc0 8103fcc6 00000009 94437cb8 820aaf54
[ 5.238969] 94437cd4 94437ce4 811311b4 820aaecc 000004e5 820aaf54 8209cf55 8d26c014
[ 5.239662] Call Trace:
[ 5.239859] [<81cc5f20>] dump_stack+0x16/0x18
[ 5.240215] [<8103fc40>] warn_slowpath_common+0x60/0x90
[ 5.240623] [<811311b4>] ? kernfs_remove_by_name_ns+0x74/0x80
[ 5.241073] [<8103fcc6>] warn_slowpath_fmt+0x26/0x30
[ 5.241469] [<811311b4>] kernfs_remove_by_name_ns+0x74/0x80
[ 5.241898] [<811330a1>] sysfs_remove_link+0x11/0x30
[ 5.242310] [<81707ba8>] driver_sysfs_remove+0x28/0x30
[ 5.242710] [<81707c62>] __device_release_driver+0x32/0xf0
[ 5.243141] [<817084aa>] device_release_driver+0x1a/0x30
[ 5.243564] [<8170705b>] bus_remove_device+0xbb/0xf0
[ 5.243948] [<81704c5c>] device_del+0xec/0x1e0
[ 5.244312] [<81cc1e01>] ? klist_next+0x101/0x110
[ 5.244684] [<81709a14>] platform_device_del+0x14/0xa0
[ 5.245092] [<81709acb>] platform_device_unregister+0xb/0x20
[ 5.245547] [<81af1d18>] of_platform_device_destroy+0x68/0x70
[ 5.245991] [<81af2267>] of_platform_notify+0xc7/0x100
[ 5.246408] [<8105753a>] notifier_call_chain+0x2a/0x90
[ 5.246805] [<810578aa>] __blocking_notifier_call_chain+0x2a/0x50
[ 5.247289] [<810578dc>] blocking_notifier_call_chain+0xc/0x10
[ 5.247738] [<81af282e>] of_property_notify+0x2e/0x60
[ 5.248144] [<81af2891>] __of_changeset_entry_notify+0x31/0xc0
[ 5.248615] [<81af3039>] of_changeset_apply+0x49/0xa0
[ 5.249017] [<81af69da>] ? of_overlay_apply_one+0xba/0x210
[ 5.249453] [<81af6d4a>] of_overlay_create+0x1ba/0x330
[ 5.249851] [<81ccd483>] of_unittest_apply_overlay+0x8a/0xf8
[ 5.250349] [<81ccd57b>] of_unittest_apply_overlay_check+0x8a/0x10d
[ 5.250840] [<82529b39>] of_unittest+0x19a4/0x21d1
[ 5.251242] [<810cf7e7>] ? slob_free+0xd7/0x430
[ 5.251598] [<82528195>] ? of_unittest_check_tree_linkage+0x81/0x81
[ 5.252135] [<82528195>] ? of_unittest_check_tree_linkage+0x81/0x81
[ 5.252670] [<824e7b2b>] do_one_initcall+0xcb/0x14d
[ 5.253055] [<824e74c5>] ? repair_env_string+0x12/0x54
[ 5.253463] [<8105645c>] ? parse_args+0x1bc/0x3c0
[ 5.253829] [<824e7c6b>] ? kernel_init_freeable+0xbe/0x15b
[ 5.254278] [<824e7c8b>] kernel_init_freeable+0xde/0x15b
[ 5.254694] [<81cc1ec8>] kernel_init+0x8/0xc0
[ 5.255044] [<81cd2ec0>] ret_from_kernel_thread+0x20/0x30
[ 5.255467] [<81cc1ec0>] ? rest_init+0xb0/0xb0
[ 5.255814] ---[ end trace 9fe8dc290cc801c7 ]---
[ 5.256199] ### dt-test ### FAIL of_unittest_apply_overlay_check():1226 overlay @"/testcase-data/overlay1" failed to create @"/testcase-data/overlay-node/test-bus/test-unittest1" enabled
Thanks,
Ying Huang
[lkp] [sched] d4573c3e1c: -5.9% unixbench.score
by kernel test robot
FYI, we noticed the below changes on
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit d4573c3e1c992668f5dcd57d1c2ced56ae9650b9 ("sched: Improve load balancing in the presence of idle CPUs")
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/test:
nhm-white/unixbench/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/execl
commit:
dfbca41f347997e57048a53755611c8e2d792924
d4573c3e1c992668f5dcd57d1c2ced56ae9650b9
dfbca41f347997e5 d4573c3e1c992668f5dcd57d1c
---------------- --------------------------
%stddev %change %stddev
\ | \
4725 ± 0% -2.7% 4599 ± 0% unixbench.score
2123335 ± 0% -1.7% 2087061 ± 0% unixbench.time.involuntary_context_switches
99575417 ± 0% -2.3% 97252046 ± 0% unixbench.time.minor_page_faults
317.00 ± 0% -2.2% 310.00 ± 0% unixbench.time.percent_of_cpu_this_job_got
515.93 ± 0% -2.3% 504.21 ± 0% unixbench.time.system_time
450501 ± 0% -4.9% 428319 ± 0% unixbench.time.voluntary_context_switches
301368 ± 0% -11.4% 267086 ± 0% softirqs.SCHED
49172197 ± 0% -10.0% 44274425 ± 0% cpuidle.C1E-NHM.time
613281 ± 0% -17.6% 505485 ± 0% cpuidle.C1E-NHM.usage
232695 ± 5% -12.9% 202734 ± 1% latency_stats.sum.call_rwsem_down_write_failed.unlink_file_vma.free_pgtables.unmap_region.do_munmap.mmap_region.do_mmap_pgoff.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.system_call_fastpath
470921 ± 2% -10.7% 420710 ± 1% latency_stats.sum.call_rwsem_down_write_failed.vma_adjust.__split_vma.split_vma.mprotect_fixup.SyS_mprotect.system_call_fastpath
42.74 ± 0% -2.2% 41.81 ± 0% turbostat.%Busy
1232 ± 0% -2.2% 1205 ± 0% turbostat.Avg_MHz
45.75 ± 33% +100.0% 91.50 ± 17% sched_debug.cfs_rq[1]:/.load
1788 ± 22% -64.0% 644.25 ± 61% sched_debug.cfs_rq[3]:/.blocked_load_avg
1950 ± 22% -62.9% 724.00 ± 55% sched_debug.cfs_rq[3]:/.tg_load_contrib
-315.00 ± -5% +13.1% -356.25 ± -5% sched_debug.cpu#0.nr_uninterruptible
69.00 ± 6% +12.3% 77.50 ± 4% sched_debug.cpu#1.cpu_load[3]
45.75 ± 33% +100.0% 91.50 ± 17% sched_debug.cpu#1.load
449022 ± 6% +15.4% 518171 ± 4% sched_debug.cpu#2.avg_idle
624.00 ± 62% +115.9% 1347 ± 26% sched_debug.cpu#2.curr->pid
-403.75 ± -4% +26.9% -512.25 ± -9% sched_debug.cpu#2.nr_uninterruptible
-433.00 ± -4% +18.0% -511.00 ±-11% sched_debug.cpu#3.nr_uninterruptible
315.50 ± 6% +31.5% 415.00 ± 9% sched_debug.cpu#4.nr_uninterruptible
399.00 ± 4% +18.2% 471.75 ± 7% sched_debug.cpu#5.nr_uninterruptible
407.50 ± 0% +18.6% 483.25 ± 4% sched_debug.cpu#6.nr_uninterruptible
402.00 ± 8% +20.8% 485.50 ± 2% sched_debug.cpu#7.nr_uninterruptible
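For readers cross-checking the tables above: the %change column appears to be the relative change of each metric from the parent commit (first column) to the tested commit (second column), and the "+Inf%" entries seen in some latency_stats rows correspond to a zero baseline. A minimal sketch of that arithmetic, assuming this reading of the columns (pct_change is a hypothetical helper, not part of lkp-tests):

```python
def pct_change(old, new):
    # Relative change from the parent commit's value (old) to the
    # tested commit's value (new), in percent. A zero baseline is
    # reported as infinity, matching the "+Inf%" rows (assumption).
    if old == 0:
        return float("inf") if new > 0 else 0.0
    return (new - old) / old * 100.0

# unixbench.score on nhm-white: 4725 -> 4599
print(round(pct_change(4725, 4599), 1))   # -2.7, as in the table
# unixbench.score on lituya/performance: 10886 -> 10249
print(round(pct_change(10886, 10249), 1)) # -5.9, the headline regression
```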
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/test:
lituya/unixbench/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/execl
commit:
dfbca41f347997e57048a53755611c8e2d792924
d4573c3e1c992668f5dcd57d1c2ced56ae9650b9
dfbca41f347997e5 d4573c3e1c992668f5dcd57d1c
---------------- --------------------------
%stddev %change %stddev
\ | \
10886 ± 0% -5.9% 10249 ± 0% unixbench.score
4700905 ± 0% -4.9% 4468392 ± 0% unixbench.time.involuntary_context_switches
2.16e+08 ± 0% -5.4% 2.044e+08 ± 0% unixbench.time.minor_page_faults
554.50 ± 0% -9.9% 499.50 ± 2% unixbench.time.percent_of_cpu_this_job_got
902.06 ± 0% -8.1% 828.84 ± 0% unixbench.time.system_time
192.66 ± 0% -7.6% 177.94 ± 0% unixbench.time.user_time
2695111 ± 0% -10.9% 2400967 ± 3% unixbench.time.voluntary_context_switches
525929 ± 0% -25.9% 389861 ± 1% softirqs.SCHED
2695111 ± 0% -10.9% 2400967 ± 3% time.voluntary_context_switches
121703 ± 0% -6.6% 113648 ± 2% vmstat.system.cs
26498 ± 0% -6.2% 24852 ± 2% vmstat.system.in
6429895 ± 0% -9.8% 5800792 ± 0% cpuidle.C1-HSW.usage
1927178 ± 0% -20.6% 1529231 ± 1% cpuidle.C1E-HSW.usage
50019600 ± 3% +92.3% 96177564 ± 4% cpuidle.C3-HSW.time
721114 ± 2% +40.6% 1013620 ± 4% cpuidle.C3-HSW.usage
928033 ± 2% +13.8% 1056036 ± 2% cpuidle.C6-HSW.usage
36.80 ± 0% -9.6% 33.28 ± 2% turbostat.%Busy
1214 ± 0% -9.5% 1098 ± 2% turbostat.Avg_MHz
0.07 ± 14% +85.7% 0.13 ± 35% turbostat.Pkg%pc2
3.68 ± 10% +85.5% 6.83 ± 35% turbostat.Pkg%pc6
53.85 ± 0% -4.2% 51.61 ± 2% turbostat.PkgWatt
0.00 ± -1% +Inf% 4318915 ±150% latency_stats.avg.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.new_sync_write.vfs_write.SyS_write.system_call_fastpath
11715 ±101% +3928.9% 471985 ±157% latency_stats.avg.wait_on_page_bit.filemap_fdatawait_range.filemap_fdatawait.sync_inodes_sb.sync_inodes_one_sb.iterate_supers.sys_sync.system_call_fastpath
0.00 ± -1% +Inf% 4717643 ±134% latency_stats.max.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.new_sync_write.vfs_write.SyS_write.system_call_fastpath
91245 ±109% +4137.1% 3866124 ±152% latency_stats.max.wait_on_page_bit.filemap_fdatawait_range.filemap_fdatawait.sync_inodes_sb.sync_inodes_one_sb.iterate_supers.sys_sync.system_call_fastpath
0.00 ± -1% +Inf% 4757271 ±133% latency_stats.sum.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.new_sync_write.vfs_write.SyS_write.system_call_fastpath
19114 ±158% -71.4% 5475 ± 67% latency_stats.sum.sync_inodes_sb.sync_inodes_one_sb.iterate_supers.sys_sync.system_call_fastpath
109615 ± 98% +3435.2% 3875132 ±152% latency_stats.sum.wait_on_page_bit.filemap_fdatawait_range.filemap_fdatawait.sync_inodes_sb.sync_inodes_one_sb.iterate_supers.sys_sync.system_call_fastpath
22954 ± 4% -10.8% 20484 ± 9% sched_debug.cfs_rq[0]:/.exec_clock
306157 ± 0% -13.1% 266065 ± 5% sched_debug.cfs_rq[0]:/.min_vruntime
20641 ± 11% -27.2% 15036 ± 20% sched_debug.cfs_rq[10]:/.avg->runnable_avg_sum
19987 ± 4% -11.2% 17751 ± 6% sched_debug.cfs_rq[10]:/.exec_clock
300899 ± 0% -12.5% 263419 ± 5% sched_debug.cfs_rq[10]:/.min_vruntime
452.50 ± 11% -27.6% 327.75 ± 20% sched_debug.cfs_rq[10]:/.tg_runnable_contrib
300940 ± 0% -11.9% 265047 ± 5% sched_debug.cfs_rq[11]:/.min_vruntime
-5222 ±-16% -80.4% -1022 ±-175% sched_debug.cfs_rq[11]:/.spread0
19789 ± 11% -21.2% 15590 ± 16% sched_debug.cfs_rq[12]:/.avg->runnable_avg_sum
20790 ± 3% -14.4% 17801 ± 6% sched_debug.cfs_rq[12]:/.exec_clock
302961 ± 0% -13.0% 263467 ± 5% sched_debug.cfs_rq[12]:/.min_vruntime
432.75 ± 11% -21.3% 340.75 ± 17% sched_debug.cfs_rq[12]:/.tg_runnable_contrib
20451 ± 5% -12.8% 17830 ± 6% sched_debug.cfs_rq[13]:/.exec_clock
302744 ± 1% -12.7% 264381 ± 5% sched_debug.cfs_rq[13]:/.min_vruntime
1.75 ± 47% +171.4% 4.75 ± 40% sched_debug.cfs_rq[13]:/.nr_spread_over
19559 ± 0% -11.0% 17407 ± 4% sched_debug.cfs_rq[14]:/.exec_clock
300081 ± 0% -12.3% 263170 ± 4% sched_debug.cfs_rq[14]:/.min_vruntime
-6082 ±-15% -52.3% -2900 ±-60% sched_debug.cfs_rq[14]:/.spread0
300413 ± 0% -11.7% 265326 ± 4% sched_debug.cfs_rq[15]:/.min_vruntime
-5751 ±-18% -87.0% -745.73 ±-366% sched_debug.cfs_rq[15]:/.spread0
303798 ± 0% -13.1% 264112 ± 4% sched_debug.cfs_rq[1]:/.min_vruntime
84.00 ±100% +229.5% 276.75 ± 63% sched_debug.cfs_rq[1]:/.utilization_load_avg
302656 ± 0% -12.1% 266065 ± 5% sched_debug.cfs_rq[2]:/.min_vruntime
-3502 ±-17% -100.0% -1.05 ±-196837% sched_debug.cfs_rq[2]:/.spread0
19933 ± 3% -13.5% 17238 ± 3% sched_debug.cfs_rq[3]:/.exec_clock
305206 ± 0% -12.8% 265991 ± 4% sched_debug.cfs_rq[3]:/.min_vruntime
20973 ± 6% -24.6% 15805 ± 18% sched_debug.cfs_rq[4]:/.avg->runnable_avg_sum
20068 ± 5% -11.5% 17767 ± 5% sched_debug.cfs_rq[4]:/.exec_clock
305752 ± 0% -12.9% 266399 ± 5% sched_debug.cfs_rq[4]:/.min_vruntime
461.50 ± 6% -25.0% 346.25 ± 18% sched_debug.cfs_rq[4]:/.tg_runnable_contrib
303317 ± 0% -12.6% 264993 ± 5% sched_debug.cfs_rq[5]:/.min_vruntime
-2842 ±-35% -62.2% -1073 ±-105% sched_debug.cfs_rq[5]:/.spread0
20814 ± 9% -21.2% 16410 ± 26% sched_debug.cfs_rq[6]:/.avg->runnable_avg_sum
19473 ± 0% -10.9% 17351 ± 4% sched_debug.cfs_rq[6]:/.exec_clock
304159 ± 0% -12.7% 265678 ± 5% sched_debug.cfs_rq[6]:/.min_vruntime
455.75 ± 9% -21.0% 360.00 ± 26% sched_debug.cfs_rq[6]:/.tg_runnable_contrib
304471 ± 0% -11.9% 268359 ± 4% sched_debug.cfs_rq[7]:/.min_vruntime
298485 ± 0% -13.3% 258901 ± 3% sched_debug.cfs_rq[8]:/.min_vruntime
18.00 ± 29% +356.9% 82.25 ± 51% sched_debug.cfs_rq[8]:/.runnable_load_avg
231.00 ± 30% +116.6% 500.25 ± 20% sched_debug.cfs_rq[8]:/.utilization_load_avg
19913 ± 3% -13.2% 17285 ± 4% sched_debug.cfs_rq[9]:/.exec_clock
14.00 ± 35% +250.0% 49.00 ± 39% sched_debug.cfs_rq[9]:/.load
300167 ± 0% -13.1% 260832 ± 4% sched_debug.cfs_rq[9]:/.min_vruntime
-1710 ± -2% -23.6% -1306 ±-13% sched_debug.cpu#0.nr_uninterruptible
158575 ± 6% -11.5% 140360 ± 1% sched_debug.cpu#1.ttwu_count
553112 ± 72% -45.1% 303486 ± 5% sched_debug.cpu#10.nr_switches
554861 ± 72% -45.1% 304840 ± 5% sched_debug.cpu#10.sched_count
27.00 ± 10% -20.4% 21.50 ± 13% sched_debug.cpu#12.cpu_load[4]
186234 ± 90% -58.3% 77602 ± 7% sched_debug.cpu#12.ttwu_local
20.50 ± 21% +48.8% 30.50 ± 18% sched_debug.cpu#14.cpu_load[1]
21.50 ± 21% +58.1% 34.00 ± 19% sched_debug.cpu#14.cpu_load[2]
22.75 ± 14% +44.0% 32.75 ± 17% sched_debug.cpu#14.cpu_load[3]
109416 ± 5% -13.7% 94393 ± 4% sched_debug.cpu#14.sched_goidle
130439 ± 2% -12.4% 114236 ± 4% sched_debug.cpu#14.ttwu_count
30.50 ±101% +301.6% 122.50 ± 25% sched_debug.cpu#2.cpu_load[0]
27.25 ± 47% +209.2% 84.25 ± 34% sched_debug.cpu#2.cpu_load[1]
37.50 ± 27% -32.7% 25.25 ± 21% sched_debug.cpu#4.cpu_load[4]
17.25 ± 43% +362.3% 79.75 ± 56% sched_debug.cpu#4.load
57816 ± 15% -15.3% 48950 ± 2% sched_debug.cpu#4.nr_load_updates
1220774 ±121% -73.1% 328817 ± 6% sched_debug.cpu#4.nr_switches
1222807 ±121% -73.0% 330486 ± 6% sched_debug.cpu#4.sched_count
542794 ±133% -78.8% 115250 ± 5% sched_debug.cpu#4.sched_goidle
588622 ±125% -76.2% 140136 ± 7% sched_debug.cpu#4.ttwu_count
522443 ±142% -84.6% 80605 ± 6% sched_debug.cpu#4.ttwu_local
345322 ± 1% -7.4% 319747 ± 4% sched_debug.cpu#7.nr_switches
347310 ± 1% -7.5% 321411 ± 4% sched_debug.cpu#7.sched_count
120552 ± 2% -9.4% 109240 ± 4% sched_debug.cpu#7.sched_goidle
14.00 ± 35% +233.9% 46.75 ± 36% sched_debug.cpu#9.load
136971 ± 10% -15.1% 116346 ± 1% sched_debug.cpu#9.ttwu_count
0.14 ± 57% +411.4% 0.72 ± 73% sched_debug.rt_rq[1]:/.rt_time
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/test:
lituya/unixbench/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/powersave/execl
commit:
dfbca41f347997e57048a53755611c8e2d792924
d4573c3e1c992668f5dcd57d1c2ced56ae9650b9
dfbca41f347997e5 d4573c3e1c992668f5dcd57d1c
---------------- --------------------------
%stddev %change %stddev
\ | \
10563 ± 1% -5.8% 9952 ± 0% unixbench.score
4414071 ± 1% -5.2% 4184851 ± 0% unixbench.time.involuntary_context_switches
2.028e+08 ± 1% -5.5% 1.917e+08 ± 0% unixbench.time.minor_page_faults
540.75 ± 1% -7.2% 502.00 ± 0% unixbench.time.percent_of_cpu_this_job_got
882.57 ± 1% -7.4% 816.95 ± 0% unixbench.time.system_time
188.00 ± 1% -7.0% 174.82 ± 0% unixbench.time.user_time
2858074 ± 3% -8.8% 2605521 ± 0% unixbench.time.voluntary_context_switches
511610 ± 1% -25.5% 381032 ± 0% softirqs.SCHED
2858074 ± 3% -8.8% 2605521 ± 0% time.voluntary_context_switches
118783 ± 0% -4.6% 113276 ± 0% vmstat.system.cs
25883 ± 0% -4.3% 24765 ± 0% vmstat.system.in
683238 ± 2% -9.4% 619110 ± 0% latency_stats.hits.call_rwsem_down_write_failed.vma_adjust.__split_vma.split_vma.mprotect_fixup.SyS_mprotect.system_call_fastpath
96234 ± 5% -11.9% 84756 ± 1% latency_stats.hits.wait_on_page_bit_killable.__lock_page_or_retry.filemap_fault.__do_fault.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
191410 ± 5% -13.4% 165713 ± 1% latency_stats.hits.wait_on_page_bit_killable.__lock_page_or_retry.filemap_fault.__do_fault.handle_pte_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault.clear_user.padzero.load_elf_binary
36.16 ± 1% -6.9% 33.68 ± 0% turbostat.%Busy
1148 ± 1% -7.1% 1066 ± 0% turbostat.Avg_MHz
50.93 ± 0% -1.4% 50.21 ± 0% turbostat.PkgWatt
1928029 ± 0% -20.0% 1543361 ± 0% cpuidle.C1E-HSW.usage
55003420 ± 8% +78.9% 98419263 ± 1% cpuidle.C3-HSW.time
799437 ± 7% +33.2% 1064466 ± 1% cpuidle.C3-HSW.usage
873657 ± 3% +11.9% 977668 ± 1% cpuidle.C6-HSW.usage
19945 ± 2% -7.9% 18369 ± 0% sched_debug.cfs_rq[13]:/.exec_clock
21825 ± 10% -18.4% 17818 ± 1% sched_debug.cfs_rq[14]:/.avg->runnable_avg_sum
478.50 ± 10% -18.6% 389.50 ± 1% sched_debug.cfs_rq[14]:/.tg_runnable_contrib
942.75 ± 31% -70.3% 280.00 ± 62% sched_debug.cfs_rq[15]:/.blocked_load_avg
958.25 ± 31% -68.3% 303.75 ± 62% sched_debug.cfs_rq[15]:/.tg_load_contrib
20260 ± 7% -9.5% 18339 ± 1% sched_debug.cfs_rq[2]:/.exec_clock
240.25 ± 43% +185.7% 686.50 ± 49% sched_debug.cfs_rq[2]:/.utilization_load_avg
18628 ± 2% -9.2% 16914 ± 5% sched_debug.cfs_rq[4]:/.avg->runnable_avg_sum
405.75 ± 1% -8.7% 370.50 ± 5% sched_debug.cfs_rq[4]:/.tg_runnable_contrib
19914 ± 4% -12.0% 17521 ± 2% sched_debug.cfs_rq[5]:/.exec_clock
-2928 ±-101% -146.4% 1357 ± 75% sched_debug.cfs_rq[6]:/.spread0
20113 ± 2% -8.2% 18458 ± 3% sched_debug.cfs_rq[7]:/.exec_clock
-2088 ±-11% -19.0% -1691 ± -3% sched_debug.cpu#0.nr_uninterruptible
14.25 ± 90% +452.6% 78.75 ± 66% sched_debug.cpu#1.cpu_load[0]
1057371 ±117% -70.1% 316240 ± 2% sched_debug.cpu#1.nr_switches
1059402 ±116% -70.0% 317990 ± 2% sched_debug.cpu#1.sched_count
472923 ±128% -76.4% 111412 ± 3% sched_debug.cpu#1.sched_goidle
507097 ±122% -72.5% 139349 ± 3% sched_debug.cpu#1.ttwu_count
27.50 ± 12% +53.6% 42.25 ± 23% sched_debug.cpu#11.cpu_load[3]
28.00 ± 11% +29.5% 36.25 ± 12% sched_debug.cpu#11.cpu_load[4]
603068 ± 3% -23.4% 462150 ± 20% sched_debug.cpu#13.avg_idle
436039 ± 23% +29.1% 563133 ± 2% sched_debug.cpu#14.avg_idle
402209 ±116% -81.9% 72660 ± 3% sched_debug.cpu#14.ttwu_local
-2215 ±-11% +13.7% -2519 ± -3% sched_debug.cpu#2.nr_uninterruptible
41.25 ± 26% -35.2% 26.75 ± 10% sched_debug.cpu#3.cpu_load[2]
34.00 ±117% +242.6% 116.50 ± 79% sched_debug.cpu#4.load
-2266 ±-13% +16.9% -2648 ± -2% sched_debug.cpu#4.nr_uninterruptible
884.50 ± 33% +44.8% 1280 ± 14% sched_debug.cpu#5.curr->pid
-2219 ± -7% +11.4% -2472 ± -3% sched_debug.cpu#5.nr_uninterruptible
55415 ± 20% -19.8% 44460 ± 0% sched_debug.cpu#6.nr_load_updates
-2350 ± -8% +14.2% -2684 ± -2% sched_debug.cpu#6.nr_uninterruptible
847343 ± 91% -84.0% 135604 ± 2% sched_debug.cpu#6.ttwu_count
784793 ± 99% -90.3% 75880 ± 3% sched_debug.cpu#6.ttwu_local
39.00 ± 14% -30.1% 27.25 ± 23% sched_debug.cpu#7.cpu_load[3]
35.00 ± 11% -22.1% 27.25 ± 12% sched_debug.cpu#7.cpu_load[4]
128907 ± 8% -10.8% 114951 ± 2% sched_debug.cpu#8.ttwu_count
624669 ± 88% -56.2% 273343 ± 0% sched_debug.cpu#9.nr_switches
626375 ± 88% -56.1% 274746 ± 0% sched_debug.cpu#9.sched_count
262109 ±102% -64.9% 92004 ± 1% sched_debug.cpu#9.sched_goidle
2.11 ± 4% +18.9% 2.50 ± 5% sched_debug.rt_rq[0]:/.rt_time
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/test:
lituya/unixbench/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/powersave/shell1
commit:
dfbca41f347997e57048a53755611c8e2d792924
d4573c3e1c992668f5dcd57d1c2ced56ae9650b9
dfbca41f347997e5 d4573c3e1c992668f5dcd57d1c
---------------- --------------------------
%stddev %change %stddev
\ | \
5132876 ± 0% +2.0% 5236374 ± 0% unixbench.time.involuntary_context_switches
1571 ± 0% +1.2% 1591 ± 0% unixbench.time.system_time
795.19 ± 0% +1.3% 805.33 ± 0% unixbench.time.user_time
932469 ± 0% -12.7% 813874 ± 0% softirqs.SCHED
18660 ± 0% +1.7% 18975 ± 0% vmstat.system.in
15466776 ± 1% -5.1% 14675901 ± 0% latency_stats.sum.call_rwsem_down_write_failed.unlink_file_vma.free_pgtables.exit_mmap.mmput.flush_old_exec.load_elf_binary.search_binary_handler.do_execveat_common.SyS_execve.stub_execve
5322533 ± 1% -8.5% 4868555 ± 1% latency_stats.sum.call_rwsem_down_write_failed.vma_adjust.__split_vma.do_munmap.mmap_region.do_mmap_pgoff.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.system_call_fastpath
21589512 ± 1% -7.4% 19993473 ± 1% latency_stats.sum.call_rwsem_down_write_failed.vma_adjust.__split_vma.split_vma.mprotect_fixup.SyS_mprotect.system_call_fastpath
9791019 ± 1% -8.2% 8988996 ± 1% latency_stats.sum.call_rwsem_down_write_failed.vma_link.mmap_region.do_mmap_pgoff.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.system_call_fastpath
4.135e+08 ± 0% -3.0% 4.012e+08 ± 0% latency_stats.sum.sigsuspend.SyS_rt_sigsuspend.system_call_fastpath
24904 ± 4% -25.2% 18617 ± 11% sched_debug.cfs_rq[0]:/.blocked_load_avg
8.75 ± 58% +508.6% 53.25 ±106% sched_debug.cfs_rq[0]:/.load
391160 ± 1% -11.4% 346400 ± 1% sched_debug.cfs_rq[0]:/.tg_load_avg
25100 ± 4% -25.2% 18770 ± 11% sched_debug.cfs_rq[0]:/.tg_load_contrib
390600 ± 1% -11.1% 347123 ± 2% sched_debug.cfs_rq[10]:/.tg_load_avg
391242 ± 1% -11.7% 345453 ± 1% sched_debug.cfs_rq[11]:/.tg_load_avg
390931 ± 1% -11.5% 345958 ± 1% sched_debug.cfs_rq[12]:/.tg_load_avg
23834 ± 5% -17.3% 19701 ± 11% sched_debug.cfs_rq[13]:/.blocked_load_avg
390601 ± 1% -11.7% 345060 ± 2% sched_debug.cfs_rq[13]:/.tg_load_avg
24080 ± 5% -17.7% 19823 ± 11% sched_debug.cfs_rq[13]:/.tg_load_contrib
390376 ± 1% -11.5% 345360 ± 2% sched_debug.cfs_rq[14]:/.tg_load_avg
23210 ± 5% -12.3% 20347 ± 4% sched_debug.cfs_rq[15]:/.blocked_load_avg
390137 ± 0% -11.4% 345480 ± 2% sched_debug.cfs_rq[15]:/.tg_load_avg
23345 ± 5% -12.4% 20458 ± 4% sched_debug.cfs_rq[15]:/.tg_load_contrib
390654 ± 1% -11.4% 345993 ± 2% sched_debug.cfs_rq[1]:/.tg_load_avg
24857 ± 5% -16.0% 20876 ± 19% sched_debug.cfs_rq[1]:/.tg_load_contrib
391340 ± 1% -11.6% 346023 ± 2% sched_debug.cfs_rq[2]:/.tg_load_avg
26275 ± 5% -15.6% 22180 ± 19% sched_debug.cfs_rq[3]:/.blocked_load_avg
391666 ± 1% -11.7% 345867 ± 2% sched_debug.cfs_rq[3]:/.tg_load_avg
26369 ± 5% -15.5% 22271 ± 19% sched_debug.cfs_rq[3]:/.tg_load_contrib
391430 ± 1% -11.5% 346427 ± 2% sched_debug.cfs_rq[4]:/.tg_load_avg
391321 ± 1% -11.5% 346235 ± 2% sched_debug.cfs_rq[5]:/.tg_load_avg
25744 ± 3% -17.8% 21156 ± 14% sched_debug.cfs_rq[6]:/.blocked_load_avg
1.00 ± 70% +225.0% 3.25 ± 25% sched_debug.cfs_rq[6]:/.nr_spread_over
389932 ± 1% -11.1% 346764 ± 2% sched_debug.cfs_rq[6]:/.tg_load_avg
25873 ± 3% -17.6% 21329 ± 14% sched_debug.cfs_rq[6]:/.tg_load_contrib
389907 ± 1% -11.0% 346962 ± 2% sched_debug.cfs_rq[7]:/.tg_load_avg
23576 ± 4% -20.0% 18853 ± 10% sched_debug.cfs_rq[8]:/.blocked_load_avg
390564 ± 1% -11.1% 347109 ± 2% sched_debug.cfs_rq[8]:/.tg_load_avg
23775 ± 3% -20.3% 18937 ± 10% sched_debug.cfs_rq[8]:/.tg_load_contrib
391152 ± 1% -11.2% 347502 ± 2% sched_debug.cfs_rq[9]:/.tg_load_avg
450623 ± 16% +28.9% 580722 ± 4% sched_debug.cpu#0.avg_idle
29.75 ± 66% -74.8% 7.50 ± 56% sched_debug.cpu#12.load
540.50 ± 6% -15.5% 456.75 ± 3% sched_debug.cpu#14.nr_uninterruptible
88.25 ± 72% -83.9% 14.25 ± 27% sched_debug.cpu#2.cpu_load[0]
61.00 ± 50% -69.3% 18.75 ± 24% sched_debug.cpu#2.cpu_load[1]
46.00 ± 26% -37.5% 28.75 ± 18% sched_debug.cpu#3.cpu_load[2]
30.00 ± 21% +43.3% 43.00 ± 31% sched_debug.cpu#6.cpu_load[2]
21361 ± 16% +72.5% 36844 ± 39% sched_debug.cpu#8.curr->pid
nhm-white: Nehalem
Memory: 6G
lituya: Grantley Haswell
Memory: 16G
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang
[lkp] [platform] 7b623a4d85f: INFO: possible recursive locking detected ]
by kernel test robot
FYI, we noticed the below changes on
git://git.infradead.org/users/dvhart/linux-platform-drivers-x86.git for-review
commit 7b623a4d85fe7b8d0b4bf24f04a783420b3dbf1a ("platform:x86: add Intel Punit mailbox IPC driver")
The following new messages in the kernel log may confuse end users.
[ 0.448516] intel_punit_ipc PNP0103:00: Could not get irq number
[ 0.449541] intel_punit_ipc PNP0103:00: Failed to get iomem resource1
[ 0.450666] intel_punit_ipc: probe of PNP0103:00 failed with error -22
Thanks,
Ying Huang
[lkp] [Yama] 730daa164e7: INFO: suspicious RCU usage. ]
by kernel test robot
FYI, we noticed the below changes on
git://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit 730daa164e7c7e31c08fab940549f4acc3329432 ("Yama: remove needless CONFIG_SECURITY_YAMA_STACKED")
+------------------------------------------------+------------+------------+
| | fe6c59dc17 | 730daa164e |
+------------------------------------------------+------------+------------+
| boot_successes | 21 | 0 |
| boot_failures | 10 | 21 |
| IP-Config:Auto-configuration_of_network_failed | 10 | 10 |
| INFO:suspicious_RCU_usage | 0 | 21 |
| backtrace:inet_ctl_sock_create | 0 | 21 |
| backtrace:dccp_v4_init_net | 0 | 21 |
| backtrace:ops_init | 0 | 21 |
| backtrace:register_pernet_subsys | 0 | 21 |
| backtrace:dccp_v4_init | 0 | 21 |
| backtrace:kernel_init_freeable | 0 | 21 |
+------------------------------------------------+------------+------------+
[ 3.532320] ===============================
[ 3.532320] ===============================
[ 3.532958] [ INFO: suspicious RCU usage. ]
[ 3.532958] [ INFO: suspicious RCU usage. ]
[ 3.533598] 4.2.0-rc3-00005-g730daa1 #2 Not tainted
[ 3.533598] 4.2.0-rc3-00005-g730daa1 #2 Not tainted
[ 3.534347] -------------------------------
[ 3.534347] -------------------------------
[ 3.534964] net/ipv4/cipso_ipv4.c:1936 suspicious rcu_dereference_protected() usage!
[ 3.534964] net/ipv4/cipso_ipv4.c:1936 suspicious rcu_dereference_protected() usage!
[ 3.536386]
[ 3.536386] other info that might help us debug this:
[ 3.536386]
[ 3.537593]
[ 3.537593] rcu_scheduler_active = 1, debug_locks = 0
[ 3.538550] 3 locks held by swapper/1:
[ 3.539131] #0: (net_mutex){+.+.+.}, at: [<412e3d44>] register_pernet_subsys+0x17/0x2f
[ 3.540430] #1: (slock-AF_INET/1){+.....}, at: [<4116c83d>] smack_netlabel+0x37/0x81
[ 3.541659] #2: (rcu_read_lock){......}, at: [<413adb4a>] rcu_read_lock+0x0/0x5d
[ 3.542841]
[ 3.542841] stack backtrace:
[ 3.543506] CPU: 0 PID: 1 Comm: swapper Not tainted 4.2.0-rc3-00005-g730daa1 #2
[ 3.544585] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140531_083030-gandalf 04/01/2014
[ 3.546106] 00000000 00000001 4005fddc 413c5dcc 4005fdf8 410694c2 4158ab66 4005c040
[ 3.547340] 4c3f9740 519ae1cb 519ae180 4005fe1c 41364911 0000000b 0000000c 519ae1c0
[ 3.548571] 519ae180 ffffffa7 4f34f660 4c3f9740 4005fe34 413ae46d 417aa958 4c3f9740
[ 3.549853] Call Trace:
[ 3.550244] [<413c5dcc>] dump_stack+0x16/0x18
[ 3.550925] [<410694c2>] lockdep_rcu_suspicious+0xc4/0xcd
[ 3.551835] [<41364911>] cipso_v4_sock_setattr+0xc4/0x13c
[ 3.552717] [<413ae46d>] netlbl_sock_setattr+0x65/0x7b
[ 3.553508] [<4116c867>] smack_netlabel+0x61/0x81
[ 3.554212] [<4116c8e9>] smack_socket_post_create+0x62/0x67
[ 3.555088] [<4116b1c2>] security_socket_post_create+0x31/0x45
[ 3.555958] [<412d6313>] __sock_create+0x18a/0x1a3
[ 3.556710] [<412d6634>] sock_create_kern+0x15/0x17
[ 3.557451] [<41351b71>] inet_ctl_sock_create+0x24/0x4f
[ 3.558259] [<4185d98c>] dccp_v4_init_net+0x26/0x30
[ 3.558992] [<412e3c7f>] ops_init+0x129/0x14f
[ 3.559844] [<410422f2>] ? __local_bh_enable_ip+0x10f/0x116
[ 3.560709] [<412e3d00>] register_pernet_operations+0x5b/0x88
[ 3.561702] [<4185d996>] ? dccp_v4_init_net+0x30/0x30
[ 3.562462] [<412e3d4b>] register_pernet_subsys+0x1e/0x2f
[ 3.563294] [<4185d9d8>] dccp_v4_init+0x42/0x70
[ 3.563994] [<4100053f>] do_one_initcall+0x18b/0x19a
[ 3.564791] [<41822472>] ? repair_env_string+0x12/0x54
[ 3.565595] [<41056ba7>] ? parse_args+0x191/0x26f
[ 3.566326] [<41822cea>] kernel_init_freeable+0x18d/0x205
[ 3.567180] [<413c2731>] kernel_init+0xd/0xb5
[ 3.567845] [<413cbf40>] ret_from_kernel_thread+0x20/0x30
[ 3.568652] [<413c2724>] ? rest_init+0x113/0x113
Thanks,
Ying Huang
[lkp] [rhashtable] 9d901bc0515: WARNING: CPU: 0 PID: 1 at arch/x86/mm/ioremap.c:63 __ioremap_check_ram+0x6a/0x99()
by kernel test robot
FYI, we noticed the below changes on
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit 9d901bc05153bbf33b5da2cd6266865e531f0545 ("rhashtable: Free bucket tables asynchronously after rehash")
With this commit, the probability of OOM during our boot testing increases.
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/sleep:
vm-kbuild-yocto-i386/boot/yocto-minimal-i386.cgz/i386-randconfig-b0-08160805/gcc-4.9/1
commit:
5269b53da4d432b0fbf755bd423c807bf6bd4aa0
9d901bc05153bbf33b5da2cd6266865e531f0545
5269b53da4d432b0 9d901bc05153bbf33b5da2cd62
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
:70 23% 16:70 kmsg.Kernel_Offset:disabled
:70 23% 16:70 dmesg.Kernel_panic-not_syncing:Out_of_memory_and_no_killable_processes
:70 23% 16:70 dmesg.invoked_oom-killer:gfp_mask=0x
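The %reproduction column above can be read as the change in failure rate between the two commits; a minimal sketch of that arithmetic (the exact rounding LKP uses is an assumption):

```ruby
# Hypothetical helper mirroring the fail:runs columns above: 0 failures in 70
# runs on the parent commit vs 16 failures in 70 runs with the commit.
def reproduction_pct(old_fails, old_runs, new_fails, new_runs)
  # change in failure fraction, expressed as a rounded percentage
  ((new_fails.to_f / new_runs - old_fails.to_f / old_runs) * 100).round
end

puts reproduction_pct(0, 70, 16, 70)  # -> 23, matching the 23% column
```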
Thanks,
Ying Huang
How to run the Linux Kernel Performance tests well
by xly
Hi,
Ying, Fengguang and Tim
I am with the OS Lab at Tsinghua University.
I run lkp-tests on Ubuntu 14.04 and ran into a few problems:
1) After I run the command "lkp split-job $LKP_SRC/jobs/hackbench.yaml", I get the following error message. How should I fix it?
/usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require': cannot load such file -- /home/chy/lkp-tests/lib/assert (LoadError)
from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
from /home/chy/lkp-tests/lib/lkp_git.rb:15:in `<top (required)>'
from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
from /home/chy/lkp-tests/lib/result.rb:6:in `<top (required)>'
from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
from /home/chy/lkp-tests/lib/job.rb:7:in `<top (required)>'
from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
from /usr/lib/ruby/1.9.1/rubygems/custom_require.rb:36:in `require'
from /home/chy/lkp-tests/sbin/split-job:5:in `<main>'
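The LoadError above means Ruby could not resolve the required path /home/chy/lkp-tests/lib/assert (commonly a missing lib/assert.rb in the checkout, or the old Ruby 1.9.1 resolving the path differently — which applies here is not certain from the trace alone). A minimal reproduction of the error class, with a deliberately nonexistent path:

```ruby
# `require` with an absolute path tries path.rb, path.so, etc.; when none
# exists it raises LoadError with the same message shape seen in the trace.
begin
  require '/nonexistent/lib/assert'
rescue LoadError => e
  puts e.message  # -> cannot load such file -- /nonexistent/lib/assert
end
```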
2) After running the command "lkp result hackbench" to check the results, there are a lot of files in the directory "/result/hackbench/xxx/ubuntu/defconfig/gcc-4.8/3.13.0-24-generic/0/".
How should I analyze these files?
Thanks for the Linux Kernel Performance tests project!
from xly
[lkp] [x86/entry/64] fa58aafc448: 10.8% aim7.jobs-per-min
by kernel test robot
FYI, we noticed the below changes on
git://git.kernel.org/pub/scm/linux/kernel/git/luto/linux.git x86/entry
commit fa58aafc44805ac425d17c6a8082513b5442ce9d ("x86/entry/64: When returning via SYSRET, POP regs instead of using MOV")
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/load/test:
lkp-a06/aim7/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/4000/new_raph
commit:
a4be9881623375fd126762af65ef18dc8175c68d
fa58aafc44805ac425d17c6a8082513b5442ce9d
a4be9881623375fd fa58aafc44805ac425d17c6a80
---------------- --------------------------
%stddev %change %stddev
\ | \
184099 ± 0% +10.8% 204000 ± 0% aim7.jobs-per-min
131.89 ± 0% -9.8% 119.00 ± 0% aim7.time.elapsed_time
131.89 ± 0% -9.8% 119.00 ± 0% aim7.time.elapsed_time.max
2215262 ± 0% -92.5% 165275 ± 0% aim7.time.involuntary_context_switches
19.56 ± 1% -65.8% 6.70 ± 5% aim7.time.system_time
435.63 ± 0% -2.8% 423.34 ± 0% aim7.time.user_time
60385 ± 1% -17.3% 49927 ± 0% aim7.time.voluntary_context_switches
131.89 ± 0% -9.8% 119.00 ± 0% time.elapsed_time
131.89 ± 0% -9.8% 119.00 ± 0% time.elapsed_time.max
2215262 ± 0% -92.5% 165275 ± 0% time.involuntary_context_switches
19.56 ± 1% -65.8% 6.70 ± 5% time.system_time
60385 ± 1% -17.3% 49927 ± 0% time.voluntary_context_switches
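The comparison tables pair each commit's mean (with ± relative stddev) and the %change between them; a sketch of the %change arithmetic, using the aim7.jobs-per-min row above:

```ruby
# Relative change of the patched commit's mean against the parent's mean.
def pct_change(base, new)
  (new - base) / base.to_f * 100
end

# 184099 jobs/min on the parent commit vs 204000 with the patch
puts format('%+.1f%%', pct_change(184_099, 204_000))  # -> +10.8%
```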
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/load/test:
lkp-a06/aim7/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/4000/pipe_cpy
commit:
a4be9881623375fd126762af65ef18dc8175c68d
fa58aafc44805ac425d17c6a8082513b5442ce9d
a4be9881623375fd fa58aafc44805ac425d17c6a80
---------------- --------------------------
%stddev %change %stddev
\ | \
247245 ± 1% +15.6% 285751 ± 1% aim7.jobs-per-min
98.61 ± 1% -13.4% 85.37 ± 1% aim7.time.elapsed_time
98.61 ± 1% -13.4% 85.37 ± 1% aim7.time.elapsed_time.max
2003598 ± 0% -93.3% 133967 ± 2% aim7.time.involuntary_context_switches
266.80 ± 1% -7.1% 247.73 ± 1% aim7.time.system_time
51.41 ± 4% -11.8% 45.32 ± 7% aim7.time.user_time
53934 ± 1% -21.5% 42329 ± 1% aim7.time.voluntary_context_switches
98.61 ± 1% -13.4% 85.37 ± 1% time.elapsed_time
98.61 ± 1% -13.4% 85.37 ± 1% time.elapsed_time.max
2003598 ± 0% -93.3% 133967 ± 2% time.involuntary_context_switches
51.41 ± 4% -11.8% 45.32 ± 7% time.user_time
53934 ± 1% -21.5% 42329 ± 1% time.voluntary_context_switches
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/nr_threads:
lkp-a06/dbench/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/100%
commit:
a4be9881623375fd126762af65ef18dc8175c68d
fa58aafc44805ac425d17c6a8082513b5442ce9d
a4be9881623375fd fa58aafc44805ac425d17c6a80
---------------- --------------------------
%stddev %change %stddev
\ | \
2245160 ± 8% -76.6% 526406 ± 4% dbench.time.involuntary_context_switches
379.50 ± 0% +1.3% 384.50 ± 0% dbench.time.percent_of_cpu_this_job_got
1715 ± 0% +1.7% 1745 ± 0% dbench.time.system_time
2245160 ± 8% -76.6% 526406 ± 4% time.involuntary_context_switches
2.69 ± 11% +81.5% 4.88 ± 37% perf-profile.cpu-cycles.__hrtimer_run_queues.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
1.43 ± 2% -10.1% 1.29 ± 2% perf-profile.cpu-cycles.entry_SYSCALL_64_after_swapgs
1.51 ± 8% -26.2% 1.11 ± 10% perf-profile.cpu-cycles.rcu_nocb_kthread.kthread.ret_from_fork
1.20 ± 15% +109.4% 2.51 ± 46% perf-profile.cpu-cycles.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
0.68 ± 16% +110.7% 1.43 ± 47% perf-profile.cpu-cycles.task_tick_fair.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer
1.97 ± 11% +96.4% 3.87 ± 40% perf-profile.cpu-cycles.tick_sched_handle.isra.17.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.local_apic_timer_interrupt
2.33 ± 10% +84.8% 4.30 ± 38% perf-profile.cpu-cycles.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt
1.90 ± 11% +96.3% 3.72 ± 41% perf-profile.cpu-cycles.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
1.07 ± 2% -9.3% 0.97 ± 3% perf-profile.cpu-cycles.vfs_create.path_openat.do_filp_open.do_sys_open.sys_open
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/iterations/nr_threads/disk/fs/filesize/test_size/sync_method/nr_directories/nr_files_per_directory:
nhm4/fsmark/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/1x/32t/1HDD/ext4/8K/400M/fsyncBeforeClose/16d/256fpd
commit:
a4be9881623375fd126762af65ef18dc8175c68d
fa58aafc44805ac425d17c6a8082513b5442ce9d
a4be9881623375fd fa58aafc44805ac425d17c6a80
---------------- --------------------------
%stddev %change %stddev
\ | \
10935 ± 7% -38.1% 6768 ± 1% fsmark.time.involuntary_context_switches
2989 ± 0% +1.2% 3026 ± 0% fsmark.time.maximum_resident_set_size
10935 ± 7% -38.1% 6768 ± 1% time.involuntary_context_switches
29861 ± 3% -86.7% 3970 ± 1% vmstat.system.cs
13362 ± 3% -97.0% 405.25 ± 1% vmstat.system.in
76414335 ± 1% -55.4% 34106888 ± 4% cpuidle.C1-NHM.time
4836217 ± 0% -92.9% 344308 ± 4% cpuidle.C1-NHM.usage
1310 ± 4% -96.7% 43.00 ± 10% cpuidle.POLL.usage
1.32 ± 2% -43.9% 0.74 ± 0% turbostat.%Busy
39.25 ± 2% -51.6% 19.00 ± 0% turbostat.Avg_MHz
2985 ± 0% -15.9% 2512 ± 0% turbostat.Bzy_MHz
7.68 ± 5% -42.2% 4.44 ± 3% turbostat.CPU%c1
0.00 ± -1% +Inf% 20233 ±125% latency_stats.avg.submit_bio_wait.blkdev_issue_flush.jbd2_cleanup_journal_tail.jbd2_log_do_checkpoint.__jbd2_log_wait_for_space.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.__ext4_new_inode.ext4_mkdir.vfs_mkdir.SyS_mkdir
4866 ± 28% +42.4% 6930 ±141% latency_stats.max.do_get_write_access.jbd2_journal_get_write_access.__ext4_journal_get_write_access.ext4_reserve_inode_write.ext4_mark_inode_dirty.ext4_dirty_inode.__mark_inode_dirty.generic_update_time.file_update_time.__generic_file_write_iter.ext4_file_write_iter.__vfs_write
8314 ± 73% +365.2% 38680 ± 69% latency_stats.max.do_get_write_access.jbd2_journal_get_write_access.__ext4_journal_get_write_access.ext4_reserve_inode_write.ext4_mark_inode_dirty.ext4_dirty_inode.__mark_inode_dirty.generic_write_end.ext4_da_write_end.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter
0.00 ± -1% +Inf% 24939 ±105% latency_stats.max.submit_bio_wait.blkdev_issue_flush.jbd2_cleanup_journal_tail.jbd2_log_do_checkpoint.__jbd2_log_wait_for_space.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.__ext4_new_inode.ext4_mkdir.vfs_mkdir.SyS_mkdir
0.00 ± -1% +Inf% 24960 ±105% latency_stats.sum.submit_bio_wait.blkdev_issue_flush.jbd2_cleanup_journal_tail.jbd2_log_do_checkpoint.__jbd2_log_wait_for_space.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.__ext4_new_inode.ext4_mkdir.vfs_mkdir.SyS_mkdir
5099 ± 5% +8.2% 5517 ± 5% sched_debug.cfs_rq[0]:/.min_vruntime
533.25 ± 3% -9.6% 482.25 ± 2% sched_debug.cfs_rq[0]:/.tg->runnable_avg
537.75 ± 3% -9.9% 484.75 ± 2% sched_debug.cfs_rq[1]:/.tg->runnable_avg
11.50 ± 35% +65.2% 19.00 ± 11% sched_debug.cfs_rq[2]:/.nr_spread_over
538.50 ± 3% -9.7% 486.50 ± 2% sched_debug.cfs_rq[2]:/.tg->runnable_avg
-1924 ±-24% +34.3% -2583 ±-12% sched_debug.cfs_rq[3]:/.spread0
539.75 ± 3% -10.4% 483.75 ± 2% sched_debug.cfs_rq[3]:/.tg->runnable_avg
1006 ± 13% +17.2% 1179 ± 5% sched_debug.cfs_rq[4]:/.exec_clock
2780 ± 16% +20.9% 3361 ± 7% sched_debug.cfs_rq[4]:/.min_vruntime
542.75 ± 3% -10.7% 484.50 ± 2% sched_debug.cfs_rq[4]:/.tg->runnable_avg
2626 ± 5% +41.7% 3723 ± 12% sched_debug.cfs_rq[5]:/.avg->runnable_avg_sum
2463 ± 8% +16.3% 2865 ± 7% sched_debug.cfs_rq[5]:/.min_vruntime
547.00 ± 4% -11.4% 484.50 ± 2% sched_debug.cfs_rq[5]:/.tg->runnable_avg
56.75 ± 4% +41.9% 80.50 ± 13% sched_debug.cfs_rq[5]:/.tg_runnable_contrib
909.00 ± 74% +241.7% 3105 ± 4% sched_debug.cfs_rq[6]:/.blocked_load_avg
549.00 ± 4% -11.5% 486.00 ± 2% sched_debug.cfs_rq[6]:/.tg->runnable_avg
927.25 ± 71% +240.7% 3158 ± 6% sched_debug.cfs_rq[6]:/.tg_load_contrib
4572 ± 22% -49.6% 2303 ± 27% sched_debug.cfs_rq[7]:/.avg->runnable_avg_sum
-1634 ±-23% +55.2% -2535 ±-19% sched_debug.cfs_rq[7]:/.spread0
551.00 ± 4% -11.4% 488.25 ± 3% sched_debug.cfs_rq[7]:/.tg->runnable_avg
98.00 ± 22% -49.7% 49.25 ± 27% sched_debug.cfs_rq[7]:/.tg_runnable_contrib
-9609 ± -7% +10.0% -10571 ± -1% sched_debug.cpu#0.nr_uninterruptible
15.50 ± 79% -91.9% 1.25 ±173% sched_debug.cpu#2.cpu_load[1]
12.75 ± 58% -76.5% 3.00 ±117% sched_debug.cpu#2.cpu_load[2]
11.75 ± 42% -70.2% 3.50 ± 95% sched_debug.cpu#2.cpu_load[3]
11.00 ± 39% -68.2% 3.50 ± 82% sched_debug.cpu#2.cpu_load[4]
851076 ±155% -93.9% 52140 ± 38% sched_debug.cpu#3.nr_switches
1395 ± 4% -8.7% 1274 ± 1% sched_debug.cpu#3.nr_uninterruptible
851137 ±155% -93.9% 52218 ± 38% sched_debug.cpu#3.sched_count
418288 ±157% -94.6% 22436 ± 44% sched_debug.cpu#3.sched_goidle
6.00 ±100% +150.0% 15.00 ± 30% sched_debug.cpu#4.cpu_load[2]
5.25 ± 76% +157.1% 13.50 ± 19% sched_debug.cpu#4.cpu_load[3]
5.25 ± 72% +123.8% 11.75 ± 20% sched_debug.cpu#4.cpu_load[4]
1507 ± 5% +23.3% 1859 ± 5% sched_debug.cpu#5.nr_uninterruptible
811411 ± 8% +10.4% 895772 ± 6% sched_debug.cpu#6.avg_idle
1349 ± 13% +38.2% 1863 ± 3% sched_debug.cpu#6.nr_uninterruptible
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/iterations/nr_threads/disk/fs/filesize/test_size/sync_method/nr_directories/nr_files_per_directory:
nhm4/fsmark/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/1x/32t/1HDD/xfs/8K/400M/fsyncBeforeClose/16d/256fpd
commit:
a4be9881623375fd126762af65ef18dc8175c68d
fa58aafc44805ac425d17c6a8082513b5442ce9d
a4be9881623375fd fa58aafc44805ac425d17c6a80
---------------- --------------------------
%stddev %change %stddev
\ | \
85071 ± 14% -33.4% 56662 ± 0% fsmark.time.involuntary_context_switches
44.50 ± 2% +12.9% 50.25 ± 0% fsmark.time.percent_of_cpu_this_job_got
1173823 ± 2% +25.4% 1472245 ± 6% latency_stats.sum.down.xfs_buf_lock._xfs_buf_find.xfs_buf_get_map.xfs_buf_read_map.xfs_trans_read_buf_map.xfs_read_agi.xfs_ialloc_read_agi.xfs_dialloc.xfs_ialloc.xfs_dir_ialloc.xfs_create
16393 ± 0% +224.5% 53190 ±112% softirqs.TIMER
36.11 ± 2% +148.9% 89.88 ± 94% uptime.boot
178.57 ± 4% +241.2% 609.30 ±111% uptime.idle
178335 ± 0% -80.3% 35149 ± 2% vmstat.system.cs
77433 ± 0% -93.5% 5027 ± 2% vmstat.system.in
28135 ± 3% -12.1% 24722 ± 1% meminfo.Active(anon)
27784 ± 3% -12.3% 24365 ± 1% meminfo.AnonPages
14863 ± 2% -14.8% 12659 ± 2% meminfo.Mapped
6993 ± 3% -11.9% 6160 ± 1% proc-vmstat.nr_active_anon
6906 ± 3% -12.0% 6075 ± 1% proc-vmstat.nr_anon_pages
3703 ± 2% -14.9% 3152 ± 2% proc-vmstat.nr_mapped
85071 ± 14% -33.4% 56662 ± 0% time.involuntary_context_switches
44.50 ± 2% +12.9% 50.25 ± 0% time.percent_of_cpu_this_job_got
5.87 ± 1% +13.5% 6.67 ± 1% time.system_time
10.71 ± 1% -27.2% 7.79 ± 0% turbostat.%Busy
357.25 ± 1% -34.9% 232.50 ± 0% turbostat.Avg_MHz
3333 ± 0% -10.5% 2984 ± 0% turbostat.Bzy_MHz
48.21 ± 5% -23.5% 36.86 ± 4% turbostat.CPU%c1
32.52 ± 5% +22.7% 39.91 ± 5% turbostat.CPU%c3
8.56 ± 11% +80.3% 15.43 ± 5% turbostat.CPU%c6
18315930 ± 4% -46.6% 9777154 ± 8% cpuidle.C1-NHM.time
1153863 ± 2% -94.6% 62163 ± 3% cpuidle.C1-NHM.usage
73216 ± 3% +10.4% 80802 ± 3% cpuidle.C3-NHM.usage
22540985 ± 6% +26.9% 28610584 ± 4% cpuidle.C6-NHM.time
10006 ± 8% +10.7% 11072 ± 3% cpuidle.C6-NHM.usage
43036 ± 99% -98.5% 641.00 ± 24% cpuidle.POLL.time
14491 ±104% -99.6% 51.50 ± 21% cpuidle.POLL.usage
17223 ± 25% -42.3% 9931 ± 35% sched_debug.cfs_rq[0]:/.avg->runnable_avg_sum
2435 ± 2% -10.7% 2174 ± 2% sched_debug.cfs_rq[0]:/.tg->runnable_avg
379.00 ± 25% -42.8% 216.75 ± 34% sched_debug.cfs_rq[0]:/.tg_runnable_contrib
2432 ± 2% -10.7% 2172 ± 2% sched_debug.cfs_rq[1]:/.tg->runnable_avg
12047 ± 12% +26.4% 15233 ± 4% sched_debug.cfs_rq[2]:/.avg->runnable_avg_sum
1122 ± 11% +18.6% 1331 ± 4% sched_debug.cfs_rq[2]:/.min_vruntime
-2608 ± -9% -16.4% -2180 ±-12% sched_debug.cfs_rq[2]:/.spread0
2436 ± 2% -10.9% 2170 ± 2% sched_debug.cfs_rq[2]:/.tg->runnable_avg
262.50 ± 12% +27.0% 333.50 ± 5% sched_debug.cfs_rq[2]:/.tg_runnable_contrib
2435 ± 2% -10.7% 2173 ± 2% sched_debug.cfs_rq[3]:/.tg->runnable_avg
2050 ±120% +731.3% 17041 ± 16% sched_debug.cfs_rq[4]:/.blocked_load_avg
2433 ± 1% -10.3% 2181 ± 2% sched_debug.cfs_rq[4]:/.tg->runnable_avg
2073 ±121% +731.2% 17235 ± 16% sched_debug.cfs_rq[4]:/.tg_load_contrib
1043 ± 19% -35.6% 672.06 ± 20% sched_debug.cfs_rq[5]:/.min_vruntime
2433 ± 1% -10.3% 2184 ± 2% sched_debug.cfs_rq[5]:/.tg->runnable_avg
2433 ± 1% -10.2% 2185 ± 2% sched_debug.cfs_rq[6]:/.tg->runnable_avg
13519 ± 30% -40.0% 8114 ± 35% sched_debug.cfs_rq[7]:/.blocked_load_avg
2429 ± 1% -10.1% 2185 ± 2% sched_debug.cfs_rq[7]:/.tg->runnable_avg
13871 ± 30% -39.9% 8331 ± 35% sched_debug.cfs_rq[7]:/.tg_load_contrib
353549 ± 9% +66.8% 589619 ± 40% sched_debug.cpu#0.avg_idle
21206 ± 3% +253.8% 75034 ±113% sched_debug.cpu#0.clock
21206 ± 3% +253.8% 75034 ±113% sched_debug.cpu#0.clock_task
21207 ± 3% +253.8% 75035 ±113% sched_debug.cpu#1.clock
21207 ± 3% +253.8% 75035 ±113% sched_debug.cpu#1.clock_task
21205 ± 3% +253.9% 75035 ±113% sched_debug.cpu#2.clock
21205 ± 3% +253.9% 75035 ±113% sched_debug.cpu#2.clock_task
5275 ± 21% +95.3% 10300 ± 35% sched_debug.cpu#2.nr_switches
5280 ± 21% +95.4% 10319 ± 35% sched_debug.cpu#2.sched_count
2298 ± 24% +108.5% 4792 ± 37% sched_debug.cpu#2.sched_goidle
2377 ± 31% +96.9% 4680 ± 34% sched_debug.cpu#2.ttwu_count
748.00 ± 47% +284.9% 2879 ± 48% sched_debug.cpu#2.ttwu_local
21208 ± 3% +253.8% 75034 ±113% sched_debug.cpu#3.clock
21208 ± 3% +253.8% 75034 ±113% sched_debug.cpu#3.clock_task
21206 ± 3% +253.8% 75034 ±113% sched_debug.cpu#4.clock
21206 ± 3% +253.8% 75034 ±113% sched_debug.cpu#4.clock_task
73956 ±163% -96.5% 2581 ± 41% sched_debug.cpu#4.nr_switches
73962 ±163% -96.5% 2600 ± 42% sched_debug.cpu#4.sched_count
36498 ±165% -97.4% 950.25 ± 60% sched_debug.cpu#4.sched_goidle
507768 ± 26% +65.5% 840493 ± 20% sched_debug.cpu#5.avg_idle
21207 ± 3% +253.8% 75034 ±113% sched_debug.cpu#5.clock
21207 ± 3% +253.8% 75034 ±113% sched_debug.cpu#5.clock_task
44.75 ± 62% -84.4% 7.00 ± 81% sched_debug.cpu#5.cpu_load[1]
779.25 ± 42% +33.1% 1037 ± 34% sched_debug.cpu#5.nr_load_updates
21207 ± 3% +253.8% 75035 ±113% sched_debug.cpu#6.clock
21207 ± 3% +253.8% 75035 ±113% sched_debug.cpu#6.clock_task
1995 ± 11% +21.6% 2427 ± 17% sched_debug.cpu#6.nr_switches
2001 ± 11% +22.3% 2446 ± 17% sched_debug.cpu#6.sched_count
21206 ± 3% +253.8% 75035 ±113% sched_debug.cpu#7.clock
21206 ± 3% +253.8% 75035 ±113% sched_debug.cpu#7.clock_task
21207 ± 3% +253.8% 75036 ±113% sched_debug.cpu_clk
21049 ± 3% +255.7% 74876 ±113% sched_debug.ktime
21207 ± 3% +253.8% 75036 ±113% sched_debug.sched_clk
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/nr_threads/iterations/samples:
lituya/ftq/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/powersave/100%/20x/100000ss
commit:
a4be9881623375fd126762af65ef18dc8175c68d
fa58aafc44805ac425d17c6a8082513b5442ce9d
a4be9881623375fd fa58aafc44805ac425d17c6a80
---------------- --------------------------
%stddev %change %stddev
\ | \
7572 ± 0% +6.2% 8040 ± 0% ftq.counts
0.18 ± 2% -82.4% 0.03 ± 8% ftq.stddev
1737203 ± 0% -99.1% 15898 ± 18% ftq.time.involuntary_context_switches
1467 ± 0% +3.4% 1517 ± 0% ftq.time.percent_of_cpu_this_job_got
547.01 ± 0% +3.5% 566.08 ± 0% ftq.time.user_time
16734 ± 0% -13.5% 14475 ± 0% meminfo.Mapped
4181 ± 0% -13.3% 3624 ± 0% proc-vmstat.nr_mapped
1.21 ± 3% -53.7% 0.56 ± 2% turbostat.CPU%c1
5.76 ± 3% +14.6% 6.61 ± 1% turbostat.CPU%c6
97309 ± 0% -96.9% 2991 ± 2% vmstat.system.cs
62011 ± 0% -76.5% 14573 ± 0% vmstat.system.in
1737203 ± 0% -99.1% 15898 ± 18% time.involuntary_context_switches
2.07 ± 6% -47.5% 1.09 ± 2% time.system_time
655.75 ± 36% +55.8% 1021 ± 5% time.voluntary_context_switches
1917711 ± 27% -91.5% 163688 ± 12% cpuidle.C1-HSW.time
144241 ± 3% -99.6% 608.50 ± 13% cpuidle.C1-HSW.usage
13.25 ± 38% -92.5% 1.00 ±100% cpuidle.POLL.time
7.00 ± 30% -92.9% 0.50 ±100% cpuidle.POLL.usage
3330 ± 2% -12.4% 2918 ± 2% sched_debug.cfs_rq[0]:/.tg->runnable_avg
48305 ± 10% +46.6% 70802 ± 3% sched_debug.cfs_rq[0]:/.tg_load_avg
1737 ± 74% +479.4% 10066 ± 59% sched_debug.cfs_rq[10]:/.blocked_load_avg
3330 ± 2% -12.3% 2922 ± 1% sched_debug.cfs_rq[10]:/.tg->runnable_avg
47674 ± 10% +46.5% 69861 ± 3% sched_debug.cfs_rq[10]:/.tg_load_avg
1812 ± 73% +457.2% 10098 ± 59% sched_debug.cfs_rq[10]:/.tg_load_contrib
-4849 ± -2% -17.4% -4006 ±-17% sched_debug.cfs_rq[11]:/.spread0
3330 ± 2% -12.3% 2922 ± 1% sched_debug.cfs_rq[11]:/.tg->runnable_avg
47674 ± 10% +46.5% 69861 ± 3% sched_debug.cfs_rq[11]:/.tg_load_avg
3330 ± 2% -12.0% 2930 ± 1% sched_debug.cfs_rq[12]:/.tg->runnable_avg
47674 ± 10% +46.4% 69806 ± 3% sched_debug.cfs_rq[12]:/.tg_load_avg
3330 ± 2% -12.0% 2930 ± 1% sched_debug.cfs_rq[13]:/.tg->runnable_avg
47674 ± 10% +46.4% 69806 ± 3% sched_debug.cfs_rq[13]:/.tg_load_avg
3338 ± 2% -12.2% 2930 ± 1% sched_debug.cfs_rq[14]:/.tg->runnable_avg
47612 ± 10% +46.6% 69806 ± 3% sched_debug.cfs_rq[14]:/.tg_load_avg
13486 ± 65% -66.1% 4567 ± 44% sched_debug.cfs_rq[15]:/.avg->runnable_avg_sum
3347 ± 2% -12.5% 2930 ± 1% sched_debug.cfs_rq[15]:/.tg->runnable_avg
47536 ± 10% +46.8% 69806 ± 3% sched_debug.cfs_rq[15]:/.tg_load_avg
295.00 ± 67% -66.2% 99.75 ± 45% sched_debug.cfs_rq[15]:/.tg_runnable_contrib
3329 ± 2% -12.4% 2917 ± 2% sched_debug.cfs_rq[1]:/.tg->runnable_avg
48268 ± 10% +46.7% 70802 ± 3% sched_debug.cfs_rq[1]:/.tg_load_avg
611.00 ±164% +895.1% 6080 ± 65% sched_debug.cfs_rq[2]:/.blocked_load_avg
3328 ± 2% -13.0% 2897 ± 1% sched_debug.cfs_rq[2]:/.tg->runnable_avg
48268 ± 10% +45.8% 70372 ± 3% sched_debug.cfs_rq[2]:/.tg_load_avg
611.00 ±164% +961.3% 6484 ± 69% sched_debug.cfs_rq[2]:/.tg_load_contrib
2088 ± 22% -30.6% 1448 ± 2% sched_debug.cfs_rq[3]:/.min_vruntime
3328 ± 2% -13.0% 2897 ± 1% sched_debug.cfs_rq[3]:/.tg->runnable_avg
48268 ± 10% +45.8% 70372 ± 3% sched_debug.cfs_rq[3]:/.tg_load_avg
3330 ± 2% -12.8% 2902 ± 1% sched_debug.cfs_rq[4]:/.tg->runnable_avg
48037 ± 10% +46.3% 70285 ± 3% sched_debug.cfs_rq[4]:/.tg_load_avg
3321 ± 2% -12.5% 2905 ± 1% sched_debug.cfs_rq[5]:/.tg->runnable_avg
48034 ± 10% +46.2% 70241 ± 3% sched_debug.cfs_rq[5]:/.tg_load_avg
5958 ± 58% -79.2% 1239 ± 77% sched_debug.cfs_rq[6]:/.blocked_load_avg
3321 ± 2% -12.4% 2909 ± 1% sched_debug.cfs_rq[6]:/.tg->runnable_avg
48034 ± 10% +46.2% 70222 ± 3% sched_debug.cfs_rq[6]:/.tg_load_avg
6017 ± 57% -79.4% 1239 ± 77% sched_debug.cfs_rq[6]:/.tg_load_contrib
-4384 ±-20% -23.6% -3350 ±-37% sched_debug.cfs_rq[7]:/.spread0
3321 ± 2% -12.4% 2909 ± 1% sched_debug.cfs_rq[7]:/.tg->runnable_avg
47777 ± 10% +47.0% 70222 ± 3% sched_debug.cfs_rq[7]:/.tg_load_avg
6303 ± 42% +54.5% 9736 ± 14% sched_debug.cfs_rq[8]:/.avg->runnable_avg_sum
909.28 ± 36% +91.1% 1738 ± 39% sched_debug.cfs_rq[8]:/.min_vruntime
3330 ± 2% -12.5% 2914 ± 1% sched_debug.cfs_rq[8]:/.tg->runnable_avg
47674 ± 10% +47.2% 70180 ± 3% sched_debug.cfs_rq[8]:/.tg_load_avg
137.75 ± 43% +52.8% 210.50 ± 14% sched_debug.cfs_rq[8]:/.tg_runnable_contrib
99.27 ± 18% +50.2% 149.07 ± 22% sched_debug.cfs_rq[9]:/.exec_clock
3330 ± 2% -12.3% 2922 ± 1% sched_debug.cfs_rq[9]:/.tg->runnable_avg
47674 ± 10% +46.5% 69861 ± 3% sched_debug.cfs_rq[9]:/.tg_load_avg
27.00 ± 43% -55.6% 12.00 ± 5% sched_debug.cpu#0.cpu_load[3]
889905 ± 9% -24.5% 671951 ± 16% sched_debug.cpu#1.avg_idle
10.00 ± 43% -70.0% 3.00 ±102% sched_debug.cpu#1.cpu_load[3]
8.50 ± 52% -79.4% 1.75 ±102% sched_debug.cpu#1.cpu_load[4]
7.75 ± 19% -108.6% -0.67 ±-430% sched_debug.cpu#10.nr_uninterruptible
2398 ± 82% -86.6% 321.25 ± 45% sched_debug.cpu#10.ttwu_count
1835 ± 95% -96.5% 64.00 ± 30% sched_debug.cpu#10.ttwu_local
1368 ± 8% +26.8% 1736 ± 18% sched_debug.cpu#11.nr_switches
1373 ± 8% +26.6% 1738 ± 18% sched_debug.cpu#11.sched_count
509.75 ± 10% +35.9% 693.00 ± 22% sched_debug.cpu#11.sched_goidle
578.00 ± 30% +36.0% 786.25 ± 17% sched_debug.cpu#11.ttwu_count
334.00 ± 36% +115.6% 720.25 ± 16% sched_debug.cpu#13.ttwu_count
588893 ± 42% +64.2% 966897 ± 5% sched_debug.cpu#5.avg_idle
-4.00 ±-68% -118.8% 0.75 ±331% sched_debug.cpu#5.nr_uninterruptible
2.25 ±164% -355.6% -5.75 ±-65% sched_debug.cpu#7.nr_uninterruptible
343.25 ± 42% +86.7% 641.00 ± 43% sched_debug.cpu#9.ttwu_count
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/nr_threads/iterations/samples:
lituya/fwq/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/100%/20x/100000ss
commit:
a4be9881623375fd126762af65ef18dc8175c68d
fa58aafc44805ac425d17c6a8082513b5442ce9d
a4be9881623375fd fa58aafc44805ac425d17c6a80
---------------- --------------------------
%stddev %change %stddev
\ | \
0.11 ± 2% -44.0% 0.06 ± 2% fwq.stddev
3229188 ± 0% -85.9% 455853 ± 11% fwq.time.involuntary_context_switches
13780 ± 1% +5.7% 14566 ± 0% fwq.time.maximum_resident_set_size
176058 ± 20% -31.6% 120345 ± 0% latency_stats.sum.do_wait.SyS_wait4.entry_SYSCALL_64_fastpath
0.30 ± 22% -76.9% 0.07 ± 0% turbostat.CPU%c1
484.63 ± 56% -36.7% 307.00 ± 1% uptime.idle
16399 ± 3% -78.3% 3553 ± 2% vmstat.system.cs
22757 ± 3% -26.2% 16801 ± 0% vmstat.system.in
3907 ± 4% -10.0% 3517 ± 4% slabinfo.anon_vma.active_objs
3907 ± 4% -10.0% 3517 ± 4% slabinfo.anon_vma.num_objs
8215 ± 4% -8.4% 7522 ± 4% slabinfo.kmalloc-512.num_objs
3229188 ± 0% -85.9% 455853 ± 11% time.involuntary_context_switches
1791 ± 3% +10.3% 1976 ± 3% time.minor_page_faults
3.12 ± 1% -72.4% 0.86 ± 6% time.system_time
2392395 ±114% -93.4% 156963 ± 92% cpuidle.C1-HSW.time
48618 ± 12% -99.3% 331.33 ± 14% cpuidle.C1-HSW.usage
2.271e+08 ±130% -82.5% 39690941 ± 3% cpuidle.C6-HSW.time
5212 ± 82% -75.3% 1289 ± 6% cpuidle.C6-HSW.usage
6.25 ± 66% -100.0% 0.00 ± 0% cpuidle.POLL.time
2.75 ± 15% -100.0% 0.00 ± 0% cpuidle.POLL.usage
300.50 ± 49% +108.2% 625.67 ± 33% sched_debug.cfs_rq[0]:/.blocked_load_avg
55.25 ± 4% -7.7% 51.00 ± 4% sched_debug.cfs_rq[0]:/.load
58.75 ± 8% -14.9% 50.00 ± 1% sched_debug.cfs_rq[0]:/.runnable_load_avg
364.25 ± 42% +87.7% 683.67 ± 28% sched_debug.cfs_rq[0]:/.tg_load_contrib
912.75 ± 35% -41.1% 537.33 ± 70% sched_debug.cfs_rq[10]:/.tg_load_contrib
792.33 ± 74% +151.1% 1989 ± 32% sched_debug.cfs_rq[11]:/.blocked_load_avg
659.50 ± 92% +212.5% 2061 ± 30% sched_debug.cfs_rq[11]:/.tg_load_contrib
489.50 ± 59% +158.1% 1263 ± 20% sched_debug.cfs_rq[12]:/.blocked_load_avg
544.75 ± 53% +143.4% 1326 ± 19% sched_debug.cfs_rq[12]:/.tg_load_contrib
98.25 ± 86% +298.3% 391.33 ± 34% sched_debug.cfs_rq[1]:/.blocked_load_avg
157.25 ± 57% +194.9% 463.67 ± 28% sched_debug.cfs_rq[1]:/.tg_load_contrib
324.00 ± 68% +471.2% 1850 ± 22% sched_debug.cfs_rq[3]:/.blocked_load_avg
379.75 ± 60% +402.9% 1909 ± 21% sched_debug.cfs_rq[3]:/.tg_load_contrib
67.75 ± 26% -23.2% 52.00 ± 6% sched_debug.cfs_rq[5]:/.load
1586 ± 85% -64.3% 566.67 ± 46% sched_debug.cfs_rq[6]:/.blocked_load_avg
1679 ± 82% -61.8% 642.00 ± 43% sched_debug.cfs_rq[6]:/.tg_load_contrib
1.25 ± 34% +406.7% 6.33 ± 39% sched_debug.cfs_rq[7]:/.nr_spread_over
59.75 ± 12% -11.3% 53.00 ± 6% sched_debug.cpu#0.cpu_load[0]
55.25 ± 4% -7.7% 51.00 ± 4% sched_debug.cpu#0.load
125050 ± 80% -83.6% 20475 ± 37% sched_debug.cpu#1.nr_switches
1.75 ±240% +776.2% 15.33 ± 13% sched_debug.cpu#1.nr_uninterruptible
125107 ± 80% -83.6% 20557 ± 37% sched_debug.cpu#1.sched_count
54622 ± 93% -78.4% 11825 ± 72% sched_debug.cpu#1.ttwu_count
36441 ± 92% -91.9% 2955 ± 47% sched_debug.cpu#1.ttwu_local
7.75 ± 78% -95.7% 0.33 ±282% sched_debug.cpu#10.nr_uninterruptible
7.25 ± 45% -51.7% 3.50 ± 42% sched_debug.cpu#12.nr_uninterruptible
1584 ± 72% +1888.2% 31493 ±121% sched_debug.cpu#13.ttwu_count
12188 ±141% -91.0% 1100 ± 68% sched_debug.cpu#4.sched_goidle
80.00 ± 15% -27.1% 58.33 ± 7% sched_debug.cpu#6.cpu_load[3]
78.00 ± 16% -27.8% 56.33 ± 4% sched_debug.cpu#6.cpu_load[4]
128000 ±128% -95.1% 6219 ±103% sched_debug.cpu#8.ttwu_count
106189 ±165% -99.7% 357.33 ± 2% sched_debug.cpu#8.ttwu_local
32547 ±143% -89.9% 3291 ± 62% sched_debug.cpu#9.nr_switches
32615 ±143% -89.7% 3352 ± 62% sched_debug.cpu#9.sched_count
26785 ± 79% -85.9% 3781 ± 81% sched_debug.cpu#9.ttwu_count
5.94 ±172% -100.0% 0.00 ± 85% sched_debug.rt_rq[2]:/.rt_time
1.89 ±172% -99.9% 0.00 ± 89% sched_debug.rt_rq[8]:/.rt_time
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/runtime/nr_threads/cluster/test:
lkp-ne02/netperf/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/300s/200%/cs-localhost/SCTP_RR
commit:
a4be9881623375fd126762af65ef18dc8175c68d
fa58aafc44805ac425d17c6a8082513b5442ce9d
a4be9881623375fd fa58aafc44805ac425d17c6a80
---------------- --------------------------
%stddev %change %stddev
\ | \
1233946 ± 1% -93.3% 83018 ± 13% netperf.time.involuntary_context_switches
26623 ±120% -76.8% 6174 ± 63% latency_stats.sum.rpc_wait_bit_killable.__rpc_wait_for_completion_task.nfs4_run_open_task.[nfsv4]._nfs4_open_and_get_state.[nfsv4].nfs4_do_open.[nfsv4].nfs4_atomic_open.[nfsv4].nfs4_file_open.[nfsv4].do_dentry_open.vfs_open.path_openat.do_filp_open.do_sys_open
301360 ± 2% -8.2% 276612 ± 1% softirqs.RCU
1233946 ± 1% -93.3% 83018 ± 13% time.involuntary_context_switches
0.22 ± 12% -27.0% 0.16 ± 13% turbostat.CPU%c1
26675 ± 0% -32.3% 18052 ± 0% vmstat.system.in
9078 ± 5% +12.7% 10235 ± 3% slabinfo.vm_area_struct.active_objs
9128 ± 4% +12.8% 10298 ± 3% slabinfo.vm_area_struct.num_objs
3247591 ± 37% -42.2% 1877235 ± 23% cpuidle.C1-NHM.time
175462 ± 7% -95.2% 8494 ± 6% cpuidle.C1-NHM.usage
863871 ± 11% -100.0% 63.00 ± 8% cpuidle.POLL.time
175219 ± 10% -100.0% 5.50 ± 20% cpuidle.POLL.usage
2731 ±120% +218.4% 8696 ± 3% numa-meminfo.node0.Inactive(anon)
15363 ± 5% +7.2% 16474 ± 4% numa-meminfo.node0.SUnreclaim
6557 ± 50% -91.3% 570.25 ± 49% numa-meminfo.node1.Inactive(anon)
7805 ± 13% -25.5% 5812 ± 0% numa-meminfo.node1.Mapped
18367 ± 2% -7.4% 17002 ± 5% numa-meminfo.node1.SReclaimable
682.50 ±120% +218.5% 2173 ± 3% numa-vmstat.node0.nr_inactive_anon
3840 ± 5% +7.2% 4118 ± 4% numa-vmstat.node0.nr_slab_unreclaimable
1639 ± 50% -91.3% 142.00 ± 49% numa-vmstat.node1.nr_inactive_anon
1950 ± 13% -25.5% 1452 ± 0% numa-vmstat.node1.nr_mapped
4591 ± 2% -7.4% 4250 ± 5% numa-vmstat.node1.nr_slab_reclaimable
1.00 ± 70% +350.0% 4.50 ± 57% sched_debug.cfs_rq[12]:/.nr_spread_over
-1967 ±-3098% +5332.3% -106895 ±-79% sched_debug.cfs_rq[13]:/.spread0
103.00 ± 5% +19.4% 123.00 ± 5% sched_debug.cfs_rq[15]:/.load
95.75 ± 10% +16.7% 111.75 ± 10% sched_debug.cfs_rq[15]:/.runnable_load_avg
-1514 ±-4117% +6467.6% -99452 ±-81% sched_debug.cfs_rq[15]:/.spread0
1116022 ± 6% +11.2% 1240796 ± 4% sched_debug.cfs_rq[2]:/.MIN_vruntime
1116022 ± 6% +11.2% 1240796 ± 4% sched_debug.cfs_rq[2]:/.max_vruntime
1084538 ± 9% +15.0% 1247278 ± 4% sched_debug.cfs_rq[3]:/.MIN_vruntime
1084538 ± 9% +15.0% 1247278 ± 4% sched_debug.cfs_rq[3]:/.max_vruntime
12.25 ± 10% -40.8% 7.25 ± 35% sched_debug.cfs_rq[5]:/.nr_spread_over
-3847 ±-1573% +2484.2% -99431 ±-80% sched_debug.cfs_rq[7]:/.spread0
-2145 ±-139% +451.8% -11836 ±-59% sched_debug.cfs_rq[8]:/.spread0
119.00 ± 7% -23.1% 91.50 ± 12% sched_debug.cpu#0.cpu_load[0]
105.25 ± 3% -15.9% 88.50 ± 6% sched_debug.cpu#0.cpu_load[1]
99.00 ± 4% -13.1% 86.00 ± 5% sched_debug.cpu#0.cpu_load[2]
1480 ± 21% -34.8% 965.50 ± 10% sched_debug.cpu#0.curr->pid
2943 ± 29% +28.1% 3770 ± 1% sched_debug.cpu#0.sched_goidle
1784 ± 77% -75.2% 442.50 ± 7% sched_debug.cpu#13.sched_goidle
88778 ± 53% +58.4% 140611 ± 8% sched_debug.cpu#2.avg_idle
89.00 ± 6% +27.2% 113.25 ± 4% sched_debug.cpu#3.load
2979 ± 51% -67.9% 956.75 ± 19% sched_debug.cpu#7.sched_goidle
1369 ± 29% -26.8% 1002 ± 21% sched_debug.cpu#9.curr->pid
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/runtime/nr_threads/cluster/test:
lkp-ne02/netperf/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/300s/200%/cs-localhost/TCP_SENDFILE
commit:
a4be9881623375fd126762af65ef18dc8175c68d
fa58aafc44805ac425d17c6a8082513b5442ce9d
a4be9881623375fd fa58aafc44805ac425d17c6a80
---------------- --------------------------
%stddev %change %stddev
\ | \
2033310 ± 0% -97.2% 57638 ± 4% netperf.time.involuntary_context_switches
165110 ± 33% +47.2% 243029 ± 20% proc-vmstat.pgalloc_normal
2033310 ± 0% -97.2% 57638 ± 4% time.involuntary_context_switches
3320 ± 7% +12.2% 3725 ± 4% numa-meminfo.node0.KernelStack
2760 ± 8% -15.4% 2335 ± 6% numa-meminfo.node1.KernelStack
78.25 ± 37% -78.3% 17.00 ±139% numa-vmstat.node1.nr_dirtied
76.25 ± 37% -78.4% 16.50 ±138% numa-vmstat.node1.nr_written
0.22 ± 5% -35.6% 0.14 ± 8% turbostat.CPU%c1
0.53 ± 5% +16.0% 0.61 ± 0% turbostat.CPU%c6
49180 ± 0% -48.2% 25479 ± 0% vmstat.system.cs
27651 ± 0% -37.2% 17351 ± 0% vmstat.system.in
4278144 ± 1% -22.0% 3335680 ± 0% latency_stats.hits.sk_wait_data.tcp_recvmsg.inet_recvmsg.sock_recvmsg.SYSC_recvfrom.SyS_recvfrom.entry_SYSCALL_64_fastpath
44720 ± 36% +113.9% 95674 ±108% latency_stats.sum.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
2.379e+09 ± 0% -1.2% 2.351e+09 ± 0% latency_stats.sum.sk_wait_data.tcp_recvmsg.inet_recvmsg.sock_recvmsg.SYSC_recvfrom.SyS_recvfrom.entry_SYSCALL_64_fastpath
1875754 ± 8% -75.8% 453014 ± 30% cpuidle.C1-NHM.time
149386 ± 1% -95.2% 7207 ± 45% cpuidle.C1-NHM.usage
322.75 ± 51% -65.1% 112.50 ± 16% cpuidle.C1E-NHM.usage
14707 ±165% -99.7% 42.00 ± 41% cpuidle.POLL.time
232.50 ± 82% -98.6% 3.25 ± 45% cpuidle.POLL.usage
106.75 ± 2% -10.5% 95.50 ± 5% sched_debug.cfs_rq[12]:/.load
112.50 ± 7% -15.8% 94.75 ± 5% sched_debug.cfs_rq[12]:/.runnable_load_avg
110.75 ± 3% -12.0% 97.50 ± 15% sched_debug.cfs_rq[13]:/.runnable_load_avg
143.00 ± 98% +628.1% 1041 ± 47% sched_debug.cfs_rq[2]:/.blocked_load_avg
247.75 ± 57% +366.3% 1155 ± 42% sched_debug.cfs_rq[2]:/.tg_load_contrib
42.50 ±158% +665.3% 325.25 ± 68% sched_debug.cfs_rq[3]:/.blocked_load_avg
-75934 ±-49% -129.8% 22591 ± 76% sched_debug.cfs_rq[3]:/.spread0
145.50 ± 46% +199.0% 435.00 ± 51% sched_debug.cfs_rq[3]:/.tg_load_contrib
30.75 ± 31% +74.8% 53.75 ± 25% sched_debug.cfs_rq[4]:/.nr_spread_over
102.75 ± 3% +10.7% 113.75 ± 6% sched_debug.cpu#0.cpu_load[0]
2530 ± 20% +68.6% 4265 ± 23% sched_debug.cpu#0.sched_goidle
350.25 ± 10% +81.1% 634.25 ± 24% sched_debug.cpu#10.sched_goidle
112.00 ± 7% -14.7% 95.50 ± 8% sched_debug.cpu#12.cpu_load[0]
110.25 ± 6% -11.3% 97.75 ± 5% sched_debug.cpu#12.cpu_load[1]
109.50 ± 4% -8.9% 99.75 ± 4% sched_debug.cpu#12.cpu_load[2]
110.75 ± 4% -12.0% 97.50 ± 11% sched_debug.cpu#13.cpu_load[1]
111.25 ± 4% -11.9% 98.00 ± 10% sched_debug.cpu#13.cpu_load[2]
624.00 ± 23% -32.6% 420.50 ± 16% sched_debug.cpu#13.sched_goidle
947672 ± 76% -77.0% 217667 ± 3% sched_debug.cpu#15.nr_switches
947705 ± 76% -77.0% 217685 ± 3% sched_debug.cpu#15.sched_count
592433 ± 65% -66.0% 201467 ± 1% sched_debug.cpu#15.ttwu_local
911.50 ± 8% +9.5% 998.50 ± 11% sched_debug.cpu#3.curr->pid
562814 ± 32% +41.8% 798162 ± 15% sched_debug.cpu#4.avg_idle
277723 ± 12% -17.9% 227889 ± 5% sched_debug.cpu#4.nr_switches
277747 ± 12% -17.9% 227907 ± 5% sched_debug.cpu#4.sched_count
109.00 ± 2% -11.0% 97.00 ± 0% sched_debug.cpu#5.load
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/cpufreq_governor/nr_threads/iterations/entries:
snb-drag/tlbflush/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/performance/200%/32x/512
commit:
a4be9881623375fd126762af65ef18dc8175c68d
fa58aafc44805ac425d17c6a8082513b5442ce9d
a4be9881623375fd fa58aafc44805ac425d17c6a80
---------------- --------------------------
%stddev %change %stddev
\ | \
89406 ± 1% +2.1% 91264 ± 0% tlbflush.mem_acc_time_thread_ms
12692 ± 29% -69.7% 3848 ± 6% tlbflush.time.involuntary_context_switches
45262 ± 14% -20.5% 35996 ± 13% softirqs.SCHED
12692 ± 29% -69.7% 3848 ± 6% time.involuntary_context_switches
5023 ± 14% +24.8% 6271 ± 4% slabinfo.kmalloc-32.active_objs
5023 ± 14% +24.8% 6271 ± 4% slabinfo.kmalloc-32.num_objs
62647 ± 4% -20.7% 49700 ± 4% vmstat.system.cs
26516 ± 6% -24.7% 19964 ± 3% vmstat.system.in
1.486e+08 ± 4% -25.5% 1.108e+08 ± 7% cpuidle.C1-SNB.time
9489652 ± 0% -45.4% 5183155 ± 1% cpuidle.C1-SNB.usage
94983 ± 14% -35.2% 61571 ± 0% cpuidle.POLL.usage
424061 ± 57% -100.0% 0.00 ± -1% latency_stats.avg.rpc_wait_bit_killable.__rpc_wait_for_completion_task.nfs4_run_open_task.[nfsv4]._nfs4_open_and_get_state.[nfsv4].nfs4_do_open.[nfsv4].nfs4_atomic_open.[nfsv4].nfs_atomic_open.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
424061 ± 57% -100.0% 0.00 ± -1% latency_stats.max.rpc_wait_bit_killable.__rpc_wait_for_completion_task.nfs4_run_open_task.[nfsv4]._nfs4_open_and_get_state.[nfsv4].nfs4_do_open.[nfsv4].nfs4_atomic_open.[nfsv4].nfs_atomic_open.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
424061 ± 57% -100.0% 0.00 ± -1% latency_stats.sum.rpc_wait_bit_killable.__rpc_wait_for_completion_task.nfs4_run_open_task.[nfsv4]._nfs4_open_and_get_state.[nfsv4].nfs4_do_open.[nfsv4].nfs4_atomic_open.[nfsv4].nfs_atomic_open.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
4767 ± 56% -94.4% 268.00 ± 30% sched_debug.cfs_rq[0]:/.load
17.25 ± 58% +127.5% 39.25 ± 19% sched_debug.cfs_rq[3]:/.runnable_load_avg
8521655 ± 15% -42.7% 4882199 ± 15% sched_debug.cpu#0.nr_switches
8522183 ± 15% -42.7% 4882721 ± 15% sched_debug.cpu#0.sched_count
4225538 ± 15% -42.7% 2421794 ± 15% sched_debug.cpu#0.sched_goidle
4280766 ± 15% -41.3% 2511288 ± 15% sched_debug.cpu#0.ttwu_count
3693688 ± 17% -48.0% 1919886 ± 18% sched_debug.cpu#0.ttwu_local
10474544 ± 12% -43.9% 5872222 ± 9% sched_debug.cpu#1.nr_switches
10474799 ± 12% -43.9% 5872473 ± 9% sched_debug.cpu#1.sched_count
5198778 ± 12% -43.9% 2917524 ± 9% sched_debug.cpu#1.sched_goidle
5265913 ± 12% -44.2% 2940722 ± 9% sched_debug.cpu#1.ttwu_count
4654824 ± 14% -48.5% 2396748 ± 10% sched_debug.cpu#1.ttwu_local
6.50 ± 50% +138.5% 15.50 ± 25% sched_debug.cpu#3.cpu_load[1]
5.25 ± 28% +114.3% 11.25 ± 25% sched_debug.cpu#3.cpu_load[2]
=========================================================================================
tbox_group/testcase/rootfs/kconfig/compiler/test:
nhm-white/unixbench/debian-x86_64-2015-02-07.cgz/x86_64-rhel/gcc-4.9/shell8
commit:
a4be9881623375fd126762af65ef18dc8175c68d
fa58aafc44805ac425d17c6a8082513b5442ce9d
a4be9881623375fd fa58aafc44805ac425d17c6a80
---------------- --------------------------
%stddev %change %stddev
\ | \
5937622 ± 1% -26.4% 4369228 ± 0% unixbench.time.involuntary_context_switches
51200 ± 3% +15.0% 58880 ± 4% meminfo.DirectMap4k
5937622 ± 1% -26.4% 4369228 ± 0% time.involuntary_context_switches
62839 ± 1% -28.4% 44966 ± 2% vmstat.system.cs
19289 ± 2% -46.2% 10378 ± 2% vmstat.system.in
6.09 ± 0% +8.9% 6.63 ± 2% turbostat.CPU%c3
0.45 ± 10% +50.6% 0.67 ± 20% turbostat.Pkg%pc3
3.17 ± 1% +62.5% 5.15 ± 47% turbostat.Pkg%pc6
45216499 ± 0% -34.6% 29566388 ± 1% cpuidle.C1-NHM.time
1918738 ± 7% -88.4% 222808 ± 2% cpuidle.C1-NHM.usage
220032 ± 10% -90.2% 21647 ± 11% cpuidle.POLL.time
30597 ± 3% -96.6% 1051 ± 2% cpuidle.POLL.usage
1886 ± 45% +73.6% 3275 ± 36% sched_debug.cfs_rq[4]:/.utilization_load_avg
294740 ± 5% +42.0% 418454 ± 8% sched_debug.cpu#1.avg_idle
2624072 ± 63% -60.8% 1029438 ± 2% sched_debug.cpu#1.nr_switches
2624725 ± 63% -60.8% 1030184 ± 2% sched_debug.cpu#1.sched_count
1203729 ± 70% -66.9% 398300 ± 1% sched_debug.cpu#1.ttwu_count
992043 ± 86% -81.8% 180660 ± 3% sched_debug.cpu#1.ttwu_local
15179 ± 13% -43.3% 8606 ± 20% sched_debug.cpu#2.curr->pid
-204.00 ±-22% -47.5% -107.00 ±-43% sched_debug.cpu#2.nr_uninterruptible
184.75 ± 28% -28.1% 132.75 ± 13% sched_debug.cpu#5.nr_uninterruptible
14010 ± 11% -20.8% 11095 ± 16% sched_debug.cpu#7.curr->pid
2209845 ± 57% -56.4% 962613 ± 3% sched_debug.cpu#7.nr_switches
2210474 ± 57% -56.4% 963302 ± 3% sched_debug.cpu#7.sched_count
575333 ± 61% -58.4% 239461 ± 1% sched_debug.cpu#7.sched_goidle
7.45 ±124% -99.9% 0.01 ± 3% sched_debug.rt_rq[5]:/.rt_time
lkp-a06: Atom
Memory: 8G
nhm4: Nehalem
Memory: 4G
lituya: Grantley Haswell
Memory: 16G
lkp-ne02: Nehalem-EP
Memory: 5G
snb-drag: Sandy Bridge
Memory: 6G
nhm-white: Nehalem
Memory: 6G
aim7.time.voluntary_context_switches
58000 ++------------------------------------------------------------------+
| * |
56000 *+ .. + .* .* |
54000 ++*..*.*.*.. .* *.*..* + .* .*.*..*.*. .*. .* + .*.*
| *.* *. + .* *..* *. *. |
52000 ++ *. |
50000 ++ |
| |
48000 ++ |
46000 ++ |
| |
44000 O+ O O O |
42000 ++O O O O O |
| O O O O O O |
40000 ++------------------------------------------------------------------+
aim7.time.involuntary_context_switches
2.2e+06 ++----------------------------------------------------------------+
2e+06 *+*..*.*.*.*.. .*. .*.*.*..*.*.*.*..*.*.*..*.*.*.*..*.*.*.*..*.*
| *.* *. |
1.8e+06 ++ |
1.6e+06 ++ |
1.4e+06 ++ |
1.2e+06 ++ |
| |
1e+06 ++ |
800000 ++ |
600000 ++ |
400000 ++ |
| |
200000 O+O O O O O O O O O O O O O O |
0 ++----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang
[of/platform] BUG: unable to handle kernel paging request at 6b6b6b7f
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
git://git.collabora.co.uk/git/user/tomeu/linux.git on-demand-probes-v6
commit 7ec0126d70e7cf5029b717f3b3ecf48ee1d17930
Author: Tomeu Vizoso <tomeu.vizoso(a)collabora.com>
AuthorDate: Tue Aug 11 10:07:10 2015 +0200
Commit: Tomeu Vizoso <tomeu.vizoso(a)collabora.com>
CommitDate: Tue Aug 11 12:09:17 2015 +0200
of/platform: Point to struct device from device node
When adding a platform device, set the device node's device member to
point to it.
This speeds lookups considerably and is safe because we only create one
platform device for any given device node.
Signed-off-by: Tomeu Vizoso <tomeu.vizoso(a)collabora.com>
+------------------------------------------+------------+------------+------------+
| | 2ffbf1ddf7 | 7ec0126d70 | 739b19076e |
+------------------------------------------+------------+------------+------------+
| boot_successes | 63 | 0 | 0 |
| boot_failures | 0 | 66 | 46 |
| BUG:unable_to_handle_kernel | 0 | 66 | 46 |
| Oops | 0 | 66 | 46 |
| EIP_is_at_driver_allows_async_probing | 0 | 49 | |
| EIP_is_at_bus_remove_device | 0 | 0 | 29 |
| backtrace:of_unittest | 0 | 0 | 3 |
| backtrace:kernel_init_freeable | 0 | 0 | 3 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 0 | 1 |
+------------------------------------------+------------+------------+------------+
[ 329.233245] /testcase-data/phandle-tests/consumer-a: arguments longer than property
[ 330.035559] /testcase-data/phandle-tests/consumer-a: arguments longer than property
[ 331.140007] irq: no irq domain found for /testcase-data/interrupts/intc0 !
[ 331.663210] BUG: unable to handle kernel paging request at 6b6b6b7f
[ 332.314350] IP: [<c1612d1a>] driver_allows_async_probing+0x1/0xc
[ 332.830564] *pde = 00000000
[ 333.384771] Oops: 0000 [#1]
[ 333.813016] CPU: 0 PID: 1 Comm: swapper Not tainted 4.2.0-rc6-next-20150810-00011-g7ec0126 #1
[ 335.046563] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140531_083030-gandalf 04/01/2014
[ 336.465960] task: d0c60000 ti: d0c68000 task.ti: d0c68000
[ 337.089686] EIP: 0060:[<c1612d1a>] EFLAGS: 00010202 CPU: 0
[ 337.573068] EIP is at driver_allows_async_probing+0x1/0xc
[ 338.100472] EAX: 6b6b6b6b EBX: d1a05a4c ECX: 00000000 EDX: 00000061
[ 338.601313] ESI: 6b6b6b6b EDI: c20750a0 EBP: d0c69ce4 ESP: d0c69cd4
[ 339.186247] DS: 007b ES: 007b FS: 0000 GS: 0000 SS: 0068
[ 339.967334] CR0: 8005003b CR2: 6b6b6b7f CR3: 021d1000 CR4: 00000690
[ 340.588626] Stack:
[ 341.271228] d0c69ce4 c1612d99 d1a05a80 d1a05a4c d0c69cf4 c1612e8e d1a05a4c d0c6c2fc
[ 342.143552] d0c69d08 c1611e79 d1a05a4c c2075120 c208747c d0c69d30 c16103d3 00000000
Elapsed time: 340
qemu-system-x86_64 -enable-kvm -cpu kvm64 -kernel /pkg/linux/i386-randconfig-sb0-08130545+CONFIG_DEBUG_INFO_REDUCED/gcc-4.9/7ec0126d70e7cf5029b717f3b3ecf48ee1d17930/vmlinuz-4.2.0-rc6-next-20150810-00011-g7ec0126 -append 'hung_task_panic=1 earlyprintk=ttyS0,115200 systemd.log_level=err debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal root=/dev/ram0 rw link=/kbuild-tests/run-queue/kvm/i386-randconfig-sb0-08130545+CONFIG_DEBUG_INFO_REDUCED/linux-devel:devel-spot-201508121642:7ec0126d70e7cf5029b717f3b3ecf48ee1d17930:bisect-linux-5/.vmlinuz-7ec0126d70e7cf5029b717f3b3ecf48ee1d17930-20150813151103-39-ivb41 branch=linux-devel/devel-spot-201508121642 BOOT_IMAGE=/pkg/linux/i386-randconfig-sb0-08130545+CONFIG_DEBUG_INFO_REDUCED/gcc-4.9/7ec0126d70e7cf5029b717f3b3ecf48ee1d17930/vmlinuz-4.2.0-rc6-next-20150810-00011-g7ec0126 drbd.minor_count=8' -initrd /osimage/quantal/quantal-core-i386.cgz -m 300 -smp 2 -device e1000,netdev=net0 -netdev user,id=net0 -boot order=nc -no-reboot -watchdog i6300esb -rtc base=localtime -drive file=/fs/sda5/disk0-quantal-ivb41-4,media=disk,if=virtio -drive file=/fs/sda5/disk1-quantal-ivb41-4,media=disk,if=virtio -drive file=/fs/sda5/disk2-quantal-ivb41-4,media=disk,if=virtio -drive file=/fs/sda5/disk3-quantal-ivb41-4,media=disk,if=virtio -drive file=/fs/sda5/disk4-quantal-ivb41-4,media=disk,if=virtio -drive file=/fs/sda5/disk5-quantal-ivb41-4,media=disk,if=virtio -drive file=/fs/sda5/disk6-quantal-ivb41-4,media=disk,if=virtio -pidfile /dev/shm/kboot/pid-quantal-ivb41-4 -serial file:/dev/shm/kboot/serial-quantal-ivb41-4 -daemonize -display none -monitor null
git bisect start 739b19076e1a0f8ec5979bba7ae509902ce52f89 b195df50400676bdeaacdca27051e1a71ccd570f --
git bisect bad cd51e505d8afb42d4ef4966f34f7e4364e2bcc4e # 14:34 0- 22 regulator: core: Reduce critical area in _regulator_get
git bisect good b88e83162b43ecbcd7543ee36d2c558c56c9006f # 14:54 22+ 0 ARM: tegra: Add gpio-ranges property
git bisect bad 7ec0126d70e7cf5029b717f3b3ecf48ee1d17930 # 14:54 0- 66 of/platform: Point to struct device from device node
git bisect good 162ca88685866a44878df9373e9f1b349e571d37 # 15:19 22+ 0 memory: omap-gpmc: Don't try to save the GPMC context
git bisect good 2ffbf1ddf7ca086993ce248da95c5d7a9237f0ec # 15:41 22+ 0 ARM: ux500: fix typo in regulator names
# first bad commit: [7ec0126d70e7cf5029b717f3b3ecf48ee1d17930] of/platform: Point to struct device from device node
git bisect good 2ffbf1ddf7ca086993ce248da95c5d7a9237f0ec # 15:49 63+ 0 ARM: ux500: fix typo in regulator names
# extra tests on HEAD of tomeu/on-demand-probes-v6
git bisect bad 739b19076e1a0f8ec5979bba7ae509902ce52f89 # 15:49 0- 46 DEBUG
# extra tests on tree/branch tomeu/on-demand-probes-v6
git bisect bad 739b19076e1a0f8ec5979bba7ae509902ce52f89 # 15:49 0- 46 DEBUG
# extra tests with first bad commit reverted
# extra tests on tree/branch linus/master
git bisect good 7ddab73346a1277b90fd6a4d044bc948f9cc9ad8 # 16:24 66+ 8 Merge branch 'fixes' of git://ftp.arm.linux.org.uk/~rmk/linux-arm
# extra tests on tree/branch linux-next/master
This script may reproduce the error.
----------------------------------------------------------------------------
#!/bin/bash
kernel=$1
initrd=quantal-core-i386.cgz
wget --no-clobber https://github.com/fengguang/reproduce-kernel-bug/raw/master/initrd/$initrd
kvm=(
qemu-system-x86_64
-enable-kvm
-cpu kvm64
-kernel $kernel
-initrd $initrd
-m 300
-smp 2
-device e1000,netdev=net0
-netdev user,id=net0
-boot order=nc
-no-reboot
-watchdog i6300esb
-rtc base=localtime
-serial stdio
-display none
-monitor null
)
append=(
hung_task_panic=1
earlyprintk=ttyS0,115200
systemd.log_level=err
debug
apic=debug
sysrq_always_enabled
rcupdate.rcu_cpu_stall_timeout=100
panic=-1
softlockup_panic=1
nmi_watchdog=panic
oops=panic
load_ramdisk=2
prompt_ramdisk=0
console=ttyS0,115200
console=tty0
vga=normal
root=/dev/ram0
rw
drbd.minor_count=8
)
"${kvm[@]}" --append "${append[*]}"
----------------------------------------------------------------------------
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/lkp Intel Corporation
[x86_64] RIP: 0010:[<ffffffff813bc3dc>] [<ffffffff813bc3dc>] __asan_store8
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
commit ef7f0d6a6ca8c9e4b27d78895af86c2fbfaeedb2
Author: Andrey Ryabinin <a.ryabinin(a)samsung.com>
AuthorDate: Fri Feb 13 14:39:25 2015 -0800
Commit: Linus Torvalds <torvalds(a)linux-foundation.org>
CommitDate: Fri Feb 13 21:21:41 2015 -0800
x86_64: add KASan support
This patch adds arch specific code for kernel address sanitizer.
16TB of virtual address space is used for shadow memory. It's located in the
range [ffffec0000000000 - fffffc0000000000], between vmemmap and the %esp fixup
stacks.
At an early stage we map the whole shadow region with the zero page. Later,
after pages are mapped into the direct mapping address range, we unmap zero
pages from the corresponding shadow (see kasan_map_shadow()) and allocate and
map real shadow memory, reusing the vmemmap_populate() function.
Also replace __pa with __pa_nodebug before the shadow is initialized. __pa with
CONFIG_DEBUG_VIRTUAL=y makes an external function call (__phys_addr).
__phys_addr is instrumented, so __asan_load could be called before the shadow
area is initialized.
Signed-off-by: Andrey Ryabinin <a.ryabinin(a)samsung.com>
Cc: Dmitry Vyukov <dvyukov(a)google.com>
Cc: Konstantin Serebryany <kcc(a)google.com>
Cc: Dmitry Chernenkov <dmitryc(a)google.com>
Signed-off-by: Andrey Konovalov <adech.fo(a)gmail.com>
Cc: Yuri Gribov <tetra2005(a)gmail.com>
Cc: Konstantin Khlebnikov <koct9i(a)gmail.com>
Cc: Sasha Levin <sasha.levin(a)oracle.com>
Cc: Christoph Lameter <cl(a)linux.com>
Cc: Joonsoo Kim <iamjoonsoo.kim(a)lge.com>
Cc: Dave Hansen <dave.hansen(a)intel.com>
Cc: Andi Kleen <andi(a)firstfloor.org>
Cc: Ingo Molnar <mingo(a)elte.hu>
Cc: Thomas Gleixner <tglx(a)linutronix.de>
Cc: "H. Peter Anvin" <hpa(a)zytor.com>
Cc: Christoph Lameter <cl(a)linux.com>
Cc: Pekka Enberg <penberg(a)kernel.org>
Cc: David Rientjes <rientjes(a)google.com>
Cc: Jim Davis <jim.epost(a)gmail.com>
Signed-off-by: Andrew Morton <akpm(a)linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds(a)linux-foundation.org>
+------------------------------------------------+------------+------------+-----------------+
| | 786a895991 | ef7f0d6a6c | v4.2-rc4_080210 |
+------------------------------------------------+------------+------------+-----------------+
| boot_successes | 910 | 77 | 40 |
| boot_failures | 0 | 233 | 18 |
| RIP:rb_insert_color | 0 | 17 | |
| Kernel_panic-not_syncing:softlockup:hung_tasks | 0 | 233 | 17 |
| backtrace:insert | 0 | 72 | 1 |
| backtrace:rbtree_test_init | 0 | 232 | 17 |
| backtrace:kernel_init_freeable | 0 | 233 | 17 |
| RIP:rb_erase | 0 | 17 | |
| backtrace:apic_timer_interrupt | 0 | 57 | 2 |
| RIP:__asan_load8 | 0 | 44 | 2 |
| backtrace:rb_erase | 0 | 45 | |
| RIP:__asan_loadN | 0 | 72 | 8 |
| backtrace:erase_augmented | 0 | 33 | 5 |
| RIP:insert_augmented | 0 | 7 | 1 |
| RIP:__asan_store8 | 0 | 24 | 2 |
| RIP:__asan_store4 | 0 | 5 | |
| backtrace:insert_augmented | 0 | 26 | 9 |
| RIP:augment_recompute | 0 | 4 | |
| RIP:augment_callbacks_propagate | 0 | 1 | |
| RIP:erase_augmented | 0 | 2 | 1 |
| RIP:__rb_insert_augmented | 0 | 4 | |
| RIP:augment_callbacks_rotate | 0 | 6 | |
| RIP:insert | 0 | 8 | |
| RIP:__asan_storeN | 0 | 4 | |
| RIP:__asan_load4 | 0 | 14 | 3 |
| RIP:rbtree_test_init | 0 | 1 | |
| RIP:__rb_erase_color | 0 | 2 | |
| RIP:__rb_change_child | 0 | 1 | |
| BUG:kernel_boot_hang | 0 | 0 | 1 |
+------------------------------------------------+------------+------------+-----------------+
[ 53.667591] xz_dec_test: Create a device node with 'mknod xz_dec_test c 250 0' and write .xz files to it.
[ 53.671288] rbtree testing
[ 80.140009] NMI watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [swapper:1]
[ 80.140009] Modules linked in:
[ 80.140009] CPU: 0 PID: 1 Comm: swapper Not tainted 3.19.0-05243-gef7f0d6 #4
[ 80.140009] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140531_083030-gandalf 04/01/2014
[ 80.140009] task: ffff88000e4d0000 ti: ffff88000e4d8000 task.ti: ffff88000e4d8000
[ 80.140009] RIP: 0010:[<ffffffff813bc3dc>] [<ffffffff813bc3dc>] __asan_store8+0x4c/0x140
[ 80.140009] RSP: 0018:ffff88000e4dbd88 EFLAGS: 00000206
[ 80.140009] RAX: 0000000086dfa090 RBX: ffffffff8413730f RCX: dffffc0000000000
[ 80.140009] RDX: 0000000086dfa08f RSI: 0000000000000008 RDI: ffffffff84a3d468
[ 80.140009] RBP: ffff88000e4dbdb8 R08: fffffbfff0826e61 R09: ffffffff8413730f
[ 80.140009] R10: 0000000026d79129 R11: 0000000026d6454e R12: 0000000026d6bb7e
[ 80.140009] R13: 1ffffffff0826e61 R14: 0000000000000010 R15: ffff88000e4dbdb8
[ 80.140009] FS: 0000000000000000(0000) GS:ffffffff838b0000(0000) knlGS:0000000000000000
[ 80.140009] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[ 80.140009] CR2: 0000000000000000 CR3: 000000000388a000 CR4: 00000000000006b0
[ 80.140009] Stack:
[ 80.140009] ffff88000e4dbdf8 ffffffff84a3d440 ffffffff84a3cfb8 0000000000000002
[ 80.140009] 00000000b4f0c03a ffffffff84a3d460 ffff88000e4dbdf8 ffffffff82d2a679
[ 80.140009] 0000000026d79123 00000000000005c8 0000000000004094 00000034c1e9ef3c
[ 80.140009] Call Trace:
[ 80.140009] [<ffffffff82d2a679>] insert+0x9d/0xf1
[ 80.140009] [<ffffffff844eef7e>] rbtree_test_init+0x98/0x32c
[ 80.140009] [<ffffffff810006a9>] do_one_initcall+0x409/0x570
[ 80.140009] [<ffffffff844eeee6>] ? dynamic_debug_init+0x52a/0x52a
[ 80.140009] [<ffffffff8446d38b>] kernel_init_freeable+0x25a/0x3e4
[ 80.140009] [<ffffffff811567d4>] ? finish_task_switch+0x274/0x4c0
[ 80.140009] [<ffffffff82d142c0>] ? rest_init+0xe0/0xe0
[ 80.140009] [<ffffffff82d142df>] kernel_init+0x1f/0x2b0
[ 80.140009] [<ffffffff82d142c0>] ? rest_init+0xe0/0xe0
[ 80.140009] [<ffffffff82d50ffa>] ret_from_fork+0x7a/0xb0
[ 80.140009] [<ffffffff82d142c0>] ? rest_init+0xe0/0xe0
[ 80.140009] Code: 01 48 39 c7 76 49 48 8b 15 42 ce 3f 03 48 b9 00 00 00 00 00 fc ff df 48 83 05 00 d2 3f 03 01 48 8d 42 01 48 83 05 04 d2 3f 03 01 <48> 89 05 1d ce 3f 03 48 89 f8 48 c1 e8 03 48 01 c8 66 83 38 00
[ 80.140009] Kernel panic - not syncing: softlockup: hung tasks
[ 80.140009] CPU: 0 PID: 1 Comm: swapper Tainted: G L 3.19.0-05243-gef7f0d6 #4
[ 80.140009] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.7.5-20140531_083030-gandalf 04/01/2014
[ 80.140009] 0000000000000003 0000000000000003 0000000000000001 ffff88000e4dbc01
[ 80.140009] 0000000000000000 ffffffff838b3da8 ffffffff82d28acb ffffffff838b3e38
[ 80.140009] ffffffff82d1c785 ffffffff838b3e38 ffffffff00000008 ffffffff838b3e48
[ 80.140009] Call Trace:
[ 80.140009] <IRQ> [<ffffffff82d28acb>] dump_stack+0x2e/0x3e
[ 80.140009] [<ffffffff82d1c785>] panic+0x1bb/0x4d6
[ 80.140009] [<ffffffff81227d1e>] watchdog_timer_fn+0x46e/0x470
[ 80.140009] [<ffffffff811b7aca>] hrtimer_run_queues+0x5aa/0xb30
[ 80.140009] [<ffffffff812278b0>] ? watchdog+0x40/0x40
[ 80.140009] [<ffffffff811b59cb>] update_process_times+0x3b/0xe0
[ 80.140009] [<ffffffff811ddb6e>] tick_nohz_handler+0x15e/0x350
[ 80.140009] [<ffffffff81071dd5>] local_apic_timer_interrupt+0x65/0xb0
[ 80.140009] [<ffffffff81072245>] smp_apic_timer_interrupt+0x85/0xb0
[ 80.140009] [<ffffffff82d51dfb>] apic_timer_interrupt+0x6b/0x70
[ 80.140009] <EOI> [<ffffffff813bc3dc>] ? __asan_store8+0x4c/0x140
[ 80.140009] [<ffffffff82d2a679>] insert+0x9d/0xf1
[ 80.140009] [<ffffffff844eef7e>] rbtree_test_init+0x98/0x32c
[ 80.140009] [<ffffffff810006a9>] do_one_initcall+0x409/0x570
[ 80.140009] [<ffffffff844eeee6>] ? dynamic_debug_init+0x52a/0x52a
[ 80.140009] [<ffffffff8446d38b>] kernel_init_freeable+0x25a/0x3e4
[ 80.140009] [<ffffffff811567d4>] ? finish_task_switch+0x274/0x4c0
[ 80.140009] [<ffffffff82d142c0>] ? rest_init+0xe0/0xe0
[ 80.140009] [<ffffffff82d142df>] kernel_init+0x1f/0x2b0
[ 80.140009] [<ffffffff82d142c0>] ? rest_init+0xe0/0xe0
[ 80.140009] [<ffffffff82d50ffa>] ret_from_fork+0x7a/0xb0
[ 80.140009] [<ffffffff82d142c0>] ? rest_init+0xe0/0xe0
[ 80.140009] Kernel Offset: 0x0 from 0xffffffff81000000 (relocation range: 0xffffffff80000000-0xffffffff9fffffff)
Elapsed time: 110
git bisect start v4.0 v2.6.39 --
git bisect good 5abcd76f5d896de014bd8d1486107c483659d40d # 13:13 310+ 310 Merge branch 'x86-boot-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
git bisect good 6a4d07f85ba9da5b6eab6e60a493d459c4296176 # 13:35 310+ 156 Merge branch 'for-3.14-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup
git bisect good 9f47112975fdc32e545e079f42a17bbd0be236fc # 14:09 310+ 0 Merge branch 'upstream' of git://git.linux-mips.org/pub/scm/ralf/upstream-linus
git bisect good c0f486fde3f353232c1cc2fd4d62783ac782a467 # 14:30 310+ 0 Merge tag 'pm+acpi-3.19-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
git bisect good a42cf70eb81558082e9a26fe8541d160b6c2a694 # 14:51 301+ 0 Merge tag 'modules-next-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/rusty/linux
git bisect bad ecddad64d4ca427c71598cc23183f48bc9cc4568 # 15:07 47- 17 Merge tag 'fbdev-fixes-4.0' of git://git.kernel.org/pub/scm/linux/kernel/git/tomba/linux
git bisect bad d34696c2208b2dc1b27ec8f0a017a91e4e6eb85d # 15:15 6- 1 Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux
git bisect bad 66dc830d14a222c9214a8557e9feb1e4a67a3857 # 15:25 15- 8 Merge branch 'iov_iter' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
git bisect bad 8c334ce8f0fec7122fc3059c52a697b669a01b41 # 15:36 34- 38 Merge branch 'timers-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
git bisect bad 4ba63072b998cc31515cc6305c25f3b808b50c01 # 15:48 36- 36 Merge tag 'char-misc-3.20-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc
git bisect bad fee5429e028c414d80d036198db30454cfd91b7a # 16:00 52- 44 Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6
git bisect good 18320f2a6871aaf2522f793fee4a67eccf5e131a # 16:21 310+ 0 Merge tag 'pm+acpi-3.20-rc1-2' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm
git bisect bad 83e047c104aa95a8a683d6bd421df1551c17dbd2 # 16:57 39- 40 Merge branch 'akpm' (patches from Andrew)
git bisect good 327953e9af6c59ad111b28359e59e3ec0cbd71b6 # 17:37 310+ 0 checkpatch: add check for keyword 'boolean' in Kconfig definitions
git bisect bad 3f15801cdc2379ca4bf507f48bffd788f9e508ae # 17:47 20- 21 lib: add kasan test module
git bisect good 0f3c5aab5e00527eb3167aa9d1725cca9320e01e # 18:14 300+ 1 checkpatch: add of_device_id to structs that should be const
git bisect bad b8c73fc2493d42517be95cf2c89659fc6c6f4d02 # 18:24 8- 10 mm: page_alloc: add kasan hooks on alloc and free paths
git bisect good cb4188ac8e5779f66b9f55888ac2c75b391cde44 # 18:47 310+ 0 compiler: introduce __alias(symbol) shortcut
git bisect good 786a8959912eb94fc2381c2ae487a96ce55dabca # 19:10 306+ 0 kasan: disable memory hotplug
git bisect bad ef7f0d6a6ca8c9e4b27d78895af86c2fbfaeedb2 # 19:20 14- 12 x86_64: add KASan support
# first bad commit: [ef7f0d6a6ca8c9e4b27d78895af86c2fbfaeedb2] x86_64: add KASan support
git bisect good 786a8959912eb94fc2381c2ae487a96ce55dabca # 19:56 910+ 0 kasan: disable memory hotplug
# extra tests with DEBUG_INFO
git bisect bad ef7f0d6a6ca8c9e4b27d78895af86c2fbfaeedb2 # 20:08 31- 22 x86_64: add KASan support
# extra tests on HEAD of linux-devel/devel-hourly-2015080210
git bisect bad 8fc06a4ce2b4a6828d0a8d70daaf9d999c72fb8a # 20:08 0- 18 0day head guard for 'devel-hourly-2015080210'
# extra tests on tree/branch linus/master
git bisect bad 01183609ab61d11f1c310d42552a97be3051cc0f # 20:47 54- 31 Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
# extra tests on tree/branch linus/master
git bisect bad 01183609ab61d11f1c310d42552a97be3051cc0f # 20:47 0- 31 Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
# extra tests on tree/branch linux-next/master
This script may reproduce the error.
----------------------------------------------------------------------------
#!/bin/bash
kernel=$1
initrd=quantal-core-x86_64.cgz
wget --no-clobber https://github.com/fengguang/reproduce-kernel-bug/raw/master/initrd/$initrd
kvm=(
qemu-system-x86_64
-enable-kvm
-cpu kvm64
-kernel $kernel
-initrd $initrd
-m 300
-smp 2
-device e1000,netdev=net0
-netdev user,id=net0
-boot order=nc
-no-reboot
-watchdog i6300esb
-rtc base=localtime
-serial stdio
-display none
-monitor null
)
append=(
hung_task_panic=1
earlyprintk=ttyS0,115200
systemd.log_level=err
debug
apic=debug
sysrq_always_enabled
rcupdate.rcu_cpu_stall_timeout=100
panic=-1
softlockup_panic=1
nmi_watchdog=panic
oops=panic
load_ramdisk=2
prompt_ramdisk=0
console=ttyS0,115200
console=tty0
vga=normal
root=/dev/ram0
rw
drbd.minor_count=8
)
"${kvm[@]}" --append "${append[*]}"
----------------------------------------------------------------------------