[power_supply] 7f8d22d52d: WARNING: CPU: 0 PID: 1 at drivers/power/power_supply_core.c:569 power_supply_read_temp+0x87/0x90
by kernel test robot
FYI, we noticed the following commit:
https://github.com/0day-ci/linux Rhyland-Klein/power_supply-power_supply_read_temp-only-if-use_cnt-0/20160604-043353
commit 7f8d22d52dade417e135fd1c08ab9977882ff0b6 ("power_supply: power_supply_read_temp only if use_cnt > 0")
on test machine: vm-kbuild-yocto-x86_64: 1 threads qemu-system-x86_64 -enable-kvm -cpu SandyBridge with 320M memory
caused below changes:
+----------------+------------+------------+
|                | 4a99fa06a8 | 7f8d22d52d |
+----------------+------------+------------+
| boot_successes | 4          | 0          |
+----------------+------------+------------+
[ 2.427327] power_supply test_battery: uevent
[ 2.427748] power_supply test_battery: POWER_SUPPLY_NAME=test_battery
[ 2.428409] ------------[ cut here ]------------
[ 2.428895] WARNING: CPU: 0 PID: 1 at drivers/power/power_supply_core.c:569 power_supply_read_temp+0x87/0x90
[ 2.430001] CPU: 0 PID: 1 Comm: swapper Not tainted 4.6.0-rc2-00006-g7f8d22d #1
[ 2.430700] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 2.431570] 0000000000000000 ffff880012857cd0 ffffffff81493375 ffff880012857d10
[ 2.432524] ffffffff81082d09 0000023912857d18 ffff88000e8ad000 ffff880012857d84
[ 2.433723] ffff88000e8adbc0 ffffffff823f6ec0 ffff88000e8ad800 ffff880012857d20
[ 2.434597] Call Trace:
[ 2.434843] [<ffffffff81493375>] dump_stack+0x19/0x24
[ 2.435341] [<ffffffff81082d09>] __warn+0xb9/0xe0
[ 2.435799] [<ffffffff81082de8>] warn_slowpath_null+0x18/0x20
[ 2.436358] [<ffffffff81be8187>] power_supply_read_temp+0x87/0x90
[ 2.436939] [<ffffffff81c4d76f>] thermal_zone_get_temp+0x4f/0x70
[ 2.437519] [<ffffffff81c4d7a7>] thermal_zone_device_update+0x17/0x110
[ 2.438214] [<ffffffff814aeff6>] ? debug_object_init+0x16/0x20
[ 2.438778] [<ffffffff81c4f284>] thermal_zone_device_register+0x834/0x920
[ 2.439431] [<ffffffff81be8512>] __power_supply_register+0x242/0x500
[ 2.440041] [<ffffffff81be8c3e>] power_supply_register+0xe/0x10
[ 2.440626] [<ffffffff82a2b96d>] test_power_init+0x4b/0xb2
[ 2.441160] [<ffffffff82a2b922>] ? wm831x_backup_driver_init+0x14/0x14
[ 2.441799] [<ffffffff829e8ff6>] do_one_initcall+0xf1/0x182
[ 2.442344] [<ffffffff829e87ca>] ? set_debug_rodata+0x12/0x12
[ 2.442896] [<ffffffff829e91a6>] kernel_init_freeable+0x11f/0x1a7
[ 2.443488] [<ffffffff820f79a9>] kernel_init+0x9/0xf0
[ 2.443995] [<ffffffff82100b62>] ret_from_fork+0x22/0x40
[ 2.444522] [<ffffffff820f79a0>] ? rest_init+0x80/0x80
[ 2.445028] ---[ end trace cc5ecf4c9ffecec6 ]---
[ 2.445495] __power_supply_register: Expected proper parent device for 'test_usb'
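The warning above is consistent with an ordering problem: thermal_zone_device_register() polls the temperature during __power_supply_register(), before the supply's use_cnt has been raised, so the new `use_cnt > 0` guard fires on the very first read. A minimal user-space model of that ordering (all names hypothetical, not the kernel's actual code):

```c
/* Toy model of the use_cnt guard introduced by the commit:
 * temperature reads are rejected (and a warning counted) while
 * use_cnt == 0. */
struct psy_model {
    int use_cnt;
    int temp_dc;    /* tenths of a degree C */
    int warn_count; /* stands in for the WARN_ON in the boot log */
};

/* Guarded read: -ENODEV (-19) when the supply is not yet usable. */
int model_read_temp(struct psy_model *psy, int *temp_out)
{
    if (psy->use_cnt <= 0) {
        psy->warn_count++;   /* the WARNING at power_supply_core.c:569 */
        return -19;          /* -ENODEV */
    }
    *temp_out = psy->temp_dc;
    return 0;
}

/* Registration order as in the failing boot: the thermal zone is
 * registered (and reads the temperature once) BEFORE use_cnt is set. */
int model_register(struct psy_model *psy)
{
    int temp;
    int ret = model_read_temp(psy, &temp); /* thermal zone's first get_temp */
    psy->use_cnt = 1;                      /* too late: the read already warned */
    return ret;
}
```

Raising use_cnt before the thermal-zone registration, or tolerating -ENODEV there without warning, would avoid the boot-time splat; that is the kind of ordering fix this report is implicitly asking for.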
FYI, raw QEMU command line is:
qemu-system-x86_64 -enable-kvm -cpu SandyBridge -kernel /pkg/linux/x86_64-randconfig-s2-06040635/gcc-6/7f8d22d52dade417e135fd1c08ab9977882ff0b6/vmlinuz-4.6.0-rc2-00006-g7f8d22d -append 'root=/dev/ram0 user=lkp job=/lkp/scheduled/vm-kbuild-yocto-x86_64-6/bisect_boot-1-yocto-minimal-x86_64.cgz-x86_64-randconfig-s2-06040635-7f8d22d52dade417e135fd1c08ab9977882ff0b6-20160604-100904-1gq6z3z-1.yaml ARCH=x86_64 kconfig=x86_64-randconfig-s2-06040635 branch=linux-devel/devel-spot-201606040556 commit=7f8d22d52dade417e135fd1c08ab9977882ff0b6 BOOT_IMAGE=/pkg/linux/x86_64-randconfig-s2-06040635/gcc-6/7f8d22d52dade417e135fd1c08ab9977882ff0b6/vmlinuz-4.6.0-rc2-00006-g7f8d22d max_uptime=600 RESULT_ROOT=/result/boot/1/vm-kbuild-yocto-x86_64/yocto-minimal-x86_64.cgz/x86_64-randconfig-s2-06040635/gcc-6/7f8d22d52dade417e135fd1c08ab9977882ff0b6/0 LKP_SERVER=inn earlyprintk=ttyS0,115200 systemd.log_level=err debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal rw ip=::::vm-kbuild-yocto-x86_64-6::dhcp drbd.minor_count=8' -initrd /fs/sdc1/initrd-vm-kbuild-yocto-x86_64-6 -m 320 -smp 1 -device e1000,netdev=net0 -netdev user,id=net0 -boot order=nc -no-reboot -watchdog i6300esb -rtc base=localtime -drive file=/fs/sdc1/disk0-vm-kbuild-yocto-x86_64-6,media=disk,if=virtio -pidfile /dev/shm/kboot/pid-vm-kbuild-yocto-x86_64-6 -serial file:/dev/shm/kboot/serial-vm-kbuild-yocto-x86_64-6 -daemonize -display none -monitor null
Thanks,
Kernel Test Robot
[lkp] [dcache_{readdir, dir_lseek}() users] 4e82901cd6: reaim.jobs_per_min -49.1% regression
by kernel test robot
FYI, we noticed reaim.jobs_per_min -49.1% regression due to commit:
commit 4e82901cd6d1af21ae232ae835c36d8230c809e8 ("dcache_{readdir,dir_lseek}() users: switch to ->iterate_shared")
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
in testcase: reaim
on test machine: lkp-hsx04: 144 threads Brickland Haswell-EX with 512G memory
with following parameters: cpufreq_governor=performance/iterations=4/nr_task=1600%/test=fserver
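One plausible reading of the regression: with the legacy ->iterate method, readdir holds the directory's inode lock exclusively, while ->iterate_shared takes it shared, so concurrent readers of the same directory now run in parallel and contend on whatever inner state they still share (which would match the child_systime blow-up in the numbers below). A simplified sketch of the dispatch rule, loosely modeled on iterate_dir() with hypothetical types:

```c
/* Which inode lock a readdir would take, in this simplified model. */
enum lock_mode { LOCK_NONE, LOCK_EXCLUSIVE, LOCK_SHARED };

struct dir_ops_model {
    int has_iterate;         /* legacy ->iterate */
    int has_iterate_shared;  /* new ->iterate_shared */
};

enum lock_mode model_readdir_lock(const struct dir_ops_model *ops)
{
    if (ops->has_iterate_shared)
        return LOCK_SHARED;     /* inode_lock_shared(): readers in parallel */
    if (ops->has_iterate)
        return LOCK_EXCLUSIVE;  /* inode_lock(): one readdir at a time */
    return LOCK_NONE;           /* not a readable directory */
}
```

Under the shared mode, serialization the filesystem previously got for free from the exclusive lock has to be provided some other way; for dcache_readdir()-style filesystems that walk the parent's child list with a cursor, the replacement synchronization can become the new hot spot under 1600% task load.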
Details are as below:
-------------------------------------------------------------------------------------------------->
=========================================================================================
compiler/cpufreq_governor/iterations/kconfig/nr_task/rootfs/tbox_group/test/testcase:
gcc-4.9/performance/4/x86_64-rhel/1600%/debian-x86_64-2015-02-07.cgz/lkp-hsx04/fserver/reaim
commit:
3125d2650cae97d8f313ab696cd0ed66916e767a
4e82901cd6d1af21ae232ae835c36d8230c809e8
3125d2650cae97d8 4e82901cd6d1af21ae232ae835
---------------- --------------------------
%stddev %change %stddev
\ | \
189504 ± 5% -49.1% 96506 ± 1% reaim.jobs_per_min
82.25 ± 5% -49.1% 41.89 ± 1% reaim.jobs_per_min_child
486.97 ± 6% +2540.6% 12859 ± 1% reaim.child_systime
377.56 ± 0% +14.0% 430.41 ± 0% reaim.child_utime
66.25 ± 0% +32.4% 87.69 ± 1% reaim.jti
191210 ± 5% -48.4% 98588 ± 2% reaim.max_jobs_per_min
73.90 ± 5% +95.9% 144.75 ± 1% reaim.parent_time
33.26 ± 1% -64.8% 11.71 ± 10% reaim.std_dev_percent
18.74 ± 4% -17.6% 15.44 ± 10% reaim.std_dev_time
304.45 ± 5% +93.1% 587.85 ± 1% reaim.time.elapsed_time
304.45 ± 5% +93.1% 587.85 ± 1% reaim.time.elapsed_time.max
1049052 ± 0% +44.3% 1513524 ± 1% reaim.time.involuntary_context_switches
1.766e+08 ± 0% +1.1% 1.784e+08 ± 0% reaim.time.minor_page_faults
1137 ± 1% +695.0% 9043 ± 1% reaim.time.percent_of_cpu_this_job_got
1950 ± 6% +2537.6% 51439 ± 1% reaim.time.system_time
1510 ± 0% +14.0% 1721 ± 0% reaim.time.user_time
7816334 ± 2% -26.5% 5744712 ± 3% reaim.time.voluntary_context_switches
358.57 ± 4% +78.9% 641.41 ± 1% uptime.boot
47094 ± 4% -55.0% 21202 ± 5% uptime.idle
632750 ± 7% +583.2% 4323189 ± 6% softirqs.RCU
573650 ± 0% +79.8% 1031264 ± 0% softirqs.SCHED
3249858 ± 1% +813.2% 29677185 ± 1% softirqs.TIMER
941405 ± 0% +33.9% 1260181 ± 1% vmstat.memory.cache
1.50 ± 33% +25466.7% 383.50 ± 12% vmstat.procs.b
11.75 ± 9% +1551.1% 194.00 ± 3% vmstat.procs.r
66096 ± 6% -59.7% 26611 ± 2% vmstat.system.cs
13790 ± 2% +679.2% 107457 ± 2% vmstat.system.in
304.45 ± 5% +93.1% 587.85 ± 1% time.elapsed_time
304.45 ± 5% +93.1% 587.85 ± 1% time.elapsed_time.max
1049052 ± 0% +44.3% 1513524 ± 1% time.involuntary_context_switches
1137 ± 1% +695.0% 9043 ± 1% time.percent_of_cpu_this_job_got
1950 ± 6% +2537.6% 51439 ± 1% time.system_time
1510 ± 0% +14.0% 1721 ± 0% time.user_time
7816334 ± 2% -26.5% 5744712 ± 3% time.voluntary_context_switches
5.521e+08 ± 2% +128.2% 1.26e+09 ± 9% cpuidle.C1-HSW.time
1119526 ± 12% +39.4% 1560698 ± 8% cpuidle.C1-HSW.usage
1.473e+09 ± 3% -26.0% 1.09e+09 ± 13% cpuidle.C1E-HSW.time
3341952 ± 0% -81.1% 631688 ± 8% cpuidle.C1E-HSW.usage
3.215e+08 ± 9% +626.5% 2.336e+09 ± 9% cpuidle.C3-HSW.time
400710 ± 3% +123.0% 893484 ± 5% cpuidle.C3-HSW.usage
3.797e+10 ± 5% -38.3% 2.342e+10 ± 3% cpuidle.C6-HSW.time
4992473 ± 0% -38.2% 3086799 ± 4% cpuidle.C6-HSW.usage
86345226 ± 17% +246.7% 2.993e+08 ± 9% cpuidle.POLL.time
680.00 ± 6% +441.2% 3680 ± 11% cpuidle.POLL.usage
8.28 ± 1% +707.7% 66.86 ± 1% turbostat.%Busy
241.50 ± 1% +700.9% 1934 ± 1% turbostat.Avg_MHz
19.38 ± 2% +25.8% 24.39 ± 1% turbostat.CPU%c1
0.90 ± 8% +86.7% 1.68 ± 7% turbostat.CPU%c3
71.44 ± 0% -90.1% 7.07 ± 4% turbostat.CPU%c6
40.50 ± 2% +30.2% 52.75 ± 3% turbostat.CoreTmp
6.20 ± 3% -87.9% 0.75 ± 13% turbostat.Pkg%pc2
44.75 ± 1% +25.7% 56.25 ± 1% turbostat.PkgTmp
264.66 ± 0% +83.6% 485.92 ± 0% turbostat.PkgWatt
235.00 ± 0% +2.3% 240.47 ± 0% turbostat.RAMWatt
679219 ± 0% +30.0% 883283 ± 1% meminfo.Active
576849 ± 0% +35.3% 780525 ± 2% meminfo.Active(anon)
562337 ± 0% +17.2% 659238 ± 1% meminfo.AnonPages
510249 ± 0% +19.7% 610864 ± 1% meminfo.Cached
2279043 ± 0% +22.0% 2780860 ± 3% meminfo.Committed_AS
10600 ± 0% +16.4% 12336 ± 3% meminfo.Inactive(anon)
57088 ± 0% +256.1% 203274 ± 5% meminfo.KernelStack
18972 ± 0% +9.3% 20734 ± 4% meminfo.Mapped
109131 ± 5% +30.5% 142400 ± 3% meminfo.PageTables
84481 ± 0% +25.7% 106227 ± 1% meminfo.SReclaimable
346606 ± 0% +57.4% 545507 ± 3% meminfo.SUnreclaim
25419 ± 1% +395.4% 125929 ± 9% meminfo.Shmem
431088 ± 0% +51.2% 651735 ± 2% meminfo.Slab
25.10 ± 4% -88.1% 2.98 ± 13% perf-profile.cycles-pp.call_cpuidle
26.12 ± 4% -88.4% 3.02 ± 13% perf-profile.cycles-pp.cpu_startup_entry
25.10 ± 4% -88.1% 2.98 ± 13% perf-profile.cycles-pp.cpuidle_enter
23.90 ± 4% -87.7% 2.95 ± 13% perf-profile.cycles-pp.cpuidle_enter_state
23.59 ± 3% -87.8% 2.87 ± 13% perf-profile.cycles-pp.intel_idle
1.10 ± 2% -100.0% 0.00 ± -1% perf-profile.cycles-pp.reschedule_interrupt
1.38 ± 6% -87.1% 0.18 ± 49% perf-profile.cycles-pp.rest_init
1.10 ± 2% -100.0% 0.00 ± -1% perf-profile.cycles-pp.scheduler_ipi
1.10 ± 2% -100.0% 0.00 ± -1% perf-profile.cycles-pp.smp_reschedule_interrupt
1.38 ± 6% -87.1% 0.18 ± 49% perf-profile.cycles-pp.start_kernel
24.80 ± 4% -88.5% 2.84 ± 11% perf-profile.cycles-pp.start_secondary
1.38 ± 6% -87.1% 0.18 ± 49% perf-profile.cycles-pp.x86_64_start_kernel
1.38 ± 6% -87.1% 0.18 ± 49% perf-profile.cycles-pp.x86_64_start_reservations
144401 ± 0% +35.4% 195475 ± 1% proc-vmstat.nr_active_anon
140772 ± 0% +17.3% 165063 ± 1% proc-vmstat.nr_anon_pages
127559 ± 0% +19.8% 152794 ± 1% proc-vmstat.nr_file_pages
2649 ± 0% +16.2% 3079 ± 3% proc-vmstat.nr_inactive_anon
0.00 ± 0% +Inf% 427.25 ± 5% proc-vmstat.nr_isolated_anon
3574 ± 0% +254.6% 12675 ± 4% proc-vmstat.nr_kernel_stack
27385 ± 6% +30.0% 35613 ± 3% proc-vmstat.nr_page_table_pages
6352 ± 1% +396.9% 31560 ± 9% proc-vmstat.nr_shmem
21119 ± 0% +25.8% 26563 ± 1% proc-vmstat.nr_slab_reclaimable
86645 ± 0% +57.2% 136181 ± 3% proc-vmstat.nr_slab_unreclaimable
18704 ± 91% +6342.3% 1204981 ± 1% proc-vmstat.numa_hint_faults
7213 ±107% +7548.2% 551723 ± 2% proc-vmstat.numa_hint_faults_local
41554 ± 1% +13.0% 46949 ± 3% proc-vmstat.numa_other
34.75 ± 25% +9.5e+05% 330841 ± 4% proc-vmstat.numa_pages_migrated
75.00 ± 0% +2.4e+06% 1806450 ± 1% proc-vmstat.numa_pte_updates
6980 ± 3% +286.5% 26983 ± 14% proc-vmstat.pgactivate
9.75 ± 46% +29151.3% 2852 ± 6% proc-vmstat.pgmigrate_fail
34.75 ± 25% +9.5e+05% 330841 ± 4% proc-vmstat.pgmigrate_success
30962 ± 2% +49.7% 46340 ± 22% numa-vmstat.node0.nr_file_pages
1336 ± 26% +176.5% 3696 ± 17% numa-vmstat.node0.nr_kernel_stack
655.25 ±138% +2346.8% 16032 ± 64% numa-vmstat.node0.nr_shmem
4444 ± 4% +60.6% 7138 ± 4% numa-vmstat.node0.nr_slab_reclaimable
24105 ± 13% +52.7% 36818 ± 14% numa-vmstat.node0.nr_slab_unreclaimable
37828 ± 8% +20.4% 45545 ± 10% numa-vmstat.node1.nr_active_anon
0.00 ± 0% +Inf% 104.75 ± 4% numa-vmstat.node1.nr_isolated_anon
777.00 ± 1% +263.0% 2820 ± 7% numa-vmstat.node1.nr_kernel_stack
21035 ± 3% +49.4% 31432 ± 3% numa-vmstat.node1.nr_slab_unreclaimable
19261584 ± 3% -8.3% 17664368 ± 6% numa-vmstat.node1.numa_local
6445 ± 5% -6.9% 6000 ± 2% numa-vmstat.node2.nr_active_file
731.75 ± 54% +274.9% 2743 ± 11% numa-vmstat.node2.nr_kernel_stack
5668 ± 9% +21.7% 6898 ± 6% numa-vmstat.node2.nr_slab_reclaimable
20284 ± 17% +52.0% 30833 ± 11% numa-vmstat.node2.nr_slab_unreclaimable
32582 ± 11% +30.8% 42629 ± 7% numa-vmstat.node3.nr_active_anon
32573 ± 11% +28.2% 41746 ± 7% numa-vmstat.node3.nr_anon_pages
0.00 ± 0% +Inf% 107.25 ± 10% numa-vmstat.node3.nr_isolated_anon
722.75 ± 31% +373.0% 3418 ± 18% numa-vmstat.node3.nr_kernel_stack
802.50 ± 3% +56.5% 1255 ± 51% numa-vmstat.node3.nr_mapped
269.25 ± 31% +265.0% 982.75 ± 84% numa-vmstat.node3.nr_shmem
4816 ± 17% +34.6% 6484 ± 8% numa-vmstat.node3.nr_slab_reclaimable
21220 ± 11% +75.8% 37314 ± 12% numa-vmstat.node3.nr_slab_unreclaimable
123852 ± 2% +49.7% 185420 ± 22% numa-meminfo.node0.FilePages
21489 ± 26% +174.9% 59065 ± 17% numa-meminfo.node0.KernelStack
861898 ± 3% +30.0% 1120062 ± 9% numa-meminfo.node0.MemUsed
17776 ± 4% +60.6% 28540 ± 4% numa-meminfo.node0.SReclaimable
96387 ± 13% +52.6% 147124 ± 14% numa-meminfo.node0.SUnreclaim
2622 ±138% +2347.4% 64189 ± 64% numa-meminfo.node0.Shmem
114164 ± 11% +53.9% 175665 ± 12% numa-meminfo.node0.Slab
175655 ± 6% +19.3% 209563 ± 9% numa-meminfo.node1.Active
151199 ± 8% +20.9% 182816 ± 10% numa-meminfo.node1.Active(anon)
12414 ± 2% +263.7% 45153 ± 6% numa-meminfo.node1.KernelStack
84163 ± 3% +49.4% 125761 ± 3% numa-meminfo.node1.SUnreclaim
108941 ± 6% +37.7% 149995 ± 3% numa-meminfo.node1.Slab
25784 ± 5% -6.9% 24004 ± 2% numa-meminfo.node2.Active(file)
11770 ± 54% +272.3% 43819 ± 10% numa-meminfo.node2.KernelStack
22674 ± 9% +21.7% 27592 ± 6% numa-meminfo.node2.SReclaimable
81132 ± 17% +51.8% 123123 ± 11% numa-meminfo.node2.SUnreclaim
103806 ± 14% +45.2% 150716 ± 9% numa-meminfo.node2.Slab
155823 ± 9% +26.8% 197624 ± 6% numa-meminfo.node3.Active
130231 ± 11% +31.2% 170866 ± 7% numa-meminfo.node3.Active(anon)
130132 ± 11% +28.6% 167346 ± 7% numa-meminfo.node3.AnonPages
11486 ± 32% +376.1% 54692 ± 18% numa-meminfo.node3.KernelStack
3220 ± 3% +56.7% 5046 ± 51% numa-meminfo.node3.Mapped
731454 ± 4% +23.8% 905502 ± 8% numa-meminfo.node3.MemUsed
19265 ± 17% +34.6% 25932 ± 8% numa-meminfo.node3.SReclaimable
84857 ± 11% +75.8% 149168 ± 11% numa-meminfo.node3.SUnreclaim
1072 ± 31% +267.3% 3939 ± 84% numa-meminfo.node3.Shmem
104123 ± 11% +68.2% 175101 ± 10% numa-meminfo.node3.Slab
2655 ± 0% +12.3% 2980 ± 1% slabinfo.anon_vma_chain.active_slabs
169938 ± 0% +12.3% 190786 ± 1% slabinfo.anon_vma_chain.num_objs
2655 ± 0% +12.3% 2980 ± 1% slabinfo.anon_vma_chain.num_slabs
57371 ± 2% +51.4% 86867 ± 2% slabinfo.cred_jar.active_objs
1375 ± 2% +59.0% 2187 ± 3% slabinfo.cred_jar.active_slabs
57785 ± 2% +59.0% 91883 ± 3% slabinfo.cred_jar.num_objs
1375 ± 2% +59.0% 2187 ± 3% slabinfo.cred_jar.num_slabs
105760 ± 0% +70.2% 180007 ± 2% slabinfo.dentry.active_objs
2611 ± 0% +65.9% 4334 ± 2% slabinfo.dentry.active_slabs
109710 ± 0% +65.9% 182048 ± 2% slabinfo.dentry.num_objs
2611 ± 0% +65.9% 4334 ± 2% slabinfo.dentry.num_slabs
12542 ± 0% +19.8% 15025 ± 2% slabinfo.files_cache.active_objs
12542 ± 0% +19.8% 15025 ± 2% slabinfo.files_cache.num_objs
12325 ± 0% +13.1% 13944 ± 3% slabinfo.kmalloc-128.active_objs
12368 ± 0% +12.8% 13949 ± 3% slabinfo.kmalloc-128.num_objs
48607 ± 1% +599.8% 340166 ± 7% slabinfo.kmalloc-256.active_objs
1406 ± 0% +296.1% 5569 ± 7% slabinfo.kmalloc-256.active_slabs
90025 ± 0% +295.9% 356448 ± 7% slabinfo.kmalloc-256.num_objs
1406 ± 0% +296.1% 5569 ± 7% slabinfo.kmalloc-256.num_slabs
19374 ± 4% +32.1% 25598 ± 11% slabinfo.kmalloc-512.active_objs
303.50 ± 4% +31.9% 400.25 ± 11% slabinfo.kmalloc-512.active_slabs
19466 ± 4% +31.7% 25645 ± 11% slabinfo.kmalloc-512.num_objs
303.50 ± 4% +31.9% 400.25 ± 11% slabinfo.kmalloc-512.num_slabs
54555 ± 0% +11.0% 60529 ± 1% slabinfo.kmalloc-64.active_objs
852.00 ± 0% +10.9% 945.25 ± 1% slabinfo.kmalloc-64.active_slabs
54569 ± 0% +10.9% 60543 ± 1% slabinfo.kmalloc-64.num_objs
852.00 ± 0% +10.9% 945.25 ± 1% slabinfo.kmalloc-64.num_slabs
8353 ± 2% +16.1% 9697 ± 1% slabinfo.mm_struct.active_objs
8767 ± 2% +12.5% 9867 ± 0% slabinfo.mm_struct.num_objs
18102 ± 1% +321.6% 76312 ± 2% slabinfo.pid.active_objs
282.25 ± 1% +324.1% 1197 ± 2% slabinfo.pid.active_slabs
18102 ± 1% +323.3% 76624 ± 2% slabinfo.pid.num_objs
282.25 ± 1% +324.1% 1197 ± 2% slabinfo.pid.num_slabs
15888 ± 0% +88.2% 29907 ± 5% slabinfo.radix_tree_node.active_objs
283.00 ± 0% +88.5% 533.50 ± 5% slabinfo.radix_tree_node.active_slabs
15888 ± 0% +88.3% 29912 ± 5% slabinfo.radix_tree_node.num_objs
283.00 ± 0% +88.5% 533.50 ± 5% slabinfo.radix_tree_node.num_slabs
37875 ± 1% +18.7% 44946 ± 2% slabinfo.shmem_inode_cache.active_objs
782.75 ± 1% +19.0% 931.25 ± 2% slabinfo.shmem_inode_cache.active_slabs
38374 ± 1% +19.0% 45658 ± 2% slabinfo.shmem_inode_cache.num_objs
782.75 ± 1% +19.0% 931.25 ± 2% slabinfo.shmem_inode_cache.num_slabs
9093 ± 2% +19.5% 10863 ± 0% slabinfo.sighand_cache.active_objs
9517 ± 1% +15.7% 11011 ± 0% slabinfo.sighand_cache.num_objs
15657 ± 4% +22.7% 19216 ± 2% slabinfo.signal_cache.active_objs
532.75 ± 3% +24.5% 663.25 ± 2% slabinfo.signal_cache.active_slabs
15994 ± 3% +24.4% 19903 ± 2% slabinfo.signal_cache.num_objs
532.75 ± 3% +24.5% 663.25 ± 2% slabinfo.signal_cache.num_slabs
4059 ± 0% +220.4% 13004 ± 4% slabinfo.task_struct.active_objs
1416 ± 0% +210.5% 4397 ± 4% slabinfo.task_struct.active_slabs
4249 ± 0% +210.5% 13193 ± 4% slabinfo.task_struct.num_objs
1416 ± 0% +210.5% 4397 ± 4% slabinfo.task_struct.num_slabs
9689 ± 1% -14.4% 8298 ± 0% slabinfo.tw_sock_TCP.active_objs
9689 ± 1% -14.4% 8298 ± 0% slabinfo.tw_sock_TCP.num_objs
118717 ± 0% +8.7% 129087 ± 2% slabinfo.vm_area_struct.active_objs
2834 ± 0% +11.0% 3147 ± 1% slabinfo.vm_area_struct.active_slabs
124715 ± 0% +11.0% 138490 ± 1% slabinfo.vm_area_struct.num_objs
2834 ± 0% +11.0% 3147 ± 1% slabinfo.vm_area_struct.num_slabs
826.91 ±172% +6.4e+05% 5282284 ± 23% sched_debug.cfs_rq:/.MIN_vruntime.avg
119075 ±172% +29123.4% 34797824 ± 5% sched_debug.cfs_rq:/.MIN_vruntime.max
9888 ±172% +1.1e+05% 10423763 ± 12% sched_debug.cfs_rq:/.MIN_vruntime.stddev
3.46 ± 9% +6452.2% 226.68 ± 4% sched_debug.cfs_rq:/.load.avg
76.63 ± 15% +13122.9% 10133 ± 58% sched_debug.cfs_rq:/.load.max
13.44 ± 12% +7589.8% 1033 ± 43% sched_debug.cfs_rq:/.load.stddev
10.02 ± 19% +2131.5% 223.68 ± 4% sched_debug.cfs_rq:/.load_avg.avg
192.42 ± 15% +4769.4% 9369 ± 59% sched_debug.cfs_rq:/.load_avg.max
31.87 ± 17% +2895.1% 954.48 ± 42% sched_debug.cfs_rq:/.load_avg.stddev
826.91 ±172% +6.4e+05% 5282285 ± 23% sched_debug.cfs_rq:/.max_vruntime.avg
119075 ±172% +29123.4% 34797824 ± 5% sched_debug.cfs_rq:/.max_vruntime.max
9888 ±172% +1.1e+05% 10423763 ± 12% sched_debug.cfs_rq:/.max_vruntime.stddev
867397 ± 4% +3259.7% 29141996 ± 5% sched_debug.cfs_rq:/.min_vruntime.avg
1522256 ± 3% +2296.5% 36481296 ± 5% sched_debug.cfs_rq:/.min_vruntime.max
317110 ± 9% +6979.6% 22450187 ± 3% sched_debug.cfs_rq:/.min_vruntime.min
466641 ± 2% +598.1% 3257731 ± 10% sched_debug.cfs_rq:/.min_vruntime.stddev
0.07 ± 12% +926.9% 0.72 ± 4% sched_debug.cfs_rq:/.nr_running.avg
1.00 ± 0% +120.5% 2.20 ± 16% sched_debug.cfs_rq:/.nr_running.max
0.24 ± 5% +148.7% 0.60 ± 8% sched_debug.cfs_rq:/.nr_running.stddev
1.57 ± 4% +12191.7% 193.17 ± 11% sched_debug.cfs_rq:/.runnable_load_avg.avg
44.80 ± 18% +17097.4% 7704 ± 50% sched_debug.cfs_rq:/.runnable_load_avg.max
6.83 ± 10% +11845.7% 816.26 ± 34% sched_debug.cfs_rq:/.runnable_load_avg.stddev
-561803 ±-17% +909.6% -5671743 ±-17% sched_debug.cfs_rq:/.spread0.avg
93073 ± 48% +1693.9% 1669627 ± 33% sched_debug.cfs_rq:/.spread0.max
-1112123 ± -9% +1011.8% -12365037 ±-14% sched_debug.cfs_rq:/.spread0.min
466665 ± 2% +598.4% 3259347 ± 10% sched_debug.cfs_rq:/.spread0.stddev
72.03 ± 8% +681.0% 562.57 ± 4% sched_debug.cfs_rq:/.util_avg.avg
845.49 ± 7% +19.8% 1012 ± 1% sched_debug.cfs_rq:/.util_avg.max
162.56 ± 7% +141.2% 392.17 ± 2% sched_debug.cfs_rq:/.util_avg.stddev
1041175 ± 4% +74.3% 1814551 ± 41% sched_debug.cpu.avg_idle.max
190602 ± 7% +74.9% 333448 ± 4% sched_debug.cpu.clock.avg
190622 ± 7% +74.9% 333489 ± 4% sched_debug.cpu.clock.max
190578 ± 7% +74.9% 333385 ± 4% sched_debug.cpu.clock.min
12.27 ± 11% +123.7% 27.45 ± 17% sched_debug.cpu.clock.stddev
190602 ± 7% +74.9% 333448 ± 4% sched_debug.cpu.clock_task.avg
190622 ± 7% +74.9% 333489 ± 4% sched_debug.cpu.clock_task.max
190578 ± 7% +74.9% 333385 ± 4% sched_debug.cpu.clock_task.min
12.27 ± 11% +123.7% 27.45 ± 17% sched_debug.cpu.clock_task.stddev
2.48 ± 30% +7940.8% 199.54 ± 8% sched_debug.cpu.cpu_load[0].avg
151.10 ± 73% +5500.9% 8462 ± 53% sched_debug.cpu.cpu_load[0].max
16.01 ± 55% +5396.2% 879.71 ± 37% sched_debug.cpu.cpu_load[0].stddev
2.30 ± 25% +8559.3% 198.83 ± 8% sched_debug.cpu.cpu_load[1].avg
103.62 ± 61% +7965.2% 8357 ± 53% sched_debug.cpu.cpu_load[1].max
12.13 ± 42% +7072.1% 869.95 ± 37% sched_debug.cpu.cpu_load[1].stddev
2.17 ± 20% +8999.8% 197.88 ± 9% sched_debug.cpu.cpu_load[2].avg
79.29 ± 49% +10256.5% 8211 ± 52% sched_debug.cpu.cpu_load[2].max
10.29 ± 32% +8233.9% 857.48 ± 36% sched_debug.cpu.cpu_load[2].stddev
2.09 ± 16% +9348.3% 197.64 ± 9% sched_debug.cpu.cpu_load[3].avg
69.15 ± 34% +11736.4% 8184 ± 52% sched_debug.cpu.cpu_load[3].max
9.19 ± 22% +9151.1% 850.06 ± 36% sched_debug.cpu.cpu_load[3].stddev
2.05 ± 14% +9635.5% 199.62 ± 8% sched_debug.cpu.cpu_load[4].avg
57.27 ± 25% +14613.6% 8427 ± 54% sched_debug.cpu.cpu_load[4].max
8.27 ± 17% +10402.4% 868.48 ± 38% sched_debug.cpu.cpu_load[4].stddev
3419 ± 20% +1048.9% 39291 ± 7% sched_debug.cpu.curr->pid.avg
13248 ± 10% +120.8% 29253 ± 3% sched_debug.cpu.curr->pid.stddev
4.14 ± 23% +5375.2% 226.50 ± 4% sched_debug.cpu.load.avg
166.82 ± 60% +5974.4% 10133 ± 58% sched_debug.cpu.load.max
20.66 ± 42% +4903.2% 1033 ± 43% sched_debug.cpu.load.stddev
519931 ± 4% +88.8% 981576 ± 44% sched_debug.cpu.max_idle_balance_cost.max
1655 ±112% +3127.4% 53420 ± 62% sched_debug.cpu.max_idle_balance_cost.stddev
0.00 ± 6% +257.7% 0.00 ± 14% sched_debug.cpu.next_balance.stddev
57551 ± 8% +292.1% 225655 ± 4% sched_debug.cpu.nr_load_updates.avg
74049 ± 6% +245.3% 255674 ± 4% sched_debug.cpu.nr_load_updates.max
42700 ± 6% +360.8% 196777 ± 2% sched_debug.cpu.nr_load_updates.min
10498 ± 5% +45.5% 15274 ± 7% sched_debug.cpu.nr_load_updates.stddev
0.07 ± 16% +1759.2% 1.37 ± 27% sched_debug.cpu.nr_running.avg
1.28 ± 25% +927.9% 13.19 ± 25% sched_debug.cpu.nr_running.max
0.26 ± 10% +629.1% 1.88 ± 24% sched_debug.cpu.nr_running.stddev
61209 ± 4% -22.7% 47333 ± 4% sched_debug.cpu.nr_switches.avg
33110 ± 3% -55.3% 14805 ± 6% sched_debug.cpu.nr_switches.stddev
9.09 ± 6% +36.1% 12.36 ± 4% sched_debug.cpu.nr_uninterruptible.avg
534.05 ± 7% +189.9% 1548 ± 8% sched_debug.cpu.nr_uninterruptible.max
-503.79 ± -4% +246.5% -1745 ± -5% sched_debug.cpu.nr_uninterruptible.min
392.51 ± 9% +124.5% 881.29 ± 4% sched_debug.cpu.nr_uninterruptible.stddev
190581 ± 7% +74.9% 333386 ± 4% sched_debug.cpu_clk
188297 ± 7% +75.8% 331102 ± 4% sched_debug.ktime
0.12 ± 7% -34.3% 0.08 ± 7% sched_debug.rt_rq:/.rt_time.avg
9.29 ± 7% -45.7% 5.04 ± 21% sched_debug.rt_rq:/.rt_time.max
0.93 ± 7% -41.8% 0.54 ± 16% sched_debug.rt_rq:/.rt_time.stddev
190581 ± 7% +74.9% 333386 ± 4% sched_debug.sched_clk
reaim.child_systime
14000 ++------------------------------------------------------------------+
| O O O OO O O
12000 ++ |
| O |
10000 O+O O O O O |
| OO O O OO O O OO O O O OO O O O OO O O O |
8000 ++ |
| |
6000 ++ |
| |
4000 ++ |
| |
2000 ++ |
*.*.**.*.*.*.**.*.*.**.*.*.*.**.*.*.*.**.*.*.*.**.*.*.**. .*. .* |
0 ++-----------------------O-------------------------------*-O-*--*---+
reaim.child_utime
1000 O+--OO---O-O---------------------------------------------------------+
900 ++O O |
| |
800 ++ |
700 ++ |
| |
600 ++ |
500 ++ |
400 ++ O OO O O O O O O O O OO O O O OO O O O OO O O O OO O O
*.*.**.*.*.*.*.**.*.*.*.**.*.*.*.*.**.*.*.*.**.*.*.*.**.*.*.*.*.** |
300 ++ |
200 ++ |
| |
100 ++ |
0 ++-----------------------O----------------------------------O--------+
reaim.std_dev_percent
40 ++*---*--*---*-*---*--*-*-*---*---*----*---*---------------------------+
* * * * * * * * ** * *. .*.*. |
35 ++ *.**.* *.*.*.**.* |
30 ++ |
| |
25 ++ |
| |
20 ++ |
| |
15 ++ O |
10 ++ O O O O O O O OO O O
O O OO O O O O O O O O O O O O O O OO O O O O |
5 ++O |
| |
0 ++------------------------O-----------------------------------O--------+
reaim.jti
100 ++--------------------------------------------------------------------+
90 O+O O OO O O OO O O OO O O O O O O O O OO O O |
| O O O O O O O O OO O O
80 ++ |
70 ++ |
*.*.*. *.*.*. .* .*. .*.**.*. .*.*.*.* .*.*.*.*.**.*.*.*.*.**.* |
60 ++ * *.* * *.* * * |
50 ++ |
40 ++ |
| |
30 ++ |
20 ++ |
| |
10 ++ |
0 ++------------------------O----------------------------------O--------+
reaim.time.user_time
4000 O+--OO---O-O---------------------------------------------------------+
| O O |
3500 ++ |
3000 ++ |
| |
2500 ++ |
| |
2000 ++ |
| O OO O O O O O O O O OO O O O OO O O O OO O O O OO O O
1500 *+*.**.*.*.*.*.**.*.*.*.**.*.*.*.*.**.*.*.*.**.*.*.*.**.*.*.*.*.** |
1000 ++ |
| |
500 ++ |
| |
0 ++-----------------------O----------------------------------O--------+
reaim.time.system_time
60000 ++------------------------------------------------------------------+
| |
50000 ++ O O O OO O O
| |
O O O |
40000 ++O O O O O O O O O O |
| O O O O O OO O O O O O O O O O O |
30000 ++ |
| |
20000 ++ |
| |
| |
10000 ++ |
*.*.**.*.*.*.**.*.*.**.*.*.*.**.*.*.*.**.*.*.*.**.*.*.* |
0 ++-----------------------O-----------------------------*-*-O-*-**---+
reaim.time.percent_of_cpu_this_job_got
10000 ++--O----O-O--------------------------------------------------------+
9000 O+O O O O O O O O O
| O |
8000 ++ O O O |
7000 ++ O O O OO O O OO O O OO O O OO O O O |
| |
6000 ++ |
5000 ++ |
4000 ++ |
| |
3000 ++ |
2000 ++ |
*.*.**.*.*.*.**.*.*.**.*.*.*.**.*.*.*.**.*.*. .*.**.*.*.*.** |
1000 ++ *.**.* |
0 ++-----------------------O---------------------------------O--------+
reaim.time.involuntary_context_switches
1.6e+06 ++----------------------------------------------------------------+
| O O O O OO O
1.4e+06 O+OO O O O OO O O O OO O O O O OO O O OO O |
| OO O O O O |
1.2e+06 ++ .*. .**.*.*. *. .*.**. *. .*. *.*.*.* |
1e+06 *+** * * * *.*.* *.**.* * *.*.*.**.*.*.* |
| |
800000 ++ |
| |
600000 ++ |
400000 ++ |
| |
200000 ++ |
| |
0 ++----------------------O--------------------------------O--------+
[*] bisect-good sample
[O] bisect-bad sample
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Xiaolong
[lkp] [sched] 86d68266c6: BUG: spinlock lockup suspected on CPU#1, rcu_sched/7
by kernel test robot
FYI, we noticed the following commit:
https://git.linaro.org/people/vincent.guittot/kernel.git sched/pelt
commit 86d68266c6210a86a0e69f6cf242803609aba19b ("sched: fix first task of a task group is attached twice")
on test machine: vm-kbuild-yocto-i386: 2 threads qemu-system-i386 -enable-kvm with 320M memory
caused below changes:
+----------------------------------------------------------------+------------+------------+
|                                                                | ef0491ea17 | 86d68266c6 |
+----------------------------------------------------------------+------------+------------+
| boot_successes                                                 | 12         | 0          |
| boot_failures                                                  | 7          | 21         |
| BUG:workqueue_lockup-pool_cpus=#node=#flags=#nice=#stuck_for#s | 3          |            |
| IP-Config:Auto-configuration_of_network_failed                 | 4          |            |
| BUG:kernel_test_hang                                           | 0          | 10         |
| BUG:spinlock_lockup_suspected_on_CPU                           | 0          | 11         |
| EIP_is_at_set_task_rq_fair                                     | 0          | 11         |
| EIP_is_at__default_send_IPI_dest_field                         | 0          | 11         |
| backtrace:schedule_timeout                                     | 0          | 1          |
| backtrace:torture_shuffle                                      | 0          | 3          |
| backtrace:cpu_startup_entry                                    | 0          | 2          |
| backtrace:smpboot_thread_fn                                    | 0          | 1          |
+----------------------------------------------------------------+------------+------------+
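The commit title ("first task of a task group is attached twice") suggests per-entity load accounting that should run once per task can run a second time. A toy user-space model of why a double attach corrupts load sums, and the flag-style guard that makes attach idempotent (hypothetical names, not the scheduler's actual fields):

```c
/* Toy per-runqueue load accounting: attaching the same entity twice
 * double-counts its load, and a later detach leaves stale load behind. */
struct rq_model { long load_sum; };
struct se_model { long load; int attached; };

void model_attach(struct rq_model *rq, struct se_model *se)
{
    if (se->attached)      /* guard: attach must be idempotent */
        return;
    rq->load_sum += se->load;
    se->attached = 1;
}

void model_detach(struct rq_model *rq, struct se_model *se)
{
    if (!se->attached)
        return;
    rq->load_sum -= se->load;
    se->attached = 0;
}
```

Without the guard, a double attach inflates the runqueue's load, which can feed bad decisions into load balancing; the lockup above is in exactly that path (load_balance -> pick_next_task_fair spinning on a runqueue lock).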
[ 69.565670] Freeing unused kernel memory: 2248K (cabf9000 - cae2b000)
[ 69.570234] Write protecting the kernel text: 18844k
[ 69.615691] Write protecting the kernel read-only data: 6820k
[ 69.633606] BUG: spinlock lockup suspected on CPU#1, rcu_sched/7
[ 69.633606] lock: 0xd36ba980, .magic: dead4ead, .owner: init/1, .owner_cpu: 0
[ 69.633606] CPU: 1 PID: 7 Comm: rcu_sched Not tainted 4.6.0-rc7-00127-g86d6826 #1
[ 69.633606] 00000000 c0059cd4 c9433829 c00303a4 d36ba980 c0059d00 c90fffd7 ca79453c
[ 69.633606] d36ba980 dead4ead c00303a4 00000001 00000000 d36ba980 a08bb370 00000000
[ 69.633606] c0059d1c c910036a a08bb370 00000000 d36ba990 d36ba980 00000046 c0059d40
[ 69.633606] Call Trace:
[ 69.633606] [<c9433829>] dump_stack+0x162/0x1f9
[ 69.633606] [<c90fffd7>] spin_dump+0xe7/0x160
[ 69.633606] [<c910036a>] do_raw_spin_lock+0x21a/0x330
[ 69.633606] [<ca263581>] _raw_spin_lock_irqsave+0x131/0x180
[ 69.633606] [<c90cebd3>] ? load_balance+0x3e3/0x10d0
[ 69.633606] [<c90cebd3>] load_balance+0x3e3/0x10d0
[ 69.633606] [<c904baea>] ? kvm_sched_clock_read+0x3a/0x70
[ 69.633606] [<c9019766>] ? sched_clock+0x16/0x30
[ 69.633606] [<c90c1bfd>] ? sched_clock_local+0x2d/0x1e0
[ 69.633606] [<c90d08a7>] pick_next_task_fair+0x947/0xd50
[ 69.633606] [<ca259808>] __schedule+0x778/0xd90
[ 69.633606] [<c948adbc>] ? debug_object_activate+0x23c/0x370
[ 69.633606] [<ca26250c>] ? schedule_timeout+0x21c/0x290
[ 69.633606] [<ca259edd>] schedule+0x3d/0x80
[ 69.633606] [<ca26252d>] schedule_timeout+0x23d/0x290
[ 69.633606] [<c9130270>] ? detach_if_pending+0x120/0x120
[ 69.633606] [<c9129aaf>] rcu_gp_kthread+0x85f/0x10f0
[ 69.633606] [<c9129250>] ? force_qs_rnp+0x250/0x250
[ 69.633606] [<c90a603c>] kthread+0x13c/0x150
[ 69.633606] [<ca264e51>] ret_from_kernel_thread+0x21/0x40
[ 69.633606] [<c90a5f00>] ? __kthread_unpark+0xf0/0xf0
[ 69.633606] Sending NMI to all CPUs:
[ 69.633606] NMI backtrace for cpu 0
[ 69.633606] CPU: 0 PID: 1 Comm: init Not tainted 4.6.0-rc7-00127-g86d6826 #1
[ 69.633606] task: c0030000 ti: c0032000 task.ti: c0032000
[ 69.633606] EIP: 0060:[<c90cc942>] EFLAGS: 00000046 CPU: 0
[ 69.633606] EIP is at set_task_rq_fair+0xd2/0x420
[ 69.633606] EAX: 00000000 EBX: 00000000 ECX: cf990d18 EDX: d36ba9d4
[ 69.633606] ESI: 3664896f EDI: 00000010 EBP: c0033eec ESP: c0033eb8
[ 69.633606] DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068
[ 69.633606] CR0: 80050033 CR2: 47ef2e50 CR3: 1092dae0 CR4: 000006b0
[ 69.633606] Stack:
[ 69.633606] c0030080 3664896f 00000010 00000000 00000000 00000000 00000000 00000001
[ 69.633606] 00000010 00000010 c0030000 00000000 d0353e40 c0033f00 c90ccfcd c0030000
[ 69.633606] d0353e40 d36ba980 c0033f28 c90c0c6a 00000002 00000001 c0030000 00000046
[ 69.633606] Call Trace:
[ 69.633606] [<c90ccfcd>] task_move_group_fair+0x5d/0xe0
[ 69.633606] [<c90c0c6a>] sched_move_task+0x1ca/0x370
[ 69.633606] [<c90e0cbd>] autogroup_move_group+0x1bd/0x2b0
[ 69.633606] [<c90e1062>] sched_autogroup_create_attach+0x232/0x380
[ 69.633606] [<c90f41f7>] ? trace_hardirqs_on+0x27/0x40
[ 69.633606] [<c908f521>] sys_setsid+0x1a1/0x1f0
[ 69.633606] [<c9001e6f>] do_int80_syscall_32+0xbf/0x2d0
[ 69.633606] [<ca264f6c>] entry_INT80_32+0x2c/0x2c
[ 69.633606] Code: 72 44 8b 7a 48 83 05 d8 67 e8 ca 01 83 15 dc 67 e8 ca 00 89 45 e8 8b 41 48 31 f3 89 75 d0 89 45 e4 8b 45 ec 89 7d d4 31 f8 09 c3 <75> b4 8b 75 dc 8b 7d d8 33 75 e8 33 7d e4 09 fe 75 a4 8b 5d cc
[ 69.633606] NMI backtrace for cpu 1
[ 69.633606] CPU: 1 PID: 7 Comm: rcu_sched Not tainted 4.6.0-rc7-00127-g86d6826 #1
[ 69.633606] task: c0056000 ti: c0058000 task.ti: c0058000
[ 69.633606] EIP: 0060:[<c903bfe2>] EFLAGS: 00000046 CPU: 1
[ 69.633606] EIP is at __default_send_IPI_dest_field+0x132/0x160
[ 69.633606] EAX: 00001f81 EBX: 00000000 ECX: fffff000 EDX: 00000000
[ 69.633606] ESI: 00000002 EDI: 00000c00 EBP: c0059cb4 ESP: c0059ca8
[ 69.633606] DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0068
[ 69.633606] CR0: 80050033 CR2: 00000000 CR3: 0ae34000 CR4: 000006b0
[ 69.633606] Stack:
[ 69.633606] 00000003 00000046 00000002 c0059cc8 c903c861 00002710 00000001 c903e570
[ 69.633606] c0059cd0 c903e58f c0059cf8 c943b7f0 ca7c08de ca873dc7 d36ba980 dead4ead
[ 69.633606] c00303a4 d36ba980 a08bb370 00000000 c0059d00 c903e64e c0059d1c c9100382
[ 69.633606] Call Trace:
[ 69.633606] [<c903c861>] default_send_IPI_mask_logical+0xd1/0x160
[ 69.633606] [<c903e570>] ? irq_force_complete_move+0x1e0/0x1e0
[ 69.633606] [<c903e58f>] nmi_raise_cpu_backtrace+0x1f/0x30
[ 69.633606] [<c943b7f0>] nmi_trigger_all_cpu_backtrace+0x1b0/0x410
[ 69.633606] [<c903e64e>] arch_trigger_all_cpu_backtrace+0x1e/0x30
[ 69.633606] [<c9100382>] do_raw_spin_lock+0x232/0x330
[ 69.633606] [<ca263581>] _raw_spin_lock_irqsave+0x131/0x180
[ 69.633606] [<c90cebd3>] ? load_balance+0x3e3/0x10d0
[ 69.633606] [<c90cebd3>] load_balance+0x3e3/0x10d0
[ 69.633606] [<c904baea>] ? kvm_sched_clock_read+0x3a/0x70
[ 69.633606] [<c9019766>] ? sched_clock+0x16/0x30
[ 69.633606] [<c90c1bfd>] ? sched_clock_local+0x2d/0x1e0
[ 69.633606] [<c90d08a7>] pick_next_task_fair+0x947/0xd50
[ 69.633606] [<ca259808>] __schedule+0x778/0xd90
[ 69.633606] [<c948adbc>] ? debug_object_activate+0x23c/0x370
[ 69.633606] [<ca26250c>] ? schedule_timeout+0x21c/0x290
[ 69.633606] [<ca259edd>] schedule+0x3d/0x80
[ 69.633606] [<ca26252d>] schedule_timeout+0x23d/0x290
[ 69.633606] [<c9130270>] ? detach_if_pending+0x120/0x120
[ 69.633606] [<c9129aaf>] rcu_gp_kthread+0x85f/0x10f0
[ 69.633606] [<c9129250>] ? force_qs_rnp+0x250/0x250
[ 69.633606] [<c90a603c>] kthread+0x13c/0x150
[ 69.633606] [<ca264e51>] ret_from_kernel_thread+0x21/0x40
[ 69.633606] [<c90a5f00>] ? __kthread_unpark+0xf0/0xf0
[ 69.633606] Code: 05 70 60 e5 ca 01 83 15 74 60 e5 ca 00 83 c1 01 83 d3 00 89 0d 68 5f e5 ca 89 1d 6c 5f e5 ca 8b 0d 18 a1 92 ca 89 b9 00 c3 ff ff <83> c0 01 83 d2 00 83 05 78 60 e5 ca 01 a3 70 5f e5 ca 5b 89 15
Elapsed time: 390
FYI, raw QEMU command line is:
qemu-system-i386 -enable-kvm -kernel /pkg/linux/i386-randconfig-b0-05262019/gcc-6/86d68266c6210a86a0e69f6cf242803609aba19b/vmlinuz-4.6.0-rc7-00127-g86d6826 -append 'root=/dev/ram0 user=lkp job=/lkp/scheduled/vm-kbuild-yocto-i386-26/rand_boot-1-yocto-minimal-i386.cgz-i386-randconfig-b0-05262019-86d68266c6210a86a0e69f6cf242803609aba19b-20160527-100557-1afgxdi-0.yaml ARCH=i386 kconfig=i386-randconfig-b0-05262019 branch=linux-devel/devel-spot-201605261922 commit=86d68266c6210a86a0e69f6cf242803609aba19b BOOT_IMAGE=/pkg/linux/i386-randconfig-b0-05262019/gcc-6/86d68266c6210a86a0e69f6cf242803609aba19b/vmlinuz-4.6.0-rc7-00127-g86d6826 max_uptime=600 RESULT_ROOT=/result/boot/1/vm-kbuild-yocto-i386/yocto-minimal-i386.cgz/i386-randconfig-b0-05262019/gcc-6/86d68266c6210a86a0e69f6cf242803609aba19b/0 LKP_SERVER=inn earlyprintk=ttyS0,115200 systemd.log_level=err debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal rw ip=::::vm-kbuild-yocto-i386-26::dhcp drbd.minor_count=8' -initrd /fs/sdd1/initrd-vm-kbuild-yocto-i386-26 -m 320 -smp 2 -device e1000,netdev=net0 -netdev user,id=net0 -boot order=nc -no-reboot -watchdog i6300esb -rtc base=localtime -drive file=/fs/sdd1/disk0-vm-kbuild-yocto-i386-26,media=disk,if=virtio -pidfile /dev/shm/kboot/pid-vm-kbuild-yocto-i386-26 -serial file:/dev/shm/kboot/serial-vm-kbuild-yocto-i386-26 -daemonize -display none -monitor null
Thanks,
Xiaolong
[lkp] [kbuild] 03282deca1: kmsg.dt-test###unittest_data_add:Failed_to_resolve_phandles(rc=-#)
by kernel test robot
FYI, we noticed the following commit:
https://git.kernel.org/pub/scm/linux/kernel/git/geert/renesas-drivers.git topic/renesas-overlays
commit 03282deca12a1a3ac947445a95606af51df53a69 ("kbuild: Enable DT symbols when CONFIG_OF_OVERLAY is used")
on test machine: vm-kbuild-1G: 2 threads qemu-system-x86_64 -enable-kvm -cpu Haswell,+smep,+smap with 1G memory
caused below changes:
[ 9.890292] ### dt-test ### unittest_data_add: Failed to resolve phandles (rc=-22)
[ 9.890292] ### dt-test ### unittest_data_add: Failed to resolve phandles (rc=-22)
FYI, raw QEMU command line is:
qemu-system-x86_64 -enable-kvm -cpu Haswell,+smep,+smap -kernel /pkg/linux/x86_64-randconfig-n0-05312127/gcc-6/03282deca12a1a3ac947445a95606af51df53a69/vmlinuz-4.7.0-rc1-00030-g03282de -append 'root=/dev/ram0 user=lkp job=/lkp/scheduled/vm-kbuild-1G-6/bisect_boot-1-debian-x86_64-2015-02-07.cgz-x86_64-randconfig-n0-05312127-03282deca12a1a3ac947445a95606af51df53a69-20160602-106516-uu872i-0.yaml ARCH=x86_64 kconfig=x86_64-randconfig-n0-05312127 branch=renesas-drivers/topic/renesas-overlays commit=03282deca12a1a3ac947445a95606af51df53a69 BOOT_IMAGE=/pkg/linux/x86_64-randconfig-n0-05312127/gcc-6/03282deca12a1a3ac947445a95606af51df53a69/vmlinuz-4.7.0-rc1-00030-g03282de max_uptime=600 RESULT_ROOT=/result/boot/1/vm-kbuild-1G/debian-x86_64-2015-02-07.cgz/x86_64-randconfig-n0-05312127/gcc-6/03282deca12a1a3ac947445a95606af51df53a69/0 LKP_SERVER=inn earlyprintk=ttyS0,115200 systemd.log_level=err debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal rw ip=::::vm-kbuild-1G-6::dhcp' -initrd /fs/sdh1/initrd-vm-kbuild-1G-6 -m 1024 -smp 2 -device e1000,netdev=net0 -netdev user,id=net0,hostfwd=tcp::23005-:22 -boot order=nc -no-reboot -watchdog i6300esb -rtc base=localtime -device virtio-scsi-pci,id=scsi0 -drive file=/fs/sdh1/disk0-vm-kbuild-1G-6,if=none,id=hd0,media=disk,aio=native,cache=none -device scsi-hd,bus=scsi0.0,drive=hd0,scsi-id=1,lun=0 -drive file=/fs/sdh1/disk1-vm-kbuild-1G-6,if=none,id=hd1,media=disk,aio=native,cache=none -device scsi-hd,bus=scsi0.0,drive=hd1,scsi-id=1,lun=1 -drive file=/fs/sdh1/disk2-vm-kbuild-1G-6,if=none,id=hd2,media=disk,aio=native,cache=none -device scsi-hd,bus=scsi0.0,drive=hd2,scsi-id=1,lun=2 -drive file=/fs/sdh1/disk3-vm-kbuild-1G-6,if=none,id=hd3,media=disk,aio=native,cache=none -device scsi-hd,bus=scsi0.0,drive=hd3,scsi-id=1,lun=3 -drive 
file=/fs/sdh1/disk4-vm-kbuild-1G-6,if=none,id=hd4,media=disk,aio=native,cache=none -device scsi-hd,bus=scsi0.0,drive=hd4,scsi-id=1,lun=4 -pidfile /dev/shm/kboot/pid-vm-kbuild-1G-6 -serial file:/dev/shm/kboot/serial-vm-kbuild-1G-6 -daemonize -display none -monitor null
Thanks,
Xiaolong
[lkp] [usb] ab3455d638: swapper invoked oom-killer: gfp_mask=0x24000c0(GFP_KERNEL), order=0, oom_score_adj=0
by kernel test robot
FYI, we noticed the following commit:
https://github.com/0day-ci/linux Ruslan-Bilovol/USB-Audio-Gadget-refactoring/20160524-075634
commit ab3455d638a872d4e4dc6562347d494fca8fd756 ("usb: gadget: f_uac2: remove platform driver/device creation")
on test machine: vm-kbuild-yocto-x86_64: 1 threads qemu-system-x86_64 -enable-kvm -cpu SandyBridge with 320M memory
caused below changes:
[ 39.403262] g_audio gadget: unable to register OSS PCM device 0:0
FYI, raw QEMU command line is:
qemu-system-x86_64 -enable-kvm -cpu SandyBridge -kernel /pkg/linux/x86_64-randconfig-s2-05270032/gcc-6/ab3455d638a872d4e4dc6562347d494fca8fd756/vmlinuz-4.6.0-rc3-00058-gab3455d -append 'root=/dev/ram0 user=lkp job=/lkp/scheduled/vm-kbuild-yocto-x86_64-36/bisect_boot-1-yocto-minimal-x86_64.cgz-x86_64-randconfig-s2-05270032-ab3455d638a872d4e4dc6562347d494fca8fd756-20160527-68586-3w5kf5-0.yaml ARCH=x86_64 kconfig=x86_64-randconfig-s2-05270032 branch=linux-devel/devel-hourly-2016052621 commit=ab3455d638a872d4e4dc6562347d494fca8fd756 BOOT_IMAGE=/pkg/linux/x86_64-randconfig-s2-05270032/gcc-6/ab3455d638a872d4e4dc6562347d494fca8fd756/vmlinuz-4.6.0-rc3-00058-gab3455d max_uptime=600 RESULT_ROOT=/result/boot/1/vm-kbuild-yocto-x86_64/yocto-minimal-x86_64.cgz/x86_64-randconfig-s2-05270032/gcc-6/ab3455d638a872d4e4dc6562347d494fca8fd756/0 LKP_SERVER=inn earlyprintk=ttyS0,115200 systemd.log_level=err debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal rw ip=::::vm-kbuild-yocto-x86_64-36::dhcp drbd.minor_count=8' -initrd /fs/sde1/initrd-vm-kbuild-yocto-x86_64-36 -m 320 -smp 1 -device e1000,netdev=net0 -netdev user,id=net0 -boot order=nc -no-reboot -watchdog i6300esb -rtc base=localtime -drive file=/fs/sde1/disk0-vm-kbuild-yocto-x86_64-36,media=disk,if=virtio -pidfile /dev/shm/kboot/pid-vm-kbuild-yocto-x86_64-36 -serial file:/dev/shm/kboot/serial-vm-kbuild-yocto-x86_64-36 -daemonize -display none -monitor null
Thanks,
Xiaolong
[lkp] [sched/fair] 53d3bc773e: hackbench.throughput -32.9% regression
by kernel test robot
FYI, we noticed a -32.9% regression of hackbench.throughput due to commit:
commit 53d3bc773eaa7ab1cf63585e76af7ee869d5e709 ("Revert "sched/fair: Fix fairness issue on migration"")
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
in testcase: hackbench
on test machine: ivb42: 48 threads Ivytown Ivy Bridge-EP with 64G memory
with following parameters: cpufreq_governor=performance/ipc=socket/mode=threads/nr_threads=50%
In addition to that, the commit also has significant impact on the following tests:

unixbench: unixbench.score 25.9% improvement
  on test machine: ivb42: 48 threads Ivytown Ivy Bridge-EP with 64G memory
  with test parameters: cpufreq_governor=performance/nr_task=100%/test=context1

hackbench: hackbench.throughput -15.6% regression
  on test machine: lkp-hsw-ep4: 72 threads Haswell-EP with 128G memory
  with test parameters: cpufreq_governor=performance/ipc=pipe/iterations=12/mode=process/nr_threads=50%
Details are as below:
-------------------------------------------------------------------------------------------------->
=========================================================================================
compiler/cpufreq_governor/ipc/kconfig/mode/nr_threads/rootfs/tbox_group/testcase:
gcc-4.9/performance/socket/x86_64-rhel/threads/50%/debian-x86_64-2015-02-07.cgz/ivb42/hackbench
commit:
c5114626f33b62fa7595e57d87f33d9d1f8298a2
53d3bc773eaa7ab1cf63585e76af7ee869d5e709
c5114626f33b62fa 53d3bc773eaa7ab1cf63585e76
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \
196590 ± 0% -32.9% 131963 ± 2% hackbench.throughput
602.66 ± 0% +2.8% 619.27 ± 2% hackbench.time.elapsed_time
602.66 ± 0% +2.8% 619.27 ± 2% hackbench.time.elapsed_time.max
1.76e+08 ± 3% +236.0% 5.914e+08 ± 2% hackbench.time.involuntary_context_switches
208664 ± 2% +26.0% 262929 ± 3% hackbench.time.minor_page_faults
4401 ± 0% +5.7% 4650 ± 0% hackbench.time.percent_of_cpu_this_job_got
25256 ± 0% +10.2% 27842 ± 2% hackbench.time.system_time
1272 ± 0% -24.5% 961.37 ± 2% hackbench.time.user_time
7.64e+08 ± 1% +131.8% 1.771e+09 ± 2% hackbench.time.voluntary_context_switches
143370 ± 0% -12.0% 126124 ± 1% meminfo.SUnreclaim
2462880 ± 0% -35.6% 1585869 ± 5% softirqs.SCHED
4051 ± 0% -39.9% 2434 ± 3% uptime.idle
1766752 ± 1% +122.6% 3932589 ± 1% vmstat.system.cs
249718 ± 2% +307.4% 1017398 ± 3% vmstat.system.in
1.76e+08 ± 3% +236.0% 5.914e+08 ± 2% time.involuntary_context_switches
208664 ± 2% +26.0% 262929 ± 3% time.minor_page_faults
1272 ± 0% -24.5% 961.37 ± 2% time.user_time
7.64e+08 ± 1% +131.8% 1.771e+09 ± 2% time.voluntary_context_switches
2228 ± 92% +137.1% 5285 ± 15% numa-meminfo.node0.AnonHugePages
73589 ± 4% -12.5% 64393 ± 2% numa-meminfo.node0.SUnreclaim
27438 ± 83% +102.6% 55585 ± 6% numa-meminfo.node0.Shmem
101051 ± 3% -10.9% 90044 ± 2% numa-meminfo.node0.Slab
69844 ± 4% -11.8% 61579 ± 3% numa-meminfo.node1.SUnreclaim
1136461 ± 3% +16.6% 1324662 ± 5% numa-numastat.node0.local_node
1140216 ± 3% +16.2% 1324689 ± 5% numa-numastat.node0.numa_hit
3755 ± 68% -99.3% 27.25 ± 94% numa-numastat.node0.other_node
1098889 ± 4% +20.1% 1320211 ± 6% numa-numastat.node1.local_node
1101996 ± 4% +20.5% 1327590 ± 6% numa-numastat.node1.numa_hit
7.18 ± 0% -50.2% 3.57 ± 43% perf-profile.cycles-pp.call_cpuidle
8.09 ± 0% -44.7% 4.47 ± 38% perf-profile.cycles-pp.cpu_startup_entry
7.17 ± 0% -50.3% 3.56 ± 43% perf-profile.cycles-pp.cpuidle_enter
7.14 ± 0% -50.3% 3.55 ± 43% perf-profile.cycles-pp.cpuidle_enter_state
7.11 ± 0% -50.6% 3.52 ± 43% perf-profile.cycles-pp.intel_idle
8.00 ± 0% -44.5% 4.44 ± 38% perf-profile.cycles-pp.start_secondary
92.32 ± 0% +5.4% 97.32 ± 0% turbostat.%Busy
2763 ± 0% +5.4% 2912 ± 0% turbostat.Avg_MHz
7.48 ± 0% -66.5% 2.50 ± 7% turbostat.CPU%c1
0.20 ± 2% -6.4% 0.18 ± 2% turbostat.CPU%c6
180.03 ± 0% -1.3% 177.62 ± 0% turbostat.CorWatt
5.83 ± 0% +38.9% 8.10 ± 3% turbostat.RAMWatt
6857 ± 83% +102.8% 13905 ± 6% numa-vmstat.node0.nr_shmem
18395 ± 4% -12.4% 16121 ± 2% numa-vmstat.node0.nr_slab_unreclaimable
675569 ± 3% +12.7% 761135 ± 4% numa-vmstat.node0.numa_local
71537 ± 5% -7.9% 65920 ± 2% numa-vmstat.node0.numa_other
17456 ± 4% -11.7% 15405 ± 3% numa-vmstat.node1.nr_slab_unreclaimable
695848 ± 3% +14.9% 799683 ± 5% numa-vmstat.node1.numa_hit
677405 ± 4% +14.5% 775903 ± 6% numa-vmstat.node1.numa_local
18442 ± 19% +28.9% 23779 ± 5% numa-vmstat.node1.numa_other
1.658e+09 ± 0% -59.1% 6.784e+08 ± 7% cpuidle.C1-IVT.time
1.066e+08 ± 0% -40.3% 63661563 ± 6% cpuidle.C1-IVT.usage
26348635 ± 0% -86.8% 3471048 ± 15% cpuidle.C1E-IVT.time
291620 ± 0% -85.1% 43352 ± 15% cpuidle.C1E-IVT.usage
54158643 ± 1% -88.5% 6254009 ± 14% cpuidle.C3-IVT.time
482437 ± 1% -87.0% 62620 ± 16% cpuidle.C3-IVT.usage
5.028e+08 ± 0% -75.8% 1.219e+08 ± 8% cpuidle.C6-IVT.time
3805026 ± 0% -85.5% 552326 ± 16% cpuidle.C6-IVT.usage
2766 ± 4% -51.4% 1344 ± 6% cpuidle.POLL.usage
35841 ± 0% -12.0% 31543 ± 0% proc-vmstat.nr_slab_unreclaimable
154090 ± 2% +43.1% 220509 ± 3% proc-vmstat.numa_hint_faults
129240 ± 2% +47.4% 190543 ± 3% proc-vmstat.numa_hint_faults_local
2238386 ± 1% +18.4% 2649737 ± 2% proc-vmstat.numa_hit
2232163 ± 1% +18.4% 2643105 ± 2% proc-vmstat.numa_local
22315 ± 1% -21.0% 17625 ± 5% proc-vmstat.numa_pages_migrated
154533 ± 2% +45.6% 225071 ± 3% proc-vmstat.numa_pte_updates
382980 ± 2% +33.2% 510157 ± 4% proc-vmstat.pgalloc_dma32
7311738 ± 2% +37.2% 10029060 ± 2% proc-vmstat.pgalloc_normal
7672040 ± 2% +37.1% 10519738 ± 2% proc-vmstat.pgfree
22315 ± 1% -21.0% 17625 ± 5% proc-vmstat.pgmigrate_success
5487 ± 6% -12.6% 4797 ± 4% slabinfo.UNIX.active_objs
5609 ± 5% -12.2% 4926 ± 4% slabinfo.UNIX.num_objs
4362 ± 4% +14.6% 4998 ± 2% slabinfo.cred_jar.active_objs
4362 ± 4% +14.6% 4998 ± 2% slabinfo.cred_jar.num_objs
42525 ± 0% -41.6% 24824 ± 3% slabinfo.kmalloc-256.active_objs
845.50 ± 0% -42.9% 482.50 ± 3% slabinfo.kmalloc-256.active_slabs
54124 ± 0% -42.9% 30920 ± 3% slabinfo.kmalloc-256.num_objs
845.50 ± 0% -42.9% 482.50 ± 3% slabinfo.kmalloc-256.num_slabs
47204 ± 0% -37.9% 29335 ± 2% slabinfo.kmalloc-512.active_objs
915.25 ± 0% -39.8% 551.00 ± 3% slabinfo.kmalloc-512.active_slabs
58599 ± 0% -39.8% 35300 ± 3% slabinfo.kmalloc-512.num_objs
915.25 ± 0% -39.8% 551.00 ± 3% slabinfo.kmalloc-512.num_slabs
12443 ± 2% -20.1% 9944 ± 3% slabinfo.pid.active_objs
12443 ± 2% -20.1% 9944 ± 3% slabinfo.pid.num_objs
440.00 ± 5% -32.8% 295.75 ± 4% slabinfo.taskstats.active_objs
440.00 ± 5% -32.8% 295.75 ± 4% slabinfo.taskstats.num_objs
312.45 ±157% -94.8% 16.29 ± 33% sched_debug.cfs_rq:/.load.stddev
0.27 ± 5% -56.3% 0.12 ± 30% sched_debug.cfs_rq:/.nr_running.stddev
16.51 ± 1% +9.5% 18.08 ± 3% sched_debug.cfs_rq:/.runnable_load_avg.avg
0.05 ±100% +7950.0% 3.66 ± 48% sched_debug.cfs_rq:/.runnable_load_avg.min
-740916 ±-28% -158.5% 433310 ±120% sched_debug.cfs_rq:/.spread0.avg
1009940 ± 19% +75.8% 1775442 ± 30% sched_debug.cfs_rq:/.spread0.max
-2384171 ± -7% -65.7% -818684 ±-76% sched_debug.cfs_rq:/.spread0.min
749.14 ± 1% +13.0% 846.34 ± 1% sched_debug.cfs_rq:/.util_avg.min
51.66 ± 4% -36.3% 32.92 ± 5% sched_debug.cfs_rq:/.util_avg.stddev
161202 ± 7% -41.7% 93997 ± 4% sched_debug.cpu.avg_idle.avg
595158 ± 6% -51.2% 290491 ± 22% sched_debug.cpu.avg_idle.max
132760 ± 8% -58.8% 54718 ± 19% sched_debug.cpu.avg_idle.stddev
11.40 ± 11% +111.0% 24.05 ± 16% sched_debug.cpu.clock.stddev
11.40 ± 11% +111.0% 24.05 ± 16% sched_debug.cpu.clock_task.stddev
32.34 ± 2% +23.9% 40.07 ± 19% sched_debug.cpu.cpu_load[0].max
0.34 ±103% +520.0% 2.11 ± 67% sched_debug.cpu.cpu_load[0].min
32.18 ± 2% +22.7% 39.50 ± 17% sched_debug.cpu.cpu_load[1].max
3.32 ± 8% +84.9% 6.14 ± 12% sched_debug.cpu.cpu_load[1].min
5.39 ± 7% +36.3% 7.34 ± 4% sched_debug.cpu.cpu_load[2].min
33.18 ± 3% +14.0% 37.82 ± 5% sched_debug.cpu.cpu_load[4].max
5.56 ± 6% +16.2% 6.45 ± 6% sched_debug.cpu.cpu_load[4].stddev
16741 ± 0% -15.4% 14166 ± 2% sched_debug.cpu.curr->pid.avg
19196 ± 0% -18.3% 15690 ± 1% sched_debug.cpu.curr->pid.max
5174 ± 5% -55.4% 2305 ± 14% sched_debug.cpu.curr->pid.stddev
1410 ± 1% -14.2% 1210 ± 6% sched_debug.cpu.nr_load_updates.stddev
9.95 ± 3% -14.5% 8.51 ± 5% sched_debug.cpu.nr_running.avg
29.07 ± 2% -15.0% 24.70 ± 4% sched_debug.cpu.nr_running.max
0.05 ±100% +850.0% 0.43 ± 37% sched_debug.cpu.nr_running.min
7.64 ± 3% -23.0% 5.88 ± 2% sched_debug.cpu.nr_running.stddev
10979930 ± 1% +123.3% 24518490 ± 2% sched_debug.cpu.nr_switches.avg
12350130 ± 1% +117.5% 26856375 ± 2% sched_debug.cpu.nr_switches.max
9594835 ± 2% +132.6% 22314436 ± 2% sched_debug.cpu.nr_switches.min
769296 ± 1% +56.8% 1206190 ± 3% sched_debug.cpu.nr_switches.stddev
8.30 ± 18% +32.9% 11.02 ± 15% sched_debug.cpu.nr_uninterruptible.max
turbostat.Avg_MHz
3000 O+---O-O-O--O-O-O-O--O-O-O--O-O-O--O-O-O-O--O------------------------+
*.O..*.*.* *.*.*..*.*.*..*.*.*..*.*.*.*..*.*.*..*.*.*..*.*.*.*..*.*
2500 ++ : : |
| : : |
| : : |
2000 ++ : : |
| : : |
1500 ++ : : |
| : : |
1000 ++ : : |
| : : |
| :: |
500 ++ : |
| : |
0 ++----------*--------------------------------------------------------+
turbostat._Busy
100 O+O--O-O-O--O-O-O--O-O-O--O-O-O--O-O-O--O-O-O-------------------------+
90 *+*..*.*.* *.*..*.*.*..*.*.*..*.*.*..*.*.*..*.*.*..*.*.*..*.*.*..*.*
| : : |
80 ++ : : |
70 ++ : : |
| : : |
60 ++ : : |
50 ++ : : |
40 ++ : : |
| : : |
30 ++ : : |
20 ++ :: |
| : |
10 ++ : |
0 ++----------*---------------------------------------------------------+
turbostat.CPU_c1
8 ++---------------*------------------------------------------------------+
*.*..*.*..* *. *..*.*.*..*.*..*.*.*..*.*..*.*.*..*.*..*.*.*..*.*..*.*
7 ++ : : |
6 ++ : : |
| : : |
5 ++ : : |
| : : |
4 ++ : : |
| O : : |
3 O+ O O: : O O |
2 ++ O :O:O O O O O O O O O O O |
| : O |
1 ++ : |
| : |
0 ++----------*-----------------------------------------------------------+
turbostat.PkgWatt
250 ++--------------------------------------------------------------------+
| |
O.O..O.O.O O O.O..O.O.O..O.O.O..O.O.O..O.O.O..*.*.*..*.*.*..*.*.*..*.*
200 ++ : : |
| : : |
| : : |
150 ++ : : |
| : : |
100 ++ : : |
| : : |
| : : |
50 ++ : : |
| : |
| : |
0 ++----------*---------------------------------------------------------+
turbostat.CorWatt
200 ++--------------------------------------------------------------------+
180 *+*..*.*.* *.*..*.*.*..*.*.*..*.*.*..*.*.*..*.*.*..*.*.*..*.*.*..*.*
O O O O O O O O O O O O O O O O O O O O |
160 ++ : : |
140 ++ : : |
| : : |
120 ++ : : |
100 ++ : : |
80 ++ : : |
| : : |
60 ++ : : |
40 ++ :: |
| : |
20 ++ : |
0 ++----------*---------------------------------------------------------+
turbostat.RAMWatt
9 ++---------------O------------------------------------------------------+
| O O O O O O O O O O O |
8 ++O O O O O O O |
7 O+ |
| |
6 *+*..*.*..* *..*.*..*.*.*..*.*..*.*.*..*.*..*.*.*..*.*..*.*.*..*.*..*.*
5 ++ : : |
| : : |
4 ++ : : |
3 ++ : : |
| : : |
2 ++ : : |
1 ++ : |
| : |
0 ++----------*-----------------------------------------------------------+
hackbench.throughput
200000 *+*-*--*-*---*--*-*-*-*--*-*-*-*--*-*-*-*-*--*-*-*-*--*-*-*-*--*-*-*
180000 ++ : : |
| : : |
160000 ++ : : |
140000 O+O : : O O |
| O O O:O:O O O O O O O O O O O O |
120000 ++ : : |
100000 ++ : : |
80000 ++ : : |
| : : |
60000 ++ : : |
40000 ++ : |
| : |
20000 ++ : |
0 ++---------*-------------------------------------------------------+
time.user_time
1400 ++------------*------------------------------------------------------+
*.*..*.*.* : *.*..*.*.*..*.*.*..*.*.*.*..*.*.*..*.*.*..*.*.*.*..*.*
1200 ++ : : |
| : : |
1000 O+O O: O:O O O O O O O O O O |
| O O : : O O O O |
800 ++ : : |
| : : |
600 ++ : : |
| : : |
400 ++ : : |
| :: |
200 ++ : |
| : |
0 ++----------*--------------------------------------------------------+
time.minor_page_faults
300000 ++-----------------------------------------------------------------+
| O O O O O O O O O |
250000 O+O O O O O O O O O O |
| |
*.*.*..*.* *..*.*.*.*..*.*.*.*..*.*.*.*.*..*. .*.*..*.*. .*..*.*.*
200000 ++ : : * * |
| : : |
150000 ++ : : |
| : : |
100000 ++ : : |
| : : |
| : : |
50000 ++ : |
| : |
0 ++---------*-------------------------------------------------------+
time.voluntary_context_switches
2e+09 ++----------------------------------------------------------------+
1.8e+09 ++ O O O O O O |
| O O O O O O O O O O O O |
1.6e+09 O+O |
1.4e+09 ++ |
| |
1.2e+09 ++ |
1e+09 ++ |
8e+08 ++ .*
*.*.*..*.* *.*..*.*.*.*.*..*.*.*.*.*..*.*.*.*.*..*.*.*.*.*..*.* |
6e+08 ++ : : |
4e+08 ++ : : |
| : : |
2e+08 ++ : |
0 ++---------*------------------------------------------------------+
time.involuntary_context_switches
7e+08 ++------------------------------------------------------------------+
| O O O |
6e+08 ++ O O O O O O O O O O O |
O O O O O O |
5e+08 ++ |
| |
4e+08 ++ |
| |
3e+08 ++ |
| |
2e+08 ++ .*.. .*.*. .*.*.*. .*
*.*..*.*.* *.*.* *.*.*.*..*.*.*..*.*.*.*. *.*. *..* |
1e+08 ++ : + |
| : + |
0 ++---------*--------------------------------------------------------+
hackbench.time.user_time
1400 ++------------*------------------------------------------------------+
*.*..*.*.* : *.*..*.*.*..*.*.*..*.*.*.*..*.*.*..*.*.*..*.*.*.*..*.*
1200 ++ : : |
| : : |
1000 O+O O: O:O O O O O O O O O O |
| O O : : O O O O |
800 ++ : : |
| : : |
600 ++ : : |
| : : |
400 ++ : : |
| :: |
200 ++ : |
| : |
0 ++----------*--------------------------------------------------------+
hackbench.time.percent_of_cpu_this_job_got
5000 ++-------------------------------------------------------------------+
4500 O+O O O O O O O O O O O O O O O O O O O |
*.*..*.*.* *.*.*..*.*.*..*.*.*..*.*.*.*..*.*.*..*.*.*..*.*.*.*..*.*
4000 ++ : : |
3500 ++ : : |
| : : |
3000 ++ : : |
2500 ++ : : |
2000 ++ : : |
| : : |
1500 ++ : : |
1000 ++ : : |
| : |
500 ++ : |
0 ++----------*--------------------------------------------------------+
hackbench.time.minor_page_faults
300000 ++-----------------------------------------------------------------+
| O O O O O O O O O |
250000 O+O O O O O O O O O O |
| |
*.*.*..*.* *..*.*.*.*..*.*.*.*..*.*.*.*.*..*. .*.*..*.*. .*..*.*.*
200000 ++ : : * * |
| : : |
150000 ++ : : |
| : : |
100000 ++ : : |
| : : |
| : : |
50000 ++ : |
| : |
0 ++---------*-------------------------------------------------------+
hackbench.time.voluntary_context_switches
2e+09 ++----------------------------------------------------------------+
1.8e+09 ++ O O O O O O |
| O O O O O O O O O O O O |
1.6e+09 O+O |
1.4e+09 ++ |
| |
1.2e+09 ++ |
1e+09 ++ |
8e+08 ++ .*
*.*.*..*.* *.*..*.*.*.*.*..*.*.*.*.*..*.*.*.*.*..*.*.*.*.*..*.* |
6e+08 ++ : : |
4e+08 ++ : : |
| : : |
2e+08 ++ : |
0 ++---------*------------------------------------------------------+
hackbench.time.involuntary_context_switches
7e+08 ++------------------------------------------------------------------+
| O O O |
6e+08 ++ O O O O O O O O O O O |
O O O O O O |
5e+08 ++ |
| |
4e+08 ++ |
| |
3e+08 ++ |
| |
2e+08 ++ .*.. .*.*. .*.*.*. .*
*.*..*.*.* *.*.* *.*.*.*..*.*.*..*.*.*.*. *.*. *..* |
1e+08 ++ : + |
| : + |
0 ++---------*--------------------------------------------------------+
softirqs.SCHED
3e+06 ++----------------------------------------------------------------+
| |
2.5e+06 *+*.*..*.* *.*..*.*.*.*.*..*.*.*.*.*..*.*.*.*.*.. .*. .*. |
| : : *.* *.*. *.*
| : : |
2e+06 ++ : : |
O O O: : O O O O |
1.5e+06 ++ O O :O:O O O O O O O O O O |
| : : |
1e+06 ++ : : |
| : : |
| :: |
500000 ++ : |
| : |
0 ++---------*------------------------------------------------------+
uptime.idle
4500 ++-------------------------------------------------------------------+
*.*..*.*.* .*.*..*.*.*..*.*.*..*.*.*.*..*.*.*..*. .*. .*.*..*.*
4000 ++ : * *.*. * |
3500 ++ : : |
| : : |
3000 ++ : : |
2500 O+O O: O: O O O O O O O |
| O : :O O O O O O O |
2000 ++ O : : |
1500 ++ : : |
| : : |
1000 ++ : : |
500 ++ : |
| : |
0 ++----------*--------------------------------------------------------+
cpuidle.POLL.usage
3500 ++-------------------------------------------------------------------+
| |
3000 ++ .*. *. .*.. .*. .*.. .*. .*.. .*
*.*..* * : * * *..*.* *.*.*.*. * *.*.*..*.*.*.*..* |
2500 ++ : : |
| : : |
2000 ++ : : |
O O : : |
1500 ++ O: : O O |
| O :O:O O O O O O O O O O |
1000 ++ O : : O O |
| : : |
500 ++ :: |
| : |
0 ++----------*--------------------------------------------------------+
cpuidle.C1-IVT.time
1.8e+09 *+*-*--*-*---*-*--*---*-*-*--*-*-*-*-*--*-*-*---*-----------------+
| : : * * *.*.*.*.*..*.*.*
1.6e+09 ++ : : |
1.4e+09 ++ : : |
| : : |
1.2e+09 ++ : : |
1e+09 ++ : : |
| : : |
8e+08 O+O O: : O O O O |
6e+08 ++ O O :O:O O O O O O O O |
| : : O O |
4e+08 ++ : |
2e+08 ++ : |
| : |
0 ++---------*------------------------------------------------------+
cpuidle.C1-IVT.usage
1.2e+08 ++----------------------------------------------------------------+
*.*.*..*.* *.*..*.*.*.*.*..*.*.*.*.*..*.*.*.*.*.. .*. |
1e+08 ++ : : *.* *.*..*.*.*
| : : |
| : : |
8e+07 O+O : : |
| O: : O O O O |
6e+07 ++ O O :O:O O O O O O O O |
| : : O O |
4e+07 ++ : : |
| : : |
| : |
2e+07 ++ : |
| : |
0 ++---------*------------------------------------------------------+
cpuidle.C1E-IVT.time
3e+07 ++----------------------------------------------------------------+
*.*.*..*.* *.*..*.*.*.*.*..*.*.*.*.*..*.*.*.*.*.. .*.*.*. |
2.5e+07 ++ : : * *..*.*.*
| : : |
| : : |
2e+07 ++ : : |
| : : |
1.5e+07 ++ : : |
| : : |
1e+07 ++ : : |
| : : |
O : |
5e+06 ++O O O O O O O O O |
| O : O O O O O O O O O |
0 ++---------*------------------------------------------------------+
cpuidle.C1E-IVT.usage
350000 ++-----------------------------------------------------------------+
| .*.*. |
300000 *+*.*..*.* *. *.*..*.*.*.*..*.*.*.*.*..*.*.*.*..*.*.*.*..*.*.*
| : : |
250000 ++ : : |
| : : |
200000 ++ : : |
| : : |
150000 ++ : : |
| : : |
100000 ++ : : |
| O :: |
50000 O+ O O O O O O O O O O |
| O : O O O O O O O |
0 ++---------*-------------------------------------------------------+
cpuidle.C3-IVT.time
6e+07 *+*--*-*-*------*--------*-*-*--*-*-*--*-*-*-*----*-----------------+
| : * *.*..* * *.*..*.*.*.*..*.|
5e+07 ++ : : *
| : : |
| : : |
4e+07 ++ : : |
| : : |
3e+07 ++ : : |
| : : |
2e+07 ++ : : |
| : : |
| :: |
1e+07 O+O O O : O O O O |
| O O O O O O O O O O O O |
0 ++---------*--------------------------------------------------------+
cpuidle.C3-IVT.usage
600000 ++-----------------------------------------------------------------+
| |
500000 *+*.*..*.* *..*.*.*.*..*.*.*.*..*.*.*.*.*..*.*.*. .*. |
| : : *..* *.*..*.*.*
| : : |
400000 ++ : : |
| : : |
300000 ++ : : |
| : : |
200000 ++ : : |
| : : |
| :: |
100000 O+O O O : O O O O O |
| O O O O O O O O O O O |
0 ++---------*-------------------------------------------------------+
cpuidle.C6-IVT.time
6e+08 ++------------------------------------------------------------------+
|.*..*.*.* .*.*..*.*. .*.*.*.*.. |
5e+08 *+ : *.*.*.*..*.* *. *.*.*.*..*.*.*.*..*.*
| : : |
| : : |
4e+08 ++ : : |
| : : |
3e+08 ++ : : |
| : : |
2e+08 ++ : : |
| O : : |
O O O O O: O O O O O O O O O O O |
1e+08 ++ :: O O O |
| : |
0 ++---------*--------------------------------------------------------+
cpuidle.C6-IVT.usage
4.5e+06 ++----------------------------------------------------------------+
*.*.*..*.* .*..*. .*.*.*..*.*.*.*.*..*.*.*.*. .*. |
4e+06 ++ : * * *..*.* *.*..*.*.*
3.5e+06 ++ : : |
| : : |
3e+06 ++ : : |
2.5e+06 ++ : : |
| : : |
2e+06 ++ : : |
1.5e+06 ++ : : |
| : : |
1e+06 ++O :: |
500000 O+ O O O O O O O O O |
| O : O O O O O O O O |
0 ++---------*------------------------------------------------------+
meminfo.Slab
200000 *+*-*--*-*---*--*-*-*-*--*-*-*-*--*-*-*-*-*--*-*-*-*--*-*-*-*--*-*-*
180000 ++O O : O O |
O O O : O O O O O O O O O O O O O |
160000 ++ : : |
140000 ++ : : |
| : : |
120000 ++ : : |
100000 ++ : : |
80000 ++ : : |
| : : |
60000 ++ : : |
40000 ++ : |
| : |
20000 ++ : |
0 ++---------*-------------------------------------------------------+
meminfo.SUnreclaim
160000 ++-----------------------------------------------------------------+
*. .*..*.* *..*.*. .*. .*.*..*.*. .*.*..*. .*. .*.*.*.*..*. .*
140000 ++* : : *.*. * * * *. * |
120000 O+O O O O O O O O O O O O O O O O O O O |
| : : |
100000 ++ : : |
| : : |
80000 ++ : : |
| : : |
60000 ++ : : |
40000 ++ : : |
| : |
20000 ++ : |
| : |
0 ++---------*-------------------------------------------------------+
vmstat.system.in
1.2e+06 ++----------------------------------------------------------------+
| O O O |
1e+06 ++ O O O O O O O O O O O O |
| O O O |
O O |
800000 ++ |
| |
600000 ++ |
| |
400000 ++ |
| |
*.*.*..*.* *.*..*.*.*.*.*..*.*.*.*.*..*.*.*.*.*..*.*.*.*.*..*.*.*
200000 ++ : : |
| : : |
0 ++---------*------------------------------------------------------+
vmstat.system.cs
4.5e+06 ++----------------------------------------------------------------+
| O O O O |
4e+06 O+ O O O O O O O O O O O O O O |
3.5e+06 ++O |
| |
3e+06 ++ |
2.5e+06 ++ |
| |
2e+06 ++ .*. .*
1.5e+06 *+*.*..*.* *.*..*.*.*.*.*..*.*.*.*.*..*.*.*.*.*..*.*.* *..*.* |
| : : |
1e+06 ++ : : |
500000 ++ : : |
| : |
0 ++---------*------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
To reproduce:

        git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
        cd lkp-tests
        bin/lkp install job.yaml  # job file is attached in this email
        bin/lkp run     job.yaml
***************************************************************************************************
ivb42: 48 threads Ivytown Ivy Bridge-EP with 64G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/tbox_group/test/testcase:
gcc-4.9/performance/x86_64-rhel/100%/debian-x86_64-2015-02-07.cgz/ivb42/context1/unixbench
commit:
c5114626f33b62fa7595e57d87f33d9d1f8298a2
53d3bc773eaa7ab1cf63585e76af7ee869d5e709
c5114626f33b62fa 53d3bc773eaa7ab1cf63585e76
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \
18006 ± 1% +25.9% 22672 ± 0% unixbench.score
39774 ± 33% +5.4e+05% 2.138e+08 ± 4% unixbench.time.involuntary_context_switches
1717 ± 0% +1.9% 1749 ± 0% unixbench.time.percent_of_cpu_this_job_got
152.51 ± 0% +33.9% 204.18 ± 1% unixbench.time.user_time
7.052e+08 ± 1% -3.9% 6.78e+08 ± 1% unixbench.time.voluntary_context_switches
4.243e+08 ± 3% -9.4% 3.845e+08 ± 7% cpuidle.C1-IVT.time
1.544e+08 ± 6% -37.5% 96475672 ± 5% cpuidle.C1-IVT.usage
409626 ± 4% +28.6% 526843 ± 15% softirqs.RCU
274815 ± 4% -27.5% 199184 ± 9% softirqs.SCHED
39774 ± 33% +5.4e+05% 2.138e+08 ± 4% time.involuntary_context_switches
152.51 ± 0% +33.9% 204.18 ± 1% time.user_time
45.25 ± 0% +12.7% 51.00 ± 0% vmstat.procs.r
11774346 ± 0% +20.2% 14152328 ± 0% vmstat.system.cs
1848728 ± 0% +22.7% 2269123 ± 0% sched_debug.cfs_rq:/.min_vruntime.avg
2029277 ± 0% +18.7% 2409509 ± 0% sched_debug.cfs_rq:/.min_vruntime.max
1561074 ± 5% +29.9% 2027122 ± 3% sched_debug.cfs_rq:/.min_vruntime.min
103209 ± 9% -17.8% 84792 ± 10% sched_debug.cfs_rq:/.min_vruntime.stddev
11.68 ± 6% -35.9% 7.49 ± 6% sched_debug.cfs_rq:/.runnable_load_avg.avg
103208 ± 9% -17.8% 84795 ± 10% sched_debug.cfs_rq:/.spread0.stddev
946393 ± 5% -24.5% 714499 ± 10% sched_debug.cpu.avg_idle.max
234059 ± 6% -36.5% 148728 ± 37% sched_debug.cpu.avg_idle.stddev
11.57 ± 6% -31.2% 7.96 ± 20% sched_debug.cpu.cpu_load[1].avg
11.61 ± 7% -34.4% 7.61 ± 12% sched_debug.cpu.cpu_load[2].avg
11.70 ± 7% -35.4% 7.56 ± 8% sched_debug.cpu.cpu_load[3].avg
11.86 ± 7% -36.1% 7.58 ± 6% sched_debug.cpu.cpu_load[4].avg
0.48 ± 6% +13.9% 0.54 ± 3% sched_debug.cpu.nr_running.avg
0.37 ± 5% +10.5% 0.41 ± 4% sched_debug.cpu.nr_running.stddev
14556348 ± 0% +20.1% 17474921 ± 0% sched_debug.cpu.nr_switches.avg
14764042 ± 0% +24.5% 18380752 ± 0% sched_debug.cpu.nr_switches.max
14296508 ± 0% +14.9% 16430231 ± 0% sched_debug.cpu.nr_switches.min
121577 ± 25% +268.4% 447878 ± 8% sched_debug.cpu.nr_switches.stddev
-9.42 ± -3% +20.4% -11.33 ±-12% sched_debug.cpu.nr_uninterruptible.min
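As a side note, the %change column in these comparison tables is the percent difference of the second commit's mean relative to the first. A minimal Python sketch, using the unixbench rows above:

```python
# Sketch of how the %change column in the LKP comparison tables is
# computed: percent difference of the second commit's mean over the first.

def pct_change(base, head):
    """Percent change from base to head, as reported in the tables."""
    return (head - base) / base * 100.0

# unixbench.score row: 18006 -> 22672 is reported as +25.9%
print(f"{pct_change(18006, 22672):+.1f}%")

# involuntary_context_switches: 39774 -> 2.138e+08 is reported as +5.4e+05%
print(f"{pct_change(39774, 2.138e8):+.1e}%")
```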
***************************************************************************************************
lkp-hsw-ep4: 72 threads Haswell-EP with 128G memory
=========================================================================================
compiler/cpufreq_governor/ipc/iterations/kconfig/mode/nr_threads/rootfs/tbox_group/testcase:
gcc-4.9/performance/pipe/12/x86_64-rhel/process/50%/debian-x86_64-2015-02-07.cgz/lkp-hsw-ep4/hackbench
commit:
c5114626f33b62fa7595e57d87f33d9d1f8298a2
53d3bc773eaa7ab1cf63585e76af7ee869d5e709
c5114626f33b62fa 53d3bc773eaa7ab1cf63585e76
---------------- --------------------------
%stddev      %change          %stddev
207412 ± 0% -15.6% 175076 ± 1% hackbench.throughput
489.41 ± 0% +18.4% 579.66 ± 1% hackbench.time.elapsed_time
489.41 ± 0% +18.4% 579.66 ± 1% hackbench.time.elapsed_time.max
1.005e+09 ± 0% +113.2% 2.142e+09 ± 4% hackbench.time.involuntary_context_switches
6966 ± 0% +2.2% 7118 ± 0% hackbench.time.percent_of_cpu_this_job_got
32394 ± 0% +19.3% 38635 ± 1% hackbench.time.system_time
1700 ± 0% +54.6% 2627 ± 3% hackbench.time.user_time
3.164e+09 ± 0% +64.2% 5.195e+09 ± 3% hackbench.time.voluntary_context_switches
536.44 ± 0% +17.1% 627.97 ± 1% uptime.boot
4496 ± 1% -16.4% 3757 ± 4% uptime.idle
720.75 ± 0% +14.7% 826.75 ± 0% vmstat.procs.r
8795090 ± 0% +44.3% 12689850 ± 2% vmstat.system.cs
2115904 ± 1% -7.1% 1965559 ± 3% vmstat.system.in
49651750 ± 0% -34.1% 32710138 ± 3% numa-numastat.node0.local_node
49657590 ± 0% -34.1% 32719401 ± 3% numa-numastat.node0.numa_hit
51230886 ± 1% -37.1% 32238968 ± 4% numa-numastat.node1.local_node
51235497 ± 1% -37.1% 32241201 ± 4% numa-numastat.node1.numa_hit
16114 ± 3% +15.3% 18577 ± 2% softirqs.NET_RX
3907664 ± 1% +44.4% 5643157 ± 1% softirqs.RCU
2029740 ± 1% -67.7% 655775 ± 16% softirqs.SCHED
17332687 ± 0% +21.1% 20995794 ± 1% softirqs.TIMER
97.19 ± 0% +1.5% 98.70 ± 0% turbostat.%Busy
2694 ± 0% +1.2% 2726 ± 0% turbostat.Avg_MHz
2.58 ± 2% -56.9% 1.11 ± 7% turbostat.CPU%c1
0.22 ± 3% -14.8% 0.19 ± 2% turbostat.CPU%c6
894518 ± 5% -16.2% 749856 ± 5% numa-meminfo.node0.MemUsed
31304 ± 18% -19.8% 25116 ± 13% numa-meminfo.node0.PageTables
137230 ± 14% -13.2% 119062 ± 7% numa-meminfo.node0.Slab
77654 ± 43% +53.9% 119507 ± 2% numa-meminfo.node1.Active(anon)
676863 ± 6% +18.9% 804493 ± 5% numa-meminfo.node1.MemUsed
40040 ± 87% +102.8% 81204 ± 3% numa-meminfo.node1.Shmem
2.29 ± 8% -82.5% 0.40 ±112% perf-profile.cycles-pp.call_cpuidle
3.41 ± 8% -84.8% 0.52 ±113% perf-profile.cycles-pp.cpu_startup_entry
2.29 ± 8% -82.4% 0.40 ±112% perf-profile.cycles-pp.cpuidle_enter
2.26 ± 9% -82.4% 0.40 ±112% perf-profile.cycles-pp.cpuidle_enter_state
2.24 ± 9% -82.4% 0.40 ±112% perf-profile.cycles-pp.intel_idle
3.42 ± 7% -84.9% 0.52 ±113% perf-profile.cycles-pp.start_secondary
86451 ± 1% +9.1% 94357 ± 3% proc-vmstat.numa_hint_faults_local
1.009e+08 ± 0% -35.6% 64951081 ± 3% proc-vmstat.numa_hit
1.009e+08 ± 0% -35.6% 64941826 ± 3% proc-vmstat.numa_local
1744958 ± 0% -36.7% 1105128 ± 3% proc-vmstat.pgalloc_dma32
99309681 ± 0% -35.5% 64014721 ± 3% proc-vmstat.pgalloc_normal
1.01e+08 ± 0% -35.6% 65068018 ± 3% proc-vmstat.pgfree
489.41 ± 0% +18.4% 579.66 ± 1% time.elapsed_time
489.41 ± 0% +18.4% 579.66 ± 1% time.elapsed_time.max
1.005e+09 ± 0% +113.2% 2.142e+09 ± 4% time.involuntary_context_switches
32394 ± 0% +19.3% 38635 ± 1% time.system_time
1700 ± 0% +54.6% 2627 ± 3% time.user_time
3.164e+09 ± 0% +64.2% 5.195e+09 ± 3% time.voluntary_context_switches
7826 ± 18% -19.7% 6283 ± 13% numa-vmstat.node0.nr_page_table_pages
24938156 ± 0% -34.5% 16344223 ± 2% numa-vmstat.node0.numa_hit
24865727 ± 0% -34.6% 16268676 ± 2% numa-vmstat.node0.numa_local
19415 ± 43% +53.9% 29872 ± 2% numa-vmstat.node1.nr_active_anon
10012 ± 87% +102.5% 20273 ± 3% numa-vmstat.node1.nr_shmem
25578109 ± 2% -35.3% 16544997 ± 3% numa-vmstat.node1.numa_hit
25542618 ± 2% -35.4% 16513089 ± 3% numa-vmstat.node1.numa_local
7.39e+08 ± 1% -63.6% 2.693e+08 ± 12% cpuidle.C1-HSW.time
1.279e+08 ± 2% -75.4% 31468140 ± 20% cpuidle.C1-HSW.usage
97966635 ± 3% -38.4% 60323848 ± 6% cpuidle.C1E-HSW.time
2424496 ± 2% -54.3% 1108542 ± 10% cpuidle.C1E-HSW.usage
2168324 ± 5% -38.4% 1335858 ± 6% cpuidle.C3-HSW.time
23824 ± 2% -51.7% 11496 ± 10% cpuidle.C3-HSW.usage
133416 ± 1% -41.7% 77729 ± 10% cpuidle.C6-HSW.usage
72278 ± 96% -85.4% 10574 ± 13% cpuidle.POLL.time
7564 ± 0% -64.3% 2699 ± 13% cpuidle.POLL.usage
447972 ± 12% -77.1% 102749 ± 39% sched_debug.cfs_rq:/.MIN_vruntime.avg
23408331 ± 2% -74.0% 6077779 ± 38% sched_debug.cfs_rq:/.MIN_vruntime.max
3133258 ± 5% -75.3% 773710 ± 35% sched_debug.cfs_rq:/.MIN_vruntime.stddev
0.17 ±173% +1025.0% 1.88 ± 15% sched_debug.cfs_rq:/.load.min
4.72 ± 5% +21.2% 5.72 ± 4% sched_debug.cfs_rq:/.load_avg.min
447972 ± 12% -77.1% 102749 ± 39% sched_debug.cfs_rq:/.max_vruntime.avg
23408331 ± 2% -74.0% 6077779 ± 38% sched_debug.cfs_rq:/.max_vruntime.max
3133258 ± 5% -75.3% 773710 ± 35% sched_debug.cfs_rq:/.max_vruntime.stddev
34877232 ± 0% -16.9% 28973299 ± 2% sched_debug.cfs_rq:/.min_vruntime.avg
36136568 ± 0% -16.9% 30030834 ± 1% sched_debug.cfs_rq:/.min_vruntime.max
33553337 ± 0% -16.4% 28050567 ± 2% sched_debug.cfs_rq:/.min_vruntime.min
580186 ± 2% -26.0% 429600 ± 11% sched_debug.cfs_rq:/.min_vruntime.stddev
0.08 ±110% +710.0% 0.67 ± 21% sched_debug.cfs_rq:/.nr_running.min
0.17 ± 12% -59.6% 0.07 ± 31% sched_debug.cfs_rq:/.nr_running.stddev
25.39 ± 2% -17.8% 20.88 ± 3% sched_debug.cfs_rq:/.runnable_load_avg.max
0.44 ±173% +1002.5% 4.90 ± 13% sched_debug.cfs_rq:/.runnable_load_avg.min
4.84 ± 2% -39.4% 2.93 ± 8% sched_debug.cfs_rq:/.runnable_load_avg.stddev
952653 ± 15% -51.5% 462372 ± 50% sched_debug.cfs_rq:/.spread0.avg
2206041 ± 10% -31.4% 1514231 ± 8% sched_debug.cfs_rq:/.spread0.max
577122 ± 2% -25.8% 428166 ± 11% sched_debug.cfs_rq:/.spread0.stddev
46.85 ± 3% -34.0% 30.93 ± 24% sched_debug.cfs_rq:/.util_avg.stddev
115635 ± 1% +107.7% 240214 ± 8% sched_debug.cpu.avg_idle.avg
506560 ± 15% +83.7% 930497 ± 4% sched_debug.cpu.avg_idle.max
6833 ±131% +168.7% 18362 ± 34% sched_debug.cpu.avg_idle.min
78999 ± 9% +214.9% 248764 ± 8% sched_debug.cpu.avg_idle.stddev
290289 ± 0% +10.7% 321362 ± 0% sched_debug.cpu.clock.avg
290345 ± 0% +10.7% 321461 ± 0% sched_debug.cpu.clock.max
290230 ± 0% +10.7% 321263 ± 0% sched_debug.cpu.clock.min
34.48 ± 26% +74.7% 60.23 ± 5% sched_debug.cpu.clock.stddev
290289 ± 0% +10.7% 321362 ± 0% sched_debug.cpu.clock_task.avg
290345 ± 0% +10.7% 321461 ± 0% sched_debug.cpu.clock_task.max
290230 ± 0% +10.7% 321263 ± 0% sched_debug.cpu.clock_task.min
34.48 ± 26% +74.7% 60.23 ± 5% sched_debug.cpu.clock_task.stddev
0.50 ± 80% +865.0% 4.82 ± 7% sched_debug.cpu.cpu_load[0].min
2.00 ± 33% +155.0% 5.10 ± 6% sched_debug.cpu.cpu_load[1].min
3.31 ± 17% +59.6% 5.28 ± 6% sched_debug.cpu.cpu_load[2].min
4.28 ± 5% +28.0% 5.47 ± 4% sched_debug.cpu.cpu_load[3].min
29.69 ± 10% -21.4% 23.35 ± 4% sched_debug.cpu.cpu_load[4].max
4.39 ± 5% +24.7% 5.47 ± 4% sched_debug.cpu.cpu_load[4].min
4.99 ± 9% -30.3% 3.47 ± 5% sched_debug.cpu.cpu_load[4].stddev
1275 ± 74% +660.4% 9696 ± 35% sched_debug.cpu.curr->pid.min
2960 ± 11% -54.8% 1338 ± 39% sched_debug.cpu.curr->pid.stddev
0.22 ± 70% +935.0% 2.30 ± 30% sched_debug.cpu.load.min
0.00 ± 11% +39.0% 0.00 ± 4% sched_debug.cpu.next_balance.stddev
245043 ± 0% +12.4% 275488 ± 0% sched_debug.cpu.nr_load_updates.avg
253700 ± 0% +11.3% 282470 ± 0% sched_debug.cpu.nr_load_updates.max
242515 ± 0% +12.5% 272755 ± 0% sched_debug.cpu.nr_load_updates.min
8.93 ± 5% +12.5% 10.05 ± 2% sched_debug.cpu.nr_running.avg
29.08 ± 4% -23.2% 22.35 ± 2% sched_debug.cpu.nr_running.max
0.11 ± 70% +1970.0% 2.30 ± 26% sched_debug.cpu.nr_running.min
6.52 ± 3% -40.5% 3.88 ± 8% sched_debug.cpu.nr_running.stddev
29380032 ± 0% +62.7% 47789650 ± 1% sched_debug.cpu.nr_switches.avg
32480191 ± 0% +63.0% 52947357 ± 1% sched_debug.cpu.nr_switches.max
26568245 ± 0% +64.3% 43639487 ± 2% sched_debug.cpu.nr_switches.min
1724177 ± 1% +28.9% 2223172 ± 5% sched_debug.cpu.nr_switches.stddev
307.39 ± 7% -42.6% 176.42 ± 14% sched_debug.cpu.nr_uninterruptible.max
-278.64 ±-10% -41.9% -162.00 ± -5% sched_debug.cpu.nr_uninterruptible.min
131.21 ± 6% -45.4% 71.66 ± 3% sched_debug.cpu.nr_uninterruptible.stddev
290228 ± 0% +10.7% 321261 ± 0% sched_debug.cpu_clk
286726 ± 0% +11.2% 318853 ± 0% sched_debug.ktime
290228 ± 0% +10.7% 321261 ± 0% sched_debug.sched_clk
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang
[x86/mm] 66174bd03a: BUG: unable to handle kernel paging request at fffff57810000004
by kernel test robot
FYI, we noticed the following commit:
https://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git kaslr/memory
commit 66174bd03a6386eb82f304bea95d69e777144b0b ("x86/mm: Implement ASLR for kernel memory sections (x86_64)")
on test machine: vm-kbuild-yocto-ia32: 1 threads qemu-system-x86_64 -enable-kvm -cpu Westmere with 320M memory
caused below changes:
+----------------+------------+------------+
| | a4f6fdb166 | 66174bd03a |
+----------------+------------+------------+
| boot_successes | 12 | 0 |
+----------------+------------+------------+
[ 0.000000] PID hash table entries: 2048 (order: 2, 16384 bytes)
[ 0.000000] Dentry cache hash table entries: 65536 (order: 7, 524288 bytes)
[ 0.000000] Inode-cache hash table entries: 32768 (order: 6, 262144 bytes)
[ 0.000000] BUG: unable to handle kernel paging request at fffff57810000004
[ 0.000000] IP: [<ffffffffa5c93436>] reserve_bootmem_region+0x282/0x2fd
[ 0.000000] PGD 0
[ 0.000000] Oops: 0000 [#1] SMP KASAN
[ 0.000000] Modules linked in:
[ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 4.7.0-rc1-00031-g66174bd #1
[ 0.000000] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 0.000000] task: ffffffffa6611a40 ti: ffffffffa6600000 task.ti: ffffffffa6600000
[ 0.000000] RIP: 0010:[<ffffffffa5c93436>] [<ffffffffa5c93436>] reserve_bootmem_region+0x282/0x2fd
[ 0.000000] RSP: 0000:ffffffffa6607d70 EFLAGS: 00010802
[ 0.000000] RAX: ffffcbc080000020 RBX: dffffc0000000000 RCX: 0000000000000000
[ 0.000000] RDX: 1ffff97810000004 RSI: 0000000000000010 RDI: ffffffffa6aeac28
[ 0.000000] RBP: ffffffffa6607dc0 R08: ffffffffa6607d38 R09: ffffffffa6d81694
[ 0.000000] R10: 0000000000000001 R11: 0000000000000001 R12: 0000000000000000
[ 0.000000] R13: ffffcbc080000000 R14: ffffffffa6673bc0 R15: 0000000000000000
[ 0.000000] FS: 0000000000000000(0000) GS:ffff923d4bc00000(0000) knlGS:0000000000000000
[ 0.000000] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 0.000000] CR2: fffff57810000004 CR3: 000000000e20a000 CR4: 00000000000006b0
[ 0.000000] Stack:
[ 0.000000] 00000000ffffffff fffffbfff4cce778 ffffffffffffffff 0000000000000010
[ 0.000000] 0000000000000000 ffffffffa6607eb8 1ffffffff4cc0fe0 ffffffffa6607df8
[ 0.000000] ffffffffa6607e78 ffffffffa6607e38 ffffffffa6607ee0 ffffffffa6d9ffc7
[ 0.000000] Call Trace:
[ 0.000000] [<ffffffffa6d9ffc7>] free_all_bootmem+0xe0/0x231
[ 0.000000] [<ffffffffa6d9fee7>] ? reset_all_zones_managed_pages+0xb5/0xb5
[ 0.000000] [<ffffffffa6d6b6e8>] ? check_iommu_entries+0xd3/0x133
[ 0.000000] [<ffffffffa6d82cd5>] mem_init+0x13/0x74
[ 0.000000] [<ffffffffa6d544f4>] start_kernel+0x30a/0x701
[ 0.000000] [<ffffffffa6d541ea>] ? thread_info_cache_init+0xb/0xb
[ 0.000000] [<ffffffffa6d53120>] ? early_idt_handler_array+0x120/0x120
[ 0.000000] [<ffffffffa5c977e4>] ? memblock_reserve+0x59/0x5e
[ 0.000000] [<ffffffffa6d53120>] ? early_idt_handler_array+0x120/0x120
[ 0.000000] [<ffffffffa6d5329a>] x86_64_start_reservations+0x29/0x2b
[ 0.000000] [<ffffffffa6d533da>] x86_64_start_kernel+0x13e/0x14d
[ 0.000000] Code: 57 20 48 89 f9 48 c1 e9 03 80 3c 19 00 74 0d 48 89 55 c0 e8 ad d5 1c ff 48 8b 55 c0 49 89 57 28 49 8d 45 20 48 89 c2 48 c1 ea 03 <80> 3c 1a 00 74 10 48 89 c7 48 89 45 c0 e8 88 d5 1c ff 48 8b 45
[ 0.000000] RIP [<ffffffffa5c93436>] reserve_bootmem_region+0x282/0x2fd
[ 0.000000] RSP <ffffffffa6607d70>
[ 0.000000] CR2: fffff57810000004
[ 0.000000] ---[ end trace 52fd474ee1adf4a2 ]---
[ 0.000000] Kernel panic - not syncing: Fatal exception
FYI, raw QEMU command line is:
qemu-system-x86_64 -enable-kvm -cpu Westmere -kernel /pkg/linux/x86_64-randconfig-s5-06011053/gcc-6/66174bd03a6386eb82f304bea95d69e777144b0b/vmlinuz-4.7.0-rc1-00031-g66174bd -append 'root=/dev/ram0 user=lkp job=/lkp/scheduled/vm-kbuild-yocto-ia32-4/rand_boot-1-yocto-minimal-i386.cgz-x86_64-randconfig-s5-06011053-66174bd03a6386eb82f304bea95d69e777144b0b-20160601-88682-jg99rm-0.yaml ARCH=x86_64 kconfig=x86_64-randconfig-s5-06011053 branch=linux-devel/devel-hourly-2016060108 commit=66174bd03a6386eb82f304bea95d69e777144b0b BOOT_IMAGE=/pkg/linux/x86_64-randconfig-s5-06011053/gcc-6/66174bd03a6386eb82f304bea95d69e777144b0b/vmlinuz-4.7.0-rc1-00031-g66174bd max_uptime=600 RESULT_ROOT=/result/boot/1/vm-kbuild-yocto-ia32/yocto-minimal-i386.cgz/x86_64-randconfig-s5-06011053/gcc-6/66174bd03a6386eb82f304bea95d69e777144b0b/0 LKP_SERVER=inn earlyprintk=ttyS0,115200 systemd.log_level=err debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal rw ip=::::vm-kbuild-yocto-ia32-4::dhcp drbd.minor_count=8' -initrd /fs/sdd1/initrd-vm-kbuild-yocto-ia32-4 -m 320 -smp 1 -device e1000,netdev=net0 -netdev user,id=net0 -boot order=nc -no-reboot -watchdog i6300esb -rtc base=localtime -drive file=/fs/sdd1/disk0-vm-kbuild-yocto-ia32-4,media=disk,if=virtio -pidfile /dev/shm/kboot/pid-vm-kbuild-yocto-ia32-4 -serial file:/dev/shm/kboot/serial-vm-kbuild-yocto-ia32-4 -daemonize -display none -monitor null
Thanks,
Kernel Test Robot
[lkp] [oom_reaper] df1e2f5663: EIP: [<81e30134>] mmput_async+0x9/0x6b SS:ESP 0068:819a5e78
by kernel test robot
FYI, we noticed the following commit:
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit df1e2f56632ddf17186f7036a3bd809d3aed8fd8 ("oom_reaper: close race with exiting task")
on test machine: vm-lkp-wsx03-openwrt-i386: 1 threads qemu-system-i386 -enable-kvm with 192M memory
caused below changes:
+------------------------------------------------+------------+------------+
| | dea6c8c672 | df1e2f5663 |
+------------------------------------------------+------------+------------+
| boot_successes | 23 | 18 |
| boot_failures | 3 | 8 |
| invoked_oom-killer:gfp_mask=0x | 3 | 6 |
| Mem-Info | 3 | 6 |
| Out_of_memory:Kill_process | 3 | 6 |
| backtrace:_do_fork | 1 | |
| backtrace:SyS_clone | 1 | |
| backtrace:process_vm_rw | 1 | |
| backtrace:SyS_process_vm_readv | 1 | |
| backtrace:do_execve | 2 | |
| backtrace:SyS_execve | 2 | |
| backtrace:pgd_alloc | 1 | |
| backtrace:mm_init | 1 | |
| backtrace:vfs_write | 1 | |
| backtrace:SyS_write | 1 | |
| BUG:unable_to_handle_kernel | 0 | 5 |
| Oops | 0 | 5 |
| EIP_is_at_mmput_async | 0 | 5 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 5 |
| backtrace:oom_reaper | 0 | 5 |
| backtrace:do_sys_open | 0 | 1 |
| backtrace:SyS_open | 0 | 1 |
| IP-Config:Auto-configuration_of_network_failed | 0 | 2 |
+------------------------------------------------+------------+------------+
[ 82.815896] BUG: unable to handle kernel NULL pointer dereference at 00000025
[ 82.816733] IP: [<81e30134>] mmput_async+0x9/0x6b
[ 82.817281] *pde = 00000000
[ 82.817628] Oops: 0002 [#1] PREEMPT DEBUG_PAGEALLOC
[ 82.818169] CPU: 0 PID: 13 Comm: oom_reaper Not tainted 4.6.0-10870-gdf1e2f5 #1
[ 82.818973] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 82.819867] task: 819a2340 ti: 819a4000 task.ti: 819a4000
[ 82.820419] EIP: 0060:[<81e30134>] EFLAGS: 00010246 CPU: 0
[ 82.820988] EIP is at mmput_async+0x9/0x6b
[ 82.821413] EAX: 00000001 EBX: 00000001 ECX: 00000000 EDX: 00000000
[ 82.822040] ESI: 00000000 EDI: 819a5e9c EBP: 819a5e7c ESP: 819a5e78
[ 82.822683] DS: 007b ES: 007b FS: 0000 GS: 0000 SS: 0068
[ 82.823226] CR0: 80050033 CR2: 00000025 CR3: 00740000 CR4: 00000690
[ 82.823864] DR0: 6cd78000 DR1: 00000000 DR2: 00000000 DR3: 00000000
[ 82.824511] DR6: ffff0ff0 DR7: 00000600
[ 82.824918] Stack:
[ 82.825131] 00000001 819a5eec 81ed1467 819a5e94 7de80301 00000000 00000000 00000000
[ 82.826043] 00000101 819a5edc 00000246 00000246 819a5eb0 81e5ca86 819a5edc 819a27e8
[ 82.826968] 819a27e8 00000000 000000c2 00000000 819a5edc 819a5edc 81e50726 00000000
[ 82.827881] Call Trace:
[ 82.828147] [<81ed1467>] __oom_reap_task+0x178/0x185
[ 82.828676] [<81e5ca86>] ? put_lock_stats+0xd/0x1d
[ 82.829234] [<81e50726>] ? preempt_count_sub+0x8b/0xce
[ 82.829771] [<81ed18c6>] oom_reaper+0x159/0x190
[ 82.830249] [<81e57fcf>] ? __wake_up_common+0x5f/0x5f
[ 82.830776] [<81ed176d>] ? exit_oom_victim+0x40/0x40
[ 82.831286] [<81e496f1>] kthread+0xad/0xb2
[ 82.831722] [<8231e6a0>] ? _raw_spin_unlock_irq+0x61/0x6e
[ 82.832273] [<8231eec2>] ret_from_kernel_thread+0xe/0x24
[ 82.832824] [<81e49644>] ? __kthread_parkme+0x6e/0x6e
[ 82.833339] Code: 2c 50 68 a5 31 62 82 e8 9b af 09 00 58 5a a1 c4 85 9e 82 89 da e8 33 49 0d 00 8d 65 f4 5b 5e 5f 5d c3 55 89 e5 53 e8 cc f7 4e 00 <ff> 48 24 74 02 eb 56 89 c3 b9 ac 85 9e 82 c7 80 4c 02 00 00 e0
[ 82.836292] EIP: [<81e30134>] mmput_async+0x9/0x6b SS:ESP 0068:819a5e78
[ 82.837000] CR2: 0000000000000025
[ 82.837342] ---[ end trace e937cb7742e041b3 ]---
[ 82.837834] Kernel panic - not syncing: Fatal exception
[ 82.838374] Kernel Offset: 0x8e00000 from 0x79000000 (relocation range: 0x78000000-0x847dffff)
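The kernel offset line can be cross-checked against the addresses in the trace: the runtime base is the link-time base plus the reported offset, and the faulting EIP should land just inside the relocated image. A small arithmetic-only Python sketch using the values from this oops:

```python
# Cross-check "Kernel Offset: 0x8e00000 from 0x79000000 (relocation range:
# 0x78000000-0x847dffff)" against the faulting EIP in the oops above.

link_base = 0x79000000   # compile-time base, from the offset line
kaslr_off = 0x08e00000   # randomized offset, from the offset line
run_base = link_base + kaslr_off

print(hex(run_base))     # runtime base of the relocated kernel image

# mmput_async faulted at EIP 0x81e30134, which sits just above the runtime
# base and inside the stated relocation range, as expected for a symbol
# early in the kernel image.
eip = 0x81e30134
print(run_base <= eip <= 0x847DFFFF)
```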
FYI, raw QEMU command line is:
qemu-system-i386 -enable-kvm -kernel /pkg/linux/i386-randconfig-x0-05271601/gcc-6/df1e2f56632ddf17186f7036a3bd809d3aed8fd8/vmlinuz-4.6.0-10870-gdf1e2f5 -append 'root=/dev/ram0 user=lkp job=/lkp/scheduled/vm-lkp-wsx03-openwrt-i386-6/rand_boot-1-openwrt-i386.cgz-i386-randconfig-x0-05271601-df1e2f56632ddf17186f7036a3bd809d3aed8fd8-20160527-94565-1mp99it-1.yaml ARCH=i386 kconfig=i386-randconfig-x0-05271601 branch=linux-next/master commit=df1e2f56632ddf17186f7036a3bd809d3aed8fd8 BOOT_IMAGE=/pkg/linux/i386-randconfig-x0-05271601/gcc-6/df1e2f56632ddf17186f7036a3bd809d3aed8fd8/vmlinuz-4.6.0-10870-gdf1e2f5 max_uptime=600 RESULT_ROOT=/result/boot/1/vm-lkp-wsx03-openwrt-i386/openwrt-i386.cgz/i386-randconfig-x0-05271601/gcc-6/df1e2f56632ddf17186f7036a3bd809d3aed8fd8/0 LKP_SERVER=inn earlyprintk=ttyS0,115200 systemd.log_level=err debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal rw ip=::::vm-lkp-wsx03-openwrt-i386-6::dhcp drbd.minor_count=8' -initrd /fs/sdc1/initrd-vm-lkp-wsx03-openwrt-i386-6 -m 192 -smp 1 -device e1000,netdev=net0 -netdev user,id=net0 -boot order=nc -no-reboot -watchdog i6300esb -rtc base=localtime -drive file=/fs/sdc1/disk0-vm-lkp-wsx03-openwrt-i386-6,media=disk,if=virtio -drive file=/fs/sdc1/disk1-vm-lkp-wsx03-openwrt-i386-6,media=disk,if=virtio -pidfile /dev/shm/kboot/pid-vm-lkp-wsx03-openwrt-i386-6 -serial file:/dev/shm/kboot/serial-vm-lkp-wsx03-openwrt-i386-6 -daemonize -display none -monitor null
Thanks,
Xiaolong
[x86/uaccess] 6f2d5395a5: BUG: uaccess fault at 0xd3634000 with KERNEL_DS
by kernel test robot
FYI, we noticed the following commit:
https://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git x86/uaccess
commit 6f2d5395a55901d674b41a9ebe2a03ae28050739 ("x86/uaccess: OOPS or warn on a fault with KERNEL_DS and !pagefault_disabled()")
on test machine: vm-vp-quantal-i386: 1 threads qemu-system-i386 -enable-kvm -cpu Haswell,+smep,+smap with 360M memory
caused below changes:
+----------------+------------+------------+
| | 1e260fb1ca | 6f2d5395a5 |
+----------------+------------+------------+
| boot_successes | 4 | 0 |
+----------------+------------+------------+
[ 0.135028] apic 0 pin 23 not connected
[ 0.135488] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
[ 0.135989] TSC deadline timer enabled
[ 0.136509] BUG: uaccess fault at 0xd3634000 with KERNEL_DS
[ 0.136956] BUG: unable to handle kernel paging request at d3634000
[ 0.137471] IP: [<c1102ff3>] copy_mount_options+0x86/0xd8
[ 0.137915] *pdpt = 00000000018a9001 *pde = 0000000014776067 *pte = 8000000013634060
[ 0.138557] Oops: 0000 [#1] DEBUG_PAGEALLOC
[ 0.138883] Modules linked in:
[ 0.139146] CPU: 0 PID: 8 Comm: kdevtmpfs Not tainted 4.7.0-rc1-00030-g6f2d539 #1
[ 0.139728] task: d35eba40 ti: d3632000 task.ti: d3632000
[ 0.140186] task.addr_limit: 0xffffffff
[ 0.140512] EIP: 0060:[<c1102ff3>] EFLAGS: 00210202 CPU: 0
[ 0.140954] EIP is at copy_mount_options+0x86/0xd8
[ 0.141350] EAX: d3634000 EBX: d35c6458 ECX: 00000000 EDX: ffffff00
[ 0.141862] ESI: d35c656e EDI: 00000000 EBP: d3633e88 ESP: d3633e74
[ 0.142420] DS: 007b ES: 007b FS: 0000 GS: 0000 SS: 0068
[ 0.142881] CR0: 80050033 CR2: d3634000 CR3: 018ac000 CR4: 000406b0
[ 0.143391] Stack:
[ 0.143557] 00000eea 00001000 d361baf8 d361baf8 d3633ef4 d3633ea0 c1103c29 d361b3f0
[ 0.144246] d35f1f7c c16f18d3 d3633ef4 d3633f34 c12eeaf6 c16f1939 c16bb4f5 c16f1939
[ 0.144967] 00008000 d3633eea c104e888 00000001 00000001 c179cb60 d35eba40 c179cb60
[ 0.145676] Call Trace:
[ 0.145886] [<c1103c29>] SyS_mount+0x36/0x76
[ 0.146253] [<c12eeaf6>] devtmpfsd+0x4c/0x28e
[ 0.146603] [<c104e888>] ? finish_task_switch+0x109/0x162
[ 0.147037] [<c148aa62>] ? _raw_spin_unlock_irq+0x1d/0x2c
[ 0.147476] [<c104e888>] ? finish_task_switch+0x109/0x162
[ 0.147907] [<c104e85b>] ? finish_task_switch+0xdc/0x162
[ 0.148335] [<c14877ba>] ? __schedule+0x328/0x456
[ 0.148714] [<c12eeaaa>] ? handle_remove+0x233/0x233
[ 0.149112] [<c12eeaaa>] ? handle_remove+0x233/0x233
[ 0.149509] [<c1049a60>] kthread+0xa8/0xad
[ 0.149839] [<c148af22>] ret_from_kernel_thread+0xe/0x24
[ 0.150320] [<c10499b8>] ? kthread_create_on_node+0x11c/0x11c
[ 0.150774] Code: 12 39 c2 72 0e 8b 7d f0 89 f0 89 de 89 7d ec 31 ff eb 0d 8b 55 f0 eb 28 ff 4d ec 46 88 56 ff 40 83 7d ec 00 74 17 8d 76 00 89 f9 <8a> 10 8d 76 00 85 c9 74 e4 8b 4d ec 31 c0 89 f7 f3 aa 8b 55 ec
[ 0.152937] EIP: [<c1102ff3>] copy_mount_options+0x86/0xd8 SS:ESP 0068:d3633e74
[ 0.153532] CR2: 00000000d3634000
[ 0.153799] ---[ end trace 4fb99d4ea4386d1c ]---
[ 0.154199] Kernel panic - not syncing: Fatal exception
FYI, raw QEMU command line is:
qemu-system-i386 -enable-kvm -cpu Haswell,+smep,+smap -kernel /pkg/linux/i386-randconfig-h1-06010516/gcc-6/6f2d5395a55901d674b41a9ebe2a03ae28050739/vmlinuz-4.7.0-rc1-00030-g6f2d539 -append 'root=/dev/ram0 user=lkp job=/lkp/scheduled/vm-vp-quantal-i386-27/bisect_boot-1-quantal-core-i386.cgz-i386-randconfig-h1-06010516-6f2d5395a55901d674b41a9ebe2a03ae28050739-20160601-86867-1u5hj7g-0.yaml ARCH=i386 kconfig=i386-randconfig-h1-06010516 branch=linux-devel/devel-hourly-2016060102 commit=6f2d5395a55901d674b41a9ebe2a03ae28050739 BOOT_IMAGE=/pkg/linux/i386-randconfig-h1-06010516/gcc-6/6f2d5395a55901d674b41a9ebe2a03ae28050739/vmlinuz-4.7.0-rc1-00030-g6f2d539 max_uptime=600 RESULT_ROOT=/result/boot/1/vm-vp-quantal-i386/quantal-core-i386.cgz/i386-randconfig-h1-06010516/gcc-6/6f2d5395a55901d674b41a9ebe2a03ae28050739/0 LKP_SERVER=inn earlyprintk=ttyS0,115200 systemd.log_level=err debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal rw ip=::::vm-vp-quantal-i386-27::dhcp drbd.minor_count=8' -initrd /fs/sde1/initrd-vm-vp-quantal-i386-27 -m 360 -smp 1 -device e1000,netdev=net0 -netdev user,id=net0 -boot order=nc -no-reboot -watchdog i6300esb -rtc base=localtime -pidfile /dev/shm/kboot/pid-vm-vp-quantal-i386-27 -serial file:/dev/shm/kboot/serial-vm-vp-quantal-i386-27 -daemonize -display none -monitor null
Thanks,
Kernel Test Robot