[test] 82cced3b73: BUG: unable to handle kernel
by kernel test robot
Greetings,
0day kernel testing robot got the dmesg below, and the first bad commit is
https://git.kernel.org/pub/scm/linux/kernel/git/mcgrof/linux.git 20160412-sysdata-v1
commit 82cced3b732e0833fd3ccb40237f9dbc0f378cec
Author: Luis R. Rodriguez <mcgrof@kernel.org>
AuthorDate: Mon Feb 29 19:52:08 2016 -0800
Commit: Luis R. Rodriguez <mcgrof@kernel.org>
CommitDate: Tue Apr 12 16:56:01 2016 -0700
test: add sysdata loader tester
This adds a load tester for the new extensible sysdata
file loader, part of firmware_class. The usermode helper
can be safely ignored here.
The sysdata API has two main interfaces, synchronous and
asynchronous; we provide four types of trigger tests for
each (see the usage sketch after the sign-off):
* trigger_request: sync simple loader
* trigger_request_keep: sync, asks for us to manage freeing sysdata
* trigger_request_opt: sync, the file is optional
* trigger_request_opt_default: sync, optional, try test-sysdata.bin
* trigger_async_request: async simple loader
* trigger_async_request_keep: async, asks us to manage freeing sysdata
* trigger_async_request_opt: async, the file is optional
* trigger_async_request_opt_default: async, optional, try test-sysdata.bin
Signed-off-by: Luis R. Rodriguez <mcgrof@kernel.org>
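The triggers above come from a misc device (the failure table below shows backtrace:misc_register), so user space would exercise them by writing a sysdata file name into the matching attribute. A minimal C sketch of one such poke; the /sys path and attribute location are assumptions for illustration and are not given in this report, while test-sysdata.bin is the default name the commit message mentions:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical attribute path; the real location depends on where the
 * tester's misc device registers its sysfs files. */
#define TRIGGER "/sys/devices/virtual/misc/test_sysdata/trigger_request"

int main(void)
{
    const char *name = "test-sysdata.bin";
    int fd = open(TRIGGER, O_WRONLY);

    if (fd < 0) {
        perror("open " TRIGGER);
        return 1;
    }
    /* Writing a file name asks the tester to run one synchronous load;
     * the async and _keep/_opt variants would be separate attributes. */
    if (write(fd, name, strlen(name)) < 0)
        perror("write");
    close(fd);
    return 0;
}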
+------------------------------------------------+------------+------------+------------+
| | 0da7215909 | 82cced3b73 | 58af2f6277 |
+------------------------------------------------+------------+------------+------------+
| boot_successes | 72 | 13 | 11 |
| boot_failures | 2 | 37 | 10 |
| IP-Config:Auto-configuration_of_network_failed | 2 | 5 | |
| BUG:unable_to_handle_kernel | 0 | 5 | 5 |
| Oops | 0 | 5 | 5 |
| RIP:kobject_get | 0 | 17 | 7 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 32 | 10 |
| backtrace:misc_register | 0 | 32 | 10 |
| backtrace:do_init_work | 0 | 32 | 10 |
| general_protection_fault:#[##]SMP | 0 | 27 | 5 |
| WARNING:at_lib/kobject.c:#kobject_get | 0 | 15 | 3 |
| WARNING:at_include/linux/kref.h:#kobject_get | 0 | 2 | 2 |
| RIP:kobj_child_ns_ops | 0 | 15 | 3 |
+------------------------------------------------+------------+------------+------------+
[ 7.359122] xz_dec_test: Create a device node with 'mknod xz_dec_test c 249 0' and write .xz files to it.
[ 7.361319] rbtree testing -> 16998 cycles
[ 8.052945] augmented rbtree testing -> 24642 cycles
[ 9.078335] BUG: unable to handle kernel
[ 9.078657] pci_hotplug: PCI Hot Plug PCI Core version: 0.5
[ 9.078662] cpcihp_zt5550: ZT5550 CompactPCI Hot Plug Driver version: 0.2
[ 9.078747] cpcihp_generic: Generic port I/O CompactPCI Hot Plug Driver version: 0.1
git bisect start 58af2f627767a64a0eef453b521905e1b33826ef bf16200689118d19de1b8d2a3c314fc21f5dc7bb --
git bisect bad a4c154ccfc2ea4f26141467d241237e8f132a2a6 # 10:46 0- 1 Merge 'linux-review/Diego-Herranz/doc-usb-Fix-typo-in-gadget_multi-documentation/20160413-011746' into devel-spot-201604130932
git bisect bad 1cc29d3e4926226a91d193175f92b93438edae38 # 11:01 0- 18 Merge 'linux-review/Stephen-Boyd/devicetree-bindings-designware-pcie-Fix-unit-address/20160413-040430' into devel-spot-201604130932
git bisect bad bacb931f86df1083c59d1ad26ce061e775499031 # 11:26 0- 21 Merge 'ezequielg/nand-err-cleaning' into devel-spot-201604130932
git bisect bad c473d4db7dc648949d464c8992260ad1ab665570 # 11:33 0- 4 Merge 'linux-review/Heinrich-Schuchardt/ASoC-au1x-use-correct-format-specifier/20160413-075734' into devel-spot-201604130932
git bisect bad c542ca8c995d49c5f580de3859eb1cd8f8bcbe39 # 11:44 0- 17 Merge 'jkirsher-next-queue/dev-queue' into devel-spot-201604130932
git bisect bad ab0d36a9cfc3e51aec17a04283b2006a05fb52e7 # 11:58 0- 9 Merge 'mcgrof/20160412-sysdata-v1' into devel-spot-201604130932
git bisect good 3216a555cc5113a3fe2fb00e2b7c120ad0282b31 # 12:04 24+ 2 0day base guard for 'devel-spot-201604130932'
git bisect bad 82cced3b732e0833fd3ccb40237f9dbc0f378cec # 12:11 0- 13 test: add sysdata loader tester
git bisect good 0da721590939e4d2880d5c13a0e4a907797b30ae # 12:18 24+ 0 firmware: add an extensible system data helpers
# first bad commit: [82cced3b732e0833fd3ccb40237f9dbc0f378cec] test: add sysdata loader tester
git bisect good 0da721590939e4d2880d5c13a0e4a907797b30ae # 12:20 70+ 0 firmware: add an extensible system data helpers
# extra tests with DEBUG_INFO
git bisect bad 82cced3b732e0833fd3ccb40237f9dbc0f378cec # 12:26 0- 19 test: add sysdata loader tester
# extra tests on HEAD of linux-devel/devel-spot-201604130932
git bisect bad 58af2f627767a64a0eef453b521905e1b33826ef # 12:26 0- 10 0day head guard for 'devel-spot-201604130932'
# extra tests on tree/branch mcgrof/20160412-sysdata-v1
git bisect bad 82cced3b732e0833fd3ccb40237f9dbc0f378cec # 12:28 0- 16 test: add sysdata loader tester
# extra tests with first bad commit reverted
git bisect good 4d0e17e83dda4553389744db86612a4d84410350 # 12:37 66+ 0 Revert "test: add sysdata loader tester"
# extra tests on tree/branch linus/master
git bisect good 1c74a7f812b135d3df41d7c3671b647aed6467bf # 12:44 72+ 2 Merge branch 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/hid
# extra tests on tree/branch linux-next/master
git bisect good e45df7c04b4805648ea31a3e398e0ade23d368f2 # 12:52 69+ 0 Add linux-next specific files for 20160412
---
0-DAY kernel test infrastructure, Open Source Technology Center
https://lists.01.org/pipermail/lkp, Intel Corporation
[lkp] [scatterlist] c38ecfb12e: kernel BUG at crypto/scatterwalk.c:37!
by kernel test robot
FYI, we noticed the changes below on
https://github.com/0day-ci/linux Shawn-Guo/scatterlist-use-sg_dma_len-in-sg_set_page/20160411-105225
commit c38ecfb12e9a4c0c17d0879090741d6ce2a200de ("scatterlist: use sg_dma_len() in sg_set_page()")
+------------------------------------------------------------------+----------+------------+
| | v4.6-rc3 | c38ecfb12e |
+------------------------------------------------------------------+----------+------------+
| kernel_BUG_at_crypto/scatterwalk.c | 0 | 32 |
| invalid_opcode:#[##]DEBUG_PAGEALLOC | 0 | 32 |
| RIP:scatterwalk_start | 0 | 32 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 32 |
| backtrace:bt_selftest_smp | 0 | 32 |
| backtrace:bt_selftest_init | 0 | 32 |
+------------------------------------------------------------------+----------+------------+
[ 20.259507] cryptomgr_probe (141) used greatest stack depth: 14344 bytes left
[ 20.260567] cryptomgr_probe (145) used greatest stack depth: 14280 bytes left
[ 20.261260] ------------[ cut here ]------------
[ 20.261681] kernel BUG at crypto/scatterwalk.c:37!
[ 20.262293] invalid opcode: 0000 [#1] DEBUG_PAGEALLOC
[ 20.262776] Modules linked in:
[ 20.263070] CPU: 0 PID: 1 Comm: swapper Not tainted 4.6.0-rc3-00001-gc38ecfb #1
[ 20.263714] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 20.264500] task: ffff88001217c000 ti: ffff880012180000 task.ti: ffff880012180000
[ 20.265162] RIP: 0010:[<ffffffff8ac7f8ad>] [<ffffffff8ac7f8ad>] scatterwalk_start+0x41/0x55
[ 20.265959] RSP: 0000:ffff880012183b00 EFLAGS: 00010246
[ 20.266434] RAX: 0000000000000000 RBX: 0000000000000003 RCX: ffffffff8aa734f6
[ 20.267063] RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffffffff8bad0048
[ 20.267700] RBP: ffff880012183b20 R08: ffff880012183d20 R09: 0000000000000002
[ 20.268330] R10: ffff880012183c80 R11: 0000000000000000 R12: ffff880012183d30
[ 20.268966] R13: ffff880012183bf0 R14: 0000000000000000 R15: 0000000000000000
[ 20.269592] FS: 0000000000000000(0000) GS:ffffffff8b830000(0000) knlGS:0000000000000000
[ 20.270332] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 20.270844] CR2: 0000000000000000 CR3: 000000000b80c000 CR4: 00000000000006f0
[ 20.271474] Stack:
[ 20.271661] ffff880012183bd0 0000000000000002 0000000000000000 ffff880012183c80
[ 20.272364] ffff880012183b58 ffffffff8ac82d82 ffff880012183bd0 0000000000000010
[ 20.273069] ffff880010dafc68 0000000000000010 ffffffff8aa734f6 ffff880012183b68
[ 20.273776] Call Trace:
[ 20.274006] [<ffffffff8ac82d82>] blkcipher_walk_first+0x12d/0x243
[ 20.274552] [<ffffffff8aa734f6>] ? aes_decrypt+0x6d/0x6d
[ 20.275034] [<ffffffff8ac82ed2>] blkcipher_walk_virt+0x3a/0x3c
[ 20.275557] [<ffffffff8ac9246f>] crypto_ecb_crypt+0x2e/0x97
[ 20.276060] [<ffffffff8aa28557>] ? __kernel_fpu_end+0x41/0x43
[ 20.276582] [<ffffffff8ac92573>] crypto_ecb_encrypt+0x46/0x55
[ 20.277102] [<ffffffff8ac7dad5>] ? setkey+0x10a/0x117
[ 20.277574] [<ffffffff8aa47e8f>] ? pvclock_clocksource_read+0x6e/0x110
[ 20.278187] [<ffffffff8acbac90>] ? sg_assign_page+0x3a/0x5a
[ 20.278764] [<ffffffff8ac82fe7>] skcipher_crypt_blkcipher+0x35/0x37
[ 20.279337] [<ffffffff8ac82fe7>] ? skcipher_crypt_blkcipher+0x35/0x37
[ 20.279921] [<ffffffff8ac83001>] skcipher_encrypt_blkcipher+0x18/0x1a
[ 20.280504] [<ffffffff8b0db2c1>] smp_e+0x140/0x1a5
[ 20.280969] [<ffffffff8b0db364>] smp_ah+0x3e/0x81
[ 20.281398] [<ffffffff8bf25b84>] bt_selftest_smp+0x127/0x856
[ 20.281919] [<ffffffff8aadcc4e>] ? debug_mutex_unlock+0x233/0x2ac
[ 20.282468] [<ffffffff8b11d451>] ? __mutex_unlock_slowpath+0x1bd/0x1c8
[ 20.283054] [<ffffffff8b11d465>] ? mutex_unlock+0x9/0xb
[ 20.283527] [<ffffffff8bf26646>] bt_selftest_init+0x197/0x1ae
[ 20.284045] [<ffffffff8bf264af>] ? test_ecdh_sample+0x96/0x96
[ 20.284570] [<ffffffff8aa00465>] do_one_initcall+0x12f/0x21c
[ 20.285095] [<ffffffff8bebc414>] kernel_init_freeable+0x115/0x1d3
[ 20.285645] [<ffffffff8b111a1c>] kernel_init+0x9/0x15b
[ 20.286138] [<ffffffff8b120922>] ret_from_fork+0x22/0x40
[ 20.286617] [<ffffffff8b111a13>] ? rest_init+0xba/0xba
[ 20.287085] Code: fd 49 89 f4 48 c7 c7 48 00 ad 8b 45 85 f6 0f 94 c3 31 d2 89 de 48 83 c3 02 e8 2c 63 ec ff 48 ff 04 dd 68 26 c4 8b 45 85 f6 75 02 <0f> 0b 41 8b 44 24 08 5b 41 5c 41 89 45 08 41 5d 41 5e 5d c3 31
[ 20.289560] RIP [<ffffffff8ac7f8ad>] scatterwalk_start+0x41/0x55
[ 20.290117] RSP <ffff880012183b00>
[ 20.290459] ---[ end trace 9e10e2ce7abe5940 ]---
[ 20.290886] Kernel panic - not syncing: Fatal exception
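For reference on the crash itself: the BUG fires in scatterwalk_start(), which in this era rejects a zero-length scatterlist entry, and the patch under test makes sg_set_page() store the length through sg_dma_len(). On configs with a separate dma_length field (CONFIG_NEED_SG_DMA_LENGTH, which x86 selects), sg_dma_len() is not an alias for sg->length, so the length field would stay zero. A minimal user-space sketch of that aliasing hazard, with the kernel's types and macros simplified for illustration (a reading of the suspected mechanism, not the kernel source):

#include <assert.h>
#include <stdio.h>

/* Simplified scatterlist with a separate DMA length, as on configs
 * that select CONFIG_NEED_SG_DMA_LENGTH. */
struct scatterlist {
    unsigned int length;
    unsigned int dma_length;
};

/* On such configs the kernel defines sg_dma_len() against dma_length,
 * not length. */
#define sg_dma_len(sg) ((sg)->dma_length)

/* The patched sg_set_page() stores the byte count through sg_dma_len(). */
static void sg_set_page(struct scatterlist *sg, unsigned int len)
{
    sg_dma_len(sg) = len;
}

int main(void)
{
    struct scatterlist sg = { 0, 0 };

    sg_set_page(&sg, 16);
    printf("length=%u dma_length=%u\n", sg.length, sg.dma_length);
    /* length stays 0, so a check like scatterwalk_start()'s
     * BUG_ON(!sg->length) trips, matching the trace above. */
    assert(sg.length == 0);
    return 0;
}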
FYI, raw QEMU command line is:
qemu-system-x86_64 -enable-kvm -cpu Westmere -kernel /pkg/linux/x86_64-randconfig-n0-04111246/gcc-5/c38ecfb12e9a4c0c17d0879090741d6ce2a200de/vmlinuz-4.6.0-rc3-00001-gc38ecfb -append 'root=/dev/ram0 user=lkp job=/lkp/scheduled/vm-kbuild-yocto-ia32-13/bisect_boot-1-yocto-minimal-i386.cgz-x86_64-randconfig-n0-04111246-c38ecfb12e9a4c0c17d0879090741d6ce2a200de-20160411-124852-85aq5l-0.yaml ARCH=x86_64 kconfig=x86_64-randconfig-n0-04111246 branch=linux-devel/devel-spot-201604111237 commit=c38ecfb12e9a4c0c17d0879090741d6ce2a200de BOOT_IMAGE=/pkg/linux/x86_64-randconfig-n0-04111246/gcc-5/c38ecfb12e9a4c0c17d0879090741d6ce2a200de/vmlinuz-4.6.0-rc3-00001-gc38ecfb max_uptime=600 RESULT_ROOT=/result/boot/1/vm-kbuild-yocto-ia32/yocto-minimal-i386.cgz/x86_64-randconfig-n0-04111246/gcc-5/c38ecfb12e9a4c0c17d0879090741d6ce2a200de/0 LKP_SERVER=inn earlyprintk=ttyS0,115200 systemd.log_level=err debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal rw ip=::::vm-kbuild-yocto-ia32-13::dhcp drbd.minor_count=8' -initrd /fs/sde1/initrd-vm-kbuild-yocto-ia32-13 -m 320 -smp 1 -device e1000,netdev=net0 -netdev user,id=net0 -boot order=nc -no-reboot -watchdog i6300esb -rtc base=localtime -drive file=/fs/sde1/disk0-vm-kbuild-yocto-ia32-13,media=disk,if=virtio -pidfile /dev/shm/kboot/pid-vm-kbuild-yocto-ia32-13 -serial file:/dev/shm/kboot/serial-vm-kbuild-yocto-ia32-13 -daemonize -display none -monitor null
Thanks,
Xiaolong Ye
[lkp] [thp] 5f155ea5ab: No primary change, vm-scalability.time.system_time -51.0% improvement
by kernel test robot
FYI, we noticed a -51.0% improvement in vm-scalability.time.system_time on
https://github.com/0day-ci/linux Kirill-A-Shutemov/thp-keep-huge-zero-page-pinned-until-tlb-flush/20160405-181706
commit 5f155ea5ab62b5321aa7315de62d21da59aa8c65 ("thp: keep huge zero page pinned until tlb flush")
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/tbox_group/test/testcase:
gcc-4.9/performance/x86_64-rhel/debian-x86_64-2015-02-07.cgz/300s/lkp-hsx04/anon-r-seq/vm-scalability
commit:
v4.6-rc2
5f155ea5ab62b5321aa7315de62d21da59aa8c65
v4.6-rc2 5f155ea5ab62b5321aa7315de6
---------------- --------------------------
%stddev %change %stddev
517.69 ± 0% -51.0% 253.81 ± 0% vm-scalability.time.system_time
1694 ±173% +392.6% 8345 ±116% latency_stats.sum.load_module.SYSC_finit_module.SyS_finit_module.entry_SYSCALL_64_fastpath
517.69 ± 0% -51.0% 253.81 ± 0% time.system_time
0.46 ± 2% -16.5% 0.38 ± 2% turbostat.CPU%c6
1731 ± 14% +50.9% 2612 ± 9% cpuidle.C1-HSW.usage
15476193 ± 6% -32.9% 10389552 ± 6% cpuidle.C1E-HSW.time
3.55 ± 3% +26.0% 4.48 ± 1% perf-profile.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_current.pte_alloc_one.do_huge_pmd_anonymous_page
2.45 ± 4% +9.9% 2.69 ± 5% perf-profile.cycles-pp.perf_event_task_tick.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer
0.86 ± 5% +11.9% 0.96 ± 3% perf-profile.cycles-pp.update_curr.task_tick_fair.scheduler_tick.update_process_times.tick_sched_handle
28006 ±113% +143.3% 68134 ± 1% numa-vmstat.node0.nr_active_anon
1359 ± 44% +68.6% 2292 ± 28% numa-vmstat.node0.nr_mapped
82066 ± 31% -44.3% 45740 ± 60% numa-vmstat.node2.nr_file_pages
52443 ± 49% -69.2% 16160 ±171% numa-vmstat.node2.nr_shmem
336.60 ± 7% -20.5% 267.75 ± 8% slabinfo.kmem_cache.active_objs
336.60 ± 7% -20.5% 267.75 ± 8% slabinfo.kmem_cache.num_objs
694.40 ± 4% -12.4% 608.00 ± 4% slabinfo.kmem_cache_node.active_objs
742.40 ± 4% -11.6% 656.00 ± 4% slabinfo.kmem_cache_node.num_objs
136724 ± 93% +117.8% 297755 ± 1% numa-meminfo.node0.Active
111528 ±113% +144.5% 272658 ± 1% numa-meminfo.node0.Active(anon)
5429 ± 43% +70.6% 9262 ± 27% numa-meminfo.node0.Mapped
328531 ± 31% -44.3% 182976 ± 60% numa-meminfo.node2.FilePages
210042 ± 49% -69.2% 64658 ±171% numa-meminfo.node2.Shmem
lkp-hsx04: Brickland Haswell-EX
Memory: 512G
vm-scalability.time.system_time
550 ++--------------------------------------------------------------------+
*..*..*..*.. .*..*..*..*.. .*.*..*..*.. |
500 ++ *..*..*.*..*..*..*..*..*. *. * |
| |
450 ++ |
| |
400 ++ |
| |
350 ++ |
| |
300 ++ |
| O O |
250 O+ O O O O O O O O O O O O O O O O O O O O O O
| |
200 ++--------------------------------------------------------------------+
time.system_time
550 ++--------------------------------------------------------------------+
*..*..*..*.. .*..*..*..*.. .*.*..*..*.. |
500 ++ *..*..*.*..*..*..*..*..*. *. * |
| |
450 ++ |
| |
400 ++ |
| |
350 ++ |
| |
300 ++ |
| O O |
250 O+ O O O O O O O O O O O O O O O O O O O O O O
| |
200 ++--------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Xiaolong Ye
[lkp] [sched/fair] 9e7474b13a: hackbench.throughput -16.6% regression
by kernel test robot
FYI, we noticed a -16.6% regression in hackbench.throughput on
git://bee.sh.intel.com/git/ydu19/tip flat_hierarchy_v1.6
commit 9e7474b13a96cdfcea6f8ea7f8810d05747fa254 ("sched/fair: Drop out incomplete current period when sched averages accrue")
=========================================================================================
compiler/cpufreq_governor/ipc/kconfig/mode/nr_threads/rootfs/tbox_group/testcase:
gcc-4.9/performance/socket/x86_64-rhel/process/1600%/debian-x86_64-2015-02-07.cgz/lkp-ivb-d02/hackbench
commit:
782fb955d6cce91ffc2089fc5a70c751e1581eff
9e7474b13a96cdfcea6f8ea7f8810d05747fa254
782fb955d6cce91f 9e7474b13a96cdfcea6f8ea7f8
---------------- --------------------------
%stddev %change %stddev
21304 ± 0% -16.6% 17766 ± 0% hackbench.throughput
1445902 ± 5% +4671.1% 68985956 ± 2% hackbench.time.involuntary_context_switches
2664583 ± 1% -16.6% 2221049 ± 0% hackbench.time.minor_page_faults
397.00 ± 0% -1.9% 389.50 ± 0% hackbench.time.percent_of_cpu_this_job_got
2339 ± 1% -3.6% 2254 ± 0% hackbench.time.system_time
104.41 ± 1% +19.7% 124.97 ± 1% hackbench.time.user_time
23025389 ± 1% +556.0% 1.511e+08 ± 3% hackbench.time.voluntary_context_switches
63.91 ± 2% +75.1% 111.89 ± 5% uptime.idle
104525 ± 2% +146.5% 257698 ± 4% softirqs.RCU
20353 ± 0% +238.1% 68818 ± 7% softirqs.SCHED
99.55 ± 0% -1.5% 98.03 ± 0% turbostat.%Busy
3277 ± 0% -1.5% 3227 ± 0% turbostat.Avg_MHz
0.32 ± 2% +474.0% 1.82 ± 12% turbostat.CPU%c1
326.50 ± 3% +13.5% 370.50 ± 4% vmstat.procs.r
41166 ± 0% +796.8% 369158 ± 2% vmstat.system.cs
4526 ± 0% +905.2% 45500 ± 5% vmstat.system.in
1445902 ± 5% +4671.1% 68985956 ± 2% time.involuntary_context_switches
2664583 ± 1% -16.6% 2221049 ± 0% time.minor_page_faults
104.41 ± 1% +19.7% 124.97 ± 1% time.user_time
23025389 ± 1% +556.0% 1.511e+08 ± 3% time.voluntary_context_switches
3173807 ± 2% +865.0% 30626136 ± 13% cpuidle.C1-IVB.time
144740 ± 2% +2444.9% 3683489 ± 7% cpuidle.C1-IVB.usage
598777 ± 15% +1035.1% 6797017 ± 20% cpuidle.C1E-IVB.time
2920 ± 10% +3484.9% 104697 ± 15% cpuidle.C1E-IVB.usage
266669 ± 31% +1008.5% 2955953 ± 9% cpuidle.C3-IVB.time
464.50 ± 12% +5824.3% 27518 ± 12% cpuidle.C3-IVB.usage
7113307 ± 1% +8.2% 7697261 ± 4% cpuidle.C6-IVB.time
2109 ± 5% +1144.2% 26240 ± 17% cpuidle.C6-IVB.usage
96.50 ± 19% +5163.7% 5079 ± 14% cpuidle.POLL.usage
212531 ± 0% -11.3% 188498 ± 0% meminfo.Active
115472 ± 0% -20.8% 91406 ± 0% meminfo.Active(anon)
3998 ± 0% +25.9% 5035 ± 19% meminfo.AnonHugePages
110735 ± 0% -21.8% 86579 ± 0% meminfo.AnonPages
909871 ± 0% -25.0% 682060 ± 0% meminfo.Committed_AS
89268 ± 5% -16.1% 74932 ± 10% meminfo.DirectMap4k
41189 ± 0% -29.1% 29223 ± 1% meminfo.KernelStack
81962 ± 1% -26.4% 60283 ± 1% meminfo.PageTables
118988 ± 0% -11.3% 105536 ± 3% meminfo.SUnreclaim
28902 ± 0% -20.9% 22867 ± 2% proc-vmstat.nr_active_anon
27723 ± 0% -21.8% 21674 ± 2% proc-vmstat.nr_anon_pages
2574 ± 0% -29.4% 1818 ± 2% proc-vmstat.nr_kernel_stack
20539 ± 1% -26.8% 15042 ± 2% proc-vmstat.nr_page_table_pages
29910 ± 0% -12.0% 26327 ± 2% proc-vmstat.nr_slab_unreclaimable
4052339 ± 0% +9.0% 4417446 ± 3% proc-vmstat.numa_hit
4052339 ± 0% +9.0% 4417446 ± 3% proc-vmstat.numa_local
2220692 ± 0% +11.0% 2465866 ± 3% proc-vmstat.pgalloc_dma32
2946666 ± 0% +11.7% 3292537 ± 4% proc-vmstat.pgalloc_normal
5127274 ± 0% +12.0% 5744648 ± 3% proc-vmstat.pgfree
2595 ± 0% -18.3% 2119 ± 2% slabinfo.UNIX.active_objs
2643 ± 0% -15.1% 2244 ± 2% slabinfo.UNIX.num_objs
38429 ± 0% -23.4% 29446 ± 1% slabinfo.anon_vma.active_objs
770.75 ± 0% -22.1% 600.25 ± 1% slabinfo.anon_vma.active_slabs
39320 ± 0% -22.1% 30646 ± 1% slabinfo.anon_vma.num_objs
770.75 ± 0% -22.1% 600.25 ± 1% slabinfo.anon_vma.num_slabs
78807 ± 0% -24.0% 59899 ± 1% slabinfo.anon_vma_chain.active_objs
1410 ± 0% -27.7% 1020 ± 1% slabinfo.anon_vma_chain.active_slabs
90271 ± 0% -27.7% 65306 ± 1% slabinfo.anon_vma_chain.num_objs
1410 ± 0% -27.7% 1020 ± 1% slabinfo.anon_vma_chain.num_slabs
3511 ± 1% -21.3% 2762 ± 5% slabinfo.cred_jar.active_objs
3593 ± 1% -13.9% 3095 ± 6% slabinfo.cred_jar.num_objs
2632 ± 0% -21.1% 2077 ± 2% slabinfo.files_cache.active_objs
2743 ± 0% -17.2% 2271 ± 2% slabinfo.files_cache.num_objs
25849 ± 2% +18.1% 30516 ± 9% slabinfo.kmalloc-256.active_objs
12200 ± 2% -13.0% 10609 ± 3% slabinfo.kmalloc-32.active_objs
12375 ± 2% -10.8% 11042 ± 3% slabinfo.kmalloc-32.num_objs
23380 ± 2% +22.0% 28534 ± 10% slabinfo.kmalloc-512.active_objs
2339 ± 0% -23.7% 1785 ± 1% slabinfo.mm_struct.active_objs
2457 ± 0% -18.9% 1992 ± 1% slabinfo.mm_struct.num_objs
3822 ± 2% -19.3% 3083 ± 4% slabinfo.pid.active_objs
3851 ± 2% -15.7% 3246 ± 4% slabinfo.pid.num_objs
2746 ± 0% -27.5% 1992 ± 1% slabinfo.sighand_cache.active_objs
2789 ± 0% -22.2% 2171 ± 1% slabinfo.sighand_cache.num_objs
2884 ± 1% -25.8% 2140 ± 2% slabinfo.signal_cache.active_objs
2957 ± 1% -17.6% 2436 ± 2% slabinfo.signal_cache.num_objs
2834 ± 1% -18.3% 2315 ± 2% slabinfo.sock_inode_cache.active_objs
2898 ± 1% -14.7% 2473 ± 2% slabinfo.sock_inode_cache.num_objs
2583 ± 0% -29.3% 1827 ± 1% slabinfo.task_struct.active_objs
871.75 ± 0% -27.2% 634.75 ± 1% slabinfo.task_struct.active_slabs
2616 ± 0% -27.2% 1905 ± 1% slabinfo.task_struct.num_objs
871.75 ± 0% -27.2% 634.75 ± 1% slabinfo.task_struct.num_slabs
58846 ± 0% -24.6% 44372 ± 1% slabinfo.vm_area_struct.active_objs
2703 ± 0% -23.8% 2059 ± 1% slabinfo.vm_area_struct.active_slabs
59480 ± 0% -23.8% 45317 ± 1% slabinfo.vm_area_struct.num_objs
2703 ± 0% -23.8% 2059 ± 1% slabinfo.vm_area_struct.num_slabs
219.62 ± 6% -93.6% 14.05 ± 87% sched_debug.cfs_rq:/.load.avg
346.38 ± 5% -85.3% 50.87 ± 96% sched_debug.cfs_rq:/.load.max
93.30 ± 13% -98.4% 1.46 ± 5% sched_debug.cfs_rq:/.load.min
98.77 ± 10% -78.3% 21.39 ± 98% sched_debug.cfs_rq:/.load.stddev
246.46 ± 1% +2.4e+08% 5.814e+08 ± 9% sched_debug.cfs_rq:/.load_avg.avg
303.85 ± 1% +3.6e+08% 1.085e+09 ± 6% sched_debug.cfs_rq:/.load_avg.max
196.45 ± 3% +2.1e+07% 41566477 ± 53% sched_debug.cfs_rq:/.load_avg.min
42.48 ± 9% +1.1e+09% 4.465e+08 ± 6% sched_debug.cfs_rq:/.load_avg.stddev
1583097 ± 0% +9103.9% 1.457e+08 ± 4% sched_debug.cfs_rq:/.min_vruntime.avg
1713466 ± 1% +8538.8% 1.48e+08 ± 4% sched_debug.cfs_rq:/.min_vruntime.max
1467468 ± 1% +9678.3% 1.435e+08 ± 4% sched_debug.cfs_rq:/.min_vruntime.min
96088 ± 7% +1771.4% 1798202 ± 8% sched_debug.cfs_rq:/.min_vruntime.stddev
0.90 ± 0% -18.7% 0.73 ± 5% sched_debug.cfs_rq:/.nr_running.min
0.05 ± 6% +185.2% 0.14 ± 10% sched_debug.cfs_rq:/.nr_running.stddev
232.55 ± 1% +307.8% 948.36 ± 3% sched_debug.cfs_rq:/.runnable_load_avg.avg
284.23 ± 2% +277.5% 1072 ± 4% sched_debug.cfs_rq:/.runnable_load_avg.max
185.32 ± 3% +304.4% 749.38 ± 5% sched_debug.cfs_rq:/.runnable_load_avg.min
38.60 ± 14% +265.7% 141.15 ± 10% sched_debug.cfs_rq:/.runnable_load_avg.stddev
92308 ± 63% +2991.3% 2853566 ± 64% sched_debug.cfs_rq:/.spread0.max
95934 ± 8% +1719.3% 1745323 ± 5% sched_debug.cfs_rq:/.spread0.stddev
976.67 ± 0% +55897.0% 546905 ± 9% sched_debug.cfs_rq:/.util_avg.avg
996.95 ± 0% +1.6e+05% 1639044 ± 8% sched_debug.cfs_rq:/.util_avg.max
962.65 ± 0% +3676.2% 36352 ± 80% sched_debug.cfs_rq:/.util_avg.min
13.02 ± 20% +5e+06% 652866 ± 13% sched_debug.cfs_rq:/.util_avg.stddev
678344 ± 4% -55.6% 301114 ± 12% sched_debug.cpu.avg_idle.avg
867183 ± 3% -30.0% 607019 ± 11% sched_debug.cpu.avg_idle.max
379599 ± 21% -82.0% 68178 ± 30% sched_debug.cpu.avg_idle.min
608.49 ± 24% -67.8% 195.73 ± 71% sched_debug.cpu.clock.stddev
608.49 ± 24% -67.8% 195.73 ± 71% sched_debug.cpu.clock_task.stddev
233.04 ± 1% +304.2% 941.96 ± 3% sched_debug.cpu.cpu_load[0].avg
284.62 ± 2% +286.8% 1100 ± 4% sched_debug.cpu.cpu_load[0].max
186.27 ± 3% +301.0% 747.05 ± 11% sched_debug.cpu.cpu_load[0].min
38.46 ± 13% +320.5% 161.75 ± 35% sched_debug.cpu.cpu_load[0].stddev
233.17 ± 1% +306.6% 948.11 ± 3% sched_debug.cpu.cpu_load[1].avg
284.43 ± 2% +281.3% 1084 ± 2% sched_debug.cpu.cpu_load[1].max
186.80 ± 3% +320.7% 785.93 ± 10% sched_debug.cpu.cpu_load[1].min
37.89 ± 14% +217.5% 120.33 ± 28% sched_debug.cpu.cpu_load[1].stddev
233.45 ± 1% +309.8% 956.68 ± 3% sched_debug.cpu.cpu_load[2].avg
284.12 ± 2% +275.6% 1067 ± 1% sched_debug.cpu.cpu_load[2].max
187.70 ± 3% +344.3% 833.88 ± 9% sched_debug.cpu.cpu_load[2].min
37.33 ± 13% +142.6% 90.55 ± 27% sched_debug.cpu.cpu_load[2].stddev
233.83 ± 1% +311.5% 962.18 ± 3% sched_debug.cpu.cpu_load[3].avg
283.65 ± 2% +269.3% 1047 ± 0% sched_debug.cpu.cpu_load[3].max
188.57 ± 2% +362.2% 871.56 ± 8% sched_debug.cpu.cpu_load[3].min
36.68 ± 12% +85.5% 68.05 ± 35% sched_debug.cpu.cpu_load[3].stddev
234.28 ± 1% +312.8% 967.05 ± 3% sched_debug.cpu.cpu_load[4].avg
283.02 ± 1% +264.7% 1032 ± 0% sched_debug.cpu.cpu_load[4].max
189.33 ± 2% +374.4% 898.18 ± 7% sched_debug.cpu.cpu_load[4].min
14599 ± 3% -22.6% 11300 ± 12% sched_debug.cpu.curr->pid.min
817.23 ± 24% +234.2% 2731 ± 29% sched_debug.cpu.curr->pid.stddev
225.26 ± 3% -90.9% 20.47 ±103% sched_debug.cpu.load.avg
365.90 ± 7% -86.1% 50.87 ± 96% sched_debug.cpu.load.max
89.80 ± 13% -98.3% 1.56 ± 9% sched_debug.cpu.load.min
107.38 ± 10% -78.5% 23.08 ±100% sched_debug.cpu.load.stddev
0.00 ± 24% -64.3% 0.00 ± 63% sched_debug.cpu.next_balance.stddev
27.88 ± 4% +163.8% 73.56 ± 22% sched_debug.cpu.nr_running.avg
48.08 ± 6% +334.9% 209.07 ± 25% sched_debug.cpu.nr_running.max
9.85 ± 17% -63.8% 3.56 ± 36% sched_debug.cpu.nr_running.min
14.81 ± 8% +477.7% 85.58 ± 27% sched_debug.cpu.nr_running.stddev
3038641 ± 1% +736.8% 25428093 ± 5% sched_debug.cpu.nr_switches.avg
3155768 ± 2% +1235.2% 42136256 ± 3% sched_debug.cpu.nr_switches.max
2947248 ± 1% +361.9% 13612831 ± 10% sched_debug.cpu.nr_switches.min
80631 ± 42% +13445.3% 10921737 ± 3% sched_debug.cpu.nr_switches.stddev
10.62 ± 47% +406.5% 53.82 ± 16% sched_debug.cpu.nr_uninterruptible.max
-10.10 ±-54% +198.4% -30.13 ±-20% sched_debug.cpu.nr_uninterruptible.min
7.57 ± 48% +333.0% 32.78 ± 17% sched_debug.cpu.nr_uninterruptible.stddev
0.86 ± 8% +52.8% 1.31 ± 7% perf-profile.cycles-pp.___slab_alloc.__slab_alloc.__kmalloc_node_track_caller.__kmalloc_reserve.__alloc_skb
0.96 ± 7% +43.5% 1.37 ± 6% perf-profile.cycles-pp.___slab_alloc.__slab_alloc.kmem_cache_alloc_node.__alloc_skb.alloc_skb_with_frags
12.27 ± 1% -10.7% 10.96 ± 3% perf-profile.cycles-pp.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg
3.03 ± 6% +15.2% 3.49 ± 5% perf-profile.cycles-pp.__kmalloc_node_track_caller.__kmalloc_reserve.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb
3.64 ± 4% +10.6% 4.03 ± 3% perf-profile.cycles-pp.__kmalloc_reserve.isra.34.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb.unix_stream_sendmsg
2.25 ±-44% +1469.4% 35.31 ± 11% perf-profile.cycles-pp.__read_nocancel
0.02 ± 57% +3322.2% 0.77 ± 24% perf-profile.cycles-pp.__schedule.schedule.exit_to_usermode_loop.syscall_return_slowpath.entry_SYSCALL_64_fastpath
0.18 ± 14% +1800.0% 3.52 ± 4% perf-profile.cycles-pp.__schedule.schedule.schedule_timeout.unix_stream_read_generic.unix_stream_recvmsg
0.99 ± 8% +46.0% 1.44 ± 7% perf-profile.cycles-pp.__slab_alloc.__kmalloc_node_track_caller.__kmalloc_reserve.__alloc_skb.alloc_skb_with_frags
1.09 ± 6% +36.1% 1.49 ± 6% perf-profile.cycles-pp.__slab_alloc.kmem_cache_alloc_node.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb
1.19 ± 3% +16.6% 1.39 ± 4% perf-profile.cycles-pp.__slab_free.kfree.skb_release_data.skb_release_all.consume_skb
1.09 ± 7% +18.0% 1.29 ± 6% perf-profile.cycles-pp.__slab_free.kmem_cache_free.kfree_skbmem.consume_skb.unix_stream_read_generic
41.23 ± 2% -71.4% 11.78 ± 26% perf-profile.cycles-pp.__vfs_read.vfs_read.sys_read.entry_SYSCALL_64_fastpath
1.81 ±-55% +1400.4% 27.16 ± 11% perf-profile.cycles-pp.__vfs_read.vfs_read.sys_read.entry_SYSCALL_64_fastpath.__read_nocancel
34.92 ± 2% -66.8% 11.59 ± 26% perf-profile.cycles-pp.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath
0.50 ±170% +4538.1% 23.42 ± 13% perf-profile.cycles-pp.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath.__write_nocancel
0.10 ± 44% +4797.4% 4.65 ± 3% perf-profile.cycles-pp.__wake_up_common.__wake_up_sync_key.sock_def_readable.unix_stream_sendmsg.sock_sendmsg
0.10 ± 42% +4980.0% 5.08 ± 4% perf-profile.cycles-pp.__wake_up_sync_key.sock_def_readable.unix_stream_sendmsg.sock_sendmsg.sock_write_iter
2.58 ±-38% +1125.8% 31.62 ± 14% perf-profile.cycles-pp.__write_nocancel
0.21 ± 10% +859.3% 2.06 ± 5% perf-profile.cycles-pp.activate_task.ttwu_do_activate.try_to_wake_up.default_wake_function.autoremove_wake_function
13.01 ± 0% -11.0% 11.57 ± 3% perf-profile.cycles-pp.alloc_skb_with_frags.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg.sock_write_iter
0.08 ± 43% +5369.7% 4.51 ± 3% perf-profile.cycles-pp.autoremove_wake_function.__wake_up_common.__wake_up_sync_key.sock_def_readable.unix_stream_sendmsg
0.23 ±-434% +255.4% 0.82 ± 18% perf-profile.cycles-pp.call_cpuidle.cpu_startup_entry.start_secondary
14.78 ± 1% -14.8% 12.60 ± 1% perf-profile.cycles-pp.consume_skb.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg.sock_read_iter
0.65 ± 6% +68.3% 1.09 ± 7% perf-profile.cycles-pp.copy_from_iter.skb_copy_datagram_from_iter.unix_stream_sendmsg.sock_sendmsg.sock_write_iter
0.61 ± 12% +71.4% 1.05 ± 7% perf-profile.cycles-pp.copy_to_iter.skb_copy_datagram_iter.unix_stream_read_actor.unix_stream_read_generic.unix_stream_recvmsg
0.47 ± 7% +94.1% 0.91 ± 10% perf-profile.cycles-pp.copy_user_enhanced_fast_string.skb_copy_datagram_from_iter.unix_stream_sendmsg.sock_sendmsg.sock_write_iter
5.52 ± 5% -17.5% 4.56 ± 2% perf-profile.cycles-pp.copy_user_enhanced_fast_string.skb_copy_datagram_iter.unix_stream_read_actor.unix_stream_read_generic.unix_stream_recvmsg
0.25 ±-400% +321.0% 1.05 ± 22% perf-profile.cycles-pp.cpu_startup_entry.start_secondary
0.03 ± 33% +4784.6% 1.59 ± 6% perf-profile.cycles-pp.deactivate_task.__schedule.schedule.schedule_timeout.unix_stream_read_generic
0.08 ± 43% +5260.6% 4.42 ± 3% perf-profile.cycles-pp.default_wake_function.autoremove_wake_function.__wake_up_common.__wake_up_sync_key.sock_def_readable
0.03 ± 40% +3083.3% 0.96 ± 10% perf-profile.cycles-pp.dequeue_entity.dequeue_task_fair.deactivate_task.__schedule.schedule
0.04 ± 40% +2916.7% 1.36 ± 7% perf-profile.cycles-pp.dequeue_task_fair.deactivate_task.__schedule.schedule.schedule_timeout
0.09 ± 27% +1265.7% 1.20 ± 4% perf-profile.cycles-pp.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up
0.17 ± 16% +980.3% 1.78 ± 3% perf-profile.cycles-pp.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up.default_wake_function
98.48 ± 2% -70.1% 29.47 ± 27% perf-profile.cycles-pp.entry_SYSCALL_64_fastpath
2.19 ±-45% +1446.3% 33.87 ± 11% perf-profile.cycles-pp.entry_SYSCALL_64_fastpath.__read_nocancel
2.48 ±-40% +1121.0% 30.28 ± 13% perf-profile.cycles-pp.entry_SYSCALL_64_fastpath.__write_nocancel
0.79 ± 22% +30.3% 1.02 ± 6% perf-profile.cycles-pp.file_has_perm.selinux_file_permission.security_file_permission.rw_verify_area.vfs_write
0.23 ±-434% +370.7% 1.08 ± 22% perf-profile.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry
2.25 ± 4% +37.9% 3.10 ± 3% perf-profile.cycles-pp.kfree.skb_release_data.skb_release_all.consume_skb.unix_stream_read_generic
2.52 ± 2% +17.3% 2.95 ± 4% perf-profile.cycles-pp.kfree_skbmem.consume_skb.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg
2.40 ± 3% +19.4% 2.87 ± 4% perf-profile.cycles-pp.kmem_cache_free.kfree_skbmem.consume_skb.unix_stream_read_generic.unix_stream_recvmsg
5.33 ± 5% -80.0% 1.07 ± 31% perf-profile.cycles-pp.rw_verify_area.vfs_read.sys_read.entry_SYSCALL_64_fastpath
0.20 ±-500% +1673.7% 3.55 ± 11% perf-profile.cycles-pp.rw_verify_area.vfs_read.sys_read.entry_SYSCALL_64_fastpath.__read_nocancel
4.47 ± 7% -78.5% 0.96 ± 29% perf-profile.cycles-pp.rw_verify_area.vfs_write.sys_write.entry_SYSCALL_64_fastpath
0.23 ±-434% +1180.4% 2.95 ± 12% perf-profile.cycles-pp.rw_verify_area.vfs_write.sys_write.entry_SYSCALL_64_fastpath.__write_nocancel
0.21 ± 13% +1674.7% 3.68 ± 5% perf-profile.cycles-pp.schedule.schedule_timeout.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg
0.22 ± 16% +1648.3% 3.80 ± 6% perf-profile.cycles-pp.schedule_timeout.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg.sock_read_iter
4.89 ± 6% -15.7% 4.12 ± 3% perf-profile.cycles-pp.security_file_permission.rw_verify_area.vfs_read.sys_read.entry_SYSCALL_64_fastpath
3.90 ± 7% -11.7% 3.44 ± 2% perf-profile.cycles-pp.security_file_permission.rw_verify_area.vfs_write.sys_write.entry_SYSCALL_64_fastpath
0.59 ± 7% +51.9% 0.90 ± 10% perf-profile.cycles-pp.security_socket_recvmsg.sock_recvmsg.sock_read_iter.__vfs_read.vfs_read
0.74 ± 15% +55.4% 1.15 ± 6% perf-profile.cycles-pp.security_socket_sendmsg.sock_sendmsg.sock_write_iter.__vfs_write.vfs_write
7.99 ± 4% -19.1% 6.47 ± 2% perf-profile.cycles-pp.skb_copy_datagram_iter.unix_stream_read_actor.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg
0.76 ± 3% +52.8% 1.17 ± 10% perf-profile.cycles-pp.skb_queue_tail.unix_stream_sendmsg.sock_sendmsg.sock_write_iter.__vfs_write
10.53 ± 2% -17.0% 8.74 ± 1% perf-profile.cycles-pp.skb_release_all.consume_skb.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg
6.75 ± 2% -12.5% 5.90 ± 2% perf-profile.cycles-pp.skb_release_data.skb_release_all.consume_skb.unix_stream_read_generic.unix_stream_recvmsg
0.87 ± 7% +39.9% 1.22 ± 6% perf-profile.cycles-pp.skb_unlink.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg.sock_read_iter
17.95 ± 0% -15.8% 15.11 ± 1% perf-profile.cycles-pp.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg.sock_write_iter.__vfs_write
1.03 ± 8% +521.1% 6.41 ± 3% perf-profile.cycles-pp.sock_def_readable.unix_stream_sendmsg.sock_sendmsg.sock_write_iter.__vfs_write
0.25 ±-400% +322.0% 1.06 ± 22% perf-profile.cycles-pp.start_secondary
50.85 ± 2% -72.4% 14.01 ± 26% perf-profile.cycles-pp.sys_read.entry_SYSCALL_64_fastpath
2.14 ±-46% +1435.9% 32.87 ± 11% perf-profile.cycles-pp.sys_read.entry_SYSCALL_64_fastpath.__read_nocancel
44.14 ± 3% -68.8% 13.77 ± 27% perf-profile.cycles-pp.sys_write.entry_SYSCALL_64_fastpath
2.39 ±-41% +1111.3% 28.95 ± 13% perf-profile.cycles-pp.sys_write.entry_SYSCALL_64_fastpath.__write_nocancel
0.45 ± 8% +918.0% 4.53 ± 3% perf-profile.cycles-pp.try_to_wake_up.default_wake_function.autoremove_wake_function.__wake_up_common.__wake_up_sync_key
0.25 ± 9% +937.4% 2.57 ± 4% perf-profile.cycles-pp.ttwu_do_activate.constprop.91.try_to_wake_up.default_wake_function.autoremove_wake_function.__wake_up_common
8.18 ± 4% -19.6% 6.58 ± 1% perf-profile.cycles-pp.unix_stream_read_actor.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg.sock_read_iter
49.38 ± 2% -72.5% 13.59 ± 26% perf-profile.cycles-pp.vfs_read.sys_read.entry_SYSCALL_64_fastpath
2.09 ±-47% +1430.9% 32.00 ± 11% perf-profile.cycles-pp.vfs_read.sys_read.entry_SYSCALL_64_fastpath.__read_nocancel
42.36 ± 3% -68.6% 13.32 ± 27% perf-profile.cycles-pp.vfs_write.sys_write.entry_SYSCALL_64_fastpath
2.34 ±-42% +1091.2% 27.88 ± 12% perf-profile.cycles-pp.vfs_write.sys_write.entry_SYSCALL_64_fastpath.__write_nocancel
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/tbox_group/test/testcase:
gcc-4.9/performance/x86_64-rhel/1/debian-x86_64-2015-02-07.cgz/lkp-ivb-d03/shell8/unixbench
commit:
782fb955d6cce91ffc2089fc5a70c751e1581eff
9e7474b13a96cdfcea6f8ea7f8810d05747fa254
782fb955d6cce91f 9e7474b13a96cdfcea6f8ea7f8
---------------- --------------------------
%stddev %change %stddev
5751 ± 0% +2.8% 5914 ± 0% unixbench.score
759980 ± 0% -29.9% 532832 ± 0% unixbench.time.involuntary_context_switches
68713272 ± 0% +2.8% 70660284 ± 0% unixbench.time.minor_page_faults
316.25 ± 0% +4.6% 330.75 ± 0% unixbench.time.percent_of_cpu_this_job_got
367.07 ± 0% +4.1% 381.99 ± 0% unixbench.time.system_time
232.32 ± 0% +5.3% 244.53 ± 0% unixbench.time.user_time
2076488 ± 0% +3.2% 2142947 ± 0% unixbench.time.voluntary_context_switches
129443 ± 0% -15.4% 109553 ± 0% softirqs.SCHED
759980 ± 0% -29.9% 532832 ± 0% time.involuntary_context_switches
224.77 ± 0% -13.8% 193.75 ± 0% uptime.idle
22610 ± 0% -11.5% 20000 ± 0% vmstat.system.cs
1144 ± 1% +8.3% 1239 ± 3% slabinfo.kmalloc-96.active_objs
1144 ± 1% +8.3% 1239 ± 3% slabinfo.kmalloc-96.num_objs
9620 ± 47% -82.1% 1721 ± 17% latency_stats.sum.call_rwsem_down_write_failed.copy_process._do_fork.SyS_clone.do_syscall_64.return_from_SYSCALL_64
32967 ± 27% -84.5% 5100 ± 37% latency_stats.sum.call_rwsem_down_write_failed.unlink_file_vma.free_pgtables.exit_mmap.mmput.flush_old_exec.load_elf_binary.search_binary_handler.do_execveat_common.SyS_execve.do_syscall_64.return_from_SYSCALL_64
28489 ± 13% -74.4% 7288 ± 30% latency_stats.sum.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
80.36 ± 0% +4.4% 83.86 ± 0% turbostat.%Busy
2646 ± 0% +4.4% 2761 ± 0% turbostat.Avg_MHz
14.24 ± 0% -23.3% 10.92 ± 1% turbostat.CPU%c1
0.41 ± 3% -32.5% 0.28 ± 12% turbostat.CPU%c3
14.15 ± 0% +1.5% 14.37 ± 0% turbostat.CorWatt
64382115 ± 0% -23.3% 49374392 ± 0% cpuidle.C1E-IVB.time
600295 ± 0% -18.1% 491829 ± 0% cpuidle.C1E-IVB.usage
19272685 ± 2% -36.3% 12283514 ± 3% cpuidle.C3-IVB.time
148792 ± 4% -27.3% 108185 ± 2% cpuidle.C3-IVB.usage
52929080 ± 2% -10.0% 47653009 ± 1% cpuidle.C6-IVB.time
113947 ± 7% -22.7% 88076 ± 2% cpuidle.C6-IVB.usage
162.63 ± 5% -67.1% 53.46 ± 48% sched_debug.cfs_rq:/.exec_clock.stddev
115.09 ± 6% -98.5% 1.72 ± 3% sched_debug.cfs_rq:/.load.avg
198.56 ± 11% -99.0% 2.00 ± 0% sched_debug.cfs_rq:/.load.max
41.81 ± 29% -96.7% 1.38 ± 15% sched_debug.cfs_rq:/.load.min
61.10 ± 18% -99.5% 0.30 ± 30% sched_debug.cfs_rq:/.load.stddev
147.98 ± 10% +7.7e+07% 1.14e+08 ± 0% sched_debug.cfs_rq:/.load_avg.avg
203.56 ± 17% +5.9e+07% 1.193e+08 ± 1% sched_debug.cfs_rq:/.load_avg.max
102.06 ± 3% +1.1e+08% 1.089e+08 ± 0% sched_debug.cfs_rq:/.load_avg.min
39.75 ± 28% +1.1e+07% 4329918 ± 24% sched_debug.cfs_rq:/.load_avg.stddev
683448 ± 0% +5468.4% 38056825 ± 0% sched_debug.cfs_rq:/.min_vruntime.avg
685927 ± 0% +5459.5% 38134331 ± 0% sched_debug.cfs_rq:/.min_vruntime.max
681176 ± 0% +5477.1% 37990150 ± 0% sched_debug.cfs_rq:/.min_vruntime.min
1870 ± 30% +2912.1% 56334 ± 22% sched_debug.cfs_rq:/.min_vruntime.stddev
0.19 ±-533% +21841.7% 41.14 ± 9% sched_debug.cfs_rq:/.nr_spread_over.avg
0.75 ±-133% +7858.3% 59.69 ± 9% sched_debug.cfs_rq:/.nr_spread_over.max
0.32 ±-307% +4215.7% 14.02 ± 19% sched_debug.cfs_rq:/.nr_spread_over.stddev
86.86 ± 3% +913.1% 880.00 ± 3% sched_debug.cfs_rq:/.runnable_load_avg.avg
104.94 ± 1% +875.8% 1024 ± 0% sched_debug.cfs_rq:/.runnable_load_avg.max
54.25 ± 22% +1197.7% 704.00 ± 15% sched_debug.cfs_rq:/.runnable_load_avg.min
20.91 ± 24% +644.8% 155.71 ± 30% sched_debug.cfs_rq:/.runnable_load_avg.stddev
-2782 ±-29% +4262.9% -121377 ±-52% sched_debug.cfs_rq:/.spread0.min
1871 ± 30% +2913.2% 56397 ± 22% sched_debug.cfs_rq:/.spread0.stddev
784.78 ± 0% +73979.7% 581363 ± 1% sched_debug.cfs_rq:/.util_avg.avg
826.62 ± 1% +2e+05% 1624636 ± 11% sched_debug.cfs_rq:/.util_avg.max
751.44 ± 2% +3338.7% 25840 ± 90% sched_debug.cfs_rq:/.util_avg.min
29.04 ± 43% +2.3e+06% 655306 ± 9% sched_debug.cfs_rq:/.util_avg.stddev
0.68 ± 11% -24.4% 0.51 ± 8% sched_debug.cpu.clock.stddev
0.68 ± 11% -24.4% 0.51 ± 8% sched_debug.cpu.clock_task.stddev
78.17 ± 6% +943.9% 816.00 ± 10% sched_debug.cpu.cpu_load[0].avg
104.31 ± 2% +881.7% 1024 ± 17% sched_debug.cpu.cpu_load[0].max
37.31 ± 33% +1443.7% 576.00 ± 36% sched_debug.cpu.cpu_load[0].min
80.42 ± 8% +887.3% 794.02 ± 2% sched_debug.cpu.cpu_load[1].avg
99.62 ± 3% +862.7% 959.12 ± 8% sched_debug.cpu.cpu_load[1].max
53.44 ± 21% +987.1% 580.94 ± 16% sched_debug.cpu.cpu_load[1].min
18.70 ± 28% +707.9% 151.03 ± 42% sched_debug.cpu.cpu_load[1].stddev
81.47 ± 7% +873.1% 792.77 ± 2% sched_debug.cpu.cpu_load[2].avg
96.19 ± 3% +849.9% 913.69 ± 3% sched_debug.cpu.cpu_load[2].max
63.75 ± 12% +905.5% 641.00 ± 9% sched_debug.cpu.cpu_load[2].min
12.57 ± 24% +736.3% 105.14 ± 30% sched_debug.cpu.cpu_load[2].stddev
81.86 ± 4% +882.3% 804.12 ± 2% sched_debug.cpu.cpu_load[3].avg
93.50 ± 3% +855.7% 893.56 ± 0% sched_debug.cpu.cpu_load[3].max
70.12 ± 6% +918.5% 714.25 ± 5% sched_debug.cpu.cpu_load[3].min
8.90 ± 17% +688.3% 70.17 ± 21% sched_debug.cpu.cpu_load[3].stddev
83.83 ± 2% +881.5% 822.77 ± 1% sched_debug.cpu.cpu_load[4].avg
93.12 ± 2% +847.0% 881.94 ± 1% sched_debug.cpu.cpu_load[4].max
74.94 ± 4% +929.9% 771.81 ± 3% sched_debug.cpu.cpu_load[4].min
6.96 ± 20% +511.7% 42.55 ± 19% sched_debug.cpu.cpu_load[4].stddev
5911 ± 13% +60.2% 9468 ± 16% sched_debug.cpu.curr->pid.avg
9818 ± 7% +36.7% 13418 ± 19% sched_debug.cpu.curr->pid.max
107.91 ± 9% -68.8% 33.72 ± 94% sched_debug.cpu.load.avg
33.56 ± 50% -95.9% 1.38 ± 15% sched_debug.cpu.load.min
518047 ± 0% -11.5% 458691 ± 0% sched_debug.cpu.nr_switches.avg
534319 ± 0% -12.2% 468994 ± 0% sched_debug.cpu.nr_switches.max
503695 ± 0% -10.8% 449233 ± 0% sched_debug.cpu.nr_switches.min
13436 ± 5% -38.9% 8204 ± 19% sched_debug.cpu.nr_switches.stddev
36.06 ± 30% +171.1% 97.75 ± 32% sched_debug.cpu.nr_uninterruptible.max
-28.19 ±-21% +267.0% -103.44 ±-15% sched_debug.cpu.nr_uninterruptible.min
25.73 ± 26% +200.4% 77.31 ± 24% sched_debug.cpu.nr_uninterruptible.stddev
511556 ± 0% -11.6% 452160 ± 0% sched_debug.cpu.sched_count.avg
523416 ± 0% -12.4% 458580 ± 0% sched_debug.cpu.sched_count.max
499741 ± 0% -10.9% 445373 ± 0% sched_debug.cpu.sched_count.min
10860 ± 3% -50.4% 5386 ± 23% sched_debug.cpu.sched_count.stddev
108740 ± 0% -17.6% 89574 ± 0% sched_debug.cpu.sched_goidle.avg
114592 ± 0% -19.3% 92528 ± 1% sched_debug.cpu.sched_goidle.max
103040 ± 0% -16.2% 86324 ± 1% sched_debug.cpu.sched_goidle.min
5317 ± 2% -51.9% 2557 ± 27% sched_debug.cpu.sched_goidle.stddev
=========================================================================================
compiler/kconfig/nr_task/rootfs/tbox_group/test/testcase:
gcc-4.9/x86_64-rhel/100%/debian-x86_64-2015-02-07.cgz/nhm-white2/spawn/unixbench
commit:
782fb955d6cce91ffc2089fc5a70c751e1581eff
9e7474b13a96cdfcea6f8ea7f8810d05747fa254
782fb955d6cce91f 9e7474b13a96cdfcea6f8ea7f8
---------------- --------------------------
%stddev %change %stddev
3697 ± 0% +3.4% 3824 ± 0% unixbench.score
696681 ± 0% +115.8% 1503139 ± 0% unixbench.time.involuntary_context_switches
98235337 ± 0% +3.7% 1.019e+08 ± 0% unixbench.time.minor_page_faults
8376821 ± 0% +3.5% 8670506 ± 0% unixbench.time.voluntary_context_switches
696681 ± 0% +115.8% 1503139 ± 0% time.involuntary_context_switches
138679 ± 0% +8.2% 150042 ± 0% vmstat.system.cs
13752 ± 0% +67.2% 22990 ± 0% vmstat.system.in
24641298 ± 0% -9.8% 22222780 ± 0% cpuidle.C1E-NHM.time
2177636 ± 2% -47.8% 1136290 ± 5% cpuidle.C3-NHM.time
45535 ± 2% -71.3% 13077 ± 5% cpuidle.C3-NHM.usage
17154 ± 4% -34.5% 11232 ± 4% cpuidle.C6-NHM.usage
2239 ± 5% +8.6% 2431 ± 6% slabinfo.kmalloc-256.num_objs
1404 ± 5% -17.2% 1162 ± 11% slabinfo.kmalloc-512.active_objs
304.00 ± 17% -36.8% 192.00 ± 23% slabinfo.kmem_cache_node.active_objs
304.00 ± 17% -36.8% 192.00 ± 23% slabinfo.kmem_cache_node.num_objs
582.04 ± 17% -49.4% 294.70 ± 1% sched_debug.cfs_rq:/.exec_clock.stddev
52.30 ± 24% +3.7e+08% 1.934e+08 ± 0% sched_debug.cfs_rq:/.load_avg.avg
187.88 ± 58% +1.1e+08% 1.991e+08 ± 0% sched_debug.cfs_rq:/.load_avg.max
11.62 ± 17% +1.6e+09% 1.888e+08 ± 0% sched_debug.cfs_rq:/.load_avg.min
59.12 ± 57% +6.7e+06% 3948667 ± 3% sched_debug.cfs_rq:/.load_avg.stddev
1041947 ± 0% +981.7% 11271059 ± 0% sched_debug.cfs_rq:/.min_vruntime.avg
1081772 ± 0% +958.8% 11453444 ± 0% sched_debug.cfs_rq:/.min_vruntime.max
1006384 ± 0% +1001.2% 11082074 ± 0% sched_debug.cfs_rq:/.min_vruntime.min
25413 ± 19% +546.6% 164330 ± 1% sched_debug.cfs_rq:/.min_vruntime.stddev
7.33 ± 11% +8196.8% 608.00 ± 5% sched_debug.cfs_rq:/.runnable_load_avg.avg
16.38 ± 19% +6935.1% 1152 ± 19% sched_debug.cfs_rq:/.runnable_load_avg.max
5.25 ± 20% +5897.4% 314.94 ± 39% sched_debug.cfs_rq:/.runnable_load_avg.stddev
-8572 ±-84% +1913.7% -172629 ±-12% sched_debug.cfs_rq:/.spread0.avg
-44145 ±-19% +719.5% -361756 ± -6% sched_debug.cfs_rq:/.spread0.min
25416 ± 19% +547.0% 164452 ± 1% sched_debug.cfs_rq:/.spread0.stddev
574.16 ± 2% +65254.1% 375234 ± 2% sched_debug.cfs_rq:/.util_avg.avg
708.38 ± 4% +2.1e+05% 1507520 ± 9% sched_debug.cfs_rq:/.util_avg.max
437.25 ± 1% +1109.5% 5288 ± 50% sched_debug.cfs_rq:/.util_avg.min
89.66 ± 6% +6.2e+05% 554928 ± 5% sched_debug.cfs_rq:/.util_avg.stddev
122888 ± 26% +40.6% 172812 ± 15% sched_debug.cpu.avg_idle.min
6.78 ± 11% +8630.0% 592.00 ± 8% sched_debug.cpu.cpu_load[0].avg
16.88 ± 11% +6726.7% 1152 ± 19% sched_debug.cpu.cpu_load[0].max
5.51 ± 10% +5722.9% 321.12 ± 40% sched_debug.cpu.cpu_load[0].stddev
6.62 ± 8% +8627.6% 578.20 ± 4% sched_debug.cpu.cpu_load[1].avg
15.25 ± 18% +6354.9% 984.38 ± 11% sched_debug.cpu.cpu_load[1].max
2.38 ± 40% +13505.3% 323.12 ± 35% sched_debug.cpu.cpu_load[1].min
4.24 ± 21% +4922.7% 213.12 ± 28% sched_debug.cpu.cpu_load[1].stddev
7.20 ± 17% +7728.0% 563.86 ± 3% sched_debug.cpu.cpu_load[2].avg
19.12 ± 35% +4209.2% 824.12 ± 6% sched_debug.cpu.cpu_load[2].max
3.25 ± 7% +11623.1% 381.00 ± 17% sched_debug.cpu.cpu_load[2].min
5.02 ± 43% +2697.9% 140.34 ± 20% sched_debug.cpu.cpu_load[2].stddev
8.09 ± 24% +6808.1% 559.12 ± 1% sched_debug.cpu.cpu_load[3].avg
23.00 ± 56% +2986.4% 709.88 ± 3% sched_debug.cpu.cpu_load[3].max
3.88 ± 10% +10787.1% 421.88 ± 9% sched_debug.cpu.cpu_load[3].min
6.13 ± 69% +1400.0% 92.01 ± 13% sched_debug.cpu.cpu_load[3].stddev
8.20 ± 26% +6841.3% 569.41 ± 1% sched_debug.cpu.cpu_load[4].avg
23.75 ± 58% +2710.0% 667.38 ± 3% sched_debug.cpu.cpu_load[4].max
4.25 ± 10% +10685.3% 458.38 ± 2% sched_debug.cpu.cpu_load[4].min
6.41 ± 70% +939.9% 66.62 ± 6% sched_debug.cpu.cpu_load[4].stddev
5760 ± 19% -35.8% 3700 ± 49% sched_debug.cpu.curr->pid.stddev
0.77 ± 8% -33.0% 0.51 ± 22% sched_debug.cpu.nr_running.stddev
472982 ± 1% +16.2% 549406 ± 0% sched_debug.cpu.nr_switches.min
38497 ± 20% -69.1% 11893 ± 1% sched_debug.cpu.nr_switches.stddev
87.12 ± 11% -17.5% 71.88 ± 6% sched_debug.cpu.nr_uninterruptible.max
471433 ± 1% +15.9% 546547 ± 0% sched_debug.cpu.sched_count.min
36045 ± 22% -72.6% 9859 ± 3% sched_debug.cpu.sched_count.stddev
12488 ± 23% -72.0% 3496 ± 1% sched_debug.cpu.sched_goidle.stddev
215968 ± 5% -19.8% 173290 ± 0% sched_debug.cpu.ttwu_count.max
79635 ± 44% +101.0% 160040 ± 0% sched_debug.cpu.ttwu_count.min
53558 ± 27% -89.5% 5619 ± 1% sched_debug.cpu.ttwu_count.stddev
20904 ± 0% -11.3% 18539 ± 0% sched_debug.cpu.ttwu_local.avg
23866 ± 2% -18.3% 19497 ± 0% sched_debug.cpu.ttwu_local.max
2412 ± 17% -73.4% 641.69 ± 4% sched_debug.cpu.ttwu_local.stddev
=========================================================================================
compiler/kconfig/nr_task/rootfs/tbox_group/test/testcase:
gcc-4.9/x86_64-rhel/100%/debian-x86_64-2015-02-07.cgz/nhm-white2/shell8/unixbench
commit:
782fb955d6cce91ffc2089fc5a70c751e1581eff
9e7474b13a96cdfcea6f8ea7f8810d05747fa254
782fb955d6cce91f 9e7474b13a96cdfcea6f8ea7f8
---------------- --------------------------
%stddev %change %stddev
10405 ± 0% +2.6% 10680 ± 0% unixbench.score
1842798 ± 0% -58.0% 773428 ± 0% unixbench.time.involuntary_context_switches
1.246e+08 ± 0% +2.6% 1.279e+08 ± 0% unixbench.time.minor_page_faults
654.75 ± 21% +14.6% 750.50 ± 0% unixbench.time.percent_of_cpu_this_job_got
557.37 ± 0% +3.6% 577.67 ± 0% unixbench.time.user_time
5173 ± 6% +20.0% 6207 ± 1% meminfo.KernelStack
320.00 ± 6% +21.2% 388.00 ± 1% proc-vmstat.nr_kernel_stack
138144 ± 1% -38.8% 84496 ± 1% softirqs.SCHED
1842798 ± 0% -58.0% 773428 ± 0% time.involuntary_context_switches
536.05 ± 97% -61.2% 207.84 ± 0% uptime.idle
83.09 ± 21% +14.0% 94.72 ± 0% turbostat.%Busy
2429 ± 21% +14.1% 2771 ± 0% turbostat.Avg_MHz
1.48 ± 1% -88.7% 0.17 ± 2% turbostat.CPU%c1
0.10 ± 24% -28.6% 0.08 ± 6% turbostat.CPU%c3
0.19 ± 22% -35.5% 0.12 ± 17% turbostat.Pkg%pc3
5568104 ± 4% -91.8% 454715 ± 10% cpuidle.C1-NHM.time
95013 ± 7% -87.0% 12330 ± 34% cpuidle.C1-NHM.usage
9238028 ± 3% -94.1% 547129 ± 8% cpuidle.C1E-NHM.time
139659 ± 2% -94.4% 7836 ± 7% cpuidle.C1E-NHM.usage
7788431 ± 11% -86.7% 1036028 ± 2% cpuidle.C3-NHM.time
93260 ± 4% -95.5% 4218 ± 7% cpuidle.C3-NHM.usage
54514 ±103% -82.2% 9722 ± 1% cpuidle.C6-NHM.usage
1632 ± 12% -92.1% 129.00 ± 56% cpuidle.POLL.time
87.50 ± 13% -89.1% 9.50 ± 52% cpuidle.POLL.usage
72.26 ± 6% -75.0% 18.09 ± 16% sched_debug.cfs_rq:/.exec_clock.stddev
70.28 ± 14% -97.8% 1.58 ± 1% sched_debug.cfs_rq:/.load.avg
220.29 ± 50% -99.1% 2.00 ± 0% sched_debug.cfs_rq:/.load.max
15.19 ± 76% -90.9% 1.38 ± 15% sched_debug.cfs_rq:/.load.min
67.27 ± 53% -99.7% 0.23 ± 38% sched_debug.cfs_rq:/.load.stddev
81.68 ± 16% +4.1e+07% 33204344 ± 1% sched_debug.cfs_rq:/.load_avg.avg
154.75 ± 21% +2.6e+07% 40432320 ± 4% sched_debug.cfs_rq:/.load_avg.max
53.10 ± 15% +4.8e+07% 25323904 ± 5% sched_debug.cfs_rq:/.load_avg.min
34.64 ± 25% +1.4e+07% 4981171 ± 13% sched_debug.cfs_rq:/.load_avg.stddev
2154140 ± 15% +1895.2% 42980390 ± 0% sched_debug.cfs_rq:/.min_vruntime.avg
2173234 ± 14% +1884.2% 43121931 ± 0% sched_debug.cfs_rq:/.min_vruntime.max
2130729 ± 15% +1909.8% 42824256 ± 0% sched_debug.cfs_rq:/.min_vruntime.min
12947 ± 16% +639.6% 95751 ± 26% sched_debug.cfs_rq:/.min_vruntime.stddev
0.07 ±103% +27296.0% 17.84 ± 4% sched_debug.cfs_rq:/.nr_spread_over.avg
0.40 ±100% +10873.7% 43.44 ± 13% sched_debug.cfs_rq:/.nr_spread_over.max
0.14 ±100% +9295.2% 13.50 ± 19% sched_debug.cfs_rq:/.nr_spread_over.stddev
53.85 ± 16% +1400.6% 808.00 ± 1% sched_debug.cfs_rq:/.runnable_load_avg.avg
66.02 ± 15% +1451.0% 1024 ± 0% sched_debug.cfs_rq:/.runnable_load_avg.max
37.40 ± 42% +1782.6% 704.00 ± 15% sched_debug.cfs_rq:/.runnable_load_avg.min
8.80 ± 55% +1251.5% 118.92 ± 38% sched_debug.cfs_rq:/.runnable_load_avg.stddev
-26313 ±-60% +686.9% -207064 ±-30% sched_debug.cfs_rq:/.spread0.min
12935 ± 16% +641.6% 95928 ± 26% sched_debug.cfs_rq:/.spread0.stddev
749.07 ± 16% +50878.0% 381861 ± 3% sched_debug.cfs_rq:/.util_avg.avg
807.19 ± 15% +2.4e+05% 1965535 ± 21% sched_debug.cfs_rq:/.util_avg.max
688.31 ± 16% +580.3% 4682 ± 41% sched_debug.cfs_rq:/.util_avg.min
38.77 ± 15% +1.7e+06% 646951 ± 20% sched_debug.cfs_rq:/.util_avg.stddev
287652 ± 31% +49.0% 428560 ± 6% sched_debug.cpu.avg_idle.min
162349 ± 7% -35.4% 104877 ± 14% sched_debug.cpu.avg_idle.stddev
53.76 ± 16% +1492.4% 856.00 ± 3% sched_debug.cpu.cpu_load[0].avg
67.17 ± 15% +1615.1% 1152 ± 19% sched_debug.cpu.cpu_load[0].max
37.15 ± 43% +1967.5% 768.00 ± 0% sched_debug.cpu.cpu_load[0].min
8.95 ± 52% +1647.1% 156.45 ± 45% sched_debug.cpu.cpu_load[0].stddev
53.48 ± 17% +1457.5% 832.96 ± 1% sched_debug.cpu.cpu_load[1].avg
65.46 ± 15% +1457.8% 1019 ± 9% sched_debug.cpu.cpu_load[1].max
40.12 ± 28% +1784.7% 756.25 ± 2% sched_debug.cpu.cpu_load[1].min
7.37 ± 37% +1166.6% 93.34 ± 33% sched_debug.cpu.cpu_load[1].stddev
53.54 ± 16% +1445.5% 827.41 ± 0% sched_debug.cpu.cpu_load[2].avg
64.02 ± 16% +1365.6% 938.31 ± 4% sched_debug.cpu.cpu_load[2].max
43.81 ± 19% +1632.4% 759.00 ± 3% sched_debug.cpu.cpu_load[2].min
5.93 ± 28% +860.3% 56.94 ± 23% sched_debug.cpu.cpu_load[2].stddev
53.58 ± 16% +1445.1% 827.92 ± 0% sched_debug.cpu.cpu_load[3].avg
62.00 ± 16% +1349.9% 898.94 ± 2% sched_debug.cpu.cpu_load[3].max
46.56 ± 15% +1562.6% 774.12 ± 2% sched_debug.cpu.cpu_load[3].min
4.62 ± 26% +740.3% 38.79 ± 16% sched_debug.cpu.cpu_load[3].stddev
53.77 ± 16% +1449.4% 833.12 ± 0% sched_debug.cpu.cpu_load[4].avg
61.08 ± 16% +1361.1% 892.50 ± 1% sched_debug.cpu.cpu_load[4].max
48.38 ± 14% +1532.6% 789.75 ± 1% sched_debug.cpu.cpu_load[4].min
3.99 ± 24% +705.8% 32.13 ± 20% sched_debug.cpu.cpu_load[4].stddev
63.66 ± 7% -85.0% 9.58 ±144% sched_debug.cpu.load.avg
15.75 ± 65% -91.3% 1.38 ± 15% sched_debug.cpu.load.min
465366 ± 15% -36.2% 296986 ± 0% sched_debug.cpu.nr_switches.avg
489166 ± 14% -36.6% 310220 ± 0% sched_debug.cpu.nr_switches.max
452338 ± 15% -37.6% 282296 ± 1% sched_debug.cpu.nr_switches.min
12631 ± 5% -28.5% 9029 ± 18% sched_debug.cpu.nr_switches.stddev
91.48 ± 9% +53.1% 140.06 ± 12% sched_debug.cpu.nr_uninterruptible.max
-83.04 ±-14% +88.0% -156.12 ±-28% sched_debug.cpu.nr_uninterruptible.min
54.46 ± 8% +74.0% 94.74 ± 15% sched_debug.cpu.nr_uninterruptible.stddev
461439 ± 15% -36.5% 292946 ± 0% sched_debug.cpu.sched_count.avg
482544 ± 14% -36.9% 304578 ± 0% sched_debug.cpu.sched_count.max
448484 ± 15% -37.7% 279598 ± 1% sched_debug.cpu.sched_count.min
11812 ± 4% -31.9% 8047 ± 18% sched_debug.cpu.sched_count.stddev
22491 ± 19% -91.8% 1851 ± 11% sched_debug.cpu.sched_goidle.avg
24845 ± 18% -81.7% 4545 ± 25% sched_debug.cpu.sched_goidle.max
20564 ± 19% -94.9% 1055 ± 8% sched_debug.cpu.sched_goidle.min
229661 ± 15% -29.5% 161879 ± 0% sched_debug.cpu.ttwu_count.avg
245728 ± 14% -30.3% 171334 ± 1% sched_debug.cpu.ttwu_count.max
217816 ± 14% -29.3% 153946 ± 0% sched_debug.cpu.ttwu_count.min
9399 ± 10% -37.1% 5914 ± 19% sched_debug.cpu.ttwu_count.stddev
172413 ± 15% -26.7% 126368 ± 0% sched_debug.cpu.ttwu_local.avg
179174 ± 14% -28.0% 128998 ± 0% sched_debug.cpu.ttwu_local.max
166227 ± 14% -25.9% 123148 ± 0% sched_debug.cpu.ttwu_local.min
4190 ± 7% -56.2% 1835 ± 26% sched_debug.cpu.ttwu_local.stddev
0.00 ±141% +188.6% 0.00 ± 30% sched_debug.rt_rq:/.rt_time.min
lkp-ivb-d02: Ivy Bridge
Memory: 8G
lkp-ivb-d03: Ivy Bridge
Memory: 4G
nhm-white2: Nehalem
Memory: 4G
vm-client-e5450-openwrt-ia32: qemu-system-x86_64 -enable-kvm
Memory: 192M
unixbench.score
5950 ++-------------------------------------------------------------------+
| OO O |
5900 ++ O OOO OO OO O |
| |
5850 OO O O O O O |
| O O OO O O |
5800 ++ |
| *. |
5750 ++ .* *.* * *. .**. .* *.* .* .* *|
**.** *.* * :+ * *.**.* ** **.*** *.* *.** * *
5700 ++ + : * *.**.* |
| * * |
5650 ++ :+ |
| * |
5600 ++-------------------------------------------------------------------+
unixbench.time.user_time
246 ++-------------O---O--O--O--O-----------------------------------------+
|O O O O OO O O O O O OO OO |
244 O+ O O O O |
242 ++ |
| |
240 ++ |
| |
238 ++ |
| |
236 ++ |
234 ++ *. |
| .**.* .**.**. : **.**. .**.* .**.**. .**.* .**.**.* .**. *. |
232 ** * * .* **.** * ** * * * *|
| * *
[ASCII trend plots omitted for readability. Each plot tracks one metric across runs, comparing bisect-good (*) against bisect-bad (O) samples. Plots were shown for:
unixbench.time.system_time
unixbench.time.percent_of_cpu_this_job_got
unixbench.time.minor_page_faults
unixbench.time.voluntary_context_switches
unixbench.time.involuntary_context_switches
hackbench.throughput
hackbench.time.minor_page_faults
hackbench.time.voluntary_context_switches
hackbench.time.involuntary_context_switches]
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
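For clarity, the same flow with the implicit steps spelled out (a sketch; "job.yaml" stands for the job file attached to this email, saved locally):
    # Reproduce sketch, assuming the attached job file was saved as job.yaml:
    git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
    cd lkp-tests
    cp /path/to/saved/job.yaml .     # hypothetical path to the email attachment
    bin/lkp install job.yaml         # installs the benchmark and its dependencies
    bin/lkp run job.yaml             # runs the job and records the results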
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Xiaolong Ye
[lkp] [sched/fair] ec419c38ec: unixbench.score +2.7% improvement
by kernel test robot
FYI, we noticed a +2.7% improvement in unixbench.score on
git://bee.sh.intel.com/git/ydu19/tip flat_hierarchy_v2
commit ec419c38ecd766e8e41e3232e9f3d473c7d56b3a ("sched/fair: Drop out incomplete current period when sched averages accrue")
=========================================================================================
compiler/kconfig/nr_task/rootfs/tbox_group/test/testcase:
gcc-4.9/x86_64-rhel/100%/debian-x86_64-2015-02-07.cgz/nhm-white2/shell8/unixbench
commit:
19aa29a06823363298646327ecf461e70758b483
ec419c38ecd766e8e41e3232e9f3d473c7d56b3a
19aa29a068233632    ec419c38ecd766e8e41e3232e9
----------------    --------------------------
       fail:runs    %reproduction    fail:runs
           |              |              |
10397 ± 0% +2.7% 10673 ± 0% unixbench.score
1839526 ± 0% -57.9% 774575 ± 0% unixbench.time.involuntary_context_switches
1.245e+08 ± 0% +2.7% 1.278e+08 ± 0% unixbench.time.minor_page_faults
737.00 ± 0% +1.8% 750.50 ± 0% unixbench.time.percent_of_cpu_this_job_got
556.78 ± 0% +3.7% 577.20 ± 0% unixbench.time.user_time
12060 ± 31% +247.1% 41859 ± 36% latency_stats.sum.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
5391 ± 1% +16.0% 6252 ± 0% meminfo.KernelStack
137262 ± 0% -38.4% 84501 ± 0% softirqs.SCHED
1839526 ± 0% -57.9% 774575 ± 0% time.involuntary_context_switches
37242 ± 0% -30.6% 25830 ± 0% vmstat.system.cs
10778 ± 0% -8.5% 9857 ± 0% vmstat.system.in
93.38 ± 0% +1.4% 94.71 ± 0% turbostat.%Busy
2731 ± 0% +1.4% 2770 ± 0% turbostat.Avg_MHz
1.48 ± 1% -88.9% 0.17 ± 5% turbostat.CPU%c1
1676 ± 1% +13.2% 1897 ± 1% slabinfo.cred_jar.active_objs
10224 ± 3% +12.1% 11462 ± 4% slabinfo.kmalloc-32.active_objs
10444 ± 3% +11.6% 11658 ± 4% slabinfo.kmalloc-32.num_objs
2548 ± 4% +5.7% 2692 ± 5% slabinfo.pid.num_objs
5442033 ± 1% -92.1% 432360 ± 8% cpuidle.C1-NHM.time
90309 ± 1% -86.4% 12297 ± 27% cpuidle.C1-NHM.usage
9093083 ± 2% -93.8% 567479 ± 10% cpuidle.C1E-NHM.time
138870 ± 1% -94.2% 8031 ± 13% cpuidle.C1E-NHM.usage
7175826 ± 3% -84.9% 1083522 ± 7% cpuidle.C3-NHM.time
91126 ± 3% -95.1% 4434 ± 8% cpuidle.C3-NHM.usage
21635 ± 4% -55.7% 9588 ± 2% cpuidle.C6-NHM.usage
1547 ± 17% -91.6% 130.00 ± 18% cpuidle.POLL.time
81.00 ± 12% -88.9% 9.00 ± 13% cpuidle.POLL.usage
63.96 ± 21% -58.2% 26.70 ± 68% sched_debug.cfs_rq:/.exec_clock.stddev
78.16 ± 39% -87.7% 9.62 ±143% sched_debug.cfs_rq:/.load.avg
13.81 ± 36% -89.1% 1.50 ± 0% sched_debug.cfs_rq:/.load.min
83.85 ± 4% +3.9e+07% 33031208 ± 0% sched_debug.cfs_rq:/.load_avg.avg
147.62 ± 11% +2.8e+07% 41895680 ± 6% sched_debug.cfs_rq:/.load_avg.max
58.88 ± 1% +3.9e+07% 22927040 ± 16% sched_debug.cfs_rq:/.load_avg.min
30.81 ± 15% +2e+07% 6076286 ± 33% sched_debug.cfs_rq:/.load_avg.stddev
1967526 ± 0% +2084.6% 42982191 ± 0% sched_debug.cfs_rq:/.min_vruntime.avg
1986032 ± 0% +2073.0% 43156774 ± 0% sched_debug.cfs_rq:/.min_vruntime.max
1948430 ± 0% +2097.8% 42822398 ± 0% sched_debug.cfs_rq:/.min_vruntime.min
12142 ± 27% +795.8% 108769 ± 21% sched_debug.cfs_rq:/.min_vruntime.stddev
0.09 ±-1066% +18625.0% 17.55 ± 4% sched_debug.cfs_rq:/.nr_spread_over.avg
0.75 ±-133% +5133.3% 39.25 ± 24% sched_debug.cfs_rq:/.nr_spread_over.max
0.25 ±-403% +5061.5% 12.80 ± 22% sched_debug.cfs_rq:/.nr_spread_over.stddev
58.11 ± 3% +1345.5% 840.00 ± 1% sched_debug.cfs_rq:/.runnable_load_avg.avg
68.00 ± 2% +1500.0% 1088 ± 10% sched_debug.cfs_rq:/.runnable_load_avg.max
43.75 ± 33% +1655.4% 768.00 ± 0% sched_debug.cfs_rq:/.runnable_load_avg.min
7.60 ± 56% +1637.8% 132.02 ± 27% sched_debug.cfs_rq:/.runnable_load_avg.stddev
22539 ± 60% +808.0% 204668 ± 55% sched_debug.cfs_rq:/.spread0.max
-15041 ±-70% +761.6% -129597 ±-39% sched_debug.cfs_rq:/.spread0.min
12137 ± 27% +796.4% 108802 ± 20% sched_debug.cfs_rq:/.spread0.stddev
811.48 ± 0% +43541.4% 354139 ± 5% sched_debug.cfs_rq:/.util_avg.avg
869.12 ± 1% +1.9e+05% 1638566 ± 5% sched_debug.cfs_rq:/.util_avg.max
743.88 ± 1% +727.2% 6153 ± 44% sched_debug.cfs_rq:/.util_avg.min
39.33 ± 6% +1.4e+06% 555702 ± 8% sched_debug.cfs_rq:/.util_avg.stddev
480108 ± 5% +13.3% 544065 ± 2% sched_debug.cpu.avg_idle.avg
212617 ± 21% +54.0% 327335 ± 11% sched_debug.cpu.avg_idle.min
186637 ± 6% -31.0% 128767 ± 12% sched_debug.cpu.avg_idle.stddev
57.17 ± 1% +1383.2% 848.00 ± 3% sched_debug.cpu.cpu_load[0].avg
67.75 ± 0% +1505.9% 1088 ± 10% sched_debug.cpu.cpu_load[0].max
38.50 ± 18% +1894.8% 768.00 ± 0% sched_debug.cpu.cpu_load[0].min
9.14 ± 24% +1344.5% 132.01 ± 30% sched_debug.cpu.cpu_load[0].stddev
57.66 ± 0% +1352.1% 837.35 ± 1% sched_debug.cpu.cpu_load[1].avg
66.75 ± 0% +1380.6% 988.31 ± 3% sched_debug.cpu.cpu_load[1].max
45.50 ± 5% +1579.3% 764.06 ± 0% sched_debug.cpu.cpu_load[1].min
6.45 ± 12% +1149.7% 80.65 ± 18% sched_debug.cpu.cpu_load[1].stddev
57.53 ± 0% +1351.2% 834.91 ± 0% sched_debug.cpu.cpu_load[2].avg
66.12 ± 2% +1316.4% 936.62 ± 2% sched_debug.cpu.cpu_load[2].max
48.50 ± 2% +1476.8% 764.75 ± 1% sched_debug.cpu.cpu_load[2].min
5.22 ± 8% +985.5% 56.65 ± 12% sched_debug.cpu.cpu_load[2].stddev
57.43 ± 0% +1350.6% 833.06 ± 0% sched_debug.cpu.cpu_load[3].avg
65.12 ± 2% +1290.8% 905.75 ± 2% sched_debug.cpu.cpu_load[3].max
50.56 ± 1% +1433.4% 775.31 ± 1% sched_debug.cpu.cpu_load[3].min
4.47 ± 6% +834.0% 41.77 ± 17% sched_debug.cpu.cpu_load[3].stddev
57.62 ± 0% +1352.0% 836.62 ± 0% sched_debug.cpu.cpu_load[4].avg
64.62 ± 2% +1281.2% 892.62 ± 3% sched_debug.cpu.cpu_load[4].max
51.56 ± 1% +1437.8% 792.94 ± 1% sched_debug.cpu.cpu_load[4].min
3.96 ± 11% +696.6% 31.52 ± 30% sched_debug.cpu.cpu_load[4].stddev
7178 ± 18% -91.6% 599.44 ± 0% sched_debug.cpu.curr->pid.min
62.48 ± 6% -84.6% 9.62 ±143% sched_debug.cpu.load.avg
14.50 ± 27% -89.7% 1.50 ± 0% sched_debug.cpu.load.min
1280 ± 20% +35.3% 1733 ± 5% sched_debug.cpu.nr_load_updates.stddev
424311 ± 0% -30.2% 296261 ± 0% sched_debug.cpu.nr_switches.avg
442171 ± 0% -29.6% 311189 ± 1% sched_debug.cpu.nr_switches.max
409575 ± 0% -31.9% 278989 ± 1% sched_debug.cpu.nr_switches.min
65.50 ± 13% +167.3% 175.06 ± 21% sched_debug.cpu.nr_uninterruptible.max
-68.31 ±-28% +323.2% -289.12 ±-39% sched_debug.cpu.nr_uninterruptible.min
43.89 ± 17% +210.5% 136.27 ± 29% sched_debug.cpu.nr_uninterruptible.stddev
420593 ± 0% -30.5% 292299 ± 0% sched_debug.cpu.sched_count.avg
435963 ± 0% -30.0% 305050 ± 0% sched_debug.cpu.sched_count.max
406393 ± 0% -32.0% 276478 ± 1% sched_debug.cpu.sched_count.min
20000 ± 2% -90.7% 1865 ± 8% sched_debug.cpu.sched_goidle.avg
21505 ± 2% -78.9% 4535 ± 22% sched_debug.cpu.sched_goidle.max
18583 ± 3% -94.4% 1049 ± 8% sched_debug.cpu.sched_goidle.min
209474 ± 0% -22.9% 161408 ± 0% sched_debug.cpu.ttwu_count.avg
222079 ± 0% -22.9% 171331 ± 1% sched_debug.cpu.ttwu_count.max
198750 ± 0% -24.5% 149958 ± 1% sched_debug.cpu.ttwu_count.min
157706 ± 0% -20.1% 125986 ± 0% sched_debug.cpu.ttwu_local.avg
163360 ± 0% -20.5% 129947 ± 0% sched_debug.cpu.ttwu_local.max
152244 ± 0% -19.5% 122494 ± 0% sched_debug.cpu.ttwu_local.min
3666 ± 6% -35.3% 2373 ± 17% sched_debug.cpu.ttwu_local.stddev
nhm-white2: Nehalem
Memory: 4G
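As a quick sanity check, the headline deltas can be recomputed from the raw before/after values in the table above (a back-of-envelope check, not part of the robot's output):
    # Recompute the reported percentage changes from the raw values:
    awk 'BEGIN {
        printf "unixbench.score: %+.1f%%\n", (10673 - 10397) / 10397 * 100
        printf "involuntary context switches: %+.1f%%\n", (774575 - 1839526) / 1839526 * 100
    }'
    # prints +2.7% and -57.9%, matching the report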
[ASCII trend plots omitted for readability. Each plot tracks one metric across runs, comparing bisect-good (*) against bisect-bad (O) samples. Plots were shown for:
unixbench.time.user_time
unixbench.time.percent_of_cpu_this_job_got
unixbench.time.involuntary_context_switches
time.involuntary_context_switches]
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Xiaolong Ye
[lkp] [drm/i915] 1c6825de4f: [drm:intel_dp_start_link_train [i915]] *ERROR* failed to update link training
by kernel test robot
FYI, we noticed the below changes on
git://people.freedesktop.org/~mlankhorst/linux rework-page-flip
commit 1c6825de4fe3e2fe18f47d336b548b4e61a80768 ("drm/i915: Full async modeset.")
+--------------------------------------------------------------------------------------+------------+------------+
| | 4a0ff1c4b3 | 1c6825de4f |
+--------------------------------------------------------------------------------------+------------+------------+
| boot_successes | 30 | 9 |
| boot_failures | 2 | 25 |
| WARNING:at_drivers/gpu/drm/drm_irq.c:#drm_vblank_put[drm] | 2 | |
| backtrace:vfs_write | 2 | |
| backtrace:SyS_write | 2 | |
| WARNING:at_drivers/gpu/drm/i915/intel_psr.c:#intel_psr_enable[i915] | 0 | 7 |
| backtrace:intel_mmio_flip_work_func | 0 | 6 |
| drm:intel_dp_start_link_train[i915]] | 0 | 18 |
| WARNING:at_drivers/gpu/drm/i915/intel_display.c:#intel_get_pipe_from_connector[i915] | 0 | 1 |
+--------------------------------------------------------------------------------------+------------+------------+
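Read as a rate, the table says the failure is highly reproducible (a back-of-envelope reading, not a number from the robot):
    # Boot failure rate before and after commit 1c6825de4f:
    awk 'BEGIN {
        printf "before: %.0f%%\n", 2 / (30 + 2) * 100    # ~6%
        printf "after:  %.0f%%\n", 25 / (9 + 25) * 100   # ~74%
    }'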
As shown below, the error "[drm:intel_dp_start_link_train [i915]] *ERROR* failed to update link training" appeared with your commit.
[ 14.690352] [drm] Num modeset disables: 1, async 0
[ 15.720659] [drm:intel_dp_start_link_train [i915]] *ERROR* failed to update link training
[ 20.367161] systemd-journald[133]: /run/log/journal/9ff6d4db7adf133653e0c19b5107c592/system.journal: Journal header limits reached or header out-of-date, rotating.
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Xiaolong Ye
[lkp] [Add sancov plugin] 26627f05b7: BUG: unable to handle kernel NULL pointer dereference at (null)
by kernel test robot
FYI, we noticed the below changes on
https://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git kspp/gcc-plugins
commit 26627f05b7df43dfd0586ddb5fe568463658c5da ("Add sancov plugin")
[ 0.000000] DMA zone: 3998 pages, LIFO batch:0
[ 0.000000] DMA32 zone: 3528 pages used for memmap
[ 0.000000] DMA32 zone: 258016 pages, LIFO batch:31
[ 0.000000] BUG: unable to handle kernel NULL pointer dereference at (null)
[ 0.000000] IP: [<ffffffffa4896452>] native_set_pgd+0x22/0x30
[ 0.000000] PGD 0
[ 0.000000] Oops: 0002 [#1] PREEMPT SMP KASAN
[ 0.000000] Modules linked in:
[ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 4.6.0-rc3-00023-g26627f05 #1
[ 0.000000] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 0.000000] task: ffffffffa6217e80 ti: ffffffffa6200000 task.ti: ffffffffa6200000
[ 0.000000] RIP: 0010:[<ffffffffa4896452>] [<ffffffffa4896452>] native_set_pgd+0x22/0x30
[ 0.000000] RSP: 0000:ffffffffa6207d30 EFLAGS: 00010097
[ 0.000000] RAX: ffffffffa6217e80 RBX: 0000000000000000 RCX: ffffffffa4896452
[ 0.000000] RDX: 0000000000000000 RSI: 0000000000000008 RDI: 0000000000000000
[ 0.000000] RBP: ffffffffa6207d40 R08: 0000000000000000 R09: 0000000000000000
[ 0.000000] R10: 0000000000000000 R11: 000000002620b067 R12: 0000000000000000
[ 0.000000] R13: fffffc0000000000 R14: 0000008000000000 R15: ffffffffa715a660
[ 0.000000] FS: 0000000000000000(0000) GS:ffffffffa6fef000(0000) knlGS:0000000000000000
[ 0.000000] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 0.000000] CR2: 0000000000000000 CR3: 0000000027093000 CR4: 00000000000406b0
[ 0.000000] Stack:
[ 0.000000] ffffec0000000000 0000000000000000 ffffffffa6207d78 ffffffffa703c97f
[ 0.000000] ffffffffa6207ec8 ffffffffa6207e08 000000003ff7ec00 ffffffffa6207dc8
[ 0.000000] ffffffffa715a660 ffffffffa6207ef0 ffffffffa701b3c0 0000000000000006
[ 0.000000] Call Trace:
[ 0.000000] [<ffffffffa703c97f>] kasan_init+0xf1/0x3b0
[ 0.000000] [<ffffffffa701b3c0>] setup_arch+0x1491/0x168b
[ 0.000000] [<ffffffffa7019f2f>] ? reserve_standard_io_resources+0x50/0x50
[ 0.000000] [<ffffffffa4974b30>] ? vprintk_default+0x30/0x40
[ 0.000000] [<ffffffffa4a42081>] ? printk+0xb6/0xdd
[ 0.000000] [<ffffffffa4a41fcb>] ? kzalloc+0x2b/0x2b
[ 0.000000] [<ffffffffa4add31e>] ? __asan_store8+0x4e/0x140
[ 0.000000] [<ffffffffa4addb8b>] ? __asan_load1+0x4b/0xe0
[ 0.000000] [<ffffffffa4fc1033>] ? check_preemption_disabled+0x43/0x1d0
[ 0.000000] [<ffffffffa700e413>] start_kernel+0xd3/0x7e0
[ 0.000000] [<ffffffffa700e340>] ? thread_info_cache_init+0x12/0x12
[ 0.000000] [<ffffffffa57d5851>] ? memblock_reserve+0x7c/0x8f
[ 0.000000] [<ffffffffa700d120>] ? early_idt_handler_array+0x120/0x120
[ 0.000000] [<ffffffffa700d3e8>] x86_64_start_reservations+0x59/0x63
[ 0.000000] [<ffffffffa700d561>] x86_64_start_kernel+0x16f/0x183
[ 0.000000] Code: 5c 5d c3 66 0f 1f 44 00 00 55 48 89 e5 41 54 53 48 89 fb 49 89 f4 e8 ce 69 16 00 48 89 df 48 83 05 73 39 92 02 01 e8 7e 6e 24 00 <4c> 89 23 5b 41 5c 5d c3 66 0f 1f 44 00 00 55 48 89 e5 41 54 53
[ 0.000000] RIP [<ffffffffa4896452>] native_set_pgd+0x22/0x30
[ 0.000000] RSP <ffffffffa6207d30>
[ 0.000000] CR2: 0000000000000000
[ 0.000000] ---[ end trace f1542b5b1e40bd14 ]---
[ 0.000000] Kernel panic - not syncing: Fatal exception
FYI, raw QEMU command line is:
qemu-system-x86_64 -enable-kvm -cpu Haswell,+smep,+smap -kernel /pkg/linux/x86_64-randconfig-s4-04130305/gcc-5/26627f05b7df43dfd0586ddb5fe568463658c5da/vmlinuz-4.6.0-rc3-00023-g26627f05 -append 'root=/dev/ram0 user=lkp job=/lkp/scheduled/vm-kbuild-1G-3/bisect_boot-1-debian-x86_64-2015-02-07.cgz-x86_64-randconfig-s4-04130305-26627f05b7df43dfd0586ddb5fe568463658c5da-20160413-41841-dknpvw-0.yaml ARCH=x86_64 kconfig=x86_64-randconfig-s4-04130305 branch=linux-devel/devel-catchup-201604130329 commit=26627f05b7df43dfd0586ddb5fe568463658c5da BOOT_IMAGE=/pkg/linux/x86_64-randconfig-s4-04130305/gcc-5/26627f05b7df43dfd0586ddb5fe568463658c5da/vmlinuz-4.6.0-rc3-00023-g26627f05 max_uptime=600 RESULT_ROOT=/result/boot/1/vm-kbuild-1G/debian-x86_64-2015-02-07.cgz/x86_64-randconfig-s4-04130305/gcc-5/26627f05b7df43dfd0586ddb5fe568463658c5da/0 LKP_SERVER=inn earlyprintk=ttyS0,115200 systemd.log_level=err debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal rw ip=::::vm-kbuild-1G-3::dhcp' -initrd /fs/sdd1/initrd-vm-kbuild-1G-3 -m 1024 -smp 2 -device e1000,netdev=net0 -netdev user,id=net0,hostfwd=tcp::23002-:22 -boot order=nc -no-reboot -watchdog i6300esb -rtc base=localtime -device virtio-scsi-pci,id=scsi0 -drive file=/fs/sdd1/disk0-vm-kbuild-1G-3,if=none,id=hd0,media=disk,aio=native,cache=none -device scsi-hd,bus=scsi0.0,drive=hd0,scsi-id=1,lun=0 -drive file=/fs/sdd1/disk1-vm-kbuild-1G-3,if=none,id=hd1,media=disk,aio=native,cache=none -device scsi-hd,bus=scsi0.0,drive=hd1,scsi-id=1,lun=1 -drive file=/fs/sdd1/disk2-vm-kbuild-1G-3,if=none,id=hd2,media=disk,aio=native,cache=none -device scsi-hd,bus=scsi0.0,drive=hd2,scsi-id=1,lun=2 -drive file=/fs/sdd1/disk3-vm-kbuild-1G-3,if=none,id=hd3,media=disk,aio=native,cache=none -device scsi-hd,bus=scsi0.0,drive=hd3,scsi-id=1,lun=3 -drive file=/fs/sdd1/disk4-vm-kbuild-1G-3,if=none,id=hd4,media=disk,aio=native,cache=none -device scsi-hd,bus=scsi0.0,drive=hd4,scsi-id=1,lun=4 -pidfile /dev/shm/kboot/pid-vm-kbuild-1G-3 -serial file:/dev/shm/kboot/serial-vm-kbuild-1G-3 -daemonize -display none -monitor null
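One way to turn the faulting "native_set_pgd+0x22/0x30" into a source line is gdb against a vmlinux built with debug info from the same commit and config (a sketch under those assumptions; the vmlinux path is hypothetical):
    # List the source around the faulting instruction reported in the RIP line:
    gdb -batch -ex 'list *(native_set_pgd+0x22)' vmlinux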
Thanks,
Kernel Test Robot
[lkp] [x86, ACPI, cpu] f962c29c2f: BUG: unable to handle kernel paging request at 0000000000001b00
by kernel test robot
FYI, we noticed the below changes on
https://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git linux-next
commit f962c29c2f5d80be28bd76e2c3fadf1ce97ccd76 ("x86, ACPI, cpu-hotplug: Set persistent cpuid <-> nodeid mapping when booting")
+------------------------------------------+------------+------------+
| | 40d4d2e6c5 | f962c29c2f |
+------------------------------------------+------------+------------+
| boot_successes | 12 | 4 |
| boot_failures | 0 | 8 |
| BUG:unable_to_handle_kernel | 0 | 8 |
| Oops | 0 | 8 |
| RIP:__alloc_pages_nodemask | 0 | 8 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 8 |
| backtrace:pcpu_balance_workfn | 0 | 8 |
+------------------------------------------+------------+------------+
[ 10.413471] RAPL PMU: hw unit of domain pp0-core 2^-16 Joules
[ 10.419886] RAPL PMU: hw unit of domain package 2^-16 Joules
[ 10.426206] RAPL PMU: hw unit of domain dram 2^-16 Joules
[ 10.433905] BUG: unable to handle kernel paging request at 0000000000001b00
[ 10.441699] IP: [<ffffffff8117e350>] __alloc_pages_nodemask+0x210/0xc10
[ 10.449097] PGD 0
[ 10.451350] Oops: 0000 [#1] SMP
[ 10.454972] Modules linked in:
[ 10.458390] CPU: 12 PID: 407 Comm: kworker/12:1 Not tainted 4.6.0-rc1-00007-gf962c29 #1
[ 10.467326] Hardware name: Intel Corporation S2600WP/S2600WP, BIOS SE5C600.86B.02.02.0002.122320131210 12/23/2013
[ 10.478776] Workqueue: events pcpu_balance_workfn
[ 10.484035] task: ffff88100d858000 ti: ffff88100d860000 task.ti: ffff88100d860000
[ 10.492381] RIP: 0010:[<ffffffff8117e350>] [<ffffffff8117e350>] __alloc_pages_nodemask+0x210/0xc10
[ 10.502491] RSP: 0000:ffff88100d863c28 EFLAGS: 00010286
[ 10.508419] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000004
[ 10.516383] RDX: 0000000080000000 RSI: 0000000000000d06 RDI: ffffffff81c9d430
[ 10.524348] RBP: ffff88100d863d40 R08: 0000000000000000 R09: 0000000000000000
[ 10.532312] R10: ffffffff81ca1e87 R11: ffffc90000076000 R12: 0000000000000003
[ 10.540278] R13: 0000000000001b00 R14: 0000000000000000 R15: ffff88100d43e480
[ 10.548243] FS: 0000000000000000(0000) GS:ffff881013400000(0000) knlGS:0000000000000000
[ 10.557275] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 10.563687] CR2: 0000000000001b00 CR3: 000000103ee06000 CR4: 00000000001406e0
[ 10.571652] Stack:
[ 10.573894] ffffc90000090fff ffffc90000091000 ffff881013002008 0000000000000001
[ 10.582196] 8000000000000163 00000000ffffffff 00000000024082c2 0000000000400000
[ 10.590492] 000000000000001e ffff880ffe57c0c0 ffff88100d863c88 ffffffff811b7766
[ 10.598794] Call Trace:
[ 10.601527] [<ffffffff811b7766>] ? map_vm_area+0x36/0x50
[ 10.607553] [<ffffffff81198ade>] pcpu_populate_chunk+0xae/0x340
[ 10.614257] [<ffffffff8119a118>] pcpu_balance_workfn+0x578/0x5b0
[ 10.621060] [<ffffffff81094be5>] process_one_work+0x155/0x440
[ 10.627562] [<ffffffff8109582e>] worker_thread+0x4e/0x4c0
[ 10.633687] [<ffffffff818f1b9b>] ? __schedule+0x34b/0x8b0
[ 10.639809] [<ffffffff810957e0>] ? rescuer_thread+0x350/0x350
[ 10.646319] [<ffffffff810957e0>] ? rescuer_thread+0x350/0x350
[ 10.652831] [<ffffffff8109ade4>] kthread+0xd4/0xf0
[ 10.658276] [<ffffffff818f6742>] ret_from_fork+0x22/0x40
[ 10.664302] [<ffffffff8109ad10>] ? kthread_park+0x60/0x60
[ 10.670423] Code: d0 49 8b 04 24 48 85 c0 75 e0 65 ff 0d 22 f0 e8 7e eb 8b 31 d2 be 06 0d 00 00 48 c7 c7 30 d4 c9 81 e8 35 2c f2 ff e8 50 40 77 00 <49> 83 7d 00 00 0f 85 7c fe ff ff 31 c0 e9 6d ff ff ff 0f 1f 44
[ 10.692137] RIP [<ffffffff8117e350>] __alloc_pages_nodemask+0x210/0xc10
[ 10.699627] RSP <ffff88100d863c28>
[ 10.703518] CR2: 0000000000001b00
[ 10.707220] ---[ end trace bc83ef9ded88f808 ]---
[ 10.712373] Kernel panic - not syncing: Fatal exception
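The "Code:" bytes in the oops can likewise be disassembled with the kernel tree's own helper script (a sketch; oops.txt is a hypothetical file holding the dmesg excerpt above):
    # scripts/decodecode reads an oops from stdin and disassembles the bytes
    # around the faulting instruction marked in the Code: line.
    scripts/decodecode < oops.txt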
Thanks,
Kernel Test Robot
[lkp] [drm/i915] f58f668ad6: WARNING: CPU: 3 PID: 77 at drivers/gpu/drm/i915/intel_display.c:1328 assert_plane+0x8d/0x90 [i915]()
by kernel test robot
FYI, we noticed the below changes on
https://github.com/0day-ci/linux Lionel-Landwerlin/drm-i915-add-missing-condition-for-committing-planes-on-crtc/20160409-003403
commit f58f668ad664a521b0321c34c0fcb54dafb04337 ("drm/i915: add missing condition for committing planes on crtc")
As shown below, the warning "WARNING: CPU: 3 PID: 77 at drivers/gpu/drm/i915/intel_display.c:1328 assert_plane+0x8d/0x90 [i915]()" appeared with your commit.
[ 16.495153] ata2.00: configured for UDMA/133
[ 16.495435] scsi 1:0:0:0: Direct-Access ATA INTEL SSDSC2CW12 400i PQ: 0 ANSI: 5
[ 17.711554] ------------[ cut here ]------------
[ 17.711617] WARNING: CPU: 3 PID: 77 at drivers/gpu/drm/i915/intel_display.c:1328 assert_plane+0x8d/0x90 [i915]()
[ 17.711618] plane A assertion failure (expected on, current off)
[ 17.711635] Modules linked in: x86_pkg_temp_thermal coretemp kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel lrw gf128mul glue_helper ablk_helper ppdev serio_raw pcspkr snd_hda_intel i915(+) cryptd snd_hda_codec snd_hda_core ahci libahci drm_kms_helper syscopyarea sysfillrect sysimgblt snd_hwdep fb_sys_fops snd_pcm libata snd_timer snd soundcore shpchp drm i2c_hid winbond_cir rc_core dw_dmac sdhci_acpi dw_dmac_core video parport_pc parport sdhci mmc_core i2c_designware_platform i2c_designware_core spi_pxa2xx_platform acpi_pad
[ 17.711637] CPU: 3 PID: 77 Comm: kworker/3:1 Not tainted 4.5.0-rc7-01683-gf58f668 #1
[ 17.711637] Hardware name: Intel Corporation Broadwell Client platform/WhiteTip Mountain 1, BIOS BDW-E1R1.86C.0120.R00.1504020241 04/02/2015
[ 17.711643] Workqueue: events output_poll_execute [drm_kms_helper]
[ 17.711645] 0000000000000000 ffff880083547b30 ffffffff814261da ffff880083547b78
[ 17.711647] ffffffffa0384cb0 ffff880083547b68 ffffffff81079fe6 0000000000000000
[ 17.711648] ffff88007ae02000 0000000000000000 ffff88007ae02000 ffff88007a94bc00
[ 17.711648] Call Trace:
[ 17.711652] [<ffffffff814261da>] dump_stack+0x63/0x89
[ 17.711655] [<ffffffff81079fe6>] warn_slowpath_common+0x86/0xc0
[ 17.711657] [<ffffffff8107a06c>] warn_slowpath_fmt+0x4c/0x50
[ 17.711686] [<ffffffffa030a09d>] assert_plane+0x8d/0x90 [i915]
[ 17.711713] [<ffffffffa03149e7>] hsw_enable_ips+0x37/0x170 [i915]
[ 17.711738] [<ffffffffa0315b1a>] intel_atomic_commit+0x7aa/0xd10 [i915]
[ 17.711740] [<ffffffff810bd7f0>] ? wait_woken+0xa0/0xa0
[ 17.711755] [<ffffffffa0178227>] drm_atomic_commit+0x37/0x60 [drm]
[ 17.711761] [<ffffffffa0233467>] restore_fbdev_mode+0x237/0x260 [drm_kms_helper]
[ 17.711773] [<ffffffffa01770ba>] ? drm_modeset_lock_all_ctx+0x9a/0xb0 [drm]
[ 17.711778] [<ffffffffa0235623>] drm_fb_helper_restore_fbdev_mode_unlocked+0x33/0x80 [drm_kms_helper]
[ 17.711782] [<ffffffffa023569d>] drm_fb_helper_set_par+0x2d/0x50 [drm_kms_helper]
[ 17.711786] [<ffffffffa02355a2>] drm_fb_helper_hotplug_event+0xa2/0xf0 [drm_kms_helper]
[ 17.711816] [<ffffffffa032f6ae>] intel_fbdev_output_poll_changed+0x1e/0x30 [i915]
[ 17.711820] [<ffffffffa0228707>] drm_kms_helper_hotplug_event+0x27/0x30 [drm_kms_helper]
[ 17.711824] [<ffffffffa0228909>] output_poll_execute+0x199/0x1e0 [drm_kms_helper]
[ 17.711826] [<ffffffff81092365>] process_one_work+0x155/0x440
[ 17.711828] [<ffffffff81092fae>] worker_thread+0x4e/0x4c0
[ 17.711830] [<ffffffff818e112c>] ? __schedule+0x35c/0x8e0
[ 17.711832] [<ffffffff81092f60>] ? rescuer_thread+0x350/0x350
[ 17.711834] [<ffffffff81092f60>] ? rescuer_thread+0x350/0x350
[ 17.711836] [<ffffffff81098554>] kthread+0xd4/0xf0
[ 17.711837] [<ffffffff81098480>] ? kthread_park+0x60/0x60
[ 17.711839] [<ffffffff818e5eff>] ret_from_fork+0x3f/0x70
[ 17.711841] [<ffffffff81098480>] ? kthread_park+0x60/0x60
[ 17.711842] ---[ end trace cc49f1984c8fa0ba ]---
[ 17.890226] snd_hda_intel 0000:00:03.0: bound 0000:00:02.0 (ops i915_audio_component_bind_ops [i915])
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Xiaolong Ye
[lkp] [Add sancov plugin] 47faf3078f: BUG: kernel boot crashed
by kernel test robot
FYI, we noticed the below changes on
https://github.com/0day-ci/linux Emese-Revfy/Introduce-GCC-plugin-infrastructure/20160408-052328
commit 47faf3078f741dd7d854131a547615fa8e447dd5 ("Add sancov plugin")
[ 0.000000] percpu: Embedded 31 pages/cpu @ffff880013e00000 s88640 r8192 d30144 u1048576
[ 0.000000] pcpu-alloc: s88640 r8192 d30144 u1048576 alloc=1*2097152
[ 0.000000] pcpu-alloc: [0] 0 1
Elapsed time: 10
BUG: kernel boot crashed
FYI, raw QEMU command line is:
qemu-system-x86_64 -enable-kvm -kernel /pkg/linux/x86_64-randconfig-n0-04081244/gcc-5/47faf3078f741dd7d854131a547615fa8e447dd5/vmlinuz-4.6.0-rc2-00006-g47faf30 -append 'root=/dev/ram0 user=lkp job=/lkp/scheduled/vm-vp-quantal-x86_64-62/bisect_boot-1-quantal-core-x86_64.cgz-x86_64-randconfig-n0-04081244-47faf3078f741dd7d854131a547615fa8e447dd5-20160410-42948-1n1aons-0.yaml ARCH=x86_64 kconfig=x86_64-randconfig-n0-04081244 branch=linux-devel/devel-hourly-2016040809 commit=47faf3078f741dd7d854131a547615fa8e447dd5 BOOT_IMAGE=/pkg/linux/x86_64-randconfig-n0-04081244/gcc-5/47faf3078f741dd7d854131a547615fa8e447dd5/vmlinuz-4.6.0-rc2-00006-g47faf30 max_uptime=600 RESULT_ROOT=/result/boot/1/vm-vp-quantal-x86_64/quantal-core-x86_64.cgz/x86_64-randconfig-n0-04081244/gcc-5/47faf3078f741dd7d854131a547615fa8e447dd5/0 LKP_SERVER=inn earlyprintk=ttyS0,115200 systemd.log_level=err debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal rw ip=::::vm-vp-quantal-x86_64-62::dhcp drbd.minor_count=8' -initrd /fs/sdd1/initrd-vm-vp-quantal-x86_64-62 -m 360 -smp 2 -device e1000,netdev=net0 -netdev user,id=net0 -boot order=nc -no-reboot -watchdog i6300esb -rtc base=localtime -pidfile /dev/shm/kboot/pid-vm-vp-quantal-x86_64-62 -serial file:/dev/shm/kboot/serial-vm-vp-quantal-x86_64-62 -daemonize -display none -monitor null
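For readability, a trimmed sketch of that invocation keeping only the parts needed to reproduce the boot (local paths are placeholders; the full command above is authoritative):
    # Minimal re-creation of the failing boot; drop -enable-kvm on hosts without KVM.
    qemu-system-x86_64 -enable-kvm -m 360 -smp 2 \
        -kernel vmlinuz-4.6.0-rc2-00006-g47faf30 \
        -initrd initrd-vm-vp-quantal-x86_64-62 \
        -append 'root=/dev/ram0 console=ttyS0,115200 earlyprintk=ttyS0,115200 panic=-1' \
        -display none -no-reboot -serial stdio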
Thanks,
Kernel Test Robot