[lkp-robot] [net] addf9b90de: WARNING:at_lib/kobject.c:#kobject_get
by kernel test robot
FYI, we noticed the following commit (built with gcc-6):
commit: addf9b90de22f7aaad0db39bccb5d51ac47dd4e1 ("net: rtnetlink: use rcu to free rtnl message handlers")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
in testcase: boot
on test machine: qemu-system-i386 -enable-kvm -smp 2 -m 320M
caused the changes below (see the attached dmesg/kmsg for the full log/backtrace):
+------------------------------------------+------------+------------+
| | 9753c21f55 | addf9b90de |
+------------------------------------------+------------+------------+
| boot_successes | 0 | 0 |
| boot_failures | 40 | 34 |
| BUG:kernel_hang_in_boot_stage | 40 | |
| WARNING:at_lib/kobject.c:#kobject_get | 0 | 34 |
| EIP:kobject_get | 0 | 34 |
| BUG:unable_to_handle_kernel | 0 | 34 |
| Oops:#[##] | 0 | 34 |
| EIP:strcmp | 0 | 34 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 34 |
+------------------------------------------+------------+------------+
[ 12.001124] WARNING: CPU: 0 PID: 18 at lib/kobject.c:597 kobject_get+0x1d/0x31
[ 12.002012] CPU: 0 PID: 18 Comm: kworker/0:1 Tainted: G S 4.15.0-rc1-00242-gaddf9b9 #1
[ 12.002012] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 12.002012] Workqueue: events_long serio_handle_event
[ 12.002012] task: a939e183 task.stack: 55c861e7
[ 12.002012] EIP: kobject_get+0x1d/0x31
[ 12.002012] EFLAGS: 00210296 CPU: 0
[ 12.002012] EAX: 0000005a EBX: d2b8e014 ECX: c2c8daa8 EDX: c2c8daac
[ 12.002012] ESI: ffffffea EDI: d2a66108 EBP: d352defc ESP: d352deec
[ 12.002012] DS: 007b ES: 007b FS: 0000 GS: 00e0 SS: 0068
[ 12.002012] CR0: 80050033 CR2: 00000000 CR3: 02f0c000 CR4: 00000690
[ 12.002012] Call Trace:
[ 12.002012] get_device+0xf/0x14
[ 12.002012] device_add+0xa9/0x4b0
[ 12.002012] ? __might_sleep+0x64/0x6b
[ 12.002012] ? __switch_to+0xd0/0x215
[ 12.002012] serio_handle_event+0xf8/0x18a
[ 12.002012] process_one_work+0x101/0x1e5
[ 12.002012] worker_thread+0x1b6/0x275
[ 12.002012] kthread+0xee/0xf3
[ 12.002012] ? process_scheduled_works+0x24/0x24
[ 12.002012] ? __kthread_create_on_node+0x11b/0x11b
[ 12.002012] ret_from_fork+0x19/0x30
[ 12.002012] Code: 0c ec ff ff 83 c4 0c 8d 65 f8 5b 5e 5d c3 55 85 c0 89 e5 53 89 c3 74 20 f6 40 20 01 75 12 50 ff 30 68 5c ad c5 c2 e8 a8 21 c7 fe <0f> ff 83 c4 0c 8d 43 1c e8 e3 11 03 ff 89 d8 8b 5d fc c9 c3 55
[ 12.002012] ---[ end trace 0089d2c08c89c537 ]---
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Xiaolong
[lkp-robot] [netfilter] f4ae02e01d: stress-ng.clone.ops 176.5% improvement
by kernel test robot
Greetings,
FYI, we noticed a 176.5% improvement in stress-ng.clone.ops due to commit:
commit: f4ae02e01ddec20c34c870906ef43088c673fb0b ("netfilter: core: free hooks with call_rcu")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
in testcase: stress-ng
on test machine: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
with the following parameters:
testtime: 1s
class: scheduler
cpufreq_governor: performance
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
class/compiler/cpufreq_governor/kconfig/rootfs/tbox_group/testcase/testtime:
scheduler/gcc-7/performance/x86_64-rhel-7.2/debian-x86_64-2016-08-31.cgz/lkp-bdw-ep6/stress-ng/1s
commit:
a57de64fc8 ("netfilter: core: remove synchronize_net call if nfqueue is used")
f4ae02e01d ("netfilter: core: free hooks with call_rcu")
a57de64fc8dff797 f4ae02e01ddec20c34c870906e
---------------- --------------------------
%stddev %change %stddev
\ | \
561.25 ± 6% +176.5% 1552 ± 5% stress-ng.clone.ops
10.78 +5486.1% 602.04 ± 9% stress-ng.clone.ops_per_sec
57828069 ± 2% +3.9% 60111279 stress-ng.mq.ops
57745148 ± 2% +3.9% 60010983 stress-ng.mq.ops_per_sec
106.83 ± 3% -68.4% 33.76 stress-ng.time.elapsed_time
106.83 ± 3% -68.4% 33.76 stress-ng.time.elapsed_time.max
1622 ± 4% +223.2% 5244 stress-ng.time.percent_of_cpu_this_job_got
3.416e+08 ± 17% +19.3% 4.074e+08 ± 4% stress-ng.yield.ops
3.399e+08 ± 17% +19.2% 4.051e+08 ± 3% stress-ng.yield.ops_per_sec
3332 ± 5% -6.2% 3127 ± 2% boot-time.idle
365351 ± 2% -11.0% 324982 ± 4% softirqs.RCU
7.066e+09 ± 4% -74.0% 1.834e+09 ± 53% cpuidle.C6.time
7338620 ± 4% -73.5% 1948124 ± 52% cpuidle.C6.usage
1.542e+08 ± 7% -78.8% 32708972 ± 80% cpuidle.POLL.time
79.59 -29.9 49.68 ± 24% mpstat.cpu.idle%
18.74 ± 4% +27.5 46.26 ± 24% mpstat.cpu.sys%
1.57 ± 2% +2.3 3.84 ± 23% mpstat.cpu.usr%
2077901 ± 4% +51.6% 3150137 ± 13% vmstat.memory.cache
208.50 ± 29% +98.6% 414.00 ± 23% vmstat.procs.r
748042 ± 20% +132.6% 1740231 ± 29% vmstat.system.cs
127043 ± 3% +35.3% 171843 ± 11% vmstat.system.in
0.18 ± 7% -23.0% 0.14 ± 10% sched_debug.cfs_rq:/.nr_running.avg
7358 ± 78% -82.6% 1281 ±139% sched_debug.cfs_rq:/.spread0.max
277.32 ± 9% -16.7% 230.90 ± 6% sched_debug.cpu.curr->pid.avg
14480 ± 54% -64.6% 5126 ±103% sched_debug.cpu.load.avg
0.19 ± 6% -24.2% 0.14 ± 11% sched_debug.cpu.nr_running.avg
484265 ± 10% +90.6% 922939 ± 21% meminfo.Active
442448 ± 11% +99.2% 881214 ± 22% meminfo.Active(anon)
1823273 ± 3% +56.5% 2854129 ± 9% meminfo.Cached
1543190 +41.8% 2188588 ± 8% meminfo.Inactive
442716 ± 6% +145.7% 1087777 ± 17% meminfo.Inactive(anon)
457477 ± 6% +140.9% 1101943 ± 17% meminfo.Mapped
680743 ± 9% +151.4% 1711577 ± 15% meminfo.Shmem
114524 ± 7% +92.0% 219930 ± 20% proc-vmstat.nr_active_anon
463278 ± 5% +53.8% 712508 ± 6% proc-vmstat.nr_file_pages
114015 ± 10% +136.6% 269807 ± 11% proc-vmstat.nr_inactive_anon
26369 ± 6% +47.7% 38957 ± 20% proc-vmstat.nr_kernel_stack
117871 ± 10% +132.0% 273476 ± 11% proc-vmstat.nr_mapped
177645 ± 13% +140.3% 426871 ± 11% proc-vmstat.nr_shmem
114524 ± 7% +92.0% 219931 ± 20% proc-vmstat.nr_zone_active_anon
114015 ± 10% +136.6% 269807 ± 11% proc-vmstat.nr_zone_inactive_anon
37335 ± 2% +39.5% 52097 ± 19% proc-vmstat.numa_other
0.41 -0.1 0.32 ± 6% perf-stat.branch-miss-rate%
2.069e+09 ± 2% -24.0% 1.573e+09 ± 8% perf-stat.branch-misses
8.00 ± 5% +0.7 8.70 ± 4% perf-stat.cache-miss-rate%
2.936e+10 ± 5% -6.9% 2.733e+10 ± 4% perf-stat.cache-references
2.69 ± 2% -7.0% 2.50 perf-stat.cpi
6.527e+12 ± 2% -9.9% 5.883e+12 ± 3% perf-stat.cpu-cycles
0.10 ± 6% -0.0 0.08 ± 11% perf-stat.dTLB-load-miss-rate%
6.847e+08 ± 6% -22.7% 5.291e+08 ± 13% perf-stat.dTLB-load-misses
0.06 ± 10% -0.0 0.04 ± 6% perf-stat.dTLB-store-miss-rate%
1.834e+08 ± 8% -34.8% 1.196e+08 ± 7% perf-stat.dTLB-store-misses
2.401e+08 ± 13% -33.1% 1.606e+08 ± 21% perf-stat.iTLB-load-misses
10313 ± 14% +48.6% 15329 ± 20% perf-stat.instructions-per-iTLB-miss
0.37 ± 2% +7.5% 0.40 perf-stat.ipc
671.50 ± 3% +122.4% 1493 ± 22% turbostat.Avg_MHz
24.77 ± 3% +28.8 53.59 ± 22% turbostat.Busy%
0.90 ± 15% +1.5 2.41 ± 25% turbostat.C1%
1.06 ± 10% +1.4 2.43 ± 25% turbostat.C1E%
0.62 +0.7 1.36 ± 18% turbostat.C3%
7415034 ± 4% -73.5% 1961926 ± 52% turbostat.C6
72.77 -32.2 40.55 ± 33% turbostat.C6%
22.48 -18.5% 18.32 ± 10% turbostat.CPU%c1
0.21 ± 5% +141.9% 0.52 ± 22% turbostat.CPU%c3
52.52 -47.5% 27.57 ± 36% turbostat.CPU%c6
14815026 ± 3% -39.2% 9014213 ± 11% turbostat.IRQ
17.17 ± 6% -57.7% 7.27 ± 36% turbostat.Pkg%pc2
0.07 ± 41% +330.8% 0.28 ± 81% turbostat.Pkg%pc3
6.51 ± 15% -63.7% 2.36 ± 67% turbostat.Pkg%pc6
126.42 +43.5% 181.45 ± 11% turbostat.PkgWatt
11.04 ± 3% +38.4% 15.28 ± 11% turbostat.RAMWatt
229842 ± 12% +96.2% 450950 ± 15% numa-meminfo.node0.Active
208989 ± 13% +105.6% 429750 ± 15% numa-meminfo.node0.Active(anon)
902041 ± 7% +56.3% 1409628 ± 6% numa-meminfo.node0.FilePages
761954 ± 4% +41.3% 1076636 ± 4% numa-meminfo.node0.Inactive
220571 ± 14% +142.8% 535489 ± 9% numa-meminfo.node0.Inactive(anon)
13328 ± 9% +47.5% 19653 ± 25% numa-meminfo.node0.KernelStack
228784 ± 14% +137.2% 542667 ± 9% numa-meminfo.node0.Mapped
1480294 ± 5% +36.5% 2020295 ± 5% numa-meminfo.node0.MemUsed
65366 ± 10% +22.8% 80249 ± 9% numa-meminfo.node0.SReclaimable
339691 ± 19% +149.4% 847265 ± 11% numa-meminfo.node0.Shmem
257613 ± 22% +95.3% 503100 ± 29% numa-meminfo.node1.Active
236640 ± 23% +103.9% 482551 ± 31% numa-meminfo.node1.Active(anon)
923661 ± 8% +58.4% 1463261 ± 13% numa-meminfo.node1.FilePages
782761 ± 4% +40.2% 1097302 ± 12% numa-meminfo.node1.Inactive
223676 ± 11% +140.4% 537661 ± 23% numa-meminfo.node1.Inactive(anon)
230365 ± 10% +136.5% 544700 ± 23% numa-meminfo.node1.Mapped
1556182 ± 6% +35.0% 2100580 ± 15% numa-meminfo.node1.MemUsed
343481 ± 18% +157.1% 883055 ± 22% numa-meminfo.node1.Shmem
52870 ± 20% +97.8% 104563 ± 8% numa-vmstat.node0.nr_active_anon
222702 ± 4% +57.3% 350316 ± 8% numa-vmstat.node0.nr_file_pages
52568 ± 11% +153.9% 133452 ± 17% numa-vmstat.node0.nr_inactive_anon
54627 ± 11% +147.8% 135346 ± 17% numa-vmstat.node0.nr_mapped
82114 ± 10% +155.4% 209728 ± 14% numa-vmstat.node0.nr_shmem
16250 ± 12% +23.9% 20135 ± 9% numa-vmstat.node0.nr_slab_reclaimable
52871 ± 20% +97.8% 104564 ± 8% numa-vmstat.node0.nr_zone_active_anon
52564 ± 11% +153.9% 133445 ± 17% numa-vmstat.node0.nr_zone_inactive_anon
3417475 ± 13% +254.7% 12123438 ± 22% numa-vmstat.node0.numa_hit
3401367 ± 13% +255.8% 12101131 ± 22% numa-vmstat.node0.numa_local
225958 ± 7% +55.0% 350178 ± 18% numa-vmstat.node1.nr_file_pages
54867 ± 9% +136.4% 129713 ± 27% numa-vmstat.node1.nr_inactive_anon
12846 ± 12% +64.3% 21102 ± 26% numa-vmstat.node1.nr_kernel_stack
56674 ± 8% +132.1% 131514 ± 27% numa-vmstat.node1.nr_mapped
80913 ± 16% +153.5% 205129 ± 30% numa-vmstat.node1.nr_shmem
54867 ± 9% +136.4% 129717 ± 27% numa-vmstat.node1.nr_zone_inactive_anon
3526202 ± 12% +239.8% 11980772 ± 19% numa-vmstat.node1.numa_hit
3351142 ± 13% +252.0% 11795741 ± 19% numa-vmstat.node1.numa_local
211698 ± 7% +29.6% 274283 ± 11% slabinfo.Acpi-Namespace.active_objs
2238 ± 7% +22.7% 2746 ± 11% slabinfo.Acpi-Namespace.active_slabs
228389 ± 7% +22.7% 280151 ± 11% slabinfo.Acpi-Namespace.num_objs
2238 ± 7% +22.7% 2746 ± 11% slabinfo.Acpi-Namespace.num_slabs
5293 ± 48% +159.2% 13720 ± 26% slabinfo.Acpi-State.active_objs
104.50 ± 47% +159.8% 271.50 ± 26% slabinfo.Acpi-State.active_slabs
5354 ± 47% +159.3% 13884 ± 26% slabinfo.Acpi-State.num_objs
104.50 ± 47% +159.8% 271.50 ± 26% slabinfo.Acpi-State.num_slabs
57565 ± 10% -78.8% 12219 ± 17% slabinfo.RAW.active_objs
1695 ± 10% -78.3% 367.50 ± 16% slabinfo.RAW.active_slabs
57674 ± 10% -78.3% 12514 ± 16% slabinfo.RAW.num_objs
1695 ± 10% -78.3% 367.50 ± 16% slabinfo.RAW.num_slabs
14484 ± 6% -60.6% 5709 ± 11% slabinfo.RAWv6.active_objs
527.00 ± 6% -59.9% 211.50 ± 11% slabinfo.RAWv6.active_slabs
14770 ± 6% -59.8% 5935 ± 11% slabinfo.RAWv6.num_objs
527.00 ± 6% -59.9% 211.50 ± 11% slabinfo.RAWv6.num_slabs
38076 ± 2% +47.2% 56041 ± 4% slabinfo.anon_vma.active_objs
829.00 ± 2% +47.4% 1222 ± 4% slabinfo.anon_vma.active_slabs
38155 ± 2% +47.4% 56230 ± 4% slabinfo.anon_vma.num_objs
829.00 ± 2% +47.4% 1222 ± 4% slabinfo.anon_vma.num_slabs
87058 +11.5% 97103 ± 4% slabinfo.anon_vma_chain.active_objs
1390 ± 2% +13.4% 1576 ± 5% slabinfo.anon_vma_chain.active_slabs
89012 ± 2% +13.4% 100920 ± 5% slabinfo.anon_vma_chain.num_objs
1390 ± 2% +13.4% 1576 ± 5% slabinfo.anon_vma_chain.num_slabs
16688 ± 12% +106.5% 34454 ± 29% slabinfo.cred_jar.active_objs
403.25 ± 11% +109.9% 846.50 ± 27% slabinfo.cred_jar.active_slabs
16959 ± 11% +109.8% 35573 ± 27% slabinfo.cred_jar.num_objs
403.25 ± 11% +109.9% 846.50 ± 27% slabinfo.cred_jar.num_slabs
104937 ± 11% +49.9% 157259 ± 18% slabinfo.dentry.active_objs
2542 ± 11% +53.8% 3908 ± 17% slabinfo.dentry.active_slabs
106777 ± 11% +53.8% 164182 ± 17% slabinfo.dentry.num_objs
2542 ± 11% +53.8% 3908 ± 17% slabinfo.dentry.num_slabs
4661 +25.6% 5857 ± 6% slabinfo.files_cache.active_objs
4661 +25.6% 5857 ± 6% slabinfo.files_cache.num_objs
7587 ± 31% +78.7% 13557 ± 4% slabinfo.fsnotify_mark_connector.active_objs
7587 ± 31% +78.7% 13557 ± 4% slabinfo.fsnotify_mark_connector.num_objs
77663 ± 14% +60.3% 124484 ± 23% slabinfo.inode_cache.active_objs
1480 ± 14% +62.0% 2398 ± 22% slabinfo.inode_cache.active_slabs
78485 ± 14% +62.0% 127146 ± 22% slabinfo.inode_cache.num_objs
1480 ± 14% +62.0% 2398 ± 22% slabinfo.inode_cache.num_slabs
7088 ± 4% +19.1% 8442 ± 5% slabinfo.kmalloc-1024.active_objs
8011 ± 5% +18.2% 9466 ± 3% slabinfo.kmalloc-1024.num_objs
31361 ± 6% -51.6% 15193 ± 7% slabinfo.kmalloc-192.active_objs
913.00 ± 7% -49.4% 462.25 ± 5% slabinfo.kmalloc-192.active_slabs
38367 ± 7% -49.3% 19438 ± 5% slabinfo.kmalloc-192.num_objs
913.00 ± 7% -49.4% 462.25 ± 5% slabinfo.kmalloc-192.num_slabs
10832 ± 4% -17.8% 8899 ± 3% slabinfo.kmalloc-2048.active_objs
765.75 ± 5% -23.0% 589.25 ± 3% slabinfo.kmalloc-2048.active_slabs
12258 ± 5% -23.0% 9435 ± 3% slabinfo.kmalloc-2048.num_objs
765.75 ± 5% -23.0% 589.25 ± 3% slabinfo.kmalloc-2048.num_slabs
127099 ± 4% -17.2% 105269 ± 9% slabinfo.kmalloc-32.active_objs
1124 ± 5% -23.0% 865.25 ± 7% slabinfo.kmalloc-32.active_slabs
143962 ± 5% -23.0% 110843 ± 7% slabinfo.kmalloc-32.num_objs
1124 ± 5% -23.0% 865.25 ± 7% slabinfo.kmalloc-32.num_slabs
3698 ± 6% -50.2% 1842 ± 6% slabinfo.kmalloc-4096.active_objs
611.00 ± 7% -57.2% 261.50 ± 5% slabinfo.kmalloc-4096.active_slabs
4891 ± 7% -57.1% 2096 ± 5% slabinfo.kmalloc-4096.num_objs
611.00 ± 7% -57.2% 261.50 ± 5% slabinfo.kmalloc-4096.num_slabs
216825 ± 7% -23.8% 165170 ± 13% slabinfo.kmalloc-64.active_objs
3552 ± 7% -25.8% 2636 ± 13% slabinfo.kmalloc-64.active_slabs
227382 ± 7% -25.8% 168727 ± 13% slabinfo.kmalloc-64.num_objs
3552 ± 7% -25.8% 2636 ± 13% slabinfo.kmalloc-64.num_slabs
117177 ± 10% +154.8% 298600 ± 20% slabinfo.kmalloc-8.active_objs
229.00 ± 11% +155.6% 585.25 ± 19% slabinfo.kmalloc-8.active_slabs
117661 ± 11% +154.8% 299794 ± 19% slabinfo.kmalloc-8.num_objs
229.00 ± 11% +155.6% 585.25 ± 19% slabinfo.kmalloc-8.num_slabs
961.25 ± 2% -19.7% 771.75 ± 4% slabinfo.kmalloc-8192.active_objs
1121 ± 3% -13.2% 973.25 ± 5% slabinfo.kmalloc-8192.num_objs
340.25 ± 3% +37.7% 468.50 ± 7% slabinfo.kmalloc-96.active_slabs
14315 ± 3% +37.6% 19702 ± 7% slabinfo.kmalloc-96.num_objs
340.25 ± 3% +37.7% 468.50 ± 7% slabinfo.kmalloc-96.num_slabs
2334 ± 3% +61.0% 3758 ± 6% slabinfo.mm_struct.active_objs
2343 ± 3% +62.3% 3802 ± 7% slabinfo.mm_struct.num_objs
5409 ± 5% +191.6% 15774 ± 29% slabinfo.mnt_cache.active_objs
128.50 ± 5% +191.8% 375.00 ± 29% slabinfo.mnt_cache.active_slabs
5409 ± 5% +191.6% 15774 ± 29% slabinfo.mnt_cache.num_objs
128.50 ± 5% +191.8% 375.00 ± 29% slabinfo.mnt_cache.num_slabs
672.25 ± 6% -21.8% 525.50 ± 3% slabinfo.net_namespace.active_objs
687.50 ± 5% -15.2% 583.00 ± 4% slabinfo.net_namespace.num_objs
14052 ± 7% +132.5% 32674 ± 16% slabinfo.pid.active_objs
443.75 ± 7% +134.2% 1039 ± 16% slabinfo.pid.active_slabs
14215 ± 7% +134.0% 33266 ± 16% slabinfo.pid.num_objs
443.75 ± 7% +134.2% 1039 ± 16% slabinfo.pid.num_slabs
23640 ± 3% +88.3% 44510 ± 8% slabinfo.radix_tree_node.active_objs
436.75 ± 3% +91.1% 834.75 ± 8% slabinfo.radix_tree_node.active_slabs
24498 ± 3% +90.9% 46765 ± 7% slabinfo.radix_tree_node.num_objs
436.75 ± 3% +91.1% 834.75 ± 8% slabinfo.radix_tree_node.num_slabs
5003 +13.4% 5675 ± 2% slabinfo.shmem_inode_cache.active_objs
5003 +13.4% 5675 ± 2% slabinfo.shmem_inode_cache.num_objs
4408 ± 5% +78.6% 7872 ± 8% slabinfo.sighand_cache.active_objs
298.00 ± 4% +81.8% 541.75 ± 8% slabinfo.sighand_cache.active_slabs
4480 ± 4% +81.6% 8136 ± 8% slabinfo.sighand_cache.num_objs
298.00 ± 4% +81.8% 541.75 ± 8% slabinfo.sighand_cache.num_slabs
9457 ± 15% +52.8% 14451 ± 10% slabinfo.signal_cache.active_objs
320.25 ± 15% +57.3% 503.75 ± 10% slabinfo.signal_cache.active_slabs
9626 ± 15% +57.2% 15130 ± 10% slabinfo.signal_cache.num_objs
320.25 ± 15% +57.3% 503.75 ± 10% slabinfo.signal_cache.num_slabs
73602 ± 9% -75.9% 17738 ± 15% slabinfo.sock_inode_cache.active_objs
1628 ± 9% -75.1% 404.75 ± 13% slabinfo.sock_inode_cache.active_slabs
74920 ± 9% -75.1% 18654 ± 13% slabinfo.sock_inode_cache.num_objs
1628 ± 9% -75.1% 404.75 ± 13% slabinfo.sock_inode_cache.num_slabs
1695 ± 6% +60.9% 2727 ± 7% slabinfo.task_group.active_objs
1695 ± 6% +60.9% 2727 ± 7% slabinfo.task_group.num_objs
618.50 ± 19% +247.3% 2147 ± 22% slabinfo.taskstats.active_objs
618.50 ± 19% +247.3% 2147 ± 22% slabinfo.taskstats.num_objs
906.00 ± 2% +66.3% 1506 ± 13% slabinfo.user_namespace.active_objs
906.00 ± 2% +66.3% 1506 ± 13% slabinfo.user_namespace.num_objs
60.73 ± 5% -59.3 1.40 ±140% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_fastpath
13.13 ± 28% -13.1 0.00 perf-profile.calltrace.cycles-pp.return_from_SYSCALL_64
13.13 ± 28% -13.1 0.00 perf-profile.calltrace.cycles-pp.do_syscall_64.return_from_SYSCALL_64
11.19 ± 61% -11.2 0.00 perf-profile.calltrace.cycles-pp.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate
11.54 ± 61% -11.1 0.40 ±173% perf-profile.calltrace.cycles-pp.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up
10.09 ± 64% -10.1 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.activate_task
9.87 ± 66% -9.9 0.00 perf-profile.calltrace.cycles-pp.sys_sched_yield.entry_SYSCALL_64_fastpath
9.85 ± 66% -9.8 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__account_scheduler_latency.enqueue_entity.enqueue_task_fair
9.58 ± 39% -9.6 0.00 perf-profile.calltrace.cycles-pp.sys_clone.do_syscall_64.return_from_SYSCALL_64
9.58 ± 39% -9.6 0.00 perf-profile.calltrace.cycles-pp._do_fork.sys_clone.do_syscall_64.return_from_SYSCALL_64
9.26 ± 39% -9.3 0.00 perf-profile.calltrace.cycles-pp.copy_process._do_fork.sys_clone.do_syscall_64.return_from_SYSCALL_64
8.43 ± 67% -8.4 0.00 perf-profile.calltrace.cycles-pp.schedule.sys_sched_yield.entry_SYSCALL_64_fastpath
8.31 ± 38% -8.3 0.00 perf-profile.calltrace.cycles-pp.sys_kill.entry_SYSCALL_64_fastpath
8.29 ± 38% -8.3 0.00 perf-profile.calltrace.cycles-pp.SYSC_kill.sys_kill.entry_SYSCALL_64_fastpath
8.21 ± 38% -8.2 0.00 perf-profile.calltrace.cycles-pp.kill_pid_info.SYSC_kill.sys_kill.entry_SYSCALL_64_fastpath
8.19 ± 38% -8.2 0.00 perf-profile.calltrace.cycles-pp.group_send_sig_info.kill_pid_info.SYSC_kill.sys_kill.entry_SYSCALL_64_fastpath
8.11 ± 67% -8.1 0.00 perf-profile.calltrace.cycles-pp.__schedule.schedule.sys_sched_yield.entry_SYSCALL_64_fastpath
8.06 ± 39% -8.1 0.00 perf-profile.calltrace.cycles-pp.do_send_sig_info.group_send_sig_info.kill_pid_info.SYSC_kill.sys_kill
7.99 ± 58% -8.0 0.00 perf-profile.calltrace.cycles-pp.do_group_exit.sys_exit_group.entry_SYSCALL_64_fastpath
7.99 ± 58% -8.0 0.00 perf-profile.calltrace.cycles-pp.sys_exit_group.entry_SYSCALL_64_fastpath
7.98 ± 58% -8.0 0.00 perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.sys_exit_group.entry_SYSCALL_64_fastpath
7.60 ± 39% -7.6 0.00 perf-profile.calltrace.cycles-pp.send_signal.do_send_sig_info.group_send_sig_info.kill_pid_info.SYSC_kill
7.57 ± 39% -7.6 0.00 perf-profile.calltrace.cycles-pp.__send_signal.send_signal.do_send_sig_info.group_send_sig_info.kill_pid_info
7.37 ± 39% -7.4 0.00 perf-profile.calltrace.cycles-pp.sys_wait4.entry_SYSCALL_64_fastpath
7.37 ± 39% -7.4 0.00 perf-profile.calltrace.cycles-pp.SYSC_wait4.sys_wait4.entry_SYSCALL_64_fastpath
7.37 ± 39% -7.4 0.00 perf-profile.calltrace.cycles-pp.kernel_wait4.SYSC_wait4.sys_wait4.entry_SYSCALL_64_fastpath
7.35 ± 39% -7.3 0.00 perf-profile.calltrace.cycles-pp.do_wait.kernel_wait4.SYSC_wait4.sys_wait4.entry_SYSCALL_64_fastpath
7.22 ± 59% -6.8 0.40 ±173% perf-profile.calltrace.cycles-pp.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up.default_wake_function
6.58 ±100% -6.0 0.61 ±109% perf-profile.calltrace.cycles-pp.try_to_wake_up.default_wake_function.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
5.29 ± 32% -5.3 0.00 perf-profile.calltrace.cycles-pp.syscall_return_slowpath.entry_SYSCALL_64_fastpath
5.29 ± 32% -5.3 0.00 perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.syscall_return_slowpath.entry_SYSCALL_64_fastpath
4.93 ± 32% -4.9 0.00 perf-profile.calltrace.cycles-pp.do_signal.exit_to_usermode_loop.syscall_return_slowpath.entry_SYSCALL_64_fastpath
4.93 ± 32% -4.9 0.00 perf-profile.calltrace.cycles-pp.get_signal.do_signal.exit_to_usermode_loop.syscall_return_slowpath.entry_SYSCALL_64_fastpath
4.56 ± 83% -4.6 0.00 perf-profile.calltrace.cycles-pp.mmput.do_exit.do_group_exit.sys_exit_group.entry_SYSCALL_64_fastpath
4.55 ± 83% -4.6 0.00 perf-profile.calltrace.cycles-pp.exit_mmap.mmput.do_exit.do_group_exit.sys_exit_group
4.88 ±101% -4.5 0.40 ±173% perf-profile.calltrace.cycles-pp.ttwu_do_activate.try_to_wake_up.default_wake_function.autoremove_wake_function.__wake_up_common
4.85 ±101% -4.4 0.40 ±173% perf-profile.calltrace.cycles-pp.activate_task.ttwu_do_activate.try_to_wake_up.default_wake_function.autoremove_wake_function
4.78 ±100% -4.2 0.61 ±109% perf-profile.calltrace.cycles-pp.default_wake_function.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.__wake_up_sync_key
4.10 ± 71% -4.1 0.00 perf-profile.calltrace.cycles-pp.pick_next_task_fair.__schedule.schedule.sys_sched_yield.entry_SYSCALL_64_fastpath
3.90 ±100% -3.9 0.00 perf-profile.calltrace.cycles-pp.sys_write.entry_SYSCALL_64_fastpath
3.88 ±100% -3.9 0.00 perf-profile.calltrace.cycles-pp.vfs_write.sys_write.entry_SYSCALL_64_fastpath
3.79 ±100% -3.8 0.00 perf-profile.calltrace.cycles-pp.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath
3.37 ±100% -3.4 0.00 perf-profile.calltrace.cycles-pp.hrtimer_wakeup.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.__irqentry_text_start
3.37 ±100% -3.4 0.00 perf-profile.calltrace.cycles-pp.wake_up_process.hrtimer_wakeup.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt
3.36 ±100% -3.4 0.00 perf-profile.calltrace.cycles-pp.try_to_wake_up.wake_up_process.hrtimer_wakeup.__hrtimer_run_queues.hrtimer_interrupt
3.10 ±100% -3.1 0.00 perf-profile.calltrace.cycles-pp.ttwu_do_activate.try_to_wake_up.wake_up_process.hrtimer_wakeup.__hrtimer_run_queues
3.10 ±100% -3.1 0.00 perf-profile.calltrace.cycles-pp.activate_task.ttwu_do_activate.try_to_wake_up.wake_up_process.hrtimer_wakeup
3.09 ±100% -3.1 0.00 perf-profile.calltrace.cycles-pp.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up.wake_up_process
3.76 ±100% -2.9 0.82 ±100% perf-profile.calltrace.cycles-pp.pipe_write.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath
2.92 ±100% -2.3 0.61 ±109% perf-profile.calltrace.cycles-pp.pipe_read.__vfs_read.vfs_read.sys_read.entry_SYSCALL_64_fastpath
2.86 ±100% -2.3 0.61 ±109% perf-profile.calltrace.cycles-pp.__wake_up_sync_key.pipe_write.__vfs_write.vfs_write.sys_write
2.86 ±100% -2.3 0.61 ±109% perf-profile.calltrace.cycles-pp.__wake_up_common_lock.__wake_up_sync_key.pipe_write.__vfs_write.vfs_write
2.83 ±100% -2.2 0.61 ±109% perf-profile.calltrace.cycles-pp.__wake_up_common.__wake_up_common_lock.__wake_up_sync_key.pipe_write.__vfs_write
2.81 ±100% -2.2 0.61 ±109% perf-profile.calltrace.cycles-pp.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.__wake_up_sync_key.pipe_write
3.03 ±100% -1.8 1.19 ±173% perf-profile.calltrace.cycles-pp.sys_read.entry_SYSCALL_64_fastpath
3.01 ±100% -1.8 1.19 ±173% perf-profile.calltrace.cycles-pp.vfs_read.sys_read.entry_SYSCALL_64_fastpath
2.94 ±100% -1.8 1.19 ±173% perf-profile.calltrace.cycles-pp.__vfs_read.vfs_read.sys_read.entry_SYSCALL_64_fastpath
0.90 ± 35% +3.2 4.07 ± 73% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.__irqentry_text_start.cpuidle_enter_state.cpuidle_enter.call_cpuidle
0.90 ± 35% +3.2 4.10 ± 74% perf-profile.calltrace.cycles-pp.__irqentry_text_start.cpuidle_enter_state.cpuidle_enter.call_cpuidle.do_idle
0.00 +3.8 3.85 ±105% perf-profile.calltrace.cycles-pp.event_function_call._perf_event_disable.perf_event_for_each_child.perf_ioctl.do_vfs_ioctl
0.00 +3.8 3.85 ±105% perf-profile.calltrace.cycles-pp.smp_call_function_single.event_function_call._perf_event_disable.perf_event_for_each_child.perf_ioctl
0.00 +3.8 3.85 ±105% perf-profile.calltrace.cycles-pp._perf_event_disable.perf_event_for_each_child.perf_ioctl.do_vfs_ioctl.sys_ioctl
0.00 +8.4 8.42 ± 83% perf-profile.calltrace.cycles-pp.delay_tsc.__const_udelay.wait_for_xmitr.serial8250_console_putchar.uart_console_write
0.00 +8.4 8.42 ± 83% perf-profile.calltrace.cycles-pp.__const_udelay.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write
0.17 ±173% +10.7 10.91 ± 81% perf-profile.calltrace.cycles-pp.io_serial_in.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write
0.00 +16.6 16.61 ± 37% perf-profile.calltrace.cycles-pp.event_function_call._perf_event_enable.perf_event_for_each_child.perf_ioctl.do_vfs_ioctl
0.00 +16.6 16.61 ± 37% perf-profile.calltrace.cycles-pp.smp_call_function_single.event_function_call._perf_event_enable.perf_event_for_each_child.perf_ioctl
0.00 +16.6 16.61 ± 37% perf-profile.calltrace.cycles-pp._perf_event_enable.perf_event_for_each_child.perf_ioctl.do_vfs_ioctl.sys_ioctl
0.00 +17.8 17.84 ±102% perf-profile.calltrace.cycles-pp.uart_console_write.serial8250_console_write.univ8250_console_write.console_unlock.vprintk_emit
0.00 +18.2 18.23 ±101% perf-profile.calltrace.cycles-pp.univ8250_console_write.console_unlock.vprintk_emit.printk_emit.devkmsg_write
0.00 +18.2 18.23 ±101% perf-profile.calltrace.cycles-pp.serial8250_console_write.univ8250_console_write.console_unlock.vprintk_emit.printk_emit
0.00 +19.0 19.05 ±101% perf-profile.calltrace.cycles-pp.devkmsg_write.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath
0.00 +19.0 19.05 ±101% perf-profile.calltrace.cycles-pp.printk_emit.devkmsg_write.__vfs_write.vfs_write.sys_write
0.00 +19.0 19.05 ±101% perf-profile.calltrace.cycles-pp.vprintk_emit.printk_emit.devkmsg_write.__vfs_write.vfs_write
0.00 +19.0 19.05 ±101% perf-profile.calltrace.cycles-pp.console_unlock.vprintk_emit.printk_emit.devkmsg_write.__vfs_write
0.25 ±173% +19.5 19.73 ± 81% perf-profile.calltrace.cycles-pp.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write.univ8250_console_write
0.00 +19.9 19.87 ±101% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_fastpath.write
0.00 +19.9 19.87 ±101% perf-profile.calltrace.cycles-pp.write
0.00 +19.9 19.87 ±101% perf-profile.calltrace.cycles-pp.sys_write.entry_SYSCALL_64_fastpath.write
0.00 +19.9 19.87 ±101% perf-profile.calltrace.cycles-pp.vfs_write.sys_write.entry_SYSCALL_64_fastpath.write
0.00 +19.9 19.87 ±101% perf-profile.calltrace.cycles-pp.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath.write
0.26 ±173% +19.9 20.13 ± 81% perf-profile.calltrace.cycles-pp.serial8250_console_putchar.uart_console_write.serial8250_console_write.univ8250_console_write.console_unlock
0.00 +20.5 20.48 ± 13% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_fastpath.ioctl.__libc_start_main
0.00 +20.5 20.48 ± 13% perf-profile.calltrace.cycles-pp.ioctl.__libc_start_main
0.00 +20.5 20.48 ± 13% perf-profile.calltrace.cycles-pp.sys_ioctl.entry_SYSCALL_64_fastpath.ioctl.__libc_start_main
0.00 +20.5 20.48 ± 13% perf-profile.calltrace.cycles-pp.do_vfs_ioctl.sys_ioctl.entry_SYSCALL_64_fastpath.ioctl.__libc_start_main
0.00 +20.5 20.48 ± 13% perf-profile.calltrace.cycles-pp.perf_ioctl.do_vfs_ioctl.sys_ioctl.entry_SYSCALL_64_fastpath.ioctl
0.00 +20.5 20.48 ± 13% perf-profile.calltrace.cycles-pp.perf_event_for_each_child.perf_ioctl.do_vfs_ioctl.sys_ioctl.entry_SYSCALL_64_fastpath
0.00 +23.2 23.19 ± 30% perf-profile.calltrace.cycles-pp.__libc_start_main
9.96 ± 17% +25.7 35.69 ± 42% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.call_cpuidle.do_idle
13.36 ± 24% +30.4 43.77 ± 41% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.verify_cpu
13.38 ± 24% +30.5 43.87 ± 41% perf-profile.calltrace.cycles-pp.start_secondary.verify_cpu
13.38 ± 24% +30.5 43.87 ± 41% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.verify_cpu
11.21 ± 16% +31.2 42.45 ± 41% perf-profile.calltrace.cycles-pp.cpuidle_enter.call_cpuidle.do_idle.cpu_startup_entry.start_secondary
13.53 ± 24% +31.3 44.81 ± 41% perf-profile.calltrace.cycles-pp.verify_cpu
11.22 ± 16% +31.3 42.50 ± 41% perf-profile.calltrace.cycles-pp.call_cpuidle.do_idle.cpu_startup_entry.start_secondary.verify_cpu
11.19 ± 16% +31.9 43.13 ± 39% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.call_cpuidle.do_idle.cpu_startup_entry
38.21 ± 39% -38.0 0.25 ±173% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
17.35 ± 60% -16.7 0.66 ± 93% perf-profile.children.cycles-pp.try_to_wake_up
14.33 ± 15% -14.1 0.23 ±149% perf-profile.children.cycles-pp.__schedule
14.13 ± 63% -13.7 0.40 ±173% perf-profile.children.cycles-pp.ttwu_do_activate
14.02 ± 63% -13.6 0.40 ±173% perf-profile.children.cycles-pp.activate_task
13.98 ± 63% -13.6 0.40 ±173% perf-profile.children.cycles-pp.enqueue_task_fair
13.70 ± 64% -13.3 0.40 ±173% perf-profile.children.cycles-pp.enqueue_entity
13.20 ± 65% -13.2 0.00 perf-profile.children.cycles-pp.__account_scheduler_latency
13.16 ± 28% -13.1 0.08 ±111% perf-profile.children.cycles-pp.return_from_SYSCALL_64
13.15 ± 28% -13.1 0.08 ±111% perf-profile.children.cycles-pp.do_syscall_64
13.05 ± 29% -13.1 0.00 perf-profile.children.cycles-pp._do_fork
13.11 ± 25% -13.0 0.13 ±132% perf-profile.children.cycles-pp.schedule
12.80 ± 65% -12.8 0.00 perf-profile.children.cycles-pp._raw_spin_lock_irqsave
12.28 ± 30% -12.3 0.00 perf-profile.children.cycles-pp.copy_process
12.59 ± 80% -12.1 0.48 ±148% perf-profile.children.cycles-pp._raw_spin_lock
12.00 ± 23% -12.0 0.00 perf-profile.children.cycles-pp._raw_write_lock_irq
11.97 ± 23% -12.0 0.00 perf-profile.children.cycles-pp.queued_write_lock_slowpath
9.89 ± 66% -9.9 0.00 perf-profile.children.cycles-pp.sys_sched_yield
9.60 ± 39% -9.6 0.00 perf-profile.children.cycles-pp.sys_clone
9.73 ± 59% -9.1 0.61 ±109% perf-profile.children.cycles-pp.__wake_up_common
9.70 ± 60% -9.1 0.61 ±109% perf-profile.children.cycles-pp.__wake_up_common_lock
9.70 ± 59% -9.1 0.61 ±109% perf-profile.children.cycles-pp.default_wake_function
9.03 ± 40% -9.0 0.00 perf-profile.children.cycles-pp.do_exit
8.31 ± 38% -8.3 0.00 perf-profile.children.cycles-pp.sys_kill
8.29 ± 38% -8.3 0.00 perf-profile.children.cycles-pp.SYSC_kill
8.21 ± 38% -8.2 0.00 perf-profile.children.cycles-pp.kill_pid_info
8.19 ± 38% -8.2 0.00 perf-profile.children.cycles-pp.group_send_sig_info
8.07 ± 39% -8.1 0.00 perf-profile.children.cycles-pp.do_send_sig_info
8.01 ± 58% -8.0 0.00 perf-profile.children.cycles-pp.do_group_exit
7.99 ± 58% -8.0 0.00 perf-profile.children.cycles-pp.sys_exit_group
7.62 ± 38% -7.6 0.00 perf-profile.children.cycles-pp.send_signal
7.58 ± 39% -7.6 0.00 perf-profile.children.cycles-pp.__send_signal
61.22 ± 5% -7.5 53.67 ± 33% perf-profile.children.cycles-pp.entry_SYSCALL_64_fastpath
7.37 ± 39% -7.4 0.00 perf-profile.children.cycles-pp.sys_wait4
7.37 ± 39% -7.4 0.00 perf-profile.children.cycles-pp.SYSC_wait4
7.37 ± 39% -7.4 0.00 perf-profile.children.cycles-pp.kernel_wait4
7.36 ± 39% -7.4 0.00 perf-profile.children.cycles-pp.do_wait
7.88 ± 50% -7.3 0.61 ±109% perf-profile.children.cycles-pp.__wake_up_sync_key
6.59 ±100% -6.0 0.61 ±109% perf-profile.children.cycles-pp.autoremove_wake_function
5.38 ± 32% -5.4 0.00 perf-profile.children.cycles-pp.exit_to_usermode_loop
5.30 ± 98% -5.3 0.05 ±173% perf-profile.children.cycles-pp.wake_up_process
5.22 ±100% -5.2 0.00 perf-profile.children.cycles-pp.hrtimer_wakeup
5.31 ± 32% -5.1 0.21 ±173% perf-profile.children.cycles-pp.syscall_return_slowpath
5.01 ± 32% -5.0 0.00 perf-profile.children.cycles-pp.do_signal
5.00 ± 32% -5.0 0.00 perf-profile.children.cycles-pp.get_signal
4.88 ±101% -4.9 0.00 perf-profile.children.cycles-pp.down_write
4.67 ±102% -4.7 0.00 perf-profile.children.cycles-pp.call_rwsem_down_write_failed
4.62 ±103% -4.6 0.00 perf-profile.children.cycles-pp.rwsem_down_write_failed
4.75 ± 58% -4.6 0.13 ±132% perf-profile.children.cycles-pp.pick_next_task_fair
4.60 ± 82% -4.6 0.00 perf-profile.children.cycles-pp.mmput
4.58 ± 82% -4.6 0.00 perf-profile.children.cycles-pp.exit_mmap
3.87 ±106% -3.9 0.00 perf-profile.children.cycles-pp.osq_lock
3.76 ±100% -2.9 0.87 ± 88% perf-profile.children.cycles-pp.pipe_write
2.94 ± 99% -2.3 0.63 ±101% perf-profile.children.cycles-pp.pipe_read
5.58 ± 93% -1.5 4.09 ±127% perf-profile.children.cycles-pp.__hrtimer_run_queues
5.65 ± 92% -1.0 4.68 ±117% perf-profile.children.cycles-pp.hrtimer_interrupt
6.23 ± 81% +0.7 6.96 ± 99% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
6.25 ± 81% +0.7 6.98 ± 99% perf-profile.children.cycles-pp.__irqentry_text_start
0.00 +3.9 3.88 ±104% perf-profile.children.cycles-pp._perf_event_disable
0.00 +4.2 4.16 ±105% perf-profile.children.cycles-pp.seq_read
3.15 ± 96% +5.5 8.68 ± 54% perf-profile.children.cycles-pp.sys_read
3.12 ± 96% +5.6 8.73 ± 54% perf-profile.children.cycles-pp.vfs_read
3.04 ± 96% +5.7 8.73 ± 54% perf-profile.children.cycles-pp.__vfs_read
0.31 ± 23% +8.5 8.82 ± 81% perf-profile.children.cycles-pp.__const_udelay
0.31 ± 21% +8.9 9.22 ± 80% perf-profile.children.cycles-pp.delay_tsc
0.56 ± 26% +10.4 10.96 ± 80% perf-profile.children.cycles-pp.io_serial_in
4.25 ± 90% +15.7 19.97 ±100% perf-profile.children.cycles-pp.sys_write
4.23 ± 90% +15.7 19.94 ±100% perf-profile.children.cycles-pp.vfs_write
4.13 ± 90% +15.8 19.94 ±100% perf-profile.children.cycles-pp.__vfs_write
0.02 ±173% +16.6 16.61 ± 37% perf-profile.children.cycles-pp._perf_event_enable
0.31 ± 31% +18.7 19.05 ±101% perf-profile.children.cycles-pp.vprintk_emit
0.31 ± 31% +18.7 19.05 ±101% perf-profile.children.cycles-pp.devkmsg_write
0.31 ± 31% +18.7 19.05 ±101% perf-profile.children.cycles-pp.printk_emit
0.87 ± 23% +19.3 20.13 ± 81% perf-profile.children.cycles-pp.uart_console_write
0.87 ± 23% +19.3 20.13 ± 81% perf-profile.children.cycles-pp.serial8250_console_putchar
0.87 ± 24% +19.3 20.18 ± 80% perf-profile.children.cycles-pp.wait_for_xmitr
0.33 ± 28% +19.5 19.87 ±101% perf-profile.children.cycles-pp.write
0.89 ± 23% +19.7 20.58 ± 80% perf-profile.children.cycles-pp.univ8250_console_write
0.89 ± 23% +19.7 20.58 ± 80% perf-profile.children.cycles-pp.serial8250_console_write
0.02 ±173% +20.5 20.48 ± 13% perf-profile.children.cycles-pp.ioctl
0.02 ±173% +20.5 20.48 ± 13% perf-profile.children.cycles-pp.sys_ioctl
0.02 ±173% +20.5 20.48 ± 13% perf-profile.children.cycles-pp.do_vfs_ioctl
0.02 ±173% +20.5 20.48 ± 13% perf-profile.children.cycles-pp.perf_ioctl
0.02 ±173% +20.5 20.48 ± 13% perf-profile.children.cycles-pp.perf_event_for_each_child
0.02 ±173% +20.5 20.48 ± 13% perf-profile.children.cycles-pp.event_function_call
1.01 ± 26% +20.5 21.49 ± 80% perf-profile.children.cycles-pp.console_unlock
0.13 ± 13% +23.1 23.19 ± 30% perf-profile.children.cycles-pp.__libc_start_main
0.14 ± 33% +24.0 24.16 ± 26% perf-profile.children.cycles-pp.smp_call_function_single
10.06 ± 17% +25.8 35.87 ± 43% perf-profile.children.cycles-pp.intel_idle
13.38 ± 24% +30.5 43.87 ± 41% perf-profile.children.cycles-pp.start_secondary
13.51 ± 24% +31.2 44.71 ± 40% perf-profile.children.cycles-pp.do_idle
13.53 ± 24% +31.3 44.81 ± 41% perf-profile.children.cycles-pp.verify_cpu
13.53 ± 24% +31.3 44.81 ± 41% perf-profile.children.cycles-pp.cpu_startup_entry
11.31 ± 17% +31.9 43.18 ± 39% perf-profile.children.cycles-pp.cpuidle_enter_state
11.33 ± 17% +32.0 43.36 ± 40% perf-profile.children.cycles-pp.cpuidle_enter
11.34 ± 17% +32.1 43.46 ± 40% perf-profile.children.cycles-pp.call_cpuidle
38.10 ± 39% -37.8 0.25 ±173% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
3.82 ±107% -3.8 0.00 perf-profile.self.cycles-pp.osq_lock
0.31 ± 21% +8.9 9.22 ± 80% perf-profile.self.cycles-pp.delay_tsc
0.56 ± 26% +10.4 10.96 ± 80% perf-profile.self.cycles-pp.io_serial_in
0.13 ± 31% +24.0 24.08 ± 25% perf-profile.self.cycles-pp.smp_call_function_single
10.06 ± 17% +25.8 35.87 ± 43% perf-profile.self.cycles-pp.intel_idle
stress-ng.clone.ops_per_sec
900 +-+-------------------------------------------------------------------+
O |
800 +-+ |
700 +-+ O O O |
| O O |
600 +-+ O O O O O O O O |
500 +-O O O |
| |
400 +-+ O |
300 +-+ |
| |
200 +-+ |
100 +-+ |
| |
0 +-+-------------------------------------------------------------------+
stress-ng.time.elapsed_time
130 +-+-------------------------------------------------------------------+
120 +-+ + |
| :: |
110 +-+ : : .+. .|
100 +-+ : +.+ + |
| : |
90 +-+ .+ + +.: |
80 +-+.+. + +.+. .+. .+.+ : + + +. + + |
70 +-+ +. + + + +.+.+. .+.+ + : .+.+ +. + + |
| + + + + + |
60 +-+ |
50 +-+ |
| O O |
40 O-O O O O O O O O O |
30 +-+-O-O-O-O-----O---------------O-------------------------------------+
stress-ng.time.elapsed_time.max
130 +-+-------------------------------------------------------------------+
120 +-+ + |
| :: |
110 +-+ : : .+. .|
100 +-+ : +.+ + |
| : |
90 +-+ .+ + +.: |
80 +-+.+. + +.+. .+. .+.+ : + + +. + + |
70 +-+ +. + + + +.+.+. .+.+ + : .+.+ +. + + |
| + + + + + |
60 +-+ |
50 +-+ |
| O O |
40 O-O O O O O O O O O |
30 +-+-O-O-O-O-----O---------------O-------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Xiaolong
[lkp-robot] [fs] 62231fd3ed: aim7.jobs-per-min 229.5% improvement
by kernel test robot
Greetings,
FYI, we noticed a 229.5% improvement of aim7.jobs-per-min due to commit:
commit: 62231fd3ed94c9d40b4e58ab14fe173f09e252da ("fs: handle inode->i_version more efficiently")
https://git.kernel.org/cgit/linux/kernel/git/jlayton/linux.git iversion-next
in testcase: aim7
on test machine: 40 threads Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz with 384G memory
with following parameters:
disk: 4BRD_12G
md: RAID0
fs: xfs
test: disk_cp
load: 3000
cpufreq_governor: performance
test-description: AIM7 is a traditional UNIX system-level benchmark suite used to test and measure the performance of multiuser systems.
test-url: https://sourceforge.net/projects/aimbench/files/aim-suite7/
In addition to that, the commit also has significant impact on the following tests:
+------------------+-----------------------------------------------------------------------+
| testcase: change | aim7: aim7.jobs-per-min 202.6% improvement |
| test machine | 40 threads Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz with 384G memory |
| test parameters | cpufreq_governor=performance |
| | disk=4BRD_12G |
| | fs=xfs |
| | load=3000 |
| | md=RAID1 |
| | test=disk_rr |
+------------------+-----------------------------------------------------------------------+
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/load/md/rootfs/tbox_group/test/testcase:
gcc-7/performance/4BRD_12G/xfs/x86_64-rhel-7.2/3000/RAID0/debian-x86_64-2016-08-31.cgz/lkp-ivb-ep01/disk_cp/aim7
commit:
269503cada ("btrfs: only dirty the inode in btrfs_update_time if something was changed")
62231fd3ed ("fs: handle inode->i_version more efficiently")
269503cada9f3e17 62231fd3ed94c9d40b4e58ab14
---------------- --------------------------
%stddev %change %stddev
\ | \
86992 +229.5% 286610 ± 4% aim7.jobs-per-min
207.43 -69.5% 63.28 ± 3% aim7.time.elapsed_time
207.43 -69.5% 63.28 ± 3% aim7.time.elapsed_time.max
449153 ± 7% -84.7% 68560 ± 6% aim7.time.involuntary_context_switches
456322 ± 8% -67.0% 150364 aim7.time.minor_page_faults
7526 -74.9% 1890 ± 5% aim7.time.system_time
28.02 ± 8% -27.2% 20.39 aim7.time.user_time
981140 ± 5% -39.6% 592973 ± 2% aim7.time.voluntary_context_switches
565790 ± 7% -93.7% 35389 ± 3% interrupts.CAL:Function_call_interrupts
6592 ± 19% -69.9% 1984 ±168% numa-numastat.node0.other_node
12.66 ± 2% +3.6% 13.12 boot-time.dhcp
12.85 ± 2% +3.5% 13.30 boot-time.kernel_boot
552697 ± 5% -47.3% 291516 softirqs.RCU
288353 ± 3% -27.6% 208732 ± 2% softirqs.SCHED
3045613 -70.8% 888561 ± 4% softirqs.TIMER
4.50 ± 11% -100.0% 0.00 vmstat.procs.b
61.25 ± 9% -54.3% 28.00 ± 9% vmstat.procs.r
10861 ± 5% +70.0% 18460 ± 3% vmstat.system.cs
7.93 ± 2% +22.4 30.34 ± 18% mpstat.cpu.idle%
1.99 ± 15% -2.0 0.01 ± 36% mpstat.cpu.iowait%
89.72 -20.9 68.86 ± 7% mpstat.cpu.sys%
0.37 ± 7% +0.4 0.79 ± 4% mpstat.cpu.usr%
296776 ± 4% -10.4% 265931 ± 2% meminfo.Active
240797 ± 5% -12.6% 210345 ± 3% meminfo.Active(anon)
5509632 ± 9% -23.4% 4218880 ± 10% meminfo.DirectMap2M
31481 ± 8% -52.8% 14850 ± 11% meminfo.Dirty
51320 ± 20% -35.0% 33366 ± 13% meminfo.Shmem
58841383 ± 10% -14.6% 50257608 ± 5% cpuidle.C1.time
1.192e+08 ± 14% -94.2% 6965057 ± 2% cpuidle.C1E.time
300054 ± 12% -77.9% 66189 ± 3% cpuidle.C1E.usage
23556029 ± 4% -15.3% 19957273 ± 2% cpuidle.C3.time
82726 +34.9% 111602 ± 3% cpuidle.C3.usage
6400 ± 12% -89.9% 644.00 ± 6% cpuidle.POLL.usage
15407 ± 6% -49.4% 7789 ± 11% numa-meminfo.node0.Dirty
3688 ± 98% +417.0% 19072 ± 33% numa-meminfo.node0.Shmem
155500 ± 4% -22.5% 120500 ± 12% numa-meminfo.node1.Active
127527 ± 4% -27.9% 91903 ± 17% numa-meminfo.node1.Active(anon)
15790 ± 8% -52.5% 7508 ± 10% numa-meminfo.node1.Dirty
41433 ± 12% -27.0% 30258 ± 11% numa-meminfo.node1.SReclaimable
47611 ± 26% -69.9% 14313 ± 65% numa-meminfo.node1.Shmem
18119808 -9.7% 16362111 ± 6% numa-vmstat.node0.nr_dirtied
3800 ± 5% -48.2% 1967 ± 8% numa-vmstat.node0.nr_dirty
922.00 ± 98% +415.2% 4750 ± 33% numa-vmstat.node0.nr_shmem
3605 ± 4% -53.3% 1683 ± 12% numa-vmstat.node0.nr_zone_write_pending
18679990 -10.0% 16818181 ± 6% numa-vmstat.node0.numa_hit
18673388 -9.9% 16816061 ± 6% numa-vmstat.node0.numa_local
31851 ± 4% -27.6% 23059 ± 16% numa-vmstat.node1.nr_active_anon
3950 ± 7% -51.1% 1930 ± 8% numa-vmstat.node1.nr_dirty
11897 ± 26% -69.9% 3580 ± 65% numa-vmstat.node1.nr_shmem
10363 ± 12% -27.0% 7561 ± 11% numa-vmstat.node1.nr_slab_reclaimable
31851 ± 4% -27.6% 23059 ± 16% numa-vmstat.node1.nr_zone_active_anon
3740 ± 8% -54.4% 1705 ± 9% numa-vmstat.node1.nr_zone_write_pending
60214 ± 5% -12.6% 52654 ± 3% proc-vmstat.nr_active_anon
8014 ± 7% -51.9% 3851 ± 9% proc-vmstat.nr_dirty
12824 ± 20% -35.0% 8336 ± 13% proc-vmstat.nr_shmem
60214 ± 5% -12.6% 52654 ± 3% proc-vmstat.nr_zone_active_anon
7635 ± 7% -56.5% 3322 ± 8% proc-vmstat.nr_zone_write_pending
19816 ± 5% -99.9% 14.00 ±101% proc-vmstat.numa_hint_faults
2832 ± 12% -99.8% 6.25 ±138% proc-vmstat.numa_hint_faults_local
8891 -11.9% 7836 proc-vmstat.numa_other
13349 ± 5% -99.9% 7.25 ±150% proc-vmstat.numa_pages_migrated
115643 ± 4% -100.0% 35.50 ±100% proc-vmstat.numa_pte_updates
932798 ± 4% -65.0% 326824 proc-vmstat.pgfault
13349 ± 5% -99.9% 7.25 ±150% proc-vmstat.pgmigrate_success
2708 -24.1% 2055 ± 7% turbostat.Avg_MHz
90.76 -19.7 71.11 ± 7% turbostat.Busy%
0.70 ± 10% +1.1 1.82 ± 5% turbostat.C1%
299987 ± 12% -78.0% 66069 ± 3% turbostat.C1E
1.43 ± 13% -1.2 0.25 ± 3% turbostat.C1E%
82600 +35.0% 111470 ± 3% turbostat.C3
0.28 ± 4% +0.4 0.73 ± 4% turbostat.C3%
6.88 ± 3% +19.4 26.25 ± 20% turbostat.C6%
4.65 ± 7% +134.7% 10.90 ± 11% turbostat.CPU%c1
0.01 ± 34% +580.0% 0.09 ± 5% turbostat.CPU%c3
4.58 ± 10% +291.0% 17.91 ± 29% turbostat.CPU%c6
127.25 -20.8% 100.72 ± 6% turbostat.CorWatt
10195212 ± 2% -67.6% 3306880 ± 3% turbostat.IRQ
2.61 ± 41% +296.2% 10.35 ± 27% turbostat.Pkg%pc2
154.68 -17.4% 127.72 ± 5% turbostat.PkgWatt
39.62 -5.9% 37.29 ± 3% turbostat.RAMWatt
17030 -67.7% 5500 ± 3% turbostat.SMI
1.327e+12 -75.3% 3.276e+11 perf-stat.branch-instructions
0.26 ± 3% +0.1 0.40 ± 2% perf-stat.branch-miss-rate%
3.514e+09 ± 3% -62.4% 1.32e+09 ± 2% perf-stat.branch-misses
25.04 -13.0 12.08 ± 4% perf-stat.cache-miss-rate%
9.306e+09 -75.3% 2.298e+09 ± 7% perf-stat.cache-misses
3.716e+10 -48.9% 1.899e+10 ± 3% perf-stat.cache-references
2280553 ± 5% -43.7% 1284205 ± 2% perf-stat.context-switches
3.79 -13.1% 3.29 ± 5% perf-stat.cpi
2.254e+13 -75.2% 5.594e+12 ± 5% perf-stat.cpu-cycles
444640 ± 5% -65.7% 152686 perf-stat.cpu-migrations
1.357e+10 ± 8% -47.0% 7.193e+09 ± 28% perf-stat.dTLB-load-misses
1.642e+12 -67.6% 5.323e+11 perf-stat.dTLB-loads
0.27 ± 17% -0.2 0.07 ± 26% perf-stat.dTLB-store-miss-rate%
1.452e+09 ± 17% -82.7% 2.515e+08 ± 26% perf-stat.dTLB-store-misses
5.373e+11 -33.7% 3.564e+11 perf-stat.dTLB-stores
1.492e+08 ± 69% -54.6% 67749320 ± 19% perf-stat.iTLB-load-misses
8.861e+08 ± 57% -85.4% 1.29e+08 ± 5% perf-stat.iTLB-loads
5.949e+12 -71.4% 1.699e+12 perf-stat.instructions
0.26 +15.4% 0.30 ± 5% perf-stat.ipc
916839 ± 4% -65.6% 315501 ± 2% perf-stat.minor-faults
4.383e+09 ± 2% -77.0% 1.008e+09 ± 15% perf-stat.node-load-misses
4.996e+09 ± 2% -76.6% 1.171e+09 ± 15% perf-stat.node-loads
3.029e+09 ± 2% -75.7% 7.349e+08 ± 2% perf-stat.node-store-misses
4.055e+09 ± 3% -75.6% 9.91e+08 ± 2% perf-stat.node-stores
916844 ± 4% -65.6% 315502 ± 2% perf-stat.page-faults
1365 ± 4% -20.4% 1087 ± 3% slabinfo.btrfs_ordered_extent.active_objs
1365 ± 4% -20.4% 1087 ± 3% slabinfo.btrfs_ordered_extent.num_objs
23854 -20.1% 19055 slabinfo.buffer_head.active_objs
613.25 -20.2% 489.50 slabinfo.buffer_head.active_slabs
23944 -20.1% 19123 slabinfo.buffer_head.num_objs
613.25 -20.2% 489.50 slabinfo.buffer_head.num_slabs
4419 ± 9% -17.8% 3633 slabinfo.kmalloc-128.active_objs
4426 ± 8% -17.9% 3633 slabinfo.kmalloc-128.num_objs
21091 -11.6% 18647 slabinfo.kmalloc-16.active_objs
21091 -11.6% 18647 slabinfo.kmalloc-16.num_objs
3152 -14.1% 2706 ± 4% slabinfo.kmalloc-4096.active_objs
3235 -14.7% 2758 ± 4% slabinfo.kmalloc-4096.num_objs
2705 ± 3% -10.2% 2430 slabinfo.nsproxy.active_objs
2705 ± 3% -10.2% 2430 slabinfo.nsproxy.num_objs
1327 -11.1% 1180 slabinfo.posix_timers_cache.active_objs
1327 -11.1% 1180 slabinfo.posix_timers_cache.num_objs
25562 -17.3% 21132 slabinfo.proc_inode_cache.active_objs
25978 -16.9% 21587 slabinfo.proc_inode_cache.num_objs
266.25 ± 21% -28.2% 191.25 ± 6% slabinfo.request_queue.num_objs
1348 -9.4% 1221 slabinfo.xfs_buf_item.active_objs
1348 -9.4% 1221 slabinfo.xfs_buf_item.num_objs
1262 -11.8% 1113 slabinfo.xfs_da_state.active_objs
1262 -11.8% 1113 slabinfo.xfs_da_state.num_objs
1428 ± 5% +40.6% 2007 slabinfo.xfs_inode.active_objs
1479 ± 9% +41.5% 2093 ± 2% slabinfo.xfs_inode.num_objs
1668 -10.9% 1486 slabinfo.xfs_log_ticket.active_objs
1668 -10.9% 1486 slabinfo.xfs_log_ticket.num_objs
88.68 -87.6 1.04 ± 5% perf-profile.calltrace.cycles-pp.xfs_vn_update_time.file_update_time.xfs_file_aio_write_checks.xfs_file_buffered_aio_write.xfs_file_write_iter
88.84 -87.5 1.35 ± 3% perf-profile.calltrace.cycles-pp.file_update_time.xfs_file_aio_write_checks.xfs_file_buffered_aio_write.xfs_file_write_iter.__vfs_write
88.95 -86.9 2.08 ± 2% perf-profile.calltrace.cycles-pp.xfs_file_aio_write_checks.xfs_file_buffered_aio_write.xfs_file_write_iter.__vfs_write.vfs_write
85.91 -85.1 0.78 ± 5% perf-profile.calltrace.cycles-pp.__xfs_trans_commit.xfs_vn_update_time.file_update_time.xfs_file_aio_write_checks.xfs_file_buffered_aio_write
85.42 -84.7 0.76 ± 5% perf-profile.calltrace.cycles-pp.xfs_log_commit_cil.__xfs_trans_commit.xfs_vn_update_time.file_update_time.xfs_file_aio_write_checks
80.48 -80.1 0.42 ± 57% perf-profile.calltrace.cycles-pp._raw_spin_lock.xfs_log_commit_cil.__xfs_trans_commit.xfs_vn_update_time.file_update_time
79.94 -79.5 0.40 ± 57% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.xfs_log_commit_cil.__xfs_trans_commit.xfs_vn_update_time
92.30 -73.4 18.93 ± 6% perf-profile.calltrace.cycles-pp.xfs_file_buffered_aio_write.xfs_file_write_iter.__vfs_write.vfs_write.sys_write
92.41 -73.0 19.37 ± 6% perf-profile.calltrace.cycles-pp.xfs_file_write_iter.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath
92.49 -72.8 19.66 ± 6% perf-profile.calltrace.cycles-pp.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath
92.82 -71.6 21.20 ± 5% perf-profile.calltrace.cycles-pp.vfs_write.sys_write.entry_SYSCALL_64_fastpath
92.87 -71.4 21.46 ± 5% perf-profile.calltrace.cycles-pp.sys_write.entry_SYSCALL_64_fastpath
96.84 -7.0 89.88 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_fastpath
1.35 ± 3% +5.2 6.54 ± 6% perf-profile.calltrace.cycles-pp.iomap_write_begin.iomap_write_actor.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_aio_write
0.00 +5.4 5.39 ± 46% perf-profile.calltrace.cycles-pp.__atime_needs_update.touch_atime.generic_file_read_iter.xfs_file_buffered_aio_read.xfs_file_read_iter
0.00 +5.5 5.45 ± 46% perf-profile.calltrace.cycles-pp.touch_atime.generic_file_read_iter.xfs_file_buffered_aio_read.xfs_file_read_iter.__vfs_read
2.43 ± 2% +10.0 12.43 ± 6% perf-profile.calltrace.cycles-pp.iomap_write_actor.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_aio_write.xfs_file_write_iter
3.15 ± 3% +12.0 15.15 ± 6% perf-profile.calltrace.cycles-pp.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_aio_write.xfs_file_write_iter.__vfs_write
3.17 ± 3% +12.1 15.23 ± 6% perf-profile.calltrace.cycles-pp.iomap_file_buffered_write.xfs_file_buffered_aio_write.xfs_file_write_iter.__vfs_write.vfs_write
0.00 +13.5 13.45 ± 14% perf-profile.calltrace.cycles-pp.down_read.xfs_ilock.xfs_file_buffered_aio_read.xfs_file_read_iter.__vfs_read
0.00 +13.5 13.49 ± 14% perf-profile.calltrace.cycles-pp.xfs_ilock.xfs_file_buffered_aio_read.xfs_file_read_iter.__vfs_read.vfs_read
0.00 +15.4 15.37 ± 5% perf-profile.calltrace.cycles-pp.up_read.xfs_iunlock.xfs_file_buffered_aio_read.xfs_file_read_iter.__vfs_read
0.00 +15.4 15.38 ± 5% perf-profile.calltrace.cycles-pp.xfs_iunlock.xfs_file_buffered_aio_read.xfs_file_read_iter.__vfs_read.vfs_read
1.23 ± 3% +15.6 16.79 ± 25% perf-profile.calltrace.cycles-pp.generic_file_read_iter.xfs_file_buffered_aio_read.xfs_file_read_iter.__vfs_read.vfs_read
1.72 ± 4% +44.0 45.70 ± 3% perf-profile.calltrace.cycles-pp.xfs_file_buffered_aio_read.xfs_file_read_iter.__vfs_read.vfs_read.sys_read
1.80 ± 4% +45.7 47.51 ± 4% perf-profile.calltrace.cycles-pp.xfs_file_read_iter.__vfs_read.vfs_read.sys_read.entry_SYSCALL_64_fastpath
2.12 ± 4% +56.0 58.16 ± 2% perf-profile.calltrace.cycles-pp.__vfs_read.vfs_read.sys_read.entry_SYSCALL_64_fastpath
2.49 ± 4% +60.8 63.25 ± 3% perf-profile.calltrace.cycles-pp.vfs_read.sys_read.entry_SYSCALL_64_fastpath
2.58 ± 4% +60.9 63.52 ± 3% perf-profile.calltrace.cycles-pp.sys_read.entry_SYSCALL_64_fastpath
88.68 -87.6 1.04 ± 5% perf-profile.children.cycles-pp.xfs_vn_update_time
88.84 -87.5 1.38 ± 3% perf-profile.children.cycles-pp.file_update_time
88.97 -86.8 2.17 perf-profile.children.cycles-pp.xfs_file_aio_write_checks
86.42 -85.2 1.25 ± 3% perf-profile.children.cycles-pp.__xfs_trans_commit
85.96 -84.7 1.21 ± 2% perf-profile.children.cycles-pp.xfs_log_commit_cil
80.96 -79.9 1.07 ± 5% perf-profile.children.cycles-pp._raw_spin_lock
80.33 -79.7 0.62 ± 5% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
92.31 -73.3 18.99 ± 6% perf-profile.children.cycles-pp.xfs_file_buffered_aio_write
92.42 -73.0 19.40 ± 6% perf-profile.children.cycles-pp.xfs_file_write_iter
92.52 -72.8 19.75 ± 6% perf-profile.children.cycles-pp.__vfs_write
92.85 -71.6 21.27 ± 5% perf-profile.children.cycles-pp.vfs_write
92.90 -71.3 21.56 ± 5% perf-profile.children.cycles-pp.sys_write
96.88 -6.9 89.93 perf-profile.children.cycles-pp.entry_SYSCALL_64_fastpath
0.17 ± 7% +5.2 5.40 ± 46% perf-profile.children.cycles-pp.__atime_needs_update
1.37 ± 3% +5.3 6.63 ± 6% perf-profile.children.cycles-pp.iomap_write_begin
0.19 ± 6% +5.3 5.47 ± 46% perf-profile.children.cycles-pp.touch_atime
1.15 ± 4% +5.9 7.10 ± 3% perf-profile.children.cycles-pp.pagecache_get_page
2.46 ± 2% +10.1 12.58 ± 6% perf-profile.children.cycles-pp.iomap_write_actor
3.17 ± 3% +12.1 15.24 ± 6% perf-profile.children.cycles-pp.iomap_apply
3.18 ± 3% +12.1 15.29 ± 6% perf-profile.children.cycles-pp.iomap_file_buffered_write
0.84 ± 7% +12.7 13.52 ± 14% perf-profile.children.cycles-pp.down_read
0.37 ± 6% +14.4 14.78 ± 13% perf-profile.children.cycles-pp.xfs_ilock
0.65 ± 4% +14.7 15.39 ± 5% perf-profile.children.cycles-pp.up_read
1.25 ± 3% +15.6 16.85 ± 25% perf-profile.children.cycles-pp.generic_file_read_iter
0.48 ± 5% +15.9 16.35 ± 5% perf-profile.children.cycles-pp.xfs_iunlock
1.75 ± 4% +44.1 45.85 ± 3% perf-profile.children.cycles-pp.xfs_file_buffered_aio_read
1.80 ± 4% +45.7 47.52 ± 4% perf-profile.children.cycles-pp.xfs_file_read_iter
2.14 ± 4% +56.1 58.21 ± 2% perf-profile.children.cycles-pp.__vfs_read
2.51 ± 4% +60.8 63.31 ± 3% perf-profile.children.cycles-pp.vfs_read
2.60 ± 4% +61.0 63.61 ± 3% perf-profile.children.cycles-pp.sys_read
79.99 -79.4 0.62 ± 5% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.12 ± 3% +5.1 5.18 ± 48% perf-profile.self.cycles-pp.__atime_needs_update
0.27 ± 4% +5.9 6.12 ± 31% perf-profile.self.cycles-pp.generic_file_read_iter
0.32 ± 5% +10.2 10.53 ± 6% perf-profile.self.cycles-pp.__vfs_read
0.76 ± 6% +12.5 13.26 ± 14% perf-profile.self.cycles-pp.down_read
0.65 ± 4% +14.7 15.32 ± 5% perf-profile.self.cycles-pp.up_read
78108 -71.8% 22028 sched_debug.cfs_rq:/.exec_clock.avg
78381 -71.6% 22252 sched_debug.cfs_rq:/.exec_clock.max
77700 -72.0% 21783 sched_debug.cfs_rq:/.exec_clock.min
134.07 ± 13% -24.0% 101.86 ± 11% sched_debug.cfs_rq:/.exec_clock.stddev
3042129 -73.9% 794721 sched_debug.cfs_rq:/.min_vruntime.avg
3167769 -73.2% 848465 sched_debug.cfs_rq:/.min_vruntime.max
2976915 -73.9% 776601 sched_debug.cfs_rq:/.min_vruntime.min
36725 ± 17% -65.6% 12636 ± 21% sched_debug.cfs_rq:/.min_vruntime.stddev
0.75 -16.2% 0.63 ± 3% sched_debug.cfs_rq:/.nr_running.avg
0.28 ± 6% +20.6% 0.34 ± 7% sched_debug.cfs_rq:/.nr_running.stddev
0.88 ± 3% -37.8% 0.55 ± 8% sched_debug.cfs_rq:/.nr_spread_over.avg
2.31 ± 20% -56.8% 1.00 sched_debug.cfs_rq:/.nr_spread_over.max
0.75 -33.3% 0.50 sched_debug.cfs_rq:/.nr_spread_over.min
0.37 ± 23% -65.0% 0.13 ± 41% sched_debug.cfs_rq:/.nr_spread_over.stddev
19.78 ± 7% -27.3% 14.38 ± 4% sched_debug.cfs_rq:/.runnable_load_avg.avg
69.12 ± 14% -43.6% 39.00 ± 22% sched_debug.cfs_rq:/.runnable_load_avg.max
15.00 ± 10% -38.5% 9.22 ± 15% sched_debug.cfs_rq:/.runnable_load_avg.stddev
118889 ± 31% -53.6% 55144 ± 24% sched_debug.cfs_rq:/.spread0.max
-72304 -77.0% -16642 sched_debug.cfs_rq:/.spread0.min
36743 ± 17% -65.6% 12631 ± 21% sched_debug.cfs_rq:/.spread0.stddev
904.02 ± 2% -21.7% 707.67 ± 10% sched_debug.cfs_rq:/.util_avg.avg
2151 ± 7% -26.8% 1575 ± 13% sched_debug.cfs_rq:/.util_avg.max
423.44 ± 5% -26.6% 311.00 ± 10% sched_debug.cfs_rq:/.util_avg.stddev
497178 ± 2% -22.5% 385267 ± 10% sched_debug.cpu.avg_idle.avg
127739 ± 17% -63.0% 47277 ± 17% sched_debug.cpu.avg_idle.min
116003 -49.0% 59128 ± 8% sched_debug.cpu.clock.avg
116026 -49.0% 59137 ± 8% sched_debug.cpu.clock.max
115978 -49.0% 59118 ± 8% sched_debug.cpu.clock.min
14.40 ± 15% -61.6% 5.54 ± 47% sched_debug.cpu.clock.stddev
116003 -49.0% 59128 ± 8% sched_debug.cpu.clock_task.avg
116026 -49.0% 59137 ± 8% sched_debug.cpu.clock_task.max
115978 -49.0% 59118 ± 8% sched_debug.cpu.clock_task.min
14.40 ± 15% -61.6% 5.54 ± 47% sched_debug.cpu.clock_task.stddev
20.25 ± 7% -28.1% 14.56 ± 5% sched_debug.cpu.cpu_load[0].avg
68.56 ± 15% -32.5% 46.25 ± 30% sched_debug.cpu.cpu_load[0].max
15.17 ± 11% -35.3% 9.82 ± 20% sched_debug.cpu.cpu_load[0].stddev
21.01 ± 5% -26.4% 15.47 ± 7% sched_debug.cpu.cpu_load[1].avg
73.81 ± 9% -44.3% 41.12 ± 32% sched_debug.cpu.cpu_load[1].max
15.20 ± 12% -43.8% 8.54 ± 24% sched_debug.cpu.cpu_load[1].stddev
21.56 ± 5% -22.6% 16.68 ± 10% sched_debug.cpu.cpu_load[2].avg
76.00 ± 10% -39.5% 46.00 ± 42% sched_debug.cpu.cpu_load[2].max
14.87 ± 12% -40.7% 8.82 ± 38% sched_debug.cpu.cpu_load[2].stddev
22.19 ± 7% -22.2% 17.26 ± 9% sched_debug.cpu.cpu_load[3].avg
85.25 ± 33% -45.6% 46.38 ± 30% sched_debug.cpu.cpu_load[3].max
15.36 ± 24% -43.0% 8.75 ± 34% sched_debug.cpu.cpu_load[3].stddev
22.53 ± 14% -26.9% 16.46 ± 9% sched_debug.cpu.cpu_load[4].avg
115.44 ± 85% -63.7% 41.88 ± 24% sched_debug.cpu.cpu_load[4].max
19.94 ± 72% -59.8% 8.02 ± 26% sched_debug.cpu.cpu_load[4].stddev
2376 ± 8% -24.9% 1783 ± 9% sched_debug.cpu.curr->pid.avg
5595 ± 7% -33.0% 3749 ± 9% sched_debug.cpu.curr->pid.max
1108 -18.7% 901.58 ± 2% sched_debug.cpu.curr->pid.stddev
86474 -66.0% 29373 sched_debug.cpu.nr_load_updates.avg
90842 -63.8% 32895 sched_debug.cpu.nr_load_updates.max
85649 -66.5% 28665 sched_debug.cpu.nr_load_updates.min
1.26 ± 9% -46.0% 0.68 ± 2% sched_debug.cpu.nr_running.avg
4.19 ± 11% -58.2% 1.75 ± 14% sched_debug.cpu.nr_running.max
0.90 ± 10% -53.5% 0.42 ± 4% sched_debug.cpu.nr_running.stddev
26972 ± 2% -38.2% 16677 ± 6% sched_debug.cpu.nr_switches.avg
36396 ± 7% -29.3% 25723 ± 5% sched_debug.cpu.nr_switches.max
23604 ± 3% -39.4% 14307 ± 5% sched_debug.cpu.nr_switches.min
52.70 -64.5% 18.69 ± 56% sched_debug.cpu.nr_uninterruptible.avg
234.94 ± 5% -61.1% 91.50 ± 11% sched_debug.cpu.nr_uninterruptible.max
-156.12 -64.9% -54.88 sched_debug.cpu.nr_uninterruptible.min
95.53 ± 16% -63.0% 35.32 ± 13% sched_debug.cpu.nr_uninterruptible.stddev
25576 ± 3% -41.9% 14862 ± 7% sched_debug.cpu.sched_count.avg
31339 ± 7% -40.8% 18552 ± 4% sched_debug.cpu.sched_count.max
23353 ± 3% -41.8% 13594 ± 6% sched_debug.cpu.sched_count.min
12755 ± 2% -40.8% 7556 ± 7% sched_debug.cpu.ttwu_count.avg
17351 ± 5% -39.0% 10586 ± 12% sched_debug.cpu.ttwu_count.max
10212 -32.0% 6949 ± 7% sched_debug.cpu.ttwu_count.min
1712 ± 15% -53.0% 805.03 ± 17% sched_debug.cpu.ttwu_count.stddev
2128 ± 2% -71.6% 605.08 sched_debug.cpu.ttwu_local.avg
4887 ± 16% -82.0% 878.25 ± 6% sched_debug.cpu.ttwu_local.max
1406 ± 7% -65.8% 481.00 ± 4% sched_debug.cpu.ttwu_local.min
707.25 ± 24% -87.8% 86.41 ± 13% sched_debug.cpu.ttwu_local.stddev
115976 -49.0% 59118 ± 8% sched_debug.cpu_clk
115976 -49.0% 59118 ± 8% sched_debug.ktime
0.05 ± 10% +85.5% 0.08 ± 39% sched_debug.rt_rq:/.rt_time.avg
1.49 ± 11% +111.4% 3.14 ± 39% sched_debug.rt_rq:/.rt_time.max
0.00 ± 24% -100.0% 0.00 sched_debug.rt_rq:/.rt_time.min
0.23 ± 11% +110.4% 0.49 ± 39% sched_debug.rt_rq:/.rt_time.stddev
116590 -48.9% 59525 ± 8% sched_debug.sched_clk
aim7.jobs-per-min
350000 +-+----------------------------------------------------------------+
| |
300000 +-+ O O |
O O O O O O O O O O O O O O O O O O O O
| O |
250000 +-+ |
| |
200000 +-+ |
| |
150000 +-+ |
| |
| |
100000 +-++..+..+..+..+..+..+..+..+..+...+..+..+..+..+..+..+..+..+..+..+..|
| |
50000 +-+----------------------------------------------------------------+
vmstat.system.cs
20000 +-+-----------------------------------------------------------------+
19000 +-+ O O O O O |
O O O O O O O O O O O O O
18000 +-+ O O O |
17000 +-+ O O |
| |
16000 +-+ |
15000 +-+ |
14000 +-+ |
| |
13000 +-+ |
12000 +-+ .+ |
| +.. +.. +.. +..+.. ..+. + +.. |
11000 +-+ +..+.. .. .. .. +..+. + .. .|
10000 +-+-----------------------------------------------------------------+
interrupts.CAL:Function_call_interrupts
700000 +-+----------------------------------------------------------------+
| +.. |
600000 +-++.. .+..+.. .+.. .+..+.. .. |
|. .+..+..+. +.. ..+.. .+. +. +..+ +..|
500000 +-+ +..+. +. +. |
| |
400000 +-+ |
| |
300000 +-+ |
| |
200000 +-+ |
| |
100000 +-+ |
O O O O O O O O O O O O O O O O O O O O O O O
0 +-+----------------------------------------------------------------+
perf-stat.cpu-cycles
2.4e+13 +-+---------------------------------------------------------------+
2.2e+13 +-++..+..+..+..+..+..+..+..+. +. +..+..+..+..+..+..+..+..+..|
| |
2e+13 +-+ |
1.8e+13 +-+ |
| |
1.6e+13 +-+ |
1.4e+13 +-+ |
1.2e+13 +-+ |
| |
1e+13 +-+ |
8e+12 +-+ |
| |
6e+12 O-+O O O O O O O O O O O O O O O O O O O O O O
4e+12 +-+---------------------------------------------------------------+
perf-stat.instructions
6.5e+12 +-+---------------------------------------------------------------+
6e+12 +-+ .+..+..+..+..+..+..+..+..+..+..+..+..+..+..+.. .+..|
| +..+..+. +..+. |
5.5e+12 +-+ |
5e+12 +-+ |
| |
4.5e+12 +-+ |
4e+12 +-+ |
3.5e+12 +-+ |
| |
3e+12 +-+ |
2.5e+12 +-+ |
| |
2e+12 O-+O O O O O O O O O O O O O O O O O O O O O O
1.5e+12 +-+---------------------------------------------------------------+
perf-stat.cache-references
4e+10 +-+---------------------------------------------------------------+
| .+..+..+..+.. .+.. .+.. .+.. .+.. |
|..+..+..+..+. +..+. +. +..+. +..+. +..+..|
3.5e+10 +-+ |
| |
| |
3e+10 +-+ |
| |
2.5e+10 +-+ |
| |
| |
2e+10 O-+O O O O O O O O O O O O O O O O O O
| O O O O |
| |
1.5e+10 +-+---------------------------------------------------------------+
perf-stat.cache-misses
1e+10 +-+-----------------------------------------------------------------+
|..+..+..+..+.. .+..+.. .. .. +..+..+...+. .+..+..|
9e+09 +-+ +...+. +..+ + +. |
8e+09 +-+ |
| |
7e+09 +-+ |
6e+09 +-+ |
| |
5e+09 +-+ |
4e+09 +-+ |
| |
3e+09 O-+O O O O O |
2e+09 +-+ O O O O O O O O O O O O O O O O O
| |
1e+09 +-+-----------------------------------------------------------------+
perf-stat.branch-instructions
1.4e+12 +-+---------------------------------------------------------------+
|..+..+..+..+. +. +..+. +. +..+..+..+..+..|
1.2e+12 +-+ |
| |
| |
1e+12 +-+ |
| |
8e+11 +-+ |
| |
6e+11 +-+ |
| |
| |
4e+11 +-+ |
O O O O O O O O O O O O O O O O O O O O O O O
2e+11 +-+---------------------------------------------------------------+
perf-stat.branch-misses
4.5e+09 +-+---------------------------------------------------------------+
| .+ |
4e+09 +-++..+..+. + |
| + .+.. |
3.5e+09 +-+ +..+. +..+..+..+..+..+..+.. .+..+..+..+..+..+..|
| +. |
3e+09 +-+ |
| |
2.5e+09 +-+ |
| |
2e+09 +-+ |
| O |
1.5e+09 +-+O O O O O O O O O O O |
O O O O O O O O O O O
1e+09 +-+---------------------------------------------------------------+
perf-stat.dTLB-loads
1.8e+12 +-+---------------------------------------------------------------+
|.. .+..+..+..+..+..+..+..+..+..+.. .+..+.. .+..|
1.6e+12 +-++..+..+. +. +..+..+..+. |
| |
1.4e+12 +-+ |
| |
1.2e+12 +-+ |
| |
[18 ASCII trend charts elided: one time-series plot per metric, comparing bisect-good (+/*) against bisect-bad (O) samples, for:
perf-stat.dTLB-stores, perf-stat.node-loads, perf-stat.node-load-misses,
perf-stat.node-stores, perf-stat.node-store-misses, perf-stat.page-faults,
perf-stat.context-switches, perf-stat.cpu-migrations, perf-stat.minor-faults,
perf-stat.cache-miss-rate_, perf-stat.ipc, perf-stat.cpi,
aim7.time.system_time, aim7.time.elapsed_time, aim7.time.elapsed_time.max,
aim7.time.minor_page_faults, aim7.time.voluntary_context_switches,
aim7.time.involuntary_context_switches]
[*] bisect-good sample
[O] bisect-bad sample
***************************************************************************************************
lkp-ivb-ep01: 40 threads Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz with 384G memory
=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/load/md/rootfs/tbox_group/test/testcase:
gcc-7/performance/4BRD_12G/xfs/x86_64-rhel-7.2/3000/RAID1/debian-x86_64-2016-08-31.cgz/lkp-ivb-ep01/disk_rr/aim7
commit:
269503cada ("btrfs: only dirty the inode in btrfs_update_time if something was changed")
62231fd3ed ("fs: handle inode->i_version more efficiently")
269503cada9f3e17 62231fd3ed94c9d40b4e58ab14
---------------- --------------------------
%stddev %change %stddev
\ | \
82623 ± 2% +202.6% 250001 aim7.jobs-per-min
218.41 ± 2% -66.8% 72.47 aim7.time.elapsed_time
218.41 ± 2% -66.8% 72.47 aim7.time.elapsed_time.max
515283 ± 8% -97.6% 12328 ± 8% aim7.time.involuntary_context_switches
566723 ± 11% -73.9% 148022 aim7.time.minor_page_faults
7813 -79.1% 1636 aim7.time.system_time
40.87 ± 4% +99.6% 81.56 aim7.time.user_time
1386479 ± 11% -27.6% 1003760 aim7.time.voluntary_context_switches
382548 ± 10% -90.2% 37413 interrupts.CAL:Function_call_interrupts
7108 ± 16% -72.5% 1956 ±108% numa-numastat.node0.other_node
11615 ± 16% -47.3% 6123 ± 8% softirqs.NET_RX
626099 ± 7% -32.1% 425130 softirqs.RCU
3142806 -66.7% 1045757 softirqs.TIMER
67.00 ± 9% -66.4% 22.50 ± 2% vmstat.procs.r
12932 ± 9% +116.7% 28024 vmstat.system.cs
46440 -7.9% 42761 vmstat.system.in
311541 -13.5% 269562 ± 2% meminfo.Active
251665 -19.5% 202511 ± 3% meminfo.Active(anon)
59875 +12.0% 67050 meminfo.Active(file)
10442 -9.7% 9433 meminfo.Inactive(anon)
58919 ± 5% -69.1% 18229 ± 9% meminfo.Shmem
8.04 ± 7% +32.2 40.28 mpstat.cpu.idle%
2.61 ± 16% -0.9 1.71 ± 19% mpstat.cpu.iowait%
0.00 ± 36% +0.0 0.01 ± 22% mpstat.cpu.soft%
88.85 -33.7 55.19 mpstat.cpu.sys%
0.50 ± 4% +2.3 2.81 mpstat.cpu.usr%
75242891 ± 18% +24.9% 94005590 ± 6% cpuidle.C1.time
343278 ± 13% +33.5% 458346 ± 2% cpuidle.C1.usage
1.51e+08 ± 15% -67.6% 48861073 ± 5% cpuidle.C1E.time
366639 ± 13% -47.4% 192734 cpuidle.C1E.usage
30899136 ± 11% +63.6% 50545366 cpuidle.C3.time
100741 ± 8% +170.7% 272682 cpuidle.C3.usage
6.368e+08 ± 9% +58.6% 1.01e+09 ± 2% cpuidle.C6.time
756348 ± 8% +80.6% 1365829 cpuidle.C6.usage
4897473 ± 15% -45.7% 2660288 ± 35% cpuidle.POLL.time
7396 ± 15% -84.6% 1141 ± 16% cpuidle.POLL.usage
29984 ± 4% +13.5% 34036 ± 3% numa-meminfo.node0.Active(file)
38389 ± 50% -60.1% 15312 ±124% numa-meminfo.node0.KernelStack
3002292 ± 42% -35.9% 1925514 ± 25% numa-meminfo.node0.MemUsed
77571 ± 41% -64.4% 27645 ±117% numa-meminfo.node0.PageTables
100325 ± 34% -41.9% 58259 ± 57% numa-meminfo.node0.SUnreclaim
3702 ± 89% +248.2% 12890 ± 42% numa-meminfo.node0.Shmem
29838 ± 5% +14.1% 34046 ± 4% numa-meminfo.node1.Active(file)
7604 ± 42% -65.7% 2609 ±138% numa-meminfo.node1.Inactive(anon)
12832 ± 14% -20.5% 10195 ± 18% numa-meminfo.node1.Mapped
29986 ± 99% +139.1% 71688 ± 43% numa-meminfo.node1.PageTables
70770 ± 7% -26.2% 52216 ± 11% numa-meminfo.node1.SReclaimable
55205 ± 2% -90.3% 5341 ± 85% numa-meminfo.node1.Shmem
62909 -19.6% 50569 ± 3% proc-vmstat.nr_active_anon
14978 +14.6% 17167 ± 5% proc-vmstat.nr_active_file
2616 -10.6% 2338 proc-vmstat.nr_inactive_anon
14729 ± 5% -69.0% 4561 ± 9% proc-vmstat.nr_shmem
62909 -19.6% 50569 ± 3% proc-vmstat.nr_zone_active_anon
14978 +14.6% 17168 ± 5% proc-vmstat.nr_zone_active_file
2616 -10.6% 2338 proc-vmstat.nr_zone_inactive_anon
18863 ± 5% -99.9% 22.25 ± 59% proc-vmstat.numa_hint_faults
4076 ± 7% -99.8% 7.50 ±135% proc-vmstat.numa_hint_faults_local
9057 ± 2% -13.9% 7798 proc-vmstat.numa_other
12294 ± 5% -99.9% 13.75 ± 81% proc-vmstat.numa_pages_migrated
120145 ± 4% -100.0% 54.00 ± 58% proc-vmstat.numa_pte_updates
1067004 ± 6% -69.0% 330915 proc-vmstat.pgfault
12294 ± 5% -99.9% 13.75 ± 81% proc-vmstat.pgmigrate_success
4485126 ± 36% -78.3% 975433 ± 19% proc-vmstat.pgpgout
38415 ± 50% -60.4% 15220 ±124% numa-vmstat.node0.nr_kernel_stack
19399 ± 41% -64.6% 6872 ±116% numa-vmstat.node0.nr_page_table_pages
925.25 ± 89% +249.1% 3229 ± 42% numa-vmstat.node0.nr_shmem
25092 ± 34% -42.0% 14550 ± 57% numa-vmstat.node0.nr_slab_unreclaimable
238193 ± 35% -75.6% 58231 ± 31% numa-vmstat.node0.nr_written
18962193 -10.2% 17025019 ± 2% numa-vmstat.node0.numa_hit
18955274 -10.2% 17022869 numa-vmstat.node0.numa_local
6922 ± 17% -68.6% 2171 ± 93% numa-vmstat.node0.numa_other
18003465 -9.7% 16256991 ± 2% numa-vmstat.node1.nr_dirtied
1911 ± 42% -65.9% 652.00 ±138% numa-vmstat.node1.nr_inactive_anon
3249 ± 15% -20.8% 2574 ± 20% numa-vmstat.node1.nr_mapped
7478 ± 98% +140.9% 18018 ± 43% numa-vmstat.node1.nr_page_table_pages
13799 ± 2% -90.3% 1343 ± 85% numa-vmstat.node1.nr_shmem
17690 ± 7% -26.2% 13064 ± 11% numa-vmstat.node1.nr_slab_reclaimable
239974 ± 35% -75.4% 59045 ± 34% numa-vmstat.node1.nr_written
1911 ± 42% -65.9% 652.00 ±138% numa-vmstat.node1.nr_zone_inactive_anon
19048270 -10.5% 17042096 numa-vmstat.node1.numa_hit
18873726 -10.6% 16863698 numa-vmstat.node1.numa_local
2662 -73.0% 720.25 turbostat.Avg_MHz
89.91 -30.0 59.87 turbostat.Busy%
2961 -59.4% 1203 turbostat.Bzy_MHz
334518 ± 13% +34.3% 449342 ± 2% turbostat.C1
0.86 ± 19% +2.3 3.15 ± 6% turbostat.C1%
366561 ± 13% -47.4% 192676 turbostat.C1E
100631 ± 8% +170.8% 272551 turbostat.C3
0.35 ± 10% +1.3 1.70 turbostat.C3%
755296 ± 8% +80.7% 1364504 turbostat.C6
7.22 ± 7% +26.7 33.88 ± 2% turbostat.C6%
5.19 ± 11% +356.9% 23.69 ± 2% turbostat.CPU%c1
0.04 ± 31% +1007.1% 0.39 ± 3% turbostat.CPU%c3
4.87 ± 10% +229.9% 16.05 ± 3% turbostat.CPU%c6
125.05 -66.7% 41.68 turbostat.CorWatt
10544150 ± 2% -69.5% 3212728 turbostat.IRQ
2.78 ± 18% +145.7% 6.84 ± 28% turbostat.Pkg%pc2
1.09 ± 48% +372.9% 5.14 ± 42% turbostat.Pkg%pc6
152.52 -55.4% 67.98 turbostat.PkgWatt
39.95 -11.1% 35.51 turbostat.RAMWatt
16200 ± 6% -63.3% 5950 turbostat.SMI
1.387e+12 -74.5% 3.531e+11 perf-stat.branch-instructions
0.28 +0.2 0.48 perf-stat.branch-miss-rate%
3.906e+09 -57.0% 1.68e+09 perf-stat.branch-misses
30.69 -11.3 19.41 perf-stat.cache-miss-rate%
1.13e+10 ± 2% -71.7% 3.202e+09 perf-stat.cache-misses
3.683e+10 -55.2% 1.65e+10 perf-stat.cache-references
2858743 ± 9% -25.7% 2122810 perf-stat.context-switches
3.76 -69.2% 1.16 perf-stat.cpi
2.331e+13 -90.9% 2.111e+12 perf-stat.cpu-cycles
662934 ± 8% -75.4% 162861 perf-stat.cpu-migrations
0.78 ± 14% +0.4 1.15 ± 23% perf-stat.dTLB-load-miss-rate%
1.35e+10 ± 14% -50.9% 6.625e+09 ± 23% perf-stat.dTLB-load-misses
1.712e+12 -66.7% 5.694e+11 perf-stat.dTLB-loads
0.22 ± 15% -0.2 0.07 ± 15% perf-stat.dTLB-store-miss-rate%
1.254e+09 ± 15% -80.3% 2.466e+08 ± 16% perf-stat.dTLB-store-misses
5.743e+11 ± 2% -34.4% 3.767e+11 perf-stat.dTLB-stores
18.06 ± 48% +14.6 32.62 ± 11% perf-stat.iTLB-load-miss-rate%
1.092e+08 ± 20% -28.0% 78621807 ± 13% perf-stat.iTLB-load-misses
6.911e+08 ± 61% -76.5% 1.622e+08 ± 6% perf-stat.iTLB-loads
6.205e+12 -70.6% 1.823e+12 perf-stat.instructions
59120 ± 18% -60.1% 23560 ± 12% perf-stat.instructions-per-iTLB-miss
0.27 +224.3% 0.86 perf-stat.ipc
1050580 ± 6% -69.6% 319027 perf-stat.minor-faults
39.90 -21.3 18.55 perf-stat.node-load-miss-rate%
4.32e+09 -92.2% 3.364e+08 perf-stat.node-load-misses
6.506e+09 -77.3% 1.477e+09 perf-stat.node-loads
40.82 -25.5 15.30 ± 2% perf-stat.node-store-miss-rate%
3.077e+09 ± 2% -91.1% 2.729e+08 ± 2% perf-stat.node-store-misses
4.46e+09 ± 2% -66.1% 1.51e+09 perf-stat.node-stores
1050570 ± 6% -69.6% 319033 perf-stat.page-faults
1472 -11.7% 1300 slabinfo.Acpi-ParseExt.active_objs
1472 -11.7% 1300 slabinfo.Acpi-ParseExt.num_objs
1959 ± 34% -65.7% 671.75 ± 22% slabinfo.bio-3.active_objs
1959 ± 34% -65.7% 671.75 ± 22% slabinfo.bio-3.num_objs
3120 ± 19% -45.8% 1692 ± 9% slabinfo.bsg_cmd.active_objs
3120 ± 19% -45.8% 1692 ± 9% slabinfo.bsg_cmd.num_objs
1646 ± 6% -22.9% 1269 ± 8% slabinfo.btrfs_ordered_extent.active_objs
1786 ± 3% -28.9% 1269 ± 8% slabinfo.btrfs_ordered_extent.num_objs
9210 -9.4% 8344 slabinfo.buffer_head.active_slabs
359223 -9.4% 325443 slabinfo.buffer_head.num_objs
9210 -9.4% 8344 slabinfo.buffer_head.num_slabs
840.00 +15.5% 970.00 ± 6% slabinfo.file_lock_cache.active_objs
840.00 +15.5% 970.00 ± 6% slabinfo.file_lock_cache.num_objs
3914 ± 6% -10.6% 3498 ± 5% slabinfo.kmalloc-1024.num_objs
5033 ± 6% -22.9% 3879 ± 4% slabinfo.kmalloc-128.active_objs
5267 ± 6% -25.1% 3947 ± 4% slabinfo.kmalloc-128.num_objs
10390 ± 16% -33.5% 6909 ± 6% slabinfo.kmalloc-192.active_objs
248.75 ± 16% -34.1% 164.00 ± 6% slabinfo.kmalloc-192.active_slabs
10453 ± 16% -33.9% 6910 ± 6% slabinfo.kmalloc-192.num_objs
248.75 ± 16% -34.1% 164.00 ± 6% slabinfo.kmalloc-192.num_slabs
10509 ± 2% -46.6% 5607 ± 13% slabinfo.kmalloc-256.active_objs
331.00 ± 2% -45.2% 181.25 ± 13% slabinfo.kmalloc-256.active_slabs
10607 ± 2% -45.2% 5807 ± 13% slabinfo.kmalloc-256.num_objs
331.00 ± 2% -45.2% 181.25 ± 13% slabinfo.kmalloc-256.num_slabs
1638 -10.8% 1461 ± 2% slabinfo.mnt_cache.active_objs
1638 -10.8% 1461 ± 2% slabinfo.mnt_cache.num_objs
1329 -9.8% 1199 slabinfo.posix_timers_cache.active_objs
1329 -9.8% 1199 slabinfo.posix_timers_cache.num_objs
25748 -14.0% 22154 slabinfo.proc_inode_cache.active_objs
26108 -13.9% 22489 slabinfo.proc_inode_cache.num_objs
34378 ± 3% -21.6% 26968 ± 6% slabinfo.radix_tree_node.active_objs
1248 ± 3% -21.8% 976.75 ± 6% slabinfo.radix_tree_node.active_slabs
34982 ± 3% -21.8% 27358 ± 6% slabinfo.radix_tree_node.num_objs
1248 ± 3% -21.8% 976.75 ± 6% slabinfo.radix_tree_node.num_slabs
938.00 ± 4% +13.9% 1068 ± 5% slabinfo.task_group.active_objs
938.00 ± 4% +13.9% 1068 ± 5% slabinfo.task_group.num_objs
1278 -10.8% 1139 slabinfo.xfs_buf_item.active_objs
1278 -10.8% 1139 slabinfo.xfs_buf_item.num_objs
1277 -10.7% 1140 slabinfo.xfs_da_state.active_objs
1277 -10.7% 1140 slabinfo.xfs_da_state.num_objs
3069 ± 3% -15.1% 2606 ± 4% slabinfo.xfs_ili.num_objs
2537 ± 2% -11.2% 2253 ± 3% slabinfo.xfs_inode.num_objs
3375 ± 18% -42.7% 1935 ± 7% slabinfo.xfs_log_ticket.active_objs
3375 ± 18% -42.7% 1935 ± 7% slabinfo.xfs_log_ticket.num_objs
76837 -77.9% 16951 ± 2% sched_debug.cfs_rq:/.exec_clock.avg
77254 -77.8% 17169 ± 2% sched_debug.cfs_rq:/.exec_clock.max
76371 -78.3% 16601 ± 2% sched_debug.cfs_rq:/.exec_clock.min
158.47 ± 11% -27.0% 115.68 ± 7% sched_debug.cfs_rq:/.exec_clock.stddev
2948116 -82.7% 511186 ± 3% sched_debug.cfs_rq:/.min_vruntime.avg
3037745 -81.2% 570721 ± 3% sched_debug.cfs_rq:/.min_vruntime.max
2883559 -82.8% 494832 ± 3% sched_debug.cfs_rq:/.min_vruntime.min
30847 ± 19% -59.3% 12564 ± 6% sched_debug.cfs_rq:/.min_vruntime.stddev
0.81 -34.1% 0.53 ± 11% sched_debug.cfs_rq:/.nr_running.avg
0.23 +92.8% 0.45 sched_debug.cfs_rq:/.nr_running.stddev
0.96 ± 3% -47.2% 0.51 sched_debug.cfs_rq:/.nr_spread_over.avg
2.81 ± 34% -68.9% 0.88 ± 24% sched_debug.cfs_rq:/.nr_spread_over.max
0.75 -33.3% 0.50 sched_debug.cfs_rq:/.nr_spread_over.min
0.49 ± 17% -88.0% 0.06 ± 57% sched_debug.cfs_rq:/.nr_spread_over.stddev
17.13 ± 9% -31.6% 11.71 ± 9% sched_debug.cfs_rq:/.runnable_load_avg.avg
51.19 ± 12% -40.2% 30.62 ± 2% sched_debug.cfs_rq:/.runnable_load_avg.max
-61362 -78.6% -13132 sched_debug.cfs_rq:/.spread0.min
30836 ± 20% -59.3% 12544 ± 6% sched_debug.cfs_rq:/.spread0.stddev
914.38 ± 2% -22.1% 712.63 ± 2% sched_debug.cfs_rq:/.util_avg.avg
1901 ± 7% -20.5% 1512 ± 4% sched_debug.cfs_rq:/.util_avg.max
369.90 ± 6% -31.3% 253.99 ± 4% sched_debug.cfs_rq:/.util_avg.stddev
511341 ± 5% -17.1% 423760 ± 5% sched_debug.cpu.avg_idle.avg
148496 ± 17% -59.4% 60266 ± 31% sched_debug.cpu.avg_idle.min
117128 ± 2% -51.7% 56592 sched_debug.cpu.clock.avg
117149 ± 2% -51.7% 56613 sched_debug.cpu.clock.max
117103 ± 2% -51.7% 56570 sched_debug.cpu.clock.min
117128 ± 2% -51.7% 56592 sched_debug.cpu.clock_task.avg
117149 ± 2% -51.7% 56613 sched_debug.cpu.clock_task.max
117103 ± 2% -51.7% 56570 sched_debug.cpu.clock_task.min
17.48 ± 10% -31.7% 11.94 ± 4% sched_debug.cpu.cpu_load[0].avg
48.25 ± 15% -33.2% 32.25 ± 7% sched_debug.cpu.cpu_load[0].max
1.75 ± 67% -100.0% 0.00 sched_debug.cpu.cpu_load[0].min
3.75 ± 23% -93.3% 0.25 ±173% sched_debug.cpu.cpu_load[1].min
4.94 ± 9% -74.7% 1.25 ±128% sched_debug.cpu.cpu_load[2].min
18.25 ± 11% -14.3% 15.63 ± 10% sched_debug.cpu.cpu_load[4].avg
6.12 ± 8% -42.9% 3.50 ± 39% sched_debug.cpu.cpu_load[4].min
2664 ± 7% -42.7% 1527 ± 15% sched_debug.cpu.curr->pid.avg
5071 ± 10% -22.0% 3954 sched_debug.cpu.curr->pid.max
85741 -66.1% 29073 sched_debug.cpu.nr_load_updates.avg
90853 -62.3% 34267 ± 2% sched_debug.cpu.nr_load_updates.max
84665 -66.7% 28171 sched_debug.cpu.nr_load_updates.min
1.05 ± 15% -49.6% 0.53 ± 12% sched_debug.cpu.nr_running.avg
2.75 ± 23% -54.5% 1.25 ± 20% sched_debug.cpu.nr_running.max
31415 ± 5% -27.7% 22706 sched_debug.cpu.nr_switches.avg
40139 ± 5% -24.1% 30467 ± 2% sched_debug.cpu.nr_switches.max
28596 ± 6% -28.4% 20479 sched_debug.cpu.nr_switches.min
2571 ± 6% -15.6% 2169 ± 4% sched_debug.cpu.nr_switches.stddev
54.60 -33.1% 36.55 sched_debug.cpu.nr_uninterruptible.avg
308.62 ± 7% -78.0% 67.75 ± 6% sched_debug.cpu.nr_uninterruptible.max
-190.62 -102.5% 4.75 ± 67% sched_debug.cpu.nr_uninterruptible.min
118.54 ± 10% -87.5% 14.85 ± 11% sched_debug.cpu.nr_uninterruptible.stddev
29880 ± 5% -30.2% 20845 sched_debug.cpu.sched_count.avg
34021 ± 7% -33.4% 22650 sched_debug.cpu.sched_count.max
28141 ± 6% -29.8% 19763 sched_debug.cpu.sched_count.min
1287 ± 10% -49.8% 646.28 ± 5% sched_debug.cpu.sched_count.stddev
6708 ± 4% +51.9% 10190 sched_debug.cpu.sched_goidle.avg
8099 ± 3% +36.9% 11085 sched_debug.cpu.sched_goidle.max
6190 ± 4% +56.0% 9655 sched_debug.cpu.sched_goidle.min
450.38 ± 3% -28.6% 321.43 ± 5% sched_debug.cpu.sched_goidle.stddev
16726 ± 6% -37.5% 10448 sched_debug.cpu.ttwu_count.avg
21484 ± 5% -41.2% 12623 ± 6% sched_debug.cpu.ttwu_count.max
13123 ± 9% -24.3% 9937 sched_debug.cpu.ttwu_count.min
2232 ± 5% -72.4% 617.19 ± 8% sched_debug.cpu.ttwu_count.stddev
2892 ± 3% -91.6% 242.14 sched_debug.cpu.ttwu_local.avg
4724 ± 5% -83.4% 785.12 ± 11% sched_debug.cpu.ttwu_local.max
2305 ± 4% -92.2% 179.00 ± 5% sched_debug.cpu.ttwu_local.min
476.08 ± 10% -80.7% 91.88 ± 14% sched_debug.cpu.ttwu_local.stddev
117103 ± 2% -51.7% 56570 sched_debug.cpu_clk
117103 ± 2% -51.7% 56570 sched_debug.ktime
0.04 ± 6% +55.9% 0.07 ± 5% sched_debug.rt_rq:/.rt_time.avg
1.42 ± 6% +77.4% 2.53 ± 5% sched_debug.rt_rq:/.rt_time.max
0.00 ± 26% -100.0% 0.00 sched_debug.rt_rq:/.rt_time.min
0.22 ± 6% +76.6% 0.39 ± 5% sched_debug.rt_rq:/.rt_time.stddev
117509 ± 2% -51.5% 56978 sched_debug.sched_clk
88.63 -88.6 0.00 perf-profile.calltrace.cycles-pp.xfs_vn_update_time.file_update_time.xfs_file_aio_write_checks.xfs_file_buffered_aio_write.xfs_file_write_iter
88.81 -87.8 1.05 ± 7% perf-profile.calltrace.cycles-pp.file_update_time.xfs_file_aio_write_checks.xfs_file_buffered_aio_write.xfs_file_write_iter.__vfs_write
88.91 -87.0 1.94 ± 5% perf-profile.calltrace.cycles-pp.xfs_file_aio_write_checks.xfs_file_buffered_aio_write.xfs_file_write_iter.__vfs_write.vfs_write
85.42 -85.4 0.00 perf-profile.calltrace.cycles-pp.__xfs_trans_commit.xfs_vn_update_time.file_update_time.xfs_file_aio_write_checks.xfs_file_buffered_aio_write
84.97 -85.0 0.00 perf-profile.calltrace.cycles-pp.xfs_log_commit_cil.__xfs_trans_commit.xfs_vn_update_time.file_update_time.xfs_file_aio_write_checks
79.71 -79.7 0.00 perf-profile.calltrace.cycles-pp._raw_spin_lock.xfs_log_commit_cil.__xfs_trans_commit.xfs_vn_update_time.file_update_time
79.19 -79.2 0.00 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.xfs_log_commit_cil.__xfs_trans_commit.xfs_vn_update_time
92.00 -63.2 28.80 perf-profile.calltrace.cycles-pp.xfs_file_buffered_aio_write.xfs_file_write_iter.__vfs_write.vfs_write.sys_write
92.12 -62.4 29.71 perf-profile.calltrace.cycles-pp.xfs_file_write_iter.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath
92.19 -61.9 30.30 perf-profile.calltrace.cycles-pp.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath
92.54 -59.2 33.35 ± 2% perf-profile.calltrace.cycles-pp.vfs_write.sys_write.entry_SYSCALL_64_fastpath
92.61 -58.7 33.87 ± 2% perf-profile.calltrace.cycles-pp.sys_write.entry_SYSCALL_64_fastpath
96.53 -33.0 63.58 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_fastpath
0.68 ± 2% +4.5 5.21 ± 2% perf-profile.calltrace.cycles-pp.copy_page_to_iter.generic_file_read_iter.xfs_file_buffered_aio_read.xfs_file_read_iter.__vfs_read
0.53 ± 4% +4.9 5.48 perf-profile.calltrace.cycles-pp.truncate_inode_pages_range.evict.__dentry_kill.dput.__fput
0.53 ± 3% +5.0 5.50 ± 2% perf-profile.calltrace.cycles-pp.evict.__dentry_kill.dput.__fput.task_work_run
0.91 ± 3% +5.5 6.41 perf-profile.calltrace.cycles-pp.__dentry_kill.dput.__fput.task_work_run.exit_to_usermode_loop
0.92 ± 3% +5.5 6.44 perf-profile.calltrace.cycles-pp.dput.__fput.task_work_run.exit_to_usermode_loop.syscall_return_slowpath
1.10 ± 2% +5.6 6.65 perf-profile.calltrace.cycles-pp.__fput.task_work_run.exit_to_usermode_loop.syscall_return_slowpath.entry_SYSCALL_64_fastpath
1.10 ± 2% +5.6 6.67 perf-profile.calltrace.cycles-pp.task_work_run.exit_to_usermode_loop.syscall_return_slowpath.entry_SYSCALL_64_fastpath
1.10 ± 2% +5.6 6.67 perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.syscall_return_slowpath.entry_SYSCALL_64_fastpath
1.10 ± 2% +5.6 6.68 perf-profile.calltrace.cycles-pp.syscall_return_slowpath.entry_SYSCALL_64_fastpath
0.72 ± 2% +5.8 6.52 perf-profile.calltrace.cycles-pp.pagecache_get_page.grab_cache_page_write_begin.iomap_write_begin.iomap_write_actor.iomap_apply
0.76 ± 3% +6.1 6.84 ± 2% perf-profile.calltrace.cycles-pp.grab_cache_page_write_begin.iomap_write_begin.iomap_write_actor.iomap_apply.iomap_file_buffered_write
1.27 ± 2% +10.3 11.54 perf-profile.calltrace.cycles-pp.iomap_write_begin.iomap_write_actor.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_aio_write
1.57 ± 2% +11.2 12.74 perf-profile.calltrace.cycles-pp.generic_file_read_iter.xfs_file_buffered_aio_read.xfs_file_read_iter.__vfs_read.vfs_read
1.70 ± 2% +12.2 13.94 perf-profile.calltrace.cycles-pp.xfs_file_buffered_aio_read.xfs_file_read_iter.__vfs_read.vfs_read.sys_read
1.81 ± 2% +13.0 14.82 perf-profile.calltrace.cycles-pp.xfs_file_read_iter.__vfs_read.vfs_read.sys_read.entry_SYSCALL_64_fastpath
1.86 ± 2% +13.5 15.38 perf-profile.calltrace.cycles-pp.__vfs_read.vfs_read.sys_read.entry_SYSCALL_64_fastpath
2.08 ± 2% +15.4 17.43 perf-profile.calltrace.cycles-pp.vfs_read.sys_read.entry_SYSCALL_64_fastpath
2.13 ± 2% +15.7 17.84 perf-profile.calltrace.cycles-pp.sys_read.entry_SYSCALL_64_fastpath
2.26 +18.7 20.92 perf-profile.calltrace.cycles-pp.iomap_write_actor.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_aio_write.xfs_file_write_iter
0.93 ± 10% +19.4 20.37 ± 3% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
0.98 ± 12% +20.2 21.15 ± 4% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
1.00 ± 11% +20.7 21.70 ± 4% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
1.00 ± 11% +20.7 21.70 ± 4% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
1.00 ± 11% +20.7 21.70 ± 4% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
1.02 ± 12% +21.1 22.14 ± 4% perf-profile.calltrace.cycles-pp.secondary_startup_64
2.90 +22.2 25.11 perf-profile.calltrace.cycles-pp.iomap_apply.iomap_file_buffered_write.xfs_file_buffered_aio_write.xfs_file_write_iter.__vfs_write
2.92 +22.3 25.27 perf-profile.calltrace.cycles-pp.iomap_file_buffered_write.xfs_file_buffered_aio_write.xfs_file_write_iter.__vfs_write.vfs_write
88.81 -88.3 0.54 ± 4% perf-profile.children.cycles-pp.xfs_vn_update_time
88.81 -87.7 1.10 ± 7% perf-profile.children.cycles-pp.file_update_time
88.92 -86.8 2.10 ± 5% perf-profile.children.cycles-pp.xfs_file_aio_write_checks
86.30 -84.9 1.37 ± 3% perf-profile.children.cycles-pp.__xfs_trans_commit
85.86 -84.5 1.33 ± 3% perf-profile.children.cycles-pp.xfs_log_commit_cil
81.64 -80.5 1.10 ± 6% perf-profile.children.cycles-pp._raw_spin_lock
81.03 -79.3 1.75 ± 4% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
92.01 -63.1 28.91 perf-profile.children.cycles-pp.xfs_file_buffered_aio_write
92.13 -62.4 29.77 perf-profile.children.cycles-pp.xfs_file_write_iter
92.23 -61.8 30.42 perf-profile.children.cycles-pp.__vfs_write
92.58 -59.1 33.49 ± 2% perf-profile.children.cycles-pp.vfs_write
92.65 -58.6 34.02 ± 2% perf-profile.children.cycles-pp.sys_write
96.57 -32.9 63.68 perf-profile.children.cycles-pp.entry_SYSCALL_64_fastpath
0.70 ± 3% +4.7 5.35 ± 2% perf-profile.children.cycles-pp.copy_page_to_iter
0.53 ± 4% +5.0 5.49 perf-profile.children.cycles-pp.truncate_inode_pages_range
0.53 ± 3% +5.0 5.50 ± 2% perf-profile.children.cycles-pp.evict
0.91 ± 3% +5.5 6.41 perf-profile.children.cycles-pp.__dentry_kill
0.92 ± 3% +5.5 6.46 perf-profile.children.cycles-pp.dput
1.10 ± 2% +5.6 6.66 perf-profile.children.cycles-pp.__fput
1.10 ± 2% +5.6 6.67 perf-profile.children.cycles-pp.task_work_run
1.10 ± 2% +5.6 6.68 perf-profile.children.cycles-pp.syscall_return_slowpath
1.10 ± 2% +5.6 6.68 perf-profile.children.cycles-pp.exit_to_usermode_loop
0.82 ± 5% +5.7 6.50 ± 2% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
0.76 ± 2% +6.1 6.89 ± 2% perf-profile.children.cycles-pp.grab_cache_page_write_begin
0.98 +8.0 8.99 perf-profile.children.cycles-pp.pagecache_get_page
1.29 ± 2% +10.4 11.72 perf-profile.children.cycles-pp.iomap_write_begin
1.59 ± 2% +11.3 12.94 perf-profile.children.cycles-pp.generic_file_read_iter
1.71 ± 2% +12.4 14.07 perf-profile.children.cycles-pp.xfs_file_buffered_aio_read
1.81 ± 2% +13.0 14.84 perf-profile.children.cycles-pp.xfs_file_read_iter
1.89 ± 2% +13.6 15.54 perf-profile.children.cycles-pp.__vfs_read
2.12 ± 2% +15.5 17.59 perf-profile.children.cycles-pp.vfs_read
2.16 ± 2% +15.9 18.03 perf-profile.children.cycles-pp.sys_read
2.29 +18.9 21.20 perf-profile.children.cycles-pp.iomap_write_actor
0.96 ± 11% +19.8 20.75 ± 3% perf-profile.children.cycles-pp.intel_idle
1.00 ± 12% +20.6 21.60 ± 4% perf-profile.children.cycles-pp.cpuidle_enter_state
1.00 ± 11% +20.7 21.70 ± 4% perf-profile.children.cycles-pp.start_secondary
1.02 ± 12% +21.1 22.14 ± 4% perf-profile.children.cycles-pp.secondary_startup_64
1.02 ± 12% +21.1 22.14 ± 4% perf-profile.children.cycles-pp.cpu_startup_entry
1.02 ± 12% +21.1 22.14 ± 4% perf-profile.children.cycles-pp.do_idle
2.92 +22.4 25.29 perf-profile.children.cycles-pp.iomap_apply
2.94 +22.5 25.40 perf-profile.children.cycles-pp.iomap_file_buffered_write
80.69 -78.9 1.75 ± 4% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
0.81 ± 5% +5.6 6.43 ± 2% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
0.96 ± 11% +19.8 20.74 ± 3% perf-profile.self.cycles-pp.intel_idle
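For readers skimming the comparison rows above: %change is the relative delta of the patched mean against the parent mean, and %stddev the relative spread within each sample set. A minimal sketch of how such a row could be computed (hypothetical `compare` helper, not LKP's actual tooling):

```python
import statistics

def compare(parent_samples, patched_samples):
    """Summarize two sample sets the way an LKP comparison row reads:
    mean, relative stddev (%stddev), and relative change (%change)."""
    p_mean = statistics.mean(parent_samples)
    n_mean = statistics.mean(patched_samples)

    def rel_stddev(xs, mean):
        # Population stddev as a percentage of the mean.
        return 100.0 * statistics.pstdev(xs) / mean

    return {
        "parent_mean": p_mean,
        "parent_stddev_pct": rel_stddev(parent_samples, p_mean),
        "change_pct": 100.0 * (n_mean - p_mean) / p_mean,
        "patched_mean": n_mean,
        "patched_stddev_pct": rel_stddev(patched_samples, n_mean),
    }

# Invented sample values shaped like the aim7.jobs-per-min row above.
row = compare([81000.0, 84000.0, 82800.0], [250001.0, 250001.0, 250001.0])
print(f"{row['parent_mean']:.0f}  {row['change_pct']:+.1f}%  {row['patched_mean']:.0f}")
```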
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Xiaolong
db0efd107c ("perf: Remove perf_event::group_entry"): WARNING: CPU: 0 PID: 723 at kernel/events/core.c:1923 perf_group_detach
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://git.kernel.org/pub/scm/linux/kernel/git/peterz/queue.git perf/testing
commit db0efd107c2c2f2b7e0df41e6042cda36fb3cee7
Author: Peter Zijlstra <peterz@infradead.org>
AuthorDate: Mon Nov 13 14:28:33 2017 +0100
Commit: Peter Zijlstra <peterz@infradead.org>
CommitDate: Fri Dec 22 15:29:03 2017 +0100
perf: Remove perf_event::group_entry
Now that all the grouping is done with RB trees, we no longer need
group_entry and can replace the whole thing with sibling_list.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
27d0da1a7a perf: Fix event schedule order
db0efd107c perf: Remove perf_event::group_entry
b738b7f7b2 perf: Fix event rotation
+-------------------------------------------------------+------------+------------+------------+
| | 27d0da1a7a | db0efd107c | b738b7f7b2 |
+-------------------------------------------------------+------------+------------+------------+
| boot_successes | 337 | 159 | 115 |
| boot_failures | 0 | 9 | 8 |
| WARNING:at_kernel/events/core.c:#perf_group_detach | 0 | 9 | 8 |
| EIP:perf_group_detach | 0 | 9 | 8 |
| BUG:unable_to_handle_kernel | 0 | 9 | 8 |
| Oops:#[##] | 0 | 9 | 8 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 9 | 8 |
| WARNING:possible_circular_locking_dependency_detected | 0 | 8 | 8 |
+-------------------------------------------------------+------------+------------+------------+
run-parts: /etc/kernel-tests/99-rmmod exited with code 123
wfg: skip syslogd
Deconfiguring network interfaces... done.
Sending all processes the TERM signal...
[ 145.554223] watchdog: watchdog1: watchdog did not stop!
[ 145.563814] WARNING: CPU: 0 PID: 723 at kernel/events/core.c:1923 perf_group_detach+0xba/0xee
[ 145.563865] CPU: 0 PID: 723 Comm: trinity-c1 Not tainted 4.15.0-rc4-00180-gdb0efd1 #93
[ 145.563869] EIP: perf_group_detach+0xba/0xee
[ 145.563870] EFLAGS: 00010002 CPU: 0
[ 145.563872] EAX: 502f3f00 EBX: 4f9e3c00 ECX: 00000001 EDX: ffffffff
[ 145.563874] ESI: 4f966c00 EDI: 6b6b6b63 EBP: 50367eac ESP: 50367e9c
[ 145.563876] DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068
[ 145.563878] CR0: 80050033 CR2: 37ef804b CR3: 10136b60 CR4: 000406b0
[ 145.563883] DR0: 36d2d000 DR1: 00000000 DR2: 00000000 DR3: 00000000
[ 145.563885] DR6: ffff0ff0 DR7: 00000600
[ 145.563886] Call Trace:
[ 145.563890] ? perf_remove_from_context+0x57/0x63
[ 145.563893] ? perf_event_release_kernel+0xbc/0x1b1
[ 145.563896] ? locks_remove_file+0xdd/0xf6
[ 145.563898] ? perf_release+0xe/0x12
[ 145.563901] ? __fput+0xcd/0x142
[ 145.563903] ? ____fput+0x8/0xa
[ 145.563906] ? task_work_run+0x5c/0x7b
[ 145.563909] ? do_exit+0x349/0x7aa
[ 145.563913] ? __wake_up_common_lock+0xa4/0xa4
[ 145.563916] ? do_group_exit+0x88/0x88
[ 145.563918] ? SyS_exit_group+0x11/0x11
[ 145.563921] ? do_int80_syscall_32+0x4a/0x5c
[ 145.563924] ? entry_INT80_32+0x36/0x36
[ 145.563926] Code: 00 f6 86 b8 00 00 00 04 8d 82 94 00 00 00 74 06 8d 82 88 00 00 00 89 f2 e8 cd c8 ff ff 8b 83 a8 01 00 00 39 86 a8 01 00 00 74 02 <0f> ff 8b 47 08 89 fe 8d 78 f8 eb 99 8b 43 48 e8 f6 ba ff ff 8b
[ 145.563994] ---[ end trace 50602b477f3879a0 ]---
[ 145.563999] BUG: unable to handle kernel paging request at 6b6b6b6b
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 813162eb937e1d25d70b14315e671633712e887b 1291a0d5049dbc06baaaf66a9ff3f53db493b19b --
git bisect good 747e711fd57936d417834e167f3575ca2701d9c0 # 20:57 G 13 0 0 0 Merge 'linux-review/Corentin-Labbe/ia64-agp-Replace-empty-define-with-do-while/20171223-064546' into devel-spot-201712231943
git bisect good 3e83676f259d8fa795283e0334702b3254dc3270 # 21:12 G 13 0 0 0 Merge 'ath6kl/master-pending' into devel-spot-201712231943
git bisect bad 57ada8ca58de93c5cb1c03f9164db074cadfe8c1 # 21:43 B 10 3 0 1 Merge 'peterz-queue/master' into devel-spot-201712231943
git bisect good 7ab67c2de1010a91cefc4b91c7d902bb1eb89846 # 22:07 G 27 0 0 0 Merge 'vkoul-soundwire/soundwire-next' into devel-spot-201712231943
git bisect good 4c35117b372df259407a8443289050ee90939209 # 22:18 G 27 0 0 0 Merge 'gfs2/for-next' into devel-spot-201712231943
git bisect good c2163a2f50a433d621f46053ad7248abb5bf3e84 # 22:40 G 28 0 0 0 Merge 'pinctrl/devel' into devel-spot-201712231943
git bisect good 87f92629e5e7d5e72559a44a7abc242bfa10bc16 # 22:52 G 26 0 0 0 Merge branch 'x86/apic'
git bisect good f71ab918a5129827f8d9025270a4224f0ec3ccbe # 23:08 G 28 0 0 0 Merge branch 'sched/core'
git bisect good 445dd43469cc389d1ec84b6c6f12160896cc5e2e # 23:19 G 28 0 0 0 Merge branch 'x86/core'
git bisect bad 60a65ce66fce41e2258e0e16b3391d386177b765 # 23:29 B 8 1 0 0 perf: Optimize ctx_sched_out()
git bisect good ed03eedee614316f51385d6270227cd600083d3d # 23:48 G 81 0 0 1 Merge branch 'perf/core'
git bisect good bb1983e7522b218e8d9ef2235daa85ac3f9510cc # 00:03 G 80 0 0 0 perf: Cleanup the rb-tree code
git bisect bad db0efd107c2c2f2b7e0df41e6042cda36fb3cee7 # 00:14 B 17 1 0 0 perf: Remove perf_event::group_entry
git bisect good 27d0da1a7a013bf17957a781daa7826e9a5773fa # 01:00 G 107 0 0 0 perf: Fix event schedule order
# first bad commit: [db0efd107c2c2f2b7e0df41e6042cda36fb3cee7] perf: Remove perf_event::group_entry
git bisect good 27d0da1a7a013bf17957a781daa7826e9a5773fa # 01:55 G 322 0 0 0 perf: Fix event schedule order
# extra tests with debug options
git bisect bad db0efd107c2c2f2b7e0df41e6042cda36fb3cee7 # 02:12 B 24 3 0 0 perf: Remove perf_event::group_entry
# extra tests on HEAD of linux-devel/devel-spot-201712231943
git bisect bad 813162eb937e1d25d70b14315e671633712e887b # 02:13 B 10 2 0 4 0day head guard for 'devel-spot-201712231943'
# extra tests on tree/branch peterz-queue/perf/testing
git bisect bad b738b7f7b2a38368bd2ca508f3ce3cf68167a644 # 02:30 B 17 1 0 0 perf: Fix event rotation
# extra tests with first bad commit reverted
git bisect good 896a7741001ba5af861db95fb92faf4b0123e204 # 03:02 G 107 0 0 0 Revert "perf: Remove perf_event::group_entry"
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/lkp Intel Corporation
92a0f81d89 ("x86/cpu_entry_area: Move it out of the fixmap"): BUG: kernel hang in boot stage
by kernel test robot
Greetings,
The 0-day kernel testing robot got the dmesg below, and the first bad commit is
https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git WIP.x86/pti
commit 92a0f81d89571e3e8759366e050ee05cc545ef99
Author: Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Wed Dec 20 18:51:31 2017 +0100
Commit: Ingo Molnar <mingo@kernel.org>
CommitDate: Fri Dec 22 20:13:05 2017 +0100
x86/cpu_entry_area: Move it out of the fixmap
Put the cpu_entry_area into a separate P4D entry. The fixmap gets too big
and 0-day already hit a case where the fixmap PTEs were cleared by
cleanup_highmap().
Aside of that the fixmap API is a pain as it's all backwards.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
ed1bbc40a0 x86/cpu_entry_area: Move it to a separate unit
92a0f81d89 x86/cpu_entry_area: Move it out of the fixmap
679d0580c1 x86/ldt: Make the LDT mapping RO
3056af3db3 Merge branch 'WIP.x86/pti.base'
+-------------------------------+------------+------------+------------+------------+
| | ed1bbc40a0 | 92a0f81d89 | 679d0580c1 | 3056af3db3 |
+-------------------------------+------------+------------+------------+------------+
| boot_successes | 77 | 0 | 0 | 0 |
| boot_failures | 0 | 26 | 43 | 19 |
| BUG:kernel_hang_in_boot_stage | 0 | 26 | 43 | 19 |
+-------------------------------+------------+------------+------------+------------+
[ 0.000000] Inode-cache hash table entries: 32768 (order: 5, 131072 bytes)
[ 0.000000] BRK [0x07cb7000, 0x07cb7fff] PGTABLE
[ 0.000000] BRK [0x07cb8000, 0x07cb8fff] PGTABLE
[ 0.000000] BRK [0x07cb9000, 0x07cb9fff] PGTABLE
[ 0.000000] BRK [0x07cba000, 0x07cbafff] PGTABLE
BUG: kernel hang in boot stage
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 858ee49740dd2f7e85f1b45c2af708b6c08f0771 v4.14 --
git bisect good 7fbbd5cbebf118a9e09f5453f686656a167c3d1c # 23:18 G 11 0 0 0 x86/entry/64: Remove the SYSENTER stack canary
git bisect good 4fe2d8b11a370af286287a2661de9d4e6c9a145a # 23:27 G 11 0 0 0 x86/entry: Rename SYSENTER_stack to CPU_ENTRY_AREA_entry_stack
git bisect good dd95f1a4b5ca904c78e6a097091eb21436478abb # 23:39 G 11 0 0 0 x86/mm: Put MMU to hardware ASID translation in one place
git bisect bad 613e396bc0d4c7604fba23256644e78454c68cf6 # 23:47 B 0 2 16 0 init: Invoke init_espfix_bsp() from mm_init()
git bisect good ed1bbc40a0d10e0c5c74fe7bdc6298295cf40255 # 23:58 G 11 0 0 0 x86/cpu_entry_area: Move it to a separate unit
git bisect bad 92a0f81d89571e3e8759366e050ee05cc545ef99 # 00:07 B 0 11 36 11 x86/cpu_entry_area: Move it out of the fixmap
# first bad commit: [92a0f81d89571e3e8759366e050ee05cc545ef99] x86/cpu_entry_area: Move it out of the fixmap
git bisect good ed1bbc40a0d10e0c5c74fe7bdc6298295cf40255 # 00:19 G 31 0 0 0 x86/cpu_entry_area: Move it to a separate unit
# extra tests on HEAD of tip/master
git bisect bad 3056af3db33464f58e51ddcc9fd5552413e3a6f2 # 00:55 B 0 5 27 8 Merge branch 'WIP.x86/pti.base'
# extra tests on tree/branch tip/WIP.x86/pti
git bisect bad 679d0580c1655be350392a66a45cedc9f4c5e139 # 01:14 B 0 2 42 26 x86/ldt: Make the LDT mapping RO
# extra tests on tree/branch tip/master
git bisect bad 3056af3db33464f58e51ddcc9fd5552413e3a6f2 # 01:14 B 0 19 33 0 Merge branch 'WIP.x86/pti.base'
8604322546 ("x86/cpu_entry_area: Move it out of the fixmap"): BUG: unable to handle kernel paging request at fffffbfff0380000
by kernel test robot
Greetings,
The 0-day kernel testing robot got the dmesg below, and the first bad commit is
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit 8604322546c0dd30dac1baa8661dbec069184184
Author: Thomas Gleixner <tglx@linutronix.de>
AuthorDate: Wed Dec 20 18:51:31 2017 +0100
Commit: Ingo Molnar <mingo@kernel.org>
CommitDate: Thu Dec 21 13:43:26 2017 +0100
x86/cpu_entry_area: Move it out of the fixmap
Put the cpu_entry_area into a separate P4D entry. The fixmap gets too big
and 0-day already hit a case where the fixmap PTEs were cleared by
cleanup_highmap().
Aside of that the fixmap API is a pain as it's all backwards.
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
642aafce6a x86/cpu_entry_area: Move it to a separate unit
8604322546 x86/cpu_entry_area: Move it out of the fixmap
3514267557 Add linux-next specific files for 20171222
+------------------------------------------+------------+------------+---------------+
| | 642aafce6a | 8604322546 | next-20171222 |
+------------------------------------------+------------+------------+---------------+
| boot_successes | 35 | 0 | 0 |
| boot_failures | 0 | 17 | 12 |
| BUG:unable_to_handle_kernel | 0 | 17 | 12 |
| Oops:#[##] | 0 | 17 | 12 |
| RIP:memset_orig | 0 | 17 | 12 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 17 | 12 |
+------------------------------------------+------------+------------+---------------+
[ 0.000000] PID hash table entries: 2048 (order: 2, 16384 bytes)
[ 0.000000] Dentry cache hash table entries: 65536 (order: 7, 524288 bytes)
[ 0.000000] Inode-cache hash table entries: 32768 (order: 6, 262144 bytes)
[ 0.000000] Memory: 381488K/523712K available (6916K kernel code, 2708K rwdata, 2336K rodata, 952K init, 20480K bss, 142224K reserved, 0K cma-reserved)
[ 0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=1, Nodes=1
[ 0.010000] BUG: unable to handle kernel paging request at fffffbfff0380000
[ 0.010000] IP: memset_orig+0x33/0xb0
[ 0.010000] PGD 1e0fe067 P4D 1e0fe067 PUD 2254067 PMD 2253067 PTE 8000000002256161
[ 0.010000] Oops: 0003 [#1] PREEMPT KASAN
[ 0.010000] Modules linked in:
[ 0.010000] CPU: 0 PID: 0 Comm: swapper Not tainted 4.14.0-00142-g8604322 #1
[ 0.010000] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 0.010000] task: ffffffff81c26580 task.stack: ffffffff81c00000
[ 0.010000] RIP: 0010:memset_orig+0x33/0xb0
[ 0.010000] RSP: 0000:ffffffff81c07e88 EFLAGS: 00010016
[ 0.010000] RAX: 0000000000000000 RBX: 0000000000008000 RCX: 000000000000003f
[ 0.010000] RDX: 0000000000001000 RSI: 0000000000000000 RDI: fffffbfff0380000
[ 0.010000] RBP: ffffffff81c00000 R08: ffffed0003ffa201 R09: 0000000000000000
[ 0.010000] R10: fffffbfff0380000 R11: ffffed0003ffa200 R12: 000000001fd477ba
[ 0.010000] R13: ffffffff81c26600 R14: ffffffff81c26ac8 R15: ffffffff81c265a4
[ 0.010000] FS: 0000000000000000(0000) GS:ffffffff81c39000(0000) knlGS:0000000000000000
[ 0.010000] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 0.010000] CR2: fffffbfff0380000 CR3: 0000000001c21000 CR4: 00000000000006b0
[ 0.010000] Call Trace:
[ 0.010000] kasan_unpoison_shadow+0xf/0x30
[ 0.010000] init_idle+0x1e4/0x2a0
[ 0.010000] sched_init+0x2e0/0x323
[ 0.010000] start_kernel+0x30e/0x6f4
[ 0.010000] secondary_startup_64+0xa5/0xb0
[ 0.010000] Code: b8 01 01 01 01 01 01 01 01 48 0f af c1 41 89 f9 41 83 e1 07 75 70 48 89 d1 48 c1 e9 06 74 39 66 0f 1f 84 00 00 00 00 00 48 ff c9 <48> 89 07 48 89 47 08 48 89 47 10 48 89 47 18 48 89 47 20 48 89
[ 0.010000] RIP: memset_orig+0x33/0xb0 RSP: ffffffff81c07e88
[ 0.010000] CR2: fffffbfff0380000
[ 0.010000] ---[ end trace 08945838e05bf5b2 ]---
[ 0.010000] Kernel panic - not syncing: Fatal exception
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 5dd4e6b85900a55c19ff1945d407f9bbef8a1347 1291a0d5049dbc06baaaf66a9ff3f53db493b19b --
git bisect bad 7f868504444de3bb3624ba2e3a1dfed463c97ad3 # 21:41 B 0 3 18 0 Merge 'linux-review/tangzhongrui/kernel-groups-groups_search-can-be-boolean/20171217-190226' into devel-hourly-2017122212
git bisect bad cacdae3b65124e3a85e87a1fef3ce7c8efbe5097 # 22:02 B 0 4 18 0 Merge 'bbrezillon-0day/mtd/next' into devel-hourly-2017122212
git bisect bad 3c5f7b3c418284e7ba0b6973b4c3e61fc81e66d7 # 22:29 B 0 7 21 0 Merge 'linux-review/Shaohua-Li/net-reevalulate-autoflowlabel-setting-after-sysctl-setting/20171221-145438' into devel-hourly-2017122212
git bisect good 8217cb1688e3639ae5b8f9e49f2f10ab8b678703 # 22:50 G 11 0 0 0 Merge 'linux-review/Deepa-Dinamani/Make-input-drivers-y2038-safe/20171220-091320' into devel-hourly-2017122212
git bisect bad 14bd583ac20f8d774c33e62d6df25102ed413f1c # 23:01 B 0 11 25 0 Merge 'nvdimm/libnvdimm-pending' into devel-hourly-2017122212
git bisect bad f59550bb51fc3a58da6c730d89cdf28083e41514 # 23:12 B 0 11 27 2 Merge 'ipsec/testing' into devel-hourly-2017122212
git bisect bad 30fc120d0fe56840ede23796523d25d2a230147b # 23:20 B 0 4 18 0 Merge 'tip/master' into devel-hourly-2017122212
git bisect good 90c97a7bf42027541911f1f3fd0e234aad2bf50b # 23:35 G 11 0 0 0 Merge 'linux-review/Xin-Long/ip6_tunnel-get-the-min-mtu-properly-in-ip6_tnl_xmit/20171219-114525' into devel-hourly-2017122212
git bisect good 2630347489118a520fc58398baf663911a7af647 # 23:54 G 11 0 0 0 Merge 'arm-soc/tegra/memory' into devel-hourly-2017122212
git bisect bad 2f9f7adc76e23ed43e6326f1b88a0f591d656192 # 00:02 B 0 5 19 0 x86/ldt: Make the LDT mapping RO
git bisect good b57504a5d5d0d7ef3be6e472f9bc077a23ed03df # 00:20 G 11 0 0 0 x86/mm: Use __flush_tlb_one() for kernel memory
git bisect bad e759828040c73105526cecc862d24ba0a162cc5b # 00:30 B 0 10 24 0 x86/mm/pti: Force entry through trampoline when PTI active
git bisect bad 7ab91f5f418a67efcaa87f2b4974ba554d7022cf # 00:40 B 0 10 24 0 x86/cpufeatures: Add X86_BUG_CPU_INSECURE
git bisect good f2b7dbdb0a75aef4970bc4fb0ac4e55eb33b10af # 01:06 G 11 0 0 0 x86/mm: Put MMU to hardware ASID translation in one place
git bisect good 642aafce6a9a2606c895d9decea76adb32f37bfb # 01:16 G 10 0 0 0 x86/cpu_entry_area: Move it to a separate unit
git bisect bad 61637aa86272afaf21b0f1328ea35495e52110f5 # 01:25 B 0 10 24 0 init: Invoke init_espfix_bsp() from mm_init()
git bisect bad 8604322546c0dd30dac1baa8661dbec069184184 # 01:37 B 0 11 27 2 x86/cpu_entry_area: Move it out of the fixmap
# first bad commit: [8604322546c0dd30dac1baa8661dbec069184184] x86/cpu_entry_area: Move it out of the fixmap
git bisect good 642aafce6a9a2606c895d9decea76adb32f37bfb # 01:44 G 31 0 0 0 x86/cpu_entry_area: Move it to a separate unit
# extra tests with debug options
git bisect bad 8604322546c0dd30dac1baa8661dbec069184184 # 01:52 B 0 6 20 0 x86/cpu_entry_area: Move it out of the fixmap
# extra tests on HEAD of linux-devel/devel-hourly-2017122212
git bisect bad 5dd4e6b85900a55c19ff1945d407f9bbef8a1347 # 01:53 B 0 53 70 0 0day head guard for 'devel-hourly-2017122212'
# extra tests on tree/branch linux-next/master
git bisect bad 3514267557aabe5f0a616e82ffed7dc066f67ece # 02:01 B 0 8 22 0 Add linux-next specific files for 20171222
[lkp-robot] [radix] 904ac83c02: WARNING:at_kernel/sched/core.c:#migrate_disable
by kernel test robot
FYI, we noticed the following commit (built with gcc-6):
commit: 904ac83c02eabe4784027bcd4f2c8605356c45d7 ("radix-tree: use local locks")
https://git.kernel.org/cgit/linux/kernel/git/rt/linux-rt-devel.git linux-4.14.y-rt-rebase
in testcase: trinity
with following parameters:
runtime: 300s
test-description: Trinity is a linux system call fuzz tester.
test-url: http://codemonkey.org.uk/projects/trinity/
on test machine: qemu-system-i386 -enable-kvm -smp 2 -m 320M
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
+-------------------------------------------------+------------+------------+
| | d939f7976c | 904ac83c02 |
+-------------------------------------------------+------------+------------+
| boot_successes | 0 | 0 |
| boot_failures | 78 | 88 |
| INFO:trying_to_register_non-static_key | 78 | 88 |
| WARNING:at_kernel/sched/core.c:#migrate_disable | 0 | 88 |
| EIP:migrate_disable | 0 | 88 |
| WARNING:at_kernel/sched/core.c:#migrate_enable | 0 | 88 |
| EIP:migrate_enable | 0 | 88 |
| kernel_BUG_at_kernel/sched/core.c | 0 | 30 |
| invalid_opcode:#[##] | 0 | 30 |
| EIP:select_fallback_rq | 0 | 30 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 30 |
+-------------------------------------------------+------------+------------+
[ 1.094095] WARNING: CPU: 0 PID: 1 at kernel/sched/core.c:6814 migrate_disable+0x38/0xad
[ 1.095143] Modules linked in:
[ 1.095469] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.14.6-00171-g904ac83 #1
[ 1.096180] task: 40030000 task.stack: 40038000
[ 1.096663] EIP: migrate_disable+0x38/0xad
[ 1.097088] EFLAGS: 00210286 CPU: 0
[ 1.097182] EAX: 00200246 EBX: 40030000 ECX: 00000000 EDX: 00000100
[ 1.097182] ESI: 40095000 EDI: 51c30150 EBP: 40039cd0 ESP: 40039ccc
[ 1.100046] DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068
[ 1.100046] CR0: 80050033 CR2: 00000000 CR3: 0be48000 CR4: 00000690
[ 1.100046] DR0: 00000000 DR1: 00000000 DR2: 00000000 DR3: 00000000
[ 1.100046] DR6: fffe0ff0 DR7: 00000400
[ 1.100046] Call Trace:
[ 1.100046] ? mem_cgroup_commit_charge+0xa3/0x154
[ 1.100046] ? __inc_node_state+0x88/0x9e
[ 1.100046] ? __add_to_page_cache_locked+0x11f/0x183
[ 1.100046] ? add_to_page_cache_lru+0x55/0xe5
[ 1.100046] ? pagecache_get_page+0x1ca/0x21b
[ 1.100046] ? grab_cache_page_write_begin+0x1d/0x30
[ 1.100046] ? simple_write_begin+0x2d/0x10a
[ 1.100046] ? pagecache_write_begin+0x1e/0x20
[ 1.100046] ? __page_symlink+0x4f/0xd8
[ 1.100046] ? page_symlink+0x1c/0x21
[ 1.100046] ? ramfs_symlink+0x44/0x8b
[ 1.100046] ? vfs_symlink+0x99/0xc2
[ 1.100046] ? SyS_symlinkat+0x5c/0xa0
[ 1.100046] ? SyS_symlink+0x10/0x12
[ 1.100046] ? do_symlink+0x41/0x80
[ 1.100046] ? write_buffer+0x1d/0x2c
[ 1.100046] ? flush_buffer+0x21/0x6f
[ 1.100046] ? __gunzip+0x1f1/0x276
[ 1.100046] ? bunzip2+0x2ea/0x2ea
[ 1.100046] ? __gunzip+0x276/0x276
[ 1.100046] ? gunzip+0x16/0x18
[ 1.100046] ? write_buffer+0x2c/0x2c
[ 1.100046] ? initrd_load+0x3b/0x3b
[ 1.100046] ? unpack_to_rootfs+0x16c/0x26f
[ 1.100046] ? write_buffer+0x2c/0x2c
[ 1.100046] ? initrd_load+0x3b/0x3b
[ 1.100046] ? unpack_to_rootfs+0x26f/0x26f
[ 1.100046] ? populate_rootfs+0x47/0x94
[ 1.100046] ? do_one_initcall+0x8b/0x137
[ 1.100046] ? do_early_param+0x73/0x73
[ 1.100046] ? kernel_init_freeable+0xed/0x166
[ 1.100046] ? rest_init+0x1e8/0x1e8
[ 1.100046] ? kernel_init+0x8/0xcb
[ 1.100046] ? ret_from_fork+0x19/0x30
[ 1.100046] Code: e0 39 e3 4b a9 ff ff ff 7f 74 0b ff 83 20 03 00 00 e9 87 00 00 00 9c 58 8d 74 26 00 0f ba e0 09 73 e9 83 bb 20 03 00 00 00 74 02 <0f> ff 8b 83 18 03 00 00 85 c0 74 09 40 89 83 18 03 00 00 eb 5d
[ 1.100046] ---[ end trace 1c7663c27ebcbbae ]---
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Xiaolong
[lkp-robot] [drm/i915] 0da715ee60: [No primary change] phoronix-test-suite.time.voluntary_context_switches -17.5%
by kernel test robot
Greetings,
There is no primary KPI change in this test; the data below was collected by multiple monitors running in the background and is provided for your information only.
commit: 0da715ee60774401bea00dc71fca6fd1096c734a ("drm/i915: Disable semaphores on Sandybridge")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
in testcase: phoronix-test-suite
on test machine: 4 threads Intel(R) Core(TM) i5-2500K CPU @ 3.30GHz with 6G memory
with following parameters:
need_x: true
test: nexuiz-1.6.1
cpufreq_governor: performance
test-description: The Phoronix Test Suite is the most comprehensive testing and benchmarking platform available that provides an extensible framework for which new tests can be easily added.
test-url: http://www.phoronix-test-suite.com/
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/need_x/rootfs/tbox_group/test/testcase:
gcc-7/performance/x86_64-rhel-7.2/true/debian-full-x86_64/snb-drag/nexuiz-1.6.1/phoronix-test-suite
commit:
79e6770cb1 ("drm/i915: Remove obsolete ringbuffer emission for gen8+")
0da715ee60 ("drm/i915: Disable semaphores on Sandybridge")
79e6770cb1f5e32e 0da715ee60774401bea00dc71f
---------------- --------------------------
%stddev %change %stddev
\ | \
165205 -17.5% 136339 phoronix-test-suite.time.voluntary_context_switches
2.14 +0.0 2.17 perf-stat.branch-miss-rate%
1.746e+09 +1.7% 1.777e+09 perf-stat.branch-misses
4.121e+09 +2.1% 4.209e+09 perf-stat.cache-references
16349 ± 6% -19.2% 13216 ± 16% sched_debug.cfs_rq:/.exec_clock.stddev
24910 ± 5% -14.9% 21207 ± 13% sched_debug.cfs_rq:/.min_vruntime.stddev
-42645 -16.0% -35801 sched_debug.cfs_rq:/.spread0.avg
-58701 -13.0% -51074 sched_debug.cfs_rq:/.spread0.min
24910 ± 5% -14.9% 21207 ± 13% sched_debug.cfs_rq:/.spread0.stddev
181392 ± 14% +15.2% 208922 ± 14% sched_debug.cpu.avg_idle.stddev
24343 ± 6% +14.5% 27875 ± 6% sched_debug.cpu.nr_load_updates.min
35518 ± 6% -16.3% 29712 ± 5% sched_debug.cpu.nr_load_updates.stddev
38836 ± 4% +14.8% 44596 ± 5% sched_debug.cpu.nr_switches.min
36.76 ± 33% -47.3% 19.37 ± 33% sched_debug.cpu.nr_uninterruptible.stddev
28261 ± 4% +21.0% 34188 ± 7% sched_debug.cpu.sched_count.min
13542 ± 5% +20.2% 16278 ± 7% sched_debug.cpu.sched_goidle.min
24.76 ± 11% -1.1 23.62 ± 16% perf-profile.calltrace.cycles.entry_SYSCALL_64_fastpath
32.31 ± 17% -1.1 31.20 ± 26% perf-profile.calltrace.cycles.cpuidle_enter.call_cpuidle.do_idle.cpu_startup_entry.start_secondary
32.42 ± 17% -1.1 31.34 ± 25% perf-profile.calltrace.cycles.call_cpuidle.do_idle.cpu_startup_entry.start_secondary.verify_cpu
10.00 ± 14% -1.0 8.96 ± 16% perf-profile.calltrace.cycles.__irqentry_text_start.cpuidle_enter_state.cpuidle_enter.call_cpuidle.do_idle
9.84 ± 14% -1.0 8.83 ± 16% perf-profile.calltrace.cycles.smp_apic_timer_interrupt.__irqentry_text_start.cpuidle_enter_state.cpuidle_enter.call_cpuidle
16.45 ± 11% -1.0 15.48 ± 14% perf-profile.calltrace.cycles.sys_ioctl.entry_SYSCALL_64_fastpath
15.79 ± 11% -1.0 14.84 ± 14% perf-profile.calltrace.cycles.do_vfs_ioctl.sys_ioctl.entry_SYSCALL_64_fastpath
15.64 ± 11% -0.9 14.72 ± 14% perf-profile.calltrace.cycles.drm_ioctl.do_vfs_ioctl.sys_ioctl.entry_SYSCALL_64_fastpath
13.67 ± 12% -0.8 12.92 ± 17% perf-profile.calltrace.cycles.i915_gem_execbuffer2.drm_ioctl_kernel.drm_ioctl.do_vfs_ioctl.sys_ioctl
13.14 ± 12% -0.7 12.44 ± 16% perf-profile.calltrace.cycles.i915_gem_do_execbuffer.i915_gem_execbuffer2.drm_ioctl_kernel.drm_ioctl.do_vfs_ioctl
14.63 ± 10% -0.7 13.97 ± 14% perf-profile.calltrace.cycles.drm_ioctl_kernel.drm_ioctl.do_vfs_ioctl.sys_ioctl.entry_SYSCALL_64_fastpath
35.57 ± 15% -0.2 35.34 ± 20% perf-profile.calltrace.cycles.do_idle.cpu_startup_entry.start_secondary.verify_cpu
36.18 ± 15% -0.2 36.00 ± 20% perf-profile.calltrace.cycles.cpu_startup_entry.start_secondary.verify_cpu
36.18 ± 15% -0.2 36.01 ± 20% perf-profile.calltrace.cycles.start_secondary.verify_cpu
12.55 ± 11% -0.0 12.53 ± 14% perf-profile.calltrace.cycles.intel_idle.cpuidle_enter_state.cpuidle_enter.call_cpuidle.do_idle
49.83 ± 9% +2.2 52.01 ± 14% perf-profile.calltrace.cycles.cpuidle_enter_state.cpuidle_enter.call_cpuidle.do_idle.cpu_startup_entry
56.77 ± 6% +3.1 59.91 ± 10% perf-profile.calltrace.cycles.verify_cpu
18.72 ± 26% +3.2 21.95 ± 22% perf-profile.calltrace.cycles.cpuidle_enter.call_cpuidle.do_idle.cpu_startup_entry.rest_init
18.74 ± 26% +3.2 21.97 ± 22% perf-profile.calltrace.cycles.call_cpuidle.do_idle.cpu_startup_entry.rest_init.start_kernel
20.59 ± 24% +3.3 23.89 ± 20% perf-profile.calltrace.cycles.x86_64_start_kernel.verify_cpu
20.59 ± 24% +3.3 23.89 ± 20% perf-profile.calltrace.cycles.x86_64_start_reservations.x86_64_start_kernel.verify_cpu
20.59 ± 24% +3.3 23.89 ± 20% perf-profile.calltrace.cycles.start_kernel.x86_64_start_reservations.x86_64_start_kernel.verify_cpu
20.59 ± 24% +3.3 23.89 ± 20% perf-profile.calltrace.cycles.rest_init.start_kernel.x86_64_start_reservations.x86_64_start_kernel.verify_cpu
20.58 ± 24% +3.3 23.89 ± 20% perf-profile.calltrace.cycles.cpu_startup_entry.rest_init.start_kernel.x86_64_start_reservations.x86_64_start_kernel
20.46 ± 24% +3.3 23.79 ± 20% perf-profile.calltrace.cycles.do_idle.cpu_startup_entry.rest_init.start_kernel.x86_64_start_reservations
20.98 ± 40% +3.8 24.73 ± 46% perf-profile.calltrace.cycles.poll_idle.cpuidle_enter_state.cpuidle_enter.call_cpuidle.do_idle
26.76 ± 10% -1.4 25.34 ± 16% perf-profile.children.cycles.entry_SYSCALL_64_fastpath
19.26 ± 12% -1.3 17.96 ± 15% perf-profile.children.cycles.do_vfs_ioctl
19.99 ± 12% -1.3 18.70 ± 14% perf-profile.children.cycles.sys_ioctl
19.11 ± 12% -1.3 17.83 ± 15% perf-profile.children.cycles.drm_ioctl
12.47 ± 12% -1.2 11.28 ± 18% perf-profile.children.cycles.__irqentry_text_start
11.48 ± 12% -1.2 10.30 ± 18% perf-profile.children.cycles.smp_apic_timer_interrupt
18.03 ± 11% -1.0 16.99 ± 15% perf-profile.children.cycles.drm_ioctl_kernel
13.68 ± 12% -0.7 12.97 ± 17% perf-profile.children.cycles.i915_gem_execbuffer2
13.16 ± 12% -0.7 12.45 ± 16% perf-profile.children.cycles.i915_gem_do_execbuffer
6.51 ± 17% -0.7 5.86 ± 20% perf-profile.children.cycles.do_syscall_64
6.52 ± 17% -0.7 5.87 ± 20% perf-profile.children.cycles.return_from_SYSCALL_64
5.43 ± 14% -0.5 4.92 ± 18% perf-profile.children.cycles.hrtimer_interrupt
36.18 ± 15% -0.2 36.01 ± 20% perf-profile.children.cycles.start_secondary
12.57 ± 11% -0.0 12.54 ± 14% perf-profile.children.cycles.intel_idle
51.07 ± 8% +2.1 53.18 ± 13% perf-profile.children.cycles.cpuidle_enter
51.23 ± 8% +2.2 53.40 ± 13% perf-profile.children.cycles.call_cpuidle
49.85 ± 9% +2.2 52.04 ± 14% perf-profile.children.cycles.cpuidle_enter_state
56.04 ± 6% +3.1 59.15 ± 10% perf-profile.children.cycles.do_idle
56.76 ± 6% +3.1 59.89 ± 10% perf-profile.children.cycles.cpu_startup_entry
56.77 ± 6% +3.1 59.91 ± 10% perf-profile.children.cycles.verify_cpu
20.59 ± 24% +3.3 23.89 ± 20% perf-profile.children.cycles.x86_64_start_kernel
20.59 ± 24% +3.3 23.89 ± 20% perf-profile.children.cycles.x86_64_start_reservations
20.59 ± 24% +3.3 23.89 ± 20% perf-profile.children.cycles.start_kernel
20.59 ± 24% +3.3 23.89 ± 20% perf-profile.children.cycles.rest_init
20.98 ± 40% +3.8 24.74 ± 46% perf-profile.children.cycles.poll_idle
12.57 ± 11% -0.0 12.54 ± 14% perf-profile.self.cycles.intel_idle
20.89 ± 40% +3.7 24.61 ± 46% perf-profile.self.cycles.poll_idle
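For reference, the headline %change column is simply (new − old) / old computed over the two per-commit means. A quick recomputation of the voluntary_context_switches row (values taken from the table above):

```shell
#!/bin/sh
# Recompute the -17.5% delta from the two means reported above:
# 79e6770cb1 (parent commit) vs 0da715ee60 (commit under test).
old=165205    # voluntary_context_switches, parent commit
new=136339    # voluntary_context_switches, tested commit
pct=$(awk -v o="$old" -v n="$new" 'BEGIN { printf "%.1f", (n - o) / o * 100 }')
echo "${pct}%"    # -17.5%
```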
phoronix-test-suite.time.voluntary_context_switches
170000 +-+----------------------------------------------------------------+
| ++ + ++ ++ ++ + |
165000 +-+ +++ +++++++ +++ +++ +++++++++ +++++++++++++ |
160000 +-+ |
| |
155000 +-+ |
| |
150000 +-+ |
| |
145000 +-+ |
140000 +-+ |
| |
135000 OOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOO
| |
130000 +-+----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Xiaolong