Greetings,
FYI, we noticed a -15.5% regression of vm-scalability.throughput due to commit:
commit: 85766f621c492a9aa7904050ad2f80893ab2a8fd ("shmem: Convert shmem_add_to_page_cache to XArray")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
in testcase: vm-scalability
on test machine: 4 threads Intel(R) Core(TM) i3-3220 CPU @ 3.30GHz with 8G memory
with following parameters:
runtime: 300s
size: 16G
test: shm-xread-rand
cpufreq_governor: performance
test-description: The motivation behind this suite is to exercise functions and regions of the mm/ subsystem of the Linux kernel which are of interest to us.
test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/
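For orientation, below is a minimal userspace sketch of the access pattern a
shm-xread-rand style case drives through shmem: map a shared-memory object and
read it at random page offsets, so every cold page is faulted into the shmem
page cache (the path the blamed commit converts to the XArray). The 1G size,
the object name, and the use of shm_open() are illustrative assumptions here,
not the actual vm-scalability script:

/* xread_rand_demo.c: approximate the shm-xread-rand access pattern.
 * Build: cc -O2 -o xread_rand_demo xread_rand_demo.c -lrt
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t size = 1UL << 30;	/* 1G here; the job uses size: 16G */
	size_t pages = size / 4096;

	/* POSIX shm objects are backed by shmem/tmpfs in the kernel. */
	int fd = shm_open("/xread_rand_demo", O_CREAT | O_RDWR, 0600);
	if (fd < 0 || ftruncate(fd, size) < 0) {
		perror("shm setup");
		return 1;
	}
	unsigned char *p = mmap(NULL, size, PROT_READ, MAP_SHARED, fd, 0);
	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	/* Random-order reads: the first touch of each page faults it in
	 * through shmem's add-to-page-cache path. */
	unsigned long sum = 0;
	srandom(1);
	for (size_t i = 0; i < pages; i++)
		sum += p[(random() % pages) * 4096UL];
	printf("checksum %lu\n", sum);
	munmap(p, size);
	shm_unlink("/xread_rand_demo");
	return 0;
}

This is only meant to show where shmem_add_to_page_cache() sits on the hot
path for this job; for an actual reproduction use the lkp job.yaml steps
below.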
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase:
gcc-7/performance/x86_64-rhel-7.2/debian-x86_64-2018-04-03.cgz/300s/16G/lkp-ivb-d02/shm-xread-rand/vm-scalability
commit:
85b67aaaa5 ("shmem: Convert find_swap_entry to XArray")
85766f621c ("shmem: Convert shmem_add_to_page_cache to XArray")
85b67aaaa59cd76a 85766f621c492a9aa7904050ad
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \
190175 -15.5% 160733 ± 21% vm-scalability.throughput
47700 +99.1% 94958 ± 3% vm-scalability.median
0.03 ± 15% +38.6% 0.05 ± 81% vm-scalability.stddev
96.91 -47.6% 50.75 ± 5% vm-scalability.time.elapsed_time
96.91 -47.6% 50.75 ± 5% vm-scalability.time.elapsed_time.max
878016 ± 2% -98.3% 14720 ± 53% vm-scalability.time.involuntary_context_switches
4024745 -0.0% 4024725 vm-scalability.time.maximum_resident_set_size
1497244 -14.3% 1283635 ± 5% vm-scalability.time.minor_page_faults
4096 +0.0% 4096 vm-scalability.time.page_size
375.00 -54.1% 172.25 ± 22% vm-scalability.time.percent_of_cpu_this_job_got
11.18 ± 3% -53.6% 5.19 ± 5% vm-scalability.time.system_time
352.62 -76.3% 83.60 ± 27% vm-scalability.time.user_time
18105273 -56.2% 7921056 ± 24% vm-scalability.workload
2677 ± 4% -42.1% 1549 ± 9% interrupts.CAL:Function_call_interrupts
74956 ± 11% -49.1% 38140 ± 19% softirqs.RCU
10283 ± 4% +166.8% 27439 ± 11% softirqs.SCHED
196048 -43.5% 110676 ± 6% softirqs.TIMER
79.95 -0.2% 79.79 ± 2% boot-time.boot
70.20 ± 2% -0.0% 70.17 ± 2% boot-time.dhcp
299.76 ± 2% -0.2% 299.06 ± 2% boot-time.idle
72.80 ± 2% -0.2% 72.62 ± 2% boot-time.kernel_boot
2.80 ± 6% +46.8 49.56 ± 20% mpstat.cpu.idle%
0.03 ± 34% -0.0 0.02 ± 28% mpstat.cpu.soft%
4.65 ± 4% +0.2 4.83 ± 34% mpstat.cpu.sys%
92.52 -46.9 45.59 ± 20% mpstat.cpu.usr%
1029 -3.3% 995.00 vmstat.memory.buff
4825193 -3.4% 4663456 vmstat.memory.cache
2875885 +6.4% 3059946 vmstat.memory.free
0.00 -100.0% 0.00 vmstat.procs.b
4.00 -50.0% 2.00 ± 35% vmstat.procs.r
20841 +49.8% 31217 vmstat.system.cs
13121 +37.1% 17983 vmstat.system.in
43219 ± 10% +755.6% 369768 ± 61% cpuidle.C1.time
1375 ± 15% +1197.9% 17855 ± 68% cpuidle.C1.usage
1256903 ± 68% +3335.5% 43180596 ± 3% cpuidle.C1E.time
21308 ± 69% +3424.9% 751082 ± 4% cpuidle.C1E.usage
39665 ± 25% +351.9% 179230 ± 40% cpuidle.C3.time
160.75 ± 18% +311.8% 662.00 ± 32% cpuidle.C3.usage
7135782 ± 18% +666.0% 54661249 ± 27% cpuidle.C6.time
7706 ± 17% +668.0% 59186 ± 23% cpuidle.C6.usage
301.25 ± 39% +1868.8% 5931 ± 37% cpuidle.POLL.time
29.00 ± 44% +3090.5% 925.25 ± 49% cpuidle.POLL.usage
2510 ± 20% +2.6% 2576 ± 16% slabinfo.anon_vma.active_objs
2607 ± 13% +0.7% 2627 ± 15% slabinfo.anon_vma.num_objs
1193 ± 10% +2.9% 1228 ± 9% slabinfo.cred_jar.active_objs
1193 ± 10% +2.9% 1228 ± 9% slabinfo.cred_jar.num_objs
107.75 ± 13% +107.2% 223.25 ± 54% slabinfo.dmaengine-unmap-16.active_objs
115.25 ± 19% +111.9% 244.25 ± 61% slabinfo.dmaengine-unmap-16.num_objs
7200 ± 12% +8.9% 7840 ± 6% slabinfo.kmalloc-32.active_objs
7200 ± 12% +8.9% 7840 ± 6% slabinfo.kmalloc-32.num_objs
5248 ± 8% -0.2% 5238 ± 4% slabinfo.kmalloc-8.active_objs
5248 ± 8% -0.2% 5238 ± 4% slabinfo.kmalloc-8.num_objs
3455 ± 5% +19.2% 4119 ± 4% slabinfo.pid.active_objs
4570 ± 7% +0.9% 4613 ± 5% slabinfo.pid.num_objs
24874 -1.4% 24521 slabinfo.radix_tree_node.active_objs
24874 -1.3% 24563 slabinfo.radix_tree_node.num_objs
3221 -46.7% 1717 ± 18% turbostat.Avg_MHz
97.84 -45.7 52.18 ± 18% turbostat.Busy%
3300 -0.0% 3299 turbostat.Bzy_MHz
1375 ± 15% +1198.2% 17856 ± 68% turbostat.C1
0.01 +0.2 0.18 ± 69% turbostat.C1%
21308 ± 69% +3425.0% 751108 ± 4% turbostat.C1E
0.32 ± 69% +20.5 20.78 ± 2% turbostat.C1E%
161.25 ± 17% +310.7% 662.25 ± 32% turbostat.C3
0.01 +0.1 0.09 ± 36% turbostat.C3%
7713 ± 17% +667.3% 59186 ± 23% turbostat.C6
1.82 ± 18% +24.9 26.73 ± 33% turbostat.C6%
1.38 ± 22% +3185.0% 45.33 ± 14% turbostat.CPU%c1
0.01 ± 34% -60.0% 0.01 ±100% turbostat.CPU%c3
0.77 ± 44% +220.4% 2.48 ±126% turbostat.CPU%c6
13.59 -3.8% 13.08 ± 6% turbostat.CorWatt
49.50 -3.5% 47.75 ± 2% turbostat.CoreTmp
350.00 +0.0% 350.00 turbostat.GFXMHz
1305924 -27.1% 952039 ± 4% turbostat.IRQ
0.04 ± 30% -100.0% 0.00 turbostat.Pkg%pc2
0.12 ± 10% -100.0% 0.00 turbostat.Pkg%pc6
47.00 ± 4% -1.6% 46.25 ± 2% turbostat.PkgTmp
30.99 -1.9% 30.41 ± 3% turbostat.PkgWatt
3292 -0.0% 3292 turbostat.TSC_MHz
5.878e+10 ± 3% -51.3% 2.861e+10 ± 53% perf-stat.branch-instructions
1.64 ± 14% +0.8 2.45 ± 86% perf-stat.branch-miss-rate%
9.698e+08 ± 18% -49.4% 4.904e+08 ±105% perf-stat.branch-misses
91.01 -18.6 72.41 ± 26% perf-stat.cache-miss-rate%
4.227e+09 -55.8% 1.868e+09 ± 87% perf-stat.cache-misses
4.645e+09 -53.7% 2.148e+09 ± 78% perf-stat.cache-references
2026235 -19.2% 1636525 ± 4% perf-stat.context-switches
4.83 ± 4% -50.7% 2.39 ± 32% perf-stat.cpi
1.255e+12 -72.6% 3.435e+11 ± 70% perf-stat.cpu-cycles
13403 ± 17% -85.7% 1914 ± 5% perf-stat.cpu-migrations
5.39 ± 3% -1.8 3.57 ± 58% perf-stat.dTLB-load-miss-rate%
3.877e+09 -57.0% 1.668e+09 ± 88% perf-stat.dTLB-load-misses
6.807e+10 ± 2% -50.5% 3.37e+10 ± 57% perf-stat.dTLB-loads
0.02 ± 20% +0.0 0.07 ± 69% perf-stat.dTLB-store-miss-rate%
6310413 ± 22% -6.8% 5881564 ± 31% perf-stat.dTLB-store-misses
2.53e+10 ± 2% -47.4% 1.331e+10 ± 49% perf-stat.dTLB-stores
89.37 -18.3 71.07 ± 14% perf-stat.iTLB-load-miss-rate%
10852282 ± 31% +98.0% 21484353 ± 50% perf-stat.iTLB-load-misses
1333714 ± 48% +446.8% 7292385 ± 6% perf-stat.iTLB-loads
2.601e+11 ± 3% -50.8% 1.278e+11 ± 51% perf-stat.instructions
25902 ± 23% -66.0% 8814 ± 95% perf-stat.instructions-per-iTLB-miss
0.21 ± 4% +126.5% 0.47 ± 33% perf-stat.ipc
1626010 -16.2% 1362182 ± 5% perf-stat.minor-faults
1626011 -16.2% 1362184 ± 5% perf-stat.page-faults
14363 ± 3% +4.0% 14938 ± 40% perf-stat.path-length
306033 ± 7% +501.9% 1842050 ± 51% meminfo.Active
305962 ± 7% +502.0% 1841980 ± 51% meminfo.Active(anon)
74019 -48.2% 38339 ± 5% meminfo.AnonHugePages
235389 -1.5% 231743 meminfo.AnonPages
1041 -2.2% 1017 meminfo.Buffers
4827393 -2.3% 4714606 meminfo.Cached
18164 ± 8% +32.3% 24031 ± 7% meminfo.CmaFree
204800 +0.0% 204800 meminfo.CmaTotal
4023392 +0.0% 4023392 meminfo.CommitLimit
4265286 -2.5% 4158052 meminfo.Committed_AS
8191029 +0.1% 8195721 meminfo.DirectMap2M
79997 ± 5% -5.9% 75305 ± 2% meminfo.DirectMap4k
2048 +0.0% 2048 meminfo.Hugepagesize
3877131 -42.6% 2224654 ± 41% meminfo.Inactive
3875926 -42.6% 2223471 ± 41% meminfo.Inactive(anon)
1205 -1.8% 1183 meminfo.Inactive(file)
3603 -1.7% 3541 meminfo.KernelStack
3932052 -2.8% 3822786 meminfo.Mapped
2715637 +5.0% 2850785 meminfo.MemAvailable
2824985 +4.8% 2960247 meminfo.MemFree
8046788 +0.0% 8046788 meminfo.MemTotal
34324 -50.5% 16985 ± 18% meminfo.PageTables
48267 -0.4% 48065 meminfo.SReclaimable
20732 -0.7% 20579 meminfo.SUnreclaim
3946053 -2.9% 3833269 meminfo.Shmem
69000 -0.5% 68645 meminfo.Slab
881118 -0.0% 881102 meminfo.Unevictable
3.436e+10 +0.0% 3.436e+10 meminfo.VmallocTotal
76751 ± 4% +500.6% 460938 ± 51% proc-vmstat.nr_active_anon
58905 -1.5% 58008 proc-vmstat.nr_anon_pages
67403 +4.7% 70593 proc-vmstat.nr_dirty_background_threshold
134972 +4.7% 141360 proc-vmstat.nr_dirty_threshold
1205133 -2.2% 1178739 proc-vmstat.nr_file_pages
4613 ± 4% +29.8% 5989 ± 7% proc-vmstat.nr_free_cma
708248 +4.5% 740205 proc-vmstat.nr_free_pages
966805 -42.6% 555343 ± 41% proc-vmstat.nr_inactive_anon
300.50 -2.0% 294.50 proc-vmstat.nr_inactive_file
3601 -1.7% 3542 proc-vmstat.nr_kernel_stack
978954 -2.9% 950733 proc-vmstat.nr_mapped
8544 -50.4% 4237 ± 18% proc-vmstat.nr_page_table_pages
984538 -2.7% 958151 proc-vmstat.nr_shmem
12063 -0.5% 12004 proc-vmstat.nr_slab_reclaimable
5182 -0.7% 5144 proc-vmstat.nr_slab_unreclaimable
220278 -0.0% 220275 proc-vmstat.nr_unevictable
76754 ± 4% +500.5% 460940 ± 51% proc-vmstat.nr_zone_active_anon
966805 -42.6% 555343 ± 41% proc-vmstat.nr_zone_inactive_anon
300.50 -2.0% 294.50 proc-vmstat.nr_zone_inactive_file
220278 -0.0% 220275 proc-vmstat.nr_zone_unevictable
1162127 -3.6% 1120485 proc-vmstat.numa_hit
1162127 -3.6% 1120485 proc-vmstat.numa_local
1006996 -5.2% 954325 ± 5% proc-vmstat.pgactivate
247442 -10.8% 220800 proc-vmstat.pgalloc_dma32
936515 -3.6% 903231 proc-vmstat.pgalloc_normal
1630736 -16.2% 1366671 ± 5% proc-vmstat.pgfault
595443 ± 62% +1.1% 601843 ± 83% proc-vmstat.pgfree
2173 +0.0% 2173 proc-vmstat.pgpgin
2048 +0.0% 2049 proc-vmstat.pgpgout
168.36 ±173% +248.0% 585.83 ±173% sched_debug.cfs_rq:/.MIN_vruntime.avg
673.46 ±173% +248.0% 2343 ±173% sched_debug.cfs_rq:/.MIN_vruntime.max
0.00 +0.0% 0.00 sched_debug.cfs_rq:/.MIN_vruntime.min
291.62 ±173% +248.0% 1014 ±173% sched_debug.cfs_rq:/.MIN_vruntime.stddev
29757 -99.9% 39.23 ± 25% sched_debug.cfs_rq:/.exec_clock.avg
30069 -99.9% 40.37 ± 24% sched_debug.cfs_rq:/.exec_clock.max
29419 -99.9% 37.80 ± 26% sched_debug.cfs_rq:/.exec_clock.min
248.80 ± 68% -99.6% 1.02 ± 13% sched_debug.cfs_rq:/.exec_clock.stddev
292413 ± 19% +13.2% 331037 ± 31% sched_debug.cfs_rq:/.load.avg
512518 ± 46% +11.9% 573270 ± 69% sched_debug.cfs_rq:/.load.max
187469 ± 7% +9.9% 205951 ± 11% sched_debug.cfs_rq:/.load.min
133934 ± 76% +10.8% 148390 ±112% sched_debug.cfs_rq:/.load.stddev
406.56 ± 15% +46.8% 596.94 ± 7% sched_debug.cfs_rq:/.load_avg.avg
634.38 ± 21% +85.7% 1178 ± 28% sched_debug.cfs_rq:/.load_avg.max
240.25 ± 11% +10.7% 266.00 ± 18% sched_debug.cfs_rq:/.load_avg.min
167.36 ± 42% +130.5% 385.79 ± 37% sched_debug.cfs_rq:/.load_avg.stddev
168.36 ±173% +248.0% 585.83 ±173% sched_debug.cfs_rq:/.max_vruntime.avg
673.46 ±173% +248.0% 2343 ±173% sched_debug.cfs_rq:/.max_vruntime.max
0.00 +0.0% 0.00 sched_debug.cfs_rq:/.max_vruntime.min
291.62 ±173% +248.0% 1014 ±173% sched_debug.cfs_rq:/.max_vruntime.stddev
131544 -93.1% 9126 sched_debug.cfs_rq:/.min_vruntime.avg
140526 ± 2% -89.2% 15201 ± 15% sched_debug.cfs_rq:/.min_vruntime.max
121759 ± 3% -97.1% 3486 ± 22% sched_debug.cfs_rq:/.min_vruntime.min
7074 ± 31% -36.0% 4528 ± 20% sched_debug.cfs_rq:/.min_vruntime.stddev
1.03 ± 5% +3.0% 1.06 ± 10% sched_debug.cfs_rq:/.nr_running.avg
1.12 ± 19% +11.1% 1.25 ± 34% sched_debug.cfs_rq:/.nr_running.max
1.00 +0.0% 1.00 sched_debug.cfs_rq:/.nr_running.min
0.05 ±173% +100.0% 0.11 ±173% sched_debug.cfs_rq:/.nr_running.stddev
3.41 ± 7% -100.0% 0.00 sched_debug.cfs_rq:/.nr_spread_over.avg
6.88 ± 13% -100.0% 0.00 sched_debug.cfs_rq:/.nr_spread_over.max
1.12 ± 72% -100.0% 0.00 sched_debug.cfs_rq:/.nr_spread_over.min
2.16 ± 27% -100.0% 0.00 sched_debug.cfs_rq:/.nr_spread_over.stddev
178.72 ± 3% -53.9% 82.38 ± 26% sched_debug.cfs_rq:/.runnable_load_avg.avg
274.75 ± 4% -50.9% 135.00 ± 38% sched_debug.cfs_rq:/.runnable_load_avg.max
141.12 ± 4% -70.8% 41.25 ± 44% sched_debug.cfs_rq:/.runnable_load_avg.min
56.87 ± 7% -35.1% 36.91 ± 66% sched_debug.cfs_rq:/.runnable_load_avg.stddev
212710 ± 24% -63.1% 78556 ± 28% sched_debug.cfs_rq:/.runnable_weight.avg
400157 ± 53% -69.9% 120568 ± 40% sched_debug.cfs_rq:/.runnable_weight.max
141898 ± 2% -70.5% 41804 ± 44% sched_debug.cfs_rq:/.runnable_weight.min
109826 ± 83% -72.0% 30704 ± 76% sched_debug.cfs_rq:/.runnable_weight.stddev
-956.92 -348.7% 2379 ±119% sched_debug.cfs_rq:/.spread0.avg
8024 ± 76% +5.3% 8451 ± 28% sched_debug.cfs_rq:/.spread0.max
-10742 -69.7% -3260 sched_debug.cfs_rq:/.spread0.min
7074 ± 31% -36.0% 4527 ± 20% sched_debug.cfs_rq:/.spread0.stddev
1028 ± 2% +0.5% 1033 ± 3% sched_debug.cfs_rq:/.util_avg.avg
1105 ± 5% +14.1% 1261 ± 6% sched_debug.cfs_rq:/.util_avg.max
929.62 ± 5% -8.2% 853.00 ± 14% sched_debug.cfs_rq:/.util_avg.min
67.44 ± 63% +130.4% 155.36 ± 41% sched_debug.cfs_rq:/.util_avg.stddev
222.56 ± 34% -71.7% 62.88 ± 58% sched_debug.cfs_rq:/.util_est_enqueued.avg
503.88 ± 33% -51.8% 242.75 ± 59% sched_debug.cfs_rq:/.util_est_enqueued.max
26.00 ±162% -95.2% 1.25 ± 34% sched_debug.cfs_rq:/.util_est_enqueued.min
195.39 ± 35% -46.8% 103.88 ± 60% sched_debug.cfs_rq:/.util_est_enqueued.stddev
473454 ± 14% +38.0% 653593 ± 9% sched_debug.cpu.avg_idle.avg
832956 ± 6% -3.1% 806762 ± 9% sched_debug.cpu.avg_idle.max
193874 ± 57% +124.3% 434865 ± 36% sched_debug.cpu.avg_idle.min
260441 ± 13% -45.2% 142845 ± 55% sched_debug.cpu.avg_idle.stddev
110437 -27.4% 80213 ± 2% sched_debug.cpu.clock.avg
110438 -27.4% 80214 ± 2% sched_debug.cpu.clock.max
110436 -27.4% 80211 ± 2% sched_debug.cpu.clock.min
0.83 ± 13% +25.6% 1.04 ± 12% sched_debug.cpu.clock.stddev
110437 -27.4% 80213 ± 2% sched_debug.cpu.clock_task.avg
110438 -27.4% 80214 ± 2% sched_debug.cpu.clock_task.max
110436 -27.4% 80211 ± 2% sched_debug.cpu.clock_task.min
0.83 ± 13% +25.6% 1.04 ± 12% sched_debug.cpu.clock_task.stddev
189.84 ± 4% -54.7% 85.94 ± 17% sched_debug.cpu.cpu_load[0].avg
289.12 ± 10% -44.1% 161.50 ± 23% sched_debug.cpu.cpu_load[0].max
141.12 ± 4% -71.1% 40.75 ± 17% sched_debug.cpu.cpu_load[0].min
63.08 ± 20% -21.9% 49.24 ± 30% sched_debug.cpu.cpu_load[0].stddev
185.59 ± 4% -52.2% 88.69 ± 12% sched_debug.cpu.cpu_load[1].avg
282.50 ± 6% -47.1% 149.50 ± 29% sched_debug.cpu.cpu_load[1].max
137.25 ± 4% -67.8% 44.25 ± 18% sched_debug.cpu.cpu_load[1].min
60.14 ± 14% -28.4% 43.04 ± 46% sched_debug.cpu.cpu_load[1].stddev
184.72 ± 3% -50.5% 91.38 ± 12% sched_debug.cpu.cpu_load[2].avg
275.38 ± 5% -47.7% 144.00 ± 30% sched_debug.cpu.cpu_load[2].max
138.75 ± 6% -62.3% 52.25 ± 24% sched_debug.cpu.cpu_load[2].min
55.48 ± 11% -30.6% 38.47 ± 50% sched_debug.cpu.cpu_load[2].stddev
186.38 ± 4% -47.5% 97.94 ± 14% sched_debug.cpu.cpu_load[3].avg
258.75 ± 4% -44.8% 142.75 ± 27% sched_debug.cpu.cpu_load[3].max
139.88 ± 6% -57.5% 59.50 ± 25% sched_debug.cpu.cpu_load[3].min
45.63 ± 13% -26.4% 33.58 ± 46% sched_debug.cpu.cpu_load[3].stddev
188.03 ± 5% -42.5% 108.06 ± 13% sched_debug.cpu.cpu_load[4].avg
240.62 ± 5% -40.2% 144.00 ± 23% sched_debug.cpu.cpu_load[4].max
142.50 ± 9% -51.9% 68.50 ± 18% sched_debug.cpu.cpu_load[4].min
36.69 ± 22% -21.2% 28.92 ± 43% sched_debug.cpu.cpu_load[4].stddev
966.06 ± 6% -25.7% 718.06 ± 9% sched_debug.cpu.curr->pid.avg
1299 -33.2% 868.25 sched_debug.cpu.curr->pid.max
705.75 ± 34% -40.1% 422.75 ± 37% sched_debug.cpu.curr->pid.min
237.21 ± 31% -24.8% 178.41 ± 35% sched_debug.cpu.curr->pid.stddev
325755 ± 34% -18.5% 265488 ± 5% sched_debug.cpu.load.avg
617677 ± 67% -44.1% 345304 ± 6% sched_debug.cpu.load.max
187469 ± 7% +9.9% 205951 ± 11% sched_debug.cpu.load.min
176372 ±100% -69.5% 53831 ± 27% sched_debug.cpu.load.stddev
500000 +0.0% 500000 sched_debug.cpu.max_idle_balance_cost.avg
500000 +0.0% 500000 sched_debug.cpu.max_idle_balance_cost.max
500000 +0.0% 500000 sched_debug.cpu.max_idle_balance_cost.min
4294 -0.0% 4294 sched_debug.cpu.next_balance.avg
4294 -0.0% 4294 sched_debug.cpu.next_balance.max
4294 -0.0% 4294 sched_debug.cpu.next_balance.min
0.00 ± 36% -10.2% 0.00 ± 65% sched_debug.cpu.next_balance.stddev
37966 -78.9% 8004 ± 4% sched_debug.cpu.nr_load_updates.avg
40554 ± 2% -70.4% 11997 ± 21% sched_debug.cpu.nr_load_updates.max
34281 ± 2% -87.1% 4427 ± 15% sched_debug.cpu.nr_load_updates.min
2438 ± 22% +19.2% 2905 ± 41% sched_debug.cpu.nr_load_updates.stddev
2.25 ± 17% +8.3% 2.44 ± 29% sched_debug.cpu.nr_running.avg
3.12 ± 20% +20.0% 3.75 ± 54% sched_debug.cpu.nr_running.max
1.62 ± 13% -23.1% 1.25 ± 34% sched_debug.cpu.nr_running.min
0.63 ± 22% +59.9% 1.00 ± 85% sched_debug.cpu.nr_running.stddev
254977 -96.8% 8201 ± 4% sched_debug.cpu.nr_switches.avg
674868 ± 21% -98.2% 11993 ± 22% sched_debug.cpu.nr_switches.max
37283 ± 51% -85.3% 5491 ± 17% sched_debug.cpu.nr_switches.min
258652 ± 28% -99.0% 2585 ± 58% sched_debug.cpu.nr_switches.stddev
0.03 ±173% +700.0% 0.25 ± 70% sched_debug.cpu.nr_uninterruptible.avg
22.25 ± 22% -34.8% 14.50 ± 18% sched_debug.cpu.nr_uninterruptible.max
-20.75 -45.8% -11.25 sched_debug.cpu.nr_uninterruptible.min
16.52 ± 24% -38.8% 10.12 ± 14% sched_debug.cpu.nr_uninterruptible.stddev
247705 -99.6% 886.94 ± 51% sched_debug.cpu.sched_count.avg
667148 ± 21% -99.6% 2595 ± 62% sched_debug.cpu.sched_count.max
30369 ± 59% -99.5% 143.75 ± 7% sched_debug.cpu.sched_count.min
258206 ± 27% -99.6% 1005 ± 66% sched_debug.cpu.sched_count.stddev
2815 ± 66% -100.0% 0.00 sched_debug.cpu.sched_goidle.avg
8181 ± 73% -100.0% 0.00 sched_debug.cpu.sched_goidle.max
29.38 ±173% -100.0% 0.00 sched_debug.cpu.sched_goidle.min
3344 ± 71% -100.0% 0.00 sched_debug.cpu.sched_goidle.stddev
123801 -99.6% 440.69 ± 52% sched_debug.cpu.ttwu_count.avg
334777 ± 21% -99.6% 1307 ± 61% sched_debug.cpu.ttwu_count.max
14411 ± 67% -99.4% 81.25 ± 9% sched_debug.cpu.ttwu_count.min
129727 ± 27% -99.6% 507.89 ± 64% sched_debug.cpu.ttwu_count.stddev
122356 -99.7% 411.38 ± 55% sched_debug.cpu.ttwu_local.avg
332928 ± 21% -99.6% 1279 ± 62% sched_debug.cpu.ttwu_local.max
12947 ± 71% -99.6% 50.50 ± 8% sched_debug.cpu.ttwu_local.min
129617 ± 27% -99.6% 508.69 ± 64% sched_debug.cpu.ttwu_local.stddev
110435 -27.4% 80211 ± 2% sched_debug.cpu_clk
996147 +0.0% 996147 sched_debug.dl_rq:.dl_bw->bw.avg
996147 +0.0% 996147 sched_debug.dl_rq:.dl_bw->bw.max
996147 +0.0% 996147 sched_debug.dl_rq:.dl_bw->bw.min
4.295e+09 -0.0% 4.295e+09 sched_debug.jiffies
110435 -27.4% 80211 ± 2% sched_debug.ktime
950.00 +0.0% 950.00 sched_debug.rt_rq:/.rt_runtime.avg
950.00 +0.0% 950.00 sched_debug.rt_rq:/.rt_runtime.max
950.00 +0.0% 950.00 sched_debug.rt_rq:/.rt_runtime.min
110441 -27.4% 80216 ± 2% sched_debug.sched_clk
1.00 +0.0% 1.00 sched_debug.sched_clock_stable()
4118331 +0.0% 4118331 sched_debug.sysctl_sched.sysctl_sched_features
18.00 +0.0% 18.00 sched_debug.sysctl_sched.sysctl_sched_latency
2.25 +0.0% 2.25 sched_debug.sysctl_sched.sysctl_sched_min_granularity
1.00 +0.0% 1.00 sched_debug.sysctl_sched.sysctl_sched_tunable_scaling
3.00 +0.0% 3.00 sched_debug.sysctl_sched.sysctl_sched_wakeup_granularity
vm-scalability.workload
2e+07 +-+---------------------------------------------------------------+
| |
1.8e+07 +-++.+.++.+.++.+.++.++.+.++.+.++.+.++.+.++.+.++.+.++.++.+.++.+.++.|
1.6e+07 +-+ |
| |
1.4e+07 +-+ |
O O O O O |
1.2e+07 +-+ |
| |
1e+07 +-+ |
8e+06 +-OO O O O OO |
| |
6e+06 +-+ |
| |
4e+06 +-+----OO----O---O---O--------------------------------------------+
vm-scalability.time.user_time
400 +-+-------------------------------------------------------------------+
| |
350 +-+.++.+.+.++.+.+.++.+.+.++.+.+.++.+.+.++.+.+.++.+.+.++.+.+.++.+.+.++.|
300 +-+ |
| |
250 +-+ |
| |
200 O-+ O O O O |
| |
150 +-+ |
100 +-O |
| O O O OO O |
50 +-+ O O O O O |
| |
0 +-+-------------------------------------------------------------------+
vm-scalability.time.system_time
13 +-+--------------------------------------------------------------------+
|:: :: : .+ |
12 +-+: + + :: + : : + + + + .|
11 +-+: :: .+ .++. .+.+ : + .+ :: : + : : : : +. .+ + + .+ |
| : : .+ +. .+.+ + :: + :: : + :: :: + +. + + |
10 +-+ + + + + + + + + + |
9 +-+ |
| |
8 +-+ |
7 +-+ |
O O |
6 +-O OO OO O O |
5 +-+ O O O O O O |
| O O |
4 +-+--------------------------------------------------------------------+
vm-scalability.time.percent_of_cpu_this_job_got
400 +-+-------------------------------------------------------------------+
|.+.++.+.+.++.+.+.++.+.+.++.+.+.++.+.+.++.+.+.++.+.+.++.+.+.++.+.+.++.|
350 +-+ |
| |
| |
300 +-+ |
O O O O O |
250 +-+ |
| |
200 +-O O O O |
| O O O |
| |
150 +-+ |
| |
100 +-+----O-O----O---O----O----------------------------------------------+
vm-scalability.time.involuntary_context_switches
1e+06 +-+----------------------------------------------------------------+
900000 +-++.+.++.+. +.+.+ .+.+.++.+.++.+.++.+.+ + .++.|
| ++.+.++.+.+ + +. + +.+.+ .+ |
800000 +-+ + + |
700000 +-+ |
| |
600000 +-+ |
500000 +-+ |
400000 +-+ |
| |
300000 +-+ |
200000 +-+ |
O O |
100000 +-+ O |
0 +-OO---OO-O-OO-O-O--O-OO-O-O---------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Xiaolong