Re: [LKP] [lkp] [net] af1fee9821: BUG:spinlock_trylock_failure_on_UP_on_CPU
by Ye Xiaolong
On 11/07, Allan W. Nielsen wrote:
>Hi,
>
>I tried to get this "lkp" up and running, but I had some trouble getting
>these scripts to work.
Hi Allan,
Could you tell us what troubles you ran into when trying the "lkp qemu"
tool? It would be better if you could paste some logs so we can help
improve it.
Thanks,
Xiaolong
>
>But it seems like it can be reproduced using the provided config file and qemu.
Re: [LKP] [lkp] [net] af1fee9821: BUG:spinlock_trylock_failure_on_UP_on_CPU
by Andrew Lunn
On Mon, Nov 07, 2016 at 02:27:14PM +0100, Allan W. Nielsen wrote:
> Hi,
>
> I tried to get this "lkp" up and running, but I had some trouble getting
> these scripts to work.
>
> But it seems like it can be reproduced using the provided config file and qemu.
>
> Here is what I did:
>
> # reproduce original bug
> git reset --hard af1fee98219992ba2c12441a447719652ed7e983
> mkdir bug-build
> cp config-4.8.0-14895-gaf1fee9 bug-build/.config
> make O=bug-build oldconfig
> make O=bug-build -j8
> qemu-system-x86_64 -enable-kvm -cpu host -smp 2 -m 4G -kernel \
> ../net-next/bug-build/arch/x86_64/boot/bzImage -nographic
> <see-output-1-below>
> # bug seemed to be re-produced
>
>
> # Try previous version
> git reset --hard 32ab0a38f0bd554cc45203ff4fdb6b0fdea6f025
> make O=bug-build oldconfig
> make O=bug-build -j8
> qemu-system-x86_64 -enable-kvm -cpu host -smp 2 -m 4G -kernel \
> ../net-next/bug-build/arch/x86_64/boot/bzImage -nographic
> <see-output-2-below>
> # bug seemed to disappear
>
>
> # Try the buggy revision again - but without MICROSEMI_PHY
> git reset --hard af1fee98219992ba2c12441a447719652ed7e983
> sed -e "/MICROSEMI_PHY/d" -i bug-build/.config
> make O=bug-build oldconfig
> cat bug-build/.config | grep MICROSEMI_PHY
> qemu-system-x86_64 -enable-kvm -cpu host -smp 2 -m 4G -kernel \
> ../net-next/bug-build/arch/x86_64/boot/bzImage -nographic
> <see-output-3-below>
> # bug still seem to be there...
>
>
> Not sure what this tells me, any hints are more than welcome.
If the bug happens without your code being compiled, it cannot be your
code. It suggests the patch is moving code around in such a way as to
trigger the issue, but it is not the source of the issue itself. To me
it seems like memory corruption or uninitialised variables in some
other code, or maybe DMA from the stack, which was never allowed but
mostly worked on some platforms, and which the recent change to
virtually mapped stacks has broken.
Your code is off the hook, thanks for the testing you did.
Andrew
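For illustration, here is a minimal C sketch of the "DMA from the stack" pattern Andrew refers to. This is hypothetical driver code, not taken from the Microsemi PHY patch, and the function names are made up: an on-stack buffer handed to the DMA API only ever worked by accident, and CONFIG_VMAP_STACK (vmalloc()ed stacks) makes it fail.

#include <linux/dma-mapping.h>
#include <linux/slab.h>

/* Broken: the buffer lives on the kernel stack.  Mapping it for DMA was
 * never allowed, but it mostly worked while stacks came from the linear
 * mapping.  With CONFIG_VMAP_STACK the stack is vmalloc()ed, so its
 * address cannot be translated for DMA and code like this blows up. */
static void dma_from_stack(struct device *dev)
{
        u8 cmd[8] = { 0x01 };

        dma_map_single(dev, cmd, sizeof(cmd), DMA_TO_DEVICE);
}

/* OK: kmalloc()ed memory sits in the linear mapping and is safe to hand
 * to the DMA API. */
static void dma_from_heap(struct device *dev)
{
        u8 *cmd = kzalloc(8, GFP_KERNEL);
        dma_addr_t addr;

        if (!cmd)
                return;
        cmd[0] = 0x01;
        addr = dma_map_single(dev, cmd, 8, DMA_TO_DEVICE);
        if (!dma_mapping_error(dev, addr)) {
                /* hand addr to the device, wait for completion, ... */
                dma_unmap_single(dev, addr, 8, DMA_TO_DEVICE);
        }
        kfree(cmd);
}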
[mm] 6ab7850933: BUG: Bad page state in process kswapd1 pfn:1040263
by kernel test robot
FYI, we noticed the following commit:
https://github.com/0day-ci/linux Nicholas-Piggin/optimise-unlock_page-end_page_writeback/20161102-150709
commit 6ab7850933aa7c4859c30045626a86d9cdb5ac61 ("mm: Use owner_priv bit for PageSwapCache, valid when PageSwapBacked")
in testcase: fsmark
with following parameters:
iterations: 1x
nr_threads: 1t
disk: 8BRD_12G
md: RAID6
fs: btrfs
filesize: 4M
test_size: 60G
sync_method: fsyncBeforeClose
cpufreq_governor: performance
fsmark is a file system benchmark that tests synchronous write workloads, for example a mail server workload.
on test machine: 144 threads Intel(R) Xeon(R) CPU E7-8890 v3 @ 2.50GHz with 128G memory
caused the following changes:
+------------------------------------------------------------------+------------+------------+
| | 0c183d92b2 | 6ab7850933 |
+------------------------------------------------------------------+------------+------------+
| boot_successes | 1 | 0 |
| boot_failures | 1 | 1 |
| invoked_oom-killer:gfp_mask=0x | 1 | |
| Mem-Info | 1 | |
| Kernel_panic-not_syncing:Out_of_memory_and_no_killable_processes | 1 | |
| BUG:kernel_reboot-without-warning_in_test_stage | 0 | 1 |
+------------------------------------------------------------------+------------+------------+
user :notice: [ 56.147733] 2016-11-07 14:36:14 fs_mark -d /fs/md0/1 -n 15360 -L 1 -S 1 -s 4194304
kern :crit : [ 56.160360] BTRFS critical (device md0): corrupt leaf, non-root leaf's nritems is 0: block=29540352, root=1, slot=0
user :notice: [ 56.161213] # fs_mark -d /fs/md0/1 -n 15360 -L 1 -S 1 -s 4194304
user :notice: [ 56.162106] # Version 3.3, 1 thread(s) starting at Mon Nov 7 14:36:14 2016
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Kernel Test Robot
[lkp] [net] af1fee9821: BUG:spinlock_trylock_failure_on_UP_on_CPU
by kernel test robot
FYI, we noticed the following commit:
https://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next.git master
commit af1fee98219992ba2c12441a447719652ed7e983 ("net: phy: Add support for Microsemi VSC 8530/40 Fast Ethernet PHY")
in testcase: boot
on test machine: qemu-system-x86_64 -enable-kvm -cpu host -smp 2 -m 4G
caused the following changes:
+-------------------------------------------------------+------------+------------+
| | 32ab0a38f0 | af1fee9821 |
+-------------------------------------------------------+------------+------------+
| boot_successes | 0 | 0 |
| boot_failures | 8 | 8 |
| calltrace:init | 8 | 7 |
| BUG:key_not_in.data | 6 | |
| WARNING:at_kernel/locking/lockdep.c:#lockdep_init_map | 6 | |
| calltrace:vhci_hcd_init | 6 | |
| invalid_opcode:#[##]PREEMPT_DEBUG_PAGEALLOC | 2 | 1 |
| RIP:__brk_base | 2 | 1 |
| calltrace:eth_driver_init | 2 | 7 |
| Kernel_panic-not_syncing:Fatal_exception | 2 | 1 |
| BUG:spinlock_trylock_failure_on_UP_on_CPU | 0 | 6 |
| BUG:workqueue_lockup-pool | 0 | 1 |
+-------------------------------------------------------+------------+------------+
[ 35.319526] udc dummy_udc.0: releasing 'dummy_udc.0'
[ 35.320910] kobject (ffff88011b574f78): tried to init an initialized object, something is seriously wrong.
[ 35.323437] CPU: 0 PID: 1 Comm: swapper Not tainted 4.8.0-14895-gaf1fee9 #1
[ 35.325381] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 35.327617] ffff88013a89fb88 ffffffff812cb0fe ffff88013a89fba8 ffffffff812cd41b
[ 35.329817] ffff88011b574f68 ffff88011b574ed8 ffff88013a89fbc0 ffffffff8144faf5
[ 35.331831] ffff88011b574f68 ffff88013a89fbd8 ffffffff81450353 ffff88011cbad800
[ 35.333751] Call Trace:
[ 35.334663] [<ffffffff812cb0fe>] dump_stack+0x19/0x1b
[ 35.335980] [<ffffffff812cd41b>] kobject_init+0x31/0x7f
[ 35.337316] [<ffffffff8144faf5>] device_initialize+0x23/0xd2
[ 35.338931] [<ffffffff81450353>] device_register+0xd/0x18
[ 35.340289] [<ffffffff818847ec>] usb_add_gadget_udc_release+0xcf/0x2cb
[ 35.342138] [<ffffffff81884a61>] usb_add_gadget_udc+0xb/0xd
[ 35.343535] [<ffffffff81887815>] dummy_udc_probe+0x1a4/0x1e5
[ 35.345019] [<ffffffff81454c94>] platform_drv_probe+0x23/0x4e
[ 35.346517] [<ffffffff8145393d>] driver_probe_device+0x1b7/0x40e
[ 35.348388] [<ffffffff81453d0c>] __device_attach_driver+0x90/0xd0
[ 35.349979] [<ffffffff81453c7c>] ? driver_allows_async_probing+0xd/0xd
[ 35.351742] [<ffffffff814521a5>] bus_for_each_drv+0x76/0x85
[ 35.353120] [<ffffffff81453657>] __device_attach+0x89/0xe7
[ 35.354639] [<ffffffff81453e5c>] device_initial_probe+0xe/0x10
[ 35.356082] [<ffffffff8145237b>] bus_probe_device+0x2e/0x99
[ 35.357471] [<ffffffff81450250>] device_add+0x3f4/0x4ea
[ 35.358999] [<ffffffff814552b4>] platform_device_add+0x174/0x1d4
[ 35.360477] [<ffffffff82d8aaf0>] init+0x26e/0x36c
[ 35.361869] [<ffffffff82d8a882>] ? trace_event_define_fields_udc_log_req+0x205/0x205
[ 35.363766] [<ffffffff82d368d4>] ? set_debug_rodata+0x12/0x12
[ 35.365211] [<ffffffff82d3706e>] do_one_initcall+0x89/0x149
[ 35.366649] [<ffffffff82d368d4>] ? set_debug_rodata+0x12/0x12
[ 35.368293] [<ffffffff82d3724b>] kernel_init_freeable+0x11d/0x1a0
[ 35.369774] [<ffffffff81dfa235>] ? rest_init+0x12c/0x12c
[ 35.371303] [<ffffffff81dfa23e>] kernel_init+0x9/0xeb
[ 35.372561] [<ffffffff81e0a8ca>] ret_from_fork+0x2a/0x40
[ 35.375486] userial_init: registered 4 ttyGS* devices
[ 35.376616] udc dummy_udc.0: registering UDC driver [g_ether]
[ 35.378200] using random self ethernet address
[ 35.379189] using random host ethernet address
[ 35.380273] g_ether gadget: adding config #1 'CDC Ethernet (ECM)'/ffffffff82af89c0
[ 35.382209] g_ether gadget: adding 'cdc_ethernet'/ffff88011b57fa00 to config 'CDC Ethernet (ECM)'/ffffffff82af89c0
[ 35.385445] BUG: spinlock trylock failure on UP on CPU#0, swapper/1
[ 35.386973] lock: 0xffff88011d52cd00, .magic: 00000000, .owner: <none>/-1, .owner_cpu: -1
[ 35.389161] CPU: 0 PID: 1 Comm: swapper Not tainted 4.8.0-14895-gaf1fee9 #1
[ 35.390818] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 35.392828] ffff88013a89fb40 ffffffff812cb0fe ffff88013a89fb60 ffffffff810d6b77
[ 35.395074] ffff88011d52cd00 0000000000000000 ffff88013a89fb80 ffffffff810d6ba2
[ 35.396910] ffff88011d52cd00 ffffffff8248818a ffff88013a89fba8 ffffffff810d6e0b
[ 35.398924] Call Trace:
[ 35.399507] [<ffffffff812cb0fe>] dump_stack+0x19/0x1b
[ 35.400712] [<ffffffff810d6b77>] spin_dump+0x7f/0x84
[ 35.402103] [<ffffffff810d6ba2>] spin_bug+0x26/0x28
[ 35.403239] [<ffffffff810d6e0b>] do_raw_spin_trylock+0x5f/0x69
[ 35.404688] [<ffffffff81e09926>] _raw_spin_lock+0x36/0x64
[ 35.405992] [<ffffffff81df92a4>] ? klist_add_tail+0x20/0x4b
[ 35.407241] [<ffffffff81df92a4>] klist_add_tail+0x20/0x4b
[ 35.408619] [<ffffffff8145026c>] device_add+0x410/0x4ea
[ 35.409727] [<ffffffff810d6c1e>] ? __raw_spin_lock_init+0x2e/0x4c
[ 35.411208] [<ffffffff81b6a372>] netdev_register_kobject+0x8f/0x12b
[ 35.412672] [<ffffffff81b4f777>] register_netdevice+0x3f2/0x5e5
[ 35.414163] [<ffffffff81b4f981>] register_netdev+0x17/0x24
[ 35.415715] [<ffffffff818ac9c2>] gether_register_netdev+0x30/0xf3
[ 35.417192] [<ffffffff818ad8c2>] ecm_bind+0x70/0x360
[ 35.418752] [<ffffffff8187ead2>] usb_add_function+0xae/0x19f
[ 35.420120] [<ffffffff818b43f6>] eth_do_config+0x10f/0x145
[ 35.421747] [<ffffffff818b42e7>] ? eth_bind+0x27f/0x27f
[ 35.423037] [<ffffffff8187ed7c>] usb_add_config+0x68/0x25a
[ 35.424365] [<ffffffff818b41ec>] eth_bind+0x184/0x27f
[ 35.425631] [<ffffffff8187f607>] composite_bind+0x99/0x182
[ 35.427003] [<ffffffff82d368d4>] ? set_debug_rodata+0x12/0x12
[ 35.428581] [<ffffffff81883c80>] udc_bind_to_driver+0x53/0xe8
[ 35.430029] [<ffffffff81884c5c>] usb_gadget_probe_driver+0x121/0x13b
[ 35.431559] [<ffffffff82d8ad5c>] ? ffsmod_init+0x12/0x12
[ 35.432849] [<ffffffff8187f78b>] usb_composite_probe+0x9b/0x9d
[ 35.434414] [<ffffffff82d8ad6c>] eth_driver_init+0x10/0x12
[ 35.435787] [<ffffffff82d3706e>] do_one_initcall+0x89/0x149
[ 35.437132] [<ffffffff82d368d4>] ? set_debug_rodata+0x12/0x12
[ 35.438774] [<ffffffff82d3724b>] kernel_init_freeable+0x11d/0x1a0
[ 35.440235] [<ffffffff81dfa235>] ? rest_init+0x12c/0x12c
[ 35.441889] [<ffffffff81dfa23e>] kernel_init+0x9/0xeb
[ 35.443110] [<ffffffff81e0a8ca>] ret_from_fork+0x2a/0x40
Elapsed time: 80
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Xiaolong
[lkp] [fs] 012bc68ee9: aim7.jobs-per-min -11.6% regression
by kernel test robot
Greeting,
We noticed a -11.6% regression of aim7.jobs-per-min due to commit:
commit 012bc68ee96faf7b8d245edc270723544574bb11 ("fs: always set I_DIRTY_TIME to fsync correctly on lazytime")
https://github.com/0day-ci/linux Naohiro-Aota/fs-always-set-I_DIRTY_TIME-to-fsync-correctly-on-lazytime/20161101-032629
in testcase: aim7
on test machine: 40 threads Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz with 384G memory
with following parameters:
disk: 1BRD_48G
fs: f2fs
test: sync_disk_rw
load: 600
cpufreq_governor: performance
AIM7 is a traditional UNIX system-level benchmark suite used to test and measure the performance of multiuser systems.
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Xiaolong
[lkp] [mm] 371a096edf: vm-scalability.throughput 6.1% improvement
by kernel test robot
Greeting,
FYI, we noticed a 6.1% improvement of vm-scalability.throughput due to commit:
commit 371a096edf43a8c71844cf71c20765c8b21d07d9 ("mm: don't use radix tree writeback tags for pages in swap cache")
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
in testcase: vm-scalability
on test machine: 72 threads Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz with 128G memory
with following parameters:
runtime: 300
thp_enabled: never
thp_defrag: never
nr_task: 16
nr_ssd: 1
test: swap-w-seq
cpufreq_governor: performance
The motivation behind this suite is to exercise functions and regions of the mm/ subsystem of the Linux kernel which are of interest to us.
In addition to that, the commit also has significant impact on the following tests:
+------------------+-----------------------------------------------------------------------+
| testcase: change | vm-scalability: vm-scalability.throughput 14.1% improvement |
| test machine | 72 threads Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz with 128G memory |
| test parameters | cpufreq_governor=performance |
| | nr_ssd=1 |
| | nr_task=64 |
| | runtime=300 |
| | test=swap-w-seq |
| | thp_defrag=never |
| | thp_enabled=always |
+------------------+-----------------------------------------------------------------------+
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_ssd/nr_task/rootfs/runtime/tbox_group/test/testcase/thp_defrag/thp_enabled:
gcc-6/performance/x86_64-rhel-7.2/1/16/debian-x86_64-2016-08-31.cgz/300/lkp-hsw-ep4/swap-w-seq/vm-scalability/never/never
commit:
1d8bf926f8 ("mm/bootmem.c: replace kzalloc() by kzalloc_node()")
371a096edf ("mm: don't use radix tree writeback tags for pages in swap cache")
1d8bf926f8739bd3 371a096edf43a8c71844cf71c2
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
%stddev %change %stddev
\ | \
2213672 ± 0% +6.1% 2348422 ± 1% vm-scalability.throughput
534544 ± 1% +6.8% 570794 ± 4% vm-scalability.time.involuntary_context_switches
1.458e+08 ± 0% +7.0% 1.56e+08 ± 1% vm-scalability.time.minor_page_faults
4991 ± 50% +3615.9% 185462 ± 31% vm-scalability.time.voluntary_context_switches
0.08 ± 27% +3440.8% 2.69 ± 50% turbostat.CPU%c3
229747 ± 5% +47.7% 339402 ± 11% softirqs.RCU
270638 ± 1% +13.0% 305899 ± 2% softirqs.SCHED
1277270 ± 0% +7.8% 1377440 ± 1% vmstat.io.bo
1277264 ± 0% +7.8% 1377434 ± 1% vmstat.swap.so
5148 ± 0% +50.8% 7765 ± 9% vmstat.system.cs
110151 ± 0% +30.6% 143842 ± 11% meminfo.SUnreclaim
9703 ± 20% -32.8% 6516 ± 28% meminfo.Shmem
783039 ± 5% -26.3% 577083 ± 17% meminfo.SwapCached
1167 ± 27% +27840.3% 326175 ± 61% meminfo.Writeback
637164 ± 5% -25.4% 475615 ± 9% numa-meminfo.node0.FilePages
243.40 ± 36% +19536.4% 47794 ± 58% numa-meminfo.node0.Writeback
50438 ± 4% +31.7% 66437 ± 12% numa-meminfo.node1.SUnreclaim
83150 ± 4% +17.5% 97667 ± 8% numa-meminfo.node1.Slab
1045 ± 24% +25899.2% 271899 ± 60% numa-meminfo.node1.Writeback
159271 ± 5% -25.4% 118746 ± 9% numa-vmstat.node0.nr_file_pages
59.60 ± 37% +19680.4% 11789 ± 61% numa-vmstat.node0.nr_writeback
60.00 ± 37% +19544.0% 11786 ± 61% numa-vmstat.node0.nr_zone_write_pending
12609 ± 4% +31.5% 16579 ± 12% numa-vmstat.node1.nr_slab_unreclaimable
265.30 ± 19% +25322.4% 67445 ± 61% numa-vmstat.node1.nr_writeback
266.80 ± 19% +25179.8% 67446 ± 61% numa-vmstat.node1.nr_zone_write_pending
7424364 ± 50% +715.3% 60527951 ± 47% cpuidle.C1-HSW.time
85779 ± 8% +394.3% 423973 ± 36% cpuidle.C1-HSW.usage
11362590 ± 16% +1931.5% 2.308e+08 ± 58% cpuidle.C1E-HSW.time
58721 ± 12% +1054.1% 677675 ± 52% cpuidle.C1E-HSW.usage
60495231 ± 13% +1096.4% 7.237e+08 ± 46% cpuidle.C3-HSW.time
181445 ± 13% +599.7% 1269496 ± 43% cpuidle.C3-HSW.usage
2126 ± 9% +798.9% 19117 ± 46% cpuidle.POLL.usage
40653369 ± 70% -88.3% 4756647 ± 58% proc-vmstat.compact_migrate_scanned
370154 ± 3% -13.7% 319306 ± 8% proc-vmstat.nr_file_pages
27536 ± 0% +30.4% 35899 ± 11% proc-vmstat.nr_slab_unreclaimable
71520720 ± 5% +12.4% 80357490 ± 3% proc-vmstat.nr_vmscan_write
292.40 ± 31% +27493.5% 80683 ± 62% proc-vmstat.nr_writeback
294.00 ± 31% +27342.9% 80682 ± 62% proc-vmstat.nr_zone_write_pending
1.262e+08 ± 3% +15.9% 1.463e+08 ± 1% proc-vmstat.numa_pte_updates
87331985 ± 1% +33.7% 1.168e+08 ± 3% proc-vmstat.pgrotated
28607602 ± 2% +22.0% 34898577 ± 2% proc-vmstat.pgscan_kswapd
14766402 ± 4% +20.4% 17782412 ± 4% proc-vmstat.pgsteal_kswapd
1828938 ± 1% +52.5% 2790009 ± 9% perf-stat.context-switches
17676 ± 6% +309.2% 72326 ± 29% perf-stat.cpu-migrations
0.06 ± 11% +112.1% 0.13 ± 23% perf-stat.dTLB-load-miss-rate%
9.969e+08 ± 13% +93.7% 1.931e+09 ± 18% perf-stat.dTLB-load-misses
0.22 ± 2% +10.0% 0.24 ± 5% perf-stat.dTLB-store-miss-rate%
8.69e+08 ± 4% +18.2% 1.027e+09 ± 9% perf-stat.dTLB-store-misses
36.04 ± 2% +8.4% 39.05 ± 2% perf-stat.iTLB-load-miss-rate%
1.331e+08 ± 3% +19.8% 1.594e+08 ± 6% perf-stat.iTLB-load-misses
1.463e+08 ± 0% +7.0% 1.565e+08 ± 1% perf-stat.minor-faults
60.76 ± 1% +2.8% 62.46 ± 0% perf-stat.node-store-miss-rate%
1.463e+08 ± 0% +7.0% 1.565e+08 ± 1% perf-stat.page-faults
178.50 ± 14% +201.1% 537.40 ± 14% slabinfo.bdev_cache.active_objs
178.50 ± 14% +201.1% 537.40 ± 14% slabinfo.bdev_cache.num_objs
444.80 ± 12% +66.6% 741.10 ± 13% slabinfo.file_lock_cache.active_objs
444.80 ± 12% +66.6% 741.10 ± 13% slabinfo.file_lock_cache.num_objs
4019 ± 1% +100.4% 8054 ± 13% slabinfo.kmalloc-1024.active_objs
128.10 ± 0% +100.5% 256.90 ± 14% slabinfo.kmalloc-1024.active_slabs
4100 ± 0% +100.8% 8232 ± 14% slabinfo.kmalloc-1024.num_objs
128.10 ± 0% +100.5% 256.90 ± 14% slabinfo.kmalloc-1024.num_slabs
7578 ± 0% +21.8% 9232 ± 7% slabinfo.kmalloc-192.active_objs
7609 ± 0% +21.4% 9238 ± 7% slabinfo.kmalloc-192.num_objs
4829 ± 1% +47.7% 7134 ± 11% slabinfo.kmalloc-2048.active_objs
312.10 ± 2% +45.2% 453.10 ± 11% slabinfo.kmalloc-2048.active_slabs
4898 ± 1% +47.9% 7246 ± 11% slabinfo.kmalloc-2048.num_objs
312.10 ± 2% +45.2% 453.10 ± 11% slabinfo.kmalloc-2048.num_slabs
16028 ± 6% +479.7% 92924 ± 51% slabinfo.kmalloc-256.active_objs
351.20 ± 4% +1071.3% 4113 ± 56% slabinfo.kmalloc-256.active_slabs
16315 ± 5% +487.8% 95897 ± 51% slabinfo.kmalloc-256.num_objs
351.20 ± 4% +1071.3% 4113 ± 56% slabinfo.kmalloc-256.num_slabs
751.10 ± 6% +74.9% 1313 ± 14% slabinfo.nsproxy.active_objs
751.10 ± 6% +74.9% 1313 ± 14% slabinfo.nsproxy.num_objs
37639 ± 4% -16.0% 31603 ± 6% slabinfo.radix_tree_node.active_objs
705.70 ± 4% -17.4% 583.10 ± 7% slabinfo.radix_tree_node.active_slabs
39498 ± 3% -17.4% 32627 ± 7% slabinfo.radix_tree_node.num_objs
705.70 ± 4% -17.4% 583.10 ± 7% slabinfo.radix_tree_node.num_slabs
146695 ± 1% -28.0% 105561 ± 17% sched_debug.cfs_rq:/.exec_clock.max
57.20 ± 73% +574.1% 385.58 ± 26% sched_debug.cfs_rq:/.exec_clock.min
50781 ± 2% -26.8% 37173 ± 18% sched_debug.cfs_rq:/.exec_clock.stddev
150.31 ± 5% -25.0% 112.67 ± 17% sched_debug.cfs_rq:/.load_avg.stddev
2792970 ± 4% -40.8% 1653864 ± 29% sched_debug.cfs_rq:/.min_vruntime.max
918436 ± 5% -38.0% 569297 ± 31% sched_debug.cfs_rq:/.min_vruntime.stddev
842.53 ± 5% -31.2% 579.83 ± 18% sched_debug.cfs_rq:/.runnable_load_avg.max
134.83 ± 5% -34.0% 89.01 ± 24% sched_debug.cfs_rq:/.runnable_load_avg.stddev
918795 ± 5% -37.9% 570175 ± 31% sched_debug.cfs_rq:/.spread0.stddev
380.58 ± 2% -14.8% 324.26 ± 10% sched_debug.cfs_rq:/.util_avg.stddev
42.86 ± 17% +1021.2% 480.60 ± 88% sched_debug.cpu.clock.stddev
42.86 ± 17% +1021.2% 480.60 ± 88% sched_debug.cpu.clock_task.stddev
843.67 ± 5% -31.3% 579.83 ± 18% sched_debug.cpu.cpu_load[0].max
134.98 ± 5% -34.0% 89.02 ± 24% sched_debug.cpu.cpu_load[0].stddev
136.36 ± 5% -30.6% 94.67 ± 26% sched_debug.cpu.cpu_load[3].stddev
136.89 ± 5% -31.3% 94.00 ± 26% sched_debug.cpu.cpu_load[4].stddev
441.52 ± 5% -16.4% 369.26 ± 13% sched_debug.cpu.curr->pid.avg
0.00 ± 10% +846.3% 0.00 ± 86% sched_debug.cpu.next_balance.stddev
155148 ± 0% -16.9% 128991 ± 11% sched_debug.cpu.nr_load_updates.max
38237 ± 2% -27.9% 27553 ± 19% sched_debug.cpu.nr_load_updates.stddev
12989 ± 1% +66.7% 21655 ± 11% sched_debug.cpu.nr_switches.avg
38611 ± 9% +50.9% 58253 ± 13% sched_debug.cpu.nr_switches.max
1120 ± 32% +350.3% 5042 ± 29% sched_debug.cpu.nr_switches.min
10487 ± 3% +27.6% 13383 ± 9% sched_debug.cpu.nr_switches.stddev
0.01 ± 14% +1005.7% 0.09 ± 38% sched_debug.cpu.nr_uninterruptible.avg
22.03 ± 19% +124.5% 49.47 ± 28% sched_debug.cpu.nr_uninterruptible.max
-25.55 ±-32% +173.8% -69.95 ±-27% sched_debug.cpu.nr_uninterruptible.min
7.95 ± 16% +144.6% 19.46 ± 18% sched_debug.cpu.nr_uninterruptible.stddev
13007 ± 2% +66.2% 21615 ± 11% sched_debug.cpu.sched_count.avg
664.22 ± 53% +590.3% 4584 ± 32% sched_debug.cpu.sched_count.min
1797 ± 3% +189.0% 5196 ± 21% sched_debug.cpu.sched_goidle.avg
112.57 ± 48% +1648.7% 1968 ± 35% sched_debug.cpu.sched_goidle.min
6018 ± 1% +76.9% 10646 ± 12% sched_debug.cpu.ttwu_count.avg
17865 ± 3% +107.9% 37145 ± 21% sched_debug.cpu.ttwu_count.max
93.22 ± 36% +600.3% 652.75 ± 27% sched_debug.cpu.ttwu_count.min
5368 ± 2% +69.2% 9082 ± 14% sched_debug.cpu.ttwu_count.stddev
4852 ± 1% +36.6% 6626 ± 8% sched_debug.cpu.ttwu_local.avg
55.75 ± 24% +310.6% 228.88 ± 16% sched_debug.cpu.ttwu_local.min
***************************************************************************************************
lkp-hsw-ep4: 72 threads Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz with 128G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_ssd/nr_task/rootfs/runtime/tbox_group/test/testcase/thp_defrag/thp_enabled:
gcc-6/performance/x86_64-rhel-7.2/1/64/debian-x86_64-2016-08-31.cgz/300/lkp-hsw-ep4/swap-w-seq/vm-scalability/never/always
commit:
1d8bf926f8 ("mm/bootmem.c: replace kzalloc() by kzalloc_node()")
371a096edf ("mm: don't use radix tree writeback tags for pages in swap cache")
1d8bf926f8739bd3 371a096edf43a8c71844cf71c2
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
%stddev %change %stddev
\ | \
2125843 ± 0% +14.1% 2426196 ± 1% vm-scalability.throughput
741119 ± 3% +6.0% 785919 ± 2% vm-scalability.time.involuntary_context_switches
1.074e+08 ± 1% +16.5% 1.251e+08 ± 1% vm-scalability.time.minor_page_faults
6243 ± 0% -40.2% 3733 ± 10% vm-scalability.time.percent_of_cpu_this_job_got
21748 ± 0% -41.0% 12831 ± 10% vm-scalability.time.system_time
315.07 ± 1% -10.9% 280.79 ± 3% vm-scalability.time.user_time
333774 ± 2% +12.3% 374973 ± 3% softirqs.SCHED
11885858 ± 0% -39.4% 7205748 ± 9% softirqs.TIMER
40607746 ± 1% +25.4% 50919220 ± 12% numa-numastat.node0.numa_foreign
40597356 ± 1% +25.4% 50910950 ± 12% numa-numastat.node1.numa_miss
0.25 ±173% +1200.0% 3.25 ± 70% numa-numastat.node1.other_node
1211829 ± 1% +16.0% 1405484 ± 1% vmstat.io.bo
4939033 ± 3% +16.1% 5733178 ± 6% vmstat.memory.free
2.1e+08 ± 1% +14.9% 2.412e+08 ± 2% vmstat.memory.swpd
0.00 ± 0% +Inf% 31.50 ± 17% vmstat.procs.b
67.00 ± 0% -42.5% 38.50 ± 10% vmstat.procs.r
1211824 ± 1% +16.0% 1405478 ± 1% vmstat.swap.so
92.51 ± 0% -39.5% 56.01 ± 9% turbostat.%Busy
2583 ± 0% -39.4% 1566 ± 9% turbostat.Avg_MHz
6.96 ± 2% +406.3% 35.26 ± 12% turbostat.CPU%c1
0.01 ± 0% +24025.0% 2.41 ± 19% turbostat.CPU%c3
0.52 ± 21% +1125.7% 6.31 ± 14% turbostat.CPU%c6
258.28 ± 0% -12.0% 227.24 ± 1% turbostat.PkgWatt
55.36 ± 0% +1.1% 55.98 ± 0% turbostat.RAMWatt
28505239 ± 10% -31.5% 19524763 ± 15% meminfo.AnonHugePages
5537897 ± 9% -14.4% 4740114 ± 9% meminfo.DirectMap2M
5822789 ± 2% +13.2% 6594234 ± 5% meminfo.MemAvailable
6222814 ± 2% +12.5% 7001717 ± 4% meminfo.MemFree
709649 ± 0% +12.6% 798807 ± 5% meminfo.PageTables
111455 ± 0% +55.1% 172917 ± 6% meminfo.SUnreclaim
169503 ± 0% +37.7% 233321 ± 4% meminfo.Slab
1504686 ± 3% -24.8% 1131932 ± 15% meminfo.SwapCached
3028 ± 10% +22884.4% 695969 ± 20% meminfo.Writeback
43194655 ± 11% +619.4% 3.107e+08 ± 16% cpuidle.C1-HSW.time
622561 ± 17% +104.2% 1271319 ± 17% cpuidle.C1-HSW.usage
14115248 ± 15% +9297.9% 1.327e+09 ± 16% cpuidle.C1E-HSW.time
84418 ± 8% +2439.5% 2143769 ± 16% cpuidle.C1E-HSW.usage
53083465 ± 15% +2556.0% 1.41e+09 ± 21% cpuidle.C3-HSW.time
154076 ± 13% +1115.8% 1873320 ± 20% cpuidle.C3-HSW.usage
1.803e+09 ± 3% +350.2% 8.117e+09 ± 10% cpuidle.C6-HSW.time
1936956 ± 3% +347.5% 8667200 ± 11% cpuidle.C6-HSW.usage
14995046 ± 10% +2237.0% 3.504e+08 ± 14% cpuidle.POLL.time
13351 ± 17% +202.3% 40357 ± 15% cpuidle.POLL.usage
22648511 ± 14% -36.8% 14307991 ± 19% numa-meminfo.node0.AnonHugePages
1045808 ± 5% -47.9% 544659 ± 15% numa-meminfo.node0.FilePages
58493 ± 4% +57.0% 91837 ± 11% numa-meminfo.node0.SUnreclaim
85451 ± 4% +40.6% 120107 ± 9% numa-meminfo.node0.Slab
1041 ± 12% +15041.9% 157627 ± 13% numa-meminfo.node0.Writeback
5767065 ± 4% -32.4% 3896120 ± 36% numa-meminfo.node1.AnonHugePages
293938 ± 5% +50.9% 443468 ± 27% numa-meminfo.node1.PageTables
52953 ± 4% +53.2% 81112 ± 14% numa-meminfo.node1.SUnreclaim
17882 ± 4% -55.2% 8018 ± 59% numa-meminfo.node1.Shmem
84052 ± 5% +34.8% 113281 ± 11% numa-meminfo.node1.Slab
2236 ± 8% +23862.8% 535987 ± 24% numa-meminfo.node1.Writeback
11242 ± 13% -38.0% 6974 ± 21% numa-vmstat.node0.nr_anon_transparent_hugepages
262464 ± 4% -48.5% 135174 ± 17% numa-vmstat.node0.nr_file_pages
1215804 ± 3% +13.8% 1383278 ± 4% numa-vmstat.node0.nr_free_pages
1287 ± 17% +102.0% 2599 ± 24% numa-vmstat.node0.nr_isolated_anon
14625 ± 4% +54.0% 22521 ± 12% numa-vmstat.node0.nr_slab_unreclaimable
10793807 ± 9% +36.2% 14705195 ± 5% numa-vmstat.node0.nr_vmscan_write
233.00 ± 7% +15561.6% 36491 ± 15% numa-vmstat.node0.nr_writeback
10793782 ± 9% +35.9% 14668927 ± 5% numa-vmstat.node0.nr_written
233.00 ± 8% +15561.5% 36491 ± 15% numa-vmstat.node0.nr_zone_write_pending
22566322 ± 3% +31.3% 29625222 ± 12% numa-vmstat.node0.numa_foreign
2872 ± 4% -27.1% 2093 ± 23% numa-vmstat.node1.nr_anon_transparent_hugepages
404362 ± 9% +34.0% 541983 ± 19% numa-vmstat.node1.nr_free_pages
156.00 ± 11% +95.4% 304.75 ± 15% numa-vmstat.node1.nr_pages_scanned
13235 ± 4% +50.2% 19874 ± 14% numa-vmstat.node1.nr_slab_unreclaimable
49196401 ± 1% +24.1% 61044573 ± 7% numa-vmstat.node1.nr_vmscan_write
572.50 ± 14% +21710.0% 124862 ± 28% numa-vmstat.node1.nr_writeback
49196179 ± 1% +23.8% 60919929 ± 7% numa-vmstat.node1.nr_written
574.25 ± 14% +21643.7% 124863 ± 28% numa-vmstat.node1.nr_zone_write_pending
22550830 ± 3% +31.3% 29618804 ± 12% numa-vmstat.node1.numa_miss
4.086e+12 ± 0% -28.4% 2.928e+12 ± 5% perf-stat.branch-instructions
0.10 ± 0% +52.0% 0.15 ± 6% perf-stat.branch-miss-rate%
4.119e+09 ± 0% +8.5% 4.471e+09 ± 1% perf-stat.branch-misses
30.75 ± 0% -13.1% 26.72 ± 3% perf-stat.cache-miss-rate%
1.046e+10 ± 1% +9.9% 1.149e+10 ± 1% perf-stat.cache-misses
3.4e+10 ± 0% +26.5% 4.303e+10 ± 2% perf-stat.cache-references
6.545e+13 ± 0% -37.9% 4.065e+13 ± 7% perf-stat.cpu-cycles
195546 ± 4% +32.7% 259393 ± 6% perf-stat.cpu-migrations
0.04 ± 12% +84.7% 0.08 ± 10% perf-stat.dTLB-load-miss-rate%
4.077e+12 ± 0% -33.5% 2.713e+12 ± 8% perf-stat.dTLB-loads
0.18 ± 4% +16.4% 0.21 ± 2% perf-stat.dTLB-store-miss-rate%
6.92e+08 ± 3% +17.8% 8.149e+08 ± 9% perf-stat.dTLB-store-misses
37.76 ± 2% -41.7% 22.01 ± 5% perf-stat.iTLB-load-miss-rate%
42983189 ± 3% +36.3% 58591746 ± 8% perf-stat.iTLB-load-misses
70857240 ± 2% +194.4% 2.086e+08 ± 11% perf-stat.iTLB-loads
1.656e+13 ± 0% -27.9% 1.194e+13 ± 5% perf-stat.instructions
385661 ± 3% -46.6% 205896 ± 12% perf-stat.instructions-per-iTLB-miss
0.25 ± 0% +16.3% 0.29 ± 2% perf-stat.ipc
1.079e+08 ± 1% +16.4% 1.256e+08 ± 1% perf-stat.minor-faults
85.52 ± 0% -4.7% 81.49 ± 0% perf-stat.node-load-miss-rate%
9.377e+08 ± 2% +28.9% 1.208e+09 ± 4% perf-stat.node-loads
55.12 ± 0% +2.7% 56.60 ± 1% perf-stat.node-store-miss-rate%
2.191e+09 ± 0% +4.7% 2.294e+09 ± 1% perf-stat.node-store-misses
1.079e+08 ± 1% +16.4% 1.256e+08 ± 1% perf-stat.page-faults
174.75 ± 11% +206.4% 535.50 ± 38% slabinfo.bdev_cache.active_objs
174.75 ± 11% +206.4% 535.50 ± 38% slabinfo.bdev_cache.num_objs
6222 ± 1% +18.0% 7341 ± 5% slabinfo.cred_jar.active_objs
6222 ± 1% +18.0% 7341 ± 5% slabinfo.cred_jar.num_objs
453.00 ± 11% +65.0% 747.25 ± 16% slabinfo.file_lock_cache.active_objs
453.00 ± 11% +65.0% 747.25 ± 16% slabinfo.file_lock_cache.num_objs
4046 ± 1% +101.9% 8169 ± 3% slabinfo.kmalloc-1024.active_objs
126.50 ± 0% +106.5% 261.25 ± 3% slabinfo.kmalloc-1024.active_slabs
4071 ± 0% +105.6% 8372 ± 3% slabinfo.kmalloc-1024.num_objs
126.50 ± 0% +106.5% 261.25 ± 3% slabinfo.kmalloc-1024.num_slabs
7564 ± 0% +17.1% 8853 ± 1% slabinfo.kmalloc-192.active_objs
7615 ± 0% +16.3% 8860 ± 1% slabinfo.kmalloc-192.num_objs
4766 ± 1% +69.8% 8094 ± 8% slabinfo.kmalloc-2048.active_objs
307.00 ± 2% +67.9% 515.50 ± 8% slabinfo.kmalloc-2048.active_slabs
4826 ± 1% +70.5% 8227 ± 8% slabinfo.kmalloc-2048.num_objs
307.00 ± 2% +67.9% 515.50 ± 8% slabinfo.kmalloc-2048.num_slabs
14592 ± 1% +934.5% 150965 ± 34% slabinfo.kmalloc-256.active_objs
342.25 ± 1% +1924.0% 6927 ± 34% slabinfo.kmalloc-256.active_slabs
14906 ± 1% +934.8% 154254 ± 34% slabinfo.kmalloc-256.num_objs
342.25 ± 1% +1924.0% 6927 ± 34% slabinfo.kmalloc-256.num_slabs
12576 ± 3% +51.5% 19055 ± 7% slabinfo.kmalloc-512.active_objs
218.50 ± 15% +56.3% 341.50 ± 10% slabinfo.kmalloc-512.active_slabs
12727 ± 3% +54.0% 19595 ± 9% slabinfo.kmalloc-512.num_objs
218.50 ± 15% +56.3% 341.50 ± 10% slabinfo.kmalloc-512.num_slabs
765.50 ± 4% +69.1% 1294 ± 21% slabinfo.nsproxy.active_objs
765.50 ± 4% +69.1% 1294 ± 21% slabinfo.nsproxy.num_objs
1612 ± 13% -69.5% 492.25 ± 96% proc-vmstat.compact_fail
1619 ± 12% -69.1% 501.25 ± 95% proc-vmstat.compact_stall
13831 ± 9% -31.2% 9522 ± 16% proc-vmstat.nr_anon_transparent_hugepages
140989 ± 6% +25.8% 177310 ± 4% proc-vmstat.nr_dirty_background_threshold
282323 ± 6% +25.8% 355056 ± 4% proc-vmstat.nr_dirty_threshold
471239 ± 3% -20.9% 372789 ± 11% proc-vmstat.nr_file_pages
1496965 ± 6% +24.3% 1860851 ± 3% proc-vmstat.nr_free_pages
5139 ± 4% +21.5% 6246 ± 2% proc-vmstat.nr_isolated_anon
178332 ± 1% +13.1% 201722 ± 5% proc-vmstat.nr_page_table_pages
147.75 ± 8% +100.8% 296.75 ± 18% proc-vmstat.nr_pages_scanned
27862 ± 0% +52.7% 42558 ± 6% proc-vmstat.nr_slab_unreclaimable
60269851 ± 1% +25.1% 75424523 ± 5% proc-vmstat.nr_vmscan_write
774.25 ± 2% +21072.7% 163929 ± 22% proc-vmstat.nr_writeback
1.073e+08 ± 1% +15.5% 1.239e+08 ± 2% proc-vmstat.nr_written
775.50 ± 2% +21038.5% 163928 ± 22% proc-vmstat.nr_zone_write_pending
46256859 ± 1% +26.7% 58617918 ± 8% proc-vmstat.numa_foreign
13246 ± 6% -46.1% 7141 ± 17% proc-vmstat.numa_hint_faults
8187 ± 2% -42.4% 4713 ± 20% proc-vmstat.numa_hint_faults_local
46256859 ± 1% +26.7% 58617918 ± 8% proc-vmstat.numa_miss
1.394e+08 ± 0% +13.0% 1.575e+08 ± 1% proc-vmstat.pgalloc_normal
1.084e+08 ± 1% +15.4% 1.251e+08 ± 2% proc-vmstat.pgdeactivate
1.079e+08 ± 1% +16.4% 1.257e+08 ± 1% proc-vmstat.pgfault
1.377e+08 ± 1% +13.3% 1.559e+08 ± 1% proc-vmstat.pgfree
3711 ± 22% -67.7% 1198 ± 32% proc-vmstat.pgmigrate_fail
4.293e+08 ± 1% +15.5% 4.958e+08 ± 2% proc-vmstat.pgpgout
76556208 ± 1% +21.6% 93113321 ± 2% proc-vmstat.pgrefill
61504865 ± 2% +85.7% 1.142e+08 ± 5% proc-vmstat.pgrotated
1.754e+08 ± 1% +26.7% 2.223e+08 ± 4% proc-vmstat.pgscan_direct
8178772 ± 3% +61.6% 13217350 ± 18% proc-vmstat.pgscan_kswapd
1.007e+08 ± 1% +12.8% 1.136e+08 ± 2% proc-vmstat.pgsteal_direct
6418690 ± 7% +57.7% 10125267 ± 23% proc-vmstat.pgsteal_kswapd
1.073e+08 ± 1% +15.5% 1.239e+08 ± 2% proc-vmstat.pswpout
209541 ± 1% +16.5% 244206 ± 1% proc-vmstat.thp_fault_fallback
142799 ± 0% -43.9% 80147 ± 5% sched_debug.cfs_rq:/.exec_clock.avg
131668 ± 1% -69.0% 40852 ± 14% sched_debug.cfs_rq:/.exec_clock.min
5099 ± 13% +520.8% 31657 ± 20% sched_debug.cfs_rq:/.exec_clock.stddev
227.88 ± 4% +35.0% 307.70 ± 15% sched_debug.cfs_rq:/.load_avg.min
246.74 ± 26% -53.7% 114.30 ± 23% sched_debug.cfs_rq:/.load_avg.stddev
9367426 ± 0% -58.8% 3856380 ± 10% sched_debug.cfs_rq:/.min_vruntime.avg
10027888 ± 0% -46.2% 5391954 ± 6% sched_debug.cfs_rq:/.min_vruntime.max
8536208 ± 2% -70.5% 2518516 ± 18% sched_debug.cfs_rq:/.min_vruntime.min
308749 ± 8% +188.8% 891811 ± 21% sched_debug.cfs_rq:/.min_vruntime.stddev
0.82 ± 2% -31.3% 0.56 ± 15% sched_debug.cfs_rq:/.nr_running.avg
0.23 ± 13% +43.2% 0.33 ± 8% sched_debug.cfs_rq:/.nr_running.stddev
36.70 ± 13% -67.1% 12.06 ± 20% sched_debug.cfs_rq:/.nr_spread_over.avg
123.42 ± 19% -68.2% 39.20 ± 35% sched_debug.cfs_rq:/.nr_spread_over.max
8.92 ± 14% -64.9% 3.13 ± 52% sched_debug.cfs_rq:/.nr_spread_over.min
23.11 ± 20% -74.8% 5.83 ± 27% sched_debug.cfs_rq:/.nr_spread_over.stddev
32.16 ± 6% -37.6% 20.08 ± 33% sched_debug.cfs_rq:/.runnable_load_avg.avg
653879 ± 47% +135.8% 1541932 ± 15% sched_debug.cfs_rq:/.spread0.max
315599 ± 9% +184.2% 896957 ± 21% sched_debug.cfs_rq:/.spread0.stddev
898.96 ± 1% -27.4% 652.99 ± 15% sched_debug.cfs_rq:/.util_avg.avg
673946 ± 3% +24.3% 837742 ± 4% sched_debug.cpu.avg_idle.avg
49038 ± 8% +259.6% 176360 ± 60% sched_debug.cpu.avg_idle.min
301898 ± 6% -25.5% 225046 ± 21% sched_debug.cpu.avg_idle.stddev
190551 ± 0% +9.0% 207609 ± 2% sched_debug.cpu.clock.max
233.98 ± 3% +2164.8% 5299 ± 53% sched_debug.cpu.clock.stddev
190551 ± 0% +9.0% 207609 ± 2% sched_debug.cpu.clock_task.max
233.98 ± 3% +2164.8% 5299 ± 53% sched_debug.cpu.clock_task.stddev
32.11 ± 6% -37.8% 19.98 ± 33% sched_debug.cpu.cpu_load[0].avg
35.42 ± 9% -37.8% 22.02 ± 26% sched_debug.cpu.cpu_load[1].avg
133.36 ± 15% -39.1% 81.19 ± 37% sched_debug.cpu.cpu_load[1].stddev
34.65 ± 7% -38.1% 21.45 ± 27% sched_debug.cpu.cpu_load[2].avg
126.41 ± 9% -38.0% 78.32 ± 38% sched_debug.cpu.cpu_load[2].stddev
34.35 ± 6% -38.4% 21.15 ± 28% sched_debug.cpu.cpu_load[3].avg
123.58 ± 6% -38.4% 76.07 ± 38% sched_debug.cpu.cpu_load[3].stddev
34.45 ± 5% -38.7% 21.11 ± 28% sched_debug.cpu.cpu_load[4].avg
123.29 ± 4% -38.7% 75.54 ± 38% sched_debug.cpu.cpu_load[4].stddev
1367 ± 2% -31.7% 934.09 ± 15% sched_debug.cpu.curr->pid.avg
0.00 ± 2% +2051.8% 0.01 ± 53% sched_debug.cpu.next_balance.stddev
148590 ± 1% -25.6% 110559 ± 3% sched_debug.cpu.nr_load_updates.avg
139700 ± 1% -43.5% 78865 ± 5% sched_debug.cpu.nr_load_updates.min
3733 ± 13% +519.4% 23123 ± 21% sched_debug.cpu.nr_load_updates.stddev
0.84 ± 2% -31.9% 0.57 ± 17% sched_debug.cpu.nr_running.avg
0.28 ± 10% +28.5% 0.36 ± 10% sched_debug.cpu.nr_running.stddev
14395 ± 9% +34.0% 19289 ± 23% sched_debug.cpu.nr_switches.min
0.11 ± 43% +470.4% 0.62 ± 52% sched_debug.cpu.nr_uninterruptible.avg
7400 ± 21% +31.3% 9718 ± 15% sched_debug.cpu.sched_goidle.avg
1050 ± 7% +245.9% 3632 ± 38% sched_debug.cpu.sched_goidle.min
40897 ± 23% +43.3% 58610 ± 7% sched_debug.cpu.ttwu_count.max
10300 ± 11% -51.3% 5018 ± 29% sched_debug.cpu.ttwu_count.min
5838 ± 26% +99.6% 11652 ± 12% sched_debug.cpu.ttwu_count.stddev
7617 ± 7% +18.5% 9024 ± 5% sched_debug.cpu.ttwu_local.avg
4191 ± 25% +36.8% 5733 ± 9% sched_debug.cpu.ttwu_local.stddev
Thanks,
Xiaolong
[lkp] [ipv6] 3e1ad8cb8a: kmsg.IPv6:Attempt_to_unregister_permanent_protocol
by kernel test robot
FYI, we noticed the following commit:
https://github.com/0day-ci/linux David-Lebrun/net-add-support-for-IPv6-Segment-Routing/20161104-184052
commit 3e1ad8cb8a222ca704cd38b97ba68122d4e106f2 ("ipv6: sr: add support for SRH encapsulation and injection with lwtunnels")
in testcase: fwq
with following parameters:
nr_task: 100%
samples: 100000ss
iterations: 18x
cpufreq_governor: performance
on test machine: 8 threads Intel(R) Atom(TM) CPU C2750 @ 2.40GHz with 16G memory
caused the following changes:
[ 9.511484] Initializing XFRM netlink socket
[ 9.516012] NET: Registered protocol family 10
[ 9.536828] IPv6: Attempt to unregister permanent protocol 6
[ 9.549824] IPv6: Attempt to unregister permanent protocol 136
[ 9.558797] IPv6: Attempt to unregister permanent protocol 17
[ 9.671824] NET: Unregistered protocol family 10
[ 9.679917] NET: Registered protocol family 17
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Xiaolong
[lkp] [ext4] adad5aa544: fio.write_bw_MBps +4074.4% improvement
by kernel test robot
Greeting,
FYI, we noticed a +4074.4% improvement of fio.write_bw_MBps due to commit:
commit adad5aa544e281d84f837b2786809611cb35a999 ("ext4: Use clean_bdev_aliases() instead of iteration")
https://github.com/0day-ci/linux Jan-Kara/fs-Provide-function-to-unmap-metadata-for-a-range-of-blocks/20161105-030924
in testcase: fio-basic
on test machine: 56 threads Intel(R) Xeon(R) CPU E5-2695 v3 @ 2.30GHz with 256G memory
with following parameters:
disk: 2pmem
fs: ext4
runtime: 200s
nr_task: 50%
time_based: tb
rw: randwrite
bs: 4k
ioengine: libaio
test_size: 200G
cpufreq_governor: performance
Fio is a tool that will spawn a number of threads or processes doing a particular type of I/O action as specified by the user.
In addition to that, the commit also has significant impact on the following tests:
+------------------+-----------------------------------------------------------------------+
| testcase: change | fio-basic: fio.write_bw_MBps +3928.3% improvement |
| test machine | 56 threads Intel(R) Xeon(R) CPU E5-2695 v3 @ 2.30GHz with 256G memory |
| test parameters | bs=4k |
| | cpufreq_governor=performance |
| | disk=2pmem |
| | fs=ext4 |
| | ioengine=sync |
| | nr_task=50% |
| | runtime=200s |
| | rw=randwrite |
| | test_size=200G |
| | time_based=tb |
+------------------+-----------------------------------------------------------------------+
| testcase: change | fio-basic: fio.write_bw_MBps +88.0% improvement |
| test machine | 16 threads Intel(R) Xeon(R) CPU D-1541 @ 2.10GHz with 8G memory |
| test parameters | bs=4k |
| | cpufreq_governor=performance |
| | disk=1SSD |
| | fs=ext4 |
| | ioengine=sync |
| | nr_task=64 |
| | runtime=300s |
| | rw=randwrite |
| | test_size=400g |
+------------------+-----------------------------------------------------------------------+
| testcase: change | trinity: |
| test machine | qemu-system-x86_64 -enable-kvm -cpu IvyBridge -m 360M |
| test parameters | runtime=300s |
+------------------+-----------------------------------------------------------------------+
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
bs/compiler/cpufreq_governor/disk/fs/ioengine/kconfig/nr_task/rootfs/runtime/rw/tbox_group/test_size/testcase/time_based:
4k/gcc-6/performance/2pmem/ext4/libaio/x86_64-rhel-7.2/50%/debian-x86_64-2016-08-31.cgz/200s/randwrite/lkp-hsw-ep6/200G/fio-basic/tb
commit:
6f2b562c3a ("direct-io: Use clean_bdev_aliases() instead of handmade iteration")
adad5aa544 ("ext4: Use clean_bdev_aliases() instead of iteration")
6f2b562c3a89f4a6 adad5aa544e281d84f837b2786
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
%stddev %change %stddev
\ | \
64.45 ± 0% +4074.4% 2690 ± 2% fio.write_bw_MBps
0.05 ± 52% -100.0% 0.00 ± -1% fio.latency_1000ms%
0.01 ± 0% +1075.0% 0.12 ± 58% fio.latency_1000us%
3.41 ± 0% -97.3% 0.09 ± 8% fio.latency_10ms%
0.42 ± 4% -70.1% 0.12 ± 6% fio.latency_20ms%
23.23 ± 1% -100.0% 0.01 ± 0% fio.latency_250ms%
14.50 ± 3% -82.6% 2.52 ± 24% fio.latency_250us%
0.06 ± 20% +86.4% 0.10 ± 4% fio.latency_2ms%
0.01 ± 0% +100.0% 0.02 ± 0% fio.latency_4ms%
1.95 ± 6% -99.6% 0.01 ± 57% fio.latency_500ms%
53.08 ± 2% +52.0% 80.68 ± 3% fio.latency_500us%
0.14 ± 13% +1492.9% 2.23 ± 3% fio.latency_50ms%
1.05 ± 13% -100.0% 0.00 ± -1% fio.latency_750ms%
2.12 ± 52% +560.9% 14.03 ± 16% fio.latency_750us%
26414526 ± 0% +4069.9% 1.101e+09 ± 2% fio.time.file_system_outputs
673.00 ± 3% +915.8% 6836 ± 12% fio.time.involuntary_context_switches
35917 ± 2% +103.7% 73176 ± 11% fio.time.minor_page_faults
21.25 ± 3% +4110.6% 894.75 ± 1% fio.time.percent_of_cpu_this_job_got
29.64 ± 3% +5107.6% 1543 ± 1% fio.time.system_time
13.86 ± 2% +1755.6% 257.23 ± 1% fio.time.user_time
58627 ± 0% +808.6% 532701 ± 0% fio.time.voluntary_context_switches
166912 ± 2% -99.6% 586.00 ± 2% fio.write_clat_90%_us
207872 ± 0% -99.7% 638.00 ± 2% fio.write_clat_95%_us
521216 ± 20% -92.8% 37632 ± 1% fio.write_clat_99%_us
52577 ± 0% -97.6% 1261 ± 2% fio.write_clat_mean_us
105001 ± 2% -94.6% 5676 ± 1% fio.write_clat_stddev
16498 ± 0% +4074.4% 688706 ± 2% fio.write_iops
1693 ± 0% -97.7% 38.82 ± 2% fio.write_slat_mean_us
17054 ± 0% -94.0% 1021 ± 1% fio.write_slat_stddev
53581 ± 7% +272.4% 199560 ± 3% softirqs.RCU
158867 ± 1% +69.5% 269287 ± 0% softirqs.SCHED
565968 ± 0% +175.8% 1560998 ± 1% softirqs.TIMER
3629667 ± 9% +237.9% 12265065 ± 6% cpuidle.C1-HSW.time
66590 ± 12% +689.8% 525953 ± 2% cpuidle.C1-HSW.usage
4.193e+09 ± 11% -29.6% 2.952e+09 ± 3% cpuidle.C3-HSW.time
4346531 ± 11% -28.6% 3101279 ± 3% cpuidle.C3-HSW.usage
2467 ± 22% +434.5% 13186 ± 2% cpuidle.POLL.usage
1321503 ± 47% +4167.7% 56398032 ± 19% numa-numastat.node0.local_node
542888 ± 99% +2637.3% 14860275 ± 50% numa-numastat.node0.numa_foreign
1321513 ± 47% +4167.7% 56398044 ± 19% numa-numastat.node0.numa_hit
592541 ± 99% +1833.8% 11458393 ± 54% numa-numastat.node0.numa_miss
1425379 ± 39% +3705.7% 54245725 ± 29% numa-numastat.node1.local_node
590754 ± 99% +1840.5% 11463742 ± 55% numa-numastat.node1.numa_foreign
1425389 ± 39% +3705.7% 54245733 ± 29% numa-numastat.node1.numa_hit
541101 ± 99% +2647.3% 14865705 ± 50% numa-numastat.node1.numa_miss
66807 ± 0% +5149.0% 3506733 ± 1% vmstat.io.bo
19111 ± 1% +906.8% 192416 ± 0% vmstat.memory.buff
12571099 ± 0% +231.6% 41679612 ± 0% vmstat.memory.cache
32807369 ± 0% -89.3% 3505038 ± 0% vmstat.memory.free
27.00 ± 0% -33.3% 18.00 ± 0% vmstat.procs.b
2.00 ± 0% +475.0% 11.50 ± 4% vmstat.procs.r
2235 ± 4% +414.3% 11499 ± 3% vmstat.system.cs
57912 ± 0% +1.1% 58522 ± 0% vmstat.system.in
7.67 ± 1% +219.5% 24.51 ± 1% turbostat.%Busy
160.00 ± 1% +132.2% 371.50 ± 1% turbostat.Avg_MHz
2084 ± 1% -27.3% 1515 ± 0% turbostat.Bzy_MHz
30.71 ± 4% +10.2% 33.84 ± 1% turbostat.CPU%c1
37.92 ± 15% -32.6% 25.57 ± 9% turbostat.CPU%c3
68.50 ± 3% -6.9% 63.75 ± 3% turbostat.CoreTmp
4.84 ± 11% +69.0% 8.19 ± 21% turbostat.PKG_%
20.77 ± 19% -90.5% 1.97 ± 47% turbostat.Pkg%pc2
103.88 ± 1% +12.3% 116.62 ± 0% turbostat.PkgWatt
81.35 ± 0% +45.5% 118.40 ± 0% turbostat.RAMWatt
162447 ± 0% +131.8% 376599 ± 0% meminfo.Active
58081 ± 1% +364.5% 269806 ± 1% meminfo.Active(file)
19012 ± 1% +911.6% 192331 ± 0% meminfo.Buffers
11732139 ± 0% +240.7% 39971418 ± 0% meminfo.Cached
202839 ± 0% -78.8% 43021 ± 3% meminfo.CmaFree
8269379 ± 0% -14.8% 7041826 ± 0% meminfo.Dirty
11689070 ± 0% +236.3% 39313737 ± 0% meminfo.Inactive
11567135 ± 0% +238.8% 39191794 ± 0% meminfo.Inactive(file)
32825801 ± 0% -89.3% 3516995 ± 0% meminfo.MemFree
820135 ± 0% +106.8% 1695987 ± 0% meminfo.SReclaimable
920315 ± 0% +95.2% 1796868 ± 0% meminfo.Slab
659.25 ±100% +86807.4% 572937 ± 0% meminfo.Unevictable
4075428 ± 5% -14.0% 3506877 ± 8% numa-meminfo.node0.Dirty
5833539 ± 6% +245.8% 20174456 ± 0% numa-meminfo.node0.FilePages
5797730 ± 6% +242.3% 19845628 ± 0% numa-meminfo.node0.Inactive
5708833 ± 5% +245.6% 19726898 ± 0% numa-meminfo.node0.Inactive(file)
16344904 ± 4% -88.7% 1841535 ± 1% numa-meminfo.node0.MemFree
8261907 ± 8% +175.5% 22765276 ± 0% numa-meminfo.node0.MemUsed
381.50 ±100% +75068.0% 286766 ± 0% numa-meminfo.node0.Unevictable
76111 ± 29% +267.1% 279386 ± 12% numa-meminfo.node1.Active
24102 ± 30% +862.9% 232089 ± 5% numa-meminfo.node1.Active(file)
4193777 ± 5% -15.8% 3532488 ± 8% numa-meminfo.node1.Dirty
5920446 ± 5% +237.7% 19992694 ± 0% numa-meminfo.node1.FilePages
5894203 ± 5% +230.3% 19470961 ± 0% numa-meminfo.node1.Inactive
5861163 ± 4% +232.1% 19467743 ± 0% numa-meminfo.node1.Inactive(file)
16477881 ± 4% -89.9% 1671879 ± 2% numa-meminfo.node1.MemFree
8272729 ± 8% +179.0% 23078732 ± 0% numa-meminfo.node1.MemUsed
412948 ± 82% +176.4% 1141200 ± 6% numa-meminfo.node1.SReclaimable
457998 ± 73% +159.9% 1190119 ± 6% numa-meminfo.node1.Slab
277.50 ±100% +1e+05% 286552 ± 0% numa-meminfo.node1.Unevictable
946.00 ± 9% +31.9% 1248 ± 5% slabinfo.Acpi-ParseExt.active_objs
1003 ± 7% +28.7% 1291 ± 4% slabinfo.Acpi-ParseExt.num_objs
2728626 ± 0% +260.1% 9826661 ± 0% slabinfo.buffer_head.active_objs
69964 ± 0% +260.4% 252129 ± 0% slabinfo.buffer_head.active_slabs
2728626 ± 0% +260.4% 9833052 ± 0% slabinfo.buffer_head.num_objs
69964 ± 0% +260.4% 252129 ± 0% slabinfo.buffer_head.num_slabs
149.00 ± 37% +642.8% 1106 ± 3% slabinfo.dquot.active_objs
149.00 ± 37% +642.8% 1106 ± 3% slabinfo.dquot.num_objs
915961 ± 1% +163.7% 2415651 ± 0% slabinfo.ext4_extent_status.active_objs
8979 ± 1% +358.0% 41130 ± 1% slabinfo.ext4_extent_status.active_slabs
915961 ± 1% +358.0% 4195339 ± 1% slabinfo.ext4_extent_status.num_objs
8979 ± 1% +358.0% 41130 ± 1% slabinfo.ext4_extent_status.num_slabs
126.00 ± 0% +237.9% 425.75 ± 15% slabinfo.ext4_io_end.active_objs
126.00 ± 0% +237.9% 425.75 ± 15% slabinfo.ext4_io_end.num_objs
3350 ± 11% +32.3% 4432 ± 2% slabinfo.jbd2_journal_handle.active_objs
3350 ± 11% +32.3% 4432 ± 2% slabinfo.jbd2_journal_handle.num_objs
1818 ± 0% +1433.8% 27884 ± 2% slabinfo.jbd2_journal_head.active_objs
67.50 ± 1% +1210.4% 884.50 ± 3% slabinfo.jbd2_journal_head.active_slabs
2315 ± 1% +1199.4% 30090 ± 3% slabinfo.jbd2_journal_head.num_objs
67.50 ± 1% +1210.4% 884.50 ± 3% slabinfo.jbd2_journal_head.num_slabs
1722 ± 3% +11.7% 1923 ± 4% slabinfo.mnt_cache.active_objs
1722 ± 3% +11.7% 1923 ± 4% slabinfo.mnt_cache.num_objs
8.187e+11 ± 5% -21.5% 6.428e+11 ± 2% perf-stat.branch-instructions
0.06 ± 13% +2272.9% 1.46 ± 1% perf-stat.branch-miss-rate%
5.033e+08 ± 15% +1758.8% 9.356e+09 ± 3% perf-stat.branch-misses
3.46 ± 2% +746.3% 29.25 ± 1% perf-stat.cache-miss-rate%
1.366e+08 ± 8% +7828.0% 1.083e+10 ± 3% perf-stat.cache-misses
3.951e+09 ± 7% +837.4% 3.704e+10 ± 2% perf-stat.cache-references
447606 ± 4% +420.9% 2331472 ± 3% perf-stat.context-switches
1.91e+12 ± 3% +131.8% 4.427e+12 ± 2% perf-stat.cpu-cycles
7857 ± 6% +196.2% 23270 ± 9% perf-stat.cpu-migrations
0.03 ± 7% +3471.0% 1.23 ± 6% perf-stat.dTLB-load-miss-rate%
4.077e+08 ± 5% +2913.0% 1.228e+10 ± 7% perf-stat.dTLB-load-misses
1.195e+12 ± 12% -17.4% 9.878e+11 ± 4% perf-stat.dTLB-loads
0.01 ± 8% +1635.3% 0.13 ± 21% perf-stat.dTLB-store-miss-rate%
76890357 ± 3% +976.0% 8.273e+08 ± 23% perf-stat.dTLB-store-misses
1.067e+12 ± 11% -38.9% 6.521e+11 ± 5% perf-stat.dTLB-stores
57.00 ± 1% -38.6% 35.00 ± 2% perf-stat.iTLB-load-miss-rate%
80424911 ± 1% -15.4% 68057203 ± 2% perf-stat.iTLB-load-misses
60673587 ± 2% +108.3% 1.264e+08 ± 2% perf-stat.iTLB-loads
4.486e+12 ± 5% -20.6% 3.563e+12 ± 2% perf-stat.instructions
55749 ± 3% -6.1% 52369 ± 1% perf-stat.instructions-per-iTLB-miss
2.35 ± 2% -65.7% 0.81 ± 1% perf-stat.ipc
420593 ± 0% +9.3% 459778 ± 1% perf-stat.minor-faults
48205740 ± 7% +9150.7% 4.459e+09 ± 11% perf-stat.node-load-misses
47211353 ± 3% +9255.4% 4.417e+09 ± 17% perf-stat.node-loads
16243571 ± 8% +2951.8% 4.957e+08 ± 10% perf-stat.node-store-misses
19185688 ± 4% +2840.8% 5.642e+08 ± 4% perf-stat.node-stores
420632 ± 0% +9.3% 459778 ± 1% perf-stat.page-faults
14521 ± 1% +364.6% 67463 ± 1% proc-vmstat.nr_active_file
3310726 ± 0% +4073.0% 1.382e+08 ± 2% proc-vmstat.nr_dirtied
2067310 ± 0% -14.9% 1760148 ± 0% proc-vmstat.nr_dirty
2938080 ± 0% +241.8% 10041069 ± 0% proc-vmstat.nr_file_pages
50709 ± 0% -78.9% 10677 ± 4% proc-vmstat.nr_free_cma
8206148 ± 0% -89.3% 879094 ± 0% proc-vmstat.nr_free_pages
2892082 ± 0% +238.8% 9798085 ± 0% proc-vmstat.nr_inactive_file
205049 ± 0% +106.8% 423996 ± 0% proc-vmstat.nr_slab_reclaimable
164.75 ±100% +86842.0% 143237 ± 0% proc-vmstat.nr_unevictable
1377940 ± 1% +9813.4% 1.366e+08 ± 2% proc-vmstat.nr_written
14521 ± 1% +364.6% 67463 ± 1% proc-vmstat.nr_zone_active_file
2892082 ± 0% +238.8% 9798122 ± 0% proc-vmstat.nr_zone_inactive_file
164.75 ±100% +86842.0% 143237 ± 0% proc-vmstat.nr_zone_unevictable
2067310 ± 0% -14.9% 1760151 ± 0% proc-vmstat.nr_zone_write_pending
1133626 ± 6% +2222.1% 26324018 ± 13% proc-vmstat.numa_foreign
795.25 ± 82% +4364.9% 35507 ± 25% proc-vmstat.numa_hint_faults
493.75 ± 76% +5956.4% 29903 ± 29% proc-vmstat.numa_hint_faults_local
2748483 ± 3% +3925.7% 1.106e+08 ± 5% proc-vmstat.numa_hit
2748463 ± 3% +3925.8% 1.106e+08 ± 5% proc-vmstat.numa_local
1133626 ± 6% +2222.1% 26324092 ± 13% proc-vmstat.numa_miss
295.00 ± 93% +578.1% 2000 ± 21% proc-vmstat.numa_pages_migrated
2372 ± 77% +1632.1% 41094 ± 20% proc-vmstat.numa_pte_updates
2891 ± 7% +43.7% 4155 ± 11% proc-vmstat.pgactivate
0.00 ± 0% +Inf% 5413142 ± 5% proc-vmstat.pgalloc_dma32
4013920 ± 0% +3180.9% 1.317e+08 ± 3% proc-vmstat.pgalloc_normal
444351 ± 0% +28262.2% 1.26e+08 ± 3% proc-vmstat.pgfree
295.00 ± 93% +580.8% 2008 ± 21% proc-vmstat.pgmigrate_success
13537222 ± 0% +5160.6% 7.121e+08 ± 2% proc-vmstat.pgpgout
242.50 ±100% +70361.6% 170869 ± 0% proc-vmstat.unevictable_pgs_culled
1396319 ± 5% +1827.5% 26914501 ± 6% numa-vmstat.node0.nr_dirtied
1018825 ± 5% -14.0% 876621 ± 8% numa-vmstat.node0.nr_dirty
1458483 ± 6% +245.8% 5043691 ± 0% numa-vmstat.node0.nr_file_pages
4086118 ± 4% -88.7% 460244 ± 1% numa-vmstat.node0.nr_free_pages
1427304 ± 5% +245.5% 4931802 ± 0% numa-vmstat.node0.nr_inactive_file
95.25 ±100% +75177.2% 71701 ± 0% numa-vmstat.node0.nr_unevictable
377494 ± 7% +6797.6% 26037879 ± 6% numa-vmstat.node0.nr_written
1427304 ± 5% +245.5% 4931851 ± 0% numa-vmstat.node0.nr_zone_inactive_file
95.25 ±100% +75177.2% 71701 ± 0% numa-vmstat.node0.nr_zone_unevictable
1018825 ± 5% -14.0% 876636 ± 8% numa-vmstat.node0.nr_zone_write_pending
613506 ± 82% +938.4% 6370772 ± 55% numa-vmstat.node0.numa_foreign
1209554 ± 44% +1743.8% 22301956 ± 17% numa-vmstat.node0.numa_hit
1209543 ± 44% +1743.8% 22301941 ± 17% numa-vmstat.node0.numa_local
622639 ± 83% +717.3% 5088961 ± 41% numa-vmstat.node0.numa_miss
6025 ± 30% +863.1% 58033 ± 5% numa-vmstat.node1.nr_active_file
1399379 ± 4% +1908.6% 28107434 ± 12% numa-vmstat.node1.nr_dirtied
1048423 ± 5% -15.8% 882992 ± 8% numa-vmstat.node1.nr_dirty
1480223 ± 5% +237.7% 4998217 ± 0% numa-vmstat.node1.nr_file_pages
50710 ± 0% -78.7% 10798 ± 4% numa-vmstat.node1.nr_free_cma
4119342 ± 4% -89.9% 417911 ± 2% numa-vmstat.node1.nr_free_pages
1465406 ± 4% +232.1% 4866978 ± 0% numa-vmstat.node1.nr_inactive_file
103241 ± 82% +176.3% 285301 ± 6% numa-vmstat.node1.nr_slab_reclaimable
69.00 ±100% +1e+05% 71643 ± 0% numa-vmstat.node1.nr_unevictable
350955 ± 7% +7657.2% 27224439 ± 12% numa-vmstat.node1.nr_written
6025 ± 30% +863.1% 58033 ± 5% numa-vmstat.node1.nr_zone_active_file
1465406 ± 4% +232.1% 4866977 ± 0% numa-vmstat.node1.nr_zone_inactive_file
69.00 ±100% +1e+05% 71643 ± 0% numa-vmstat.node1.nr_zone_unevictable
1048423 ± 5% -15.8% 882992 ± 8% numa-vmstat.node1.nr_zone_write_pending
537217 ± 96% +837.7% 5037423 ± 42% numa-vmstat.node1.numa_foreign
1283021 ± 41% +1606.3% 21892238 ± 32% numa-vmstat.node1.numa_hit
1283007 ± 41% +1606.3% 21892229 ± 32% numa-vmstat.node1.numa_local
528083 ± 96% +1096.6% 6319275 ± 55% numa-vmstat.node1.numa_miss
3799 ± 0% +311.7% 15643 ± 1% sched_debug.cfs_rq:/.exec_clock.avg
88813 ± 0% -31.8% 60546 ± 15% sched_debug.cfs_rq:/.exec_clock.max
135.94 ± 11% +1748.1% 2512 ± 42% sched_debug.cfs_rq:/.exec_clock.min
16353 ± 0% -33.8% 10824 ± 12% sched_debug.cfs_rq:/.exec_clock.stddev
45073 ± 10% +356.5% 205780 ± 13% sched_debug.cfs_rq:/.load.avg
173873 ± 11% +91.4% 332795 ± 3% sched_debug.cfs_rq:/.load.stddev
31.39 ± 1% +525.1% 196.23 ± 9% sched_debug.cfs_rq:/.load_avg.avg
144.71 ± 0% +58.6% 229.45 ± 12% sched_debug.cfs_rq:/.load_avg.stddev
20203 ± 7% +47.9% 29889 ± 9% sched_debug.cfs_rq:/.min_vruntime.avg
110963 ± 2% -31.0% 76608 ± 13% sched_debug.cfs_rq:/.min_vruntime.max
17352 ± 2% -35.4% 11201 ± 11% sched_debug.cfs_rq:/.min_vruntime.stddev
0.09 ± 8% +181.7% 0.26 ± 7% sched_debug.cfs_rq:/.nr_running.avg
0.28 ± 2% +50.6% 0.43 ± 2% sched_debug.cfs_rq:/.nr_running.stddev
29.50 ± 1% +246.8% 102.30 ± 24% sched_debug.cfs_rq:/.runnable_load_avg.avg
141.43 ± 0% +44.7% 204.62 ± 9% sched_debug.cfs_rq:/.runnable_load_avg.stddev
17352 ± 2% -35.4% 11202 ± 11% sched_debug.cfs_rq:/.spread0.stddev
188.45 ± 3% +92.8% 363.35 ± 4% sched_debug.cfs_rq:/.util_avg.avg
158.51 ± 1% +45.3% 230.30 ± 7% sched_debug.cfs_rq:/.util_avg.stddev
99800 ± 0% -27.8% 72076 ± 19% sched_debug.cpu.avg_idle.min
182448 ± 3% +13.6% 207291 ± 7% sched_debug.cpu.avg_idle.stddev
6.26 ± 8% +158.0% 16.16 ± 3% sched_debug.cpu.clock.stddev
6.26 ± 8% +158.0% 16.16 ± 3% sched_debug.cpu.clock_task.stddev
29.01 ± 1% +240.9% 98.90 ± 22% sched_debug.cpu.cpu_load[0].avg
141.19 ± 0% +44.3% 203.80 ± 12% sched_debug.cpu.cpu_load[0].stddev
29.56 ± 1% +350.4% 133.15 ± 19% sched_debug.cpu.cpu_load[1].avg
142.04 ± 0% +49.6% 212.45 ± 9% sched_debug.cpu.cpu_load[1].stddev
29.26 ± 1% +342.0% 129.30 ± 19% sched_debug.cpu.cpu_load[2].avg
141.48 ± 0% +44.3% 204.21 ± 9% sched_debug.cpu.cpu_load[2].stddev
28.93 ± 1% +327.8% 123.79 ± 18% sched_debug.cpu.cpu_load[3].avg
141.35 ± 0% +38.0% 195.12 ± 10% sched_debug.cpu.cpu_load[3].stddev
28.53 ± 1% +312.9% 117.79 ± 19% sched_debug.cpu.cpu_load[4].avg
141.39 ± 0% +30.9% 185.09 ± 10% sched_debug.cpu.cpu_load[4].stddev
182.95 ± 10% +176.1% 505.11 ± 9% sched_debug.cpu.curr->pid.avg
709.20 ± 5% +32.1% 937.00 ± 1% sched_debug.cpu.curr->pid.stddev
44133 ± 11% +374.4% 209378 ± 14% sched_debug.cpu.load.avg
171770 ± 11% +94.3% 333803 ± 4% sched_debug.cpu.load.stddev
33181 ± 0% +19.9% 39776 ± 0% sched_debug.cpu.nr_load_updates.avg
98035 ± 0% -23.4% 75114 ± 9% sched_debug.cpu.nr_load_updates.max
10011 ± 4% +100.8% 20108 ± 14% sched_debug.cpu.nr_load_updates.min
12893 ± 1% -32.8% 8664 ± 12% sched_debug.cpu.nr_load_updates.stddev
0.09 ± 11% +184.3% 0.26 ± 9% sched_debug.cpu.nr_running.avg
0.28 ± 4% +51.2% 0.43 ± 2% sched_debug.cpu.nr_running.stddev
5765 ± 2% +317.3% 24062 ± 2% sched_debug.cpu.nr_switches.avg
26550 ± 39% +581.1% 180832 ± 16% sched_debug.cpu.nr_switches.max
1909 ± 5% +151.0% 4791 ± 16% sched_debug.cpu.nr_switches.min
4347 ± 23% +554.5% 28453 ± 13% sched_debug.cpu.nr_switches.stddev
0.39 ± 0% -35.7% 0.25 ± 12% sched_debug.cpu.nr_uninterruptible.avg
10.88 ± 26% +160.9% 28.38 ± 22% sched_debug.cpu.nr_uninterruptible.max
-12.44 ±-15% +120.1% -27.38 ±-29% sched_debug.cpu.nr_uninterruptible.min
4.40 ± 22% +192.3% 12.85 ± 26% sched_debug.cpu.nr_uninterruptible.stddev
3718 ± 3% +490.5% 21955 ± 2% sched_debug.cpu.sched_count.avg
23269 ± 44% +665.3% 178092 ± 16% sched_debug.cpu.sched_count.max
170.69 ± 30% +1730.8% 3125 ± 25% sched_debug.cpu.sched_count.min
4022 ± 27% +603.4% 28294 ± 13% sched_debug.cpu.sched_count.stddev
1764 ± 4% +509.7% 10757 ± 2% sched_debug.cpu.sched_goidle.avg
11576 ± 44% +666.5% 88735 ± 16% sched_debug.cpu.sched_goidle.max
51.69 ± 28% +2705.3% 1450 ± 24% sched_debug.cpu.sched_goidle.min
2023 ± 26% +598.8% 14140 ± 13% sched_debug.cpu.sched_goidle.stddev
1782 ± 4% +512.4% 10915 ± 2% sched_debug.cpu.ttwu_count.avg
15347 ± 35% +467.3% 87069 ± 18% sched_debug.cpu.ttwu_count.max
76.25 ± 47% +1728.4% 1394 ± 26% sched_debug.cpu.ttwu_count.min
2407 ± 23% +484.3% 14068 ± 17% sched_debug.cpu.ttwu_count.stddev
1010 ± 0% +75.5% 1772 ± 1% sched_debug.cpu.ttwu_local.avg
6785 ± 20% -35.4% 4380 ± 12% sched_debug.cpu.ttwu_local.max
36.69 ± 18% +959.8% 388.81 ± 26% sched_debug.cpu.ttwu_local.min
1133 ± 15% -32.6% 764.11 ± 10% sched_debug.cpu.ttwu_local.stddev
2.70 ± 3% -100.0% 0.00 ± -1% perf-profile.calltrace.cycles-pp.___might_sleep.__might_sleep.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks
1.21 ± 9% -100.0% 0.00 ± -1% perf-profile.calltrace.cycles-pp.___might_sleep.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages
0.00 ± -1% +Inf% 1.38 ± 7% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.pagecache_get_page.grab_cache_page_write_begin
0.00 ± -1% +Inf% 2.16 ± 4% perf-profile.calltrace.cycles-pp.__block_commit_write.block_write_end.generic_write_end.ext4_da_write_end.generic_perform_write
0.00 ± -1% +Inf% 15.40 ± 3% perf-profile.calltrace.cycles-pp.__block_write_begin.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter
0.00 ± -1% +Inf% 15.38 ± 3% perf-profile.calltrace.cycles-pp.__block_write_begin_int.__block_write_begin.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter
0.00 ± -1% +Inf% 4.40 ± 13% perf-profile.calltrace.cycles-pp.__copy_user_nocache.pmem_do_bvec.pmem_make_request.generic_make_request.submit_bio
0.00 ± -1% +Inf% 2.36 ± 18% perf-profile.calltrace.cycles-pp.__delete_from_page_cache.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_node_memcg
0.00 ± -1% +Inf% 2.35 ± 5% perf-profile.calltrace.cycles-pp.__es_insert_extent.ext4_es_insert_extent.ext4_da_get_block_prep.__block_write_begin_int.__block_write_begin
0.00 ± -1% +Inf% 0.99 ± 12% perf-profile.calltrace.cycles-pp.__es_shrink.ext4_es_scan.shrink_slab.shrink_node.kswapd
0.00 ± -1% +Inf% 0.94 ± 4% perf-profile.calltrace.cycles-pp.__es_tree_search.ext4_es_insert_extent.ext4_da_get_block_prep.__block_write_begin_int.__block_write_begin
0.00 ± -1% +Inf% 1.62 ± 13% perf-profile.calltrace.cycles-pp.__ext4_journal_start_sb.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter
0.00 ± -1% +Inf% 2.98 ± 2% perf-profile.calltrace.cycles-pp.__find_get_block.__getblk_gfp.__read_extent_tree_block.ext4_find_extent.ext4_ext_map_blocks
0.00 ± -1% +Inf% 1.58 ± 2% perf-profile.calltrace.cycles-pp.__find_get_block_slow.__find_get_block.__getblk_gfp.__read_extent_tree_block.ext4_find_extent
32.81 ± 5% -100.0% 0.00 ± -1% perf-profile.calltrace.cycles-pp.__find_get_block_slow.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages
0.00 ± -1% +Inf% 29.40 ± 4% perf-profile.calltrace.cycles-pp.__generic_file_write_iter.ext4_file_write_iter.aio_run_iocb.do_io_submit.sys_io_submit
0.00 ± -1% +Inf% 3.17 ± 2% perf-profile.calltrace.cycles-pp.__getblk_gfp.__read_extent_tree_block.ext4_find_extent.ext4_ext_map_blocks.ext4_da_get_block_prep
1.22 ± 7% -100.0% 0.00 ± -1% perf-profile.calltrace.cycles-pp.__might_sleep.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages.do_writepages
4.57 ± 2% -100.0% 0.00 ± -1% perf-profile.calltrace.cycles-pp.__might_sleep.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages
0.00 ± -1% +Inf% 1.53 ± 8% perf-profile.calltrace.cycles-pp.__page_cache_alloc.pagecache_get_page.grab_cache_page_write_begin.ext4_da_write_begin.generic_perform_write
0.00 ± -1% +Inf% 2.09 ± 19% perf-profile.calltrace.cycles-pp.__radix_tree_lookup.__delete_from_page_cache.__remove_mapping.shrink_page_list.shrink_inactive_list
14.60 ± 6% -95.4% 0.67 ± 4% perf-profile.calltrace.cycles-pp.__radix_tree_lookup.radix_tree_lookup_slot.find_get_entry.pagecache_get_page.__find_get_block_slow
0.00 ± -1% +Inf% 1.65 ± 8% perf-profile.calltrace.cycles-pp.__radix_tree_lookup.radix_tree_lookup_slot.find_get_entry.pagecache_get_page.grab_cache_page_write_begin
0.00 ± -1% +Inf% 3.30 ± 2% perf-profile.calltrace.cycles-pp.__read_extent_tree_block.ext4_find_extent.ext4_ext_map_blocks.ext4_da_get_block_prep.__block_write_begin_int
0.00 ± -1% +Inf% 2.99 ± 16% perf-profile.calltrace.cycles-pp.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node
0.00 ± -1% +Inf% 1.22 ± 7% perf-profile.calltrace.cycles-pp.__set_page_dirty.mark_buffer_dirty.__block_commit_write.block_write_end.generic_write_end
0.00 ± -1% +Inf% 0.92 ± 14% perf-profile.calltrace.cycles-pp.__test_set_page_writeback.ext4_bio_write_page.mpage_submit_page.mpage_process_page_bufs.mpage_prepare_extent_to_map
48.30 ± 5% -67.4% 15.72 ± 10% perf-profile.calltrace.cycles-pp.__writeback_inodes_wb.wb_writeback.wb_workfn.process_one_work.worker_thread
48.30 ± 5% -67.5% 15.72 ± 10% perf-profile.calltrace.cycles-pp.__writeback_single_inode.writeback_sb_inodes.__writeback_inodes_wb.wb_writeback.wb_workfn
0.00 ± -1% +Inf% 1.86 ± 8% perf-profile.calltrace.cycles-pp.add_to_page_cache_lru.pagecache_get_page.grab_cache_page_write_begin.ext4_da_write_begin.generic_perform_write
0.00 ± -1% +Inf% 31.07 ± 4% perf-profile.calltrace.cycles-pp.aio_run_iocb.do_io_submit.sys_io_submit.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 1.48 ± 7% perf-profile.calltrace.cycles-pp.alloc_pages_current.__page_cache_alloc.pagecache_get_page.grab_cache_page_write_begin.ext4_da_write_begin
2.38 ± 7% -85.1% 0.35 ±100% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
0.00 ± -1% +Inf% 2.39 ± 13% perf-profile.calltrace.cycles-pp.bio_endio.pmem_make_request.generic_make_request.submit_bio.ext4_io_submit
0.00 ± -1% +Inf% 2.19 ± 4% perf-profile.calltrace.cycles-pp.block_write_end.generic_write_end.ext4_da_write_end.generic_perform_write.__generic_file_write_iter
49.54 ± 6% -16.1% 41.58 ± 5% perf-profile.calltrace.cycles-pp.call_cpuidle.cpu_startup_entry.start_secondary
0.00 ± -1% +Inf% 2.56 ± 4% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.aio_run_iocb
50.10 ± 5% -16.2% 41.98 ± 5% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary
49.52 ± 6% -16.0% 41.58 ± 5% perf-profile.calltrace.cycles-pp.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
46.57 ± 6% -12.2% 40.89 ± 6% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
0.00 ± -1% +Inf% 32.31 ± 4% perf-profile.calltrace.cycles-pp.do_io_submit.sys_io_submit.entry_SYSCALL_64_fastpath
48.30 ± 5% -67.5% 15.72 ± 10% perf-profile.calltrace.cycles-pp.do_writepages.__writeback_single_inode.writeback_sb_inodes.__writeback_inodes_wb.wb_writeback
0.00 ± -1% +Inf% 1.05 ± 11% perf-profile.calltrace.cycles-pp.end_page_writeback.ext4_finish_bio.ext4_end_bio.bio_endio.pmem_make_request
0.32 ±100% +10536.2% 33.77 ± 4% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 0.95 ± 13% perf-profile.calltrace.cycles-pp.es_do_reclaim_extents.es_reclaim_extents.__es_shrink.ext4_es_scan.shrink_slab
0.00 ± -1% +Inf% 0.98 ± 12% perf-profile.calltrace.cycles-pp.es_reclaim_extents.__es_shrink.ext4_es_scan.shrink_slab.shrink_node
0.00 ± -1% +Inf% 0.78 ± 25% perf-profile.calltrace.cycles-pp.ext4_bio_write_page.mpage_submit_page.mpage_map_and_submit_buffers.ext4_writepages.do_writepages
0.00 ± -1% +Inf% 8.89 ± 12% perf-profile.calltrace.cycles-pp.ext4_bio_write_page.mpage_submit_page.mpage_process_page_bufs.mpage_prepare_extent_to_map.ext4_writepages
0.00 ± -1% +Inf% 14.36 ± 3% perf-profile.calltrace.cycles-pp.ext4_da_get_block_prep.__block_write_begin_int.__block_write_begin.ext4_da_write_begin.generic_perform_write
0.00 ± -1% +Inf% 22.64 ± 4% perf-profile.calltrace.cycles-pp.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.aio_run_iocb
0.00 ± -1% +Inf% 3.30 ± 5% perf-profile.calltrace.cycles-pp.ext4_da_write_end.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.aio_run_iocb
0.00 ± -1% +Inf% 1.65 ± 6% perf-profile.calltrace.cycles-pp.ext4_end_bio.bio_endio.pmem_make_request.generic_make_request.submit_bio
0.00 ± -1% +Inf% 3.66 ± 3% perf-profile.calltrace.cycles-pp.ext4_es_insert_extent.ext4_da_get_block_prep.__block_write_begin_int.__block_write_begin.ext4_da_write_begin
0.00 ± -1% +Inf% 4.21 ± 5% perf-profile.calltrace.cycles-pp.ext4_es_lookup_extent.ext4_da_get_block_prep.__block_write_begin_int.__block_write_begin.ext4_da_write_begin
0.00 ± -1% +Inf% 0.99 ± 12% perf-profile.calltrace.cycles-pp.ext4_es_scan.shrink_slab.shrink_node.kswapd.kthread
0.00 ± -1% +Inf% 5.80 ± 3% perf-profile.calltrace.cycles-pp.ext4_ext_map_blocks.ext4_da_get_block_prep.__block_write_begin_int.__block_write_begin.ext4_da_write_begin
46.91 ± 5% -97.7% 1.07 ± 24% perf-profile.calltrace.cycles-pp.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages.do_writepages.__writeback_single_inode
0.00 ± -1% +Inf% 29.67 ± 4% perf-profile.calltrace.cycles-pp.ext4_file_write_iter.aio_run_iocb.do_io_submit.sys_io_submit.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 5.40 ± 3% perf-profile.calltrace.cycles-pp.ext4_find_extent.ext4_ext_map_blocks.ext4_da_get_block_prep.__block_write_begin_int.__block_write_begin
0.00 ± -1% +Inf% 1.31 ± 11% perf-profile.calltrace.cycles-pp.ext4_finish_bio.ext4_end_bio.bio_endio.pmem_make_request.generic_make_request
0.00 ± -1% +Inf% 6.92 ± 11% perf-profile.calltrace.cycles-pp.ext4_io_submit.ext4_bio_write_page.mpage_submit_page.mpage_process_page_bufs.mpage_prepare_extent_to_map
0.00 ± -1% +Inf% 0.86 ± 24% perf-profile.calltrace.cycles-pp.ext4_io_submit.ext4_writepages.do_writepages.__writeback_single_inode.writeback_sb_inodes
47.67 ± 5% -96.2% 1.81 ± 23% perf-profile.calltrace.cycles-pp.ext4_map_blocks.ext4_writepages.do_writepages.__writeback_single_inode.writeback_sb_inodes
0.00 ± -1% +Inf% 1.10 ± 16% perf-profile.calltrace.cycles-pp.ext4_releasepage.try_to_release_page.shrink_page_list.shrink_inactive_list.shrink_node_memcg
48.29 ± 5% -67.6% 15.63 ± 10% perf-profile.calltrace.cycles-pp.ext4_writepages.do_writepages.__writeback_single_inode.writeback_sb_inodes.__writeback_inodes_wb
1.20 ± 10% -100.0% 0.00 ± -1% perf-profile.calltrace.cycles-pp.find_get_entry.__find_get_block_slow.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks
0.00 ± -1% +Inf% 1.21 ± 2% perf-profile.calltrace.cycles-pp.find_get_entry.pagecache_get_page.__find_get_block_slow.__find_get_block.__getblk_gfp
20.02 ± 6% -100.0% 0.00 ± -1% perf-profile.calltrace.cycles-pp.find_get_entry.pagecache_get_page.__find_get_block_slow.unmap_underlying_metadata.ext4_ext_map_blocks
0.00 ± -1% +Inf% 1.70 ± 9% perf-profile.calltrace.cycles-pp.find_get_entry.pagecache_get_page.grab_cache_page_write_begin.ext4_da_write_begin.generic_perform_write
0.00 ± -1% +Inf% 0.90 ± 12% perf-profile.calltrace.cycles-pp.find_get_pages_tag.pagevec_lookup_tag.mpage_prepare_extent_to_map.ext4_writepages.do_writepages
0.00 ± -1% +Inf% 7.36 ± 8% perf-profile.calltrace.cycles-pp.generic_make_request.submit_bio.ext4_io_submit.ext4_bio_write_page.mpage_submit_page
0.00 ± -1% +Inf% 0.86 ± 24% perf-profile.calltrace.cycles-pp.generic_make_request.submit_bio.ext4_io_submit.ext4_writepages.do_writepages
0.00 ± -1% +Inf% 29.06 ± 4% perf-profile.calltrace.cycles-pp.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.aio_run_iocb.do_io_submit
0.00 ± -1% +Inf% 2.48 ± 5% perf-profile.calltrace.cycles-pp.generic_write_end.ext4_da_write_end.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter
0.00 ± -1% +Inf% 1.15 ± 9% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.pagecache_get_page
0.00 ± -1% +Inf% 5.26 ± 8% perf-profile.calltrace.cycles-pp.grab_cache_page_write_begin.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter
0.96 ± 6% -100.0% 0.00 ± -1% perf-profile.calltrace.cycles-pp.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter.call_cpuidle
0.00 ± -1% +Inf% 1.34 ± 12% perf-profile.calltrace.cycles-pp.jbd2__journal_start.__ext4_journal_start_sb.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter
0.00 ± -1% +Inf% 1.04 ± 17% perf-profile.calltrace.cycles-pp.jbd2_journal_try_to_free_buffers.ext4_releasepage.try_to_release_page.shrink_page_list.shrink_inactive_list
0.00 ± -1% +Inf% 6.95 ± 14% perf-profile.calltrace.cycles-pp.kswapd.kthread.ret_from_fork
48.40 ± 5% -52.7% 22.88 ± 2% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
0.91 ± 9% -100.0% 0.00 ± -1% perf-profile.calltrace.cycles-pp.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter.call_cpuidle
0.00 ± -1% +Inf% 1.57 ± 6% perf-profile.calltrace.cycles-pp.mark_buffer_dirty.__block_commit_write.block_write_end.generic_write_end.ext4_da_write_end
0.00 ± -1% +Inf% 1.34 ± 25% perf-profile.calltrace.cycles-pp.mpage_map_and_submit_buffers.ext4_writepages.do_writepages.__writeback_single_inode.writeback_sb_inodes
0.00 ± -1% +Inf% 11.30 ± 12% perf-profile.calltrace.cycles-pp.mpage_prepare_extent_to_map.ext4_writepages.do_writepages.__writeback_single_inode.writeback_sb_inodes
0.00 ± -1% +Inf% 10.02 ± 11% perf-profile.calltrace.cycles-pp.mpage_process_page_bufs.mpage_prepare_extent_to_map.ext4_writepages.do_writepages.__writeback_single_inode
0.00 ± -1% +Inf% 0.84 ± 25% perf-profile.calltrace.cycles-pp.mpage_submit_page.mpage_map_and_submit_buffers.ext4_writepages.do_writepages.__writeback_single_inode
0.00 ± -1% +Inf% 9.40 ± 12% perf-profile.calltrace.cycles-pp.mpage_submit_page.mpage_process_page_bufs.mpage_prepare_extent_to_map.ext4_writepages.do_writepages
0.00 ± -1% +Inf% 1.28 ± 2% perf-profile.calltrace.cycles-pp.pagecache_get_page.__find_get_block_slow.__find_get_block.__getblk_gfp.__read_extent_tree_block
26.12 ± 6% -100.0% 0.00 ± -1% perf-profile.calltrace.cycles-pp.pagecache_get_page.__find_get_block_slow.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks
0.00 ± -1% +Inf% 5.19 ± 8% perf-profile.calltrace.cycles-pp.pagecache_get_page.grab_cache_page_write_begin.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter
0.00 ± -1% +Inf% 0.91 ± 12% perf-profile.calltrace.cycles-pp.pagevec_lookup_tag.mpage_prepare_extent_to_map.ext4_writepages.do_writepages.__writeback_single_inode
0.00 ± -1% +Inf% 4.59 ± 8% perf-profile.calltrace.cycles-pp.pmem_do_bvec.pmem_make_request.generic_make_request.submit_bio.ext4_io_submit
0.00 ± -1% +Inf% 6.95 ± 8% perf-profile.calltrace.cycles-pp.pmem_make_request.generic_make_request.submit_bio.ext4_io_submit.ext4_bio_write_page
0.00 ± -1% +Inf% 0.84 ± 23% perf-profile.calltrace.cycles-pp.pmem_make_request.generic_make_request.submit_bio.ext4_io_submit.ext4_writepages
12.15 ± 46% -77.5% 2.74 ± 50% perf-profile.calltrace.cycles-pp.poll_idle.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry
48.36 ± 5% -67.4% 15.75 ± 10% perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
16.45 ± 6% -100.0% 0.00 ± -1% perf-profile.calltrace.cycles-pp.radix_tree_lookup_slot.find_get_entry.pagecache_get_page.__find_get_block_slow.unmap_underlying_metadata
0.00 ± -1% +Inf% 1.66 ± 8% perf-profile.calltrace.cycles-pp.radix_tree_lookup_slot.find_get_entry.pagecache_get_page.grab_cache_page_write_begin.ext4_da_write_begin
48.40 ± 5% -52.7% 22.88 ± 2% perf-profile.calltrace.cycles-pp.ret_from_fork
0.00 ± -1% +Inf% 5.92 ± 15% perf-profile.calltrace.cycles-pp.shrink_inactive_list.shrink_node_memcg.shrink_node.kswapd.kthread
0.00 ± -1% +Inf% 6.94 ± 14% perf-profile.calltrace.cycles-pp.shrink_node.kswapd.kthread.ret_from_fork
0.00 ± -1% +Inf% 5.94 ± 15% perf-profile.calltrace.cycles-pp.shrink_node_memcg.shrink_node.kswapd.kthread.ret_from_fork
0.00 ± -1% +Inf% 5.37 ± 15% perf-profile.calltrace.cycles-pp.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node.kswapd
0.00 ± -1% +Inf% 1.00 ± 13% perf-profile.calltrace.cycles-pp.shrink_slab.shrink_node.kswapd.kthread.ret_from_fork
2.32 ± 8% -85.1% 0.34 ±100% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter.call_cpuidle.cpu_startup_entry
50.21 ± 5% -16.3% 42.00 ± 5% perf-profile.calltrace.cycles-pp.start_secondary
0.00 ± -1% +Inf% 6.91 ± 11% perf-profile.calltrace.cycles-pp.submit_bio.ext4_io_submit.ext4_bio_write_page.mpage_submit_page.mpage_process_page_bufs
0.00 ± -1% +Inf% 0.86 ± 24% perf-profile.calltrace.cycles-pp.submit_bio.ext4_io_submit.ext4_writepages.do_writepages.__writeback_single_inode
0.00 ± -1% +Inf% 32.48 ± 4% perf-profile.calltrace.cycles-pp.sys_io_submit.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 0.87 ± 10% perf-profile.calltrace.cycles-pp.test_clear_page_writeback.end_page_writeback.ext4_finish_bio.ext4_end_bio.bio_endio
0.00 ± -1% +Inf% 1.12 ± 17% perf-profile.calltrace.cycles-pp.try_to_release_page.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node
42.84 ± 5% -100.0% 0.00 ± -1% perf-profile.calltrace.cycles-pp.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages.do_writepages
48.30 ± 5% -67.4% 15.72 ± 10% perf-profile.calltrace.cycles-pp.wb_workfn.process_one_work.worker_thread.kthread.ret_from_fork
48.30 ± 5% -67.4% 15.72 ± 10% perf-profile.calltrace.cycles-pp.wb_writeback.wb_workfn.process_one_work.worker_thread.kthread
48.36 ± 5% -67.4% 15.76 ± 10% perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
48.30 ± 5% -67.5% 15.72 ± 10% perf-profile.calltrace.cycles-pp.writeback_sb_inodes.__writeback_inodes_wb.wb_writeback.wb_workfn.process_one_work
3.93 ± 2% -84.2% 0.62 ± 2% perf-profile.children.cycles-pp.___might_sleep
0.02 ±173% +7942.9% 1.41 ± 7% perf-profile.children.cycles-pp.__alloc_pages_nodemask
0.01 ±173% +17180.0% 2.16 ± 4% perf-profile.children.cycles-pp.__block_commit_write
0.05 ± 62% +32336.8% 15.41 ± 3% perf-profile.children.cycles-pp.__block_write_begin
0.05 ± 62% +32300.0% 15.39 ± 3% perf-profile.children.cycles-pp.__block_write_begin_int
0.18 ± 13% +2764.4% 5.23 ± 9% perf-profile.children.cycles-pp.__copy_user_nocache
0.00 ± -1% +Inf% 2.36 ± 18% perf-profile.children.cycles-pp.__delete_from_page_cache
0.06 ± 64% +4572.7% 2.57 ± 4% perf-profile.children.cycles-pp.__es_insert_extent
0.00 ± -1% +Inf% 0.99 ± 12% perf-profile.children.cycles-pp.__es_shrink
0.00 ± -1% +Inf% 1.16 ± 2% perf-profile.children.cycles-pp.__es_tree_search
0.01 ±173% +14200.0% 1.79 ± 12% perf-profile.children.cycles-pp.__ext4_journal_start_sb
0.00 ± -1% +Inf% 3.10 ± 2% perf-profile.children.cycles-pp.__find_get_block
33.44 ± 5% -95.1% 1.65 ± 1% perf-profile.children.cycles-pp.__find_get_block_slow
0.30 ± 10% +9542.6% 29.41 ± 4% perf-profile.children.cycles-pp.__generic_file_write_iter
0.00 ± -1% +Inf% 3.31 ± 2% perf-profile.children.cycles-pp.__getblk_gfp
5.83 ± 3% -85.3% 0.86 ± 5% perf-profile.children.cycles-pp.__might_sleep
0.01 ±173% +10133.3% 1.53 ± 8% perf-profile.children.cycles-pp.__page_cache_alloc
15.24 ± 6% -70.8% 4.45 ± 7% perf-profile.children.cycles-pp.__radix_tree_lookup
0.00 ± -1% +Inf% 3.41 ± 2% perf-profile.children.cycles-pp.__read_extent_tree_block
0.00 ± -1% +Inf% 2.99 ± 16% perf-profile.children.cycles-pp.__remove_mapping
0.00 ± -1% +Inf% 1.23 ± 7% perf-profile.children.cycles-pp.__set_page_dirty
0.00 ± -1% +Inf% 1.04 ± 13% perf-profile.children.cycles-pp.__test_set_page_writeback
48.30 ± 5% -67.4% 15.72 ± 10% perf-profile.children.cycles-pp.__writeback_inodes_wb
48.30 ± 5% -67.5% 15.72 ± 10% perf-profile.children.cycles-pp.__writeback_single_inode
0.09 ± 17% +1352.9% 1.23 ± 8% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.00 ± -1% +Inf% 1.87 ± 8% perf-profile.children.cycles-pp.add_to_page_cache_lru
0.32 ± 12% +9536.4% 31.08 ± 4% perf-profile.children.cycles-pp.aio_run_iocb
0.02 ±173% +8442.9% 1.50 ± 7% perf-profile.children.cycles-pp.alloc_pages_current
2.58 ± 7% -65.8% 0.88 ± 28% perf-profile.children.cycles-pp.apic_timer_interrupt
0.07 ± 66% +3833.3% 2.95 ± 9% perf-profile.children.cycles-pp.bio_endio
0.03 ±100% +7881.8% 2.20 ± 4% perf-profile.children.cycles-pp.block_write_end
50.15 ± 5% -15.4% 42.45 ± 4% perf-profile.children.cycles-pp.call_cpuidle
0.05 ± 9% +4868.2% 2.73 ± 4% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
50.70 ± 5% -15.5% 42.85 ± 4% perf-profile.children.cycles-pp.cpu_startup_entry
50.11 ± 5% -15.3% 42.45 ± 4% perf-profile.children.cycles-pp.cpuidle_enter
46.86 ± 6% -11.1% 41.67 ± 5% perf-profile.children.cycles-pp.cpuidle_enter_state
0.34 ± 14% +9334.3% 32.31 ± 4% perf-profile.children.cycles-pp.do_io_submit
48.30 ± 5% -67.5% 15.72 ± 10% perf-profile.children.cycles-pp.do_writepages
0.01 ±173% +10480.0% 1.32 ± 9% perf-profile.children.cycles-pp.end_page_writeback
0.64 ± 10% +5173.9% 33.88 ± 4% perf-profile.children.cycles-pp.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 0.95 ± 13% perf-profile.children.cycles-pp.es_do_reclaim_extents
0.00 ± -1% +Inf% 0.98 ± 12% perf-profile.children.cycles-pp.es_reclaim_extents
0.05 ± 61% +19285.0% 9.69 ± 10% perf-profile.children.cycles-pp.ext4_bio_write_page
0.01 ±173% +1.1e+05% 14.36 ± 3% perf-profile.children.cycles-pp.ext4_da_get_block_prep
0.17 ± 10% +13621.2% 22.64 ± 4% perf-profile.children.cycles-pp.ext4_da_write_begin
0.04 ± 59% +7238.9% 3.30 ± 5% perf-profile.children.cycles-pp.ext4_da_write_end
0.04 ±100% +5253.3% 2.01 ± 6% perf-profile.children.cycles-pp.ext4_end_bio
0.09 ± 26% +4362.2% 4.13 ± 3% perf-profile.children.cycles-pp.ext4_es_insert_extent
0.04 ± 60% +10347.1% 4.44 ± 5% perf-profile.children.cycles-pp.ext4_es_lookup_extent
0.00 ± -1% +Inf% 0.99 ± 12% perf-profile.children.cycles-pp.ext4_es_scan
46.91 ± 5% -85.3% 6.88 ± 6% perf-profile.children.cycles-pp.ext4_ext_map_blocks
0.30 ± 11% +9709.1% 29.67 ± 4% perf-profile.children.cycles-pp.ext4_file_write_iter
0.04 ±102% +15914.3% 5.60 ± 4% perf-profile.children.cycles-pp.ext4_find_extent
0.03 ±100% +5800.0% 1.62 ± 9% perf-profile.children.cycles-pp.ext4_finish_bio
0.23 ± 16% +3540.2% 8.37 ± 9% perf-profile.children.cycles-pp.ext4_io_submit
47.67 ± 5% -96.2% 1.82 ± 23% perf-profile.children.cycles-pp.ext4_map_blocks
0.00 ± -1% +Inf% 0.84 ± 31% perf-profile.children.cycles-pp.ext4_put_io_end_defer
0.00 ± -1% +Inf% 1.10 ± 16% perf-profile.children.cycles-pp.ext4_releasepage
48.29 ± 5% -67.6% 15.63 ± 10% perf-profile.children.cycles-pp.ext4_writepages
21.26 ± 6% -85.9% 2.99 ± 6% perf-profile.children.cycles-pp.find_get_entry
0.10 ± 9% +852.6% 0.91 ± 12% perf-profile.children.cycles-pp.find_get_pages_tag
0.30 ± 17% +2817.5% 8.75 ± 9% perf-profile.children.cycles-pp.generic_make_request
0.29 ± 10% +9924.1% 29.07 ± 4% perf-profile.children.cycles-pp.generic_perform_write
0.04 ± 58% +6118.7% 2.49 ± 5% perf-profile.children.cycles-pp.generic_write_end
0.01 ±173% +9420.0% 1.19 ± 9% perf-profile.children.cycles-pp.get_page_from_freelist
0.10 ± 5% +5450.0% 5.27 ± 8% perf-profile.children.cycles-pp.grab_cache_page_write_begin
1.01 ± 10% -56.3% 0.44 ± 26% perf-profile.children.cycles-pp.hrtimer_interrupt
1.55 ± 9% -69.5% 0.47 ± 26% perf-profile.children.cycles-pp.irq_exit
0.00 ± -1% +Inf% 1.49 ± 11% perf-profile.children.cycles-pp.jbd2__journal_start
0.00 ± -1% +Inf% 1.05 ± 17% perf-profile.children.cycles-pp.jbd2_journal_try_to_free_buffers
0.01 ±173% +8660.0% 1.09 ± 6% perf-profile.children.cycles-pp.kmem_cache_alloc
0.00 ± -1% +Inf% 6.95 ± 14% perf-profile.children.cycles-pp.kswapd
48.40 ± 5% -52.7% 22.88 ± 2% perf-profile.children.cycles-pp.kthread
1.04 ± 10% -56.4% 0.45 ± 27% perf-profile.children.cycles-pp.local_apic_timer_interrupt
0.00 ± -1% +Inf% 1.57 ± 5% perf-profile.children.cycles-pp.mark_buffer_dirty
0.21 ± 9% +524.4% 1.34 ± 25% perf-profile.children.cycles-pp.mpage_map_and_submit_buffers
0.12 ± 15% +9124.5% 11.30 ± 12% perf-profile.children.cycles-pp.mpage_prepare_extent_to_map
0.00 ± -1% +Inf% 10.03 ± 11% perf-profile.children.cycles-pp.mpage_process_page_bufs
0.07 ± 58% +15100.0% 10.26 ± 10% perf-profile.children.cycles-pp.mpage_submit_page
26.82 ± 6% -75.6% 6.53 ± 6% perf-profile.children.cycles-pp.pagecache_get_page
0.10 ± 9% +860.5% 0.91 ± 13% perf-profile.children.cycles-pp.pagevec_lookup_tag
0.18 ± 13% +2830.1% 5.35 ± 9% perf-profile.children.cycles-pp.pmem_do_bvec
0.28 ± 17% +2935.1% 8.42 ± 9% perf-profile.children.cycles-pp.pmem_make_request
12.15 ± 46% -77.3% 2.76 ± 50% perf-profile.children.cycles-pp.poll_idle
48.36 ± 5% -67.4% 15.75 ± 10% perf-profile.children.cycles-pp.process_one_work
17.14 ± 6% -86.0% 2.39 ± 6% perf-profile.children.cycles-pp.radix_tree_lookup_slot
48.40 ± 5% -52.7% 22.88 ± 2% perf-profile.children.cycles-pp.ret_from_fork
0.00 ± -1% +Inf% 5.92 ± 15% perf-profile.children.cycles-pp.shrink_inactive_list
0.00 ± -1% +Inf% 6.94 ± 14% perf-profile.children.cycles-pp.shrink_node
0.00 ± -1% +Inf% 5.94 ± 15% perf-profile.children.cycles-pp.shrink_node_memcg
0.00 ± -1% +Inf% 5.37 ± 15% perf-profile.children.cycles-pp.shrink_page_list
0.00 ± -1% +Inf% 1.00 ± 13% perf-profile.children.cycles-pp.shrink_slab
2.51 ± 8% -65.7% 0.86 ± 28% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
50.21 ± 5% -16.3% 42.00 ± 5% perf-profile.children.cycles-pp.start_secondary
0.30 ± 17% +2828.3% 8.79 ± 9% perf-profile.children.cycles-pp.submit_bio
0.35 ± 15% +9326.8% 32.52 ± 4% perf-profile.children.cycles-pp.sys_io_submit
0.00 ± -1% +Inf% 1.09 ± 9% perf-profile.children.cycles-pp.test_clear_page_writeback
0.00 ± -1% +Inf% 1.13 ± 17% perf-profile.children.cycles-pp.try_to_release_page
43.48 ± 5% -99.9% 0.03 ±100% perf-profile.children.cycles-pp.unmap_underlying_metadata
48.30 ± 5% -67.4% 15.72 ± 10% perf-profile.children.cycles-pp.wb_workfn
48.30 ± 5% -67.4% 15.72 ± 10% perf-profile.children.cycles-pp.wb_writeback
48.36 ± 5% -67.4% 15.76 ± 10% perf-profile.children.cycles-pp.worker_thread
48.30 ± 5% -67.5% 15.72 ± 10% perf-profile.children.cycles-pp.writeback_sb_inodes
3.93 ± 2% -84.2% 0.62 ± 2% perf-profile.self.cycles-pp.___might_sleep
0.18 ± 13% +2764.4% 5.23 ± 9% perf-profile.self.cycles-pp.__copy_user_nocache
0.01 ±173% +9700.0% 1.23 ± 2% perf-profile.self.cycles-pp.__es_insert_extent
0.00 ± -1% +Inf% 1.07 ± 2% perf-profile.self.cycles-pp.__es_tree_search
6.10 ± 4% -95.7% 0.27 ± 5% perf-profile.self.cycles-pp.__find_get_block_slow
3.10 ± 6% -87.1% 0.40 ± 8% perf-profile.self.cycles-pp.__might_sleep
15.24 ± 6% -70.8% 4.45 ± 7% perf-profile.self.cycles-pp.__radix_tree_lookup
0.05 ± 9% +4868.2% 2.73 ± 4% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
0.04 ± 60% +9776.5% 4.20 ± 5% perf-profile.self.cycles-pp.ext4_es_lookup_extent
1.22 ± 7% -87.1% 0.16 ± 13% perf-profile.self.cycles-pp.ext4_ext_map_blocks
0.00 ± -1% +Inf% 1.86 ± 9% perf-profile.self.cycles-pp.ext4_find_extent
0.00 ± -1% +Inf% 0.84 ± 31% perf-profile.self.cycles-pp.ext4_put_io_end_defer
4.18 ± 5% -85.6% 0.60 ± 4% perf-profile.self.cycles-pp.find_get_entry
6.03 ± 6% -98.1% 0.11 ± 17% perf-profile.self.cycles-pp.pagecache_get_page
12.15 ± 46% -77.3% 2.76 ± 50% perf-profile.self.cycles-pp.poll_idle
2.47 ± 4% -100.0% 0.00 ± -1% perf-profile.self.cycles-pp.radix_tree_lookup_slot
4.26 ± 8% -100.0% 0.00 ± -1% perf-profile.self.cycles-pp.unmap_underlying_metadata
fio.write_bw_MBps
3500 ++-------------------------------------------------------------------+
| O |
3000 ++ O |
O O O O O O O O O O O O O
2500 ++ O O O O O O O O O O O O O |
| |
2000 ++ |
| |
1500 ++ |
| |
1000 ++ |
| |
500 ++ |
| |
0 *+-*-*--*-*--*-*--*-*--*--*-*--*-*--*-*--*-*--*--*-*--*-*--*-*--*-*--*
fio.write_iops
900000 ++-----------------------------------------------------------------+
| |
800000 ++ O O |
700000 O+O O O O O O O O O O O
| O O O O O O O O O O O O O O |
600000 ++ |
500000 ++ |
| |
400000 ++ |
300000 ++ |
| |
200000 ++ |
100000 ++ |
| |
0 *+*--*-*--*-*--*-*--*-*--*-*--*-*--*-*--*-*--*-*--*-*--*-*--*-*--*-*
fio.write_clat_mean_us
60000 ++------------------------------------------------------------------+
| .*. .*. |
50000 *+.*.*..*.*..*.*. *..*.*. *..*.*..*.*..*.*..*.*..*.*..*.*..*.*..*
| |
| |
40000 ++ |
| |
30000 ++ |
| |
20000 ++ |
| |
| |
10000 ++ |
| |
0 O+-O-O--O-O--O-O--O-O--O-O--O-O--O-O--O-O--O-O--O-O--O-O--O-O--O-O--O
fio.write_clat_stddev
120000 ++-----------------------------------------------------------------+
| .*.. .*..*. .*.*.. .*.|
100000 *+*..*.*..* * *. *.*..*.*..*.*..*.*..*.*..*.*..*.*. *
| |
| |
80000 ++ |
| |
60000 ++ |
| |
40000 ++ |
| |
| |
20000 ++ |
O O O O O O O O O O O O O O O O O O O O O O O O O O O O
0 ++-----------------------------------------------------------------+
fio.write_clat_90__us
200000 ++-----------------------------------------------------------------+
180000 ++ *.. |
| .*.*.. + .*..*.*..*.*.. .*.*.. .*..*. .*. .*..*.*
160000 *+*. * * *.*. *.*..* *. *..* |
140000 ++ |
| |
120000 ++ |
100000 ++ |
80000 ++ |
| |
60000 ++ |
40000 ++ |
| |
20000 ++ |
0 O+O--O-O--O-O--O-O--O-O--O-O--O-O--O-O--O-O--O-O--O-O--O-O--O-O--O-O
fio.write_clat_95__us
250000 ++-----------------------------------------------------------------+
| |
*.*..*.*..*.*..*.*..*.*..*.*..*.*..*.*..*.*..*.*..*.*..*.*..*.*..*.*
200000 ++ |
| |
| |
150000 ++ |
| |
100000 ++ |
| |
| |
50000 ++ |
| |
| |
0 O+O--O-O--O-O--O-O--O-O--O-O--O-O--O-O--O-O--O-O--O-O--O-O--O-O--O-O
fio.write_clat_99__us
700000 ++-----------------------------------------------------------------+
| *.* * *..* *.* *.* *..* *.* *.*
600000 ++ : : :: : : : : : : : : : : : |
| : : : : : : : : : : : : : : : |
500000 ++ : : : : : : : : : : : : : : : |
| : :: :: :: : : : : :: : : |
400000 *+* * * * *.* *.*..* * *.* |
| |
300000 ++ |
| |
200000 ++ |
| |
100000 ++ |
O O O O O O O O O O O O O O O O O O O O O O O O O O O O
0 ++-----------------------------------------------------------------+
fio.write_slat_mean_us
1800 ++----------------*---------*----------------------------------------+
*..*.*..*.*..*.*. *..*..* *.*..*.*..*.*..*..*.*..*.*..*.*..*.*..*
1600 ++ |
1400 ++ |
| |
1200 ++ |
1000 ++ |
| |
800 ++ |
600 ++ |
| |
400 ++ |
200 ++ |
| |
0 O+-O-O--O-O--O-O--O-O--O--O-O--O-O--O-O--O-O--O--O-O--O-O--O-O--O-O--O
fio.write_slat_stddev
18000 ++----------------*-*----*--*---------------------------------------+
*..*.*..*.*..*.*. * *..*.*..*.*..*.*..*.*..*.*..*.*..*.*..*
16000 ++ |
14000 ++ |
| |
12000 ++ |
10000 ++ |
| |
8000 ++ |
6000 ++ |
| |
4000 ++ |
2000 ++ |
O O O O O O O O O O O O O O O O O O O O O O O O O O O O
0 ++------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
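As a side note on how to read these comparison tables: each row gives the mean and relative standard deviation for the base commit on the left and for the tested commit on the right, and %change is 100 * (new - old) / old relative to the base. A minimal sketch of that arithmetic (not part of the lkp tooling; old/new are just the displayed means from the fio.write_bw_MBps row in the table below):

  awk 'BEGIN { old = 64.51; new = 2598; printf "%+.1f%%\n", 100 * (new - old) / old }'
  # prints a value close to the +3928.3% reported below; the small difference
  # comes from rounding of the displayed means
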
***************************************************************************************************
lkp-hsw-ep6: 56 threads Intel(R) Xeon(R) CPU E5-2695 v3 @ 2.30GHz with 256G memory
=========================================================================================
bs/compiler/cpufreq_governor/disk/fs/ioengine/kconfig/nr_task/rootfs/runtime/rw/tbox_group/test_size/testcase/time_based:
4k/gcc-6/performance/2pmem/ext4/sync/x86_64-rhel-7.2/50%/debian-x86_64-2016-08-31.cgz/200s/randwrite/lkp-hsw-ep6/200G/fio-basic/tb
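
To make that parameter string easier to read: it is a 4k random-write fio run with the sync ioengine on ext4 over two pmem devices, time-based for 200 seconds, with nr_task=50% of the machine's 56 hardware threads. A rough, hand-written approximation of the command line follows (the job name, the /mnt/pmem0 mount point, --numjobs=28 and treating test_size as the total --size are assumptions, not taken from the report; the exact job definition ships with the lkp-tests suite):

  fio --name=randwrite --directory=/mnt/pmem0 --ioengine=sync --rw=randwrite \
      --bs=4k --size=200G --numjobs=28 --runtime=200 --time_based --group_reporting
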
commit:
6f2b562c3a ("direct-io: Use clean_bdev_aliases() instead of handmade iteration")
adad5aa544 ("ext4: Use clean_bdev_aliases() instead of iteration")
6f2b562c3a89f4a6 adad5aa544e281d84f837b2786
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
%stddev %change %stddev
\ | \
64.51 ± 0% +3928.3% 2598 ± 1% fio.write_bw_MBps
16514 ± 0% +3928.3% 665261 ± 1% fio.write_iops
0.01 ± 34% +860.0% 0.12 ± 15% fio.latency_100us%
0.13 ± 0% -92.3% 0.01 ± 0% fio.latency_10ms%
73.95 ± 3% -44.9% 40.77 ± 9% fio.latency_10us%
22.26 ± 11% +152.0% 56.10 ± 6% fio.latency_20us%
0.99 ± 1% -99.0% 0.01 ± 0% fio.latency_250ms%
1.14 ± 38% -79.4% 0.23 ± 21% fio.latency_4us%
1.50 ± 14% +78.5% 2.68 ± 15% fio.latency_50us%
26441098 ± 0% +3924.1% 1.064e+09 ± 1% fio.time.file_system_outputs
697.50 ± 10% +751.8% 5941 ± 13% fio.time.involuntary_context_switches
35000 ± 2% +85.3% 64861 ± 9% fio.time.minor_page_faults
19.75 ± 2% +3984.8% 806.75 ± 2% fio.time.percent_of_cpu_this_job_got
27.40 ± 2% +5046.0% 1410 ± 2% fio.time.system_time
13.58 ± 4% +1448.1% 210.31 ± 2% fio.time.user_time
58660 ± 1% +880.4% 575084 ± 1% fio.time.voluntary_context_switches
12.00 ± 5% +25.0% 15.00 ± 4% fio.write_clat_90%_us
35088 ±126% -99.9% 31.50 ± 4% fio.write_clat_99%_us
1692 ± 0% -97.6% 40.74 ± 1% fio.write_clat_mean_us
17029 ± 1% -93.8% 1058 ± 0% fio.write_clat_stddev
53709 ± 8% +262.8% 194831 ± 5% softirqs.RCU
131442 ± 21% +103.9% 267986 ± 1% softirqs.SCHED
564652 ± 0% +160.4% 1470111 ± 1% softirqs.TIMER
7.64 ± 1% +202.8% 23.14 ± 1% turbostat.%Busy
160.00 ± 1% +119.7% 351.50 ± 1% turbostat.Avg_MHz
2093 ± 0% -27.5% 1518 ± 0% turbostat.Bzy_MHz
33.02 ± 8% -21.7% 25.84 ± 15% turbostat.CPU%c3
27.55 ± 3% -36.9% 17.38 ± 24% turbostat.CPU%c6
104.45 ± 1% +10.0% 114.90 ± 1% turbostat.PkgWatt
81.34 ± 0% +44.8% 117.79 ± 0% turbostat.RAMWatt
898622 ± 71% +5898.0% 53899245 ± 30% numa-numastat.node0.local_node
266526 ±170% +6795.0% 18377074 ± 77% numa-numastat.node0.numa_foreign
898629 ± 71% +5897.9% 53899261 ± 30% numa-numastat.node0.numa_hit
952875 ± 57% +971.7% 10212364 ± 77% numa-numastat.node0.numa_miss
1759729 ± 31% +2688.3% 49067006 ± 48% numa-numastat.node1.local_node
956501 ± 57% +966.7% 10203280 ± 77% numa-numastat.node1.numa_foreign
1759741 ± 31% +2688.3% 49067011 ± 48% numa-numastat.node1.numa_hit
270153 ±168% +6699.2% 18368109 ± 77% numa-numastat.node1.numa_miss
67997 ± 1% +4935.1% 3423742 ± 1% vmstat.io.bo
19171 ± 0% +902.5% 192190 ± 0% vmstat.memory.buff
12561916 ± 0% +231.4% 41626714 ± 0% vmstat.memory.cache
32814140 ± 0% -89.1% 3561112 ± 1% vmstat.memory.free
27.00 ± 0% -30.6% 18.75 ± 2% vmstat.procs.b
2.00 ± 0% +425.0% 10.50 ± 4% vmstat.procs.r
2267 ± 7% +425.8% 11922 ± 2% vmstat.system.cs
57499 ± 0% +2.0% 58675 ± 0% vmstat.system.in
3757119 ± 10% +258.1% 13452676 ± 12% cpuidle.C1-HSW.time
70143 ± 23% +684.6% 550376 ± 8% cpuidle.C1-HSW.usage
1.719e+08 ± 45% -73.5% 45531804 ± 32% cpuidle.C1E-HSW.time
214536 ± 37% -51.0% 105146 ± 14% cpuidle.C1E-HSW.usage
4.173e+09 ± 6% -24.9% 3.133e+09 ± 6% cpuidle.C3-HSW.time
4335690 ± 6% -24.1% 3290671 ± 6% cpuidle.C3-HSW.usage
6.14e+09 ± 3% -9.9% 5.529e+09 ± 2% cpuidle.C6-HSW.time
6354558 ± 3% -9.8% 5734069 ± 2% cpuidle.C6-HSW.usage
2300 ± 14% +510.3% 14041 ± 5% cpuidle.POLL.usage
164216 ± 1% +128.1% 374592 ± 1% meminfo.Active
59099 ± 2% +350.9% 266480 ± 1% meminfo.Active(file)
19136 ± 0% +904.0% 192130 ± 0% meminfo.Buffers
11734813 ± 0% +240.2% 39922439 ± 0% meminfo.Cached
202840 ± 0% -76.6% 47427 ± 7% meminfo.CmaFree
8241883 ± 0% -14.8% 7022847 ± 0% meminfo.Dirty
11690671 ± 0% +235.9% 39269224 ± 0% meminfo.Inactive
11568749 ± 0% +238.4% 39147090 ± 0% meminfo.Inactive(file)
32820647 ± 0% -89.1% 3569553 ± 1% meminfo.MemFree
820377 ± 0% +106.7% 1695619 ± 0% meminfo.SReclaimable
919835 ± 0% +95.1% 1794614 ± 0% meminfo.Slab
989.25 ± 57% +57562.5% 570426 ± 0% meminfo.Unevictable
71280 ± 25% +220.1% 228145 ± 33% numa-meminfo.node0.Active
29498 ± 24% +440.1% 159331 ± 41% numa-meminfo.node0.Active(file)
5690258 ± 5% +248.7% 19839495 ± 0% numa-meminfo.node0.FilePages
5658227 ± 5% +242.7% 19392224 ± 0% numa-meminfo.node0.Inactive
5571535 ± 5% +246.5% 19303184 ± 0% numa-meminfo.node0.Inactive(file)
16701744 ± 3% -89.1% 1827775 ± 2% numa-meminfo.node0.MemFree
7905066 ± 7% +188.2% 22779035 ± 0% numa-meminfo.node0.MemUsed
235449 ±123% +278.8% 891948 ± 12% numa-meminfo.node0.SReclaimable
288330 ±100% +228.3% 946566 ± 12% numa-meminfo.node0.Slab
467.50 ± 60% +60926.3% 285297 ± 0% numa-meminfo.node0.Unevictable
29610 ± 27% +260.3% 106675 ± 59% numa-meminfo.node1.Active(file)
6066070 ± 5% +234.0% 20260420 ± 0% numa-meminfo.node1.FilePages
6034806 ± 5% +229.1% 19862918 ± 0% numa-meminfo.node1.Inactive
5999586 ± 5% +230.5% 19829825 ± 0% numa-meminfo.node1.Inactive(file)
16116396 ± 4% -89.1% 1756070 ± 0% numa-meminfo.node1.MemFree
8634214 ± 7% +166.3% 22994540 ± 0% numa-meminfo.node1.MemUsed
521.25 ± 60% +54624.3% 285250 ± 0% numa-meminfo.node1.Unevictable
2728768 ± 0% +259.7% 9814399 ± 0% slabinfo.buffer_head.active_objs
69967 ± 0% +260.0% 251912 ± 0% slabinfo.buffer_head.active_slabs
2728768 ± 0% +260.0% 9824608 ± 0% slabinfo.buffer_head.num_objs
69967 ± 0% +260.0% 251912 ± 0% slabinfo.buffer_head.num_slabs
115.25 ± 24% +785.9% 1021 ± 9% slabinfo.dquot.active_objs
115.25 ± 24% +785.9% 1021 ± 9% slabinfo.dquot.num_objs
925018 ± 1% +162.9% 2432034 ± 0% slabinfo.ext4_extent_status.active_objs
9068 ± 1% +357.6% 41494 ± 1% slabinfo.ext4_extent_status.active_slabs
925018 ± 1% +357.6% 4232438 ± 1% slabinfo.ext4_extent_status.num_objs
9068 ± 1% +357.6% 41494 ± 1% slabinfo.ext4_extent_status.num_slabs
149.50 ± 27% +239.1% 507.00 ± 65% slabinfo.ext4_io_end.active_objs
149.50 ± 27% +239.1% 507.00 ± 65% slabinfo.ext4_io_end.num_objs
3009 ± 10% +53.9% 4631 ± 0% slabinfo.jbd2_journal_handle.active_objs
3009 ± 10% +53.9% 4631 ± 0% slabinfo.jbd2_journal_handle.num_objs
1808 ± 9% +1387.2% 26889 ± 1% slabinfo.jbd2_journal_head.active_objs
67.50 ± 4% +1158.9% 849.75 ± 1% slabinfo.jbd2_journal_head.active_slabs
2310 ± 4% +1151.5% 28909 ± 1% slabinfo.jbd2_journal_head.num_objs
67.50 ± 4% +1158.9% 849.75 ± 1% slabinfo.jbd2_journal_head.num_slabs
145655 ± 2% -77.5% 32809 ± 0% latency_stats.avg.balance_dirty_pages.balance_dirty_pages_ratelimited.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 40251 ± 5% latency_stats.avg.jbd2_log_wait_commit.jbd2_log_do_checkpoint.__jbd2_log_wait_for_space.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.__vfs_write
0.00 ± -1% +Inf% 40772 ± 4% latency_stats.avg.jbd2_log_wait_commit.jbd2_log_do_checkpoint.__jbd2_log_wait_for_space.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_dirty_inode.__mark_inode_dirty.generic_update_time.file_update_time.__generic_file_write_iter
245.00 ± 62% +15014.2% 37029 ± 12% latency_stats.avg.wait_transaction_locked.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.__vfs_write.vfs_write.SyS_write
381.50 ± 92% +17588.2% 67480 ± 8% latency_stats.avg.wait_transaction_locked.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_dirty_inode.__mark_inode_dirty.generic_update_time.file_update_time.__generic_file_write_iter.ext4_file_write_iter.__vfs_write
37333 ± 2% +212.4% 116638 ± 1% latency_stats.hits.balance_dirty_pages.balance_dirty_pages_ratelimited.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 428352 ± 1% latency_stats.hits.call_rwsem_down_read_failed.ext4_da_get_block_prep.__block_write_begin_int.__block_write_begin.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
52674 ± 19% +713.2% 428352 ± 1% latency_stats.hits.max
25.25 ±173% +28336.6% 7180 ± 58% latency_stats.hits.wait_on_page_bit.__migration_entry_wait.migration_entry_wait.do_swap_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.00 ± -1% +Inf% 52028 ± 3% latency_stats.max.jbd2_log_wait_commit.jbd2_log_do_checkpoint.__jbd2_log_wait_for_space.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.__vfs_write
0.00 ± -1% +Inf% 42916 ± 11% latency_stats.max.jbd2_log_wait_commit.jbd2_log_do_checkpoint.__jbd2_log_wait_for_space.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_dirty_inode.__mark_inode_dirty.generic_update_time.file_update_time.__generic_file_write_iter
415.25 ± 61% +50138.4% 208614 ± 6% latency_stats.max.wait_transaction_locked.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.__vfs_write.vfs_write.SyS_write
406.25 ± 94% +49880.9% 203047 ± 8% latency_stats.max.wait_transaction_locked.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_dirty_inode.__mark_inode_dirty.generic_update_time.file_update_time.__generic_file_write_iter.ext4_file_write_iter.__vfs_write
0.00 ± -1% +Inf% 1808461 ± 11% latency_stats.sum.call_rwsem_down_read_failed.ext4_da_get_block_prep.__block_write_begin_int.__block_write_begin.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 824147 ± 12% latency_stats.sum.jbd2_log_wait_commit.jbd2_log_do_checkpoint.__jbd2_log_wait_for_space.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.__vfs_write
0.00 ± -1% +Inf% 82526 ± 39% latency_stats.sum.jbd2_log_wait_commit.jbd2_log_do_checkpoint.__jbd2_log_wait_for_space.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_dirty_inode.__mark_inode_dirty.generic_update_time.file_update_time.__generic_file_write_iter
369.00 ±173% +74142.4% 273954 ± 68% latency_stats.sum.wait_on_page_bit.__migration_entry_wait.migration_entry_wait.do_swap_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
1932 ±109% +6.4e+05% 12442046 ± 18% latency_stats.sum.wait_transaction_locked.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.__vfs_write.vfs_write.SyS_write
1112 ± 97% +1.8e+06% 20573361 ± 7% latency_stats.sum.wait_transaction_locked.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_dirty_inode.__mark_inode_dirty.generic_update_time.file_update_time.__generic_file_write_iter.ext4_file_write_iter.__vfs_write
8.134e+11 ± 7% -29.2% 5.76e+11 ± 1% perf-stat.branch-instructions
0.07 ± 1% +2199.3% 1.54 ± 1% perf-stat.branch-miss-rate%
5.443e+08 ± 6% +1528.4% 8.863e+09 ± 1% perf-stat.branch-misses
3.32 ± 6% +1005.8% 36.72 ± 2% perf-stat.cache-miss-rate%
1.301e+08 ± 12% +7830.6% 1.032e+10 ± 1% perf-stat.cache-misses
3.903e+09 ± 6% +620.2% 2.811e+10 ± 1% perf-stat.cache-references
446946 ± 5% +439.6% 2411928 ± 2% perf-stat.context-switches
1.89e+12 ± 5% +118.5% 4.13e+12 ± 1% perf-stat.cpu-cycles
8650 ± 7% +161.0% 22573 ± 8% perf-stat.cpu-migrations
0.05 ± 33% +2689.6% 1.38 ± 7% perf-stat.dTLB-load-miss-rate%
3.894e+08 ± 5% +3138.7% 1.261e+10 ± 6% perf-stat.dTLB-load-misses
0.01 ± 59% +1236.0% 0.14 ± 14% perf-stat.dTLB-store-miss-rate%
71487525 ± 3% +1048.6% 8.211e+08 ± 10% perf-stat.dTLB-store-misses
58.98 ± 1% -45.2% 32.33 ± 1% perf-stat.iTLB-load-miss-rate%
81063987 ± 1% -11.6% 71683301 ± 2% perf-stat.iTLB-load-misses
56389233 ± 2% +166.2% 1.501e+08 ± 3% perf-stat.iTLB-loads
4.453e+12 ± 7% -28.5% 3.184e+12 ± 1% perf-stat.instructions
54933 ± 7% -19.1% 44447 ± 3% perf-stat.instructions-per-iTLB-miss
2.35 ± 1% -67.2% 0.77 ± 2% perf-stat.ipc
417970 ± 0% +7.8% 450558 ± 1% perf-stat.minor-faults
47679867 ± 13% +10402.2% 5.007e+09 ± 15% perf-stat.node-load-misses
44725841 ± 6% +8141.8% 3.686e+09 ± 19% perf-stat.node-loads
42.89 ± 15% +26.5% 54.27 ± 5% perf-stat.node-store-miss-rate%
12288499 ± 27% +4184.3% 5.265e+08 ± 5% perf-stat.node-store-misses
15883566 ± 5% +2695.6% 4.44e+08 ± 7% perf-stat.node-stores
417971 ± 0% +7.8% 450593 ± 1% perf-stat.page-faults
7366 ± 24% +440.8% 39839 ± 41% numa-vmstat.node0.nr_active_file
1317901 ± 5% +1806.1% 25120768 ± 13% numa-vmstat.node0.nr_dirtied
1421455 ± 5% +248.9% 4959834 ± 0% numa-vmstat.node0.nr_file_pages
4176580 ± 3% -89.1% 456964 ± 2% numa-vmstat.node0.nr_free_pages
1391778 ± 5% +246.7% 4825746 ± 0% numa-vmstat.node0.nr_inactive_file
58818 ±123% +279.1% 222990 ± 12% numa-vmstat.node0.nr_slab_reclaimable
116.00 ± 60% +61385.8% 71323 ± 0% numa-vmstat.node0.nr_unevictable
328581 ± 4% +7288.0% 24275709 ± 13% numa-vmstat.node0.nr_written
7366 ± 24% +440.8% 39840 ± 41% numa-vmstat.node0.nr_zone_active_file
1391778 ± 5% +246.7% 4825787 ± 0% numa-vmstat.node0.nr_zone_inactive_file
116.00 ± 60% +61386.0% 71323 ± 0% numa-vmstat.node0.nr_zone_unevictable
320855 ±126% +2018.4% 6796862 ± 81% numa-vmstat.node0.numa_foreign
811358 ± 69% +2361.3% 19970097 ± 36% numa-vmstat.node0.numa_hit
811349 ± 69% +2361.3% 19970080 ± 36% numa-vmstat.node0.numa_local
902242 ± 54% +496.3% 5379824 ± 70% numa-vmstat.node0.numa_miss
7395 ± 27% +260.7% 26675 ± 59% numa-vmstat.node1.nr_active_file
1475724 ± 5% +1764.2% 27510187 ± 12% numa-vmstat.node1.nr_dirtied
1515054 ± 5% +234.3% 5065379 ± 0% numa-vmstat.node1.nr_file_pages
50710 ± 0% -76.1% 12111 ± 8% numa-vmstat.node1.nr_free_cma
4030612 ± 4% -89.1% 438738 ± 0% numa-vmstat.node1.nr_free_pages
1498440 ± 5% +230.9% 4957711 ± 0% numa-vmstat.node1.nr_inactive_file
129.75 ± 60% +54860.9% 71311 ± 0% numa-vmstat.node1.nr_unevictable
403988 ± 4% +6484.2% 26599507 ± 12% numa-vmstat.node1.nr_written
7395 ± 27% +260.7% 26675 ± 59% numa-vmstat.node1.nr_zone_active_file
1498440 ± 5% +230.9% 4957710 ± 0% numa-vmstat.node1.nr_zone_inactive_file
129.75 ± 60% +54860.9% 71311 ± 0% numa-vmstat.node1.nr_zone_unevictable
872187 ± 53% +506.1% 5286095 ± 71% numa-vmstat.node1.numa_foreign
1639879 ± 30% +1191.0% 21171288 ± 43% numa-vmstat.node1.numa_hit
1639864 ± 30% +1191.0% 21171282 ± 43% numa-vmstat.node1.numa_local
290798 ±150% +2205.1% 6703187 ± 82% numa-vmstat.node1.numa_miss
14775 ± 2% +351.0% 66631 ± 1% proc-vmstat.nr_active_file
3314070 ± 0% +3927.8% 1.335e+08 ± 1% proc-vmstat.nr_dirtied
2060410 ± 0% -14.8% 1755444 ± 0% proc-vmstat.nr_dirty
2938721 ± 0% +241.3% 10029056 ± 0% proc-vmstat.nr_file_pages
50710 ± 0% -76.5% 11910 ± 8% proc-vmstat.nr_free_cma
8204888 ± 0% -89.1% 891960 ± 1% proc-vmstat.nr_free_pages
2892431 ± 0% +238.4% 9787283 ± 0% proc-vmstat.nr_inactive_file
205104 ± 0% +106.7% 423919 ± 0% proc-vmstat.nr_slab_reclaimable
247.25 ± 57% +57596.0% 142653 ± 0% proc-vmstat.nr_unevictable
1398737 ± 1% +9324.0% 1.318e+08 ± 1% proc-vmstat.nr_written
14775 ± 2% +351.0% 66633 ± 1% proc-vmstat.nr_zone_active_file
2892431 ± 0% +238.4% 9787320 ± 0% proc-vmstat.nr_zone_inactive_file
247.25 ± 57% +57596.1% 142653 ± 0% proc-vmstat.nr_zone_unevictable
2060409 ± 0% -14.8% 1755458 ± 0% proc-vmstat.nr_zone_write_pending
1223028 ± 9% +2245.9% 28690632 ± 30% proc-vmstat.numa_foreign
282.75 ±109% +8696.6% 24872 ± 11% proc-vmstat.numa_hint_faults
82.00 ± 56% +23385.1% 19257 ± 13% proc-vmstat.numa_hint_faults_local
2660314 ± 3% +3794.0% 1.036e+08 ± 10% proc-vmstat.numa_hit
2660293 ± 3% +3794.0% 1.036e+08 ± 10% proc-vmstat.numa_local
1223028 ± 9% +2245.9% 28690610 ± 30% proc-vmstat.numa_miss
195.75 ±143% +693.5% 1553 ± 22% proc-vmstat.numa_pages_migrated
1185 ±115% +2435.2% 30048 ± 10% proc-vmstat.numa_pte_updates
2845 ± 6% +57.4% 4479 ± 13% proc-vmstat.pgactivate
0.00 ± 0% +Inf% 5194751 ± 13% proc-vmstat.pgalloc_dma32
4015781 ± 0% +3068.0% 1.272e+08 ± 1% proc-vmstat.pgalloc_normal
443537 ± 0% +27265.1% 1.214e+08 ± 1% proc-vmstat.pgfree
0.75 ±173% +16200.0% 122.25 ± 42% proc-vmstat.pgmigrate_fail
195.75 ±143% +693.5% 1553 ± 22% proc-vmstat.pgmigrate_success
13772223 ± 1% +4931.5% 6.93e+08 ± 1% proc-vmstat.pgpgout
364.25 ± 57% +46700.3% 170470 ± 0% proc-vmstat.unevictable_pgs_culled
3771 ± 0% +283.2% 14453 ± 1% sched_debug.cfs_rq:/.exec_clock.avg
123.31 ± 22% +863.5% 1188 ± 30% sched_debug.cfs_rq:/.exec_clock.min
15977 ± 4% -28.2% 11469 ± 17% sched_debug.cfs_rq:/.exec_clock.stddev
41685 ± 1% +345.4% 185655 ± 7% sched_debug.cfs_rq:/.load.avg
161304 ± 1% +95.9% 315971 ± 4% sched_debug.cfs_rq:/.load.stddev
32.87 ± 4% +435.3% 175.97 ± 6% sched_debug.cfs_rq:/.load_avg.avg
157.72 ± 7% +28.4% 202.48 ± 3% sched_debug.cfs_rq:/.load_avg.stddev
18584 ± 14% +47.5% 27418 ± 5% sched_debug.cfs_rq:/.min_vruntime.avg
16749 ± 3% -28.6% 11963 ± 16% sched_debug.cfs_rq:/.min_vruntime.stddev
0.10 ± 17% +136.0% 0.23 ± 6% sched_debug.cfs_rq:/.nr_running.avg
0.29 ± 6% +43.8% 0.41 ± 3% sched_debug.cfs_rq:/.nr_running.stddev
29.02 ± 1% +220.3% 92.95 ± 9% sched_debug.cfs_rq:/.runnable_load_avg.avg
141.32 ± 0% +35.8% 191.91 ± 4% sched_debug.cfs_rq:/.runnable_load_avg.stddev
83254 ± 2% -38.9% 50853 ± 35% sched_debug.cfs_rq:/.spread0.max
-12291 ±-14% +72.7% -21225 ±-23% sched_debug.cfs_rq:/.spread0.min
16749 ± 3% -28.6% 11966 ± 16% sched_debug.cfs_rq:/.spread0.stddev
185.76 ± 4% +85.4% 344.36 ± 2% sched_debug.cfs_rq:/.util_avg.avg
158.08 ± 1% +37.9% 217.92 ± 3% sched_debug.cfs_rq:/.util_avg.stddev
182455 ± 4% +24.2% 226649 ± 10% sched_debug.cpu.avg_idle.stddev
5.84 ± 8% +163.5% 15.38 ± 7% sched_debug.cpu.clock.stddev
5.84 ± 8% +163.5% 15.38 ± 7% sched_debug.cpu.clock_task.stddev
28.76 ± 1% +216.9% 91.13 ± 11% sched_debug.cpu.cpu_load[0].avg
141.31 ± 0% +35.0% 190.72 ± 5% sched_debug.cpu.cpu_load[0].stddev
29.02 ± 1% +352.8% 131.42 ± 15% sched_debug.cpu.cpu_load[1].avg
141.76 ± 0% +41.5% 200.56 ± 5% sched_debug.cpu.cpu_load[1].stddev
28.81 ± 1% +343.7% 127.84 ± 15% sched_debug.cpu.cpu_load[2].avg
141.36 ± 0% +39.6% 197.31 ± 5% sched_debug.cpu.cpu_load[2].stddev
28.54 ± 1% +324.6% 121.20 ± 15% sched_debug.cpu.cpu_load[3].avg
141.27 ± 0% +35.5% 191.48 ± 5% sched_debug.cpu.cpu_load[3].stddev
28.22 ± 0% +299.2% 112.65 ± 16% sched_debug.cpu.cpu_load[4].avg
141.27 ± 0% +29.2% 182.54 ± 6% sched_debug.cpu.cpu_load[4].stddev
191.26 ± 18% +141.2% 461.32 ± 7% sched_debug.cpu.curr->pid.avg
676.82 ± 11% +35.3% 915.48 ± 5% sched_debug.cpu.curr->pid.stddev
43084 ± 5% +336.3% 187993 ± 7% sched_debug.cpu.load.avg
164119 ± 2% +93.1% 316944 ± 5% sched_debug.cpu.load.stddev
0.00 ± 9% +52.3% 0.00 ± 2% sched_debug.cpu.next_balance.stddev
22918 ± 44% +69.8% 38923 ± 1% sched_debug.cpu.nr_load_updates.avg
8553 ± 24% +140.5% 20569 ± 24% sched_debug.cpu.nr_load_updates.min
14186 ± 11% -36.9% 8947 ± 13% sched_debug.cpu.nr_load_updates.stddev
0.11 ± 20% +112.0% 0.24 ± 7% sched_debug.cpu.nr_running.avg
0.32 ± 11% +28.4% 0.41 ± 4% sched_debug.cpu.nr_running.stddev
5869 ± 5% +320.3% 24672 ± 2% sched_debug.cpu.nr_switches.avg
29766 ± 33% +538.3% 190010 ± 25% sched_debug.cpu.nr_switches.max
1914 ± 4% +110.7% 4034 ± 27% sched_debug.cpu.nr_switches.min
4636 ± 27% +560.1% 30603 ± 23% sched_debug.cpu.nr_switches.stddev
0.39 ± 1% -33.8% 0.26 ± 5% sched_debug.cpu.nr_uninterruptible.avg
13.12 ± 18% +116.2% 28.38 ± 24% sched_debug.cpu.nr_uninterruptible.max
-15.00 ±-30% +110.8% -31.62 ±-22% sched_debug.cpu.nr_uninterruptible.min
4.87 ± 14% +163.9% 12.86 ± 27% sched_debug.cpu.nr_uninterruptible.stddev
3796 ± 9% +494.6% 22569 ± 2% sched_debug.cpu.sched_count.avg
26488 ± 37% +606.2% 187066 ± 26% sched_debug.cpu.sched_count.max
190.62 ± 24% +1031.8% 2157 ± 55% sched_debug.cpu.sched_count.min
4339 ± 30% +601.8% 30453 ± 23% sched_debug.cpu.sched_count.stddev
1804 ± 9% +514.1% 11080 ± 2% sched_debug.cpu.sched_goidle.avg
13206 ± 37% +606.5% 93305 ± 26% sched_debug.cpu.sched_goidle.max
45.06 ± 16% +2154.2% 1015 ± 59% sched_debug.cpu.sched_goidle.min
2184 ± 30% +595.6% 15196 ± 23% sched_debug.cpu.sched_goidle.stddev
1822 ± 9% +515.7% 11222 ± 2% sched_debug.cpu.ttwu_count.avg
15326 ± 33% +524.9% 95775 ± 19% sched_debug.cpu.ttwu_count.max
66.31 ± 22% +1399.6% 994.44 ± 71% sched_debug.cpu.ttwu_count.min
2476 ± 28% +538.9% 15824 ± 16% sched_debug.cpu.ttwu_count.stddev
1013 ± 0% +78.4% 1807 ± 0% sched_debug.cpu.ttwu_local.avg
7952 ± 25% -43.3% 4512 ± 29% sched_debug.cpu.ttwu_local.max
35.31 ± 23% +506.5% 214.19 ± 50% sched_debug.cpu.ttwu_local.min
1311 ± 16% -32.0% 891.49 ± 15% sched_debug.cpu.ttwu_local.stddev
2.58 ± 9% -100.0% 0.00 ± -1% perf-profile.calltrace.cycles-pp.___might_sleep.__might_sleep.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks
1.32 ± 10% -100.0% 0.00 ± -1% perf-profile.calltrace.cycles-pp.___might_sleep.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages
0.00 ± -1% +Inf% 0.93 ± 4% perf-profile.calltrace.cycles-pp.__add_to_page_cache_locked.add_to_page_cache_lru.pagecache_get_page.grab_cache_page_write_begin.ext4_da_write_begin
0.00 ± -1% +Inf% 1.43 ± 11% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.pagecache_get_page.grab_cache_page_write_begin
0.00 ± -1% +Inf% 2.30 ± 1% perf-profile.calltrace.cycles-pp.__block_commit_write.block_write_end.generic_write_end.ext4_da_write_end.generic_perform_write
0.00 ± -1% +Inf% 16.00 ± 3% perf-profile.calltrace.cycles-pp.__block_write_begin.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter
0.00 ± -1% +Inf% 15.98 ± 3% perf-profile.calltrace.cycles-pp.__block_write_begin_int.__block_write_begin.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter
0.00 ± -1% +Inf% 4.97 ± 3% perf-profile.calltrace.cycles-pp.__copy_user_nocache.pmem_do_bvec.pmem_make_request.generic_make_request.submit_bio
0.00 ± -1% +Inf% 2.29 ± 3% perf-profile.calltrace.cycles-pp.__delete_from_page_cache.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_node_memcg
0.00 ± -1% +Inf% 2.38 ± 6% perf-profile.calltrace.cycles-pp.__es_insert_extent.ext4_es_insert_extent.ext4_da_get_block_prep.__block_write_begin_int.__block_write_begin
0.00 ± -1% +Inf% 1.03 ± 5% perf-profile.calltrace.cycles-pp.__es_tree_search.ext4_es_insert_extent.ext4_da_get_block_prep.__block_write_begin_int.__block_write_begin
0.00 ± -1% +Inf% 1.76 ± 6% perf-profile.calltrace.cycles-pp.__ext4_journal_start_sb.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter
0.00 ± -1% +Inf% 3.23 ± 3% perf-profile.calltrace.cycles-pp.__find_get_block.__getblk_gfp.__read_extent_tree_block.ext4_find_extent.ext4_ext_map_blocks
0.00 ± -1% +Inf% 1.70 ± 3% perf-profile.calltrace.cycles-pp.__find_get_block_slow.__find_get_block.__getblk_gfp.__read_extent_tree_block.ext4_find_extent
35.05 ± 7% -100.0% 0.00 ± -1% perf-profile.calltrace.cycles-pp.__find_get_block_slow.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages
0.00 ± -1% +Inf% 30.01 ± 3% perf-profile.calltrace.cycles-pp.__generic_file_write_iter.ext4_file_write_iter.__vfs_write.vfs_write.sys_write
0.00 ± -1% +Inf% 3.42 ± 3% perf-profile.calltrace.cycles-pp.__getblk_gfp.__read_extent_tree_block.ext4_find_extent.ext4_ext_map_blocks.ext4_da_get_block_prep
1.32 ± 5% -100.0% 0.00 ± -1% perf-profile.calltrace.cycles-pp.__might_sleep.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages.do_writepages
4.51 ± 9% -100.0% 0.00 ± -1% perf-profile.calltrace.cycles-pp.__might_sleep.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages
0.00 ± -1% +Inf% 1.56 ± 11% perf-profile.calltrace.cycles-pp.__page_cache_alloc.pagecache_get_page.grab_cache_page_write_begin.ext4_da_write_begin.generic_perform_write
0.00 ± -1% +Inf% 2.00 ± 4% perf-profile.calltrace.cycles-pp.__radix_tree_lookup.__delete_from_page_cache.__remove_mapping.shrink_page_list.shrink_inactive_list
15.27 ± 6% -95.3% 0.73 ± 5% perf-profile.calltrace.cycles-pp.__radix_tree_lookup.radix_tree_lookup_slot.find_get_entry.pagecache_get_page.__find_get_block_slow
0.00 ± -1% +Inf% 1.62 ± 4% perf-profile.calltrace.cycles-pp.__radix_tree_lookup.radix_tree_lookup_slot.find_get_entry.pagecache_get_page.grab_cache_page_write_begin
0.00 ± -1% +Inf% 3.56 ± 3% perf-profile.calltrace.cycles-pp.__read_extent_tree_block.ext4_find_extent.ext4_ext_map_blocks.ext4_da_get_block_prep.__block_write_begin_int
0.00 ± -1% +Inf% 2.90 ± 3% perf-profile.calltrace.cycles-pp.__remove_mapping.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node
0.00 ± -1% +Inf% 1.27 ± 2% perf-profile.calltrace.cycles-pp.__set_page_dirty.mark_buffer_dirty.__block_commit_write.block_write_end.generic_write_end
0.00 ± -1% +Inf% 0.96 ± 9% perf-profile.calltrace.cycles-pp.__test_set_page_writeback.ext4_bio_write_page.mpage_submit_page.mpage_process_page_bufs.mpage_prepare_extent_to_map
0.82 ± 14% -100.0% 0.00 ± -1% perf-profile.calltrace.cycles-pp.__tick_nohz_idle_enter.tick_nohz_irq_exit.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
0.00 ± -1% +Inf% 30.45 ± 3% perf-profile.calltrace.cycles-pp.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath
51.11 ± 7% -65.2% 17.80 ± 2% perf-profile.calltrace.cycles-pp.__writeback_inodes_wb.wb_writeback.wb_workfn.process_one_work.worker_thread
51.11 ± 7% -65.2% 17.80 ± 2% perf-profile.calltrace.cycles-pp.__writeback_single_inode.writeback_sb_inodes.__writeback_inodes_wb.wb_writeback.wb_workfn
0.00 ± -1% +Inf% 1.77 ± 5% perf-profile.calltrace.cycles-pp.add_to_page_cache_lru.pagecache_get_page.grab_cache_page_write_begin.ext4_da_write_begin.generic_perform_write
0.00 ± -1% +Inf% 1.53 ± 10% perf-profile.calltrace.cycles-pp.alloc_pages_current.__page_cache_alloc.pagecache_get_page.grab_cache_page_write_begin.ext4_da_write_begin
2.79 ± 16% -83.8% 0.45 ± 62% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
0.00 ± -1% +Inf% 2.42 ± 3% perf-profile.calltrace.cycles-pp.bio_endio.pmem_make_request.generic_make_request.submit_bio.ext4_io_submit
0.00 ± -1% +Inf% 2.35 ± 1% perf-profile.calltrace.cycles-pp.block_write_end.generic_write_end.ext4_da_write_end.generic_perform_write.__generic_file_write_iter
0.00 ± -1% +Inf% 2.19 ± 2% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.__vfs_write
51.11 ± 7% -65.2% 17.80 ± 2% perf-profile.calltrace.cycles-pp.do_writepages.__writeback_single_inode.writeback_sb_inodes.__writeback_inodes_wb.wb_writeback
0.00 ± -1% +Inf% 1.04 ± 4% perf-profile.calltrace.cycles-pp.end_page_writeback.ext4_finish_bio.ext4_end_bio.bio_endio.pmem_make_request
0.48 ± 58% +6534.9% 31.85 ± 3% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 1.03 ± 2% perf-profile.calltrace.cycles-pp.ext4_bio_write_page.mpage_submit_page.mpage_map_and_submit_buffers.ext4_writepages.do_writepages
0.00 ± -1% +Inf% 9.22 ± 3% perf-profile.calltrace.cycles-pp.ext4_bio_write_page.mpage_submit_page.mpage_process_page_bufs.mpage_prepare_extent_to_map.ext4_writepages
0.00 ± -1% +Inf% 14.90 ± 3% perf-profile.calltrace.cycles-pp.ext4_da_get_block_prep.__block_write_begin_int.__block_write_begin.ext4_da_write_begin.generic_perform_write
0.00 ± -1% +Inf% 23.29 ± 4% perf-profile.calltrace.cycles-pp.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.__vfs_write
0.00 ± -1% +Inf% 3.61 ± 1% perf-profile.calltrace.cycles-pp.ext4_da_write_end.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.__vfs_write
0.00 ± -1% +Inf% 1.53 ± 3% perf-profile.calltrace.cycles-pp.ext4_end_bio.bio_endio.pmem_make_request.generic_make_request.submit_bio
0.00 ± -1% +Inf% 3.80 ± 5% perf-profile.calltrace.cycles-pp.ext4_es_insert_extent.ext4_da_get_block_prep.__block_write_begin_int.__block_write_begin.ext4_da_write_begin
0.00 ± -1% +Inf% 4.19 ± 4% perf-profile.calltrace.cycles-pp.ext4_es_lookup_extent.ext4_da_get_block_prep.__block_write_begin_int.__block_write_begin.ext4_da_write_begin
0.00 ± -1% +Inf% 6.14 ± 3% perf-profile.calltrace.cycles-pp.ext4_ext_map_blocks.ext4_da_get_block_prep.__block_write_begin_int.__block_write_begin.ext4_da_write_begin
49.67 ± 7% -97.1% 1.42 ± 2% perf-profile.calltrace.cycles-pp.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages.do_writepages.__writeback_single_inode
0.00 ± -1% +Inf% 30.29 ± 3% perf-profile.calltrace.cycles-pp.ext4_file_write_iter.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 5.73 ± 3% perf-profile.calltrace.cycles-pp.ext4_find_extent.ext4_ext_map_blocks.ext4_da_get_block_prep.__block_write_begin_int.__block_write_begin
0.00 ± -1% +Inf% 1.33 ± 4% perf-profile.calltrace.cycles-pp.ext4_finish_bio.ext4_end_bio.bio_endio.pmem_make_request.generic_make_request
0.00 ± -1% +Inf% 7.16 ± 2% perf-profile.calltrace.cycles-pp.ext4_io_submit.ext4_bio_write_page.mpage_submit_page.mpage_process_page_bufs.mpage_prepare_extent_to_map
0.00 ± -1% +Inf% 1.17 ± 1% perf-profile.calltrace.cycles-pp.ext4_io_submit.ext4_writepages.do_writepages.__writeback_single_inode.writeback_sb_inodes
50.46 ± 7% -95.1% 2.46 ± 2% perf-profile.calltrace.cycles-pp.ext4_map_blocks.ext4_writepages.do_writepages.__writeback_single_inode.writeback_sb_inodes
0.00 ± -1% +Inf% 0.88 ± 13% perf-profile.calltrace.cycles-pp.ext4_put_io_end_defer.bio_endio.pmem_make_request.generic_make_request.submit_bio
0.00 ± -1% +Inf% 1.18 ± 2% perf-profile.calltrace.cycles-pp.ext4_releasepage.try_to_release_page.shrink_page_list.shrink_inactive_list.shrink_node_memcg
51.11 ± 7% -65.3% 17.72 ± 2% perf-profile.calltrace.cycles-pp.ext4_writepages.do_writepages.__writeback_single_inode.writeback_sb_inodes.__writeback_inodes_wb
1.32 ± 6% -100.0% 0.00 ± -1% perf-profile.calltrace.cycles-pp.find_get_entry.__find_get_block_slow.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks
0.00 ± -1% +Inf% 1.32 ± 2% perf-profile.calltrace.cycles-pp.find_get_entry.pagecache_get_page.__find_get_block_slow.__find_get_block.__getblk_gfp
21.16 ± 6% -100.0% 0.00 ± -1% perf-profile.calltrace.cycles-pp.find_get_entry.pagecache_get_page.__find_get_block_slow.unmap_underlying_metadata.ext4_ext_map_blocks
0.00 ± -1% +Inf% 1.66 ± 5% perf-profile.calltrace.cycles-pp.find_get_entry.pagecache_get_page.grab_cache_page_write_begin.ext4_da_write_begin.generic_perform_write
0.00 ± -1% +Inf% 0.98 ± 2% perf-profile.calltrace.cycles-pp.find_get_pages_tag.pagevec_lookup_tag.mpage_prepare_extent_to_map.ext4_writepages.do_writepages
0.00 ± -1% +Inf% 7.88 ± 2% perf-profile.calltrace.cycles-pp.generic_make_request.submit_bio.ext4_io_submit.ext4_bio_write_page.mpage_submit_page
0.00 ± -1% +Inf% 1.16 ± 1% perf-profile.calltrace.cycles-pp.generic_make_request.submit_bio.ext4_io_submit.ext4_writepages.do_writepages
0.00 ± -1% +Inf% 29.66 ± 3% perf-profile.calltrace.cycles-pp.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.__vfs_write.vfs_write
0.00 ± -1% +Inf% 2.72 ± 2% perf-profile.calltrace.cycles-pp.generic_write_end.ext4_da_write_end.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter
0.00 ± -1% +Inf% 1.16 ± 10% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.pagecache_get_page
0.00 ± -1% +Inf% 5.13 ± 7% perf-profile.calltrace.cycles-pp.grab_cache_page_write_begin.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter
1.02 ± 20% -100.0% 0.00 ± -1% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter
1.04 ± 11% -100.0% 0.00 ± -1% perf-profile.calltrace.cycles-pp.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter.call_cpuidle
0.00 ± -1% +Inf% 1.44 ± 5% perf-profile.calltrace.cycles-pp.jbd2__journal_start.__ext4_journal_start_sb.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter
0.00 ± -1% +Inf% 1.11 ± 2% perf-profile.calltrace.cycles-pp.jbd2_journal_try_to_free_buffers.ext4_releasepage.try_to_release_page.shrink_page_list.shrink_inactive_list
0.00 ± -1% +Inf% 6.87 ± 1% perf-profile.calltrace.cycles-pp.kswapd.kthread.ret_from_fork
51.25 ± 7% -51.4% 24.89 ± 1% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
1.08 ± 19% -100.0% 0.00 ± -1% perf-profile.calltrace.cycles-pp.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter.call_cpuidle
0.00 ± -1% +Inf% 1.65 ± 2% perf-profile.calltrace.cycles-pp.mark_buffer_dirty.__block_commit_write.block_write_end.generic_write_end.ext4_da_write_end
0.00 ± -1% +Inf% 1.80 ± 1% perf-profile.calltrace.cycles-pp.mpage_map_and_submit_buffers.ext4_writepages.do_writepages.__writeback_single_inode.writeback_sb_inodes
0.00 ± -1% +Inf% 11.87 ± 2% perf-profile.calltrace.cycles-pp.mpage_prepare_extent_to_map.ext4_writepages.do_writepages.__writeback_single_inode.writeback_sb_inodes
0.00 ± -1% +Inf% 10.50 ± 3% perf-profile.calltrace.cycles-pp.mpage_process_page_bufs.mpage_prepare_extent_to_map.ext4_writepages.do_writepages.__writeback_single_inode
0.00 ± -1% +Inf% 1.11 ± 1% perf-profile.calltrace.cycles-pp.mpage_submit_page.mpage_map_and_submit_buffers.ext4_writepages.do_writepages.__writeback_single_inode
0.00 ± -1% +Inf% 9.77 ± 3% perf-profile.calltrace.cycles-pp.mpage_submit_page.mpage_process_page_bufs.mpage_prepare_extent_to_map.ext4_writepages.do_writepages
0.00 ± -1% +Inf% 1.39 ± 2% perf-profile.calltrace.cycles-pp.pagecache_get_page.__find_get_block_slow.__find_get_block.__getblk_gfp.__read_extent_tree_block
27.73 ± 7% -100.0% 0.00 ± -1% perf-profile.calltrace.cycles-pp.pagecache_get_page.__find_get_block_slow.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks
0.00 ± -1% +Inf% 5.07 ± 6% perf-profile.calltrace.cycles-pp.pagecache_get_page.grab_cache_page_write_begin.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter
0.00 ± -1% +Inf% 0.98 ± 3% perf-profile.calltrace.cycles-pp.pagevec_lookup_tag.mpage_prepare_extent_to_map.ext4_writepages.do_writepages.__writeback_single_inode
0.00 ± -1% +Inf% 5.05 ± 3% perf-profile.calltrace.cycles-pp.pmem_do_bvec.pmem_make_request.generic_make_request.submit_bio.ext4_io_submit
0.00 ± -1% +Inf% 7.58 ± 2% perf-profile.calltrace.cycles-pp.pmem_make_request.generic_make_request.submit_bio.ext4_io_submit.ext4_bio_write_page
0.00 ± -1% +Inf% 1.13 ± 1% perf-profile.calltrace.cycles-pp.pmem_make_request.generic_make_request.submit_bio.ext4_io_submit.ext4_writepages
51.22 ± 7% -65.1% 17.86 ± 2% perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
17.22 ± 6% -100.0% 0.00 ± -1% perf-profile.calltrace.cycles-pp.radix_tree_lookup_slot.find_get_entry.pagecache_get_page.__find_get_block_slow.unmap_underlying_metadata
0.00 ± -1% +Inf% 1.64 ± 4% perf-profile.calltrace.cycles-pp.radix_tree_lookup_slot.find_get_entry.pagecache_get_page.grab_cache_page_write_begin.ext4_da_write_begin
51.25 ± 7% -51.4% 24.89 ± 1% perf-profile.calltrace.cycles-pp.ret_from_fork
0.00 ± -1% +Inf% 5.95 ± 1% perf-profile.calltrace.cycles-pp.shrink_inactive_list.shrink_node_memcg.shrink_node.kswapd.kthread
0.00 ± -1% +Inf% 6.86 ± 1% perf-profile.calltrace.cycles-pp.shrink_node.kswapd.kthread.ret_from_fork
0.00 ± -1% +Inf% 5.98 ± 1% perf-profile.calltrace.cycles-pp.shrink_node_memcg.shrink_node.kswapd.kthread.ret_from_fork
0.00 ± -1% +Inf% 5.36 ± 1% perf-profile.calltrace.cycles-pp.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node.kswapd
2.73 ± 16% -88.4% 0.32 ±103% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter.call_cpuidle.cpu_startup_entry
0.00 ± -1% +Inf% 7.14 ± 2% perf-profile.calltrace.cycles-pp.submit_bio.ext4_io_submit.ext4_bio_write_page.mpage_submit_page.mpage_process_page_bufs
0.00 ± -1% +Inf% 1.17 ± 1% perf-profile.calltrace.cycles-pp.submit_bio.ext4_io_submit.ext4_writepages.do_writepages.__writeback_single_inode
0.00 ± -1% +Inf% 31.25 ± 3% perf-profile.calltrace.cycles-pp.sys_write.entry_SYSCALL_64_fastpath
0.83 ± 14% -100.0% 0.00 ± -1% perf-profile.calltrace.cycles-pp.tick_nohz_irq_exit.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter
0.00 ± -1% +Inf% 1.20 ± 1% perf-profile.calltrace.cycles-pp.try_to_release_page.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node
45.39 ± 7% -100.0% 0.00 ± -1% perf-profile.calltrace.cycles-pp.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages.do_writepages
0.00 ± -1% +Inf% 31.09 ± 3% perf-profile.calltrace.cycles-pp.vfs_write.sys_write.entry_SYSCALL_64_fastpath
51.11 ± 7% -65.2% 17.80 ± 2% perf-profile.calltrace.cycles-pp.wb_workfn.process_one_work.worker_thread.kthread.ret_from_fork
51.11 ± 7% -65.2% 17.80 ± 2% perf-profile.calltrace.cycles-pp.wb_writeback.wb_workfn.process_one_work.worker_thread.kthread
51.23 ± 7% -65.1% 17.86 ± 2% perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
51.11 ± 7% -65.2% 17.80 ± 2% perf-profile.calltrace.cycles-pp.writeback_sb_inodes.__writeback_inodes_wb.wb_writeback.wb_workfn.process_one_work
3.93 ± 9% -88.6% 0.45 ± 6% perf-profile.children.cycles-pp.___might_sleep
0.00 ± -1% +Inf% 0.93 ± 5% perf-profile.children.cycles-pp.__add_to_page_cache_locked
0.00 ± -1% +Inf% 1.46 ± 10% perf-profile.children.cycles-pp.__alloc_pages_nodemask
0.04 ± 58% +5668.8% 2.31 ± 1% perf-profile.children.cycles-pp.__block_commit_write
0.07 ± 23% +24519.2% 16.00 ± 3% perf-profile.children.cycles-pp.__block_write_begin
0.07 ± 23% +24488.5% 15.98 ± 3% perf-profile.children.cycles-pp.__block_write_begin_int
0.21 ± 5% +2617.6% 5.78 ± 2% perf-profile.children.cycles-pp.__copy_user_nocache
0.00 ± -1% +Inf% 2.29 ± 3% perf-profile.children.cycles-pp.__delete_from_page_cache
0.05 ± 58% +5573.7% 2.69 ± 6% perf-profile.children.cycles-pp.__es_insert_extent
0.01 ±173% +10280.0% 1.30 ± 5% perf-profile.children.cycles-pp.__es_tree_search
0.04 ±102% +5485.7% 1.96 ± 6% perf-profile.children.cycles-pp.__ext4_journal_start_sb
0.00 ± -1% +Inf% 3.36 ± 2% perf-profile.children.cycles-pp.__find_get_block
35.73 ± 7% -95.0% 1.80 ± 3% perf-profile.children.cycles-pp.__find_get_block_slow
0.33 ± 11% +8997.0% 30.02 ± 3% perf-profile.children.cycles-pp.__generic_file_write_iter
0.00 ± -1% +Inf% 3.58 ± 3% perf-profile.children.cycles-pp.__getblk_gfp
5.86 ± 8% -89.9% 0.59 ± 4% perf-profile.children.cycles-pp.__might_sleep
0.00 ± -1% +Inf% 1.57 ± 10% perf-profile.children.cycles-pp.__page_cache_alloc
15.94 ± 6% -72.4% 4.41 ± 1% perf-profile.children.cycles-pp.__radix_tree_lookup
0.00 ± -1% +Inf% 3.70 ± 3% perf-profile.children.cycles-pp.__read_extent_tree_block
0.00 ± -1% +Inf% 2.91 ± 3% perf-profile.children.cycles-pp.__remove_mapping
0.00 ± -1% +Inf% 1.28 ± 2% perf-profile.children.cycles-pp.__set_page_dirty
0.00 ± -1% +Inf% 1.11 ± 9% perf-profile.children.cycles-pp.__test_set_page_writeback
0.87 ± 13% -75.4% 0.21 ± 7% perf-profile.children.cycles-pp.__tick_nohz_idle_enter
0.42 ± 7% +7123.1% 30.52 ± 3% perf-profile.children.cycles-pp.__vfs_write
51.11 ± 7% -65.2% 17.80 ± 2% perf-profile.children.cycles-pp.__writeback_inodes_wb
51.11 ± 7% -65.2% 17.80 ± 2% perf-profile.children.cycles-pp.__writeback_single_inode
0.09 ± 20% +1173.0% 1.18 ± 9% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.00 ± -1% +Inf% 1.77 ± 5% perf-profile.children.cycles-pp.add_to_page_cache_lru
0.00 ± -1% +Inf% 1.55 ± 10% perf-profile.children.cycles-pp.alloc_pages_current
2.98 ± 15% -69.8% 0.90 ± 10% perf-profile.children.cycles-pp.apic_timer_interrupt
0.10 ± 8% +2990.2% 3.17 ± 2% perf-profile.children.cycles-pp.bio_endio
0.03 ±102% +7750.0% 2.36 ± 1% perf-profile.children.cycles-pp.block_write_end
47.10 ± 8% -10.3% 42.25 ± 3% perf-profile.children.cycles-pp.call_cpuidle
0.04 ± 58% +5393.7% 2.20 ± 2% perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
47.72 ± 7% -10.5% 42.72 ± 3% perf-profile.children.cycles-pp.cpu_startup_entry
47.06 ± 8% -10.2% 42.25 ± 3% perf-profile.children.cycles-pp.cpuidle_enter
51.11 ± 7% -65.2% 17.80 ± 2% perf-profile.children.cycles-pp.do_writepages
0.01 ±173% +10920.0% 1.38 ± 4% perf-profile.children.cycles-pp.end_page_writeback
0.73 ± 4% +4272.0% 32.02 ± 3% perf-profile.children.cycles-pp.entry_SYSCALL_64_fastpath
0.06 ± 13% +16332.0% 10.27 ± 2% perf-profile.children.cycles-pp.ext4_bio_write_page
0.03 ±100% +54100.0% 14.90 ± 3% perf-profile.children.cycles-pp.ext4_da_get_block_prep
0.18 ± 8% +12661.6% 23.29 ± 4% perf-profile.children.cycles-pp.ext4_da_write_begin
0.07 ± 19% +5259.3% 3.62 ± 1% perf-profile.children.cycles-pp.ext4_da_write_end
0.06 ± 14% +3352.2% 1.98 ± 4% perf-profile.children.cycles-pp.ext4_end_bio
0.09 ± 18% +4988.6% 4.45 ± 5% perf-profile.children.cycles-pp.ext4_es_insert_extent
0.03 ±100% +16281.8% 4.50 ± 4% perf-profile.children.cycles-pp.ext4_es_lookup_extent
49.67 ± 7% -84.8% 7.57 ± 3% perf-profile.children.cycles-pp.ext4_ext_map_blocks
0.32 ± 11% +9440.9% 30.29 ± 3% perf-profile.children.cycles-pp.ext4_file_write_iter
0.07 ± 12% +8807.4% 6.01 ± 3% perf-profile.children.cycles-pp.ext4_find_extent
0.01 ±173% +13720.0% 1.73 ± 4% perf-profile.children.cycles-pp.ext4_finish_bio
0.27 ± 6% +3330.2% 9.09 ± 2% perf-profile.children.cycles-pp.ext4_io_submit
50.46 ± 7% -95.1% 2.48 ± 2% perf-profile.children.cycles-pp.ext4_map_blocks
0.01 ±173% +7016.7% 1.07 ± 12% perf-profile.children.cycles-pp.ext4_put_io_end_defer
0.00 ± -1% +Inf% 1.18 ± 2% perf-profile.children.cycles-pp.ext4_releasepage
51.11 ± 7% -65.3% 17.72 ± 2% perf-profile.children.cycles-pp.ext4_writepages
22.53 ± 6% -86.4% 3.07 ± 3% perf-profile.children.cycles-pp.find_get_entry
0.08 ± 5% +1047.1% 0.98 ± 2% perf-profile.children.cycles-pp.find_get_pages_tag
0.34 ± 1% +2691.2% 9.56 ± 2% perf-profile.children.cycles-pp.generic_make_request
0.32 ± 8% +9243.3% 29.67 ± 3% perf-profile.children.cycles-pp.generic_perform_write
0.05 ± 62% +5657.9% 2.73 ± 2% perf-profile.children.cycles-pp.generic_write_end
0.00 ± -1% +Inf% 1.22 ± 10% perf-profile.children.cycles-pp.get_page_from_freelist
0.09 ± 16% +5774.3% 5.14 ± 7% perf-profile.children.cycles-pp.grab_cache_page_write_begin
1.17 ± 16% -61.8% 0.45 ± 8% perf-profile.children.cycles-pp.hrtimer_interrupt
1.33 ± 16% -69.0% 0.41 ± 17% perf-profile.children.cycles-pp.irq_exit
0.01 ±173% +10616.7% 1.61 ± 5% perf-profile.children.cycles-pp.jbd2__journal_start
0.00 ± -1% +Inf% 1.12 ± 2% perf-profile.children.cycles-pp.jbd2_journal_try_to_free_buffers
0.00 ± -1% +Inf% 0.94 ± 15% perf-profile.children.cycles-pp.kmem_cache_alloc
0.00 ± -1% +Inf% 6.87 ± 1% perf-profile.children.cycles-pp.kswapd
51.25 ± 7% -51.4% 24.89 ± 1% perf-profile.children.cycles-pp.kthread
1.22 ± 16% -62.0% 0.46 ± 8% perf-profile.children.cycles-pp.local_apic_timer_interrupt
0.00 ± -1% +Inf% 1.66 ± 2% perf-profile.children.cycles-pp.mark_buffer_dirty
0.20 ± 12% +787.7% 1.80 ± 1% perf-profile.children.cycles-pp.mpage_map_and_submit_buffers
0.12 ± 4% +9394.0% 11.87 ± 2% perf-profile.children.cycles-pp.mpage_prepare_extent_to_map
0.00 ± -1% +Inf% 10.51 ± 3% perf-profile.children.cycles-pp.mpage_process_page_bufs
0.07 ± 14% +14430.0% 10.90 ± 2% perf-profile.children.cycles-pp.mpage_submit_page
28.44 ± 7% -77.0% 6.55 ± 5% perf-profile.children.cycles-pp.pagecache_get_page
0.09 ± 9% +1020.0% 0.98 ± 3% perf-profile.children.cycles-pp.pagevec_lookup_tag
0.21 ± 6% +2634.9% 5.88 ± 2% perf-profile.children.cycles-pp.pmem_do_bvec
0.32 ± 2% +2774.2% 9.20 ± 2% perf-profile.children.cycles-pp.pmem_make_request
51.22 ± 7% -65.1% 17.86 ± 2% perf-profile.children.cycles-pp.process_one_work
17.93 ± 6% -86.3% 2.45 ± 3% perf-profile.children.cycles-pp.radix_tree_lookup_slot
51.25 ± 7% -51.4% 24.89 ± 1% perf-profile.children.cycles-pp.ret_from_fork
0.00 ± -1% +Inf% 5.96 ± 1% perf-profile.children.cycles-pp.shrink_inactive_list
0.00 ± -1% +Inf% 6.86 ± 1% perf-profile.children.cycles-pp.shrink_node
0.00 ± -1% +Inf% 5.98 ± 1% perf-profile.children.cycles-pp.shrink_node_memcg
0.00 ± -1% +Inf% 5.36 ± 1% perf-profile.children.cycles-pp.shrink_page_list
2.92 ± 15% -70.2% 0.87 ± 10% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
0.34 ± 1% +2701.5% 9.59 ± 2% perf-profile.children.cycles-pp.submit_bio
0.45 ± 8% +6782.4% 31.31 ± 3% perf-profile.children.cycles-pp.sys_write
0.00 ± -1% +Inf% 1.15 ± 4% perf-profile.children.cycles-pp.test_clear_page_writeback
0.87 ± 13% -80.8% 0.17 ± 11% perf-profile.children.cycles-pp.tick_nohz_irq_exit
0.00 ± -1% +Inf% 1.21 ± 1% perf-profile.children.cycles-pp.try_to_release_page
46.07 ± 7% -99.9% 0.06 ± 11% perf-profile.children.cycles-pp.unmap_underlying_metadata
0.45 ± 8% +6824.4% 31.16 ± 3% perf-profile.children.cycles-pp.vfs_write
51.11 ± 7% -65.2% 17.80 ± 2% perf-profile.children.cycles-pp.wb_workfn
51.11 ± 7% -65.2% 17.80 ± 2% perf-profile.children.cycles-pp.wb_writeback
51.23 ± 7% -65.1% 17.86 ± 2% perf-profile.children.cycles-pp.worker_thread
51.11 ± 7% -65.2% 17.80 ± 2% perf-profile.children.cycles-pp.writeback_sb_inodes
3.93 ± 9% -88.6% 0.45 ± 6% perf-profile.self.cycles-pp.___might_sleep
0.21 ± 5% +2617.6% 5.78 ± 2% perf-profile.self.cycles-pp.__copy_user_nocache
0.00 ± -1% +Inf% 1.31 ± 5% perf-profile.self.cycles-pp.__es_insert_extent
0.01 ±173% +9320.0% 1.18 ± 5% perf-profile.self.cycles-pp.__es_tree_search
6.65 ± 7% -95.7% 0.29 ± 9% perf-profile.self.cycles-pp.__find_get_block_slow
3.26 ± 7% -91.9% 0.26 ± 8% perf-profile.self.cycles-pp.__might_sleep
15.94 ± 6% -72.4% 4.41 ± 1% perf-profile.self.cycles-pp.__radix_tree_lookup
0.04 ± 58% +5393.7% 2.20 ± 2% perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
0.03 ±100% +15418.2% 4.27 ± 4% perf-profile.self.cycles-pp.ext4_es_lookup_extent
1.24 ± 14% -87.1% 0.16 ± 7% perf-profile.self.cycles-pp.ext4_ext_map_blocks
0.00 ± -1% +Inf% 1.99 ± 4% perf-profile.self.cycles-pp.ext4_find_extent
0.01 ±173% +7016.7% 1.07 ± 12% perf-profile.self.cycles-pp.ext4_put_io_end_defer
4.63 ± 7% -86.5% 0.62 ± 7% perf-profile.self.cycles-pp.find_get_entry
6.50 ± 9% -98.3% 0.11 ± 7% perf-profile.self.cycles-pp.pagecache_get_page
2.59 ± 8% -100.0% 0.00 ± -1% perf-profile.self.cycles-pp.radix_tree_lookup_slot
4.53 ± 7% -100.0% 0.00 ± -1% perf-profile.self.cycles-pp.unmap_underlying_metadata
***************************************************************************************************
lkp-bdw-de1: 16 threads Intel(R) Xeon(R) CPU D-1541 @ 2.10GHz with 8G memory
=========================================================================================
bs/compiler/cpufreq_governor/disk/fs/ioengine/kconfig/nr_task/rootfs/runtime/rw/tbox_group/test_size/testcase:
4k/gcc-6/performance/1SSD/ext4/sync/x86_64-rhel-7.2/64/debian-x86_64-2016-08-31.cgz/300s/randwrite/lkp-bdw-de1/400g/fio-basic
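For reference, a minimal fio run approximating these parameters might look like the sketch below. This is only an assumption for illustration: the actual LKP job file is not shown here, and /mnt/test is a placeholder mount point for the ext4-formatted SSD, not the path LKP uses.
# sketch: 4k random writes, sync ioengine, 64 jobs, 300s, ~400g total
# (size below is per job, roughly 400g / 64 jobs)
fio --name=randwrite --directory=/mnt/test --rw=randwrite --bs=4k \
    --ioengine=sync --numjobs=64 --size=6g --runtime=300 --time_based \
    --group_reporting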
commit:
6f2b562c3a ("direct-io: Use clean_bdev_aliases() instead of handmade iteration")
adad5aa544 ("ext4: Use clean_bdev_aliases() instead of iteration")
6f2b562c3a89f4a6 adad5aa544e281d84f837b2786
---------------- --------------------------
%stddev %change %stddev
\ | \
21.99 ± 3% +88.0% 41.34 ± 0% fio.write_bw_MBps
5630 ± 3% +88.0% 10584 ± 0% fio.write_iops
0.01 ± 0% +2550.0% 0.27 ± 10% fio.latency_100ms%
1.82 ± 21% +31.9% 2.40 ± 2% fio.latency_20us%
5.41 ± 3% -44.0% 3.03 ± 1% fio.latency_250ms%
0.09 ± 24% -50.0% 0.04 ± 38% fio.latency_2us%
0.01 ± 0% +800.0% 0.09 ± 39% fio.latency_50ms%
0.10 ± 5% +36.8% 0.13 ± 12% fio.latency_50us%
0.01 ± 0% +100.0% 0.02 ± 0% fio.latency_750us%
0.01 ± 0% -100.0% 0.00 ± -1% fio.latency_>=2000ms%
13520302 ± 3% +87.9% 25407598 ± 0% fio.time.file_system_outputs
572.00 ± 11% +106.8% 1182 ± 9% fio.time.involuntary_context_switches
4.00 ± 0% +75.0% 7.00 ± 0% fio.time.percent_of_cpu_this_job_got
10.81 ± 3% +64.4% 17.77 ± 1% fio.time.system_time
124822 ± 0% +14.3% 142634 ± 0% fio.time.voluntary_context_switches
207872 ± 0% -100.0% 10.50 ± 4% fio.write_clat_95%_us
11370 ± 3% -46.9% 6043 ± 0% fio.write_clat_mean_us
48714 ± 1% -31.2% 33513 ± 0% fio.write_clat_stddev
779996 ± 4% +116.4% 1688045 ± 0% softirqs.BLOCK
236088 ± 1% -40.2% 141136 ± 2% softirqs.TIMER
1.273e+08 ± 32% +125.7% 2.873e+08 ± 7% cpuidle.C1-BDW.time
1259421 ± 8% +150.7% 3157306 ± 1% cpuidle.C1-BDW.usage
35610319 ± 8% +48.0% 52707031 ± 7% cpuidle.POLL.time
25450 ± 4% +159.0% 65928 ± 0% cpuidle.POLL.usage
6.06 ± 1% -43.9% 3.39 ± 0% iostat.sda.avgqu-sz
5181 ± 4% +116.2% 11205 ± 0% iostat.sda.w/s
48263 ± 2% +82.4% 88033 ± 0% iostat.sda.wkB/s
281.65 ± 9% +129.5% 646.34 ± 0% iostat.sda.wrqm/s
7.05 ± 2% -41.8% 4.10 ± 2% turbostat.%Busy
175.25 ± 2% -43.7% 98.75 ± 2% turbostat.Avg_MHz
0.83 ± 58% +102.7% 1.68 ± 22% turbostat.CPU%c3
24.78 ± 0% -6.2% 23.24 ± 0% turbostat.PkgWatt
20259 ± 4% +105.3% 41593 ± 0% vmstat.io.bo
40089 ± 3% +146.8% 98930 ± 0% vmstat.memory.buff
4840450 ± 1% +44.3% 6985650 ± 0% vmstat.memory.cache
2944417 ± 3% -75.0% 735714 ± 0% vmstat.memory.free
9190 ± 5% +131.5% 21271 ± 0% vmstat.system.cs
21597 ± 0% +27.5% 27530 ± 0% vmstat.system.in
206807 ± 0% +28.8% 266435 ± 0% meminfo.Active
91674 ± 1% +63.7% 150046 ± 0% meminfo.Active(file)
39950 ± 3% +147.0% 98691 ± 0% meminfo.Buffers
4301589 ± 1% +42.2% 6117099 ± 0% meminfo.Cached
147136 ± 4% -51.8% 70930 ± 3% meminfo.CmaFree
1231530 ± 0% -9.8% 1110281 ± 0% meminfo.Dirty
4165928 ± 1% +33.2% 5546973 ± 0% meminfo.Inactive
4035603 ± 1% +34.2% 5416550 ± 0% meminfo.Inactive(file)
2954425 ± 3% -75.0% 738296 ± 0% meminfo.MemFree
528634 ± 2% +63.7% 865637 ± 0% meminfo.SReclaimable
572788 ± 1% +58.8% 909788 ± 0% meminfo.Slab
82578 ± 22% +527.1% 517878 ± 0% meminfo.Unevictable
3.364e+11 ± 6% -83.3% 5.634e+10 ± 3% perf-stat.branch-instructions
0.08 ± 7% +1007.3% 0.91 ± 2% perf-stat.branch-miss-rate%
2.762e+08 ± 5% +86.2% 5.142e+08 ± 4% perf-stat.branch-misses
1.991e+09 ± 7% +56.6% 3.118e+09 ± 5% perf-stat.cache-misses
1.991e+09 ± 7% +56.6% 3.118e+09 ± 5% perf-stat.cache-references
2785024 ± 5% +131.2% 6439786 ± 0% perf-stat.context-switches
7.542e+11 ± 4% -46.1% 4.062e+11 ± 3% perf-stat.cpu-cycles
12762 ± 8% +64.5% 20989 ± 10% perf-stat.cpu-migrations
0.01 ± 4% +972.7% 0.07 ± 4% perf-stat.dTLB-load-miss-rate%
29564776 ± 14% +86.4% 55104164 ± 10% perf-stat.dTLB-load-misses
4.544e+11 ± 10% -82.6% 7.909e+10 ± 8% perf-stat.dTLB-loads
0.00 ± 11% +1059.4% 0.01 ± 2% perf-stat.dTLB-store-miss-rate%
2671103 ± 5% +34.6% 3595369 ± 3% perf-stat.dTLB-store-misses
3.157e+11 ± 6% -88.5% 3.635e+10 ± 0% perf-stat.dTLB-stores
9.80 ± 0% -23.4% 7.51 ± 4% perf-stat.iTLB-load-miss-rate%
7536032 ± 5% +15.5% 8705667 ± 4% perf-stat.iTLB-load-misses
69358427 ± 4% +54.6% 1.072e+08 ± 2% perf-stat.iTLB-loads
1.841e+12 ± 6% -84.9% 2.778e+11 ± 3% perf-stat.instructions
245649 ± 10% -87.0% 32014 ± 7% perf-stat.instructions-per-iTLB-miss
2.44 ± 2% -72.0% 0.68 ± 1% perf-stat.ipc
13430 ± 3% -8.6% 12278 ± 4% slabinfo.anon_vma.active_objs
13430 ± 3% -8.6% 12278 ± 4% slabinfo.anon_vma.num_objs
829.00 ± 3% +25.5% 1040 ± 1% slabinfo.blkdev_requests.active_objs
829.00 ± 3% +31.4% 1089 ± 1% slabinfo.blkdev_requests.num_objs
868726 ± 2% +54.0% 1338136 ± 0% slabinfo.buffer_head.active_objs
22278 ± 2% +54.1% 34328 ± 0% slabinfo.buffer_head.active_slabs
868881 ± 2% +54.1% 1338829 ± 0% slabinfo.buffer_head.num_objs
22278 ± 2% +54.1% 34328 ± 0% slabinfo.buffer_head.num_slabs
240.50 ± 9% +72.8% 415.50 ± 7% slabinfo.ext4_allocation_context.active_objs
240.50 ± 9% +72.8% 415.50 ± 7% slabinfo.ext4_allocation_context.num_objs
1008816 ± 3% +102.0% 2037569 ± 0% slabinfo.ext4_extent_status.active_objs
10145 ± 3% +99.7% 20257 ± 0% slabinfo.ext4_extent_status.active_slabs
1034848 ± 3% +99.7% 2066293 ± 0% slabinfo.ext4_extent_status.num_objs
10145 ± 3% +99.7% 20257 ± 0% slabinfo.ext4_extent_status.num_slabs
598.50 ± 7% +204.2% 1820 ± 16% slabinfo.ext4_io_end.active_objs
598.50 ± 7% +204.2% 1820 ± 16% slabinfo.ext4_io_end.num_objs
7552 ± 3% +139.9% 18119 ± 1% slabinfo.jbd2_journal_head.active_objs
225.50 ± 3% +145.5% 553.50 ± 1% slabinfo.jbd2_journal_head.active_slabs
7688 ± 3% +145.0% 18836 ± 1% slabinfo.jbd2_journal_head.num_objs
225.50 ± 3% +145.5% 553.50 ± 1% slabinfo.jbd2_journal_head.num_slabs
1705 ± 2% +10.9% 1892 ± 1% slabinfo.kmalloc-128.active_objs
1705 ± 2% +10.9% 1892 ± 1% slabinfo.kmalloc-128.num_objs
635923 ± 1% +68.0% 1068344 ± 0% slabinfo.radix_tree_node.active_objs
22711 ± 1% +68.1% 38179 ± 0% slabinfo.radix_tree_node.active_slabs
635924 ± 1% +68.1% 1069036 ± 0% slabinfo.radix_tree_node.num_objs
22711 ± 1% +68.1% 38179 ± 0% slabinfo.radix_tree_node.num_slabs
43.25 ± 30% +659.5% 328.50 ± 5% proc-vmstat.kswapd_high_wmark_hit_quickly
22911 ± 1% +63.7% 37511 ± 0% proc-vmstat.nr_active_file
1719405 ± 3% +88.3% 3238051 ± 0% proc-vmstat.nr_dirtied
307915 ± 0% -9.9% 277568 ± 0% proc-vmstat.nr_dirty
173320 ± 0% -11.2% 153959 ± 0% proc-vmstat.nr_dirty_background_threshold
347064 ± 0% -11.2% 308295 ± 0% proc-vmstat.nr_dirty_threshold
1084951 ± 1% +43.2% 1553939 ± 0% proc-vmstat.nr_file_pages
36802 ± 4% -51.8% 17727 ± 3% proc-vmstat.nr_free_cma
739051 ± 3% -75.0% 184500 ± 0% proc-vmstat.nr_free_pages
1008614 ± 1% +34.3% 1354148 ± 0% proc-vmstat.nr_inactive_file
132095 ± 2% +63.8% 216411 ± 0% proc-vmstat.nr_slab_reclaimable
20520 ± 22% +530.9% 129469 ± 0% proc-vmstat.nr_unevictable
1455387 ± 4% +104.6% 2977054 ± 0% proc-vmstat.nr_written
22911 ± 1% +63.7% 37512 ± 0% proc-vmstat.nr_zone_active_file
1008623 ± 1% +34.3% 1354156 ± 0% proc-vmstat.nr_zone_inactive_file
20520 ± 22% +530.9% 129469 ± 0% proc-vmstat.nr_zone_unevictable
307921 ± 0% -9.9% 277574 ± 0% proc-vmstat.nr_zone_write_pending
2276380 ± 2% +82.3% 4149633 ± 0% proc-vmstat.numa_hit
2276380 ± 2% +82.3% 4149633 ± 0% proc-vmstat.numa_local
2290 ± 8% -19.0% 1854 ± 3% proc-vmstat.pgactivate
586459 ± 0% +95.4% 1146061 ± 0% proc-vmstat.pgalloc_dma32
1813324 ± 3% +75.7% 3185203 ± 0% proc-vmstat.pgalloc_normal
686981 ± 11% +280.1% 2611518 ± 0% proc-vmstat.pgfree
6092313 ± 4% +106.2% 12563054 ± 0% proc-vmstat.pgpgout
11203 ± 36% +170.9% 30354 ± 8% proc-vmstat.pgrotated
449606 ± 16% +339.3% 1974929 ± 0% proc-vmstat.pgscan_kswapd
233963 ± 24% +654.4% 1764934 ± 0% proc-vmstat.pgsteal_kswapd
1198400 ± 16% +425.5% 6298080 ± 0% proc-vmstat.slabs_scanned
0.00 ± 0% +Inf% 71874 ± 1% proc-vmstat.workingset_nodereclaim
8388 ± 1% -71.6% 2385 ± 0% sched_debug.cfs_rq:/.exec_clock.avg
121674 ± 1% -83.1% 20594 ± 1% sched_debug.cfs_rq:/.exec_clock.max
488.99 ± 7% +83.9% 899.07 ± 2% sched_debug.cfs_rq:/.exec_clock.min
29252 ± 1% -83.9% 4705 ± 2% sched_debug.cfs_rq:/.exec_clock.stddev
14195 ± 1% -45.0% 7807 ± 8% sched_debug.cfs_rq:/.min_vruntime.avg
129717 ± 1% -79.5% 26584 ± 3% sched_debug.cfs_rq:/.min_vruntime.max
29978 ± 1% -82.1% 5358 ± 4% sched_debug.cfs_rq:/.min_vruntime.stddev
42.38 ± 13% -64.7% 14.96 ± 32% sched_debug.cfs_rq:/.runnable_load_avg.avg
587.46 ± 10% -73.2% 157.17 ± 58% sched_debug.cfs_rq:/.runnable_load_avg.max
142.60 ± 10% -72.0% 39.89 ± 54% sched_debug.cfs_rq:/.runnable_load_avg.stddev
-115054 ± -1% -84.2% -18209 ± -2% sched_debug.cfs_rq:/.spread0.avg
-124959 ± -1% -82.4% -21931 ± -2% sched_debug.cfs_rq:/.spread0.min
29980 ± 1% -82.1% 5359 ± 4% sched_debug.cfs_rq:/.spread0.stddev
183.46 ± 4% -12.6% 160.38 ± 3% sched_debug.cfs_rq:/.util_avg.avg
758.46 ± 6% -54.2% 347.67 ± 21% sched_debug.cfs_rq:/.util_avg.max
159.02 ± 6% -59.6% 64.19 ± 27% sched_debug.cfs_rq:/.util_avg.stddev
4.62 ± 14% +25.0% 5.78 ± 8% sched_debug.cpu.clock.stddev
4.62 ± 14% +25.0% 5.78 ± 8% sched_debug.cpu.clock_task.stddev
32.76 ± 20% -71.9% 9.22 ± 28% sched_debug.cpu.cpu_load[0].avg
476.58 ± 20% -82.2% 85.00 ± 41% sched_debug.cpu.cpu_load[0].max
115.51 ± 20% -80.9% 22.05 ± 38% sched_debug.cpu.cpu_load[0].stddev
50.30 ± 18% -35.9% 32.26 ± 22% sched_debug.cpu.cpu_load[1].avg
689.00 ± 14% -47.2% 363.67 ± 40% sched_debug.cpu.cpu_load[1].max
167.01 ± 14% -45.9% 90.39 ± 37% sched_debug.cpu.cpu_load[1].stddev
49.20 ± 17% -36.6% 31.17 ± 12% sched_debug.cpu.cpu_load[2].avg
685.33 ± 12% -48.1% 355.79 ± 25% sched_debug.cpu.cpu_load[2].max
165.88 ± 13% -47.3% 87.49 ± 22% sched_debug.cpu.cpu_load[2].stddev
48.09 ± 14% -39.4% 29.16 ± 13% sched_debug.cpu.cpu_load[3].avg
680.83 ± 10% -50.2% 339.04 ± 21% sched_debug.cpu.cpu_load[3].max
164.53 ± 10% -49.7% 82.78 ± 20% sched_debug.cpu.cpu_load[3].stddev
48.58 ± 10% -41.4% 28.46 ± 14% sched_debug.cpu.cpu_load[4].avg
688.96 ± 7% -51.1% 336.62 ± 20% sched_debug.cpu.cpu_load[4].max
166.58 ± 7% -50.9% 81.78 ± 19% sched_debug.cpu.cpu_load[4].stddev
98158 ± 2% -17.7% 80832 ± 9% sched_debug.cpu.load.avg
11358 ± 5% +18.9% 13505 ± 1% sched_debug.cpu.nr_load_updates.min
0.49 ± 22% -20.4% 0.39 ± 4% sched_debug.cpu.nr_running.stddev
68686 ± 5% +225.6% 223616 ± 0% sched_debug.cpu.nr_switches.avg
716941 ± 6% +334.4% 3114465 ± 0% sched_debug.cpu.nr_switches.max
15378 ± 9% +49.4% 22975 ± 1% sched_debug.cpu.nr_switches.min
167664 ± 7% +345.2% 746520 ± 0% sched_debug.cpu.nr_switches.stddev
27.08 ± 11% +91.2% 51.79 ± 28% sched_debug.cpu.nr_uninterruptible.max
-70.88 ±-30% +137.4% -168.25 ±-28% sched_debug.cpu.nr_uninterruptible.min
21.97 ± 21% +155.2% 56.05 ± 33% sched_debug.cpu.nr_uninterruptible.stddev
67027 ± 5% +230.8% 221705 ± 0% sched_debug.cpu.sched_count.avg
709329 ± 6% +338.0% 3106542 ± 0% sched_debug.cpu.sched_count.max
14191 ± 10% +52.9% 21695 ± 2% sched_debug.cpu.sched_count.min
166147 ± 7% +348.3% 744893 ± 0% sched_debug.cpu.sched_count.stddev
31385 ± 5% +227.5% 102792 ± 0% sched_debug.cpu.sched_goidle.avg
332305 ± 6% +333.7% 1441132 ± 0% sched_debug.cpu.sched_goidle.max
6750 ± 11% +54.1% 10404 ± 1% sched_debug.cpu.sched_goidle.min
77791 ± 7% +344.2% 345568 ± 0% sched_debug.cpu.sched_goidle.stddev
34470 ± 5% +242.0% 117883 ± 0% sched_debug.cpu.ttwu_count.avg
382534 ± 6% +338.8% 1678433 ± 0% sched_debug.cpu.ttwu_count.max
6678 ± 11% +48.9% 9940 ± 3% sched_debug.cpu.ttwu_count.min
89977 ± 7% +347.8% 402944 ± 0% sched_debug.cpu.ttwu_count.stddev
29594 ± 5% +275.3% 111065 ± 0% sched_debug.cpu.ttwu_local.avg
375624 ± 7% +342.3% 1661530 ± 0% sched_debug.cpu.ttwu_local.max
4098 ± 7% +46.2% 5994 ± 2% sched_debug.cpu.ttwu_local.min
89363 ± 7% +348.0% 400331 ± 0% sched_debug.cpu.ttwu_local.stddev
1.71 ± 25% +28.4% 2.20 ± 7% sched_debug.rt_rq:/.rt_time.max
0.44 ± 18% +21.7% 0.53 ± 7% sched_debug.rt_rq:/.rt_time.stddev
2.14 ± 21% -100.0% 0.00 ± -1% perf-profile.calltrace.cycles-pp.___might_sleep.__might_sleep.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks
1.02 ± 29% -100.0% 0.00 ± -1% perf-profile.calltrace.cycles-pp.___might_sleep.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages
0.00 ± -1% +Inf% 1.14 ± 10% perf-profile.calltrace.cycles-pp.__blk_run_queue.blk_delay_work.process_one_work.worker_thread.kthread
0.13 ±173% +649.0% 0.96 ± 25% perf-profile.calltrace.cycles-pp.__blk_run_queue.blk_run_queue.scsi_run_queue.scsi_end_request.scsi_io_completion
0.45 ±100% +266.9% 1.66 ± 11% perf-profile.calltrace.cycles-pp.__block_write_begin.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter
0.45 ±100% +265.2% 1.65 ± 11% perf-profile.calltrace.cycles-pp.__block_write_begin_int.__block_write_begin.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter
29.24 ± 29% -100.0% 0.00 ± -1% perf-profile.calltrace.cycles-pp.__find_get_block_slow.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages
1.71 ± 56% +164.8% 4.52 ± 4% perf-profile.calltrace.cycles-pp.__generic_file_write_iter.ext4_file_write_iter.__vfs_write.vfs_write.sys_write
0.35 ±104% +384.5% 1.72 ± 28% perf-profile.calltrace.cycles-pp.__handle_irq_event_percpu.handle_irq_event_percpu.handle_irq_event.handle_fasteoi_irq.handle_irq
0.00 ± -1% +Inf% 0.87 ± 21% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
1.15 ± 21% -100.0% 0.00 ± -1% perf-profile.calltrace.cycles-pp.__might_sleep.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages.do_writepages
3.90 ± 22% -100.0% 0.00 ± -1% perf-profile.calltrace.cycles-pp.__might_sleep.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages
12.80 ± 29% -100.0% 0.00 ± -1% perf-profile.calltrace.cycles-pp.__radix_tree_lookup.radix_tree_lookup_slot.find_get_entry.pagecache_get_page.__find_get_block_slow
0.66 ± 59% +211.3% 2.06 ± 27% perf-profile.calltrace.cycles-pp.__softirqentry_text_start.irq_exit.do_IRQ.ret_from_intr.cpuidle_enter
0.42 ± 58% +185.0% 1.19 ± 8% perf-profile.calltrace.cycles-pp.__tick_nohz_idle_enter.tick_nohz_irq_exit.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
1.73 ± 55% +161.6% 4.53 ± 4% perf-profile.calltrace.cycles-pp.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath
46.87 ± 22% -78.5% 10.06 ± 23% perf-profile.calltrace.cycles-pp.__writeback_inodes_wb.wb_writeback.wb_workfn.process_one_work.worker_thread
46.87 ± 22% -78.5% 10.06 ± 23% perf-profile.calltrace.cycles-pp.__writeback_single_inode.writeback_sb_inodes.__writeback_inodes_wb.wb_writeback.wb_workfn
1.14 ± 30% +80.0% 2.05 ± 6% perf-profile.calltrace.cycles-pp._raw_spin_lock.tick_do_update_jiffies64.tick_irq_enter.irq_enter.smp_apic_timer_interrupt
0.13 ±173% +841.2% 1.20 ± 29% perf-profile.calltrace.cycles-pp.ahci_handle_port_intr.ahci_single_level_irq_intr.__handle_irq_event_percpu.handle_irq_event_percpu.handle_irq_event
0.34 ±104% +398.5% 1.71 ± 27% perf-profile.calltrace.cycles-pp.ahci_single_level_irq_intr.__handle_irq_event_percpu.handle_irq_event_percpu.handle_irq_event.handle_fasteoi_irq
3.24 ± 20% +104.9% 6.64 ± 8% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
0.00 ± -1% +Inf% 1.16 ± 10% perf-profile.calltrace.cycles-pp.blk_delay_work.process_one_work.worker_thread.kthread.ret_from_fork
0.66 ± 59% +202.6% 2.00 ± 26% perf-profile.calltrace.cycles-pp.blk_done_softirq.__softirqentry_text_start.irq_exit.do_IRQ.ret_from_intr
0.13 ±173% +645.3% 0.99 ± 25% perf-profile.calltrace.cycles-pp.blk_run_queue.scsi_run_queue.scsi_end_request.scsi_io_completion.scsi_finish_command
0.36 ±105% +372.6% 1.73 ± 38% perf-profile.calltrace.cycles-pp.call_console_drivers.console_unlock.vprintk_emit.vprintk_default.printk
2.92 ± 40% +296.0% 11.55 ± 19% perf-profile.calltrace.cycles-pp.call_cpuidle.cpu_startup_entry.rest_init.start_kernel.x86_64_start_reservations
44.69 ± 17% +40.6% 62.85 ± 8% perf-profile.calltrace.cycles-pp.call_cpuidle.cpu_startup_entry.start_secondary
0.36 ±105% +374.7% 1.73 ± 39% perf-profile.calltrace.cycles-pp.console_unlock.vprintk_emit.vprintk_default.printk.perf_duration_warn
3.41 ± 39% +287.3% 13.21 ± 20% perf-profile.calltrace.cycles-pp.cpu_startup_entry.rest_init.start_kernel.x86_64_start_reservations.x86_64_start_kernel
45.17 ± 17% +41.1% 63.73 ± 8% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary
2.92 ± 40% +295.3% 11.52 ± 19% perf-profile.calltrace.cycles-pp.cpuidle_enter.call_cpuidle.cpu_startup_entry.rest_init.start_kernel
44.67 ± 17% +40.7% 62.83 ± 8% perf-profile.calltrace.cycles-pp.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
1.42 ± 40% +380.2% 6.81 ± 17% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry.rest_init
40.70 ± 17% +33.2% 54.22 ± 9% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
1.39 ± 41% +205.1% 4.22 ± 24% perf-profile.calltrace.cycles-pp.do_IRQ.ret_from_intr.cpuidle_enter.call_cpuidle.cpu_startup_entry
0.77 ± 24% +73.7% 1.34 ± 25% perf-profile.calltrace.cycles-pp.do_wait.sys_wait4.entry_SYSCALL_64_fastpath
46.87 ± 22% -78.5% 10.06 ± 23% perf-profile.calltrace.cycles-pp.do_writepages.__writeback_single_inode.writeback_sb_inodes.__writeback_inodes_wb.wb_writeback
2.72 ± 41% +141.6% 6.57 ± 6% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_fastpath
0.15 ±173% +418.6% 0.76 ± 30% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_fastpath.read
0.33 ±100% +304.6% 1.32 ± 10% perf-profile.calltrace.cycles-pp.ext4_da_get_block_prep.__block_write_begin_int.__block_write_begin.ext4_da_write_begin.generic_perform_write
1.01 ± 74% +210.2% 3.12 ± 3% perf-profile.calltrace.cycles-pp.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.__vfs_write
0.30 ±100% +269.2% 1.11 ± 11% perf-profile.calltrace.cycles-pp.ext4_es_lookup_extent.ext4_da_get_block_prep.__block_write_begin_int.__block_write_begin.ext4_da_write_begin
42.95 ± 26% -91.3% 3.74 ± 18% perf-profile.calltrace.cycles-pp.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages.do_writepages.__writeback_single_inode
1.72 ± 55% +163.6% 4.53 ± 4% perf-profile.calltrace.cycles-pp.ext4_file_write_iter.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath
43.95 ± 25% -89.4% 4.66 ± 17% perf-profile.calltrace.cycles-pp.ext4_map_blocks.ext4_writepages.do_writepages.__writeback_single_inode.writeback_sb_inodes
0.92 ± 61% +103.5% 1.87 ± 18% perf-profile.calltrace.cycles-pp.ext4_split_extent.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages.do_writepages
0.82 ± 62% +89.6% 1.55 ± 22% perf-profile.calltrace.cycles-pp.ext4_split_extent_at.ext4_split_extent.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages
46.85 ± 22% -78.5% 10.06 ± 23% perf-profile.calltrace.cycles-pp.ext4_writepages.do_writepages.__writeback_single_inode.writeback_sb_inodes.__writeback_inodes_wb
1.08 ± 23% -100.0% 0.00 ± -1% perf-profile.calltrace.cycles-pp.find_get_entry.__find_get_block_slow.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks
17.78 ± 29% -100.0% 0.00 ± -1% perf-profile.calltrace.cycles-pp.find_get_entry.pagecache_get_page.__find_get_block_slow.unmap_underlying_metadata.ext4_ext_map_blocks
1.65 ± 58% +165.7% 4.37 ± 3% perf-profile.calltrace.cycles-pp.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter.__vfs_write.vfs_write
0.15 ±173% +672.9% 1.14 ± 9% perf-profile.calltrace.cycles-pp.grab_cache_page_write_begin.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter.ext4_file_write_iter
0.39 ±103% +393.6% 1.93 ± 24% perf-profile.calltrace.cycles-pp.handle_fasteoi_irq.handle_irq.do_IRQ.ret_from_intr.cpuidle_enter
0.40 ±103% +392.5% 1.96 ± 23% perf-profile.calltrace.cycles-pp.handle_irq.do_IRQ.ret_from_intr.cpuidle_enter.call_cpuidle
0.35 ±104% +393.7% 1.75 ± 27% perf-profile.calltrace.cycles-pp.handle_irq_event.handle_fasteoi_irq.handle_irq.do_IRQ.ret_from_intr
0.35 ±104% +393.7% 1.75 ± 27% perf-profile.calltrace.cycles-pp.handle_irq_event_percpu.handle_irq_event.handle_fasteoi_irq.handle_irq.do_IRQ
0.66 ± 19% +134.0% 1.53 ± 25% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter
0.00 ± -1% +Inf% 0.92 ± 39% perf-profile.calltrace.cycles-pp.io_serial_in.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write
1.64 ± 26% +81.1% 2.98 ± 11% perf-profile.calltrace.cycles-pp.irq_enter.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter.call_cpuidle
0.70 ± 60% +208.2% 2.17 ± 25% perf-profile.calltrace.cycles-pp.irq_exit.do_IRQ.ret_from_intr.cpuidle_enter.call_cpuidle
0.79 ± 9% +123.0% 1.77 ± 10% perf-profile.calltrace.cycles-pp.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter.call_cpuidle
0.36 ±105% +374.7% 1.73 ± 39% perf-profile.calltrace.cycles-pp.irq_work_interrupt.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
0.36 ±105% +374.7% 1.73 ± 39% perf-profile.calltrace.cycles-pp.irq_work_run.smp_irq_work_interrupt.irq_work_interrupt.cpuidle_enter.call_cpuidle
0.36 ±105% +374.7% 1.73 ± 39% perf-profile.calltrace.cycles-pp.irq_work_run_list.irq_work_run.smp_irq_work_interrupt.irq_work_interrupt.cpuidle_enter
0.00 ± -1% +Inf% 1.24 ± 22% perf-profile.calltrace.cycles-pp.kswapd.kthread.ret_from_fork
47.58 ± 21% -70.6% 14.00 ± 19% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
0.69 ± 20% +140.4% 1.67 ± 23% perf-profile.calltrace.cycles-pp.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter.call_cpuidle
1.14 ± 30% +80.0% 2.05 ± 6% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.tick_do_update_jiffies64.tick_irq_enter.irq_enter
23.25 ± 30% -100.0% 0.00 ± -1% perf-profile.calltrace.cycles-pp.pagecache_get_page.__find_get_block_slow.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks
0.15 ±173% +669.5% 1.14 ± 9% perf-profile.calltrace.cycles-pp.pagecache_get_page.grab_cache_page_write_begin.ext4_da_write_begin.generic_perform_write.__generic_file_write_iter
0.36 ±105% +374.7% 1.73 ± 39% perf-profile.calltrace.cycles-pp.perf_duration_warn.irq_work_run_list.irq_work_run.smp_irq_work_interrupt.irq_work_interrupt
5.62 ± 25% +212.8% 17.59 ± 22% perf-profile.calltrace.cycles-pp.poll_idle.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry
0.36 ±105% +374.7% 1.73 ± 39% perf-profile.calltrace.cycles-pp.printk.perf_duration_warn.irq_work_run_list.irq_work_run.smp_irq_work_interrupt
47.34 ± 21% -74.7% 11.96 ± 20% perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
14.58 ± 29% -100.0% 0.00 ± -1% perf-profile.calltrace.cycles-pp.radix_tree_lookup_slot.find_get_entry.pagecache_get_page.__find_get_block_slow.unmap_underlying_metadata
0.15 ±173% +427.1% 0.78 ± 28% perf-profile.calltrace.cycles-pp.read
3.42 ± 38% +287.3% 13.25 ± 20% perf-profile.calltrace.cycles-pp.rest_init.start_kernel.x86_64_start_reservations.x86_64_start_kernel
47.58 ± 21% -70.6% 14.00 ± 19% perf-profile.calltrace.cycles-pp.ret_from_fork
1.39 ± 41% +205.4% 4.23 ± 24% perf-profile.calltrace.cycles-pp.ret_from_intr.cpuidle_enter.call_cpuidle.cpu_startup_entry.rest_init
0.59 ± 60% +204.2% 1.81 ± 25% perf-profile.calltrace.cycles-pp.scsi_end_request.scsi_io_completion.scsi_finish_command.scsi_softirq_done.blk_done_softirq
0.62 ± 59% +203.6% 1.88 ± 26% perf-profile.calltrace.cycles-pp.scsi_finish_command.scsi_softirq_done.blk_done_softirq.__softirqentry_text_start.irq_exit
0.60 ± 60% +207.1% 1.84 ± 27% perf-profile.calltrace.cycles-pp.scsi_io_completion.scsi_finish_command.scsi_softirq_done.blk_done_softirq.__softirqentry_text_start
0.00 ± -1% +Inf% 1.12 ± 10% perf-profile.calltrace.cycles-pp.scsi_request_fn.__blk_run_queue.blk_delay_work.process_one_work.worker_thread
0.13 ±173% +641.2% 0.95 ± 24% perf-profile.calltrace.cycles-pp.scsi_request_fn.__blk_run_queue.blk_run_queue.scsi_run_queue.scsi_end_request
0.13 ±173% +645.3% 0.99 ± 25% perf-profile.calltrace.cycles-pp.scsi_run_queue.scsi_end_request.scsi_io_completion.scsi_finish_command.scsi_softirq_done
0.63 ± 59% +203.2% 1.92 ± 25% perf-profile.calltrace.cycles-pp.scsi_softirq_done.blk_done_softirq.__softirqentry_text_start.irq_exit.do_IRQ
0.20 ±173% +687.2% 1.54 ± 38% perf-profile.calltrace.cycles-pp.serial8250_console_putchar.uart_console_write.serial8250_console_write.univ8250_console_write.call_console_drivers
0.33 ±105% +385.6% 1.60 ± 38% perf-profile.calltrace.cycles-pp.serial8250_console_write.univ8250_console_write.call_console_drivers.console_unlock.vprintk_emit
0.00 ± -1% +Inf% 0.84 ± 27% perf-profile.calltrace.cycles-pp.shrink_inactive_list.shrink_node_memcg.shrink_node.kswapd.kthread
0.00 ± -1% +Inf% 1.23 ± 22% perf-profile.calltrace.cycles-pp.shrink_node.kswapd.kthread.ret_from_fork
0.00 ± -1% +Inf% 0.84 ± 27% perf-profile.calltrace.cycles-pp.shrink_node_memcg.shrink_node.kswapd.kthread.ret_from_fork
0.00 ± -1% +Inf% 0.80 ± 24% perf-profile.calltrace.cycles-pp.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node.kswapd
3.22 ± 20% +104.0% 6.56 ± 7% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter.call_cpuidle.cpu_startup_entry
0.36 ±105% +374.7% 1.73 ± 39% perf-profile.calltrace.cycles-pp.smp_irq_work_interrupt.irq_work_interrupt.cpuidle_enter.call_cpuidle.cpu_startup_entry
3.42 ± 38% +287.3% 13.25 ± 20% perf-profile.calltrace.cycles-pp.start_kernel.x86_64_start_reservations.x86_64_start_kernel
45.20 ± 17% +41.2% 63.81 ± 8% perf-profile.calltrace.cycles-pp.start_secondary
0.79 ± 22% +78.9% 1.42 ± 23% perf-profile.calltrace.cycles-pp.sys_wait4.entry_SYSCALL_64_fastpath
1.78 ± 55% +166.8% 4.74 ± 5% perf-profile.calltrace.cycles-pp.sys_write.entry_SYSCALL_64_fastpath
1.25 ± 29% +77.5% 2.21 ± 9% perf-profile.calltrace.cycles-pp.tick_do_update_jiffies64.tick_irq_enter.irq_enter.smp_apic_timer_interrupt.apic_timer_interrupt
1.55 ± 26% +79.1% 2.78 ± 10% perf-profile.calltrace.cycles-pp.tick_irq_enter.irq_enter.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter
0.55 ± 8% +124.1% 1.23 ± 9% perf-profile.calltrace.cycles-pp.tick_nohz_irq_exit.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter
0.14 ±173% +621.4% 1.01 ± 10% perf-profile.calltrace.cycles-pp.tick_nohz_stop_sched_tick.__tick_nohz_idle_enter.tick_nohz_irq_exit.irq_exit.smp_apic_timer_interrupt
0.20 ±173% +687.2% 1.54 ± 38% perf-profile.calltrace.cycles-pp.uart_console_write.serial8250_console_write.univ8250_console_write.call_console_drivers.console_unlock
0.33 ±105% +385.6% 1.60 ± 38% perf-profile.calltrace.cycles-pp.univ8250_console_write.call_console_drivers.console_unlock.vprintk_emit.vprintk_default
38.19 ± 28% -100.0% 0.00 ± -1% perf-profile.calltrace.cycles-pp.unmap_underlying_metadata.ext4_ext_map_blocks.ext4_map_blocks.ext4_writepages.do_writepages
1.76 ± 55% +166.3% 4.70 ± 4% perf-profile.calltrace.cycles-pp.vfs_write.sys_write.entry_SYSCALL_64_fastpath
0.36 ±105% +374.7% 1.73 ± 39% perf-profile.calltrace.cycles-pp.vprintk_default.printk.perf_duration_warn.irq_work_run_list.irq_work_run
0.36 ±105% +374.7% 1.73 ± 39% perf-profile.calltrace.cycles-pp.vprintk_emit.vprintk_default.printk.perf_duration_warn.irq_work_run_list
0.62 ± 22% +75.2% 1.09 ± 22% perf-profile.calltrace.cycles-pp.wait_consider_task.do_wait.sys_wait4.entry_SYSCALL_64_fastpath
0.19 ±173% +702.7% 1.50 ± 38% perf-profile.calltrace.cycles-pp.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write.univ8250_console_write
46.87 ± 22% -78.5% 10.06 ± 23% perf-profile.calltrace.cycles-pp.wb_workfn.process_one_work.worker_thread.kthread.ret_from_fork
46.87 ± 22% -78.5% 10.06 ± 23% perf-profile.calltrace.cycles-pp.wb_writeback.wb_workfn.process_one_work.worker_thread.kthread
47.44 ± 21% -74.1% 12.27 ± 19% perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
46.87 ± 22% -78.5% 10.06 ± 23% perf-profile.calltrace.cycles-pp.writeback_sb_inodes.__writeback_inodes_wb.wb_writeback.wb_workfn.process_one_work
3.42 ± 38% +287.3% 13.25 ± 20% perf-profile.calltrace.cycles-pp.x86_64_start_kernel
3.42 ± 38% +287.3% 13.25 ± 20% perf-profile.calltrace.cycles-pp.x86_64_start_reservations.x86_64_start_kernel
3.26 ± 22% -92.8% 0.24 ± 23% perf-profile.children.cycles-pp.___might_sleep
0.92 ± 35% +151.2% 2.31 ± 20% perf-profile.children.cycles-pp.__blk_run_queue
0.56 ± 63% +199.1% 1.66 ± 11% perf-profile.children.cycles-pp.__block_write_begin
0.56 ± 63% +196.4% 1.65 ± 11% perf-profile.children.cycles-pp.__block_write_begin_int
29.84 ± 29% -99.5% 0.14 ± 26% perf-profile.children.cycles-pp.__find_get_block_slow
1.71 ± 55% +164.4% 4.52 ± 4% perf-profile.children.cycles-pp.__generic_file_write_iter
0.96 ± 37% +90.1% 1.81 ± 25% perf-profile.children.cycles-pp.__handle_irq_event_percpu
0.42 ± 21% +137.1% 0.99 ± 21% perf-profile.children.cycles-pp.__hrtimer_run_queues
5.19 ± 20% -93.8% 0.32 ± 21% perf-profile.children.cycles-pp.__might_sleep
13.51 ± 28% -93.0% 0.95 ± 9% perf-profile.children.cycles-pp.__radix_tree_lookup
0.32 ± 30% +230.7% 1.05 ± 8% perf-profile.children.cycles-pp.__schedule
1.36 ± 27% +113.4% 2.91 ± 20% perf-profile.children.cycles-pp.__softirqentry_text_start
0.67 ± 17% +151.3% 1.69 ± 10% perf-profile.children.cycles-pp.__tick_nohz_idle_enter
0.35 ± 32% +111.6% 0.73 ± 38% perf-profile.children.cycles-pp.__vfs_read
1.98 ± 50% +160.2% 5.15 ± 6% perf-profile.children.cycles-pp.__vfs_write
46.87 ± 22% -78.5% 10.06 ± 23% perf-profile.children.cycles-pp.__writeback_inodes_wb
46.87 ± 22% -78.5% 10.06 ± 23% perf-profile.children.cycles-pp.__writeback_single_inode
1.61 ± 27% +98.1% 3.19 ± 9% perf-profile.children.cycles-pp._raw_spin_lock
0.33 ± 43% +132.8% 0.76 ± 25% perf-profile.children.cycles-pp.ahci_handle_port_interrupt
0.62 ± 36% +105.3% 1.27 ± 27% perf-profile.children.cycles-pp.ahci_handle_port_intr
0.93 ± 37% +92.8% 1.80 ± 25% perf-profile.children.cycles-pp.ahci_single_level_irq_intr
3.48 ± 19% +108.3% 7.25 ± 8% perf-profile.children.cycles-pp.apic_timer_interrupt
0.22 ± 31% +218.2% 0.70 ± 34% perf-profile.children.cycles-pp.ast_imageblit
0.21 ± 45% +457.8% 1.16 ± 10% perf-profile.children.cycles-pp.blk_delay_work
1.03 ± 36% +119.9% 2.27 ± 24% perf-profile.children.cycles-pp.blk_done_softirq
0.49 ± 40% +137.9% 1.18 ± 28% perf-profile.children.cycles-pp.blk_run_queue
0.55 ± 42% +213.6% 1.73 ± 38% perf-profile.children.cycles-pp.call_console_drivers
47.62 ± 18% +56.3% 74.41 ± 4% perf-profile.children.cycles-pp.call_cpuidle
0.55 ± 42% +215.0% 1.73 ± 39% perf-profile.children.cycles-pp.console_unlock
48.58 ± 18% +58.4% 76.95 ± 3% perf-profile.children.cycles-pp.cpu_startup_entry
47.58 ± 18% +56.3% 74.35 ± 4% perf-profile.children.cycles-pp.cpuidle_enter
42.12 ± 17% +44.9% 61.02 ± 6% perf-profile.children.cycles-pp.cpuidle_enter_state
0.55 ± 47% +78.5% 0.98 ± 24% perf-profile.children.cycles-pp.crypto_shash_update
2.17 ± 37% +110.4% 4.56 ± 23% perf-profile.children.cycles-pp.do_IRQ
0.77 ± 24% +74.8% 1.35 ± 26% perf-profile.children.cycles-pp.do_wait
46.87 ± 22% -78.5% 10.06 ± 23% perf-profile.children.cycles-pp.do_writepages
3.43 ± 35% +139.4% 8.22 ± 6% perf-profile.children.cycles-pp.entry_SYSCALL_64_fastpath
0.41 ± 58% +220.7% 1.32 ± 10% perf-profile.children.cycles-pp.ext4_da_get_block_prep
1.10 ± 58% +184.7% 3.12 ± 3% perf-profile.children.cycles-pp.ext4_da_write_begin
0.46 ± 46% +181.9% 1.28 ± 5% perf-profile.children.cycles-pp.ext4_es_lookup_extent
42.95 ± 26% -91.1% 3.84 ± 18% perf-profile.children.cycles-pp.ext4_ext_map_blocks
1.72 ± 55% +163.6% 4.53 ± 4% perf-profile.children.cycles-pp.ext4_file_write_iter
0.31 ± 35% +211.3% 0.97 ± 22% perf-profile.children.cycles-pp.ext4_find_extent
43.95 ± 25% -89.3% 4.69 ± 17% perf-profile.children.cycles-pp.ext4_map_blocks
1.00 ± 44% +87.2% 1.87 ± 18% perf-profile.children.cycles-pp.ext4_split_extent
0.88 ± 47% +77.3% 1.56 ± 22% perf-profile.children.cycles-pp.ext4_split_extent_at
46.85 ± 22% -78.5% 10.06 ± 23% perf-profile.children.cycles-pp.ext4_writepages
19.06 ± 28% -96.9% 0.60 ± 6% perf-profile.children.cycles-pp.find_get_entry
1.19 ± 38% +124.8% 2.67 ± 32% perf-profile.children.cycles-pp.find_get_pages
1.65 ± 58% +165.7% 4.37 ± 3% perf-profile.children.cycles-pp.generic_perform_write
0.36 ± 52% +215.9% 1.15 ± 10% perf-profile.children.cycles-pp.grab_cache_page_write_begin
1.04 ± 38% +96.4% 2.04 ± 21% perf-profile.children.cycles-pp.handle_fasteoi_irq
1.05 ± 38% +97.6% 2.07 ± 21% perf-profile.children.cycles-pp.handle_irq
0.98 ± 38% +90.3% 1.86 ± 25% perf-profile.children.cycles-pp.handle_irq_event
0.97 ± 38% +91.8% 1.87 ± 25% perf-profile.children.cycles-pp.handle_irq_event_percpu
0.76 ± 16% +122.6% 1.70 ± 22% perf-profile.children.cycles-pp.hrtimer_interrupt
0.29 ± 37% +227.1% 0.97 ± 39% perf-profile.children.cycles-pp.io_serial_in
1.71 ± 25% +91.8% 3.28 ± 11% perf-profile.children.cycles-pp.irq_enter
2.03 ± 22% +118.6% 4.44 ± 14% perf-profile.children.cycles-pp.irq_exit
0.55 ± 42% +215.0% 1.73 ± 39% perf-profile.children.cycles-pp.irq_work_interrupt
0.55 ± 42% +215.0% 1.73 ± 39% perf-profile.children.cycles-pp.irq_work_run
0.55 ± 42% +215.0% 1.73 ± 39% perf-profile.children.cycles-pp.irq_work_run_list
0.00 ± -1% +Inf% 1.24 ± 22% perf-profile.children.cycles-pp.kswapd
47.58 ± 21% -70.6% 14.00 ± 19% perf-profile.children.cycles-pp.kthread
0.80 ± 17% +128.7% 1.83 ± 21% perf-profile.children.cycles-pp.local_apic_timer_interrupt
1.20 ± 28% +87.0% 2.23 ± 7% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
24.30 ± 28% -94.8% 1.27 ± 9% perf-profile.children.cycles-pp.pagecache_get_page
1.23 ± 39% +118.4% 2.70 ± 31% perf-profile.children.cycles-pp.pagevec_lookup
0.55 ± 42% +215.0% 1.73 ± 39% perf-profile.children.cycles-pp.perf_duration_warn
5.69 ± 22% +209.2% 17.59 ± 22% perf-profile.children.cycles-pp.poll_idle
0.55 ± 42% +215.0% 1.73 ± 39% perf-profile.children.cycles-pp.printk
47.34 ± 21% -74.7% 11.96 ± 20% perf-profile.children.cycles-pp.process_one_work
15.33 ± 28% -96.2% 0.58 ± 7% perf-profile.children.cycles-pp.radix_tree_lookup_slot
0.38 ± 33% +107.3% 0.78 ± 28% perf-profile.children.cycles-pp.read
3.42 ± 38% +287.3% 13.25 ± 20% perf-profile.children.cycles-pp.rest_init
47.58 ± 21% -70.6% 14.01 ± 19% perf-profile.children.cycles-pp.ret_from_fork
2.18 ± 37% +109.5% 4.57 ± 23% perf-profile.children.cycles-pp.ret_from_intr
0.34 ± 30% +215.3% 1.08 ± 6% perf-profile.children.cycles-pp.schedule
0.93 ± 37% +120.9% 2.06 ± 23% perf-profile.children.cycles-pp.scsi_end_request
0.97 ± 36% +120.7% 2.13 ± 23% perf-profile.children.cycles-pp.scsi_finish_command
0.94 ± 37% +121.8% 2.08 ± 24% perf-profile.children.cycles-pp.scsi_io_completion
0.87 ± 36% +161.2% 2.27 ± 19% perf-profile.children.cycles-pp.scsi_request_fn
0.51 ± 41% +134.6% 1.20 ± 29% perf-profile.children.cycles-pp.scsi_run_queue
0.99 ± 36% +120.8% 2.17 ± 23% perf-profile.children.cycles-pp.scsi_softirq_done
0.48 ± 41% +219.8% 1.54 ± 38% perf-profile.children.cycles-pp.serial8250_console_putchar
0.50 ± 41% +218.9% 1.60 ± 38% perf-profile.children.cycles-pp.serial8250_console_write
0.00 ± -1% +Inf% 0.84 ± 27% perf-profile.children.cycles-pp.shrink_inactive_list
0.00 ± -1% +Inf% 1.23 ± 22% perf-profile.children.cycles-pp.shrink_node
0.00 ± -1% +Inf% 0.84 ± 27% perf-profile.children.cycles-pp.shrink_node_memcg
0.00 ± -1% +Inf% 0.80 ± 24% perf-profile.children.cycles-pp.shrink_page_list
3.46 ± 19% +107.3% 7.16 ± 7% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
0.55 ± 42% +215.0% 1.73 ± 39% perf-profile.children.cycles-pp.smp_irq_work_interrupt
3.42 ± 38% +287.3% 13.25 ± 20% perf-profile.children.cycles-pp.start_kernel
45.20 ± 17% +41.2% 63.81 ± 8% perf-profile.children.cycles-pp.start_secondary
0.40 ± 26% +113.1% 0.85 ± 32% perf-profile.children.cycles-pp.sys_read
0.80 ± 22% +79.9% 1.43 ± 23% perf-profile.children.cycles-pp.sys_wait4
2.07 ± 47% +162.2% 5.43 ± 6% perf-profile.children.cycles-pp.sys_write
1.31 ± 29% +88.4% 2.47 ± 11% perf-profile.children.cycles-pp.tick_do_update_jiffies64
1.61 ± 26% +90.2% 3.07 ± 10% perf-profile.children.cycles-pp.tick_irq_enter
0.59 ± 12% +137.3% 1.40 ± 4% perf-profile.children.cycles-pp.tick_nohz_irq_exit
0.59 ± 16% +141.3% 1.42 ± 8% perf-profile.children.cycles-pp.tick_nohz_stop_sched_tick
0.37 ± 42% +173.2% 1.02 ± 11% perf-profile.children.cycles-pp.try_to_wake_up
0.33 ± 42% +169.9% 0.90 ± 11% perf-profile.children.cycles-pp.ttwu_do_activate
0.48 ± 41% +219.8% 1.54 ± 38% perf-profile.children.cycles-pp.uart_console_write
0.50 ± 41% +218.9% 1.60 ± 38% perf-profile.children.cycles-pp.univ8250_console_write
38.78 ± 28% -99.7% 0.12 ± 42% perf-profile.children.cycles-pp.unmap_underlying_metadata
0.39 ± 23% +116.9% 0.84 ± 34% perf-profile.children.cycles-pp.vfs_read
2.04 ± 48% +162.8% 5.38 ± 6% perf-profile.children.cycles-pp.vfs_write
0.55 ± 42% +215.0% 1.73 ± 39% perf-profile.children.cycles-pp.vprintk_default
0.55 ± 42% +215.0% 1.73 ± 39% perf-profile.children.cycles-pp.vprintk_emit
0.63 ± 22% +78.2% 1.12 ± 21% perf-profile.children.cycles-pp.wait_consider_task
0.49 ± 39% +219.9% 1.57 ± 38% perf-profile.children.cycles-pp.wait_for_xmitr
46.87 ± 22% -78.5% 10.06 ± 23% perf-profile.children.cycles-pp.wb_workfn
46.87 ± 22% -78.5% 10.06 ± 23% perf-profile.children.cycles-pp.wb_writeback
47.44 ± 21% -74.1% 12.27 ± 19% perf-profile.children.cycles-pp.worker_thread
46.87 ± 22% -78.5% 10.06 ± 23% perf-profile.children.cycles-pp.writeback_sb_inodes
3.42 ± 38% +287.3% 13.25 ± 20% perf-profile.children.cycles-pp.x86_64_start_kernel
3.42 ± 38% +287.3% 13.25 ± 20% perf-profile.children.cycles-pp.x86_64_start_reservations
3.26 ± 22% -92.8% 0.24 ± 23% perf-profile.self.cycles-pp.___might_sleep
5.38 ± 30% -100.0% 0.00 ± -1% perf-profile.self.cycles-pp.__find_get_block_slow
2.93 ± 23% -95.0% 0.15 ± 33% perf-profile.self.cycles-pp.__might_sleep
13.51 ± 28% -93.0% 0.95 ± 9% perf-profile.self.cycles-pp.__radix_tree_lookup
0.45 ± 21% +125.6% 1.01 ± 13% perf-profile.self.cycles-pp._raw_spin_lock
0.36 ± 22% +148.3% 0.89 ± 20% perf-profile.self.cycles-pp.cpuidle_enter_state
0.46 ± 46% +181.9% 1.28 ± 5% perf-profile.self.cycles-pp.ext4_es_lookup_extent
1.06 ± 24% -93.0% 0.07 ± 66% perf-profile.self.cycles-pp.ext4_ext_map_blocks
3.75 ± 30% -100.0% 0.00 ± -1% perf-profile.self.cycles-pp.find_get_entry
0.46 ± 43% +174.9% 1.26 ± 35% perf-profile.self.cycles-pp.find_get_pages
0.29 ± 37% +227.1% 0.97 ± 39% perf-profile.self.cycles-pp.io_serial_in
1.20 ± 28% +87.0% 2.23 ± 7% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
5.50 ± 29% -100.0% 0.00 ± -1% perf-profile.self.cycles-pp.pagecache_get_page
5.69 ± 22% +209.2% 17.59 ± 22% perf-profile.self.cycles-pp.poll_idle
2.18 ± 33% -100.0% 0.00 ± -1% perf-profile.self.cycles-pp.radix_tree_lookup_slot
3.85 ± 26% -100.0% 0.00 ± -1% perf-profile.self.cycles-pp.unmap_underlying_metadata
0.60 ± 21% +78.3% 1.07 ± 21% perf-profile.self.cycles-pp.wait_consider_task
Thanks,
Xiaolong
[ovl] e7aefd2650: kernel BUG at fs/overlayfs/dir.c:903!
by kernel test robot
FYI, we noticed the following commit:
https://git.kernel.org/pub/scm/linux/kernel/git/mszeredi/vfs.git overlayfs-next
commit e7aefd26508f4ae464d960c798c15fd06a6c0279 ("ovl: check lower existence of rename target")
in testcase: phoronix-test-suite
with following parameters:
test: dbench-1.0.0
The Phoronix Test Suite is a comprehensive testing and benchmarking platform that provides an extensible framework to which new tests can be easily added.
on test machine: 16 threads Nehalem-EP with 5G memory
caused below changes:
+------------------------------------------------------------------+------------+------------+
| | edca5bca9f | e7aefd2650 |
+------------------------------------------------------------------+------------+------------+
| boot_successes | 12 | 3 |
| boot_failures | 4 | 13 |
| invoked_oom-killer:gfp_mask=0x | 4 | 4 |
| Mem-Info | 4 | 4 |
| Kernel_panic-not_syncing:Out_of_memory_and_no_killable_processes | 4 | 4 |
| kernel_BUG_at_fs/overlayfs/dir.c | 0 | 9 |
| invalid_opcode:#[##]SMP | 0 | 9 |
| RIP:ovl_rename[overlay] | 0 | 9 |
| calltrace:SyS_rename | 0 | 9 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 9 |
+------------------------------------------------------------------+------------+------------+
[ 55.906147] Started Run 2 @ 07:21:34
[ 55.910355]
[ 55.938387] ------------[ cut here ]------------
[ 55.943209] kernel BUG at fs/overlayfs/dir.c:903!
[ 55.949545] invalid opcode: 0000 [#1] SMP
[ 55.953753] Modules linked in: rpcsec_gss_krb5 nfsv4 dns_resolver nfsd auth_rpcgss ipmi_watchdog ipmi_poweroff ipmi_devintf overlay btrfs xor raid6_pq dm_mod sg sd_mod coretemp snd_pcm kvm_intel snd_timer ata_generic pata_acpi snd mptsas ppdev kvm pata_jmicron soundcore mptscsih i7core_edac ata_piix serio_raw pcspkr irqbypass crc32c_intel edac_core mptbase ipmi_si parport_pc libata scsi_transport_sas parport ipmi_msghandler shpchp acpi_cpufreq
[ 55.997082] CPU: 15 PID: 1468 Comm: dbench Not tainted 4.9.0-rc3-00021-ge7aefd26 #1
[ 56.005092] Hardware name: Supermicro X8DTN/X8DTN, BIOS 4.6.3 03/05/2009
[ 56.012006] task: ffff88007b7a4900 task.stack: ffffc90002720000
[ 56.018115] RIP: 0010:[<ffffffffa010ab78>] [<ffffffffa010ab78>] ovl_rename+0x488/0x490 [overlay]
[ 56.027376] RSP: 0018:ffffc90002723dc8 EFLAGS: 00010202
[ 56.032864] RAX: 0000000000000000 RBX: ffff880108c6aa80 RCX: 0000000000000000
[ 56.040173] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffff88011daa7680
[ 56.047478] RBP: ffffc90002723e20 R08: ffff88015c3dd950 R09: ffff880108c0b680
[ 56.054784] R10: ffff880162e19778 R11: ffff880108c22638 R12: ffff880162e19740
[ 56.062091] R13: ffff880108c6aa80 R14: ffff88011daa7680 R15: ffff880108c22600
[ 56.069406] FS: 00007f2839101700(0000) GS:ffff88017fdc0000(0000) knlGS:0000000000000000
[ 56.077801] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 56.083718] CR2: 00000000020a6000 CR3: 0000000074a26000 CR4: 00000000000006e0
[ 56.091028] Stack:
[ 56.093211] ffff880108c0b680 0000000000000000 0000000400000000 ffff88007ab1a000
[ 56.101146] ffffc90000003e00 0000000000000000 ffff88010edbb908 ffff88010edbb908
[ 56.109094] 0000000000000000 ffff880162e19740 ffff88011daa7680 ffffc90002723e98
[ 56.117029] Call Trace:
[ 56.119658] [<ffffffff81220828>] vfs_rename+0x728/0x960
[ 56.125144] [<ffffffff8121dd00>] ? trailing_symlink+0x1d0/0x240
[ 56.131324] [<ffffffff81224b8f>] SyS_rename+0x37f/0x3a0
[ 56.136813] [<ffffffff81955e37>] entry_SYSCALL_64_fastpath+0x1a/0xa9
[ 56.143425] Code: 00 00 48 c7 c7 80 ef 10 a0 e8 05 59 f7 e0 e9 54 fe ff ff 48 8b 7b 30 48 8b 75 a8 89 45 cc e8 b0 f4 ff ff 8b 45 cc e9 20 ff ff ff <0f> 0b 66 0f 1f 44 00 00 66 66 66 66 90 55 48 89 e5 41 57 41 56
[ 56.166628] RIP [<ffffffffa010ab78>] ovl_rename+0x488/0x490 [overlay]
[ 56.173398] RSP <ffffc90002723dc8>
[ 56.177078] ------------[ cut here ]------------
[ 56.177086] ---[ end trace 49473d4aec442190 ]---
[ 56.177087] Kernel panic - not syncing: Fatal exception
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
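For a quick manual check outside the lkp harness, a minimal C sketch along the lines below could exercise the same path the trace shows (SyS_rename -> vfs_rename -> ovl_rename). It is only a hypothetical illustration: the paths /mnt/overlay/srcfile and /mnt/overlay/dstfile are made up, it assumes an overlayfs is already mounted at /mnt/overlay with the rename target present in the lower layer (per the commit subject), and the attached job.yaml (dbench) remains the actual reproducer.
/*
 * Hypothetical sketch only -- NOT the confirmed reproducer (the attached
 * job.yaml / dbench is).  It merely drives rename(2) on an overlayfs
 * mount, i.e. the SyS_rename -> vfs_rename -> ovl_rename path in the
 * trace above.  Assumes an overlay already mounted at /mnt/overlay and
 * that "dstfile" already exists in the lower layer.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	const char *src = "/mnt/overlay/srcfile";	/* hypothetical path */
	const char *dst = "/mnt/overlay/dstfile";	/* hypothetical path */
	int fd;

	/* Create the source file in the upper layer. */
	fd = open(src, O_CREAT | O_WRONLY | O_TRUNC, 0644);
	if (fd < 0) {
		perror("open");
		return EXIT_FAILURE;
	}
	close(fd);

	/* Rename over the (lower-layer) target. */
	if (rename(src, dst) < 0) {
		perror("rename");
		return EXIT_FAILURE;
	}

	puts("rename completed");
	return EXIT_SUCCESS;
}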
Thanks,
Kernel Test Robot
[lkp] [mm, shmem] 55ccecc308: [No primary change] vm-scalability.time.elapsed_time +5.7% increase
by kernel test robot
Greetings,
There is no primary KPI change in this test; the data below, collected by multiple monitors running in the background, is provided just for your information.
commit 55ccecc3086789c113c11ed2818fca97588bbd56 ("mm, shmem: swich huge tmpfs to multi-order radix-tree entries")
https://git.kernel.org/pub/scm/linux/kernel/git/kas/linux.git hugeext4/v4
in testcase: vm-scalability
on test machine: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
with following parameters:
runtime: 300s
test: lru-file-readtwice
cpufreq_governor: performance
The motivation behind this suite is to exercise functions and regions of the mm/ subsystem of the Linux kernel which are of interest to us.
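To give an idea of what the lru-file-readtwice name implies, here is a rough, hypothetical C sketch of a "read the same file twice" workload: the first pass pulls the file into the page cache, the second pass re-references the pages and drives mark_page_accessed()/activate_page(), where the profile below spends a large share of its time. The real test case is driven by vm-scalability's own scripts and parameters; the file path used here is made up.
/*
 * Rough, hypothetical illustration of a "read the same file twice"
 * workload; the real lru-file-readtwice case is driven by the
 * vm-scalability scripts.  Pass 1 populates the page cache / inactive
 * LRU, pass 2 re-references the pages (mark_page_accessed/activate_page).
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void read_whole_file(const char *path)
{
	static char buf[1 << 20];	/* 1 MiB read buffer */
	ssize_t n;
	int fd = open(path, O_RDONLY);

	if (fd < 0) {
		perror("open");
		exit(EXIT_FAILURE);
	}
	while ((n = read(fd, buf, sizeof(buf))) > 0)
		;			/* data is discarded; only the page-cache traffic matters */
	if (n < 0)
		perror("read");
	close(fd);
}

int main(int argc, char **argv)
{
	const char *path = argc > 1 ? argv[1] : "/tmp/bigfile";	/* hypothetical default */

	read_whole_file(path);	/* first pass: fill the page cache */
	read_whole_file(path);	/* second pass: re-reference the cached pages */
	return 0;
}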
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
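To watch the radix_tree_node slab growth reported below (slabinfo.radix_tree_node objects roughly double and the number of slabs grows about fivefold) while the job runs, a simple hypothetical poller over /proc/slabinfo such as the following could be used. It is not part of lkp-tests, and reading /proc/slabinfo typically requires root.
/*
 * Hypothetical monitoring helper, not part of lkp-tests: prints the
 * radix_tree_node line of /proc/slabinfo once per second so the slab
 * growth reported below can be watched while the job runs.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	char line[512];

	for (;;) {
		FILE *fp = fopen("/proc/slabinfo", "r");

		if (!fp) {
			perror("fopen /proc/slabinfo");
			return 1;
		}
		while (fgets(line, sizeof(line), fp)) {
			if (strncmp(line, "radix_tree_node", strlen("radix_tree_node")) == 0)
				fputs(line, stdout);
		}
		fclose(fp);
		sleep(1);
	}
}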
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/tbox_group/test/testcase:
gcc-6/performance/x86_64-rhel-7.2/debian-x86_64-2016-08-31.cgz/300s/lkp-bdw-ep2/lru-file-readtwice/vm-scalability
commit:
8a63b84260 ("radix-tree: Add radix_tree_split_preload()")
55ccecc308 ("mm, shmem: swich huge tmpfs to multi-order radix-tree entries")
8a63b84260c90d97 55ccecc3086789c113c11ed281
---------------- --------------------------
fail:runs %reproduction fail:runs
| | |
1:4 -25% :4 kmsg.DHCP/BOOTP:Reply_not_for_us_on_eth#,op[#]xid[d2233ed]
%stddev %change %stddev
\ | \
326.24 ± 0% +5.7% 344.98 ± 0% vm-scalability.time.elapsed_time
326.24 ± 0% +5.7% 344.98 ± 0% vm-scalability.time.elapsed_time.max
2554078 ± 0% -3.4% 2467307 ± 1% vm-scalability.time.involuntary_context_switches
8016 ± 0% -7.3% 7435 ± 0% vm-scalability.time.percent_of_cpu_this_job_got
26044 ± 0% -1.9% 25538 ± 0% vm-scalability.time.system_time
514085 ± 2% -9.7% 464382 ± 1% vm-scalability.time.voluntary_context_switches
1447815 ± 1% +72.2% 2492995 ± 10% interrupts.CAL:Function_call_interrupts
20672 ± 22% -91.2% 1824 ±172% latency_stats.max.wait_on_page_bit_killable.__lock_page_or_retry.filemap_fault.__do_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault.clear_user.load_elf_binary.search_binary_handler.do_execveat_common
70253 ± 61% -97.4% 1852 ±170% latency_stats.sum.wait_on_page_bit_killable.__lock_page_or_retry.filemap_fault.__do_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault.clear_user.load_elf_binary.search_binary_handler.do_execveat_common
811244 ± 8% -26.7% 594929 ± 3% softirqs.RCU
193656 ± 5% +22.2% 236731 ± 0% softirqs.SCHED
13913 ± 10% +45.6% 20264 ± 5% meminfo.CmaFree
7478354 ± 2% +34.2% 10036972 ± 1% meminfo.MemFree
1496465 ± 0% +101.6% 3016222 ± 3% meminfo.SReclaimable
1625291 ± 0% +93.4% 3144106 ± 3% meminfo.Slab
7581041 ± 1% +30.6% 9900240 ± 2% vmstat.memory.free
9.25 ± 4% -18.9% 7.50 ± 6% vmstat.procs.b
19726 ± 1% -9.4% 17880 ± 2% vmstat.system.cs
96756 ± 0% +2.6% 99278 ± 0% vmstat.system.in
2.207e+08 ± 1% +22.3% 2.699e+08 ± 3% numa-numastat.node0.local_node
99241131 ± 1% -21.4% 77986293 ± 1% numa-numastat.node0.numa_foreign
2.207e+08 ± 1% +22.3% 2.699e+08 ± 3% numa-numastat.node0.numa_hit
98899509 ± 0% -28.0% 71192123 ± 10% numa-numastat.node0.numa_miss
98899520 ± 0% -28.0% 71197823 ± 10% numa-numastat.node1.numa_foreign
99241142 ± 1% -21.4% 77991994 ± 1% numa-numastat.node1.numa_miss
92.39 ± 0% -5.1% 87.66 ± 0% turbostat.%Busy
2580 ± 0% -5.1% 2448 ± 0% turbostat.Avg_MHz
2.21 ± 1% +58.2% 3.49 ± 3% turbostat.CPU%c1
5.40 ± 2% +63.9% 8.84 ± 2% turbostat.CPU%c6
1.62 ± 21% +79.8% 2.92 ± 20% turbostat.Pkg%pc2
223.49 ± 0% -2.4% 218.08 ± 0% turbostat.PkgWatt
95.61 ± 0% -1.9% 93.80 ± 0% turbostat.RAMWatt
54699 ± 13% +46.4% 80064 ± 6% cpuidle.C1-BDW.usage
8121 ± 8% +78.4% 14484 ± 4% cpuidle.C1E-BDW.usage
6152556 ± 3% +73.5% 10671654 ± 4% cpuidle.C3-BDW.time
19421 ± 4% +69.4% 32898 ± 4% cpuidle.C3-BDW.usage
2.196e+09 ± 1% +71.6% 3.768e+09 ± 2% cpuidle.C6-BDW.time
2249847 ± 1% +71.8% 3866227 ± 2% cpuidle.C6-BDW.usage
21967626 ± 12% +87.4% 41174433 ± 16% cpuidle.POLL.time
414.25 ± 38% +98.7% 823.25 ± 4% cpuidle.POLL.usage
85750 ± 16% -36.6% 54390 ± 38% numa-meminfo.node0.AnonPages
3980080 ± 2% +31.6% 5237278 ± 1% numa-meminfo.node0.MemFree
760401 ± 2% +117.2% 1651342 ± 5% numa-meminfo.node0.SReclaimable
830508 ± 2% +106.9% 1718268 ± 5% numa-meminfo.node0.Slab
11498 ± 19% -13.2% 9983 ± 19% numa-meminfo.node1.Mapped
3680249 ± 1% +37.6% 5062670 ± 4% numa-meminfo.node1.MemFree
733964 ± 1% +87.0% 1372221 ± 1% numa-meminfo.node1.SReclaimable
792672 ± 1% +80.8% 1433174 ± 1% numa-meminfo.node1.Slab
58402 ± 5% -7.5% 54002 ± 1% slabinfo.Acpi-State.active_objs
1160 ± 4% -7.8% 1070 ± 1% slabinfo.Acpi-State.active_slabs
59218 ± 4% -7.8% 54621 ± 1% slabinfo.Acpi-State.num_objs
1160 ± 4% -7.8% 1070 ± 1% slabinfo.Acpi-State.num_slabs
1645 ± 5% -9.7% 1485 ± 3% slabinfo.mnt_cache.active_objs
1645 ± 5% -9.7% 1485 ± 3% slabinfo.mnt_cache.num_objs
10886 ± 8% -18.5% 8868 ± 9% slabinfo.proc_inode_cache.active_objs
11296 ± 9% -19.6% 9086 ± 11% slabinfo.proc_inode_cache.num_objs
2472545 ± 0% +106.7% 5110015 ± 3% slabinfo.radix_tree_node.active_objs
47001 ± 1% +407.7% 238615 ± 5% slabinfo.radix_tree_node.active_slabs
2529037 ± 0% +105.8% 5203603 ± 3% slabinfo.radix_tree_node.num_objs
47001 ± 1% +407.7% 238615 ± 5% slabinfo.radix_tree_node.num_slabs
1723344 ± 1% -18.9% 1398286 ± 6% proc-vmstat.allocstall_movable
7047 ± 2% +115.9% 15214 ± 13% proc-vmstat.allocstall_normal
3463 ± 11% +45.3% 5031 ± 5% proc-vmstat.nr_free_cma
1857693 ± 1% +34.5% 2499291 ± 1% proc-vmstat.nr_free_pages
1953 ± 1% -16.6% 1628 ± 7% proc-vmstat.nr_isolated_file
103.00 ±171% +118.4% 225.00 ± 86% proc-vmstat.nr_mlock
1668 ± 6% -52.4% 793.75 ± 14% proc-vmstat.nr_pages_scanned
374048 ± 0% +101.3% 752811 ± 3% proc-vmstat.nr_slab_reclaimable
312.50 ± 2% +10.3% 344.75 ± 5% proc-vmstat.nr_vmscan_immediate_reclaim
1.981e+08 ± 1% -24.7% 1.492e+08 ± 4% proc-vmstat.numa_foreign
4.426e+08 ± 0% +18.2% 5.234e+08 ± 5% proc-vmstat.numa_hit
4.426e+08 ± 0% +18.2% 5.234e+08 ± 5% proc-vmstat.numa_local
1.981e+08 ± 1% -24.7% 1.492e+08 ± 4% proc-vmstat.numa_miss
5.429e+08 ± 1% -15.3% 4.599e+08 ± 8% proc-vmstat.pgscan_direct
40718282 ± 19% +282.1% 1.556e+08 ± 8% proc-vmstat.pgscan_kswapd
5.428e+08 ± 1% -15.3% 4.598e+08 ± 8% proc-vmstat.pgsteal_direct
40646328 ± 19% +282.7% 1.556e+08 ± 8% proc-vmstat.pgsteal_kswapd
7744768 ± 0% -99.5% 38400 ± 4% proc-vmstat.slabs_scanned
2173614 ± 34% +123.3% 4853513 ± 20% proc-vmstat.workingset_activate
2854098 ± 1% -100.0% 0.00 ± -1% proc-vmstat.workingset_nodereclaim
3649877 ± 33% +237.2% 12307794 ± 20% proc-vmstat.workingset_refault
21437 ± 16% -36.6% 13601 ± 38% numa-vmstat.node0.nr_anon_pages
985606 ± 3% +33.0% 1310672 ± 1% numa-vmstat.node0.nr_free_pages
928.75 ± 1% -18.5% 757.25 ± 6% numa-vmstat.node0.nr_isolated_file
43.25 ±171% +156.6% 111.00 ± 88% numa-vmstat.node0.nr_mlock
749.50 ± 5% -39.6% 453.00 ± 28% numa-vmstat.node0.nr_pages_scanned
190336 ± 2% +116.7% 412505 ± 5% numa-vmstat.node0.nr_slab_reclaimable
1.321e+08 ± 1% +18.8% 1.569e+08 ± 3% numa-vmstat.node0.numa_hit
1.321e+08 ± 1% +18.8% 1.569e+08 ± 3% numa-vmstat.node0.numa_local
49301014 ± 0% -14.5% 42174486 ± 7% numa-vmstat.node0.numa_miss
1138002 ± 45% +149.6% 2840393 ± 25% numa-vmstat.node0.workingset_activate
1535394 ± 9% -100.0% 0.00 ± -1% numa-vmstat.node0.workingset_nodereclaim
1921061 ± 28% +259.0% 6896839 ± 18% numa-vmstat.node0.workingset_refault
3529 ± 12% +45.7% 5142 ± 5% numa-vmstat.node1.nr_free_cma
909968 ± 2% +39.1% 1265387 ± 4% numa-vmstat.node1.nr_free_pages
1016 ± 1% -15.1% 862.75 ± 8% numa-vmstat.node1.nr_isolated_file
2878 ± 19% -12.6% 2515 ± 19% numa-vmstat.node1.nr_mapped
853.25 ± 11% -58.1% 357.25 ± 17% numa-vmstat.node1.nr_pages_scanned
183780 ± 1% +86.6% 342895 ± 2% numa-vmstat.node1.nr_slab_reclaimable
49245485 ± 0% -14.5% 42120499 ± 7% numa-vmstat.node1.numa_foreign
1.327e+08 ± 1% +12.5% 1.493e+08 ± 6% numa-vmstat.node1.numa_hit
1.327e+08 ± 1% +12.5% 1.493e+08 ± 6% numa-vmstat.node1.numa_local
1338852 ± 11% -100.0% 0.00 ± -1% numa-vmstat.node1.workingset_nodereclaim
4.723e+12 ± 0% +1.4% 4.787e+12 ± 0% perf-stat.branch-instructions
0.09 ± 0% +11.4% 0.10 ± 2% perf-stat.branch-miss-rate%
4.193e+09 ± 0% +12.9% 4.734e+09 ± 1% perf-stat.branch-misses
1.925e+11 ± 0% +5.2% 2.026e+11 ± 2% perf-stat.cache-references
6470738 ± 1% -4.2% 6199521 ± 2% perf-stat.context-switches
100607 ± 9% +23.6% 124305 ± 2% perf-stat.cpu-migrations
5.453e+12 ± 0% +2.1% 5.566e+12 ± 0% perf-stat.dTLB-loads
0.01 ± 11% +18.6% 0.01 ± 6% perf-stat.dTLB-store-miss-rate%
1.362e+08 ± 11% +26.3% 1.72e+08 ± 8% perf-stat.dTLB-store-misses
1.371e+12 ± 0% +6.4% 1.459e+12 ± 3% perf-stat.dTLB-stores
83.98 ± 1% -5.4% 79.42 ± 3% perf-stat.iTLB-load-miss-rate%
83153742 ± 8% +22.7% 1.021e+08 ± 5% perf-stat.iTLB-load-misses
15794816 ± 6% +67.7% 26482172 ± 14% perf-stat.iTLB-loads
2.066e+13 ± 0% +1.9% 2.105e+13 ± 0% perf-stat.instructions
250264 ± 8% -17.4% 206744 ± 5% perf-stat.instructions-per-iTLB-miss
0.28 ± 0% +1.6% 0.28 ± 0% perf-stat.ipc
700308 ± 0% +4.9% 734613 ± 0% perf-stat.minor-faults
41.34 ± 1% -16.6% 34.46 ± 5% perf-stat.node-load-miss-rate%
8.341e+09 ± 0% -14.4% 7.139e+09 ± 1% perf-stat.node-load-misses
1.184e+10 ± 1% +15.0% 1.362e+10 ± 6% perf-stat.node-loads
44.56 ± 0% -13.2% 38.70 ± 3% perf-stat.node-store-miss-rate%
2.496e+09 ± 0% -9.3% 2.263e+09 ± 1% perf-stat.node-store-misses
3.105e+09 ± 0% +15.6% 3.589e+09 ± 4% perf-stat.node-stores
700310 ± 0% +4.9% 734613 ± 0% perf-stat.page-faults
18347 ± 8% +73.2% 31786 ± 5% sched_debug.cfs_rq:/.load.avg
536196 ± 22% +63.8% 878230 ± 0% sched_debug.cfs_rq:/.load.max
64134 ± 19% +108.4% 133653 ± 3% sched_debug.cfs_rq:/.load.stddev
17.37 ± 10% +78.2% 30.96 ± 3% sched_debug.cfs_rq:/.load_avg.avg
328.50 ± 40% +185.2% 936.83 ± 10% sched_debug.cfs_rq:/.load_avg.max
4.96 ± 6% -10.9% 4.42 ± 1% sched_debug.cfs_rq:/.load_avg.min
42.73 ± 32% +211.6% 133.17 ± 8% sched_debug.cfs_rq:/.load_avg.stddev
43.79 ± 11% +89.3% 82.92 ± 51% sched_debug.cfs_rq:/.nr_spread_over.max
6.35 ± 10% +97.2% 12.52 ± 46% sched_debug.cfs_rq:/.nr_spread_over.stddev
12.88 ± 5% +117.3% 27.99 ± 0% sched_debug.cfs_rq:/.runnable_load_avg.avg
217.79 ± 24% +290.3% 850.08 ± 0% sched_debug.cfs_rq:/.runnable_load_avg.max
4.25 ± 5% -23.5% 3.25 ± 10% sched_debug.cfs_rq:/.runnable_load_avg.min
30.10 ± 18% +312.9% 124.28 ± 0% sched_debug.cfs_rq:/.runnable_load_avg.stddev
263474 ± 13% -72.0% 73846 ± 26% sched_debug.cpu.avg_idle.min
194804 ± 9% +28.9% 251068 ± 2% sched_debug.cpu.avg_idle.stddev
12.86 ± 4% +117.7% 27.99 ± 0% sched_debug.cpu.cpu_load[0].avg
217.54 ± 23% +290.8% 850.21 ± 0% sched_debug.cpu.cpu_load[0].max
4.25 ± 5% -23.5% 3.25 ± 10% sched_debug.cpu.cpu_load[0].min
30.06 ± 18% +313.5% 124.28 ± 0% sched_debug.cpu.cpu_load[0].stddev
12.92 ± 4% +117.1% 28.05 ± 0% sched_debug.cpu.cpu_load[1].avg
217.92 ± 23% +290.6% 851.21 ± 0% sched_debug.cpu.cpu_load[1].max
4.25 ± 5% -20.6% 3.38 ± 9% sched_debug.cpu.cpu_load[1].min
30.07 ± 18% +313.5% 124.36 ± 0% sched_debug.cpu.cpu_load[1].stddev
12.95 ± 4% +115.9% 27.96 ± 0% sched_debug.cpu.cpu_load[2].avg
218.62 ± 22% +288.9% 850.29 ± 0% sched_debug.cpu.cpu_load[2].max
4.25 ± 5% -20.6% 3.38 ± 9% sched_debug.cpu.cpu_load[2].min
30.16 ± 17% +311.3% 124.03 ± 1% sched_debug.cpu.cpu_load[2].stddev
12.98 ± 4% +115.0% 27.90 ± 0% sched_debug.cpu.cpu_load[3].avg
218.62 ± 22% +289.3% 851.08 ± 0% sched_debug.cpu.cpu_load[3].max
4.25 ± 5% -20.6% 3.38 ± 9% sched_debug.cpu.cpu_load[3].min
30.18 ± 17% +310.4% 123.85 ± 1% sched_debug.cpu.cpu_load[3].stddev
12.93 ± 4% +115.3% 27.85 ± 1% sched_debug.cpu.cpu_load[4].avg
214.75 ± 20% +296.1% 850.54 ± 0% sched_debug.cpu.cpu_load[4].max
4.25 ± 5% -20.6% 3.38 ± 9% sched_debug.cpu.cpu_load[4].min
29.75 ± 15% +315.6% 123.63 ± 2% sched_debug.cpu.cpu_load[4].stddev
1200 ± 8% -54.6% 545.08 ± 10% sched_debug.cpu.curr->pid.min
17856 ± 5% +78.0% 31787 ± 2% sched_debug.cpu.load.avg
492762 ± 15% +78.4% 878870 ± 0% sched_debug.cpu.load.max
59608 ± 12% +124.7% 133912 ± 1% sched_debug.cpu.load.stddev
0.00 ± 25% -47.6% 0.00 ± 16% sched_debug.cpu.next_balance.stddev
0.45 ± 4% +29.3% 0.58 ± 6% sched_debug.cpu.nr_running.stddev
34.17 ± 11% +54.1% 52.67 ± 9% sched_debug.cpu.nr_uninterruptible.max
14.34 ± 4% +34.1% 19.24 ± 13% sched_debug.cpu.nr_uninterruptible.stddev
13587 ± 2% -8.3% 12456 ± 2% sched_debug.cpu.ttwu_local.avg
45.63 ± 0% -23.0% 35.13 ± 13% perf-profile.calltrace.cycles-pp.__alloc_pages_slowpath.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.__do_page_cache_readahead
1.24 ± 4% +521.7% 7.72 ± 71% perf-profile.calltrace.cycles-pp.__do_page_cache_readahead.ondemand_readahead.page_cache_sync_readahead.generic_file_read_iter.xfs_file_buffered_aio_read
21.02 ± 1% -47.8% 10.98 ± 52% perf-profile.calltrace.cycles-pp.__lru_cache_add.lru_cache_add.add_to_page_cache_lru.mpage_readpages.xfs_vm_readpages
0.95 ± 5% +659.1% 7.23 ± 76% perf-profile.calltrace.cycles-pp.__page_cache_alloc.__do_page_cache_readahead.ondemand_readahead.page_cache_sync_readahead.generic_file_read_iter
0.88 ± 25% +3184.9% 28.82 ± 63% perf-profile.calltrace.cycles-pp._raw_spin_lock.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc
19.46 ± 0% -41.8% 11.33 ± 48% perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.shrink_active_list.shrink_node_memcg.shrink_node.do_try_to_free_pages
19.32 ± 0% -48.8% 9.90 ± 49% perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.shrink_inactive_list.shrink_node_memcg.shrink_node.do_try_to_free_pages
20.16 ± 1% -49.0% 10.28 ± 52% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.lru_cache_add.add_to_page_cache_lru
21.17 ± 0% -46.5% 11.33 ± 51% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.activate_page.mark_page_accessed.generic_file_read_iter
1.60 ± 1% -53.2% 0.75 ± 58% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain.shrink_inactive_list
22.25 ± 0% -45.3% 12.18 ± 50% perf-profile.calltrace.cycles-pp.activate_page.mark_page_accessed.generic_file_read_iter.xfs_file_buffered_aio_read.xfs_file_read_iter
0.95 ± 4% +660.8% 7.23 ± 76% perf-profile.calltrace.cycles-pp.alloc_pages_current.__page_cache_alloc.__do_page_cache_readahead.ondemand_readahead.page_cache_sync_readahead
45.29 ± 0% -45.1% 24.88 ± 48% perf-profile.calltrace.cycles-pp.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask.alloc_pages_current
1.39 ± 16% +2018.5% 29.45 ± 61% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc.__do_page_cache_readahead
0.00 ± -1% +Inf% 2.12 ± 13% perf-profile.calltrace.cycles-pp.kswapd.kthread.ret_from_fork
0.00 ± -1% +Inf% 2.48 ± 12% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
1.64 ± 1% -52.3% 0.78 ± 58% perf-profile.calltrace.cycles-pp.lru_add_drain.shrink_inactive_list.shrink_node_memcg.shrink_node.do_try_to_free_pages
1.64 ± 1% -52.3% 0.78 ± 58% perf-profile.calltrace.cycles-pp.lru_add_drain_cpu.lru_add_drain.shrink_inactive_list.shrink_node_memcg.shrink_node
21.02 ± 1% -47.8% 10.98 ± 52% perf-profile.calltrace.cycles-pp.lru_cache_add.add_to_page_cache_lru.mpage_readpages.xfs_vm_readpages.__do_page_cache_readahead
22.69 ± 0% -44.7% 12.56 ± 49% perf-profile.calltrace.cycles-pp.mark_page_accessed.generic_file_read_iter.xfs_file_buffered_aio_read.xfs_file_read_iter.__vfs_read
0.88 ± 25% +3184.9% 28.82 ± 63% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_current
19.31 ± 0% -46.5% 10.33 ± 51% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.shrink_inactive_list.shrink_node_memcg.shrink_node
20.14 ± 1% -49.0% 10.26 ± 52% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.lru_cache_add
21.15 ± 0% -46.5% 11.31 ± 51% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.pagevec_lru_move_fn.activate_page.mark_page_accessed
1.59 ± 1% -53.1% 0.75 ± 58% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain
1.24 ± 4% +521.7% 7.72 ± 71% perf-profile.calltrace.cycles-pp.ondemand_readahead.page_cache_sync_readahead.generic_file_read_iter.xfs_file_buffered_aio_read.xfs_file_read_iter
1.24 ± 4% +521.7% 7.72 ± 71% perf-profile.calltrace.cycles-pp.page_cache_sync_readahead.generic_file_read_iter.xfs_file_buffered_aio_read.xfs_file_read_iter.__vfs_read
20.99 ± 1% -47.8% 10.95 ± 52% perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.__lru_cache_add.lru_cache_add.add_to_page_cache_lru.mpage_readpages
22.20 ± 0% -45.3% 12.14 ± 50% perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.activate_page.mark_page_accessed.generic_file_read_iter.xfs_file_buffered_aio_read
1.64 ± 1% -52.3% 0.78 ± 58% perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain.shrink_inactive_list.shrink_node_memcg
0.00 ± -1% +Inf% 2.48 ± 12% perf-profile.calltrace.cycles-pp.ret_from_fork
20.25 ± 0% -41.5% 11.84 ± 47% perf-profile.calltrace.cycles-pp.shrink_active_list.shrink_node_memcg.shrink_node.do_try_to_free_pages.try_to_free_pages
22.98 ± 0% -48.0% 11.95 ± 48% perf-profile.calltrace.cycles-pp.shrink_inactive_list.shrink_node_memcg.shrink_node.do_try_to_free_pages.try_to_free_pages
0.00 ± -1% +Inf% 1.54 ± 17% perf-profile.calltrace.cycles-pp.shrink_inactive_list.shrink_node_memcg.shrink_node.kswapd.kthread
45.28 ± 0% -44.7% 25.06 ± 47% perf-profile.calltrace.cycles-pp.shrink_node.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask
0.00 ± -1% +Inf% 2.12 ± 13% perf-profile.calltrace.cycles-pp.shrink_node.kswapd.kthread.ret_from_fork
45.09 ± 0% -44.6% 24.97 ± 47% perf-profile.calltrace.cycles-pp.shrink_node_memcg.shrink_node.do_try_to_free_pages.try_to_free_pages.__alloc_pages_slowpath
0.00 ± -1% +Inf% 2.11 ± 14% perf-profile.calltrace.cycles-pp.shrink_node_memcg.shrink_node.kswapd.kthread.ret_from_fork
1.43 ± 1% -50.2% 0.71 ± 62% perf-profile.calltrace.cycles-pp.shrink_page_list.shrink_inactive_list.shrink_node_memcg.shrink_node.do_try_to_free_pages
45.38 ± 0% -46.5% 24.30 ± 46% perf-profile.calltrace.cycles-pp.try_to_free_pages.__alloc_pages_slowpath.__alloc_pages_nodemask.alloc_pages_current.__page_cache_alloc
0.00 ± -1% +Inf% 0.80 ±101% perf-profile.children.cycles-pp.___slab_alloc
45.86 ± 0% -21.0% 36.22 ± 14% perf-profile.children.cycles-pp.__alloc_pages_slowpath
21.28 ± 1% -47.1% 11.27 ± 51% perf-profile.children.cycles-pp.__lru_cache_add
0.00 ± -1% +Inf% 0.80 ±101% perf-profile.children.cycles-pp.__slab_alloc
2.74 ± 13% +1393.9% 40.89 ± 58% perf-profile.children.cycles-pp._raw_spin_lock
39.79 ± 0% -42.6% 22.84 ± 47% perf-profile.children.cycles-pp._raw_spin_lock_irq
43.46 ± 0% -47.2% 22.96 ± 50% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
22.25 ± 0% -45.3% 12.18 ± 50% perf-profile.children.cycles-pp.activate_page
45.50 ± 0% -44.4% 25.29 ± 47% perf-profile.children.cycles-pp.do_try_to_free_pages
1.68 ± 18% +2288.3% 40.18 ± 61% perf-profile.children.cycles-pp.get_page_from_freelist
0.00 ± -1% +Inf% 2.12 ± 13% perf-profile.children.cycles-pp.kswapd
0.07 ± 17% +3320.7% 2.48 ± 12% perf-profile.children.cycles-pp.kthread
1.73 ± 0% -44.4% 0.96 ± 40% perf-profile.children.cycles-pp.lru_add_drain
1.73 ± 0% -44.6% 0.96 ± 40% perf-profile.children.cycles-pp.lru_add_drain_cpu
21.25 ± 1% -47.1% 11.25 ± 51% perf-profile.children.cycles-pp.lru_cache_add
22.70 ± 0% -44.7% 12.56 ± 49% perf-profile.children.cycles-pp.mark_page_accessed
0.00 ± -1% +Inf% 0.80 ±102% perf-profile.children.cycles-pp.new_slab
1.24 ± 4% +521.7% 7.72 ± 71% perf-profile.children.cycles-pp.page_cache_sync_readahead
45.19 ± 0% -46.2% 24.33 ± 50% perf-profile.children.cycles-pp.pagevec_lru_move_fn
0.07 ± 17% +3320.7% 2.48 ± 12% perf-profile.children.cycles-pp.ret_from_fork
23.52 ± 0% -40.9% 13.89 ± 41% perf-profile.children.cycles-pp.shrink_inactive_list
45.51 ± 0% -39.8% 27.41 ± 43% perf-profile.children.cycles-pp.shrink_node
45.32 ± 0% -39.7% 27.32 ± 43% perf-profile.children.cycles-pp.shrink_node_memcg
45.59 ± 0% -44.4% 25.34 ± 47% perf-profile.children.cycles-pp.try_to_free_pages
vm-scalability.time.percent_of_cpu_this_job_got
9000 ++-------------------------------------------------------------------+
| .* |
8000 *+O.O O.O O O.O.O.O.O.O.O O O.O..O.O O O.O.O.O O O O O O O O O O O
7000 O+ : O : : : : : : : O |
| : : : : : : : : |
6000 ++ : : : : : : : : |
5000 ++ : : : : : : : : |
| : : : : : : : : |
4000 ++ : : : : : : : : |
3000 ++ : : : : : : : : |
| : : : : : : : : |
2000 ++ :: :: :: :: |
1000 ++ : : : : |
| : : : : |
0 ++----*-----*---------------*----------*-----------------------------+
vm-scalability.time.elapsed_time
350 O+O-O-O-O-O-O-O-O--O-O-O-O-O-O-O-O-O-O-O-O-O-O-O-O-O--O-O-O-O-O-O-O-O-O
*.*.* *.* *.*..*.*.*.*.* *.*.*.* *.*.*.*.* |
300 ++ : : : : : : : : |
| : : : : : : : : |
250 ++ : : : : : : : : |
| : : : : : : : : |
200 ++ : : : : : : : : |
| : : : : : : : : |
150 ++ : : : : : : : : |
| : : : : : : : : |
100 ++ : : : : : : : : |
| : : : : |
50 ++ : : : : |
| : : : : |
0 ++----*-----*----------------*---------*------------------------------+
vm-scalability.time.elapsed_time.max
350 O+O-O-O-O-O-O-O-O--O-O-O-O-O-O-O-O-O-O-O-O-O-O-O-O-O--O-O-O-O-O-O-O-O-O
*.*.* *.* *.*..*.*.*.*.* *.*.*.* *.*.*.*.* |
300 ++ : : : : : : : : |
| : : : : : : : : |
250 ++ : : : : : : : : |
| : : : : : : : : |
200 ++ : : : : : : : : |
| : : : : : : : : |
150 ++ : : : : : : : : |
| : : : : : : : : |
100 ++ : : : : : : : : |
| : : : : |
50 ++ : : : : |
| : : : : |
0 ++----*-----*----------------*---------*------------------------------+
turbostat.Avg_MHz
2800 ++-------------------------------------------------------------------+
| * * * * |
2750 ++ : : : : |
2700 ++ : : : : : : : : |
| : : : : : : : : |
2650 ++ : : : : : : : : |
| : : : : : : : : |
2600 ++ : : : : : : : : .* |
*.*.* *.* *.*.*.*.*.*.* *.*..*.* *.*.*.* |
2550 ++ |
2500 ++ |
| O O O O O O O O |
2450 O+O O O O O O O O O O O O O O O O O O O O O O O
| O O O |
2400 ++-------------------------------------------------------------------+
turbostat._Busy
100 ++--------------------------------------------------------------------+
| * * * * |
98 ++ : : : : |
| : : : : : : : : |
96 ++ : : : : : : : : |
| : : : : : : : : |
94 ++ : : : : : : : : |
| : : : : : : : : .* |
92 *+*.* *.* *.*..*.*.*.*.* *.*.*.* *.*.*.* |
| |
90 ++ |
| O O O O O O O O |
88 O+O O O O O O O O O O O O O O O O O O O O O |
| O O O O O
86 ++--------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Thanks,
Xiaolong