[lkp] [drm/i915] 6510a406d2: WARNING: CPU: 3 PID: 637 at drivers/gpu/drm/i915/intel_display.c:3916 intel_atomic_commit+0xf20/0xf30 [i915]()
by kernel test robot
FYI, we noticed the below changes on
git://people.freedesktop.org/~mlankhorst/linux rework-page-flip
commit 6510a406d296df7399956b723bd4b81ee481f6c0 ("drm/i915: Rework intel_crtc_page_flip to be almost atomic.")
<5>[ 15.546422] Key type id_resolver registered
<5>[ 15.546664] Key type id_legacy registered
<4>[ 81.385954] ------------[ cut here ]------------
<4>[ 81.386235] WARNING: CPU: 3 PID: 637 at drivers/gpu/drm/i915/intel_display.c:3916 intel_atomic_commit+0xf20/0xf30 [i915]()
<4>[ 81.386875] Removing stuck page flip
<4>[ 81.387067] Modules linked in: rpcsec_gss_krb5 auth_rpcgss nfsv4 dns_resolver snd_hda_codec_hdmi snd_hda_codec_realtek snd_hda_codec_generic sg sd_mod x86_pkg_temp_thermal coretemp kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul crc32c_intel ahci libahci firewire_ohci ppdev aesni_intel lrw gf128mul glue_helper ablk_helper cryptd serio_raw pcspkr libata snd_hda_intel firewire_core crc_itu_t snd_hda_codec i915 snd_hda_core snd_hwdep video snd_pcm snd_timer drm_kms_helper syscopyarea sysfillrect sysimgblt snd fb_sys_fops soundcore shpchp drm parport_pc nuvoton_cir parport rc_core
<4>[ 81.390213] CPU: 3 PID: 637 Comm: kms_flip Not tainted 4.4.0-rc4-00525-g6510a40 #1
<4>[ 81.390601] Hardware name: /DH67GD, BIOS BLH6710H.86A.0132.2011.1007.1505 10/07/2011
<4>[ 81.391075] ffffffffa02b07e0 ffff88021d4e3b88 ffffffff81410ca2 ffff88021d4e3bd0
<4>[ 81.391500] ffff88021d4e3bc0 ffffffff81078cc6 ffff880204a5e9a8 ffff88021d3a8500
<4>[ 81.391917] ffff8801e6dfa000 ffff8801e6fa3a00 0000000000000000 ffff88021d4e3c20
<4>[ 81.392341] Call Trace:
<4>[ 81.392479] [<ffffffff81410ca2>] dump_stack+0x4b/0x69
<4>[ 81.392753] [<ffffffff81078cc6>] warn_slowpath_common+0x86/0xc0
<4>[ 81.393058] [<ffffffff81078d4c>] warn_slowpath_fmt+0x4c/0x50
<4>[ 81.393360] [<ffffffff810bc117>] ? finish_wait+0x67/0x80
<4>[ 81.393660] [<ffffffffa024ba40>] intel_atomic_commit+0xf20/0xf30 [i915]
<4>[ 81.394022] [<ffffffffa00aea5c>] ? drm_atomic_check_only+0x14c/0x610 [drm]
<4>[ 81.394386] [<ffffffff810bc2c0>] ? wait_woken+0xb0/0xb0
<4>[ 81.394674] [<ffffffffa00aef57>] drm_atomic_commit+0x37/0x60 [drm]
<4>[ 81.395008] [<ffffffffa014e18e>] drm_atomic_helper_connector_dpms+0xee/0x1a0 [drm_kms_helper]
<4>[ 81.395463] [<ffffffffa00a3b54>] drm_mode_obj_set_property_ioctl+0x234/0x240 [drm]
<4>[ 81.395868] [<ffffffffa00a3b90>] drm_mode_connector_property_set_ioctl+0x30/0x40 [drm]
<4>[ 81.396287] [<ffffffffa0094862>] drm_ioctl+0x142/0x590 [drm]
<4>[ 81.396593] [<ffffffffa00a3b60>] ? drm_mode_obj_set_property_ioctl+0x240/0x240 [drm]
<4>[ 81.397003] [<ffffffff813a848c>] ? selinux_file_ioctl+0xfc/0x1c0
<4>[ 81.397323] [<ffffffff811fbdb1>] do_vfs_ioctl+0x301/0x560
<4>[ 81.397608] [<ffffffff8109d7e2>] ? __might_sleep+0x52/0xb0
<4>[ 81.397898] [<ffffffff813a10b3>] ? security_file_ioctl+0x43/0x60
<4>[ 81.398218] [<ffffffff811fc089>] SyS_ioctl+0x79/0x90
<4>[ 81.398480] [<ffffffff818c6a2e>] entry_SYSCALL_64_fastpath+0x12/0x71
<4>[ 81.398812] ---[ end trace 872a05a25a529b20 ]---
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Ying Huang
[lkp] [rcu] 52b265325b: WARNING: CPU: 0 PID: 1 at fs/sysfs/group.c:61 internal_create_group+0x252/0x300()
by kernel test robot
FYI, we noticed the below changes on
https://git.kernel.org/pub/scm/linux/kernel/git/paulmck/linux-rcu.git rcu/dev
commit 52b265325b7a7438cbcdd735fecdd9d7e255eb83 ("rcu: Remove TINY_RCU bloat from pointless boot parameters")
+----------------+------------+------------+
| | dc15c60e39 | 52b265325b |
+----------------+------------+------------+
| boot_successes | 4 | 0 |
+----------------+------------+------------+
[ 0.206357] Performance Events: unsupported p6 CPU model 60 no PMU driver, software events only.
[ 0.209250] devtmpfs: initialized
[ 0.211425] ------------[ cut here ]------------
[ 0.212045] WARNING: CPU: 0 PID: 1 at fs/sysfs/group.c:61 internal_create_group+0x252/0x300()
[ 0.213475] BUG: unable to handle kernel paging request at 65746f6e
[ 0.214456] IP: [<c13b8639>] strnlen+0x9/0x20
[ 0.215065] *pdpt = 0000000000000000 *pde = 0000000000000000
[ 0.215908] Oops: 0000 [#1]
[ 0.216546] CPU: 0 PID: 1 Comm: swapper Not tainted 4.4.0-rc2-00051-g52b2653 #1
[ 0.217536] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 0.218777] task: d443e040 ti: d4440000 task.ti: d4440000
[ 0.219519] EIP: 0060:[<c13b8639>] EFLAGS: 00010097 CPU: 0
[ 0.220372] EIP is at strnlen+0x9/0x20
[ 0.220885] EAX: 65746f6e EBX: c1fbd0ea ECX: 65746f6e EDX: fffffffe
[ 0.221803] ESI: 65746f6e EDI: c1fbd4c0 EBP: d4441da8 ESP: d4441da8
[ 0.222632] DS: 007b ES: 007b FS: 0000 GS: 00e0 SS: 0068
[ 0.223387] CR0: 80050033 CR2: 65746f6e CR3: 01fb0000 CR4: 000406b0
[ 0.224101] Stack:
[ 0.224360] d4441dc4 c13b47bf ffffffff c1d40f67 c1fbd0ea d4441ed0 c1d40f68 d4441e08
[ 0.225368] c13b60cd 00000000 0000ffff c106ffff 00000000 00000000 00000282 c1fbd0e0
[ 0.226434] 000003e0 c1fbd4c0 c1d40f68 ff0a0004 ffffffff 000003e0 00000000 ffffffff
[ 0.227446] Call Trace:
[ 0.227740] [<c13b47bf>] string+0x2f/0xd0
[ 0.228472] [<c13b60cd>] vsnprintf+0xfd/0x440
[ 0.228984] [<c106ffff>] ? console_unlock+0x55f/0x690
[ 0.229650] [<c13b6916>] vscnprintf+0x16/0x40
[ 0.230188] [<c107023a>] vprintk_emit+0x10a/0x540
[ 0.230736] [<c10706a5>] vprintk+0x35/0x40
[ 0.231232] [<c103e18a>] warn_slowpath_common+0x6a/0xb0
[ 0.231836] [<c1158352>] ? internal_create_group+0x252/0x300
[ 0.232504] [<c1158352>] ? internal_create_group+0x252/0x300
[ 0.233183] [<c103e22e>] warn_slowpath_fmt+0x2e/0x30
[ 0.233814] [<c1158352>] internal_create_group+0x252/0x300
[ 0.234462] [<c115840c>] sysfs_create_group+0xc/0x10
[ 0.235038] [<c1f3104b>] ksysfs_init+0x28/0x78
[ 0.235574] [<c1f31023>] ? nsproxy_cache_init+0x27/0x27
[ 0.236323] [<c1f23c8c>] do_one_initcall+0x175/0x188
[ 0.237052] [<c105533f>] ? parse_args+0x2bf/0x520
[ 0.237744] [<c1f23d78>] kernel_init_freeable+0xd9/0x156
[ 0.238377] [<c1f23d78>] ? kernel_init_freeable+0xd9/0x156
[ 0.239010] [<c1aa497b>] kernel_init+0xb/0xe0
[ 0.239534] [<c105b8ec>] ? schedule_tail+0xc/0x50
[ 0.240084] [<c1aac068>] ret_from_kernel_thread+0x20/0x34
[ 0.240914] [<c1aa4970>] ? rest_init+0x70/0x70
[ 0.241453] Code: 00 00 85 c9 74 11 55 89 e5 57 89 c7 89 d0 f2 ae 75 01 4f 89 f8 5f 5d c3 8d 76 00 8d bc 27 00 00 00 00 55 89 c1 89 e5 89 c8 eb 06 <80> 38 00 74 07 40 4a 83 fa ff 75 f4 29 c8 5d c3 90 90 90 90 90
[ 0.244735] EIP: [<c13b8639>] strnlen+0x9/0x20 SS:ESP 0068:d4441da8
[ 0.245497] CR2: 0000000065746f6e
[ 0.246020] ---[ end trace 62c0b504e1bd3c01 ]---
[ 0.246566] Kernel panic - not syncing: Fatal exception
Thanks,
Ying Huang
Re: [LKP] [smpboot] 0aefb957de: -97.6% pigz.throughput and many other changes
by Huang, Ying
Daniel Wagner <daniel.wagner@bmw-carit.de> writes:
> On 12/10/2015 03:41 AM, kernel test robot wrote:
>> FYI, we noticed the below changes on
>>
>> git://internal_merge_and_test_tree
>> revert-0aefb957de5f2a1d9467126114146b6c793cf180-0aefb957de5f2a1d9467126114146b6c793cf180
>> commit 0aefb957de5f2a1d9467126114146b6c793cf180 ("smpboot: Add CPU
>> hotplug state variables instead of reusing CPU states")
>>
>>
>> =========================================================================================
>> compiler/cpufreq_governor/kconfig/rootfs/tbox_group/test/testcase:
>> gcc-4.9/performance/x86_64-rhel/debian-x86_64-2015-02-07.cgz/lkp-ivb-d02/pwrite3/will-it-scale
>
> The only explanation I have for these regressions across all kinds of tests
> (cpu, disk, ...) is that one or more CPUs did not boot at all. The
> changes are not in the hot path unless a lot of CPU hotplug
> operations are going on, which I presume is not the case.
>
> @lkp team: Is it possible to get one of the dmesg logs when the regression
> is observed?
The dmesg is attached. And yes, only one CPU booted with the commit:
[ 1.121157] x86: Booted up 4 nodes, 1 CPUs
Best Regards,
Huang, Ying
[fs] 18c5809b7e: INFO: possible circular locking dependency detected
by kernel test robot
FYI, we noticed the below changes on
git://internal_merge_and_test_tree revert-18c5809b7e1183fc5ca4d104b99d53dc77723ef9-18c5809b7e1183fc5ca4d104b99d53dc77723ef9
commit 18c5809b7e1183fc5ca4d104b99d53dc77723ef9 ("fs: clear file privilege bits when mmap writing")
+------------------------------------------------------------------+----------+------------+
| | v4.4-rc3 | 18c5809b7e |
+------------------------------------------------------------------+----------+------------+
| boot_successes | 0 | 0 |
| boot_failures | 39 | 6 |
| BUG:kernel_test_oversize | 4 | |
| BUG:kernel_boot_hang | 34 | 1 |
| invoked_oom-killer:gfp_mask=0x | 1 | 1 |
| Mem-Info | 1 | 1 |
| Kernel_panic-not_syncing:Out_of_memory_and_no_killable_processes | 1 | 1 |
| backtrace:power_supply_changed_work | 1 | |
| INFO:possible_circular_locking_dependency_detected | 0 | 4 |
| backtrace:iterate_dir | 0 | 4 |
| backtrace:SyS_getdents | 0 | 4 |
| backtrace:vm_mmap_pgoff | 0 | 4 |
| backtrace:SyS_mmap_pgoff | 0 | 4 |
| backtrace:SyS_mmap | 0 | 4 |
| backtrace:kmem_cache_alloc | 0 | 1 |
| backtrace:device_private_init | 0 | 1 |
| backtrace:serio_handle_event | 0 | 1 |
+------------------------------------------------------------------+----------+------------+
[ 54.241020] systemd-sysv-generator[171]: Ignoring K01watchdog symlink in rc6.d, not generating watchdog.service.
[ 54.918926]
[ 54.919580] ======================================================
[ 54.921207] [ INFO: possible circular locking dependency detected ]
[ 54.922873] 4.4.0-rc3-00001-g18c5809b7 #1 Not tainted
[ 54.925715] -------------------------------------------------------
[ 54.929512] systemd-journal/185 is trying to acquire lock:
[ 54.929544] (&sb->s_type->i_mutex_key#7){+.+.+.}, at: [<ffffffff81253416>] do_mmap+0x3b6/0x670
[ 54.929545]
[ 54.929545] but task is already holding lock:
[ 54.929551] (&mm->mmap_sem){++++++}, at: [<ffffffff8122e6dc>] vm_mmap_pgoff+0x4c/0xd0
[ 54.929552]
[ 54.929552] which lock already depends on the new lock.
[ 54.929552]
[ 54.929553]
[ 54.929553] the existing dependency chain (in reverse order) is:
[ 54.929556]
[ 54.929556] -> #1 (&mm->mmap_sem){++++++}:
[ 54.929566] [<ffffffff81155f61>] lock_acquire+0xa1/0xd0
[ 54.929570] [<ffffffff8124953d>] __might_fault+0x9d/0xe0
[ 54.929586] [<ffffffff812bfa2c>] filldir+0xdc/0x1b0
[ 54.929594] [<ffffffff812e2089>] dcache_readdir+0x249/0x370
[ 54.929597] [<ffffffff812bfbb5>] iterate_dir+0xb5/0x1a0
[ 54.929600] [<ffffffff812bfe36>] SyS_getdents+0xb6/0x180
[ 54.929617] [<ffffffff81d844f6>] entry_SYSCALL_64_fastpath+0x16/0x7a
[ 54.929622]
[ 54.929622] -> #0 (&sb->s_type->i_mutex_key#7){+.+.+.}:
[ 54.929625] [<ffffffff81155401>] __lock_acquire+0x1e71/0x2150
[ 54.929628] [<ffffffff81155f61>] lock_acquire+0xa1/0xd0
[ 54.929637] [<ffffffff81d7ca89>] mutex_lock_nested+0x99/0x620
[ 54.929641] [<ffffffff81253416>] do_mmap+0x3b6/0x670
[ 54.929644] [<ffffffff8122e70d>] vm_mmap_pgoff+0x7d/0xd0
[ 54.929647] [<ffffffff81250144>] SyS_mmap_pgoff+0x214/0x2e0
[ 54.929662] [<ffffffff8100dd06>] SyS_mmap+0x26/0x40
[ 54.929666] [<ffffffff81d844f6>] entry_SYSCALL_64_fastpath+0x16/0x7a
[ 54.929667]
[ 54.929667] other info that might help us debug this:
[ 54.929667]
[ 54.929668] Possible unsafe locking scenario:
[ 54.929668]
[ 54.929668] CPU0 CPU1
[ 54.929669] ---- ----
[ 54.929671] lock(&mm->mmap_sem);
[ 54.929673] lock(&sb->s_type->i_mutex_key#7);
[ 54.929675] lock(&mm->mmap_sem);
[ 54.929677] lock(&sb->s_type->i_mutex_key#7);
[ 54.929678]
[ 54.929678] *** DEADLOCK ***
[ 54.929678]
[ 54.929679] 1 lock held by systemd-journal/185:
[ 54.929685] #0: (&mm->mmap_sem){++++++}, at: [<ffffffff8122e6dc>] vm_mmap_pgoff+0x4c/0xd0
[ 54.929686]
[ 54.929686] stack backtrace:
[ 54.929700] CPU: 0 PID: 185 Comm: systemd-journal Not tainted 4.4.0-rc3-00001-g18c5809b7 #1
[ 54.929701] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 54.929714] ffffffff8357a530 ffff88001426fc70 ffffffff816bd2d8 ffffffff8357a530
[ 54.929717] ffff88001426fcb0 ffffffff8114e758 ffff88001426fd00 ffff880000b74bc0
[ 54.929790] ffff880000b74b88 0000000000000001 0000000000000001 ffff880000b74040
[ 54.929793] Call Trace:
[ 54.929814] [<ffffffff816bd2d8>] dump_stack+0x7c/0xb4
[ 54.929817] [<ffffffff8114e758>] print_circular_bug+0x2c8/0x370
[ 54.929820] [<ffffffff81155401>] __lock_acquire+0x1e71/0x2150
[ 54.929831] [<ffffffff8112fb7c>] ? local_clock+0x4c/0x60
[ 54.929834] [<ffffffff81155f61>] lock_acquire+0xa1/0xd0
[ 54.929837] [<ffffffff81253416>] ? do_mmap+0x3b6/0x670
[ 54.929841] [<ffffffff81d7ca89>] mutex_lock_nested+0x99/0x620
[ 54.929843] [<ffffffff81253416>] ? do_mmap+0x3b6/0x670
[ 54.929846] [<ffffffff81253416>] ? do_mmap+0x3b6/0x670
[ 54.929849] [<ffffffff81253416>] do_mmap+0x3b6/0x670
[ 54.929851] [<ffffffff8122e70d>] vm_mmap_pgoff+0x7d/0xd0
[ 54.929854] [<ffffffff81250144>] SyS_mmap_pgoff+0x214/0x2e0
[ 54.929857] [<ffffffff8100dd06>] SyS_mmap+0x26/0x40
[ 54.929860] [<ffffffff81d844f6>] entry_SYSCALL_64_fastpath+0x16/0x7a
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Ying Huang
[smpboot] 0aefb957de: -97.6% pigz.throughput and many other changes
by kernel test robot
FYI, we noticed the below changes on
git://internal_merge_and_test_tree revert-0aefb957de5f2a1d9467126114146b6c793cf180-0aefb957de5f2a1d9467126114146b6c793cf180
commit 0aefb957de5f2a1d9467126114146b6c793cf180 ("smpboot: Add CPU hotplug state variables instead of reusing CPU states")
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/tbox_group/test/testcase:
gcc-4.9/performance/x86_64-rhel/debian-x86_64-2015-02-07.cgz/lkp-ivb-d02/pwrite3/will-it-scale
commit:
8025943c32f5de29b3fd548e917e24424297ca73
0aefb957de5f2a1d9467126114146b6c793cf180
8025943c32f5de29 0aefb957de5f2a1d9467126114
---------------- --------------------------
%stddev %change %stddev
\ | \
1364213 ± 1% +9.0% 1486873 ± 0% will-it-scale.per_process_ops
1279326 ± 0% +6.0% 1356383 ± 0% will-it-scale.per_thread_ops
0.17 ± 0% +41.4% 0.25 ± 0% will-it-scale.scalability
304.07 ± 0% -25.0% 228.07 ± 0% will-it-scale.time.elapsed_time
304.07 ± 0% -25.0% 228.07 ± 0% will-it-scale.time.elapsed_time.max
4705 ± 4% +892.6% 46711 ± 0% will-it-scale.time.involuntary_context_switches
8670 ± 0% -23.2% 6660 ± 0% will-it-scale.time.minor_page_faults
124.00 ± 0% -60.5% 49.00 ± 0% will-it-scale.time.percent_of_cpu_this_job_got
366.82 ± 0% -71.7% 103.82 ± 0% will-it-scale.time.system_time
172606 ± 16% -79.9% 34644 ± 0% will-it-scale.time.voluntary_context_switches
487.21 ± 6% -87.7% 60.08 ±161% uptime.idle
85581 ± 18% -53.1% 40116 ± 4% meminfo.DirectMap4k
17877 ± 0% -13.6% 15438 ± 0% meminfo.SUnreclaim
3891 ± 4% -38.1% 2410 ± 2% vmstat.system.cs
2722 ± 0% -60.2% 1084 ± 0% vmstat.system.in
15612 ± 5% -85.0% 2335 ± 9% softirqs.RCU
19525 ± 1% -100.0% 0.00 ± -1% softirqs.SCHED
398686 ± 0% -69.4% 121923 ± 0% softirqs.TIMER
27989466 ±132% -97.5% 710524 ±153% latency_stats.avg.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
11122 ± 1% +459.4% 62221 ± 0% latency_stats.hits.generic_file_write_iter.__vfs_write.vfs_write.SyS_pwrite64.entry_SYSCALL_64_fastpath
42247018 ±145% -98.3% 710524 ±153% latency_stats.max.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
1072135 ± 7% +4401.0% 48256852 ± 0% latency_stats.sum.generic_file_write_iter.__vfs_write.vfs_write.SyS_pwrite64.entry_SYSCALL_64_fastpath
50160257 ±150% -98.6% 710524 ±153% latency_stats.sum.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
1334934 ± 1% +267.5% 4906096 ± 1% latency_stats.sum.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
4469 ± 0% -13.6% 3859 ± 0% proc-vmstat.nr_slab_unreclaimable
300600 ± 0% -25.9% 222872 ± 0% proc-vmstat.numa_hit
300600 ± 0% -25.9% 222872 ± 0% proc-vmstat.numa_local
1615 ± 2% -42.2% 933.60 ± 5% proc-vmstat.pgactivate
134199 ± 0% -24.8% 100937 ± 0% proc-vmstat.pgalloc_dma32
179571 ± 0% -26.6% 131771 ± 0% proc-vmstat.pgalloc_normal
352023 ± 0% -25.4% 262552 ± 0% proc-vmstat.pgfault
311385 ± 0% -26.2% 229764 ± 0% proc-vmstat.pgfree
304.07 ± 0% -25.0% 228.07 ± 0% time.elapsed_time
304.07 ± 0% -25.0% 228.07 ± 0% time.elapsed_time.max
4705 ± 4% +892.6% 46711 ± 0% time.involuntary_context_switches
8670 ± 0% -23.2% 6660 ± 0% time.minor_page_faults
124.00 ± 0% -60.5% 49.00 ± 0% time.percent_of_cpu_this_job_got
366.82 ± 0% -71.7% 103.82 ± 0% time.system_time
12.11 ± 1% -24.0% 9.20 ± 1% time.user_time
172606 ± 16% -79.9% 34644 ± 0% time.voluntary_context_switches
11305186 ± 14% -99.9% 7783 ±199% cpuidle.C1-IVB.time
75377 ± 13% -100.0% 4.00 ± 61% cpuidle.C1-IVB.usage
4233748 ± 18% -100.0% 699.20 ± 38% cpuidle.C1E-IVB.time
3279 ± 5% -99.9% 3.20 ± 23% cpuidle.C1E-IVB.usage
532500 ± 38% -99.9% 633.40 ± 40% cpuidle.C3-IVB.time
1070 ± 8% -99.8% 2.20 ± 44% cpuidle.C3-IVB.usage
4.359e+08 ± 0% -99.8% 679907 ± 6% cpuidle.C6-IVB.time
27901 ± 5% -99.8% 51.00 ± 17% cpuidle.C6-IVB.usage
5569279 ± 42% -99.4% 31002 ±145% cpuidle.POLL.time
386.20 ± 10% -99.6% 1.40 ± 85% cpuidle.POLL.usage
62.96 ± 0% +58.4% 99.70 ± 0% turbostat.%Busy
2072 ± 0% +58.4% 3282 ± 0% turbostat.Avg_MHz
25.01 ± 0% -98.8% 0.30 ± 7% turbostat.CPU%c1
0.03 ± 88% -100.0% 0.00 ± -1% turbostat.CPU%c3
12.00 ± 1% -100.0% 0.00 ± -1% turbostat.CPU%c6
15.30 ± 0% -16.7% 12.75 ± 0% turbostat.CorWatt
41.40 ± 1% -15.5% 35.00 ± 3% turbostat.CoreTmp
0.01 ± 0% -100.0% 0.00 ± -1% turbostat.Pkg%pc2
0.22 ± 17% -100.0% 0.00 ± -1% turbostat.Pkg%pc6
42.40 ± 1% -13.2% 36.80 ± 2% turbostat.PkgTmp
32.13 ± 0% -8.0% 29.57 ± 0% turbostat.PkgWatt
2244 ± 0% -13.6% 1938 ± 0% slabinfo.Acpi-Namespace.active_objs
2244 ± 0% -13.6% 1938 ± 0% slabinfo.Acpi-Namespace.num_objs
1993 ± 4% -43.5% 1125 ± 1% slabinfo.anon_vma.active_objs
1993 ± 4% -43.5% 1125 ± 1% slabinfo.anon_vma.num_objs
262.80 ± 13% -72.2% 73.00 ± 0% slabinfo.avc_xperms_node.active_objs
262.80 ± 13% -72.2% 73.00 ± 0% slabinfo.avc_xperms_node.num_objs
1900 ± 5% -22.5% 1472 ± 0% slabinfo.kmalloc-128.active_objs
1900 ± 5% -22.5% 1472 ± 0% slabinfo.kmalloc-128.num_objs
4281 ± 2% -10.4% 3838 ± 0% slabinfo.kmalloc-16.active_objs
4281 ± 2% -10.4% 3838 ± 0% slabinfo.kmalloc-16.num_objs
2461 ± 2% -17.5% 2030 ± 2% slabinfo.kmalloc-192.active_objs
2683 ± 3% -23.3% 2058 ± 0% slabinfo.kmalloc-192.num_objs
1180 ± 2% -13.6% 1019 ± 0% slabinfo.kmalloc-256.active_objs
1675 ± 5% -19.2% 1354 ± 2% slabinfo.kmalloc-256.num_objs
5708 ± 4% -37.2% 3584 ± 0% slabinfo.kmalloc-32.active_objs
5708 ± 4% -37.2% 3584 ± 0% slabinfo.kmalloc-32.num_objs
628.80 ± 11% -17.5% 519.00 ± 5% slabinfo.kmalloc-512.active_objs
7195 ± 3% -14.0% 6191 ± 0% slabinfo.kmalloc-64.active_objs
7515 ± 2% -17.6% 6191 ± 0% slabinfo.kmalloc-64.num_objs
4505 ± 4% -20.5% 3584 ± 0% slabinfo.kmalloc-8.active_objs
4505 ± 4% -20.5% 3584 ± 0% slabinfo.kmalloc-8.num_objs
1310 ± 3% -23.1% 1008 ± 0% slabinfo.kmalloc-96.active_objs
1310 ± 3% -23.1% 1008 ± 0% slabinfo.kmalloc-96.num_objs
243.20 ± 19% -47.4% 128.00 ± 0% slabinfo.kmem_cache_node.active_objs
243.20 ± 19% -47.4% 128.00 ± 0% slabinfo.kmem_cache_node.num_objs
1756 ± 3% -22.6% 1360 ± 1% slabinfo.proc_inode_cache.active_objs
1809 ± 3% -24.2% 1372 ± 0% slabinfo.proc_inode_cache.num_objs
350.80 ± 5% -40.4% 209.00 ± 0% slabinfo.signal_cache.active_objs
350.80 ± 5% -40.4% 209.00 ± 0% slabinfo.signal_cache.num_objs
2651 ± 3% -16.5% 2213 ± 0% slabinfo.vm_area_struct.active_objs
2692 ± 3% -15.8% 2266 ± 0% slabinfo.vm_area_struct.num_objs
0.41 ± 14% +169.6% 1.10 ± 7% perf-profile.cycles-pp.___might_sleep.__might_sleep.find_lock_entry.shmem_getpage_gfp.shmem_write_begin
0.62 ± 15% +156.6% 1.60 ± 7% perf-profile.cycles-pp.___might_sleep.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.__vfs_write
0.54 ± 6% +179.4% 1.52 ± 16% perf-profile.cycles-pp.__fget_light.sys_pwrite64.entry_SYSCALL_64_fastpath
25.76 ± 6% +148.0% 63.89 ± 0% perf-profile.cycles-pp.__generic_file_write_iter.generic_file_write_iter.__vfs_write.vfs_write.sys_pwrite64
0.58 ± 14% +169.0% 1.56 ± 5% perf-profile.cycles-pp.__might_sleep.find_lock_entry.shmem_getpage_gfp.shmem_write_begin.generic_perform_write
0.33 ± 15% +219.3% 1.06 ± 12% perf-profile.cycles-pp.__might_sleep.mutex_lock.generic_file_write_iter.__vfs_write.vfs_write
0.49 ± 31% +195.5% 1.45 ± 7% perf-profile.cycles-pp.__might_sleep.percpu_down_read.__sb_start_write.vfs_write.sys_pwrite64
31.23 ± 6% -100.0% 0.01 ±100% perf-profile.cycles-pp.__mutex_lock_slowpath.mutex_lock.generic_file_write_iter.__vfs_write.vfs_write
4.79 ± 34% -100.0% 0.00 ± -1% perf-profile.cycles-pp.__mutex_unlock_slowpath.mutex_unlock.generic_file_write_iter.__vfs_write.vfs_write
0.63 ± 3% +137.1% 1.48 ± 16% perf-profile.cycles-pp.__radix_tree_lookup.radix_tree_lookup_slot.find_get_entry.find_lock_entry.shmem_getpage_gfp
0.34 ± 24% +293.5% 1.32 ± 13% perf-profile.cycles-pp.__sb_end_write.vfs_write.sys_pwrite64.entry_SYSCALL_64_fastpath
1.14 ± 24% +186.2% 3.27 ± 5% perf-profile.cycles-pp.__sb_start_write.vfs_write.sys_pwrite64.entry_SYSCALL_64_fastpath
4.01 ± 34% -100.0% 0.00 ± -1% perf-profile.cycles-pp._raw_spin_lock.__mutex_unlock_slowpath.mutex_unlock.generic_file_write_iter.__vfs_write
0.68 ± 18% +96.5% 1.34 ± 3% perf-profile.cycles-pp.balance_dirty_pages_ratelimited.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.__vfs_write
12.24 ± 32% -100.0% 0.00 ± -1% perf-profile.cycles-pp.call_cpuidle.cpu_startup_entry.start_secondary
9.87 ± 6% +152.7% 24.93 ± 0% perf-profile.cycles-pp.copy_user_enhanced_fast_string.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.__vfs_write
12.25 ± 32% -100.0% 0.00 ± -1% perf-profile.cycles-pp.cpu_startup_entry.start_secondary
12.24 ± 32% -100.0% 0.00 ± -1% perf-profile.cycles-pp.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
12.20 ± 32% -100.0% 0.00 ± -1% perf-profile.cycles-pp.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
1.02 ± 9% +166.6% 2.71 ± 7% perf-profile.cycles-pp.entry_SYSCALL_64_after_swapgs
80.93 ± 5% +19.5% 96.68 ± 0% perf-profile.cycles-pp.entry_SYSCALL_64_fastpath
0.79 ± 7% +120.9% 1.75 ± 8% perf-profile.cycles-pp.file_update_time.__generic_file_write_iter.generic_file_write_iter.__vfs_write.vfs_write
1.82 ± 7% +145.0% 4.46 ± 8% perf-profile.cycles-pp.find_get_entry.find_lock_entry.shmem_getpage_gfp.shmem_write_begin.generic_perform_write
3.38 ± 8% +159.7% 8.77 ± 3% perf-profile.cycles-pp.find_lock_entry.shmem_getpage_gfp.shmem_write_begin.generic_perform_write.__generic_file_write_iter
0.37 ± 43% +279.9% 1.40 ± 4% perf-profile.cycles-pp.fsnotify.vfs_write.sys_pwrite64.entry_SYSCALL_64_fastpath
23.75 ± 5% +142.2% 57.52 ± 0% perf-profile.cycles-pp.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.__vfs_write.vfs_write
17.68 ± 24% -100.0% 0.00 ± -1% perf-profile.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry
0.49 ± 11% +133.6% 1.14 ± 8% perf-profile.cycles-pp.iov_iter_advance.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.__vfs_write
0.78 ± 14% +112.4% 1.65 ± 4% perf-profile.cycles-pp.iov_iter_copy_from_user_atomic.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.__vfs_write
0.20 ± 24% +593.1% 1.41 ± 5% perf-profile.cycles-pp.iov_iter_fault_in_readable.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.__vfs_write
0.42 ± 16% +148.6% 1.04 ± 7% perf-profile.cycles-pp.mark_page_accessed.shmem_write_begin.generic_perform_write.__generic_file_write_iter.generic_file_write_iter
34.83 ± 5% -90.3% 3.38 ± 6% perf-profile.cycles-pp.mutex_lock.generic_file_write_iter.__vfs_write.vfs_write.sys_pwrite64
30.76 ± 6% -100.0% 0.00 ± -1% perf-profile.cycles-pp.mutex_optimistic_spin.__mutex_lock_slowpath.mutex_lock.generic_file_write_iter.__vfs_write
18.67 ± 10% -100.0% 0.00 ± -1% perf-profile.cycles-pp.mutex_spin_on_owner.isra.4.mutex_optimistic_spin.__mutex_lock_slowpath.mutex_lock.generic_file_write_iter
4.49 ± 17% -55.6% 1.99 ± 3% perf-profile.cycles-pp.mutex_unlock.__vfs_write.vfs_write.sys_pwrite64.entry_SYSCALL_64_fastpath
4.90 ± 33% -100.0% 0.00 ± -1% perf-profile.cycles-pp.mutex_unlock.generic_file_write_iter.__vfs_write.vfs_write.sys_pwrite64
1.63 ± 52% -100.0% 0.00 ± -1% perf-profile.cycles-pp.osq_lock.mutex_optimistic_spin.__mutex_lock_slowpath.mutex_lock.generic_file_write_iter
1.70 ± 50% -100.0% 0.00 ± -1% perf-profile.cycles-pp.osq_unlock.mutex_optimistic_spin.__mutex_lock_slowpath.mutex_lock.generic_file_write_iter
0.92 ± 30% +187.2% 2.65 ± 6% perf-profile.cycles-pp.percpu_down_read.__sb_start_write.vfs_write.sys_pwrite64.entry_SYSCALL_64_fastpath
0.22 ± 27% +373.2% 1.06 ± 11% perf-profile.cycles-pp.percpu_up_read.__sb_end_write.vfs_write.sys_pwrite64.entry_SYSCALL_64_fastpath
0.76 ± 14% +156.2% 1.94 ± 4% perf-profile.cycles-pp.put_page.shmem_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter
0.68 ± 3% +131.9% 1.57 ± 16% perf-profile.cycles-pp.radix_tree_lookup_slot.find_get_entry.find_lock_entry.shmem_getpage_gfp.shmem_write_begin
2.37 ± 30% +102.0% 4.79 ± 6% perf-profile.cycles-pp.rw_verify_area.vfs_write.sys_pwrite64.entry_SYSCALL_64_fastpath
0.46 ± 14% +161.8% 1.19 ± 15% perf-profile.cycles-pp.selinux_file_permission.rw_verify_area.vfs_write.sys_pwrite64.entry_SYSCALL_64_fastpath
0.34 ± 27% +151.5% 0.86 ± 13% perf-profile.cycles-pp.set_page_dirty.shmem_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter
4.28 ± 7% +164.4% 11.32 ± 2% perf-profile.cycles-pp.shmem_getpage_gfp.shmem_write_begin.generic_perform_write.__generic_file_write_iter.generic_file_write_iter
5.10 ± 8% +153.4% 12.93 ± 2% perf-profile.cycles-pp.shmem_write_begin.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.__vfs_write
4.58 ± 14% +102.2% 9.26 ± 3% perf-profile.cycles-pp.shmem_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.__vfs_write
12.25 ± 32% -100.0% 0.00 ± -1% perf-profile.cycles-pp.start_secondary
79.25 ± 5% +16.7% 92.47 ± 0% perf-profile.cycles-pp.sys_pwrite64.entry_SYSCALL_64_fastpath
2.27 ± 9% +134.2% 5.31 ± 4% perf-profile.cycles-pp.unlock_page.shmem_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter
0.13 ± 25% +541.8% 0.86 ± 13% perf-profile.cycles-pp.update_fast_ctr.percpu_up_read.__sb_end_write.vfs_write.sys_pwrite64
77.89 ± 5% +13.9% 88.70 ± 0% perf-profile.cycles-pp.vfs_write.sys_pwrite64.entry_SYSCALL_64_fastpath
=========================================================================================
blocksize/compiler/cpufreq_governor/kconfig/nr_threads/rootfs/tbox_group/testcase:
512K/gcc-4.9/performance/x86_64-rhel/100%/debian-x86_64-2015-02-07.cgz/lkp-nex04/pigz
commit:
8025943c32f5de29b3fd548e917e24424297ca73
0aefb957de5f2a1d9467126114146b6c793cf180
8025943c32f5de29 0aefb957de5f2a1d9467126114
---------------- --------------------------
%stddev %change %stddev
\ | \
4.804e+08 ± 0% -97.6% 11606706 ± 0% pigz.throughput
300.56 ± 0% +5.3% 316.41 ± 0% pigz.time.elapsed_time
300.56 ± 0% +5.3% 316.41 ± 0% pigz.time.elapsed_time.max
931472 ± 1% -64.7% 328708 ± 0% pigz.time.involuntary_context_switches
166245 ± 0% -21.9% 129826 ± 1% pigz.time.maximum_resident_set_size
1573717 ± 8% -90.4% 151177 ± 0% pigz.time.minor_page_faults
6318 ± 0% -98.5% 96.00 ± 0% pigz.time.percent_of_cpu_this_job_got
240.25 ± 3% -98.6% 3.33 ± 1% pigz.time.system_time
18752 ± 0% -98.4% 302.80 ± 0% pigz.time.user_time
1140974 ± 0% -92.9% 80442 ± 4% pigz.time.voluntary_context_switches
2338 ± 1% -99.4% 13.81 ± 1% uptime.idle
2564754 ± 6% -99.5% 11821 ± 0% softirqs.RCU
219857 ± 0% -100.0% 0.00 ± -1% softirqs.SCHED
9845159 ± 0% -98.2% 173613 ± 0% softirqs.TIMER
633810 ± 0% -13.4% 549028 ± 0% vmstat.memory.cache
10393 ± 1% -62.3% 3913 ± 0% vmstat.system.cs
97367 ± 0% -98.9% 1095 ± 0% vmstat.system.in
98.58 ± 0% +1.4% 99.94 ± 0% turbostat.%Busy
2335 ± 0% +2.3% 2388 ± 0% turbostat.Avg_MHz
0.92 ± 8% -93.2% 0.06 ± 58% turbostat.CPU%c1
0.50 ± 4% -100.0% 0.00 ± -1% turbostat.CPU%c3
69.50 ± 0% -21.9% 54.25 ± 0% turbostat.CoreTmp
0.29 ± 3% -100.0% 0.00 ± -1% turbostat.Pkg%pc3
931472 ± 1% -64.7% 328708 ± 0% time.involuntary_context_switches
166245 ± 0% -21.9% 129826 ± 1% time.maximum_resident_set_size
1573717 ± 8% -90.4% 151177 ± 0% time.minor_page_faults
6318 ± 0% -98.5% 96.00 ± 0% time.percent_of_cpu_this_job_got
240.25 ± 3% -98.6% 3.33 ± 1% time.system_time
18752 ± 0% -98.4% 302.80 ± 0% time.user_time
1140974 ± 0% -92.9% 80442 ± 4% time.voluntary_context_switches
6256256 ± 7% -99.9% 3673 ±173% cpuidle.C1-NHM.time
57557 ± 19% -100.0% 0.75 ±173% cpuidle.C1-NHM.usage
846984 ± 19% -100.0% 0.00 ± 0% cpuidle.C1E-NHM.time
1979 ± 7% -100.0% 0.00 ± 0% cpuidle.C1E-NHM.usage
2.707e+08 ± 6% -99.9% 195386 ± 58% cpuidle.C3-NHM.time
193977 ± 6% -100.0% 21.00 ± 58% cpuidle.C3-NHM.usage
695.00 ± 72% -100.0% 0.00 ± 0% cpuidle.POLL.time
14.25 ± 33% -100.0% 0.00 ± 0% cpuidle.POLL.usage
314083 ± 0% -19.9% 251563 ± 0% meminfo.Active
212669 ± 0% -28.6% 151797 ± 0% meminfo.Active(anon)
57910 ± 1% -24.7% 43629 ± 1% meminfo.AnonHugePages
190323 ± 0% -20.8% 150824 ± 0% meminfo.AnonPages
584454 ± 3% +20.0% 701458 ± 0% meminfo.Committed_AS
153636 ± 4% -73.8% 40256 ± 8% meminfo.DirectMap4k
13591 ± 0% -14.6% 11602 ± 0% meminfo.KernelStack
48915 ± 0% -24.1% 37120 ± 0% meminfo.SReclaimable
85754 ± 0% -58.0% 36001 ± 0% meminfo.SUnreclaim
33565 ± 0% -69.1% 10387 ± 0% meminfo.Shmem
134670 ± 0% -45.7% 73122 ± 0% meminfo.Slab
11238820 ± 16% -82.8% 1928687 ± 0% numa-numastat.node0.local_node
11241118 ± 16% -82.8% 1928687 ± 0% numa-numastat.node0.numa_hit
2297 ± 98% -100.0% 0.00 ± -1% numa-numastat.node0.other_node
9917497 ± 6% -100.0% 0.00 ± -1% numa-numastat.node1.local_node
9921295 ± 6% -100.0% 3.25 ± 13% numa-numastat.node1.numa_hit
3798 ± 57% -99.9% 3.25 ± 13% numa-numastat.node1.other_node
10831204 ± 38% -100.0% 0.00 ± -1% numa-numastat.node2.local_node
10834629 ± 38% -100.0% 4.25 ± 19% numa-numastat.node2.numa_hit
16539540 ± 26% -100.0% 0.00 ± -1% numa-numastat.node3.local_node
16544507 ± 26% -100.0% 4.50 ± 19% numa-numastat.node3.numa_hit
4967 ± 9% -99.9% 4.50 ± 19% numa-numastat.node3.other_node
53137 ± 0% -28.6% 37934 ± 0% proc-vmstat.nr_active_anon
47559 ± 0% -20.8% 37690 ± 0% proc-vmstat.nr_anon_pages
129.00 ± 14% -47.3% 68.00 ± 18% proc-vmstat.nr_dirtied
849.00 ± 0% -14.7% 724.25 ± 0% proc-vmstat.nr_kernel_stack
8388 ± 0% -69.0% 2596 ± 0% proc-vmstat.nr_shmem
12228 ± 0% -24.1% 9279 ± 0% proc-vmstat.nr_slab_reclaimable
21438 ± 0% -58.0% 8999 ± 0% proc-vmstat.nr_slab_unreclaimable
1398902 ± 9% -100.0% 468.25 ± 2% proc-vmstat.numa_hint_faults
430311 ± 8% -99.9% 468.25 ± 2% proc-vmstat.numa_hint_faults_local
48502006 ± 0% -96.0% 1924877 ± 0% proc-vmstat.numa_hit
1208 ± 11% -100.0% 0.50 ±100% proc-vmstat.numa_huge_pte_updates
48489586 ± 0% -96.0% 1924865 ± 0% proc-vmstat.numa_local
12420 ± 0% -99.9% 12.00 ± 0% proc-vmstat.numa_other
210322 ± 1% -100.0% 0.00 ± -1% proc-vmstat.numa_pages_migrated
2020867 ± 10% -99.9% 1825 ± 0% proc-vmstat.numa_pte_updates
10691 ± 0% -79.0% 2245 ± 2% proc-vmstat.pgactivate
665378 ± 16% -90.5% 63448 ± 1% proc-vmstat.pgalloc_dma32
48011294 ± 0% -95.9% 1964536 ± 0% proc-vmstat.pgalloc_normal
2355082 ± 5% -65.6% 810560 ± 0% proc-vmstat.pgfault
48628427 ± 0% -95.9% 1991560 ± 0% proc-vmstat.pgfree
2560 ± 31% -100.0% 0.00 ± -1% proc-vmstat.pgmigrate_fail
210322 ± 1% -100.0% 0.00 ± -1% proc-vmstat.pgmigrate_success
697.00 ± 23% +1660.2% 12268 ± 1% latency_stats.avg.call_rwsem_down_write_failed.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.entry_SYSCALL_64_fastpath
7952 ± 5% +261.7% 28765 ± 9% latency_stats.hits.call_rwsem_down_read_failed.__do_page_fault.do_page_fault.page_fault
748841 ± 0% -97.3% 20276 ± 1% latency_stats.hits.futex_wait_queue_me.futex_wait.do_futex.SyS_futex.entry_SYSCALL_64_fastpath
279787 ± 0% -98.5% 4111 ± 1% latency_stats.hits.pipe_wait.pipe_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
13662 ± 9% +668.6% 105007 ± 37% latency_stats.max.call_rwsem_down_read_failed.__do_page_fault.do_page_fault.page_fault
10879 ± 24% +313.5% 44986 ± 40% latency_stats.max.call_rwsem_down_read_failed.__do_page_fault.do_page_fault.page_fault.copy_page_to_iter.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 24354 ± 14% latency_stats.max.call_rwsem_down_write_failed.SyS_brk.entry_SYSCALL_64_fastpath
3398 ± 83% +979.7% 36695 ± 13% latency_stats.max.call_rwsem_down_write_failed.SyS_mprotect.entry_SYSCALL_64_fastpath
13812 ± 11% +626.7% 100378 ± 43% latency_stats.max.call_rwsem_down_write_failed.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.entry_SYSCALL_64_fastpath
5878524 ±100% +90.6% 11203552 ±148% latency_stats.max.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
8061 ± 59% -100.0% 0.00 ± -1% latency_stats.max.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
4989 ± 0% +1304.2% 70056 ± 93% latency_stats.max.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
121769 ± 33% -95.9% 4993 ± 0% latency_stats.max.pipe_wait.pipe_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
13368 ± 13% -100.0% 0.00 ± -1% latency_stats.max.pipe_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
4600119 ± 17% +4057.1% 1.912e+08 ± 10% latency_stats.sum.call_rwsem_down_read_failed.__do_page_fault.do_page_fault.page_fault
98531 ± 20% +784.2% 871253 ± 20% latency_stats.sum.call_rwsem_down_read_failed.__do_page_fault.do_page_fault.page_fault.copy_page_to_iter.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 338052 ± 8% latency_stats.sum.call_rwsem_down_write_failed.SyS_brk.entry_SYSCALL_64_fastpath
57183 ± 20% +15459.0% 8897168 ± 9% latency_stats.sum.call_rwsem_down_write_failed.SyS_mprotect.entry_SYSCALL_64_fastpath
483056 ± 28% +2323.4% 11706316 ± 11% latency_stats.sum.call_rwsem_down_write_failed.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.entry_SYSCALL_64_fastpath
10678 ± 12% -98.1% 203.25 ± 53% latency_stats.sum.call_rwsem_down_write_failed.vm_munmap.SyS_munmap.entry_SYSCALL_64_fastpath
1767988 ± 4% -89.6% 183379 ± 7% latency_stats.sum.do_wait.SyS_wait4.entry_SYSCALL_64_fastpath
25141 ± 60% +159.3% 65203 ±123% latency_stats.sum.flush_work.__cancel_work_timer.cancel_delayed_work_sync.disk_block_events.__blkdev_get.blkdev_get.blkdev_open.do_dentry_open.vfs_open.path_openat.do_filp_open.do_sys_open
3.062e+08 ± 0% -97.4% 7900413 ± 2% latency_stats.sum.futex_wait_queue_me.futex_wait.do_futex.SyS_futex.entry_SYSCALL_64_fastpath
38352 ± 31% -100.0% 0.00 ± -1% latency_stats.sum.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
18136228 ± 0% -84.8% 2749541 ± 5% latency_stats.sum.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
86992591 ± 2% -97.7% 1965980 ± 2% latency_stats.sum.pipe_wait.pipe_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
62261 ± 14% -100.0% 0.00 ± -1% latency_stats.sum.pipe_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
37570 ± 30% -100.0% 0.00 ± -1% latency_stats.sum.wait_on_page_bit.do_huge_pmd_numa_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
13868 ± 22% +173.7% 37962 ± 0% numa-vmstat.node0.nr_active_anon
12198 ± 7% +209.2% 37719 ± 0% numa-vmstat.node0.nr_anon_pages
372.00 ± 3% +30.2% 484.50 ± 0% numa-vmstat.node0.nr_kernel_stack
304.75 ± 27% +211.6% 949.75 ± 0% numa-vmstat.node0.nr_page_table_pages
3109 ± 16% +34.0% 4165 ± 3% numa-vmstat.node0.nr_slab_reclaimable
6037 ± 2% -25.0% 4527 ± 2% numa-vmstat.node0.nr_slab_unreclaimable
5591515 ± 15% -79.7% 1133022 ± 0% numa-vmstat.node0.numa_hit
5585628 ± 15% -79.7% 1133022 ± 0% numa-vmstat.node0.numa_local
5886 ± 41% -100.0% 0.00 ± -1% numa-vmstat.node0.numa_other
12258 ± 26% -100.0% 0.00 ± -1% numa-vmstat.node1.nr_active_anon
288.75 ± 5% -48.6% 148.50 ± 4% numa-vmstat.node1.nr_alloc_batch
10864 ± 8% -100.0% 0.00 ± -1% numa-vmstat.node1.nr_anon_pages
102.00 ± 37% -100.0% 0.00 ± -1% numa-vmstat.node1.nr_inactive_anon
159.50 ± 10% -49.8% 80.00 ± 0% numa-vmstat.node1.nr_kernel_stack
265.00 ± 49% -100.0% 0.00 ± -1% numa-vmstat.node1.nr_page_table_pages
1587 ±162% -100.0% 0.00 ± -1% numa-vmstat.node1.nr_shmem
3085 ± 11% -43.3% 1748 ± 11% numa-vmstat.node1.nr_slab_reclaimable
5059 ± 8% -72.6% 1386 ± 7% numa-vmstat.node1.nr_slab_unreclaimable
5170131 ± 15% -98.9% 57543 ± 0% numa-vmstat.node1.numa_hit
5107949 ± 15% -100.0% 0.00 ± -1% numa-vmstat.node1.numa_local
13335 ± 24% -100.0% 0.00 ± -1% numa-vmstat.node2.nr_active_anon
548.50 ± 2% +24.6% 683.25 ± 4% numa-vmstat.node2.nr_alloc_batch
11930 ± 8% -100.0% 0.00 ± -1% numa-vmstat.node2.nr_anon_pages
590.25 ±151% -100.0% 0.00 ± -1% numa-vmstat.node2.nr_inactive_anon
152.75 ± 9% -47.6% 80.00 ± 0% numa-vmstat.node2.nr_kernel_stack
198.50 ± 32% -100.0% 0.00 ± -1% numa-vmstat.node2.nr_page_table_pages
2092 ±115% -100.0% 0.00 ± -1% numa-vmstat.node2.nr_shmem
3141 ± 12% -45.8% 1702 ± 10% numa-vmstat.node2.nr_slab_reclaimable
4969 ± 10% -67.0% 1640 ± 6% numa-vmstat.node2.nr_slab_unreclaimable
5304816 ± 34% -98.9% 57590 ± 0% numa-vmstat.node2.numa_hit
5243176 ± 35% -100.0% 0.00 ± -1% numa-vmstat.node2.numa_local
13722 ± 17% -100.0% 0.00 ± -1% numa-vmstat.node3.nr_active_anon
563.25 ± 5% +15.9% 652.75 ± 3% numa-vmstat.node3.nr_alloc_batch
12588 ± 4% -100.0% 0.00 ± -1% numa-vmstat.node3.nr_anon_pages
576.00 ±151% -100.0% 0.00 ± -1% numa-vmstat.node3.nr_inactive_anon
163.00 ± 7% -50.9% 80.00 ± 0% numa-vmstat.node3.nr_kernel_stack
257.00 ± 32% -100.0% 0.00 ± -1% numa-vmstat.node3.nr_page_table_pages
1791 ±108% -100.0% 0.00 ± -1% numa-vmstat.node3.nr_shmem
2891 ± 9% -42.5% 1662 ± 5% numa-vmstat.node3.nr_slab_reclaimable
5371 ± 8% -73.1% 1444 ± 6% numa-vmstat.node3.nr_slab_unreclaimable
8512325 ± 26% -99.3% 57556 ± 0% numa-vmstat.node3.numa_hit
8448839 ± 26% -100.0% 0.00 ± -1% numa-vmstat.node3.numa_local
63485 ± 1% -9.3% 57556 ± 0% numa-vmstat.node3.numa_other
80249 ± 15% +119.6% 176240 ± 0% numa-meminfo.node0.Active
55464 ± 22% +172.9% 151387 ± 0% numa-meminfo.node0.Active(anon)
14472 ± 17% +200.4% 43478 ± 1% numa-meminfo.node0.AnonHugePages
48785 ± 7% +208.3% 150414 ± 0% numa-meminfo.node0.AnonPages
5957 ± 3% +30.3% 7759 ± 0% numa-meminfo.node0.KernelStack
391714 ± 6% +16.1% 454800 ± 0% numa-meminfo.node0.MemUsed
1221 ± 27% +211.1% 3799 ± 0% numa-meminfo.node0.PageTables
12437 ± 16% +34.0% 16664 ± 3% numa-meminfo.node0.SReclaimable
24152 ± 2% -25.0% 18107 ± 2% numa-meminfo.node0.SUnreclaim
74995 ± 16% -66.5% 25090 ± 4% numa-meminfo.node1.Active
49007 ± 26% -100.0% 0.00 ± -1% numa-meminfo.node1.Active(anon)
12289 ± 8% -100.0% 0.00 ± -1% numa-meminfo.node1.AnonHugePages
43438 ± 8% -100.0% 0.00 ± -1% numa-meminfo.node1.AnonPages
2559 ± 10% -50.0% 1280 ± 0% numa-meminfo.node1.KernelStack
353899 ± 4% -33.0% 236965 ± 0% numa-meminfo.node1.MemUsed
1060 ± 49% -100.0% 0.00 ± -1% numa-meminfo.node1.PageTables
12341 ± 11% -43.3% 6994 ± 11% numa-meminfo.node1.SReclaimable
20236 ± 8% -72.6% 5546 ± 7% numa-meminfo.node1.SUnreclaim
6351 ±162% -100.0% 0.00 ± -1% numa-meminfo.node1.Shmem
32578 ± 8% -61.5% 12540 ± 8% numa-meminfo.node1.Slab
78467 ± 18% -67.9% 25158 ± 3% numa-meminfo.node2.Active
53324 ± 24% -100.0% 0.00 ± -1% numa-meminfo.node2.Active(anon)
15471 ± 13% -100.0% 0.00 ± -1% numa-meminfo.node2.AnonHugePages
47705 ± 8% -100.0% 0.00 ± -1% numa-meminfo.node2.AnonPages
2362 ±151% -100.0% 0.00 ± -1% numa-meminfo.node2.Inactive(anon)
2449 ± 9% -47.7% 1280 ± 0% numa-meminfo.node2.KernelStack
365232 ± 4% -35.0% 237453 ± 0% numa-meminfo.node2.MemUsed
795.50 ± 32% -100.0% 0.00 ± -1% numa-meminfo.node2.PageTables
12566 ± 12% -45.8% 6810 ± 10% numa-meminfo.node2.SReclaimable
19878 ± 10% -67.0% 6561 ± 6% numa-meminfo.node2.SUnreclaim
8371 ±115% -100.0% 0.00 ± -1% numa-meminfo.node2.Shmem
32445 ± 3% -58.8% 13371 ± 6% numa-meminfo.node2.Slab
80327 ± 12% -69.3% 24653 ± 3% numa-meminfo.node3.Active
54830 ± 17% -100.0% 0.00 ± -1% numa-meminfo.node3.Active(anon)
15647 ± 10% -100.0% 0.00 ± -1% numa-meminfo.node3.AnonHugePages
50291 ± 4% -100.0% 0.00 ± -1% numa-meminfo.node3.AnonPages
2281 ±153% -100.0% 0.00 ± -1% numa-meminfo.node3.Inactive(anon)
2617 ± 7% -51.1% 1280 ± 0% numa-meminfo.node3.KernelStack
364629 ± 3% -35.1% 236563 ± 0% numa-meminfo.node3.MemUsed
1029 ± 32% -100.0% 0.00 ± -1% numa-meminfo.node3.PageTables
11568 ± 9% -42.5% 6650 ± 5% numa-meminfo.node3.SReclaimable
21484 ± 8% -73.1% 5780 ± 6% numa-meminfo.node3.SUnreclaim
7152 ±108% -100.0% 0.00 ± -1% numa-meminfo.node3.Shmem
33053 ± 5% -62.4% 12430 ± 4% numa-meminfo.node3.Slab
7535 ± 1% -18.8% 6120 ± 0% slabinfo.Acpi-Namespace.active_objs
7535 ± 1% -18.8% 6120 ± 0% slabinfo.Acpi-Namespace.num_objs
45262 ± 0% -15.3% 38327 ± 0% slabinfo.Acpi-Operand.active_objs
808.25 ± 0% -15.2% 685.25 ± 0% slabinfo.Acpi-Operand.active_slabs
45262 ± 0% -15.2% 38374 ± 0% slabinfo.Acpi-Operand.num_objs
808.25 ± 0% -15.2% 685.25 ± 0% slabinfo.Acpi-Operand.num_slabs
2466 ± 0% -96.9% 77.00 ± 0% slabinfo.Acpi-ParseExt.active_objs
2466 ± 0% -96.9% 77.00 ± 0% slabinfo.Acpi-ParseExt.num_objs
50812 ± 0% -13.4% 44013 ± 0% slabinfo.Acpi-State.active_objs
996.00 ± 0% -13.4% 863.00 ± 0% slabinfo.Acpi-State.active_slabs
50812 ± 0% -13.3% 44038 ± 0% slabinfo.Acpi-State.num_objs
996.00 ± 0% -13.4% 863.00 ± 0% slabinfo.Acpi-State.num_slabs
2319 ± 2% -89.1% 252.00 ± 0% slabinfo.RAW.active_objs
2319 ± 2% -89.1% 252.00 ± 0% slabinfo.RAW.num_objs
114.75 ± 6% -85.2% 17.00 ± 0% slabinfo.TCP.active_objs
114.75 ± 6% -85.2% 17.00 ± 0% slabinfo.TCP.num_objs
272.00 ± 8% -87.5% 34.00 ± 0% slabinfo.UDP.active_objs
272.00 ± 8% -87.5% 34.00 ± 0% slabinfo.UDP.num_objs
10201 ± 1% -84.0% 1628 ± 0% slabinfo.anon_vma.active_objs
199.50 ± 1% -84.5% 31.00 ± 0% slabinfo.anon_vma.active_slabs
10201 ± 1% -84.0% 1628 ± 0% slabinfo.anon_vma.num_objs
199.50 ± 1% -84.5% 31.00 ± 0% slabinfo.anon_vma.num_slabs
783.75 ± 7% -90.7% 73.00 ± 0% slabinfo.avc_xperms_node.active_objs
783.75 ± 7% -90.7% 73.00 ± 0% slabinfo.avc_xperms_node.num_objs
193.50 ± 19% -79.8% 39.00 ± 0% slabinfo.bdev_cache.active_objs
193.50 ± 19% -79.8% 39.00 ± 0% slabinfo.bdev_cache.num_objs
841.00 ± 12% -94.8% 44.00 ± 0% slabinfo.blkdev_requests.active_objs
841.00 ± 12% -94.8% 44.00 ± 0% slabinfo.blkdev_requests.num_objs
57176 ± 0% -23.3% 43852 ± 0% slabinfo.dentry.active_objs
1361 ± 0% -23.3% 1044 ± 0% slabinfo.dentry.active_slabs
57191 ± 0% -23.2% 43902 ± 0% slabinfo.dentry.num_objs
1361 ± 0% -23.3% 1044 ± 0% slabinfo.dentry.num_slabs
476.50 ± 6% -91.8% 39.00 ± 0% slabinfo.file_lock_cache.active_objs
476.50 ± 6% -91.8% 39.00 ± 0% slabinfo.file_lock_cache.num_objs
3067 ± 0% -94.0% 183.00 ± 0% slabinfo.files_cache.active_objs
3067 ± 0% -94.0% 183.00 ± 0% slabinfo.files_cache.num_objs
4013 ± 0% -11.1% 3568 ± 0% slabinfo.ftrace_event_field.active_objs
4013 ± 0% -11.1% 3568 ± 0% slabinfo.ftrace_event_field.num_objs
2136 ± 0% -45.2% 1170 ± 0% slabinfo.idr_layer_cache.active_objs
2136 ± 0% -45.2% 1170 ± 0% slabinfo.idr_layer_cache.num_objs
42026 ± 0% -10.9% 37440 ± 0% slabinfo.inode_cache.active_objs
42026 ± 0% -10.9% 37440 ± 0% slabinfo.inode_cache.num_objs
56283 ± 0% -30.2% 39290 ± 0% slabinfo.kernfs_node_cache.active_objs
1654 ± 0% -30.2% 1155 ± 0% slabinfo.kernfs_node_cache.active_slabs
56283 ± 0% -30.2% 39290 ± 0% slabinfo.kernfs_node_cache.num_objs
1654 ± 0% -30.2% 1155 ± 0% slabinfo.kernfs_node_cache.num_slabs
3575 ± 0% -76.1% 856.00 ± 3% slabinfo.kmalloc-1024.active_objs
3719 ± 0% -74.0% 968.00 ± 5% slabinfo.kmalloc-1024.num_objs
5023 ± 1% -58.0% 2109 ± 0% slabinfo.kmalloc-128.active_objs
5023 ± 1% -57.4% 2141 ± 0% slabinfo.kmalloc-128.num_objs
19925 ± 1% -68.7% 6235 ± 0% slabinfo.kmalloc-16.active_objs
19925 ± 1% -66.6% 6651 ± 0% slabinfo.kmalloc-16.num_objs
10246 ± 0% -64.3% 3653 ± 0% slabinfo.kmalloc-192.active_objs
243.50 ± 1% -63.2% 89.50 ± 0% slabinfo.kmalloc-192.active_slabs
10250 ± 0% -63.3% 3759 ± 0% slabinfo.kmalloc-192.num_objs
243.50 ± 1% -63.2% 89.50 ± 0% slabinfo.kmalloc-192.num_slabs
3240 ± 2% -78.0% 712.00 ± 2% slabinfo.kmalloc-2048.active_objs
208.75 ± 1% -77.6% 46.75 ± 0% slabinfo.kmalloc-2048.active_slabs
3347 ± 1% -77.5% 754.50 ± 0% slabinfo.kmalloc-2048.num_objs
208.75 ± 1% -77.6% 46.75 ± 0% slabinfo.kmalloc-2048.num_slabs
10778 ± 10% -87.2% 1379 ± 0% slabinfo.kmalloc-256.active_objs
347.50 ± 10% -77.1% 79.50 ± 5% slabinfo.kmalloc-256.active_slabs
11141 ± 10% -77.0% 2562 ± 5% slabinfo.kmalloc-256.num_objs
347.50 ± 10% -77.1% 79.50 ± 5% slabinfo.kmalloc-256.num_slabs
20631 ± 1% -60.3% 8192 ± 0% slabinfo.kmalloc-32.active_objs
20631 ± 1% -60.3% 8192 ± 0% slabinfo.kmalloc-32.num_objs
1007 ± 0% -56.1% 442.50 ± 0% slabinfo.kmalloc-4096.active_objs
1060 ± 0% -55.1% 476.50 ± 0% slabinfo.kmalloc-4096.num_objs
6919 ± 5% -63.7% 2510 ± 0% slabinfo.kmalloc-512.active_objs
219.50 ± 3% -59.9% 88.00 ± 2% slabinfo.kmalloc-512.active_slabs
7044 ± 3% -59.8% 2833 ± 1% slabinfo.kmalloc-512.num_objs
219.50 ± 3% -59.9% 88.00 ± 2% slabinfo.kmalloc-512.num_slabs
36557 ± 1% -56.2% 16001 ± 0% slabinfo.kmalloc-64.active_objs
571.00 ± 1% -54.3% 261.00 ± 0% slabinfo.kmalloc-64.active_slabs
36557 ± 1% -54.2% 16725 ± 0% slabinfo.kmalloc-64.num_objs
571.00 ± 1% -54.3% 261.00 ± 0% slabinfo.kmalloc-64.num_slabs
38060 ± 1% -86.5% 5132 ± 1% slabinfo.kmalloc-8.active_objs
39168 ± 1% -84.3% 6140 ± 0% slabinfo.kmalloc-8.num_objs
489.75 ± 0% -88.0% 59.00 ± 0% slabinfo.kmalloc-8192.active_objs
122.75 ± 0% -87.8% 15.00 ± 0% slabinfo.kmalloc-8192.active_slabs
491.00 ± 0% -87.8% 60.00 ± 0% slabinfo.kmalloc-8192.num_objs
122.75 ± 0% -87.8% 15.00 ± 0% slabinfo.kmalloc-8192.num_slabs
4181 ± 0% -62.6% 1562 ± 1% slabinfo.kmalloc-96.active_objs
4614 ± 1% -63.6% 1678 ± 0% slabinfo.kmalloc-96.num_objs
547.50 ± 10% -72.1% 153.00 ± 0% slabinfo.kmem_cache.active_objs
547.50 ± 10% -72.1% 153.00 ± 0% slabinfo.kmem_cache.num_objs
979.25 ± 7% -50.6% 484.00 ± 0% slabinfo.kmem_cache_node.active_objs
1007 ± 6% -49.2% 512.00 ± 0% slabinfo.kmem_cache_node.num_objs
1211 ± 1% -91.7% 101.00 ± 0% slabinfo.mm_struct.active_objs
1211 ± 1% -91.7% 101.00 ± 0% slabinfo.mm_struct.num_objs
1298 ± 6% -93.5% 84.00 ± 0% slabinfo.mnt_cache.active_objs
1298 ± 6% -93.5% 84.00 ± 0% slabinfo.mnt_cache.num_objs
152.00 ± 9% -78.9% 32.00 ± 0% slabinfo.nfs_inode_cache.active_objs
152.00 ± 9% -78.9% 32.00 ± 0% slabinfo.nfs_inode_cache.num_objs
9608 ± 0% -47.2% 5069 ± 0% slabinfo.proc_inode_cache.active_objs
9608 ± 0% -47.1% 5082 ± 0% slabinfo.proc_inode_cache.num_objs
10861 ± 1% -29.9% 7614 ± 0% slabinfo.radix_tree_node.active_objs
10861 ± 1% -29.9% 7614 ± 0% slabinfo.radix_tree_node.num_objs
3783 ± 1% -66.4% 1271 ± 0% slabinfo.shmem_inode_cache.active_objs
3783 ± 1% -66.4% 1271 ± 0% slabinfo.shmem_inode_cache.num_objs
1589 ± 1% -57.8% 670.00 ± 0% slabinfo.sighand_cache.active_objs
1608 ± 1% -57.9% 677.75 ± 0% slabinfo.sighand_cache.num_objs
2725 ± 0% -72.5% 749.75 ± 0% slabinfo.signal_cache.active_objs
2725 ± 0% -72.5% 749.75 ± 0% slabinfo.signal_cache.num_objs
3263 ± 0% -98.4% 51.00 ± 0% slabinfo.sigqueue.active_objs
3263 ± 0% -98.4% 51.00 ± 0% slabinfo.sigqueue.num_objs
3172 ± 4% -90.4% 306.00 ± 0% slabinfo.sock_inode_cache.active_objs
3172 ± 4% -90.4% 306.00 ± 0% slabinfo.sock_inode_cache.num_objs
1139 ± 1% -36.2% 727.25 ± 0% slabinfo.task_struct.active_objs
390.25 ± 1% -37.7% 243.25 ± 0% slabinfo.task_struct.active_slabs
1171 ± 1% -37.6% 730.75 ± 0% slabinfo.task_struct.num_objs
390.25 ± 1% -37.7% 243.25 ± 0% slabinfo.task_struct.num_slabs
2297 ± 2% -22.0% 1792 ± 0% slabinfo.trace_event_file.active_objs
2297 ± 2% -22.0% 1792 ± 0% slabinfo.trace_event_file.num_objs
20147 ± 3% -87.0% 2627 ± 0% slabinfo.vm_area_struct.active_objs
457.50 ± 3% -84.8% 69.50 ± 2% slabinfo.vm_area_struct.active_slabs
20147 ± 3% -84.7% 3080 ± 2% slabinfo.vm_area_struct.num_objs
457.50 ± 3% -84.8% 69.50 ± 2% slabinfo.vm_area_struct.num_slabs
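For readers less familiar with lkp comparison tables: each row reads as old-kernel mean ± relative standard deviation, percent change, new-kernel mean ± relative standard deviation, aggregated over several boots. A minimal sketch of that computation (the per-boot samples below are hypothetical, chosen to roughly reproduce the meminfo.Shmem row above; the actual lkp tooling may aggregate differently):

```python
from statistics import mean, pstdev

def summarize(samples):
    """Return (mean, stddev as a percentage of the mean) for one run's samples."""
    m = mean(samples)
    sd_pct = 100.0 * pstdev(samples) / m if m else 0.0
    return m, sd_pct

def pct_change(old_mean, new_mean):
    """Relative change of the new-kernel mean vs. the old-kernel mean, in percent."""
    return 100.0 * (new_mean - old_mean) / old_mean

# Hypothetical per-boot samples for one metric (e.g. meminfo.Shmem, in kB)
old = [33500, 33600, 33600, 33560]
new = [10380, 10390, 10390, 10388]

om, osd = summarize(old)
nm, nsd = summarize(new)
print(f"{om:9.0f} ± {osd:2.0f}%  {pct_change(om, nm):+7.1f}%  {nm:9.0f} ± {nsd:2.0f}%")
```

With these samples the printed row comes out close to the `33565 ± 0% -69.1% 10387 ± 0%` line above; the `± -1%` entries in the real report appear where the new-kernel value is zero and no meaningful deviation can be computed.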
1.40 ± 14% +411.1% 7.14 ± 6% perf-profile.cycles-pp.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate
2.46 ± 7% -61.5% 0.94 ± 32% perf-profile.cycles-pp.__alloc_pages_nodemask.alloc_pages_current.pipe_write.__vfs_write.vfs_write
0.04 ± 74% +2507.1% 0.91 ± 50% perf-profile.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.handle_mm_fault.__do_page_fault.do_page_fault
0.06 ± 37% +2400.0% 1.38 ± 16% perf-profile.cycles-pp.__const_udelay.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write
1.91 ± 29% +160.9% 4.98 ± 29% perf-profile.cycles-pp.__do_page_fault.do_page_fault.page_fault
0.01 ± 87% +7560.0% 0.96 ± 45% perf-profile.cycles-pp.__enqueue_entity.pick_next_task_fair.__schedule.schedule.exit_to_usermode_loop
26.51 ± 9% -63.8% 9.59 ± 28% perf-profile.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
0.73 ± 20% +349.3% 3.28 ± 19% perf-profile.cycles-pp.__kernel_text_address.print_context_stack.dump_trace.save_stack_trace_tsk.__account_scheduler_latency
0.24 ± 18% +1133.0% 2.99 ± 16% perf-profile.cycles-pp.__libc_fork
0.15 ± 18% +258.3% 0.54 ± 65% perf-profile.cycles-pp.__module_text_address.is_module_text_address.__kernel_text_address.print_context_stack.dump_trace
0.88 ± 25% -100.0% 0.00 ± -1% perf-profile.cycles-pp.__mutex_lock_slowpath.mutex_lock.pipe_read.__vfs_read.vfs_read
0.06 ± 18% +4234.8% 2.49 ± 26% perf-profile.cycles-pp.__schedule.schedule.exit_to_usermode_loop.prepare_exit_to_usermode.retint_user
0.23 ± 11% +684.8% 1.80 ± 48% perf-profile.cycles-pp.__schedule.schedule.exit_to_usermode_loop.syscall_return_slowpath.int_ret_from_sys_call
0.13 ± 31% +1107.8% 1.54 ± 26% perf-profile.cycles-pp.__schedule.schedule.pipe_wait.pipe_read.__vfs_read
0.09 ± 37% +577.1% 0.59 ± 53% perf-profile.cycles-pp.__switch_to
20.40 ± 4% -56.0% 8.98 ± 11% perf-profile.cycles-pp.__vfs_read.vfs_read.sys_read.entry_SYSCALL_64_fastpath
0.39 ± 21% +1131.2% 4.83 ± 24% perf-profile.cycles-pp.__vfs_read.vfs_read.sys_read.entry_SYSCALL_64_fastpath.read
13.24 ± 9% -44.0% 7.41 ± 19% perf-profile.cycles-pp.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath
0.72 ± 22% +1489.9% 11.45 ± 6% perf-profile.cycles-pp.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath.write
0.05 ± 61% +1495.0% 0.80 ± 71% perf-profile.cycles-pp.__wake_up_bit.unlock_page.filemap_map_pages.handle_mm_fault.__do_page_fault
0.55 ± 31% +1290.5% 7.65 ± 5% perf-profile.cycles-pp.__wake_up_common.__wake_up_sync_key.pipe_write.__vfs_write.vfs_write
0.60 ± 30% +1257.1% 8.14 ± 6% perf-profile.cycles-pp.__wake_up_sync_key.pipe_write.__vfs_write.vfs_write.sys_write
0.04 ± 60% +2352.9% 1.04 ± 48% perf-profile.cycles-pp._dl_addr
0.19 ± 16% +1094.8% 2.30 ± 15% perf-profile.cycles-pp._do_fork.sys_clone.entry_SYSCALL_64_fastpath.__libc_fork
1.01 ± 16% -73.3% 0.27 ± 89% perf-profile.cycles-pp.account_process_tick.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
0.68 ± 17% +947.3% 7.15 ± 6% perf-profile.cycles-pp.activate_task.ttwu_do_activate.try_to_wake_up.default_wake_function.autoremove_wake_function
0.05 ± 54% +2115.0% 1.11 ± 16% perf-profile.cycles-pp.activate_task.ttwu_do_activate.try_to_wake_up.wake_up_process.process_timeout
0.89 ± 22% -75.2% 0.22 ± 81% perf-profile.cycles-pp.activate_task.ttwu_do_activate.try_to_wake_up.wake_up_process.wake_up_q
2.88 ± 7% -59.9% 1.16 ± 17% perf-profile.cycles-pp.alloc_pages_current.pipe_write.__vfs_write.vfs_write.sys_write
0.04 ± 86% +2575.0% 1.07 ± 47% perf-profile.cycles-pp.alloc_pages_vma.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
1.71 ± 17% -60.8% 0.67 ± 37% perf-profile.cycles-pp.anon_pipe_buf_release.pipe_read.__vfs_read.vfs_read.sys_read
37.78 ± 9% -59.7% 15.23 ± 22% perf-profile.cycles-pp.apic_timer_interrupt
0.53 ± 33% +1314.1% 7.53 ± 4% perf-profile.cycles-pp.autoremove_wake_function.__wake_up_common.__wake_up_sync_key.pipe_write.__vfs_write
0.14 ± 26% +2436.2% 3.68 ± 4% perf-profile.cycles-pp.call_console_drivers.constprop.24.console_unlock.vprintk_emit.vprintk_default.printk
17.02 ± 36% -100.0% 0.00 ± -1% perf-profile.cycles-pp.call_cpuidle.cpu_startup_entry.start_secondary
0.84 ± 13% -100.0% 0.00 ± -1% perf-profile.cycles-pp.call_function_interrupt
0.10 ± 23% +1743.6% 1.80 ± 20% perf-profile.cycles-pp.call_timer_fn.run_timer_softirq.__do_softirq.irq_exit.smp_apic_timer_interrupt
0.14 ± 26% +2436.2% 3.68 ± 4% perf-profile.cycles-pp.console_unlock.vprintk_emit.vprintk_default.printk.perf_duration_warn
16.27 ± 3% -53.6% 7.54 ± 14% perf-profile.cycles-pp.copy_page_to_iter.pipe_read.__vfs_read.vfs_read.sys_read
0.17 ± 16% +1201.4% 2.25 ± 18% perf-profile.cycles-pp.copy_process._do_fork.sys_clone.entry_SYSCALL_64_fastpath.__libc_fork
15.40 ± 4% -54.5% 7.00 ± 13% perf-profile.cycles-pp.copy_user_generic_string.copy_page_to_iter.pipe_read.__vfs_read.vfs_read
17.30 ± 36% -100.0% 0.00 ± -1% perf-profile.cycles-pp.cpu_startup_entry.start_secondary
17.02 ± 36% -100.0% 0.00 ± -1% perf-profile.cycles-pp.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
16.93 ± 36% -100.0% 0.00 ± -1% perf-profile.cycles-pp.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
0.06 ± 39% +764.0% 0.54 ± 54% perf-profile.cycles-pp.deactivate_task.__schedule.schedule.pipe_wait.pipe_read
0.95 ± 18% -100.0% 0.00 ± -1% perf-profile.cycles-pp.default_send_IPI_mask_sequence_phys.physflat_send_IPI_mask.native_send_call_func_ipi.smp_call_function_many.native_flush_tlb_others
0.53 ± 33% +1314.1% 7.53 ± 4% perf-profile.cycles-pp.default_wake_function.autoremove_wake_function.__wake_up_common.__wake_up_sync_key.pipe_write
0.06 ± 37% +2400.0% 1.38 ± 16% perf-profile.cycles-pp.delay_tsc.__const_udelay.wait_for_xmitr.serial8250_console_putchar.uart_console_write
0.44 ± 7% +759.0% 3.82 ± 19% perf-profile.cycles-pp.do_execveat_common.isra.33.sys_execve.return_from_execve.execve
0.14 ± 19% +1303.5% 2.00 ± 14% perf-profile.cycles-pp.do_exit.do_group_exit.sys_exit_group.entry_SYSCALL_64_fastpath
1.95 ± 14% -72.1% 0.55 ± 59% perf-profile.cycles-pp.do_futex.sys_futex.entry_SYSCALL_64_fastpath
0.13 ± 17% +1409.4% 2.00 ± 14% perf-profile.cycles-pp.do_group_exit.sys_exit_group.entry_SYSCALL_64_fastpath
0.10 ± 29% +1453.7% 1.59 ± 34% perf-profile.cycles-pp.do_mmap.vm_mmap_pgoff.sys_mmap_pgoff.sys_mmap.entry_SYSCALL_64_fastpath
1.92 ± 29% +161.6% 5.03 ± 29% perf-profile.cycles-pp.do_page_fault.page_fault
0.11 ± 33% +1179.1% 1.38 ± 25% perf-profile.cycles-pp.do_wp_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
1.28 ± 15% +438.8% 6.91 ± 5% perf-profile.cycles-pp.dump_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair
1.50 ± 17% +438.1% 8.08 ± 7% perf-profile.cycles-pp.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up
0.67 ± 14% +962.8% 7.15 ± 6% perf-profile.cycles-pp.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up.default_wake_function
0.92 ± 20% +43.8% 1.32 ± 20% perf-profile.cycles-pp.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up.wake_up_process
35.92 ± 6% -46.5% 19.21 ± 4% perf-profile.cycles-pp.entry_SYSCALL_64_fastpath
0.19 ± 17% +1125.3% 2.30 ± 15% perf-profile.cycles-pp.entry_SYSCALL_64_fastpath.__libc_fork
0.47 ± 32% +997.9% 5.13 ± 28% perf-profile.cycles-pp.entry_SYSCALL_64_fastpath.read
0.91 ± 16% +1368.1% 13.36 ± 8% perf-profile.cycles-pp.entry_SYSCALL_64_fastpath.write
0.45 ± 8% +777.7% 3.93 ± 18% perf-profile.cycles-pp.execve
0.11 ± 26% +1663.6% 1.94 ± 13% perf-profile.cycles-pp.exit_mmap.mmput.do_exit.do_group_exit.sys_exit_group
0.09 ± 21% +917.1% 0.89 ± 44% perf-profile.cycles-pp.exit_mmap.mmput.flush_old_exec.load_elf_binary.search_binary_handler
0.29 ± 16% +797.4% 2.60 ± 27% perf-profile.cycles-pp.exit_to_usermode_loop.prepare_exit_to_usermode.retint_user
0.04 ± 65% +4257.1% 1.52 ± 53% perf-profile.cycles-pp.exit_to_usermode_loop.syscall_return_slowpath.int_ret_from_sys_call.write
0.19 ± 23% +1154.7% 2.35 ± 33% perf-profile.cycles-pp.filemap_map_pages.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.02 ± 33% +3533.3% 0.54 ± 57% perf-profile.cycles-pp.filename_lookup.user_path_at_empty.vfs_fstatat.SYSC_newstat.sys_newstat
0.10 ± 13% +864.1% 0.94 ± 44% perf-profile.cycles-pp.flush_old_exec.load_elf_binary.search_binary_handler.do_execveat_common.sys_execve
1.25 ± 20% -100.0% 0.00 ± -1% perf-profile.cycles-pp.flush_tlb_page.ptep_clear_flush.try_to_unmap_one.rmap_walk.try_to_unmap
1.09 ± 24% -71.6% 0.31 ± 41% perf-profile.cycles-pp.free_hot_cold_page.put_page.anon_pipe_buf_release.pipe_read.__vfs_read
0.84 ± 21% -82.9% 0.14 ± 70% perf-profile.cycles-pp.futex_requeue.do_futex.sys_futex.entry_SYSCALL_64_fastpath
1.77 ± 10% -55.8% 0.78 ± 51% perf-profile.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_current.pipe_write.__vfs_write
1.87 ± 27% +132.4% 4.34 ± 30% perf-profile.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
30.01 ± 8% -66.0% 10.20 ± 29% perf-profile.cycles-pp.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
0.05 ± 71% +3650.0% 1.69 ± 43% perf-profile.cycles-pp.int_ret_from_sys_call.write
17.41 ± 35% -100.0% 0.00 ± -1% perf-profile.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry
0.10 ± 13% +1923.1% 1.97 ± 5% perf-profile.cycles-pp.io_serial_in.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write
4.19 ± 22% -38.8% 2.56 ± 9% perf-profile.cycles-pp.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
0.14 ± 26% +2436.2% 3.68 ± 4% perf-profile.cycles-pp.irq_work_interrupt
0.14 ± 26% +2436.2% 3.68 ± 4% perf-profile.cycles-pp.irq_work_run.smp_irq_work_interrupt.irq_work_interrupt
0.14 ± 26% +2436.2% 3.68 ± 4% perf-profile.cycles-pp.irq_work_run_list.irq_work_run.smp_irq_work_interrupt.irq_work_interrupt
0.16 ± 26% +592.2% 1.11 ± 26% perf-profile.cycles-pp.is_ftrace_trampoline.__kernel_text_address.print_context_stack.dump_trace.save_stack_trace_tsk
0.26 ± 27% +239.2% 0.86 ± 60% perf-profile.cycles-pp.is_module_text_address.__kernel_text_address.print_context_stack.dump_trace.save_stack_trace_tsk
0.10 ± 21% +817.1% 0.94 ± 53% perf-profile.cycles-pp.is_module_text_address.print_context_stack.dump_trace.save_stack_trace_tsk.__account_scheduler_latency
0.02 ± 79% +6766.7% 1.54 ± 34% perf-profile.cycles-pp.khugepaged.kthread.ret_from_fork
0.68 ± 17% +1182.3% 8.69 ± 57% perf-profile.cycles-pp.kthread.ret_from_fork
2.10 ± 7% -89.5% 0.22 ± 6% perf-profile.cycles-pp.ktime_get_update_offsets_now.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
0.31 ± 9% +766.1% 2.68 ± 26% perf-profile.cycles-pp.load_elf_binary.search_binary_handler.do_execveat_common.sys_execve.return_from_execve
30.49 ± 8% -65.5% 10.51 ± 27% perf-profile.cycles-pp.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
1.35 ± 20% -100.0% 0.00 ± -1% perf-profile.cycles-pp.migrate_misplaced_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
1.34 ± 20% -100.0% 0.00 ± -1% perf-profile.cycles-pp.migrate_pages.migrate_misplaced_page.handle_mm_fault.__do_page_fault.do_page_fault
0.10 ± 32% +1340.0% 1.44 ± 44% perf-profile.cycles-pp.mmap_region.do_mmap.vm_mmap_pgoff.sys_mmap_pgoff.sys_mmap
0.11 ± 26% +1663.6% 1.94 ± 13% perf-profile.cycles-pp.mmput.do_exit.do_group_exit.sys_exit_group.entry_SYSCALL_64_fastpath
0.09 ± 21% +917.1% 0.89 ± 44% perf-profile.cycles-pp.mmput.flush_old_exec.load_elf_binary.search_binary_handler.do_execveat_common
0.08 ± 27% +1180.6% 0.99 ± 48% perf-profile.cycles-pp.mprotect_fixup.sys_mprotect.entry_SYSCALL_64_fastpath
0.90 ± 25% -100.0% 0.00 ± -1% perf-profile.cycles-pp.mutex_lock.pipe_read.__vfs_read.vfs_read.sys_read
0.88 ± 26% -100.0% 0.00 ± -1% perf-profile.cycles-pp.mutex_optimistic_spin.__mutex_lock_slowpath.mutex_lock.pipe_read.__vfs_read
0.86 ± 26% -100.0% 0.00 ± -1% perf-profile.cycles-pp.mutex_spin_on_owner.isra.4.mutex_optimistic_spin.__mutex_lock_slowpath.mutex_lock.pipe_read
1.25 ± 20% -100.0% 0.00 ± -1% perf-profile.cycles-pp.native_flush_tlb_others.flush_tlb_page.ptep_clear_flush.try_to_unmap_one.rmap_walk
0.95 ± 19% -100.0% 0.00 ± -1% perf-profile.cycles-pp.native_send_call_func_ipi.smp_call_function_many.native_flush_tlb_others.flush_tlb_page.ptep_clear_flush
1.70 ± 32% -100.0% 0.00 ± -1% perf-profile.cycles-pp.note_gp_changes.rcu_process_callbacks.__do_softirq.irq_exit.smp_apic_timer_interrupt
2.03 ± 27% +187.1% 5.83 ± 29% perf-profile.cycles-pp.page_fault
0.02 ± 33% +3533.3% 0.54 ± 57% perf-profile.cycles-pp.path_lookupat.filename_lookup.user_path_at_empty.vfs_fstatat.SYSC_newstat
0.06 ± 34% +1572.0% 1.05 ± 16% perf-profile.cycles-pp.path_openat.do_filp_open.do_sys_open.sys_open.entry_SYSCALL_64_fastpath
0.14 ± 26% +2436.2% 3.68 ± 4% perf-profile.cycles-pp.perf_duration_warn.irq_work_run_list.irq_work_run.smp_irq_work_interrupt.irq_work_interrupt
1.61 ± 12% -66.8% 0.53 ± 39% perf-profile.cycles-pp.perf_event_task_tick.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer
0.87 ± 19% -100.0% 0.00 ± -1% perf-profile.cycles-pp.physflat_send_IPI_mask.native_send_call_func_ipi.smp_call_function_many.native_flush_tlb_others.flush_tlb_page
0.03 ± 33% +5258.3% 1.61 ± 16% perf-profile.cycles-pp.pick_next_task_fair.__schedule.schedule.exit_to_usermode_loop.prepare_exit_to_usermode
0.14 ± 15% +664.3% 1.07 ± 48% perf-profile.cycles-pp.pick_next_task_fair.__schedule.schedule.exit_to_usermode_loop.syscall_return_slowpath
20.51 ± 3% -47.3% 10.81 ± 14% perf-profile.cycles-pp.pipe_read.__vfs_read.vfs_read.sys_read.entry_SYSCALL_64_fastpath
0.15 ± 31% +1201.6% 1.99 ± 22% perf-profile.cycles-pp.pipe_wait.pipe_read.__vfs_read.vfs_read.sys_read
13.80 ± 9% +30.7% 18.04 ± 12% perf-profile.cycles-pp.pipe_write.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath
0.29 ± 14% +865.8% 2.83 ± 21% perf-profile.cycles-pp.prepare_exit_to_usermode.retint_user
1.16 ± 14% +428.2% 6.14 ± 9% perf-profile.cycles-pp.print_context_stack.dump_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity
0.14 ± 26% +2436.2% 3.68 ± 4% perf-profile.cycles-pp.printk.perf_duration_warn.irq_work_run_list.irq_work_run.smp_irq_work_interrupt
0.12 ± 15% +1026.5% 1.38 ± 24% perf-profile.cycles-pp.proc_reg_read.__vfs_read.vfs_read.sys_read.entry_SYSCALL_64_fastpath
0.43 ± 24% +1080.1% 5.04 ± 95% perf-profile.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
0.06 ± 55% +2370.8% 1.48 ± 18% perf-profile.cycles-pp.process_timeout.call_timer_fn.run_timer_softirq.__do_softirq.irq_exit
1.25 ± 20% -100.0% 0.00 ± -1% perf-profile.cycles-pp.ptep_clear_flush.try_to_unmap_one.rmap_walk.try_to_unmap.migrate_pages
1.34 ± 20% -62.8% 0.50 ± 26% perf-profile.cycles-pp.put_page.anon_pipe_buf_release.pipe_read.__vfs_read.vfs_read
2.73 ± 11% -80.5% 0.53 ± 39% perf-profile.cycles-pp.rcu_check_callbacks.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
0.04 ± 72% +2831.2% 1.17 ± 38% perf-profile.cycles-pp.rcu_gp_kthread.kthread.ret_from_fork
2.26 ± 32% -100.0% 0.00 ± -1% perf-profile.cycles-pp.rcu_process_callbacks.__do_softirq.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
0.52 ± 27% +1133.3% 6.48 ± 22% perf-profile.cycles-pp.read
0.49 ± 87% -100.0% 0.00 ± -1% perf-profile.cycles-pp.rest_init.start_kernel.x86_64_start_reservations.x86_64_start_kernel
0.68 ± 17% +1182.3% 8.69 ± 57% perf-profile.cycles-pp.ret_from_fork
0.32 ± 14% +800.0% 2.88 ± 23% perf-profile.cycles-pp.retint_user
0.44 ± 7% +770.8% 3.88 ± 19% perf-profile.cycles-pp.return_from_execve.execve
1.26 ± 20% -100.0% 0.00 ± -1% perf-profile.cycles-pp.rmap_walk.try_to_unmap.migrate_pages.migrate_misplaced_page.handle_mm_fault
0.64 ± 14% +232.2% 2.12 ± 19% perf-profile.cycles-pp.run_timer_softirq.__do_softirq.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
0.08 ± 29% +1253.1% 1.08 ± 22% perf-profile.cycles-pp.rw_verify_area.vfs_write.sys_write.entry_SYSCALL_64_fastpath.write
1.29 ± 15% +435.7% 6.91 ± 5% perf-profile.cycles-pp.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.activate_task
0.06 ± 23% +3976.0% 2.55 ± 24% perf-profile.cycles-pp.schedule.exit_to_usermode_loop.prepare_exit_to_usermode.retint_user
0.03 ± 62% +4592.3% 1.52 ± 53% perf-profile.cycles-pp.schedule.exit_to_usermode_loop.syscall_return_slowpath.int_ret_from_sys_call.write
0.13 ± 30% +1130.8% 1.60 ± 27% perf-profile.cycles-pp.schedule.pipe_wait.pipe_read.__vfs_read.vfs_read
17.18 ± 9% -79.0% 3.61 ± 28% perf-profile.cycles-pp.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
0.31 ± 9% +693.6% 2.48 ± 25% perf-profile.cycles-pp.search_binary_handler.do_execveat_common.sys_execve.return_from_execve.execve
0.09 ± 24% +897.3% 0.92 ± 26% perf-profile.cycles-pp.security_file_permission.rw_verify_area.vfs_write.sys_write.entry_SYSCALL_64_fastpath
0.07 ± 28% +866.7% 0.65 ± 39% perf-profile.cycles-pp.selinux_file_permission.security_file_permission.rw_verify_area.vfs_write.sys_write
0.11 ± 15% +1071.1% 1.32 ± 26% perf-profile.cycles-pp.seq_read.proc_reg_read.__vfs_read.vfs_read.sys_read
0.15 ± 8% +2286.4% 3.52 ± 6% perf-profile.cycles-pp.serial8250_console_putchar.uart_console_write.serial8250_console_write.univ8250_console_write.call_console_drivers
0.13 ± 27% +2556.6% 3.52 ± 6% perf-profile.cycles-pp.serial8250_console_write.univ8250_console_write.call_console_drivers.console_unlock.vprintk_emit
35.62 ± 9% -60.9% 13.92 ± 23% perf-profile.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt
1.24 ± 20% -100.0% 0.00 ± -1% perf-profile.cycles-pp.smp_call_function_many.native_flush_tlb_others.flush_tlb_page.ptep_clear_flush.try_to_unmap_one
0.14 ± 26% +2436.2% 3.68 ± 4% perf-profile.cycles-pp.smp_irq_work_interrupt.irq_work_interrupt
0.49 ± 87% -100.0% 0.00 ± -1% perf-profile.cycles-pp.start_kernel.x86_64_start_reservations.x86_64_start_kernel
17.31 ± 36% -100.0% 0.00 ± -1% perf-profile.cycles-pp.start_secondary
0.19 ± 16% +1094.8% 2.30 ± 15% perf-profile.cycles-pp.sys_clone.entry_SYSCALL_64_fastpath.__libc_fork
0.44 ± 7% +770.8% 3.88 ± 19% perf-profile.cycles-pp.sys_execve.return_from_execve.execve
0.13 ± 17% +1409.4% 2.00 ± 14% perf-profile.cycles-pp.sys_exit_group.entry_SYSCALL_64_fastpath
1.99 ± 13% -67.1% 0.66 ± 70% perf-profile.cycles-pp.sys_futex.entry_SYSCALL_64_fastpath
0.08 ± 31% +1850.0% 1.66 ± 34% perf-profile.cycles-pp.sys_mmap.entry_SYSCALL_64_fastpath
0.09 ± 27% +1794.3% 1.66 ± 34% perf-profile.cycles-pp.sys_mmap_pgoff.sys_mmap.entry_SYSCALL_64_fastpath
0.09 ± 21% +1091.4% 1.04 ± 46% perf-profile.cycles-pp.sys_mprotect.entry_SYSCALL_64_fastpath
20.46 ± 4% -55.6% 9.09 ± 12% perf-profile.cycles-pp.sys_read.entry_SYSCALL_64_fastpath
0.47 ± 31% +1045.3% 5.44 ± 21% perf-profile.cycles-pp.sys_read.entry_SYSCALL_64_fastpath.read
13.32 ± 9% -44.4% 7.41 ± 19% perf-profile.cycles-pp.sys_write.entry_SYSCALL_64_fastpath
0.91 ± 18% +1388.1% 13.47 ± 7% perf-profile.cycles-pp.sys_write.entry_SYSCALL_64_fastpath.write
0.04 ± 66% +4253.3% 1.63 ± 47% perf-profile.cycles-pp.syscall_return_slowpath.int_ret_from_sys_call.write
12.22 ± 9% -88.2% 1.44 ± 23% perf-profile.cycles-pp.task_tick_fair.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer
0.15 ± 42% +880.6% 1.52 ± 41% perf-profile.cycles-pp.tick_do_update_jiffies64.tick_sched_do_timer.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
1.03 ± 13% -78.7% 0.22 ± 81% perf-profile.cycles-pp.tick_program_event.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
0.17 ± 47% +926.5% 1.75 ± 26% perf-profile.cycles-pp.tick_sched_do_timer.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.local_apic_timer_interrupt
23.17 ± 9% -74.4% 5.92 ± 31% perf-profile.cycles-pp.tick_sched_handle.isra.17.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.local_apic_timer_interrupt
24.14 ± 9% -67.0% 7.97 ± 27% perf-profile.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt
1.33 ± 30% -100.0% 0.00 ± -1% perf-profile.cycles-pp.trigger_load_balance.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer
1.26 ± 20% -100.0% 0.00 ± -1% perf-profile.cycles-pp.try_to_unmap.migrate_pages.migrate_misplaced_page.handle_mm_fault.__do_page_fault
1.26 ± 20% -100.0% 0.00 ± -1% perf-profile.cycles-pp.try_to_unmap_one.rmap_walk.try_to_unmap.migrate_pages.migrate_misplaced_page
0.92 ± 22% +739.8% 7.70 ± 6% perf-profile.cycles-pp.try_to_wake_up.default_wake_function.autoremove_wake_function.__wake_up_common.__wake_up_sync_key
0.05 ± 48% +2645.0% 1.37 ± 11% perf-profile.cycles-pp.try_to_wake_up.wake_up_process.process_timeout.call_timer_fn.run_timer_softirq
0.72 ± 18% +986.4% 7.79 ± 5% perf-profile.cycles-pp.ttwu_do_activate.constprop.86.try_to_wake_up.default_wake_function.autoremove_wake_function.__wake_up_common
0.05 ± 54% +2315.0% 1.21 ± 6% perf-profile.cycles-pp.ttwu_do_activate.constprop.86.try_to_wake_up.wake_up_process.process_timeout.call_timer_fn
0.03 ± 45% +1876.9% 0.64 ± 51% perf-profile.cycles-pp.ttwu_do_wakeup.ttwu_do_activate.try_to_wake_up.default_wake_function.autoremove_wake_function
0.15 ± 8% +2286.4% 3.52 ± 6% perf-profile.cycles-pp.uart_console_write.serial8250_console_write.univ8250_console_write.call_console_drivers.console_unlock
0.13 ± 27% +2556.6% 3.52 ± 6% perf-profile.cycles-pp.univ8250_console_write.call_console_drivers.console_unlock.vprintk_emit.vprintk_default
0.07 ± 39% +1706.9% 1.31 ± 53% perf-profile.cycles-pp.unlock_page.filemap_map_pages.handle_mm_fault.__do_page_fault.do_page_fault
0.11 ± 31% +1378.6% 1.55 ± 38% perf-profile.cycles-pp.unmap_page_range.unmap_single_vma.unmap_vmas.exit_mmap.mmput
0.07 ± 42% +1544.4% 1.11 ± 36% perf-profile.cycles-pp.unmap_single_vma.unmap_vmas.exit_mmap.mmput.do_exit
0.07 ± 39% +1607.7% 1.11 ± 36% perf-profile.cycles-pp.unmap_vmas.exit_mmap.mmput.do_exit.do_group_exit
7.36 ± 11% -94.1% 0.43 ± 31% perf-profile.cycles-pp.update_cfs_shares.task_tick_fair.scheduler_tick.update_process_times.tick_sched_handle
0.94 ± 11% -67.5% 0.31 ± 39% perf-profile.cycles-pp.update_curr.task_tick_fair.scheduler_tick.update_process_times.tick_sched_handle
22.60 ± 9% -75.0% 5.64 ± 33% perf-profile.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
0.14 ± 42% +822.2% 1.25 ± 47% perf-profile.cycles-pp.update_wall_time.tick_do_update_jiffies64.tick_sched_do_timer.tick_sched_timer.__hrtimer_run_queues
0.02 ± 33% +3883.3% 0.60 ± 67% perf-profile.cycles-pp.user_path_at_empty.vfs_fstatat.SYSC_newstat.sys_newstat.entry_SYSCALL_64_fastpath
20.45 ± 4% -55.5% 9.09 ± 12% perf-profile.cycles-pp.vfs_read.sys_read.entry_SYSCALL_64_fastpath
0.47 ± 30% +1021.8% 5.27 ± 21% perf-profile.cycles-pp.vfs_read.sys_read.entry_SYSCALL_64_fastpath.read
13.28 ± 9% -44.2% 7.41 ± 19% perf-profile.cycles-pp.vfs_write.sys_write.entry_SYSCALL_64_fastpath
0.86 ± 19% +1420.4% 13.04 ± 7% perf-profile.cycles-pp.vfs_write.sys_write.entry_SYSCALL_64_fastpath.write
0.08 ± 27% +1968.8% 1.65 ± 34% perf-profile.cycles-pp.vm_mmap_pgoff.sys_mmap_pgoff.sys_mmap.entry_SYSCALL_64_fastpath
0.14 ± 26% +2436.2% 3.68 ± 4% perf-profile.cycles-pp.vprintk_default.printk.perf_duration_warn.irq_work_run_list.irq_work_run
0.14 ± 26% +2436.2% 3.68 ± 4% perf-profile.cycles-pp.vprintk_emit.vprintk_default.printk.perf_duration_warn.irq_work_run_list
0.15 ± 7% +2212.1% 3.35 ± 5% perf-profile.cycles-pp.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write.univ8250_console_write
0.05 ± 47% +2723.8% 1.48 ± 18% perf-profile.cycles-pp.wake_up_process.process_timeout.call_timer_fn.run_timer_softirq.__do_softirq
0.45 ± 22% +1057.2% 5.21 ± 94% perf-profile.cycles-pp.worker_thread.kthread.ret_from_fork
1.02 ± 15% +1487.7% 16.20 ± 10% perf-profile.cycles-pp.write
0.49 ± 87% -100.0% 0.00 ± -1% perf-profile.cycles-pp.x86_64_start_kernel
0.49 ± 87% -100.0% 0.00 ± -1% perf-profile.cycles-pp.x86_64_start_reservations.x86_64_start_kernel
=========================================================================================
compiler/cpufreq_governor/kconfig/load/rootfs/tbox_group/test/testcase:
gcc-4.9/powersave/x86_64-rhel/6000/debian-x86_64-2015-02-07.cgz/brickland1/page_test/aim7
commit:
8025943c32f5de29b3fd548e917e24424297ca73
0aefb957de5f2a1d9467126114146b6c793cf180
8025943c32f5de29 0aefb957de5f2a1d9467126114
---------------- --------------------------
%stddev %change %stddev
\ | \
92222 ± 0% -90.8% 8479 ± 0% aim7.jobs-per-min
392.70 ± 0% -100.0% 0.00 ± -1% aim7.time.elapsed_time
392.70 ± 0% -100.0% 0.00 ± -1% aim7.time.elapsed_time.max
8461942 ± 0% -100.0% 0.00 ± -1% aim7.time.involuntary_context_switches
7865 ± 0% -100.0% 0.00 ± -1% aim7.time.maximum_resident_set_size
9.618e+08 ± 0% -100.0% 0.00 ± -1% aim7.time.minor_page_faults
4096 ± 0% -100.0% 0.00 ± -1% aim7.time.page_size
43770 ± 0% -100.0% 0.00 ± -1% aim7.time.system_time
1310 ± 0% -100.0% 0.00 ± -1% aim7.time.user_time
1379465 ± 0% -100.0% 0.00 ± -1% aim7.time.voluntary_context_switches
392.70 ± 0% -100.0% 0.00 ± -1% time.elapsed_time
392.70 ± 0% -100.0% 0.00 ± -1% time.elapsed_time.max
8461942 ± 0% -100.0% 0.00 ± -1% time.involuntary_context_switches
7865 ± 0% -100.0% 0.00 ± -1% time.maximum_resident_set_size
9.618e+08 ± 0% -100.0% 0.00 ± -1% time.minor_page_faults
4096 ± 0% -100.0% 0.00 ± -1% time.page_size
11479 ± 0% -100.0% 0.00 ± -1% time.percent_of_cpu_this_job_got
43770 ± 0% -100.0% 0.00 ± -1% time.system_time
1310 ± 0% -100.0% 0.00 ± -1% time.user_time
1379465 ± 0% -100.0% 0.00 ± -1% time.voluntary_context_switches
0.00 ± -1% +Inf% 15.15 ± 14% perf-profile.cycles-pp.___might_sleep.unmap_page_range.unmap_single_vma.unmap_vmas.unmap_region
0.00 ± -1% +Inf% 3.56 ± 15% perf-profile.cycles-pp.___might_sleep.unmap_single_vma.unmap_vmas.unmap_region.do_munmap
27.21 ± 13% -56.4% 11.86 ± 61% perf-profile.cycles-pp.__do_page_fault.do_page_fault.page_fault
18.62 ± 23% -100.0% 0.01 ±100% perf-profile.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
0.00 ± -1% +Inf% 1.01 ± 9% perf-profile.cycles-pp.__list_del_entry.list_del.free_pcppages_bulk.free_hot_cold_page.free_hot_cold_page_list
14.18 ± 28% -99.9% 0.01 ±110% perf-profile.cycles-pp.__percpu_counter_add.__vm_enough_memory.security_vm_enough_memory_mm.do_brk.sys_brk
15.62 ± 25% -99.8% 0.03 ± 45% perf-profile.cycles-pp.__vm_enough_memory.security_vm_enough_memory_mm.do_brk.sys_brk.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 2.00 ± 19% perf-profile.cycles-pp._cond_resched.unmap_single_vma.unmap_vmas.unmap_region.do_munmap
8.68 ± 54% -99.6% 0.04 ± 29% perf-profile.cycles-pp._raw_spin_lock.free_pcppages_bulk.free_hot_cold_page.free_hot_cold_page_list.release_pages
2.41 ± 36% -99.9% 0.00 ±173% perf-profile.cycles-pp._raw_spin_lock.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.handle_mm_fault
0.00 ± -1% +Inf% 1.08 ± 65% perf-profile.cycles-pp._raw_spin_lock.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
11.48 ± 32% -100.0% 0.00 ± -1% perf-profile.cycles-pp._raw_spin_lock_irqsave.__percpu_counter_add.__vm_enough_memory.security_vm_enough_memory_mm.do_brk
0.99 ± 34% -99.3% 0.01 ± 70% perf-profile.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.lru_cache_add_active_or_unevictable.handle_mm_fault
8.13 ± 26% -100.0% 0.00 ± -1% perf-profile.cycles-pp.apic_timer_interrupt.__alloc_pages_nodemask.alloc_pages_vma.handle_mm_fault.__do_page_fault
2.87 ± 29% -100.0% 0.00 ± -1% perf-profile.cycles-pp.apic_timer_interrupt.__percpu_counter_add.__vm_enough_memory.security_vm_enough_memory_mm.do_brk
1.11 ± 30% -100.0% 0.00 ± -1% perf-profile.cycles-pp.apic_timer_interrupt.do_page_fault.page_fault
1.17 ± 5% -99.6% 0.01 ±100% perf-profile.cycles-pp.apic_timer_interrupt.free_hot_cold_page_list.release_pages.free_pages_and_swap_cache.tlb_flush_mmu_free
2.13 ± 26% -100.0% 0.00 ± -1% perf-profile.cycles-pp.apic_timer_interrupt.pagevec_lru_move_fn.__lru_cache_add.lru_cache_add_active_or_unevictable.handle_mm_fault
2.62 ± 24% -100.0% 0.00 ± -1% perf-profile.cycles-pp.apic_timer_interrupt.perf_event_mmap.do_brk.sys_brk.entry_SYSCALL_64_fastpath
2.41 ± 14% -99.6% 0.01 ±-10000% perf-profile.cycles-pp.apic_timer_interrupt.unmap_single_vma.unmap_vmas.unmap_region.do_munmap
0.02 ±127% +27000.0% 6.10 ± 58% perf-profile.cycles-pp.clear_page_c_e.__alloc_pages_nodemask.alloc_pages_vma.handle_mm_fault.__do_page_fault
33.43 ± 15% -89.8% 3.41 ± 65% perf-profile.cycles-pp.do_brk.sys_brk.entry_SYSCALL_64_fastpath
22.17 ± 26% +265.1% 80.92 ± 13% perf-profile.cycles-pp.do_munmap.sys_brk.entry_SYSCALL_64_fastpath
29.94 ± 13% -60.2% 11.91 ± 60% perf-profile.cycles-pp.do_page_fault.page_fault
63.35 ± 3% +34.3% 85.06 ± 10% perf-profile.cycles-pp.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 0.92 ± 16% perf-profile.cycles-pp.free_pages_prepare.free_hot_cold_page.free_hot_cold_page_list.release_pages.free_pages_and_swap_cache
0.00 ± -1% +Inf% 10.19 ± 22% perf-profile.cycles-pp.free_pgd_range.free_pgtables.unmap_region.do_munmap.sys_brk
0.00 ± -1% +Inf% 10.25 ± 22% perf-profile.cycles-pp.free_pgtables.unmap_region.do_munmap.sys_brk.entry_SYSCALL_64_fastpath
24.26 ± 13% -52.7% 11.46 ± 61% perf-profile.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
8.75 ± 26% -100.0% 0.00 ± -1% perf-profile.cycles-pp.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.__alloc_pages_nodemask
2.81 ± 29% -100.0% 0.00 ± -1% perf-profile.cycles-pp.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.__percpu_counter_add
1.10 ± 30% -100.0% 0.00 ± -1% perf-profile.cycles-pp.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.do_page_fault
1.10 ± 5% -99.5% 0.01 ±100% perf-profile.cycles-pp.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.free_hot_cold_page_list
2.12 ± 26% -100.0% 0.00 ± -1% perf-profile.cycles-pp.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.pagevec_lru_move_fn
2.58 ± 24% -100.0% 0.00 ± -1% perf-profile.cycles-pp.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.perf_event_mmap
2.34 ± 15% -100.0% 0.00 ± -1% perf-profile.cycles-pp.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.unmap_single_vma
1.44 ± 30% -100.0% 0.00 ± -1% perf-profile.cycles-pp.ktime_get.clockevents_program_event.tick_program_event.hrtimer_interrupt.local_apic_timer_interrupt
4.88 ± 28% -100.0% 0.00 ± -1% perf-profile.cycles-pp.ktime_get.sched_clock_tick.scheduler_tick.update_process_times.tick_sched_handle
7.50 ± 26% -100.0% 0.00 ± -1% perf-profile.cycles-pp.ktime_get.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.local_apic_timer_interrupt
1.07 ± 46% -100.0% 0.00 ± -1% perf-profile.cycles-pp.ktime_get_update_offsets_now.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
0.00 ± -1% +Inf% 1.39 ± 14% perf-profile.cycles-pp.list_del.free_pcppages_bulk.free_hot_cold_page.free_hot_cold_page_list.release_pages
0.74 ± 25% -100.0% 0.00 ± -1% perf-profile.cycles-pp.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.__alloc_pages_nodemask.alloc_pages_current
7.98 ± 26% -100.0% 0.00 ± -1% perf-profile.cycles-pp.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.__alloc_pages_nodemask.alloc_pages_vma
2.81 ± 29% -100.0% 0.00 ± -1% perf-profile.cycles-pp.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.__percpu_counter_add.__vm_enough_memory
1.10 ± 30% -100.0% 0.00 ± -1% perf-profile.cycles-pp.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.do_page_fault.page_fault
1.11 ± 5% -99.5% 0.01 ±100% perf-profile.cycles-pp.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.free_hot_cold_page_list.release_pages
2.09 ± 26% -100.0% 0.00 ± -1% perf-profile.cycles-pp.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.pagevec_lru_move_fn.__lru_cache_add
2.58 ± 24% -100.0% 0.00 ± -1% perf-profile.cycles-pp.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.perf_event_mmap.do_brk
2.34 ± 15% -100.0% 0.00 ± -1% perf-profile.cycles-pp.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.unmap_single_vma.unmap_vmas
0.00 ± -1% +Inf% 0.65 ± 61% perf-profile.cycles-pp.mem_cgroup_try_charge.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.00 ± -1% +Inf% 0.73 ± 59% perf-profile.cycles-pp.native_irq_return_iret
13.20 ± 24% -100.0% 0.00 ± -1% perf-profile.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.free_pcppages_bulk.free_hot_cold_page.free_hot_cold_page_list
3.95 ± 5% -100.0% 0.00 ± -1% perf-profile.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma
17.57 ± 12% -100.0% 0.00 ± -1% perf-profile.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.__percpu_counter_add.__vm_enough_memory.security_vm_enough_memory_mm
1.48 ± 9% -100.0% 0.00 ± -1% perf-profile.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.lru_cache_add_active_or_unevictable
30.13 ± 12% -60.4% 11.94 ± 60% perf-profile.cycles-pp.page_fault
0.00 ± -1% +Inf% 2.19 ± 17% perf-profile.cycles-pp.page_remove_rmap.unmap_page_range.unmap_single_vma.unmap_vmas.unmap_region
0.02 ±-5000% +9550.0% 1.93 ± 62% perf-profile.cycles-pp.perf_event_aux.perf_event_mmap.do_brk.sys_brk.entry_SYSCALL_64_fastpath
0.04 ± 73% +5170.6% 2.24 ± 63% perf-profile.cycles-pp.perf_event_mmap.do_brk.sys_brk.entry_SYSCALL_64_fastpath
2.06 ± 21% -100.0% 0.00 ± -1% perf-profile.cycles-pp.read_hpet.ktime_get.clockevents_program_event.tick_program_event.hrtimer_interrupt
7.23 ± 23% -100.0% 0.00 ± -1% perf-profile.cycles-pp.read_hpet.ktime_get.sched_clock_tick.scheduler_tick.update_process_times
11.48 ± 25% -100.0% 0.00 ± -1% perf-profile.cycles-pp.read_hpet.ktime_get.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
10.70 ± 27% -100.0% 0.00 ± -1% perf-profile.cycles-pp.read_hpet.ktime_get_update_offsets_now.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt
1.02 ± 55% -100.0% 0.00 ± -1% perf-profile.cycles-pp.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
17.45 ± 20% -98.3% 0.30 ± 59% perf-profile.cycles-pp.security_vm_enough_memory_mm.do_brk.sys_brk.entry_SYSCALL_64_fastpath
8.09 ± 26% -100.0% 0.00 ± -1% perf-profile.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.__alloc_pages_nodemask.alloc_pages_vma.handle_mm_fault
2.85 ± 29% -100.0% 0.00 ± -1% perf-profile.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.__percpu_counter_add.__vm_enough_memory.security_vm_enough_memory_mm
1.11 ± 29% -100.0% 0.00 ± -1% perf-profile.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.do_page_fault.page_fault
1.16 ± 4% -99.6% 0.01 ±100% perf-profile.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.free_hot_cold_page_list.release_pages.free_pages_and_swap_cache
2.12 ± 26% -100.0% 0.00 ± -1% perf-profile.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.pagevec_lru_move_fn.__lru_cache_add.lru_cache_add_active_or_unevictable
2.62 ± 24% -100.0% 0.00 ± -1% perf-profile.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.perf_event_mmap.do_brk.sys_brk
2.40 ± 14% -99.6% 0.01 ±-10000% perf-profile.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.unmap_single_vma.unmap_vmas.unmap_region
62.91 ± 3% +34.7% 84.73 ± 10% perf-profile.cycles-pp.sys_brk.entry_SYSCALL_64_fastpath
2.01 ± 56% -100.0% 0.00 ± -1% perf-profile.cycles-pp.tick_sched_handle.isra.17.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.local_apic_timer_interrupt
17.39 ± 24% -100.0% 0.00 ± -1% perf-profile.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt
0.01 ±-10000% +4.6e+05% 45.80 ± 14% perf-profile.cycles-pp.unmap_page_range.unmap_single_vma.unmap_vmas.unmap_region.do_munmap
22.09 ± 26% +265.6% 80.76 ± 13% perf-profile.cycles-pp.unmap_region.do_munmap.sys_brk.entry_SYSCALL_64_fastpath
0.04 ± 82% +1.5e+05% 51.84 ± 15% perf-profile.cycles-pp.unmap_single_vma.unmap_vmas.unmap_region.do_munmap.sys_brk
0.04 ± 82% +1.5e+05% 51.86 ± 15% perf-profile.cycles-pp.unmap_vmas.unmap_region.do_munmap.sys_brk.entry_SYSCALL_64_fastpath
1.91 ± 56% -100.0% 0.00 ± -1% perf-profile.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
0.00 ± -1% +Inf% 0.54 ± 66% perf-profile.cycles-pp.vma_merge.do_brk.sys_brk.entry_SYSCALL_64_fastpath
=========================================================================================
compiler/cpufreq_governor/disk/filesize/fs/iterations/kconfig/nr_directories/nr_files_per_directory/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase:
gcc-4.9/performance/1HDD/9B/f2fs/1x/x86_64-rhel/16d/256fpd/32t/debian-x86_64-2015-02-07.cgz/fsyncBeforeClose/nhm4/400M/fsmark
commit:
8025943c32f5de29b3fd548e917e24424297ca73
0aefb957de5f2a1d9467126114146b6c793cf180
8025943c32f5de29 0aefb957de5f2a1d9467126114
---------------- --------------------------
%stddev %change %stddev
\ | \
4027079 ± 10% +109.1% 8418766 ± 3% fsmark.app_overhead
1355 ± 0% +26.7% 1716 ± 0% fsmark.files_per_sec
76.33 ± 0% -21.0% 60.28 ± 0% fsmark.time.elapsed_time
76.33 ± 0% -21.0% 60.28 ± 0% fsmark.time.elapsed_time.max
1896890 ± 0% -1.1% 1875796 ± 0% fsmark.time.file_system_outputs
21424 ± 1% -9.8% 19326 ± 0% fsmark.time.involuntary_context_switches
36.00 ± 0% -80.6% 7.00 ± 0% fsmark.time.percent_of_cpu_this_job_got
27.05 ± 0% -85.3% 3.97 ± 0% fsmark.time.system_time
975566 ± 0% -76.9% 225370 ± 0% fsmark.time.voluntary_context_switches
52132 ± 6% -30.4% 36260 ± 4% meminfo.DirectMap4k
519.97 ± 0% -87.0% 67.41 ±130% uptime.idle
51034 ± 0% -83.4% 8476 ± 0% proc-vmstat.pgactivate
107205 ± 0% -15.7% 90362 ± 0% proc-vmstat.pgfault
93793 ± 1% -12.6% 81987 ± 3% proc-vmstat.pgfree
17409 ± 0% +176.1% 48062 ± 0% softirqs.BLOCK
22860 ± 2% -80.8% 4380 ± 4% softirqs.RCU
21946 ± 1% -100.0% 0.00 ± -1% softirqs.SCHED
4.71 ± 0% +122.6% 10.48 ± 0% turbostat.%Busy
146.25 ± 0% +138.8% 349.25 ± 0% turbostat.Avg_MHz
62.05 ± 0% +44.3% 89.52 ± 0% turbostat.CPU%c1
21.54 ± 0% -100.0% 0.00 ± -1% turbostat.CPU%c3
11.71 ± 1% -100.0% 0.00 ± -1% turbostat.CPU%c6
6857 ± 0% +23.5% 8467 ± 0% vmstat.io.bo
605.00 ± 1% -64.6% 214.00 ± 1% vmstat.memory.buff
9.25 ± 8% -40.5% 5.50 ± 9% vmstat.procs.b
25176 ± 1% -67.8% 8100 ± 0% vmstat.system.cs
1315 ± 1% +39.3% 1831 ± 0% vmstat.system.in
76.33 ± 0% -21.0% 60.28 ± 0% time.elapsed_time
76.33 ± 0% -21.0% 60.28 ± 0% time.elapsed_time.max
36.00 ± 0% -80.6% 7.00 ± 0% time.percent_of_cpu_this_job_got
27.05 ± 0% -85.3% 3.97 ± 0% time.system_time
0.89 ± 4% -42.7% 0.51 ± 1% time.user_time
975566 ± 0% -76.9% 225370 ± 0% time.voluntary_context_switches
2.28e+08 ± 0% -88.1% 27227040 ± 0% cpuidle.C1-NHM.time
334829 ± 1% -84.8% 50823 ± 0% cpuidle.C1-NHM.usage
38611787 ± 1% -90.3% 3750457 ± 1% cpuidle.C1E-NHM.time
122293 ± 1% -96.4% 4383 ± 2% cpuidle.C1E-NHM.usage
1.793e+08 ± 1% -94.7% 9496358 ± 1% cpuidle.C3-NHM.time
259062 ± 0% -97.2% 7144 ± 2% cpuidle.C3-NHM.usage
1.448e+08 ± 2% -90.1% 14263960 ± 1% cpuidle.C6-NHM.time
127979 ± 1% -98.8% 1498 ± 2% cpuidle.C6-NHM.usage
2033 ± 8% -67.7% 656.50 ± 12% cpuidle.POLL.time
191.00 ± 4% -63.6% 69.50 ± 15% cpuidle.POLL.usage
942.50 ± 16% +848.0% 8934 ± 12% latency_stats.avg.call_rwsem_down_read_failed.f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
1549 ± 9% +1421.1% 23570 ± 4% latency_stats.avg.f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
78.50 ±173% +20853.2% 16448 ± 17% latency_stats.avg.wait_on_page_bit.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].update_inode.[f2fs].update_inode_page.[f2fs].f2fs_write_end.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write
55103 ± 0% -99.9% 33.25 ± 31% latency_stats.hits.call_rwsem_down_read_failed.get_node_info.[f2fs].alloc_nid.[f2fs].f2fs_new_inode.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
67547 ± 1% -100.0% 23.00 ± 44% latency_stats.hits.call_rwsem_down_read_failed.get_node_info.[f2fs].new_node_page.[f2fs].new_inode_page.[f2fs].init_inode_metadata.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open.do_sys_open.SyS_open
76671 ± 0% -100.0% 27.00 ± 37% latency_stats.hits.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write.SyS_write
109802 ± 0% -100.0% 43.75 ± 43% latency_stats.hits.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].get_new_data_page.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open
22906 ± 0% -99.9% 22.75 ± 48% latency_stats.hits.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_write_inline_data.[f2fs].f2fs_write_data_page.[f2fs].__f2fs_writepage.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range
28676 ± 0% -100.0% 12.75 ± 33% latency_stats.hits.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].get_read_data_page.[f2fs].find_data_page.[f2fs].f2fs_find_entry.[f2fs].f2fs_lookup.[f2fs].lookup_real.path_openat.do_filp_open
61445 ± 0% -100.0% 23.00 ± 38% latency_stats.hits.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].update_inode_page.[f2fs].f2fs_write_end.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write
16009 ± 0% -99.9% 9.75 ± 73% latency_stats.hits.call_rwsem_down_read_failed.is_checkpointed_node.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
10285 ± 0% -99.9% 7.75 ± 60% latency_stats.hits.call_rwsem_down_read_failed.need_dentry_mark.[f2fs].sync_node_pages.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
38496 ± 0% -100.0% 11.25 ± 45% latency_stats.hits.call_rwsem_down_read_failed.need_inode_block_update.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
66377 ± 0% -100.0% 26.75 ± 35% latency_stats.hits.call_rwsem_down_write_failed.get_node_info.[f2fs].alloc_nid.[f2fs].f2fs_new_inode.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
42333 ± 0% -99.9% 27.25 ± 35% latency_stats.hits.call_rwsem_down_write_failed.set_node_addr.[f2fs].f2fs_write_node_page.[f2fs].sync_node_pages.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
66623 ± 0% -99.9% 34.25 ± 32% latency_stats.hits.call_rwsem_down_write_failed.set_node_addr.[f2fs].new_node_page.[f2fs].new_inode_page.[f2fs].init_inode_metadata.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open.do_sys_open.SyS_open
64133 ± 0% -92.0% 5141 ± 2% latency_stats.hits.f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
8423 ± 3% +474.7% 48412 ± 2% latency_stats.hits.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
892908 ± 41% -72.7% 243382 ± 23% latency_stats.max.f2fs_sync_fs.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
78.50 ±173% +28207.6% 22221 ± 13% latency_stats.max.wait_on_page_bit.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].update_inode.[f2fs].update_inode_page.[f2fs].f2fs_write_end.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write
500239 ± 6% -90.9% 45738 ± 34% latency_stats.sum.alloc_nid.[f2fs].f2fs_new_inode.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
31619 ± 6% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.add_free_nid.[f2fs].build_free_nids.[f2fs].alloc_nid.[f2fs].f2fs_new_inode.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
1668876 ± 3% +1100.0% 20026914 ± 11% latency_stats.sum.call_rwsem_down_read_failed.f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
50641 ± 56% +712.2% 411336 ± 61% latency_stats.sum.call_rwsem_down_read_failed.f2fs_mkdir.[f2fs].vfs_mkdir.SyS_mkdir.entry_SYSCALL_64_fastpath
336517 ± 2% -90.3% 32791 ± 20% latency_stats.sum.call_rwsem_down_read_failed.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].wait_on_node_pages_writeback.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
11699569 ± 0% -100.0% 1299 ± 46% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].alloc_nid.[f2fs].f2fs_new_inode.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
48341 ± 1% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].alloc_nid.[f2fs].f2fs_new_inode.[f2fs].f2fs_mkdir.[f2fs].vfs_mkdir.SyS_mkdir.entry_SYSCALL_64_fastpath
942937 ± 0% -100.0% 151.25 ±112% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].f2fs_write_node_page.[f2fs].sync_node_pages.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
14463821 ± 1% -100.0% 703.75 ± 84% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].new_node_page.[f2fs].new_inode_page.[f2fs].init_inode_metadata.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open.do_sys_open.SyS_open
63549 ± 6% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].new_node_page.[f2fs].new_inode_page.[f2fs].init_inode_metadata.[f2fs].__f2fs_add_link.[f2fs].f2fs_mkdir.[f2fs].vfs_mkdir.SyS_mkdir.entry_SYSCALL_64_fastpath
13106613 ± 0% -100.0% 791.50 ± 71% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write.SyS_write
33684 ± 7% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].do_write_data_page.[f2fs].f2fs_write_data_page.[f2fs].__f2fs_writepage.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range
28833316 ± 0% -100.0% 1671 ± 88% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].get_new_data_page.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open
80281 ± 7% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].get_new_data_page.[f2fs].__f2fs_add_link.[f2fs].f2fs_mkdir.[f2fs].vfs_mkdir.SyS_mkdir.entry_SYSCALL_64_fastpath
2484259 ± 0% -100.0% 416.25 ± 73% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_write_inline_data.[f2fs].f2fs_write_data_page.[f2fs].__f2fs_writepage.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range
18328 ± 11% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].get_read_data_page.[f2fs].find_data_page.[f2fs].f2fs_find_entry.[f2fs].f2fs_lookup.[f2fs].lookup_real.__lookup_hash.filename_create
4566294 ± 0% -100.0% 433.00 ± 43% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].get_read_data_page.[f2fs].find_data_page.[f2fs].f2fs_find_entry.[f2fs].f2fs_lookup.[f2fs].lookup_real.path_openat.do_filp_open
221064 ± 2% -100.0% 26.75 ± 94% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].update_inode_page.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open.do_sys_open.SyS_open
60653 ± 7% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].update_inode_page.[f2fs].__f2fs_add_link.[f2fs].f2fs_mkdir.[f2fs].vfs_mkdir.SyS_mkdir.entry_SYSCALL_64_fastpath
8554135 ± 1% -100.0% 581.75 ± 69% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].update_inode_page.[f2fs].f2fs_write_end.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write
21248 ± 8% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].write_data_page.[f2fs].do_write_data_page.[f2fs].f2fs_write_data_page.[f2fs].__f2fs_writepage.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_fdatawrite.sync_dirty_dir_inodes.[f2fs]
1753053 ± 0% -100.0% 181.50 ± 94% latency_stats.sum.call_rwsem_down_read_failed.is_checkpointed_node.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
1259144 ± 0% -100.0% 180.50 ± 89% latency_stats.sum.call_rwsem_down_read_failed.need_dentry_mark.[f2fs].sync_node_pages.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
3130739 ± 1% -100.0% 210.50 ± 57% latency_stats.sum.call_rwsem_down_read_failed.need_inode_block_update.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
805.00 ± 69% +1614.6% 13802 ± 76% latency_stats.sum.call_rwsem_down_write_failed.f2fs_init_extent_tree.[f2fs].f2fs_new_inode.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
68585 ± 4% -98.3% 1180 ± 10% latency_stats.sum.call_rwsem_down_write_failed.f2fs_submit_merged_bio.[f2fs].sync_node_pages.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
14609333 ± 1% -100.0% 889.25 ± 70% latency_stats.sum.call_rwsem_down_write_failed.get_node_info.[f2fs].alloc_nid.[f2fs].f2fs_new_inode.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
67081 ± 2% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_write_failed.get_node_info.[f2fs].alloc_nid.[f2fs].f2fs_new_inode.[f2fs].f2fs_mkdir.[f2fs].vfs_mkdir.SyS_mkdir.entry_SYSCALL_64_fastpath
1766106 ± 1% -100.0% 207.00 ± 65% latency_stats.sum.call_rwsem_down_write_failed.set_node_addr.[f2fs].f2fs_write_node_page.[f2fs].sync_node_pages.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
10660937 ± 0% -100.0% 684.25 ± 83% latency_stats.sum.call_rwsem_down_write_failed.set_node_addr.[f2fs].new_node_page.[f2fs].new_inode_page.[f2fs].init_inode_metadata.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open.do_sys_open.SyS_open
46579 ± 6% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_write_failed.set_node_addr.[f2fs].new_node_page.[f2fs].new_inode_page.[f2fs].init_inode_metadata.[f2fs].__f2fs_add_link.[f2fs].f2fs_mkdir.[f2fs].vfs_mkdir.SyS_mkdir.entry_SYSCALL_64_fastpath
19075347 ± 37% -79.8% 3843968 ± 11% latency_stats.sum.f2fs_sync_fs.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
78.50 ±173% +75824.2% 59600 ± 9% latency_stats.sum.wait_on_page_bit.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].update_inode.[f2fs].update_inode_page.[f2fs].f2fs_write_end.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write
6580 ± 2% -22.6% 5096 ± 0% slabinfo.Acpi-Operand.active_objs
6580 ± 2% -22.6% 5096 ± 0% slabinfo.Acpi-Operand.num_objs
312.00 ± 0% -75.0% 78.00 ± 0% slabinfo.Acpi-ParseExt.active_objs
312.00 ± 0% -75.0% 78.00 ± 0% slabinfo.Acpi-ParseExt.num_objs
360.00 ± 0% -47.5% 189.00 ± 8% slabinfo.RAW.active_objs
360.00 ± 0% -47.5% 189.00 ± 8% slabinfo.RAW.num_objs
170.00 ± 14% -80.0% 34.00 ± 0% slabinfo.UDP.active_objs
170.00 ± 14% -80.0% 34.00 ± 0% slabinfo.UDP.num_objs
2642 ± 1% -55.0% 1189 ± 1% slabinfo.anon_vma.active_objs
2642 ± 1% -55.0% 1189 ± 1% slabinfo.anon_vma.num_objs
438.00 ± 11% -83.3% 73.00 ± 0% slabinfo.avc_xperms_node.active_objs
438.00 ± 11% -83.3% 73.00 ± 0% slabinfo.avc_xperms_node.num_objs
195.00 ± 14% -80.0% 39.00 ± 0% slabinfo.bdev_cache.active_objs
195.00 ± 14% -80.0% 39.00 ± 0% slabinfo.bdev_cache.num_objs
505.50 ± 4% -65.2% 176.00 ± 0% slabinfo.blkdev_requests.active_objs
505.50 ± 4% -65.2% 176.00 ± 0% slabinfo.blkdev_requests.num_objs
630.75 ± 3% -15.6% 532.25 ± 3% slabinfo.buffer_head.active_objs
799.50 ± 2% -11.0% 711.75 ± 2% slabinfo.buffer_head.num_objs
792.50 ± 0% -87.1% 102.00 ± 0% slabinfo.ext4_extent_status.active_objs
792.50 ± 0% -87.1% 102.00 ± 0% slabinfo.ext4_extent_status.num_objs
270.75 ± 1% -85.6% 39.00 ± 0% slabinfo.file_lock_cache.active_objs
270.75 ± 1% -85.6% 39.00 ± 0% slabinfo.file_lock_cache.num_objs
401.50 ± 9% -65.9% 137.00 ± 0% slabinfo.files_cache.active_objs
401.50 ± 9% -65.9% 137.00 ± 0% slabinfo.files_cache.num_objs
14862 ± 6% -87.4% 1870 ± 0% slabinfo.free_nid.active_objs
14891 ± 6% -87.4% 1870 ± 0% slabinfo.free_nid.num_objs
491.25 ± 1% -23.7% 375.00 ± 0% slabinfo.idr_layer_cache.active_objs
491.25 ± 1% -23.7% 375.00 ± 0% slabinfo.idr_layer_cache.num_objs
952.00 ± 3% -33.5% 633.25 ± 1% slabinfo.kmalloc-1024.active_objs
952.00 ± 3% -32.9% 639.25 ± 0% slabinfo.kmalloc-1024.num_objs
1783 ± 3% -37.2% 1119 ± 0% slabinfo.kmalloc-128.active_objs
1783 ± 3% -37.2% 1119 ± 0% slabinfo.kmalloc-128.num_objs
4900 ± 1% -19.4% 3950 ± 2% slabinfo.kmalloc-16.active_objs
4900 ± 1% -19.4% 3950 ± 2% slabinfo.kmalloc-16.num_objs
3880 ± 2% -40.5% 2310 ± 0% slabinfo.kmalloc-192.active_objs
3880 ± 2% -40.5% 2310 ± 0% slabinfo.kmalloc-192.num_objs
799.25 ± 4% -13.4% 692.50 ± 1% slabinfo.kmalloc-2048.active_objs
821.00 ± 3% -15.7% 692.50 ± 1% slabinfo.kmalloc-2048.num_objs
1854 ± 1% -24.4% 1402 ± 1% slabinfo.kmalloc-256.active_objs
2538 ± 3% -28.5% 1815 ± 8% slabinfo.kmalloc-256.num_objs
6877 ± 3% -40.4% 4096 ± 0% slabinfo.kmalloc-32.active_objs
6877 ± 3% -40.4% 4096 ± 0% slabinfo.kmalloc-32.num_objs
1847 ± 5% -51.5% 895.25 ± 1% slabinfo.kmalloc-512.active_objs
1859 ± 4% -36.8% 1176 ± 4% slabinfo.kmalloc-512.num_objs
6272 ± 8% -34.7% 4096 ± 0% slabinfo.kmalloc-8.active_objs
6272 ± 8% -34.7% 4096 ± 0% slabinfo.kmalloc-8.num_objs
1438 ± 3% -29.9% 1008 ± 0% slabinfo.kmalloc-96.active_objs
1438 ± 3% -29.9% 1008 ± 0% slabinfo.kmalloc-96.num_objs
320.00 ± 14% -60.0% 128.00 ± 0% slabinfo.kmem_cache_node.active_objs
320.00 ± 14% -60.0% 128.00 ± 0% slabinfo.kmem_cache_node.num_objs
377.50 ± 0% -66.6% 126.00 ± 0% slabinfo.mnt_cache.active_objs
377.50 ± 0% -66.6% 126.00 ± 0% slabinfo.mnt_cache.num_objs
1713 ± 5% -39.7% 1033 ± 2% slabinfo.proc_inode_cache.active_objs
1754 ± 5% -32.2% 1189 ± 3% slabinfo.proc_inode_cache.num_objs
1151 ± 1% -12.5% 1007 ± 0% slabinfo.shmem_inode_cache.active_objs
1151 ± 1% -12.5% 1007 ± 0% slabinfo.shmem_inode_cache.num_objs
360.50 ± 1% -29.6% 253.75 ± 0% slabinfo.sighand_cache.active_objs
360.50 ± 1% -29.6% 253.75 ± 0% slabinfo.sighand_cache.num_objs
531.00 ± 8% -43.7% 299.00 ± 0% slabinfo.signal_cache.active_objs
531.00 ± 8% -43.7% 299.00 ± 0% slabinfo.signal_cache.num_objs
200.00 ± 0% -87.5% 25.00 ± 0% slabinfo.sigqueue.active_objs
200.00 ± 0% -87.5% 25.00 ± 0% slabinfo.sigqueue.num_objs
287.50 ± 4% -47.8% 150.00 ± 0% slabinfo.sock_inode_cache.active_objs
287.50 ± 4% -47.8% 150.00 ± 0% slabinfo.sock_inode_cache.num_objs
1989 ± 1% -9.8% 1794 ± 0% slabinfo.trace_event_file.active_objs
1989 ± 1% -9.8% 1794 ± 0% slabinfo.trace_event_file.num_objs
5726 ± 8% -58.5% 2374 ± 2% slabinfo.vm_area_struct.active_objs
5741 ± 8% -58.6% 2376 ± 2% slabinfo.vm_area_struct.num_objs
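(For readers reproducing the per-cache rows above: the `slabinfo.*.active_objs` / `num_objs` metrics are sampled from `/proc/slabinfo`. A minimal sketch of a parser for that file's v2.x row format — the sample text below is illustrative, not taken from this run:)

```python
def parse_slabinfo(text):
    """Parse /proc/slabinfo-style text into {cache_name: (active_objs, num_objs)}.

    slabinfo v2.x rows look like:
      name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : ...
    """
    stats = {}
    for line in text.splitlines():
        # Skip the version banner, the column-header comment, and blank lines.
        if line.startswith(("slabinfo", "#")) or not line.strip():
            continue
        fields = line.split()
        stats[fields[0]] = (int(fields[1]), int(fields[2]))
    return stats


# Illustrative sample in the same shape as /proc/slabinfo output.
sample = """slabinfo - version: 2.1
# name <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables ...
anon_vma            2642  2642     80   51    1 : tunables 0 0 0 : slabdata 52 52 0
kmalloc-256         1854  2538    256   16    1 : tunables 0 0 0 : slabdata 159 159 0
"""

stats = parse_slabinfo(sample)
print(stats["anon_vma"])     # (2642, 2642)
print(stats["kmalloc-256"])  # (1854, 2538)
```

On a live system the same function can be fed `open("/proc/slabinfo").read()` (reading it typically requires root).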
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/tbox_group/test/testcase:
gcc-4.9/performance/x86_64-rhel/debian-x86_64-2015-02-07.cgz/lkp-sbx04/pread3/will-it-scale
commit:
8025943c32f5de29b3fd548e917e24424297ca73
0aefb957de5f2a1d9467126114146b6c793cf180
8025943c32f5de29 0aefb957de5f2a1d9467126114
---------------- --------------------------
%stddev %change %stddev
\ | \
2729845 ± 1% -75.4% 671463 ± 0% will-it-scale.per_process_ops
2442702 ± 0% -76.6% 571872 ± 0% will-it-scale.per_thread_ops
0.47 ± 1% -96.7% 0.02 ± 0% will-it-scale.scalability
308.70 ± 0% -28.6% 220.33 ± 0% will-it-scale.time.elapsed_time
308.70 ± 0% -28.6% 220.33 ± 0% will-it-scale.time.elapsed_time.max
19174 ± 10% +359.0% 88004 ± 0% will-it-scale.time.involuntary_context_switches
48720 ± 0% -76.8% 11279 ± 0% will-it-scale.time.minor_page_faults
1375 ± 0% -96.5% 48.00 ± 0% will-it-scale.time.percent_of_cpu_this_job_got
3837 ± 0% -97.6% 93.85 ± 0% will-it-scale.time.system_time
407.32 ± 0% -96.7% 13.51 ± 0% will-it-scale.time.user_time
343.77 ± 0% -27.4% 249.62 ± 0% uptime.boot
13414 ± 0% -99.9% 13.94 ± 0% uptime.idle
597038 ± 3% -98.9% 6733 ± 2% softirqs.RCU
241694 ± 0% -100.0% 0.00 ± -1% softirqs.SCHED
4674474 ± 0% -97.5% 118125 ± 0% softirqs.TIMER
646731 ± 0% -12.5% 565617 ± 0% vmstat.memory.cache
27.00 ± 0% +18.5% 32.00 ± 0% vmstat.procs.r
2376 ± 1% +69.3% 4023 ± 0% vmstat.system.cs
29727 ± 0% -96.2% 1141 ± 0% vmstat.system.in
135531 ± 23% -100.0% 0.00 ± -1% latency_stats.avg.expand_files.__alloc_fd.get_unused_fd_flags.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
135781 ± 23% -100.0% 0.00 ± -1% latency_stats.max.expand_files.__alloc_fd.get_unused_fd_flags.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
5673 ± 46% +1003.3% 62588 ± 5% latency_stats.sum.ep_poll.SyS_epoll_wait.entry_SYSCALL_64_fastpath
542125 ± 23% -100.0% 0.00 ± -1% latency_stats.sum.expand_files.__alloc_fd.get_unused_fd_flags.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
633854 ± 41% -100.0% 0.00 ± -1% latency_stats.sum.wait_on_page_bit.__migration_entry_wait.migration_entry_wait.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
1.89 ± 3% -8.1% 1.73 ± 3% perf-profile.cycles.__fget_light.sys_pread64.entry_SYSCALL_64_fastpath
0.10 ± 10% +1190.2% 1.32 ± 7% perf-profile.cycles.__radix_tree_lookup.find_get_entry.find_lock_entry.shmem_getpage_gfp.shmem_file_read_iter
1.36 ± 6% +92.3% 2.62 ± 4% perf-profile.cycles.__radix_tree_lookup.radix_tree_lookup_slot.find_get_entry.find_lock_entry.shmem_getpage_gfp
4.88 ± 2% +47.8% 7.22 ± 1% perf-profile.cycles.find_get_entry.find_lock_entry.shmem_getpage_gfp.shmem_file_read_iter.__vfs_read
9.89 ± 1% +23.0% 12.17 ± 2% perf-profile.cycles.find_lock_entry.shmem_getpage_gfp.shmem_file_read_iter.__vfs_read.vfs_read
1.58 ± 6% +82.9% 2.88 ± 6% perf-profile.cycles.radix_tree_lookup_slot.find_get_entry.find_lock_entry.shmem_getpage_gfp.shmem_file_read_iter
0.96 ± 4% -13.0% 0.84 ± 6% perf-profile.cycles.selinux_file_permission.security_file_permission.rw_verify_area.vfs_read.sys_pread64
15.21 ± 1% +14.4% 17.41 ± 2% perf-profile.cycles.shmem_getpage_gfp.shmem_file_read_iter.__vfs_read.vfs_read.sys_pread64
308.70 ± 0% -28.6% 220.33 ± 0% time.elapsed_time
308.70 ± 0% -28.6% 220.33 ± 0% time.elapsed_time.max
19174 ± 10% +359.0% 88004 ± 0% time.involuntary_context_switches
48720 ± 0% -76.8% 11279 ± 0% time.minor_page_faults
1375 ± 0% -96.5% 48.00 ± 0% time.percent_of_cpu_this_job_got
3837 ± 0% -97.6% 93.85 ± 0% time.system_time
407.32 ± 0% -96.7% 13.51 ± 0% time.user_time
609.00 ± 3% -45.5% 332.00 ± 0% time.voluntary_context_switches
68593 ± 20% +578.4% 465372 ± 0% numa-numastat.node0.local_node
72714 ± 19% +540.0% 465372 ± 0% numa-numastat.node0.numa_hit
4120 ± 0% -100.0% 0.00 ± -1% numa-numastat.node0.other_node
115421 ± 6% -100.0% 0.00 ± -1% numa-numastat.node1.local_node
118512 ± 6% -100.0% 5.25 ± 15% numa-numastat.node1.numa_hit
169708 ± 10% -100.0% 0.00 ± -1% numa-numastat.node2.local_node
170741 ± 9% -100.0% 5.50 ± 9% numa-numastat.node2.numa_hit
437345 ± 3% -100.0% 0.00 ± -1% numa-numastat.node3.local_node
441466 ± 3% -100.0% 6.75 ± 12% numa-numastat.node3.numa_hit
4121 ± 0% -99.8% 6.75 ± 12% numa-numastat.node3.other_node
139447 ± 1% -9.0% 126896 ± 0% meminfo.Active
38756 ± 3% -29.3% 27402 ± 1% meminfo.Active(anon)
5811 ± 17% -30.2% 4055 ± 0% meminfo.AnonHugePages
28305 ± 4% -7.2% 26260 ± 1% meminfo.AnonPages
6076928 ± 0% -28.6% 4338176 ± 10% meminfo.DirectMap2M
136796 ± 14% -70.4% 40540 ± 4% meminfo.DirectMap4k
14553 ± 0% -12.6% 12719 ± 0% meminfo.KernelStack
53280 ± 0% -27.1% 38868 ± 0% meminfo.SReclaimable
103204 ± 0% -54.9% 46582 ± 0% meminfo.SUnreclaim
21239 ± 0% -47.4% 11165 ± 0% meminfo.Shmem
156486 ± 0% -45.4% 85450 ± 0% meminfo.Slab
43.08 ± 0% +131.6% 99.79 ± 0% turbostat.%Busy
1247 ± 0% +131.5% 2887 ± 0% turbostat.Avg_MHz
21.90 ± 0% -99.1% 0.21 ± 11% turbostat.CPU%c1
0.05 ± 45% -100.0% 0.00 ± -1% turbostat.CPU%c3
34.96 ± 0% -100.0% 0.00 ± -1% turbostat.CPU%c7
221.81 ± 0% -82.7% 38.45 ± 0% turbostat.CorWatt
71.25 ± 0% -29.1% 50.50 ± 1% turbostat.CoreTmp
28.42 ± 1% -100.0% 0.00 ± -1% turbostat.Pkg%pc2
0.12 ± 18% -100.0% 0.00 ± -1% turbostat.Pkg%pc6
71.50 ± 0% -29.4% 50.50 ± 1% turbostat.PkgTmp
276.79 ± 0% -81.2% 52.15 ± 0% turbostat.PkgWatt
33245984 ± 38% -100.0% 0.00 ± 0% cpuidle.C1-SNB.time
99348 ± 9% -100.0% 0.00 ± 0% cpuidle.C1-SNB.usage
20275044 ± 24% -100.0% 237.00 ±173% cpuidle.C1E-SNB.time
3278 ± 23% -100.0% 0.25 ±173% cpuidle.C1E-SNB.usage
525630 ± 27% -100.0% 0.00 ± 0% cpuidle.C3-SNB.time
336.75 ± 16% -100.0% 0.00 ± 0% cpuidle.C3-SNB.usage
13379 ± 58% -96.0% 533.25 ±173% cpuidle.C6-SNB.time
30.75 ± 39% -99.2% 0.25 ±173% cpuidle.C6-SNB.usage
1.123e+10 ± 0% -100.0% 449847 ± 10% cpuidle.C7-SNB.time
314007 ± 1% -100.0% 12.25 ± 15% cpuidle.C7-SNB.usage
25173571 ± 14% -99.8% 55758 ± 93% cpuidle.POLL.time
844.25 ± 16% -99.9% 1.00 ± 70% cpuidle.POLL.usage
9685 ± 3% -29.2% 6858 ± 1% proc-vmstat.nr_active_anon
166.75 ± 3% -56.7% 72.25 ± 1% proc-vmstat.nr_dirtied
909.25 ± 0% -12.6% 794.50 ± 0% proc-vmstat.nr_kernel_stack
5309 ± 0% -47.4% 2791 ± 0% proc-vmstat.nr_shmem
13319 ± 0% -27.1% 9716 ± 0% proc-vmstat.nr_slab_reclaimable
25800 ± 0% -54.9% 11645 ± 0% proc-vmstat.nr_slab_unreclaimable
169.25 ± 1% -43.4% 95.75 ± 1% proc-vmstat.nr_written
7589 ± 1% -96.4% 273.00 ± 0% proc-vmstat.numa_hint_faults
6404 ± 1% -95.7% 273.00 ± 0% proc-vmstat.numa_hint_faults_local
799545 ± 0% -42.3% 461637 ± 0% proc-vmstat.numa_hit
788638 ± 0% -41.5% 461620 ± 0% proc-vmstat.numa_local
10907 ± 11% -99.8% 17.50 ± 6% proc-vmstat.numa_other
24713 ± 0% -96.5% 867.00 ± 0% proc-vmstat.numa_pte_updates
5531 ± 0% -65.9% 1884 ± 2% proc-vmstat.pgactivate
19003 ± 15% +392.3% 93557 ± 0% proc-vmstat.pgalloc_dma32
814564 ± 0% -52.2% 389240 ± 0% proc-vmstat.pgalloc_normal
875402 ± 0% -38.4% 539508 ± 0% proc-vmstat.pgfault
831290 ± 0% -42.1% 481073 ± 0% proc-vmstat.pgfree
31759 ± 11% +62.6% 51626 ± 3% numa-meminfo.node0.Active
5751 ± 70% +376.8% 27420 ± 1% numa-meminfo.node0.Active(anon)
5825 ± 69% +351.7% 26310 ± 1% numa-meminfo.node0.AnonPages
93660 ± 4% +10.5% 103482 ± 1% numa-meminfo.node0.Inactive
2403 ±147% +316.6% 10013 ± 0% numa-meminfo.node0.Inactive(anon)
7131 ± 2% +24.3% 8865 ± 0% numa-meminfo.node0.KernelStack
5105 ± 50% +89.1% 9656 ± 0% numa-meminfo.node0.Mapped
770.00 ± 20% +430.2% 4082 ± 0% numa-meminfo.node0.PageTables
13064 ± 4% +38.3% 18074 ± 3% numa-meminfo.node0.SReclaimable
26763 ± 6% -16.5% 22336 ± 1% numa-meminfo.node0.SUnreclaim
2502 ±141% +344.9% 11134 ± 0% numa-meminfo.node0.Shmem
34698 ± 15% -27.1% 25293 ± 3% numa-meminfo.node1.Active
10512 ± 56% -100.0% 0.00 ± -1% numa-meminfo.node1.Active(anon)
7935 ± 50% -100.0% 0.00 ± -1% numa-meminfo.node1.AnonPages
2500 ±139% -100.0% 0.00 ± -1% numa-meminfo.node1.Inactive(anon)
2344 ± 4% -45.4% 1280 ± 0% numa-meminfo.node1.KernelStack
289660 ± 21% -42.7% 166011 ± 0% numa-meminfo.node1.MemUsed
975.00 ± 28% -100.0% 0.00 ± -1% numa-meminfo.node1.PageTables
12758 ± 10% -48.5% 6570 ± 10% numa-meminfo.node1.SReclaimable
25154 ± 4% -69.6% 7652 ± 4% numa-meminfo.node1.SUnreclaim
5220 ± 92% -100.0% 0.00 ± -1% numa-meminfo.node1.Shmem
37913 ± 6% -62.5% 14222 ± 6% numa-meminfo.node1.Slab
41206 ± 10% -39.9% 24753 ± 2% numa-meminfo.node2.Active
15316 ± 30% -100.0% 0.00 ± -1% numa-meminfo.node2.Active(anon)
2582 ± 72% -100.0% 0.00 ± -1% numa-meminfo.node2.AnonHugePages
7448 ± 31% -100.0% 0.00 ± -1% numa-meminfo.node2.AnonPages
129976 ± 3% -9.9% 117092 ± 0% numa-meminfo.node2.FilePages
4800 ± 83% -100.0% 0.00 ± -1% numa-meminfo.node2.Inactive(anon)
2403 ± 7% -46.7% 1280 ± 0% numa-meminfo.node2.KernelStack
297365 ± 21% -43.6% 167858 ± 0% numa-meminfo.node2.MemUsed
14695 ± 8% -49.2% 7462 ± 5% numa-meminfo.node2.SReclaimable
24487 ± 6% -64.9% 8590 ± 3% numa-meminfo.node2.SUnreclaim
12786 ± 30% -100.0% 0.00 ± -1% numa-meminfo.node2.Shmem
39183 ± 5% -59.0% 16052 ± 4% numa-meminfo.node2.Slab
31752 ± 6% -20.5% 25231 ± 4% numa-meminfo.node3.Active
7145 ± 10% -100.0% 0.00 ± -1% numa-meminfo.node3.Active(anon)
7101 ± 9% -100.0% 0.00 ± -1% numa-meminfo.node3.AnonPages
2668 ± 2% -52.0% 1280 ± 0% numa-meminfo.node3.KernelStack
226009 ± 0% -26.3% 166525 ± 0% numa-meminfo.node3.MemUsed
1385 ± 5% -100.0% 0.00 ± -1% numa-meminfo.node3.PageTables
12758 ± 9% -47.0% 6756 ± 6% numa-meminfo.node3.SReclaimable
26790 ± 5% -70.2% 7983 ± 3% numa-meminfo.node3.SUnreclaim
718.50 ± 37% -100.0% 0.00 ± -1% numa-meminfo.node3.Shmem
39548 ± 1% -62.7% 14739 ± 4% numa-meminfo.node3.Slab
1436 ± 70% +376.9% 6851 ± 1% numa-vmstat.node0.nr_active_anon
1455 ± 69% +351.5% 6573 ± 1% numa-vmstat.node0.nr_anon_pages
600.50 ±147% +316.9% 2503 ± 0% numa-vmstat.node0.nr_inactive_anon
445.50 ± 2% +24.2% 553.50 ± 0% numa-vmstat.node0.nr_kernel_stack
1275 ± 50% +89.1% 2413 ± 0% numa-vmstat.node0.nr_mapped
192.00 ± 20% +430.3% 1018 ± 0% numa-vmstat.node0.nr_page_table_pages
625.25 ±141% +345.3% 2784 ± 0% numa-vmstat.node0.nr_shmem
3265 ± 4% +38.3% 4517 ± 3% numa-vmstat.node0.nr_slab_reclaimable
6690 ± 6% -16.5% 5583 ± 1% numa-vmstat.node0.nr_slab_unreclaimable
106747 ± 12% +242.0% 365071 ± 0% numa-vmstat.node0.numa_hit
70620 ± 19% +416.9% 365071 ± 0% numa-vmstat.node0.numa_local
36126 ± 0% -100.0% 0.00 ± -1% numa-vmstat.node0.numa_other
2626 ± 56% -100.0% 0.00 ± -1% numa-vmstat.node1.nr_active_anon
1982 ± 50% -100.0% 0.00 ± -1% numa-vmstat.node1.nr_anon_pages
624.75 ±139% -100.0% 0.00 ± -1% numa-vmstat.node1.nr_inactive_anon
146.00 ± 5% -45.2% 80.00 ± 0% numa-vmstat.node1.nr_kernel_stack
243.00 ± 28% -100.0% 0.00 ± -1% numa-vmstat.node1.nr_page_table_pages
1304 ± 92% -100.0% 0.00 ± -1% numa-vmstat.node1.nr_shmem
3189 ± 10% -48.5% 1642 ± 10% numa-vmstat.node1.nr_slab_reclaimable
6288 ± 4% -69.6% 1912 ± 4% numa-vmstat.node1.nr_slab_unreclaimable
211207 ± 6% -82.0% 38035 ± 0% numa-vmstat.node1.numa_hit
193131 ± 8% -100.0% 0.00 ± -1% numa-vmstat.node1.numa_local
3828 ± 30% -100.0% 0.00 ± -1% numa-vmstat.node2.nr_active_anon
1860 ± 31% -100.0% 0.00 ± -1% numa-vmstat.node2.nr_anon_pages
32493 ± 3% -9.9% 29273 ± 0% numa-vmstat.node2.nr_file_pages
1199 ± 83% -100.0% 0.00 ± -1% numa-vmstat.node2.nr_inactive_anon
149.75 ± 7% -46.6% 80.00 ± 0% numa-vmstat.node2.nr_kernel_stack
227.00 ± 7% -100.0% 0.00 ± -1% numa-vmstat.node2.nr_page_table_pages
3196 ± 30% -100.0% 0.00 ± -1% numa-vmstat.node2.nr_shmem
3673 ± 8% -49.2% 1865 ± 5% numa-vmstat.node2.nr_slab_reclaimable
6121 ± 6% -64.9% 2147 ± 3% numa-vmstat.node2.nr_slab_unreclaimable
200562 ± 13% -81.0% 38202 ± 0% numa-vmstat.node2.numa_hit
168527 ± 16% -100.0% 0.00 ± -1% numa-vmstat.node2.numa_local
1785 ± 10% -100.0% 0.00 ± -1% numa-vmstat.node3.nr_active_anon
1774 ± 10% -100.0% 0.00 ± -1% numa-vmstat.node3.nr_anon_pages
143.00 ± 41% -100.0% 0.00 ± -1% numa-vmstat.node3.nr_inactive_anon
166.50 ± 2% -52.0% 80.00 ± 0% numa-vmstat.node3.nr_kernel_stack
346.00 ± 5% -100.0% 0.00 ± -1% numa-vmstat.node3.nr_page_table_pages
179.25 ± 37% -100.0% 0.00 ± -1% numa-vmstat.node3.nr_shmem
3189 ± 9% -47.0% 1689 ± 6% numa-vmstat.node3.nr_slab_reclaimable
6697 ± 5% -70.2% 1995 ± 3% numa-vmstat.node3.nr_slab_unreclaimable
285548 ± 3% -86.7% 38047 ± 0% numa-vmstat.node3.numa_hit
242535 ± 4% -100.0% 0.00 ± -1% numa-vmstat.node3.numa_local
43013 ± 0% -11.5% 38047 ± 0% numa-vmstat.node3.numa_other
92008 ± 0% -11.4% 81556 ± 0% slabinfo.Acpi-Operand.active_objs
1643 ± 0% -11.2% 1458 ± 0% slabinfo.Acpi-Operand.active_slabs
92008 ± 0% -11.2% 81690 ± 0% slabinfo.Acpi-Operand.num_objs
1643 ± 0% -11.2% 1458 ± 0% slabinfo.Acpi-Operand.num_slabs
2476 ± 0% -96.9% 78.00 ± 0% slabinfo.Acpi-ParseExt.active_objs
2476 ± 0% -96.9% 78.00 ± 0% slabinfo.Acpi-ParseExt.num_objs
56099 ± 0% -17.2% 46473 ± 0% slabinfo.Acpi-State.active_objs
1099 ± 0% -17.1% 911.00 ± 0% slabinfo.Acpi-State.active_slabs
56099 ± 0% -17.1% 46498 ± 0% slabinfo.Acpi-State.num_objs
1099 ± 0% -17.1% 911.00 ± 0% slabinfo.Acpi-State.num_slabs
2397 ± 2% -83.5% 396.00 ± 0% slabinfo.RAW.active_objs
2397 ± 2% -83.5% 396.00 ± 0% slabinfo.RAW.num_objs
131.75 ± 5% -87.1% 17.00 ± 0% slabinfo.TCP.active_objs
131.75 ± 5% -87.1% 17.00 ± 0% slabinfo.TCP.num_objs
229.50 ± 6% -85.2% 34.00 ± 0% slabinfo.UDP.active_objs
229.50 ± 6% -85.2% 34.00 ± 0% slabinfo.UDP.num_objs
11110 ± 6% -84.8% 1686 ± 1% slabinfo.anon_vma.active_objs
217.50 ± 6% -84.0% 34.75 ± 1% slabinfo.anon_vma.active_slabs
11110 ± 6% -83.9% 1793 ± 0% slabinfo.anon_vma.num_objs
217.50 ± 6% -84.0% 34.75 ± 1% slabinfo.anon_vma.num_slabs
1168 ± 9% -93.8% 73.00 ± 0% slabinfo.avc_xperms_node.active_objs
1168 ± 9% -93.8% 73.00 ± 0% slabinfo.avc_xperms_node.num_objs
750.75 ± 9% -94.8% 39.00 ± 0% slabinfo.bdev_cache.active_objs
750.75 ± 9% -94.8% 39.00 ± 0% slabinfo.bdev_cache.num_objs
157.50 ± 7% -82.2% 28.00 ± 0% slabinfo.blkdev_queue.active_objs
157.50 ± 7% -82.2% 28.00 ± 0% slabinfo.blkdev_queue.num_objs
1682 ± 5% -92.2% 132.00 ± 0% slabinfo.blkdev_requests.active_objs
1682 ± 5% -92.2% 132.00 ± 0% slabinfo.blkdev_requests.num_objs
263.25 ± 12% -85.2% 39.00 ± 0% slabinfo.buffer_head.active_objs
263.25 ± 12% -85.2% 39.00 ± 0% slabinfo.buffer_head.num_objs
61989 ± 0% -25.5% 46171 ± 0% slabinfo.dentry.active_objs
1476 ± 0% -25.6% 1099 ± 0% slabinfo.dentry.active_slabs
62035 ± 0% -25.5% 46190 ± 0% slabinfo.dentry.num_objs
1476 ± 0% -25.6% 1099 ± 0% slabinfo.dentry.num_slabs
885.00 ± 12% -95.6% 39.00 ± 0% slabinfo.file_lock_cache.active_objs
885.00 ± 12% -95.6% 39.00 ± 0% slabinfo.file_lock_cache.num_objs
3038 ± 2% -90.7% 284.00 ± 0% slabinfo.files_cache.active_objs
3038 ± 2% -90.7% 284.00 ± 0% slabinfo.files_cache.num_objs
3867 ± 1% -9.9% 3485 ± 0% slabinfo.ftrace_event_field.active_objs
3867 ± 1% -9.9% 3485 ± 0% slabinfo.ftrace_event_field.num_objs
2145 ± 0% -44.8% 1185 ± 0% slabinfo.idr_layer_cache.active_objs
2145 ± 0% -44.8% 1185 ± 0% slabinfo.idr_layer_cache.num_objs
46395 ± 0% -15.7% 39095 ± 0% slabinfo.inode_cache.active_objs
813.50 ± 0% -15.8% 685.00 ± 0% slabinfo.inode_cache.active_slabs
46395 ± 0% -15.7% 39095 ± 0% slabinfo.inode_cache.num_objs
813.50 ± 0% -15.8% 685.00 ± 0% slabinfo.inode_cache.num_slabs
68442 ± 0% -30.5% 47600 ± 0% slabinfo.kernfs_node_cache.active_objs
1006 ± 0% -30.5% 700.00 ± 0% slabinfo.kernfs_node_cache.active_slabs
68442 ± 0% -30.5% 47600 ± 0% slabinfo.kernfs_node_cache.num_objs
1006 ± 0% -30.5% 700.00 ± 0% slabinfo.kernfs_node_cache.num_slabs
3607 ± 1% -67.3% 1180 ± 0% slabinfo.kmalloc-1024.active_objs
3685 ± 2% -67.0% 1215 ± 0% slabinfo.kmalloc-1024.num_objs
7135 ± 2% -70.7% 2090 ± 0% slabinfo.kmalloc-128.active_objs
7135 ± 2% -69.5% 2176 ± 0% slabinfo.kmalloc-128.num_objs
23282 ± 0% -61.1% 9055 ± 0% slabinfo.kmalloc-16.active_objs
23649 ± 0% -59.4% 9598 ± 1% slabinfo.kmalloc-16.num_objs
11567 ± 5% -60.9% 4523 ± 1% slabinfo.kmalloc-192.active_objs
277.75 ± 4% -56.5% 120.75 ± 1% slabinfo.kmalloc-192.active_slabs
11697 ± 4% -56.3% 5110 ± 1% slabinfo.kmalloc-192.num_objs
277.75 ± 4% -56.5% 120.75 ± 1% slabinfo.kmalloc-192.num_slabs
3859 ± 5% -78.5% 830.00 ± 0% slabinfo.kmalloc-2048.active_objs
250.75 ± 6% -76.1% 60.00 ± 2% slabinfo.kmalloc-2048.active_slabs
4020 ± 5% -76.1% 962.25 ± 1% slabinfo.kmalloc-2048.num_objs
250.75 ± 6% -76.1% 60.00 ± 2% slabinfo.kmalloc-2048.num_slabs
20387 ± 10% -91.0% 1836 ± 0% slabinfo.kmalloc-256.active_objs
340.50 ± 9% -84.8% 51.75 ± 3% slabinfo.kmalloc-256.active_slabs
21813 ± 9% -84.7% 3341 ± 3% slabinfo.kmalloc-256.num_objs
340.50 ± 9% -84.8% 51.75 ± 3% slabinfo.kmalloc-256.num_slabs
27080 ± 4% -44.5% 15023 ± 0% slabinfo.kmalloc-32.active_objs
27218 ± 4% -41.2% 15995 ± 1% slabinfo.kmalloc-32.num_objs
1304 ± 0% -40.0% 782.50 ± 0% slabinfo.kmalloc-4096.active_objs
1388 ± 0% -39.8% 835.00 ± 0% slabinfo.kmalloc-4096.num_objs
12429 ± 3% -63.8% 4504 ± 1% slabinfo.kmalloc-512.active_objs
194.75 ± 3% -60.7% 76.50 ± 2% slabinfo.kmalloc-512.active_slabs
12498 ± 4% -60.5% 4938 ± 2% slabinfo.kmalloc-512.num_objs
194.75 ± 3% -60.7% 76.50 ± 2% slabinfo.kmalloc-512.num_slabs
45108 ± 3% -46.9% 23971 ± 0% slabinfo.kmalloc-64.active_objs
705.00 ± 3% -44.7% 389.75 ± 0% slabinfo.kmalloc-64.active_slabs
45157 ± 3% -44.7% 24986 ± 0% slabinfo.kmalloc-64.num_objs
705.00 ± 3% -44.7% 389.75 ± 0% slabinfo.kmalloc-64.num_slabs
37504 ± 0% -89.1% 4096 ± 0% slabinfo.kmalloc-8.active_objs
37504 ± 0% -89.1% 4096 ± 0% slabinfo.kmalloc-8.num_objs
476.00 ± 1% -91.8% 39.25 ± 1% slabinfo.kmalloc-8192.active_objs
119.00 ± 1% -91.2% 10.50 ± 4% slabinfo.kmalloc-8192.active_slabs
476.00 ± 1% -91.2% 42.00 ± 4% slabinfo.kmalloc-8192.num_objs
119.00 ± 1% -91.2% 10.50 ± 4% slabinfo.kmalloc-8192.num_slabs
5147 ± 6% -65.1% 1794 ± 0% slabinfo.kmalloc-96.active_objs
5409 ± 5% -60.8% 2122 ± 0% slabinfo.kmalloc-96.num_objs
484.50 ± 18% -68.4% 153.00 ± 0% slabinfo.kmem_cache.active_objs
484.50 ± 18% -68.4% 153.00 ± 0% slabinfo.kmem_cache.num_objs
865.00 ± 13% -48.1% 449.00 ± 0% slabinfo.kmem_cache_node.active_objs
928.00 ± 12% -44.8% 512.00 ± 0% slabinfo.kmem_cache_node.num_objs
1207 ± 3% -89.9% 122.25 ± 0% slabinfo.mm_struct.active_objs
1207 ± 3% -89.6% 125.50 ± 0% slabinfo.mm_struct.num_objs
1954 ± 4% -94.6% 105.00 ± 20% slabinfo.mnt_cache.active_objs
1954 ± 4% -94.6% 105.00 ± 20% slabinfo.mnt_cache.num_objs
152.00 ± 9% -78.9% 32.00 ± 0% slabinfo.nfs_inode_cache.active_objs
152.00 ± 9% -78.9% 32.00 ± 0% slabinfo.nfs_inode_cache.num_objs
10117 ± 1% -47.2% 5341 ± 0% slabinfo.proc_inode_cache.active_objs
10117 ± 1% -47.2% 5341 ± 0% slabinfo.proc_inode_cache.num_objs
11020 ± 0% -29.4% 7784 ± 0% slabinfo.radix_tree_node.active_objs
11020 ± 0% -29.4% 7784 ± 0% slabinfo.radix_tree_node.num_objs
3968 ± 1% -59.3% 1616 ± 0% slabinfo.shmem_inode_cache.active_objs
3968 ± 1% -59.3% 1616 ± 0% slabinfo.shmem_inode_cache.num_objs
1749 ± 2% -54.9% 789.50 ± 0% slabinfo.sighand_cache.active_objs
1768 ± 1% -54.8% 799.50 ± 0% slabinfo.sighand_cache.num_objs
2843 ± 1% -66.5% 951.50 ± 0% slabinfo.signal_cache.active_objs
2843 ± 1% -66.5% 951.50 ± 0% slabinfo.signal_cache.num_objs
3267 ± 0% -98.3% 55.00 ± 0% slabinfo.sigqueue.active_objs
3267 ± 0% -98.3% 55.00 ± 0% slabinfo.sigqueue.num_objs
3353 ± 2% -84.8% 510.00 ± 0% slabinfo.sock_inode_cache.active_objs
3353 ± 2% -84.8% 510.00 ± 0% slabinfo.sock_inode_cache.num_objs
1207 ± 0% -33.9% 797.75 ± 0% slabinfo.task_struct.active_objs
413.75 ± 0% -35.2% 268.25 ± 0% slabinfo.task_struct.active_slabs
1242 ± 0% -35.2% 804.75 ± 0% slabinfo.task_struct.num_objs
413.75 ± 0% -35.2% 268.25 ± 0% slabinfo.task_struct.num_slabs
1018 ± 0% -96.3% 38.00 ± 0% slabinfo.taskstats.active_objs
1018 ± 0% -96.3% 38.00 ± 0% slabinfo.taskstats.num_objs
2645 ± 2% -33.9% 1748 ± 0% slabinfo.trace_event_file.active_objs
2645 ± 2% -33.9% 1748 ± 0% slabinfo.trace_event_file.num_objs
20850 ± 6% -85.1% 3111 ± 0% slabinfo.vm_area_struct.active_objs
473.00 ± 6% -83.9% 76.25 ± 1% slabinfo.vm_area_struct.active_slabs
20850 ± 6% -83.8% 3381 ± 1% slabinfo.vm_area_struct.num_objs
473.00 ± 6% -83.9% 76.25 ± 1% slabinfo.vm_area_struct.num_slabs
=========================================================================================
compiler/cpufreq_governor/disk/filesize/fs/iterations/kconfig/nr_directories/nr_files_per_directory/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase:
gcc-4.9/performance/1HDD/8K/f2fs/1x/x86_64-rhel/16d/256fpd/32t/debian-x86_64-2015-02-07.cgz/fsyncBeforeClose/nhm4/400M/fsmark
commit:
8025943c32f5de29b3fd548e917e24424297ca73
0aefb957de5f2a1d9467126114146b6c793cf180
8025943c32f5de29 0aefb957de5f2a1d9467126114
---------------- --------------------------
%stddev %change %stddev
\ | \
4869620 ± 8% -16.3% 4077423 ± 5% fsmark.app_overhead
511.70 ± 0% +15.0% 588.67 ± 0% fsmark.files_per_sec
100.74 ± 0% -13.3% 87.33 ± 0% fsmark.time.elapsed_time
100.74 ± 0% -13.3% 87.33 ± 0% fsmark.time.elapsed_time.max
1311972 ± 0% -1.8% 1288254 ± 0% fsmark.time.file_system_outputs
14906 ± 1% -32.6% 10047 ± 1% fsmark.time.involuntary_context_switches
15.00 ± 0% -86.7% 2.00 ± 0% fsmark.time.percent_of_cpu_this_job_got
15.08 ± 0% -85.2% 2.23 ± 1% fsmark.time.system_time
499439 ± 0% -68.3% 158519 ± 0% fsmark.time.voluntary_context_switches
48548 ± 6% -26.4% 35748 ± 4% meminfo.DirectMap4k
32438 ± 0% -89.0% 3575 ± 2% proc-vmstat.pgactivate
130992 ± 0% -11.2% 116266 ± 0% proc-vmstat.pgfault
159.65 ± 41% -32.4% 107.88 ± 0% uptime.boot
616.98 ± 86% -97.8% 13.79 ± 1% uptime.idle
6360 ± 0% +13.6% 7227 ± 0% vmstat.io.bo
600.00 ± 1% -64.7% 212.00 ± 0% vmstat.memory.buff
11949 ± 0% -54.2% 5474 ± 0% vmstat.system.cs
23824 ± 0% +39.5% 33225 ± 0% softirqs.BLOCK
17370 ± 4% -76.3% 4113 ± 0% softirqs.RCU
16565 ± 5% -100.0% 0.00 ± -1% softirqs.SCHED
26434 ± 2% -34.8% 17241 ± 0% softirqs.TIMER
2.24 ± 0% +136.4% 5.29 ± 0% turbostat.%Busy
66.50 ± 0% +165.4% 176.50 ± 0% turbostat.Avg_MHz
2975 ± 0% +12.0% 3333 ± 0% turbostat.Bzy_MHz
44.29 ± 1% +113.8% 94.70 ± 0% turbostat.CPU%c1
44.75 ± 1% -100.0% 0.00 ± -1% turbostat.CPU%c3
8.72 ± 3% -100.0% 0.00 ± -1% turbostat.CPU%c6
100.74 ± 0% -13.3% 87.33 ± 0% time.elapsed_time
100.74 ± 0% -13.3% 87.33 ± 0% time.elapsed_time.max
14906 ± 1% -32.6% 10047 ± 1% time.involuntary_context_switches
15.00 ± 0% -86.7% 2.00 ± 0% time.percent_of_cpu_this_job_got
15.08 ± 0% -85.2% 2.23 ± 1% time.system_time
0.56 ± 6% -55.2% 0.25 ± 7% time.user_time
499439 ± 0% -68.3% 158519 ± 0% time.voluntary_context_switches
1.933e+08 ± 2% -82.5% 33777725 ± 0% cpuidle.C1-NHM.time
205957 ± 2% -69.1% 63597 ± 1% cpuidle.C1-NHM.usage
27418772 ± 2% -88.7% 3087064 ± 8% cpuidle.C1E-NHM.time
42420 ± 2% -96.8% 1338 ± 4% cpuidle.C1E-NHM.usage
3.989e+08 ± 0% -90.8% 36819105 ± 0% cpuidle.C3-NHM.time
204527 ± 0% -97.9% 4195 ± 4% cpuidle.C3-NHM.usage
1.77e+08 ± 1% -94.4% 9834887 ± 2% cpuidle.C6-NHM.time
95424 ± 1% -99.3% 692.25 ± 3% cpuidle.C6-NHM.usage
1101 ± 9% -26.8% 805.75 ± 14% cpuidle.POLL.time
10030 ± 59% +770.4% 87308 ± 4% latency_stats.avg.call_rwsem_down_read_failed.f2fs_write_node_page.[f2fs].sync_node_pages.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
2537 ± 15% +816.6% 23260 ± 31% latency_stats.avg.f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
15329 ± 0% -79.0% 3217 ± 10% latency_stats.avg.wait_on_page_bit.__filemap_fdatawait_range.filemap_fdatawait_range.filemap_write_and_wait_range.f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
19773 ± 1% -99.8% 35.50 ± 36% latency_stats.hits.call_rwsem_down_read_failed.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].wait_on_node_pages_writeback.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
20025 ± 1% -100.0% 9.50 ± 21% latency_stats.hits.call_rwsem_down_read_failed.get_node_info.[f2fs].alloc_nid.[f2fs].f2fs_new_inode.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
22898 ± 0% -100.0% 8.00 ± 27% latency_stats.hits.call_rwsem_down_read_failed.get_node_info.[f2fs].new_node_page.[f2fs].new_inode_page.[f2fs].init_inode_metadata.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open.do_sys_open.SyS_open
35764 ± 0% -100.0% 12.50 ± 38% latency_stats.hits.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write.SyS_write
20189 ± 0% -100.0% 6.00 ± 54% latency_stats.hits.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_get_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs]
45245 ± 0% -100.0% 9.50 ± 18% latency_stats.hits.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].get_new_data_page.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open
11452 ± 1% -99.9% 9.00 ± 22% latency_stats.hits.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].get_read_data_page.[f2fs].find_data_page.[f2fs].f2fs_find_entry.[f2fs].f2fs_lookup.[f2fs].lookup_real.path_openat.do_filp_open
14357 ± 1% -99.9% 9.25 ± 56% latency_stats.hits.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].update_inode_page.[f2fs].f2fs_write_end.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write
30422 ± 0% -100.0% 7.75 ± 27% latency_stats.hits.call_rwsem_down_write_failed.get_node_info.[f2fs].alloc_nid.[f2fs].f2fs_new_inode.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
32498 ± 0% -100.0% 8.25 ± 17% latency_stats.hits.call_rwsem_down_write_failed.set_node_addr.[f2fs].new_node_page.[f2fs].new_inode_page.[f2fs].init_inode_metadata.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open.do_sys_open.SyS_open
21271 ± 1% -94.7% 1134 ± 13% latency_stats.hits.f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
9992 ± 1% +675.7% 77512 ± 1% latency_stats.hits.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
8907 ± 37% -99.5% 42.00 ± 16% latency_stats.max.call_rwsem_down_read_failed.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].wait_on_node_pages_writeback.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
11371 ± 16% -99.7% 36.50 ± 20% latency_stats.max.call_rwsem_down_write_failed.f2fs_submit_merged_bio.[f2fs].sync_node_pages.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
12480 ± 1% -98.8% 148.00 ± 32% latency_stats.max.call_rwsem_down_write_failed.f2fs_submit_page_mbio.[f2fs].do_write_page.[f2fs].write_node_page.[f2fs].f2fs_write_node_page.[f2fs].sync_node_pages.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
134008 ± 14% -90.4% 12909 ± 29% latency_stats.sum.alloc_nid.[f2fs].f2fs_new_inode.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
10767 ± 26% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.add_free_nid.[f2fs].build_free_nids.[f2fs].alloc_nid.[f2fs].f2fs_new_inode.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
5016725 ± 18% +305.6% 20346108 ± 9% latency_stats.sum.call_rwsem_down_read_failed.f2fs_convert_inline_inode.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
1693586 ± 4% -100.0% 627.25 ± 44% latency_stats.sum.call_rwsem_down_read_failed.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].wait_on_node_pages_writeback.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
3462045 ± 1% -100.0% 444.00 ± 58% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].alloc_nid.[f2fs].f2fs_new_inode.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
23890 ± 11% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].alloc_nid.[f2fs].f2fs_new_inode.[f2fs].f2fs_mkdir.[f2fs].vfs_mkdir.SyS_mkdir.entry_SYSCALL_64_fastpath
475110 ± 2% -100.0% 161.75 ±109% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].f2fs_write_node_page.[f2fs].sync_node_pages.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
4350665 ± 1% -100.0% 329.00 ± 43% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].new_node_page.[f2fs].new_inode_page.[f2fs].init_inode_metadata.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open.do_sys_open.SyS_open
28599 ± 3% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].new_node_page.[f2fs].new_inode_page.[f2fs].init_inode_metadata.[f2fs].__f2fs_add_link.[f2fs].f2fs_mkdir.[f2fs].vfs_mkdir.SyS_mkdir.entry_SYSCALL_64_fastpath
875527 ± 1% -100.0% 217.50 ± 86% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].f2fs_convert_inline_inode.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write
5262125 ± 0% -100.0% 543.25 ± 61% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write.SyS_write
610751 ± 2% -100.0% 127.75 ± 66% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].do_write_data_page.[f2fs].f2fs_write_data_page.[f2fs].__f2fs_writepage.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range
2787585 ± 1% -100.0% 261.50 ± 83% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_get_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs]
9524677 ± 1% -100.0% 412.75 ± 23% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].get_new_data_page.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open
35645 ± 6% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].get_new_data_page.[f2fs].__f2fs_add_link.[f2fs].f2fs_mkdir.[f2fs].vfs_mkdir.SyS_mkdir.entry_SYSCALL_64_fastpath
1403361 ± 1% -100.0% 377.50 ± 33% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].get_read_data_page.[f2fs].find_data_page.[f2fs].f2fs_find_entry.[f2fs].f2fs_lookup.[f2fs].lookup_real.path_openat.do_filp_open
93443 ± 2% -99.9% 60.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].update_inode_page.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open.do_sys_open.SyS_open
30595 ± 4% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].update_inode_page.[f2fs].__f2fs_add_link.[f2fs].f2fs_mkdir.[f2fs].vfs_mkdir.SyS_mkdir.entry_SYSCALL_64_fastpath
1842740 ± 2% -100.0% 349.00 ± 68% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].update_inode_page.[f2fs].f2fs_write_end.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write
437921 ± 3% -100.0% 85.75 ± 93% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].write_data_page.[f2fs].do_write_data_page.[f2fs].f2fs_write_data_page.[f2fs].__f2fs_writepage.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.f2fs_sync_file.[f2fs]
978238 ± 2% -100.0% 296.25 ± 43% latency_stats.sum.call_rwsem_down_read_failed.is_checkpointed_node.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
743986 ± 2% -100.0% 202.75 ± 94% latency_stats.sum.call_rwsem_down_read_failed.need_dentry_mark.[f2fs].sync_node_pages.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
758052 ± 1% -100.0% 289.50 ± 71% latency_stats.sum.call_rwsem_down_read_failed.need_inode_block_update.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
211383 ± 24% -87.0% 27536 ± 57% latency_stats.sum.call_rwsem_down_write_failed.block_operations.[f2fs].write_checkpoint.[f2fs].f2fs_sync_fs.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
302311 ± 5% -100.0% 76.00 ± 24% latency_stats.sum.call_rwsem_down_write_failed.f2fs_submit_merged_bio.[f2fs].sync_node_pages.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
1091213 ± 6% -99.9% 1286 ± 38% latency_stats.sum.call_rwsem_down_write_failed.f2fs_submit_page_mbio.[f2fs].do_write_page.[f2fs].write_node_page.[f2fs].f2fs_write_node_page.[f2fs].sync_node_pages.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
4675431 ± 1% -100.0% 423.50 ± 60% latency_stats.sum.call_rwsem_down_write_failed.get_node_info.[f2fs].alloc_nid.[f2fs].f2fs_new_inode.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
38734 ± 13% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_write_failed.get_node_info.[f2fs].alloc_nid.[f2fs].f2fs_new_inode.[f2fs].f2fs_mkdir.[f2fs].vfs_mkdir.SyS_mkdir.entry_SYSCALL_64_fastpath
531207 ± 2% -100.0% 230.00 ± 38% latency_stats.sum.call_rwsem_down_write_failed.set_node_addr.[f2fs].f2fs_write_node_page.[f2fs].sync_node_pages.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
3517334 ± 1% -100.0% 324.00 ± 36% latency_stats.sum.call_rwsem_down_write_failed.set_node_addr.[f2fs].new_node_page.[f2fs].new_inode_page.[f2fs].init_inode_metadata.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open.do_sys_open.SyS_open
22754 ± 9% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_write_failed.set_node_addr.[f2fs].new_node_page.[f2fs].new_inode_page.[f2fs].init_inode_metadata.[f2fs].__f2fs_add_link.[f2fs].f2fs_mkdir.[f2fs].vfs_mkdir.SyS_mkdir.entry_SYSCALL_64_fastpath
7.936e+08 ± 0% -79.2% 1.648e+08 ± 10% latency_stats.sum.wait_on_page_bit.__filemap_fdatawait_range.filemap_fdatawait_range.filemap_write_and_wait_range.f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
6734 ± 3% -23.9% 5124 ± 0% slabinfo.Acpi-Operand.active_objs
6734 ± 3% -23.9% 5124 ± 0% slabinfo.Acpi-Operand.num_objs
321.50 ± 5% -75.7% 78.00 ± 0% slabinfo.Acpi-ParseExt.active_objs
321.50 ± 5% -75.7% 78.00 ± 0% slabinfo.Acpi-ParseExt.num_objs
360.00 ± 0% -47.5% 189.00 ± 8% slabinfo.RAW.active_objs
360.00 ± 0% -47.5% 189.00 ± 8% slabinfo.RAW.num_objs
170.00 ± 14% -80.0% 34.00 ± 0% slabinfo.UDP.active_objs
170.00 ± 14% -80.0% 34.00 ± 0% slabinfo.UDP.num_objs
2645 ± 9% -55.5% 1177 ± 1% slabinfo.anon_vma.active_objs
2645 ± 9% -55.5% 1177 ± 1% slabinfo.anon_vma.num_objs
383.25 ± 15% -81.0% 73.00 ± 0% slabinfo.avc_xperms_node.active_objs
383.25 ± 15% -81.0% 73.00 ± 0% slabinfo.avc_xperms_node.num_objs
185.25 ± 17% -78.9% 39.00 ± 0% slabinfo.bdev_cache.active_objs
185.25 ± 17% -78.9% 39.00 ± 0% slabinfo.bdev_cache.num_objs
528.00 ± 5% -66.7% 176.00 ± 0% slabinfo.blkdev_requests.active_objs
528.00 ± 5% -66.7% 176.00 ± 0% slabinfo.blkdev_requests.num_objs
624.25 ± 4% -16.4% 522.00 ± 0% slabinfo.buffer_head.active_objs
780.25 ± 0% -86.9% 102.00 ± 0% slabinfo.ext4_extent_status.active_objs
780.25 ± 0% -86.9% 102.00 ± 0% slabinfo.ext4_extent_status.num_objs
242.00 ± 12% -83.9% 39.00 ± 0% slabinfo.file_lock_cache.active_objs
242.00 ± 12% -83.9% 39.00 ± 0% slabinfo.file_lock_cache.num_objs
379.25 ± 5% -63.9% 137.00 ± 0% slabinfo.files_cache.active_objs
379.25 ± 5% -63.9% 137.00 ± 0% slabinfo.files_cache.num_objs
12699 ± 8% -85.3% 1870 ± 0% slabinfo.free_nid.active_objs
12706 ± 8% -85.3% 1870 ± 0% slabinfo.free_nid.num_objs
487.50 ± 1% -23.1% 375.00 ± 0% slabinfo.idr_layer_cache.active_objs
487.50 ± 1% -23.1% 375.00 ± 0% slabinfo.idr_layer_cache.num_objs
984.00 ± 1% -35.1% 639.00 ± 0% slabinfo.kmalloc-1024.active_objs
984.00 ± 1% -35.1% 639.00 ± 0% slabinfo.kmalloc-1024.num_objs
1782 ± 3% -35.4% 1150 ± 0% slabinfo.kmalloc-128.active_objs
1782 ± 3% -35.4% 1150 ± 0% slabinfo.kmalloc-128.num_objs
4984 ± 4% -22.2% 3879 ± 1% slabinfo.kmalloc-16.active_objs
4984 ± 4% -22.2% 3879 ± 1% slabinfo.kmalloc-16.num_objs
3902 ± 3% -40.8% 2310 ± 0% slabinfo.kmalloc-192.active_objs
3902 ± 3% -40.8% 2310 ± 0% slabinfo.kmalloc-192.num_objs
806.75 ± 2% -14.0% 693.50 ± 1% slabinfo.kmalloc-2048.active_objs
845.00 ± 2% -17.9% 693.50 ± 1% slabinfo.kmalloc-2048.num_objs
1834 ± 3% -24.1% 1393 ± 0% slabinfo.kmalloc-256.active_objs
2613 ± 7% -26.6% 1917 ± 7% slabinfo.kmalloc-256.num_objs
6684 ± 3% -38.7% 4096 ± 0% slabinfo.kmalloc-32.active_objs
6684 ± 3% -38.7% 4096 ± 0% slabinfo.kmalloc-32.num_objs
1739 ± 12% -48.4% 898.50 ± 1% slabinfo.kmalloc-512.active_objs
1739 ± 12% -35.2% 1128 ± 4% slabinfo.kmalloc-512.num_objs
6528 ± 3% -37.3% 4096 ± 0% slabinfo.kmalloc-8.active_objs
6528 ± 3% -37.3% 4096 ± 0% slabinfo.kmalloc-8.num_objs
1447 ± 2% -30.3% 1008 ± 0% slabinfo.kmalloc-96.active_objs
1447 ± 2% -30.3% 1008 ± 0% slabinfo.kmalloc-96.num_objs
304.00 ± 9% -57.9% 128.00 ± 0% slabinfo.kmem_cache_node.active_objs
304.00 ± 9% -57.9% 128.00 ± 0% slabinfo.kmem_cache_node.num_objs
388.50 ± 4% -67.6% 126.00 ± 0% slabinfo.mnt_cache.active_objs
388.50 ± 4% -67.6% 126.00 ± 0% slabinfo.mnt_cache.num_objs
1777 ± 6% -42.9% 1015 ± 1% slabinfo.proc_inode_cache.active_objs
1807 ± 5% -35.3% 1170 ± 3% slabinfo.proc_inode_cache.num_objs
1133 ± 0% -11.1% 1007 ± 0% slabinfo.shmem_inode_cache.active_objs
1133 ± 0% -11.1% 1007 ± 0% slabinfo.shmem_inode_cache.num_objs
545.75 ± 8% -46.5% 291.75 ± 4% slabinfo.signal_cache.active_objs
545.75 ± 8% -46.5% 291.75 ± 4% slabinfo.signal_cache.num_objs
200.00 ± 0% -87.5% 25.00 ± 0% slabinfo.sigqueue.active_objs
200.00 ± 0% -87.5% 25.00 ± 0% slabinfo.sigqueue.num_objs
287.50 ± 4% -47.8% 150.00 ± 0% slabinfo.sock_inode_cache.active_objs
287.50 ± 4% -47.8% 150.00 ± 0% slabinfo.sock_inode_cache.num_objs
2001 ± 1% -10.3% 1794 ± 0% slabinfo.trace_event_file.active_objs
2001 ± 1% -10.3% 1794 ± 0% slabinfo.trace_event_file.num_objs
4486 ± 8% -47.7% 2344 ± 1% slabinfo.vm_area_struct.active_objs
4526 ± 8% -48.2% 2344 ± 1% slabinfo.vm_area_struct.num_objs
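For readers new to these comparison tables: each row shows the base-commit mean, its run-to-run noise, the relative change, and the head-commit mean with its noise. The sketch below illustrates the conventional arithmetic behind the %change and %stddev columns; the function names are illustrative, not taken from the lkp-tests source, and entries like "0.00 ± -1%" are tool artifacts where the stddev is undefined.

```python
# Hedged sketch of how the %change / %stddev columns in an LKP
# comparison table are conventionally computed. Names are illustrative.

def pct_change(base, head):
    """Relative change of the head-commit mean vs. the base-commit mean, in percent."""
    return (head - base) / base * 100.0

def pct_stddev(mean, stddev):
    """Run-to-run noise expressed as a percentage of the mean."""
    return stddev / mean * 100.0 if mean else float("nan")

# Example row from above: fsmark.files_per_sec went from 511.70 to 588.67.
print(round(pct_change(511.70, 588.67), 1))  # → 15.0, matching the "+15.0%" column
```

A positive %change is an improvement only when the metric itself is "higher is better" (files_per_sec, GFlops); for elapsed_time or context switches the sign reads the other way.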
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/tbox_group/testcase:
gcc-4.9/performance/x86_64-rhel/debian-x86_64-2015-02-07.cgz/lkp-ivb-d04/linpack
commit:
8025943c32f5de29b3fd548e917e24424297ca73
0aefb957de5f2a1d9467126114146b6c793cf180
8025943c32f5de29 0aefb957de5f2a1d9467126114
---------------- --------------------------
%stddev %change %stddev
\ | \
45.27 ± 0% -48.5% 23.32 ± 0% linpack.GFlops
104.70 ± 0% +72.1% 180.18 ± 0% linpack.time.elapsed_time
104.70 ± 0% +72.1% 180.18 ± 0% linpack.time.elapsed_time.max
4729 ± 18% +38.1% 6529 ± 4% linpack.time.involuntary_context_switches
5134 ± 5% +11.1% 5706 ± 5% linpack.time.minor_page_faults
235.75 ± 0% -58.4% 98.00 ± 0% linpack.time.percent_of_cpu_this_job_got
245.43 ± 0% -27.5% 177.86 ± 0% linpack.time.user_time
10679 ± 19% -86.3% 1463 ± 4% softirqs.RCU
137008 ± 2% -30.0% 95840 ± 0% softirqs.TIMER
126.98 ± 4% +54.7% 196.50 ± 0% uptime.boot
250.28 ± 8% -96.5% 8.83 ± 15% uptime.idle
33266 ± 11% +265.8% 121692 ± 1% latency_stats.hits.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
453713 ± 3% +305.7% 1840510 ± 6% latency_stats.sum.do_wait.SyS_wait4.entry_SYSCALL_64_fastpath
859539 ± 3% +411.0% 4392268 ± 8% latency_stats.sum.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
66188 ± 2% -45.6% 35980 ± 4% meminfo.DirectMap4k
2800 ± 1% -9.0% 2547 ± 0% meminfo.KernelStack
17137 ± 0% -12.9% 14932 ± 0% meminfo.SUnreclaim
2.00 ± 0% -50.0% 1.00 ± 0% vmstat.procs.r
926.75 ± 6% +90.9% 1769 ± 3% vmstat.system.cs
2391 ± 0% -58.0% 1005 ± 0% vmstat.system.in
4284 ± 0% -12.9% 3732 ± 0% proc-vmstat.nr_slab_unreclaimable
112981 ± 0% +60.0% 180741 ± 0% proc-vmstat.numa_hit
112981 ± 0% +60.0% 180741 ± 0% proc-vmstat.numa_local
131691 ± 0% +61.7% 212958 ± 0% proc-vmstat.pgfault
113408 ± 0% +60.8% 182335 ± 0% proc-vmstat.pgfree
104.70 ± 0% +72.1% 180.18 ± 0% time.elapsed_time
104.70 ± 0% +72.1% 180.18 ± 0% time.elapsed_time.max
4729 ± 18% +38.1% 6529 ± 4% time.involuntary_context_switches
5134 ± 5% +11.1% 5706 ± 5% time.minor_page_faults
235.75 ± 0% -58.4% 98.00 ± 0% time.percent_of_cpu_this_job_got
1.92 ± 3% -86.2% 0.27 ± 1% time.system_time
245.43 ± 0% -27.5% 177.86 ± 0% time.user_time
611.25 ± 1% -87.2% 78.00 ± 0% time.voluntary_context_switches
1458158 ± 22% -98.6% 20421 ±173% cpuidle.C1-IVB.time
11676 ± 17% -100.0% 1.00 ±122% cpuidle.C1-IVB.usage
1574739 ± 24% -100.0% 181.00 ± 24% cpuidle.C1E-IVB.time
269.00 ± 7% -99.3% 1.75 ± 24% cpuidle.C1E-IVB.usage
92320 ± 21% -98.1% 1726 ± 43% cpuidle.C3-IVB.time
169.75 ± 15% -97.1% 5.00 ± 24% cpuidle.C3-IVB.usage
1.702e+08 ± 0% -99.5% 805505 ± 4% cpuidle.C6-IVB.time
4452 ± 4% -98.2% 80.00 ± 3% cpuidle.C6-IVB.usage
81.25 ± 49% -98.2% 1.50 ±173% cpuidle.POLL.time
11.25 ± 55% -93.3% 0.75 ±173% cpuidle.POLL.usage
58.98 ± 0% +68.8% 99.54 ± 0% turbostat.%Busy
1942 ± 0% +68.7% 3277 ± 0% turbostat.Avg_MHz
39.80 ± 0% -98.8% 0.46 ± 0% turbostat.CPU%c1
0.02 ± 19% -100.0% 0.00 ± -1% turbostat.CPU%c3
1.20 ± 13% -100.0% 0.00 ± -1% turbostat.CPU%c6
19.34 ± 0% -29.8% 13.58 ± 0% turbostat.CorWatt
42.25 ± 1% -8.3% 38.75 ± 1% turbostat.CoreTmp
0.04 ± 10% -100.0% 0.00 ± -1% turbostat.Pkg%pc2
0.02 ± 86% -100.0% 0.00 ± -1% turbostat.Pkg%pc3
0.82 ± 3% -100.0% 0.00 ± -1% turbostat.Pkg%pc6
36.95 ± 0% -16.3% 30.91 ± 0% turbostat.PkgWatt
2091 ± 2% -12.2% 1836 ± 0% slabinfo.Acpi-Namespace.active_objs
2091 ± 2% -12.2% 1836 ± 0% slabinfo.Acpi-Namespace.num_objs
156.00 ± 0% -75.0% 39.00 ± 0% slabinfo.Acpi-ParseExt.active_objs
156.00 ± 0% -75.0% 39.00 ± 0% slabinfo.Acpi-ParseExt.num_objs
1816 ± 4% -43.7% 1022 ± 0% slabinfo.anon_vma.active_objs
1816 ± 4% -43.7% 1022 ± 0% slabinfo.anon_vma.num_objs
273.75 ± 11% -73.3% 73.00 ± 0% slabinfo.avc_xperms_node.active_objs
273.75 ± 11% -73.3% 73.00 ± 0% slabinfo.avc_xperms_node.num_objs
1647 ± 3% -12.1% 1447 ± 0% slabinfo.kmalloc-128.active_objs
1647 ± 3% -12.1% 1447 ± 0% slabinfo.kmalloc-128.num_objs
4284 ± 4% -10.4% 3838 ± 0% slabinfo.kmalloc-16.active_objs
4284 ± 4% -10.4% 3838 ± 0% slabinfo.kmalloc-16.num_objs
2429 ± 2% -20.5% 1931 ± 0% slabinfo.kmalloc-192.active_objs
2614 ± 2% -26.1% 1931 ± 0% slabinfo.kmalloc-192.num_objs
1162 ± 2% -11.5% 1028 ± 0% slabinfo.kmalloc-256.active_objs
5598 ± 7% -36.0% 3584 ± 0% slabinfo.kmalloc-32.active_objs
5598 ± 7% -36.0% 3584 ± 0% slabinfo.kmalloc-32.num_objs
787.50 ± 3% -21.4% 618.75 ± 2% slabinfo.kmalloc-512.active_objs
7301 ± 4% -17.6% 6018 ± 0% slabinfo.kmalloc-64.active_objs
7538 ± 4% -20.2% 6018 ± 0% slabinfo.kmalloc-64.num_objs
4480 ± 4% -20.0% 3584 ± 0% slabinfo.kmalloc-8.active_objs
4480 ± 4% -20.0% 3584 ± 0% slabinfo.kmalloc-8.num_objs
1113 ± 4% -13.2% 966.00 ± 0% slabinfo.kmalloc-96.active_objs
1113 ± 4% -13.2% 966.00 ± 0% slabinfo.kmalloc-96.num_objs
304.00 ± 9% -57.9% 128.00 ± 0% slabinfo.kmem_cache_node.active_objs
304.00 ± 9% -57.9% 128.00 ± 0% slabinfo.kmem_cache_node.num_objs
1256 ± 9% -26.6% 922.00 ± 2% slabinfo.proc_inode_cache.active_objs
1325 ± 4% -18.1% 1085 ± 2% slabinfo.proc_inode_cache.num_objs
314.25 ± 4% -33.5% 209.00 ± 0% slabinfo.signal_cache.active_objs
314.25 ± 4% -33.5% 209.00 ± 0% slabinfo.signal_cache.num_objs
2475 ± 2% -15.0% 2104 ± 0% slabinfo.vm_area_struct.active_objs
2506 ± 2% -15.9% 2107 ± 0% slabinfo.vm_area_struct.num_objs
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/tbox_group/test/testcase:
gcc-4.9/performance/x86_64-rhel/debian-x86_64-2015-02-07.cgz/lkp-sb02/all_utime/reaim
commit:
8025943c32f5de29b3fd548e917e24424297ca73
0aefb957de5f2a1d9467126114146b6c793cf180
8025943c32f5de29 0aefb957de5f2a1d9467126114
---------------- --------------------------
%stddev %change %stddev
\ | \
8.12 ± 0% -63.7% 2.94 ± 0% reaim.child_utime
17817 ± 0% -71.2% 5139 ± 0% reaim.jobs_per_min
3431 ± 0% -22.0% 2677 ± 0% reaim.jobs_per_min_child
95.81 ± 0% +4.4% 100.00 ± 0% reaim.jti
20745 ± 0% -75.1% 5164 ± 0% reaim.max_jobs_per_min
2.22 ± 0% +34.1% 2.98 ± 0% reaim.parent_time
3.79 ± 6% -100.0% 0.00 ± -1% reaim.std_dev_percent
0.09 ± 5% -100.0% 0.00 ± -1% reaim.std_dev_time
39.73 ± 0% -57.4% 16.94 ± 0% reaim.time.elapsed_time
39.73 ± 0% -57.4% 16.94 ± 0% reaim.time.elapsed_time.max
11361 ± 1% -60.3% 4509 ± 0% reaim.time.involuntary_context_switches
2400 ± 0% +3.9% 2493 ± 1% reaim.time.maximum_resident_set_size
11711 ± 0% -65.1% 4090 ± 0% reaim.time.minor_page_faults
244.75 ± 0% -71.8% 69.00 ± 0% reaim.time.percent_of_cpu_this_job_got
97.45 ± 0% -87.9% 11.78 ± 0% reaim.time.user_time
62925 ± 0% -76.9% 14562 ± 2% softirqs.TIMER
59.88 ± 0% -31.0% 41.33 ± 12% uptime.boot
132.29 ± 1% -83.7% 21.56 ± 23% uptime.idle
67302 ± 1% -9.5% 60918 ± 1% meminfo.Committed_AS
66608 ± 28% -42.3% 38448 ± 7% meminfo.DirectMap4k
18050 ± 0% -13.3% 15643 ± 0% meminfo.SUnreclaim
5.00 ± 0% -60.0% 2.00 ± 0% vmstat.procs.r
1382 ± 1% +12.6% 1555 ± 5% vmstat.system.cs
2420 ± 0% -70.8% 705.75 ± 1% vmstat.system.in
4511 ± 0% -13.3% 3909 ± 0% proc-vmstat.nr_slab_unreclaimable
58560 ± 0% -47.5% 30756 ± 0% proc-vmstat.numa_hit
58560 ± 0% -47.5% 30756 ± 0% proc-vmstat.numa_local
46094 ± 0% -46.1% 24857 ± 0% proc-vmstat.pgalloc_dma32
15691 ± 0% -51.1% 7670 ± 0% proc-vmstat.pgalloc_normal
68010 ± 0% -47.6% 35651 ± 0% proc-vmstat.pgfault
59666 ± 0% -49.2% 30340 ± 0% proc-vmstat.pgfree
124679 ± 4% -99.0% 1233 ± 37% cpuidle.C1-SNB.time
1088 ± 13% -98.0% 22.25 ± 66% cpuidle.C1-SNB.usage
426214 ± 61% -95.5% 19322 ± 6% cpuidle.C1E-SNB.time
401.50 ± 8% -86.9% 52.50 ± 5% cpuidle.C1E-SNB.usage
71067 ± 28% -95.0% 3524 ± 29% cpuidle.C3-SNB.time
109.00 ± 11% -90.1% 10.75 ± 13% cpuidle.C3-SNB.usage
63922064 ± 0% -91.0% 5727394 ± 0% cpuidle.C6-SNB.time
3859 ± 4% -92.4% 294.50 ± 4% cpuidle.C6-SNB.usage
115648 ±101% -90.7% 10761 ±173% cpuidle.POLL.time
39.73 ± 0% -57.4% 16.94 ± 0% time.elapsed_time
39.73 ± 0% -57.4% 16.94 ± 0% time.elapsed_time.max
120.00 ± 0% -53.3% 56.00 ± 0% time.file_system_outputs
11361 ± 1% -60.3% 4509 ± 0% time.involuntary_context_switches
11711 ± 0% -65.1% 4090 ± 0% time.minor_page_faults
244.75 ± 0% -71.8% 69.00 ± 0% time.percent_of_cpu_this_job_got
0.07 ± 5% -72.4% 0.02 ± 0% time.system_time
97.45 ± 0% -87.9% 11.78 ± 0% time.user_time
714.25 ± 0% -71.9% 200.50 ± 0% time.voluntary_context_switches
60.33 ± 0% +12.1% 67.60 ± 0% turbostat.%Busy
1751 ± 0% +11.7% 1956 ± 0% turbostat.Avg_MHz
0.44 ± 37% -61.7% 0.17 ± 4% turbostat.CPU%c1
0.10 ± 15% -80.5% 0.02 ± 35% turbostat.CPU%c3
39.13 ± 0% -17.7% 32.21 ± 0% turbostat.CPU%c6
21.75 ± 0% -16.8% 18.09 ± 0% turbostat.CorWatt
1.75 ± 3% -100.0% 0.00 ± -1% turbostat.Pkg%pc2
0.14 ± 11% -100.0% 0.00 ± -1% turbostat.Pkg%pc3
30.35 ± 1% -100.0% 0.00 ± -1% turbostat.Pkg%pc6
25.34 ± 0% -14.3% 21.70 ± 0% turbostat.PkgWatt
1377 ± 3% -11.1% 1224 ± 0% slabinfo.Acpi-Namespace.active_objs
1377 ± 3% -11.1% 1224 ± 0% slabinfo.Acpi-Namespace.num_objs
156.00 ± 0% -75.0% 39.00 ± 0% slabinfo.Acpi-ParseExt.active_objs
156.00 ± 0% -75.0% 39.00 ± 0% slabinfo.Acpi-ParseExt.num_objs
2006 ± 6% -47.2% 1059 ± 0% slabinfo.anon_vma.active_objs
2006 ± 6% -47.2% 1059 ± 0% slabinfo.anon_vma.num_objs
273.75 ± 11% -73.3% 73.00 ± 0% slabinfo.avc_xperms_node.active_objs
273.75 ± 11% -73.3% 73.00 ± 0% slabinfo.avc_xperms_node.num_objs
204.00 ± 0% -50.0% 102.00 ± 0% slabinfo.ext4_extent_status.active_objs
204.00 ± 0% -50.0% 102.00 ± 0% slabinfo.ext4_extent_status.num_objs
1870 ± 0% -14.5% 1600 ± 0% slabinfo.kmalloc-128.active_objs
1870 ± 0% -14.5% 1600 ± 0% slabinfo.kmalloc-128.num_objs
4462 ± 2% -13.9% 3840 ± 0% slabinfo.kmalloc-16.active_objs
4462 ± 2% -13.9% 3840 ± 0% slabinfo.kmalloc-16.num_objs
2577 ± 5% -19.3% 2079 ± 0% slabinfo.kmalloc-192.active_objs
2619 ± 4% -20.6% 2079 ± 0% slabinfo.kmalloc-192.num_objs
1345 ± 1% -7.7% 1241 ± 2% slabinfo.kmalloc-256.active_objs
1802 ± 4% -15.3% 1526 ± 3% slabinfo.kmalloc-256.num_objs
5728 ± 4% -37.4% 3584 ± 0% slabinfo.kmalloc-32.active_objs
5728 ± 4% -37.4% 3584 ± 0% slabinfo.kmalloc-32.num_objs
625.50 ± 9% -25.2% 468.00 ± 4% slabinfo.kmalloc-512.active_objs
7370 ± 3% -14.5% 6304 ± 0% slabinfo.kmalloc-64.active_objs
7608 ± 2% -17.1% 6304 ± 0% slabinfo.kmalloc-64.num_objs
4864 ± 5% -15.8% 4096 ± 0% slabinfo.kmalloc-8.active_objs
4864 ± 5% -15.8% 4096 ± 0% slabinfo.kmalloc-8.num_objs
1239 ± 2% -11.9% 1092 ± 0% slabinfo.kmalloc-96.active_objs
1239 ± 2% -11.9% 1092 ± 0% slabinfo.kmalloc-96.num_objs
288.00 ± 11% -55.6% 128.00 ± 0% slabinfo.kmem_cache_node.active_objs
288.00 ± 11% -55.6% 128.00 ± 0% slabinfo.kmem_cache_node.num_objs
1273 ± 3% -26.4% 937.50 ± 3% slabinfo.proc_inode_cache.active_objs
1365 ± 4% -19.0% 1105 ± 4% slabinfo.proc_inode_cache.num_objs
343.50 ± 4% -38.9% 210.00 ± 0% slabinfo.signal_cache.active_objs
343.50 ± 4% -38.9% 210.00 ± 0% slabinfo.signal_cache.num_objs
2773 ± 2% -22.2% 2157 ± 0% slabinfo.vm_area_struct.active_objs
2797 ± 2% -22.8% 2160 ± 0% slabinfo.vm_area_struct.num_objs
3326 ± 18% +157.5% 8565 ± 0% sched_debug.cfs_rq[0]:/.exec_clock
616.00 ± 16% +78.0% 1096 ± 0% sched_debug.cfs_rq[0]:/.load_avg
3924 ± 16% +101.9% 7924 ± 0% sched_debug.cfs_rq[0]:/.min_vruntime
7.75 ± 14% +119.4% 17.00 ± 9% sched_debug.cfs_rq[0]:/.nr_spread_over
6.00 ±100% +16558.3% 999.50 ± 0% sched_debug.cfs_rq[0]:/.runnable_load_avg
1549 ± 17% -28.7% 1105 ± 0% sched_debug.cfs_rq[0]:/.tg_load_avg
616.00 ± 16% +79.5% 1105 ± 0% sched_debug.cfs_rq[0]:/.tg_load_avg_contrib
635.75 ± 14% +58.8% 1009 ± 0% sched_debug.cfs_rq[0]:/.util_avg
0.00 ± 0% -100.0% 0.00 ± -1% sched_debug.cfs_rq[1]:/.MIN_vruntime
2118 ± 63% -100.0% 0.00 ± -1% sched_debug.cfs_rq[1]:/.exec_clock
306.50 ± 25% -100.0% 0.00 ± -1% sched_debug.cfs_rq[1]:/.load_avg
0.00 ± 0% -100.0% 0.00 ± -1% sched_debug.cfs_rq[1]:/.max_vruntime
2653 ± 49% -100.0% 0.00 ± -1% sched_debug.cfs_rq[1]:/.min_vruntime
3.00 ± 23% -100.0% 0.00 ± -1% sched_debug.cfs_rq[1]:/.nr_spread_over
76.00 ± 90% -100.0% 0.00 ± -1% sched_debug.cfs_rq[1]:/.runnable_load_avg
1540 ± 17% -100.0% 0.00 ± -1% sched_debug.cfs_rq[1]:/.tg_load_avg
306.50 ± 25% -100.0% 0.00 ± -1% sched_debug.cfs_rq[1]:/.tg_load_avg_contrib
593.00 ± 18% -100.0% 0.00 ± -1% sched_debug.cfs_rq[1]:/.util_avg
0.00 ± 0% -100.0% 0.00 ± -1% sched_debug.cfs_rq[2]:/.MIN_vruntime
1627 ± 6% -100.0% 0.00 ± -1% sched_debug.cfs_rq[2]:/.exec_clock
437.25 ± 3% -100.0% 0.00 ± -1% sched_debug.cfs_rq[2]:/.load
281.00 ± 34% -100.0% 0.00 ± -1% sched_debug.cfs_rq[2]:/.load_avg
0.00 ± 0% -100.0% 0.00 ± -1% sched_debug.cfs_rq[2]:/.max_vruntime
2413 ± 6% -100.0% 0.00 ± -1% sched_debug.cfs_rq[2]:/.min_vruntime
1.00 ± 0% -100.0% 0.00 ± -1% sched_debug.cfs_rq[2]:/.nr_running
3.00 ± 23% -100.0% 0.00 ± -1% sched_debug.cfs_rq[2]:/.nr_spread_over
102.50 ± 11% -100.0% 0.00 ± -1% sched_debug.cfs_rq[2]:/.runnable_load_avg
-1511 ±-40% -100.0% 0.00 ± -1% sched_debug.cfs_rq[2]:/.spread0
1540 ± 17% -100.0% 0.00 ± -1% sched_debug.cfs_rq[2]:/.tg_load_avg
281.00 ± 34% -100.0% 0.00 ± -1% sched_debug.cfs_rq[2]:/.tg_load_avg_contrib
539.50 ± 7% -100.0% 0.00 ± -1% sched_debug.cfs_rq[2]:/.util_avg
0.00 ± 0% -100.0% 0.00 ± -1% sched_debug.cfs_rq[3]:/.MIN_vruntime
2199 ± 55% -100.0% 0.00 ± -1% sched_debug.cfs_rq[3]:/.exec_clock
338.50 ± 52% -100.0% 0.00 ± -1% sched_debug.cfs_rq[3]:/.load_avg
0.00 ± 0% -100.0% 0.00 ± -1% sched_debug.cfs_rq[3]:/.max_vruntime
2818 ± 43% -100.0% 0.00 ± -1% sched_debug.cfs_rq[3]:/.min_vruntime
3.00 ± 23% -100.0% 0.00 ± -1% sched_debug.cfs_rq[3]:/.nr_spread_over
25.25 ±144% -100.0% 0.00 ± -1% sched_debug.cfs_rq[3]:/.runnable_load_avg
1538 ± 17% -100.0% 0.00 ± -1% sched_debug.cfs_rq[3]:/.tg_load_avg
338.25 ± 53% -100.0% 0.00 ± -1% sched_debug.cfs_rq[3]:/.tg_load_avg_contrib
600.75 ± 23% -100.0% 0.00 ± -1% sched_debug.cfs_rq[3]:/.util_avg
241050 ± 16% +234.1% 805228 ± 0% sched_debug.cpu#0.avg_idle
65.00 ±103% +1436.2% 998.50 ± 0% sched_debug.cpu#0.cpu_load[0]
52.75 ± 81% +1791.9% 998.00 ± 0% sched_debug.cpu#0.cpu_load[1]
54.00 ± 49% +1748.1% 998.00 ± 0% sched_debug.cpu#0.cpu_load[2]
58.50 ± 34% +1605.1% 997.50 ± 0% sched_debug.cpu#0.cpu_load[3]
64.00 ± 32% +1458.6% 997.50 ± 0% sched_debug.cpu#0.cpu_load[4]
3466 ± 3% +149.9% 8662 ± 3% sched_debug.cpu#0.nr_load_updates
6361 ± 8% +747.6% 53925 ± 3% sched_debug.cpu#0.nr_switches
-7.00 ±-44% -100.0% 0.00 ± -1% sched_debug.cpu#0.nr_uninterruptible
152836 ± 0% -64.7% 53935 ± 3% sched_debug.cpu#0.sched_count
2208 ± 9% +117.3% 4797 ± 6% sched_debug.cpu#0.sched_goidle
4027 ± 5% +608.6% 28542 ± 3% sched_debug.cpu#0.ttwu_count
1871 ± 10% +1424.9% 28542 ± 3% sched_debug.cpu#0.ttwu_local
186707 ± 30% -100.0% 0.00 ± -1% sched_debug.cpu#1.avg_idle
19748 ± 2% -100.0% 0.00 ± -1% sched_debug.cpu#1.clock
19748 ± 2% -100.0% 0.00 ± -1% sched_debug.cpu#1.clock_task
66.25 ± 60% -100.0% 0.00 ± -1% sched_debug.cpu#1.cpu_load[1]
67.50 ± 45% -100.0% 0.00 ± -1% sched_debug.cpu#1.cpu_load[2]
73.00 ± 36% -100.0% 0.00 ± -1% sched_debug.cpu#1.cpu_load[3]
81.00 ± 36% -100.0% 0.00 ± -1% sched_debug.cpu#1.cpu_load[4]
500000 ± 0% -100.0% 0.00 ± -1% sched_debug.cpu#1.max_idle_balance_cost
4294 ± 0% -100.0% 0.00 ± -1% sched_debug.cpu#1.next_balance
4449 ± 32% -100.0% 0.00 ± -1% sched_debug.cpu#1.nr_load_updates
9718 ± 30% -100.0% 0.00 ± -1% sched_debug.cpu#1.nr_switches
9748 ± 29% -100.0% 0.00 ± -1% sched_debug.cpu#1.sched_count
3776 ± 34% -100.0% 0.00 ± -1% sched_debug.cpu#1.sched_goidle
4756 ± 31% -100.0% 0.00 ± -1% sched_debug.cpu#1.ttwu_count
3137 ± 45% -100.0% 0.00 ± -1% sched_debug.cpu#1.ttwu_local
304621 ± 31% -100.0% 0.00 ± -1% sched_debug.cpu#2.avg_idle
19749 ± 2% -100.0% 0.00 ± -1% sched_debug.cpu#2.clock
19749 ± 2% -100.0% 0.00 ± -1% sched_debug.cpu#2.clock_task
27.25 ± 78% -100.0% 0.00 ± -1% sched_debug.cpu#2.cpu_load[1]
31.50 ± 33% -100.0% 0.00 ± -1% sched_debug.cpu#2.cpu_load[2]
34.00 ± 31% -100.0% 0.00 ± -1% sched_debug.cpu#2.cpu_load[3]
41.50 ± 27% -100.0% 0.00 ± -1% sched_debug.cpu#2.cpu_load[4]
755.25 ± 0% -100.0% 0.00 ± -1% sched_debug.cpu#2.curr->pid
437.25 ± 3% -100.0% 0.00 ± -1% sched_debug.cpu#2.load
500000 ± 0% -100.0% 0.00 ± -1% sched_debug.cpu#2.max_idle_balance_cost
4294 ± 0% -100.0% 0.00 ± -1% sched_debug.cpu#2.next_balance
4770 ± 35% -100.0% 0.00 ± -1% sched_debug.cpu#2.nr_load_updates
1.00 ± 0% -100.0% 0.00 ± -1% sched_debug.cpu#2.nr_running
9692 ± 35% -100.0% 0.00 ± -1% sched_debug.cpu#2.nr_switches
9717 ± 35% -100.0% 0.00 ± -1% sched_debug.cpu#2.sched_count
4001 ± 41% -100.0% 0.00 ± -1% sched_debug.cpu#2.sched_goidle
4181 ± 38% -100.0% 0.00 ± -1% sched_debug.cpu#2.ttwu_count
3159 ± 51% -100.0% 0.00 ± -1% sched_debug.cpu#2.ttwu_local
304004 ± 28% -100.0% 0.00 ± -1% sched_debug.cpu#3.avg_idle
19748 ± 2% -100.0% 0.00 ± -1% sched_debug.cpu#3.clock
19748 ± 2% -100.0% 0.00 ± -1% sched_debug.cpu#3.clock_task
23.50 ± 86% -100.0% 0.00 ± -1% sched_debug.cpu#3.cpu_load[1]
30.75 ± 29% -100.0% 0.00 ± -1% sched_debug.cpu#3.cpu_load[2]
34.00 ± 28% -100.0% 0.00 ± -1% sched_debug.cpu#3.cpu_load[3]
40.00 ± 26% -100.0% 0.00 ± -1% sched_debug.cpu#3.cpu_load[4]
500000 ± 0% -100.0% 0.00 ± -1% sched_debug.cpu#3.max_idle_balance_cost
4294 ± 0% -100.0% 0.00 ± -1% sched_debug.cpu#3.next_balance
3224 ± 13% -100.0% 0.00 ± -1% sched_debug.cpu#3.nr_load_updates
6232 ± 21% -100.0% 0.00 ± -1% sched_debug.cpu#3.nr_switches
6258 ± 21% -100.0% 0.00 ± -1% sched_debug.cpu#3.sched_count
2167 ± 24% -100.0% 0.00 ± -1% sched_debug.cpu#3.sched_goidle
2988 ± 12% -100.0% 0.00 ± -1% sched_debug.cpu#3.ttwu_count
1605 ± 39% -100.0% 0.00 ± -1% sched_debug.cpu#3.ttwu_local
7.26 ± 44% +188.3% 20.94 ± 0% sched_debug.rt_rq[0]:/.rt_time
950.00 ± 0% -100.0% 0.00 ± -1% sched_debug.rt_rq[1]:/.rt_runtime
7.07 ± 40% -100.0% 0.00 ± -1% sched_debug.rt_rq[1]:/.rt_time
950.00 ± 0% -100.0% 0.00 ± -1% sched_debug.rt_rq[2]:/.rt_runtime
4.36 ± 46% -100.0% 0.00 ± -1% sched_debug.rt_rq[2]:/.rt_time
950.00 ± 0% -100.0% 0.00 ± -1% sched_debug.rt_rq[3]:/.rt_runtime
0.98 ± 41% -100.0% 0.00 ± -1% sched_debug.rt_rq[3]:/.rt_time
18.00 ± 0% -66.7% 6.00 ± 0% sched_debug.sysctl_sched.sysctl_sched_latency
2.25 ± 0% -66.7% 0.75 ± 0% sched_debug.sysctl_sched.sysctl_sched_min_granularity
3.00 ± 0% -66.7% 1.00 ± 0% sched_debug.sysctl_sched.sysctl_sched_wakeup_granularity
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_threads/rootfs/tbox_group/testcase:
  gcc-4.9/performance/x86_64-rhel/200%/debian-x86_64-2015-02-07.cgz/lkp-nex04/kbuild

commit:
  8025943c32f5de29b3fd548e917e24424297ca73
  0aefb957de5f2a1d9467126114146b6c793cf180

8025943c32f5de29 0aefb957de5f2a1d9467126114
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \
104.84 ± 5% -100.0% 0.00 ± -1% kbuild.time.elapsed_time
104.84 ± 5% -100.0% 0.00 ± -1% kbuild.time.elapsed_time.max
412541 ± 1% -100.0% 0.00 ± -1% kbuild.time.involuntary_context_switches
121633 ± 0% -100.0% 0.00 ± -1% kbuild.time.maximum_resident_set_size
52747117 ± 0% -100.0% 0.00 ± -1% kbuild.time.minor_page_faults
4096 ± 0% -100.0% 0.00 ± -1% kbuild.time.page_size
3778 ± 5% -100.0% 0.00 ± -1% kbuild.time.percent_of_cpu_this_job_got
379.86 ± 0% -100.0% 0.00 ± -1% kbuild.time.system_time
3570 ± 0% -100.0% 0.00 ± -1% kbuild.time.user_time
288008 ± 2% -100.0% 0.00 ± -1% kbuild.time.voluntary_context_switches
142.44 ± 3% +1089.5% 1694 ± 0% uptime.boot
5002 ± 7% -99.7% 13.52 ± 5% uptime.idle
904720 ± 0% -85.4% 131979 ± 0% softirqs.RCU
121427 ± 0% -100.0% 0.00 ± -1% softirqs.SCHED
2361233 ± 0% -64.0% 850141 ± 0% softirqs.TIMER
1480596 ± 0% -13.3% 1284395 ± 0% vmstat.memory.cache
76.75 ± 4% +66.1% 127.50 ± 0% vmstat.procs.r
12767 ± 3% -72.7% 3483 ± 0% vmstat.system.cs
98762 ± 4% -98.9% 1097 ± 0% vmstat.system.in
59.72 ± 5% +67.4% 99.99 ± 0% turbostat.%Busy
1391 ± 5% +72.0% 2392 ± 0% turbostat.Avg_MHz
2.19 ± 6% -99.4% 0.01 ± 66% turbostat.CPU%c1
38.09 ± 8% -100.0% 0.00 ± -1% turbostat.CPU%c3
21.12 ± 18% -100.0% 0.00 ± -1% turbostat.Pkg%pc3
5495433 ± 17% -100.0% 215.25 ± 23% cpuidle.C1-NHM.time
31751 ± 26% -100.0% 3.75 ± 29% cpuidle.C1-NHM.usage
1805244 ± 37% -100.0% 72.25 ±104% cpuidle.C1E-NHM.time
3774 ± 1% -99.9% 2.00 ± 50% cpuidle.C1E-NHM.usage
2.731e+09 ± 13% -100.0% 191811 ± 62% cpuidle.C3-NHM.time
130803 ± 6% -100.0% 33.00 ± 40% cpuidle.C3-NHM.usage
15662 ±171% -100.0% 0.00 ± 0% cpuidle.POLL.time
10.00 ± 12% -100.0% 0.00 ± 0% cpuidle.POLL.usage
11799872 ± 0% +151.4% 29663670 ± 0% numa-numastat.node0.local_node
11803235 ± 0% +151.3% 29663670 ± 0% numa-numastat.node0.numa_hit
3363 ± 40% -100.0% 0.00 ± -1% numa-numastat.node0.other_node
11101506 ± 2% -100.0% 0.00 ± -1% numa-numastat.node1.local_node
11105625 ± 2% -100.0% 3.25 ± 13% numa-numastat.node1.numa_hit
4119 ± 42% -99.9% 3.25 ± 13% numa-numastat.node1.other_node
11853583 ± 0% -100.0% 0.00 ± -1% numa-numastat.node2.local_node
11858825 ± 0% -100.0% 3.50 ± 14% numa-numastat.node2.numa_hit
5242 ± 0% -99.9% 3.50 ± 14% numa-numastat.node2.other_node
11573634 ± 3% -100.0% 0.00 ± -1% numa-numastat.node3.local_node
11576259 ± 3% -100.0% 5.25 ± 34% numa-numastat.node3.numa_hit
104.84 ± 5% -100.0% 0.00 ± -1% time.elapsed_time
104.84 ± 5% -100.0% 0.00 ± -1% time.elapsed_time.max
72.00 ± 0% -100.0% 0.00 ± -1% time.file_system_outputs
412541 ± 1% -100.0% 0.00 ± -1% time.involuntary_context_switches
121633 ± 0% -100.0% 0.00 ± -1% time.maximum_resident_set_size
52747117 ± 0% -100.0% 0.00 ± -1% time.minor_page_faults
4096 ± 0% -100.0% 0.00 ± -1% time.page_size
3778 ± 5% -100.0% 0.00 ± -1% time.percent_of_cpu_this_job_got
379.86 ± 0% -100.0% 0.00 ± -1% time.system_time
3570 ± 0% -100.0% 0.00 ± -1% time.user_time
288008 ± 2% -100.0% 0.00 ± -1% time.voluntary_context_switches
2066024 ± 3% +60.5% 3315086 ± 0% meminfo.Active
1794443 ± 4% +70.8% 3065010 ± 0% meminfo.Active(anon)
22315 ± 6% +403.0% 112252 ± 2% meminfo.AnonHugePages
1696178 ± 4% +80.3% 3057526 ± 0% meminfo.AnonPages
1272562 ± 1% -9.9% 1146164 ± 0% meminfo.Cached
1880690 ± 3% +82.9% 3439557 ± 0% meminfo.Committed_AS
131011 ± 17% -73.9% 34239 ± 2% meminfo.DirectMap4k
19678 ± 2% -49.1% 10014 ± 0% meminfo.Inactive(anon)
37596 ± 3% -13.2% 32645 ± 0% meminfo.Mapped
22047 ± 5% +55.7% 34324 ± 0% meminfo.PageTables
101076 ± 0% -15.5% 85395 ± 0% meminfo.SReclaimable
106116 ± 0% -50.2% 52881 ± 0% meminfo.SUnreclaim
117627 ± 5% -87.7% 14428 ± 0% meminfo.Shmem
207193 ± 0% -33.3% 138276 ± 0% meminfo.Slab
449109 ± 4% +70.6% 766102 ± 0% proc-vmstat.nr_active_anon
424353 ± 4% +80.1% 764234 ± 0% proc-vmstat.nr_anon_pages
318120 ± 1% -9.9% 286543 ± 0% proc-vmstat.nr_file_pages
4894 ± 2% -48.9% 2503 ± 0% proc-vmstat.nr_inactive_anon
9339 ± 2% -12.6% 8162 ± 0% proc-vmstat.nr_mapped
5499 ± 4% +56.0% 8579 ± 0% proc-vmstat.nr_page_table_pages
29375 ± 5% -87.7% 3607 ± 0% proc-vmstat.nr_shmem
25268 ± 0% -15.5% 21348 ± 0% proc-vmstat.nr_slab_reclaimable
26527 ± 0% -50.2% 13219 ± 0% proc-vmstat.nr_slab_unreclaimable
755717 ± 1% -91.8% 61736 ± 9% proc-vmstat.numa_hint_faults
689912 ± 2% -91.1% 61700 ± 9% proc-vmstat.numa_hint_faults_local
46356067 ± 0% -36.0% 29659695 ± 0% proc-vmstat.numa_hit
46343679 ± 0% -36.0% 29659683 ± 0% proc-vmstat.numa_local
12388 ± 0% -99.9% 12.00 ± 21% proc-vmstat.numa_other
57823 ± 21% -100.0% 0.00 ± -1% proc-vmstat.numa_pages_migrated
1990271 ± 1% -80.1% 395921 ± 2% proc-vmstat.numa_pte_updates
162988 ± 0% -60.6% 64139 ± 0% proc-vmstat.pgactivate
655192 ± 0% +44.2% 944722 ± 0% proc-vmstat.pgalloc_dma32
46328460 ± 0% -35.8% 29750505 ± 0% proc-vmstat.pgalloc_normal
53042719 ± 0% -36.4% 33736338 ± 0% proc-vmstat.pgfault
46867311 ± 0% -36.4% 29785295 ± 0% proc-vmstat.pgfree
315.50 ± 79% -88.7% 35.75 ± 36% proc-vmstat.pgmigrate_fail
57823 ± 21% -100.0% 0.00 ± -1% proc-vmstat.pgmigrate_success
49.00 ± 18% +2333.2% 1192 ± 0% proc-vmstat.thp_collapse_alloc
1665376 ± 10% +319.3% 6982441 ± 0% latency_stats.avg.async_synchronize_cookie_domain.async_synchronize_full.do_init_module.load_module.SyS_finit_module.entry_SYSCALL_64_fastpath
4338 ± 67% +1052.7% 50004 ±141% latency_stats.avg.call_rwsem_down_read_failed.rmap_walk.remove_migration_ptes.migrate_pages.migrate_misplaced_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
2125 ± 67% +3514.0% 76816 ± 12% latency_stats.avg.call_rwsem_down_write_failed.unlink_file_vma.free_pgtables.exit_mmap.mmput.flush_old_exec.load_elf_binary.search_binary_handler.do_execveat_common.SyS_execve.return_from_execve
795.75 ±129% +9140.8% 73534 ± 18% latency_stats.avg.call_rwsem_down_write_failed.vma_adjust.__split_vma.split_vma.mprotect_fixup.SyS_mprotect.entry_SYSCALL_64_fastpath
1165 ± 74% +4634.4% 55191 ± 13% latency_stats.avg.call_rwsem_down_write_failed.vma_link.mmap_region.do_mmap.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.entry_SYSCALL_64_fastpath
5711 ±125% +1029.5% 64513 ± 22% latency_stats.avg.call_rwsem_down_write_failed.vma_link.mmap_region.do_mmap.vm_mmap_pgoff.vm_mmap.elf_map.load_elf_binary.search_binary_handler.do_execveat_common.SyS_execve.return_from_execve
122831 ±100% -100.0% 0.00 ± -1% latency_stats.avg.call_usermodehelper_exec.__request_module.misc_open.chrdev_open.do_dentry_open.vfs_open.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 39597 ±108% latency_stats.avg.do_unlinkat.SyS_unlink.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 73352 ± 25% latency_stats.avg.do_unlinkat.SyS_unlinkat.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 49371 ± 75% latency_stats.avg.filename_create.SyS_mkdir.entry_SYSCALL_64_fastpath
35223 ± 6% -94.2% 2047 ±101% latency_stats.avg.flush_work.__cancel_work_timer.cancel_delayed_work_sync.disk_block_events.__blkdev_get.blkdev_get.blkdev_open.do_dentry_open.vfs_open.path_openat.do_filp_open.do_sys_open
0.00 ± -1% +Inf% 14611942 ± 33% latency_stats.avg.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
8426 ± 65% +768.5% 73176 ± 22% latency_stats.avg.walk_component.path_lookupat.filename_lookup.user_path_at_empty.vfs_fstatat.SYSC_newstat.SyS_newstat.entry_SYSCALL_64_fastpath
218812 ± 1% +522.6% 1362320 ± 0% latency_stats.hits.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
9134 ± 68% +804.7% 82637 ±141% latency_stats.max.call_rwsem_down_read_failed.rmap_walk.remove_migration_ptes.migrate_pages.migrate_misplaced_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
12282 ± 53% -100.0% 0.00 ± -1% latency_stats.max.call_rwsem_down_write_failed.unlink_file_vma.free_pgtables.unmap_region.do_munmap.vm_munmap.elf_map.load_elf_binary.search_binary_handler.do_execveat_common.SyS_execve.return_from_execve
7134 ± 37% -100.0% 0.00 ± -1% latency_stats.max.call_rwsem_down_write_failed.vma_adjust.__split_vma.do_munmap.vm_munmap.elf_map.load_elf_binary.search_binary_handler.do_execveat_common.SyS_execve.return_from_execve
12658 ± 50% +874.7% 123382 ± 3% latency_stats.max.call_rwsem_down_write_failed.vma_link.mmap_region.do_mmap.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.entry_SYSCALL_64_fastpath
122831 ±100% -100.0% 0.00 ± -1% latency_stats.max.call_usermodehelper_exec.__request_module.misc_open.chrdev_open.do_dentry_open.vfs_open.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 43262 ±112% latency_stats.max.do_unlinkat.SyS_unlink.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 118159 ± 3% latency_stats.max.do_unlinkat.SyS_unlinkat.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 49371 ± 75% latency_stats.max.filename_create.SyS_mkdir.entry_SYSCALL_64_fastpath
168505 ± 9% -95.9% 6848 ± 99% latency_stats.max.flush_work.__cancel_work_timer.cancel_delayed_work_sync.disk_block_events.__blkdev_get.blkdev_get.blkdev_open.do_dentry_open.vfs_open.path_openat.do_filp_open.do_sys_open
0.00 ± -1% +Inf% 51311280 ± 63% latency_stats.max.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
5000 ± 0% +4875.6% 248777 ± 16% latency_stats.max.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
14429 ± 40% -100.0% 0.00 ± -1% latency_stats.max.wait_on_page_bit.__migration_entry_wait.migration_entry_wait.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
12322 ± 64% +1089.5% 146575 ± 28% latency_stats.max.walk_component.path_lookupat.filename_lookup.user_path_at_empty.vfs_fstatat.SYSC_newstat.SyS_newstat.entry_SYSCALL_64_fastpath
90763 ± 20% -91.4% 7823 ± 30% latency_stats.sum.call_rwsem_down_read_failed.__do_page_fault.do_page_fault.page_fault
20718 ± 72% +624.0% 150011 ±141% latency_stats.sum.call_rwsem_down_read_failed.rmap_walk.remove_migration_ptes.migrate_pages.migrate_misplaced_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
243636 ± 75% +778.5% 2140388 ± 42% latency_stats.sum.call_rwsem_down_write_failed.unlink_file_vma.free_pgtables.exit_mmap.mmput.flush_old_exec.load_elf_binary.search_binary_handler.do_execveat_common.SyS_execve.return_from_execve
5951 ± 82% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_write_failed.unlink_file_vma.free_pgtables.unmap_region.do_munmap.mmap_region.do_mmap.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.entry_SYSCALL_64_fastpath
20277 ± 33% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_write_failed.unlink_file_vma.free_pgtables.unmap_region.do_munmap.vm_munmap.elf_map.load_elf_binary.search_binary_handler.do_execveat_common.SyS_execve.return_from_execve
14602 ± 44% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_write_failed.vma_adjust.__split_vma.do_munmap.vm_munmap.elf_map.load_elf_binary.search_binary_handler.do_execveat_common.SyS_execve.return_from_execve
53613 ± 82% +1634.2% 929747 ± 23% latency_stats.sum.call_rwsem_down_write_failed.vma_link.mmap_region.do_mmap.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.entry_SYSCALL_64_fastpath
122831 ±100% -100.0% 0.00 ± -1% latency_stats.sum.call_usermodehelper_exec.__request_module.misc_open.chrdev_open.do_dentry_open.vfs_open.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 65312 ±128% latency_stats.sum.do_unlinkat.SyS_unlink.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 308983 ± 33% latency_stats.sum.do_unlinkat.SyS_unlinkat.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 49371 ± 75% latency_stats.sum.filename_create.SyS_mkdir.entry_SYSCALL_64_fastpath
193289 ± 9% -96.4% 6943 ± 98% latency_stats.sum.flush_work.__cancel_work_timer.cancel_delayed_work_sync.disk_block_events.__blkdev_get.blkdev_get.blkdev_open.do_dentry_open.vfs_open.path_openat.do_filp_open.do_sys_open
0.00 ± -1% +Inf% 1.36e+08 ± 63% latency_stats.sum.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
1.188e+08 ± 3% -82.5% 20794203 ± 2% latency_stats.sum.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
4281 ± 5% +327.1% 18286 ± 19% latency_stats.sum.unix_wait_for_peer.unix_dgram_sendmsg.sock_sendmsg.___sys_sendmsg.__sys_sendmsg.SyS_sendmsg.entry_SYSCALL_64_fastpath
26548 ± 57% -100.0% 0.00 ± -1% latency_stats.sum.wait_on_page_bit.__migration_entry_wait.migration_entry_wait.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
27361 ±107% -100.0% 0.00 ± -1% latency_stats.sum.wait_on_page_bit_killable.__lock_page_or_retry.filemap_fault.__do_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
18947055 ± 2% -93.3% 1266142 ± 3% latency_stats.sum.wait_woken.inotify_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
71501253 ± 15% +638.0% 5.277e+08 ± 11% latency_stats.sum.walk_component.link_path_walk.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
26943 ± 78% +4133.8% 1140734 ± 68% latency_stats.sum.walk_component.path_lookupat.filename_lookup.user_path_at_empty.vfs_fstatat.SYSC_newstat.SyS_newstat.entry_SYSCALL_64_fastpath
110139 ± 11% +589.8% 759693 ± 0% numa-vmstat.node0.nr_active_anon
16984 ± 6% +14.5% 19449 ± 1% numa-vmstat.node0.nr_active_file
104129 ± 5% +627.8% 757813 ± 0% numa-vmstat.node0.nr_anon_pages
679.50 ±146% +268.4% 2503 ± 0% numa-vmstat.node0.nr_inactive_anon
433.50 ± 2% +113.2% 924.25 ± 0% numa-vmstat.node0.nr_kernel_stack
1872 ± 2% +67.1% 3128 ± 0% numa-vmstat.node0.nr_mapped
1320 ± 9% +544.5% 8510 ± 0% numa-vmstat.node0.nr_page_table_pages
5854 ± 6% +34.8% 7893 ± 1% numa-vmstat.node0.nr_slab_reclaimable
57.50 ±100% +125.7% 129.75 ± 2% numa-vmstat.node0.nr_written
5938372 ± 0% +159.4% 15406922 ± 0% numa-vmstat.node0.numa_hit
5932132 ± 0% +159.7% 15406922 ± 0% numa-vmstat.node0.numa_local
6239 ± 25% -100.0% 0.00 ± -1% numa-vmstat.node0.numa_other
108408 ± 12% -100.0% 0.00 ± -1% numa-vmstat.node1.nr_active_anon
16904 ± 1% -15.7% 14256 ± 0% numa-vmstat.node1.nr_active_file
299.00 ± 3% -42.3% 172.50 ± 10% numa-vmstat.node1.nr_alloc_batch
102542 ± 6% -100.0% 0.00 ± -1% numa-vmstat.node1.nr_anon_pages
716.50 ±154% -100.0% 0.00 ± -1% numa-vmstat.node1.nr_inactive_anon
209.25 ± 2% -61.8% 80.00 ± 0% numa-vmstat.node1.nr_kernel_stack
1944 ± 7% -13.9% 1673 ± 0% numa-vmstat.node1.nr_mapped
1290 ± 6% -100.0% 0.00 ± -1% numa-vmstat.node1.nr_page_table_pages
6538 ±169% -100.0% 0.00 ± -1% numa-vmstat.node1.nr_shmem
6193 ± 13% -26.5% 4555 ± 2% numa-vmstat.node1.nr_slab_reclaimable
6407 ± 5% -73.1% 1726 ± 4% numa-vmstat.node1.nr_slab_unreclaimable
5623718 ± 3% -98.3% 95861 ± 0% numa-vmstat.node1.numa_hit
5523150 ± 3% -100.0% 0.00 ± -1% numa-vmstat.node1.numa_local
107833 ± 4% -100.0% 0.00 ± -1% numa-vmstat.node2.nr_active_anon
17345 ± 3% -16.4% 14492 ± 1% numa-vmstat.node2.nr_active_file
568.00 ± 3% +21.2% 688.50 ± 2% numa-vmstat.node2.nr_alloc_batch
107789 ± 5% -100.0% 0.00 ± -1% numa-vmstat.node2.nr_anon_pages
579.25 ±153% -100.0% 0.00 ± -1% numa-vmstat.node2.nr_inactive_anon
229.25 ± 5% -65.1% 80.00 ± 0% numa-vmstat.node2.nr_kernel_stack
2230 ± 27% -24.9% 1673 ± 0% numa-vmstat.node2.nr_mapped
1496 ± 7% -100.0% 0.00 ± -1% numa-vmstat.node2.nr_page_table_pages
628.50 ±139% -100.0% 0.00 ± -1% numa-vmstat.node2.nr_shmem
6596 ± 5% -32.9% 4424 ± 2% numa-vmstat.node2.nr_slab_reclaimable
6283 ± 5% -72.5% 1726 ± 3% numa-vmstat.node2.nr_slab_unreclaimable
5929199 ± 0% -98.4% 95779 ± 0% numa-vmstat.node2.numa_hit
5826957 ± 0% -100.0% 0.00 ± -1% numa-vmstat.node2.numa_local
116961 ± 6% -100.0% 0.00 ± -1% numa-vmstat.node3.nr_active_anon
16707 ± 2% -13.9% 14380 ± 3% numa-vmstat.node3.nr_active_file
561.25 ± 10% +19.1% 668.50 ± 2% numa-vmstat.node3.nr_alloc_batch
104224 ± 5% -100.0% 0.00 ± -1% numa-vmstat.node3.nr_anon_pages
87508 ± 16% -22.8% 67595 ± 0% numa-vmstat.node3.nr_file_pages
2769 ± 66% -100.0% 0.00 ± -1% numa-vmstat.node3.nr_inactive_anon
237.00 ± 3% -66.2% 80.00 ± 0% numa-vmstat.node3.nr_kernel_stack
2984 ± 21% -43.9% 1673 ± 0% numa-vmstat.node3.nr_mapped
1427 ± 5% -100.0% 0.00 ± -1% numa-vmstat.node3.nr_page_table_pages
15490 ± 93% -100.0% 0.00 ± -1% numa-vmstat.node3.nr_shmem
6616 ± 6% -32.3% 4477 ± 3% numa-vmstat.node3.nr_slab_reclaimable
6653 ± 4% -72.6% 1821 ± 2% numa-vmstat.node3.nr_slab_unreclaimable
5847574 ± 3% -98.4% 95859 ± 0% numa-vmstat.node3.numa_hit
5748223 ± 3% -100.0% 0.00 ± -1% numa-vmstat.node3.numa_local
509957 ± 9% +511.3% 3117196 ± 0% numa-meminfo.node0.Active
442054 ± 11% +587.6% 3039390 ± 0% numa-meminfo.node0.Active(anon)
67901 ± 6% +14.6% 77805 ± 1% numa-meminfo.node0.Active(file)
4423 ± 15% +2423.8% 111633 ± 2% numa-meminfo.node0.AnonHugePages
418189 ± 5% +625.0% 3031716 ± 0% numa-meminfo.node0.AnonPages
224231 ± 2% +12.9% 253176 ± 0% numa-meminfo.node0.Inactive
2729 ±147% +266.9% 10012 ± 0% numa-meminfo.node0.Inactive(anon)
6946 ± 2% +113.0% 14795 ± 0% numa-meminfo.node0.KernelStack
7512 ± 4% +66.7% 12521 ± 0% numa-meminfo.node0.Mapped
984792 ± 6% +268.3% 3627071 ± 0% numa-meminfo.node0.MemUsed
5295 ± 9% +543.1% 34049 ± 0% numa-meminfo.node0.PageTables
23417 ± 6% +34.8% 31574 ± 1% numa-meminfo.node0.SReclaimable
52037 ± 4% +21.6% 63262 ± 0% numa-meminfo.node0.Slab
499568 ± 10% -88.6% 57028 ± 0% numa-meminfo.node1.Active
431939 ± 12% -100.0% 0.00 ± -1% numa-meminfo.node1.Active(anon)
67627 ± 1% -15.7% 57028 ± 0% numa-meminfo.node1.Active(file)
3606 ± 14% -100.0% 0.00 ± -1% numa-meminfo.node1.AnonHugePages
408452 ± 6% -100.0% 0.00 ± -1% numa-meminfo.node1.AnonPages
2780 ±153% -100.0% 0.00 ± -1% numa-meminfo.node1.Inactive(anon)
3359 ± 2% -61.9% 1280 ± 0% numa-meminfo.node1.KernelStack
7686 ± 5% -12.9% 6695 ± 0% numa-meminfo.node1.Mapped
941516 ± 7% -57.2% 403269 ± 0% numa-meminfo.node1.MemUsed
5218 ± 6% -100.0% 0.00 ± -1% numa-meminfo.node1.PageTables
24774 ± 13% -26.5% 18220 ± 2% numa-meminfo.node1.SReclaimable
25634 ± 5% -73.1% 6907 ± 4% numa-meminfo.node1.SUnreclaim
26059 ±169% -100.0% 0.00 ± -1% numa-meminfo.node1.Shmem
50409 ± 8% -50.2% 25127 ± 2% numa-meminfo.node1.Slab
502526 ± 3% -88.5% 57971 ± 1% numa-meminfo.node2.Active
433231 ± 4% -100.0% 0.00 ± -1% numa-meminfo.node2.Active(anon)
69294 ± 3% -16.3% 57971 ± 1% numa-meminfo.node2.Active(file)
5696 ± 38% -100.0% 0.00 ± -1% numa-meminfo.node2.AnonHugePages
432924 ± 4% -100.0% 0.00 ± -1% numa-meminfo.node2.AnonPages
2324 ±152% -100.0% 0.00 ± -1% numa-meminfo.node2.Inactive(anon)
3667 ± 6% -65.1% 1280 ± 0% numa-meminfo.node2.KernelStack
8914 ± 27% -24.9% 6698 ± 0% numa-meminfo.node2.Mapped
957199 ± 2% -57.9% 402880 ± 0% numa-meminfo.node2.MemUsed
5993 ± 6% -100.0% 0.00 ± -1% numa-meminfo.node2.PageTables
26387 ± 5% -32.9% 17696 ± 2% numa-meminfo.node2.SReclaimable
25142 ± 5% -72.5% 6906 ± 3% numa-meminfo.node2.SUnreclaim
2526 ±138% -100.0% 0.00 ± -1% numa-meminfo.node2.Shmem
51530 ± 5% -52.3% 24602 ± 1% numa-meminfo.node2.Slab
534938 ± 6% -89.2% 57522 ± 3% numa-meminfo.node3.Active
468139 ± 7% -100.0% 0.00 ± -1% numa-meminfo.node3.Active(anon)
66798 ± 2% -13.9% 57522 ± 3% numa-meminfo.node3.Active(file)
8733 ± 32% -100.0% 0.00 ± -1% numa-meminfo.node3.AnonHugePages
417249 ± 5% -100.0% 0.00 ± -1% numa-meminfo.node3.AnonPages
350026 ± 16% -22.8% 270382 ± 0% numa-meminfo.node3.FilePages
232290 ± 3% -8.4% 212855 ± 0% numa-meminfo.node3.Inactive
11082 ± 66% -100.0% 0.00 ± -1% numa-meminfo.node3.Inactive(anon)
3810 ± 4% -66.4% 1280 ± 0% numa-meminfo.node3.KernelStack
11993 ± 21% -44.1% 6698 ± 0% numa-meminfo.node3.Mapped
992484 ± 4% -59.3% 403674 ± 0% numa-meminfo.node3.MemUsed
5683 ± 6% -100.0% 0.00 ± -1% numa-meminfo.node3.PageTables
26471 ± 6% -32.3% 17910 ± 3% numa-meminfo.node3.SReclaimable
26622 ± 4% -72.6% 7285 ± 1% numa-meminfo.node3.SUnreclaim
61961 ± 93% -100.0% 0.00 ± -1% numa-meminfo.node3.Shmem
53094 ± 2% -52.5% 25195 ± 1% numa-meminfo.node3.Slab
12113 ± 0% -48.7% 6220 ± 0% slabinfo.Acpi-Namespace.active_objs
12113 ± 0% -48.7% 6220 ± 0% slabinfo.Acpi-Namespace.num_objs
45248 ± 0% -15.3% 38340 ± 0% slabinfo.Acpi-Operand.active_objs
808.00 ± 0% -15.2% 685.50 ± 0% slabinfo.Acpi-Operand.active_slabs
45248 ± 0% -15.2% 38388 ± 0% slabinfo.Acpi-Operand.num_objs
808.00 ± 0% -15.2% 685.50 ± 0% slabinfo.Acpi-Operand.num_slabs
2457 ± 0% -96.9% 77.00 ± 0% slabinfo.Acpi-ParseExt.active_objs
2457 ± 0% -96.9% 77.00 ± 0% slabinfo.Acpi-ParseExt.num_objs
101478 ± 0% -10.4% 90897 ± 0% slabinfo.Acpi-State.active_objs
1991 ± 0% -10.5% 1782 ± 0% slabinfo.Acpi-State.active_slabs
101558 ± 0% -10.5% 90924 ± 0% slabinfo.Acpi-State.num_objs
1991 ± 0% -10.5% 1782 ± 0% slabinfo.Acpi-State.num_slabs
2381 ± 1% -89.0% 261.00 ± 5% slabinfo.RAW.active_objs
2381 ± 1% -89.0% 261.00 ± 5% slabinfo.RAW.num_objs
119.00 ± 0% -85.7% 17.00 ± 0% slabinfo.TCP.active_objs
119.00 ± 0% -85.7% 17.00 ± 0% slabinfo.TCP.num_objs
280.50 ± 5% -87.9% 34.00 ± 0% slabinfo.UDP.active_objs
280.50 ± 5% -87.9% 34.00 ± 0% slabinfo.UDP.num_objs
32761 ± 1% -75.7% 7962 ± 0% slabinfo.anon_vma.active_objs
644.75 ± 1% -74.8% 162.25 ± 0% slabinfo.anon_vma.active_slabs
32914 ± 1% -74.8% 8298 ± 0% slabinfo.anon_vma.num_objs
644.75 ± 1% -74.8% 162.25 ± 0% slabinfo.anon_vma.num_slabs
727.75 ± 12% -90.0% 73.00 ± 0% slabinfo.avc_xperms_node.active_objs
727.75 ± 12% -90.0% 73.00 ± 0% slabinfo.avc_xperms_node.num_objs
286.75 ± 14% -86.4% 39.00 ± 0% slabinfo.bdev_cache.active_objs
286.75 ± 14% -86.4% 39.00 ± 0% slabinfo.bdev_cache.num_objs
850.50 ± 11% -94.8% 44.00 ± 0% slabinfo.blkdev_requests.active_objs
850.50 ± 11% -94.8% 44.00 ± 0% slabinfo.blkdev_requests.num_objs
102815 ± 0% -11.7% 90801 ± 0% slabinfo.dentry.active_objs
2514 ± 0% -14.0% 2163 ± 0% slabinfo.dentry.active_slabs
105626 ± 0% -14.0% 90887 ± 0% slabinfo.dentry.num_objs
2514 ± 0% -14.0% 2163 ± 0% slabinfo.dentry.num_slabs
530.75 ± 10% -92.7% 39.00 ± 0% slabinfo.file_lock_cache.active_objs
530.75 ± 10% -92.7% 39.00 ± 0% slabinfo.file_lock_cache.num_objs
3108 ± 0% -76.5% 729.75 ± 0% slabinfo.files_cache.active_objs
3108 ± 0% -76.5% 729.75 ± 0% slabinfo.files_cache.num_objs
3970 ± 2% -12.2% 3485 ± 0% slabinfo.ftrace_event_field.active_objs
3970 ± 2% -12.2% 3485 ± 0% slabinfo.ftrace_event_field.num_objs
2129 ± 1% -45.0% 1170 ± 0% slabinfo.idr_layer_cache.active_objs
2129 ± 1% -45.0% 1170 ± 0% slabinfo.idr_layer_cache.num_objs
55954 ± 0% -30.7% 38763 ± 0% slabinfo.kernfs_node_cache.active_objs
1645 ± 0% -30.7% 1139 ± 0% slabinfo.kernfs_node_cache.active_slabs
55954 ± 0% -30.7% 38763 ± 0% slabinfo.kernfs_node_cache.num_objs
1645 ± 0% -30.7% 1139 ± 0% slabinfo.kernfs_node_cache.num_slabs
3558 ± 0% -72.7% 972.50 ± 0% slabinfo.kmalloc-1024.active_objs
3687 ± 0% -72.3% 1020 ± 1% slabinfo.kmalloc-1024.num_objs
5673 ± 2% -58.0% 2380 ± 0% slabinfo.kmalloc-128.active_objs
177.00 ± 2% -57.6% 75.00 ± 0% slabinfo.kmalloc-128.active_slabs
5673 ± 2% -57.3% 2424 ± 0% slabinfo.kmalloc-128.num_objs
177.00 ± 2% -57.6% 75.00 ± 0% slabinfo.kmalloc-128.num_slabs
21548 ± 0% -54.6% 9784 ± 0% slabinfo.kmalloc-16.active_objs
21548 ± 0% -52.7% 10200 ± 0% slabinfo.kmalloc-16.num_objs
17530 ± 2% -73.2% 4704 ± 0% slabinfo.kmalloc-192.active_objs
434.25 ± 2% -73.2% 116.25 ± 0% slabinfo.kmalloc-192.active_slabs
18270 ± 2% -73.2% 4900 ± 0% slabinfo.kmalloc-192.num_objs
434.25 ± 2% -73.2% 116.25 ± 0% slabinfo.kmalloc-192.num_slabs
3461 ± 2% -79.2% 718.75 ± 0% slabinfo.kmalloc-2048.active_objs
220.00 ± 2% -78.8% 46.75 ± 0% slabinfo.kmalloc-2048.active_slabs
3534 ± 2% -78.7% 751.75 ± 0% slabinfo.kmalloc-2048.num_objs
220.00 ± 2% -78.8% 46.75 ± 0% slabinfo.kmalloc-2048.num_slabs
11286 ± 1% -57.5% 4801 ± 0% slabinfo.kmalloc-256.active_objs
394.75 ± 0% -56.2% 172.75 ± 2% slabinfo.kmalloc-256.active_slabs
12643 ± 0% -56.1% 5545 ± 2% slabinfo.kmalloc-256.num_objs
394.75 ± 0% -56.2% 172.75 ± 2% slabinfo.kmalloc-256.num_slabs
25089 ± 5% -68.9% 7797 ± 0% slabinfo.kmalloc-32.active_objs
197.50 ± 4% -62.4% 74.25 ± 1% slabinfo.kmalloc-32.active_slabs
25367 ± 4% -62.1% 9626 ± 1% slabinfo.kmalloc-32.num_objs
197.50 ± 4% -62.4% 74.25 ± 1% slabinfo.kmalloc-32.num_slabs
1080 ± 1% -61.5% 416.00 ± 0% slabinfo.kmalloc-4096.active_objs
1113 ± 1% -56.6% 483.50 ± 0% slabinfo.kmalloc-4096.num_objs
6479 ± 4% -61.9% 2469 ± 0% slabinfo.kmalloc-512.active_objs
206.00 ± 4% -57.3% 88.00 ± 0% slabinfo.kmalloc-512.active_slabs
6621 ± 4% -57.2% 2834 ± 0% slabinfo.kmalloc-512.num_objs
206.00 ± 4% -57.3% 88.00 ± 0% slabinfo.kmalloc-512.num_slabs
54719 ± 2% -51.5% 26525 ± 0% slabinfo.kmalloc-64.active_objs
861.50 ± 2% -50.1% 430.00 ± 0% slabinfo.kmalloc-64.active_slabs
55167 ± 2% -50.0% 27557 ± 0% slabinfo.kmalloc-64.num_objs
861.50 ± 2% -50.1% 430.00 ± 0% slabinfo.kmalloc-64.num_slabs
37940 ± 1% -86.7% 5040 ± 3% slabinfo.kmalloc-8.active_objs
39296 ± 1% -84.7% 6015 ± 3% slabinfo.kmalloc-8.num_objs
489.00 ± 0% -87.8% 59.50 ± 0% slabinfo.kmalloc-8192.active_objs
122.25 ± 0% -87.7% 15.00 ± 0% slabinfo.kmalloc-8192.active_slabs
489.00 ± 0% -87.7% 60.00 ± 0% slabinfo.kmalloc-8192.num_objs
122.25 ± 0% -87.7% 15.00 ± 0% slabinfo.kmalloc-8192.num_slabs
4326 ± 1% -65.1% 1510 ± 0% slabinfo.kmalloc-96.active_objs
4754 ± 2% -64.7% 1679 ± 0% slabinfo.kmalloc-96.num_objs
547.00 ± 12% -72.0% 153.00 ± 0% slabinfo.kmem_cache.active_objs
547.00 ± 12% -72.0% 153.00 ± 0% slabinfo.kmem_cache.num_objs
961.00 ± 8% -53.3% 449.00 ± 0% slabinfo.kmem_cache_node.active_objs
1006 ± 8% -49.1% 512.00 ± 0% slabinfo.kmem_cache_node.num_objs
2199 ± 0% -72.6% 602.50 ± 0% slabinfo.mm_struct.active_objs
2199 ± 0% -71.5% 627.50 ± 0% slabinfo.mm_struct.num_objs
1250 ± 3% -93.3% 84.00 ± 0% slabinfo.mnt_cache.active_objs
1250 ± 3% -93.3% 84.00 ± 0% slabinfo.mnt_cache.num_objs
160.00 ± 0% -80.0% 32.00 ± 0% slabinfo.nfs_inode_cache.active_objs
160.00 ± 0% -80.0% 32.00 ± 0% slabinfo.nfs_inode_cache.num_objs
11179 ± 2% -55.0% 5028 ± 0% slabinfo.proc_inode_cache.active_objs
215.50 ± 2% -54.5% 98.00 ± 0% slabinfo.proc_inode_cache.active_slabs
11232 ± 2% -54.2% 5146 ± 0% slabinfo.proc_inode_cache.num_objs
215.50 ± 2% -54.5% 98.00 ± 0% slabinfo.proc_inode_cache.num_slabs
35555 ± 0% -15.3% 30118 ± 0% slabinfo.radix_tree_node.active_objs
638.50 ± 0% -15.7% 538.50 ± 0% slabinfo.radix_tree_node.active_slabs
35779 ± 0% -15.6% 30182 ± 0% slabinfo.radix_tree_node.num_objs
638.50 ± 0% -15.7% 538.50 ± 0% slabinfo.radix_tree_node.num_slabs
4685 ± 1% -70.8% 1370 ± 0% slabinfo.shmem_inode_cache.active_objs
4685 ± 1% -70.8% 1370 ± 0% slabinfo.shmem_inode_cache.num_objs
3003 ± 1% -61.0% 1172 ± 0% slabinfo.sighand_cache.active_objs
201.00 ± 1% -60.3% 79.75 ± 0% slabinfo.sighand_cache.active_slabs
3021 ± 1% -60.2% 1203 ± 0% slabinfo.sighand_cache.num_objs
201.00 ± 1% -60.3% 79.75 ± 0% slabinfo.sighand_cache.num_slabs
3402 ± 0% -63.4% 1245 ± 0% slabinfo.signal_cache.active_objs
3402 ± 0% -62.2% 1284 ± 0% slabinfo.signal_cache.num_objs
3301 ± 1% -98.5% 51.00 ± 0% slabinfo.sigqueue.active_objs
3301 ± 1% -98.5% 51.00 ± 0% slabinfo.sigqueue.num_objs
3402 ± 1% -91.0% 306.00 ± 0% slabinfo.sock_inode_cache.active_objs
3402 ± 1% -91.0% 306.00 ± 0% slabinfo.sock_inode_cache.num_objs
1293 ± 0% -9.5% 1170 ± 0% slabinfo.task_struct.active_objs
1338 ± 0% -11.5% 1185 ± 0% slabinfo.task_struct.num_objs
2329 ± 5% -25.0% 1747 ± 0% slabinfo.trace_event_file.active_objs
2329 ± 5% -25.0% 1747 ± 0% slabinfo.trace_event_file.num_objs
35881 ± 1% -48.6% 18445 ± 0% slabinfo.vm_area_struct.active_objs
821.00 ± 1% -46.9% 435.75 ± 0% slabinfo.vm_area_struct.active_slabs
36135 ± 1% -46.9% 19188 ± 0% slabinfo.vm_area_struct.num_objs
821.00 ± 1% -46.9% 435.75 ± 0% slabinfo.vm_area_struct.num_slabs
0.02 ± 57% +4900.0% 1.12 ± 27% perf-profile.cycles-pp.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate
4.11 ± 2% +163.5% 10.82 ± 3% perf-profile.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.handle_mm_fault.__do_page_fault.do_page_fault
0.78 ± 61% +1929.0% 15.73 ± 2% perf-profile.cycles-pp.__do_page_fault.do_page_fault.page_fault
1.71 ± 41% +53.6% 2.63 ± 3% perf-profile.cycles-pp.__do_page_fault.do_page_fault.page_fault.memset
0.20 ± 2% +370.7% 0.96 ± 8% perf-profile.cycles-pp.__do_softirq.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
1.47 ± 0% +122.8% 3.27 ± 6% perf-profile.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
0.19 ± 3% +534.2% 1.21 ± 4% perf-profile.cycles-pp.__libc_fork
0.91 ± 4% +114.2% 1.95 ± 10% perf-profile.cycles-pp.__memcpy.mga_imageblit.bit_putcs.fbcon_putcs.fbcon_redraw
0.01 ±100% +66150.0% 3.31 ± 25% perf-profile.cycles-pp.__memcpy.mga_imageblit.soft_cursor.bit_cursor.fb_flashcursor
0.04 ± 10% +3082.4% 1.35 ± 19% perf-profile.cycles-pp.__schedule.schedule.exit_to_usermode_loop.prepare_exit_to_usermode.retint_user
0.00 ± -1% +Inf% 1.26 ± 7% perf-profile.cycles-pp.__vfs_read.vfs_read.sys_read.entry_SYSCALL_64_fastpath
0.59 ± 9% +80.5% 1.06 ± 5% perf-profile.cycles-pp.__vfs_read.vfs_read.sys_read.entry_SYSCALL_64_fastpath.read
0.00 ± -1% +Inf% 0.71 ± 48% perf-profile.cycles-pp.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath
1.49 ± 3% +152.9% 3.77 ± 8% perf-profile.cycles-pp.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath.write
3.84 ± 1% -56.4% 1.67 ± 5% perf-profile.cycles-pp._cpp_lex_direct
4.39 ± 1% +164.5% 11.61 ± 2% perf-profile.cycles-pp.alloc_pages_vma.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.28 ± 2% +239.3% 0.95 ± 12% perf-profile.cycles-pp.alloc_pages_vma.wp_page_copy.do_wp_page.handle_mm_fault.__do_page_fault
1.90 ± 2% +135.2% 4.46 ± 4% perf-profile.cycles-pp.apic_timer_interrupt
0.01 ±100% +68400.0% 3.43 ± 24% perf-profile.cycles-pp.bit_cursor.fb_flashcursor.process_one_work.worker_thread.kthread
1.20 ± 3% +112.1% 2.54 ± 8% perf-profile.cycles-pp.bit_putcs.fbcon_putcs.fbcon_redraw.fbcon_scroll.scrup
2.78 ± 1% -45.1% 1.53 ± 9% perf-profile.cycles-pp.bool)
0.01 ±100% +18400.0% 0.92 ± 18% perf-profile.cycles-pp.call_timer_fn.run_timer_softirq.__do_softirq.irq_exit.smp_apic_timer_interrupt
2.89 ± 1% +163.3% 7.60 ± 3% perf-profile.cycles-pp.clear_page.__alloc_pages_nodemask.alloc_pages_vma.handle_mm_fault.__do_page_fault
1.36 ± 3% +157.6% 3.50 ± 10% perf-profile.cycles-pp.con_write.do_output_char.n_tty_write.tty_write.redirected_tty_write
5.49 ± 2% -86.4% 0.74 ± 24% perf-profile.cycles-pp.const*)
0.42 ± 3% +119.0% 0.92 ± 10% perf-profile.cycles-pp.copy_page_to_iter.generic_file_read_iter.__vfs_read.vfs_read.sys_read
1.34 ± 3% +103.2% 2.72 ± 9% perf-profile.cycles-pp.do_con_trol.do_con_write.con_write.do_output_char.n_tty_write
1.34 ± 3% +159.5% 3.49 ± 10% perf-profile.cycles-pp.do_con_write.part.23.con_write.do_output_char.n_tty_write.tty_write
0.39 ± 5% +542.6% 2.49 ± 8% perf-profile.cycles-pp.do_execveat_common.isra.33.sys_execve.return_from_execve.execve
0.01 ±-10000% +34525.0% 3.46 ± 9% perf-profile.cycles-pp.do_exit.do_group_exit.sys_exit_group.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 1.83 ± 14% perf-profile.cycles-pp.do_filp_open.do_sys_open.sys_open.entry_SYSCALL_64_fastpath
0.95 ± 26% +90.3% 1.81 ± 8% perf-profile.cycles-pp.do_filp_open.do_sys_open.sys_open.entry_SYSCALL_64_fastpath.open64
0.01 ±-10000% +33600.0% 3.37 ± 11% perf-profile.cycles-pp.do_group_exit.sys_exit_group.entry_SYSCALL_64_fastpath
0.52 ± 2% +273.6% 1.94 ± 2% perf-profile.cycles-pp.do_mmap.vm_mmap_pgoff.sys_mmap_pgoff.sys_mmap.entry_SYSCALL_64_fastpath
1.36 ± 3% +157.2% 3.50 ± 9% perf-profile.cycles-pp.do_output_char.n_tty_write.tty_write.redirected_tty_write.__vfs_write
0.79 ± 60% +1940.8% 16.02 ± 2% perf-profile.cycles-pp.do_page_fault.page_fault
2.21 ± 18% +23.4% 2.72 ± 5% perf-profile.cycles-pp.do_page_fault.page_fault.memset
0.01 ±-10000% +20225.0% 2.03 ± 14% perf-profile.cycles-pp.do_sys_open.sys_open.entry_SYSCALL_64_fastpath
0.62 ± 3% +311.2% 2.56 ± 9% perf-profile.cycles-pp.do_wp_page.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.01 ± 57% +7133.3% 1.08 ± 26% perf-profile.cycles-pp.dump_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair
0.03 ± 13% +4100.0% 1.36 ± 21% perf-profile.cycles-pp.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up
0.61 ± 21% +1467.2% 9.56 ± 12% perf-profile.cycles-pp.entry_SYSCALL_64_fastpath
0.55 ± 52% +254.3% 1.94 ± 12% perf-profile.cycles-pp.entry_SYSCALL_64_fastpath.open64
0.24 ± 56% +293.8% 0.95 ± 10% perf-profile.cycles-pp.entry_SYSCALL_64_fastpath.read
1.47 ± 3% +149.7% 3.66 ± 9% perf-profile.cycles-pp.entry_SYSCALL_64_fastpath.write
0.42 ± 1% +496.5% 2.54 ± 8% perf-profile.cycles-pp.execve
1.07 ± 2% +229.7% 3.53 ± 4% perf-profile.cycles-pp.exit_mmap.mmput.do_exit.do_group_exit.sys_exit_group
0.11 ± 4% +741.3% 0.97 ± 12% perf-profile.cycles-pp.exit_mmap.mmput.flush_old_exec.load_elf_binary.search_binary_handler
0.06 ± 0% +2233.3% 1.40 ± 23% perf-profile.cycles-pp.exit_to_usermode_loop.prepare_exit_to_usermode.retint_user
0.01 ±100% +68500.0% 3.43 ± 24% perf-profile.cycles-pp.fb_flashcursor.process_one_work.worker_thread.kthread.ret_from_fork
1.20 ± 3% +112.5% 2.54 ± 8% perf-profile.cycles-pp.fbcon_putcs.fbcon_redraw.fbcon_scroll.scrup.lf
1.22 ± 3% +104.7% 2.49 ± 9% perf-profile.cycles-pp.fbcon_redraw.isra.23.fbcon_scroll.scrup.lf.do_con_trol
1.34 ± 3% +103.2% 2.72 ± 9% perf-profile.cycles-pp.fbcon_scroll.scrup.lf.do_con_trol.do_con_write
0.45 ± 1% +550.6% 2.93 ± 5% perf-profile.cycles-pp.filemap_map_pages.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.12 ± 5% +750.0% 1.02 ± 11% perf-profile.cycles-pp.flush_old_exec.load_elf_binary.search_binary_handler.do_execveat_common.sys_execve
0.36 ± 1% +149.0% 0.89 ± 10% perf-profile.cycles-pp.free_hot_cold_page.free_hot_cold_page_list.release_pages.free_pages_and_swap_cache.tlb_flush_mmu_free
0.24 ± 3% +272.2% 0.90 ± 7% perf-profile.cycles-pp.free_hot_cold_page_list.release_pages.free_pages_and_swap_cache.tlb_flush_mmu_free.tlb_finish_mmu
0.38 ± 3% +353.6% 1.71 ± 3% perf-profile.cycles-pp.free_pages_and_swap_cache.tlb_flush_mmu_free.tlb_finish_mmu.exit_mmap.mmput
0.57 ± 1% +138.3% 1.37 ± 9% perf-profile.cycles-pp.generic_file_read_iter.__vfs_read.vfs_read.sys_read.entry_SYSCALL_64_fastpath
0.88 ± 0% +163.5% 2.31 ± 7% perf-profile.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.handle_mm_fault.__do_page_fault
0.69 ± 72% +1864.3% 13.60 ± 2% perf-profile.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
1.51 ± 1% +97.7% 2.98 ± 10% perf-profile.cycles-pp.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
5.30 ± 2% -49.4% 2.68 ± 1% perf-profile.cycles-pp.ht_lookup_option)
10.34 ± 0% -67.4% 3.37 ± 8% perf-profile.cycles-pp.int)
0.33 ± 1% +260.2% 1.20 ± 1% perf-profile.cycles-pp.ira_init
0.20 ± 3% +310.0% 0.82 ± 18% perf-profile.cycles-pp.ira_init_register_move_cost
0.24 ± 2% +333.3% 1.04 ± 10% perf-profile.cycles-pp.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
0.12 ± 21% +4718.0% 6.02 ± 12% perf-profile.cycles-pp.kthread.ret_from_fork
1.34 ± 3% +103.2% 2.72 ± 9% perf-profile.cycles-pp.lf.do_con_trol.do_con_write.con_write.do_output_char
0.63 ± 3% +111.2% 1.32 ± 11% perf-profile.cycles-pp.link_path_walk.path_openat.do_filp_open.do_sys_open.sys_open
0.36 ± 1% +652.1% 2.75 ± 6% perf-profile.cycles-pp.load_elf_binary.search_binary_handler.do_execveat_common.sys_execve.return_from_execve
1.53 ± 1% +97.9% 3.03 ± 10% perf-profile.cycles-pp.local_apic_timer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
4.79 ± 1% -65.7% 1.64 ± 5% perf-profile.cycles-pp.long)
1.06 ± 5% -41.8% 0.61 ± 19% perf-profile.cycles-pp.machine_mode)
6.65 ± 1% -30.4% 4.63 ± 7% perf-profile.cycles-pp.memset
1.18 ± 3% +112.3% 2.51 ± 8% perf-profile.cycles-pp.mga_imageblit.bit_putcs.fbcon_putcs.fbcon_redraw.fbcon_scroll
0.01 ±100% +68400.0% 3.43 ± 24% perf-profile.cycles-pp.mga_imageblit.soft_cursor.bit_cursor.fb_flashcursor.process_one_work
0.43 ± 1% +274.0% 1.62 ± 3% perf-profile.cycles-pp.mmap_region.do_mmap.vm_mmap_pgoff.sys_mmap_pgoff.sys_mmap
1.07 ± 3% +230.3% 3.54 ± 4% perf-profile.cycles-pp.mmput.do_exit.do_group_exit.sys_exit_group.entry_SYSCALL_64_fastpath
0.12 ± 3% +734.0% 0.98 ± 12% perf-profile.cycles-pp.mmput.flush_old_exec.load_elf_binary.search_binary_handler.do_execveat_common
1.40 ± 3% +154.7% 3.57 ± 10% perf-profile.cycles-pp.n_tty_write.tty_write.redirected_tty_write.__vfs_write.vfs_write
2.29 ±140% +681.4% 17.89 ± 2% perf-profile.cycles-pp.page_fault
0.81 ± 3% +198.2% 2.42 ± 7% perf-profile.cycles-pp.parse_dep_file
1.37 ± 1% +168.7% 3.69 ± 8% perf-profile.cycles-pp.path_openat.do_filp_open.do_sys_open.sys_open.entry_SYSCALL_64_fastpath
0.02 ± 19% +3733.3% 0.86 ± 23% perf-profile.cycles-pp.pick_next_task_fair.__schedule.schedule.exit_to_usermode_loop.prepare_exit_to_usermode
0.06 ± 13% +1132.0% 0.77 ± 39% perf-profile.cycles-pp.pipe_write.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath
1.10 ± 4% -88.6% 0.12 ± 35% perf-profile.cycles-pp.pop_scope
0.06 ± 0% +2345.8% 1.47 ± 22% perf-profile.cycles-pp.prepare_exit_to_usermode.retint_user
0.01 ±173% +18750.0% 0.94 ± 26% perf-profile.cycles-pp.print_context_stack.dump_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity
0.01 ±100% +70200.0% 3.52 ± 23% perf-profile.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
0.00 ± -1% +Inf% 0.85 ± 18% perf-profile.cycles-pp.process_timeout.call_timer_fn.run_timer_softirq.__do_softirq.irq_exit
0.01 ± 0% +10850.0% 1.09 ± 17% perf-profile.cycles-pp.rcu_gp_kthread.kthread.ret_from_fork
0.74 ± 2% +58.2% 1.17 ± 6% perf-profile.cycles-pp.read
1.40 ± 3% +154.7% 3.57 ± 10% perf-profile.cycles-pp.redirected_tty_write.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath
0.36 ± 3% +326.1% 1.51 ± 3% perf-profile.cycles-pp.release_pages.free_pages_and_swap_cache.tlb_flush_mmu_free.tlb_finish_mmu.exit_mmap
0.12 ± 21% +4734.0% 6.04 ± 11% perf-profile.cycles-pp.ret_from_fork
0.00 ± -1% +Inf% 1.55 ± 22% perf-profile.cycles-pp.retint_user
0.39 ± 7% +540.6% 2.48 ± 8% perf-profile.cycles-pp.return_from_execve.execve
0.03 ± 0% +3166.7% 0.98 ± 17% perf-profile.cycles-pp.run_timer_softirq.__do_softirq.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
0.01 ± 87% +8580.0% 1.08 ± 26% perf-profile.cycles-pp.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.activate_task
0.04 ± 0% +3325.0% 1.37 ± 23% perf-profile.cycles-pp.schedule.exit_to_usermode_loop.prepare_exit_to_usermode.retint_user
1.34 ± 3% +103.2% 2.72 ± 9% perf-profile.cycles-pp.scrup.lf.do_con_trol.do_con_write.con_write
0.28 ± 5% +558.6% 1.83 ± 13% perf-profile.cycles-pp.search_binary_handler.do_execveat_common.sys_execve.return_from_execve.execve
1.82 ± 1% +130.4% 4.21 ± 4% perf-profile.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt
0.01 ±100% +68400.0% 3.43 ± 24% perf-profile.cycles-pp.soft_cursor.bit_cursor.fb_flashcursor.process_one_work.worker_thread
0.39 ± 6% +539.1% 2.49 ± 8% perf-profile.cycles-pp.sys_execve.return_from_execve.execve
0.01 ±-10000% +33600.0% 3.37 ± 11% perf-profile.cycles-pp.sys_exit_group.entry_SYSCALL_64_fastpath
0.06 ± 6% +1472.0% 0.98 ± 3% perf-profile.cycles-pp.sys_mmap.entry_SYSCALL_64_fastpath
0.06 ± 6% +1464.0% 0.98 ± 3% perf-profile.cycles-pp.sys_mmap_pgoff.sys_mmap.entry_SYSCALL_64_fastpath
0.00 ±173% +83200.0% 2.08 ± 14% perf-profile.cycles-pp.sys_open.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 1.39 ± 10% perf-profile.cycles-pp.sys_read.entry_SYSCALL_64_fastpath
0.67 ± 6% +67.8% 1.12 ± 4% perf-profile.cycles-pp.sys_read.entry_SYSCALL_64_fastpath.read
0.01 ± 57% +11500.0% 0.87 ± 42% perf-profile.cycles-pp.sys_write.entry_SYSCALL_64_fastpath
1.52 ± 3% +149.8% 3.79 ± 8% perf-profile.cycles-pp.sys_write.entry_SYSCALL_64_fastpath.write
1.33 ± 1% +65.7% 2.20 ± 8% perf-profile.cycles-pp.tick_sched_handle.isra.17.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.local_apic_timer_interrupt
1.36 ± 1% +109.2% 2.84 ± 7% perf-profile.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.local_apic_timer_interrupt.smp_apic_timer_interrupt
0.36 ± 4% +343.4% 1.58 ± 3% perf-profile.cycles-pp.tlb_finish_mmu.exit_mmap.mmput.do_exit.do_group_exit
0.35 ± 3% +345.1% 1.58 ± 3% perf-profile.cycles-pp.tlb_flush_mmu_free.tlb_finish_mmu.exit_mmap.mmput.do_exit
7.40 ± 1% -72.7% 2.02 ± 11% perf-profile.cycles-pp.tree_node*)
0.00 ± -1% +Inf% 0.82 ± 18% perf-profile.cycles-pp.try_to_wake_up.wake_up_process.process_timeout.call_timer_fn.run_timer_softirq
1.40 ± 3% +154.7% 3.57 ± 10% perf-profile.cycles-pp.tty_write.redirected_tty_write.__vfs_write.vfs_write.sys_write
0.15 ± 3% +631.0% 1.06 ± 5% perf-profile.cycles-pp.unlock_page.filemap_map_pages.handle_mm_fault.__do_page_fault.do_page_fault
0.67 ± 2% +142.9% 1.63 ± 9% perf-profile.cycles-pp.unmap_page_range.unmap_single_vma.unmap_vmas.exit_mmap.mmput
0.66 ± 3% +119.8% 1.45 ± 5% perf-profile.cycles-pp.unmap_single_vma.unmap_vmas.exit_mmap.mmput.do_exit
0.66 ± 3% +121.3% 1.46 ± 4% perf-profile.cycles-pp.unmap_vmas.exit_mmap.mmput.do_exit.do_group_exit
1.30 ± 0% +61.8% 2.11 ± 7% perf-profile.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
0.00 ± -1% +Inf% 1.35 ± 9% perf-profile.cycles-pp.vfs_read.sys_read.entry_SYSCALL_64_fastpath
0.64 ± 14% +73.2% 1.10 ± 4% perf-profile.cycles-pp.vfs_read.sys_read.entry_SYSCALL_64_fastpath.read
0.01 ± 0% +8200.0% 0.83 ± 43% perf-profile.cycles-pp.vfs_write.sys_write.entry_SYSCALL_64_fastpath
1.50 ± 3% +151.7% 3.79 ± 8% perf-profile.cycles-pp.vfs_write.sys_write.entry_SYSCALL_64_fastpath.write
0.07 ± 10% +1289.3% 0.97 ± 2% perf-profile.cycles-pp.vm_mmap_pgoff.sys_mmap_pgoff.sys_mmap.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 0.85 ± 18% perf-profile.cycles-pp.wake_up_process.process_timeout.call_timer_fn.run_timer_softirq.__do_softirq
0.01 ±100% +70300.0% 3.52 ± 23% perf-profile.cycles-pp.worker_thread.kthread.ret_from_fork
0.44 ± 1% +266.3% 1.60 ± 15% perf-profile.cycles-pp.wp_page_copy.isra.58.do_wp_page.handle_mm_fault.__do_page_fault.do_page_fault
1.53 ± 3% +150.1% 3.83 ± 8% perf-profile.cycles-pp.write
=========================================================================================
compiler/cpufreq_governor/disk/filesize/fs/iterations/kconfig/nr_directories/nr_files_per_directory/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase:
gcc-4.9/performance/1HDD/16MB/f2fs/1x/x86_64-rhel/16d/256fpd/32t/debian-x86_64-2015-02-07.cgz/fsyncBeforeClose/nhm4/60G/fsmark
commit:
8025943c32f5de29b3fd548e917e24424297ca73
0aefb957de5f2a1d9467126114146b6c793cf180
8025943c32f5de29 0aefb957de5f2a1d9467126114
---------------- --------------------------
%stddev %change %stddev
\ | \
791288 ± 2% +1263.4% 10788460 ± 5% fsmark.app_overhead
494.86 ± 0% +2.3% 506.45 ± 0% fsmark.time.elapsed_time
494.86 ± 0% +2.3% 506.45 ± 0% fsmark.time.elapsed_time.max
29598 ± 2% +33.6% 39544 ± 0% fsmark.time.involuntary_context_switches
19.50 ± 2% -59.0% 8.00 ± 0% fsmark.time.percent_of_cpu_this_job_got
98.06 ± 1% -55.9% 43.25 ± 0% fsmark.time.system_time
882595 ± 0% -92.5% 65869 ± 1% fsmark.time.voluntary_context_switches
45988 ± 6% -22.3% 35748 ± 4% meminfo.DirectMap4k
1983 ± 0% -99.0% 20.25 ± 2% uptime.idle
35575 ± 1% -87.7% 4386 ± 1% softirqs.RCU
42625 ± 0% -100.0% 0.00 ± -1% softirqs.SCHED
75521 ± 0% -45.9% 40839 ± 0% softirqs.TIMER
3.14 ± 0% +271.5% 11.64 ± 0% turbostat.%Busy
100.50 ± 0% +286.1% 388.00 ± 0% turbostat.Avg_MHz
14.76 ± 1% +498.6% 88.36 ± 0% turbostat.CPU%c1
21.60 ± 1% -100.0% 0.00 ± -1% turbostat.CPU%c3
60.51 ± 0% -100.0% 0.00 ± -1% turbostat.CPU%c6
1252 ± 0% -12.6% 1094 ± 1% time.file_system_inputs
29598 ± 2% +33.6% 39544 ± 0% time.involuntary_context_switches
19.50 ± 2% -59.0% 8.00 ± 0% time.percent_of_cpu_this_job_got
98.06 ± 1% -55.9% 43.25 ± 0% time.system_time
0.69 ± 4% -47.4% 0.36 ± 2% time.user_time
882595 ± 0% -92.5% 65869 ± 1% time.voluntary_context_switches
126962 ± 0% -2.3% 124044 ± 0% vmstat.io.bo
256.25 ± 3% -13.0% 223.00 ± 5% vmstat.memory.buff
17.75 ± 2% -8.5% 16.25 ± 2% vmstat.procs.b
0.00 ± 0% +Inf% 2.00 ± 0% vmstat.procs.r
4414 ± 1% -50.3% 2192 ± 0% vmstat.system.cs
630.50 ± 0% -36.8% 398.75 ± 0% vmstat.system.in
1.457e+08 ± 5% -55.0% 65592950 ± 2% cpuidle.C1-NHM.time
420131 ± 2% -94.0% 25205 ± 1% cpuidle.C1-NHM.usage
157273 ± 2% -75.3% 38902 ± 1% cpuidle.C1E-NHM.usage
5.06e+08 ± 1% -57.8% 2.135e+08 ± 0% cpuidle.C3-NHM.time
161592 ± 1% -62.4% 60815 ± 0% cpuidle.C3-NHM.usage
3.047e+09 ± 0% -98.9% 32859965 ± 5% cpuidle.C6-NHM.time
130268 ± 0% -94.4% 7285 ± 6% cpuidle.C6-NHM.usage
910716 ± 59% -93.5% 59009 ±139% cpuidle.POLL.time
173.25 ± 6% -87.2% 22.25 ± 16% cpuidle.POLL.usage
2785 ± 7% -91.7% 231.00 ± 4% proc-vmstat.allocstall
41004 ± 50% -100.0% 0.00 ± -1% proc-vmstat.compact_free_scanned
928.75 ± 23% -98.6% 13.00 ± 24% proc-vmstat.kswapd_high_wmark_hit_quickly
11778 ± 2% +40.4% 16541 ± 0% proc-vmstat.kswapd_low_wmark_hit_quickly
188.75 ± 28% -89.1% 20.50 ± 73% proc-vmstat.nr_pages_scanned
8109 ± 15% -97.9% 168.25 ± 36% proc-vmstat.nr_vmscan_immediate_reclaim
13663 ± 3% +22.1% 16679 ± 0% proc-vmstat.pageoutrun
21477 ± 0% -88.3% 2523 ± 2% proc-vmstat.pgactivate
1398 ± 12% -84.6% 214.75 ± 23% proc-vmstat.pgalloc_dma
1464 ± 75% -90.7% 135.75 ± 31% proc-vmstat.pgrotated
463204 ± 5% -82.4% 81721 ± 1% proc-vmstat.pgscan_direct_dma32
17825 ± 28% -99.7% 54.25 ± 7% proc-vmstat.pgscan_kswapd_dma
382311 ± 7% -92.0% 30436 ± 3% proc-vmstat.pgsteal_direct_dma32
376514 ± 3% -37.6% 235044 ± 0% proc-vmstat.slabs_scanned
31030 ± 54% +532.0% 196129 ± 37% latency_stats.avg.alloc_nid.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_get_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write.SyS_write
97723 ± 60% -100.0% 0.00 ± -1% latency_stats.avg.call_rwsem_down_read_failed.f2fs_write_data_page.[f2fs].__f2fs_writepage.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync
55449 ± 23% -100.0% 0.00 ± -1% latency_stats.avg.call_rwsem_down_write_failed.f2fs_submit_merged_bio.[f2fs].sync_node_pages.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
166272 ± 82% -100.0% 0.00 ± -1% latency_stats.avg.get_request.blk_queue_bio.generic_make_request.submit_bio.__submit_merged_bio.[f2fs].f2fs_submit_merged_bio.[f2fs].ra_meta_pages.[f2fs].build_free_nids.[f2fs].alloc_nid.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_get_block.[f2fs]
77955 ±101% -100.0% 0.00 ± -1% latency_stats.avg.get_request.blk_queue_bio.generic_make_request.submit_bio.submit_bio_wait.f2fs_issue_flush.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
461592 ± 64% -100.0% 0.00 ± -1% latency_stats.avg.wait_on_page_bit.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].set_data_blkaddr.[f2fs].do_write_data_page.[f2fs].f2fs_write_data_page.[f2fs].__f2fs_writepage.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range
0.00 ± -1% +Inf% 13758 ± 10% latency_stats.avg.wait_on_page_bit.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].update_inode.[f2fs].update_inode_page.[f2fs].f2fs_write_end.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write
10810 ± 0% -99.8% 17.25 ±173% latency_stats.hits.call_rwsem_down_read_failed.get_node_info.[f2fs].new_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_get_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write
174329 ± 0% -99.9% 149.50 ±173% latency_stats.hits.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].f2fs_convert_inline_inode.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write
181340 ± 0% -99.9% 159.00 ±173% latency_stats.hits.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write.SyS_write
149512 ± 0% -99.9% 207.50 ±173% latency_stats.hits.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_get_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs]
194491 ± 0% -99.9% 157.50 ±173% latency_stats.hits.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].update_inode_page.[f2fs].f2fs_write_end.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write
16907 ± 0% -99.9% 25.25 ±173% latency_stats.hits.call_rwsem_down_write_failed.get_node_info.[f2fs].alloc_nid.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_get_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write
16263 ± 0% -99.8% 25.00 ±173% latency_stats.hits.call_rwsem_down_write_failed.set_node_addr.[f2fs].new_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_get_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write
47934 ± 0% +786.0% 424715 ± 1% latency_stats.hits.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
11445 ± 56% +214.8% 36035 ± 92% latency_stats.max.alloc_nid.[f2fs].f2fs_new_inode.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
146916 ± 32% -100.0% 0.00 ± -1% latency_stats.max.call_rwsem_down_read_failed.f2fs_write_data_page.[f2fs].__f2fs_writepage.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync
8154 ± 64% -90.1% 807.75 ±173% latency_stats.max.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_get_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs]
215029 ± 24% -100.0% 0.00 ± -1% latency_stats.max.call_rwsem_down_write_failed.f2fs_submit_merged_bio.[f2fs].sync_node_pages.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
6281 ± 73% -98.6% 87.25 ±173% latency_stats.max.call_rwsem_down_write_failed.get_node_info.[f2fs].alloc_nid.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_get_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write
6922 ± 89% -98.8% 81.75 ±173% latency_stats.max.call_rwsem_down_write_failed.set_node_addr.[f2fs].new_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_get_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write
211936 ± 76% -100.0% 0.00 ± -1% latency_stats.max.get_request.blk_queue_bio.generic_make_request.submit_bio.__submit_merged_bio.[f2fs].f2fs_submit_merged_bio.[f2fs].ra_meta_pages.[f2fs].build_free_nids.[f2fs].alloc_nid.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_get_block.[f2fs]
88752 ±104% -100.0% 0.00 ± -1% latency_stats.max.get_request.blk_queue_bio.generic_make_request.submit_bio.submit_bio_wait.f2fs_issue_flush.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
17473022 ± 34% +100.2% 34977406 ± 88% latency_stats.max.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
461592 ± 64% -100.0% 0.00 ± -1% latency_stats.max.wait_on_page_bit.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].set_data_blkaddr.[f2fs].do_write_data_page.[f2fs].f2fs_write_data_page.[f2fs].__f2fs_writepage.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range
0.00 ± -1% +Inf% 19842 ± 0% latency_stats.max.wait_on_page_bit.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].update_inode.[f2fs].update_inode_page.[f2fs].f2fs_write_end.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write
2798443 ± 56% +1633.2% 48502387 ± 33% latency_stats.sum.alloc_nid.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_get_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write.SyS_write
41166275 ± 3% -99.5% 190477 ± 34% latency_stats.sum.balance_dirty_pages.balance_dirty_pages_ratelimited.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
147025 ± 32% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.f2fs_write_data_page.[f2fs].__f2fs_writepage.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync
685660 ± 2% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].alloc_nid.[f2fs].f2fs_new_inode.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
360591 ± 0% -99.7% 1233 ±173% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].alloc_nid.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_get_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write
1694024 ± 1% -99.9% 1934 ±173% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].new_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_get_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write
1039930 ± 2% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].new_node_page.[f2fs].new_inode_page.[f2fs].init_inode_metadata.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open.do_sys_open.SyS_open
25396916 ± 1% -99.9% 12834 ±173% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].f2fs_convert_inline_inode.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write
26847777 ± 1% -99.9% 18782 ±173% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write.SyS_write
388761 ± 11% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].do_write_data_page.[f2fs].f2fs_write_data_page.[f2fs].__f2fs_writepage.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range
21200825 ± 1% -99.9% 17555 ±173% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_get_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs]
1646843 ± 2% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].get_new_data_page.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open
320326 ± 3% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].get_read_data_page.[f2fs].find_data_page.[f2fs].f2fs_find_entry.[f2fs].f2fs_lookup.[f2fs].lookup_real.path_openat.do_filp_open
27653 ± 17% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].update_inode_page.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open.do_sys_open.SyS_open
27613150 ± 1% -99.9% 16296 ±173% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].update_inode_page.[f2fs].f2fs_write_end.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write
170377 ± 12% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.get_node_info.[f2fs].write_data_page.[f2fs].do_write_data_page.[f2fs].f2fs_write_data_page.[f2fs].__f2fs_writepage.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range.f2fs_sync_file.[f2fs]
4452 ±101% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_write_failed.f2fs_init_extent_tree.[f2fs].f2fs_new_inode.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
899019 ± 40% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_write_failed.f2fs_submit_merged_bio.[f2fs].sync_node_pages.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
893402 ± 5% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_write_failed.get_node_info.[f2fs].alloc_nid.[f2fs].f2fs_new_inode.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
1358444 ± 1% -99.9% 1789 ±173% latency_stats.sum.call_rwsem_down_write_failed.get_node_info.[f2fs].alloc_nid.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_get_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write
9697 ± 61% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_write_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].get_new_data_page.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open
19206 ± 55% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_write_failed.get_node_info.[f2fs].read_node_page.[f2fs].get_node_page.[f2fs].get_dnode_of_data.[f2fs].get_read_data_page.[f2fs].find_data_page.[f2fs].f2fs_find_entry.[f2fs].f2fs_lookup.[f2fs].lookup_real.path_openat.do_filp_open
1493713 ± 1% -99.9% 2025 ±173% latency_stats.sum.call_rwsem_down_write_failed.set_node_addr.[f2fs].new_node_page.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_get_block.[f2fs].f2fs_write_begin.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write
935312 ± 3% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_write_failed.set_node_addr.[f2fs].new_node_page.[f2fs].new_inode_page.[f2fs].init_inode_metadata.[f2fs].__f2fs_add_link.[f2fs].f2fs_create.[f2fs].vfs_create.path_openat.do_filp_open.do_sys_open.SyS_open
332545 ± 82% -100.0% 0.00 ± -1% latency_stats.sum.get_request.blk_queue_bio.generic_make_request.submit_bio.__submit_merged_bio.[f2fs].f2fs_submit_merged_bio.[f2fs].ra_meta_pages.[f2fs].build_free_nids.[f2fs].alloc_nid.[f2fs].get_dnode_of_data.[f2fs].f2fs_reserve_block.[f2fs].f2fs_get_block.[f2fs]
3296371 ± 13% -88.0% 396462 ± 17% latency_stats.sum.get_request.blk_queue_bio.generic_make_request.submit_bio.__submit_merged_bio.[f2fs].f2fs_submit_page_mbio.[f2fs].do_write_page.[f2fs].write_node_page.[f2fs].f2fs_write_node_page.[f2fs].sync_node_pages.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range
1066734 ±158% -100.0% 0.00 ± -1% latency_stats.sum.get_request.blk_queue_bio.generic_make_request.submit_bio.submit_bio_wait.f2fs_issue_flush.[f2fs].f2fs_sync_file.[f2fs].vfs_fsync_range.do_fsync.SyS_fsync.entry_SYSCALL_64_fastpath
461592 ± 64% -100.0% 0.00 ± -1% latency_stats.sum.wait_on_page_bit.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].set_data_blkaddr.[f2fs].do_write_data_page.[f2fs].f2fs_write_data_page.[f2fs].__f2fs_writepage.[f2fs].f2fs_write_cache_pages.[f2fs].f2fs_write_data_pages.[f2fs].do_writepages.__filemap_fdatawrite_range.filemap_write_and_wait_range
0.00 ± -1% +Inf% 389579 ± 22% latency_stats.sum.wait_on_page_bit.f2fs_wait_on_page_writeback.[f2fs].f2fs_wait_on_page_writeback.[f2fs].update_inode.[f2fs].update_inode_page.[f2fs].f2fs_write_end.[f2fs].generic_perform_write.__generic_file_write_iter.generic_file_write_iter.f2fs_file_write_iter.[f2fs].__vfs_write.vfs_write
42496 ± 33% -76.3% 10051 ± 62% latency_stats.sum.wait_on_page_bit.find_data_page.[f2fs].f2fs_find_entry.[f2fs].f2fs_lookup.[f2fs].lookup_real.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
6594 ± 2% -22.7% 5096 ± 0% slabinfo.Acpi-Operand.active_objs
6594 ± 2% -22.7% 5096 ± 0% slabinfo.Acpi-Operand.num_objs
331.00 ± 5% -76.4% 78.00 ± 0% slabinfo.Acpi-ParseExt.active_objs
331.00 ± 5% -76.4% 78.00 ± 0% slabinfo.Acpi-ParseExt.num_objs
360.00 ± 0% -50.0% 180.00 ± 0% slabinfo.RAW.active_objs
360.00 ± 0% -50.0% 180.00 ± 0% slabinfo.RAW.num_objs
178.50 ± 15% -81.0% 34.00 ± 0% slabinfo.UDP.active_objs
178.50 ± 15% -81.0% 34.00 ± 0% slabinfo.UDP.num_objs
2577 ± 5% -54.5% 1172 ± 0% slabinfo.anon_vma.active_objs
2577 ± 5% -54.5% 1172 ± 0% slabinfo.anon_vma.num_objs
456.25 ± 6% -84.0% 73.00 ± 0% slabinfo.avc_xperms_node.active_objs
456.25 ± 6% -84.0% 73.00 ± 0% slabinfo.avc_xperms_node.num_objs
185.25 ± 17% -78.9% 39.00 ± 0% slabinfo.bdev_cache.active_objs
185.25 ± 17% -78.9% 39.00 ± 0% slabinfo.bdev_cache.num_objs
716.25 ± 0% -51.7% 346.00 ± 0% slabinfo.blkdev_requests.active_objs
767.00 ± 0% -54.9% 346.00 ± 0% slabinfo.blkdev_requests.num_objs
769.75 ± 1% -86.7% 102.00 ± 0% slabinfo.ext4_extent_status.active_objs
769.75 ± 1% -86.7% 102.00 ± 0% slabinfo.ext4_extent_status.num_objs
2140 ± 1% -18.8% 1738 ± 0% slabinfo.f2fs_extent_tree.active_objs
2140 ± 1% -18.8% 1738 ± 0% slabinfo.f2fs_extent_tree.num_objs
1841 ± 2% -11.2% 1634 ± 0% slabinfo.f2fs_inode_cache.active_objs
1865 ± 2% -11.5% 1650 ± 0% slabinfo.f2fs_inode_cache.num_objs
263.25 ± 12% -85.2% 39.00 ± 0% slabinfo.file_lock_cache.active_objs
263.25 ± 12% -85.2% 39.00 ± 0% slabinfo.file_lock_cache.num_objs
388.75 ± 5% -64.8% 137.00 ± 0% slabinfo.files_cache.active_objs
388.75 ± 5% -64.8% 137.00 ± 0% slabinfo.files_cache.num_objs
12292 ± 11% -84.8% 1870 ± 0% slabinfo.free_nid.active_objs
12339 ± 11% -84.8% 1870 ± 0% slabinfo.free_nid.num_objs
495.00 ± 3% -24.2% 375.00 ± 0% slabinfo.idr_layer_cache.active_objs
495.00 ± 3% -24.2% 375.00 ± 0% slabinfo.idr_layer_cache.num_objs
7265 ± 3% -60.1% 2897 ± 0% slabinfo.jbd2_revoke_record_s.active_objs
7271 ± 3% -60.2% 2897 ± 0% slabinfo.jbd2_revoke_record_s.num_objs
963.00 ± 3% -34.1% 634.25 ± 1% slabinfo.kmalloc-1024.active_objs
963.00 ± 3% -33.6% 639.25 ± 0% slabinfo.kmalloc-1024.num_objs
2087 ± 1% -44.9% 1151 ± 0% slabinfo.kmalloc-128.active_objs
2087 ± 1% -44.9% 1151 ± 0% slabinfo.kmalloc-128.num_objs
4979 ± 2% -20.6% 3954 ± 2% slabinfo.kmalloc-16.active_objs
4979 ± 2% -20.6% 3954 ± 2% slabinfo.kmalloc-16.num_objs
3810 ± 1% -39.4% 2310 ± 0% slabinfo.kmalloc-192.active_objs
3810 ± 1% -39.4% 2310 ± 0% slabinfo.kmalloc-192.num_objs
835.75 ± 2% -16.4% 699.00 ± 0% slabinfo.kmalloc-2048.num_objs
1762 ± 0% -17.5% 1454 ± 0% slabinfo.kmalloc-256.active_objs
2586 ± 5% -32.3% 1752 ± 6% slabinfo.kmalloc-256.num_objs
6879 ± 3% -40.5% 4096 ± 0% slabinfo.kmalloc-32.active_objs
6879 ± 3% -40.5% 4096 ± 0% slabinfo.kmalloc-32.num_objs
1719 ± 9% -48.1% 892.25 ± 1% slabinfo.kmalloc-512.active_objs
1769 ± 7% -34.4% 1160 ± 3% slabinfo.kmalloc-512.num_objs
6656 ± 5% -38.5% 4096 ± 0% slabinfo.kmalloc-8.active_objs
6656 ± 5% -38.5% 4096 ± 0% slabinfo.kmalloc-8.num_objs
1438 ± 4% -29.9% 1008 ± 0% slabinfo.kmalloc-96.active_objs
1438 ± 4% -29.9% 1008 ± 0% slabinfo.kmalloc-96.num_objs
240.00 ± 6% -46.7% 128.00 ± 0% slabinfo.kmem_cache.active_objs
240.00 ± 6% -46.7% 128.00 ± 0% slabinfo.kmem_cache.num_objs
352.00 ± 9% -63.6% 128.00 ± 0% slabinfo.kmem_cache_node.active_objs
352.00 ± 9% -63.6% 128.00 ± 0% slabinfo.kmem_cache_node.num_objs
401.00 ± 4% -68.6% 126.00 ± 0% slabinfo.mnt_cache.active_objs
401.00 ± 4% -68.6% 126.00 ± 0% slabinfo.mnt_cache.num_objs
1569 ± 8% -33.5% 1043 ± 3% slabinfo.proc_inode_cache.active_objs
1689 ± 5% -28.8% 1202 ± 3% slabinfo.proc_inode_cache.num_objs
1157 ± 1% -13.0% 1007 ± 0% slabinfo.shmem_inode_cache.active_objs
1157 ± 1% -13.0% 1007 ± 0% slabinfo.shmem_inode_cache.num_objs
354.25 ± 1% -30.2% 247.25 ± 0% slabinfo.sighand_cache.active_objs
354.50 ± 1% -28.3% 254.00 ± 0% slabinfo.sighand_cache.num_objs
509.00 ± 0% -42.6% 292.25 ± 2% slabinfo.signal_cache.active_objs
509.00 ± 0% -41.3% 299.00 ± 0% slabinfo.signal_cache.num_objs
200.00 ± 0% -87.5% 25.00 ± 0% slabinfo.sigqueue.active_objs
200.00 ± 0% -87.5% 25.00 ± 0% slabinfo.sigqueue.num_objs
281.25 ± 3% -46.7% 150.00 ± 0% slabinfo.sock_inode_cache.active_objs
281.25 ± 3% -46.7% 150.00 ± 0% slabinfo.sock_inode_cache.num_objs
1978 ± 3% -9.3% 1794 ± 0% slabinfo.trace_event_file.active_objs
1978 ± 3% -9.3% 1794 ± 0% slabinfo.trace_event_file.num_objs
4820 ± 3% -50.4% 2392 ± 0% slabinfo.vm_area_struct.active_objs
4893 ± 3% -51.1% 2392 ± 0% slabinfo.vm_area_struct.num_objs
=========================================================================================
compiler/cpufreq_governor/entries/iterations/kconfig/nr_threads/rootfs/tbox_group/testcase:
gcc-4.9/performance/512/32x/x86_64-rhel/200%/debian-x86_64-2015-02-07.cgz/lkp-ne04/tlbflush
commit:
8025943c32f5de29b3fd548e917e24424297ca73
0aefb957de5f2a1d9467126114146b6c793cf180
8025943c32f5de29 0aefb957de5f2a1d9467126114
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \
40.25 ± 1% -70.2% 12.00 ± 0% tlbflush.mem_acc_cost_ns_time
24194 ± 0% +224.0% 78392 ± 0% tlbflush.mem_acc_time_thread_ms
13.50 ± 8% -22.2% 10.50 ± 8% tlbflush.munmap_ms
93302 ± 1% -11.4% 82676 ± 0% tlbflush.munmap_ns_time
1850 ± 2% -96.4% 67.44 ± 0% tlbflush.time.elapsed_time
1850 ± 2% -96.4% 67.44 ± 0% tlbflush.time.elapsed_time.max
15481 ± 7% -95.0% 769.50 ± 2% tlbflush.time.involuntary_context_switches
1089421 ± 0% -1.5% 1073125 ± 0% tlbflush.time.minor_page_faults
106.00 ± 4% -94.3% 6.00 ± 0% tlbflush.time.percent_of_cpu_this_job_got
1857 ± 3% -99.8% 3.26 ± 0% tlbflush.time.system_time
114.14 ± 0% -99.2% 0.87 ± 1% tlbflush.time.user_time
2.937e+08 ± 1% -99.6% 1149148 ± 0% tlbflush.time.voluntary_context_switches
1884 ± 2% -95.0% 93.39 ± 4% uptime.boot
29615 ± 2% -99.7% 77.05 ± 5% uptime.idle
189778 ± 1% -99.3% 1300 ± 5% softirqs.RCU
449769 ± 2% -100.0% 0.00 ± -1% softirqs.SCHED
527955 ± 2% -97.3% 14115 ± 2% softirqs.TIMER
0.00 ± 0% +Inf% 1.00 ± 0% vmstat.procs.r
318492 ± 3% -91.4% 27411 ± 0% vmstat.system.cs
150983 ± 3% -94.5% 8329 ± 0% vmstat.system.in
2170127 ± 1% -44.1% 1212584 ± 0% numa-numastat.node0.local_node
2170130 ± 1% -44.1% 1212584 ± 0% numa-numastat.node0.numa_hit
2186570 ± 2% -100.0% 0.00 ± -1% numa-numastat.node1.local_node
2186572 ± 2% -100.0% 0.75 ± 57% numa-numastat.node1.numa_hit
1850 ± 2% -96.4% 67.44 ± 0% time.elapsed_time
1850 ± 2% -96.4% 67.44 ± 0% time.elapsed_time.max
15481 ± 7% -95.0% 769.50 ± 2% time.involuntary_context_switches
106.00 ± 4% -94.3% 6.00 ± 0% time.percent_of_cpu_this_job_got
1857 ± 3% -99.8% 3.26 ± 0% time.system_time
114.14 ± 0% -99.2% 0.87 ± 1% time.user_time
2.937e+08 ± 1% -99.6% 1149148 ± 0% time.voluntary_context_switches
23322 ± 1% -9.6% 21088 ± 3% meminfo.Active(anon)
23069 ± 1% -10.0% 20761 ± 3% meminfo.AnonPages
448939 ± 0% -28.2% 322421 ± 0% meminfo.Committed_AS
75620 ± 9% -55.5% 33636 ± 9% meminfo.DirectMap4k
5124 ± 0% -9.4% 4642 ± 0% meminfo.KernelStack
36775 ± 0% -10.0% 33099 ± 0% meminfo.SReclaimable
34169 ± 0% -37.2% 21461 ± 1% meminfo.SUnreclaim
70945 ± 0% -23.1% 54561 ± 0% meminfo.Slab
10.16 ± 3% +25.7% 12.78 ± 0% turbostat.%Busy
193.25 ± 3% +111.3% 408.25 ± 0% turbostat.Avg_MHz
1902 ± 0% +67.8% 3192 ± 0% turbostat.Bzy_MHz
76.39 ± 2% +14.2% 87.22 ± 0% turbostat.CPU%c1
0.17 ± 36% -100.0% 0.00 ± -1% turbostat.CPU%c3
13.28 ± 15% -100.0% 0.00 ± -1% turbostat.CPU%c6
0.17 ± 37% -100.0% 0.00 ± -1% turbostat.Pkg%pc3
0.93 ± 6% -100.0% 0.00 ± -1% turbostat.Pkg%pc6
3.452e+08 ± 13% -100.0% 33527 ± 8% cpuidle.C1-NHM.time
17560412 ± 6% -100.0% 1062 ± 6% cpuidle.C1-NHM.usage
1.618e+10 ± 2% -99.8% 30101860 ± 0% cpuidle.C1E-NHM.time
2.734e+08 ± 2% -99.8% 561256 ± 0% cpuidle.C1E-NHM.usage
46772830 ± 23% -99.7% 138721 ± 13% cpuidle.C3-NHM.time
8532 ± 5% -96.8% 269.75 ± 10% cpuidle.C3-NHM.usage
1.008e+10 ± 8% -99.7% 29131409 ± 1% cpuidle.C6-NHM.time
294031 ± 11% -99.4% 1634 ± 1% cpuidle.C6-NHM.usage
18990826 ± 6% -99.3% 125616 ± 50% cpuidle.POLL.time
1217914 ± 1% -100.0% 142.75 ± 13% cpuidle.POLL.usage
5830 ± 1% -9.6% 5268 ± 3% proc-vmstat.nr_active_anon
5767 ± 1% -10.1% 5187 ± 3% proc-vmstat.nr_anon_pages
231.50 ± 2% -78.4% 50.00 ± 13% proc-vmstat.nr_dirtied
9193 ± 0% -10.0% 8274 ± 0% proc-vmstat.nr_slab_reclaimable
8542 ± 0% -37.2% 5364 ± 1% proc-vmstat.nr_slab_unreclaimable
229.00 ± 1% -59.7% 92.25 ± 5% proc-vmstat.nr_written
8368 ± 2% -100.0% 0.00 ± 0% proc-vmstat.numa_hint_faults
6412 ± 3% -100.0% 0.00 ± 0% proc-vmstat.numa_hint_faults_local
4352636 ± 1% -72.2% 1208291 ± 0% proc-vmstat.numa_hit
4352630 ± 1% -72.2% 1208290 ± 0% proc-vmstat.numa_local
4911 ± 33% -100.0% 0.00 ± -1% proc-vmstat.numa_pages_migrated
44578 ± 37% -100.0% 0.00 ± 0% proc-vmstat.numa_pte_updates
809846 ± 2% -42.6% 464591 ± 0% proc-vmstat.pgalloc_dma32
3707267 ± 1% -79.7% 753227 ± 0% proc-vmstat.pgalloc_normal
4938958 ± 1% -75.1% 1230331 ± 0% proc-vmstat.pgfault
4476805 ± 1% -73.6% 1180993 ± 0% proc-vmstat.pgfree
4911 ± 33% -100.0% 0.00 ± -1% proc-vmstat.pgmigrate_success
15521 ± 25% -100.0% 0.00 ± -1% latency_stats.avg.cgroup_kn_lock_live.__cgroup_procs_write.cgroup_procs_write.cgroup_file_write.kernfs_fop_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
13899367 ± 31% -100.0% 0.00 ± -1% latency_stats.avg.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
15761 ± 25% -100.0% 0.00 ± -1% latency_stats.avg.proc_cgroup_show.proc_single_show.seq_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
30956 ± 1% -95.7% 1316 ± 0% latency_stats.hits.do_wait.SyS_wait4.entry_SYSCALL_64_fastpath
203077 ± 3% -100.0% 47.75 ± 5% latency_stats.hits.futex_wait_queue_me.futex_wait.do_futex.SyS_futex.entry_SYSCALL_64_fastpath
2.935e+08 ± 1% -99.6% 1148536 ± 0% latency_stats.hits.hrtimer_nanosleep.SyS_nanosleep.entry_SYSCALL_64_fastpath
326462 ± 2% -73.9% 85271 ± 1% latency_stats.hits.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
15521 ± 25% -100.0% 0.00 ± -1% latency_stats.max.cgroup_kn_lock_live.__cgroup_procs_write.cgroup_procs_write.cgroup_file_write.kernfs_fop_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
51851682 ± 59% -100.0% 0.00 ± -1% latency_stats.max.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
15761 ± 25% -100.0% 0.00 ± -1% latency_stats.max.proc_cgroup_show.proc_single_show.seq_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
62270 ± 11% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_read_failed.__do_page_fault.do_page_fault.page_fault
9619 ± 3% -100.0% 0.00 ± -1% latency_stats.sum.call_rwsem_down_write_failed.SyS_mprotect.entry_SYSCALL_64_fastpath
15521 ± 25% -100.0% 0.00 ± -1% latency_stats.sum.cgroup_kn_lock_live.__cgroup_procs_write.cgroup_procs_write.cgroup_file_write.kernfs_fop_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
9595382 ± 3% -93.3% 643311 ± 1% latency_stats.sum.do_wait.SyS_wait4.entry_SYSCALL_64_fastpath
480285 ± 4% -99.5% 2564 ± 23% latency_stats.sum.futex_wait_queue_me.futex_wait.do_futex.SyS_futex.entry_SYSCALL_64_fastpath
1.493e+10 ± 1% -99.6% 57819715 ± 0% latency_stats.sum.hrtimer_nanosleep.SyS_nanosleep.entry_SYSCALL_64_fastpath
1.381e+08 ± 45% -100.0% 0.00 ± -1% latency_stats.sum.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
15513065 ± 3% -93.1% 1064895 ± 1% latency_stats.sum.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
15128 ± 17% -100.0% 0.00 ± -1% latency_stats.sum.pipe_wait.pipe_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
15761 ± 25% -100.0% 0.00 ± -1% latency_stats.sum.proc_cgroup_show.proc_single_show.seq_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
50381 ± 28% -97.0% 1488 ± 29% latency_stats.sum.rpc_wait_bit_killable.__rpc_execute.rpc_execute.rpc_run_task.nfs4_call_sync_sequence.[nfsv4]._nfs4_proc_access.[nfsv4].nfs4_proc_access.[nfsv4].nfs_do_access.nfs_permission.__inode_permission.inode_permission.link_path_walk
3307 ± 16% +59.2% 5263 ± 3% numa-vmstat.node0.nr_active_anon
3269 ± 16% +58.5% 5182 ± 3% numa-vmstat.node0.nr_anon_pages
76608 ± 2% +20.1% 91993 ± 0% numa-vmstat.node0.nr_file_pages
18363 ± 8% +83.1% 33630 ± 0% numa-vmstat.node0.nr_inactive_anon
193.75 ± 5% +28.6% 249.25 ± 0% numa-vmstat.node0.nr_kernel_stack
18719 ± 8% +78.9% 33496 ± 0% numa-vmstat.node0.nr_mapped
486.50 ± 10% +86.4% 906.75 ± 0% numa-vmstat.node0.nr_page_table_pages
18412 ± 8% +83.1% 33716 ± 0% numa-vmstat.node0.nr_shmem
4314 ± 4% -11.1% 3834 ± 2% numa-vmstat.node0.nr_slab_unreclaimable
1195184 ± 2% -36.6% 757237 ± 0% numa-vmstat.node0.numa_hit
1193548 ± 2% -36.6% 757237 ± 0% numa-vmstat.node0.numa_local
1635 ± 20% -100.0% 0.00 ± -1% numa-vmstat.node0.numa_other
2524 ± 19% -100.0% 0.00 ± -1% numa-vmstat.node1.nr_active_anon
2499 ± 19% -100.0% 0.00 ± -1% numa-vmstat.node1.nr_anon_pages
74964 ± 2% -22.5% 58104 ± 0% numa-vmstat.node1.nr_file_pages
16719 ± 9% -100.0% 0.00 ± -1% numa-vmstat.node1.nr_inactive_anon
125.50 ± 7% -68.1% 40.00 ± 0% numa-vmstat.node1.nr_kernel_stack
17613 ± 9% -92.6% 1305 ± 0% numa-vmstat.node1.nr_mapped
414.00 ± 12% -100.0% 0.00 ± -1% numa-vmstat.node1.nr_page_table_pages
16757 ± 9% -100.0% 0.00 ± -1% numa-vmstat.node1.nr_shmem
4593 ± 3% -24.4% 3473 ± 3% numa-vmstat.node1.nr_slab_reclaimable
4227 ± 3% -63.8% 1529 ± 4% numa-vmstat.node1.nr_slab_unreclaimable
1211585 ± 3% -94.7% 64037 ± 0% numa-vmstat.node1.numa_hit
1148733 ± 3% -100.0% 0.00 ± -1% numa-vmstat.node1.numa_local
13229 ± 16% +59.3% 21068 ± 3% numa-meminfo.node0.Active(anon)
13080 ± 16% +58.6% 20744 ± 3% numa-meminfo.node0.AnonPages
306422 ± 2% +20.2% 368375 ± 0% numa-meminfo.node0.FilePages
257002 ± 2% +24.4% 319620 ± 0% numa-meminfo.node0.Inactive
73438 ± 8% +83.7% 134925 ± 0% numa-meminfo.node0.Inactive(anon)
3110 ± 5% +28.5% 3998 ± 0% numa-meminfo.node0.KernelStack
74863 ± 8% +79.6% 134428 ± 0% numa-meminfo.node0.Mapped
400809 ± 1% +14.9% 460453 ± 0% numa-meminfo.node0.MemUsed
1947 ± 10% +86.5% 3631 ± 0% numa-meminfo.node0.PageTables
17259 ± 4% -11.1% 15340 ± 2% numa-meminfo.node0.SUnreclaim
73635 ± 8% +83.7% 135270 ± 0% numa-meminfo.node0.Shmem
59303 ± 5% -17.0% 49195 ± 1% numa-meminfo.node1.Active
10097 ± 19% -100.0% 0.00 ± -1% numa-meminfo.node1.Active(anon)
1315 ±141% -100.0% 0.00 ± -1% numa-meminfo.node1.AnonHugePages
9997 ± 19% -100.0% 0.00 ± -1% numa-meminfo.node1.AnonPages
299861 ± 2% -22.5% 232416 ± 0% numa-meminfo.node1.FilePages
250508 ± 2% -26.9% 183219 ± 0% numa-meminfo.node1.Inactive
66881 ± 9% -100.0% 0.00 ± -1% numa-meminfo.node1.Inactive(anon)
2013 ± 8% -68.2% 640.00 ± 0% numa-meminfo.node1.KernelStack
70467 ± 9% -92.6% 5223 ± 0% numa-meminfo.node1.Mapped
379163 ± 1% -28.8% 270081 ± 0% numa-meminfo.node1.MemUsed
1656 ± 12% -100.0% 0.00 ± -1% numa-meminfo.node1.PageTables
18374 ± 3% -24.4% 13892 ± 3% numa-meminfo.node1.SReclaimable
16910 ± 3% -63.8% 6118 ± 4% numa-meminfo.node1.SUnreclaim
67031 ± 9% -100.0% 0.00 ± -1% numa-meminfo.node1.Shmem
35284 ± 0% -43.3% 20010 ± 3% numa-meminfo.node1.Slab
3074 ± 3% -37.0% 1938 ± 0% slabinfo.Acpi-Namespace.active_objs
3074 ± 3% -37.0% 1938 ± 0% slabinfo.Acpi-Namespace.num_objs
614.25 ± 2% -87.3% 78.00 ± 0% slabinfo.Acpi-ParseExt.active_objs
614.25 ± 2% -87.3% 78.00 ± 0% slabinfo.Acpi-ParseExt.num_objs
648.00 ± 0% -70.8% 189.00 ± 8% slabinfo.RAW.active_objs
648.00 ± 0% -70.8% 189.00 ± 8% slabinfo.RAW.num_objs
114.75 ± 6% -85.2% 17.00 ± 0% slabinfo.TCP.active_objs
114.75 ± 6% -85.2% 17.00 ± 0% slabinfo.TCP.num_objs
204.00 ± 11% -83.3% 34.00 ± 0% slabinfo.UDP.active_objs
204.00 ± 11% -83.3% 34.00 ± 0% slabinfo.UDP.num_objs
4338 ± 5% -73.0% 1171 ± 0% slabinfo.anon_vma.active_objs
4338 ± 5% -73.0% 1171 ± 0% slabinfo.anon_vma.num_objs
784.75 ± 7% -90.7% 73.00 ± 0% slabinfo.avc_xperms_node.active_objs
784.75 ± 7% -90.7% 73.00 ± 0% slabinfo.avc_xperms_node.num_objs
351.00 ± 7% -88.9% 39.00 ± 0% slabinfo.bdev_cache.active_objs
351.00 ± 7% -88.9% 39.00 ± 0% slabinfo.bdev_cache.num_objs
649.00 ± 5% -93.2% 44.00 ± 0% slabinfo.blkdev_requests.active_objs
649.00 ± 5% -93.2% 44.00 ± 0% slabinfo.blkdev_requests.num_objs
312.00 ± 0% -75.0% 78.00 ± 0% slabinfo.buffer_head.active_objs
312.00 ± 0% -75.0% 78.00 ± 0% slabinfo.buffer_head.num_objs
204.00 ± 0% -50.0% 102.00 ± 0% slabinfo.ext4_extent_status.active_objs
204.00 ± 0% -50.0% 102.00 ± 0% slabinfo.ext4_extent_status.num_objs
467.75 ± 7% -91.7% 39.00 ± 0% slabinfo.file_lock_cache.active_objs
467.75 ± 7% -91.7% 39.00 ± 0% slabinfo.file_lock_cache.num_objs
736.00 ± 0% -81.4% 137.00 ± 0% slabinfo.files_cache.active_objs
736.00 ± 0% -81.4% 137.00 ± 0% slabinfo.files_cache.num_objs
3995 ± 2% -10.6% 3570 ± 0% slabinfo.ftrace_event_field.active_objs
3995 ± 2% -10.6% 3570 ± 0% slabinfo.ftrace_event_field.num_objs
744.25 ± 2% -33.5% 495.00 ± 0% slabinfo.idr_layer_cache.active_objs
744.25 ± 2% -33.5% 495.00 ± 0% slabinfo.idr_layer_cache.num_objs
29312 ± 0% -13.4% 25389 ± 0% slabinfo.kernfs_node_cache.active_objs
862.00 ± 0% -13.4% 746.75 ± 0% slabinfo.kernfs_node_cache.active_slabs
29312 ± 0% -13.4% 25389 ± 0% slabinfo.kernfs_node_cache.num_objs
862.00 ± 0% -13.4% 746.75 ± 0% slabinfo.kernfs_node_cache.num_slabs
1476 ± 2% -48.2% 764.25 ± 0% slabinfo.kmalloc-1024.active_objs
1476 ± 2% -48.0% 768.00 ± 0% slabinfo.kmalloc-1024.num_objs
2397 ± 3% -44.0% 1343 ± 0% slabinfo.kmalloc-128.active_objs
2397 ± 3% -44.0% 1343 ± 0% slabinfo.kmalloc-128.num_objs
9651 ± 5% -39.1% 5874 ± 21% slabinfo.kmalloc-16.active_objs
9651 ± 5% -35.9% 6183 ± 28% slabinfo.kmalloc-16.num_objs
4914 ± 1% -48.4% 2538 ± 1% slabinfo.kmalloc-192.active_objs
4934 ± 1% -47.6% 2583 ± 1% slabinfo.kmalloc-192.num_objs
681.00 ± 4% -42.8% 389.25 ± 1% slabinfo.kmalloc-2048.active_objs
744.00 ± 3% -45.2% 407.75 ± 1% slabinfo.kmalloc-2048.num_objs
3212 ± 5% -55.1% 1442 ± 8% slabinfo.kmalloc-256.active_objs
3782 ± 5% -47.7% 1979 ± 5% slabinfo.kmalloc-256.num_objs
9887 ± 1% -43.0% 5632 ± 0% slabinfo.kmalloc-32.active_objs
9887 ± 1% -43.0% 5632 ± 0% slabinfo.kmalloc-32.num_objs
336.00 ± 2% -38.4% 207.00 ± 0% slabinfo.kmalloc-4096.active_objs
336.00 ± 2% -38.4% 207.00 ± 0% slabinfo.kmalloc-4096.num_objs
2137 ± 4% -46.7% 1139 ± 0% slabinfo.kmalloc-512.active_objs
2232 ± 1% -35.5% 1440 ± 3% slabinfo.kmalloc-512.num_objs
21337 ± 1% -34.0% 14075 ± 0% slabinfo.kmalloc-64.active_objs
338.50 ± 1% -34.9% 220.50 ± 0% slabinfo.kmalloc-64.active_slabs
21692 ± 1% -34.7% 14158 ± 0% slabinfo.kmalloc-64.num_objs
338.50 ± 1% -34.9% 220.50 ± 0% slabinfo.kmalloc-64.num_slabs
9882 ± 2% -62.6% 3698 ± 5% slabinfo.kmalloc-8.active_objs
10112 ± 2% -63.3% 3712 ± 5% slabinfo.kmalloc-8.num_objs
1864 ± 7% -42.8% 1067 ± 0% slabinfo.kmalloc-96.active_objs
1976 ± 4% -39.4% 1197 ± 1% slabinfo.kmalloc-96.num_objs
275.00 ± 6% -54.5% 125.00 ± 0% slabinfo.kmem_cache.active_objs
275.00 ± 6% -54.5% 125.00 ± 0% slabinfo.kmem_cache.num_objs
631.00 ± 7% -60.9% 247.00 ± 0% slabinfo.kmem_cache_node.active_objs
640.00 ± 7% -60.0% 256.00 ± 0% slabinfo.kmem_cache_node.num_objs
310.00 ± 2% -67.4% 101.00 ± 0% slabinfo.mm_struct.active_objs
310.00 ± 2% -67.4% 101.00 ± 0% slabinfo.mm_struct.num_objs
661.50 ± 2% -81.0% 126.00 ± 0% slabinfo.mnt_cache.active_objs
661.50 ± 2% -81.0% 126.00 ± 0% slabinfo.mnt_cache.num_objs
2464 ± 3% -41.1% 1451 ± 0% slabinfo.proc_inode_cache.active_objs
2569 ± 3% -42.8% 1469 ± 0% slabinfo.proc_inode_cache.num_objs
9852 ± 1% -18.6% 8017 ± 0% slabinfo.radix_tree_node.active_objs
10056 ± 1% -20.2% 8020 ± 0% slabinfo.radix_tree_node.num_objs
2544 ± 9% -40.7% 1507 ± 9% slabinfo.shmem_inode_cache.active_objs
2627 ± 9% -40.5% 1562 ± 8% slabinfo.shmem_inode_cache.num_objs
539.00 ± 3% -44.5% 299.00 ± 0% slabinfo.sighand_cache.active_objs
539.00 ± 3% -44.5% 299.00 ± 0% slabinfo.sighand_cache.num_objs
774.75 ± 2% -53.7% 359.00 ± 0% slabinfo.signal_cache.active_objs
774.75 ± 2% -53.7% 359.00 ± 0% slabinfo.signal_cache.num_objs
425.00 ± 0% -94.1% 25.00 ± 0% slabinfo.sigqueue.active_objs
425.00 ± 0% -94.1% 25.00 ± 0% slabinfo.sigqueue.num_objs
512.50 ± 2% -65.9% 175.00 ± 0% slabinfo.sock_inode_cache.active_objs
512.50 ± 2% -65.9% 175.00 ± 0% slabinfo.sock_inode_cache.num_objs
329.25 ± 2% -93.3% 22.00 ± 0% slabinfo.taskstats.active_objs
329.25 ± 2% -93.3% 22.00 ± 0% slabinfo.taskstats.num_objs
2139 ± 1% -16.1% 1794 ± 0% slabinfo.trace_event_file.active_objs
2139 ± 1% -16.1% 1794 ± 0% slabinfo.trace_event_file.num_objs
7852 ± 1% -64.9% 2759 ± 11% slabinfo.vm_area_struct.active_objs
180.75 ± 1% -63.3% 66.25 ± 12% slabinfo.vm_area_struct.active_slabs
7964 ± 1% -63.0% 2945 ± 12% slabinfo.vm_area_struct.num_objs
180.75 ± 1% -63.3% 66.25 ± 12% slabinfo.vm_area_struct.num_slabs
=========================================================================================
compiler/cpufreq_governor/ipc/kconfig/mode/nr_threads/rootfs/tbox_group/testcase:
gcc-4.9/performance/socket/x86_64-rhel/threads/1600%/debian-x86_64-2015-02-07.cgz/lkp-ivb-d02/hackbench
commit:
8025943c32f5de29b3fd548e917e24424297ca73
0aefb957de5f2a1d9467126114146b6c793cf180
8025943c32f5de29 0aefb957de5f2a1d9467126114
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \
20799 ± 0% -60.4% 8242 ± 0% hackbench.throughput
610.81 ± 0% +2.6% 626.57 ± 0% hackbench.time.elapsed_time
610.81 ± 0% +2.6% 626.57 ± 0% hackbench.time.elapsed_time.max
2432500 ± 11% +193.8% 7147756 ± 2% hackbench.time.involuntary_context_switches
152233 ± 0% -59.2% 62096 ± 0% hackbench.time.minor_page_faults
395.00 ± 0% -75.2% 98.00 ± 0% hackbench.time.percent_of_cpu_this_job_got
2294 ± 0% -74.7% 580.34 ± 0% hackbench.time.system_time
119.19 ± 0% -66.9% 39.50 ± 0% hackbench.time.user_time
64.79 ± 1% -82.3% 11.44 ± 2% uptime.idle
48697 ± 1% +10.1% 53595 ± 2% vmstat.system.cs
4849 ± 1% -78.5% 1041 ± 0% vmstat.system.in
156904 ± 1% -92.7% 11518 ± 1% softirqs.RCU
19311 ± 1% -100.0% 0.00 ± -1% softirqs.SCHED
1226199 ± 0% -73.9% 320362 ± 0% softirqs.TIMER
73396 ± 25% -47.4% 38580 ± 7% meminfo.DirectMap4k
85032 ± 0% +18.9% 101134 ± 1% meminfo.SUnreclaim
13612 ± 0% -22.2% 10592 ± 0% meminfo.Shmem
120348 ± 0% +12.7% 135681 ± 0% meminfo.Slab
216.00 ± 0% -59.3% 88.00 ± 0% time.file_system_outputs
2432500 ± 11% +193.8% 7147756 ± 2% time.involuntary_context_switches
152233 ± 0% -59.2% 62096 ± 0% time.minor_page_faults
395.00 ± 0% -75.2% 98.00 ± 0% time.percent_of_cpu_this_job_got
2294 ± 0% -74.7% 580.34 ± 0% time.system_time
119.19 ± 0% -66.9% 39.50 ± 0% time.user_time
3318176 ± 4% -99.0% 32776 ±172% cpuidle.C1-IVB.time
212064 ± 4% -100.0% 5.00 ± 82% cpuidle.C1-IVB.usage
400214 ± 17% -98.3% 6757 ± 25% cpuidle.C1E-IVB.time
2638 ± 12% -99.0% 25.75 ± 16% cpuidle.C1E-IVB.usage
377012 ± 24% -97.6% 9008 ± 16% cpuidle.C3-IVB.time
501.75 ± 10% -95.9% 20.75 ± 29% cpuidle.C3-IVB.usage
7514918 ± 2% -91.0% 674327 ± 9% cpuidle.C6-IVB.time
3212 ± 4% -96.4% 114.50 ± 8% cpuidle.C6-IVB.usage
25.75 ± 42% -90.3% 2.50 ± 60% cpuidle.POLL.usage
287530 ± 1% -71.4% 82258 ± 0% latency_stats.hits.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
21314 ± 1% -75.0% 5331 ± 0% latency_stats.hits.poll_schedule_timeout.do_sys_poll.SyS_poll.entry_SYSCALL_64_fastpath
5790615 ± 0% -82.5% 1010834 ± 3% latency_stats.hits.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg.sock_write_iter.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
45471476 ±104% -79.2% 9474069 ±100% latency_stats.max.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
29980 ± 93% +1198.2% 389198 ±171% latency_stats.max.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
68545649 ± 77% -66.5% 22958470 ±100% latency_stats.sum.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
52450453 ± 1% -79.2% 10917503 ± 1% latency_stats.sum.poll_schedule_timeout.do_sys_poll.SyS_poll.entry_SYSCALL_64_fastpath
1.046e+10 ± 0% -89.5% 1.103e+09 ± 3% latency_stats.sum.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg.sock_write_iter.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
7.692e+09 ± 2% -75.3% 1.902e+09 ± 1% latency_stats.sum.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg.sock_read_iter.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
0.26 ± 1% -54.9% 0.11 ± 7% turbostat.CPU%c1
0.02 ± 35% -100.0% 0.00 ± -1% turbostat.CPU%c3
0.20 ± 4% -100.0% 0.00 ± -1% turbostat.CPU%c6
18.31 ± 0% -34.1% 12.06 ± 0% turbostat.CorWatt
44.75 ± 2% -16.8% 37.25 ± 2% turbostat.CoreTmp
0.01 ± 0% -100.0% 0.00 ± -1% turbostat.Pkg%pc2
0.02 ± 61% -100.0% 0.00 ± -1% turbostat.Pkg%pc3
0.12 ± 8% -100.0% 0.00 ± -1% turbostat.Pkg%pc6
45.25 ± 1% -17.1% 37.50 ± 2% turbostat.PkgTmp
35.83 ± 0% -18.8% 29.11 ± 0% turbostat.PkgWatt
141.25 ± 18% -51.7% 68.25 ± 4% proc-vmstat.nr_dirtied
3400 ± 0% -22.2% 2646 ± 0% proc-vmstat.nr_shmem
21489 ± 1% +18.6% 25495 ± 1% proc-vmstat.nr_slab_unreclaimable
141.50 ± 19% -33.2% 94.50 ± 5% proc-vmstat.nr_written
1631639 ± 2% -37.2% 1025165 ± 0% proc-vmstat.numa_hit
1631639 ± 2% -37.2% 1025165 ± 0% proc-vmstat.numa_local
1776 ± 1% -71.2% 511.50 ± 2% proc-vmstat.pgactivate
1137636 ± 2% -39.1% 693352 ± 0% proc-vmstat.pgalloc_dma32
1497599 ± 2% -39.5% 905878 ± 0% proc-vmstat.pgalloc_normal
312396 ± 0% -45.7% 169585 ± 0% proc-vmstat.pgfault
2609201 ± 2% -39.7% 1573353 ± 0% proc-vmstat.pgfree
2213 ± 1% -12.4% 1938 ± 0% slabinfo.Acpi-Namespace.active_objs
2213 ± 1% -12.4% 1938 ± 0% slabinfo.Acpi-Namespace.num_objs
2137 ± 7% -50.1% 1067 ± 0% slabinfo.anon_vma.active_objs
2141 ± 7% -50.2% 1067 ± 0% slabinfo.anon_vma.num_objs
273.75 ± 11% -73.3% 73.00 ± 0% slabinfo.avc_xperms_node.active_objs
273.75 ± 11% -73.3% 73.00 ± 0% slabinfo.avc_xperms_node.num_objs
204.00 ± 0% -50.0% 102.00 ± 0% slabinfo.ext4_extent_status.active_objs
204.00 ± 0% -50.0% 102.00 ± 0% slabinfo.ext4_extent_status.num_objs
4682 ± 1% -21.4% 3680 ± 0% slabinfo.kmalloc-128.active_objs
4788 ± 1% -20.7% 3795 ± 0% slabinfo.kmalloc-128.num_objs
12605 ± 5% -49.6% 6357 ± 0% slabinfo.kmalloc-16.active_objs
12745 ± 5% -50.1% 6357 ± 0% slabinfo.kmalloc-16.num_objs
2465 ± 5% -20.1% 1969 ± 1% slabinfo.kmalloc-192.active_objs
2615 ± 4% -21.7% 2047 ± 0% slabinfo.kmalloc-192.num_objs
26550 ± 3% +36.9% 36341 ± 2% slabinfo.kmalloc-256.active_objs
3371 ± 3% +47.0% 4954 ± 1% slabinfo.kmalloc-256.active_slabs
53940 ± 3% +47.0% 79284 ± 1% slabinfo.kmalloc-256.num_objs
3371 ± 3% +47.0% 4954 ± 1% slabinfo.kmalloc-256.num_slabs
8607 ± 8% -39.3% 5223 ± 0% slabinfo.kmalloc-32.active_objs
8758 ± 8% -40.4% 5223 ± 0% slabinfo.kmalloc-32.num_objs
23697 ± 3% +40.6% 33308 ± 3% slabinfo.kmalloc-512.active_objs
3182 ± 3% +49.7% 4764 ± 1% slabinfo.kmalloc-512.active_slabs
50916 ± 3% +49.7% 76240 ± 1% slabinfo.kmalloc-512.num_objs
3182 ± 3% +49.7% 4764 ± 1% slabinfo.kmalloc-512.num_slabs
4478 ± 9% -20.0% 3584 ± 0% slabinfo.kmalloc-8.active_objs
4478 ± 9% -20.0% 3584 ± 0% slabinfo.kmalloc-8.num_objs
1278 ± 7% -21.2% 1008 ± 0% slabinfo.kmalloc-96.active_objs
1278 ± 7% -21.2% 1008 ± 0% slabinfo.kmalloc-96.num_objs
256.00 ± 17% -50.0% 128.00 ± 0% slabinfo.kmem_cache_node.active_objs
256.00 ± 17% -50.0% 128.00 ± 0% slabinfo.kmem_cache_node.num_objs
1680 ± 2% -21.1% 1326 ± 0% slabinfo.proc_inode_cache.active_objs
1730 ± 2% -20.4% 1377 ± 0% slabinfo.proc_inode_cache.num_objs
362.25 ± 1% -42.3% 209.00 ± 0% slabinfo.signal_cache.active_objs
362.25 ± 1% -42.3% 209.00 ± 0% slabinfo.signal_cache.num_objs
1.71 ± 10% +312.9% 7.06 ± 5% perf-profile.cycles-pp.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate
11.64 ± 2% -17.3% 9.63 ± 2% perf-profile.cycles-pp.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg
0.60 ± 17% +448.5% 3.30 ± 6% perf-profile.cycles-pp.__kernel_text_address.print_context_stack.dump_trace.save_stack_trace_tsk.__account_scheduler_latency
2.83 ± 7% +18.3% 3.35 ± 4% perf-profile.cycles-pp.__kmalloc_node_track_caller.__kmalloc_reserve.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb
0.15 ± 23% +541.9% 0.99 ± 6% perf-profile.cycles-pp.__schedule.schedule.schedule_timeout.unix_stream_read_generic.unix_stream_recvmsg
0.83 ± 9% +37.8% 1.14 ± 8% perf-profile.cycles-pp.__slab_alloc.__kmalloc_node_track_caller.__kmalloc_reserve.__alloc_skb.alloc_skb_with_frags
0.87 ± 13% +28.4% 1.12 ± 6% perf-profile.cycles-pp.__slab_alloc.kmem_cache_alloc_node.__alloc_skb.alloc_skb_with_frags.sock_alloc_send_pskb
1.02 ± 9% +58.3% 1.62 ± 6% perf-profile.cycles-pp.__slab_free.kmem_cache_free.kfree_skbmem.consume_skb.unix_stream_read_generic
40.33 ± 0% -9.0% 36.71 ± 1% perf-profile.cycles-pp.__vfs_read.vfs_read.sys_read.entry_SYSCALL_64_fastpath
35.45 ± 0% +23.0% 43.61 ± 1% perf-profile.cycles-pp.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath
1.44 ± 14% +467.4% 8.19 ± 4% perf-profile.cycles-pp.__wake_up_common.__wake_up_sync_key.sock_def_readable.unix_stream_sendmsg.sock_sendmsg
1.55 ± 8% -64.5% 0.55 ± 26% perf-profile.cycles-pp.__wake_up_common.__wake_up_sync_key.unix_write_space.sock_wfree.unix_destruct_scm
1.48 ± 15% +461.4% 8.28 ± 4% perf-profile.cycles-pp.__wake_up_sync_key.sock_def_readable.unix_stream_sendmsg.sock_sendmsg.sock_write_iter
1.56 ± 8% -64.4% 0.55 ± 26% perf-profile.cycles-pp.__wake_up_sync_key.unix_write_space.sock_wfree.unix_destruct_scm.skb_release_head_state
2.40 ± 9% +234.6% 8.01 ± 5% perf-profile.cycles-pp.activate_task.ttwu_do_activate.try_to_wake_up.default_wake_function.autoremove_wake_function
12.42 ± 2% -16.2% 10.40 ± 2% perf-profile.cycles-pp.alloc_skb_with_frags.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg.sock_write_iter
1.40 ± 14% +481.1% 8.13 ± 4% perf-profile.cycles-pp.autoremove_wake_function.__wake_up_common.__wake_up_sync_key.sock_def_readable.unix_stream_sendmsg
1.54 ± 8% -64.7% 0.54 ± 25% perf-profile.cycles-pp.autoremove_wake_function.__wake_up_common.__wake_up_sync_key.unix_write_space.sock_wfree
14.73 ± 1% -19.7% 11.84 ± 3% perf-profile.cycles-pp.consume_skb.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg.sock_read_iter
5.08 ± 4% -10.8% 4.52 ± 5% perf-profile.cycles-pp.copy_user_enhanced_fast_string.skb_copy_datagram_iter.unix_stream_read_actor.unix_stream_read_generic.unix_stream_recvmsg
1.39 ± 14% +484.9% 8.12 ± 4% perf-profile.cycles-pp.default_wake_function.autoremove_wake_function.__wake_up_common.__wake_up_sync_key.sock_def_readable
1.50 ± 8% -65.2% 0.52 ± 26% perf-profile.cycles-pp.default_wake_function.autoremove_wake_function.__wake_up_common.__wake_up_sync_key.unix_write_space
1.52 ± 8% +348.1% 6.82 ± 5% perf-profile.cycles-pp.dump_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair
2.13 ± 9% +258.1% 7.64 ± 5% perf-profile.cycles-pp.enqueue_entity.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up
2.31 ± 9% +242.2% 7.91 ± 5% perf-profile.cycles-pp.enqueue_task_fair.activate_task.ttwu_do_activate.try_to_wake_up.default_wake_function
0.34 ± 21% +275.6% 1.27 ± 8% perf-profile.cycles-pp.entry_SYSCALL_64_after_swapgs
0.14 ± 26% +820.0% 1.26 ± 8% perf-profile.cycles-pp.is_module_text_address.__kernel_text_address.print_context_stack.dump_trace.save_stack_trace_tsk
2.07 ± 6% +17.5% 2.44 ± 6% perf-profile.cycles-pp.kfree.skb_free_head.skb_release_data.skb_release_all.consume_skb
1.31 ± 8% +374.7% 6.24 ± 6% perf-profile.cycles-pp.print_context_stack.dump_trace.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity
4.64 ± 3% -48.3% 2.40 ± 7% perf-profile.cycles-pp.rw_verify_area.vfs_read.sys_read.entry_SYSCALL_64_fastpath
3.73 ± 8% -50.2% 1.86 ± 4% perf-profile.cycles-pp.rw_verify_area.vfs_write.sys_write.entry_SYSCALL_64_fastpath
1.52 ± 8% +348.2% 6.83 ± 5% perf-profile.cycles-pp.save_stack_trace_tsk.__account_scheduler_latency.enqueue_entity.enqueue_task_fair.activate_task
0.17 ± 27% +519.4% 1.04 ± 6% perf-profile.cycles-pp.schedule.schedule_timeout.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg
0.18 ± 25% +511.3% 1.08 ± 5% perf-profile.cycles-pp.schedule_timeout.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg.sock_read_iter
4.10 ± 4% -49.5% 2.07 ± 7% perf-profile.cycles-pp.security_file_permission.rw_verify_area.vfs_read.sys_read.entry_SYSCALL_64_fastpath
3.07 ± 10% -48.4% 1.58 ± 4% perf-profile.cycles-pp.security_file_permission.rw_verify_area.vfs_write.sys_write.entry_SYSCALL_64_fastpath
0.79 ± 11% +50.0% 1.19 ± 4% perf-profile.cycles-pp.security_socket_sendmsg.sock_sendmsg.sock_write_iter.__vfs_write.vfs_write
2.68 ± 5% -47.1% 1.42 ± 8% perf-profile.cycles-pp.selinux_file_permission.security_file_permission.rw_verify_area.vfs_read.sys_read
2.69 ± 10% -49.8% 1.35 ± 6% perf-profile.cycles-pp.selinux_file_permission.security_file_permission.rw_verify_area.vfs_write.sys_write
0.53 ± 13% +80.6% 0.95 ± 6% perf-profile.cycles-pp.selinux_socket_sendmsg.security_socket_sendmsg.sock_sendmsg.sock_write_iter.__vfs_write
2.62 ± 4% -23.2% 2.01 ± 6% perf-profile.cycles-pp.skb_copy_datagram_from_iter.unix_stream_sendmsg.sock_sendmsg.sock_write_iter.__vfs_write
7.87 ± 3% -25.0% 5.90 ± 3% perf-profile.cycles-pp.skb_copy_datagram_iter.unix_stream_read_actor.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg
2.11 ± 5% +16.6% 2.46 ± 6% perf-profile.cycles-pp.skb_free_head.skb_release_data.skb_release_all.consume_skb.unix_stream_read_generic
0.67 ± 6% +149.1% 1.66 ± 6% perf-profile.cycles-pp.skb_queue_tail.unix_stream_sendmsg.sock_sendmsg.sock_write_iter.__vfs_write
11.04 ± 2% -19.8% 8.86 ± 2% perf-profile.cycles-pp.skb_release_all.consume_skb.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg
6.24 ± 2% -21.2% 4.91 ± 7% perf-profile.cycles-pp.skb_release_data.skb_release_all.consume_skb.unix_stream_read_generic.unix_stream_recvmsg
2.04 ± 7% +160.6% 5.30 ± 3% perf-profile.cycles-pp.skb_set_owner_w.sock_alloc_send_pskb.unix_stream_sendmsg.sock_sendmsg.sock_write_iter
0.79 ± 6% +84.9% 1.46 ± 10% perf-profile.cycles-pp.skb_unlink.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg.sock_read_iter
3.12 ± 8% +220.4% 10.00 ± 4% perf-profile.cycles-pp.sock_def_readable.unix_stream_sendmsg.sock_sendmsg.sock_write_iter.__vfs_write
39.40 ± 0% -8.5% 36.07 ± 1% perf-profile.cycles-pp.sock_read_iter.__vfs_read.vfs_read.sys_read.entry_SYSCALL_64_fastpath
33.54 ± 1% +25.5% 42.09 ± 1% perf-profile.cycles-pp.sock_sendmsg.sock_write_iter.__vfs_write.vfs_write.sys_write
34.55 ± 1% +24.5% 42.99 ± 1% perf-profile.cycles-pp.sock_write_iter.__vfs_write.vfs_write.sys_write.entry_SYSCALL_64_fastpath
50.49 ± 0% -14.8% 42.99 ± 1% perf-profile.cycles-pp.sys_read.entry_SYSCALL_64_fastpath
45.34 ± 0% +9.6% 49.69 ± 1% perf-profile.cycles-pp.sys_write.entry_SYSCALL_64_fastpath
2.88 ± 7% +199.0% 8.62 ± 5% perf-profile.cycles-pp.try_to_wake_up.default_wake_function.autoremove_wake_function.__wake_up_common.__wake_up_sync_key
2.44 ± 10% +232.3% 8.12 ± 4% perf-profile.cycles-pp.ttwu_do_activate.constprop.86.try_to_wake_up.default_wake_function.autoremove_wake_function.__wake_up_common
8.00 ± 3% -24.9% 6.01 ± 3% perf-profile.cycles-pp.unix_stream_read_actor.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg.sock_read_iter
34.62 ± 0% -9.2% 31.44 ± 2% perf-profile.cycles-pp.unix_stream_read_generic.unix_stream_recvmsg.sock_recvmsg.sock_read_iter.__vfs_read
30.80 ± 1% +28.2% 39.49 ± 1% perf-profile.cycles-pp.unix_stream_sendmsg.sock_sendmsg.sock_write_iter.__vfs_write.vfs_write
1.94 ± 10% -26.1% 1.44 ± 16% perf-profile.cycles-pp.unix_write_space.sock_wfree.unix_destruct_scm.skb_release_head_state.skb_release_all
47.80 ± 0% -14.4% 40.90 ± 1% perf-profile.cycles-pp.vfs_read.sys_read.entry_SYSCALL_64_fastpath
42.32 ± 0% +12.0% 47.39 ± 1% perf-profile.cycles-pp.vfs_write.sys_write.entry_SYSCALL_64_fastpath
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/tbox_group/test/testcase:
gcc-4.9/performance/x86_64-rhel/debian-x86_64-2015-02-07.cgz/lituya/malloc1/will-it-scale
commit:
8025943c32f5de29b3fd548e917e24424297ca73
0aefb957de5f2a1d9467126114146b6c793cf180
8025943c32f5de29 0aefb957de5f2a1d9467126114
---------------- --------------------------
%stddev %change %stddev
\ | \
211040 ± 0% -36.6% 133899 ± 2% will-it-scale.per_process_ops
100225 ± 0% +25.4% 125723 ± 2% will-it-scale.per_thread_ops
0.28 ± 0% -76.9% 0.06 ± 0% will-it-scale.scalability
304.14 ± 0% -25.0% 228.07 ± 0% will-it-scale.time.elapsed_time
304.14 ± 0% -25.0% 228.07 ± 0% will-it-scale.time.elapsed_time.max
500.75 ± 5% +1.7e+05% 869953 ± 15% will-it-scale.time.involuntary_context_switches
66.00 ± 1% -25.8% 49.00 ± 0% will-it-scale.time.percent_of_cpu_this_job_got
194.65 ± 1% -45.4% 106.29 ± 0% will-it-scale.time.system_time
322.40 ± 0% -23.8% 245.80 ± 0% uptime.boot
3537 ± 0% -99.7% 11.70 ± 0% uptime.idle
52213 ± 5% -93.3% 3521 ± 5% softirqs.RCU
85543 ± 2% -100.0% 0.00 ± -1% softirqs.SCHED
821398 ± 0% -85.1% 122622 ± 0% softirqs.TIMER
5.00 ± 0% +35.0% 6.75 ± 6% vmstat.procs.r
47456 ± 7% -13.1% 41237 ± 7% vmstat.system.cs
12929 ± 15% -91.6% 1082 ± 0% vmstat.system.in
304.14 ± 0% -25.0% 228.07 ± 0% time.elapsed_time
304.14 ± 0% -25.0% 228.07 ± 0% time.elapsed_time.max
500.75 ± 5% +1.7e+05% 869953 ± 15% time.involuntary_context_switches
66.00 ± 1% -25.8% 49.00 ± 0% time.percent_of_cpu_this_job_got
194.65 ± 1% -45.4% 106.29 ± 0% time.system_time
7.76 ± 2% -20.3% 6.19 ± 5% time.user_time
29486 ± 2% -19.8% 23651 ± 3% meminfo.Active(anon)
1176564 ± 0% -12.0% 1035437 ± 0% meminfo.Committed_AS
81756 ± 10% -50.1% 40796 ± 5% meminfo.DirectMap4k
39768 ± 0% -10.0% 35810 ± 0% meminfo.SReclaimable
53921 ± 0% -28.0% 38819 ± 0% meminfo.SUnreclaim
15076 ± 0% -35.4% 9743 ± 0% meminfo.Shmem
93691 ± 0% -20.3% 74630 ± 0% meminfo.Slab
32.88 ± 0% +203.3% 99.72 ± 0% turbostat.%Busy
1085 ± 0% +211.6% 3382 ± 2% turbostat.Avg_MHz
40.41 ± 1% -99.7% 0.13 ±102% turbostat.CPU%c1
3.46 ± 11% -100.0% 0.00 ± -1% turbostat.CPU%c3
23.25 ± 0% -98.7% 0.30 ± 0% turbostat.CPU%c6
56.48 ± 0% -39.7% 34.08 ± 7% turbostat.PkgWatt
0.94 ± 1% -66.8% 0.31 ± 2% turbostat.RAMWatt
1.304e+08 ± 34% -100.0% 6.50 ±155% cpuidle.C1-HSW.time
2435313 ± 47% -100.0% 0.50 ±100% cpuidle.C1-HSW.usage
61520601 ± 22% -100.0% 1015 ± 39% cpuidle.C1E-HSW.time
898875 ± 19% -100.0% 11.25 ± 45% cpuidle.C1E-HSW.usage
6.752e+08 ± 3% -100.0% 526.25 ± 15% cpuidle.C3-HSW.time
3378988 ± 1% -100.0% 6.00 ± 42% cpuidle.C3-HSW.usage
2.412e+09 ± 1% -100.0% 639581 ± 12% cpuidle.C6-HSW.time
1953772 ± 3% -100.0% 28.25 ± 15% cpuidle.C6-HSW.usage
233.50 ± 77% -99.8% 0.50 ±173% cpuidle.POLL.usage
46885311 ±116% -96.1% 1842699 ±173% latency_stats.avg.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
2.75 ± 30% +4.2e+07% 1153671 ± 12% latency_stats.hits.futex_wait_queue_me.futex_wait.do_futex.SyS_futex.entry_SYSCALL_64_fastpath
110243 ± 4% -74.2% 28405 ± 2% latency_stats.hits.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
9797 ± 49% -66.8% 3249 ± 5% latency_stats.max.call_rwsem_down_read_failed.__do_page_fault.do_page_fault.page_fault
6891 ± 81% -50.5% 3413 ± 5% latency_stats.max.call_rwsem_down_write_failed.vm_mmap_pgoff.SyS_mmap_pgoff.SyS_mmap.entry_SYSCALL_64_fastpath
9965 ± 48% -64.8% 3507 ± 6% latency_stats.max.call_rwsem_down_write_failed.vm_munmap.SyS_munmap.entry_SYSCALL_64_fastpath
50113913 ±105% -96.3% 1842699 ±173% latency_stats.max.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
5.309e+08 ± 0% -76.8% 1.231e+08 ± 3% latency_stats.sum.call_rwsem_down_read_failed.__do_page_fault.do_page_fault.page_fault
24.25 ± 25% +9.1e+08% 2.211e+08 ± 7% latency_stats.sum.futex_wait_queue_me.futex_wait.do_futex.SyS_futex.entry_SYSCALL_64_fastpath
53087121 ± 96% -96.5% 1842699 ±173% latency_stats.sum.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
995734 ± 1% +597.0% 6940437 ± 1% latency_stats.sum.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
7369 ± 2% -19.8% 5913 ± 3% proc-vmstat.nr_active_anon
94.25 ± 17% -82.0% 17.00 ± 0% proc-vmstat.nr_dirtied
3768 ± 0% -35.4% 2435 ± 0% proc-vmstat.nr_shmem
9941 ± 0% -10.0% 8952 ± 0% proc-vmstat.nr_slab_reclaimable
13480 ± 0% -28.0% 9704 ± 0% proc-vmstat.nr_slab_unreclaimable
4.611e+08 ± 0% -74.9% 1.158e+08 ± 3% proc-vmstat.numa_hit
4.611e+08 ± 0% -74.9% 1.158e+08 ± 3% proc-vmstat.numa_local
3234 ± 0% -71.4% 923.75 ± 5% proc-vmstat.pgactivate
1.142e+08 ± 1% -46.6% 60957798 ± 5% proc-vmstat.pgalloc_dma32
4.937e+08 ± 0% -60.4% 1.954e+08 ± 3% proc-vmstat.pgalloc_normal
2.314e+08 ± 0% -74.5% 59076648 ± 3% proc-vmstat.pgfault
6.079e+08 ± 0% -57.8% 2.564e+08 ± 3% proc-vmstat.pgfree
1.56 ± 2% +141.0% 3.77 ± 3% perf-profile.cycles-pp.___might_sleep.unmap_page_range.unmap_single_vma.unmap_vmas.unmap_region
0.83 ± 6% +98.5% 1.65 ± 4% perf-profile.cycles-pp.___might_sleep.unmap_single_vma.unmap_vmas.unmap_region.do_munmap
2.83 ± 3% +56.7% 4.43 ± 2% perf-profile.cycles-pp.__alloc_pages_nodemask.alloc_pages_current.pte_alloc_one.__pte_alloc.handle_mm_fault
2.74 ± 2% +64.4% 4.51 ± 1% perf-profile.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.handle_mm_fault.__do_page_fault.do_page_fault
12.21 ± 2% +72.7% 21.07 ± 2% perf-profile.cycles-pp.__do_page_fault.do_page_fault.page_fault
1.10 ± 5% -29.5% 0.78 ± 10% perf-profile.cycles-pp.__pagevec_lru_add_fn.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain.unmap_region
1.17 ± 3% -33.8% 0.77 ± 9% perf-profile.cycles-pp.__percpu_counter_add.__vm_enough_memory.security_vm_enough_memory_mm.mmap_region.do_mmap
1.57 ± 5% -38.8% 0.96 ± 8% perf-profile.cycles-pp.__percpu_counter_add.do_munmap.vm_munmap.sys_munmap.entry_SYSCALL_64_fastpath
3.68 ± 3% +72.2% 6.33 ± 2% perf-profile.cycles-pp.__pte_alloc.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
1.46 ± 3% -35.9% 0.93 ± 7% perf-profile.cycles-pp.__vm_enough_memory.security_vm_enough_memory_mm.mmap_region.do_mmap.vm_mmap_pgoff
0.78 ± 4% +73.6% 1.35 ± 13% perf-profile.cycles-pp.__vma_link_rb.vma_link.mmap_region.do_mmap.vm_mmap_pgoff
0.26 ± 4% +294.1% 1.00 ± 4% perf-profile.cycles-pp._raw_spin_lock.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
13.75 ± 3% -97.3% 0.38 ± 12% perf-profile.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain.unmap_region
12.93 ± 2% -96.3% 0.47 ± 6% perf-profile.cycles-pp._raw_spin_lock_irqsave.release_pages.free_pages_and_swap_cache.tlb_flush_mmu_free.tlb_finish_mmu
3.06 ± 3% +60.8% 4.91 ± 4% perf-profile.cycles-pp.alloc_pages_current.pte_alloc_one.__pte_alloc.handle_mm_fault.__do_page_fault
2.98 ± 2% +68.5% 5.02 ± 1% perf-profile.cycles-pp.alloc_pages_vma.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
2.05 ± 3% +44.7% 2.97 ± 3% perf-profile.cycles-pp.anon_vma_prepare.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
2.55 ± 18% -100.0% 0.00 ± -1% perf-profile.cycles-pp.call_cpuidle.cpu_startup_entry.rest_init.start_kernel.x86_64_start_reservations
8.12 ± 24% -100.0% 0.00 ± -1% perf-profile.cycles-pp.call_cpuidle.cpu_startup_entry.start_secondary
0.94 ± 5% +84.8% 1.73 ± 5% perf-profile.cycles-pp.clear_page_c_e.__alloc_pages_nodemask.alloc_pages_current.pte_alloc_one.__pte_alloc
0.93 ± 4% +97.6% 1.84 ± 4% perf-profile.cycles-pp.clear_page_c_e.__alloc_pages_nodemask.alloc_pages_vma.handle_mm_fault.__do_page_fault
2.56 ± 19% -100.0% 0.00 ± -1% perf-profile.cycles-pp.cpu_startup_entry.rest_init.start_kernel.x86_64_start_reservations.x86_64_start_kernel
8.12 ± 24% -100.0% 0.00 ± -1% perf-profile.cycles-pp.cpu_startup_entry.start_secondary
2.55 ± 18% -100.0% 0.00 ± -1% perf-profile.cycles-pp.cpuidle_enter.call_cpuidle.cpu_startup_entry.rest_init.start_kernel
8.12 ± 24% -100.0% 0.00 ± -1% perf-profile.cycles-pp.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
2.53 ± 18% -100.0% 0.00 ± -1% perf-profile.cycles-pp.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry.rest_init
8.12 ± 24% -100.0% 0.00 ± -1% perf-profile.cycles-pp.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
9.78 ± 2% +72.3% 16.85 ± 0% perf-profile.cycles-pp.do_mmap.vm_mmap_pgoff.sys_mmap_pgoff.sys_mmap.entry_SYSCALL_64_fastpath
62.39 ± 1% -16.7% 51.97 ± 1% perf-profile.cycles-pp.do_munmap.vm_munmap.sys_munmap.entry_SYSCALL_64_fastpath
12.30 ± 2% +72.7% 21.25 ± 2% perf-profile.cycles-pp.do_page_fault.page_fault
1.08 ± 6% +79.7% 1.95 ± 4% perf-profile.cycles-pp.flush_tlb_mm_range.tlb_flush_mmu_tlbonly.tlb_finish_mmu.unmap_region.do_munmap
1.56 ± 1% +37.6% 2.15 ± 6% perf-profile.cycles-pp.free_hot_cold_page.free_hot_cold_page_list.release_pages.free_pages_and_swap_cache.tlb_flush_mmu_free
1.85 ± 2% +32.6% 2.45 ± 6% perf-profile.cycles-pp.free_hot_cold_page_list.release_pages.free_pages_and_swap_cache.tlb_flush_mmu_free.tlb_finish_mmu
21.02 ± 1% -69.9% 6.32 ± 3% perf-profile.cycles-pp.free_pages_and_swap_cache.tlb_flush_mmu_free.tlb_finish_mmu.unmap_region.do_munmap
3.13 ± 4% +102.9% 6.35 ± 2% perf-profile.cycles-pp.free_pgd_range.free_pgtables.unmap_region.do_munmap.vm_munmap
5.17 ± 3% +83.5% 9.48 ± 2% perf-profile.cycles-pp.free_pgtables.unmap_region.do_munmap.vm_munmap.sys_munmap
1.41 ± 5% +32.4% 1.87 ± 7% perf-profile.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_current.pte_alloc_one.__pte_alloc
1.42 ± 2% +32.5% 1.89 ± 4% perf-profile.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.handle_mm_fault.__do_page_fault
0.54 ± 7% +82.0% 0.99 ± 9% perf-profile.cycles-pp.get_unmapped_area.do_mmap.vm_mmap_pgoff.sys_mmap_pgoff.sys_mmap
11.07 ± 2% +72.3% 19.07 ± 1% perf-profile.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
10.36 ± 16% -100.0% 0.00 ± -1% perf-profile.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry
18.55 ± 2% -85.3% 2.73 ± 7% perf-profile.cycles-pp.lru_add_drain.unmap_region.do_munmap.vm_munmap.sys_munmap
18.52 ± 2% -85.4% 2.69 ± 6% perf-profile.cycles-pp.lru_add_drain_cpu.lru_add_drain.unmap_region.do_munmap.vm_munmap
1.04 ± 2% -95.4% 0.05 ± 77% perf-profile.cycles-pp.mem_cgroup_page_lruvec.release_pages.free_pages_and_swap_cache.tlb_flush_mmu_free.tlb_finish_mmu
8.91 ± 1% +70.6% 15.19 ± 0% perf-profile.cycles-pp.mmap_region.do_mmap.vm_mmap_pgoff.sys_mmap_pgoff.sys_mmap
0.53 ± 8% +91.5% 1.02 ± 13% perf-profile.cycles-pp.native_flush_tlb.flush_tlb_mm_range.tlb_flush_mmu_tlbonly.tlb_finish_mmu.unmap_region
1.24 ± 41% +162.7% 3.26 ± 12% perf-profile.cycles-pp.native_irq_return_iret
12.63 ± 3% -100.0% 0.00 ± -1% perf-profile.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain
11.82 ± 2% -100.0% 0.00 ± -1% perf-profile.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.release_pages.free_pages_and_swap_cache.tlb_flush_mmu_free
12.35 ± 2% +73.0% 21.36 ± 2% perf-profile.cycles-pp.page_fault
18.38 ± 2% -87.6% 2.29 ± 6% perf-profile.cycles-pp.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain.unmap_region.do_munmap
3.59 ± 2% +96.6% 7.06 ± 3% perf-profile.cycles-pp.perf_event_aux.perf_event_mmap.mmap_region.do_mmap.vm_mmap_pgoff
1.03 ± 2% +97.3% 2.02 ± 7% perf-profile.cycles-pp.perf_event_aux_ctx.perf_event_aux.perf_event_mmap.mmap_region.do_mmap
4.47 ± 2% +90.0% 8.50 ± 0% perf-profile.cycles-pp.perf_event_mmap.mmap_region.do_mmap.vm_mmap_pgoff.sys_mmap_pgoff
3.24 ± 3% +65.4% 5.36 ± 3% perf-profile.cycles-pp.pte_alloc_one.__pte_alloc.handle_mm_fault.__do_page_fault.do_page_fault
20.11 ± 2% -71.1% 5.81 ± 4% perf-profile.cycles-pp.release_pages.free_pages_and_swap_cache.tlb_flush_mmu_free.tlb_finish_mmu.unmap_region
2.59 ± 2% -83.2% 0.43 ± 17% perf-profile.cycles-pp.release_pages.pagevec_lru_move_fn.lru_add_drain_cpu.lru_add_drain.unmap_region
2.56 ± 18% -100.0% 0.00 ± -1% perf-profile.cycles-pp.rest_init.start_kernel.x86_64_start_reservations.x86_64_start_kernel
2.56 ± 18% -100.0% 0.00 ± -1% perf-profile.cycles-pp.start_kernel.x86_64_start_reservations.x86_64_start_kernel
8.12 ± 24% -100.0% 0.00 ± -1% perf-profile.cycles-pp.start_secondary
10.64 ± 2% +72.7% 18.37 ± 0% perf-profile.cycles-pp.sys_mmap.entry_SYSCALL_64_fastpath
10.63 ± 2% +72.7% 18.36 ± 0% perf-profile.cycles-pp.sys_mmap_pgoff.sys_mmap.entry_SYSCALL_64_fastpath
63.09 ± 1% -15.8% 53.15 ± 1% perf-profile.cycles-pp.sys_munmap.entry_SYSCALL_64_fastpath
22.54 ± 2% -59.7% 9.08 ± 3% perf-profile.cycles-pp.tlb_finish_mmu.unmap_region.do_munmap.vm_munmap.sys_munmap
21.11 ± 1% -69.3% 6.48 ± 4% perf-profile.cycles-pp.tlb_flush_mmu_free.tlb_finish_mmu.unmap_region.do_munmap.vm_munmap
1.33 ± 6% +80.5% 2.41 ± 4% perf-profile.cycles-pp.tlb_flush_mmu_tlbonly.tlb_finish_mmu.unmap_region.do_munmap.vm_munmap
1.82 ± 4% +51.0% 2.75 ± 4% perf-profile.cycles-pp.unlink_anon_vmas.free_pgtables.unmap_region.do_munmap.vm_munmap
10.85 ± 2% +110.0% 22.78 ± 0% perf-profile.cycles-pp.unmap_page_range.unmap_single_vma.unmap_vmas.unmap_region.do_munmap
58.91 ± 1% -19.6% 47.39 ± 1% perf-profile.cycles-pp.unmap_region.do_munmap.vm_munmap.sys_munmap.entry_SYSCALL_64_fastpath
12.35 ± 1% +107.2% 25.59 ± 1% perf-profile.cycles-pp.unmap_single_vma.unmap_vmas.unmap_region.do_munmap.vm_munmap
12.43 ± 2% +106.9% 25.72 ± 1% perf-profile.cycles-pp.unmap_vmas.unmap_region.do_munmap.vm_munmap.sys_munmap
10.52 ± 2% +72.8% 18.17 ± 0% perf-profile.cycles-pp.vm_mmap_pgoff.sys_mmap_pgoff.sys_mmap.entry_SYSCALL_64_fastpath
62.92 ± 1% -15.9% 52.90 ± 1% perf-profile.cycles-pp.vm_munmap.sys_munmap.entry_SYSCALL_64_fastpath
0.47 ± 4% +78.5% 0.83 ± 14% perf-profile.cycles-pp.vma_compute_subtree_gap.__vma_link_rb.vma_link.mmap_region.do_mmap
0.97 ± 2% +69.8% 1.64 ± 9% perf-profile.cycles-pp.vma_link.mmap_region.do_mmap.vm_mmap_pgoff.sys_mmap_pgoff
2.56 ± 18% -100.0% 0.00 ± -1% perf-profile.cycles-pp.x86_64_start_kernel
2.56 ± 18% -100.0% 0.00 ± -1% perf-profile.cycles-pp.x86_64_start_reservations.x86_64_start_kernel
624.00 ± 0% -87.5% 78.00 ± 0% slabinfo.Acpi-ParseExt.active_objs
624.00 ± 0% -87.5% 78.00 ± 0% slabinfo.Acpi-ParseExt.num_objs
882.00 ± 2% -51.0% 432.00 ± 0% slabinfo.RAW.active_objs
882.00 ± 2% -51.0% 432.00 ± 0% slabinfo.RAW.num_objs
170.00 ± 14% -80.0% 34.00 ± 0% slabinfo.UDP.active_objs
170.00 ± 14% -80.0% 34.00 ± 0% slabinfo.UDP.num_objs
3824 ± 3% -67.1% 1260 ± 0% slabinfo.anon_vma.active_objs
3824 ± 3% -67.1% 1260 ± 0% slabinfo.anon_vma.num_objs
602.25 ± 10% -87.9% 73.00 ± 0% slabinfo.avc_xperms_node.active_objs
602.25 ± 10% -87.9% 73.00 ± 0% slabinfo.avc_xperms_node.num_objs
253.50 ± 7% -84.6% 39.00 ± 0% slabinfo.bdev_cache.active_objs
253.50 ± 7% -84.6% 39.00 ± 0% slabinfo.bdev_cache.num_objs
539.00 ± 12% -91.8% 44.00 ± 0% slabinfo.blkdev_requests.active_objs
539.00 ± 12% -91.8% 44.00 ± 0% slabinfo.blkdev_requests.num_objs
46484 ± 0% -9.3% 42166 ± 0% slabinfo.dentry.active_objs
1111 ± 0% -9.8% 1003 ± 0% slabinfo.dentry.active_slabs
46716 ± 0% -9.7% 42166 ± 0% slabinfo.dentry.num_objs
1111 ± 0% -9.8% 1003 ± 0% slabinfo.dentry.num_slabs
428.50 ± 6% -90.9% 39.00 ± 0% slabinfo.file_lock_cache.active_objs
428.50 ± 6% -90.9% 39.00 ± 0% slabinfo.file_lock_cache.num_objs
874.00 ± 0% -70.4% 259.00 ± 0% slabinfo.files_cache.active_objs
874.00 ± 0% -70.4% 259.00 ± 0% slabinfo.files_cache.num_objs
3803 ± 2% -8.4% 3485 ± 0% slabinfo.ftrace_event_field.active_objs
3803 ± 2% -8.4% 3485 ± 0% slabinfo.ftrace_event_field.num_objs
735.00 ± 1% -34.7% 480.00 ± 0% slabinfo.idr_layer_cache.active_objs
735.00 ± 1% -34.7% 480.00 ± 0% slabinfo.idr_layer_cache.num_objs
42228 ± 0% -10.1% 37944 ± 0% slabinfo.kernfs_node_cache.active_objs
42228 ± 0% -10.1% 37944 ± 0% slabinfo.kernfs_node_cache.num_objs
1434 ± 1% -45.5% 781.75 ± 0% slabinfo.kmalloc-1024.active_objs
1434 ± 1% -42.0% 832.00 ± 0% slabinfo.kmalloc-1024.num_objs
3135 ± 3% -53.1% 1471 ± 0% slabinfo.kmalloc-128.active_objs
3135 ± 3% -53.1% 1471 ± 0% slabinfo.kmalloc-128.num_objs
9783 ± 0% -24.1% 7424 ± 0% slabinfo.kmalloc-16.active_objs
9783 ± 0% -24.1% 7424 ± 0% slabinfo.kmalloc-16.num_objs
6059 ± 2% -36.3% 3859 ± 0% slabinfo.kmalloc-192.active_objs
6479 ± 2% -34.3% 4259 ± 0% slabinfo.kmalloc-192.num_objs
2100 ± 1% -36.3% 1337 ± 0% slabinfo.kmalloc-2048.active_objs
2220 ± 1% -34.5% 1454 ± 0% slabinfo.kmalloc-2048.num_objs
6528 ± 3% -71.2% 1879 ± 2% slabinfo.kmalloc-256.active_objs
7083 ± 2% -54.8% 3200 ± 3% slabinfo.kmalloc-256.num_objs
22970 ± 4% -14.2% 19710 ± 0% slabinfo.kmalloc-32.active_objs
22970 ± 4% -14.2% 19710 ± 0% slabinfo.kmalloc-32.num_objs
364.75 ± 0% -28.5% 260.75 ± 0% slabinfo.kmalloc-4096.active_objs
386.25 ± 0% -28.3% 277.00 ± 1% slabinfo.kmalloc-4096.num_objs
8302 ± 5% -47.4% 4369 ± 2% slabinfo.kmalloc-512.active_objs
8526 ± 4% -43.1% 4848 ± 4% slabinfo.kmalloc-512.num_objs
36996 ± 1% -17.7% 30430 ± 0% slabinfo.kmalloc-64.active_objs
577.50 ± 1% -17.6% 476.00 ± 0% slabinfo.kmalloc-64.active_slabs
36996 ± 1% -17.6% 30494 ± 0% slabinfo.kmalloc-64.num_objs
577.50 ± 1% -17.6% 476.00 ± 0% slabinfo.kmalloc-64.num_slabs
10112 ± 4% -64.6% 3584 ± 0% slabinfo.kmalloc-8.active_objs
10112 ± 4% -64.6% 3584 ± 0% slabinfo.kmalloc-8.num_objs
3729 ± 2% -58.3% 1554 ± 0% slabinfo.kmalloc-96.active_objs
3729 ± 2% -58.3% 1554 ± 0% slabinfo.kmalloc-96.num_objs
240.00 ± 11% -46.7% 128.00 ± 0% slabinfo.kmem_cache.active_objs
240.00 ± 11% -46.7% 128.00 ± 0% slabinfo.kmem_cache.num_objs
240.00 ± 11% -46.7% 128.00 ± 0% slabinfo.kmem_cache_node.active_objs
240.00 ± 11% -46.7% 128.00 ± 0% slabinfo.kmem_cache_node.num_objs
301.75 ± 2% -68.2% 96.00 ± 0% slabinfo.mm_struct.active_objs
301.75 ± 2% -68.2% 96.00 ± 0% slabinfo.mm_struct.num_objs
609.00 ± 5% -86.2% 84.00 ± 0% slabinfo.mnt_cache.active_objs
609.00 ± 5% -86.2% 84.00 ± 0% slabinfo.mnt_cache.num_objs
3772 ± 2% -44.9% 2080 ± 0% slabinfo.proc_inode_cache.active_objs
3775 ± 2% -44.9% 2080 ± 0% slabinfo.proc_inode_cache.num_objs
8496 ± 0% -9.7% 7672 ± 0% slabinfo.radix_tree_node.active_objs
8496 ± 0% -9.7% 7672 ± 0% slabinfo.radix_tree_node.num_objs
1579 ± 1% -31.8% 1078 ± 0% slabinfo.shmem_inode_cache.active_objs
1579 ± 1% -31.8% 1078 ± 0% slabinfo.shmem_inode_cache.num_objs
635.50 ± 1% -34.9% 414.00 ± 0% slabinfo.sighand_cache.active_objs
635.50 ± 1% -34.9% 414.00 ± 0% slabinfo.sighand_cache.num_objs
1009 ± 2% -43.6% 570.00 ± 0% slabinfo.signal_cache.active_objs
1009 ± 2% -43.6% 570.00 ± 0% slabinfo.signal_cache.num_objs
816.00 ± 0% -93.8% 51.00 ± 0% slabinfo.sigqueue.active_objs
816.00 ± 0% -93.8% 51.00 ± 0% slabinfo.sigqueue.num_objs
1249 ± 2% -55.1% 561.00 ± 0% slabinfo.sock_inode_cache.active_objs
1249 ± 2% -55.1% 561.00 ± 0% slabinfo.sock_inode_cache.num_objs
2081 ± 3% -16.0% 1748 ± 0% slabinfo.trace_event_file.active_objs
2081 ± 3% -16.0% 1748 ± 0% slabinfo.trace_event_file.num_objs
8559 ± 5% -72.1% 2387 ± 0% slabinfo.vm_area_struct.active_objs
194.25 ± 5% -71.3% 55.75 ± 0% slabinfo.vm_area_struct.active_slabs
8568 ± 5% -71.0% 2482 ± 0% slabinfo.vm_area_struct.num_objs
194.25 ± 5% -71.3% 55.75 ± 0% slabinfo.vm_area_struct.num_slabs
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/tbox_group/test/testcase/testtime:
gcc-4.9/performance/x86_64-rhel/debian-x86_64-2015-02-07.cgz/lkp-wsx02/shared_memory/aim9/300s
commit:
8025943c32f5de29b3fd548e917e24424297ca73
0aefb957de5f2a1d9467126114146b6c793cf180
8025943c32f5de29 0aefb957de5f2a1d9467126114
---------------- --------------------------
%stddev %change %stddev
\ | \
945356 ± 0% -12.4% 827988 ± 4% aim9.shared_memory.ops_per_sec
387.75 ± 0% +26660.6% 103764 ± 0% aim9.time.involuntary_context_switches
2837238 ± 0% -12.4% 2485141 ± 4% aim9.time.minor_page_faults
99.00 ± 0% -3.0% 96.00 ± 0% aim9.time.percent_of_cpu_this_job_got
259.94 ± 0% -3.6% 250.69 ± 0% aim9.time.system_time
26895 ± 1% -99.9% 19.16 ± 26% uptime.idle
387.75 ± 0% +26660.6% 103764 ± 0% time.involuntary_context_switches
2837238 ± 0% -12.4% 2485141 ± 4% time.minor_page_faults
77943 ± 2% -66.7% 25962 ± 0% softirqs.RCU
147092 ± 0% -100.0% 0.00 ± -1% softirqs.SCHED
651218 ± 0% -74.7% 164897 ± 0% softirqs.TIMER
647512 ± 0% -13.3% 561279 ± 0% vmstat.memory.cache
3707 ± 1% +15.6% 4285 ± 0% vmstat.system.cs
2015 ± 0% -29.8% 1415 ± 0% vmstat.system.in
0.00 ± -1% +Inf% 1626990 ±100% latency_stats.avg.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 1626990 ±100% latency_stats.max.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
4043 ± 26% +794.1% 36152 ± 14% latency_stats.sum.ep_poll.SyS_epoll_wait.entry_SYSCALL_64_fastpath
0.00 ± -1% +Inf% 1626990 ±100% latency_stats.sum.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
6889764 ± 1% +200.1% 20678450 ± 2% latency_stats.sum.pipe_wait.pipe_read.__vfs_read.vfs_read.SyS_read.entry_SYSCALL_64_fastpath
12329 ± 4% -99.5% 63.25 ± 3% latency_stats.sum.pipe_wait.pipe_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
1.45 ± 0% +6805.9% 99.96 ± 0% turbostat.%Busy
38.00 ± 0% +6875.0% 2650 ± 4% turbostat.Avg_MHz
1.84 ± 0% -96.7% 0.06 ±-1666% turbostat.CPU%c1
0.56 ± 10% -100.0% 0.00 ± -1% turbostat.CPU%c3
96.15 ± 0% -100.0% 0.03 ±141% turbostat.CPU%c6
0.20 ± 14% -84.8% 0.03 ±141% turbostat.Pkg%pc3
110688 ± 7% -38.6% 67990 ± 0% meminfo.Committed_AS
6096896 ± 0% -24.4% 4606976 ± 11% meminfo.DirectMap2M
116984 ± 5% -70.9% 34040 ± 5% meminfo.DirectMap4k
15057 ± 0% -16.7% 12545 ± 0% meminfo.KernelStack
52369 ± 0% -27.7% 37856 ± 0% meminfo.SReclaimable
110068 ± 1% -64.1% 39523 ± 0% meminfo.SUnreclaim
162438 ± 1% -52.4% 77379 ± 0% meminfo.Slab
22092158 ± 5% -100.0% 367.25 ±117% cpuidle.C1-NHM.time
232500 ± 2% -100.0% 5.50 ±134% cpuidle.C1-NHM.usage
15154580 ± 77% -100.0% 272.00 ±101% cpuidle.C1E-NHM.time
763.00 ± 28% -99.5% 3.75 ±100% cpuidle.C1E-NHM.usage
73634849 ± 10% -100.0% 528.00 ±105% cpuidle.C3-NHM.time
19059 ± 3% -100.0% 5.50 ±100% cpuidle.C3-NHM.usage
2.362e+10 ± 0% -100.0% 112246 ±104% cpuidle.C6-NHM.time
608291 ± 1% -100.0% 3.00 ±102% cpuidle.C6-NHM.usage
3843019 ± 55% -100.0% 0.00 ± 0% cpuidle.POLL.time
60.00 ± 45% -100.0% 0.00 ± 0% cpuidle.POLL.usage
500747 ± 93% +534.0% 3174963 ± 3% numa-numastat.node0.local_node
503454 ± 93% +530.6% 3174963 ± 3% numa-numastat.node0.numa_hit
2707 ± 97% -100.0% 0.00 ± -1% numa-numastat.node0.other_node
1018770 ±135% -100.0% 0.00 ± -1% numa-numastat.node1.local_node
1023905 ±134% -100.0% 4.25 ± 30% numa-numastat.node1.numa_hit
5135 ± 0% -99.9% 4.25 ± 30% numa-numastat.node1.other_node
749376 ±120% -100.0% 0.00 ± -1% numa-numastat.node2.local_node
751947 ±120% -100.0% 3.75 ± 22% numa-numastat.node2.numa_hit
1801676 ± 87% -100.0% 0.00 ± -1% numa-numastat.node3.local_node
1806773 ± 87% -100.0% 4.75 ± 27% numa-numastat.node3.numa_hit
5097 ± 0% -99.9% 4.75 ± 27% numa-numastat.node3.other_node
1617 ± 3% -41.9% 939.25 ± 4% proc-vmstat.nr_alloc_batch
174.00 ± 1% -57.3% 74.25 ± 11% proc-vmstat.nr_dirtied
940.75 ± 0% -16.7% 784.00 ± 0% proc-vmstat.nr_kernel_stack
13092 ± 0% -27.7% 9463 ± 0% proc-vmstat.nr_slab_reclaimable
27483 ± 1% -64.0% 9881 ± 0% proc-vmstat.nr_slab_unreclaimable
172.75 ± 2% -46.7% 92.00 ± 0% proc-vmstat.nr_written
4078832 ± 0% -22.3% 3170316 ± 3% proc-vmstat.numa_hit
4063463 ± 0% -22.0% 3170303 ± 3% proc-vmstat.numa_local
15369 ± 0% -99.9% 12.75 ± 26% proc-vmstat.numa_other
72722 ± 92% +185.3% 207454 ± 5% proc-vmstat.pgalloc_dma32
6246311 ± 1% -49.4% 3161179 ± 5% proc-vmstat.pgalloc_normal
6321814 ± 0% -46.8% 3364037 ± 5% proc-vmstat.pgfree
54.27 ± 1% +10.8% 60.11 ± 1% perf-profile.cycles-pp.SYSC_semtimedop.sys_semop.entry_SYSCALL_64_fastpath
1.01 ± 14% -97.0% 0.03 ± 23% perf-profile.cycles-pp.__memcpy.mga_imageblit.soft_cursor.bit_cursor.fb_flashcursor
1.92 ± 4% +37.4% 2.64 ± 10% perf-profile.cycles-pp.__might_fault.SYSC_semtimedop.sys_semop.entry_SYSCALL_64_fastpath
1.24 ± 4% +48.1% 1.84 ± 13% perf-profile.cycles-pp.__might_sleep.__might_fault.SYSC_semtimedop.sys_semop.entry_SYSCALL_64_fastpath
1.40 ± 3% -23.5% 1.07 ± 2% perf-profile.cycles-pp.__shmem_file_setup.shmem_kernel_file_setup.newseg.ipcget.sys_shmget
1.01 ± 14% -96.8% 0.03 ± 13% perf-profile.cycles-pp.bit_cursor.fb_flashcursor.process_one_work.worker_thread.kthread
4.77 ± 9% -100.0% 0.00 ± -1% perf-profile.cycles-pp.call_cpuidle.cpu_startup_entry.start_secondary
5.41 ± 9% -100.0% 0.00 ± -1% perf-profile.cycles-pp.cpu_startup_entry.start_secondary
4.77 ± 9% -100.0% 0.00 ± -1% perf-profile.cycles-pp.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
4.26 ± 9% -100.0% 0.00 ± -1% perf-profile.cycles-pp.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry.start_secondary
1.34 ± 4% +14.9% 1.54 ± 7% perf-profile.cycles-pp.do_munmap.sys_shmdt.entry_SYSCALL_64_fastpath
2.10 ± 2% -8.7% 1.92 ± 3% perf-profile.cycles-pp.do_shmat.sys_shmat.entry_SYSCALL_64_fastpath
1.03 ± 16% -96.9% 0.03 ± 13% perf-profile.cycles-pp.fb_flashcursor.process_one_work.worker_thread.kthread.ret_from_fork
4.24 ± 9% -100.0% 0.00 ± -1% perf-profile.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.call_cpuidle.cpu_startup_entry
5.73 ± 4% +23.3% 7.07 ± 4% perf-profile.cycles-pp.ipc_has_perm.isra.26.selinux_ipc_permission.security_ipc_permission.ipcperms.SYSC_semtimedop
2.67 ± 4% -23.0% 2.05 ± 2% perf-profile.cycles-pp.ipcget.sys_shmget.entry_SYSCALL_64_fastpath
11.46 ± 2% +23.8% 14.19 ± 6% perf-profile.cycles-pp.ipcperms.SYSC_semtimedop.sys_semop.entry_SYSCALL_64_fastpath
1.92 ± 15% -63.2% 0.71 ± 58% perf-profile.cycles-pp.kthread.ret_from_fork
1.01 ± 14% -96.8% 0.03 ± 13% perf-profile.cycles-pp.mga_imageblit.soft_cursor.bit_cursor.fb_flashcursor.process_one_work
2.54 ± 4% -23.3% 1.95 ± 3% perf-profile.cycles-pp.newseg.ipcget.sys_shmget.entry_SYSCALL_64_fastpath
1.68 ± 4% +23.2% 2.07 ± 8% perf-profile.cycles-pp.perform_atomic_semop.isra.4.SYSC_semtimedop.sys_semop.entry_SYSCALL_64_fastpath
1.04 ± 16% -96.2% 0.04 ± 17% perf-profile.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
1.92 ± 15% -63.2% 0.71 ± 58% perf-profile.cycles-pp.ret_from_fork
8.47 ± 4% +20.9% 10.24 ± 3% perf-profile.cycles-pp.security_ipc_permission.ipcperms.SYSC_semtimedop.sys_semop.entry_SYSCALL_64_fastpath
6.48 ± 4% +25.6% 8.14 ± 5% perf-profile.cycles-pp.selinux_ipc_permission.security_ipc_permission.ipcperms.SYSC_semtimedop.sys_semop
1.45 ± 3% -23.1% 1.11 ± 0% perf-profile.cycles-pp.shmem_kernel_file_setup.newseg.ipcget.sys_shmget.entry_SYSCALL_64_fastpath
1.01 ± 14% -96.8% 0.03 ± 13% perf-profile.cycles-pp.soft_cursor.bit_cursor.fb_flashcursor.process_one_work.worker_thread
5.43 ± 9% -100.0% 0.00 ± -1% perf-profile.cycles-pp.start_secondary
2.15 ± 3% -8.9% 1.96 ± 3% perf-profile.cycles-pp.sys_shmat.entry_SYSCALL_64_fastpath
1.40 ± 4% +16.4% 1.63 ± 7% perf-profile.cycles-pp.sys_shmdt.entry_SYSCALL_64_fastpath
2.69 ± 4% -22.6% 2.08 ± 2% perf-profile.cycles-pp.sys_shmget.entry_SYSCALL_64_fastpath
0.98 ± 7% +17.8% 1.16 ± 10% perf-profile.cycles-pp.unmap_region.do_munmap.sys_shmdt.entry_SYSCALL_64_fastpath
1.04 ± 16% -96.2% 0.04 ± 17% perf-profile.cycles-pp.worker_thread.kthread.ret_from_fork
0.02 ± 35% +2625.0% 0.55 ± 90% perf-profile.cycles-pp.write
33183 ± 16% +63.3% 54174 ± 2% numa-meminfo.node0.Active
9029 ± 56% +236.0% 30334 ± 2% numa-meminfo.node0.Active(anon)
497.33 ± 70% +820.9% 4580 ± 19% numa-meminfo.node0.AnonHugePages
5620 ± 22% +286.1% 21698 ± 3% numa-meminfo.node0.AnonPages
120525 ± 3% +12.1% 135059 ± 0% numa-meminfo.node0.FilePages
529.25 ± 28% +1698.3% 9517 ± 0% numa-meminfo.node0.Inactive(anon)
2878 ± 0% +197.0% 8547 ± 0% numa-meminfo.node0.Mapped
940.25 ± 19% +239.6% 3192 ± 0% numa-meminfo.node0.PageTables
31892 ± 20% -42.8% 18242 ± 4% numa-meminfo.node0.SUnreclaim
3992 ± 97% +355.1% 18169 ± 0% numa-meminfo.node0.Shmem
46158 ± 19% -23.4% 35344 ± 2% numa-meminfo.node0.Slab
28573 ± 11% -15.2% 24240 ± 3% numa-meminfo.node1.Active
4538 ± 78% -100.0% 0.00 ± -1% numa-meminfo.node1.Active(anon)
3207 ± 43% -100.0% 0.00 ± -1% numa-meminfo.node1.AnonPages
2511 ± 4% -36.3% 1600 ± 0% numa-meminfo.node1.KernelStack
251025 ± 1% -24.5% 189492 ± 0% numa-meminfo.node1.MemUsed
670.75 ± 34% -100.0% 0.00 ± -1% numa-meminfo.node1.PageTables
12102 ± 8% -42.0% 7020 ± 9% numa-meminfo.node1.SReclaimable
23477 ± 11% -69.5% 7171 ± 10% numa-meminfo.node1.SUnreclaim
1588 ±133% -100.0% 0.00 ± -1% numa-meminfo.node1.Shmem
35580 ± 7% -60.1% 14191 ± 8% numa-meminfo.node1.Slab
35909 ± 15% -34.0% 23700 ± 3% numa-meminfo.node2.Active
11217 ± 54% -100.0% 0.00 ± -1% numa-meminfo.node2.Active(anon)
7744 ± 34% -100.0% 0.00 ± -1% numa-meminfo.node2.AnonPages
126406 ± 5% -8.0% 116263 ± 0% numa-meminfo.node2.FilePages
6530 ± 56% -100.0% 0.00 ± -1% numa-meminfo.node2.Inactive(anon)
2594 ± 6% -38.3% 1600 ± 0% numa-meminfo.node2.KernelStack
7153 ± 34% -59.8% 2877 ± 0% numa-meminfo.node2.Mapped
265732 ± 4% -28.5% 190010 ± 0% numa-meminfo.node2.MemUsed
13992 ± 9% -51.2% 6828 ± 6% numa-meminfo.node2.SReclaimable
25439 ± 10% -73.4% 6770 ± 12% numa-meminfo.node2.SUnreclaim
10054 ± 64% -100.0% 0.00 ± -1% numa-meminfo.node2.Shmem
39432 ± 5% -65.5% 13598 ± 8% numa-meminfo.node2.Slab
32201 ± 11% -19.6% 25885 ± 2% numa-meminfo.node3.Active
7541 ± 28% -100.0% 0.00 ± -1% numa-meminfo.node3.Active(anon)
6204 ± 26% -100.0% 0.00 ± -1% numa-meminfo.node3.AnonPages
2403 ±145% -100.0% 0.00 ± -1% numa-meminfo.node3.Inactive(anon)
2894 ± 4% -44.7% 1600 ± 0% numa-meminfo.node3.KernelStack
258097 ± 2% -26.5% 189733 ± 0% numa-meminfo.node3.MemUsed
926.00 ± 27% -100.0% 0.00 ± -1% numa-meminfo.node3.PageTables
12008 ± 12% -42.5% 6906 ± 3% numa-meminfo.node3.SReclaimable
29494 ± 23% -75.1% 7335 ± 11% numa-meminfo.node3.SUnreclaim
3776 ± 87% -100.0% 0.00 ± -1% numa-meminfo.node3.Shmem
41503 ± 14% -65.7% 14241 ± 6% numa-meminfo.node3.Slab
2256 ± 56% +236.0% 7583 ± 2% numa-vmstat.node0.nr_active_anon
418.75 ± 3% -14.8% 356.75 ± 2% numa-vmstat.node0.nr_alloc_batch
1404 ± 22% +286.2% 5424 ± 3% numa-vmstat.node0.nr_anon_pages
30130 ± 3% +12.1% 33764 ± 0% numa-vmstat.node0.nr_file_pages
132.00 ± 28% +1702.3% 2379 ± 0% numa-vmstat.node0.nr_inactive_anon
719.00 ± 0% +197.1% 2136 ± 0% numa-vmstat.node0.nr_mapped
234.75 ± 19% +239.6% 797.25 ± 0% numa-vmstat.node0.nr_page_table_pages
997.50 ± 97% +355.3% 4542 ± 0% numa-vmstat.node0.nr_shmem
7972 ± 20% -42.8% 4560 ± 4% numa-vmstat.node0.nr_slab_unreclaimable
435246 ± 90% +293.4% 1712328 ± 3% numa-vmstat.node0.numa_hit
398199 ± 98% +330.0% 1712328 ± 3% numa-vmstat.node0.numa_local
37047 ± 8% -100.0% 0.00 ± -1% numa-vmstat.node0.numa_other
1134 ± 78% -100.0% 0.00 ± -1% numa-vmstat.node1.nr_active_anon
451.25 ± 7% -50.6% 223.00 ± 14% numa-vmstat.node1.nr_alloc_batch
801.00 ± 43% -100.0% 0.00 ± -1% numa-vmstat.node1.nr_anon_pages
156.50 ± 4% -36.1% 100.00 ± 0% numa-vmstat.node1.nr_kernel_stack
167.25 ± 34% -100.0% 0.00 ± -1% numa-vmstat.node1.nr_page_table_pages
396.75 ±133% -100.0% 0.00 ± -1% numa-vmstat.node1.nr_shmem
3025 ± 8% -42.0% 1755 ± 9% numa-vmstat.node1.nr_slab_reclaimable
5873 ± 11% -69.5% 1792 ± 10% numa-vmstat.node1.nr_slab_unreclaimable
601135 ±113% -92.6% 44250 ± 0% numa-vmstat.node1.numa_hit
558278 ±121% -100.0% 0.00 ± -1% numa-vmstat.node1.numa_local
2803 ± 54% -100.0% 0.00 ± -1% numa-vmstat.node2.nr_active_anon
475.75 ± 4% -58.2% 198.75 ± 15% numa-vmstat.node2.nr_alloc_batch
1935 ± 34% -100.0% 0.00 ± -1% numa-vmstat.node2.nr_anon_pages
31601 ± 5% -8.0% 29065 ± 0% numa-vmstat.node2.nr_file_pages
1632 ± 56% -100.0% 0.00 ± -1% numa-vmstat.node2.nr_inactive_anon
161.75 ± 6% -38.2% 100.00 ± 0% numa-vmstat.node2.nr_kernel_stack
1788 ± 34% -59.8% 719.00 ± 0% numa-vmstat.node2.nr_mapped
164.75 ± 16% -100.0% 0.00 ± -1% numa-vmstat.node2.nr_page_table_pages
2513 ± 64% -100.0% 0.00 ± -1% numa-vmstat.node2.nr_shmem
3497 ± 9% -51.2% 1707 ± 6% numa-vmstat.node2.nr_slab_reclaimable
6356 ± 10% -73.4% 1692 ± 12% numa-vmstat.node2.nr_slab_unreclaimable
379630 ± 76% -88.4% 44216 ± 0% numa-vmstat.node2.numa_hit
353146 ± 83% -100.0% 0.00 ± -1% numa-vmstat.node2.numa_local
1884 ± 28% -100.0% 0.00 ± -1% numa-vmstat.node3.nr_active_anon
440.50 ± 10% -64.5% 156.50 ± 21% numa-vmstat.node3.nr_alloc_batch
1550 ± 26% -100.0% 0.00 ± -1% numa-vmstat.node3.nr_anon_pages
600.50 ±145% -100.0% 0.00 ± -1% numa-vmstat.node3.nr_inactive_anon
180.25 ± 4% -44.5% 100.00 ± 0% numa-vmstat.node3.nr_kernel_stack
231.25 ± 27% -100.0% 0.00 ± -1% numa-vmstat.node3.nr_page_table_pages
943.50 ± 87% -100.0% 0.00 ± -1% numa-vmstat.node3.nr_shmem
3001 ± 12% -42.5% 1726 ± 3% numa-vmstat.node3.nr_slab_reclaimable
7371 ± 23% -75.1% 1833 ± 11% numa-vmstat.node3.nr_slab_unreclaimable
988661 ± 78% -95.5% 44266 ± 0% numa-vmstat.node3.numa_hit
936273 ± 83% -100.0% 0.00 ± -1% numa-vmstat.node3.numa_local
52387 ± 0% -15.5% 44266 ± 0% numa-vmstat.node3.numa_other
7980 ± 1% -11.8% 7038 ± 0% slabinfo.Acpi-Namespace.active_objs
7980 ± 1% -11.8% 7038 ± 0% slabinfo.Acpi-Namespace.num_objs
56168 ± 0% -15.6% 47403 ± 0% slabinfo.Acpi-Operand.active_objs
1003 ± 0% -15.3% 849.25 ± 0% slabinfo.Acpi-Operand.active_slabs
56168 ± 0% -15.3% 47558 ± 0% slabinfo.Acpi-Operand.num_objs
1003 ± 0% -15.3% 849.25 ± 0% slabinfo.Acpi-Operand.num_slabs
3081 ± 0% -97.5% 78.00 ± 0% slabinfo.Acpi-ParseExt.active_objs
3081 ± 0% -97.5% 78.00 ± 0% slabinfo.Acpi-ParseExt.num_objs
63773 ± 4% -29.6% 44900 ± 0% slabinfo.Acpi-State.active_objs
1253 ± 4% -29.8% 880.25 ± 0% slabinfo.Acpi-State.active_slabs
63945 ± 4% -29.7% 44925 ± 0% slabinfo.Acpi-State.num_objs
1253 ± 4% -29.8% 880.25 ± 0% slabinfo.Acpi-State.num_slabs
2760 ± 1% -89.2% 297.00 ± 5% slabinfo.RAW.active_objs
2760 ± 1% -89.2% 297.00 ± 5% slabinfo.RAW.num_objs
131.75 ± 5% -87.1% 17.00 ± 0% slabinfo.TCP.active_objs
131.75 ± 5% -87.1% 17.00 ± 0% slabinfo.TCP.num_objs
272.00 ± 12% -87.5% 34.00 ± 0% slabinfo.UDP.active_objs
272.00 ± 12% -87.5% 34.00 ± 0% slabinfo.UDP.num_objs
12139 ± 3% -89.6% 1260 ± 1% slabinfo.anon_vma.active_objs
237.50 ± 3% -90.0% 23.75 ± 1% slabinfo.anon_vma.active_slabs
12139 ± 3% -89.6% 1260 ± 1% slabinfo.anon_vma.num_objs
237.50 ± 3% -90.0% 23.75 ± 1% slabinfo.anon_vma.num_slabs
1131 ± 7% -93.5% 73.00 ± 0% slabinfo.avc_xperms_node.active_objs
1131 ± 7% -93.5% 73.00 ± 0% slabinfo.avc_xperms_node.num_objs
399.75 ± 8% -90.2% 39.00 ± 0% slabinfo.bdev_cache.active_objs
399.75 ± 8% -90.2% 39.00 ± 0% slabinfo.bdev_cache.num_objs
399.00 ± 15% -86.0% 56.00 ± 0% slabinfo.blkdev_queue.active_objs
399.00 ± 15% -86.0% 56.00 ± 0% slabinfo.blkdev_queue.num_objs
1221 ± 4% -96.4% 44.00 ± 0% slabinfo.blkdev_requests.active_objs
1221 ± 4% -96.4% 44.00 ± 0% slabinfo.blkdev_requests.num_objs
59543 ± 1% -24.6% 44917 ± 0% slabinfo.dentry.active_objs
1419 ± 1% -24.6% 1069 ± 0% slabinfo.dentry.active_slabs
59611 ± 1% -24.6% 44932 ± 0% slabinfo.dentry.num_objs
1419 ± 1% -24.6% 1069 ± 0% slabinfo.dentry.num_slabs
617.25 ± 14% -93.7% 39.00 ± 0% slabinfo.file_lock_cache.active_objs
617.25 ± 14% -93.7% 39.00 ± 0% slabinfo.file_lock_cache.num_objs
3748 ± 2% -95.1% 184.00 ± 0% slabinfo.files_cache.active_objs
3748 ± 2% -95.1% 184.00 ± 0% slabinfo.files_cache.num_objs
3888 ± 0% -10.4% 3485 ± 0% slabinfo.ftrace_event_field.active_objs
3888 ± 0% -10.4% 3485 ± 0% slabinfo.ftrace_event_field.num_objs
4055 ± 3% -62.2% 1531 ± 0% slabinfo.idr_layer_cache.active_objs
274.75 ± 3% -61.7% 105.25 ± 0% slabinfo.idr_layer_cache.active_slabs
4127 ± 3% -61.6% 1584 ± 0% slabinfo.idr_layer_cache.num_objs
274.75 ± 3% -61.7% 105.25 ± 0% slabinfo.idr_layer_cache.num_slabs
43639 ± 0% -14.6% 37276 ± 0% slabinfo.inode_cache.active_objs
764.75 ± 0% -14.6% 653.00 ± 0% slabinfo.inode_cache.active_slabs
43639 ± 0% -14.6% 37276 ± 0% slabinfo.inode_cache.num_objs
764.75 ± 0% -14.6% 653.00 ± 0% slabinfo.inode_cache.num_slabs
60554 ± 0% -36.5% 38454 ± 0% slabinfo.kernfs_node_cache.active_objs
1781 ± 0% -36.5% 1131 ± 0% slabinfo.kernfs_node_cache.active_slabs
60554 ± 0% -36.5% 38454 ± 0% slabinfo.kernfs_node_cache.num_objs
1781 ± 0% -36.5% 1131 ± 0% slabinfo.kernfs_node_cache.num_slabs
4283 ± 0% -80.9% 819.75 ± 3% slabinfo.kmalloc-1024.active_objs
137.75 ± 0% -77.9% 30.50 ± 5% slabinfo.kmalloc-1024.active_slabs
4410 ± 0% -77.9% 976.00 ± 5% slabinfo.kmalloc-1024.num_objs
137.75 ± 0% -77.9% 30.50 ± 5% slabinfo.kmalloc-1024.num_slabs
6191 ± 1% -62.9% 2296 ± 1% slabinfo.kmalloc-128.active_objs
192.75 ± 1% -62.4% 72.50 ± 1% slabinfo.kmalloc-128.active_slabs
6191 ± 1% -62.5% 2320 ± 1% slabinfo.kmalloc-128.num_objs
192.75 ± 1% -62.4% 72.50 ± 1% slabinfo.kmalloc-128.num_slabs
26275 ± 0% -72.3% 7283 ± 1% slabinfo.kmalloc-16.active_objs
26556 ± 0% -71.3% 7615 ± 1% slabinfo.kmalloc-16.num_objs
9568 ± 3% -59.2% 3903 ± 0% slabinfo.kmalloc-192.active_objs
227.75 ± 3% -57.6% 96.50 ± 0% slabinfo.kmalloc-192.active_slabs
9606 ± 3% -57.8% 4053 ± 0% slabinfo.kmalloc-192.num_objs
227.75 ± 3% -57.6% 96.50 ± 0% slabinfo.kmalloc-192.num_slabs
4112 ± 4% -81.7% 750.50 ± 1% slabinfo.kmalloc-2048.active_objs
264.00 ± 4% -81.0% 50.25 ± 0% slabinfo.kmalloc-2048.active_slabs
4229 ± 4% -80.9% 807.75 ± 0% slabinfo.kmalloc-2048.num_objs
264.00 ± 4% -81.0% 50.25 ± 0% slabinfo.kmalloc-2048.num_slabs
14676 ± 10% -89.0% 1615 ± 1% slabinfo.kmalloc-256.active_objs
470.50 ± 10% -83.7% 76.50 ± 1% slabinfo.kmalloc-256.active_slabs
15079 ± 10% -83.7% 2464 ± 0% slabinfo.kmalloc-256.num_objs
470.50 ± 10% -83.7% 76.50 ± 1% slabinfo.kmalloc-256.num_slabs
22983 ± 0% -59.9% 9217 ± 0% slabinfo.kmalloc-32.active_objs
179.00 ± 0% -59.2% 73.00 ± 0% slabinfo.kmalloc-32.active_slabs
22983 ± 0% -59.3% 9344 ± 0% slabinfo.kmalloc-32.num_objs
179.00 ± 0% -59.2% 73.00 ± 0% slabinfo.kmalloc-32.num_slabs
1162 ± 0% -59.1% 475.00 ± 0% slabinfo.kmalloc-4096.active_objs
1229 ± 0% -57.4% 523.00 ± 0% slabinfo.kmalloc-4096.num_objs
8679 ± 6% -65.5% 2996 ± 1% slabinfo.kmalloc-512.active_objs
276.00 ± 5% -62.5% 103.50 ± 1% slabinfo.kmalloc-512.active_slabs
8845 ± 5% -62.3% 3334 ± 1% slabinfo.kmalloc-512.num_objs
276.00 ± 5% -62.5% 103.50 ± 1% slabinfo.kmalloc-512.num_slabs
40241 ± 2% -58.4% 16722 ± 0% slabinfo.kmalloc-64.active_objs
628.00 ± 2% -57.6% 266.50 ± 0% slabinfo.kmalloc-64.active_slabs
40241 ± 2% -57.5% 17115 ± 0% slabinfo.kmalloc-64.num_objs
628.00 ± 2% -57.6% 266.50 ± 0% slabinfo.kmalloc-64.num_slabs
146474 ± 10% -97.2% 4094 ± 0% slabinfo.kmalloc-8.active_objs
287.50 ± 10% -97.6% 7.00 ± 0% slabinfo.kmalloc-8.active_slabs
147537 ± 10% -97.2% 4094 ± 0% slabinfo.kmalloc-8.num_objs
287.50 ± 10% -97.6% 7.00 ± 0% slabinfo.kmalloc-8.num_slabs
585.50 ± 1% -91.1% 52.00 ± 0% slabinfo.kmalloc-8192.active_objs
147.00 ± 1% -90.6% 13.75 ± 3% slabinfo.kmalloc-8192.active_slabs
588.00 ± 1% -90.6% 55.00 ± 3% slabinfo.kmalloc-8192.num_objs
147.00 ± 1% -90.6% 13.75 ± 3% slabinfo.kmalloc-8192.num_slabs
6062 ± 1% -59.8% 2440 ± 0% slabinfo.kmalloc-96.active_objs
6321 ± 2% -56.5% 2750 ± 1% slabinfo.kmalloc-96.num_objs
612.00 ± 0% -75.0% 153.00 ± 0% slabinfo.kmem_cache.active_objs
612.00 ± 0% -75.0% 153.00 ± 0% slabinfo.kmem_cache.num_objs
1025 ± 0% -56.2% 449.00 ± 0% slabinfo.kmem_cache_node.active_objs
1088 ± 0% -52.9% 512.00 ± 0% slabinfo.kmem_cache_node.num_objs
1512 ± 0% -93.3% 101.00 ± 0% slabinfo.mm_struct.active_objs
1512 ± 0% -93.3% 101.00 ± 0% slabinfo.mm_struct.num_objs
1743 ± 7% -95.2% 84.00 ± 0% slabinfo.mnt_cache.active_objs
1743 ± 7% -95.2% 84.00 ± 0% slabinfo.mnt_cache.num_objs
152.00 ± 17% -78.9% 32.00 ± 0% slabinfo.nfs_inode_cache.active_objs
152.00 ± 17% -78.9% 32.00 ± 0% slabinfo.nfs_inode_cache.num_objs
11506 ± 1% -48.2% 5959 ± 0% slabinfo.proc_inode_cache.active_objs
220.75 ± 1% -48.4% 114.00 ± 0% slabinfo.proc_inode_cache.active_slabs
11506 ± 1% -48.2% 5964 ± 0% slabinfo.proc_inode_cache.num_objs
220.75 ± 1% -48.4% 114.00 ± 0% slabinfo.proc_inode_cache.num_slabs
11577 ± 0% -33.4% 7714 ± 0% slabinfo.radix_tree_node.active_objs
11577 ± 0% -33.4% 7714 ± 0% slabinfo.radix_tree_node.num_objs
9696 ± 11% -85.4% 1420 ± 0% slabinfo.shmem_inode_cache.active_objs
199.75 ± 11% -86.0% 28.00 ± 0% slabinfo.shmem_inode_cache.active_slabs
9818 ± 11% -85.5% 1420 ± 0% slabinfo.shmem_inode_cache.num_objs
199.75 ± 11% -86.0% 28.00 ± 0% slabinfo.shmem_inode_cache.num_slabs
1999 ± 0% -60.3% 794.00 ± 0% slabinfo.sighand_cache.active_objs
2010 ± 1% -60.5% 794.00 ± 0% slabinfo.sighand_cache.num_objs
3456 ± 2% -74.0% 899.00 ± 0% slabinfo.signal_cache.active_objs
3456 ± 2% -74.0% 899.00 ± 0% slabinfo.signal_cache.num_objs
4079 ± 0% -98.7% 51.00 ± 0% slabinfo.sigqueue.active_objs
4079 ± 0% -98.7% 51.00 ± 0% slabinfo.sigqueue.num_objs
3722 ± 5% -90.4% 357.00 ± 0% slabinfo.sock_inode_cache.active_objs
3722 ± 5% -90.4% 357.00 ± 0% slabinfo.sock_inode_cache.num_objs
1320 ± 0% -40.4% 787.25 ± 0% slabinfo.task_struct.active_objs
450.00 ± 0% -40.8% 266.50 ± 0% slabinfo.task_struct.active_slabs
1351 ± 0% -40.8% 800.25 ± 0% slabinfo.task_struct.num_objs
450.00 ± 0% -40.8% 266.50 ± 0% slabinfo.task_struct.num_slabs
2357 ± 1% -25.9% 1748 ± 0% slabinfo.trace_event_file.active_objs
2357 ± 1% -25.9% 1748 ± 0% slabinfo.trace_event_file.num_objs
22037 ± 3% -89.2% 2370 ± 2% slabinfo.vm_area_struct.active_objs
500.75 ± 3% -88.3% 58.50 ± 1% slabinfo.vm_area_struct.active_slabs
22053 ± 3% -88.2% 2601 ± 1% slabinfo.vm_area_struct.num_objs
500.75 ± 3% -88.3% 58.50 ± 1% slabinfo.vm_area_struct.num_slabs
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/tbox_group/test/testcase:
gcc-4.9/performance/x86_64-rhel/debian-x86_64-2015-02-07.cgz/lkp-sbx04/poll2/will-it-scale
commit:
8025943c32f5de29b3fd548e917e24424297ca73
0aefb957de5f2a1d9467126114146b6c793cf180
8025943c32f5de29 0aefb957de5f2a1d9467126114
---------------- --------------------------
         %stddev     %change         %stddev
             \          |                \
621087 ± 0% -75.6% 151438 ± 0% will-it-scale.per_process_ops
270552 ± 0% -76.8% 62683 ± 0% will-it-scale.per_thread_ops
0.51 ± 0% -96.9% 0.02 ± 0% will-it-scale.scalability
308.83 ± 0% -28.6% 220.39 ± 0% will-it-scale.time.elapsed_time
308.83 ± 0% -28.6% 220.39 ± 0% will-it-scale.time.elapsed_time.max
16661 ± 4% +427.8% 87940 ± 0% will-it-scale.time.involuntary_context_switches
47746 ± 0% -76.4% 11255 ± 0% will-it-scale.time.minor_page_faults
1371 ± 0% -96.5% 48.00 ± 0% will-it-scale.time.percent_of_cpu_this_job_got
3997 ± 0% -97.5% 101.17 ± 0% will-it-scale.time.system_time
238.37 ± 0% -97.4% 6.18 ± 0% will-it-scale.time.user_time
2415 ± 13% -86.3% 332.00 ± 0% will-it-scale.time.voluntary_context_switches
343.60 ± 0% -27.3% 249.68 ± 0% uptime.boot
13400 ± 0% -99.9% 13.95 ± 0% uptime.idle
23.86 ± 0% -16.9% 19.83 ± 1% perf-profile.cycles.__fget_light.do_sys_poll.sys_poll.entry_SYSCALL_64_fastpath
1.41 ± 2% -14.0% 1.22 ± 5% perf-profile.cycles.__kmalloc.do_sys_poll.sys_poll.entry_SYSCALL_64_fastpath
2.58 ± 2% -10.6% 2.30 ± 6% perf-profile.cycles.kfree.do_sys_poll.sys_poll.entry_SYSCALL_64_fastpath
595356 ± 2% -98.9% 6604 ± 3% softirqs.RCU
239132 ± 0% -100.0% 0.00 ± -1% softirqs.SCHED
4674720 ± 0% -97.5% 118150 ± 0% softirqs.TIMER
647756 ± 0% -12.0% 569945 ± 0% vmstat.memory.cache
27.00 ± 0% +18.5% 32.00 ± 0% vmstat.procs.r
2496 ± 3% +61.6% 4034 ± 0% vmstat.system.cs
29660 ± 0% -96.2% 1139 ± 0% vmstat.system.in
308.83 ± 0% -28.6% 220.39 ± 0% time.elapsed_time
308.83 ± 0% -28.6% 220.39 ± 0% time.elapsed_time.max
16661 ± 4% +427.8% 87940 ± 0% time.involuntary_context_switches
47746 ± 0% -76.4% 11255 ± 0% time.minor_page_faults
1371 ± 0% -96.5% 48.00 ± 0% time.percent_of_cpu_this_job_got
3997 ± 0% -97.5% 101.17 ± 0% time.system_time
238.37 ± 0% -97.4% 6.18 ± 0% time.user_time
2415 ± 13% -86.3% 332.00 ± 0% time.voluntary_context_switches
12555159 ±111% -100.0% 0.00 ± -1% latency_stats.avg.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
110916 ± 26% -100.0% 0.00 ± -1% latency_stats.max.expand_files.__alloc_fd.get_unused_fd_flags.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
13529494 ±114% -100.0% 0.00 ± -1% latency_stats.max.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
28467 ±119% -100.0% 0.00 ± -1% latency_stats.sum.do_unlinkat.SyS_unlink.entry_SYSCALL_64_fastpath
27346 ± 8% +273.2% 102063 ± 7% latency_stats.sum.ep_poll.SyS_epoll_wait.entry_SYSCALL_64_fastpath
5252412 ± 3% -100.0% 0.00 ± -1% latency_stats.sum.expand_files.__alloc_fd.get_unused_fd_flags.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
21016052 ±132% -100.0% 0.00 ± -1% latency_stats.sum.nfs_wait_on_request.nfs_updatepage.nfs_write_end.generic_perform_write.__generic_file_write_iter.generic_file_write_iter.nfs_file_write.__vfs_write.vfs_write.SyS_write.entry_SYSCALL_64_fastpath
29706 ±127% -100.0% 0.00 ± -1% latency_stats.sum.path_openat.do_filp_open.do_sys_open.SyS_open.entry_SYSCALL_64_fastpath
875325 ± 36% -100.0% 0.00 ± -1% latency_stats.sum.wait_on_page_bit.__migration_entry_wait.migration_entry_wait.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
37435 ± 2% -27.1% 27283 ± 1% meminfo.Active(anon)
5839 ± 17% -30.5% 4056 ± 0% meminfo.AnonHugePages
28264 ± 3% -6.9% 26318 ± 1% meminfo.AnonPages
6068736 ± 0% -32.8% 4076544 ± 0% meminfo.DirectMap2M
144988 ± 16% -72.4% 40028 ± 2% meminfo.DirectMap4k
14592 ± 0% -12.8% 12721 ± 0% meminfo.KernelStack
53538 ± 0% -26.0% 39607 ± 0% meminfo.SReclaimable
105238 ± 0% -52.3% 50204 ± 0% meminfo.SUnreclaim
19949 ± 0% -44.9% 10987 ± 0% meminfo.Shmem
158777 ± 0% -43.4% 89812 ± 0% meminfo.Slab
64609 ± 1% +625.6% 468811 ± 0% numa-numastat.node0.local_node
67698 ± 4% +592.5% 468811 ± 0% numa-numastat.node0.numa_hit
3089 ± 57% -100.0% 0.00 ± -1% numa-numastat.node0.other_node
123400 ± 5% -100.0% 0.00 ± -1% numa-numastat.node1.local_node
127518 ± 5% -100.0% 4.50 ± 11% numa-numastat.node1.numa_hit
4117 ± 0% -99.9% 4.50 ± 11% numa-numastat.node1.other_node
189086 ± 14% -100.0% 0.00 ± -1% numa-numastat.node2.local_node
193204 ± 13% -100.0% 4.75 ± 17% numa-numastat.node2.numa_hit
4118 ± 0% -99.9% 4.75 ± 17% numa-numastat.node2.other_node
416535 ± 4% -100.0% 0.00 ± -1% numa-numastat.node3.local_node
417570 ± 4% -100.0% 8.25 ± 5% numa-numastat.node3.numa_hit
42.99 ± 0% +132.1% 99.79 ± 0% turbostat.%Busy
1244 ± 0% +132.0% 2887 ± 0% turbostat.Avg_MHz
21.65 ± 0% -99.0% 0.21 ± 6% turbostat.CPU%c1
0.01 ± 0% -100.0% 0.00 ± -1% turbostat.CPU%c3
35.36 ± 0% -100.0% 0.00 ± -1% turbostat.CPU%c7
205.94 ± 0% -81.8% 37.53 ± 0% turbostat.CorWatt
69.75 ± 1% -28.3% 50.00 ± 1% turbostat.CoreTmp
30.57 ± 0% -100.0% 0.00 ± -1% turbostat.Pkg%pc2
0.07 ± 43% -100.0% 0.00 ± -1% turbostat.Pkg%pc6
69.75 ± 1% -28.3% 50.00 ± 1% turbostat.PkgTmp
260.89 ± 0% -80.4% 51.23 ± 0% turbostat.PkgWatt
26901856 ± 24% -100.0% 0.00 ± 0% cpuidle.C1-SNB.time
114490 ± 2% -100.0% 0.00 ± 0% cpuidle.C1-SNB.usage
13309860 ± 5% -100.0% 550.25 ±115% cpuidle.C1E-SNB.time
3950 ± 35% -100.0% 0.50 ±100% cpuidle.C1E-SNB.usage
447645 ± 20% -99.3% 3217 ±109% cpuidle.C3-SNB.time
360.75 ± 9% -99.7% 1.25 ± 87% cpuidle.C3-SNB.usage
10273 ± 69% -91.3% 894.67 ±141% cpuidle.C6-SNB.time
21.50 ± 26% -98.4% 0.33 ±141% cpuidle.C6-SNB.usage
1.126e+10 ± 0% -100.0% 459884 ± 7% cpuidle.C7-SNB.time
312083 ± 1% -100.0% 14.25 ± 35% cpuidle.C7-SNB.usage
15647510 ± 83% -99.8% 36264 ±103% cpuidle.POLL.time
655.00 ± 53% -99.8% 1.50 ± 33% cpuidle.POLL.usage
9357 ± 2% -27.1% 6823 ± 1% proc-vmstat.nr_active_anon
7063 ± 3% -6.8% 6582 ± 1% proc-vmstat.nr_anon_pages
153.75 ± 12% -52.5% 73.00 ± 0% proc-vmstat.nr_dirtied
911.50 ± 0% -12.8% 794.50 ± 0% proc-vmstat.nr_kernel_stack
4986 ± 0% -44.9% 2746 ± 0% proc-vmstat.nr_shmem
13384 ± 0% -26.0% 9901 ± 0% proc-vmstat.nr_slab_reclaimable
26309 ± 0% -52.3% 12550 ± 0% proc-vmstat.nr_slab_unreclaimable
146.00 ± 11% -33.6% 97.00 ± 0% proc-vmstat.nr_written
5821 ± 2% -96.2% 222.00 ± 0% proc-vmstat.numa_hint_faults
4816 ± 3% -95.4% 222.00 ± 0% proc-vmstat.numa_hint_faults_local
799548 ± 0% -41.9% 464510 ± 0% proc-vmstat.numa_hit
789143 ± 0% -41.1% 464492 ± 0% proc-vmstat.numa_local
10405 ± 13% -99.8% 17.50 ± 6% proc-vmstat.numa_other
23329 ± 0% -96.3% 852.00 ± 0% proc-vmstat.numa_pte_updates
4709 ± 0% -67.7% 1521 ± 2% proc-vmstat.pgactivate
17082 ± 2% +458.2% 95354 ± 0% proc-vmstat.pgalloc_dma32
820429 ± 0% -51.5% 397809 ± 0% proc-vmstat.pgalloc_normal
875926 ± 0% -38.2% 541689 ± 0% proc-vmstat.pgfault
833766 ± 0% -41.3% 489231 ± 0% proc-vmstat.pgfree
34304 ± 11% +50.7% 51706 ± 1% numa-meminfo.node0.Active
8891 ± 39% +207.5% 27338 ± 1% numa-meminfo.node0.Active(anon)
6646 ± 42% +297.2% 26396 ± 1% numa-meminfo.node0.AnonPages
94246 ± 3% +9.6% 103265 ± 0% numa-meminfo.node0.Inactive
2510 ±135% +298.8% 10012 ± 0% numa-meminfo.node0.Inactive(anon)
7218 ± 1% +22.8% 8865 ± 0% numa-meminfo.node0.KernelStack
5065 ± 51% +90.7% 9662 ± 0% numa-meminfo.node0.Mapped
860.00 ± 21% +377.8% 4109 ± 0% numa-meminfo.node0.PageTables
13345 ± 6% +40.1% 18695 ± 3% numa-meminfo.node0.SReclaimable
29466 ± 1% -14.1% 25300 ± 1% numa-meminfo.node0.SUnreclaim
4924 ± 88% +122.6% 10964 ± 0% numa-meminfo.node0.Shmem
28020 ± 6% -10.4% 25101 ± 2% numa-meminfo.node1.Active
3829 ± 14% -100.0% 0.00 ± -1% numa-meminfo.node1.Active(anon)
3912 ± 13% -100.0% 0.00 ± -1% numa-meminfo.node1.AnonPages
2477 ± 5% -48.3% 1280 ± 0% numa-meminfo.node1.KernelStack
250026 ± 21% -33.2% 167095 ± 0% numa-meminfo.node1.MemUsed
12473 ± 9% -43.6% 7040 ± 8% numa-meminfo.node1.SReclaimable
24000 ± 5% -65.9% 8193 ± 3% numa-meminfo.node1.SUnreclaim
36474 ± 6% -58.2% 15233 ± 4% numa-meminfo.node1.Slab
34538 ± 9% -26.4% 25414 ± 7% numa-meminfo.node2.Active
8814 ± 30% -100.0% 0.00 ± -1% numa-meminfo.node2.Active(anon)
8808 ± 30% -100.0% 0.00 ± -1% numa-meminfo.node2.AnonPages
4691 ± 86% -100.0% 0.00 ± -1% numa-meminfo.node2.Inactive(anon)
2540 ± 9% -49.6% 1280 ± 0% numa-meminfo.node2.KernelStack
258906 ± 22% -35.5% 166905 ± 0% numa-meminfo.node2.MemUsed
1184 ± 12% -100.0% 0.00 ± -1% numa-meminfo.node2.PageTables
13055 ± 9% -46.2% 7028 ± 3% numa-meminfo.node2.SReclaimable
24555 ± 8% -66.9% 8135 ± 3% numa-meminfo.node2.SUnreclaim
4817 ± 83% -100.0% 0.00 ± -1% numa-meminfo.node2.Shmem
37611 ± 7% -59.7% 15163 ± 2% numa-meminfo.node2.Slab
41359 ± 5% -39.1% 25174 ± 4% numa-meminfo.node3.Active
15871 ± 12% -100.0% 0.00 ± -1% numa-meminfo.node3.Active(anon)
2088 ±112% -100.0% 0.00 ± -1% numa-meminfo.node3.AnonHugePages
8900 ± 31% -100.0% 0.00 ± -1% numa-meminfo.node3.AnonPages
2760 ±123% -100.0% 0.00 ± -1% numa-meminfo.node3.Inactive(anon)
2349 ± 8% -45.5% 1280 ± 0% numa-meminfo.node3.KernelStack
298598 ± 20% -44.0% 167245 ± 0% numa-meminfo.node3.MemUsed
1071 ± 9% -100.0% 0.00 ± -1% numa-meminfo.node3.PageTables
14657 ± 11% -53.6% 6808 ± 5% numa-meminfo.node3.SReclaimable
27185 ± 10% -69.1% 8406 ± 3% numa-meminfo.node3.SUnreclaim
9824 ± 6% -100.0% 0.00 ± -1% numa-meminfo.node3.Shmem
41842 ± 8% -63.6% 15214 ± 3% numa-meminfo.node3.Slab
2221 ± 39% +207.1% 6822 ± 1% numa-vmstat.node0.nr_active_anon
1661 ± 42% +296.5% 6586 ± 1% numa-vmstat.node0.nr_anon_pages
627.25 ±135% +298.8% 2501 ± 0% numa-vmstat.node0.nr_inactive_anon
450.50 ± 1% +22.9% 553.50 ± 0% numa-vmstat.node0.nr_kernel_stack
1266 ± 51% +90.7% 2413 ± 0% numa-vmstat.node0.nr_mapped
214.50 ± 21% +377.3% 1023 ± 0% numa-vmstat.node0.nr_page_table_pages
1231 ± 88% +122.6% 2740 ± 0% numa-vmstat.node0.nr_shmem
3336 ± 6% +40.1% 4674 ± 3% numa-vmstat.node0.nr_slab_reclaimable
7365 ± 1% -14.1% 6328 ± 1% numa-vmstat.node0.nr_slab_unreclaimable
112532 ± 9% +225.1% 365843 ± 0% numa-vmstat.node0.numa_hit
77391 ± 16% +372.7% 365843 ± 0% numa-vmstat.node0.numa_local
35141 ± 5% -100.0% 0.00 ± -1% numa-vmstat.node0.numa_other
957.25 ± 14% -100.0% 0.00 ± -1% numa-vmstat.node1.nr_active_anon
977.75 ± 13% -100.0% 0.00 ± -1% numa-vmstat.node1.nr_anon_pages
154.50 ± 6% -48.2% 80.00 ± 0% numa-vmstat.node1.nr_kernel_stack
224.75 ± 15% -100.0% 0.00 ± -1% numa-vmstat.node1.nr_page_table_pages
86.50 ± 22% -100.0% 0.00 ± -1% numa-vmstat.node1.nr_shmem
3118 ± 9% -43.6% 1760 ± 8% numa-vmstat.node1.nr_slab_reclaimable
5999 ± 5% -65.9% 2047 ± 3% numa-vmstat.node1.nr_slab_unreclaimable
186838 ± 2% -79.6% 38117 ± 0% numa-vmstat.node1.numa_hit
167571 ± 6% -100.0% 0.00 ± -1% numa-vmstat.node1.numa_local
2203 ± 30% -100.0% 0.00 ± -1% numa-vmstat.node2.nr_active_anon
2201 ± 30% -100.0% 0.00 ± -1% numa-vmstat.node2.nr_anon_pages
1172 ± 86% -100.0% 0.00 ± -1% numa-vmstat.node2.nr_inactive_anon
158.25 ± 9% -49.4% 80.00 ± 0% numa-vmstat.node2.nr_kernel_stack
295.25 ± 11% -100.0% 0.00 ± -1% numa-vmstat.node2.nr_page_table_pages
1204 ± 83% -100.0% 0.00 ± -1% numa-vmstat.node2.nr_shmem
3263 ± 9% -46.2% 1757 ± 3% numa-vmstat.node2.nr_slab_reclaimable
6138 ± 8% -66.9% 2033 ± 3% numa-vmstat.node2.nr_slab_unreclaimable
201075 ± 11% -81.0% 38129 ± 0% numa-vmstat.node2.numa_hit
166043 ± 10% -100.0% 0.00 ± -1% numa-vmstat.node2.numa_local
3966 ± 12% -100.0% 0.00 ± -1% numa-vmstat.node3.nr_active_anon
310.00 ± 2% +44.0% 446.25 ± 6% numa-vmstat.node3.nr_alloc_batch
2225 ± 31% -100.0% 0.00 ± -1% numa-vmstat.node3.nr_anon_pages
689.50 ±123% -100.0% 0.00 ± -1% numa-vmstat.node3.nr_inactive_anon
146.50 ± 8% -45.4% 80.00 ± 0% numa-vmstat.node3.nr_kernel_stack
267.25 ± 9% -100.0% 0.00 ± -1% numa-vmstat.node3.nr_page_table_pages
2455 ± 6% -100.0% 0.00 ± -1% numa-vmstat.node3.nr_shmem
3663 ± 11% -53.5% 1702 ± 5% numa-vmstat.node3.nr_slab_reclaimable
6796 ± 10% -69.1% 2101 ± 3% numa-vmstat.node3.nr_slab_unreclaimable
300233 ± 8% -87.3% 38111 ± 0% numa-vmstat.node3.numa_hit
260400 ± 10% -100.0% 0.00 ± -1% numa-vmstat.node3.numa_local
92008 ± 0% -11.4% 81550 ± 0% slabinfo.Acpi-Operand.active_objs
1643 ± 0% -11.2% 1459 ± 0% slabinfo.Acpi-Operand.active_slabs
92008 ± 0% -11.2% 81718 ± 0% slabinfo.Acpi-Operand.num_objs
1643 ± 0% -11.2% 1459 ± 0% slabinfo.Acpi-Operand.num_slabs
2466 ± 0% -96.8% 78.00 ± 0% slabinfo.Acpi-ParseExt.active_objs
2466 ± 0% -96.8% 78.00 ± 0% slabinfo.Acpi-ParseExt.num_objs
61161 ± 0% -17.6% 50373 ± 0% slabinfo.Acpi-State.active_objs
1199 ± 0% -17.7% 987.75 ± 0% slabinfo.Acpi-State.active_slabs
61191 ± 0% -17.6% 50392 ± 0% slabinfo.Acpi-State.num_objs
1199 ± 0% -17.7% 987.75 ± 0% slabinfo.Acpi-State.num_slabs
2391 ± 2% -83.4% 396.00 ± 0% slabinfo.RAW.active_objs
2391 ± 2% -83.4% 396.00 ± 0% slabinfo.RAW.num_objs
127.50 ± 11% -86.7% 17.00 ± 0% slabinfo.TCP.active_objs
127.50 ± 11% -86.7% 17.00 ± 0% slabinfo.TCP.num_objs
221.00 ± 7% -84.6% 34.00 ± 0% slabinfo.UDP.active_objs
221.00 ± 7% -84.6% 34.00 ± 0% slabinfo.UDP.num_objs
10428 ± 3% -83.9% 1675 ± 1% slabinfo.anon_vma.active_objs
203.75 ± 3% -83.1% 34.50 ± 1% slabinfo.anon_vma.active_slabs
10428 ± 3% -82.9% 1787 ± 0% slabinfo.anon_vma.num_objs
203.75 ± 3% -83.1% 34.50 ± 1% slabinfo.anon_vma.num_slabs
1149 ± 12% -93.7% 73.00 ± 0% slabinfo.avc_xperms_node.active_objs
1149 ± 12% -93.7% 73.00 ± 0% slabinfo.avc_xperms_node.num_objs
721.50 ± 15% -94.6% 39.00 ± 0% slabinfo.bdev_cache.active_objs
721.50 ± 15% -94.6% 39.00 ± 0% slabinfo.bdev_cache.num_objs
154.00 ± 17% -81.8% 28.00 ± 0% slabinfo.blkdev_queue.active_objs
154.00 ± 17% -81.8% 28.00 ± 0% slabinfo.blkdev_queue.num_objs
1737 ± 8% -92.4% 132.00 ± 0% slabinfo.blkdev_requests.active_objs
1737 ± 8% -92.4% 132.00 ± 0% slabinfo.blkdev_requests.num_objs
282.75 ± 20% -86.2% 39.00 ± 0% slabinfo.buffer_head.active_objs
282.75 ± 20% -86.2% 39.00 ± 0% slabinfo.buffer_head.num_objs
64437 ± 0% -22.4% 50035 ± 0% slabinfo.dentry.active_objs
1535 ± 0% -22.3% 1192 ± 0% slabinfo.dentry.active_slabs
64489 ± 0% -22.4% 50074 ± 0% slabinfo.dentry.num_objs
1535 ± 0% -22.3% 1192 ± 0% slabinfo.dentry.num_slabs
805.00 ± 10% -95.2% 39.00 ± 0% slabinfo.file_lock_cache.active_objs
805.00 ± 10% -95.2% 39.00 ± 0% slabinfo.file_lock_cache.num_objs
3069 ± 1% -90.7% 284.00 ± 0% slabinfo.files_cache.active_objs
3069 ± 1% -90.7% 284.00 ± 0% slabinfo.files_cache.num_objs
3846 ± 1% -9.4% 3485 ± 0% slabinfo.ftrace_event_field.active_objs
3846 ± 1% -9.4% 3485 ± 0% slabinfo.ftrace_event_field.num_objs
2145 ± 0% -44.8% 1185 ± 0% slabinfo.idr_layer_cache.active_objs
2145 ± 0% -44.8% 1185 ± 0% slabinfo.idr_layer_cache.num_objs
46494 ± 0% -15.9% 39094 ± 0% slabinfo.inode_cache.active_objs
815.50 ± 0% -16.0% 685.00 ± 0% slabinfo.inode_cache.active_slabs
46494 ± 0% -15.9% 39094 ± 0% slabinfo.inode_cache.num_objs
815.50 ± 0% -16.0% 685.00 ± 0% slabinfo.inode_cache.num_slabs
68289 ± 0% -30.3% 47600 ± 0% slabinfo.kernfs_node_cache.active_objs
1004 ± 0% -30.3% 700.00 ± 0% slabinfo.kernfs_node_cache.active_slabs
68289 ± 0% -30.3% 47600 ± 0% slabinfo.kernfs_node_cache.num_objs
1004 ± 0% -30.3% 700.00 ± 0% slabinfo.kernfs_node_cache.num_slabs
3581 ± 0% -67.0% 1181 ± 0% slabinfo.kmalloc-1024.active_objs
3678 ± 0% -67.0% 1215 ± 0% slabinfo.kmalloc-1024.num_objs
7279 ± 3% -70.9% 2115 ± 1% slabinfo.kmalloc-128.active_objs
7279 ± 3% -69.7% 2208 ± 1% slabinfo.kmalloc-128.num_objs
24301 ± 0% -46.6% 12981 ± 0% slabinfo.kmalloc-16.active_objs
24503 ± 0% -44.5% 13588 ± 0% slabinfo.kmalloc-16.num_objs
11912 ± 2% -62.3% 4493 ± 0% slabinfo.kmalloc-192.active_objs
284.25 ± 2% -57.3% 121.50 ± 1% slabinfo.kmalloc-192.active_slabs
11950 ± 2% -57.0% 5142 ± 1% slabinfo.kmalloc-192.num_objs
284.25 ± 2% -57.3% 121.50 ± 1% slabinfo.kmalloc-192.num_slabs
3519 ± 2% -76.1% 841.50 ± 1% slabinfo.kmalloc-2048.active_objs
226.50 ± 3% -73.5% 60.00 ± 1% slabinfo.kmalloc-2048.active_slabs
3634 ± 3% -73.4% 967.75 ± 1% slabinfo.kmalloc-2048.num_objs
226.50 ± 3% -73.5% 60.00 ± 1% slabinfo.kmalloc-2048.num_slabs
18521 ± 5% -70.2% 5514 ± 0% slabinfo.kmalloc-256.active_objs
303.00 ± 5% -68.2% 96.50 ± 1% slabinfo.kmalloc-256.active_slabs
19420 ± 5% -68.0% 6211 ± 1% slabinfo.kmalloc-256.num_objs
303.00 ± 5% -68.2% 96.50 ± 1% slabinfo.kmalloc-256.num_slabs
26391 ± 4% -43.2% 15000 ± 0% slabinfo.kmalloc-32.active_objs
26512 ± 3% -39.8% 15963 ± 0% slabinfo.kmalloc-32.num_objs
1308 ± 0% -40.2% 782.75 ± 0% slabinfo.kmalloc-4096.active_objs
1390 ± 0% -40.0% 834.00 ± 0% slabinfo.kmalloc-4096.num_objs
12543 ± 4% -64.3% 4473 ± 0% slabinfo.kmalloc-512.active_objs
200.00 ± 5% -62.5% 75.00 ± 2% slabinfo.kmalloc-512.active_slabs
12855 ± 5% -62.2% 4863 ± 2% slabinfo.kmalloc-512.num_objs
200.00 ± 5% -62.5% 75.00 ± 2% slabinfo.kmalloc-512.num_slabs
43396 ± 1% -44.7% 23979 ± 0% slabinfo.kmalloc-64.active_objs
678.25 ± 1% -42.6% 389.00 ± 0% slabinfo.kmalloc-64.active_slabs
43428 ± 1% -42.6% 24935 ± 0% slabinfo.kmalloc-64.num_objs
678.25 ± 1% -42.6% 389.00 ± 0% slabinfo.kmalloc-64.num_slabs
37376 ± 0% -89.0% 4096 ± 0% slabinfo.kmalloc-8.active_objs
37376 ± 0% -89.0% 4096 ± 0% slabinfo.kmalloc-8.num_objs
469.00 ± 1% -91.6% 39.25 ± 1% slabinfo.kmalloc-8192.active_objs
117.25 ± 1% -91.0% 10.50 ± 4% slabinfo.kmalloc-8192.active_slabs
469.00 ± 1% -91.0% 42.00 ± 4% slabinfo.kmalloc-8192.num_objs
117.25 ± 1% -91.0% 10.50 ± 4% slabinfo.kmalloc-8192.num_slabs
5323 ± 2% -66.1% 1802 ± 0% slabinfo.kmalloc-96.active_objs
5596 ± 2% -62.5% 2100 ± 2% slabinfo.kmalloc-96.num_objs
561.00 ± 6% -72.7% 153.00 ± 0% slabinfo.kmem_cache.active_objs
561.00 ± 6% -72.7% 153.00 ± 0% slabinfo.kmem_cache.num_objs
961.00 ± 4% -53.3% 449.00 ± 0% slabinfo.kmem_cache_node.active_objs
1024 ± 4% -50.0% 512.00 ± 0% slabinfo.kmem_cache_node.num_objs
1189 ± 1% -89.7% 122.00 ± 0% slabinfo.mm_struct.active_objs
1189 ± 1% -89.4% 125.75 ± 0% slabinfo.mm_struct.num_objs
1889 ± 7% -93.9% 115.50 ± 15% slabinfo.mnt_cache.active_objs
1889 ± 7% -93.9% 115.50 ± 15% slabinfo.mnt_cache.num_objs
160.00 ± 0% -80.0% 32.00 ± 0% slabinfo.nfs_inode_cache.active_objs
160.00 ± 0% -80.0% 32.00 ± 0% slabinfo.nfs_inode_cache.num_objs
9849 ± 1% -45.8% 5342 ± 0% slabinfo.proc_inode_cache.active_objs
9849 ± 1% -45.8% 5342 ± 0% slabinfo.proc_inode_cache.num_objs
10929 ± 0% -28.8% 7784 ± 0% slabinfo.radix_tree_node.active_objs
10929 ± 0% -28.8% 7784 ± 0% slabinfo.radix_tree_node.num_objs
9192 ± 3% -39.4% 5570 ± 0% slabinfo.shmem_inode_cache.active_objs
9202 ± 3% -39.5% 5570 ± 0% slabinfo.shmem_inode_cache.num_objs
1765 ± 0% -55.3% 789.50 ± 0% slabinfo.sighand_cache.active_objs
1765 ± 0% -54.8% 798.75 ± 0% slabinfo.sighand_cache.num_objs
2801 ± 1% -66.0% 951.25 ± 0% slabinfo.signal_cache.active_objs
2801 ± 1% -66.0% 951.25 ± 0% slabinfo.signal_cache.num_objs
3267 ± 0% -98.3% 55.00 ± 0% slabinfo.sigqueue.active_objs
3267 ± 0% -98.3% 55.00 ± 0% slabinfo.sigqueue.num_objs
3299 ± 6% -84.5% 510.00 ± 0% slabinfo.sock_inode_cache.active_objs
3299 ± 6% -84.5% 510.00 ± 0% slabinfo.sock_inode_cache.num_objs
1199 ± 0% -33.5% 797.75 ± 0% slabinfo.task_struct.active_objs
410.50 ± 1% -35.0% 267.00 ± 0% slabinfo.task_struct.active_slabs
1232 ± 1% -34.9% 802.25 ± 0% slabinfo.task_struct.num_objs
410.50 ± 1% -35.0% 267.00 ± 0% slabinfo.task_struct.num_slabs
1081 ± 2% -96.5% 38.00 ± 0% slabinfo.taskstats.active_objs
1081 ± 2% -96.5% 38.00 ± 0% slabinfo.taskstats.num_objs
2599 ± 1% -32.7% 1748 ± 0% slabinfo.trace_event_file.active_objs
2599 ± 1% -32.7% 1748 ± 0% slabinfo.trace_event_file.num_objs
19377 ± 3% -84.0% 3103 ± 0% slabinfo.vm_area_struct.active_objs
441.25 ± 3% -82.6% 76.75 ± 0% slabinfo.vm_area_struct.active_slabs
19433 ± 3% -82.5% 3405 ± 0% slabinfo.vm_area_struct.num_objs
441.25 ± 3% -82.6% 76.75 ± 0% slabinfo.vm_area_struct.num_slabs
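The slabinfo rows above follow the usual LKP comparison layout: base-commit mean ± stddev, percent change, patched mean ± stddev, then the metric name. The percent change is just the relative difference of the two means; a minimal sketch (helper name is hypothetical, sample values are taken from the kmalloc-1024 and kmalloc-8 rows above):

```python
def pct_change(base: float, patched: float) -> float:
    """Percent change from the base-commit mean to the patched mean."""
    return (patched - base) / base * 100.0

# kmalloc-1024.active_objs: 3581 -> 1181, reported above as -67.0%
print(f"{pct_change(3581, 1181):.1f}%")

# kmalloc-8.active_objs: 37376 -> 4096, reported above as -89.0%
print(f"{pct_change(37376, 4096):.1f}%")
```

The reported ± percentages are the per-column standard deviations over the repeated runs, not part of this calculation.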
lkp-ivb-d04: Ivy Bridge
Memory: 4G
lkp-wsx02: Westmere-EX
Memory: 128G
lkp-nex04: Nehalem-EX
Memory: 256G
ivb42: Ivytown Ivy Bridge-EP
Memory: 64G
lkp-a06: Atom
Memory: 8G
lkp-sbx04: Sandy Bridge-EX
Memory: 64G
lkp-sb03: Sandy Bridge-EP
Memory: 64G
lkp-ivb-d02: Ivy Bridge
Memory: 8G
nhm4: Nehalem
Memory: 4G
wsm: Westmere
Memory: 6G
lkp-hsw01: Grantley Haswell-EP
Memory: 64G
lkp-hsw-ep2: Brickland Haswell-EP
Memory: 128G
brickland1: Brickland Ivy Bridge-EX
Memory: 128G
lkp-sb02: Sandy Bridge-EP
Memory: 4G
lkp-ne04: Nehalem-EP
Memory: 12G
lituya: Grantley Haswell
Memory: 16G
snb-drag: Sandy Bridge
Memory: 6G
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Ying Huang
[lkp] [mm/memblock.c] cabc3d3f73: kernel BUG at arch/x86/mm/pageattr.c:1348!
by kernel test robot
FYI, we noticed the following changes on
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit cabc3d3f732505b3ad56009e4a8aba0c7d39a7d7 ("mm/memblock.c: use memblock_insert_region() for the empty array")
+-----------------------------------------------------------+------------+------------+
| | b0fd5507e8 | cabc3d3f73 |
+-----------------------------------------------------------+------------+------------+
| boot_successes | 70 | 0 |
| boot_failures | 26 | 28 |
| Kernel_panic-not_syncing:Attempted_to_kill_init!exitcode= | 26 | |
| kernel_BUG_at_arch/x86/mm/pageattr.c | 0 | 28 |
| invalid_opcode:#[##]DEBUG_PAGEALLOC | 0 | 28 |
| RIP:__change_page_attr_set_clr | 0 | 28 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 28 |
| backtrace:set_real_mode_permissions | 0 | 28 |
| backtrace:kernel_init_freeable | 0 | 28 |
+-----------------------------------------------------------+------------+------------+
[ 0.401891] Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
[ 0.402749] CPU: GenuineIntel Intel Xeon E312xx (Sandy Bridge) (family: 0x6, model: 0x2a, stepping: 0x1)
[ 0.407809] ------------[ cut here ]------------
[ 0.408497] kernel BUG at arch/x86/mm/pageattr.c:1348!
[ 0.409498] invalid opcode: 0000 [#1] DEBUG_PAGEALLOC
[ 0.410280] Modules linked in:
[ 0.410746] CPU: 0 PID: 1 Comm: swapper Not tainted 4.4.0-rc4-00140-gcabc3d3 #1
[ 0.411798] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 0.413063] task: ffff880000128000 ti: ffff88000011c000 task.ti: ffff88000011c000
[ 0.414136] RIP: 0010:[<ffffffff8102687c>] [<ffffffff8102687c>] __change_page_attr_set_clr+0x905/0xaf7
[ 0.415507] RSP: 0000:ffff88000011fce0 EFLAGS: 00010286
[ 0.416273] RAX: 0000000000000001 RBX: ffff88000011fde0 RCX: ffff880000000000
[ 0.417325] RDX: 0000000000000099 RSI: 000000000000009a RDI: ffffffff81000000
[ 0.418303] RBP: ffff88000011fdb8 R08: 0000000000000099 R09: ffff880000000000
[ 0.419280] R10: 00003fffffe00000 R11: 0000000000000067 R12: 0000000002600000
[ 0.420262] R13: ffff8800025ed4c8 R14: 000000000179a000 R15: 8000000000000161
[ 0.421239] FS: 0000000000000000(0000) GS:ffffffff817bc000(0000) knlGS:0000000000000000
[ 0.422348] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 0.423134] CR2: 00000000ffffffff CR3: 00000000017ab000 CR4: 00000000000406b0
[ 0.424119] Stack:
[ 0.424413] ffffffff81998b00 0000000000000000 ffff88000011fd98 0000000000000002
[ 0.425500] 00ffffff81998b00 ffff88000011fd20 0000000000000000 ffff88000011fd40
[ 0.426590] ffff88000011fd88 ffffff6700000001 8000000000099163 0000000000000099
[ 0.427709] Call Trace:
[ 0.428059] [<ffffffff81026bd0>] change_page_attr_set_clr+0x162/0x36b
[ 0.428958] [<ffffffff810276a8>] set_memory_ro+0x21/0x23
[ 0.429708] [<ffffffff81a226ef>] set_real_mode_permissions+0x74/0x98
[ 0.430594] [<ffffffff81a2267b>] ? map_vsyscall+0x40/0x40
[ 0.431350] [<ffffffff8100048a>] do_one_initcall+0x191/0x1a6
[ 0.432141] [<ffffffff81065751>] ? finish_task_switch+0x152/0x1b3
[ 0.432993] [<ffffffff814c2043>] ? rest_init+0xba/0xba
[ 0.433717] [<ffffffff81a20fa3>] kernel_init_freeable+0x5b/0x182
[ 0.434559] [<ffffffff814c2043>] ? rest_init+0xba/0xba
[ 0.435284] [<ffffffff814c204c>] kernel_init+0x9/0xca
[ 0.435993] [<ffffffff814ccb0f>] ret_from_fork+0x3f/0x70
[ 0.436742] [<ffffffff814c2043>] ? rest_init+0xba/0xba
[ 0.437491] Code: b8 f4 ff ff ff e9 01 02 00 00 85 c0 0f 85 f9 01 00 00 83 bd 70 ff ff ff 00 75 12 48 63 43 20 39 85 74 ff ff ff 0f 8d 95 01 00 00 <0f> 0b 48 8b 7b 28 48 b9 00 00 00 00 00 88 ff ff 48 89 f8 48 8d
[ 0.441265] RIP [<ffffffff8102687c>] __change_page_attr_set_clr+0x905/0xaf7
[ 0.442251] RSP <ffff88000011fce0>
[ 0.442772] ---[ end trace 3d6543145e4caae4 ]---
[ 0.443446] Kernel panic - not syncing: Fatal exception
Thanks,
Ying Huang
[lkp] [x86, tracing, perf] 7f47d8cc03: INFO: suspicious RCU usage. ]
by kernel test robot
FYI, we noticed the following changes on
https://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git perf/core
commit 7f47d8cc039f8746e0038fe05f1ddcb15a2e27f0 ("x86, tracing, perf: Add trace point for MSR accesses")
+--------------------------------------------------+------------+------------+
| | bd2a634d9e | 7f47d8cc03 |
+--------------------------------------------------+------------+------------+
| boot_successes | 10 | 2 |
| boot_failures | 0 | 8 |
| BUG:kernel_early-boot_hang | 0 | 2 |
| INFO:suspicious_RCU_usage | 0 | 6 |
| RCU_used_illegally_from_idle_CPU | 0 | 6 |
| RCU_used_illegally_from_extended_quiescent_state | 0 | 6 |
| invoked_oom-killer:gfp_mask=0x | 0 | 5 |
| Mem-Info | 0 | 5 |
| Out_of_memory:Kill_process | 0 | 5 |
| backtrace:cpu_startup_entry | 0 | 4 |
| backtrace:vfs_writev | 0 | 3 |
| backtrace:SyS_writev | 0 | 3 |
| backtrace:_do_fork | 0 | 1 |
| backtrace:SyS_clone | 0 | 1 |
| backtrace:vfs_write | 0 | 1 |
| backtrace:SyS_write | 0 | 1 |
+--------------------------------------------------+------------+------------+
[main] Added 32 filenames from /dev
[ 45.280682]
[ 45.281065] ===============================
[ 45.281884] [ INFO: suspicious RCU usage. ]
[ 45.282527] 4.4.0-rc2-00097-g7f47d8c #23 Not tainted
[ 45.283253] -------------------------------
[ 45.284035] arch/x86/include/asm/msr-trace.h:47 suspicious rcu_dereference_check() usage!
[ 45.285533]
[ 45.285533] other info that might help us debug this:
[ 45.285533]
[ 45.286929]
[ 45.286929] RCU used illegally from idle CPU!
[ 45.286929] rcu_scheduler_active = 1, debug_locks = 0
[ 45.289836] RCU used illegally from extended quiescent state!
[ 45.291309] no locks held by swapper/0/0.
[ 45.292430]
[ 45.292430] stack backtrace:
[ 45.293441] CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.4.0-rc2-00097-g7f47d8c #23
[ 45.294571] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 45.295898] 0000000000000000 ffff88009f403ed8 ffffffff8173cf5f ffffffff84019500
[ 45.298424] ffff88009f403f08 ffffffff81113775 0000000000000000 0000000000000000
[ 45.300879] ffffffff84004000 ffffffff84000000 ffff88009f403f38 ffffffff81772b19
[ 45.303367] Call Trace:
[ 45.304171] <IRQ> [<ffffffff8173cf5f>] dump_stack+0x4b/0x63
[ 45.305793] [<ffffffff81113775>] lockdep_rcu_suspicious+0xf7/0x100
[ 45.307352] [<ffffffff81772b19>] do_trace_write_msr+0x9b/0xf4
[ 45.308856] [<ffffffff81092624>] native_write_msr_safe+0x2e/0x33
[ 45.310293] [<ffffffff8108b8eb>] paravirt_write_msr+0xf/0x13
[ 45.311116] [<ffffffff8108ba04>] native_apic_msr_write+0x29/0x2b
[ 45.311997] [<ffffffff81091999>] kvm_guest_apic_eoi_write+0x36/0x38
[ 45.312878] [<ffffffff8108685b>] apic_eoi+0x18/0x1a
[ 45.313856] [<ffffffff82e16fc1>] smp_apic_timer_interrupt+0x1f/0x3e
[ 45.314766] [<ffffffff82e15207>] apic_timer_interrupt+0x87/0x90
[ 45.315605] <EOI> [<ffffffff81092338>] ? native_safe_halt+0x6/0x8
[ 45.316571] [<ffffffff8105f2f1>] default_idle+0x24/0x37
[ 45.317750] [<ffffffff8105f93e>] arch_cpu_idle+0xf/0x11
[ 45.318794] [<ffffffff8110cbe2>] default_idle_call+0x28/0x2f
[ 45.319610] [<ffffffff8110cdbd>] cpu_startup_entry+0x17a/0x29a
[ 45.321018] [<ffffffff82e08603>] rest_init+0x13a/0x140
[ 45.322382] [<ffffffff84607f65>] start_kernel+0x45e/0x46b
[ 45.323443] [<ffffffff84607120>] ? early_idt_handler_array+0x120/0x120
[ 45.324386] [<ffffffff84607339>] x86_64_start_reservations+0x2a/0x2c
[ 45.325363] [<ffffffff84607468>] x86_64_start_kernel+0x12d/0x13a
[main] Added 107172 filenames from /proc
[main] Added 25531 filenames from /sys
[main] Enabled 9 fd providers.
To reproduce:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/wfg/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Ying Huang
[lkp] [x86 tsc] 1dd8e21222:
by kernel test robot
FYI, we noticed the following changes on
https://git.kernel.org/pub/scm/linux/kernel/git/lenb/linux.git cpufreq
commit 1dd8e212220cdec445793afb130d1634146e70aa ("x86 tsc: Use Sklake CPUID to distinguish cpu_khz and tsc_khz")
We found the following new messages in the kernel log after your commit.
[ 0.000000] tsc: Fast TSC calibration failed
[ 0.000000] tsc: Fast TSC calibration failed
[ 0.000000] tsc: Unable to calibrate against PIT
[ 0.000000] tsc: Unable to calibrate against PIT
Thanks,
Ying Huang
[lkp] [rhashtable] 95e435afef: INFO: task swapper:1 blocked for more than 120 seconds.
by kernel test robot
FYI, we noticed the following changes on
https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git master
commit 95e435afefe98b8ef6ae8b764879a064cd931a5c ("rhashtable-test: calculate max_entries value by default")
+--------------------------------------------------+------------+------------+
| | 9e9089e5a2 | 95e435afef |
+--------------------------------------------------+------------+------------+
| boot_successes | 18 | 0 |
| boot_failures | 0 | 18 |
| INFO:task_blocked_for_more_than#seconds | 0 | 18 |
| EIP_is_at_default_send_IPI_mask_logical | 0 | 18 |
| Kernel_panic-not_syncing:hung_task:blocked_tasks | 0 | 18 |
| backtrace:kthread_stop | 0 | 18 |
| backtrace:test_rht_init | 0 | 18 |
| backtrace:kernel_init_freeable | 0 | 18 |
| backtrace:watchdog | 0 | 18 |
+--------------------------------------------------+------------+------------+
[ 129.350121] Duration of test: 10581199064 ns
[ 129.350623] Average test time: 11562182047
[ 129.410494] Testing concurrent rhashtable access from 10 threads
[ 277.354558] INFO: task swapper:1 blocked for more than 120 seconds.
[ 277.736905] Not tainted 4.4.0-rc1-00171-g95e435a #1
[ 277.737368] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 278.119478] swapper D 40032000 0 1 0 0x00000000
[ 278.291467] 40031e68 00000046 4001d040 40032000 7fffffff 4001d040 40031e74 418f3013
[ 278.292248] 7fffffff 40031ec0 418f9532 00000046 4001d39c 4001d040 40031eb0 4001d3c8
[ 278.987038] 4001d3c8 418f3da7 00000000 00000001 484d5f7c 00000001 484d5f6c 7fffffff
[ 279.522167] Call Trace:
[ 279.522379] [<418f3013>] schedule+0xf5/0x132
[ 279.522749] [<418f9532>] schedule_timeout+0x25/0x218
[ 279.523153] [<418f3da7>] ? wait_for_common+0x18b/0x21b
[ 279.594083] [<418f3dbc>] wait_for_common+0x1a0/0x21b
[ 279.594523] [<410b3070>] ? try_to_wake_up+0x3d5/0x3d5
[ 279.595032] [<418f3f6a>] wait_for_completion+0x20/0x30
[ 279.655709] [<410aaecc>] kthread_stop+0x92/0x100
[ 279.656093] [<41fbd306>] test_rht_init+0x56f/0x638
[ 279.699115] [<41fbcd97>] ? test_rhashtable+0xaec/0xaec
[ 279.699645] [<410022a1>] do_one_initcall+0x1f3/0x35c
[ 279.700053] [<41f77bac>] kernel_init_freeable+0x225/0x393
[ 279.785443] [<418ef1bb>] kernel_init+0x16/0x1e7
[ 279.785884] [<418fc610>] ret_from_kernel_thread+0x20/0x40
[ 279.786320] [<418ef1a5>] ? rest_init+0x156/0x156
[ 283.781461] no locks held by swapper/1.
[ 283.781876] Sending NMI to all CPUs:
[ 283.782235] NMI backtrace for cpu 0
[ 283.782524] CPU: 0 PID: 17 Comm: khungtaskd Not tainted 4.4.0-rc1-00171-g95e435a #1
[ 283.783135] task: 401a6040 ti: 401a8000 task.ti: 401a8000
[ 283.882237] EIP: 0060:[<4105c55f>] EFLAGS: 00000046 CPU: 0
[ 283.882698] EIP is at default_send_IPI_mask_logical+0x189/0x1db
[ 283.883166] EAX: fffff000 EBX: 01000000 ECX: 00000c00 EDX: fffff000
[ 283.936032] ESI: 00000002 EDI: 00000c00 EBP: 401a9ed4 ESP: 401a9ec4
[ 283.936978] DS: 007b ES: 007b FS: 0000 GS: 00e0 SS: 0068
[ 283.980899] CR0: 8005003b CR2: 00000000 CR3: 02019000 CR4: 000006b0
[ 283.981542] Stack:
[ 283.981748] 00000246 4105d441 4001d164 00000002 401a9edc 4105d460 401a9ef4 414d6218
[ 284.037667] 41c23509 41c1de38 4001d040 4001d164 401a9efc 4105d4d1 401a9f20 411343bb
[ 284.066938] 000003ff 00008000 4001d040 00000078 401a0de0 00000000 41133e45 401a9fac
[ 284.102979] Call Trace:
[ 284.103187] [<4105d441>] ? setup_vector_irq+0x241/0x241
[ 284.103674] [<4105d460>] nmi_raise_cpu_backtrace+0x1f/0x2f
[ 284.145209] [<414d6218>] nmi_trigger_all_cpu_backtrace+0x129/0x348
[ 284.159395] [<4105d4d1>] arch_trigger_all_cpu_backtrace+0x1e/0x2e
[ 284.201362] [<411343bb>] watchdog+0x576/0x605
[ 284.201796] [<41133e45>] ? reset_hung_task_detector+0x1d/0x1d
[ 284.202292] [<410aa675>] kthread+0x14b/0x15e
[ 284.250410] [<418fc610>] ret_from_kernel_thread+0x20/0x40
[ 284.250923] [<410aa52a>] ? kthread_create_on_node+0x24c/0x24c
[ 284.251392] Code: 01 83 15 f4 ba 04 42 00 eb 14 81 cf 00 04 00 00 83 05 f8 ba 04 42 01 83 15 fc ba 04 42 00 a1 b8 1f d0 41 89 f9 89 88 00 d3 ff ff <83> 3d f0 f4 cf 41 00 75 1e 83 05 00 bb 04 42 01 83 15 04 bb 04
[ 290.400355] Kernel panic - not syncing: hung_task: blocked tasks
[ 290.408226] CPU: 0 PID: 17 Comm: khungtaskd Not tainted 4.4.0-rc1-00171-g95e435a #1
[ 290.408831] 00000000 4001d040 401a9ee0 414cf5e9 401a9ef8 411625d4 4001d164 4001d040
[ 290.470110] 4001d164 00000002 401a9f20 411343d3 41bfc28e 000003ff 00008000 4001d040
[ 290.470892] 00000078 401a0de0 00000000 41133e45 401a9fac 410aa675 00000000 00000046
[ 290.514010] Call Trace:
[ 290.514215] [<414cf5e9>] dump_stack+0x40/0x5e
[ 290.564689] [<411625d4>] panic+0x152/0x3b0
[ 290.565034] [<411343d3>] watchdog+0x58e/0x605
[ 290.565383] [<41133e45>] ? reset_hung_task_detector+0x1d/0x1d
[ 290.598256] [<410aa675>] kthread+0x14b/0x15e
[ 290.598617] [<418fc610>] ret_from_kernel_thread+0x20/0x40
[ 290.644298] [<410aa52a>] ? kthread_create_on_node+0x24c/0x24c
[ 290.644771] Kernel Offset: disabled
Elapsed time: 400
Thanks,
Ying Huang