[lkp-robot] 398ceb91b8 BUG: kernel hang in boot stage
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
git://anongit.freedesktop.org/drm-intel topic/core-for-CI
commit 398ceb91b81505e7138eecdac870a59e85661671
Author: Daniel Vetter <daniel.vetter@ffwll.ch>
AuthorDate: Tue Mar 20 17:02:58 2018 +0100
Commit: Daniel Vetter <daniel.vetter@ffwll.ch>
CommitDate: Mon Jun 25 18:07:12 2018 +0200
RFC: debugobjects: capture stack traces at _init() time
Sometimes it's really easy to know which object has gone boom and
where the offending code is, and sometimes it's really hard. One case
we're trying to hunt down is when module unload catches a live debug
object, in a module that has lots of them.
Capture a full stack trace from debug_object_init() and dump it when
there's a problem.
FIXME: Should we have a separate Kconfig knob for the backtraces,
since they're quite expensive? At the moment I'm just selecting it for
the general debug object stuff.
v2: Drop the locks for gathering & storing the backtrace. This is
required because depot_save_stack can call free_pages (to drop its
preallocation), which can call debug_check_no_obj_freed, which will
recurse into the db->lock spinlocks.
Cc: Philippe Ombredanne <pombredanne@nexb.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Kate Stewart <kstewart@linuxfoundation.org>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Waiman Long <longman@redhat.com>
Acked-by-for-CI-testing: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180320160258.11381-1-dani...
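For readers skimming the bisect, a minimal sketch of the idea the commit above describes: record a backtrace per tracked object at debug_object_init() time and dump it when the object later trips a debugobjects warning. This is illustrative only, not the patch itself; it assumes the current stackdepot/stacktrace API rather than the depot_save_stack() interface the commit names, and the helper names are made up.

	/*
	 * Sketch only -- not the RFC patch under test.
	 */
	#include <linux/gfp.h>
	#include <linux/stackdepot.h>
	#include <linux/stacktrace.h>

	static depot_stack_handle_t debug_object_record_stack(void)
	{
		unsigned long entries[16];
		unsigned int nr;

		nr = stack_trace_save(entries, ARRAY_SIZE(entries), 1);
		/*
		 * stack_depot_save() can allocate (and later free) pages, which
		 * can recurse into debug_check_no_obj_freed() -- hence the v2
		 * note about not holding db->lock around this call.
		 */
		return stack_depot_save(entries, nr, GFP_NOWAIT);
	}

	static void debug_object_print_stack(depot_stack_handle_t handle)
	{
		unsigned long *entries;
		unsigned int nr;

		if (!handle)
			return;
		nr = stack_depot_fetch(handle, &entries);
		stack_trace_print(entries, nr, 0);
	}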
ebaef7b2f7 kernel/panic: Repeat the line and caller information at the end of the OOPS
398ceb91b8 RFC: debugobjects: capture stack traces at _init() time
b1ee8b0d94 Documentation: e100: Fix docs build error
+-------------------------------------------------------+------------+------------+------------+
| | ebaef7b2f7 | 398ceb91b8 | b1ee8b0d94 |
+-------------------------------------------------------+------------+------------+------------+
| boot_successes | 23 | 0 | 0 |
| boot_failures | 20 | 15 | 13 |
| WARNING:possible_circular_locking_dependency_detected | 20 | | |
| BUG:kernel_hang_in_boot_stage | 0 | 15 | 13 |
+-------------------------------------------------------+------------+------------+------------+
kernel_total_size: 0x0000000003aaf000
trampoline_32bit: 0x000000000009d000
Decompressing Linux... Parsing ELF... done.
Booting the kernel.
BUG: kernel hang in boot stage
Linux version 4.18.0-rc2-00007-g398ceb9 #1
Command line: root=/dev/ram0 hung_task_panic=1 debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 net.ifnames=0 printk.devkmsg=on panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 drbd.minor_count=8 systemd.log_level=err ignore_loglevel console=tty0 earlyprintk=ttyS0,115200 console=ttyS0,115200 vga=normal rw link=/kbuild-tests/run-queue/yocto-lkp-nhm-dp2/x86_64-randconfig-s3-06300434/linux-devel:devel-catchup-201806300333:398ceb91b81505e7138eecdac870a59e85661671:bisect-linux-32/.vmlinuz-398ceb91b81505e7138eecdac870a59e85661671-20180630111015-8:yocto-lkp-nhm-dp2-10 branch=linux-devel/devel-catchup-201806300333 BOOT_IMAGE=/pkg/linux/x86_64-randconfig-s3-06300434/gcc-7/398ceb91b81505e7138eecdac870a59e85661671/vmlinuz-4.18.0-rc2-00007-g398ceb9 drbd.minor_count=8 rcuperf.shutdown=0
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 691ef198391e10a174dedbea19727b45d453ad47 7daf201d7fe8334e2d2364d4e8ed3394ec9af819 --
git bisect good 3b03431ffcdfa7ae2bbb7c3f1c71766f66849d83 # 06:06 G 10 0 5 5 Merge 'lpieralisi-pci/pci/controller-fixes' into devel-catchup-201806300333
git bisect bad 6d16a4c44349091e57d96550facbb6a7d597bd5c # 06:26 B 0 1 14 0 Merge 'linux-review/Philipp-Zabel/media-coda-mark-CODA960-firmware-version-2-1-9-as-supported/20180629-220504' into devel-catchup-201806300333
git bisect bad b65c314ab490a111080b5cd07e9aa024a439a9b2 # 06:48 B 0 2 15 0 Merge 'ath6kl/master-pending' into devel-catchup-201806300333
git bisect bad 0c633bf0e5915066d9e83ad4a68ce428b4e9621a # 07:20 B 0 1 14 0 Merge 'cifs/for-next' into devel-catchup-201806300333
git bisect good 5b6f5648fe62afa5b3fcab8ae9bdd8ae0d6ae742 # 07:40 G 10 0 2 2 Merge 'jpirko-mlxsw/petrm_erspan_lag_fixes' into devel-catchup-201806300333
git bisect bad a0275c5d4ac6da625e02a5722994c321a681977c # 08:07 B 0 5 18 0 Merge 'drm-tip/drm-tip' into devel-catchup-201806300333
git bisect good 78796877c37cb2c3898c4bcd2a12238d83858287 # 08:24 G 11 0 5 5 drm/i915: Move the irq_counter inside the spinlock
git bisect good 57e23de02f4878061818fd118129a6b0e1516b11 # 08:43 G 10 0 5 7 drm/sun4i: DW HDMI: Expand algorithm for possible crtcs
git bisect good 6e0ef9d85b99baeeea4b9c4a9777809cb0c6040a # 09:11 G 10 0 5 5 drm/amd/display: Write TEST_EDID_CHECKSUM_WRITE for EDID tests
git bisect good 81d984fa3495f93aa4e9726598a8b4767eaca86f # 09:35 G 10 0 5 5 Merge remote-tracking branch 'drm/drm-next' into drm-tip
git bisect good 186633be741825ed88fe3d92ef6f334364a26ee3 # 09:51 G 11 0 7 7 Merge remote-tracking branch 'drm-intel/drm-intel-next-queued' into drm-tip
git bisect bad b1ee8b0d945633e4165f9e160af4cda8be6497f5 # 10:10 B 0 2 15 0 Documentation: e100: Fix docs build error
git bisect good f25ae7126147dcbc5c2e80125d6ee941d0485e98 # 10:35 G 10 0 4 4 lockdep: finer-grained completion key for kthread
git bisect bad 4b06b972bbb472ad77ba86397e8548e87318f8d5 # 10:53 B 0 1 14 0 mei: discard messages from not connected client during power down.
git bisect bad 398ceb91b81505e7138eecdac870a59e85661671 # 11:19 B 0 2 15 0 RFC: debugobjects: capture stack traces at _init() time
git bisect good ebaef7b2f7d0648b79f9475b5679bff4114fc1fa # 11:44 G 11 0 7 7 kernel/panic: Repeat the line and caller information at the end of the OOPS
# first bad commit: [398ceb91b81505e7138eecdac870a59e85661671] RFC: debugobjects: capture stack traces at _init() time
git bisect good ebaef7b2f7d0648b79f9475b5679bff4114fc1fa # 11:53 G 32 0 13 20 kernel/panic: Repeat the line and caller information at the end of the OOPS
# extra tests with debug options
git bisect bad 398ceb91b81505e7138eecdac870a59e85661671 # 12:09 B 0 8 21 0 RFC: debugobjects: capture stack traces at _init() time
# extra tests on HEAD of linux-devel/devel-catchup-201806300333
git bisect bad 691ef198391e10a174dedbea19727b45d453ad47 # 12:09 B 0 13 29 0 0day head guard for 'devel-catchup-201806300333'
# extra tests on tree/branch drm-intel/topic/core-for-CI
git bisect bad b1ee8b0d945633e4165f9e160af4cda8be6497f5 # 12:16 B 0 13 26 0 Documentation: e100: Fix docs build error
# extra tests with first bad commit reverted
git bisect good b847b36176eff1c29f964bf7048fd7f8bf2f521a # 12:32 G 11 0 4 4 Revert "RFC: debugobjects: capture stack traces at _init() time"
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/lkp Intel Corporation
[lkp-robot] 39b6345460 [ 1.374744] BUG: sleeping function called from invalid context at mm/mmap.c:179
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://github.com/0day-ci/linux/commits/Rik-van-Riel/x86-tlb-mm-make-laz...
commit 39b6345460010b3ffecbfd95c547ce92715bdb9d
Author: Rik van Riel <riel@surriel.com>
AuthorDate: Fri Jun 29 10:29:16 2018 -0400
Commit: 0day robot <lkp@intel.com>
CommitDate: Fri Jun 29 23:28:30 2018 +0800
x86,tlb: only send page table free TLB flush to lazy TLB CPUs
CPUs in !is_lazy have either received TLB flush IPIs earlier on during
the munmap (when the user memory was unmapped), or have context switched
and reloaded during that stage of the munmap.
Page table free TLB flushes only need to be sent to CPUs in lazy TLB
mode, whose TLB contents might not be up to date yet.
Signed-off-by: Rik van Riel <riel@surriel.com>
Tested-by: Song Liu <songliubraving@fb.com>
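For context, a rough sketch of what "only send the page table free TLB flush to lazy TLB CPUs" means in code, assuming the cpu_tlbstate.is_lazy per-CPU flag used elsewhere in the x86 TLB code. This is illustrative only, not Rik's patch (and not necessarily where the boot failures below come from); the callback and helper names are invented.

	#include <linux/cpumask.h>
	#include <linux/gfp.h>
	#include <linux/mm_types.h>
	#include <linux/smp.h>
	#include <asm/tlbflush.h>

	/* Hypothetical stand-in for the real local flush callback. */
	static void flush_tlb_after_table_free(void *info)
	{
		/* local TLB flush for the mm passed via info would go here */
	}

	static void flush_lazy_tlb_cpus(struct mm_struct *mm)
	{
		cpumask_var_t lazy_cpus;
		int cpu;

		if (!alloc_cpumask_var(&lazy_cpus, GFP_ATOMIC)) {
			/* Allocation failed: fall back to IPIing every user of mm. */
			smp_call_function_many(mm_cpumask(mm),
					       flush_tlb_after_table_free, mm, 1);
			return;
		}

		/* Only CPUs still in lazy TLB mode may have stale translations. */
		for_each_cpu(cpu, mm_cpumask(mm))
			if (per_cpu(cpu_tlbstate.is_lazy, cpu))
				cpumask_set_cpu(cpu, lazy_cpus);

		smp_call_function_many(lazy_cpus, flush_tlb_after_table_free, mm, 1);
		free_cpumask_var(lazy_cpus);
	}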
f636c25d9e x86,tlb: make lazy TLB mode lazier
39b6345460 x86,tlb: only send page table free TLB flush to lazy TLB CPUs
4dcb489ec0 x86,switch_mm: skip atomic operations for init_mm
+-----------------------------------------------------------------------------+------------+------------+------------+
| | f636c25d9e | 39b6345460 | 4dcb489ec0 |
+-----------------------------------------------------------------------------+------------+------------+------------+
| boot_successes | 67 | 0 | 0 |
| boot_failures | 0 | 30 | 8 |
| BUG:sleeping_function_called_from_invalid_context_at_mm/mmap.c | 0 | 16 | 6 |
| BUG:scheduling_while_atomic | 0 | 16 | 7 |
| Kernel_panic-not_syncing:Attempted_to_kill_init!exitcode= | 0 | 7 | |
| BUG:sleeping_function_called_from_invalid_context_at_kernel/locking/rwsem.c | 0 | 15 | 2 |
| Kernel_panic-not_syncing:No_working_init_found | 0 | 14 | 2 |
+-----------------------------------------------------------------------------+------------+------------+------------+
[ 1.362029] rtc-test rtc-test.0: setting system clock to 2018-06-29 19:33:47 UTC (1530300827)
[ 1.363895] Freeing unused kernel memory: 704K
[ 1.369781] Write protecting the kernel text: 6960k
[ 1.370907] Write protecting the kernel read-only data: 3796k
[ 1.372586] rodata_test: all tests were successful
[ 1.374744] BUG: sleeping function called from invalid context at mm/mmap.c:179
[ 1.376821] in_atomic(): 1, irqs_disabled(): 0, pid: 1, name: init
[ 1.377780] 1 lock held by init/1:
[ 1.378145] #0: (ptrval) (&mm->mmap_sem){++++}, at: vm_munmap+0x2e/0x6a
[ 1.378996] CPU: 0 PID: 1 Comm: init Not tainted 4.18.0-rc2-00176-g39b6345 #1
[ 1.379744] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 1.380621] Call Trace:
[ 1.380881] ? dump_stack+0x79/0xab
[ 1.381246] ? ___might_sleep+0x181/0x195
[ 1.381725] ? __might_sleep+0x65/0x6c
[ 1.382112] ? remove_vma+0x1b/0x4e
[ 1.382606] ? do_munmap+0x206/0x240
[ 1.382984] ? vm_munmap+0x47/0x6a
[ 1.383332] ? sys_munmap+0xe/0x10
[ 1.383737] ? do_int80_syscall_32+0x59/0x6b
[ 1.384172] ? entry_INT80_32+0x2c/0x2c
[ 1.384696] BUG: scheduling while atomic: init/1/0x00000002
[ 1.385273] no locks held by init/1.
[ 1.385693] Modules linked in:
[ 1.386027] CPU: 0 PID: 1 Comm: init Tainted: G W 4.18.0-rc2-00176-g39b6345 #1
[ 1.387020] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 1.387924] Call Trace:
[ 1.388184] ? dump_stack+0x79/0xab
[ 1.388594] ? __schedule_bug+0x6b/0x7d
[ 1.388987] ? __schedule+0x50/0x3e8
[ 1.389344] ? prepare_exit_to_usermode+0x96/0xe9
[ 1.389868] ? schedule+0x1a/0x2f
[ 1.390209] ? prepare_exit_to_usermode+0xa5/0xe9
[ 1.390821] ? syscall_return_slowpath+0x6d/0x74
[ 1.391319] ? do_int80_syscall_32+0x66/0x6b
[ 1.391805] ? entry_INT80_32+0x2c/0x2c
[ 1.392340] tsc: Refined TSC clocksource calibration: 2593.988 MHz
[ 1.393071] clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x25640cb3a67, max_idle_ns: 440795258696 ns
[ 1.394610] init[1]: segfault at 6ff5a000 ip 6ff62f7c sp 77cf1ae4 error 4 in ld-uClibc-0.9.33.2.so[6ff60000+6000]
[ 1.395714] Code: ff ff 8b 8d 24 ff ff ff 53 89 c3 b8 03 00 00 00 cd 80 5b 3d 00 f0 ff ff 76 0a f7 d8 8b 93 30 00 00 00 89 02 8b 85 24 ff ff ff <81> 38 7f 45 4c 46 74 51 ff 75 10 8b 83 54 00 00 00 ff 30 8d 83 d3
[ 1.397830] BUG: scheduling while atomic: init/1/0x00000002
[ 1.398402] no locks held by init/1.
[ 1.398925] Modules linked in:
[ 1.399259] CPU: 0 PID: 1 Comm: init Tainted: G W 4.18.0-rc2-00176-g39b6345 #1
[ 1.400177] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 1.401026] Call Trace:
[ 1.401282] ? dump_stack+0x79/0xab
[ 1.401634] ? __schedule_bug+0x6b/0x7d
[ 1.402024] ? __schedule+0x50/0x3e8
[ 1.402381] ? prepare_exit_to_usermode+0x96/0xe9
[ 1.402847] ? kvm_async_pf_task_wake+0xca/0xca
[ 1.403303] ? kvm_async_pf_task_wake+0xca/0xca
[ 1.403753] ? schedule+0x1a/0x2f
[ 1.404093] ? prepare_exit_to_usermode+0xa5/0xe9
[ 1.404595] ? resume_userspace+0x13/0x18
[ 1.405007] ? kvm_async_pf_task_wake+0xca/0xca
[ 1.405736] Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
[ 1.405736]
[ 1.406656] CPU: 0 PID: 1 Comm: init Tainted: G W 4.18.0-rc2-00176-g39b6345 #1
[ 1.407496] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 1.408318] Call Trace:
[ 1.408573] ? dump_stack+0x79/0xab
[ 1.408933] ? panic+0x99/0x1d0
[ 1.409261] ? do_exit+0x37e/0x768
[ 1.409612] ? do_group_exit+0x88/0x88
[ 1.410002] ? get_signal+0x405/0x42a
[ 1.410374] ? kvm_async_pf_task_wake+0xca/0xca
[ 1.410828] ? do_signal+0x19/0x48a
[ 1.411194] ? find_held_lock+0x22/0x5f
[ 1.411602] ? kvm_async_pf_task_wake+0xca/0xca
[ 1.412063] ? prepare_exit_to_usermode+0xb1/0xe9
[ 1.412533] ? resume_userspace+0x13/0x18
[ 1.412948] ? kvm_async_pf_task_wake+0xca/0xca
[ 1.413425] Kernel Offset: disabled
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start f9ced71f7fe48a035f69e5028d43a5f7d6b48a41 7daf201d7fe8334e2d2364d4e8ed3394ec9af819 --
git bisect bad b1c56ab1b360d41433851927cdea11125b140319 # 00:46 B 0 5 48 13 Merge 'ath6kl/ath-qca' into devel-catchup-201806300004
git bisect good bd1ee351b836f2952720fa08bc4887dd363aaa90 # 01:09 G 11 0 11 11 Merge 'linux-review/Uros-Bizjak/x86-asm-boot-Use-CC_SET-CC_OUT-in-arch-x86-boot/20180629-233629' into devel-catchup-201806300004
git bisect bad ce51b6dc4277f852a10dfd8e811d0b232de6efbe # 01:20 B 0 1 15 0 Merge 'kdave-btrfs-devel/cleanups-4.19' into devel-catchup-201806300004
git bisect bad 060072d1482816414326a2ffc85f0ef5c7432da5 # 01:36 B 0 10 24 0 Merge 'linux-review/Rik-van-Riel/x86-tlb-mm-make-lazy-TLB-mode-even-lazier/20180629-232822' into devel-catchup-201806300004
git bisect good 813244ea9141679c8068c9102b0baa004a7f3153 # 01:55 G 11 0 11 22 Merge 'linux-review/S-bastien-Szymanski/ARM-dts-imx6ull-add-operating-points/20180629-233237' into devel-catchup-201806300004
git bisect good c7e1d692ea8250c42278bb67fdedee01c822985f # 02:20 G 10 0 10 19 Merge tag 'mtd/fixes-for-4.18-rc3' of git://git.infradead.org/linux-mtd
git bisect good b41f794f284966fd6ec634111e3b40d241389f96 # 02:51 G 11 0 11 33 ALSA: timer: Fix UBSAN warning at SNDRV_TIMER_IOCTL_NEXT_DEVICE ioctl
git bisect good 8ea2d7bac9fe899108a3d4683cea7757be56edce # 03:03 G 11 0 0 1 mm: allocate mm_cpumask dynamically based on nr_cpu_ids
git bisect good f636c25d9eacd034acb35be6035af8125059cad4 # 03:14 G 10 0 0 0 x86,tlb: make lazy TLB mode lazier
git bisect bad dc6b19728327115d17c2c03014e763df1a2a2269 # 03:18 B 0 2 24 8 x86,mm: always use lazy TLB mode
git bisect bad 39b6345460010b3ffecbfd95c547ce92715bdb9d # 03:32 B 0 2 16 0 x86,tlb: only send page table free TLB flush to lazy TLB CPUs
# first bad commit: [39b6345460010b3ffecbfd95c547ce92715bdb9d] x86,tlb: only send page table free TLB flush to lazy TLB CPUs
git bisect good f636c25d9eacd034acb35be6035af8125059cad4 # 03:46 G 31 0 0 12 x86,tlb: make lazy TLB mode lazier
# extra tests on HEAD of linux-devel/devel-catchup-201806300004
git bisect bad f9ced71f7fe48a035f69e5028d43a5f7d6b48a41 # 03:46 B 0 35 52 0 0day head guard for 'devel-catchup-201806300004'
# extra tests on tree/branch linux-review/Rik-van-Riel/x86-tlb-mm-make-lazy-TLB-mode-even-lazier/20180629-232822
git bisect bad 4dcb489ec0ca1e81a4cdc36f26183ca1dd51059a # 03:48 B 0 8 22 0 x86,switch_mm: skip atomic operations for init_mm
# extra tests with first bad commit reverted
git bisect good f929bd2714bff938b3a45a278537f7d4d1fe0459 # 04:05 G 10 0 0 2 Revert "x86,tlb: only send page table free TLB flush to lazy TLB CPUs"
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/lkp Intel Corporation
0d0e0f71d0 ("ipv6: make ipv6_renew_options() interrupt/kernel .."): BUG: unable to handle kernel NULL pointer dereference at 0000000c
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://github.com/0day-ci/linux/commits/Paul-Moore/ipv6-make-ipv6_renew_...
commit 0d0e0f71d089ce4568f08e8dcb23618fbd06d66b
Author: Paul Moore <paul@paul-moore.com>
AuthorDate: Sun Jul 1 23:01:02 2018 -0400
Commit: 0day robot <lkp@intel.com>
CommitDate: Mon Jul 2 11:03:19 2018 +0800
ipv6: make ipv6_renew_options() interrupt/kernel safe
At present the ipv6_renew_options_kern() function ends up calling into
access_ok() which is problematic if done from inside an interrupt as
access_ok() calls WARN_ON_IN_IRQ() on some (all?) architectures
(x86-64 is affected). Example warning/backtrace is shown below:
WARNING: CPU: 1 PID: 3144 at lib/usercopy.c:11 _copy_from_user+0x85/0x90
...
Call Trace:
<IRQ>
ipv6_renew_option+0xb2/0xf0
ipv6_renew_options+0x26a/0x340
ipv6_renew_options_kern+0x2c/0x40
calipso_req_setattr+0x72/0xe0
netlbl_req_setattr+0x126/0x1b0
selinux_netlbl_inet_conn_request+0x80/0x100
selinux_inet_conn_request+0x6d/0xb0
security_inet_conn_request+0x32/0x50
tcp_conn_request+0x35f/0xe00
? __lock_acquire+0x250/0x16c0
? selinux_socket_sock_rcv_skb+0x1ae/0x210
? tcp_rcv_state_process+0x289/0x106b
tcp_rcv_state_process+0x289/0x106b
? tcp_v6_do_rcv+0x1a7/0x3c0
tcp_v6_do_rcv+0x1a7/0x3c0
tcp_v6_rcv+0xc82/0xcf0
ip6_input_finish+0x10d/0x690
ip6_input+0x45/0x1e0
? ip6_rcv_finish+0x1d0/0x1d0
ipv6_rcv+0x32b/0x880
? ip6_make_skb+0x1e0/0x1e0
__netif_receive_skb_core+0x6f2/0xdf0
? process_backlog+0x85/0x250
? process_backlog+0x85/0x250
? process_backlog+0xec/0x250
process_backlog+0xec/0x250
net_rx_action+0x153/0x480
__do_softirq+0xd9/0x4f7
do_softirq_own_stack+0x2a/0x40
</IRQ>
...
While not present in the backtrace, ipv6_renew_option() ends up calling
access_ok() via the following chain:
access_ok()
_copy_from_user()
copy_from_user()
ipv6_renew_option()
The fix presented in this patch is to perform the userspace copy
earlier in the call chain such that it is only called when the option
data is actually coming from userspace; that place is
do_ipv6_setsockopt(). Not only does this solve the problem seen in
the backtrace above, it also allows us to simplify the code quite a
bit by removing ipv6_renew_options_kern() completely. We also take
this opportunity to cleanup ipv6_renew_options()/ipv6_renew_option()
a small amount as well.
This patch is heavily based on a rough patch by Al Viro. I've taken
his original patch, converted a kmemdup() call in do_ipv6_setsockopt()
to a memdup_user() call, made better use of the e_inval jump target in
the same function, and cleaned up the use of ipv6_renew_option() by
ipv6_renew_options().
CC: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Paul Moore <paul@paul-moore.com>
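As a rough illustration of the fix described in the commit message: do the single copy from userspace in do_ipv6_setsockopt(), where sleeping is allowed, and hand ipv6_renew_options() a kernel-space buffer so the softirq-side callers (the calipso/netlbl path in the backtrace) never reach access_ok(). This is a sketch, not the actual patch; the helper name is an assumption.

	#include <linux/err.h>
	#include <linux/ipv6.h>
	#include <linux/slab.h>
	#include <linux/string.h>

	/* Hypothetical helper, called only from setsockopt (user) context. */
	static struct ipv6_opt_hdr *ipv6_opt_hdr_from_user(const void __user *optval,
							   int optlen)
	{
		struct ipv6_opt_hdr *new;

		if (!optlen)
			return NULL;		/* "delete this option" case */

		new = memdup_user(optval, optlen);	/* may sleep: user context only */
		if (IS_ERR(new))
			return new;		/* caller propagates PTR_ERR(new) */

		/*
		 * The kernel copy is then passed to ipv6_renew_options(), which no
		 * longer needs a separate ipv6_renew_options_kern() twin; the copy
		 * is kfree()d once the new ipv6_txoptions has been built.
		 */
		return new;
	}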
1236f22fba tcp: prevent bogus FRTO undos with non-SACK flows
0d0e0f71d0 ipv6: make ipv6_renew_options() interrupt/kernel safe
+------------------------------------------+------------+------------+
| | 1236f22fba | 0d0e0f71d0 |
+------------------------------------------+------------+------------+
| boot_successes | 37 | 16 |
| boot_failures | 0 | 22 |
| BUG:unable_to_handle_kernel | 0 | 21 |
| Oops:#[##] | 0 | 21 |
| EIP:ipv6_renew_options | 0 | 21 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 21 |
| Mem-Info | 0 | 1 |
+------------------------------------------+------------+------------+
01 00 00 00 40 00 00 00 04 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0f 00 00 00 00 00 00 00 b2 81 1b 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
01 00 00 00 60 00 00 00 02 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 09 00 00 00 00 00 00 00 6e 56 c3 02 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
01 00 00 00 48 00 00 00 07 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0e 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
01 00 00 00 40 00 00 00 07 00 00 00 00 00 0b 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
00 00 a2 30 00 00 00 00 00 00 7e 05 00 00 00 00 00 00 03 00 00 00 00 00 00 00 20 a6 2e 04 00 00 00 00 00 00 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 0d 46 00 00 00 00 00 00 04 00 00 00 00 00 00 00 4a 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 3f 00 00 00 fe ff 00 00
[ 21.130187] BUG: unable to handle kernel NULL pointer dereference at 0000000c
[ 21.130883] *pde = 00000000
[ 21.131142] Oops: 0000 [#1] PREEMPT
[ 21.131487] CPU: 0 PID: 496 Comm: trinity-main Not tainted 4.18.0-rc2-00130-g0d0e0f71 #151
[ 21.132269] EIP: ipv6_renew_options+0xce/0x1c0
[ 21.132699] Code: 85 c0 89 c6 0f 84 f2 00 00 00 31 c0 89 d9 89 f7 f3 aa 8d 46 24 c7 06 01 00 00 00 89 5e 04 8d 5d f0 8d 56 0c 89 45 f0 8b 45 ec <8b> 48 0c 53 b8 36 00 00 00 ff 75 e8 ff 75 08 e8 7b f0 ff ff 8b 45
[ 21.134527] EAX: 00000000 EBX: cde71d48 ECX: 00000000 EDX: cdeb770c
[ 21.135142] ESI: cdeb7700 EDI: cdeb772c EBP: cde71d58 ESP: cde71d40
[ 21.135737] DS: 007b ES: 007b FS: 0000 GS: 0033 SS: 0068 EFLAGS: 00010246
[ 21.136401] CR0: 80050033 CR2: 0000000c CR3: 0dc6c000 CR4: 001406d0
[ 21.137032] DR0: 00000000 DR1: 00000000 DR2: 00000000 DR3: 00000000
[ 21.137630] DR6: fffe0ff0 DR7: 00000400
[ 21.137998] Call Trace:
[ 21.138225] do_ipv6_setsockopt+0x597/0xdd9
[ 21.138666] ? kvm_clock_read+0x1d/0x2d
[ 21.139026] ? kvm_sched_clock_read+0x9/0x18
[ 21.139402] ? paravirt_sched_clock+0x9/0xd
[ 21.139790] ? sched_clock+0x9/0xc
[ 21.140154] ? kvm_clock_read+0x1d/0x2d
[ 21.140510] ? kvm_sched_clock_read+0x9/0x18
[ 21.140930] ? paravirt_sched_clock+0x9/0xd
[ 21.141297] ? sched_clock+0x9/0xc
[ 21.141633] ? sched_clock_cpu+0x19/0x167
[ 21.141997] ? preempt_latency_start+0x1e/0x49
[ 21.142382] ? mm_fault_error+0x67/0xfa
[ 21.142731] ? check_chain_key+0x9a/0xef
[ 21.143104] ? kvm_clock_read+0x1d/0x2d
[ 21.143475] ? kvm_sched_clock_read+0x9/0x18
[ 21.143867] ? paravirt_sched_clock+0x9/0xd
[ 21.144234] ? sched_clock+0x9/0xc
[ 21.144536] ? sched_clock_cpu+0x19/0x167
[ 21.144920] ipv6_setsockopt+0x43/0x50
[ 21.145260] ? do_ipv6_setsockopt+0xdd9/0xdd9
[ 21.145721] ? sock_common_recvmsg+0x46/0x46
[ 21.146118] tcp_setsockopt+0x27/0x7c0
[ 21.146454] ? preempt_latency_start+0x1e/0x49
[ 21.146878] ? tcp_time_stamp_raw+0x2c/0x2c
[ 21.147275] ? sock_common_recvmsg+0x46/0x46
[ 21.147694] sock_common_setsockopt+0x18/0x1d
[ 21.148106] __sys_setsockopt+0x5b/0x7d
[ 21.148476] sys_socketcall+0x10e/0x16a
[ 21.148825] do_int80_syscall_32+0x57/0x69
[ 21.149203] entry_INT80_32+0x2c/0x2c
[ 21.149548] EIP: 0x809af42
[ 21.149814] Code: 89 c8 c3 90 8d 74 26 00 85 c0 c7 01 01 00 00 00 75 d8 a1 ec bd a7 08 eb d1 66 90 66 90 66 90 66 90 66 90 66 90 66 90 90 cd 80 <c3> 8d b6 00 00 00 00 8d bc 27 00 00 00 00 8b 10 a3 14 be a7 08 85
[ 21.151556] EAX: ffffffda EBX: 0000000e ECX: bfce793c EDX: 08a792e0
[ 21.152121] ESI: 0000005f EDI: 00000164 EBP: 09e17150 ESP: bfce792c
[ 21.152712] DS: 007b ES: 007b FS: 0000 GS: 0033 SS: 007b EFLAGS: 00000286
[ 21.153290] Modules linked in:
[ 21.153563] CR2: 000000000000000c
[main] trace_fd was -1
[main] kernel became tainted! (128/0) Last seed was 2909835065
trinity: Detected kernel tainting. Last seed was 2909835065
[ 21.155996] ---[ end trace b5dcd09637b3c563 ]---
[ 21.156410] EIP: ipv6_renew_options+0xce/0x1c0
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start f37157a5fa11920ddea4107342ae29ff39ce5a8b 7daf201d7fe8334e2d2364d4e8ed3394ec9af819 --
git bisect bad e98f18f8eb2a5f52292fa7611894c542e93375f5 # 11:34 B 0 6 23 3 Merge 'linux-review/Randy-Dunlap/x86-make-Memory-Management-options-more-visible/20180702-105043' into devel-catchup-201807021106
git bisect good d33e9b6169ecd16559abce59dc204b45322b2ab2 # 11:48 G 12 0 3 3 Merge 'drm-exynos/exynos-drm-fixes' into devel-catchup-201807021106
git bisect bad 3d006920eb51a3861723934a36937deb49d8cb02 # 11:59 B 0 4 18 0 Merge 'linux-review/Paul-Moore/ipv6-make-ipv6_renew_options-interrupt-kernel-safe/20180702-110318' into devel-catchup-201807021106
git bisect good 933e671f8cff87552d4d26cf3874633b11ae78ba # 12:06 G 12 0 3 4 selftests/net: Fix permissions for fib_tests.sh
git bisect good 4664610537d398d55be19432f9cd9c29c831e159 # 12:18 G 12 0 2 2 Revert "s390/qeth: use Read device to query hypervisor for MAC"
git bisect good 9901c5d77e969d8215a8e8d087ef02e6feddc84c # 12:25 G 12 0 0 0 bpf: sockmap, fix crash when ipv6 sock is added
git bisect good 3ffe64f1a641b80a82d9ef4efa7a05ce69049871 # 12:32 G 12 0 2 2 hv_netvsc: split sub-channel setup into async and sync
git bisect good bf2b866a2fe2d74558fe4b7bdf63a4bc0afbdf70 # 12:35 G 11 0 0 0 Merge branch 'bpf-sockmap-fixes'
git bisect good 271b955e52a965f729c9e67f281685c2e7d8726a # 12:49 G 11 0 2 2 Merge git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf
git bisect bad 0d0e0f71d089ce4568f08e8dcb23618fbd06d66b # 12:54 B 1 1 0 4 ipv6: make ipv6_renew_options() interrupt/kernel safe
git bisect good 1236f22fbae15df3736ab4a984c64c0c6ee6254c # 13:02 G 12 0 4 4 tcp: prevent bogus FRTO undos with non-SACK flows
# first bad commit: [0d0e0f71d089ce4568f08e8dcb23618fbd06d66b] ipv6: make ipv6_renew_options() interrupt/kernel safe
git bisect good 1236f22fbae15df3736ab4a984c64c0c6ee6254c # 13:10 G 39 0 8 12 tcp: prevent bogus FRTO undos with non-SACK flows
# extra tests with debug options
git bisect bad 0d0e0f71d089ce4568f08e8dcb23618fbd06d66b # 13:17 B 0 1 15 0 ipv6: make ipv6_renew_options() interrupt/kernel safe
# extra tests on HEAD of linux-devel/devel-catchup-201807021106
git bisect bad f37157a5fa11920ddea4107342ae29ff39ce5a8b # 13:18 B 5 6 0 1 0day head guard for 'devel-catchup-201807021106'
# extra tests on tree/branch linux-review/Paul-Moore/ipv6-make-ipv6_renew_options-interrupt-kernel-safe/20180702-110318
git bisect bad 0d0e0f71d089ce4568f08e8dcb23618fbd06d66b # 13:21 B 4 22 0 1 ipv6: make ipv6_renew_options() interrupt/kernel safe
# extra tests with first bad commit reverted
git bisect good b9af297edb775f880d5416b1fbeb4d256f8e6501 # 13:32 G 16 0 4 4 Revert "ipv6: make ipv6_renew_options() interrupt/kernel safe"
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/lkp Intel Corporation
[lkp-robot] [vfs] 8a2e54b8af: netperf.Throughput_total_tps -26.1% regression
by kernel test robot
Greetings,
FYI, we noticed a -26.1% regression of netperf.Throughput_total_tps due to commit:
commit: 8a2e54b8af88639527b308878bcde31dfa2644f6 ("vfs: Implement a filesystem superblock creation/configuration context")
https://git.kernel.org/cgit/linux/kernel/git/next/linux-next.git master
in testcase: netperf
on test machine: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
with following parameters:
ip: ipv4
runtime: 300s
nr_threads: 25%
cluster: cs-localhost
test: TCP_CRR
cpufreq_governor: performance
test-description: Netperf is a benchmark that can be used to measure various aspects of networking performance.
test-url: http://www.netperf.org/netperf/
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
cluster/compiler/cpufreq_governor/ip/kconfig/nr_threads/rootfs/runtime/tbox_group/test/testcase:
cs-localhost/gcc-7/performance/ipv4/x86_64-rhel-7.2/25%/debian-x86_64-2018-04-03.cgz/300s/lkp-bdw-ep2/TCP_CRR/netperf
commit:
889bc23d98 ("vfs: Require specification of size of mount data for internal mounts")
8a2e54b8af ("vfs: Implement a filesystem superblock creation/configuration context")
889bc23d98821043 8a2e54b8af88639527b308878b
---------------- --------------------------
%stddev %change %stddev
\ | \
468684 -26.1% 346523 netperf.Throughput_total_tps
21303 -26.1% 15751 netperf.Throughput_tps
305.02 +0.0% 305.02 netperf.time.elapsed_time
305.02 +0.0% 305.02 netperf.time.elapsed_time.max
1240905 +4.0% 1290438 netperf.time.involuntary_context_switches
2340 +0.7% 2357 netperf.time.maximum_resident_set_size
6722 +1.4% 6813 ± 2% netperf.time.minor_page_faults
4096 +0.0% 4096 netperf.time.page_size
1879 +4.4% 1962 netperf.time.percent_of_cpu_this_job_got
5453 +6.1% 5786 netperf.time.system_time
279.41 -28.7% 199.20 netperf.time.user_time
2.724e+08 -25.9% 2.018e+08 netperf.time.voluntary_context_switches
1.406e+08 -26.1% 1.04e+08 netperf.workload
336286 -0.2% 335615 interrupts.CAL:Function_call_interrupts
438.30 -2.1% 429.02 pmeter.Average_Active_Power
48.61 -24.5% 36.71 pmeter.performance_per_watt
27.42 -0.5% 27.29 ± 4% boot-time.boot
15.78 -3.0% 15.30 ± 5% boot-time.dhcp
2226 +2.3% 2277 boot-time.idle
18.86 ± 2% -1.3% 18.61 ± 5% boot-time.kernel_boot
4.304e+08 -24.7% 3.243e+08 softirqs.NET_RX
39646822 -18.7% 32219712 softirqs.RCU
5590041 +7.3% 5996673 ± 2% softirqs.SCHED
12330529 ± 10% -1.3% 12166738 ± 12% softirqs.TIMER
69.16 -5.2 63.92 mpstat.cpu.idle%
0.00 ±100% -0.0 0.00 ±173% mpstat.cpu.iowait%
11.34 -3.6 7.71 mpstat.cpu.soft%
18.02 +9.3 27.32 mpstat.cpu.sys%
1.47 -0.4 1.05 mpstat.cpu.usr%
4.00 +0.0% 4.00 vmstat.memory.buff
1005802 +0.5% 1010518 vmstat.memory.cache
1.3e+08 -0.0% 1.3e+08 vmstat.memory.free
27.75 +14.4% 31.75 vmstat.procs.r
3249087 -33.5% 2159914 vmstat.system.cs
93331 -0.1% 93238 vmstat.system.in
0.00 -100.0% 0.00 numa-numastat.node0.interleave_hit
478419 ± 29% -5.1% 454087 ± 22% numa-numastat.node0.local_node
492847 ± 28% -5.5% 465546 ± 22% numa-numastat.node0.numa_hit
14432 ± 33% -20.6% 11460 ± 61% numa-numastat.node0.other_node
0.00 -100.0% 0.00 numa-numastat.node1.interleave_hit
431892 ± 34% -14.4% 369692 ± 29% numa-numastat.node1.local_node
434779 ± 32% -13.6% 375541 ± 29% numa-numastat.node1.numa_hit
2887 ±169% +102.5% 5848 ±120% numa-numastat.node1.other_node
4.093e+09 -30.4% 2.848e+09 cpuidle.C1.time
4.895e+08 -35.2% 3.173e+08 cpuidle.C1.usage
81116892 ± 2% +177.7% 2.253e+08 ± 3% cpuidle.C1E.time
3590244 ± 2% +173.5% 9819699 ± 3% cpuidle.C1E.usage
85009589 -21.6% 66665506 cpuidle.C3.time
209102 -14.3% 179301 cpuidle.C3.usage
1.333e+10 +0.4% 1.338e+10 cpuidle.C6.time
13846804 -0.0% 13839975 cpuidle.C6.usage
43536088 -33.9% 28763801 cpuidle.POLL.time
6632569 -29.1% 4700058 cpuidle.POLL.usage
983.75 +10.9% 1090 turbostat.Avg_MHz
35.27 +3.8 39.08 turbostat.Busy%
2797 +0.0% 2797 turbostat.Bzy_MHz
4.895e+08 -35.2% 3.173e+08 turbostat.C1
15.19 -4.6 10.57 turbostat.C1%
3590143 ± 2% +173.5% 9819585 ± 3% turbostat.C1E
0.30 ± 3% +0.5 0.83 ± 3% turbostat.C1E%
208933 -14.3% 179136 turbostat.C3
0.32 -0.1 0.25 turbostat.C3%
13844451 -0.0% 13837698 turbostat.C6
49.44 +0.2 49.64 turbostat.C6%
53.73 -5.7% 50.69 turbostat.CPU%c1
0.23 -26.1% 0.17 turbostat.CPU%c3
10.78 -6.6% 10.07 ± 2% turbostat.CPU%c6
75.00 ± 5% -7.7% 69.25 turbostat.CoreTmp
28660427 -0.1% 28629630 turbostat.IRQ
0.55 ± 9% -2.3% 0.54 ± 10% turbostat.Pkg%pc2
0.55 ± 6% +5.5% 0.58 ± 6% turbostat.Pkg%pc6
77.00 ± 2% -5.8% 72.50 turbostat.PkgTmp
221.95 -2.6% 216.17 turbostat.PkgWatt
16.44 +3.5% 17.01 turbostat.RAMWatt
2195 +0.0% 2195 turbostat.TSC_MHz
286110 +1.6% 290795 meminfo.Active
285878 +1.6% 290563 meminfo.Active(anon)
177310 +0.2% 177735 meminfo.AnonHugePages
255935 -0.0% 255847 meminfo.AnonPages
939803 +0.5% 944631 meminfo.Cached
200608 +0.0% 200608 meminfo.CmaFree
204800 +0.0% 204800 meminfo.CmaTotal
65946628 +0.0% 65946628 meminfo.CommitLimit
422156 -1.6% 415405 meminfo.Committed_AS
1.292e+08 +0.2% 1.295e+08 meminfo.DirectMap1G
6697984 ± 6% -3.9% 6434304 ± 7% meminfo.DirectMap2M
261424 ± 14% +0.6% 262960 ± 4% meminfo.DirectMap4k
2048 +0.0% 2048 meminfo.Hugepagesize
18398 +0.7% 18518 meminfo.Inactive
18051 +0.7% 18171 meminfo.Inactive(anon)
15429 -0.3% 15389 meminfo.KernelStack
24964 +1.0% 25218 meminfo.Mapped
1.294e+08 -0.0% 1.294e+08 meminfo.MemAvailable
1.3e+08 -0.0% 1.3e+08 meminfo.MemFree
1.319e+08 -0.0% 1.319e+08 meminfo.MemTotal
809.25 ±100% -50.1% 403.50 ±173% meminfo.Mlocked
5903 -0.7% 5860 meminfo.PageTables
65803 -0.2% 65678 meminfo.SReclaimable
219346 -0.3% 218728 meminfo.SUnreclaim
48660 ± 6% +9.9% 53484 meminfo.Shmem
285150 -0.3% 284407 meminfo.Slab
890696 -0.0% 890634 meminfo.Unevictable
3.436e+10 +0.0% 3.436e+10 meminfo.VmallocTotal
3.39e+12 -4.9% 3.225e+12 perf-stat.branch-instructions
2.35 -0.5 1.81 perf-stat.branch-miss-rate%
7.958e+10 -26.5% 5.853e+10 perf-stat.branch-misses
1.28 +1.0 2.28 perf-stat.cache-miss-rate%
5.826e+09 +35.8% 7.912e+09 perf-stat.cache-misses
4.561e+11 -24.0% 3.465e+11 perf-stat.cache-references
1.001e+09 -33.5% 6.652e+08 perf-stat.context-switches
1.55 +21.2% 1.88 perf-stat.cpi
2.662e+13 +10.1% 2.931e+13 perf-stat.cpu-cycles
101022 -35.2% 65435 perf-stat.cpu-migrations
0.08 -0.0 0.07 perf-stat.dTLB-load-miss-rate%
4.13e+09 -22.5% 3.203e+09 perf-stat.dTLB-load-misses
5.225e+12 -12.2% 4.587e+12 perf-stat.dTLB-loads
0.05 -0.0 0.05 perf-stat.dTLB-store-miss-rate%
1.794e+09 -35.9% 1.151e+09 perf-stat.dTLB-store-misses
3.329e+12 -25.7% 2.474e+12 perf-stat.dTLB-stores
31.10 ± 21% -1.8 29.27 ± 4% perf-stat.iTLB-load-miss-rate%
7.682e+09 ± 32% -38.5% 4.728e+09 ± 4% perf-stat.iTLB-load-misses
1.65e+10 -30.8% 1.142e+10 perf-stat.iTLB-loads
1.72e+13 -9.1% 1.562e+13 perf-stat.instructions
2429 ± 23% +36.3% 3311 ± 4% perf-stat.instructions-per-iTLB-miss
0.65 -17.5% 0.53 perf-stat.ipc
791192 +0.1% 791897 perf-stat.minor-faults
89.78 +6.1 95.89 perf-stat.node-load-miss-rate%
1.938e+09 +89.9% 3.681e+09 perf-stat.node-load-misses
2.205e+08 -28.4% 1.578e+08 ± 2% perf-stat.node-loads
83.13 -22.4 60.74 perf-stat.node-store-miss-rate%
1.868e+09 +0.3% 1.873e+09 perf-stat.node-store-misses
3.791e+08 +219.3% 1.211e+09 perf-stat.node-stores
791204 +0.1% 791913 perf-stat.page-faults
122296 +22.9% 150294 perf-stat.path-length
71487 +1.6% 72661 proc-vmstat.nr_active_anon
63972 -0.0% 63948 proc-vmstat.nr_anon_pages
3229476 -0.0% 3229370 proc-vmstat.nr_dirty_background_threshold
6466849 -0.0% 6466637 proc-vmstat.nr_dirty_threshold
234958 +0.5% 236166 proc-vmstat.nr_file_pages
50152 +0.0% 50152 proc-vmstat.nr_free_cma
32496765 -0.0% 32495704 proc-vmstat.nr_free_pages
4489 +0.6% 4516 proc-vmstat.nr_inactive_anon
15435 -0.3% 15392 proc-vmstat.nr_kernel_stack
6301 +0.9% 6357 proc-vmstat.nr_mapped
202.25 ±100% -50.1% 101.00 ±173% proc-vmstat.nr_mlock
1476 -0.8% 1464 proc-vmstat.nr_page_table_pages
12171 ± 6% +9.9% 13379 proc-vmstat.nr_shmem
16450 -0.2% 16419 proc-vmstat.nr_slab_reclaimable
54829 -0.3% 54661 proc-vmstat.nr_slab_unreclaimable
222673 -0.0% 222658 proc-vmstat.nr_unevictable
71487 +1.6% 72661 proc-vmstat.nr_zone_active_anon
4489 +0.6% 4516 proc-vmstat.nr_zone_inactive_anon
222673 -0.0% 222658 proc-vmstat.nr_zone_unevictable
3898 ± 74% -26.1% 2880 ± 16% proc-vmstat.numa_hint_faults
958.75 ± 36% +1.5% 973.50 ± 15% proc-vmstat.numa_hint_faults_local
956184 -9.2% 868423 proc-vmstat.numa_hit
938852 -9.3% 851105 proc-vmstat.numa_local
17332 -0.1% 17318 proc-vmstat.numa_other
2474 ±111% -44.0% 1384 ± 32% proc-vmstat.numa_pages_migrated
10570 ±132% +7.1% 11325 ±130% proc-vmstat.numa_pte_updates
11386 ± 8% +13.4% 12916 proc-vmstat.pgactivate
1101289 -10.5% 985451 proc-vmstat.pgalloc_normal
812475 +0.3% 815020 proc-vmstat.pgfault
1043137 -11.0% 927908 proc-vmstat.pgfree
2474 ±111% -44.0% 1384 ± 32% proc-vmstat.pgmigrate_success
153400 ± 33% +6.8% 163789 ± 12% numa-meminfo.node0.Active
153284 ± 33% +6.8% 163673 ± 12% numa-meminfo.node0.Active(anon)
100577 ± 37% +6.9% 107492 ± 18% numa-meminfo.node0.AnonHugePages
143856 ± 36% +7.4% 154566 ± 10% numa-meminfo.node0.AnonPages
472071 ± 3% -0.8% 468470 ± 3% numa-meminfo.node0.FilePages
16604 ± 4% -19.6% 13355 ± 50% numa-meminfo.node0.Inactive
16433 ± 4% -19.8% 13180 ± 51% numa-meminfo.node0.Inactive(anon)
8551 ± 6% -7.4% 7915 numa-meminfo.node0.KernelStack
14689 -6.6% 13723 ± 14% numa-meminfo.node0.Mapped
64814375 +0.1% 64857851 numa-meminfo.node0.MemFree
65867452 +0.0% 65867452 numa-meminfo.node0.MemTotal
1053075 ± 5% -4.1% 1009599 ± 6% numa-meminfo.node0.MemUsed
3796 ± 35% -41.0% 2238 ± 17% numa-meminfo.node0.PageTables
36487 ± 9% -11.1% 32442 ± 19% numa-meminfo.node0.SReclaimable
117644 ± 3% -10.2% 105665 ± 3% numa-meminfo.node0.SUnreclaim
26220 ± 53% -13.7% 22631 ± 75% numa-meminfo.node0.Shmem
154132 ± 4% -10.4% 138109 ± 7% numa-meminfo.node0.Slab
445627 -0.0% 445610 numa-meminfo.node0.Unevictable
132774 ± 40% -4.3% 127037 ± 16% numa-meminfo.node1.Active
132658 ± 40% -4.3% 126921 ± 16% numa-meminfo.node1.Active(anon)
76742 ± 49% -8.4% 70282 ± 30% numa-meminfo.node1.AnonHugePages
112031 ± 47% -9.6% 101265 ± 16% numa-meminfo.node1.AnonPages
467751 ± 2% +1.8% 476180 ± 3% numa-meminfo.node1.FilePages
1694 ± 41% +202.8% 5130 ±132% numa-meminfo.node1.Inactive
1519 ± 51% +226.4% 4958 ±139% numa-meminfo.node1.Inactive(anon)
6879 ± 7% +8.6% 7473 ± 2% numa-meminfo.node1.KernelStack
10006 +14.1% 11417 ± 18% numa-meminfo.node1.Mapped
65172705 -0.1% 65124824 numa-meminfo.node1.MemFree
66025808 -0.0% 66025804 numa-meminfo.node1.MemTotal
853102 ± 6% +5.6% 900978 ± 7% numa-meminfo.node1.MemUsed
2107 ± 62% +71.8% 3620 ± 11% numa-meminfo.node1.PageTables
29316 ± 12% +13.4% 33234 ± 18% numa-meminfo.node1.SReclaimable
101687 ± 5% +11.1% 112982 ± 3% numa-meminfo.node1.SUnreclaim
22455 ± 55% +37.5% 30868 ± 56% numa-meminfo.node1.Shmem
131004 ± 6% +11.6% 146217 ± 6% numa-meminfo.node1.Slab
445068 -0.0% 445023 numa-meminfo.node1.Unevictable
194846 +3.2% 201093 ± 4% slabinfo.Acpi-Namespace.active_objs
195003 +3.5% 201804 ± 4% slabinfo.Acpi-Namespace.num_objs
150215 -0.1% 150062 slabinfo.Acpi-Operand.active_objs
150220 -0.1% 150066 slabinfo.Acpi-Operand.num_objs
13071 ± 4% -1.4% 12890 slabinfo.anon_vma.active_objs
13071 ± 4% -1.4% 12890 slabinfo.anon_vma.num_objs
1952 ± 14% +23.9% 2418 ± 2% slabinfo.biovec-64.active_objs
1952 ± 14% +23.9% 2418 ± 2% slabinfo.biovec-64.num_objs
7860 ± 2% -2.5% 7663 ± 2% slabinfo.cred_jar.active_objs
7860 ± 2% -2.5% 7663 ± 2% slabinfo.cred_jar.num_objs
73763 -1.0% 73030 slabinfo.dentry.active_objs
74616 -1.0% 73851 slabinfo.dentry.num_objs
681.25 ± 11% +14.5% 780.25 ± 8% slabinfo.dmaengine-unmap-128.active_objs
681.25 ± 11% +14.5% 780.25 ± 8% slabinfo.dmaengine-unmap-128.num_objs
4603 ± 6% -4.1% 4414 ± 10% slabinfo.dmaengine-unmap-16.active_objs
4603 ± 6% -4.1% 4414 ± 10% slabinfo.dmaengine-unmap-16.num_objs
3439 ± 5% -6.7% 3208 ± 6% slabinfo.eventpoll_pwq.active_objs
3439 ± 5% -6.7% 3208 ± 6% slabinfo.eventpoll_pwq.num_objs
37128 -9.2% 33700 slabinfo.filp.active_objs
1161 -9.3% 1054 slabinfo.filp.active_slabs
37186 -9.2% 33748 slabinfo.filp.num_objs
1161 -9.3% 1054 slabinfo.filp.num_slabs
78122 -0.2% 77950 slabinfo.kernfs_node_cache.active_objs
78122 -0.2% 77950 slabinfo.kernfs_node_cache.num_objs
6797 -2.7% 6612 ± 3% slabinfo.kmalloc-2048.active_objs
6857 ± 2% -2.8% 6666 ± 3% slabinfo.kmalloc-2048.num_objs
37114 +1.8% 37780 ± 3% slabinfo.kmalloc-32.active_objs
37408 +1.6% 37990 ± 2% slabinfo.kmalloc-32.num_objs
8206 ± 4% -2.8% 7974 ± 3% slabinfo.kmalloc-512.active_objs
133449 +0.9% 134658 slabinfo.kmalloc-64.active_objs
133782 +0.8% 134895 slabinfo.kmalloc-64.num_objs
10650 ± 3% +3.2% 10990 ± 5% slabinfo.kmalloc-96.active_objs
10664 ± 3% +3.2% 11002 ± 5% slabinfo.kmalloc-96.num_objs
681.00 ± 4% -9.4% 617.00 ± 8% slabinfo.kmem_cache_node.active_objs
736.00 ± 4% -8.7% 672.00 ± 8% slabinfo.kmem_cache_node.num_objs
31500 ± 4% -2.6% 30689 slabinfo.pid.active_objs
31500 ± 4% -2.6% 30689 slabinfo.pid.num_objs
278948 +0.1% 279251 slabinfo.tw_sock_TCP.active_objs
279098 +0.1% 279394 slabinfo.tw_sock_TCP.num_objs
27069 ± 3% -2.7% 26348 ± 2% slabinfo.vm_area_struct.active_objs
27069 ± 3% -2.6% 26367 ± 2% slabinfo.vm_area_struct.num_objs
19563 ± 15% -11.1% 17395 ± 20% numa-vmstat.node0
38314 ± 33% +6.8% 40916 ± 12% numa-vmstat.node0.nr_active_anon
35963 ± 36% +7.5% 38645 ± 10% numa-vmstat.node0.nr_anon_pages
118011 ± 3% -0.8% 117119 ± 3% numa-vmstat.node0.nr_file_pages
16203578 +0.1% 16214439 numa-vmstat.node0.nr_free_pages
4108 ± 4% -19.6% 3302 ± 51% numa-vmstat.node0.nr_inactive_anon
8552 ± 6% -7.4% 7915 numa-vmstat.node0.nr_kernel_stack
3741 ± 2% -6.4% 3501 ± 13% numa-vmstat.node0.nr_mapped
101.25 ±102% -42.2% 58.50 ±173% numa-vmstat.node0.nr_mlock
948.25 ± 35% -41.0% 559.00 ± 17% numa-vmstat.node0.nr_page_table_pages
6549 ± 53% -13.6% 5659 ± 75% numa-vmstat.node0.nr_shmem
9121 ± 9% -11.1% 8110 ± 19% numa-vmstat.node0.nr_slab_reclaimable
29408 ± 3% -10.1% 26433 ± 3% numa-vmstat.node0.nr_slab_unreclaimable
111406 -0.0% 111402 numa-vmstat.node0.nr_unevictable
38314 ± 33% +6.8% 40916 ± 12% numa-vmstat.node0.nr_zone_active_anon
4108 ± 4% -19.6% 3302 ± 51% numa-vmstat.node0.nr_zone_inactive_anon
111406 -0.0% 111402 numa-vmstat.node0.nr_zone_unevictable
724926 ± 8% -6.9% 675065 ± 11% numa-vmstat.node0.numa_hit
131009 -0.1% 130856 numa-vmstat.node0.numa_interleave
710309 ± 9% -6.7% 663035 ± 11% numa-vmstat.node0.numa_local
14616 ± 33% -17.7% 12029 ± 60% numa-vmstat.node0.numa_other
10355 ± 30% +21.0% 12528 ± 27% numa-vmstat.node1
33151 ± 40% -4.3% 31734 ± 16% numa-vmstat.node1.nr_active_anon
28010 ± 47% -9.6% 25317 ± 16% numa-vmstat.node1.nr_anon_pages
116927 ± 2% +1.8% 119045 ± 3% numa-vmstat.node1.nr_file_pages
50152 +0.0% 50152 numa-vmstat.node1.nr_free_cma
16293163 -0.1% 16281185 numa-vmstat.node1.nr_free_pages
385.75 ± 50% +220.5% 1236 ±139% numa-vmstat.node1.nr_inactive_anon
6881 ± 7% +8.6% 7475 numa-vmstat.node1.nr_kernel_stack
2576 +12.8% 2907 ± 16% numa-vmstat.node1.nr_mapped
100.50 ±102% -58.2% 42.00 ±173% numa-vmstat.node1.nr_mlock
527.00 ± 62% +71.6% 904.50 ± 11% numa-vmstat.node1.nr_page_table_pages
5603 ± 55% +37.7% 7717 ± 56% numa-vmstat.node1.nr_shmem
7329 ± 12% +13.4% 8309 ± 18% numa-vmstat.node1.nr_slab_reclaimable
25419 ± 5% +11.2% 28263 ± 3% numa-vmstat.node1.nr_slab_unreclaimable
111266 -0.0% 111255 numa-vmstat.node1.nr_unevictable
33151 ± 40% -4.3% 31734 ± 16% numa-vmstat.node1.nr_zone_active_anon
385.75 ± 50% +220.5% 1236 ±139% numa-vmstat.node1.nr_zone_inactive_anon
111266 -0.0% 111255 numa-vmstat.node1.nr_zone_unevictable
545817 ± 12% -0.3% 544018 ± 14% numa-vmstat.node1.numa_hit
130594 +0.1% 130718 numa-vmstat.node1.numa_interleave
408964 ± 18% -1.1% 404358 ± 19% numa-vmstat.node1.numa_local
136853 ± 3% +2.1% 139659 ± 5% numa-vmstat.node1.numa_other
26133 ± 24% -1.0% 25884 ± 41% sched_debug.cfs_rq:/.MIN_vruntime.avg
950650 ± 23% +1.2% 962090 ± 31% sched_debug.cfs_rq:/.MIN_vruntime.max
0.00 +0.0% 0.00 sched_debug.cfs_rq:/.MIN_vruntime.min
141020 ± 22% +3.2% 145486 ± 36% sched_debug.cfs_rq:/.MIN_vruntime.stddev
21791 ± 4% +2.0% 22223 ± 9% sched_debug.cfs_rq:/.load.avg
86656 -19.7% 69575 sched_debug.cfs_rq:/.load.max
30245 -11.9% 26637 ± 2% sched_debug.cfs_rq:/.load.stddev
29.05 ± 6% +5.4% 30.63 ± 5% sched_debug.cfs_rq:/.load_avg.avg
232.03 ± 6% -5.4% 219.54 ± 7% sched_debug.cfs_rq:/.load_avg.max
53.60 ± 11% -0.7% 53.25 ± 13% sched_debug.cfs_rq:/.load_avg.stddev
26133 ± 24% -1.0% 25884 ± 41% sched_debug.cfs_rq:/.max_vruntime.avg
950650 ± 23% +1.2% 962090 ± 31% sched_debug.cfs_rq:/.max_vruntime.max
0.00 +0.0% 0.00 sched_debug.cfs_rq:/.max_vruntime.min
141020 ± 22% +3.2% 145486 ± 36% sched_debug.cfs_rq:/.max_vruntime.stddev
757888 ± 11% +28.0% 969742 ± 9% sched_debug.cfs_rq:/.min_vruntime.avg
1142337 ± 12% +21.3% 1385641 ± 9% sched_debug.cfs_rq:/.min_vruntime.max
340383 ± 15% +60.4% 545887 ± 9% sched_debug.cfs_rq:/.min_vruntime.min
210789 ± 13% +2.2% 215407 ± 8% sched_debug.cfs_rq:/.min_vruntime.stddev
0.35 ± 4% +9.5% 0.39 ± 8% sched_debug.cfs_rq:/.nr_running.avg
1.00 +0.0% 1.00 sched_debug.cfs_rq:/.nr_running.max
0.47 -0.6% 0.46 sched_debug.cfs_rq:/.nr_running.stddev
3.26 ± 65% +11.9% 3.65 ± 76% sched_debug.cfs_rq:/.removed.load_avg.avg
185.72 ± 8% -26.5% 136.45 ± 58% sched_debug.cfs_rq:/.removed.load_avg.max
23.47 ± 33% -9.0% 21.35 ± 65% sched_debug.cfs_rq:/.removed.load_avg.stddev
149.99 ± 65% +12.2% 168.31 ± 76% sched_debug.cfs_rq:/.removed.runnable_sum.avg
8552 ± 9% -26.3% 6299 ± 58% sched_debug.cfs_rq:/.removed.runnable_sum.max
1079 ± 33% -8.7% 984.95 ± 65% sched_debug.cfs_rq:/.removed.runnable_sum.stddev
1.33 ± 56% +19.4% 1.58 ± 82% sched_debug.cfs_rq:/.removed.util_avg.avg
84.93 ± 21% -27.0% 62.02 ± 64% sched_debug.cfs_rq:/.removed.util_avg.max
9.87 ± 30% -6.8% 9.19 ± 72% sched_debug.cfs_rq:/.removed.util_avg.stddev
13.40 ± 5% +18.5% 15.88 ± 11% sched_debug.cfs_rq:/.runnable_load_avg.avg
43.87 -2.1% 42.94 sched_debug.cfs_rq:/.runnable_load_avg.max
17.85 +3.7% 18.51 ± 2% sched_debug.cfs_rq:/.runnable_load_avg.stddev
21442 ± 5% +4.9% 22495 ± 10% sched_debug.cfs_rq:/.runnable_weight.avg
85912 -20.8% 68077 sched_debug.cfs_rq:/.runnable_weight.max
30069 ± 2% -11.8% 26527 ± 3% sched_debug.cfs_rq:/.runnable_weight.stddev
349537 ± 7% -15.3% 296038 ± 12% sched_debug.cfs_rq:/.spread0.avg
733987 ± 9% -3.0% 711942 ± 11% sched_debug.cfs_rq:/.spread0.max
-67975 +88.1% -127834 sched_debug.cfs_rq:/.spread0.min
210791 ± 13% +2.2% 215411 ± 8% sched_debug.cfs_rq:/.spread0.stddev
376.57 ± 3% +11.4% 419.65 ± 3% sched_debug.cfs_rq:/.util_avg.avg
1013 ± 3% -3.2% 981.61 sched_debug.cfs_rq:/.util_avg.max
349.07 +6.0% 369.94 sched_debug.cfs_rq:/.util_avg.stddev
230.99 ± 2% +27.2% 293.92 ± 10% sched_debug.cfs_rq:/.util_est_enqueued.avg
852.23 +2.5% 873.57 sched_debug.cfs_rq:/.util_est_enqueued.max
328.76 ± 2% +7.8% 354.36 ± 2% sched_debug.cfs_rq:/.util_est_enqueued.stddev
321875 ± 15% +12.2% 361029 ± 7% sched_debug.cpu.avg_idle.avg
1000000 +1.7% 1017141 ± 2% sched_debug.cpu.avg_idle.max
6003 ± 56% +10.3% 6624 ± 42% sched_debug.cpu.avg_idle.min
367737 ± 10% +7.9% 396924 ± 2% sched_debug.cpu.avg_idle.stddev
164232 ± 9% +4.4% 171536 ± 7% sched_debug.cpu.clock.avg
164237 ± 9% +4.4% 171541 ± 7% sched_debug.cpu.clock.max
164227 ± 9% +4.4% 171531 ± 7% sched_debug.cpu.clock.min
3.09 ± 4% -1.8% 3.03 ± 2% sched_debug.cpu.clock.stddev
164232 ± 9% +4.4% 171536 ± 7% sched_debug.cpu.clock_task.avg
164237 ± 9% +4.4% 171541 ± 7% sched_debug.cpu.clock_task.max
164227 ± 9% +4.4% 171531 ± 7% sched_debug.cpu.clock_task.min
3.09 ± 4% -1.8% 3.03 ± 2% sched_debug.cpu.clock_task.stddev
12.70 ± 2% +12.6% 14.30 sched_debug.cpu.cpu_load[0].avg
43.85 -1.8% 43.07 sched_debug.cpu.cpu_load[0].max
17.67 +3.7% 18.33 sched_debug.cpu.cpu_load[0].stddev
12.76 ± 2% +13.4% 14.48 sched_debug.cpu.cpu_load[1].avg
44.38 ± 8% -5.5% 41.92 sched_debug.cpu.cpu_load[1].max
15.37 +5.6% 16.24 ± 2% sched_debug.cpu.cpu_load[1].stddev
12.81 +13.8% 14.58 sched_debug.cpu.cpu_load[2].avg
45.12 ± 17% -10.8% 40.23 sched_debug.cpu.cpu_load[2].max
14.84 ± 4% +4.6% 15.53 sched_debug.cpu.cpu_load[2].stddev
12.81 +14.9% 14.72 sched_debug.cpu.cpu_load[3].avg
45.01 ± 11% -6.1% 42.24 ± 3% sched_debug.cpu.cpu_load[3].max
14.59 ± 3% +5.9% 15.45 sched_debug.cpu.cpu_load[3].stddev
13.10 +15.8% 15.16 sched_debug.cpu.cpu_load[4].avg
63.50 ± 10% -0.9% 62.96 ± 4% sched_debug.cpu.cpu_load[4].max
16.43 ± 4% +7.0% 17.58 sched_debug.cpu.cpu_load[4].stddev
617.86 ± 3% +9.5% 676.70 sched_debug.cpu.curr->pid.avg
5226 ± 7% +3.5% 5406 ± 5% sched_debug.cpu.curr->pid.max
994.97 ± 2% +1.2% 1007 ± 2% sched_debug.cpu.curr->pid.stddev
19393 ± 3% -0.5% 19287 sched_debug.cpu.load.avg
133909 ± 60% -48.0% 69575 sched_debug.cpu.load.max
32436 ± 19% -20.2% 25887 sched_debug.cpu.load.stddev
500000 +0.1% 500448 sched_debug.cpu.max_idle_balance_cost.avg
500000 +2.7% 513396 ± 4% sched_debug.cpu.max_idle_balance_cost.max
500000 +0.0% 500000 sched_debug.cpu.max_idle_balance_cost.min
4294 +0.0% 4294 sched_debug.cpu.next_balance.avg
4294 +0.0% 4294 sched_debug.cpu.next_balance.max
4294 +0.0% 4294 sched_debug.cpu.next_balance.min
0.00 ± 7% -0.6% 0.00 ± 7% sched_debug.cpu.next_balance.stddev
140354 ± 10% +5.3% 147746 ± 8% sched_debug.cpu.nr_load_updates.avg
147133 ± 10% +5.3% 154878 ± 8% sched_debug.cpu.nr_load_updates.max
138176 ± 11% +5.3% 145481 ± 8% sched_debug.cpu.nr_load_updates.min
1184 ± 15% -0.2% 1181 ± 4% sched_debug.cpu.nr_load_updates.stddev
0.31 ± 2% +9.7% 0.34 sched_debug.cpu.nr_running.avg
1.00 +0.0% 1.00 sched_debug.cpu.nr_running.max
0.46 +0.4% 0.46 sched_debug.cpu.nr_running.stddev
5058918 ± 11% -29.7% 3554884 ± 8% sched_debug.cpu.nr_switches.avg
7455016 ± 11% -32.1% 5065543 ± 8% sched_debug.cpu.nr_switches.max
2132318 ± 13% -8.7% 1946837 ± 11% sched_debug.cpu.nr_switches.min
1418043 ± 14% -43.1% 807437 ± 7% sched_debug.cpu.nr_switches.stddev
0.01 ± 10% +28.2% 0.01 ± 41% sched_debug.cpu.nr_uninterruptible.avg
9.00 ± 32% -1.9% 8.83 ± 17% sched_debug.cpu.nr_uninterruptible.max
-8.50 +11.3% -9.46 sched_debug.cpu.nr_uninterruptible.min
3.05 ± 14% -2.5% 2.97 ± 15% sched_debug.cpu.nr_uninterruptible.stddev
164227 ± 9% +4.4% 171531 ± 7% sched_debug.cpu_clk
996147 +0.0% 996147 sched_debug.dl_rq:.dl_bw->bw.avg
996147 +0.0% 996147 sched_debug.dl_rq:.dl_bw->bw.max
996147 +0.0% 996147 sched_debug.dl_rq:.dl_bw->bw.min
4.295e+09 +0.0% 4.295e+09 sched_debug.jiffies
164227 ± 9% +4.4% 171531 ± 7% sched_debug.ktime
950.00 +0.0% 950.00 sched_debug.rt_rq:/.rt_runtime.avg
950.00 +0.5% 955.00 sched_debug.rt_rq:/.rt_runtime.max
950.00 -0.1% 948.92 sched_debug.rt_rq:/.rt_runtime.min
0.00 ±120% -69.0% 0.00 ±173% sched_debug.rt_rq:/.rt_time.avg
0.01 ±137% -1.4% 0.01 ±173% sched_debug.rt_rq:/.rt_time.max
0.00 ±124% -10.1% 0.00 ±173% sched_debug.rt_rq:/.rt_time.stddev
164984 ± 9% +4.7% 172783 ± 7% sched_debug.sched_clk
1.00 +0.0% 1.00 sched_debug.sched_clock_stable()
4118331 +0.0% 4118331 sched_debug.sysctl_sched.sysctl_sched_features
24.00 +0.0% 24.00 sched_debug.sysctl_sched.sysctl_sched_latency
3.00 +0.0% 3.00 sched_debug.sysctl_sched.sysctl_sched_min_granularity
1.00 +0.0% 1.00 sched_debug.sysctl_sched.sysctl_sched_tunable_scaling
4.00 +0.0% 4.00 sched_debug.sysctl_sched.sysctl_sched_wakeup_granularity
18.26 -6.7 11.57 perf-profile.calltrace.cycles-pp.__local_bh_enable_ip.ip_finish_output2.ip_output.ip_queue_xmit.tcp_transmit_skb
18.26 -6.7 11.58 perf-profile.calltrace.cycles-pp.do_softirq.__local_bh_enable_ip.ip_finish_output2.ip_output.ip_queue_xmit
18.14 -6.6 11.49 perf-profile.calltrace.cycles-pp.do_softirq_own_stack.do_softirq.__local_bh_enable_ip.ip_finish_output2.ip_output
18.13 -6.6 11.49 perf-profile.calltrace.cycles-pp.__softirqentry_text_start.do_softirq_own_stack.do_softirq.__local_bh_enable_ip.ip_finish_output2
17.88 -6.5 11.35 perf-profile.calltrace.cycles-pp.net_rx_action.__softirqentry_text_start.do_softirq_own_stack.do_softirq.__local_bh_enable_ip
17.62 -6.4 11.20 perf-profile.calltrace.cycles-pp.process_backlog.net_rx_action.__softirqentry_text_start.do_softirq_own_stack.do_softirq
17.26 -6.3 10.97 perf-profile.calltrace.cycles-pp.__netif_receive_skb_core.process_backlog.net_rx_action.__softirqentry_text_start.do_softirq_own_stack
16.98 -6.2 10.81 perf-profile.calltrace.cycles-pp.ip_rcv.__netif_receive_skb_core.process_backlog.net_rx_action.__softirqentry_text_start
16.59 -6.0 10.59 perf-profile.calltrace.cycles-pp.ip_local_deliver.ip_rcv.__netif_receive_skb_core.process_backlog.net_rx_action
16.47 -5.9 10.52 perf-profile.calltrace.cycles-pp.ip_local_deliver_finish.ip_local_deliver.ip_rcv.__netif_receive_skb_core.process_backlog
16.32 -5.9 10.43 perf-profile.calltrace.cycles-pp.tcp_v4_rcv.ip_local_deliver_finish.ip_local_deliver.ip_rcv.__netif_receive_skb_core
42.22 -5.7 36.53 perf-profile.calltrace.cycles-pp.secondary_startup_64
41.67 -5.6 36.09 perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
41.67 -5.6 36.09 perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
41.67 -5.6 36.09 perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
12.36 -4.7 7.63 perf-profile.calltrace.cycles-pp.ip_output.ip_queue_xmit.tcp_transmit_skb.tcp_write_xmit.__tcp_push_pending_frames
12.20 -4.7 7.50 perf-profile.calltrace.cycles-pp.ip_finish_output2.ip_output.ip_queue_xmit.tcp_transmit_skb.tcp_write_xmit
12.04 -4.2 7.81 perf-profile.calltrace.cycles-pp.__fput.task_work_run.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
10.02 -4.0 6.01 perf-profile.calltrace.cycles-pp.__x64_sys_sendto.do_syscall_64.entry_SYSCALL_64_after_hwframe
9.98 -4.0 5.97 perf-profile.calltrace.cycles-pp.__sys_sendto.__x64_sys_sendto.do_syscall_64.entry_SYSCALL_64_after_hwframe
9.82 -3.9 5.87 perf-profile.calltrace.cycles-pp.sock_sendmsg.__sys_sendto.__x64_sys_sendto.do_syscall_64.entry_SYSCALL_64_after_hwframe
9.68 -3.9 5.78 perf-profile.calltrace.cycles-pp.tcp_sendmsg.sock_sendmsg.__sys_sendto.__x64_sys_sendto.do_syscall_64
9.78 -3.7 6.06 perf-profile.calltrace.cycles-pp.tcp_v4_do_rcv.tcp_v4_rcv.ip_local_deliver_finish.ip_local_deliver.ip_rcv
36.84 -3.7 33.18 perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
10.39 -3.6 6.77 perf-profile.calltrace.cycles-pp.sock_close.__fput.task_work_run.exit_to_usermode_loop.do_syscall_64
10.37 -3.6 6.76 perf-profile.calltrace.cycles-pp.__sock_release.sock_close.__fput.task_work_run.exit_to_usermode_loop
10.24 -3.6 6.68 perf-profile.calltrace.cycles-pp.inet_release.__sock_release.sock_close.__fput.task_work_run
8.68 -3.5 5.16 perf-profile.calltrace.cycles-pp.tcp_sendmsg_locked.tcp_sendmsg.sock_sendmsg.__sys_sendto.__x64_sys_sendto
9.97 -3.5 6.50 perf-profile.calltrace.cycles-pp.tcp_close.inet_release.__sock_release.sock_close.__fput
11.08 -3.5 7.62 perf-profile.calltrace.cycles-pp.__x64_sys_connect.do_syscall_64.entry_SYSCALL_64_after_hwframe
11.06 -3.5 7.61 perf-profile.calltrace.cycles-pp.__sys_connect.__x64_sys_connect.do_syscall_64.entry_SYSCALL_64_after_hwframe
35.81 -3.4 32.42 perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
10.72 -3.4 7.35 perf-profile.calltrace.cycles-pp.inet_stream_connect.__sys_connect.__x64_sys_connect.do_syscall_64.entry_SYSCALL_64_after_hwframe
10.65 -3.3 7.31 perf-profile.calltrace.cycles-pp.__inet_stream_connect.inet_stream_connect.__sys_connect.__x64_sys_connect.do_syscall_64
7.64 -3.1 4.50 perf-profile.calltrace.cycles-pp.__tcp_push_pending_frames.tcp_sendmsg_locked.tcp_sendmsg.sock_sendmsg.__sys_sendto
7.63 -3.1 4.50 perf-profile.calltrace.cycles-pp.tcp_write_xmit.__tcp_push_pending_frames.tcp_sendmsg_locked.tcp_sendmsg.sock_sendmsg
7.67 -2.7 4.98 perf-profile.calltrace.cycles-pp.tcp_write_xmit.__tcp_push_pending_frames.tcp_close.inet_release.__sock_release
7.68 -2.7 4.98 perf-profile.calltrace.cycles-pp.__tcp_push_pending_frames.tcp_close.inet_release.__sock_release.sock_close
6.13 -2.5 3.59 perf-profile.calltrace.cycles-pp.tcp_transmit_skb.tcp_write_xmit.__tcp_push_pending_frames.tcp_sendmsg_locked.tcp_sendmsg
5.89 -2.5 3.43 perf-profile.calltrace.cycles-pp.ip_queue_xmit.tcp_transmit_skb.tcp_write_xmit.__tcp_push_pending_frames.tcp_sendmsg_locked
6.92 -2.4 4.50 perf-profile.calltrace.cycles-pp.tcp_transmit_skb.tcp_write_xmit.__tcp_push_pending_frames.tcp_close.inet_release
6.64 -2.3 4.31 perf-profile.calltrace.cycles-pp.ip_queue_xmit.tcp_transmit_skb.tcp_write_xmit.__tcp_push_pending_frames.tcp_close
5.31 -2.2 3.08 perf-profile.calltrace.cycles-pp.tcp_rcv_established.tcp_v4_do_rcv.tcp_v4_rcv.ip_local_deliver_finish.ip_local_deliver
5.96 -2.1 3.90 perf-profile.calltrace.cycles-pp.__x64_sys_recvfrom.do_syscall_64.entry_SYSCALL_64_after_hwframe
5.92 -2.0 3.88 perf-profile.calltrace.cycles-pp.__sys_recvfrom.__x64_sys_recvfrom.do_syscall_64.entry_SYSCALL_64_after_hwframe
5.69 -2.0 3.72 perf-profile.calltrace.cycles-pp.inet_recvmsg.__sys_recvfrom.__x64_sys_recvfrom.do_syscall_64.entry_SYSCALL_64_after_hwframe
5.62 -1.9 3.68 perf-profile.calltrace.cycles-pp.tcp_recvmsg.inet_recvmsg.__sys_recvfrom.__x64_sys_recvfrom.do_syscall_64
5.25 -1.8 3.48 perf-profile.calltrace.cycles-pp.release_sock.__inet_stream_connect.inet_stream_connect.__sys_connect.__x64_sys_connect
5.22 -1.8 3.47 perf-profile.calltrace.cycles-pp.__release_sock.release_sock.__inet_stream_connect.inet_stream_connect.__sys_connect
5.16 -1.7 3.43 perf-profile.calltrace.cycles-pp.tcp_v4_do_rcv.__release_sock.release_sock.__inet_stream_connect.inet_stream_connect
4.69 -1.6 3.11 perf-profile.calltrace.cycles-pp.tcp_rcv_state_process.tcp_v4_do_rcv.__release_sock.release_sock.__inet_stream_connect
5.20 -1.5 3.72 perf-profile.calltrace.cycles-pp.tcp_v4_connect.__inet_stream_connect.inet_stream_connect.__sys_connect.__x64_sys_connect
4.43 -1.5 2.95 perf-profile.calltrace.cycles-pp.tcp_rcv_state_process.tcp_v4_do_rcv.tcp_v4_rcv.ip_local_deliver_finish.ip_local_deliver
4.09 -1.4 2.70 perf-profile.calltrace.cycles-pp.tcp_transmit_skb.tcp_rcv_state_process.tcp_v4_do_rcv.__release_sock.release_sock
2.23 -1.4 0.86 ± 3% perf-profile.calltrace.cycles-pp.tcp_data_queue.tcp_rcv_established.tcp_v4_do_rcv.tcp_v4_rcv.ip_local_deliver_finish
3.99 -1.4 2.63 perf-profile.calltrace.cycles-pp.ip_queue_xmit.tcp_transmit_skb.tcp_rcv_state_process.tcp_v4_do_rcv.__release_sock
3.91 -1.3 2.58 perf-profile.calltrace.cycles-pp.ip_output.ip_queue_xmit.tcp_transmit_skb.tcp_rcv_state_process.tcp_v4_do_rcv
3.55 -1.3 2.25 perf-profile.calltrace.cycles-pp.sk_wait_data.tcp_recvmsg.inet_recvmsg.__sys_recvfrom.__x64_sys_recvfrom
3.79 -1.3 2.50 perf-profile.calltrace.cycles-pp.ip_finish_output2.ip_output.ip_queue_xmit.tcp_transmit_skb.tcp_rcv_state_process
4.32 -1.3 3.03 perf-profile.calltrace.cycles-pp.tcp_connect.tcp_v4_connect.__inet_stream_connect.inet_stream_connect.__sys_connect
2.53 -1.1 1.42 perf-profile.calltrace.cycles-pp.wait_woken.sk_wait_data.tcp_recvmsg.inet_recvmsg.__sys_recvfrom
3.66 -1.1 2.57 perf-profile.calltrace.cycles-pp.tcp_transmit_skb.tcp_connect.tcp_v4_connect.__inet_stream_connect.inet_stream_connect
2.58 ± 2% -1.0 1.55 perf-profile.calltrace.cycles-pp.schedule_idle.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
2.55 ± 2% -1.0 1.52 perf-profile.calltrace.cycles-pp.__schedule.schedule_idle.do_idle.cpu_startup_entry.start_secondary
2.31 -1.0 1.30 perf-profile.calltrace.cycles-pp.schedule_timeout.wait_woken.sk_wait_data.tcp_recvmsg.inet_recvmsg
3.42 ± 2% -1.0 2.42 perf-profile.calltrace.cycles-pp.ip_queue_xmit.tcp_transmit_skb.tcp_connect.tcp_v4_connect.__inet_stream_connect
3.10 -1.0 2.10 perf-profile.calltrace.cycles-pp.__x64_sys_accept.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.29 -1.0 1.29 perf-profile.calltrace.cycles-pp.schedule.schedule_timeout.wait_woken.sk_wait_data.tcp_recvmsg
3.09 -1.0 2.10 perf-profile.calltrace.cycles-pp.__sys_accept4.__x64_sys_accept.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.25 -1.0 1.28 perf-profile.calltrace.cycles-pp.__schedule.schedule.schedule_timeout.wait_woken.sk_wait_data
3.03 ± 2% -0.9 2.09 perf-profile.calltrace.cycles-pp.ip_finish_output2.ip_output.ip_queue_xmit.tcp_transmit_skb.tcp_connect
3.05 ± 2% -0.9 2.13 perf-profile.calltrace.cycles-pp.ip_output.ip_queue_xmit.tcp_transmit_skb.tcp_connect.tcp_v4_connect
0.78 ± 2% -0.8 0.00 perf-profile.calltrace.cycles-pp.tcp_v4_do_rcv.__release_sock.release_sock.tcp_sendmsg.sock_sendmsg
0.75 -0.7 0.00 perf-profile.calltrace.cycles-pp.dequeue_task_fair.__schedule.schedule.schedule_timeout.wait_woken
0.75 ± 2% -0.7 0.00 perf-profile.calltrace.cycles-pp.tcp_rcv_established.tcp_v4_do_rcv.__release_sock.release_sock.tcp_sendmsg
0.74 ± 2% -0.7 0.00 perf-profile.calltrace.cycles-pp.security_socket_bind.__sys_bind.__x64_sys_bind.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.73 ± 2% -0.7 0.00 perf-profile.calltrace.cycles-pp.ip_queue_xmit.tcp_transmit_skb.tcp_rcv_established.tcp_v4_do_rcv.tcp_v4_rcv
0.72 ± 2% -0.7 0.00 perf-profile.calltrace.cycles-pp.selinux_socket_bind.security_socket_bind.__sys_bind.__x64_sys_bind.do_syscall_64
0.71 ± 2% -0.7 0.00 perf-profile.calltrace.cycles-pp.sock_def_readable.tcp_data_queue.tcp_rcv_established.tcp_v4_do_rcv.tcp_v4_rcv
1.87 ± 2% -0.7 1.19 perf-profile.calltrace.cycles-pp.tcp_child_process.tcp_v4_rcv.ip_local_deliver_finish.ip_local_deliver.ip_rcv
0.67 ± 3% -0.7 0.00 perf-profile.calltrace.cycles-pp.__wake_up_common_lock.sock_def_readable.tcp_data_queue.tcp_rcv_established.tcp_v4_do_rcv
0.65 ± 2% -0.7 0.00 perf-profile.calltrace.cycles-pp.destroy_inode.__dentry_kill.dentry_kill.dput.__fput
0.65 -0.6 0.00 perf-profile.calltrace.cycles-pp.tcp_ack.tcp_rcv_state_process.tcp_v4_do_rcv.tcp_v4_rcv.ip_local_deliver_finish
0.64 ± 2% -0.6 0.00 perf-profile.calltrace.cycles-pp.__destroy_inode.destroy_inode.__dentry_kill.dentry_kill.dput
0.64 ± 2% -0.6 0.00 perf-profile.calltrace.cycles-pp.sock_alloc_file.__sys_accept4.__x64_sys_accept.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.63 -0.6 0.00 perf-profile.calltrace.cycles-pp.tcp_send_fin.tcp_close.inet_release.__sock_release.sock_close
0.63 ± 2% -0.6 0.00 perf-profile.calltrace.cycles-pp.__wake_up_common.__wake_up_common_lock.sock_def_readable.tcp_data_queue.tcp_rcv_established
0.62 ± 2% -0.6 0.00 perf-profile.calltrace.cycles-pp.tcp_create_openreq_child.tcp_v4_syn_recv_sock.tcp_check_req.tcp_v4_rcv.ip_local_deliver_finish
0.61 ± 2% -0.6 0.00 perf-profile.calltrace.cycles-pp.try_to_wake_up.__wake_up_common.__wake_up_common_lock.sock_def_readable.tcp_data_queue
0.61 ± 2% -0.6 0.00 perf-profile.calltrace.cycles-pp.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
0.60 ± 2% -0.6 0.00 perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.cpuidle_enter_state.do_idle.cpu_startup_entry
1.81 ± 2% -0.6 1.22 perf-profile.calltrace.cycles-pp.inet_accept.__sys_accept4.__x64_sys_accept.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.59 -0.6 0.00 perf-profile.calltrace.cycles-pp.inet_create.__sock_create.__sys_socket.__x64_sys_socket.do_syscall_64
0.58 ± 2% -0.6 0.00 perf-profile.calltrace.cycles-pp.ttwu_do_activate.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock
1.71 -0.6 1.13 perf-profile.calltrace.cycles-pp.tcp_data_queue.tcp_rcv_state_process.tcp_v4_do_rcv.tcp_v4_rcv.ip_local_deliver_finish
1.67 -0.6 1.10 perf-profile.calltrace.cycles-pp.tcp_fin.tcp_data_queue.tcp_rcv_state_process.tcp_v4_do_rcv.tcp_v4_rcv
0.56 ± 2% -0.6 0.00 perf-profile.calltrace.cycles-pp.tcp_v4_send_synack.tcp_conn_request.tcp_rcv_state_process.tcp_v4_do_rcv.tcp_v4_rcv
0.56 ± 4% -0.6 0.00 perf-profile.calltrace.cycles-pp.set_next_entity.pick_next_task_fair.__schedule.schedule_idle.do_idle
0.55 ± 2% -0.6 0.00 perf-profile.calltrace.cycles-pp.tcp_ack.tcp_rcv_established.tcp_v4_do_rcv.__release_sock.release_sock
0.55 ± 2% -0.6 0.00 perf-profile.calltrace.cycles-pp.__kfree_skb.tcp_recvmsg.inet_recvmsg.__sys_recvfrom.__x64_sys_recvfrom
1.77 -0.5 1.23 perf-profile.calltrace.cycles-pp.__x64_sys_socket.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.54 ± 3% -0.5 0.00 perf-profile.calltrace.cycles-pp.switch_mm.__schedule.schedule.schedule_timeout.wait_woken
1.75 -0.5 1.22 perf-profile.calltrace.cycles-pp.__sys_socket.__x64_sys_socket.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.54 ± 2% -0.5 0.00 perf-profile.calltrace.cycles-pp.__inet_bind.__sys_bind.__x64_sys_bind.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.37 -0.5 0.83 perf-profile.calltrace.cycles-pp.__x64_sys_bind.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.53 ± 3% -0.5 0.00 perf-profile.calltrace.cycles-pp.security_inode_free.__destroy_inode.destroy_inode.__dentry_kill.dentry_kill
1.35 -0.5 0.82 perf-profile.calltrace.cycles-pp.__sys_bind.__x64_sys_bind.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.53 ± 2% -0.5 0.00 perf-profile.calltrace.cycles-pp.enqueue_task_fair.ttwu_do_activate.try_to_wake_up.autoremove_wake_function.__wake_up_common
0.53 ± 2% -0.5 0.00 perf-profile.calltrace.cycles-pp.ip_output.ip_queue_xmit.tcp_transmit_skb.tcp_rcv_established.tcp_v4_do_rcv
1.70 -0.5 1.18 perf-profile.calltrace.cycles-pp.tcp_conn_request.tcp_rcv_state_process.tcp_v4_do_rcv.tcp_v4_rcv.ip_local_deliver_finish
1.35 ± 2% -0.5 0.83 perf-profile.calltrace.cycles-pp.sock_def_readable.tcp_child_process.tcp_v4_rcv.ip_local_deliver_finish.ip_local_deliver
0.52 -0.5 0.00 perf-profile.calltrace.cycles-pp.sel_netport_sid.selinux_socket_bind.security_socket_bind.__sys_bind.__x64_sys_bind
0.52 -0.5 0.00 perf-profile.calltrace.cycles-pp.ttwu_do_activate.try_to_wake_up.__wake_up_common.__wake_up_common_lock.sock_def_readable
1.27 ± 2% -0.5 0.78 ± 2% perf-profile.calltrace.cycles-pp.__wake_up_common_lock.sock_def_readable.tcp_child_process.tcp_v4_rcv.ip_local_deliver_finish
1.42 -0.5 0.94 perf-profile.calltrace.cycles-pp.inet_csk_accept.inet_accept.__sys_accept4.__x64_sys_accept.do_syscall_64
1.20 ± 2% -0.5 0.74 perf-profile.calltrace.cycles-pp.__wake_up_common.__wake_up_common_lock.sock_def_readable.tcp_child_process.tcp_v4_rcv
0.85 ± 3% -0.5 0.39 ± 57% perf-profile.calltrace.cycles-pp.pick_next_task_fair.__schedule.schedule_idle.do_idle.cpu_startup_entry
1.19 ± 2% -0.5 0.73 perf-profile.calltrace.cycles-pp.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.sock_def_readable.tcp_child_process
1.36 -0.5 0.91 perf-profile.calltrace.cycles-pp.syscall_return_via_sysret
1.17 ± 2% -0.4 0.72 perf-profile.calltrace.cycles-pp.try_to_wake_up.autoremove_wake_function.__wake_up_common.__wake_up_common_lock.sock_def_readable
1.27 -0.4 0.82 perf-profile.calltrace.cycles-pp.dput.__fput.task_work_run.exit_to_usermode_loop.do_syscall_64
1.32 ± 3% -0.4 0.88 perf-profile.calltrace.cycles-pp.__entry_trampoline_start
1.22 -0.4 0.79 perf-profile.calltrace.cycles-pp.sock_def_readable.tcp_rcv_established.tcp_v4_do_rcv.tcp_v4_rcv.ip_local_deliver_finish
1.35 -0.4 0.94 perf-profile.calltrace.cycles-pp.tcp_check_req.tcp_v4_rcv.ip_local_deliver_finish.ip_local_deliver.ip_rcv
1.17 -0.4 0.76 perf-profile.calltrace.cycles-pp.dentry_kill.dput.__fput.task_work_run.exit_to_usermode_loop
1.15 -0.4 0.74 perf-profile.calltrace.cycles-pp.__wake_up_common_lock.sock_def_readable.tcp_rcv_established.tcp_v4_do_rcv.tcp_v4_rcv
1.22 -0.4 0.81 ± 2% perf-profile.calltrace.cycles-pp.__sock_create.__sys_socket.__x64_sys_socket.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.11 -0.4 0.71 ± 3% perf-profile.calltrace.cycles-pp.tcp_fin.tcp_data_queue.tcp_rcv_established.tcp_v4_do_rcv.tcp_v4_rcv
1.14 -0.4 0.74 perf-profile.calltrace.cycles-pp.__dentry_kill.dentry_kill.dput.__fput.task_work_run
0.39 ± 57% -0.4 0.00 perf-profile.calltrace.cycles-pp.__indirect_thunk_start
1.06 -0.4 0.67 ± 2% perf-profile.calltrace.cycles-pp.sock_def_wakeup.tcp_fin.tcp_data_queue.tcp_rcv_established.tcp_v4_do_rcv
1.08 -0.4 0.69 perf-profile.calltrace.cycles-pp.__wake_up_common.__wake_up_common_lock.sock_def_readable.tcp_rcv_established.tcp_v4_do_rcv
1.13 -0.4 0.74 ± 2% perf-profile.calltrace.cycles-pp.tcp_v4_syn_recv_sock.tcp_check_req.tcp_v4_rcv.ip_local_deliver_finish.ip_local_deliver
1.05 ± 2% -0.4 0.67 perf-profile.calltrace.cycles-pp.try_to_wake_up.__wake_up_common.__wake_up_common_lock.sock_def_readable.tcp_rcv_established
0.91 -0.4 0.53 perf-profile.calltrace.cycles-pp.switch_mm_irqs_off.__schedule.schedule_idle.do_idle.cpu_startup_entry
0.99 -0.4 0.63 ± 2% perf-profile.calltrace.cycles-pp.__wake_up_common_lock.sock_def_wakeup.tcp_fin.tcp_data_queue.tcp_rcv_established
0.92 -0.4 0.56 ± 2% perf-profile.calltrace.cycles-pp.release_sock.tcp_sendmsg.sock_sendmsg.__sys_sendto.__x64_sys_sendto
0.91 -0.4 0.55 ± 2% perf-profile.calltrace.cycles-pp.tcp_transmit_skb.tcp_rcv_established.tcp_v4_do_rcv.tcp_v4_rcv.ip_local_deliver_finish
0.87 ± 2% -0.3 0.53 ± 2% perf-profile.calltrace.cycles-pp.__release_sock.release_sock.tcp_sendmsg.sock_sendmsg.__sys_sendto
0.93 -0.3 0.59 ± 2% perf-profile.calltrace.cycles-pp.__wake_up_common.__wake_up_common_lock.sock_def_wakeup.tcp_fin.tcp_data_queue
0.89 -0.3 0.57 ± 2% perf-profile.calltrace.cycles-pp.try_to_wake_up.__wake_up_common.__wake_up_common_lock.sock_def_wakeup.tcp_fin
0.45 ± 58% -0.3 0.13 ±173% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_kernel.secondary_startup_64
0.44 ± 58% -0.3 0.13 ±173% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_kernel
1.02 -0.3 0.71 perf-profile.calltrace.cycles-pp.release_sock.tcp_close.inet_release.__sock_release.sock_close
0.82 -0.3 0.51 perf-profile.calltrace.cycles-pp.menu_select.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
0.98 -0.3 0.68 perf-profile.calltrace.cycles-pp.__release_sock.release_sock.tcp_close.inet_release.__sock_release
0.89 ± 2% -0.3 0.61 perf-profile.calltrace.cycles-pp.schedule_timeout.inet_csk_accept.inet_accept.__sys_accept4.__x64_sys_accept
0.88 ± 2% -0.3 0.61 perf-profile.calltrace.cycles-pp.schedule.schedule_timeout.inet_csk_accept.inet_accept.__sys_accept4
0.93 -0.3 0.66 perf-profile.calltrace.cycles-pp.tcp_v4_do_rcv.__release_sock.release_sock.tcp_close.inet_release
0.93 -0.3 0.66 perf-profile.calltrace.cycles-pp.tcp_rcv_state_process.tcp_v4_do_rcv.__release_sock.release_sock.tcp_close
0.87 ± 2% -0.3 0.60 perf-profile.calltrace.cycles-pp.__schedule.schedule.schedule_timeout.inet_csk_accept.inet_accept
0.26 ±100% -0.3 0.00 perf-profile.calltrace.cycles-pp.inet_csk_clone_lock.tcp_create_openreq_child.tcp_v4_syn_recv_sock.tcp_check_req.tcp_v4_rcv
0.48 ± 58% -0.2 0.26 ±100% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_kernel.secondary_startup_64
0.48 ± 58% -0.2 0.26 ±100% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_kernel.secondary_startup_64
0.48 ± 58% -0.2 0.26 ±100% perf-profile.calltrace.cycles-pp.start_kernel.secondary_startup_64
0.00 +0.1 0.12 ±173% perf-profile.calltrace.cycles-pp.release_sock.sk_wait_data.tcp_recvmsg.inet_recvmsg.__sys_recvfrom
0.00 +1.5 1.51 perf-profile.calltrace.cycles-pp.mnt_get_count.mntput_no_expire.task_work_run.exit_to_usermode_loop.do_syscall_64
47.12 +9.2 56.32 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
46.93 +9.3 56.21 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
12.24 +21.3 33.53 perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
12.21 +21.3 33.51 perf-profile.calltrace.cycles-pp.task_work_run.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +23.5 23.48 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.mntput_no_expire.task_work_run.exit_to_usermode_loop
0.00 +23.8 23.77 perf-profile.calltrace.cycles-pp._raw_spin_lock.mntput_no_expire.task_work_run.exit_to_usermode_loop.do_syscall_64
0.00 +25.5 25.47 perf-profile.calltrace.cycles-pp.mntput_no_expire.task_work_run.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
22.48 -7.9 14.54 perf-profile.children.cycles-pp.tcp_transmit_skb
21.39 -7.5 13.84 perf-profile.children.cycles-pp.ip_queue_xmit
20.72 -7.3 13.39 perf-profile.children.cycles-pp.ip_output
19.92 -7.1 12.82 perf-profile.children.cycles-pp.ip_finish_output2
19.03 -6.8 12.26 perf-profile.children.cycles-pp.__softirqentry_text_start
18.65 -6.7 11.92 perf-profile.children.cycles-pp.__local_bh_enable_ip
18.48 -6.7 11.82 perf-profile.children.cycles-pp.do_softirq
18.34 -6.6 11.72 perf-profile.children.cycles-pp.do_softirq_own_stack
18.14 -6.5 11.61 perf-profile.children.cycles-pp.net_rx_action
17.88 -6.4 11.46 perf-profile.children.cycles-pp.process_backlog
17.46 -6.3 11.18 perf-profile.children.cycles-pp.__netif_receive_skb_core
17.21 -6.2 11.06 perf-profile.children.cycles-pp.ip_rcv
16.82 -6.0 10.84 perf-profile.children.cycles-pp.ip_local_deliver
17.08 -6.0 11.11 perf-profile.children.cycles-pp.tcp_v4_do_rcv
16.67 -5.9 10.73 perf-profile.children.cycles-pp.ip_local_deliver_finish
16.58 -5.9 10.69 perf-profile.children.cycles-pp.tcp_v4_rcv
15.33 -5.8 9.49 perf-profile.children.cycles-pp.tcp_write_xmit
15.32 -5.8 9.49 perf-profile.children.cycles-pp.__tcp_push_pending_frames
42.27 -5.7 36.56 perf-profile.children.cycles-pp.do_idle
42.23 -5.7 36.53 perf-profile.children.cycles-pp.cpu_startup_entry
42.22 -5.7 36.53 perf-profile.children.cycles-pp.secondary_startup_64
41.67 -5.6 36.09 perf-profile.children.cycles-pp.start_secondary
12.07 -4.2 7.83 perf-profile.children.cycles-pp.__fput
10.00 -4.0 5.99 perf-profile.children.cycles-pp.__sys_sendto
10.02 -4.0 6.01 perf-profile.children.cycles-pp.__x64_sys_sendto
9.83 -4.0 5.88 perf-profile.children.cycles-pp.sock_sendmsg
9.69 -3.9 5.78 perf-profile.children.cycles-pp.tcp_sendmsg
37.39 -3.8 33.63 perf-profile.children.cycles-pp.cpuidle_enter_state
10.39 -3.6 6.77 perf-profile.children.cycles-pp.sock_close
10.38 -3.6 6.77 perf-profile.children.cycles-pp.__sock_release
10.25 -3.6 6.69 perf-profile.children.cycles-pp.inet_release
8.70 -3.5 5.17 perf-profile.children.cycles-pp.tcp_sendmsg_locked
36.34 -3.5 32.85 perf-profile.children.cycles-pp.intel_idle
10.00 -3.5 6.51 perf-profile.children.cycles-pp.tcp_close
10.52 -3.5 7.04 perf-profile.children.cycles-pp.tcp_rcv_state_process
11.08 -3.5 7.62 perf-profile.children.cycles-pp.__x64_sys_connect
11.07 -3.4 7.62 perf-profile.children.cycles-pp.__sys_connect
10.73 -3.4 7.36 perf-profile.children.cycles-pp.inet_stream_connect
10.66 -3.3 7.32 perf-profile.children.cycles-pp.__inet_stream_connect
8.02 -2.5 5.52 perf-profile.children.cycles-pp.release_sock
6.45 -2.5 4.00 perf-profile.children.cycles-pp.tcp_rcv_established
7.49 -2.3 5.15 perf-profile.children.cycles-pp.__release_sock
5.75 -2.3 3.46 perf-profile.children.cycles-pp.__schedule
5.97 -2.1 3.91 perf-profile.children.cycles-pp.__x64_sys_recvfrom
5.94 -2.1 3.89 perf-profile.children.cycles-pp.__sys_recvfrom
5.70 -2.0 3.73 perf-profile.children.cycles-pp.inet_recvmsg
5.64 -1.9 3.69 perf-profile.children.cycles-pp.tcp_recvmsg
4.27 -1.8 2.47 perf-profile.children.cycles-pp.__wake_up_common_lock
4.16 -1.7 2.45 perf-profile.children.cycles-pp.tcp_data_queue
3.97 -1.7 2.27 perf-profile.children.cycles-pp.__wake_up_common
3.82 -1.6 2.20 perf-profile.children.cycles-pp.try_to_wake_up
3.42 -1.5 1.91 perf-profile.children.cycles-pp.sock_def_readable
5.21 -1.5 3.73 perf-profile.children.cycles-pp.tcp_v4_connect
3.56 -1.3 2.25 perf-profile.children.cycles-pp.sk_wait_data
4.33 -1.3 3.03 perf-profile.children.cycles-pp.tcp_connect
3.21 -1.3 1.92 perf-profile.children.cycles-pp.schedule
2.56 -1.1 1.44 perf-profile.children.cycles-pp.wait_woken
2.60 ± 2% -1.0 1.56 perf-profile.children.cycles-pp.schedule_idle
3.11 -1.0 2.10 perf-profile.children.cycles-pp.__x64_sys_accept
3.10 -1.0 2.10 perf-profile.children.cycles-pp.__sys_accept4
2.79 -1.0 1.82 perf-profile.children.cycles-pp.tcp_fin
1.76 -0.7 1.03 perf-profile.children.cycles-pp.ttwu_do_activate
2.04 -0.7 1.35 perf-profile.children.cycles-pp.tcp_ack
1.88 ± 2% -0.7 1.19 perf-profile.children.cycles-pp.tcp_child_process
1.70 -0.7 1.03 perf-profile.children.cycles-pp.switch_mm_irqs_off
1.63 -0.7 0.97 perf-profile.children.cycles-pp.enqueue_task_fair
1.82 ± 2% -0.6 1.23 perf-profile.children.cycles-pp.inet_accept
1.44 -0.6 0.86 ± 2% perf-profile.children.cycles-pp.enqueue_entity
1.78 -0.6 1.23 perf-profile.children.cycles-pp.__x64_sys_socket
1.75 -0.5 1.22 perf-profile.children.cycles-pp.__sys_socket
1.35 -0.5 0.82 perf-profile.children.cycles-pp.__sys_bind
1.37 -0.5 0.83 perf-profile.children.cycles-pp.__x64_sys_bind
1.70 -0.5 1.18 perf-profile.children.cycles-pp.tcp_conn_request
1.69 -0.5 1.18 perf-profile.children.cycles-pp.__dev_queue_xmit
1.14 ± 2% -0.5 0.64 perf-profile.children.cycles-pp.pick_next_task_fair
1.43 -0.5 0.95 perf-profile.children.cycles-pp.inet_csk_accept
1.20 ± 2% -0.5 0.74 perf-profile.children.cycles-pp.autoremove_wake_function
1.37 -0.5 0.92 perf-profile.children.cycles-pp.syscall_return_via_sysret
1.28 -0.4 0.83 perf-profile.children.cycles-pp.dput
1.33 ± 3% -0.4 0.89 perf-profile.children.cycles-pp.__entry_trampoline_start
1.51 ± 2% -0.4 1.08 ± 2% perf-profile.children.cycles-pp._raw_spin_lock_bh
1.21 -0.4 0.78 ± 2% perf-profile.children.cycles-pp.sock_def_wakeup
1.18 -0.4 0.76 perf-profile.children.cycles-pp.dentry_kill
1.36 -0.4 0.95 perf-profile.children.cycles-pp.tcp_check_req
1.09 -0.4 0.68 ± 2% perf-profile.children.cycles-pp.__kfree_skb
1.15 -0.4 0.74 perf-profile.children.cycles-pp.__dentry_kill
1.08 -0.4 0.68 ± 2% perf-profile.children.cycles-pp.dequeue_task_fair
1.18 -0.4 0.77 ± 2% perf-profile.children.cycles-pp.tcp_clean_rtx_queue
1.20 -0.4 0.80 perf-profile.children.cycles-pp._raw_spin_lock_irqsave
1.22 -0.4 0.82 perf-profile.children.cycles-pp.__sock_create
1.14 -0.4 0.75 ± 2% perf-profile.children.cycles-pp.tcp_v4_syn_recv_sock
0.99 -0.4 0.60 ± 2% perf-profile.children.cycles-pp.dequeue_entity
1.12 -0.4 0.76 ± 2% perf-profile.children.cycles-pp.__alloc_skb
1.20 ± 2% -0.4 0.84 perf-profile.children.cycles-pp.dev_hard_start_xmit
1.14 -0.3 0.79 perf-profile.children.cycles-pp.loopback_xmit
0.70 ± 2% -0.3 0.35 ± 3% perf-profile.children.cycles-pp.select_task_rq_fair
0.95 ± 2% -0.3 0.62 perf-profile.children.cycles-pp.tcp_filter
0.95 ± 2% -0.3 0.62 perf-profile.children.cycles-pp.sk_filter_trim_cap
0.85 -0.3 0.53 perf-profile.children.cycles-pp.menu_select
0.80 ± 2% -0.3 0.48 perf-profile.children.cycles-pp.mod_timer
0.84 ± 2% -0.3 0.53 perf-profile.children.cycles-pp.update_load_avg
0.90 ± 2% -0.3 0.59 perf-profile.children.cycles-pp.security_sock_rcv_skb
0.92 ± 2% -0.3 0.61 perf-profile.children.cycles-pp.selinux_socket_sock_rcv_skb
1.06 -0.3 0.75 perf-profile.children.cycles-pp.sock_alloc_file
0.78 -0.3 0.48 ± 2% perf-profile.children.cycles-pp.sk_reset_timer
0.81 ± 3% -0.3 0.51 perf-profile.children.cycles-pp.switch_mm
0.91 -0.3 0.61 ± 2% perf-profile.children.cycles-pp.__inet_lookup_established
0.73 ± 2% -0.3 0.44 ± 3% perf-profile.children.cycles-pp.selinux_socket_bind
0.74 ± 2% -0.3 0.45 ± 2% perf-profile.children.cycles-pp.security_socket_bind
0.68 -0.3 0.40 ± 2% perf-profile.children.cycles-pp.inet_csk_destroy_sock
1.07 -0.3 0.79 ± 2% perf-profile.children.cycles-pp.lock_sock_nested
0.88 -0.3 0.61 perf-profile.children.cycles-pp.nf_hook_slow
0.63 ± 3% -0.3 0.38 ± 2% perf-profile.children.cycles-pp.__slab_free
0.65 ± 2% -0.2 0.41 ± 2% perf-profile.children.cycles-pp.destroy_inode
0.65 ± 2% -0.2 0.41 ± 2% perf-profile.children.cycles-pp.__destroy_inode
0.55 -0.2 0.31 ± 5% perf-profile.children.cycles-pp.update_curr
0.55 ± 3% -0.2 0.31 perf-profile.children.cycles-pp.__inet_bind
0.72 ± 2% -0.2 0.49 ± 2% perf-profile.children.cycles-pp.__indirect_thunk_start
0.57 ± 3% -0.2 0.34 perf-profile.children.cycles-pp.load_new_mm_cr3
0.53 ± 3% -0.2 0.30 ± 3% perf-profile.children.cycles-pp.security_inode_free
0.64 -0.2 0.41 perf-profile.children.cycles-pp.tcp_send_fin
0.70 ± 2% -0.2 0.48 ± 2% perf-profile.children.cycles-pp.sel_netport_sid
0.70 ± 2% -0.2 0.47 perf-profile.children.cycles-pp.selinux_ip_postroute
0.59 ± 4% -0.2 0.37 perf-profile.children.cycles-pp.set_next_entity
0.56 -0.2 0.34 ± 3% perf-profile.children.cycles-pp.skb_release_all
0.55 -0.2 0.34 ± 2% perf-profile.children.cycles-pp.skb_release_head_state
0.53 ± 2% -0.2 0.32 perf-profile.children.cycles-pp.ttwu_do_wakeup
0.52 ± 2% -0.2 0.30 perf-profile.children.cycles-pp.check_preempt_curr
0.61 -0.2 0.40 ± 2% perf-profile.children.cycles-pp.kmem_cache_alloc
0.61 -0.2 0.40 ± 5% perf-profile.children.cycles-pp.sched_clock_cpu
0.61 ± 2% -0.2 0.40 perf-profile.children.cycles-pp.kmem_cache_free
0.63 ± 2% -0.2 0.42 ± 3% perf-profile.children.cycles-pp.tcp_create_openreq_child
0.61 ± 2% -0.2 0.40 ± 2% perf-profile.children.cycles-pp.sk_stream_alloc_skb
0.46 -0.2 0.26 perf-profile.children.cycles-pp.__sk_mem_reduce_allocated
0.56 -0.2 0.36 ± 5% perf-profile.children.cycles-pp.sched_clock
0.59 -0.2 0.39 ± 2% perf-profile.children.cycles-pp.sock_alloc
0.55 ± 2% -0.2 0.35 ± 2% perf-profile.children.cycles-pp.d_instantiate
0.59 -0.2 0.41 perf-profile.children.cycles-pp.inet_create
0.48 ± 2% -0.2 0.30 ± 2% perf-profile.children.cycles-pp.sk_forced_mem_schedule
0.54 ± 2% -0.2 0.35 ± 4% perf-profile.children.cycles-pp.netif_rx_internal
0.47 ± 4% -0.2 0.29 ± 3% perf-profile.children.cycles-pp.tcp_event_new_data_sent
0.47 -0.2 0.29 perf-profile.children.cycles-pp.percpu_counter_add_batch
0.53 ± 2% -0.2 0.34 ± 4% perf-profile.children.cycles-pp.native_sched_clock
0.54 -0.2 0.37 ± 2% perf-profile.children.cycles-pp.new_inode_pseudo
0.41 ± 2% -0.2 0.23 perf-profile.children.cycles-pp.security_d_instantiate
0.29 -0.2 0.11 ± 4% perf-profile.children.cycles-pp.select_idle_sibling
0.50 -0.2 0.33 ± 3% perf-profile.children.cycles-pp.alloc_inode
0.43 -0.2 0.26 ± 2% perf-profile.children.cycles-pp.__x64_sys_close
0.53 ± 3% -0.2 0.37 ± 2% perf-profile.children.cycles-pp.selinux_ip_postroute_compat
0.42 ± 2% -0.2 0.26 perf-profile.children.cycles-pp.resched_curr
0.53 ± 2% -0.2 0.36 ± 4% perf-profile.children.cycles-pp.rcu_process_callbacks
0.51 ± 2% -0.2 0.35 ± 2% perf-profile.children.cycles-pp.inet_csk_clone_lock
0.44 ± 2% -0.2 0.28 ± 5% perf-profile.children.cycles-pp.ip_local_out
0.35 ± 3% -0.2 0.19 ± 3% perf-profile.children.cycles-pp.selinux_inode_free_security
0.48 -0.2 0.33 ± 3% perf-profile.children.cycles-pp.tcp_init_transfer
0.53 ± 2% -0.2 0.37 ± 5% perf-profile.children.cycles-pp.ip_route_output_key_hash
0.57 ± 2% -0.2 0.42 ± 2% perf-profile.children.cycles-pp.tcp_v4_send_synack
0.47 ± 4% -0.2 0.32 perf-profile.children.cycles-pp.dst_release
0.45 -0.2 0.30 ± 2% perf-profile.children.cycles-pp.selinux_sock_rcv_skb_compat
0.42 ± 3% -0.1 0.27 ± 2% perf-profile.children.cycles-pp.tcp_send_ack
0.41 ± 3% -0.1 0.26 ± 2% perf-profile.children.cycles-pp.tcp_get_metrics
0.38 ± 4% -0.1 0.23 ± 2% perf-profile.children.cycles-pp.inet_csk_get_port
0.35 ± 3% -0.1 0.21 ± 3% perf-profile.children.cycles-pp.ret_from_fork
0.35 ± 3% -0.1 0.21 ± 3% perf-profile.children.cycles-pp.kthread
0.46 ± 2% -0.1 0.32 ± 2% perf-profile.children.cycles-pp.sk_clone_lock
0.39 -0.1 0.25 perf-profile.children.cycles-pp.__call_rcu
0.35 ± 2% -0.1 0.21 ± 3% perf-profile.children.cycles-pp.tcp_v4_destroy_sock
0.36 ± 3% -0.1 0.22 ± 3% perf-profile.children.cycles-pp.__tcp_get_metrics
0.50 ± 3% -0.1 0.36 ± 5% perf-profile.children.cycles-pp.ip_route_output_key_hash_rcu
0.52 -0.1 0.38 ± 2% perf-profile.children.cycles-pp.lock_timer_base
0.41 ± 2% -0.1 0.28 ± 6% perf-profile.children.cycles-pp.__ip_local_out
0.44 -0.1 0.31 ± 4% perf-profile.children.cycles-pp.avc_has_perm
0.33 ± 2% -0.1 0.20 ± 4% perf-profile.children.cycles-pp.sock_rfree
0.31 ± 3% -0.1 0.18 ± 5% perf-profile.children.cycles-pp.smpboot_thread_fn
0.31 ± 2% -0.1 0.18 ± 3% perf-profile.children.cycles-pp.filp_close
0.33 ± 2% -0.1 0.21 ± 2% perf-profile.children.cycles-pp.inode_doinit_with_dentry
0.36 ± 2% -0.1 0.24 ± 3% perf-profile.children.cycles-pp.tcp_finish_connect
0.97 ± 3% -0.1 0.84 perf-profile.children.cycles-pp.smp_apic_timer_interrupt
0.28 ± 4% -0.1 0.16 ± 8% perf-profile.children.cycles-pp.run_ksoftirqd
0.97 ± 2% -0.1 0.85 perf-profile.children.cycles-pp.apic_timer_interrupt
0.28 -0.1 0.16 ± 2% perf-profile.children.cycles-pp.finish_task_switch
0.38 ± 3% -0.1 0.26 ± 3% perf-profile.children.cycles-pp.__skb_clone
0.36 -0.1 0.24 ± 7% perf-profile.children.cycles-pp.__sk_dst_check
0.34 -0.1 0.23 ± 4% perf-profile.children.cycles-pp.__kmalloc_reserve
0.34 ± 2% -0.1 0.23 ± 6% perf-profile.children.cycles-pp.ipv4_dst_check
0.32 ± 2% -0.1 0.20 ± 2% perf-profile.children.cycles-pp.update_cfs_group
0.31 -0.1 0.19 ± 4% perf-profile.children.cycles-pp.tcp_time_wait
0.35 ± 3% -0.1 0.23 ± 3% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
0.36 ± 3% -0.1 0.24 ± 3% perf-profile.children.cycles-pp.___might_sleep
0.26 ± 3% -0.1 0.15 ± 2% perf-profile.children.cycles-pp.kfree
0.33 -0.1 0.21 ± 5% perf-profile.children.cycles-pp.__kmalloc_node_track_caller
0.28 -0.1 0.17 ± 4% perf-profile.children.cycles-pp.__list_del_entry_valid
0.29 ± 5% -0.1 0.17 ± 2% perf-profile.children.cycles-pp.poll_idle
0.33 ± 4% -0.1 0.21 ± 2% perf-profile.children.cycles-pp.read_tsc
0.32 ± 3% -0.1 0.21 ± 8% perf-profile.children.cycles-pp.ip_route_output_flow
0.35 -0.1 0.24 ± 5% perf-profile.children.cycles-pp.__sk_destruct
0.29 ± 2% -0.1 0.18 ± 2% perf-profile.children.cycles-pp.ktime_get_with_offset
0.55 ± 28% -0.1 0.45 ± 26% perf-profile.children.cycles-pp.start_kernel
0.26 ± 3% -0.1 0.15 ± 2% perf-profile.children.cycles-pp.tcp_update_metrics
0.22 ± 3% -0.1 0.12 ± 3% perf-profile.children.cycles-pp.fput
0.35 ± 2% -0.1 0.24 ± 2% perf-profile.children.cycles-pp.sockfd_lookup_light
0.30 ± 2% -0.1 0.20 ± 2% perf-profile.children.cycles-pp.selinux_parse_skb
0.24 ± 4% -0.1 0.14 ± 5% perf-profile.children.cycles-pp.__list_add_valid
0.25 ± 4% -0.1 0.15 ± 3% perf-profile.children.cycles-pp.skb_copy_datagram_iter
0.33 ± 2% -0.1 0.23 perf-profile.children.cycles-pp.get_empty_filp
0.28 ± 2% -0.1 0.18 ± 2% perf-profile.children.cycles-pp.inode_init_always
0.23 ± 3% -0.1 0.13 ± 3% perf-profile.children.cycles-pp.___perf_sw_event
0.66 -0.1 0.56 perf-profile.children.cycles-pp.tcp_done
0.36 -0.1 0.26 ± 2% perf-profile.children.cycles-pp.alloc_file
0.32 -0.1 0.22 ± 4% perf-profile.children.cycles-pp.account_entity_enqueue
0.29 ± 3% -0.1 0.19 ± 4% perf-profile.children.cycles-pp.tick_nohz_idle_exit
0.21 ± 3% -0.1 0.11 ± 3% perf-profile.children.cycles-pp.remove_wait_queue
0.27 -0.1 0.17 perf-profile.children.cycles-pp.tcp_init_metrics
0.27 ± 3% -0.1 0.17 perf-profile.children.cycles-pp.reweight_entity
0.24 ± 3% -0.1 0.15 ± 4% perf-profile.children.cycles-pp.update_rq_clock
0.21 ± 5% -0.1 0.11 ± 7% perf-profile.children.cycles-pp.inet_csk_route_req
0.19 ± 2% -0.1 0.10 ± 5% perf-profile.children.cycles-pp.available_idle_cpu
0.19 ± 2% -0.1 0.10 ± 5% perf-profile.children.cycles-pp.task_work_add
0.24 -0.1 0.15 ± 3% perf-profile.children.cycles-pp.inet_csk_reqsk_queue_hash_add
0.25 ± 2% -0.1 0.16 perf-profile.children.cycles-pp.sk_prot_alloc
0.29 -0.1 0.21 ± 2% perf-profile.children.cycles-pp.skb_release_data
0.23 ± 7% -0.1 0.15 ± 5% perf-profile.children.cycles-pp.inet_csk_bind_conflict
0.21 ± 5% -0.1 0.13 ± 3% perf-profile.children.cycles-pp._copy_to_iter
0.39 ± 2% -0.1 0.30 ± 4% perf-profile.children.cycles-pp.__x64_sys_getsockopt
0.33 -0.1 0.24 perf-profile.children.cycles-pp.kmem_cache_alloc_node
0.27 ± 4% -0.1 0.18 perf-profile.children.cycles-pp.ipv4_mtu
0.24 ± 2% -0.1 0.15 ± 3% perf-profile.children.cycles-pp.rcu_segcblist_enqueue
0.22 ± 3% -0.1 0.14 ± 3% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
0.19 ± 5% -0.1 0.11 ± 4% perf-profile.children.cycles-pp._copy_from_iter_full
0.31 ± 2% -0.1 0.23 ± 3% perf-profile.children.cycles-pp.enqueue_to_backlog
0.27 ± 3% -0.1 0.18 ± 2% perf-profile.children.cycles-pp.__inet_hash_connect
0.08 ± 10% -0.1 0.00 perf-profile.children.cycles-pp.security_req_classify_flow
0.08 ± 5% -0.1 0.00 perf-profile.children.cycles-pp.inet_csk_reqsk_queue_drop
0.25 -0.1 0.17 ± 4% perf-profile.children.cycles-pp.__switch_to
0.08 ± 8% -0.1 0.00 perf-profile.children.cycles-pp.arch_cpu_idle_enter
0.08 ± 8% -0.1 0.00 perf-profile.children.cycles-pp.tsc_verify_tsc_adjust
0.25 ± 3% -0.1 0.17 ± 2% perf-profile.children.cycles-pp.__fget_light
0.41 ± 3% -0.1 0.33 ± 5% perf-profile.children.cycles-pp.tcp_set_state
0.37 ± 2% -0.1 0.29 ± 5% perf-profile.children.cycles-pp.__sys_getsockopt
0.28 -0.1 0.20 ± 7% perf-profile.children.cycles-pp.fib_table_lookup
0.28 ± 5% -0.1 0.21 ± 2% perf-profile.children.cycles-pp.ip_finish_output
0.27 -0.1 0.19 perf-profile.children.cycles-pp.sk_alloc
0.08 ± 10% -0.1 0.00 perf-profile.children.cycles-pp.put_prev_entity
0.24 ± 16% -0.1 0.17 ± 10% perf-profile.children.cycles-pp.ktime_get
0.24 ± 3% -0.1 0.17 ± 2% perf-profile.children.cycles-pp.tcp_v4_init_sock
0.23 ± 4% -0.1 0.15 ± 3% perf-profile.children.cycles-pp.__inet_check_established
0.22 ± 3% -0.1 0.15 ± 3% perf-profile.children.cycles-pp.pick_next_task_idle
0.18 ± 3% -0.1 0.10 ± 4% perf-profile.children.cycles-pp.tcp_write_queue_purge
0.19 ± 4% -0.1 0.11 perf-profile.children.cycles-pp.tcp_current_mss
0.20 ± 2% -0.1 0.12 ± 8% perf-profile.children.cycles-pp.native_write_msr
0.19 -0.1 0.11 ± 7% perf-profile.children.cycles-pp.cpuidle_governor_latency_req
0.22 -0.1 0.14 ± 5% perf-profile.children.cycles-pp.ip_send_check
0.21 ± 4% -0.1 0.14 ± 5% perf-profile.children.cycles-pp.__might_sleep
0.20 ± 5% -0.1 0.12 ± 3% perf-profile.children.cycles-pp.tick_nohz_idle_enter
0.07 ± 11% -0.1 0.00 perf-profile.children.cycles-pp.inet_sock_destruct
0.07 ± 5% -0.1 0.00 perf-profile.children.cycles-pp.__tcp_v4_send_check
0.07 ± 5% -0.1 0.00 perf-profile.children.cycles-pp.down_write
0.07 ± 5% -0.1 0.00 perf-profile.children.cycles-pp.secure_tcp_ts_off
0.07 ± 5% -0.1 0.00 perf-profile.children.cycles-pp.raw_local_deliver
0.11 ± 6% -0.1 0.04 ± 57% perf-profile.children.cycles-pp.inet_csk_complete_hashdance
0.20 ± 3% -0.1 0.13 perf-profile.children.cycles-pp.security_socket_create
0.07 ± 10% -0.1 0.00 perf-profile.children.cycles-pp.inet_sendmsg
0.07 ± 10% -0.1 0.00 perf-profile.children.cycles-pp.__ip_dev_find
0.07 ± 10% -0.1 0.00 perf-profile.children.cycles-pp.rcu_all_qs
0.22 ± 5% -0.1 0.15 ± 2% perf-profile.children.cycles-pp.copy_user_generic_unrolled
0.22 -0.1 0.15 ± 8% perf-profile.children.cycles-pp.kmem_cache_alloc_trace
0.17 ± 4% -0.1 0.10 ± 7% perf-profile.children.cycles-pp.inet_csk_route_child_sock
0.18 ± 2% -0.1 0.11 ± 6% perf-profile.children.cycles-pp.selinux_socket_create
0.21 ± 6% -0.1 0.14 ± 7% perf-profile.children.cycles-pp.sock_alloc_inode
0.07 ± 6% -0.1 0.00 perf-profile.children.cycles-pp.fsnotify
0.07 ± 6% -0.1 0.00 perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
0.07 ± 6% -0.1 0.00 perf-profile.children.cycles-pp.del_timer_sync
0.21 ± 2% -0.1 0.15 ± 3% perf-profile.children.cycles-pp.tcp_make_synack
0.46 ± 8% -0.1 0.40 ± 4% perf-profile.children.cycles-pp.hrtimer_interrupt
0.28 ± 2% -0.1 0.22 perf-profile.children.cycles-pp.security_socket_connect
0.15 ± 3% -0.1 0.08 perf-profile.children.cycles-pp.sk_stream_kill_queues
0.23 -0.1 0.17 ± 2% perf-profile.children.cycles-pp.selinux_socket_connect_helper
0.16 ± 5% -0.1 0.10 ± 4% perf-profile.children.cycles-pp.__update_load_avg_se
0.16 -0.1 0.10 ± 5% perf-profile.children.cycles-pp.tcp_update_pacing_rate
0.11 ± 4% -0.1 0.04 ± 58% perf-profile.children.cycles-pp.iput
0.07 ± 7% -0.1 0.00 perf-profile.children.cycles-pp.__d_instantiate
0.07 ± 7% -0.1 0.00 perf-profile.children.cycles-pp.kfree_skb
0.07 ± 7% -0.1 0.00 perf-profile.children.cycles-pp.___slab_alloc
0.22 -0.1 0.15 ± 5% perf-profile.children.cycles-pp.tcp_init_sock
0.20 ± 7% -0.1 0.14 ± 6% perf-profile.children.cycles-pp.__might_fault
0.18 ± 2% -0.1 0.11 ± 7% perf-profile.children.cycles-pp.nr_iowait_cpu
0.08 ± 6% -0.1 0.01 ±173% perf-profile.children.cycles-pp.selinux_netlbl_socket_post_create
0.24 ± 6% -0.1 0.18 ± 6% perf-profile.children.cycles-pp.__x64_sys_setsockopt
0.19 ± 3% -0.1 0.13 ± 6% perf-profile.children.cycles-pp.security_inode_alloc
0.18 ± 4% -0.1 0.12 ± 5% perf-profile.children.cycles-pp.selinux_inode_alloc_security
0.06 ± 6% -0.1 0.00 perf-profile.children.cycles-pp.tcp_options_write
0.06 ± 17% -0.1 0.00 perf-profile.children.cycles-pp.netlink_has_listeners
0.06 ± 6% -0.1 0.00 perf-profile.children.cycles-pp.try_to_del_timer_sync
0.06 ± 6% -0.1 0.00 perf-profile.children.cycles-pp.security_sk_free
0.06 ± 6% -0.1 0.00 perf-profile.children.cycles-pp.cpus_share_cache
0.15 ± 4% -0.1 0.09 perf-profile.children.cycles-pp.tcp_rcv_space_adjust
0.15 ± 4% -0.1 0.09 ± 7% perf-profile.children.cycles-pp.inet_ehash_insert
0.14 ± 5% -0.1 0.08 perf-profile.children.cycles-pp.pick_next_entity
0.07 ± 5% -0.1 0.01 ±173% perf-profile.children.cycles-pp.security_socket_getsockopt
0.24 ± 2% -0.1 0.18 perf-profile.children.cycles-pp.selinux_socket_connect
0.12 ± 7% -0.1 0.06 ± 7% perf-profile.children.cycles-pp.pm_qos_request
0.06 ± 11% -0.1 0.00 perf-profile.children.cycles-pp.tcp_select_initial_window
0.06 -0.1 0.00 perf-profile.children.cycles-pp.tcp_cleanup_rbuf
0.06 ± 11% -0.1 0.00 perf-profile.children.cycles-pp.tick_check_broadcast_expired
0.06 ± 11% -0.1 0.00 perf-profile.children.cycles-pp.tcp_assign_congestion_control
0.11 ± 7% -0.1 0.05 ± 8% perf-profile.children.cycles-pp.activate_task
0.14 ± 7% -0.1 0.08 ± 5% perf-profile.children.cycles-pp.inet_ehashfn
0.07 ± 10% -0.1 0.01 ±173% perf-profile.children.cycles-pp.account_entity_dequeue
0.08 ± 10% -0.1 0.03 ±100% perf-profile.children.cycles-pp.__sk_free
0.06 ± 7% -0.1 0.00 perf-profile.children.cycles-pp.rcu_cblist_dequeue
0.06 ± 7% -0.1 0.00 perf-profile.children.cycles-pp.selinux_sock_graft
0.06 ± 7% -0.1 0.00 perf-profile.children.cycles-pp.tcp_validate_incoming
0.15 ± 8% -0.1 0.10 ± 5% perf-profile.children.cycles-pp.tcp_wfree
0.14 ± 7% -0.1 0.09 ± 4% perf-profile.children.cycles-pp.copyout
0.06 ± 9% -0.1 0.00 perf-profile.children.cycles-pp.sched_ttwu_pending
0.06 ± 9% -0.1 0.00 perf-profile.children.cycles-pp.sel_netnode_sid
0.20 ± 2% -0.1 0.14 ± 5% perf-profile.children.cycles-pp.update_ts_time_stats
0.16 ± 2% -0.1 0.10 ± 8% perf-profile.children.cycles-pp.entry_SYSCALL_64_stage2
0.16 ± 6% -0.1 0.10 ± 8% perf-profile.children.cycles-pp.tcp_send_mss
0.05 ± 9% -0.1 0.00 perf-profile.children.cycles-pp.inet_addr_type_table
0.22 ± 7% -0.1 0.16 ± 5% perf-profile.children.cycles-pp.__sys_setsockopt
0.17 ± 11% -0.1 0.12 ± 3% perf-profile.children.cycles-pp.validate_xmit_skb
0.05 ± 8% -0.1 0.00 perf-profile.children.cycles-pp.import_single_range
0.05 ± 8% -0.1 0.00 perf-profile.children.cycles-pp.netlbl_req_setattr
0.05 ± 8% -0.1 0.00 perf-profile.children.cycles-pp.inet_lookup_ifaddr_rcu
0.05 ± 8% -0.1 0.00 perf-profile.children.cycles-pp.ip_mc_drop_socket
0.05 ± 8% -0.1 0.00 perf-profile.children.cycles-pp.hrtimer_try_to_cancel
0.05 ± 8% -0.1 0.00 perf-profile.children.cycles-pp.sk_free
0.05 ± 8% -0.1 0.00 perf-profile.children.cycles-pp.xfrm_lookup
0.05 ± 57% -0.1 0.00 perf-profile.children.cycles-pp.__usecs_to_jiffies
0.17 ± 2% -0.1 0.12 ± 3% perf-profile.children.cycles-pp.inet_unhash
0.08 ± 6% -0.1 0.03 ±100% perf-profile.children.cycles-pp.__switch_to_asm
0.16 ± 4% -0.1 0.11 perf-profile.children.cycles-pp.security_inet_conn_request
0.05 -0.1 0.00 perf-profile.children.cycles-pp.skb_push
0.05 -0.1 0.00 perf-profile.children.cycles-pp.security_socket_setsockopt
0.05 -0.1 0.00 perf-profile.children.cycles-pp.tcp_rate_check_app_limited
0.05 -0.1 0.00 perf-profile.children.cycles-pp.update_blocked_averages
0.05 -0.1 0.00 perf-profile.children.cycles-pp.security_port_sid
0.17 ± 2% -0.0 0.12 perf-profile.children.cycles-pp.sock_has_perm
0.13 ± 6% -0.0 0.09 ± 5% perf-profile.children.cycles-pp.rcu_eqs_enter
0.06 ± 11% -0.0 0.01 ±173% perf-profile.children.cycles-pp.move_addr_to_kernel
0.09 ± 5% -0.0 0.04 ± 57% perf-profile.children.cycles-pp.kfree_skbmem
0.05 ± 58% -0.0 0.00 perf-profile.children.cycles-pp.inet_ehash_nolisten
0.36 ± 5% -0.0 0.32 ± 4% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.15 ± 4% -0.0 0.10 ± 8% perf-profile.children.cycles-pp.inet_sk_rebuild_header
0.14 ± 7% -0.0 0.10 ± 4% perf-profile.children.cycles-pp.ksize
0.10 ± 4% -0.0 0.06 ± 7% perf-profile.children.cycles-pp.rcu_idle_exit
0.07 -0.0 0.03 ±100% perf-profile.children.cycles-pp.__calc_delta
0.11 ± 4% -0.0 0.06 ± 6% perf-profile.children.cycles-pp.put_prev_task_fair
0.12 ± 3% -0.0 0.07 ± 15% perf-profile.children.cycles-pp.security_socket_post_create
0.12 ± 3% -0.0 0.07 ± 15% perf-profile.children.cycles-pp.selinux_socket_post_create
0.08 ± 5% -0.0 0.04 ± 57% perf-profile.children.cycles-pp.deactivate_task
0.07 ± 6% -0.0 0.03 ±100% perf-profile.children.cycles-pp.bictcp_cong_avoid
0.15 -0.0 0.11 ± 4% perf-profile.children.cycles-pp.selinux_inet_conn_request
0.10 ± 4% -0.0 0.06 ± 6% perf-profile.children.cycles-pp.tcp_schedule_loss_probe
0.06 ± 9% -0.0 0.01 ±173% perf-profile.children.cycles-pp.tcp_check_space
0.05 ± 9% -0.0 0.01 ±173% perf-profile.children.cycles-pp.run_rebalance_domains
0.04 ± 58% -0.0 0.00 perf-profile.children.cycles-pp._copy_from_user
0.16 -0.0 0.12 ± 5% perf-profile.children.cycles-pp.evict
0.14 ± 6% -0.0 0.10 ± 4% perf-profile.children.cycles-pp.tick_nohz_next_event
0.12 ± 4% -0.0 0.08 ± 5% perf-profile.children.cycles-pp.tcp_release_cb
0.10 ± 10% -0.0 0.06 ± 6% perf-profile.children.cycles-pp.ipv4_default_advmss
0.10 ± 4% -0.0 0.06 ± 7% perf-profile.children.cycles-pp.rb_erase_cached
0.10 ± 4% -0.0 0.06 ± 6% perf-profile.children.cycles-pp.selinux_sk_alloc_security
0.08 ± 5% -0.0 0.04 ± 57% perf-profile.children.cycles-pp.inet_twsk_alloc
0.07 ± 7% -0.0 0.03 ±100% perf-profile.children.cycles-pp.locks_remove_posix
0.04 ± 58% -0.0 0.00 perf-profile.children.cycles-pp.__internal_add_timer
0.09 -0.0 0.05 perf-profile.children.cycles-pp.sock_recvmsg
0.08 ± 5% -0.0 0.04 ± 57% perf-profile.children.cycles-pp.call_cpuidle
0.12 ± 5% -0.0 0.08 ± 8% perf-profile.children.cycles-pp._cond_resched
0.14 ± 6% -0.0 0.10 perf-profile.children.cycles-pp._copy_to_user
0.13 -0.0 0.09 ± 4% perf-profile.children.cycles-pp.move_addr_to_user
0.12 ± 3% -0.0 0.08 ± 5% perf-profile.children.cycles-pp.bictcp_acked
0.10 ± 4% -0.0 0.07 ± 7% perf-profile.children.cycles-pp.inet_reqsk_alloc
0.10 ± 9% -0.0 0.06 ± 7% perf-profile.children.cycles-pp.copyin
0.08 ± 6% -0.0 0.04 ± 57% perf-profile.children.cycles-pp.tcp_v4_send_check
0.08 ± 6% -0.0 0.04 ± 57% perf-profile.children.cycles-pp.security_socket_recvmsg
0.04 ± 57% -0.0 0.00 perf-profile.children.cycles-pp.file_free_rcu
0.04 ± 57% -0.0 0.00 perf-profile.children.cycles-pp._raw_write_lock_bh
0.04 ± 57% -0.0 0.00 perf-profile.children.cycles-pp.xfrm_lookup_route
0.31 ± 3% -0.0 0.27 ± 5% perf-profile.children.cycles-pp.sk_stop_timer
0.11 ± 13% -0.0 0.08 ± 5% perf-profile.children.cycles-pp.netif_skb_features
0.14 ± 6% -0.0 0.10 ± 8% perf-profile.children.cycles-pp.memcpy_erms
0.13 ± 6% -0.0 0.10 ± 8% perf-profile.children.cycles-pp.sock_setsockopt
0.12 ± 5% -0.0 0.08 ± 5% perf-profile.children.cycles-pp.secure_tcp_seq
0.10 ± 8% -0.0 0.06 ± 6% perf-profile.children.cycles-pp.__alloc_fd
0.07 ± 5% -0.0 0.04 ± 57% perf-profile.children.cycles-pp.prepare_to_wait_exclusive
0.24 ± 9% -0.0 0.21 ± 2% perf-profile.children.cycles-pp.tick_sched_timer
0.43 ± 3% -0.0 0.40 ± 2% perf-profile.children.cycles-pp.irq_exit
0.13 ± 3% -0.0 0.10 ± 5% perf-profile.children.cycles-pp.__inet_inherit_port
0.13 ± 3% -0.0 0.10 perf-profile.children.cycles-pp.memset_erms
0.12 ± 5% -0.0 0.09 ± 4% perf-profile.children.cycles-pp.__d_alloc
0.09 ± 4% -0.0 0.06 perf-profile.children.cycles-pp.tcp_init_buffer_space
0.10 -0.0 0.07 ± 6% perf-profile.children.cycles-pp.security_sk_alloc
0.06 ± 7% -0.0 0.03 ±100% perf-profile.children.cycles-pp.security_sock_graft
0.11 ± 4% -0.0 0.08 ± 5% perf-profile.children.cycles-pp.tcp_queue_rcv
0.21 ± 8% -0.0 0.18 ± 3% perf-profile.children.cycles-pp.tick_sched_handle
0.09 ± 5% -0.0 0.05 ± 9% perf-profile.children.cycles-pp.menu_reflect
0.23 -0.0 0.20 ± 2% perf-profile.children.cycles-pp.sock_getsockopt
0.08 ± 5% -0.0 0.05 ± 9% perf-profile.children.cycles-pp.skb_entail
0.08 ± 5% -0.0 0.05 ± 8% perf-profile.children.cycles-pp.selinux_ipv4_output
0.10 ± 8% -0.0 0.08 ± 6% perf-profile.children.cycles-pp.tcp_init_xmit_timers
0.09 ± 5% -0.0 0.06 ± 7% perf-profile.children.cycles-pp.tcp_v4_inbound_md5_hash
0.20 ± 8% -0.0 0.17 ± 2% perf-profile.children.cycles-pp.update_process_times
0.10 ± 4% -0.0 0.08 ± 6% perf-profile.children.cycles-pp.fsnotify_destroy_marks
0.11 ± 15% -0.0 0.09 ± 13% perf-profile.children.cycles-pp.__inet_lookup_listener
0.10 ± 4% -0.0 0.08 ± 6% perf-profile.children.cycles-pp.fsnotify_grab_connector
0.10 ± 4% -0.0 0.07 perf-profile.children.cycles-pp.bictcp_cwnd_event
0.09 ± 4% -0.0 0.06 ± 11% perf-profile.children.cycles-pp.tcp_v4_fill_cb
0.08 -0.0 0.05 ± 8% perf-profile.children.cycles-pp.security_file_alloc
0.08 ± 5% -0.0 0.05 perf-profile.children.cycles-pp.tcp_established_options
0.28 ± 3% -0.0 0.25 ± 6% perf-profile.children.cycles-pp.inet_csk_clear_xmit_timers
0.08 ± 6% -0.0 0.05 perf-profile.children.cycles-pp.__slab_alloc
0.08 ± 6% -0.0 0.05 perf-profile.children.cycles-pp.add_wait_queue
0.03 ±100% -0.0 0.00 perf-profile.children.cycles-pp.note_gp_changes
0.03 ±100% -0.0 0.00 perf-profile.children.cycles-pp.selinux_d_instantiate
0.03 ±100% -0.0 0.00 perf-profile.children.cycles-pp.worker_thread
0.03 ±100% -0.0 0.00 perf-profile.children.cycles-pp.dentry_unlink_inode
0.03 ±100% -0.0 0.00 perf-profile.children.cycles-pp.rcu_check_callbacks
0.03 ±100% -0.0 0.00 perf-profile.children.cycles-pp.tcp_newly_delivered
0.03 ±100% -0.0 0.00 perf-profile.children.cycles-pp.sel_netnode_find
0.03 ±100% -0.0 0.00 perf-profile.children.cycles-pp.netlbl_sock_setattr
0.10 ± 4% -0.0 0.08 perf-profile.children.cycles-pp.sock_put
0.08 ± 6% -0.0 0.05 ± 8% perf-profile.children.cycles-pp.selinux_file_alloc_security
0.07 ± 11% -0.0 0.05 perf-profile.children.cycles-pp.__sock_wfree
0.10 ± 5% -0.0 0.07 ± 5% perf-profile.children.cycles-pp.__copy_skb_header
0.19 ± 6% -0.0 0.17 ± 7% perf-profile.children.cycles-pp.inet_put_port
0.08 ± 5% -0.0 0.06 ± 7% perf-profile.children.cycles-pp.tcp_parse_options
0.08 ± 6% -0.0 0.05 ± 9% perf-profile.children.cycles-pp.selinux_socket_accept
0.08 ± 16% -0.0 0.06 ± 14% perf-profile.children.cycles-pp.eth_type_trans
0.07 -0.0 0.05 perf-profile.children.cycles-pp.selinux_netlbl_inet_conn_request
0.07 ± 5% -0.0 0.05 ± 8% perf-profile.children.cycles-pp.ip_rcv_finish
0.08 ± 5% -0.0 0.07 ± 7% perf-profile.children.cycles-pp.sock_destroy_inode
0.08 ± 5% -0.0 0.06 perf-profile.children.cycles-pp.security_socket_accept
0.07 ± 5% -0.0 0.05 ± 9% perf-profile.children.cycles-pp.__srcu_read_lock
0.04 ± 58% -0.0 0.03 ±100% perf-profile.children.cycles-pp.__update_idle_core
0.08 ± 6% -0.0 0.06 ± 11% perf-profile.children.cycles-pp.__hrtimer_init
0.07 ± 6% -0.0 0.05 ± 8% perf-profile.children.cycles-pp.__get_user_4
0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp.up_write
0.04 ±100% -0.0 0.03 ±100% perf-profile.children.cycles-pp.clockevents_program_event
0.11 ± 6% -0.0 0.10 ± 5% perf-profile.children.cycles-pp.scheduler_tick
0.21 ± 5% -0.0 0.19 ± 4% perf-profile.children.cycles-pp.del_timer
0.10 ± 4% -0.0 0.08 ± 5% perf-profile.children.cycles-pp.inet_twsk_deschedule_put
0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp.tcp_md5_do_lookup
0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp.update_min_vruntime
0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp.console_unlock
0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp.lockref_put_return
0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp.serial8250_console_write
0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp.wait_for_xmitr
0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp.uart_console_write
0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp.serial8250_console_putchar
0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp.tcp_ca_openreq_child
0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp.process_one_work
0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp.__tcp_select_window
0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp.woken_wake_function
0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp.rcu_eqs_exit
0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp._atomic_dec_and_lock
0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp.security_inet_conn_established
0.08 ± 10% -0.0 0.07 ± 7% perf-profile.children.cycles-pp.load_balance
0.05 -0.0 0.04 ± 57% perf-profile.children.cycles-pp.get_next_timer_interrupt
0.07 ± 5% -0.0 0.06 ± 13% perf-profile.children.cycles-pp.tcp_openreq_init_rwin
0.07 -0.0 0.06 perf-profile.children.cycles-pp.security_file_free
0.09 ± 4% -0.0 0.08 perf-profile.children.cycles-pp.selinux_ipv4_postroute
0.07 ± 11% -0.0 0.07 ± 12% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.06 ± 14% -0.0 0.05 ± 8% perf-profile.children.cycles-pp.find_busiest_group
0.08 ± 5% -0.0 0.08 ± 5% perf-profile.children.cycles-pp.inet_twsk_kill
0.10 ± 10% -0.0 0.10 ± 12% perf-profile.children.cycles-pp.rebalance_domains
0.06 -0.0 0.06 ± 7% perf-profile.children.cycles-pp.security_socket_sendmsg
0.06 ± 6% -0.0 0.06 ± 11% perf-profile.children.cycles-pp._raw_spin_trylock
0.06 ± 9% -0.0 0.05 ± 8% perf-profile.children.cycles-pp.selinux_skb_peerlbl_sid
0.05 ± 9% -0.0 0.05 ± 8% perf-profile.children.cycles-pp.selinux_netlbl_sock_rcv_skb
0.06 +0.0 0.06 ± 6% perf-profile.children.cycles-pp.__close_fd
0.13 ± 3% +0.0 0.15 ± 7% perf-profile.children.cycles-pp.tcp_ack_update_rtt
0.06 ± 7% +0.0 0.11 ± 4% perf-profile.children.cycles-pp.tcp_synack_rtt_meas
0.06 ± 6% +0.0 0.11 ± 3% perf-profile.children.cycles-pp.minmax_running_min
0.08 ± 6% +0.1 0.14 ± 5% perf-profile.children.cycles-pp.sk_setup_caps
0.08 ± 8% +0.1 0.18 ± 4% perf-profile.children.cycles-pp._raw_spin_lock_irq
0.00 +0.3 0.27 perf-profile.children.cycles-pp.find_next_bit
0.00 +0.4 0.36 perf-profile.children.cycles-pp.cpumask_next
0.00 +1.6 1.57 perf-profile.children.cycles-pp.mnt_get_count
47.19 +9.2 56.39 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
47.02 +9.3 56.30 perf-profile.children.cycles-pp.do_syscall_64
12.25 +21.3 33.53 perf-profile.children.cycles-pp.exit_to_usermode_loop
12.21 +21.3 33.51 perf-profile.children.cycles-pp.task_work_run
2.04 ± 2% +23.2 25.26 perf-profile.children.cycles-pp._raw_spin_lock
0.73 ± 2% +23.3 23.99 perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
0.00 +25.5 25.47 perf-profile.children.cycles-pp.mntput_no_expire
36.34 -3.5 32.85 perf-profile.self.cycles-pp.intel_idle
1.24 ± 2% -0.5 0.78 perf-profile.self.cycles-pp._raw_spin_lock_bh
1.17 -0.5 0.71 perf-profile.self.cycles-pp.switch_mm_irqs_off
1.37 -0.5 0.92 perf-profile.self.cycles-pp.syscall_return_via_sysret
1.33 ± 3% -0.4 0.89 perf-profile.self.cycles-pp.__entry_trampoline_start
1.19 -0.4 0.80 perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.92 -0.4 0.57 ± 2% perf-profile.self.cycles-pp.__schedule
0.71 ± 2% -0.3 0.45 perf-profile.self.cycles-pp.update_load_avg
0.82 -0.3 0.56 perf-profile.self.cycles-pp.__inet_lookup_established
0.63 ± 2% -0.3 0.38 ± 2% perf-profile.self.cycles-pp.__slab_free
0.64 ± 2% -0.2 0.39 perf-profile.self.cycles-pp.tcp_transmit_skb
0.72 ± 2% -0.2 0.49 perf-profile.self.cycles-pp.__indirect_thunk_start
0.57 ± 3% -0.2 0.34 perf-profile.self.cycles-pp.load_new_mm_cr3
0.46 -0.2 0.26 perf-profile.self.cycles-pp.__sk_mem_reduce_allocated
0.66 -0.2 0.46 ± 3% perf-profile.self.cycles-pp.tcp_v4_rcv
0.59 -0.2 0.40 perf-profile.self.cycles-pp.tcp_ack
0.58 ± 2% -0.2 0.39 perf-profile.self.cycles-pp.tcp_recvmsg
0.44 -0.2 0.25 ± 5% perf-profile.self.cycles-pp.update_curr
0.48 ± 2% -0.2 0.29 ± 2% perf-profile.self.cycles-pp.sk_forced_mem_schedule
0.47 -0.2 0.29 perf-profile.self.cycles-pp.percpu_counter_add_batch
0.52 -0.2 0.34 ± 4% perf-profile.self.cycles-pp.native_sched_clock
0.41 ± 2% -0.2 0.23 ± 3% perf-profile.self.cycles-pp.select_task_rq_fair
0.51 ± 4% -0.2 0.34 ± 2% perf-profile.self.cycles-pp.selinux_socket_sock_rcv_skb
0.42 -0.2 0.26 ± 3% perf-profile.self.cycles-pp.menu_select
0.39 -0.2 0.22 perf-profile.self.cycles-pp.ip_rcv
0.42 ± 2% -0.2 0.26 perf-profile.self.cycles-pp.resched_curr
0.44 ± 2% -0.2 0.28 ± 5% perf-profile.self.cycles-pp.kmem_cache_alloc
0.28 -0.2 0.12 ± 3% perf-profile.self.cycles-pp.pick_next_task_fair
0.38 ± 2% -0.2 0.22 perf-profile.self.cycles-pp.do_idle
0.46 ± 4% -0.2 0.31 perf-profile.self.cycles-pp.dst_release
0.51 -0.1 0.36 ± 3% perf-profile.self.cycles-pp.sel_netport_sid
0.38 ± 4% -0.1 0.24 perf-profile.self.cycles-pp.set_next_entity
0.41 -0.1 0.28 ± 4% perf-profile.self.cycles-pp.tcp_clean_rtx_queue
0.37 -0.1 0.24 perf-profile.self.cycles-pp.ip_queue_xmit
0.29 -0.1 0.16 ± 6% perf-profile.self.cycles-pp.__netif_receive_skb_core
0.36 ± 4% -0.1 0.22 ± 3% perf-profile.self.cycles-pp.__tcp_get_metrics
0.44 -0.1 0.31 ± 3% perf-profile.self.cycles-pp.avc_has_perm
0.37 ± 4% -0.1 0.24 ± 7% perf-profile.self.cycles-pp.ip_finish_output2
0.33 ± 2% -0.1 0.20 ± 4% perf-profile.self.cycles-pp.sock_rfree
0.34 -0.1 0.22 perf-profile.self.cycles-pp.__alloc_skb
0.34 ± 2% -0.1 0.22 ± 4% perf-profile.self.cycles-pp.ipv4_dst_check
0.33 ± 2% -0.1 0.21 ± 4% perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
0.32 ± 2% -0.1 0.20 ± 2% perf-profile.self.cycles-pp.update_cfs_group
0.30 ± 2% -0.1 0.18 ± 2% perf-profile.self.cycles-pp.tcp_write_xmit
0.31 ± 3% -0.1 0.19 ± 3% perf-profile.self.cycles-pp.tcp_rcv_established
0.26 ± 4% -0.1 0.14 ± 5% perf-profile.self.cycles-pp.try_to_wake_up
0.26 ± 3% -0.1 0.15 ± 2% perf-profile.self.cycles-pp.kfree
0.28 -0.1 0.17 ± 4% perf-profile.self.cycles-pp.__list_del_entry_valid
0.32 ± 3% -0.1 0.21 ± 2% perf-profile.self.cycles-pp.read_tsc
0.26 -0.1 0.15 ± 3% perf-profile.self.cycles-pp.wait_woken
0.35 ± 2% -0.1 0.24 ± 3% perf-profile.self.cycles-pp.___might_sleep
0.25 ± 2% -0.1 0.14 ± 3% perf-profile.self.cycles-pp.enqueue_entity
0.23 ± 4% -0.1 0.13 ± 5% perf-profile.self.cycles-pp.mod_timer
0.24 -0.1 0.14 ± 6% perf-profile.self.cycles-pp.finish_task_switch
0.24 ± 4% -0.1 0.14 ± 5% perf-profile.self.cycles-pp.__list_add_valid
0.29 -0.1 0.19 ± 2% perf-profile.self.cycles-pp.selinux_parse_skb
0.24 ± 2% -0.1 0.15 ± 5% perf-profile.self.cycles-pp.process_backlog
0.24 ± 4% -0.1 0.14 ± 5% perf-profile.self.cycles-pp.enqueue_task_fair
0.33 ± 3% -0.1 0.23 ± 5% perf-profile.self.cycles-pp.__dev_queue_xmit
0.27 ± 3% -0.1 0.18 ± 2% perf-profile.self.cycles-pp.loopback_xmit
0.27 ± 3% -0.1 0.17 perf-profile.self.cycles-pp.ip_output
0.26 -0.1 0.17 ± 3% perf-profile.self.cycles-pp.__kmalloc_node_track_caller
0.23 ± 7% -0.1 0.13 perf-profile.self.cycles-pp.__softirqentry_text_start
0.20 ± 2% -0.1 0.11 ± 6% perf-profile.self.cycles-pp.___perf_sw_event
0.30 -0.1 0.21 ± 3% perf-profile.self.cycles-pp.do_syscall_64
0.19 ± 2% -0.1 0.10 ± 5% perf-profile.self.cycles-pp.available_idle_cpu
0.28 ± 5% -0.1 0.19 ± 2% perf-profile.self.cycles-pp.__skb_clone
0.29 -0.1 0.21 ± 2% perf-profile.self.cycles-pp.skb_release_data
0.23 ± 2% -0.1 0.15 ± 3% perf-profile.self.cycles-pp.sock_def_readable
0.23 ± 7% -0.1 0.15 ± 5% perf-profile.self.cycles-pp.inet_csk_bind_conflict
0.26 ± 3% -0.1 0.17 ± 2% perf-profile.self.cycles-pp.reweight_entity
0.24 ± 2% -0.1 0.15 ± 5% perf-profile.self.cycles-pp.net_rx_action
0.18 ± 2% -0.1 0.10 ± 5% perf-profile.self.cycles-pp.task_work_add
0.28 -0.1 0.20 ± 2% perf-profile.self.cycles-pp.kmem_cache_free
0.26 ± 2% -0.1 0.18 ± 2% perf-profile.self.cycles-pp.ipv4_mtu
0.24 ± 2% -0.1 0.15 ± 3% perf-profile.self.cycles-pp.rcu_segcblist_enqueue
0.19 ± 2% -0.1 0.11 ± 6% perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.08 ± 10% -0.1 0.00 perf-profile.self.cycles-pp.selinux_socket_bind
0.25 -0.1 0.17 ± 4% perf-profile.self.cycles-pp.__switch_to
0.23 ± 3% -0.1 0.15 ± 3% perf-profile.self.cycles-pp.poll_idle
0.28 -0.1 0.20 ± 5% perf-profile.self.cycles-pp.fib_table_lookup
0.21 ± 5% -0.1 0.12 ± 4% perf-profile.self.cycles-pp.dequeue_entity
0.08 ± 8% -0.1 0.00 perf-profile.self.cycles-pp.netif_rx_internal
0.21 ± 3% -0.1 0.13 ± 3% perf-profile.self.cycles-pp.__local_bh_enable_ip
0.25 ± 4% -0.1 0.17 ± 2% perf-profile.self.cycles-pp.__fget_light
0.20 ± 2% -0.1 0.12 ± 8% perf-profile.self.cycles-pp.native_write_msr
0.19 ± 3% -0.1 0.11 ± 7% perf-profile.self.cycles-pp.tcp_sendmsg_locked
0.22 -0.1 0.14 ± 5% perf-profile.self.cycles-pp.ip_send_check
0.21 ± 4% -0.1 0.14 ± 5% perf-profile.self.cycles-pp.__might_sleep
0.07 ± 11% -0.1 0.00 perf-profile.self.cycles-pp.put_prev_entity
0.07 ± 5% -0.1 0.00 perf-profile.self.cycles-pp.tsc_verify_tsc_adjust
0.07 ± 5% -0.1 0.00 perf-profile.self.cycles-pp.__tcp_v4_send_check
0.07 ± 5% -0.1 0.00 perf-profile.self.cycles-pp.raw_local_deliver
0.22 ± 5% -0.1 0.15 perf-profile.self.cycles-pp.copy_user_generic_unrolled
0.19 ± 3% -0.1 0.12 ± 5% perf-profile.self.cycles-pp.ip_local_deliver
0.07 ± 10% -0.1 0.00 perf-profile.self.cycles-pp.tcp_v4_syn_recv_sock
0.07 ± 10% -0.1 0.00 perf-profile.self.cycles-pp.inet_sendmsg
0.07 -0.1 0.00 perf-profile.self.cycles-pp.__inet_stream_connect
0.07 -0.1 0.00 perf-profile.self.cycles-pp.__x64_sys_close
0.07 ± 10% -0.1 0.00 perf-profile.self.cycles-pp.lock_sock_nested
0.07 ± 10% -0.1 0.00 perf-profile.self.cycles-pp.rcu_all_qs
0.08 -0.1 0.01 ±173% perf-profile.self.cycles-pp.__sys_sendto
0.07 ± 6% -0.1 0.00 perf-profile.self.cycles-pp.tick_nohz_get_sleep_length
0.07 ± 12% -0.1 0.00 perf-profile.self.cycles-pp.account_entity_dequeue
0.07 ± 6% -0.1 0.00 perf-profile.self.cycles-pp.fsnotify
0.07 ± 6% -0.1 0.00 perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
0.27 ± 7% -0.1 0.20 ± 2% perf-profile.self.cycles-pp.tcp_rcv_state_process
0.27 -0.1 0.20 ± 3% perf-profile.self.cycles-pp.kmem_cache_alloc_node
0.16 ± 4% -0.1 0.10 ± 5% perf-profile.self.cycles-pp.update_rq_clock
0.07 ± 7% -0.1 0.00 perf-profile.self.cycles-pp.security_inode_free
0.07 ± 7% -0.1 0.00 perf-profile.self.cycles-pp.__inet_bind
0.07 ± 7% -0.1 0.00 perf-profile.self.cycles-pp.put_prev_task_fair
0.07 ± 7% -0.1 0.00 perf-profile.self.cycles-pp.iput
0.07 ± 7% -0.1 0.00 perf-profile.self.cycles-pp.inet_twsk_alloc
0.18 ± 2% -0.1 0.11 ± 7% perf-profile.self.cycles-pp.nr_iowait_cpu
0.18 ± 3% -0.1 0.12 ± 3% perf-profile.self.cycles-pp.selinux_ip_postroute
0.21 ± 2% -0.1 0.15 ± 4% perf-profile.self.cycles-pp.enqueue_to_backlog
0.18 ± 2% -0.1 0.12 ± 5% perf-profile.self.cycles-pp.tcp_close
0.16 ± 2% -0.1 0.10 ± 5% perf-profile.self.cycles-pp.tcp_update_pacing_rate
0.13 ± 3% -0.1 0.07 perf-profile.self.cycles-pp.selinux_socket_create
0.12 ± 3% -0.1 0.06 perf-profile.self.cycles-pp.tcp_data_queue
0.06 ± 6% -0.1 0.00 perf-profile.self.cycles-pp.tcp_update_metrics
0.06 ± 6% -0.1 0.00 perf-profile.self.cycles-pp.tick_nohz_idle_enter
0.06 ± 6% -0.1 0.00 perf-profile.self.cycles-pp.skb_entail
0.06 ± 6% -0.1 0.00 perf-profile.self.cycles-pp.tcp_options_write
0.06 ± 13% -0.1 0.00 perf-profile.self.cycles-pp.tcp_v4_connect
0.06 ± 6% -0.1 0.00 perf-profile.self.cycles-pp.skb_release_head_state
0.06 ± 6% -0.1 0.00 perf-profile.self.cycles-pp.inode_init_always
0.06 ± 6% -0.1 0.00 perf-profile.self.cycles-pp.___slab_alloc
0.06 ± 17% -0.1 0.00 perf-profile.self.cycles-pp.netlink_has_listeners
0.06 ± 6% -0.1 0.00 perf-profile.self.cycles-pp.cpus_share_cache
0.07 ± 5% -0.1 0.01 ±173% perf-profile.self.cycles-pp.tcp_ack_update_rtt
0.07 ± 5% -0.1 0.01 ±173% perf-profile.self.cycles-pp.tcp_create_openreq_child
0.07 ± 5% -0.1 0.01 ±173% perf-profile.self.cycles-pp.inet_reqsk_alloc
0.15 ± 3% -0.1 0.10 ± 5% perf-profile.self.cycles-pp.__update_load_avg_se
0.13 ± 5% -0.1 0.07 perf-profile.self.cycles-pp.pick_next_entity
0.12 ± 7% -0.1 0.06 ± 7% perf-profile.self.cycles-pp.pm_qos_request
0.06 -0.1 0.00 perf-profile.self.cycles-pp.rcu_idle_exit
0.06 ± 11% -0.1 0.00 perf-profile.self.cycles-pp.tcp_schedule_loss_probe
0.06 -0.1 0.00 perf-profile.self.cycles-pp.menu_reflect
0.06 ± 11% -0.1 0.00 perf-profile.self.cycles-pp.tcp_select_initial_window
0.06 ± 11% -0.1 0.00 perf-profile.self.cycles-pp.tick_check_broadcast_expired
0.11 ± 7% -0.1 0.05 ± 8% perf-profile.self.cycles-pp.activate_task
0.15 ± 7% -0.1 0.09 perf-profile.self.cycles-pp.ip_local_deliver_finish
0.07 -0.1 0.01 ±173% perf-profile.self.cycles-pp.inet_csk_reqsk_queue_hash_add
0.08 ± 5% -0.1 0.03 ±100% perf-profile.self.cycles-pp.__fput
0.20 ± 5% -0.1 0.14 ± 6% perf-profile.self.cycles-pp.selinux_ip_postroute_compat
0.06 ± 7% -0.1 0.00 perf-profile.self.cycles-pp.inet_csk_route_req
0.06 ± 7% -0.1 0.00 perf-profile.self.cycles-pp.selinux_ipv4_output
0.06 ± 7% -0.1 0.00 perf-profile.self.cycles-pp.rcu_cblist_dequeue
0.06 ± 7% -0.1 0.00 perf-profile.self.cycles-pp.selinux_sock_graft
0.15 ± 8% -0.1 0.10 ± 5% perf-profile.self.cycles-pp.tcp_wfree
0.14 ± 5% -0.1 0.08 ± 5% perf-profile.self.cycles-pp.inet_ehashfn
0.17 ± 3% -0.1 0.11 ± 6% perf-profile.self.cycles-pp.selinux_sock_rcv_skb_compat
0.06 ± 9% -0.1 0.00 perf-profile.self.cycles-pp.inet_csk_get_port
0.06 ± 9% -0.1 0.00 perf-profile.self.cycles-pp.inode_doinit_with_dentry
0.06 ± 9% -0.1 0.00 perf-profile.self.cycles-pp.sched_ttwu_pending
0.06 ± 9% -0.1 0.00 perf-profile.self.cycles-pp.tcp_validate_incoming
0.16 ± 2% -0.1 0.10 ± 8% perf-profile.self.cycles-pp.entry_SYSCALL_64_stage2
0.05 ± 9% -0.1 0.00 perf-profile.self.cycles-pp.tcp_child_process
0.05 ± 9% -0.1 0.00 perf-profile.self.cycles-pp.sock_setsockopt
0.05 ± 9% -0.1 0.00 perf-profile.self.cycles-pp.inet_addr_type_table
0.05 ± 9% -0.1 0.00 perf-profile.self.cycles-pp.__sys_recvfrom
0.05 ± 9% -0.1 0.00 perf-profile.self.cycles-pp.tcp_assign_congestion_control
0.15 ± 2% -0.1 0.10 ± 5% perf-profile.self.cycles-pp.__call_rcu
0.11 -0.1 0.06 ± 7% perf-profile.self.cycles-pp.__release_sock
0.07 ± 7% -0.1 0.01 ±173% perf-profile.self.cycles-pp.inet_create
0.07 ± 7% -0.1 0.01 ±173% perf-profile.self.cycles-pp.sched_clock_cpu
0.05 ± 8% -0.1 0.00 perf-profile.self.cycles-pp.ip_local_out
0.05 ± 8% -0.1 0.00 perf-profile.self.cycles-pp.import_single_range
0.05 ± 8% -0.1 0.00 perf-profile.self.cycles-pp.tcp_init_metrics
0.05 ± 8% -0.1 0.00 perf-profile.self.cycles-pp.inet_lookup_ifaddr_rcu
0.05 ± 8% -0.1 0.00 perf-profile.self.cycles-pp.ip_mc_drop_socket
0.05 ± 8% -0.1 0.00 perf-profile.self.cycles-pp.sk_free
0.05 ± 57% -0.1 0.00 perf-profile.self.cycles-pp.__usecs_to_jiffies
0.09 ± 4% -0.1 0.04 ± 58% perf-profile.self.cycles-pp.check_preempt_curr
0.18 ± 3% -0.1 0.13 ± 3% perf-profile.self.cycles-pp.account_entity_enqueue
0.15 ± 2% -0.1 0.10 ± 7% perf-profile.self.cycles-pp.kmem_cache_alloc_trace
0.12 ± 3% -0.1 0.07 ± 7% perf-profile.self.cycles-pp.tcp_rcv_space_adjust
0.08 ± 6% -0.1 0.03 ±100% perf-profile.self.cycles-pp.__switch_to_asm
0.08 ± 6% -0.1 0.03 ±100% perf-profile.self.cycles-pp.tcp_v4_send_check
0.06 ± 13% -0.1 0.01 ±173% perf-profile.self.cycles-pp.sk_filter_trim_cap
0.05 -0.1 0.00 perf-profile.self.cycles-pp.task_work_run
0.05 -0.1 0.00 perf-profile.self.cycles-pp.tcp_event_new_data_sent
0.05 -0.1 0.00 perf-profile.self.cycles-pp.tcp_cleanup_rbuf
0.05 -0.1 0.00 perf-profile.self.cycles-pp.skb_push
0.05 -0.1 0.00 perf-profile.self.cycles-pp.tcp_rate_check_app_limited
0.17 ± 2% -0.0 0.12 perf-profile.self.cycles-pp.sock_has_perm
0.15 ± 4% -0.0 0.10 ± 4% perf-profile.self.cycles-pp.sk_clone_lock
0.14 ± 7% -0.0 0.10 ± 5% perf-profile.self.cycles-pp.ksize
0.13 ± 6% -0.0 0.09 ± 5% perf-profile.self.cycles-pp.rcu_eqs_enter
0.06 -0.0 0.01 ±173% perf-profile.self.cycles-pp.tcp_set_state
0.09 ± 5% -0.0 0.04 ± 57% perf-profile.self.cycles-pp.kfree_skbmem
0.14 ± 7% -0.0 0.10 ± 5% perf-profile.self.cycles-pp.do_softirq
0.13 ± 6% -0.0 0.08 ± 5% perf-profile.self.cycles-pp.__inet_check_established
0.07 -0.0 0.03 ±100% perf-profile.self.cycles-pp.__calc_delta
0.11 ± 3% -0.0 0.07 ± 6% perf-profile.self.cycles-pp.release_sock
0.08 ± 13% -0.0 0.04 ± 57% perf-profile.self.cycles-pp.ipv4_default_advmss
0.08 ± 5% -0.0 0.04 ± 57% perf-profile.self.cycles-pp.deactivate_task
0.07 ± 6% -0.0 0.03 ±100% perf-profile.self.cycles-pp.bictcp_cong_avoid
0.12 ± 4% -0.0 0.08 ± 5% perf-profile.self.cycles-pp.sk_reset_timer
0.06 ± 9% -0.0 0.01 ±173% perf-profile.self.cycles-pp.tcp_check_space
0.11 ± 9% -0.0 0.07 perf-profile.self.cycles-pp.ktime_get_with_offset
0.15 ± 7% -0.0 0.11 ± 3% perf-profile.self.cycles-pp.__sk_destruct
0.14 ± 6% -0.0 0.10 ± 5% perf-profile.self.cycles-pp.tcp_connect
0.12 ± 4% -0.0 0.08 ± 5% perf-profile.self.cycles-pp.tcp_release_cb
0.10 ± 8% -0.0 0.06 ± 7% perf-profile.self.cycles-pp.__wake_up_common
0.07 ± 7% -0.0 0.03 ±100% perf-profile.self.cycles-pp.__sk_dst_check
0.07 ± 7% -0.0 0.03 ±100% perf-profile.self.cycles-pp.locks_remove_posix
0.04 ± 58% -0.0 0.00 perf-profile.self.cycles-pp.__kmalloc_reserve
0.04 ± 58% -0.0 0.00 perf-profile.self.cycles-pp.xfrm_lookup
0.12 ± 8% -0.0 0.08 ± 5% perf-profile.self.cycles-pp.dev_hard_start_xmit
0.08 ± 5% -0.0 0.04 ± 57% perf-profile.self.cycles-pp.call_cpuidle
0.18 ± 2% -0.0 0.14 ± 6% perf-profile.self.cycles-pp.tcp_conn_request
0.15 ± 4% -0.0 0.11 ± 3% perf-profile.self.cycles-pp.ip_finish_output
0.12 ± 3% -0.0 0.08 ± 5% perf-profile.self.cycles-pp.bictcp_acked
0.12 ± 5% -0.0 0.08 ± 5% perf-profile.self.cycles-pp.get_empty_filp
0.10 ± 5% -0.0 0.06 ± 7% perf-profile.self.cycles-pp.tcp_v4_do_rcv
0.04 ± 57% -0.0 0.00 perf-profile.self.cycles-pp.tcp_get_metrics
0.04 ± 57% -0.0 0.00 perf-profile.self.cycles-pp.select_idle_sibling
0.04 ± 57% -0.0 0.00 perf-profile.self.cycles-pp._copy_from_iter_full
0.04 ± 57% -0.0 0.00 perf-profile.self.cycles-pp.selinux_inet_conn_request
0.04 ± 57% -0.0 0.00 perf-profile.self.cycles-pp._copy_to_user
0.04 ± 57% -0.0 0.00 perf-profile.self.cycles-pp.tcp_v4_inbound_md5_hash
0.04 ± 57% -0.0 0.00 perf-profile.self.cycles-pp.down_write
0.04 ± 57% -0.0 0.00 perf-profile.self.cycles-pp.file_free_rcu
0.04 ± 57% -0.0 0.00 perf-profile.self.cycles-pp.switch_mm
0.04 ± 57% -0.0 0.00 perf-profile.self.cycles-pp._raw_write_lock_bh
0.12 ± 6% -0.0 0.09 ± 4% perf-profile.self.cycles-pp.sock_def_wakeup
0.09 -0.0 0.05 ± 8% perf-profile.self.cycles-pp.lock_timer_base
0.14 ± 6% -0.0 0.10 ± 8% perf-profile.self.cycles-pp.memcpy_erms
0.09 ± 4% -0.0 0.05 ± 8% perf-profile.self.cycles-pp.rb_erase_cached
0.07 ± 11% -0.0 0.04 ± 57% perf-profile.self.cycles-pp.__sock_wfree
0.12 ± 5% -0.0 0.09 ± 5% perf-profile.self.cycles-pp.nf_hook_slow
0.10 ± 33% -0.0 0.07 ± 30% perf-profile.self.cycles-pp.ktime_get
0.13 ± 3% -0.0 0.10 perf-profile.self.cycles-pp.memset_erms
0.10 ± 9% -0.0 0.06 ± 6% perf-profile.self.cycles-pp.tcp_send_ack
0.10 ± 11% -0.0 0.07 ± 7% perf-profile.self.cycles-pp.netif_skb_features
0.09 ± 13% -0.0 0.05 ± 8% perf-profile.self.cycles-pp.tcp_current_mss
0.06 ± 7% -0.0 0.03 ±100% perf-profile.self.cycles-pp.tcp_check_req
0.11 ± 4% -0.0 0.08 ± 5% perf-profile.self.cycles-pp.tcp_queue_rcv
0.11 ± 4% -0.0 0.08 ± 10% perf-profile.self.cycles-pp.sk_stop_timer
0.16 ± 6% -0.0 0.13 ± 6% perf-profile.self.cycles-pp.ip_route_output_key_hash_rcu
0.13 ± 6% -0.0 0.10 ± 4% perf-profile.self.cycles-pp.dequeue_task_fair
0.08 -0.0 0.05 perf-profile.self.cycles-pp.pick_next_task_idle
0.07 ± 6% -0.0 0.04 ± 57% perf-profile.self.cycles-pp._cond_resched
0.10 ± 10% -0.0 0.07 ± 10% perf-profile.self.cycles-pp.__might_fault
0.09 ± 9% -0.0 0.06 ± 11% perf-profile.self.cycles-pp.inet_sk_rebuild_header
0.05 ± 8% -0.0 0.03 ±100% perf-profile.self.cycles-pp.selinux_netlbl_sock_rcv_skb
0.11 ± 15% -0.0 0.09 ± 13% perf-profile.self.cycles-pp.__inet_lookup_listener
0.09 ± 4% -0.0 0.07 ± 7% perf-profile.self.cycles-pp.inet_csk_accept
0.09 -0.0 0.06 ± 6% perf-profile.self.cycles-pp.sockfd_lookup_light
0.10 ± 4% -0.0 0.07 perf-profile.self.cycles-pp.bictcp_cwnd_event
0.09 -0.0 0.06 ± 6% perf-profile.self.cycles-pp.rcu_process_callbacks
0.09 ± 4% -0.0 0.06 ± 11% perf-profile.self.cycles-pp.tcp_v4_fill_cb
0.08 ± 5% -0.0 0.05 perf-profile.self.cycles-pp.tcp_established_options
0.07 ± 7% -0.0 0.04 ± 58% perf-profile.self.cycles-pp.tcp_make_synack
0.03 ±100% -0.0 0.00 perf-profile.self.cycles-pp.__wake_up_common_lock
0.03 ±100% -0.0 0.00 perf-profile.self.cycles-pp.tick_nohz_next_event
0.03 ±100% -0.0 0.00 perf-profile.self.cycles-pp.selinux_d_instantiate
0.03 ±100% -0.0 0.00 perf-profile.self.cycles-pp.dentry_unlink_inode
0.03 ±100% -0.0 0.00 perf-profile.self.cycles-pp.security_d_instantiate
0.03 ±100% -0.0 0.00 perf-profile.self.cycles-pp.find_busiest_group
0.03 ±100% -0.0 0.00 perf-profile.self.cycles-pp.tcp_newly_delivered
0.03 ±100% -0.0 0.00 perf-profile.self.cycles-pp.sel_netnode_find
0.10 ± 4% -0.0 0.08 perf-profile.self.cycles-pp.sock_put
0.07 ± 11% -0.0 0.05 perf-profile.self.cycles-pp.validate_xmit_skb
0.07 ± 5% -0.0 0.05 perf-profile.self.cycles-pp.move_addr_to_user
0.08 ± 16% -0.0 0.06 ± 15% perf-profile.self.cycles-pp.eth_type_trans
0.11 ± 4% -0.0 0.09 ± 5% perf-profile.self.cycles-pp.sk_alloc
0.11 ± 3% -0.0 0.09 ± 7% perf-profile.self.cycles-pp.selinux_inode_free_security
0.10 ± 5% -0.0 0.07 ± 5% perf-profile.self.cycles-pp.__copy_skb_header
0.08 ± 5% -0.0 0.06 ± 7% perf-profile.self.cycles-pp.tcp_parse_options
0.07 ± 10% -0.0 0.05 perf-profile.self.cycles-pp.inet_recvmsg
0.07 ± 10% -0.0 0.05 perf-profile.self.cycles-pp.selinux_inode_alloc_security
0.07 ± 5% -0.0 0.05 ± 8% perf-profile.self.cycles-pp.__srcu_read_lock
0.07 ± 5% -0.0 0.05 ± 8% perf-profile.self.cycles-pp.ip_rcv_finish
0.07 ± 6% -0.0 0.05 perf-profile.self.cycles-pp.tcp_v4_destroy_sock
0.07 ± 6% -0.0 0.05 perf-profile.self.cycles-pp.__get_user_4
0.04 ± 58% -0.0 0.03 ±100% perf-profile.self.cycles-pp.__update_idle_core
0.08 ± 6% -0.0 0.06 ± 11% perf-profile.self.cycles-pp.__hrtimer_init
0.08 ± 5% -0.0 0.07 perf-profile.self.cycles-pp.__ip_local_out
0.01 ±173% -0.0 0.00 perf-profile.self.cycles-pp.__internal_add_timer
0.01 ±173% -0.0 0.00 perf-profile.self.cycles-pp.tcp_md5_do_lookup
0.01 ±173% -0.0 0.00 perf-profile.self.cycles-pp.update_min_vruntime
0.01 ±173% -0.0 0.00 perf-profile.self.cycles-pp.sk_prot_alloc
0.01 ±173% -0.0 0.00 perf-profile.self.cycles-pp.lockref_put_return
0.01 ±173% -0.0 0.00 perf-profile.self.cycles-pp.__tcp_select_window
0.01 ±173% -0.0 0.00 perf-profile.self.cycles-pp.woken_wake_function
0.01 ±173% -0.0 0.00 perf-profile.self.cycles-pp.__x64_sys_recvfrom
0.01 ±173% -0.0 0.00 perf-profile.self.cycles-pp.tcp_fin
0.01 ±173% -0.0 0.00 perf-profile.self.cycles-pp.security_req_classify_flow
0.01 ±173% -0.0 0.00 perf-profile.self.cycles-pp.up_write
0.01 ±173% -0.0 0.00 perf-profile.self.cycles-pp.inet_release
0.01 ±173% -0.0 0.00 perf-profile.self.cycles-pp.schedule
0.01 ±173% -0.0 0.00 perf-profile.self.cycles-pp.tcp_init_buffer_space
1.58 ± 2% -0.0 1.57 perf-profile.self.cycles-pp._raw_spin_lock
0.08 ± 6% -0.0 0.07 ± 6% perf-profile.self.cycles-pp.cpuidle_enter_state
0.09 ± 4% -0.0 0.08 perf-profile.self.cycles-pp.selinux_ipv4_postroute
0.06 ± 6% -0.0 0.06 ± 11% perf-profile.self.cycles-pp._raw_spin_trylock
0.07 ± 7% +0.0 0.07 ± 7% perf-profile.self.cycles-pp.sock_getsockopt
0.00 +0.0 0.01 ±173% perf-profile.self.cycles-pp.tcp_openreq_init_rwin
0.06 ± 6% +0.0 0.11 ± 3% perf-profile.self.cycles-pp.minmax_running_min
0.07 +0.1 0.14 ± 3% perf-profile.self.cycles-pp.sk_setup_caps
0.08 ± 8% +0.1 0.18 ± 4% perf-profile.self.cycles-pp._raw_spin_lock_irq
0.00 +0.1 0.11 ± 3% perf-profile.self.cycles-pp.cpumask_next
0.00 +0.2 0.18 ± 3% perf-profile.self.cycles-pp.mntput_no_expire
0.00 +0.3 0.27 perf-profile.self.cycles-pp.find_next_bit
0.00 +1.2 1.24 perf-profile.self.cycles-pp.mnt_get_count
0.73 +23.1 23.84 perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
netperf.Throughput_tps
23000 +-+-----------------------------------------------------------------+
| + |
22000 +-+++.+ + +. |
21000 +-+ +.++. ++.++.++.++.+++.++.++.++.++.+++.++.++.++.+++.++ ++.+|
| + |
20000 +-+ |
| |
19000 +-+ |
| |
18000 +-+ |
17000 +-+ |
| |
16000 +-+ O O O |
OO OO O OO OOO O OO OO OOO OO OO OO OO O O OO OO |
15000 +-+-----------------------------------------------------------------+
netperf.Throughput_total_tps
500000 +-+----------------------------------------------------------------+
| +. |
480000 +-+++.+ + .++.++. : ++.+|
460000 +-+ :+ :+ +++.++.++.+++.++.+++.++.++.+++.++.++.+++.: |
| + + + |
440000 +-+ |
420000 +-+ |
| |
400000 +-+ |
380000 +-+ |
| |
360000 +-+ |
340000 OO+OO O OO OO OO OOO OO OO OOO OO OOO OO OO OOO |
| O O |
320000 +-+----------------------------------------------------------------+
netperf.workload
1.45e+08 +-+--------------------------------------------------------------+
|+.++.+ + +.++.+ : ++.+|
1.4e+08 +-+ ::+ + ++.++.+++.+++.++.+++.+++.++.+++.++.+++.+ : |
1.35e+08 +-+ + + : |
| + |
1.3e+08 +-+ |
1.25e+08 +-+ |
| |
1.2e+08 +-+ |
1.15e+08 +-+ |
| |
1.1e+08 +-+ |
1.05e+08 +-+ O O |
OO OO OO OOO O OOO OO OOO OOO OO OOO O O OO OO |
1e+08 +-+---O----------------------------------------------------------+
netperf.time.user_time
300 +-+-------------------------------------------------------------------+
290 +-++ .++. |
|+ +.+ + ++. +.+ .+|
280 +-+ : + + .+ +. +. .++.+ .+ +. +. .+ .+ + : + |
270 +-+ :+ + +.+ + ++ + +.+ ++.+ ++ + +.+ +: |
260 +-+ + + |
250 +-+ |
| |
240 +-+ |
230 +-+ |
220 +O+OO O |
210 O-+ OO O O O O O O O |
| O OO O OO O OO OO OO O O O O O |
200 +-+ O O O |
190 +-+-------------------------------------------------------------------+
netperf.time.system_time
5900 +-+------------------------------------------------------------------+
| O O |
5800 OO+OO O OO OO OO OO OOO O O OO OO OO OO OO OOO O |
5700 +-+ |
| O |
5600 +-+ |
5500 +-+ |
|+.++.+ + .++.++.++.+++.++.++.++.++.++.++.++.+++.++.++.++.++ +.++.+|
5400 +-+ : : + : : |
5300 +-+ :: : : |
| + : : |
5200 +-+ :: |
5100 +-+ : |
| + |
5000 +-+------------------------------------------------------------------+
netperf.time.percent_of_cpu_this_job_got
2000 +-+------------------------------------------------------------------+
| O OO OO |
1950 OO+OO O OO OO OO O OOO OO OO OO OO OO OOO O |
| |
| O |
1900 +-+ +.++.+ |
|+.++.+ + .+ +.+++.++.++.++.++.++.++.++.+++.++.++.++.++ +.++.+|
1850 +-+ : : + : : |
| :: : : |
1800 +-+ + : : |
| :: |
| : |
1750 +-+ + |
| |
1700 +-+------------------------------------------------------------------+
netperf.time.voluntary_context_switches
2.8e+08 +-+---------------------------------------------------------------+
|+ + +. .+++.++. +. +. +. +.++. +. +.++ +. + +. : ++.+|
2.7e+08 +-+ :: ++ ++ + ++ + ++ + +.+ + +.+ +: |
2.6e+08 +-+ + + |
| |
2.5e+08 +-+ |
2.4e+08 +-+ |
| |
2.3e+08 +-+ |
2.2e+08 +-+ |
| |
2.1e+08 +-+ |
2e+08 +O+ O OOO OOO OO O O OO OOO OO OO OOO OO OOO O |
O O OO O |
1.9e+08 +-+---------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Xiaolong
[lkp-robot] [fs] 8fbedc19c9: unixbench.score -10.2% regression
by kernel test robot
Greetings,
FYI, we noticed a -10.2% regression of unixbench.score due to commit:
commit: 8fbedc19c94fd25a2b9b327015e87105c1aaa6d0 ("fs: replace f_ops->get_poll_head with a static ->f_poll_head pointer")
git://git.infradead.org/users/hch/vfs remove-get-poll-head
in testcase: unixbench
on test machine: 16 threads Intel(R) Xeon(R) CPU D-1541 @ 2.10GHz with 8G memory
with following parameters:
runtime: 300s
nr_task: 100%
test: syscall
cpufreq_governor: performance
test-description: UnixBench is the original BYTE UNIX benchmark suite; it aims to test the performance of Unix-like systems.
test-url: https://github.com/kdlucas/byte-unixbench
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
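
For context on what the tested commit changes: the subject suggests replacing the indirect f_op->get_poll_head() call with a wait-queue pointer cached directly on the file object. Below is a minimal, stand-alone C sketch of that general idea; the struct and field names are illustrative stand-ins, not the actual kernel definitions, and the real patch may differ in detail.

/* sketch.c -- hypothetical illustration, not kernel code */
#include <stddef.h>

struct wait_queue_head { int dummy; };
struct file;

struct file_operations {
	/* old style: an indirect call locates the wait queue head */
	struct wait_queue_head *(*get_poll_head)(struct file *, unsigned int);
};

struct file {
	const struct file_operations *f_op;
	/* new style: the wait queue head is stored once on the file */
	struct wait_queue_head *f_poll_head;
};

/* old path: one indirect branch on every poll */
struct wait_queue_head *poll_head_old(struct file *f, unsigned int events)
{
	return f->f_op->get_poll_head ? f->f_op->get_poll_head(f, events) : NULL;
}

/* new path: a plain pointer load, no indirect branch */
struct wait_queue_head *poll_head_new(struct file *f)
{
	return f->f_poll_head;
}

The usual motivation for this kind of conversion is to avoid a retpoline-mitigated indirect call on a hot path; whether it pays off for a given workload is exactly what a comparison like the one below measures.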
=========================================================================================
compiler/cpufreq_governor/kconfig/nr_task/rootfs/runtime/tbox_group/test/testcase:
gcc-7/performance/x86_64-rhel-7.2/100%/debian-x86_64-2018-04-03.cgz/300s/lkp-bdw-de1/syscall/unixbench
commit:
b45859ef86 ("net: remove sock_poll_busy_loop")
8fbedc19c9 ("fs: replace f_ops->get_poll_head with a static ->f_poll_head pointer")
b45859ef86052d62 8fbedc19c94fd25a2b9b327015
---------------- --------------------------
%stddev %change %stddev
\ | \
4882 -10.2% 4383 ± 5% unixbench.score
392.99 -0.4% 391.46 unixbench.time.elapsed_time
392.99 -0.4% 391.46 unixbench.time.elapsed_time.max
78225 +4.2% 81514 ± 2% unixbench.time.involuntary_context_switches
9391 -0.0% 9388 unixbench.time.maximum_resident_set_size
365043 -0.7% 362451 unixbench.time.minor_page_faults
4096 +0.0% 4096 unixbench.time.page_size
1217 +0.4% 1222 unixbench.time.percent_of_cpu_this_job_got
3384 +4.5% 3535 unixbench.time.system_time
1400 -10.8% 1249 ± 5% unixbench.time.user_time
3827 +0.1% 3832 unixbench.time.voluntary_context_switches
2.874e+09 -10.5% 2.573e+09 ± 5% unixbench.workload
58535 -2.9% 56857 ± 3% interrupts.CAL:Function_call_interrupts
25.84 +0.3% 25.90 boot-time.boot
14.02 +0.0% 14.02 boot-time.dhcp
381.11 -0.3% 379.87 boot-time.idle
16.79 +0.1% 16.80 boot-time.kernel_boot
3286 ± 9% +109.9% 6896 ± 48% softirqs.NET_RX
344104 ± 5% +23.5% 424930 ± 13% softirqs.RCU
243744 -2.5% 237564 ± 2% softirqs.SCHED
3053571 ± 2% -9.1% 2776239 ± 9% softirqs.TIMER
23.63 -0.2 23.45 mpstat.cpu.idle%
0.03 ± 85% -0.0 0.01 ± 68% mpstat.cpu.iowait%
0.00 ± 49% +0.0 0.01 ± 54% mpstat.cpu.soft%
53.88 +2.6 56.44 mpstat.cpu.sys%
22.46 -2.4 20.10 ± 5% mpstat.cpu.usr%
1003344 -0.1% 1001844 vmstat.memory.cache
6688418 -0.0% 6687468 vmstat.memory.free
0.00 -100.0% 0.00 vmstat.procs.b
12.00 +0.0% 12.00 vmstat.procs.r
2667 ± 12% +4.9% 2799 ± 11% vmstat.system.cs
16584 -0.2% 16550 vmstat.system.in
594059 ± 5% +3.8% 616385 ± 8% cpuidle.C1.time
18443 ± 14% -1.3% 18212 ± 8% cpuidle.C1.usage
3580716 -1.1% 3540931 cpuidle.C1E.time
9191 -0.7% 9124 cpuidle.C1E.usage
3781885 ± 45% -38.2% 2338197 ± 16% cpuidle.C3.time
11370 ± 16% -11.9% 10020 ± 9% cpuidle.C3.usage
1.409e+09 -1.5% 1.388e+09 cpuidle.C6.time
1500023 -1.3% 1480672 cpuidle.C6.usage
6206 ± 7% -1.0% 6144 ± 8% cpuidle.POLL.time
403.00 ± 12% +3.7% 418.00 ± 9% cpuidle.POLL.usage
1920 +0.4% 1927 turbostat.Avg_MHz
77.62 +0.3 77.90 turbostat.Busy%
2474 -0.0% 2473 turbostat.Bzy_MHz
17542 ± 12% -1.3% 17314 ± 9% turbostat.C1
0.01 +0.0 0.01 turbostat.C1%
9097 -0.3% 9066 turbostat.C1E
0.06 +0.0 0.06 turbostat.C1E%
11279 ± 16% -12.1% 9915 ± 9% turbostat.C3
0.06 ± 46% -0.0 0.04 ± 14% turbostat.C3%
1499668 -1.3% 1480367 turbostat.C6
22.34 -0.3 22.08 turbostat.C6%
7.81 -0.5% 7.77 turbostat.CPU%c1
0.06 ± 67% -60.9% 0.02 ± 19% turbostat.CPU%c3
14.52 -1.5% 14.31 turbostat.CPU%c6
62.25 ± 3% -1.2% 61.50 turbostat.CoreTmp
6550269 -0.5% 6515364 turbostat.IRQ
2.03 ± 13% -1.8% 1.99 ± 11% turbostat.Pkg%pc2
0.25 ± 69% -68.3% 0.08 ± 38% turbostat.Pkg%pc3
11.18 ± 2% -0.3% 11.14 ± 2% turbostat.Pkg%pc6
61.75 ± 2% -0.8% 61.25 ± 2% turbostat.PkgTmp
31.16 -1.3% 30.77 turbostat.PkgWatt
1.97 +0.6% 1.98 turbostat.RAMWatt
2100 +0.0% 2100 turbostat.TSC_MHz
5.067e+11 -8.9% 4.616e+11 ± 5% perf-stat.branch-instructions
4.74 +0.2 4.94 perf-stat.branch-miss-rate%
2.403e+10 -5.0% 2.282e+10 ± 4% perf-stat.branch-misses
100.00 +0.0 100.00 perf-stat.cache-miss-rate%
1.963e+10 ± 19% +9.4% 2.146e+10 ± 29% perf-stat.cache-misses
1.963e+10 ± 19% +9.4% 2.146e+10 ± 29% perf-stat.cache-references
1050620 ± 12% +4.7% 1099671 ± 11% perf-stat.context-switches
4.23 +10.6% 4.68 ± 5% perf-stat.cpi
1.2e+13 -0.0% 1.2e+13 perf-stat.cpu-cycles
20365 ± 2% -0.3% 20304 perf-stat.cpu-migrations
0.01 ± 11% +0.0 0.01 ± 4% perf-stat.dTLB-load-miss-rate%
78173358 ± 11% -3.4% 75490648 perf-stat.dTLB-load-misses
9.119e+11 -9.6% 8.24e+11 ± 5% perf-stat.dTLB-loads
0.00 ± 3% +0.0 0.00 ± 9% perf-stat.dTLB-store-miss-rate%
15313885 ± 3% -0.6% 15218098 ± 8% perf-stat.dTLB-store-misses
6.546e+11 -9.6% 5.918e+11 ± 5% perf-stat.dTLB-stores
53.18 +0.0 53.19 perf-stat.iTLB-load-miss-rate%
1.304e+10 -10.6% 1.166e+10 ± 5% perf-stat.iTLB-load-misses
1.148e+10 -10.6% 1.026e+10 ± 5% perf-stat.iTLB-loads
2.837e+12 -9.4% 2.57e+12 ± 5% perf-stat.instructions
217.61 +1.3% 220.42 perf-stat.instructions-per-iTLB-miss
0.24 -9.4% 0.21 ± 5% perf-stat.ipc
841532 -0.3% 838965 perf-stat.minor-faults
841536 -0.3% 838966 perf-stat.page-faults
987.05 +1.2% 999.08 perf-stat.path-length
65664 -0.4% 65409 proc-vmstat.nr_active_anon
63659 -0.2% 63514 proc-vmstat.nr_anon_pages
163492 -0.0% 163468 proc-vmstat.nr_dirty_background_threshold
327385 -0.0% 327337 proc-vmstat.nr_dirty_threshold
239712 -0.2% 239254 proc-vmstat.nr_file_pages
51004 +0.0% 51004 proc-vmstat.nr_free_cma
1672100 -0.0% 1671861 proc-vmstat.nr_free_pages
5881 ± 14% -8.5% 5379 proc-vmstat.nr_inactive_anon
5145 +0.0% 5145 proc-vmstat.nr_kernel_stack
6293 +1.0% 6354 proc-vmstat.nr_mapped
182.00 ± 80% +46.3% 266.25 ± 63% proc-vmstat.nr_mlock
1417 +0.2% 1420 proc-vmstat.nr_page_table_pages
7930 ± 11% -7.6% 7328 ± 3% proc-vmstat.nr_shmem
11121 +0.7% 11202 proc-vmstat.nr_slab_reclaimable
10785 +0.6% 10847 proc-vmstat.nr_slab_unreclaimable
231782 +0.1% 231939 proc-vmstat.nr_unevictable
65664 -0.4% 65409 proc-vmstat.nr_zone_active_anon
5881 ± 14% -8.5% 5379 proc-vmstat.nr_zone_inactive_anon
231782 +0.1% 231939 proc-vmstat.nr_zone_unevictable
722667 -0.4% 719651 proc-vmstat.numa_hit
722667 -0.4% 719651 proc-vmstat.numa_local
2190 ± 7% -6.7% 2043 ± 20% proc-vmstat.pgactivate
772748 -0.2% 770827 proc-vmstat.pgalloc_normal
856672 -0.1% 855933 proc-vmstat.pgfault
753377 -0.1% 752262 proc-vmstat.pgfree
262470 -0.4% 261424 meminfo.Active
262466 -0.4% 261420 meminfo.Active(anon)
182927 -0.0% 182871 meminfo.AnonHugePages
254413 -0.2% 253809 meminfo.AnonPages
958840 -0.2% 957026 meminfo.Cached
204016 +0.0% 204016 meminfo.CmaFree
204800 +0.0% 204800 meminfo.CmaTotal
4028680 +0.0% 4028684 meminfo.CommitLimit
419307 -1.0% 415152 meminfo.Committed_AS
7340032 -3.6% 7077888 ± 6% meminfo.DirectMap1G
2917888 +9.0% 3179520 ± 14% meminfo.DirectMap2M
123024 ± 9% +0.4% 123536 ± 10% meminfo.DirectMap4k
2048 +0.0% 2048 meminfo.Hugepagesize
23626 ± 14% -8.4% 21636 meminfo.Inactive
23514 ± 15% -8.5% 21523 meminfo.Inactive(anon)
5143 +0.0% 5145 meminfo.KernelStack
24307 +1.1% 24565 meminfo.Mapped
6571564 -0.0% 6570767 meminfo.MemAvailable
6688436 -0.0% 6687475 meminfo.MemFree
8057364 +0.0% 8057368 meminfo.MemTotal
728.50 ± 80% +46.4% 1066 ± 63% meminfo.Mlocked
5665 +0.3% 5680 meminfo.PageTables
44485 +0.7% 44811 meminfo.SReclaimable
43144 +0.6% 43390 meminfo.SUnreclaim
31712 ± 11% -7.5% 29321 ± 3% meminfo.Shmem
87630 +0.7% 88202 meminfo.Slab
927128 +0.1% 927757 meminfo.Unevictable
3.436e+10 +0.0% 3.436e+10 meminfo.VmallocTotal
53613 +0.6% 53912 slabinfo.Acpi-Namespace.active_objs
53613 +0.6% 53912 slabinfo.Acpi-Namespace.num_objs
49700 +1.4% 50407 slabinfo.dentry.active_objs
50254 +1.1% 50825 slabinfo.dentry.num_objs
798.75 ± 22% +37.9% 1101 ± 24% slabinfo.dmaengine-unmap-16.active_objs
798.75 ± 22% +37.9% 1101 ± 24% slabinfo.dmaengine-unmap-16.num_objs
4919 ± 5% -21.8% 3848 ± 2% slabinfo.filp.active_objs
5237 ± 5% -18.5% 4266 ± 2% slabinfo.filp.num_objs
4101 +2.1% 4186 ± 2% slabinfo.ftrace_event_field.active_objs
4101 +2.1% 4186 ± 2% slabinfo.ftrace_event_field.num_objs
40950 +0.5% 41154 slabinfo.inode_cache.active_objs
41017 +0.5% 41227 slabinfo.inode_cache.num_objs
36953 +1.0% 37330 slabinfo.kernfs_node_cache.active_objs
36953 +1.0% 37330 slabinfo.kernfs_node_cache.num_objs
7616 ± 2% +2.5% 7808 slabinfo.kmalloc-16.active_objs
7616 ± 2% +2.5% 7808 slabinfo.kmalloc-16.num_objs
11363 +1.1% 11484 ± 2% slabinfo.kmalloc-32.active_objs
11363 +1.1% 11484 ± 2% slabinfo.kmalloc-32.num_objs
9056 -0.7% 8993 slabinfo.kmalloc-64.active_objs
9056 -0.7% 8993 slabinfo.kmalloc-64.num_objs
657.00 +11.1% 730.00 ± 12% slabinfo.nsproxy.active_objs
657.00 +11.1% 730.00 ± 12% slabinfo.nsproxy.num_objs
3865 +3.5% 4000 ± 2% slabinfo.proc_inode_cache.active_objs
3895 +3.3% 4021 ± 2% slabinfo.proc_inode_cache.num_objs
4414 ± 2% +0.7% 4447 ± 2% slabinfo.selinux_file_security.active_objs
4414 ± 2% +0.7% 4447 ± 2% slabinfo.selinux_file_security.num_objs
1520 ± 7% -16.3% 1272 ± 15% slabinfo.skbuff_head_cache.active_objs
1520 ± 7% -16.3% 1272 ± 15% slabinfo.skbuff_head_cache.num_objs
10718 ± 2% +1.1% 10839 slabinfo.vm_area_struct.active_objs
10787 ± 2% +1.2% 10918 slabinfo.vm_area_struct.num_objs
1660 ±173% -99.8% 3.67 ±173% sched_debug.cfs_rq:/.MIN_vruntime.avg
26574 ±173% -99.8% 58.68 ±173% sched_debug.cfs_rq:/.MIN_vruntime.max
0.00 +0.0% 0.00 sched_debug.cfs_rq:/.MIN_vruntime.min
6432 ±173% -99.8% 14.20 ±173% sched_debug.cfs_rq:/.MIN_vruntime.stddev
136415 +0.3% 136887 sched_debug.cfs_rq:/.exec_clock.avg
139290 +0.5% 139965 sched_debug.cfs_rq:/.exec_clock.max
136065 +0.4% 136559 sched_debug.cfs_rq:/.exec_clock.min
768.38 ± 6% +5.2% 807.99 ± 7% sched_debug.cfs_rq:/.exec_clock.stddev
72947 ± 9% -5.5% 68931 ± 3% sched_debug.cfs_rq:/.load.avg
283069 ± 21% -11.1% 251734 sched_debug.cfs_rq:/.load.max
42874 -1.3% 42306 sched_debug.cfs_rq:/.load.min
69013 ± 23% -14.2% 59241 ± 3% sched_debug.cfs_rq:/.load.stddev
87.02 ± 7% -4.2% 83.39 ± 2% sched_debug.cfs_rq:/.load_avg.avg
390.89 ± 12% -6.4% 365.75 ± 2% sched_debug.cfs_rq:/.load_avg.max
43.39 ± 3% +1.7% 44.14 ± 3% sched_debug.cfs_rq:/.load_avg.min
98.14 ± 7% -9.2% 89.09 ± 4% sched_debug.cfs_rq:/.load_avg.stddev
1660 ±173% -99.8% 3.67 ±173% sched_debug.cfs_rq:/.max_vruntime.avg
26574 ±173% -99.8% 58.68 ±173% sched_debug.cfs_rq:/.max_vruntime.max
0.00 +0.0% 0.00 sched_debug.cfs_rq:/.max_vruntime.min
6432 ±173% -99.8% 14.20 ±173% sched_debug.cfs_rq:/.max_vruntime.stddev
2233218 +0.4% 2241793 sched_debug.cfs_rq:/.min_vruntime.avg
2277626 +0.4% 2287031 sched_debug.cfs_rq:/.min_vruntime.max
2188278 +0.3% 2194407 sched_debug.cfs_rq:/.min_vruntime.min
32031 ± 5% +1.7% 32583 ± 3% sched_debug.cfs_rq:/.min_vruntime.stddev
0.82 ± 2% +0.6% 0.83 ± 2% sched_debug.cfs_rq:/.nr_running.avg
1.04 ± 5% -3.4% 1.00 sched_debug.cfs_rq:/.nr_running.max
0.71 +0.0% 0.71 sched_debug.cfs_rq:/.nr_running.min
0.13 ± 10% -10.6% 0.12 ± 4% sched_debug.cfs_rq:/.nr_running.stddev
1.81 ± 4% -0.2% 1.80 ± 5% sched_debug.cfs_rq:/.nr_spread_over.avg
10.89 ± 12% +6.9% 11.64 ± 17% sched_debug.cfs_rq:/.nr_spread_over.max
0.86 +0.0% 0.86 sched_debug.cfs_rq:/.nr_spread_over.min
2.48 ± 14% +5.8% 2.63 ± 19% sched_debug.cfs_rq:/.nr_spread_over.stddev
2.28 ±172% -100.0% 0.00 sched_debug.cfs_rq:/.removed.load_avg.avg
36.54 ±172% -100.0% 0.00 sched_debug.cfs_rq:/.removed.load_avg.max
8.84 ±172% -100.0% 0.00 sched_debug.cfs_rq:/.removed.load_avg.stddev
106.00 ±172% -100.0% 0.00 sched_debug.cfs_rq:/.removed.runnable_sum.avg
1696 ±172% -100.0% 0.00 sched_debug.cfs_rq:/.removed.runnable_sum.max
410.54 ±172% -100.0% 0.00 sched_debug.cfs_rq:/.removed.runnable_sum.stddev
0.02 ±145% -100.0% 0.00 sched_debug.cfs_rq:/.removed.util_avg.avg
0.29 ±145% -100.0% 0.00 sched_debug.cfs_rq:/.removed.util_avg.max
0.07 ±145% -100.0% 0.00 sched_debug.cfs_rq:/.removed.util_avg.stddev
59.48 ± 3% -2.5% 58.01 ± 2% sched_debug.cfs_rq:/.runnable_load_avg.avg
232.68 +0.0% 232.79 sched_debug.cfs_rq:/.runnable_load_avg.max
40.29 ± 3% +1.3% 40.82 sched_debug.cfs_rq:/.runnable_load_avg.min
53.54 ± 5% -4.5% 51.11 ± 3% sched_debug.cfs_rq:/.runnable_load_avg.stddev
69999 ± 9% -6.8% 65217 ± 3% sched_debug.cfs_rq:/.runnable_weight.avg
272708 ± 22% -12.7% 238207 sched_debug.cfs_rq:/.runnable_weight.max
42865 -1.3% 42297 sched_debug.cfs_rq:/.runnable_weight.min
65676 ± 25% -15.7% 55376 ± 3% sched_debug.cfs_rq:/.runnable_weight.stddev
22714 ± 13% +29.7% 29464 ± 30% sched_debug.cfs_rq:/.spread0.avg
67086 ± 11% +11.3% 74678 ± 10% sched_debug.cfs_rq:/.spread0.max
-22206 -19.4% -17898 sched_debug.cfs_rq:/.spread0.min
32005 ± 5% +1.7% 32564 ± 3% sched_debug.cfs_rq:/.spread0.stddev
864.16 -0.6% 858.71 sched_debug.cfs_rq:/.util_avg.avg
1089 ± 2% -3.2% 1054 ± 2% sched_debug.cfs_rq:/.util_avg.max
754.54 ± 6% +2.5% 773.21 sched_debug.cfs_rq:/.util_avg.min
91.35 ± 16% -15.1% 77.54 ± 16% sched_debug.cfs_rq:/.util_avg.stddev
12.14 ± 41% +36.9% 16.62 ± 30% sched_debug.cfs_rq:/.util_est_enqueued.avg
87.46 ± 30% +18.9% 104.04 ± 26% sched_debug.cfs_rq:/.util_est_enqueued.max
0.71 +0.0% 0.71 sched_debug.cfs_rq:/.util_est_enqueued.min
25.96 ± 31% +16.6% 30.27 ± 26% sched_debug.cfs_rq:/.util_est_enqueued.stddev
881890 -1.2% 870985 sched_debug.cpu.avg_idle.avg
1000000 -1.2% 987524 sched_debug.cpu.avg_idle.max
590128 ± 6% -10.8% 526650 ± 12% sched_debug.cpu.avg_idle.min
119303 ± 7% +9.2% 130313 ± 12% sched_debug.cpu.avg_idle.stddev
206733 -0.0% 206680 sched_debug.cpu.clock.avg
206736 -0.0% 206682 sched_debug.cpu.clock.max
206725 -0.0% 206671 sched_debug.cpu.clock.min
3.07 ± 14% -15.2% 2.60 ± 13% sched_debug.cpu.clock.stddev
206733 -0.0% 206680 sched_debug.cpu.clock_task.avg
206736 -0.0% 206682 sched_debug.cpu.clock_task.max
206725 -0.0% 206671 sched_debug.cpu.clock_task.min
3.07 ± 14% -15.2% 2.60 ± 13% sched_debug.cpu.clock_task.stddev
56.93 +0.3% 57.12 sched_debug.cpu.cpu_load[0].avg
247.61 ± 2% -0.1% 247.43 ± 2% sched_debug.cpu.cpu_load[0].max
40.25 ± 3% +1.4% 40.82 sched_debug.cpu.cpu_load[0].min
51.54 ± 2% +0.3% 51.68 sched_debug.cpu.cpu_load[0].stddev
54.52 ± 3% +0.7% 54.92 ± 3% sched_debug.cpu.cpu_load[1].avg
176.07 ± 10% +0.3% 176.57 ± 6% sched_debug.cpu.cpu_load[1].max
41.50 -0.6% 41.25 sched_debug.cpu.cpu_load[1].min
33.93 ± 13% +1.3% 34.36 ± 7% sched_debug.cpu.cpu_load[1].stddev
52.28 ± 2% +1.5% 53.06 ± 2% sched_debug.cpu.cpu_load[2].avg
130.11 ± 10% +4.3% 135.71 ± 6% sched_debug.cpu.cpu_load[2].max
41.89 +0.3% 42.04 sched_debug.cpu.cpu_load[2].min
23.42 ± 13% +5.4% 24.68 ± 7% sched_debug.cpu.cpu_load[2].stddev
50.68 +1.5% 51.46 sched_debug.cpu.cpu_load[3].avg
103.54 ± 9% +5.7% 109.39 ± 4% sched_debug.cpu.cpu_load[3].max
42.46 +0.2% 42.54 sched_debug.cpu.cpu_load[3].min
16.62 ± 15% +8.0% 17.95 ± 3% sched_debug.cpu.cpu_load[3].stddev
50.12 +1.3% 50.77 sched_debug.cpu.cpu_load[4].avg
99.39 ± 8% +4.6% 103.93 ± 4% sched_debug.cpu.cpu_load[4].max
42.68 -0.0% 42.68 sched_debug.cpu.cpu_load[4].min
14.64 ± 15% +8.2% 15.84 ± 5% sched_debug.cpu.cpu_load[4].stddev
3533 ± 3% +2.0% 3604 sched_debug.cpu.curr->pid.avg
4162 -0.1% 4159 sched_debug.cpu.curr->pid.max
3164 ± 7% +5.3% 3332 ± 5% sched_debug.cpu.curr->pid.min
270.61 ± 21% -15.3% 229.33 ± 17% sched_debug.cpu.curr->pid.stddev
67539 ± 7% +1.3% 68398 ± 7% sched_debug.cpu.load.avg
283069 ± 21% -0.1% 282894 ± 20% sched_debug.cpu.load.max
42874 -1.3% 42306 sched_debug.cpu.load.min
63473 ± 23% +0.5% 63820 ± 21% sched_debug.cpu.load.stddev
500000 +0.0% 500000 sched_debug.cpu.max_idle_balance_cost.avg
500000 +0.0% 500000 sched_debug.cpu.max_idle_balance_cost.max
500000 +0.0% 500000 sched_debug.cpu.max_idle_balance_cost.min
4294 -0.0% 4294 sched_debug.cpu.next_balance.avg
4294 -0.0% 4294 sched_debug.cpu.next_balance.max
4294 -0.0% 4294 sched_debug.cpu.next_balance.min
0.00 ± 7% +7.3% 0.00 ± 8% sched_debug.cpu.next_balance.stddev
185941 +0.1% 186144 sched_debug.cpu.nr_load_updates.avg
192620 -0.5% 191714 sched_debug.cpu.nr_load_updates.max
184232 -0.0% 184151 sched_debug.cpu.nr_load_updates.min
2015 ± 14% -5.4% 1907 ± 3% sched_debug.cpu.nr_load_updates.stddev
0.88 ± 4% +1.8% 0.90 ± 2% sched_debug.cpu.nr_running.avg
1.75 ± 6% +4.1% 1.82 ± 3% sched_debug.cpu.nr_running.max
0.71 +0.0% 0.71 sched_debug.cpu.nr_running.min
0.31 ± 8% +6.4% 0.33 ± 4% sched_debug.cpu.nr_running.stddev
32099 ± 10% +15.1% 36951 ± 10% sched_debug.cpu.nr_switches.avg
98874 ± 12% +21.6% 120228 ± 4% sched_debug.cpu.nr_switches.max
5360 ± 15% +5.5% 5656 ± 8% sched_debug.cpu.nr_switches.min
27909 ± 7% +17.0% 32658 ± 9% sched_debug.cpu.nr_switches.stddev
0.02 ± 20% -40.0% 0.01 ± 74% sched_debug.cpu.nr_uninterruptible.avg
7.50 ± 20% +0.0% 7.50 ± 23% sched_debug.cpu.nr_uninterruptible.max
-7.79 -8.7% -7.11 sched_debug.cpu.nr_uninterruptible.min
4.10 ± 26% -0.3% 4.09 ± 20% sched_debug.cpu.nr_uninterruptible.stddev
30199 ± 10% +15.7% 34944 ± 10% sched_debug.cpu.sched_count.avg
95606 ± 12% +20.5% 115247 ± 3% sched_debug.cpu.sched_count.max
4540 ± 18% -0.7% 4509 ± 6% sched_debug.cpu.sched_count.min
27865 ± 5% +14.9% 32009 ± 9% sched_debug.cpu.sched_count.stddev
1138 ± 5% -1.6% 1119 ± 3% sched_debug.cpu.sched_goidle.avg
2711 ± 9% -4.5% 2589 ± 15% sched_debug.cpu.sched_goidle.max
358.14 ± 21% +12.4% 402.68 ± 19% sched_debug.cpu.sched_goidle.min
665.41 ± 8% -7.2% 617.69 ± 11% sched_debug.cpu.sched_goidle.stddev
14406 ± 11% +16.6% 16800 ± 11% sched_debug.cpu.ttwu_count.avg
47204 ± 13% +21.7% 57461 ± 5% sched_debug.cpu.ttwu_count.max
2043 ± 13% +1.0% 2063 ± 13% sched_debug.cpu.ttwu_count.min
13732 ± 7% +15.3% 15838 ± 9% sched_debug.cpu.ttwu_count.stddev
13145 ± 12% +18.8% 15618 ± 12% sched_debug.cpu.ttwu_local.avg
46078 ± 13% +21.7% 56054 ± 5% sched_debug.cpu.ttwu_local.max
1424 ± 15% -4.0% 1368 ± 11% sched_debug.cpu.ttwu_local.min
13776 ± 7% +14.8% 15818 ± 9% sched_debug.cpu.ttwu_local.stddev
206725 -0.0% 206672 sched_debug.cpu_clk
996147 +0.0% 996147 sched_debug.dl_rq:.dl_bw->bw.avg
996147 +0.0% 996147 sched_debug.dl_rq:.dl_bw->bw.max
996147 +0.0% 996147 sched_debug.dl_rq:.dl_bw->bw.min
4.295e+09 -0.0% 4.295e+09 sched_debug.jiffies
206725 -0.0% 206672 sched_debug.ktime
950.00 +0.0% 950.00 sched_debug.rt_rq:/.rt_runtime.avg
950.00 +0.0% 950.00 sched_debug.rt_rq:/.rt_runtime.max
950.00 +0.0% 950.00 sched_debug.rt_rq:/.rt_runtime.min
0.00 ± 51% -28.8% 0.00 ± 43% sched_debug.rt_rq:/.rt_time.avg
0.01 ± 4% +29.9% 0.01 ± 61% sched_debug.rt_rq:/.rt_time.max
0.00 ± 64% -48.8% 0.00 ± 97% sched_debug.rt_rq:/.rt_time.min
0.00 ± 6% +35.0% 0.00 ± 66% sched_debug.rt_rq:/.rt_time.stddev
207112 -0.0% 207058 sched_debug.sched_clk
1.00 +0.0% 1.00 sched_debug.sched_clock_stable()
4118331 +0.0% 4118331 sched_debug.sysctl_sched.sysctl_sched_features
24.00 +0.0% 24.00 sched_debug.sysctl_sched.sysctl_sched_latency
3.00 +0.0% 3.00 sched_debug.sysctl_sched.sysctl_sched_min_granularity
1.00 +0.0% 1.00 sched_debug.sysctl_sched.sysctl_sched_tunable_scaling
4.00 +0.0% 4.00 sched_debug.sysctl_sched.sysctl_sched_wakeup_granularity
27.31 -3.2 24.06 ± 6% perf-profile.calltrace.cycles-pp.syscall_return_via_sysret
24.31 -2.8 21.47 ± 6% perf-profile.calltrace.cycles-pp.__entry_trampoline_start
2.40 ± 4% -1.8 0.63 ±100% perf-profile.calltrace.cycles-pp.dnotify_flush.filp_close.__x64_sys_close.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.53 ± 4% -0.4 0.13 ±173% perf-profile.calltrace.cycles-pp.__indirect_thunk_start
1.46 ± 2% -0.2 1.24 ± 3% perf-profile.calltrace.cycles-pp.__x64_sys_getuid.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.60 ± 3% -0.2 1.39 ± 4% perf-profile.calltrace.cycles-pp.__x64_sys_getpid.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.40 ± 3% -0.2 1.23 ± 4% perf-profile.calltrace.cycles-pp.__x64_sys_umask.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.78 ± 2% -0.1 1.64 ± 6% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_stage2
0.12 ±173% -0.1 0.00 perf-profile.calltrace.cycles-pp.from_kuid_munged.__x64_sys_getuid.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.07 ± 4% -0.1 0.95 ± 5% perf-profile.calltrace.cycles-pp.__task_pid_nr_ns.__x64_sys_getpid.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.91 -0.0 0.88 ± 2% perf-profile.calltrace.cycles-pp.__close_fd.__x64_sys_close.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.57 ± 4% +0.0 0.59 ± 2% perf-profile.calltrace.cycles-pp._raw_spin_lock.__close_fd.__x64_sys_close.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.19 ± 2% +0.1 1.24 ± 4% perf-profile.calltrace.cycles-pp._raw_spin_lock.__alloc_fd.ksys_dup.__x64_sys_dup.do_syscall_64
2.43 +0.1 2.48 perf-profile.calltrace.cycles-pp.__alloc_fd.ksys_dup.__x64_sys_dup.do_syscall_64.entry_SYSCALL_64_after_hwframe
15.50 +0.1 15.61 ± 21% perf-profile.calltrace.cycles-pp.__x64_sys_close.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.1 0.14 ±173% perf-profile.calltrace.cycles-pp.filp_close.__x64_sys_close.do_syscall_64.entry_SYSCALL_64_after_hwframe.close
0.00 +0.1 0.15 ±173% perf-profile.calltrace.cycles-pp.ksys_dup.__x64_sys_dup.do_syscall_64.entry_SYSCALL_64_after_hwframe.dup
0.00 +0.1 0.15 ±173% perf-profile.calltrace.cycles-pp.__x64_sys_close.do_syscall_64.entry_SYSCALL_64_after_hwframe.close
0.00 +0.2 0.15 ±173% perf-profile.calltrace.cycles-pp.__x64_sys_dup.do_syscall_64.entry_SYSCALL_64_after_hwframe.dup
0.00 +0.2 0.16 ±173% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.close
0.00 +0.2 0.16 ±173% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.close
0.00 +0.2 0.16 ±173% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.dup
0.00 +0.2 0.17 ±173% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.dup
13.85 ± 2% +0.2 14.07 ± 24% perf-profile.calltrace.cycles-pp.filp_close.__x64_sys_close.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +0.2 0.23 ±173% perf-profile.calltrace.cycles-pp.close
0.00 +0.2 0.24 ±173% perf-profile.calltrace.cycles-pp.dup
1.66 ± 37% +0.3 1.98 ± 7% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
1.04 ± 59% +0.4 1.40 ± 5% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
1.84 ± 37% +0.4 2.21 ± 7% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
1.84 ± 37% +0.4 2.21 ± 7% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
1.84 ± 37% +0.4 2.21 ± 7% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
1.93 ± 38% +0.4 2.33 ± 6% perf-profile.calltrace.cycles-pp.secondary_startup_64
4.73 ± 2% +1.4 6.15 ± 5% perf-profile.calltrace.cycles-pp.fput.filp_close.__x64_sys_close.do_syscall_64.entry_SYSCALL_64_after_hwframe
41.95 +5.1 47.02 ± 5% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
39.39 +5.3 44.72 ± 5% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
14.72 +6.3 21.02 ± 2% perf-profile.calltrace.cycles-pp.__x64_sys_dup.do_syscall_64.entry_SYSCALL_64_after_hwframe
14.31 +6.3 20.63 ± 2% perf-profile.calltrace.cycles-pp.ksys_dup.__x64_sys_dup.do_syscall_64.entry_SYSCALL_64_after_hwframe
11.38 +6.4 17.80 perf-profile.calltrace.cycles-pp.__fget.ksys_dup.__x64_sys_dup.do_syscall_64.entry_SYSCALL_64_after_hwframe
27.38 -3.1 24.30 ± 5% perf-profile.children.cycles-pp.syscall_return_via_sysret
24.38 -2.7 21.68 ± 6% perf-profile.children.cycles-pp.__entry_trampoline_start
2.41 ± 4% -1.8 0.66 ± 96% perf-profile.children.cycles-pp.dnotify_flush
0.50 ± 4% -0.3 0.19 ± 42% perf-profile.children.cycles-pp.locks_remove_posix
1.65 ± 3% -0.2 1.43 ± 3% perf-profile.children.cycles-pp.__x64_sys_getpid
1.48 ± 2% -0.2 1.27 ± 3% perf-profile.children.cycles-pp.__x64_sys_getuid
1.42 ± 3% -0.2 1.26 ± 4% perf-profile.children.cycles-pp.__x64_sys_umask
1.78 ± 2% -0.1 1.66 ± 6% perf-profile.children.cycles-pp.entry_SYSCALL_64_stage2
0.47 ± 2% -0.1 0.35 ± 11% perf-profile.children.cycles-pp.__fd_install
1.10 ± 5% -0.1 0.99 ± 4% perf-profile.children.cycles-pp.__task_pid_nr_ns
0.63 ± 3% -0.1 0.56 ± 5% perf-profile.children.cycles-pp.__indirect_thunk_start
0.48 ± 5% -0.1 0.42 perf-profile.children.cycles-pp.from_kuid_munged
0.45 ± 4% -0.1 0.39 ± 3% perf-profile.children.cycles-pp.map_id_up
0.99 -0.0 0.97 ± 2% perf-profile.children.cycles-pp.__close_fd
0.05 ± 63% -0.0 0.03 ±100% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.04 ± 58% -0.0 0.03 ±100% perf-profile.children.cycles-pp.ktime_get
0.06 ± 15% +0.0 0.06 ± 14% perf-profile.children.cycles-pp.vfs_write
0.06 ± 7% +0.0 0.06 ± 20% perf-profile.children.cycles-pp.read
0.06 ± 15% +0.0 0.06 ± 16% perf-profile.children.cycles-pp.task_tick_fair
0.10 ± 15% +0.0 0.10 ± 10% perf-profile.children.cycles-pp.write
0.06 ± 14% +0.0 0.06 ± 6% perf-profile.children.cycles-pp.ksys_write
0.09 ± 24% +0.0 0.10 ± 22% perf-profile.children.cycles-pp.worker_thread
0.09 ± 24% +0.0 0.10 ± 22% perf-profile.children.cycles-pp.process_one_work
0.10 ± 22% +0.0 0.11 ± 7% perf-profile.children.cycles-pp.scheduler_tick
0.01 ±173% +0.0 0.03 ±100% perf-profile.children.cycles-pp.pipe_write
0.10 ± 20% +0.0 0.11 ± 17% perf-profile.children.cycles-pp.ret_from_fork
0.10 ± 21% +0.0 0.11 ± 21% perf-profile.children.cycles-pp.kthread
0.01 ±173% +0.0 0.03 ±100% perf-profile.children.cycles-pp.run_timer_softirq
0.03 ±100% +0.0 0.04 ± 58% perf-profile.children.cycles-pp.get_next_timer_interrupt
0.00 +0.0 0.01 ±173% perf-profile.children.cycles-pp.__schedule
0.00 +0.0 0.01 ±173% perf-profile.children.cycles-pp.__vfs_read
0.00 +0.0 0.01 ±173% perf-profile.children.cycles-pp.try_to_wake_up
0.24 ± 2% +0.0 0.25 ± 4% perf-profile.children.cycles-pp.find_next_zero_bit
0.00 +0.0 0.01 ±173% perf-profile.children.cycles-pp.schedule
0.07 ± 64% +0.0 0.08 ± 17% perf-profile.children.cycles-pp.fb_flashcursor
0.07 ± 64% +0.0 0.08 ± 17% perf-profile.children.cycles-pp.bit_cursor
0.07 ± 64% +0.0 0.08 ± 17% perf-profile.children.cycles-pp.soft_cursor
0.07 ± 66% +0.0 0.09 ± 18% perf-profile.children.cycles-pp.memcpy_erms
0.15 ± 40% +0.0 0.17 ± 52% perf-profile.children.cycles-pp.irq_work_interrupt
0.15 ± 40% +0.0 0.17 ± 52% perf-profile.children.cycles-pp.smp_irq_work_interrupt
0.15 ± 40% +0.0 0.17 ± 52% perf-profile.children.cycles-pp.irq_work_run
0.15 ± 40% +0.0 0.17 ± 52% perf-profile.children.cycles-pp.printk
0.15 ± 40% +0.0 0.17 ± 52% perf-profile.children.cycles-pp.vprintk_emit
0.07 ± 64% +0.0 0.09 ± 12% perf-profile.children.cycles-pp.ast_dirty_update
0.06 ± 59% +0.0 0.08 ± 11% perf-profile.children.cycles-pp.tick_nohz_next_event
0.05 ± 59% +0.0 0.07 ± 29% perf-profile.children.cycles-pp.delay_tsc
0.03 ±102% +0.0 0.05 ± 8% perf-profile.children.cycles-pp.__vfs_write
0.18 ± 25% +0.0 0.20 ± 7% perf-profile.children.cycles-pp.irq_exit
0.07 ± 61% +0.0 0.10 ± 11% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
0.00 +0.0 0.03 ±100% perf-profile.children.cycles-pp.update_load_avg
0.00 +0.0 0.03 ±100% perf-profile.children.cycles-pp.native_irq_return_iret
0.22 ± 8% +0.0 0.25 ± 2% perf-profile.children.cycles-pp.expand_files
0.15 ± 25% +0.0 0.17 ± 4% perf-profile.children.cycles-pp.__softirqentry_text_start
0.09 ± 69% +0.0 0.12 ± 7% perf-profile.children.cycles-pp.start_kernel
0.01 ±173% +0.0 0.04 ± 58% perf-profile.children.cycles-pp.ksys_read
0.03 ±100% +0.0 0.06 ± 6% perf-profile.children.cycles-pp.rebalance_domains
0.00 +0.0 0.04 ± 57% perf-profile.children.cycles-pp.load_balance
0.00 +0.0 0.04 ± 58% perf-profile.children.cycles-pp.vfs_read
0.10 ± 26% +0.0 0.14 ± 39% perf-profile.children.cycles-pp.io_serial_in
0.00 +0.0 0.04 ± 57% perf-profile.children.cycles-pp.wake_up_klogd_work_func
0.15 ± 31% +0.0 0.20 ± 40% perf-profile.children.cycles-pp.uart_console_write
0.16 ± 30% +0.0 0.21 ± 8% perf-profile.children.cycles-pp.update_process_times
0.16 ± 27% +0.0 0.21 ± 39% perf-profile.children.cycles-pp.serial8250_console_write
0.16 ± 28% +0.0 0.20 ± 42% perf-profile.children.cycles-pp.wait_for_xmitr
0.16 ± 30% +0.0 0.21 ± 10% perf-profile.children.cycles-pp.tick_sched_handle
0.15 ± 29% +0.0 0.20 ± 39% perf-profile.children.cycles-pp.serial8250_console_putchar
0.13 ± 58% +0.0 0.18 ± 6% perf-profile.children.cycles-pp.menu_select
0.29 ± 24% +0.1 0.35 ± 7% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.17 ± 28% +0.1 0.22 ± 42% perf-profile.children.cycles-pp.console_unlock
0.35 ± 23% +0.1 0.41 ± 6% perf-profile.children.cycles-pp.hrtimer_interrupt
0.16 ± 37% +0.1 0.22 ± 44% perf-profile.children.cycles-pp.irq_work_run_list
0.18 ± 27% +0.1 0.24 ± 11% perf-profile.children.cycles-pp.tick_sched_timer
0.58 ± 24% +0.1 0.64 ± 6% perf-profile.children.cycles-pp.apic_timer_interrupt
0.57 ± 24% +0.1 0.63 ± 6% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
0.07 ± 16% +0.1 0.13 ± 11% perf-profile.children.cycles-pp.get_unused_fd_flags
1.82 ± 2% +0.1 1.91 ± 3% perf-profile.children.cycles-pp._raw_spin_lock
2.55 +0.1 2.64 perf-profile.children.cycles-pp.__alloc_fd
0.02 ±173% +0.1 0.12 ± 97% perf-profile.children.cycles-pp.__getpid
0.02 ±173% +0.1 0.12 ±106% perf-profile.children.cycles-pp.getuid
0.02 ±173% +0.1 0.13 ±100% perf-profile.children.cycles-pp.umask
0.08 ±118% +0.2 0.33 ±109% perf-profile.children.cycles-pp.close
0.08 ±113% +0.3 0.34 ±109% perf-profile.children.cycles-pp.dup
1.21 ± 39% +0.3 1.48 ± 5% perf-profile.children.cycles-pp.intel_idle
15.56 +0.3 15.83 ± 22% perf-profile.children.cycles-pp.__x64_sys_close
1.76 ± 37% +0.3 2.10 ± 7% perf-profile.children.cycles-pp.cpuidle_enter_state
14.02 +0.3 14.37 ± 24% perf-profile.children.cycles-pp.filp_close
1.84 ± 37% +0.4 2.21 ± 7% perf-profile.children.cycles-pp.start_secondary
1.93 ± 38% +0.4 2.33 ± 6% perf-profile.children.cycles-pp.do_idle
1.93 ± 38% +0.4 2.33 ± 6% perf-profile.children.cycles-pp.secondary_startup_64
1.93 ± 38% +0.4 2.33 ± 6% perf-profile.children.cycles-pp.cpu_startup_entry
4.76 ± 2% +1.5 6.24 ± 6% perf-profile.children.cycles-pp.fput
42.19 +5.5 47.71 ± 6% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
39.95 +5.7 45.69 ± 6% perf-profile.children.cycles-pp.do_syscall_64
11.45 +6.4 17.86 perf-profile.children.cycles-pp.__fget
14.84 +6.5 21.31 perf-profile.children.cycles-pp.__x64_sys_dup
14.55 +6.5 21.02 perf-profile.children.cycles-pp.ksys_dup
27.38 -3.1 24.30 ± 5% perf-profile.self.cycles-pp.syscall_return_via_sysret
24.38 -2.7 21.68 ± 6% perf-profile.self.cycles-pp.__entry_trampoline_start
2.41 ± 4% -1.8 0.66 ± 96% perf-profile.self.cycles-pp.dnotify_flush
4.78 ± 2% -0.5 4.31 ± 6% perf-profile.self.cycles-pp.do_syscall_64
0.50 ± 4% -0.3 0.19 ± 42% perf-profile.self.cycles-pp.locks_remove_posix
2.56 ± 3% -0.2 2.32 ± 5% perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
1.42 ± 3% -0.2 1.26 ± 4% perf-profile.self.cycles-pp.__x64_sys_umask
0.99 ± 3% -0.1 0.84 ± 4% perf-profile.self.cycles-pp.__x64_sys_getuid
1.78 ± 2% -0.1 1.66 ± 6% perf-profile.self.cycles-pp.entry_SYSCALL_64_stage2
0.47 ± 2% -0.1 0.35 ± 11% perf-profile.self.cycles-pp.__fd_install
1.09 ± 5% -0.1 0.99 ± 4% perf-profile.self.cycles-pp.__task_pid_nr_ns
0.57 ± 3% -0.1 0.47 ± 4% perf-profile.self.cycles-pp.__x64_sys_getpid
0.74 ± 2% -0.1 0.67 ± 6% perf-profile.self.cycles-pp.__x64_sys_close
0.63 ± 3% -0.1 0.56 ± 5% perf-profile.self.cycles-pp.__indirect_thunk_start
0.45 ± 4% -0.1 0.39 ± 2% perf-profile.self.cycles-pp.map_id_up
0.43 ± 4% -0.1 0.37 ± 9% perf-profile.self.cycles-pp.__close_fd
0.49 ± 2% -0.0 0.47 ± 7% perf-profile.self.cycles-pp.__x64_sys_dup
0.03 ±100% -0.0 0.01 ±173% perf-profile.self.cycles-pp.from_kuid_munged
0.30 ± 5% -0.0 0.29 ± 3% perf-profile.self.cycles-pp.ksys_dup
0.04 ± 58% +0.0 0.05 ± 58% perf-profile.self.cycles-pp.menu_select
0.24 ± 2% +0.0 0.25 ± 4% perf-profile.self.cycles-pp.find_next_zero_bit
1.02 +0.0 1.03 ± 4% perf-profile.self.cycles-pp.__alloc_fd
0.07 ± 64% +0.0 0.09 ± 18% perf-profile.self.cycles-pp.memcpy_erms
0.05 ± 59% +0.0 0.07 ± 29% perf-profile.self.cycles-pp.delay_tsc
0.00 +0.0 0.03 ±100% perf-profile.self.cycles-pp.native_irq_return_iret
0.22 ± 8% +0.0 0.25 ± 2% perf-profile.self.cycles-pp.expand_files
0.10 ± 26% +0.0 0.14 ± 39% perf-profile.self.cycles-pp.io_serial_in
0.07 ± 16% +0.1 0.13 ± 11% perf-profile.self.cycles-pp.get_unused_fd_flags
1.81 ± 2% +0.1 1.90 ± 3% perf-profile.self.cycles-pp._raw_spin_lock
1.20 ± 39% +0.3 1.47 ± 4% perf-profile.self.cycles-pp.intel_idle
6.42 +0.9 7.35 ± 33% perf-profile.self.cycles-pp.filp_close
4.75 ± 2% +1.5 6.23 ± 6% perf-profile.self.cycles-pp.fput
11.42 +6.4 17.83 perf-profile.self.cycles-pp.__fget
unixbench.score
5000 +-+------------------------------------------------------------------+
|.++.+.+ +. .+ |
4900 +-+ + : +.+.++.+.++.+.+.+ .+.++.+.++.+.+.++.+.++.+.+ +.+.+.++.|
4800 +-+ + :.+ |
| + |
4700 +-O O O |
4600 +-+ O O O O O O O |
| |
4500 +-+ |
4400 +-+ |
| |
4300 +-+ |
4200 +-+ |
O O O O OO O O O O OO OO |
4100 +-+------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Xiaolong
[lkp-robot] [x86/kernel] b1ff47aace: WARNING:at_kernel/jump_label.c:#__jump_label_update
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: b1ff47aacea95e5be1bedf2aee740395b52f4591 ("[PATCH 5/5] x86/kernel: jump_table: use relative references")
url: https://github.com/0day-ci/linux/commits/Ard-Biesheuvel/add-support-for-r...
in testcase: boot
on test machine: qemu-system-i386 -enable-kvm -m 360M
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+-----------------------------------------------------+------------+------------+
| | 1843c4017f | b1ff47aace |
+-----------------------------------------------------+------------+------------+
| boot_successes | 57 | 46 |
| boot_failures | 2 | 14 |
| Mem-Info | 2 | 3 |
| invoked_oom-killer:gfp_mask=0x | 1 | 3 |
| Out_of_memory:Kill_process | 1 | 3 |
| WARNING:at_kernel/jump_label.c:#__jump_label_update | 0 | 11 |
| EIP:__jump_label_update | 0 | 11 |
+-----------------------------------------------------+------------+------------+
[ 43.154660] WARNING: CPU: 0 PID: 351 at kernel/jump_label.c:388 __jump_label_update+0x101/0x130
[ 43.172391] Modules linked in:
[ 43.176312] CPU: 0 PID: 351 Comm: trinity-main Not tainted 4.18.0-rc2-00124-gb1ff47a #206
[ 43.186389] EIP: __jump_label_update+0x101/0x130
[ 43.192131] Code: a5 bf fd ff 6a 01 31 c9 ba 01 00 00 00 b8 c0 02 cd b1 c6 05 ba 2e cb b1 01 e8 8b bf fd ff ff 33 68 8b 35 b2 b1 e8 cf 74 f3 ff <0f> 0b 6a 01 31 c9 ba 01 00 00 00 b8 a8 02 cd b1 e8 6a bf fd ff 83
[ 43.215879] EAX: 00000021 EBX: b1cb67b0 ECX: 00000000 EDX: 00000000
[ 43.223498] ESI: b1cb67b8 EDI: b1cb2fbc EBP: b89c9dc0 ESP: b89c9d9c
[ 43.231212] DS: 007b ES: 007b FS: 0000 GS: 00e0 SS: 0068 EFLAGS: 00010292
[ 43.239602] CR0: 80050033 CR2: 0805a000 CR3: 08979000 CR4: 00000690
[ 43.247344] Call Trace:
[ 43.250614] jump_label_update+0x95/0x120
[ 43.255705] static_key_slow_inc_cpuslocked+0xcd/0xe0
[ 43.261993] static_key_slow_inc+0xd/0x10
[ 43.266986] tracepoint_probe_register_prio+0x257/0x320
[ 43.273467] tracepoint_probe_register+0xf/0x20
[ 43.279104] trace_event_reg+0x90/0x100
[ 43.283964] perf_trace_init+0x222/0x280
[ 43.288833] perf_tp_event_init+0x1d/0x50
[ 43.293947] perf_try_init_event+0x27/0xb0
[ 43.299066] perf_event_alloc+0x757/0xb20
[ 43.304996] __do_sys_perf_event_open+0x3de/0xd60
[ 43.310932] sys_perf_event_open+0x17/0x20
[ 43.315362] do_int80_syscall_32+0x98/0x1f0
[ 43.319354] entry_INT80_32+0x33/0x33
[ 43.322816] EIP: 0xa7fa41b2
[ 43.325381] Code: 89 c2 31 c0 89 d7 f3 aa 8b 44 24 1c 89 30 c6 40 04 00 83 c4 2c 89 f0 5b 5e 5f 5d c3 90 90 90 90 90 90 90 90 90 90 90 90 cd 80 <c3> 8d b6 00 00 00 00 8d bc 27 00 00 00 00 8b 1c 24 c3 8d b6 00 00
[ 43.346022] EAX: ffffffda EBX: 080d3000 ECX: 0000015f EDX: ffffffff
[ 43.355195] ESI: ffffffff EDI: 00000001 EBP: 00000000 ESP: af819388
[ 43.362426] DS: 007b ES: 007b FS: 0000 GS: 0033 SS: 007b EFLAGS: 00000282
[ 43.368681] ---[ end trace 323a8199e30cb153 ]---
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
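
As background on the patch under test: the subject points at storing the x86 jump table entries as relative (offset-based) references instead of absolute addresses. The following stand-alone sketch illustrates that general technique; the struct and field names are invented for the example and do not match the real kernel layout.

/* relative_refs.c -- hypothetical illustration of relative jump entries */
#include <stdint.h>

/* classic layout: three absolute addresses, 24 bytes on a 64-bit target,
 * each of which needs fixing up when the image is relocated */
struct jump_entry_abs {
	uint64_t code;    /* address of the patch site       */
	uint64_t target;  /* address of the jump destination */
	uint64_t key;     /* address of the static key       */
};

/* relative layout: signed 32-bit offsets measured from the field's own
 * address, 12 bytes per entry and position independent */
struct jump_entry_rel {
	int32_t code;
	int32_t target;
	int32_t key;
};

/* recover an absolute address: field address plus stored offset */
static inline uintptr_t rel_code(const struct jump_entry_rel *e)
{
	return (uintptr_t)&e->code + e->code;
}

static inline uintptr_t rel_target(const struct jump_entry_rel *e)
{
	return (uintptr_t)&e->target + e->target;
}

Smaller, relocation-free entries are the usual motivation for such a conversion; diagnosing the __jump_label_update warning above still requires the attached dmesg and the patch itself.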
Thanks,
Xiaolong
[lkp-robot] [fs] 5c6de586e8: vm-scalability.throughput +12.4% improvement
by kernel test robot
Greetings,
FYI, we noticed a +12.4% improvement of vm-scalability.throughput due to commit:
commit: 5c6de586e899a4a80a0ffa26468639f43dee1009 ("[PATCH] fs: shave 8 bytes off of struct inode")
url: https://github.com/0day-ci/linux/commits/Amir-Goldstein/fs-shave-8-bytes-...
in testcase: vm-scalability
on test machine: 56 threads Intel(R) Xeon(R) CPU E5-2695 v3 @ 2.30GHz with 256G memory
with following parameters:
runtime: 300s
test: small-allocs
cpufreq_governor: performance
test-description: The motivation behind this suite is to exercise functions and regions of the mm/ subsystem of the Linux kernel which are of interest to us.
test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
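
The commit subject ("fs: shave 8 bytes off of struct inode") hints at a structure-packing change. As a generic illustration of how rearranging two 32-bit fields can remove 8 bytes of alignment padding on a 64-bit target, here is a small stand-alone C program; the layouts are invented for the example and are not the real struct inode.

/* padding_demo.c -- hypothetical illustration of saving 8 bytes of padding */
#include <stdio.h>
#include <stdint.h>

struct before {
	uint64_t a;
	uint32_t flags;   /* 4 bytes ...                                    */
	/* 4 bytes of padding inserted here so that b stays 8-byte aligned  */
	uint64_t b;
	uint32_t count;
	/* 4 bytes of tail padding to round the size up to a multiple of 8  */
};                        /* sizeof == 32                                   */

struct after {
	uint64_t a;
	uint64_t b;
	uint32_t flags;   /* the two 32-bit fields now share one 8-byte slot */
	uint32_t count;
};                        /* sizeof == 24, i.e. 8 bytes saved               */

int main(void)
{
	printf("before=%zu after=%zu\n", sizeof(struct before), sizeof(struct after));
	return 0;
}

Whether and how such a size change affects a particular workload is what the comparison below measures.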
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/tbox_group/test/testcase:
gcc-7/performance/x86_64-rhel-7.2/debian-x86_64-2018-04-03.cgz/300s/lkp-hsw-ep5/small-allocs/vm-scalability
commit:
8efcf34a26 (" ARM: SoC: late updates")
5c6de586e8 ("fs: shave 8 bytes off of struct inode")
8efcf34a263965e4 5c6de586e899a4a80a0ffa2646
---------------- --------------------------
%stddev %change %stddev
\ | \
19335952 +12.4% 21729332 vm-scalability.throughput
693688 +11.9% 775935 vm-scalability.median
0.56 ± 55% -43.6% 0.32 ± 62% vm-scalability.stddev
288.16 -7.0% 267.96 vm-scalability.time.elapsed_time
288.16 -7.0% 267.96 vm-scalability.time.elapsed_time.max
48921 ± 6% -3.3% 47314 ± 5% vm-scalability.time.involuntary_context_switches
3777 -4.4% 3610 vm-scalability.time.maximum_resident_set_size
1.074e+09 +0.0% 1.074e+09 vm-scalability.time.minor_page_faults
4096 +0.0% 4096 vm-scalability.time.page_size
2672 +0.6% 2689 vm-scalability.time.percent_of_cpu_this_job_got
5457 -9.4% 4942 vm-scalability.time.system_time
2244 +0.9% 2263 vm-scalability.time.user_time
5529533 ± 6% -65.2% 1923014 ± 6% vm-scalability.time.voluntary_context_switches
4.832e+09 +0.0% 4.832e+09 vm-scalability.workload
93827 ± 3% -7.0% 87299 ± 3% interrupts.CAL:Function_call_interrupts
26.50 -2.0% 25.98 ± 3% boot-time.boot
16.69 -3.2% 16.16 ± 5% boot-time.dhcp
674.18 -1.1% 666.43 ± 2% boot-time.idle
17.61 -3.5% 17.00 ± 5% boot-time.kernel_boot
15034 ± 62% -30.0% 10528 ± 79% softirqs.NET_RX
453251 ± 9% -2.9% 440306 ± 14% softirqs.RCU
46795 -14.9% 39806 ± 2% softirqs.SCHED
3565160 ± 8% +6.1% 3784023 softirqs.TIMER
4.87 ± 4% -0.6 4.25 ± 3% mpstat.cpu.idle%
0.00 ± 13% -0.0 0.00 ± 14% mpstat.cpu.iowait%
0.00 ± 37% +0.0 0.00 ± 37% mpstat.cpu.soft%
67.35 -1.8 65.60 mpstat.cpu.sys%
27.78 +2.4 30.14 mpstat.cpu.usr%
1038 -0.5% 1033 vmstat.memory.buff
1117006 -0.3% 1113807 vmstat.memory.cache
2.463e+08 -0.2% 2.457e+08 vmstat.memory.free
26.00 +1.9% 26.50 vmstat.procs.r
42239 ± 6% -55.9% 18619 ± 5% vmstat.system.cs
31359 -0.7% 31132 vmstat.system.in
0.00 -100.0% 0.00 numa-numastat.node0.interleave_hit
2713006 -1.0% 2685108 numa-numastat.node0.local_node
2716657 -1.0% 2689682 numa-numastat.node0.numa_hit
3651 ± 36% +25.3% 4576 ± 34% numa-numastat.node0.other_node
0.00 -100.0% 0.00 numa-numastat.node1.interleave_hit
2713025 -0.5% 2699801 numa-numastat.node1.local_node
2714924 -0.5% 2700769 numa-numastat.node1.numa_hit
1900 ± 68% -49.1% 968.00 ±162% numa-numastat.node1.other_node
21859882 ± 6% -56.1% 9599175 ± 2% cpuidle.C1.time
5231991 ± 6% -65.3% 1814200 ± 8% cpuidle.C1.usage
620147 ± 9% -22.4% 481528 ± 10% cpuidle.C1E.time
7829 ± 6% -34.5% 5126 ± 17% cpuidle.C1E.usage
5343219 ± 5% -58.8% 2202020 ± 4% cpuidle.C3.time
22942 ± 5% -54.9% 10349 ± 4% cpuidle.C3.usage
3.345e+08 ± 3% -15.1% 2.839e+08 ± 3% cpuidle.C6.time
355754 ± 3% -16.3% 297683 ± 3% cpuidle.C6.usage
248800 ± 6% -74.5% 63413 ± 5% cpuidle.POLL.time
90897 ± 6% -76.4% 21409 ± 7% cpuidle.POLL.usage
2631 +0.5% 2644 turbostat.Avg_MHz
95.35 +0.5 95.88 turbostat.Busy%
2759 -0.1% 2757 turbostat.Bzy_MHz
5227940 ± 6% -65.4% 1809983 ± 8% turbostat.C1
0.27 ± 5% -0.1 0.13 ± 3% turbostat.C1%
7646 ± 5% -36.0% 4894 ± 17% turbostat.C1E
0.01 +0.0 0.01 turbostat.C1E%
22705 ± 5% -56.0% 9995 ± 3% turbostat.C3
0.07 ± 7% -0.0 0.03 turbostat.C3%
354625 ± 3% -16.3% 296732 ± 3% turbostat.C6
4.11 ± 3% -0.4 3.75 ± 2% turbostat.C6%
1.68 ± 3% -15.6% 1.42 ± 2% turbostat.CPU%c1
0.04 -62.5% 0.01 ± 33% turbostat.CPU%c3
2.93 ± 3% -8.1% 2.69 ± 2% turbostat.CPU%c6
64.50 -1.6% 63.50 ± 2% turbostat.CoreTmp
9095020 -7.6% 8400668 turbostat.IRQ
11.78 ± 5% -2.3 9.45 ± 6% turbostat.PKG_%
0.10 ± 27% -4.9% 0.10 ± 22% turbostat.Pkg%pc2
0.00 ±173% -100.0% 0.00 turbostat.Pkg%pc6
68.50 ± 2% +0.0% 68.50 turbostat.PkgTmp
230.97 +0.1% 231.10 turbostat.PkgWatt
22.45 -0.8% 22.27 turbostat.RAMWatt
10304 -8.6% 9415 ± 2% turbostat.SMI
2300 +0.0% 2300 turbostat.TSC_MHz
269189 -0.6% 267505 meminfo.Active
269118 -0.6% 267429 meminfo.Active(anon)
167369 -1.7% 164478 meminfo.AnonHugePages
245606 +0.2% 246183 meminfo.AnonPages
1042 -0.2% 1039 meminfo.Buffers
1066883 -0.2% 1064908 meminfo.Cached
203421 -0.0% 203421 meminfo.CmaFree
204800 +0.0% 204800 meminfo.CmaTotal
1.32e+08 -0.0% 1.32e+08 meminfo.CommitLimit
485388 ± 15% -5.3% 459546 ± 9% meminfo.Committed_AS
2.65e+08 +0.0% 2.65e+08 meminfo.DirectMap1G
5240434 ± 8% +0.3% 5257281 ± 16% meminfo.DirectMap2M
169088 ± 6% -10.0% 152241 ± 5% meminfo.DirectMap4k
2048 +0.0% 2048 meminfo.Hugepagesize
150295 +0.1% 150389 meminfo.Inactive
149164 +0.1% 149264 meminfo.Inactive(anon)
1130 -0.6% 1124 meminfo.Inactive(file)
7375 -0.4% 7345 meminfo.KernelStack
28041 -0.5% 27895 meminfo.Mapped
2.451e+08 -0.2% 2.445e+08 meminfo.MemAvailable
2.462e+08 -0.2% 2.457e+08 meminfo.MemFree
2.64e+08 -0.0% 2.64e+08 meminfo.MemTotal
1179 ± 57% -35.2% 764.50 ±100% meminfo.Mlocked
4507108 +3.4% 4660163 meminfo.PageTables
50580 -1.9% 49633 meminfo.SReclaimable
11620141 +3.3% 12007592 meminfo.SUnreclaim
172885 -1.4% 170543 meminfo.Shmem
11670722 +3.3% 12057225 meminfo.Slab
894024 +0.0% 894327 meminfo.Unevictable
3.436e+10 +0.0% 3.436e+10 meminfo.VmallocTotal
6.873e+12 -2.1% 6.727e+12 perf-stat.branch-instructions
0.08 ± 2% -0.0 0.08 perf-stat.branch-miss-rate%
5.269e+09 ± 2% -3.7% 5.072e+09 perf-stat.branch-misses
37.34 +1.1 38.44 perf-stat.cache-miss-rate%
8.136e+09 ± 2% -7.0% 7.568e+09 ± 3% perf-stat.cache-misses
2.179e+10 ± 2% -9.6% 1.969e+10 ± 3% perf-stat.cache-references
12287807 ± 6% -59.1% 5020001 ± 5% perf-stat.context-switches
0.87 -3.5% 0.84 perf-stat.cpi
2.116e+13 -6.5% 1.978e+13 perf-stat.cpu-cycles
24160 ± 5% -13.8% 20819 ± 3% perf-stat.cpu-migrations
0.16 ± 6% +0.0 0.16 ± 4% perf-stat.dTLB-load-miss-rate%
1.057e+10 ± 6% +0.4% 1.061e+10 ± 5% perf-stat.dTLB-load-misses
6.792e+12 -3.5% 6.557e+12 perf-stat.dTLB-loads
0.00 ± 9% +0.0 0.00 ± 20% perf-stat.dTLB-store-miss-rate%
22950816 ± 9% +6.2% 24373827 ± 20% perf-stat.dTLB-store-misses
9.067e+11 -0.7% 9.005e+11 perf-stat.dTLB-stores
95.10 +2.7 97.81 perf-stat.iTLB-load-miss-rate%
2.437e+09 +6.8% 2.604e+09 ± 4% perf-stat.iTLB-load-misses
1.257e+08 ± 8% -53.7% 58211601 ± 6% perf-stat.iTLB-loads
2.44e+13 -3.1% 2.364e+13 perf-stat.instructions
10011 -9.1% 9100 ± 4% perf-stat.instructions-per-iTLB-miss
1.15 +3.6% 1.20 perf-stat.ipc
1.074e+09 -0.0% 1.074e+09 perf-stat.minor-faults
64.40 ± 4% -4.2 60.22 ± 5% perf-stat.node-load-miss-rate%
4.039e+09 ± 2% -10.9% 3.599e+09 ± 3% perf-stat.node-load-misses
2.241e+09 ± 10% +6.7% 2.39e+09 ± 12% perf-stat.node-loads
49.24 -1.7 47.53 perf-stat.node-store-miss-rate%
9.005e+08 -18.3% 7.357e+08 ± 2% perf-stat.node-store-misses
9.282e+08 -12.5% 8.123e+08 ± 2% perf-stat.node-stores
1.074e+09 -0.0% 1.074e+09 perf-stat.page-faults
5049 -3.1% 4893 perf-stat.path-length
67282 -0.6% 66860 proc-vmstat.nr_active_anon
61404 +0.2% 61545 proc-vmstat.nr_anon_pages
6117769 -0.2% 6104302 proc-vmstat.nr_dirty_background_threshold
12250497 -0.2% 12223531 proc-vmstat.nr_dirty_threshold
266952 -0.2% 266464 proc-vmstat.nr_file_pages
50855 -0.0% 50855 proc-vmstat.nr_free_cma
61552505 -0.2% 61417644 proc-vmstat.nr_free_pages
37259 +0.1% 37286 proc-vmstat.nr_inactive_anon
282.25 -0.6% 280.50 proc-vmstat.nr_inactive_file
7375 -0.4% 7344 proc-vmstat.nr_kernel_stack
7124 -0.5% 7085 proc-vmstat.nr_mapped
295.00 ± 57% -35.3% 190.75 ±100% proc-vmstat.nr_mlock
1125531 +3.4% 1163908 proc-vmstat.nr_page_table_pages
43193 -1.3% 42613 proc-vmstat.nr_shmem
12644 -1.9% 12407 proc-vmstat.nr_slab_reclaimable
2901812 +3.3% 2998814 proc-vmstat.nr_slab_unreclaimable
223506 +0.0% 223581 proc-vmstat.nr_unevictable
67282 -0.6% 66860 proc-vmstat.nr_zone_active_anon
37259 +0.1% 37286 proc-vmstat.nr_zone_inactive_anon
282.25 -0.6% 280.50 proc-vmstat.nr_zone_inactive_file
223506 +0.0% 223581 proc-vmstat.nr_zone_unevictable
2685 ±104% +108.2% 5591 ± 84% proc-vmstat.numa_hint_faults
1757 ±140% +77.9% 3125 ± 87% proc-vmstat.numa_hint_faults_local
5456982 -0.7% 5417832 proc-vmstat.numa_hit
5451431 -0.7% 5412285 proc-vmstat.numa_local
5551 -0.1% 5547 proc-vmstat.numa_other
977.50 ± 40% +193.7% 2871 ±101% proc-vmstat.numa_pages_migrated
10519 ±113% +140.4% 25286 ± 97% proc-vmstat.numa_pte_updates
10726 ± 8% -3.8% 10315 ± 7% proc-vmstat.pgactivate
8191987 -0.5% 8150192 proc-vmstat.pgalloc_normal
1.074e+09 -0.0% 1.074e+09 proc-vmstat.pgfault
8143430 -2.7% 7926613 proc-vmstat.pgfree
977.50 ± 40% +193.7% 2871 ±101% proc-vmstat.pgmigrate_success
2155 -0.4% 2147 proc-vmstat.pgpgin
2049 -0.0% 2048 proc-vmstat.pgpgout
67375 -0.2% 67232 slabinfo.Acpi-Namespace.active_objs
67375 -0.2% 67232 slabinfo.Acpi-Namespace.num_objs
604.00 ± 19% -0.1% 603.50 ± 9% slabinfo.Acpi-ParseExt.active_objs
604.00 ± 19% -0.1% 603.50 ± 9% slabinfo.Acpi-ParseExt.num_objs
7972 ± 4% -0.3% 7949 ± 2% slabinfo.anon_vma.active_objs
7972 ± 4% -0.3% 7949 ± 2% slabinfo.anon_vma.num_objs
1697 ± 12% -10.0% 1528 ± 7% slabinfo.avtab_node.active_objs
1697 ± 12% -10.0% 1528 ± 7% slabinfo.avtab_node.num_objs
58071 -0.1% 58034 slabinfo.dentry.active_objs
1351 ± 25% -31.2% 930.50 ± 30% slabinfo.dmaengine-unmap-16.active_objs
1351 ± 25% -31.2% 930.50 ± 30% slabinfo.dmaengine-unmap-16.num_objs
1354 ± 6% -7.1% 1258 ± 9% slabinfo.eventpoll_pwq.active_objs
1354 ± 6% -7.1% 1258 ± 9% slabinfo.eventpoll_pwq.num_objs
8880 ± 5% +0.0% 8883 ± 6% slabinfo.filp.num_objs
2715 ± 7% -9.0% 2470 ± 5% slabinfo.kmalloc-1024.active_objs
2831 ± 7% -10.2% 2543 ± 3% slabinfo.kmalloc-1024.num_objs
12480 -2.7% 12140 ± 2% slabinfo.kmalloc-16.active_objs
12480 -2.7% 12140 ± 2% slabinfo.kmalloc-16.num_objs
39520 ± 3% +1.0% 39922 slabinfo.kmalloc-32.active_objs
37573 +0.1% 37616 slabinfo.kmalloc-64.active_objs
37590 +0.2% 37673 slabinfo.kmalloc-64.num_objs
17182 +2.1% 17550 ± 2% slabinfo.kmalloc-8.active_objs
17663 +2.2% 18045 ± 2% slabinfo.kmalloc-8.num_objs
4428 ± 7% -2.9% 4298 ± 4% slabinfo.kmalloc-96.active_objs
956.50 ± 10% +4.5% 999.50 ± 16% slabinfo.nsproxy.active_objs
956.50 ± 10% +4.5% 999.50 ± 16% slabinfo.nsproxy.num_objs
19094 ± 3% -3.3% 18463 ± 5% slabinfo.pid.active_objs
19094 ± 3% -3.0% 18523 ± 5% slabinfo.pid.num_objs
2088 ± 14% -21.1% 1648 ± 8% slabinfo.skbuff_head_cache.active_objs
2136 ± 16% -19.5% 1720 ± 7% slabinfo.skbuff_head_cache.num_objs
872.00 ± 16% -9.3% 791.00 ± 15% slabinfo.task_group.active_objs
872.00 ± 16% -9.3% 791.00 ± 15% slabinfo.task_group.num_objs
57677936 +3.4% 59655533 slabinfo.vm_area_struct.active_objs
1442050 +3.4% 1491482 slabinfo.vm_area_struct.active_slabs
57682039 +3.4% 59659325 slabinfo.vm_area_struct.num_objs
1442050 +3.4% 1491482 slabinfo.vm_area_struct.num_slabs
132527 ± 15% +1.9% 135006 ± 2% numa-meminfo.node0.Active
132475 ± 15% +1.9% 134985 ± 2% numa-meminfo.node0.Active(anon)
86736 ± 32% +6.5% 92413 ± 15% numa-meminfo.node0.AnonHugePages
124613 ± 19% +3.4% 128788 ± 4% numa-meminfo.node0.AnonPages
556225 ± 11% -5.1% 527584 ± 12% numa-meminfo.node0.FilePages
100973 ± 57% -27.7% 73011 ± 89% numa-meminfo.node0.Inactive
100128 ± 57% -27.4% 72686 ± 90% numa-meminfo.node0.Inactive(anon)
843.75 ± 57% -61.5% 324.75 ±115% numa-meminfo.node0.Inactive(file)
4058 ± 3% +3.5% 4200 ± 4% numa-meminfo.node0.KernelStack
12846 ± 26% +1.6% 13053 ± 26% numa-meminfo.node0.Mapped
1.228e+08 -0.1% 1.226e+08 numa-meminfo.node0.MemFree
1.32e+08 +0.0% 1.32e+08 numa-meminfo.node0.MemTotal
9191573 +1.7% 9347737 numa-meminfo.node0.MemUsed
2321138 +2.2% 2372650 numa-meminfo.node0.PageTables
24816 ± 13% -5.8% 23377 ± 16% numa-meminfo.node0.SReclaimable
5985628 +2.2% 6116406 numa-meminfo.node0.SUnreclaim
108046 ± 57% -26.9% 78955 ± 80% numa-meminfo.node0.Shmem
6010444 +2.2% 6139784 numa-meminfo.node0.Slab
447344 +0.2% 448404 numa-meminfo.node0.Unevictable
136675 ± 13% -3.0% 132521 ± 2% numa-meminfo.node1.Active
136655 ± 13% -3.1% 132467 ± 2% numa-meminfo.node1.Active(anon)
80581 ± 35% -10.5% 72102 ± 18% numa-meminfo.node1.AnonHugePages
120993 ± 20% -3.0% 117407 ± 4% numa-meminfo.node1.AnonPages
511901 ± 12% +5.2% 538396 ± 11% numa-meminfo.node1.FilePages
49526 ±116% +56.3% 77402 ± 84% numa-meminfo.node1.Inactive
49238 ±115% +55.6% 76603 ± 85% numa-meminfo.node1.Inactive(anon)
287.75 ±168% +177.6% 798.75 ± 47% numa-meminfo.node1.Inactive(file)
3316 ± 3% -5.2% 3143 ± 5% numa-meminfo.node1.KernelStack
15225 ± 23% -2.0% 14918 ± 22% numa-meminfo.node1.Mapped
1.234e+08 -0.3% 1.23e+08 numa-meminfo.node1.MemFree
1.321e+08 -0.0% 1.321e+08 numa-meminfo.node1.MemTotal
8664187 +4.4% 9041729 numa-meminfo.node1.MemUsed
2185511 +4.6% 2286356 numa-meminfo.node1.PageTables
25762 ± 12% +1.9% 26255 ± 15% numa-meminfo.node1.SReclaimable
5634774 +4.5% 5887602 numa-meminfo.node1.SUnreclaim
65037 ± 92% +40.9% 91621 ± 68% numa-meminfo.node1.Shmem
5660536 +4.5% 5913858 numa-meminfo.node1.Slab
446680 -0.2% 445922 numa-meminfo.node1.Unevictable
15553 ± 18% -14.1% 13366 ± 11% numa-vmstat.node0
33116 ± 15% +1.9% 33742 ± 2% numa-vmstat.node0.nr_active_anon
31157 ± 19% +3.3% 32196 ± 4% numa-vmstat.node0.nr_anon_pages
139001 ± 11% -5.1% 131864 ± 12% numa-vmstat.node0.nr_file_pages
30692447 -0.1% 30654328 numa-vmstat.node0.nr_free_pages
24983 ± 57% -27.4% 18142 ± 90% numa-vmstat.node0.nr_inactive_anon
210.25 ± 57% -61.6% 80.75 ±116% numa-vmstat.node0.nr_inactive_file
4058 ± 3% +3.5% 4199 ± 4% numa-vmstat.node0.nr_kernel_stack
3304 ± 26% +1.2% 3344 ± 25% numa-vmstat.node0.nr_mapped
139.00 ± 60% -20.3% 110.75 ±100% numa-vmstat.node0.nr_mlock
579931 +2.1% 592262 numa-vmstat.node0.nr_page_table_pages
26956 ± 57% -26.9% 19707 ± 80% numa-vmstat.node0.nr_shmem
6203 ± 13% -5.8% 5844 ± 16% numa-vmstat.node0.nr_slab_reclaimable
1495541 +2.2% 1527781 numa-vmstat.node0.nr_slab_unreclaimable
111835 +0.2% 112100 numa-vmstat.node0.nr_unevictable
33116 ± 15% +1.9% 33742 ± 2% numa-vmstat.node0.nr_zone_active_anon
24983 ± 57% -27.4% 18142 ± 90% numa-vmstat.node0.nr_zone_inactive_anon
210.25 ± 57% -61.6% 80.75 ±116% numa-vmstat.node0.nr_zone_inactive_file
111835 +0.2% 112100 numa-vmstat.node0.nr_zone_unevictable
1840693 ± 2% +1.6% 1869501 numa-vmstat.node0.numa_hit
144048 +0.2% 144385 numa-vmstat.node0.numa_interleave
1836656 ± 2% +1.5% 1864579 numa-vmstat.node0.numa_local
4036 ± 33% +21.9% 4921 ± 31% numa-vmstat.node0.numa_other
11577 ± 24% +17.8% 13635 ± 11% numa-vmstat.node1
34171 ± 13% -3.1% 33126 ± 2% numa-vmstat.node1.nr_active_anon
30247 ± 20% -3.0% 29352 ± 4% numa-vmstat.node1.nr_anon_pages
127979 ± 12% +5.2% 134601 ± 11% numa-vmstat.node1.nr_file_pages
50855 -0.0% 50855 numa-vmstat.node1.nr_free_cma
30858027 -0.3% 30763794 numa-vmstat.node1.nr_free_pages
12305 ±116% +55.6% 19145 ± 85% numa-vmstat.node1.nr_inactive_anon
71.75 ±168% +177.7% 199.25 ± 47% numa-vmstat.node1.nr_inactive_file
3315 ± 3% -5.2% 3144 ± 5% numa-vmstat.node1.nr_kernel_stack
3823 ± 23% -1.8% 3754 ± 22% numa-vmstat.node1.nr_mapped
155.00 ± 60% -48.2% 80.25 ±100% numa-vmstat.node1.nr_mlock
545973 +4.6% 571069 numa-vmstat.node1.nr_page_table_pages
16263 ± 92% +40.9% 22907 ± 68% numa-vmstat.node1.nr_shmem
6440 ± 12% +1.9% 6563 ± 15% numa-vmstat.node1.nr_slab_reclaimable
1407904 +4.5% 1471030 numa-vmstat.node1.nr_slab_unreclaimable
111669 -0.2% 111480 numa-vmstat.node1.nr_unevictable
34171 ± 13% -3.1% 33126 ± 2% numa-vmstat.node1.nr_zone_active_anon
12305 ±116% +55.6% 19145 ± 85% numa-vmstat.node1.nr_zone_inactive_anon
71.75 ±168% +177.7% 199.25 ± 47% numa-vmstat.node1.nr_zone_inactive_file
111669 -0.2% 111480 numa-vmstat.node1.nr_zone_unevictable
1846889 ± 2% +1.4% 1872108 numa-vmstat.node1.numa_hit
144151 -0.2% 143830 numa-vmstat.node1.numa_interleave
1699975 ± 2% +1.6% 1726462 numa-vmstat.node1.numa_local
146913 -0.9% 145645 numa-vmstat.node1.numa_other
0.00 +1.2e+12% 12083 ±100% sched_debug.cfs_rq:/.MIN_vruntime.avg
0.00 +3.4e+13% 338347 ±100% sched_debug.cfs_rq:/.MIN_vruntime.max
0.00 +0.0% 0.00 sched_debug.cfs_rq:/.MIN_vruntime.min
0.00 +1.5e+28% 62789 ±100% sched_debug.cfs_rq:/.MIN_vruntime.stddev
118226 +0.4% 118681 sched_debug.cfs_rq:/.exec_clock.avg
119425 +0.3% 119724 sched_debug.cfs_rq:/.exec_clock.max
117183 +0.1% 117244 sched_debug.cfs_rq:/.exec_clock.min
395.73 ± 14% +11.0% 439.18 ± 23% sched_debug.cfs_rq:/.exec_clock.stddev
32398 +14.1% 36980 ± 9% sched_debug.cfs_rq:/.load.avg
73141 ± 5% +128.0% 166780 ± 57% sched_debug.cfs_rq:/.load.max
17867 ± 19% +30.4% 23301 ± 13% sched_debug.cfs_rq:/.load.min
11142 ± 3% +146.4% 27458 ± 62% sched_debug.cfs_rq:/.load.stddev
59.52 ± 2% -2.3% 58.13 ± 6% sched_debug.cfs_rq:/.load_avg.avg
305.15 ± 10% -8.8% 278.35 ± 3% sched_debug.cfs_rq:/.load_avg.max
27.20 ± 6% +8.6% 29.55 sched_debug.cfs_rq:/.load_avg.min
71.12 ± 9% -8.1% 65.38 ± 5% sched_debug.cfs_rq:/.load_avg.stddev
0.00 +1.2e+12% 12083 ±100% sched_debug.cfs_rq:/.max_vruntime.avg
0.00 +3.4e+13% 338347 ±100% sched_debug.cfs_rq:/.max_vruntime.max
0.00 +0.0% 0.00 sched_debug.cfs_rq:/.max_vruntime.min
0.00 +1.5e+28% 62789 ±100% sched_debug.cfs_rq:/.max_vruntime.stddev
3364874 +0.8% 3391671 sched_debug.cfs_rq:/.min_vruntime.avg
3399316 +0.7% 3423051 sched_debug.cfs_rq:/.min_vruntime.max
3303899 +0.9% 3335083 sched_debug.cfs_rq:/.min_vruntime.min
20061 ± 14% -2.5% 19552 ± 16% sched_debug.cfs_rq:/.min_vruntime.stddev
0.87 +3.1% 0.89 ± 2% sched_debug.cfs_rq:/.nr_running.avg
1.00 +5.0% 1.05 ± 8% sched_debug.cfs_rq:/.nr_running.max
0.50 ± 19% +20.0% 0.60 sched_debug.cfs_rq:/.nr_running.min
0.16 ± 14% -6.0% 0.15 ± 11% sched_debug.cfs_rq:/.nr_running.stddev
4.14 ± 6% -6.5% 3.87 ± 10% sched_debug.cfs_rq:/.nr_spread_over.avg
15.10 ± 7% +29.8% 19.60 ± 12% sched_debug.cfs_rq:/.nr_spread_over.max
1.50 ± 11% -16.7% 1.25 ± 20% sched_debug.cfs_rq:/.nr_spread_over.min
2.82 ± 8% +23.0% 3.47 ± 13% sched_debug.cfs_rq:/.nr_spread_over.stddev
7.31 -6.2% 6.86 ± 69% sched_debug.cfs_rq:/.removed.load_avg.avg
204.80 -28.2% 147.10 ± 57% sched_debug.cfs_rq:/.removed.load_avg.max
38.01 -20.3% 30.29 ± 60% sched_debug.cfs_rq:/.removed.load_avg.stddev
337.44 -6.0% 317.11 ± 69% sched_debug.cfs_rq:/.removed.runnable_sum.avg
9448 -27.8% 6819 ± 57% sched_debug.cfs_rq:/.removed.runnable_sum.max
1753 -20.2% 1399 ± 60% sched_debug.cfs_rq:/.removed.runnable_sum.stddev
2.16 ± 56% -25.7% 1.60 ± 57% sched_debug.cfs_rq:/.removed.util_avg.avg
60.40 ± 56% -37.3% 37.90 ± 61% sched_debug.cfs_rq:/.removed.util_avg.max
11.21 ± 56% -33.8% 7.42 ± 58% sched_debug.cfs_rq:/.removed.util_avg.stddev
30.32 ± 2% +0.6% 30.52 ± 3% sched_debug.cfs_rq:/.runnable_load_avg.avg
80.90 ± 14% -19.2% 65.35 ± 34% sched_debug.cfs_rq:/.runnable_load_avg.max
14.95 ± 24% +47.2% 22.00 ± 12% sched_debug.cfs_rq:/.runnable_load_avg.min
12.10 ± 19% -27.7% 8.74 ± 42% sched_debug.cfs_rq:/.runnable_load_avg.stddev
30853 +13.2% 34914 ± 10% sched_debug.cfs_rq:/.runnable_weight.avg
61794 +152.3% 155911 ± 64% sched_debug.cfs_rq:/.runnable_weight.max
17867 ± 19% +30.4% 23300 ± 13% sched_debug.cfs_rq:/.runnable_weight.min
8534 ± 7% +193.7% 25066 ± 72% sched_debug.cfs_rq:/.runnable_weight.stddev
45914 ± 45% -50.7% 22619 ± 62% sched_debug.cfs_rq:/.spread0.avg
80385 ± 26% -32.8% 53990 ± 34% sched_debug.cfs_rq:/.spread0.max
-15013 +126.3% -33973 sched_debug.cfs_rq:/.spread0.min
20053 ± 14% -2.5% 19541 ± 16% sched_debug.cfs_rq:/.spread0.stddev
964.19 -0.2% 962.12 sched_debug.cfs_rq:/.util_avg.avg
1499 ± 14% -13.2% 1301 ± 4% sched_debug.cfs_rq:/.util_avg.max
510.75 ± 12% +35.6% 692.65 ± 13% sched_debug.cfs_rq:/.util_avg.min
177.84 ± 21% -34.4% 116.72 ± 23% sched_debug.cfs_rq:/.util_avg.stddev
768.04 +5.1% 807.40 sched_debug.cfs_rq:/.util_est_enqueued.avg
1192 ± 14% -22.5% 924.20 sched_debug.cfs_rq:/.util_est_enqueued.max
170.90 ± 99% +89.6% 324.05 ± 21% sched_debug.cfs_rq:/.util_est_enqueued.min
201.22 ± 17% -36.7% 127.44 ± 15% sched_debug.cfs_rq:/.util_est_enqueued.stddev
111567 ± 4% +7.6% 120067 ± 7% sched_debug.cpu.avg_idle.avg
549432 ± 16% -16.6% 458264 ± 5% sched_debug.cpu.avg_idle.max
4419 ± 79% +87.7% 8293 ± 31% sched_debug.cpu.avg_idle.min
123967 ± 13% -8.9% 112928 ± 15% sched_debug.cpu.avg_idle.stddev
147256 -0.4% 146724 sched_debug.cpu.clock.avg
147258 -0.4% 146728 sched_debug.cpu.clock.max
147252 -0.4% 146720 sched_debug.cpu.clock.min
1.70 ± 11% +39.5% 2.37 ± 29% sched_debug.cpu.clock.stddev
147256 -0.4% 146724 sched_debug.cpu.clock_task.avg
147258 -0.4% 146728 sched_debug.cpu.clock_task.max
147252 -0.4% 146720 sched_debug.cpu.clock_task.min
1.70 ± 11% +39.4% 2.37 ± 29% sched_debug.cpu.clock_task.stddev
30.85 +0.4% 30.97 ± 3% sched_debug.cpu.cpu_load[0].avg
84.15 ± 13% -13.8% 72.55 ± 21% sched_debug.cpu.cpu_load[0].max
17.35 ± 24% +26.5% 21.95 ± 24% sched_debug.cpu.cpu_load[0].min
12.75 ± 18% -19.9% 10.22 ± 22% sched_debug.cpu.cpu_load[0].stddev
30.88 +1.1% 31.22 ± 2% sched_debug.cpu.cpu_load[1].avg
77.35 ± 19% -9.8% 69.75 ± 18% sched_debug.cpu.cpu_load[1].max
18.60 ± 19% +31.2% 24.40 ± 10% sched_debug.cpu.cpu_load[1].min
11.10 ± 24% -20.5% 8.82 ± 25% sched_debug.cpu.cpu_load[1].stddev
31.13 +2.3% 31.84 ± 2% sched_debug.cpu.cpu_load[2].avg
71.40 ± 26% +0.4% 71.70 ± 20% sched_debug.cpu.cpu_load[2].max
19.45 ± 21% +32.9% 25.85 ± 5% sched_debug.cpu.cpu_load[2].min
9.61 ± 36% -8.3% 8.81 ± 27% sched_debug.cpu.cpu_load[2].stddev
31.79 +2.9% 32.71 ± 3% sched_debug.cpu.cpu_load[3].avg
76.75 ± 19% +8.1% 82.95 ± 37% sched_debug.cpu.cpu_load[3].max
20.25 ± 14% +33.3% 27.00 ± 3% sched_debug.cpu.cpu_load[3].min
9.88 ± 28% +6.7% 10.54 ± 52% sched_debug.cpu.cpu_load[3].stddev
32.95 +2.3% 33.71 ± 3% sched_debug.cpu.cpu_load[4].avg
107.65 ± 9% +3.9% 111.90 ± 33% sched_debug.cpu.cpu_load[4].max
20.35 ± 8% +30.5% 26.55 sched_debug.cpu.cpu_load[4].min
15.33 ± 12% +1.2% 15.51 ± 46% sched_debug.cpu.cpu_load[4].stddev
1244 +2.0% 1269 sched_debug.cpu.curr->pid.avg
4194 -0.7% 4165 sched_debug.cpu.curr->pid.max
655.90 ± 22% +16.7% 765.75 ± 3% sched_debug.cpu.curr->pid.min
656.54 -0.8% 651.46 sched_debug.cpu.curr->pid.stddev
32481 +13.8% 36973 ± 9% sched_debug.cpu.load.avg
73132 ± 5% +130.1% 168294 ± 56% sched_debug.cpu.load.max
17867 ± 19% +20.3% 21497 sched_debug.cpu.load.min
11194 ± 3% +148.6% 27826 ± 61% sched_debug.cpu.load.stddev
500000 +0.0% 500000 sched_debug.cpu.max_idle_balance_cost.avg
500000 +0.0% 500000 sched_debug.cpu.max_idle_balance_cost.max
500000 +0.0% 500000 sched_debug.cpu.max_idle_balance_cost.min
4294 -0.0% 4294 sched_debug.cpu.next_balance.avg
4294 -0.0% 4294 sched_debug.cpu.next_balance.max
4294 -0.0% 4294 sched_debug.cpu.next_balance.min
0.00 ± 4% -3.6% 0.00 ± 5% sched_debug.cpu.next_balance.stddev
126191 -0.0% 126167 sched_debug.cpu.nr_load_updates.avg
133303 -0.6% 132467 sched_debug.cpu.nr_load_updates.max
124404 +0.4% 124852 sched_debug.cpu.nr_load_updates.min
1826 ± 13% -8.5% 1672 ± 5% sched_debug.cpu.nr_load_updates.stddev
0.90 +2.2% 0.92 ± 2% sched_debug.cpu.nr_running.avg
1.80 ± 7% -5.6% 1.70 ± 5% sched_debug.cpu.nr_running.max
0.50 ± 19% +20.0% 0.60 sched_debug.cpu.nr_running.min
0.29 ± 7% -13.6% 0.25 ± 4% sched_debug.cpu.nr_running.stddev
204123 ± 8% -56.8% 88239 ± 5% sched_debug.cpu.nr_switches.avg
457439 ± 15% -60.4% 181234 ± 8% sched_debug.cpu.nr_switches.max
116531 ± 16% -62.8% 43365 ± 13% sched_debug.cpu.nr_switches.min
72910 ± 22% -50.5% 36095 ± 13% sched_debug.cpu.nr_switches.stddev
0.03 ± 19% -52.9% 0.01 ± 35% sched_debug.cpu.nr_uninterruptible.avg
16.50 ± 14% -11.5% 14.60 ± 16% sched_debug.cpu.nr_uninterruptible.max
-16.90 -26.3% -12.45 sched_debug.cpu.nr_uninterruptible.min
7.97 ± 12% -19.4% 6.42 ± 15% sched_debug.cpu.nr_uninterruptible.stddev
210655 ± 8% -56.7% 91289 ± 5% sched_debug.cpu.sched_count.avg
467518 ± 15% -60.4% 185362 ± 9% sched_debug.cpu.sched_count.max
120764 ± 16% -62.5% 45233 ± 14% sched_debug.cpu.sched_count.min
74259 ± 23% -51.0% 36403 ± 14% sched_debug.cpu.sched_count.stddev
89621 ± 8% -63.5% 32750 ± 7% sched_debug.cpu.sched_goidle.avg
189668 ± 8% -67.5% 61630 ± 18% sched_debug.cpu.sched_goidle.max
54342 ± 16% -68.0% 17412 ± 15% sched_debug.cpu.sched_goidle.min
28685 ± 15% -58.7% 11834 ± 16% sched_debug.cpu.sched_goidle.stddev
109820 ± 8% -56.0% 48303 ± 5% sched_debug.cpu.ttwu_count.avg
144975 ± 13% -41.1% 85424 ± 7% sched_debug.cpu.ttwu_count.max
96409 ± 8% -61.1% 37542 ± 6% sched_debug.cpu.ttwu_count.min
12310 ± 20% -2.4% 12009 ± 14% sched_debug.cpu.ttwu_count.stddev
9749 ± 10% -5.0% 9257 ± 6% sched_debug.cpu.ttwu_local.avg
45094 ± 30% +4.8% 47270 ± 13% sched_debug.cpu.ttwu_local.max
1447 ± 6% +1.2% 1465 ± 7% sched_debug.cpu.ttwu_local.min
12231 ± 24% -6.1% 11487 ± 13% sched_debug.cpu.ttwu_local.stddev
147253 -0.4% 146720 sched_debug.cpu_clk
996147 +0.0% 996147 sched_debug.dl_rq:.dl_bw->bw.avg
996147 +0.0% 996147 sched_debug.dl_rq:.dl_bw->bw.max
996147 +0.0% 996147 sched_debug.dl_rq:.dl_bw->bw.min
4.295e+09 -0.0% 4.295e+09 sched_debug.jiffies
147253 -0.4% 146720 sched_debug.ktime
950.00 +0.0% 950.00 sched_debug.rt_rq:/.rt_runtime.avg
950.00 +0.0% 950.00 sched_debug.rt_rq:/.rt_runtime.max
950.00 +0.0% 950.00 sched_debug.rt_rq:/.rt_runtime.min
0.00 ±146% -69.8% 0.00 ±100% sched_debug.rt_rq:/.rt_time.avg
0.04 ±146% -69.8% 0.01 ±100% sched_debug.rt_rq:/.rt_time.max
0.01 ±146% -69.8% 0.00 ±100% sched_debug.rt_rq:/.rt_time.stddev
147626 -0.3% 147114 sched_debug.sched_clk
1.00 +0.0% 1.00 sched_debug.sched_clock_stable()
4118331 +0.0% 4118331 sched_debug.sysctl_sched.sysctl_sched_features
24.00 +0.0% 24.00 sched_debug.sysctl_sched.sysctl_sched_latency
3.00 +0.0% 3.00 sched_debug.sysctl_sched.sysctl_sched_min_granularity
1.00 +0.0% 1.00 sched_debug.sysctl_sched.sysctl_sched_tunable_scaling
4.00 +0.0% 4.00 sched_debug.sysctl_sched.sysctl_sched_wakeup_granularity
68.63 -2.2 66.43 perf-profile.calltrace.cycles-pp.osq_lock.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.vma_link
73.63 -1.9 71.69 perf-profile.calltrace.cycles-pp.call_rwsem_down_write_failed.down_write.vma_link.mmap_region.do_mmap
73.63 -1.9 71.69 perf-profile.calltrace.cycles-pp.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.vma_link.mmap_region
73.99 -1.9 72.05 perf-profile.calltrace.cycles-pp.down_write.vma_link.mmap_region.do_mmap.vm_mmap_pgoff
78.36 -1.7 76.67 perf-profile.calltrace.cycles-pp.vma_link.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
79.68 -1.5 78.19 perf-profile.calltrace.cycles-pp.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64
81.34 -1.2 80.10 perf-profile.calltrace.cycles-pp.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
83.36 -1.2 82.15 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
81.59 -1.2 80.39 perf-profile.calltrace.cycles-pp.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
83.38 -1.2 82.18 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
82.00 -1.2 80.83 perf-profile.calltrace.cycles-pp.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.44 ± 5% -0.7 1.79 perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
3.17 ± 4% -0.6 2.62 perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
1.15 ± 2% -0.2 0.97 ± 2% perf-profile.calltrace.cycles-pp.up_write.vma_link.mmap_region.do_mmap.vm_mmap_pgoff
1.27 ± 18% -0.0 1.24 ± 16% perf-profile.calltrace.cycles-pp.task_numa_work.task_work_run.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.27 ± 18% -0.0 1.24 ± 16% perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.27 ± 18% -0.0 1.24 ± 16% perf-profile.calltrace.cycles-pp.task_work_run.exit_to_usermode_loop.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.55 ± 4% +0.0 0.58 ± 3% perf-profile.calltrace.cycles-pp.__rb_insert_augmented.vma_link.mmap_region.do_mmap.vm_mmap_pgoff
0.94 ± 14% +0.1 1.02 ± 12% perf-profile.calltrace.cycles-pp.task_numa_work.task_work_run.exit_to_usermode_loop.prepare_exit_to_usermode.swapgs_restore_regs_and_return_to_usermode
0.94 ± 14% +0.1 1.02 ± 12% perf-profile.calltrace.cycles-pp.exit_to_usermode_loop.prepare_exit_to_usermode.swapgs_restore_regs_and_return_to_usermode
0.94 ± 14% +0.1 1.02 ± 12% perf-profile.calltrace.cycles-pp.task_work_run.exit_to_usermode_loop.prepare_exit_to_usermode.swapgs_restore_regs_and_return_to_usermode
6.67 +0.1 6.76 perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault
0.95 ± 14% +0.1 1.04 ± 11% perf-profile.calltrace.cycles-pp.prepare_exit_to_usermode.swapgs_restore_regs_and_return_to_usermode
6.70 +0.1 6.79 perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
6.75 +0.1 6.86 perf-profile.calltrace.cycles-pp.page_fault
0.61 ± 6% +0.1 0.75 ± 3% perf-profile.calltrace.cycles-pp.___perf_sw_event.__perf_sw_event.__do_page_fault.do_page_fault.page_fault
1.09 ± 2% +0.1 1.23 ± 3% perf-profile.calltrace.cycles-pp.vmacache_find.find_vma.__do_page_fault.do_page_fault.page_fault
0.74 ± 4% +0.2 0.91 ± 3% perf-profile.calltrace.cycles-pp.__perf_sw_event.__do_page_fault.do_page_fault.page_fault
1.35 +0.2 1.56 perf-profile.calltrace.cycles-pp.unmapped_area_topdown.arch_get_unmapped_area_topdown.get_unmapped_area.do_mmap.vm_mmap_pgoff
1.39 +0.2 1.61 perf-profile.calltrace.cycles-pp.arch_get_unmapped_area_topdown.get_unmapped_area.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
1.56 +0.2 1.78 ± 2% perf-profile.calltrace.cycles-pp.find_vma.__do_page_fault.do_page_fault.page_fault
1.50 +0.2 1.72 perf-profile.calltrace.cycles-pp.get_unmapped_area.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64
2.71 +0.3 3.05 perf-profile.calltrace.cycles-pp.native_irq_return_iret
2.15 +0.4 2.50 ± 3% perf-profile.calltrace.cycles-pp.vma_interval_tree_insert.vma_link.mmap_region.do_mmap.vm_mmap_pgoff
2.70 ± 5% +0.4 3.07 ± 3% perf-profile.calltrace.cycles-pp.swapgs_restore_regs_and_return_to_usermode
3.83 +0.4 4.26 perf-profile.calltrace.cycles-pp.rwsem_spin_on_owner.rwsem_down_write_failed.call_rwsem_down_write_failed.down_write.vma_link
0.00 +0.5 0.52 perf-profile.calltrace.cycles-pp.perf_event_mmap.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
68.65 -2.2 66.45 perf-profile.children.cycles-pp.osq_lock
73.63 -1.9 71.69 perf-profile.children.cycles-pp.call_rwsem_down_write_failed
73.63 -1.9 71.69 perf-profile.children.cycles-pp.rwsem_down_write_failed
73.99 -1.9 72.05 perf-profile.children.cycles-pp.down_write
78.36 -1.7 76.68 perf-profile.children.cycles-pp.vma_link
79.70 -1.5 78.21 perf-profile.children.cycles-pp.mmap_region
81.35 -1.2 80.12 perf-profile.children.cycles-pp.do_mmap
81.61 -1.2 80.41 perf-profile.children.cycles-pp.vm_mmap_pgoff
83.48 -1.2 82.28 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
83.46 -1.2 82.26 perf-profile.children.cycles-pp.do_syscall_64
82.01 -1.2 80.85 perf-profile.children.cycles-pp.ksys_mmap_pgoff
2.51 ± 5% -0.6 1.87 perf-profile.children.cycles-pp.__handle_mm_fault
3.24 ± 4% -0.5 2.70 perf-profile.children.cycles-pp.handle_mm_fault
1.23 ± 2% -0.2 1.06 ± 2% perf-profile.children.cycles-pp.up_write
0.20 ± 12% -0.1 0.10 ± 7% perf-profile.children.cycles-pp.do_idle
0.20 ± 12% -0.1 0.10 ± 7% perf-profile.children.cycles-pp.secondary_startup_64
0.20 ± 12% -0.1 0.10 ± 7% perf-profile.children.cycles-pp.cpu_startup_entry
0.18 ± 13% -0.1 0.09 ± 13% perf-profile.children.cycles-pp.start_secondary
0.26 ± 11% -0.1 0.18 ± 8% perf-profile.children.cycles-pp.rwsem_wake
0.26 ± 11% -0.1 0.18 ± 6% perf-profile.children.cycles-pp.call_rwsem_wake
0.07 ± 17% -0.1 0.00 perf-profile.children.cycles-pp.intel_idle
0.08 ± 14% -0.1 0.01 ±173% perf-profile.children.cycles-pp.cpuidle_enter_state
0.06 ± 13% -0.1 0.00 perf-profile.children.cycles-pp.schedule
0.06 ± 6% -0.1 0.00 perf-profile.children.cycles-pp.save_stack_trace_tsk
0.18 ± 9% -0.1 0.12 perf-profile.children.cycles-pp.wake_up_q
0.06 ± 7% -0.1 0.00 perf-profile.children.cycles-pp.sched_ttwu_pending
0.06 ± 7% -0.1 0.00 perf-profile.children.cycles-pp.__save_stack_trace
0.18 ± 8% -0.1 0.13 perf-profile.children.cycles-pp.try_to_wake_up
0.05 -0.1 0.00 perf-profile.children.cycles-pp.unwind_next_frame
0.11 ± 6% -0.0 0.06 ± 13% perf-profile.children.cycles-pp.enqueue_task_fair
0.12 ± 9% -0.0 0.07 ± 10% perf-profile.children.cycles-pp.ttwu_do_activate
0.08 ± 5% -0.0 0.04 ± 57% perf-profile.children.cycles-pp.__account_scheduler_latency
0.11 ± 10% -0.0 0.06 ± 13% perf-profile.children.cycles-pp.enqueue_entity
0.10 ± 17% -0.0 0.06 ± 7% perf-profile.children.cycles-pp.__schedule
0.36 ± 5% -0.0 0.32 ± 7% perf-profile.children.cycles-pp.osq_unlock
0.03 ±100% -0.0 0.00 perf-profile.children.cycles-pp.serial8250_console_write
0.03 ±100% -0.0 0.00 perf-profile.children.cycles-pp.uart_console_write
0.03 ±100% -0.0 0.00 perf-profile.children.cycles-pp.wait_for_xmitr
0.03 ±100% -0.0 0.00 perf-profile.children.cycles-pp.serial8250_console_putchar
0.03 ±100% -0.0 0.00 perf-profile.children.cycles-pp.__softirqentry_text_start
0.08 ± 11% -0.0 0.05 ± 9% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.03 ±100% -0.0 0.01 ±173% perf-profile.children.cycles-pp.console_unlock
0.05 ± 9% -0.0 0.04 ± 58% perf-profile.children.cycles-pp.update_load_avg
0.03 ±100% -0.0 0.01 ±173% perf-profile.children.cycles-pp.irq_work_run_list
0.03 ±100% -0.0 0.01 ±173% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.03 ±100% -0.0 0.01 ±173% perf-profile.children.cycles-pp.irq_exit
0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp.process_one_work
0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp.ktime_get
0.01 ±173% -0.0 0.00 perf-profile.children.cycles-pp.__vma_link_file
0.37 ± 3% -0.0 0.36 ± 5% perf-profile.children.cycles-pp.apic_timer_interrupt
0.24 ± 6% -0.0 0.23 ± 6% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.37 ± 4% -0.0 0.36 ± 5% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
0.06 ± 14% -0.0 0.05 ± 9% perf-profile.children.cycles-pp.file_has_perm
0.31 ± 5% -0.0 0.31 ± 7% perf-profile.children.cycles-pp.hrtimer_interrupt
0.16 ± 10% -0.0 0.15 ± 7% perf-profile.children.cycles-pp.update_process_times
0.04 ± 57% -0.0 0.04 ± 58% perf-profile.children.cycles-pp.write
0.06 ± 7% -0.0 0.06 ± 9% perf-profile.children.cycles-pp.native_iret
0.62 ± 5% +0.0 0.62 ± 2% perf-profile.children.cycles-pp.__rb_insert_augmented
0.16 ± 10% +0.0 0.16 ± 9% perf-profile.children.cycles-pp.tick_sched_handle
0.01 ±173% +0.0 0.01 ±173% perf-profile.children.cycles-pp.ksys_write
0.01 ±173% +0.0 0.01 ±173% perf-profile.children.cycles-pp.worker_thread
0.18 ± 9% +0.0 0.18 ± 8% perf-profile.children.cycles-pp.tick_sched_timer
0.05 ± 8% +0.0 0.06 ± 9% perf-profile.children.cycles-pp._cond_resched
0.11 ± 4% +0.0 0.11 ± 3% perf-profile.children.cycles-pp.scheduler_tick
0.08 ± 10% +0.0 0.09 ± 13% perf-profile.children.cycles-pp.mem_cgroup_from_task
0.08 ± 5% +0.0 0.09 ± 7% perf-profile.children.cycles-pp.task_tick_fair
0.06 +0.0 0.07 ± 7% perf-profile.children.cycles-pp.security_mmap_addr
0.07 ± 7% +0.0 0.07 ± 5% perf-profile.children.cycles-pp.pmd_devmap_trans_unstable
0.09 ± 5% +0.0 0.10 ± 5% perf-profile.children.cycles-pp.vma_gap_callbacks_rotate
0.01 ±173% +0.0 0.03 ±100% perf-profile.children.cycles-pp.ret_from_fork
0.01 ±173% +0.0 0.03 ±100% perf-profile.children.cycles-pp.kthread
0.06 ± 13% +0.0 0.07 ± 5% perf-profile.children.cycles-pp.fput
0.07 ± 7% +0.0 0.08 ± 6% perf-profile.children.cycles-pp.__slab_alloc
0.05 +0.0 0.06 ± 6% perf-profile.children.cycles-pp.prepend_path
0.05 +0.0 0.06 ± 6% perf-profile.children.cycles-pp.new_slab
0.00 +0.0 0.01 ±173% perf-profile.children.cycles-pp.ktime_get_update_offsets_now
0.00 +0.0 0.01 ±173% perf-profile.children.cycles-pp.vfs_write
0.00 +0.0 0.01 ±173% perf-profile.children.cycles-pp.down_write_killable
0.06 ± 6% +0.0 0.08 ± 6% perf-profile.children.cycles-pp.___slab_alloc
0.07 +0.0 0.08 ± 5% perf-profile.children.cycles-pp.perf_exclude_event
0.04 ± 58% +0.0 0.06 ± 9% perf-profile.children.cycles-pp.get_page_from_freelist
0.04 ± 58% +0.0 0.06 ± 7% perf-profile.children.cycles-pp.__alloc_pages_nodemask
0.03 ±100% +0.0 0.04 ± 58% perf-profile.children.cycles-pp.__pte_alloc
0.21 ± 3% +0.0 0.23 ± 6% perf-profile.children.cycles-pp.__entry_trampoline_start
0.21 ± 4% +0.0 0.23 ± 2% perf-profile.children.cycles-pp.vma_interval_tree_augment_rotate
0.11 ± 7% +0.0 0.14 ± 8% perf-profile.children.cycles-pp.selinux_mmap_file
0.04 ± 57% +0.0 0.06 perf-profile.children.cycles-pp.kfree
0.04 ± 57% +0.0 0.06 ± 11% perf-profile.children.cycles-pp.avc_has_perm
0.17 ± 4% +0.0 0.19 ± 4% perf-profile.children.cycles-pp.d_path
0.00 +0.0 0.03 ±100% perf-profile.children.cycles-pp.pte_alloc_one
0.01 ±173% +0.0 0.04 ± 57% perf-profile.children.cycles-pp.perf_swevent_event
0.12 ± 10% +0.0 0.15 ± 7% perf-profile.children.cycles-pp.security_mmap_file
0.18 ± 3% +0.0 0.21 ± 3% perf-profile.children.cycles-pp.up_read
0.14 ± 3% +0.0 0.17 ± 3% perf-profile.children.cycles-pp.__might_sleep
0.32 ± 3% +0.0 0.35 ± 4% perf-profile.children.cycles-pp.__fget
0.12 ± 3% +0.0 0.15 ± 2% perf-profile.children.cycles-pp.kmem_cache_alloc_trace
0.16 ± 4% +0.0 0.19 ± 5% perf-profile.children.cycles-pp.kmem_cache_alloc
0.21 ± 3% +0.0 0.25 ± 4% perf-profile.children.cycles-pp.down_read_trylock
0.25 ± 5% +0.0 0.29 ± 4% perf-profile.children.cycles-pp.__vma_link_rb
0.17 ± 6% +0.0 0.21 ± 7% perf-profile.children.cycles-pp.vma_compute_subtree_gap
2.22 ± 15% +0.0 2.26 ± 14% perf-profile.children.cycles-pp.exit_to_usermode_loop
0.21 ± 7% +0.0 0.26 ± 4% perf-profile.children.cycles-pp._raw_spin_lock
0.00 +0.0 0.04 ± 58% perf-profile.children.cycles-pp.perf_iterate_sb
2.22 ± 15% +0.0 2.27 ± 14% perf-profile.children.cycles-pp.task_numa_work
0.16 ± 5% +0.0 0.20 ± 4% perf-profile.children.cycles-pp.___might_sleep
0.24 ± 16% +0.0 0.29 ± 13% perf-profile.children.cycles-pp.vma_policy_mof
2.21 ± 15% +0.0 2.26 ± 14% perf-profile.children.cycles-pp.task_work_run
0.37 ± 3% +0.0 0.42 ± 5% perf-profile.children.cycles-pp.sync_regs
0.08 ± 23% +0.0 0.13 ± 14% perf-profile.children.cycles-pp.get_task_policy
0.39 ± 2% +0.1 0.45 perf-profile.children.cycles-pp.syscall_return_via_sysret
0.46 +0.1 0.53 ± 2% perf-profile.children.cycles-pp.perf_event_mmap
0.97 ± 14% +0.1 1.05 ± 11% perf-profile.children.cycles-pp.prepare_exit_to_usermode
6.75 +0.1 6.85 perf-profile.children.cycles-pp.do_page_fault
6.76 +0.1 6.86 perf-profile.children.cycles-pp.page_fault
6.77 +0.1 6.87 perf-profile.children.cycles-pp.__do_page_fault
1.10 ± 2% +0.1 1.25 ± 3% perf-profile.children.cycles-pp.vmacache_find
0.63 ± 6% +0.2 0.78 ± 3% perf-profile.children.cycles-pp.___perf_sw_event
0.75 ± 4% +0.2 0.93 ± 3% perf-profile.children.cycles-pp.__perf_sw_event
1.35 +0.2 1.56 perf-profile.children.cycles-pp.unmapped_area_topdown
1.42 +0.2 1.63 perf-profile.children.cycles-pp.arch_get_unmapped_area_topdown
1.58 +0.2 1.81 ± 2% perf-profile.children.cycles-pp.find_vma
1.50 +0.2 1.73 perf-profile.children.cycles-pp.get_unmapped_area
2.72 +0.3 3.06 perf-profile.children.cycles-pp.native_irq_return_iret
2.15 +0.4 2.50 ± 3% perf-profile.children.cycles-pp.vma_interval_tree_insert
2.70 ± 5% +0.4 3.07 ± 3% perf-profile.children.cycles-pp.swapgs_restore_regs_and_return_to_usermode
3.83 +0.4 4.26 perf-profile.children.cycles-pp.rwsem_spin_on_owner
68.47 -2.2 66.29 perf-profile.self.cycles-pp.osq_lock
2.16 ± 7% -0.7 1.45 perf-profile.self.cycles-pp.__handle_mm_fault
0.74 ± 3% -0.1 0.64 ± 2% perf-profile.self.cycles-pp.rwsem_down_write_failed
0.97 -0.1 0.89 perf-profile.self.cycles-pp.up_write
0.07 ± 17% -0.1 0.00 perf-profile.self.cycles-pp.intel_idle
0.04 ± 58% -0.0 0.00 perf-profile.self.cycles-pp._raw_spin_lock_irqsave
0.36 ± 5% -0.0 0.32 ± 7% perf-profile.self.cycles-pp.osq_unlock
2.00 ± 14% -0.0 1.99 ± 14% perf-profile.self.cycles-pp.task_numa_work
0.01 ±173% -0.0 0.00 perf-profile.self.cycles-pp.__vma_link_file
0.62 ± 5% -0.0 0.61 ± 2% perf-profile.self.cycles-pp.__rb_insert_augmented
0.06 ± 7% -0.0 0.06 ± 9% perf-profile.self.cycles-pp.native_iret
0.01 ±173% +0.0 0.01 ±173% perf-profile.self.cycles-pp.ksys_mmap_pgoff
0.01 ±173% +0.0 0.01 ±173% perf-profile.self.cycles-pp._cond_resched
0.06 ± 9% +0.0 0.06 ± 14% perf-profile.self.cycles-pp.arch_get_unmapped_area_topdown
0.34 ± 5% +0.0 0.35 ± 3% perf-profile.self.cycles-pp.down_write
0.08 ± 5% +0.0 0.09 ± 4% perf-profile.self.cycles-pp.perf_event_mmap
0.11 ± 7% +0.0 0.11 ± 3% perf-profile.self.cycles-pp.__vma_link_rb
0.08 ± 10% +0.0 0.09 ± 13% perf-profile.self.cycles-pp.mem_cgroup_from_task
0.07 +0.0 0.08 ± 5% perf-profile.self.cycles-pp.do_page_fault
0.06 ± 6% +0.0 0.07 ± 10% perf-profile.self.cycles-pp.pmd_devmap_trans_unstable
0.07 ± 7% +0.0 0.07 ± 5% perf-profile.self.cycles-pp.vma_gap_callbacks_rotate
0.12 ± 8% +0.0 0.13 ± 6% perf-profile.self.cycles-pp.d_path
0.06 +0.0 0.07 ± 10% perf-profile.self.cycles-pp.kmem_cache_alloc
0.00 +0.0 0.01 ±173% perf-profile.self.cycles-pp.kmem_cache_alloc_trace
0.00 +0.0 0.01 ±173% perf-profile.self.cycles-pp.prepend_path
0.06 ± 11% +0.0 0.07 ± 5% perf-profile.self.cycles-pp.fput
0.07 +0.0 0.08 ± 5% perf-profile.self.cycles-pp.perf_exclude_event
0.21 ± 4% +0.0 0.22 perf-profile.self.cycles-pp.vma_interval_tree_augment_rotate
0.21 ± 3% +0.0 0.23 ± 6% perf-profile.self.cycles-pp.__entry_trampoline_start
0.04 ± 57% +0.0 0.06 ± 7% perf-profile.self.cycles-pp.kfree
0.04 ± 57% +0.0 0.06 ± 11% perf-profile.self.cycles-pp.avc_has_perm
0.16 ± 13% +0.0 0.19 ± 13% perf-profile.self.cycles-pp.vma_policy_mof
0.01 ±173% +0.0 0.04 ± 57% perf-profile.self.cycles-pp.perf_swevent_event
0.14 ± 3% +0.0 0.16 ± 2% perf-profile.self.cycles-pp.__might_sleep
0.32 ± 3% +0.0 0.34 ± 5% perf-profile.self.cycles-pp.__fget
0.18 ± 3% +0.0 0.21 ± 3% perf-profile.self.cycles-pp.up_read
0.15 ± 7% +0.0 0.17 ± 2% perf-profile.self.cycles-pp.__perf_sw_event
0.14 ± 10% +0.0 0.18 ± 4% perf-profile.self.cycles-pp.do_mmap
0.21 ± 3% +0.0 0.25 ± 4% perf-profile.self.cycles-pp.down_read_trylock
0.17 ± 6% +0.0 0.21 ± 7% perf-profile.self.cycles-pp.vma_compute_subtree_gap
0.21 ± 7% +0.0 0.25 ± 3% perf-profile.self.cycles-pp._raw_spin_lock
0.00 +0.0 0.04 ± 58% perf-profile.self.cycles-pp.perf_iterate_sb
0.16 ± 5% +0.0 0.20 ± 4% perf-profile.self.cycles-pp.___might_sleep
0.08 ± 23% +0.0 0.12 ± 14% perf-profile.self.cycles-pp.get_task_policy
0.37 ± 3% +0.0 0.42 ± 5% perf-profile.self.cycles-pp.sync_regs
0.00 +0.1 0.05 perf-profile.self.cycles-pp.do_syscall_64
0.39 ± 2% +0.1 0.45 perf-profile.self.cycles-pp.syscall_return_via_sysret
0.48 +0.1 0.55 perf-profile.self.cycles-pp.find_vma
0.69 ± 2% +0.1 0.77 ± 2% perf-profile.self.cycles-pp.mmap_region
0.70 ± 3% +0.1 0.81 ± 2% perf-profile.self.cycles-pp.handle_mm_fault
0.54 ± 7% +0.1 0.69 ± 3% perf-profile.self.cycles-pp.___perf_sw_event
1.10 ± 2% +0.1 1.25 ± 3% perf-profile.self.cycles-pp.vmacache_find
0.73 +0.1 0.88 ± 5% perf-profile.self.cycles-pp.__do_page_fault
1.35 +0.2 1.56 perf-profile.self.cycles-pp.unmapped_area_topdown
1.75 +0.3 2.03 perf-profile.self.cycles-pp.swapgs_restore_regs_and_return_to_usermode
2.72 +0.3 3.06 perf-profile.self.cycles-pp.native_irq_return_iret
2.15 +0.3 2.49 ± 3% perf-profile.self.cycles-pp.vma_interval_tree_insert
3.82 +0.4 4.25 perf-profile.self.cycles-pp.rwsem_spin_on_owner
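The call chains above are dominated by osq_lock underneath rwsem_down_write_failed, i.e. threads spinning while trying to take mmap_sem for writing in vma_link() on the mmap() path. For readers unfamiliar with the small-allocs case, the following is a minimal, hypothetical reproducer sketch (not the actual vm-scalability case-small-allocs source, and the thread/iteration counts are made up) that exercises the same down_write(mmap_sem) path by doing many tiny anonymous mmap() calls from many threads:

/*
 * Hypothetical sketch only: each thread performs many small anonymous
 * mmap() calls, so every thread must take mmap_sem for writing in
 * vma_link(), which is where the profile above spends most of its
 * cycles (osq_lock under rwsem_down_write_failed).
 *
 * Build with: gcc -O2 -pthread reproducer.c -o reproducer
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

#define NR_THREADS        32        /* assumption; the real test sizes this to the machine */
#define ALLOCS_PER_THREAD 100000
#define ALLOC_SIZE        64        /* "small" allocation, rounded up to one page by mmap */

static void *worker(void *arg)
{
	(void)arg;
	for (long i = 0; i < ALLOCS_PER_THREAD; i++) {
		void *p = mmap(NULL, ALLOC_SIZE, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (p == MAP_FAILED) {
			perror("mmap");
			exit(1);
		}
		/* touch the page so the page-fault path in the profile is hit too */
		*(volatile char *)p = 1;
	}
	return NULL;
}

int main(void)
{
	pthread_t tid[NR_THREADS];

	for (int i = 0; i < NR_THREADS; i++)
		pthread_create(&tid[i], NULL, worker, NULL);
	for (int i = 0; i < NR_THREADS; i++)
		pthread_join(tid[i], NULL);
	return 0;
}

With a load like this, the per-mmap cost is dominated by mmap_sem write contention, which is why a change that only shrinks struct inode can still move throughput: less time spent elsewhere per operation shifts the balance between spinning (osq_lock) and sleeping (voluntary context switches), consistent with the -65% drop in vm-scalability.time.voluntary_context_switches above.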
vm-scalability.throughput
2.25e+07 +-+--------------------------------------------------------------+
| O O |
2.2e+07 O-+O O OO O O OO O O O O O |
2.15e+07 +-+ O O O |
| O O |
2.1e+07 +-+ |
| |
2.05e+07 +-+ |
| |
2e+07 +-+ +. +. .++.+ ++ +. + + +.+ |
1.95e+07 +-+ .+ : + + + + + .+ +.++.+ + .+ + : +.+ .|
| ++ +: ++ + + +.+ + +.+ + + |
1.9e+07 +-+ + + + :+ : |
| + + |
1.85e+07 +-+--------------------------------------------------------------+
vm-scalability.median
800000 +-O----------------------------------------------------------------+
O O O O O O O O O |
780000 +-+O O O O O O O |
| O O O O |
760000 +-+ |
| |
740000 +-+ |
| |
720000 +-+ .+ .+ |
| + +.+.++.+ +. + + .+. .+ .+.+ +.+ ++. .+ |
700000 +-+ + : : +. : ++ + + :+ +. + + : .+ + +.|
| ++ :: + + + :.+ :+ + : |
680000 +-+ + + + + |
| |
660000 +-+----------------------------------------------------------------+
vm-scalability.time.voluntary_context_switches
8e+06 +-+-----------------------------------------------------------------+
| +. +. |
7e+06 +-+ : + + +. + +. + + |
| : :+ + : +. + + : :: .+ |
6e+06 +-++.+.+. + + + + : : : .+.++.+ +.+.++. .++.+. |
|+ : + + +.+ +.+ + .|
5e+06 +-+ + |
| |
4e+06 +-+ |
| |
3e+06 +-O O O O OO O |
O O O O O O |
2e+06 +-+ O O OO O OO O |
| |
1e+06 +-+-----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
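For reference, the %change column in the tables above is the plain relative difference between the two raw value columns (base commit 8efcf34a26 on the left, tested commit 5c6de586e8 on the right), with %stddev giving the run-to-run spread of each column. A minimal sketch of the arithmetic, using the first data row as a worked example (the helper below is illustrative, not part of the lkp toolchain):

#include <stdio.h>

/* Relative change of the tested commit's value vs. the base commit's, in percent. */
static double pct_change(double base, double tested)
{
	return (tested - base) / base * 100.0;
}

int main(void)
{
	/* vm-scalability.throughput: 19335952 (8efcf34a26) -> 21729332 (5c6de586e8) */
	printf("%+.1f%%\n", pct_change(19335952.0, 21729332.0));	/* prints +12.4% */
	return 0;
}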
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Xiaolong