0f2122045b ("io_uring: don't rely on weak ->files references"): WARNING: inconsistent lock state
by kernel test robot
Greetings,
The 0day kernel testing robot got the dmesg below, and the first bad commit is
https://git.kernel.org/pub/scm/linux/kernel/git/axboe/linux-block.git for-5.10/io_uring
commit 0f2122045b946241a9e549c2a76cea54fa58a7ff
Author: Jens Axboe <axboe(a)kernel.dk>
AuthorDate: Sun Sep 13 13:09:39 2020 -0600
Commit: Jens Axboe <axboe(a)kernel.dk>
CommitDate: Wed Sep 30 20:32:32 2020 -0600
io_uring: don't rely on weak ->files references
Grab actual references to the files_struct. To avoid circular reference
issues due to this, we add a per-task note that keeps track of what
io_uring contexts a task has used. When the task execs or exits, we
cancel requests against its assigned files based on this tracking.
With that, we can grab proper references to the files table, and no
longer need to rely on stashing away ring_fd and ring_file to check
if the ring_fd may have been closed.
Cc: stable(a)vger.kernel.org # v5.5+
Reviewed-by: Pavel Begunkov <asml.silence(a)gmail.com>
Signed-off-by: Jens Axboe <axboe(a)kernel.dk>
e6c8aa9ac3 io_uring: enable task/files specific overflow flushing
0f2122045b io_uring: don't rely on weak ->files references
faf7b51c06 io_uring: batch account ->req_issue and task struct references
+---------------------------------------------------------+------------+------------+------------+
| | e6c8aa9ac3 | 0f2122045b | faf7b51c06 |
+---------------------------------------------------------+------------+------------+------------+
| boot_successes | 39 | 3 | 4 |
| boot_failures | 0 | 10 | 1 |
| WARNING:inconsistent_lock_state | 0 | 10 | 1 |
| inconsistent{SOFTIRQ-ON-W}->{IN-SOFTIRQ-W}usage | 0 | 10 | 1 |
| invoked_oom-killer:gfp_mask=0x | 0 | 0 | 1 |
| Mem-Info | 0 | 0 | 1 |
| Out_of_memory_and_no_killable_processes | 0 | 0 | 1 |
| Kernel_panic-not_syncing:System_is_deadlocked_on_memory | 0 | 0 | 1 |
+---------------------------------------------------------+------------+------------+------------+
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <lkp(a)intel.com>
[child1:659] mbind (274) returned ENOSYS, marking as inactive.
[child1:659] mq_timedsend (279) returned ENOSYS, marking as inactive.
[main] 10175 iterations. [F:7781 S:2344 HI:2397]
[ 24.610601]
[ 24.610743] ================================
[ 24.611083] WARNING: inconsistent lock state
[ 24.611437] 5.9.0-rc7-00017-g0f2122045b9462 #5 Not tainted
[ 24.611861] --------------------------------
[ 24.612193] inconsistent {SOFTIRQ-ON-W} -> {IN-SOFTIRQ-W} usage.
[ 24.612660] ksoftirqd/0/7 [HC0[0]:SC1[3]:HE0:SE0] takes:
[ 24.613086] f00ed998 (&xa->xa_lock#4){+.?.}-{2:2}, at: xa_destroy+0x43/0xc1
[ 24.613642] {SOFTIRQ-ON-W} state was registered at:
[ 24.614024] lock_acquire+0x20c/0x29b
[ 24.614341] _raw_spin_lock+0x21/0x30
[ 24.614636] io_uring_add_task_file+0xe8/0x13a
[ 24.614987] io_uring_create+0x535/0x6bd
[ 24.615297] io_uring_setup+0x11d/0x136
[ 24.615606] __ia32_sys_io_uring_setup+0xd/0xf
[ 24.615977] do_int80_syscall_32+0x53/0x6c
[ 24.616306] restore_all_switch_stack+0x0/0xb1
[ 24.616677] irq event stamp: 939881
[ 24.616968] hardirqs last enabled at (939880): [<8105592d>] __local_bh_enable_ip+0x13c/0x145
[ 24.617642] hardirqs last disabled at (939881): [<81b6ace3>] _raw_spin_lock_irqsave+0x1b/0x4e
[ 24.618321] softirqs last enabled at (939738): [<81b6c7c8>] __do_softirq+0x3f0/0x45a
[ 24.618924] softirqs last disabled at (939743): [<81055741>] run_ksoftirqd+0x35/0x61
[ 24.619521]
[ 24.619521] other info that might help us debug this:
[ 24.620028] Possible unsafe locking scenario:
[ 24.620028]
[ 24.620492] CPU0
[ 24.620685] ----
[ 24.620894] lock(&xa->xa_lock#4);
[ 24.621168] <Interrupt>
[ 24.621381] lock(&xa->xa_lock#4);
[ 24.621695]
[ 24.621695] *** DEADLOCK ***
[ 24.621695]
[ 24.622154] 1 lock held by ksoftirqd/0/7:
[ 24.622468] #0: 823bfb94 (rcu_callback){....}-{0:0}, at: rcu_process_callbacks+0xc0/0x155
[ 24.623106]
[ 24.623106] stack backtrace:
[ 24.623454] CPU: 0 PID: 7 Comm: ksoftirqd/0 Not tainted 5.9.0-rc7-00017-g0f2122045b9462 #5
[ 24.624090] Call Trace:
[ 24.624284] ? show_stack+0x40/0x46
[ 24.624551] dump_stack+0x1b/0x1d
[ 24.624809] print_usage_bug+0x17a/0x185
[ 24.625142] mark_lock+0x11d/0x1db
[ 24.625474] ? print_shortest_lock_dependencies+0x121/0x121
[ 24.625905] __lock_acquire+0x41e/0x7bf
[ 24.626206] lock_acquire+0x20c/0x29b
[ 24.626517] ? xa_destroy+0x43/0xc1
[ 24.626810] ? lock_acquire+0x20c/0x29b
[ 24.627110] _raw_spin_lock_irqsave+0x3e/0x4e
[ 24.627450] ? xa_destroy+0x43/0xc1
[ 24.627725] xa_destroy+0x43/0xc1
[ 24.627989] __io_uring_free+0x57/0x71
[ 24.628286] ? get_pid+0x22/0x22
[ 24.628544] __put_task_struct+0xf2/0x163
[ 24.628865] put_task_struct+0x1f/0x2a
[ 24.629161] delayed_put_task_struct+0xe2/0xe9
[ 24.629509] rcu_process_callbacks+0x128/0x155
[ 24.629860] __do_softirq+0x1a3/0x45a
[ 24.630151] run_ksoftirqd+0x35/0x61
[ 24.630443] smpboot_thread_fn+0x304/0x31a
[ 24.630763] kthread+0x124/0x139
[ 24.631016] ? sort_range+0x18/0x18
[ 24.631290] ? kthread_create_worker_on_cpu+0x17/0x17
[ 24.631682] ret_from_fork+0x1c/0x28
[child0:778] chown (182) returned ENOSYS, marking as inactive.
errno out of range after doing personality: 3365:Unknown error 3365
[main] 10216 iterations. [F:7803 S:2349 HI:2295]
[ 33.862204] rcu-torture: rcu_torture_read_exit: Start of episode
[ 33.865277] rcu-torture: rcu_torture_read_exit: End of episode
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 11efbd4572d7ac687c526fedb67eba8c41b980e1 9123e3a74ec7b934a4a099e98af6a61c2f80bbf5 --
git bisect bad fd629d05af42ec07a1d3ca03069cd196890731a6 # 05:52 B 0 2 11 0 Merge 'vkoul-dmaengine/topic/warn_fixes' into devel-catchup-202010080223
git bisect bad d4cb9284556efad95b616a3069b4d5b6c438e7d4 # 06:13 B 0 1 10 0 Merge 'block/for-next' into devel-catchup-202010080223
git bisect good 1bffbc8f69d3dab3a776953766181a4599fee091 # 06:41 G 11 0 0 0 Merge 'linux-review/Caleb-Connolly/drm-panel-oneplus6-Add-panel-oneplus6/20201008-022035' into devel-catchup-202010080223
git bisect good 3b8c1fe05b6f9b7709ad8d4f251ff59334a08a63 # 07:01 G 11 0 0 0 Merge 'broonie-misc/arm64-stacktrace' into devel-catchup-202010080223
git bisect good e4cf4a30a7be6c34a0c4475eeec4140ccd1c9a30 # 07:14 G 11 0 0 0 Merge 'frank-w-bpi-r2-4.14/5.9-hdmisnd' into devel-catchup-202010080223
git bisect good 1cb039f3dc1619eb795c54aad0a98fdb379b4237 # 07:34 G 10 0 0 0 bdi: replace BDI_CAP_STABLE_WRITES with a queue and a sb flag
git bisect bad d6359d748a66108fd542dbbb997ec99ff6e13fee # 07:58 B 1 4 1 1 Merge branch 'for-5.10/block' into for-next
git bisect bad 5b09e37e27a878eb50f0eb96fbce8419e932a7d5 # 08:22 B 0 3 12 0 io_uring: io_kiocb_ppos() style change
git bisect bad c8d1ba583fe67c6b5e054d89f1433498a924286f # 08:48 B 0 1 11 0 io_uring: split work handling part of SQPOLL into helper
git bisect bad 9b8284921513fc1ea57d87777283a59b05862f03 # 09:07 B 0 2 11 0 io_uring: reference ->nsproxy for file table commands
git bisect good 2aede0e417db846793c276c7a1bbf7262c8349b0 # 09:37 G 10 0 0 0 io_uring: stash ctx task reference for SQPOLL
git bisect good 76e1b6427fd8246376a97e3227049d49188dfb9c # 09:50 G 11 0 1 1 io_uring: return cancelation status from poll/timeout/files handlers
git bisect bad 0f2122045b946241a9e549c2a76cea54fa58a7ff # 10:11 B 0 4 14 0 io_uring: don't rely on weak ->files references
git bisect good e6c8aa9ac33bd7c968af7816240fc081401fddcd # 10:31 G 10 0 0 0 io_uring: enable task/files specific overflow flushing
# first bad commit: [0f2122045b946241a9e549c2a76cea54fa58a7ff] io_uring: don't rely on weak ->files references
git bisect good e6c8aa9ac33bd7c968af7816240fc081401fddcd # 10:43 G 30 0 0 0 io_uring: enable task/files specific overflow flushing
# extra tests with debug options
git bisect bad 0f2122045b946241a9e549c2a76cea54fa58a7ff # 11:01 B 1 4 0 0 io_uring: don't rely on weak ->files references
# extra tests on head commit of block/for-5.10/io_uring
git bisect bad faf7b51c06973f947776af6c8f8a513475a2bfa1 # 11:19 B 0 1 11 0 io_uring: batch account ->req_issue and task struct references
# bad: [faf7b51c06973f947776af6c8f8a513475a2bfa1] io_uring: batch account ->req_issue and task struct references
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/lkp@lists.01.org
[KVM] 11e2633982: kernel-selftests.kvm.dirty_log_test.fail
by kernel test robot
Greetings,
FYI, we noticed the following commit (built with gcc-9):
commit: 11e2633982d4ab0bbacbfe8d96487bffc566da75 ("[PATCH v12 12/13] KVM: selftests: Let dirty_log_test async for dirty ring test")
url: https://github.com/0day-ci/linux/commits/Peter-Xu/KVM-Dirty-ring-interfac...
base: https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git e9542fdb89751d93026a48a5fba66facc22df6fd
in testcase: kernel-selftests
version: kernel-selftests-x86_64-e8e8f16e-1_20200807
with following parameters:
group: kselftests-kvm
ucode: 0xdc
test-description: The kernel contains a set of "self tests" under the tools/testing/selftests/ directory. These are intended to be small unit tests to exercise individual code paths in the kernel.
test-url: https://www.kernel.org/doc/Documentation/kselftest.txt
on test machine: 8 threads Intel(R) Core(TM) i7-6770HQ CPU @ 2.60GHz with 32G memory
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
# selftests: kvm: dirty_log_test
# Test iterations: 32, interval: 10 (ms)
# Testing Log Mode 'dirty-log'
# Testing guest mode: PA-bits:ANY, VA-bits:48, 4K pages
# guest physical test memory offset: 0x7fbfffc000
# Dirtied 1024 pages
# Total bits checked: dirty (65453), clear (8061104), track_next (3245)
# Testing Log Mode 'clear-log'
# Testing guest mode: PA-bits:ANY, VA-bits:48, 4K pages
# guest physical test memory offset: 0x7fbfffc000
# Dirtied 1024 pages
# Total bits checked: dirty (326519), clear (7800038), track_next (3259)
# Testing Log Mode 'dirty-ring'
# Testing guest mode: PA-bits:ANY, VA-bits:48, 4K pages
# guest physical test memory offset: 0x7fbfffc000
# vcpu stops because vcpu is kicked out...
# Notifying vcpu to continue
# vcpu continues now.
# fetch 0x0 page 0
# fetch 0x1 page 1
...
# fetch 0x1fff3 page 13105
# ==== Test Assertion Failure ====
# dirty_log_test.c:526: test_bit_le(page, bmap)
# pid=10583 tid=10583 - Invalid argument
# 1 0x0000000000403294: vm_dirty_log_verify at dirty_log_test.c:523
# 2 (inlined by) run_test at dirty_log_test.c:743
# 3 0x00000000004026f4: main at dirty_log_test.c:905 (discriminator 3)
# 4 0x00007f2d11a2809a: ?? ??:0
# 5 0x0000000000402809: _start at ??:?
# Page 131151 should have its dirty bit set in this iteration but it is missing
# 8
...
# fetch 0x20050 page 131151
# Iteration 4 collected 64960 pages
# Notifying vcpu to continue
# Iteration 5 collected 0 pages
# vcpu continues now.
# vcpu stops because vcpu is kicked out...
not ok 19 selftests: kvm: dirty_log_test # exit=254
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Rong Chen
0e7dd826b4 ("kmap: Add stray access protection for device pages"): kernel BUG at arch/x86/mm/physaddr.c:80!
by kernel test robot
Greetings,
The 0day kernel testing robot got the dmesg below, and the first bad commit is
https://github.com/weiny2/linux-kernel.git lm-pks-pmem-for-5.10-v11
commit 0e7dd826b4695353fbdf1033b3bee6c1abd39a2b
Author: Ira Weiny <ira.weiny(a)intel.com>
AuthorDate: Tue Mar 31 13:50:26 2020 -0700
Commit: Ira Weiny <ira.weiny(a)intel.com>
CommitDate: Tue Oct 6 19:23:29 2020 -0700
kmap: Add stray access protection for device pages
Device managed pages may have additional protections. These protections
need to be removed prior to valid use by kernel users.
Check for special treatment of device managed pages in kmap and take
action if needed. We use kmap as an interface for generic kernel code
because under normal circumstances it would be a bug for general kernel
code to not use kmap prior to accessing kernel memory. Therefore, this
should allow any valid kernel users to seamlessly use these pages
without issues.
Because of the critical nature of kmap it must be pointed out that the
overhead on regular DRAM is carefully implemented to be as fast as
possible. Furthermore, the underlying MSR write required on device pages
when protected is better than a normal MSR write.
Specifically, WRMSR(MSR_IA32_PKRS) is not serializing but still
maintains ordering properties similar to WRPKRU. The current SDM
section on PKRS needs updating but should be the same as that of WRPKRU.
So to quote from the WRPKRU text:
WRPKRU will never execute speculatively. Memory accesses
affected by PKRU register will not execute (even speculatively)
until all prior executions of WRPKRU have completed execution
and updated the PKRU register.
Still this will make accessing pmem more expensive from the kernel but
the overhead is minimized and many pmem users access this memory through
user page mappings which are not affected at all.
Signed-off-by: Ira Weiny <ira.weiny(a)intel.com>
---
Changes from RFC V2:
Changed enable/disable function names to dev_page_{en,dis}able_access()
highmem update for global PKS
a56c624302 memremap: Add zone device access protection
0e7dd826b4 kmap: Add stray access protection for device pages
6c3553a91c x86/pks: Add a debugfs file for allocated PKS keys
+------------------------------------------+------------+------------+------------+
| | a56c624302 | 0e7dd826b4 | 6c3553a91c |
+------------------------------------------+------------+------------+------------+
| boot_successes | 4 | 0 | 0 |
| boot_failures | 44 | 15 | 13 |
| BUG:kernel_timeout_in_test_stage | 44 | | |
| kernel_BUG_at_arch/x86/mm/physaddr.c | 0 | 15 | 13 |
| invalid_opcode:#[##] | 0 | 15 | 13 |
| EIP:__phys_addr | 0 | 15 | 13 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 15 | 13 |
+------------------------------------------+------------+------------+------------+
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <lkp(a)intel.com>
[ 1.222100] futex hash table entries: 256 (order: 1, 14336 bytes, linear)
[ 1.222100] xor: automatically using best checksumming function avx
[ 1.222100] pinctrl core: initialized pinctrl subsystem
[ 1.222100] regulator-dummy: no parameters
[ 1.222392] ------------[ cut here ]------------
[ 1.223046] kernel BUG at arch/x86/mm/physaddr.c:80!
[ 1.223713] invalid opcode: 0000 [#1]
[ 1.224302] CPU: 0 PID: 1 Comm: swapper Not tainted 5.9.0-rc7-00027-g0e7dd826b46953 #1
[ 1.225385] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
[ 1.226526] EIP: __phys_addr+0x68/0x70
[ 1.227069] Code: 5b 5e 5d c3 8d b6 00 00 00 00 0f 0b 8d b6 00 00 00 00 e8 7b 7e 0f 00 84 c0 74 c4 0f 0b 8d 74 26 00 90 0f 0b 8d b6 00 00 00 00 <0f> 0b 8d b6 00 00 00 00 3d ff ff ff bf 76 49 55 80 3d 20 f2 d3 c2
[ 1.229388] EAX: 000375fe EBX: fffbb000 ECX: fffff000 EDX: 0003ffbb
[ 1.230240] ESI: 3ffbb000 EDI: fffbc000 EBP: ed03ddb4 ESP: ed03ddac
[ 1.231095] DS: 007b ES: 007b FS: 0000 GS: 00e0 SS: 0068 EFLAGS: 00010283
[ 1.232014] CR0: 80050033 CR2: 00000000 CR3: 02e29000 CR4: 003506b0
[ 1.232100] DR0: 00000000 DR1: 00000000 DR2: 00000000 DR3: 00000000
[ 1.232100] DR6: fffe0ff0 DR7: 00000400
[ 1.232100] Call Trace:
[ 1.232100] kmap_to_page+0x36/0x90
[ 1.232100] prep_new_page+0xb5/0x180
[ 1.232100] get_page_from_freelist+0x1a3/0x2c0
[ 1.232100] __alloc_pages_nodemask+0x116/0x280
[ 1.232100] __vmalloc_node_range+0xeb/0x230
[ 1.232100] __vmalloc_node+0x5b/0x70
[ 1.232100] ? bpf_prog_alloc_no_stats+0x26/0xa0
[ 1.232100] __vmalloc+0x14/0x20
[ 1.232100] ? bpf_prog_alloc_no_stats+0x26/0xa0
[ 1.232100] bpf_prog_alloc_no_stats+0x26/0xa0
[ 1.232100] bpf_prog_alloc+0x13/0x60
[ 1.232100] bpf_prog_create+0x3a/0xa0
[ 1.232100] ? init_soundcore+0x2b/0x2b
[ 1.232100] ptp_classifier_init+0x20/0x28
[ 1.232100] ? setattr_prepare+0xe2/0x1d0
[ 1.232100] sock_init+0x75/0x7c
[ 1.232100] do_one_initcall+0x43/0x13f
[ 1.232100] ? init_setup+0xc/0x21
[ 1.232100] ? parse_args+0x1b7/0x250
[ 1.232100] do_initcalls+0x9d/0xc0
[ 1.232100] kernel_init_freeable+0x8d/0xb2
[ 1.232100] ? rest_init+0xe7/0xe7
[ 1.232100] kernel_init+0x8/0xd9
[ 1.232100] ret_from_fork+0x1c/0x28
[ 1.232100] Modules linked in:
[ 1.232111] ---[ end trace 876b7df11f78d0f4 ]---
[ 1.232750] EIP: __phys_addr+0x68/0x70
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 8c8c3f7d530aabc5b9f28197a5f7ac4d095484bf 549738f15da0e5a00275977623be199fbbf7df50 --
git bisect good d57028548dc6aead02b1826be0fc49449a93610e # 19:03 G 10 0 10 11 Merge 'linux-review/Moshe-Shemesh/Add-devlink-reload-action-and-limit-options/20201007-141626' into devel-catchup-202010071552
git bisect bad ae9d70de55fd91a6980f0c4339e6eede6a0dc62e # 19:16 B 0 1 10 0 Merge 'linux-nvme/nvme-5.10' into devel-catchup-202010071552
git bisect bad 8a5e9dc7390fbd8df147cb356352b85667e8b157 # 19:44 B 0 3 12 0 Merge 'saeed/net-next-mlx5' into devel-catchup-202010071552
git bisect good 4a6941369ef3652fc347a56dbe367598c338c232 # 20:42 G 11 0 11 11 Merge 'xen-tip/for-linus-5.9b' into devel-catchup-202010071552
git bisect bad 5f75b7fd2c9840ddd96c8d3c97dd03ce21b1523b # 20:52 B 0 5 14 0 Merge 'weiny2/lm-pks-pmem-for-5.10-v11' into devel-catchup-202010071552
git bisect bad 0f145b651e7b26549122a27dd99533d2c2707b84 # 21:13 B 0 1 14 4 fs/nfs: Utilize new kmap_thread()
git bisect good 4719b20d34fca404fe5c007947dbebc4eb7b403f # 22:05 G 11 0 11 11 x86/pks/test: Add testing for global option
git bisect bad c1fcf43070bdf1b9da0219200ac1edff7c141fc8 # 22:21 B 0 1 10 0 drivers/net: Utilize new kmap_thread()
git bisect bad f9ebb078d5339e12837b9e08dcd6704d8efccc44 # 22:38 B 0 3 13 0 kmap: Introduce k[un]map_thread debugging
git bisect bad 0e7dd826b4695353fbdf1033b3bee6c1abd39a2b # 22:53 B 0 1 10 0 kmap: Add stray access protection for device pages
git bisect good a56c62430254eb0224770b87bfa9677777d08e4c # 23:42 G 10 0 10 10 memremap: Add zone device access protection
# first bad commit: [0e7dd826b4695353fbdf1033b3bee6c1abd39a2b] kmap: Add stray access protection for device pages
git bisect good a56c62430254eb0224770b87bfa9677777d08e4c # 00:36 G 30 0 30 41 memremap: Add zone device access protection
# extra tests with debug options
git bisect bad 0e7dd826b4695353fbdf1033b3bee6c1abd39a2b # 00:48 B 0 1 10 0 kmap: Add stray access protection for device pages
# extra tests on head commit of weiny2/lm-pks-pmem-for-5.10-v11
git bisect bad 6c3553a91c6476dcaaeb09b60eb0ba25a2cec9f7 # 04:11 B 0 1 10 0 x86/pks: Add a debugfs file for allocated PKS keys
# bad: [6c3553a91c6476dcaaeb09b60eb0ba25a2cec9f7] x86/pks: Add a debugfs file for allocated PKS keys
8af7da1149 ("kmap: Add stray access protection for device pages"): kernel BUG at arch/x86/mm/physaddr.c:80!
by kernel test robot
Greetings,
The 0day kernel testing robot got the dmesg below, and the first bad commit is
https://github.com/weiny2/linux-kernel.git lm-pks-pmem-for-5.10-v10
commit 8af7da1149db0a4bd752fabcdb64d80482b7be24
Author: Ira Weiny <ira.weiny(a)intel.com>
AuthorDate: Tue Mar 31 13:50:26 2020 -0700
Commit: Ira Weiny <ira.weiny(a)intel.com>
CommitDate: Tue Oct 6 09:55:08 2020 -0700
kmap: Add stray access protection for device pages
Device managed pages may have additional protections. These protections
need to be removed prior to valid use by kernel users.
Check for special treatment of device managed pages in kmap and take
action if needed. We use kmap as an interface for generic kernel code
because under normal circumstances it would be a bug for general kernel
code to not use kmap prior to accessing kernel memory. Therefore, this
should allow any valid kernel users to seamlessly use these pages
without issues.
Because of the critical nature of kmap it must be pointed out that the
overhead on regular DRAM is carefully implemented to be as fast as
possible. Furthermore, the underlying MSR write required on device pages
when protected is better than a normal MSR write.
Specifically, WRMSR(MSR_IA32_PKRS) is not serializing but still
maintains ordering properties similar to WRPKRU. The current SDM
section on PKRS needs updating but should be the same as that of WRPKRU.
So to quote from the WRPKRU text:
WRPKRU will never execute speculatively. Memory accesses
affected by PKRU register will not execute (even speculatively)
until all prior executions of WRPKRU have completed execution
and updated the PKRU register.
Still this will make accessing pmem more expensive from the kernel but
the overhead is minimized and many pmem users access this memory through
user page mappings which are not affected at all.
Signed-off-by: Ira Weiny <ira.weiny(a)intel.com>
---
Changes from RFC V2:
Changed enable/disable function names to dev_page_{en,dis}able_access()
highmem update for global PKS
2de560860d memremap: Add zone device access protection
8af7da1149 kmap: Add stray access protection for device pages
603749de39 x86/pks: Add a debugfs file for allocated PKS keys
+------------------------------------------+------------+------------+------------+
| | 2de560860d | 8af7da1149 | 603749de39 |
+------------------------------------------+------------+------------+------------+
| boot_successes | 2 | 0 | 0 |
| boot_failures | 44 | 15 | 17 |
| BUG:kernel_timeout_in_test_stage | 43 | | |
| BUG:kernel_timeout_in_torture_test_stage | 1 | | |
| kernel_BUG_at_arch/x86/mm/physaddr.c | 0 | 15 | 17 |
| invalid_opcode:#[##] | 0 | 15 | 17 |
| EIP:__phys_addr | 0 | 15 | 17 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 15 | 17 |
+------------------------------------------+------------+------------+------------+
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <lkp(a)intel.com>
[ 0.241563] Inode-cache hash table entries: 131072 (order: 7, 524288 bytes, linear)
[ 0.243388] mem auto-init: stack:off, heap alloc:on, heap free:on
[ 0.244858] mem auto-init: clearing system memory may take some time...
[ 0.246445] Initializing HighMem for node 0 (000773fe:000bffe0)
[ 0.247884] ------------[ cut here ]------------
[ 0.248978] kernel BUG at arch/x86/mm/physaddr.c:80!
[ 0.250020] invalid opcode: 0000 [#1]
[ 0.250753] CPU: 0 PID: 0 Comm: swapper Not tainted 5.9.0-rc7-00027-g8af7da1149db0a #1
[ 0.252465] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
[ 0.254388] EIP: __phys_addr+0x110/0x11c
[ 0.255237] Code: 00 84 c0 0f 84 5e ff ff ff 6a 00 31 c9 ba 01 00 00 00 b8 e0 f6 f4 81 e8 5e fd 0f 00 0f 0b b8 b8 c0 dc 81 e8 91 72 34 00 66 90 <0f> 0b b8 ac c0 dc 81 e8 83 72 34 00 0f b6 d1 b8 3c c1 dc 81 88 4d
[ 0.259248] EAX: 00000000 EBX: fffbb000 ECX: 00000000 EDX: 00000000
[ 0.260731] ESI: 7ffbb000 EDI: 000773fe EBP: 81d8febc ESP: 81d8feac
[ 0.262226] DS: 007b ES: 007b FS: 0000 GS: 0000 SS: 0068 EFLAGS: 00210083
[ 0.263923] CR0: 80050033 CR2: 00000000 CR3: 020a8000 CR4: 00040690
[ 0.265499] Call Trace:
[ 0.266134] kmap_to_page+0x33/0xa0
[ 0.266991] free_unref_page_prepare+0x23a/0x2a0
[ 0.268126] free_unref_page+0x2e/0x80
[ 0.269099] free_highmem_page+0x20/0x60
[ 0.270097] add_highpages_with_active_regions+0x111/0x151
[ 0.271448] set_highmem_pages_init+0x53/0x69
[ 0.272547] mem_init+0x15/0x1b7
[ 0.273363] start_kernel+0x412/0x733
[ 0.274255] i386_start_kernel+0x47/0x49
[ 0.275221] startup_32_smp+0x15f/0x164
[ 0.276146] Modules linked in:
[ 0.276895] random: get_random_bytes called from init_oops_id+0x42/0x60 with crng_init=0
[ 0.276901] ---[ end trace d1739bb8b37931d5 ]---
[ 0.279817] EIP: __phys_addr+0x110/0x11c
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 603749de39c54505adc2b0521f5b671df14b2b8b fb0155a09b0224a7147cb07a4ce6034c8d29667f --
git bisect bad 2d049687269c7fc9ba691012332e4425b64e3d5d # 06:45 B 0 4 13 0 fs/hfsplus: Utilize new kmap_thread()
git bisect good 1b21e385233f28ddb75bbe07ec5b773bb9c668ba # 07:34 G 11 0 11 11 x86/pks: Add a global pkrs option
git bisect bad 4b9c77848ae836111975fcbc68fd97bb48833097 # 07:42 B 0 8 21 4 drivers/rdma: Utilize new kmap_thread()
git bisect bad 7041880a5637762f5aab92259c207066240d9fc7 # 07:58 B 0 9 22 4 kmap: Introduce k[un]map_thread
git bisect good 2de560860dee9f69520e7064561b8e19a6d148fb # 08:44 G 10 0 10 10 memremap: Add zone device access protection
git bisect bad 8af7da1149db0a4bd752fabcdb64d80482b7be24 # 08:51 B 0 1 10 0 kmap: Add stray access protection for device pages
# first bad commit: [8af7da1149db0a4bd752fabcdb64d80482b7be24] kmap: Add stray access protection for device pages
git bisect good 2de560860dee9f69520e7064561b8e19a6d148fb # 09:37 G 31 0 31 42 memremap: Add zone device access protection
# extra tests with debug options
git bisect bad 8af7da1149db0a4bd752fabcdb64d80482b7be24 # 09:48 B 0 1 10 0 kmap: Add stray access protection for device pages
# extra tests on head commit of weiny2/lm-pks-pmem-for-5.10-v10
git bisect bad 603749de39c54505adc2b0521f5b671df14b2b8b # 10:05 B 0 17 29 0 x86/pks: Add a debugfs file for allocated PKS keys
# bad: [603749de39c54505adc2b0521f5b671df14b2b8b] x86/pks: Add a debugfs file for allocated PKS keys
[sched/fair] fcf0553db6: netperf.Throughput_Mbps -30.8% regression
by kernel test robot
Greetings,
FYI, we noticed a -30.8% regression of netperf.Throughput_Mbps due to commit:
commit: fcf0553db6f4c79387864f6e4ab4a891601f395e ("sched/fair: Remove meaningless imbalance calculation")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: netperf
on test machine: 192 threads Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory
with following parameters:
ip: ipv4
runtime: 900s
nr_threads: 50%
cluster: cs-localhost
test: TCP_STREAM
cpufreq_governor: performance
ucode: 0x5002f01
test-description: Netperf is a benchmark that can be used to measure various aspects of networking performance.
test-url: http://www.netperf.org/netperf/
In addition to that, the commit also has significant impact on the following tests:
+------------------+------------------------------------------------------------------------+
| testcase: change | vm-scalability: vm-scalability.throughput 1.5% improvement |
| test machine | 144 threads Intel(R) Xeon(R) CPU E7-8890 v3 @ 2.50GHz with 512G memory |
| test parameters | cpufreq_governor=performance |
| | runtime=300s |
| | size=8T |
| | test=anon-w-seq |
| | ucode=0x16 |
+------------------+------------------------------------------------------------------------+
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
cluster/compiler/cpufreq_governor/ip/kconfig/nr_threads/rootfs/runtime/tbox_group/test/testcase/ucode:
cs-localhost/gcc-9/performance/ipv4/x86_64-rhel-8.3/50%/debian-10.4-x86_64-20200603.cgz/900s/lkp-csl-2ap3/TCP_STREAM/netperf/0x5002f01
commit:
a349834703 ("sched/fair: Rename sg_lb_stats::sum_nr_running to sum_h_nr_running")
fcf0553db6 ("sched/fair: Remove meaningless imbalance calculation")
a349834703010183 fcf0553db6f4c79387864f6e4ab
---------------- ---------------------------
%stddev %change %stddev
\ | \
3242 ± 9% -30.8% 2243 ± 5% netperf.Throughput_Mbps
311318 ± 9% -30.8% 215389 ± 5% netperf.Throughput_total_Mbps
1481632 ± 44% +121.8% 3286883 ± 29% netperf.time.involuntary_context_switches
43693 ± 3% -23.1% 33615 ± 2% netperf.time.minor_page_faults
2.138e+09 ± 9% -30.8% 1.479e+09 ± 5% netperf.workload
1042132 ± 8% +13.6% 1183437 ± 8% meminfo.DirectMap4k
148685 ± 3% +8.0% 160562 ± 3% numa-meminfo.node3.Shmem
24.60 ± 4% -52.8% 11.60 ± 22% vmstat.cpu.id
74.40 +16.9% 87.00 ± 2% vmstat.cpu.sy
143.80 +17.0% 168.20 ± 2% vmstat.procs.r
1085464 ± 17% -75.7% 263351 ± 30% vmstat.system.cs
24.75 ± 3% -12.8 11.94 ± 21% mpstat.cpu.all.idle%
1.18 ± 5% +0.2 1.41 ± 3% mpstat.cpu.all.irq%
1.56 ± 8% -0.7 0.87 ± 12% mpstat.cpu.all.soft%
72.26 +13.3 85.59 ± 3% mpstat.cpu.all.sys%
0.26 ± 10% -0.1 0.19 ± 9% mpstat.cpu.all.usr%
3.401e+09 ± 44% -91.1% 3.012e+08 ± 32% cpuidle.C1.time
1.436e+08 ± 65% -89.6% 14954139 ± 34% cpuidle.C1.usage
3.536e+10 ± 5% -45.3% 1.932e+10 ± 24% cpuidle.C1E.time
4.126e+08 ± 7% -66.1% 1.399e+08 ± 31% cpuidle.C1E.usage
5.227e+08 ±138% -85.7% 74659149 ± 22% cpuidle.POLL.time
4914892 ± 99% -86.6% 657190 ± 25% cpuidle.POLL.usage
332.80 ± 20% -38.5% 204.80 ± 7% slabinfo.biovec-64.active_objs
332.80 ± 20% -38.5% 204.80 ± 7% slabinfo.biovec-64.num_objs
47399 ± 2% +12.9% 53505 ± 3% slabinfo.skbuff_fclone_cache.active_objs
743.80 ± 2% +12.8% 838.80 ± 3% slabinfo.skbuff_fclone_cache.active_slabs
47625 ± 2% +12.8% 53730 ± 3% slabinfo.skbuff_fclone_cache.num_objs
743.80 ± 2% +12.8% 838.80 ± 3% slabinfo.skbuff_fclone_cache.num_slabs
2.361e+08 ± 5% -20.7% 1.872e+08 ± 5% numa-numastat.node0.local_node
2.361e+08 ± 5% -20.7% 1.872e+08 ± 5% numa-numastat.node0.numa_hit
2.767e+08 ± 13% -34.5% 1.812e+08 ± 3% numa-numastat.node1.local_node
2.767e+08 ± 13% -34.5% 1.812e+08 ± 3% numa-numastat.node1.numa_hit
2.555e+08 ± 8% -29.6% 1.798e+08 ± 3% numa-numastat.node2.local_node
2.555e+08 ± 8% -29.6% 1.799e+08 ± 3% numa-numastat.node2.numa_hit
3.029e+08 ± 40% -35.6% 1.952e+08 ± 9% numa-numastat.node3.local_node
3.029e+08 ± 40% -35.6% 1.952e+08 ± 9% numa-numastat.node3.numa_hit
1.197e+08 ± 7% -19.7% 96054758 ± 5% numa-vmstat.node0.numa_hit
1.196e+08 ± 7% -19.8% 95968475 ± 5% numa-vmstat.node0.numa_local
1.418e+08 ± 17% -35.8% 90977174 ± 4% numa-vmstat.node1.numa_hit
1.418e+08 ± 17% -35.9% 90926215 ± 4% numa-vmstat.node1.numa_local
1.252e+08 ± 6% -28.4% 89645786 ± 5% numa-vmstat.node2.numa_hit
1.25e+08 ± 6% -28.4% 89551187 ± 5% numa-vmstat.node2.numa_local
1.487e+08 ± 38% -34.0% 98073664 ± 9% numa-vmstat.node3.numa_hit
1.486e+08 ± 38% -34.1% 97983383 ± 9% numa-vmstat.node3.numa_local
125155 +2.2% 127953 proc-vmstat.nr_active_anon
264519 +1.2% 267816 proc-vmstat.nr_file_pages
6252 ± 2% +6.3% 6647 ± 2% proc-vmstat.nr_inactive_anon
10301 ± 2% +2.3% 10542 proc-vmstat.nr_mapped
43102 +7.7% 46401 ± 3% proc-vmstat.nr_shmem
125155 +2.2% 127953 proc-vmstat.nr_zone_active_anon
6252 ± 2% +6.3% 6647 ± 2% proc-vmstat.nr_zone_inactive_anon
213957 ± 11% -82.5% 37345 ± 20% proc-vmstat.numa_hint_faults
178876 ± 10% -88.9% 19891 ± 40% proc-vmstat.numa_hint_faults_local
1.071e+09 ± 9% -30.6% 7.43e+08 ± 5% proc-vmstat.numa_hit
1.071e+09 ± 9% -30.6% 7.429e+08 ± 5% proc-vmstat.numa_local
237444 ± 15% -72.4% 65597 ± 25% proc-vmstat.numa_pte_updates
40514 ± 3% +31.5% 53282 ± 7% proc-vmstat.pgactivate
8.553e+09 ± 9% -30.7% 5.929e+09 ± 5% proc-vmstat.pgalloc_normal
3599397 -5.3% 3407537 proc-vmstat.pgfault
8.553e+09 ± 9% -30.7% 5.929e+09 ± 5% proc-vmstat.pgfree
657361 ± 12% -54.2% 301036 ± 26% sched_debug.cfs_rq:/.MIN_vruntime.avg
5121779 ± 9% -34.6% 3348814 ± 20% sched_debug.cfs_rq:/.MIN_vruntime.stddev
73.53 ± 29% +45.6% 107.09 ± 30% sched_debug.cfs_rq:/.load_avg.max
657361 ± 12% -54.2% 301036 ± 26% sched_debug.cfs_rq:/.max_vruntime.avg
5121779 ± 9% -34.6% 3348814 ± 20% sched_debug.cfs_rq:/.max_vruntime.stddev
50650339 ± 3% +47.8% 74859779 ± 6% sched_debug.cfs_rq:/.min_vruntime.avg
62450655 ± 2% +33.7% 83491916 ± 4% sched_debug.cfs_rq:/.min_vruntime.max
36914060 ± 5% +61.8% 59739601 ± 13% sched_debug.cfs_rq:/.min_vruntime.min
0.73 +15.9% 0.84 ± 2% sched_debug.cfs_rq:/.nr_running.avg
0.40 -27.8% 0.29 ± 9% sched_debug.cfs_rq:/.nr_running.stddev
-994538 -957.6% 8528746 ± 32% sched_debug.cfs_rq:/.spread0.avg
10807042 ± 22% +58.8% 17160375 ± 19% sched_debug.cfs_rq:/.spread0.max
-14731891 -55.3% -6592382 sched_debug.cfs_rq:/.spread0.min
741.02 +16.5% 862.98 ± 2% sched_debug.cfs_rq:/.util_avg.avg
0.57 ± 82% +4756.2% 27.44 ± 38% sched_debug.cfs_rq:/.util_avg.min
367.90 ± 2% -29.3% 260.00 ± 10% sched_debug.cfs_rq:/.util_avg.stddev
642383 +21.0% 777315 ± 3% sched_debug.cpu.avg_idle.avg
412048 -20.4% 328111 ± 5% sched_debug.cpu.avg_idle.stddev
2319 ± 32% -31.8% 1582 ± 15% sched_debug.cpu.clock_task.stddev
2673 +17.7% 3147 ± 2% sched_debug.cpu.curr->pid.avg
1885 -16.0% 1583 ± 2% sched_debug.cpu.curr->pid.stddev
0.72 +16.5% 0.84 ± 2% sched_debug.cpu.nr_running.avg
0.41 -27.7% 0.30 ± 8% sched_debug.cpu.nr_running.stddev
2491773 ± 18% -76.8% 578096 ± 29% sched_debug.cpu.nr_switches.avg
6795283 ± 36% -74.9% 1707607 ± 36% sched_debug.cpu.nr_switches.max
478409 ± 15% -96.4% 17371 ±105% sched_debug.cpu.nr_switches.min
1299648 ± 48% -73.8% 340921 ± 32% sched_debug.cpu.nr_switches.stddev
27.38 ± 3% -25.7% 20.35 ± 5% perf-stat.i.MPKI
9.812e+09 ± 4% -5.5% 9.272e+09 perf-stat.i.branch-instructions
0.81 ± 6% -0.4 0.46 ± 10% perf-stat.i.branch-miss-rate%
80638412 ± 11% -47.7% 42189389 ± 9% perf-stat.i.branch-misses
40.15 ± 8% +20.5 60.60 ± 5% perf-stat.i.cache-miss-rate%
4.562e+08 +5.2% 4.798e+08 perf-stat.i.cache-misses
1.207e+09 ± 9% -33.2% 8.064e+08 ± 5% perf-stat.i.cache-references
1096573 ± 17% -75.8% 265735 ± 29% perf-stat.i.context-switches
10.52 ± 5% +25.7% 13.22 ± 2% perf-stat.i.cpi
4.508e+11 +15.6% 5.21e+11 ± 2% perf-stat.i.cpu-cycles
13047 ± 7% -81.6% 2396 ± 23% perf-stat.i.cpu-migrations
992.80 +13.6% 1128 ± 5% perf-stat.i.cycles-between-cache-misses
0.02 ± 14% -0.0 0.02 ± 9% perf-stat.i.dTLB-load-miss-rate%
2798302 ± 12% -31.0% 1930060 ± 9% perf-stat.i.dTLB-load-misses
1.179e+10 ± 5% -12.2% 1.036e+10 perf-stat.i.dTLB-loads
0.13 ± 15% +0.1 0.19 ± 15% perf-stat.i.dTLB-store-miss-rate%
4296580 ± 7% -17.8% 3530350 ± 14% perf-stat.i.dTLB-store-misses
3.57e+09 ± 11% -46.7% 1.901e+09 ± 8% perf-stat.i.dTLB-stores
78.03 +8.8 86.79 ± 2% perf-stat.i.iTLB-load-miss-rate%
51762078 ± 12% -54.6% 23480553 ± 12% perf-stat.i.iTLB-load-misses
15349414 ± 16% -74.0% 3985887 ± 27% perf-stat.i.iTLB-loads
4.351e+10 ± 4% -9.3% 3.947e+10 perf-stat.i.instructions
935.21 ± 8% +90.8% 1784 ± 11% perf-stat.i.instructions-per-iTLB-miss
0.10 ± 5% -21.5% 0.08 ± 2% perf-stat.i.ipc
2.35 +15.6% 2.71 ± 2% perf-stat.i.metric.GHz
0.12 ± 5% +148.4% 0.31 ± 37% perf-stat.i.metric.K/sec
139.00 ± 5% -15.2% 117.90 perf-stat.i.metric.M/sec
3921 -5.5% 3706 perf-stat.i.minor-faults
82.40 -30.1 52.35 ± 8% perf-stat.i.node-load-miss-rate%
1.602e+08 ± 2% -30.0% 1.122e+08 ± 6% perf-stat.i.node-load-misses
34641208 ± 8% +202.8% 1.049e+08 ± 10% perf-stat.i.node-loads
95.43 -8.8 86.63 perf-stat.i.node-store-miss-rate%
95463384 ± 3% -24.8% 71816176 ± 6% perf-stat.i.node-store-misses
4504280 ± 4% +136.0% 10631682 ± 8% perf-stat.i.node-stores
3921 -5.5% 3706 perf-stat.i.page-faults
27.61 ± 4% -26.1% 20.40 ± 5% perf-stat.overall.MPKI
0.82 ± 7% -0.4 0.45 ± 10% perf-stat.overall.branch-miss-rate%
38.34 ± 9% +21.5 59.81 ± 5% perf-stat.overall.cache-miss-rate%
10.42 ± 5% +26.7% 13.21 ± 2% perf-stat.overall.cpi
989.04 +9.8% 1085 ± 2% perf-stat.overall.cycles-between-cache-misses
0.02 ± 14% -0.0 0.02 ± 9% perf-stat.overall.dTLB-load-miss-rate%
0.12 ± 16% +0.1 0.19 ± 15% perf-stat.overall.dTLB-store-miss-rate%
77.26 +8.6 85.86 ± 2% perf-stat.overall.iTLB-load-miss-rate%
853.07 ± 7% +100.6% 1711 ± 12% perf-stat.overall.instructions-per-iTLB-miss
0.10 ± 5% -21.3% 0.08 ± 2% perf-stat.overall.ipc
82.05 -30.3 51.72 ± 8% perf-stat.overall.node-load-miss-rate%
95.46 -8.4 87.03 perf-stat.overall.node-store-miss-rate%
18379 ± 4% +31.1% 24093 ± 5% perf-stat.overall.path-length
9.785e+09 ± 4% -5.4% 9.259e+09 perf-stat.ps.branch-instructions
79985971 ± 11% -47.5% 41978792 ± 9% perf-stat.ps.branch-misses
4.56e+08 +5.2% 4.795e+08 perf-stat.ps.cache-misses
1.2e+09 ± 9% -33.0% 8.039e+08 ± 5% perf-stat.ps.cache-references
1084462 ± 17% -75.8% 262972 ± 30% perf-stat.ps.context-switches
4.509e+11 +15.5% 5.206e+11 ± 2% perf-stat.ps.cpu-cycles
12921 ± 8% -81.6% 2375 ± 25% perf-stat.ps.cpu-migrations
2775208 ± 11% -30.8% 1921057 ± 9% perf-stat.ps.dTLB-load-misses
1.175e+10 ± 5% -12.0% 1.034e+10 perf-stat.ps.dTLB-loads
3.542e+09 ± 11% -46.5% 1.894e+09 ± 8% perf-stat.ps.dTLB-stores
51329779 ± 12% -54.5% 23375015 ± 12% perf-stat.ps.iTLB-load-misses
15181049 ± 16% -74.1% 3937533 ± 27% perf-stat.ps.iTLB-loads
4.337e+10 ± 4% -9.1% 3.941e+10 perf-stat.ps.instructions
3905 -5.5% 3690 perf-stat.ps.minor-faults
1.598e+08 ± 2% -29.9% 1.121e+08 ± 6% perf-stat.ps.node-load-misses
34957327 ± 8% +200.2% 1.049e+08 ± 10% perf-stat.ps.node-loads
95360155 ± 3% -24.7% 71759787 ± 6% perf-stat.ps.node-store-misses
4532465 ± 4% +134.6% 10630923 ± 8% perf-stat.ps.node-stores
3905 -5.5% 3690 perf-stat.ps.page-faults
3.911e+13 ± 4% -9.1% 3.554e+13 perf-stat.total.instructions
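The derived perf-stat metrics above (MPKI, CPI, IPC) are arithmetic over the raw counters listed in the same block. A minimal sketch recomputing them from the baseline (e6c8aa9ac3) interval values in this report — note the exact counter lkp divides by for MPKI is an assumption here (cache-references reproduces the reported 27.38 ± 3% much more closely than cache-misses does):

```python
# Recompute lkp's derived perf-stat metrics from the raw baseline counters
# quoted above (perf-stat.i.* column for e6c8aa9ac3). Values are interval
# averages, so small deviations from the reported figures are expected.
instructions = 4.351e10   # perf-stat.i.instructions
cycles       = 4.508e11   # perf-stat.i.cpu-cycles
cache_refs   = 1.207e9    # perf-stat.i.cache-references

ipc  = instructions / cycles                    # reported: 0.10 +/- 5%
cpi  = cycles / instructions                    # reported: 10.52 +/- 5%
mpki = cache_refs / (instructions / 1000.0)     # reported: 27.38 +/- 3%

print(f"ipc={ipc:.2f} cpi={cpi:.2f} mpki={mpki:.2f}")
```

This also makes the regression direction easy to sanity-check: cycles go up 15.6% while instructions drop 9.3%, so CPI must rise and IPC must fall, exactly as the table shows.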
20018 ± 25% -65.9% 6832 ± 32% interrupts.CPU10.RES:Rescheduling_interrupts
6930 ± 22% +20.4% 8344 ± 3% interrupts.CPU100.NMI:Non-maskable_interrupts
6930 ± 22% +20.4% 8344 ± 3% interrupts.CPU100.PMI:Performance_monitoring_interrupts
26175 ± 14% -67.9% 8402 ± 48% interrupts.CPU100.RES:Rescheduling_interrupts
26256 ± 18% -69.9% 7911 ± 26% interrupts.CPU101.RES:Rescheduling_interrupts
23953 ± 16% -64.2% 8581 ± 24% interrupts.CPU102.RES:Rescheduling_interrupts
22838 ± 15% -52.8% 10770 ± 55% interrupts.CPU103.RES:Rescheduling_interrupts
22692 ± 9% -63.1% 8375 ± 60% interrupts.CPU104.RES:Rescheduling_interrupts
22355 ± 16% -76.8% 5197 ± 73% interrupts.CPU105.RES:Rescheduling_interrupts
18887 ± 17% -74.0% 4919 ± 27% interrupts.CPU107.RES:Rescheduling_interrupts
17905 ± 15% -71.2% 5158 ± 70% interrupts.CPU108.RES:Rescheduling_interrupts
17658 ± 19% -80.4% 3468 ± 65% interrupts.CPU109.RES:Rescheduling_interrupts
16137 ± 10% -53.2% 7549 ± 47% interrupts.CPU11.RES:Rescheduling_interrupts
18899 ± 11% -65.1% 6597 ± 32% interrupts.CPU110.RES:Rescheduling_interrupts
6906 ± 23% +23.3% 8515 interrupts.CPU111.NMI:Non-maskable_interrupts
6906 ± 23% +23.3% 8515 interrupts.CPU111.PMI:Performance_monitoring_interrupts
18144 ± 12% -66.2% 6134 ± 82% interrupts.CPU111.RES:Rescheduling_interrupts
15208 ± 12% -70.4% 4509 ± 65% interrupts.CPU112.RES:Rescheduling_interrupts
19107 ± 17% -62.4% 7181 ± 63% interrupts.CPU113.RES:Rescheduling_interrupts
17575 ± 9% -56.2% 7690 ± 69% interrupts.CPU114.RES:Rescheduling_interrupts
17096 ± 8% -65.5% 5889 ± 85% interrupts.CPU115.RES:Rescheduling_interrupts
15285 ± 19% -57.1% 6556 ± 26% interrupts.CPU116.RES:Rescheduling_interrupts
16701 ± 17% -76.0% 4009 ± 42% interrupts.CPU117.RES:Rescheduling_interrupts
14198 ± 18% -78.7% 3029 ± 43% interrupts.CPU118.RES:Rescheduling_interrupts
12635 ± 6% -59.9% 5071 ± 42% interrupts.CPU119.RES:Rescheduling_interrupts
111.20 ± 39% -66.0% 37.80 ± 54% interrupts.CPU119.TLB:TLB_shootdowns
21097 ± 21% -67.3% 6890 ± 47% interrupts.CPU12.RES:Rescheduling_interrupts
38899 ± 13% -77.7% 8664 ± 36% interrupts.CPU120.RES:Rescheduling_interrupts
35324 ± 16% -78.9% 7468 ± 53% interrupts.CPU121.RES:Rescheduling_interrupts
30246 ± 15% -77.3% 6857 ± 36% interrupts.CPU122.RES:Rescheduling_interrupts
29316 ± 15% -76.0% 7025 ± 58% interrupts.CPU123.RES:Rescheduling_interrupts
26502 ± 17% -63.4% 9695 ± 38% interrupts.CPU124.RES:Rescheduling_interrupts
25454 ± 13% -65.1% 8892 ± 58% interrupts.CPU125.RES:Rescheduling_interrupts
25998 ± 28% -76.8% 6034 ± 30% interrupts.CPU126.RES:Rescheduling_interrupts
27086 ± 15% -70.2% 8084 ± 45% interrupts.CPU127.RES:Rescheduling_interrupts
21401 ± 19% -72.4% 5899 ± 39% interrupts.CPU128.RES:Rescheduling_interrupts
23975 ± 18% -71.6% 6800 ± 68% interrupts.CPU129.RES:Rescheduling_interrupts
19847 ± 17% -65.9% 6763 ± 65% interrupts.CPU13.RES:Rescheduling_interrupts
20441 ± 23% -66.6% 6836 ± 45% interrupts.CPU130.RES:Rescheduling_interrupts
20268 ± 28% -67.9% 6508 ± 39% interrupts.CPU131.RES:Rescheduling_interrupts
19568 ± 17% -77.4% 4420 ± 58% interrupts.CPU132.RES:Rescheduling_interrupts
18921 ± 22% -63.0% 6991 ± 19% interrupts.CPU133.RES:Rescheduling_interrupts
19231 ± 18% -70.1% 5746 ± 50% interrupts.CPU134.RES:Rescheduling_interrupts
5876 ± 27% +43.5% 8430 ± 2% interrupts.CPU135.NMI:Non-maskable_interrupts
5876 ± 27% +43.5% 8430 ± 2% interrupts.CPU135.PMI:Performance_monitoring_interrupts
18139 ± 15% -64.8% 6388 ± 60% interrupts.CPU135.RES:Rescheduling_interrupts
17565 ± 22% -68.7% 5505 ± 47% interrupts.CPU136.RES:Rescheduling_interrupts
18447 ± 14% -80.5% 3592 ± 78% interrupts.CPU137.RES:Rescheduling_interrupts
15921 ± 28% -62.6% 5948 ± 53% interrupts.CPU138.RES:Rescheduling_interrupts
15194 ± 24% -60.3% 6029 ± 33% interrupts.CPU139.RES:Rescheduling_interrupts
19339 ± 21% -70.7% 5667 ± 21% interrupts.CPU14.RES:Rescheduling_interrupts
6659 ± 25% +23.0% 8190 ± 5% interrupts.CPU140.NMI:Non-maskable_interrupts
6659 ± 25% +23.0% 8190 ± 5% interrupts.CPU140.PMI:Performance_monitoring_interrupts
15306 ± 19% -62.2% 5782 ± 49% interrupts.CPU140.RES:Rescheduling_interrupts
14340 ± 11% -72.7% 3913 ± 38% interrupts.CPU141.RES:Rescheduling_interrupts
10236 ± 26% -60.4% 4056 ± 60% interrupts.CPU142.RES:Rescheduling_interrupts
9089 ± 9% -44.5% 5045 ± 41% interrupts.CPU143.RES:Rescheduling_interrupts
36285 ± 13% -82.8% 6248 ± 73% interrupts.CPU144.RES:Rescheduling_interrupts
35513 ± 12% -75.3% 8776 ± 42% interrupts.CPU145.RES:Rescheduling_interrupts
31846 ± 10% -81.0% 6036 ± 38% interrupts.CPU146.RES:Rescheduling_interrupts
27606 ± 15% -81.5% 5099 ± 61% interrupts.CPU147.RES:Rescheduling_interrupts
28526 ± 17% -77.8% 6335 ± 54% interrupts.CPU148.RES:Rescheduling_interrupts
5629 ± 35% +43.8% 8093 ± 5% interrupts.CPU149.NMI:Non-maskable_interrupts
5629 ± 35% +43.8% 8093 ± 5% interrupts.CPU149.PMI:Performance_monitoring_interrupts
23694 ± 9% -65.5% 8176 ± 52% interrupts.CPU149.RES:Rescheduling_interrupts
17594 ± 23% -60.3% 6977 ± 57% interrupts.CPU15.RES:Rescheduling_interrupts
25905 ± 15% -73.6% 6827 ± 59% interrupts.CPU150.RES:Rescheduling_interrupts
24007 ± 17% -83.4% 3992 ± 35% interrupts.CPU151.RES:Rescheduling_interrupts
26038 ± 21% -69.0% 8076 ± 66% interrupts.CPU152.RES:Rescheduling_interrupts
22935 ± 7% -66.6% 7668 ± 32% interrupts.CPU153.RES:Rescheduling_interrupts
62.60 ± 45% -65.2% 21.80 ± 47% interrupts.CPU153.TLB:TLB_shootdowns
22072 ± 14% -79.7% 4472 ± 57% interrupts.CPU154.RES:Rescheduling_interrupts
21397 ± 16% -71.6% 6070 ± 41% interrupts.CPU155.RES:Rescheduling_interrupts
19854 ± 21% -65.7% 6815 ± 72% interrupts.CPU156.RES:Rescheduling_interrupts
21814 ± 21% -71.5% 6218 ± 95% interrupts.CPU157.RES:Rescheduling_interrupts
22265 ± 24% -75.8% 5390 ± 46% interrupts.CPU158.RES:Rescheduling_interrupts
21240 ± 20% -58.8% 8747 ± 52% interrupts.CPU159.RES:Rescheduling_interrupts
17437 ± 21% -63.7% 6334 ± 29% interrupts.CPU16.RES:Rescheduling_interrupts
65.60 ± 56% -67.1% 21.60 ± 67% interrupts.CPU16.TLB:TLB_shootdowns
17052 ± 9% -62.5% 6396 ± 27% interrupts.CPU160.RES:Rescheduling_interrupts
15195 ± 6% -72.1% 4245 ± 61% interrupts.CPU161.RES:Rescheduling_interrupts
14960 ± 24% -55.0% 6730 ± 55% interrupts.CPU162.RES:Rescheduling_interrupts
17994 ± 16% -64.3% 6426 ± 68% interrupts.CPU163.RES:Rescheduling_interrupts
14710 ± 13% -58.7% 6076 ± 43% interrupts.CPU164.RES:Rescheduling_interrupts
11712 ± 14% -48.4% 6041 ± 56% interrupts.CPU165.RES:Rescheduling_interrupts
11431 ± 14% -42.6% 6562 ± 27% interrupts.CPU167.RES:Rescheduling_interrupts
37298 ± 7% -76.3% 8847 ± 19% interrupts.CPU168.RES:Rescheduling_interrupts
35906 ± 10% -68.3% 11386 ± 70% interrupts.CPU169.RES:Rescheduling_interrupts
16528 ± 34% -65.4% 5723 ± 52% interrupts.CPU17.RES:Rescheduling_interrupts
33673 ± 15% -76.9% 7761 ± 73% interrupts.CPU170.RES:Rescheduling_interrupts
65.40 ± 38% -55.0% 29.40 ± 67% interrupts.CPU170.TLB:TLB_shootdowns
29357 ± 15% -69.6% 8935 ± 39% interrupts.CPU171.RES:Rescheduling_interrupts
28550 ± 27% -67.3% 9324 ± 78% interrupts.CPU172.RES:Rescheduling_interrupts
23625 ± 21% -79.5% 4850 ± 71% interrupts.CPU173.RES:Rescheduling_interrupts
24733 ± 10% -70.5% 7294 ± 74% interrupts.CPU174.RES:Rescheduling_interrupts
25344 ± 14% -74.7% 6408 ± 68% interrupts.CPU175.RES:Rescheduling_interrupts
20908 ± 29% -66.4% 7021 ± 45% interrupts.CPU177.RES:Rescheduling_interrupts
21900 ± 27% -71.7% 6199 ± 75% interrupts.CPU178.RES:Rescheduling_interrupts
20672 ± 15% -60.3% 8201 ± 57% interrupts.CPU179.RES:Rescheduling_interrupts
19053 ± 20% -64.5% 6769 ± 39% interrupts.CPU18.RES:Rescheduling_interrupts
24900 ± 26% -70.4% 7361 ± 71% interrupts.CPU180.RES:Rescheduling_interrupts
23027 ± 18% -63.1% 8499 ± 59% interrupts.CPU181.RES:Rescheduling_interrupts
19105 ± 25% -66.5% 6409 ± 46% interrupts.CPU183.RES:Rescheduling_interrupts
18049 ± 6% -77.8% 4015 ± 59% interrupts.CPU184.RES:Rescheduling_interrupts
17829 ± 34% -79.1% 3731 ± 97% interrupts.CPU185.RES:Rescheduling_interrupts
17896 ± 21% -56.7% 7741 ± 64% interrupts.CPU187.RES:Rescheduling_interrupts
19615 ± 25% -68.4% 6205 ± 45% interrupts.CPU188.RES:Rescheduling_interrupts
5762 ± 37% +20.6% 6948 ± 28% interrupts.CPU189.NMI:Non-maskable_interrupts
5762 ± 37% +20.6% 6948 ± 28% interrupts.CPU189.PMI:Performance_monitoring_interrupts
18061 ± 16% -65.4% 6251 ± 53% interrupts.CPU19.RES:Rescheduling_interrupts
20448 ± 24% -68.6% 6426 ± 41% interrupts.CPU20.RES:Rescheduling_interrupts
60.60 ± 57% -74.6% 15.40 ± 68% interrupts.CPU20.TLB:TLB_shootdowns
18718 ± 26% -77.9% 4146 ± 35% interrupts.CPU21.RES:Rescheduling_interrupts
19195 ± 24% -72.9% 5208 ± 33% interrupts.CPU22.RES:Rescheduling_interrupts
18435 ± 24% -59.7% 7435 ± 30% interrupts.CPU23.RES:Rescheduling_interrupts
12421 ± 21% -61.9% 4733 ± 29% interrupts.CPU25.RES:Rescheduling_interrupts
14604 ± 24% -54.6% 6631 ± 34% interrupts.CPU28.RES:Rescheduling_interrupts
16075 ± 26% -61.1% 6249 ± 38% interrupts.CPU29.RES:Rescheduling_interrupts
903.60 +13.5% 1026 ± 9% interrupts.CPU31.CAL:Function_call_interrupts
17567 ± 29% -62.3% 6617 ± 46% interrupts.CPU32.RES:Rescheduling_interrupts
15244 ± 21% -68.5% 4809 ± 50% interrupts.CPU33.RES:Rescheduling_interrupts
16526 ± 16% -63.8% 5988 ± 71% interrupts.CPU35.RES:Rescheduling_interrupts
19889 ± 6% -75.0% 4966 ± 51% interrupts.CPU36.RES:Rescheduling_interrupts
19404 ± 22% -66.2% 6563 ± 32% interrupts.CPU37.RES:Rescheduling_interrupts
19446 ± 11% -62.8% 7240 ± 60% interrupts.CPU38.RES:Rescheduling_interrupts
19790 ± 25% -63.1% 7297 ± 21% interrupts.CPU39.RES:Rescheduling_interrupts
8407 ± 4% -22.2% 6541 ± 29% interrupts.CPU4.NMI:Non-maskable_interrupts
8407 ± 4% -22.2% 6541 ± 29% interrupts.CPU4.PMI:Performance_monitoring_interrupts
21624 ± 20% -60.7% 8506 ± 59% interrupts.CPU40.RES:Rescheduling_interrupts
19575 ± 15% -72.3% 5430 ± 35% interrupts.CPU41.RES:Rescheduling_interrupts
22512 ± 21% -76.1% 5370 ± 65% interrupts.CPU42.RES:Rescheduling_interrupts
20342 ± 11% -78.7% 4337 ± 27% interrupts.CPU43.RES:Rescheduling_interrupts
71.00 ± 77% -66.5% 23.80 ± 53% interrupts.CPU43.TLB:TLB_shootdowns
22717 ± 16% -80.8% 4363 ± 42% interrupts.CPU44.RES:Rescheduling_interrupts
22427 ± 25% -87.0% 2919 ± 29% interrupts.CPU45.RES:Rescheduling_interrupts
25129 ± 26% -75.6% 6131 ± 35% interrupts.CPU46.RES:Rescheduling_interrupts
25311 ± 27% -69.7% 7679 ± 46% interrupts.CPU47.RES:Rescheduling_interrupts
12818 ± 23% -45.9% 6933 ± 36% interrupts.CPU5.RES:Rescheduling_interrupts
12087 ± 43% -56.6% 5248 ± 43% interrupts.CPU50.RES:Rescheduling_interrupts
15964 ± 15% -66.9% 5290 ± 59% interrupts.CPU51.RES:Rescheduling_interrupts
12712 ± 16% -51.1% 6220 ± 56% interrupts.CPU52.RES:Rescheduling_interrupts
12931 ± 8% -48.8% 6621 ± 52% interrupts.CPU53.RES:Rescheduling_interrupts
15241 ± 29% -46.6% 8135 ± 61% interrupts.CPU54.RES:Rescheduling_interrupts
18851 ± 10% -75.1% 4691 ± 68% interrupts.CPU55.RES:Rescheduling_interrupts
16114 ± 20% -72.6% 4418 ± 49% interrupts.CPU56.RES:Rescheduling_interrupts
14867 ± 36% -46.9% 7901 ± 54% interrupts.CPU57.RES:Rescheduling_interrupts
18445 ± 19% -60.3% 7329 ± 57% interrupts.CPU60.RES:Rescheduling_interrupts
16799 ± 19% -56.2% 7353 ± 56% interrupts.CPU61.RES:Rescheduling_interrupts
16466 ± 9% -49.0% 8398 ± 41% interrupts.CPU63.RES:Rescheduling_interrupts
19173 ± 7% -51.3% 9329 ± 22% interrupts.CPU64.RES:Rescheduling_interrupts
19019 ± 15% -65.0% 6662 ± 70% interrupts.CPU65.RES:Rescheduling_interrupts
24865 ± 14% -70.3% 7383 ± 31% interrupts.CPU66.RES:Rescheduling_interrupts
20350 ± 29% -52.0% 9765 ± 17% interrupts.CPU67.RES:Rescheduling_interrupts
21037 ± 25% -67.7% 6793 ± 55% interrupts.CPU68.RES:Rescheduling_interrupts
25286 ± 17% -70.7% 7398 ± 46% interrupts.CPU69.RES:Rescheduling_interrupts
17345 ± 17% -58.4% 7218 ± 40% interrupts.CPU7.RES:Rescheduling_interrupts
23913 ± 12% -67.2% 7844 ± 39% interrupts.CPU70.RES:Rescheduling_interrupts
22159 ± 29% -72.9% 5997 ± 42% interrupts.CPU71.RES:Rescheduling_interrupts
16007 ± 38% -54.6% 7270 ± 33% interrupts.CPU75.RES:Rescheduling_interrupts
16100 ± 29% -62.5% 6037 ± 61% interrupts.CPU76.RES:Rescheduling_interrupts
17691 ± 13% -71.8% 4992 ± 66% interrupts.CPU77.RES:Rescheduling_interrupts
19181 ± 29% -55.9% 8452 ± 34% interrupts.CPU78.RES:Rescheduling_interrupts
16480 ± 42% -66.8% 5468 ± 41% interrupts.CPU79.RES:Rescheduling_interrupts
14797 ± 21% -38.0% 9169 ± 15% interrupts.CPU8.RES:Rescheduling_interrupts
16987 ± 18% -55.3% 7601 ± 76% interrupts.CPU80.RES:Rescheduling_interrupts
20522 ± 15% -58.5% 8522 ± 32% interrupts.CPU81.RES:Rescheduling_interrupts
18750 ± 24% -71.3% 5386 ± 91% interrupts.CPU82.RES:Rescheduling_interrupts
21485 ± 34% -70.6% 6317 ±102% interrupts.CPU83.RES:Rescheduling_interrupts
15390 ± 19% -57.3% 6576 ± 66% interrupts.CPU84.RES:Rescheduling_interrupts
16251 ± 35% -60.0% 6503 ± 9% interrupts.CPU85.RES:Rescheduling_interrupts
22661 ± 20% -61.2% 8781 ± 43% interrupts.CPU86.RES:Rescheduling_interrupts
21182 ± 37% -72.1% 5915 ± 34% interrupts.CPU87.RES:Rescheduling_interrupts
20482 ± 27% -66.2% 6913 ± 55% interrupts.CPU88.RES:Rescheduling_interrupts
22303 ± 18% -74.8% 5616 ± 97% interrupts.CPU89.RES:Rescheduling_interrupts
22674 ± 39% -72.3% 6285 ± 45% interrupts.CPU90.RES:Rescheduling_interrupts
22527 ± 38% -66.7% 7509 ± 78% interrupts.CPU91.RES:Rescheduling_interrupts
24202 ± 41% -77.5% 5449 ± 75% interrupts.CPU93.RES:Rescheduling_interrupts
24056 ± 26% -62.5% 9013 ± 50% interrupts.CPU94.RES:Rescheduling_interrupts
25980 ± 26% -59.7% 10460 ± 31% interrupts.CPU96.RES:Rescheduling_interrupts
33285 ± 20% -78.3% 7213 ± 45% interrupts.CPU97.RES:Rescheduling_interrupts
88.00 ± 53% -77.5% 19.80 ± 63% interrupts.CPU97.TLB:TLB_shootdowns
30452 ± 14% -66.5% 10214 ± 57% interrupts.CPU98.RES:Rescheduling_interrupts
27805 ± 7% -67.8% 8944 ± 60% interrupts.CPU99.RES:Rescheduling_interrupts
3779530 ± 8% -64.4% 1345454 ± 26% interrupts.RES:Rescheduling_interrupts
23.08 -16.2 6.89 ± 45% perf-profile.calltrace.cycles-pp.secondary_startup_64
22.99 -16.1 6.85 ± 45% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
22.99 -16.1 6.85 ± 45% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
22.98 -16.1 6.85 ± 45% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
22.44 -15.7 6.73 ± 45% perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
22.43 -15.7 6.73 ± 45% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
21.98 -15.4 6.62 ± 45% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
14.81 ± 3% -8.9 5.95 ± 34% perf-profile.calltrace.cycles-pp.release_sock.tcp_sendmsg.sock_sendmsg.__sys_sendto.__x64_sys_sendto
14.74 ± 3% -8.9 5.89 ± 35% perf-profile.calltrace.cycles-pp.__release_sock.release_sock.tcp_sendmsg.sock_sendmsg.__sys_sendto
14.11 ± 3% -8.4 5.72 ± 34% perf-profile.calltrace.cycles-pp.tcp_v4_do_rcv.__release_sock.release_sock.tcp_sendmsg.sock_sendmsg
14.09 ± 3% -8.4 5.71 ± 34% perf-profile.calltrace.cycles-pp.tcp_rcv_established.tcp_v4_do_rcv.__release_sock.release_sock.tcp_sendmsg
12.73 ± 3% -8.2 4.55 ± 43% perf-profile.calltrace.cycles-pp.tcp_ack.tcp_rcv_established.tcp_v4_do_rcv.__release_sock.release_sock
12.68 ± 3% -8.2 4.51 ± 43% perf-profile.calltrace.cycles-pp.tcp_clean_rtx_queue.tcp_ack.tcp_rcv_established.tcp_v4_do_rcv.__release_sock
12.37 ± 4% -8.1 4.25 ± 46% perf-profile.calltrace.cycles-pp.skb_release_data.__kfree_skb.tcp_clean_rtx_queue.tcp_ack.tcp_rcv_established
12.42 ± 4% -8.1 4.30 ± 46% perf-profile.calltrace.cycles-pp.__kfree_skb.tcp_clean_rtx_queue.tcp_ack.tcp_rcv_established.tcp_v4_do_rcv
11.93 ± 4% -8.0 3.95 ± 49% perf-profile.calltrace.cycles-pp.__free_pages_ok.skb_release_data.__kfree_skb.tcp_clean_rtx_queue.tcp_ack
11.79 ± 4% -7.9 3.90 ± 49% perf-profile.calltrace.cycles-pp.free_one_page.__free_pages_ok.skb_release_data.__kfree_skb.tcp_clean_rtx_queue
51.48 -1.5 50.03 perf-profile.calltrace.cycles-pp.__x64_sys_sendto.do_syscall_64.entry_SYSCALL_64_after_hwframe
51.47 -1.5 50.02 perf-profile.calltrace.cycles-pp.__sys_sendto.__x64_sys_sendto.do_syscall_64.entry_SYSCALL_64_after_hwframe
51.38 -1.4 49.95 perf-profile.calltrace.cycles-pp.sock_sendmsg.__sys_sendto.__x64_sys_sendto.do_syscall_64.entry_SYSCALL_64_after_hwframe
51.33 -1.4 49.90 perf-profile.calltrace.cycles-pp.tcp_sendmsg.sock_sendmsg.__sys_sendto.__x64_sys_sendto.do_syscall_64
1.75 ± 5% -0.6 1.17 ± 6% perf-profile.calltrace.cycles-pp.__tcp_push_pending_frames.tcp_rcv_established.tcp_v4_do_rcv.__release_sock.release_sock
1.75 ± 5% -0.6 1.17 ± 5% perf-profile.calltrace.cycles-pp.tcp_write_xmit.__tcp_push_pending_frames.tcp_rcv_established.tcp_v4_do_rcv.__release_sock
1.29 ± 8% -0.6 0.71 ± 10% perf-profile.calltrace.cycles-pp.__ip_queue_xmit.__tcp_transmit_skb.tcp_write_xmit.__tcp_push_pending_frames.tcp_rcv_established
1.54 ± 6% -0.6 0.97 ± 6% perf-profile.calltrace.cycles-pp.__tcp_transmit_skb.tcp_write_xmit.__tcp_push_pending_frames.tcp_rcv_established.tcp_v4_do_rcv
7.37 ± 3% -0.6 6.81 perf-profile.calltrace.cycles-pp._copy_from_iter_full.tcp_sendmsg_locked.tcp_sendmsg.sock_sendmsg.__sys_sendto
7.11 ± 2% -0.5 6.57 perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyin._copy_from_iter_full.tcp_sendmsg_locked.tcp_sendmsg
7.22 ± 3% -0.5 6.68 perf-profile.calltrace.cycles-pp.copyin._copy_from_iter_full.tcp_sendmsg_locked.tcp_sendmsg.sock_sendmsg
0.00 +0.5 0.52 ± 3% perf-profile.calltrace.cycles-pp.release_sock.tcp_recvmsg.inet_recvmsg.__sys_recvfrom.__x64_sys_recvfrom
8.04 ± 2% +1.0 9.09 ± 2% perf-profile.calltrace.cycles-pp.copy_user_enhanced_fast_string.copyout._copy_to_iter.__skb_datagram_iter.skb_copy_datagram_iter
8.23 ± 2% +1.1 9.35 ± 2% perf-profile.calltrace.cycles-pp._copy_to_iter.__skb_datagram_iter.skb_copy_datagram_iter.tcp_recvmsg.inet_recvmsg
8.63 ± 2% +1.1 9.76 ± 2% perf-profile.calltrace.cycles-pp.skb_copy_datagram_iter.tcp_recvmsg.inet_recvmsg.__sys_recvfrom.__x64_sys_recvfrom
8.62 ± 2% +1.1 9.75 ± 2% perf-profile.calltrace.cycles-pp.__skb_datagram_iter.skb_copy_datagram_iter.tcp_recvmsg.inet_recvmsg.__sys_recvfrom
8.19 ± 2% +1.1 9.32 ± 2% perf-profile.calltrace.cycles-pp.copyout._copy_to_iter.__skb_datagram_iter.skb_copy_datagram_iter.tcp_recvmsg
36.28 +7.5 43.79 ± 4% perf-profile.calltrace.cycles-pp.tcp_sendmsg_locked.tcp_sendmsg.sock_sendmsg.__sys_sendto.__x64_sys_sendto
24.55 +8.2 32.71 ± 5% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.rmqueue.get_page_from_freelist.__alloc_pages_nodemask
24.99 +8.2 33.18 ± 5% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.rmqueue.get_page_from_freelist.__alloc_pages_nodemask.skb_page_frag_refill
26.00 +8.2 34.25 ± 5% perf-profile.calltrace.cycles-pp.rmqueue.get_page_from_freelist.__alloc_pages_nodemask.skb_page_frag_refill.sk_page_frag_refill
26.80 +8.3 35.14 ± 5% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.skb_page_frag_refill.sk_page_frag_refill.tcp_sendmsg_locked.tcp_sendmsg
26.89 +8.3 35.23 ± 5% perf-profile.calltrace.cycles-pp.sk_page_frag_refill.tcp_sendmsg_locked.tcp_sendmsg.sock_sendmsg.__sys_sendto
26.73 +8.3 35.07 ± 5% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.skb_page_frag_refill.sk_page_frag_refill.tcp_sendmsg_locked
26.91 +8.3 35.25 ± 5% perf-profile.calltrace.cycles-pp.skb_page_frag_refill.sk_page_frag_refill.tcp_sendmsg_locked.tcp_sendmsg.sock_sendmsg
24.02 +8.8 32.85 ± 5% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.free_one_page.__free_pages_ok.skb_release_data
24.44 +8.9 33.32 ± 5% perf-profile.calltrace.cycles-pp._raw_spin_lock.free_one_page.__free_pages_ok.skb_release_data.__kfree_skb
76.53 +16.3 92.79 ± 3% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
76.51 +16.3 92.78 ± 3% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
13.05 ± 3% +16.7 29.76 ± 11% perf-profile.calltrace.cycles-pp.free_one_page.__free_pages_ok.skb_release_data.__kfree_skb.tcp_recvmsg
13.42 ± 3% +17.0 30.47 ± 11% perf-profile.calltrace.cycles-pp.__free_pages_ok.skb_release_data.__kfree_skb.tcp_recvmsg.inet_recvmsg
13.93 ± 3% +17.3 31.25 ± 11% perf-profile.calltrace.cycles-pp.skb_release_data.__kfree_skb.tcp_recvmsg.inet_recvmsg.__sys_recvfrom
14.12 ± 3% +17.4 31.48 ± 11% perf-profile.calltrace.cycles-pp.__kfree_skb.tcp_recvmsg.inet_recvmsg.__sys_recvfrom.__x64_sys_recvfrom
24.95 ± 2% +17.7 42.70 ± 7% perf-profile.calltrace.cycles-pp.__x64_sys_recvfrom.do_syscall_64.entry_SYSCALL_64_after_hwframe
24.95 ± 2% +17.7 42.70 ± 7% perf-profile.calltrace.cycles-pp.__sys_recvfrom.__x64_sys_recvfrom.do_syscall_64.entry_SYSCALL_64_after_hwframe
24.89 ± 2% +17.8 42.68 ± 7% perf-profile.calltrace.cycles-pp.inet_recvmsg.__sys_recvfrom.__x64_sys_recvfrom.do_syscall_64.entry_SYSCALL_64_after_hwframe
24.89 ± 2% +17.8 42.67 ± 7% perf-profile.calltrace.cycles-pp.tcp_recvmsg.inet_recvmsg.__sys_recvfrom.__x64_sys_recvfrom.do_syscall_64
23.08 -16.2 6.89 ± 45% perf-profile.children.cycles-pp.secondary_startup_64
23.08 -16.2 6.89 ± 45% perf-profile.children.cycles-pp.cpu_startup_entry
23.07 -16.2 6.89 ± 45% perf-profile.children.cycles-pp.do_idle
22.99 -16.1 6.85 ± 45% perf-profile.children.cycles-pp.start_secondary
22.53 -15.8 6.77 ± 44% perf-profile.children.cycles-pp.cpuidle_enter
22.53 -15.8 6.77 ± 44% perf-profile.children.cycles-pp.cpuidle_enter_state
22.07 -15.4 6.66 ± 44% perf-profile.children.cycles-pp.intel_idle
15.38 ± 3% -8.9 6.48 ± 32% perf-profile.children.cycles-pp.tcp_rcv_established
15.42 ± 3% -8.9 6.52 ± 32% perf-profile.children.cycles-pp.tcp_v4_do_rcv
15.17 ± 3% -8.7 6.48 ± 31% perf-profile.children.cycles-pp.release_sock
15.06 ± 3% -8.7 6.38 ± 32% perf-profile.children.cycles-pp.__release_sock
12.87 ± 3% -8.3 4.62 ± 43% perf-profile.children.cycles-pp.tcp_ack
12.81 ± 3% -8.2 4.57 ± 43% perf-profile.children.cycles-pp.tcp_clean_rtx_queue
51.48 -1.5 50.03 perf-profile.children.cycles-pp.__x64_sys_sendto
51.47 -1.5 50.02 perf-profile.children.cycles-pp.__sys_sendto
51.38 -1.4 49.95 perf-profile.children.cycles-pp.sock_sendmsg
51.33 -1.4 49.90 perf-profile.children.cycles-pp.tcp_sendmsg
2.82 ± 7% -1.2 1.64 ± 12% perf-profile.children.cycles-pp.__tcp_transmit_skb
2.35 ± 8% -1.1 1.27 ± 14% perf-profile.children.cycles-pp.__ip_queue_xmit
2.03 ± 10% -1.1 0.97 ± 19% perf-profile.children.cycles-pp.ip_output
1.95 ± 10% -1.0 0.93 ± 19% perf-profile.children.cycles-pp.ip_finish_output2
1.69 ± 10% -0.9 0.79 ± 21% perf-profile.children.cycles-pp.__local_bh_enable_ip
1.66 ± 10% -0.9 0.78 ± 21% perf-profile.children.cycles-pp.do_softirq
1.69 ± 10% -0.9 0.81 ± 20% perf-profile.children.cycles-pp.__softirqentry_text_start
1.64 ± 10% -0.9 0.77 ± 21% perf-profile.children.cycles-pp.do_softirq_own_stack
2.39 ± 5% -0.9 1.53 ± 8% perf-profile.children.cycles-pp.tcp_write_xmit
1.57 ± 10% -0.8 0.73 ± 21% perf-profile.children.cycles-pp.net_rx_action
1.54 ± 10% -0.8 0.71 ± 22% perf-profile.children.cycles-pp.process_backlog
1.49 ± 10% -0.8 0.70 ± 22% perf-profile.children.cycles-pp.__netif_receive_skb_one_core
1.45 ± 10% -0.8 0.67 ± 22% perf-profile.children.cycles-pp.ip_rcv
1.41 ± 10% -0.8 0.65 ± 23% perf-profile.children.cycles-pp.ip_local_deliver
1.40 ± 10% -0.8 0.65 ± 23% perf-profile.children.cycles-pp.ip_local_deliver_finish
1.39 ± 10% -0.8 0.64 ± 23% perf-profile.children.cycles-pp.ip_protocol_deliver_rcu
1.37 ± 10% -0.7 0.63 ± 23% perf-profile.children.cycles-pp.tcp_v4_rcv
2.03 ± 4% -0.6 1.42 ± 5% perf-profile.children.cycles-pp.__tcp_push_pending_frames
7.38 ± 3% -0.6 6.82 perf-profile.children.cycles-pp._copy_from_iter_full
7.25 ± 3% -0.5 6.72 perf-profile.children.cycles-pp.copyin
0.69 ± 12% -0.5 0.19 ± 43% perf-profile.children.cycles-pp.sock_def_readable
0.67 ± 12% -0.5 0.18 ± 44% perf-profile.children.cycles-pp.__wake_up_common_lock
0.66 ± 17% -0.5 0.18 ± 31% perf-profile.children.cycles-pp.__schedule
0.64 ± 12% -0.5 0.18 ± 44% perf-profile.children.cycles-pp.__wake_up_common
0.63 ± 11% -0.5 0.17 ± 43% perf-profile.children.cycles-pp.try_to_wake_up
0.51 ± 17% -0.4 0.10 ± 47% perf-profile.children.cycles-pp.sk_wait_data
0.46 ± 16% -0.3 0.13 ± 29% perf-profile.children.cycles-pp.wait_woken
0.43 ± 16% -0.3 0.13 ± 26% perf-profile.children.cycles-pp.schedule_timeout
0.43 ± 17% -0.3 0.13 ± 25% perf-profile.children.cycles-pp.schedule
0.34 ± 12% -0.3 0.08 ± 67% perf-profile.children.cycles-pp.ttwu_do_activate
0.34 ± 13% -0.3 0.08 ± 67% perf-profile.children.cycles-pp.activate_task
0.34 ± 13% -0.3 0.08 ± 65% perf-profile.children.cycles-pp.enqueue_task_fair
0.27 ± 14% -0.2 0.05 ± 90% perf-profile.children.cycles-pp.enqueue_entity
0.20 ± 9% -0.1 0.07 ± 34% perf-profile.children.cycles-pp.lock_sock_nested
0.24 ± 6% -0.1 0.14 ± 19% perf-profile.children.cycles-pp._raw_spin_lock_bh
0.23 ± 8% -0.1 0.13 ± 11% perf-profile.children.cycles-pp.__dev_queue_xmit
0.18 ± 15% -0.1 0.09 ± 18% perf-profile.children.cycles-pp.pick_next_task_fair
0.23 ± 12% -0.1 0.14 ± 12% perf-profile.children.cycles-pp.update_load_avg
0.17 ± 27% -0.1 0.08 ± 20% perf-profile.children.cycles-pp.update_cfs_group
0.33 ± 2% -0.1 0.24 ± 6% perf-profile.children.cycles-pp.__list_del_entry_valid
0.12 ± 6% -0.1 0.04 ± 83% perf-profile.children.cycles-pp.__switch_to
0.15 ± 5% -0.1 0.08 ± 12% perf-profile.children.cycles-pp.dev_hard_start_xmit
0.11 ± 12% -0.1 0.05 ± 52% perf-profile.children.cycles-pp.update_rq_clock
0.13 ± 8% -0.1 0.07 ± 10% perf-profile.children.cycles-pp.loopback_xmit
0.08 ± 10% -0.0 0.03 ± 82% perf-profile.children.cycles-pp.__update_load_avg_cfs_rq
0.11 ± 16% -0.0 0.06 ± 7% perf-profile.children.cycles-pp.update_curr
0.07 -0.0 0.03 ± 81% perf-profile.children.cycles-pp.__ksize
0.35 -0.0 0.31 ± 4% perf-profile.children.cycles-pp.__free_one_page
0.12 ± 10% -0.0 0.09 ± 11% perf-profile.children.cycles-pp.__inet_lookup_established
0.10 ± 4% -0.0 0.07 ± 12% perf-profile.children.cycles-pp.skb_release_all
0.10 ± 7% -0.0 0.07 ± 7% perf-profile.children.cycles-pp.__list_add_valid
0.09 ± 8% -0.0 0.06 ± 14% perf-profile.children.cycles-pp.__tcp_send_ack
0.10 ± 6% -0.0 0.07 ± 12% perf-profile.children.cycles-pp.skb_release_head_state
0.06 ± 6% -0.0 0.04 ± 50% perf-profile.children.cycles-pp.tcp_event_new_data_sent
0.06 ± 6% -0.0 0.04 ± 50% perf-profile.children.cycles-pp.sockfd_lookup_light
0.07 ± 5% -0.0 0.05 ± 7% perf-profile.children.cycles-pp.tcp_current_mss
0.09 ± 5% -0.0 0.08 ± 6% perf-profile.children.cycles-pp.__skb_clone
0.07 ± 11% -0.0 0.05 ± 7% perf-profile.children.cycles-pp.tcp_send_mss
0.08 ± 4% -0.0 0.07 ± 7% perf-profile.children.cycles-pp.sk_reset_timer
0.08 -0.0 0.07 ± 7% perf-profile.children.cycles-pp.mod_timer
0.09 +0.0 0.11 ± 4% perf-profile.children.cycles-pp.kmem_cache_free
0.19 +0.0 0.21 ± 3% perf-profile.children.cycles-pp.__slab_free
0.09 ± 5% +0.0 0.11 ± 4% perf-profile.children.cycles-pp.__mod_zone_page_state
0.11 ± 6% +0.0 0.13 ± 4% perf-profile.children.cycles-pp.prep_new_page
0.27 +0.0 0.30 ± 3% perf-profile.children.cycles-pp.simple_copy_to_iter
0.08 ± 20% +0.0 0.11 ± 10% perf-profile.children.cycles-pp.ksys_write
0.08 ± 22% +0.0 0.11 ± 8% perf-profile.children.cycles-pp.vfs_write
0.08 ± 22% +0.0 0.11 ± 8% perf-profile.children.cycles-pp.__libc_write
0.08 ± 19% +0.0 0.11 ± 6% perf-profile.children.cycles-pp.generic_file_write_iter
0.08 ± 19% +0.0 0.11 ± 6% perf-profile.children.cycles-pp.__generic_file_write_iter
0.08 ± 19% +0.0 0.11 ± 6% perf-profile.children.cycles-pp.generic_perform_write
0.08 ± 19% +0.0 0.11 ± 8% perf-profile.children.cycles-pp.new_sync_write
0.12 ± 4% +0.0 0.15 ± 5% perf-profile.children.cycles-pp.skb_clone
0.11 ± 4% +0.0 0.15 ± 8% perf-profile.children.cycles-pp.__kmalloc_reserve
0.11 ± 3% +0.0 0.15 ± 7% perf-profile.children.cycles-pp.__kmalloc_node_track_caller
0.36 ± 3% +0.0 0.41 ± 5% perf-profile.children.cycles-pp.sk_stream_alloc_skb
0.06 ± 6% +0.1 0.12 ± 13% perf-profile.children.cycles-pp.alloc_slab_page
0.06 ± 7% +0.1 0.13 ± 11% perf-profile.children.cycles-pp.new_slab
0.06 ± 7% +0.1 0.13 ± 11% perf-profile.children.cycles-pp.allocate_slab
0.16 ± 2% +0.1 0.23 ± 3% perf-profile.children.cycles-pp.skb_try_coalesce
0.09 ± 4% +0.1 0.16 ± 9% perf-profile.children.cycles-pp.___slab_alloc
0.06 +0.1 0.13 ± 10% perf-profile.children.cycles-pp.put_cpu_partial
0.17 ± 2% +0.1 0.24 ± 2% perf-profile.children.cycles-pp.tcp_try_coalesce
0.06 ± 8% +0.1 0.13 ± 9% perf-profile.children.cycles-pp.unfreeze_partials
0.22 ± 2% +0.1 0.29 ± 2% perf-profile.children.cycles-pp.tcp_queue_rcv
0.04 ± 51% +0.1 0.13 ± 14% perf-profile.children.cycles-pp.native_irq_return_iret
0.20 ± 20% +0.1 0.34 ± 9% perf-profile.children.cycles-pp.task_tick_fair
0.75 ± 9% +0.2 0.90 ± 6% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
0.26 ± 19% +0.2 0.42 ± 7% perf-profile.children.cycles-pp.scheduler_tick
0.33 ± 12% +0.2 0.49 ± 7% perf-profile.children.cycles-pp.update_process_times
0.36 ± 12% +0.2 0.53 ± 6% perf-profile.children.cycles-pp.tick_sched_timer
0.33 ± 12% +0.2 0.50 ± 7% perf-profile.children.cycles-pp.tick_sched_handle
0.45 ± 8% +0.2 0.64 ± 7% perf-profile.children.cycles-pp.__hrtimer_run_queues
0.62 ± 10% +0.2 0.81 ± 6% perf-profile.children.cycles-pp.hrtimer_interrupt
1.04 ± 12% +0.4 1.43 ± 4% perf-profile.children.cycles-pp.apic_timer_interrupt
15.41 ± 2% +0.6 16.01 perf-profile.children.cycles-pp.copy_user_enhanced_fast_string
8.23 ± 2% +1.1 9.35 ± 2% perf-profile.children.cycles-pp._copy_to_iter
8.63 ± 2% +1.1 9.76 ± 2% perf-profile.children.cycles-pp.skb_copy_datagram_iter
8.19 ± 2% +1.1 9.32 ± 2% perf-profile.children.cycles-pp.copyout
8.62 ± 2% +1.1 9.75 ± 2% perf-profile.children.cycles-pp.__skb_datagram_iter
36.35 +7.5 43.87 ± 4% perf-profile.children.cycles-pp.tcp_sendmsg_locked
25.15 +8.2 33.34 ± 5% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
26.91 +8.3 35.25 ± 5% perf-profile.children.cycles-pp.skb_page_frag_refill
26.93 +8.3 35.27 ± 5% perf-profile.children.cycles-pp.sk_page_frag_refill
26.13 +8.4 34.51 ± 5% perf-profile.children.cycles-pp.rmqueue
26.82 +8.4 35.26 ± 5% perf-profile.children.cycles-pp.get_page_from_freelist
26.89 +8.4 35.33 ± 5% perf-profile.children.cycles-pp.__alloc_pages_nodemask
24.99 +8.8 33.83 ± 4% perf-profile.children.cycles-pp.free_one_page
24.77 +8.8 33.61 ± 4% perf-profile.children.cycles-pp._raw_spin_lock
26.43 +9.1 35.56 ± 4% perf-profile.children.cycles-pp.skb_release_data
25.55 +9.1 34.69 ± 4% perf-profile.children.cycles-pp.__free_pages_ok
26.76 +9.2 35.91 ± 4% perf-profile.children.cycles-pp.__kfree_skb
76.67 +16.3 92.96 ± 3% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
76.65 +16.3 92.95 ± 3% perf-profile.children.cycles-pp.do_syscall_64
48.94 +17.0 65.94 ± 5% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
24.96 ± 2% +17.7 42.70 ± 7% perf-profile.children.cycles-pp.__x64_sys_recvfrom
24.95 ± 2% +17.7 42.70 ± 7% perf-profile.children.cycles-pp.__sys_recvfrom
24.89 ± 2% +17.8 42.68 ± 7% perf-profile.children.cycles-pp.inet_recvmsg
24.89 ± 2% +17.8 42.67 ± 7% perf-profile.children.cycles-pp.tcp_recvmsg
22.07 -15.4 6.66 ± 44% perf-profile.self.cycles-pp.intel_idle
0.17 ± 27% -0.1 0.08 ± 20% perf-profile.self.cycles-pp.update_cfs_group
0.33 ± 2% -0.1 0.24 ± 6% perf-profile.self.cycles-pp.__list_del_entry_valid
0.21 ± 3% -0.1 0.13 ± 12% perf-profile.self.cycles-pp.__tcp_transmit_skb
0.11 ± 8% -0.1 0.04 ± 83% perf-profile.self.cycles-pp.__switch_to
0.13 ± 5% -0.1 0.08 ± 14% perf-profile.self.cycles-pp.tcp_recvmsg
0.08 ± 10% -0.0 0.03 ± 82% perf-profile.self.cycles-pp.__update_load_avg_cfs_rq
0.07 ± 5% -0.0 0.03 ± 81% perf-profile.self.cycles-pp.__ksize
0.13 ± 3% -0.0 0.10 ± 7% perf-profile.self.cycles-pp.tcp_clean_rtx_queue
0.21 ± 2% -0.0 0.18 ± 6% perf-profile.self.cycles-pp.__free_one_page
0.11 ± 13% -0.0 0.08 ± 9% perf-profile.self.cycles-pp.__inet_lookup_established
0.10 ± 8% -0.0 0.07 ± 7% perf-profile.self.cycles-pp.__list_add_valid
0.08 ± 9% -0.0 0.06 ± 14% perf-profile.self.cycles-pp.tcp_v4_rcv
0.11 ± 9% -0.0 0.09 ± 8% perf-profile.self.cycles-pp._raw_spin_lock_bh
0.08 ± 4% -0.0 0.06 ± 7% perf-profile.self.cycles-pp.__skb_clone
0.16 ± 3% -0.0 0.14 ± 6% perf-profile.self.cycles-pp.__ip_queue_xmit
0.12 ± 3% -0.0 0.10 ± 4% perf-profile.self.cycles-pp.__alloc_skb
0.19 +0.0 0.21 ± 3% perf-profile.self.cycles-pp.__slab_free
0.11 ± 6% +0.0 0.13 ± 5% perf-profile.self.cycles-pp.prep_new_page
0.09 ± 5% +0.0 0.11 ± 4% perf-profile.self.cycles-pp.__mod_zone_page_state
0.27 +0.0 0.30 ± 3% perf-profile.self.cycles-pp.__check_object_size
0.11 ± 4% +0.0 0.15 ± 5% perf-profile.self.cycles-pp.skb_clone
0.90 +0.0 0.94 ± 3% perf-profile.self.cycles-pp.skb_release_data
0.00 +0.1 0.07 ± 17% perf-profile.self.cycles-pp.task_tick_fair
0.16 ± 2% +0.1 0.23 ± 2% perf-profile.self.cycles-pp.skb_try_coalesce
0.04 ± 51% +0.1 0.13 ± 14% perf-profile.self.cycles-pp.native_irq_return_iret
0.33 ± 5% +0.1 0.46 ± 5% perf-profile.self.cycles-pp.__free_pages_ok
15.30 ± 2% +0.6 15.89 perf-profile.self.cycles-pp.copy_user_enhanced_fast_string
48.94 +17.0 65.94 ± 5% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
2411060 ± 11% -17.0% 2001744 ± 6% softirqs.CPU0.NET_RX
2718610 ± 11% -37.5% 1698432 ± 8% softirqs.CPU10.NET_RX
2668197 ± 9% -36.9% 1683516 ± 17% softirqs.CPU100.NET_RX
55543 ± 7% -50.3% 27591 ± 36% softirqs.CPU100.SCHED
2531285 ± 9% -38.1% 1565973 ± 12% softirqs.CPU101.NET_RX
58736 ± 13% -55.5% 26130 ± 13% softirqs.CPU101.SCHED
2418334 ± 13% -27.0% 1765607 ± 15% softirqs.CPU102.NET_RX
53198 ± 15% -50.3% 26416 ± 20% softirqs.CPU102.SCHED
2606464 ± 9% -32.2% 1766378 ± 10% softirqs.CPU103.NET_RX
12406 ± 4% -7.9% 11428 ± 2% softirqs.CPU103.RCU
51062 ± 9% -41.0% 30107 ± 35% softirqs.CPU103.SCHED
2493152 ± 9% -41.8% 1450692 ± 20% softirqs.CPU104.NET_RX
53419 ± 11% -48.2% 27660 ± 47% softirqs.CPU104.SCHED
2480381 ± 9% -48.6% 1273839 ± 40% softirqs.CPU105.NET_RX
12399 ± 4% -10.0% 11163 ± 3% softirqs.CPU105.RCU
53177 ± 12% -64.1% 19113 ± 53% softirqs.CPU105.SCHED
2568408 ± 10% -34.3% 1687652 ± 20% softirqs.CPU106.NET_RX
2449975 ± 10% -39.4% 1483795 ± 15% softirqs.CPU107.NET_RX
48717 ± 12% -61.8% 18588 ± 16% softirqs.CPU107.SCHED
2465363 ± 11% -49.6% 1241743 ± 40% softirqs.CPU108.NET_RX
12381 ± 5% -9.2% 11237 ± 3% softirqs.CPU108.RCU
46096 ± 11% -58.0% 19358 ± 52% softirqs.CPU108.SCHED
2273723 ± 7% -47.0% 1205867 ± 28% softirqs.CPU109.NET_RX
12299 ± 4% -9.0% 11198 ± 2% softirqs.CPU109.RCU
48456 ± 18% -67.7% 15656 ± 39% softirqs.CPU109.SCHED
2665035 ± 8% -39.7% 1607028 ± 11% softirqs.CPU11.NET_RX
2402488 ± 8% -46.9% 1274744 ± 22% softirqs.CPU110.NET_RX
48708 ± 10% -55.3% 21768 ± 23% softirqs.CPU110.SCHED
2294041 ± 9% -40.9% 1355094 ± 23% softirqs.CPU111.NET_RX
49685 ± 11% -57.8% 20983 ± 43% softirqs.CPU111.SCHED
2359170 ± 9% -35.3% 1526656 ± 17% softirqs.CPU112.NET_RX
12723 ± 3% -9.4% 11526 ± 3% softirqs.CPU112.RCU
46602 ± 12% -58.6% 19273 ± 42% softirqs.CPU112.SCHED
2340351 ± 8% -35.8% 1501914 ± 20% softirqs.CPU113.NET_RX
50881 ± 11% -52.5% 24187 ± 49% softirqs.CPU113.SCHED
2256188 ± 10% -32.8% 1515043 ± 24% softirqs.CPU114.NET_RX
12786 ± 3% -10.6% 11428 ± 2% softirqs.CPU114.RCU
47771 ± 7% -47.1% 25263 ± 50% softirqs.CPU114.SCHED
2270439 ± 7% -43.9% 1274147 ± 25% softirqs.CPU115.NET_RX
47755 ± 8% -60.1% 19048 ± 50% softirqs.CPU115.SCHED
2301318 ± 10% -29.8% 1614857 ± 18% softirqs.CPU116.NET_RX
44792 ± 17% -50.2% 22306 ± 20% softirqs.CPU116.SCHED
2148563 ± 11% -52.7% 1016630 ± 27% softirqs.CPU117.NET_RX
12487 ± 2% -9.3% 11322 ± 2% softirqs.CPU117.RCU
46927 ± 10% -65.7% 16097 ± 30% softirqs.CPU117.SCHED
2051462 ± 7% -36.9% 1293612 ± 36% softirqs.CPU118.NET_RX
42919 ± 10% -64.4% 15274 ± 29% softirqs.CPU118.SCHED
2051750 ± 8% -32.4% 1386457 ± 22% softirqs.CPU119.NET_RX
39861 ± 6% -50.3% 19821 ± 23% softirqs.CPU119.SCHED
2863159 ± 10% -48.2% 1484251 ± 13% softirqs.CPU12.NET_RX
35633 ± 14% -40.8% 21081 ± 31% softirqs.CPU12.SCHED
3570263 ± 17% -54.2% 1635090 ± 6% softirqs.CPU120.NET_RX
64626 ± 8% -58.7% 26662 ± 26% softirqs.CPU120.SCHED
3702963 ± 17% -59.9% 1484709 ± 17% softirqs.CPU121.NET_RX
64426 ± 9% -61.3% 24913 ± 37% softirqs.CPU121.SCHED
3530917 ± 17% -54.1% 1622018 ± 11% softirqs.CPU122.NET_RX
59735 ± 10% -58.8% 24608 ± 26% softirqs.CPU122.SCHED
3562489 ± 20% -60.7% 1398515 ± 22% softirqs.CPU123.NET_RX
62125 ± 7% -61.1% 24139 ± 35% softirqs.CPU123.SCHED
3394123 ± 16% -50.6% 1678076 ± 16% softirqs.CPU124.NET_RX
59949 ± 9% -48.9% 30638 ± 18% softirqs.CPU124.SCHED
3315251 ± 18% -52.8% 1563380 ± 20% softirqs.CPU125.NET_RX
57125 ± 8% -45.2% 31315 ± 42% softirqs.CPU125.SCHED
3336327 ± 17% -58.0% 1402064 ± 20% softirqs.CPU126.NET_RX
60326 ± 12% -59.2% 24601 ± 21% softirqs.CPU126.SCHED
3355759 ± 20% -56.1% 1471860 ± 24% softirqs.CPU127.NET_RX
60251 ± 8% -53.4% 28094 ± 34% softirqs.CPU127.SCHED
3067020 ± 12% -58.0% 1288984 ± 24% softirqs.CPU128.NET_RX
56485 ± 16% -59.8% 22700 ± 26% softirqs.CPU128.SCHED
3137842 ± 15% -57.0% 1349344 ± 33% softirqs.CPU129.NET_RX
59946 ± 10% -60.5% 23697 ± 48% softirqs.CPU129.SCHED
2857070 ± 8% -45.3% 1562616 ± 20% softirqs.CPU13.NET_RX
3019596 ± 23% -58.6% 1249743 ± 19% softirqs.CPU130.NET_RX
55289 ± 14% -50.3% 27497 ± 40% softirqs.CPU130.SCHED
3156549 ± 22% -59.9% 1265006 ± 21% softirqs.CPU131.NET_RX
55181 ± 12% -56.0% 24264 ± 30% softirqs.CPU131.SCHED
3040632 ± 24% -58.8% 1252943 ± 29% softirqs.CPU132.NET_RX
53604 ± 9% -64.8% 18845 ± 37% softirqs.CPU132.SCHED
2941914 ± 19% -48.4% 1519254 ± 15% softirqs.CPU133.NET_RX
51831 ± 12% -52.8% 24441 ± 11% softirqs.CPU133.SCHED
3004762 ± 16% -50.0% 1501441 ± 28% softirqs.CPU134.NET_RX
53490 ± 8% -58.3% 22283 ± 32% softirqs.CPU134.SCHED
3036502 ± 16% -61.9% 1155804 ± 23% softirqs.CPU135.NET_RX
53052 ± 7% -55.5% 23632 ± 38% softirqs.CPU135.SCHED
3058234 ± 15% -48.1% 1587797 ± 22% softirqs.CPU136.NET_RX
52750 ± 13% -55.9% 23276 ± 26% softirqs.CPU136.SCHED
2834945 ± 21% -57.8% 1195389 ± 30% softirqs.CPU137.NET_RX
53108 ± 5% -67.9% 17056 ± 50% softirqs.CPU137.SCHED
2664120 ± 22% -46.5% 1424697 ± 38% softirqs.CPU138.NET_RX
46323 ± 19% -54.5% 21075 ± 39% softirqs.CPU138.SCHED
2610691 ± 23% -49.8% 1310524 ± 17% softirqs.CPU139.NET_RX
47434 ± 10% -54.0% 21820 ± 20% softirqs.CPU139.SCHED
2806472 ± 11% -49.1% 1428968 ± 15% softirqs.CPU14.NET_RX
32233 ± 11% -38.8% 19727 ± 11% softirqs.CPU14.SCHED
2505133 ± 12% -44.3% 1396093 ± 15% softirqs.CPU140.NET_RX
45572 ± 9% -51.9% 21928 ± 19% softirqs.CPU140.SCHED
2393157 ± 12% -52.9% 1127286 ± 28% softirqs.CPU141.NET_RX
44422 ± 7% -58.4% 18491 ± 28% softirqs.CPU141.SCHED
2020765 ± 13% -41.5% 1182979 ± 30% softirqs.CPU142.NET_RX
36011 ± 18% -49.0% 18360 ± 36% softirqs.CPU142.SCHED
2104843 ± 7% -35.5% 1357127 ± 24% softirqs.CPU143.NET_RX
3276208 ± 14% -62.9% 1215092 ± 24% softirqs.CPU144.NET_RX
62909 ± 12% -67.4% 20530 ± 50% softirqs.CPU144.SCHED
3287089 ± 11% -49.0% 1677726 ± 7% softirqs.CPU145.NET_RX
61959 ± 5% -53.3% 28959 ± 29% softirqs.CPU145.SCHED
3078579 ± 10% -54.6% 1397375 ± 17% softirqs.CPU146.NET_RX
59372 ± 7% -60.6% 23390 ± 27% softirqs.CPU146.SCHED
3099385 ± 16% -56.2% 1356855 ± 32% softirqs.CPU147.NET_RX
55073 ± 8% -62.1% 20887 ± 42% softirqs.CPU147.SCHED
3058536 ± 14% -57.4% 1303914 ± 19% softirqs.CPU148.NET_RX
57666 ± 11% -59.4% 23388 ± 36% softirqs.CPU148.SCHED
2797210 ± 15% -44.9% 1541878 ± 14% softirqs.CPU149.NET_RX
54769 ± 5% -48.0% 28475 ± 36% softirqs.CPU149.SCHED
2753938 ± 13% -46.4% 1476205 ± 26% softirqs.CPU15.NET_RX
2773837 ± 8% -49.8% 1392482 ± 25% softirqs.CPU150.NET_RX
56481 ± 11% -57.9% 23792 ± 43% softirqs.CPU150.SCHED
2913248 ± 12% -57.9% 1227126 ± 20% softirqs.CPU151.NET_RX
54119 ± 6% -67.3% 17686 ± 18% softirqs.CPU151.SCHED
2962237 ± 14% -55.4% 1322170 ± 23% softirqs.CPU152.NET_RX
57848 ± 12% -55.2% 25899 ± 46% softirqs.CPU152.SCHED
2824322 ± 8% -49.7% 1421240 ± 16% softirqs.CPU153.NET_RX
57214 ± 5% -55.0% 25730 ± 23% softirqs.CPU153.SCHED
3017257 ± 14% -59.8% 1212020 ± 31% softirqs.CPU154.NET_RX
54493 ± 10% -65.8% 18622 ± 37% softirqs.CPU154.SCHED
2808497 ± 10% -53.7% 1299347 ± 32% softirqs.CPU155.NET_RX
55357 ± 9% -58.1% 23189 ± 31% softirqs.CPU155.SCHED
2864825 ± 15% -47.3% 1510474 ± 23% softirqs.CPU156.NET_RX
52853 ± 13% -52.1% 25309 ± 48% softirqs.CPU156.SCHED
2822646 ± 14% -54.4% 1287032 ± 26% softirqs.CPU157.NET_RX
55730 ± 10% -60.7% 21925 ± 62% softirqs.CPU157.SCHED
2509321 ± 9% -41.5% 1467265 ± 33% softirqs.CPU158.NET_RX
55135 ± 11% -64.5% 19590 ± 29% softirqs.CPU158.SCHED
2638967 ± 14% -40.1% 1580103 ± 16% softirqs.CPU159.NET_RX
56293 ± 8% -49.9% 28192 ± 31% softirqs.CPU159.SCHED
2695457 ± 15% -48.0% 1401454 ± 13% softirqs.CPU16.NET_RX
31877 ± 14% -31.0% 21980 ± 23% softirqs.CPU16.SCHED
2330445 ± 8% -42.0% 1352543 ± 13% softirqs.CPU160.NET_RX
50654 ± 8% -53.1% 23760 ± 15% softirqs.CPU160.SCHED
2659502 ± 12% -48.1% 1380199 ± 22% softirqs.CPU161.NET_RX
47876 ± 6% -58.2% 20010 ± 39% softirqs.CPU161.SCHED
2477864 ± 11% -37.5% 1549170 ± 15% softirqs.CPU162.NET_RX
45515 ± 17% -45.6% 24752 ± 36% softirqs.CPU162.SCHED
2382583 ± 6% -30.8% 1649222 ± 18% softirqs.CPU163.NET_RX
50905 ± 11% -50.0% 25461 ± 39% softirqs.CPU163.SCHED
45620 ± 11% -50.7% 22490 ± 25% softirqs.CPU164.SCHED
2003025 ± 16% -38.9% 1224234 ± 21% softirqs.CPU165.NET_RX
38487 ± 12% -45.7% 20897 ± 33% softirqs.CPU165.SCHED
2029079 ± 20% -36.7% 1283536 ± 11% softirqs.CPU167.NET_RX
37271 ± 11% -36.3% 23745 ± 18% softirqs.CPU167.SCHED
3788471 ± 46% -58.0% 1592793 ± 13% softirqs.CPU168.NET_RX
64500 ± 9% -58.1% 27032 ± 14% softirqs.CPU168.SCHED
4160271 ± 58% -59.3% 1693815 ± 21% softirqs.CPU169.NET_RX
65543 ± 12% -54.3% 29953 ± 44% softirqs.CPU169.SCHED
2704755 ± 9% -39.3% 1640754 ± 16% softirqs.CPU17.NET_RX
4010274 ± 54% -61.1% 1561586 ± 35% softirqs.CPU170.NET_RX
65461 ± 15% -64.6% 23155 ± 40% softirqs.CPU170.SCHED
4005888 ± 56% -53.5% 1863617 ± 17% softirqs.CPU171.NET_RX
61007 ± 8% -52.1% 29231 ± 18% softirqs.CPU171.SCHED
3794731 ± 58% -56.2% 1663773 ± 29% softirqs.CPU172.NET_RX
60746 ± 20% -53.5% 28242 ± 51% softirqs.CPU172.SCHED
3830799 ± 53% -63.3% 1404468 ± 39% softirqs.CPU173.NET_RX
54674 ± 20% -65.0% 19126 ± 49% softirqs.CPU173.SCHED
3538477 ± 42% -52.8% 1668621 ± 15% softirqs.CPU174.NET_RX
55423 ± 9% -55.9% 24447 ± 50% softirqs.CPU174.SCHED
57534 ± 7% -60.0% 22985 ± 45% softirqs.CPU175.SCHED
3887126 ± 65% -55.5% 1727897 ± 27% softirqs.CPU176.NET_RX
57205 ± 18% -44.7% 31630 ± 36% softirqs.CPU176.SCHED
3714857 ± 57% -49.2% 1887234 ± 22% softirqs.CPU177.NET_RX
53869 ± 20% -52.6% 25530 ± 32% softirqs.CPU177.SCHED
3795497 ± 55% -63.1% 1401327 ± 35% softirqs.CPU178.NET_RX
11681 ± 2% -8.1% 10736 ± 4% softirqs.CPU178.RCU
55747 ± 19% -60.3% 22144 ± 45% softirqs.CPU178.SCHED
3395540 ± 44% -54.3% 1552463 ± 33% softirqs.CPU179.NET_RX
52609 ± 15% -48.1% 27319 ± 42% softirqs.CPU179.SCHED
2752010 ± 11% -40.6% 1634235 ± 22% softirqs.CPU18.NET_RX
33098 ± 14% -32.8% 22230 ± 32% softirqs.CPU18.SCHED
3390488 ± 47% -60.0% 1356411 ± 38% softirqs.CPU180.NET_RX
61426 ± 14% -58.8% 25303 ± 52% softirqs.CPU180.SCHED
58005 ± 13% -52.8% 27403 ± 42% softirqs.CPU181.SCHED
3456590 ± 52% -49.9% 1732729 ± 17% softirqs.CPU182.NET_RX
3186026 ± 47% -58.2% 1332986 ± 20% softirqs.CPU183.NET_RX
53111 ± 13% -52.3% 25319 ± 37% softirqs.CPU183.SCHED
49775 ± 5% -63.8% 18007 ± 40% softirqs.CPU184.SCHED
3305651 ± 51% -63.5% 1205891 ± 49% softirqs.CPU185.NET_RX
11475 -8.2% 10528 softirqs.CPU185.RCU
48770 ± 14% -66.5% 16357 ± 57% softirqs.CPU185.SCHED
3150235 ± 44% -48.0% 1638477 ± 13% softirqs.CPU186.NET_RX
48194 ± 19% -40.9% 28494 ± 28% softirqs.CPU186.SCHED
2753794 ± 35% -55.6% 1223044 ± 28% softirqs.CPU187.NET_RX
48711 ± 12% -52.0% 23387 ± 46% softirqs.CPU187.SCHED
53989 ± 16% -60.2% 21466 ± 31% softirqs.CPU188.SCHED
2384076 ± 28% -40.8% 1412505 ± 19% softirqs.CPU189.NET_RX
42698 ± 22% -46.4% 22902 ± 44% softirqs.CPU189.SCHED
2686332 ± 8% -46.4% 1441162 ± 36% softirqs.CPU19.NET_RX
32263 ± 13% -38.1% 19986 ± 36% softirqs.CPU19.SCHED
2573552 ± 32% -35.3% 1664384 ± 20% softirqs.CPU190.NET_RX
2641313 ± 36% -30.6% 1833708 ± 8% softirqs.CPU191.NET_RX
2504611 ± 8% -41.0% 1477849 ± 18% softirqs.CPU2.NET_RX
2845833 ± 9% -41.6% 1663141 ± 14% softirqs.CPU20.NET_RX
36451 ± 18% -45.1% 20023 ± 31% softirqs.CPU20.SCHED
2659091 ± 9% -49.7% 1337253 ± 18% softirqs.CPU21.NET_RX
34403 ± 16% -55.6% 15274 ± 24% softirqs.CPU21.SCHED
2743681 ± 9% -46.0% 1480961 ± 26% softirqs.CPU22.NET_RX
36228 ± 16% -47.7% 18943 ± 26% softirqs.CPU22.SCHED
2679441 ± 12% -39.4% 1625081 ± 11% softirqs.CPU23.NET_RX
35508 ± 19% -25.9% 26299 ± 19% softirqs.CPU23.SCHED
3079975 ± 21% -43.4% 1742003 ± 6% softirqs.CPU24.NET_RX
3147505 ± 25% -55.7% 1394293 ± 16% softirqs.CPU25.NET_RX
27604 ± 11% -34.8% 17997 ± 25% softirqs.CPU25.SCHED
3142164 ± 19% -44.0% 1759559 ± 9% softirqs.CPU26.NET_RX
3139990 ± 25% -55.5% 1397428 ± 25% softirqs.CPU27.NET_RX
3423302 ± 24% -52.3% 1633571 ± 21% softirqs.CPU28.NET_RX
3183269 ± 18% -48.2% 1648689 ± 22% softirqs.CPU29.NET_RX
32549 ± 18% -30.2% 22733 ± 22% softirqs.CPU29.SCHED
2360622 ± 7% -36.9% 1490348 ± 28% softirqs.CPU3.NET_RX
3341152 ± 14% -55.1% 1500296 ± 14% softirqs.CPU30.NET_RX
29104 ± 21% -22.7% 22486 ± 8% softirqs.CPU30.SCHED
3265019 ± 15% -49.5% 1649112 ± 21% softirqs.CPU31.NET_RX
3477261 ± 11% -60.8% 1364031 ± 19% softirqs.CPU32.NET_RX
3477803 ± 15% -59.0% 1426551 ± 25% softirqs.CPU33.NET_RX
28698 ± 13% -38.2% 17743 ± 31% softirqs.CPU33.SCHED
3454786 ± 21% -55.8% 1527719 ± 30% softirqs.CPU34.NET_RX
3187114 ± 17% -59.1% 1303766 ± 30% softirqs.CPU35.NET_RX
32320 ± 8% -36.3% 20591 ± 44% softirqs.CPU35.SCHED
3826701 ± 17% -68.9% 1189508 ± 19% softirqs.CPU36.NET_RX
35974 ± 5% -46.3% 19331 ± 34% softirqs.CPU36.SCHED
3493579 ± 17% -56.6% 1516418 ± 15% softirqs.CPU37.NET_RX
33361 ± 13% -30.8% 23072 ± 22% softirqs.CPU37.SCHED
3520932 ± 16% -59.7% 1418104 ± 36% softirqs.CPU38.NET_RX
35750 ± 5% -36.2% 22801 ± 38% softirqs.CPU38.SCHED
3565998 ± 12% -61.2% 1385115 ± 18% softirqs.CPU39.NET_RX
2482668 ± 9% -40.3% 1482219 ± 19% softirqs.CPU4.NET_RX
3626850 ± 21% -60.4% 1437274 ± 27% softirqs.CPU40.NET_RX
3214798 ± 13% -56.9% 1384251 ± 22% softirqs.CPU41.NET_RX
35949 ± 11% -38.8% 21996 ± 30% softirqs.CPU41.SCHED
3589512 ± 16% -60.8% 1406472 ± 36% softirqs.CPU42.NET_RX
39104 ± 19% -47.8% 20421 ± 41% softirqs.CPU42.SCHED
3501848 ± 16% -58.4% 1455130 ± 21% softirqs.CPU43.NET_RX
38844 ± 9% -54.1% 17814 ± 17% softirqs.CPU43.SCHED
3638528 ± 21% -65.0% 1272753 ± 25% softirqs.CPU44.NET_RX
41268 ± 12% -52.8% 19458 ± 31% softirqs.CPU44.SCHED
3459596 ± 16% -68.4% 1092443 ± 17% softirqs.CPU45.NET_RX
40479 ± 17% -63.4% 14820 ± 15% softirqs.CPU45.SCHED
3411025 ± 20% -54.3% 1558432 ± 16% softirqs.CPU46.NET_RX
44947 ± 20% -47.7% 23520 ± 19% softirqs.CPU46.SCHED
3482646 ± 16% -55.4% 1552959 ± 18% softirqs.CPU47.NET_RX
45972 ± 17% -41.2% 27049 ± 28% softirqs.CPU47.SCHED
2523850 ± 15% -48.7% 1293986 ± 33% softirqs.CPU48.NET_RX
2552666 ± 11% -35.1% 1655564 ± 14% softirqs.CPU49.NET_RX
2571090 ± 10% -40.1% 1539608 ± 14% softirqs.CPU5.NET_RX
2581711 ± 13% -51.6% 1250613 ± 19% softirqs.CPU50.NET_RX
2884667 ± 11% -47.1% 1525563 ± 31% softirqs.CPU51.NET_RX
3079250 ± 12% -52.2% 1473165 ± 11% softirqs.CPU52.NET_RX
2840747 ± 21% -44.5% 1576170 ± 8% softirqs.CPU53.NET_RX
2921278 ± 14% -52.0% 1401696 ± 16% softirqs.CPU54.NET_RX
3169937 ± 17% -60.8% 1241156 ± 25% softirqs.CPU55.NET_RX
36307 ± 8% -52.5% 17262 ± 38% softirqs.CPU55.SCHED
3056456 ± 6% -53.4% 1422908 ± 25% softirqs.CPU56.NET_RX
31795 ± 10% -44.7% 17568 ± 29% softirqs.CPU56.SCHED
3141468 ± 13% -52.7% 1487422 ± 9% softirqs.CPU57.NET_RX
3221467 ± 8% -56.5% 1402480 ± 25% softirqs.CPU58.NET_RX
3502440 ± 14% -60.5% 1385062 ± 27% softirqs.CPU59.NET_RX
2633501 ± 8% -37.2% 1653750 ± 21% softirqs.CPU6.NET_RX
3262285 ± 10% -55.1% 1463693 ± 26% softirqs.CPU60.NET_RX
3039300 ± 5% -51.3% 1479816 ± 22% softirqs.CPU61.NET_RX
2817826 ± 17% -51.1% 1376685 ± 25% softirqs.CPU62.NET_RX
3057336 ± 13% -50.4% 1516165 ± 18% softirqs.CPU63.NET_RX
3067351 ± 11% -53.7% 1421032 ± 20% softirqs.CPU64.NET_RX
34915 ± 7% -22.2% 27175 ± 15% softirqs.CPU64.SCHED
3255527 ± 11% -58.0% 1368668 ± 25% softirqs.CPU65.NET_RX
3245829 ± 14% -50.6% 1602539 ± 15% softirqs.CPU66.NET_RX
43367 ± 10% -40.4% 25864 ± 25% softirqs.CPU66.SCHED
3054422 ± 17% -45.0% 1681270 ± 14% softirqs.CPU67.NET_RX
3024427 ± 10% -55.6% 1343967 ± 17% softirqs.CPU68.NET_RX
3359067 ± 14% -60.1% 1340486 ± 20% softirqs.CPU69.NET_RX
45511 ± 11% -44.8% 25131 ± 31% softirqs.CPU69.SCHED
2581762 ± 12% -30.1% 1804333 ± 13% softirqs.CPU7.NET_RX
3185973 ± 12% -56.5% 1384547 ± 8% softirqs.CPU70.NET_RX
44949 ± 6% -42.2% 25997 ± 29% softirqs.CPU70.SCHED
3100967 ± 12% -55.4% 1384032 ± 13% softirqs.CPU71.NET_RX
42515 ± 18% -49.1% 21654 ± 28% softirqs.CPU71.SCHED
3332116 ± 63% -50.5% 1649135 ± 22% softirqs.CPU72.NET_RX
3459284 ± 55% -46.1% 1865592 ± 16% softirqs.CPU73.NET_RX
3593017 ± 41% -54.6% 1630199 ± 31% softirqs.CPU74.NET_RX
3908371 ± 64% -52.2% 1866996 ± 17% softirqs.CPU75.NET_RX
3424152 ± 38% -58.6% 1416916 ± 28% softirqs.CPU76.NET_RX
3523798 ± 49% -68.5% 1110373 ± 34% softirqs.CPU77.NET_RX
3782170 ± 56% -59.1% 1547780 ± 22% softirqs.CPU78.NET_RX
3831396 ± 58% -60.5% 1511650 ± 40% softirqs.CPU79.NET_RX
2544287 ± 15% -32.8% 1709146 ± 14% softirqs.CPU8.NET_RX
3777653 ± 52% -57.6% 1603569 ± 40% softirqs.CPU80.NET_RX
4121863 ± 60% -58.1% 1727891 ± 23% softirqs.CPU81.NET_RX
38262 ± 10% -27.5% 27747 ± 17% softirqs.CPU81.SCHED
4008537 ± 53% -67.8% 1291794 ± 23% softirqs.CPU82.NET_RX
3925610 ± 57% -60.9% 1533230 ± 40% softirqs.CPU83.NET_RX
3675683 ± 56% -59.6% 1483266 ± 39% softirqs.CPU84.NET_RX
4013276 ± 59% -51.5% 1946438 ± 19% softirqs.CPU85.NET_RX
3980480 ± 59% -57.3% 1697784 ± 13% softirqs.CPU86.NET_RX
3903322 ± 48% -61.2% 1516172 ± 27% softirqs.CPU87.NET_RX
39408 ± 36% -44.2% 21980 ± 21% softirqs.CPU87.SCHED
3820578 ± 42% -68.5% 1204526 ± 47% softirqs.CPU89.NET_RX
41951 ± 21% -54.5% 19089 ± 62% softirqs.CPU89.SCHED
2771428 ± 7% -39.0% 1690687 ± 19% softirqs.CPU9.NET_RX
3937814 ± 53% -58.8% 1623879 ± 35% softirqs.CPU90.NET_RX
4063598 ± 54% -66.5% 1359492 ± 28% softirqs.CPU91.NET_RX
41868 ± 33% -46.8% 22255 ± 44% softirqs.CPU91.SCHED
3610446 ± 40% -55.8% 1597481 ± 47% softirqs.CPU92.NET_RX
4116107 ± 52% -65.1% 1434570 ± 33% softirqs.CPU93.NET_RX
48062 ± 36% -57.8% 20260 ± 47% softirqs.CPU93.SCHED
3649362 ± 39% -53.6% 1692783 ± 13% softirqs.CPU94.NET_RX
46932 ± 22% -40.0% 28178 ± 34% softirqs.CPU94.SCHED
3674896 ± 48% -53.8% 1695998 ± 19% softirqs.CPU95.NET_RX
2906346 ± 10% -36.3% 1850842 ± 12% softirqs.CPU96.NET_RX
47371 ± 17% -34.1% 31198 ± 25% softirqs.CPU96.SCHED
2823944 ± 6% -46.6% 1506891 ± 27% softirqs.CPU97.NET_RX
12566 -8.4% 11511 ± 2% softirqs.CPU97.RCU
58287 ± 16% -60.5% 23031 ± 34% softirqs.CPU97.SCHED
2778341 ± 8% -41.0% 1640505 ± 21% softirqs.CPU98.NET_RX
12108 ± 4% -5.9% 11391 ± 3% softirqs.CPU98.RCU
57815 ± 13% -49.7% 29081 ± 43% softirqs.CPU98.SCHED
2754057 ± 9% -43.6% 1553858 ± 22% softirqs.CPU99.NET_RX
55507 ± 4% -52.1% 26578 ± 38% softirqs.CPU99.SCHED
5.877e+08 ± 12% -51.4% 2.859e+08 ± 10% softirqs.NET_RX
8311241 ± 4% -44.9% 4578168 ± 15% softirqs.SCHED

netperf.Throughput_Mbps

4000 +--------------------------------------------------------------------+
3800 |-+ .+... |
| .... ... |
3600 |-+ ... .. |
3400 |-+ ... .. |
| .... ... |
3200 |................+.................+. .. |
3000 |-+ .|
2800 |-+ |
| |
2600 |-+ |
2400 |-+ |
| |
2200 |-+ O O O |
2000 +--------------------------------------------------------------------+

netperf.time.minor_page_faults

48000 +-------------------------------------------------------------------+
|........ |
46000 |-+ ...... |
44000 |-+ .. ..|
| +................ ...... |
42000 |-+ +................ ...... |
| +.. |
40000 |-+ |
| |
38000 |-+ |
36000 |-+ |
| |
34000 |-+ O |
| O O |
32000 +-------------------------------------------------------------------+

[*] bisect-good sample
[O] bisect-bad sample

***************************************************************************************************
lkp-hsw-4ex1: 144 threads Intel(R) Xeon(R) CPU E7-8890 v3 @ 2.50GHz with 512G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase/ucode:
gcc-9/performance/x86_64-rhel-8.3/debian-10.4-x86_64-20200603.cgz/300s/8T/lkp-hsw-4ex1/anon-w-seq/vm-scalability/0x16
commit:
a349834703 ("sched/fair: Rename sg_lb_stats::sum_nr_running to sum_h_nr_running")
fcf0553db6 ("sched/fair: Remove meaningless imbalance calculation")
a349834703010183 fcf0553db6f4c79387864f6e4ab
---------------- ---------------------------
%stddev %change %stddev
\ | \
595857 +4.6% 623549 vm-scalability.median
8.22 ± 7% -3.2 5.01 ± 5% vm-scalability.median_stddev%
88608356 +1.5% 89976003 vm-scalability.throughput
498624 ± 2% -9.5% 451343 vm-scalability.time.involuntary_context_switches
9335 -1.3% 9212 vm-scalability.time.percent_of_cpu_this_job_got
12408855 ± 9% -17.1% 10291775 ± 8% meminfo.DirectMap2M
20311 ± 40% +60.3% 32564 ± 16% numa-numastat.node0.other_node
1759 ± 8% +62.0% 2850 ± 6% syscalls.sys_read.med
6827 ± 2% -2.8% 6636 vmstat.system.cs
12092 ± 18% -43.8% 6792 ± 30% numa-vmstat.node0.nr_slab_reclaimable
6233 ± 26% +29.4% 8069 ± 23% numa-vmstat.node3.nr_slab_reclaimable
14990 ± 12% +45.0% 21732 ± 18% numa-vmstat.node3.nr_slab_unreclaimable
48371 ± 18% -43.8% 27169 ± 30% numa-meminfo.node0.KReclaimable
48371 ± 18% -43.8% 27169 ± 30% numa-meminfo.node0.SReclaimable
167341 ± 16% -32.6% 112749 ± 31% numa-meminfo.node0.Slab
24931 ± 26% +29.5% 32277 ± 23% numa-meminfo.node3.KReclaimable
24931 ± 26% +29.5% 32277 ± 23% numa-meminfo.node3.SReclaimable
59961 ± 12% +45.0% 86933 ± 18% numa-meminfo.node3.SUnreclaim
84893 ± 15% +40.4% 119211 ± 13% numa-meminfo.node3.Slab
4627 ± 9% +25.4% 5802 ± 4% slabinfo.eventpoll_pwq.active_objs
4627 ± 9% +25.4% 5802 ± 4% slabinfo.eventpoll_pwq.num_objs
2190 ± 14% +26.1% 2762 ± 4% slabinfo.kmem_cache.active_objs
2190 ± 14% +26.1% 2762 ± 4% slabinfo.kmem_cache.num_objs
5823 ± 12% +22.2% 7118 ± 4% slabinfo.kmem_cache_node.active_objs
9665 ± 5% +10.3% 10663 ± 3% slabinfo.shmem_inode_cache.active_objs
9775 ± 5% +10.1% 10762 ± 3% slabinfo.shmem_inode_cache.num_objs
322.79 -7.9% 297.22 perf-stat.i.cpu-migrations
21.89 ± 3% -1.2 20.69 perf-stat.i.node-load-miss-rate%
13112232 ± 3% +3.4% 13562841 perf-stat.i.node-loads
21.44 ± 2% -1.1 20.29 perf-stat.overall.node-load-miss-rate%
3160 +1.9% 3219 perf-stat.overall.path-length
6748 ± 2% -2.6% 6570 perf-stat.ps.context-switches
319.09 -8.2% 292.97 perf-stat.ps.cpu-migrations
12905267 ± 3% +3.4% 13345546 perf-stat.ps.node-loads
12193 ± 5% +8.6% 13239 ± 4% softirqs.CPU105.RCU
12791 ± 6% +12.4% 14382 ± 7% softirqs.CPU22.RCU
12781 ± 5% +12.3% 14351 ± 4% softirqs.CPU28.RCU
12870 ± 4% +9.0% 14033 ± 7% softirqs.CPU31.RCU
11451 ± 5% +11.7% 12785 ± 4% softirqs.CPU90.RCU
11449 ± 6% +9.1% 12497 ± 6% softirqs.CPU91.RCU
11486 ± 6% +8.9% 12510 softirqs.CPU93.RCU
12197 ± 5% +10.4% 13462 ± 5% softirqs.CPU97.RCU
3899 ±100% -78.7% 830.89 ± 67% sched_debug.cfs_rq:/.load_avg.max
386.54 ± 91% -76.1% 92.43 ± 61% sched_debug.cfs_rq:/.load_avg.stddev
747.05 ± 7% -26.9% 546.12 ± 31% sched_debug.cfs_rq:/.runnable_load_avg.max
-203275 +540.7% -1302453 sched_debug.cfs_rq:/.spread0.avg
1142 ± 5% +9.6% 1251 ± 6% sched_debug.cfs_rq:/.util_avg.max
1.00 ±163% +2117.5% 22.17 ±100% sched_debug.cfs_rq:/.util_avg.min
212.47 ± 14% -16.9% 176.57 ± 4% sched_debug.cfs_rq:/.util_est_enqueued.stddev
0.00 ± 7% +226.7% 0.00 ± 18% sched_debug.cpu.next_balance.stddev
2.48 ± 20% -24.6% 1.87 ± 11% sched_debug.cpu.nr_running.max
0.64 ± 29% -49.1% 0.33 ± 11% sched_debug.cpu.nr_running.stddev
137.50 ± 54% +194.2% 404.50 ± 78% interrupts.CPU109.RES:Rescheduling_interrupts
822.50 ± 22% -39.4% 498.75 ± 49% interrupts.CPU114.CAL:Function_call_interrupts
126.00 ± 72% +161.1% 329.00 ± 48% interrupts.CPU117.RES:Rescheduling_interrupts
153.00 ± 69% +121.6% 339.00 ± 51% interrupts.CPU119.RES:Rescheduling_interrupts
162.75 ± 83% +170.8% 440.75 ± 36% interrupts.CPU121.RES:Rescheduling_interrupts
798.00 ± 30% -45.0% 438.75 ± 28% interrupts.CPU126.CAL:Function_call_interrupts
3831 ± 37% -96.4% 138.50 ±106% interrupts.CPU128.NMI:Non-maskable_interrupts
3831 ± 37% -96.4% 138.50 ±106% interrupts.CPU128.PMI:Performance_monitoring_interrupts
218.25 ± 44% -51.7% 105.50 ± 4% interrupts.CPU128.RES:Rescheduling_interrupts
453.00 ± 74% -74.3% 116.50 ± 26% interrupts.CPU130.RES:Rescheduling_interrupts
2382 ± 15% -96.6% 80.75 ±133% interrupts.CPU138.NMI:Non-maskable_interrupts
2382 ± 15% -96.6% 80.75 ±133% interrupts.CPU138.PMI:Performance_monitoring_interrupts
557.25 ±155% +341.2% 2458 ± 79% interrupts.CPU14.NMI:Non-maskable_interrupts
557.25 ±155% +341.2% 2458 ± 79% interrupts.CPU14.PMI:Performance_monitoring_interrupts
782.00 ± 31% -37.0% 492.50 ± 50% interrupts.CPU17.CAL:Function_call_interrupts
779.75 ± 31% -36.8% 492.50 ± 50% interrupts.CPU18.CAL:Function_call_interrupts
782.25 ± 31% -37.2% 491.00 ± 49% interrupts.CPU20.CAL:Function_call_interrupts
5855 ± 20% -75.7% 1420 ±106% interrupts.CPU23.NMI:Non-maskable_interrupts
5855 ± 20% -75.7% 1420 ±106% interrupts.CPU23.PMI:Performance_monitoring_interrupts
785.00 ± 30% -38.0% 487.00 ± 51% interrupts.CPU3.CAL:Function_call_interrupts
1266 ±166% +241.5% 4325 ± 38% interrupts.CPU34.NMI:Non-maskable_interrupts
1266 ±166% +241.5% 4325 ± 38% interrupts.CPU34.PMI:Performance_monitoring_interrupts
784.00 ± 30% -37.0% 493.75 ± 49% interrupts.CPU36.CAL:Function_call_interrupts
783.25 ± 31% -37.6% 489.00 ± 51% interrupts.CPU37.CAL:Function_call_interrupts
325.75 ± 11% +80.6% 588.25 ± 45% interrupts.CPU37.RES:Rescheduling_interrupts
782.00 ± 31% -37.0% 492.50 ± 50% interrupts.CPU39.CAL:Function_call_interrupts
781.50 ± 31% -38.0% 484.50 ± 52% interrupts.CPU41.CAL:Function_call_interrupts
292.50 ± 21% +59.2% 465.75 ± 26% interrupts.CPU43.RES:Rescheduling_interrupts
233.00 ± 12% +59.4% 371.50 ± 20% interrupts.CPU48.RES:Rescheduling_interrupts
883.00 ±160% -97.1% 25.25 ±150% interrupts.CPU50.NMI:Non-maskable_interrupts
883.00 ±160% -97.1% 25.25 ±150% interrupts.CPU50.PMI:Performance_monitoring_interrupts
265.25 ± 28% +69.8% 450.50 ± 27% interrupts.CPU50.RES:Rescheduling_interrupts
249.75 ± 19% +40.8% 351.75 ± 11% interrupts.CPU51.RES:Rescheduling_interrupts
785.50 ± 30% -37.2% 493.00 ± 49% interrupts.CPU53.CAL:Function_call_interrupts
1.75 ±116% +1.2e+05% 2020 ±108% interrupts.CPU53.NMI:Non-maskable_interrupts
1.75 ±116% +1.2e+05% 2020 ±108% interrupts.CPU53.PMI:Performance_monitoring_interrupts
826.75 ± 79% +429.5% 4377 ± 27% interrupts.CPU56.NMI:Non-maskable_interrupts
826.75 ± 79% +429.5% 4377 ± 27% interrupts.CPU56.PMI:Performance_monitoring_interrupts
782.50 ± 31% -37.6% 488.00 ± 51% interrupts.CPU57.CAL:Function_call_interrupts
32.25 ±164% -100.0% 0.00 interrupts.CPU57.TLB:TLB_shootdowns
782.50 ± 31% -36.9% 494.00 ± 50% interrupts.CPU6.CAL:Function_call_interrupts
781.00 ± 31% -36.8% 493.50 ± 50% interrupts.CPU61.CAL:Function_call_interrupts
658.75 ±169% +429.9% 3490 ± 43% interrupts.CPU61.NMI:Non-maskable_interrupts
658.75 ±169% +429.9% 3490 ± 43% interrupts.CPU61.PMI:Performance_monitoring_interrupts
405.75 ± 20% -35.6% 261.50 ± 17% interrupts.CPU62.RES:Rescheduling_interrupts
781.75 ± 31% -36.8% 494.00 ± 50% interrupts.CPU65.CAL:Function_call_interrupts
782.50 ± 31% -37.0% 492.75 ± 50% interrupts.CPU66.CAL:Function_call_interrupts
838.25 ±121% +513.8% 5145 ± 30% interrupts.CPU66.NMI:Non-maskable_interrupts
838.25 ±121% +513.8% 5145 ± 30% interrupts.CPU66.PMI:Performance_monitoring_interrupts
782.00 ± 31% -37.1% 492.25 ± 50% interrupts.CPU67.CAL:Function_call_interrupts
782.25 ± 31% -37.0% 492.75 ± 49% interrupts.CPU69.CAL:Function_call_interrupts
780.00 ± 31% -37.3% 489.25 ± 49% interrupts.CPU71.CAL:Function_call_interrupts
784.25 ± 31% -38.0% 486.50 ± 51% interrupts.CPU75.CAL:Function_call_interrupts
783.25 ± 31% -36.9% 494.25 ± 50% interrupts.CPU77.CAL:Function_call_interrupts
822.00 ± 33% -39.9% 494.25 ± 50% interrupts.CPU83.CAL:Function_call_interrupts
789.75 ± 29% -37.1% 496.50 ± 49% interrupts.CPU90.CAL:Function_call_interrupts
5966 ± 15% -53.5% 2777 ± 62% interrupts.CPU94.NMI:Non-maskable_interrupts
5966 ± 15% -53.5% 2777 ± 62% interrupts.CPU94.PMI:Performance_monitoring_interrupts
46379 ± 9% -10.1% 41674 ± 4% interrupts.RES:Rescheduling_interrupts
68.78 -10.0 58.75 ± 8% perf-profile.calltrace.cycles-pp.do_access
30.04 ± 4% -4.3 25.69 ± 9% perf-profile.calltrace.cycles-pp.do_page_fault.page_fault.do_access
30.06 ± 4% -4.3 25.71 ± 9% perf-profile.calltrace.cycles-pp.page_fault.do_access
30.00 ± 4% -4.3 25.66 ± 9% perf-profile.calltrace.cycles-pp.do_user_addr_fault.do_page_fault.page_fault.do_access
29.93 ± 4% -4.3 25.59 ± 9% perf-profile.calltrace.cycles-pp.handle_mm_fault.do_user_addr_fault.do_page_fault.page_fault.do_access
26.63 ± 2% -2.7 23.88 ± 6% perf-profile.calltrace.cycles-pp.do_rw_once
0.57 ± 5% -0.3 0.27 ±100% perf-profile.calltrace.cycles-pp.rmqueue.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.do_huge_pmd_anonymous_page
0.63 ± 6% -0.2 0.43 ± 58% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
0.63 ± 6% -0.2 0.43 ± 58% perf-profile.calltrace.cycles-pp.alloc_pages_vma.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
0.62 ± 5% -0.2 0.42 ± 58% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.do_huge_pmd_anonymous_page.__handle_mm_fault
28.25 ± 5% +6.6 34.88 ± 6% perf-profile.calltrace.cycles-pp.clear_page_erms.clear_subpage.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault
29.30 ± 5% +6.7 36.03 ± 6% perf-profile.calltrace.cycles-pp.clear_subpage.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
30.21 ± 5% +7.0 37.23 ± 6% perf-profile.calltrace.cycles-pp.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
31.31 ± 5% +7.3 38.61 ± 6% perf-profile.calltrace.cycles-pp.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.do_page_fault
31.39 ± 5% +7.3 38.70 ± 6% perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.do_page_fault.page_fault
1.50 ±106% +11.7 13.17 ± 35% perf-profile.calltrace.cycles-pp.handle_mm_fault.do_user_addr_fault.do_page_fault.page_fault
1.50 ±106% +11.7 13.20 ± 35% perf-profile.calltrace.cycles-pp.do_user_addr_fault.do_page_fault.page_fault
1.51 ±106% +11.7 13.21 ± 35% perf-profile.calltrace.cycles-pp.page_fault
1.50 ±106% +11.7 13.21 ± 35% perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
65.44 -9.4 56.04 ± 8% perf-profile.children.cycles-pp.do_access
30.08 -3.4 26.70 ± 7% perf-profile.children.cycles-pp.do_rw_once
0.06 ± 11% +0.0 0.08 ± 10% perf-profile.children.cycles-pp.pagevec_lru_move_fn
0.14 ± 6% +0.0 0.16 ± 6% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.13 ± 10% +0.0 0.16 ± 9% perf-profile.children.cycles-pp.pte_alloc_one
0.12 ± 12% +0.0 0.15 ± 8% perf-profile.children.cycles-pp.prep_new_page
0.12 ± 10% +0.0 0.16 ± 20% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.12 ± 9% +0.0 0.15 ± 10% perf-profile.children.cycles-pp.prepare_exit_to_usermode
0.14 ± 8% +0.0 0.19 ± 9% perf-profile.children.cycles-pp.swapgs_restore_regs_and_return_to_usermode
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.irq_work_interrupt
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.smp_irq_work_interrupt
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.irq_work_run
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.printk
0.00 +0.1 0.06 ± 11% perf-profile.children.cycles-pp.irq_work_run_list
0.00 +0.1 0.07 ± 31% perf-profile.children.cycles-pp.__intel_pmu_enable_all
0.35 ± 3% +0.1 0.43 ± 7% perf-profile.children.cycles-pp._cond_resched
0.38 ± 6% +0.1 0.48 ± 6% perf-profile.children.cycles-pp.___might_sleep
0.46 ± 8% +0.1 0.60 ± 13% perf-profile.children.cycles-pp.scheduler_tick
0.66 ± 8% +0.2 0.81 ± 8% perf-profile.children.cycles-pp.rmqueue
0.70 ± 8% +0.2 0.87 ± 8% perf-profile.children.cycles-pp.alloc_pages_vma
0.79 ± 9% +0.2 0.97 ± 8% perf-profile.children.cycles-pp.get_page_from_freelist
0.83 ± 8% +0.2 1.03 ± 7% perf-profile.children.cycles-pp.__alloc_pages_nodemask
1.09 ± 6% +0.3 1.38 ± 11% perf-profile.children.cycles-pp.__hrtimer_run_queues
2.11 ± 3% +0.5 2.63 ± 12% perf-profile.children.cycles-pp.apic_timer_interrupt
28.56 ± 5% +6.6 35.18 ± 6% perf-profile.children.cycles-pp.clear_page_erms
29.39 ± 5% +6.8 36.14 ± 6% perf-profile.children.cycles-pp.clear_subpage
30.34 ± 5% +7.0 37.30 ± 6% perf-profile.children.cycles-pp.clear_huge_page
31.39 ± 5% +7.2 38.61 ± 6% perf-profile.children.cycles-pp.do_huge_pmd_anonymous_page
31.47 ± 5% +7.2 38.72 ± 6% perf-profile.children.cycles-pp.__handle_mm_fault
31.52 ± 5% +7.3 38.77 ± 6% perf-profile.children.cycles-pp.handle_mm_fault
31.60 ± 5% +7.3 38.87 ± 6% perf-profile.children.cycles-pp.do_user_addr_fault
31.67 ± 5% +7.3 38.95 ± 6% perf-profile.children.cycles-pp.page_fault
31.63 ± 5% +7.3 38.91 ± 6% perf-profile.children.cycles-pp.do_page_fault
29.16 ± 4% -4.1 25.05 ± 7% perf-profile.self.cycles-pp.do_access
27.38 -3.1 24.28 ± 7% perf-profile.self.cycles-pp.do_rw_once
0.05 ± 8% +0.0 0.07 ± 12% perf-profile.self.cycles-pp.prep_new_page
0.09 ± 9% +0.0 0.12 ± 11% perf-profile.self.cycles-pp.prepare_exit_to_usermode
0.00 +0.1 0.05 ± 8% perf-profile.self.cycles-pp.do_huge_pmd_anonymous_page
0.00 +0.1 0.07 ± 31% perf-profile.self.cycles-pp.__intel_pmu_enable_all
0.28 ± 3% +0.1 0.35 ± 7% perf-profile.self.cycles-pp._cond_resched
0.35 ± 4% +0.1 0.43 ± 6% perf-profile.self.cycles-pp.___might_sleep
0.48 ± 8% +0.1 0.60 ± 9% perf-profile.self.cycles-pp.rmqueue
0.88 ± 4% +0.2 1.11 ± 3% perf-profile.self.cycles-pp.clear_subpage
28.18 ± 5% +6.4 34.57 ± 6% perf-profile.self.cycles-pp.clear_page_erms
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[lockdep] d6667394fb: WARNING:suspicious_RCU_usage
by kernel test robot
Greetings,
FYI, we noticed the following commit (built with gcc-9):
commit: d6667394fb52e8807137aea3adfa80d3a9c0125c ("lockdep: Fix lockdep recursion")
https://git.kernel.org/cgit/linux/kernel/git/peterz/queue.git locking/urgent
in testcase: trinity
version: trinity-i386
with following parameters:
runtime: 300s
test-description: Trinity is a Linux system call fuzz tester.
test-url: http://codemonkey.org.uk/projects/trinity/
on test machine: qemu-system-i386 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+--------------------------------------------------------------------+----------+------------+
| | v5.9-rc7 | d6667394fb |
+--------------------------------------------------------------------+----------+------------+
| boot_successes | 100 | 0 |
| boot_failures | 60 | 22 |
| WARNING:at_kernel/trace/trace.c:#trace_find_next_entry | 60 | |
| EIP:trace_find_next_entry | 60 | |
| WARNING:suspicious_RCU_usage | 0 | 22 |
| kernel/locking/lockdep.c:#RCU-list_traversed_in_non-reader_section | 0 | 22 |
| kernel/kprobes.c:#RCU-list_traversed_in_non-reader_section | 0 | 12 |
+--------------------------------------------------------------------+----------+------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <lkp(a)intel.com>
[ 0.842475] WARNING: suspicious RCU usage
[ 0.842475] 5.9.0-rc7-00001-gd6667394fb52e #2 Not tainted
[ 0.842475] -----------------------------
[ 0.842475] kernel/locking/lockdep.c:869 RCU-list traversed in non-reader section!!
[ 0.842475]
[ 0.842475] other info that might help us debug this:
[ 0.842475]
[ 0.842475] RCU used illegally from offline CPU!
[ 0.842475] rcu_scheduler_active = 1, debug_locks = 1
[ 0.842475] no locks held by swapper/1/0.
[ 0.842475]
[ 0.842475] stack backtrace:
[ 0.842475] CPU: 1 PID: 0 Comm: swapper/1 Not tainted 5.9.0-rc7-00001-gd6667394fb52e #2
[ 0.842475] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
[ 0.842475] Call Trace:
[ 0.842475] dump_stack+0x6d/0x8e
[ 0.842475] lockdep_rcu_suspicious+0xbb/0xc4
[ 0.842475] register_lock_class.cold+0x3b/0x56
[ 0.842475] __lock_acquire+0x46/0xa30
[ 0.842475] lock_acquire+0x9f/0x3a0
[ 0.842475] ? __debug_object_init+0x60/0x530
[ 0.842475] ? find_held_lock+0x24/0x80
[ 0.842475] ? lock_release+0xa8/0x2c0
[ 0.842475] _raw_spin_lock_irqsave+0x36/0x60
[ 0.842475] ? __debug_object_init+0x60/0x530
[ 0.842475] __debug_object_init+0x60/0x530
[ 0.842475] ? mce_notify_irq+0x50/0x50
[ 0.842475] debug_object_init+0x1a/0x20
[ 0.842475] init_timer_key+0x1f/0x160
[ 0.842475] mcheck_cpu_init+0x1ea/0x4f0
[ 0.842475] identify_cpu+0x409/0x600
[ 0.842475] identify_secondary_cpu+0x17/0xb0
[ 0.842475] smp_store_cpu_info+0x3c/0x50
[ 0.842475] start_secondary+0x43/0x100
[ 0.842475] startup_32_smp+0x15f/0x170
[ 0.842475] smpboot: CPU 1 Converting physical 0 to logical die 1
[ 0.842475]
[ 0.842475] =============================
[ 0.842475] WARNING: suspicious RCU usage
[ 0.842475] 5.9.0-rc7-00001-gd6667394fb52e #2 Not tainted
[ 0.842475] -----------------------------
[ 0.842475] kernel/locking/lockdep.c:3130 RCU-list traversed in non-reader section!!
[ 0.842475]
[ 0.842475] other info that might help us debug this:
[ 0.842475]
[ 0.842475] RCU used illegally from offline CPU!
[ 0.842475] rcu_scheduler_active = 1, debug_locks = 1
[ 0.842475] no locks held by swapper/1/0.
[ 0.842475]
[ 0.842475] stack backtrace:
[ 0.842475] CPU: 1 PID: 0 Comm: swapper/1 Not tainted 5.9.0-rc7-00001-gd6667394fb52e #2
[ 0.842475] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
[ 0.842475] Call Trace:
[ 0.842475]
[ 0.842475] =============================
[ 0.842475] WARNING: suspicious RCU usage
[ 0.842475] 5.9.0-rc7-00001-gd6667394fb52e #2 Not tainted
[ 0.842475] -----------------------------
[ 0.842475] kernel/kprobes.c:299 RCU-list traversed in non-reader section!!
[ 0.842475]
[ 0.842475] other info that might help us debug this:
[ 0.842475]
[ 0.842475] RCU used illegally from offline CPU!
[ 0.842475] rcu_scheduler_active = 1, debug_locks = 1
[ 0.842475] no locks held by swapper/1/0.
[ 0.842475]
[ 0.842475] stack backtrace:
[ 0.842475] CPU: 1 PID: 0 Comm: swapper/1 Not tainted 5.9.0-rc7-00001-gd6667394fb52e #2
[ 0.842475] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
[ 0.842475] Call Trace:
[ 0.842475] dump_stack+0x6d/0x8e
[ 0.842475] lockdep_rcu_suspicious+0xbb/0xc4
[ 0.842475] __is_insn_slot_addr+0x148/0x150
[ 0.842475] kernel_text_address+0xe4/0x110
[ 0.842475] ? get_stack_info+0x2c/0x138
[ 0.842475] __kernel_text_address+0x10/0x40
[ 0.842475] show_trace_log_lvl+0x13c/0x22e
[ 0.842475] ? dump_stack+0x6d/0x8e
[ 0.842475] show_stack+0x2e/0x34
[ 0.842475] dump_stack+0x6d/0x8e
[ 0.842475] lockdep_rcu_suspicious+0xbb/0xc4
[ 0.842475] validate_chain.cold+0x13f/0x178
[ 0.842475] __lock_acquire+0x48d/0xa30
[ 0.842475] lock_acquire+0x9f/0x3a0
[ 0.842475] ? vprintk_emit+0x5f/0x360
[ 0.842475] ? _raw_spin_lock+0x15/0x40
[ 0.842475] ? trace_preempt_off+0x23/0x100
[ 0.842475] _raw_spin_lock+0x2a/0x40
[ 0.842475] ? vprintk_emit+0x5f/0x360
[ 0.842475] vprintk_emit+0x5f/0x360
[ 0.842475] vprintk_default+0x17/0x20
[ 0.842475] vprintk_func+0x4f/0xc4
[ 0.842475] printk+0x13/0x15
[ 0.842475] kvm_register_clock+0x48/0x50
[ 0.842475] kvm_setup_secondary_clock+0xd/0x10
[ 0.842475] start_secondary+0x26/0x100
[ 0.842475] startup_32_smp+0x15f/0x170
[ 0.842475] dump_stack+0x6d/0x8e
[ 0.842475] lockdep_rcu_suspicious+0xbb/0xc4
[ 0.842475] validate_chain.cold+0x13f/0x178
[ 0.842475] __lock_acquire+0x48d/0xa30
[ 0.842475] lock_acquire+0x9f/0x3a0
[ 0.842475] ? vprintk_emit+0x5f/0x360
[ 0.842475] ? _raw_spin_lock+0x15/0x40
[ 0.842475] ? trace_preempt_off+0x23/0x100
[ 0.842475] _raw_spin_lock+0x2a/0x40
[ 0.842475] ? vprintk_emit+0x5f/0x360
[ 0.842475] vprintk_emit+0x5f/0x360
[ 0.842475] vprintk_default+0x17/0x20
[ 0.842475] vprintk_func+0x4f/0xc4
[ 0.842475] printk+0x13/0x15
[ 0.842475] kvm_register_clock+0x48/0x50
[ 0.842475] kvm_setup_secondary_clock+0xd/0x10
[ 0.842475] start_secondary+0x26/0x100
[ 0.842475] startup_32_smp+0x15f/0x170
[ 2.313232] kvm-guest: stealtime: cpu 1, msr 2dc7cb40
[ 2.314204] smp: Brought up 1 node, 2 CPUs
[ 2.315049] smpboot: Max logical packages: 2
[ 2.315909] smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
[ 2.318017] devtmpfs: initialized
[ 2.325308] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
[ 2.326056] futex hash table entries: 512 (order: 3, 32768 bytes, linear)
[ 2.327637] xor: automatically using best checksumming function avx
[ 2.328021] prandom: seed boundary self test passed
[ 2.329937] prandom: 100 self tests passed
[ 2.330713] regulator-dummy: no parameters
[ 2.332183] NET: Registered protocol family 16
[ 2.336678] thermal_sys: Registered thermal governor 'fair_share'
[ 2.336683] thermal_sys: Registered thermal governor 'bang_bang'
[ 2.337006] thermal_sys: Registered thermal governor 'power_allocator'
[ 2.338166] cpuidle: using governor menu
[ 2.341107] ACPI: bus type PCI registered
[ 2.342845] PCI: PCI BIOS area is rw and x. Use pci=nobios if you want it NX.
[ 2.343000] PCI: PCI BIOS revision 2.10 entry at 0xfd1bc, last bus=0
[ 2.376292] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
[ 2.410101] raid6: sse2x2 gen() 5657 MB/s
[ 2.428057] raid6: sse2x2 xor() 3300 MB/s
[ 2.446008] raid6: sse2x1 gen() 4686 MB/s
[ 2.463005] raid6: sse2x1 xor() 2981 MB/s
[ 2.481008] raid6: sse1x2 gen() 2863 MB/s
[ 2.499012] raid6: sse1x1 gen() 2332 MB/s
[ 2.500004] raid6: using algorithm sse2x2 gen() 5657 MB/s
[ 2.501000] raid6: .... xor() 3300 MB/s, rmw enabled
[ 2.502001] raid6: using ssse3x1 recovery algorithm
[ 2.503880] ACPI: Added _OSI(Module Device)
[ 2.504017] ACPI: Added _OSI(Processor Device)
[ 2.504958] ACPI: Added _OSI(3.0 _SCP Extensions)
[ 2.505007] ACPI: Added _OSI(Processor Aggregator Device)
[ 2.506025] ACPI: Added _OSI(Linux-Dell-Video)
[ 2.507016] ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
[ 2.508015] ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
[ 2.540282] ACPI: 1 ACPI AML tables successfully acquired and loaded
[ 2.550405] ACPI: Interpreter enabled
[ 2.551178] ACPI: (supports S0 S3 S5)
[ 2.552034] ACPI: Using IOAPIC for interrupt routing
[ 2.553249] PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
To reproduce:
# build kernel
cd linux
cp config-5.9.0-rc7-00001-gd6667394fb52e .config
make HOSTCC=gcc-9 CC=gcc-9 ARCH=i386 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
lkp
[ep_insert()] 9ee1cc5666: WARNING:possible_recursive_locking_detected
by kernel test robot
Greetings,
FYI, we noticed the following commit (built with gcc-9):
commit: 9ee1cc56661640a2ace2f7d0b52dec56b3573c53 ("[RFC PATCH 20/27] ep_insert(): we only need tep->mtx around the insertion itself")
url: https://github.com/0day-ci/linux/commits/Al-Viro/epoll-switch-epitem-pwql...
base: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git 22fbc037cd32e4e6771d2271b565806cfb8c134c
in testcase: rcuperf
version:
with following parameters:
runtime: 300s
perf_type: srcud
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+---------------------------------------------+------------+------------+
| | 8d7a0bb9bb | 9ee1cc5666 |
+---------------------------------------------+------------+------------+
| boot_successes | 16 | 3 |
| boot_failures | 0 | 7 |
| WARNING:possible_recursive_locking_detected | 0 | 7 |
+---------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <lkp(a)intel.com>
[ 16.956306] WARNING: possible recursive locking detected
[ 16.957162] 5.9.0-rc7-00161-g9ee1cc5666164 #1 Not tainted
[ 16.958165] --------------------------------------------
[ 16.959083] systemd/1 is trying to acquire lock:
[ 16.959879] f6148434 (&ep->mtx){+.+.}-{3:3}, at: do_epoll_ctl+0x516/0x730
[ 16.961051]
[ 16.961051] but task is already holding lock:
[ 16.962000] f6148134 (&ep->mtx){+.+.}-{3:3}, at: epoll_mutex_lock+0xe/0x21
[ 16.963207]
[ 16.963207] other info that might help us debug this:
[ 16.964179] Possible unsafe locking scenario:
[ 16.964179]
[ 16.965106] CPU0
[ 16.965531] ----
[ 16.965976] lock(&ep->mtx);
[ 16.966487] lock(&ep->mtx);
[ 16.967005]
[ 16.967005] *** DEADLOCK ***
[ 16.967005]
[ 16.968023] May be due to missing lock nesting notation
[ 16.968023]
[ 16.969182] 2 locks held by systemd/1:
[ 16.969815] #0: c73ba5d4 (epmutex){+.+.}-{3:3}, at: epoll_mutex_lock+0xe/0x21
[ 16.971187] #1: f6148134 (&ep->mtx){+.+.}-{3:3}, at: epoll_mutex_lock+0xe/0x21
[ 16.972573]
[ 16.972573] stack backtrace:
[ 16.973177] CPU: 0 PID: 1 Comm: systemd Not tainted 5.9.0-rc7-00161-g9ee1cc5666164 #1
[ 16.974433] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
[ 16.975668] Call Trace:
[ 16.976068] dump_stack+0x1b/0x1d
[ 16.976560] validate_chain+0x4ad/0x4f5
[ 16.977206] __lock_acquire+0x726/0x7bf
[ 16.977882] lock_acquire+0x1f3/0x273
[ 16.978525] ? do_epoll_ctl+0x516/0x730
[ 16.979162] ? lock_is_held+0xb/0xd
[ 16.979754] __mutex_lock+0x72/0x1d6
[ 16.980362] ? do_epoll_ctl+0x516/0x730
[ 16.981009] ? rcu_read_lock_sched_held+0x20/0x37
[ 16.981802] ? kmem_cache_alloc+0xed/0x11c
[ 16.982461] mutex_lock_nested+0x14/0x18
[ 16.983148] ? do_epoll_ctl+0x516/0x730
[ 16.983821] do_epoll_ctl+0x516/0x730
[ 16.984457] __ia32_sys_epoll_ctl+0x2b/0x4f
[ 16.985178] do_int80_syscall_32+0x27/0x34
[ 16.985875] entry_INT80_32+0x113/0x113
[ 16.986505] EIP: 0xb7f5fa02
[ 16.987002] Code: 95 01 00 05 25 36 02 00 83 ec 14 8d 80 e8 99 ff ff 50 6a 02 e8 1f ff 00 00 c7 04 24 7f 00 00 00 e8 7e 87 01 00 66 90 90 cd 80 <c3> 8d b6 00 00 00 00 8d bc 27 00 00 00 00 8b 1c 24 c3 8d b6 00 00
[ 16.990076] EAX: ffffffda EBX: 00000004 ECX: 00000001 EDX: 00000009
[ 16.991027] ESI: bfcb6050 EDI: 00000001 EBP: 00000001 ESP: bfcb6030
[ 16.992078] DS: 007b ES: 007b FS: 0000 GS: 0033 SS: 007b EFLAGS: 00000246
[ OK ] Listening on udev Control Socket.
[ OK ] Listening on Syslog Socket.
[ OK ] Listening on RPCbind Server Activation Socket.
[ OK ] Reached target Swap.
[ OK ] Listening on Journal Socket (/dev/log).
[ OK ] Created slice User and Session Slice.
[ OK ] Listening on Journal Socket.
[ OK ] Listening on udev Kernel Socket.
[ OK ] Created slice System Slice.
[ 17.032119] _warn_unseeded_randomness: 199 callbacks suppressed
[ 17.032127] random: get_random_u32 called from copy_process+0x216/0x1349 with crng_init=1
Starting Load Kernel Modules...
Mounting RPC Pipe File System...
[ OK ] Reached target Slices.
[ 17.040073] random: get_random_u32 called from arch_pick_mmap_layout+0x4d/0xd7 with crng_init=1
[ 17.040077] random: get_random_u32 called from randomize_stack_top+0x1b/0x36 with crng_init=1
Starting Create Static Device Nodes in /dev...
[ OK ] Started Dispatch Password Requests to Console Directory Watch.
[ OK ] Listening on /dev/initctl Compatibility Named Pipe.
Starting Journal Service...
Starting Remount Root and Kernel File Systems...
[ OK ] Started Forward Password Requests to Wall Directory Watch.
[ OK ] Reached target Encrypted Volumes.
[ OK ] Reached target Paths.
[ OK ] Created slice system-getty.slice.
[ OK ] Mounted RPC Pipe File System.
[ OK ] Started Load Kernel Modules.
[ OK ] Started Create Static Device Nodes in /dev.
[ OK ] Started Remount Root and Kernel File Systems.
Starting Load/Save Random Seed...
Starting udev Coldplug all Devices...
Starting udev Kernel Device Manager...
[ OK ] Reached target Local File Systems (Pre).
[ OK ] Reached target Local File Systems.
Starting Preprocess NFS configuration...
Mounting Configuration File System...
Mounting FUSE Control File System...
Starting Apply Kernel Variables...
[ OK ] Mounted Configuration File System.
[ OK ] Mounted FUSE Control File System.
[ OK ] Started udev Kernel Device Manager.
[ OK ] Started Load/Save Random Seed.
[ OK ] Started Preprocess NFS configuration.
[ OK ] Started Apply Kernel Variables.
Starting Raise network interfaces...
[ OK ] Reached target NFS client services.
[ OK ] Started Journal Service.
Starting Flush Journal to Persistent Storage...
[ OK ] Started Flush Journal to Persistent Storage.
Starting Create Volatile Files and Directories...
[ OK ] Started Raise network interfaces.
To reproduce:
# build kernel
cd linux
cp config-5.9.0-rc7-00161-g9ee1cc5666164 .config
make HOSTCC=gcc-9 CC=gcc-9 ARCH=i386 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
lkp
[ext4] 061113efe9: fio.write_iops 364.6% improvement
by kernel test robot
Greetings,
FYI, we noticed a 364.6% improvement of fio.write_iops due to commit:
commit: 061113efe99b24ac026db5aa5a72844e16318bd7 ("ext4: optimize file overwrites")
https://git.kernel.org/cgit/linux/kernel/git/tytso/ext4.git dev
in testcase: fio-basic
on test machine: 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 256G memory
with following parameters:
disk: 2pmem
fs: ext4
mount_option: dax
runtime: 200s
nr_task: 50%
time_based: tb
rw: randwrite
bs: 4k
ioengine: sync
test_size: 200G
cpufreq_governor: performance
ucode: 0x5002f01
test-description: Fio is a tool that will spawn a number of threads or processes doing a particular type of I/O action as specified by the user.
test-url: https://github.com/axboe/fio
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
bs/compiler/cpufreq_governor/disk/fs/ioengine/kconfig/mount_option/nr_task/rootfs/runtime/rw/tbox_group/test_size/testcase/time_based/ucode:
4k/gcc-9/performance/2pmem/ext4/sync/x86_64-rhel-8.3/dax/50%/debian-10.4-x86_64-20200603.cgz/200s/randwrite/lkp-csl-2sp6/200G/fio-basic/tb/0x5002f01
commit:
9ffd5728cc ("ext4: remove unused including <linux/version.h>")
061113efe9 ("ext4: optimize file overwrites")
9ffd5728cca71e4f 061113efe99b24ac026db5aa5a7
---------------- ---------------------------
%stddev %change %stddev
\ | \
0.16 ± 33% -0.2 0.01 fio.latency_100us%
48.50 ± 31% -47.2 1.28 ± 43% fio.latency_20us%
0.01 +32.7 32.76 ± 51% fio.latency_2us%
0.21 ± 89% +47.9 48.10 ± 17% fio.latency_4us%
25.01 ± 23% -24.6 0.37 ± 51% fio.latency_50us%
15563 ± 4% -10.4% 13952 ± 4% fio.time.involuntary_context_switches
13539 ± 5% -13.9% 11659 ± 9% fio.time.minor_page_faults
9206 -11.5% 8144 ± 3% fio.time.system_time
316.40 ± 5% +334.8% 1375 ± 18% fio.time.user_time
24203 -7.2% 22455 fio.time.voluntary_context_switches
5.93e+08 ± 2% +364.6% 2.755e+09 ± 18% fio.workload
11581 ± 2% +364.6% 53803 ± 18% fio.write_bw_MBps
25088 ± 8% -80.7% 4832 ± 24% fio.write_clat_90%_us
28480 ± 8% -78.7% 6072 ± 23% fio.write_clat_95%_us
37248 ± 6% -65.5% 12848 ± 21% fio.write_clat_99%_us
15694 ± 3% -80.2% 3100 ± 22% fio.write_clat_mean_us
9722 ± 15% -72.8% 2648 ± 18% fio.write_clat_stddev
2964935 ± 2% +364.6% 13773690 ± 18% fio.write_iops
1858 +4.2% 1936 vmstat.system.cs
47.96 -11.3% 42.52 ± 3% iostat.cpu.system
1.69 ± 5% +321.6% 7.11 ± 18% iostat.cpu.user
1.918e+09 ± 36% +226.2% 6.257e+09 ± 41% cpuidle.C1E.time
10452864 ± 3% +42.6% 14909339 ± 25% cpuidle.C1E.usage
7.753e+09 ± 8% -55.6% 3.44e+09 ± 74% cpuidle.C6.time
39387 ± 2% +15.2% 45385 meminfo.Active(anon)
7410 ± 10% +84.6% 13677 ± 4% meminfo.Dirty
122418 -9.6% 110640 meminfo.KReclaimable
122418 -9.6% 110640 meminfo.SReclaimable
0.01 ± 16% -0.0 0.00 ± 37% mpstat.cpu.all.iowait%
0.03 ± 3% +0.0 0.03 ± 4% mpstat.cpu.all.soft%
47.59 -5.5 42.07 ± 3% mpstat.cpu.all.sys%
1.70 ± 5% +5.5 7.17 ± 18% mpstat.cpu.all.usr%
741453 ± 7% -58.1% 310382 ± 30% numa-numastat.node0.local_node
756956 ± 6% -57.6% 321292 ± 33% numa-numastat.node0.numa_hit
247432 ± 24% +153.0% 626065 ± 15% numa-numastat.node1.local_node
263275 ± 19% +145.6% 646527 ± 17% numa-numastat.node1.numa_hit
3830 ± 5% -7.1% 3558 ± 2% slabinfo.dmaengine-unmap-16.active_objs
3830 ± 5% -7.1% 3558 ± 2% slabinfo.dmaengine-unmap-16.num_objs
421181 ± 5% -32.1% 286015 ± 7% slabinfo.ext4_extent_status.active_objs
6662 ± 6% -42.1% 3855 ± 8% slabinfo.ext4_extent_status.active_slabs
679573 ± 6% -42.1% 393340 ± 8% slabinfo.ext4_extent_status.num_objs
6662 ± 6% -42.1% 3855 ± 8% slabinfo.ext4_extent_status.num_slabs
9849 ± 2% +14.9% 11317 proc-vmstat.nr_active_anon
1812 ± 10% +87.2% 3392 ± 3% proc-vmstat.nr_dirty
2123 +1.3% 2151 proc-vmstat.nr_page_table_pages
30595 -9.6% 27662 proc-vmstat.nr_slab_reclaimable
9849 ± 2% +14.9% 11317 proc-vmstat.nr_zone_active_anon
1813 ± 10% +87.1% 3391 ± 3% proc-vmstat.nr_zone_write_pending
1064225 -5.0% 1011161 proc-vmstat.numa_hit
1032828 -5.1% 979762 proc-vmstat.numa_local
1104273 -5.0% 1049441 proc-vmstat.pgalloc_normal
898139 ± 14% -18.3% 733903 proc-vmstat.pgfree
6689 ± 9% +78.3% 11924 ± 15% numa-meminfo.node0.Dirty
1787289 -62.9% 662714 ± 3% numa-meminfo.node0.FilePages
1308617 ± 2% -88.8% 146900 ± 15% numa-meminfo.node0.Inactive
1308337 ± 2% -88.8% 146757 ± 15% numa-meminfo.node0.Inactive(anon)
1131509 -98.6% 16081 ± 14% numa-meminfo.node0.Mapped
4350653 -28.2% 3121909 ± 3% numa-meminfo.node0.MemUsed
7561 ± 3% -37.8% 4704 ± 12% numa-meminfo.node0.PageTables
1120696 -99.6% 4832 ± 79% numa-meminfo.node0.Shmem
37743 +15.4% 43552 numa-meminfo.node1.Active(anon)
539135 +209.5% 1668823 numa-meminfo.node1.FilePages
119869 ± 30% +969.4% 1281909 numa-meminfo.node1.Inactive
119766 ± 30% +970.2% 1281701 numa-meminfo.node1.Inactive(anon)
17210 ± 16% +6484.3% 1133150 numa-meminfo.node1.Mapped
2835108 ± 2% +43.3% 4061792 ± 2% numa-meminfo.node1.MemUsed
941.50 ± 19% +314.5% 3902 ± 14% numa-meminfo.node1.PageTables
44833 ± 10% +2502.5% 1166807 numa-meminfo.node1.Shmem
1601 ± 7% +85.5% 2970 ± 15% numa-vmstat.node0.nr_dirty
446809 -62.9% 165679 ± 3% numa-vmstat.node0.nr_file_pages
327071 ± 2% -88.8% 36681 ± 15% numa-vmstat.node0.nr_inactive_anon
282962 -98.5% 4157 ± 14% numa-vmstat.node0.nr_mapped
1890 ± 3% -37.9% 1174 ± 12% numa-vmstat.node0.nr_page_table_pages
280164 -99.6% 1208 ± 79% numa-vmstat.node0.nr_shmem
327071 ± 2% -88.8% 36681 ± 15% numa-vmstat.node0.nr_zone_inactive_anon
1601 ± 7% +85.5% 2970 ± 15% numa-vmstat.node0.nr_zone_write_pending
1917342 -22.9% 1477585 ± 7% numa-vmstat.node0.numa_hit
1869725 ± 3% -21.9% 1461109 ± 7% numa-vmstat.node0.numa_local
9474 +15.3% 10920 numa-vmstat.node1.nr_active_anon
134747 +208.6% 415874 numa-vmstat.node1.nr_file_pages
29850 ± 30% +968.9% 319054 numa-vmstat.node1.nr_inactive_anon
4207 ± 16% +6601.2% 281918 numa-vmstat.node1.nr_mapped
235.25 ± 18% +312.8% 971.00 ± 14% numa-vmstat.node1.nr_page_table_pages
11172 ± 10% +2499.0% 290369 numa-vmstat.node1.nr_shmem
9474 +15.3% 10920 numa-vmstat.node1.nr_zone_active_anon
29850 ± 30% +968.9% 319054 numa-vmstat.node1.nr_zone_inactive_anon
687971 ± 3% +56.8% 1078550 ± 10% numa-vmstat.node1.numa_hit
557691 ± 11% +64.5% 917161 ± 12% numa-vmstat.node1.numa_local
28318 ± 22% +31.8% 37317 ± 16% sched_debug.cfs_rq:/.exec_clock.stddev
385.22 ± 8% +9.7% 422.63 ± 9% sched_debug.cfs_rq:/.load_avg.avg
59833 ± 14% +24.5% 74463 ± 10% sched_debug.cfs_rq:/.min_vruntime.avg
103784 ± 12% +19.3% 123853 ± 11% sched_debug.cfs_rq:/.min_vruntime.max
29685 ± 20% +28.7% 38216 ± 16% sched_debug.cfs_rq:/.min_vruntime.stddev
29685 ± 20% +28.8% 38221 ± 16% sched_debug.cfs_rq:/.spread0.stddev
673766 ± 5% +28.1% 862888 ± 2% sched_debug.cpu.avg_idle.avg
316122 ± 5% -41.5% 185066 ± 8% sched_debug.cpu.avg_idle.stddev
2928 ± 8% +10.9% 3248 ± 7% sched_debug.cpu.nr_switches.avg
18266 ± 24% +53.1% 27974 ± 16% sched_debug.cpu.nr_switches.max
2854 ± 20% +37.8% 3934 ± 15% sched_debug.cpu.nr_switches.stddev
28.58 ± 24% +59.5% 45.58 ± 9% sched_debug.cpu.nr_uninterruptible.max
14485 ± 33% +69.6% 24568 ± 21% sched_debug.cpu.sched_count.max
2206 ± 27% +57.9% 3483 ± 19% sched_debug.cpu.sched_count.stddev
605.82 ± 19% +25.4% 759.56 ± 15% sched_debug.cpu.sched_goidle.avg
7179 ± 33% +70.3% 12229 ± 21% sched_debug.cpu.sched_goidle.max
34.46 ± 10% -42.4% 19.83 ± 10% sched_debug.cpu.sched_goidle.min
1117 ± 27% +57.4% 1759 ± 19% sched_debug.cpu.sched_goidle.stddev
7717 ± 27% +53.7% 11861 ± 20% sched_debug.cpu.ttwu_count.max
1125 ± 25% +49.2% 1679 ± 18% sched_debug.cpu.ttwu_count.stddev
5318 ± 36% +75.8% 9350 ± 23% sched_debug.cpu.ttwu_local.max
83.33 +33.3% 111.06 ± 14% sched_debug.cpu.ttwu_local.min
750.49 ± 30% +61.3% 1210 ± 21% sched_debug.cpu.ttwu_local.stddev
14.37 -79.1% 3.01 ± 55% perf-stat.i.MPKI
4.045e+09 ± 2% +276.2% 1.522e+10 ± 17% perf-stat.i.branch-instructions
1.01 -0.1 0.93 ± 2% perf-stat.i.branch-miss-rate%
44395538 +205.6% 1.357e+08 ± 14% perf-stat.i.branch-misses
35.66 ± 4% -8.0 27.62 ± 24% perf-stat.i.cache-miss-rate%
1814 +4.4% 1894 perf-stat.i.context-switches
6.02 ± 2% -73.5% 1.59 ± 19% perf-stat.i.cpi
0.00 ± 27% -0.0 0.00 ± 37% perf-stat.i.dTLB-load-miss-rate%
7.109e+09 ± 2% +276.9% 2.679e+10 ± 17% perf-stat.i.dTLB-loads
134633 ± 23% +202.7% 407497 ± 13% perf-stat.i.dTLB-store-misses
4.516e+09 ± 2% +291.9% 1.77e+10 ± 17% perf-stat.i.dTLB-stores
88.04 +6.5 94.54 perf-stat.i.iTLB-load-miss-rate%
36917229 ± 11% +180.6% 1.036e+08 ± 13% perf-stat.i.iTLB-load-misses
4880282 +16.3% 5678123 perf-stat.i.iTLB-loads
2.308e+10 ± 2% +279.2% 8.752e+10 ± 17% perf-stat.i.instructions
635.06 ± 8% +34.3% 852.98 ± 7% perf-stat.i.instructions-per-iTLB-miss
0.17 ± 2% +276.4% 0.65 ± 17% perf-stat.i.ipc
0.39 ± 61% -66.9% 0.13 ± 5% perf-stat.i.metric.K/sec
166.97 ± 2% +274.2% 624.73 ± 17% perf-stat.i.metric.M/sec
84.49 +3.4 87.88 ± 2% perf-stat.i.node-load-miss-rate%
10345827 ± 2% -61.8% 3956301 ± 22% perf-stat.i.node-store-misses
13.96 -78.8% 2.96 ± 56% perf-stat.overall.MPKI
1.10 -0.2 0.90 ± 3% perf-stat.overall.branch-miss-rate%
35.81 ± 4% -7.7 28.08 ± 24% perf-stat.overall.cache-miss-rate%
5.84 ± 2% -72.9% 1.58 ± 19% perf-stat.overall.cpi
0.00 ± 36% -0.0 0.00 ± 49% perf-stat.overall.dTLB-load-miss-rate%
0.00 ± 24% -0.0 0.00 ± 8% perf-stat.overall.dTLB-store-miss-rate%
88.21 +6.5 94.73 perf-stat.overall.iTLB-load-miss-rate%
631.78 ± 8% +33.0% 840.08 ± 7% perf-stat.overall.instructions-per-iTLB-miss
0.17 ± 2% +281.6% 0.65 ± 17% perf-stat.overall.ipc
7818 -18.3% 6384 perf-stat.overall.path-length
4.025e+09 ± 2% +276.0% 1.514e+10 ± 17% perf-stat.ps.branch-instructions
44199217 +205.6% 1.351e+08 ± 14% perf-stat.ps.branch-misses
1805 +4.4% 1885 perf-stat.ps.context-switches
7.075e+09 ± 2% +276.8% 2.665e+10 ± 17% perf-stat.ps.dTLB-loads
134275 ± 23% +202.0% 405555 ± 13% perf-stat.ps.dTLB-store-misses
4.494e+09 ± 2% +291.8% 1.76e+10 ± 17% perf-stat.ps.dTLB-stores
36727460 ± 11% +180.5% 1.03e+08 ± 13% perf-stat.ps.iTLB-load-misses
4854565 +16.3% 5647668 perf-stat.ps.iTLB-loads
2.297e+10 ± 2% +279.1% 8.706e+10 ± 17% perf-stat.ps.instructions
10291805 ± 2% -61.7% 3937017 ± 22% perf-stat.ps.node-store-misses
4.636e+12 ± 2% +279.0% 1.757e+13 ± 17% perf-stat.total.instructions
3980 ± 8% +122.3% 8848 ± 35% softirqs.CPU1.RCU
3187 ± 10% +159.2% 8263 ± 34% softirqs.CPU12.RCU
3430 ± 16% +142.1% 8305 ± 35% softirqs.CPU13.RCU
3187 ± 3% +155.3% 8137 ± 26% softirqs.CPU14.RCU
3655 ± 7% +153.6% 9271 ± 21% softirqs.CPU2.RCU
3302 ± 4% +139.9% 7923 ± 24% softirqs.CPU23.RCU
3629 ± 12% +147.4% 8979 ± 12% softirqs.CPU24.RCU
5417 ± 25% +72.7% 9355 ± 15% softirqs.CPU25.RCU
3915 ± 19% +141.6% 9459 ± 17% softirqs.CPU26.RCU
4003 ± 13% +113.5% 8549 ± 21% softirqs.CPU27.RCU
3553 ± 14% +159.2% 9210 ± 19% softirqs.CPU28.RCU
3413 ± 10% +174.0% 9352 ± 16% softirqs.CPU29.RCU
3784 ± 8% +124.7% 8505 ± 34% softirqs.CPU3.RCU
3802 ± 7% +146.5% 9372 ± 16% softirqs.CPU30.RCU
3767 ± 8% +146.8% 9295 ± 16% softirqs.CPU31.RCU
3939 ± 9% +130.9% 9097 ± 20% softirqs.CPU32.RCU
3935 ± 10% +125.3% 8867 ± 24% softirqs.CPU33.RCU
3991 +131.0% 9218 ± 19% softirqs.CPU34.RCU
3792 ± 16% +132.0% 8798 ± 21% softirqs.CPU35.RCU
3089 ± 8% +235.6% 10367 ± 19% softirqs.CPU36.RCU
14423 ± 24% -77.4% 3261 ± 10% softirqs.CPU36.SCHED
3965 ± 15% +134.5% 9298 ± 19% softirqs.CPU37.RCU
3548 ± 15% +195.0% 10468 ± 21% softirqs.CPU38.RCU
3596 ± 14% +198.2% 10723 ± 21% softirqs.CPU39.RCU
3815 ± 6% +169.3% 10275 ± 19% softirqs.CPU40.RCU
3468 ± 6% +170.9% 9396 ± 27% softirqs.CPU41.RCU
3712 ± 12% +179.7% 10381 ± 18% softirqs.CPU42.RCU
3573 ± 10% +166.6% 9527 ± 21% softirqs.CPU43.RCU
3866 +164.1% 10210 ± 18% softirqs.CPU44.RCU
3902 ± 2% +142.7% 9470 ± 26% softirqs.CPU45.RCU
3529 ± 16% +157.6% 9091 ± 25% softirqs.CPU46.RCU
3669 ± 13% +162.9% 9647 ± 20% softirqs.CPU47.RCU
3060 ± 17% +211.6% 9538 ± 16% softirqs.CPU48.RCU
3776 ± 14% +160.0% 9815 ± 18% softirqs.CPU49.RCU
3525 ± 13% +143.2% 8574 ± 38% softirqs.CPU5.RCU
4257 ± 3% +135.3% 10014 ± 34% softirqs.CPU50.RCU
4323 ± 4% +124.0% 9684 ± 16% softirqs.CPU51.RCU
4218 ± 16% +156.8% 10830 ± 18% softirqs.CPU52.RCU
4049 ± 11% +148.7% 10072 ± 17% softirqs.CPU53.RCU
4376 ± 20% +153.4% 11088 ± 16% softirqs.CPU54.RCU
8728 ±116% -69.9% 2625 ± 5% softirqs.CPU54.SCHED
4460 +122.1% 9903 ± 19% softirqs.CPU55.RCU
4369 ± 7% +124.8% 9823 ± 17% softirqs.CPU56.RCU
5078 ± 45% +112.4% 10788 ± 18% softirqs.CPU58.RCU
4510 ± 15% +143.1% 10964 ± 18% softirqs.CPU59.RCU
4504 ± 8% +118.8% 9854 ± 17% softirqs.CPU60.RCU
4252 ± 4% +121.7% 9428 ± 17% softirqs.CPU61.RCU
4241 ± 2% +152.2% 10695 ± 18% softirqs.CPU62.RCU
4388 ± 6% +132.2% 10189 ± 25% softirqs.CPU63.RCU
4937 ± 13% +122.2% 10971 ± 20% softirqs.CPU64.RCU
4411 ± 2% +155.2% 11259 ± 20% softirqs.CPU65.RCU
4435 ± 10% +148.9% 11038 ± 20% softirqs.CPU66.RCU
4047 ± 17% +179.7% 11320 ± 20% softirqs.CPU67.RCU
4238 +166.5% 11294 ± 23% softirqs.CPU68.RCU
4241 ± 4% +170.5% 11473 ± 21% softirqs.CPU69.RCU
3567 ± 8% +132.1% 8278 ± 37% softirqs.CPU7.RCU
4301 ± 2% +156.8% 11043 ± 19% softirqs.CPU70.RCU
4360 ± 2% +132.7% 10146 ± 30% softirqs.CPU71.RCU
3460 ± 11% +149.4% 8631 ± 34% softirqs.CPU72.RCU
3248 ± 11% +143.1% 7897 ± 22% softirqs.CPU73.RCU
3484 ± 17% +140.0% 8362 ± 36% softirqs.CPU74.RCU
3223 ± 11% +167.8% 8634 ± 32% softirqs.CPU75.RCU
3120 ± 16% +130.4% 7189 ± 33% softirqs.CPU76.RCU
3493 ± 5% +135.0% 8210 ± 40% softirqs.CPU77.RCU
2908 ± 12% +165.5% 7720 ± 42% softirqs.CPU78.RCU
3348 ± 17% +135.8% 7897 ± 42% softirqs.CPU79.RCU
3380 ± 4% +146.4% 8329 ± 35% softirqs.CPU8.RCU
3136 ± 12% +131.3% 7255 ± 37% softirqs.CPU80.RCU
2708 ± 7% +164.8% 7172 ± 45% softirqs.CPU81.RCU
2888 ± 8% +163.8% 7619 ± 34% softirqs.CPU82.RCU
2795 ± 16% +171.9% 7599 ± 33% softirqs.CPU83.RCU
10700 ± 27% +144.6% 26171 ± 3% softirqs.CPU84.SCHED
2686 ± 8% +177.9% 7465 ± 37% softirqs.CPU85.RCU
2640 ± 10% +174.3% 7242 ± 35% softirqs.CPU88.RCU
2957 ± 15% +142.1% 7160 ± 36% softirqs.CPU91.RCU
354901 +137.8% 843880 ± 17% softirqs.RCU
34287 ± 53% +55.0% 53147 interrupts.CAL:Function_call_interrupts
84.00 ± 92% -90.2% 8.25 ± 68% interrupts.CPU10.RES:Rescheduling_interrupts
83.25 ± 72% -88.9% 9.25 ± 31% interrupts.CPU11.RES:Rescheduling_interrupts
64.50 ± 37% -56.6% 28.00 ± 33% interrupts.CPU14.TLB:TLB_shootdowns
69.00 ± 33% -79.7% 14.00 ± 44% interrupts.CPU18.TLB:TLB_shootdowns
262.00 ± 73% +105.9% 539.50 ± 16% interrupts.CPU24.CAL:Function_call_interrupts
210.75 ± 70% +127.9% 480.25 ± 3% interrupts.CPU25.CAL:Function_call_interrupts
12.75 ±133% +639.2% 94.25 ± 31% interrupts.CPU25.TLB:TLB_shootdowns
13.50 ± 69% +525.9% 84.50 ± 29% interrupts.CPU27.TLB:TLB_shootdowns
28.00 ± 53% +223.2% 90.50 ± 19% interrupts.CPU28.TLB:TLB_shootdowns
16.00 ± 62% +393.8% 79.00 ± 27% interrupts.CPU29.TLB:TLB_shootdowns
340.00 ± 70% +217.4% 1079 ± 39% interrupts.CPU3.CAL:Function_call_interrupts
242.25 ± 79% +96.4% 475.75 ± 4% interrupts.CPU30.CAL:Function_call_interrupts
12.75 ± 80% +609.8% 90.50 ± 38% interrupts.CPU30.TLB:TLB_shootdowns
251.25 ± 75% +95.9% 492.25 ± 7% interrupts.CPU31.CAL:Function_call_interrupts
23.75 ± 28% +263.2% 86.25 ± 33% interrupts.CPU31.TLB:TLB_shootdowns
174.50 ± 22% +62.2% 283.00 ± 47% interrupts.CPU32.RES:Rescheduling_interrupts
19.75 ± 63% +432.9% 105.25 ± 28% interrupts.CPU32.TLB:TLB_shootdowns
246.75 ± 77% +130.7% 569.25 ± 29% interrupts.CPU33.CAL:Function_call_interrupts
8.75 ± 79% +900.0% 87.50 ± 32% interrupts.CPU33.TLB:TLB_shootdowns
244.75 ± 79% +99.6% 488.50 ± 3% interrupts.CPU35.CAL:Function_call_interrupts
2580 ± 18% +166.2% 6870 ± 24% interrupts.CPU36.NMI:Non-maskable_interrupts
2580 ± 18% +166.2% 6870 ± 24% interrupts.CPU36.PMI:Performance_monitoring_interrupts
84.50 ± 53% +133.7% 197.50 interrupts.CPU36.RES:Rescheduling_interrupts
22.25 ± 65% +1022.5% 249.75 ±103% interrupts.CPU36.TLB:TLB_shootdowns
17.50 ± 66% +578.6% 118.75 ± 33% interrupts.CPU38.TLB:TLB_shootdowns
241.25 ± 80% +101.2% 485.50 ± 3% interrupts.CPU39.CAL:Function_call_interrupts
21.00 ± 58% +385.7% 102.00 ± 28% interrupts.CPU39.TLB:TLB_shootdowns
19.75 ± 35% +370.9% 93.00 ± 32% interrupts.CPU40.TLB:TLB_shootdowns
27.50 ± 48% +278.2% 104.00 ± 9% interrupts.CPU42.TLB:TLB_shootdowns
23.00 ± 32% +241.3% 78.50 ± 24% interrupts.CPU43.TLB:TLB_shootdowns
246.00 ± 79% +94.9% 479.50 interrupts.CPU44.CAL:Function_call_interrupts
21.75 ± 30% +388.5% 106.25 ± 21% interrupts.CPU44.TLB:TLB_shootdowns
24.75 ± 24% +168.7% 66.50 ± 51% interrupts.CPU45.TLB:TLB_shootdowns
29.00 ± 33% +200.0% 87.00 ± 39% interrupts.CPU46.TLB:TLB_shootdowns
22.00 ± 34% +352.3% 99.50 ± 24% interrupts.CPU47.TLB:TLB_shootdowns
71.50 ± 25% -62.9% 26.50 ± 61% interrupts.CPU48.TLB:TLB_shootdowns
72.00 ± 28% -70.5% 21.25 ± 60% interrupts.CPU54.TLB:TLB_shootdowns
70.50 ± 24% -59.6% 28.50 ± 43% interrupts.CPU62.TLB:TLB_shootdowns
255.75 ± 78% +89.6% 485.00 interrupts.CPU72.CAL:Function_call_interrupts
23.75 ± 24% +307.4% 96.75 ± 13% interrupts.CPU72.TLB:TLB_shootdowns
19.25 ± 61% +366.2% 89.75 ± 12% interrupts.CPU73.TLB:TLB_shootdowns
246.75 ± 79% +105.6% 507.25 ± 8% interrupts.CPU74.CAL:Function_call_interrupts
39.75 ± 73% +190.6% 115.50 ± 30% interrupts.CPU74.TLB:TLB_shootdowns
251.25 ± 79% +193.7% 738.00 ± 53% interrupts.CPU75.CAL:Function_call_interrupts
27.00 ± 33% +408.3% 137.25 ± 55% interrupts.CPU75.TLB:TLB_shootdowns
249.00 ± 78% +243.1% 854.25 ± 42% interrupts.CPU76.CAL:Function_call_interrupts
28.00 ± 24% +248.2% 97.50 ± 21% interrupts.CPU76.TLB:TLB_shootdowns
5753 ± 36% -28.5% 4112 ± 52% interrupts.CPU77.NMI:Non-maskable_interrupts
5753 ± 36% -28.5% 4112 ± 52% interrupts.CPU77.PMI:Performance_monitoring_interrupts
26.00 ± 27% +253.8% 92.00 ± 18% interrupts.CPU77.TLB:TLB_shootdowns
248.25 ± 77% +91.9% 476.50 interrupts.CPU78.CAL:Function_call_interrupts
21.75 ± 15% +302.3% 87.50 ± 27% interrupts.CPU78.TLB:TLB_shootdowns
22.25 ± 61% +370.8% 104.75 ± 18% interrupts.CPU79.TLB:TLB_shootdowns
30.50 ± 33% +235.2% 102.25 ± 8% interrupts.CPU80.TLB:TLB_shootdowns
251.25 ± 78% +200.6% 755.25 ± 60% interrupts.CPU81.CAL:Function_call_interrupts
30.25 ± 54% +237.2% 102.00 ± 24% interrupts.CPU81.TLB:TLB_shootdowns
32.25 ± 23% +186.8% 92.50 ± 13% interrupts.CPU82.TLB:TLB_shootdowns
26.00 ± 16% +243.3% 89.25 ± 23% interrupts.CPU83.TLB:TLB_shootdowns
7732 ± 2% -67.6% 2505 ± 24% interrupts.CPU84.NMI:Non-maskable_interrupts
7732 ± 2% -67.6% 2505 ± 24% interrupts.CPU84.PMI:Performance_monitoring_interrupts
20.50 ± 39% +393.9% 101.25 ± 16% interrupts.CPU84.TLB:TLB_shootdowns
32.00 ± 31% +176.6% 88.50 ± 22% interrupts.CPU85.TLB:TLB_shootdowns
253.00 ± 78% +105.9% 521.00 ± 7% interrupts.CPU86.CAL:Function_call_interrupts
82.75 ± 82% -81.0% 15.75 ± 89% interrupts.CPU86.RES:Rescheduling_interrupts
26.75 ± 31% +238.3% 90.50 ± 28% interrupts.CPU86.TLB:TLB_shootdowns
270.75 ± 80% +389.6% 1325 ± 62% interrupts.CPU87.CAL:Function_call_interrupts
29.75 ± 33% +236.1% 100.00 ± 31% interrupts.CPU87.TLB:TLB_shootdowns
28.50 ± 16% +249.1% 99.50 ± 13% interrupts.CPU88.TLB:TLB_shootdowns
28.00 ± 55% +225.9% 91.25 ± 23% interrupts.CPU89.TLB:TLB_shootdowns
411.00 ± 81% -86.0% 57.50 ±148% interrupts.CPU9.RES:Rescheduling_interrupts
257.00 ± 77% +125.5% 579.50 ± 22% interrupts.CPU90.CAL:Function_call_interrupts
34.00 ± 29% +197.8% 101.25 ± 14% interrupts.CPU90.TLB:TLB_shootdowns
256.00 ± 76% +218.2% 814.50 ± 40% interrupts.CPU91.CAL:Function_call_interrupts
30.75 ± 28% +216.3% 97.25 ± 12% interrupts.CPU91.TLB:TLB_shootdowns
252.00 ± 77% +197.7% 750.25 ± 34% interrupts.CPU92.CAL:Function_call_interrupts
30.50 ± 27% +245.1% 105.25 ± 9% interrupts.CPU92.TLB:TLB_shootdowns
10.50 ± 51% +690.5% 83.00 ± 87% interrupts.CPU93.RES:Rescheduling_interrupts
33.00 ± 24% +215.9% 104.25 ± 16% interrupts.CPU93.TLB:TLB_shootdowns
96.75 ± 7% +35.1% 130.75 interrupts.IWI:IRQ_work_interrupts
4367 ± 18% +69.1% 7387 ± 31% interrupts.TLB:TLB_shootdowns
41.48 ± 2% -41.5 0.00 perf-profile.calltrace.cycles-pp.__ext4_journal_start_sb.ext4_iomap_begin.iomap_apply.dax_iomap_rw.ext4_file_write_iter
39.78 -39.8 0.00 perf-profile.calltrace.cycles-pp.jbd2__journal_start.__ext4_journal_start_sb.ext4_iomap_begin.iomap_apply.dax_iomap_rw
39.54 -39.5 0.00 perf-profile.calltrace.cycles-pp.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_iomap_begin.iomap_apply
65.49 ± 2% -34.1 31.39 ± 9% perf-profile.calltrace.cycles-pp.ext4_iomap_begin.iomap_apply.dax_iomap_rw.ext4_file_write_iter.new_sync_write
69.67 -11.6 58.02 ± 7% perf-profile.calltrace.cycles-pp.iomap_apply.dax_iomap_rw.ext4_file_write_iter.new_sync_write.vfs_write
69.69 -11.5 58.19 ± 7% perf-profile.calltrace.cycles-pp.dax_iomap_rw.ext4_file_write_iter.new_sync_write.vfs_write.ksys_write
9.33 -9.3 0.00 perf-profile.calltrace.cycles-pp.add_transaction_credits.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_iomap_begin
8.94 ± 7% -8.9 0.00 perf-profile.calltrace.cycles-pp.__ext4_journal_stop.ext4_iomap_begin.iomap_apply.dax_iomap_rw.ext4_file_write_iter
8.64 ± 7% -8.6 0.00 perf-profile.calltrace.cycles-pp.jbd2_journal_stop.__ext4_journal_stop.ext4_iomap_begin.iomap_apply.dax_iomap_rw
71.18 -8.3 62.88 ± 3% perf-profile.calltrace.cycles-pp.ext4_file_write_iter.new_sync_write.vfs_write.ksys_write.do_syscall_64
71.23 -7.5 63.72 perf-profile.calltrace.cycles-pp.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
6.62 ± 3% -6.6 0.00 perf-profile.calltrace.cycles-pp._raw_read_lock.start_this_handle.jbd2__journal_start.__ext4_journal_start_sb.ext4_iomap_begin
6.41 ± 5% -6.4 0.00 perf-profile.calltrace.cycles-pp.stop_this_handle.jbd2_journal_stop.__ext4_journal_stop.ext4_iomap_begin.iomap_apply
1.06 ± 5% +1.0 2.03 ± 55% perf-profile.calltrace.cycles-pp.file_update_time.ext4_write_checks.ext4_file_write_iter.new_sync_write.vfs_write
0.00 +1.0 1.04 ± 52% perf-profile.calltrace.cycles-pp.ext4_inode_block_valid.__check_block_validity.ext4_map_blocks.ext4_iomap_begin.iomap_apply
0.00 +1.4 1.45 ± 53% perf-profile.calltrace.cycles-pp.ksys_lseek.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.5 1.48 ± 9% perf-profile.calltrace.cycles-pp.ext4_es_lookup_extent.ext4_map_blocks.ext4_iomap_begin.iomap_apply.dax_iomap_rw
0.00 +1.6 1.64 ± 28% perf-profile.calltrace.cycles-pp.__check_block_validity.ext4_map_blocks.ext4_iomap_begin.iomap_apply.dax_iomap_rw
1.13 ± 6% +1.7 2.86 ± 67% perf-profile.calltrace.cycles-pp.ext4_write_checks.ext4_file_write_iter.new_sync_write.vfs_write.ksys_write
26.94 ± 2% +2.0 28.91 ± 2% perf-profile.calltrace.cycles-pp.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
26.94 ± 2% +2.0 28.91 ± 2% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry.start_secondary
26.94 ± 2% +2.0 28.91 ± 2% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
26.94 ± 2% +2.0 28.91 ± 2% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
26.94 ± 2% +2.0 28.91 ± 2% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
27.09 ± 3% +2.3 29.41 ± 3% perf-profile.calltrace.cycles-pp.secondary_startup_64
27.08 ± 3% +2.3 29.41 ± 3% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
0.00 +3.3 3.33 ± 11% perf-profile.calltrace.cycles-pp.ext4_map_blocks.ext4_iomap_begin.iomap_apply.dax_iomap_rw.ext4_file_write_iter
1.64 ± 8% +10.0 11.63 ± 25% perf-profile.calltrace.cycles-pp.__srcu_read_unlock.dax_iomap_actor.iomap_apply.dax_iomap_rw.ext4_file_write_iter
2.18 ± 15% +10.9 13.09 ± 31% perf-profile.calltrace.cycles-pp.__copy_user_nocache.__copy_user_flushcache._copy_from_iter_flushcache.dax_iomap_actor.iomap_apply
2.19 ± 15% +11.0 13.16 ± 31% perf-profile.calltrace.cycles-pp.__copy_user_flushcache._copy_from_iter_flushcache.dax_iomap_actor.iomap_apply.dax_iomap_rw
2.23 ± 14% +11.2 13.41 ± 31% perf-profile.calltrace.cycles-pp._copy_from_iter_flushcache.dax_iomap_actor.iomap_apply.dax_iomap_rw.ext4_file_write_iter
14.12 +11.6 25.71 ± 10% perf-profile.calltrace.cycles-pp.jbd2_transaction_committed.ext4_set_iomap.ext4_iomap_begin.iomap_apply.dax_iomap_rw
7.77 ± 4% +12.4 20.20 ± 10% perf-profile.calltrace.cycles-pp._raw_read_lock.jbd2_transaction_committed.ext4_set_iomap.ext4_iomap_begin.iomap_apply
14.31 +13.3 27.60 ± 10% perf-profile.calltrace.cycles-pp.ext4_set_iomap.ext4_iomap_begin.iomap_apply.dax_iomap_rw.ext4_file_write_iter
4.04 ± 6% +21.9 25.93 ± 22% perf-profile.calltrace.cycles-pp.dax_iomap_actor.iomap_apply.dax_iomap_rw.ext4_file_write_iter.new_sync_write
42.24 ± 2% -41.5 0.77 ± 6% perf-profile.children.cycles-pp.__ext4_journal_start_sb
40.51 -39.8 0.73 ± 5% perf-profile.children.cycles-pp.jbd2__journal_start
40.28 -39.6 0.72 ± 5% perf-profile.children.cycles-pp.start_this_handle
65.50 ± 2% -34.1 31.40 ± 9% perf-profile.children.cycles-pp.ext4_iomap_begin
69.67 -11.6 58.03 ± 7% perf-profile.children.cycles-pp.iomap_apply
69.70 -11.5 58.20 ± 7% perf-profile.children.cycles-pp.dax_iomap_rw
9.50 -9.3 0.18 ± 6% perf-profile.children.cycles-pp.add_transaction_credits
9.02 ± 7% -9.0 0.04 ± 57% perf-profile.children.cycles-pp.__ext4_journal_stop
8.72 ± 7% -8.7 0.04 ± 58% perf-profile.children.cycles-pp.jbd2_journal_stop
71.18 -8.3 62.91 ± 3% perf-profile.children.cycles-pp.ext4_file_write_iter
71.25 -7.5 63.75 perf-profile.children.cycles-pp.new_sync_write
6.43 ± 5% -6.4 0.00 perf-profile.children.cycles-pp.stop_this_handle
71.51 -5.8 65.70 perf-profile.children.cycles-pp.vfs_write
71.58 -5.5 66.11 perf-profile.children.cycles-pp.ksys_write
71.93 -3.8 68.15 perf-profile.children.cycles-pp.do_syscall_64
72.05 -3.2 68.81 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
0.08 +0.0 0.10 ± 4% perf-profile.children.cycles-pp.scheduler_tick
0.05 +0.0 0.07 ± 10% perf-profile.children.cycles-pp.task_tick_fair
0.06 ± 6% +0.0 0.09 ± 14% perf-profile.children.cycles-pp.ext4_reserve_inode_write
0.00 +0.1 0.06 ± 11% perf-profile.children.cycles-pp.rcu_all_qs
0.15 ± 8% +0.1 0.21 ± 12% perf-profile.children.cycles-pp.__ext4_mark_inode_dirty
0.00 +0.1 0.07 ± 17% perf-profile.children.cycles-pp.rw_verify_area
0.00 +0.1 0.08 ± 24% perf-profile.children.cycles-pp.pmem_dax_direct_access
0.04 ± 57% +0.1 0.12 ± 18% perf-profile.children.cycles-pp._cond_resched
0.00 +0.1 0.08 ± 8% perf-profile.children.cycles-pp.ktime_get_coarse_real_ts64
0.00 +0.1 0.09 ± 26% perf-profile.children.cycles-pp.generic_file_llseek_size
0.00 +0.1 0.09 ± 20% perf-profile.children.cycles-pp.file_modified
0.00 +0.1 0.09 ± 15% perf-profile.children.cycles-pp.aa_file_perm
0.00 +0.1 0.10 ± 37% perf-profile.children.cycles-pp.apparmor_file_permission
0.00 +0.1 0.11 ± 20% perf-profile.children.cycles-pp.__pmem_direct_access
0.03 ±100% +0.1 0.14 ± 28% perf-profile.children.cycles-pp.__might_sleep
0.04 ± 57% +0.1 0.16 ± 28% perf-profile.children.cycles-pp.___might_sleep
0.00 +0.1 0.15 ± 20% perf-profile.children.cycles-pp.__x86_indirect_thunk_rax
0.03 ±100% +0.2 0.21 ± 22% perf-profile.children.cycles-pp.up_write
0.00 +0.2 0.19 ±102% perf-profile.children.cycles-pp.timestamp_truncate
0.01 ±173% +0.2 0.21 ± 22% perf-profile.children.cycles-pp.exit_to_user_mode_prepare
0.03 ±100% +0.2 0.22 ± 20% perf-profile.children.cycles-pp.dax_direct_access
0.00 +0.2 0.22 ±134% perf-profile.children.cycles-pp.__sb_end_write
0.05 ± 9% +0.2 0.29 ± 18% perf-profile.children.cycles-pp.__srcu_read_lock
0.06 ± 7% +0.3 0.32 ± 21% perf-profile.children.cycles-pp.syscall_exit_to_user_mode
0.04 ± 58% +0.3 0.34 ± 54% perf-profile.children.cycles-pp.current_time
0.07 ± 11% +0.3 0.39 ± 22% perf-profile.children.cycles-pp.down_write
0.00 +0.3 0.32 ± 83% perf-profile.children.cycles-pp.__fsnotify_parent
0.09 ± 7% +0.3 0.42 ± 19% perf-profile.children.cycles-pp.common_file_perm
0.07 ± 6% +0.3 0.40 ± 22% perf-profile.children.cycles-pp.syscall_enter_from_user_mode
0.10 ± 8% +0.4 0.55 ± 21% perf-profile.children.cycles-pp.__fget_light
0.11 ± 9% +0.5 0.58 ± 21% perf-profile.children.cycles-pp.security_file_permission
0.05 ± 9% +0.5 0.52 ± 46% perf-profile.children.cycles-pp.__sb_start_write
0.06 ± 28% +0.5 0.57 ± 15% perf-profile.children.cycles-pp.percpu_counter_add_batch
0.16 ± 30% +0.5 0.68 ± 20% perf-profile.children.cycles-pp.entry_SYSCALL_64
0.12 ± 6% +0.5 0.67 ± 20% perf-profile.children.cycles-pp.__fdget_pos
0.00 +0.5 0.55 ±154% perf-profile.children.cycles-pp.generic_write_check_limits
0.00 +0.6 0.64 ±129% perf-profile.children.cycles-pp.generic_write_checks
0.01 ±173% +0.7 0.70 ±117% perf-profile.children.cycles-pp.ext4_generic_write_checks
0.10 ± 10% +0.8 0.91 ± 97% perf-profile.children.cycles-pp.ext4_llseek
0.19 ± 6% +0.8 1.00 ± 20% perf-profile.children.cycles-pp.syscall_return_via_sysret
0.12 ± 42% +0.9 1.04 ± 52% perf-profile.children.cycles-pp.ext4_inode_block_valid
1.06 ± 4% +1.0 2.04 ± 55% perf-profile.children.cycles-pp.file_update_time
0.21 ± 8% +1.3 1.46 ± 52% perf-profile.children.cycles-pp.ksys_lseek
0.21 ± 10% +1.3 1.50 ± 9% perf-profile.children.cycles-pp.ext4_es_lookup_extent
0.17 ± 22% +1.5 1.65 ± 28% perf-profile.children.cycles-pp.__check_block_validity
1.13 ± 6% +1.7 2.87 ± 67% perf-profile.children.cycles-pp.ext4_write_checks
26.94 ± 2% +2.0 28.91 ± 2% perf-profile.children.cycles-pp.start_secondary
27.09 ± 3% +2.3 29.41 ± 3% perf-profile.children.cycles-pp.secondary_startup_64
27.09 ± 3% +2.3 29.41 ± 3% perf-profile.children.cycles-pp.cpu_startup_entry
27.09 ± 3% +2.3 29.41 ± 3% perf-profile.children.cycles-pp.do_idle
27.09 ± 3% +2.3 29.41 ± 3% perf-profile.children.cycles-pp.cpuidle_enter
27.09 ± 3% +2.3 29.41 ± 3% perf-profile.children.cycles-pp.cpuidle_enter_state
27.09 ± 3% +2.3 29.41 ± 3% perf-profile.children.cycles-pp.intel_idle
0.41 ± 11% +2.9 3.36 ± 11% perf-profile.children.cycles-pp.ext4_map_blocks
14.60 ± 2% +6.1 20.66 ± 10% perf-profile.children.cycles-pp._raw_read_lock
1.64 ± 8% +10.0 11.64 ± 25% perf-profile.children.cycles-pp.__srcu_read_unlock
2.18 ± 15% +10.9 13.12 ± 31% perf-profile.children.cycles-pp.__copy_user_nocache
2.19 ± 15% +11.0 13.17 ± 31% perf-profile.children.cycles-pp.__copy_user_flushcache
2.24 ± 15% +11.2 13.41 ± 31% perf-profile.children.cycles-pp._copy_from_iter_flushcache
14.13 +11.6 25.73 ± 10% perf-profile.children.cycles-pp.jbd2_transaction_committed
14.31 +13.3 27.62 ± 10% perf-profile.children.cycles-pp.ext4_set_iomap
4.04 ± 6% +21.9 25.97 ± 22% perf-profile.children.cycles-pp.dax_iomap_actor
23.93 -23.5 0.42 ± 6% perf-profile.self.cycles-pp.start_this_handle
9.46 -9.3 0.18 ± 6% perf-profile.self.cycles-pp.add_transaction_credits
6.40 ± 5% -6.4 0.00 perf-profile.self.cycles-pp.stop_this_handle
0.00 +0.1 0.08 ± 19% perf-profile.self.cycles-pp.do_syscall_64
0.00 +0.1 0.08 ± 24% perf-profile.self.cycles-pp.current_time
0.00 +0.1 0.08 ± 8% perf-profile.self.cycles-pp.ktime_get_coarse_real_ts64
0.00 +0.1 0.08 ± 17% perf-profile.self.cycles-pp.aa_file_perm
0.00 +0.1 0.08 ± 24% perf-profile.self.cycles-pp.generic_file_llseek_size
0.00 +0.1 0.09 ± 40% perf-profile.self.cycles-pp.apparmor_file_permission
0.00 +0.1 0.09 ± 23% perf-profile.self.cycles-pp.generic_write_checks
0.00 +0.1 0.10 ± 18% perf-profile.self.cycles-pp.__x86_indirect_thunk_rax
0.01 ±173% +0.1 0.12 ± 30% perf-profile.self.cycles-pp.__might_sleep
0.00 +0.1 0.11 ± 20% perf-profile.self.cycles-pp.__pmem_direct_access
0.00 +0.1 0.11 ± 20% perf-profile.self.cycles-pp.ksys_lseek
0.00 +0.1 0.11 ± 19% perf-profile.self.cycles-pp.ksys_write
0.00 +0.1 0.12 ± 18% perf-profile.self.cycles-pp.syscall_exit_to_user_mode
0.00 +0.1 0.13 ± 22% perf-profile.self.cycles-pp.__fdget_pos
0.03 ±100% +0.1 0.16 ± 28% perf-profile.self.cycles-pp.___might_sleep
0.00 +0.2 0.16 ± 22% perf-profile.self.cycles-pp.dax_iomap_rw
0.00 +0.2 0.18 ±110% perf-profile.self.cycles-pp.timestamp_truncate
0.00 +0.2 0.18 ± 24% perf-profile.self.cycles-pp.exit_to_user_mode_prepare
0.03 ±100% +0.2 0.21 ± 22% perf-profile.self.cycles-pp.up_write
0.03 ±100% +0.2 0.22 ± 22% perf-profile.self.cycles-pp.down_write
0.00 +0.2 0.22 ± 28% perf-profile.self.cycles-pp.ext4_map_blocks
0.00 +0.2 0.22 ±133% perf-profile.self.cycles-pp.__sb_end_write
0.05 ± 9% +0.2 0.29 ± 20% perf-profile.self.cycles-pp.dax_iomap_actor
0.04 ± 58% +0.2 0.28 ± 17% perf-profile.self.cycles-pp.__srcu_read_lock
0.00 +0.2 0.24 ± 25% perf-profile.self.cycles-pp._copy_from_iter_flushcache
0.03 ±100% +0.2 0.27 ± 27% perf-profile.self.cycles-pp.vfs_write
0.08 ± 5% +0.2 0.33 ± 20% perf-profile.self.cycles-pp.common_file_perm
0.07 ± 10% +0.3 0.36 ± 21% perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.00 +0.3 0.30 ± 95% perf-profile.self.cycles-pp.__sb_start_write
0.00 +0.3 0.31 ± 88% perf-profile.self.cycles-pp.__fsnotify_parent
0.07 ± 7% +0.3 0.39 ± 22% perf-profile.self.cycles-pp.syscall_enter_from_user_mode
0.10 ± 8% +0.3 0.43 ± 26% perf-profile.self.cycles-pp.ext4_iomap_begin
0.10 ± 4% +0.4 0.53 ± 20% perf-profile.self.cycles-pp.__fget_light
0.04 ± 63% +0.5 0.53 ± 15% perf-profile.self.cycles-pp.percpu_counter_add_batch
0.11 ± 4% +0.5 0.62 ± 21% perf-profile.self.cycles-pp.iomap_apply
0.07 ± 10% +0.5 0.61 ± 19% perf-profile.self.cycles-pp.ext4_es_lookup_extent
0.00 +0.5 0.55 ±154% perf-profile.self.cycles-pp.generic_write_check_limits
0.13 ± 6% +0.6 0.68 ± 20% perf-profile.self.cycles-pp.entry_SYSCALL_64
0.04 ± 60% +0.6 0.60 ± 21% perf-profile.self.cycles-pp.__check_block_validity
0.00 +0.7 0.66 ±136% perf-profile.self.cycles-pp.file_update_time
0.01 ±173% +0.8 0.79 ±116% perf-profile.self.cycles-pp.new_sync_write
0.10 ± 8% +0.8 0.89 ± 99% perf-profile.self.cycles-pp.ext4_llseek
0.19 ± 6% +0.8 1.00 ± 20% perf-profile.self.cycles-pp.syscall_return_via_sysret
0.11 ± 39% +0.9 1.04 ± 52% perf-profile.self.cycles-pp.ext4_inode_block_valid
0.23 ± 18% +1.0 1.26 ± 38% perf-profile.self.cycles-pp.ext4_file_write_iter
0.18 ± 16% +1.7 1.88 ± 67% perf-profile.self.cycles-pp.ext4_set_iomap
27.09 ± 3% +2.3 29.41 ± 3% perf-profile.self.cycles-pp.intel_idle
14.53 ± 2% +6.0 20.54 ± 10% perf-profile.self.cycles-pp._raw_read_lock
1.63 ± 8% +10.0 11.58 ± 25% perf-profile.self.cycles-pp.__srcu_read_unlock
2.18 ± 15% +10.9 13.05 ± 31% perf-profile.self.cycles-pp.__copy_user_nocache
fio.write_bw_MBps
70000 +-------------------------------------------------------------------+
| O |
60000 |-O O |
| O O |
| O O O |
50000 |-+ O O O O O O O O O |
| O O |
40000 |-+ O |
| |
30000 |-+ |
| |
| |
20000 |-+ |
| .+. |
10000 +-------------------------------------------------------------------+
fio.write_iops
1.8e+07 +-----------------------------------------------------------------+
| O |
1.6e+07 |-O O |
1.4e+07 |-+ O O |
| O O O |
1.2e+07 |-+ O O O O O O O O O |
| O O |
1e+07 |-+ O |
| |
8e+06 |-+ |
6e+06 |-+ |
| |
4e+06 |-+ |
|.+.+.+.+.+.+.+.++.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.++.+.+.+.+.+.+.+.|
2e+06 +-----------------------------------------------------------------+
fio.write_clat_mean_us
18000 +-------------------------------------------------------------------+
| .+. .+. .+. .+. +.|
16000 |.+.+ +.+.+.+.+ +.+.+.+.+.+.+.+.+.+.+.+ +.+.+.+ + +.+. + |
14000 |-+ + + + |
| + |
12000 |-+ |
| |
10000 |-+ |
| |
8000 |-+ |
6000 |-+ |
| |
4000 |-+ O O O |
| O O O O O O O O O O O O O O O |
2000 +-------------------------------------------------------------------+
fio.write_clat_stddev
16000 +-------------------------------------------------------------------+
| + |
14000 |-+ :: |
12000 |-+ : : + |
| .+ : :.+ : : |
10000 |.+. .+. + + : + + .+. .+. .+. .+. : : |
| + +. + + +.+.+. .+ +.+ + +.+ + +.+.+ +.+.|
8000 |-+ + + + + |
| + |
6000 |-+ |
4000 |-+ O |
| O O O O O O |
2000 |-O O O O O O O O O O O O O |
| |
0 +-------------------------------------------------------------------+
fio.write_clat_90__us
30000 +-------------------------------------------------------------------+
| .+. + .+. .+. +. |
25000 |.+.+ +. : : +.+.+ .+.+.+.+.+.+.+ +.+.+.+ +.+. + +.+ |
| +. : : : + .+ + +|
| + +. : + |
20000 |-+ + |
| |
15000 |-+ |
| |
10000 |-+ |
| |
| O O O O O O O O O |
5000 |-O O O O O O O O O O O |
| |
0 +-------------------------------------------------------------------+
fio.write_clat_95__us
35000 +-------------------------------------------------------------------+
| + |
30000 |.+.+.+.+ : : +.+. +.+.+.+.+.+.+.+.+.+.+.+.+.+. .+.+. |
| + : : : +. + +.+ +.|
25000 |-+ +.+ +. : +.+ |
| + |
20000 |-+ |
| |
15000 |-+ |
| |
10000 |-+ |
| O O O O O O O O O O O |
5000 |-O O O O O O O O O |
| |
0 +-------------------------------------------------------------------+
fio.write_clat_99__us
45000 +-------------------------------------------------------------------+
| + |
40000 |.+. .+. :: .+. +. .+.+. .+. .+. .+. .+. .+.+. |
35000 |-+ + +. : : + +. + + + + +.+ + +.+.+ +.|
| +. : : : +.+ |
30000 |-+ + +. : |
| + |
25000 |-+ |
| |
20000 |-+ |
15000 |-+ O O O |
| O O O O O O O O O O O O O O |
10000 |-O O O |
| |
5000 +-------------------------------------------------------------------+
fio.latency_4us_
70 +----------------------------------------------------------------------+
| O O |
60 |-+ O O |
| O O O O O O O O |
50 |-+ O O O O O |
| O |
40 |-+ O |
| O |
30 |-+ |
| |
20 |-+ |
| |
10 |-+ |
| |
0 +----------------------------------------------------------------------+
fio.latency_20us_
80 +----------------------------------------------------------------------+
| .+ +. +. |
70 |-+ +. : : + : + |
60 |-+ + : : : : : :|
|. .+ : : : + : +. .+.+. .+.. .+ :|
50 |-+.+.+ : : + : : + +.+.+ +.+.+ + : : |
| : : + : : .+.+ : + |
40 |-+ :: + +.+. +. + |
| : + |
30 |-+ + |
20 |-+ |
| |
10 |-+ |
| |
0 +----------------------------------------------------------------------+
fio.latency_50us_
35 +----------------------------------------------------------------------+
| + |
30 |-+ + : + + +. |
| : + : : +.+.. : + : + .+ : +.+ |
25 |.+. : + : : +.+. : +.+. .+. : +. .+. : +. + : : |
| + : : : : + : + + + + + :|
20 |-+ : : : : + : :|
| +..+ : : +.+ |
15 |-+ + : |
| : : |
10 |-+ :: |
| + |
5 |-+ |
| |
0 +----------------------------------------------------------------------+
fio.latency_100us_
0.3 +--------------------------------------------------------------------+
| |
0.25 |-+ + + + + + |
| :: : + :: :: : |
| : : :: + + + : : : : : : |
0.2 |-+. : : : : :+ + + +. : : +. : : : : |
|+ + + : : .+ : + + + + +. + + +.+.: : |
0.15 |-+ + : : + + : + + + +.+ |
| + : : : +. .+ :|
0.1 |-+ +: + : + :|
| + + : |
| + |
0.05 |-+ |
| |
0 +--------------------------------------------------------------------+
fio.workload
3.5e+09 +-----------------------------------------------------------------+
| O |
3e+09 |-O O |
| O O O |
| O O O O |
2.5e+09 |-+ O O O O O O O |
| O O |
2e+09 |-+ O |
| |
1.5e+09 |-+ |
| |
| |
1e+09 |-+ |
| +. .+. |
5e+08 +-----------------------------------------------------------------+
fio.time.user_time
1800 +--------------------------------------------------------------------+
| O O |
1600 |-+ O |
1400 |-+ O O |
| O O O O |
1200 |-+ O O O O O O O O |
| O O |
1000 |-+ O |
| |
800 |-+ |
600 |-+ |
| |
400 |-+ |
|.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+..+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.+.|
200 +--------------------------------------------------------------------+
fio.time.system_time
9400 +--------------------------------------------------------------------+
| .+. .+. |
9200 |.+.+.+.+.+.+.+ +.+.+.+.+.+.+.+..+.+.+.+.+.+.+.+.+.+.+.+ +.+.+.+.|
9000 |-+ |
| |
8800 |-+ |
| |
8600 |-+ |
| O O |
8400 |-+ O |
8200 |-+ O O O O O O O O O O O |
| O |
8000 |-+ O O O |
| O |
7800 +--------------------------------------------------------------------+
fio.time.voluntary_context_switches
25000 +-------------------------------------------------------------------+
| +. |
24500 |-+ .+.+.+ + + |
|.+.+. .+ + .+ + .+. .+. .+.+. .+.+. .+.+.+. |
| +.+ + + +.+ +.+ +.+.+ +.+ +.+.|
24000 |-+ |
| |
23500 |-+ |
| |
23000 |-+ |
| |
| O O O O |
22500 |-O O O |
| O O O O O O O O O O O O O |
22000 +-------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[mm] a4d63c3732: will-it-scale.per_process_ops -4.2% regression
by kernel test robot
Greetings,
FYI, we noticed a -4.2% regression of will-it-scale.per_process_ops due to commit:
commit: a4d63c3732f1a0c91abcf5b7f32b4ef7dcd82025 ("mm: do not rely on mm == current->mm in __get_user_pages_locked")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: will-it-scale
on test machine: 104 threads Skylake with 192G memory
with following parameters:
nr_task: 100%
mode: process
test: mmap2
cpufreq_governor: performance
ucode: 0x2006906
test-description: Will It Scale takes a testcase and runs it from 1 through to n parallel copies to see if the testcase will scale. It builds both a process and threads based test in order to see any differences between the two.
test-url: https://github.com/antonblanchard/will-it-scale
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
Details are below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
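In the comparison table that follows, the %change column is the relative delta between the two kernels' mean values. A minimal sketch of that arithmetic, using the will-it-scale.per_process_ops means from the table as sample inputs:

```python
def pct_change(base, new):
    # Relative delta of `new` vs `base`, in percent, as reported
    # in the %change column of LKP comparison tables.
    return (new - base) / base * 100.0

# will-it-scale.per_process_ops: v5.9-rc7 mean vs a4d63c3732 mean
delta = pct_change(220777, 211546)
print(round(delta, 1))  # → -4.2
```

The %stddev columns on either side give the run-to-run variation of each kernel's samples, which is how small deltas can be judged against noise.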
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
gcc-9/performance/x86_64-rhel-8.3/process/100%/debian-10.4-x86_64-20200603.cgz/lkp-skl-fpga01/mmap2/will-it-scale/0x2006906
commit:
v5.9-rc7
a4d63c3732 ("mm: do not rely on mm == current->mm in __get_user_pages_locked")
v5.9-rc7 a4d63c3732f1a0c91abcf5b7f32
---------------- ---------------------------
%stddev %change %stddev
\ | \
220777 -4.2% 211546 will-it-scale.per_process_ops
22960865 -4.2% 22000902 will-it-scale.workload
83839135 ± 55% +55.2% 1.301e+08 ± 26% cpuidle.C1E.time
211210 ± 30% +33.2% 281375 ± 21% cpuidle.C1E.usage
18140 ± 2% +39.2% 25254 ± 23% sched_debug.cpu.nr_switches.max
13463 ± 4% +14.0% 15345 ± 2% sched_debug.cpu.sched_count.max
81065 +1.1% 81959 proc-vmstat.nr_anon_pages
96.80 +3.0% 99.75 proc-vmstat.nr_anon_transparent_hugepages
83789 +1.1% 84672 proc-vmstat.nr_inactive_anon
83789 +1.1% 84672 proc-vmstat.nr_zone_inactive_anon
6517 ±144% -99.0% 64.50 ± 54% proc-vmstat.numa_hint_faults
6473 ± 65% -99.6% 26.50 ±121% proc-vmstat.numa_pages_migrated
6473 ± 65% -99.6% 26.50 ±121% proc-vmstat.pgmigrate_success
48103 ± 6% +8.8% 52321 ± 3% numa-meminfo.node0.KReclaimable
48103 ± 6% +8.8% 52321 ± 3% numa-meminfo.node0.SReclaimable
94453 ± 4% +18.1% 111554 ± 4% numa-meminfo.node0.SUnreclaim
142557 +15.0% 163876 ± 3% numa-meminfo.node0.Slab
50072 ± 4% -10.0% 45056 ± 3% numa-meminfo.node1.KReclaimable
50072 ± 4% -10.0% 45056 ± 3% numa-meminfo.node1.SReclaimable
98485 ± 4% -17.9% 80860 ± 4% numa-meminfo.node1.SUnreclaim
148557 -15.2% 125917 ± 3% numa-meminfo.node1.Slab
12025 ± 6% +8.8% 13079 ± 3% numa-vmstat.node0.nr_slab_reclaimable
23613 ± 4% +18.1% 27888 ± 4% numa-vmstat.node0.nr_slab_unreclaimable
828938 ± 7% +17.8% 976901 ± 7% numa-vmstat.node0.numa_hit
804550 ± 7% +18.8% 956028 ± 6% numa-vmstat.node0.numa_local
12516 ± 4% -10.0% 11263 ± 3% numa-vmstat.node1.nr_slab_reclaimable
24621 ± 4% -17.9% 20214 ± 4% numa-vmstat.node1.nr_slab_unreclaimable
940544 ± 7% -16.2% 788395 ± 9% numa-vmstat.node1.numa_hit
755086 ± 7% -20.6% 599240 ± 10% numa-vmstat.node1.numa_local
5.792e+10 -4.2% 5.55e+10 perf-stat.i.branch-instructions
0.48 -0.0 0.48 perf-stat.i.branch-miss-rate%
2.703e+08 -5.6% 2.552e+08 perf-stat.i.branch-misses
1.15 +4.5% 1.20 perf-stat.i.cpi
161.86 -1.2% 159.93 perf-stat.i.cpu-migrations
45949319 -4.5% 43875433 perf-stat.i.dTLB-load-misses
6.167e+10 -4.2% 5.906e+10 perf-stat.i.dTLB-loads
47578 ± 15% -8.9% 43356 perf-stat.i.dTLB-store-misses
2.787e+10 -4.2% 2.669e+10 perf-stat.i.dTLB-stores
82.07 ± 3% +13.8 95.89 ± 2% perf-stat.i.iTLB-load-miss-rate%
28493832 +51.9% 43294073 perf-stat.i.iTLB-load-misses
6078707 ± 18% -71.6% 1725940 ± 52% perf-stat.i.iTLB-loads
2.416e+11 -4.2% 2.314e+11 perf-stat.i.instructions
8812 -37.2% 5534 perf-stat.i.instructions-per-iTLB-miss
0.87 -4.3% 0.83 perf-stat.i.ipc
1418 -4.2% 1358 perf-stat.i.metric.M/sec
0.47 -0.0 0.46 perf-stat.overall.branch-miss-rate%
1.15 +4.6% 1.20 perf-stat.overall.cpi
82.47 ± 3% +13.7 96.20 ± 2% perf-stat.overall.iTLB-load-miss-rate%
8484 -37.0% 5347 perf-stat.overall.instructions-per-iTLB-miss
0.87 -4.4% 0.83 perf-stat.overall.ipc
5.773e+10 -4.2% 5.531e+10 perf-stat.ps.branch-instructions
2.693e+08 -5.6% 2.544e+08 perf-stat.ps.branch-misses
161.32 -1.2% 159.42 perf-stat.ps.cpu-migrations
45785331 -4.5% 43716262 perf-stat.ps.dTLB-load-misses
6.146e+10 -4.2% 5.886e+10 perf-stat.ps.dTLB-loads
47549 ± 15% -8.9% 43311 perf-stat.ps.dTLB-store-misses
2.778e+10 -4.3% 2.66e+10 perf-stat.ps.dTLB-stores
28376216 +52.0% 43133756 perf-stat.ps.iTLB-load-misses
6071510 ± 18% -71.6% 1726775 ± 52% perf-stat.ps.iTLB-loads
2.407e+11 -4.2% 2.306e+11 perf-stat.ps.instructions
7.276e+13 -4.2% 6.968e+13 perf-stat.total.instructions
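The instructions-per-iTLB-miss figure in the table above is simply the instruction count divided by the iTLB load-miss count; plugging in the per-second (`.ps`) rows reproduces the reported values to within rounding:

```python
# Values taken from the perf-stat.ps rows above (before / after the commit).
instructions = {"before": 2.407e11, "after": 2.306e11}
itlb_misses  = {"before": 28_376_216, "after": 43_133_756}

# instructions-per-iTLB-miss = instructions / iTLB-load-misses
ipim = {k: instructions[k] / itlb_misses[k] for k in instructions}
# ipim["before"] and ipim["after"] land near the reported 8484 and 5347,
# i.e. the ~37% drop comes straight from the +52% jump in iTLB misses.
```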
89619 +2.5% 91832 interrupts.CAL:Function_call_interrupts
311.60 +22.6% 382.00 ± 22% interrupts.CPU17.RES:Rescheduling_interrupts
6803 ± 6% +5.3% 7164 ± 5% interrupts.CPU19.NMI:Non-maskable_interrupts
6803 ± 6% +5.3% 7164 ± 5% interrupts.CPU19.PMI:Performance_monitoring_interrupts
6070 ± 20% +18.0% 7163 ± 5% interrupts.CPU2.NMI:Non-maskable_interrupts
6070 ± 20% +18.0% 7163 ± 5% interrupts.CPU2.PMI:Performance_monitoring_interrupts
6802 ± 6% +5.3% 7164 ± 5% interrupts.CPU20.NMI:Non-maskable_interrupts
6802 ± 6% +5.3% 7164 ± 5% interrupts.CPU20.PMI:Performance_monitoring_interrupts
6801 ± 6% +5.2% 7158 ± 5% interrupts.CPU24.NMI:Non-maskable_interrupts
6801 ± 6% +5.2% 7158 ± 5% interrupts.CPU24.PMI:Performance_monitoring_interrupts
7216 ± 6% +4.9% 7572 ± 5% interrupts.CPU26.NMI:Non-maskable_interrupts
7216 ± 6% +4.9% 7572 ± 5% interrupts.CPU26.PMI:Performance_monitoring_interrupts
7216 ± 6% +4.9% 7573 ± 5% interrupts.CPU27.NMI:Non-maskable_interrupts
7216 ± 6% +4.9% 7573 ± 5% interrupts.CPU27.PMI:Performance_monitoring_interrupts
7217 ± 6% +4.9% 7573 ± 5% interrupts.CPU28.NMI:Non-maskable_interrupts
7217 ± 6% +4.9% 7573 ± 5% interrupts.CPU28.PMI:Performance_monitoring_interrupts
7218 ± 6% +4.9% 7573 ± 5% interrupts.CPU29.NMI:Non-maskable_interrupts
7218 ± 6% +4.9% 7573 ± 5% interrupts.CPU29.PMI:Performance_monitoring_interrupts
6071 ± 20% +18.0% 7163 ± 5% interrupts.CPU3.NMI:Non-maskable_interrupts
6071 ± 20% +18.0% 7163 ± 5% interrupts.CPU3.PMI:Performance_monitoring_interrupts
7219 ± 6% +4.9% 7573 ± 5% interrupts.CPU30.NMI:Non-maskable_interrupts
7219 ± 6% +4.9% 7573 ± 5% interrupts.CPU30.PMI:Performance_monitoring_interrupts
7218 ± 6% +4.9% 7574 ± 5% interrupts.CPU31.NMI:Non-maskable_interrupts
7218 ± 6% +4.9% 7574 ± 5% interrupts.CPU31.PMI:Performance_monitoring_interrupts
7217 ± 6% +4.9% 7573 ± 5% interrupts.CPU37.NMI:Non-maskable_interrupts
7217 ± 6% +4.9% 7573 ± 5% interrupts.CPU37.PMI:Performance_monitoring_interrupts
6071 ± 20% +18.0% 7163 ± 5% interrupts.CPU4.NMI:Non-maskable_interrupts
6071 ± 20% +18.0% 7163 ± 5% interrupts.CPU4.PMI:Performance_monitoring_interrupts
308.60 +35.9% 419.25 ± 31% interrupts.CPU40.RES:Rescheduling_interrupts
6436 ± 20% +17.7% 7573 ± 5% interrupts.CPU46.NMI:Non-maskable_interrupts
6436 ± 20% +17.7% 7573 ± 5% interrupts.CPU46.PMI:Performance_monitoring_interrupts
6437 ± 20% +17.7% 7573 ± 5% interrupts.CPU47.NMI:Non-maskable_interrupts
6437 ± 20% +17.7% 7573 ± 5% interrupts.CPU47.PMI:Performance_monitoring_interrupts
6436 ± 20% +17.7% 7573 ± 5% interrupts.CPU48.NMI:Non-maskable_interrupts
6436 ± 20% +17.7% 7573 ± 5% interrupts.CPU48.PMI:Performance_monitoring_interrupts
6436 ± 20% +17.7% 7574 ± 5% interrupts.CPU49.NMI:Non-maskable_interrupts
6436 ± 20% +17.7% 7574 ± 5% interrupts.CPU49.PMI:Performance_monitoring_interrupts
6071 ± 20% +18.0% 7164 ± 5% interrupts.CPU5.NMI:Non-maskable_interrupts
6071 ± 20% +18.0% 7164 ± 5% interrupts.CPU5.PMI:Performance_monitoring_interrupts
6436 ± 20% +17.7% 7574 ± 5% interrupts.CPU50.NMI:Non-maskable_interrupts
6436 ± 20% +17.7% 7574 ± 5% interrupts.CPU50.PMI:Performance_monitoring_interrupts
6435 ± 20% +17.7% 7574 ± 5% interrupts.CPU51.NMI:Non-maskable_interrupts
6435 ± 20% +17.7% 7574 ± 5% interrupts.CPU51.PMI:Performance_monitoring_interrupts
6052 ± 20% +18.0% 7139 ± 5% interrupts.CPU52.NMI:Non-maskable_interrupts
6052 ± 20% +18.0% 7139 ± 5% interrupts.CPU52.PMI:Performance_monitoring_interrupts
6790 ± 6% +5.2% 7142 ± 5% interrupts.CPU54.NMI:Non-maskable_interrupts
6790 ± 6% +5.2% 7142 ± 5% interrupts.CPU54.PMI:Performance_monitoring_interrupts
965.80 ± 62% +59.0% 1535 ± 48% interrupts.CPU54.RES:Rescheduling_interrupts
6057 ± 20% +17.9% 7142 ± 5% interrupts.CPU55.NMI:Non-maskable_interrupts
6057 ± 20% +17.9% 7142 ± 5% interrupts.CPU55.PMI:Performance_monitoring_interrupts
566.20 ± 24% +100.8% 1136 ± 50% interrupts.CPU58.CAL:Function_call_interrupts
6056 ± 20% +17.9% 7138 ± 5% interrupts.CPU58.NMI:Non-maskable_interrupts
6056 ± 20% +17.9% 7138 ± 5% interrupts.CPU58.PMI:Performance_monitoring_interrupts
6055 ± 20% +17.9% 7139 ± 5% interrupts.CPU59.NMI:Non-maskable_interrupts
6055 ± 20% +17.9% 7139 ± 5% interrupts.CPU59.PMI:Performance_monitoring_interrupts
6789 ± 6% +5.1% 7138 ± 5% interrupts.CPU60.NMI:Non-maskable_interrupts
6789 ± 6% +5.1% 7138 ± 5% interrupts.CPU60.PMI:Performance_monitoring_interrupts
6786 ± 6% +5.2% 7138 ± 5% interrupts.CPU61.NMI:Non-maskable_interrupts
6786 ± 6% +5.2% 7138 ± 5% interrupts.CPU61.PMI:Performance_monitoring_interrupts
6787 ± 6% +5.2% 7139 ± 5% interrupts.CPU63.NMI:Non-maskable_interrupts
6787 ± 6% +5.2% 7139 ± 5% interrupts.CPU63.PMI:Performance_monitoring_interrupts
6053 ± 20% +17.9% 7136 ± 5% interrupts.CPU65.NMI:Non-maskable_interrupts
6053 ± 20% +17.9% 7136 ± 5% interrupts.CPU65.PMI:Performance_monitoring_interrupts
314.40 ± 3% +6.1% 333.50 ± 5% interrupts.CPU67.RES:Rescheduling_interrupts
5316 ± 25% +34.2% 7133 ± 5% interrupts.CPU76.NMI:Non-maskable_interrupts
5316 ± 25% +34.2% 7133 ± 5% interrupts.CPU76.PMI:Performance_monitoring_interrupts
6434 ± 20% +17.6% 7570 ± 5% interrupts.CPU78.NMI:Non-maskable_interrupts
6434 ± 20% +17.6% 7570 ± 5% interrupts.CPU78.PMI:Performance_monitoring_interrupts
6433 ± 20% +17.7% 7570 ± 5% interrupts.CPU79.NMI:Non-maskable_interrupts
6433 ± 20% +17.7% 7570 ± 5% interrupts.CPU79.PMI:Performance_monitoring_interrupts
453.80 +142.8% 1102 ± 58% interrupts.CPU8.CAL:Function_call_interrupts
6435 ± 20% +17.6% 7570 ± 5% interrupts.CPU80.NMI:Non-maskable_interrupts
6435 ± 20% +17.6% 7570 ± 5% interrupts.CPU80.PMI:Performance_monitoring_interrupts
6436 ± 20% +17.6% 7571 ± 5% interrupts.CPU81.NMI:Non-maskable_interrupts
6436 ± 20% +17.6% 7571 ± 5% interrupts.CPU81.PMI:Performance_monitoring_interrupts
6436 ± 20% +17.6% 7571 ± 5% interrupts.CPU82.NMI:Non-maskable_interrupts
6436 ± 20% +17.6% 7571 ± 5% interrupts.CPU82.PMI:Performance_monitoring_interrupts
7217 ± 6% +4.9% 7573 ± 5% interrupts.CPU88.NMI:Non-maskable_interrupts
7217 ± 6% +4.9% 7573 ± 5% interrupts.CPU88.PMI:Performance_monitoring_interrupts
7217 ± 6% +4.9% 7571 ± 5% interrupts.CPU89.NMI:Non-maskable_interrupts
7217 ± 6% +4.9% 7571 ± 5% interrupts.CPU89.PMI:Performance_monitoring_interrupts
7218 ± 6% +4.9% 7571 ± 5% interrupts.CPU90.NMI:Non-maskable_interrupts
7218 ± 6% +4.9% 7571 ± 5% interrupts.CPU90.PMI:Performance_monitoring_interrupts
7218 ± 6% +4.9% 7571 ± 5% interrupts.CPU91.NMI:Non-maskable_interrupts
7218 ± 6% +4.9% 7571 ± 5% interrupts.CPU91.PMI:Performance_monitoring_interrupts
7219 ± 6% +4.9% 7571 ± 5% interrupts.CPU92.NMI:Non-maskable_interrupts
7219 ± 6% +4.9% 7571 ± 5% interrupts.CPU92.PMI:Performance_monitoring_interrupts
7218 ± 6% +4.9% 7571 ± 5% interrupts.CPU93.NMI:Non-maskable_interrupts
7218 ± 6% +4.9% 7571 ± 5% interrupts.CPU93.PMI:Performance_monitoring_interrupts
25.63 -0.9 24.69 perf-profile.calltrace.cycles-pp.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
20.44 -0.8 19.59 perf-profile.calltrace.cycles-pp.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64
30.38 -0.8 29.63 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__mmap
4.49 -0.7 3.75 perf-profile.calltrace.cycles-pp._cond_resched.unmap_page_range.unmap_vmas.unmap_region.__do_munmap
29.85 -0.7 29.14 perf-profile.calltrace.cycles-pp.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe.__mmap
28.79 -0.7 28.09 perf-profile.calltrace.cycles-pp.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe.__mmap
34.89 -0.6 34.25 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__mmap
41.44 -0.6 40.82 perf-profile.calltrace.cycles-pp.__mmap
3.77 ± 4% -0.5 3.26 ± 3% perf-profile.calltrace.cycles-pp.vm_area_alloc.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
3.13 ± 5% -0.4 2.68 ± 3% perf-profile.calltrace.cycles-pp.kmem_cache_alloc.vm_area_alloc.mmap_region.do_mmap.vm_mmap_pgoff
0.53 ± 3% -0.3 0.25 ±100% perf-profile.calltrace.cycles-pp.cap_vm_enough_memory.security_vm_enough_memory_mm.mmap_region.do_mmap.vm_mmap_pgoff
2.64 -0.2 2.41 perf-profile.calltrace.cycles-pp.vma_link.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
1.26 -0.2 1.07 perf-profile.calltrace.cycles-pp.__vma_link_rb.vma_link.mmap_region.do_mmap.vm_mmap_pgoff
2.36 -0.2 2.18 perf-profile.calltrace.cycles-pp.rcu_all_qs._cond_resched.unmap_page_range.unmap_vmas.unmap_region
3.37 -0.2 3.20 perf-profile.calltrace.cycles-pp.d_path.perf_event_mmap.mmap_region.do_mmap.vm_mmap_pgoff
0.55 ± 3% -0.2 0.38 ± 57% perf-profile.calltrace.cycles-pp.down_write.vma_link.mmap_region.do_mmap.vm_mmap_pgoff
2.30 -0.2 2.14 perf-profile.calltrace.cycles-pp.zap_pte_range.unmap_page_range.unmap_vmas.unmap_region.__do_munmap
1.57 -0.2 1.42 perf-profile.calltrace.cycles-pp.free_pgtables.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap
3.03 -0.1 2.89 perf-profile.calltrace.cycles-pp.shmem_get_unmapped_area.get_unmapped_area.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
1.75 -0.1 1.61 perf-profile.calltrace.cycles-pp.prepend_path.d_path.perf_event_mmap.mmap_region.do_mmap
4.31 -0.1 4.17 perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.__munmap
1.21 -0.1 1.07 perf-profile.calltrace.cycles-pp.shmem_mmap.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
0.84 -0.1 0.74 perf-profile.calltrace.cycles-pp.touch_atime.shmem_mmap.mmap_region.do_mmap.vm_mmap_pgoff
3.08 -0.1 2.99 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64.__mmap
1.39 -0.1 1.30 perf-profile.calltrace.cycles-pp.find_vma.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
0.62 -0.1 0.53 perf-profile.calltrace.cycles-pp.atime_needs_update.touch_atime.shmem_mmap.mmap_region.do_mmap
0.98 -0.1 0.89 perf-profile.calltrace.cycles-pp.unlink_file_vma.free_pgtables.unmap_region.__do_munmap.__vm_munmap
3.88 -0.1 3.79 perf-profile.calltrace.cycles-pp.get_unmapped_area.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64
0.99 -0.1 0.92 perf-profile.calltrace.cycles-pp.kmem_cache_alloc_trace.perf_event_mmap.mmap_region.do_mmap.vm_mmap_pgoff
0.73 ± 2% -0.1 0.68 perf-profile.calltrace.cycles-pp.kfree.perf_event_mmap.mmap_region.do_mmap.vm_mmap_pgoff
2.97 -0.1 2.92 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64.__munmap
0.90 ± 3% -0.1 0.85 ± 2% perf-profile.calltrace.cycles-pp.strlen.perf_event_mmap.mmap_region.do_mmap.vm_mmap_pgoff
0.57 -0.0 0.52 ± 2% perf-profile.calltrace.cycles-pp.down_write.unlink_file_vma.free_pgtables.unmap_region.__do_munmap
1.06 -0.0 1.01 perf-profile.calltrace.cycles-pp.vm_unmapped_area.arch_get_unmapped_area_topdown.shmem_get_unmapped_area.get_unmapped_area.do_mmap
0.63 -0.0 0.59 perf-profile.calltrace.cycles-pp.common_file_perm.security_mmap_file.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64
1.09 -0.0 1.06 perf-profile.calltrace.cycles-pp.prepend.d_path.perf_event_mmap.mmap_region.do_mmap
0.72 ± 2% +0.0 0.76 perf-profile.calltrace.cycles-pp.security_vm_enough_memory_mm.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
2.71 +0.1 2.78 perf-profile.calltrace.cycles-pp.syscall_return_via_sysret.__munmap
9.31 +0.1 9.39 perf-profile.calltrace.cycles-pp.perf_event_mmap.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
2.73 +0.1 2.83 perf-profile.calltrace.cycles-pp.syscall_return_via_sysret.__mmap
4.27 +0.1 4.42 perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.__mmap
0.77 +0.2 0.95 ± 2% perf-profile.calltrace.cycles-pp.perf_event_mmap_output.perf_iterate_sb.perf_event_mmap.mmap_region.do_mmap
1.85 +0.2 2.03 perf-profile.calltrace.cycles-pp.security_mmap_file.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.83 +0.5 2.33 perf-profile.calltrace.cycles-pp.perf_iterate_sb.perf_event_mmap.mmap_region.do_mmap.vm_mmap_pgoff
52.13 +0.6 52.74 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__munmap
58.31 +0.6 58.94 perf-profile.calltrace.cycles-pp.__munmap
45.74 +0.7 46.43 perf-profile.calltrace.cycles-pp.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
46.83 +0.7 47.54 perf-profile.calltrace.cycles-pp.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
47.29 +0.7 48.01 perf-profile.calltrace.cycles-pp.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
47.59 +0.7 48.34 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
38.38 +1.0 39.38 perf-profile.calltrace.cycles-pp.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
6.69 +1.2 7.91 perf-profile.calltrace.cycles-pp.free_pgd_range.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap
25.68 -0.9 24.73 perf-profile.children.cycles-pp.do_mmap
20.58 -0.9 19.71 perf-profile.children.cycles-pp.mmap_region
29.90 -0.7 29.19 perf-profile.children.cycles-pp.ksys_mmap_pgoff
28.83 -0.7 28.13 perf-profile.children.cycles-pp.vm_mmap_pgoff
41.80 -0.6 41.15 perf-profile.children.cycles-pp.__mmap
3.77 ± 4% -0.5 3.26 ± 3% perf-profile.children.cycles-pp.vm_area_alloc
3.21 ± 5% -0.4 2.76 ± 3% perf-profile.children.cycles-pp.kmem_cache_alloc
4.97 -0.3 4.65 perf-profile.children.cycles-pp._cond_resched
2.68 -0.2 2.46 perf-profile.children.cycles-pp.vma_link
14.27 -0.2 14.08 perf-profile.children.cycles-pp.___might_sleep
1.26 -0.2 1.08 perf-profile.children.cycles-pp.__vma_link_rb
3.39 -0.2 3.22 perf-profile.children.cycles-pp.d_path
2.33 -0.2 2.17 perf-profile.children.cycles-pp.zap_pte_range
1.60 -0.2 1.45 perf-profile.children.cycles-pp.free_pgtables
3.04 -0.1 2.89 perf-profile.children.cycles-pp.shmem_get_unmapped_area
6.07 -0.1 5.92 perf-profile.children.cycles-pp.entry_SYSCALL_64
1.78 -0.1 1.64 perf-profile.children.cycles-pp.prepend_path
1.23 -0.1 1.10 perf-profile.children.cycles-pp.shmem_mmap
2.71 -0.1 2.59 perf-profile.children.cycles-pp.rcu_all_qs
1.47 -0.1 1.38 perf-profile.children.cycles-pp.find_vma
0.41 -0.1 0.32 perf-profile.children.cycles-pp.vma_set_page_prot
0.84 -0.1 0.74 perf-profile.children.cycles-pp.touch_atime
1.16 ± 2% -0.1 1.07 ± 2% perf-profile.children.cycles-pp.down_write
3.91 -0.1 3.82 perf-profile.children.cycles-pp.get_unmapped_area
0.99 -0.1 0.90 perf-profile.children.cycles-pp.unlink_file_vma
0.63 -0.1 0.54 perf-profile.children.cycles-pp.atime_needs_update
1.05 -0.1 0.97 perf-profile.children.cycles-pp.kmem_cache_alloc_trace
0.36 -0.1 0.29 ± 2% perf-profile.children.cycles-pp.apparmor_mmap_file
0.21 ± 3% -0.1 0.15 ± 10% perf-profile.children.cycles-pp.uprobe_mmap
0.29 ± 2% -0.1 0.23 perf-profile.children.cycles-pp.vma_merge
0.74 ± 2% -0.1 0.69 perf-profile.children.cycles-pp.kfree
0.36 -0.1 0.31 ± 8% perf-profile.children.cycles-pp.current_time
0.52 ± 2% -0.1 0.47 ± 5% perf-profile.children.cycles-pp.security_mmap_addr
0.31 -0.1 0.26 perf-profile.children.cycles-pp.sync_mm_rss
0.90 ± 3% -0.1 0.85 ± 2% perf-profile.children.cycles-pp.strlen
0.97 -0.0 0.92 perf-profile.children.cycles-pp.__might_sleep
0.33 ± 2% -0.0 0.29 ± 3% perf-profile.children.cycles-pp.__vm_enough_memory
1.07 -0.0 1.02 perf-profile.children.cycles-pp.vm_unmapped_area
0.39 -0.0 0.35 ± 2% perf-profile.children.cycles-pp.lru_add_drain
0.35 -0.0 0.30 ± 3% perf-profile.children.cycles-pp.cap_mmap_addr
0.64 -0.0 0.60 perf-profile.children.cycles-pp.common_file_perm
1.12 -0.0 1.08 perf-profile.children.cycles-pp.prepend
0.31 ± 3% -0.0 0.27 ± 5% perf-profile.children.cycles-pp.percpu_counter_add_batch
0.18 ± 2% -0.0 0.15 ± 5% perf-profile.children.cycles-pp.vm_pgprot_modify
0.38 -0.0 0.34 perf-profile.children.cycles-pp.obj_cgroup_charge
0.74 -0.0 0.71 perf-profile.children.cycles-pp.up_write
0.26 ± 6% -0.0 0.23 ± 8% perf-profile.children.cycles-pp.path_noexec
1.01 -0.0 0.98 perf-profile.children.cycles-pp.memcpy_erms
0.37 -0.0 0.34 perf-profile.children.cycles-pp.downgrade_write
0.31 ± 2% -0.0 0.28 ± 2% perf-profile.children.cycles-pp.lru_add_drain_cpu
0.18 ± 5% -0.0 0.15 perf-profile.children.cycles-pp.cap_capable
0.19 ± 3% -0.0 0.17 ± 3% perf-profile.children.cycles-pp.__x64_sys_mmap
0.32 -0.0 0.29 perf-profile.children.cycles-pp.tlb_gather_mmu
0.54 ± 2% -0.0 0.52 perf-profile.children.cycles-pp.cap_vm_enough_memory
0.24 ± 2% -0.0 0.21 ± 2% perf-profile.children.cycles-pp.vma_interval_tree_remove
0.22 ± 4% -0.0 0.20 ± 2% perf-profile.children.cycles-pp.unlink_anon_vmas
1.04 -0.0 1.02 perf-profile.children.cycles-pp.down_write_killable
0.45 ± 2% -0.0 0.43 perf-profile.children.cycles-pp.vmacache_find
0.14 ± 2% -0.0 0.12 ± 5% perf-profile.children.cycles-pp.timestamp_truncate
0.08 ± 5% -0.0 0.06 ± 6% perf-profile.children.cycles-pp.munmap@plt
0.43 ± 2% +0.0 0.45 perf-profile.children.cycles-pp.syscall_enter_from_user_mode
0.18 ± 3% +0.0 0.20 perf-profile.children.cycles-pp.tlb_flush_mmu
0.10 ± 6% +0.0 0.12 ± 8% perf-profile.children.cycles-pp.aa_file_perm
0.43 ± 3% +0.0 0.46 ± 2% perf-profile.children.cycles-pp.fput_many
0.05 ± 7% +0.0 0.09 ± 5% perf-profile.children.cycles-pp.__x86_indirect_thunk_r9
0.29 +0.0 0.33 perf-profile.children.cycles-pp.cap_mmap_file
0.19 ± 3% +0.0 0.23 ± 2% perf-profile.children.cycles-pp.userfaultfd_unmap_complete
0.73 ± 2% +0.0 0.78 perf-profile.children.cycles-pp.security_vm_enough_memory_mm
0.08 +0.1 0.13 ± 5% perf-profile.children.cycles-pp.get_align_mask
0.16 ± 2% +0.1 0.23 ± 3% perf-profile.children.cycles-pp.blocking_notifier_call_chain
0.08 ± 5% +0.1 0.15 ± 5% perf-profile.children.cycles-pp.__x86_retpoline_rbp
9.37 +0.1 9.45 perf-profile.children.cycles-pp.perf_event_mmap
0.38 ± 7% +0.1 0.46 ± 4% perf-profile.children.cycles-pp.exit_to_user_mode_prepare
0.42 +0.1 0.52 ± 2% perf-profile.children.cycles-pp.refill_obj_stock
6.10 +0.1 6.24 perf-profile.children.cycles-pp.syscall_return_via_sysret
0.33 ± 3% +0.2 0.49 perf-profile.children.cycles-pp.__x86_retpoline_r9
1.88 +0.2 2.05 perf-profile.children.cycles-pp.security_mmap_file
0.79 +0.2 0.96 ± 2% perf-profile.children.cycles-pp.perf_event_mmap_output
1.85 +0.5 2.35 perf-profile.children.cycles-pp.perf_iterate_sb
58.74 +0.6 59.34 perf-profile.children.cycles-pp.__munmap
45.84 +0.7 46.52 perf-profile.children.cycles-pp.__do_munmap
46.87 +0.7 47.58 perf-profile.children.cycles-pp.__vm_munmap
47.31 +0.7 48.03 perf-profile.children.cycles-pp.__x64_sys_munmap
38.46 +1.0 39.45 perf-profile.children.cycles-pp.unmap_region
6.70 +1.2 7.93 perf-profile.children.cycles-pp.free_pgd_range
12.02 -0.3 11.68 perf-profile.self.cycles-pp.___might_sleep
2.32 -0.2 2.09 perf-profile.self.cycles-pp._cond_resched
1.25 -0.2 1.07 perf-profile.self.cycles-pp.__vma_link_rb
0.90 ± 3% -0.1 0.78 ± 3% perf-profile.self.cycles-pp.shmem_get_unmapped_area
0.84 ± 2% -0.1 0.72 ± 6% perf-profile.self.cycles-pp.prepend_path
1.65 -0.1 1.54 perf-profile.self.cycles-pp.zap_pte_range
1.29 -0.1 1.19 perf-profile.self.cycles-pp.__do_munmap
5.36 -0.1 5.26 perf-profile.self.cycles-pp.entry_SYSCALL_64
0.47 ± 11% -0.1 0.38 ± 2% perf-profile.self.cycles-pp.get_obj_cgroup_from_current
1.45 -0.1 1.37 perf-profile.self.cycles-pp.perf_event_mmap
0.32 -0.1 0.25 perf-profile.self.cycles-pp.apparmor_mmap_file
0.28 ± 2% -0.1 0.21 perf-profile.self.cycles-pp.vma_merge
0.20 ± 3% -0.1 0.14 ± 11% perf-profile.self.cycles-pp.uprobe_mmap
0.54 -0.1 0.48 perf-profile.self.cycles-pp.common_file_perm
0.89 -0.1 0.82 perf-profile.self.cycles-pp.find_vma
2.04 -0.1 1.98 perf-profile.self.cycles-pp.rcu_all_qs
0.73 ± 2% -0.1 0.67 perf-profile.self.cycles-pp.kfree
0.53 -0.1 0.48 perf-profile.self.cycles-pp.vm_area_alloc
0.89 ± 3% -0.1 0.84 ± 2% perf-profile.self.cycles-pp.strlen
0.61 -0.0 0.56 perf-profile.self.cycles-pp.kmem_cache_alloc_trace
0.30 -0.0 0.26 perf-profile.self.cycles-pp.sync_mm_rss
0.89 -0.0 0.84 perf-profile.self.cycles-pp.__might_sleep
0.25 -0.0 0.20 ± 3% perf-profile.self.cycles-pp.__x64_sys_munmap
0.52 ± 3% -0.0 0.47 ± 3% perf-profile.self.cycles-pp.down_write
1.06 -0.0 1.01 perf-profile.self.cycles-pp.vm_unmapped_area
0.15 ± 2% -0.0 0.11 ± 4% perf-profile.self.cycles-pp.vma_set_page_prot
0.30 ± 2% -0.0 0.26 perf-profile.self.cycles-pp.cap_mmap_addr
0.29 ± 3% -0.0 0.25 ± 4% perf-profile.self.cycles-pp.percpu_counter_add_batch
0.17 ± 2% -0.0 0.14 ± 6% perf-profile.self.cycles-pp.vm_pgprot_modify
0.36 -0.0 0.33 perf-profile.self.cycles-pp.obj_cgroup_charge
0.72 -0.0 0.69 perf-profile.self.cycles-pp.up_write
0.30 -0.0 0.26 ± 3% perf-profile.self.cycles-pp.unmap_region
0.17 ± 5% -0.0 0.14 ± 3% perf-profile.self.cycles-pp.cap_capable
0.30 ± 2% -0.0 0.27 perf-profile.self.cycles-pp.lru_add_drain_cpu
0.35 -0.0 0.33 ± 2% perf-profile.self.cycles-pp.downgrade_write
0.44 -0.0 0.42 perf-profile.self.cycles-pp.entry_SYSCALL_64_after_hwframe
0.46 ± 2% -0.0 0.44 perf-profile.self.cycles-pp.down_write_killable
0.39 -0.0 0.36 ± 2% perf-profile.self.cycles-pp.shmem_mmap
0.31 ± 2% -0.0 0.29 ± 3% perf-profile.self.cycles-pp.tlb_gather_mmu
0.19 ± 2% -0.0 0.17 ± 4% perf-profile.self.cycles-pp.vma_interval_tree_remove
0.18 ± 2% -0.0 0.15 ± 5% perf-profile.self.cycles-pp.__x64_sys_mmap
0.14 -0.0 0.12 ± 13% perf-profile.self.cycles-pp.current_time
0.20 ± 2% -0.0 0.18 ± 4% perf-profile.self.cycles-pp.unlink_anon_vmas
0.13 ± 3% -0.0 0.11 ± 6% perf-profile.self.cycles-pp.timestamp_truncate
0.08 -0.0 0.07 ± 13% perf-profile.self.cycles-pp.ktime_get_coarse_real_ts64
0.13 ± 3% -0.0 0.12 ± 3% perf-profile.self.cycles-pp.unlink_file_vma
0.08 -0.0 0.07 ± 6% perf-profile.self.cycles-pp.lru_add_drain
0.09 -0.0 0.08 perf-profile.self.cycles-pp.__vma_link_file
0.33 ± 2% +0.0 0.35 ± 2% perf-profile.self.cycles-pp.vm_mmap_pgoff
0.12 ± 3% +0.0 0.14 perf-profile.self.cycles-pp.tlb_flush_mmu
0.39 ± 2% +0.0 0.41 ± 2% perf-profile.self.cycles-pp.syscall_enter_from_user_mode
0.09 ± 4% +0.0 0.11 ± 9% perf-profile.self.cycles-pp.aa_file_perm
0.24 ± 2% +0.0 0.27 perf-profile.self.cycles-pp.cap_mmap_file
0.21 ± 2% +0.0 0.26 perf-profile.self.cycles-pp.vma_link
0.06 +0.0 0.10 ± 4% perf-profile.self.cycles-pp.get_align_mask
0.96 +0.0 1.01 perf-profile.self.cycles-pp.do_mmap
0.17 ± 3% +0.0 0.22 perf-profile.self.cycles-pp.userfaultfd_unmap_complete
0.12 ± 3% +0.1 0.18 ± 2% perf-profile.self.cycles-pp.security_vm_enough_memory_mm
0.00 +0.1 0.06 ± 6% perf-profile.self.cycles-pp.__x86_indirect_thunk_r9
0.15 ± 3% +0.1 0.22 perf-profile.self.cycles-pp.blocking_notifier_call_chain
0.06 ± 6% +0.1 0.13 ± 5% perf-profile.self.cycles-pp.__x86_retpoline_rbp
0.36 ± 2% +0.1 0.45 perf-profile.self.cycles-pp.security_mmap_file
0.36 ± 8% +0.1 0.44 ± 5% perf-profile.self.cycles-pp.exit_to_user_mode_prepare
0.31 ± 3% +0.1 0.40 ± 2% perf-profile.self.cycles-pp.get_unmapped_area
0.40 +0.1 0.50 ± 2% perf-profile.self.cycles-pp.refill_obj_stock
0.30 ± 2% +0.1 0.43 perf-profile.self.cycles-pp.__x86_retpoline_r9
6.09 +0.1 6.22 perf-profile.self.cycles-pp.syscall_return_via_sysret
0.75 +0.1 0.89 perf-profile.self.cycles-pp.perf_event_mmap_output
1.40 +0.1 1.54 perf-profile.self.cycles-pp.mmap_region
1.01 ± 2% +0.3 1.28 perf-profile.self.cycles-pp.perf_iterate_sb
10.66 +0.8 11.42 perf-profile.self.cycles-pp.unmap_page_range
6.66 +1.2 7.88 perf-profile.self.cycles-pp.free_pgd_range
will-it-scale.per_process_ops
224000 +------------------------------------------------------------------+
| |
222000 |-+ .+..... |
| .... +......+...... |
220000 |.+ + |
| |
218000 |-+ |
| |
216000 |-+ |
| |
214000 |-+ |
| |
212000 |-+ O O O O O O |
| O O O |
210000 +------------------------------------------------------------------+
will-it-scale.workload
2.32e+07 +----------------------------------------------------------------+
| .+.... |
2.3e+07 |-.... . ...+..... |
|. +... + |
2.28e+07 |-+ |
| |
2.26e+07 |-+ |
| |
2.24e+07 |-+ |
| |
2.22e+07 |-+ |
| O O O |
2.2e+07 |-+ O O O O |
| O O |
2.18e+07 +----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[mm] 70e806e4e6: will-it-scale.per_process_ops 2.7% improvement
by kernel test robot
Greetings,
FYI, we noticed a 2.7% improvement in will-it-scale.per_process_ops due to commit:
commit: 70e806e4e645019102d0e09d4933654fb5fb58ce ("mm: Do early cow for pinned pages during fork() for ptes")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: will-it-scale
on test machine: 104 threads Skylake with 192G memory
with the following parameters:
nr_task: 100%
mode: process
test: mmap2
cpufreq_governor: performance
ucode: 0x2006906
test-description: Will It Scale takes a testcase and runs it from 1 through to n parallel copies to see if the testcase will scale. It builds both a process and threads based test in order to see any differences between the two.
test-url: https://github.com/antonblanchard/will-it-scale
In addition to that, the commit also has a significant impact on the following tests:
+------------------+---------------------------------------------------------------+
| testcase: change | will-it-scale: will-it-scale.per_process_ops 2.0% improvement |
| test machine | 104 threads Skylake with 192G memory |
| test parameters | cpufreq_governor=performance |
| | mode=process |
| | nr_task=100% |
| | test=mmap1 |
| | ucode=0x2006906 |
+------------------+---------------------------------------------------------------+
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
gcc-9/performance/x86_64-rhel-8.3/process/100%/debian-10.4-x86_64-20200603.cgz/lkp-skl-fpga01/mmap2/will-it-scale/0x2006906
commit:
7a4830c380 ("mm/fork: Pass new vma pointer into copy_page_range()")
70e806e4e6 ("mm: Do early cow for pinned pages during fork() for ptes")
7a4830c380f3a8b3 70e806e4e645019102d0e09d493
---------------- ---------------------------
%stddev %change %stddev
\ | \
215537 +2.7% 221349 will-it-scale.per_process_ops
22415915 +2.7% 23020376 will-it-scale.workload
839733 ± 3% -35.4% 542743 ± 17% cpuidle.C1.time
282931 ± 8% +13.9% 322181 ± 9% numa-numastat.node0.local_node
306399 ± 5% +8.4% 332275 ± 5% numa-numastat.node0.numa_hit
741.62 ± 7% +9.1% 809.08 ± 6% sched_debug.cfs_rq:/.util_avg.min
0.14 ± 7% +12.2% 0.16 ± 5% sched_debug.cpu.nr_running.stddev
795.75 ± 7% +35.8% 1080 ± 7% numa-vmstat.node0.nr_page_table_pages
11758 ± 8% +21.3% 14260 ± 4% numa-vmstat.node0.nr_slab_reclaimable
1282 ± 4% -21.9% 1002 ± 7% numa-vmstat.node1.nr_page_table_pages
12482 ± 7% -20.9% 9868 ± 8% numa-vmstat.node1.nr_slab_reclaimable
789100 ± 11% -19.3% 636572 ± 14% numa-vmstat.node1.numa_local
47033 ± 8% +21.3% 57044 ± 4% numa-meminfo.node0.KReclaimable
3183 ± 7% +35.8% 4323 ± 7% numa-meminfo.node0.PageTables
47033 ± 8% +21.3% 57044 ± 4% numa-meminfo.node0.SReclaimable
144339 ± 8% +12.7% 162692 ± 6% numa-meminfo.node0.Slab
49936 ± 7% -20.9% 39478 ± 8% numa-meminfo.node1.KReclaimable
5130 ± 4% -21.8% 4013 ± 7% numa-meminfo.node1.PageTables
49936 ± 7% -20.9% 39478 ± 8% numa-meminfo.node1.SReclaimable
567.00 ± 9% +181.8% 1597 ± 33% interrupts.CPU3.CAL:Function_call_interrupts
483.50 ± 8% +20.2% 581.25 ± 12% interrupts.CPU59.CAL:Function_call_interrupts
351.00 ± 12% +43.7% 504.25 ± 20% interrupts.CPU59.RES:Rescheduling_interrupts
322.00 ± 2% +30.7% 421.00 ± 16% interrupts.CPU60.RES:Rescheduling_interrupts
455.75 +17.8% 536.75 ± 17% interrupts.CPU61.CAL:Function_call_interrupts
316.50 +25.0% 395.75 ± 16% interrupts.CPU62.RES:Rescheduling_interrupts
462.75 ± 4% +4.9% 485.25 ± 5% interrupts.CPU72.CAL:Function_call_interrupts
994.00 ± 95% -67.9% 318.75 ± 3% interrupts.CPU98.RES:Rescheduling_interrupts
5.651e+10 +2.8% 5.809e+10 perf-stat.i.branch-instructions
0.48 +0.0 0.48 perf-stat.i.branch-miss-rate%
2.596e+08 +4.0% 2.699e+08 perf-stat.i.branch-misses
10.93 ± 6% -2.2 8.74 ± 4% perf-stat.i.cache-miss-rate%
1.18 -2.7% 1.15 perf-stat.i.cpi
44666128 +3.6% 46270200 perf-stat.i.dTLB-load-misses
6.013e+10 +2.8% 6.179e+10 perf-stat.i.dTLB-loads
42553 +14.9% 48901 ± 15% perf-stat.i.dTLB-store-misses
2.718e+10 +2.8% 2.793e+10 perf-stat.i.dTLB-stores
44533758 -36.9% 28097154 perf-stat.i.iTLB-load-misses
2.356e+11 +2.8% 2.422e+11 perf-stat.i.instructions
5549 +60.7% 8920 perf-stat.i.instructions-per-iTLB-miss
0.85 +2.8% 0.87 perf-stat.i.ipc
1382 +2.8% 1421 perf-stat.i.metric.M/sec
0.46 +0.0 0.46 perf-stat.overall.branch-miss-rate%
11.14 ± 10% -2.4 8.75 ± 4% perf-stat.overall.cache-miss-rate%
1.18 -2.7% 1.15 perf-stat.overall.cpi
0.00 +0.0 0.00 ± 15% perf-stat.overall.dTLB-store-miss-rate%
5292 +63.0% 8627 perf-stat.overall.instructions-per-iTLB-miss
0.85 +2.8% 0.87 perf-stat.overall.ipc
5.632e+10 +2.8% 5.789e+10 perf-stat.ps.branch-instructions
2.588e+08 +3.9% 2.69e+08 perf-stat.ps.branch-misses
44511150 +3.6% 46103111 perf-stat.ps.dTLB-load-misses
5.993e+10 +2.8% 6.158e+10 perf-stat.ps.dTLB-loads
42650 +14.6% 48890 ± 15% perf-stat.ps.dTLB-store-misses
2.709e+10 +2.8% 2.783e+10 perf-stat.ps.dTLB-stores
44371490 -36.9% 27977958 perf-stat.ps.iTLB-load-misses
2.348e+11 +2.8% 2.414e+11 perf-stat.ps.instructions
7.106e+13 +2.6% 7.29e+13 perf-stat.total.instructions
48.23 -0.9 47.34 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
39.18 -0.9 38.30 perf-profile.calltrace.cycles-pp.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
47.45 -0.9 46.59 perf-profile.calltrace.cycles-pp.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
47.90 -0.9 47.05 perf-profile.calltrace.cycles-pp.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
46.34 -0.8 45.51 perf-profile.calltrace.cycles-pp.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
52.66 -0.8 51.86 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__munmap
15.57 -0.7 14.88 perf-profile.calltrace.cycles-pp.___might_sleep.unmap_page_range.unmap_vmas.unmap_region.__do_munmap
29.18 -0.7 28.50 perf-profile.calltrace.cycles-pp.unmap_vmas.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap
58.71 -0.6 58.12 perf-profile.calltrace.cycles-pp.__munmap
26.14 -0.6 25.57 perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.unmap_region.__do_munmap.__vm_munmap
2.89 -0.3 2.54 perf-profile.calltrace.cycles-pp.vma_link.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
7.01 -0.2 6.77 perf-profile.calltrace.cycles-pp.free_pgd_range.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap
0.76 -0.1 0.65 perf-profile.calltrace.cycles-pp.__vma_rb_erase.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
0.66 -0.1 0.56 perf-profile.calltrace.cycles-pp.down_write.unlink_file_vma.free_pgtables.unmap_region.__do_munmap
3.21 -0.1 3.10 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64.__mmap
1.01 -0.1 0.94 perf-profile.calltrace.cycles-pp.memcpy_erms.prepend.d_path.perf_event_mmap.mmap_region
1.63 -0.1 1.58 perf-profile.calltrace.cycles-pp.prepend_path.d_path.perf_event_mmap.mmap_region.do_mmap
1.04 -0.0 0.99 perf-profile.calltrace.cycles-pp.unlink_file_vma.free_pgtables.unmap_region.__do_munmap.__vm_munmap
0.89 -0.0 0.85 perf-profile.calltrace.cycles-pp.prepend_name.prepend_path.d_path.perf_event_mmap.mmap_region
0.56 -0.0 0.53 ± 2% perf-profile.calltrace.cycles-pp.down_write.vma_link.mmap_region.do_mmap.vm_mmap_pgoff
1.21 -0.0 1.18 perf-profile.calltrace.cycles-pp.__vma_link_rb.vma_link.mmap_region.do_mmap.vm_mmap_pgoff
1.25 +0.0 1.28 perf-profile.calltrace.cycles-pp.prepend.d_path.perf_event_mmap.mmap_region.do_mmap
0.93 +0.0 0.98 perf-profile.calltrace.cycles-pp.strlen.perf_event_mmap.mmap_region.do_mmap.vm_mmap_pgoff
1.32 +0.1 1.38 perf-profile.calltrace.cycles-pp.find_vma.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
0.57 ± 2% +0.1 0.65 perf-profile.calltrace.cycles-pp.common_file_perm.security_mmap_file.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64
0.67 +0.1 0.76 perf-profile.calltrace.cycles-pp.kfree.perf_event_mmap.mmap_region.do_mmap.vm_mmap_pgoff
2.24 +0.1 2.35 perf-profile.calltrace.cycles-pp.rcu_all_qs._cond_resched.unmap_page_range.unmap_vmas.unmap_region
1.98 +0.1 2.10 perf-profile.calltrace.cycles-pp.arch_get_unmapped_area_topdown.shmem_get_unmapped_area.get_unmapped_area.do_mmap.vm_mmap_pgoff
25.35 +0.1 25.48 perf-profile.calltrace.cycles-pp.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.39 ± 57% +0.1 0.53 perf-profile.calltrace.cycles-pp.cap_vm_enough_memory.security_vm_enough_memory_mm.mmap_region.do_mmap.vm_mmap_pgoff
9.39 +0.2 9.54 perf-profile.calltrace.cycles-pp.perf_event_mmap.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
1.72 +0.2 1.88 perf-profile.calltrace.cycles-pp.security_mmap_file.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.38 ± 57% +0.2 0.53 ± 2% perf-profile.calltrace.cycles-pp.down_write_killable.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.38 ± 6% +0.2 2.56 ± 2% perf-profile.calltrace.cycles-pp.kmem_cache_free.remove_vma.__do_munmap.__vm_munmap.__x64_sys_munmap
2.67 +0.2 2.86 perf-profile.calltrace.cycles-pp.syscall_return_via_sysret.__mmap
2.58 +0.2 2.79 perf-profile.calltrace.cycles-pp.syscall_return_via_sysret.__munmap
2.88 ± 4% +0.2 3.09 perf-profile.calltrace.cycles-pp.kmem_cache_alloc.vm_area_alloc.mmap_region.do_mmap.vm_mmap_pgoff
0.82 +0.2 1.04 perf-profile.calltrace.cycles-pp.vm_unmapped_area.arch_get_unmapped_area_topdown.shmem_get_unmapped_area.get_unmapped_area.do_mmap
4.18 ± 5% +0.2 4.40 perf-profile.calltrace.cycles-pp._cond_resched.unmap_page_range.unmap_vmas.unmap_region.__do_munmap
3.49 ± 3% +0.2 3.73 perf-profile.calltrace.cycles-pp.vm_area_alloc.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
20.14 +0.3 20.46 perf-profile.calltrace.cycles-pp.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64
28.30 +0.4 28.71 perf-profile.calltrace.cycles-pp.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe.__mmap
29.38 +0.4 29.82 perf-profile.calltrace.cycles-pp.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe.__mmap
29.88 +0.5 30.34 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__mmap
2.16 +0.5 2.64 perf-profile.calltrace.cycles-pp.zap_pte_range.unmap_page_range.unmap_vmas.unmap_region.__do_munmap
34.46 +0.5 34.99 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__mmap
41.05 +0.6 41.63 perf-profile.calltrace.cycles-pp.__mmap
39.24 -0.9 38.36 perf-profile.children.cycles-pp.unmap_region
47.93 -0.9 47.07 perf-profile.children.cycles-pp.__x64_sys_munmap
47.48 -0.9 46.62 perf-profile.children.cycles-pp.__vm_munmap
46.43 -0.8 45.59 perf-profile.children.cycles-pp.__do_munmap
28.42 -0.7 27.68 perf-profile.children.cycles-pp.unmap_page_range
29.20 -0.7 28.52 perf-profile.children.cycles-pp.unmap_vmas
59.12 -0.6 58.54 perf-profile.children.cycles-pp.__munmap
14.57 -0.5 14.04 perf-profile.children.cycles-pp.___might_sleep
78.17 -0.4 77.73 perf-profile.children.cycles-pp.do_syscall_64
2.94 -0.3 2.59 perf-profile.children.cycles-pp.vma_link
87.19 -0.3 86.92 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
7.03 -0.2 6.79 perf-profile.children.cycles-pp.free_pgd_range
0.32 ± 2% -0.2 0.15 ± 2% perf-profile.children.cycles-pp.__rb_insert_augmented
1.26 -0.1 1.13 perf-profile.children.cycles-pp.down_write
0.77 -0.1 0.65 perf-profile.children.cycles-pp.__vma_rb_erase
0.08 ± 5% -0.1 0.03 ±100% perf-profile.children.cycles-pp.memcpy
1.66 -0.1 1.60 perf-profile.children.cycles-pp.prepend_path
0.17 ± 5% -0.0 0.12 ± 3% perf-profile.children.cycles-pp.__vma_link_file
1.05 -0.0 1.00 perf-profile.children.cycles-pp.unlink_file_vma
0.92 -0.0 0.87 perf-profile.children.cycles-pp.prepend_name
0.36 -0.0 0.32 ± 2% perf-profile.children.cycles-pp.__x86_retpoline_r9
0.12 -0.0 0.08 ± 5% perf-profile.children.cycles-pp.get_align_mask
1.02 -0.0 0.98 perf-profile.children.cycles-pp.__might_sleep
0.17 ± 3% -0.0 0.13 ± 3% perf-profile.children.cycles-pp.timestamp_truncate
0.38 ± 2% -0.0 0.35 ± 3% perf-profile.children.cycles-pp.current_time
0.46 -0.0 0.43 ± 2% perf-profile.children.cycles-pp.syscall_enter_from_user_mode
1.21 -0.0 1.19 perf-profile.children.cycles-pp.__vma_link_rb
0.10 ± 4% -0.0 0.08 perf-profile.children.cycles-pp.get_mmap_base
0.18 ± 2% -0.0 0.17 perf-profile.children.cycles-pp.tlb_flush_mmu
0.07 -0.0 0.06 perf-profile.children.cycles-pp.munmap@plt
0.18 +0.0 0.20 ± 2% perf-profile.children.cycles-pp.testcase
0.35 +0.0 0.36 ± 2% perf-profile.children.cycles-pp.obj_cgroup_charge
0.29 ± 2% +0.0 0.31 ± 2% perf-profile.children.cycles-pp.lru_add_drain_cpu
0.09 +0.0 0.11 ± 3% perf-profile.children.cycles-pp.aa_file_perm
0.44 ± 2% +0.0 0.46 perf-profile.children.cycles-pp.refill_obj_stock
0.27 ± 3% +0.0 0.30 ± 2% perf-profile.children.cycles-pp.vma_merge
1.28 +0.0 1.31 perf-profile.children.cycles-pp.prepend
0.76 +0.0 0.79 perf-profile.children.cycles-pp.up_write
0.32 ± 2% +0.0 0.35 perf-profile.children.cycles-pp.downgrade_write
0.27 ± 3% +0.0 0.30 ± 2% perf-profile.children.cycles-pp.cap_mmap_file
0.07 ± 6% +0.0 0.11 ± 4% perf-profile.children.cycles-pp.ima_file_mmap
0.28 ± 2% +0.0 0.32 ± 2% perf-profile.children.cycles-pp.__vm_enough_memory
0.03 ±100% +0.0 0.07 perf-profile.children.cycles-pp.fput
0.93 +0.0 0.98 perf-profile.children.cycles-pp.strlen
0.34 ± 2% +0.0 0.39 perf-profile.children.cycles-pp.apparmor_mmap_file
0.26 +0.0 0.31 ± 2% perf-profile.children.cycles-pp.sync_mm_rss
0.22 ± 3% +0.1 0.27 ± 2% perf-profile.children.cycles-pp.vma_interval_tree_remove
0.25 +0.1 0.30 ± 6% perf-profile.children.cycles-pp.percpu_counter_add_batch
1.40 +0.1 1.45 perf-profile.children.cycles-pp.find_vma
0.14 ± 7% +0.1 0.20 ± 9% perf-profile.children.cycles-pp.vm_pgprot_modify
0.16 ± 5% +0.1 0.21 ± 2% perf-profile.children.cycles-pp.uprobe_mmap
1.13 +0.1 1.19 perf-profile.children.cycles-pp.memcpy_erms
0.59 ± 2% +0.1 0.66 perf-profile.children.cycles-pp.common_file_perm
4.85 +0.1 4.93 perf-profile.children.cycles-pp._cond_resched
0.67 ± 2% +0.1 0.77 perf-profile.children.cycles-pp.kfree
0.33 ± 3% +0.1 0.43 perf-profile.children.cycles-pp.vma_set_page_prot
2.01 +0.1 2.13 perf-profile.children.cycles-pp.arch_get_unmapped_area_topdown
25.40 +0.1 25.53 perf-profile.children.cycles-pp.do_mmap
9.46 +0.2 9.61 perf-profile.children.cycles-pp.perf_event_mmap
8.57 +0.2 8.73 perf-profile.children.cycles-pp.syscall_exit_to_user_mode
1.74 +0.2 1.90 perf-profile.children.cycles-pp.security_mmap_file
2.39 ± 5% +0.2 2.58 ± 2% perf-profile.children.cycles-pp.kmem_cache_free
0.83 +0.2 1.04 perf-profile.children.cycles-pp.vm_unmapped_area
2.96 ± 4% +0.2 3.18 perf-profile.children.cycles-pp.kmem_cache_alloc
3.49 ± 3% +0.2 3.73 perf-profile.children.cycles-pp.vm_area_alloc
20.27 +0.3 20.59 perf-profile.children.cycles-pp.mmap_region
5.89 +0.4 6.30 perf-profile.children.cycles-pp.syscall_return_via_sysret
28.34 +0.4 28.76 perf-profile.children.cycles-pp.vm_mmap_pgoff
29.42 +0.4 29.86 perf-profile.children.cycles-pp.ksys_mmap_pgoff
2.20 +0.5 2.66 perf-profile.children.cycles-pp.zap_pte_range
41.40 +0.6 41.99 perf-profile.children.cycles-pp.__mmap
11.45 -1.0 10.48 perf-profile.self.cycles-pp.unmap_page_range
12.18 -0.4 11.82 perf-profile.self.cycles-pp.___might_sleep
6.98 -0.2 6.74 perf-profile.self.cycles-pp.free_pgd_range
0.31 ± 2% -0.2 0.14 perf-profile.self.cycles-pp.__rb_insert_augmented
5.52 -0.1 5.40 perf-profile.self.cycles-pp.entry_SYSCALL_64
1.04 -0.1 0.93 ± 2% perf-profile.self.cycles-pp.do_mmap
0.75 -0.1 0.64 perf-profile.self.cycles-pp.__vma_rb_erase
0.31 ± 4% -0.1 0.24 ± 6% perf-profile.self.cycles-pp.get_unmapped_area
0.15 ± 2% -0.1 0.10 ± 4% perf-profile.self.cycles-pp.__vma_link_file
0.90 -0.0 0.85 perf-profile.self.cycles-pp.prepend_name
0.93 ± 2% -0.0 0.88 ± 3% perf-profile.self.cycles-pp.shmem_get_unmapped_area
0.54 ± 2% -0.0 0.50 perf-profile.self.cycles-pp.down_write
0.66 -0.0 0.61 ± 2% perf-profile.self.cycles-pp.__mmap
0.94 -0.0 0.90 perf-profile.self.cycles-pp.arch_get_unmapped_area_topdown
0.10 ± 4% -0.0 0.06 perf-profile.self.cycles-pp.get_align_mask
0.33 -0.0 0.29 perf-profile.self.cycles-pp.__x86_retpoline_r9
0.16 ± 2% -0.0 0.14 ± 3% perf-profile.self.cycles-pp.current_time
0.92 -0.0 0.89 perf-profile.self.cycles-pp.__might_sleep
0.15 ± 2% -0.0 0.12 ± 3% perf-profile.self.cycles-pp.timestamp_truncate
0.42 -0.0 0.39 ± 2% perf-profile.self.cycles-pp.syscall_enter_from_user_mode
0.14 ± 3% -0.0 0.12 perf-profile.self.cycles-pp.prepend
0.10 ± 4% -0.0 0.08 ± 6% perf-profile.self.cycles-pp.get_mmap_base
0.35 ± 2% -0.0 0.33 perf-profile.self.cycles-pp.security_mmap_file
0.15 ± 3% -0.0 0.14 ± 3% perf-profile.self.cycles-pp.vma_interval_tree_insert
0.13 ± 3% -0.0 0.12 perf-profile.self.cycles-pp.tlb_flush_mmu
0.06 ± 6% -0.0 0.05 perf-profile.self.cycles-pp.munmap@plt
0.05 +0.0 0.06 ± 6% perf-profile.self.cycles-pp.should_failslab
0.11 ± 6% +0.0 0.12 ± 4% perf-profile.self.cycles-pp.security_vm_enough_memory_mm
0.33 +0.0 0.35 ± 2% perf-profile.self.cycles-pp.obj_cgroup_charge
0.50 +0.0 0.52 perf-profile.self.cycles-pp.vm_area_alloc
0.30 +0.0 0.32 ± 3% perf-profile.self.cycles-pp.unmap_vmas
0.24 +0.0 0.26 perf-profile.self.cycles-pp.unmap_region
0.08 ± 5% +0.0 0.10 ± 4% perf-profile.self.cycles-pp.aa_file_perm
0.38 +0.0 0.40 perf-profile.self.cycles-pp.vmacache_find
0.42 ± 2% +0.0 0.45 perf-profile.self.cycles-pp.refill_obj_stock
0.25 +0.0 0.27 ± 2% perf-profile.self.cycles-pp.atime_needs_update
0.28 +0.0 0.30 ± 2% perf-profile.self.cycles-pp.vm_mmap_pgoff
0.26 +0.0 0.28 ± 2% perf-profile.self.cycles-pp.vma_merge
0.11 ± 7% +0.0 0.15 ± 7% perf-profile.self.cycles-pp.vma_set_page_prot
0.31 ± 2% +0.0 0.34 perf-profile.self.cycles-pp.downgrade_write
0.74 +0.0 0.77 perf-profile.self.cycles-pp.up_write
0.83 +0.0 0.87 perf-profile.self.cycles-pp.find_vma
0.07 ± 7% +0.0 0.10 ± 4% perf-profile.self.cycles-pp.ima_file_mmap
0.22 ± 3% +0.0 0.26 perf-profile.self.cycles-pp.cap_mmap_file
0.30 ± 2% +0.0 0.34 perf-profile.self.cycles-pp.apparmor_mmap_file
0.21 ± 2% +0.0 0.26 perf-profile.self.cycles-pp.__x64_sys_munmap
0.93 +0.0 0.97 perf-profile.self.cycles-pp.strlen
0.12 ± 4% +0.0 0.17 perf-profile.self.cycles-pp.free_pgtables
0.24 +0.0 0.29 ± 5% perf-profile.self.cycles-pp.percpu_counter_add_batch
0.26 +0.0 0.31 ± 2% perf-profile.self.cycles-pp.sync_mm_rss
0.17 ± 4% +0.1 0.22 ± 4% perf-profile.self.cycles-pp.vma_interval_tree_remove
0.13 ± 8% +0.1 0.18 ± 11% perf-profile.self.cycles-pp.vm_pgprot_modify
0.50 ± 2% +0.1 0.55 perf-profile.self.cycles-pp.common_file_perm
1.31 +0.1 1.36 perf-profile.self.cycles-pp.mmap_region
0.15 ± 3% +0.1 0.20 ± 2% perf-profile.self.cycles-pp.uprobe_mmap
2.22 +0.1 2.30 perf-profile.self.cycles-pp._cond_resched
1.06 +0.1 1.15 perf-profile.self.cycles-pp.memcpy_erms
0.66 ± 2% +0.1 0.76 perf-profile.self.cycles-pp.kfree
8.18 +0.2 8.36 perf-profile.self.cycles-pp.syscall_exit_to_user_mode
0.82 +0.2 1.04 perf-profile.self.cycles-pp.vm_unmapped_area
1.59 +0.4 2.00 perf-profile.self.cycles-pp.zap_pte_range
5.87 +0.4 6.29 perf-profile.self.cycles-pp.syscall_return_via_sysret
will-it-scale.per_process_ops
223000 +------------------------------------------------------------------+
| O |
222000 |-+ O O |
221000 |-+ O O |
| O O O |
220000 |-+ O |
219000 |-+ O |
| |
218000 |-+ +.. |
217000 |-+ .. .. |
|..... . .+.. |
216000 |-+ +.... .. +..... ...+.... .. .. |
215000 |-+ + +.. +..... ..+.....+. .|
| +.. |
214000 +------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
***************************************************************************************************
lkp-skl-fpga01: 104 threads Skylake with 192G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase/ucode:
gcc-9/performance/x86_64-rhel-8.3/process/100%/debian-10.4-x86_64-20200603.cgz/lkp-skl-fpga01/mmap1/will-it-scale/0x2006906
commit:
7a4830c380 ("mm/fork: Pass new vma pointer into copy_page_range()")
70e806e4e6 ("mm: Do early cow for pinned pages during fork() for ptes")
7a4830c380f3a8b3 70e806e4e645019102d0e09d493
---------------- ---------------------------
%stddev %change %stddev
\ | \
238782 +2.0% 243561 will-it-scale.per_process_ops
9565 -0.9% 9477 will-it-scale.time.maximum_resident_set_size
24833379 +2.0% 25330430 will-it-scale.workload
18752 ± 9% -14.9% 15950 ± 12% numa-meminfo.node0.Mapped
4793 ± 9% -15.7% 4041 ± 13% numa-vmstat.node0.nr_mapped
12863 ± 12% +18.5% 15248 ± 11% sched_debug.cpu.sched_count.max
1740 ± 6% -12.1% 1530 slabinfo.kmalloc-rcl-512.active_objs
1740 ± 6% -12.1% 1530 slabinfo.kmalloc-rcl-512.num_objs
18707 +9.1% 20407 ± 3% softirqs.CPU16.RCU
17047 ± 2% +8.1% 18429 ± 4% softirqs.CPU40.RCU
82.25 -1.5% 81.00 vmstat.cpu.sy
16.00 +6.2% 17.00 vmstat.cpu.us
8495 ± 38% -96.9% 260.75 ± 82% proc-vmstat.numa_hint_faults
9348 ± 56% -93.2% 632.75 ±118% proc-vmstat.numa_pages_migrated
41007 ± 55% -86.3% 5613 ± 69% proc-vmstat.numa_pte_updates
9348 ± 56% -93.2% 632.75 ±118% proc-vmstat.pgmigrate_success
7387 -37.7% 4602 ± 34% interrupts.CPU10.NMI:Non-maskable_interrupts
7387 -37.7% 4602 ± 34% interrupts.CPU10.PMI:Performance_monitoring_interrupts
313.00 ± 2% +23.9% 387.75 ± 20% interrupts.CPU24.RES:Rescheduling_interrupts
353.75 ± 8% -9.7% 319.50 ± 5% interrupts.CPU37.RES:Rescheduling_interrupts
5248 ± 22% -37.7% 3269 ± 27% interrupts.CPU53.CAL:Function_call_interrupts
6467 ± 24% -28.8% 4602 ± 34% interrupts.CPU6.NMI:Non-maskable_interrupts
6467 ± 24% -28.8% 4602 ± 34% interrupts.CPU6.PMI:Performance_monitoring_interrupts
610.00 ± 23% -24.1% 463.00 interrupts.CPU62.CAL:Function_call_interrupts
6466 ± 24% -28.8% 4603 ± 34% interrupts.CPU7.NMI:Non-maskable_interrupts
6466 ± 24% -28.8% 4603 ± 34% interrupts.CPU7.PMI:Performance_monitoring_interrupts
453.50 +80.8% 819.75 ± 55% interrupts.CPU8.CAL:Function_call_interrupts
6467 ± 24% -28.8% 4602 ± 34% interrupts.CPU8.NMI:Non-maskable_interrupts
6467 ± 24% -28.8% 4602 ± 34% interrupts.CPU8.PMI:Performance_monitoring_interrupts
7388 -37.7% 4603 ± 34% interrupts.CPU9.NMI:Non-maskable_interrupts
7388 -37.7% 4603 ± 34% interrupts.CPU9.PMI:Performance_monitoring_interrupts
5.672e+10 +2.0% 5.784e+10 perf-stat.i.branch-instructions
2.382e+08 +2.7% 2.446e+08 perf-stat.i.branch-misses
1.18 -2.0% 1.16 perf-stat.i.cpi
49520439 +1.9% 50482111 perf-stat.i.dTLB-load-misses
5.872e+10 +2.0% 5.987e+10 perf-stat.i.dTLB-loads
44598 +4.4% 46565 perf-stat.i.dTLB-store-misses
2.605e+10 +2.0% 2.656e+10 perf-stat.i.dTLB-stores
96.00 -7.9 88.06 ± 6% perf-stat.i.iTLB-load-miss-rate%
49281952 +2.2% 50384621 perf-stat.i.iTLB-load-misses
1928684 ± 46% +259.5% 6934327 ± 58% perf-stat.i.iTLB-loads
2.354e+11 +2.0% 2.4e+11 perf-stat.i.instructions
0.85 +2.0% 0.86 perf-stat.i.ipc
1360 +2.0% 1387 perf-stat.i.metric.M/sec
2838 -0.9% 2811 perf-stat.i.minor-faults
2838 -0.9% 2812 perf-stat.i.page-faults
1.18 -2.0% 1.16 perf-stat.overall.cpi
0.00 +0.0 0.00 perf-stat.overall.dTLB-store-miss-rate%
96.28 -8.0 88.30 ± 6% perf-stat.overall.iTLB-load-miss-rate%
0.85 +2.0% 0.86 perf-stat.overall.ipc
5.653e+10 +2.0% 5.765e+10 perf-stat.ps.branch-instructions
2.375e+08 +2.7% 2.438e+08 perf-stat.ps.branch-misses
49350510 +1.9% 50309290 perf-stat.ps.dTLB-load-misses
5.852e+10 +2.0% 5.967e+10 perf-stat.ps.dTLB-loads
44555 +4.3% 46476 perf-stat.ps.dTLB-store-misses
2.596e+10 +2.0% 2.647e+10 perf-stat.ps.dTLB-stores
49109500 +2.2% 50209811 perf-stat.ps.iTLB-load-misses
1915468 ± 45% +260.7% 6909154 ± 58% perf-stat.ps.iTLB-loads
2.346e+11 +2.0% 2.392e+11 perf-stat.ps.instructions
2834 -1.0% 2805 perf-stat.ps.minor-faults
2834 -1.0% 2806 perf-stat.ps.page-faults
7.095e+13 +1.9% 7.23e+13 perf-stat.total.instructions
34.83 -1.1 33.77 perf-profile.calltrace.cycles-pp.unmap_vmas.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap
44.65 -1.0 43.62 perf-profile.calltrace.cycles-pp.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
18.45 -0.9 17.52 perf-profile.calltrace.cycles-pp.___might_sleep.unmap_page_range.unmap_vmas.unmap_region.__do_munmap
52.42 -0.7 51.69 perf-profile.calltrace.cycles-pp.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
53.69 -0.7 52.96 perf-profile.calltrace.cycles-pp.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
31.25 -0.7 30.52 perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.unmap_region.__do_munmap.__vm_munmap
54.16 -0.7 53.47 perf-profile.calltrace.cycles-pp.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
54.49 -0.7 53.84 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__munmap
2.31 -0.5 1.77 perf-profile.calltrace.cycles-pp.vma_link.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
59.40 -0.4 58.98 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.__munmap
1.62 -0.2 1.38 perf-profile.calltrace.cycles-pp.__vma_link_rb.vma_link.mmap_region.do_mmap.vm_mmap_pgoff
18.16 -0.2 17.93 perf-profile.calltrace.cycles-pp.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
22.17 -0.2 21.94 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.__mmap
21.60 -0.2 21.38 perf-profile.calltrace.cycles-pp.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe.__mmap
21.31 -0.2 21.11 perf-profile.calltrace.cycles-pp.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe.__mmap
1.68 ± 2% -0.1 1.54 perf-profile.calltrace.cycles-pp.find_vma.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
0.62 ± 2% -0.0 0.59 perf-profile.calltrace.cycles-pp.security_mmap_addr.get_unmapped_area.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
0.56 +0.0 0.58 perf-profile.calltrace.cycles-pp.tlb_finish_mmu.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap
0.62 +0.0 0.65 perf-profile.calltrace.cycles-pp.cap_vm_enough_memory.security_vm_enough_memory_mm.mmap_region.do_mmap.vm_mmap_pgoff
2.55 +0.1 2.65 perf-profile.calltrace.cycles-pp.rcu_all_qs._cond_resched.unmap_page_range.unmap_vmas.unmap_region
3.15 +0.1 3.26 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64.__munmap
0.89 +0.1 1.01 ± 2% perf-profile.calltrace.cycles-pp.perf_event_mmap_output.perf_iterate_sb.perf_event_mmap.mmap_region.do_mmap
3.87 +0.2 4.03 perf-profile.calltrace.cycles-pp.perf_event_mmap.mmap_region.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
4.65 +0.2 4.87 perf-profile.calltrace.cycles-pp.syscall_exit_to_user_mode.entry_SYSCALL_64_after_hwframe.__munmap
2.72 ± 2% +0.2 2.95 perf-profile.calltrace.cycles-pp.syscall_return_via_sysret.__mmap
1.94 +0.2 2.19 perf-profile.calltrace.cycles-pp.perf_iterate_sb.perf_event_mmap.mmap_region.do_mmap.vm_mmap_pgoff
2.68 ± 2% +0.3 2.93 perf-profile.calltrace.cycles-pp.syscall_return_via_sysret.__munmap
1.00 +0.3 1.26 perf-profile.calltrace.cycles-pp.vm_unmapped_area.arch_get_unmapped_area_topdown.get_unmapped_area.do_mmap.vm_mmap_pgoff
2.05 +0.3 2.32 perf-profile.calltrace.cycles-pp.arch_get_unmapped_area_topdown.get_unmapped_area.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff
5.05 +0.3 5.32 perf-profile.calltrace.cycles-pp._cond_resched.unmap_page_range.unmap_vmas.unmap_region.__do_munmap
2.80 ± 2% +0.4 3.15 perf-profile.calltrace.cycles-pp.zap_pte_range.unmap_page_range.unmap_vmas.unmap_region.__do_munmap
2.93 ± 7% +0.4 3.33 ± 3% perf-profile.calltrace.cycles-pp.remove_vma.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
2.27 ± 9% +0.5 2.73 ± 3% perf-profile.calltrace.cycles-pp.kmem_cache_free.remove_vma.__do_munmap.__vm_munmap.__x64_sys_munmap
0.00 +0.6 0.58 perf-profile.calltrace.cycles-pp.free_pgtables.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap
34.85 -1.1 33.80 perf-profile.children.cycles-pp.unmap_vmas
44.73 -1.0 43.70 perf-profile.children.cycles-pp.unmap_region
33.89 -1.0 32.89 perf-profile.children.cycles-pp.unmap_page_range
76.71 -0.9 75.83 perf-profile.children.cycles-pp.do_syscall_64
52.54 -0.7 51.79 perf-profile.children.cycles-pp.__do_munmap
53.73 -0.7 53.00 perf-profile.children.cycles-pp.__vm_munmap
16.84 -0.7 16.13 perf-profile.children.cycles-pp.___might_sleep
54.19 -0.7 53.49 perf-profile.children.cycles-pp.__x64_sys_munmap
86.59 -0.6 86.02 perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
2.36 -0.5 1.82 perf-profile.children.cycles-pp.vma_link
1.64 -0.3 1.38 perf-profile.children.cycles-pp.__vma_link_rb
18.21 -0.2 17.98 perf-profile.children.cycles-pp.do_mmap
21.63 -0.2 21.41 perf-profile.children.cycles-pp.ksys_mmap_pgoff
21.36 -0.2 21.16 perf-profile.children.cycles-pp.vm_mmap_pgoff
0.30 ± 2% -0.2 0.11 ± 4% perf-profile.children.cycles-pp.__rb_insert_augmented
1.78 ± 2% -0.2 1.63 perf-profile.children.cycles-pp.find_vma
0.15 ± 2% -0.1 0.07 ± 6% perf-profile.children.cycles-pp.__vma_link_file
0.41 ± 3% -0.1 0.35 perf-profile.children.cycles-pp.cap_mmap_addr
0.76 -0.1 0.71 perf-profile.children.cycles-pp.__might_sleep
0.47 -0.0 0.42 perf-profile.children.cycles-pp.obj_cgroup_charge
0.43 ± 3% -0.0 0.39 ± 3% perf-profile.children.cycles-pp.tlb_gather_mmu
0.64 ± 2% -0.0 0.60 perf-profile.children.cycles-pp.security_mmap_addr
0.23 ± 3% -0.0 0.20 ± 2% perf-profile.children.cycles-pp.vmacache_update
0.25 -0.0 0.22 perf-profile.children.cycles-pp.strlen
0.45 -0.0 0.42 ± 2% perf-profile.children.cycles-pp.apparmor_mmap_file
0.22 -0.0 0.20 ± 2% perf-profile.children.cycles-pp.cap_capable
0.23 -0.0 0.22 perf-profile.children.cycles-pp.__x64_sys_mmap
0.10 +0.0 0.11 perf-profile.children.cycles-pp.vm_area_free
0.10 ± 4% +0.0 0.12 ± 3% perf-profile.children.cycles-pp.__x86_retpoline_rbp
0.10 ± 5% +0.0 0.11 perf-profile.children.cycles-pp.get_mmap_base
0.18 ± 2% +0.0 0.20 perf-profile.children.cycles-pp.testcase
0.08 ± 5% +0.0 0.10 perf-profile.children.cycles-pp.unlink_file_vma
0.58 +0.0 0.60 perf-profile.children.cycles-pp.tlb_finish_mmu
0.47 ± 2% +0.0 0.49 perf-profile.children.cycles-pp.__x86_retpoline_rax
0.29 ± 2% +0.0 0.32 perf-profile.children.cycles-pp.__x86_retpoline_r9
0.47 +0.0 0.50 ± 2% perf-profile.children.cycles-pp.syscall_enter_from_user_mode
0.38 +0.0 0.41 perf-profile.children.cycles-pp.downgrade_write
0.48 ± 3% +0.0 0.52 perf-profile.children.cycles-pp.vma_merge
0.22 ± 3% +0.0 0.26 perf-profile.children.cycles-pp.unlink_anon_vmas
0.16 ± 5% +0.0 0.20 ± 2% perf-profile.children.cycles-pp.blocking_notifier_call_chain
0.35 ± 2% +0.0 0.39 ± 2% perf-profile.children.cycles-pp.percpu_counter_add_batch
0.40 ± 2% +0.1 0.46 perf-profile.children.cycles-pp.refill_obj_stock
0.33 ± 2% +0.1 0.39 perf-profile.children.cycles-pp.__vm_enough_memory
2.90 +0.1 2.98 perf-profile.children.cycles-pp.rcu_all_qs
0.51 +0.1 0.61 perf-profile.children.cycles-pp.free_pgtables
0.33 +0.1 0.44 perf-profile.children.cycles-pp.cap_mmap_file
0.92 +0.1 1.03 ± 2% perf-profile.children.cycles-pp.perf_event_mmap_output
3.94 +0.2 4.09 perf-profile.children.cycles-pp.perf_event_mmap
5.32 +0.2 5.57 perf-profile.children.cycles-pp._cond_resched
1.97 +0.3 2.22 perf-profile.children.cycles-pp.perf_iterate_sb
1.01 +0.3 1.27 perf-profile.children.cycles-pp.vm_unmapped_area
2.09 +0.3 2.36 perf-profile.children.cycles-pp.arch_get_unmapped_area_topdown
9.39 +0.3 9.68 perf-profile.children.cycles-pp.syscall_exit_to_user_mode
2.83 ± 2% +0.4 3.19 perf-profile.children.cycles-pp.zap_pte_range
2.97 ± 6% +0.4 3.37 ± 3% perf-profile.children.cycles-pp.remove_vma
2.29 ± 9% +0.5 2.75 ± 3% perf-profile.children.cycles-pp.kmem_cache_free
6.16 +0.5 6.65 perf-profile.children.cycles-pp.syscall_return_via_sysret
13.46 -1.2 12.24 perf-profile.self.cycles-pp.unmap_page_range
14.10 -0.4 13.67 perf-profile.self.cycles-pp.___might_sleep
1.62 -0.2 1.37 perf-profile.self.cycles-pp.__vma_link_rb
0.59 ± 4% -0.2 0.40 perf-profile.self.cycles-pp.get_unmapped_area
0.86 ± 2% -0.2 0.67 ± 2% perf-profile.self.cycles-pp.do_mmap
0.29 ± 2% -0.2 0.11 ± 3% perf-profile.self.cycles-pp.__rb_insert_augmented
1.04 ± 2% -0.1 0.94 perf-profile.self.cycles-pp.find_vma
0.13 -0.1 0.04 ± 57% perf-profile.self.cycles-pp.__vma_link_file
0.63 -0.1 0.54 ± 3% perf-profile.self.cycles-pp.security_mmap_file
1.63 -0.1 1.56 perf-profile.self.cycles-pp.perf_event_mmap
0.36 ± 3% -0.1 0.29 ± 2% perf-profile.self.cycles-pp.cap_mmap_addr
0.70 -0.1 0.65 perf-profile.self.cycles-pp.__might_sleep
0.45 -0.0 0.41 perf-profile.self.cycles-pp.obj_cgroup_charge
0.42 ± 3% -0.0 0.38 ± 2% perf-profile.self.cycles-pp.tlb_gather_mmu
0.40 -0.0 0.37 ± 2% perf-profile.self.cycles-pp.apparmor_mmap_file
0.75 -0.0 0.71 perf-profile.self.cycles-pp.__mmap
0.21 ± 3% -0.0 0.18 ± 3% perf-profile.self.cycles-pp.vmacache_update
0.23 -0.0 0.20 ± 2% perf-profile.self.cycles-pp.vma_link
0.23 -0.0 0.21 ± 2% perf-profile.self.cycles-pp.strlen
0.13 ± 3% -0.0 0.11 ± 3% perf-profile.self.cycles-pp.security_mmap_addr
0.08 +0.0 0.09 perf-profile.self.cycles-pp.vm_area_free
0.09 +0.0 0.10 perf-profile.self.cycles-pp.get_mmap_base
0.07 +0.0 0.08 ± 5% perf-profile.self.cycles-pp.unlink_file_vma
0.09 ± 4% +0.0 0.10 perf-profile.self.cycles-pp.can_vma_merge_before
0.11 ± 3% +0.0 0.13 perf-profile.self.cycles-pp.testcase
0.22 ± 3% +0.0 0.25 perf-profile.self.cycles-pp.userfaultfd_unmap_prep
0.27 +0.0 0.29 perf-profile.self.cycles-pp.__x86_retpoline_r9
0.43 +0.0 0.46 ± 2% perf-profile.self.cycles-pp.syscall_enter_from_user_mode
0.21 ± 3% +0.0 0.24 perf-profile.self.cycles-pp.unlink_anon_vmas
0.15 ± 3% +0.0 0.19 ± 3% perf-profile.self.cycles-pp.blocking_notifier_call_chain
0.36 +0.0 0.40 perf-profile.self.cycles-pp.vm_mmap_pgoff
0.35 +0.0 0.39 perf-profile.self.cycles-pp.tlb_finish_mmu
0.34 ± 2% +0.0 0.37 ± 2% perf-profile.self.cycles-pp.percpu_counter_add_batch
0.51 +0.0 0.55 ± 2% perf-profile.self.cycles-pp.__munmap
0.36 +0.0 0.40 perf-profile.self.cycles-pp.downgrade_write
2.20 +0.0 2.24 perf-profile.self.cycles-pp.rcu_all_qs
0.21 ± 3% +0.1 0.27 perf-profile.self.cycles-pp.free_pgtables
0.38 +0.1 0.44 perf-profile.self.cycles-pp.refill_obj_stock
0.62 +0.1 0.69 perf-profile.self.cycles-pp.vm_area_alloc
0.26 +0.1 0.36 perf-profile.self.cycles-pp.cap_mmap_file
0.86 +0.1 0.97 ± 2% perf-profile.self.cycles-pp.perf_event_mmap_output
0.99 +0.1 1.10 ± 4% perf-profile.self.cycles-pp.perf_iterate_sb
2.45 +0.2 2.63 perf-profile.self.cycles-pp._cond_resched
1.71 ± 8% +0.2 1.94 ± 5% perf-profile.self.cycles-pp.kmem_cache_alloc
0.98 +0.3 1.24 perf-profile.self.cycles-pp.vm_unmapped_area
8.99 +0.3 9.27 perf-profile.self.cycles-pp.syscall_exit_to_user_mode
2.04 ± 2% +0.4 2.39 perf-profile.self.cycles-pp.zap_pte_range
1.65 ± 12% +0.4 2.02 ± 4% perf-profile.self.cycles-pp.kmem_cache_free
6.13 +0.5 6.63 perf-profile.self.cycles-pp.syscall_return_via_sysret
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen