[perf/core] 16fb162e78: WARNING:at_kernel/events/core.c:#list_add_event
by kernel test robot
Greetings,
FYI, we noticed the following commit (built with gcc-9):
commit: 16fb162e7868b8f3c1fc09ef0d683c9554cb6404 ("[RFC] perf/core: Fixes hung issue on perf stat command during cpu hotplug")
url: https://github.com/0day-ci/linux/commits/Kajol-Jain/perf-core-Fixes-hung-...
base: https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git 2cb5383b30d47c446ec7d884cd80f93ffcc31817
in testcase: trinity
with the following parameters:
runtime: 300s
test-description: Trinity is a Linux system call fuzz tester.
test-url: http://codemonkey.org.uk/projects/trinity/
on test machine: qemu-system-i386 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+-------------------------------------------------+------------+------------+
| | 2cb5383b30 | 16fb162e78 |
+-------------------------------------------------+------------+------------+
| boot_successes | 4 | 0 |
| boot_failures | 0 | 8 |
| WARNING:at_kernel/events/core.c:#list_add_event | 0 | 8 |
| EIP:list_add_event | 0 | 8 |
| kernel_BUG_at_lib/list_debug.c | 0 | 8 |
| invalid_opcode:#[##] | 0 | 8 |
| EIP:__list_add_valid | 0 | 8 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 8 |
+-------------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
[ 97.084211] WARNING: CPU: 0 PID: 593 at kernel/events/core.c:1807 list_add_event+0x29e/0x4b0
[ 97.086779] Modules linked in:
[ 97.087756] CPU: 0 PID: 593 Comm: trinity-main Not tainted 5.9.0-rc1-00012-g16fb162e7868b #1
[ 97.089783] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
[ 97.091980] EIP: list_add_event+0x29e/0x4b0
[ 97.093322] Code: ef c2 c4 01 83 15 44 ef c2 c4 00 8b 43 60 a8 01 0f 84 9a fd ff ff 8d b6 00 00 00 00 83 05 48 ef c2 c4 01 83 15 4c ef c2 c4 00 <0f> 0b 83 05 50 ef c2 c4 01 83 15 54 ef c2 c4 00 e9 71 fd ff ff 8d
[ 97.097789] EAX: 00000007 EBX: eee48000 ECX: 00000001 EDX: ee125280
[ 97.099297] ESI: eeebe200 EDI: 00000001 EBP: eee67dfc ESP: eee67dec
[ 97.101008] DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068 EFLAGS: 00010046
[ 97.106936] CR0: 80050033 CR2: 08dfc000 CR3: 2e0bb000 CR4: 00040690
[ 97.108599] Call Trace:
[ 97.109457] __perf_install_in_context+0x286/0x3e0
[ 97.110759] remote_function+0x79/0x90
[ 97.111897] generic_exec_single+0xc3/0x190
[ 97.113101] smp_call_function_single+0x1a3/0x2d0
[ 97.114364] ? bpf_fd_reuseport_array_update_elem+0x300/0x300
[ 97.115866] task_function_call+0x87/0xc0
[ 97.117025] ? __perf_event_enable+0x740/0x740
[ 97.118248] perf_install_in_context+0x110/0x450
[ 97.119540] __do_sys_perf_event_open+0x13cb/0x1ed0
[ 97.120833] __ia32_sys_perf_event_open+0x24/0x40
[ 97.122097] __do_fast_syscall_32+0x99/0x110
[ 97.123213] do_fast_syscall_32+0x37/0x130
[ 97.124409] do_SYSENTER_32+0x23/0x40
[ 97.125470] entry_SYSENTER_32+0xb0/0x10d
[ 97.126592] EIP: 0xb7f11549
[ 97.127449] Code: 03 74 c0 01 10 05 03 74 b8 01 10 06 03 74 b4 01 10 07 03 74 b0 01 10 08 03 74 d8 01 00 00 00 00 00 51 52 55 89 e5 0f 34 cd 80 <5d> 5a 59 c3 90 90 90 90 8d b4 26 00 00 00 00 8d b4 26 00 00 00 00
[ 97.131832] EAX: ffffffda EBX: 08e38318 ECX: 00000000 EDX: ffffffff
[ 97.137556] ESI: ffffffff EDI: 00000002 EBP: 6d656d83 ESP: bfa804ec
[ 97.139119] DS: 007b ES: 007b FS: 0000 GS: 0033 SS: 007b EFLAGS: 00000286
[ 97.140732] ---[ end trace 57d55e687e8b0e41 ]---
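The backtrace shows the warning being reached from the perf_event_open(2) syscall installing a per-task event (__ia32_sys_perf_event_open -> perf_install_in_context -> task_function_call). As a rough illustration of that entry point only -- not the actual Trinity input, with an arbitrary helper name (open_cycles_counter) and arbitrary attribute values -- a minimal user-space call looks like this:

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <string.h>
#include <unistd.h>

/* Sketch only: open a disabled hardware-cycles counter for the calling
 * task; the kernel installs it via perf_install_in_context(), the path
 * seen in the trace above. */
static int open_cycles_counter(void)
{
	struct perf_event_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_CPU_CYCLES;
	attr.disabled = 1;

	/* pid = 0 (current task), cpu = -1 (any CPU), no group, no flags */
	return syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
}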
To reproduce:
# build kernel
cd linux
cp config-5.9.0-rc1-00012-g16fb162e7868b .config
make HOSTCC=gcc-9 CC=gcc-9 ARCH=i386 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen
[drm/crtc] 236b7bc44a: BUG:stack_guard_page_was_hit_at(____ptrval____)(stack_is(____ptrval____)..(____ptrval____))
by kernel test robot
Greetings,
FYI, we noticed the following commit (built with gcc-9):
commit: 236b7bc44ae0fdecc8e80c5aba0655ca14fdfb23 ("[PATCH 4/4] drm/crtc: add drmm_crtc_alloc_with_planes()")
url: https://github.com/0day-ci/linux/commits/Philipp-Zabel/drm-add-drmm_encod...
base: git://anongit.freedesktop.org/drm-intel for-linux-next
in testcase: trinity
with the following parameters:
runtime: 300s
test-description: Trinity is a Linux system call fuzz tester.
test-url: http://codemonkey.org.uk/projects/trinity/
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+---------------------------------------------------------------------------------------------+------------+------------+
| | d809a51da3 | 236b7bc44a |
+---------------------------------------------------------------------------------------------+------------+------------+
| boot_successes | 16 | 0 |
| boot_failures | 0 | 4 |
| BUG:stack_guard_page_was_hit_at(____ptrval____)(stack_is(____ptrval____)..(____ptrval____)) | 0 | 4 |
| RIP:drm_crtc_init_with_planes[drm] | 0 | 4 |
| Kernel_panic-not_syncing:Fatal_exception_in_interrupt | 0 | 4 |
+---------------------------------------------------------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <lkp(a)intel.com>
[ 12.795894] BUG: stack guard page was hit at (____ptrval____) (stack is (____ptrval____)..(____ptrval____))
[ 12.795895] kernel stack overflow (double-fault): 0000 [#1] SMP PTI
[ 12.795896] CPU: 0 PID: 193 Comm: systemd-udevd Not tainted 5.8.0-01890-g236b7bc44ae0fd #1
[ 12.795897] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
[ 12.795898] RIP: 0010:drm_crtc_init_with_planes+0x5/0x80 [drm]
[ 12.795899] Code: 0f eb 16 48 8b 50 10 48 8d 42 f0 48 39 d7 74 09 39 b0 90 00 00 00 75 eb c3 31 c0 c3 66 0f 1f 84 00 00 00 00 00 66 66 66 66 90 <55> 48 89 e5 41 52 4c 8d 55 10 48 83 ec 50 65 48 8b 04 25 28 00 00
[ 12.795900] RSP: 0018:ffffac1040344000 EFLAGS: 00010246
[ 12.795902] RAX: ffffac1040344010 RBX: ffff9b400fa8b078 RCX: 0000000000000000
[ 12.795902] RDX: ffff9b400fa8b490 RSI: ffff9b400fa8b078 RDI: ffff9b4010c5c000
[ 12.795903] RBP: ffffac1040344068 R08: ffffffffc04d3340 R09: 0000000000000000
[ 12.795904] R10: ffffac1040344078 R11: 0000000000000000 R12: ffff9b4010c5c000
[ 12.795905] R13: ffff9b400fa8b490 R14: 000000000000000a R15: ffffffffc0621300
[ 12.795905] FS: 00007faf7e56dd40(0000) GS:ffff9b403fc00000(0000) knlGS:0000000000000000
[ 12.795906] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 12.795907] CR2: ffffac1040343ff8 CR3: 0000000168096000 CR4: 00000000000406f0
[ 12.795907] Call Trace:
[ 12.795908] drm_crtc_init_with_planes+0x63/0x80 [drm]
[ 12.795909] drm_crtc_init_with_planes+0x63/0x80 [drm]
[ 12.795910] drm_crtc_init_with_planes+0x63/0x80 [drm]
[ ... same drm_crtc_init_with_planes+0x63/0x80 [drm] frame repeated for roughly seventy further entries, trimmed here; see the attached dmesg for the full recursive trace ... ]
[ 12.795982] ? update_group_capacity+0x25/0x1c0
[ 12.795982] ? cpumask_next_and+0x1a/0x20
[ 12.795983] ? update_sd_lb_stats+0x121/0x860
[ 12.795984] drm_crtc_init_with_planes+0x63/0x80 [drm]
[ 12.795985] ? update_load_avg+0x78/0x660
[ 12.795986] ? account_entity_enqueue+0x9c/0xe0
[ 12.795986] ? enqueue_entity+0x218/0x3a0
[ 12.795987] drm_crtc_init_with_planes+0x63/0x80 [drm]
[ 12.795987] ? enqueue_task_fair+0x8e/0x6a0
[ 12.795988] ? check_preempt_wakeup+0x17f/0x240
[ 12.795988] drm_crtc_init_with_planes+0x63/0x80 [drm]
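The striking feature of the trace is that the same frame, drm_crtc_init_with_planes+0x63/0x80, repeats until the stack guard page is hit: the signature of unbounded self-recursion rather than a long but legitimate call chain. Purely as a hypothetical illustration (stand-in type and function names such as crtc_init_demo, not the DRM code), the failure mode is a routine that was meant to delegate to a lower-level initializer but ends up re-entering itself:

struct crtc;	/* stand-in type for illustration only */

/* Each call just burns another stack frame and never returns, so the
 * kernel eventually faults on the stack guard page, producing the
 * "kernel stack overflow (double-fault)" seen above. */
static int crtc_init_demo(struct crtc *crtc)
{
	/* intended to call the lower-level init, but resolves back here */
	return crtc_init_demo(crtc);
}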
To reproduce:
# build kernel
cd linux
cp config-5.8.0-01890-g236b7bc44ae0fd .config
make HOSTCC=gcc-9 CC=gcc-9 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
lkp
[bio] 37abbdc72e: WARNING:at_block/bio.c:#bio_release_pages
by kernel test robot
Greetings,
FYI, we noticed the following commit (built with gcc-9):
commit: 37abbdc72ec00a133b4b93f8d7ff9559a41da4e0 ("[PATCH 4/5] bio: introduce BIO_FOLL_PIN flag")
url: https://github.com/0day-ci/linux/commits/John-Hubbard/bio-Direct-IO-conve...
base: https://git.kernel.org/cgit/linux/kernel/git/axboe/linux-block.git for-next
in testcase: ltp
with the following parameters:
disk: 1HDD
fs: ext4
test: ltp-aiodio.part2
ucode: 0x21
test-description: The LTP testsuite contains a collection of tools for testing the Linux kernel and related features.
test-url: http://linux-test-project.github.io/
on test machine: 4 threads Intel(R) Core(TM) i3-3220 CPU @ 3.30GHz with 8G memory
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+----------------+------------+------------+
| | 0f01c02dee | 37abbdc72e |
+----------------+------------+------------+
| boot_successes | 0 | 0 |
+----------------+------------+------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <lkp(a)intel.com>
user :notice: [ 56.877035] INFO: creating /lkp/benchmarks/ltp/output directory
user :notice: [ 56.881385] INFO: creating /lkp/benchmarks/ltp/results directory
user :notice: [ 56.886110] Checking for required user/group ids
user :notice: [ 56.896602] 'nobody' user id and group found.
user :notice: [ 56.900161] 'bin' user id and group found.
user :notice: [ 56.903809] 'daemon' user id and group found.
user :notice: [ 56.907197] Users group found.
user :notice: [ 56.910336] Sys group found.
user :notice: [ 56.913550] Required users/groups exist.
user :notice: [ 56.918766] If some fields are empty or look unusual you may have an old version.
user :notice: [ 56.924550] Compare to the current minimal requirements in Documentation/Changes.
user :notice: [ 56.929676] /etc/os-release
user :notice: [ 56.933350] PRETTY_NAME="Debian GNU/Linux 10 (buster)"
user :notice: [ 56.936986] NAME="Debian GNU/Linux"
user :notice: [ 56.939723] VERSION_ID="10"
user :notice: [ 56.942553] VERSION="10 (buster)"
user :notice: [ 56.945702] VERSION_CODENAME=buster
user :notice: [ 56.947515] ID=debian
user :notice: [ 56.949931] HOME_URL="https://www.debian.org/"
user :notice: [ 56.952557] SUPPORT_URL="https://www.debian.org/support"
user :notice: [ 56.955001] BUG_REPORT_URL="https://bugs.debian.org/"
user :notice: [ 56.956962] uname:
user :notice: [ 56.960871] Linux lkp-ivb-d02 5.8.0-10182-g37abbdc72ec00 #1 SMP Thu Aug 27 06:01:27 CST 2020 x86_64 GNU/Linux
user :notice: [ 56.963851] /proc/cmdline
user :warn : [ 57.009433] LTP: starting ADSP000 (aiodio_sparse)
user :warn : [ 59.571766] LTP: starting ADSP001 (aiodio_sparse -s 180k)
user :warn : [ 59.709771] LTP: starting ADSP002 (aiodio_sparse -dd -s 1751k -w 11k)
kern :warn : [ 59.757746] ------------[ cut here ]------------
kern :warn : [ 59.758325] WARNING: CPU: 3 PID: 2581 at block/bio.c:955 bio_release_pages+0xd7/0xe0
kern :warn : [ 59.758952] Modules linked in: dm_mod netconsole btrfs blake2b_generic xor zstd_compress raid6_pq libcrc32c intel_rapl_msr sd_mod intel_rapl_common t10_pi x86_pkg_temp_thermal sg intel_powerclamp coretemp i915 intel_gtt drm_kms_helper kvm_intel kvm irqbypass crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel syscopyarea rapl intel_cstate sysfillrect intel_uncore sysimgblt fb_sys_fops drm mei_me ipmi_devintf ahci libahci ipmi_msghandler libata mei joydev ie31200_edac video ip_tables
kern :warn : [ 59.761834] CPU: 3 PID: 2581 Comm: aiodio_sparse Not tainted 5.8.0-10182-g37abbdc72ec00 #1
kern :warn : [ 59.762559] Hardware name: Hewlett-Packard p6-1451cx/2ADA, BIOS 8.15 02/05/2013
kern :warn : [ 59.763295] RIP: 0010:bio_release_pages+0xd7/0xe0
kern :warn : [ 59.763983] Code: e1 89 d5 81 e2 ff 0f 00 00 c1 ed 0c 29 d1 48 c1 e5 06 48 03 28 eb 9c 48 8b 45 08 a8 01 75 c0 48 89 ef e8 8c f0 d2 ff eb b6 c3 <0f> 0b c3 66 0f 1f 44 00 00 0f 1f 44 00 00 41 54 31 c0 55 bd 00 10
kern :warn : [ 59.765596] RSP: 0000:ffffc90000124e68 EFLAGS: 00010246
kern :warn : [ 59.766339] RAX: 0000000000000a00 RBX: ffff888212c3f2a0 RCX: 0000000000000000
kern :warn : [ 59.767124] RDX: fffffffffff41387 RSI: 0000000000000000 RDI: ffff88821fae3000
kern :warn : [ 59.767950] RBP: ffff88821fae3000 R08: ffff88821f1d1c00 R09: 0000000000000000
kern :warn : [ 59.768726] R10: ffff88821f1d1a10 R11: ffff88821faab3b0 R12: 0000000040000001
kern :warn : [ 59.769513] R13: 0000000000000400 R14: 0000000000002c00 R15: 0000000000000000
kern :warn : [ 59.770294] FS: 00007fb0401ef740(0000) GS:ffff88821fb80000(0000) knlGS:0000000000000000
kern :warn : [ 59.771112] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
kern :warn : [ 59.771908] CR2: 00007ffc3fc11000 CR3: 000000012477e006 CR4: 00000000001706e0
kern :warn : [ 59.772720] Call Trace:
kern :warn : [ 59.773466] <IRQ>
kern :warn : [ 59.774200] iomap_dio_bio_end_io+0x5f/0x100
kern :warn : [ 59.774973] blk_update_request+0x219/0x3c0
kern :warn : [ 59.775767] scsi_end_request+0x29/0x140
kern :warn : [ 59.776538] scsi_io_completion+0x7a/0x520
kern :warn : [ 59.777324] blk_done_softirq+0x95/0xc0
kern :warn : [ 59.778098] __do_softirq+0xe8/0x313
kern :warn : [ 59.778887] asm_call_on_stack+0x12/0x20
kern :warn : [ 59.779662] </IRQ>
kern :warn : [ 59.780435] do_softirq_own_stack+0x39/0x60
kern :warn : [ 59.781217] irq_exit_rcu+0xd2/0xe0
kern :warn : [ 59.782020] common_interrupt+0x74/0x140
kern :warn : [ 59.782797] ? asm_common_interrupt+0x8/0x40
kern :warn : [ 59.783594] asm_common_interrupt+0x1e/0x40
kern :warn : [ 59.784359] RIP: 0033:0x5572c6b47f20
kern :warn : [ 59.785119] Code: 10 00 00 49 01 c4 44 39 fd 0f 8c a2 00 00 00 ba 00 10 00 00 4c 89 ee 44 89 f7 e8 ab f5 ff ff 85 c0 7e d7 89 c2 4c 89 eb eb 09 <48> 83 c3 01 83 ea 01 74 c7 44 0f be 03 45 84 c0 74 ee 83 fa 03 7e
kern :warn : [ 59.786952] RSP: 002b:00007ffc3fc0fed0 EFLAGS: 00000246
kern :warn : [ 59.787836] RAX: 0000000000001000 RBX: 00007ffc3fc11de1 RCX: 00007fb0403c950e
kern :warn : [ 59.788750] RDX: 00000000000000bf RSI: 00007ffc3fc10ea0 RDI: 0000000000000007
kern :warn : [ 59.789647] RBP: 00000000001b5c00 R08: 0000000000000000 R09: 00007ffc3fc0d6b7
kern :warn : [ 59.790550] R10: 0000000000000000 R11: 0000000000000246 R12: 000000000003e000
kern :warn : [ 59.791447] R13: 00007ffc3fc10ea0 R14: 0000000000000007 R15: 000000000003e000
kern :warn : [ 59.792356] ---[ end trace 1c52c540ed6c08e4 ]---
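The warning fires in the direct-I/O completion path (iomap_dio_bio_end_io -> bio_release_pages) while LTP's aiodio_sparse test is running. That test drives O_DIRECT writes through the Linux AIO interface; the sketch below illustrates only that submission path and is not the LTP test itself (the helper name aio_dio_write_once, the file name, sizes and error handling are arbitrary; link with -laio):

#define _GNU_SOURCE		/* for O_DIRECT */
#include <fcntl.h>
#include <libaio.h>
#include <stdlib.h>
#include <unistd.h>

/* Sketch only: one 4 KiB O_DIRECT write submitted via io_submit(), the
 * same aio + direct-I/O path exercised by aiodio_sparse. */
static int aio_dio_write_once(const char *path)
{
	io_context_t ctx = 0;
	struct iocb cb, *cbs[1] = { &cb };
	struct io_event ev;
	void *buf;
	int fd, ret;

	fd = open(path, O_WRONLY | O_CREAT | O_DIRECT, 0644);
	if (fd < 0)
		return -1;
	if (posix_memalign(&buf, 4096, 4096))	/* O_DIRECT needs alignment */
		return -1;
	if (io_setup(1, &ctx))
		return -1;

	io_prep_pwrite(&cb, fd, buf, 4096, 0);
	ret = io_submit(ctx, 1, cbs);
	if (ret == 1)
		ret = io_getevents(ctx, 1, 1, &ev, NULL);

	io_destroy(ctx);
	free(buf);
	close(fd);
	return ret < 0 ? -1 : 0;
}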
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
lkp
[sched/fair] fcf0553db6: vm-scalability.median 4.6% improvement
by kernel test robot
Greetings,
FYI, we noticed a 1.5% improvement of vm-scalability.throughput due to commit:
commit: fcf0553db6f4c79387864f6e4ab4a891601f395e ("sched/fair: Remove meaningless imbalance calculation")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: vm-scalability
on test machine: 144 threads Intel(R) Xeon(R) CPU E7-8890 v3 @ 2.50GHz with 512G memory
with the following parameters:
runtime: 300s
size: 8T
test: anon-w-seq
cpufreq_governor: performance
ucode: 0x16
test-description: The motivation behind this suite is to exercise functions and regions of the mm/ subsystem of the Linux kernel which are of interest to us.
test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/
In addition to that, the commit also has a significant impact on the following tests:
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase/ucode:
gcc-9/performance/x86_64-rhel-8.3/debian-10.4-x86_64-20200603.cgz/300s/8T/lkp-hsw-4ex1/anon-w-seq/vm-scalability/0x16
commit:
a349834703 ("sched/fair: Rename sg_lb_stats::sum_nr_running to sum_h_nr_running")
fcf0553db6 ("sched/fair: Remove meaningless imbalance calculation")
a349834703010183 fcf0553db6f4c79387864f6e4ab
---------------- ---------------------------
%stddev %change %stddev
\ | \
595857 +4.6% 623549 vm-scalability.median
8.22 ± 7% -3.2 5.01 ± 5% vm-scalability.median_stddev%
88608356 +1.5% 89976003 vm-scalability.throughput
498624 ± 2% -9.5% 451343 vm-scalability.time.involuntary_context_switches
9335 -1.3% 9212 vm-scalability.time.percent_of_cpu_this_job_got
12408855 ± 9% -17.1% 10291775 ± 8% meminfo.DirectMap2M
20311 ± 40% +60.3% 32564 ± 16% numa-numastat.node0.other_node
1759 ± 8% +62.0% 2850 ± 6% syscalls.sys_read.med
6827 ± 2% -2.8% 6636 vmstat.system.cs
12092 ± 18% -43.8% 6792 ± 30% numa-vmstat.node0.nr_slab_reclaimable
6233 ± 26% +29.4% 8069 ± 23% numa-vmstat.node3.nr_slab_reclaimable
14990 ± 12% +45.0% 21732 ± 18% numa-vmstat.node3.nr_slab_unreclaimable
48371 ± 18% -43.8% 27169 ± 30% numa-meminfo.node0.KReclaimable
48371 ± 18% -43.8% 27169 ± 30% numa-meminfo.node0.SReclaimable
167341 ± 16% -32.6% 112749 ± 31% numa-meminfo.node0.Slab
24931 ± 26% +29.5% 32277 ± 23% numa-meminfo.node3.KReclaimable
24931 ± 26% +29.5% 32277 ± 23% numa-meminfo.node3.SReclaimable
59961 ± 12% +45.0% 86933 ± 18% numa-meminfo.node3.SUnreclaim
84893 ± 15% +40.4% 119211 ± 13% numa-meminfo.node3.Slab
4627 ± 9% +25.4% 5802 ± 4% slabinfo.eventpoll_pwq.active_objs
4627 ± 9% +25.4% 5802 ± 4% slabinfo.eventpoll_pwq.num_objs
2190 ± 14% +26.1% 2762 ± 4% slabinfo.kmem_cache.active_objs
2190 ± 14% +26.1% 2762 ± 4% slabinfo.kmem_cache.num_objs
5823 ± 12% +22.2% 7118 ± 4% slabinfo.kmem_cache_node.active_objs
9665 ± 5% +10.3% 10663 ± 3% slabinfo.shmem_inode_cache.active_objs
9775 ± 5% +10.1% 10762 ± 3% slabinfo.shmem_inode_cache.num_objs
322.79 -7.9% 297.22 perf-stat.i.cpu-migrations
21.89 ± 3% -1.2 20.69 perf-stat.i.node-load-miss-rate%
13112232 ± 3% +3.4% 13562841 perf-stat.i.node-loads
21.44 ± 2% -1.1 20.29 perf-stat.overall.node-load-miss-rate%
3160 +1.9% 3219 perf-stat.overall.path-length
6748 ± 2% -2.6% 6570 perf-stat.ps.context-switches
319.09 -8.2% 292.97 perf-stat.ps.cpu-migrations
12905267 ± 3% +3.4% 13345546 perf-stat.ps.node-loads
12193 ± 5% +8.6% 13239 ± 4% softirqs.CPU105.RCU
12791 ± 6% +12.4% 14382 ± 7% softirqs.CPU22.RCU
12781 ± 5% +12.3% 14351 ± 4% softirqs.CPU28.RCU
12870 ± 4% +9.0% 14033 ± 7% softirqs.CPU31.RCU
11451 ± 5% +11.7% 12785 ± 4% softirqs.CPU90.RCU
11449 ± 6% +9.1% 12497 ± 6% softirqs.CPU91.RCU
11486 ± 6% +8.9% 12510 softirqs.CPU93.RCU
12197 ± 5% +10.4% 13462 ± 5% softirqs.CPU97.RCU
3899 ±100% -78.7% 830.89 ± 67% sched_debug.cfs_rq:/.load_avg.max
386.54 ± 91% -76.1% 92.43 ± 61% sched_debug.cfs_rq:/.load_avg.stddev
747.05 ± 7% -26.9% 546.12 ± 31% sched_debug.cfs_rq:/.runnable_load_avg.max
-203275 +540.7% -1302453 sched_debug.cfs_rq:/.spread0.avg
1142 ± 5% +9.6% 1251 ± 6% sched_debug.cfs_rq:/.util_avg.max
1.00 ±163% +2117.5% 22.17 ±100% sched_debug.cfs_rq:/.util_avg.min
212.47 ± 14% -16.9% 176.57 ± 4% sched_debug.cfs_rq:/.util_est_enqueued.stddev
0.00 ± 7% +226.7% 0.00 ± 18% sched_debug.cpu.next_balance.stddev
2.48 ± 20% -24.6% 1.87 ± 11% sched_debug.cpu.nr_running.max
0.64 ± 29% -49.1% 0.33 ± 11% sched_debug.cpu.nr_running.stddev
137.50 ± 54% +194.2% 404.50 ± 78% interrupts.CPU109.RES:Rescheduling_interrupts
822.50 ± 22% -39.4% 498.75 ± 49% interrupts.CPU114.CAL:Function_call_interrupts
126.00 ± 72% +161.1% 329.00 ± 48% interrupts.CPU117.RES:Rescheduling_interrupts
153.00 ± 69% +121.6% 339.00 ± 51% interrupts.CPU119.RES:Rescheduling_interrupts
162.75 ± 83% +170.8% 440.75 ± 36% interrupts.CPU121.RES:Rescheduling_interrupts
798.00 ± 30% -45.0% 438.75 ± 28% interrupts.CPU126.CAL:Function_call_interrupts
3831 ± 37% -96.4% 138.50 ±106% interrupts.CPU128.NMI:Non-maskable_interrupts
3831 ± 37% -96.4% 138.50 ±106% interrupts.CPU128.PMI:Performance_monitoring_interrupts
218.25 ± 44% -51.7% 105.50 ± 4% interrupts.CPU128.RES:Rescheduling_interrupts
453.00 ± 74% -74.3% 116.50 ± 26% interrupts.CPU130.RES:Rescheduling_interrupts
2382 ± 15% -96.6% 80.75 ±133% interrupts.CPU138.NMI:Non-maskable_interrupts
2382 ± 15% -96.6% 80.75 ±133% interrupts.CPU138.PMI:Performance_monitoring_interrupts
557.25 ±155% +341.2% 2458 ± 79% interrupts.CPU14.NMI:Non-maskable_interrupts
557.25 ±155% +341.2% 2458 ± 79% interrupts.CPU14.PMI:Performance_monitoring_interrupts
782.00 ± 31% -37.0% 492.50 ± 50% interrupts.CPU17.CAL:Function_call_interrupts
779.75 ± 31% -36.8% 492.50 ± 50% interrupts.CPU18.CAL:Function_call_interrupts
782.25 ± 31% -37.2% 491.00 ± 49% interrupts.CPU20.CAL:Function_call_interrupts
5855 ± 20% -75.7% 1420 ±106% interrupts.CPU23.NMI:Non-maskable_interrupts
5855 ± 20% -75.7% 1420 ±106% interrupts.CPU23.PMI:Performance_monitoring_interrupts
785.00 ± 30% -38.0% 487.00 ± 51% interrupts.CPU3.CAL:Function_call_interrupts
1266 ±166% +241.5% 4325 ± 38% interrupts.CPU34.NMI:Non-maskable_interrupts
1266 ±166% +241.5% 4325 ± 38% interrupts.CPU34.PMI:Performance_monitoring_interrupts
784.00 ± 30% -37.0% 493.75 ± 49% interrupts.CPU36.CAL:Function_call_interrupts
783.25 ± 31% -37.6% 489.00 ± 51% interrupts.CPU37.CAL:Function_call_interrupts
325.75 ± 11% +80.6% 588.25 ± 45% interrupts.CPU37.RES:Rescheduling_interrupts
782.00 ± 31% -37.0% 492.50 ± 50% interrupts.CPU39.CAL:Function_call_interrupts
781.50 ± 31% -38.0% 484.50 ± 52% interrupts.CPU41.CAL:Function_call_interrupts
292.50 ± 21% +59.2% 465.75 ± 26% interrupts.CPU43.RES:Rescheduling_interrupts
233.00 ± 12% +59.4% 371.50 ± 20% interrupts.CPU48.RES:Rescheduling_interrupts
883.00 ±160% -97.1% 25.25 ±150% interrupts.CPU50.NMI:Non-maskable_interrupts
883.00 ±160% -97.1% 25.25 ±150% interrupts.CPU50.PMI:Performance_monitoring_interrupts
265.25 ± 28% +69.8% 450.50 ± 27% interrupts.CPU50.RES:Rescheduling_interrupts
249.75 ± 19% +40.8% 351.75 ± 11% interrupts.CPU51.RES:Rescheduling_interrupts
785.50 ± 30% -37.2% 493.00 ± 49% interrupts.CPU53.CAL:Function_call_interrupts
1.75 ±116% +1.2e+05% 2020 ±108% interrupts.CPU53.NMI:Non-maskable_interrupts
1.75 ±116% +1.2e+05% 2020 ±108% interrupts.CPU53.PMI:Performance_monitoring_interrupts
826.75 ± 79% +429.5% 4377 ± 27% interrupts.CPU56.NMI:Non-maskable_interrupts
826.75 ± 79% +429.5% 4377 ± 27% interrupts.CPU56.PMI:Performance_monitoring_interrupts
782.50 ± 31% -37.6% 488.00 ± 51% interrupts.CPU57.CAL:Function_call_interrupts
32.25 ±164% -100.0% 0.00 interrupts.CPU57.TLB:TLB_shootdowns
782.50 ± 31% -36.9% 494.00 ± 50% interrupts.CPU6.CAL:Function_call_interrupts
781.00 ± 31% -36.8% 493.50 ± 50% interrupts.CPU61.CAL:Function_call_interrupts
658.75 ±169% +429.9% 3490 ± 43% interrupts.CPU61.NMI:Non-maskable_interrupts
658.75 ±169% +429.9% 3490 ± 43% interrupts.CPU61.PMI:Performance_monitoring_interrupts
405.75 ± 20% -35.6% 261.50 ± 17% interrupts.CPU62.RES:Rescheduling_interrupts
781.75 ± 31% -36.8% 494.00 ± 50% interrupts.CPU65.CAL:Function_call_interrupts
782.50 ± 31% -37.0% 492.75 ± 50% interrupts.CPU66.CAL:Function_call_interrupts
838.25 ±121% +513.8% 5145 ± 30% interrupts.CPU66.NMI:Non-maskable_interrupts
838.25 ±121% +513.8% 5145 ± 30% interrupts.CPU66.PMI:Performance_monitoring_interrupts
782.00 ± 31% -37.1% 492.25 ± 50% interrupts.CPU67.CAL:Function_call_interrupts
782.25 ± 31% -37.0% 492.75 ± 49% interrupts.CPU69.CAL:Function_call_interrupts
780.00 ± 31% -37.3% 489.25 ± 49% interrupts.CPU71.CAL:Function_call_interrupts
784.25 ± 31% -38.0% 486.50 ± 51% interrupts.CPU75.CAL:Function_call_interrupts
783.25 ± 31% -36.9% 494.25 ± 50% interrupts.CPU77.CAL:Function_call_interrupts
822.00 ± 33% -39.9% 494.25 ± 50% interrupts.CPU83.CAL:Function_call_interrupts
789.75 ± 29% -37.1% 496.50 ± 49% interrupts.CPU90.CAL:Function_call_interrupts
5966 ± 15% -53.5% 2777 ± 62% interrupts.CPU94.NMI:Non-maskable_interrupts
5966 ± 15% -53.5% 2777 ± 62% interrupts.CPU94.PMI:Performance_monitoring_interrupts
46379 ± 9% -10.1% 41674 ± 4% interrupts.RES:Rescheduling_interrupts
68.78 -10.0 58.75 ± 8% perf-profile.calltrace.cycles-pp.do_access
30.04 ± 4% -4.3 25.69 ± 9% perf-profile.calltrace.cycles-pp.do_page_fault.page_fault.do_access
30.06 ± 4% -4.3 25.71 ± 9% perf-profile.calltrace.cycles-pp.page_fault.do_access
30.00 ± 4% -4.3 25.66 ± 9% perf-profile.calltrace.cycles-pp.do_user_addr_fault.do_page_fault.page_fault.do_access
29.93 ± 4% -4.3 25.59 ± 9% perf-profile.calltrace.cycles-pp.handle_mm_fault.do_user_addr_fault.do_page_fault.page_fault.do_access
26.63 ± 2% -2.7 23.88 ± 6% perf-profile.calltrace.cycles-pp.do_rw_once
0.57 ± 5% -0.3 0.27 ±100% perf-profile.calltrace.cycles-pp.rmqueue.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.do_huge_pmd_anonymous_page
0.63 ± 6% -0.2 0.43 ± 58% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
0.63 ± 6% -0.2 0.43 ± 58% perf-profile.calltrace.cycles-pp.alloc_pages_vma.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
0.62 ± 5% -0.2 0.42 ± 58% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.do_huge_pmd_anonymous_page.__handle_mm_fault
28.25 ± 5% +6.6 34.88 ± 6% perf-profile.calltrace.cycles-pp.clear_page_erms.clear_subpage.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault
29.30 ± 5% +6.7 36.03 ± 6% perf-profile.calltrace.cycles-pp.clear_subpage.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault
30.21 ± 5% +7.0 37.23 ± 6% perf-profile.calltrace.cycles-pp.clear_huge_page.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault
31.31 ± 5% +7.3 38.61 ± 6% perf-profile.calltrace.cycles-pp.do_huge_pmd_anonymous_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.do_page_fault
31.39 ± 5% +7.3 38.70 ± 6% perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.do_page_fault.page_fault
1.50 ±106% +11.7 13.17 ± 35% perf-profile.calltrace.cycles-pp.handle_mm_fault.do_user_addr_fault.do_page_fault.page_fault
1.50 ±106% +11.7 13.20 ± 35% perf-profile.calltrace.cycles-pp.do_user_addr_fault.do_page_fault.page_fault
1.51 ±106% +11.7 13.21 ± 35% perf-profile.calltrace.cycles-pp.page_fault
1.50 ±106% +11.7 13.21 ± 35% perf-profile.calltrace.cycles-pp.do_page_fault.page_fault
65.44 -9.4 56.04 ± 8% perf-profile.children.cycles-pp.do_access
30.08 -3.4 26.70 ± 7% perf-profile.children.cycles-pp.do_rw_once
0.06 ± 11% +0.0 0.08 ± 10% perf-profile.children.cycles-pp.pagevec_lru_move_fn
0.14 ± 6% +0.0 0.16 ± 6% perf-profile.children.cycles-pp._raw_spin_lock_irqsave
0.13 ± 10% +0.0 0.16 ± 9% perf-profile.children.cycles-pp.pte_alloc_one
0.12 ± 12% +0.0 0.15 ± 8% perf-profile.children.cycles-pp.prep_new_page
0.12 ± 10% +0.0 0.16 ± 20% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.12 ± 9% +0.0 0.15 ± 10% perf-profile.children.cycles-pp.prepare_exit_to_usermode
0.14 ± 8% +0.0 0.19 ± 9% perf-profile.children.cycles-pp.swapgs_restore_regs_and_return_to_usermode
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.irq_work_interrupt
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.smp_irq_work_interrupt
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.irq_work_run
0.00 +0.1 0.05 ± 8% perf-profile.children.cycles-pp.printk
0.00 +0.1 0.06 ± 11% perf-profile.children.cycles-pp.irq_work_run_list
0.00 +0.1 0.07 ± 31% perf-profile.children.cycles-pp.__intel_pmu_enable_all
0.35 ± 3% +0.1 0.43 ± 7% perf-profile.children.cycles-pp._cond_resched
0.38 ± 6% +0.1 0.48 ± 6% perf-profile.children.cycles-pp.___might_sleep
0.46 ± 8% +0.1 0.60 ± 13% perf-profile.children.cycles-pp.scheduler_tick
0.66 ± 8% +0.2 0.81 ± 8% perf-profile.children.cycles-pp.rmqueue
0.70 ± 8% +0.2 0.87 ± 8% perf-profile.children.cycles-pp.alloc_pages_vma
0.79 ± 9% +0.2 0.97 ± 8% perf-profile.children.cycles-pp.get_page_from_freelist
0.83 ± 8% +0.2 1.03 ± 7% perf-profile.children.cycles-pp.__alloc_pages_nodemask
1.09 ± 6% +0.3 1.38 ± 11% perf-profile.children.cycles-pp.__hrtimer_run_queues
2.11 ± 3% +0.5 2.63 ± 12% perf-profile.children.cycles-pp.apic_timer_interrupt
28.56 ± 5% +6.6 35.18 ± 6% perf-profile.children.cycles-pp.clear_page_erms
29.39 ± 5% +6.8 36.14 ± 6% perf-profile.children.cycles-pp.clear_subpage
30.34 ± 5% +7.0 37.30 ± 6% perf-profile.children.cycles-pp.clear_huge_page
31.39 ± 5% +7.2 38.61 ± 6% perf-profile.children.cycles-pp.do_huge_pmd_anonymous_page
31.47 ± 5% +7.2 38.72 ± 6% perf-profile.children.cycles-pp.__handle_mm_fault
31.52 ± 5% +7.3 38.77 ± 6% perf-profile.children.cycles-pp.handle_mm_fault
31.60 ± 5% +7.3 38.87 ± 6% perf-profile.children.cycles-pp.do_user_addr_fault
31.67 ± 5% +7.3 38.95 ± 6% perf-profile.children.cycles-pp.page_fault
31.63 ± 5% +7.3 38.91 ± 6% perf-profile.children.cycles-pp.do_page_fault
29.16 ± 4% -4.1 25.05 ± 7% perf-profile.self.cycles-pp.do_access
27.38 -3.1 24.28 ± 7% perf-profile.self.cycles-pp.do_rw_once
0.05 ± 8% +0.0 0.07 ± 12% perf-profile.self.cycles-pp.prep_new_page
0.09 ± 9% +0.0 0.12 ± 11% perf-profile.self.cycles-pp.prepare_exit_to_usermode
0.00 +0.1 0.05 ± 8% perf-profile.self.cycles-pp.do_huge_pmd_anonymous_page
0.00 +0.1 0.07 ± 31% perf-profile.self.cycles-pp.__intel_pmu_enable_all
0.28 ± 3% +0.1 0.35 ± 7% perf-profile.self.cycles-pp._cond_resched
0.35 ± 4% +0.1 0.43 ± 6% perf-profile.self.cycles-pp.___might_sleep
0.48 ± 8% +0.1 0.60 ± 9% perf-profile.self.cycles-pp.rmqueue
0.88 ± 4% +0.2 1.11 ± 3% perf-profile.self.cycles-pp.clear_subpage
28.18 ± 5% +6.4 34.57 ± 6% perf-profile.self.cycles-pp.clear_page_erms
vm-scalability.median
660000 +------------------------------------------------------------------+
650000 |-+O |
| O O O O O O O |
640000 |-+ O O O O O O O O O |
630000 |-+ O O O O |
| O O O |
620000 |-+ |
610000 |-+ |
600000 |-+ +..+. .+ |
| +.. .+.. .. +. + .+.+ |
590000 |-.+. + + + +. |
580000 |.+ +..+ +.. +.. : |
| + : |
570000 |-+ + + |
560000 +------------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[workqueue] b9b6541211: WARNING:at_kernel/workqueue.c:#check_flush_dependency
by kernel test robot
Greetings,
FYI, we noticed the following commit (built with gcc-9):
commit: b9b6541211fde678ca06596cd7ab732f26663bfc ("[PATCH v3] workqueue: Warn when work flush own workqueue")
url: https://github.com/0day-ci/linux/commits/Qianli-Zhao/workqueue-Warn-when-...
base: https://git.kernel.org/cgit/linux/kernel/git/tj/wq.git for-next
in testcase: trinity
with the following parameters:
runtime: 300s
test-description: Trinity is a Linux system call fuzz tester.
test-url: http://codemonkey.org.uk/projects/trinity/
on test machine: qemu-system-i386 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+-----------------------------------------------------------+------------+------------+
| | 10cdb15759 | b9b6541211 |
+-----------------------------------------------------------+------------+------------+
| boot_successes | 4 | 0 |
| boot_failures | 0 | 7 |
| WARNING:at_kernel/workqueue.c:#check_flush_dependency | 0 | 7 |
| EIP:check_flush_dependency | 0 | 7 |
| WARNING:at_kernel/rcu/rcutorture.c:#rcutorture_oom_notify | 0 | 1 |
| EIP:rcutorture_oom_notify | 0 | 1 |
| calltrace:do_softirq_own_stack | 0 | 1 |
| BUG:kernel_NULL_pointer_dereference,address | 0 | 1 |
| Oops:#[##] | 0 | 1 |
| EIP:rcu_torture_fwd_cb_hist | 0 | 1 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 1 |
+-----------------------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <lkp(a)intel.com>
[ 18.864047] WARNING: CPU: 1 PID: 16 at kernel/workqueue.c:2600 check_flush_dependency+0xc9/0x180
[ 18.864047] Modules linked in:
[ 18.864047] CPU: 1 PID: 16 Comm: kworker/1:0 Not tainted 5.7.0-rc4-00031-gb9b6541211fde #1
[ 18.874069] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
[ 18.874069] Workqueue: events async_call_lookup_ports
[ 18.874069] EIP: check_flush_dependency+0xc9/0x180
[ 18.874069] Code: 75 9c 89 45 f0 b2 01 88 15 e2 b1 b6 c2 8d 93 d4 00 00 00 89 54 24 08 8b 50 0c c7 04 24 40 57 75 c2 89 54 24 04 e8 87 34 fe ff <0f> 0b 8b 45 f0 e9 6a ff ff ff 8d 74 26 00 90 31 ff e9 42 ff ff ff
[ 18.874069] EAX: 00000059 EBX: f3c07200 ECX: 00000000 EDX: 00000000
[ 18.874069] ESI: f3f23700 EDI: c1f1f6b0 EBP: f3f1de04 ESP: f3f1dde0
[ 18.874069] DS: 007b ES: 007b FS: 00d8 GS: 0000 SS: 0068 EFLAGS: 00010096
[ 18.874069] CR0: 80050033 CR2: b7f1a000 CR3: 02e09000 CR4: 000406d0
[ 18.874069] DR0: 00000000 DR1: 00000000 DR2: 00000000 DR3: 00000000
[ 18.874069] DR6: fffe0ff0 DR7: 00000400
[ 18.874069] Call Trace:
[ 18.874069] ? unregister_device+0x65/0x65
[ 18.874069] __flush_work+0xce/0x410
[ 18.874069] ? __flush_work+0x3a/0x410
[ 18.874069] ? mark_held_locks+0x3f/0x70
[ 18.874069] ? queue_work_on+0x5d/0x80
[ 18.874069] ? snd_seq_device_load_drivers+0x1d/0x30
[ 18.874069] ? lockdep_hardirqs_on+0xe4/0x1b0
[ 18.874069] ? queue_work_on+0x5d/0x80
[ 18.874069] ? trace_hardirqs_on+0x4d/0x2d0
[ 18.874069] flush_work+0xf/0x20
[ 18.874069] snd_seq_device_load_drivers+0x27/0x30
[ 18.874069] snd_seq_client_use_ptr+0x101/0x140
[ 18.874069] snd_seq_ioctl_query_next_client+0x41/0x80
[ 18.874069] snd_seq_kernel_client_ctl+0x45/0x70
[ 18.874069] snd_seq_oss_midi_lookup_ports+0x5e/0xc0
[ 18.874069] async_call_lookup_ports+0x12/0x20
[ 18.874069] process_one_work+0x213/0x590
[ 18.874069] ? process_one_work+0x172/0x590
[ 18.874069] worker_thread+0x15e/0x3c0
[ 18.874069] kthread+0xee/0x120
[ 18.874069] ? process_one_work+0x590/0x590
[ 18.874069] ? kthread_park+0xa0/0xa0
[ 18.874069] ret_from_fork+0x19/0x24
[ 18.874069] irq event stamp: 786
[ 18.874069] hardirqs last enabled at (785): [<c1085bcd>] queue_work_on+0x5d/0x80
[ 18.874069] hardirqs last disabled at (786): [<c2151005>] _raw_spin_lock_irq+0x15/0x80
[ 18.874069] softirqs last enabled at (310): [<c10f7fc0>] srcu_invoke_callbacks+0x80/0x120
[ 18.874069] softirqs last disabled at (306): [<c10f7fc0>] srcu_invoke_callbacks+0x80/0x120
[ 18.874069] ---[ end trace bea0f009b4a26e1f ]---
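The patch under test ("workqueue: Warn when work flush own workqueue") adds a warning, reported from check_flush_dependency(), for a work item that flushes work queued on its own workqueue, and the trace above shows that shape: async_call_lookup_ports, running on the system "events" workqueue, reaches flush_work() through snd_seq_device_load_drivers(). A minimal hypothetical module (invented names such as flush_demo, inner_fn and outer_fn; not the ALSA sequencer code) that reproduces the flagged pattern would look roughly like this:

#include <linux/module.h>
#include <linux/workqueue.h>

static void inner_fn(struct work_struct *work)
{
	/* placeholder payload */
}
static DECLARE_WORK(inner_work, inner_fn);

/* Runs on system_wq; queueing and then flushing another work item on the
 * same workqueue is the dependency the new warning reports. */
static void outer_fn(struct work_struct *work)
{
	schedule_work(&inner_work);
	flush_work(&inner_work);
}
static DECLARE_WORK(outer_work, outer_fn);

static int __init flush_demo_init(void)
{
	schedule_work(&outer_work);
	return 0;
}

static void __exit flush_demo_exit(void)
{
	flush_work(&outer_work);
}

module_init(flush_demo_init);
module_exit(flush_demo_exit);
MODULE_LICENSE("GPL");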
To reproduce:
# build kernel
cd linux
cp config-5.7.0-rc4-00031-gb9b6541211fde .config
make HOSTCC=gcc-9 CC=gcc-9 ARCH=i386 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
lkp
[bpf] 3ebc0a7f46: BUG:KASAN:use-after-free_in_b
by kernel test robot
Greetings,
FYI, we noticed the following commit (built with gcc-9):
commit: 3ebc0a7f460e4f73f8c9ab9dca89a57dc32c1602 ("[PATCH bpf-next v4 03/30] bpf: memcg-based memory accounting for bpf maps")
url: https://github.com/0day-ci/linux/commits/Roman-Gushchin/bpf-switch-to-mem...
base: https://git.kernel.org/cgit/linux/kernel/git/bpf/bpf-next.git master
in testcase: locktorture
with the following parameters:
runtime: 300s
test: cpuhotplug
test-description: This torture test consists of creating a number of kernel threads which acquire the lock and hold it for a specific amount of time, thus simulating different critical region behaviors.
test-url: https://www.kernel.org/doc/Documentation/locking/locktorture.txt
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+-------------------------------------------------------------------------------+------------+------------+
| | e96c019fb3 | 3ebc0a7f46 |
+-------------------------------------------------------------------------------+------------+------------+
| boot_successes | 0 | 0 |
| boot_failures | 6 | 4 |
| WARNING:suspicious_RCU_usage | 6 | |
| security/device_cgroup.c:#RCU-list_traversed_in_non-reader_section | 6 | |
| drivers/char/ipmi/ipmi_msghandler.c:#RCU-list_traversed_in_non-reader_section | 6 | |
| BUG:KASAN:use-after-free_in_b | 0 | 4 |
+-------------------------------------------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
[ 41.560152] BUG: KASAN: use-after-free in bpf_map_free_deferred+0x117/0x38b
[ 41.560762] Read of size 8 at addr ffff8881e4114858 by task kworker/0:1/15
[ 41.561528]
[ 41.561737] CPU: 0 PID: 15 Comm: kworker/0:1 Not tainted 5.9.0-rc1-00133-g3ebc0a7f460e4 #1
[ 41.562648] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.12.0-1 04/01/2014
[ 41.563562] Workqueue: events bpf_map_free_deferred
[ 41.563937] Call Trace:
[ 41.564147] ? dump_stack+0x31/0x40
[ 41.564423] ? print_address_description+0x2c/0x6d8
[ 41.564851] ? rcu_read_unlock_sched_notrace+0x52/0x52
[ 41.565243] ? bpf_map_free_deferred+0x117/0x38b
[ 41.565582] ? kasan_report+0x1b1/0x222
[ 41.565872] ? bpf_map_free_deferred+0x117/0x38b
[ 41.566214] ? __asan_report_load8_noabort+0x1e/0x26
[ 41.566570] ? bpf_map_free_deferred+0x117/0x38b
[ 41.566906] ? bpf_map_charge_move+0x8d/0x8d
[ 41.567234] ? process_one_work+0x819/0xe1c
[ 41.567570] ? __lock_acquired+0x46e/0x5f6
[ 41.567885] ? pwq_dec_nr_in_flight+0x363/0x363
[ 41.568224] ? preempt_count_add+0x1b/0x24
[ 41.568535] ? __kasan_check_write+0x1e/0x26
[ 41.568843] ? worker_clr_flags+0x192/0x1b7
[ 41.569168] ? worker_thread+0x787/0x9e7
[ 41.569480] ? kthread+0x47e/0x494
[ 41.569730] ? create_worker+0x523/0x523
[ 41.570017] ? kthread_create_worker+0xc3/0xc3
[ 41.570345] ? ret_from_fork+0x1f/0x30
[ 41.570657]
[ 41.570781] Allocated by task 0:
[ 41.571016] (stack is not available)
[ 41.571290]
[ 41.571414] Freed by task 15:
[ 41.571640] arch_stack_walk+0xbc/0xd0
[ 41.571914] stack_trace_save+0x85/0xa6
[ 41.572203] kasan_save_stack+0x22/0x58
[ 41.572484] kasan_set_track+0x22/0x2e
[ 41.572762] kasan_set_free_info+0x29/0x3f
[ 41.573056] __kasan_slab_free+0x165/0x192
[ 41.573377] kasan_slab_free+0x11/0x19
[ 41.573649] slab_free_freelist_hook+0x1e5/0x29c
[ 41.573976] kfree+0x3b7/0x57a
[ 41.574202] trie_free+0x8d/0x14e
[ 41.574444] bpf_map_free_deferred+0xd2/0x38b
[ 41.574762] process_one_work+0x819/0xe1c
[ 41.575060] worker_thread+0x787/0x9e7
[ 41.575330] kthread+0x47e/0x494
[ 41.575566] ret_from_fork+0x1f/0x30
[ 41.575822]
[ 41.575945] The buggy address belongs to the object at ffff8881e4114800
[ 41.575945] which belongs to the cache kmalloc-512 of size 512
[ 41.576811] The buggy address is located 88 bytes inside of
[ 41.576811] 512-byte region [ffff8881e4114800, ffff8881e4114a00)
[ 41.577626] The buggy address belongs to the page:
[ 41.577971] page:(____ptrval____) refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x1e4114
[ 41.578627] head:(____ptrval____) order:1 compound_mapcount:0
[ 41.579029] flags: 0x4000000000010200(slab|head)
[ 41.579358] raw: 4000000000010200 dead000000000100 dead000000000122 ffff8881f5c41280
[ 41.579921] raw: 0000000000000000 0000000080080008 00000001ffffffff 0000000000000000
[ 41.580490] page dumped because: kasan: bad access detected
[ 41.580907]
[ 41.581029] Memory state around the buggy address:
[ 41.581366] ffff8881e4114700: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 41.581860] ffff8881e4114780: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[ 41.582369] >ffff8881e4114800: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[ 41.582866] ^
[ 41.583292] ffff8881e4114880: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[ 41.583787] ffff8881e4114900: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[ 41.585494] ==================================================================
[ 41.586196] Disabling lock debugging due to kernel taint
[ 42.162717] rcu-perf: rcu_perf_writer 0 has 100 measurements
[ 42.199609] Dumping ftrace buffer:
[ 42.200080] (ftrace buffer empty)
[ 42.202418] rcu-perf: Test complete
[ 42.490753] random: systemd: uninitialized urandom read (16 bytes read)
[ 42.496513] random: systemd: uninitialized urandom read (16 bytes read)
[ OK ] Listening on RPCbind Server Activation Socket.
[ 42.503401] random: systemd: uninitialized urandom read (16 bytes read)
[ OK ] Created slice system-serial\x2dgetty.slice.
[ OK ] Created slice User and Session Slice.
[ OK ] Listening on udev Control Socket.
[ OK ] Listening on Syslog Socket.
[ OK ] Listening on udev Kernel Socket.
[ OK ] Listening on initctl Compatibility Named Pipe.
[ OK ] Reached target Swap.
[ OK ] Listening on Journal Socket.
Mounting POSIX Message Queue File System...
Starting Remount Root and Kernel File Systems...
Mounting Kernel Debug File System...
Starting udev Coldplug all Devices...
[ OK ] Reached target Local Encrypted Volumes.
[ OK ] Listening on Journal Socket (/dev/log).
[ OK ] Reached target Slices.
Mounting RPC Pipe File System...
Starting Load Kernel Modules...
[ OK ] Reached target Paths.
[ OK ] Listening on Journal Audit Socket.
[ 43.278865] random: fast init done
Starting Journal Service...
[ OK ] Created slice system-getty.slice.
[ OK ] Mounted POSIX Message Queue File System.
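The KASAN report above shows the deferred map-free worker reading the map object after it has already been released: the same worker (task 15) frees the allocation through trie_free()/kfree(), and bpf_map_free_deferred() then reads 8 bytes at offset 88 of the freed 512-byte object. In generic terms, and with purely hypothetical names (demo_map, demo_map_free_deferred) rather than the actual bpf code, the flagged pattern is a deferred-free worker that still dereferences the object after the subsystem's free callback has destroyed it:

#include <linux/kernel.h>
#include <linux/workqueue.h>

/* Hypothetical illustration only: once the free callback has released the
 * containing object, the follow-up accounting read below is exactly the
 * kind of use-after-free KASAN reports above. */
struct demo_map {
	struct work_struct work;
	void (*map_free)(struct demo_map *map);
	unsigned long pages;			/* read after free below */
};

static void demo_map_free_deferred(struct work_struct *work)
{
	struct demo_map *map = container_of(work, struct demo_map, work);
	unsigned long pages;

	map->map_free(map);	/* callback frees 'map' itself */
	pages = map->pages;	/* use-after-free read */
	(void)pages;		/* ... would then be uncharged from accounting ... */
}

The usual fix for this shape of bug is to copy out whatever the worker still needs before invoking the free callback.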
To reproduce:
# build kernel
cd linux
cp config-5.9.0-rc1-00133-g3ebc0a7f460e4 .config
make HOSTCC=gcc-9 CC=gcc-9 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage modules
make HOSTCC=gcc-9 CC=gcc-9 ARCH=x86_64 INSTALL_MOD_PATH=<mod-install-dir> modules_install
cd <mod-install-dir>
find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> -m modules.cgz job-script # job-script is attached in this email
Thanks,
Rong Chen
[bpf] eda7ef0c7b: canonical_address#:#[##]
by kernel test robot
Greetings,
FYI, we noticed the following commit (built with gcc-9):
commit: eda7ef0c7b86e72b35c62c9d1e55c57cecd0abe7 ("[PATCH bpf-next v4 19/30] bpf: eliminate rlimit-based memory accounting for hashtab maps")
url: https://github.com/0day-ci/linux/commits/Roman-Gushchin/bpf-switch-to-mem...
base: https://git.kernel.org/cgit/linux/kernel/git/bpf/bpf-next.git master
in testcase: trinity
with the following parameters:
runtime: 300s
test-description: Trinity is a Linux system call fuzz tester.
test-url: http://codemonkey.org.uk/projects/trinity/
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+------------------------------------------+------------+------------+
| | 4ad9edebed | eda7ef0c7b |
+------------------------------------------+------------+------------+
| boot_successes | 4 | 0 |
| boot_failures | 0 | 4 |
| canonical_address#:#[##] | 0 | 4 |
| RIP:bpf_map_free_deferred | 0 | 4 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 4 |
+------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
[ 37.201357] init: tty2 main process ended, respawning
[ 37.232660] init: tty3 main process (480) terminated with status 1
[ 37.234056] init: tty3 main process ended, respawning
[ 37.239246] init: tty6 main process (482) terminated with status 1
[ 37.240789] init: tty6 main process ended, respawning
[ 40.878036] general protection fault, probably for non-canonical address 0x6b6b6b6b6b6b6b6b: 0000 [#1] SMP
[ 40.900083] CPU: 0 PID: 157 Comm: kworker/0:2 Not tainted 5.9.0-rc1-00149-geda7ef0c7b86e7 #1
[ 40.901680] Workqueue: events bpf_map_free_deferred
[ 40.902630] RIP: 0010:bpf_map_free_deferred+0x57/0xdf
[ 40.903694] Code: aa ff ff ff 48 89 ef e8 e2 ee 27 00 48 8b 83 70 ff ff ff 48 89 ef ff 50 18 48 89 e7 e8 66 ff ff ff 48 8b 5b c8 48 85 db 74 6c <f6> 43 7c 01 75 66 e8 ac dd ff ff e8 c0 ec ff ff e8 aa 91 74 00 85
[ 40.907343] RSP: 0018:ffff88821a353e38 EFLAGS: 00010202
[ 40.908373] RAX: 0000000000000000 RBX: 6b6b6b6b6b6b6b6b RCX: 0000000000000006
[ 40.909786] RDX: ffff88821aa98b40 RSI: 0000000000000000 RDI: 0000000000000000
[ 40.911066] RBP: ffff888236069c00 R08: 0000000000000400 R09: ffffea000867e208
[ 40.912407] R10: ffffea0008359048 R11: 0000000000000002 R12: ffff888237c2a780
[ 40.913801] R13: ffff888237c2fd00 R14: 0000000000000000 R15: ffff888236069c98
[ 40.915216] FS: 0000000000000000(0000) GS:ffff888237c00000(0000) knlGS:0000000000000000
[ 40.916762] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 40.917874] CR2: 0000000000442d70 CR3: 000000020d685000 CR4: 00000000000406b0
[ 40.919297] Call Trace:
[ 40.919816] process_one_work+0x288/0x475
[ 40.920573] ? worker_thread+0x205/0x254
[ 40.921377] worker_thread+0x1a5/0x254
[ 40.922147] ? create_worker+0x17d/0x17d
[ 40.922899] kthread+0x108/0x110
[ 40.923570] ? kthread_create_worker_on_cpu+0x65/0x65
[ 40.924598] ret_from_fork+0x1f/0x30
[ 40.925312] Modules linked in: ide_cd_mod cdrom ide_pci_generic evdev i2c_piix4 piix ide_core i2c_core virtio_blk parport_pc qemu_fw_cfg processor button
[ 40.928068] ---[ end trace 270fed0e47b93410 ]---
[ 40.928901] RIP: 0010:bpf_map_free_deferred+0x57/0xdf
[ 40.929782] Code: aa ff ff ff 48 89 ef e8 e2 ee 27 00 48 8b 83 70 ff ff ff 48 89 ef ff 50 18 48 89 e7 e8 66 ff ff ff 48 8b 5b c8 48 85 db 74 6c <f6> 43 7c 01 75 66 e8 ac dd ff ff e8 c0 ec ff ff e8 aa 91 74 00 85
[ 40.933487] RSP: 0018:ffff88821a353e38 EFLAGS: 00010202
[ 40.934549] RAX: 0000000000000000 RBX: 6b6b6b6b6b6b6b6b RCX: 0000000000000006
[ 40.970611] RDX: ffff88821aa98b40 RSI: 0000000000000000 RDI: 0000000000000000
[ 40.971949] RBP: ffff888236069c00 R08: 0000000000000400 R09: ffffea000867e208
[ 40.973305] R10: ffffea0008359048 R11: 0000000000000002 R12: ffff888237c2a780
[ 40.974789] R13: ffff888237c2fd00 R14: 0000000000000000 R15: ffff888236069c98
[ 40.980705] FS: 0000000000000000(0000) GS:ffff888237c00000(0000) knlGS:0000000000000000
[ 40.982086] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 40.983172] CR2: 0000000000442d70 CR3: 000000020d685000 CR4: 00000000000406b0
[ 40.984665] Kernel panic - not syncing: Fatal exception
[ 40.985818] Kernel Offset: disabled
Kboot worker: lkp-worker46
Elapsed time: 60
To reproduce:
# build kernel
cd linux
cp config-5.9.0-rc1-00149-geda7ef0c7b86e7 .config
make HOSTCC=gcc-9 CC=gcc-9 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen
[block] 8cfb68ac58: fsmark.files_per_sec -17.9% regression
by kernel test robot
Greetings,
FYI, we noticed a -17.9% regression of fsmark.files_per_sec due to commit:
commit: 8cfb68ac58ee842a4cb35efb1a5f2382abcc48df ("block: make QUEUE_SYSFS_BIT_FNS a little more useful")
git://git.infradead.org/users/hch/block.git bdi-cleanups
in testcase: fsmark
on test machine: 96 threads Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz with 256G memory
with the following parameters:
iterations: 1x
nr_threads: 32t
disk: 1SSD
fs: btrfs
filesize: 9B
test_size: 400M
sync_method: fsyncBeforeClose
nr_directories: 16d
nr_files_per_directory: 256fpd
cpufreq_governor: performance
ucode: 0x5002f01
test-description: fsmark is a file system benchmark that tests synchronous write workloads, for example a mail server workload.
test-url: https://sourceforge.net/projects/fsmark/
In addition to that, the commit also has a significant impact on the following tests:
+------------------+---------------------------------------------------------------------------+
| testcase: change | fsmark: fsmark.files_per_sec -6.1% regression |
| test machine | 192 threads Intel(R) Xeon(R) CPU @ 2.20GHz with 192G memory |
| test parameters | cpufreq_governor=performance |
| | disk=1SSD |
| | filesize=9B |
| | fs=btrfs |
| | iterations=8 |
| | nr_directories=16d |
| | nr_files_per_directory=256fpd |
| | nr_threads=4 |
| | sync_method=fsyncBeforeClose |
| | test_size=16G |
| | ucode=0x4002f01 |
+------------------+---------------------------------------------------------------------------+
| testcase: change | fsmark: fsmark.files_per_sec -9.3% regression |
| test machine | 96 threads Intel(R) Xeon(R) Platinum 8260L CPU @ 2.40GHz with 128G memory |
| test parameters | cpufreq_governor=performance |
| | disk=1BRD_32G |
| | filesize=4K |
| | fs=btrfs |
| | iterations=1x |
| | nr_files_per_directory=1fpd |
| | nr_threads=1t |
| | sync_method=fsyncBeforeClose |
| | test_size=4G |
| | ucode=0x400002c |
+------------------+---------------------------------------------------------------------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/disk/filesize/fs/iterations/kconfig/nr_directories/nr_files_per_directory/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase/ucode:
gcc-9/performance/1SSD/9B/btrfs/1x/x86_64-rhel-8.3/16d/256fpd/32t/debian-10.4-x86_64-20200603.cgz/fsyncBeforeClose/lkp-csl-2sp7/400M/fsmark/0x5002f01
commit:
c9c9735c46 (" SCSI misc on 20200814")
8cfb68ac58 ("block: make QUEUE_SYSFS_BIT_FNS a little more useful")
c9c9735c46f589b9 8cfb68ac58ee842a4cb35efb1a5
---------------- ---------------------------
%stddev %change %stddev
\ | \
8381 -17.9% 6879 ± 2% fsmark.files_per_sec
13.22 +21.6% 16.08 ± 2% fsmark.time.elapsed_time
13.22 +21.6% 16.08 ± 2% fsmark.time.elapsed_time.max
2359552 -20.1% 1885888 ± 2% fsmark.time.file_system_outputs
3864 -4.9% 3673 ± 2% fsmark.time.involuntary_context_switches
143.94 +23.4% 177.65 fsmark.time.system_time
2321916 ± 3% +32.7% 3081861 ± 3% fsmark.time.voluntary_context_switches
28.71 ± 66% -72.4% 7.93 ±111% sched_debug.cfs_rq:/.removed.load_avg.avg
157.47 ± 31% -61.1% 61.24 ±103% sched_debug.cfs_rq:/.removed.load_avg.stddev
49026 ± 2% +14.3% 56053 ± 4% vmstat.io.bo
14.00 -48.2% 7.25 ± 5% vmstat.memory.buff
297011 ± 2% +9.4% 324937 vmstat.system.cs
237583 ± 3% -40.5% 141458 ± 2% meminfo.Active
236266 ± 3% -40.7% 140094 ± 2% meminfo.Active(file)
24665088 ± 9% -13.9% 21246976 ± 9% meminfo.DirectMap2M
260629 -20.1% 208253 ± 2% meminfo.max_used_kB
9474 +33.0% 12600 softirqs.BLOCK
4340 ± 3% +63.1% 7079 ± 31% softirqs.CPU78.RCU
438222 ± 3% +13.2% 495988 ± 3% softirqs.RCU
328332 +10.6% 363175 ± 2% softirqs.SCHED
176334 ± 13% -41.2% 103763 ± 24% numa-meminfo.node0.Active
175517 ± 13% -41.1% 103370 ± 24% numa-meminfo.node0.Active(file)
194207 ± 26% -56.4% 84620 ± 55% numa-meminfo.node0.AnonPages
194493 ± 26% -53.4% 90682 ± 49% numa-meminfo.node0.Inactive(anon)
13302 ± 2% +28.4% 17086 ± 11% numa-meminfo.node0.Mapped
96551 ± 53% +114.9% 207535 ± 22% numa-meminfo.node1.AnonPages
173071 ± 36% +66.4% 288049 ± 29% numa-meminfo.node1.Inactive
105256 ± 49% +100.0% 210463 ± 21% numa-meminfo.node1.Inactive(anon)
18724 -20.7% 14842 ± 13% numa-meminfo.node1.Mapped
43157 ± 14% -39.8% 25970 ± 23% numa-vmstat.node0.nr_active_file
48579 ± 26% -56.4% 21161 ± 55% numa-vmstat.node0.nr_anon_pages
48650 ± 26% -53.4% 22678 ± 49% numa-vmstat.node0.nr_inactive_anon
3375 ± 3% +28.0% 4322 ± 10% numa-vmstat.node0.nr_mapped
43157 ± 13% -39.8% 25968 ± 23% numa-vmstat.node0.nr_zone_active_file
48650 ± 26% -53.4% 22678 ± 49% numa-vmstat.node0.nr_zone_inactive_anon
24142 ± 53% +115.1% 51919 ± 22% numa-vmstat.node1.nr_anon_pages
26325 ± 49% +100.0% 52649 ± 21% numa-vmstat.node1.nr_inactive_anon
4796 ± 2% -20.2% 3827 ± 11% numa-vmstat.node1.nr_mapped
26325 ± 49% +100.0% 52648 ± 21% numa-vmstat.node1.nr_zone_inactive_anon
329.00 +3.5% 340.50 proc-vmstat.nr_active_anon
58594 ± 3% -40.1% 35080 ± 2% proc-vmstat.nr_active_file
294646 -20.3% 234920 ± 2% proc-vmstat.nr_dirtied
379193 -6.0% 356556 proc-vmstat.nr_file_pages
18145 -1.7% 17841 proc-vmstat.nr_kernel_stack
294048 -20.2% 234700 ± 2% proc-vmstat.nr_written
329.00 +3.5% 340.50 proc-vmstat.nr_zone_active_anon
58594 ± 3% -40.1% 35080 ± 2% proc-vmstat.nr_zone_active_file
494499 -11.2% 439349 ± 2% proc-vmstat.numa_hit
463347 -11.9% 408170 ± 2% proc-vmstat.numa_local
521666 -10.4% 467559 ± 2% proc-vmstat.pgalloc_normal
784160 ± 2% +37.4% 1077544 ± 3% proc-vmstat.pgpgout
2220 ± 7% -20.7% 1761 ± 9% slabinfo.biovec-64.active_objs
2220 ± 7% -20.7% 1761 ± 9% slabinfo.biovec-64.num_objs
3660 ± 3% -9.1% 3327 ± 2% slabinfo.blkdev_ioc.active_objs
3663 ± 4% -9.1% 3331 ± 2% slabinfo.blkdev_ioc.num_objs
92.50 ± 19% -98.4% 1.50 ±110% slabinfo.btrfs_ordered_extent.active_objs
92.50 ± 19% -98.4% 1.50 ±110% slabinfo.btrfs_ordered_extent.num_objs
528.00 ± 13% +39.4% 736.00 slabinfo.kmalloc-rcl-128.active_objs
528.00 ± 13% +39.4% 736.00 slabinfo.kmalloc-rcl-128.num_objs
1995 ± 3% +23.7% 2467 ± 4% slabinfo.kmalloc-rcl-96.active_objs
1995 ± 3% +23.7% 2467 ± 4% slabinfo.kmalloc-rcl-96.num_objs
6865 -17.5% 5666 ± 4% slabinfo.numa_policy.active_objs
6865 -17.5% 5666 ± 4% slabinfo.numa_policy.num_objs
4923 ± 3% -12.1% 4328 ± 7% slabinfo.pid_namespace.active_objs
4923 ± 3% -12.1% 4328 ± 7% slabinfo.pid_namespace.num_objs
2533 ± 4% -13.6% 2188 ± 11% slabinfo.skbuff_fclone_cache.active_objs
2533 ± 4% -13.6% 2188 ± 11% slabinfo.skbuff_fclone_cache.num_objs
74364175 +2.2% 76034332 perf-stat.i.branch-misses
354617 ± 2% +9.2% 387331 perf-stat.i.context-switches
662.74 ± 39% -66.3% 223.48 ± 5% perf-stat.i.cpu-migrations
0.37 -2.5% 0.36 perf-stat.i.ipc
7899 ± 2% -9.9% 7117 ± 3% perf-stat.i.minor-faults
343982 ± 4% -20.4% 273945 ± 6% perf-stat.i.node-stores
7899 ± 2% -9.9% 7117 ± 3% perf-stat.i.page-faults
2.495e+09 +1.9% 2.543e+09 perf-stat.ps.branch-instructions
69036585 +3.6% 71496512 perf-stat.ps.branch-misses
329122 ± 2% +10.6% 364070 perf-stat.ps.context-switches
89124 +1.3% 90242 perf-stat.ps.cpu-clock
614.79 ± 39% -65.8% 210.24 ± 5% perf-stat.ps.cpu-migrations
7349 ± 2% -8.8% 6704 ± 3% perf-stat.ps.minor-faults
319303 ± 4% -19.3% 257583 ± 6% perf-stat.ps.node-stores
7349 ± 2% -8.8% 6705 ± 3% perf-stat.ps.page-faults
89124 +1.3% 90242 perf-stat.ps.task-clock
1.632e+11 +21.0% 1.975e+11 ± 3% perf-stat.total.instructions
15.62 ±131% -14.2 1.39 ±173% perf-profile.calltrace.cycles-pp.mmput.do_exit.do_group_exit.get_signal.arch_do_signal
15.62 ±131% -14.2 1.39 ±173% perf-profile.calltrace.cycles-pp.exit_mmap.mmput.do_exit.do_group_exit.get_signal
7.64 ±134% -7.6 0.00 perf-profile.calltrace.cycles-pp.proc_cgroup_show.proc_single_show.seq_read.vfs_read.ksys_read
7.64 ±134% -7.6 0.00 perf-profile.calltrace.cycles-pp.proc_single_show.seq_read.vfs_read.ksys_read.do_syscall_64
7.64 ±134% -6.2 1.39 ±173% perf-profile.calltrace.cycles-pp.seq_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.51 ±113% -1.7 2.78 ±173% perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.51 ±113% -1.7 2.78 ±173% perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_openat2.do_sys_open.do_syscall_64
2.78 ±173% +14.1 16.91 ± 49% perf-profile.calltrace.cycles-pp.show_interrupts.seq_read.proc_reg_read.vfs_read.ksys_read
2.78 ±173% +19.1 21.87 ± 42% perf-profile.calltrace.cycles-pp.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.17 ±173% +19.1 23.26 ± 41% perf-profile.calltrace.cycles-pp.proc_reg_read.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.78 ±173% +19.1 21.87 ± 42% perf-profile.calltrace.cycles-pp.vfs_read.ksys_read.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.17 ±173% +19.1 23.26 ± 41% perf-profile.calltrace.cycles-pp.seq_read.proc_reg_read.vfs_read.ksys_read.do_syscall_64
17.29 ±112% -14.5 2.78 ±173% perf-profile.children.cycles-pp.mmput
17.29 ±112% -14.5 2.78 ±173% perf-profile.children.cycles-pp.exit_mmap
7.64 ±134% -7.6 0.00 perf-profile.children.cycles-pp.proc_cgroup_show
7.64 ±134% -7.6 0.00 perf-profile.children.cycles-pp.proc_single_show
4.79 ±108% -4.8 0.00 perf-profile.children.cycles-pp.kmem_cache_free
4.79 ±108% -4.8 0.00 perf-profile.children.cycles-pp.refill_obj_stock
4.51 ±113% -3.1 1.39 ±173% perf-profile.children.cycles-pp.lookup_fast
3.06 ±100% -3.1 0.00 perf-profile.children.cycles-pp.asm_exc_page_fault
4.51 ±113% -1.7 2.78 ±173% perf-profile.children.cycles-pp.do_sys_open
4.51 ±113% -1.7 2.78 ±173% perf-profile.children.cycles-pp.do_sys_openat2
4.51 ±113% -1.7 2.78 ±173% perf-profile.children.cycles-pp.do_filp_open
4.51 ±113% -1.7 2.78 ±173% perf-profile.children.cycles-pp.path_openat
2.78 ±173% +12.7 15.52 ± 56% perf-profile.children.cycles-pp.show_interrupts
4.17 ±173% +19.1 23.26 ± 41% perf-profile.children.cycles-pp.proc_reg_read
17926 +34.7% 24145 interrupts.315:PCI-MSI.376832-edge.ahci[0000:00:17.0]
137923 ± 6% +25.9% 173705 ± 6% interrupts.CAL:Function_call_interrupts
29432 +18.5% 34868 ± 2% interrupts.CPU0.LOC:Local_timer_interrupts
29409 +17.9% 34661 ± 2% interrupts.CPU1.LOC:Local_timer_interrupts
29370 +19.5% 35085 ± 2% interrupts.CPU10.LOC:Local_timer_interrupts
29358 +18.4% 34770 interrupts.CPU11.LOC:Local_timer_interrupts
29293 +18.8% 34812 ± 2% interrupts.CPU12.LOC:Local_timer_interrupts
29287 +18.8% 34796 interrupts.CPU13.LOC:Local_timer_interrupts
29417 +18.2% 34769 interrupts.CPU14.LOC:Local_timer_interrupts
29364 +18.4% 34765 interrupts.CPU15.LOC:Local_timer_interrupts
29386 +18.1% 34707 interrupts.CPU16.LOC:Local_timer_interrupts
29357 +18.4% 34757 interrupts.CPU17.LOC:Local_timer_interrupts
29373 +19.0% 34967 ± 2% interrupts.CPU18.LOC:Local_timer_interrupts
29345 +18.6% 34799 interrupts.CPU19.LOC:Local_timer_interrupts
29443 +18.2% 34806 interrupts.CPU2.LOC:Local_timer_interrupts
29357 +18.4% 34750 interrupts.CPU20.LOC:Local_timer_interrupts
17926 +34.7% 24145 interrupts.CPU21.315:PCI-MSI.376832-edge.ahci[0000:00:17.0]
29449 +18.2% 34812 ± 2% interrupts.CPU21.LOC:Local_timer_interrupts
29386 +18.3% 34777 interrupts.CPU22.LOC:Local_timer_interrupts
29365 +18.5% 34792 interrupts.CPU23.LOC:Local_timer_interrupts
28324 ± 7% +18.1% 33462 ± 5% interrupts.CPU24.LOC:Local_timer_interrupts
28242 ± 7% +18.4% 33439 ± 5% interrupts.CPU25.LOC:Local_timer_interrupts
28281 ± 7% +18.4% 33477 ± 5% interrupts.CPU26.LOC:Local_timer_interrupts
28298 ± 7% +18.2% 33459 ± 5% interrupts.CPU27.LOC:Local_timer_interrupts
28283 ± 7% +18.3% 33465 ± 5% interrupts.CPU28.LOC:Local_timer_interrupts
28277 ± 7% +18.4% 33476 ± 5% interrupts.CPU29.LOC:Local_timer_interrupts
29498 +17.9% 34781 interrupts.CPU3.LOC:Local_timer_interrupts
28309 ± 7% +18.2% 33460 ± 5% interrupts.CPU30.LOC:Local_timer_interrupts
28285 ± 7% +18.4% 33477 ± 5% interrupts.CPU31.LOC:Local_timer_interrupts
28300 ± 7% +17.9% 33378 ± 5% interrupts.CPU32.LOC:Local_timer_interrupts
28440 ± 7% +17.9% 33531 ± 5% interrupts.CPU33.LOC:Local_timer_interrupts
644.50 ± 16% +97.1% 1270 ± 38% interrupts.CPU34.CAL:Function_call_interrupts
28311 ± 7% +18.2% 33477 ± 5% interrupts.CPU34.LOC:Local_timer_interrupts
28052 ± 7% +19.3% 33470 ± 5% interrupts.CPU35.LOC:Local_timer_interrupts
28287 ± 7% +18.3% 33465 ± 5% interrupts.CPU36.LOC:Local_timer_interrupts
551.25 ± 11% +42.4% 785.00 ± 28% interrupts.CPU37.CAL:Function_call_interrupts
28289 ± 7% +18.3% 33456 ± 5% interrupts.CPU37.LOC:Local_timer_interrupts
28274 ± 7% +18.3% 33450 ± 5% interrupts.CPU38.LOC:Local_timer_interrupts
28282 ± 7% +18.3% 33466 ± 5% interrupts.CPU39.LOC:Local_timer_interrupts
29239 +18.9% 34779 interrupts.CPU4.LOC:Local_timer_interrupts
28278 ± 7% +18.4% 33473 ± 5% interrupts.CPU40.LOC:Local_timer_interrupts
28218 ± 7% +18.6% 33466 ± 5% interrupts.CPU41.LOC:Local_timer_interrupts
28278 ± 7% +18.3% 33458 ± 5% interrupts.CPU42.LOC:Local_timer_interrupts
28282 ± 7% +18.4% 33480 ± 5% interrupts.CPU43.LOC:Local_timer_interrupts
28289 ± 7% +17.9% 33356 ± 5% interrupts.CPU44.LOC:Local_timer_interrupts
28289 ± 7% +18.3% 33476 ± 5% interrupts.CPU45.LOC:Local_timer_interrupts
28298 ± 7% +18.3% 33477 ± 5% interrupts.CPU46.LOC:Local_timer_interrupts
28290 ± 7% +17.9% 33355 ± 5% interrupts.CPU47.LOC:Local_timer_interrupts
29381 +18.3% 34769 interrupts.CPU48.LOC:Local_timer_interrupts
29571 +17.6% 34790 interrupts.CPU49.LOC:Local_timer_interrupts
29434 +17.8% 34682 interrupts.CPU5.LOC:Local_timer_interrupts
29378 +18.4% 34791 ± 2% interrupts.CPU50.LOC:Local_timer_interrupts
29392 +18.5% 34839 ± 2% interrupts.CPU51.LOC:Local_timer_interrupts
1828 ± 31% +70.2% 3112 ± 19% interrupts.CPU52.CAL:Function_call_interrupts
29390 +18.3% 34761 interrupts.CPU52.LOC:Local_timer_interrupts
200.00 ± 39% +112.8% 425.50 ± 33% interrupts.CPU52.RES:Rescheduling_interrupts
29376 +18.4% 34795 interrupts.CPU53.LOC:Local_timer_interrupts
1955 ± 25% +54.6% 3024 ± 13% interrupts.CPU54.CAL:Function_call_interrupts
29422 +18.2% 34786 interrupts.CPU54.LOC:Local_timer_interrupts
29347 +18.5% 34781 interrupts.CPU55.LOC:Local_timer_interrupts
29386 +18.4% 34790 interrupts.CPU56.LOC:Local_timer_interrupts
29373 +18.3% 34746 interrupts.CPU57.LOC:Local_timer_interrupts
29261 +18.8% 34769 interrupts.CPU58.LOC:Local_timer_interrupts
29403 +18.2% 34767 interrupts.CPU59.LOC:Local_timer_interrupts
29353 +18.5% 34773 interrupts.CPU6.LOC:Local_timer_interrupts
29390 +18.3% 34771 interrupts.CPU60.LOC:Local_timer_interrupts
1719 ± 25% +52.8% 2627 ± 20% interrupts.CPU61.CAL:Function_call_interrupts
29416 +18.3% 34787 interrupts.CPU61.LOC:Local_timer_interrupts
29356 +18.5% 34794 interrupts.CPU62.LOC:Local_timer_interrupts
29368 +18.4% 34764 interrupts.CPU63.LOC:Local_timer_interrupts
29213 +19.1% 34795 interrupts.CPU64.LOC:Local_timer_interrupts
29346 +18.5% 34781 interrupts.CPU65.LOC:Local_timer_interrupts
29373 +18.4% 34775 interrupts.CPU66.LOC:Local_timer_interrupts
29374 +18.4% 34768 interrupts.CPU67.LOC:Local_timer_interrupts
29418 +18.3% 34803 ± 2% interrupts.CPU68.LOC:Local_timer_interrupts
29371 +18.4% 34770 interrupts.CPU69.LOC:Local_timer_interrupts
29356 +18.4% 34761 interrupts.CPU7.LOC:Local_timer_interrupts
29387 +18.3% 34760 interrupts.CPU70.LOC:Local_timer_interrupts
29313 +18.6% 34762 interrupts.CPU71.LOC:Local_timer_interrupts
28307 ± 7% +18.3% 33476 ± 5% interrupts.CPU72.LOC:Local_timer_interrupts
28304 ± 7% +18.0% 33401 ± 5% interrupts.CPU73.LOC:Local_timer_interrupts
28291 ± 7% +18.3% 33480 ± 5% interrupts.CPU74.LOC:Local_timer_interrupts
28292 ± 7% +18.4% 33486 ± 5% interrupts.CPU75.LOC:Local_timer_interrupts
28282 ± 7% +18.4% 33478 ± 5% interrupts.CPU76.LOC:Local_timer_interrupts
28291 ± 7% +18.3% 33469 ± 5% interrupts.CPU77.LOC:Local_timer_interrupts
28313 ± 7% +18.2% 33467 ± 5% interrupts.CPU78.LOC:Local_timer_interrupts
28444 ± 7% +17.7% 33489 ± 5% interrupts.CPU79.LOC:Local_timer_interrupts
29385 +18.4% 34802 interrupts.CPU8.LOC:Local_timer_interrupts
28253 ± 7% +18.6% 33510 ± 5% interrupts.CPU80.LOC:Local_timer_interrupts
28286 ± 7% +18.3% 33462 ± 5% interrupts.CPU81.LOC:Local_timer_interrupts
28262 ± 7% +18.4% 33473 ± 5% interrupts.CPU82.LOC:Local_timer_interrupts
28297 ± 7% +18.4% 33495 ± 5% interrupts.CPU83.LOC:Local_timer_interrupts
28279 ± 7% +18.3% 33453 ± 5% interrupts.CPU84.LOC:Local_timer_interrupts
28289 ± 7% +18.4% 33507 ± 5% interrupts.CPU85.LOC:Local_timer_interrupts
28284 ± 7% +18.3% 33452 ± 5% interrupts.CPU86.LOC:Local_timer_interrupts
28279 ± 7% +18.5% 33497 ± 5% interrupts.CPU87.LOC:Local_timer_interrupts
28295 ± 7% +19.2% 33720 ± 4% interrupts.CPU88.LOC:Local_timer_interrupts
28234 ± 7% +18.6% 33491 ± 5% interrupts.CPU89.LOC:Local_timer_interrupts
29403 +18.3% 34787 interrupts.CPU9.LOC:Local_timer_interrupts
28166 ± 7% +18.9% 33481 ± 5% interrupts.CPU90.LOC:Local_timer_interrupts
28275 ± 7% +18.4% 33487 ± 5% interrupts.CPU91.LOC:Local_timer_interrupts
28267 ± 7% +18.4% 33470 ± 5% interrupts.CPU92.LOC:Local_timer_interrupts
28274 ± 7% +18.1% 33378 ± 5% interrupts.CPU93.LOC:Local_timer_interrupts
28279 ± 7% +18.4% 33474 ± 5% interrupts.CPU94.LOC:Local_timer_interrupts
28309 ± 7% +18.4% 33504 ± 5% interrupts.CPU95.LOC:Local_timer_interrupts
2767708 ± 4% +18.4% 3276358 ± 2% interrupts.LOC:Local_timer_interrupts
13946 ± 12% +35.8% 18945 ± 17% interrupts.RES:Rescheduling_interrupts
fsmark.files_per_sec
9000 +--------------------------------------------------------------------+
| |
| + +. |
8500 |-+ +. +. .+. + + .. + .+..+.|
| .. +.. : +. + + .+ + +. +..+.+.+..+.+.+..+ |
|.+ : + +.. + +.. + |
8000 |-+ + + + |
| |
7500 |-+ |
| |
| O O |
7000 |-O O O O O O O |
| O O O O O O O O O |
| O O O O O O O O O |
6500 +--------------------------------------------------------------------+
fsmark.time.system_time
185 +---------------------------------------------------------------------+
| O O O O O O O |
180 |-+ O O O O O O O O |
175 |-+ O O O O |
| O O O O O O O O |
170 |-+ |
165 |-+ |
| |
160 |-+ |
155 |-+ |
| +.. +.. |
150 |-+ .+. +.. + + .+. |
145 |.+..+.+. +.+.. + .+..+ +.+ +.+.. .+. +.+.. .|
| +.+..+ +.+ + +.+..+ |
140 +---------------------------------------------------------------------+
fsmark.time.voluntary_context_switches
3.4e+06 +-----------------------------------------------------------------+
| O O O O |
3.2e+06 |-+ O O O O |
| O O O O O O O O O O |
3e+06 |-+ O O O O O O |
| O O O O |
2.8e+06 |-+ |
| |
2.6e+06 |-+ |
| +.+. +.+ +.+ |
2.4e+06 |.+.. : +..+. .+ +.+ + + .. + .+. .+..+.+.+ |
| : + + .. + .+ + +. +.+ + .+.|
2.2e+06 |-+ + + +. +. |
| |
2e+06 +-----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
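For reference, the %change column in the tables above appears to be the plain relative delta between the base-commit value (left column, the [*] bisect-good samples) and the tested-commit value (right column, the [O] bisect-bad samples). A quick sanity check against the fsmark.files_per_sec row, assuming the usual (new - old) / old * 100 definition:

  # Recompute the %change for fsmark.files_per_sec (8381 -> 6879) shown above.
  # The formula is an assumption, but it reproduces the -17.9% in the table.
  awk 'BEGIN { old = 8381; new = 6879; printf "%+.1f%%\n", (new - old) / old * 100 }'
  # prints: -17.9%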
***************************************************************************************************
lkp-csl-2ap1: 192 threads Intel(R) Xeon(R) CPU @ 2.20GHz with 192G memory
=========================================================================================
compiler/cpufreq_governor/disk/filesize/fs/iterations/kconfig/nr_directories/nr_files_per_directory/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase/ucode:
gcc-9/performance/1SSD/9B/btrfs/8/x86_64-rhel-8.3/16d/256fpd/4/debian-10.4-x86_64-20200603.cgz/fsyncBeforeClose/lkp-csl-2ap1/16G/fsmark/0x4002f01
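For readers not familiar with the parameter string above: it describes an fsmark run with 9-byte files on a btrfs-formatted SSD, 16 directories of 256 files each, 4 threads, 8 iterations, and fsyncBeforeClose as the sync method. A rough hand-run equivalent might look like the sketch below; the mount point and per-thread file count are placeholders rather than values from this report, and the real command line is built by the lkp-tests fsmark job script.

  # Illustrative only -- approximate fs_mark equivalent of the parameters above.
  # /fs/ssd1 and "-n 4096" are placeholders.
  fs_mark -d /fs/ssd1 -D 16 -N 256 -n 4096 -t 4 -s 9 -L 8 -S 1   # -S 1 = fsyncBeforeClose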
commit:
c9c9735c46 (" SCSI misc on 20200814")
8cfb68ac58 ("block: make QUEUE_SYSFS_BIT_FNS a little more useful")
c9c9735c46f589b9 8cfb68ac58ee842a4cb35efb1a5
---------------- ---------------------------
%stddev %change %stddev
\ | \
14341 ± 2% -6.1% 13467 fsmark.files_per_sec
318.08 ± 2% +5.2% 334.51 fsmark.time.elapsed_time
318.08 ± 2% +5.2% 334.51 fsmark.time.elapsed_time.max
2.57e+08 +15.7% 2.974e+08 fsmark.time.file_system_outputs
1433 ± 5% -10.7% 1279 fsmark.time.involuntary_context_switches
180.67 -7.8% 166.50 fsmark.time.percent_of_cpu_this_job_got
16455563 ± 2% -20.1% 13141224 ± 2% fsmark.time.voluntary_context_switches
0.10 ± 5% +0.1 0.16 mpstat.cpu.all.iowait%
10363 ±140% +199.4% 31025 numa-numastat.node0.other_node
2803315 ± 91% -82.5% 491458 ± 77% numa-numastat.node2.numa_hit
370055 ± 3% +118.3% 807954 vmstat.io.bo
23034629 +10.1% 25353506 vmstat.memory.cache
105720 -23.4% 81028 ± 2% vmstat.system.cs
402634 +2.5% 412629 vmstat.system.in
9481352 ± 2% +23.7% 11724195 ± 2% meminfo.Active
9471789 ± 2% +23.7% 11714606 ± 2% meminfo.Active(file)
19972841 +11.3% 22235999 meminfo.Cached
1246325 ± 3% -12.8% 1086240 ± 4% meminfo.DirectMap4k
26173204 +9.2% 28588033 meminfo.Memused
3.326e+08 ± 5% -33.5% 2.213e+08 ± 7% cpuidle.C1.time
10254675 ± 3% -26.2% 7568112 ± 4% cpuidle.C1.usage
3.282e+10 ± 56% +88.1% 6.174e+10 cpuidle.C1E.time
92104289 ± 26% +45.9% 1.344e+08 cpuidle.C1E.usage
2.574e+10 ± 72% -99.9% 35688524 cpuidle.C6.time
40454525 ± 70% -99.6% 142353 ± 4% cpuidle.C6.usage
20458584 +46.5% 29980561 cpuidle.POLL.time
5157540 +48.7% 7669349 cpuidle.POLL.usage
3395 ± 70% +1940.4% 69272 ± 61% numa-meminfo.node1.AnonHugePages
10256 ± 28% +757.4% 87935 ± 47% numa-meminfo.node1.AnonPages
11114 ± 21% +706.9% 89679 ± 45% numa-meminfo.node1.Inactive(anon)
2556688 ± 57% -99.9% 2653 ± 83% numa-meminfo.node2.Active
601.00 ±126% +307.7% 2450 ± 94% numa-meminfo.node2.Active(anon)
2556086 ± 57% -100.0% 202.50 ± 41% numa-meminfo.node2.Active(file)
73926 ±100% -100.0% 8.00 ± 12% numa-meminfo.node2.Dirty
5717691 ± 68% -95.6% 252360 numa-meminfo.node2.FilePages
2963447 ± 85% -98.9% 32117 ± 50% numa-meminfo.node2.Inactive
2919238 ± 86% -100.0% 734.50 ± 35% numa-meminfo.node2.Inactive(file)
954960 ± 83% -97.1% 27837 ± 41% numa-meminfo.node2.KReclaimable
7416088 ± 67% -90.8% 684272 ± 4% numa-meminfo.node2.MemUsed
366.67 ± 58% +184.6% 1043 ± 60% numa-meminfo.node2.PageTables
954960 ± 83% -97.1% 27837 ± 41% numa-meminfo.node2.SReclaimable
406357 ± 75% -83.6% 66735 ± 6% numa-meminfo.node2.SUnreclaim
1361317 ± 80% -93.1% 94573 ± 16% numa-meminfo.node2.Slab
2367486 ± 2% +23.7% 2929170 ± 2% proc-vmstat.nr_active_file
32231067 +15.7% 37287643 proc-vmstat.nr_dirtied
66439 +2.4% 68032 proc-vmstat.nr_dirty
4992862 +11.4% 5560436 proc-vmstat.nr_file_pages
42888773 -1.4% 42282764 proc-vmstat.nr_free_pages
784561 +1.6% 797468 proc-vmstat.nr_slab_reclaimable
358606 +3.3% 370304 proc-vmstat.nr_slab_unreclaimable
32229396 +15.7% 37286196 proc-vmstat.nr_written
2367486 ± 2% +23.7% 2929170 ± 2% proc-vmstat.nr_zone_active_file
66601 +2.3% 68155 proc-vmstat.nr_zone_write_pending
9950054 +4.0% 10350725 proc-vmstat.numa_hit
9856519 +4.1% 10256822 proc-vmstat.numa_local
11765348 +3.5% 12175102 proc-vmstat.pgalloc_normal
1232760 ± 2% +5.1% 1295455 proc-vmstat.pgfault
11576430 ± 3% +5.2% 12183085 proc-vmstat.pgfree
1.188e+08 +129.5% 2.727e+08 proc-vmstat.pgpgout
2587 ± 28% +749.7% 21984 ± 47% numa-vmstat.node1.nr_anon_pages
2801 ± 21% +700.2% 22420 ± 45% numa-vmstat.node1.nr_inactive_anon
2801 ± 21% +700.2% 22420 ± 45% numa-vmstat.node1.nr_zone_inactive_anon
1026160 ± 42% -45.0% 564173 ± 7% numa-vmstat.node1.numa_hit
911821 ± 47% -50.6% 450148 ± 9% numa-vmstat.node1.numa_local
150.00 ±126% +308.0% 612.00 ± 94% numa-vmstat.node2.nr_active_anon
639121 ± 57% -100.0% 50.00 ± 42% numa-vmstat.node2.nr_active_file
4390565 ± 49% -100.0% 1044 ± 23% numa-vmstat.node2.nr_dirtied
18474 ±100% -100.0% 1.50 ± 33% numa-vmstat.node2.nr_dirty
1429333 ± 68% -95.6% 63090 numa-vmstat.node2.nr_file_pages
729624 ± 86% -100.0% 183.50 ± 35% numa-vmstat.node2.nr_inactive_file
91.33 ± 59% +185.8% 261.00 ± 60% numa-vmstat.node2.nr_page_table_pages
238661 ± 83% -97.1% 6958 ± 41% numa-vmstat.node2.nr_slab_reclaimable
101586 ± 75% -83.6% 16684 ± 6% numa-vmstat.node2.nr_slab_unreclaimable
4371839 ± 49% -100.0% 1042 ± 23% numa-vmstat.node2.nr_written
150.00 ±126% +308.0% 612.00 ± 94% numa-vmstat.node2.nr_zone_active_anon
639121 ± 57% -100.0% 50.00 ± 42% numa-vmstat.node2.nr_zone_active_file
729624 ± 86% -100.0% 183.50 ± 35% numa-vmstat.node2.nr_zone_inactive_file
18521 ±100% -100.0% 1.50 ± 33% numa-vmstat.node2.nr_zone_write_pending
1933653 ± 58% -65.8% 661815 ± 38% numa-vmstat.node2.numa_hit
1829227 ± 62% -69.2% 563701 ± 47% numa-vmstat.node2.numa_local
74121 ± 38% +34.0% 99352 ± 15% numa-vmstat.node3.numa_other
10172 ± 32% -71.3% 2923 ± 99% sched_debug.cfs_rq:/.MIN_vruntime.max
19.05 ± 6% -33.2% 12.73 ± 11% sched_debug.cfs_rq:/.exec_clock.min
10172 ± 32% -71.3% 2923 ± 99% sched_debug.cfs_rq:/.max_vruntime.max
64578 ± 2% +13.8% 73468 ± 11% sched_debug.cfs_rq:/.min_vruntime.max
23097 ± 10% +18.5% 27380 ± 8% sched_debug.cfs_rq:/.min_vruntime.min
-3883 -148.9% 1897 ±205% sched_debug.cfs_rq:/.spread0.avg
25748 ± 23% +45.8% 37548 ± 23% sched_debug.cfs_rq:/.spread0.max
-15742 -45.8% -8539 sched_debug.cfs_rq:/.spread0.min
590.42 ± 13% -25.5% 439.83 ± 8% sched_debug.cfs_rq:/.util_est_enqueued.max
85.96 ± 6% -29.9% 60.26 ± 18% sched_debug.cfs_rq:/.util_est_enqueued.stddev
15114 -45.9% 8184 ± 13% sched_debug.cpu.max_idle_balance_cost.stddev
85927 ± 9% -25.4% 64114 ± 13% sched_debug.cpu.nr_switches.avg
801141 ± 16% +9.7% 878469 ± 12% sched_debug.cpu.nr_switches.max
0.01 ± 19% +31.4% 0.01 ± 6% sched_debug.cpu.nr_uninterruptible.avg
84328 ± 10% -25.9% 62528 ± 13% sched_debug.cpu.sched_count.avg
798454 ± 16% +9.7% 875948 ± 12% sched_debug.cpu.sched_count.max
42049 ± 10% -25.9% 31163 ± 13% sched_debug.cpu.sched_goidle.avg
398713 ± 16% +9.7% 437426 ± 12% sched_debug.cpu.sched_goidle.max
42175 ± 10% -26.0% 31215 ± 13% sched_debug.cpu.ttwu_count.avg
7514 ± 19% +30.5% 9803 ± 9% sched_debug.cpu.ttwu_local.avg
87026 ± 20% +81.3% 157740 ± 8% sched_debug.cpu.ttwu_local.max
16085 ± 24% +67.6% 26963 ± 5% sched_debug.cpu.ttwu_local.stddev
2691 ± 3% -14.1% 2310 ± 3% slabinfo.PING.active_objs
2691 ± 3% -14.1% 2310 ± 3% slabinfo.PING.num_objs
915.67 ± 11% -34.9% 596.00 ± 4% slabinfo.biovec-max.active_objs
923.33 ± 10% -33.4% 614.50 ± 3% slabinfo.biovec-max.num_objs
4710 ± 13% -30.0% 3298 ± 3% slabinfo.buffer_head.active_objs
4711 ± 13% -30.0% 3298 ± 3% slabinfo.buffer_head.num_objs
4598 ± 15% -31.9% 3132 slabinfo.dmaengine-unmap-16.active_objs
4598 ± 15% -31.9% 3132 slabinfo.dmaengine-unmap-16.num_objs
1182 ± 10% -18.4% 965.50 slabinfo.file_lock_cache.active_objs
1182 ± 10% -18.4% 965.50 slabinfo.file_lock_cache.num_objs
4339 ± 12% -30.2% 3030 ± 5% slabinfo.khugepaged_mm_slot.active_objs
4339 ± 12% -30.2% 3030 ± 5% slabinfo.khugepaged_mm_slot.num_objs
13447 ± 5% +11.4% 14974 slabinfo.kmalloc-192.active_objs
5131 ± 10% -30.5% 3568 slabinfo.mnt_cache.active_objs
5131 ± 10% -30.3% 3578 slabinfo.mnt_cache.num_objs
416575 ± 3% +42.6% 593901 ± 2% slabinfo.numa_policy.active_objs
7159 ± 4% +35.4% 9694 ± 2% slabinfo.numa_policy.active_slabs
443932 ± 4% +35.4% 601067 ± 2% slabinfo.numa_policy.num_objs
7159 ± 4% +35.4% 9694 ± 2% slabinfo.numa_policy.num_slabs
38580 ± 4% -12.2% 33870 slabinfo.pid_namespace.active_objs
695.00 ± 4% -12.6% 607.50 slabinfo.pid_namespace.active_slabs
38949 ± 4% -12.6% 34054 slabinfo.pid_namespace.num_objs
695.00 ± 4% -12.6% 607.50 slabinfo.pid_namespace.num_slabs
139006 +13.2% 157408 slabinfo.radix_tree_node.active_objs
2504 +12.7% 2822 slabinfo.radix_tree_node.active_slabs
140268 +12.7% 158065 slabinfo.radix_tree_node.num_objs
2504 +12.7% 2822 slabinfo.radix_tree_node.num_slabs
17.01 ± 38% -31.8% 11.60 perf-stat.i.MPKI
1.79 ± 41% -0.7 1.09 perf-stat.i.branch-miss-rate%
24054793 ± 40% -38.6% 14775410 perf-stat.i.branch-misses
16676510 ± 8% -25.7% 12383747 ± 3% perf-stat.i.cache-misses
1.119e+08 ± 38% -32.0% 76150997 perf-stat.i.cache-references
106737 -23.4% 81752 ± 2% perf-stat.i.context-switches
2.26 ± 4% -15.2% 1.92 perf-stat.i.cpi
1.469e+10 ± 4% -15.3% 1.244e+10 perf-stat.i.cpu-cycles
206.42 -2.5% 201.30 perf-stat.i.cpu-migrations
0.46 ± 4% +16.1% 0.53 perf-stat.i.ipc
0.08 ± 4% -15.3% 0.06 perf-stat.i.metric.GHz
0.87 ± 6% +28.7% 1.13 ± 4% perf-stat.i.metric.K/sec
57.10 ± 9% -14.8 42.34 ± 10% perf-stat.i.node-load-miss-rate%
3057226 ± 18% -50.8% 1503408 ± 18% perf-stat.i.node-load-misses
50.35 ± 9% -23.2 27.17 ± 17% perf-stat.i.node-store-miss-rate%
889352 ± 14% -57.2% 380323 ± 19% perf-stat.i.node-store-misses
634299 ± 3% +13.1% 717379 ± 5% perf-stat.i.node-stores
16.82 ± 38% -31.3% 11.55 perf-stat.overall.MPKI
1.80 ± 40% -0.7 1.12 perf-stat.overall.branch-miss-rate%
2.21 ± 4% -14.6% 1.89 perf-stat.overall.cpi
888.28 ± 10% +13.2% 1005 ± 2% perf-stat.overall.cycles-between-cache-misses
0.45 ± 4% +16.9% 0.53 perf-stat.overall.ipc
67.48 ± 6% -17.3 50.21 ± 11% perf-stat.overall.node-load-miss-rate%
58.08 ± 6% -23.6 34.47 ± 15% perf-stat.overall.node-store-miss-rate%
23979708 ± 40% -38.5% 14735864 perf-stat.ps.branch-misses
16622346 ± 8% -25.7% 12346713 ± 3% perf-stat.ps.cache-misses
1.116e+08 ± 38% -31.9% 75921368 perf-stat.ps.cache-references
106373 -23.4% 81507 ± 2% perf-stat.ps.context-switches
1.464e+10 ± 4% -15.3% 1.24e+10 perf-stat.ps.cpu-cycles
205.83 -2.4% 200.79 perf-stat.ps.cpu-migrations
3046719 ± 18% -50.8% 1499150 ± 18% perf-stat.ps.node-load-misses
886348 ± 14% -57.2% 379278 ± 19% perf-stat.ps.node-store-misses
632290 ± 3% +13.1% 715173 ± 5% perf-stat.ps.node-stores
2.115e+12 +4.2% 2.204e+12 perf-stat.total.instructions
37061 ± 17% +20.7% 44719 ± 4% softirqs.CPU101.SCHED
41293 ± 2% +12.0% 46232 ± 6% softirqs.CPU102.SCHED
40360 ± 5% +13.7% 45875 ± 6% softirqs.CPU103.SCHED
41842 +10.3% 46152 ± 6% softirqs.CPU104.SCHED
42144 +5.9% 44634 ± 4% softirqs.CPU105.SCHED
42326 +6.3% 44983 ± 4% softirqs.CPU106.SCHED
36518 ± 21% +23.0% 44916 ± 3% softirqs.CPU107.SCHED
36496 ± 21% +21.6% 44387 ± 3% softirqs.CPU108.SCHED
41601 ± 2% +8.5% 45129 ± 4% softirqs.CPU109.SCHED
42278 ± 2% +6.4% 44987 ± 4% softirqs.CPU110.SCHED
31953 ± 24% +39.0% 44401 ± 3% softirqs.CPU113.SCHED
84477 ± 4% +3.6% 87558 ± 3% softirqs.CPU115.RCU
41016 ± 2% +9.5% 44929 ± 2% softirqs.CPU115.SCHED
84061 ± 4% +4.7% 87983 ± 3% softirqs.CPU116.RCU
41192 +7.5% 44276 ± 3% softirqs.CPU117.SCHED
84928 ± 4% +3.2% 87633 ± 3% softirqs.CPU118.RCU
35258 ± 22% +23.5% 43551 softirqs.CPU118.SCHED
40129 ± 2% +7.0% 42924 ± 2% softirqs.CPU12.SCHED
88467 ± 4% +6.1% 93820 ± 3% softirqs.CPU157.RCU
84352 ± 5% +5.9% 89321 ± 2% softirqs.CPU16.RCU
84510 ± 4% +5.4% 89078 ± 3% softirqs.CPU17.RCU
42859 +6.4% 45610 ± 3% softirqs.CPU179.SCHED
84793 ± 5% +6.0% 89907 softirqs.CPU18.RCU
42750 +7.5% 45938 ± 4% softirqs.CPU182.SCHED
42393 ± 2% +8.2% 45888 ± 4% softirqs.CPU183.SCHED
41993 +9.8% 46110 ± 4% softirqs.CPU186.SCHED
42186 +7.9% 45503 ± 3% softirqs.CPU187.SCHED
84190 ± 5% +3.3% 86965 ± 4% softirqs.CPU19.RCU
83884 ± 5% +6.7% 89470 ± 3% softirqs.CPU20.RCU
84101 ± 5% +5.2% 88443 ± 3% softirqs.CPU21.RCU
84156 ± 5% +6.5% 89632 softirqs.CPU22.RCU
84830 ± 5% +4.7% 88847 ± 2% softirqs.CPU23.RCU
40641 +6.8% 43385 ± 3% softirqs.CPU3.SCHED
35600 ± 20% +19.0% 42362 softirqs.CPU66.SCHED
40052 ± 3% +7.2% 42953 ± 2% softirqs.CPU7.SCHED
84910 ± 8% +5.0% 89157 ± 8% softirqs.CPU70.RCU
80938 ± 11% +10.5% 89431 ± 6% softirqs.CPU72.RCU
80695 ± 4% +8.3% 87378 ± 7% softirqs.CPU74.RCU
39758 ± 4% +8.4% 43094 ± 3% softirqs.CPU9.SCHED
24129 ± 54% +28.1% 30909 ± 39% softirqs.CPU95.SCHED
61273 ±141% +70.2% 104291 ± 86% interrupts.108:PCI-MSI.23593025-edge.nvme0q65
67780 ±141% +198.0% 202003 ± 99% interrupts.109:PCI-MSI.23593026-edge.nvme0q66
5839 ±141% +947.8% 61180 ± 94% interrupts.140:PCI-MSI.23593057-edge.nvme0q97
3208 ±141% +4875.8% 159657 ± 72% interrupts.141:PCI-MSI.23593058-edge.nvme0q98
2792 ±141% +6892.6% 195257 ± 99% interrupts.142:PCI-MSI.23593059-edge.nvme0q99
3390 ±141% +7350.2% 252586 ± 89% interrupts.143:PCI-MSI.23593060-edge.nvme0q100
1.33 ±141% +1.7e+07% 224326 ± 88% interrupts.144:PCI-MSI.23593061-edge.nvme0q101
5093 ±141% +1198.8% 66149 ± 99% interrupts.76:PCI-MSI.23592993-edge.nvme0q33
1685904 ± 2% -39.2% 1024585 ± 4% interrupts.CAL:Function_call_interrupts
66.67 ± 94% -85.0% 10.00 ± 50% interrupts.CPU10.RES:Rescheduling_interrupts
1.33 ±141% +1.7e+07% 224326 ± 88% interrupts.CPU100.144:PCI-MSI.23593061-edge.nvme0q101
113.67 ± 4% +35.0% 153.50 ± 5% interrupts.CPU101.NMI:Non-maskable_interrupts
113.67 ± 4% +35.0% 153.50 ± 5% interrupts.CPU101.PMI:Performance_monitoring_interrupts
916.00 ±139% +71.6% 1572 ± 97% interrupts.CPU102.RES:Rescheduling_interrupts
89.67 ± 28% +33.8% 120.00 ± 9% interrupts.CPU104.NMI:Non-maskable_interrupts
89.67 ± 28% +33.8% 120.00 ± 9% interrupts.CPU104.PMI:Performance_monitoring_interrupts
94.00 ± 24% +40.4% 132.00 ± 6% interrupts.CPU108.NMI:Non-maskable_interrupts
94.00 ± 24% +40.4% 132.00 ± 6% interrupts.CPU108.PMI:Performance_monitoring_interrupts
3560 ± 52% -76.9% 822.00 ± 2% interrupts.CPU11.CAL:Function_call_interrupts
276.33 ± 84% -78.3% 60.00 ± 16% interrupts.CPU11.NMI:Non-maskable_interrupts
276.33 ± 84% -78.3% 60.00 ± 16% interrupts.CPU11.PMI:Performance_monitoring_interrupts
581.00 ±141% +52.8% 888.00 ± 97% interrupts.CPU113.RES:Rescheduling_interrupts
114.67 ± 3% -45.9% 62.00 ± 12% interrupts.CPU12.NMI:Non-maskable_interrupts
114.67 ± 3% -45.9% 62.00 ± 12% interrupts.CPU12.PMI:Performance_monitoring_interrupts
59.00 ±115% -86.4% 8.00 interrupts.CPU120.RES:Rescheduling_interrupts
91.33 ±118% -95.1% 4.50 ± 33% interrupts.CPU124.RES:Rescheduling_interrupts
184.00 ±120% -98.6% 2.50 ± 20% interrupts.CPU126.RES:Rescheduling_interrupts
46.33 ±112% -86.0% 6.50 ± 23% interrupts.CPU128.RES:Rescheduling_interrupts
266.00 ± 87% -98.1% 5.00 interrupts.CPU129.RES:Rescheduling_interrupts
469.33 ±108% -88.0% 56.50 ± 20% interrupts.CPU13.NMI:Non-maskable_interrupts
469.33 ±108% -88.0% 56.50 ± 20% interrupts.CPU13.PMI:Performance_monitoring_interrupts
158.33 ± 97% -97.5% 4.00 ± 50% interrupts.CPU131.RES:Rescheduling_interrupts
7460 ± 71% -89.2% 806.50 interrupts.CPU138.CAL:Function_call_interrupts
218.33 ± 92% -99.5% 1.00 interrupts.CPU138.RES:Rescheduling_interrupts
131.00 ± 87% -97.7% 3.00 ± 33% interrupts.CPU139.RES:Rescheduling_interrupts
2.33 ± 88% +2685.7% 65.00 ± 92% interrupts.CPU140.TLB:TLB_shootdowns
72.00 ± 47% -96.5% 2.50 ± 60% interrupts.CPU141.RES:Rescheduling_interrupts
5093 ±141% +1198.8% 66149 ± 99% interrupts.CPU144.76:PCI-MSI.23592993-edge.nvme0q33
12466 ± 70% -93.4% 826.50 interrupts.CPU145.CAL:Function_call_interrupts
128.33 ± 16% -25.2% 96.00 ± 2% interrupts.CPU145.NMI:Non-maskable_interrupts
128.33 ± 16% -25.2% 96.00 ± 2% interrupts.CPU145.PMI:Performance_monitoring_interrupts
27853 ±105% -97.1% 819.00 interrupts.CPU146.CAL:Function_call_interrupts
12475 ±120% -93.1% 862.00 interrupts.CPU147.CAL:Function_call_interrupts
17867 ± 78% -95.5% 812.00 interrupts.CPU148.CAL:Function_call_interrupts
2.33 ± 20% +9778.6% 230.50 ± 97% interrupts.CPU148.TLB:TLB_shootdowns
17485 ± 87% -95.4% 813.00 interrupts.CPU150.CAL:Function_call_interrupts
762.00 ±102% -99.1% 6.50 ± 84% interrupts.CPU150.RES:Rescheduling_interrupts
19650 ±111% -95.8% 818.50 interrupts.CPU151.CAL:Function_call_interrupts
689.67 ±121% -99.4% 4.00 interrupts.CPU151.RES:Rescheduling_interrupts
22144 ±114% -96.2% 831.00 interrupts.CPU152.CAL:Function_call_interrupts
789.67 ±118% -98.7% 10.50 ± 52% interrupts.CPU152.RES:Rescheduling_interrupts
9567 ± 83% -91.4% 820.50 interrupts.CPU154.CAL:Function_call_interrupts
253.67 ± 92% -98.2% 4.50 ± 11% interrupts.CPU154.RES:Rescheduling_interrupts
24902 ± 97% -96.8% 808.50 interrupts.CPU155.CAL:Function_call_interrupts
18408 ± 98% -93.3% 1238 ± 21% interrupts.CPU156.CAL:Function_call_interrupts
17218 ± 91% -95.2% 833.50 interrupts.CPU157.CAL:Function_call_interrupts
8212 ± 67% -85.5% 1190 ± 29% interrupts.CPU158.CAL:Function_call_interrupts
17229 ±129% -95.0% 856.50 interrupts.CPU159.CAL:Function_call_interrupts
1001 ± 13% +122.1% 2223 ± 29% interrupts.CPU16.CAL:Function_call_interrupts
126.00 ± 14% -32.5% 85.00 ± 12% interrupts.CPU16.NMI:Non-maskable_interrupts
126.00 ± 14% -32.5% 85.00 ± 12% interrupts.CPU16.PMI:Performance_monitoring_interrupts
15039 ±102% -94.6% 805.00 interrupts.CPU160.CAL:Function_call_interrupts
9717 ±110% -91.6% 815.50 interrupts.CPU161.CAL:Function_call_interrupts
14872 ± 62% -94.3% 851.00 ± 5% interrupts.CPU162.CAL:Function_call_interrupts
389.00 ±137% -99.5% 2.00 ±100% interrupts.CPU162.RES:Rescheduling_interrupts
139.00 ± 22% -36.3% 88.50 ± 2% interrupts.CPU163.NMI:Non-maskable_interrupts
139.00 ± 22% -36.3% 88.50 ± 2% interrupts.CPU163.PMI:Performance_monitoring_interrupts
129.00 ± 14% -29.8% 90.50 interrupts.CPU164.NMI:Non-maskable_interrupts
129.00 ± 14% -29.8% 90.50 interrupts.CPU164.PMI:Performance_monitoring_interrupts
61273 ±141% +70.2% 104291 ± 86% interrupts.CPU168.108:PCI-MSI.23593025-edge.nvme0q65
67780 ±141% +198.0% 202003 ± 99% interrupts.CPU169.109:PCI-MSI.23593026-edge.nvme0q66
1820 ± 77% +88.8% 3438 ± 75% interrupts.CPU20.CAL:Function_call_interrupts
1597 ± 66% +156.7% 4100 ± 24% interrupts.CPU22.CAL:Function_call_interrupts
1110 ± 35% +261.8% 4017 ± 78% interrupts.CPU23.CAL:Function_call_interrupts
15.00 ± 56% +970.0% 160.50 ± 82% interrupts.CPU23.RES:Rescheduling_interrupts
83.33 ± 25% +48.8% 124.00 ± 21% interrupts.CPU24.NMI:Non-maskable_interrupts
83.33 ± 25% +48.8% 124.00 ± 21% interrupts.CPU24.PMI:Performance_monitoring_interrupts
83.33 ± 26% +235.4% 279.50 ± 67% interrupts.CPU25.NMI:Non-maskable_interrupts
83.33 ± 26% +235.4% 279.50 ± 67% interrupts.CPU25.PMI:Performance_monitoring_interrupts
118.33 ± 4% -27.7% 85.50 ± 16% interrupts.CPU4.NMI:Non-maskable_interrupts
118.33 ± 4% -27.7% 85.50 ± 16% interrupts.CPU4.PMI:Performance_monitoring_interrupts
813.67 +7.8% 877.50 ± 2% interrupts.CPU41.CAL:Function_call_interrupts
46083 ±122% -98.0% 915.00 interrupts.CPU48.CAL:Function_call_interrupts
1609 ±121% -98.0% 32.00 ± 9% interrupts.CPU48.RES:Rescheduling_interrupts
0.33 ±141% +72200.0% 241.00 ± 97% interrupts.CPU5.TLB:TLB_shootdowns
10202 ±125% -91.5% 865.00 ± 3% interrupts.CPU53.CAL:Function_call_interrupts
128.00 ± 95% -97.3% 3.50 ± 42% interrupts.CPU55.RES:Rescheduling_interrupts
112.00 -48.7% 57.50 ± 18% interrupts.CPU6.NMI:Non-maskable_interrupts
112.00 -48.7% 57.50 ± 18% interrupts.CPU6.PMI:Performance_monitoring_interrupts
832.33 ± 5% +150.1% 2081 ± 56% interrupts.CPU60.CAL:Function_call_interrupts
398.33 ± 94% -75.8% 96.50 ± 3% interrupts.CPU61.NMI:Non-maskable_interrupts
398.33 ± 94% -75.8% 96.50 ± 3% interrupts.CPU61.PMI:Performance_monitoring_interrupts
814.00 ± 2% +124.3% 1826 ± 52% interrupts.CPU62.CAL:Function_call_interrupts
837.67 ± 3% +76.3% 1476 ± 43% interrupts.CPU64.CAL:Function_call_interrupts
131.00 ± 14% -27.5% 95.00 ± 4% interrupts.CPU64.NMI:Non-maskable_interrupts
131.00 ± 14% -27.5% 95.00 ± 4% interrupts.CPU64.PMI:Performance_monitoring_interrupts
131.00 ± 14% -30.2% 91.50 interrupts.CPU65.NMI:Non-maskable_interrupts
131.00 ± 14% -30.2% 91.50 interrupts.CPU65.PMI:Performance_monitoring_interrupts
135.67 ± 17% -32.9% 91.00 interrupts.CPU66.NMI:Non-maskable_interrupts
135.67 ± 17% -32.9% 91.00 interrupts.CPU66.PMI:Performance_monitoring_interrupts
135.67 ± 19% -33.3% 90.50 interrupts.CPU67.NMI:Non-maskable_interrupts
135.67 ± 19% -33.3% 90.50 interrupts.CPU67.PMI:Performance_monitoring_interrupts
132.67 ± 16% -31.8% 90.50 interrupts.CPU68.NMI:Non-maskable_interrupts
132.67 ± 16% -31.8% 90.50 interrupts.CPU68.PMI:Performance_monitoring_interrupts
133.33 ± 15% -32.1% 90.50 interrupts.CPU69.NMI:Non-maskable_interrupts
133.33 ± 15% -32.1% 90.50 interrupts.CPU69.PMI:Performance_monitoring_interrupts
460.67 ±100% -80.4% 90.50 interrupts.CPU70.NMI:Non-maskable_interrupts
460.67 ±100% -80.4% 90.50 interrupts.CPU70.PMI:Performance_monitoring_interrupts
314.33 ± 81% -69.3% 96.50 interrupts.CPU71.NMI:Non-maskable_interrupts
314.33 ± 81% -69.3% 96.50 interrupts.CPU71.PMI:Performance_monitoring_interrupts
811.67 +115.7% 1750 ± 45% interrupts.CPU77.CAL:Function_call_interrupts
137.33 ± 14% -47.2% 72.50 ± 15% interrupts.CPU77.NMI:Non-maskable_interrupts
137.33 ± 14% -47.2% 72.50 ± 15% interrupts.CPU77.PMI:Performance_monitoring_interrupts
4.00 ± 40% +1512.5% 64.50 ± 76% interrupts.CPU77.RES:Rescheduling_interrupts
1363 ± 58% +74.8% 2383 ± 60% interrupts.CPU78.CAL:Function_call_interrupts
138.67 ± 13% -62.5% 52.00 ± 19% interrupts.CPU78.NMI:Non-maskable_interrupts
138.67 ± 13% -62.5% 52.00 ± 19% interrupts.CPU78.PMI:Performance_monitoring_interrupts
19.67 ±105% +334.7% 85.50 ± 80% interrupts.CPU78.RES:Rescheduling_interrupts
135.00 ± 15% -48.5% 69.50 ± 39% interrupts.CPU79.NMI:Non-maskable_interrupts
135.00 ± 15% -48.5% 69.50 ± 39% interrupts.CPU79.PMI:Performance_monitoring_interrupts
110.00 -47.7% 57.50 ± 13% interrupts.CPU8.NMI:Non-maskable_interrupts
110.00 -47.7% 57.50 ± 13% interrupts.CPU8.PMI:Performance_monitoring_interrupts
818.33 +2231.9% 19082 ± 95% interrupts.CPU80.CAL:Function_call_interrupts
4.67 ± 26% +17310.7% 812.50 ± 98% interrupts.CPU80.RES:Rescheduling_interrupts
150.33 ± 17% -65.4% 52.00 ± 19% interrupts.CPU82.NMI:Non-maskable_interrupts
150.33 ± 17% -65.4% 52.00 ± 19% interrupts.CPU82.PMI:Performance_monitoring_interrupts
2393 ± 91% +185.5% 6835 ± 86% interrupts.CPU87.CAL:Function_call_interrupts
87.33 ± 35% +1880.9% 1730 ± 95% interrupts.CPU87.NMI:Non-maskable_interrupts
87.33 ± 35% +1880.9% 1730 ± 95% interrupts.CPU87.PMI:Performance_monitoring_interrupts
121.67 ±135% -97.5% 3.00 ± 33% interrupts.CPU89.RES:Rescheduling_interrupts
117.00 -50.0% 58.50 ± 23% interrupts.CPU9.NMI:Non-maskable_interrupts
117.00 -50.0% 58.50 ± 23% interrupts.CPU9.PMI:Performance_monitoring_interrupts
1989 ± 84% +36.8% 2721 ± 69% interrupts.CPU90.CAL:Function_call_interrupts
60.33 ±135% +128.7% 138.00 ± 96% interrupts.CPU90.RES:Rescheduling_interrupts
1097 ± 37% +176.5% 3033 ± 68% interrupts.CPU93.CAL:Function_call_interrupts
1729 ± 75% +80.9% 3127 ± 72% interrupts.CPU94.CAL:Function_call_interrupts
65.33 ±134% +214.5% 205.50 ± 96% interrupts.CPU94.RES:Rescheduling_interrupts
49.67 ±137% +311.7% 204.50 ± 96% interrupts.CPU95.RES:Rescheduling_interrupts
5839 ±141% +947.8% 61180 ± 94% interrupts.CPU96.140:PCI-MSI.23593057-edge.nvme0q97
3208 ±141% +4875.8% 159657 ± 72% interrupts.CPU97.141:PCI-MSI.23593058-edge.nvme0q98
2792 ±141% +6892.6% 195257 ± 99% interrupts.CPU98.142:PCI-MSI.23593059-edge.nvme0q99
3390 ±141% +7350.2% 252586 ± 89% interrupts.CPU99.143:PCI-MSI.23593060-edge.nvme0q100
61699 ± 6% -20.1% 49301 interrupts.RES:Rescheduling_interrupts
711.67 ± 35% +182.3% 2009 ± 7% interrupts.TLB:TLB_shootdowns
42.42 ± 6% -4.1 38.31 perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
1.37 ± 10% -0.1 1.23 perf-profile.calltrace.cycles-pp.btrfs_add_link.btrfs_create.path_openat.do_filp_open.do_sys_openat2
0.91 ± 3% -0.1 0.84 ± 2% perf-profile.calltrace.cycles-pp.perf_mux_hrtimer_handler.__hrtimer_run_queues.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_on_stack
0.80 ± 7% +0.1 0.86 ± 2% perf-profile.calltrace.cycles-pp.__btrfs_drop_extents.cow_file_range_inline.cow_file_range.btrfs_run_delalloc_range.writepage_delalloc
0.58 ± 12% +0.1 0.68 ± 8% perf-profile.calltrace.cycles-pp.alloc_tree_block_no_bg_flush.__btrfs_cow_block.btrfs_cow_block.btrfs_search_slot.btrfs_truncate_inode_items
1.27 ± 7% +0.1 1.36 ± 4% perf-profile.calltrace.cycles-pp.cow_file_range_inline.cow_file_range.btrfs_run_delalloc_range.writepage_delalloc.__extent_writepage
0.69 ± 10% +0.2 0.86 ± 10% perf-profile.calltrace.cycles-pp.blk_update_request.blk_mq_end_request.nvme_irq.__handle_irq_event_percpu.handle_irq_event_percpu
0.71 ± 10% +0.2 0.90 ± 11% perf-profile.calltrace.cycles-pp.blk_mq_end_request.nvme_irq.__handle_irq_event_percpu.handle_irq_event_percpu.handle_irq_event
0.80 ± 10% +0.2 1.03 ± 13% perf-profile.calltrace.cycles-pp.nvme_irq.__handle_irq_event_percpu.handle_irq_event_percpu.handle_irq_event.handle_edge_irq
0.80 ± 9% +0.2 1.03 ± 14% perf-profile.calltrace.cycles-pp.__handle_irq_event_percpu.handle_irq_event_percpu.handle_irq_event.handle_edge_irq.asm_call_on_stack
0.82 ± 9% +0.2 1.06 ± 13% perf-profile.calltrace.cycles-pp.handle_irq_event_percpu.handle_irq_event.handle_edge_irq.asm_call_on_stack.common_interrupt
0.42 ± 71% +0.3 0.68 ± 9% perf-profile.calltrace.cycles-pp.btrfs_alloc_tree_block.alloc_tree_block_no_bg_flush.__btrfs_cow_block.btrfs_cow_block.btrfs_search_slot
0.83 ± 9% +0.3 1.09 ± 14% perf-profile.calltrace.cycles-pp.handle_irq_event.handle_edge_irq.asm_call_on_stack.common_interrupt.asm_common_interrupt
0.85 ± 9% +0.3 1.12 ± 15% perf-profile.calltrace.cycles-pp.asm_call_on_stack.common_interrupt.asm_common_interrupt.cpuidle_enter_state.cpuidle_enter
0.85 ± 9% +0.3 1.12 ± 15% perf-profile.calltrace.cycles-pp.handle_edge_irq.asm_call_on_stack.common_interrupt.asm_common_interrupt.cpuidle_enter_state
0.89 ± 9% +0.3 1.18 ± 15% perf-profile.calltrace.cycles-pp.common_interrupt.asm_common_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle
0.90 ± 10% +0.3 1.19 ± 15% perf-profile.calltrace.cycles-pp.asm_common_interrupt.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
0.36 ± 70% +0.3 0.65 ± 9% perf-profile.calltrace.cycles-pp.end_bio_extent_buffer_writepage.btrfs_end_bio.blk_update_request.blk_mq_end_request.nvme_irq
1.10 ± 11% +0.3 1.40 ± 12% perf-profile.calltrace.cycles-pp.poll_idle.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
0.90 ± 31% +0.3 1.21 ± 9% perf-profile.calltrace.cycles-pp.ktime_get.tick_nohz_irq_exit.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
0.37 ± 70% +0.3 0.69 ± 8% perf-profile.calltrace.cycles-pp.btrfs_end_bio.blk_update_request.blk_mq_end_request.nvme_irq.__handle_irq_event_percpu
0.97 ± 28% +0.3 1.28 ± 8% perf-profile.calltrace.cycles-pp.tick_nohz_irq_exit.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
1.14 ± 19% +0.3 1.48 ± 8% perf-profile.calltrace.cycles-pp.rebalance_domains.__softirqentry_text_start.asm_call_on_stack.do_softirq_own_stack.irq_exit_rcu
0.49 ± 74% +0.4 0.94 ± 12% perf-profile.calltrace.cycles-pp._raw_spin_trylock.rebalance_domains.__softirqentry_text_start.asm_call_on_stack.do_softirq_own_stack
2.59 ± 8% +0.5 3.06 ± 11% perf-profile.calltrace.cycles-pp.__softirqentry_text_start.asm_call_on_stack.do_softirq_own_stack.irq_exit_rcu.sysvec_apic_timer_interrupt
2.60 ± 8% +0.5 3.07 ± 11% perf-profile.calltrace.cycles-pp.asm_call_on_stack.do_softirq_own_stack.irq_exit_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt
0.82 ± 11% +0.6 1.41 ± 10% perf-profile.calltrace.cycles-pp.write_one_eb.btree_write_cache_pages.do_writepages.__filemap_fdatawrite_range.btrfs_write_marked_extents
0.70 ± 5% +0.6 1.31 ± 34% perf-profile.calltrace.cycles-pp.__mutex_lock.btrfs_log_inode_parent.btrfs_log_dentry_safe.btrfs_sync_file.do_fsync
0.62 ± 70% +0.6 1.25 ± 11% perf-profile.calltrace.cycles-pp.timekeeping_max_deferment.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select.do_idle
1.97 ± 9% +0.6 2.59 ± 8% perf-profile.calltrace.cycles-pp.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry.start_secondary
1.57 ± 16% +0.7 2.27 ± 9% perf-profile.calltrace.cycles-pp.tick_nohz_next_event.tick_nohz_get_sleep_length.menu_select.do_idle.cpu_startup_entry
0.00 +0.8 0.80 ± 13% perf-profile.calltrace.cycles-pp.submit_extent_page.write_one_eb.btree_write_cache_pages.do_writepages.__filemap_fdatawrite_range
0.00 +0.9 0.91 ± 17% perf-profile.calltrace.cycles-pp.osq_lock.__mutex_lock.btrfs_log_inode_parent.btrfs_log_dentry_safe.btrfs_sync_file
42.63 ± 7% -4.2 38.39 perf-profile.children.cycles-pp.intel_idle
1.37 ± 10% -0.1 1.23 perf-profile.children.cycles-pp.btrfs_add_link
0.26 ± 36% -0.1 0.17 ± 3% perf-profile.children.cycles-pp.__hrtimer_next_event_base
0.17 ± 2% -0.0 0.12 ± 16% perf-profile.children.cycles-pp.prepare_to_wait
0.09 ± 18% -0.0 0.06 ± 9% perf-profile.children.cycles-pp.btrfs_node_key
0.40 ± 7% -0.0 0.37 perf-profile.children.cycles-pp.prepare_uptodate_page
0.09 ± 5% -0.0 0.06 perf-profile.children.cycles-pp.btrfs_lock_and_flush_ordered_range
0.19 -0.0 0.16 ± 9% perf-profile.children.cycles-pp.unwind_get_return_address
0.07 ± 7% -0.0 0.05 perf-profile.children.cycles-pp.do_dentry_open
0.06 ± 13% +0.0 0.08 ± 6% perf-profile.children.cycles-pp.vfs_read
0.09 ± 5% +0.0 0.11 perf-profile.children.cycles-pp.__xa_set_mark
0.06 ± 19% +0.0 0.08 ± 12% perf-profile.children.cycles-pp.copy_extent_buffer
0.07 +0.0 0.09 ± 11% perf-profile.children.cycles-pp.asm_exc_page_fault
0.07 ± 17% +0.0 0.10 ± 15% perf-profile.children.cycles-pp.delay_tsc
0.07 ± 17% +0.0 0.10 ± 5% perf-profile.children.cycles-pp.btrfs_delalloc_reserve_metadata
0.15 ± 3% +0.0 0.17 ± 5% perf-profile.children.cycles-pp.rcu_dynticks_eqs_enter
0.15 ± 17% +0.0 0.17 ± 11% perf-profile.children.cycles-pp.__fput
0.07 ± 18% +0.0 0.09 ± 11% perf-profile.children.cycles-pp.__x86_indirect_thunk_rax
0.07 ± 11% +0.0 0.10 ± 15% perf-profile.children.cycles-pp.btrfs_tree_read_unlock
0.08 +0.0 0.10 ± 14% perf-profile.children.cycles-pp.trigger_load_balance
0.12 ± 13% +0.0 0.15 ± 3% perf-profile.children.cycles-pp.select_task_rq_fair
0.10 ± 12% +0.0 0.12 ± 4% perf-profile.children.cycles-pp.find_free_extent_clustered
0.18 ± 17% +0.0 0.21 ± 12% perf-profile.children.cycles-pp.update_load_avg
0.09 ± 15% +0.0 0.12 ± 8% perf-profile.children.cycles-pp.btrfs_alloc_from_cluster
0.15 ± 10% +0.0 0.18 ± 11% perf-profile.children.cycles-pp.___might_sleep
0.08 ± 20% +0.0 0.11 ± 9% perf-profile.children.cycles-pp.__update_load_avg_cfs_rq
0.06 ± 8% +0.0 0.09 ± 11% perf-profile.children.cycles-pp.__btrfs_map_block
0.04 ± 71% +0.0 0.07 ± 14% perf-profile.children.cycles-pp.__irqentry_text_start
0.15 ± 9% +0.0 0.18 ± 8% perf-profile.children.cycles-pp.alloc_inode
0.10 ± 9% +0.0 0.14 ± 14% perf-profile.children.cycles-pp.percpu_counter_add_batch
0.02 ±141% +0.0 0.06 ± 9% perf-profile.children.cycles-pp.btrfs_must_commit_transaction
0.12 ± 6% +0.0 0.16 perf-profile.children.cycles-pp._raw_spin_lock_irq
0.02 ±141% +0.0 0.06 ± 16% perf-profile.children.cycles-pp.read_counters
0.02 ±141% +0.0 0.06 ± 16% perf-profile.children.cycles-pp.cmd_stat
0.02 ±141% +0.0 0.06 ± 16% perf-profile.children.cycles-pp.__run_perf_stat
0.02 ±141% +0.0 0.06 ± 16% perf-profile.children.cycles-pp.process_interval
0.26 ± 12% +0.0 0.30 ± 16% perf-profile.children.cycles-pp.start_transaction
0.16 ± 8% +0.0 0.21 ± 7% perf-profile.children.cycles-pp.new_inode_pseudo
0.11 ± 27% +0.0 0.16 ± 6% perf-profile.children.cycles-pp.blk_mq_request_issue_directly
0.06 ± 71% +0.0 0.11 ± 9% perf-profile.children.cycles-pp.nvme_map_data
0.18 ± 10% +0.0 0.22 ± 6% perf-profile.children.cycles-pp.new_inode
0.12 ± 23% +0.0 0.17 ± 5% perf-profile.children.cycles-pp.nvme_queue_rq
0.17 ± 4% +0.0 0.22 ± 9% perf-profile.children.cycles-pp.kmem_cache_free
0.02 ±141% +0.1 0.07 ± 14% perf-profile.children.cycles-pp.submit_bio_checks
0.00 +0.1 0.05 perf-profile.children.cycles-pp.__rq_qos_throttle
0.00 +0.1 0.05 perf-profile.children.cycles-pp.search_bitmap
0.00 +0.1 0.05 perf-profile.children.cycles-pp.timekeeping_advance
0.38 ± 21% +0.1 0.43 ± 16% perf-profile.children.cycles-pp.alloc_extent_buffer
0.32 ± 7% +0.1 0.37 ± 5% perf-profile.children.cycles-pp.end_page_writeback
0.31 ± 9% +0.1 0.36 ± 4% perf-profile.children.cycles-pp.test_clear_page_writeback
0.21 ± 3% +0.1 0.27 ± 9% perf-profile.children.cycles-pp.pick_next_task_fair
0.11 ± 27% +0.1 0.17 ± 5% perf-profile.children.cycles-pp.blk_mq_try_issue_list_directly
0.13 ± 26% +0.1 0.18 ± 8% perf-profile.children.cycles-pp.__blk_mq_try_issue_directly
0.00 +0.1 0.06 perf-profile.children.cycles-pp.tick_sched_do_timer
0.11 ± 27% +0.1 0.17 ± 8% perf-profile.children.cycles-pp.blk_mq_sched_insert_requests
0.12 ± 25% +0.1 0.18 ± 8% perf-profile.children.cycles-pp.blk_finish_plug
0.12 ± 25% +0.1 0.18 ± 8% perf-profile.children.cycles-pp.blk_flush_plug_list
0.20 ± 4% +0.1 0.26 ± 11% perf-profile.children.cycles-pp.tsc_verify_tsc_adjust
0.06 ± 8% +0.1 0.12 ± 25% perf-profile.children.cycles-pp.mempool_alloc
0.80 ± 7% +0.1 0.86 ± 2% perf-profile.children.cycles-pp.__btrfs_drop_extents
0.79 ± 15% +0.1 0.85 ± 15% perf-profile.children.cycles-pp.rcu_core
0.20 ± 7% +0.1 0.27 ± 9% perf-profile.children.cycles-pp.arch_cpu_idle_enter
0.11 ± 27% +0.1 0.18 ± 5% perf-profile.children.cycles-pp.blk_mq_flush_plug_list
0.06 ± 14% +0.1 0.14 ± 25% perf-profile.children.cycles-pp.bio_alloc_bioset
0.21 ± 17% +0.1 0.29 perf-profile.children.cycles-pp.btrfs_root_node
0.12 ± 7% +0.1 0.21 ± 11% perf-profile.children.cycles-pp.write_dev_supers
1.27 ± 7% +0.1 1.36 ± 4% perf-profile.children.cycles-pp.cow_file_range_inline
0.24 ± 17% +0.1 0.33 ± 6% perf-profile.children.cycles-pp.calc_global_load_tick
0.15 ± 41% +0.1 0.24 ± 22% perf-profile.children.cycles-pp.account_process_tick
0.23 ± 10% +0.1 0.33 ± 4% perf-profile.children.cycles-pp.write_all_supers
0.18 ± 9% +0.1 0.28 ± 25% perf-profile.children.cycles-pp.blk_mq_submit_bio
0.82 ± 13% +0.1 0.94 ± 9% perf-profile.children.cycles-pp.btrfs_alloc_tree_block
0.82 ± 13% +0.1 0.94 ± 8% perf-profile.children.cycles-pp.alloc_tree_block_no_bg_flush
1.14 ± 6% +0.1 1.26 ± 7% perf-profile.children.cycles-pp.read_block_for_search
0.23 ± 7% +0.1 0.36 ± 23% perf-profile.children.cycles-pp.submit_bio_noacct
0.61 ± 7% +0.1 0.74 ± 6% perf-profile.children.cycles-pp.end_bio_extent_buffer_writepage
0.24 ± 8% +0.1 0.37 ± 21% perf-profile.children.cycles-pp.submit_bio
0.63 ± 8% +0.1 0.78 ± 6% perf-profile.children.cycles-pp.btrfs_end_bio
0.83 ± 6% +0.2 0.99 ± 10% perf-profile.children.cycles-pp.blk_update_request
0.86 ± 7% +0.2 1.04 ± 9% perf-profile.children.cycles-pp.blk_mq_end_request
1.73 ± 11% +0.2 1.94 ± 11% perf-profile.children.cycles-pp._raw_spin_lock
0.26 ± 8% +0.2 0.48 ± 15% perf-profile.children.cycles-pp.btrfs_map_bio
0.97 ± 7% +0.2 1.20 ± 13% perf-profile.children.cycles-pp.__handle_irq_event_percpu
0.97 ± 7% +0.2 1.20 ± 13% perf-profile.children.cycles-pp.nvme_irq
0.99 ± 6% +0.2 1.23 ± 13% perf-profile.children.cycles-pp.handle_irq_event_percpu
1.01 ± 6% +0.3 1.27 ± 13% perf-profile.children.cycles-pp.handle_irq_event
1.02 ± 6% +0.3 1.29 ± 14% perf-profile.children.cycles-pp.handle_edge_irq
1.08 ± 6% +0.3 1.36 ± 15% perf-profile.children.cycles-pp.common_interrupt
1.11 ± 11% +0.3 1.41 ± 12% perf-profile.children.cycles-pp.poll_idle
1.09 ± 6% +0.3 1.39 ± 14% perf-profile.children.cycles-pp.asm_common_interrupt
0.98 ± 27% +0.3 1.30 ± 8% perf-profile.children.cycles-pp.tick_nohz_irq_exit
0.81 ± 12% +0.4 1.17 ± 15% perf-profile.children.cycles-pp.irqtime_account_irq
1.17 ± 19% +0.4 1.53 ± 8% perf-profile.children.cycles-pp.rebalance_domains
0.67 ± 47% +0.4 1.09 ± 14% perf-profile.children.cycles-pp._raw_spin_trylock
6.09 ± 8% +0.4 6.54 ± 8% perf-profile.children.cycles-pp.__filemap_fdatawrite_range
0.82 ± 19% +0.4 1.27 ± 4% perf-profile.children.cycles-pp.native_irq_return_iret
6.10 ± 8% +0.4 6.54 ± 8% perf-profile.children.cycles-pp.do_writepages
2.75 ± 7% +0.5 3.22 ± 11% perf-profile.children.cycles-pp.do_softirq_own_stack
0.76 ± 32% +0.5 1.25 ± 11% perf-profile.children.cycles-pp.timekeeping_max_deferment
2.72 ± 7% +0.5 3.21 ± 11% perf-profile.children.cycles-pp.__softirqentry_text_start
0.21 ± 20% +0.6 0.80 ± 13% perf-profile.children.cycles-pp.submit_extent_page
0.82 ± 10% +0.6 1.42 ± 10% perf-profile.children.cycles-pp.write_one_eb
1.99 ± 8% +0.6 2.62 ± 8% perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
1.60 ± 14% +0.7 2.29 ± 10% perf-profile.children.cycles-pp.tick_nohz_next_event
42.60 ± 7% -4.2 38.39 perf-profile.self.cycles-pp.intel_idle
0.37 ± 11% -0.1 0.30 perf-profile.self.cycles-pp.do_idle
0.12 ± 38% -0.1 0.06 ± 9% perf-profile.self.cycles-pp.sched_clock_cpu
0.07 ± 11% -0.0 0.03 ±100% perf-profile.self.cycles-pp.rcu_eqs_exit
0.09 ± 47% -0.0 0.05 perf-profile.self.cycles-pp.load_balance
0.15 ± 5% +0.0 0.17 ± 3% perf-profile.self.cycles-pp.mark_page_accessed
0.07 +0.0 0.09 perf-profile.self.cycles-pp.rcu_idle_exit
0.07 +0.0 0.09 ± 11% perf-profile.self.cycles-pp.trigger_load_balance
0.07 ± 17% +0.0 0.10 ± 15% perf-profile.self.cycles-pp.delay_tsc
0.15 ± 11% +0.0 0.17 ± 5% perf-profile.self.cycles-pp.rebalance_domains
0.08 ± 20% +0.0 0.11 ± 4% perf-profile.self.cycles-pp.__update_load_avg_cfs_rq
0.06 ± 14% +0.0 0.09 ± 11% perf-profile.self.cycles-pp.btrfs_tree_read_unlock
0.14 ± 3% +0.0 0.17 ± 3% perf-profile.self.cycles-pp.rcu_dynticks_eqs_enter
0.12 ± 8% +0.0 0.15 ± 3% perf-profile.self.cycles-pp.asm_sysvec_apic_timer_interrupt
0.15 ± 10% +0.0 0.18 ± 11% perf-profile.self.cycles-pp.___might_sleep
0.09 ± 10% +0.0 0.12 ± 12% perf-profile.self.cycles-pp.percpu_counter_add_batch
0.02 ±141% +0.0 0.06 ± 9% perf-profile.self.cycles-pp.__btrfs_release_delayed_node
0.02 ±141% +0.0 0.06 ± 9% perf-profile.self.cycles-pp.__x86_indirect_thunk_rax
0.02 ±141% +0.0 0.06 ± 9% perf-profile.self.cycles-pp.xas_find_marked
0.02 ±141% +0.0 0.06 ± 9% perf-profile.self.cycles-pp.__put_user_8
0.12 ± 8% +0.0 0.15 ± 3% perf-profile.self.cycles-pp._raw_spin_lock_irq
0.27 ± 8% +0.0 0.31 ± 8% perf-profile.self.cycles-pp.kmem_cache_alloc
0.16 ± 6% +0.0 0.20 ± 19% perf-profile.self.cycles-pp.note_gp_changes
0.02 ±141% +0.0 0.07 ± 7% perf-profile.self.cycles-pp.end_bio_extent_buffer_writepage
0.00 +0.1 0.05 perf-profile.self.cycles-pp.submit_extent_page
0.17 ± 7% +0.1 0.24 ± 10% perf-profile.self.cycles-pp.tsc_verify_tsc_adjust
0.23 ± 19% +0.1 0.32 ± 7% perf-profile.self.cycles-pp.calc_global_load_tick
0.15 ± 41% +0.1 0.24 ± 22% perf-profile.self.cycles-pp.account_process_tick
1.45 ± 9% +0.2 1.60 ± 3% perf-profile.self.cycles-pp._raw_spin_lock
0.40 ± 18% +0.3 0.65 ± 16% perf-profile.self.cycles-pp.tick_nohz_next_event
0.89 ± 11% +0.3 1.17 ± 11% perf-profile.self.cycles-pp.poll_idle
0.57 ± 23% +0.4 0.93 ± 21% perf-profile.self.cycles-pp.irqtime_account_irq
0.66 ± 47% +0.4 1.09 ± 14% perf-profile.self.cycles-pp._raw_spin_trylock
0.82 ± 19% +0.4 1.26 ± 4% perf-profile.self.cycles-pp.native_irq_return_iret
0.76 ± 33% +0.5 1.25 ± 11% perf-profile.self.cycles-pp.timekeeping_max_deferment
***************************************************************************************************
lkp-csl-2sp3: 96 threads Intel(R) Xeon(R) Platinum 8260L CPU @ 2.40GHz with 128G memory
=========================================================================================
compiler/cpufreq_governor/disk/filesize/fs/iterations/kconfig/nr_files_per_directory/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase/ucode:
gcc-9/performance/1BRD_32G/4K/btrfs/1x/x86_64-rhel-8.3/1fpd/1t/debian-10.4-x86_64-20200603.cgz/fsyncBeforeClose/lkp-csl-2sp3/4G/fsmark/0x400002c
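This is the same fsmark workload as in the previous section, but with 4 KB files, a single thread, one file per directory, one iteration, and a 32 GB ramdisk-backed (BRD) btrfs. A correspondingly adjusted sketch, again with the path and file count as placeholders:

  # Illustrative only -- single-threaded 4 KB-file variant of the run above.
  fs_mark -d /fs/ram0 -N 1 -n 4096 -t 1 -s 4096 -L 1 -S 1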
commit:
c9c9735c46 (" SCSI misc on 20200814")
8cfb68ac58 ("block: make QUEUE_SYSFS_BIT_FNS a little more useful")
c9c9735c46f589b9 8cfb68ac58ee842a4cb35efb1a5
---------------- ---------------------------
%stddev %change %stddev
\ | \
19410613 +5.1% 20407467 fsmark.app_overhead
5696 -9.3% 5167 fsmark.files_per_sec
182.65 +9.5% 200.03 fsmark.time.elapsed_time
182.65 +9.5% 200.03 fsmark.time.elapsed_time.max
29203 +5.9% 30913 fsmark.time.involuntary_context_switches
167.42 +9.8% 183.88 fsmark.time.system_time
1150458 +11.5% 1283115 fsmark.time.voluntary_context_switches
34.13 ± 2% -3.7% 32.88 ± 2% boot-time.boot
2.29 +2.3% 2.34 iostat.cpu.system
16766916 +28.3% 21514074 meminfo.Memused
145271 +15.3% 167538 meminfo.max_used_kB
3063753 ± 91% +148.6% 7616841 numa-numastat.node0.local_node
3071532 ± 90% +148.0% 7616887 numa-numastat.node0.numa_hit
451940 +74.9% 790445 vmstat.io.bo
43632 +28.4% 56035 vmstat.system.cs
2271 ± 51% +51.1% 3430 ± 3% numa-meminfo.node0.Active(anon)
57131462 ± 12% -20.9% 45167314 numa-meminfo.node0.MemFree
8544572 ± 85% +140.0% 20508720 numa-meminfo.node0.MemUsed
567.25 ± 51% +51.3% 858.00 ± 3% numa-vmstat.node0.nr_active_anon
14283328 ± 12% -20.9% 11291336 numa-vmstat.node0.nr_free_pages
567.25 ± 51% +51.3% 858.00 ± 3% numa-vmstat.node0.nr_zone_active_anon
2561586 ± 70% +112.7% 5449148 numa-vmstat.node0.numa_hit
2489447 ± 72% +116.3% 5383984 numa-vmstat.node0.numa_local
42956790 +89.1% 81216665 ± 2% cpuidle.C1.time
3157312 +59.2% 5025868 cpuidle.C1.usage
34975386 ± 92% +392.7% 1.723e+08 ± 24% cpuidle.C6.time
76142 ± 64% +207.1% 233809 ± 19% cpuidle.C6.usage
240094 ± 2% +31.1% 314781 ± 2% cpuidle.POLL.time
64726 ± 7% +21.5% 78656 ± 7% cpuidle.POLL.usage
9100 ± 7% +12.7% 10259 ± 4% slabinfo.blkdev_ioc.active_objs
9136 ± 7% +12.6% 10287 ± 4% slabinfo.blkdev_ioc.num_objs
3402 ± 13% -22.4% 2641 ± 2% slabinfo.dmaengine-unmap-16.active_objs
3402 ± 13% -22.4% 2641 ± 2% slabinfo.dmaengine-unmap-16.num_objs
22968 ± 5% +17.3% 26942 ± 9% slabinfo.fsnotify_mark_connector.active_objs
23046 ± 5% +17.1% 26998 ± 9% slabinfo.fsnotify_mark_connector.num_objs
88733 +21.4% 107690 slabinfo.radix_tree_node.active_objs
1594 +21.3% 1933 slabinfo.radix_tree_node.active_slabs
89299 +21.3% 108324 slabinfo.radix_tree_node.num_objs
1594 +21.3% 1933 slabinfo.radix_tree_node.num_slabs
988396 +4.8% 1035737 proc-vmstat.nr_active_file
12645 +2.7% 12988 proc-vmstat.nr_dirty
3008632 -3.8% 2894869 proc-vmstat.nr_dirty_background_threshold
6024620 -3.8% 5796818 proc-vmstat.nr_dirty_threshold
1803926 +2.7% 1853297 proc-vmstat.nr_file_pages
28733741 -4.1% 27545084 proc-vmstat.nr_free_pages
988396 +4.8% 1035737 proc-vmstat.nr_zone_active_file
12680 +2.8% 13036 proc-vmstat.nr_zone_write_pending
5925748 +30.3% 7720309 proc-vmstat.numa_hit
5894616 +30.4% 7689177 proc-vmstat.numa_local
1047267 +4.6% 1095186 proc-vmstat.pgactivate
6770338 +26.5% 8563427 proc-vmstat.pgalloc_normal
554850 +8.2% 600120 proc-vmstat.pgfault
83933236 +91.2% 1.605e+08 proc-vmstat.pgpgout
1611 ± 21% +73.2% 2790 ± 15% sched_debug.cfs_rq:/.exec_clock.avg
9.60 ± 16% +124.1% 21.52 ± 42% sched_debug.cfs_rq:/.exec_clock.min
6375 ± 36% +37.6% 8772 ± 3% sched_debug.cfs_rq:/.exec_clock.stddev
28068 ± 12% +76.8% 49613 ± 18% sched_debug.cfs_rq:/.load.avg
0.10 ± 7% +35.9% 0.13 ± 11% sched_debug.cfs_rq:/.nr_running.avg
0.28 ± 4% +20.0% 0.34 ± 4% sched_debug.cfs_rq:/.nr_running.stddev
24.11 ± 15% +48.5% 35.80 ± 3% sched_debug.cfs_rq:/.util_est_enqueued.avg
117.21 ± 10% +29.6% 151.89 sched_debug.cfs_rq:/.util_est_enqueued.stddev
37298 ± 27% -84.7% 5703 ± 3% sched_debug.cpu.avg_idle.min
2.44 -18.6% 1.99 ± 5% sched_debug.cpu.clock.stddev
40402 +32.0% 53322 sched_debug.cpu.nr_switches.avg
38722 +33.5% 51687 sched_debug.cpu.sched_count.avg
19144 +33.9% 25625 sched_debug.cpu.sched_goidle.avg
19304 +33.6% 25793 sched_debug.cpu.ttwu_count.avg
505232 ± 38% +45.8% 736671 ± 9% sched_debug.cpu.ttwu_count.max
62001 ± 26% +53.2% 94998 ± 6% sched_debug.cpu.ttwu_count.stddev
8.34 +7.7% 8.98 ± 2% perf-stat.i.MPKI
0.85 ± 2% +0.0 0.89 perf-stat.i.branch-miss-rate%
6869817 ± 2% +5.9% 7275908 perf-stat.i.cache-misses
38586376 +4.2% 40200722 ± 2% perf-stat.i.cache-references
44264 +28.4% 56855 perf-stat.i.context-switches
1.34 +6.8% 1.43 perf-stat.i.cpi
6.288e+09 +3.4% 6.5e+09 perf-stat.i.cpu-cycles
995.58 ± 2% -3.6% 959.41 perf-stat.i.cycles-between-cache-misses
0.03 ± 24% +0.0 0.05 ± 13% perf-stat.i.dTLB-load-miss-rate%
342420 ± 24% +56.5% 535819 ± 13% perf-stat.i.dTLB-load-misses
6.17e+08 -2.5% 6.013e+08 perf-stat.i.dTLB-stores
2903078 +2.3% 2969751 perf-stat.i.iTLB-load-misses
4.745e+09 -3.4% 4.584e+09 perf-stat.i.instructions
1650 -5.5% 1560 perf-stat.i.instructions-per-iTLB-miss
0.75 -6.6% 0.70 perf-stat.i.ipc
0.07 +3.4% 0.07 perf-stat.i.metric.GHz
1095354 ± 5% +23.3% 1350789 ± 4% perf-stat.i.node-stores
8.13 +7.8% 8.77 ± 2% perf-stat.overall.MPKI
0.92 ± 2% +0.0 0.96 perf-stat.overall.branch-miss-rate%
1.33 +7.0% 1.42 perf-stat.overall.cpi
0.03 ± 24% +0.0 0.04 ± 13% perf-stat.overall.dTLB-load-miss-rate%
1634 -5.6% 1543 perf-stat.overall.instructions-per-iTLB-miss
0.75 -6.5% 0.71 perf-stat.overall.ipc
6831661 ± 2% +6.0% 7239498 perf-stat.ps.cache-misses
38375771 +4.2% 39999295 ± 2% perf-stat.ps.cache-references
44001 +28.5% 56543 perf-stat.ps.context-switches
6.254e+09 +3.4% 6.467e+09 perf-stat.ps.cpu-cycles
340510 ± 24% +56.5% 533011 ± 13% perf-stat.ps.dTLB-load-misses
6.136e+08 -2.5% 5.983e+08 perf-stat.ps.dTLB-stores
2887228 +2.3% 2954861 perf-stat.ps.iTLB-load-misses
4.719e+09 -3.3% 4.561e+09 perf-stat.ps.instructions
1089509 ± 5% +23.4% 1344191 ± 4% perf-stat.ps.node-stores
8.674e+11 +5.6% 9.164e+11 perf-stat.total.instructions
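A note on the derived perf-stat metrics above: ipc, cpi and MPKI are ratios of the raw counters, so they can be re-derived from the table itself. The standalone C sketch below (values copied from the first column of the perf-stat.ps rows; purely illustrative) reproduces the reported overall.ipc, overall.cpi and overall.MPKI figures, and shows that MPKI here corresponds to cache-references per 1000 instructions.

/* Re-derive the overall.ipc, overall.cpi and overall.MPKI values reported
 * above from the raw per-second counters in the first column.
 * Illustrative only; the numbers are copied from the table. */
#include <stdio.h>

int main(void)
{
        double instructions = 4.719e9;     /* perf-stat.ps.instructions     */
        double cycles       = 6.254e9;     /* perf-stat.ps.cpu-cycles       */
        double cache_refs   = 38375771.0;  /* perf-stat.ps.cache-references */

        printf("ipc  = %.2f\n", instructions / cycles);              /* ~0.75 */
        printf("cpi  = %.2f\n", cycles / instructions);              /* ~1.33 */
        printf("MPKI = %.2f\n", cache_refs * 1000.0 / instructions); /* ~8.13 */
        return 0;
}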
106.50 ± 5% +17.4% 125.00 ± 8% interrupts.81:PCI-MSI.12589063-edge.eth3-TxRx-6
368.50 +9.4% 403.25 interrupts.9:IO-APIC.9-fasteoi.acpi
106.50 ± 5% +17.4% 125.00 ± 8% interrupts.CPU0.81:PCI-MSI.12589063-edge.eth3-TxRx-6
368360 +9.4% 403006 interrupts.CPU0.LOC:Local_timer_interrupts
368.50 +9.4% 403.25 interrupts.CPU1.9:IO-APIC.9-fasteoi.acpi
368501 +9.4% 403021 interrupts.CPU1.LOC:Local_timer_interrupts
317.00 ± 84% +347.2% 1417 ± 32% interrupts.CPU1.NMI:Non-maskable_interrupts
317.00 ± 84% +347.2% 1417 ± 32% interrupts.CPU1.PMI:Performance_monitoring_interrupts
772.25 ± 99% +190.2% 2241 ± 7% interrupts.CPU1.RES:Rescheduling_interrupts
368509 +9.4% 403084 interrupts.CPU10.LOC:Local_timer_interrupts
368494 +9.4% 403022 interrupts.CPU11.LOC:Local_timer_interrupts
368490 +9.4% 403051 interrupts.CPU12.LOC:Local_timer_interrupts
58.50 ±122% +317.9% 244.50 ± 46% interrupts.CPU12.RES:Rescheduling_interrupts
465.00 ± 13% +181.9% 1311 ± 14% interrupts.CPU13.CAL:Function_call_interrupts
368341 +9.4% 403085 interrupts.CPU13.LOC:Local_timer_interrupts
2.75 ± 97% +3400.0% 96.25 ± 84% interrupts.CPU13.RES:Rescheduling_interrupts
368479 +9.4% 403030 interrupts.CPU14.LOC:Local_timer_interrupts
27.25 ± 65% +494.5% 162.00 ± 39% interrupts.CPU14.RES:Rescheduling_interrupts
368395 +9.4% 403043 interrupts.CPU16.LOC:Local_timer_interrupts
66.50 ± 42% +225.9% 216.75 ± 74% interrupts.CPU16.NMI:Non-maskable_interrupts
66.50 ± 42% +225.9% 216.75 ± 74% interrupts.CPU16.PMI:Performance_monitoring_interrupts
368484 +9.4% 403038 interrupts.CPU17.LOC:Local_timer_interrupts
94.75 ± 24% +114.8% 203.50 ± 46% interrupts.CPU17.NMI:Non-maskable_interrupts
94.75 ± 24% +114.8% 203.50 ± 46% interrupts.CPU17.PMI:Performance_monitoring_interrupts
28.75 ±155% +355.7% 131.00 ± 79% interrupts.CPU17.RES:Rescheduling_interrupts
368495 +9.4% 403099 interrupts.CPU18.LOC:Local_timer_interrupts
89.50 ± 23% +314.2% 370.75 ± 49% interrupts.CPU18.NMI:Non-maskable_interrupts
89.50 ± 23% +314.2% 370.75 ± 49% interrupts.CPU18.PMI:Performance_monitoring_interrupts
9.00 ±141% +1550.0% 148.50 ± 82% interrupts.CPU18.RES:Rescheduling_interrupts
368479 +9.4% 403090 interrupts.CPU19.LOC:Local_timer_interrupts
80.00 ± 23% +174.1% 219.25 ± 79% interrupts.CPU19.NMI:Non-maskable_interrupts
80.00 ± 23% +174.1% 219.25 ± 79% interrupts.CPU19.PMI:Performance_monitoring_interrupts
368477 +9.4% 403020 interrupts.CPU2.LOC:Local_timer_interrupts
392.25 ± 99% +150.0% 980.75 ± 9% interrupts.CPU2.RES:Rescheduling_interrupts
368548 +9.4% 403080 interrupts.CPU20.LOC:Local_timer_interrupts
79.25 ± 28% +77.6% 140.75 ± 24% interrupts.CPU20.NMI:Non-maskable_interrupts
79.25 ± 28% +77.6% 140.75 ± 24% interrupts.CPU20.PMI:Performance_monitoring_interrupts
368582 +9.3% 403028 interrupts.CPU21.LOC:Local_timer_interrupts
368540 +9.4% 403071 interrupts.CPU22.LOC:Local_timer_interrupts
462.25 ± 14% +110.0% 970.75 ± 48% interrupts.CPU23.CAL:Function_call_interrupts
368403 +9.4% 403013 interrupts.CPU23.LOC:Local_timer_interrupts
368416 +9.3% 402784 interrupts.CPU24.LOC:Local_timer_interrupts
3911 ± 98% -98.4% 63.00 ± 25% interrupts.CPU24.NMI:Non-maskable_interrupts
3911 ± 98% -98.4% 63.00 ± 25% interrupts.CPU24.PMI:Performance_monitoring_interrupts
368327 +9.4% 402990 interrupts.CPU25.LOC:Local_timer_interrupts
368484 +9.4% 403030 interrupts.CPU3.LOC:Local_timer_interrupts
69.75 ± 42% +799.6% 627.50 ± 22% interrupts.CPU3.NMI:Non-maskable_interrupts
69.75 ± 42% +799.6% 627.50 ± 22% interrupts.CPU3.PMI:Performance_monitoring_interrupts
203.50 ± 98% +168.4% 546.25 ± 23% interrupts.CPU3.RES:Rescheduling_interrupts
368458 +9.4% 403014 interrupts.CPU36.LOC:Local_timer_interrupts
368359 +9.4% 403017 interrupts.CPU38.LOC:Local_timer_interrupts
111.00 ± 59% -99.3% 0.75 ±110% interrupts.CPU40.RES:Rescheduling_interrupts
476.50 ± 19% +79.5% 855.50 ± 32% interrupts.CPU41.CAL:Function_call_interrupts
473.25 ± 14% +123.0% 1055 ± 20% interrupts.CPU45.CAL:Function_call_interrupts
86.75 ±101% -99.4% 0.50 ±173% interrupts.CPU47.RES:Rescheduling_interrupts
649.00 ± 24% +246.8% 2250 ± 77% interrupts.CPU48.CAL:Function_call_interrupts
368383 +9.4% 402831 interrupts.CPU48.LOC:Local_timer_interrupts
159.50 ±102% +312.4% 657.75 ± 4% interrupts.CPU48.RES:Rescheduling_interrupts
603.50 ± 43% +52.8% 922.25 ± 32% interrupts.CPU49.CAL:Function_call_interrupts
368483 +9.3% 402872 interrupts.CPU49.LOC:Local_timer_interrupts
368513 +9.4% 403106 interrupts.CPU5.LOC:Local_timer_interrupts
24.25 ± 71% +1115.5% 294.75 ± 53% interrupts.CPU50.RES:Rescheduling_interrupts
368478 +9.4% 403012 interrupts.CPU51.LOC:Local_timer_interrupts
92.50 ± 23% +219.2% 295.25 ± 33% interrupts.CPU51.NMI:Non-maskable_interrupts
92.50 ± 23% +219.2% 295.25 ± 33% interrupts.CPU51.PMI:Performance_monitoring_interrupts
368514 +9.4% 403024 interrupts.CPU52.LOC:Local_timer_interrupts
368533 +9.4% 403010 interrupts.CPU53.LOC:Local_timer_interrupts
479.25 ± 12% +116.4% 1037 ± 18% interrupts.CPU55.CAL:Function_call_interrupts
368514 +9.4% 403050 interrupts.CPU55.LOC:Local_timer_interrupts
466.75 ± 13% +100.3% 934.75 ± 28% interrupts.CPU56.CAL:Function_call_interrupts
368513 +9.4% 403083 interrupts.CPU56.LOC:Local_timer_interrupts
11.25 ± 68% +1502.2% 180.25 ± 50% interrupts.CPU56.RES:Rescheduling_interrupts
462.75 ± 13% +83.6% 849.75 ± 32% interrupts.CPU57.CAL:Function_call_interrupts
368508 +9.4% 403065 interrupts.CPU57.LOC:Local_timer_interrupts
467.00 ± 13% +99.5% 931.50 ± 40% interrupts.CPU58.CAL:Function_call_interrupts
368568 +9.4% 403080 interrupts.CPU58.LOC:Local_timer_interrupts
368484 +9.4% 403089 interrupts.CPU59.LOC:Local_timer_interrupts
368426 +9.4% 403025 interrupts.CPU6.LOC:Local_timer_interrupts
368490 +9.4% 403084 interrupts.CPU60.LOC:Local_timer_interrupts
368494 +9.4% 403058 interrupts.CPU61.LOC:Local_timer_interrupts
477.00 ± 12% +141.5% 1152 ± 15% interrupts.CPU62.CAL:Function_call_interrupts
368483 +9.4% 403078 interrupts.CPU62.LOC:Local_timer_interrupts
8.00 ± 84% +1665.6% 141.25 ± 71% interrupts.CPU62.RES:Rescheduling_interrupts
368511 +9.4% 403052 interrupts.CPU63.LOC:Local_timer_interrupts
368446 +9.4% 403034 interrupts.CPU64.LOC:Local_timer_interrupts
33.50 ±134% +459.7% 187.50 ± 87% interrupts.CPU64.RES:Rescheduling_interrupts
368541 +9.3% 402980 interrupts.CPU65.LOC:Local_timer_interrupts
368502 +9.4% 403059 interrupts.CPU66.LOC:Local_timer_interrupts
75.25 ± 31% +378.4% 360.00 ± 44% interrupts.CPU66.NMI:Non-maskable_interrupts
75.25 ± 31% +378.4% 360.00 ± 44% interrupts.CPU66.PMI:Performance_monitoring_interrupts
368483 +9.4% 403045 interrupts.CPU67.LOC:Local_timer_interrupts
13.25 ± 82% +913.2% 134.25 ± 48% interrupts.CPU67.RES:Rescheduling_interrupts
368504 +9.4% 403050 interrupts.CPU68.LOC:Local_timer_interrupts
81.50 ± 38% +75.2% 142.75 ± 15% interrupts.CPU68.NMI:Non-maskable_interrupts
81.50 ± 38% +75.2% 142.75 ± 15% interrupts.CPU68.PMI:Performance_monitoring_interrupts
463.25 ± 13% +84.3% 853.75 ± 31% interrupts.CPU69.CAL:Function_call_interrupts
368498 +9.4% 403062 interrupts.CPU69.LOC:Local_timer_interrupts
368536 +9.4% 403075 interrupts.CPU7.LOC:Local_timer_interrupts
86.75 ± 42% +242.7% 297.25 ± 87% interrupts.CPU7.NMI:Non-maskable_interrupts
86.75 ± 42% +242.7% 297.25 ± 87% interrupts.CPU7.PMI:Performance_monitoring_interrupts
368577 +9.3% 402957 interrupts.CPU70.LOC:Local_timer_interrupts
368485 +9.4% 403021 interrupts.CPU71.LOC:Local_timer_interrupts
8.50 ±114% +2494.1% 220.50 ± 66% interrupts.CPU71.RES:Rescheduling_interrupts
484.00 ± 17% +82.7% 884.50 ± 30% interrupts.CPU8.CAL:Function_call_interrupts
368541 +9.4% 403057 interrupts.CPU8.LOC:Local_timer_interrupts
483.50 ± 18% +91.7% 926.75 ± 18% interrupts.CPU81.CAL:Function_call_interrupts
368480 +9.4% 402987 interrupts.CPU83.LOC:Local_timer_interrupts
472.00 ± 14% +115.8% 1018 ± 42% interrupts.CPU9.CAL:Function_call_interrupts
368492 +9.4% 403042 interrupts.CPU9.LOC:Local_timer_interrupts
458.25 ± 14% +298.5% 1826 ± 81% interrupts.CPU92.CAL:Function_call_interrupts
12715 +23.0% 15637 interrupts.RES:Rescheduling_interrupts
698.50 ± 18% -27.1% 509.50 ± 4% interrupts.TLB:TLB_shootdowns
66124 ± 18% +65.4% 109342 ± 4% softirqs.CPU1.RCU
29936 ± 11% +36.4% 40843 ± 2% softirqs.CPU1.SCHED
53711 ± 3% +13.3% 60852 softirqs.CPU10.RCU
55057 ± 2% +11.4% 61309 ± 2% softirqs.CPU11.RCU
26201 ± 4% +8.2% 28361 ± 3% softirqs.CPU12.SCHED
51913 ± 3% +15.9% 60160 ± 3% softirqs.CPU13.RCU
52291 ± 4% +13.6% 59419 ± 5% softirqs.CPU14.RCU
26436 +7.8% 28492 ± 3% softirqs.CPU14.SCHED
52418 ± 4% +14.1% 59806 ± 3% softirqs.CPU15.RCU
42863 ± 16% +30.7% 56014 ± 4% softirqs.CPU16.RCU
47207 +15.4% 54489 ± 3% softirqs.CPU17.RCU
45598 ± 5% +19.0% 54250 ± 5% softirqs.CPU18.RCU
47201 +15.6% 54565 ± 3% softirqs.CPU19.RCU
62725 ± 14% +22.4% 76783 ± 5% softirqs.CPU2.RCU
48249 +13.4% 54716 ± 3% softirqs.CPU20.RCU
25403 ± 8% +13.2% 28757 ± 2% softirqs.CPU20.SCHED
46907 +15.7% 54281 ± 4% softirqs.CPU21.RCU
46984 +18.6% 55738 ± 2% softirqs.CPU22.RCU
26492 ± 2% +8.9% 28841 ± 3% softirqs.CPU22.SCHED
46377 +17.7% 54580 ± 3% softirqs.CPU23.RCU
26141 ± 4% +12.0% 29281 ± 4% softirqs.CPU27.SCHED
26917 ± 5% +9.6% 29488 ± 4% softirqs.CPU28.SCHED
47149 ± 3% +19.4% 56316 ± 9% softirqs.CPU29.RCU
26015 ± 3% +13.4% 29507 ± 6% softirqs.CPU29.SCHED
57421 ± 7% +20.1% 68943 ± 4% softirqs.CPU3.RCU
27482 ± 7% +14.5% 31472 ± 3% softirqs.CPU3.SCHED
25844 ± 4% +12.5% 29077 ± 5% softirqs.CPU32.SCHED
53621 +22.5% 65688 ± 8% softirqs.CPU34.RCU
52765 ± 5% +24.8% 65845 ± 8% softirqs.CPU35.RCU
25860 ± 3% +13.3% 29292 ± 4% softirqs.CPU35.SCHED
52615 ± 2% +23.7% 65084 ± 9% softirqs.CPU36.RCU
54186 +21.1% 65609 ± 8% softirqs.CPU37.RCU
25659 ± 4% +12.9% 28964 ± 4% softirqs.CPU37.SCHED
53617 +16.5% 62478 ± 6% softirqs.CPU38.RCU
25588 ± 3% +13.8% 29119 ± 4% softirqs.CPU38.SCHED
55244 ± 4% +14.9% 63464 ± 7% softirqs.CPU39.RCU
49939 ± 19% +31.8% 65810 ± 8% softirqs.CPU40.RCU
23691 ± 20% +23.2% 29199 ± 5% softirqs.CPU40.SCHED
53640 +24.9% 66998 ± 6% softirqs.CPU41.RCU
53624 +22.1% 65478 ± 8% softirqs.CPU42.RCU
25837 ± 4% +11.1% 28713 ± 4% softirqs.CPU42.SCHED
54458 ± 6% +20.7% 65722 ± 8% softirqs.CPU43.RCU
25930 ± 4% +13.2% 29343 ± 5% softirqs.CPU43.SCHED
56462 ± 7% +14.7% 64741 ± 8% softirqs.CPU44.RCU
26312 ± 4% +9.5% 28818 ± 4% softirqs.CPU44.SCHED
25799 ± 3% +12.1% 28928 ± 3% softirqs.CPU45.SCHED
53606 +20.9% 64829 ± 8% softirqs.CPU46.RCU
25814 ± 3% +13.4% 29278 ± 5% softirqs.CPU46.SCHED
54296 ± 2% +21.0% 65686 ± 8% softirqs.CPU47.RCU
55722 ± 3% +20.0% 66880 ± 5% softirqs.CPU48.RCU
23562 ± 18% +28.3% 30241 softirqs.CPU48.SCHED
49659 ± 13% +21.8% 60465 ± 2% softirqs.CPU49.RCU
53441 +19.3% 63754 ± 4% softirqs.CPU5.RCU
26612 ± 3% +10.7% 29465 ± 3% softirqs.CPU5.SCHED
52655 ± 3% +13.6% 59818 ± 2% softirqs.CPU50.RCU
52667 ± 2% +13.6% 59855 ± 3% softirqs.CPU51.RCU
52770 ± 2% +9.0% 57501 ± 2% softirqs.CPU52.RCU
52429 ± 3% +12.9% 59179 softirqs.CPU53.RCU
52308 ± 3% +12.8% 59001 ± 2% softirqs.CPU54.RCU
51341 ± 5% +12.3% 57647 ± 3% softirqs.CPU55.RCU
52564 ± 2% +11.9% 58827 softirqs.CPU56.RCU
53067 ± 2% +10.4% 58610 softirqs.CPU57.RCU
51959 ± 3% +14.0% 59258 softirqs.CPU58.RCU
52358 ± 3% +11.0% 58126 ± 2% softirqs.CPU59.RCU
52120 ± 5% +16.9% 60948 ± 2% softirqs.CPU6.RCU
45704 ± 19% +27.9% 58434 ± 2% softirqs.CPU61.RCU
48526 ± 7% +17.1% 56804 ± 3% softirqs.CPU63.RCU
46283 ± 2% +17.7% 54487 ± 3% softirqs.CPU64.RCU
46552 +17.2% 54573 ± 2% softirqs.CPU65.RCU
44740 ± 5% +23.4% 55197 ± 7% softirqs.CPU66.RCU
26720 ± 3% +9.7% 29305 softirqs.CPU66.SCHED
46224 ± 2% +15.4% 53359 ± 3% softirqs.CPU67.RCU
46566 +16.6% 54295 ± 3% softirqs.CPU68.RCU
26668 ± 3% +9.0% 29077 ± 3% softirqs.CPU68.SCHED
45737 +17.0% 53505 ± 5% softirqs.CPU69.RCU
26199 +9.6% 28724 ± 2% softirqs.CPU69.SCHED
51440 ± 4% +14.3% 58814 ± 3% softirqs.CPU7.RCU
46517 ± 2% +18.1% 54935 ± 3% softirqs.CPU70.RCU
45841 +18.4% 54262 ± 3% softirqs.CPU71.RCU
46300 +23.2% 57057 ± 8% softirqs.CPU73.RCU
45814 ± 2% +23.5% 56560 ± 8% softirqs.CPU74.RCU
26014 ± 4% +13.2% 29440 ± 4% softirqs.CPU74.SCHED
26300 ± 3% +12.1% 29469 ± 4% softirqs.CPU75.SCHED
25955 ± 2% +13.5% 29468 ± 4% softirqs.CPU76.SCHED
46380 ± 2% +23.0% 57034 ± 8% softirqs.CPU77.RCU
26218 ± 4% +13.4% 29729 ± 4% softirqs.CPU77.SCHED
46785 +25.6% 58763 ± 11% softirqs.CPU78.RCU
26117 ± 2% +13.6% 29656 ± 4% softirqs.CPU78.SCHED
53711 +12.2% 60267 ± 2% softirqs.CPU8.RCU
53965 ± 2% +22.0% 65854 ± 8% softirqs.CPU80.RCU
25991 ± 3% +14.9% 29860 ± 5% softirqs.CPU80.SCHED
26240 ± 4% +12.2% 29433 ± 5% softirqs.CPU81.SCHED
53854 +23.4% 66469 ± 8% softirqs.CPU82.RCU
25985 ± 2% +13.9% 29602 ± 5% softirqs.CPU82.SCHED
54014 ± 3% +23.6% 66770 ± 8% softirqs.CPU83.RCU
26480 ± 3% +11.8% 29592 ± 5% softirqs.CPU83.SCHED
52154 ± 2% +25.5% 65451 ± 8% softirqs.CPU84.RCU
26026 ± 2% +13.7% 29599 ± 5% softirqs.CPU84.SCHED
53379 +22.7% 65510 ± 8% softirqs.CPU85.RCU
25946 ± 4% +12.9% 29294 ± 4% softirqs.CPU85.SCHED
52552 ± 2% +25.3% 65857 ± 8% softirqs.CPU86.RCU
26034 ± 3% +13.3% 29507 ± 4% softirqs.CPU86.SCHED
25965 ± 2% +13.1% 29369 ± 5% softirqs.CPU87.SCHED
51459 ± 10% +29.1% 66411 ± 8% softirqs.CPU88.RCU
26294 ± 2% +12.5% 29576 ± 5% softirqs.CPU88.SCHED
54838 ± 4% +20.4% 66011 ± 8% softirqs.CPU89.RCU
23955 ± 16% +24.4% 29809 ± 4% softirqs.CPU89.SCHED
52941 ± 2% +13.0% 59835 ± 2% softirqs.CPU9.RCU
51463 ± 4% +27.5% 65598 ± 8% softirqs.CPU90.RCU
25984 ± 4% +13.2% 29424 ± 5% softirqs.CPU90.SCHED
52447 ± 2% +25.5% 65845 ± 8% softirqs.CPU91.RCU
26028 ± 4% +14.7% 29863 ± 5% softirqs.CPU91.SCHED
55285 ± 5% +17.6% 65009 ± 8% softirqs.CPU92.RCU
26552 ± 4% +10.8% 29432 ± 5% softirqs.CPU92.SCHED
53586 +23.2% 66008 ± 8% softirqs.CPU93.RCU
25806 ± 2% +14.1% 29457 ± 4% softirqs.CPU93.SCHED
54040 +21.9% 65871 ± 8% softirqs.CPU94.RCU
26003 ± 2% +13.7% 29555 ± 4% softirqs.CPU94.SCHED
54132 +22.6% 66393 ± 8% softirqs.CPU95.RCU
26273 ± 2% +12.8% 29640 ± 4% softirqs.CPU95.SCHED
5067631 +17.7% 5963495 ± 5% softirqs.RCU
2543158 +11.0% 2822356 softirqs.SCHED
32993 ± 3% +12.6% 37161 ± 4% softirqs.TIMER
5.24 ± 19% -2.4 2.80 ± 17% perf-profile.calltrace.cycles-pp.btree_csum_one_bio.btree_submit_bio_hook.submit_one_bio.btree_write_cache_pages.do_writepages
2.59 ± 21% -1.5 1.05 ± 20% perf-profile.calltrace.cycles-pp.btrfs_check_node.btree_csum_one_bio.btree_submit_bio_hook.submit_one_bio.btree_write_cache_pages
0.93 ± 22% -0.6 0.31 ±101% perf-profile.calltrace.cycles-pp.btrfs_get_32.check_leaf.btree_csum_one_bio.btree_submit_bio_hook.submit_one_bio
0.83 ± 9% +0.4 1.27 ± 24% perf-profile.calltrace.cycles-pp.irq_enter_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state.cpuidle_enter
0.00 +0.5 0.54 ± 5% perf-profile.calltrace.cycles-pp.__filemap_fdatawrite_range.btrfs_write_marked_extents.btrfs_write_and_wait_transaction.btrfs_commit_transaction.btrfs_sync_file
0.00 +0.5 0.54 ± 5% perf-profile.calltrace.cycles-pp.do_writepages.__filemap_fdatawrite_range.btrfs_write_marked_extents.btrfs_write_and_wait_transaction.btrfs_commit_transaction
0.00 +0.5 0.54 ± 5% perf-profile.calltrace.cycles-pp.btree_write_cache_pages.do_writepages.__filemap_fdatawrite_range.btrfs_write_marked_extents.btrfs_write_and_wait_transaction
0.00 +0.5 0.55 ± 5% perf-profile.calltrace.cycles-pp.btrfs_write_marked_extents.btrfs_write_and_wait_transaction.btrfs_commit_transaction.btrfs_sync_file.do_fsync
0.00 +0.6 0.55 ± 5% perf-profile.calltrace.cycles-pp.btrfs_write_and_wait_transaction.btrfs_commit_transaction.btrfs_sync_file.do_fsync.__x64_sys_fsync
0.00 +0.6 0.56 ± 4% perf-profile.calltrace.cycles-pp._raw_spin_lock.brd_insert_page.brd_do_bvec.brd_submit_bio.submit_bio_noacct
0.00 +0.6 0.62 ± 13% perf-profile.calltrace.cycles-pp.__btrfs_commit_inode_delayed_items.__btrfs_run_delayed_items.flush_space.btrfs_async_reclaim_metadata_space.process_one_work
0.30 ±100% +0.6 0.95 ± 30% perf-profile.calltrace.cycles-pp.tick_irq_enter.irq_enter_rcu.sysvec_apic_timer_interrupt.asm_sysvec_apic_timer_interrupt.cpuidle_enter_state
0.00 +0.7 0.74 ± 13% perf-profile.calltrace.cycles-pp.__btrfs_run_delayed_items.flush_space.btrfs_async_reclaim_metadata_space.process_one_work.worker_thread
0.14 ±173% +0.8 0.97 ± 9% perf-profile.calltrace.cycles-pp.brd_insert_page.brd_do_bvec.brd_submit_bio.submit_bio_noacct.submit_bio
1.08 ± 19% +0.9 1.93 ± 36% perf-profile.calltrace.cycles-pp.ktime_get.clockevents_program_event.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_on_stack
1.79 ± 17% +0.9 2.71 ± 25% perf-profile.calltrace.cycles-pp.clockevents_program_event.hrtimer_interrupt.__sysvec_apic_timer_interrupt.asm_call_on_stack.sysvec_apic_timer_interrupt
2.91 ± 8% +1.0 3.86 ± 12% perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
0.00 +1.0 1.00 ± 25% perf-profile.calltrace.cycles-pp.btrfs_check_node.btree_csum_one_bio.btree_submit_bio_hook.submit_one_bio.submit_extent_page
0.30 ±100% +1.0 1.34 ± 15% perf-profile.calltrace.cycles-pp.flush_space.btrfs_async_reclaim_metadata_space.process_one_work.worker_thread.kthread
0.30 ±100% +1.1 1.41 ± 15% perf-profile.calltrace.cycles-pp.btrfs_async_reclaim_metadata_space.process_one_work.worker_thread.kthread.ret_from_fork
3.29 ± 8% +1.2 4.50 ± 11% perf-profile.calltrace.cycles-pp.ret_from_fork
3.29 ± 8% +1.2 4.50 ± 11% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
3.11 ± 9% +1.2 4.33 ± 12% perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
0.00 +1.3 1.34 ± 5% perf-profile.calltrace.cycles-pp.submit_bio.btrfs_map_bio.btree_submit_bio_hook.submit_one_bio.submit_extent_page
0.00 +1.4 1.40 ± 6% perf-profile.calltrace.cycles-pp.btrfs_map_bio.btree_submit_bio_hook.submit_one_bio.submit_extent_page.write_one_eb
0.00 +1.5 1.51 ± 22% perf-profile.calltrace.cycles-pp.btree_csum_one_bio.btree_submit_bio_hook.submit_one_bio.submit_extent_page.write_one_eb
2.85 ± 12% +2.0 4.88 ± 10% perf-profile.calltrace.cycles-pp.brd_do_bvec.brd_submit_bio.submit_bio_noacct.submit_bio.btrfs_map_bio
3.60 ± 13% +2.1 5.73 ± 10% perf-profile.calltrace.cycles-pp.brd_submit_bio.submit_bio_noacct.submit_bio.btrfs_map_bio.btree_submit_bio_hook
3.67 ± 13% +2.2 5.83 ± 10% perf-profile.calltrace.cycles-pp.submit_bio_noacct.submit_bio.btrfs_map_bio.btree_submit_bio_hook.submit_one_bio
1.48 ± 5% +2.7 4.13 ± 14% perf-profile.calltrace.cycles-pp.write_one_eb.btree_write_cache_pages.do_writepages.__filemap_fdatawrite_range.btrfs_write_marked_extents
0.00 +2.9 2.91 ± 14% perf-profile.calltrace.cycles-pp.btree_submit_bio_hook.submit_one_bio.submit_extent_page.write_one_eb.btree_write_cache_pages
0.00 +2.9 2.91 ± 14% perf-profile.calltrace.cycles-pp.submit_one_bio.submit_extent_page.write_one_eb.btree_write_cache_pages.do_writepages
0.00 +3.2 3.21 ± 15% perf-profile.calltrace.cycles-pp.submit_extent_page.write_one_eb.btree_write_cache_pages.do_writepages.__filemap_fdatawrite_range
0.51 ± 6% -0.1 0.38 ± 8% perf-profile.children.cycles-pp.alloc_reserved_file_extent
0.57 ± 6% -0.1 0.45 ± 10% perf-profile.children.cycles-pp.btrfs_run_delayed_refs_for_head
0.87 ± 6% -0.1 0.76 ± 9% perf-profile.children.cycles-pp.btrfs_lookup_dir_item
0.47 ± 9% -0.1 0.36 ± 6% perf-profile.children.cycles-pp.btrfs_set_token_32
0.35 ± 17% -0.1 0.26 ± 30% perf-profile.children.cycles-pp.memzero_extent_buffer
0.48 ± 5% -0.1 0.39 ± 11% perf-profile.children.cycles-pp.__test_set_page_writeback
0.52 ± 9% -0.1 0.43 ± 9% perf-profile.children.cycles-pp.btrfs_lookup
0.51 ± 9% -0.1 0.42 ± 10% perf-profile.children.cycles-pp.btrfs_lookup_dentry
0.13 ± 47% -0.1 0.07 ± 20% perf-profile.children.cycles-pp.check_buffer_tree_ref
0.21 ± 8% -0.0 0.16 ± 15% perf-profile.children.cycles-pp.push_leaf_left
0.07 ± 17% -0.0 0.03 ±100% perf-profile.children.cycles-pp.extent_clear_unlock_delalloc
0.09 ± 9% -0.0 0.04 ± 58% perf-profile.children.cycles-pp.syscall_return_via_sysret
0.08 ± 5% -0.0 0.04 ± 58% perf-profile.children.cycles-pp.btrfs_lock_root_node
0.15 ± 14% -0.0 0.11 ± 6% perf-profile.children.cycles-pp.account_page_dirtied
0.12 ± 21% -0.0 0.08 ± 17% perf-profile.children.cycles-pp.__kmalloc
0.12 ± 13% -0.0 0.08 ± 20% perf-profile.children.cycles-pp.clear_state_bit
0.11 ± 10% -0.0 0.07 ± 17% perf-profile.children.cycles-pp.__push_leaf_left
0.06 ± 22% +0.0 0.08 ± 10% perf-profile.children.cycles-pp.rmqueue_bulk
0.03 ±100% +0.0 0.06 ± 6% perf-profile.children.cycles-pp.tick_nohz_tick_stopped
0.04 ± 58% +0.0 0.08 ± 27% perf-profile.children.cycles-pp.btrfs_delayed_inode_release_metadata
0.08 ± 15% +0.0 0.12 ± 16% perf-profile.children.cycles-pp.set_next_entity
0.01 ±173% +0.0 0.06 ± 14% perf-profile.children.cycles-pp.mem_cgroup_charge
0.03 ±100% +0.0 0.07 ± 10% perf-profile.children.cycles-pp.__kernel_text_address
0.10 ± 21% +0.0 0.15 ± 12% perf-profile.children.cycles-pp.call_cpuidle
0.12 ± 26% +0.0 0.17 ± 9% perf-profile.children.cycles-pp.add_to_page_cache_lru
0.05 ± 63% +0.0 0.10 ± 15% perf-profile.children.cycles-pp.__switch_to
0.09 ± 15% +0.0 0.14 ± 18% perf-profile.children.cycles-pp.rcu_nmi_exit
0.03 ±100% +0.0 0.08 ± 11% perf-profile.children.cycles-pp.unwind_get_return_address
0.14 ± 12% +0.0 0.18 ± 11% perf-profile.children.cycles-pp.rmqueue
0.01 ±173% +0.1 0.06 ± 17% perf-profile.children.cycles-pp.kernel_text_address
0.03 ±100% +0.1 0.08 ± 20% perf-profile.children.cycles-pp.__radix_tree_preload
0.01 ±173% +0.1 0.07 ± 24% perf-profile.children.cycles-pp.__update_load_avg_se
0.11 ± 20% +0.1 0.17 ± 4% perf-profile.children.cycles-pp.dequeue_entity
0.10 ± 30% +0.1 0.16 ± 4% perf-profile.children.cycles-pp.__add_to_page_cache_locked
0.14 ± 12% +0.1 0.21 ± 16% perf-profile.children.cycles-pp.update_ts_time_stats
0.03 ±100% +0.1 0.09 ± 28% perf-profile.children.cycles-pp.tick_nohz_idle_exit
0.06 ± 15% +0.1 0.12 ± 25% perf-profile.children.cycles-pp.join_transaction
0.12 ± 8% +0.1 0.19 ± 23% perf-profile.children.cycles-pp.update_load_avg
0.08 ± 10% +0.1 0.15 ± 12% perf-profile.children.cycles-pp.select_task_rq_fair
0.17 ± 14% +0.1 0.24 ± 10% perf-profile.children.cycles-pp.update_irq_load_avg
0.12 ± 18% +0.1 0.20 ± 4% perf-profile.children.cycles-pp.dequeue_task_fair
0.37 ± 13% +0.1 0.46 ± 5% perf-profile.children.cycles-pp.rcu_sched_clock_irq
0.11 ± 7% +0.1 0.20 ± 16% perf-profile.children.cycles-pp.nr_iowait_cpu
0.14 ± 33% +0.1 0.23 ± 15% perf-profile.children.cycles-pp.clear_page_erms
0.04 ±102% +0.1 0.13 ± 7% perf-profile.children.cycles-pp.__switch_to_asm
0.00 +0.1 0.10 ± 41% perf-profile.children.cycles-pp.btrfs_try_granting_tickets
0.13 ± 9% +0.1 0.27 ± 32% perf-profile.children.cycles-pp.btrfs_block_rsv_release
0.15 ± 34% +0.1 0.29 ± 8% perf-profile.children.cycles-pp.prep_new_page
0.21 ± 9% +0.1 0.35 ± 23% perf-profile.children.cycles-pp.unwind_next_frame
0.20 ± 28% +0.1 0.34 ± 24% perf-profile.children.cycles-pp.run_local_timers
0.49 ± 28% +0.2 0.65 ± 4% perf-profile.children.cycles-pp.__btrfs_update_delayed_inode
0.21 ± 8% +0.2 0.37 ± 4% perf-profile.children.cycles-pp.schedule_idle
0.45 ± 10% +0.2 0.62 ± 9% perf-profile.children.cycles-pp.schedule
0.07 ± 26% +0.2 0.25 ± 23% perf-profile.children.cycles-pp.btrfs_block_rsv_add
0.32 ± 18% +0.2 0.51 ± 9% perf-profile.children.cycles-pp.get_page_from_freelist
0.36 ± 20% +0.2 0.55 ± 5% perf-profile.children.cycles-pp.btrfs_write_and_wait_transaction
0.30 ± 7% +0.2 0.51 ± 17% perf-profile.children.cycles-pp.arch_stack_walk
0.36 ± 18% +0.2 0.61 ± 15% perf-profile.children.cycles-pp.__alloc_pages_nodemask
0.35 ± 7% +0.2 0.60 ± 17% perf-profile.children.cycles-pp.stack_trace_save_tsk
0.41 ± 7% +0.3 0.71 ± 18% perf-profile.children.cycles-pp.__account_scheduler_latency
0.24 ± 17% +0.3 0.54 ± 15% perf-profile.children.cycles-pp.start_transaction
0.65 ± 6% +0.3 0.97 ± 5% perf-profile.children.cycles-pp.__sched_text_start
0.58 ± 12% +0.3 0.91 ± 20% perf-profile.children.cycles-pp.brd_lookup_page
0.55 ± 4% +0.3 0.90 ± 16% perf-profile.children.cycles-pp.enqueue_entity
0.13 ± 28% +0.4 0.50 ± 10% perf-profile.children.cycles-pp.btrfs_use_block_rsv
0.59 ± 4% +0.4 0.97 ± 15% perf-profile.children.cycles-pp.ttwu_do_activate
0.59 ± 3% +0.4 0.97 ± 15% perf-profile.children.cycles-pp.enqueue_task_fair
0.56 ± 13% +0.4 0.97 ± 29% perf-profile.children.cycles-pp.tick_irq_enter
0.39 ± 20% +0.4 0.81 ± 17% perf-profile.children.cycles-pp.__queue_work
0.35 ± 11% +0.4 0.80 ± 46% perf-profile.children.cycles-pp.ktime_get_update_offsets_now
0.85 ± 8% +0.4 1.29 ± 23% perf-profile.children.cycles-pp.irq_enter_rcu
0.41 ± 27% +0.5 0.86 ± 24% perf-profile.children.cycles-pp.queue_work_on
0.85 ± 4% +0.5 1.39 ± 13% perf-profile.children.cycles-pp.try_to_wake_up
0.46 ± 16% +0.6 1.02 ± 13% perf-profile.children.cycles-pp.__btrfs_commit_inode_delayed_items
0.21 ± 26% +0.6 0.79 ± 13% perf-profile.children.cycles-pp.btrfs_reserve_metadata_bytes
2.03 ± 10% +0.7 2.71 ± 3% perf-profile.children.cycles-pp._raw_spin_lock
0.67 ± 17% +0.7 1.35 ± 8% perf-profile.children.cycles-pp.brd_insert_page
0.00 +0.7 0.74 ± 13% perf-profile.children.cycles-pp.__btrfs_run_delayed_items
0.53 ± 13% +0.8 1.34 ± 15% perf-profile.children.cycles-pp.flush_space
0.53 ± 13% +0.9 1.41 ± 15% perf-profile.children.cycles-pp.btrfs_async_reclaim_metadata_space
1.83 ± 16% +0.9 2.76 ± 26% perf-profile.children.cycles-pp.clockevents_program_event
2.91 ± 8% +1.0 3.87 ± 12% perf-profile.children.cycles-pp.process_one_work
3.29 ± 8% +1.2 4.50 ± 11% perf-profile.children.cycles-pp.kthread
3.29 ± 8% +1.2 4.50 ± 11% perf-profile.children.cycles-pp.ret_from_fork
3.11 ± 9% +1.2 4.33 ± 12% perf-profile.children.cycles-pp.worker_thread
2.93 ± 17% +1.7 4.67 ± 31% perf-profile.children.cycles-pp.ktime_get
3.23 ± 12% +2.1 5.29 ± 10% perf-profile.children.cycles-pp.brd_do_bvec
4.52 ± 12% +2.1 6.65 ± 11% perf-profile.children.cycles-pp.brd_submit_bio
4.65 ± 13% +2.2 6.81 ± 11% perf-profile.children.cycles-pp.submit_bio_noacct
4.66 ± 13% +2.2 6.82 ± 11% perf-profile.children.cycles-pp.submit_bio
4.80 ± 13% +2.2 7.04 ± 11% perf-profile.children.cycles-pp.btrfs_map_bio
1.50 ± 5% +2.7 4.18 ± 14% perf-profile.children.cycles-pp.write_one_eb
0.41 ± 16% +2.9 3.28 ± 15% perf-profile.children.cycles-pp.submit_extent_page
0.34 ± 7% -0.1 0.26 ± 13% perf-profile.self.cycles-pp.btrfs_set_token_32
0.09 ± 9% -0.0 0.04 ± 58% perf-profile.self.cycles-pp.syscall_return_via_sysret
0.11 ± 15% -0.0 0.07 ± 14% perf-profile.self.cycles-pp.read_block_for_search
0.16 ± 11% +0.0 0.20 ± 2% perf-profile.self.cycles-pp.asm_sysvec_apic_timer_interrupt
0.06 ± 7% +0.0 0.09 ± 17% perf-profile.self.cycles-pp.set_next_entity
0.03 ±100% +0.0 0.07 ± 12% perf-profile.self.cycles-pp.enqueue_task_fair
0.05 ± 64% +0.0 0.10 ± 11% perf-profile.self.cycles-pp.__switch_to
0.10 ± 21% +0.0 0.15 ± 12% perf-profile.self.cycles-pp.call_cpuidle
0.01 ±173% +0.1 0.07 ± 28% perf-profile.self.cycles-pp.__update_load_avg_se
0.00 +0.1 0.06 ± 14% perf-profile.self.cycles-pp.__btrfs_end_transaction
0.17 ± 14% +0.1 0.24 ± 10% perf-profile.self.cycles-pp.update_irq_load_avg
0.07 ± 21% +0.1 0.14 ± 7% perf-profile.self.cycles-pp.btrfs_end_bio
0.11 ± 17% +0.1 0.18 ± 12% perf-profile.self.cycles-pp.__sched_text_start
0.11 ± 7% +0.1 0.19 ± 16% perf-profile.self.cycles-pp.nr_iowait_cpu
0.13 ± 34% +0.1 0.22 ± 15% perf-profile.self.cycles-pp.clear_page_erms
0.04 ±102% +0.1 0.13 ± 7% perf-profile.self.cycles-pp.__switch_to_asm
0.32 ± 20% +0.1 0.42 ± 6% perf-profile.self.cycles-pp.do_idle
0.01 ±173% +0.1 0.12 ± 27% perf-profile.self.cycles-pp.process_one_work
0.31 ± 18% +0.1 0.43 ± 15% perf-profile.self.cycles-pp.brd_lookup_page
0.18 ± 34% +0.1 0.32 ± 21% perf-profile.self.cycles-pp.run_local_timers
0.27 ± 16% +0.4 0.66 ± 53% perf-profile.self.cycles-pp.ktime_get_update_offsets_now
1.98 ± 10% +0.5 2.50 ± 2% perf-profile.self.cycles-pp._raw_spin_lock
1.92 ± 14% +1.0 2.92 ± 12% perf-profile.self.cycles-pp.brd_do_bvec
2.43 ± 18% +1.7 4.12 ± 35% perf-profile.self.cycles-pp.ktime_get
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[net] 879098b7f2: BUG:spinlock_bad_magic_on_CPU
by kernel test robot
Greetings,
FYI, we noticed the following commit (built with gcc-9):
commit: 879098b7f2b32a71b348c19af561a9be68443132 ("net: Use skbufhead with raw lock")
https://git.kernel.org/cgit/linux/kernel/git/rt/linux-rt-devel.git linux-5.9.y-rt-rebase
in testcase: boot
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+--------------------------------------------------------------------+------------+------------+
| | 39d1446c27 | 879098b7f2 |
+--------------------------------------------------------------------+------------+------------+
| boot_successes | 0 | 0 |
| boot_failures | 4 | 4 |
| WARNING:suspicious_RCU_usage | 4 | |
| security/device_cgroup.c:#RCU-list_traversed_in_non-reader_section | 4 | |
| BUG:spinlock_bad_magic_on_CPU | 0 | 4 |
+--------------------------------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <rong.a.chen(a)intel.com>
[ 91.101781] 001: BUG: spinlock bad magic on CPU#1, swapper/0/1
[ 91.101786] 001: lock: 0xffff888237beafd0, .magic: 00000000, .owner: swapper/0/1, .owner_cpu: 1
[ 91.101791] 001: CPU: 1 PID: 1 Comm: swapper/0 Not tainted 5.9.0-rc2-00146-g879098b7f2b32 #1
[ 91.101795] 001: Call Trace:
[ 91.101797] 001: ? dump_stack+0x12b/0x19f
[ 91.101804] 001: ? spin_dump+0xb3/0xf0
[ 91.101808] 001: ? do_raw_spin_unlock+0xe6/0x1c0
[ 91.101811] 001: ? _raw_spin_unlock_irqrestore+0x3e/0xe0
[ 91.101817] 001: ? skb_dequeue+0xbd/0xf0
[ 91.101822] 001: ? dev_cpu_dead+0x437/0x4f0
[ 91.101827] 001: ? netif_rx_ni+0x470/0x470
[ OK ] Listening on RPCbind Server Activation Socket.
[ 91.101829] 001: ? cpuhp_invoke_callback+0x18b/0x16e0
[ 91.101834] 001: ? _cpu_down+0x186/0x4b0
[ 91.101839] 001: ? cpu_down+0x3c/0x80
[ OK ] Created slice system-getty.slice.
[ 91.101842] 001: ? cpu_device_down+0x1d/0x30
[ 91.101844] 001: ? cpu_subsys_offline+0xd/0x20
[ 91.101849] 001: ? device_offline+0xfb/0x190
[ 91.101852] 001: ? topology_init+0xff/0xff
[ 91.101855] 001: ? remove_cpu+0x32/0x50
[ 91.101857] 001: ? _debug_hotplug_cpu+0xa7/0x105
[ 91.101861] 001: ? debug_hotplug_cpu+0x10/0x1a
[ 91.101863] 001: ? do_one_initcall+0x83/0x620
[ 91.101867] 001: ? rcu_read_lock_sched_held+0x57/0xb0
[ OK ] Created slice User and Session Slice.
[ OK ] Reached target Slices.
[ 91.101872] 001: ? do_basic_setup+0x2ae/0x309
[ 91.101876] 001: ? kernel_init_freeable+0x106/0x15c
[ 91.101878] 001: ? rest_init+0x480/0x480
[ 91.101880] 001: ? kernel_init+0xd/0x1f0
[ 91.101883] 001: ? rest_init+0x480/0x480
[ 91.101885] 001: ? ret_from_fork+0x1f/0x30
[ 91.102241] 001: DEBUG_HOTPLUG_CPU0: CPU 0 is now offline
[ 91.102252] 001: ALSA device list:
[ 91.102254] 001: #0: UAC2_Gadget 0
[ 91.105946] 001: Freeing unused decrypted memory: 2040K
[ 91.108553] 001: Freeing unused kernel image (initmem) memory: 3804K
Starting Remount Root and Kernel File Systems...
[ 91.291406] 001: Write protecting the kernel read-only data: 61440k
[ 91.293691] 001: Freeing unused kernel image (text/rodata gap) memory: 2040K
[ 91.295473] 001: Freeing unused kernel image (rodata/data gap) memory: 760K
[ 91.299522] 001: x86/mm: Checked W+X mappings: passed, no W+X pages found.
[ 91.299576] 001: x86/mm: Checking user space page tables
[ OK ] Reached target Local Encrypted Volumes.
[ 91.299922] 001: x86/mm: Checked W+X mappings: passed, no W+X pages found.
[ OK ] Reached target Paths.
[ 91.300497] 001: Failed to set sysctl parameter 'kernel.softlockup_panic=1': parameter not found
[ 91.325545] 001: Run /init as init process
[ 91.325550] 001: with arguments:
[ 91.325552] 001: /init
[ 91.325554] 001: with environment:
[ 91.325555] 001: HOME=/
[ 91.325556] 001: TERM=linux
[ 91.325557] 001: user=lkp
[ 91.325558] 001: job=/lkp/jobs/scheduled/vm-snb-107/boot-1-debian-10.4-x86_64-20200603.cgz-879098b7f2b32a71b348c19af561a9be68443132-20200825-6009-1p960b6-3.yaml
[ 91.325560] 001: ARCH=x86_64
[ 91.325561] 001: kconfig=x86_64-randconfig-a013-20200824
[ 91.325563] 001: branch=linux-rt-devel/linux-5.9.y-rt-rebase
[ 91.325564] 001: commit=879098b7f2b32a71b348c19af561a9be68443132
[ 91.325565] 001: BOOT_IMAGE=/pkg/linux/x86_64-randconfig-a013-20200824/gcc-9/879098b7f2b32a71b348c19af561a9be68443132/vmlinuz-5.9.0-rc2-00146-g879098b7f2b32
Starting Journal Service...
[ 91.325567] 001: max_uptime=600
[ 91.325568] 001: RESULT_ROOT=/result/boot/1/vm-snb/debian-10.4-x86_64-20200603.cgz/x86_64-randconfig-a013-20200824/gcc-9/879098b7f2b32a71b348c19af561a9be68443132/3
[ 91.325570] 001: LKP_SERVER=inn
[ 91.325571] 001: selinux=0
[ 91.325572] 001: softlockup_panic=1
[ 91.325574] 001: nmi_watchdog=panic
[ 91.325575] 001: prompt_ramdisk=0
[ 91.325576] 001: vga=normal
[ 91.325577] 001: watchdog_thresh=60
[ 91.503366] 001: systemd[1]: RTC configured in localtime, applying delta of 0 minutes to system time.
[ 91.648326] 001: _warn_unseeded_randomness: 34 callbacks suppressed
[ 91.648332] 001: random: get_random_u32 called from bucket_table_alloc+0xce/0x1b0 with crng_init=0
[ 91.696041] 001: process 338 ((sd-executor)) attempted a POSIX timer syscall while CONFIG_POSIX_TIMERS is not set
Mounting POSIX Message Queue File System...
[ 91.733064] 001: random: get_random_u64 called from arch_rnd+0x30/0x60 with crng_init=0
[ 91.733081] 001: random: get_random_u64 called from randomize_stack_top+0x46/0xa0 with crng_init=0
[ 92.820837] 001: random: systemd: uninitialized urandom read (16 bytes read)
[ 92.823435] 001: random: systemd: uninitialized urandom read (16 bytes read)
[ 92.838884] 001: random: systemd: uninitialized urandom read (16 bytes read)
[ 92.882324] 001: _warn_unseeded_randomness: 73 callbacks suppressed
[ 92.882332] 001: random: get_random_bytes called from key_alloc+0x464/0x950 with crng_init=0
[ 92.884829] 001: random: get_random_u64 called from arch_rnd+0x30/0x60 with crng_init=0
[ OK ] Created slice system-serial\x2dgetty.slice.
[ OK ] Listening on udev Control Socket.
[ 92.884841] 001: random: get_random_u64 called from randomize_stack_top+0x46/0xa0 with crng_init=0
Starting udev Coldplug all Devices...
[ OK ] Mounted RPC Pipe File System.
[ OK ] Started Load Kernel Modules.
[ OK ] Started Remount Root and Kernel File Systems.
[ 93.565325] 001: random: fast init done
[ OK ] Mounted POSIX Message Queue File System.
Starting Load/Save Random Seed...
Starting Create System Users...
Mounting Kernel Configuration File System...
Starting Apply Kernel Variables...
[ 93.926313] 001: _warn_unseeded_randomness: 70 callbacks suppressed
[ 93.926328] 001: random: get_random_bytes called from key_alloc+0x464/0x950 with crng_init=1
[ 93.933849] 001: random: get_random_u64 called from arch_rnd+0x30/0x60 with crng_init=1
[ 93.940693] 001: random: get_random_u64 called from randomize_stack_top+0x46/0xa0 with crng_init=1
To reproduce:
# build kernel
cd linux
cp config-5.9.0-rc2-00146-g879098b7f2b32 .config
make HOSTCC=gcc-9 CC=gcc-9 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
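As background on the splat above: "BUG: spinlock bad magic" is printed by the CONFIG_DEBUG_SPINLOCK sanity checks, which stamp every lock that goes through spin_lock_init() with a magic word and complain when a lock without it is used. The ".magic: 00000000" line therefore points at a lock that was never initialized, apparently the skb queue lock taken by skb_dequeue() from dev_cpu_dead() in the trace. A standalone sketch of the shape of that check (field and constant names mirror the kernel's debug spinlock code, but this is an illustration, not the kernel source):

/* Standalone illustration of the CONFIG_DEBUG_SPINLOCK "bad magic" check;
 * a userspace sketch, not the kernel implementation. */
#include <stdio.h>

#define SPINLOCK_MAGIC 0xdead4eadU      /* value spin_lock_init() stamps in */

struct debug_spinlock {
        unsigned int magic;             /* 0 until the lock is initialized */
        int owner_cpu;
};

static void debug_check(const struct debug_spinlock *lock, int cpu)
{
        /* An uninitialized lock still has magic == 0, which is exactly the
         * ".magic: 00000000" shown in the splat above. */
        if (lock->magic != SPINLOCK_MAGIC)
                printf("BUG: spinlock bad magic on CPU#%d (.magic: %08x)\n",
                       cpu, lock->magic);
}

int main(void)
{
        struct debug_spinlock never_initialized = { 0 };

        debug_check(&never_initialized, 1);
        return 0;
}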
Thanks,
Rong Chen
36e96b4116 ("init: add an init_dup helper"): BUG: kernel hang in boot stage
by kernel test robot
Greetings,
0day kernel testing robot got the dmesg below, and the first bad commit is
git://git.infradead.org/users/hch/misc.git init_path
commit 36e96b41164933f00bb8f273b864603d3eb09d87
Author: Christoph Hellwig <hch(a)lst.de>
AuthorDate: Tue Jul 28 17:49:47 2020 +0200
Commit: Christoph Hellwig <hch(a)lst.de>
CommitDate: Fri Jul 31 08:17:54 2020 +0200
init: add an init_dup helper
Add a simple helper to grab a reference to a file and install it at
the next available fd, and switch the early init code over to it.
Signed-off-by: Christoph Hellwig <hch(a)lst.de>
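For reference, a minimal sketch of what a helper matching that description could look like, built on the existing get_unused_fd_flags()/get_file()/fd_install() kernel APIs; this is an illustration based on the commit message above, not a quote of the actual patch:

#include <linux/file.h>
#include <linux/fs.h>
#include <linux/init.h>

/* Sketch of an init_dup()-style helper: take an extra reference on @file
 * and install it at the next available fd, returning that fd. */
int __init init_dup(struct file *file)
{
        int fd;

        fd = get_unused_fd_flags(0);
        if (fd < 0)
                return fd;

        get_file(file);        /* grab a reference to the file ... */
        fd_install(fd, file);  /* ... and install it at the new fd */
        return fd;
}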
235e57935b init: add an init_utimes helper
36e96b4116 init: add an init_dup helper
+--------------------------------+------------+------------+
| | 235e57935b | 36e96b4116 |
+--------------------------------+------------+------------+
| boot_successes | 34 | 4 |
| boot_failures | 6 | 2 |
| BUG:kernel_hang_in_test_stage | 5 | |
| invoked_oom-killer:gfp_mask=0x | 1 | |
| EIP:clear_user | 1 | |
| Mem-Info | 1 | |
| BUG:kernel_hang_in_boot_stage | 0 | 2 |
+--------------------------------+------------+------------+
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <lkp(a)intel.com>
[ 1.702371] link=/cephfs/kbuild/run-queue/kvm/i386-defconfig/hch-misc:init_path:36e96b41164933f00bb8f273b864603d3eb09d87/.vmlinuz-36e96b41164933f00bb8f273b864603d3eb09d87-20200731155926-2:openwrt-vm-openwrt-36
[ 1.706155] branch=hch-misc/init_path
[ 1.707253] BOOT_IMAGE=/pkg/linux/i386-defconfig/gcc-9/36e96b41164933f00bb8f273b864603d3eb09d87/vmlinuz-5.8.0-rc5-00098-g36e96b4116493
[ 1.710065] watchdog_thresh=60
[ 2.155632] input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
BUG: kernel hang in boot stage
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start de44d2b59b0fc49b84d711d815dfc2ef0a272912 bcf876870b95592b52519ed4aafcf9d95999bc9c --
git bisect bad 7247a9ee89be76c074b245f11bf06cd8b96499e6 # 01:26 B 0 12 31 2 Merge remote-tracking branch 'wireless-drivers-next/master'
git bisect good fd75cb14351fd6839137e9affad58f5c78555619 # 01:59 G 11 0 1 3 Merge remote-tracking branch 'openrisc/for-next'
git bisect good fef2b045b428c32f1eef73e0115ad72196712cbb # 02:36 G 11 0 2 7 Merge remote-tracking branch 'iomap/iomap-for-next'
git bisect bad c9d27cc1cbefb53dfa5adcc9f29ce3f0cc31c362 # 02:36 B 0 7 28 5 Merge remote-tracking branch 'v4l-dvb/master'
git bisect bad d7f0423e6841b4cd132d728106e70a6e9877e016 # 02:36 B 0 11 30 3 Merge remote-tracking branch 'pstore/for-next/pstore'
git bisect bad f4ea38d1cb1116f8725d1bada90495af24028998 # 02:36 B 0 13 30 1 Merge remote-tracking branch 'vfs/for-next'
git bisect good f6340a2a47ada03a2980c47e4b72b3e2af5e73fb # 02:36 G 14 0 0 0 Merge remote-tracking branch 'file-locks/locks-next'
git bisect good 259bf01c1bd1f049958496a089c4f334fe0c8a48 # 02:36 G 14 0 0 2 Merge branches 'work.misc', 'work.regset' and 'work.fdpic' into for-next
git bisect good fd5ad30c782351ab4d4a15941fc61e743a1bd66c # 03:09 G 13 0 2 5 fs: expose utimes_common
git bisect good db63f1e315384590b979f8f74abd1b5363b69894 # 03:42 G 13 0 3 7 init: add an init_chdir helper
git bisect good cd3acb6a79349f346714ab3d26d203a0c6ca5ab0 # 04:17 G 13 0 2 5 init: add an init_symlink helper
git bisect good 716308a5331bf907b819f9db8dc942b19568f925 # 04:50 G 13 0 2 5 init: add an init_stat helper
git bisect bad 36e96b41164933f00bb8f273b864603d3eb09d87 # 04:50 B 0 2 24 0 init: add an init_dup helper
git bisect good 235e57935bf328c4cce371ffc4dd1d8fab4885cd # 04:50 G 34 0 0 10 init: add an init_utimes helper
# first bad commit: [36e96b41164933f00bb8f273b864603d3eb09d87] init: add an init_dup helper
git bisect good 235e57935bf328c4cce371ffc4dd1d8fab4885cd # 05:26 G 39 0 5 15 init: add an init_utimes helper
# extra tests with debug options
git bisect bad 36e96b41164933f00bb8f273b864603d3eb09d87 # 05:53 B 0 4 13 0 init: add an init_dup helper
# extra tests on head commit of hch-misc/init_path
git bisect bad 36e96b41164933f00bb8f273b864603d3eb09d87 # 06:06 B 0 2 24 0 init: add an init_dup helper
# bad: [36e96b41164933f00bb8f273b864603d3eb09d87] init: add an init_dup helper
# extra tests on revert first bad commit
git bisect good 06682a59779977dd5eade5ac921b4f7c02f0d21c # 07:19 G 13 0 13 13 Revert "init: add an init_dup helper"
# good: [06682a59779977dd5eade5ac921b4f7c02f0d21c] Revert "init: add an init_dup helper"
---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/lkp@lists.01.org