[mm] 71ee870ccb: will-it-scale.per_process_ops -2.9% regression
by kernel test robot
Greetings,
FYI, we noticed a -2.9% regression of will-it-scale.per_process_ops due to commit:
commit: 71ee870ccb768a5019ff8ebeb47cc9f062559a7a ("[RFC PATCH] mm: readahead: add readahead_shift into backing device")
url: https://github.com/0day-ci/linux/commits/Martin-Liu/mm-readahead-add-read...
in testcase: will-it-scale
on test machine: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 64G memory
with the following parameters:
nr_task: 50%
mode: process
test: poll2
cpufreq_governor: performance
test-description: Will It Scale takes a testcase and runs it from 1 through to n parallel copies to see if the testcase will scale. It builds both a process-based and a thread-based variant of the test in order to see any differences between the two.
test-url: https://github.com/antonblanchard/will-it-scale
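For reference, the headline metric counts, roughly, completed poll() iterations per worker process. A self-contained sketch of what a "poll2"-style worker loop looks like is below; the fd count, pipe setup and iteration count are illustrative assumptions, not the actual will-it-scale source:

/*
 * Illustrative only: a will-it-scale "poll2"-style worker repeatedly
 * polls a set of descriptors and counts the completed calls; the
 * benchmark reports those counts as per_process_ops.
 */
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

#define NFDS 128

int main(void)
{
        struct pollfd pfds[NFDS];
        int pipefd[2];
        unsigned long ops = 0;

        if (pipe(pipefd))
                return 1;
        for (int i = 0; i < NFDS; i++) {
                pfds[i].fd = pipefd[0];         /* never readable: poll() returns on timeout */
                pfds[i].events = POLLIN;
        }

        for (int iter = 0; iter < 1000000; iter++) {
                poll(pfds, NFDS, 0);            /* timeout 0: return immediately */
                ops++;
        }
        printf("ops: %lu\n", ops);
        return 0;
}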
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
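For context before the raw comparison: the patch subject suggests it adds a per-backing-device shift that scales the readahead window, and the dmesg.RIP:ondemand_readahead entries in the table below show that some runs crashed in that readahead path. A minimal sketch of the idea follows; the field name, placement and semantics are assumptions drawn from the commit subject only, not from the RFC patch itself:

/*
 * Hypothetical illustration of a per-bdi readahead shift; not the actual
 * RFC code. A shift of 0 would keep the default window, while e.g. a
 * shift of 2 would quarter it for that backing device.
 */
static unsigned long scaled_readahead_pages(struct backing_dev_info *bdi,
                                            unsigned long max_pages)
{
        return max_pages >> bdi->readahead_shift;       /* assumed new field */
}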
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase:
gcc-7/performance/x86_64-rhel-7.6/process/50%/debian-x86_64-2018-04-03-no-ucode.cgz/lkp-bdw-ep3d/poll2/will-it-scale
commit:
v5.1-rc2
71ee870ccb ("mm: readahead: add readahead_shift into backing device")
v5.1-rc2 71ee870ccb768a5019ff8ebeb47
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:5 40% 2:4 dmesg.BUG:unable_to_handle_kernel
:5 40% 2:4 dmesg.Kernel_panic-not_syncing:Fatal_exception
:5 40% 2:4 dmesg.Oops:#[##]
:5 40% 2:4 dmesg.RIP:ondemand_readahead
%stddev %change %stddev
\ | \
330269 -2.9% 320782 will-it-scale.per_process_ops
14531906 -2.9% 14114429 will-it-scale.workload
38.92 ± 10% +2.3% 39.81 ± 10% boot-time.boot
6108941 ±121% -99.4% 38017 ± 3% cpuidle.C1.usage
53094 ± 3% +6.3% 56436 ± 6% meminfo.Shmem
6106659 ±121% -99.4% 36364 ± 3% turbostat.C1
245.02 -3.1% 237.54 turbostat.PkgWatt
16384 ± 9% -15.8% 13797 numa-vmstat.node0.nr_slab_unreclaimable
2715 ± 18% +30.4% 3542 numa-vmstat.node1.nr_mapped
8641 ± 11% +14.4% 9889 ± 5% numa-vmstat.node1.nr_slab_reclaimable
14101 ± 12% +19.9% 16907 numa-vmstat.node1.nr_slab_unreclaimable
65541 ± 9% -15.8% 55189 numa-meminfo.node0.SUnreclaim
34563 ± 11% +14.5% 39558 ± 5% numa-meminfo.node1.KReclaimable
34563 ± 11% +14.5% 39558 ± 5% numa-meminfo.node1.SReclaimable
56405 ± 12% +19.9% 67631 numa-meminfo.node1.SUnreclaim
90969 ± 12% +17.8% 107190 numa-meminfo.node1.Slab
28.50 ± 21% +7.2 35.70 ± 22% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
28.50 ± 21% +7.2 35.71 ± 22% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
28.50 ± 21% +7.2 35.71 ± 22% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
29.06 ± 22% +7.2 36.28 ± 22% perf-profile.calltrace.cycles-pp.secondary_startup_64
28.43 ± 22% +7.2 35.66 ± 22% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
28.32 ± 22% +7.3 35.62 ± 22% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
68918 +1.6% 70055 proc-vmstat.nr_active_anon
4420 +1.1% 4471 proc-vmstat.nr_inactive_anon
13272 ± 3% +6.3% 14110 ± 6% proc-vmstat.nr_shmem
68918 +1.6% 70055 proc-vmstat.nr_zone_active_anon
4420 +1.1% 4471 proc-vmstat.nr_zone_inactive_anon
12592 ± 5% +9.8% 13822 ± 8% proc-vmstat.pgactivate
22.97 ± 16% +15.2% 26.46 ± 4% sched_debug.cfs_rq:/.load_avg.avg
31.76 ± 68% +91.5% 60.83 ± 16% sched_debug.cfs_rq:/.removed.load_avg.stddev
1465 ± 68% +90.9% 2798 ± 16% sched_debug.cfs_rq:/.removed.runnable_sum.stddev
90.67 ± 81% +122.4% 201.67 ± 28% sched_debug.cfs_rq:/.removed.util_avg.max
13.91 ± 72% +96.0% 27.27 ± 24% sched_debug.cfs_rq:/.removed.util_avg.stddev
161427 ± 2% +53.9% 248474 ± 23% sched_debug.cpu.avg_idle.stddev
32.13 ± 11% -17.8% 26.42 ± 3% sched_debug.cpu.cpu_load[3].max
319.40 ± 7% +9.5% 349.67 ± 3% sched_debug.cpu.nr_switches.min
16306 ± 32% +65.7% 27026 ± 23% softirqs.CPU11.RCU
19898 ± 15% +17.4% 23362 ± 12% softirqs.CPU25.RCU
25900 ± 19% -46.7% 13796 ± 39% softirqs.CPU38.RCU
18084 ± 26% +58.8% 28722 ± 6% softirqs.CPU39.RCU
90524 ± 2% +37.1% 124077 ± 24% softirqs.CPU39.TIMER
19146 ± 30% +21.8% 23323 ± 33% softirqs.CPU4.RCU
4625 ± 4% +336.9% 20209 ± 76% softirqs.CPU45.SCHED
4695 ± 2% +401.2% 23537 ± 79% softirqs.CPU61.SCHED
89544 +36.5% 122247 ± 26% softirqs.CPU61.TIMER
4691 ± 2% +404.2% 23656 ± 80% softirqs.CPU65.SCHED
89357 +46.9% 131246 ± 32% softirqs.CPU65.TIMER
10339 ± 4% -20.7% 8197 ± 13% softirqs.CPU66.RCU
4555 ± 3% +337.5% 19929 ± 76% softirqs.CPU66.SCHED
13869 ± 37% -39.3% 8419 ± 2% softirqs.CPU78.RCU
12163 ± 34% -33.2% 8125 ± 4% softirqs.CPU80.RCU
4660 ± 3% +331.1% 20087 ± 77% softirqs.CPU80.SCHED
5222 ± 24% +352.1% 23611 ± 79% softirqs.CPU82.SCHED
90496 ± 3% +35.0% 122171 ± 27% softirqs.CPU82.TIMER
9935 ± 6% -19.9% 7958 ± 8% softirqs.CPU84.RCU
4653 ± 4% +325.4% 19793 ± 76% softirqs.CPU84.SCHED
9701 ± 6% -21.8% 7582 ± 8% softirqs.CPU85.RCU
4643 ± 4% +325.2% 19741 ± 76% softirqs.CPU85.SCHED
0.08 ± 4% +11.7% 0.09 ± 3% perf-stat.i.MPKI
3.464e+10 -2.9% 3.365e+10 perf-stat.i.branch-instructions
0.27 +0.0 0.27 perf-stat.i.branch-miss-rate%
206283 ± 6% +10.1% 227086 ± 7% perf-stat.i.cache-misses
12679443 ± 4% +9.2% 13840943 ± 3% perf-stat.i.cache-references
0.75 +3.0% 0.78 perf-stat.i.cpi
13397222 -5.7% 12635139 perf-stat.i.dTLB-load-misses
3.619e+10 -2.9% 3.515e+10 perf-stat.i.dTLB-loads
1.854e+10 -2.3% 1.811e+10 perf-stat.i.dTLB-stores
1.636e+11 -2.9% 1.589e+11 perf-stat.i.instructions
1.33 -2.9% 1.29 perf-stat.i.ipc
24604 ± 32% +13.2% 27859 ± 26% perf-stat.i.node-stores
0.08 ± 4% +12.4% 0.09 ± 3% perf-stat.overall.MPKI
0.26 +0.0 0.27 perf-stat.overall.branch-miss-rate%
0.75 +3.0% 0.78 perf-stat.overall.cpi
1.33 -2.9% 1.29 perf-stat.overall.ipc
3.452e+10 -2.9% 3.353e+10 perf-stat.ps.branch-instructions
205685 ± 6% +10.1% 226440 ± 7% perf-stat.ps.cache-misses
12640358 ± 4% +9.2% 13798889 ± 3% perf-stat.ps.cache-references
13352001 -5.7% 12592542 perf-stat.ps.dTLB-load-misses
3.606e+10 -2.9% 3.503e+10 perf-stat.ps.dTLB-loads
1.847e+10 -2.3% 1.805e+10 perf-stat.ps.dTLB-stores
1.63e+11 -2.9% 1.584e+11 perf-stat.ps.instructions
24533 ± 32% +13.2% 27780 ± 26% perf-stat.ps.node-stores
4.923e+13 -2.9% 4.782e+13 perf-stat.total.instructions
659.20 ± 14% -33.4% 439.00 ± 5% interrupts.32:PCI-MSI.3145729-edge.eth0-TxRx-0
193.20 ± 13% +46.5% 283.00 ± 4% interrupts.35:PCI-MSI.3145732-edge.eth0-TxRx-3
341.60 +9.0% 372.50 interrupts.9:IO-APIC.9-fasteoi.acpi
926.20 ± 57% -59.0% 380.00 interrupts.CPU0.RES:Rescheduling_interrupts
341.60 +9.0% 372.50 interrupts.CPU1.9:IO-APIC.9-fasteoi.acpi
2403 ± 32% +123.2% 5363 ± 46% interrupts.CPU1.NMI:Non-maskable_interrupts
2403 ± 32% +123.2% 5363 ± 46% interrupts.CPU1.PMI:Performance_monitoring_interrupts
2983 ± 52% +44.4% 4309 ± 32% interrupts.CPU10.NMI:Non-maskable_interrupts
2983 ± 52% +44.4% 4309 ± 32% interrupts.CPU10.PMI:Performance_monitoring_interrupts
659.20 ± 14% -33.4% 439.00 ± 5% interrupts.CPU11.32:PCI-MSI.3145729-edge.eth0-TxRx-0
60.80 ± 91% +1764.3% 1133 ± 92% interrupts.CPU11.RES:Rescheduling_interrupts
193.20 ± 13% +46.5% 283.00 ± 4% interrupts.CPU14.35:PCI-MSI.3145732-edge.eth0-TxRx-3
2401 ± 32% +79.0% 4299 ± 33% interrupts.CPU2.NMI:Non-maskable_interrupts
2401 ± 32% +79.0% 4299 ± 33% interrupts.CPU2.PMI:Performance_monitoring_interrupts
3289 ± 42% +107.2% 6814 ± 15% interrupts.CPU21.NMI:Non-maskable_interrupts
3289 ± 42% +107.2% 6814 ± 15% interrupts.CPU21.PMI:Performance_monitoring_interrupts
3275 ± 42% +108.5% 6829 ± 16% interrupts.CPU22.NMI:Non-maskable_interrupts
3275 ± 42% +108.5% 6829 ± 16% interrupts.CPU22.PMI:Performance_monitoring_interrupts
2404 ± 32% +79.0% 4305 ± 33% interrupts.CPU3.NMI:Non-maskable_interrupts
2404 ± 32% +79.0% 4305 ± 33% interrupts.CPU3.PMI:Performance_monitoring_interrupts
2940 ± 51% +46.3% 4301 ± 33% interrupts.CPU31.NMI:Non-maskable_interrupts
2940 ± 51% +46.3% 4301 ± 33% interrupts.CPU31.PMI:Performance_monitoring_interrupts
2368 ± 28% +81.8% 4304 ± 33% interrupts.CPU33.NMI:Non-maskable_interrupts
2368 ± 28% +81.8% 4304 ± 33% interrupts.CPU33.PMI:Performance_monitoring_interrupts
41.80 ± 77% +164.4% 110.50 ± 52% interrupts.CPU33.RES:Rescheduling_interrupts
2872 ± 26% +50.0% 4308 ± 32% interrupts.CPU34.NMI:Non-maskable_interrupts
2872 ± 26% +50.0% 4308 ± 32% interrupts.CPU34.PMI:Performance_monitoring_interrupts
2872 ± 25% +137.3% 6815 ± 15% interrupts.CPU35.NMI:Non-maskable_interrupts
2872 ± 25% +137.3% 6815 ± 15% interrupts.CPU35.PMI:Performance_monitoring_interrupts
2660 ± 18% +156.6% 6826 ± 16% interrupts.CPU36.NMI:Non-maskable_interrupts
2660 ± 18% +156.6% 6826 ± 16% interrupts.CPU36.PMI:Performance_monitoring_interrupts
40.60 ± 50% +134.0% 95.00 ± 36% interrupts.CPU38.RES:Rescheduling_interrupts
2395 ± 32% +79.5% 4300 ± 33% interrupts.CPU4.NMI:Non-maskable_interrupts
2395 ± 32% +79.5% 4300 ± 33% interrupts.CPU4.PMI:Performance_monitoring_interrupts
68.20 ± 33% +1063.5% 793.50 ± 12% interrupts.CPU4.RES:Rescheduling_interrupts
2986 ± 5% +129.2% 6843 ± 16% interrupts.CPU40.NMI:Non-maskable_interrupts
2986 ± 5% +129.2% 6843 ± 16% interrupts.CPU40.PMI:Performance_monitoring_interrupts
2984 ± 5% +128.6% 6821 ± 16% interrupts.CPU41.NMI:Non-maskable_interrupts
2984 ± 5% +128.6% 6821 ± 16% interrupts.CPU41.PMI:Performance_monitoring_interrupts
2991 ± 5% +43.9% 4305 ± 32% interrupts.CPU43.NMI:Non-maskable_interrupts
2991 ± 5% +43.9% 4305 ± 32% interrupts.CPU43.PMI:Performance_monitoring_interrupts
2392 ± 31% +79.8% 4302 ± 33% interrupts.CPU5.NMI:Non-maskable_interrupts
2392 ± 31% +79.8% 4302 ± 33% interrupts.CPU5.PMI:Performance_monitoring_interrupts
345.00 ±173% +382.3% 1664 ± 53% interrupts.CPU6.RES:Rescheduling_interrupts
1.20 ± 33% +4233.3% 52.00 ± 96% interrupts.CPU77.RES:Rescheduling_interrupts
471010 ± 6% +10.0% 518230 ± 10% interrupts.NMI:Non-maskable_interrupts
471010 ± 6% +10.0% 518230 ± 10% interrupts.PMI:Performance_monitoring_interrupts
will-it-scale.per_process_ops
350000 +-+----------------------------------------------------------------+
O..+..O..+..O O +..+..O..O..+ O +..O..+..O..+..+..O..O..+..+..O
300000 +-+ : : : : |
| : : : : |
250000 +-+ : : : : |
| : : : : |
200000 +-+ : : : : |
| : : : : |
150000 +-+ : : : : |
| : : : : |
100000 +-+ : : : : |
| : : : : |
50000 +-+ : :: |
| : : |
0 +-+O-----O--------O--O--------O------O-----O-----O--O--------O--O--+
will-it-scale.workload
1.6e+07 +-+---------------------------------------------------------------+
|..+..+..+..+ +..+..+..+..+ +..+..+..+..+..+..+..+..+..+..|
1.4e+07 O-+ O O O : O O : O : O O O O O
1.2e+07 +-+ : : : : |
| : : : : |
1e+07 +-+ : : : : |
| : : : : |
8e+06 +-+ : : : : |
| : : : : |
6e+06 +-+ : : : : |
4e+06 +-+ : : : : |
| : : : : |
2e+06 +-+ : : |
| : : |
0 +-+O-----O--------O--O--------O-----O-----O-----O--O--------O--O--+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[auxdisplay] 24c764abfd: kmsg.parport#:cannot_grant_exclusive_access_for_device_pps_parport
by kernel test robot
FYI, we noticed the following commit (built with gcc-5):
commit: 24c764abfd0d4b6e8e33c3818b668edbb4936d6f ("auxdisplay: deconfuse configuration")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: locktorture
with the following parameters:
runtime: 300s
test: default
test-description: This torture test consists of creating a number of kernel threads which acquire the lock and hold it for a specific amount of time, thus simulating different critical-region behaviors.
test-url: https://www.kernel.org/doc/Documentation/locking/locktorture.txt
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 2G
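As a rough userspace analogy of the torture loop described above (this is not the in-kernel locktorture module; the thread count, iteration count and hold time are arbitrary illustration values):

/*
 * Userspace analogy of a lock-torture writer: each thread repeatedly
 * acquires a shared lock, holds it for a fixed delay to simulate a
 * critical region, then releases it.
 */
#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t torture_lock = PTHREAD_MUTEX_INITIALIZER;

static void *torture_thread(void *arg)
{
        for (int i = 0; i < 1000; i++) {
                pthread_mutex_lock(&torture_lock);
                usleep(100);                    /* simulated critical-region hold time */
                pthread_mutex_unlock(&torture_lock);
        }
        return NULL;
}

int main(void)
{
        pthread_t threads[4];

        for (int i = 0; i < 4; i++)
                pthread_create(&threads[i], NULL, torture_thread, NULL);
        for (int i = 0; i < 4; i++)
                pthread_join(threads[i], NULL);
        return 0;
}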
caused the following changes (please refer to the attached dmesg/kmsg for the entire log/backtrace):
[ 10.294087 ] random: systemd: uninitialized urandom read (16 bytes read)
[ 10.298543 ] random: systemd: uninitialized urandom read (16 bytes read)
Mounting RPC Pipe File System...
Mounting Debug File System...
Starting Journal Service...
Starting Load Kernel Modules...
[ 10.503136 ] random: fast init done
Starting Remount Root and Kernel File Systems...
Mounting FUSE Control File System...
Starting Apply Kernel Variables...
Mounting Configuration File System...
Starting Create Static Device Nodes in /dev...
Starting Load/Save Random Seed...
Starting udev Coldplug all Devices...
Starting Preprocess NFS configuration...
Starting Raise network interfaces...
Starting udev Kernel Device Manager...
Starting Flush Journal to Persistent Storage...
Starting Create Volatile Files and Directories...
Starting Network Time Synchronization...
Starting RPC bind portmap service...
Starting Update UTMP about System Boot/Shutdown...
[ 11.126278 ] _warn_unseeded_randomness: 391 callbacks suppressed
[ 11.126285 ] random: get_random_u64 called from arch_rnd+0x20/0x37 with crng_init=1
[ 11.127744 ] random: get_random_u64 called from load_elf_binary+0x330/0x13fa with crng_init=1
[ 11.128656 ] random: get_random_u64 called from arch_rnd+0x20/0x37 with crng_init=1
[ 12.271342 ] _warn_unseeded_randomness: 26 callbacks suppressed
[ 12.271348 ] random: get_random_u64 called from arch_rnd+0x20/0x37 with crng_init=1
[ 12.272835 ] random: get_random_u64 called from load_elf_binary+0x330/0x13fa with crng_init=1
[ 12.272840 ] random: get_random_u32 called from arch_align_stack+0x28/0x3a with crng_init=1
Starting Permit User Sessions...
[ 12.359892 ] parport_pc 00:04: reported by Plug and Play ACPI
[ 12.361256 ] parport0: PC-style at 0x378, irq 7 [PCSPP(,...)]
Starting LSB: Execute the kexec -e command to reboot system...
Starting Login Service...
[ 12.488387 ] rtc_cmos 00:00: alarms up to one day, y3k, 114 bytes nvram, hpet irqs
Starting System Logging Service...
Starting LKP bootstrap...
Starting /etc/rc.local Compatibility...
Starting OpenBSD Secure Shell server...
Starting LSB: Start and stop bmc-watchdog...
[ 12.475680 ] rc.local[353]: PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/lkp/lkp/src/bin
[ 12.676342 ] parport0: cannot grant exclusive access for device pps_parport
[ 12.677670 ] pps_parport: couldn't register with parport0
[ 12.684501 ] parport_pc parport_pc.956: Unable to set coherent dma mask: disabling DMA
[ 12.705830 ] parport_pc parport_pc.888: Unable to set coherent dma mask: disabling DMA
[ 12.744332 ] parport_pc parport_pc.632: Unable to set coherent dma mask: disabling DMA
[ 12.769682 ] Linux agpgart interface v0.103
Starting LSB: Load kernel image with kexec...
[ 13.224353 ] PCI Interrupt Link [LNKD] enabled at IRQ 10
[ 13.274820 ] _warn_unseeded_randomness: 338 callbacks suppressed
[ 13.274827 ] random: get_random_u64 called from arch_rnd+0x20/0x37 with crng_init=1
[ 13.276289 ] random: get_random_u64 called from load_elf_binary+0x330/0x13fa with crng_init=1
[ 13.278156 ] random: get_random_u64 called from arch_rnd+0x20/0x37 with crng_init=1
[ 13.308315 ] [drm] Found bochs VGA, ID 0xb0c0.
[ 13.308798 ] [drm] Framebuffer size 16384 kB @ 0xfd000000, mmio @ 0xfebf0000.
[ 13.560302 ] AVX version of gcm_enc/dec engaged.
[ 13.560812 ] AES CTR mode by8 optimization enabled
To reproduce:
# build kernel
cd linux
cp config-5.0.0-00006-g24c764a .config
make HOSTCC=gcc-5 CC=gcc-5 ARCH=x86_64 olddefconfig
make HOSTCC=gcc-5 CC=gcc-5 ARCH=x86_64 prepare
make HOSTCC=gcc-5 CC=gcc-5 ARCH=x86_64 modules_prepare
make HOSTCC=gcc-5 CC=gcc-5 ARCH=x86_64 SHELL=/bin/bash
make HOSTCC=gcc-5 CC=gcc-5 ARCH=x86_64 bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz
bin/lkp qemu -k <bzImage> -m modules.cgz job-script # job-script is attached in this email
Thanks,
Rong Chen
[vfs] 27eb9d500d: vm-scalability.median -19.4% regression
by kernel test robot
Greetings,
FYI, we noticed a -19.4% regression of vm-scalability.median due to commit:
commit: 27eb9d500d71d93e2b2f55c226bc1cc4ba53e9b0 ("vfs: Convert ramfs, shmem, tmpfs, devtmpfs, rootfs to use the new mount API")
https://git.kernel.org/cgit/linux/kernel/git/dhowells/linux-fs.git mount-api-viro
in testcase: vm-scalability
on test machine: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
with the following parameters:
runtime: 300s
size: 16G
test: shm-pread-rand
cpufreq_governor: performance
test-description: The motivation behind this suite is to exercise functions and regions of the mm/ subtree of the Linux kernel which are of interest to us.
test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/
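For readers unfamiliar with "the new mount API" named in the commit subject, a schematic sketch of the fs_context-based registration pattern that the series converts filesystems to is below. It shows only the generic API shape (file_system_type.init_fs_context plus fs_context_operations); the myfs_* names are placeholders, and this is not the shmem/ramfs code from the commit:

/* Schematic only: the general fs_context registration pattern. */
static int myfs_get_tree(struct fs_context *fc)
{
        return get_tree_nodev(fc, myfs_fill_super);
}

static const struct fs_context_operations myfs_context_ops = {
        .free           = myfs_free_fc,
        .parse_param    = myfs_parse_param,     /* replaces legacy mount-option string parsing */
        .get_tree       = myfs_get_tree,        /* replaces .mount()/mount_nodev() */
};

static int myfs_init_fs_context(struct fs_context *fc)
{
        fc->ops = &myfs_context_ops;
        return 0;
}

static struct file_system_type myfs_fs_type = {
        .owner           = THIS_MODULE,
        .name            = "myfs",
        .init_fs_context = myfs_init_fs_context, /* new-API entry point */
        .kill_sb         = kill_litter_super,
};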
In addition, the commit has a significant impact on the following tests:
+------------------+-----------------------------------------------------------------------+
| testcase: change | vm-scalability: |
| test machine | 104 threads Skylake with 192G memory |
| test parameters | cpufreq_governor=performance |
| | runtime=300s |
| | size=2T |
| | test=shm-xread-seq-mt |
+------------------+-----------------------------------------------------------------------+
| testcase: change | vm-scalability: vm-scalability.median -6.9% regression |
| test machine | 72 threads Intel(R) Xeon(R) Gold 6139 CPU @ 2.30GHz with 128G memory |
| test parameters | cpufreq_governor=performance |
| | runtime=300s |
| | size=1T |
| | test=lru-shm |
| | ucode=0x200005a |
+------------------+-----------------------------------------------------------------------+
| testcase: change | vm-scalability: vm-scalability.median 17.4% improvement |
| test machine | 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory |
| test parameters | cpufreq_governor=performance |
| | runtime=300s |
| | size=256G |
| | test=lru-shm-rand |
+------------------+-----------------------------------------------------------------------+
| testcase: change | vm-scalability: vm-scalability.median -33.5% regression |
| test machine | 72 threads Intel(R) Xeon(R) Gold 6139 CPU @ 2.30GHz with 128G memory |
| test parameters | cpufreq_governor=performance |
| | runtime=300s |
| | size=16G |
| | test=shm-pread-rand |
| | ucode=0x200005a |
+------------------+-----------------------------------------------------------------------+
| testcase: change | vm-scalability: vm-scalability.median 5.9% improvement |
| test machine | 104 threads Skylake with 192G memory |
| test parameters | cpufreq_governor=performance |
| | runtime=300s |
| | size=1T |
| | test=lru-shm |
+------------------+-----------------------------------------------------------------------+
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase:
gcc-7/performance/x86_64-rhel-7.6/debian-x86_64-2018-04-03.cgz/300s/16G/lkp-bdw-ep2/shm-pread-rand/vm-scalability
commit:
f568cf93ca ("vfs: Convert smackfs to use the new mount API")
27eb9d500d ("vfs: Convert ramfs, shmem, tmpfs, devtmpfs, rootfs to use the new mount API")
f568cf93ca0923a0 27eb9d500d71d93e2b2f55c226b
---------------- ---------------------------
%stddev %change %stddev
\ | \
35.48 -48.9% 18.13 vm-scalability.free_time
58648 -19.4% 47257 vm-scalability.median
5152784 -19.7% 4135803 vm-scalability.throughput
345.44 -6.3% 323.69 vm-scalability.time.elapsed_time
345.44 -6.3% 323.69 vm-scalability.time.elapsed_time.max
65947509 -50.0% 32974469 vm-scalability.time.maximum_resident_set_size
1.362e+08 -50.0% 68087693 vm-scalability.time.minor_page_faults
8549 +1.3% 8656 vm-scalability.time.percent_of_cpu_this_job_got
5070 ± 2% -49.9% 2538 vm-scalability.time.system_time
24465 +4.2% 25484 vm-scalability.time.user_time
1.548e+09 -19.7% 1.244e+09 vm-scalability.workload
1584808 ± 72% -53.2% 742467 ± 8% cpuidle.C1.time
785475 ± 36% -53.3% 366683 ± 31% cpuidle.C6.usage
2.96 ± 3% -1.1 1.82 mpstat.cpu.all.idle%
16.68 -7.8 8.92 mpstat.cpu.all.sys%
80.35 +8.9 89.25 mpstat.cpu.all.usr%
9965020 ± 2% -44.0% 5578712 ± 5% numa-numastat.node0.local_node
9977872 -44.1% 5580748 ± 5% numa-numastat.node0.numa_hit
10122101 -52.6% 4801652 ± 6% numa-numastat.node1.local_node
10126532 -52.4% 4816911 ± 6% numa-numastat.node1.numa_hit
79.25 +11.0% 88.00 vmstat.cpu.us
64507804 -48.1% 33498560 vmstat.memory.cache
55730730 +65.3% 92097498 vmstat.memory.free
1073 -4.2% 1028 vmstat.system.cs
31.50 ± 8% -19.0% 25.50 ± 9% sched_debug.cpu.cpu_load[3].max
3.10 ± 19% -34.0% 2.04 ± 11% sched_debug.cpu.cpu_load[3].stddev
59.79 ± 6% -18.4% 48.78 ± 7% sched_debug.cpu.cpu_load[4].max
5.63 ± 8% -22.5% 4.36 ± 8% sched_debug.cpu.cpu_load[4].stddev
517.67 ± 5% -19.3% 417.56 ± 7% sched_debug.cpu.nr_switches.min
2709 +1.2% 2742 turbostat.Avg_MHz
22091 ± 95% -63.2% 8124 ± 27% turbostat.C1
774586 ± 36% -54.5% 352400 ± 33% turbostat.C6
1.23 ± 23% -35.1% 0.80 ± 15% turbostat.CPU%c1
0.85 ± 15% -46.8% 0.45 ± 11% turbostat.Pkg%pc2
26.30 +3.3% 27.17 turbostat.RAMWatt
4589533 -68.6% 1440882 meminfo.Active
4589118 -68.6% 1440465 meminfo.Active(anon)
64456373 -48.0% 33491887 meminfo.Cached
64265169 -49.2% 32665733 meminfo.Committed_AS
58888990 -47.2% 31073080 meminfo.Inactive
58887681 -47.2% 31071756 meminfo.Inactive(anon)
223071 -33.5% 148432 meminfo.KReclaimable
58833647 -47.3% 31017550 meminfo.Mapped
55029628 +66.1% 91387958 meminfo.MemAvailable
55536115 +65.5% 91931756 meminfo.MemFree
76356407 -47.7% 39960766 meminfo.Memused
10888435 -49.1% 5544617 meminfo.PageTables
223071 -33.5% 148432 meminfo.SReclaimable
63233823 -49.0% 32269411 meminfo.Shmem
353083 -21.7% 276412 meminfo.Slab
238010 -45.0% 131007 meminfo.max_used_kB
6.97 ± 11% -1.3 5.69 ± 5% perf-profile.calltrace.cycles-pp.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
1.93 ± 50% -1.2 0.70 ± 93% perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
1.94 ± 50% -1.2 0.72 ± 92% perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
2.25 ± 41% -1.1 1.17 ± 42% perf-profile.calltrace.cycles-pp.ret_from_fork
2.25 ± 41% -1.1 1.17 ± 42% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
5.80 ± 10% -1.0 4.75 ± 7% perf-profile.calltrace.cycles-pp.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
1.97 ± 11% -0.6 1.42 ± 8% perf-profile.calltrace.cycles-pp.__update_load_avg_cfs_rq.task_tick_fair.scheduler_tick.update_process_times.tick_sched_handle
2.75 ± 4% -0.5 2.28 ± 5% perf-profile.calltrace.cycles-pp.native_write_msr.lapic_next_deadline.clockevents_program_event.hrtimer_interrupt.smp_apic_timer_interrupt
3.34 ± 4% -0.4 2.90 ± 5% perf-profile.calltrace.cycles-pp.lapic_next_deadline.clockevents_program_event.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
1.77 ± 4% -0.2 1.54 ± 11% perf-profile.calltrace.cycles-pp.run_timer_softirq.__softirqentry_text_start.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
1.67 ± 10% -0.2 1.44 ± 13% perf-profile.calltrace.cycles-pp.rcu_sched_clock_irq.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
0.76 ± 5% -0.1 0.62 ± 10% perf-profile.calltrace.cycles-pp.ktime_get.perf_mux_hrtimer_handler.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt
0.51 ± 58% +0.7 1.25 ± 8% perf-profile.calltrace.cycles-pp.rb_next.timerqueue_del.__remove_hrtimer.__hrtimer_run_queues.hrtimer_interrupt
1.43 ± 20% +0.9 2.35 ± 8% perf-profile.calltrace.cycles-pp.timerqueue_del.__remove_hrtimer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt
2.17 ± 16% +1.0 3.18 ± 9% perf-profile.calltrace.cycles-pp.__remove_hrtimer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
2.16 ± 19% +1.0 3.19 ± 13% perf-profile.calltrace.cycles-pp.irq_enter.smp_apic_timer_interrupt.apic_timer_interrupt
1696 ± 13% -15.2% 1438 ± 3% slabinfo.UNIX.active_objs
1696 ± 13% -15.2% 1438 ± 3% slabinfo.UNIX.num_objs
3934 ± 4% -15.1% 3340 ± 7% slabinfo.eventpoll_pwq.active_objs
3934 ± 4% -15.1% 3340 ± 7% slabinfo.eventpoll_pwq.num_objs
21089 ± 5% -7.5% 19504 ± 2% slabinfo.filp.active_objs
21107 ± 5% -7.2% 19587 ± 2% slabinfo.filp.num_objs
9028 ± 5% -6.8% 8417 ± 2% slabinfo.kmalloc-512.active_objs
4303 ± 5% -15.3% 3646 ± 5% slabinfo.kmalloc-rcl-64.active_objs
4303 ± 5% -15.3% 3646 ± 5% slabinfo.kmalloc-rcl-64.num_objs
1911 ± 4% -10.7% 1708 ± 4% slabinfo.kmalloc-rcl-96.active_objs
1911 ± 4% -10.7% 1708 ± 4% slabinfo.kmalloc-rcl-96.num_objs
272197 -46.6% 145406 slabinfo.radix_tree_node.active_objs
4921 -46.9% 2612 slabinfo.radix_tree_node.active_slabs
275638 -46.9% 146318 slabinfo.radix_tree_node.num_objs
4921 -46.9% 2612 slabinfo.radix_tree_node.num_slabs
362.50 ± 9% -24.7% 273.00 ± 17% slabinfo.skbuff_fclone_cache.active_objs
362.50 ± 9% -24.7% 273.00 ± 17% slabinfo.skbuff_fclone_cache.num_objs
1142643 -68.5% 360168 proc-vmstat.nr_active_anon
1371004 +66.3% 2279988 proc-vmstat.nr_dirty_background_threshold
2745366 +66.3% 4565553 proc-vmstat.nr_dirty_threshold
16116153 -48.1% 8369140 proc-vmstat.nr_file_pages
13884480 +65.6% 22987632 proc-vmstat.nr_free_pages
14728364 -47.3% 7763800 proc-vmstat.nr_inactive_anon
14714990 -47.3% 7750437 proc-vmstat.nr_mapped
2720411 -49.1% 1385255 proc-vmstat.nr_page_table_pages
15810254 -49.0% 8063260 proc-vmstat.nr_shmem
55779 -33.5% 37096 proc-vmstat.nr_slab_reclaimable
32502 -1.6% 31995 proc-vmstat.nr_slab_unreclaimable
1142643 -68.5% 360168 proc-vmstat.nr_zone_active_anon
14728364 -47.3% 7763800 proc-vmstat.nr_zone_inactive_anon
20128841 -48.2% 10422506 proc-vmstat.numa_hit
20111549 -48.3% 10405203 proc-vmstat.numa_local
16498703 -50.0% 8254888 proc-vmstat.pgactivate
20220083 -48.1% 10494413 proc-vmstat.pgalloc_normal
1.371e+08 -49.7% 68917706 proc-vmstat.pgfault
19856424 -49.8% 9959314 ± 3% proc-vmstat.pgfree
576965 ± 2% -66.5% 193251 ± 4% numa-vmstat.node0.nr_active_anon
7984459 -46.0% 4314633 ± 3% numa-vmstat.node0.nr_file_pages
6994491 ± 2% +60.4% 11221928 ± 2% numa-vmstat.node0.nr_free_pages
7288285 -45.1% 4000865 ± 3% numa-vmstat.node0.nr_inactive_anon
7280756 -45.2% 3990345 ± 3% numa-vmstat.node0.nr_mapped
1339535 ± 7% -41.5% 783256 ± 18% numa-vmstat.node0.nr_page_table_pages
7829898 -46.9% 4159908 ± 3% numa-vmstat.node0.nr_shmem
576964 ± 2% -66.5% 193251 ± 4% numa-vmstat.node0.nr_zone_active_anon
7288285 -45.1% 4000865 ± 3% numa-vmstat.node0.nr_zone_inactive_anon
10058590 ± 2% -42.2% 5818668 ± 5% numa-vmstat.node0.numa_hit
10045466 ± 2% -42.1% 5816443 ± 5% numa-vmstat.node0.numa_local
579988 -70.6% 170402 ± 3% numa-vmstat.node1.nr_active_anon
8125385 -50.1% 4052654 ± 3% numa-vmstat.node1.nr_file_pages
6897539 ± 2% +70.6% 11767440 ± 2% numa-vmstat.node1.nr_free_pages
7419455 -49.4% 3757603 ± 3% numa-vmstat.node1.nr_inactive_anon
7413582 -49.4% 3754757 ± 3% numa-vmstat.node1.nr_mapped
1379651 ± 6% -56.4% 601611 ± 23% numa-vmstat.node1.nr_page_table_pages
7974047 -51.1% 3901499 ± 3% numa-vmstat.node1.nr_shmem
29269 ± 31% -55.5% 13024 ± 53% numa-vmstat.node1.nr_slab_reclaimable
579988 -70.6% 170402 ± 3% numa-vmstat.node1.nr_zone_active_anon
7419455 -49.4% 3757603 ± 3% numa-vmstat.node1.nr_zone_inactive_anon
10114627 ± 2% -50.5% 5007129 ± 6% numa-vmstat.node1.numa_hit
9935234 ± 2% -51.5% 4816878 ± 6% numa-vmstat.node1.numa_local
2309540 ± 2% -66.6% 772196 ± 4% numa-meminfo.node0.Active
2309333 ± 2% -66.6% 771828 ± 4% numa-meminfo.node0.Active(anon)
110938 ± 15% -22.2% 86295 ± 16% numa-meminfo.node0.AnonHugePages
31924208 -46.0% 17242697 ± 3% numa-meminfo.node0.FilePages
29138719 -45.1% 15989448 ± 3% numa-meminfo.node0.Inactive
29138061 -45.1% 15988774 ± 3% numa-meminfo.node0.Inactive(anon)
29107588 -45.2% 15946271 ± 3% numa-meminfo.node0.Mapped
27989020 ± 2% +60.5% 44909537 ± 2% numa-meminfo.node0.MemFree
37878262 -44.7% 20957746 ± 5% numa-meminfo.node0.MemUsed
5360132 ± 7% -41.6% 3129116 ± 18% numa-meminfo.node0.PageTables
31305964 -46.9% 16623796 ± 3% numa-meminfo.node0.Shmem
2321613 -70.7% 680488 ± 3% numa-meminfo.node1.Active
2321405 -70.7% 680440 ± 3% numa-meminfo.node1.Active(anon)
32488297 -50.2% 16194282 ± 3% numa-meminfo.node1.FilePages
29663768 -49.4% 15015880 ± 3% numa-meminfo.node1.Inactive
29663115 -49.4% 15015232 ± 3% numa-meminfo.node1.Inactive(anon)
117054 ± 31% -55.5% 52077 ± 53% numa-meminfo.node1.KReclaimable
29639530 -49.4% 15003596 ± 3% numa-meminfo.node1.Mapped
27600077 ± 2% +70.6% 47091753 ± 2% numa-meminfo.node1.MemFree
38425162 -50.7% 18933485 ± 6% numa-meminfo.node1.MemUsed
5521649 ± 6% -56.5% 2402059 ± 23% numa-meminfo.node1.PageTables
117054 ± 31% -55.5% 52077 ± 53% numa-meminfo.node1.SReclaimable
31882947 -51.1% 15589660 ± 3% numa-meminfo.node1.Shmem
176591 ± 22% -40.0% 105918 ± 28% numa-meminfo.node1.Slab
224.75 ± 14% -21.7% 176.00 ± 3% interrupts.35:IR-PCI-MSI.1572865-edge.eth0-TxRx-1
383.00 ± 30% -36.4% 243.67 ± 43% interrupts.39:IR-PCI-MSI.1572869-edge.eth0-TxRx-5
213.50 ± 10% -9.6% 193.00 ± 14% interrupts.47:IR-PCI-MSI.1572877-edge.eth0-TxRx-13
227.50 ± 41% -30.0% 159.33 interrupts.56:IR-PCI-MSI.1572886-edge.eth0-TxRx-22
313942 -5.2% 297766 interrupts.CAL:Function_call_interrupts
224.75 ± 14% -21.7% 176.00 ± 3% interrupts.CPU1.35:IR-PCI-MSI.1572865-edge.eth0-TxRx-1
430.25 ± 40% +125.2% 969.00 ± 20% interrupts.CPU1.RES:Rescheduling_interrupts
213.50 ± 10% -9.6% 193.00 ± 14% interrupts.CPU13.47:IR-PCI-MSI.1572877-edge.eth0-TxRx-13
124.25 ± 39% +1451.7% 1928 ± 26% interrupts.CPU17.RES:Rescheduling_interrupts
358.75 ± 31% +365.7% 1670 ± 20% interrupts.CPU18.RES:Rescheduling_interrupts
227.50 ± 41% -30.0% 159.33 interrupts.CPU22.56:IR-PCI-MSI.1572886-edge.eth0-TxRx-22
2867 ±149% -94.5% 157.67 ± 57% interrupts.CPU25.RES:Rescheduling_interrupts
444.50 ± 72% -79.5% 91.00 ± 45% interrupts.CPU27.RES:Rescheduling_interrupts
387.25 ± 39% -51.8% 186.67 ± 83% interrupts.CPU29.RES:Rescheduling_interrupts
671.50 ± 85% -73.9% 175.33 ± 19% interrupts.CPU31.RES:Rescheduling_interrupts
392.00 ± 20% -74.1% 101.67 ± 68% interrupts.CPU35.RES:Rescheduling_interrupts
689.00 ± 58% -63.7% 250.00 ± 19% interrupts.CPU38.RES:Rescheduling_interrupts
713.75 ± 46% -65.2% 248.33 ± 67% interrupts.CPU41.RES:Rescheduling_interrupts
403.25 ± 36% -67.8% 130.00 ± 48% interrupts.CPU42.RES:Rescheduling_interrupts
4547 ± 10% -15.5% 3841 ± 2% interrupts.CPU43.RES:Rescheduling_interrupts
295.00 ± 64% -83.2% 49.67 ± 39% interrupts.CPU48.RES:Rescheduling_interrupts
383.00 ± 30% -36.4% 243.67 ± 43% interrupts.CPU5.39:IR-PCI-MSI.1572869-edge.eth0-TxRx-5
104.75 ± 44% +242.7% 359.00 ± 69% interrupts.CPU52.RES:Rescheduling_interrupts
7873 -33.0% 5276 ± 34% interrupts.CPU57.NMI:Non-maskable_interrupts
7873 -33.0% 5276 ± 34% interrupts.CPU57.PMI:Performance_monitoring_interrupts
7836 -49.6% 3950 interrupts.CPU58.NMI:Non-maskable_interrupts
7836 -49.6% 3950 interrupts.CPU58.PMI:Performance_monitoring_interrupts
7880 -49.8% 3957 interrupts.CPU59.NMI:Non-maskable_interrupts
7880 -49.8% 3957 interrupts.CPU59.PMI:Performance_monitoring_interrupts
6881 ± 24% -42.4% 3965 interrupts.CPU61.NMI:Non-maskable_interrupts
6881 ± 24% -42.4% 3965 interrupts.CPU61.PMI:Performance_monitoring_interrupts
6894 ± 24% -42.7% 3948 interrupts.CPU63.NMI:Non-maskable_interrupts
6894 ± 24% -42.7% 3948 interrupts.CPU63.PMI:Performance_monitoring_interrupts
34.75 ± 48% +1010.8% 386.00 ±113% interrupts.CPU63.RES:Rescheduling_interrupts
7879 -49.7% 3959 interrupts.CPU64.NMI:Non-maskable_interrupts
7879 -49.7% 3959 interrupts.CPU64.PMI:Performance_monitoring_interrupts
7828 -49.4% 3960 interrupts.CPU65.NMI:Non-maskable_interrupts
7828 -49.4% 3960 interrupts.CPU65.PMI:Performance_monitoring_interrupts
7889 -49.6% 3977 interrupts.CPU66.NMI:Non-maskable_interrupts
7889 -49.6% 3977 interrupts.CPU66.PMI:Performance_monitoring_interrupts
7862 -49.6% 3964 interrupts.CPU67.NMI:Non-maskable_interrupts
7862 -49.6% 3964 interrupts.CPU67.PMI:Performance_monitoring_interrupts
658.00 ±109% -97.8% 14.33 ± 52% interrupts.CPU67.RES:Rescheduling_interrupts
7845 -49.3% 3976 interrupts.CPU68.NMI:Non-maskable_interrupts
7845 -49.3% 3976 interrupts.CPU68.PMI:Performance_monitoring_interrupts
7875 -49.6% 3966 interrupts.CPU69.NMI:Non-maskable_interrupts
7875 -49.6% 3966 interrupts.CPU69.PMI:Performance_monitoring_interrupts
348.75 ± 33% +140.0% 837.00 ± 56% interrupts.CPU7.RES:Rescheduling_interrupts
7881 -49.9% 3952 interrupts.CPU70.NMI:Non-maskable_interrupts
7881 -49.9% 3952 interrupts.CPU70.PMI:Performance_monitoring_interrupts
27.33 +3.0% 28.16 perf-stat.i.MPKI
8.834e+09 -1.8% 8.673e+09 perf-stat.i.branch-instructions
0.15 ± 9% -0.1 0.10 ± 7% perf-stat.i.branch-miss-rate%
10588799 -28.6% 7555287 ± 2% perf-stat.i.branch-misses
63.58 +4.2 67.78 perf-stat.i.cache-miss-rate%
6.596e+08 +8.0% 7.125e+08 perf-stat.i.cache-misses
9.611e+08 +5.0% 1.009e+09 perf-stat.i.cache-references
1041 -4.5% 994.48 perf-stat.i.context-switches
2.378e+11 +1.3% 2.41e+11 perf-stat.i.cpu-cycles
50.46 ± 2% +22.5% 61.81 ± 7% perf-stat.i.cpu-migrations
662.21 ± 2% -23.0% 509.63 perf-stat.i.cycles-between-cache-misses
4.42 +0.4 4.87 perf-stat.i.dTLB-load-miss-rate%
4.707e+08 +12.4% 5.293e+08 perf-stat.i.dTLB-load-misses
3.295e+09 +5.5% 3.477e+09 perf-stat.i.dTLB-stores
81.54 ± 5% -6.0 75.55 perf-stat.i.iTLB-load-miss-rate%
883475 ± 2% -45.9% 477683 perf-stat.i.iTLB-load-misses
3.788e+10 -1.7% 3.723e+10 perf-stat.i.instructions
532634 ± 30% +47.4% 784855 perf-stat.i.instructions-per-iTLB-miss
0.17 -5.1% 0.16 perf-stat.i.ipc
394946 -46.4% 211876 perf-stat.i.minor-faults
1.827e+08 +19.8% 2.189e+08 ± 8% perf-stat.i.node-load-misses
2240951 -44.9% 1234844 perf-stat.i.node-store-misses
1555912 -44.5% 864154 ± 2% perf-stat.i.node-stores
394952 -46.4% 211884 perf-stat.i.page-faults
25.29 +7.0% 27.07 perf-stat.overall.MPKI
0.12 ± 2% -0.0 0.09 ± 3% perf-stat.overall.branch-miss-rate%
68.60 +2.0 70.57 perf-stat.overall.cache-miss-rate%
6.27 +3.2% 6.47 perf-stat.overall.cpi
361.41 -6.3% 338.69 perf-stat.overall.cycles-between-cache-misses
4.21 +0.5 4.74 perf-stat.overall.dTLB-load-miss-rate%
96.84 -1.6 95.28 perf-stat.overall.iTLB-load-miss-rate%
42701 ± 2% +81.4% 77459 perf-stat.overall.instructions-per-iTLB-miss
0.16 -3.1% 0.15 perf-stat.overall.ipc
28.32 +2.8 31.14 ± 9% perf-stat.overall.node-load-miss-rate%
8452 +14.5% 9674 perf-stat.overall.path-length
8.82e+09 -1.9% 8.652e+09 perf-stat.ps.branch-instructions
10660193 -28.7% 7603578 ± 2% perf-stat.ps.branch-misses
6.561e+08 +8.1% 7.094e+08 perf-stat.ps.cache-misses
9.565e+08 +5.1% 1.005e+09 perf-stat.ps.cache-references
1039 -4.6% 992.30 perf-stat.ps.context-switches
2.371e+11 +1.3% 2.402e+11 perf-stat.ps.cpu-cycles
50.29 ± 2% +22.5% 61.62 ± 7% perf-stat.ps.cpu-migrations
4.681e+08 +12.6% 5.269e+08 perf-stat.ps.dTLB-load-misses
3.281e+09 +5.6% 3.463e+09 perf-stat.ps.dTLB-stores
886017 ± 2% -45.9% 479504 perf-stat.ps.iTLB-load-misses
3.782e+10 -1.8% 3.714e+10 perf-stat.ps.instructions
395992 -46.3% 212654 perf-stat.ps.minor-faults
1.817e+08 +19.9% 2.179e+08 ± 8% perf-stat.ps.node-load-misses
2253281 -44.9% 1242231 perf-stat.ps.node-store-misses
1565323 -44.4% 869772 ± 2% perf-stat.ps.node-stores
395992 -46.3% 212654 perf-stat.ps.page-faults
1.309e+13 -8.1% 1.203e+13 perf-stat.total.instructions
146357 ± 6% -14.5% 125182 softirqs.CPU0.TIMER
141184 ± 5% -11.7% 124672 ± 2% softirqs.CPU1.TIMER
147886 ± 14% -20.0% 118358 ± 2% softirqs.CPU10.TIMER
136397 ± 2% -10.9% 121532 softirqs.CPU11.TIMER
137112 ± 3% -11.2% 121805 softirqs.CPU12.TIMER
136851 ± 3% -11.8% 120641 ± 2% softirqs.CPU13.TIMER
154185 ± 12% -18.7% 125328 ± 2% softirqs.CPU14.TIMER
146182 ± 6% -11.6% 129251 ± 2% softirqs.CPU15.TIMER
137975 ± 3% -11.6% 121954 softirqs.CPU17.TIMER
139759 ± 3% -12.1% 122847 ± 2% softirqs.CPU18.TIMER
136434 ± 3% -10.1% 122633 softirqs.CPU19.TIMER
140178 ± 3% -13.9% 120756 softirqs.CPU2.TIMER
142345 ± 2% -10.5% 127466 softirqs.CPU20.TIMER
142140 ± 4% -13.0% 123682 ± 2% softirqs.CPU21.TIMER
140219 ± 3% -11.9% 123543 softirqs.CPU3.TIMER
39237 ± 5% -8.5% 35897 softirqs.CPU33.RCU
39794 ± 6% -7.9% 36635 softirqs.CPU35.RCU
43475 ± 5% -10.0% 39132 softirqs.CPU36.RCU
137545 ± 3% -13.3% 119226 softirqs.CPU4.TIMER
44254 ± 6% -11.4% 39230 ± 4% softirqs.CPU42.RCU
43098 ± 4% -14.8% 36703 ± 10% softirqs.CPU43.RCU
150777 ± 12% -14.8% 128387 ± 4% softirqs.CPU44.TIMER
149865 ± 4% -13.8% 129214 ± 5% softirqs.CPU45.TIMER
139358 ± 2% -12.6% 121744 ± 2% softirqs.CPU46.TIMER
140711 ± 3% -11.8% 124137 softirqs.CPU47.TIMER
137353 ± 2% -12.7% 119924 softirqs.CPU48.TIMER
136333 ± 3% -11.4% 120724 softirqs.CPU49.TIMER
137656 ± 4% -12.8% 120006 softirqs.CPU5.TIMER
142562 ± 2% -11.5% 126238 ± 4% softirqs.CPU50.TIMER
139806 ± 2% -9.4% 126671 ± 2% softirqs.CPU51.TIMER
135091 ± 2% -10.2% 121320 ± 2% softirqs.CPU52.TIMER
139116 ± 3% -10.6% 124324 softirqs.CPU53.TIMER
137160 -14.2% 117742 ± 3% softirqs.CPU54.TIMER
135798 ± 2% -12.1% 119408 softirqs.CPU55.TIMER
136756 ± 3% -11.6% 120847 softirqs.CPU56.TIMER
135013 ± 2% -11.3% 119817 softirqs.CPU57.TIMER
146878 ± 7% -15.1% 124630 ± 3% softirqs.CPU58.TIMER
152234 ± 13% -15.3% 128998 ± 2% softirqs.CPU59.TIMER
142749 ± 3% -13.5% 123549 ± 2% softirqs.CPU6.TIMER
136796 ± 3% -11.8% 120619 ± 2% softirqs.CPU61.TIMER
138150 ± 3% -12.3% 121108 ± 3% softirqs.CPU62.TIMER
136393 ± 3% -11.4% 120810 softirqs.CPU63.TIMER
141686 ± 2% -10.2% 127274 softirqs.CPU64.TIMER
140700 ± 3% -12.0% 123754 ± 2% softirqs.CPU65.TIMER
143677 ± 3% -11.9% 126609 ± 2% softirqs.CPU7.TIMER
34821 ± 5% -9.7% 31437 softirqs.CPU77.RCU
34876 ± 5% -8.2% 32008 softirqs.CPU79.RCU
134723 ± 2% -10.2% 120958 softirqs.CPU8.TIMER
35408 ± 5% -7.7% 32681 ± 3% softirqs.CPU83.RCU
38518 ± 5% -9.5% 34861 ± 5% softirqs.CPU86.RCU
38060 ± 9% -20.0% 30464 ± 13% softirqs.CPU87.RCU
135950 ± 2% -9.8% 122596 softirqs.CPU9.TIMER
347353 ± 3% -21.9% 271232 ± 3% softirqs.SCHED
vm-scalability.time.user_time
30000 +-+-----------------------------------------------------------------+
| |
25000 O-O.+.O O O O O O .O. O O O .O.O O O O.O .+.|
|.+ +..+.+.+.+ + + +.+ +.+.+.+. +.+ +.+ : +.+.+ |
| : : : : : : : : |
20000 +-+ : : :: : : : : : |
| : : : : : : : : : : |
15000 +-+ : : : : : : : : : : |
| : : : : : : : : : : |
10000 +-+ : : : : : : : : : : |
| : : : : : : : : : : |
| : : : : : : : : :: |
5000 +-+ : : :: : :: |
| : : : : : |
0 +-+-O----O-O---O-----O-O------O---O--------------O------------------+
vm-scalability.time.system_time
6000 +-+------------------------------------------------------------------+
| |
5000 +-+. .+..+.+.+.+ + + +.+. .+.+.+.+. .+.+ +.+. +..+.+. .|
| + : : : : +. + : : + : + |
| : :: : : : : : : |
4000 +-+ : :: :: : : : : : |
| : : : : : : : : : : |
3000 +-+ : : : : : : : : : : |
O O O O :O:O : : :O:O O O O O O O O : :O O O: : |
2000 +-+ : : : : : : : : : : |
| : : : : : : : : : : |
| :: : : :: : : : : |
1000 +-+ : : : : : |
| : : : : : |
0 +-+-O----O-O---O------O-O-----O----O--------------O------------------+
vm-scalability.time.percent_of_cpu_this_job_got
9000 +-+------------------------------------------------------------------+
O.O.+.O..+.+.O.+ O O + O O.+.O..+.O.O.O.O.O.O O.O.O +..+.+.+.|
8000 +-+ : : : : : : : : |
7000 +-+ : : : : : : : : |
| : :: : : : : : : |
6000 +-+ : : : : : : : : : : |
5000 +-+ : : : : : : : : : : |
| : : : : : : : : : : |
4000 +-+ : : : : : : : : : : |
3000 +-+ : : : : : : : : : : |
| : : : : : : : : : : |
2000 +-+ : :: : :: : |
1000 +-+ : : : : : |
| : : : : : |
0 +-+-O----O-O---O------O-O-----O----O--------------O------------------+
vm-scalability.time.elapsed_time
350 +-+-------------------------------------------------------------------+
O O O O : O O : O O O O O O O O O O O O : |
300 +-+ : : : : : : : : |
| : : : : : : : : |
250 +-+ : : : : : : : : : : |
| : : : : : : : : : : |
200 +-+ : : : : : : : : : : |
| : : : : : : : : : : |
150 +-+ : : : : : : : : : : |
| : : : : : : : : : : |
100 +-+ : : : : : : : : : : |
| :: : : :: : |
50 +-+ : : : : : |
| : : : : : |
0 +-+-O----O-O---O------O-O------O---O--------------O-------------------+
vm-scalability.time.elapsed_time.max
350 +-+-------------------------------------------------------------------+
O O O O : O O : O O O O O O O O O O O O : |
300 +-+ : : : : : : : : |
| : : : : : : : : |
250 +-+ : : : : : : : : : : |
| : : : : : : : : : : |
200 +-+ : : : : : : : : : : |
| : : : : : : : : : : |
150 +-+ : : : : : : : : : : |
| : : : : : : : : : : |
100 +-+ : : : : : : : : : : |
| :: : : :: : |
50 +-+ : : : : : |
| : : : : : |
0 +-+-O----O-O---O------O-O------O---O--------------O-------------------+
vm-scalability.time.maximum_resident_set_size
7e+07 +-+-----------------------------------------------------------------+
|.+.+.+..+.+.+.+ + + +.+.+.+.+.+.+..+.+.+ +.+.+ +.+.+.+.|
6e+07 +-+ : : : : : : : : |
| : : : : : : : : |
5e+07 +-+ : : :: : : : : : |
| : : : : : : : : : : |
4e+07 +-+ : : : : : : : : : : |
O O O O :O:O: : : O:O O O O O O O O: :O O O: : |
3e+07 +-+ : : : : : : : : : : |
| : : : : : : : : : : |
2e+07 +-+ : : : : : : : : : : |
| : : :: : :: |
1e+07 +-+ : : : : : |
| : : : : : |
0 +-+-O----O-O---O-----O-O------O---O--------------O------------------+
vm-scalability.time.minor_page_faults
1.4e+08 +-+---------------------------------------------------------------+
| : : : : : : : : |
1.2e+08 +-+ : : : : : : : : |
| : : : : : : : : |
1e+08 +-+ : : : : : : : : : : |
| : : : : : : : : : : |
8e+07 +-+ : : : : : : : : : : |
O O O O :O:O: : :O:O O O O O O O O: :O O O: : |
6e+07 +-+ : : : : : : : : : : |
| : : : : : : : : : : |
4e+07 +-+ : : : : : : : : : : |
| :: : : :: : |
2e+07 +-+ : : : : : |
| : : : : : |
0 +-+-O---O-O---O------O-O-----O---O-------------O------------------+
vm-scalability.throughput
6e+06 +-+-----------------------------------------------------------------+
| |
5e+06 +-+.+.+..+.+.+.+ + + +.+.+.+.+.+.+..+.+.+ +.+.+ +.+.+.+.|
| : : : : : : : : |
O O O O : O O :: O O O O O O O O O O O O : |
4e+06 +-+ : :: :: : : : : : |
| : : : : : : : : : : |
3e+06 +-+ : : : : : : : : : : |
| : : : : : : : : : : |
2e+06 +-+ : : : : : : : : : : |
| : : : : : : : : : : |
| :: :: : : :: :: |
1e+06 +-+ : : : : :: |
| : : : : : |
0 +-+-O----O-O---O-----O-O------O---O--------------O------------------+
vm-scalability.free_time
40 +-+--------------------------------------------------------------------+
|. .+.+. .+ + + +. .+.+.+. .+.+ |
35 +-+.+. +.+. : : : : +.+. +.+. : +.+..+ +.+..+.+.|
30 +-+ : : : : : : : : |
| : : :: : : : : : |
25 +-+ : : : : : : : : : : |
| : : : : : : : : : : |
20 +-+ : : : : : : O O O O : : : : |
O O O O :O:O: : :O:O O O O: :O O O: : |
15 +-+ : : : : : : : : : : |
10 +-+ : : : : : : : : : : |
| : : :: :: :: |
5 +-+ : : : : : |
| : : : : : |
0 +-+-O----O-O----O-----O-O------O----O--------------O-------------------+
vm-scalability.median
60000 +-+-----------------------------------------------------------------+
| : : : : : : : : |
50000 +-+ : : : : : : : : |
O O O O : O O :: O O O O O O O O O O O O : |
| : : : : : : : : : : |
40000 +-+ : : : : : : : : : : |
| : : : : : : : : : : |
30000 +-+ : : : : : : : : : : |
| : : : : : : : : : : |
20000 +-+ : : : : : : : : : : |
| : : : : : : : : : : |
| : : :: : :: |
10000 +-+ : : : : : |
| : : : : : |
0 +-+-O----O-O---O-----O-O------O---O--------------O------------------+
vm-scalability.workload
1.6e+09 +-+---------------------------------------------------------------+
| + : : + + : : : : + |
1.4e+09 +-+ : : : : : : : : |
1.2e+09 O-O O O : O O : O O O O O O O O O :O O O : |
| : : : : : : : : : : |
1e+09 +-+ : : : : : : : : : : |
| : : : : : : : : : : |
8e+08 +-+ : : : : : : : : : : |
| : : : : : : : : : : |
6e+08 +-+ : : : : : : : : : : |
4e+08 +-+ : : : : : : : : : : |
| :: : : :: : |
2e+08 +-+ : : : : : |
| : : : : : |
0 +-+-O---O-O---O------O-O-----O---O-------------O------------------+
[*] bisect-good sample
[O] bisect-bad sample
***************************************************************************************************
lkp-skl-fpga01: 104 threads Skylake with 192G memory
***************************************************************************************************
lkp-skl-2sp6: 72 threads Intel(R) Xeon(R) Gold 6139 CPU @ 2.30GHz with 128G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/debian-x86_64-2019-03-19.cgz/300s/1T/lkp-skl-2sp6/lru-shm/vm-scalability/0x200005a
commit:
f568cf93ca ("vfs: Convert smackfs to use the new mount API")
27eb9d500d ("vfs: Convert ramfs, shmem, tmpfs, devtmpfs, rootfs to use the new mount API")
f568cf93ca0923a0 27eb9d500d71d93e2b2f55c226b
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
1:4 -25% :4 kmsg.Firmware_Bug]:the_BIOS_has_corrupted_hw-PMU_resources(MSR#is#)
:4 25% 1:4 kmsg.Firmware_Bug]:the_BIOS_has_corrupted_hw-PMU_resources(MSR#is#c5)
3:4 17% 4:4 perf-profile.calltrace.cycles-pp.sync_regs.error_entry.do_access
6:4 33% 7:4 perf-profile.calltrace.cycles-pp.error_entry.do_access
%stddev %change %stddev
\ | \
0.04 -44.5% 0.02 vm-scalability.free_time
493774 -6.9% 459676 vm-scalability.median
0.10 ± 4% -70.2% 0.03 ± 8% vm-scalability.median_stddev
0.10 ± 3% -71.6% 0.03 ± 8% vm-scalability.stddev
35560067 -6.7% 33183685 vm-scalability.throughput
265.62 +5.3% 279.70 vm-scalability.time.elapsed_time
265.62 +5.3% 279.70 vm-scalability.time.elapsed_time.max
53018 ± 4% +12.9% 59872 ± 2% vm-scalability.time.involuntary_context_switches
915746 -50.0% 458060 vm-scalability.time.maximum_resident_set_size
5.282e+08 +1.8% 5.378e+08 vm-scalability.time.minor_page_faults
1913 +3.0% 1970 vm-scalability.time.percent_of_cpu_this_job_got
3184 +7.2% 3413 vm-scalability.time.system_time
1899 +10.5% 2099 vm-scalability.time.user_time
33022 +102.9% 67014 vm-scalability.time.voluntary_context_switches
2.371e+09 +1.6% 2.408e+09 vm-scalability.workload
53130 ± 4% +157.8% 136948 ± 40% cpuidle.C1.usage
207860 +39.5% 290026 cpuidle.POLL.time
101579 +28.5% 130526 cpuidle.POLL.usage
73.00 -2.1% 71.50 vmstat.cpu.id
34506226 -48.0% 17935148 vmstat.memory.cache
96542374 +17.2% 1.131e+08 vmstat.memory.free
3293 ± 2% +35.6% 4466 vmstat.system.cs
851.33 +6.9% 910.25 turbostat.Avg_MHz
53038 ± 4% +155.5% 135497 ± 39% turbostat.C1
30.18 -22.3 7.89 ± 11% turbostat.PKG_%
48.33 +8.1% 52.25 ± 2% turbostat.PkgTmp
177.02 +6.2% 187.92 turbostat.PkgWatt
34577155 -48.1% 17950176 meminfo.Cached
33784819 -49.7% 16984540 meminfo.Committed_AS
33341535 -49.9% 16711076 meminfo.Inactive
33339708 -49.9% 16708826 meminfo.Inactive(anon)
1826 +23.1% 2248 meminfo.Inactive(file)
145341 -26.4% 106952 meminfo.KReclaimable
8852817 ± 3% -50.6% 4377013 meminfo.Mapped
95780719 +17.4% 1.124e+08 meminfo.MemAvailable
96325492 +17.3% 1.13e+08 meminfo.MemFree
35378531 -47.1% 18707929 meminfo.Memused
20756 ± 2% -43.5% 11735 meminfo.PageTables
145341 -26.4% 106952 meminfo.SReclaimable
33354229 -49.9% 16727090 meminfo.Shmem
274165 -13.2% 238008 meminfo.Slab
257335 -51.1% 125813 meminfo.max_used_kB
62431 ± 26% -49.8% 31335 ± 6% softirqs.CPU18.RCU
24215 ± 3% +12.7% 27285 ± 6% softirqs.CPU18.SCHED
24808 +18.1% 29295 ± 11% softirqs.CPU23.RCU
21472 ± 6% +22.6% 26326 ± 17% softirqs.CPU30.RCU
20425 ± 9% +30.1% 26584 ± 11% softirqs.CPU32.RCU
22603 ± 7% +20.0% 27119 ± 13% softirqs.CPU34.RCU
24977 ± 4% +20.7% 30135 ± 10% softirqs.CPU39.RCU
24481 ± 2% +16.9% 28618 ± 9% softirqs.CPU40.RCU
24336 ± 11% +13.0% 27510 ± 11% softirqs.CPU41.RCU
25706 ± 4% +20.1% 30880 ± 8% softirqs.CPU44.RCU
24456 ± 4% +21.1% 29606 ± 10% softirqs.CPU48.RCU
22435 ± 11% +37.2% 30774 ± 10% softirqs.CPU50.RCU
23702 ± 5% +28.3% 30419 ± 11% softirqs.CPU51.RCU
25067 ± 7% +32.1% 33123 ± 4% softirqs.CPU52.RCU
22262 ± 4% +17.7% 26209 ± 13% softirqs.CPU58.RCU
35952 ± 10% -22.6% 27814 ± 10% softirqs.CPU6.RCU
25967 ± 4% +8.4% 28142 softirqs.CPU7.SCHED
4406424 ± 7% -49.2% 2236674 ± 6% numa-vmstat.node0.nr_file_pages
11914548 ± 2% +18.3% 14091804 numa-vmstat.node0.nr_free_pages
4254001 ± 7% -51.1% 2080200 ± 6% numa-vmstat.node0.nr_inactive_anon
1024415 ± 2% -45.2% 560990 numa-vmstat.node0.nr_mapped
2444 ± 9% -38.0% 1515 ± 12% numa-vmstat.node0.nr_page_table_pages
4255480 ± 7% -51.1% 2082984 ± 6% numa-vmstat.node0.nr_shmem
19425 ± 6% -23.1% 14934 ± 2% numa-vmstat.node0.nr_slab_reclaimable
4253930 ± 7% -51.1% 2080107 ± 6% numa-vmstat.node0.nr_zone_inactive_anon
34197 ± 13% +10.5% 37790 ± 11% numa-vmstat.node1.nr_anon_pages
70.67 ± 24% +170.3% 191.00 ± 30% numa-vmstat.node1.nr_dirtied
4217319 ± 7% -46.8% 2244814 ± 6% numa-vmstat.node1.nr_file_pages
12188079 ± 2% +16.2% 14163735 numa-vmstat.node1.nr_free_pages
4060115 ± 8% -48.5% 2090670 ± 7% numa-vmstat.node1.nr_inactive_anon
1048978 ± 3% -48.0% 545102 ± 2% numa-vmstat.node1.nr_mapped
2558 ± 8% -44.0% 1431 ± 14% numa-vmstat.node1.nr_page_table_pages
4062271 ± 8% -48.5% 2092470 ± 7% numa-vmstat.node1.nr_shmem
16926 ± 7% -30.2% 11811 ± 3% numa-vmstat.node1.nr_slab_reclaimable
63.33 ± 24% +167.2% 169.25 ± 30% numa-vmstat.node1.nr_written
4060057 ± 8% -48.5% 2090600 ± 7% numa-vmstat.node1.nr_zone_inactive_anon
17465 +28.1% 22378 ± 2% slabinfo.anon_vma.active_objs
379.00 +28.3% 486.25 ± 2% slabinfo.anon_vma.active_slabs
17465 +28.2% 22391 ± 2% slabinfo.anon_vma.num_objs
379.00 +28.3% 486.25 ± 2% slabinfo.anon_vma.num_slabs
34169 +19.5% 40848 slabinfo.anon_vma_chain.active_objs
533.33 +19.7% 638.50 slabinfo.anon_vma_chain.active_slabs
34169 +19.7% 40900 slabinfo.anon_vma_chain.num_objs
533.33 +19.7% 638.50 slabinfo.anon_vma_chain.num_slabs
3571 +7.4% 3835 ± 3% slabinfo.pid.active_objs
3571 +7.4% 3835 ± 3% slabinfo.pid.num_objs
150826 -43.4% 85301 slabinfo.radix_tree_node.active_objs
2749 -42.7% 1574 slabinfo.radix_tree_node.active_slabs
154006 -42.7% 88185 slabinfo.radix_tree_node.num_objs
2749 -42.7% 1574 slabinfo.radix_tree_node.num_slabs
2385 ± 4% +17.6% 2805 ± 5% slabinfo.sighand_cache.active_objs
2388 ± 4% +17.8% 2813 ± 5% slabinfo.sighand_cache.num_objs
3459 ± 2% +9.2% 3778 ± 3% slabinfo.signal_cache.active_objs
3459 ± 2% +9.2% 3778 ± 3% slabinfo.signal_cache.num_objs
1197 ± 4% +19.7% 1433 ± 3% slabinfo.skbuff_ext_cache.active_objs
1204 ± 4% +19.3% 1436 ± 3% slabinfo.skbuff_ext_cache.num_objs
33178 ± 11% +65.8% 55010 ± 23% sched_debug.cfs_rq:/.min_vruntime.stddev
33153 ± 10% +66.0% 55046 ± 23% sched_debug.cfs_rq:/.spread0.stddev
408.53 ± 6% +29.1% 527.30 sched_debug.cfs_rq:/.util_est_enqueued.max
82.45 ± 6% +27.1% 104.81 ± 11% sched_debug.cfs_rq:/.util_est_enqueued.stddev
9.78 ± 2% +10.0% 10.76 ± 3% sched_debug.cpu.cpu_load[0].avg
9.62 ± 4% +15.0% 11.07 ± 3% sched_debug.cpu.cpu_load[1].avg
9.52 ± 5% +18.0% 11.23 ± 3% sched_debug.cpu.cpu_load[2].avg
9.55 ± 4% +18.5% 11.32 ± 3% sched_debug.cpu.cpu_load[3].avg
9.72 ± 3% +18.8% 11.54 ± 2% sched_debug.cpu.cpu_load[4].avg
8720 ± 11% +73.6% 15136 sched_debug.cpu.curr->pid.max
1411 ± 9% +90.4% 2687 ± 9% sched_debug.cpu.curr->pid.stddev
11784 ± 3% +13.5% 13371 sched_debug.cpu.load.avg
112105 ± 12% +19.5% 134002 sched_debug.cpu.nr_load_updates.max
1962 ± 15% +73.8% 3411 ± 17% sched_debug.cpu.nr_load_updates.stddev
0.23 ± 2% +9.6% 0.26 ± 2% sched_debug.cpu.nr_running.stddev
5073 ± 12% +58.9% 8061 sched_debug.cpu.nr_switches.avg
22637 ± 15% +94.2% 43953 ± 12% sched_debug.cpu.nr_switches.max
1515 ± 5% +76.3% 2671 ± 17% sched_debug.cpu.nr_switches.min
4110 ± 17% +74.3% 7163 ± 8% sched_debug.cpu.nr_switches.stddev
-16.67 -35.5% -10.75 sched_debug.cpu.nr_uninterruptible.min
5.82 ± 8% -16.6% 4.85 ± 5% sched_debug.cpu.nr_uninterruptible.stddev
17663224 ± 7% -49.3% 8955455 ± 6% numa-meminfo.node0.FilePages
17054254 ± 7% -51.2% 8330481 ± 6% numa-meminfo.node0.Inactive
17053513 ± 7% -51.2% 8329558 ± 6% numa-meminfo.node0.Inactive(anon)
77757 ± 6% -23.2% 59747 ± 2% numa-meminfo.node0.KReclaimable
4351999 ± 2% -48.2% 2256215 ± 2% numa-meminfo.node0.Mapped
47619833 ± 2% +18.3% 56357983 numa-meminfo.node0.MemFree
18057485 ± 7% -48.4% 9319335 ± 5% numa-meminfo.node0.MemUsed
10156 ± 8% -39.8% 6112 ± 10% numa-meminfo.node0.PageTables
77757 ± 6% -23.2% 59747 ± 2% numa-meminfo.node0.SReclaimable
17059436 ± 7% -51.1% 8340675 ± 6% numa-meminfo.node0.Shmem
136791 ± 13% +10.5% 151136 ± 11% numa-meminfo.node1.AnonPages
16876074 ± 7% -46.8% 8981202 ± 6% numa-meminfo.node1.FilePages
16248318 ± 8% -48.5% 8365977 ± 7% numa-meminfo.node1.Inactive
16247232 ± 8% -48.5% 8364650 ± 7% numa-meminfo.node1.Inactive(anon)
67560 ± 7% -30.1% 47237 ± 3% numa-meminfo.node1.KReclaimable
4186453 ± 3% -47.9% 2182438 ± 4% numa-meminfo.node1.Mapped
48745406 ± 2% +16.2% 56652661 numa-meminfo.node1.MemFree
17281296 ± 7% -45.8% 9374041 ± 6% numa-meminfo.node1.MemUsed
9974 ± 6% -42.9% 5693 ± 16% numa-meminfo.node1.PageTables
67560 ± 7% -30.1% 47237 ± 3% numa-meminfo.node1.SReclaimable
16255869 ± 8% -48.5% 8371819 ± 7% numa-meminfo.node1.Shmem
128302 ± 3% -17.3% 106117 ± 7% numa-meminfo.node1.Slab
64056 +1.9% 65273 proc-vmstat.nr_active_anon
284.00 +80.7% 513.25 proc-vmstat.nr_dirtied
2391828 +17.3% 2805292 proc-vmstat.nr_dirty_background_threshold
4790033 +17.3% 5618128 proc-vmstat.nr_dirty_threshold
8615955 -47.9% 4485338 proc-vmstat.nr_file_pages
24110149 +17.2% 28251496 proc-vmstat.nr_free_pages
8306311 -49.7% 4174722 proc-vmstat.nr_inactive_anon
456.00 +23.2% 562.00 proc-vmstat.nr_inactive_file
2183921 ± 3% -49.7% 1097757 proc-vmstat.nr_mapped
5274 ± 3% -44.3% 2939 proc-vmstat.nr_page_table_pages
8309962 -49.7% 4179304 proc-vmstat.nr_shmem
36356 -26.5% 26731 proc-vmstat.nr_slab_reclaimable
266.67 +89.4% 505.00 proc-vmstat.nr_written
64056 +1.9% 65273 proc-vmstat.nr_zone_active_anon
8306311 -49.7% 4174722 proc-vmstat.nr_zone_inactive_anon
456.00 +23.2% 562.00 proc-vmstat.nr_zone_inactive_file
6493 ± 34% +198.8% 19402 ± 43% proc-vmstat.numa_hint_faults
2898 ± 20% +285.7% 11176 ± 58% proc-vmstat.numa_hint_faults_local
5.297e+08 +1.8% 5.39e+08 proc-vmstat.numa_hit
993.33 ± 14% +118.5% 2170 ± 35% proc-vmstat.numa_huge_pte_updates
5.296e+08 +1.8% 5.39e+08 proc-vmstat.numa_local
539384 ± 14% +113.4% 1150839 ± 34% proc-vmstat.numa_pte_updates
5669 ± 2% +20.9% 6856 ± 5% proc-vmstat.pgactivate
5.308e+08 +1.8% 5.401e+08 proc-vmstat.pgalloc_normal
5.289e+08 +1.8% 5.385e+08 proc-vmstat.pgfault
5.306e+08 +1.6% 5.393e+08 proc-vmstat.pgfree
50.92 -2.1 48.86 ± 3% perf-profile.calltrace.cycles-pp.page_fault.do_access
13.22 -1.7 11.55 ± 3% perf-profile.calltrace.cycles-pp.do_rw_once
1.61 ± 12% -1.3 0.31 ±102% perf-profile.calltrace.cycles-pp.evict.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.61 ± 12% -1.3 0.31 ±102% perf-profile.calltrace.cycles-pp.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.41 -1.1 3.29 ± 11% perf-profile.calltrace.cycles-pp.shmem_undo_range.shmem_truncate_range.shmem_evict_inode.evict.do_unlinkat
4.42 -1.1 3.30 ± 11% perf-profile.calltrace.cycles-pp.shmem_evict_inode.evict.do_unlinkat.do_syscall_64.entry_SYSCALL_64_after_hwframe
4.42 -1.1 3.30 ± 11% perf-profile.calltrace.cycles-pp.shmem_truncate_range.shmem_evict_inode.evict.do_unlinkat.do_syscall_64
1.69 ± 12% -1.0 0.70 ± 14% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
1.69 ± 12% -1.0 0.70 ± 14% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.14 -0.7 1.40 ± 8% perf-profile.calltrace.cycles-pp.__pagevec_release.shmem_undo_range.shmem_truncate_range.shmem_evict_inode.evict
2.12 ± 2% -0.7 1.39 ± 8% perf-profile.calltrace.cycles-pp.release_pages.__pagevec_release.shmem_undo_range.shmem_truncate_range.shmem_evict_inode
7.59 -0.7 6.92 ± 3% perf-profile.calltrace.cycles-pp.filemap_map_pages.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
1.43 -0.5 0.91 ± 7% perf-profile.calltrace.cycles-pp.free_unref_page_list.release_pages.__pagevec_release.shmem_undo_range.shmem_truncate_range
1.72 ± 2% -0.4 1.33 ± 2% perf-profile.calltrace.cycles-pp.shmem_add_to_page_cache.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault
3.27 ± 2% -0.4 2.92 ± 3% perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.__lru_cache_add.shmem_getpage_gfp.shmem_fault.__do_fault
0.76 ± 3% -0.3 0.42 ± 57% perf-profile.calltrace.cycles-pp.mem_cgroup_commit_charge.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault
3.55 ± 3% -0.3 3.23 ± 3% perf-profile.calltrace.cycles-pp.__lru_cache_add.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault
0.57 ± 4% -0.3 0.25 ±100% perf-profile.calltrace.cycles-pp.xas_find.filemap_map_pages.__handle_mm_fault.handle_mm_fault.__do_page_fault
1.59 ± 5% -0.3 1.32 ± 2% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.shmem_getpage_gfp
1.65 ± 5% -0.3 1.39 ± 2% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.shmem_getpage_gfp.shmem_fault
0.71 -0.1 0.65 ± 3% perf-profile.calltrace.cycles-pp.security_vm_enough_memory_mm.shmem_alloc_and_acct_page.shmem_getpage_gfp.shmem_fault.__do_fault
0.58 ± 4% +0.0 0.61 ± 4% perf-profile.calltrace.cycles-pp.__list_del_entry_valid.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.shmem_alloc_page
0.72 ± 7% +0.1 0.79 ± 7% perf-profile.calltrace.cycles-pp.delete_from_page_cache.truncate_inode_page.shmem_undo_range.shmem_truncate_range.shmem_evict_inode
8.50 ± 5% +1.4 9.90 ± 14% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
6.53 ± 12% +1.7 8.19 ± 18% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
1.25e+10 -1.4% 1.233e+10 perf-stat.i.branch-instructions
40278349 -7.9% 37105012 perf-stat.i.cache-misses
2.002e+08 ± 3% -9.7% 1.807e+08 ± 3% perf-stat.i.cache-references
3242 +37.4% 4454 perf-stat.i.context-switches
1.09 ± 3% +11.6% 1.21 perf-stat.i.cpi
5.886e+10 +7.8% 6.346e+10 perf-stat.i.cpu-cycles
82.67 +73.7% 143.61 ± 2% perf-stat.i.cpu-migrations
925.34 +34.2% 1241 perf-stat.i.cycles-between-cache-misses
0.03 ± 10% +0.0 0.03 ± 4% perf-stat.i.dTLB-store-miss-rate%
2048736 -2.3% 2002293 perf-stat.i.dTLB-store-misses
2672712 ± 3% -17.4% 2207575 ± 2% perf-stat.i.iTLB-load-misses
0.95 ± 4% -10.5% 0.85 perf-stat.i.ipc
1962276 -2.7% 1909124 perf-stat.i.minor-faults
2409945 -4.8% 2294023 ± 3% perf-stat.i.node-load-misses
34.06 ± 2% -5.6 28.42 ± 4% perf-stat.i.node-store-miss-rate%
1962504 -2.7% 1909687 perf-stat.i.page-faults
4.49 ± 3% -9.2% 4.07 ± 3% perf-stat.overall.MPKI
1.32 +8.4% 1.43 perf-stat.overall.cpi
1466 +16.9% 1714 perf-stat.overall.cycles-between-cache-misses
77.31 ± 2% -5.1 72.25 ± 2% perf-stat.overall.iTLB-load-miss-rate%
16699 ± 4% +20.2% 20073 ± 2% perf-stat.overall.instructions-per-iTLB-miss
0.76 -7.8% 0.70 perf-stat.overall.ipc
5060 +2.4% 5183 perf-stat.overall.path-length
1.264e+10 -2.0% 1.239e+10 perf-stat.ps.branch-instructions
40566190 -8.4% 37177450 perf-stat.ps.cache-misses
2.023e+08 ± 3% -10.3% 1.815e+08 ± 3% perf-stat.ps.cache-references
3241 +37.3% 4450 perf-stat.ps.context-switches
5.951e+10 +7.1% 6.373e+10 perf-stat.ps.cpu-cycles
83.53 +73.6% 145.03 ± 2% perf-stat.ps.cpu-migrations
1.18e+10 -1.5% 1.162e+10 perf-stat.ps.dTLB-loads
2073937 -2.9% 2014135 perf-stat.ps.dTLB-store-misses
2704797 ± 3% -17.9% 2220519 ± 2% perf-stat.ps.iTLB-load-misses
1987508 -3.3% 1921858 perf-stat.ps.minor-faults
2405765 -4.9% 2287716 ± 3% perf-stat.ps.node-load-misses
1987508 -3.3% 1921858 perf-stat.ps.page-faults
1.2e+13 +4.0% 1.248e+13 perf-stat.total.instructions
380.00 ± 13% -48.5% 195.75 ± 30% interrupts.70:PCI-MSI.70260738-edge.eth3-TxRx-1
207675 +7.7% 223582 ± 2% interrupts.CAL:Function_call_interrupts
1712 ± 4% +28.6% 2202 ± 7% interrupts.CPU1.NMI:Non-maskable_interrupts
1712 ± 4% +28.6% 2202 ± 7% interrupts.CPU1.PMI:Performance_monitoring_interrupts
1794 ± 8% +25.2% 2246 ± 6% interrupts.CPU10.NMI:Non-maskable_interrupts
1794 ± 8% +25.2% 2246 ± 6% interrupts.CPU10.PMI:Performance_monitoring_interrupts
1795 ± 4% +19.1% 2137 ± 10% interrupts.CPU11.NMI:Non-maskable_interrupts
1795 ± 4% +19.1% 2137 ± 10% interrupts.CPU11.PMI:Performance_monitoring_interrupts
1710 ± 4% +46.6% 2507 ± 17% interrupts.CPU13.NMI:Non-maskable_interrupts
1710 ± 4% +46.6% 2507 ± 17% interrupts.CPU13.PMI:Performance_monitoring_interrupts
1701 ± 3% +28.3% 2182 ± 8% interrupts.CPU14.NMI:Non-maskable_interrupts
1701 ± 3% +28.3% 2182 ± 8% interrupts.CPU14.PMI:Performance_monitoring_interrupts
1701 ± 5% +38.3% 2352 ± 15% interrupts.CPU15.NMI:Non-maskable_interrupts
1701 ± 5% +38.3% 2352 ± 15% interrupts.CPU15.PMI:Performance_monitoring_interrupts
1708 ± 3% +27.0% 2168 ± 5% interrupts.CPU16.NMI:Non-maskable_interrupts
1708 ± 3% +27.0% 2168 ± 5% interrupts.CPU16.PMI:Performance_monitoring_interrupts
212.00 ± 47% +264.3% 772.25 ± 64% interrupts.CPU18.RES:Rescheduling_interrupts
69.00 ± 60% +100.7% 138.50 ± 36% interrupts.CPU18.TLB:TLB_shootdowns
1826 ± 5% +81.2% 3309 ± 31% interrupts.CPU19.NMI:Non-maskable_interrupts
1826 ± 5% +81.2% 3309 ± 31% interrupts.CPU19.PMI:Performance_monitoring_interrupts
2895 ± 2% +8.0% 3127 ± 4% interrupts.CPU2.CAL:Function_call_interrupts
1760 ± 9% +33.9% 2356 ± 18% interrupts.CPU20.NMI:Non-maskable_interrupts
1760 ± 9% +33.9% 2356 ± 18% interrupts.CPU20.PMI:Performance_monitoring_interrupts
299.67 ± 37% +155.3% 765.00 ± 38% interrupts.CPU20.RES:Rescheduling_interrupts
1741 ± 9% +62.9% 2836 ± 52% interrupts.CPU21.NMI:Non-maskable_interrupts
1741 ± 9% +62.9% 2836 ± 52% interrupts.CPU21.PMI:Performance_monitoring_interrupts
184.67 ± 48% +211.2% 574.75 ± 34% interrupts.CPU21.RES:Rescheduling_interrupts
2911 +7.7% 3135 ± 3% interrupts.CPU22.CAL:Function_call_interrupts
1749 ± 9% +23.3% 2157 ± 16% interrupts.CPU22.NMI:Non-maskable_interrupts
1749 ± 9% +23.3% 2157 ± 16% interrupts.CPU22.PMI:Performance_monitoring_interrupts
210.67 ± 27% +285.1% 811.25 ± 58% interrupts.CPU22.RES:Rescheduling_interrupts
57.33 ± 46% +209.2% 177.25 ± 24% interrupts.CPU22.TLB:TLB_shootdowns
1746 ± 10% +19.3% 2083 ± 12% interrupts.CPU23.NMI:Non-maskable_interrupts
1746 ± 10% +19.3% 2083 ± 12% interrupts.CPU23.PMI:Performance_monitoring_interrupts
285.67 ± 28% +120.8% 630.75 ± 31% interrupts.CPU23.RES:Rescheduling_interrupts
2900 +8.2% 3137 ± 2% interrupts.CPU24.CAL:Function_call_interrupts
1738 ± 9% +29.1% 2245 ± 13% interrupts.CPU24.NMI:Non-maskable_interrupts
1738 ± 9% +29.1% 2245 ± 13% interrupts.CPU24.PMI:Performance_monitoring_interrupts
371.00 ± 14% +90.9% 708.25 ± 31% interrupts.CPU24.RES:Rescheduling_interrupts
128.33 ± 30% +123.4% 286.75 ± 78% interrupts.CPU24.TLB:TLB_shootdowns
2866 +9.6% 3140 ± 2% interrupts.CPU25.CAL:Function_call_interrupts
1746 ± 10% +65.6% 2891 ± 27% interrupts.CPU25.NMI:Non-maskable_interrupts
1746 ± 10% +65.6% 2891 ± 27% interrupts.CPU25.PMI:Performance_monitoring_interrupts
237.00 ± 13% +204.1% 720.75 ± 56% interrupts.CPU25.RES:Rescheduling_interrupts
31.67 ± 15% +435.3% 169.50 ± 49% interrupts.CPU25.TLB:TLB_shootdowns
275.67 ± 68% +140.1% 662.00 ± 12% interrupts.CPU26.RES:Rescheduling_interrupts
2797 ± 5% +13.0% 3159 ± 3% interrupts.CPU27.CAL:Function_call_interrupts
224.00 ± 85% +134.6% 525.50 ± 16% interrupts.CPU27.RES:Rescheduling_interrupts
60.33 ± 46% +222.4% 194.50 ± 56% interrupts.CPU27.TLB:TLB_shootdowns
2916 +9.2% 3185 ± 4% interrupts.CPU28.CAL:Function_call_interrupts
1773 ± 11% +17.8% 2089 ± 11% interrupts.CPU29.NMI:Non-maskable_interrupts
1773 ± 11% +17.8% 2089 ± 11% interrupts.CPU29.PMI:Performance_monitoring_interrupts
181.33 ± 44% +564.0% 1204 ± 60% interrupts.CPU29.RES:Rescheduling_interrupts
1729 ± 5% +26.1% 2179 ± 7% interrupts.CPU3.NMI:Non-maskable_interrupts
1729 ± 5% +26.1% 2179 ± 7% interrupts.CPU3.PMI:Performance_monitoring_interrupts
1748 ± 10% +20.6% 2109 ± 10% interrupts.CPU30.NMI:Non-maskable_interrupts
1748 ± 10% +20.6% 2109 ± 10% interrupts.CPU30.PMI:Performance_monitoring_interrupts
2945 +6.2% 3127 ± 4% interrupts.CPU31.CAL:Function_call_interrupts
1745 ± 10% +19.4% 2084 ± 10% interrupts.CPU31.NMI:Non-maskable_interrupts
1745 ± 10% +19.4% 2084 ± 10% interrupts.CPU31.PMI:Performance_monitoring_interrupts
244.67 ± 30% +199.4% 732.50 ± 47% interrupts.CPU31.RES:Rescheduling_interrupts
248.00 ± 39% +129.0% 568.00 ± 40% interrupts.CPU33.RES:Rescheduling_interrupts
100.33 ± 16% +106.8% 207.50 ± 48% interrupts.CPU33.TLB:TLB_shootdowns
2810 ± 3% +12.2% 3154 ± 3% interrupts.CPU34.CAL:Function_call_interrupts
1762 ± 10% +29.3% 2279 ± 14% interrupts.CPU34.NMI:Non-maskable_interrupts
1762 ± 10% +29.3% 2279 ± 14% interrupts.CPU34.PMI:Performance_monitoring_interrupts
249.00 ± 53% +271.4% 924.75 ± 29% interrupts.CPU34.RES:Rescheduling_interrupts
2901 +8.2% 3138 ± 4% interrupts.CPU35.CAL:Function_call_interrupts
1720 ± 4% +25.7% 2163 ± 7% interrupts.CPU39.NMI:Non-maskable_interrupts
1720 ± 4% +25.7% 2163 ± 7% interrupts.CPU39.PMI:Performance_monitoring_interrupts
95.67 ± 88% +110.1% 201.00 ± 86% interrupts.CPU39.TLB:TLB_shootdowns
2836 ± 6% +10.4% 3130 interrupts.CPU4.CAL:Function_call_interrupts
1854 ± 11% +18.4% 2195 ± 7% interrupts.CPU4.NMI:Non-maskable_interrupts
1854 ± 11% +18.4% 2195 ± 7% interrupts.CPU4.PMI:Performance_monitoring_interrupts
138.00 ± 47% +76.1% 243.00 ± 38% interrupts.CPU40.RES:Rescheduling_interrupts
44.00 ± 42% +277.3% 166.00 ± 64% interrupts.CPU40.TLB:TLB_shootdowns
1707 ± 4% +28.1% 2186 ± 8% interrupts.CPU41.NMI:Non-maskable_interrupts
1707 ± 4% +28.1% 2186 ± 8% interrupts.CPU41.PMI:Performance_monitoring_interrupts
38.00 ± 52% +186.8% 109.00 ± 69% interrupts.CPU41.TLB:TLB_shootdowns
1690 ± 4% +30.0% 2196 ± 8% interrupts.CPU42.NMI:Non-maskable_interrupts
1690 ± 4% +30.0% 2196 ± 8% interrupts.CPU42.PMI:Performance_monitoring_interrupts
71.67 ± 84% +278.1% 271.00 ±109% interrupts.CPU42.TLB:TLB_shootdowns
1701 ± 3% +26.9% 2159 ± 8% interrupts.CPU43.NMI:Non-maskable_interrupts
1701 ± 3% +26.9% 2159 ± 8% interrupts.CPU43.PMI:Performance_monitoring_interrupts
66.33 ± 65% +106.2% 136.75 ± 61% interrupts.CPU43.TLB:TLB_shootdowns
1716 ± 3% +26.9% 2178 ± 8% interrupts.CPU44.NMI:Non-maskable_interrupts
1716 ± 3% +26.9% 2178 ± 8% interrupts.CPU44.PMI:Performance_monitoring_interrupts
57.33 ± 97% +76.2% 101.00 ± 58% interrupts.CPU44.TLB:TLB_shootdowns
1759 ± 2% +26.4% 2224 ± 6% interrupts.CPU45.NMI:Non-maskable_interrupts
1759 ± 2% +26.4% 2224 ± 6% interrupts.CPU45.PMI:Performance_monitoring_interrupts
1877 ± 13% +18.3% 2222 ± 8% interrupts.CPU46.NMI:Non-maskable_interrupts
1877 ± 13% +18.3% 2222 ± 8% interrupts.CPU46.PMI:Performance_monitoring_interrupts
220.33 ± 71% +86.3% 410.50 ± 35% interrupts.CPU47.RES:Rescheduling_interrupts
49.33 ± 79% +96.1% 96.75 ± 52% interrupts.CPU47.TLB:TLB_shootdowns
34.67 ± 29% +149.5% 86.50 ± 50% interrupts.CPU48.TLB:TLB_shootdowns
1705 ± 4% +66.4% 2838 ± 36% interrupts.CPU49.NMI:Non-maskable_interrupts
1705 ± 4% +66.4% 2838 ± 36% interrupts.CPU49.PMI:Performance_monitoring_interrupts
331.67 ± 31% -40.9% 196.00 ± 25% interrupts.CPU49.RES:Rescheduling_interrupts
41.67 ± 75% +128.0% 95.00 ± 54% interrupts.CPU49.TLB:TLB_shootdowns
1727 ± 3% +25.5% 2167 ± 8% interrupts.CPU5.NMI:Non-maskable_interrupts
1727 ± 3% +25.5% 2167 ± 8% interrupts.CPU5.PMI:Performance_monitoring_interrupts
1705 ± 4% +28.7% 2194 ± 11% interrupts.CPU50.NMI:Non-maskable_interrupts
1705 ± 4% +28.7% 2194 ± 11% interrupts.CPU50.PMI:Performance_monitoring_interrupts
37.67 ± 40% +145.6% 92.50 ± 28% interrupts.CPU50.TLB:TLB_shootdowns
1705 ± 5% +29.7% 2211 ± 8% interrupts.CPU51.NMI:Non-maskable_interrupts
1705 ± 5% +29.7% 2211 ± 8% interrupts.CPU51.PMI:Performance_monitoring_interrupts
223.00 ± 13% +51.0% 336.75 ± 23% interrupts.CPU51.RES:Rescheduling_interrupts
1711 ± 4% +28.0% 2191 ± 8% interrupts.CPU52.NMI:Non-maskable_interrupts
1711 ± 4% +28.0% 2191 ± 8% interrupts.CPU52.PMI:Performance_monitoring_interrupts
76.33 ± 79% +109.3% 159.75 ± 43% interrupts.CPU53.TLB:TLB_shootdowns
262.67 ± 9% +139.8% 630.00 ± 19% interrupts.CPU54.RES:Rescheduling_interrupts
92.33 ± 52% +32.4% 122.25 ± 44% interrupts.CPU54.TLB:TLB_shootdowns
2904 ± 2% +7.2% 3113 ± 5% interrupts.CPU55.CAL:Function_call_interrupts
1770 ± 8% +62.2% 2871 ± 29% interrupts.CPU55.NMI:Non-maskable_interrupts
1770 ± 8% +62.2% 2871 ± 29% interrupts.CPU55.PMI:Performance_monitoring_interrupts
1761 ± 10% +59.5% 2808 ± 42% interrupts.CPU56.NMI:Non-maskable_interrupts
1761 ± 10% +59.5% 2808 ± 42% interrupts.CPU56.PMI:Performance_monitoring_interrupts
2645 ± 11% +14.9% 3038 ± 5% interrupts.CPU57.CAL:Function_call_interrupts
257.67 ± 40% +185.3% 735.25 ± 19% interrupts.CPU58.RES:Rescheduling_interrupts
46.67 ± 38% +341.4% 206.00 ± 82% interrupts.CPU58.TLB:TLB_shootdowns
2927 +8.9% 3187 ± 4% interrupts.CPU59.CAL:Function_call_interrupts
1767 ± 9% +17.1% 2069 ± 10% interrupts.CPU59.NMI:Non-maskable_interrupts
1767 ± 9% +17.1% 2069 ± 10% interrupts.CPU59.PMI:Performance_monitoring_interrupts
146.00 ± 43% +166.6% 389.25 ± 46% interrupts.CPU59.RES:Rescheduling_interrupts
1742 ± 9% +43.9% 2506 ± 28% interrupts.CPU60.NMI:Non-maskable_interrupts
1742 ± 9% +43.9% 2506 ± 28% interrupts.CPU60.PMI:Performance_monitoring_interrupts
2811 ± 3% +12.0% 3150 ± 3% interrupts.CPU61.CAL:Function_call_interrupts
371.67 ± 20% +43.1% 532.00 ± 28% interrupts.CPU61.RES:Rescheduling_interrupts
1781 ± 10% +28.8% 2295 ± 18% interrupts.CPU63.NMI:Non-maskable_interrupts
1781 ± 10% +28.8% 2295 ± 18% interrupts.CPU63.PMI:Performance_monitoring_interrupts
186.33 ± 44% +176.5% 515.25 ± 42% interrupts.CPU63.RES:Rescheduling_interrupts
31.67 ± 22% +500.0% 190.00 ± 58% interrupts.CPU63.TLB:TLB_shootdowns
375.33 ± 22% +79.6% 674.00 ± 40% interrupts.CPU64.RES:Rescheduling_interrupts
1785 ± 12% +17.4% 2096 ± 10% interrupts.CPU65.NMI:Non-maskable_interrupts
1785 ± 12% +17.4% 2096 ± 10% interrupts.CPU65.PMI:Performance_monitoring_interrupts
1745 ± 11% +19.7% 2089 ± 10% interrupts.CPU66.NMI:Non-maskable_interrupts
1745 ± 11% +19.7% 2089 ± 10% interrupts.CPU66.PMI:Performance_monitoring_interrupts
1749 ± 9% +19.5% 2090 ± 10% interrupts.CPU67.NMI:Non-maskable_interrupts
1749 ± 9% +19.5% 2090 ± 10% interrupts.CPU67.PMI:Performance_monitoring_interrupts
1811 ± 8% +17.7% 2132 ± 12% interrupts.CPU68.NMI:Non-maskable_interrupts
1811 ± 8% +17.7% 2132 ± 12% interrupts.CPU68.PMI:Performance_monitoring_interrupts
179.33 ± 62% +133.9% 419.50 ± 35% interrupts.CPU68.RES:Rescheduling_interrupts
380.00 ± 13% -48.5% 195.75 ± 30% interrupts.CPU69.70:PCI-MSI.70260738-edge.eth3-TxRx-1
2682 ± 4% +18.7% 3184 ± 5% interrupts.CPU69.CAL:Function_call_interrupts
1727 ± 4% +26.5% 2184 ± 8% interrupts.CPU7.NMI:Non-maskable_interrupts
1727 ± 4% +26.5% 2184 ± 8% interrupts.CPU7.PMI:Performance_monitoring_interrupts
2847 ± 4% +11.1% 3162 ± 5% interrupts.CPU70.CAL:Function_call_interrupts
1777 ± 9% +22.0% 2168 ± 11% interrupts.CPU70.NMI:Non-maskable_interrupts
1777 ± 9% +22.0% 2168 ± 11% interrupts.CPU70.PMI:Performance_monitoring_interrupts
241.33 ± 32% +137.4% 573.00 ± 25% interrupts.CPU70.RES:Rescheduling_interrupts
1780 ± 10% +23.1% 2191 ± 11% interrupts.CPU71.NMI:Non-maskable_interrupts
1780 ± 10% +23.1% 2191 ± 11% interrupts.CPU71.PMI:Performance_monitoring_interrupts
1704 ± 4% +25.5% 2139 ± 8% interrupts.CPU8.NMI:Non-maskable_interrupts
1704 ± 4% +25.5% 2139 ± 8% interrupts.CPU8.PMI:Performance_monitoring_interrupts
1774 +23.9% 2198 ± 8% interrupts.CPU9.NMI:Non-maskable_interrupts
1774 +23.9% 2198 ± 8% interrupts.CPU9.PMI:Performance_monitoring_interrupts
135003 ± 4% +21.0% 163358 ± 7% interrupts.NMI:Non-maskable_interrupts
135003 ± 4% +21.0% 163358 ± 7% interrupts.PMI:Performance_monitoring_interrupts
24412 ± 8% +56.4% 38169 ± 13% interrupts.RES:Rescheduling_interrupts
6115 ± 16% +92.6% 11777 ± 13% interrupts.TLB:TLB_shootdowns
***************************************************************************************************
lkp-bdw-ep2: 88 threads Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz with 128G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase:
gcc-7/performance/x86_64-rhel-7.6/debian-x86_64-2018-04-03.cgz/300s/256G/lkp-bdw-ep2/lru-shm-rand/vm-scalability
commit:
f568cf93ca ("vfs: Convert smackfs to use the new mount API")
27eb9d500d ("vfs: Convert ramfs, shmem, tmpfs, devtmpfs, rootfs to use the new mount API")
f568cf93ca0923a0 27eb9d500d71d93e2b2f55c226b
---------------- ---------------------------
%stddev %change %stddev
\ | \
0.03 -51.6% 0.01 vm-scalability.free_time
54446 +17.4% 63942 vm-scalability.median
0.00 ± 29% +206.0% 0.01 ± 12% vm-scalability.median_stddev
4788027 +17.3% 5614974 vm-scalability.throughput
193.08 -8.8% 176.00 vm-scalability.time.elapsed_time
193.08 -8.8% 176.00 vm-scalability.time.elapsed_time.max
750512 -49.9% 375966 vm-scalability.time.maximum_resident_set_size
5671 -6.6% 5298 vm-scalability.time.percent_of_cpu_this_job_got
881.36 -6.1% 827.85 vm-scalability.time.system_time
10068 -15.6% 8497 vm-scalability.time.user_time
10321 +97.7% 20407 vm-scalability.time.voluntary_context_switches
35.56 ± 2% +3.8 39.39 mpstat.cpu.all.idle%
34.81 ± 10% -15.2% 29.51 boot-time.boot
28.53 ± 13% -18.9% 23.13 boot-time.dhcp
366.97 +1.7% 373.37 pmeter.Average_Active_Power
13197 +15.2% 15209 pmeter.performance_per_watt
6.092e+08 ±138% +537.9% 3.886e+09 ± 31% cpuidle.C3.time
2145092 ±138% +325.2% 9121998 ± 20% cpuidle.C3.usage
5.276e+09 ± 14% -59.5% 2.135e+09 ± 60% cpuidle.C6.time
36.00 ± 2% +9.7% 39.50 vmstat.cpu.id
58.00 -6.9% 54.00 vmstat.cpu.us
55228783 -49.7% 27768287 vmstat.memory.cache
75764283 +36.3% 1.033e+08 vmstat.memory.free
1972 +43.0% 2820 ± 4% vmstat.system.cs
1823 -5.6% 1720 turbostat.Avg_MHz
2144618 ±138% +325.3% 9121862 ± 20% turbostat.C3
3.53 ±138% +21.4 24.98 ± 32% turbostat.C3%
30.76 ± 15% -17.2 13.59 ± 60% turbostat.C6%
12.28 ± 30% +59.1% 19.54 ± 6% turbostat.CPU%c1
0.77 ±137% +1422.1% 11.72 ± 44% turbostat.CPU%c3
21.38 ± 22% -67.9% 6.86 ± 70% turbostat.CPU%c6
65.00 ± 3% +10.8% 72.00 turbostat.CoreTmp
10.91 ± 16% -24.4% 8.24 ± 2% turbostat.Pkg%pc2
0.00 ±141% +725.0% 0.03 ± 53% turbostat.Pkg%pc3
0.12 ± 61% -89.9% 0.01 ±103% turbostat.Pkg%pc6
178.48 +2.8% 183.53 turbostat.PkgWatt
22.65 -3.2% 21.92 turbostat.RAMWatt
55238867 -50.0% 27601805 meminfo.Cached
54257533 -51.0% 26559528 meminfo.Committed_AS
54003945 -51.2% 26371074 meminfo.Inactive
54002477 -51.2% 26369483 meminfo.Inactive(anon)
202625 -33.0% 135693 meminfo.KReclaimable
39890692 -53.3% 18642069 meminfo.Mapped
75034355 +36.9% 1.027e+08 meminfo.MemAvailable
75551000 +36.7% 1.033e+08 meminfo.MemFree
56341522 -49.2% 28604497 meminfo.Memused
88732 -48.9% 45342 meminfo.PageTables
202625 -33.0% 135693 meminfo.SReclaimable
54016672 -51.2% 26379748 meminfo.Shmem
336103 -18.9% 272502 meminfo.Slab
360670 -43.5% 203807 meminfo.max_used_kB
15033 ± 3% +12.6% 16924 ± 4% slabinfo.anon_vma.active_objs
15033 ± 3% +12.6% 16924 ± 4% slabinfo.anon_vma.num_objs
29229 ± 2% +8.9% 31821 ± 3% slabinfo.anon_vma_chain.active_objs
29229 ± 2% +8.9% 31821 ± 3% slabinfo.anon_vma_chain.num_objs
12000 ± 4% +9.4% 13124 ± 3% slabinfo.cred_jar.active_objs
12000 ± 4% +9.4% 13124 ± 3% slabinfo.cred_jar.num_objs
662.00 ± 7% -16.9% 550.00 ± 5% slabinfo.kmem_cache_node.active_objs
703.00 ± 7% -15.9% 591.50 ± 4% slabinfo.kmem_cache_node.num_objs
238877 -47.9% 124367 slabinfo.radix_tree_node.active_objs
4288 -47.7% 2244 slabinfo.radix_tree_node.active_slabs
240200 -47.7% 125696 slabinfo.radix_tree_node.num_objs
4288 -47.7% 2244 slabinfo.radix_tree_node.num_slabs
648.67 ± 4% +36.8% 887.50 ± 4% slabinfo.skbuff_ext_cache.active_objs
648.67 ± 4% +36.8% 887.50 ± 4% slabinfo.skbuff_ext_cache.num_objs
2559 ± 3% +20.1% 3074 ± 11% slabinfo.sock_inode_cache.active_objs
2559 ± 3% +20.1% 3074 ± 11% slabinfo.sock_inode_cache.num_objs
6943392 ± 4% -50.9% 3410526 ± 2% numa-vmstat.node0.nr_file_pages
9358435 ± 2% +37.9% 12908555 numa-vmstat.node0.nr_free_pages
6783608 ± 4% -52.0% 3257220 ± 2% numa-vmstat.node0.nr_inactive_anon
4976414 ± 3% -53.3% 2324942 numa-vmstat.node0.nr_mapped
11297 ± 2% -48.6% 5810 ± 3% numa-vmstat.node0.nr_page_table_pages
6786264 ± 4% -52.0% 3258936 ± 2% numa-vmstat.node0.nr_shmem
27773 -33.2% 18562 ± 4% numa-vmstat.node0.nr_slab_reclaimable
6783598 ± 4% -52.0% 3257206 ± 2% numa-vmstat.node0.nr_zone_inactive_anon
6798656 ± 5% -48.7% 3489885 ± 3% numa-vmstat.node1.nr_file_pages
9597839 ± 3% +34.6% 12913978 numa-vmstat.node1.nr_free_pages
6649054 ± 5% -49.8% 3334855 ± 3% numa-vmstat.node1.nr_inactive_anon
18.00 ± 68% +263.9% 65.50 ± 50% numa-vmstat.node1.nr_inactive_file
4914540 -53.1% 2306813 numa-vmstat.node1.nr_mapped
10635 -49.2% 5405 ± 5% numa-vmstat.node1.nr_page_table_pages
6649976 ± 5% -49.8% 3335702 ± 3% numa-vmstat.node1.nr_shmem
22650 ± 2% -32.3% 15336 ± 6% numa-vmstat.node1.nr_slab_reclaimable
6649042 ± 5% -49.8% 3334845 ± 3% numa-vmstat.node1.nr_zone_inactive_anon
18.00 ± 68% +263.9% 65.50 ± 50% numa-vmstat.node1.nr_zone_inactive_file
27806193 ± 4% -51.0% 13620225 ± 2% numa-meminfo.node0.FilePages
27168468 ± 4% -52.1% 13008289 ± 3% numa-meminfo.node0.Inactive
27167075 ± 4% -52.1% 13006963 ± 3% numa-meminfo.node0.Inactive(anon)
111263 -33.3% 74266 ± 4% numa-meminfo.node0.KReclaimable
19943539 ± 3% -53.1% 9349954 numa-meminfo.node0.Mapped
37400591 ± 3% +38.1% 51655755 numa-meminfo.node0.MemFree
28466692 ± 4% -50.1% 14211527 ± 2% numa-meminfo.node0.MemUsed
45430 ± 2% -48.3% 23467 ± 4% numa-meminfo.node0.PageTables
111263 -33.3% 74266 ± 4% numa-meminfo.node0.SReclaimable
27177672 ± 4% -52.1% 13013861 ± 3% numa-meminfo.node0.Shmem
185064 -18.1% 151580 ± 5% numa-meminfo.node0.Slab
27237549 ± 4% -48.7% 13965128 ± 2% numa-meminfo.node1.FilePages
26639192 ± 4% -49.9% 13345224 ± 3% numa-meminfo.node1.Inactive
26639119 ± 4% -49.9% 13344959 ± 3% numa-meminfo.node1.Inactive(anon)
90704 -32.4% 61357 ± 6% numa-meminfo.node1.KReclaimable
19694581 -53.4% 9174424 numa-meminfo.node1.Mapped
38348099 ± 3% +34.7% 51650284 numa-meminfo.node1.MemFree
27677139 ± 4% -48.1% 14374955 ± 2% numa-meminfo.node1.MemUsed
42740 -49.4% 21608 ± 4% numa-meminfo.node1.PageTables
90704 -32.4% 61357 ± 6% numa-meminfo.node1.SReclaimable
26642833 ± 4% -49.9% 13348390 ± 3% numa-meminfo.node1.Shmem
150389 -19.7% 120837 ± 8% numa-meminfo.node1.Slab
65408 -2.4% 63841 proc-vmstat.nr_active_anon
122.33 +56.1% 191.00 proc-vmstat.nr_dirtied
1872101 +36.9% 2563115 proc-vmstat.nr_dirty_background_threshold
3748812 +36.9% 5132740 proc-vmstat.nr_dirty_threshold
13795102 -50.0% 6898747 proc-vmstat.nr_file_pages
18902921 +36.6% 25824194 proc-vmstat.nr_free_pages
13485730 -51.1% 6590402 proc-vmstat.nr_inactive_anon
366.00 +8.6% 397.50 proc-vmstat.nr_inactive_file
9967759 -53.5% 4637655 proc-vmstat.nr_mapped
22073 -49.2% 11216 proc-vmstat.nr_page_table_pages
13489295 -51.1% 6592974 proc-vmstat.nr_shmem
50522 -32.9% 33902 proc-vmstat.nr_slab_reclaimable
33370 +2.5% 34199 proc-vmstat.nr_slab_unreclaimable
118.67 +43.9% 170.75 proc-vmstat.nr_written
65408 -2.4% 63841 proc-vmstat.nr_zone_active_anon
13485730 -51.1% 6590402 proc-vmstat.nr_zone_inactive_anon
366.00 +8.6% 397.50 proc-vmstat.nr_zone_inactive_file
2724 ± 25% +77.1% 4826 ± 28% proc-vmstat.numa_hint_faults
3308 ± 4% +168.1% 8869 ± 47% proc-vmstat.numa_pages_migrated
275310 ± 9% +65.5% 455701 ± 15% proc-vmstat.numa_pte_updates
11705 ± 4% -16.1% 9825 ± 6% proc-vmstat.pgactivate
3308 ± 4% +168.1% 8869 ± 47% proc-vmstat.pgmigrate_success
27.31 ± 13% +392.9% 134.60 ±119% sched_debug.cfs_rq:/.load_avg.avg
331.58 ± 11% +525.2% 2073 ±128% sched_debug.cfs_rq:/.load_avg.max
68.59 ± 17% +592.4% 474.93 ±130% sched_debug.cfs_rq:/.load_avg.stddev
5489162 -54.4% 2500483 ± 56% sched_debug.cfs_rq:/.min_vruntime.avg
5569380 -54.1% 2554132 ± 56% sched_debug.cfs_rq:/.min_vruntime.max
5339180 -55.4% 2381744 ± 56% sched_debug.cfs_rq:/.min_vruntime.min
0.13 ± 32% +40.7% 0.19 ± 7% sched_debug.cfs_rq:/.nr_running.stddev
6.73 ± 19% +167.6% 18.01 ± 53% sched_debug.cfs_rq:/.removed.load_avg.avg
40.59 ± 9% +108.1% 84.47 ± 30% sched_debug.cfs_rq:/.removed.load_avg.stddev
309.01 ± 19% +169.0% 831.37 ± 53% sched_debug.cfs_rq:/.removed.runnable_sum.avg
11735 +77.4% 20824 ± 24% sched_debug.cfs_rq:/.removed.runnable_sum.max
1863 ± 9% +109.2% 3898 ± 30% sched_debug.cfs_rq:/.removed.runnable_sum.stddev
2.70 ± 35% +192.4% 7.89 ± 48% sched_debug.cfs_rq:/.removed.util_avg.avg
104.17 ± 17% +118.0% 227.12 ± 25% sched_debug.cfs_rq:/.removed.util_avg.max
16.08 ± 25% +136.5% 38.02 ± 27% sched_debug.cfs_rq:/.removed.util_avg.stddev
10.49 ± 11% -32.0% 7.13 ± 10% sched_debug.cfs_rq:/.runnable_load_avg.avg
114.31 ± 11% +14.2% 130.50 ± 6% sched_debug.cfs_rq:/.util_avg.stddev
557.17 ± 21% +24.9% 695.94 ± 16% sched_debug.cfs_rq:/.util_est_enqueued.max
130909 ± 2% -38.9% 79924 ± 32% sched_debug.cpu.clock.avg
130918 ± 2% -38.9% 79933 ± 32% sched_debug.cpu.clock.max
130900 ± 2% -38.9% 79917 ± 32% sched_debug.cpu.clock.min
130909 ± 2% -38.9% 79924 ± 32% sched_debug.cpu.clock_task.avg
130918 ± 2% -38.9% 79933 ± 32% sched_debug.cpu.clock_task.max
130900 ± 2% -38.9% 79917 ± 32% sched_debug.cpu.clock_task.min
8.80 ± 4% -14.7% 7.50 ± 5% sched_debug.cpu.cpu_load[3].avg
8.95 ± 4% -13.9% 7.70 ± 2% sched_debug.cpu.cpu_load[4].avg
4083 ± 19% -35.7% 2625 ± 22% sched_debug.cpu.curr->pid.avg
2171 ± 43% -71.8% 612.12 ± 39% sched_debug.cpu.curr->pid.min
96790 -47.5% 50806 ± 50% sched_debug.cpu.nr_load_updates.avg
108785 ± 5% -48.3% 56293 ± 46% sched_debug.cpu.nr_load_updates.max
93045 -50.6% 46010 ± 52% sched_debug.cpu.nr_load_updates.min
0.22 ± 15% +22.6% 0.27 ± 8% sched_debug.cpu.nr_running.stddev
11.00 ± 26% +81.8% 20.00 ± 35% sched_debug.cpu.nr_uninterruptible.max
130899 ± 2% -38.9% 79917 ± 32% sched_debug.cpu_clk
126848 ± 3% -40.2% 75884 ± 34% sched_debug.ktime
131813 ± 3% -38.8% 80612 ± 32% sched_debug.sched_clk
20131 ± 7% +18.3% 23816 ± 12% softirqs.CPU14.RCU
89596 ± 10% -21.0% 70741 ± 9% softirqs.CPU14.TIMER
82675 ± 3% -12.7% 72166 ± 9% softirqs.CPU15.TIMER
82022 ± 3% -13.1% 71311 ± 7% softirqs.CPU20.TIMER
75561 ± 4% -13.5% 65383 ± 4% softirqs.CPU22.TIMER
73531 ± 5% -12.7% 64183 ± 4% softirqs.CPU23.TIMER
74796 ± 3% -13.8% 64445 ± 6% softirqs.CPU24.TIMER
73640 ± 5% -10.8% 65706 ± 3% softirqs.CPU26.TIMER
72580 ± 6% -13.9% 62506 ± 3% softirqs.CPU27.TIMER
73608 ± 5% -12.0% 64755 ± 3% softirqs.CPU28.TIMER
73680 ± 5% -10.1% 66265 ± 5% softirqs.CPU29.TIMER
73672 ± 4% -13.2% 63955 ± 3% softirqs.CPU30.TIMER
73132 ± 5% -12.3% 64110 ± 4% softirqs.CPU31.TIMER
73080 ± 5% -13.4% 63311 ± 4% softirqs.CPU33.TIMER
74422 ± 4% -10.9% 66288 softirqs.CPU34.TIMER
73272 ± 5% -12.7% 63991 ± 2% softirqs.CPU35.TIMER
75598 ± 4% -10.2% 67920 softirqs.CPU37.TIMER
75732 ± 3% -12.1% 66556 ± 2% softirqs.CPU38.TIMER
73562 ± 6% -14.0% 63239 ± 5% softirqs.CPU39.TIMER
74804 ± 3% -13.1% 64997 ± 2% softirqs.CPU40.TIMER
75926 -11.2% 67418 ± 2% softirqs.CPU41.TIMER
76357 -10.0% 68731 ± 3% softirqs.CPU42.TIMER
78177 -13.1% 67901 softirqs.CPU43.TIMER
20261 ± 5% +25.8% 25487 ± 22% softirqs.CPU49.RCU
21419 ± 3% +11.1% 23806 ± 6% softirqs.CPU58.RCU
83494 ± 2% -15.7% 70407 ± 8% softirqs.CPU58.TIMER
81880 ± 3% -13.4% 70890 ± 8% softirqs.CPU59.TIMER
16398 ± 7% +24.7% 20443 ± 14% softirqs.CPU63.RCU
81929 ± 3% -13.3% 71052 ± 9% softirqs.CPU64.TIMER
73604 ± 5% -13.4% 63720 ± 2% softirqs.CPU66.TIMER
74179 ± 4% -13.0% 64501 ± 5% softirqs.CPU68.TIMER
73907 ± 5% -11.8% 65152 ± 4% softirqs.CPU70.TIMER
72272 ± 6% -14.5% 61817 ± 5% softirqs.CPU71.TIMER
73600 ± 5% -12.4% 64453 ± 4% softirqs.CPU72.TIMER
73383 ± 4% -12.2% 64466 ± 4% softirqs.CPU74.TIMER
16610 ± 9% +22.2% 20291 ± 10% softirqs.CPU75.RCU
74127 ± 4% -12.7% 64724 ± 4% softirqs.CPU75.TIMER
72876 ± 5% -13.5% 63044 ± 3% softirqs.CPU77.TIMER
73741 ± 5% -9.8% 66541 softirqs.CPU78.TIMER
72336 ± 6% -10.7% 64573 softirqs.CPU79.TIMER
74984 ± 4% -9.4% 67927 softirqs.CPU81.TIMER
75643 ± 3% -11.6% 66858 ± 2% softirqs.CPU82.TIMER
74269 ± 4% -12.4% 65080 ± 3% softirqs.CPU84.TIMER
75571 -11.4% 66947 ± 2% softirqs.CPU85.TIMER
75810 -9.8% 68392 ± 2% softirqs.CPU86.TIMER
6674487 -10.7% 5963133 ± 3% softirqs.TIMER
48.25 ± 48% -48.1 0.15 ±173% perf-profile.calltrace.cycles-pp.apic_timer_interrupt
45.86 ± 48% -45.7 0.14 ±173% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt
36.81 ± 47% -36.8 0.00 perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
26.91 ± 48% -26.9 0.00 perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
19.21 ± 43% -19.2 0.00 perf-profile.calltrace.cycles-pp.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
17.43 ± 43% -17.4 0.00 perf-profile.calltrace.cycles-pp.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt
17.20 ± 43% -17.2 0.00 perf-profile.calltrace.cycles-pp.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt
11.95 ± 48% -12.0 0.00 perf-profile.calltrace.cycles-pp.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
9.48 ± 48% -9.5 0.00 perf-profile.calltrace.cycles-pp.task_tick_fair.scheduler_tick.update_process_times.tick_sched_handle.tick_sched_timer
8.62 ± 41% -8.1 0.54 ± 60% perf-profile.calltrace.cycles-pp.new_sync_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
8.17 ± 41% -7.6 0.54 ± 60% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write._fini
8.17 ± 41% -7.6 0.54 ± 60% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write._fini
8.17 ± 41% -7.6 0.54 ± 60% perf-profile.calltrace.cycles-pp.vprintk_emit.devkmsg_emit.devkmsg_write.new_sync_write.vfs_write
8.17 ± 41% -7.6 0.54 ± 60% perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write._fini
8.17 ± 41% -7.6 0.54 ± 60% perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
8.17 ± 41% -7.6 0.54 ± 60% perf-profile.calltrace.cycles-pp.write._fini
8.17 ± 41% -7.6 0.54 ± 60% perf-profile.calltrace.cycles-pp._fini
8.17 ± 41% -7.6 0.54 ± 60% perf-profile.calltrace.cycles-pp.devkmsg_write.new_sync_write.vfs_write.ksys_write.do_syscall_64
8.17 ± 41% -7.6 0.54 ± 60% perf-profile.calltrace.cycles-pp.devkmsg_emit.devkmsg_write.new_sync_write.vfs_write.ksys_write
7.95 ± 38% -7.4 0.54 ± 61% perf-profile.calltrace.cycles-pp.console_unlock.vprintk_emit.devkmsg_emit.devkmsg_write.new_sync_write
6.82 ± 32% -6.5 0.31 ±100% perf-profile.calltrace.cycles-pp.serial8250_console_putchar.uart_console_write.serial8250_console_write.console_unlock.vprintk_emit
6.82 ± 32% -6.5 0.31 ±100% perf-profile.calltrace.cycles-pp.wait_for_xmitr.serial8250_console_putchar.uart_console_write.serial8250_console_write.console_unlock
6.50 ± 34% -6.2 0.33 ±100% perf-profile.calltrace.cycles-pp.serial8250_console_write.console_unlock.vprintk_emit.devkmsg_emit.devkmsg_write
6.30 ± 35% -6.0 0.32 ±100% perf-profile.calltrace.cycles-pp.uart_console_write.serial8250_console_write.console_unlock.vprintk_emit.devkmsg_emit
4.33 ± 50% -4.3 0.00 perf-profile.calltrace.cycles-pp.irq_exit.smp_apic_timer_interrupt.apic_timer_interrupt
4.19 ± 50% -4.2 0.00 perf-profile.calltrace.cycles-pp.perf_mux_hrtimer_handler.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
4.10 ±115% -3.8 0.26 ±100% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
4.09 ±116% -3.8 0.26 ±100% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.00 +1.4 1.41 ± 12% perf-profile.calltrace.cycles-pp.nrand48_r
0.00 +1.5 1.52 ± 34% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma
0.00 +1.5 1.54 ± 34% perf-profile.calltrace.cycles-pp._raw_spin_lock.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.shmem_alloc_page
0.00 +1.5 1.54 ± 33% perf-profile.calltrace.cycles-pp.clear_page_erms.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault
0.00 +2.0 2.00 ± 34% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.shmem_alloc_page.shmem_alloc_and_acct_page
0.00 +2.1 2.08 ± 34% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.shmem_alloc_page.shmem_alloc_and_acct_page.shmem_getpage_gfp
0.00 +2.1 2.12 ± 34% perf-profile.calltrace.cycles-pp.alloc_pages_vma.shmem_alloc_page.shmem_alloc_and_acct_page.shmem_getpage_gfp.shmem_fault
0.00 +2.2 2.17 ± 34% perf-profile.calltrace.cycles-pp.shmem_alloc_page.shmem_alloc_and_acct_page.shmem_getpage_gfp.shmem_fault.__do_fault
0.00 +2.5 2.47 ± 34% perf-profile.calltrace.cycles-pp.shmem_alloc_and_acct_page.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault
0.00 +2.9 2.94 ± 10% perf-profile.calltrace.cycles-pp.do_rw_once
0.00 +3.8 3.76 ± 33% perf-profile.calltrace.cycles-pp.filemap_map_pages.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
0.00 +5.7 5.74 ± 33% perf-profile.calltrace.cycles-pp.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault.handle_mm_fault
0.00 +5.8 5.85 ± 33% perf-profile.calltrace.cycles-pp.shmem_fault.__do_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
0.00 +5.9 5.87 ± 33% perf-profile.calltrace.cycles-pp.__do_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
0.18 ±141% +9.8 10.02 ± 33% perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
0.00 +10.1 10.11 ± 33% perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault.do_access
0.00 +10.4 10.41 ± 33% perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault.do_access
0.00 +10.5 10.49 ± 33% perf-profile.calltrace.cycles-pp.do_page_fault.page_fault.do_access
0.00 +11.6 11.55 ± 36% perf-profile.calltrace.cycles-pp.page_fault.do_access
0.00 +82.8 82.82 ± 5% perf-profile.calltrace.cycles-pp.do_access
21.45 -7.5% 19.84 ± 5% perf-stat.i.MPKI
8.758e+09 +10.8% 9.7e+09 perf-stat.i.branch-instructions
0.80 ± 7% -0.2 0.61 ± 25% perf-stat.i.branch-miss-rate%
52.90 -10.0 42.90 perf-stat.i.cache-miss-rate%
5.787e+08 -11.6% 5.114e+08 perf-stat.i.cache-misses
8.135e+08 +7.8% 8.773e+08 perf-stat.i.cache-references
1924 +43.6% 2763 ± 4% perf-stat.i.context-switches
3.57 -14.1% 3.06 perf-stat.i.cpi
1.602e+11 -5.6% 1.512e+11 perf-stat.i.cpu-cycles
64.20 ± 5% +39.9% 89.80 perf-stat.i.cpu-migrations
537.18 ± 8% +21.6% 653.45 ± 6% perf-stat.i.cycles-between-cache-misses
2.96 -0.5 2.48 perf-stat.i.dTLB-load-miss-rate%
4.458e+08 -4.6% 4.254e+08 perf-stat.i.dTLB-load-misses
1.098e+10 +10.7% 1.215e+10 perf-stat.i.dTLB-loads
589025 ± 25% +54.6% 910381 ± 7% perf-stat.i.dTLB-store-misses
4.284e+09 +12.1% 4.803e+09 perf-stat.i.dTLB-stores
1852806 ± 4% +13.3% 2099262 ± 3% perf-stat.i.iTLB-load-misses
3.852e+10 +11.1% 4.278e+10 perf-stat.i.instructions
286409 ± 41% +54.9% 443730 ± 3% perf-stat.i.instructions-per-iTLB-miss
0.33 ± 4% +7.7% 0.35 ± 3% perf-stat.i.ipc
672221 ± 2% +11.8% 751342 perf-stat.i.minor-faults
6532149 -26.3% 4811515 ± 15% perf-stat.i.node-load-misses
5.586e+08 -12.0% 4.919e+08 perf-stat.i.node-loads
672261 ± 2% +11.8% 751452 perf-stat.i.page-faults
21.10 -2.9% 20.50 perf-stat.overall.MPKI
71.04 -12.7 58.30 perf-stat.overall.cache-miss-rate%
4.15 -14.9% 3.53 perf-stat.overall.cpi
277.06 +6.7% 295.70 perf-stat.overall.cycles-between-cache-misses
3.89 -0.5 3.38 perf-stat.overall.dTLB-load-miss-rate%
0.01 ± 25% +0.0 0.02 ± 7% perf-stat.overall.dTLB-store-miss-rate%
0.24 +17.5% 0.28 perf-stat.overall.ipc
12551 +1.1% 12696 perf-stat.overall.path-length
8.727e+09 +10.9% 9.677e+09 perf-stat.ps.branch-instructions
5.756e+08 -11.4% 5.101e+08 perf-stat.ps.cache-misses
8.103e+08 +8.0% 8.751e+08 perf-stat.ps.cache-references
1922 +43.4% 2757 ± 4% perf-stat.ps.context-switches
1.595e+11 -5.4% 1.508e+11 perf-stat.ps.cpu-cycles
64.37 ± 5% +39.6% 89.86 perf-stat.ps.cpu-migrations
4.431e+08 -4.2% 4.243e+08 perf-stat.ps.dTLB-load-misses
1.094e+10 +10.8% 1.213e+10 perf-stat.ps.dTLB-loads
586226 ± 25% +54.3% 904307 ± 7% perf-stat.ps.dTLB-store-misses
4.276e+09 +12.1% 4.792e+09 perf-stat.ps.dTLB-stores
1875392 ± 4% +12.0% 2099913 ± 3% perf-stat.ps.iTLB-load-misses
3.84e+10 +11.2% 4.269e+10 perf-stat.ps.instructions
684408 +10.2% 754271 perf-stat.ps.minor-faults
6489476 -26.3% 4781738 ± 15% perf-stat.ps.node-load-misses
5.554e+08 -11.7% 4.907e+08 perf-stat.ps.node-loads
684409 +10.2% 754271 perf-stat.ps.page-faults
7.45e+12 +1.1% 7.535e+12 perf-stat.total.instructions
190.67 -9.9% 171.75 interrupts.168:IR-PCI-MSI.512000-edge.ahci[0000:00:1f.2]
213.67 ± 29% -56.7% 92.50 ± 3% interrupts.39:IR-PCI-MSI.1572869-edge.eth0-TxRx-5
251.33 ± 48% -60.7% 98.75 ± 6% interrupts.45:IR-PCI-MSI.1572875-edge.eth0-TxRx-11
169.00 ± 26% -38.6% 103.75 ± 14% interrupts.46:IR-PCI-MSI.1572876-edge.eth0-TxRx-12
107.00 ± 10% -15.0% 91.00 ± 4% interrupts.48:IR-PCI-MSI.1572878-edge.eth0-TxRx-14
103.00 ± 6% -15.5% 87.00 interrupts.50:IR-PCI-MSI.1572880-edge.eth0-TxRx-16
99.33 ± 4% -12.7% 86.75 interrupts.75:IR-PCI-MSI.1572903-edge.eth0-TxRx-39
98.67 ± 6% -12.1% 86.75 interrupts.79:IR-PCI-MSI.1572907-edge.eth0-TxRx-43
98.33 ± 5% -11.5% 87.00 interrupts.84:IR-PCI-MSI.1572912-edge.eth0-TxRx-48
99.67 ± 2% -12.7% 87.00 interrupts.87:IR-PCI-MSI.1572915-edge.eth0-TxRx-51
328.00 -10.7% 292.75 interrupts.9:IR-IO-APIC.9-fasteoi.acpi
193517 -6.8% 180367 interrupts.CAL:Function_call_interrupts
328.00 -10.7% 292.75 interrupts.CPU1.9:IR-IO-APIC.9-fasteoi.acpi
2232 ± 2% -8.8% 2035 ± 2% interrupts.CPU1.CAL:Function_call_interrupts
7482 ± 7% -49.3% 3796 ± 36% interrupts.CPU10.NMI:Non-maskable_interrupts
7482 ± 7% -49.3% 3796 ± 36% interrupts.CPU10.PMI:Performance_monitoring_interrupts
251.33 ± 48% -60.7% 98.75 ± 6% interrupts.CPU11.45:IR-PCI-MSI.1572875-edge.eth0-TxRx-11
7434 ± 8% -32.4% 5028 ± 6% interrupts.CPU11.NMI:Non-maskable_interrupts
7434 ± 8% -32.4% 5028 ± 6% interrupts.CPU11.PMI:Performance_monitoring_interrupts
169.00 ± 26% -38.6% 103.75 ± 14% interrupts.CPU12.46:IR-PCI-MSI.1572876-edge.eth0-TxRx-12
7480 ± 8% -33.2% 4994 ± 8% interrupts.CPU12.NMI:Non-maskable_interrupts
7480 ± 8% -33.2% 4994 ± 8% interrupts.CPU12.PMI:Performance_monitoring_interrupts
7460 ± 7% -33.2% 4986 ± 7% interrupts.CPU13.NMI:Non-maskable_interrupts
7460 ± 7% -33.2% 4986 ± 7% interrupts.CPU13.PMI:Performance_monitoring_interrupts
904.00 ±105% -86.8% 119.00 ± 79% interrupts.CPU13.RES:Rescheduling_interrupts
107.00 ± 10% -15.0% 91.00 ± 4% interrupts.CPU14.48:IR-PCI-MSI.1572878-edge.eth0-TxRx-14
7545 ± 6% -32.3% 5111 ± 8% interrupts.CPU14.NMI:Non-maskable_interrupts
7545 ± 6% -32.3% 5111 ± 8% interrupts.CPU14.PMI:Performance_monitoring_interrupts
7556 ± 6% -32.8% 5080 ± 7% interrupts.CPU15.NMI:Non-maskable_interrupts
7556 ± 6% -32.8% 5080 ± 7% interrupts.CPU15.PMI:Performance_monitoring_interrupts
103.00 ± 6% -15.5% 87.00 interrupts.CPU16.50:IR-PCI-MSI.1572880-edge.eth0-TxRx-16
7494 ± 7% -32.4% 5069 ± 8% interrupts.CPU16.NMI:Non-maskable_interrupts
7494 ± 7% -32.4% 5069 ± 8% interrupts.CPU16.PMI:Performance_monitoring_interrupts
7477 ± 8% -32.9% 5015 ± 8% interrupts.CPU17.NMI:Non-maskable_interrupts
7477 ± 8% -32.9% 5015 ± 8% interrupts.CPU17.PMI:Performance_monitoring_interrupts
7416 ± 8% -32.4% 5012 ± 7% interrupts.CPU18.NMI:Non-maskable_interrupts
7416 ± 8% -32.4% 5012 ± 7% interrupts.CPU18.PMI:Performance_monitoring_interrupts
7489 ± 7% -24.9% 5624 ± 19% interrupts.CPU19.NMI:Non-maskable_interrupts
7489 ± 7% -24.9% 5624 ± 19% interrupts.CPU19.PMI:Performance_monitoring_interrupts
7531 ± 6% -32.1% 5116 ± 7% interrupts.CPU20.NMI:Non-maskable_interrupts
7531 ± 6% -32.1% 5116 ± 7% interrupts.CPU20.PMI:Performance_monitoring_interrupts
302.33 ± 9% -50.1% 151.00 ± 39% interrupts.CPU22.RES:Rescheduling_interrupts
7565 ± 6% -33.8% 5005 ± 6% interrupts.CPU32.NMI:Non-maskable_interrupts
7565 ± 6% -33.8% 5005 ± 6% interrupts.CPU32.PMI:Performance_monitoring_interrupts
7494 ± 7% -34.4% 4916 ± 7% interrupts.CPU34.NMI:Non-maskable_interrupts
7494 ± 7% -34.4% 4916 ± 7% interrupts.CPU34.PMI:Performance_monitoring_interrupts
7562 ± 5% -34.6% 4946 ± 7% interrupts.CPU35.NMI:Non-maskable_interrupts
7562 ± 5% -34.6% 4946 ± 7% interrupts.CPU35.PMI:Performance_monitoring_interrupts
7485 ± 7% -32.8% 5030 ± 7% interrupts.CPU36.NMI:Non-maskable_interrupts
7485 ± 7% -32.8% 5030 ± 7% interrupts.CPU36.PMI:Performance_monitoring_interrupts
7532 ± 6% -33.3% 5026 ± 7% interrupts.CPU37.NMI:Non-maskable_interrupts
7532 ± 6% -33.3% 5026 ± 7% interrupts.CPU37.PMI:Performance_monitoring_interrupts
898.33 ±108% -87.9% 108.25 ± 59% interrupts.CPU37.RES:Rescheduling_interrupts
7502 ± 7% -33.7% 4977 ± 7% interrupts.CPU38.NMI:Non-maskable_interrupts
7502 ± 7% -33.7% 4977 ± 7% interrupts.CPU38.PMI:Performance_monitoring_interrupts
99.33 ± 4% -12.7% 86.75 interrupts.CPU39.75:IR-PCI-MSI.1572903-edge.eth0-TxRx-39
7506 ± 6% -41.6% 4382 ± 28% interrupts.CPU39.NMI:Non-maskable_interrupts
7506 ± 6% -41.6% 4382 ± 28% interrupts.CPU39.PMI:Performance_monitoring_interrupts
7512 ± 7% -30.3% 5232 ± 8% interrupts.CPU40.NMI:Non-maskable_interrupts
7512 ± 7% -30.3% 5232 ± 8% interrupts.CPU40.PMI:Performance_monitoring_interrupts
7842 -44.1% 4384 ± 27% interrupts.CPU41.NMI:Non-maskable_interrupts
7842 -44.1% 4384 ± 27% interrupts.CPU41.PMI:Performance_monitoring_interrupts
7501 ± 6% -41.7% 4375 ± 27% interrupts.CPU42.NMI:Non-maskable_interrupts
7501 ± 6% -41.7% 4375 ± 27% interrupts.CPU42.PMI:Performance_monitoring_interrupts
98.67 ± 6% -12.1% 86.75 interrupts.CPU43.79:IR-PCI-MSI.1572907-edge.eth0-TxRx-43
7515 ± 6% -41.5% 4399 ± 27% interrupts.CPU43.NMI:Non-maskable_interrupts
7515 ± 6% -41.5% 4399 ± 27% interrupts.CPU43.PMI:Performance_monitoring_interrupts
7550 ± 5% -40.5% 4492 ± 26% interrupts.CPU44.NMI:Non-maskable_interrupts
7550 ± 5% -40.5% 4492 ± 26% interrupts.CPU44.PMI:Performance_monitoring_interrupts
182.33 ± 39% +529.8% 1148 ±115% interrupts.CPU44.RES:Rescheduling_interrupts
2255 ± 2% -8.6% 2060 ± 2% interrupts.CPU45.CAL:Function_call_interrupts
98.33 ± 5% -11.5% 87.00 interrupts.CPU48.84:IR-PCI-MSI.1572912-edge.eth0-TxRx-48
182.67 ± 68% +121.0% 403.75 ± 61% interrupts.CPU49.RES:Rescheduling_interrupts
213.67 ± 29% -56.7% 92.50 ± 3% interrupts.CPU5.39:IR-PCI-MSI.1572869-edge.eth0-TxRx-5
109.00 ± 14% +62.2% 176.75 ± 36% interrupts.CPU5.RES:Rescheduling_interrupts
99.67 ± 2% -12.7% 87.00 interrupts.CPU51.87:IR-PCI-MSI.1572915-edge.eth0-TxRx-51
51.00 ± 51% +63.7% 83.50 ± 27% interrupts.CPU51.RES:Rescheduling_interrupts
2241 -9.8% 2022 ± 3% interrupts.CPU52.CAL:Function_call_interrupts
2222 -10.5% 1988 ± 3% interrupts.CPU54.CAL:Function_call_interrupts
7457 ± 7% -38.2% 4612 ± 27% interrupts.CPU55.NMI:Non-maskable_interrupts
7457 ± 7% -38.2% 4612 ± 27% interrupts.CPU55.PMI:Performance_monitoring_interrupts
7522 ± 7% -41.9% 4367 ± 25% interrupts.CPU56.NMI:Non-maskable_interrupts
7522 ± 7% -41.9% 4367 ± 25% interrupts.CPU56.PMI:Performance_monitoring_interrupts
7514 ± 7% -49.6% 3786 ± 35% interrupts.CPU57.NMI:Non-maskable_interrupts
7514 ± 7% -49.6% 3786 ± 35% interrupts.CPU57.PMI:Performance_monitoring_interrupts
76.67 ± 95% +761.8% 660.75 ±127% interrupts.CPU57.RES:Rescheduling_interrupts
7563 ± 6% -40.8% 4480 ± 24% interrupts.CPU58.NMI:Non-maskable_interrupts
7563 ± 6% -40.8% 4480 ± 24% interrupts.CPU58.PMI:Performance_monitoring_interrupts
205.00 ± 45% -66.6% 68.50 ± 56% interrupts.CPU58.RES:Rescheduling_interrupts
7571 ± 6% -32.6% 5105 ± 7% interrupts.CPU59.NMI:Non-maskable_interrupts
7571 ± 6% -32.6% 5105 ± 7% interrupts.CPU59.PMI:Performance_monitoring_interrupts
7531 ± 6% -41.1% 4432 ± 25% interrupts.CPU6.NMI:Non-maskable_interrupts
7531 ± 6% -41.1% 4432 ± 25% interrupts.CPU6.PMI:Performance_monitoring_interrupts
7532 ± 6% -32.6% 5075 ± 8% interrupts.CPU60.NMI:Non-maskable_interrupts
7532 ± 6% -32.6% 5075 ± 8% interrupts.CPU60.PMI:Performance_monitoring_interrupts
26.00 ± 19% +301.9% 104.50 ± 86% interrupts.CPU60.RES:Rescheduling_interrupts
7482 ± 7% -33.2% 4999 ± 8% interrupts.CPU61.NMI:Non-maskable_interrupts
7482 ± 7% -33.2% 4999 ± 8% interrupts.CPU61.PMI:Performance_monitoring_interrupts
7513 ± 7% -33.5% 4998 ± 8% interrupts.CPU62.NMI:Non-maskable_interrupts
7513 ± 7% -33.5% 4998 ± 8% interrupts.CPU62.PMI:Performance_monitoring_interrupts
7478 ± 7% -27.5% 5423 ± 14% interrupts.CPU63.NMI:Non-maskable_interrupts
7478 ± 7% -27.5% 5423 ± 14% interrupts.CPU63.PMI:Performance_monitoring_interrupts
79.33 ± 80% +70.8% 135.50 ± 59% interrupts.CPU63.RES:Rescheduling_interrupts
7574 ± 6% -32.1% 5141 ± 7% interrupts.CPU64.NMI:Non-maskable_interrupts
7574 ± 6% -32.1% 5141 ± 7% interrupts.CPU64.PMI:Performance_monitoring_interrupts
190.67 -9.9% 171.75 interrupts.CPU65.168:IR-PCI-MSI.512000-edge.ahci[0000:00:1f.2]
7546 ± 5% -24.1% 5728 ± 15% interrupts.CPU65.NMI:Non-maskable_interrupts
7546 ± 5% -24.1% 5728 ± 15% interrupts.CPU65.PMI:Performance_monitoring_interrupts
7435 ± 8% -31.0% 5130 ± 6% interrupts.CPU66.NMI:Non-maskable_interrupts
7435 ± 8% -31.0% 5130 ± 6% interrupts.CPU66.PMI:Performance_monitoring_interrupts
48.00 ± 40% +322.4% 202.75 ± 76% interrupts.CPU66.RES:Rescheduling_interrupts
7473 ± 8% -30.1% 5226 ± 15% interrupts.CPU67.NMI:Non-maskable_interrupts
7473 ± 8% -30.1% 5226 ± 15% interrupts.CPU67.PMI:Performance_monitoring_interrupts
7502 ± 7% -33.6% 4985 ± 7% interrupts.CPU68.NMI:Non-maskable_interrupts
7502 ± 7% -33.6% 4985 ± 7% interrupts.CPU68.PMI:Performance_monitoring_interrupts
7415 ± 8% -32.9% 4973 ± 9% interrupts.CPU70.NMI:Non-maskable_interrupts
7415 ± 8% -32.9% 4973 ± 9% interrupts.CPU70.PMI:Performance_monitoring_interrupts
102.00 ± 64% +195.8% 301.75 ± 50% interrupts.CPU70.RES:Rescheduling_interrupts
7463 ± 7% -32.9% 5005 ± 9% interrupts.CPU71.NMI:Non-maskable_interrupts
7463 ± 7% -32.9% 5005 ± 9% interrupts.CPU71.PMI:Performance_monitoring_interrupts
7502 ± 8% -38.2% 4635 ± 20% interrupts.CPU72.NMI:Non-maskable_interrupts
7502 ± 8% -38.2% 4635 ± 20% interrupts.CPU72.PMI:Performance_monitoring_interrupts
65.00 ± 58% +181.9% 183.25 ± 28% interrupts.CPU72.RES:Rescheduling_interrupts
7426 ± 8% -38.4% 4574 ± 30% interrupts.CPU73.NMI:Non-maskable_interrupts
7426 ± 8% -38.4% 4574 ± 30% interrupts.CPU73.PMI:Performance_monitoring_interrupts
7444 ± 8% -40.4% 4434 ± 28% interrupts.CPU74.NMI:Non-maskable_interrupts
7444 ± 8% -40.4% 4434 ± 28% interrupts.CPU74.PMI:Performance_monitoring_interrupts
7454 ± 8% -38.9% 4558 ± 24% interrupts.CPU75.NMI:Non-maskable_interrupts
7454 ± 8% -38.9% 4558 ± 24% interrupts.CPU75.PMI:Performance_monitoring_interrupts
7446 ± 8% -32.6% 5017 ± 7% interrupts.CPU76.NMI:Non-maskable_interrupts
7446 ± 8% -32.6% 5017 ± 7% interrupts.CPU76.PMI:Performance_monitoring_interrupts
7465 ± 6% -32.5% 5035 ± 10% interrupts.CPU77.NMI:Non-maskable_interrupts
7465 ± 6% -32.5% 5035 ± 10% interrupts.CPU77.PMI:Performance_monitoring_interrupts
206.00 ± 59% -64.7% 72.75 ± 26% interrupts.CPU77.RES:Rescheduling_interrupts
7424 ± 9% -40.9% 4386 ± 28% interrupts.CPU78.NMI:Non-maskable_interrupts
7424 ± 9% -40.9% 4386 ± 28% interrupts.CPU78.PMI:Performance_monitoring_interrupts
76.00 ± 78% +204.3% 231.25 ± 53% interrupts.CPU78.RES:Rescheduling_interrupts
7717 ± 2% -43.0% 4399 ± 29% interrupts.CPU79.NMI:Non-maskable_interrupts
7717 ± 2% -43.0% 4399 ± 29% interrupts.CPU79.PMI:Performance_monitoring_interrupts
7454 ± 8% -48.8% 3813 ± 36% interrupts.CPU8.NMI:Non-maskable_interrupts
7454 ± 8% -48.8% 3813 ± 36% interrupts.CPU8.PMI:Performance_monitoring_interrupts
148.33 ± 52% +132.1% 344.25 ± 65% interrupts.CPU8.RES:Rescheduling_interrupts
7484 ± 7% -32.4% 5061 ± 6% interrupts.CPU80.NMI:Non-maskable_interrupts
7484 ± 7% -32.4% 5061 ± 6% interrupts.CPU80.PMI:Performance_monitoring_interrupts
100.00 ± 76% +164.2% 264.25 ± 67% interrupts.CPU80.RES:Rescheduling_interrupts
7523 ± 7% -33.3% 5017 ± 8% interrupts.CPU82.NMI:Non-maskable_interrupts
7523 ± 7% -33.3% 5017 ± 8% interrupts.CPU82.PMI:Performance_monitoring_interrupts
7446 ± 8% -33.5% 4949 ± 7% interrupts.CPU83.NMI:Non-maskable_interrupts
7446 ± 8% -33.5% 4949 ± 7% interrupts.CPU83.PMI:Performance_monitoring_interrupts
2219 -17.7% 1826 ± 8% interrupts.CPU84.CAL:Function_call_interrupts
7423 ± 8% -25.3% 5545 ± 9% interrupts.CPU84.NMI:Non-maskable_interrupts
7423 ± 8% -25.3% 5545 ± 9% interrupts.CPU84.PMI:Performance_monitoring_interrupts
161.00 ± 52% +207.9% 495.75 ± 34% interrupts.CPU84.RES:Rescheduling_interrupts
7611 ± 5% -34.6% 4981 ± 8% interrupts.CPU85.NMI:Non-maskable_interrupts
7611 ± 5% -34.6% 4981 ± 8% interrupts.CPU85.PMI:Performance_monitoring_interrupts
7510 ± 7% -33.9% 4964 ± 8% interrupts.CPU86.NMI:Non-maskable_interrupts
7510 ± 7% -33.9% 4964 ± 8% interrupts.CPU86.PMI:Performance_monitoring_interrupts
56.67 ± 65% +193.8% 166.50 ± 45% interrupts.CPU87.RES:Rescheduling_interrupts
7461 ± 7% -41.1% 4392 ± 25% interrupts.CPU9.NMI:Non-maskable_interrupts
7461 ± 7% -41.1% 4392 ± 25% interrupts.CPU9.PMI:Performance_monitoring_interrupts
615899 ± 8% -30.9% 425828 ± 8% interrupts.NMI:Non-maskable_interrupts
615899 ± 8% -30.9% 425828 ± 8% interrupts.PMI:Performance_monitoring_interrupts
224.00 ± 5% +124.7% 503.25 ± 6% interrupts.TLB:TLB_shootdowns
***************************************************************************************************
lkp-skl-2sp6: 72 threads Intel(R) Xeon(R) Gold 6139 CPU @ 2.30GHz with 128G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase/ucode:
gcc-7/performance/x86_64-rhel-7.6/debian-x86_64-2019-03-19.cgz/300s/16G/lkp-skl-2sp6/shm-pread-rand/vm-scalability/0x200005a
commit:
f568cf93ca ("vfs: Convert smackfs to use the new mount API")
27eb9d500d ("vfs: Convert ramfs, shmem, tmpfs, devtmpfs, rootfs to use the new mount API")
f568cf93ca0923a0 27eb9d500d71d93e2b2f55c226b
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:4 25% 1:4 kmsg.Firmware_Bug]:the_BIOS_has_corrupted_hw-PMU_resources(MSR#is#c5)
%stddev %change %stddev
\ | \
29.34 -50.3% 14.58 vm-scalability.free_time
57369 -33.5% 38169 vm-scalability.median
4128975 -33.3% 2753755 vm-scalability.throughput
338.83 -5.6% 319.89 vm-scalability.time.elapsed_time
338.83 -5.6% 319.89 vm-scalability.time.elapsed_time.max
65853236 -50.0% 32927302 vm-scalability.time.maximum_resident_set_size
1.18e+08 -50.0% 59015896 vm-scalability.time.minor_page_faults
6995 +1.3% 7086 vm-scalability.time.percent_of_cpu_this_job_got
4355 -51.8% 2097 ± 3% vm-scalability.time.system_time
19346 +6.3% 20571 vm-scalability.time.user_time
1.239e+09 -33.2% 8.276e+08 vm-scalability.workload
0.72 -13.0% 0.62 ± 8% boot-time.smp_boot
470049 ± 9% -26.9% 343394 ± 25% cpuidle.C1.time
1115118 ± 28% -36.4% 708982 ± 25% cpuidle.C1E.usage
3.07 -1.3 1.74 ± 14% mpstat.cpu.all.idle%
17.84 -8.7 9.12 ± 3% mpstat.cpu.all.sys%
79.09 +10.0 89.13 mpstat.cpu.all.usr%
78.00 +12.8% 88.00 vmstat.cpu.us
64111530 -47.9% 33375595 vmstat.memory.cache
58088987 +60.4% 93196417 vmstat.memory.free
9740248 ± 2% -49.3% 4940013 ± 3% numa-numastat.node0.local_node
9744954 ± 2% -49.3% 4943645 ± 3% numa-numastat.node0.numa_hit
9771507 ± 2% -47.4% 5143109 ± 3% numa-numastat.node1.local_node
9780969 ± 2% -47.3% 5153631 ± 3% numa-numastat.node1.numa_hit
2997 +1.2% 3033 turbostat.Avg_MHz
1114565 ± 28% -36.4% 709071 ± 25% turbostat.C1E
2.82 ± 6% -36.7% 1.79 ± 13% turbostat.CPU%c1
235.19 -3.0% 228.14 turbostat.PkgWatt
92.25 -10.5% 82.55 turbostat.RAMWatt
43.84 +20.8% 52.98 ± 10% sched_debug.cfs_rq:/.util_avg.stddev
1100199 ± 8% -9.1% 1000000 sched_debug.cpu.avg_idle.max
4.47 ± 4% +45.2% 6.49 ± 6% sched_debug.cpu.clock.stddev
4.47 ± 4% +45.2% 6.49 ± 6% sched_debug.cpu.clock_task.stddev
2.93 ± 2% +16.1% 3.40 ± 3% sched_debug.cpu.cpu_load[0].stddev
41.50 ± 6% -9.2% 37.67 ± 8% sched_debug.cpu.cpu_load[3].max
76.58 ± 5% -8.4% 70.17 ± 5% sched_debug.cpu.cpu_load[4].max
7.84 ± 5% -8.6% 7.17 ± 6% sched_debug.cpu.cpu_load[4].stddev
538523 ± 2% -7.2% 500000 sched_debug.cpu.max_idle_balance_cost.max
5402 ± 49% -100.0% 0.00 sched_debug.cpu.max_idle_balance_cost.stddev
918.42 ± 11% -21.8% 717.79 ± 13% sched_debug.cpu.nr_switches.min
6.31 ± 10% -21.6% 4.95 ± 6% sched_debug.cpu.nr_uninterruptible.stddev
9450 ± 2% -16.8% 7861 ± 4% slabinfo.cred_jar.active_objs
9450 ± 2% -16.8% 7861 ± 4% slabinfo.cred_jar.num_objs
691.50 ± 3% +25.8% 869.75 ± 4% slabinfo.dmaengine-unmap-16.active_objs
691.50 ± 3% +25.8% 869.75 ± 4% slabinfo.dmaengine-unmap-16.num_objs
3409 ± 6% -14.1% 2927 ± 7% slabinfo.eventpoll_pwq.active_objs
3409 ± 6% -14.1% 2927 ± 7% slabinfo.eventpoll_pwq.num_objs
270793 -46.9% 143859 slabinfo.radix_tree_node.active_objs
4897 -47.3% 2583 slabinfo.radix_tree_node.active_slabs
274288 -47.3% 144682 slabinfo.radix_tree_node.num_objs
4897 -47.3% 2583 slabinfo.radix_tree_node.num_slabs
3430 -13.8% 2955 ± 10% slabinfo.sock_inode_cache.active_objs
3430 -13.8% 2955 ± 10% slabinfo.sock_inode_cache.num_objs
4061070 -68.8% 1267951 meminfo.Active
4060810 -68.8% 1267700 meminfo.Active(anon)
64101737 -48.1% 33279811 meminfo.Cached
63977519 -49.3% 32444459 meminfo.Committed_AS
59063468 -47.5% 31035002 meminfo.Inactive
59062059 -47.5% 31033620 meminfo.Inactive(anon)
217857 -34.3% 143104 meminfo.KReclaimable
59006761 -47.5% 30978341 meminfo.Mapped
57345075 +61.5% 92600531 meminfo.MemAvailable
57853777 +61.0% 93146626 meminfo.MemFree
73850245 -47.8% 38557396 meminfo.Memused
8905110 -49.3% 4516171 meminfo.PageTables
217857 -34.3% 143104 meminfo.SReclaimable
62878730 -49.0% 32056754 meminfo.Shmem
341526 -22.2% 265855 meminfo.Slab
236622 -45.7% 128474 meminfo.max_used_kB
1017564 -68.8% 317803 proc-vmstat.nr_active_anon
1430333 +61.5% 2309494 proc-vmstat.nr_dirty_background_threshold
2864165 +61.5% 4624641 proc-vmstat.nr_dirty_threshold
16010805 -48.0% 8323862 proc-vmstat.nr_file_pages
14478526 +60.8% 23283066 proc-vmstat.nr_free_pages
14748269 -47.4% 7761184 proc-vmstat.nr_inactive_anon
14734597 -47.4% 7747479 proc-vmstat.nr_mapped
2226081 -49.3% 1128924 proc-vmstat.nr_page_table_pages
15704787 -48.9% 8017833 proc-vmstat.nr_shmem
54423 -34.2% 35794 proc-vmstat.nr_slab_reclaimable
1017564 -68.8% 317803 proc-vmstat.nr_zone_active_anon
14748269 -47.4% 7761184 proc-vmstat.nr_zone_inactive_anon
19553150 -48.2% 10125086 proc-vmstat.numa_hit
19538971 -48.3% 10110918 proc-vmstat.numa_local
16473553 -50.0% 8241083 proc-vmstat.pgactivate
19639546 -48.1% 10194611 proc-vmstat.pgalloc_normal
1.189e+08 -49.7% 59831195 proc-vmstat.pgfault
19263642 -47.7% 10066386 proc-vmstat.pgfree
2040989 -68.7% 638086 ± 5% numa-meminfo.node0.Active
2040759 -68.7% 637895 ± 5% numa-meminfo.node0.Active(anon)
32255633 -49.2% 16382022 ± 2% numa-meminfo.node0.FilePages
29703056 -48.7% 15251546 ± 2% numa-meminfo.node0.Inactive
29702146 -48.7% 15250514 ± 2% numa-meminfo.node0.Inactive(anon)
129237 ± 27% -47.4% 67973 ± 26% numa-meminfo.node0.KReclaimable
29669464 -48.7% 15233801 ± 2% numa-meminfo.node0.Mapped
28353196 ± 2% +65.3% 46863502 numa-meminfo.node0.MemFree
37324122 ± 2% -49.6% 18813817 ± 4% numa-meminfo.node0.MemUsed
4634020 ± 8% -55.8% 2049728 ± 16% numa-meminfo.node0.PageTables
129237 ± 27% -47.4% 67973 ± 26% numa-meminfo.node0.SReclaimable
31643856 -50.2% 15768034 ± 2% numa-meminfo.node0.Shmem
198838 ± 19% -31.5% 136214 ± 13% numa-meminfo.node0.Slab
2060932 -68.9% 641532 ± 5% numa-meminfo.node1.Active
2060902 -68.9% 641471 ± 5% numa-meminfo.node1.Active(anon)
31779623 -46.8% 16909618 ± 2% numa-meminfo.node1.FilePages
29252029 -46.0% 15782612 ± 2% numa-meminfo.node1.Inactive
29251531 -46.0% 15782260 ± 2% numa-meminfo.node1.Inactive(anon)
29228867 -46.1% 15743623 ± 2% numa-meminfo.node1.Mapped
29571324 ± 2% +56.5% 46273312 numa-meminfo.node1.MemFree
36455379 ± 2% -45.8% 19753390 ± 4% numa-meminfo.node1.MemUsed
4267926 ± 8% -42.2% 2465282 ± 13% numa-meminfo.node1.PageTables
31167329 -47.7% 16299492 ± 2% numa-meminfo.node1.Shmem
511423 -68.7% 160010 ± 5% numa-vmstat.node0.nr_active_anon
8071576 -49.3% 4095377 ± 2% numa-vmstat.node0.nr_file_pages
7080740 ± 2% +65.5% 11715896 numa-vmstat.node0.nr_free_pages
7431980 -48.7% 3811956 ± 2% numa-vmstat.node0.nr_inactive_anon
7423898 -48.7% 3807846 ± 2% numa-vmstat.node0.nr_mapped
1158358 ± 8% -55.8% 512451 ± 16% numa-vmstat.node0.nr_page_table_pages
7918632 -50.2% 3941880 ± 2% numa-vmstat.node0.nr_shmem
32332 ± 27% -47.4% 16993 ± 26% numa-vmstat.node0.nr_slab_reclaimable
511422 -68.7% 160010 ± 5% numa-vmstat.node0.nr_zone_active_anon
7431981 -48.7% 3811955 ± 2% numa-vmstat.node0.nr_zone_inactive_anon
9835567 ± 2% -47.3% 5178639 ± 3% numa-vmstat.node0.numa_hit
9830733 ± 2% -47.4% 5174906 ± 3% numa-vmstat.node0.numa_local
516488 -68.8% 160936 ± 5% numa-vmstat.node1.nr_active_anon
7952400 -46.8% 4227427 ± 2% numa-vmstat.node1.nr_file_pages
7385447 ± 2% +56.6% 11567830 numa-vmstat.node1.nr_free_pages
7319116 -46.1% 3945017 ± 2% numa-vmstat.node1.nr_inactive_anon
7313529 -46.2% 3935419 ± 2% numa-vmstat.node1.nr_mapped
244.00 -63.8% 88.25 ±100% numa-vmstat.node1.nr_mlock
1066849 ± 8% -42.2% 616680 ± 13% numa-vmstat.node1.nr_page_table_pages
7799327 -47.8% 4074895 ± 2% numa-vmstat.node1.nr_shmem
516486 -68.8% 160936 ± 5% numa-vmstat.node1.nr_zone_active_anon
7319114 -46.1% 3945017 ± 2% numa-vmstat.node1.nr_zone_inactive_anon
9691224 ± 2% -45.0% 5332953 ± 3% numa-vmstat.node1.numa_hit
9507279 ± 2% -45.9% 5147811 ± 3% numa-vmstat.node1.numa_local
47.94 -6.1 41.86 ± 5% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
79.61 -2.7 76.88 perf-profile.calltrace.cycles-pp.apic_timer_interrupt
76.03 -2.1 73.88 ± 2% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt
67.16 -1.8 65.32 ± 2% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
3.79 ± 3% -1.8 1.97 ± 10% perf-profile.calltrace.cycles-pp.account_user_time.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
7.12 ± 7% -1.3 5.83 ± 13% perf-profile.calltrace.cycles-pp.ktime_get_update_offsets_now.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
3.05 -1.0 2.09 ± 25% perf-profile.calltrace.cycles-pp.interrupt_entry
1.36 ± 26% -0.7 0.62 ± 58% perf-profile.calltrace.cycles-pp.ktime_get.tick_sched_timer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt
1.50 ± 2% -0.6 0.93 ± 13% perf-profile.calltrace.cycles-pp.timerqueue_del.__remove_hrtimer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt
1.71 ± 2% -0.5 1.20 ± 12% perf-profile.calltrace.cycles-pp.__remove_hrtimer.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt
3.35 ± 2% -0.5 2.85 ± 7% perf-profile.calltrace.cycles-pp.interrupt_entry.apic_timer_interrupt
1.80 ± 5% -0.4 1.40 ± 16% perf-profile.calltrace.cycles-pp.irq_enter.smp_apic_timer_interrupt.apic_timer_interrupt
1.06 ± 6% -0.4 0.69 ± 15% perf-profile.calltrace.cycles-pp.rcu_irq_enter.irq_enter.smp_apic_timer_interrupt.apic_timer_interrupt
1.10 ± 5% -0.3 0.79 ± 33% perf-profile.calltrace.cycles-pp.prepare_exit_to_usermode.swapgs_restore_regs_and_return_to_usermode
1.00 ± 14% -0.2 0.76 ± 10% perf-profile.calltrace.cycles-pp.native_apic_mem_write.smp_apic_timer_interrupt.apic_timer_interrupt
1.20 ± 11% +0.3 1.49 ± 9% perf-profile.calltrace.cycles-pp.__update_load_avg_se.task_tick_fair.scheduler_tick.update_process_times.tick_sched_handle
0.64 ± 4% +0.3 0.97 ± 20% perf-profile.calltrace.cycles-pp.ret_from_fork
0.64 ± 4% +0.3 0.97 ± 20% perf-profile.calltrace.cycles-pp.kthread.ret_from_fork
0.00 +0.7 0.70 ± 17% perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault
0.00 +0.7 0.71 ± 22% perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork
0.00 +0.7 0.71 ± 22% perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork
0.70 +0.8 1.50 ± 24% perf-profile.calltrace.cycles-pp.serial8250_console_putchar.uart_console_write.serial8250_console_write.console_unlock.vprintk_emit
0.71 +0.8 1.56 ± 24% perf-profile.calltrace.cycles-pp.uart_console_write.serial8250_console_write.console_unlock.vprintk_emit.printk
0.72 +0.9 1.61 ± 25% perf-profile.calltrace.cycles-pp.serial8250_console_write.console_unlock.vprintk_emit.printk.irq_work_run_list
4.32 ± 5% +0.9 5.22 ± 9% perf-profile.calltrace.cycles-pp.run_local_timers.update_process_times.tick_sched_handle.tick_sched_timer.__hrtimer_run_queues
0.83 +1.0 1.81 ± 24% perf-profile.calltrace.cycles-pp.printk.irq_work_run_list.irq_work_run.smp_irq_work_interrupt.irq_work_interrupt
0.83 +1.0 1.81 ± 24% perf-profile.calltrace.cycles-pp.vprintk_emit.printk.irq_work_run_list.irq_work_run.smp_irq_work_interrupt
0.83 +1.0 1.81 ± 24% perf-profile.calltrace.cycles-pp.console_unlock.vprintk_emit.printk.irq_work_run_list.irq_work_run
263.00 ± 12% -30.7% 182.25 ± 5% interrupts.73:PCI-MSI.70260741-edge.eth3-TxRx-4
254259 -7.7% 234765 ± 2% interrupts.CAL:Function_call_interrupts
7626 +7.3% 8183 ± 6% interrupts.CPU0.NMI:Non-maskable_interrupts
7626 +7.3% 8183 ± 6% interrupts.CPU0.PMI:Performance_monitoring_interrupts
7628 +7.4% 8191 ± 6% interrupts.CPU13.NMI:Non-maskable_interrupts
7628 +7.4% 8191 ± 6% interrupts.CPU13.PMI:Performance_monitoring_interrupts
7617 +7.6% 8192 ± 6% interrupts.CPU15.NMI:Non-maskable_interrupts
7617 +7.6% 8192 ± 6% interrupts.CPU15.PMI:Performance_monitoring_interrupts
55.00 +1010.0% 610.50 ± 89% interrupts.CPU17.RES:Rescheduling_interrupts
263.00 ± 12% -30.7% 182.25 ± 5% interrupts.CPU18.73:PCI-MSI.70260741-edge.eth3-TxRx-4
3499 ± 2% -7.3% 3242 ± 2% interrupts.CPU18.CAL:Function_call_interrupts
7616 +7.4% 8182 ± 6% interrupts.CPU18.NMI:Non-maskable_interrupts
7616 +7.4% 8182 ± 6% interrupts.CPU18.PMI:Performance_monitoring_interrupts
604.50 ± 4% -67.9% 194.25 ± 49% interrupts.CPU19.RES:Rescheduling_interrupts
3563 -8.6% 3255 interrupts.CPU21.CAL:Function_call_interrupts
7609 +7.6% 8187 ± 6% interrupts.CPU22.NMI:Non-maskable_interrupts
7609 +7.6% 8187 ± 6% interrupts.CPU22.PMI:Performance_monitoring_interrupts
474.00 ± 19% -57.2% 202.75 ± 48% interrupts.CPU25.RES:Rescheduling_interrupts
1109 ± 62% -79.2% 231.25 ± 55% interrupts.CPU27.RES:Rescheduling_interrupts
58.50 ± 23% +211.1% 182.00 ± 71% interrupts.CPU3.RES:Rescheduling_interrupts
480.50 ± 17% -58.2% 201.00 ± 53% interrupts.CPU31.RES:Rescheduling_interrupts
450.00 ± 5% -68.0% 144.00 ± 29% interrupts.CPU33.RES:Rescheduling_interrupts
341.00 ± 21% -45.2% 186.75 ± 66% interrupts.CPU34.RES:Rescheduling_interrupts
723.50 ± 77% -90.8% 66.75 ± 56% interrupts.CPU36.RES:Rescheduling_interrupts
29.00 ± 41% +239.7% 98.50 ± 33% interrupts.CPU41.RES:Rescheduling_interrupts
15.50 ± 9% +390.3% 76.00 ± 35% interrupts.CPU42.RES:Rescheduling_interrupts
3695 -11.1% 3287 ± 2% interrupts.CPU43.CAL:Function_call_interrupts
251.00 ± 46% -72.8% 68.25 ± 54% interrupts.CPU44.RES:Rescheduling_interrupts
3659 -9.9% 3297 ± 2% interrupts.CPU48.CAL:Function_call_interrupts
115.50 ± 18% -71.2% 33.25 ±101% interrupts.CPU55.RES:Rescheduling_interrupts
1285 ± 10% -70.8% 374.75 ±125% interrupts.CPU59.RES:Rescheduling_interrupts
3508 -27.2% 2555 ± 48% interrupts.CPU6.CAL:Function_call_interrupts
3638 -11.1% 3234 ± 3% interrupts.CPU62.CAL:Function_call_interrupts
3636 -9.8% 3279 ± 2% interrupts.CPU63.CAL:Function_call_interrupts
110.00 ± 39% -55.5% 49.00 ± 91% interrupts.CPU65.RES:Rescheduling_interrupts
173.50 ± 50% -62.0% 66.00 ± 34% interrupts.CPU66.RES:Rescheduling_interrupts
3607 -28.3% 2584 ± 47% interrupts.CPU68.CAL:Function_call_interrupts
49.00 ± 4% +120.4% 108.00 ± 64% interrupts.CPU68.RES:Rescheduling_interrupts
3591 -10.1% 3228 interrupts.CPU71.CAL:Function_call_interrupts
79.00 ± 8% +388.3% 385.75 ± 42% interrupts.CPU9.RES:Rescheduling_interrupts
24.57 -5.2% 23.29 perf-stat.i.MPKI
6.019e+09 -17.5% 4.965e+09 perf-stat.i.branch-instructions
0.22 ± 3% -0.1 0.15 ± 5% perf-stat.i.branch-miss-rate%
8375301 -31.9% 5702870 ± 6% perf-stat.i.branch-misses
74.12 +10.5 84.59 perf-stat.i.cache-miss-rate%
5.203e+08 -15.8% 4.38e+08 perf-stat.i.cache-misses
6.67e+08 -24.3% 5.052e+08 perf-stat.i.cache-references
8.41 +20.4% 10.13 perf-stat.i.cpi
2.149e+11 +1.3% 2.178e+11 perf-stat.i.cpu-cycles
655.72 -6.9% 610.25 perf-stat.i.cycles-between-cache-misses
2.96 +0.3 3.29 perf-stat.i.dTLB-load-miss-rate%
2.507e+08 -13.9% 2.158e+08 perf-stat.i.dTLB-load-misses
7.517e+09 -17.7% 6.186e+09 perf-stat.i.dTLB-loads
0.03 ± 3% -0.0 0.03 ± 11% perf-stat.i.dTLB-store-miss-rate%
881950 ± 3% -27.7% 637506 ± 10% perf-stat.i.dTLB-store-misses
2.644e+09 -17.6% 2.18e+09 perf-stat.i.dTLB-stores
748152 -41.7% 436041 ± 2% perf-stat.i.iTLB-load-misses
51192 ± 19% -29.8% 35952 ± 19% perf-stat.i.iTLB-loads
2.617e+10 -17.8% 2.152e+10 perf-stat.i.instructions
166690 ± 2% -12.3% 146120 ± 3% perf-stat.i.instructions-per-iTLB-miss
0.14 -22.0% 0.11 perf-stat.i.ipc
349429 -46.8% 186018 perf-stat.i.minor-faults
2.037e+08 ± 2% -14.3% 1.745e+08 ± 6% perf-stat.i.node-load-misses
3.048e+08 -15.7% 2.57e+08 ± 6% perf-stat.i.node-loads
1273439 ± 2% -45.5% 694360 ± 4% perf-stat.i.node-store-misses
480055 ± 12% -42.6% 275675 ± 8% perf-stat.i.node-stores
349431 -46.8% 186021 perf-stat.i.page-faults
25.47 -7.9% 23.47 perf-stat.overall.MPKI
0.14 -0.0 0.12 ± 7% perf-stat.overall.branch-miss-rate%
77.98 +8.7 86.69 perf-stat.overall.cache-miss-rate%
8.22 +23.2% 10.12 perf-stat.overall.cpi
413.74 +20.3% 497.63 perf-stat.overall.cycles-between-cache-misses
3.22 +0.1 3.37 perf-stat.overall.dTLB-load-miss-rate%
0.03 ± 3% -0.0 0.03 ± 10% perf-stat.overall.dTLB-store-miss-rate%
34786 +41.4% 49190 ± 2% perf-stat.overall.instructions-per-iTLB-miss
0.12 -18.8% 0.10 perf-stat.overall.ipc
7133 +16.5% 8310 perf-stat.overall.path-length
5.996e+09 -17.5% 4.948e+09 perf-stat.ps.branch-instructions
8402682 -31.9% 5720811 ± 6% perf-stat.ps.branch-misses
5.179e+08 -15.8% 4.363e+08 perf-stat.ps.cache-misses
6.641e+08 -24.2% 5.033e+08 perf-stat.ps.cache-references
2.143e+11 +1.3% 2.171e+11 perf-stat.ps.cpu-cycles
2.495e+08 -13.8% 2.15e+08 perf-stat.ps.dTLB-load-misses
7.488e+09 -17.7% 6.165e+09 perf-stat.ps.dTLB-loads
880618 ± 3% -27.8% 636134 ± 10% perf-stat.ps.dTLB-store-misses
2.634e+09 -17.5% 2.172e+09 perf-stat.ps.dTLB-stores
749535 -41.8% 436237 ± 2% perf-stat.ps.iTLB-load-misses
51054 ± 19% -29.7% 35915 ± 19% perf-stat.ps.iTLB-loads
2.607e+10 -17.8% 2.144e+10 perf-stat.ps.instructions
350552 -46.8% 186443 perf-stat.ps.minor-faults
2.027e+08 ± 2% -14.2% 1.739e+08 ± 6% perf-stat.ps.node-load-misses
3.033e+08 -15.6% 2.56e+08 ± 6% perf-stat.ps.node-loads
1276424 ± 2% -45.4% 696879 ± 4% perf-stat.ps.node-store-misses
483087 ± 13% -42.6% 277396 ± 9% perf-stat.ps.node-stores
350552 -46.8% 186443 perf-stat.ps.page-faults
8.84e+12 -22.2% 6.878e+12 perf-stat.total.instructions
134477 ± 4% -12.2% 118031 ± 4% softirqs.CPU0.TIMER
129516 ± 3% -10.3% 116158 ± 5% softirqs.CPU1.TIMER
137061 ± 7% -11.5% 121271 ± 2% softirqs.CPU10.TIMER
134708 ± 8% -12.2% 118210 softirqs.CPU13.TIMER
138931 ± 8% -14.0% 119458 ± 2% softirqs.CPU14.TIMER
28768 +10.5% 31792 ± 5% softirqs.CPU15.RCU
136131 ± 9% -12.7% 118857 ± 2% softirqs.CPU15.TIMER
132792 ± 3% -7.7% 122562 softirqs.CPU16.TIMER
143921 ± 4% -16.2% 120596 ± 4% softirqs.CPU18.TIMER
145168 ± 3% -17.5% 119736 ± 5% softirqs.CPU19.TIMER
153122 ± 16% -23.8% 116679 ± 3% softirqs.CPU2.TIMER
141372 -16.9% 117480 ± 4% softirqs.CPU20.TIMER
144335 ± 3% -17.9% 118496 ± 3% softirqs.CPU21.TIMER
165007 ± 11% -27.5% 119554 ± 4% softirqs.CPU22.TIMER
142200 ± 2% -17.0% 118044 ± 5% softirqs.CPU23.TIMER
141063 ± 5% -15.9% 118601 ± 4% softirqs.CPU24.TIMER
138963 ± 4% -16.5% 116070 ± 6% softirqs.CPU25.TIMER
141346 ± 4% -15.8% 118964 ± 5% softirqs.CPU27.TIMER
140996 ± 5% -16.9% 117149 ± 5% softirqs.CPU28.TIMER
141709 ± 5% -17.6% 116804 ± 4% softirqs.CPU29.TIMER
139584 ± 4% -15.8% 117486 ± 4% softirqs.CPU30.TIMER
138678 ± 4% -14.9% 117960 ± 6% softirqs.CPU31.TIMER
139778 ± 5% -16.6% 116581 ± 4% softirqs.CPU32.TIMER
138835 ± 4% -16.3% 116168 ± 5% softirqs.CPU33.TIMER
139832 ± 2% -14.3% 119885 ± 4% softirqs.CPU34.TIMER
143088 ± 3% -14.1% 122870 ± 4% softirqs.CPU35.TIMER
145947 ± 5% -11.3% 129393 softirqs.CPU38.TIMER
131003 ± 2% -11.2% 116301 ± 3% softirqs.CPU4.TIMER
130674 ± 4% -9.8% 117891 ± 2% softirqs.CPU42.TIMER
131033 ± 5% -10.0% 117896 ± 3% softirqs.CPU43.TIMER
131812 ± 6% -8.3% 120870 ± 6% softirqs.CPU44.TIMER
133415 ± 6% -11.1% 118651 ± 3% softirqs.CPU45.TIMER
138074 ± 5% -10.5% 123620 softirqs.CPU46.TIMER
133232 ± 5% -6.5% 124597 ± 2% softirqs.CPU47.TIMER
134867 ± 7% -10.9% 120099 softirqs.CPU49.TIMER
133501 ± 3% -11.0% 118863 ± 2% softirqs.CPU5.TIMER
139285 ± 9% -11.0% 124014 ± 2% softirqs.CPU50.TIMER
138112 ± 7% -12.4% 120918 softirqs.CPU51.TIMER
136733 ± 6% -10.7% 122034 ± 2% softirqs.CPU52.TIMER
139399 ± 6% -7.8% 128577 ± 4% softirqs.CPU53.TIMER
144006 ± 3% -16.2% 120664 ± 4% softirqs.CPU54.TIMER
144335 ± 2% -15.9% 121369 ± 5% softirqs.CPU55.TIMER
141569 -13.3% 122777 ± 5% softirqs.CPU56.TIMER
142905 -16.0% 120105 ± 3% softirqs.CPU57.TIMER
149917 ± 2% -19.6% 120597 ± 4% softirqs.CPU58.TIMER
145062 -16.0% 121875 ± 6% softirqs.CPU59.TIMER
129322 ± 3% -10.2% 116093 ± 4% softirqs.CPU6.TIMER
139051 ± 5% -16.6% 116013 ± 5% softirqs.CPU60.TIMER
136928 ± 5% -16.7% 114040 ± 6% softirqs.CPU61.TIMER
139478 ± 4% -17.0% 115821 ± 5% softirqs.CPU63.TIMER
140188 ± 5% -17.4% 115808 ± 6% softirqs.CPU64.TIMER
139850 ± 5% -18.4% 114185 ± 5% softirqs.CPU65.TIMER
139338 ± 5% -17.6% 114794 ± 4% softirqs.CPU66.TIMER
139162 ± 6% -16.5% 116236 ± 6% softirqs.CPU67.TIMER
138693 ± 4% -17.7% 114160 ± 5% softirqs.CPU68.TIMER
137368 ± 5% -16.1% 115277 ± 5% softirqs.CPU69.TIMER
129982 ± 3% -12.3% 113979 ± 4% softirqs.CPU7.TIMER
136521 ± 3% -15.0% 116043 ± 6% softirqs.CPU70.TIMER
139566 ± 3% -15.0% 118588 ± 6% softirqs.CPU71.TIMER
137034 ± 8% -9.9% 123532 ± 2% softirqs.CPU8.TIMER
139637 ± 6% -12.2% 122590 softirqs.CPU9.TIMER
9979650 ± 4% -13.1% 8675751 ± 3% softirqs.TIMER
***************************************************************************************************
lkp-skl-fpga01: 104 threads Skylake with 192G memory
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase:
gcc-7/performance/x86_64-rhel-7.6/debian-x86_64-2018-04-03.cgz/300s/1T/lkp-skl-fpga01/lru-shm/vm-scalability
commit:
f568cf93ca ("vfs: Convert smackfs to use the new mount API")
27eb9d500d ("vfs: Convert ramfs, shmem, tmpfs, devtmpfs, rootfs to use the new mount API")
f568cf93ca0923a0 27eb9d500d71d93e2b2f55c226b
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
4:4 -1% 4:4 perf-profile.calltrace.cycles-pp.sync_regs.error_entry.do_access
%stddev %change %stddev
\ | \
0.03 -51.0% 0.02 vm-scalability.free_time
576244 +5.9% 610398 vm-scalability.median
0.00 ± 10% +52.3% 0.01 ± 3% vm-scalability.median_stddev
59786024 +6.2% 63492278 vm-scalability.throughput
243.93 +2.5% 250.02 vm-scalability.time.elapsed_time
243.93 +2.5% 250.02 vm-scalability.time.elapsed_time.max
43465 ± 4% +7.9% 46892 ± 4% vm-scalability.time.involuntary_context_switches
946724 -50.0% 473636 vm-scalability.time.maximum_resident_set_size
5.177e+08 +2.6% 5.313e+08 vm-scalability.time.minor_page_faults
1756 -5.1% 1666 vm-scalability.time.percent_of_cpu_this_job_got
3116 -3.8% 2998 vm-scalability.time.system_time
31133 +104.1% 63552 vm-scalability.time.voluntary_context_switches
2.324e+09 +2.4% 2.379e+09 vm-scalability.workload
0.00 ±110% +0.0 0.00 ±139% mpstat.cpu.all.iowait%
499.00 -1.6% 491.00 turbostat.Avg_MHz
913171 ± 41% +828.6% 8479745 ± 90% turbostat.C6
3.09 ± 44% +20.9 23.95 ±103% turbostat.C6%
50080150 -48.8% 25663962 vmstat.memory.cache
1.458e+08 +16.7% 1.702e+08 vmstat.memory.free
3501 +32.4% 4637 vmstat.system.cs
7.929e+08 ± 44% +687.9% 6.248e+09 ±103% cpuidle.C6.time
926892 ± 40% +816.6% 8496018 ± 90% cpuidle.C6.usage
190135 ± 3% +152.6% 480192 ± 88% cpuidle.POLL.time
84219 ± 2% +34.4% 113180 ± 9% cpuidle.POLL.usage
40696 ± 15% -26.5% 29917 ± 13% softirqs.CPU4.RCU
44474 ± 29% -35.9% 28525 softirqs.CPU42.RCU
30262 ± 9% +19.5% 36161 ± 15% softirqs.CPU63.RCU
29764 ± 6% +27.6% 37992 ± 13% softirqs.CPU65.RCU
50166780 -48.9% 25654809 meminfo.Cached
49347199 -50.1% 24617888 meminfo.Committed_AS
48930891 -50.1% 24417186 meminfo.Inactive
48929311 -50.1% 24415179 meminfo.Inactive(anon)
1579 +27.0% 2006 meminfo.Inactive(file)
188217 -30.3% 131272 meminfo.KReclaimable
7916315 -53.0% 3720091 ± 2% meminfo.Mapped
1.447e+08 +17.0% 1.693e+08 meminfo.MemAvailable
1.455e+08 +16.9% 1.701e+08 meminfo.MemFree
51232587 -48.0% 26665161 meminfo.Memused
19466 -39.0% 11869 ± 2% meminfo.PageTables
188217 -30.3% 131272 meminfo.SReclaimable
48944461 -50.1% 24432065 meminfo.Shmem
348076 -14.6% 297129 meminfo.Slab
413517 -50.2% 206120 meminfo.max_used_kB
6226812 ± 15% -48.7% 3192024 ± 15% numa-vmstat.node0.nr_file_pages
18066381 ± 5% +16.8% 21103373 ± 2% numa-vmstat.node0.nr_free_pages
6074426 ± 15% -50.0% 3038706 ± 16% numa-vmstat.node0.nr_inactive_anon
306.25 ± 10% -34.4% 201.00 ± 47% numa-vmstat.node0.nr_inactive_file
987437 ± 3% -53.6% 458349 numa-vmstat.node0.nr_mapped
2482 ± 10% -40.2% 1484 ± 18% numa-vmstat.node0.nr_page_table_pages
6076001 ± 15% -50.0% 3040939 ± 16% numa-vmstat.node0.nr_shmem
24272 ± 5% -34.8% 15832 ± 9% numa-vmstat.node0.nr_slab_reclaimable
6074321 ± 15% -50.0% 3038614 ± 16% numa-vmstat.node0.nr_zone_inactive_anon
306.25 ± 10% -34.4% 201.00 ± 47% numa-vmstat.node0.nr_zone_inactive_file
6314888 ± 14% -48.9% 3229891 ± 15% numa-vmstat.node1.nr_file_pages
18310535 ± 4% +16.9% 21407188 ± 2% numa-vmstat.node1.nr_free_pages
6157626 ± 14% -50.1% 3073024 ± 16% numa-vmstat.node1.nr_inactive_anon
88.00 ± 34% +240.3% 299.50 ± 31% numa-vmstat.node1.nr_inactive_file
989014 ± 3% -51.0% 484627 numa-vmstat.node1.nr_mapped
2428 ± 9% -39.7% 1463 ± 14% numa-vmstat.node1.nr_page_table_pages
6159861 ± 14% -50.1% 3075031 ± 16% numa-vmstat.node1.nr_shmem
22813 ± 4% -25.5% 17007 ± 8% numa-vmstat.node1.nr_slab_reclaimable
6157554 ± 14% -50.1% 3072951 ± 16% numa-vmstat.node1.nr_zone_inactive_anon
88.00 ± 34% +240.3% 299.50 ± 31% numa-vmstat.node1.nr_zone_inactive_file
271.00 +81.2% 491.00 proc-vmstat.nr_dirtied
3610614 +17.0% 4223136 proc-vmstat.nr_dirty_background_threshold
7230238 +17.0% 8456814 proc-vmstat.nr_dirty_threshold
12540351 -48.8% 6420021 proc-vmstat.nr_file_pages
36378024 +16.9% 42512287 proc-vmstat.nr_free_pages
12230694 -50.0% 6109850 proc-vmstat.nr_inactive_anon
394.75 +27.0% 501.25 proc-vmstat.nr_inactive_file
16101 +1.3% 16303 proc-vmstat.nr_kernel_stack
1996785 ± 3% -52.8% 941936 proc-vmstat.nr_mapped
4896 ± 3% -40.0% 2936 proc-vmstat.nr_page_table_pages
12234511 -50.0% 6114074 proc-vmstat.nr_shmem
47085 -30.2% 32861 proc-vmstat.nr_slab_reclaimable
39964 +3.8% 41464 proc-vmstat.nr_slab_unreclaimable
257.25 ± 3% +88.2% 484.25 proc-vmstat.nr_written
12230693 -50.0% 6109850 proc-vmstat.nr_zone_inactive_anon
394.75 +27.0% 501.25 proc-vmstat.nr_zone_inactive_file
14321 ± 18% -51.9% 6891 ± 58% proc-vmstat.numa_hint_faults
5.192e+08 +2.6% 5.325e+08 proc-vmstat.numa_hit
5.192e+08 +2.6% 5.325e+08 proc-vmstat.numa_local
5.203e+08 +2.6% 5.336e+08 proc-vmstat.pgalloc_normal
5.183e+08 +2.6% 5.32e+08 proc-vmstat.pgfault
5.195e+08 +2.6% 5.329e+08 proc-vmstat.pgfree
24881812 ± 15% -48.7% 12770223 ± 15% numa-meminfo.node0.FilePages
24273499 ± 16% -49.9% 12157797 ± 16% numa-meminfo.node0.Inactive
24272273 ± 16% -49.9% 12156990 ± 16% numa-meminfo.node0.Inactive(anon)
1225 ± 9% -34.2% 806.25 ± 47% numa-meminfo.node0.Inactive(file)
97123 ± 5% -34.7% 63384 ± 9% numa-meminfo.node0.KReclaimable
4011265 ± 3% -54.1% 1841181 numa-meminfo.node0.Mapped
72290696 ± 5% +16.8% 84410927 ± 2% numa-meminfo.node0.MemFree
25393898 ± 15% -47.7% 13273667 ± 14% numa-meminfo.node0.MemUsed
9880 ± 11% -40.2% 5904 ± 17% numa-meminfo.node0.PageTables
97123 ± 5% -34.7% 63384 ± 9% numa-meminfo.node0.SReclaimable
24278558 ± 16% -49.9% 12165871 ± 16% numa-meminfo.node0.Shmem
189220 -22.1% 147434 ± 9% numa-meminfo.node0.Slab
25264418 ± 13% -48.9% 12911196 ± 15% numa-meminfo.node1.FilePages
24635696 ± 14% -50.1% 12284951 ± 16% numa-meminfo.node1.Inactive
24635342 ± 14% -50.1% 12283751 ± 16% numa-meminfo.node1.Inactive(anon)
353.50 ± 34% +239.4% 1199 ± 31% numa-meminfo.node1.Inactive(file)
91245 ± 4% -25.5% 68019 ± 8% numa-meminfo.node1.KReclaimable
3913740 ± 4% -49.7% 1969074 numa-meminfo.node1.Mapped
73237280 ± 4% +16.9% 85636740 ± 2% numa-meminfo.node1.MemFree
25816602 ± 13% -48.0% 13417142 ± 14% numa-meminfo.node1.MemUsed
9593 ± 9% -38.8% 5870 ± 17% numa-meminfo.node1.PageTables
91245 ± 4% -25.5% 68019 ± 8% numa-meminfo.node1.SReclaimable
24644294 ± 14% -50.1% 12291753 ± 16% numa-meminfo.node1.Shmem
21626 +28.6% 27808 ± 3% slabinfo.anon_vma.active_objs
469.75 +28.6% 604.00 ± 3% slabinfo.anon_vma.active_slabs
21626 +28.6% 27808 ± 3% slabinfo.anon_vma.num_objs
469.75 +28.6% 604.00 ± 3% slabinfo.anon_vma.num_slabs
40574 ± 2% +22.0% 49501 slabinfo.anon_vma_chain.active_objs
634.00 ± 2% +22.0% 773.50 slabinfo.anon_vma_chain.active_slabs
40594 ± 2% +22.1% 49547 slabinfo.anon_vma_chain.num_objs
634.00 ± 2% +22.0% 773.50 slabinfo.anon_vma_chain.num_slabs
1597 ± 12% +16.0% 1852 ± 3% slabinfo.avc_xperms_data.active_objs
1597 ± 12% +16.0% 1852 ± 3% slabinfo.avc_xperms_data.num_objs
13724 ± 2% +15.7% 15879 ± 6% slabinfo.cred_jar.active_objs
13724 ± 2% +15.7% 15879 ± 6% slabinfo.cred_jar.num_objs
214679 -45.6% 116688 slabinfo.radix_tree_node.active_objs
3911 -44.8% 2158 slabinfo.radix_tree_node.active_slabs
219065 -44.8% 120925 slabinfo.radix_tree_node.num_objs
3911 -44.8% 2158 slabinfo.radix_tree_node.num_slabs
3190 +12.4% 3586 slabinfo.sighand_cache.active_objs
3191 +12.7% 3595 slabinfo.sighand_cache.num_objs
1032 ± 7% +44.0% 1486 ± 6% slabinfo.skbuff_ext_cache.active_objs
1051 ± 5% +43.4% 1507 ± 5% slabinfo.skbuff_ext_cache.num_objs
40306 ± 2% +14.4% 46096 ± 2% slabinfo.vm_area_struct.active_objs
1007 ± 2% +14.5% 1153 ± 2% slabinfo.vm_area_struct.active_slabs
40314 ± 2% +14.5% 46143 ± 2% slabinfo.vm_area_struct.num_objs
1007 ± 2% +14.5% 1153 ± 2% slabinfo.vm_area_struct.num_slabs
323228 ± 55% -50.5% 159938 ± 22% sched_debug.cfs_rq:/.load.max
2195909 -10.8% 1958320 sched_debug.cfs_rq:/.min_vruntime.avg
2099869 -13.5% 1815790 sched_debug.cfs_rq:/.min_vruntime.min
36020 ± 4% +34.7% 48534 ± 9% sched_debug.cfs_rq:/.min_vruntime.stddev
229.45 ± 12% -32.9% 153.90 ± 23% sched_debug.cfs_rq:/.runnable_load_avg.max
321968 ± 56% -51.0% 157788 ± 23% sched_debug.cfs_rq:/.runnable_weight.max
66477 ± 17% +110.2% 139739 ± 23% sched_debug.cfs_rq:/.spread0.avg
134057 ± 20% +119.7% 294519 ± 16% sched_debug.cfs_rq:/.spread0.max
-29537 -91.4% -2550 sched_debug.cfs_rq:/.spread0.min
35997 ± 4% +34.7% 48487 ± 9% sched_debug.cfs_rq:/.spread0.stddev
175.30 ± 14% -63.9% 63.35 ±100% sched_debug.cfs_rq:/.util_avg.min
192.91 ± 7% +39.8% 269.72 ± 12% sched_debug.cfs_rq:/.util_avg.stddev
80.48 ± 13% +51.5% 121.95 ± 14% sched_debug.cfs_rq:/.util_est_enqueued.stddev
138814 ± 7% +25.3% 173956 ± 3% sched_debug.cpu.avg_idle.stddev
274.40 ± 16% -27.3% 199.50 ± 21% sched_debug.cpu.cpu_load[2].max
315.45 ± 12% -35.1% 204.70 ± 28% sched_debug.cpu.cpu_load[3].max
373.85 ± 8% -34.7% 243.95 ± 25% sched_debug.cpu.cpu_load[4].max
10724 +50.7% 16157 sched_debug.cpu.curr->pid.max
1467 +115.4% 3160 ± 29% sched_debug.cpu.curr->pid.stddev
323228 ± 55% -50.5% 159938 ± 22% sched_debug.cpu.load.max
2099 ± 11% +54.3% 3239 ± 22% sched_debug.cpu.nr_load_updates.stddev
0.21 ± 4% +23.1% 0.26 ± 13% sched_debug.cpu.nr_running.stddev
4564 +31.8% 6015 sched_debug.cpu.nr_switches.avg
43664 ± 27% +43.3% 62587 ± 13% sched_debug.cpu.nr_switches.max
1176 ± 8% +50.8% 1773 ± 7% sched_debug.cpu.nr_switches.min
5768 ± 13% +26.4% 7290 ± 8% sched_debug.cpu.nr_switches.stddev
1018 ± 32% +172.8% 2778 ± 40% interrupts.CPU0.RES:Rescheduling_interrupts
24.00 ± 51% +266.7% 88.00 ± 63% interrupts.CPU100.TLB:TLB_shootdowns
82.00 ± 41% +251.2% 288.00 ± 71% interrupts.CPU14.RES:Rescheduling_interrupts
99.75 ± 69% +111.8% 211.25 ± 21% interrupts.CPU17.RES:Rescheduling_interrupts
134.50 ± 54% +650.0% 1008 ±122% interrupts.CPU18.RES:Rescheduling_interrupts
1730 ± 8% -31.8% 1180 ± 34% interrupts.CPU2.NMI:Non-maskable_interrupts
1730 ± 8% -31.8% 1180 ± 34% interrupts.CPU2.PMI:Performance_monitoring_interrupts
66.00 ± 6% +2245.1% 1547 ±147% interrupts.CPU21.RES:Rescheduling_interrupts
90.75 ± 38% +239.9% 308.50 ± 49% interrupts.CPU22.RES:Rescheduling_interrupts
36.50 ± 49% +114.4% 78.25 ± 62% interrupts.CPU36.TLB:TLB_shootdowns
103.25 ± 48% +148.7% 256.75 ± 27% interrupts.CPU37.RES:Rescheduling_interrupts
99.00 ± 87% +243.2% 339.75 ± 46% interrupts.CPU5.RES:Rescheduling_interrupts
99.50 ± 61% +163.1% 261.75 ± 34% interrupts.CPU52.RES:Rescheduling_interrupts
99.25 ±117% +256.9% 354.25 ± 60% interrupts.CPU54.RES:Rescheduling_interrupts
70.75 ± 49% +501.4% 425.50 ± 48% interrupts.CPU56.RES:Rescheduling_interrupts
21.00 ± 44% +290.5% 82.00 ± 64% interrupts.CPU64.TLB:TLB_shootdowns
26.50 ± 52% +284.9% 102.00 ± 75% interrupts.CPU67.TLB:TLB_shootdowns
78.75 ± 86% +182.9% 222.75 ± 41% interrupts.CPU69.RES:Rescheduling_interrupts
39.50 ± 27% +372.2% 186.50 ± 68% interrupts.CPU72.RES:Rescheduling_interrupts
108.50 ± 60% +108.5% 226.25 ± 39% interrupts.CPU73.RES:Rescheduling_interrupts
38.25 ± 76% +259.5% 137.50 ± 68% interrupts.CPU73.TLB:TLB_shootdowns
37.50 ± 59% +116.7% 81.25 ± 72% interrupts.CPU85.TLB:TLB_shootdowns
22.50 ± 38% +354.4% 102.25 ± 70% interrupts.CPU87.TLB:TLB_shootdowns
21.75 ± 49% +308.0% 88.75 ± 55% interrupts.CPU89.TLB:TLB_shootdowns
25.00 ± 67% +156.0% 64.00 ± 46% interrupts.CPU94.TLB:TLB_shootdowns
32.75 ± 85% +141.2% 79.00 ± 47% interrupts.CPU96.TLB:TLB_shootdowns
146.25 ± 31% +58.1% 231.25 ± 17% interrupts.CPU99.RES:Rescheduling_interrupts
21099 ± 11% +91.6% 40429 ± 14% interrupts.RES:Rescheduling_interrupts
6445 ± 13% +48.4% 9563 ± 9% interrupts.TLB:TLB_shootdowns
68.61 -4.3 64.28 perf-profile.calltrace.cycles-pp.do_access
58.14 -4.0 54.12 perf-profile.calltrace.cycles-pp.page_fault.do_access
52.63 -4.0 48.62 perf-profile.calltrace.cycles-pp.do_page_fault.page_fault.do_access
52.30 -4.0 48.30 perf-profile.calltrace.cycles-pp.__do_page_fault.do_page_fault.page_fault.do_access
51.01 -4.0 47.02 perf-profile.calltrace.cycles-pp.handle_mm_fault.__do_page_fault.do_page_fault.page_fault.do_access
50.43 -4.0 46.46 perf-profile.calltrace.cycles-pp.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault.page_fault
40.02 -3.8 36.23 perf-profile.calltrace.cycles-pp.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault.handle_mm_fault
40.76 -3.8 36.98 perf-profile.calltrace.cycles-pp.__do_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
40.69 -3.8 36.91 perf-profile.calltrace.cycles-pp.shmem_fault.__do_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault
10.70 ± 3% -2.0 8.71 ± 3% perf-profile.calltrace.cycles-pp.pagevec_lru_move_fn.__lru_cache_add.shmem_getpage_gfp.shmem_fault.__do_fault
10.96 ± 3% -2.0 8.99 ± 4% perf-profile.calltrace.cycles-pp.__lru_cache_add.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault
8.90 ± 4% -1.9 6.98 ± 4% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.shmem_getpage_gfp
8.96 ± 4% -1.9 7.04 ± 4% perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.pagevec_lru_move_fn.__lru_cache_add.shmem_getpage_gfp.shmem_fault
14.87 ± 4% -1.6 13.27 perf-profile.calltrace.cycles-pp.shmem_alloc_and_acct_page.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault
13.23 ± 5% -1.5 11.78 ± 2% perf-profile.calltrace.cycles-pp.shmem_alloc_page.shmem_alloc_and_acct_page.shmem_getpage_gfp.shmem_fault.__do_fault
12.38 ± 5% -1.4 10.94 ± 2% perf-profile.calltrace.cycles-pp.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.shmem_alloc_page.shmem_alloc_and_acct_page
12.74 ± 5% -1.4 11.30 ± 2% perf-profile.calltrace.cycles-pp.__alloc_pages_nodemask.alloc_pages_vma.shmem_alloc_page.shmem_alloc_and_acct_page.shmem_getpage_gfp
12.92 ± 5% -1.4 11.48 ± 2% perf-profile.calltrace.cycles-pp.alloc_pages_vma.shmem_alloc_page.shmem_alloc_and_acct_page.shmem_getpage_gfp.shmem_fault
9.64 ± 6% -1.4 8.21 ± 3% perf-profile.calltrace.cycles-pp._raw_spin_lock.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma.shmem_alloc_page
9.49 ± 7% -1.4 8.06 ± 3% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.get_page_from_freelist.__alloc_pages_nodemask.alloc_pages_vma
15.61 -0.6 14.96 perf-profile.calltrace.cycles-pp.do_rw_once
4.94 -0.3 4.66 perf-profile.calltrace.cycles-pp.clear_page_erms.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault
0.57 ± 3% -0.2 0.39 ± 57% perf-profile.calltrace.cycles-pp.mem_cgroup_commit_charge.shmem_getpage_gfp.shmem_fault.__do_fault.__handle_mm_fault
0.74 ± 3% -0.1 0.68 ± 4% perf-profile.calltrace.cycles-pp.page_add_file_rmap.alloc_set_pte.finish_fault.__handle_mm_fault.handle_mm_fault
1.20 -0.1 1.14 ± 2% perf-profile.calltrace.cycles-pp.finish_fault.__handle_mm_fault.handle_mm_fault.__do_page_fault.do_page_fault
0.80 ± 2% -0.0 0.76 ± 3% perf-profile.calltrace.cycles-pp.security_vm_enough_memory_mm.shmem_alloc_and_acct_page.shmem_getpage_gfp.shmem_fault.__do_fault
7.84 ± 15% +3.4 11.26 ± 14% perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
10.37 ± 9% +3.6 13.92 ± 11% perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
11.77 ± 9% +3.6 15.32 ± 9% perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
11.77 ± 9% +3.6 15.32 ± 9% perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
11.77 ± 9% +3.6 15.32 ± 9% perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
11.87 ± 9% +3.6 15.48 ± 9% perf-profile.calltrace.cycles-pp.secondary_startup_64
3.36 +21.1% 4.07 ± 20% perf-stat.i.MPKI
1.346e+10 +2.2% 1.375e+10 perf-stat.i.branch-instructions
29022778 +7.7% 31267478 ± 5% perf-stat.i.branch-misses
47.72 ± 2% -15.7 32.07 ± 14% perf-stat.i.cache-miss-rate%
43211868 -1.8% 42419129 perf-stat.i.cache-misses
3376 +37.1% 4629 perf-stat.i.context-switches
1.03 +2.9% 1.06 ± 3% perf-stat.i.cpi
72.97 +88.4% 137.51 perf-stat.i.cpu-migrations
799.98 +21.1% 968.56 ± 2% perf-stat.i.cycles-between-cache-misses
0.03 ± 7% +0.0 0.04 ± 20% perf-stat.i.dTLB-load-miss-rate%
1.267e+10 +2.1% 1.294e+10 perf-stat.i.dTLB-loads
0.02 +0.0 0.03 ± 9% perf-stat.i.dTLB-store-miss-rate%
2158638 +2.8% 2218331 perf-stat.i.dTLB-store-misses
3.621e+09 +4.1% 3.771e+09 perf-stat.i.dTLB-stores
23.59 +8.0 31.57 ± 11% perf-stat.i.iTLB-load-miss-rate%
2744828 -9.9% 2473836 ± 3% perf-stat.i.iTLB-load-misses
4.811e+10 +2.9% 4.953e+10 perf-stat.i.instructions
180646 ± 14% -39.9% 108545 ± 35% perf-stat.i.instructions-per-iTLB-miss
2095485 +1.2% 2120039 perf-stat.i.minor-faults
52.46 -2.5 49.94 perf-stat.i.node-load-miss-rate%
2458601 +2.4% 2516686 perf-stat.i.node-loads
43.60 ± 2% -5.8 37.80 ± 3% perf-stat.i.node-store-miss-rate%
7662333 +1.0% 7738567 perf-stat.i.node-stores
2095716 +1.2% 2120269 perf-stat.i.page-faults
0.21 +0.0 0.23 ± 5% perf-stat.overall.branch-miss-rate%
18.95 -0.4 18.52 ± 2% perf-stat.overall.cache-miss-rate%
1.04 -3.5% 1.01 perf-stat.overall.cpi
17502 +14.4% 20023 ± 3% perf-stat.overall.instructions-per-iTLB-miss
0.96 +3.6% 0.99 perf-stat.overall.ipc
52.01 -1.6 50.40 perf-stat.overall.node-load-miss-rate%
5108 +2.1% 5215 perf-stat.overall.path-length
1.359e+10 +1.3% 1.377e+10 perf-stat.ps.branch-instructions
29202158 +7.0% 31254091 ± 5% perf-stat.ps.branch-misses
43522733 -2.5% 42432124 perf-stat.ps.cache-misses
3385 +36.4% 4618 perf-stat.ps.context-switches
5.071e+10 -1.5% 4.994e+10 perf-stat.ps.cpu-cycles
73.95 +86.7% 138.03 perf-stat.ps.cpu-migrations
1.279e+10 +1.3% 1.296e+10 perf-stat.ps.dTLB-loads
3.652e+09 +3.3% 3.774e+09 perf-stat.ps.dTLB-stores
2776127 -10.7% 2478891 ± 3% perf-stat.ps.iTLB-load-misses
4.857e+10 +2.1% 4.959e+10 perf-stat.ps.instructions
2451998 +2.3% 2507985 perf-stat.ps.node-loads
1.187e+13 +4.5% 1.241e+13 perf-stat.total.instructions
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[vfs] d4698b98cb: kmsg.e_Pipe_file_descriptor_not_open
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: d4698b98cb1d9990fcdfa6a0577f8a8bcd7786b7 ("vfs: Convert autofs to use the new mount API")
https://git.kernel.org/cgit/linux/kernel/git/dhowells/linux-fs.git mount-api-viro
in testcase: rcutorture
with following parameters:
runtime: 300s
test: default
torture_type: srcud
test-description: rcutorture is a load/unload test of the rcutorture kernel module.
test-url: https://www.kernel.org/doc/Documentation/RCU/torture.txt
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 2G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
[ 9.938484 ] systemd[1]: RTC configured in localtime, applying delta of 480 minutes to system time.
[ 9.955593 ] random: systemd: uninitialized urandom read (16 bytes read)
[ 9.957295 ] random: systemd: uninitialized urandom read (16 bytes read)
[ 9.958493 ] random: systemd: uninitialized urandom read (16 bytes read)
[ 10.019199 ] e Pipe file descriptor not open
See 'systemctl status proc-sys-fs-binfmt_misc.automount' for details.
Mounting Huge Pages File System...
Created slice system-getty.slice.
To reproduce:
# build kernel
cd linux
cp config-5.1.0-rc2-00063-gd4698b9 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 prepare
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 modules_prepare
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 SHELL=/bin/bash
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen
[mac80211_hwsim] 5cf94a0ee7: WARNING:at_net/wireless/core.c:#wiphy_register[cfg80211]
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 5cf94a0ee722f324f8999e77e6cd3112e0c1cb08 ("[PATCH] mac80211_hwsim: calculate if_combination.max_interfaces")
url: https://github.com/0day-ci/linux/commits/Johannes-Berg/mac80211_hwsim-cal...
base: https://git.kernel.org/cgit/linux/kernel/git/jberg/mac80211-next.git master
in testcase: hwsim
with following parameters:
group: hwsim-09
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 4G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+----------------------------------------------------------+------------+------------+
| | be9cefe796 | 5cf94a0ee7 |
+----------------------------------------------------------+------------+------------+
| boot_successes | 23 | 11 |
| boot_failures | 0 | 4 |
| WARNING:at_net/wireless/core.c:#wiphy_register[cfg80211] | 0 | 4 |
| RIP:wiphy_register[cfg80211] | 0 | 4 |
+----------------------------------------------------------+------------+------------+
[ 307.982608] WARNING: CPU: 1 PID: 5801 at net/wireless/core.c:561 wiphy_register+0x45a/0x900 [cfg80211]
[ 307.985386] Modules linked in: cmac ccm arc4 mac80211_hwsim mac80211 cfg80211 rfkill crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel sr_mod cdrom sg bochs_drm ata_generic ttm ppdev aesni_intel snd_pcm pata_acpi drm_kms_helper snd_timer crypto_simd cryptd syscopyarea sysfillrect glue_helper snd sysimgblt fb_sys_fops ata_piix joydev serio_raw soundcore pcspkr libata drm parport_pc i2c_piix4 floppy parport ip_tables
[ 307.995459] CPU: 1 PID: 5801 Comm: python2 Not tainted 5.0.0-rc7-02366-g5cf94a0 #1
[ 307.997737] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 308.000189] RIP: 0010:wiphy_register+0x45a/0x900 [cfg80211]
[ 308.002175] Code: 66 44 85 e0 74 8b 0f 0b b8 ea ff ff ff e9 e8 fc ff ff 0f 0b 0f 0b b8 ea ff ff ff e9 da fc ff ff 80 7a 10 00 0f 85 05 ff ff ff <0f> 0b b8 ea ff ff ff e9 c4 fc ff ff 0f 0b b8 ea ff ff ff e9 b8 fc
[ 308.007466] RSP: 0018:ffffa0ba0094b9e0 EFLAGS: 00010246
[ 308.009486] RAX: 0000000000000000 RBX: ffff95a5f1da4300 RCX: 0000000000000002
[ 308.012007] RDX: ffff95a5f1da6188 RSI: 0000000000000000 RDI: ffff95a5f1da4300
[ 308.014304] RBP: ffffa0ba0094ba70 R08: 0000000000000000 R09: ffff95a687c03080
[ 308.016594] R10: ffffa0ba0094ba90 R11: ffff95a5f1da61a0 R12: 0000000000000001
[ 308.018909] R13: 0000000000000001 R14: 0000000000000027 R15: 0000000000000001
[ 308.021204] FS: 00007f29e6e82700(0000) GS:ffff95a6bfd00000(0000) knlGS:0000000000000000
[ 308.023666] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 308.025736] CR2: 00007f38dc710708 CR3: 0000000078766000 CR4: 00000000000406e0
[ 308.028042] Call Trace:
[ 308.029620] ? ieee80211_register_hw+0x269/0xc60 [mac80211]
[ 308.031667] ? preempt_schedule_common+0x15/0x30
[ 308.033560] ? ieee80211_register_hw+0x42a/0xc60 [mac80211]
[ 308.035631] ieee80211_register_hw+0x42a/0xc60 [mac80211]
[ 308.037654] mac80211_hwsim_new_radio+0x8d6/0xdc0 [mac80211_hwsim]
[ 308.039808] ? cred_has_capability+0x7d/0x130
[ 308.041649] hwsim_new_radio_nl+0x37d/0x3b0 [mac80211_hwsim]
[ 308.043699] genl_family_rcv_msg+0x1fa/0x3c0
[ 308.046108] genl_rcv_msg+0x47/0x90
[ 308.047818] ? __kmalloc_node_track_caller+0x5c/0x2b0
[ 308.049763] ? genl_family_rcv_msg+0x3c0/0x3c0
[ 308.051604] netlink_rcv_skb+0x4a/0x110
[ 308.053332] genl_rcv+0x24/0x40
[ 308.054955] netlink_unicast+0x193/0x230
[ 308.056710] netlink_sendmsg+0x2c1/0x3c0
[ 308.058459] sock_sendmsg+0x36/0x40
[ 308.060127] __sys_sendto+0x10e/0x140
[ 308.061830] ? handle_mm_fault+0xf5/0x230
[ 308.063589] ? __do_page_fault+0x314/0x520
[ 308.065363] __x64_sys_sendto+0x24/0x30
[ 308.067110] do_syscall_64+0x5b/0x1a0
[ 308.068831] entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 308.070775] RIP: 0033:0x7f29e6a5f5bd
[ 308.072469] Code: 79 20 00 f7 d8 64 89 01 48 83 c8 ff c3 8b 05 1a be 20 00 85 c0 75 3e 48 63 ff 45 31 c9 45 31 c0 4c 63 d1 b8 2c 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 0b c3 66 2e 0f 1f 84 00 00 00 00 00 48 8b 15
[ 308.077788] RSP: 002b:00007ffca1265f88 EFLAGS: 00000246 ORIG_RAX: 000000000000002c
[ 308.080160] RAX: ffffffffffffffda RBX: 00000000ffffffff RCX: 00007f29e6a5f5bd
[ 308.082457] RDX: 000000000000001c RSI: 00007f29df971d8c RDI: 0000000000000014
[ 308.084723] RBP: 00007f29df95bfb8 R08: 0000000000000000 R09: 0000000000000000
[ 308.086965] R10: 0000000000000000 R11: 0000000000000246 R12: 00007ffca1265fc0
[ 308.089184] R13: 00007f29e6e82698 R14: 000000000000001c R15: 000055ecd4bb40a0
[ 308.091367] ---[ end trace e3672b76808636d2 ]---
To reproduce:
# build kernel
cd linux
cp config-5.0.0-rc7-02366-g5cf94a0 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 prepare
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 modules_prepare
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 SHELL=/bin/bash
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
lkp
f5a7344f61 ("drm: Add helpers to kick off self refresh mode in .."): BUG: KASAN: null-ptr-deref in drm_self_refresh_helper_alter_state
by kernel test robot
Greetings,
0day kernel testing robot got the dmesg below, and the first bad commit is
https://github.com/0day-ci/linux/commits/Sean-Paul/drm-Add-helpers-to-kic...
commit f5a7344f61b9f1ff99802d4daa7fecbdf0040b42
Author: Sean Paul <seanpaul(a)chromium.org>
AuthorDate: Tue Mar 26 16:44:54 2019 -0400
Commit: 0day robot <lkp(a)intel.com>
CommitDate: Wed Mar 27 19:48:55 2019 +0800
drm: Add helpers to kick off self refresh mode in drivers
This patch adds a new drm helper library to help drivers implement
self refresh. Drivers choosing to use it will register crtcs and
will receive callbacks when it's time to enter or exit self refresh
mode.
In its current form, it has a timer which will trigger after a
driver-specified amount of inactivity. When the timer triggers, the
helpers will submit a new atomic commit to shut the refreshing pipe
off. On the next atomic commit, the drm core will revert the self
refresh state and bring everything back up to be actively driven.
From the driver's perspective, this works like a regular disable/enable
cycle. The driver need only check the 'self_refresh_active' and/or
'self_refresh_changed' state in crtc_state and connector_state. It
should initiate self refresh mode on the panel and enter an off or
low-power state.
Changes in v2:
- s/psr/self_refresh/ (Daniel)
- integrated the psr exit into the commit that wakes it up (Jose/Daniel)
- made the psr state per-crtc (Jose/Daniel)
Link to v1: https://patchwork.freedesktop.org/patch/msgid/20190228210939.83386-2-sean...
Cc: Daniel Vetter <daniel(a)ffwll.ch>
Cc: Jose Souza <jose.souza(a)intel.com>
Cc: Zain Wang <wzz(a)rock-chips.com>
Cc: Tomasz Figa <tfiga(a)chromium.org>
Signed-off-by: Sean Paul <seanpaul(a)chromium.org>
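As an illustration of the driver-facing side described above, here is a minimal sketch of a CRTC disable hook that consults the new state. The my_* functions are hypothetical placeholders; only the self_refresh_active flag in crtc_state comes from the patch description, so treat this as an assumption-laden sketch rather than code from the series.
#include <drm/drm_crtc.h>
/* Hypothetical driver helpers, not part of the patch. */
static void my_panel_enter_self_refresh(struct drm_crtc *crtc);
static void my_panel_power_off(struct drm_crtc *crtc);
static void my_crtc_atomic_disable(struct drm_crtc *crtc,
				   struct drm_crtc_state *old_crtc_state)
{
	/* After the atomic state swap, crtc->state is the state being committed. */
	struct drm_crtc_state *new_state = crtc->state;
	if (new_state->self_refresh_active) {
		/* Idle timer fired: stop scanning out but let the panel self refresh. */
		my_panel_enter_self_refresh(crtc);
		return;
	}
	/* Regular disable requested by userspace. */
	my_panel_power_off(crtc);
}
On the next full enable the helpers clear the self refresh state again, so, as the commit message says, the same enable/disable hooks bring the pipe back up without extra driver plumbing.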
14c741de93 Merge tag 'nfs-for-5.1-3' of git://git.linux-nfs.org/projects/trondmy/linux-nfs
f5a7344f61 drm: Add helpers to kick off self refresh mode in drivers
5886c73b16 drm/rockchip: Use drm_atomic_helper_commit_tail_rpm
+------------------------------------------+------------+------------+------------+
| | 14c741de93 | f5a7344f61 | 5886c73b16 |
+------------------------------------------+------------+------------+------------+
| boot_successes | 28 | 0 | 0 |
| boot_failures | 0 | 11 | 13 |
| BUG:KASAN:null-ptr-deref_in_d | 0 | 11 | 13 |
| BUG:unable_to_handle_kernel | 0 | 11 | 13 |
| Oops:#[##] | 0 | 11 | 13 |
| RIP:drm_self_refresh_helper_alter_state | 0 | 11 | 13 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 11 | 13 |
+------------------------------------------+------------+------------+------------+
[ 9.780515] [TTM] Initializing pool allocator
[ 9.781498] [TTM] Initializing DMA pool allocator
[ 9.788382] [drm] Initialized bochs-drm 1.0.0 20130925 for 0000:00:02.0 on minor 2
[ 9.797720] fbcon: DRM emulated (fb0) is primary device
[ 9.800488] ==================================================================
[ 9.800635] BUG: KASAN: null-ptr-deref in drm_self_refresh_helper_alter_state+0x1ec/0x237
[ 9.800653] Read of size 4 at addr 0000000000000068 by task swapper/0/1
[ 9.800653]
[ 9.800653] CPU: 0 PID: 1 Comm: swapper/0 Tainted: G T 5.1.0-rc2-00025-gf5a7344 #1
[ 9.800653] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 9.800653] Call Trace:
[ 9.800653] dump_stack+0x62/0x8b
[ 9.800653] ? drm_self_refresh_helper_alter_state+0x1ec/0x237
[ 9.800653] kasan_report+0x1bc/0x1d4
[ 9.800653] ? drm_self_refresh_helper_alter_state+0x1ec/0x237
[ 9.800653] __asan_load4+0xa0/0xa2
[ 9.800653] drm_self_refresh_helper_alter_state+0x1ec/0x237
[ 9.800653] ? drm_atomic_helper_check_planes+0x2dc/0x3a3
[ 9.800653] drm_atomic_helper_check+0xac/0xb8
[ 9.800653] drm_atomic_check_only+0xd5a/0xeb1
[ 9.800653] ? drm_atomic_add_affected_planes+0x188/0x188
[ 9.800653] drm_atomic_commit+0x29/0x7e
[ 9.800653] restore_fbdev_mode_atomic+0x2ef/0x36f
[ 9.800653] ? drm_fb_helper_debug_leave+0x264/0x264
[ 9.800653] ? mutex_lock+0xf5/0x128
[ 9.800653] restore_fbdev_mode+0xbd/0x207
[ 9.800653] drm_fb_helper_restore_fbdev_mode_unlocked+0x51/0x9c
[ 9.800653] drm_fb_helper_set_par+0x6e/0x7e
[ 9.800653] fbcon_init+0x6ed/0x88e
[ 9.800653] ? drm_fb_helper_restore_fbdev_mode_unlocked+0x9c/0x9c
[ 9.800653] visual_init+0x14d/0x1fd
[ 9.800653] do_bind_con_driver+0x28f/0x3c4
[ 9.800653] do_take_over_console+0x253/0x264
[ 9.800653] do_fbcon_takeover+0x90/0xeb
[ 9.800653] ? fbcon_event_notify+0x5cd/0xc9f
[ 9.800653] fbcon_event_notify+0x5fe/0xc9f
[ 9.800653] notifier_call_chain+0x61/0x93
[ 9.800653] __blocking_notifier_call_chain+0x58/0x72
[ 9.800653] blocking_notifier_call_chain+0x14/0x16
[ 9.800653] fb_notifier_call_chain+0x1b/0x1d
[ 9.800653] register_framebuffer+0x4cb/0x54d
[ 9.800653] ? remove_conflicting_pci_framebuffers+0x112/0x112
[ 9.800653] ? __ww_mutex_lock_interruptible_slowpath+0x18/0x18
[ 9.800653] ? drm_fb_helper_fill_pixel_fmt+0x262/0x26c
[ 9.800653] __drm_fb_helper_initial_config_and_unlock+0x68a/0x7d0
[ 9.800653] ? drm_setup_crtcs+0xf17/0xf17
[ 9.800653] ? __ww_mutex_lock_interruptible_slowpath+0x18/0x18
[ 9.800653] ? drm_fb_helper_init+0x32/0x362
[ 9.800653] drm_fb_helper_initial_config+0x31/0x38
[ 9.800653] drm_fbdev_client_hotplug+0x17d/0x1e5
[ 9.800653] drm_fbdev_generic_setup+0x15e/0x196
[ 9.800653] ? bochs_pci_remove+0x3b/0x3b
[ 9.800653] bochs_pci_probe+0x1b6/0x1d8
[ 9.800653] local_pci_probe+0x76/0xc3
[ 9.800653] pci_device_probe+0x260/0x2cb
[ 9.800653] ? pci_device_remove+0x226/0x226
[ 9.800653] ? driver_sysfs_add+0xe6/0x14d
[ 9.800653] really_probe+0x332/0x69c
[ 9.800653] ? device_driver_attach+0x93/0x93
[ 9.800653] driver_probe_device+0x1a1/0x1e0
[ 9.800653] ? device_driver_attach+0x5d/0x93
[ 9.800653] device_driver_attach+0x72/0x93
[ 9.800653] ? parse_option_str+0x17/0x75
[ 9.800653] __driver_attach+0x1cd/0x1dc
[ 9.800653] bus_for_each_dev+0xe0/0x11f
[ 9.800653] ? bus_remove_file+0x4a/0x4a
[ 9.800653] ? ftrace_likely_update+0x28/0x3e
[ 9.800653] ? _raw_spin_unlock+0x53/0x61
[ 9.800653] driver_attach+0x2b/0x2e
[ 9.800653] bus_add_driver+0x20d/0x2f2
[ 9.800653] driver_register+0x12d/0x17c
[ 9.800653] __pci_register_driver+0xd6/0xe1
[ 9.800653] ? ast_init+0x45/0x45
[ 9.800653] bochs_init+0x43/0x45
[ 9.800653] do_one_initcall+0x177/0x373
[ 9.800653] ? start_kernel+0x62f/0x62f
[ 9.800653] ? kasan_poison_shadow+0x2f/0x31
[ 9.800653] ? set_debug_rodata+0x17/0x17
[ 9.800653] kernel_init_freeable+0x3ac/0x441
[ 9.800653] ? rest_init+0xda/0xda
[ 9.800653] kernel_init+0x11/0x10f
[ 9.800653] ? rest_init+0xda/0xda
[ 9.800653] ret_from_fork+0x35/0x40
[ 9.800653] ==================================================================
[ 9.800653] Disabling lock debugging due to kernel taint
[ 9.801974] BUG: unable to handle kernel NULL pointer dereference at 0000000000000068
[ 9.801980] #PF error: [normal kernel read fault]
[ 9.802001] PGD 0 P4D 0
[ 9.802029] Oops: 0000 [#1] PREEMPT SMP KASAN
[ 9.802038] CPU: 0 PID: 1 Comm: swapper/0 Tainted: G B T 5.1.0-rc2-00025-gf5a7344 #1
[ 9.802042] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 9.802058] RIP: 0010:drm_self_refresh_helper_alter_state+0x1ec/0x237
[ 9.802067] Code: 61 ff 41 80 bc 24 e1 01 00 00 00 75 ad 49 8d be 90 04 00 00 e8 9b fb 61 ff 4d 8b a6 90 04 00 00 49 8d 7c 24 68 e8 43 fa 61 ff <41> 8b 7c 24 68 e8 13 ab 4b ff 48 89 45 c8 48 c7 c7 38 b1 7a 85 e8
[ 9.802072] RSP: 0000:ffff88801985f278 EFLAGS: 00010286
[ 9.802099] RAX: ffff888019850080 RBX: ffff8880148852c8 RCX: ffffffff81273647
[ 9.802104] RDX: 0000000000000000 RSI: 2000040000000000 RDI: ffffffff855ab2b8
[ 9.802110] RBP: ffff88801985f2c0 R08: fffffbfff0aa5616 R09: 0000000000000007
[ 9.802114] R10: fffffbfff0aa5615 R11: 0000000000000000 R12: 0000000000000000
[ 9.802119] R13: 0000000000000000 R14: ffff8880191ec318 R15: ffff8880148852e8
[ 9.802126] FS: 0000000000000000(0000) GS:ffff88801a400000(0000) knlGS:0000000000000000
[ 9.802130] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 9.802135] CR2: 0000000000000068 CR3: 000000000462b001 CR4: 00000000001606f0
[ 9.802141] Call Trace:
[ 9.802150] ? drm_atomic_helper_check_planes+0x2dc/0x3a3
[ 9.802159] drm_atomic_helper_check+0xac/0xb8
[ 9.802169] drm_atomic_check_only+0xd5a/0xeb1
[ 9.802179] ? drm_atomic_add_affected_planes+0x188/0x188
[ 9.802189] drm_atomic_commit+0x29/0x7e
[ 9.802198] restore_fbdev_mode_atomic+0x2ef/0x36f
[ 9.802214] ? drm_fb_helper_debug_leave+0x264/0x264
[ 9.802223] ? mutex_lock+0xf5/0x128
[ 9.802232] restore_fbdev_mode+0xbd/0x207
[ 9.802241] drm_fb_helper_restore_fbdev_mode_unlocked+0x51/0x9c
[ 9.802249] drm_fb_helper_set_par+0x6e/0x7e
[ 9.802257] fbcon_init+0x6ed/0x88e
[ 9.802265] ? drm_fb_helper_restore_fbdev_mode_unlocked+0x9c/0x9c
[ 9.802273] visual_init+0x14d/0x1fd
[ 9.802281] do_bind_con_driver+0x28f/0x3c4
[ 9.802290] do_take_over_console+0x253/0x264
[ 9.802299] do_fbcon_takeover+0x90/0xeb
[ 9.802308] ? fbcon_event_notify+0x5cd/0xc9f
[ 9.802316] fbcon_event_notify+0x5fe/0xc9f
[ 9.802325] notifier_call_chain+0x61/0x93
[ 9.802333] __blocking_notifier_call_chain+0x58/0x72
[ 9.802342] blocking_notifier_call_chain+0x14/0x16
[ 9.802349] fb_notifier_call_chain+0x1b/0x1d
[ 9.802356] register_framebuffer+0x4cb/0x54d
[ 9.802365] ? remove_conflicting_pci_framebuffers+0x112/0x112
[ 9.802384] ? __ww_mutex_lock_interruptible_slowpath+0x18/0x18
[ 9.802394] ? drm_fb_helper_fill_pixel_fmt+0x262/0x26c
[ 9.802417] __drm_fb_helper_initial_config_and_unlock+0x68a/0x7d0
[ 9.802425] ? drm_setup_crtcs+0xf17/0xf17
[ 9.802433] ? __ww_mutex_lock_interruptible_slowpath+0x18/0x18
[ 9.802442] ? drm_fb_helper_init+0x32/0x362
[ 9.802449] drm_fb_helper_initial_config+0x31/0x38
[ 9.802456] drm_fbdev_client_hotplug+0x17d/0x1e5
[ 9.802464] drm_fbdev_generic_setup+0x15e/0x196
[ 9.802471] ? bochs_pci_remove+0x3b/0x3b
[ 9.802478] bochs_pci_probe+0x1b6/0x1d8
[ 9.802486] local_pci_probe+0x76/0xc3
[ 9.802493] pci_device_probe+0x260/0x2cb
[ 9.802501] ? pci_device_remove+0x226/0x226
[ 9.802509] ? driver_sysfs_add+0xe6/0x14d
[ 9.802517] really_probe+0x332/0x69c
[ 9.802525] ? device_driver_attach+0x93/0x93
[ 9.802533] driver_probe_device+0x1a1/0x1e0
[ 9.802540] ? device_driver_attach+0x5d/0x93
[ 9.802548] device_driver_attach+0x72/0x93
[ 9.802556] ? parse_option_str+0x17/0x75
[ 9.802563] __driver_attach+0x1cd/0x1dc
[ 9.802571] bus_for_each_dev+0xe0/0x11f
[ 9.802578] ? bus_remove_file+0x4a/0x4a
[ 9.802587] ? ftrace_likely_update+0x28/0x3e
[ 9.802594] ? _raw_spin_unlock+0x53/0x61
[ 9.802602] driver_attach+0x2b/0x2e
[ 9.802609] bus_add_driver+0x20d/0x2f2
[ 9.802618] driver_register+0x12d/0x17c
[ 9.802626] __pci_register_driver+0xd6/0xe1
[ 9.802633] ? ast_init+0x45/0x45
[ 9.802640] bochs_init+0x43/0x45
[ 9.802647] do_one_initcall+0x177/0x373
[ 9.802654] ? start_kernel+0x62f/0x62f
[ 9.802662] ? kasan_poison_shadow+0x2f/0x31
[ 9.802670] ? set_debug_rodata+0x17/0x17
[ 9.802677] kernel_init_freeable+0x3ac/0x441
[ 9.802685] ? rest_init+0xda/0xda
[ 9.802692] kernel_init+0x11/0x10f
[ 9.802699] ? rest_init+0xda/0xda
[ 9.802706] ret_from_fork+0x35/0x40
[ 9.802722] CR2: 0000000000000068
[ 9.802735] ---[ end trace 1c0a59d8d6a544f3 ]---
[ 9.802745] RIP: 0010:drm_self_refresh_helper_alter_state+0x1ec/0x237
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 295e041d18873d661bea4459df2a6c81b6926927 8c2ffd9174779014c3fe1f96d9dc3641d9175f00 --
git bisect bad 03025a210a0499d328cdb9f082e4c45e6b798be8 # 02:25 B 0 4 18 0 Merge 'linux-review/Jan-Harkes/Coda-updates-for-linux-5-1/20190321-121454' into devel-hourly-2019032722
git bisect bad 424324c9d2a55b2bd66f0ce8e0756834d8de0a1d # 02:44 B 0 10 24 0 Merge 'linux-review/Dan-Carpenter/media-v4l2-ctrl-potential-shift-wrapping-bugs/20190325-203001' into devel-hourly-2019032722
git bisect bad cb6fcf0aaa11eb1399c5eee8a35e9982dcd76931 # 03:07 B 0 11 25 0 Merge 'linux-review/Liviu-Dudau/MAINTAINERS-Fix-pattern-for-Documentation-path-for-Arm-Mali-Komeda/20190327-051830' into devel-hourly-2019032722
git bisect bad 1fa6f320433474ff674140efcbb8f02f71429452 # 03:43 B 0 11 25 0 Merge 'linux-review/Liviu-Dudau/arm-komeda-Compile-komeda_debugfs_init-only-if-CONFIG_DEBUG_FS-is-enabled/20190326-205140' into devel-hourly-2019032722
git bisect good 8136c126428003262721806e62332d1c84084823 # 04:07 G 10 0 1 1 Merge 'linux-review/Arnd-Bergmann/wireless-carl9170-fix-clang-build-warning/20190326-032812' into devel-hourly-2019032722
git bisect bad c3f1830078f8705319690f2ab2ca8a377c6c1328 # 04:25 B 0 1 15 0 Merge 'nsekhar-davinci/fixes' into devel-hourly-2019032722
git bisect good 16d0e17a5313a4d81d70f30847ffc16275a89d04 # 04:52 G 11 0 2 2 Merge 'mzx/for-next' into devel-hourly-2019032722
git bisect good 25d9b508677d66f36351fce6ad31eaf6972369d3 # 05:09 G 11 0 5 5 Merge 'linux-review/UPDATE-20190325-104210/Janusz-Krzysztofik/mtd-rawnand-ams-delta-Drop-board-specific-partition-info/20190320-223544' into devel-hourly-2019032722
git bisect good 9d4cd0076bc0d36b593abaeefc0ab924ad9a8d27 # 05:33 G 11 0 2 2 Merge 'asoc/for-linus' into devel-hourly-2019032722
git bisect bad 9845aab9316a6dc047898ef135fb114ea2487cab # 05:46 B 0 11 25 0 Merge 'linux-review/Sean-Paul/drm-Add-helpers-to-kick-off-self-refresh-mode-in-drivers/20190327-194853' into devel-hourly-2019032722
git bisect good 65ae689329c5d6a149b9201df9321368fbdb6a5c # 06:00 G 10 0 2 2 Merge tag 'for-5.1-rc2-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/kdave/linux
git bisect good 01f2f5b82a2b523ae76af53f2ff43c48dde10a00 # 06:14 G 10 0 2 2 SUNRPC: fix uninitialized variable warning
git bisect bad 4e91560bc502453eee65ec7a0366ab047eb6f8d1 # 06:30 B 0 6 20 0 drm/rockchip: Check for fast link training before enabling psr
git bisect bad f5a7344f61b9f1ff99802d4daa7fecbdf0040b42 # 06:52 B 0 10 24 0 drm: Add helpers to kick off self refresh mode in drivers
git bisect good 14c741de93861749dfb60b4964028541f5c506ca # 07:02 G 10 0 3 3 Merge tag 'nfs-for-5.1-3' of git://git.linux-nfs.org/projects/trondmy/linux-nfs
# first bad commit: [f5a7344f61b9f1ff99802d4daa7fecbdf0040b42] drm: Add helpers to kick off self refresh mode in drivers
git bisect good 14c741de93861749dfb60b4964028541f5c506ca # 07:04 G 33 0 5 8 Merge tag 'nfs-for-5.1-3' of git://git.linux-nfs.org/projects/trondmy/linux-nfs
# extra tests with debug options
git bisect bad f5a7344f61b9f1ff99802d4daa7fecbdf0040b42 # 07:21 B 0 8 22 0 drm: Add helpers to kick off self refresh mode in drivers
# extra tests on HEAD of linux-devel/devel-hourly-2019032722
git bisect bad 295e041d18873d661bea4459df2a6c81b6926927 # 07:21 B 0 13 30 0 0day head guard for 'devel-hourly-2019032722'
# extra tests on tree/branch linux-review/Sean-Paul/drm-Add-helpers-to-kick-off-self-refresh-mode-in-drivers/20190327-194853
git bisect bad 5886c73b16de676d8655f6d4a286bc74df20c1e2 # 07:51 B 0 10 24 0 drm/rockchip: Use drm_atomic_helper_commit_tail_rpm
# extra tests with first bad commit reverted
git bisect good 2edfca58b2d075534dda49714ce96c5cebf00ff5 # 08:07 G 11 0 0 0 Revert "drm: Add helpers to kick off self refresh mode in drivers"
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/lkp Intel Corporation
[crc] b76377543b: leaking_addresses.proc.modules.crct10dif_pclmul163841-Live
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: b76377543b738a6b58b0a7b0a42dd9e16436fee1 ("crc-t10dif: Pick better transform if one becomes available")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: leaking_addresses
with following parameters:
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 2G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
2019-03-26 21:30:42 ./leaking_addresses.pl --output-raw result/scan.out
2019-03-26 21:31:14 ./leaking_addresses.pl --input-raw result/scan.out --squash-by-filename
Total number of results from scan (incl dmesg): 109117
dmesg output:
[ 0.031612] mapped APIC to ffffffffff5fc000 ( fee00000)
[ 0.067531] mapped IOAPIC to ffffffffff5fb000 (fec00000)
Results squashed by filename (excl dmesg). Displaying [<number of results> <filename>], <example result>
[37 .symtab] 0xffffffffc031f000
[1 .rodata.cst16.bswap_mask] 0xffffffffc0561060
[108127 kallsyms] ffffffffb8800000 T startup_64
[1 key] 402000000 3803078f800d001 feffffdfffefffff fffffffffffffffe
[2 __tracepoints_ptrs] 0xffffffffc037fdb0
[37 .note.Linux] 0xffffffffc0319380
[7 .text..refcount] 0xffffffffc053ca1a
[11 __ksymtab_gpl] 0xffffffffc046e028
[2 __tracepoints] 0xffffffffc03927c0
[8 .altinstr_replacement] 0xffffffffc050be52
[1 .rodata.cst16.mask1] 0xffffffffc04b0100
[1 .rodata.cst16.MASK2] 0xffffffffc050c360
[11 __verbose] 0xffffffffc05b6c68
[1 .rodata.cst16.POLY] 0xffffffffc050c2f0
[1 .rodata.cst16.ONEf] 0xffffffffc050c6e0
[1 .rodata.cst16.ONE] 0xffffffffc050c370
[31 .data] 0xffffffffc031a0c0
[150 blacklist] 0xffffffffb8806170-0xffffffffb88061c0 perf_event_nmi_handler
[1 .rodata.cst16.POLY2] 0xffffffffc050c6d0
[2 .altinstr_aux] 0xffffffffc05c9988
[31 .init.text] 0xffffffffc031d000
[1 .init.rodata] 0xffffffffc03a3000
[20 __bug_table] 0xffffffffc03bb17c
[1 .rodata.cst16.enc] 0xffffffffc050c3b0
[1 framebuffer] modifier=0xffff921890038680
[1 .rodata.cst16.mask2] 0xffffffffc04b0110
[2 .ref.data] 0xffffffffc0392640
[52 printk_formats] 0xffffffffb98c1130 : "CPU_ON"
[8 .altinstructions] 0xffffffffc050c3c0
[26 .rodata.str1.8] 0xffffffffc03171f0
[37 .gnu.linkonce.this_module] 0xffffffffc031a380
[1 devices] B: KEY=402000000 3803078f800d001 feffffdfffefffff fffffffffffffffe
[19 __ksymtab_strings] 0xffffffffc0318ac8
[15 __ksymtab] 0xffffffffc0317028
[4 .data..read_mostly] 0xffffffffc054b5a8
[2 .fixup] 0xffffffffc058047b
[14 __jump_table] 0xffffffffc031a000
[1 .rodata.cst16.gf128mul_x_ble_mask] 0xffffffffc050c2e0
[32 .rodata] 0xffffffffc03179a0
[37 .text] 0xffffffffc0314000
[1 .rodata.cst16.MASK1] 0xffffffffc050c350
[2 __bpf_raw_tp_map] 0xffffffffc0392540
[1 uevent] KEY=402000000 3803078f800d001 feffffdfffefffff fffffffffffffffe
[8 __ex_table] 0xffffffffc054aac4
[1 .rodata.cst16.F_MIN_MASK] 0xffffffffc050c390
[2 .rodata.cst16.SHUF_MASK] 0xffffffffc050c330
[37 __mcount_loc] 0xffffffffc0317038
[21 .bss] 0xffffffffc031a6c0
[10 .text.unlikely] 0xffffffffc0316393
[17 __param] 0xffffffffc0318a00
[16 .parainstructions] 0xffffffffc0317978
[29 .rodata.str1.1] 0xffffffffc0317100
[37 modules] crct10dif_pclmul 16384 1 - Live 0xffffffffc04af000
[37 .note.gnu.build-id] 0xffffffffc0317000
[14 .smp_locks] 0xffffffffc046e2cc
[37 .strtab] 0xffffffffc0320788
[30 .exit.text] 0xffffffffc0316554
[31 .orc_unwind_ip] 0xffffffffc0318af9
[5 .data.once] 0xffffffffc054b5b0
[1 .rodata.cst16.TWOONE] 0xffffffffc050c310
[31 .orc_unwind] 0xffffffffc0318e61
[1 .rodata.cst16.dec] 0xffffffffc050c3a0
[2 _ftrace_events] 0xffffffffc0392600
[3 .init.data] 0xffffffffc031e000
[2 __tracepoints_strings] 0xffffffffc0386910
[1 .rodata.cst32.pshufb_shf_table] 0xffffffffc04b0140
To reproduce:
# build kernel
cd linux
cp config-4.19.0-rc2-00022-gb763775 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 prepare
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 modules_prepare
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 SHELL=/bin/bash
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen
80f8d3dbca ("tcp: fix zerocopy and notsent_lowat issues"): WARNING: CPU: 0 PID: 709 at net/core/stream.c:205 sk_stream_kill_queues
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://github.com/0day-ci/linux/commits/Eric-Dumazet/tcp-fix-zerocopy-an...
commit 80f8d3dbcab4d3e5020927d909a41a2410163301
Author: Eric Dumazet <edumazet(a)google.com>
AuthorDate: Mon Mar 25 13:44:34 2019 -0700
Commit: 0day robot <lkp(a)intel.com>
CommitDate: Tue Mar 26 21:58:19 2019 +0800
tcp: fix zerocopy and notsent_lowat issues
My recent patch had at least two problems:
1) TX zerocopy wants notification when skb is acknowledged,
thus we need to call skb_zcopy_clear() if the skb is
cached into sk->sk_tx_skb_cache
2) Some applications might expect precise EPOLLOUT notifications,
so we need to update sk->sk_wmem_queued and call
sk_mem_uncharge() from sk_wmem_free_skb() in all cases.
The SOCK_QUEUE_SHRUNK flag must also be set.
Fixes: 472c2e07eef0 ("tcp: add one skb cache for tx")
Signed-off-by: Eric Dumazet <edumazet(a)google.com>
Cc: Willem de Bruijn <willemb(a)google.com>
Cc: Soheil Hassas Yeganeh <soheil(a)google.com>
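For context, the fix the commit message describes amounts to doing the write-queue accounting on every free and clearing zerocopy state before an skb is parked in the tx cache. Below is a minimal sketch, assuming the sk_wmem_free_skb()/sk_tx_skb_cache shape introduced by 472c2e07eef0; it illustrates the described change and is not the actual diff:

    /* Sketch only: modelled on include/net/sock.h around v5.1 with the tx skb cache. */
    static inline void sk_wmem_free_skb(struct sock *sk, struct sk_buff *skb)
    {
            /* Problem 2: account the queued memory in all cases and set
             * SOCK_QUEUE_SHRUNK, so EPOLLOUT/notsent_lowat see the freed space.
             */
            sock_set_flag(sk, SOCK_QUEUE_SHRUNK);
            sk->sk_wmem_queued -= skb->truesize;
            sk_mem_uncharge(sk, skb->truesize);

            if (!sk->sk_tx_skb_cache) {
                    /* Problem 1: the skb is cached rather than freed, so the
                     * TX zerocopy completion notification must be raised here.
                     */
                    skb_zcopy_clear(skb, true);
                    sk->sk_tx_skb_cache = skb;
                    return;
            }
            __kfree_skb(skb);
    }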
68cc2999f6 Merge branch 'devlink-small-spring-cleanup'
80f8d3dbca tcp: fix zerocopy and notsent_lowat issues
+-----------------------------------------------------+------------+------------+
| | 68cc2999f6 | 80f8d3dbca |
+-----------------------------------------------------+------------+------------+
| boot_successes | 56 | 10 |
| boot_failures | 1 | 8 |
| Mem-Info | 1 | |
| WARNING:at_net/core/stream.c:#sk_stream_kill_queues | 0 | 8 |
| EIP:sk_stream_kill_queues | 0 | 8 |
| WARNING:at_net/ipv4/af_inet.c:#inet_sock_destruct | 0 | 8 |
| EIP:inet_sock_destruct | 0 | 8 |
+-----------------------------------------------------+------------+------------+
[ 26.057238] warning: process `trinity-c0' used the obsolete bdflush system call
[ 26.058938] Fix your initscripts?
[ 27.025711] warning: process `trinity-c0' used the obsolete bdflush system call
[ 27.027574] Fix your initscripts?
[ 64.912122] Writes: Total: 32081831 Max/Min: 0/0 Fail: 0
[ 72.958673] WARNING: CPU: 0 PID: 709 at net/core/stream.c:205 sk_stream_kill_queues+0x250/0x260
[ 72.960466] CPU: 0 PID: 709 Comm: trinity-c3 Not tainted 5.0.0-11759-g80f8d3d #1
[ 72.961826] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 72.963309] EIP: sk_stream_kill_queues+0x250/0x260
[ 72.964165] Code: 00 00 00 31 c9 e8 80 70 e9 fe 58 5a 5b 5e 5f 5d c3 90 89 d8 e8 b1 9f fe ff e9 56 ff ff ff 8d 74 26 00 0f 0b e9 ea fe ff ff 90 <0f> 0b e9 7a ff ff ff 90 0f 0b eb bf 90 90 90 90 55 89 e5 56 53 e8
[ 72.967129] EAX: 00000001 EBX: d7bf8040 ECX: 00000000 EDX: 00000001
[ 72.968171] ESI: 00000001 EDI: fffffd50 EBP: d6b93e3c ESP: d6b93e28
[ 72.969213] DS: 007b ES: 007b FS: 0000 GS: 00e0 SS: 0068 EFLAGS: 00010286
[ 72.970322] CR0: 80050033 CR2: 00003636 CR3: 17a42000 CR4: 001406b0
[ 72.971365] DR0: b6d1c000 DR1: b6d40000 DR2: 00000000 DR3: 00000000
[ 72.972410] DR6: ffff0ff0 DR7: 00000600
[ 72.973139] Call Trace:
[ 72.973698] inet_csk_destroy_sock+0x157/0x2a0
[ 72.974510] tcp_close+0x58b/0x640
[ 72.975186] inet_release+0x82/0x90
[ 72.975876] inet6_release+0x40/0x50
[ 72.976577] __sock_release+0xe7/0x120
[ 72.977296] sock_close+0x12/0x20
[ 72.977966] __fput+0x19d/0x3d0
[ 72.978612] ____fput+0xd/0x10
[ 72.979244] task_work_run+0x8c/0xc0
[ 72.979948] do_exit+0x30f/0x1210
[ 72.980616] ? trace_hardirqs_off+0x38/0xe0
[ 72.981393] do_group_exit+0x7e/0x100
[ 72.982102] sys_exit_group+0x18/0x20
[ 72.982815] do_int80_syscall_32+0xbf/0x260
[ 72.983593] entry_INT80_32+0xce/0xce
[ 72.984301] EIP: 0x809af42
[ 72.984894] Code: Bad RIP value.
[ 72.985550] EAX: ffffffda EBX: 00000000 ECX: 00000000 EDX: 00000000
[ 72.986590] ESI: 01108100 EDI: 4eaa2bf2 EBP: 6fbbe4f1 ESP: bfd700c8
[ 72.987658] DS: 007b ES: 007b FS: 0000 GS: 0033 SS: 007b EFLAGS: 00000296
[ 72.988771] ---[ end trace ec45b2e65e1b6578 ]---
[ 72.989635] WARNING: CPU: 0 PID: 709 at net/core/stream.c:206 sk_stream_kill_queues+0x258/0x260
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 524d47add1c303a8d1dfa09eb59d6ffd08211ad2 9e98c678c2d6ae3a17cb2de55d17f69dddaa231b --
git bisect good 7bfe3be5eb3005ce7db157100e3eb0165f780055 # 23:49 G 13 0 3 3 Merge 'regulator/for-next' into devel-catchup-201903262246
git bisect good af2340e3edb9b7aa9a8190894740067a98d9c5c4 # 23:58 G 14 0 0 0 Merge 'spi/for-5.2' into devel-catchup-201903262246
git bisect bad 1345716b12772ec739b375317c47e25f471ad450 # 00:14 B 9 5 0 0 Merge 'linux-review/Eric-Dumazet/tcp-fix-zerocopy-and-notsent_lowat-issues/20190326-215818' into devel-catchup-201903262246
git bisect good 570c8a7d53032b1773ecfc6d317402450ada6de4 # 00:31 G 19 0 0 0 net: phy: aquantia: check for supported interface modes in config_init
git bisect good c8b7abdd7d8e4696d5ffa25cebaa82931e0e39b3 # 00:47 G 19 0 0 0 ice: fix some function prototype and signature style issues
git bisect good d64fee0a0320ecc678903c30c2fed56b68979011 # 01:03 G 19 0 2 2 Merge tag 'mlx5-updates-2019-03-20' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux
git bisect good c3f10cbcaa3d5e1980733c3ccd0261df426412d2 # 01:16 G 19 0 1 1 bnxt: call devlink_port_type_eth_set() before port register
git bisect good faaccbe6eb07ecd590bebae11eb236661ecfb069 # 01:28 G 19 0 1 1 nfp: move devlink port type set after netdev registration
git bisect good b8f975545cdbcc316cf20e827e7966d4410b5c5a # 01:47 G 19 0 1 1 net: devlink: add port type spinlock
git bisect good 68cc2999f6926590c7783f2de12ba467ecad8c7d # 01:58 G 19 0 1 1 Merge branch 'devlink-small-spring-cleanup'
git bisect bad 80f8d3dbcab4d3e5020927d909a41a2410163301 # 02:09 B 2 2 0 0 tcp: fix zerocopy and notsent_lowat issues
# first bad commit: [80f8d3dbcab4d3e5020927d909a41a2410163301] tcp: fix zerocopy and notsent_lowat issues
git bisect good 68cc2999f6926590c7783f2de12ba467ecad8c7d # 02:17 G 60 0 4 5 Merge branch 'devlink-small-spring-cleanup'
# extra tests with debug options
git bisect bad 80f8d3dbcab4d3e5020927d909a41a2410163301 # 02:24 B 12 7 2 2 tcp: fix zerocopy and notsent_lowat issues
# extra tests on HEAD of linux-devel/devel-catchup-201903262246
git bisect bad 524d47add1c303a8d1dfa09eb59d6ffd08211ad2 # 02:24 B 4 7 0 1 0day head guard for 'devel-catchup-201903262246'
# extra tests on tree/branch linux-review/Eric-Dumazet/tcp-fix-zerocopy-and-notsent_lowat-issues/20190326-215818
git bisect bad 80f8d3dbcab4d3e5020927d909a41a2410163301 # 02:28 B 10 8 0 1 tcp: fix zerocopy and notsent_lowat issues
# extra tests with first bad commit reverted
git bisect good a66debe7c0f325673539a10aa3ef52db5460b1b3 # 02:37 G 20 0 1 1 Revert "tcp: fix zerocopy and notsent_lowat issues"
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/lkp Intel Corporation
Re: [LKP] [kernfs] e19dfdc83b: BUG:KASAN:global-out-of-bounds_in_s
by Ondrej Mosnacek
On Mon, Mar 25, 2019 at 4:17 PM Paul Moore <paul(a)paul-moore.com> wrote:
> Ondrej, please look into this.
>
> You've looked at this code more recently than I have, but it looks
> like there might be an issue with __kernfs_iattrs() returning a
> pointer to a kernfs_iattrs object without taking a kernfs reference
> (kernfs_get(kn)). Although I would be a little surprised if this was
> the problem as I think it would cause a number of issues beyond just
> this one ... ?
I think this is actually because of how xattr_full_name() reconstructs
the full name from the xattr suffix. It assumes that the suffix was
obtained from the full name by just taking a pointer inside it, but in
kernfs_security_xattr_get/set() I pass the suffix directly... I'm
surprised that this didn't fail spectacularly earlier during testing.
Maybe the newer GCC does some clever merging of the string constants,
so that XATTR_SELINUX_SUFFIX actually ends up as a substring of
XATTR_NAME_SELINUX? (That would be one hell of a "lucky" coincidence
:)
I'll post a patch that converts kernfs_security_xattr_get/set() to
take the full name and hopefully that will fix the problem. I'll see
if I can run the reproducer locally tomorrow...
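The pointer arithmetic behind that assumption is easy to see in isolation. Here is a userspace toy: the macro values mirror include/uapi/linux/xattr.h, and full_name_from_suffix() is a simplified stand-in for fs/xattr.c:xattr_full_name(), which effectively returns name - strlen(prefix):

    #include <stdio.h>
    #include <string.h>

    #define XATTR_SECURITY_PREFIX "security."
    #define XATTR_NAME_SELINUX    "security.selinux"
    #define XATTR_SELINUX_SUFFIX  "selinux"

    /* Simplified xattr_full_name(): step back over the handler prefix. */
    static const char *full_name_from_suffix(const char *suffix)
    {
            return suffix - strlen(XATTR_SECURITY_PREFIX);
    }

    int main(void)
    {
            /* Correct usage: the suffix is a pointer into the full name. */
            const char *full = XATTR_NAME_SELINUX;
            const char *good = full + strlen(XATTR_SECURITY_PREFIX);
            printf("ok:  %s\n", full_name_from_suffix(good));

            /* Buggy usage: a standalone suffix constant. The result points
             * 9 bytes before the "selinux" literal, i.e. outside that global --
             * the global-out-of-bounds KASAN reports when the result is later
             * strcmp()'d in simple_xattr_get().
             */
            const char *bad = XATTR_SELINUX_SUFFIX;
            printf("bad: %p (garbage)\n", (const void *)full_name_from_suffix(bad));
            return 0;
    }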
>
> On Mon, Mar 25, 2019 at 10:50 AM kernel test robot
> <rong.a.chen(a)intel.com> wrote:
> >
> > FYI, we noticed the following commit (built with gcc-7):
> >
> > commit: e19dfdc83b60f196e0653d683499f7bc5548128f ("kernfs: initialize security of newly created nodes")
> > https://git.kernel.org/cgit/linux/kernel/git/pcmoore/selinux.git next
> >
> > in testcase: locktorture
> > with following parameters:
> >
> > runtime: 300s
> > test: default
> >
> > test-description: This torture test consists of creating a number of kernel threads which acquire the lock and hold it for a specific amount of time, thus simulating different critical region behaviors.
> > test-url: https://www.kernel.org/doc/Documentation/locking/locktorture.txt
> >
> >
> > on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 2G
> >
> > caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
> >
> >
> > +-------------------------------------------------+------------+------------+
> > | | ec882da5cd | e19dfdc83b |
> > +-------------------------------------------------+------------+------------+
> > | boot_successes | 0 | 0 |
> > | boot_failures | 8 | 8 |
> > | BUG:kernel_reboot-without-warning_in_test_stage | 8 | |
> > | BUG:KASAN:global-out-of-bounds_in_s | 0 | 8 |
> > +-------------------------------------------------+------------+------------+
> >
> >
> >
> > [ 27.938038] BUG: KASAN: global-out-of-bounds in strcmp+0x97/0xa0
> > [ 27.940755] Read of size 1 at addr ffffffff946a83d7 by task systemd/1
> > [ 27.943554]
> > [ 27.944603] CPU: 0 PID: 1 Comm: systemd Not tainted 5.1.0-rc1-00010-ge19dfdc #1
> > [ 27.948091] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
> > [ 27.951946] Call Trace:
> > [ 27.953353] ? strcmp+0x97/0xa0
> > [ 27.955026] print_address_description+0x22/0x270
> > [ 27.957203] ? strcmp+0x97/0xa0
> > [ 27.958841] kasan_report+0x13b/0x1d0
> > [ 27.960759] ? strcmp+0x97/0xa0
> > [ 27.962378] ? strcmp+0x97/0xa0
> > [ 27.963976] strcmp+0x97/0xa0
> > [ 27.965846] simple_xattr_get+0x7b/0x120
> > [ 27.967473] selinux_kernfs_init_security+0x108/0x440
> > [ 27.969360] ? __radix_tree_replace+0x9a/0x230
> > [ 27.971200] ? selinux_secctx_to_secid+0x20/0x20
> > [ 27.973011] ? __fprop_inc_percpu_max+0x190/0x190
> > [ 27.975563] ? kvm_sched_clock_read+0x12/0x20
> > [ 27.977907] ? sched_clock+0x5/0x10
> > [ 27.979867] ? sched_clock_cpu+0x24/0xb0
> > [ 27.982048] ? idr_alloc_cyclic+0xcb/0x190
> > [ 27.984229] ? lock_downgrade+0x620/0x620
> > [ 27.986388] security_kernfs_init_security+0x3c/0x70
> > [ 27.989012] __kernfs_new_node+0x403/0x5e0
> > [ 27.991195] ? kernfs_dop_revalidate+0x330/0x330
> > [ 27.993589] ? css_next_child+0xec/0x260
> > [ 27.995685] ? css_next_descendant_pre+0x36/0x110
> > [ 27.998115] ? cgroup_propagate_control+0x2d6/0x460
> > [ 28.000662] kernfs_new_node+0x72/0x140
> > [ 28.002818] ? lockdep_hardirqs_on+0x379/0x560
> > [ 28.005171] ? cgroup_idr_replace+0x35/0x40
> > [ 28.007417] kernfs_create_dir_ns+0x26/0x130
> > [ 28.009690] cgroup_mkdir+0x3b9/0xef0
> > [ 28.011764] ? cgroup_destroy_locked+0x5e0/0x5e0
> > [ 28.014196] kernfs_iop_mkdir+0x12f/0x1b0
> > [ 28.016396] vfs_mkdir+0x2e6/0x510
> > [ 28.018317] do_mkdirat+0x19b/0x1f0
> > [ 28.020284] ? __x64_sys_mknod+0xb0/0xb0
> > [ 28.022437] do_syscall_64+0xe5/0x10d0
> > [ 28.024408] ? syscall_return_slowpath+0x790/0x790
> > [ 28.026874] ? entry_SYSCALL_64_after_hwframe+0x3e/0xbe
> > [ 28.029504] ? trace_hardirqs_off_caller+0x58/0x200
> > [ 28.031993] ? trace_hardirqs_off_thunk+0x1a/0x1c
> > [ 28.034438] entry_SYSCALL_64_after_hwframe+0x49/0xbe
> > [ 28.036748] RIP: 0033:0x7f38cab6f447
> > [ 28.038825] Code: 00 b8 ff ff ff ff c3 0f 1f 40 00 48 8b 05 49 da 2b 00 64 c7 00 5f 00 00 00 b8 ff ff ff ff c3 0f 1f 40 00 b8 53 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d 21 da 2b 00 f7 d8 64 89 01 48
> > [ 28.047736] RSP: 002b:00007ffeef143d88 EFLAGS: 00000246 ORIG_RAX: 0000000000000053
> > [ 28.051776] RAX: ffffffffffffffda RBX: 0000000000000001 RCX: 00007f38cab6f447
> > [ 28.055117] RDX: 00007ffeef143c30 RSI: 00000000000001ed RDI: 000055a7b0458560
> > [ 28.058533] RBP: 0000000000000040 R08: 0000000000000000 R09: 2f73662f7379732f
> > [ 28.062031] R10: 732f70756f726763 R11: 0000000000000246 R12: 000055a7b04b30a0
> > [ 28.065528] R13: 0000000000000000 R14: 000055a7b046bb88 R15: 000055a7b046b540
> > [ 28.068977]
> > [ 28.070240] The buggy address belongs to the variable:
> > [ 28.072491] securityfs_super_operations+0x4917/0x6220
> > [ 28.075171]
> > [ 28.076286] Memory state around the buggy address:
> > [ 28.078861] ffffffff946a8280: fa fa fa fa 00 01 fa fa fa fa fa fa 00 02 fa fa
> > [ 28.082610] ffffffff946a8300: fa fa fa fa 00 02 fa fa fa fa fa fa 00 01 fa fa
> > [ 28.086669] >ffffffff946a8380: fa fa fa fa 00 03 fa fa fa fa fa fa 00 fa fa fa
> > [ 28.090587] ^
> > [ 28.093576] ffffffff946a8400: fa fa fa fa 00 00 00 00 00 00 05 fa fa fa fa fa
> > [ 28.097599] ffffffff946a8480: 00 00 01 fa fa fa fa fa 00 00 00 00 00 00 00 00
> > [ 28.101453] ==================================================================
> > [ 28.105478] Disabling lock debugging due to kernel taint
> > Starting Load Kernel Modules...
> > Mounting Debug File System...
> > Listening on RPCbind Server Activation Socket.
> > Starting Remount Root and Kernel File Systems...
> > Starting Journal Service...
> > Mounting RPC Pipe File System...
> > [ 28.508319] _warn_unseeded_randomness: 131 callbacks suppressed
> > [ 28.508335] random: get_random_u64 called from copy_process+0x596/0x6450 with crng_init=1
> > Starting Create Static Device Nodes in /dev...
> > [ 28.552988] random: get_random_u64 called from arch_pick_mmap_layout+0x4a1/0x600 with crng_init=1
> > [ 28.556785] random: get_random_u64 called from arch_pick_mmap_layout+0x446/0x600 with crng_init=1
> > Starting Load/Save Random Seed...
> > Starting udev Coldplug all Devices...
> > Mounting FUSE Control File System...
> > Starting Apply Kernel Variables...
> > Mounting Configuration File System...
> > Starting Raise network interfaces...
> > Starting Preprocess NFS configuration...
> > Starting udev Kernel Device Manager...
> > Starting Flush Journal to Persistent Storage...
> > Starting Create Volatile Files and Directories...
> > [ 29.523554] random: get_random_u64 called from arch_pick_mmap_layout+0x446/0x600 with crng_init=1
> > [ 29.527262] random: get_random_u64 called from load_elf_binary+0x1281/0x2f30 with crng_init=1
> >
> > Starting RPC bind portmap service...
> > Starting Network Time Synchronization...
> > Starting Update UTMP about System Boot/Shutdown...
> > [ 30.574449] _warn_unseeded_randomness: 154 callbacks suppressed
> > [ 30.574479] random: get_random_u32 called from bucket_table_alloc+0x149/0x370 with crng_init=1
> > [ 32.628754] random: get_random_u64 called from arch_pick_mmap_layout+0x4a1/0x600 with crng_init=1
> > [ 32.632973] random: get_random_u64 called from arch_pick_mmap_layout+0x446/0x600 with crng_init=1
> > [ 32.637364] random: get_random_u64 called from load_elf_binary+0x1281/0x2f30 with crng_init=1
> > Starting Login Service...
> > Starting LSB: Start and stop bmc-watchdog...
> > Starting LSB: Execute the kexec -e command to reboot system...
> >
> >
> > To reproduce:
> >
> > # build kernel
> > cd linux
> > cp config-5.1.0-rc1-00010-ge19dfdc .config
> > make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig
> > make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 prepare
> > make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 modules_prepare
> > make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 SHELL=/bin/bash
> > make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 bzImage
> >
> >
> > git clone https://github.com/intel/lkp-tests.git
> > cd lkp-tests
> > find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz
> > bin/lkp qemu -k <bzImage> -m modules.cgz job-script # job-script is attached in this email
> >
> >
> >
> >
> > Thanks,
> > Rong Chen
> >
>
>
> --
> paul moore
> www.paul-moore.com
--
Ondrej Mosnacek <omosnace at redhat dot com>
Software Engineer, Security Technologies
Red Hat, Inc.
[cfg80211] 83db2c9ebd: hwsim.scan_multi_bssid_check_ie.fail
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 83db2c9ebd071566d3b45e4860b9803dfc36d1ca ("[PATCH 06/11] cfg80211: don't skip multi-bssid index element")
url: https://github.com/0day-ci/linux/commits/Luca-Coelho/mac80211-Increase-MA...
base: https://git.kernel.org/cgit/linux/kernel/git/jberg/mac80211-next.git master
in testcase: hwsim
with following parameters:
group: hwsim-12
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 4G
caused below changes (please refer to attached dmesg/kmsg for entire log/backtrace):
2019-03-25 02:31:59 ./run-tests.py scan_multi_bssid_check_ie
DEV: wlan0: 02:00:00:00:00:00
DEV: wlan1: 02:00:00:00:01:00
DEV: wlan2: 02:00:00:00:02:00
APDEV: wlan3
APDEV: wlan4
START scan_multi_bssid_check_ie 1/1
Test: Scan and check if nontransmitting BSS inherits IE from transmitting BSS
Starting AP wlan3
trans_bss beacon_ie: [0, 1, 3, 5, 71, 42, 45, 221, 50, 59, 61, 127]
nontrans_bss1 beacon_ie: [0, 1, 3, 5, 42, 45, 221, 50, 85, 59, 61, 127]
check IE failed
Traceback (most recent call last):
File "./run-tests.py", line 466, in main
t(dev, apdev)
File "/lkp/benchmarks/hwsim/tests/hwsim/test_scan.py", line 1703, in test_scan_multi_bssid_check_ie
raise Exception("check IE failed")
Exception: check IE failed
FAIL scan_multi_bssid_check_ie 0.671182 2019-03-25 02:32:01.152626
passed 0 test case(s)
skipped 0 test case(s)
failed tests: scan_multi_bssid_check_ie
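For readers skimming the log, the property the test asserts can be pictured as an element-ID-level comparison. The toy below is not the hwsim test's code: the ID lists are copied from the log above, 71 and 85 are the Multiple BSSID and Multiple BSSID-index element IDs, and the real test inspects more than bare element IDs:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    #define EID_MULTIPLE_BSSID   71
    #define EID_MULTI_BSSID_IDX  85

    static bool contains(const int *ids, size_t n, int id)
    {
            for (size_t i = 0; i < n; i++)
                    if (ids[i] == id)
                            return true;
            return false;
    }

    /* Nontransmitting BSS should inherit every transmitted element except the
     * Multiple BSSID element itself, and carry its own index element. */
    static bool inherits_ok(const int *trans, size_t nt, const int *nontrans, size_t nn)
    {
            for (size_t i = 0; i < nt; i++) {
                    if (trans[i] == EID_MULTIPLE_BSSID)
                            continue;
                    if (!contains(nontrans, nn, trans[i]))
                            return false;
            }
            return contains(nontrans, nn, EID_MULTI_BSSID_IDX);
    }

    int main(void)
    {
            /* Element IDs taken from the hwsim log above. */
            const int trans[]    = { 0, 1, 3, 5, 71, 42, 45, 221, 50, 59, 61, 127 };
            const int nontrans[] = { 0, 1, 3, 5, 42, 45, 221, 50, 85, 59, 61, 127 };

            printf("%s\n", inherits_ok(trans, sizeof(trans) / sizeof(trans[0]),
                                       nontrans, sizeof(nontrans) / sizeof(nontrans[0]))
                           ? "pass" : "fail");
            return 0;
    }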
To reproduce:
# build kernel
cd linux
cp config-5.0.0-rc7-02371-g83db2c9 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 prepare
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 modules_prepare
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 SHELL=/bin/bash
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen