[drm/mgag200] 90f479ae51: vm-scalability.median -18.8% regression
by kernel test robot
Greetings,
FYI, we noticed a -18.8% regression of vm-scalability.median due to commit:
commit: 90f479ae51afa45efab97afdde9b94b9660dd3e4 ("drm/mgag200: Replace struct mga_fbdev with generic framebuffer emulation")
https://kernel.googlesource.com/pub/scm/linux/kernel/git/next/linux-next.git master
in testcase: vm-scalability
on test machine: 288 threads Intel(R) Xeon Phi(TM) CPU 7295 @ 1.50GHz with 80G memory
with the following parameters:
runtime: 300s
size: 8T
test: anon-cow-seq-hugetlb
cpufreq_governor: performance
test-description: The motivation behind this suite is to exercise functions and regions of the mm/ subsystem of the Linux kernel which are of interest to us.
test-url: https://git.kernel.org/cgit/linux/kernel/git/wfg/vm-scalability.git/
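As a rough orientation for the profile data below, anon-cow-seq-hugetlb boils down to sequential copy-on-write faults on anonymous huge pages. The stand-alone sketch below is illustrative only (sizes, flags, and structure are assumptions, not the vm-scalability source); it shows the kind of access pattern that drives the hugetlb_fault/hugetlb_cow/copy_user_huge_page paths seen in the perf-profile section further down.

/* Illustrative sketch of the anon-cow-seq-hugetlb pattern; not the
 * vm-scalability source. Requires preallocated hugepages
 * (e.g. via /proc/sys/vm/nr_hugepages). */
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
	size_t len = 64UL << 21;	/* 64 x 2 MiB huge pages (assumed size) */
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
	if (p == MAP_FAILED)
		return 1;
	memset(p, 1, len);		/* populate the pages in the parent */

	if (fork() == 0) {
		/* Child: sequential writes to the privately mapped huge
		 * pages trigger hugetlb_cow() on every huge page. */
		for (size_t off = 0; off < len; off += 4096)
			p[off] = 2;
		_exit(0);
	}
	wait(NULL);
	return 0;
}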
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/rootfs/runtime/size/tbox_group/test/testcase:
gcc-7/performance/x86_64-rhel-7.6/debian-x86_64-2019-05-14.cgz/300s/8T/lkp-knm01/anon-cow-seq-hugetlb/vm-scalability
commit:
f1f8555dfb ("drm/bochs: Use shadow buffer for bochs framebuffer console")
90f479ae51 ("drm/mgag200: Replace struct mga_fbdev with generic framebuffer emulation")
f1f8555dfb9a70a2 90f479ae51afa45efab97afdde9
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
2:4 -50% :4 dmesg.WARNING:at#for_ip_interrupt_entry/0x
:4 25% 1:4 dmesg.WARNING:at_ip___perf_sw_event/0x
:4 25% 1:4 dmesg.WARNING:at_ip__fsnotify_parent/0x
%stddev %change %stddev
\ | \
43955 ± 2% -18.8% 35691 vm-scalability.median
0.06 ± 7% +193.0% 0.16 ± 2% vm-scalability.median_stddev
14906559 ± 2% -17.9% 12237079 vm-scalability.throughput
87651 ± 2% -17.4% 72374 vm-scalability.time.involuntary_context_switches
2086168 -23.6% 1594224 vm-scalability.time.minor_page_faults
15082 ± 2% -10.4% 13517 vm-scalability.time.percent_of_cpu_this_job_got
29987 -8.9% 27327 vm-scalability.time.system_time
15755 -12.4% 13795 vm-scalability.time.user_time
122011 -19.3% 98418 vm-scalability.time.voluntary_context_switches
3.034e+09 -23.6% 2.318e+09 vm-scalability.workload
242478 ± 12% +68.5% 408518 ± 23% cpuidle.POLL.time
2788 ± 21% +117.4% 6062 ± 26% cpuidle.POLL.usage
56653 ± 10% +64.4% 93144 ± 20% meminfo.Mapped
120392 ± 7% +14.0% 137212 ± 4% meminfo.Shmem
47221 ± 11% +77.1% 83634 ± 22% numa-meminfo.node0.Mapped
120465 ± 7% +13.9% 137205 ± 4% numa-meminfo.node0.Shmem
2885513 -16.5% 2409384 numa-numastat.node0.local_node
2885471 -16.5% 2409354 numa-numastat.node0.numa_hit
11813 ± 11% +76.3% 20824 ± 22% numa-vmstat.node0.nr_mapped
30096 ± 7% +13.8% 34238 ± 4% numa-vmstat.node0.nr_shmem
43.72 ± 2% +5.5 49.20 mpstat.cpu.all.idle%
0.03 ± 4% +0.0 0.05 ± 6% mpstat.cpu.all.soft%
19.51 -2.4 17.08 mpstat.cpu.all.usr%
1012 -7.9% 932.75 turbostat.Avg_MHz
32.38 ± 10% +25.8% 40.73 turbostat.CPU%c1
145.51 -3.1% 141.01 turbostat.PkgWatt
15.09 -19.2% 12.19 turbostat.RAMWatt
43.50 ± 2% +13.2% 49.25 vmstat.cpu.id
18.75 ± 2% -13.3% 16.25 ± 2% vmstat.cpu.us
152.00 ± 2% -9.5% 137.50 vmstat.procs.r
4800 -13.1% 4173 vmstat.system.cs
156170 -11.9% 137594 slabinfo.anon_vma.active_objs
3395 -11.9% 2991 slabinfo.anon_vma.active_slabs
156190 -11.9% 137606 slabinfo.anon_vma.num_objs
3395 -11.9% 2991 slabinfo.anon_vma.num_slabs
1716 ± 5% +11.5% 1913 ± 8% slabinfo.dmaengine-unmap-16.active_objs
1716 ± 5% +11.5% 1913 ± 8% slabinfo.dmaengine-unmap-16.num_objs
1767 ± 2% -19.0% 1431 ± 2% slabinfo.hugetlbfs_inode_cache.active_objs
1767 ± 2% -19.0% 1431 ± 2% slabinfo.hugetlbfs_inode_cache.num_objs
3597 ± 5% -16.4% 3006 ± 3% slabinfo.skbuff_ext_cache.active_objs
3597 ± 5% -16.4% 3006 ± 3% slabinfo.skbuff_ext_cache.num_objs
1330122 -23.6% 1016557 proc-vmstat.htlb_buddy_alloc_success
77214 ± 3% +6.4% 82128 ± 2% proc-vmstat.nr_active_anon
67277 +2.9% 69246 proc-vmstat.nr_anon_pages
218.50 ± 3% -10.6% 195.25 proc-vmstat.nr_dirtied
288628 +1.4% 292755 proc-vmstat.nr_file_pages
360.50 -2.7% 350.75 proc-vmstat.nr_inactive_file
14225 ± 9% +63.8% 23304 ± 20% proc-vmstat.nr_mapped
30109 ± 7% +13.8% 34259 ± 4% proc-vmstat.nr_shmem
99870 -1.3% 98597 proc-vmstat.nr_slab_unreclaimable
204.00 ± 4% -12.1% 179.25 proc-vmstat.nr_written
77214 ± 3% +6.4% 82128 ± 2% proc-vmstat.nr_zone_active_anon
360.50 -2.7% 350.75 proc-vmstat.nr_zone_inactive_file
8810 ± 19% -66.1% 2987 ± 42% proc-vmstat.numa_hint_faults
8810 ± 19% -66.1% 2987 ± 42% proc-vmstat.numa_hint_faults_local
2904082 -16.4% 2427026 proc-vmstat.numa_hit
2904081 -16.4% 2427025 proc-vmstat.numa_local
6.828e+08 -23.5% 5.221e+08 proc-vmstat.pgalloc_normal
2900008 -17.2% 2400195 proc-vmstat.pgfault
6.827e+08 -23.5% 5.22e+08 proc-vmstat.pgfree
1.635e+10 -17.0% 1.357e+10 perf-stat.i.branch-instructions
1.53 ± 4% -0.1 1.45 ± 3% perf-stat.i.branch-miss-rate%
2.581e+08 ± 3% -20.5% 2.051e+08 ± 2% perf-stat.i.branch-misses
12.66 +1.1 13.78 perf-stat.i.cache-miss-rate%
72720849 -12.0% 63958986 perf-stat.i.cache-misses
5.766e+08 -18.6% 4.691e+08 perf-stat.i.cache-references
4674 ± 2% -13.0% 4064 perf-stat.i.context-switches
4.29 +12.5% 4.83 perf-stat.i.cpi
2.573e+11 -7.4% 2.383e+11 perf-stat.i.cpu-cycles
231.35 -21.5% 181.56 perf-stat.i.cpu-migrations
3522 +4.4% 3677 perf-stat.i.cycles-between-cache-misses
0.09 ± 13% +0.0 0.12 ± 5% perf-stat.i.iTLB-load-miss-rate%
5.894e+10 -15.8% 4.961e+10 perf-stat.i.iTLB-loads
5.901e+10 -15.8% 4.967e+10 perf-stat.i.instructions
1291 ± 14% -21.8% 1010 perf-stat.i.instructions-per-iTLB-miss
0.24 -11.0% 0.21 perf-stat.i.ipc
9476 -17.5% 7821 perf-stat.i.minor-faults
9478 -17.5% 7821 perf-stat.i.page-faults
9.76 -3.6% 9.41 perf-stat.overall.MPKI
1.59 ± 4% -0.1 1.52 perf-stat.overall.branch-miss-rate%
12.61 +1.1 13.71 perf-stat.overall.cache-miss-rate%
4.38 +10.5% 4.83 perf-stat.overall.cpi
3557 +5.3% 3747 perf-stat.overall.cycles-between-cache-misses
0.08 ± 12% +0.0 0.10 perf-stat.overall.iTLB-load-miss-rate%
1268 ± 15% -23.0% 976.22 perf-stat.overall.instructions-per-iTLB-miss
0.23 -9.5% 0.21 perf-stat.overall.ipc
5815 +9.7% 6378 perf-stat.overall.path-length
1.634e+10 -17.5% 1.348e+10 perf-stat.ps.branch-instructions
2.595e+08 ± 3% -21.2% 2.043e+08 ± 2% perf-stat.ps.branch-misses
72565205 -12.2% 63706339 perf-stat.ps.cache-misses
5.754e+08 -19.2% 4.646e+08 perf-stat.ps.cache-references
4640 ± 2% -12.5% 4060 perf-stat.ps.context-switches
2.581e+11 -7.5% 2.387e+11 perf-stat.ps.cpu-cycles
229.91 -22.0% 179.42 perf-stat.ps.cpu-migrations
5.889e+10 -16.3% 4.927e+10 perf-stat.ps.iTLB-loads
5.899e+10 -16.3% 4.938e+10 perf-stat.ps.instructions
9388 -18.2% 7677 perf-stat.ps.minor-faults
9389 -18.2% 7677 perf-stat.ps.page-faults
1.764e+13 -16.2% 1.479e+13 perf-stat.total.instructions
46803 ± 3% -18.8% 37982 ± 6% sched_debug.cfs_rq:/.exec_clock.min
5320 ± 3% +23.7% 6581 ± 3% sched_debug.cfs_rq:/.exec_clock.stddev
6737 ± 14% +58.1% 10649 ± 10% sched_debug.cfs_rq:/.load.avg
587978 ± 17% +58.2% 930382 ± 9% sched_debug.cfs_rq:/.load.max
46952 ± 16% +64.8% 77388 ± 11% sched_debug.cfs_rq:/.load.stddev
7.12 ± 4% +49.1% 10.62 ± 6% sched_debug.cfs_rq:/.load_avg.avg
474.40 ± 23% +67.5% 794.60 ± 10% sched_debug.cfs_rq:/.load_avg.max
37.70 ± 11% +74.8% 65.90 ± 9% sched_debug.cfs_rq:/.load_avg.stddev
13424269 ± 4% -15.6% 11328098 ± 2% sched_debug.cfs_rq:/.min_vruntime.avg
15411275 ± 3% -12.4% 13505072 ± 2% sched_debug.cfs_rq:/.min_vruntime.max
7939295 ± 6% -17.5% 6551322 ± 7% sched_debug.cfs_rq:/.min_vruntime.min
21.44 ± 7% -56.1% 9.42 ± 4% sched_debug.cfs_rq:/.nr_spread_over.avg
117.45 ± 11% -60.6% 46.30 ± 14% sched_debug.cfs_rq:/.nr_spread_over.max
19.33 ± 8% -66.4% 6.49 ± 9% sched_debug.cfs_rq:/.nr_spread_over.stddev
4.32 ± 15% +84.4% 7.97 ± 3% sched_debug.cfs_rq:/.runnable_load_avg.avg
353.85 ± 29% +118.8% 774.35 ± 11% sched_debug.cfs_rq:/.runnable_load_avg.max
27.30 ± 24% +118.5% 59.64 ± 9% sched_debug.cfs_rq:/.runnable_load_avg.stddev
6729 ± 14% +58.2% 10644 ± 10% sched_debug.cfs_rq:/.runnable_weight.avg
587978 ± 17% +58.2% 930382 ± 9% sched_debug.cfs_rq:/.runnable_weight.max
46950 ± 16% +64.8% 77387 ± 11% sched_debug.cfs_rq:/.runnable_weight.stddev
5305069 ± 4% -17.4% 4380376 ± 7% sched_debug.cfs_rq:/.spread0.avg
7328745 ± 3% -9.9% 6600897 ± 3% sched_debug.cfs_rq:/.spread0.max
2220837 ± 4% +55.8% 3460596 ± 5% sched_debug.cpu.avg_idle.avg
4590666 ± 9% +76.8% 8117037 ± 15% sched_debug.cpu.avg_idle.max
485052 ± 7% +80.3% 874679 ± 10% sched_debug.cpu.avg_idle.stddev
561.50 ± 26% +37.7% 773.30 ± 15% sched_debug.cpu.clock.stddev
561.50 ± 26% +37.7% 773.30 ± 15% sched_debug.cpu.clock_task.stddev
3.20 ± 10% +109.6% 6.70 ± 3% sched_debug.cpu.cpu_load[0].avg
309.10 ± 20% +150.3% 773.75 ± 12% sched_debug.cpu.cpu_load[0].max
21.02 ± 14% +160.8% 54.80 ± 9% sched_debug.cpu.cpu_load[0].stddev
3.19 ± 8% +109.8% 6.70 ± 3% sched_debug.cpu.cpu_load[1].avg
299.75 ± 19% +158.0% 773.30 ± 12% sched_debug.cpu.cpu_load[1].max
20.32 ± 12% +168.7% 54.62 ± 9% sched_debug.cpu.cpu_load[1].stddev
3.20 ± 8% +109.1% 6.69 ± 4% sched_debug.cpu.cpu_load[2].avg
288.90 ± 20% +167.0% 771.40 ± 12% sched_debug.cpu.cpu_load[2].max
19.70 ± 12% +175.4% 54.27 ± 9% sched_debug.cpu.cpu_load[2].stddev
3.16 ± 8% +110.9% 6.66 ± 6% sched_debug.cpu.cpu_load[3].avg
275.50 ± 24% +178.4% 766.95 ± 12% sched_debug.cpu.cpu_load[3].max
18.92 ± 15% +184.2% 53.77 ± 10% sched_debug.cpu.cpu_load[3].stddev
3.08 ± 8% +115.7% 6.65 ± 7% sched_debug.cpu.cpu_load[4].avg
263.55 ± 28% +188.7% 760.85 ± 12% sched_debug.cpu.cpu_load[4].max
18.03 ± 18% +196.6% 53.46 ± 11% sched_debug.cpu.cpu_load[4].stddev
14543 -9.6% 13150 sched_debug.cpu.curr->pid.max
5293 ± 16% +74.7% 9248 ± 11% sched_debug.cpu.load.avg
587978 ± 17% +58.2% 930382 ± 9% sched_debug.cpu.load.max
40887 ± 19% +78.3% 72891 ± 9% sched_debug.cpu.load.stddev
1141679 ± 4% +56.9% 1790907 ± 5% sched_debug.cpu.max_idle_balance_cost.avg
2432100 ± 9% +72.6% 4196779 ± 13% sched_debug.cpu.max_idle_balance_cost.max
745656 +29.3% 964170 ± 5% sched_debug.cpu.max_idle_balance_cost.min
239032 ± 9% +81.9% 434806 ± 10% sched_debug.cpu.max_idle_balance_cost.stddev
0.00 ± 27% +92.1% 0.00 ± 31% sched_debug.cpu.next_balance.stddev
1030 ± 4% -10.4% 924.00 ± 2% sched_debug.cpu.nr_switches.min
0.04 ± 26% +139.0% 0.09 ± 41% sched_debug.cpu.nr_uninterruptible.avg
830.35 ± 6% -12.0% 730.50 ± 2% sched_debug.cpu.sched_count.min
912.00 ± 2% -9.5% 825.38 sched_debug.cpu.ttwu_count.avg
433.05 ± 3% -19.2% 350.05 ± 3% sched_debug.cpu.ttwu_count.min
160.70 ± 3% -12.5% 140.60 ± 4% sched_debug.cpu.ttwu_local.min
9072 ± 11% -36.4% 5767 ± 8% softirqs.CPU1.RCU
12769 ± 5% +15.3% 14718 ± 3% softirqs.CPU101.SCHED
13198 +11.5% 14717 ± 3% softirqs.CPU102.SCHED
12981 ± 4% +13.9% 14788 ± 3% softirqs.CPU105.SCHED
13486 ± 3% +11.8% 15071 ± 4% softirqs.CPU111.SCHED
12794 ± 4% +14.1% 14601 ± 9% softirqs.CPU112.SCHED
12999 ± 4% +10.1% 14314 ± 4% softirqs.CPU115.SCHED
12844 ± 4% +10.6% 14202 ± 2% softirqs.CPU120.SCHED
13336 ± 3% +9.4% 14585 ± 3% softirqs.CPU122.SCHED
12639 ± 4% +20.2% 15195 softirqs.CPU123.SCHED
13040 ± 5% +15.2% 15024 ± 5% softirqs.CPU126.SCHED
13123 +15.1% 15106 ± 5% softirqs.CPU127.SCHED
9188 ± 6% -35.7% 5911 ± 2% softirqs.CPU13.RCU
13054 ± 3% +13.1% 14761 ± 5% softirqs.CPU130.SCHED
13158 ± 2% +13.9% 14985 ± 5% softirqs.CPU131.SCHED
12797 ± 6% +13.5% 14524 ± 3% softirqs.CPU133.SCHED
12452 ± 5% +14.8% 14297 softirqs.CPU134.SCHED
13078 ± 3% +10.4% 14439 ± 3% softirqs.CPU138.SCHED
12617 ± 2% +14.5% 14442 ± 5% softirqs.CPU139.SCHED
12974 ± 3% +13.7% 14752 ± 4% softirqs.CPU142.SCHED
12579 ± 4% +19.1% 14983 ± 3% softirqs.CPU143.SCHED
9122 ± 24% -44.6% 5053 ± 5% softirqs.CPU144.RCU
13366 ± 2% +11.1% 14848 ± 3% softirqs.CPU149.SCHED
13246 ± 2% +22.0% 16162 ± 7% softirqs.CPU150.SCHED
13452 ± 3% +20.5% 16210 ± 7% softirqs.CPU151.SCHED
13507 +10.1% 14869 softirqs.CPU156.SCHED
13808 ± 3% +9.2% 15079 ± 4% softirqs.CPU157.SCHED
13442 ± 2% +13.4% 15248 ± 4% softirqs.CPU160.SCHED
13311 +12.1% 14920 ± 2% softirqs.CPU162.SCHED
13544 ± 3% +8.5% 14695 ± 4% softirqs.CPU163.SCHED
13648 ± 3% +11.2% 15179 ± 2% softirqs.CPU166.SCHED
13404 ± 4% +12.5% 15079 ± 3% softirqs.CPU168.SCHED
13421 ± 6% +16.0% 15568 ± 8% softirqs.CPU169.SCHED
13115 ± 3% +23.1% 16139 ± 10% softirqs.CPU171.SCHED
13424 ± 6% +10.4% 14822 ± 3% softirqs.CPU175.SCHED
13274 ± 3% +13.7% 15087 ± 9% softirqs.CPU185.SCHED
13409 ± 3% +12.3% 15063 ± 3% softirqs.CPU190.SCHED
13181 ± 7% +13.4% 14946 ± 3% softirqs.CPU196.SCHED
13578 ± 3% +10.9% 15061 softirqs.CPU197.SCHED
13323 ± 5% +24.8% 16627 ± 6% softirqs.CPU198.SCHED
14072 ± 2% +12.3% 15798 ± 7% softirqs.CPU199.SCHED
12604 ± 13% +17.9% 14865 softirqs.CPU201.SCHED
13380 ± 4% +14.8% 15356 ± 3% softirqs.CPU203.SCHED
13481 ± 8% +14.2% 15390 ± 3% softirqs.CPU204.SCHED
12921 ± 2% +13.8% 14710 ± 3% softirqs.CPU206.SCHED
13468 +13.0% 15218 ± 2% softirqs.CPU208.SCHED
13253 ± 2% +13.1% 14992 softirqs.CPU209.SCHED
13319 ± 2% +14.3% 15225 ± 7% softirqs.CPU210.SCHED
13673 ± 5% +16.3% 15895 ± 3% softirqs.CPU211.SCHED
13290 +17.0% 15556 ± 5% softirqs.CPU212.SCHED
13455 ± 4% +14.4% 15392 ± 3% softirqs.CPU213.SCHED
13454 ± 4% +14.3% 15377 ± 3% softirqs.CPU215.SCHED
13872 ± 7% +9.7% 15221 ± 5% softirqs.CPU220.SCHED
13555 ± 4% +17.3% 15896 ± 5% softirqs.CPU222.SCHED
13411 ± 4% +20.8% 16197 ± 6% softirqs.CPU223.SCHED
8472 ± 21% -44.8% 4680 ± 3% softirqs.CPU224.RCU
13141 ± 3% +16.2% 15265 ± 7% softirqs.CPU225.SCHED
14084 ± 3% +8.2% 15242 ± 2% softirqs.CPU226.SCHED
13528 ± 4% +11.3% 15063 ± 4% softirqs.CPU228.SCHED
13218 ± 3% +16.3% 15377 ± 4% softirqs.CPU229.SCHED
14031 ± 4% +10.2% 15467 ± 2% softirqs.CPU231.SCHED
13770 ± 3% +14.0% 15700 ± 3% softirqs.CPU232.SCHED
13456 ± 3% +12.3% 15105 ± 3% softirqs.CPU233.SCHED
13137 ± 4% +13.5% 14909 ± 3% softirqs.CPU234.SCHED
13318 ± 2% +14.7% 15280 ± 2% softirqs.CPU235.SCHED
13690 ± 2% +13.7% 15563 ± 7% softirqs.CPU238.SCHED
13771 ± 5% +20.8% 16634 ± 7% softirqs.CPU241.SCHED
13317 ± 7% +19.5% 15919 ± 9% softirqs.CPU243.SCHED
8234 ± 16% -43.9% 4616 ± 5% softirqs.CPU244.RCU
13845 ± 6% +13.0% 15643 ± 3% softirqs.CPU244.SCHED
13179 ± 3% +16.3% 15323 softirqs.CPU246.SCHED
13754 +12.2% 15438 ± 3% softirqs.CPU248.SCHED
13769 ± 4% +10.9% 15276 ± 2% softirqs.CPU252.SCHED
13702 +10.5% 15147 ± 2% softirqs.CPU254.SCHED
13315 ± 2% +12.5% 14980 ± 3% softirqs.CPU255.SCHED
13785 ± 3% +12.9% 15568 ± 5% softirqs.CPU256.SCHED
13307 ± 3% +15.0% 15298 ± 3% softirqs.CPU257.SCHED
13864 ± 3% +10.5% 15313 ± 2% softirqs.CPU259.SCHED
13879 ± 2% +11.4% 15465 softirqs.CPU261.SCHED
13815 +13.6% 15687 ± 5% softirqs.CPU264.SCHED
119574 ± 2% +11.8% 133693 ± 11% softirqs.CPU266.TIMER
13688 +10.9% 15180 ± 6% softirqs.CPU267.SCHED
11716 ± 4% +19.3% 13974 ± 8% softirqs.CPU27.SCHED
13866 ± 3% +13.7% 15765 ± 4% softirqs.CPU271.SCHED
13887 ± 5% +12.5% 15621 softirqs.CPU272.SCHED
13383 ± 3% +19.8% 16031 ± 2% softirqs.CPU274.SCHED
13347 +14.1% 15232 ± 3% softirqs.CPU275.SCHED
12884 ± 2% +21.0% 15593 ± 4% softirqs.CPU276.SCHED
13131 ± 5% +13.4% 14891 ± 5% softirqs.CPU277.SCHED
12891 ± 2% +19.2% 15371 ± 4% softirqs.CPU278.SCHED
13313 ± 4% +13.0% 15049 ± 2% softirqs.CPU279.SCHED
13514 ± 3% +10.2% 14897 ± 2% softirqs.CPU280.SCHED
13501 ± 3% +13.7% 15346 softirqs.CPU281.SCHED
13261 +17.5% 15577 softirqs.CPU282.SCHED
8076 ± 15% -43.7% 4546 ± 5% softirqs.CPU283.RCU
13686 ± 3% +12.6% 15413 ± 2% softirqs.CPU284.SCHED
13439 ± 2% +9.2% 14670 ± 4% softirqs.CPU285.SCHED
8878 ± 9% -35.4% 5735 ± 4% softirqs.CPU35.RCU
11690 ± 2% +13.6% 13274 ± 5% softirqs.CPU40.SCHED
11714 ± 2% +19.3% 13975 ± 13% softirqs.CPU41.SCHED
11763 +12.5% 13239 ± 4% softirqs.CPU45.SCHED
11662 ± 2% +9.4% 12757 ± 3% softirqs.CPU46.SCHED
11805 ± 2% +9.3% 12902 ± 2% softirqs.CPU50.SCHED
12158 ± 3% +12.3% 13655 ± 8% softirqs.CPU55.SCHED
11716 ± 4% +8.8% 12751 ± 3% softirqs.CPU58.SCHED
11922 ± 2% +9.9% 13100 ± 4% softirqs.CPU64.SCHED
9674 ± 17% -41.8% 5625 ± 6% softirqs.CPU66.RCU
11818 +12.0% 13237 softirqs.CPU66.SCHED
124682 ± 7% -6.1% 117088 ± 5% softirqs.CPU66.TIMER
8637 ± 9% -34.0% 5700 ± 7% softirqs.CPU70.RCU
11624 ± 2% +11.0% 12901 ± 2% softirqs.CPU70.SCHED
12372 ± 2% +13.2% 14003 ± 3% softirqs.CPU71.SCHED
9949 ± 25% -33.9% 6574 ± 31% softirqs.CPU72.RCU
10392 ± 26% -35.1% 6745 ± 35% softirqs.CPU73.RCU
12766 ± 3% +11.1% 14188 ± 3% softirqs.CPU76.SCHED
12611 ± 2% +18.8% 14984 ± 5% softirqs.CPU78.SCHED
12786 ± 3% +17.9% 15079 ± 7% softirqs.CPU79.SCHED
11947 ± 4% +9.7% 13103 ± 4% softirqs.CPU8.SCHED
13379 ± 7% +11.8% 14962 ± 4% softirqs.CPU83.SCHED
13438 ± 5% +9.7% 14738 ± 2% softirqs.CPU84.SCHED
12768 +19.4% 15241 ± 6% softirqs.CPU88.SCHED
8604 ± 13% -39.3% 5222 ± 3% softirqs.CPU89.RCU
13077 ± 2% +17.1% 15308 ± 7% softirqs.CPU89.SCHED
11887 ± 3% +20.1% 14272 ± 5% softirqs.CPU9.SCHED
12723 ± 3% +11.3% 14165 ± 4% softirqs.CPU90.SCHED
8439 ± 12% -38.9% 5153 ± 4% softirqs.CPU91.RCU
13429 ± 3% +10.3% 14806 ± 2% softirqs.CPU95.SCHED
12852 ± 4% +10.3% 14174 ± 5% softirqs.CPU96.SCHED
13010 ± 2% +14.4% 14888 ± 5% softirqs.CPU97.SCHED
2315644 ± 4% -36.2% 1477200 ± 4% softirqs.RCU
1572 ± 10% +63.9% 2578 ± 39% interrupts.CPU0.NMI:Non-maskable_interrupts
1572 ± 10% +63.9% 2578 ± 39% interrupts.CPU0.PMI:Performance_monitoring_interrupts
252.00 ± 11% -35.2% 163.25 ± 13% interrupts.CPU104.RES:Rescheduling_interrupts
2738 ± 24% +52.4% 4173 ± 19% interrupts.CPU105.NMI:Non-maskable_interrupts
2738 ± 24% +52.4% 4173 ± 19% interrupts.CPU105.PMI:Performance_monitoring_interrupts
245.75 ± 19% -31.0% 169.50 ± 7% interrupts.CPU105.RES:Rescheduling_interrupts
228.75 ± 13% -24.7% 172.25 ± 19% interrupts.CPU106.RES:Rescheduling_interrupts
2243 ± 15% +66.3% 3730 ± 35% interrupts.CPU113.NMI:Non-maskable_interrupts
2243 ± 15% +66.3% 3730 ± 35% interrupts.CPU113.PMI:Performance_monitoring_interrupts
2703 ± 31% +67.0% 4514 ± 33% interrupts.CPU118.NMI:Non-maskable_interrupts
2703 ± 31% +67.0% 4514 ± 33% interrupts.CPU118.PMI:Performance_monitoring_interrupts
2613 ± 25% +42.2% 3715 ± 24% interrupts.CPU121.NMI:Non-maskable_interrupts
2613 ± 25% +42.2% 3715 ± 24% interrupts.CPU121.PMI:Performance_monitoring_interrupts
311.50 ± 23% -47.7% 163.00 ± 9% interrupts.CPU122.RES:Rescheduling_interrupts
266.75 ± 19% -31.6% 182.50 ± 15% interrupts.CPU124.RES:Rescheduling_interrupts
293.75 ± 33% -32.3% 198.75 ± 19% interrupts.CPU125.RES:Rescheduling_interrupts
2601 ± 36% +43.2% 3724 ± 29% interrupts.CPU127.NMI:Non-maskable_interrupts
2601 ± 36% +43.2% 3724 ± 29% interrupts.CPU127.PMI:Performance_monitoring_interrupts
2258 ± 21% +68.2% 3797 ± 29% interrupts.CPU13.NMI:Non-maskable_interrupts
2258 ± 21% +68.2% 3797 ± 29% interrupts.CPU13.PMI:Performance_monitoring_interrupts
3338 ± 29% +54.6% 5160 ± 9% interrupts.CPU139.NMI:Non-maskable_interrupts
3338 ± 29% +54.6% 5160 ± 9% interrupts.CPU139.PMI:Performance_monitoring_interrupts
219.50 ± 27% -23.0% 169.00 ± 21% interrupts.CPU139.RES:Rescheduling_interrupts
290.25 ± 25% -32.5% 196.00 ± 11% interrupts.CPU14.RES:Rescheduling_interrupts
243.50 ± 4% -16.0% 204.50 ± 12% interrupts.CPU140.RES:Rescheduling_interrupts
1797 ± 15% +135.0% 4223 ± 46% interrupts.CPU147.NMI:Non-maskable_interrupts
1797 ± 15% +135.0% 4223 ± 46% interrupts.CPU147.PMI:Performance_monitoring_interrupts
2537 ± 22% +89.6% 4812 ± 28% interrupts.CPU15.NMI:Non-maskable_interrupts
2537 ± 22% +89.6% 4812 ± 28% interrupts.CPU15.PMI:Performance_monitoring_interrupts
292.25 ± 34% -33.9% 193.25 ± 6% interrupts.CPU15.RES:Rescheduling_interrupts
424.25 ± 37% -58.5% 176.25 ± 14% interrupts.CPU158.RES:Rescheduling_interrupts
312.50 ± 42% -54.2% 143.00 ± 18% interrupts.CPU159.RES:Rescheduling_interrupts
725.00 ±118% -75.7% 176.25 ± 14% interrupts.CPU163.RES:Rescheduling_interrupts
2367 ± 6% +59.9% 3786 ± 24% interrupts.CPU177.NMI:Non-maskable_interrupts
2367 ± 6% +59.9% 3786 ± 24% interrupts.CPU177.PMI:Performance_monitoring_interrupts
239.50 ± 30% -46.6% 128.00 ± 14% interrupts.CPU179.RES:Rescheduling_interrupts
320.75 ± 15% -24.0% 243.75 ± 20% interrupts.CPU20.RES:Rescheduling_interrupts
302.50 ± 17% -47.2% 159.75 ± 8% interrupts.CPU200.RES:Rescheduling_interrupts
2166 ± 5% +92.0% 4157 ± 40% interrupts.CPU207.NMI:Non-maskable_interrupts
2166 ± 5% +92.0% 4157 ± 40% interrupts.CPU207.PMI:Performance_monitoring_interrupts
217.00 ± 11% -34.6% 142.00 ± 12% interrupts.CPU214.RES:Rescheduling_interrupts
2610 ± 36% +47.4% 3848 ± 35% interrupts.CPU215.NMI:Non-maskable_interrupts
2610 ± 36% +47.4% 3848 ± 35% interrupts.CPU215.PMI:Performance_monitoring_interrupts
2046 ± 13% +118.6% 4475 ± 43% interrupts.CPU22.NMI:Non-maskable_interrupts
2046 ± 13% +118.6% 4475 ± 43% interrupts.CPU22.PMI:Performance_monitoring_interrupts
289.50 ± 28% -41.1% 170.50 ± 8% interrupts.CPU22.RES:Rescheduling_interrupts
2232 ± 6% +33.0% 2970 ± 24% interrupts.CPU221.NMI:Non-maskable_interrupts
2232 ± 6% +33.0% 2970 ± 24% interrupts.CPU221.PMI:Performance_monitoring_interrupts
4552 ± 12% -27.6% 3295 ± 15% interrupts.CPU222.NMI:Non-maskable_interrupts
4552 ± 12% -27.6% 3295 ± 15% interrupts.CPU222.PMI:Performance_monitoring_interrupts
2013 ± 15% +80.9% 3641 ± 27% interrupts.CPU226.NMI:Non-maskable_interrupts
2013 ± 15% +80.9% 3641 ± 27% interrupts.CPU226.PMI:Performance_monitoring_interrupts
2575 ± 49% +67.1% 4302 ± 34% interrupts.CPU227.NMI:Non-maskable_interrupts
2575 ± 49% +67.1% 4302 ± 34% interrupts.CPU227.PMI:Performance_monitoring_interrupts
248.00 ± 36% -36.3% 158.00 ± 19% interrupts.CPU228.RES:Rescheduling_interrupts
2441 ± 24% +43.0% 3490 ± 30% interrupts.CPU23.NMI:Non-maskable_interrupts
2441 ± 24% +43.0% 3490 ± 30% interrupts.CPU23.PMI:Performance_monitoring_interrupts
404.25 ± 69% -65.5% 139.50 ± 17% interrupts.CPU236.RES:Rescheduling_interrupts
566.50 ± 40% -73.6% 149.50 ± 31% interrupts.CPU237.RES:Rescheduling_interrupts
243.50 ± 26% -37.1% 153.25 ± 21% interrupts.CPU248.RES:Rescheduling_interrupts
258.25 ± 12% -53.5% 120.00 ± 18% interrupts.CPU249.RES:Rescheduling_interrupts
2888 ± 27% +49.4% 4313 ± 30% interrupts.CPU253.NMI:Non-maskable_interrupts
2888 ± 27% +49.4% 4313 ± 30% interrupts.CPU253.PMI:Performance_monitoring_interrupts
2468 ± 44% +67.3% 4131 ± 37% interrupts.CPU256.NMI:Non-maskable_interrupts
2468 ± 44% +67.3% 4131 ± 37% interrupts.CPU256.PMI:Performance_monitoring_interrupts
425.00 ± 59% -60.3% 168.75 ± 34% interrupts.CPU258.RES:Rescheduling_interrupts
1859 ± 16% +106.3% 3834 ± 44% interrupts.CPU268.NMI:Non-maskable_interrupts
1859 ± 16% +106.3% 3834 ± 44% interrupts.CPU268.PMI:Performance_monitoring_interrupts
2684 ± 28% +61.2% 4326 ± 36% interrupts.CPU269.NMI:Non-maskable_interrupts
2684 ± 28% +61.2% 4326 ± 36% interrupts.CPU269.PMI:Performance_monitoring_interrupts
2171 ± 6% +108.8% 4533 ± 20% interrupts.CPU270.NMI:Non-maskable_interrupts
2171 ± 6% +108.8% 4533 ± 20% interrupts.CPU270.PMI:Performance_monitoring_interrupts
2262 ± 14% +61.8% 3659 ± 37% interrupts.CPU273.NMI:Non-maskable_interrupts
2262 ± 14% +61.8% 3659 ± 37% interrupts.CPU273.PMI:Performance_monitoring_interrupts
2203 ± 11% +50.7% 3320 ± 38% interrupts.CPU279.NMI:Non-maskable_interrupts
2203 ± 11% +50.7% 3320 ± 38% interrupts.CPU279.PMI:Performance_monitoring_interrupts
2433 ± 17% +52.9% 3721 ± 25% interrupts.CPU280.NMI:Non-maskable_interrupts
2433 ± 17% +52.9% 3721 ± 25% interrupts.CPU280.PMI:Performance_monitoring_interrupts
2778 ± 33% +63.1% 4531 ± 36% interrupts.CPU283.NMI:Non-maskable_interrupts
2778 ± 33% +63.1% 4531 ± 36% interrupts.CPU283.PMI:Performance_monitoring_interrupts
331.75 ± 32% -39.8% 199.75 ± 17% interrupts.CPU29.RES:Rescheduling_interrupts
2178 ± 22% +53.9% 3353 ± 31% interrupts.CPU3.NMI:Non-maskable_interrupts
2178 ± 22% +53.9% 3353 ± 31% interrupts.CPU3.PMI:Performance_monitoring_interrupts
298.50 ± 30% -39.7% 180.00 ± 6% interrupts.CPU34.RES:Rescheduling_interrupts
2490 ± 3% +58.7% 3953 ± 28% interrupts.CPU35.NMI:Non-maskable_interrupts
2490 ± 3% +58.7% 3953 ± 28% interrupts.CPU35.PMI:Performance_monitoring_interrupts
270.50 ± 24% -31.1% 186.25 ± 3% interrupts.CPU36.RES:Rescheduling_interrupts
2493 ± 7% +57.0% 3915 ± 27% interrupts.CPU43.NMI:Non-maskable_interrupts
2493 ± 7% +57.0% 3915 ± 27% interrupts.CPU43.PMI:Performance_monitoring_interrupts
286.75 ± 36% -32.4% 193.75 ± 7% interrupts.CPU45.RES:Rescheduling_interrupts
259.00 ± 12% -23.6% 197.75 ± 13% interrupts.CPU46.RES:Rescheduling_interrupts
244.00 ± 21% -35.6% 157.25 ± 11% interrupts.CPU47.RES:Rescheduling_interrupts
230.00 ± 7% -21.3% 181.00 ± 11% interrupts.CPU48.RES:Rescheduling_interrupts
281.00 ± 13% -27.4% 204.00 ± 15% interrupts.CPU53.RES:Rescheduling_interrupts
256.75 ± 5% -18.4% 209.50 ± 12% interrupts.CPU54.RES:Rescheduling_interrupts
2433 ± 9% +68.4% 4098 ± 35% interrupts.CPU58.NMI:Non-maskable_interrupts
2433 ± 9% +68.4% 4098 ± 35% interrupts.CPU58.PMI:Performance_monitoring_interrupts
316.00 ± 25% -41.4% 185.25 ± 13% interrupts.CPU59.RES:Rescheduling_interrupts
2703 ± 38% +56.0% 4217 ± 31% interrupts.CPU60.NMI:Non-maskable_interrupts
2703 ± 38% +56.0% 4217 ± 31% interrupts.CPU60.PMI:Performance_monitoring_interrupts
2425 ± 16% +39.9% 3394 ± 27% interrupts.CPU61.NMI:Non-maskable_interrupts
2425 ± 16% +39.9% 3394 ± 27% interrupts.CPU61.PMI:Performance_monitoring_interrupts
2388 ± 18% +69.5% 4047 ± 29% interrupts.CPU66.NMI:Non-maskable_interrupts
2388 ± 18% +69.5% 4047 ± 29% interrupts.CPU66.PMI:Performance_monitoring_interrupts
2322 ± 11% +93.4% 4491 ± 35% interrupts.CPU67.NMI:Non-maskable_interrupts
2322 ± 11% +93.4% 4491 ± 35% interrupts.CPU67.PMI:Performance_monitoring_interrupts
319.00 ± 40% -44.7% 176.25 ± 9% interrupts.CPU67.RES:Rescheduling_interrupts
2512 ± 8% +28.1% 3219 ± 25% interrupts.CPU70.NMI:Non-maskable_interrupts
2512 ± 8% +28.1% 3219 ± 25% interrupts.CPU70.PMI:Performance_monitoring_interrupts
2290 ± 39% +78.7% 4094 ± 28% interrupts.CPU74.NMI:Non-maskable_interrupts
2290 ± 39% +78.7% 4094 ± 28% interrupts.CPU74.PMI:Performance_monitoring_interrupts
2446 ± 40% +94.8% 4764 ± 23% interrupts.CPU75.NMI:Non-maskable_interrupts
2446 ± 40% +94.8% 4764 ± 23% interrupts.CPU75.PMI:Performance_monitoring_interrupts
426.75 ± 61% -67.7% 138.00 ± 8% interrupts.CPU75.RES:Rescheduling_interrupts
192.50 ± 13% +45.6% 280.25 ± 45% interrupts.CPU76.RES:Rescheduling_interrupts
274.25 ± 34% -42.2% 158.50 ± 34% interrupts.CPU77.RES:Rescheduling_interrupts
2357 ± 9% +73.0% 4078 ± 23% interrupts.CPU78.NMI:Non-maskable_interrupts
2357 ± 9% +73.0% 4078 ± 23% interrupts.CPU78.PMI:Performance_monitoring_interrupts
348.50 ± 53% -47.3% 183.75 ± 29% interrupts.CPU80.RES:Rescheduling_interrupts
2650 ± 43% +46.2% 3874 ± 36% interrupts.CPU84.NMI:Non-maskable_interrupts
2650 ± 43% +46.2% 3874 ± 36% interrupts.CPU84.PMI:Performance_monitoring_interrupts
2235 ± 10% +117.8% 4867 ± 10% interrupts.CPU90.NMI:Non-maskable_interrupts
2235 ± 10% +117.8% 4867 ± 10% interrupts.CPU90.PMI:Performance_monitoring_interrupts
2606 ± 33% +38.1% 3598 ± 21% interrupts.CPU92.NMI:Non-maskable_interrupts
2606 ± 33% +38.1% 3598 ± 21% interrupts.CPU92.PMI:Performance_monitoring_interrupts
408.75 ± 58% -56.8% 176.75 ± 25% interrupts.CPU92.RES:Rescheduling_interrupts
399.00 ± 64% -63.6% 145.25 ± 16% interrupts.CPU93.RES:Rescheduling_interrupts
314.75 ± 36% -44.2% 175.75 ± 13% interrupts.CPU94.RES:Rescheduling_interrupts
191.00 ± 15% -29.1% 135.50 ± 9% interrupts.CPU97.RES:Rescheduling_interrupts
94.00 ± 8% +50.0% 141.00 ± 12% interrupts.IWI:IRQ_work_interrupts
841457 ± 7% +16.6% 980751 ± 3% interrupts.NMI:Non-maskable_interrupts
841457 ± 7% +16.6% 980751 ± 3% interrupts.PMI:Performance_monitoring_interrupts
12.75 ± 11% -4.1 8.67 ± 31% perf-profile.calltrace.cycles-pp.do_rw_once
1.02 ± 16% -0.6 0.47 ± 59% perf-profile.calltrace.cycles-pp.sched_clock.sched_clock_cpu.cpuidle_enter_state.cpuidle_enter.do_idle
1.10 ± 15% -0.4 0.66 ± 14% perf-profile.calltrace.cycles-pp.sched_clock_cpu.cpuidle_enter_state.cpuidle_enter.do_idle.cpu_startup_entry
1.05 ± 16% -0.4 0.61 ± 14% perf-profile.calltrace.cycles-pp.native_sched_clock.sched_clock.sched_clock_cpu.cpuidle_enter_state.cpuidle_enter
1.58 ± 4% +0.3 1.91 ± 7% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.copy_page
0.79 ± 26% +0.5 1.27 ± 18% perf-profile.calltrace.cycles-pp.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.79 ± 26% +0.5 1.27 ± 18% perf-profile.calltrace.cycles-pp.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
0.79 ± 26% +0.5 1.27 ± 18% perf-profile.calltrace.cycles-pp.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64.entry_SYSCALL_64_after_hwframe
2.11 ± 4% +0.5 2.60 ± 7% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.osq_lock.__mutex_lock.hugetlb_fault.handle_mm_fault
0.83 ± 26% +0.5 1.32 ± 18% perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe
0.83 ± 26% +0.5 1.32 ± 18% perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.90 ± 5% +0.6 2.45 ± 7% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.copy_page.copy_subpage
0.65 ± 62% +0.6 1.20 ± 15% perf-profile.calltrace.cycles-pp.alloc_fresh_huge_page.alloc_surplus_huge_page.alloc_huge_page.hugetlb_cow.hugetlb_fault
0.60 ± 62% +0.6 1.16 ± 18% perf-profile.calltrace.cycles-pp.free_huge_page.release_pages.tlb_flush_mmu.tlb_finish_mmu.exit_mmap
0.95 ± 17% +0.6 1.52 ± 8% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.mutex_spin_on_owner
0.61 ± 62% +0.6 1.18 ± 18% perf-profile.calltrace.cycles-pp.release_pages.tlb_flush_mmu.tlb_finish_mmu.exit_mmap.mmput
0.61 ± 62% +0.6 1.19 ± 19% perf-profile.calltrace.cycles-pp.tlb_finish_mmu.exit_mmap.mmput.do_exit.do_group_exit
0.61 ± 62% +0.6 1.19 ± 19% perf-profile.calltrace.cycles-pp.tlb_flush_mmu.tlb_finish_mmu.exit_mmap.mmput.do_exit
0.64 ± 61% +0.6 1.23 ± 18% perf-profile.calltrace.cycles-pp.mmput.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
0.64 ± 61% +0.6 1.23 ± 18% perf-profile.calltrace.cycles-pp.exit_mmap.mmput.do_exit.do_group_exit.__x64_sys_exit_group
1.30 ± 9% +0.6 1.92 ± 8% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt.mutex_spin_on_owner.__mutex_lock
0.19 ±173% +0.7 0.89 ± 20% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.free_huge_page.release_pages.tlb_flush_mmu
0.19 ±173% +0.7 0.90 ± 20% perf-profile.calltrace.cycles-pp._raw_spin_lock.free_huge_page.release_pages.tlb_flush_mmu.tlb_finish_mmu
0.00 +0.8 0.77 ± 30% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.prep_new_huge_page.alloc_fresh_huge_page.alloc_surplus_huge_page
0.00 +0.8 0.78 ± 30% perf-profile.calltrace.cycles-pp._raw_spin_lock.prep_new_huge_page.alloc_fresh_huge_page.alloc_surplus_huge_page.alloc_huge_page
0.00 +0.8 0.79 ± 29% perf-profile.calltrace.cycles-pp.prep_new_huge_page.alloc_fresh_huge_page.alloc_surplus_huge_page.alloc_huge_page.hugetlb_cow
0.82 ± 67% +0.9 1.72 ± 22% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.alloc_huge_page.hugetlb_cow.hugetlb_fault
0.84 ± 66% +0.9 1.74 ± 20% perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock.alloc_surplus_huge_page.alloc_huge_page.hugetlb_cow
2.52 ± 6% +0.9 3.44 ± 9% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.copy_page.copy_subpage.copy_user_huge_page
0.83 ± 67% +0.9 1.75 ± 21% perf-profile.calltrace.cycles-pp._raw_spin_lock.alloc_huge_page.hugetlb_cow.hugetlb_fault.handle_mm_fault
0.84 ± 66% +0.9 1.77 ± 20% perf-profile.calltrace.cycles-pp._raw_spin_lock.alloc_surplus_huge_page.alloc_huge_page.hugetlb_cow.hugetlb_fault
1.64 ± 12% +1.0 2.67 ± 7% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt.mutex_spin_on_owner.__mutex_lock.hugetlb_fault
1.65 ± 45% +1.3 2.99 ± 18% perf-profile.calltrace.cycles-pp.alloc_surplus_huge_page.alloc_huge_page.hugetlb_cow.hugetlb_fault.handle_mm_fault
1.74 ± 13% +1.4 3.16 ± 6% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.mutex_spin_on_owner.__mutex_lock.hugetlb_fault.handle_mm_fault
2.56 ± 48% +2.2 4.81 ± 19% perf-profile.calltrace.cycles-pp.alloc_huge_page.hugetlb_cow.hugetlb_fault.handle_mm_fault.__do_page_fault
12.64 ± 14% +3.6 16.20 ± 8% perf-profile.calltrace.cycles-pp.mutex_spin_on_owner.__mutex_lock.hugetlb_fault.handle_mm_fault.__do_page_fault
2.97 ± 7% +3.8 6.74 ± 9% perf-profile.calltrace.cycles-pp.apic_timer_interrupt.copy_page.copy_subpage.copy_user_huge_page.hugetlb_cow
19.99 ± 9% +4.1 24.05 ± 6% perf-profile.calltrace.cycles-pp.hugetlb_cow.hugetlb_fault.handle_mm_fault.__do_page_fault.do_page_fault
1.37 ± 15% -0.5 0.83 ± 13% perf-profile.children.cycles-pp.sched_clock_cpu
1.31 ± 16% -0.5 0.78 ± 13% perf-profile.children.cycles-pp.sched_clock
1.29 ± 16% -0.5 0.77 ± 13% perf-profile.children.cycles-pp.native_sched_clock
1.80 ± 2% -0.3 1.47 ± 10% perf-profile.children.cycles-pp.task_tick_fair
0.73 ± 2% -0.2 0.54 ± 11% perf-profile.children.cycles-pp.update_curr
0.42 ± 17% -0.2 0.27 ± 16% perf-profile.children.cycles-pp.account_process_tick
0.73 ± 10% -0.2 0.58 ± 9% perf-profile.children.cycles-pp.rcu_sched_clock_irq
0.27 ± 6% -0.1 0.14 ± 14% perf-profile.children.cycles-pp.__acct_update_integrals
0.27 ± 18% -0.1 0.16 ± 13% perf-profile.children.cycles-pp.rcu_segcblist_ready_cbs
0.40 ± 12% -0.1 0.30 ± 14% perf-profile.children.cycles-pp.__next_timer_interrupt
0.47 ± 7% -0.1 0.39 ± 13% perf-profile.children.cycles-pp.update_rq_clock
0.29 ± 12% -0.1 0.21 ± 15% perf-profile.children.cycles-pp.cpuidle_governor_latency_req
0.21 ± 7% -0.1 0.14 ± 12% perf-profile.children.cycles-pp.account_system_index_time
0.38 ± 2% -0.1 0.31 ± 12% perf-profile.children.cycles-pp.timerqueue_add
0.26 ± 11% -0.1 0.20 ± 13% perf-profile.children.cycles-pp.find_next_bit
0.23 ± 15% -0.1 0.17 ± 15% perf-profile.children.cycles-pp.rcu_dynticks_eqs_exit
0.14 ± 8% -0.1 0.07 ± 14% perf-profile.children.cycles-pp.account_user_time
0.17 ± 6% -0.0 0.12 ± 10% perf-profile.children.cycles-pp.cpuacct_charge
0.18 ± 20% -0.0 0.13 ± 3% perf-profile.children.cycles-pp.irq_work_tick
0.11 ± 13% -0.0 0.07 ± 25% perf-profile.children.cycles-pp.tick_sched_do_timer
0.12 ± 10% -0.0 0.08 ± 15% perf-profile.children.cycles-pp.get_cpu_device
0.07 ± 11% -0.0 0.04 ± 58% perf-profile.children.cycles-pp.raise_softirq
0.12 ± 3% -0.0 0.09 ± 8% perf-profile.children.cycles-pp.write
0.11 ± 13% +0.0 0.14 ± 8% perf-profile.children.cycles-pp.native_write_msr
0.09 ± 9% +0.0 0.11 ± 7% perf-profile.children.cycles-pp.finish_task_switch
0.10 ± 10% +0.0 0.13 ± 5% perf-profile.children.cycles-pp.schedule_idle
0.07 ± 6% +0.0 0.10 ± 12% perf-profile.children.cycles-pp.__read_nocancel
0.04 ± 58% +0.0 0.07 ± 15% perf-profile.children.cycles-pp.__free_pages_ok
0.06 ± 7% +0.0 0.09 ± 13% perf-profile.children.cycles-pp.perf_read
0.07 +0.0 0.11 ± 14% perf-profile.children.cycles-pp.perf_evsel__read_counter
0.07 +0.0 0.11 ± 13% perf-profile.children.cycles-pp.cmd_stat
0.07 +0.0 0.11 ± 13% perf-profile.children.cycles-pp.__run_perf_stat
0.07 +0.0 0.11 ± 13% perf-profile.children.cycles-pp.process_interval
0.07 +0.0 0.11 ± 13% perf-profile.children.cycles-pp.read_counters
0.07 ± 22% +0.0 0.11 ± 19% perf-profile.children.cycles-pp.__handle_mm_fault
0.07 ± 19% +0.1 0.13 ± 8% perf-profile.children.cycles-pp.rb_erase
0.03 ±100% +0.1 0.09 ± 9% perf-profile.children.cycles-pp.smp_call_function_single
0.01 ±173% +0.1 0.08 ± 11% perf-profile.children.cycles-pp.perf_event_read
0.00 +0.1 0.07 ± 13% perf-profile.children.cycles-pp.__perf_event_read_value
0.00 +0.1 0.07 ± 7% perf-profile.children.cycles-pp.__intel_pmu_enable_all
0.08 ± 17% +0.1 0.15 ± 8% perf-profile.children.cycles-pp.native_apic_msr_eoi_write
0.04 ±103% +0.1 0.13 ± 58% perf-profile.children.cycles-pp.shmem_getpage_gfp
0.38 ± 14% +0.1 0.51 ± 6% perf-profile.children.cycles-pp.run_timer_softirq
0.11 ± 4% +0.3 0.37 ± 32% perf-profile.children.cycles-pp.worker_thread
0.20 ± 5% +0.3 0.48 ± 25% perf-profile.children.cycles-pp.ret_from_fork
0.20 ± 4% +0.3 0.48 ± 25% perf-profile.children.cycles-pp.kthread
0.00 +0.3 0.29 ± 38% perf-profile.children.cycles-pp.memcpy_erms
0.00 +0.3 0.29 ± 38% perf-profile.children.cycles-pp.drm_fb_helper_dirty_work
0.00 +0.3 0.31 ± 37% perf-profile.children.cycles-pp.process_one_work
0.47 ± 48% +0.4 0.91 ± 19% perf-profile.children.cycles-pp.prep_new_huge_page
0.70 ± 29% +0.5 1.16 ± 18% perf-profile.children.cycles-pp.free_huge_page
0.73 ± 29% +0.5 1.19 ± 18% perf-profile.children.cycles-pp.tlb_flush_mmu
0.72 ± 29% +0.5 1.18 ± 18% perf-profile.children.cycles-pp.release_pages
0.73 ± 29% +0.5 1.19 ± 18% perf-profile.children.cycles-pp.tlb_finish_mmu
0.76 ± 27% +0.5 1.23 ± 18% perf-profile.children.cycles-pp.exit_mmap
0.77 ± 27% +0.5 1.24 ± 18% perf-profile.children.cycles-pp.mmput
0.79 ± 26% +0.5 1.27 ± 18% perf-profile.children.cycles-pp.__x64_sys_exit_group
0.79 ± 26% +0.5 1.27 ± 18% perf-profile.children.cycles-pp.do_group_exit
0.79 ± 26% +0.5 1.27 ± 18% perf-profile.children.cycles-pp.do_exit
1.28 ± 29% +0.5 1.76 ± 9% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.77 ± 28% +0.5 1.26 ± 13% perf-profile.children.cycles-pp.alloc_fresh_huge_page
1.53 ± 15% +0.7 2.26 ± 14% perf-profile.children.cycles-pp.do_syscall_64
1.53 ± 15% +0.7 2.27 ± 14% perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
1.13 ± 3% +0.9 2.07 ± 14% perf-profile.children.cycles-pp.interrupt_entry
0.79 ± 9% +1.0 1.76 ± 5% perf-profile.children.cycles-pp.perf_event_task_tick
1.71 ± 39% +1.4 3.08 ± 16% perf-profile.children.cycles-pp.alloc_surplus_huge_page
2.66 ± 42% +2.3 4.94 ± 17% perf-profile.children.cycles-pp.alloc_huge_page
2.89 ± 45% +2.7 5.54 ± 18% perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
3.34 ± 35% +2.7 6.02 ± 17% perf-profile.children.cycles-pp._raw_spin_lock
12.77 ± 14% +3.9 16.63 ± 7% perf-profile.children.cycles-pp.mutex_spin_on_owner
20.12 ± 9% +4.0 24.16 ± 6% perf-profile.children.cycles-pp.hugetlb_cow
15.40 ± 10% -3.6 11.84 ± 28% perf-profile.self.cycles-pp.do_rw_once
4.02 ± 9% -1.3 2.73 ± 30% perf-profile.self.cycles-pp.do_access
2.00 ± 14% -0.6 1.41 ± 13% perf-profile.self.cycles-pp.cpuidle_enter_state
1.26 ± 16% -0.5 0.74 ± 13% perf-profile.self.cycles-pp.native_sched_clock
0.42 ± 17% -0.2 0.27 ± 16% perf-profile.self.cycles-pp.account_process_tick
0.27 ± 19% -0.2 0.12 ± 17% perf-profile.self.cycles-pp.timerqueue_del
0.53 ± 3% -0.1 0.38 ± 11% perf-profile.self.cycles-pp.update_curr
0.27 ± 6% -0.1 0.14 ± 14% perf-profile.self.cycles-pp.__acct_update_integrals
0.27 ± 18% -0.1 0.16 ± 13% perf-profile.self.cycles-pp.rcu_segcblist_ready_cbs
0.61 ± 4% -0.1 0.51 ± 8% perf-profile.self.cycles-pp.task_tick_fair
0.20 ± 8% -0.1 0.12 ± 14% perf-profile.self.cycles-pp.account_system_index_time
0.23 ± 15% -0.1 0.16 ± 17% perf-profile.self.cycles-pp.rcu_dynticks_eqs_exit
0.25 ± 11% -0.1 0.18 ± 14% perf-profile.self.cycles-pp.find_next_bit
0.10 ± 11% -0.1 0.03 ±100% perf-profile.self.cycles-pp.tick_sched_do_timer
0.29 -0.1 0.23 ± 11% perf-profile.self.cycles-pp.timerqueue_add
0.12 ± 10% -0.1 0.06 ± 17% perf-profile.self.cycles-pp.account_user_time
0.22 ± 15% -0.1 0.16 ± 6% perf-profile.self.cycles-pp.scheduler_tick
0.17 ± 6% -0.0 0.12 ± 10% perf-profile.self.cycles-pp.cpuacct_charge
0.18 ± 20% -0.0 0.13 ± 3% perf-profile.self.cycles-pp.irq_work_tick
0.07 ± 13% -0.0 0.03 ±100% perf-profile.self.cycles-pp.update_process_times
0.12 ± 7% -0.0 0.08 ± 15% perf-profile.self.cycles-pp.get_cpu_device
0.07 ± 11% -0.0 0.04 ± 58% perf-profile.self.cycles-pp.raise_softirq
0.12 ± 11% -0.0 0.09 ± 7% perf-profile.self.cycles-pp.tick_nohz_get_sleep_length
0.11 ± 11% +0.0 0.14 ± 6% perf-profile.self.cycles-pp.native_write_msr
0.10 ± 5% +0.1 0.15 ± 8% perf-profile.self.cycles-pp.__remove_hrtimer
0.07 ± 23% +0.1 0.13 ± 8% perf-profile.self.cycles-pp.rb_erase
0.08 ± 17% +0.1 0.15 ± 7% perf-profile.self.cycles-pp.native_apic_msr_eoi_write
0.00 +0.1 0.08 ± 10% perf-profile.self.cycles-pp.smp_call_function_single
0.32 ± 17% +0.1 0.42 ± 7% perf-profile.self.cycles-pp.run_timer_softirq
0.22 ± 5% +0.1 0.34 ± 4% perf-profile.self.cycles-pp.ktime_get_update_offsets_now
0.45 ± 15% +0.2 0.60 ± 12% perf-profile.self.cycles-pp.rcu_irq_enter
0.31 ± 8% +0.2 0.46 ± 16% perf-profile.self.cycles-pp.irq_enter
0.29 ± 10% +0.2 0.44 ± 16% perf-profile.self.cycles-pp.apic_timer_interrupt
0.71 ± 30% +0.2 0.92 ± 8% perf-profile.self.cycles-pp.perf_mux_hrtimer_handler
0.00 +0.3 0.28 ± 37% perf-profile.self.cycles-pp.memcpy_erms
1.12 ± 3% +0.9 2.02 ± 15% perf-profile.self.cycles-pp.interrupt_entry
0.79 ± 9% +0.9 1.73 ± 5% perf-profile.self.cycles-pp.perf_event_task_tick
2.49 ± 45% +2.1 4.55 ± 20% perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
10.95 ± 15% +2.7 13.61 ± 8% perf-profile.self.cycles-pp.mutex_spin_on_owner
vm-scalability.throughput
1.6e+07 +-+---------------------------------------------------------------+
|..+.+ +..+.+..+.+. +. +..+.+..+.+..+.+..+.+..+ + |
1.4e+07 +-+ : : O O O O |
1.2e+07 O-+O O O O O O O O O O O O O O O O O O
| : : O O O O |
1e+07 +-+ : : |
| : : |
8e+06 +-+ : : |
| : : |
6e+06 +-+ : : |
4e+06 +-+ : : |
| :: |
2e+06 +-+ : |
| : |
0 +-+---------------------------------------------------------------+
vm-scalability.time.minor_page_faults
2.5e+06 +-+---------------------------------------------------------------+
| |
|..+.+ +..+.+..+.+..+.+..+.+.. .+. .+.+..+.+..+.+..+.+..+ |
2e+06 +-+ : : +. +. |
O O O: O O O O O O O O O O |
| : : O O O O O O O O O O O O O O
1.5e+06 +-+ : : |
| : : |
1e+06 +-+ : : |
| : : |
| : : |
500000 +-+ : : |
| : |
| : |
0 +-+---------------------------------------------------------------+
vm-scalability.workload
3.5e+09 +-+---------------------------------------------------------------+
| .+. .+.+.. .+.. |
3e+09 +-+ + +..+.+..+.+..+.+. +..+.+..+.+..+.+..+.+..+ + |
| : : O O O |
2.5e+09 O-+O O: O O O O O O O O O |
| : : O O O O O O O O O O O O
2e+09 +-+ : : |
| : : |
1.5e+09 +-+ : : |
| : : |
1e+09 +-+ : : |
| : : |
5e+08 +-+ : |
| : |
0 +-+---------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
883a2cefc0 ("rcu/tree: support kfree_bulk() interface in .."): [ 13.962148] WARNING: CPU: 0 PID: 212 at lib/debugobjects.c:484 debug_print_object
by kernel test robot
Greetings,
The 0day kernel testing robot got the dmesg below, and the first bad commit is
https://github.com/0day-ci/linux/commits/Uladzislau-Rezki-Sony/rcu-tree-s...
commit 883a2cefc0684ec945a069e1091407f1f07a0558
Author: Uladzislau Rezki (Sony) <urezki@gmail.com>
AuthorDate: Tue Dec 31 13:22:41 2019 +0100
Commit: 0day robot <lkp@intel.com>
CommitDate: Fri Jan 3 11:52:33 2020 +0800
rcu/tree: support kfree_bulk() interface in kfree_rcu()
kfree_rcu() logic can be improved further by using the kfree_bulk()
interface along with the "basic batching support" introduced earlier.
There are at least two advantages to using the "bulk" interface:
- in the case of a large number of kfree_rcu() requests, kfree_bulk()
reduces the per-object overhead caused by calling kfree() per object;
- it reduces the number of cache misses due to "pointer chasing"
between objects, which can be spread far apart from each other.
This approach defines a new kfree_rcu_bulk_data structure that
stores pointers in an array of a specific size. The number of entries
in that array depends on PAGE_SIZE, making the kfree_rcu_bulk_data
structure exactly one page.
Since it uses a "block-chain" technique, dynamic allocation is needed
whenever a new block is required. Memory is allocated with the
GFP_NOWAIT | __GFP_NOWARN flags, which skip direct reclaim under
low-memory conditions to prevent stalling and fail silently under
high memory pressure.
The "emergency path" is kept for when the system runs out of memory:
in that case objects are simply linked into the regular list.
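For readers unfamiliar with the layout being described, here is a minimal illustrative sketch of such a page-sized block; the names, field layout, and allocation helper are assumptions for illustration, not the actual patch:

/* Illustrative sketch only -- not the actual patch. A page-sized block
 * that batches object pointers so they can be freed with kfree_bulk(). */
#include <linux/gfp.h>
#include <linux/mm.h>

struct kfree_rcu_bulk_data {
	unsigned long nr_records;		/* entries currently stored */
	struct kfree_rcu_bulk_data *next;	/* "block-chain" to the next block */
	void *records[];			/* pointers passed to kfree_bulk() */
};

/* Number of pointers that fit while keeping the block exactly one page. */
#define KFREE_BULK_MAX_ENTR \
	((PAGE_SIZE - sizeof(struct kfree_rcu_bulk_data)) / sizeof(void *))

/* New blocks are allocated without direct reclaim; if this fails under
 * memory pressure, the object falls back to the regular rcu_head list
 * (the "emergency path" mentioned above). */
static struct kfree_rcu_bulk_data *alloc_bulk_block(void)
{
	struct kfree_rcu_bulk_data *b =
		(void *)__get_free_page(GFP_NOWAIT | __GFP_NOWARN);

	if (b) {
		b->nr_records = 0;
		b->next = NULL;
	}
	return b;
}

Entries accumulate in records[] and are then handed to kfree_bulk() in a single call from the drain work, which is where the per-object kfree() overhead is saved.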
To evaluate it, "rcuperf" was run to analyze how much memory is
consumed and what the kfree_bulk() throughput is.
Testing on the HiKey-960 (arm64, 8 CPUs) with the parameters below:
CONFIG_SLAB=y
kfree_loops=200000 kfree_alloc_num=1000 kfree_rcu_test=1
102898760401 ns, loops: 200000, batches: 5822, memory footprint: 158MB
89947009882 ns, loops: 200000, batches: 6715, memory footprint: 115MB
rcuperf shows approximately 12% better throughput (total time) when
using the "bulk" interface. The "drain logic", i.e. its RCU callback,
does the work faster, which leads to better throughput.
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
4892cf6a83 doc: Add some more RCU list patterns in the kernel
883a2cefc0 rcu/tree: support kfree_bulk() interface in kfree_rcu()
+---------------------------------------------------+------------+------------+
| | 4892cf6a83 | 883a2cefc0 |
+---------------------------------------------------+------------+------------+
| boot_successes | 30 | 0 |
| boot_failures | 0 | 10 |
| WARNING:at_lib/debugobjects.c:#debug_print_object | 0 | 10 |
| RIP:debug_print_object | 0 | 10 |
| INFO:rcu_preempt_detected_stalls_on_CPUs/tasks | 0 | 2 |
| RIP:ftrace_likely_update | 0 | 2 |
| BUG:soft_lockup-CPU##stuck_for#s![ksoftirqd:#] | 0 | 3 |
| RIP:_raw_spin_unlock_irqrestore | 0 | 3 |
| Kernel_panic-not_syncing:softlockup:hung_tasks | 0 | 3 |
| BUG:kernel_timeout_in_test_stage | 0 | 7 |
| WARNING:at_kernel/rcu/tree.c:#call_rcu | 0 | 4 |
| RIP:call_rcu | 0 | 4 |
+---------------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <lkp@intel.com>
[child0:632] kexec_load (283:[32BIT]) returned ENOSYS, marking as inactive.
[child0:669] pkey_alloc (381:[32BIT]) returned ENOSYS, marking as inactive.
[child0:669] acct (51:[32BIT]) returned ENOSYS, marking as inactive.
[ 13.957168] ------------[ cut here ]------------
[ 13.958256] ODEBUG: free active (active state 1) object type: rcu_head hint: 0x0
[ 13.962148] WARNING: CPU: 0 PID: 212 at lib/debugobjects.c:484 debug_print_object+0x95/0xd0
[ 13.964298] Modules linked in:
[ 13.964960] CPU: 0 PID: 212 Comm: kworker/0:2 Not tainted 5.5.0-rc1-00136-g883a2cefc0684 #1
[ 13.966712] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 13.968528] Workqueue: events kfree_rcu_work
[ 13.969466] RIP: 0010:debug_print_object+0x95/0xd0
[ 13.970480] Code: d2 e8 2f 06 d6 ff 8b 43 10 4d 89 f1 4c 89 e6 8b 4b 14 48 c7 c7 88 73 be 82 4d 8b 45 00 48 8b 14 c5 a0 5f 6d 82 e8 7b 65 c6 ff <0f> 0b b9 01 00 00 00 31 d2 be 01 00 00 00 48 c7 c7 98 b8 0c 83 e8
[ 13.974435] RSP: 0000:ffff888231677bf8 EFLAGS: 00010282
[ 13.975531] RAX: 0000000000000000 RBX: ffff88822d4200e0 RCX: 0000000000000000
[ 13.976730] RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffff8306e028
[ 13.977568] RBP: ffff888231677c18 R08: 0000000000000000 R09: ffff888231670790
[ 13.978412] R10: ffff888231670000 R11: 0000000000000003 R12: ffffffff82bc5299
[ 13.979250] R13: ffffffff82e77360 R14: 0000000000000000 R15: dead000000000100
[ 13.980089] FS: 0000000000000000(0000) GS:ffffffff82e4f000(0000) knlGS:0000000000000000
[ 13.981069] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 13.981746] CR2: 00007f1e913fc77c CR3: 0000000225ce9000 CR4: 00000000000006f0
[ 13.982587] Call Trace:
[ 13.982911] __debug_check_no_obj_freed+0x19a/0x200
[ 13.983494] debug_check_no_obj_freed+0x14/0x20
[ 13.984036] free_pcp_prepare+0xee/0x1d0
[ 13.984541] free_unref_page+0x1b/0x80
[ 13.984994] __free_pages+0x19/0x20
[ 13.985503] __free_pages+0x13/0x20
[ 13.985924] slob_free_pages+0x7d/0x90
[ 13.986373] slob_free+0x34f/0x530
[ 13.986784] kfree+0x154/0x210
[ 13.987155] __kmem_cache_free_bulk+0x44/0x60
[ 13.987673] kmem_cache_free_bulk+0xe/0x10
[ 13.988163] kfree_rcu_work+0x95/0x310
[ 13.989010] ? kfree_rcu_work+0x64/0x310
[ 13.989884] process_one_work+0x378/0x7c0
[ 13.990770] worker_thread+0x40/0x600
[ 13.991587] kthread+0x14e/0x170
[ 13.992344] ? process_one_work+0x7c0/0x7c0
[ 13.993256] ? kthread_create_on_node+0x70/0x70
[ 13.994246] ret_from_fork+0x3a/0x50
[ 13.995039] ---[ end trace cdf242638b0e32a0 ]---
[child0:632] trace_fd was -1
# HH:MM RESULT GOOD BAD GOOD_BUT_DIRTY DIRTY_NOT_BAD
git bisect start 1407715fd1b13c01ffeeec8b3837c2e1b4cbb025 e42617b825f8073569da76dc4510bfa019b1c35a --
git bisect bad 044c40eba826e8d47d53edfb2c5fe866d0f36134 # 12:47 B 0 1 17 0 Merge 'linux-review/Uladzislau-Rezki-Sony/rcu-tree-support-kfree_bulk-interface-in-kfree_rcu/20200103-115231' into devel-catchup-202001031304
git bisect good 01975b69f841f967f889a32ef34425cdb33c3d84 # 13:40 G 10 0 0 0 0day base guard for 'devel-catchup-202001031304'
git bisect good b081888d4e95ae70b3e3098fc26a536d3e3da6ad # 16:03 G 11 0 1 1 Merge branches 'doc.2019.12.10a', 'exp.2019.12.09a', 'fixes.2019.12.12a', 'kfree_rcu.2019.12.09a', 'list.2019.12.09a', 'preempt.2019.12.09a' and 'torture.2019.12.09a' into HEAD
git bisect good 5887751baba7ce39d88b154d7c1a4d53f4608509 # 16:56 G 11 0 0 0 tools/memory-model: Use "-unroll 0" to keep --hw runs finite
git bisect good 59a1e11f586e5635f35beaf9f89e4badd93c3087 # 17:26 G 10 0 0 0 torture: Make results-directory date format completion-friendly
git bisect good 96c4fe7d5735476c2e972b367af4b1a5bf831899 # 17:50 G 10 0 0 0 rcu: Fix spelling mistake "leval" -> "level"
git bisect good de73daec335f0227b7303abb0814a3c00c0c55b2 # 18:16 G 10 0 0 0 rcutorture: Make kvm-find-errors.sh abort on bad directory
git bisect good 8bf389d441538030d07a9a0f9e38ec0843f7a83e # 18:39 G 11 0 0 0 rcuperf: Measure memory footprint during kfree_rcu() test
git bisect bad 883a2cefc0684ec945a069e1091407f1f07a0558 # 19:10 B 0 2 18 0 rcu/tree: support kfree_bulk() interface in kfree_rcu()
git bisect good 4892cf6a83bc59897fc77a270b4098f152e14079 # 19:50 G 10 0 0 0 doc: Add some more RCU list patterns in the kernel
# first bad commit: [883a2cefc0684ec945a069e1091407f1f07a0558] rcu/tree: support kfree_bulk() interface in kfree_rcu()
git bisect good 4892cf6a83bc59897fc77a270b4098f152e14079 # 20:14 G 30 0 0 0 doc: Add some more RCU list patterns in the kernel
# extra tests with debug options
git bisect good 883a2cefc0684ec945a069e1091407f1f07a0558 # 21:31 G 10 0 10 10 rcu/tree: support kfree_bulk() interface in kfree_rcu()
# extra tests on head commit of linux-review/Uladzislau-Rezki-Sony/rcu-tree-support-kfree_bulk-interface-in-kfree_rcu/20200103-115231
git bisect bad 883a2cefc0684ec945a069e1091407f1f07a0558 # 21:38 B 0 10 26 0 rcu/tree: support kfree_bulk() interface in kfree_rcu()
# bad: [883a2cefc0684ec945a069e1091407f1f07a0558] rcu/tree: support kfree_bulk() interface in kfree_rcu()
# extra tests on revert first bad commit
git bisect good 8af7d370bd521685c4a599ceb820299360478fed # 22:43 G 11 0 0 0 Revert "rcu/tree: support kfree_bulk() interface in kfree_rcu()"
# good: [8af7d370bd521685c4a599ceb820299360478fed] Revert "rcu/tree: support kfree_bulk() interface in kfree_rcu()"
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/hyperkitty/list/lkp@lists.01.org Intel Corporation
[mm/debug] 87c4696d57: kernel_BUG_at_include/linux/mm.h
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 87c4696d57b5e380bc2720fdff0772b042462b7d ("[PATCH V11 RESEND] mm/debug: Add tests validating architecture page table helpers")
url: https://github.com/0day-ci/linux/commits/Anshuman-Khandual/mm-debug-Add-t...
in testcase: trinity
with the following parameters:
runtime: 300s
test-description: Trinity is a Linux system call fuzz tester.
test-url: http://codemonkey.org.uk/projects/trinity/
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+--------------------------------------------------+----------+------------+
| | v5.5-rc3 | 87c4696d57 |
+--------------------------------------------------+----------+------------+
| boot_successes | 0 | 0 |
| boot_failures | 153 | 12 |
| BUG:workqueue_lockup-pool | 121 | |
| BUG:soft_lockup-CPU##stuck_for#s![dma-fence:#:#] | 25 | |
| EIP:thread_signal_callback | 14 | |
| Kernel_panic-not_syncing:softlockup:hung_tasks | 25 | |
| EIP:kthread_should_stop | 11 | |
| BUG:kernel_hang_in_boot_stage | 7 | |
| kernel_BUG_at_include/linux/mm.h | 0 | 12 |
| invalid_opcode:#[##] | 0 | 12 |
| EIP:__free_pages | 0 | 12 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 12 |
+--------------------------------------------------+----------+------------+
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <rong.a.chen@intel.com>
[ 9.781974] kernel BUG at include/linux/mm.h:592!
[ 9.782810] invalid opcode: 0000 [#1] PTI
[ 9.783443] CPU: 0 PID: 1 Comm: swapper Not tainted 5.5.0-rc3-00001-g87c4696d57b5e #1
[ 9.784528] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 9.785756] EIP: __free_pages+0x14/0x40
[ 9.786442] Code: 0c 9c 5e fa 89 d8 e8 5b f3 ff ff 56 9d 5b 5e 5d c3 8d 74 26 00 90 8b 48 1c 55 89 e5 85 c9 75 16 ba b4 b6 84 d6 e8 ac 49 fe ff <0f> 0b 8d b4 26 00 00 00 00 8d 76 00 ff 48 1c 75 10 85 d2 75 07 e8
[ 9.789697] EAX: d68761f7 EBX: ea52f000 ECX: ea4f8520 EDX: d684b6b4
[ 9.790850] ESI: 00000000 EDI: ef45e000 EBP: ea501f08 ESP: ea501f08
[ 9.791879] DS: 007b ES: 007b FS: 0000 GS: 0000 SS: 0068 EFLAGS: 00010286
[ 9.792783] CR0: 80050033 CR2: ffffffff CR3: 16d00000 CR4: 000406b0
[ 9.792783] Call Trace:
[ 9.792783] free_pages+0x3c/0x50
[ 9.792783] pgd_free+0x5a/0x170
[ 9.792783] __mmdrop+0x42/0xe0
[ 9.792783] debug_vm_pgtable+0x54f/0x567
[ 9.792783] kernel_init_freeable+0x90/0x1e3
[ 9.792783] ? rest_init+0xf0/0xf0
[ 9.792783] kernel_init+0x8/0xf0
[ 9.792783] ret_from_fork+0x19/0x24
[ 9.792783] Modules linked in:
[ 9.792803] ---[ end trace 91b7335adcf0b656 ]---
To reproduce:
# build kernel
cd linux
cp config-5.5.0-rc3-00001-g87c4696d57b5e .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=i386 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen
[cpuidle] 259231a045: will-it-scale.per_process_ops -12.6% regression
by kernel test robot
Greetings,
FYI, we noticed a -12.6% regression of will-it-scale.per_process_ops due to commit:
commit: 259231a045616c4101d023a8f4dcc8379af265a6 ("cpuidle: add poll_limit_ns to cpuidle_device structure")
https://kernel.googlesource.com/pub/scm/linux/kernel/git/next/linux-next.git master
in testcase: will-it-scale
on test machine: 288 threads Intel(R) Xeon Phi(TM) CPU 7295 @ 1.50GHz with 80G memory
with the following parameters:
nr_task: 100%
mode: process
test: mmap1
cpufreq_governor: performance
test-description: Will It Scale takes a testcase and runs it from 1 through to n parallel copies to see if the testcase will scale. It builds both a process-based and a thread-based test in order to see any differences between the two.
test-url: https://github.com/antonblanchard/will-it-scale
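For context on what per_process_ops measures here, a rough sketch of an mmap1-style per-process loop follows; it is illustrative only (the real testcase lives in the will-it-scale repository above, and the region size is an assumption):

/* Illustrative sketch of an mmap1-style loop: each process repeatedly
 * maps and unmaps an anonymous region, and per_process_ops reports the
 * iteration count per second. Not the will-it-scale source. */
#include <assert.h>
#include <sys/mman.h>

#define MEMSIZE (128UL * 1024 * 1024)	/* assumed region size */

static void testcase(unsigned long long *iterations)
{
	for (;;) {
		char *c = mmap(NULL, MEMSIZE, PROT_READ | PROT_WRITE,
			       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		assert(c != MAP_FAILED);
		munmap(c, MEMSIZE);
		(*iterations)++;	/* sampled by the harness as ops/sec */
	}
}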
If you fix the issue, kindly add the following tag:
Reported-by: kernel test robot <rong.a.chen@intel.com>
Details are as below:
-------------------------------------------------------------------------------------------------->
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
=========================================================================================
compiler/cpufreq_governor/kconfig/mode/nr_task/rootfs/tbox_group/test/testcase:
gcc-7/performance/x86_64-rhel-7.6/process/100%/debian-x86_64-2019-05-14.cgz/lkp-knm01/mmap1/will-it-scale
commit:
fa86ee90eb ("add cpuidle-haltpoll driver")
259231a045 ("cpuidle: add poll_limit_ns to cpuidle_device structure")
fa86ee90eb111126 259231a045616c4101d023a8f4d
---------------- ---------------------------
fail:runs %reproduction fail:runs
| | |
:4 25% 1:4 dmesg.WARNING:at#for_ip_swapgs_restore_regs_and_return_to_usermode/0x
%stddev %change %stddev
\ | \
1611 -12.6% 1408 will-it-scale.per_process_ops
464144 -12.6% 405580 will-it-scale.workload
1581 ± 2% +3.3% 1633 vmstat.system.cs
35.13 -1.0% 34.78 boot-time.dhcp
11888 -1.0% 11765 boot-time.idle
5207 ± 4% +25.7% 6547 ± 7% slabinfo.kmalloc-rcl-64.active_objs
5207 ± 4% +25.7% 6547 ± 7% slabinfo.kmalloc-rcl-64.num_objs
1.07 ± 53% -38.8% 0.66 ± 6% turbostat.CPU%c6
2.63 -14.9% 2.23 ± 2% turbostat.RAMWatt
874.11 ± 2% +10.7% 967.81 ± 5% sched_debug.cfs_rq:/.exec_clock.stddev
4672082 ± 24% +60.3% 7488441 ± 26% sched_debug.cpu.avg_idle.max
662.84 ± 3% +21.4% 804.72 ± 6% sched_debug.cpu.clock.stddev
662.84 ± 3% +21.4% 804.72 ± 6% sched_debug.cpu.clock_task.stddev
1185029 ± 14% +51.4% 1793617 ± 28% sched_debug.cpu.max_idle_balance_cost.max
75638 ± 25% +62.8% 123124 ± 17% sched_debug.cpu.max_idle_balance_cost.stddev
0.00 ± 2% +21.3% 0.00 ± 6% sched_debug.cpu.next_balance.stddev
1.16 ± 6% -0.1 1.05 perf-stat.i.branch-miss-rate%
1.104e+08 -5.6% 1.042e+08 perf-stat.i.branch-misses
45924834 -3.9% 44147120 perf-stat.i.cache-misses
1455 +2.1% 1485 perf-stat.i.context-switches
9825 +4.4% 10255 perf-stat.i.cycles-between-cache-misses
0.18 ± 8% -0.0 0.16 perf-stat.i.iTLB-load-miss-rate%
67884172 -6.4% 63509655 perf-stat.i.iTLB-load-misses
596.76 +7.3% 640.20 perf-stat.i.instructions-per-iTLB-miss
1.12 -0.1 1.05 perf-stat.overall.branch-miss-rate%
9817 +4.3% 10239 perf-stat.overall.cycles-between-cache-misses
0.17 -0.0 0.16 perf-stat.overall.iTLB-load-miss-rate%
596.10 +7.1% 638.47 perf-stat.overall.instructions-per-iTLB-miss
26461122 +13.8% 30121468 perf-stat.overall.path-length
1.098e+08 -5.7% 1.035e+08 perf-stat.ps.branch-misses
45700304 -3.9% 43918015 perf-stat.ps.cache-misses
67526222 -6.5% 63149942 perf-stat.ps.iTLB-load-misses
481381 ± 18% +17.7% 566437 ± 7% interrupts.CPU0.LOC:Local_timer_interrupts
79.50 ±111% -76.4% 18.75 ± 99% interrupts.CPU100.RES:Rescheduling_interrupts
5783 ± 33% -17.7% 4757 ± 34% interrupts.CPU103.NMI:Non-maskable_interrupts
5783 ± 33% -17.7% 4757 ± 34% interrupts.CPU103.PMI:Performance_monitoring_interrupts
128.00 ±105% -92.8% 9.25 ± 37% interrupts.CPU107.RES:Rescheduling_interrupts
5769 ± 33% -33.5% 3837 interrupts.CPU109.NMI:Non-maskable_interrupts
5769 ± 33% -33.5% 3837 interrupts.CPU109.PMI:Performance_monitoring_interrupts
6774 ± 24% -29.4% 4785 ± 34% interrupts.CPU111.NMI:Non-maskable_interrupts
6774 ± 24% -29.4% 4785 ± 34% interrupts.CPU111.PMI:Performance_monitoring_interrupts
140.50 ± 43% -78.3% 30.50 ±118% interrupts.CPU118.RES:Rescheduling_interrupts
5776 ± 33% -33.7% 3830 interrupts.CPU120.NMI:Non-maskable_interrupts
5776 ± 33% -33.7% 3830 interrupts.CPU120.PMI:Performance_monitoring_interrupts
5801 ± 33% -17.7% 4776 ± 34% interrupts.CPU126.NMI:Non-maskable_interrupts
5801 ± 33% -17.7% 4776 ± 34% interrupts.CPU126.PMI:Performance_monitoring_interrupts
5772 ± 33% -17.7% 4749 ± 34% interrupts.CPU138.NMI:Non-maskable_interrupts
5772 ± 33% -17.7% 4749 ± 34% interrupts.CPU138.PMI:Performance_monitoring_interrupts
5786 ± 33% -18.4% 4722 ± 34% interrupts.CPU14.NMI:Non-maskable_interrupts
5786 ± 33% -18.4% 4722 ± 34% interrupts.CPU14.PMI:Performance_monitoring_interrupts
3844 +74.8% 6718 ± 24% interrupts.CPU167.NMI:Non-maskable_interrupts
3844 +74.8% 6718 ± 24% interrupts.CPU167.PMI:Performance_monitoring_interrupts
96.50 ± 98% -82.9% 16.50 ± 98% interrupts.CPU172.RES:Rescheduling_interrupts
5757 ± 33% -33.8% 3809 interrupts.CPU175.NMI:Non-maskable_interrupts
5757 ± 33% -33.8% 3809 interrupts.CPU175.PMI:Performance_monitoring_interrupts
481026 ± 19% +18.0% 567722 ± 7% interrupts.CPU18.LOC:Local_timer_interrupts
58.50 ± 91% -71.4% 16.75 ± 94% interrupts.CPU180.RES:Rescheduling_interrupts
6737 ± 24% -43.7% 3795 interrupts.CPU184.NMI:Non-maskable_interrupts
6737 ± 24% -43.7% 3795 interrupts.CPU184.PMI:Performance_monitoring_interrupts
5770 ± 32% -18.1% 4728 ± 34% interrupts.CPU188.NMI:Non-maskable_interrupts
5770 ± 32% -18.1% 4728 ± 34% interrupts.CPU188.PMI:Performance_monitoring_interrupts
281.00 ± 87% -80.5% 54.75 ± 49% interrupts.CPU188.RES:Rescheduling_interrupts
529.50 ±124% -95.6% 23.50 ± 81% interrupts.CPU189.RES:Rescheduling_interrupts
7713 -50.7% 3803 interrupts.CPU192.NMI:Non-maskable_interrupts
7713 -50.7% 3803 interrupts.CPU192.PMI:Performance_monitoring_interrupts
5762 ± 33% -33.9% 3809 interrupts.CPU203.NMI:Non-maskable_interrupts
5762 ± 33% -33.9% 3809 interrupts.CPU203.PMI:Performance_monitoring_interrupts
5783 ± 33% -17.9% 4750 ± 35% interrupts.CPU217.NMI:Non-maskable_interrupts
5783 ± 33% -17.9% 4750 ± 35% interrupts.CPU217.PMI:Performance_monitoring_interrupts
16.75 ± 49% +443.3% 91.00 ±129% interrupts.CPU224.RES:Rescheduling_interrupts
6779 ± 24% -43.5% 3830 interrupts.CPU239.NMI:Non-maskable_interrupts
6779 ± 24% -43.5% 3830 interrupts.CPU239.PMI:Performance_monitoring_interrupts
478215 ± 19% +17.7% 562671 ± 8% interrupts.CPU258.LOC:Local_timer_interrupts
350.50 ± 89% -88.3% 41.00 ±121% interrupts.CPU259.RES:Rescheduling_interrupts
493.00 ±139% -95.1% 24.25 ± 69% interrupts.CPU261.RES:Rescheduling_interrupts
6726 ± 24% -43.3% 3813 interrupts.CPU263.NMI:Non-maskable_interrupts
6726 ± 24% -43.3% 3813 interrupts.CPU263.PMI:Performance_monitoring_interrupts
5755 ± 33% -34.2% 3789 interrupts.CPU270.NMI:Non-maskable_interrupts
5755 ± 33% -34.2% 3789 interrupts.CPU270.PMI:Performance_monitoring_interrupts
5776 ± 32% -33.8% 3825 interrupts.CPU275.NMI:Non-maskable_interrupts
5776 ± 32% -33.8% 3825 interrupts.CPU275.PMI:Performance_monitoring_interrupts
5799 ± 33% -17.8% 4769 ± 35% interrupts.CPU276.NMI:Non-maskable_interrupts
5799 ± 33% -17.8% 4769 ± 35% interrupts.CPU276.PMI:Performance_monitoring_interrupts
6708 ± 24% -29.1% 4754 ± 34% interrupts.CPU277.NMI:Non-maskable_interrupts
6708 ± 24% -29.1% 4754 ± 34% interrupts.CPU277.PMI:Performance_monitoring_interrupts
5809 ± 32% -34.2% 3820 interrupts.CPU3.NMI:Non-maskable_interrupts
5809 ± 32% -34.2% 3820 interrupts.CPU3.PMI:Performance_monitoring_interrupts
150.25 ± 27% -61.9% 57.25 ± 21% interrupts.CPU3.RES:Rescheduling_interrupts
7721 -50.0% 3858 interrupts.CPU34.NMI:Non-maskable_interrupts
7721 -50.0% 3858 interrupts.CPU34.PMI:Performance_monitoring_interrupts
135.00 ± 57% -77.6% 30.25 ± 24% interrupts.CPU36.RES:Rescheduling_interrupts
6764 ± 24% -43.4% 3831 interrupts.CPU39.NMI:Non-maskable_interrupts
6764 ± 24% -43.4% 3831 interrupts.CPU39.PMI:Performance_monitoring_interrupts
7661 -38.2% 4733 ± 33% interrupts.CPU4.NMI:Non-maskable_interrupts
7661 -38.2% 4733 ± 33% interrupts.CPU4.PMI:Performance_monitoring_interrupts
117.25 ± 52% -66.1% 39.75 ± 76% interrupts.CPU40.RES:Rescheduling_interrupts
480972 ± 18% +17.6% 565776 ± 7% interrupts.CPU42.LOC:Local_timer_interrupts
1034 ± 65% -85.0% 154.75 ±127% interrupts.CPU45.RES:Rescheduling_interrupts
24.00 ± 68% +2232.3% 559.75 ±103% interrupts.CPU50.RES:Rescheduling_interrupts
4794 ± 34% +20.8% 5790 ± 33% interrupts.CPU51.NMI:Non-maskable_interrupts
4794 ± 34% +20.8% 5790 ± 33% interrupts.CPU51.PMI:Performance_monitoring_interrupts
48.25 ± 96% +590.7% 333.25 ±110% interrupts.CPU51.RES:Rescheduling_interrupts
6767 ± 24% -43.3% 3837 interrupts.CPU53.NMI:Non-maskable_interrupts
6767 ± 24% -43.3% 3837 interrupts.CPU53.PMI:Performance_monitoring_interrupts
16.25 ± 88% +696.9% 129.50 ±137% interrupts.CPU53.RES:Rescheduling_interrupts
479539 ± 19% +17.4% 562923 ± 8% interrupts.CPU58.LOC:Local_timer_interrupts
6744 ± 24% -43.2% 3830 interrupts.CPU58.NMI:Non-maskable_interrupts
6744 ± 24% -43.2% 3830 interrupts.CPU58.PMI:Performance_monitoring_interrupts
6731 ± 24% -29.5% 4747 ± 35% interrupts.CPU6.NMI:Non-maskable_interrupts
6731 ± 24% -29.5% 4747 ± 35% interrupts.CPU6.PMI:Performance_monitoring_interrupts
7693 -50.2% 3834 interrupts.CPU65.NMI:Non-maskable_interrupts
7693 -50.2% 3834 interrupts.CPU65.PMI:Performance_monitoring_interrupts
5821 ± 33% -34.2% 3829 interrupts.CPU66.NMI:Non-maskable_interrupts
5821 ± 33% -34.2% 3829 interrupts.CPU66.PMI:Performance_monitoring_interrupts
18.75 ± 35% +392.0% 92.25 ± 75% interrupts.CPU72.RES:Rescheduling_interrupts
6782 ± 24% -43.4% 3837 interrupts.CPU80.NMI:Non-maskable_interrupts
6782 ± 24% -43.4% 3837 interrupts.CPU80.PMI:Performance_monitoring_interrupts
5776 ± 33% -17.5% 4764 ± 33% interrupts.CPU97.NMI:Non-maskable_interrupts
5776 ± 33% -17.5% 4764 ± 33% interrupts.CPU97.PMI:Performance_monitoring_interrupts
48.96 -0.3 48.70 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.mmap64
48.94 -0.3 48.67 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.mmap64
49.21 -0.3 48.95 perf-profile.calltrace.cycles-pp.mmap64
46.12 -0.2 45.89 perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.percpu_counter_add_batch.__vm_enough_memory.mmap_region.do_mmap
46.08 -0.2 45.85 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.percpu_counter_add_batch.__vm_enough_memory.mmap_region
48.68 -0.2 48.47 perf-profile.calltrace.cycles-pp.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe.mmap64
48.65 -0.2 48.45 perf-profile.calltrace.cycles-pp.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe.mmap64
48.47 -0.2 48.27 perf-profile.calltrace.cycles-pp.do_mmap.vm_mmap_pgoff.ksys_mmap_pgoff.do_syscall_64.entry_SYSCALL_64_after_hwframe
1.27 -0.1 1.16 perf-profile.calltrace.cycles-pp.unmap_vmas.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap
1.19 -0.1 1.09 perf-profile.calltrace.cycles-pp.unmap_page_range.unmap_vmas.unmap_region.__do_munmap.__vm_munmap
1.61 -0.1 1.54 perf-profile.calltrace.cycles-pp.unmap_region.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
50.20 +0.2 50.37 perf-profile.calltrace.cycles-pp.munmap
49.95 +0.2 50.13 perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.munmap
49.96 +0.2 50.15 perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.munmap
46.38 +0.2 46.56 perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.percpu_counter_add_batch.__do_munmap.__vm_munmap
49.67 +0.2 49.88 perf-profile.calltrace.cycles-pp.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.munmap
49.64 +0.2 49.85 perf-profile.calltrace.cycles-pp.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe.munmap
49.51 +0.3 49.76 perf-profile.calltrace.cycles-pp.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64.entry_SYSCALL_64_after_hwframe
47.52 +0.4 47.88 perf-profile.calltrace.cycles-pp.percpu_counter_add_batch.__do_munmap.__vm_munmap.__x64_sys_munmap.do_syscall_64
0.00 +1.0 1.03 ± 4% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt._raw_spin_unlock_irqrestore.percpu_counter_add_batch.__vm_enough_memory
0.00 +1.0 1.03 ± 6% perf-profile.calltrace.cycles-pp.smp_apic_timer_interrupt.apic_timer_interrupt._raw_spin_unlock_irqrestore.percpu_counter_add_batch.__do_munmap
0.00 +1.1 1.10 ± 4% perf-profile.calltrace.cycles-pp.apic_timer_interrupt._raw_spin_unlock_irqrestore.percpu_counter_add_batch.__vm_enough_memory.mmap_region
0.00 +1.1 1.10 ± 6% perf-profile.calltrace.cycles-pp.apic_timer_interrupt._raw_spin_unlock_irqrestore.percpu_counter_add_batch.__do_munmap.__vm_munmap
0.00 +1.1 1.12 ± 4% perf-profile.calltrace.cycles-pp._raw_spin_unlock_irqrestore.percpu_counter_add_batch.__vm_enough_memory.mmap_region.do_mmap
0.00 +1.1 1.13 ± 5% perf-profile.calltrace.cycles-pp._raw_spin_unlock_irqrestore.percpu_counter_add_batch.__do_munmap.__vm_munmap.__x64_sys_munmap
0.00 +1.4 1.40 ± 4% perf-profile.calltrace.cycles-pp.__hrtimer_run_queues.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt._raw_spin_unlock_irqrestore
0.00 +1.7 1.72 ± 5% perf-profile.calltrace.cycles-pp.hrtimer_interrupt.smp_apic_timer_interrupt.apic_timer_interrupt._raw_spin_unlock_irqrestore.percpu_counter_add_batch
49.22 -0.3 48.96 perf-profile.children.cycles-pp.mmap64
48.68 -0.2 48.47 perf-profile.children.cycles-pp.ksys_mmap_pgoff
48.66 -0.2 48.45 perf-profile.children.cycles-pp.vm_mmap_pgoff
48.47 -0.2 48.27 perf-profile.children.cycles-pp.do_mmap
1.27 -0.1 1.16 perf-profile.children.cycles-pp.unmap_vmas
0.36 ± 3% -0.1 0.25 ± 5% perf-profile.children.cycles-pp.perf_event_mmap
1.23 -0.1 1.13 perf-profile.children.cycles-pp.unmap_page_range
1.62 -0.1 1.54 perf-profile.children.cycles-pp.unmap_region
0.20 ± 6% -0.1 0.14 ± 8% perf-profile.children.cycles-pp.perf_iterate_sb
0.55 ± 2% -0.0 0.51 ± 2% perf-profile.children.cycles-pp.___might_sleep
0.17 ± 6% -0.0 0.13 ± 3% perf-profile.children.cycles-pp.syscall_return_via_sysret
0.22 -0.0 0.19 ± 2% perf-profile.children.cycles-pp.get_unmapped_area
0.16 ± 6% -0.0 0.14 ± 3% perf-profile.children.cycles-pp.entry_SYSCALL_64
0.17 -0.0 0.15 ± 3% perf-profile.children.cycles-pp.arch_get_unmapped_area_topdown
0.30 ± 2% -0.0 0.28 perf-profile.children.cycles-pp._cond_resched
0.19 ± 3% -0.0 0.17 ± 2% perf-profile.children.cycles-pp.free_p4d_range
0.11 -0.0 0.10 ± 5% perf-profile.children.cycles-pp.unmapped_area_topdown
0.12 +0.0 0.14 perf-profile.children.cycles-pp.security_vm_enough_memory_mm
0.08 +0.0 0.10 ± 4% perf-profile.children.cycles-pp.selinux_vm_enough_memory
0.07 +0.0 0.10 ± 4% perf-profile.children.cycles-pp.cred_has_capability
0.04 ± 57% +0.0 0.07 ± 7% perf-profile.children.cycles-pp.__might_sleep
0.11 ± 4% +0.0 0.14 ± 3% perf-profile.children.cycles-pp.perf_mux_hrtimer_handler
0.15 ± 12% +0.0 0.19 ± 6% perf-profile.children.cycles-pp.update_curr
0.20 ± 33% +0.1 0.26 ± 30% perf-profile.children.cycles-pp.vfs_write
0.00 +0.1 0.06 ± 7% perf-profile.children.cycles-pp.perf_event_task_tick
0.00 +0.1 0.06 ± 6% perf-profile.children.cycles-pp.avc_has_perm_noaudit
0.53 ± 5% +0.1 0.61 ± 3% perf-profile.children.cycles-pp.task_tick_fair
0.00 +0.1 0.09 perf-profile.children.cycles-pp.lru_add_drain_cpu
0.00 +0.1 0.10 ± 11% perf-profile.children.cycles-pp.fpregs_assert_state_consistent
0.01 ±173% +0.1 0.11 ± 7% perf-profile.children.cycles-pp.irq_enter
0.00 +0.1 0.10 ± 4% perf-profile.children.cycles-pp.lru_add_drain
0.69 ± 7% +0.1 0.82 ± 4% perf-profile.children.cycles-pp.scheduler_tick
0.87 ± 6% +0.2 1.02 ± 4% perf-profile.children.cycles-pp.update_process_times
0.94 ± 6% +0.2 1.09 ± 4% perf-profile.children.cycles-pp.tick_sched_timer
0.89 ± 6% +0.2 1.04 ± 4% perf-profile.children.cycles-pp.tick_sched_handle
50.23 +0.2 50.40 perf-profile.children.cycles-pp.munmap
1.28 ± 7% +0.2 1.48 ± 4% perf-profile.children.cycles-pp.__hrtimer_run_queues
49.67 +0.2 49.88 perf-profile.children.cycles-pp.__x64_sys_munmap
49.64 +0.2 49.86 perf-profile.children.cycles-pp.__vm_munmap
49.52 +0.3 49.77 perf-profile.children.cycles-pp.__do_munmap
94.81 +0.3 95.07 perf-profile.children.cycles-pp.percpu_counter_add_batch
1.54 ± 9% +0.3 1.83 ± 5% perf-profile.children.cycles-pp.hrtimer_interrupt
1.83 ± 9% +0.4 2.18 ± 5% perf-profile.children.cycles-pp.smp_apic_timer_interrupt
1.95 ± 10% +0.4 2.34 ± 5% perf-profile.children.cycles-pp.apic_timer_interrupt
0.00 +2.4 2.38 ± 5% perf-profile.children.cycles-pp._raw_spin_unlock_irqrestore
0.15 ± 5% -0.0 0.10 ± 11% perf-profile.self.cycles-pp.perf_iterate_sb
0.17 ± 6% -0.0 0.13 perf-profile.self.cycles-pp.syscall_return_via_sysret
0.51 -0.0 0.46 ± 2% perf-profile.self.cycles-pp.unmap_page_range
0.52 ± 2% -0.0 0.48 ± 2% perf-profile.self.cycles-pp.___might_sleep
0.10 ± 7% -0.0 0.06 perf-profile.self.cycles-pp.perf_event_mmap
0.15 ± 4% -0.0 0.12 ± 4% perf-profile.self.cycles-pp.entry_SYSCALL_64
0.15 ± 3% -0.0 0.13 ± 3% perf-profile.self.cycles-pp._cond_resched
0.10 ± 4% -0.0 0.09 perf-profile.self.cycles-pp.unmapped_area_topdown
0.09 -0.0 0.08 perf-profile.self.cycles-pp.vma_compute_subtree_gap
0.08 +0.0 0.10 ± 4% perf-profile.self.cycles-pp.mmap_region
0.05 +0.0 0.08 perf-profile.self.cycles-pp.mmap64
0.00 +0.1 0.05 ± 8% perf-profile.self.cycles-pp.task_tick_fair
0.00 +0.1 0.06 ± 9% perf-profile.self.cycles-pp.irq_enter
0.04 ± 59% +0.1 0.10 ± 12% perf-profile.self.cycles-pp.hrtimer_interrupt
0.00 +0.1 0.06 ± 7% perf-profile.self.cycles-pp.perf_event_task_tick
0.00 +0.1 0.06 ± 6% perf-profile.self.cycles-pp.avc_has_perm_noaudit
0.01 ±173% +0.1 0.08 ± 5% perf-profile.self.cycles-pp.munmap
0.00 +0.1 0.09 ± 5% perf-profile.self.cycles-pp.lru_add_drain_cpu
0.00 +0.1 0.10 ± 11% perf-profile.self.cycles-pp.fpregs_assert_state_consistent
0.00 +0.1 0.15 ± 5% perf-profile.self.cycles-pp._raw_spin_unlock_irqrestore
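Most of the cycles in the profile above sit under percpu_counter_add_batch on both the mmap (__vm_enough_memory) and munmap paths, i.e. in flushing per-CPU deltas of the memory-commit accounting into one shared, lock-protected total. As a rough userspace analogue of that batching pattern (a hypothetical sketch, not the kernel's percpu_counter code), the shared lock becomes the hot spot once many threads hit the batch threshold at a high rate:
/* Rough userspace analogue of a batched per-CPU counter (hypothetical,
 * not the kernel implementation): each thread keeps a private delta and
 * only takes the shared lock when the delta reaches the batch size, so
 * a small batch relative to the update rate funnels work onto one lock. */
#include <pthread.h>
#include <stdio.h>
#define NTHREADS 4
#define BATCH    32
#define UPDATES  1000000
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long long global_count;
static void counter_add(long long *local, long long amount)
{
	*local += amount;
	if (*local >= BATCH || *local <= -BATCH) {
		pthread_mutex_lock(&lock);      /* contended flush of the batch */
		global_count += *local;
		pthread_mutex_unlock(&lock);
		*local = 0;
	}
}
static void *worker(void *arg)
{
	long long local = 0;
	for (int i = 0; i < UPDATES; i++)
		counter_add(&local, 1);         /* e.g. one page accounted */
	pthread_mutex_lock(&lock);              /* flush the remainder */
	global_count += local;
	pthread_mutex_unlock(&lock);
	return arg;
}
int main(void)
{
	pthread_t t[NTHREADS];
	for (int i = 0; i < NTHREADS; i++)
		pthread_create(&t[i], NULL, worker, NULL);
	for (int i = 0; i < NTHREADS; i++)
		pthread_join(t[i], NULL);
	printf("total = %lld\n", global_count);
	return 0;
}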
will-it-scale.per_process_ops
1800 +-+------------------------------------------------------------------+
| .+ .++.++.++. .++.++.++.+ .++.++.++.+ .+ .+|
1600 +-++.++ + ++.+ ++.+ ++.++ +.++ + + |
1400 +-+O OO OO OO OO OO :O :O OO OO O OO OO OO OO OO |
O O OO O O : : O |
1200 +-+ : : : : |
1000 +-+ : : : : |
| : : : : |
800 +-+ :: :: |
600 +-+ :: :: |
| :: :: |
400 +-+ :: :: |
200 +-+ : : |
| : : |
0 +-+------------------------------------------------------------------+
will-it-scale.workload
500000 +-+----------------------------------------------------------------+
450000 +-+++.++.++.++.+++.++.+ ++.+ ++.+++.++.++.++.++.+++.++.++.++.++.+|
| O O : : : : O O |
400000 OO+OO O OO O OOO OO OO OO OO OO OO O OO OO OO O |
350000 +-+ : : : : |
| : : : : |
300000 +-+ : : : : |
250000 +-+ : : : : |
200000 +-+ :: :: |
| :: :: |
150000 +-+ :: :: |
100000 +-+ : : |
| : : |
50000 +-+ : : |
0 +-+----------------------------------------------------------------+
[*] bisect-good sample
[O] bisect-bad sample
Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.
Thanks,
Rong Chen
[sock] 7c68c878c9: kernel_selftests.bpf.test_sock_fields.fail
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 7c68c878c9fb0edb23adbcf43cdf215aa4d06550 ("sock: Make sk_protocol a 16-bit value")
https://github.com/0day-ci/linux/commits/Mat-Martineau/Multipath-TCP-Prer...
in testcase: kernel_selftests
with the following parameters:
group: kselftests-00
ucode: 0x500002c
test-description: The kernel contains a set of "self tests" under the tools/testing/selftests/ directory. These are intended to be small unit tests to exercise individual code paths in the kernel.
test-url: https://www.kernel.org/doc/Documentation/kselftest.txt
on test machine: 192 threads Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz with 192G memory
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <rong.a.chen@intel.com>
# selftests: bpf: test_select_reuseport
# libbpf: failed to guess program type from ELF section 'select_by_skb_data'
# libbpf: supported section(type) names are: socket kprobe/ uprobe/ kretprobe/ uretprobe/ classifier action tracepoint/ tp/ raw_tracepoint/ raw_tp/ tp_btf/ fentry/ fexit/ xdp perf_event lwt_in lwt_out lwt_xmit lwt_seg6local cgroup_skb/ingress cgroup_skb/egress cgroup/skb cgroup/sock cgroup/post_bind4 cgroup/post_bind6 cgroup/dev sockops sk_skb/stream_parser sk_skb/stream_verdict sk_skb sk_msg lirc_mode2 flow_dissector cgroup/bind4 cgroup/bind6 cgroup/connect4 cgroup/connect6 cgroup/sendmsg4 cgroup/sendmsg6 cgroup/recvmsg4 cgroup/recvmsg6 cgroup/sysctl cgroup/getsockopt cgroup/setsockopt
# ######## IPv6/TCP LOOPBACK ########
# test_err_inner_map: unexpected result
# result: [0, 0, 0, 1, 0, 0]
# expected: [1, 0, 0, 0, 0, 0]
# check_results(343):FAIL:unexpected result expected_results[0] != results[0] bpf_prog_linum:142
not ok 17 selftests: bpf: test_select_reuseport # exit=255
# selftests: bpf: test_netcnt
# test_netcnt:PASS
ok 18 selftests: bpf: test_netcnt
# selftests: bpf: test_tcpnotify_user
# PASSED!
ok 19 selftests: bpf: test_tcpnotify_user
# selftests: bpf: test_sock_fields
# srv_sa6.sin6_port:52925 cli_sa6.sin6_port:49969
#
# check_sk_pkt_out_cnt(269):FAIL:bpf_map_lookup_elem(sk_pkt_out_cnt, &accept_fd) err:0 errno:115 pkt_out_cnt:2 pkt_out_cnt10:20
not ok 20 selftests: bpf: test_sock_fields # exit=255
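On the libbpf message at the top of this excerpt: libbpf guesses the BPF program type from the ELF section that the SEC() macro places the program in, so the section name has to begin with one of the prefixes it lists. A minimal, hypothetical example (not the selftest source) using a recognized name:
/* Hypothetical example: libbpf derives the program type from the ELF
 * section name set via SEC(), so the name must start with one of the
 * supported prefixes listed above ("socket" maps to a socket filter). */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
SEC("socket")
int count_bytes(struct __sk_buff *skb)
{
	return skb->len;        /* keep the whole packet */
}
char _license[] SEC("license") = "GPL";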
To reproduce:
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp install job.yaml # job file is attached in this email
bin/lkp run job.yaml
Thanks,
Rong Chen
[block] f216fdd77b: BUG:kernel_reboot-without-warning_in_boot_stage
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: f216fdd77b5654f8c4f6fac6020d6aabc58878ef ("block: replace seq_zones_bitmap with conv_zones_bitmap")
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git master
in testcase: kernel_selftests
with the following parameters:
group: kselftests-x86
test-description: The kernel contains a set of "self tests" under the tools/testing/selftests/ directory. These are intended to be small unit tests to exercise individual code paths in the kernel.
test-url: https://www.kernel.org/doc/Documentation/kselftest.txt
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+-------------------------------------------------------------------------+------------+------------+
| | 9b38bb4b1e | f216fdd77b |
+-------------------------------------------------------------------------+------------+------------+
| boot_successes | 0 | 0 |
| boot_failures | 4 | 4 |
| WARNING:at_drivers/base/core.c:#device_links_supplier_sync_state_resume | 4 | |
| RIP:device_links_supplier_sync_state_resume | 4 | |
| BUG:kernel_reboot-without-warning_in_boot_stage | 0 | 4 |
+-------------------------------------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <rong.a.chen@intel.com>
needed_size: 0x0000000006800000
trampoline_32bit: 0x000000000009d000
Decompressing Linux... Parsing ELF... done.
Booting the kernel.
BUG: kernel reboot-without-warning in boot stage
Linux version 5.4.0-10082-gf216fdd77b565 #1
Command line: ip=::::vm-snb-f9eef226434d::dhcp root=/dev/ram0 user=lkp job=/lkp/jobs/scheduled/vm-snb-f9eef226434d/kernel_selftests-kselftests-x86-debian-x86_64-2019-11-14.cgz-f216fdd77-20191206-11813-1cb6s7n-3.yaml ARCH=x86_64 kconfig=x86_64-randconfig-h002-20191205 branch=block/for-linus commit=f216fdd77b5654f8c4f6fac6020d6aabc58878ef BOOT_IMAGE=/pkg/linux/x86_64-randconfig-h002-20191205/gcc-7/f216fdd77b5654f8c4f6fac6020d6aabc58878ef/vmlinuz-5.4.0-10082-gf216fdd77b565 erst_disable max_uptime=3600 RESULT_ROOT=/result/kernel_selftests/kselftests-x86/vm-snb/debian-x86_64-2019-11-14.cgz/x86_64-randconfig-h002-20191205/gcc-7/f216fdd77b5654f8c4f6fac6020d6aabc58878ef/3 LKP_SERVER=inn debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 net.ifnames=0 printk.devkmsg=on panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 drbd.minor_count=8 systemd.log_level=err ignore_loglevel console=tty0 earlyprintk=ttyS0,115200 console=ttyS0,115200 vga=normal rw rcuperf.shutdown=0 watchdog_thresh=60
Elapsed time: 60
To reproduce:
# build kernel
cd linux
cp config-5.4.0-10082-gf216fdd77b565 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
Rong Chen
[xfs] 760eca8784: xfstests.xfs.231.fail
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 760eca8784712cccd3b1ecbdac31c9e6ee0c5a36 ("xfs: remove the separate cowblocks worker")
https://git.kernel.org/cgit/linux/kernel/git/djwong/xfs-linux.git repair-hard-problems
in testcase: xfstests
with the following parameters:
disk: 4HDD
fs: xfs
test: xfs-reflink
test-description: xfstests is a regression test suite for xfs and other filesystems.
test-url: git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <rong.a.chen@intel.com>
2020-01-04 15:17:35 export TEST_DIR=/fs/vda
2020-01-04 15:17:35 export TEST_DEV=/dev/vda
2020-01-04 15:17:35 export FSTYP=xfs
2020-01-04 15:17:35 export SCRATCH_MNT=/fs/scratch
2020-01-04 15:17:35 mkdir /fs/scratch -p
2020-01-04 15:17:35 export SCRATCH_DEV=/dev/vdd
2020-01-04 15:17:35 export SCRATCH_LOGDEV=/dev/vdb
2020-01-04 15:17:35 export SCRATCH_XFS_LIST_METADATA_FIELDS=u3.sfdir3.hdr.parent.i4
2020-01-04 15:17:35 export SCRATCH_XFS_LIST_FUZZ_VERBS=random
2020-01-04 15:17:35 export MKFS_OPTIONS=-mreflink=1
2020-01-04 15:17:35 sed "s:^:xfs/:" //lkp/benchmarks/xfstests/tests/xfs-reflink | grep -F -f merged_ignored_files
2020-01-04 15:17:35 sed "s:^:xfs/:" //lkp/benchmarks/xfstests/tests/xfs-reflink | grep -v -F -f merged_ignored_files
2020-01-04 15:17:35 ./check xfs/127 xfs/128 xfs/129 xfs/130 xfs/139 xfs/140 xfs/169 xfs/179 xfs/180 xfs/182 xfs/184 xfs/192 xfs/193 xfs/198 xfs/200 xfs/204 xfs/207 xfs/208 xfs/209 xfs/210 xfs/211 xfs/212 xfs/213 xfs/214 xfs/215 xfs/218 xfs/219 xfs/221 xfs/223 xfs/224 xfs/225 xfs/226 xfs/228 xfs/230 xfs/231 xfs/232 xfs/237 xfs/239 xfs/240 xfs/241 xfs/243 xfs/245 xfs/247 xfs/248 xfs/249 xfs/251 xfs/254 xfs/255 xfs/256 xfs/257 xfs/258 xfs/265 xfs/280 xfs/307 xfs/308 xfs/309 xfs/312 xfs/313 xfs/315 xfs/316 xfs/319 xfs/320 xfs/321 xfs/323 xfs/324 xfs/325 xfs/326 xfs/327 xfs/328 xfs/330 xfs/344 xfs/345 xfs/346 xfs/347 xfs/372 xfs/373 xfs/410 xfs/411 xfs/420 xfs/421 xfs/435 xfs/440 xfs/441 xfs/442 xfs/464 xfs/483 xfs/507
FSTYP -- xfs (debug)
PLATFORM -- Linux/x86_64 vm-snb-cbfbef064ba3 5.5.0-rc4-00064-g760eca8784712 #1 SMP Sat Jan 4 13:40:33 CST 2020
MKFS_OPTIONS -- -f -mreflink=1 /dev/vdd
MOUNT_OPTIONS -- /dev/vdd /fs/scratch
xfs/127 48s
xfs/128 47s
xfs/129 17s
xfs/130 7s
xfs/139 138s
xfs/140 430s
xfs/169 216s
xfs/179 7s
xfs/180 8s
xfs/182 10s
xfs/184 9s
xfs/192 8s
xfs/193 7s
xfs/198 10s
xfs/200 9s
xfs/204 10s
xfs/207 10s
xfs/208 10s
xfs/209 7s
xfs/210 7s
xfs/211 917s
xfs/212 6s
xfs/213 5s
xfs/214 6s
xfs/215 5s
xfs/218 5s
xfs/219 6s
xfs/221 6s
xfs/223 5s
xfs/224 5s
xfs/225 5s
xfs/226 6s
xfs/228 5s
xfs/230 6s
xfs/231 - output mismatch (see /lkp/benchmarks/xfstests/results//xfs/231.out.bad)
--- tests/xfs/231.out 2019-12-25 16:49:46.000000000 +0800
+++ /lkp/benchmarks/xfstests/results//xfs/231.out.bad 2020-01-04 15:51:17.105416824 +0800
@@ -1,4 +1,5 @@
QA output created by 231
+cat: /proc/sys/fs/xfs/speculative_cow_prealloc_lifetime: No such file or directory
Format and mount
Create the original files
Compare files
@@ -6,11 +7,14 @@
bdbcf02ee0aa977795a79d25fcfdccb1 SCRATCH_MNT/test-231/file2
...
(Run 'diff -u /lkp/benchmarks/xfstests/tests/xfs/231.out /lkp/benchmarks/xfstests/results//xfs/231.out.bad' to see the entire diff)
xfs/232 - output mismatch (see /lkp/benchmarks/xfstests/results//xfs/232.out.bad)
--- tests/xfs/232.out 2019-12-25 16:49:46.000000000 +0800
+++ /lkp/benchmarks/xfstests/results//xfs/232.out.bad 2020-01-04 15:51:25.238416824 +0800
@@ -1,4 +1,5 @@
QA output created by 232
+cat: /proc/sys/fs/xfs/speculative_cow_prealloc_lifetime: No such file or directory
Format and mount
Create the original files
Compare files
@@ -6,11 +7,14 @@
bdbcf02ee0aa977795a79d25fcfdccb1 SCRATCH_MNT/test-232/file2
...
(Run 'diff -u /lkp/benchmarks/xfstests/tests/xfs/232.out /lkp/benchmarks/xfstests/results//xfs/232.out.bad' to see the entire diff)
xfs/237 8s
xfs/239 7s
xfs/240 7s
xfs/241 7s
xfs/243 6s
xfs/245 4s
xfs/247 4s
xfs/248 5s
xfs/249 5s
xfs/251 6s
xfs/254 6s
xfs/255 6s
xfs/256 6s
xfs/257 7s
xfs/258 6s
xfs/265 28s
xfs/280 5s
xfs/307 7s
xfs/308 7s
xfs/309 30s
xfs/312 5s
xfs/313 6s
xfs/315 9s
xfs/316 5s
xfs/319 6s
xfs/320 7s
xfs/321 7s
xfs/323 8s
xfs/324 9s
xfs/325 6s
xfs/326 6s
xfs/327 4s
xfs/328 49s
xfs/330 4s
xfs/344 6s
xfs/345 4s
xfs/346 14s
xfs/347 7s
xfs/372 293s
xfs/373 26s
xfs/410 31s
xfs/411 30s
xfs/420 3s
xfs/421 4s
xfs/435 3s
xfs/440 4s
xfs/441 3s
xfs/442 258s
xfs/464 32s
xfs/483 27s
xfs/507 20s
Ran: xfs/127 xfs/128 xfs/129 xfs/130 xfs/139 xfs/140 xfs/169 xfs/179 xfs/180 xfs/182 xfs/184 xfs/192 xfs/193 xfs/198 xfs/200 xfs/204 xfs/207 xfs/208 xfs/209 xfs/210 xfs/211 xfs/212 xfs/213 xfs/214 xfs/215 xfs/218 xfs/219 xfs/221 xfs/223 xfs/224 xfs/225 xfs/226 xfs/228 xfs/230 xfs/231 xfs/232 xfs/237 xfs/239 xfs/240 xfs/241 xfs/243 xfs/245 xfs/247 xfs/248 xfs/249 xfs/251 xfs/254 xfs/255 xfs/256 xfs/257 xfs/258 xfs/265 xfs/280 xfs/307 xfs/308 xfs/309 xfs/312 xfs/313 xfs/315 xfs/316 xfs/319 xfs/320 xfs/321 xfs/323 xfs/324 xfs/325 xfs/326 xfs/327 xfs/328 xfs/330 xfs/344 xfs/345 xfs/346 xfs/347 xfs/372 xfs/373 xfs/410 xfs/411 xfs/420 xfs/421 xfs/435 xfs/440 xfs/441 xfs/442 xfs/464 xfs/483 xfs/507
Failures: xfs/231 xfs/232
Failed 2 of 87 tests
To reproduce:
# build kernel
cd linux
cp config-5.5.0-rc4-00064-g760eca8784712 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage modules
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 INSTALL_MOD_PATH=<mod-install-dir> modules_install
cd <mod-install-dir>
find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> -m modules.cgz job-script # job-script is attached in this email
Thanks,
Rong Chen
[slub] 43eedfb522: kernel_BUG_at_mm/slub.c
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 43eedfb5222fdb27697f96e19e32518489f0da3b ("slub: call BUG if next_object is not valid")
url: https://github.com/0day-ci/linux/commits/lijiazi/slub-call-BUG-if-next_ob...
in testcase: boot
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
+------------------------------------------+------------+------------+
| | 3a562aee72 | 43eedfb522 |
+------------------------------------------+------------+------------+
| boot_successes | 20 | 0 |
| boot_failures | 0 | 4 |
| kernel_BUG_at_mm/slub.c | 0 | 4 |
| invalid_opcode:#[##] | 0 | 4 |
| RIP:kmem_cache_alloc | 0 | 4 |
| Kernel_panic-not_syncing:Fatal_exception | 0 | 4 |
+------------------------------------------+------------+------------+
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <lkp@intel.com>
[ 0.260175] kernel BUG at mm/slub.c:2729!
[ 0.260534] invalid opcode: 0000 [#1] SMP PTI
[ 0.260806] CPU: 0 PID: 0 Comm: swapper Not tainted 5.5.0-rc4-00125-g43eedfb5222fd #2
[ 0.261334] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.10.2-1 04/01/2014
[ 0.261883] RIP: 0010:kmem_cache_alloc+0x79/0x169
[ 0.262177] Code: 89 e7 e8 3e ff ff ff 48 89 c5 eb 40 41 8b 44 24 20 48 89 14 24 48 8b 5c 05 00 48 89 df e8 41 1f df ff 84 c0 48 8b 14 24 75 02 <0f> 0b 49 8b 3c 24 48 8d 4a 01 48 89 e8 48 8d 37 e8 29 5c 95 00 84
[ 0.263852] RSP: 0000:ffffffff82803e38 EFLAGS: 00010046
[ 0.264270] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
[ 0.264715] RDX: 0000000000000012 RSI: 0000000000000100 RDI: 0000000080000000
[ 0.265159] RBP: ffff88822a4021c0 R08: ffff88823fc2e6c0 R09: 0000000000000000
[ 0.265641] R10: 0000000000000400 R11: 0000000000000000 R12: ffff88822a402000
[ 0.266109] R13: 0000000000000900 R14: ffff88822a402000 R15: ffffffff82df9345
[ 0.266554] FS: 0000000000000000(0000) GS:ffff88823fc00000(0000) knlGS:0000000000000000
[ 0.267110] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[ 0.267492] CR2: ffff88823ffff000 CR3: 000000000280a000 CR4: 00000000000406b0
[ 0.267965] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[ 0.268437] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[ 0.268918] Call Trace:
[ 0.269083] create_kmalloc_cache+0x31/0xa4
[ 0.269348] new_kmalloc_cache+0x41/0x4c
[ 0.269687] create_kmalloc_caches+0x34/0xdc
[ 0.270059] kmem_cache_init+0xad/0x103
[ 0.270393] start_kernel+0x214/0x4fd
[ 0.270750] ? x86_family+0x5/0x1d
[ 0.271063] secondary_startup_64+0xb6/0xc0
[ 0.271373] Modules linked in:
[ 0.271589] random: get_random_bytes called from init_oops_id+0x22/0x31 with crng_init=0
[ 0.272121] ---[ end trace 4f4aaf971ad2af9d ]---
To reproduce:
# build kernel
cd linux
cp config-5.5.0-rc4-00125-g43eedfb5222fd .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> job-script # job-script is attached in this email
Thanks,
lkp
[xfs] 2155ea1023: xfstests.xfs.262.fail
by kernel test robot
FYI, we noticed the following commit (built with gcc-7):
commit: 2155ea10235b3503ea0abd9df01e6b73a536991f ("xfs: repair inode block maps")
https://git.kernel.org/cgit/linux/kernel/git/djwong/xfs-linux.git repair-quota
in testcase: xfstests
with the following parameters:
disk: 4HDD
fs: xfs
test: xfs-group16
test-description: xfstests is a regression test suite for xfs and other filesystems.
test-url: git://git.kernel.org/pub/scm/fs/xfs/xfstests-dev.git
on test machine: qemu-system-x86_64 -enable-kvm -cpu SandyBridge -smp 2 -m 8G
caused the changes below (please refer to the attached dmesg/kmsg for the entire log/backtrace):
If you fix the issue, kindly add the following tag
Reported-by: kernel test robot <rong.a.chen@intel.com>
2020-01-03 23:48:55 export TEST_DIR=/fs/vda
2020-01-03 23:48:55 export TEST_DEV=/dev/vda
2020-01-03 23:48:55 export FSTYP=xfs
2020-01-03 23:48:55 export SCRATCH_MNT=/fs/scratch
2020-01-03 23:48:55 mkdir /fs/scratch -p
2020-01-03 23:48:55 export SCRATCH_DEV=/dev/vdd
2020-01-03 23:48:55 export SCRATCH_LOGDEV=/dev/vdb
2020-01-03 23:48:55 export SCRATCH_XFS_LIST_METADATA_FIELDS=u3.sfdir3.hdr.parent.i4
2020-01-03 23:48:55 export SCRATCH_XFS_LIST_FUZZ_VERBS=random
2020-01-03 23:48:55 sed "s:^:xfs/:" //lkp/benchmarks/xfstests/tests/xfs-group16 | grep -F -f merged_ignored_files
2020-01-03 23:48:55 sed "s:^:xfs/:" //lkp/benchmarks/xfstests/tests/xfs-group16 | grep -v -F -f merged_ignored_files
2020-01-03 23:48:55 ./check xfs/261 xfs/262 xfs/263 xfs/264 xfs/266 xfs/267 xfs/268 xfs/269 xfs/270 xfs/273 xfs/278 xfs/279
FSTYP -- xfs (debug)
PLATFORM -- Linux/x86_64 vm-snb-b61411d4a03e 5.5.0-rc4-00026-g2155ea10235b3 #1 SMP Fri Jan 3 23:22:59 CST 2020
MKFS_OPTIONS -- -f -bsize=4096 /dev/vdd
MOUNT_OPTIONS -- /dev/vdd /fs/scratch
xfs/261 2s
xfs/262 _check_dmesg: something found in dmesg (see /lkp/benchmarks/xfstests/results//xfs/262.dmesg)
- output mismatch (see /lkp/benchmarks/xfstests/results//xfs/262.out.bad)
--- tests/xfs/262.out 2019-12-25 16:49:46.000000000 +0800
+++ /lkp/benchmarks/xfstests/results//xfs/262.out.bad 2020-01-03 23:49:05.027241420 +0800
@@ -1,3 +1,17 @@
QA output created by 262
Format and populate
Force online repairs
+Corruption: inode 131 (0/131) data block map: Repair unsuccessful; offline repair required. (scrub.c line 790)
+Corruption: inode 132 (0/132) data block map: Repair unsuccessful; offline repair required. (scrub.c line 790)
+Corruption: inode 133 (0/133) data block map: Repair unsuccessful; offline repair required. (scrub.c line 790)
+Corruption: inode 134 (0/134) data block map: Repair unsuccessful; offline repair required. (scrub.c line 790)
...
(Run 'diff -u /lkp/benchmarks/xfstests/tests/xfs/262.out /lkp/benchmarks/xfstests/results//xfs/262.out.bad' to see the entire diff)
xfs/263 2s
xfs/264 9s
xfs/266 16s
xfs/267 [not run] No dump tape specified
xfs/268 [not run] No dump tape specified
xfs/269 2s
xfs/270 1s
xfs/273 164s
xfs/278 2s
xfs/279 7s
Ran: xfs/261 xfs/262 xfs/263 xfs/264 xfs/266 xfs/267 xfs/268 xfs/269 xfs/270 xfs/273 xfs/278 xfs/279
Not run: xfs/267 xfs/268
Failures: xfs/262
Failed 1 of 12 tests
To reproduce:
# build kernel
cd linux
cp config-5.5.0-rc4-00026-g2155ea10235b3 .config
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 olddefconfig prepare modules_prepare bzImage modules
make HOSTCC=gcc-7 CC=gcc-7 ARCH=x86_64 INSTALL_MOD_PATH=<mod-install-dir> modules_install
cd <mod-install-dir>
find lib/ | cpio -o -H newc --quiet | gzip > modules.cgz
git clone https://github.com/intel/lkp-tests.git
cd lkp-tests
bin/lkp qemu -k <bzImage> -m modules.cgz job-script # job-script is attached in this email
Thanks,
Rong Chen