Re: [LKP] [lkp] [mm, page_alloc] d0164adc89: -100.0% fsmark.app_overhead
by Michal Hocko
On Wed 02-12-15 14:08:52, Mel Gorman wrote:
> On Wed, Dec 02, 2015 at 01:00:46PM +0100, Michal Hocko wrote:
> > On Wed 02-12-15 11:00:09, Mel Gorman wrote:
> > > On Mon, Nov 30, 2015 at 10:14:24AM +0800, Huang, Ying wrote:
> > > > > There is no reference to OOM possibility in the email that I can see. Can
> > > > > you give examples of the OOM messages that show the problem sites? It was
> > > > > suspected that there may be some callers that were accidentally depending
> > > > > on access to emergency reserves. If so, either they need to be fixed (if
> > > > > the case is extremely rare) or a small reserve will have to be created
> > > > > for callers that are not high priority but still cannot reclaim.
> > > > >
> > > > > Note that I'm travelling a lot over the next two weeks so I'll be slow to
> > > > > respond but I will get to it.
> > > >
> > > > Here is the kernel log, the full dmesg is attached too. The OOM
> > > > occurs during fsmark testing.
> > > >
> > > > Best Regards,
> > > > Huang, Ying
> > > >
> > > > [ 31.453514] kworker/u4:0: page allocation failure: order:0, mode:0x2200000
> > > > [ 31.463570] CPU: 0 PID: 6 Comm: kworker/u4:0 Not tainted 4.3.0-08056-gd0164ad #1
> > > > [ 31.466115] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
> > > > [ 31.477146] Workqueue: writeback wb_workfn (flush-253:0)
> > > > [ 31.481450] 0000000000000000 ffff880035ac75e8 ffffffff8140a142 0000000002200000
> > > > [ 31.492582] ffff880035ac7670 ffffffff8117117b ffff880037586b28 ffff880000000040
> > > > [ 31.507631] ffff88003523b270 0000000000000040 ffff880035abc800 ffffffff00000000
> > >
> > > This is an allocation failure and is not a triggering of the OOM killer so
> > > the severity is reduced but it still looks like a bug in the driver. Looking
> > > at the history and the discussion, it appears to me that __GFP_HIGH was
> > > cleared from the allocation site by accident. I strongly suspect that Will
> > > Deacon thought __GFP_HIGH was related to highmem instead of being related
> > > to high priority. Will, can you review the following patch please? Ying,
> > > can you test please?
> >
> > I have posted basically the same patch
> > http://lkml.kernel.org/r/1448980369-27130-1-git-send-email-mhocko@kernel.org
> >
>
> Sorry. I missed that while playing catch-up and I wasn't on the cc. I'll
> drop this patch now. Thanks for catching it.
My bad. I should have CCed you. But I considered this merely a cleanup
so I didn't want to swamp you with another email.
> > I didn't mention this allocation failure because I am not sure it is
> > really related.
> >
>
> I'm fairly sure it is. The failure is an allocation site that cannot
> sleep but did not specify __GFP_HIGH.
Yeah, but this was the case even before your patch. Since the caller used
GFP_ATOMIC, it got __GFP_ATOMIC after your patch, so it still managed
to do ALLOC_HARDER. I would agree if this were an explicit GFP_NOWAIT.
Unless I am missing something, your patch hasn't changed the behavior
for this particular allocation.
> Such callers are normally expected
> to be able to recover gracefully and probably should specify __GFP_NOWARN.
> kswapd would have woken up as normal but the free pages were below the
> min watermark so there was a brief failure.
--
Michal Hocko
SUSE Labs
Re: [LKP] [lkp] [mm, page_alloc] d0164adc89: -100.0% fsmark.app_overhead
by Michal Hocko
On Wed 02-12-15 11:00:09, Mel Gorman wrote:
> On Mon, Nov 30, 2015 at 10:14:24AM +0800, Huang, Ying wrote:
> > > There is no reference to OOM possibility in the email that I can see. Can
> > > you give examples of the OOM messages that show the problem sites? It was
> > > suspected that there may be some callers that were accidentally depending
> > > on access to emergency reserves. If so, either they need to be fixed (if
> > > the case is extremely rare) or a small reserve will have to be created
> > > for callers that are not high priority but still cannot reclaim.
> > >
> > > Note that I'm travelling a lot over the next two weeks so I'll be slow to
> > > respond but I will get to it.
> >
> > Here is the kernel log, the full dmesg is attached too. The OOM
> > occurs during fsmark testing.
> >
> > Best Regards,
> > Huang, Ying
> >
> > [ 31.453514] kworker/u4:0: page allocation failure: order:0, mode:0x2200000
> > [ 31.463570] CPU: 0 PID: 6 Comm: kworker/u4:0 Not tainted 4.3.0-08056-gd0164ad #1
> > [ 31.466115] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
> > [ 31.477146] Workqueue: writeback wb_workfn (flush-253:0)
> > [ 31.481450] 0000000000000000 ffff880035ac75e8 ffffffff8140a142 0000000002200000
> > [ 31.492582] ffff880035ac7670 ffffffff8117117b ffff880037586b28 ffff880000000040
> > [ 31.507631] ffff88003523b270 0000000000000040 ffff880035abc800 ffffffff00000000
>
> This is an allocation failure and is not a triggering of the OOM killer so
> the severity is reduced but it still looks like a bug in the driver. Looking
> at the history and the discussion, it appears to me that __GFP_HIGH was
> cleared from the allocation site by accident. I strongly suspect that Will
> Deacon thought __GFP_HIGH was related to highmem instead of being related
> to high priority. Will, can you review the following patch please? Ying,
> can you test please?
I have posted basically the same patch
http://lkml.kernel.org/r/1448980369-27130-1-git-send-email-mhocko@kernel.org
I didn't mention this allocation failure because I am not sure it is
really related.
> ---8<---
> virtio: allow vring descriptor allocations to use high-priority reserves
>
> Commit b92b1b89a33c ("virtio: force vring descriptors to be allocated
> from lowmem") prevented the inappropriate use of highmem pages but it
> also masked out __GFP_HIGH. __GFP_HIGH is used for GFP_ATOMIC allocation
> requests to grant access to a small emergency reserve. It's intended for
> use by callers that have no alternative.
>
> Ying Huang reported the following page allocation failure warning after
> commit d0164adc89f6 ("mm, page_alloc: distinguish between being unable to
> sleep, unwilling to sleep and avoiding waking kswapd")
>
> kworker/u4:0: page allocation failure: order:0, mode:0x2200000
> CPU: 0 PID: 6 Comm: kworker/u4:0 Not tainted 4.3.0-08056-gd0164ad #1
> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
> Workqueue: writeback wb_workfn (flush-253:0)
> 0000000000000000 ffff880035ac75e8 ffffffff8140a142 0000000002200000
> ffff880035ac7670 ffffffff8117117b ffff880037586b28 ffff880000000040
> ffff88003523b270 0000000000000040 ffff880035abc800 ffffffff00000000
> Call Trace:
> [<ffffffff8140a142>] dump_stack+0x4b/0x69
> [<ffffffff8117117b>] warn_alloc_failed+0xdb/0x140
> [<ffffffff81174ec4>] __alloc_pages_nodemask+0x874/0xa60
> [<ffffffff811bcb62>] alloc_pages_current+0x92/0x120
> [<ffffffff811c73e4>] new_slab+0x3d4/0x480
> [<ffffffff811c7c36>] __slab_alloc+0x376/0x470
> [<ffffffff814e0ced>] ? alloc_indirect+0x1d/0x50
> [<ffffffff81338221>] ? xfs_submit_ioend_bio+0x31/0x40
> [<ffffffff814e0ced>] ? alloc_indirect+0x1d/0x50
> [<ffffffff811c8e8d>] __kmalloc+0x20d/0x260
> [<ffffffff814e0ced>] alloc_indirect+0x1d/0x50
> [<ffffffff814e0fec>] virtqueue_add_sgs+0x2cc/0x3a0
> [<ffffffff81573a30>] __virtblk_add_req+0xb0/0x1f0
> [<ffffffff8117a121>] ? pagevec_lookup_tag+0x21/0x30
> [<ffffffff813e5d72>] ? blk_rq_map_sg+0x1e2/0x4f0
> [<ffffffff81573c82>] virtio_queue_rq+0x112/0x280
> [<ffffffff813e9de7>] __blk_mq_run_hw_queue+0x1d7/0x370
> [<ffffffff813e9bef>] blk_mq_run_hw_queue+0x9f/0xc0
> [<ffffffff813eb10a>] blk_mq_insert_requests+0xfa/0x1a0
> [<ffffffff813ebdb3>] blk_mq_flush_plug_list+0x123/0x140
> [<ffffffff813e1777>] blk_flush_plug_list+0xa7/0x200
> [<ffffffff813e1c49>] blk_finish_plug+0x29/0x40
> [<ffffffff81215f85>] wb_writeback+0x185/0x2c0
> [<ffffffff812166a5>] wb_workfn+0xf5/0x390
> [<ffffffff81091297>] process_one_work+0x157/0x420
> [<ffffffff81091ef9>] worker_thread+0x69/0x4a0
> [<ffffffff81091e90>] ? rescuer_thread+0x380/0x380
> [<ffffffff8109746f>] kthread+0xef/0x110
> [<ffffffff81097380>] ? kthread_park+0x60/0x60
> [<ffffffff818bce8f>] ret_from_fork+0x3f/0x70
> [<ffffffff81097380>] ? kthread_park+0x60/0x60
>
> Commit d0164adc89f6 ("mm, page_alloc: distinguish between being unable to
> sleep, unwilling to sleep and avoiding waking kswapd") is stricter about
> reserves. It distinguishes between callers that are high-priority with
> access to emergency reserves and callers that simply do not want to sleep
> and have recovery options. The reported allocation failure is a truly atomic
> allocation with no recovery options; __GFP_HIGH appears to have been cleared
> by mistake for reasons unrelated to highmem. This patch restores the flag.
>
> Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
> ---
> drivers/virtio/virtio_ring.c | 5 +++--
> 1 file changed, 3 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> index 096b857e7b75..f9e119e6df18 100644
> --- a/drivers/virtio/virtio_ring.c
> +++ b/drivers/virtio/virtio_ring.c
> @@ -107,9 +107,10 @@ static struct vring_desc *alloc_indirect(struct virtqueue *_vq,
> /*
> * We require lowmem mappings for the descriptors because
> * otherwise virt_to_phys will give us bogus addresses in the
> - * virtqueue.
> + * virtqueue. Access to high-priority reserves is preserved
> + * if originally requested by GFP_ATOMIC.
> */
> - gfp &= ~(__GFP_HIGHMEM | __GFP_HIGH);
> + gfp &= ~__GFP_HIGHMEM;
>
> desc = kmalloc(total_sg * sizeof(struct vring_desc), gfp);
> if (!desc)
--
Michal Hocko
SUSE Labs
Re: [LKP] [lkp] [mm, page_alloc] d0164adc89: -100.0% fsmark.app_overhead
by Huang, Ying
Mel Gorman <mgorman@techsingularity.net> writes:
> On Fri, Nov 27, 2015 at 09:14:52AM +0800, Huang, Ying wrote:
>> Hi, Mel,
>>
>> Mel Gorman <mgorman@techsingularity.net> writes:
>>
>> > On Thu, Nov 26, 2015 at 08:56:12AM +0800, kernel test robot wrote:
>> >> FYI, we noticed the below changes on
>> >>
>> >> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
>> >> commit d0164adc89f6bb374d304ffcc375c6d2652fe67d ("mm, page_alloc:
>> >> distinguish between being unable to sleep, unwilling to sleep and
>> >> avoiding waking kswapd")
>> >>
>> >> Note: the testing machine is a virtual machine with only 1G memory.
>> >>
>> >
>> > I'm not actually seeing any problem here. Is this a positive report or
>> > am I missing something obvious?
>>
>> Sorry, the email subject is generated automatically and I forgot to
>> change it to something meaningful before sending it out. From the testing
>> result, we found the commit makes the OOM possibility increase from 0%
>> to 100% on this machine with small memory. I also added proc-vmstat
>> information to help diagnose it.
>>
>
> There is no reference to OOM possibility in the email that I can see. Can
> you give examples of the OOM messages that show the problem sites? It was
> suspected that there may be some callers that were accidentally depending
> on access to emergency reserves. If so, either they need to be fixed (if
> the case is extremely rare) or a small reserve will have to be created
> for callers that are not high priority but still cannot reclaim.
>
> Note that I'm travelling a lot over the next two weeks so I'll be slow to
> respond but I will get to it.
Here is the kernel log, the full dmesg is attached too. The OOM
occurs during fsmark testing.
Best Regards,
Huang, Ying
[ 31.453514] kworker/u4:0: page allocation failure: order:0, mode:0x2200000
[ 31.463570] CPU: 0 PID: 6 Comm: kworker/u4:0 Not tainted 4.3.0-08056-gd0164ad #1
[ 31.466115] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[ 31.477146] Workqueue: writeback wb_workfn (flush-253:0)
[ 31.481450] 0000000000000000 ffff880035ac75e8 ffffffff8140a142 0000000002200000
[ 31.492582] ffff880035ac7670 ffffffff8117117b ffff880037586b28 ffff880000000040
[ 31.507631] ffff88003523b270 0000000000000040 ffff880035abc800 ffffffff00000000
[ 31.510568] Call Trace:
[ 31.511828] [<ffffffff8140a142>] dump_stack+0x4b/0x69
[ 31.513391] [<ffffffff8117117b>] warn_alloc_failed+0xdb/0x140
[ 31.523163] [<ffffffff81174ec4>] __alloc_pages_nodemask+0x874/0xa60
[ 31.524949] [<ffffffff811bcb62>] alloc_pages_current+0x92/0x120
[ 31.526659] [<ffffffff811c73e4>] new_slab+0x3d4/0x480
[ 31.536134] [<ffffffff811c7c36>] __slab_alloc+0x376/0x470
[ 31.537541] [<ffffffff814e0ced>] ? alloc_indirect+0x1d/0x50
[ 31.543268] [<ffffffff81338221>] ? xfs_submit_ioend_bio+0x31/0x40
[ 31.545104] [<ffffffff814e0ced>] ? alloc_indirect+0x1d/0x50
[ 31.546982] [<ffffffff811c8e8d>] __kmalloc+0x20d/0x260
[ 31.548334] [<ffffffff814e0ced>] alloc_indirect+0x1d/0x50
[ 31.549805] [<ffffffff814e0fec>] virtqueue_add_sgs+0x2cc/0x3a0
[ 31.555396] [<ffffffff81573a30>] __virtblk_add_req+0xb0/0x1f0
[ 31.556846] [<ffffffff8117a121>] ? pagevec_lookup_tag+0x21/0x30
[ 31.558318] [<ffffffff813e5d72>] ? blk_rq_map_sg+0x1e2/0x4f0
[ 31.563880] [<ffffffff81573c82>] virtio_queue_rq+0x112/0x280
[ 31.565307] [<ffffffff813e9de7>] __blk_mq_run_hw_queue+0x1d7/0x370
[ 31.571005] [<ffffffff813e9bef>] blk_mq_run_hw_queue+0x9f/0xc0
[ 31.572472] [<ffffffff813eb10a>] blk_mq_insert_requests+0xfa/0x1a0
[ 31.573982] [<ffffffff813ebdb3>] blk_mq_flush_plug_list+0x123/0x140
[ 31.583686] [<ffffffff813e1777>] blk_flush_plug_list+0xa7/0x200
[ 31.585138] [<ffffffff813e1c49>] blk_finish_plug+0x29/0x40
[ 31.586542] [<ffffffff81215f85>] wb_writeback+0x185/0x2c0
[ 31.592429] [<ffffffff812166a5>] wb_workfn+0xf5/0x390
[ 31.594037] [<ffffffff81091297>] process_one_work+0x157/0x420
[ 31.599804] [<ffffffff81091ef9>] worker_thread+0x69/0x4a0
[ 31.601484] [<ffffffff81091e90>] ? rescuer_thread+0x380/0x380
[ 31.611368] [<ffffffff8109746f>] kthread+0xef/0x110
[ 31.612953] [<ffffffff81097380>] ? kthread_park+0x60/0x60
[ 31.619418] [<ffffffff818bce8f>] ret_from_fork+0x3f/0x70
[ 31.621221] [<ffffffff81097380>] ? kthread_park+0x60/0x60
[ 31.635226] Mem-Info:
[ 31.636569] active_anon:4942 inactive_anon:1643 isolated_anon:0
[ 31.636569] active_file:23196 inactive_file:110131 isolated_file:251
[ 31.636569] unevictable:92329 dirty:2865 writeback:1925 unstable:0
[ 31.636569] slab_reclaimable:10588 slab_unreclaimable:3390
[ 31.636569] mapped:2848 shmem:1687 pagetables:876 bounce:0
[ 31.636569] free:1932 free_pcp:218 free_cma:0
[ 31.667096] Node 0 DMA free:3948kB min:60kB low:72kB high:88kB active_anon:264kB inactive_anon:128kB active_file:1544kB inactive_file:5296kB unevictable:3136kB isolated(anon):0kB isolated(file):236kB present:15992kB managed:15908kB mlocked:0kB dirty:0kB writeback:0kB mapped:440kB shmem:128kB slab_reclaimable:588kB slab_unreclaimable:304kB kernel_stack:112kB pagetables:80kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:3376 all_unreclaimable? no
[ 31.708140] lowmem_reserve[]: 0 972 972 972
[ 31.710104] Node 0 DMA32 free:3780kB min:3824kB low:4780kB high:5736kB active_anon:19504kB inactive_anon:6444kB active_file:91240kB inactive_file:435228kB unevictable:366180kB isolated(anon):0kB isolated(file):768kB present:1032064kB managed:997532kB mlocked:0kB dirty:11460kB writeback:7700kB mapped:10952kB shmem:6620kB slab_reclaimable:41764kB slab_unreclaimable:13256kB kernel_stack:2752kB pagetables:3424kB unstable:0kB bounce:0kB free_pcp:872kB local_pcp:232kB free_cma:0kB writeback_tmp:0kB pages_scanned:140404 all_unreclaimable? no
[ 31.743737] lowmem_reserve[]: 0 0 0 0
[ 31.745320] Node 0 DMA: 7*4kB (UME) 2*8kB (UM) 2*16kB (ME) 1*32kB (E) 0*64kB 2*128kB (ME) 2*256kB (ME) 2*512kB (UM) 2*1024kB (ME) 0*2048kB 0*4096kB = 3948kB
[ 31.757513] Node 0 DMA32: 1*4kB (U) 0*8kB 4*16kB (UME) 3*32kB (UE) 3*64kB (UM) 1*128kB (U) 1*256kB (U) 0*512kB 3*1024kB (UME) 0*2048kB 0*4096kB = 3812kB
[ 31.766470] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[ 31.772953] 227608 total pagecache pages
[ 31.774127] 0 pages in swap cache
[ 31.775428] Swap cache stats: add 0, delete 0, find 0/0
[ 31.776785] Free swap = 0kB
[ 31.777799] Total swap = 0kB
[ 31.779569] 262014 pages RAM
[ 31.780584] 0 pages HighMem/MovableOnly
[ 31.781744] 8654 pages reserved
[ 31.790944] 0 pages hwpoisoned
[ 31.792008] SLUB: Unable to allocate memory on node -1 (gfp=0x2080000)
[ 31.793537] cache: kmalloc-128, object size: 128, buffer size: 128, default order: 0, min order: 0
[ 31.796088] node 0: slabs: 27, objs: 864, free: 0
[x86/irq] e59533f991: BUG: kernel test hang
by kernel test robot
Greetings,
0day kernel testing robot got the below dmesg and the first bad commit is
https://github.com/jiangliu/linux.git test/x86_apic_vector
commit e59533f991f771b03ecdcc00387c375579f87ed2
Author: Jiang Liu <jiang.liu@linux.intel.com>
AuthorDate: Fri Nov 27 16:59:19 2015 +0800
Commit: Jiang Liu <jiang.liu@linux.intel.com>
CommitDate: Mon Nov 30 13:10:42 2015 +0800
x86/irq: Enhance __assign_irq_vector() to rollback in case of failure
Enhance __assign_irq_vector() to rollback in case of failure so the
caller doesn't need to explicitly rollback.
Signed-off-by: Jiang Liu <jiang.liu@linux.intel.com>
+----------------------+------------+------------+------------+
| | 02ac2d3d62 | e59533f991 | 61d85206ea |
+----------------------+------------+------------+------------+
| boot_successes | 115 | 0 | 0 |
| boot_failures | 0 | 41 | 12 |
| BUG:kernel_boot_hang | 0 | 18 | 7 |
| BUG:kernel_test_hang | 0 | 23 | 5 |
+----------------------+------------+------------+------------+
[21961.014772] Writes: Total: 2 Max/Min: 0/0 Fail: 0
Elapsed time: 21930
BUG: kernel test hang
qemu-system-x86_64 -enable-kvm -cpu Haswell,+smep,+smap -kernel /pkg/linux/i386-randconfig-a0-201548/gcc-5/e59533f991f771b03ecdcc00387c375579f87ed2/vmlinuz-4.4.0-rc2-00079-ge59533f -append 'hung_task_panic=1 earlyprintk=ttyS0,115200 systemd.log_level=err debug apic=debug sysrq_always_enabled rcupdate.rcu_cpu_stall_timeout=100 panic=-1 softlockup_panic=1 nmi_watchdog=panic oops=panic load_ramdisk=2 prompt_ramdisk=0 console=ttyS0,115200 console=tty0 vga=normal root=/dev/ram0 rw link=/kbuild-tests/run-queue/kvm/i386-randconfig-a0-201548/jiangliu:test:x86_apic_vector:e59533f991f771b03ecdcc00387c375579f87ed2:bisect-linux-5/.vmlinuz-e59533f991f771b03ecdcc00387c375579f87ed2-20151201032433-11-ivb41 branch=jiangliu/test/x86_apic_vector BOOT_IMAGE=/pkg/linux/i386-randconfig-a0-201548/gcc-5/e59533f991f771b03ecdcc00387c375579f87ed2/vmlinuz-4.4.0-rc2-00079-ge59533f drbd.minor_count=8' -initrd /osimage/yocto/yocto-minimal-i386.cgz -m 256 -smp 1 -device e1000,netdev=net0 -netdev user,id=net0 -boot order=nc -no-reboot -watchdog i6300esb -rtc base=localtime -drive file=/fs/sda5/disk0-yocto-ivb41-12,media=disk,if=virtio -drive file=/fs/sda5/disk1-yocto-ivb41-12,media=disk,if=virtio -drive file=/fs/sda5/disk2-yocto-ivb41-12,media=disk,if=virtio -drive file=/fs/sda5/disk3-yocto-ivb41-12,media=disk,if=virtio -drive file=/fs/sda5/disk4-yocto-ivb41-12,media=disk,if=virtio -drive file=/fs/sda5/disk5-yocto-ivb41-12,media=disk,if=virtio -drive file=/fs/sda5/disk6-yocto-ivb41-12,media=disk,if=virtio -pidfile /dev/shm/kboot/pid-yocto-ivb41-12 -serial file:/dev/shm/kboot/serial-yocto-ivb41-12 -daemonize -display none -monitor null
git bisect start 61d85206ea751c349f57dddd10dcfee3f64f9d8f 78c4a49a69e910a162b05e4e8727b9bdbf948f13 --
git bisect bad e59533f991f771b03ecdcc00387c375579f87ed2 # 09:31 16- 30 x86/irq: Enhance __assign_irq_vector() to rollback in case of failure
git bisect good 02ac2d3d620ad3c091db1c3f8a8c6396fefaa0d9 # 09:44 38+ 0 x86/irq: Do not reuse struct apic_chip_data.old_domain as temporary buffer
# first bad commit: [e59533f991f771b03ecdcc00387c375579f87ed2] x86/irq: Enhance __assign_irq_vector() to rollback in case of failure
git bisect good 02ac2d3d620ad3c091db1c3f8a8c6396fefaa0d9 # 09:51 108+ 0 x86/irq: Do not reuse struct apic_chip_data.old_domain as temporary buffer
# extra tests on HEAD of jiangliu/test/x86_apic_vector
git bisect bad 61d85206ea751c349f57dddd10dcfee3f64f9d8f # 09:51 0- 12 x86/irq: Trivial cleanups for x86 vector allocation code
# extra tests on tree/branch jiangliu/test/x86_apic_vector
git bisect bad 61d85206ea751c349f57dddd10dcfee3f64f9d8f # 09:51 0- 12 x86/irq: Trivial cleanups for x86 vector allocation code
# extra tests on tree/branch linus/master
git bisect good 2255702db4014d1c69d6037ed7bdad2d2e271985 # 09:58 110+ 1 Merge tag 'mn10300-for-linus-v4.4-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/groeck/linux-staging
# extra tests on tree/branch linux-next/master
git bisect good 0dc00719ee740f4b9d6589371d37aff6f9840249 # 10:17 108+ 5 Add linux-next specific files for 20151127
This script may reproduce the error.
----------------------------------------------------------------------------
#!/bin/bash
kernel=$1
initrd=yocto-minimal-i386.cgz
wget --no-clobber https://github.com/fengguang/reproduce-kernel-bug/raw/master/initrd/$initrd
kvm=(
qemu-system-x86_64
-enable-kvm
-cpu Haswell,+smep,+smap
-kernel $kernel
-initrd $initrd
-m 256
-smp 1
-device e1000,netdev=net0
-netdev user,id=net0
-boot order=nc
-no-reboot
-watchdog i6300esb
-rtc base=localtime
-serial stdio
-display none
-monitor null
)
append=(
hung_task_panic=1
earlyprintk=ttyS0,115200
systemd.log_level=err
debug
apic=debug
sysrq_always_enabled
rcupdate.rcu_cpu_stall_timeout=100
panic=-1
softlockup_panic=1
nmi_watchdog=panic
oops=panic
load_ramdisk=2
prompt_ramdisk=0
console=ttyS0,115200
console=tty0
vga=normal
root=/dev/ram0
rw
drbd.minor_count=8
)
"${kvm[@]}" --append "${append[*]}"
----------------------------------------------------------------------------
---
0-DAY kernel test infrastructure Open Source Technology Center
https://lists.01.org/pipermail/lkp Intel Corporation