<div dir="ltr">Hello Pranay!<div><br></div><div>Thanks for your reply. I apologize for my very late reply, I was very preoccupied earlier at work.<br><div class="gmail_extra"><br><br><div class="gmail_quote">On Tue, Jun 24, 2014 at 1:07 PM, Pranay Srivastava <span dir="ltr"><<a href="mailto:pranjas@gmail.com" target="_blank">pranjas@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">Hello Alvin,<br>
<div class=""><br>
On Mon, Jun 23, 2014 at 10:39 PM, Alvin Abitria <<a href="mailto:abitria.alvin@gmail.com">abitria.alvin@gmail.com</a>> wrote:<br>
> Hello,<br>
><br>
> I'm developing a block driver using the make_request method, effectively<br>
> bypassing existing scsi or request stack in block layer. So that means im<br>
> directly working with bios. As prescribed in linux documentation and from<br>
> referring to similar drivers in kernel, you close a session with a bio with<br>
> the bio_endio function.<br>
<br>
</div>So it means you are just passing on the bios without the request<br>
structure if I'm correct?<br>
I don't know how you are handling blk_finish_plug without having<br>
request or request queue,<br>
I maybe wrong in understanding how you are handling it.<br>
<div class=""><br></div></blockquote><div>Yes, I'm working on bio's level. No struct requests, and I haven't used blk_finish_plug yet.</div><div>The block driver method I'm implementing is somewhat along the same line with nvme and mtip2xxx</div>
<div>drivers in drivers/block directory (but differing in hardware specific level of course). </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
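To make that concrete, here is roughly how the queue gets wired up - a
trimmed sketch rather than my actual code. struct my_dev and the limit
macros are placeholders, and on some kernel versions the limit helpers go
by different names (e.g. blk_queue_max_phys_segments instead of
blk_queue_max_segments before 2.6.34):

#include <linux/blkdev.h>
#include <linux/bio.h>

#define MY_MAX_SECTORS   256	/* placeholder hardware limits */
#define MY_MAX_SEGMENTS   32

struct my_dev {
	struct request_queue *queue;
	atomic_t inflight;	/* bios issued but not yet completed */
};

static int block_make_request(struct request_queue *q, struct bio *bio);

static int my_dev_setup_queue(struct my_dev *dev)
{
	dev->queue = blk_alloc_queue(GFP_KERNEL);
	if (!dev->queue)
		return -ENOMEM;

	/* Register the bio entry point: bios from generic_make_request
	 * come straight here, bypassing the request/scsi stack. */
	blk_queue_make_request(dev->queue, block_make_request);

	/* Advertise what the hardware can take in one bio. */
	blk_queue_max_hw_sectors(dev->queue, MY_MAX_SECTORS);
	blk_queue_max_segments(dev->queue, MY_MAX_SEGMENTS);

	dev->queue->queuedata = dev;
	return 0;
}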
<div class="">
><br>
> I usually invoke bio_endio during successful I/O completion, meaning with an<br>
> error code of zero. But there are cases that this is not fulfilled or there<br>
> are error cases. My question is, what are the valid error codes that can be<br>
> used with it? My initial impression is that other than zero as error code,<br>
</div>-EIO is the one that you should use I think,<br>
<div class="">> bio_endio will fail. I've read somewhere that -EBUSY is not recognized, and<br>
> I tried -EIO but my driver crashed. I got a panic in some dio_xxx function<br>
> leading from bio_endio(bio,-EIO). I would like to block subsequent bios sent<br>
<br>
</div>If it's okay for you to post the error then can you do that? I was<br>
seeing the code for<br>
dio_end_io but it would be good if you can post the exact crash<br>
backtrace if you've got that.<br></blockquote><div><br></div><div>Here you go:</div><div><br></div><div>BUG: unable to handle kernel NULL pointer dereference at (null)</div><div>IP: [<ffffffff811b9a80>] bio_check_pages_dirty+0x50/0xe0</div>
PGD 42e5e4067 PUD 42e6e7067 PMD 0
Oops: 0000 [#1] SMP
last sysfs file: /sys/devices/system/cpu/cpu0/cache/index0/coherency_line_size
CPU 7
Modules linked in: block_module(U) fuse ip6table_filter ip6_tables ebtable_nat ebtables ipt_MASQUERADE iptable_nat nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 xt_state nf_conntrack ipt_REJECT xt_CHECKSUM iptable_mangle iptable_filter ip_tables bridge autofs4 sunrpc 8021q garp stp llc cpufreq_ondemand freq_table pcc_cpufreq ipv6 vhost_net macvtap macvlan tun kvm_intel kvm uinput power_meter hpilo hpwdt sg tg3 microcode serio_raw iTCO_wdt iTCO_vendor_support ioatdma dca shpchp ext4 mbcache jbd2 sd_mod crc_t10dif hpsa pata_acpi ata_generic ata_piix dm_mirror dm_region_hash dm_log dm_mod [last unloaded: scsi_wait_scan]

Pid: 3740, comm: fio Not tainted 2.6.32-358.el6.x86_64 #1 HP ProLiant DL380p Gen8
RIP: 0010:[<ffffffff811b9a80>]  [<ffffffff811b9a80>] bio_check_pages_dirty+0x50/0xe0
RSP: 0018:ffff8804191618c8  EFLAGS: 00010046
RAX: 2000000000000000 RBX: ffff88041909f0c0 RCX: 00000000000011ae
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
RBP: ffff8804191618f8 R08: ffffffff81c07728 R09: 0000000000000040
R10: 0000000000000002 R11: 0000000000000002 R12: 0000000000000000
R13: ffff8804191b9b80 R14: ffff8804191b9b80 R15: 0000000000000000
FS:  00007fcd43e2d720(0000) GS:ffff8800366e0000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000000 CR3: 000000041e69f000 CR4: 00000000000407e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Process fio (pid: 3740, threadinfo ffff880419160000, task ffff88043341f500)
Stack:
 0000000000000000 ffff88043433f400 ffff88043433f520 ffff88041909f0c0
<d> ffff88041909f0c0 ffff8804191b9b80 ffff880419161948 ffffffff811bdc38
<d> ffff880419161968 0000000034236400 00000000fffffffb ffff88043433f400
Call Trace:
 [<ffffffff811bdc38>] dio_bio_complete+0xc8/0xd0
 [<ffffffff811bef4f>] dio_bio_end_aio+0x2f/0xd0
 [<ffffffff811b920d>] bio_endio+0x1d/0x40
 [<ffffffffa02c15a1>] block_make_request+0xe1/0x150 [block_module]
 [<ffffffff8125ccce>] generic_make_request+0x25e/0x530
 [<ffffffff811bae72>] ? bvec_alloc_bs+0x62/0x110
 [<ffffffff8125d02d>] submit_bio+0x8d/0x120
 [<ffffffff811bdf6c>] dio_bio_submit+0xbc/0xc0
 [<ffffffff811be951>] __blockdev_direct_IO_newtrunc+0x631/0xb30
 [<ffffffff8111afe3>] ? filemap_fault+0xd3/0x500
 [<ffffffff811beeae>] __blockdev_direct_IO+0x5e/0xd0
 [<ffffffff811bb280>] ? blkdev_get_blocks+0x0/0xc0
 [<ffffffff811bc347>] blkdev_direct_IO+0x57/0x60
 [<ffffffff811bb280>] ? blkdev_get_blocks+0x0/0xc0
 [<ffffffff8111bb8b>] generic_file_aio_read+0x6bb/0x700
 [<ffffffff81166a2a>] ? kmem_getpages+0xba/0x170
 [<ffffffff81166f87>] ? cache_grow+0x217/0x320
 [<ffffffff811bb893>] blkdev_aio_read+0x53/0xc0
 [<ffffffff8111c633>] ? mempool_alloc+0x63/0x140
 [<ffffffff811bb840>] ? blkdev_aio_read+0x0/0xc0
 [<ffffffff811cadc4>] aio_rw_vect_retry+0x84/0x200
 [<ffffffff811cc784>] aio_run_iocb+0x64/0x170
 [<ffffffff811cdbb1>] do_io_submit+0x291/0x920
 [<ffffffff811ce250>] sys_io_submit+0x10/0x20
 [<ffffffff8100b072>] system_call_fastpath+0x16/0x1b
Code: 31 e4 45 31 ff eb 15 0f 1f 40 00 0f b7 43 28 41 83 c4 01 41 83 c7 01 44 39 e0 7e 36 4d 63 ec 49 c1 e5 04 4f 8d 2c 2e 49 8b 7d 00 <48> 8b 07 a8 10 75 06 66 a9 00 c0 74 d3 e8 5e 59 f7 ff 49 c7 45
RIP  [<ffffffff811b9a80>] bio_check_pages_dirty+0x50/0xe0
 RSP <ffff8804191618c8>
CR2: 0000000000000000

Basically, after receiving the bio from generic_make_request, I checked and
found that I already had the maximum number of outstanding I/Os (bios)
pending and couldn't accommodate this one. Rather than just exiting my
make_request() routine, I decided to close the bio, i.e. return it to the
kernel, via bio_endio(bio, -EIO). No processing or modification was done to
the bio; I only made that one call on it.
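In code terms, the failing path is more or less this - a simplified
sketch, where dev->inflight and MY_QUEUE_DEPTH stand in for my real tag
accounting:

#define MY_QUEUE_DEPTH 64	/* placeholder */

/* make_request_fn still returns int on this 2.6.32-era kernel. */
static int block_make_request(struct request_queue *q, struct bio *bio)
{
	struct my_dev *dev = q->queuedata;

	if (atomic_read(&dev->inflight) >= MY_QUEUE_DEPTH) {
		/* No tags left: fail the untouched bio. This is the
		 * bio_endio(bio, -EIO) that leads into dio_bio_end_aio()
		 * -> dio_bio_complete() -> bio_check_pages_dirty() in
		 * the trace above. */
		bio_endio(bio, -EIO);
		return 0;
	}

	atomic_inc(&dev->inflight);
	/* ... build and issue the hardware command; the completion
	 * handler later calls bio_endio(bio, 0) and decrements
	 * dev->inflight ... */
	return 0;
}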
<div class="">
> to me after reaching my queue depth and with no tags left, and so I want to<br>
> use bio_endio with an error code.<br>
<br>
</div>If you have a request queue then you could call blk_stop_queue and<br>
blk_start_queue but I don't know if this is<br>
relevant to your case.<br></blockquote><div><br></div><div>I do have a request queue allocated using blk_alloc_queue, but this is more of a dummy queue</div><div>in my case - I don't use it as much as the struct requests would use it. Its function that I used is</div>
to advertise the maximum I/O size and the maximum number of buffer segments
that can be sent to my driver.

Thanks for the suggestion. I haven't tried blk_stop_queue and
blk_start_queue, but I hope they will work. I also like the possibility of
preventing the upper layers from sending me further bios when my queue
depth is full, and then letting them know once my driver can accommodate
new bios again - in effect telling them "I'm busy right now, don't send me
more bios for a while." Is that the general idea behind the two functions
you mentioned?

I also notice that the two kernel APIs you gave just set or clear a request
queue flag (QUEUE_FLAG_STOPPED). I'd like to think that the upper layers
(generic_make_request etc.) check that flag first to decide whether they
can dispatch bios to my driver - is that right?
<div class=""><br>
><br>
> What are those error codes, and will they work for my intended function?<br>
> Thanks!<br>
<br>
</div>-EIO should work, but first let's find out why you got the crash.<br>
<br></blockquote><div>Thanks. I hope we get to the bottom why it failed and crashed. Let me know if you have questions. </div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">
<span class=""><font color="#888888"><br>
<br>
<br>
--<br>
---P.K.S<br>
</font></span></blockquote></div>Alvin</div></div></div>