Hi,<br> I am calling the merge function of the block device driver below me (since mine is only a passthrough). Does this not work?<br>When I traced the incoming read requests, I saw that a dd with count=1 actually retrieves 4 pages,<br>
so I tried the 'direct' flag. But even with direct I/O my read performance is far lower than my write performance.<br><br>Regards,<br>Neha<br><br><div class="gmail_quote">On Wed, Apr 10, 2013 at 11:15 PM, Rajat Sharma <span dir="ltr"><<a href="mailto:fs.rajat@gmail.com" target="_blank">fs.rajat@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hi,<br>
<div><div><br>
On Thu, Apr 11, 2013 at 2:23 AM, neha naik <<a href="mailto:nehanaik27@gmail.com" target="_blank">nehanaik27@gmail.com</a>> wrote:<br>
> Hi All,<br>
> Nobody has replied to my query here, so I am wondering whether there is a<br>
> dedicated forum for block device drivers where I can post it.<br>
> Please tell me if there is any such forum.<br>
><br>
> Thanks,<br>
> Neha<br>
><br>
> ---------- Forwarded message ----------<br>
> From: neha naik <<a href="mailto:nehanaik27@gmail.com" target="_blank">nehanaik27@gmail.com</a>><br>
> Date: Tue, Apr 9, 2013 at 10:18 AM<br>
> Subject: Passthrough device driver performance is low on reads compared to<br>
> writes<br>
> To: <a href="mailto:kernelnewbies@kernelnewbies.org" target="_blank">kernelnewbies@kernelnewbies.org</a><br>
><br>
><br>
> Hi All,<br>
> I have written a passthrough block device driver using the 'make_request'<br>
> call. This block device driver simply passes any request that comes to it<br>
> down to LVM.<br>
><br>
> However, the read performance of my passthrough driver is around 65MB/s<br>
> (measured with dd) and the write performance is around 140MB/s, at a dd<br>
> block size of 4096.<br>
> The write performance more or less matches LVM's, but LVM's read<br>
> performance is around 365MB/s.<br>
><br>
> I am posting snippets of code which i think are relevant here:<br>
><br>
> static int passthrough_make_request(<br>
> struct request_queue * queue, struct bio * bio)<br>
> {<br>
><br>
> passthrough_device_t * passdev = queue->queuedata;<br>
> bio->bi_bdev = passdev->bdev_backing;<br>
> generic_make_request(bio);<br>
> return 0;<br>
> }<br>
><br>
> For initializing the queue I am using the following:<br>
><br>
> blk_queue_make_request(passdev->queue, passthrough_make_request);<br>
> passdev->queue->queuedata = sbd;<br>
> passdev->queue->unplug_fn = NULL;<br>
> bdev_backing = passdev->bdev_backing;<br>
> blk_queue_stack_limits(passdev->queue, bdev_get_queue(bdev_backing));<br>
> if ((bdev_get_queue(bdev_backing))->merge_bvec_fn) {<br>
> blk_queue_merge_bvec(sbd->queue, sbd_merge_bvec_fn);<br>
> }<br>
><br>
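One thing worth checking in the initialization above: on these kernels, blk_queue_stack_limits() copies request-size limits but not the readahead setting, so the passthrough queue may be left at the default while the backing LVM volume has a larger one — that alone can explain buffered reads being much slower than on LVM directly. If that turns out to be the case, something along these lines (field names from the 2.6.3x struct request_queue — please verify against your kernel headers) could propagate the backing device's value; this is a sketch, not tested code:<br>

```c
/* Sketch only: assumes the passdev/bdev_backing variables from the
 * initialization code quoted above. blk_queue_stack_limits() does not
 * copy ra_pages, so do it explicitly after stacking the limits. */
passdev->queue->backing_dev_info.ra_pages =
	bdev_get_queue(bdev_backing)->backing_dev_info.ra_pages;
```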
<br>
</div></div>What is the implementation of sbd_merge_bvec_fn? Please debug through<br>
it to check whether requests are actually merging; maybe that is the cause<br>
of the lower performance.<br>
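For a pure passthrough, the merge callback typically just redirects the query to the backing device and forwards it to that device's own merge_bvec_fn. A minimal sketch, assuming the passthrough_device_t layout from the snippets above and the old (pre-4.3) merge_bvec interface — kernel-only code, so treat it as a reference, not a drop-in:<br>

```c
#include <linux/blkdev.h>

/* Sketch only: passthrough_device_t and bdev_backing are assumed from the
 * driver snippets quoted above. merge_bvec_fn returns the number of bytes
 * of the proposed bio_vec that the device will accept. */
static int sbd_merge_bvec_fn(struct request_queue *q,
			     struct bvec_merge_data *bvm,
			     struct bio_vec *biovec)
{
	passthrough_device_t *passdev = q->queuedata;
	struct request_queue *lower_q = bdev_get_queue(passdev->bdev_backing);

	/* Point the query at the backing device before forwarding it. */
	bvm->bi_bdev = passdev->bdev_backing;

	if (lower_q->merge_bvec_fn)
		return lower_q->merge_bvec_fn(lower_q, bvm, biovec);

	/* No restriction below us: accept the whole bio_vec. */
	return biovec->bv_len;
}
```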
<div><br>
> Now, I browsed through the dm code in the kernel to see if there is some<br>
> flag or something that I am not using which is causing this huge<br>
> performance penalty.<br>
> But I have not found anything.<br>
><br>
> If you have any ideas about what I am possibly doing wrong, then please<br>
> tell me.<br>
><br>
> Thanks in advance.<br>
><br>
> Regards,<br>
> Neha<br>
><br>
<br>
</div>-Rajat<br>
<br>
><br>
> _______________________________________________<br>
> Kernelnewbies mailing list<br>
> <a href="mailto:Kernelnewbies@kernelnewbies.org" target="_blank">Kernelnewbies@kernelnewbies.org</a><br>
> <a href="http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies" target="_blank">http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies</a><br>
><br>
</blockquote></div><br>
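Regarding the observation at the top of this thread that a count=1 dd pulls in 4 pages: that is page-cache readahead, and a passthrough device often comes up with a different readahead value than the device below it. A quick way to compare and adjust the two from userspace (device paths here are hypothetical — substitute your passthrough node and backing LV; values are in 512-byte sectors):

```shell
# Hypothetical device paths; substitute your own nodes.
blockdev --getra /dev/passthrough0     # readahead of the passthrough device
blockdev --getra /dev/mapper/vg0-lv0   # readahead of the backing LVM volume

# If the passthrough value is much smaller, match it to the backing device:
blockdev --setra 256 /dev/passthrough0
```

If buffered-read throughput jumps after matching the values, the driver should propagate the setting itself at queue-initialization time rather than relying on a manual tweak.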