Forum for asking questions related to block device drivers

neha naik nehanaik27 at gmail.com
Thu Apr 11 11:09:33 EDT 2013


Hi,
 I am calling the merge function of the block device driver below me (since
mine is only a pass-through). Does this not work?
When I traced the incoming read requests, I saw that when I issue dd with
count=1 it retrieves 4 pages, so I tried the 'direct' flag. But even with
direct I/O my read performance is far lower than my write performance.
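For reference, the buffered-versus-direct comparison above can be reproduced against an ordinary file first (substitute the passthrough device node when measuring the real driver; the file path and sizes here are illustrative only):

```shell
# Stand-in for the block device: a 64 MiB test file (point dd at your
# passthrough node instead when measuring the actual driver).
dd if=/dev/zero of=/tmp/pt_test.img bs=4096 count=16384 conv=fsync 2>/dev/null

# Buffered read: the page cache and readahead inflate the apparent
# request size, which is why a count=1 read appeared to fetch 4 pages.
dd if=/tmp/pt_test.img of=/dev/null bs=4096 2>&1 | tail -n 1

# Direct read, bypassing the page cache and readahead entirely
# (iflag=direct fails on filesystems without O_DIRECT support, e.g. tmpfs).
dd if=/tmp/pt_test.img of=/dev/null bs=4096 iflag=direct 2>&1 | tail -n 1
```

Running the same pair of reads against the underlying LVM volume isolates whether the gap comes from the passthrough layer itself or from readahead behaviour.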

Regards,
Neha

On Wed, Apr 10, 2013 at 11:15 PM, Rajat Sharma <fs.rajat at gmail.com> wrote:

> Hi,
>
> On Thu, Apr 11, 2013 at 2:23 AM, neha naik <nehanaik27 at gmail.com> wrote:
> > Hi All,
> >    Nobody has replied to my query here, so I am just wondering if there
> > is a forum for block device drivers where I can post my query.
> > Please tell me if there is any such forum.
> >
> > Thanks,
> > Neha
> >
> > ---------- Forwarded message ----------
> > From: neha naik <nehanaik27 at gmail.com>
> > Date: Tue, Apr 9, 2013 at 10:18 AM
> > Subject: Passthrough device driver performance is low on reads
> > compared to writes
> > To: kernelnewbies at kernelnewbies.org
> >
> >
> > Hi All,
> >   I have written a passthrough block device driver using the
> > 'make_request' call. This block device driver simply passes any request
> > that comes to it down to LVM.
> >
> > However, the read performance of my passthrough driver is around 65 MB/s
> > (measured with dd) and the write performance is around 140 MB/s, for a
> > dd block size of 4096.
> > The write performance more or less matches LVM's write performance, but
> > the read performance on LVM alone is around 365 MB/s.
> >
> > I am posting the snippets of code which I think are relevant here:
> >
> > static int passthrough_make_request(struct request_queue *queue,
> >                                     struct bio *bio)
> > {
> >         passthrough_device_t *passdev = queue->queuedata;
> >
> >         bio->bi_bdev = passdev->bdev_backing;
> >         generic_make_request(bio);
> >         return 0;
> > }
> >
> > For initializing the queue I am using the following:
> >
> > blk_queue_make_request(passdev->queue, passthrough_make_request);
> > passdev->queue->queuedata = passdev;
> > passdev->queue->unplug_fn = NULL;
> > bdev_backing = passdev->bdev_backing;
> > blk_queue_stack_limits(passdev->queue, bdev_get_queue(bdev_backing));
> > if ((bdev_get_queue(bdev_backing))->merge_bvec_fn) {
> >         blk_queue_merge_bvec(passdev->queue, sbd_merge_bvec_fn);
> > }
> >
>
> What is the implementation of sbd_merge_bvec_fn? Please debug through
> it to check whether requests are merging or not; maybe that is the
> cause of the lower performance.
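For reference, a pure pass-through merge_bvec_fn typically just forwards the query to the backing queue. Below is a minimal sketch, assuming the passthrough_device_t/queuedata layout from the snippet above and the merge_bvec_fn signature of the 2.6.3x-era kernels that the thread's unplug_fn/merge_bvec_fn usage implies; it is illustrative, not the poster's actual implementation:

```c
/* Hypothetical sbd_merge_bvec_fn for a pass-through driver: ask the
 * backing device whether the bio_vec may be added, so merging is
 * neither over- nor under-restricted at this layer. */
static int sbd_merge_bvec_fn(struct request_queue *q,
                             struct bvec_merge_data *bvm,
                             struct bio_vec *biovec)
{
        passthrough_device_t *passdev = q->queuedata;
        struct request_queue *backing_q =
                bdev_get_queue(passdev->bdev_backing);

        /* Backing queue has no restriction: accept the whole segment. */
        if (!backing_q->merge_bvec_fn)
                return biovec->bv_len;

        /* Re-target the query at the backing device before forwarding. */
        bvm->bi_bdev = passdev->bdev_backing;
        return backing_q->merge_bvec_fn(backing_q, bvm, biovec);
}
```

If such a forwarding function is in place and requests still do not merge, comparing blktrace output on the passthrough device and on the LVM volume directly will show where the 4 KiB reads stop being coalesced.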
>
> > Now, I browsed through the dm code in the kernel to see if there is
> > some flag or something which I am not using that is causing this huge
> > performance penalty, but I have not found anything.
> >
> > If you have any ideas about what I am possibly doing wrong, please
> > tell me.
> >
> > Thanks in advance.
> >
> > Regards,
> > Neha
> >
>
> -Rajat
>
> >
> > _______________________________________________
> > Kernelnewbies mailing list
> > Kernelnewbies at kernelnewbies.org
> > http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies
> >
>

