Breaking up a bvec in a bio for reading more than 512

piyush moghe pmkernel at gmail.com
Sun Jan 12 23:03:18 EST 2014


Thanks, Pranay. I think you are correct: a bio is for sequential sectors.

Regards,
Piyush


On Tue, Jan 7, 2014 at 9:53 PM, Pranay Srivastava <pranjas at gmail.com> wrote:

>
> On Jan 7, 2014 8:12 PM, "pmkernel" <pmkernel at gmail.com> wrote:
> >
> > I have a query here. I think that with a bio you can do a scatter-gather
> > operation, i.e. perform I/O from multiple locations on disk using the bio
> > vector, which was a limitation of buffer heads.
> >
> From what I understand, the bio_vecs of a bio are contiguous on disk, so if
> my bio begins at sector X, the read/write covers the sectors that follow X.
>
> I think SG knows only about pages, not sectors.
>
> > So in this case, can you not read multiple sectors in a single bio
> > structure with multiple vector entries?
> >
> I don't think that's the case; it's just that they need to be contiguous.
> >
> > Sent from Samsung Mobile
> >
> >
> >
> > -------- Original message --------
> > From: Rajat Sharma <fs.rajat at gmail.com>
> > Date:
> > To: neha naik <nehanaik27 at gmail.com>
> > Cc: kernelnewbies <kernelnewbies at kernelnewbies.org>
> > Subject: Re: Breaking up a bvec in a bio for reading more than 512
> >
> >
> > I am not sure I understood your question correctly, but there are no
> > scatter-gather semantics in terms of disk/file offsets; I don't think any
> > OS would implement something that complex for no good reason. It would
> > take more than software to achieve: the disk controller would have to
> > support that kind of parallelism. Controllers already do, across
> > cylinders, but a single cylinder is laid out sequentially in terms of the
> > disk's _logical_ offset.
> >
> > Regards,
> > Rajat
> >
> >
> > On Mon, Jan 6, 2014 at 1:09 PM, neha naik <nehanaik27 at gmail.com> wrote:
> >>
> >> Hi Rajat,
> >>  I am not opposed to creating multiple bios. I just meant that if there
> >> is another method that does not involve breaking the bio (as I
> >> understand "breaking the bio"), I would love to know it.
> >>
> >> Regards,
> >> Neha
> >>
> >>
> >>
> >> On Mon, Jan 6, 2014 at 12:23 PM, Rajat Sharma <fs.rajat at gmail.com>
> wrote:
> >> > Why do you want to avoid creating multiple bios? If you have data on
> >> > multiple disks, create one bio per disk and submit them simultaneously
> >> > to take advantage of parallel I/O. If it is a single disk, the
> >> > elevator of the lower disk will do a good job of issuing the
> >> > reads/writes in serial order of disk seek. I don't see much saving in
> >> > not creating a bio; it is going to be allocated from a slab anyway.
> >> > The risk of leaving a bio in a corruptible state after customizing it
> >> > for one disk is also higher.
> >> >
> >> >
> >> > On Mon, Jan 6, 2014 at 10:03 AM, neha naik <nehanaik27 at gmail.com>
> wrote:
> >> >>
> >> >> Hi All,
> >> >>   I figured out the method by some trial and error and by looking at
> >> >> the Linux source code.
> >> >>   We can do something like this:
> >> >>       Say we want to read the pages of a bvec in 512-byte chunks.
> >> >> Create a bio with a single page and read a 512-byte chunk of data
> >> >> from wherever you want (it can be a different disk), then copy it
> >> >> into the original bvec's page:
> >> >>
> >> >>            /* bvec belongs to the original bio; page was read via
> >> >>             * the new bio. Use distinct atomic-kmap slots for the
> >> >>             * two simultaneous mappings, and unmap in reverse
> >> >>             * order of mapping. */
> >> >>            dst = kmap_atomic(bvec->bv_page, KM_USER0);
> >> >>            src = kmap_atomic(page, KM_USER1);
> >> >>            memcpy(dst + offset, src, 512);
> >> >>            kunmap_atomic(src, KM_USER1);
> >> >>            kunmap_atomic(dst, KM_USER0);
> >> >>
> >> >> My difficulty was not being able to access the highmem page in the
> >> >> kernel. I was earlier trying to increment the offset of the bvec and
> >> >> pass the page to the layer below, assuming it would read in the data
> >> >> at the correct offset, but of course that resulted in a panic. The
> >> >> above solves it. Of course, if there is some other method that does
> >> >> not involve creating a new bio, I would love to know it.
> >> >>
> >> >> Regards,
> >> >> Neha
> >> >>
> >> >>
> >> >> On Sat, Jan 4, 2014 at 9:32 AM, Pranay Srivastava <pranjas at gmail.com
> >
> >> >> wrote:
> >> >> >
> >> >> > On 04-Jan-2014 5:18 AM, "neha naik" <nehanaik27 at gmail.com> wrote:
> >> >> >>
> >> >> >> Hi All,
> >> >> >>    I am getting a request with bvec->bv_len > 512. The
> >> >> >> information to be read is scattered across the entire disk in
> >> >> >> 512-byte chunks, so the data on disk may live at, say, sector 8,
> >> >> >> sector 100, sector 9.
> >> >> >>  So if I get a read request with bvec->bv_len > 512, I need to
> >> >> >> pull the information in from multiple places on the disk, since
> >> >> >> the data is not sequentially located.
> >> >> >>  I tried to look at the Linux source code, because I think RAID
> >> >> >> must be doing this all the time (e.g. disk 1 may store sector 6,
> >> >> >> disk 2 may store sector 7, and so on).
> >> >> >
> >> >> > You are right. Perhaps you need to clone the bios and set them up
> >> >> > properly. I guess you ought to check the dm driver's make_request
> >> >> > function; it does clone bios.
> >> >> >
> >> >> > I don't know whether you can split that request while handling it.
> >> >> > Perhaps reinserting the request could work.
> >> >> >
> >> >> >>   However, I have not really gotten any useful information from
> >> >> >> it, and scouring articles on Google has not helped much.
> >> >> >>    I am hoping somebody can point me in the right direction.
> >> >> >>
> >> >> >> Thanks in advance,
> >> >> >> Neha
> >> >> >>
> >> >> >> _______________________________________________
> >> >> >> Kernelnewbies mailing list
> >> >> >> Kernelnewbies at kernelnewbies.org
> >> >> >> http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies
> >> >> >
> >> >> >       ---P.K.S
> >> >>
> >> >
> >> >
> >>
> >
> >
> >
> >
>
>

