Breaking up a bvec in a bio for reading more than 512
neha naik
nehanaik27 at gmail.com
Mon Jan 6 16:09:58 EST 2014
Hi Rajat,
   I am not opposed to creating multiple bios. I just meant that if
there is another method which does not involve breaking up the bio (as I
understand "breaking the bio"), I would love to know about it.
Regards,
Neha
On Mon, Jan 6, 2014 at 12:23 PM, Rajat Sharma <fs.rajat at gmail.com> wrote:
> Why do you want to avoid creating multiple bios? If you have data on multiple
> disks, create one bio per disk and submit them simultaneously to take
> advantage of parallel I/O. If it is a single disk, the lower layer's elevator
> will do a good job of merging and ordering them by disk seek. I don't see
> much saving in avoiding bio creation; bios are allocated from a slab cache
> anyway. Also, the risk of leaving a bio in a corrupted state after
> customizing it for one disk is higher.
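>
> Roughly, the multiple-bio approach looks like this (an untested sketch
> against a 3.x-era API; chunk[], ctx and chunk_end_io() are made-up names
> just for illustration):
>
>     /* one bio per 512-byte chunk, possibly on different disks */
>     for (i = 0; i < nr_chunks; i++) {
>             struct bio *bio = bio_alloc(GFP_NOIO, 1);
>
>             bio->bi_bdev    = chunk[i].bdev;    /* target disk for this chunk */
>             bio->bi_sector  = chunk[i].sector;  /* bi_iter.bi_sector on newer kernels */
>             bio->bi_end_io  = chunk_end_io;     /* decrements a pending counter */
>             bio->bi_private = ctx;
>             bio_add_page(bio, chunk[i].page, 512, 0);
>             submit_bio(READ, bio);              /* submit without waiting in between */
>     }
>     /* wait for all completions, then copy each chunk into the original bvec */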
>
>
> On Mon, Jan 6, 2014 at 10:03 AM, neha naik <nehanaik27 at gmail.com> wrote:
>>
>> Hi All,
>> I figured out a method by trial and error and by looking at the Linux
>> source code. We can do something like this: say we want to fill the pages
>> of a bvec in 512-byte chunks. Create a bio with a single page and read a
>> 512-byte chunk of data from wherever it lives (it can be on different
>> disks), then copy it into the original bvec:
>>
>>     dst = kmap_atomic(bvec->bv_page, KM_USER0); /* bvec is from the original bio */
>>     src = kmap_atomic(page, KM_USER0);          /* page we read via the new bio  */
>>     memcpy(dst + offset, src, 512);
>>     kunmap_atomic(src, KM_USER0);
>>     kunmap_atomic(dst, KM_USER0);
>>
>> My difficulty was that I could not access the highmem page in the
>> kernel. Earlier I was trying to increment the offset in the bvec and pass
>> the page to the layer below, assuming it would read the data in at the
>> correct offset, but of course that resulted in a panic. The above solves
>> that. Of course, if there is some other method which does not involve
>> creating any bio, I would love to know it.
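>>
>> For reference, here is the whole flow as a rough, untested sketch (3.x-era
>> API; read_chunk_into_bvec() and the error-handling details are just for
>> illustration, submit_bio_wait() needs a recent kernel -- otherwise wait on
>> a completion from bi_end_io -- and kmap_atomic() is shown without the
>> KM_USER0 argument that older kernels still require):
>>
>>     /* Read one 512-byte chunk from 'sector' of 'bdev' into the original
>>      * bio's bvec at byte 'offset'. */
>>     static int read_chunk_into_bvec(struct block_device *bdev, sector_t sector,
>>                                     struct bio_vec *bvec, unsigned int offset)
>>     {
>>             struct page *page;
>>             struct bio *bio;
>>             void *src, *dst;
>>             int err = -ENOMEM;
>>
>>             page = alloc_page(GFP_NOIO);
>>             if (!page)
>>                     return err;
>>             bio = bio_alloc(GFP_NOIO, 1);
>>             if (!bio)
>>                     goto out_page;
>>
>>             bio->bi_bdev   = bdev;
>>             bio->bi_sector = sector;           /* bi_iter.bi_sector on newer kernels */
>>             bio_add_page(bio, page, 512, 0);
>>
>>             err = submit_bio_wait(READ, bio);  /* synchronous read of the chunk */
>>             if (err)
>>                     goto out_bio;
>>
>>             dst = kmap_atomic(bvec->bv_page);  /* page of the original bio */
>>             src = kmap_atomic(page);           /* page we just read */
>>             memcpy(dst + bvec->bv_offset + offset, src, 512);
>>             kunmap_atomic(src);
>>             kunmap_atomic(dst);
>>     out_bio:
>>             bio_put(bio);
>>     out_page:
>>             __free_page(page);
>>             return err;
>>     }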
>>
>> Regards,
>> Neha
>>
>>
>> On Sat, Jan 4, 2014 at 9:32 AM, Pranay Srivastava <pranjas at gmail.com>
>> wrote:
>> >
>> > On 04-Jan-2014 5:18 AM, "neha naik" <nehanaik27 at gmail.com> wrote:
>> >>
>> >> Hi All,
>> >> I am getting a request with bvec->bv_len > 512, but the information
>> >> to be read is scattered across the disk in 512-byte chunks, so the data
>> >> on disk can be at, say, sector 8, sector 100, sector 9. If I get a read
>> >> request with bvec->bv_len > 512, I need to pull the information in from
>> >> multiple places on disk, since the data is not located sequentially.
>> >> I tried looking at the Linux source code because I think RAID must be
>> >> doing this all the time (e.g. sector 6 may be stored on disk 1 and
>> >> sector 7 on disk 2, and so on).
>> >
>> > You are right. Perhaps you need to clone the bio and set the clones up
>> > properly. I guess you ought to check the dm driver's make_request
>> > function; it does clone bios.
>> >
>> > I don't know if you can split that request while handling it. Perhaps
>> > reinserting that request could work.
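>> >
>> > A minimal sketch of the clone-and-redirect idea (untested; target_bdev,
>> > target_sector and my_clone_end_io are placeholders, not real names):
>> >
>> >     struct bio *clone = bio_clone(orig_bio, GFP_NOIO);
>> >
>> >     if (!clone)
>> >             return -ENOMEM;
>> >     clone->bi_bdev    = target_bdev;    /* disk that actually holds the data */
>> >     clone->bi_sector  = target_sector;  /* remapped location on that disk */
>> >     clone->bi_end_io  = my_clone_end_io;
>> >     clone->bi_private = orig_bio;       /* so the completion can end the original */
>> >     submit_bio(clone->bi_rw, clone);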
>> >
>> >> However, I have not really got any useful information from it, and
>> >> scouring through articles on Google has not helped much either.
>> >> I am hoping somebody can point me in the right direction.
>> >>
>> >> Thanks in advance,
>> >> Neha
>> >>
>> >
>> > ---P.K.S
>>
>
>