Generic I/O

Kai Meyer kai at gnukai.com
Tue Nov 15 13:40:26 EST 2011


On 11/15/2011 11:13 AM, michi1 at michaelblizek.twilightparadox.com wrote:
> Hi!
>
> On 12:15 Mon 14 Nov     , Kai Meyer wrote:
> ...
>
>> My
>> caller function has an atomic_t value that I set equal to the number of
>> bios I want to submit. Then I pass a pointer to that atomic_t around to
>> each of the bios which decrement it in the endio function for that bio.
>>
>> Then the caller does this:
>> while (atomic_read(numbios) > 0)
>>           msleep(1);
>>
>> I'm finding the msleep(1) is a really really really long time,
>> relatively. It seems to work ok if I just have an empty loop, but it
>> also seems to me like I'm re-inventing a wheel here.
> ...
>
> You might want to take a look at wait queues (the kernel equivalent of pthread
> "conditions"). Basically, instead of calling msleep(), you call
> wait_event(). In the function which decrements numbios, you check whether it
> has reached 0 and, if so, call wake_up().
>
> 	-Michi

That sounds very promising. When I read up on wait_event here:
lxr.linux.no/#linux+v2.6.32/include/linux/wait.h#L191

It sounds like it's basically doing the same thing. I would call it like so:

wait_event(wq, atomic_read(numbios) == 0);

To make sure I understand: this seems very much like what I'm doing, 
except that I'm woken up every time a bio finishes instead of once 
every millisecond. That is, I'm assuming I would use the same wait 
queue for all my bios.

During my testing, when I do a lot of disk I/O, I may potentially have 
hundreds of threads, each waiting on anywhere between 1 and 32 bios. Help 
me understand the impact you'd expect between having hundreds of threads 
sleeping in 1 ms intervals and having hundreds of threads woken up each 
time a bio completes. wait_event seems like it would be a clear win in 
low-I/O scenarios, especially with fast disks involved. My concern is 
that during heavy I/O loads I'll be doing a lot of atomic_reads, and I 
have the impression that atomic_read isn't the cheapest operation.

-Kai Meyer



More information about the Kernelnewbies mailing list