wait queues semaphores kernel implementations

Ruben Safir ruben at mrbrklyn.com
Wed Apr 22 07:23:48 EDT 2015


Ruben quoted previously:

<<<I'm poring over Love's kernel book (Linux Kernel Development) in detail,
and the section in Chapter 4 on how the wait queue is implemented completely
surprised me.

He is recommending that you write your own wait queue entry routine for
every process?  Isn't that reckless?

He is suggesting

DEFINE_WAIT(wait);          /* what IS wait EXACTLY in this context? */

add_wait_queue(q, &wait);   /* in the current kernel this involves
                               flag checking and a linked list */

while (!condition) {        /* condition is the event we are waiting for */
        prepare_to_wait(q, &wait, TASK_INTERRUPTIBLE);
        if (signal_pending(current)) {
                /* handle the signal, e.g. bail out of the loop */
                break;
        }
        schedule();
}

finish_wait(q, &wait);
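
As best I can tell, DEFINE_WAIT(wait) roughly gives you a wait_queue_t on the
stack that points at the current task and uses autoremove_wake_function as its
wake callback -- something like this (paraphrased from memory of
include/linux/wait.h in kernels of that era, so treat the exact fields as
approximate):

wait_queue_t wait = {
        .private   = current,                  /* the task that will sleep */
        .func      = autoremove_wake_function, /* wakes .private and unlinks
                                                  the entry from the queue */
        .task_list = LIST_HEAD_INIT(wait.task_list),
};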

He also writes about how this proceeds to function, and one part confuses me:

5.  When the task awakens, it again checks whether the condition is
true.  If it is, it exits the loop.  Otherwise, it again calls schedule().


This is not the order that it seems to follow according to the code.

To me it looks like it should:
1. create the wait queue entry (DEFINE_WAIT)
2. add &wait onto queue q
3. check whether the condition is true; if not, enter the while loop
4. prepare_to_wait(), which sets the task's state to TASK_INTERRUPTIBLE
5. check for signals ... notice the process is still moving.  Does it
   stop and wait now?
6. schedule() itself on the runtime rbtree ... which makes NO sense unless
   there was a stoppage I didn't know about (see the annotated sketch after
   this list)
7. check the condition again and repeat the while loop
	7a. if the loop ends, finish_wait() ... take it off the queue.
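
Writing the loop out again with those step numbers attached, the stop, as far
as I can tell, is the schedule() call.  (A sketch only: assume q points to an
initialized wait_queue_head_t and "condition" stands for whatever event flag
we are waiting on.)

static int wait_for_condition(wait_queue_head_t *q)
{
        DEFINE_WAIT(wait);                      /* step 1: the wait entry  */

        add_wait_queue(q, &wait);               /* step 2: hang it on q    */
        while (!condition) {                    /* step 3: test the event  */
                /* step 4: mark current as TASK_INTERRUPTIBLE (and make
                 * sure the entry is still on the queue) */
                prepare_to_wait(q, &wait, TASK_INTERRUPTIBLE);

                if (signal_pending(current)) {  /* step 5: still running   */
                        finish_wait(q, &wait);
                        return -ERESTARTSYS;
                }

                /* step 6: this is the stoppage.  Because the task state is
                 * TASK_INTERRUPTIBLE, schedule() takes the task off the
                 * runqueue and does not return until a wake_up() on q (or
                 * a signal) makes it runnable again. */
                schedule();
                /* step 7: we resume here after waking and loop back to
                 * re-check the condition */
        }
        finish_wait(q, &wait);                  /* step 7a: off the queue  */
        return 0;
}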



Isn't it reckless to leave this code to the users to write?  You're
begging for a race condition.

Ruben >>



On 04/21/2015 11:05 AM, michi1 at michaelblizek.twilightparadox.com wrote:
> Hi!
> 
> On 12:39 Mon 20 Apr     , Ruben Safir wrote:
>> On 04/20/2015 11:23 AM, michi1 at michaelblizek.twilightparadox.com wrote:
>>> I would not recommend that. There are already functions in linux/wait.h for
>>> these purposes like wait_event_interruptible(). 
>>
>>
>> Can you do that in the kernel?  Does wait_event_interruptible() create
>> wait queues?
> 
> No, wait_event_interruptible() waits on an existing waitqueue. If you want to
> create a waitqueue, call init_waitqueue_head().
> 
> 	-Michi
> 
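
If I follow that, the wait_event_interruptible() route looks roughly like the
sketch below.  (The names my_queue, data_ready and the two helpers are made up
for illustration, and locking around the condition is omitted.)

#include <linux/wait.h>
#include <linux/sched.h>

static wait_queue_head_t my_queue;
static int data_ready;

static void my_setup(void)
{
        init_waitqueue_head(&my_queue);   /* create the waitqueue head */
}

/* Sleeper: blocks until data_ready becomes true or a signal arrives.
 * wait_event_interruptible() builds the wait entry and does the
 * prepare_to_wait()/schedule()/finish_wait() dance internally; it returns
 * 0 when the condition is true and -ERESTARTSYS if interrupted. */
static int my_wait_for_data(void)
{
        return wait_event_interruptible(my_queue, data_ready != 0);
}

/* Waker: sets the condition and wakes any sleepers on the queue. */
static void my_publish_data(void)
{
        data_ready = 1;
        wake_up_interruptible(&my_queue);
}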


Here is the confusing part.  This is a discussion of wait and semaphores
in a standard operating systems text:

5.6.2  Semaphore Implementation
Recall that the implementation of mutex locks discussed in Section 5.5
suffers from busy waiting. The definitions of the wait() and signal()
semaphore operations just described present the same problem. To
overcome the need for busy waiting, we can modify the definition of the
wait() and signal() operations as follows: When a process executes the
wait() operation and finds that the semaphore value is not positive, it
must wait. However, rather than engaging in busy waiting, the process
can block itself. The block operation places a process into a waiting
queue associated with the semaphore, and the state of the process is
switched to the waiting state. Then control is transferred to the CPU
scheduler, which selects another process to execute.

A process that is blocked, waiting on a semaphore S, should be restarted
when some other process executes a signal() operation. The process is
restarted by a wakeup() operation, which changes the process from the
waiting state to the ready state. The process is then placed in the
ready queue. (The CPU may or may not be switched from the running
process to the newly ready process, depending on the CPU-scheduling
algorithm.)

To implement semaphores under this definition, we define a semaphore as
follows:

typedef struct {
    int value;
    struct process *list;
} semaphore;

Each semaphore has an integer value and a list of processes, list. When
a process must wait on a semaphore, it is added to the list of processes. A
signal() operation removes one process from the list of waiting
processes and awakens that process.
Now, the wait() semaphore operation can be defined as

wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        add this process to S->list;
        block();
    }
}

and the signal() semaphore operation can be defined as

signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        remove a process P from S->list;
        wakeup(P);
    }
}
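
For comparison, the kernel's own semaphore follows essentially the same
recipe internally.  The sketch below is loosely based on the approach in
kernel/locking/semaphore.c, simplified and written from memory, so take the
details as illustrative rather than exact:

struct semaphore {
        raw_spinlock_t   lock;
        unsigned int     count;
        struct list_head wait_list;             /* the textbook's S->list */
};

struct semaphore_waiter {
        struct list_head   list;
        struct task_struct *task;
        bool               up;
};

void down(struct semaphore *sem)
{
        unsigned long flags;

        raw_spin_lock_irqsave(&sem->lock, flags);
        if (sem->count > 0) {
                sem->count--;                   /* fast path: no waiting */
        } else {
                /* the textbook's "add this process to S->list; block();" */
                struct semaphore_waiter waiter = { .task = current, .up = false };

                list_add_tail(&waiter.list, &sem->wait_list);
                while (!waiter.up) {
                        __set_current_state(TASK_UNINTERRUPTIBLE);
                        raw_spin_unlock_irqrestore(&sem->lock, flags);
                        schedule();             /* the actual block() */
                        raw_spin_lock_irqsave(&sem->lock, flags);
                }
        }
        raw_spin_unlock_irqrestore(&sem->lock, flags);
}

void up(struct semaphore *sem)
{
        unsigned long flags;

        raw_spin_lock_irqsave(&sem->lock, flags);
        if (list_empty(&sem->wait_list)) {
                sem->count++;
        } else {
                /* the textbook's "remove a process P from S->list; wakeup(P);" */
                struct semaphore_waiter *waiter =
                        list_first_entry(&sem->wait_list,
                                         struct semaphore_waiter, list);

                list_del(&waiter->list);
                waiter->up = true;
                wake_up_process(waiter->task);  /* back to the ready queue */
        }
        raw_spin_unlock_irqrestore(&sem->lock, flags);
}

One difference from the textbook version: here the count never goes negative;
up() hands the "token" straight to the first waiter instead of bumping the
count and letting everyone race for it.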


Minus the semaphore, that sounds like what we are doing with the wait
list in the scheduler.  But it looks like we are leaving it to the
user.  Why?  It is similar but oddly different, so I'm trying to figure
out what is happening here.

Ruben



