Why does CFS chase fairness?

Mulyadi Santosa mulyadi.santosa at gmail.com
Sun Sep 18 10:48:29 EDT 2011


Hi...

This takes us back to computer-science undergrad days :)

On Sun, Sep 18, 2011 at 01:29, Parmenides <mobile.parmenides at gmail.com> wrote:
> Hi,
>
>   The current 2.6 kernel has adopted the CFS scheduler, which tries
> to ensure that all processes obtain their fair proportion of the
> processor.  I have three questions about it.
>
>   1. How to understand unfairness?

IMHO, the simplest meaning is this:
one task gets proportionally more timeslice than another.

"Proportion" here gets some clue from both process nice level and
later from sleep interval. So, technically process with more negative
nice level (which means higher priority) gets longer time slice.

But that only holds if every process runs 100% CPU bound. In reality,
some, if not all, are I/O bound, so they sleep. The longer a process
sleeps, the more it is assumed, once it wakes up, to be working on
something important. Thus it gets a temporary priority boost.
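
To make that "proportion" concrete, here is a rough user-space sketch
(not kernel code) of how CFS turns nice levels into CPU shares. The
weights mimic the kernel's sched_prio_to_weight table (nice 0 = 1024,
each nice step changing the weight by roughly 1.25x); the three nice
values are just made-up examples:

/* Rough user-space sketch (not kernel code) of how CFS turns nice
 * levels into proportional CPU shares.  The weights mimic the
 * kernel's sched_prio_to_weight table: nice 0 = 1024 and each nice
 * step changes the weight by roughly 1.25x.
 */
#include <stdio.h>
#include <math.h>

static double nice_to_weight(int nice)
{
        /* more negative nice => bigger weight => bigger CPU share */
        return 1024.0 * pow(1.25, -nice);
}

int main(void)
{
        int nice[] = { -5, 0, 5 };   /* three hypothetical runnable tasks */
        double total = 0.0;

        for (int i = 0; i < 3; i++)
                total += nice_to_weight(nice[i]);
        for (int i = 0; i < 3; i++)
                printf("task with nice %2d gets %.1f%% of the CPU\n",
                       nice[i], 100.0 * nice_to_weight(nice[i]) / total);
        return 0;
}

(Build with something like "gcc -o share share.c -lm"; the file name is
just an example.)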

That's all great, at least on paper. In reality, even though such a
procedure was already applied in the original O(1) scheduler, the
performance might suffer in some cases. The problem is still the same:
you have N jobs but only M processors, where M < N. So, one way or
another, unfairness will happen.
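
For what it's worth, here is a toy illustration (plain arrays instead
of the kernel's red-black tree, single CPU, made-up tick length) of
how CFS keeps that unfairness bounded: each task is charged virtual
runtime scaled by its weight, and the scheduler always picks the task
that has accumulated the least so far. The weights below are the
sched_prio_to_weight values for nice -5, 0 and 5:

/* Toy single-CPU sketch of the CFS idea: charge each task virtual
 * runtime scaled by its weight, and always run the task with the
 * smallest vruntime (the leftmost node of the real scheduler's
 * rb-tree).
 */
#include <stdio.h>

struct task {
        const char *name;
        double weight;          /* bigger weight = higher priority */
        double vruntime;        /* weighted virtual runtime consumed */
};

static struct task *pick_next(struct task *t, int n)
{
        struct task *best = &t[0];

        for (int i = 1; i < n; i++)
                if (t[i].vruntime < best->vruntime)
                        best = &t[i];
        return best;
}

int main(void)
{
        struct task tasks[] = {
                { "nice -5", 3121.0, 0.0 },
                { "nice  0", 1024.0, 0.0 },
                { "nice  5",  335.0, 0.0 },
        };

        /* simulate 12 ticks of 10 ms each */
        for (int tick = 0; tick < 12; tick++) {
                struct task *cur = pick_next(tasks, 3);

                /* calc_delta_fair-style scaling: low-weight tasks
                 * accumulate vruntime faster for the same real time */
                cur->vruntime += 10.0 * 1024.0 / cur->weight;
                printf("tick %2d: ran %s\n", tick, cur->name);
        }
        return 0;
}

Over those ticks the nice -5 task runs most often, but the nice 5 task
still gets a turn whenever its vruntime is the smallest, so nothing
starves outright; the waiting is simply unavoidable with M < N.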

>   2. Why do processes need fairness?
>
>   Yes, we can argue that since we human beings need fairness,
> processes do too. :-)  But are there advantages if the scheduler
> cares about fairness? Just so processes don't wait too long? Or are
> there other reasons?

To achieve the best (lowest) response time, IMHO. Yes, we have
preemption, but preemption itself takes time. Fortunately, processors
are getting faster and faster, and there was research concluding that
as long as perceived latency stays somewhere under ~250-300
milliseconds, you will perceive the system as "snappy".

>   3. What are the drawbacks of traditional schedulers?
>
>    According to Love, traditional schedulers assign each process an
> absolute timeslice, which yields a constant switching rate but
> variable fairness. How should 'constant switching rate' be
> understood? What causes 'variable fairness'?

I believe Robert Love is talking about context switching between processes.


I'm not sure about "variable fairness", but I think that's because a
program can alternate between being CPU bound and I/O bound. In that
case, timeslices would have to be recalculated dynamically on every
context switch. Also, notice that a process can voluntarily call
sched_yield(), thus giving up the rest of its timeslice.
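
As a back-of-the-envelope sketch of Love's contrast, with made-up
numbers (a hypothetical fixed 100 ms slice versus a 20 ms CFS-style
target latency divided among the runnable tasks):

/* Made-up numbers: contrast a fixed per-task timeslice (constant
 * switching rate, but waiting time that grows with the number of
 * runnable tasks) with a CFS-style target latency that is divided
 * among however many tasks are runnable.
 */
#include <stdio.h>

int main(void)
{
        const double fixed_slice_ms = 100.0;    /* hypothetical O(1)-style slice */
        const double target_latency_ms = 20.0;  /* hypothetical CFS period */

        for (int nr_running = 2; nr_running <= 8; nr_running *= 2) {
                double worst_wait = fixed_slice_ms * (nr_running - 1);
                double cfs_slice = target_latency_ms / nr_running;

                printf("%d tasks: fixed slice => wait up to %4.0f ms; "
                       "CFS-style => %4.1f ms each per %g ms period\n",
                       nr_running, worst_wait, cfs_slice,
                       target_latency_ms);
        }
        return 0;
}

With the fixed slice the switch rate never changes, but how long each
task waits swings with the load; dividing a fixed period among the
runnable tasks keeps the wait bounded instead.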

Forgive me if my CS knowledge sucks :)
-- 
regards,

Mulyadi Santosa
Freelance Linux trainer and consultant

blog: the-hydra.blogspot.com
training: mulyaditraining.blogspot.com


