Fwd: Custom Linux Kernel Scheduler issue

Kenneth Adam Miller kennethadammiller at gmail.com
Thu Nov 24 10:31:18 EST 2016


On Nov 24, 2016 2:18 AM, "Greg KH" <greg at kroah.com> wrote:
>
> On Thu, Nov 24, 2016 at 02:01:41AM -0500, Kenneth Adam Miller wrote:
> > Hello,
> >
> >
> > I have a scheduler issue in two different respects:
> >
> > 1) I have a process that is supposed to tight loop, and it is being
> > given very very little time on the system. I don't want that - I want
> > those who would use the processor to be given the resources to run as
> > fast as they each can.
>
> What is causing it to give up its timeslice?  Is it waiting for I/O?
> Doing something else to sleep?

It's multithreaded: it reads in a loop in one thread and writes in
another thread. What I saw when I ran strace on it is that each
process would run for too long. The program is designed to try to
stay out of the kernel on each side, so it checks some shared
variables before it ever enters the kernel.
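
To illustrate the kind of pattern I mean, here is a minimal,
hypothetical sketch (not the actual program from this thread): two
threads hand a value back and forth through shared atomics so the hot
path never makes a system call.

/* Hypothetical sketch of the "check shared variables before entering
 * the kernel" pattern; not the actual program from this thread.
 * Build with: gcc -O2 -pthread sketch.c */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define ITERS 1000000

static atomic_int ready = 0;   /* 0: writer's turn, 1: reader's turn */
static int shared_value;

static void *writer(void *arg)
{
    (void)arg;
    for (int i = 0; i < ITERS; i++) {
        /* Spin in userspace instead of sleeping in the kernel. */
        while (atomic_load_explicit(&ready, memory_order_acquire) != 0)
            ;
        shared_value = i;
        atomic_store_explicit(&ready, 1, memory_order_release);
    }
    return NULL;
}

static void *reader(void *arg)
{
    long sum = 0;
    (void)arg;
    for (int i = 0; i < ITERS; i++) {
        while (atomic_load_explicit(&ready, memory_order_acquire) != 1)
            ;
        sum += shared_value;
        atomic_store_explicit(&ready, 0, memory_order_release);
    }
    printf("sum = %ld\n", sum);
    return NULL;
}

int main(void)
{
    pthread_t r, w;
    pthread_create(&w, NULL, writer, NULL);
    pthread_create(&r, NULL, reader, NULL);
    pthread_join(w, NULL);
    pthread_join(r, NULL);
    return 0;
}

A pair of spinning threads like this only behaves well when each
thread actually gets a core to itself; if the scheduler keeps taking
the CPU away, the spinning turns into wasted time, which is why the
scheduling behaviour matters so much here.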

>
> > 2) I am seeing with perf that the maximum overhead at each section
> > does not sum up to be more than 15 percent. Total, probably something
> > like 18% of cpu time is used, and my binary has rocketed in slowness
> > from about 2 seconds or less total to several minutes.
>
> What changed to make things slower?  Did you change kernel versions or
> did you change something in your userspace program?
>

The kernel versions themselves couldn't have anything to do with it,
but the two machines do run different kernels. The test runs in less
than 2 seconds on my host. When I copy it to our custom Linux, it
takes minutes to run. I think it's some setting that we're missing
while building the kernel, and I don't know what that is. I got a
huge improvement when I changed the kernel's scheduling configuration
to allow preemption (the "(Desktop)" option), but there's still the
problem I've described with one of the processes not using the core
as it should.
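
For reference, the preemption behaviour is selected by the kernel's
"Preemption Model" choice (kernel/Kconfig.preempt); the "(Desktop)"
option mentioned above presumably maps to one of the two preemptible
choices below.

# Preemption model - exactly one of these is set in the .config:
#   CONFIG_PREEMPT_NONE=y       "No Forced Preemption (Server)"
#   CONFIG_PREEMPT_VOLUNTARY=y  "Voluntary Kernel Preemption (Desktop)"
#   CONFIG_PREEMPT=y            "Preemptible Kernel (Low-Latency Desktop)"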

> > I think that
> > the linux scheduler isn't scheduling it, because this process is just
> > some unit tests that double as benchmarks in that they shm_open a file
> > and write into it with memcpy's.
>
> Are you sure that I/O isn't happening here like through swap or
> something else?
>

Well, we're using tmpfs and don't have a disk in the machine, but I
will say this process is using a lot of the address space. One
problem here is that the machine has more RAM than the kernel thinks
it does, but what I want to emphasize is that I haven't changed the
program to allocate any more than it did previously. I'm not sure if
that's a kernel change or some setting, but memory use went from 85%
to 98%. Beyond that, there is a large latency even without that big
program in there; I can't run my standalone tests in qemu without
them also taking minutes. I understand qemu has to emulate and that
it's not just a VM, but I'm going from the host CPU to the guest with
the same settings.
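
Since the tests were described earlier as shm_open'ing a file and
writing into it with memcpy's, here is a minimal, hypothetical sketch
of that kind of benchmark (names and sizes are made up). A POSIX
shared memory object lives on a tmpfs mount (/dev/shm), so the copies
are pure memory traffic with no disk I/O involved.

/* Hypothetical shm_open + memcpy benchmark sketch, for illustration
 * only. Build with: gcc -O2 bench.c (add -lrt on older glibc). */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <time.h>
#include <unistd.h>

#define SHM_NAME "/bench_shm"          /* shows up under /dev/shm */
#define SHM_SIZE (64UL * 1024 * 1024)  /* 64 MiB */

int main(void)
{
    int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    if (ftruncate(fd, SHM_SIZE) < 0) { perror("ftruncate"); return 1; }

    char *dst = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, 0);
    if (dst == MAP_FAILED) { perror("mmap"); return 1; }

    char *src = malloc(SHM_SIZE);
    if (!src) { perror("malloc"); return 1; }
    memset(src, 0xab, SHM_SIZE);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < 10; i++)
        memcpy(dst, src, SHM_SIZE);    /* memory traffic, no disk I/O */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec)
                + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("copied %lu MiB in %.3f s\n", (10 * SHM_SIZE) >> 20, secs);

    munmap(dst, SHM_SIZE);
    close(fd);
    shm_unlink(SHM_NAME);
    free(src);
    return 0;
}

If something like this takes seconds on the host and minutes in the
guest with the same binary, the difference has to come from the
kernel or VM configuration rather than from the benchmark itself,
which matches the suspicion above that a kernel build setting is to
blame.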

> What does perf say is taking all of your time?

When I ran perf, it appeared to indicate that the largest consumer of
time was my library, which should be right in either scenario: the
library is designed to stay out of the kernel, and that's where the
work takes place anyway. What's not right is that the largest share
of time is only around 15%, and all the other entries combined don't
add up to anything near 100%.

>
> thanks,
>
> greg k-h


