Testing the performance impact of kernel modifications
SeyedAlireza Sanaee
sarsanaee at gmail.com
Tue Oct 23 09:59:46 EDT 2018
I believe there is no definitive methodology; it is all experimental and
depends on the applications you are running and, as Valdis said, on the
changes you would like to make. Basically, when a paper offers a new
algorithm or design, the authors will have tested it on their own testbed.
They may report their experimental methodology in the paper or even publish
the experiment scripts on GitHub. However, beyond the applications and your
changes, the testbed itself also matters. Their system may run at a CPU
frequency different from yours, so you might not see the performance gain
they reported and achieved in the paper.
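As a rough illustration (my own sketch, not from any paper): before each
run it helps to record the testbed's CPU frequency and governor, so that
numbers from different machines can be compared. This assumes a Linux box
exposing the cpufreq sysfs files, which may be absent on some systems:

/* Record CPU governor and current frequency before a benchmark run.
 * Assumes the Linux cpufreq sysfs interface is present. */
#include <stdio.h>

static void print_sysfs(const char *label, const char *path)
{
    char buf[128];
    FILE *f = fopen(path, "r");

    if (!f) {
        printf("%s: unavailable\n", label);
        return;
    }
    if (fgets(buf, sizeof(buf), f))
        printf("%s: %s", label, buf);
    fclose(f);
}

int main(void)
{
    print_sysfs("governor",
        "/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor");
    print_sysfs("cur freq (kHz)",
        "/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq");
    return 0;
}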
For instance, consider a networking enhancement in the TCP stack: it may
improve end-to-end latency by only tens of microseconds, and you are
expected to capture those few microseconds in your experiments. That is
really hard but not impossible; it basically takes time and effort +
*extensive evaluations*.
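To make that concrete, here is a minimal sketch of one way to capture
differences at that scale (my own illustration, not taken from any of the
papers): time many iterations with a monotonic clock and report the
distribution rather than a single sample. The getpid() call is only a
placeholder for the operation you actually measure, e.g. one TCP round
trip.

/* Time ITERS iterations of an operation with CLOCK_MONOTONIC and
 * report min/median/p99 in microseconds. getpid() stands in for the
 * real operation under test. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#define ITERS 100000

static int cmp(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

int main(void)
{
    static double us[ITERS];
    struct timespec t0, t1;

    for (int i = 0; i < ITERS; i++) {
        clock_gettime(CLOCK_MONOTONIC, &t0);
        getpid();                         /* operation under test */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        us[i] = (t1.tv_sec - t0.tv_sec) * 1e6 +
                (t1.tv_nsec - t0.tv_nsec) / 1e3;
    }
    qsort(us, ITERS, sizeof(us[0]), cmp);
    printf("min %.2f us, median %.2f us, p99 %.2f us\n",
           us[0], us[ITERS / 2], us[ITERS * 99 / 100]);
    return 0;
}

Pinning the process to one core and using the performance governor usually
reduces the noise enough that a shift of tens of microseconds shows up in
the median.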
I know that nowadays systems software papers are pretty practical and try
to build working systems; I'm particularly talking about SOSP and OSDI
papers.
On Tue, Oct 16, 2018 at 3:50 AM Carter Cheng <cartercheng at gmail.com> wrote:
> Basically I am looking for methodology guidelines for doing my own testing
> on a bunch of techniques in different papers and seeing what the
> performance impact is overall. Are there guidelines for doing such things?
>
> On Tue, Oct 16, 2018 at 3:19 AM <valdis.kletnieks at vt.edu> wrote:
>
>> On Tue, 16 Oct 2018 01:23:45 +0800, Carter Cheng said:
>> > I am actually looking at some changes that litter the kernel with short
>> > code snippets and thus, according to papers I have read, can result in
>> > CPU hits of around 48% when applied in userspace.
>>
>> You're going to need to be more specific. Note that a 48% increase in a
>> micro-benchmark doesn't necessarily translate to a measurable performance
>> change. For example, I have a kernel build running right now with a cold
>> file cache, and it's only using 6-8% of the CPU in kernel mode (the rest
>> being gcc in userspace and waiting for the spinning-oxide disk). If the
>> entire kernel slowed down by 50%, that would only be a 3-4% change visible
>> at the macro level.
>>
>> > but I haven't seen any kernel space papers measuring degradations in
>> > overall system performance when adding safety checks (perhaps redundant
>> > sometimes) into the kernel
>>
>> Well.. here's the thing. Papers are usually written by academics and
>> trade journal pundits, not people who write code for a living. As a
>> result, they end up comparing released code versions. As a worked
>> example, see how the whole Spectre thing turned out: the *initial* fears
>> were that we'd see a huge performance drop, but the patches that finally
>> shipped for the Linux kernel came after a bunch of clever people had
>> thought about it and come up with less intrusive ways to close the
>> security issue.
>>
>> (Having said that, the guys at Phoronix do a reasonable job of doing
>> macro-level benchmarks of each kernel release and pointing out if there's
>> a big hit in a subsystem).
>>
>> And as I said earlier - sometimes it doesn't matter, because correctness
>> trumps performance.
>>
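To spell out Valdis's arithmetic above: if a fraction f of total runtime is
spent in the kernel and only kernel code slows down by a factor s, overall
runtime scales by (1 - f) + f * s, which is just Amdahl's law applied to a
slowdown. A tiny sketch of that calculation (my own illustration, using the
6-8% and 50% figures from the quoted message):

/* Micro-vs-macro slowdown: with a fraction f of total runtime in the
 * kernel and kernel code slowed by factor s, total runtime scales by
 * (1 - f) + f * s. */
#include <stdio.h>

static double macro_slowdown(double f, double s)
{
    return (1.0 - f) + f * s;
}

int main(void)
{
    /* 6% and 8% kernel time, kernel 50% slower (factor 1.5). */
    double fractions[] = { 0.06, 0.08 };

    for (int i = 0; i < 2; i++) {
        double x = macro_slowdown(fractions[i], 1.5);
        printf("kernel fraction %.0f%% -> total runtime x%.2f (%.0f%% hit)\n",
               fractions[i] * 100, x, (x - 1.0) * 100);
    }
    return 0;
}

This prints a 3% and a 4% overall hit, matching the estimate in the quoted
message.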