Relieving I/O pressure with memory!? How to do it?

Daniel. danielhilst at gmail.com
Thu Jul 5 18:30:22 EDT 2018


Hi everybody!

This is a long-standing doubt of mine. I usually want to apply this to testing
machines where a crash or total loss would not be a problem at all, i.e. cases
where I don't really care about the risks. In other words, not production at
all.

Sometimes we have a machine that we work on that is really, really slow when
doing I/O. I know that the kernel will use memory to avoid doing I/O, but that
it is somewhat conservative about keeping too much data in volatile memory,
since that data could be lost on a power failure. My question is: how do I do
the opposite and avoid I/O as much as possible, no matter the risks?

I'm using a virtual machine to test some ansible playbooks. The machine is
just a testing environment, so it will be created again and again and again.
(And again.) The playbook generates a lot of I/O, from yum installs and other
commands that inspect ISO images to create repositories, ... the details don't
matter; the problem is that it is really slow and that it doesn't contain any
important data. What can I do to avoid I/O (by using memory) as much as
possible? And how can I measure whether it worked?

What I'm doing is keeping vm.dirty_background_ratio at the default (10) and
setting vm.dirty_ratio to 90. From what I could grasp, the first controls when
the kernel thread will be scheduled to flush data to disk, and the second when
the kernel will block I/O entirely. By the way, is this where the I/O wait
shown by `top` comes from?
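
Concretely, this is roughly what I'm applying on the test VM (just the two
sysctls I mentioned, with the values described above):

    # at runtime (not persistent across reboots)
    sysctl -w vm.dirty_background_ratio=10
    sysctl -w vm.dirty_ratio=90

    # or persistently, e.g. in a file like /etc/sysctl.d/99-dirty.conf
    # (file name is just an example):
    #   vm.dirty_background_ratio = 10
    #   vm.dirty_ratio = 90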

Anyway, the idea is that the flushing thread kicks in as soon as possible and
the blocking happens as late as possible, so that I keep the disks working but
avoid I/O blocking.

How can I measure I/O blocking? Is there any counter for it, so I can measure
its frequency and compare before and after messing with dirty_ratio?
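
What I was planning to watch while the playbook runs, assuming the
Dirty/Writeback fields in /proc/meminfo and the iowait ("wa") column from
vmstat are the right things to look at, is something like:

    # dirty and writeback pages currently held in memory
    grep -E 'Dirty|Writeback' /proc/meminfo

    # "wa" column = percentage of time CPUs sit idle waiting on I/O
    vmstat 1

    # per-device utilization and average wait times (iostat is from the
    # sysstat package, so it may not be installed)
    iostat -x 1

But I'm not sure any of these directly counts "process blocked because
dirty_ratio was hit", which is what I'd really like to see.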

Regards,

-- 
“If you're going to try, go all the way. Otherwise, don't even start. ..."
  Charles Bukowski

