possible regression?
Mulyadi Santosa
mulyadi.santosa at gmail.com
Thu Jan 20 00:28:09 EST 2011
Hi...
On Thu, Jan 20, 2011 at 10:36, Mag Gam <magawake at gmail.com> wrote:
> Running on Redhat 5.1 if I do,
Are you sure you're using that archaic distro? Or are you talking
about RHEL 5.1?
> dd bs=1024 count=1000000 if=/dev/zero of=/dev/null
>
> I get around 30Gb/sec
Hm, mine is:
$ dd bs=1024 count=1000000 if=/dev/zero of=/dev/null
1000000+0 records in
1000000+0 records out
1024000000 bytes (1.0 GB) copied, 1.12169 seconds, 913 MB/s
This is on a 2.6.36 SMP kernel compiled with gcc version 4.1.2
20080704 (Red Hat 4.1.2-48).
>
> However, when I do this with 2.6.37 I get close to 5GB/sec
What if you use another block size, say 4K or even 32K? Here's
mine (again):
$ dd bs=4K count=1000000 if=/dev/zero of=/dev/null
1000000+0 records in
1000000+0 records out
4096000000 bytes (4.1 GB) copied, 1.31167 seconds, 3.1 GB/s
$ dd bs=32K count=1000000 if=/dev/zero of=/dev/null
1000000+0 records in
1000000+0 records out
32768000000 bytes (33 GB) copied, 4.91775 seconds, 6.7 GB/s
See the difference?
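(By the way, if you want to sweep several block sizes while keeping the
total amount of data constant, a throwaway loop like this does it --
just a convenience on my side, not something from the thread:

$ for bs in 1 4 32; do dd bs=${bs}K count=$((1024000 / bs)) if=/dev/zero of=/dev/null; done

Each run then moves the same ~1 GB, so the MB/s figures are directly
comparable.)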
IMHO it's a matter of what I call "block merge efficiency": the more
pages you stuff into each transfer (up to some "magic" size), the
faster the I/O you get.
--
regards,
Mulyadi Santosa
Freelance Linux trainer and consultant
blog: the-hydra.blogspot.com
training: mulyaditraining.blogspot.com