possible regression?

Mag Gam magawake at gmail.com
Thu Jan 20 20:16:06 EST 2011


Greg,

Yes, I did one very big run, around 100 TB, and I still see the
regression.  I even tried it with your extra dd option.
I am wondering if the new kernel (2.6.36) introduced an option I
need to set?
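
For the record, the disk-backed version of this test with Greg's
conv=fdatasync suggestion (quoted below) would look something like
this (just a sketch: /tmp/ddtest and the sizes are placeholders,
and conv=fdatasync only matters when the output is a real file
rather than /dev/null):

# write 4 GiB to a scratch file; conv=fdatasync makes dd flush the
# data before exiting, so the reported time includes reaching disk
dd if=/dev/zero of=/tmp/ddtest bs=1M count=4096 conv=fdatasync
rm -f /tmp/ddtest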

Can someone else try this?

To reiterate the test scenario, every box runs the same command
(note that bs= appears twice; dd uses the last value it is given,
so the effective block size is 4096k, not 1024):

dd bs=1024 count=1000000 if=/dev/zero bs=4096k of=/dev/null

Box 1: RHEL 5.1, stock kernel
Box 2: RHEL 5.2, stock kernel
Box 3: RHEL 5.3, stock kernel
Box 4: RHEL 5.4, stock kernel
Box 5: RHEL 5.4, kernel 2.6.35

Box 5 takes much, much longer.

And all of these boxes are the same model, with the same specs...
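
If anyone wants to sweep block sizes the way Mulyadi suggests
below, a rough loop like this keeps the total at 1 GiB so the runs
stay comparable (a sketch; the block sizes are arbitrary):

# dd prints its summary on stderr, so grab the throughput line
for bs in 1024 4096 32768 262144 1048576; do
    echo "bs=$bs"
    dd if=/dev/zero of=/dev/null bs=$bs count=$((1073741824 / bs)) 2>&1 | tail -1
done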

On Thu, Jan 20, 2011 at 9:29 AM, Greg Freemyer <greg.freemyer at gmail.com> wrote:
> Mulyadi,
>
> You disappoint me. ;(
>
> Just kidding, but discussing dd throughput without the
> "conv=fdatasync" parameter is just a waste of everyone's time.
>
> And Mag, use a big enough count that it at least takes a few seconds
> to complete.  A tenth of a second or less is just way too short to use
> as a benchmark.
>
> Greg
>
> On Thu, Jan 20, 2011 at 12:28 AM, Mulyadi Santosa
> <mulyadi.santosa at gmail.com> wrote:
>> Hi...
>>
>> On Thu, Jan 20, 2011 at 10:36, Mag Gam <magawake at gmail.com> wrote:
>>> Running on Redhat 5.1 if I do,
>>
>> Are you sure you're using that archaic distro? Or are you talking
>> about RHEL 5.1?
>>
>>> dd bs=1024 count=1000000 if=/dev/zero of=/dev/null
>>>
>>> I get around 30Gb/sec
>>
>> Hm, mine is:
>> $ dd bs=1024 count=1000000 if=/dev/zero of=/dev/null
>> 1000000+0 records in
>> 1000000+0 records out
>> 1024000000 bytes (1.0 GB) copied, 1.12169 seconds, 913 MB/s
>>
>> This is on 2.6.36 SMP kernel compiled with gcc version 4.1.2 20080704
>> (Red Hat 4.1.2-48).
>>
>>>
>>> However, when I do this with 2.6.37 I get close to 5GB/sec
>>
>> What if you use another block size, say 4K or even 32K? Here's
>> mine (again):
>> $ dd bs=4K count=1000000 if=/dev/zero of=/dev/null
>> 1000000+0 records in
>> 1000000+0 records out
>> 4096000000 bytes (4.1 GB) copied, 1.31167 seconds, 3.1 GB/s
>>
>> $ dd bs=32K count=1000000 if=/dev/zero of=/dev/null
>> 1000000+0 records in
>> 1000000+0 records out
>> 32768000000 bytes (33 GB) copied, 4.91775 seconds, 6.7 GB/s
>>
>> See the difference?
>>
>> IMHO it's a matter of what I call "block merge efficiency": the
>> more pages you stuff into each request (up to some "magic"
>> number), the faster the I/O you get.
>>
>> --
>> regards,
>>
>> Mulyadi Santosa
>> Freelance Linux trainer and consultant
>>
>> blog: the-hydra.blogspot.com
>> training: mulyaditraining.blogspot.com
>>
>
>
>
> --
> Greg Freemyer
> Head of EDD Tape Extraction and Processing team
> Litigation Triage Solutions Specialist
> http://www.linkedin.com/in/gregfreemyer
> CNN/TruTV Aired Forensic Imaging Demo -
>    http://insession.blogs.cnn.com/2010/03/23/how-computer-evidence-gets-retrieved/
>
> The Norcross Group
> The Intersection of Evidence & Technology
> http://www.norcrossgroup.com
>
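
One more note: in these /dev/zero to /dev/null runs there is no
real device I/O, so the time is dominated by read/write syscall
overhead, which is one way to read the block-size effect Mulyadi
describes above. Counting syscalls makes that visible (a sketch;
strace -c is the standard counting mode):

# both runs copy the same 32 MiB, but the bs=1024 run makes 32x
# as many read/write calls as the bs=32768 run
strace -c dd if=/dev/zero of=/dev/null bs=1024 count=32768
strace -c dd if=/dev/zero of=/dev/null bs=32768 count=1024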


