<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Sat, Nov 22, 2014 at 8:24 PM, Greg Freemyer <span dir="ltr"><<a href="mailto:greg.freemyer@gmail.com" target="_blank">greg.freemyer@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5"><br>
<br>
On November 22, 2014 9:43:23 AM EST, Anshuman Aggarwal <<a href="mailto:anshuman.aggarwal@gmail.com">anshuman.aggarwal@gmail.com</a>> wrote:<br>
>On 22 November 2014 at 19:33, Greg Freemyer <<a href="mailto:greg.freemyer@gmail.com">greg.freemyer@gmail.com</a>><br>
>wrote:<br>
>> On Sat, Nov 22, 2014 at 8:22 AM, Anshuman Aggarwal<br>
>> <<a href="mailto:anshuman.aggarwal@gmail.com">anshuman.aggarwal@gmail.com</a>> wrote:<br>
> >>> By not using stripes, we restrict writes to happen to just 1 drive
> >>> and the XOR output to the parity drive, which then explains the
> >>> delayed and batched checksum (resulting in fewer writes to the
> >>> parity drive). The intention is that if a drive fails then maybe we
> >>> lose 1 or 2 movies but the rest is restorable from parity.
> >>>
> >>> Also another advantage over RAID5 or RAID6 is that in the event of
> >>> multiple drive failure we only lose the content on the failed
> >>> drive, not the whole cluster/RAID.
> >>>
> >>> Did I clarify better this time around?
> >>
> >> I still don't understand the delayed checksum/parity.
> >>
> >> With classic raid 4, writing 1 GB of data to just D1 would require 1
> >> GB of data first be read from D1 and 1 GB read from P then 1 GB
> >> written to both D1 and P. 4 GB worth of I/O total.
> >>
> >> With your proposal, if you stream 1 GB of data to a file on D1:
> >>
> >> - Does the old/previous data on D1 have to be read?
> >>
> >> - How much data goes to the parity drive?
> >>
> >> - Does the old data on the parity drive have to be read?
> >>
> >> - Why does delaying it reduce that volume compared to Raid 4?
> >>
> >> - In the event drive 1 fails, can its content be re-created from the
> >> other drives?
> >>
> >> Greg
> >> --
> >> Greg Freemyer
> >
> >Two things:
> >Delayed writes basically allow the parity drive to spin down if the
> >parity writing is only 1 block, instead of spinning up the drive for
> >every write (obviously the data drive has to be spun up). Delays will
> >be both time and size constrained.
> >For a large write, such as 1 GB of data to a file, it would trigger a
> >configurable maximum delay limit which would then dump to the parity
> >drive immediately, preventing memory overuse.
> >
> >This again ties in to the fact that the content is not 'critical', so
> >if parity was not dumped when a drive fails, worst case you only lose
> >the latest file.
> >
> >Delayed writes may be done via bcache or a similar implementation
> >which caches the writes in memory and need not be part of the split
> >raid driver at all.
>
> That provided little clarity.
>
> File systems like XFS queue (delay) significant amounts of actual data
> before writing it to disk. The same is true of journal data. If all you
> are doing is caching the parity up until there is enough to bother
> with, then a filesystem designed for streamed data already does that
> for the data drive, so you don't need to do anything new for the parity
> drive; just run it in sync with the data drive.
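
A rough user-space sketch of the batching being discussed, just to pin
the idea down: each data-drive write queues an XOR delta in memory, and
the parity drive is only touched once a size or age threshold is
crossed. Everything below (the names, the 4 MiB / 30 second limits, the
two extern helpers) is a made-up illustration, not an existing driver.

/*
 * Sketch: batch parity updates in RAM and flush them to the parity
 * device when a size or age threshold is hit, so the parity drive can
 * stay spun down between flushes.
 */
#include <stdint.h>
#include <stddef.h>
#include <time.h>

#define BLOCK_SIZE        4096
#define MAX_DIRTY_BLOCKS  1024          /* ~4 MiB of pending parity deltas */
#define MAX_DELAY_SECONDS 30

struct dirty_parity {
	uint64_t block_no;                 /* block number on the parity drive    */
	uint8_t  xor_delta[BLOCK_SIZE];    /* old data XOR new data for that block */
};

static struct dirty_parity pending[MAX_DIRTY_BLOCKS];
static size_t pending_count;
static time_t oldest_pending;

/* Assumed helpers: synchronous block I/O against the parity drive. */
extern void parity_read_block(uint64_t block_no, uint8_t *buf);
extern void parity_write_block(uint64_t block_no, const uint8_t *buf);

static void flush_parity(void)
{
	uint8_t parity[BLOCK_SIZE];

	for (size_t i = 0; i < pending_count; i++) {
		/* P_new = P_old XOR (D_old XOR D_new) */
		parity_read_block(pending[i].block_no, parity);
		for (size_t b = 0; b < BLOCK_SIZE; b++)
			parity[b] ^= pending[i].xor_delta[b];
		parity_write_block(pending[i].block_no, parity);
	}
	pending_count = 0;
}

/* Called for every data-drive write; it only buffers and never touches
 * the parity drive directly. */
void queue_parity_update(uint64_t block_no, const uint8_t *old_data,
			 const uint8_t *new_data)
{
	if (pending_count == 0)
		oldest_pending = time(NULL);

	struct dirty_parity *d = &pending[pending_count++];
	d->block_no = block_no;
	for (size_t b = 0; b < BLOCK_SIZE; b++)
		d->xor_delta[b] = old_data[b] ^ new_data[b];

	/* The delay is both time and size constrained, as described above. */
	if (pending_count == MAX_DIRTY_BLOCKS ||
	    time(NULL) - oldest_pending >= MAX_DELAY_SECONDS)
		flush_parity();
}

Note that in this scheme the old data block still has to be read from
the (already spinning) data drive at write time, but the old parity is
read, and the new parity written, only when the batch is flushed.
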
>
> At this point I interpret your proposal to be:
>
> Implement a RAID 4-like setup, but instead of striping across the data
> drives, concatenate them.
>
> That is something I haven't seen done, but I can see why you would want
> it. Implementing it via unionfs I don't understand, but as a new device
> mapper mechanism it seems very logical.
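
As an aside, the difference from striping shows up mainly in the block
mapping and the rebuild path. Here is a minimal sketch with made-up
names and a three-member example (it corresponds to no real md or dm
code):

/*
 * Concatenated layout: data drives are appended end to end, so each
 * logical block lives on exactly one member, while the parity drive
 * stores, at offset N, the XOR of block N of every data drive.
 */
#include <stdint.h>

#define NUM_DATA_DRIVES 3
#define BLOCK_SIZE      4096

/* Equal-sized members assumed, size given in blocks. */
static const uint64_t member_blocks = 1000000;

struct location {
	int      drive;    /* which data drive holds the block */
	uint64_t offset;   /* block offset within that drive   */
};

/* Concatenation: logical block -> (drive, offset).  With striping this
 * would be drive = lba % NUM_DATA_DRIVES and offset = lba / NUM_DATA_DRIVES,
 * so one large write would touch every member. */
static struct location map_block(uint64_t lba)
{
	struct location loc;

	loc.drive  = (int)(lba / member_blocks);
	loc.offset = lba % member_blocks;
	return loc;
}

/* Rebuild one block of a failed member by XORing the survivors' blocks
 * at the same offset with the parity block.  If two members die, rebuild
 * is impossible, but the other members stay directly readable because
 * nothing is striped across them. */
static void rebuild_block(const uint8_t survivors[NUM_DATA_DRIVES - 1][BLOCK_SIZE],
			  const uint8_t parity[BLOCK_SIZE],
			  uint8_t out[BLOCK_SIZE])
{
	for (int b = 0; b < BLOCK_SIZE; b++) {
		uint8_t v = parity[b];

		for (int d = 0; d < NUM_DATA_DRIVES - 1; d++)
			v ^= survivors[d][b];
		out[b] = v;
	}
}

The practical effect is the one claimed above: a write to one file
spins up one data drive (plus, eventually, the parity drive), and a
failed member costs only its own contents.
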
>
> Obviously, I'm not a device mapper maintainer, so I'm not saying it
> would be accepted, but if I'm right you can now have a discussion of
> just a few sentences which explain your goal.
<span class="HOEnZb"><font color="#888888"><br></font></span></blockquote><div><br></div><div>RAID4 support does not exist in the mainline. Anshuman, you might want to reach out to Neil Brown who is the maintainer for dmraid.</div><div>IIUC, your requirement can be well implemented by writing a new device mapper target. That will make it modular and will help you make improvements to it easily.</div><div><br></div><div><br></div><div><br></div><div> </div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="HOEnZb"><font color="#888888">
> Greg
> --
> Sent from my Android phone with K-9 Mail. Please excuse my brevity.
</span><div class="HOEnZb"><div class="h5">_______________________________________________<br>
Kernelnewbies mailing list<br>
<a href="mailto:Kernelnewbies@kernelnewbies.org">Kernelnewbies@kernelnewbies.org</a><br>
<a href="http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies" target="_blank">http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies</a><br>

--
Regards,
Sandeep.

“To learn is to change. Education is a process that changes the learner.”