Curious about corner case in btrfs code

Tobias Boege tobias at gambas-buch.de
Tue Aug 26 20:37:33 EDT 2014


On Tue, 26 Aug 2014, Nick wrote:
> On 08/26/2014 08:05 PM, Tobias Boege wrote:
> > On Tue, 26 Aug 2014, Nick wrote:
> >> On 08/26/2014 06:58 PM, Mandeep Sandhu wrote:
> >>> If it's a corner case, it won't be hit often, right? And if it
> >>> were hit often, it wouldn't be a corner case!? :)
> >>>
> >>> These 2 are mutually exclusive!
> >>>
> >>>
> >>> On Tue, Aug 26, 2014 at 3:47 PM, Nick <xerofoify at gmail.com> wrote:
> >>>> After reading through the code in inode.c today, I am curious about the comment and the code pasted
> >>>> below, and whether this corner case is hit often enough to be worth a patch improving its speed.
> >>>> The function is compress_file_range, in case you can't tell from the pasted code.
> >>>> Regards, Nick
> >>>> 411     /*
> >>>> 412      * we don't want to send crud past the end of i_size through
> >>>> 413      * compression, that's just a waste of CPU time.  So, if the
> >>>> 414      * end of the file is before the start of our current
> >>>> 415      * requested range of bytes, we bail out to the uncompressed
> >>>> 416      * cleanup code that can deal with all of this.
> >>>> 417      *
> >>>> 418      * It isn't really the fastest way to fix things, but this is a
> >>>> 419      * very uncommon corner.
> >>>> 420      */
> >>>> 421     if (actual_end <= start)
> >>>> 422             goto cleanup_and_bail_uncompressed;
> >>>>
> >> To restate my question: is this corner case hit often enough for me to write a patch optimizing it?
> >> The comment says it isn't, but I want to know whether, for standard compression workloads on btrfs,
> >> it is hit often enough to be worth working on, and how much performance we lose by not handling
> >> it better.
> >> Nick
> >>
> > 
> > Here's how I would go about it:
> > 
> >  1. Understand when the case is met (in theory).
> >  2. Try to trigger it on a real system multiple times.
> >  3. Try to explore systematically under what circumstances the case is met
> >     and rank them by plausibility (if the notion of plausibility makes any
> >     sense in a real world scenario -- I don't know).
> >  4. Estimate cost vs. benefit.
> > 
> > I don't know if this is the best approach, but note that you can do all of this
> > yourself, which I think is a plus for everyone. And if you decide in step 4
> > to write a patch:
> > 
> >  5. Use your results from step 3 to create an environment that benefits
> >     from your patch (notice how 4 guarantees that there exists such a
> >     system with reasonable connection to real needs). Note the numbers.
> >  6. Test your patch on as many regular configurations as possible. Note
> >     the numbers. If it degrades performance on any of those, abort.
> >  7. Do *NOT* send the patch out.
> > 
> > Regards,
> > Tobi
> > 
> 
> Thanks Tobi,
> From reading the code, it seems this doesn't need to be rewritten, since the case is rarely
> met for large files that take a long time to write.
>

Thanks for letting me compose my mail before you took a closer look at the
matter and decided it wasn't worth it.

> I was more curious about how to test things like this if
> I need to :).

Then you need to ask that -- in the *first* mail of a thread. There is no
need to hide your real question behind a different one.

I think steps 1 and 2 above can still be used to answer your question.
Basically, to test something you set up your system, run it, and then
exercise it. What your "system" consists of and what "exercising it" means
depend on the subject, and steps 1 and 2 should help you clear that up.
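
For instance, here is a rough sketch of how one might try to provoke the
actual_end <= start case: dirty a range on a compression-enabled btrfs mount
and then shrink i_size underneath it before syncing. The mount point, the
compress mount option and the assumption that writeback still sees the
dirtied range are mine and untested; treat it as a starting point, not a
known reproducer.

/* corner.c -- hypothetical sketch: dirty a range, then shrink i_size under it */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        /* assumption: a btrfs filesystem mounted with -o compress at /mnt/btrfs */
        int fd = open("/mnt/btrfs/corner-test", O_CREAT | O_RDWR | O_TRUNC, 0644);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        size_t len = 1 << 20;
        char *buf = malloc(len);
        if (!buf)
                return 1;
        memset(buf, 'a', len);

        /* dirty a megabyte of page cache ... */
        if (write(fd, buf, len) != (ssize_t) len)
                perror("write");

        /*
         * ... then move i_size back to the start of the file before
         * writeback runs, so a delalloc range past the new i_size may
         * still be handed to compress_file_range()
         */
        if (ftruncate(fd, 0) < 0)
                perror("ftruncate");

        fsync(fd);
        close(fd);
        free(buf);
        return 0;
}

Whether that branch actually fires for such a run would still have to be
confirmed, e.g. with a temporary printk in compress_file_range() or a perf
probe on that line, and then counted across the workloads from step 2.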

Regards,
Tobi

-- 
"There's an old saying: Don't change anything... ever!" -- Mr. Monk



