[jdom-interest] Performance regressions in JDOM

Vojtech Horky horky at d3s.mff.cuni.cz
Fri Jun 19 02:54:09 PDT 2015


Hello Rolf,
thanks for your quick reply, and I am sorry for such a late response on 
my side.

On 10.6.2015 at 12:56, Rolf wrote:
> Your results are fascinating, and your feedback is very appreciated. 
> It is going to take me a little while to digest it all, but, at the 
> same time, yes, it is most likely that at a minimum I will duplicate 
> your setup and see how it can be integrated. I strive to produce 
> high-performing code and value any tools and feedback that can help 
> with that.
Thanks! If you do go forward with running it yourself, please feel 
free to contact me in case of any issues with the setup. I would be 
glad to help.


> As for your results, I wish you would have contacted us before 
> choosing your individual test cases. I have some concerns about them. 
> For example, the one you have listed as being a regression (negative 
> Improvement) is this commit here:
>
> https://github.com/hunterhacker/jdom/commit/4e2753539ad1217775e3b79dcb4cc7d1326a7798 
>
>
> That commit is a major update with a huge impact on many parts of the 
> code base... it is also from 2001 and is from when the API was 
> changing significantly.
>
> Most importantly, that commit did two things of note:
>
> 1. it added a Text class to the API, which is likely what impacts 
> and slows down the SAXBuilder.build() performance
> 2. the performance changes it mentions are for "FilterList" which is 
> not related to the build process at all. The performance impact would 
> not be seen in build().
>
> Your tool may have been useful to see the regression in the build, but 
> the developer(s) are "innocent" (or, at least there's no relevant 
> evidence) when it comes to a performance-reducing commit being called 
> a performance-improving one.
>
> Also of significance with that commit, it's using Java 1.2 or 
> something.... it is not like we can actually go back and reproduce the 
> same performance environments that were used at the time. The commit 
> message may in fact be right....
You are right in all respects. I am really sorry if it looked like we 
wanted to blame someone or make the developers look incompetent; that 
was really not our intention. The message we wanted to convey was 
rather "developers care about performance, so let's give them a tool 
to capture their assumptions about it". With the examples we merely 
wanted to show that the effects of changes can be complex and that 
having an automated performance test could help (in a similar way that 
a functional test can warn you).

Regarding the older Java - we went back to Java 1.5 for some of the 
tests and did not see any big difference in performance. Of course, 
that is still a long way from Java 1.2, but we were unable to set up a 
machine to run our tool and compile/run the measurements with JDK 1.2.


> ----
>
> The first Verifier example 4ad684a ( 
> https://github.com/hunterhacker/jdom/commit/4ad684aa0427ee4f8feec8bbe360d6e32e32771c 
> ) you use for the verifier performance is great to see confirmed, but 
> I am also concerned with that because that commit is just the first of 
> 5 that specifically target performance in the Verifier. As a developer 
> (actually, as one of the two developers in this case), I would have 
> preferred to see the results of the wider range of commits. Using the 
> tool I would have wanted to compare my "current" performance to the 
> "before" performance, and spanned the 5 commits, instead of just the 
> first.
Right. I think this is partially caused by the retroactive nature of 
our approach, where we depended only on the commit messages and diffs 
instead of intuitively knowing what was happening. We should have 
contacted you, but honestly we had not expected that anyone would 
remember many details about a commit from 5 years ago :-).


> ----
>
> Regardless, I believe there is some significant value in here, and 
> using your tool as a day-to-day aid seems like a good approach.
Thank you for such a kind reply. I am sorry if I left a bad impression. 
We should have emphasized more strongly in the paper that the relevant 
commits are several years old, that their performance is impossible to 
reproduce reliably, and that the results should be taken as such.


Regards,
- Vojtech


>
> Rolf
>
>
>
>
> On 10/06/2015 2:32 AM, Vojtech Horky wrote:
>> Hello all.
>>
>> TL;DR version of this e-mail: at our research group we ran some 
>> tests scanning for performance regressions in JDOM. We mostly focused 
>> on finding out whether assumptions stated in commit messages (such as 
>> "it should be faster and consume fewer resources") were actually met, 
>> and whether it is possible to capture them in an automatically 
>> testable way.
>>
>> Now, we would like to know whether someone would be interested in the 
>> results, whether we should polish the tests so they can move to 
>> contrib/, or (better) whether someone would like to push this further.
>>
>>
>> If you are interested in this, here is the full version.
>>
>> Our aim is to allow developers to create performance unit tests, 
>> i.e. something to test the performance of individual methods. For 
>> example, if you (think that you have) improved the performance of a 
>> certain method, it would be nice if you could test that. Of course, 
>> you can run tests manually, but that is time-consuming and rather 
>> inconvenient in general. So our tool (approach) allows you to capture 
>> such an assumption in a Java annotation, and our program - still a 
>> prototype, but working pretty well - scans these annotations and runs 
>> the tests automatically, reporting violations. In this sense, it is 
>> the performance equivalent of a unit-test assertion.
>>
>> As a simple example, if we want to capture an assumption stating 
>> that the SAXBuilder.build() method is faster in version (commit) 
>> 6a49ef6 than in 4e27535, we would put the following string in the 
>> annotation:
>>
>> SAXBuilder#build @ 6a49ef6 < SAXBuilder#build @ 4e27535
>>
>> and the tool would handle the rest. Well, more or less.
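>>
>> To make this concrete, here is a minimal sketch of what such an 
>> annotated test could look like. The @PerformanceAssumption annotation 
>> type, the class names, and the workload file are illustrative only; 
>> the actual annotation API of our prototype differs in the details:
>>
>>     import java.io.File;
>>     import java.lang.annotation.ElementType;
>>     import java.lang.annotation.Retention;
>>     import java.lang.annotation.RetentionPolicy;
>>     import java.lang.annotation.Target;
>>     import org.jdom.input.SAXBuilder;
>>
>>     // Illustrative annotation type holding the comparison formula.
>>     @Retention(RetentionPolicy.RUNTIME)
>>     @Target(ElementType.METHOD)
>>     @interface PerformanceAssumption {
>>         String value();
>>     }
>>
>>     public class SAXBuilderPerformance {
>>         // The tool scans for the annotation, builds the two referenced
>>         // commits, runs this workload against each, and reports a
>>         // violation if the measured ordering does not hold.
>>         @PerformanceAssumption(
>>                 "SAXBuilder#build @ 6a49ef6 < SAXBuilder#build @ 4e27535")
>>         public void measureBuild() throws Exception {
>>             new SAXBuilder().build(new File("data/sample.xml"));
>>         }
>>     }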
>>
>> Regarding JDOM, we went through its commits and identified almost 50 
>> that we found interesting. Interesting commits were those whose 
>> messages claimed a performance improvement, or commits that were 
>> "refactoring" ones. We measured mostly SAXBuilder.build() and several 
>> methods from the Verifier class, as in the example below.
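>>
>> For the Verifier measurements, an analogous formula would compare one 
>> of its checker methods across a commit boundary, for example (the 
>> choice of checkElementName here is only an illustration, and the 
>> second commit is a placeholder for the revision being compared 
>> against):
>>
>> Verifier#checkElementName @ 4ad684a < Verifier#checkElementName @ <parent of 4ad684a>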
>>
>> In the end, we found that we were able to confirm a lot of the 
>> assumptions about performance, but there were also cases where the 
>> assumptions were not met, i.e. the developer thought that the commit 
>> improved performance while the opposite was true.
>>
>> We published our results in a paper [1] (PDF available as well [2]); 
>> detailed results are available online [3].
>>
>> Right now, the tests themselves are in a separate repository [4] and 
>> the setup is rather complicated. However, if someone finds this 
>> interesting and potentially useful, we would gladly refactor the 
>> tests to fit the contrib/ structure and prepare a fork to be merged.
>>
>> Regards,
>> - Vojtech Horky
>>
>>
>> [1] http://dx.doi.org/10.1007/978-3-642-40725-3_12
>> [2] http://d3s.mff.cuni.cz/~horky/papers/epew2013.pdf
>> [3] http://d3s.mff.cuni.cz/software/spl/#jdom-case-study
>> [4] https://sf.net/p/spl-tools/casestudy/ci/master/tree/
>>
>>


