[jdom-interest] Performance benchmark results
graham-glass at mindspring.com
Fri May 11 11:40:09 PDT 2001
i concur with dennis re: the need to run each benchmark
independently. when i was writing my own JDOM/EXML benchmarks,
i found that running the benchmarks in sequence within
a single program run made the results come out fairly erratic.
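The effect graham describes is easy to reproduce: within one JVM run, the first pass over a workload pays class loading and JIT compilation costs that later passes do not, so the order in which benchmarks run skews their numbers. A minimal standalone sketch (not code from either benchmark suite):

```java
// Demonstrates JVM warmup: the first timed pass is typically slower than
// later passes because of class loading and JIT compilation.
public class WarmupDemo {
    static long work() {
        long sum = 0;
        for (int i = 0; i < 5_000_000; i++) sum += i % 7;
        return sum;
    }

    public static void main(String[] args) {
        for (int pass = 1; pass <= 3; pass++) {
            long t0 = System.nanoTime();
            long r = work();
            long ms = (System.nanoTime() - t0) / 1_000_000;
            System.out.println("pass " + pass + ": " + ms + " ms (result " + r + ")");
        }
    }
}
```

Running each benchmark in its own JVM gives every model the same cold-start conditions.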
From: jdom-interest-admin at jdom.org
[mailto:jdom-interest-admin at jdom.org]On Behalf Of Dennis Sosnoski
Sent: Friday, May 11, 2001 12:06 PM
To: philip.nelson at omniresources.com
Cc: jdom-interest at jdom.org
Subject: Re: [jdom-interest] Performance benchmark results
Thanks for the information, Philip. I should caution you, though, that the
results from running all of the models in sequence (as in your output) tend
to be pretty erratic. I found it was best to test only one model per run,
using the "-one n" command line parameter - that's what I did for the
results I posted. For the next iteration of the benchmark I'll probably
either disable the run-all-models mode or require it to be explicitly
requested, rather than have it be the default.
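One way to structure a "one model per run" harness is to make the model index a command line argument, so a driver script can launch a fresh JVM per model. This is a hypothetical sketch of that pattern, not Dennis's actual benchmark code; only the "-one n" flag name is borrowed from his description:

```java
// Hypothetical single-model-per-run harness: each JVM invocation executes
// exactly one model, selected by "-one n", so warmup and GC state from one
// model cannot contaminate another model's timings.
public class OneModelRunner {
    interface Model { void run(); }

    static final Model[] MODELS = {          // stand-ins for real document models
        () -> System.out.println("running model 0"),
        () -> System.out.println("running model 1"),
    };

    public static void main(String[] args) {
        if (args.length == 2 && args[0].equals("-one")) {
            MODELS[Integer.parseInt(args[1])].run();  // one model, one JVM
        } else {
            System.err.println("usage: java OneModelRunner -one n");
        }
    }
}
```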
A faster parser would definitely help on the document build time. I get the
impression that this is especially true for smaller documents, so perhaps
Xerces does a fair amount of initialization which slows it down at the
beginning.
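If one-time initialization is the suspect, reusing a single parser instance across many small documents is the usual way to amortize it. A sketch using the JDK's JAXP API (the timing loop is illustrative, not part of either benchmark):

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.helpers.DefaultHandler;

// The first parse pays factory/parser setup costs; later parses with the
// same reused SAXParser instance only pay the per-document cost.
public class ReuseParser {
    public static void main(String[] args) throws Exception {
        byte[] doc = "<root><a>x</a></root>".getBytes("UTF-8");
        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        for (int i = 0; i < 3; i++) {
            long t0 = System.nanoTime();
            parser.reset();  // clear parser state so the instance can be reused
            parser.parse(new ByteArrayInputStream(doc), new DefaultHandler());
            System.out.println("parse " + i + ": "
                + (System.nanoTime() - t0) / 1000 + " us");
        }
    }
}
```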
philip.nelson at omniresources.com wrote:
> IMHO, the view of JDOM as lightweight, though currently under stress, holds
> in fact because a faster parser could be used, if one were
> available. JDOM can't ever be faster than the SAX parser it's based on,
> but a custom parser could be faster. I think it's important to note that in
> my analysis, for which I posted graphs, JDOM accounted for around 30% of the
> build time itself, with the underlying parser accounting for the rest. That
> would indicate that there is only room for a fraction of a 30% performance
> gain unless some way could be found to improve the parser's portion of the
> time. I need to find out why this is not reflected in your results.
> > Please let me know if you see any errors or have any suggestions for
> > improvements in the tests. I'm planning to add an update in a
> > couple of
> > weeks with results using new versions of the code bases,
> > small files in
> > addition to the medium sized (100-200K) ones used for these tests, and
> > some added tests.
> Yes, the smaller ones, 5-20K, are much more relevant to me personally;
> RPC is not the reason. My experience with serialization of small documents
> has been very good, and in the place I am using it, I don't have the option
> of xml serialization.
> I decided to download and run your tests against the almost current code
> base (which didn't make much difference) and the small document I was using
> (sorry, I can't release that document, but I am just using it for
> comparison). The makeup of the document is reported in the results. These
> results are closer to my own findings.
> [Attachment: output.txt, Plain Text (text/plain), quoted-printable]
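Philip's 30/70 split can be estimated with the measurement pattern below: time a SAX pass that builds nothing (parser cost alone) against a full tree build (parser plus model construction); the gap approximates the model's share of build time. This sketch uses the JDK's DOM rather than JDOM, and a small generated document, so the numbers are only illustrative (real runs would use the 100-200K files, one measurement per JVM, as discussed above):

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.helpers.DefaultHandler;

// Rough parser-vs-model breakdown: SAX with a no-op handler measures the
// parser alone; a DOM build measures parser plus tree construction. Each
// pass also pays its own one-time setup, so treat the output as a sketch.
public class ParserShare {
    public static void main(String[] args) throws Exception {
        StringBuilder sb = new StringBuilder("<root>");
        for (int i = 0; i < 2000; i++)
            sb.append("<item n=\"").append(i).append("\">text</item>");
        byte[] doc = sb.append("</root>").toString().getBytes("UTF-8");

        long t0 = System.nanoTime();
        SAXParserFactory.newInstance().newSAXParser()
            .parse(new ByteArrayInputStream(doc), new DefaultHandler());
        long saxOnly = System.nanoTime() - t0;

        t0 = System.nanoTime();
        DocumentBuilderFactory.newInstance().newDocumentBuilder()
            .parse(new ByteArrayInputStream(doc));
        long fullBuild = System.nanoTime() - t0;

        System.out.println("sax-only " + saxOnly / 1000 + " us, full build "
            + fullBuild / 1000 + " us");
    }
}
```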