[jdom-interest] Performance benchmark results

graham glass graham-glass at mindspring.com
Fri May 11 11:40:09 PDT 2001


i concur with dennis re: the need to run each benchmark
independently. when i was writing my own JDOM/EXML benchmarks,
i found that running the benchmarks in sequence within
a single program run skewed the results noticeably.

cheers,
graham
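as a minimal sketch (not from either benchmark, just an illustration of the effect): timing the same work twice within one JVM run shows why in-sequence numbers differ, since the first pass pays class loading and JIT compilation that later passes do not.

```java
public class Main {
    // stand-in for one benchmark model's work; any CPU-bound
    // loop shows the warm-up effect
    static long work() {
        long sum = 0;
        for (int i = 0; i < 5_000_000; i++) {
            sum += i % 7;
        }
        return sum;
    }

    public static void main(String[] args) {
        for (int run = 0; run < 2; run++) {
            long start = System.nanoTime();
            long result = work();
            long ms = (System.nanoTime() - start) / 1_000_000;
            // run 0 typically reports a larger time than run 1
            // in the same JVM, even though the work is identical
            System.out.println("run " + run + ": " + ms
                    + " ms (checksum " + result + ")");
        }
    }
}
```

the same reasoning argues for one model per JVM invocation rather than all models in sequence.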

-----Original Message-----
From: jdom-interest-admin at jdom.org
[mailto:jdom-interest-admin at jdom.org]On Behalf Of Dennis Sosnoski
Sent: Friday, May 11, 2001 12:06 PM
To: philip.nelson at omniresources.com
Cc: jdom-interest at jdom.org
Subject: Re: [jdom-interest] Performance benchmark results


Thanks for the information, Philip. I should caution you, though, that the
test results from running all of the models in sequence (as in your output)
tend to be pretty erratic. I found it was best to test only one model per
run, using the "-one n" command line parameter; that's what I did for the
results shown on the website. I'll probably either disable the run-all-models
mode or require it to be specifically requested for the next iteration of the
benchmark, rather than have it be the default as now.

A faster parser would definitely help on the document build time. I get the
impression that this is especially true for smaller documents, so perhaps
Xerces does a lot of up-front initialization, which slows it on small inputs.
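As a rough sketch of how to see that initialization cost (using only the JDK's bundled JAXP/SAX API, not the benchmark code itself): timing repeated SAX parses of the same tiny document makes the one-time setup visible, since the first parse is typically much slower than the rest.

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

public class Main {
    // a deliberately tiny document, so fixed costs dominate
    static final byte[] DOC = "<root><child>text</child></root>".getBytes();

    public static void main(String[] args) throws Exception {
        SAXParserFactory factory = SAXParserFactory.newInstance();
        for (int i = 0; i < 3; i++) {
            long start = System.nanoTime();
            SAXParser parser = factory.newSAXParser();
            final String[] rootName = new String[1];
            parser.parse(new ByteArrayInputStream(DOC), new DefaultHandler() {
                @Override
                public void startElement(String uri, String local,
                                         String qName, Attributes atts) {
                    if (rootName[0] == null) {
                        rootName[0] = qName;  // remember the root element
                    }
                }
            });
            long micros = (System.nanoTime() - start) / 1000;
            // the first iteration pays parser construction, class
            // loading, and JIT warm-up; later iterations do not
            System.out.println("parse " + i + ": root=" + rootName[0]
                    + " (" + micros + " us)");
        }
    }
}
```

For a document this small, the fixed per-parse setup can easily outweigh the cost of the parse itself, which is consistent with small-document build times being dominated by the parser.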

  - Dennis

philip.nelson at omniresources.com wrote:
<snip>

> IMHO, the view of JDOM as lightweight, though currently under stress, rests
> on the fact that a faster parser could be used, if one were available.  JDOM
> can't ever be faster than the SAX parser it's based on now, but a custom
> parser could be faster.  I think it's important to note that in my analysis,
> for which I posted graphs, JDOM itself accounted for around 30% of the build
> time, with the underlying parser accounting for the rest.  That would
> indicate that there is only room for a fraction of a 30% performance gain
> unless some way could be found to improve the parser's portion of the time.
> I need to find out why this is not reflected in your results.
>
> > Please let me know if you see any errors or have any suggestions for
> > improvements in the tests. I'm planning to add an update in a couple of
> > weeks with results using new versions of the code bases, small files in
> > addition to the medium sized (100-200K) ones used for these tests, and
> > some added tests.
>
> Yes, the smaller ones, 5-20K, are much more relevant to me personally,
> though RPC is not the reason.  My experience with serialization of small
> documents has been very good, and in the place I am using it, I don't have
> the option of xml serialization.
>
> I decided to download and run your tests against the almost-current version
> (which didn't make much difference) and the small document I was using
> (sorry, but I can't release that document; I am just using it for
> comparison).  The makeup of the document is reported in the results.  The
> results are closer to my own findings.
>
> --------------------------------------------------------------------------
>                  Name: output.txt
>    output.txt    Type: Plain Text (text/plain)
>              Encoding: quoted-printable

_______________________________________________
To control your jdom-interest membership:
http://lists.denveronline.net/mailman/options/jdom-interest/youraddr@yourhost.com
