[jdom-interest] JDOM and very large files
john.logsdon at btclick.com
Fri Oct 10 01:18:02 PDT 2003
You're right, JDOM reads the entire file into memory so that you get a
tree that you can traverse and manipulate.
If all you intend to do is parse the file in a read-only manner, use a SAX
parser (which is what JDOM itself uses to read the file into the tree). SAX
uses an event-driven model that lets you process the file as it's being
read, avoiding loading the whole thing into memory first.
As a starting point, take a look at this tutorial at IBM, which
explains when it's best to use SAX over DOM and walks you through
building some examples.
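To make the event-driven idea concrete, here's a minimal sketch of a SAX
handler that counts elements as the parser streams through the document,
never building a tree. The element name "record" is just a placeholder,
since the original post doesn't show the file's actual structure:

```java
import java.io.StringReader;

import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;

import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.helpers.DefaultHandler;

// Counts <record> elements in a streaming fashion; memory use stays
// constant no matter how large the input is.
public class SaxCount {

    static int count(InputSource source) throws Exception {
        final int[] n = {0};
        SAXParser parser = SAXParserFactory.newInstance().newSAXParser();
        parser.parse(source, new DefaultHandler() {
            @Override
            public void startElement(String uri, String localName,
                                     String qName, Attributes atts) {
                // Called once per opening tag as the file is read;
                // "record" is a hypothetical element name.
                if ("record".equals(qName)) {
                    n[0]++;
                }
            }
        });
        return n[0];
    }

    public static void main(String[] args) throws Exception {
        String xml = "<records><record/><record/><record/></records>";
        System.out.println(count(new InputSource(new StringReader(xml))));
    }
}
```

For a real 1.3 Gb file you'd pass a FileInputStream to the InputSource
instead of a StringReader; the handler code stays the same.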
Hope that helps.
From: "Daryl Handley" <darylhandley72 at yahoo.com>
To: <jdom-interest at jdom.org>
Date: Thu, 9 Oct 2003 22:07:54 -0700
Subject: [jdom-interest] JDOM and very large files
I am trying to parse a very large file (1.3 Gb) using JDOM. I have used it
before with smaller files, but never large ones. It seems like the parser
attempts to read the whole file into memory and build the document. Is there
a way to read only part of the document into memory ? I only need to read
through the file sequentially; random access to parts of the doc is not
important. Does anyone have any suggestions how to do this with JDOM ? Or
in general ? Any other language ?
Otherwise I may have to write my own parser to do it in Perl. The
structure is fairly simple (just huge) so this shouldn't be too hard, but I
would prefer to do it with JDOM if possible.