I'm working on a legacy application with a HUGE amount of metadata (in one table). My poor laptop isn't the best-specced machine, so I'm fairly conscious of the memory usage of the applications I'm working on and generally don't just throw memory at a problem.
Anyway, I'm working on a part of the application that interrogates the data and writes output to a file sequentially: basically read-record, write-to-file, read-record, write-to-file. So when I got an OutOfMemoryError I took a closer look, and it turns out that by default the MySQL JDBC driver reads the entire result set into memory when Statement.executeQuery() is called, before you can read a single row of the ResultSet.
This creates an itch: there must be a way of reading one row from the database, using the result, and then reading the next row. After a bit of searching I found this blog post: MySQL JDBC Memory Usage on Large ResultSet. The same technique is also buried in the MySQL Connector/J reference documentation.
And here is the relevant code:
stmt = conn.createStatement(java.sql.ResultSet.TYPE_FORWARD_ONLY,
                            java.sql.ResultSet.CONCUR_READ_ONLY);
stmt.setFetchSize(Integer.MIN_VALUE);
Works like a charm.
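For context, here is a sketch of how that statement configuration fits into the read-record, write-to-file loop described above. The class, method, and query/column choices are illustrative, not from the original application; the key parts are the forward-only, read-only statement and the Integer.MIN_VALUE fetch size, which tell Connector/J to stream rows instead of buffering the whole result set:

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

// Illustrative sketch: stream rows one at a time from MySQL and write
// each to a file, so only the current row is held in memory.
public class StreamingExport {

    // Writes the first column of every row returned by `query` to `out`,
    // one line per row. Names and query are hypothetical examples.
    static void streamToFile(Connection conn, String query, Path out)
            throws SQLException, IOException {
        try (Statement stmt = conn.createStatement(
                     ResultSet.TYPE_FORWARD_ONLY,
                     ResultSet.CONCUR_READ_ONLY)) {
            // Signals Connector/J to stream the result set row by row
            // rather than materializing it all in memory up front.
            stmt.setFetchSize(Integer.MIN_VALUE);
            try (ResultSet rs = stmt.executeQuery(query);
                 BufferedWriter w = Files.newBufferedWriter(out)) {
                while (rs.next()) {            // read-record ...
                    w.write(rs.getString(1));  // ... write-to-file
                    w.newLine();
                }
            }
        }
    }
}
```

One caveat worth knowing: while a result set is being streamed this way, the connection can't issue other statements until the ResultSet is fully read or closed.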