- Bug
- Resolution: Obsolete
- Major
- None
- 8.2.4.Final
- None
When using the JpaStore, it loads the IDs and then iterates over the results, fetching each record individually. For large datasets the performance is therefore very poor. There is a comment in the code acknowledging this, but in its current state it is effectively unusable.
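The pattern described above is effectively the classic N+1 query problem. A rough sketch of the round-trip cost (this is illustrative only, not the actual JpaStore source; the commented-out JPA calls are stand-ins):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch, NOT the real JpaStore code: one query fetches the
// ID list, then each record is fetched with its own database round trip.
public class PerRecordLoad {
    public static void main(String[] args) {
        // Stand-in for something like: SELECT e.id FROM MyEntity e
        List<Integer> ids = new ArrayList<>();
        for (int i = 0; i < 12600; i++) ids.add(i);

        int roundTrips = 1; // the initial ID query
        for (Integer id : ids) {
            // Stand-in for: em.find(MyEntity.class, id) -- one query per ID
            roundTrips++;
        }
        System.out.println(roundTrips); // 12,601 round trips for 12,600 records
    }
}
```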
As an example, with a dataset of 12,600 records using a generic but customised JPA implementation:
- Bulk load: 977 ms
- JpaStore: 137,906 ms
- Increase: 14,015%
Obviously parallelising the calls or using another DB might be quicker, but not by much!
Would it be possible to have some level of chunking/batching of the load? IMO this would be a suitable compromise.
I'm afraid I can't share the code for my loader, but it is loading a simple entity with no referenced objects, so no joins.
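The chunking/batching suggested above could look roughly like this: split the preloaded ID list into fixed-size chunks and issue one `IN` query per chunk instead of one query per ID. This is a sketch under assumptions (the chunk size, `ChunkedLoader` class, and `MyEntity` JPQL are all hypothetical, not Infinispan API):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of batched loading; names and chunk size are illustrative.
public class ChunkedLoader {
    static final int CHUNK_SIZE = 500;

    // Partition ids into consecutive sub-lists of at most CHUNK_SIZE elements.
    static <T> List<List<T>> chunks(List<T> ids) {
        List<List<T>> out = new ArrayList<>();
        for (int i = 0; i < ids.size(); i += CHUNK_SIZE) {
            out.add(ids.subList(i, Math.min(i + CHUNK_SIZE, ids.size())));
        }
        return out;
    }

    public static void main(String[] args) {
        List<Integer> ids = new ArrayList<>();
        for (int i = 0; i < 12600; i++) ids.add(i);

        int queries = 0;
        for (List<Integer> chunk : chunks(ids)) {
            // With a real EntityManager each chunk would be a single query:
            // em.createQuery("SELECT e FROM MyEntity e WHERE e.id IN :ids", MyEntity.class)
            //   .setParameter("ids", chunk)
            //   .getResultList();
            queries++;
        }
        // 12,600 IDs in chunks of 500 -> 26 queries instead of 12,600.
        System.out.println(queries);
    }
}
```

Even with a conservative chunk size, this cuts the number of round trips by two to three orders of magnitude, which should bring the JpaStore load time much closer to the bulk-load figure above.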