I have dug a bit more and found the reason. The query above is fine. We loop over all the entries and perform another lookup inside the loop, and it is that lookup that consumes the heap:
for (Clog clog : listObj) {
    ...
    Hclog entry = lookupMgr.getHByClog(clog);
    if (entry != null) {
        ....
    }
    ...
}
Here is the lookup code, lookupMgr.getHByClog(clog):

CriteriaBuilder builder = session.getCriteriaBuilder();
CriteriaQuery<Hclog> criteriaQuery = builder.createQuery(Hclog.class);
Root<Hclog> root = criteriaQuery.from(Hclog.class);
criteriaQuery.select(root);
// Initial
Predicate and = builder.equal(root.get(Hclog_.clog), clog);
Query<Hclog> query = session.createQuery(criteriaQuery.where(and));
Hclog obj = query.getSingleResult();
//Hclog obj = query.getSingleResultOrNull(); // did not help
return obj; // getSingleResult() throws NoResultException rather than returning null, so the old "obj != null ? obj : null" was redundant
####
Hclog class relation to Clog (the field/getter names were mistyped in my first post; corrected here):

private Clog clog = null;

@ManyToOne(fetch = FetchType.EAGER)
@JoinColumn(name = "clogPartnumber")
@NotFound(action = NotFoundAction.IGNORE)
public Clog getClog() {
    return clog;
}
####
The generated metamodel (mapping) class declares Hclog_.clog as:

public static volatile SingularAttribute<Hclog, Clog> clog;
This is not new code; it has been running fine since we converted from the legacy Criteria API some years ago.
We use a hibernate.current_session_context_class that follows the session-per-operation anti-pattern (section 8.5.1 of the docs): we set the flush mode to MANUAL and flush the session once at the end of the job.
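One mitigation we have not tried yet would be to flush and clear the session periodically inside the job, which drops everything the session has accumulated, including the BatchFetchQueue bookkeeping. A sketch only, reusing the names from the snippets above; the batch size of 1000 and the loop shape are illustrative, not our actual job code:

```java
// Sketch: bound per-session state in a long-running job by flushing
// and clearing every N iterations. N = 1000 here is an assumption.
int processed = 0;
for (Clog clog : listObj) {
    Hclog entry = lookupMgr.getHByClog(clog);
    if (entry != null) {
        // ... process entry ...
    }
    if (++processed % 1000 == 0) {
        session.flush(); // push pending changes (we run with MANUAL flush mode)
        session.clear(); // detach entities and drop per-session caches
    }
}
```

This would cap the session's retained state at roughly one batch's worth, at the cost of detaching any entities still held by the job.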
Looking at BatchFetchQueue, there is an additional field, subselectsByEntityKey, which keeps a record of every subselect fetch registered in the session.
In our job this map grows to about 50K entries, and the MAT leak report above points straight at it.
/**
 * A map of {@link SubselectFetch subselect-fetch descriptors} keyed by the
 * {@link EntityKey} against which the descriptor is registered.
 */
private Map<EntityKey, SubselectFetch> subselectsByEntityKey;

/**
 * Adds a subselect fetch descriptor for the given entity key.
 *
 * @param key The entity for which to register the subselect fetch.
 * @param subquery The fetch descriptor.
 */
public void addSubselect(EntityKey key, SubselectFetch subquery) {
    if ( subselectsByEntityKey == null ) {
        //subselectsByEntityKey = CollectionHelper.mapOfSize( 12 );
        subselectsByEntityKey = Collections
                .synchronizedMap( new LRUMap<EntityKey, SubselectFetch>( 12 ) );
    }
    final SubselectFetch previous = subselectsByEntityKey.put( key, subquery );
    if ( previous != null && LOG.isDebugEnabled() ) {
        LOG.debugf(
                "SubselectFetch previously registered with BatchFetchQueue for `%s#%s`",
                key.getEntityName(),
                key.getIdentifier()
        );
    }
}
As a test I swapped the original

CollectionHelper.mapOfSize( 12 )

for a fixed-size org.apache.commons.collections4.map.LRUMap:

new LRUMap<EntityKey, SubselectFetch>( 12 )

Now the job completes normally.
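The LRUMap helps because it simply evicts the least-recently-used entry once it reaches capacity, so the map can never hold more than 12 descriptors no matter how many puts the job performs. The same behaviour can be sketched with the JDK's LinkedHashMap in access order (an illustration only, not Hibernate code; the class name is made up):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustration of the fix: a size-bounded, least-recently-used map.
// org.apache.commons.collections4.map.LRUMap behaves the same way;
// this stdlib version just makes the eviction explicit.
class BoundedLruMap<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    BoundedLruMap(int maxSize) {
        super(16, 0.75f, true); // accessOrder = true gives LRU ordering
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Evict the least-recently-used entry once capacity is exceeded,
        // so the map's size stays bounded regardless of how many puts occur.
        return size() > maxSize;
    }
}
```

With a bound of 12, even 50K puts leave only 12 live entries for the GC to retain, which matches the behaviour I observed after the swap.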
Also, for some reason, inspecting the subquery argument to the put in the debugger always fails with "com.sun.jdi.InvocationException: Exception occurred in target VM occurred invoking method", and I never see a getSubselect(…) or removeSubselect(…) call take entries back out of the map either.