Query plan cache throwing out of memory for bulk insert query

I have been using a bulk insert query to insert data into a table. The query plan cache gets filled with a new query plan every time the number of tuples per execution differs, which eventually causes an out-of-memory error in the session factory. Can anyone suggest an approach to resolve this without affecting query execution time?
If I insert row by row, completion takes a long time, since the number of rows per execution is large every time.

What do you mean by that? First off, you should limit the number of query plans in the plan cache according to your available memory through configuration: https://docs.jboss.org/hibernate/orm/5.4/userguide/html_single/Hibernate_User_Guide.html#configurations-query
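For example, the plan cache sizes can be capped with the settings documented there. The values below are only illustrative; tune them to your heap:

```properties
# Maximum number of compiled query plans kept in memory (default: 2048)
hibernate.query.plan_cache_max_size=1024
# Maximum number of cached parameter-metadata entries for native queries (default: 128)
hibernate.query.plan_parameter_metadata_max_size=64
```

A smaller cache trades some plan-compilation overhead for a bounded memory footprint.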

If by tuples you mean a multi-valued parameter for an IN predicate, you can make use of the hibernate.query.in_clause_parameter_padding configuration setting: https://vladmihalcea.com/improve-statement-caching-efficiency-in-clause-parameter-padding/
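That setting is a one-liner; it pads the number of IN-clause bind parameters up to the next power of two, so queries with 5, 6, 7, or 8 values all share a single cached plan instead of four:

```properties
# Pad IN-clause parameter lists to powers of two (available since Hibernate 5.2.18)
hibernate.query.in_clause_parameter_padding=true
```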

I am using a bulk insert native query, i.e. `INSERT INTO table_name (col a, col b, …) VALUES (val a, val b, …), (val a, val b, …), …`. The values are generated dynamically based on the List we receive, and the number of columns is also high. This throws out of memory after executing a certain number of queries.

Why aren’t you inserting entities one by one through EntityManager.persist? That would reuse the same JDBC insert statement for every persist call, and Hibernate can even use native batching if you configure JDBC batching.

The size of the List will be too large. EntityManager.persist() will create an insert query for each object, and it would take forever to persist to the DB. That’s why a native query is being used. Is there any way we can work around the query plan cache with native queries?

What do you mean by the size of the list? Which list? Hibernate issues a JDBC insert statement as well, but it reuses the same statement and can automatically use the JDBC batching API if you configure it to do so.

If you configure a JDBC batch size of 100, for example, you can flush and clear the EntityManager after every 100 persist calls.
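A minimal sketch of that pattern, assuming an entity class and an `EntityManagerFactory` already exist (the names `rows` and `entityManagerFactory` are placeholders), and that `hibernate.jdbc.batch_size=100` is set in the persistence configuration:

```java
// Assumed persistence-unit settings (e.g. in persistence.xml):
//   hibernate.jdbc.batch_size = 100
//   hibernate.order_inserts   = true   // optional: groups inserts per entity type

EntityManager em = entityManagerFactory.createEntityManager();
em.getTransaction().begin();

final int batchSize = 100; // should match hibernate.jdbc.batch_size
for (int i = 0; i < rows.size(); i++) {
    em.persist(rows.get(i)); // one entity type -> one cached INSERT plan, reused every call
    if ((i + 1) % batchSize == 0) {
        em.flush();  // sends the pending inserts to the driver as one JDBC batch
        em.clear();  // detaches flushed entities so the persistence context stays small
    }
}

em.getTransaction().commit(); // flushes any remainder smaller than a full batch
em.close();
```

Because every persist reuses the same parameterized INSERT statement, the plan cache holds a single entry regardless of how many rows you insert, unlike a multi-row VALUES list whose SQL text changes with the row count.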