So I have this old project running Hibernate 5.2.11 with a huge domain model. The problem I am looking at has appeared as the application has reached a certain age and accumulated a good amount of data, with history built into the model.
I am looking at StackOverflowErrors… I know what you are gonna say: recursion. And yup, that's it, but it's not recursion in the application per se, and if I increase the stack size from an already healthy 2M to, say, 4M, the app runs again, albeit slowly.
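(For clarity, the stack size I'm talking about is the JVM thread-stack option, i.e. something like:)

```
java -Xss4m ...   # was -Xss2m
```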
Investigating it, I find that the recursion is around Cascade (org.hibernate.engine.internal.Cascade), and I get stack depths approaching 10,000 frames. That's a lot. So I monkey-patched Cascade and put in some logging for statistics, and it's pretty clear that the @ManyToOne/@OneToMany (etc.) associations are at the root of it, all of them with CascadeType.PERSIST and MERGE.
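The statistics hook was nothing fancy; roughly the helper below, called from the patched Cascade entry point (the helper class is mine, and the exact call site is hand-waved since Cascade's internals are version-specific):

```java
import java.util.concurrent.atomic.AtomicLong;

// Tiny helper invoked from my patched org.hibernate.engine.internal.Cascade.
// Counts invocations and samples the current stack depth every so often.
public final class CascadeStats {
    private static final AtomicLong HITS = new AtomicLong();

    public static void hit() {
        long n = HITS.incrementAndGet();
        if (n % 1_000_000 == 0) { // log every millionth hit to keep the noise down
            int depth = Thread.currentThread().getStackTrace().length;
            System.err.printf("Cascade hits=%,d  stack depth=%,d%n", n, depth);
        }
    }
}
```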
As far as I can tell, mappedBy is also placed correctly.
Cascade gets hit upwards of a billion times (I'll get back with a true number later; I'm going to rerun the test).
As far as I can tell, this is what happens for entities A, B, C, and the N's (sketched below):
A->B->C are OneToMany going down and ManyToOne the other way around.
A->N's are OneToMany, and ManyToOne in reverse; the N's represent the rest of the data model, everything in some way tied back to master table A.
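To make the shape concrete, here is a minimal sketch of what I mean (class and field names are invented, not our actual model; note the cascade on BOTH directions):

```java
import javax.persistence.*;
import java.util.List;

@Entity
class A {
    @Id @GeneratedValue Long id;

    @OneToMany(mappedBy = "a", cascade = {CascadeType.PERSIST, CascadeType.MERGE})
    List<B> bs;

    // ...plus similar @OneToMany collections to N1, N2, ... (the rest of the model)
}

@Entity
class B {
    @Id @GeneratedValue Long id;

    @ManyToOne(cascade = {CascadeType.PERSIST, CascadeType.MERGE})
    A a;

    @OneToMany(mappedBy = "b", cascade = {CascadeType.PERSIST, CascadeType.MERGE})
    List<C> cs;
}

@Entity
class C {
    @Id @GeneratedValue Long id;

    @ManyToOne(cascade = {CascadeType.PERSIST, CascadeType.MERGE})
    B b;
}
```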
Now a relatively simple Criteria query is run against C ("select from C where…"), and what happens is that it cascades to B, B cascades to A, and A… cascades to all the rest. This doesn't translate into database operations; it's in memory only. But there are SO many of them, and the recursive nature of it just piles onto the stack until it's game over. Increasing the stack size will fix the problem for a year, maybe… then it's going to turn sour again.
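The query itself is nothing special; roughly this, with an invented property name:

```java
import java.util.List;
import org.hibernate.Session;
import org.hibernate.criterion.Restrictions;

class Finder {
    // Legacy Criteria API (still present, if deprecated, in 5.2).
    @SuppressWarnings("unchecked")
    List<C> find(Session session, String value) {
        return (List<C>) session.createCriteria(C.class)
                .add(Restrictions.eq("someProperty", value)) // property name invented
                .list();
        // No extra SQL comes out of the cascades here; the C -> B -> A -> everything
        // walk happens purely in memory inside Cascade.
    }
}
```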
Can someone explain to me what we are doing wrong? My gut is telling me that doing history like that is a bad idea (columns with cancelled_date or end_date, etc.), as Hibernate can't tell current data from history data.