Hi. I’m working on a project that uses a large Oracle DB containing hundreds of sequences (too many to count manually). I’m trying to debug why the application’s startup time is so slow. One thing I have identified through logging is that Hibernate spends ~7s “normalising” while it appears to scan every sequence that exists in the DB. This is evident in logs such as:
Analysing each sequence on its own doesn’t take long. However, multiplied across the hundreds of sequences in the DB, it adds up to a considerable time. In my testing, the first such log line appears at 2021-03-22 15:24:17.163 and the last at 2021-03-22 15:24:23.658, with no other logs in between.
The project I am working on models only a very small portion of the DB in question and references just 3 sequences. My questions are: 1) why are all of the sequences in the DB being evaluated, and 2) how can I stop this?
Thanks for the reply. We do not use hbm2ddl validate; however, the suggestion to connect with a user who can’t see as many objects is something I will definitely try. I will report back if it works!
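For anyone wanting to check whether user visibility is the culprit: on Oracle, the sequence metadata Hibernate reads typically comes from the ALL_SEQUENCES dictionary view, which lists every sequence the connecting user can see, not just the ones the application owns or maps. A quick hedged sketch of comparing the two scopes over plain JDBC (the connection URL and credentials below are placeholders, not values from this thread):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class SequenceVisibilityCheck {
    public static void main(String[] args) throws SQLException {
        // Placeholder URL/credentials; substitute your own connection details.
        try (Connection c = DriverManager.getConnection(
                "jdbc:oracle:thin:@//db-host:1521/SERVICE", "app_user", "secret");
             Statement s = c.createStatement()) {
            // Sequences owned by the connecting user only.
            System.out.println("USER_SEQUENCES: " + count(s, "user_sequences"));
            // Every sequence the user is allowed to see; this is the scope
            // a metadata scan would walk, so a large gap here suggests that
            // a more restricted user would shrink the startup scan.
            System.out.println("ALL_SEQUENCES:  " + count(s, "all_sequences"));
        }
    }

    private static long count(Statement s, String view) throws SQLException {
        try (ResultSet rs = s.executeQuery("SELECT COUNT(*) FROM " + view)) {
            rs.next();
            return rs.getLong(1);
        }
    }
}
```

If ALL_SEQUENCES reports hundreds of rows while USER_SEQUENCES reports a handful, restricting the application user’s privileges (or its visible schemas) should directly reduce the number of sequences scanned at startup.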
I guess you could subclass the Dialect to override getSequenceInformationExtractor and return a custom implementation whose extractMetadata returns an empty list, but that would pose a risk if your sequences were created incorrectly, since Hibernate would no longer see their actual settings.
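A minimal sketch of that approach, assuming Hibernate 5.x with Oracle: the framework already ships a no-op extractor (SequenceInformationExtractorNoOpImpl, whose extractMetadata returns an empty list), so the subclass can reuse it instead of hand-writing one. The dialect class name below is illustrative; extend whichever Oracle dialect you currently configure.

```java
import org.hibernate.dialect.Oracle12cDialect;
import org.hibernate.tool.schema.extract.internal.SequenceInformationExtractorNoOpImpl;
import org.hibernate.tool.schema.extract.spi.SequenceInformationExtractor;

/**
 * Oracle dialect that skips the startup scan of ALL_SEQUENCES.
 * Trade-off: Hibernate no longer reads real sequence metadata, so
 * mismatches between mapped increment sizes and the actual sequence
 * definitions will go undetected.
 */
public class NoSequenceScanOracleDialect extends Oracle12cDialect {

    @Override
    public SequenceInformationExtractor getSequenceInformationExtractor() {
        // Returns an empty list from extractMetadata, so no per-sequence
        // queries are issued during bootstrap.
        return SequenceInformationExtractorNoOpImpl.INSTANCE;
    }
}
```

You would then point Hibernate at it via the usual dialect setting, e.g. `hibernate.dialect=com.example.NoSequenceScanOracleDialect` (package name hypothetical).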
I’d prefer it if you created a new topic describing the problem, possibly attaching a flamegraph or similar, so we can understand what causes your performance problems.
I’d hope that reading and checking 20000 sequences would not take more than a second.
If it does, and we can’t improve the code any further, I think a configuration setting would be OK, but first we need to understand the problem.