Hello Team Hibernate!
I am using Hibernate Search and really like how it integrates into my application!
For some entities I cannot use, and do not need, automatic indexing. I get new data delivered weekly, which is why I use the MassIndexer to completely rebuild the index once a week. I use purgeAllOnStart and optimizeAfterPurge to make sure the index is rebuilt completely fresh.
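For context, my weekly rebuild job is essentially the following (simplified; the entity name WeeklyData is just a placeholder for my real entity):

import javax.persistence.EntityManager;
import org.hibernate.search.jpa.FullTextEntityManager;
import org.hibernate.search.jpa.Search;

public void rebuildIndex(EntityManager entityManager) throws InterruptedException {
    FullTextEntityManager ftem = Search.getFullTextEntityManager(entityManager);
    ftem.createIndexer(WeeklyData.class)   // WeeklyData stands in for my real entity
            .purgeAllOnStart(true)         // purge all existing documents before indexing
            .optimizeAfterPurge(true)      // optimize right after the purge
            .startAndWait();               // block until the rebuild has finished
}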
I ran into a problem where the index folder size increases each time I rebuild, even if there are no additional documents.
It seems to me that, after purging, the to-be-deleted segments do not get deleted from my filesystem. Only when I restart my application does the "cleanup" take place.
I debugged my application and saw that when I call purgeAll, Lucene tries to delete the old index segments but gets an IOException (file is in use).
Only when I shut down my application does Lucene seem to do a "cleanup" and manage to delete those files.
I use the default reader strategy and can see that Hibernate Search uses SharingBufferReaderProvider.
From the code of SharingBufferReaderProvider it looks to me like there is at least one IndexReader that stays open the whole time and is shared across the application.
Could it be that those IndexReaders prevent Lucene from deleting the old Segments?
One thing I tried is setting the reader strategy to not-shared:
hibernate.search.reader.strategy = not-shared
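In case it matters: since I am on Spring Boot, I pass this property through to Hibernate via the spring.jpa.properties prefix, roughly like this in application.properties:

# application.properties (spring.jpa.properties.* is passed through to Hibernate)
spring.jpa.properties.hibernate.search.reader.strategy=not-shared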
With this setting my problem is solved: the deleted segments get cleaned up correctly.
However, I am not sure how much of a problem this will be for my filesystem, since, as I understand it, each query would open and close the index with this setting.
Is this a known issue and is there a workaround for this?
I use Hibernate Search 5.11.12.Final with Spring Boot.
Thanks in advance!
Rene