I think you are. I’m just guessing, but here is what I would worry about:
- Lock files are there for a reason: if you change the index files while Lucene is writing to the index, at best you’ll lose that data (which might not matter if you do some “catch-up” reindexing afterwards), at worst you might introduce inconsistent data in the index (for example a duplicate document, because the document was marked for deletion in the old files and added to the new ones). See the first sketch after this list for one way to guard against that.
- Unless you set `hibernate.search.exclusive_index_use` to `false` (which is not great for performance), any further write to the index may still be directed to the old index files. That was the point of the loop calling `flushAndReleaseResources`: making sure the index writers will be re-opened later and will use file descriptors pointing to the new index. The second sketch after this list shows how that property can be set.
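To illustrate the first point, here is a rough, hypothetical sketch (assuming Lucene 5.2+ and a filesystem-based index; the `swapIndexFiles` method and the `Runnable` callback are made-up names, not anything from Hibernate Search) that tries to take Lucene’s write lock before replacing the index files, and refuses to touch them if an `IndexWriter` still holds it:

```java
import java.nio.file.Paths;

import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.Lock;
import org.apache.lucene.store.LockObtainFailedException;

public class IndexSwapGuard {

    /**
     * Runs the given file swap only while holding Lucene's write lock,
     * so no IndexWriter can be writing to the index at the same time.
     */
    public static void swapIndexFiles(String indexPath, Runnable swapFiles) throws Exception {
        try (Directory dir = FSDirectory.open(Paths.get(indexPath));
             Lock writeLock = dir.obtainLock(IndexWriter.WRITE_LOCK_NAME)) {
            // We now hold write.lock: no other writer can open this index
            // until the lock is released, so replacing the files is safer.
            swapFiles.run();
        } catch (LockObtainFailedException e) {
            throw new IllegalStateException(
                    "An IndexWriter still holds the write lock on " + indexPath
                    + "; flush/close it before replacing the index files.", e);
        }
    }
}
```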
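And for the second point, a minimal sketch of setting that property programmatically, assuming Hibernate Search 5.x bootstrapped through JPA. The persistence unit name `my-persistence-unit` is a placeholder, and I’m using the default-index-scoped key `hibernate.search.default.exclusive_index_use` (there is also a per-index variant, `hibernate.search.<indexName>.exclusive_index_use`):

```java
import java.util.HashMap;
import java.util.Map;

import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class ExclusiveIndexUseConfig {

    public static EntityManagerFactory bootstrap() {
        Map<String, Object> settings = new HashMap<>();
        // Let index writers be closed after each changeset instead of being
        // kept open (and locked) for the lifetime of the application.
        // This costs write throughput but avoids writes going through
        // file descriptors that still point to the old index files.
        settings.put("hibernate.search.default.exclusive_index_use", "false");
        // "my-persistence-unit" is a placeholder for your persistence unit name.
        return Persistence.createEntityManagerFactory("my-persistence-unit", settings);
    }
}
```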
But at that point we’re reaching very low-level parts of the Lucene integration. @Sanne might be of more help, if he’s available.