Re-indexing and overwriting index folder causes errors

The easiest solution would really be to restart the application at an hour of the day when you know it is not in use. Admittedly, not every developer is lucky enough to have such a time window every day.

If you do not use automatic indexing (i.e. your indexes are only ever updated by your periodic, off-server reindexing process, never directly by your application on entity changes), you might repurpose the filesystem-slave directory provider to periodically copy the content of your “staging” folder into the “production” folder: it will take care of all the low-level Lucene operations that are necessary.
See https://docs.jboss.org/hibernate/search/5.11/reference/en-US/html_single/#search-configuration-directory for more information about directory providers in general, and filesystem-slave in particular.
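As a sketch, the configuration could look like the following properties (the paths and refresh period are placeholders, and `default` can be replaced with a specific index name to scope the settings):

```properties
# Pull index content from a "master" copy instead of writing locally
hibernate.search.default.directory_provider = filesystem-slave
# Folder your off-server reindexing process writes to ("staging")
hibernate.search.default.sourceBase = /var/lucene/staging
# Folder the application actually reads from ("production")
hibernate.search.default.indexBase = /var/lucene/production
# Copy interval, in seconds
hibernate.search.default.refresh = 3600
```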

If you do use automatic indexing, I doubt the solution above will work, because it’s been designed for read-only indexes. You can try it, though. The most likely outcome is that writes from your application will be ignored after the content of “staging” is copied to “production” for the first time. If that happens, you can try setting hibernate.search.exclusive_index_use to false. This will likely lead to terrible performance, because the index writer will have to be reopened for each single transaction, but at least it should work…
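Note that this property is scoped per index; using the `default` scope applies it to all of them. Something like this in your configuration:

```properties
# Reopen the index writer for every transaction instead of holding it open.
# Expect a significant indexing slowdown.
hibernate.search.default.exclusive_index_use = false
```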

If all of the above fails, or performance is not satisfactory, then you will have to ensure the indexes are not being used while you perform the rather brutal swapping of directories.

This implies a short period during which neither search queries nor automatic indexing can be performed: essentially a lock-down of your application. However, if all you need to do is rename folders, the lock-down should be very short.

Implementing the lock-down will be up to you: Hibernate Search does not support it. Essentially, you will need to make HTTP requests either fail or wait while the lock-down is being enforced. Depending on your framework, there should be a number of ways to do that. You might want to exclude technical administration pages from the lock-down, just in case…
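To give you an idea, here is a minimal, hypothetical sketch of such a gate (all names are my own; you would still have to wire it into a servlet filter or your framework's equivalent yourself):

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

class LockDownGate {
    private final AtomicBoolean lockedDown = new AtomicBoolean( false );
    private final AtomicInteger inFlight = new AtomicInteger( 0 );

    /** Call at the start of each request; false means "reject or delay the request". */
    boolean tryEnterRequest() {
        if ( lockedDown.get() ) {
            return false;
        }
        inFlight.incrementAndGet();
        // Re-check: the lock-down may have started between the two calls.
        if ( lockedDown.get() ) {
            inFlight.decrementAndGet();
            return false;
        }
        return true;
    }

    /** Call when a request finishes (e.g. in a finally block). */
    void exitRequest() {
        inFlight.decrementAndGet();
    }

    /** Refuse new requests, then wait for ongoing ones to finish. */
    void startLockDown() {
        lockedDown.set( true );
        while ( inFlight.get() > 0 ) {
            Thread.onSpinWait(); // crude; a real implementation would park the thread
        }
    }

    /** Allow requests to be processed again. */
    void stopLockDown() {
        lockedDown.set( false );
    }
}
```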

As for the process, here is what you will have to do:

  1. Start enforcing the lock-down, preventing further HTTP requests from being processed.
  2. Wait for ongoing HTTP requests to be processed. After that, there should be no read lock on the indexes anymore (at least not with default settings).
  3. Release the write locks on the indexes:
    // The EntityManagerFactory can be injected with @PersistenceUnit,
    // or retrieved from an entity manager using .getEntityManagerFactory()
    EntityManagerFactory entityManagerFactory = ...;
    SearchIntegrator searchIntegrator =
            SearchIntegratorHelper.extractFromEntityManagerFactory( entityManagerFactory );
    for ( EntityIndexBinding binding : searchIntegrator.getIndexBindings().values() ) {
        for ( IndexManager indexManager : binding.getIndexManagerSelector().all() ) {
            // Flush pending index changes and release the write lock
            indexManager.flushAndReleaseResources();
        }
    }
    
    Be aware that this code uses SPIs, meaning you might experience incompatible changes in minor releases of Hibernate Search. If you’re ready to update your code when upgrading Hibernate Search, that should not be a problem.
  4. Replace the old index directories with the new ones.
  5. Stop enforcing the lock-down, allowing HTTP requests to be processed by your application. Hibernate Search will automatically lock the indexes again.
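Step 4, the swap itself, could be sketched with java.nio as below (the paths and the `.old` suffix are my own convention; both folders should live on the same filesystem so the moves are plain renames):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

class IndexSwapper {
    /**
     * Replaces the "production" index folder with the "staging" one,
     * keeping the old index around under a ".old" suffix, just in case.
     */
    static void swap(Path staging, Path production) throws IOException {
        Path old = production.resolveSibling( production.getFileName() + ".old" );
        Files.move( production, old );     // set the old index aside
        Files.move( staging, production ); // promote the new index
    }
}
```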