Some data is not indexed; the error code is HSEARCH400007.

When I run mass indexing on a total of about 4 million rows, roughly 2,000 of them fail to be indexed.
(The number of failures is not consistent.)
This is the code I use to build the index with Elasticsearch:

FullTextSession fullTextSession = Search.getFullTextSession(session);
fullTextSession.createIndexer()
//        .cacheMode(CacheMode.NORMAL)
//        .threadsToLoadObjects(5)
//        .transactionTimeout(3000)
//        .idFetchSize(200)
        .progressMonitor(new SimpleIndexingProgressMonitor())
        .startAndWait();

Indexing fails and the log below is produced.
How do I get a list of the IDs that failed to index? And is it possible to automatically re-index the failed entries?

2019-10-29 14:48:12.898 ERROR 23400 --- [Hibernate Search: Elasticsearch transport thread-2] o.h.s.exception.impl.LogErrorHandler     : HSEARCH000058: Exception occurred HSEARCH400007: Elasticsearch request failed.
Request: POST /_bulk with parameters {refresh=false}
Response: null
Subsequent failures:
        Entity com....entity.######  Id 2248733  Work Type HSEARCH400007: Elasticsearch request failed.
Request: POST /_bulk with parameters {refresh=false}
Response: null
        at java.util.concurrent.CompletableFuture.uniExceptionally(
        at java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(
        at java.util.concurrent.CompletableFuture.postComplete(
        at java.util.concurrent.CompletableFuture.completeExceptionally(
        at org.elasticsearch.client.RestClient$FailureTrackingResponseListener.onDefinitiveFailure(
        at org.elasticsearch.client.RestClient$1.retryIfPossible(
        at org.elasticsearch.client.RestClient$1.failed(
        at org.apache.http.concurrent.BasicFuture.failed(
        at org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.executionFailed(
        at org.apache.http.impl.nio.client.AbstractClientExchangeHandler.failed(
        at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.endOfInput(
        at org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(
        at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(
        at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(
        at org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(
        at org.apache.http.impl.nio.reactor.BaseIOReactor.readable(
        at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(
        at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(
        at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(
        at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(
        at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$
Caused by: org.apache.http.ConnectionClosedException: Connection is closed
        ... 12 common frames omitted

I am using Spring Boot 2.1.5.RELEASE and these Hibernate dependencies:


Why does this error occur?
Any help would be appreciated.

Implement a custom ErrorHandler, and set the `hibernate.search.error_handler` configuration property to the fully qualified class name of your implementation.
The handle method receives an ErrorContext parameter, whose getFailingOperations and getOperationAtFault methods return objects that expose a getId method. These are the IDs of the documents that couldn't be indexed.
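As a rough sketch of such a handler in Hibernate Search 5 (the class name and the static set are illustrative, not part of any API; check the ErrorHandler and ErrorContext javadoc for your exact version):

```java
import java.io.Serializable;
import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import org.hibernate.search.backend.LuceneWork;
import org.hibernate.search.exception.ErrorContext;
import org.hibernate.search.exception.ErrorHandler;

// Collects the IDs of documents that failed to index during mass indexing.
public class CollectingErrorHandler implements ErrorHandler {

    // Thread-safe set, since the backend may report errors from several threads.
    private static final Set<Serializable> FAILED_IDS = ConcurrentHashMap.newKeySet();

    @Override
    public void handle(ErrorContext context) {
        LuceneWork primary = context.getOperationAtFault();
        if (primary != null) {
            FAILED_IDS.add(primary.getId());
        }
        List<LuceneWork> failing = context.getFailingOperations();
        if (failing != null) {
            for (LuceneWork work : failing) {
                FAILED_IDS.add(work.getId());
            }
        }
    }

    @Override
    public void handleException(String errorMsg, Throwable exception) {
        // Failures not tied to a specific document end up here.
    }

    public static Set<Serializable> getFailedIds() {
        return FAILED_IDS;
    }
}
```

You would then register it with `hibernate.search.error_handler = com.example.CollectingErrorHandler` (package name hypothetical).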

I’m not sure I understand the question:

  • Are you asking how to reproduce the problem in an automated test? If so, I don’t know.
  • Are you asking how to throw an exception when at least one entity failed to index? If so, there’s no such built-in feature in Hibernate Search 5. Hibernate Search 6.0.0.Beta2 does that by default, and that could be an option if the cost of migrating your whole project to the new APIs is not too high (see here for more info).
    In Hibernate Search 5, you could implement the ErrorHandler mentioned above, and maintain an error count. If you notice that the count increased between the start of mass indexing and the end, you'll know that an error happened.
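Once you have collected the failed IDs (for instance via the ErrorHandler mentioned above), re-indexing them individually could look like this sketch; `failedIds` and `MyEntity` are placeholders for your own collection and entity type:

```java
import java.io.Serializable;
import java.util.Set;

import org.hibernate.Session;
import org.hibernate.search.FullTextSession;
import org.hibernate.search.Search;

public class Reindexer {

    // Re-index the entities whose IDs failed during mass indexing.
    public void reindexFailures(Session session, Set<Serializable> failedIds) {
        FullTextSession fullTextSession = Search.getFullTextSession(session);
        for (Serializable id : failedIds) {
            // Load the entity and push it to the index again.
            Object entity = session.get(MyEntity.class, id);
            if (entity != null) {
                fullTextSession.index(entity);
            }
        }
        // Apply the pending index works.
        fullTextSession.flushToIndexes();
    }
}
```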

From the last line in your stack trace, the connection to Elasticsearch was closed while indexing. Probably a network problem. Or an Elasticsearch node was restarted during mass indexing, but last time I checked the REST client is supposed to send the request to another node when that happens.

You should investigate why the connection gets closed. I know some people have a router that closes long-running connections, so this might be something like that. If that’s the case, this might provide you with a solution, but the error wasn’t exactly the same…
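If the closed connections turn out to be a timeout issue on your side, you could also try raising the Elasticsearch client timeouts. A hypothetical tuning, assuming the Hibernate Search 5 Elasticsearch integration (verify the property names and defaults against the documentation for your version):

```properties
# Values in milliseconds; increase if long bulk requests are being cut off.
hibernate.search.default.elasticsearch.connection_timeout = 10000
hibernate.search.default.elasticsearch.read_timeout = 120000
```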