I am persisting data using the EntityManager from multiple threads.
Each thread commits when it finishes, which can leave the data inconsistent if some threads succeed and others fail.
So how can I ensure that
only the main thread performs the commit, and
if any thread fails, all data is rolled back?
Can I handle this by using ManagedExecutorService and passing the same UserTransaction through all threads?
Transactions are usually bound to a Java Thread, so you can’t pass around a UserTransaction. A database transaction also is “isolated” in some way, so if you want to implement some sort of synchronization between multiple transactions, you will have to implement this yourself.
Can I handle this by using ManagedExecutorService and passing the same UserTransaction through all threads?
What is “this”? You haven’t explained what you’re even trying to do. You can certainly generate data in multiple threads, but if you can’t partition the data in a way that every partition is independently useful/valid, then you will have to transfer that data to a single thread which will then do all the database interaction and commit.
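A minimal sketch of that pattern, assuming your work can be generated off the database: worker threads only produce data, the main thread collects it and would then persist everything inside a single transaction. The `Row` record and the partition/row counts are hypothetical stand-ins for your entities; the JPA calls are shown only as comments.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SingleCommitterDemo {
    // Hypothetical stand-in for a JPA entity.
    record Row(int partition, String payload) {}

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<List<Row>>> futures = new ArrayList<>();
        for (int p = 0; p < 4; p++) {
            final int partition = p;
            // Workers only *generate* data; they never touch the EntityManager.
            futures.add(pool.submit(() -> {
                List<Row> rows = new ArrayList<>();
                for (int i = 0; i < 3; i++) {
                    rows.add(new Row(partition, "row-" + i));
                }
                return rows;
            }));
        }
        List<Row> all = new ArrayList<>();
        try {
            for (Future<List<Row>> f : futures) {
                all.addAll(f.get()); // get() rethrows any worker failure
            }
        } catch (ExecutionException e) {
            // A worker failed: nothing was written yet, so there is nothing to roll back.
            pool.shutdown();
            return;
        }
        pool.shutdown();
        // Only the main thread talks to the database, in one transaction, e.g.:
        //   utx.begin(); all.forEach(em::persist); utx.commit();
        System.out.println("rows to persist in one transaction: " + all.size());
    }
}
```

Because no thread writes to the database on its own, a failure in any worker simply means the main thread never starts the transaction, which gives you the all-or-nothing behavior without sharing a `UserTransaction` across threads.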
It’s not possible. You can’t have multiple threads interact with the same JDBC connection as part of the same transaction.
Try isolating the work of the two steps into “partitions” so that every partition of work can be committed independently; only then can you parallelize. With parallelization you will also have to forget about an atomic commit of the whole work. One thing you could do is write to temporary tables and, in a post-step, drop the old tables and rename the temporary ones. That way you’d get atomic visibility of the data, though I’m not sure whether that is possible in your setup. Alternatively, you could insert into the main tables with some visibility flag or token set, and in a post-step reset that column to make the rows visible to the rest of your application.
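The visibility-flag variant might look roughly like this; table and column names (`orders`, `batch_token`) are made up for illustration, and the exact rename/drop syntax varies by database:

```sql
-- Each parallel worker commits its own partition, but marks the rows hidden:
INSERT INTO orders (id, payload, batch_token)
VALUES (1, '...', 'batch-42');

-- Readers only see published rows:
SELECT * FROM orders WHERE batch_token IS NULL;

-- Post-step, after all workers succeeded: publish the batch atomically
-- in one small transaction.
UPDATE orders SET batch_token = NULL WHERE batch_token = 'batch-42';

-- If any worker failed instead, discard the partial batch:
DELETE FROM orders WHERE batch_token = 'batch-42';
```

The publish step is a single short transaction, so the application either sees the whole batch or none of it, even though the bulk inserts were committed independently.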