Disable setting IMPLICIT_TRANSACTIONS to ON in SQL Server


#1

Hello

We are developing a high-performance/high-throughput application using Hibernate 5.2 and SQL Server Standard 2016. The application processes data in multiple threads that access the same sets of tables. We are unfortunately experiencing huge slowdowns and locking issues due to the fact that Hibernate, directly or indirectly (through the JDBC driver), sends a SET IMPLICIT_TRANSACTIONS ON statement every time we want to access the data, even for read-only processing.

Our transaction strategy is as follows:

  1. Read all the data into memory (without a transaction; @Transactional(propagation = Propagation.NEVER) is used to ensure that).
  2. Process the data on the application side.
  3. Start a transaction, save the results, and commit right away.
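Sketched in plain Java, the three steps look roughly like this. Everything here is a hypothetical stand-in: the `List` plays the role of the database table, and the `TxRunner` interface mimics what Spring's transaction management would do around step 3.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

public class Workflow {
    // Hypothetical stand-in for Spring's TransactionTemplate / @Transactional.
    interface TxRunner { <T> T inTransaction(Supplier<T> work); }

    // Trivial implementation; the real one would begin and commit a DB transaction.
    static final TxRunner TX = new TxRunner() {
        public <T> T inTransaction(Supplier<T> work) { return work.get(); }
    };

    // Stand-in for the database table.
    static final List<Integer> TABLE = new ArrayList<>(List.of(1, 2, 3));

    public static List<Integer> run() {
        // Step 1: read everything outside any transaction
        // (the real code uses @Transactional(propagation = Propagation.NEVER)).
        List<Integer> snapshot = new ArrayList<>(TABLE);

        // Step 2: process the data purely in memory, holding no connection.
        List<Integer> processed = snapshot.stream().map(v -> v * 10).toList();

        // Step 3: a short write-only transaction: save the results, commit right away.
        return TX.inTransaction(() -> {
            TABLE.clear();
            TABLE.addAll(processed);
            return processed;
        });
    }

    public static void main(String[] args) {
        System.out.println(run()); // [10, 20, 30]
    }
}
```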

Unfortunately, even with NEVER propagation and with autoCommit set to true at the Hibernate properties level, implicit transactions are still being created in step 1, where we are trying to prevent the application from creating a transaction at all.

This is a serious problem for us, and I’ve seen that some people on the Internet have experienced it too, but there is no answer to be found. Can you please advise? As far as we know (from our DBAs), SQL Server is by default supposed to work with autocommit enabled and should never run inside a transaction unless one is explicitly started.


#2

It’s not Hibernate that does that. It’s the SQL Server JDBC Driver.

Unfortunately, even with NEVER propagation and with autoCommit set to true at the Hibernate properties level, implicit transactions are still being created in step 1, where we are trying to prevent the application from creating a transaction at all.

An RDBMS will always use transactions. The only difference is the transaction scope. With autocommit, the scope is a single statement, while with autocommit disabled the scope can span multiple statements.
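The difference in scope can be illustrated with a toy key-value "database" (all names hypothetical; a real driver does this on the server, not in a `Map`):

```java
import java.util.HashMap;
import java.util.Map;

public class TxScopeDemo {
    private final Map<String, Integer> committed = new HashMap<>();
    private final Map<String, Integer> pending = new HashMap<>();
    private boolean autoCommit = true;

    void setAutoCommit(boolean on) { autoCommit = on; }

    void put(String k, int v) {
        pending.put(k, v);
        if (autoCommit) commit();   // autocommit: every statement is its own transaction
    }

    void commit()   { committed.putAll(pending); pending.clear(); }
    void rollback() { pending.clear(); }

    Integer get(String k) { return committed.get(k); }

    public static void main(String[] args) {
        TxScopeDemo db = new TxScopeDemo();

        // Autocommit on: the write commits immediately; nothing can undo it later.
        db.put("a", 1);

        // Autocommit off: both writes belong to one transaction...
        db.setAutoCommit(false);
        db.put("b", 2);
        db.put("c", 3);
        db.rollback();              // ...so rollback discards them together.

        System.out.println(db.get("a")); // 1
        System.out.println(db.get("b")); // null
    }
}
```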

Why do you think that enabling autocommit in step 1 will speed up the workflow?

This is a serious problem for us, and I’ve seen that some people on the Internet have experienced it too, but there is no answer to be found. Can you please advise? As far as we know (from our DBAs), SQL Server is by default supposed to work with autocommit enabled and should never run inside a transaction unless one is explicitly started.

I really hope your DBA didn’t say that, or you misinterpreted their words. No RDBMS is designed to work in autocommit mode. How would it enforce serializability or concurrency control if reads and writes were executed in separate transactions?

I think you should just benchmark your workflow and tune the system based on measurements, not on random advice you find on the Internet.

For a start, check out these 14 high-performance Java Persistence tips. There’s a lot you can do to speed up an application, and using autocommit is not one of those.


#3

Hi Vlad, thanks for the quick reply.

Is it possible then that the “hibernate.connection.autocommit” setting in the Hibernate properties is not being correctly propagated to the JDBC driver?

I really hope your DBA didn’t say that, or you misinterpreted their words. No RDBMS is designed to work in autocommit mode. How would it enforce serializability or concurrency control if reads and writes were executed in separate transactions?

You have no idea how much pain it was to refactor the application to do that and still have some kind of data consistency. I know for sure I’ve not misinterpreted anything :slight_smile:


#4

Unless you are using DatasourceConnectionProviderImpl (i.e., an externally configured DataSource), the hibernate.connection.autocommit setting will be used by the Hibernate ConnectionProvider.
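As a sketch, assuming Hibernate manages its own connections, the flag would go into the Hibernate configuration; with an external DataSource (DatasourceConnectionProviderImpl), the equivalent setting belongs on the pool itself, under whatever property name the pool documents:

```properties
# Only honored when Hibernate creates the connections itself,
# not when an external DataSource is supplied:
hibernate.connection.autocommit=true
```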

I know for sure I’ve not misinterpreted anything

Please explain why you want to set the autocommit mode to speed up your application. Also, can you provide a link to the official MSDN documentation which says you should do that?


#5

Hi Vlad

This was decided after:


#6

The locking and blocking are due to the default 2PL (Two-Phase Locking) concurrency control mechanism employed by SQL Server.

I’d probably give the MVCC engine a try and use Snapshot Isolation instead of “auto-commit”, which opens the door to a great variety of anomalies.
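For reference, switching SQL Server to row-versioned (MVCC) reads is a database-level setting; a sketch, with `MyAppDb` as a placeholder database name:

```sql
-- Enable snapshot isolation for the database (run once, as an administrator):
ALTER DATABASE MyAppDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Optionally make plain READ COMMITTED use row versioning as well:
ALTER DATABASE MyAppDb SET READ_COMMITTED_SNAPSHOT ON;

-- Then, per session, opt into snapshot isolation:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
```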

So, you do realize that you are going to face non-repeatable read, phantom read, read skew and write skew phenomena, right?

Not to mention that you cannot even do a transfer between two accounts without breaking consistency.

Any database visualizer works in autocommit mode, just like JDBC does by default. However, this is not the way an enterprise application is supposed to work.

Again, your application design is seriously flawed if you’re using the autocommit mode.


#7

The first thing we did was to switch to the READ_COMMITTED_SNAPSHOT isolation level. It helped to some extent, but locking was still an issue. Just to be clear: we are using auto-commit only on the reading part. After that, when we want to persist and save the data, we use the usual Spring transaction management and wrap everything in transactions.

Please explain to me why we need an active transaction while reading data, if we are not interested in raising the isolation level to REPEATABLE_READ or SERIALIZABLE. We are also not selecting for update, so we don’t need to lock what we’ve read. We have our own case-specific distributed locking to ensure that no two parallel processes touch the same data.

You are very quickly jumping to conclusions and judging a design that you haven’t even seen. Believe me, auto-commit is a last resort for us; the design was changed drastically and we still weren’t able to fully resolve the locking issues.


#8

Sure thing. If your reads and writes happen in separate transactions, then unless you are using optimistic concurrency control, you cannot guarantee that what you are modifying is still relevant in the DB. For instance, you can lose updates, which cannot happen under either 2PL or MVCC.
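A minimal, deterministic illustration of such a lost update in plain Java, with a static variable standing in for a database row and each read/write representing its own auto-committed transaction:

```java
public class LostUpdateDemo {
    static int balance = 100; // stand-in for a row in the database

    public static void main(String[] args) {
        // Transactions A and B each read in their own (auto-committed) transaction:
        int readA = balance;          // A sees 100
        int readB = balance;          // B sees 100

        // Each later writes in another separate transaction, based on its stale read:
        balance = readA + 50;         // A deposits 50 -> 150
        balance = readB - 30;         // B withdraws 30 -> 70, silently discarding A's deposit

        // With proper concurrency control the result would be 120.
        System.out.println(balance);  // 70
    }
}
```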

You are very quickly jumping to conclusions and judging a design that you haven’t even seen. Believe me, auto-commit is a last resort for us; the design was changed drastically and we still weren’t able to fully resolve the locking issues.

Actually, I’m not. Breaking reads and writes into separate transactions without additional concurrency control opens the door to ACIDRain attacks, which have led to many real-world hacks. Just read the paper for more details.


#9

That is why we’ve implemented distributed locking that makes sure that if one process starts to process data related to our aggregate, other processes will wait until the lock is released. That way we are sure the data we’ve read is still relevant :slight_smile: I realize I didn’t mention that before, and without that mechanism you would be absolutely correct: we would lose updates.
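A single-JVM sketch of that per-aggregate locking idea (a real distributed lock would live in the database, ZooKeeper, Redis, etc.; all names here are hypothetical):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

public class AggregateLocks {
    // One lock per aggregate id, created lazily.
    private static final Map<String, ReentrantLock> LOCKS = new ConcurrentHashMap<>();

    static void withAggregateLock(String aggregateId, Runnable work) {
        ReentrantLock lock = LOCKS.computeIfAbsent(aggregateId, id -> new ReentrantLock());
        lock.lock();                 // other workers on the same aggregate wait here
        try {
            work.run();              // read-process-write is now serialized per aggregate
        } finally {
            lock.unlock();
        }
    }

    static int balance = 100;        // stand-in for the aggregate's data

    public static void main(String[] args) throws InterruptedException {
        // Two workers on the same aggregate: the lock serializes them, so each
        // read-modify-write sees the other's result and no update is lost.
        Thread a = new Thread(() -> withAggregateLock("acct-1", () -> { balance += 50; }));
        Thread b = new Thread(() -> withAggregateLock("acct-1", () -> { balance -= 30; }));
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(balance); // 120
    }
}
```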

The aim of our design is also to shorten transaction time as much as possible. That is why we write the processed data in a short, separate transaction instead of reading, processing, and saving everything in one conversation.


#10

That’s what I meant by additional concurrency control mechanisms.

In this case, there’s an even simpler solution for your use case. JPA allows reads outside of a transactional context, so you can execute those like that, and then pass all the data you need to modify to the @Transactional service.