Hi,
I’m writing a multi-threaded server, so I am using one Session per thread.
The first-level cache causes problems in this setup, since a Session does not see
records that were changed outside its scope.
Is session.evict/clear the only solution to this problem, or is there anything else that can be done
to avoid the lack of coordination between the different Sessions in the app?
P.S.: this problem also occurs when the DB is changed outside the app. The server needs to support outside DB changes too.
The Hibernate Session caching the records you read is not a bug. It’s a feature: it allows Hibernate to provide REPEATABLE READ semantics even on lower database isolation levels. Because of this, you can prevent lost update anomalies caused by other transactions changing the records you’ve been reading.
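As a rough illustration of why a per-session cache gives repeatable reads, here is a minimal plain-Java sketch. The `MiniSession`, `Account`, and `find` names are simplified stand-ins for illustration, not Hibernate’s real API:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative stand-ins only; not Hibernate's actual classes.
class Account {
    long id;
    long balance;
    Account(long id, long balance) { this.id = id; this.balance = balance; }
}

class MiniSession {
    // The "first-level cache": one managed instance per id, per session.
    private final Map<Long, Account> cache = new HashMap<>();
    private final Map<Long, Account> database; // pretend DB table

    MiniSession(Map<Long, Account> database) { this.database = database; }

    Account find(long id) {
        // A repeated read returns the cached instance, ignoring
        // changes committed to the "database" in the meantime.
        return cache.computeIfAbsent(id, k -> {
            Account row = database.get(k);
            return new Account(row.id, row.balance);
        });
    }

    void clear() { cache.clear(); }           // drop all managed instances
    void evict(long id) { cache.remove(id); } // drop a single instance
}

public class RepeatableReadDemo {
    public static void main(String[] args) {
        Map<Long, Account> db = new HashMap<>();
        db.put(1L, new Account(1, 100));

        MiniSession session = new MiniSession(db);
        Account first = session.find(1L);

        // Another transaction updates the row outside this session.
        db.put(1L, new Account(1, 999));

        Account second = session.find(1L);
        System.out.println(first == second);          // true: same instance
        System.out.println(second.balance);           // 100: repeatable read
        session.clear();                              // or session.evict(1L)
        System.out.println(session.find(1L).balance); // 999: reloaded
    }
}
```

This is exactly the behavior described in the question: the stale value persists until the session is cleared or the entity is evicted.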
Thanks. That was very helpful for understanding this issue.
Since I am using a separate Session for each thread, and Hibernate controls the concurrency between sessions, all
I have to do is open and close transactions in the different threads for different units of work.
Therefore, how can locking help me? The problem occurs when two different sessions do not share the same data, since they hold separate caches…
So, it seems that the only way of dealing with this is by discarding the cached data via evict or clear, no?
Regarding these 2 commands -
Is the ‘clear’ command considered expensive?
It is much easier for me to run ‘clear’ at the end of each transaction than to run evict for each object that was fetched.
Is there a runtime difference between using clear to remove X objects that were just cached and running evict on each of them separately (which means running evict X times)?
Hibernate does not control any concurrency between Sessions. It’s the database that does the concurrency control.
Therefore, how can locking help me? The problem occurs when two different sessions do not share the same data, since they hold separate caches…
That’s not really a problem. Even if you read the latest state of a row with plain JDBC, some other transaction can still come along and modify the record afterwards. You can use pessimistic locking if you want to prevent updates to a certain row.
So, it seems that the only way of dealing with this is by discarding the cached data via evict or clear, no?
Nope. Pessimistic or optimistic locking can help you with that.
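To sketch the optimistic-locking idea the answer refers to: each update only succeeds if the version read earlier is still current. Hibernate does this with a `@Version` column; the plain-Java class below is an illustrative assumption, not Hibernate’s implementation, and the `synchronized` blocks stand in for the database’s own atomicity:

```java
// Plain-Java sketch of optimistic locking. The update mirrors:
//   UPDATE t SET value = ?, version = version + 1 WHERE id = ? AND version = ?
class VersionedRow {
    long value;
    long version;
    VersionedRow(long value, long version) { this.value = value; this.version = version; }
}

class OptimisticStore {
    private VersionedRow row = new VersionedRow(100, 0);

    // Reading hands back a detached copy, like an entity loaded in a Session.
    synchronized VersionedRow read() {
        return new VersionedRow(row.value, row.version);
    }

    // Returns false when another transaction committed first
    // (Hibernate would throw an OptimisticLockException instead).
    synchronized boolean update(long expectedVersion, long newValue) {
        if (row.version != expectedVersion) {
            return false;
        }
        row = new VersionedRow(newValue, expectedVersion + 1);
        return true;
    }
}

public class OptimisticLockDemo {
    public static void main(String[] args) {
        OptimisticStore store = new OptimisticStore();

        VersionedRow txA = store.read(); // session A reads version 0
        VersionedRow txB = store.read(); // session B reads version 0

        System.out.println(store.update(txA.version, 150)); // true: A commits first
        System.out.println(store.update(txB.version, 200)); // false: B must reload and retry
    }
}
```

The losing session does not silently overwrite the winner’s change; it is forced to re-read the fresh state, which is exactly how stale first-level-cache data stops being a correctness problem.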
It is much easier for me to run ‘clear’ at the end of each transaction than to run evict for each object that was fetched.
You don’t need to clear the Session after commit. You only need to do that if you don’t want the entities to be managed anymore.
Is there a runtime difference between using clear to remove X objects that were just cached and running evict on each of them separately (which means running evict X times)?
It’s the same as with java.util.Map: calling clear once is faster than calling remove N times.
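The java.util.Map analogy can be checked directly: a single clear() and N individual remove() calls end in the same empty state, with clear() doing one bulk pass over the table instead of N separate hash lookups:

```java
import java.util.HashMap;
import java.util.Map;

// Compares clearing a map once versus removing each entry individually,
// mirroring Session.clear() versus N calls to Session.evict().
public class ClearVsEvictDemo {
    public static void main(String[] args) {
        Map<Long, String> cacheA = new HashMap<>();
        Map<Long, String> cacheB = new HashMap<>();
        for (long id = 1; id <= 1000; id++) {
            cacheA.put(id, "entity#" + id);
            cacheB.put(id, "entity#" + id);
        }

        cacheA.clear();                  // one bulk operation
        for (long id = 1; id <= 1000; id++) {
            cacheB.remove(id);           // N separate hash lookups
        }

        // Same end state either way; clear() just gets there cheaper.
        System.out.println(cacheA.isEmpty() && cacheB.isEmpty()); // true
    }
}
```

So if the goal is to drop everything the Session has cached, a single clear() at the end of the unit of work is both simpler and cheaper than evicting each entity one by one.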