Object normalObj = repository.findById(1);
……
// uses a PESSIMISTIC_WRITE lock
Object lockedObj = repository.findUpdateById(1);
// normalObj == lockedObj is true
It seems the pessimistic lock query was executed, so why is the first-level cache still used?
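For context, a custom method like findUpdateById would presumably be declared on a Spring Data JPA repository with a pessimistic lock hint. This is a hypothetical sketch (the Account entity, the query, and the repository name are assumptions; only findUpdateById comes from the question):

```java
import jakarta.persistence.LockModeType;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Lock;
import org.springframework.data.jpa.repository.Query;

public interface AccountRepository extends JpaRepository<Account, Long> {

    // Same lookup as findById, but executed with PESSIMISTIC_WRITE,
    // which translates to "select ... for update" on most databases.
    @Lock(LockModeType.PESSIMISTIC_WRITE)
    @Query("select a from Account a where a.id = :id")
    Account findUpdateById(Long id);
}
```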
@cuihaohao The entity itself is read from the first-level cache to avoid repeated data access; then, if PESSIMISTIC_WRITE is used and the entity is versioned, the lock will be upgraded when the second find is executed. When upgrading to a PESSIMISTIC_WRITE lock, a query is executed against the database that checks whether a row exists with the given identifier and the current version; if not, a StaleObjectStateException is thrown, ensuring the entity can be locked as requested.
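The behavior described above can be sketched with plain JPA. This is a hypothetical fragment: the Account entity, the emf factory, and the id value are assumptions, and running it requires a configured persistence unit and database:

```java
import jakarta.persistence.EntityManager;
import jakarta.persistence.LockModeType;

EntityManager em = emf.createEntityManager();
em.getTransaction().begin();

// First find: loads the entity and places it in the first-level cache.
Account normal = em.find(Account.class, 1L);

// Second find with a pessimistic lock: the managed instance is returned
// from the first-level cache (same object reference), but the provider
// still issues a JDBC query to upgrade the lock. If the entity is
// versioned and the row no longer matches the cached version, the lock
// upgrade fails with a StaleObjectStateException (surfaced by Hibernate
// as a lock/optimistic exception).
Account locked = em.find(Account.class, 1L, LockModeType.PESSIMISTIC_WRITE);

assert normal == locked; // same managed instance; state is NOT re-read

em.getTransaction().commit();
em.close();
```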
So I don’t quite understand why the second, lock-upgrading find still uses the cache populated by the first find?
It doesn’t; the lock upgrade is a straight JDBC query to the database.
I encountered a problem similar to this one. Are there any other solutions now, besides operations such as clear and refresh?
Not to my knowledge. Locking semantics have remained the same in the JPA specification over the years, so find still does not force a refresh if the entity is found in the first-level cache.
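For completeness, the clear/refresh approaches mentioned above can be sketched as follows. This assumes an EntityManager em and a managed entity (both hypothetical); refresh(Object, LockModeType) and detach are standard JPA EntityManager methods:

```java
import jakarta.persistence.LockModeType;

// Option 1: re-read the row AND acquire the lock in one call,
// overwriting the cached state of the managed entity.
em.refresh(entity, LockModeType.PESSIMISTIC_WRITE);

// Option 2: evict the instance first so the next find must hit
// the database instead of the first-level cache.
em.detach(entity);
Account fresh = em.find(Account.class, 1L, LockModeType.PESSIMISTIC_WRITE);
```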