We are currently using Hibernate 6.6.5 and are facing a similar issue: a delete operation throws an OptimisticLockException with the following stack trace:
jakarta.persistence.OptimisticLockException: Batch update returned unexpected row count from update [0]; actual row count: 0; expected: 1; statement executed: delete from SLOTTOVIRTUALCONNREF where VIRTUALCONNECTOR_ID=? and SLOT_ID=?
at org.hibernate.internal.ExceptionConverterImpl.wrapStaleStateException(ExceptionConverterImpl.java:221)
at org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:95)
at org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:167)
at org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:173)
at org.hibernate.internal.SessionImpl.doFlush(SessionImpl.java:1433)
at org.hibernate.internal.SessionImpl.flush(SessionImpl.java:1415)
at chs.capitalmanager.pof.PersistenceSession.flushHibernateSession(PersistenceSession.java:374)
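For context, the statement targets a join-table collection between VirtualConnector and Slot. Shown here with annotations purely for brevity (our real mapping is in hbm.xml) and with class and field names simplified, the association behind that delete looks roughly like this:

import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import jakarta.persistence.JoinColumn;
import jakarta.persistence.JoinTable;
import jakarta.persistence.ManyToMany;
import java.util.HashSet;
import java.util.Set;

@Entity
public class VirtualConnector {
    @Id @GeneratedValue
    Long id;

    // Join table and column names taken from the SQL above; removing a Slot
    // from this collection issues the delete shown in the stack trace.
    @ManyToMany
    @JoinTable(name = "SLOTTOVIRTUALCONNREF",
            joinColumns = @JoinColumn(name = "VIRTUALCONNECTOR_ID"),
            inverseJoinColumns = @JoinColumn(name = "SLOT_ID"))
    Set<Slot> slots = new HashSet<>();
}

@Entity
class Slot {
    @Id @GeneratedValue
    Long id;
}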
There are a few existing bug reports for this same issue, for example HHH-19322 and HHH-13208.
We are unsure how the Hibernate release schedule works. In our organization we still use hbm.xml, which is deprecated but not removed in 6.6.5. If there is a fix for this issue, will it be backported? Since we are still in our upgrade phase and cannot move directly to 7.0 (where, as we understand it, hbm.xml is removed), any suggestions would be greatly appreciated.
Hibernate ORM 7.0 does not remove hbm.xml support yet, but ORM 8.0 will.
I don’t think your issue is related to the Jira tickets you posted. Please try to create a reproducer with our test case template and, if you are able to reproduce the issue, create a bug ticket in our issue tracker and attach that reproducer.
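Roughly speaking, the reproducer only needs to bootstrap a SessionFactory from your hbm.xml mappings and drive the failing flush. A bare-bones sketch of the shape (this is not the template itself; the settings and resource names are placeholders you would replace with your own):

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;
import org.hibernate.boot.MetadataSources;
import org.hibernate.boot.registry.StandardServiceRegistry;
import org.hibernate.boot.registry.StandardServiceRegistryBuilder;

public class DeleteRowCountReproducer {
    public static void main(String[] args) {
        StandardServiceRegistry registry = new StandardServiceRegistryBuilder()
                .applySetting("hibernate.connection.url", "jdbc:h2:mem:reproducer") // placeholder
                .applySetting("hibernate.hbm2ddl.auto", "create-drop")
                .build();
        try {
            SessionFactory sessionFactory = new MetadataSources(registry)
                    .addResource("VirtualConnector.hbm.xml") // your hbm.xml mappings
                    .addResource("Slot.hbm.xml")
                    .buildMetadata()
                    .buildSessionFactory();
            try (Session session = sessionFactory.openSession()) {
                Transaction tx = session.beginTransaction();
                // persist the entities involved, then perform the delete/removal
                // that fails for you -- the flush should reproduce the
                // OptimisticLockException if the problem is on the Hibernate side
                session.flush();
                tx.commit();
            }
            sessionFactory.close();
        } finally {
            StandardServiceRegistryBuilder.destroy(registry);
        }
    }
}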
We found this issue in our project because a unit test fails with the above exception, while the same test passes on Hibernate 5.6.x. Even so, it’s unclear what is happening in the backend and why Hibernate 6.6.x throws this exception, or whether we have messed something up on our end. Can you please help us understand this and advise on how to proceed?
We encountered a similar error as well when upgrading to Hibernate 6.6.x, involving entity persistence after a transaction rollback, though I am not sure it is the same as yours.
What happens is that an entity is initially persisted and flushed, which generates its ID and version number, but then the transaction fails and rolls back. However, our system has a separate listener component that receives this entity as an argument and, unaware that the original transaction was rolled back, attempts to persist it again; as a result, the record never ends up in the DB at all.
This problem relates to a behavior change in Hibernate 6.6.x documented in the migration guide under ‘Merge versioned entity when row is deleted.’ The way Hibernate 6.6.x handles entities that already carry version information from a rolled-back transaction has changed, causing our persistence operations to fail in this specific scenario.
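To make the sequence concrete, here is a minimal sketch of the pattern (entity and class names are purely illustrative, not our real model; our listener effectively re-merges the detached instance):

import jakarta.persistence.Entity;
import jakarta.persistence.EntityManager;
import jakarta.persistence.EntityManagerFactory;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import jakarta.persistence.Version;

@Entity
public class Ticket {
    @Id @GeneratedValue
    Long id;

    @Version
    Long version;

    String code;
}

class RollbackThenRepersist {
    void run(EntityManagerFactory emf) {
        Ticket ticket = new Ticket();
        ticket.code = "A-1";

        EntityManager em1 = emf.createEntityManager();
        em1.getTransaction().begin();
        em1.persist(ticket);
        em1.flush();                      // after this, id and version are set on 'ticket'
        em1.getTransaction().rollback();  // but the row never reaches the DB
        em1.close();

        // The listener, unaware of the rollback, tries again with the same
        // detached instance, which still carries the assigned id and version.
        EntityManager em2 = emf.createEntityManager();
        em2.getTransaction().begin();
        em2.merge(ticket);                // on 6.6.x this ends in the
        em2.getTransaction().commit();    // OptimisticLockException described above
        em2.close();
    }
}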
We’ve identified a breaking change in batch processing between Hibernate 5.5 and 6.x, specifically regarding how expectations are verified during batch operations in org.hibernate.engine.jdbc.batch.internal.BatchImpl#checkRowCounts.
Key Difference:
Hibernate 5.x:
this.getKey().getExpectation().verifyOutcome(rowCounts[i], ps, i, statementSQL);
Hibernate 6.x:
statementDetails.getExpectation().verifyOutcome(rowCounts[i], statementDetails.getStatement(), i, statementDetails.getSqlString());
The fundamental change is that in 5.x, the expectation was retrieved from our custom BatchKey implementation, while in 6.x, it’s now obtained from statementDetails.
Our Setup:
import java.util.function.Supplier;

import org.hibernate.engine.jdbc.batch.internal.BatchBuilderImpl;
import org.hibernate.engine.jdbc.batch.spi.Batch;
import org.hibernate.engine.jdbc.batch.spi.BatchKey;
import org.hibernate.engine.jdbc.mutation.group.PreparedStatementGroup;
import org.hibernate.engine.jdbc.spi.JdbcCoordinator;

public class CHSDBBatchBuilder extends BatchBuilderImpl {
    @Override
    public Batch buildBatch(BatchKey batchKey, Integer explicitBatchSize,
            Supplier<PreparedStatementGroup> statementGroupSupplier, JdbcCoordinator jdbcCoordinator) {
        // Wrap the incoming key so our custom BatchKey (and its Expectation) is used.
        BatchKey keyHolder = batchKey != null ? new CHSDBBatchKey(batchKey) : batchKey;
        return super.buildBatch(keyHolder, explicitBatchSize, statementGroupSupplier, jdbcCoordinator);
    }
}
Our custom BatchKey implementation, which worked fine in 5.x, is effectively being ignored in 6.x due to this architectural change.
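For reference, CHSDBBatchKey itself is a thin delegating wrapper around the original key. A simplified sketch of its 5.x-era shape, with the real expectation logic reduced to a placeholder:

import org.hibernate.engine.jdbc.batch.spi.BatchKey;
import org.hibernate.jdbc.Expectation;
import org.hibernate.jdbc.Expectations;

// Simplified sketch against the Hibernate 5.x SPI; Expectations.NONE stands in
// for our real custom Expectation here.
public class CHSDBBatchKey implements BatchKey {
    private final BatchKey delegate;

    public CHSDBBatchKey(BatchKey delegate) {
        this.delegate = delegate;
    }

    @Override
    public int getBatchedStatementCount() {
        return delegate.getBatchedStatementCount();
    }

    @Override
    public Expectation getExpectation() {
        // In 5.x, checkRowCounts reached this via getKey().getExpectation();
        // in 6.x the expectation is taken from statementDetails instead, so
        // this override is never consulted.
        return Expectations.NONE; // placeholder for our custom Expectation
    }
}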
Question: What’s the recommended approach for migrating custom BatchKey implementations to Hibernate 6.x? Is there an equivalent way to customize batch expectations in the new architecture?