@SequenceGenerator: allocationSize versus INCREMENT BY

Hello,

Way back in Hibernate 5.1.0, I had a setup where my PostgreSQL database sequence was set to INCREMENT BY 20, and my entity’s @SequenceGenerator was set with allocationSize = 1.
(Example class at end of this post)

This meant that Hibernate would call select nextval('my_entity_seq') every time it was about to insert a new row into the database.
This worked perfectly, as I need gaps of 20 in my local database rows.

However, in later versions such as Hibernate 5.1.17 and 5.4.28, this is no longer allowed: the Hibernate schema validator requires that the database sequence’s INCREMENT BY match the @SequenceGenerator’s allocationSize.

And so, changing my @SequenceGenerator to allocationSize=20 causes Hibernate to grab a block of 20 sequential values in memory instead of calling nextval() for each row. Those sequential values are what cause conflicts during data merging.
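
To make the difference concrete, here is a simplified model of the two behaviors; it is a sketch, not Hibernate’s actual implementation (the real pooled optimizer has more variants and thread-safety logic):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified model (an assumption, not Hibernate's real code) contrasting a
// pooled-style optimizer with the per-insert (no-op) behavior described above.
public class OptimizerSketch {
    static long seqValue;      // current value of the simulated DB sequence
    static long incrementBy;   // the sequence's INCREMENT BY
    static int nextvalCalls;   // how many times we hit the "database"

    static void initSequence(long start, long inc) {
        seqValue = start - inc; // so the first nextval() returns `start`
        incrementBy = inc;
        nextvalCalls = 0;
    }

    static long nextval() {
        nextvalCalls++;
        seqValue += incrementBy;
        return seqValue;
    }

    // No-op style: one nextval() per id, so the sequence's gaps are preserved.
    static List<Long> noopIds(int count) {
        List<Long> ids = new ArrayList<>();
        for (int i = 0; i < count; i++) ids.add(nextval());
        return ids;
    }

    // Pooled style: one nextval() per block of `incrementBy` ids, then
    // contiguous values are handed out from memory — hence the conflicts.
    static List<Long> pooledIds(int count) {
        List<Long> ids = new ArrayList<>();
        long next = 0;
        int remaining = 0;
        for (int i = 0; i < count; i++) {
            if (remaining == 0) {
                next = nextval();
                remaining = (int) incrementBy;
            }
            ids.add(next);
            next++;
            remaining--;
        }
        return ids;
    }

    public static void main(String[] args) {
        initSequence(100003, 20);
        System.out.println("noop:   " + noopIds(3));   // gaps of 20, 3 DB calls
        initSequence(100003, 20);
        System.out.println("pooled: " + pooledIds(3)); // contiguous, 1 DB call
    }
}
```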

Is there a way to have Hibernate call nextval() every time, with my sequence INCREMENT BY set to 20?

Thanks very much,

Jamie

Okay, if you must know(!), we have an application that collects copious amounts of data, and we have split the collection across multiple shard machines. We run a nightly process that merges the data from all collector shards, and generates reports.
To prevent collisions while merging the data, we have organized our sequences so that cluster 1 has indexes 1, 21, 41, etc… and cluster 2 has indexes 2, 22, 42, etc… and so on.

I suppose the alternative is to have each shard increment by 1, with a starting value of, for example, 1E9 * the cluster index. I would rather not have to resort to that, though, as it will be a pain!
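
For illustration, the two id schemes described above can be sketched as follows (this is my own sketch of the arithmetic, not production code; the shard count of 20 and the 1E9 offset are the values mentioned in the thread):

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of the two collision-avoidance schemes for 20 collector shards.
public class ShardIdSchemes {
    // Scheme 1 (current): each shard's sequence starts at its cluster index
    // and increments by the shard count, interleaving the id space.
    static long interleavedId(int cluster, long n) {
        return cluster + n * 20;
    }

    // Scheme 2 (the alternative): increment by 1, offset each shard by 1E9.
    static long offsetId(int cluster, long n) {
        return 1_000_000_000L * cluster + n;
    }

    public static void main(String[] args) {
        // Sanity-check that the interleaved scheme never collides across shards.
        Set<Long> seen = new HashSet<>();
        for (int cluster = 1; cluster <= 20; cluster++) {
            for (long n = 0; n < 1000; n++) {
                if (!seen.add(interleavedId(cluster, n))) {
                    throw new AssertionError("collision!");
                }
            }
        }
        System.out.println("interleaved scheme: no collisions across 20 clusters");
    }
}
```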

Example:

Database
----------------------------
testdb=# CREATE SEQUENCE userentity_seq START 100003 INCREMENT 20;
CREATE SEQUENCE
testdb=# CREATE TABLE userentity (
    user_id BIGINT PRIMARY KEY default nextval('userentity_seq'), 
    username text);
CREATE TABLE

Entity Class
------------------------------
package com.test;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.SequenceGenerator;
import javax.persistence.Table;

@Entity
@SequenceGenerator(sequenceName="userentity_seq", name="userentity_seq", allocationSize=1)
public class UserEntity
{
	@Id
	@GeneratedValue(strategy=GenerationType.SEQUENCE, generator="userentity_seq")
	private Long user_id;
	public Long getUserId() { return user_id; }
	
	String username;
}

I think you could just configure the none optimizer, or you could write a custom optimizer for this purpose, similar to org.hibernate.id.enhanced.PooledOptimizer, and set the optimizer persistence property to its fully qualified class name.
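
As a sketch of that suggestion, the hibernate.id.optimizer.pooled.preferred property (the one used later in this thread) could be supplied in the properties map passed to the persistence bootstrap; the custom class name below is hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// Hedged sketch of the suggested configuration, expressed as the kind of
// properties map you might pass to Persistence.createEntityManagerFactory.
public class OptimizerSettings {
    static Map<String, String> settings() {
        Map<String, String> props = new HashMap<>();
        // Option 1: ask for the built-in "none" optimizer...
        props.put("hibernate.id.optimizer.pooled.preferred", "none");
        // Option 2: ...or point at a custom Optimizer implementation, e.g.:
        // props.put("hibernate.id.optimizer.pooled.preferred",
        //           "com.test.MyDbSequenceOptimizer"); // hypothetical class
        return props;
    }

    public static void main(String[] args) {
        System.out.println(settings());
    }
}
```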

Thanks,
I’ll give that a shot.

Hello,

I see that if you set the allocationSize to 1, then ‘none’ will be chosen, which uses org.hibernate.id.enhanced.NoopOptimizer as the implementation.
So, that’s why it was working for me before the validation change, as that hits the database for every id.

With 5.4.28, setting the hibernate property hibernate.id.optimizer.pooled.preferred=none unfortunately seems to have the side effect of overriding the specified allocationSize with “1” before validation, thus causing validation to once again fail.

i.e., with

@SequenceGenerator(sequenceName="userentity_seq", name="userentity_seq", allocationSize=20)

and

hibernate.id.optimizer.pooled.preferred=none

we get:

Caused by: org.hibernate.tool.schema.spi.SchemaManagementException: Schema-validation: sequence [userentity_seq] defined inconsistent increment-size; found [20] but expecting [1]

Setting the allocationSize to 0, 1, or 20 all result in the same validation error.
I can’t seem to find the bit that is overriding the allocationSize, so writing another PooledOptimizer class probably won’t help.

Thanks for any ideas,
Jamie.

I managed to take the existing NoopOptimizer and write a new class whose applyIncrementSizeToSourceValues method always returns false, and that works!

Hibernate: select nextval ('userentity_seq')
Added user user000000 with ID: 101543
Hibernate: insert into userentity (username, user_id) values (?, ?)
Hibernate: select nextval ('userentity_seq')
Added user user000001 with ID: 101563
Hibernate: insert into userentity (username, user_id) values (?, ?)
Hibernate: select nextval ('userentity_seq')
Added user user000002 with ID: 101583
Hibernate: insert into userentity (username, user_id) values (?, ?)

Will I have to do this in Hibernate 6 as well? Personally, I think this validation requiring the allocationSize to be the same as the INCREMENT BY is overzealous, especially when using the NoopOptimizer.

I would have extended the NoopOptimizer class, but it is marked final.

Here is the modified class, with applyIncrementSizeToSourceValues set to always return false:

package org.hibernate.id.enhanced;

import java.io.Serializable;
import org.hibernate.id.IntegralDataTypeHolder;

public class DbSequenceOptimizer extends AbstractOptimizer {
	private IntegralDataTypeHolder lastSourceValue;

	DbSequenceOptimizer(Class returnClass, int incrementSize) {
		super(returnClass, incrementSize);
	}

	@Override
	public Serializable generate(AccessCallback callback) {
		// IMPL NOTE : this method is called concurrently and is
		// not synchronized. It is very important to work on the
		// local variable: the field lastSourceValue is not
		// reliable as it might be mutated by multiple threads.
		// The lastSourceValue field is only accessed by tests,
		// so this is not a concern.
		IntegralDataTypeHolder value = callback.getNextValue();
		lastSourceValue = value;
		return value.makeValue();
	}

	@Override
	public IntegralDataTypeHolder getLastSourceValue() {
		return lastSourceValue;
	}

	@Override
	public boolean applyIncrementSizeToSourceValues() {
		// We allow the increment size to be 0 for backward-compatibility with legacy
		// ID generators; we don't apply a value of 0, so the default will be used instead.
		// We don't apply an increment size of 1, since it is already the default.
		// return getIncrementSize() != 0 && getIncrementSize() != 1;
		return false;
	}
}

Perhaps the validation could be modified to require the database increment-by to be equal to or greater than the allocationSize.

Sorry for all the messages. I have found out that it was not applyIncrementSizeToSourceValues() returning false that was the answer; it was the change from using ‘none’ to using an explicit class name.
So, using the existing NoopOptimizer works, provided you set the hibernate property to this:

hibernate.id.optimizer.pooled.preferred=org.hibernate.id.enhanced.NoopOptimizer

instead of ‘none’.

I don’t know if the JPA TCK has anything to say about the allocationSize, but the spec document says:

“The amount to increment by when allocating numbers from the generator/sequence.”

What should hbm2ddl update do in such a case where increment size > allocation size?

This is a pretty special use case, for which it is IMO okay to require a user to write or configure a custom optimizer. I don’t think it is very common to intentionally introduce gaps in sequence values. I hope you understand.