How to sync manual PK entries given through a shell script with those generated by a Hibernate sequence


#1

I have encountered a quite complex but genuine issue with my application design. Some predefined data is to be given through a shell script, shown below:

INSERT INTO 
    config.profile_backend_point_asso(assoc_id,enabled,end_point_id,profile_id) 
VALUES
(1,true,1,1),
(2,true,2,1);

Note: “assoc_id” is defined as the primary key.

Now users can also add custom entries through the UI, for which the “assoc_id” values are generated using a Hibernate sequence.

Let’s say Hibernate allotted 3 as the “assoc_id”; the corresponding table entry will be (3,true,3,1);

At some point, the requirement for predefined entries grows, and we add the entry below through the script, not knowing what Hibernate has already generated through past UI operations:
(3,true,3,1);

This “assoc_id” has already been allotted, so the insert will raise a constraint violation exception.

So is there any way to synchronize these two approaches to primary key allotment, so that a unique id is allotted every time, whether it comes from the shell script or from the UI via the Hibernate sequence?


#2

Let the database sequence allocate positive values while the script uses negative values. That’s a typical way of solving this problem.
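A minimal sketch of this split, assuming PostgreSQL (the `true` boolean literal in the original script suggests it) and an assumed sequence name `config.profile_backend_point_asso_seq` — adjust both to your schema:

```sql
-- Sketch: the Hibernate-backed sequence keeps allocating positive ids
-- (the sequence name here is an assumption, not taken from the thread).
CREATE SEQUENCE config.profile_backend_point_asso_seq
    START WITH 1 INCREMENT BY 1;

-- The shell script claims the negative range, so its ids can never
-- collide with anything the sequence will ever produce:
INSERT INTO
    config.profile_backend_point_asso(assoc_id,enabled,end_point_id,profile_id)
VALUES
(-1,true,1,1),
(-2,true,2,1);
```

The two id spaces are then disjoint by construction, so no coordination between the script and Hibernate is needed at runtime.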


#3

Thanks, Vlad, for the solution!
Actually, we thought of the same approach, but I am unsure of the impact on the DB, because we would need to run a huge number of UPDATE queries to adopt it, and we are at a stage where we cannot create a new script or DROP the schema to start with fresh entries.
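For what it's worth, the migration that the negative-range approach requires need not be many individual UPDATE queries; if the script-inserted rows are identifiable, it can be one statement. A sketch, assuming PostgreSQL and that the predefined rows are exactly those with assoc_id <= 2 as in the example above (that WHERE clause is an assumption, and any foreign keys referencing assoc_id would need the same treatment):

```sql
-- Sketch: move the script-inserted rows into the negative range in one
-- statement; the condition identifying them is an assumption.
UPDATE config.profile_backend_point_asso
SET assoc_id = -assoc_id
WHERE assoc_id <= 2;
```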

Also, this predefined data given through the shell script is a core part of our application: it is the default data on which new entries are built.

Let’s say the DB goes down during execution of the script. If that happens, we will be at a point where we cannot do anything.

So we need to come up with a solid solution that is as bug-proof as possible.

In any case, I appreciate your help with this issue.


#4

In that case, just replace the script with an EIP pipeline (e.g. Apache Camel, Spring Integration) that performs the batch-processing task and can use the same DB sequence.
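Whichever tool performs the batch load, the collision disappears as long as every insert draws its id from the same sequence Hibernate uses. A sketch of the insert itself, assuming PostgreSQL and a hypothetical sequence name `config.profile_backend_point_asso_seq`:

```sql
-- Sketch: the predefined inserts pull their ids from the same sequence
-- Hibernate uses (the sequence name is an assumption).
INSERT INTO
    config.profile_backend_point_asso(assoc_id,enabled,end_point_id,profile_id)
VALUES
(nextval('config.profile_backend_point_asso_seq'),true,1,1),
(nextval('config.profile_backend_point_asso_seq'),true,2,1);
```

Since both the pipeline and Hibernate advance the same sequence, ids stay unique regardless of which side inserts first.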