That was a substantial update to your article.
You suggest replacing the removal and re-insertion of an object (a DELETE and an INSERT sequence) like this:
entityManager.remove(post);      // queues a DELETE of the existing post
...
entityManager.flush();           // forces the DELETE to execute now
entityManager.persist(newPost);  // queues an INSERT reusing the same slug
with an UPDATE to the existing instance, like this:
post.setTitle(...);  // a single UPDATE on the managed entity
This solves the constraint violation by way of a workaround that sidesteps Hibernate’s flush order. That is, Hibernate’s flush order did not avoid the constraint violation; it was your code refactoring that avoided it. This means that Hibernate’s flush order could not solve this case of constraint violation. Perhaps it can only minimize the chance of such violations, for example those, as suggested above, related to managing cascades, secondary tables, etc. (though I have yet to see evidence of this).
(Note how, in the refactored code, the flush order trivially degenerates into the code change order.)
This example reinforces my view that Hibernate’s flush order exists mostly (only?) because of performance optimizations regarding the way batch processing is done in JDBC. Please do not take this as a critique of anything, as Hibernate’s flush order is perhaps the only way to leverage JDBC’s batch processing feature.
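To make that concrete, here is a minimal JDBC batching sketch (the post table, the Post class with its getters, and an open java.sql.Connection connection plus a List<Post> newPosts in scope are all illustrative assumptions of mine, not your article’s code): a PreparedStatement can only batch repetitions of one SQL string, which is why Hibernate groups statements of the same kind together when flushing.

try (PreparedStatement ps = connection.prepareStatement(
        "INSERT INTO post (slug, title) VALUES (?, ?)")) {
    for (Post p : newPosts) {
        ps.setString(1, p.getSlug());  // illustrative getters
        ps.setString(2, p.getTitle());
        ps.addBatch();                 // queue one more INSERT
    }
    ps.executeBatch();                 // send all queued INSERTs in one batch
}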
Nope. It’s a code smell. …
In your example, I would agree that it is a code smell. In fact, I think the developer did not really want to delete a post and insert another, which is how she coded it initially. What she really wanted was to change the title of an existing post, but she did not know it …
Now, suppose a situation in which deleting and inserting a post really is the way to go. Suppose that posts have many fields, including the slug, and the developer really wants a new post with all fields different but the slug. Imagine that the new post is read from a file, like a JSON feed, so that a single line of Java code would bring it into existence in memory. In this case, removing the existing post, doing a manual flush, and persisting the new post would be much easier and more elegant than typing a long and cumbersome sequence of existingPost.setX(...) calls for every field X in the post but the slug field (see the sketch below). Or suppose that the existing post and the new one have different associations to other objects, in which case simply updating the existing post to become the new one is not possible… This example shows that your suggestion is either cumbersome or impossible in cases more complex than the one you gave.
So, my conclusion is that manually calling flush() to force code change ordering is never a code smell: it is really needed to avoid constraint violations in genuine delete-and-insert cases, as a workaround that restores code change order when Hibernate’s default flush order cannot avoid a constraint violation. What really is a code smell is doing a delete and insert when what one wants to do is an update.
If you are really interested in this topic, you could try to investigate whether your suggestion can be done while still preserving batch statement ordering …
I don’t think it can, because of batch statement ordering. And I do not know whether JDBC batching works the way it does because of a strict JDBC design decision or because of the way most databases support it.
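My doubt comes from the fact that a DELETE and an INSERT have different SQL strings, so they cannot share a single PreparedStatement batch; honoring code change order would seem to force separate, unbatched round trips. Again, a sketch with illustrative names (post table, slug and newTitle variables), not Hibernate’s actual internals:

try (PreparedStatement del = connection.prepareStatement(
         "DELETE FROM post WHERE slug = ?");
     PreparedStatement ins = connection.prepareStatement(
         "INSERT INTO post (slug, title) VALUES (?, ?)")) {
    del.setString(1, slug);
    del.executeUpdate();          // the DELETE must run on its own...
    ins.setString(1, slug);
    ins.setString(2, newTitle);
    ins.executeUpdate();          // ...before the INSERT, so neither is batched
}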
A few other comments on your blog article:
- there is a typo in “Doman Model”;
- add “, but a different title:” to the end of “… instead with the same slug attribute”.