Retrieve SQL columns for unidirectional associations and indexed assoc (i.e. list)

Hi!

I have the following unidirectional association:

	@OneToMany(cascade = CascadeType.ALL, orphanRemoval = true)
	@JoinColumn(name = "employee_id", nullable = true)
	@OrderColumn(name = "index_in_list")
	private List<EmployeeDiscussion> discussions = new ArrayList<>();

I want to export the EmployeeDiscussion table, but the EmployeeDiscussion entity doesn’t know about the employee_id and index_in_list columns. I need to obtain the column names programmatically, so that I can issue a corresponding SQL query.

I experimented with the following solution:

EntityType<Employee> et = em.getMetamodel().entity(Employee.class);
Session session = em.unwrap(Session.class);
AbstractEntityPersister ep = (AbstractEntityPersister) session.getSessionFactory().getClassMetadata(et.getJavaType());

for (AttributeDefinition def : ep.getAttributes()) {
	if (!(def instanceof EntityBasedAssociationAttribute)) {
		continue;
	}
	// there is no public accessor for the joinable, so read the private field reflectively
	Joinable joinable = ServerUtils.getPrivateFieldValue(def, "joinable");
	if (!(joinable instanceof OneToManyPersister)) {
		continue;
	}
	OneToManyPersister persister = (OneToManyPersister) joinable;

	// key columns = the FK (employee_id), index columns = the @OrderColumn (index_in_list)
	System.err.println(def + ", " + Arrays.toString(persister.getKeyColumnNames())
			+ ", " + Arrays.toString(persister.getIndexColumnNames()));
}

However, I feel that this is a bit of a hacky approach. I was forced to access a private field reflectively, because neither the field nor its getter is publicly accessible. Also, I feel that I’m a bit too dependent on the internal implementation of Hibernate, so it may not be very future proof.

Is there maybe a better, more “elegant” approach to achieve my goal? I should mention that changing the mappings in the app is pretty much a no-go: there are a lot of tables, and adding new fields would require additional modifications in the UI.
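
For comparison, the same column names can also be reached through Hibernate’s CollectionMetadata contract. This is only a sketch, assuming Hibernate 5.x (where SessionFactory#getCollectionMetadata still exists); the cast to AbstractCollectionPersister is still internal API, but at least no reflection is needed:

```java
// Look up the collection persister by its role name ("EntityClass.property").
SessionFactory sf = em.unwrap(Session.class).getSessionFactory();
CollectionMetadata meta = sf.getCollectionMetadata(Employee.class.getName() + ".discussions");

// AbstractCollectionPersister exposes the FK and @OrderColumn column names.
AbstractCollectionPersister persister = (AbstractCollectionPersister) meta;
String[] keyColumns = persister.getKeyColumnNames();     // the @JoinColumn, e.g. employee_id
String[] indexColumns = persister.getIndexColumnNames(); // the @OrderColumn, e.g. index_in_list
```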

Thanks in advance!

So why don’t you retrieve Employee objects instead and join fetch the discussions association? Then you can easily export the data since you have all you need.

Thanks for your answer.

We have an export utility that is rather table-based. This approach worked quite well in an app where we don’t have parent-child objects. We find that this pattern (of having big object graphs) is not convenient in the long term, including performance-wise.

However, we now have an app where this parent-child pattern is (over)used. We did implement an approach based on what you are suggesting in the past, but it has a lot of disadvantages, including performance. Hence we are trying to adapt our “per table” approach.

I should mention that this export utility is used for preparing test data, i.e. before running a test, a specific DB “snapshot” is prepared for it. Performance is important: if the setup phase of a single test takes tens of seconds, that’s bad. With various optimisations (which leverage this table approach), we managed to get down to an overhead of 2 or 3 seconds per test, which is acceptable.

Does this export utility not also export data attached to the employee table? If so, you are fetching that data anyway, so what’s the problem with doing a single query (select * from employee join employee_discussion ...) instead of two?

You are talking about performance, but unless you can show me that a single query is slower than running two queries, I have a hard time believing you. And even if it is slower, by how much?

If you prefer the concept of tables, why do you model your entities this way? Do you know that you can have multiple entities for the same table? Maybe you just need a different entity for your export?
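
To illustrate the multiple-entities idea (the column names follow the mapping above; the class name and extra fields are hypothetical): a second, export-only entity mapped to the same table can expose employee_id and index_in_list as plain columns, so the export utility can read them like any other field.

```java
// Hypothetical read-only mapping of the same employee_discussion table,
// used only by the export utility. It maps the FK and the @OrderColumn
// as basic columns instead of relying on the association.
@Entity
@org.hibernate.annotations.Immutable
@Table(name = "employee_discussion")
public class EmployeeDiscussionExportRow {

	@Id
	private Long id;

	@Column(name = "employee_id")
	private Long employeeId;

	@Column(name = "index_in_list")
	private Integer indexInList;

	// ... the remaining EmployeeDiscussion columns ...

	// getters omitted for brevity
}
```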

Databases usually have the concept of templates, or at least of copying a database. Maybe you should use that instead of doing a manual export and re-import; it is usually even faster. For example, in PostgreSQL you can create a template database from an existing database and then create new databases based on that template. That approach usually works in the realm of milliseconds.
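
A sketch of that PostgreSQL approach (the database names are illustrative):

```sql
-- one-time: clone the prepared database into a template
CREATE DATABASE testdata_template TEMPLATE mydb;

-- per test run: clone the template (a cheap file-level copy; note that
-- the source database must have no active connections while cloning)
CREATE DATABASE test_run_1 TEMPLATE testdata_template;
```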

Thanks Christian.

If you are preferring the concept of tables, why do you model your entities this way?

We tried this in the past, but we didn’t finish it. I’ve now updated the code, and the solution seems far simpler and more robust. We’ll go this way.

For example, in PostgreSQL you can create a template database from an existing database and then create new databases that are based on that template. That approach usually works in the realm of milliseconds.

This is interesting. I didn’t know. However, we’d be very DB specific.

However, we’d be very DB specific.

So what? How often do you switch databases in a company? Use the tools your DB provides to get the best out of it.

You are right. But since we also have a lot of code built on the “initial” approach, it’s more convenient.

And BTW, I’m a fan of “using Hibernate, the code runs on multiple DBs”. A while ago, thanks to that, a client of ours migrated our app from Oracle to Postgres. It was a success; they also had another app, super Oracle-dependent, and for that one the migration was not possible.

And I forgot: we use these tools for several programs. Most are for Postgres and one is for Oracle. So I’d need to find the equivalent in Oracle.

We already struggled to find a way to disable the FKs (also for this import phase).
And we are now working on having replication on both. But that is rather a DBA task, so nothing to do w/ Hibernate.

You can drop and recreate the FKs in PostgreSQL in 3 phases: select all foreign key infos from the information schema, then run a bunch of alter table t1 drop constraint ... statements, and finally, after the data import, run the alter table t1 add constraint ... foreign key ... statements.
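
A sketch of those three phases (table and constraint names are illustrative; this uses the PostgreSQL catalogs, which are a bit more direct than information_schema for reconstructing the definitions):

```sql
-- phase 1: save the FK definitions before dropping them
SELECT conrelid::regclass AS table_name,
       conname            AS constraint_name,
       pg_get_constraintdef(oid) AS definition
FROM pg_constraint
WHERE contype = 'f';

-- phase 2: drop each FK found above
ALTER TABLE employee_discussion DROP CONSTRAINT employee_discussion_employee_id_fkey;

-- phase 3: after the import, recreate each FK from its saved definition
ALTER TABLE employee_discussion
	ADD CONSTRAINT employee_discussion_employee_id_fkey
	FOREIGN KEY (employee_id) REFERENCES employee (id);
```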

A while ago, thanks to that, a client of ours migrated from Oracle to Postgres for our app.

Even with Hibernate, you surely had to change a lot of things, because Oracle treats empty strings specially, handles timestamps differently, etc.

I understand that Hibernate helps you, but IMO you shouldn’t avoid using database features just because they might not have a 1:1 equivalent in other databases. Chances are, when you actually do have to migrate, the target database already has most of the features you need.

FKs in PostgreSQL in 3 phases

We did it this way in Oracle. In Postgres we found a shorter method w/ a single command.
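
For reference, one such single-statement approach in PostgreSQL (not necessarily the exact one meant here) is the session replication role, since FK constraints in PostgreSQL are enforced by system triggers:

```sql
-- disable FK (trigger) enforcement for this session only;
-- requires superuser or equivalent privileges
SET session_replication_role = 'replica';

-- ... bulk import ...

-- restore normal enforcement
SET session_replication_role = 'origin';
```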

but IMO you shouldn’t avoid using database features

You are right. We do use some of its JSON-related features this way.
