Hibernate Envers - default_catalog not working

spring.jpa.properties.org.hibernate.envers.store_data_at_delete = true
spring.jpa.properties.org.hibernate.envers.default_schema = audit
spring.jpa.properties.org.hibernate.envers.default_catalog = audit
spring.jpa.properties.org.hibernate.envers.audit_table_suffix = _audit

is not working. Audit changes get stored in the 'audit' schema of the application database, but I want them to go to a standalone database named 'audit'. Does anyone have any idea what configuration I'm missing? I'm using Postgres 9.6.

standalone database ‘audit’.

The target database is determined by the JDBC connection URL, not by these Envers properties.

Depending on the database and JDBC driver, a schema or catalog is not the same thing as a database instance.
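To illustrate: in a Spring Boot setup, the database itself is fixed by the datasource URL (the host and database names below are illustrative):

spring.datasource.url = jdbc:postgresql://localhost:5432/appdb
spring.datasource.username = app
spring.datasource.password = secret

The Envers default_schema / default_catalog properties only qualify the generated audit table names within that one connection; they cannot point Hibernate at a second database.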

I use the schema value in the Annotation like this

@Entity(name = "Audit")
@Table(name = "audit", schema = "audit")

This is a flexible solution: you can decide at the table level which schema is used.
Hope this helps

That is not possible because Envers uses the same database connection that your application uses. This means the only options currently available are to write the audit data to a separate schema or catalog, depending on the database platform you’re on.
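For completeness, the schema (or catalog) can also be overridden per entity with Envers' @AuditTable annotation, rather than globally via default_schema. A sketch, with illustrative entity and table names:

import org.hibernate.envers.AuditTable;
import org.hibernate.envers.Audited;

@Entity
@Audited
@AuditTable(value = "customer_audit", schema = "audit")
public class Customer { ... }

Either way, the audit tables still live in the same database as the application tables, just in a different namespace.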

What I suggest is to install the dblink extension on your main database and configure the connection:

CREATE EXTENSION IF NOT EXISTS dblink SCHEMA <schema>;

CREATE SERVER <server_name>
FOREIGN DATA WRAPPER dblink_fdw
OPTIONS (host '<your address, can be on the same db server>',
dbname 'audit', port '5432');

CREATE USER MAPPING FOR <local_user>
SERVER <server_name>
OPTIONS (user '<remote_user>', password '<remote_password>');

Then, when you have to insert data into the remote database, unwrap your Hibernate session as shown below:

Session sessionJDBC = session.unwrap(Session.class);

sessionJDBC.doWork(new Work() {

    @Override
    public void execute(Connection conn) throws SQLException {
        // 'auditserver' is the dblink server created above;
        // <your schema> is where the dblink extension was installed
        String sql = "SELECT <your schema>.dblink_exec('auditserver', "
                + "'INSERT INTO ... etc etc')";
        try (PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.execute();
        } catch (Exception e) {
            logger.error(e);
        }
    }
});

This uses the same session. Hope this helps.

If I understand what you're proposing, I'm not sure that will work. In this case, @adorogensky doesn't have control over the insert statements they want re-routed; those happen automatically when Envers saves a dynamic-map entity instance. It is Hibernate that translates that map instance into an actual insert operation, not the user.

Remember, data capture is only half of what Envers provides users. There is the entire Query API that Envers exposes that allows users to fetch entity instances based on revision numbers, date ranges, and user defined criteria which is not trivial to understand and then implement in just plain SQL.

I think the only way to automate this behaviour is to create a trigger function (again using dblink to save the data in the standalone database). In this case you are bypassing Hibernate, as the statement is fired by the database's own event handling.
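Such a trigger might look roughly like this (a sketch only; the table, server, and column names are placeholders, and it assumes the dblink server and user mapping from earlier):

CREATE OR REPLACE FUNCTION replicate_audit_row() RETURNS trigger AS $$
BEGIN
  -- 'auditserver' is the dblink foreign server defined earlier
  PERFORM dblink_exec('auditserver',
    format('INSERT INTO some_table_audit (id, rev, revtype) VALUES (%L, %L, %L)',
           NEW.id, NEW.rev, NEW.revtype));
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_replicate_audit
AFTER INSERT ON audit.some_table_audit
FOR EACH ROW EXECUTE PROCEDURE replicate_audit_row();

This fires after Envers writes each local audit row, so Hibernate stays unaware of the replication.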

A second solution is to write a stored procedure and map that procedure in Hibernate; up to now I have not found a way to use a single Hibernate instance to manage different databases.

A trigger is most definitely not the only way.

In my experience, triggers can be unpredictable and performance hindering tools. If you really want to stream changes from one database to another, I might suggest that you check out Debezium for doing that task because it can do it asynchronously without impacting your run-time performance.

That’s because that isn’t how Hibernate works.

Traditionally, if you need to operate on multiple databases, you’ll need to have a SessionFactory that is configured per database, acquire a Session from each and perform your operations. But in this scenario, you need to take special care with transaction management, making sure the distributed transaction is committed correctly to guarantee database consistency across those participating in the transaction.
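A rough sketch of that pattern (not runnable as-is; mainFactory, auditFactory, and the entities are illustrative, and real atomicity across the two databases requires a JTA/XA transaction manager rather than this best-effort ordering):

Session mainSession = mainFactory.openSession();
Session auditSession = auditFactory.openSession();
Transaction mainTx = mainSession.beginTransaction();
Transaction auditTx = auditSession.beginTransaction();
try {
    mainSession.persist(order);        // write to the application database
    auditSession.persist(orderAudit);  // write to the audit database
    auditTx.commit();                  // commit the audit side first...
    mainTx.commit();                   // ...then the main side
} catch (RuntimeException e) {
    if (auditTx.isActive()) auditTx.rollback();
    if (mainTx.isActive()) mainTx.rollback();
    throw e;
} finally {
    auditSession.close();
    mainSession.close();
}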

As I believe I mentioned earlier, some databases have a way to mirror a remote table as if it were a local table. This means you delegate the transaction management aspect to the database, and you simply configure and interact with a single SessionFactory. If your platform supports this, it is the best alternative.

The Debezium solution avoids the distributed transaction concerns and handles replication asynchronously, so the run-time overhead is negligible. It does mean the user would need one SessionFactory for their main database reads/writes and a separate SessionFactory for reading from the audit datastore.