We are planning to replace a self-implemented legacy data historization mechanism with Hibernate Envers, because it allows us to separate business concerns from historization concerns.
Since we are, among other things, dealing with historized/versioned time series data, I would like to ask about the typical order of magnitude of the number of revisions that Envers can handle sensibly.
Moreover, would it be sensible to index the REVINFO table columns (the database is currently Oracle 12)?
Unfortunately, storing the time series data outside the relational database is not an option.
First of all, you are better off using the ValidityAuditStrategy instead of the default one. The ValidityAuditStrategy provides better means for indexing the revision columns, which in turn yields better performance.
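Switching strategies is a configuration change; a minimal sketch of the relevant Hibernate property (to be set in hibernate.properties, persistence.xml, or your programmatic configuration):

```properties
# Tell Envers to use the validity audit strategy instead of the default.
# Note: ValidityAuditStrategy adds a REVEND (end-revision) column to the
# audit tables, so existing audit data would need a migration to populate it.
org.hibernate.envers.audit_strategy=org.hibernate.envers.strategy.ValidityAuditStrategy
```

With this strategy, each audit row stores both the start and end revision of its validity interval, so "state at revision N" queries become a simple range check instead of a subquery for the maximum revision.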
Moreover, would it be sensible to index the REVINFO table columns (the database right now is Oracle 12)?
Not just REVINFO, but also the _AUD tables should be indexed to provide better performance when reading data. If the _AUD tables grow too large, you can use archiving or database partitioning.
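As a sketch of what such indexes might look like in Oracle, assuming a hypothetical audited entity with audit table MEASUREMENT_AUD, primary key ID, and the ValidityAuditStrategy in use (all table and index names here are illustrative, not prescribed by Envers):

```sql
-- REV joins each audit row to REVINFO; index it for revision-based reads.
CREATE INDEX idx_measurement_aud_rev ON MEASUREMENT_AUD (REV);

-- With ValidityAuditStrategy, (ID, REVEND) supports "state as of revision"
-- lookups, which filter on the entity key and the validity end revision.
CREATE INDEX idx_measurement_aud_revend ON MEASUREMENT_AUD (ID, REVEND);

-- REVINFO is keyed by REV; an index on the revision timestamp helps
-- queries that resolve a revision by date.
CREATE INDEX idx_revinfo_revtstmp ON REVINFO (REVTSTMP);
```

Which of these actually pays off depends on your query mix; the execution plan (see below in this thread) is the final arbiter.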
Thank you very much for your answer; it is very much appreciated.
I will look into the concepts you suggested.
As to the ValidityAuditStrategy: with it, migrating an existing database to Envers seems to be more complicated.
As to the indexing: taking into account the queries generated by Envers, are there specific recommendations for indexes based on those queries?
Nothing different from tuning any other SQL query. Just check the execution plan before and after adding an index, and you will see which indexes work best.
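For completeness, a minimal sketch of checking a plan in Oracle (the query shown is a made-up example of an Envers-style "current state" lookup, with hypothetical table and column names):

```sql
-- Capture the plan for a representative Envers-generated query.
EXPLAIN PLAN FOR
  SELECT *
    FROM MEASUREMENT_AUD a
   WHERE a.ID = 42
     AND a.REVEND IS NULL;  -- current version under ValidityAuditStrategy

-- Display the captured plan.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

Enabling Hibernate SQL logging (hibernate.show_sql or the org.hibernate.SQL log category) is a practical way to capture the exact queries Envers emits, so you can feed the real statements into EXPLAIN PLAN.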