5. Persistence Context

Both the org.hibernate.Session API and javax.persistence.EntityManager API represent a context for dealing with persistent data. This concept is called a persistence context. Persistent data has a state in relation to both a persistence context and the underlying database.

transient

the entity has just been instantiated and is not associated with a persistence context. It has no persistent representation in the database and typically no identifier value has been assigned (unless the assigned generator was used).

managed or persistent

the entity has an associated identifier and is associated with a persistence context. It may or may not physically exist in the database yet.

detached

the entity has an associated identifier but is no longer associated with a persistence context (usually because the persistence context was closed or the instance was evicted from the context)

removed

the entity has an associated identifier and is associated with a persistence context, however, it is scheduled for removal from the database.

Many of the org.hibernate.Session and javax.persistence.EntityManager methods deal with moving entities between these states.
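As an illustration, the following sketch walks one instance through all four states (the Person entity and the open entityManager are assumptions for the example):

```java
Person person = new Person();                   // transient: not associated with any context
person.setId( 1L );

entityManager.persist( person );                // managed (persistent): tracked by the context

entityManager.detach( person );                 // detached: has an identifier, no longer tracked

Person managed = entityManager.merge( person ); // merging returns a managed copy

entityManager.remove( managed );                // removed: scheduled for deletion at flush
```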

5.1. Accessing Hibernate APIs from JPA

JPA defines an incredibly useful method to allow applications access to the APIs of the underlying provider.

Example 294. Accessing Hibernate APIs from JPA

```java
Session session = entityManager.unwrap( Session.class );

SessionImplementor sessionImplementor = entityManager.unwrap( SessionImplementor.class );

SessionFactory sessionFactory = entityManager.getEntityManagerFactory().unwrap( SessionFactory.class );
```

5.2. Bytecode Enhancement

Hibernate “grew up” not supporting bytecode enhancement at all. At that time, Hibernate only supported a proxy-based alternative for lazy loading and always used diff-based dirty calculation. Hibernate 3.x saw the first attempts at bytecode enhancement support. We consider those initial attempts (up until 5.0) an incubation. The support for bytecode enhancement in 5.0 and onward is what we are discussing here.

5.2.1. Capabilities

Hibernate supports the enhancement of an application Java domain model for the purpose of adding various persistence-related capabilities directly into the class.

Lazy attribute loading

Think of this as partial loading support. Essentially, you can tell Hibernate which part(s) of an entity should be loaded when it is fetched from the database and when the remaining part(s) should be loaded. Note that this is very much different from the proxy-based idea of lazy loading, which is entity-centric: the entity’s whole state is loaded at once, as needed. With bytecode enhancement, individual attributes or groups of attributes are loaded as needed.

Lazy attributes can be designated to be loaded together, and this is called a “lazy group”. By default, all singular attributes are part of a single group, meaning that when one lazy singular attribute is accessed all lazy singular attributes are loaded. Lazy plural attributes, by default, are each a lazy group by themselves. This behavior is explicitly controllable through the @org.hibernate.annotations.LazyGroup annotation.

Example 295. @LazyGroup example

```java
@Entity
public class Customer {

    @Id
    private Integer id;

    private String name;

    @Basic( fetch = FetchType.LAZY )
    private UUID accountsPayableXrefId;

    @Lob
    @Basic( fetch = FetchType.LAZY )
    @LazyGroup( "lobs" )
    private Blob image;

    //Getters and setters are omitted for brevity
}
```

In the above example, we have 2 lazy attributes: accountsPayableXrefId and image. Each is part of a different fetch group (accountsPayableXrefId is part of the default fetch group), which means that accessing accountsPayableXrefId will not force the loading of the image attribute, and vice-versa.

As a hopefully temporary legacy hold-over, it is currently required that all lazy singular associations (many-to-one and one-to-one) also include @LazyToOne(LazyToOneOption.NO_PROXY). The plan is to relax that requirement later.

In-line dirty tracking

Historically Hibernate only supported diff-based dirty calculation for determining which entities in a persistence context have changed. This essentially means that Hibernate would keep track of the last known state of an entity in regards to the database (typically the last read or write). Then, as part of flushing the persistence context, Hibernate would walk every entity associated with the persistence context and check its current state against that “last known database state”. This is by far the most thorough approach to dirty checking because it accounts for data-types that can change their internal state (java.util.Date is the prime example of this). However, in a persistence context with a large number of associated entities, it can also be a performance-inhibiting approach.

If your application does not need to care about “internal state changing data-type” use cases, bytecode-enhanced dirty tracking might be a worthwhile alternative to consider, especially in terms of performance. In this approach Hibernate will manipulate the bytecode of your classes to add “dirty tracking” directly to the entity, allowing the entity itself to keep track of which of its attributes have changed. During the flush time, Hibernate asks your entity what has changed rather than having to perform the state-diff calculations.
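Conceptually, an enhanced entity carries its own change log. The following hand-written plain-Java sketch (not actual enhancer output; the class and attribute names are illustrative) mimics the idea: every setter records the attribute name, so flush time only needs to ask for the recorded names instead of diffing snapshots:

```java
import java.util.HashSet;
import java.util.Set;

// Hand-written approximation of what the bytecode enhancer injects;
// real enhanced classes use Hibernate's internal tracker types.
class TrackedPerson {

    private String name;
    private final Set<String> dirtyAttributes = new HashSet<>();

    public void setName(String name) {
        this.name = name;
        dirtyAttributes.add( "name" ); // change recorded in-line, no diff needed later
    }

    public String getName() {
        return name;
    }

    // What flush-time would consult instead of comparing against a snapshot
    public Set<String> getDirtyAttributes() {
        return dirtyAttributes;
    }

    // Called after a successful flush
    public void clearDirtyAttributes() {
        dirtyAttributes.clear();
    }
}
```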

Bidirectional association management

Hibernate strives to keep your application as close to “normal Java usage” (idiomatic Java) as possible. Consider a domain model with a normal Person/Book bidirectional association:

Example 296. Bidirectional association

```java
@Entity(name = "Person")
public static class Person {

    @Id
    private Long id;

    private String name;

    @OneToMany(mappedBy = "author")
    private List<Book> books = new ArrayList<>();

    //Getters and setters are omitted for brevity
}

@Entity(name = "Book")
public static class Book {

    @Id
    private Long id;

    private String title;

    @NaturalId
    private String isbn;

    @ManyToOne
    private Person author;

    //Getters and setters are omitted for brevity
}
```

Example 297. Incorrect normal Java usage

```java
Person person = new Person();
person.setName( "John Doe" );

Book book = new Book();
person.getBooks().add( book );

try {
    book.getAuthor().getName();
}
catch (NullPointerException expected) {
    // This blows up ( NPE ) in normal Java usage
}
```

This blows up in normal Java usage. The correct normal Java usage is:

Example 298. Correct normal Java usage

```java
Person person = new Person();
person.setName( "John Doe" );

Book book = new Book();
person.getBooks().add( book );
book.setAuthor( person );

book.getAuthor().getName();
```

Bytecode-enhanced bi-directional association management makes that first example work by managing the “other side” of a bi-directional association whenever one side is manipulated.
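Without enhancement, the usual way to get the same safety is a small helper that sets both sides in one call. A plain-Java sketch (annotations stripped, addBook is an illustrative helper name):

```java
import java.util.ArrayList;
import java.util.List;

// Plain (un-annotated) versions of the entities, to show the helper pattern only
class Book {
    private Person author;

    public Person getAuthor() { return author; }
    public void setAuthor(Person author) { this.author = author; }
}

class Person {
    private final List<Book> books = new ArrayList<>();

    public List<Book> getBooks() { return books; }

    // Keeps both sides of the bidirectional association in sync
    public void addBook(Book book) {
        books.add( book );
        book.setAuthor( this );
    }
}
```

With such a helper in place, the first example above no longer throws, which is exactly the behavior bytecode-enhanced association management provides automatically.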

Internal performance optimizations

Additionally, we use the enhancement process to add some code that allows us to optimize certain performance characteristics of the persistence context. These are hard to discuss without diving into a discussion of Hibernate internals.

5.2.2. Performing enhancement

Runtime enhancement

Currently, runtime enhancement of the domain model is only supported in managed JPA environments following the JPA-defined SPI for performing class transformations.

Even then, this support is disabled by default. To enable runtime enhancement, specify one of the following configuration properties:

**hibernate.enhancer.enableDirtyTracking** (e.g. true or false (default value))

Enable dirty tracking feature in runtime bytecode enhancement.

**hibernate.enhancer.enableLazyInitialization** (e.g. true or false (default value))

Enable lazy loading feature in runtime bytecode enhancement. This way, even basic types (e.g. @Basic(fetch = FetchType.LAZY)) can be fetched lazily.

**hibernate.enhancer.enableAssociationManagement** (e.g. true or false (default value))

Enable association management feature in runtime bytecode enhancement which automatically synchronizes a bidirectional association when only one side is changed.

Also, at the moment, only annotated classes support runtime enhancement.
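In a managed JPA environment these flags would typically be set as persistence-unit properties; a sketch (the unit name and entity class are placeholders):

```xml
<persistence-unit name="example-unit">
    <class>org.example.Customer</class>
    <properties>
        <property name="hibernate.enhancer.enableDirtyTracking" value="true"/>
        <property name="hibernate.enhancer.enableLazyInitialization" value="true"/>
        <property name="hibernate.enhancer.enableAssociationManagement" value="true"/>
    </properties>
</persistence-unit>
```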

Gradle plugin

Hibernate provides a Gradle plugin that is capable of providing build-time enhancement of the domain model classes as they are compiled as part of a Gradle build. To use the plugin, a project would first need to apply it:

Example 299. Apply the Gradle plugin

```groovy
apply plugin: 'org.hibernate.orm'

ext {
    hibernateVersion = 'hibernate-version-you-want'
}

buildscript {
    dependencies {
        classpath "org.hibernate:hibernate-gradle-plugin:$hibernateVersion"
    }
}

hibernate {
    enhance {
        enableLazyInitialization = true
        enableDirtyTracking = true
        enableAssociationManagement = true
    }
}
```

The configuration that is available is exposed through a registered Gradle DSL extension:

enableLazyInitialization

Whether enhancement for lazy attribute loading should be done.

enableDirtyTracking

Whether enhancement for self-dirty tracking should be done.

enableAssociationManagement

Whether enhancement for bi-directional association management should be done.

The default value for all 3 configuration settings is false.

The enhance { } block is required in order for enhancement to occur. Enhancement is disabled by default in preparation for additional capabilities (hbm2ddl, etc.) in the plugin.

Maven plugin

Hibernate provides a Maven plugin capable of providing build-time enhancement of the domain model classes as they are compiled as part of a Maven build. See the section on the Gradle plugin for details on the configuration settings. Again, the default for those 3 settings is false.

The Maven plugin supports one additional configuration setting: failOnError, which controls what happens in case of error. The default behavior is to fail the build, but it can be set so that only a warning is issued.

Example 300. Apply the Maven plugin

```xml
<build>
    <plugins>
        [...]
        <plugin>
            <groupId>org.hibernate.orm.tooling</groupId>
            <artifactId>hibernate-enhance-maven-plugin</artifactId>
            <version>$currentHibernateVersion</version>
            <executions>
                <execution>
                    <configuration>
                        <failOnError>true</failOnError>
                        <enableLazyInitialization>true</enableLazyInitialization>
                        <enableDirtyTracking>true</enableDirtyTracking>
                        <enableAssociationManagement>true</enableAssociationManagement>
                    </configuration>
                    <goals>
                        <goal>enhance</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
        [...]
    </plugins>
</build>
```

5.3. Making entities persistent

Once you’ve created a new entity instance (using the standard new operator) it is in the transient state. You can make it persistent by associating it with either an org.hibernate.Session or a javax.persistence.EntityManager.

Example 301. Making an entity persistent with JPA

```java
Person person = new Person();
person.setId( 1L );
person.setName( "John Doe" );

entityManager.persist( person );
```

Example 302. Making an entity persistent with Hibernate API

```java
Person person = new Person();
person.setId( 1L );
person.setName( "John Doe" );

session.save( person );
```

org.hibernate.Session also has a method named persist which follows the exact semantics defined in the JPA specification for the persist method. It is this org.hibernate.Session method to which the Hibernate javax.persistence.EntityManager implementation delegates.

If the entity type has a generated identifier, the value is associated with the instance when save or persist is called. If the identifier is not automatically generated, the manually assigned (usually natural) key value has to be set on the instance before save or persist is called.

5.4. Deleting (removing) entities

Entities can also be deleted.

Example 303. Deleting an entity with JPA

```java
entityManager.remove( person );
```

Example 304. Deleting an entity with the Hibernate API

```java
session.delete( person );
```

Hibernate itself can handle deleting entities in detached state. JPA, however, disallows this behavior.

The implication here is that the entity instance passed to the org.hibernate.Session delete method can be either in managed or detached state, while the entity instance passed to remove on javax.persistence.EntityManager must be in the managed state.
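As a sketch of the difference (the detachedPerson variable is an assumption, standing for an instance loaded in a previous, now-closed persistence context):

```java
// Hibernate API: deleting a detached instance is allowed
session.delete( detachedPerson );

// JPA: remove() rejects detached instances
try {
    entityManager.remove( detachedPerson );
}
catch (IllegalArgumentException expected) {
    // the JPA specification mandates this for detached arguments
}
```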

5.5. Obtain an entity reference without initializing its data

Sometimes referred to as lazy loading, the ability to obtain a reference to an entity without having to load its data is hugely important. The most common case is the need to create an association between an entity and another existing entity.

Example 305. Obtaining an entity reference without initializing its data with JPA

```java
Book book = new Book();
book.setAuthor( entityManager.getReference( Person.class, personId ) );
```

Example 306. Obtaining an entity reference without initializing its data with Hibernate API

```java
Book book = new Book();
book.setId( 1L );
book.setIsbn( "123-456-7890" );
entityManager.persist( book );
book.setAuthor( session.load( Person.class, personId ) );
```

The above works on the assumption that the entity is defined to allow lazy loading, generally through use of runtime proxies. In both cases an exception will be thrown later if the given entity does not refer to actual database state when the application attempts to use the returned proxy in any way that requires access to its data.

Unless the entity class is declared final, the proxy extends the entity class. If the entity class is final, the proxy will implement an interface instead. See the @Proxy mapping section for more info.

5.6. Obtain an entity with its data initialized

It is also quite common to want to obtain an entity along with its data (e.g. like when we need to display it in the UI).

Example 307. Obtaining an entity reference with its data initialized with JPA

```java
Person person = entityManager.find( Person.class, personId );
```

Example 308. Obtaining an entity reference with its data initialized with Hibernate API

```java
Person person = session.get( Person.class, personId );
```

Example 309. Obtaining an entity reference with its data initialized using the byId() Hibernate API

```java
Person person = session.byId( Person.class ).load( personId );
```

In all of the above cases, null is returned if no matching database row was found.

It’s possible to return a Java 8 Optional as well:

Example 310. Obtaining an Optional entity reference with its data initialized using the byId() Hibernate API

```java
Optional<Person> optionalPerson = session.byId( Person.class ).loadOptional( personId );
```

5.7. Obtain multiple entities by their identifiers

If you want to load multiple entities by providing their identifiers, calling the EntityManager#find method multiple times is not only inconvenient, but also inefficient.

While the JPA standard does not support retrieving multiple entities at once, other than running a JPQL or Criteria API query, Hibernate offers this functionality via the byMultipleIds method of the Hibernate Session.

The byMultipleIds method returns a MultiIdentifierLoadAccess which you can use to customize the multi-load request.

The MultiIdentifierLoadAccess interface provides several methods which you can use to change the behavior of the multi-load call:

enableOrderedReturn(boolean enabled)

This setting controls whether the returned List is ordered and positional in relation to the incoming ids. If enabled (the default), a request to multiLoad([2,1,3]) will return [Entity#2, Entity#1, Entity#3].

An important distinction is made here in regards to the handling of unknown entities depending on this “ordered return” setting. If enabled, a null is inserted into the List at the proper position(s). If disabled, the nulls are not put into the return List.

In other words, consumers of the returned ordered List would need to be able to handle null elements.

enableSessionCheck(boolean enabled)

This setting, which is disabled by default, tells Hibernate to check the first-level cache (a.k.a. the Session or Persistence Context) first. If an entity is found there and already managed by the Hibernate Session, the cached entity is added to the returned List and excluded from the multi-load query.

enableReturnOfDeletedEntities(boolean enabled)

This setting controls whether the multi-load operation is allowed to return entities that were deleted by the current Persistence Context. A deleted entity is one which has been passed to the Session delete or remove method, but the Session has not been flushed yet, meaning that the associated row has not yet been deleted from the database table.

When enabled, the result will contain the deleted entities. When disabled (the default), deleted entities are treated as not found: with an ordered return they appear as null elements (see enableOrderedReturn), otherwise they are simply omitted from the returned List.

with(LockOptions lockOptions)

This setting allows you to pass a given LockOptions mode to the multi-load query.

with(CacheMode cacheMode)

This setting allows you to pass a given CacheMode strategy so that entities can be loaded from the second-level cache, sparing the cached entities from being fetched via the multi-load query.

withBatchSize(int batchSize)

This setting allows you to specify a batch size for loading the entities (e.g. how many at a time).

The default is to use a batch sizing strategy defined by the Dialect.getDefaultBatchLoadSizingStrategy() method.

Any greater-than-one value here will override that default behavior.
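For example, a sketch that caps each generated SELECT at 5 identifiers (the id values are illustrative):

```java
List<Person> persons = session
    .byMultipleIds( Person.class )
    .withBatchSize( 5 )                       // at most 5 ids per SQL statement
    .multiLoad( 1L, 2L, 3L, 4L, 5L, 6L, 7L ); // executed as two batches
```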

with(RootGraph<T> graph)

The RootGraph is a Hibernate extension to the JPA EntityGraph contract, and this method allows you to pass a specific RootGraph to the multi-load query so that it can fetch additional associations of the entities being loaded.

Now, assuming we have 3 Person entities in the database, we can load all of them with a single call as illustrated by the following example:

Example 311. Loading multiple entities using the byMultipleIds() Hibernate API

```java
Session session = entityManager.unwrap( Session.class );

List<Person> persons = session
    .byMultipleIds( Person.class )
    .multiLoad( 1L, 2L, 3L );

assertEquals( 3, persons.size() );

List<Person> samePersons = session
    .byMultipleIds( Person.class )
    .enableSessionCheck( true )
    .multiLoad( 1L, 2L, 3L );

assertEquals( persons, samePersons );
```

```sql
SELECT p.id AS id1_0_0_,
       p.name AS name2_0_0_
FROM   Person p
WHERE  p.id IN ( 1, 2, 3 )
```

Notice that only one SQL SELECT statement was executed since the second call uses the enableSessionCheck method of the MultiIdentifierLoadAccess to instruct Hibernate to skip entities that are already loaded in the current Persistence Context.

If the entities are not available in the current Persistence Context but they could be loaded from the second-level cache, you can use the with(CacheMode) method of the MultiIdentifierLoadAccess object.

Example 312. Loading multiple entities from the second-level cache

```java
SessionFactory sessionFactory = entityManagerFactory().unwrap( SessionFactory.class );
Statistics statistics = sessionFactory.getStatistics();

sessionFactory.getCache().evictAll();
statistics.clear();
sqlStatementInterceptor.clear();

assertEquals( 0, statistics.getQueryExecutionCount() );

doInJPA( this::entityManagerFactory, entityManager -> {
    Session session = entityManager.unwrap( Session.class );

    List<Person> persons = session
        .byMultipleIds( Person.class )
        .multiLoad( 1L, 2L, 3L );

    assertEquals( 3, persons.size() );
} );

assertEquals( 0, statistics.getSecondLevelCacheHitCount() );
assertEquals( 3, statistics.getSecondLevelCachePutCount() );
assertEquals( 1, sqlStatementInterceptor.getSqlQueries().size() );

doInJPA( this::entityManagerFactory, entityManager -> {
    Session session = entityManager.unwrap( Session.class );
    sqlStatementInterceptor.clear();

    List<Person> persons = session.byMultipleIds( Person.class )
        .with( CacheMode.NORMAL )
        .multiLoad( 1L, 2L, 3L );

    assertEquals( 3, persons.size() );
} );

assertEquals( 3, statistics.getSecondLevelCacheHitCount() );
assertEquals( 0, sqlStatementInterceptor.getSqlQueries().size() );
```

In the example above, we first make sure that we clear the second-level cache to demonstrate that the multi-load query will put the returning entities into the second-level cache.

After executing the first byMultipleIds call, Hibernate is going to fetch the requested entities, and as illustrated by the getSecondLevelCachePutCount method call, 3 entities were indeed added to the shared cache.

Afterward, when executing the second byMultipleIds call for the same entities in a new Hibernate Session, we set the CacheMode.NORMAL second-level cache mode so that entities are going to be returned from the second-level cache.

The getSecondLevelCacheHitCount statistics method returns 3 this time, since the 3 entities were loaded from the second-level cache, and, as illustrated by sqlStatementInterceptor.getSqlQueries(), no multi-load SELECT statement was executed this time.

5.8. Obtain an entity by natural-id

In addition to loading an entity by its identifier, Hibernate allows applications to load entities by a declared natural identifier.

Example 313. Natural-id mapping

```java
@Entity(name = "Book")
public static class Book {

    @Id
    private Long id;

    private String title;

    @NaturalId
    private String isbn;

    @ManyToOne
    private Person author;

    //Getters and setters are omitted for brevity
}
```

We can also opt to fetch the entity or just retrieve a reference to it when using the natural identifier loading methods.

Example 314. Get entity reference by simple natural-id

```java
Book book = session.bySimpleNaturalId( Book.class ).getReference( isbn );
```

Example 315. Load entity by natural-id

```java
Book book = session
    .byNaturalId( Book.class )
    .using( "isbn", isbn )
    .load();
```

We can also use a Java 8 Optional to load an entity by its natural id:

Example 316. Load an Optional entity by natural-id

```java
Optional<Book> optionalBook = session
    .byNaturalId( Book.class )
    .using( "isbn", isbn )
    .loadOptional();
```

Hibernate offers a consistent API for accessing persistent data by identifier or by the natural-id. Each of these defines the same two data access methods:

getReference

Should be used in cases where the identifier is assumed to exist, where non-existence would be an actual error. Should never be used to test existence. That is because this method will prefer to create and return a proxy if the data is not already associated with the Session rather than hit the database. The quintessential use-case for using this method is to create foreign key based associations.

load

Will return the persistent data associated with the given identifier value or null if that identifier does not exist.

Each of these two methods defines an overloading variant accepting a org.hibernate.LockOptions argument. Locking is discussed in a separate chapter.
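To contrast the two access methods, a sketch using the simple natural-id API from above (isbn is assumed to hold the looked-up value):

```java
// getReference: may hand back an uninitialized proxy without hitting the database;
// a missing row only surfaces later, when the proxy's state is first accessed
Book reference = session.bySimpleNaturalId( Book.class ).getReference( isbn );

// load: resolves the data immediately and returns null for an unknown natural-id
Book loaded = session.bySimpleNaturalId( Book.class ).load( isbn );
if ( loaded == null ) {
    // no Book with this isbn exists
}
```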

5.9. Filtering entities and associations

Hibernate offers two options if you want to filter entities or entity associations:

static (e.g. @Where and @WhereJoinTable)

which are defined at mapping time and cannot be changed at runtime.

dynamic (e.g. @Filter and @FilterJoinTable)

which are applied and configured at runtime.

5.9.1. @Where

Sometimes, you want to filter out entities or collections using custom SQL criteria. This can be achieved using the @Where annotation, which can be applied to entities and collections.

Example 317. @Where mapping usage

```java
public enum AccountType {
    DEBIT,
    CREDIT
}

@Entity(name = "Client")
public static class Client {

    @Id
    private Long id;

    private String name;

    @Where( clause = "account_type = 'DEBIT'")
    @OneToMany(mappedBy = "client")
    private List<Account> debitAccounts = new ArrayList<>();

    @Where( clause = "account_type = 'CREDIT'")
    @OneToMany(mappedBy = "client")
    private List<Account> creditAccounts = new ArrayList<>();

    //Getters and setters omitted for brevity
}

@Entity(name = "Account")
@Where( clause = "active = true" )
public static class Account {

    @Id
    private Long id;

    @ManyToOne
    private Client client;

    @Column(name = "account_type")
    @Enumerated(EnumType.STRING)
    private AccountType type;

    private Double amount;

    private Double rate;

    private boolean active;

    //Getters and setters omitted for brevity
}
```

Assuming the following entities are persisted in the database:

Example 318. Persisting and fetching entities with a @Where mapping

```java
doInJPA( this::entityManagerFactory, entityManager -> {
    Client client = new Client();
    client.setId( 1L );
    client.setName( "John Doe" );
    entityManager.persist( client );

    Account account1 = new Account();
    account1.setId( 1L );
    account1.setType( AccountType.CREDIT );
    account1.setAmount( 5000d );
    account1.setRate( 1.25 / 100 );
    account1.setActive( true );
    account1.setClient( client );
    client.getCreditAccounts().add( account1 );
    entityManager.persist( account1 );

    Account account2 = new Account();
    account2.setId( 2L );
    account2.setType( AccountType.DEBIT );
    account2.setAmount( 0d );
    account2.setRate( 1.05 / 100 );
    account2.setActive( false );
    account2.setClient( client );
    client.getDebitAccounts().add( account2 );
    entityManager.persist( account2 );

    Account account3 = new Account();
    account3.setType( AccountType.DEBIT );
    account3.setId( 3L );
    account3.setAmount( 250d );
    account3.setRate( 1.05 / 100 );
    account3.setActive( true );
    account3.setClient( client );
    client.getDebitAccounts().add( account3 );
    entityManager.persist( account3 );
} );
```

```sql
INSERT INTO Client (name, id)
VALUES ('John Doe', 1)

INSERT INTO Account (active, amount, client_id, rate, account_type, id)
VALUES (true, 5000.0, 1, 0.0125, 'CREDIT', 1)

INSERT INTO Account (active, amount, client_id, rate, account_type, id)
VALUES (false, 0.0, 1, 0.0105, 'DEBIT', 2)

INSERT INTO Account (active, amount, client_id, rate, account_type, id)
VALUES (true, 250.0, 1, 0.0105, 'DEBIT', 3)
```

When executing an Account entity query, Hibernate is going to filter out all records that are not active.

Example 319. Query entities mapped with @Where

```java
doInJPA( this::entityManagerFactory, entityManager -> {
    List<Account> accounts = entityManager.createQuery(
        "select a from Account a", Account.class)
    .getResultList();
    assertEquals( 2, accounts.size() );
} );
```

```sql
SELECT
    a.id as id1_0_,
    a.active as active2_0_,
    a.amount as amount3_0_,
    a.client_id as client_i6_0_,
    a.rate as rate4_0_,
    a.account_type as account_5_0_
FROM
    Account a
WHERE ( a.active = true )
```

When fetching the debitAccounts or the creditAccounts collections, Hibernate is going to apply the @Where clause filtering criteria to the associated child entities.

Example 320. Traversing collections mapped with @Where

```java
doInJPA( this::entityManagerFactory, entityManager -> {
    Client client = entityManager.find( Client.class, 1L );
    assertEquals( 1, client.getCreditAccounts().size() );
    assertEquals( 1, client.getDebitAccounts().size() );
} );
```

```sql
SELECT
    c.client_id as client_i6_0_0_,
    c.id as id1_0_0_,
    c.id as id1_0_1_,
    c.active as active2_0_1_,
    c.amount as amount3_0_1_,
    c.client_id as client_i6_0_1_,
    c.rate as rate4_0_1_,
    c.account_type as account_5_0_1_
FROM
    Account c
WHERE ( c.active = true and c.account_type = 'CREDIT' ) AND c.client_id = 1

SELECT
    d.client_id as client_i6_0_0_,
    d.id as id1_0_0_,
    d.id as id1_0_1_,
    d.active as active2_0_1_,
    d.amount as amount3_0_1_,
    d.client_id as client_i6_0_1_,
    d.rate as rate4_0_1_,
    d.account_type as account_5_0_1_
FROM
    Account d
WHERE ( d.active = true and d.account_type = 'DEBIT' ) AND d.client_id = 1
```

5.9.2. @WhereJoinTable

Just like the @Where annotation, @WhereJoinTable is used to filter out collection elements, but it applies to the columns of the join table (e.g. of a @ManyToMany association).

Example 321. @WhereJoinTable mapping example

```java
@Entity(name = "Book")
public static class Book {

    @Id
    private Long id;

    private String title;

    private String author;

    @ManyToMany
    @JoinTable(
        name = "Book_Reader",
        joinColumns = @JoinColumn(name = "book_id"),
        inverseJoinColumns = @JoinColumn(name = "reader_id")
    )
    @WhereJoinTable( clause = "created_on > DATEADD( 'DAY', -7, CURRENT_TIMESTAMP() )")
    private List<Reader> currentWeekReaders = new ArrayList<>();

    //Getters and setters omitted for brevity
}

@Entity(name = "Reader")
public static class Reader {

    @Id
    private Long id;

    private String name;

    //Getters and setters omitted for brevity
}
```

```sql
create table Book (
    id bigint not null,
    author varchar(255),
    title varchar(255),
    primary key (id)
)

create table Book_Reader (
    book_id bigint not null,
    reader_id bigint not null
)

create table Reader (
    id bigint not null,
    name varchar(255),
    primary key (id)
)

alter table Book_Reader
    add constraint FKsscixgaa5f8lphs9bjdtpf9g
    foreign key (reader_id)
    references Reader

alter table Book_Reader
    add constraint FKoyrwu9tnwlukd1616qhck21ra
    foreign key (book_id)
    references Book

alter table Book_Reader
    add created_on timestamp
    default current_timestamp
```
In the example above, the current week Reader entities are included in the currentWeekReaders collection which uses the @WhereJoinTable annotation to filter the joined table rows according to the provided SQL clause.

Considering that the following two Book_Reader entries are added into our system:

Example 322. @WhereJoinTable test data

```java
Book book = new Book();
book.setId( 1L );
book.setTitle( "High-Performance Java Persistence" );
book.setAuthor( "Vlad Mihalcea" );
entityManager.persist( book );

Reader reader1 = new Reader();
reader1.setId( 1L );
reader1.setName( "John Doe" );
entityManager.persist( reader1 );

Reader reader2 = new Reader();
reader2.setId( 2L );
reader2.setName( "John Doe Jr." );
entityManager.persist( reader2 );

statement.executeUpdate(
    "INSERT INTO Book_Reader " +
    "  (book_id, reader_id) " +
    "VALUES " +
    "  (1, 1) "
);
statement.executeUpdate(
    "INSERT INTO Book_Reader " +
    "  (book_id, reader_id, created_on) " +
    "VALUES " +
    "  (1, 2, DATEADD( 'DAY', -10, CURRENT_TIMESTAMP() )) "
);
```

When fetching the currentWeekReaders collection, Hibernate is going to find only one entry:

Example 323. @WhereJoinTable fetch example

```java
Book book = entityManager.find( Book.class, 1L );
assertEquals( 1, book.getCurrentWeekReaders().size() );
```

5.9.3. @Filter

The @Filter annotation is another way to filter out entities or collections using custom SQL criteria. Unlike the @Where annotation, @Filter allows you to parameterize the filter clause at runtime.

Now, considering we have the following Account entity:

Example 324. @Filter mapping entity-level usage

  1. @Entity(name = "Account")
  2. @FilterDef(
  3. name="activeAccount",
  4. parameters = @ParamDef(
  5. name="active",
  6. type="boolean"
  7. )
  8. )
  9. @Filter(
  10. name="activeAccount",
  11. condition="active_status = :active"
  12. )
  13. public static class Account {
  14. @Id
  15. private Long id;
  16. @ManyToOne(fetch = FetchType.LAZY)
  17. private Client client;
  18. @Column(name = "account_type")
  19. @Enumerated(EnumType.STRING)
  20. private AccountType type;
  21. private Double amount;
  22. private Double rate;
  23. @Column(name = "active_status")
  24. private boolean active;
  25. //Getters and setters omitted for brevity
  26. }

Notice that the active property is mapped to the active_status column.

This mapping was done to show you that the @Filter condition uses a SQL condition and not a JPQL filtering predicate.

As already explained, we can also apply the @Filter annotation for collections as illustrated by the Client entity:

Example 325. @Filter mapping collection-level usage

  1. @Entity(name = "Client")
  2. public static class Client {
  3. @Id
  4. private Long id;
  5. private String name;
  6. @OneToMany(
  7. mappedBy = "client",
  8. cascade = CascadeType.ALL
  9. )
  10. @Filter(
  11. name="activeAccount",
  12. condition="active_status = :active"
  13. )
  14. private List<Account> accounts = new ArrayList<>( );
  15. //Getters and setters omitted for brevity
  16. public void addAccount(Account account) {
  17. account.setClient( this );
  18. this.accounts.add( account );
  19. }
  20. }

If we persist a Client with three associated Account entities, Hibernate will execute the following SQL statements:

Example 326. Persisting and fetching entities with a @Filter mapping

  1. Client client = new Client()
  2. .setId( 1L )
  3. .setName( "John Doe" );
  4. client.addAccount(
  5. new Account()
  6. .setId( 1L )
  7. .setType( AccountType.CREDIT )
  8. .setAmount( 5000d )
  9. .setRate( 1.25 / 100 )
  10. .setActive( true )
  11. );
  12. client.addAccount(
  13. new Account()
  14. .setId( 2L )
  15. .setType( AccountType.DEBIT )
  16. .setAmount( 0d )
  17. .setRate( 1.05 / 100 )
  18. .setActive( false )
  19. );
  20. client.addAccount(
  21. new Account()
  22. .setType( AccountType.DEBIT )
  23. .setId( 3L )
  24. .setAmount( 250d )
  25. .setRate( 1.05 / 100 )
  26. .setActive( true )
  27. );
  28. entityManager.persist( client );
  1. INSERT INTO Client (name, id)
  2. VALUES ('John Doe', 1)
  3. INSERT INTO Account (active_status, amount, client_id, rate, account_type, id)
  4. VALUES (true, 5000.0, 1, 0.0125, 'CREDIT', 1)
  5. INSERT INTO Account (active_status, amount, client_id, rate, account_type, id)
  6. VALUES (false, 0.0, 1, 0.0105, 'DEBIT', 2)
  7. INSERT INTO Account (active_status, amount, client_id, rate, account_type, id)
  8. VALUES (true, 250.0, 1, 0.0105, 'DEBIT', 3)

By default, without explicitly enabling the filter, Hibernate is going to fetch all Account entities.

Example 327. Query entities mapped without activating the @Filter

  1. List<Account> accounts = entityManager.createQuery(
  2. "select a from Account a", Account.class)
  3. .getResultList();
  4. assertEquals( 3, accounts.size());
  1. SELECT
  2. a.id as id1_0_,
  3. a.active_status as active2_0_,
  4. a.amount as amount3_0_,
  5. a.client_id as client_i6_0_,
  6. a.rate as rate4_0_,
  7. a.account_type as account_5_0_
  8. FROM
  9. Account a

If the filter is enabled and the filter parameter value is provided, then Hibernate is going to apply the filtering criteria to the associated Account entities.

Example 328. Query entities mapped with @Filter

  1. entityManager
  2. .unwrap( Session.class )
  3. .enableFilter( "activeAccount" )
  4. .setParameter( "active", true);
  5. List<Account> accounts = entityManager.createQuery(
  6. "select a from Account a", Account.class)
  7. .getResultList();
  8. assertEquals( 2, accounts.size());
  1. SELECT
  2. a.id as id1_0_,
  3. a.active_status as active2_0_,
  4. a.amount as amount3_0_,
  5. a.client_id as client_i6_0_,
  6. a.rate as rate4_0_,
  7. a.account_type as account_5_0_
  8. FROM
  9. Account a
  10. WHERE
  11. a.active_status = true

Filters apply to entity queries, but not to direct fetching.

Therefore, in the following example, the filter is not taken into consideration when fetching an entity from the Persistence Context.

Fetching entities mapped with @Filter
  1. entityManager
  2. .unwrap( Session.class )
  3. .enableFilter( "activeAccount" )
  4. .setParameter( "active", true);
  5. Account account = entityManager.find( Account.class, 2L );
  6. assertFalse( account.isActive() );
  1. SELECT
  2. a.id as id1_0_0_,
  3. a.active_status as active2_0_0_,
  4. a.amount as amount3_0_0_,
  5. a.client_id as client_i6_0_0_,
  6. a.rate as rate4_0_0_,
  7. a.account_type as account_5_0_0_,
  8. c.id as id1_1_1_,
  9. c.name as name2_1_1_
  10. FROM
  11. Account a
  12. WHERE
  13. a.id = 2

As you can see from the example above, contrary to an entity query, the filter does not prevent the entity from being loaded.

Just like with entity queries, collections can be filtered as well, but only if the filter is explicitly enabled on the currently running Hibernate Session.

Example 329. Traversing collections without activating the @Filter

  1. Client client = entityManager.find( Client.class, 1L );
  2. assertEquals( 3, client.getAccounts().size() );
  1. SELECT
  2. c.id as id1_1_0_,
  3. c.name as name2_1_0_
  4. FROM
  5. Client c
  6. WHERE
  7. c.id = 1
  8. SELECT
  9. a.id as id1_0_,
  10. a.active_status as active2_0_,
  11. a.amount as amount3_0_,
  12. a.client_id as client_i6_0_,
  13. a.rate as rate4_0_,
  14. a.account_type as account_5_0_
  15. FROM
  16. Account a
  17. WHERE
  18. a.client_id = 1

When activating the @Filter and fetching the accounts collections, Hibernate is going to apply the filter condition to the associated collection entries.

Example 330. Traversing collections mapped with @Filter

  1. entityManager
  2. .unwrap( Session.class )
  3. .enableFilter( "activeAccount" )
  4. .setParameter( "active", true);
  5. Client client = entityManager.find( Client.class, 1L );
  6. assertEquals( 2, client.getAccounts().size() );
  1. SELECT
  2. c.id as id1_1_0_,
  3. c.name as name2_1_0_
  4. FROM
  5. Client c
  6. WHERE
  7. c.id = 1
  8. SELECT
  9. a.id as id1_0_,
  10. a.active_status as active2_0_,
  11. a.amount as amount3_0_,
  12. a.client_id as client_i6_0_,
  13. a.rate as rate4_0_,
  14. a.account_type as account_5_0_
  15. FROM
  16. Account a
  17. WHERE
  18. accounts0_.active_status = true
  19. and a.client_id = 1

The main advantage of @Filter over the @Where clause is that the filtering criteria can be customized at runtime.
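As a rough plain-Java sketch (hypothetical, not Hibernate internals), the difference boils down to a condition containing named parameters that each Session binds at runtime, versus a fixed SQL fragment baked into the mapping:

```java
import java.util.Map;

// Plain-Java sketch, not Hibernate internals: a @Where clause is a fixed SQL
// fragment, while a @Filter condition contains named parameters that each
// Session binds at runtime via setParameter().
public class FilterConditionSketch {

    // Substitutes runtime parameter values into a filter condition, roughly
    // what enableFilter("activeAccount").setParameter("active", true)
    // contributes to the generated WHERE clause.
    public static String render(String condition, Map<String, Object> params) {
        String sql = condition;
        for (Map.Entry<String, Object> e : params.entrySet()) {
            sql = sql.replace(":" + e.getKey(), String.valueOf(e.getValue()));
        }
        return sql;
    }

    public static void main(String[] args) {
        String condition = "active_status = :active";
        // The same mapping yields different restrictions in different sessions:
        System.out.println(render(condition, Map.of("active", true)));   // active_status = true
        System.out.println(render(condition, Map.of("active", false)));  // active_status = false
    }
}
```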

It’s not possible to combine the @Filter and @Cache collection annotations. This limitation exists to ensure consistency, because the filtering information is not stored in the second-level cache.

If caching were allowed for a currently filtered collection, the second-level cache would store only a subset of the whole collection. Afterward, every other Session would get the filtered collection from the cache, even if that Session had not explicitly activated the filter.

For this reason, the second-level collection cache is limited to storing whole collections, not subsets.
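To see why this restriction is necessary, consider a small sketch (hypothetical, not Hibernate's caching code) of what would happen if a filtering session were allowed to populate the collection cache:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch, not Hibernate's caching code: if a filtering session
// could populate the second-level collection cache with its subset, a later
// session with no filter enabled would read that subset from the cache.
public class FilteredCacheSketch {

    static final List<Long> databaseAccountIds = List.of(1L, 2L, 3L); // active: 1 and 3
    static final Map<String, List<Long>> collectionCache = new HashMap<>();

    static List<Long> loadAccounts(boolean filterActive, boolean cacheResult) {
        List<Long> cached = collectionCache.get("Client#1.accounts");
        if (cached != null) {
            return cached; // cache hit: the filter is never re-applied
        }
        List<Long> result = filterActive ? List.of(1L, 3L) : databaseAccountIds;
        if (cacheResult) {
            collectionCache.put("Client#1.accounts", result);
        }
        return result;
    }

    public static void main(String[] args) {
        // Session 1 filters and (hypothetically) caches its 2-element subset.
        loadAccounts(true, true);
        // Session 2 has no filter enabled, yet only sees 2 of the 3 accounts.
        System.out.println(loadAccounts(false, false).size()); // 2
    }
}
```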

5.9.4. @Filter with @SqlFragmentAlias

When using the @Filter annotation and working with entities that are mapped onto multiple database tables, you will need to use the @SqlFragmentAlias annotation if the @Filter defines a condition that uses predicates across multiple tables.

Example 331. @SqlFragmentAlias mapping usage

  1. @Entity(name = "Account")
  2. @Table(name = "account")
  3. @SecondaryTable(
  4. name = "account_details"
  5. )
  6. @SQLDelete(
  7. sql = "UPDATE account_details SET deleted = true WHERE id = ? "
  8. )
  9. @FilterDef(
  10. name="activeAccount",
  11. parameters = @ParamDef(
  12. name="active",
  13. type="boolean"
  14. )
  15. )
  16. @Filter(
  17. name="activeAccount",
  18. condition="{a}.active = :active and {ad}.deleted = false",
  19. aliases = {
  20. @SqlFragmentAlias( alias = "a", table= "account"),
  21. @SqlFragmentAlias( alias = "ad", table= "account_details"),
  22. }
  23. )
  24. public static class Account {
  25. @Id
  26. private Long id;
  27. private Double amount;
  28. private Double rate;
  29. private boolean active;
  30. @Column(table = "account_details")
  31. private boolean deleted;
  32. //Getters and setters omitted for brevity
  33. }

Now, when fetching the Account entities and activating the filter, Hibernate is going to apply the right table aliases to the filter predicates:

Example 332. Fetching a collection filtered with @SqlFragmentAlias

  1. entityManager
  2. .unwrap( Session.class )
  3. .enableFilter( "activeAccount" )
  4. .setParameter( "active", true);
  5. List<Account> accounts = entityManager.createQuery(
  6. "select a from Account a", Account.class)
  7. .getResultList();
  1. select
  2. filtersqlf0_.id as id1_0_,
  3. filtersqlf0_.active as active2_0_,
  4. filtersqlf0_.amount as amount3_0_,
  5. filtersqlf0_.rate as rate4_0_,
  6. filtersqlf0_1_.deleted as deleted1_1_
  7. from
  8. account filtersqlf0_
  9. left outer join
  10. account_details filtersqlf0_1_
  11. on filtersqlf0_.id=filtersqlf0_1_.id
  12. where
  13. filtersqlf0_.active = ?
  14. and filtersqlf0_1_.deleted = false
  15. -- binding parameter [1] as [BOOLEAN] - [true]

5.9.5. @FilterJoinTable

When using the @Filter annotation with collections, the filtering is done against the child entries (entities or embeddables). However, if you have a link table between the parent entity and the child table, then you need to use the @FilterJoinTable annotation to filter child entries according to some column contained in the join table.

The @FilterJoinTable annotation can be, therefore, applied to a unidirectional @OneToMany collection as illustrated in the following mapping:

Example 333. @FilterJoinTable mapping usage

  1. @Entity(name = "Client")
  2. @FilterDef(
  3. name="firstAccounts",
  4. parameters=@ParamDef(
  5. name="maxOrderId",
  6. type="int"
  7. )
  8. )
  9. @Filter(
  10. name="firstAccounts",
  11. condition="order_id <= :maxOrderId"
  12. )
  13. public static class Client {
  14. @Id
  15. private Long id;
  16. private String name;
  17. @OneToMany(cascade = CascadeType.ALL)
  18. @OrderColumn(name = "order_id")
  19. @FilterJoinTable(
  20. name="firstAccounts",
  21. condition="order_id <= :maxOrderId"
  22. )
  23. private List<Account> accounts = new ArrayList<>( );
  24. //Getters and setters omitted for brevity
  25. public void addAccount(Account account) {
  26. this.accounts.add( account );
  27. }
  28. }
  29. @Entity(name = "Account")
  30. public static class Account {
  31. @Id
  32. private Long id;
  33. @Column(name = "account_type")
  34. @Enumerated(EnumType.STRING)
  35. private AccountType type;
  36. private Double amount;
  37. private Double rate;
  38. //Getters and setters omitted for brevity
  39. }

The firstAccounts filter will allow us to get only the Account entities that have the order_id (which tells the position of every entry inside the accounts collection) less than a given number (e.g. maxOrderId).

Let’s assume our database contains the following entities:

Example 334. Persisting and fetching entities with a @FilterJoinTable mapping

  1. Client client = new Client()
  2. .setId( 1L )
  3. .setName( "John Doe" );
  4. client.addAccount(
  5. new Account()
  6. .setId( 1L )
  7. .setType( AccountType.CREDIT )
  8. .setAmount( 5000d )
  9. .setRate( 1.25 / 100 )
  10. );
  11. client.addAccount(
  12. new Account()
  13. .setId( 2L )
  14. .setType( AccountType.DEBIT )
  15. .setAmount( 0d )
  16. .setRate( 1.05 / 100 )
  17. );
  18. client.addAccount(
  19. new Account()
  20. .setType( AccountType.DEBIT )
  21. .setId( 3L )
  22. .setAmount( 250d )
  23. .setRate( 1.05 / 100 )
  24. );
  25. entityManager.persist( client );
  1. INSERT INTO Client (name, id)
  2. VALUES ('John Doe', 1)
  3. INSERT INTO Account (amount, client_id, rate, account_type, id)
  4. VALUES (5000.0, 1, 0.0125, 'CREDIT', 1)
  5. INSERT INTO Account (amount, client_id, rate, account_type, id)
  6. VALUES (0.0, 1, 0.0105, 'DEBIT', 2)
  7. INSERT INTO Account (amount, client_id, rate, account_type, id)
  8. VALUES (250.0, 1, 0.0105, 'DEBIT', 3)
  9. INSERT INTO Client_Account (Client_id, order_id, accounts_id)
  10. VALUES (1, 0, 1)
  11. INSERT INTO Client_Account (Client_id, order_id, accounts_id)
  12. VALUES (1, 1, 2)
  13. INSERT INTO Client_Account (Client_id, order_id, accounts_id)
  14. VALUES (1, 2, 3)

The collections can be filtered only if the associated filter is enabled on the currently running Hibernate Session.

Example 335. Traversing collections mapped with @FilterJoinTable without enabling the filter

  1. Client client = entityManager.find( Client.class, 1L );
  2. assertEquals( 3, client.getAccounts().size());
  1. SELECT
  2. ca.Client_id as Client_i1_2_0_,
  3. ca.accounts_id as accounts2_2_0_,
  4. ca.order_id as order_id3_0_,
  5. a.id as id1_0_1_,
  6. a.amount as amount3_0_1_,
  7. a.rate as rate4_0_1_,
  8. a.account_type as account_5_0_1_
  9. FROM
  10. Client_Account ca
  11. INNER JOIN
  12. Account a
  13. ON ca.accounts_id=a.id
  14. WHERE
  15. ca.Client_id = ?
  16. -- binding parameter [1] as [BIGINT] - [1]

If we enable the filter and set the maxOrderId to 1 when fetching the accounts collections, Hibernate is going to apply the @FilterJoinTable clause filtering criteria, and we will get just 2 Account entities, with the order_id values of 0 and 1.

Example 336. Traversing collections mapped with @FilterJoinTable

  1. Client client = entityManager.find( Client.class, 1L );
  2. entityManager
  3. .unwrap( Session.class )
  4. .enableFilter( "firstAccounts" )
  5. .setParameter( "maxOrderId", 1);
  6. assertEquals( 2, client.getAccounts().size());
  1. SELECT
  2. ca.Client_id as Client_i1_2_0_,
  3. ca.accounts_id as accounts2_2_0_,
  4. ca.order_id as order_id3_0_,
  5. a.id as id1_0_1_,
  6. a.amount as amount3_0_1_,
  7. a.rate as rate4_0_1_,
  8. a.account_type as account_5_0_1_
  9. FROM
  10. Client_Account ca
  11. INNER JOIN
  12. Account a
  13. ON ca.accounts_id=a.id
  14. WHERE
  15. ca.order_id <= ?
  16. AND ca.Client_id = ?
  17. -- binding parameter [1] as [INTEGER] - [1]
  18. -- binding parameter [2] as [BIGINT] - [1]

5.10. Modifying managed/persistent state

Entities in managed/persistent state may be manipulated by the application, and any changes will be automatically detected and persisted when the persistence context is flushed. There is no need to call a particular method to make your modifications persistent.

Example 337. Modifying managed state with JPA

  1. Person person = entityManager.find( Person.class, personId );
  2. person.setName("John Doe");
  3. entityManager.flush();

Example 338. Modifying managed state with Hibernate API

  1. Person person = session.byId( Person.class ).load( personId );
  2. person.setName("John Doe");
  3. session.flush();

By default, when you modify an entity, all columns except the identifier are included in the SQL UPDATE statement.

Therefore, considering you have the following Product entity mapping:

Example 339. Product entity mapping

  1. @Entity(name = "Product")
  2. public static class Product {
  3. @Id
  4. private Long id;
  5. @Column
  6. private String name;
  7. @Column
  8. private String description;
  9. @Column(name = "price_cents")
  10. private Integer priceCents;
  11. @Column
  12. private Integer quantity;
  13. //Getters and setters are omitted for brevity
  14. }

If you persist the following Product entity:

Example 340. Persisting a Product entity

  1. Product book = new Product();
  2. book.setId( 1L );
  3. book.setName( "High-Performance Java Persistence" );
  4. book.setDescription( "Get the most out of your persistence layer" );
  5. book.setPriceCents( 29_99 );
  6. book.setQuantity( 10_000 );
  7. entityManager.persist( book );

When you modify the Product entity, Hibernate generates the following SQL UPDATE statement:

Example 341. Modifying the Product entity

  1. doInJPA( this::entityManagerFactory, entityManager -> {
  2. Product book = entityManager.find( Product.class, 1L );
  3. book.setPriceCents( 24_99 );
  4. } );
  1. UPDATE
  2. Product
  3. SET
  4. description = ?,
  5. name = ?,
  6. price_cents = ?,
  7. quantity = ?
  8. WHERE
  9. id = ?
  10. -- binding parameter [1] as [VARCHAR] - [Get the most out of your persistence layer]
  11. -- binding parameter [2] as [VARCHAR] - [High-Performance Java Persistence]
  12. -- binding parameter [3] as [INTEGER] - [2499]
  13. -- binding parameter [4] as [INTEGER] - [10000]
  14. -- binding parameter [5] as [BIGINT] - [1]

The default UPDATE statement containing all columns has two advantages:

  • it allows you to better benefit from JDBC Statement caching.

  • it allows you to enable batch updates even if multiple entities modify different properties.

However, there is also one downside to including all columns in the SQL UPDATE statement. If the table has multiple indexes, the database might update the associated index entries redundantly even though the indexed column values did not actually change.

To fix this issue, you can use dynamic updates.

5.10.1. Dynamic updates

To enable dynamic updates, you need to annotate the entity with the @DynamicUpdate annotation:

Example 342. Product entity mapping

  1. @Entity(name = "Product")
  2. @DynamicUpdate
  3. public static class Product {
  4. @Id
  5. private Long id;
  6. @Column
  7. private String name;
  8. @Column
  9. private String description;
  10. @Column(name = "price_cents")
  11. private Integer priceCents;
  12. @Column
  13. private Integer quantity;
  14. //Getters and setters are omitted for brevity
  15. }

This time, when rerunning the previous test case, Hibernate generates the following SQL UPDATE statement:

Example 343. Modifying the Product entity with a dynamic update

  1. UPDATE
  2. Product
  3. SET
  4. price_cents = ?
  5. WHERE
  6. id = ?
  7. -- binding parameter [1] as [INTEGER] - [2499]
  8. -- binding parameter [2] as [BIGINT] - [1]

The dynamic update allows you to set just the columns that were modified in the associated entity.
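The idea can be sketched in plain Java (an illustration, not Hibernate's code generation): the SET clause is assembled from the columns whose values differ from the load-time snapshot.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Objects;
import java.util.stream.Collectors;

// Illustrative sketch, not Hibernate's code generation: with @DynamicUpdate
// the SET clause is built at flush time from the columns that actually
// changed, instead of a cached statement that always lists every column.
public class DynamicUpdateSketch {

    public static String buildUpdate(String table,
                                     Map<String, Object> snapshot,
                                     Map<String, Object> current) {
        String setClause = current.entrySet().stream()
            .filter(e -> !Objects.equals(snapshot.get(e.getKey()), e.getValue()))
            .map(e -> e.getKey() + " = ?")
            .collect(Collectors.joining(", "));
        return "UPDATE " + table + " SET " + setClause + " WHERE id = ?";
    }

    public static void main(String[] args) {
        Map<String, Object> snapshot = new LinkedHashMap<>();
        snapshot.put("name", "High-Performance Java Persistence");
        snapshot.put("price_cents", 29_99);
        snapshot.put("quantity", 10_000);

        Map<String, Object> current = new LinkedHashMap<>(snapshot);
        current.put("price_cents", 24_99); // only the price was modified

        System.out.println(buildUpdate("Product", snapshot, current));
        // UPDATE Product SET price_cents = ? WHERE id = ?
    }
}
```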

5.11. Refresh entity state

You can reload an entity instance and its collections at any time.

Example 344. Refreshing entity state with JPA

  1. Person person = entityManager.find( Person.class, personId );
  2. entityManager.createQuery( "update Person set name = UPPER(name)" ).executeUpdate();
  3. entityManager.refresh( person );
  4. assertEquals("JOHN DOE", person.getName() );

Example 345. Refreshing entity state with Hibernate API

  1. Person person = session.byId( Person.class ).load( personId );
  2. session.doWork( connection -> {
  3. try(Statement statement = connection.createStatement()) {
  4. statement.executeUpdate( "UPDATE Person SET name = UPPER(name)" );
  5. }
  6. } );
  7. session.refresh( person );
  8. assertEquals("JOHN DOE", person.getName() );

One case where this is useful is when it is known that the database state has changed since the data was read. Refreshing allows the current database state to be pulled into the entity instance and the persistence context.

Another case where this might be useful is when database triggers are used to initialize some of the properties of the entity.

Only the entity instance and its value type collections are refreshed unless you specify REFRESH as a cascade style of any associations. However, please note that Hibernate has the capability to handle this automatically through its notion of generated properties. See the discussion of non-identifier generated attributes.

Traditionally, Hibernate allowed detached entities to be refreshed. Unfortunately, JPA prohibits this practice and specifies that an IllegalArgumentException should be thrown instead.

For this reason, when bootstrapping the Hibernate SessionFactory using the native API, the legacy detached entity refresh behavior is going to be preserved. On the other hand, when bootstrapping Hibernate through the JPA EntityManagerFactory building process, detached entities are not allowed to be refreshed by default.

However, this default behavior can be overridden through the hibernate.allow_refresh_detached_entity configuration property. If this property is explicitly set to true, then you can refresh detached entities even when using the JPA bootstrap mechanism, therefore bypassing the JPA specification restriction.

For more about the hibernate.allow_refresh_detached_entity configuration property, check out the Configurations section as well.

5.11.1. Refresh gotchas

The refresh entity state transition is meant to overwrite the entity attributes according to the info currently contained in the associated database record.

However, you have to be very careful when cascading the refresh action to any transient entity.

For instance, consider the following example:

Example 346. Refreshing entity state gotcha

  1. try {
  2. Person person = entityManager.find( Person.class, personId );
  3. Book book = new Book();
  4. book.setId( 100L );
  5. book.setTitle( "Hibernate User Guide" );
  6. book.setAuthor( person );
  7. person.getBooks().add( book );
  8. entityManager.refresh( person );
  9. }
  10. catch ( EntityNotFoundException expected ) {
  11. log.info( "Beware when cascading the refresh associations to transient entities!" );
  12. }

In the aforementioned example, an EntityNotFoundException is thrown because the Book entity is still in a transient state. When the refresh action is cascaded from the Person entity, Hibernate will not be able to locate the Book entity in the database.

For this reason, you should be very careful when mixing the refresh action with transient child entity objects.

5.12. Working with detached data

Detachment is the process of working with data outside the scope of any persistence context. Data becomes detached in a number of ways. Once the persistence context is closed, all data that was associated with it becomes detached. Clearing the persistence context has the same effect. Evicting a particular entity from the persistence context makes it detached. And finally, serialization will make the deserialized form be detached (the original instance is still managed).

Detached data can still be manipulated; however, the persistence context will no longer automatically know about these modifications, and the application will need to intervene to make the changes persistent again.

5.12.1. Reattaching detached data

Reattachment is the process of taking an incoming entity instance that is in the detached state and re-associating it with the current persistence context.

JPA does not support reattaching detached data. This capability is only available through the native Hibernate org.hibernate.Session API.

Example 347. Reattaching a detached entity using lock

  1. Person person = session.byId( Person.class ).load( personId );
  2. //Clear the Session so the person entity becomes detached
  3. session.clear();
  4. person.setName( "Mr. John Doe" );
  5. session.lock( person, LockMode.NONE );

Example 348. Reattaching a detached entity using saveOrUpdate

  1. Person person = session.byId( Person.class ).load( personId );
  2. //Clear the Session so the person entity becomes detached
  3. session.clear();
  4. person.setName( "Mr. John Doe" );
  5. session.saveOrUpdate( person );

The method name update is a bit misleading here. It does not mean that an SQL UPDATE is immediately performed. It does, however, mean that an SQL UPDATE will be performed when the persistence context is flushed since Hibernate does not know its previous state against which to compare for changes. If the entity is mapped with select-before-update, Hibernate will pull the current state from the database and see if an update is needed.

Provided the entity is detached, update and saveOrUpdate operate exactly the same.

5.12.2. Merging detached data

Merging is the process of taking an incoming entity instance that is in the detached state and copying its data over onto a new managed instance.

Although not exactly how merge works internally, the following example is a good visualization of the merge operation.

Example 349. Visualizing merge

  1. public Person merge(Person detached) {
  2. Person newReference = session.byId( Person.class ).load( detached.getId() );
  3. newReference.setName( detached.getName() );
  4. return newReference;
  5. }

Example 350. Merging a detached entity with JPA

  1. Person person = entityManager.find( Person.class, personId );
  2. //Clear the EntityManager so the person entity becomes detached
  3. entityManager.clear();
  4. person.setName( "Mr. John Doe" );
  5. person = entityManager.merge( person );

Example 351. Merging a detached entity with Hibernate API

  1. Person person = session.byId( Person.class ).load( personId );
  2. //Clear the Session so the person entity becomes detached
  3. session.clear();
  4. person.setName( "Mr. John Doe" );
  5. person = (Person) session.merge( person );
Merging gotchas

For example, Hibernate throws an IllegalStateException when merging a parent entity that references two detached child entities, child1 and child2 (obtained from different sessions), where child1 and child2 represent the same persistent entity, Child.

A new configuration property, hibernate.event.merge.entity_copy_observer, controls how Hibernate will respond when multiple representations of the same persistent entity (“entity copy”) are detected while merging.

The possible values are:

disallow (the default)

throws IllegalStateException if an entity copy is detected

allow

performs the merge operation on each entity copy that is detected

log

(provided for testing only) performs the merge operation on each entity copy that is detected and logs information about the entity copies. This setting requires DEBUG logging to be enabled for org.hibernate.event.internal.EntityCopyAllowedLoggedObserver.

In addition, the application may customize the behavior by providing an implementation of org.hibernate.event.spi.EntityCopyObserver and setting hibernate.event.merge.entity_copy_observer to the class name. When this property is set to allow or log, Hibernate will merge each entity copy detected while cascading the merge operation. In the process of merging each entity copy, Hibernate will cascade the merge operation from each entity copy to its associations with cascade=CascadeType.MERGE or CascadeType.ALL. The entity state resulting from merging an entity copy will be overwritten when another entity copy is merged.

Because cascade order is undefined, the order in which the entity copies are merged is undefined. As a result, if property values in the entity copies are not consistent, the resulting entity state will be indeterminate, and data will be lost from all entity copies except for the last one merged. Therefore, the last writer wins.
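The "last writer wins" outcome can be sketched in plain Java (a conceptual model, not Hibernate's implementation): each detached copy is merged onto the single managed instance in turn, so the last copy's state survives.

```java
import java.util.HashMap;
import java.util.Map;

// Conceptual sketch, not Hibernate's implementation: with entity copies
// allowed, each detached representation is merged in an undefined order onto
// the single managed instance, so the state of the last copy merged wins.
public class EntityCopyMergeSketch {

    // Managed instances keyed by identifier, standing in for the persistence context.
    private final Map<Long, Map<String, Object>> context = new HashMap<>();

    public Map<String, Object> merge(Long id, Map<String, Object> detachedCopy) {
        Map<String, Object> managed = context.computeIfAbsent(id, k -> new HashMap<>());
        managed.putAll(detachedCopy); // this copy's state overwrites the previous copy's
        return managed;
    }

    public static void main(String[] args) {
        EntityCopyMergeSketch session = new EntityCopyMergeSketch();
        // Two detached representations of the same Child entity, id = 1:
        session.merge(1L, Map.of("name", "Child loaded in session A"));
        Map<String, Object> managed = session.merge(1L, Map.of("name", "Child loaded in session B"));
        System.out.println(managed.get("name")); // Child loaded in session B
    }
}
```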

If an entity copy cascades the merge operation to an association that is (or contains) a new entity, that new entity will be merged (i.e., persisted and the merge operation will be cascaded to its associations according to its mapping), even if that same association is ultimately overwritten when Hibernate merges a different representation having a different value for its association.

If the association is mapped with orphanRemoval = true, the new entity will not be deleted because the semantics of orphanRemoval do not apply if the entity being orphaned is a new entity.

There are known issues when representations of the same persistent entity have different values for a collection. See HHH-9239 and HHH-9240 for more details. These issues can cause data loss or corruption.

By setting the hibernate.event.merge.entity_copy_observer configuration property to allow or log, Hibernate will allow entity copies of any type of entity to be merged.

The only way to exclude particular entity classes or associations that contain critical data is to provide a custom implementation of org.hibernate.event.spi.EntityCopyObserver with the desired behavior and to set hibernate.event.merge.entity_copy_observer to the class name.

Hibernate provides limited DEBUG logging capabilities that can help determine the entity classes for which entity copies were found. By setting hibernate.event.merge.entity_copy_observer to log and enabling DEBUG logging for org.hibernate.event.internal.EntityCopyAllowedLoggedObserver, the following will be logged each time an application calls EntityManager.merge( entity ) or Session.merge( entity ):

  • number of times multiple representations of the same persistent entity were detected, summarized by entity name;

  • details by entity name and ID, including output from calling toString() on each representation being merged as well as the merge result.

The log should be reviewed to determine if multiple representations of entities containing critical data are detected. If so, the application should be modified so there is only one representation, and a custom implementation of org.hibernate.event.spi.EntityCopyObserver should be provided to disallow entity copies for entities with critical data.

Using optimistic locking is recommended to detect if different representations are from different versions of the same persistent entity. If they are not from the same version, Hibernate will throw either the JPA OptimisticLockException or the native StaleObjectStateException depending on your bootstrapping strategy.

5.13. Checking persistent state

An application can verify the state of entities and collections in relation to the persistence context.

Example 352. Verifying managed state with JPA

  1. boolean contained = entityManager.contains( person );

Example 353. Verifying managed state with Hibernate API

  1. boolean contained = session.contains( person );

Example 354. Verifying laziness with JPA

  1. PersistenceUnitUtil persistenceUnitUtil = entityManager.getEntityManagerFactory().getPersistenceUnitUtil();
  2. boolean personInitialized = persistenceUnitUtil.isLoaded( person );
  3. boolean personBooksInitialized = persistenceUnitUtil.isLoaded( person.getBooks() );
  4. boolean personNameInitialized = persistenceUnitUtil.isLoaded( person, "name" );

Example 355. Verifying laziness with Hibernate API

  1. boolean personInitialized = Hibernate.isInitialized( person );
  2. boolean personBooksInitialized = Hibernate.isInitialized( person.getBooks() );
  3. boolean personNameInitialized = Hibernate.isPropertyInitialized( person, "name" );

In JPA, there is an alternative means of checking laziness using the following javax.persistence.PersistenceUtil pattern (which is recommended wherever possible).

Example 356. Alternative JPA means to verify laziness

  1. PersistenceUtil persistenceUnitUtil = Persistence.getPersistenceUtil();
  2. boolean personInitialized = persistenceUnitUtil.isLoaded( person );
  3. boolean personBooksInitialized = persistenceUnitUtil.isLoaded( person.getBooks() );
  4. boolean personNameInitialized = persistenceUnitUtil.isLoaded( person, "name" );

5.14. Evicting entities

When the flush() method is called, the state of the entity is synchronized with the database. If you do not want this synchronization to occur, or if you are processing a huge number of objects and need to manage memory efficiently, the evict() method can be used to remove the object and its collections from the first-level cache.

Example 357. Detaching an entity from the EntityManager

  1. for(Person person : entityManager.createQuery("select p from Person p", Person.class)
  2. .getResultList()) {
  3. dtos.add(toDTO(person));
  4. entityManager.detach( person );
  5. }

Example 358. Evicting an entity from the Hibernate Session

  1. Session session = entityManager.unwrap( Session.class );
  2. for(Person person : (List<Person>) session.createQuery("select p from Person p").list()) {
  3. dtos.add(toDTO(person));
  4. session.evict( person );
  5. }

To detach all entities from the current persistence context, both the EntityManager and the Hibernate Session define a clear() method.

Example 359. Clearing the persistence context

  1. entityManager.clear();
  2. session.clear();

To verify if an entity instance is currently attached to the running persistence context, both the EntityManager and the Hibernate Session define a contains(Object entity) method.

Example 360. Verify if an entity is contained in a persistence context

  1. entityManager.contains( person );
  2. session.contains( person );

5.15. Cascading entity state transitions

JPA allows you to propagate the state transition from a parent entity to a child. For this purpose, the JPA javax.persistence.CascadeType defines various cascade types:

ALL

cascades all entity state transitions.

PERSIST

cascades the entity persist operation.

MERGE

cascades the entity merge operation.

REMOVE

cascades the entity remove operation.

REFRESH

cascades the entity refresh operation.

DETACH

cascades the entity detach operation.

Additionally, the CascadeType.ALL will propagate any Hibernate-specific operation, which is defined by the org.hibernate.annotations.CascadeType enum:

SAVE_UPDATE

cascades the entity saveOrUpdate operation.

REPLICATE

cascades the entity replicate operation.

LOCK

cascades the entity lock operation.
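The Hibernate-specific cascade types above are declared with the org.hibernate.annotations.@Cascade annotation placed on the association mapping. The following is a minimal sketch, not a complete mapping; the Person/Phone names simply mirror the entities used below:

```java
import java.util.ArrayList;
import java.util.List;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.OneToMany;
import org.hibernate.annotations.Cascade;
// note: the Hibernate-specific enum, not javax.persistence.CascadeType
import org.hibernate.annotations.CascadeType;

@Entity
public class Person {

	@Id
	private Long id;

	// cascades Session.saveOrUpdate() from the Person to its phones
	@OneToMany(mappedBy = "owner")
	@Cascade(CascadeType.SAVE_UPDATE)
	private List<Phone> phones = new ArrayList<>();

	//Getters and setters are omitted for brevity
}
```

JPA cascade types, by contrast, go in the cascade attribute of the association annotation itself, e.g. @OneToMany(cascade = CascadeType.ALL).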

The following examples illustrate some of the aforementioned cascade operations using these entities:

  1. @Entity
  2. public class Person {
  3. @Id
  4. private Long id;
  5. private String name;
  6. @OneToMany(mappedBy = "owner", cascade = CascadeType.ALL)
  7. private List<Phone> phones = new ArrayList<>();
  8. //Getters and setters are omitted for brevity
  9. public void addPhone(Phone phone) {
  10. this.phones.add( phone );
  11. phone.setOwner( this );
  12. }
  13. }
  14. @Entity
  15. public class Phone {
  16. @Id
  17. private Long id;
  18. @Column(name = "`number`")
  19. private String number;
  20. @ManyToOne(fetch = FetchType.LAZY)
  21. private Person owner;
  22. //Getters and setters are omitted for brevity
  23. }

5.15.1. CascadeType.PERSIST

The CascadeType.PERSIST allows us to persist a child entity along with the parent one.

Example 361. CascadeType.PERSIST example

  1. Person person = new Person();
  2. person.setId( 1L );
  3. person.setName( "John Doe" );
  4. Phone phone = new Phone();
  5. phone.setId( 1L );
  6. phone.setNumber( "123-456-7890" );
  7. person.addPhone( phone );
  8. entityManager.persist( person );
  1. INSERT INTO Person ( name, id )
  2. VALUES ( 'John Doe', 1 )
  3. INSERT INTO Phone ( `number`, person_id, id )
  4. VALUES ( '123-456-7890', 1, 1 )

Even though only the Person parent entity was persisted, Hibernate cascaded the persist operation to the associated Phone child entity as well.

5.15.2. CascadeType.MERGE

The CascadeType.MERGE allows us to merge a child entity along with the parent one.

Example 362. CascadeType.MERGE example

  1. Phone phone = entityManager.find( Phone.class, 1L );
  2. Person person = phone.getOwner();
  3. person.setName( "John Doe Jr." );
  4. phone.setNumber( "987-654-3210" );
  5. entityManager.clear();
  6. entityManager.merge( person );
  1. SELECT
  2. p.id as id1_0_1_,
  3. p.name as name2_0_1_,
  4. ph.owner_id as owner_id3_1_3_,
  5. ph.id as id1_1_3_,
  6. ph.id as id1_1_0_,
  7. ph."number" as number2_1_0_,
  8. ph.owner_id as owner_id3_1_0_
  9. FROM
  10. Person p
  11. LEFT OUTER JOIN
  12. Phone ph
  13. on p.id=ph.owner_id
  14. WHERE
  15. p.id = 1

During merge, the current state of the entity is copied onto the entity version that was just fetched from the database. That’s why Hibernate executed the SELECT statement which fetched the Person entity along with its children.

5.15.3. CascadeType.REMOVE

The CascadeType.REMOVE allows us to remove a child entity along with the parent one. Traditionally, Hibernate called this operation delete, which is why org.hibernate.annotations.CascadeType provides a DELETE cascade option. However, CascadeType.REMOVE and org.hibernate.annotations.CascadeType.DELETE are identical.

Example 363. CascadeType.REMOVE example

  1. Person person = entityManager.find( Person.class, 1L );
  2. entityManager.remove( person );
  1. DELETE FROM Phone WHERE id = 1
  2. DELETE FROM Person WHERE id = 1

5.15.4. CascadeType.DETACH

CascadeType.DETACH is used to propagate the detach operation from a parent entity to a child.

Example 364. CascadeType.DETACH example

  1. Person person = entityManager.find( Person.class, 1L );
  2. assertEquals( 1, person.getPhones().size() );
  3. Phone phone = person.getPhones().get( 0 );
  4. assertTrue( entityManager.contains( person ));
  5. assertTrue( entityManager.contains( phone ));
  6. entityManager.detach( person );
  7. assertFalse( entityManager.contains( person ));
  8. assertFalse( entityManager.contains( phone ));

5.15.5. CascadeType.LOCK

Although it may seem unintuitive, CascadeType.LOCK does not propagate a lock request from a parent entity to its children. Such a use case requires setting the javax.persistence.lock.scope property to PessimisticLockScope.EXTENDED.

However, CascadeType.LOCK allows us to reattach a parent entity along with its children to the currently running Persistence Context.

Example 365. CascadeType.LOCK example

  1. Person person = entityManager.find( Person.class, 1L );
  2. assertEquals( 1, person.getPhones().size() );
  3. Phone phone = person.getPhones().get( 0 );
  4. assertTrue( entityManager.contains( person ) );
  5. assertTrue( entityManager.contains( phone ) );
  6. entityManager.detach( person );
  7. assertFalse( entityManager.contains( person ) );
  8. assertFalse( entityManager.contains( phone ) );
  9. entityManager.unwrap( Session.class )
  10. .buildLockRequest( new LockOptions( LockMode.NONE ) )
  11. .lock( person );
  12. assertTrue( entityManager.contains( person ) );
  13. assertTrue( entityManager.contains( phone ) );

5.15.6. CascadeType.REFRESH

The CascadeType.REFRESH is used to propagate the refresh operation from a parent entity to a child. The refresh operation will discard the current entity state, and it will override it using the one loaded from the database.

Example 366. CascadeType.REFRESH example

  1. Person person = entityManager.find( Person.class, 1L );
  2. Phone phone = person.getPhones().get( 0 );
  3. person.setName( "John Doe Jr." );
  4. phone.setNumber( "987-654-3210" );
  5. entityManager.refresh( person );
  6. assertEquals( "John Doe", person.getName() );
  7. assertEquals( "123-456-7890", phone.getNumber() );
  1. SELECT
  2. p.id as id1_0_1_,
  3. p.name as name2_0_1_,
  4. ph.owner_id as owner_id3_1_3_,
  5. ph.id as id1_1_3_,
  6. ph.id as id1_1_0_,
  7. ph."number" as number2_1_0_,
  8. ph.owner_id as owner_id3_1_0_
  9. FROM
  10. Person p
  11. LEFT OUTER JOIN
  12. Phone ph
  13. ON p.id=ph.owner_id
  14. WHERE
  15. p.id = 1

In the aforementioned example, you can see that both the Person and Phone entities are refreshed even though we called the refresh operation on the parent entity only.

5.15.7. CascadeType.REPLICATE

The CascadeType.REPLICATE is used to replicate both the parent and the child entities. The replicate operation allows you to synchronize entities coming from different sources of data.

Example 367. CascadeType.REPLICATE example

  1. Person person = new Person();
  2. person.setId( 1L );
  3. person.setName( "John Doe Sr." );
  4. Phone phone = new Phone();
  5. phone.setId( 1L );
  6. phone.setNumber( "(01) 123-456-7890" );
  7. person.addPhone( phone );
  8. entityManager.unwrap( Session.class ).replicate( person, ReplicationMode.OVERWRITE );
  1. SELECT
  2. id
  3. FROM
  4. Person
  5. WHERE
  6. id = 1
  7. SELECT
  8. id
  9. FROM
  10. Phone
  11. WHERE
  12. id = 1
  13. UPDATE
  14. Person
  15. SET
  16. name = 'John Doe Sr.'
  17. WHERE
  18. id = 1
  19. UPDATE
  20. Phone
  21. SET
  22. "number" = '(01) 123-456-7890',
  23. owner_id = 1
  24. WHERE
  25. id = 1

As illustrated by the SQL statements being generated, both the Person and Phone entities are replicated to the underlying database rows.

5.15.8. @OnDelete cascade

While the previous cascade types propagate entity state transitions, the @OnDelete cascade is a DDL-level FK feature which allows you to remove a child record whenever the parent row is deleted.

So, when annotating the @ManyToOne association with @OnDelete( action = OnDeleteAction.CASCADE ), the automatic schema generator will apply the ON DELETE CASCADE SQL directive to the Foreign Key declaration, as illustrated by the following example.

Example 368. @OnDelete @ManyToOne mapping

  1. @Entity(name = "Person")
  2. public static class Person {
  3. @Id
  4. private Long id;
  5. private String name;
  6. //Getters and setters are omitted for brevity
  7. }
  1. @Entity(name = "Phone")
  2. public static class Phone {
  3. @Id
  4. private Long id;
  5. @Column(name = "`number`")
  6. private String number;
  7. @ManyToOne(fetch = FetchType.LAZY)
  8. @OnDelete( action = OnDeleteAction.CASCADE )
  9. private Person owner;
  10. //Getters and setters are omitted for brevity
  11. }
  1. create table Person (
  2. id bigint not null,
  3. name varchar(255),
  4. primary key (id)
  5. )
  6. create table Phone (
  7. id bigint not null,
  8. "number" varchar(255),
  9. owner_id bigint,
  10. primary key (id)
  11. )
  12. alter table Phone
  13. add constraint FK82m836qc1ss2niru7eogfndhl
  14. foreign key (owner_id)
  15. references Person
  16. on delete cascade

Now, you can just remove the Person entity, and the associated Phone entities are going to be deleted automatically via the Foreign Key cascade.

Example 369. @OnDelete @ManyToOne delete example

  1. Person person = entityManager.find( Person.class, 1L );
  2. entityManager.remove( person );
  1. delete from Person where id = ?
  2. -- binding parameter [1] as [BIGINT] - [1]

The @OnDelete annotation can also be placed on a collection, as illustrated in the following example.

Example 370. @OnDelete @OneToMany mapping

  1. @Entity(name = "Person")
  2. public static class Person {
  3. @Id
  4. private Long id;
  5. private String name;
  6. @OneToMany(mappedBy = "owner", cascade = CascadeType.ALL)
  7. @OnDelete(action = OnDeleteAction.CASCADE)
  8. private List<Phone> phones = new ArrayList<>();
  9. //Getters and setters are omitted for brevity
  10. }
  1. @Entity(name = "Phone")
  2. public static class Phone {
  3. @Id
  4. private Long id;
  5. @Column(name = "`number`")
  6. private String number;
  7. @ManyToOne(fetch = FetchType.LAZY)
  8. private Person owner;
  9. //Getters and setters are omitted for brevity
  10. }

Now, when removing the Person entity, all the associated Phone child entities are deleted via the Foreign Key cascade even if the @OneToMany collection was using the CascadeType.ALL attribute.

Example 371. @OnDelete @OneToMany delete example

  1. Person person = entityManager.find( Person.class, 1L );
  2. entityManager.remove( person );
  1. delete from Person where id = ?
  2. -- binding parameter [1] as [BIGINT] - [1]

Without the @OnDelete annotation, the @OneToMany association relies on the cascade attribute to propagate the remove entity state transition from the parent entity to its children. However, when the @OnDelete annotation is in place, Hibernate prevents the child entity DELETE statement from being executed while flushing the Persistence Context.

This way, only the parent entity gets deleted, and all the associated child records are removed by the database engine, instead of being deleted explicitly via DELETE statements.

5.16. Exception handling

If the JPA EntityManager or the Hibernate-specific Session throws an exception, including any JDBC SQLException, you have to immediately roll back the database transaction and close the current EntityManager or Session.

Certain methods of the JPA EntityManager or the Hibernate Session will not leave the Persistence Context in a consistent state. As a rule of thumb, no exception thrown by Hibernate can be treated as recoverable. Ensure that the Session will be closed by calling the close() method in a finally block.

Rolling back the database transaction does not put your business objects back into the state they were at the start of the transaction. This means that the database state and the business objects will be out of sync. Usually, this is not a problem because exceptions are not recoverable and you will have to start over after rollback anyway.
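This rule of thumb is often expressed with the following idiom. This is only a sketch: the sessionFactory reference and the application logic placeholder are assumptions, not part of this guide's examples:

```java
// obtain a new Session for this unit of work; 'sessionFactory' is assumed to exist
Session session = sessionFactory.openSession();
Transaction tx = null;
try {
	tx = session.beginTransaction();
	// ... application logic operating on the session ...
	tx.commit();
}
catch (RuntimeException e) {
	// no Hibernate exception is recoverable: roll back and propagate
	if ( tx != null ) {
		tx.rollback();
	}
	throw e;
}
finally {
	// the Session may be left in an inconsistent state, so always close it
	session.close();
}
```

In container-managed or framework-managed environments (e.g. JTA or Spring), the rollback and close steps are typically performed by the surrounding infrastructure instead.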

The JPA PersistenceException or the HibernateException wraps most of the errors that can occur in a Hibernate persistence layer.

Both the PersistenceException and the HibernateException are runtime exceptions because, in our opinion, we should not force the application developer to catch an unrecoverable exception at a low layer. In most systems, unchecked and fatal exceptions are handled in one of the first frames of the method call stack (i.e., in higher layers) and either an error message is presented to the application user or some other appropriate action is taken. Note that Hibernate might also throw other unchecked exceptions that are not a HibernateException. These are not recoverable either, and appropriate action should be taken.

Hibernate wraps the JDBC SQLException, thrown while interacting with the database, in a JDBCException. In fact, Hibernate will attempt to convert the exception into a more meaningful subclass of JDBCException. The underlying SQLException is always available via JDBCException.getSQLException(). Hibernate converts the SQLException into an appropriate JDBCException subclass using the SQLExceptionConverter attached to the current SessionFactory.

By default, the SQLExceptionConverter is defined by the configured Hibernate Dialect via the buildSQLExceptionConversionDelegate method which is overridden by several database-specific Dialects.

However, it is also possible to plug in a custom implementation. See the hibernate.jdbc.sql_exception_converter configuration property for more details.

The standard JDBCException subtypes are:

ConstraintViolationException

indicates some form of integrity constraint violation.

DataException

indicates that evaluation of the valid SQL statement against the given data resulted in some illegal operation, mismatched types, truncation or incorrect cardinality.

GenericJDBCException

a generic exception which did not fall into any of the other categories.

JDBCConnectionException

indicates an error with the underlying JDBC communication.

LockAcquisitionException

indicates an error acquiring a lock level necessary to perform the requested operation.

LockTimeoutException

indicates that the lock acquisition request has timed out.

PessimisticLockException

indicates that a lock acquisition request has failed.

QueryTimeoutException

indicates that the current executing query has timed out.

SQLGrammarException

indicates a grammar or syntax problem with the issued SQL.
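As a sketch, an application can inspect the translated exception and retrieve the original SQLException. The constraint-name logging is merely an illustrative reaction, and the log reference is an assumption:

```java
try {
	entityManager.persist( person );
	entityManager.flush();
}
catch (PersistenceException e) {
	if ( e.getCause() instanceof ConstraintViolationException ) {
		ConstraintViolationException cve = (ConstraintViolationException) e.getCause();
		// the underlying JDBC exception is always available
		SQLException sqlException = cve.getSQLException();
		// the violated constraint name, when the Dialect can extract it
		log.error( "Constraint violated: {}", cve.getConstraintName() );
	}
	throw e;
}
```

Remember that, as noted above, such exceptions are not recoverable: after handling or logging them, the transaction must still be rolled back.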

Starting with Hibernate 5.2, the Hibernate Session extends the JPA EntityManager. For this reason, when a SessionFactory is built via Hibernate’s native bootstrapping, the HibernateException or SQLException can be wrapped in a JPA PersistenceException when thrown by Session methods that implement EntityManager methods (e.g., Session.merge(Object object), Session.flush()).

If your SessionFactory is built via Hibernate’s native bootstrapping and you don’t want Hibernate exceptions to be wrapped in the JPA PersistenceException, set the hibernate.native_exception_handling_51_compliance configuration property to true.
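For instance, the setting can be supplied like any other configuration property, e.g. in hibernate.properties:

```properties
# restore the 5.1 behavior: do not wrap native exceptions in PersistenceException
hibernate.native_exception_handling_51_compliance=true
```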