Archive for the 'JPA' Category

October 5th, 2022
4:16 pm
Using Spring Data JPA with Oracle

Posted under CouchDB & Java & JPA & Knowledge Base & Oracle & Spring & Spring Boot & Spring Data JPA

I was using Spring Data JPA with Oracle as part of an export process from CouchDB to Oracle. The CouchDB export application first exports my existing Places data from CouchDB, then maps it to the entity model I am using for JPA/Oracle, then persists it into Oracle, and finally performs some rudimentary listing of the Oracle data by logging it.

Some issues/points came to light during this exercise which I am noting here:-

1/ My connect string to Oracle was using a Pluggable database in Oracle XE. This needs to use a service name (xepdb1) rather than a SID (xe) for the connection. To do this I needed to use a slash before the service name in the application properties, to indicate a service name, rather than the colon I would normally use for a SID:

# Oracle settings
spring.datasource.url = jdbc:oracle:thin:@localhost:1521/xepdb1
spring.datasource.username = places
spring.datasource.password = xxxxxxxx

2/ I changed the Hibernate naming strategy in the application properties to prevent unwanted snake casing of Oracle column names:

# Hibernate settings
spring.jpa.hibernate.naming.physical-strategy = org.hibernate.boot.model.naming.PhysicalNamingStrategyStandardImpl

3/ For persisting to Oracle, I had the choice of using Spring Data JPA, calling save() or saveAll() on the individual repository interfaces that extended CrudRepository (such as PlacesRepository), or of using the entity manager directly. The Spring Data methods would persist or update fine, and could also save an iterable in one call. However, I had some small additional logic, as I was passing and returning a Map. As I chose not to put this logic in the service layer, I used an ordinary JPA DAO (repository) class and autowired the entity manager via the @PersistenceContext annotation. I could then incorporate my logic in a single call and use entityManager.merge() to persist. I felt that situations vary and that it is not always appropriate to use the Spring Data interfaces for every use case. However, the Spring Data find queries were useful and clever for querying the database: they allowed, for example, a find by key with a like and an order by, just by declaring appropriately named methods in the interface. I also used the @Query annotation for a custom query, which again was just annotated on a method in the interface.
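To illustrate both routes, here is a sketch of the two styles side by side. PlacesRepository is the real interface name mentioned above, but the Place entity, its fields, and the DAO shape are illustrative assumptions, not the actual model:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.CrudRepository;
import org.springframework.data.repository.query.Param;
import org.springframework.stereotype.Repository;

public interface PlacesRepository extends CrudRepository<Place, Long> {

    // Derived query: a find by key with a like, plus an order by,
    // generated purely from the method name.
    List<Place> findByNameLikeOrderByName(String namePattern);

    // Custom JPQL via @Query for anything the method-name grammar can't express.
    @Query("select p from Place p where p.county = :county order by p.name")
    List<Place> findByCounty(@Param("county") String county);
}

// The plain-JPA alternative used for the save: an ordinary DAO class with the
// entity manager autowired via @PersistenceContext.
@Repository
class PlaceDao {

    @PersistenceContext
    private EntityManager em;

    // Incorporates the Map-handling logic and the merge in a single call.
    public Map<String, Place> saveAll(Map<String, Place> places) {
        Map<String, Place> merged = new HashMap<>();
        places.forEach((key, place) -> merged.put(key, em.merge(place)));
        return merged;
    }
}
```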

4/ I had issues whereby the foreign keys on dependent entities were not being auto-populated and came through as null, even when I set nullable=false on the @JoinColumn annotations. In the end it was a simple case of not populating the objects in my model on both sides of the join before persisting – i.e. I was populating a list in a parent object for a one-to-many, but not populating the parent in the dependent object. Doing this resolved the problem and everything persisted automatically with a single parent persist/merge, with Hibernate allocating primary keys via Oracle sequences and then auto-populating the foreign keys. This Stack Overflow post here pointed me in the right direction. I kept the nullable=false annotations where appropriate to correctly match the relationship I was using, and what was in Oracle.
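A minimal sketch of the fix, with hypothetical entity and column names (not the actual Places model), showing both sides of the one-to-many being populated before the single parent persist/merge:

```java
import java.util.ArrayList;
import java.util.List;

import javax.persistence.CascadeType;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.JoinColumn;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;
import javax.persistence.SequenceGenerator;

@Entity
public class Place {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "PlaceSeq")
    @SequenceGenerator(name = "PlaceSeq", sequenceName = "PLACE_SEQ", allocationSize = 1)
    private Long placeId;

    @OneToMany(mappedBy = "place", cascade = CascadeType.ALL)
    private List<Feature> features = new ArrayList<>();

    // Helper that sets BOTH sides - omitting feature.setPlace(this) was
    // exactly what left the child's foreign key null.
    public void addFeature(Feature feature) {
        features.add(feature);
        feature.setPlace(this);
    }
}

@Entity
class Feature {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "FeatureSeq")
    @SequenceGenerator(name = "FeatureSeq", sequenceName = "FEATURE_SEQ", allocationSize = 1)
    private Long featureId;

    @ManyToOne
    @JoinColumn(name = "PLACEID", nullable = false)
    private Place place;

    void setPlace(Place place) {
        this.place = place;
    }
}
```

A single save/merge of the Place then cascades to its features, with Hibernate allocating both keys from the Oracle sequences.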

5/ Once I was at the point of listing everything I had persisted, using Spring Data queries and some simple logging, I was initially hit with Hibernate multiple bag fetch exceptions when listing the data. At this stage I had annotated my repository methods as @Transactional rather than the service layer, and was therefore using explicit eager loading annotations, as I needed everything to load before the methods returned. This post here details the issue – it appears that Hibernate can only eagerly load one collection at a time. My solution was to take the more usual route of annotating my service methods as @Transactional rather than the repository methods, which then allowed a number of repository calls to be made from a single service method, all in the same transaction. Whilst this does expose the service layer to the transaction architecture and the javax.transaction package, it is the more usual approach and gives flexibility in the service layer. In my case I could then just revert to the default lazy loading and perform the logging in the same service layer method (which I was happy to do, as this was a very basic example just to demonstrate fetching the imported data from Oracle). Everything then worked fine.
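A sketch of the resulting shape (the service and FeaturesRepository names are illustrative; PlacesRepository is from the export application):

```java
import javax.transaction.Transactional;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class PlaceListingService {

    private static final Logger log = LoggerFactory.getLogger(PlaceListingService.class);

    @Autowired
    private PlacesRepository placesRepository;

    @Autowired
    private FeaturesRepository featuresRepository;

    // One transaction spans every repository call below, so default lazy
    // loading works while logging and no eager multi-bag fetches are needed.
    @Transactional
    public void logAll() {
        placesRepository.findAll().forEach(place -> log.info("{}", place));
        featuresRepository.findAll().forEach(feature -> log.info("{}", feature));
    }
}
```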

 

Comments Off on Using Spring Data JPA with Oracle

September 10th, 2022
11:09 am
Oracle XE 21c – Container vs Pluggable Databases – CDBs vs PDBs

Posted under JPA & Knowledge Base & Oracle

I just installed the latest 21c version of XE, and having created the system user, I connected to this via SQL Developer, using XE as the SID. All fine so far.

When I then went to create a new schema via the CREATE USER command, I received ORA-65096: invalid common user or role name. Upon researching this I found that Oracle now uses a hierarchy of pluggable databases (PDBs) inside a container database (CDB). Schemas, aka users, can then sit inside a PDB as normal. Inside the CDB, where I was logged in, the rules are different, and common users are required, which have their own rules and naming conventions.

I just wanted a simple local development solution for JPA use, so did not especially want to learn all the ins and outs of this. I found that an XE installation creates a default PDB called XEPDB1, and managed to create an SQL Developer connection to this using the already created system account. I tried using XEPDB1 as the SID rather than XE, but this failed as invalid. However, when I tried XEPDB1 as a service name rather than a SID it all worked fine. I did not need to make any other changes to the connection settings – user (system), password, and port (1521) were identical.

Having then connected via this new connection, I was successfully able to create a new schema in the default PDB. As expected, to connect to this new schema I again had to use the service name of XEPDB1, and I was able to connect once I had granted the CREATE SESSION privilege to the new user to allow it to log on. I could then proceed to grant the necessary privileges to the user and create/populate tables etc. Initially, in addition to granting CREATE SESSION, I simply granted UNLIMITED TABLESPACE to allow the schema user to create and populate tables. With this I was successfully able to perform inserts, updates, deletes, and selects on all the tables present.
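For reference, the statements involved looked something like the following, run from the system connection against the XEPDB1 service (user name and password are placeholders, and any further privileges are granted as needed):

```sql
-- Run as system, connected via the XEPDB1 service name.
CREATE USER placesdev IDENTIFIED BY changeme;

-- Without CREATE SESSION the new user cannot log on at all.
GRANT CREATE SESSION TO placesdev;

-- Enough for the schema user to create and populate its own tables.
GRANT UNLIMITED TABLESPACE TO placesdev;
```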

Comments Off on Oracle XE 21c – Container vs Pluggable Databases – CDBs vs PDBs

September 7th, 2022
4:56 pm
Exporting an Enterprise Architect 12 model as SQL/DDL statements

Posted under Design & JPA & Knowledge Base & Oracle & UML

I had a legacy EA 12 model for the places guide database which I had used as the basis for the CouchDB implementation.

I wanted to explore a relational database as an alternative route, with the aim of not bothering with completely offline access, due to the development effort needed and the lessening need given improvements in mobile data access over the last few years. Initially I created an Oracle version of the model file, tweaked the PK and FK data types from UUID to long, and checked a few other things.

I then tried selecting the domain model in the explorer, and using Package/Database Engineering/Generate Package DDL from the toolbar. This brought up the relevant dialogue to do the export, but try as I might it could not find any entities to export as DDL.

I then found this post here, which points out that you need to transform the relevant entities to DDL first to enable this to work. I did this using Package/Model Transformation (MDA)/Transform Current Package. You then select the entities to transform and select DDL on the right under transformations, and then hit Do Transform. This creates a DDL diagram. The original Generate Package DDL option above then worked – I was able to find all the desired entities and transform them. I had to visit each entity to select the target database DDL format, in my case Oracle, or else I got an error, but once this was done, it all worked fine.

I elected to create a single DDL SQL file and did not worry about the entity ordering – you can reorder when doing the DDL generation, but I did not bother. The output DDL was pretty good as a starter and was clear. It also appeared to be order independent – the tables were all created first, followed by all the foreign key constraints, so that every table existed before any constraint referenced it.
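The shape of the generated file was roughly as follows (hypothetical table names, just to illustrate the tables-first, constraints-last ordering):

```sql
CREATE TABLE Place (
    PlaceId NUMBER(19) NOT NULL,
    Name VARCHAR2(100),
    CONSTRAINT PK_Place PRIMARY KEY (PlaceId)
);

CREATE TABLE Feature (
    FeatureId NUMBER(19) NOT NULL,
    PlaceId NUMBER(19) NOT NULL,
    CONSTRAINT PK_Feature PRIMARY KEY (FeatureId)
);

-- All foreign keys added only after every table exists,
-- so the statement order within each section does not matter.
ALTER TABLE Feature ADD CONSTRAINT FK_Feature_Place
    FOREIGN KEY (PlaceId) REFERENCES Place (PlaceId);
```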

Some tweaking of the model will be needed, especially in the area of Place Features related tables, and also sequence generation, as I would likely use Oracle sequences for PK generation, via JPA/Hibernate. However, I had reached a good trial starting point which avoided the need to create all the DDL manually from scratch.

Comments Off on Exporting an Enterprise Architect 12 model as SQL/DDL statements

September 23rd, 2011
1:21 pm
JPA – Using a mapped superclass to hold Ids and Version Numbers for all Entities

Posted under JPA

All my entities have a numeric ID as a primary key, except in rare cases. In particular, I never use business data as part of a primary key, nor do I ever use a primary key for any business purpose (such as an invoice number). I consider that primary and foreign keys are only used to define relationships, i.e. they are metadata as far as the domain model is concerned.

The problem of using business data in primary keys is that if the business data format for a primary key changes in any way, all the foreign keys referring to this primary key column must also be changed. This can result in massive changes throughout the data model for what started as a simple business change. Taking the above invoice number for example, the business may decide to add an alphanumeric branch prefix to the invoice number.

If instead all primary keys are kept as separate numeric IDs, changes to the format of business data should typically only require changes to one table.

With that rant out of the way, the main point of this post is that, given the above, plus the need to hold version numbers on all entities to support optimistic locking, it is desirable to hold all the primary keys and version numbers in a mapped abstract superclass. This has the following benefits:-

  • the entities are simplified and the key/version number implementation is encapsulated and centralised, which is a good design principle
  • it provides a standard mechanism to get the primary key/version of any entity polymorphically via the mapped superclass. This is especially useful when you need to do comparisons between entities and you do not want to override equals on the entities. It is also useful for other polymorphic situations – in my case I also have a polymorphic tree implementation which makes use of this

Note the following points about the implementation:-

  • As I have discussed here, I always annotate getters rather than fields, to permit inheritance. In this case, I provide fields plus polymorphic getters/setters in the mapped superclass, getEntityId and setEntityId, which are annotated @Transient so that they are not mapped there. They are purely to allow polymorphic access to the primary key.
  • Each entity defines its own getters/setters with a mnemonic name for the primary key. These getters/setters call the base polymorphic ones. The reason for this is that, as well as improving the key naming, it allows each entity to annotate its primary key differently, for example to use a specific generator for sequences etc.
  • As the primary keys are annotated in the subclasses, this permits the base superclass IDs to be generic. In my case, this caters for both Long and Integer primary keys using a generic base superclass. Note that it would not be possible to annotate in the superclass if the ID is a parameterised type – JPA spits this out and will not allow it.
  • Due to the above generic issue, the type for the version number must be fixed, as this is annotated in the superclass. In my case I settled for an int for version numbers.
  • Note that as the primary key is generic, it must therefore be a reference type rather than a primitive type (e.g. Long rather than long). In my case, primary keys are normally Long
  • In some cases, I have other mapped superclasses, such as Tree and TreeNode classes. In this case, Tree and TreeNode extend the EntityBase class, and Tree and TreeNode subclasses then just extend Tree and TreeNode respectively.

The following code illustrates a simple example of the superclass and an entity subclass:-

 

EntityBase

/**
* Entity implementation class for Entity: EntityBase
* This mapped superclass defines the primary key and locking version number for all entities.
* The getters/setters are marked transient so they are not mapped here. This allows polymorphic access to primary key (and version),
* but still allows the subclass to define its own (mapped) getter/setter names for the primary key, and to annotate for specific generators etc.
* This allows meaningful names for primary keys, normally <tableName>Id
*/
@MappedSuperclass
@Access(AccessType.PROPERTY)
public abstract class EntityBase<C> implements Serializable {
    private static final long serialVersionUID = 8075127067609241095L;

    private C entityId;
    private int version;
   
    @Transient public C getEntityId() {
        return entityId;
    }
    @Transient public void setEntityId(C entityId) {
        this.entityId = entityId;
    }
    /*
     * The version cannot be generic as it is annotated here (unlike the primary key)
     * If you do, Eclipselink throws a tantrum. In practice, an int is plenty –
     * it would allow continuous version changes once a second for over 68 years for example.
     */
    @Version
    public int getVersion() {
        return version;
    }
    public void setVersion(int version) {
        this.version = version;
    }
}

 

Address

@Entity
@Access(AccessType.PROPERTY)
public class Address extends EntityBase<Long> implements Serializable {
    private static final long serialVersionUID = 3206799789679218177L;
   
    private int latitude;
    private int longitude;   
    private String streetAddress1;
    private String streetAddress2;
    private String locality;
    private String postTown;
    private String county;
    private String postCode;
       
    public Address(){}

    @Id
    @GeneratedValue(generator="AddressId")   
    public Long getAddressId() {return super.getEntityId();}
    public void setAddressId(Long addressId) {super.setEntityId(addressId);}
   
    public int getLatitude() {return latitude;}
    public void setLatitude(int latitude) {this.latitude = latitude;}
    public int getLongitude() {return longitude;}
    public void setLongitude(int longitude) {this.longitude = longitude;}
   
    public String getStreetAddress1() {return streetAddress1;}
    public void setStreetAddress1(String streetAddress1) {this.streetAddress1 = streetAddress1;}
    public String getStreetAddress2() {return streetAddress2;}
    public void setStreetAddress2(String streetAddress2) {this.streetAddress2 = streetAddress2;}
    public String getLocality() {return locality;}
    public void setLocality(String locality) {this.locality = locality;}
    public String getPostTown() {return postTown;}
    public void setPostTown(String postTown) {this.postTown = postTown;}
    public String getCounty() {return county;}
    public void setCounty(String county) {this.county = county;}
    public String getPostCode() {return postCode;}
    public void setPostCode(String postCode) {this.postCode = postCode;}   
}

No Comments »

July 19th, 2011
8:54 pm
JPA – Accessing the primary key ID allocated during a persist operation on a new entity

Posted under JPA

I wanted to do this in order to use the ID as part of the creation of a Base64 encoded tree path in another column in the same row, as I was using path enumeration as my tree algorithm (see here on slideshare.net for an excellent article on database tree algorithms).

The simple answer is – you can’t do this reliably, unless you allocate IDs yourself in some way.

  1. The ID is not available before a persist. I could call flush and then read it back, create the TreePath and then persist again – this is safe. This is discussed here on Stack Overflow.
  2. The JPA Lifecycle callback @PrePersist does not guarantee that the ID is visible. The net is rather quiet on this but this post here about hibernate says it cannot be relied upon  (@PostPersist would of course be different). There are strong limits on what you can do in such a callback or entity listener. For example, you can’t do entity manager operations, persist entities, or refer to other entities etc.
  3. Triggers would be another way to avoid the double update. I could set the rest of my TreePath prior to the first persist, and then retrieve the actual ID in a before insert trigger using the new.TreePathID syntax (MySQL and Oracle use similar syntax in this respect). I could then encode it in Base64 using a stored procedure, and append it to the treepath. Oracle has a built-in package with Base64 encoding and decoding available (the utl_encode package). For MySQL there is an open source example on the internet here. From posts on the net, e.g. here, triggers do work in MySQL via JPA.

The best solution looks like using triggers on the database server, as this avoids the double update. I have yet to investigate/try this.
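An untested sketch of what option 3 might look like on Oracle, with hypothetical table/column/sequence names, Base64 via the utl_encode package, and the assumption that the ID can be allocated in the same trigger (direct sequence assignment works on 11g and later):

```sql
CREATE OR REPLACE TRIGGER trg_tree_node_bi
BEFORE INSERT ON tree_node
FOR EACH ROW
BEGIN
  -- Allocate the ID here if it has not already been set.
  IF :new.tree_node_id IS NULL THEN
    :new.tree_node_id := tree_node_seq.NEXTVAL;
  END IF;

  -- Append the Base64-encoded ID to the path set by the application,
  -- all within the single insert - no double update needed.
  :new.tree_path := :new.tree_path ||
      utl_raw.cast_to_varchar2(
          utl_encode.base64_encode(
              utl_raw.cast_to_raw(to_char(:new.tree_node_id)))) || '.';
END;
```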

No Comments »

March 28th, 2011
11:02 am
Eclipselink error in Eclipse – Schema “null” cannot be resolved….

Posted under JPA

This error came up while building a new development environment with the Eclipse Helios SR2 and Eclipselink 2.2.0.
The error appeared for every JPA entity, even after the database drivers/connections were correctly defined.

The simple answer, as detailed here, is to right click the project and do a validate – hey presto, all the errors disappear!
This appears to have been around a while and is still not fixed in Eclipselink 2.2.0, but at least it is simple to resolve.

The tip was found on this Oracle tutorial post :-

Note:

If you encounter a Schema “null” cannot be resolved for table “<TABLE_NAME>” error, you’ve hit a known Eclipse bug.
To resolve the issue, right-click the project and select Validate from the menu. Eclipse will clear the errors.

No Comments »

March 5th, 2011
7:16 pm
JPA entities – field vs method annotation

Posted under JPA

Having read a number of technical blogs on this there appeared to be no clear winner either way, until I hit a requirement to use a mapped superclass for an entity. Then the following issues became very important:-

  1. Annotating fields gives you less flexibility – methods can be overridden by a subclass, whereas fields cannot.
  2. My particular example where I hit this was when designing an implementation to use for hierarchical/tree structured data, to be used in a number of places in an application. I wanted to make all the tree usage polymorphic from the same base class implementation, as then various other areas of the code could be reused for any tree. As entities were passed all the way up to the (JSF) UI, I was using certain design patterns for handling hierarchies at the UI level and making them able to handle any tree was desirable.
  3. Some of the functionality for handling trees – in my case path enumerated trees as described here in this excellent article on slideshare.net – could be placed in a base class. Examples of this were methods to compress node Ids in the path enumerated string using Base64, and building tree path strings for use at the UI etc.
  4. A key gotcha was that I wanted different implementations of trees to be annotated differently, for example to use different sequence Id generation. If the tree Id was in the mapped superclass and I was using field annotation, I was stuck. This issue is discussed in these articles here (see the comments at the end) and here.
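As an aside, the kind of base-class helper mentioned in point 3 – compressing numeric node Ids for the path enumerated string – can be sketched in plain Java. This uses java.util.Base64 and is only illustrative; the original implementation may well have differed:

```java
import java.math.BigInteger;
import java.util.Base64;

public final class TreePathCodec {

    // URL-safe alphabet without padding keeps the path string compact and clean.
    private static final Base64.Encoder ENCODER = Base64.getUrlEncoder().withoutPadding();
    private static final Base64.Decoder DECODER = Base64.getUrlDecoder();

    private TreePathCodec() {}

    // Encode a node ID compactly for embedding in a path enumerated string.
    public static String encodeId(long nodeId) {
        return ENCODER.encodeToString(BigInteger.valueOf(nodeId).toByteArray());
    }

    // Recover the original node ID from its encoded path segment.
    public static long decodeId(String encoded) {
        return new BigInteger(DECODER.decode(encoded)).longValue();
    }
}
```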

My solution was to take the decision to always annotate (getter) methods from now on. This allowed the following implementation:-

  1. My mapped superclass contained getters annotated @Transient where I wanted to change their behaviour in a subclass. For example, my treeId primary key was in the superclass but marked @Transient so that it would not be mapped there. Then, in each subclass, I added an override getter/setter pair for the field, and annotated the getter with the desired ID generation. This allowed every subclass to have its own sequence generation (in my case), but to still allow polymorphic usage in other code.
  2. When doing this, a classic gotcha was to forget to mark the base class getters @Transient. This caused them to be mapped in both the mapped superclass and the subclass, resulting in multiple writable mappings which Eclipselink complained about loudly! It feels counter-intuitive to set them as transient, but the important point is that they will be mapped by the subclass.
  3. When doing this, I had to explicitly annotate the mapped superclass for property annotation using @Access(AccessType.PROPERTY). This is because normally the default type of access (field or property) is derived automatically from where the primary key is annotated. In this case, there was no such annotation on the mapped superclass so it needed telling – again, strange and loud Eclipselink errors resulted without this as my @Transient annotations on getters in the superclass were being ignored, as it defaulted to field annotation!
  4. Another issue was that I wanted a name field in the base superclass, but wanted to override this in the subclasses. In fact one tree implementation had node classes with an associated keyword class, where the name was stored on the keyword class and delegated from the getter on the node class, which in turn overrode the getter in the mapped superclass. Again, I marked the name getters as @Transient in both the tree mapped superclass and in the node class, and just mapped the name in the keyword class. This allowed other code to obtain the name polymorphically using a base superclass reference, but the actual name came from 2 levels further down. JPA was kept happy as the name was mapped wherever I needed it to be.

The following sample code fragment from my mapped superclass illustrates some of these points:-

package uk.co.salientsoft.test.domain;
import javax.persistence.Access;
import javax.persistence.AccessType;
import javax.persistence.Column;
import javax.persistence.MappedSuperclass;
import javax.persistence.Transient;

import uk.co.salientsoft.jpa.ColumnMeta;

/**
* Entity implementation class for Entity: TreeNode
* This mapped superclass performs the common tree/treepath handling for all tree entities
*
*/
@MappedSuperclass
@Access(AccessType.PROPERTY)
public abstract class TreeNode {
   
    private long TreeNodeId;
    private String treePath;
    private int nodeLevel;

    @Transient public long getTreeNodeId() {return this.TreeNodeId;}
    @Transient public void setTreeNodeId(long TreeNodeId) {this.TreeNodeId = TreeNodeId;}
   
    @Transient public abstract String getName();
    @Transient public abstract void setName(String name);
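A hypothetical concrete subclass (names invented for illustration) would then override the @Transient base getters with its own mapped, annotated versions, including its own sequence generator, along the lines of point 1 above:

```java
import javax.persistence.Access;
import javax.persistence.AccessType;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.SequenceGenerator;

@Entity
@Access(AccessType.PROPERTY)
public class CategoryNode extends TreeNode {

    private String name;

    // The subclass maps the primary key, with its own generator, delegating
    // storage to the @Transient base getters/setters.
    @Id
    @GeneratedValue(generator = "CategoryNodeId")
    @SequenceGenerator(name = "CategoryNodeId", sequenceName = "CATEGORYNODE_SEQ")
    public long getCategoryNodeId() {return super.getTreeNodeId();}
    public void setCategoryNodeId(long categoryNodeId) {super.setTreeNodeId(categoryNodeId);}

    // The name is mapped here, overriding the @Transient abstract getter above.
    @Override
    public String getName() {return name;}
    @Override
    public void setName(String name) {this.name = name;}
}
```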

No Comments »

January 23rd, 2011
1:41 pm
Eclipse JPA table generation fails with Eclipslink 2.1.2, Glassfish V3.0.1

Posted under JPA

I was attempting to create a database for a new project which just contained entities – I had imported the entity classes from Enterprise Architect and added the required annotations, getters & setters etc.

My first attempt used the following persistence.xml (I have removed most of the classes for the example)

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.0" xmlns="http://java.sun.com/xml/ns/persistence"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">

    <persistence-unit name="Test" transaction-type="RESOURCE_LOCAL">
      <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>

      <non-jta-data-source>jdbc/TestPool</non-jta-data-source>
      <class>uk.co.salientsoft.Test.domain.Address</class>
      <class>uk.co.salientsoft.Test.domain.AddressLine</class>
      <class>uk.co.salientsoft.Test.domain.Campaign</class>
      <class>uk.co.salientsoft.Test.domain.CampaignText</class>
      <exclude-unlisted-classes>false</exclude-unlisted-classes>
      <properties>
        <property name="eclipselink.logging.level" value="INFO" />
        <property name="eclipselink.target-server" value="None" />
        <!-- <property name="eclipselink.target-server" value="SunAS9" /> -->
        <property name="eclipselink.target-database" value="Oracle" />
        <property name="eclipselink.ddl-generation.output-mode" value="database" />
      </properties>
    </persistence-unit>
</persistence>

Having created the database and validated the connection/datasource etc., I attempted to create the tables from Eclipse by right clicking the project and selecting JPA Tools/Create Tables from Entities…

This resulted in the following error:-

[EL Config]: Connection(5226838)–connecting(DatabaseLogin(
    platform=>OraclePlatform
    user name=> “Test”
    connector=>JNDIConnector datasource name=>jdbc/TestPool
))
[EL Severe]: Local Exception Stack:
Exception [EclipseLink-7060] (Eclipse Persistence Services – 2.0.1.v20100213-r6600): org.eclipse.persistence.exceptions.ValidationException
Exception Description: Cannot acquire data source [jdbc/TestPool].
Internal Exception: javax.naming.NoInitialContextException: Need to specify class name in environment or system property, or as an applet parameter, or in an application resource file:  java.naming.factory.initial

The error occurred even though the data source was being used outside the container as a non-jta data source (this worked previously for JPA1/Glassfish V2, allowing JPA to be used outside the container but still using a datasource defined in Glassfish). A previous post here (in the section Notes on Persistence.xml) also reports that when running outside the container, it is vital to have eclipselink.target-server defaulted or set to none. In this case, this setting made no difference either way. After a fair bit of research/Googling, I could not find the cause of the problem, so I simply reverted to specifying the connection explicitly in persistence.xml, without using the connection defined in Glassfish at all.

This worked correctly. I will not need to create the database very often, and it only involved tweaking a few lines in the file, so I’m happy to live with it. Here is my modified persistence.xml, which uses a direct connection specified in the file. This version created the database correctly. Note that, as detailed in this post here (near the bottom of the post), the property names for the JDBC url, user, password and driver have now been standardised and are no longer EclipseLink-specific.

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.0" xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">

    <persistence-unit name="Test" transaction-type="RESOURCE_LOCAL">
    <!-- <persistence-unit name="Test" transaction-type="JTA"> -->

      <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>

      <!-- <jta-data-source>jdbc/TestPool</jta-data-source> -->
      <!-- <non-jta-data-source>jdbc/TestPool</non-jta-data-source> -->

      <class>uk.co.salientsoft.Test.domain.Address</class>
      <class>uk.co.salientsoft.Test.domain.AddressLine</class>
      <class>uk.co.salientsoft.Test.domain.Campaign</class>
      <class>uk.co.salientsoft.Test.domain.CampaignText</class>
      <exclude-unlisted-classes>false</exclude-unlisted-classes>
      <properties>

        <property name="javax.persistence.jdbc.url" value="jdbc:oracle:thin:@localhost:1521:xe"/>
        <property name="javax.persistence.jdbc.user" value="Test"/>
        <property name="javax.persistence.jdbc.password" value="Test"/>
        <property name="javax.persistence.jdbc.driver" value="oracle.jdbc.OracleDriver"/>

        <property name="eclipselink.logging.level" value="INFO" />
        <property name="eclipselink.target-server" value="None" />
        <!-- <property name="eclipselink.target-server" value="SunAS9" /> -->
        <property name="eclipselink.target-database" value="Oracle" />
        <property name="eclipselink.ddl-generation.output-mode" value="database" />
      </properties>
    </persistence-unit>
</persistence>

No Comments »

December 3rd, 2010
9:42 am
JPA error “Class is mapped, but is not included in any persistence unit”

Posted under JPA

I received this compile time error from Eclipse with one of my test projects which was previously fine, using Eclipse Helios with Eclipselink 2.1.0 (and container managed JPA in EJBs). The error occurred when I was autodiscovering the entities rather than listing them in persistence.xml explicitly, even when <exclude-unlisted-classes>false</exclude-unlisted-classes> was explicitly present.

Supposedly, as the annotated entity sources for the classes are in a subfolder of the parent of the META-INF folder containing persistence.xml, they should be discovered automatically. Listing the fully qualified classes explicitly in persistence.xml solved the problem, however this is not ideal as it would mean having an alternative version of persistence.xml for testing.

I googled for the problem but could not find a solution listed anywhere. For now this is just one to be aware of – I’ll stick with listing the classes explicitly and look further for a solution later on as and when it becomes a nuisance!

No Comments »

November 26th, 2010
8:25 pm
Using container managed JPA 2.0 with CDI/EJB 3.1/Glassfish 3

Posted under JPA

Update

  • One issue to be aware of, which is not immediately obvious, is when populating a database from scripts. If the version column is left null (Eclipselink leaves it as nullable by default), then the null value causes an immediate OptimisticLockException to be thrown. This is because EclipseLink expects and checks for a zero as the initial value for a new record. If EclipseLink itself persists the record, then it writes a zero.
  • One good idea to resolve this would be to enforce version columns to be non null with a zero default – although it must be said that it would be helpful if EclipseLink did this automatically in response to the @Version annotation.
  • Population scripts should therefore set version to zero, unless it is set non null with a default of zero in JPA – this latter solution is cleaner and would be automatic for me because I create version columns in a mapped superclass used by all Entities. I have yet to test this out however.
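The latter option might look something like this in the mapped superclass – an untested sketch, as noted above, and the columnDefinition shown is Oracle-flavoured and only affects generated DDL:

```java
// In the EntityBase mapped superclass.
@Version
@Column(nullable = false, columnDefinition = "NUMBER(10) DEFAULT 0 NOT NULL")
public int getVersion() {
    return version;
}
```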

 

Original Post

This follows on from a previous post relating to JPA 1.0/Glassfish 2.1, and contains a few notes and pointers relating to JPA 2.0 under Glassfish 3 with CDI/Weld.

Firstly, when using an EJB from a client, use CDI injection (@Inject) rather than the @EJB annotation, as CDI gives many additional benefits such as type safety.

In an EJB:-

  • You are using container managed transactions, and so simply have to inject an entity manager rather than create one from an entity manager factory.
  • The container managed entity manager can be injected with a simple @PersistenceContext annotation – the persistence unit name may be defaulted and is discovered automatically.
  • The container is responsible for entity manager lifetime/closing. Calling e.g. em.close() at the end of a bean method is illegal and will make Glassfish angry. You do not want to make Glassfish angry – you end up with many long stack traces in the log which on the surface do not appear to relate to this trivial problem!

A major issue with container managed transactions is how to use optimistic locking and detect and handle optimistic lock exceptions correctly. This is detailed in Pro JPA 2, pages 355-357 (you can preview this in Google Books). The main problem is that in a server environment, the OptimisticLockException will simply be logged by the container which then throws an EJBException. Trying to catch an OptimisticLockException in the calling client is therefore doomed to failure – it will probably never be caught!

The solution is to call flush() on the entity manager (e.g. em.flush()) just before you are ready to complete the method. This forces a write to the database, locking resources at the end of the method so that the effects on concurrency are minimised. This allows us to handle an optimistic failure while we are still in control, without the container swallowing the exception. The OptimisticLockException might contain the object which caused the exception, but it is not guaranteed to. If there were multiple objects in the transaction, we could have invoked getEntity() on the caught exception to see whether the offending object was included.

If we do get an exception from the flush() call, we can throw a domain-specific application exception which the caller can recognise, i.e. convert an OptimisticLockException to our own ChangeCollisionException, avoiding coupling the caller to the internal semantics of JPA transactions. It is also desirable to factor out the code which calls flush() into our own method, e.g. flushChanges(), so that not every method which needs to flush has to catch the optimistic lock exception and throw our domain-specific one – we can centralise the exception conversion in a single method.
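The centralised conversion described above can be sketched as follows. This is a runnable, self-contained illustration: the method name flushChanges() is my own, and the nested OptimisticLockException class here is a stand-in for javax.persistence.OptimisticLockException so the example does not need the JPA API jar:

```java
// Sketch of centralising optimistic-lock exception conversion.
// In a real EJB, flushChanges() would call em.flush() on the injected
// EntityManager and catch javax.persistence.OptimisticLockException.
public class FlushExample {

    // Stand-in for javax.persistence.OptimisticLockException
    public static class OptimisticLockException extends RuntimeException {}

    // Our domain-specific application exception
    public static class ChangeCollisionException extends RuntimeException {}

    // Stand-in for the part of the EntityManager API we use
    public interface Flushable {
        void flush();
    }

    // Every service method that needs to flush calls this instead of
    // em.flush(), so the exception conversion lives in one place.
    public static void flushChanges(Flushable em) {
        try {
            em.flush();
        } catch (OptimisticLockException e) {
            throw new ChangeCollisionException();
        }
    }

    public static void main(String[] args) {
        // Simulate a flush that detects a version conflict
        Flushable conflicting = () -> { throw new OptimisticLockException(); };
        try {
            flushChanges(conflicting);
            System.out.println("no conflict");
        } catch (ChangeCollisionException e) {
            System.out.println("ChangeCollisionException caught");
        }
    }
}
```

The caller now depends only on ChangeCollisionException, not on JPA exception types.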

We must annotate the ChangeCollisionException class with @ApplicationException, which is an EJB container annotation, to indicate to the container that the exception is not really a system level one but should be thrown back to the client as-is. Normally defining an application exception will cause the container not to roll back the transaction, but this is an EJB 3 container notion. The persistence provider that threw the OptimisticLockException (in this case EclipseLink) does not know about the special semantics of designated application exceptions and, seeing a runtime exception, will go ahead and mark the transaction for rollback. The client can now receive and handle the ChangeCollisionException and do something about it.

Here is an example of our exception class with the required annotation:-

@ApplicationException
public class ChangeCollisionException extends RuntimeException {
    public ChangeCollisionException() { super(); }
}

Here are code fragment examples from a stateless session bean showing the entity manager injection, and a client JSF model bean showing the CDI injection of the stateless session bean. Note that with EJB 3.1 it is not necessary to declare local or remote interfaces for an EJB as business interfaces are not a requirement. However, I always use an interface as it allows for example a mock version of a service layer to be easily swapped in for testing. CDI supports this conveniently via alternatives, whereby a mock version of a bean can be switched in via a setting in beans.xml for testing.

@Stateless
public class SentryServiceImpl implements SentryServiceLocal {

    @PersistenceContext
    private EntityManager em;

}

@Named
@Dependent
public class ModelBeanImpl implements ModelBean, Serializable {
    private static final long serialVersionUID = -366401344661990776L;
    private @Inject SentryServiceLocal sentryService;
… 
}

With container managed JPA and/or Glassfish your persistence.xml can also be simplified, as you can define your data source in Glassfish rather than explicitly in persistence.xml, as described here.

Note that when defining a data source directly in persistence.xml, some property names have been standardized for JPA 2.0, so have changed from e.g. eclipselink.jdbc.user to javax.persistence.jdbc.user. The following properties in persistence.xml are now standard:-

<property name="javax.persistence.jdbc.url" value="jdbc:oracle:thin:@localhost:1521:xe"/>
<property name="javax.persistence.jdbc.user" value="sentry"/>
<property name="javax.persistence.jdbc.password" value="sentry"/>
<property name="javax.persistence.jdbc.driver" value="oracle.jdbc.OracleDriver"/>

An example persistence.xml used under JPA 2.0 follows:-

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.0"
  xmlns="http://java.sun.com/xml/ns/persistence"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">

    <persistence-unit name="SentryPrototype" transaction-type="JTA">
      <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
      <jta-data-source>jdbc/SentryPool</jta-data-source>
      <properties>
       <property name="eclipselink.logging.level" value="INFO" />
       <property name="eclipselink.target-database" value="Oracle" />      
       <property name="eclipselink.ddl-generation.output-mode" value="database" />      
      </properties>
     </persistence-unit>
</persistence>

No Comments »