Archive for December, 2009

December 18th, 2009
11:33 am
Java Dev setup – Eclipse/Eclipselink/ICEfaces/Glassfish/Tomcat

Posted under Java

  1. Decide on the component versions to be used by building a cross-compatibility matrix. Generally you will want to install the latest production builds of all components consistent with full cross-compatibility. Also consider the various tool bundles available, which can simplify installation and configuration by allowing multiple tools to be installed from a single kit. However, many of the bundles overlap, so this needs to be taken into account (in addition to the compatibility matrix) so that you do not end up installing multiple versions of the same tool – for example, Java EE can come with the JDK or with Eclipse. In my case, I am using an Eclipse bundle which includes Java EE.
  2. Download and install the chosen Java SE JDK.
  3. Download and install Glassfish (the Eclipse Glassfish plugin depends on an existing Glassfish installation, so we install Glassfish first).
  4. Download and install Eclipse. My bundle included Java EE 5 and Eclipselink and all the web tools.
  5. Install the Glassfish Plugin for Eclipse.
  6. Download ICEfaces (e.g. v1.8.x). This just needs unzipping to any desired target folder (e.g. E:\ICEfaces-1.8.x). The distribution contains full documentation for getting started, running tutorials, and development.
  7. Download and Install the Eclipse ICEfaces plugin.
  8. Download Tomcat if required. You may just want to use Glassfish as your servlet container, but in my case some development projects will use JSF/ICEfaces/Tomcat without Java EE. I have in the past used the Windows service installer version, which is a self-installing .exe and installs/runs Tomcat as a service. However, note that this does not work under 64-bit Windows if you are using a 32-bit JVM. See here for a manual install process from the zip kit which gives you all the functionality of the Windows installer version, including run as a service, the configuration utility, and the system tray monitor/utility.


December 17th, 2009
7:22 pm
Changing Server Properties in Eclipse

Posted under Eclipse

A little counter-intuitive gotcha, this one. At first glance, you would expect to right click a server in the Servers view and pick Properties. However, if you do this, you do not get the full set – you just get the ability to switch the deployed file location, plus some monitoring options.

If you just double click the server in the list, or right click and select Open, you get the Overview page listing (almost – see below) all the available properties for the server. I have tripped over this more than once in the past, as it just does not seem right that the Properties context menu option does not give the full property page (or at least link to it)!

Also, note that the ‘monitoring’ options available on the Properties context menu do not appear on the Overview page – however, the location options do appear there in more comprehensive form.

DOH!!!


December 17th, 2009
5:12 pm
Correct Scoping/Separation of JSF Managed Beans

Posted under JSF

This ICEfaces article details the importance of the correct separation of concerns and scoping of JSF managed beans. Of particular importance is ensuring that you do not mix up the layers of the MVC design pattern by mashing different MVC concerns into a single managed bean. Correct splitting of classes and scoping can avoid this and achieve the desired loose coupling.

The approach I have taken, which differs subtly from the above article but is fundamentally the same, is as follows (a minimal sketch follows the list) :-

  1. I encapsulate all calls to the business/service layer in a ModelBean. This bean exposes properties which hold all the business data fetched from the service layer, and various populate and save methods to fetch/populate those properties and finally persist modified versions of them back via the service layer. As I am using JPA and have generally elected not to use the DTO pattern, the properties are JPA entities, objects which are the result of JPA select constructor expressions, or primitive data values from the database. For clarity, the ModelBean is stateful and session scoped, and so holds the properties all the time a page is being viewed/edited, whereas all the calls to the service layer are stateless. I might have a single model bean, or I might break it down into a number of separate beans, depending on what suits the design. Whilst there could be one per JSF page, typically I find that there is scope for a lot of re-use of common methods and properties across pages. Conversely, for a larger application, it may become unmodular and unwieldy to have a single model bean. A fundamental point about this bean is that it does not perform any controller logic, i.e. it does not directly interact with the components on the page (although typically, of course, they will refer to its properties to get data).
  2. I encapsulate all the logic which interacts with the components on the page in a controller bean for the page. Where the page contains subcomponents such as custom facelets tags (in my case, a TablePair tag is an example of this), I typically find that a good design is to have a separate child controller bean for each custom component, so that the controller mirrors the structure and requirements of the view. These controller beans do not access the business/service layer directly – all such access is routed via the ModelBean above. This separates all code which is directly aware of the business layer interface from controller code which is aware of the structure and handling of the page.
  3. I ensure that the view (the JSF page, facelets and tags) does not contain any business or controller logic. As this post indicates in its last paragraph, the presence of logic expressions in JSF value expressions is often a warning bell that this may have occurred, and the view has been polluted with controller or model concerns. For example, the disabled attribute on a JSF component is strictly a property reference to an enable/disable property in a controller bean. The conditions under which that property is enabled or disabled are strictly a controller issue. The above post also details design guidelines for state change management and event handling and propagation for the controller bean.
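
The following minimal sketch illustrates the separation. The names (Person, PersonService, the bean classes) and the service interface are assumptions for illustration only; both beans would be wired in faces-config.xml, the model bean session scoped and the controller bean request scoped :-

import java.util.List;
import javax.faces.event.ValueChangeEvent;

/* Hypothetical domain/service types (stubs for illustration). */
class Person { /* JPA entity elided */ }
interface PersonService {
 List<Person> findAll();
 void saveAll(List<Person> people);
}

/* ModelBean: stateful, session scoped. Encapsulates all service layer
   access and holds the fetched business data - no component logic here. */
class PersonModelBean {
 private PersonService personService; // injected via faces-config.xml
 private List<Person> people;

 public void setPersonService(PersonService personService) { this.personService = personService; }
 public List<Person> getPeople() { return this.people; }
 public void populatePeople() { this.people = personService.findAll(); }
 public void savePeople() { personService.saveAll(this.people); }
}

/* Controller bean: request scoped. Handles page/component state only;
   all business access is routed via the ModelBean, never the service layer. */
class PersonPageController {
 private PersonModelBean modelBean; // injected via faces-config.xml
 private boolean saveDisabled = true;

 public void setModelBean(PersonModelBean modelBean) { this.modelBean = modelBean; }

 // Strictly a controller concern - the view just references
 // disabled="#{personPageController.saveDisabled}"
 public boolean isSaveDisabled() { return this.saveDisabled; }

 public void valueChanged(ValueChangeEvent event) { this.saveDisabled = false; }

 public String save() {
  modelBean.savePeople();
  this.saveDisabled = true;
  return null; // stay on the current view
 }
}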

 

A good test of the design is to ask yourself the following questions:-

  1. What would need to change if the call interface to the service layer changed? Would it be necessary to change any backing beans which also contain controller logic – which handle components on the page? If so, the design has become polluted. Consider a possible requirement where the service layer interface was enhanced, and the old interface was eventually due to be deprecated. You might have a situation where you needed to support both versions. Would this result in having 2 identical copies of some controller code which interacts with components on the page? If so, you have polluted the design and mixed controller concerns in with the model. If you had to perform any bug fixes on the duplicated controller code, you would have to update both copies identically to keep them in step, and so have broken modularity.
  2. Conversely, what would need to change if the user interface to the page changed, but performed the same functions in terms of the model (perhaps the interface was improved to make it more user friendly)? Would it be necessary to change any beans which contain calls to the business/service layer? If so, the design has again become polluted. Consider a possible requirement where you needed to maintain both the old and the new versions of the user interface for a period until the old one had been completely phased out. This would require 2 versions of the controller beans to be maintained. Would this result in having 2 identical copies of some model code which directly calls the business/service layer? If so, you have polluted the design and mixed model concerns in with the controller. If you had to perform any bug fixes on the duplicated model code, you would have to update both copies identically to keep them in step, and so have broken modularity.


December 17th, 2009
4:15 pm
Using Eclipse/JSF/JPA with Tomcat 6

Posted under JPA

I’ve found that this is useful for prototyping/mockups of apps that are targeted for GlassFish. It is possible to run JPA with Eclipselink outside an EJB container, so a JSF/JPA app can be run under Tomcat, resulting in less overhead and faster turnaround of changes. For a while I’ve had to use a PC which is RAM limited, so developing/prototyping with Eclipse and Glassfish has been slow. Turning around deployments/changes takes time, and I’ve found the synchronisation between Eclipse and Glassfish to be a bit fussy – sometimes Eclipse misunderstands the state of Glassfish, needing a Glassfish restart which again takes valuable time.

Note the following points on doing this:-

1/ My application stack looks similar to the ‘real’ one, but my service layer/domain objects are just POJOs which are explicitly created or injected via faces-config.xml (a sketch follows below). This Oracle example details the various IoC features of the JSF managed bean facility.
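
For illustration, a minimal faces-config.xml fragment for this kind of injection might look like the following (the bean and class names are hypothetical) :-

<!-- A POJO service injected into a session scoped model bean
     by the JSF managed bean facility (names are illustrative). -->
<managed-bean>
 <managed-bean-name>personService</managed-bean-name>
 <managed-bean-class>uk.co.example.service.PersonServiceImpl</managed-bean-class>
 <managed-bean-scope>application</managed-bean-scope>
</managed-bean>

<managed-bean>
 <managed-bean-name>personModelBean</managed-bean-name>
 <managed-bean-class>uk.co.example.web.PersonModelBean</managed-bean-class>
 <managed-bean-scope>session</managed-bean-scope>
 <managed-property>
  <property-name>personService</property-name>
  <value>#{personService}</value>
 </managed-property>
</managed-bean>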

2/ persistence.xml needs setting up differently to run JPA outside the container. I have found it easiest to specify the database connection explicitly, rather than using a data source. In Eclipse, you can use the persistence.xml editor to populate the connection properties directly from the connection Eclipse uses for its entity validation. Whilst you can do JNDI lookups of a data source stored in Tomcat, the naming context used is different, as it makes use of the Enterprise Naming Context (ENC), i.e. “java:comp/env”. This example details how to do it if you want to go that way. A sample persistence.xml follows (with entries commented out where they have been replaced for use with Tomcat) :-

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="1.0" xmlns="http://java.sun.com/xml/ns/persistence"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 xsi:schemaLocation="http://java.sun.com/xml/ns/persistence
 http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd">

 <!--<persistence-unit name="SentryPrototype" transaction-type="JTA">-->
 <persistence-unit name="SentryPrototype" transaction-type="RESOURCE_LOCAL">

   <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>

   <!--<jta-data-source>jdbc/JPATestPool</jta-data-source>-->

   <class>uk.co.salientsoft.jpatest.domain.UserInfo</class>
   <properties>
    <property name="eclipselink.logging.level" value="INFO" />

    <!-- <property name="eclipselink.target-server" value="SunAS9" />-->
    <property name="eclipselink.target-server" value="None" />

    <property name="eclipselink.session-name" value="SentryPrototypeSession" />
    <property name="eclipselink.target-database" value="Oracle" />   
    <property name="eclipselink.ddl-generation.output-mode" value="database" />

    <property name="eclipselink.jdbc.url" value="jdbc:oracle:thin:@localhost:1521:xe"/>
    <property name="eclipselink.jdbc.user" value="sentry"/>
    <property name="eclipselink.jdbc.password" value="sentry"/>
    <property name="eclipselink.jdbc.driver" value="oracle.jdbc.OracleDriver"/>

   </properties>
  </persistence-unit>
</persistence>
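
Outside the container there is no injected EntityManager, so the application must bootstrap its own factory from this persistence unit. A minimal sketch (error handling elided) :-

// Requires javax.persistence.{EntityManager, EntityManagerFactory, Persistence}.
EntityManagerFactory emf = Persistence.createEntityManagerFactory("SentryPrototype");
EntityManager em = emf.createEntityManager();
try {
 em.getTransaction().begin();
 // ... work with entities such as UserInfo here ...
 em.getTransaction().commit();
} finally {
 em.close();
}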

 

3/ Per this Eclipse tutorial :-
Tomcat does not have an EJB container, so you must add EJB 3.0/JPA 1.0 capability for EclipseLink JPA by placing the specification persistence.jar into the container lib directory $TOMCAT_HOME/lib. It is recommended to put persistence_1_0.xsd there as well. It is not recommended that the WAR include its own version of persistence.jar; your Eclipse project should reference, but not include, this jar and xsd.

Since Tomcat does not include an EJB container, eclipselink.jar may be placed either in the web application’s WEB-INF/lib directory, or higher up the classloader in the container lib directory $TOMCAT_HOME/lib, where all web applications can access it. Do not split eclipselink.jar from the javax.persistence jar – keep them together, preferably in $TOMCAT_HOME/lib.

4/ Tomcat has its own dependencies on the Expression Language (EL) web libraries. These should be removed from WEB-INF/lib when deploying to Tomcat, as pointed out here. (If you do not do this, the application will crash with duplicate class loading exceptions.) This may be done in Eclipse on the Java Server Faces facet page – open the project properties for the project, and select Java Server Faces under Project Facets. Untick the entry for Standardized EL Library v1.0.

Then save the changes and redeploy the application.

For more information, the Eclipse tutorial referenced in 3/ goes into greater depth on the subject. You can also find a list of the available Eclipse JPA tutorials here.

5/ When running xxxxxx


December 15th, 2009
11:09 am
Accessing JPA Entity Metadata

Posted under JPA

It is often useful to access the entity definition metadata. A classic example would be to access the actual maximum defined length of a string column from the code. Such a definition should be centralised to avoid multiple definitions of a constant value, which would otherwise require multiple changes if the value were increased. There are several approaches to this, but unfortunately not an obvious ‘silver bullet’ which is straightforward and also performant.

1/ Access Database Metadata via JDBC

You can access the actual database definitions via a JDBC connection. This would have a performance penalty, but would make sense if the database is your master definition source, i.e. you are starting with a database definition and then creating entities based on the database. You can get a JDBC connection from JPA via the code below, as described at the end of this post here :-

JPA 2.0

entityManager.getTransaction().begin();
java.sql.Connection connection = entityManager.unwrap(java.sql.Connection.class);
...
entityManager.getTransaction().commit();

 

JPA 1.0

// Requires the EclipseLink-specific classes:
// org.eclipse.persistence.internal.sessions.UnitOfWorkImpl
// org.eclipse.persistence.jpa.JpaEntityManager
entityManager.getTransaction().begin();
UnitOfWorkImpl unitOfWork = (UnitOfWorkImpl)((JpaEntityManager)entityManager.getDelegate()).getActiveSession();
unitOfWork.beginEarlyTransaction();
java.sql.Connection connection = unitOfWork.getAccessor().getConnection();
...
entityManager.getTransaction().commit();

 

This post here details how to get metadata from a connection. The sample code from the post follows :-

Get Column Size

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class Main {
  public static void main(String[] args) throws Exception {
    Connection conn = getMySqlConnection();
    System.out.println("Got Connection.");
    Statement st = conn.createStatement();
    st.executeUpdate("drop table survey;");
    st.executeUpdate("create table survey (id int,name varchar(30));");
    st.executeUpdate("insert into survey (id,name ) values (1,'nameValue')");

    ResultSet rsColumns = null;
    DatabaseMetaData meta = conn.getMetaData();
    rsColumns = meta.getColumns(null, null, "survey", null);
    while (rsColumns.next()) {
      String columnName = rsColumns.getString("COLUMN_NAME");
      System.out.println("column name=" + columnName);
      String columnType = rsColumns.getString("TYPE_NAME");
      System.out.println("type:" + columnType);
      int size = rsColumns.getInt("COLUMN_SIZE");
      System.out.println("size:" + size);
      int nullable = rsColumns.getInt("NULLABLE");
      if (nullable == DatabaseMetaData.columnNullable) {
        System.out.println("nullable true");
      } else {
        System.out.println("nullable false");
      }
      int position = rsColumns.getInt("ORDINAL_POSITION");
      System.out.println("position:" + position);
      
    }

    st.close();
    conn.close();
  }

  private static Connection getHSQLConnection() throws Exception {
    Class.forName("org.hsqldb.jdbcDriver");
    System.out.println("Driver Loaded.");
    String url = "jdbc:hsqldb:data/tutorial";
    return DriverManager.getConnection(url, "sa", "");
  }

  public static Connection getMySqlConnection() throws Exception {
    String driver = "org.gjt.mm.mysql.Driver";
    String url = "jdbc:mysql://localhost/demo2s";
    String username = "oost";
    String password = "oost";

    Class.forName(driver);
    Connection conn = DriverManager.getConnection(url, username, password);
    return conn;
  }

  public static Connection getOracleConnection() throws Exception {
    String driver = "oracle.jdbc.driver.OracleDriver";
    String url = "jdbc:oracle:thin:@localhost:1521:caspian";
    String username = "mp";
    String password = "mp2";

    Class.forName(driver); // load Oracle driver
    Connection conn = DriverManager.getConnection(url, username, password);
    return conn;
  }
}

 

2/ Access JPA Annotations via Reflection

Alternatively, you could use Reflection to read the JPA annotations via introspection. This post here describes how to read annotations with Reflection, but note that it is not a JPA example – it is a ‘roll your own persistence provider’ example – although the introspection techniques work the same way. The downside of this technique is that all that introspection seems laborious and slow if all you want is the length of a few string columns.
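
For example, a minimal sketch of reading a column length from the @Column annotation via Reflection (the class and method names here are assumed for illustration) :-

import java.lang.reflect.Field;
import javax.persistence.Column;

public class ColumnLengthReader {
 /* Returns the declared @Column length for the named field,
    or the JPA default (255) if no @Column annotation is present. */
 public static int getColumnLength(Class<?> entityClass, String fieldName)
   throws NoSuchFieldException {
  Field field = entityClass.getDeclaredField(fieldName);
  Column column = field.getAnnotation(Column.class);
  return (column != null) ? column.length() : 255;
 }
}

// e.g. int codeLength = ColumnLengthReader.getColumnLength(AppRole.class, "Code");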

 

3/ Use Constants in Inner Classes to roll your own master Metadata Definitions

Another way, which makes sense if the entity definitions are the ‘master’ and you are creating the database tables from the entities, is just to define some inner classes in the entity class containing static constants for the metadata you need access to. You can then use these definitions as the master, and refer to them both in the annotations and in the calling code. You can also use an inheritance hierarchy to standardise the metadata you are using, and centralise common definitions. As you are rolling your own metadata, you do not have to stick to the JPA annotations; you are free to invent any other constants that you need to tell you something about your entities/columns. The example which follows demonstrates column metadata, but you could of course apply the same technique to entity (table) level metadata if required.

In this example, a number of entities have Code fields containing a coded string which might be 30 characters, and Description fields which might be 255 characters. The definitions for “Code” and “Description” can be centralised in a common superclass. In each entity, for every field for which you want to expose the metadata, you define a public inner class named <FieldName>Meta just above the field and annotation declarations. You can then access the constants in the inner class statically from outside the entity. (If required, you could also provide a statically initialised field containing a reference to the <FieldName>Meta class, and create a getter for it if you need getter access as well, e.g. for managed access via a framework such as JSF – a sketch of this follows the code samples – although this involves non-static access to a static value, which Eclipse/Java will whinge about.) The following code samples illustrate the technique:-

Class ColumnMeta

package uk.co.salientsoft.jpa;

/* This base class defines the metadata used. */

public abstract class ColumnMeta {
 public static final String NAME = null;
 public static final int LENGTH = 255;
}

 

Class ColumnMetaCode

package uk.co.salientsoft.test.domain;
import uk.co.salientsoft.jpa.ColumnMeta;

public abstract class ColumnMetaCode extends ColumnMeta {
 public static final String NAME = "Code";
 public static final int LENGTH = 30;
}

 

Class ColumnMetaDescription

package uk.co.salientsoft.test.domain;
import uk.co.salientsoft.jpa.ColumnMeta;

public abstract class ColumnMetaDescription extends ColumnMeta {
 public static final String NAME = "Description";
 public static final int LENGTH = 255;
}

 

Entity Class AppRole

package uk.co.salientsoft.test.domain;

import java.io.Serializable;
import java.lang.String;
import javax.persistence.*;

@Entity
public class AppRole implements Serializable {

 private static final long serialVersionUID = 1L;

 @Id
 @GeneratedValue(generator="AppRoleID")
 private long AppRoleID;
 private long AppID;

 public class CodeMeta extends ColumnMetaCode {};
 @Column(length=CodeMeta.LENGTH)
 private String Code;
 
 public class DescriptionMeta extends ColumnMetaDescription {};
 @Column(length=DescriptionMeta.LENGTH)
 private String Description;

 public AppRole() {
  super();
 }  
 public long getAppRoleID() {
  return this.AppRoleID;
 }
 public void setAppRoleID(long AppRoleID) {
  this.AppRoleID = AppRoleID;
 }  
 public long getAppID() {
  return this.AppID;
 }
 public void setAppID(long AppID) {
  this.AppID = AppID;
 }  
 public String getCode() {
  return this.Code;
 }
 public void setCode(String Code) {
  this.Code = Code;
 }  
 public String getDescription() {
  return this.Description;
 }
 public void setDescription(String Description) {
  this.Description = Description;
 }
}

 

Example MetaData References

codeLength = AppRole.CodeMeta.LENGTH;
descLength = AppRole.DescriptionMeta.LENGTH;
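
As mentioned above, if getter access is needed (e.g. for JSF value expressions, which cannot reference statics directly), instance getters can delegate to the static constants. A minimal sketch (the getter names are hypothetical) :-

// Added to AppRole: instance getters delegating to the static constants,
// so a JSF page can bind e.g. maxlength="#{appRole.codeMaxLength}".
public int getCodeMaxLength() {
 return CodeMeta.LENGTH;
}
public int getDescriptionMaxLength() {
 return DescriptionMeta.LENGTH;
}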


December 9th, 2009
6:45 pm
Creating database tables from Entities in Eclipse

Posted under JPA

This can be done a couple of different ways :-

  1. From the Project context menu, select JPA Tools/Create Tables from Entities. This will drop and create the whole schema based on the entity definitions.
  2. Alternatively, you can configure persistence.xml to drop and create the whole schema before each run, as detailed in the example file here (the relevant properties are shown below).
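
For reference, the persistence.xml properties in question (they also appear, commented out, in the sample in the next post below) are :-

<property name="eclipselink.ddl-generation" value="drop-and-create-tables" />
<property name="eclipselink.ddl-generation.output-mode" value="database" />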


December 9th, 2009
6:06 pm
Using Oracle Specific features in JPA with Eclipselink

Posted under JPA

To use Oracle specific features in JPA with Eclipselink, such as using sequences for ID generation as described in this example here, it is necessary to set the correct property in persistence.xml to ensure that JPA/Eclipselink knows that an Oracle database is in use. By default, it does not discover this automatically, and resorts to using table-based ID generation even if you turn sequences on as per the above example (a sketch of the annotation side follows).
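
For reference, sequence-based ID generation on the entity side looks something like this – a sketch using the standard JPA annotations, with an illustrative sequence name (the linked example may differ in detail) :-

@Id
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "UserIDSeq")
@SequenceGenerator(name = "UserIDSeq", sequenceName = "UserID", allocationSize = 1)
private long userID;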

A sample persistence.xml with the correct properties follows :-

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="1.0"
 xmlns="http://java.sun.com/xml/ns/persistence"
 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
 xsi:schemaLocation="http://java.sun.com/xml/ns/persistence
 http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd">
 <persistence-unit name="JPATest" transaction-type="RESOURCE_LOCAL">
  <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>
  <non-jta-data-source>jdbc/JPATestPool</non-jta-data-source>
  <mapping-file>META-INF/orm.xml</mapping-file>
  <class>uk.co.salientsoft.jpatest.domain.UserInfo</class>
  <properties>
   <property name="eclipselink.target-server" value="SunAS9" />
   <property name="eclipselink.logging.level" value="FINEST" />
   <property name="eclipselink.session-name" value="JPATestSession" />

   <!-- Ensure Eclipselink knows we have an Oracle Database
        so that Oracle specific features work correctly -->
   <property name="eclipselink.target-database" value="Oracle"/>

   <!-- This will cause EclipseLink to create the database schema
        automatically on every run.
        You can also do this from the JPA Tools/Create Tables from Entities
        project context menu option in Eclipse, which may well be preferable-->
   <!--
   <property name="eclipselink.ddl-generation" value="drop-and-create-tables" />
   <property name="eclipselink.ddl-generation.output-mode" value="database" />
   -->

  </properties>
 </persistence-unit>
</persistence>


December 4th, 2009
5:09 pm
Using JPA Annotations with XML

Posted under JPA

It is possible to use both annotations and descriptors in orm.xml to define entity mappings with JPA. The debate on which way to go has become very heated, but in my opinion there is a strong case for both.

  1. Annotations are best for metadata which you would consider closely bound to the code. Elements which are closely bound together are then also in close proximity, which promotes clarity and maintainability.
  2. XML is best for metadata which may change independently of the code, and provides the ability to isolate platform-specific issues from the code, for example to enhance persistence provider independence.

The OO design principle “Separate what changes from what stays the same” comes to mind clearly here.

I found the following simple example helpful in this context. The ID for the entity is defined via annotations in the code. However, I have defined the ID generator separately in orm.xml. The generator name is fixed and referred to by the @GeneratedValue annotation, but the actual generator is in orm.xml and may be specified either as a table generator (which is database independent), or as a sequence generator which allows me to take advantage of Oracle’s sequences for primary key generation. Either generator may be defined in orm.xml, but the code does not change. This gives database independence, whilst still allowing leverage of the enhanced features of a specific database platform. In the example below, for illustration, orm.xml contains both generators and one has been commented out.

Note that to enable sequence generation in Oracle for this example to work, the correct properties must be set in persistence.xml, as by default Eclipselink will use table based ID generation even if you turn sequence generation on. The way to do this is detailed in this post here.

Class UserInfo

package uk.co.salientsoft.jpatest.domain;
import java.io.Serializable;
import javax.persistence.*;

@Entity
public class UserInfo implements Serializable {

 @Id
 @GeneratedValue(generator="UserID")
 private long userID;

 private String userName;
 private static final long serialVersionUID = 1L; 

 public UserInfo() {
  super();
 }  
 public long getUserID() {
   return this.userID;
 }
 public void setUserID(long userID) {
  this.userID = userID;
 }   
 public String getUserName() {
   return this.userName;
 }
 public void setUserName(String userName) {
  this.userName = userName;
 }
}

 

orm.xml

<?xml version="1.0" encoding="UTF-8"?>
<entity-mappings version="1.0" xmlns="http://java.sun.com/xml/ns/persistence/orm"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://java.sun.com/xml/ns/persistence/orm
    http://java.sun.com/xml/ns/persistence/orm_1_0.xsd">

 <!--<table-generator name="UserID" />-->

 <sequence-generator name="UserID" sequence-name="UserID"/>

</entity-mappings>


December 4th, 2009
4:30 pm
OSGi

Posted under Java

OSGi is an acronym for the Open Services Gateway Initiative, an open standards organisation formed to promote the specification of a Java-based services platform that can be remotely managed. The acronym is now obsolete, and the organisation is instead called the OSGi Alliance.

Eclipse is an OSGi compliant platform, and uses OSGi to manage its services and plugins. An OSGi plugin is quickly recognisable by the fact that its library jars are broken down to a fine-grained level to allow flexible deployment, and have long names consisting of a package prefix and version number information. A typical example of an OSGi style jar is this core Java persistence jar :-

org.eclipse.persistence.core_1.1.3.v20091002-r5404.jar

Glassfish V3 is also OSGi compliant, but Glassfish V2.1 and earlier are not.

An overview and details of the OSGi concept and specification may be found on Wikipedia here.


December 4th, 2009
3:33 pm
Configuring Libraries for Eclipse Plugins

Posted under Eclipse

If you want to configure the libraries for use with a particular Eclipse plugin, this can seem rather a mystery at first. The traditional way in Eclipse to configure libraries for a project is to open the project properties, select Java Build Path in the list on the left, and visit the Libraries tab in the pane on the right. You can then use the “Add Jars” or “Add External Jars” buttons to add jars to the project. To improve project portability across different development platforms, you can also add Classpath Variables, which can be defined appropriately for each platform to isolate a project from platform-specific jar locations etc. An example of this screen, with the JSF and ICEfaces entries expanded, is here :-

However, whilst you will see entries for plugins such as Eclipselink and ICEfaces on this screen, it does not provide any means of configuring them. Libraries for such plugins use an Eclipse feature called a Classpath Container. This allows the library configuration to be completely dynamic, as it is handled in code via a class: the configuration can change at any time in response to project changes, because every reference to the classpath container goes via the code rather than being static. To configure plugin libraries, therefore, you need to visit the dialogs for the Project Facets, which is where plugins are configured for the project. This post here, which details how to create a JPA-enabled EJB, shows this in action for JPA.

Another example would be to configure the libraries for JSF and ICEfaces. You can do this as follows :-

Open the Project Properties dialog for the project, and select the Java Server Faces option under Project Facets in the left hand pane. This lists all the JSF and ICEfaces libraries available which may be selected/deselected.

The buttons on the right of the libraries frame allow management and downloading of the libraries. Clicking the Manage Libraries button allows configuration of the libraries available for selection in the project facets dialog above.

This is, for example, where you would upgrade the ICEfaces libraries to a new version. You can keep both versions, or remove the old ones once you no longer need them. You can then select the new versions for that particular project in the parent Project Facets dialog.

Note that one particular issue arises when creating a new Eclipse workspace, as it is useful to migrate all the library definitions to the new workspace to avoid having to set them up manually or download them again. This post describes how you can do this by exporting and importing workspaces preferences. The post also discusses the issue of library location when sharing libraries – this is important as by default you would end up with one workspace sharing libraries within the folder structure of another workspace. My typical development folder structure is detailed at the end of this post.

If you subsequently return to view the libraries on the Libraries tab of the Java Build Path screen, you will then see the results of your efforts – the libraries for the plugins will have changed in accordance with your actions.

The dialogs used for plugin configuration do vary, and are certainly not always intuitive or consistent – in these examples, JPA was configured via the Java Persistence dialog in the project properties, or via a link on the JPA facet screen which is not always present. In contrast, the JSF libraries (including the ICEfaces libraries) are configured via the Java Server Faces option under Project Facets. Even though there is an ICEfaces option under Project Facets, in my Eclipse 3.5 Galileo this displays “unknown library configuration”. However, the configuration is all there if you are willing to hunt around a bit for it!
