Blog Archives

April 12th, 2011
2:21 pm
Composite Components–Best Practice

Posted under JSF

Update

The original post below mentioned at the beginning #{cc.attrs.clientId}. Whilst this does work and is mentioned on the net, the correct form is #{cc.clientId}. I have amended accordingly.

Original Post

This post here is an IBM Developerworks article on CC best practice. Some comments/observations follow:-

  1. Wrap a CC in a div (NOT a panelgroup) and give it id="#{cc.clientId}" to give it the ID given to the actual composite. Note that if you use a div, it will NOT be given the CC's NamingContainer prefix (and hence will not have the "double ID" issue whereby the ID you specify is prefixed with the naming container prefix again).
  2. If you specify an h:panelGroup inside a CC without an ID, the resulting div is not rendered on the page at all.
  3. The id= and rendered= are both ‘standard’ attributes of a CC that you can use by virtue of the fact that a CC is a jsf component – you don’t need to roll your own ‘display=’ attribute for example. This is helpful, as otherwise you would need to wrap a CC in a div in order to give it the CC naming container ID as in 1/, but you would also need to wrap it additionally in an actual h:panelGroup in order to roll your own display= ‘rendered’ style attribute. This may very well be the reason why you get an illegal argument exception if you try to specify your own rendered= attribute on a CC (as per this Mantis issue).
  4. cc.id is a built in reference for the CC's declared ID without any naming container prefixes. cc.clientId is the full monty with all the prefixes. These are analogous to component.id and component.clientId which can be used for any JSF component. Sadly I have not seen any full documentation for the cc or component built in objects.
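Points 1 and 2 above can be sketched as follows in the composite implementation. This is an illustrative fragment only (the panelGroup content and attribute names are placeholders, not from the article):

```xml
<cc:implementation>
    <!-- Plain div: given exactly the ID below, with no second
         naming container prefix added -->
    <div id="#{cc.clientId}">
        <!-- An h:panelGroup inside a CC must be given an explicit id,
             or its div is not rendered at all -->
        <h:panelGroup id="content" layout="block">
            ...
        </h:panelGroup>
    </div>
</cc:implementation>
```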

The second part of the IBM Developerworks article is here. It details how to add Ajax behaviour to a CC, and in particular how to add Ajax behaviour to a component inside a CC from outside. This is useful for less complex CCs, and partners with the other features like retargeting which allows you to pass a listener to a CC which is attached to a component inside the CC.

For more complex CCs, I tend to take a different approach, as follows:-

  • I code a controller class in Java for use with the CC, and pass it in as an attribute of the CC (typically controller=). Any behaviour of components inside the CC such as Ajax and listeners are all handled by this controller bean.
  • The controller bean is injected by CDI as a @Dependent bean, and is typically injected into the page bean which acts as controller for the page. In this way, the lifecycle of the controller is tied to that of its containing bean, which is clear and exactly as it should be.
  • As the controller is a dependent bean, CDI allows it to be generic and to use parameterised types (you can only do this for a dependent bean). For example, I have my own breadcrumb control CC (see here) which has its own controller class. This accepts a list of crumbs, where a crumb is a parameterised type for the class.
  • A page may have multiple instances of such a CC, in which case it just injects multiple controllers, one per CC that it manages.
  • In order to provide event handling, I take a simple approach. A controller just has an event interface which it calls out to in order to pass on any relevant events such as listener or action events. Typically, these will mimic the interface for a listener or action and just pass it on. I generally add an additional argument to the start of all such calls, containing a reference to the controller itself (i.e. the controller passes this in the argument). This is useful in cases where for example the containing page bean itself implements the event interface and handles the events, as where there are multiple components/controllers on a page this argument can be used to determine which one the event was for.
  • A key concept with this technique, which works better for more complex cases, is that I am not just exposing internal component behaviour of components in the CC to the outside as you might using JSF retargeting for example. Rather, I am using the controller to provide its own abstraction of the internal behaviour and to provide an external interface which reflects its overall function. In other words, I can map the internal behaviour to the external interface in any way I choose, which gives me much greater control and design flexibility.
  • Another approach would be to use a JSF CC Backing Component, as detailed in Core Javaserver Faces on p373. This allows Java behaviour to be added to a CC transparently to the CC’s clients. However, it does involve using some of the JSF internal interfaces that would also be used to develop full blown custom components. In that sense it provides a halfway house to a full custom component implementation. I have deliberately not used this approach in my use cases – the ‘controller’ approach I have outlined is simple and clear, and easy to develop. The controller is not hidden, being declared as a dependent by a calling backing bean, and being passed in to the CC on the facelet page. I feel that in my use cases the visibility, clear lifecycle control, and use of a separate controller instance per component on the page are a benefit and keep things clear and simple.  In design pattern terms, JSF uses the MVC pattern. The JSF components form the view, and my controllers and page beans form the controller. It could be said that a backing component for a CC is part of the view rather than part of the controller, as it uses internal JSF interfaces used by JSF components to implement the view part of the pattern.
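The controller/event-interface idea above can be sketched in plain Java as follows. All names here are illustrative (a hypothetical breadcrumb controller); the CDI @Dependent annotation is omitted so the sketch runs standalone:

```java
import java.util.ArrayList;
import java.util.List;

// Event interface: mimics a JSF listener but adds the controller itself
// as the first argument, so a page bean managing several controllers can
// tell which one the event came from.
interface BreadcrumbEvent<C> {
    void crumbSelected(BreadcrumbCtrl<C> source, C crumb);
}

// The controller passed into the CC (e.g. via a controller= attribute).
class BreadcrumbCtrl<C> {
    private final List<C> crumbs = new ArrayList<C>();
    private BreadcrumbEvent<C> listener;

    void init(BreadcrumbEvent<C> listener) {
        this.listener = listener;
    }

    void push(C crumb) {
        crumbs.add(crumb);
    }

    // Invoked by a component inside the CC; the controller passes 'this'
    // on so the receiver can identify which controller fired the event.
    void onSelect(C crumb) {
        if (listener != null) {
            listener.crumbSelected(this, crumb);
        }
    }
}
```

A page bean owning two such controllers would implement BreadcrumbEvent once and compare the source argument against its injected controller references to see which breadcrumb was clicked.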


April 4th, 2011
2:18 pm
How to obtain the current default directory in Java

Posted under Java

This simple line of code does it, by looking up the current directory (“.”) and fetching its canonical path :-


System.out.println(new File(".").getCanonicalPath());
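As a small aside, the JVM also records its working directory in the "user.dir" system property, which gives the same answer without touching the filesystem. A minimal sketch showing both:

```java
import java.io.File;
import java.io.IOException;

public class CurrentDir {

    // Lookup via the "user.dir" system property: no filesystem call needed
    static String currentDir() {
        return System.getProperty("user.dir");
    }

    public static void main(String[] args) throws IOException {
        // Canonical path of ".", as in the one-liner above
        System.out.println(new File(".").getCanonicalPath());
        System.out.println(currentDir());
    }
}
```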


March 23rd, 2011
3:24 pm
Java Generics–Class literals as run time type tokens

Posted under Java

This post refers to the section on page 16 of the Generics tutorial document by Gilad Bracha.

The example here uses the Cojen dynamic bytecode generation library in order to create custom sort comparators. The advantage of Cojen is that you end up with a fully compiled comparator and therefore do not incur the performance hit of reflective code. Furthermore, comparators generated by Cojen can be cached in a map so that they are only generated once.

This particular example borrows some code originally used for Icefaces table sorting, which I adapted for use with Primefaces. Whilst Primefaces does have its own table sorting, I had some issues with it. I wanted visibility of the current sort column and direction, and I wanted control over the sorting – I wanted to reapply the sort to a table when rows were added to it from another table, so that it was always maintained in order. Furthermore, I wanted to maintain the current sort column and direction that had been selected by the user via the column headers when applying the sort when new rows were added. As I had trodden this route before for Icefaces, it was straightforward to move the code over and incorporate it into a standard table controller class which I was already developing.

One issue which I had not previously bottomed out in the Icefaces version was making the code fully generic and fully eliminating all the generic warnings when using a class literal as a run time type token. With Cojen, you call a static forClass method on its BeanComparator to construct a comparator, passing in the class and the comparator ordering details:-

if (beanComparator == null) {
    beanComparator = BeanComparator.forClass(rowClass).orderBy(key);               
    comparatorMap.put(key, beanComparator);
}   

In my first Icefaces attempt, I pulled the first row out of the current row list for the table, and passed that to Cojen to create the class. The problem with this was that I could not do it generically this way. My second attempt, with the Primefaces version, passed the class in to an init method generically. As this tied up all the generics loose ends correctly, everything was happy and the class was recognised as having the correct parameterised type. The cost of this was the need to pass it explicitly as an argument to an init method (or a setter), but this was simple and made the parameterised type clear and explicit. This is the correct approach and one that I will use from now on.

The following code fragments illustrate the mechanism:-

Call Site


    private @Inject TableCtrl<Person, RowMetadata> personTable;

        personTable.init(Person.class, RowMetadata.class, this);


TableCtrlImpl.java

@Dependent
public class TableCtrlImpl<C, D extends RowMetadata> implements Serializable, TableCtrl<C, D>, Iterable<C> {

    private Class<C> rowClass;

    private Map<String, BeanComparator<C>> comparatorMap = new HashMap<String, BeanComparator<C>>();

    @Override public void init(Class<C> rowClass, Class<D> rowMetadataClass, TableCtrlEvent<C, D> tableCtrlEvent) {
        rowMeta = new RowMetaMap<C, D>();
        this.rowClass = rowClass;
        this.rowMetadataClass = rowMetadataClass;
        this.tableCtrlEvent = tableCtrlEvent;
    }

    @Override public void sort() {
        if (sortColumn != null && !rows.isEmpty()) {
            String key = (sortAscending ? "" : "-") + sortColumn;
            BeanComparator<C> beanComparator = comparatorMap.get(key);
            if (beanComparator == null) {
                beanComparator = BeanComparator.forClass(rowClass).orderBy(key);
                comparatorMap.put(key, beanComparator);
            }

            Collections.sort(rows, beanComparator);
        }
    }
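For readers without Cojen to hand, the same keyed-cache convention (a leading "-" on the key meaning descending) can be sketched with plain java.util.Comparator. This only illustrates the caching pattern, not Cojen's generated bytecode, and it uses the Comparator.comparing helper which arrived in Java 8, after this post; the Person class and its name property are hypothetical:

```java
import java.util.Comparator;
import java.util.HashMap;
import java.util.Map;

// Hypothetical row class with a single sortable property.
class Person {
    final String name;
    Person(String name) { this.name = name; }
}

// Comparators are built once per sort key and then reused from the map,
// mirroring the caching in sort() above. Only the "name" property is
// wired up in this sketch.
class PersonComparators {
    private final Map<String, Comparator<Person>> cache =
            new HashMap<String, Comparator<Person>>();

    Comparator<Person> forKey(String key) {
        Comparator<Person> comparator = cache.get(key);
        if (comparator == null) {
            Comparator<Person> byName = Comparator.comparing((Person p) -> p.name);
            // A leading "-" on the key selects the descending order
            comparator = key.startsWith("-") ? byName.reversed() : byName;
            cache.put(key, comparator);
        }
        return comparator;
    }
}
```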


March 6th, 2011
8:53 pm
Using the Decorator pattern when extending the underlying Class

Posted under Java

In some cases, a Decorator just wraps an underlying class and does not add any interface changes of its own – it just changes behaviour without adding any new methods or properties. An example of this would be this ‘coffee’ example in the chapter on the Decorator pattern from the excellent Head First Design Patterns book by Eric and Elisabeth Freeman. In this case, it is not necessary to have visibility of individual decorators in the chain – they all add their required changes to the price and description of the coffee, and calling methods on the outermost decorator causes all the inner ones to be called to do their thing.

In other cases, however, it is necessary to add new methods and properties to a Decorator. An example from later in the same chapter would be the LineNumberInputStream decorator in the standard java.io package. This performs line numbering on the lines that pass through it, so it is obviously necessary to use the properties of that particular decorator in the chain to fetch the line numbers.
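A small sketch of this kind of decorator in action, using LineNumberReader (the reader-based counterpart of LineNumberInputStream, which is deprecated in its favour). The decorator adds a getLineNumber() property that the underlying Reader does not have:

```java
import java.io.IOException;
import java.io.LineNumberReader;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

public class LineNumberDemo {

    static List<String> numberedLines(String text) {
        List<String> result = new ArrayList<String>();
        // Decorate a plain StringReader with line numbering
        LineNumberReader reader = new LineNumberReader(new StringReader(text));
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                // The decorator-specific property: the current line number
                result.add(reader.getLineNumber() + ": " + line);
            }
        } catch (IOException e) {
            throw new RuntimeException(e); // cannot happen for a StringReader
        }
        return result;
    }

    public static void main(String[] args) {
        for (String numbered : numberedLines("first\nsecond\nthird\n")) {
            System.out.println(numbered);
        }
    }
}
```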

Therefore, in cases like this, client code will need to refer to particular decorators in the chain to access such methods and properties. This in no way invalidates the decorator pattern – it is important to note that we have still extended the underlying class using the open/closed principle, i.e. we have not modified it but decorated it with one or more new classes. Also and equally importantly, the various decorators in the chain are completely decoupled both from the underlying class and from each other as well.

There will be occasions where a decorator needs to extend the behaviour not just of the underlying base class, but also some of the behaviour of another decorator. In this case, such a decorator will extend the decorator it is adding behaviour to, rather than the underlying base class. The general guiding principles here are as follows:-

A decorator should extend the lowest level class it needs to in order to perform its function. For example, if A is a base class and B and C are decorators, C should not extend B unless it actually has dependencies on B’s additional behaviour, as otherwise we are introducing unnecessary coupling between objects that do not need to be coupled.

Note that all the decorators can and should be generic. Any subclass of the class a decorator extends may be passed to it, and may be retrieved generically. For example, if class ClassB decorates class ClassA, then we would declare B as follows:-

public class ClassB<C extends ClassA> extends ClassA {

private C decoratedClass;

}

This allows ClassB full access to all of the ClassA methods and properties, even when it is passed an arbitrary subclass of ClassA. Also, we have ensured that, via a ClassB instance, we can generically access the particular subclass of ClassA used as the decorated class.

When we need to access particular decorators in a chain, we can either remember the object references used when we created them, or we can fetch references to them from their container decorator. In this case, we obviously have to keep knowledge of how the decorator chain was set up, and what was decorated in what order.
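Putting the pieces above together, here is a runnable sketch. ClassA and the generic ClassB follow the text; ClassC and the describe() method are added purely for illustration, and getDecoratedClass() shows the generic access to whatever subclass was wrapped:

```java
class ClassA {
    String describe() { return "base"; }
}

class ClassB<C extends ClassA> extends ClassA {
    private final C decoratedClass;

    ClassB(C decoratedClass) { this.decoratedClass = decoratedClass; }

    // Generic access to the particular subclass of ClassA decorated here
    C getDecoratedClass() { return decoratedClass; }

    @Override String describe() { return decoratedClass.describe() + "+B"; }
}

class ClassC<C extends ClassA> extends ClassA {
    private final C decoratedClass;

    ClassC(C decoratedClass) { this.decoratedClass = decoratedClass; }

    C getDecoratedClass() { return decoratedClass; }

    @Override String describe() { return decoratedClass.describe() + "+C"; }
}
```

Calling describe() on the outermost decorator walks the whole chain, while getDecoratedClass() lets the client reach a particular decorator when it needs that decorator's own additions.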

Decorators are a powerful pattern for flexible behaviour extension, but can have their downsides. You can end up with a large number of small decorator classes and large decorator chains – just look at the java.io package to see this. Also, you have to create all the delegated methods and properties for every decorator, although if you are using an IDE like Eclipse this can all be done automatically for you which is a big boon when using decorators heavily, and eliminates a lot of sources of error. Also, the major benefit compared with static extension of a class to add behaviour is that you can set up a decorator chain with any decorators you like in any order – illustrating again the benefits of favouring composition over inheritance.


December 12th, 2010
7:37 pm
Using the Enterprise Architect UML design tool

Posted under UML

Updated 11/12/2010

I’ve clarified some of the points re collection classes, code generation, and the Local Path feature.


Original Post (7/5/2010, 07:30)

I’m trialling Enterprise Architect from Sparx Systems at the moment, and have found the following points/tips/gotchas so far :-

Class Stereotypes

If you choose a stereotype such as entity which has its own image, this changes the appearance on the class diagram and prevents detail such as properties and operations being listed. You can either use another stereotype (or none at all – sometimes I have not, as it clutters up the simplicity of the diagram), or you can customise the appearance under Settings/UML to prevent the image being used.

Collection classes and generics for relationships

The collection class to be used can be set as a global default or per class:-

  • global default – visit Tools/Options, pick Java under source code engineering in the left hand treeview pane. Click the Collection Classes button to configure the defaults. Note that you can enable generics by using e.g. Collection<#TYPE#> as the definition, where #TYPE# is replaced by the appropriate class name. This is mentioned in the help but is not easy to dig up. You can set different collection classes for Default, Ordered, and Qualified.
  • Per Class setting – open the class details (double click on the class), select the details tab, and click the Collection Classes button to configure the class in the same way as with the global defaults (including generics using #TYPE#). **Important potential Gotcha** – when setting this up, it is easy to get confused. If Class A contains a collection of Class B, then to enable a collection declaration to appear in the Java code for Class A, you must set the appropriate collection class settings on Class B, not Class A. Setting up Class B will then cause that declaration to appear in Class A. Note that the per class settings override any global default that you have applied.
  • The trigger for generation of the collection declaration is to set the Multiplicity at the appropriate end of the (aggregation) relationship between the classes. So for example, an Application might have a 0-or-1 to many relationship with AppRole, i.e. an Application will hold a collection of AppRoles. The aggregation relationship will be drawn from the AppRole class to the Application class, such that the diamond arrowhead appears against the Application class, as that holds the collection. You would then double click the relationship, and under Source Role you would set a Multiplicity of 0..*, and under the Target Role you would set a multiplicity of 0..1. This will cause the collection declaration when the code is generated. Note that there are other settings on the source and target role pages for Role and alias, and derived, derived union, and owned. These do not appear to affect the code generation (at least in my simple examples).
  • Note that if you select ordered on the multiplicity you get the ordered type of collection class. In my case, I used global settings of Collection for unordered, List for ordered, and left the qualified as Collection. Using this I obtained a declaration of List in the generated code when I selected ordered on the multiplicity of the 0..* end of the relationship (Source Role in the above example).
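For reference, the kind of Java that might come out of the Application/AppRole example above could look like the sketch below. This is illustrative only, since the actual EA output depends on your collection class settings; the field initialiser and getter are added here for completeness and are not claimed to be verbatim EA output:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of generated code for: Application 0..1 <>--- 0..* AppRole,
// with the ordered collection class configured as List<#TYPE#>.
class AppRole {
}

class Application {
    // The collection declaration triggered by the 0..* multiplicity
    // on the AppRole end of the aggregation
    private List<AppRole> appRole = new ArrayList<AppRole>();

    public List<AppRole> getAppRole() {
        return appRole;
    }
}
```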

Enabling/Disabling Destructor (Java finalize) calls

These can also be configured globally or on a per class basis :-

  • Global default – visit Tools/Options, pick Object Lifetimes under source code engineering, and tick or untick generate destructor. Constructors can be similarly configured.
  • Per Class setting – right click on the class and select Generate Code. Click advanced, and the same page is displayed as for the global default. Constructors/destructors are configured in the same way as above, except that the settings are per class rather than global defaults.

When Java is the target language, enabling a destructor adds a call to finalize, as Java is garbage collected and does not have explicit destructors. finalize is called by the garbage collector to notify the object of garbage collection, but the call is made at the discretion of the garbage collector. For Java you would normally want to untick the generate destructor setting.

Auto generation of package statements in Java

To configure this, you select a package in the project browser hierarchy as the package root, by selecting Code Engineering on its context menu, then picking Set as Namespace Root. Then, all packages under that root package (but not including the root package itself) will have their names concatenated to form the Java package that is used in the code. You can use dots in the package name, so for example you could have a package hierarchy of Class Model/uk.co.salientsoft/appname/domain, where Class Model would be set as the root. Then, classes in the domain package would have a package in the code of uk.co.salientsoft.appname.domain, as you would expect. Note that there are some gotchas/possible bugs around this. I only managed to get it to work when I actually created a class diagram at each level and added the next sub package onto the diagram. This looks tidy/correct anyway, and is what the example does, but it is not enforced – you can have a package hierarchy in the browser without having all the intervening class diagrams with the next subpackage on, but if you do so, it appears that package statements are not then output in the Java code.

Code generation / directories

When generating code, this can be done at package level by selecting Code Engineering/Generate Source Code from a package context menu in the project browser. Ticking Include All Child Packages does just that, and generates the code recursively down the package tree. You can select Auto generate files to cause the file directory paths to be automatically derived from the package hierarchy, ignoring any path set for the individual classes. However, I found this awkward to set up – ideally I wanted a Java directory tree to match the package fields, but to do this seems to need a carefully crafted package hierarchy which may not match the way you want to work. For example, I had a package called uk.co.salientsoft, which ended up as a single directory name rather than being broken down into component fields. I did not want to add all the individual package levels for uk, co, and salientsoft. Also, when you Auto generate files, the root path under which the tree is created is not saved and you have to browse for it every time – not pleasant. Therefore, in the end I elected not to use Auto generated files, but to set the correct file path on each class. This can be done by right clicking the class and selecting Generate Code, then browsing for the desired path. Clicking Save will save the path, which is then used for code generation as above. Having done this, I finally achieved correct package statements in the code together with the correct directory hierarchy.

Using Local Paths

Local Paths offer a means of parameterising root directories, e.g. for code generation, such that the same EAP file can be used by multiple users with different local code generation directories.

On the surface it looks like a means of defining environment/path style variables within EA, which you then use when defining code generation locations. This is somewhat of a misconception, which is unfortunately reinforced by the section of the help on Local Paths:-

Developer A might define a local path of:
JAVA_SOURCE = "C:\Java\Source"

All Classes generated and stored in the Enterprise Architect project are stored as:
%JAVA_SOURCE%\<xxx.java>.

Developer B now defines a local path as:
JAVA_SOURCE = "D:\Source".

Now, Enterprise Architect stores all java files in these directories as:
%JAVA_SOURCE%\<filename>

On each developer’s machine, the filename is expanded to the correct local version.

In fact, whilst you do define a local path under Settings/Local Paths… and give the local path an ID (or variable name), you do not enter the ID anywhere when configuring directories for code generation – the local paths are applied behind the scenes in somewhat of a smoke and mirrors fashion. You do the following to set it up (the example is for setting Java code generation directories directly on classes):-

  • Select Code Generation… on the context menu for one or more classes on a class diagram. Browse for the actual desired full path for the class. You will note that at no stage is there a means to enter a variable ID in the path – you are just using the standard file browse dialog.
  • Under Settings/Local Paths… create a local path of type Java, browsing to a root directory which is either the path entered for the classes or a parent directory on the same path. Save it with a suitable name.
  • Clicking Apply Path will then apply the path to the code generation directories. It will return a count of the number of instances it applied (number of paths found). In my case, it initially found 0, but when I clicked Expand Path (which reverses the process, removing your local path and reverting to using normal full directories again), it said it removed 4 – the number of affected classes. When I clicked Apply Path again, it again found all 4, so I suspect the initial "found 0" was a bug and it had in fact worked – confusing, especially as it is all doing stuff behind the scenes!
  • Now, to see the effect of what you have done, change your local path definition in the local path dialog for the ID you created and applied, and save the change. Now look at the code generation folders for the classes again under Code Generation…  on the context menus for the classes, and you should find that the path has changed by magic to the one you just set in the Local Path dialog! Smoke and mirrors indeed.
  • Note that normally your local path definition will just be a parent directory which is shared by all your code generation subdirectories for the classes, as there will typically be a number of different subdirectories. EA correctly changes just the parent directory part of the path.
  • You can then if desired define a number of local paths for Java, e.g. “Freds path”, “Bills path”, and apply the desired one which will then take effect. You will not see any local path IDs appearing anywhere, they are stored in the path but applied behind the scenes.
  • Switching between local paths again feels strange. You would expect that if "Freds path" is in effect and you apply "Bills path" (which has a different definition), then the directories will all change. They do not! What you have to do is define "Bills path" initially with the same definition as "Freds path", then do an Expand Path for "Freds path" (whereupon you will see the correct count of classes affected). Then you can do an Apply Path for "Bills path", which will then be applied to the same number of classes. Finally, you can amend "Bills path", which is the one that currently has control, and all the class code generation directories will change.
  • One frustration of this method is that when adding new classes when a local path is in effect, you must still browse to the correct actual directory just as you would if no local path was present – you cannot enter the local path anywhere. It will however take effect ‘by magic’. In my case, I created a new class under the local path, and it immediately ‘took on’ the local path and would change its directory when I changed the local path definition, even though I had not done an ‘Apply Path’ to pick up the new class.
  • It appears therefore that new classes under the parent folder of a currently active local path pick up the effect of the local path by default.


My scenario is not the actual use case which will occur in practice, as I was doing this as a single user and making changes. In practice, if the EAP file was passed around, a different user would already have his source directories created according to his own standard. Nevertheless, I find the operation of this functionality to be strange, counterintuitive, and even slightly buggy in parts.

Having said that, in fairness, EA is the best UML tool I tested by far in my price range as an independent developer – $199 for the professional edition, as opposed to a 4 figure sum for other packages which I did not even bother to look at. The other features of EA are very fully functional, intuitive to use and bug free. Surprisingly, as per my other post here, all the other packages I looked at were very poor in comparison, so for my needs and price range, and from my testing, it is still very much a one horse race.


May 26th, 2010
9:35 pm
Notes on Using CDI/Weld

Posted under Java

CDI stands for Contexts and Dependency Injection, and is the standard for Java EE 6, being an important part of the Java EE 6 stack. The reference implementation of CDI is JBoss Weld. Full details, including documentation downloads for Weld, may be found here.

Here are some initial pointers/gotchas when starting out with CDI:-

  • Any bean which is @SessionScoped or @ConversationScoped must be serializable. If you do not obey this rule you get an error when trying to deploy your application. The easiest way to do this in Eclipse is to implement Serializable. This will give a warning about the need to declare serialVersionUID. Just right click the warning and pick quick fix. You will have the choice of a default serialVersionUID or a generated one. Normally you would want to use a generated one, which Eclipse will do automatically for you when you pick the option. (You can also use a Java command line tool to do it but there is no point as Eclipse is easier). This post here gives more details about serialVersionUID and serialization.
  • Any bean which uses parameterised types (i.e. generics) in its interface must be declared as @Dependent, i.e. it is only used and referenced from one client bean and its lifetime is dependent on the lifetime of that client bean – it is created when the client bean is created, and destroyed when it is destroyed. If you do not obey this rule you get an error when trying to deploy your application. An example of this situation might be a generic table bean which returns a list of rows of a generic type <E>, e.g. @Dependent public class TableBean<E> implements Serializable. This makes sense if you think about it – if it returns a generic type, it cannot be instantiated in isolation as it is the creating client which determines the actual type returned. Remember in this regard that CDI performs type safe injection in order to pick up as many errors as possible at compile time. Therefore, taking an example of our table bean, a consequence of this is that the table bean cannot be referred to directly from a JSF page. This is not the issue that it might appear. In practice, the table bean will be created by a client backing bean for the page. For example, a UserCredentials.jsf page might contain several tables, each backed by a different TableBean. The page will have its own backing bean, for example UserCredentialsBean, which will hold the backing state for the whole page. When referring to a table on the JSF page, it will always be referred to via the page backing bean, for example userCredentials.rightTable.rows. This is what you would naturally do anyway, and referring to the dependent bean via a property of its 'owning' parent is perfectly correct and does not violate any CDI rules. Note that it is also perfectly correct to assign such a reference to a JSF table value attribute for use in the table, and to then use the shorthand name assigned to the var attribute to refer to the bean. This is also fine and does not break any rules.
  • When referencing an EJB from a JSF backing bean, you should always use CDI annotations rather than the older @EJB ones, as the CDI ones are typesafe and allow all the additional flexibility of CDI, such as alternatives for swapping in mock test versions of EJBs by adding settings in beans.xml. An example of injecting a local EJB into a backing bean might be private @Inject SentryServiceLocal sentryService; assuming that SentryServiceLocal is the local interface for the EJB.
  • Given the choice, you should always use CDI for preference rather than a legacy feature. For example, JSF managed beans are still available for backwards compatibility in JSF (and even have some new features for JSF 2.0). However, you should ignore these and use CDI instead as it offers superior flexibility and type safety. Using CDI across the board gives additional benefits such as when grouping annotations using Stereotypes.
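A plain-Java sketch of the serializability point above: the CDI annotations are omitted so it runs standalone, and the bean and its field are hypothetical. Passivating a session-scoped bean effectively requires it to survive a serialization round trip like this one, which is why the rule exists:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Hypothetical session-scoped page bean, minus the @SessionScoped/@Named
// annotations so the sketch compiles without a container. An explicit
// serialVersionUID is declared, as recommended above.
class UserCredentialsBean implements Serializable {
    private static final long serialVersionUID = 1L;

    String userName = "fred";

    // Serialize this bean to bytes and read it back again
    UserCredentialsBean roundTrip() {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(bytes);
            out.writeObject(this);
            out.close();
            ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()));
            return (UserCredentialsBean) in.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}
```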


May 14th, 2010
6:45 pm
Javadoc creation and formatting

Posted under Java

Some tips and gotchas on this, as follows :-

1/ The Oracle documentation pages for Javadoc, complete with examples etc., may be found here

2/ An example of Javadoc applied to a project may be found in the PropertyTree project in the repository.

3/ JAutodoc is a useful Eclipse plugin. Whilst Eclipse can generate Javadoc comments at source creation time if you tick the box for it, and lets you add them afterwards one by one via ctrl/alt/J etc., using JAutodoc allows creation, addition or replacement of a full set of Javadoc comments for an entire project at any time. It also makes a better stab at guessing comments for properties etc. than Eclipse, by parsing words out of the code. Obviously there is still a lot of manual editing to be done, but I found it a useful addition. Note that JAutodoc is available on the Package Explorer context menu for a project, but not on the Project Explorer one.

4/ Package level documentation sits in package.html in the package source directory. As Javadoc is frames based, the following doctype is recommended by Oracle for this file :- <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Frameset//EN" "http://www.w3.org/TR/html4/frameset.dtd">. Simple HTML is the order of the day as Javadoc has simple, somewhat retro styling. You can get new style sheets for it to spice it up, but I find the standard ones perfectly readable. Javadoc has its own stylesheet.css, but I avoided amending this, and deliberately did not add another stylesheet of my own. Therefore, I avoided heavy use of styling, and just borrowed the odd style from the built in stylesheet for my own uses, such as the table header styling below.

5/ For subheadings in the package doc <h3> and <h4> worked fine.

6/ It was straightforward to add simple tables. Whilst some of the attributes are rather outdated/deprecated in favour of CSS, they are also used elsewhere in the Javadoc for its own tables, so I did not lose sleep over it. I used the TableHeadingColor class from Javadoc’s own stylesheet to give a similar background colour to my own table headers.

<table border="1" cellspacing="1" width="100%">
    <tr class="TableHeadingColor">
        <th>Column 1 Header</th>
        <th>Column 2 Header</th>
    </tr>
    <tr>
        <td>Column 1 data</td>
        <td>Column 1 data</td>
    </tr>
</table>

7/ Styling code samples is a bit messy, with several tags being involved. This post here on StackOverflow discusses some of the issues. In the end I used the following examples, generally going with what worked well :-

<blockquote><pre>
      PropertyTree propertyTree = new PropertyTree("lan").load("Test.properties");
</pre></blockquote>

or :-

<blockquote><pre>{@code
lan.default.broadcast=10.0.0.255
lan.default.port=7

}</pre></blockquote>

The code tag (“{@code …}”) was supposed to be the one to use here (particularly when generics are involved), but it was a bit mixed in its usefulness: I found some variations in the tabbing, and closing braces in the code can be an issue, as the tag uses the brace for its own closure. I therefore did not always use it. Interestingly, the above StackOverflow post notes that String.java in the JRE just uses <p><blockquote><pre>, so I often followed suit.

8/ Individual code or data identifiers such as class and method names looked good in a fixed font when framed with <code></code>, which seemed to be the preferred way to highlight them. Sometimes I made them links as well, but I did not do this every time when there was already a link in the comment, as it was tedious and unnecessary. You can link to a method, a class or a package (which links to package.html) by just linking to the desired level. The first parameter after {@link is the link target, and the second parameter is the clickable description that appears for the hyperlink. Eclipse does code completion for the tags as you enter them. The following example links to a getter method (specified after the “#”) :-

{@link uk.co.salientsoft.util.propertytree.TagTree#getProperty() TagTree.getProperty()}
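Along the same lines, a class-level and a package-level link (to the same package as above) look like this; the text after the target is again the clickable description:

```
{@link uk.co.salientsoft.util.propertytree.TagTree TagTree}
{@link uk.co.salientsoft.util.propertytree the propertytree package}
```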

9/ One gotcha is that for overridden methods, the Javadoc always comes from the inherited method, and the Javadoc method comments in the subclass are ignored. However, you can add additional comments using the Javadoc documentation comment syntax (prefixed “/**”, i.e. with the extra asterisk on the first line) in the subclass, and these will be added in rather than ignored :-

/**
  * For the <code>NullIterator</code> this will always return <code>false</code>.
  */

10/ In Eclipse, you can create Javadoc via the Project/Generate Javadoc… menu bar option (this is not available on the Package Explorer context menu). There are a number of options when doing so, but they are not all defaulted to your previous settings next time round. You can therefore create and save an Ant build file for the Javadoc, which you can then re-run much more easily. javadoc.exe needs to be on the path for this – it should be anyway if you have the Java SE JDK installed and on the path, and I did not have to add it. You can then open the Ant view in Eclipse. In the view pane, right click, select Add Buildfiles… and browse for the Ant build file which you saved at Javadoc creation time. This then stays pinned to the Ant pane, and you can double click it at any time to recreate your Javadoc with all the saved settings.
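For reference, the saved buildfile is just a standard Ant script around the javadoc task. A minimal hand-written equivalent looks something like the sketch below (the destdir/sourcepath values are illustrative, not what Eclipse necessarily generates):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project name="javadoc" default="javadoc" basedir=".">
    <target name="javadoc">
        <!-- Generate Javadoc for the project sources into the doc directory -->
        <javadoc destdir="doc"
                 sourcepath="src"
                 packagenames="uk.co.salientsoft.util.propertytree.*"
                 access="protected"
                 author="true"
                 use="true"/>
    </target>
</project>
```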

No Comments »

May 8th, 2010
2:32 pm
New Java Stack ideas – articles on CDI, Weld, JSF2.0, Java EE6, Spring

Posted under Java
Tags , , , , ,

This is a collection of useful notes and post/article references on ideas for a suitable new Java stack:-

http://blogs.sun.com/enterprisetechtips/entry/using_cdi_and_dependency_injection

http://stackoverflow.com/questions/2270379/differences-between-java-ee-6-cdi-implementations

http://docs.jboss.org/weld/reference/1.0.0/en-US/html/environments.html#d0e4910

http://stackoverflow.com/questions/2499323/jee6-vs-spring-3-stack

No Comments »

May 3rd, 2010
3:04 pm
Comparison reviews of UML Tools

Posted under UML
Tags , ,

This review compares open source UML tools. Unlike a number of other reviews I read, it is fairly recent (02/2009), so it is well worth a read. The conclusion is as follows :-

From the perspective of a reviewer with no specific software development project in mind, the most feature-laden option is the Papyrus / Acceleo combination. If your primary IDE is Eclipse, you will benefit from having your modeling software running in the same environment as your active code editor. For Java programmers using Netbeans, the same can be said of its modeling tool. BOUML, while superb in its own right, is the vision of a single author and, as such, enterprise development institutions may be hesitant to adopt it. If you don’t mind breaking away from your IDE, give Taylor a test drive.

This Eclipse post gives a comparison of UML tools which are Eclipse plugins. I tried several different versions of the Eclipse UML2 plugin, both installing via the update site and with a manual/dropin install, and could not get it to run with my Eclipse Galileo installation. I note in any case that it does not yet support code generation, which rules it out for me, as I want to generate class stubs.

Another interesting review from diagramming.org may be found here.

After a long look around at commercial offerings as well, I found that it was all rather a minefield – some products were rough around the edges to say the least, but still trying to command a 4-5 figure sum for purchase!

In the end I found Enterprise Architect from Sparx Systems, and was immediately very impressed. It has good reviews on the net and comes in a variety of flavours at reasonable prices. It appears stable, and has a very large feature set and extensive documentation. To generate and import Java code (which I want to do), the Professional edition is needed as a minimum. This works out at $199 at the time of posting, which translates to roughly £133 – a very reasonable price for such an impressive product. Support is prompt (they fixed a broken trial version download quickly), and the forums seem helpful. I’m trialling it at the moment, but it is likely that this is what I will go for.

No Comments »

January 8th, 2010
7:40 pm
Debug Logging of Bean properties

Posted under Java
Tags , , ,

A useful tool for doing this is org.apache.commons.lang.builder.ToStringBuilder, using one of the reflectionToString method flavours, which are designed for this job. You can easily create dumps of entity beans and bean collections with this, and you can choose the output format.

Whilst it is tempting to override the toString methods on e.g. all your entity beans, as this will automatically dump all related beans and collections just by calling toString on a parent bean, this is not recommended! It can easily cause a storm of toString calls to propagate across all the related beans, resulting in an almost instant stack overflow! This post details how to do this using the Apache BeanUtils describe method, and the comments there get the credit for pointing me in the direction of org.apache.commons.lang.builder.ToStringBuilder. However, both methods cause stack overflows when used with JSF and JPA, even when you are not doing any logging.
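To illustrate the point, here is a minimal, plain-Java sketch (no Commons Lang involved; the Parent/Child classes are made up for the demonstration) of why overriding toString on mutually referencing beans blows the stack:

```java
// Two beans that reference each other, each with a naive toString that
// prints its related bean - exactly the pattern that recurses forever.
public class ToStringStorm {

    static class Parent {
        Child child;
        @Override public String toString() { return "Parent[child=" + child + "]"; }
    }

    static class Child {
        Parent parent;
        @Override public String toString() { return "Child[parent=" + parent + "]"; }
    }

    public static void main(String[] args) {
        Parent parent = new Parent();
        Child child = new Child();
        parent.child = child;
        child.parent = parent;
        try {
            // Parent.toString -> Child.toString -> Parent.toString -> ...
            System.out.println(parent);
        } catch (StackOverflowError e) {
            System.out.println("StackOverflowError from mutually recursive toString");
        }
    }
}
```

With reflectionToString the same thing happens, just one level removed, which is why the utility method approach below dumps only the beans you explicitly pass in.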

A better way is to provide a utility method into which you pass any bean or collection, and get the dump string returned which you can then log. The following is an example of a utility class to do this :-

package uk.co.salientsoft.util.beans;

import java.util.Collection;

import org.apache.commons.lang.builder.ToStringBuilder;
import org.apache.commons.lang.builder.ToStringStyle;

public class BeanUtil {

  private static final ToStringStyle defaultStyle = ToStringStyle.SHORT_PREFIX_STYLE;

  public static <E> String dumpProperties(Collection<E> coll) {
    return dumpProperties("", coll, defaultStyle);
  }
  public static <E> String dumpProperties(String prompt, Collection<E> coll) {
    return dumpProperties(prompt, coll, defaultStyle);
  }
  public static <E> String dumpProperties(Collection<E> coll, ToStringStyle style) {
    return dumpProperties("", coll, style);
  }
  public static <E> String dumpProperties(String prompt, Collection<E> coll,
                                          ToStringStyle style) {
    StringBuilder properties = new StringBuilder(prompt);
    for (E e : coll) {
      /* Use the Commons Lang ToStringBuilder
       * to list all of this bean's properties.
       */
      properties.append('\n').append(ToStringBuilder.reflectionToString(e, style));
    }
    return properties.toString();
  }

  public static <E> String dumpProperties(E e) {
    return dumpProperties("", e, defaultStyle);
  }
  public static <E> String dumpProperties(String prompt, E e) {
    return dumpProperties(prompt, e, defaultStyle);
  }
  public static <E> String dumpProperties(E e, ToStringStyle style) {
    return dumpProperties("", e, style);
  }
  public static <E> String dumpProperties(String prompt, E e, ToStringStyle style) {
    return prompt + ToStringBuilder.reflectionToString(e, style);
  }
}

No Comments »