May 28th, 2011
11:19 am
Resign Patterns(!)

Posted under Design

This is a hugely funny take on design (anti) patterns – well worth a read, and also available in Google Docs and as a PDF download.


March 6th, 2011
8:53 pm
Using the Decorator pattern when extending the underlying Class

Posted under Java

In some cases, a Decorator just wraps an underlying class and does not add any interface changes of its own – it just changes behaviour without adding any new methods or properties. An example of this is the ‘coffee’ example in the chapter on the Decorator pattern in the excellent Head First Design Patterns book by Eric and Elisabeth Freeman. In this case, it is not necessary to have visibility of individual decorators in the chain – they all add their required changes to the price and description of the coffee, and calling methods on the outermost decorator causes all the inner ones to be called to do their thing.

In other cases, however, it is necessary to add new methods and properties to a Decorator. An example from later in the same chapter is the LineNumberInputStream decorator in the standard java.io package. This performs line numbering on the lines that pass through it, so it is obviously necessary to use the properties of that particular decorator in the chain to fetch the line numbers.

Therefore, in cases like this, client code will need to refer to particular decorators in the chain to access such methods and properties. This in no way invalidates the decorator pattern – it is important to note that we have still extended the underlying class using the open/closed principle, i.e. we have not modified it but decorated it with one or more new classes. Also and equally importantly, the various decorators in the chain are completely decoupled both from the underlying class and from each other as well.

There will be occasions where a decorator needs to extend the behaviour not just of the underlying base class, but also some of the behaviour of another decorator. In this case, such a decorator will extend the decorator it is adding behaviour to, rather than the underlying base class. The general guiding principles here are as follows:-

A decorator should extend the lowest level class it needs to in order to perform its function. For example, if A is a base class and B and C are decorators, C should not extend B unless it actually has dependencies on B’s additional behaviour, as otherwise we are introducing unnecessary coupling between objects that do not need to be coupled.

Note that all the decorators can and should be generic. Any subclass of the class a decorator extends may be passed to it, and may be retrieved generically. For example, if class ClassB decorates class ClassA, then we would declare ClassB as follows:-

public class ClassB<C extends ClassA> extends ClassA {

    private final C decoratedClass;

    public ClassB(C decoratedClass) { this.decoratedClass = decoratedClass; }

    //typed access to the decorated instance, preserving its actual subclass
    public C getDecoratedClass() { return decoratedClass; }
}

This allows ClassB full access to all of the ClassA methods and properties, even when it is passed an arbitrary subclass of ClassA. Also, we have ensured that, via a ClassB instance, we can generically access the particular subclass of ClassA used as the decorated class.

When we need to access particular decorators in a chain, we can either remember the object references used when we created them, or we can fetch references to them from their container decorator. In this case, we obviously have to keep knowledge of how the decorator chain was set up, and what was decorated in what order.
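As a minimal sketch of what this looks like in code (ClassC here is a hypothetical second decorator declared in the same generic style as ClassB above, and someClassBMethod is an assumed ClassB-specific method):-

//build a chain: ClassC decorates ClassB, which decorates a plain ClassA
ClassB<ClassA> b = new ClassB<ClassA>(new ClassA());
ClassC<ClassB<ClassA>> c = new ClassC<ClassB<ClassA>>(b);

//access ClassB-specific behaviour via the reference saved at construction time...
b.someClassBMethod();

//...or fetch it back from its container decorator
c.getDecoratedClass().someClassBMethod();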

Decorators are a powerful pattern for flexible behaviour extension, but they have their downsides. You can end up with a large number of small decorator classes and long decorator chains – just look at the java.io package to see this. You also have to create all the delegated methods and properties for every decorator, although an IDE like Eclipse can generate these automatically, which is a big boon when using decorators heavily and eliminates a lot of sources of error. The major benefit compared with statically extending a class to add behaviour is that you can set up a decorator chain with any decorators you like, in any order – illustrating again the benefits of favouring composition over inheritance.


March 6th, 2011
7:47 pm
Adding JSF view state to domain objects – part II

Posted under JSF

My original post on this is here.  It links to an Icefaces post which discusses using a decorator to do this. The fundamental problem is that you often want view state, such as (but certainly not limited to) a selected flag on objects, e.g. rows in tables. However, you do not want to pollute domain classes with view state, so it would not be desirable to add a selected flag to your domain objects, as this blurs the layers (especially as you may have multiple different UI layers on the same domain – JSF browser clients, smart phone web clients, web services etc. – all with different needs). Other examples of view state might be style classes to be applied to different table rows in response to user actions.

Since my original post I have looked at other ways of achieving this, and summarize various approaches below which could be used to add view state to rows of a table.

In each case, assume initially we have a JSF table of the form <h:dataTable var="row" value="#{mainBean.rows}">, where each row object has the properties firstName and surname, plus the need for a selected boolean flag to be maintained separately from the row.

 

1/ Using a row decorator

My original post discusses using a decorator to add the extra state, as per the Icefaces post. This involves wrapping every row, but is transparent to the calling code. The downside of the decorator is that each domain class requiring the extra state needs its own decorator, even though each case might require the same extra state. This potentially means a lot of extra classes just to add a selected flag.

 

2/ Using a generic row wrapper class composed with the row

A second alternative, which is similar to the decorator, would be just to use a wrapper class which holds the extra state plus the domain class instance, and explicitly make the calls to the domain class methods and properties. Thus, the following code might be used to read our table row properties and the selected flag (all the examples use direct instantiation for simplicity):-

//wrap a row. rowWrappers is the list containing the wrapped rows
rowWrappers.add(new RowWrapper<Row>(row));

// read properties on a facelets page – note that the table now contains wrapped rows:-

<h:dataTable var="rowWrapper" value="#{mainBean.rowWrappers}">
<h:outputText value="#{rowWrapper.row.firstName}"/>
<h:outputText value="#{rowWrapper.row.surname}"/>

//read the selected flag on a facelets page
<h:outputText styleClass="#{rowWrapper.selected ? 'style1' : 'style2'}" value="…"/>
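For reference, a minimal generic RowWrapper along these lines might look as follows (a sketch only – the real class would carry whatever view state is needed):-

public class RowWrapper<T> {

    private final T row;      //the wrapped domain object
    private boolean selected; //the additional view state

    public RowWrapper(T row) { this.row = row; }

    public T getRow() { return row; }

    public boolean isSelected() { return selected; }
    public void setSelected(boolean selected) { this.selected = selected; }
}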

The pros and cons of this approach are:-

  • All the rows need to be wrapped as before.
  • All the ‘delegation’ to access row properties is explicit in all the expressions.
  • The upside however is that to add a given set of view state properties, only a single class is needed and this can wrap any number of domain classes generically, which can be a significant advantage.

 

3/ Using a Map to hold the additional row state objects

A third approach makes use of the fact that JSF EL value expressions can transparently read Map entries just like properties. Therefore, where m is an object that implements java.util.Map, the equivalent expressions m.property1 and m['property1'], when reading a value (i.e. on the right hand side of an expression, or rvalue mode), both result in

m.get('property1')

being executed. When writing a value (i.e. on the left hand side of an expression, or in lvalue mode), as in m.property1 = rhsexpression, the statement

m.put('property1', rhsexpression)

will be executed, assigning rhsexpression as the value of property1.

This technique allows us to use the row object itself as a key to the map containing the additional row state:-

//add a row state object to the map, keyed by the row itself
rowFlagsMap.put(row, new RowFlags());

// read properties on a facelets page :-

<h:dataTable var="row" value="#{mainBean.rows}">
<h:outputText value="#{row.firstName}"/>
<h:outputText value="#{row.surname}"/>

//read the selected flag on a facelets page using the current row as a map key
<h:outputText styleClass="#{mainBean.rowFlagsMap[row].selected ? 'style1' : 'style2'}" value="…"/>

Note that the nested expression evaluation works correctly with the map as the ‘man in the middle’, and that the expression mainBean.rowFlagsMap[row].selected could equally be used to fetch any other property from the RowFlags object instead of the selected property. The expression works as we would expect in both lvalue and rvalue modes, so that if, for example, the above property value was set to true from the JSF page (e.g. from the value property of a check box we had just ticked), the expression would execute correctly as

mainBean.rowFlagsMap.get(row).selected=true;

The pros and cons of this approach are:-

  • We need an extra map access to get the row state, but for typical table sizes a HashMap gives very efficient access, so this should not be an issue.
  • We need to use the more complex expression mainBean.rowFlagsMap[row].selected to fetch the additional row state. However, the plus is that we do not have to wrap each row in the code, and, assuming there are more table properties than additional state, we do not have to use a nested expression for them as we would in the previous approach.
  • When using this approach, it must be borne in mind that you are using an object as a Map key. This should not be an issue, but if you override the hashCode function such that the hash value could change during the lifetime of a row object (for example, if a row property changed), you would need to take account of this. Note also that rows in the map will only match the exact same object instances – if you refetch the same domain objects from your entity manager, you cannot expect them to match any existing objects in the map. Again, this should not normally be an issue, as you would expect to rebuild the map from scratch when refetching domain objects from persistent storage – which is exactly what the sketch below does.
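To make the wiring concrete, here is a minimal sketch of the backing bean side (MainBean, Row and RowFlags are the hypothetical names used in the examples above; the map is rebuilt whenever the rows are refetched, per the caveat above):-

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MainBean {

    //per-row view state, kept out of the domain Row class
    public static class RowFlags {
        private boolean selected;
        public boolean isSelected() { return selected; }
        public void setSelected(boolean selected) { this.selected = selected; }
    }

    private final List<Row> rows = new ArrayList<Row>();
    private final Map<Row, RowFlags> rowFlagsMap = new HashMap<Row, RowFlags>();

    //rebuild the map from scratch whenever the rows are (re)fetched
    public void setRows(List<Row> fetchedRows) {
        rows.clear();
        rows.addAll(fetchedRows);
        rowFlagsMap.clear();
        for (Row row : fetchedRows) {
            rowFlagsMap.put(row, new RowFlags());
        }
    }

    public List<Row> getRows() { return rows; }
    public Map<Row, RowFlags> getRowFlagsMap() { return rowFlagsMap; }
}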

 

4/ Using a Smart Map Interface to implement a selected property by storing the selected row objects

This approach is similar to the previous one, in that it uses a Map interface to hold a selected flag. The difference is that we implement our own class that implements the Map interface and hide a ‘real’ map behind it. Our Map class simply stores each selected row in the ‘real’ map. When the caller reads the selected flag for a row by calling map.get on our map class, we perform a contains call on the ‘real’ map and return true if the row is present, or false if it is not. Similarly, when asked to set selected = true for a row, we just add the row to the ‘real’ map; when asked to set selected = false, we remove it. A sketch of such a class follows the list below. The pros and cons of this approach are:-

  • We only need to store the selected rows, so we are storing fewer map entries.
  • The expressions to check the selected flag are a bit simpler, requiring something of the form mainBean.selectedMap[row].
  • Conversely, we need to implement the smart map – this needs quite a number of dummy methods when implementing the Map interface. When implementing this, I typically create an abstract base class which dummies out most of the java.util.Map methods I don’t need, and then extend it for each desired implementation. This project in the repository makes use of this technique – see the classes uk.co.salientsoft.view.JSFValueMap and uk.co.salientsoft.view.SelectedMap.
  • The other downside is that this method only allows storage of a simple boolean based on the presence or absence of a row. If we need more additional row state, we need to look to one of the other methods.
  • As with the previous method, we are using a row object itself as a map key, so the same caveats apply as in the previous case.
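As promised above, here is a minimal sketch of the idea (an illustration only, not the actual uk.co.salientsoft.view classes: it extends java.util.AbstractMap to dummy out the unneeded Map methods, and uses a HashSet rather than a literal map as the backing store):-

import java.util.AbstractMap;
import java.util.HashSet;
import java.util.Set;

public class SelectedMap<K> extends AbstractMap<K, Boolean> {

    //presence of a row in the backing set means selected == true
    private final Set<K> selectedRows = new HashSet<K>();

    @Override
    public Boolean get(Object row) {
        return selectedRows.contains(row);
    }

    @Override
    public Boolean put(K row, Boolean selected) {
        boolean wasSelected = selectedRows.contains(row);
        if (Boolean.TRUE.equals(selected)) {
            selectedRows.add(row);    //selected = true: remember the row
        } else {
            selectedRows.remove(row); //selected = false: forget it
        }
        return wasSelected;
    }

    //not needed for EL get/put access, so left minimal
    @Override
    public Set<Entry<K, Boolean>> entrySet() {
        return new HashSet<Entry<K, Boolean>>();
    }
}

With an instance exposed as mainBean.selectedMap, the expression #{mainBean.selectedMap[row]} reads the flag, and writing true or false back through the same expression adds or removes the row.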

 

Conclusion

As is often the case, there is no clear winner – you take your pick depending on individual requirements and the various pros and cons. Overall, my current favourite in general is the additional Map. It is simple to implement (you don’t need to dummy out a load of Map methods as you do with the smart Map), and unlike the smart Map you can add any number of additional properties to a row. The domain row properties can be accessed exactly as before, and you don’t need to wrap every row (although you do need to create the row state object and add it to the map for every row). Whilst access to the additional state requires a slightly more complex expression, this is likely to be needed less often than the expressions that read the domain properties.


December 12th, 2010
7:37 pm
Using the Enterprise Architect UML design tool

Posted under UML

Updated 11/12/2010

I’ve clarified some of the points re collection classes, code generation, and the Local Path feature.

 

Original Post (7/5/2010, 07:30)

I’m trialling Enterprise Architect from Sparx Systems at the moment, and have found the following points/tips/gotchas so far:-

Class Stereotypes

If you choose a stereotype such as entity which has its own image, this changes the appearance on the class diagram and prevents detail such as properties and operations being listed. You can either use another stereotype (or none at all – sometimes I use none, as the image clutters up the simplicity of the diagram), or you can customise the appearance under Settings/UML to prevent the image being used.

Collection classes and generics for relationships

The collection class to be used can be set as a global default or per class:-

  • global default – visit Tools/Options, pick Java under source code engineering in the left hand treeview pane. Click the Collection Classes button to configure the defaults. Note that you can enable generics by using e.g. Collection<#TYPE#> as the definition, where #TYPE# is replaced by the appropriate class name. This is mentioned in the help but is not easy to dig up. You can set different collection classes for Default, Ordered, and Qualified.
  • Per Class setting – open the class details (double click on the class), select the Details tab, and click the Collection Classes button to configure the class in the same way as the global defaults (including generics using #TYPE#). **Important potential gotcha** – when setting this up, it is easy to get confused. If Class A contains a collection of Class B, then to make a collection declaration appear in the Java code for Class A, you must set the appropriate collection class settings on Class B, not Class A. Setting up Class B will then cause the declaration to appear in Class A. Note that the per class settings override any global default you have applied.
  • The trigger for generation of the collection declaration is to set the Multiplicity at the appropriate end of the (aggregation) relationship between the classes. For example, an Application might have a 0-or-1 to many relationship with AppRole, i.e. an Application will hold a collection of AppRoles. The aggregation relationship will be drawn from the AppRole class to the Application class, such that the diamond arrowhead appears against the Application class, as that holds the collection. You would then double click the relationship, and under Source Role you would set a Multiplicity of 0..*, and under Target Role a multiplicity of 0..1. This causes the collection declaration to appear when the code is generated. Note that there are other settings on the source and target role pages for role and alias, and for derived, derived union, and owned; these do not appear to affect the code generation (at least in my simple examples).
  • Note that if you select ordered on the multiplicity, you get the ordered type of collection class. In my case, I used global settings of Collection for unordered, List for ordered, and left qualified as Collection. Using this, I obtained a declaration of List in the generated code when I selected ordered on the multiplicity of the 0..* end of the relationship (the Source Role in the above example). An illustration of the resulting generated code follows this list.
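For illustration, with the global settings above, the generated Application class then contains a declaration along these lines (a sketch of the shape of the output, not EA’s verbatim code):-

import java.util.List;

public class Application {

    //generated from the ordered 0..* aggregation end against AppRole
    private List<AppRole> appRole;
}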

Enabling/Disabling Destructor (Java finalize) calls

These can also be configured globally or on a per class basis :-

  • Global default – visit Tools/Options, pick Object Lifetimes under source code engineering, and tick or untick generate destructor. Constructors can be similarly configured.
  • Per Class setting – right click on the class and select Generate Code. Click advanced, and the same page is displayed as for the global default. Constructors/destructors are configured in the same way as above, except that the settings are per class rather than global defaults.

When Java is the target language, enabling a destructor adds a finalize method, as Java is garbage collected and does not have explicit destructors. finalize is called by the garbage collector to notify the object of garbage collection, but the call is made entirely at the garbage collector’s discretion. For Java you would normally want to untick the generate destructor setting.
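For reference, the generated destructor amounts to a finalize override of this shape (a sketch, not EA’s verbatim output – and a further reason to leave the option unticked):-

public class SomeDomainClass {

    @Override
    protected void finalize() throws Throwable {
        //called at the garbage collector's discretion, if at all
        super.finalize();
    }
}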

Auto generation of package statements in Java

To configure this, you select a package in the project browser hierarchy as the package root, by selecting Code Engineering on its context menu and then picking Set as Namespace Root. All packages under that root package (but not including the root package itself) then have their names concatenated to form the Java package used in the code. You can use dots in a package name, so for example you could have a package hierarchy of Class Model/uk.co.salientsoft/appname/domain, where Class Model is set as the root. Classes in the domain package then have a package in the code of uk.co.salientsoft.appname.domain, as you would expect (see the example below). Note that there are some gotchas/possible bugs around this. I only managed to get it to work when I actually created a class diagram at each level and added the next sub package onto the diagram. This looks tidy/correct anyway, and is what the example does, but it is not enforced – you can have a package hierarchy in the browser without all the intervening class diagrams, but if you do so, it appears that package statements are not output in the Java code.
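So, for example, a class AppRole in the domain package of the hierarchy above would be generated with the expected package statement:-

package uk.co.salientsoft.appname.domain;

public class AppRole {
}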

Code generation / directories

When generating code, this can be done at package level by selecting Code Engineering/Generate Source Code from a package context menu in the project browser. Ticking Include All Child Packages does just that, and generates the code recursively down the package tree. You can select Auto generate files to cause the file directory paths to be derived automatically from the package hierarchy, ignoring any path set for the individual classes. However, I found this awkward to set up – ideally I wanted a Java directory tree to match the package name components, but this seems to need a carefully crafted package hierarchy which may not match the way you want to work. For example, I had a package called uk.co.salientsoft, which ended up as a single directory name rather than being broken down into its component fields, and I did not want to add individual package levels for uk, co, and salientsoft. Also, when you Auto generate files, the root path under which the tree is created is not saved, and you have to browse for it every time – not pleasant. Therefore, in the end I elected not to use Auto generate files, but to set the correct file path on each class. This can be done by right clicking the class and selecting Generate Code, then browsing for the desired path. Clicking Save saves the path, which is then used for code generation as above. Having done this, I finally achieved correct package statements in the code together with the correct directory hierarchy.

Using Local Paths

Local Paths offer a means of parameterising root directories, e.g. for code generation, so that the same EAP file can be used by multiple users with different local code generation directories.

On the surface it looks like a means of defining environment/path style variables within EA, which you then use when defining code generation locations. This is somewhat of a misconception, which is unfortunately reinforced by the section of the help on Local Paths:-

Developer A might define a local path of:
JAVA_SOURCE = "C:\Java\Source"

All Classes generated and stored in the Enterprise Architect project are stored as:
%JAVA_SOURCE%\<xxx.java>.

Developer B now defines a local path as:
JAVA_SOURCE = "D:\Source"

Now, Enterprise Architect stores all java files in these directories as:
%JAVA_SOURCE%\<filename>

On each developer’s machine, the filename is expanded to the correct local version.

In fact, whilst you do define a local path under Settings/Local Paths… and give the local path an ID (or variable name), you do not enter the ID anywhere when configuring directories for code generation – the local paths are applied behind the scenes in somewhat of a smoke and mirrors fashion. You do the following to set it up (the example is for setting Java code generation directories directly on classes):-

  • Select Code Generation… on the context menu for one or more classes on a class diagram. Browse for the actual desired full path for the class. You will note that at no stage is there a means to enter a variable ID in the path – you are just using the standard file browse dialog.
  • Under Settings/Local Paths… create a local path of type Java, browsing to a root directory which is either the path entered for the classes or a parent directory on the same path. Save it with a suitable name.
  • Clicking Apply Path will then apply the path to the code generation directories. It returns a count of the number of instances it applied (the number of paths found). In my case, it initially found 0; but when I clicked Expand Path (which reverses the process, removing your local path and reverting to normal full directories), it said it removed 4 – the number of affected classes. When I clicked Apply Path again, it again found all 4, so I suspect the initial “found 0” was a bug and it had in fact worked – confusing, especially as it all happens behind the scenes!
  • Now, to see the effect of what you have done, change your local path definition in the local path dialog for the ID you created and applied, and save the change. Now look at the code generation folders for the classes again under Code Generation…  on the context menus for the classes, and you should find that the path has changed by magic to the one you just set in the Local Path dialog! Smoke and mirrors indeed.
  • Note that normally your local path definition will just be a parent directory which is shared by all your code generation subdirectories for the classes, as there will typically be a number of different subdirectories. EA correctly changes just the parent directory part of the path.
  • You can then if desired define a number of local paths for Java, e.g. “Freds path”, “Bills path”, and apply the desired one which will then take effect. You will not see any local path IDs appearing anywhere, they are stored in the path but applied behind the scenes.
  • To switch between local paths again feels strange. You would expect that if “Freds path” is in effect and you apply “Bills path” (which has a different definition), then the directories would all change. They do not! What you have to do is define “Bills path” initially with the same definition as “Freds path”, then do an Expand Path for “Freds path” (whereupon you will see the correct count of classes affected). Then you can do an Apply Path for “Bills path”, which will be applied to the same number of classes. Finally, you can amend “Bills path”, which is the one that currently has control, and all the class code generation directories will change.
  • One frustration of this method is that when adding new classes while a local path is in effect, you must still browse to the correct actual directory just as you would if no local path were present – you cannot enter the local path anywhere. It will, however, take effect ‘by magic’. In my case, I created a new class under the local path, and it immediately ‘took on’ the local path and would change its directory when I changed the local path definition, even though I had not done an Apply Path to pick up the new class.
  • It appears therefore that new classes under the parent folder of a currently active local path pick up the effect of the local path by default.

 

My scenario is not the actual use case which will occur in practice, as I was doing this as a single user and making changes. In practice, if the EAP file was passed around, a different user would already have his source directories created according to his own standard. Nevertheless, I find the operation of this functionality to be strange, counterintuitive, and even slightly buggy in parts.

Having said that, in fairness, EA is by far the best UML tool I have tested in my price range as an independent developer – $199 for the Professional edition, as opposed to a four figure sum for other packages which I did not even bother to look at. The other features of EA are fully functional, intuitive to use and bug-free. As per my other post here, all the other packages I looked at were surprisingly poor in comparison, so for my needs, price range and testing, it is still very much a one horse race.


May 3rd, 2010
3:04 pm
Comparison reviews of UML Tools

Posted under UML

This review compares open source UML tools. Unlike a number of other reviews I read, it is fairly recent (02/2009), so well worth a read. The conclusion is as follows:-

From the perspective of a reviewer with no specific software development project in mind, the most feature-laden option is the Papyrus / Acceleo combination. If your primary IDE is Eclipse, you will benefit from having your modeling software running in the same environment as your active code editor. For Java programmers using Netbeans, the same can be said of its modeling tool. BOUML, while superb in its own right, is the vision of a single author and, as such, enterprise development institutions may be hesitant to adopt it. If you don’t mind breaking away from your IDE, give Taylor a test drive.

This Eclipse post gives a comparison of UML tools which are Eclipse plugins. I tried several different versions of the Eclipse UML2 plugin, both installing via the update site and with a manual/dropin install, and could not get it to run with my Eclipse Galileo installation. I note anyway that it does not yet support code generation, which rules it out for me as I want to generate class stubs.

Another interesting review from diagramming.org may be found here.

After a long look around at commercial offerings as well, I found that it was all rather a minefield – some products were rough around the edges to say the least, but still trying to command a 4-5 figure sum for purchase!

In the end I found Enterprise Architect from Sparx Systems, and was immediately very impressed: good reviews on the net, a variety of flavours at reasonable prices, apparent stability, a very large feature set and extensive documentation. To generate and import Java code (which I want to do), the Professional edition is needed as a minimum. This works out at $199 at the time of posting, which translates to roughly £133 – very reasonable for what you get. Support is prompt (they quickly fixed a broken trial version download), and the forums seem helpful. I’m trialling it at the moment, but it is likely that this is what I will go for.
