Archive for the 'Knowledge Base' Category

March 6th, 2011
7:47 pm
Adding JSF view state to domain objects – part II

Posted under JSF

My original post on this is here. It links to an Icefaces post which discusses using a decorator to do this. The fundamental problem is that you often want view state, such as (but certainly not limited to) a selected flag on objects, e.g. in tables. However, you do not want to pollute domain classes with view state, so it would not be desirable to add a selected flag to your domain objects, as this blurs the layers (especially as you may have multiple different UI layers on the same domain, such as JSF browser clients, smartphone web clients, web services etc., all with different needs). Other examples of view state might be style classes to be used on different table rows in response to user actions.

Since my original post I have looked at other ways of achieving this, and summarise below various approaches which could be used to add view state to rows of a table.

In each case, assume initially we have a JSF table of the form <h:dataTable var="row" value="#{mainBean.rows}">, where each row object has the properties firstName and surname, plus the need for a selected boolean flag to be maintained separately from the row.

 

1/ Using a row decorator

My original post discusses using a decorator to add the extra state, as per the icefaces post. This involves wrapping every row, but is transparent to the code. The downside of the decorator is that each domain class requiring the extra state needs its own decorator even though each case might require the same extra state. This potentially means a lot of extra classes just to add a selected flag.
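For illustration only, a minimal sketch of the decorator idea (the Person class and its properties are hypothetical – this is not the actual Icefaces example):

// Hypothetical decorator adding a selected flag to a Person domain object.
// It exposes the same properties, so EL such as #{row.firstName} is unchanged.
public class SelectablePerson {
    private final Person person;
    private boolean selected;

    public SelectablePerson(Person person) { this.person = person; }

    // Delegated domain properties
    public String getFirstName() { return person.getFirstName(); }
    public String getSurname() { return person.getSurname(); }

    // The additional view state
    public boolean isSelected() { return selected; }
    public void setSelected(boolean selected) { this.selected = selected; }
}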

 

2/ Using a generic row wrapper class composed with the row

A second alternative, similar to the decorator, is to use a wrapper class which holds the extra state plus the domain class instance, and to make the calls to the domain class methods and properties explicit. Thus, the following code might be used to read our table row properties and the selected flag (all the examples use direct instantiation for simplicity):-

//wrap a row. rowWrappers is the list containing the wrapped rows
rowWrappers.add(new RowWrapper<Row>(row));

// read properties on a facelets page - note that the table now contains wrapped rows:-

<h:dataTable var="rowWrapper" value="#{mainBean.rowWrappers}">
<h:outputText value="#{rowWrapper.row.firstName}"/>
<h:outputText value="#{rowWrapper.row.surname}"/>

//read the selected flag on a facelets page
<h:outputText styleClass="#{rowWrapper.selected ? 'style1' : 'style2'}" value="..."/>
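A minimal sketch of the generic wrapper itself, assuming the selected flag is the only extra view state needed:

// Generic wrapper: holds one domain object of any type plus the extra view state.
public class RowWrapper<T> {
    private final T row;
    private boolean selected;

    public RowWrapper(T row) { this.row = row; }

    // Exposed so EL can reach the domain properties via rowWrapper.row.xxx
    public T getRow() { return row; }

    public boolean isSelected() { return selected; }
    public void setSelected(boolean selected) { this.selected = selected; }
}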

The pros and cons of this approach are:-

  • All the rows need to be wrapped as before.
  • All the 'delegation' to access row properties is explicit in all the expressions.
  • The upside, however, is that to add a given set of view state properties only a single class is needed, and this can wrap any number of domain classes generically, which can be a significant advantage.

 

3/ Using a Map to hold the additional row state objects

A third approach makes use of the fact that JSF EL value expressions can read Map entries transparently, just like properties. Therefore, where object m implements java.util.Map, the equivalent expressions m.property1 and m['property1'], when reading a value (i.e. on the right hand side of an expression, or rvalue mode), both result in

m.get("property1")

being executed. When writing a value (i.e. on the left hand side of an expression, or in lvalue mode), as in m.property1 = rhsexpression, the statement

m.put("property1", rhsexpression)

will be executed, assigning rhsexpression as the value of property1.

This technique allows us to use the row object itself as a key to the map containing the additional row state:-

//add row state object to map
rowFlagsMap.put(row, new RowFlags());

// read properties on a facelets page :-

<h:dataTable var="row" value="#{mainBean.rows}">
<h:outputText value="#{row.firstName}"/>
<h:outputText value="#{row.surname}"/>

//read the selected flag on a facelets page using the current row as a map key
<h:outputText styleClass="#{mainBean.rowFlagsMap[row].selected ? 'style1' : 'style2'}" value="..."/>

Note that the nested expression evaluation works correctly with the map as the 'man in the middle', and that the expression mainBean.rowFlagsMap[row].selected could equally be used to fetch any other property from the RowFlags object instead of the selected property. The expression works as we would expect in both lvalue and rvalue modes, such that if for example the above property value was written to as true from the JSF page (e.g. from the value property of an updated check box we had just ticked), the expression would execute correctly as

mainBean.getRowFlagsMap().get(row).setSelected(true);
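For completeness, a minimal backing-bean sketch for this approach (a hypothetical RowFlags class holding the selected flag, plus the mainBean used in the expressions above):

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Holds whatever per-row view state is needed - here just the selected flag.
public class RowFlags {
    private boolean selected;
    public boolean isSelected() { return selected; }
    public void setSelected(boolean selected) { this.selected = selected; }
}

public class MainBean {
    private List<Row> rows; // fetched from the service/persistence layer
    private final Map<Row, RowFlags> rowFlagsMap = new HashMap<Row, RowFlags>();

    // Rebuild the flags map whenever the rows are (re)fetched.
    public void setRows(List<Row> rows) {
        this.rows = rows;
        rowFlagsMap.clear();
        for (Row row : rows) {
            rowFlagsMap.put(row, new RowFlags());
        }
    }

    public List<Row> getRows() { return rows; }
    public Map<Row, RowFlags> getRowFlagsMap() { return rowFlagsMap; }
}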

The pros and cons of this approach are:-

  • We need an extra map access to get the row state, but for typical table sizes a HashMap would give very efficient access, so this should not be an issue.
  • We need to use the more complex expression mainBean.rowFlagsMap[row].selected to fetch the additional row state. However, the plus is that we do not have to wrap each row in the code, and assuming that there are more table properties than additional state, we do not have to explicitly use a nested expression for them as we would in the previous approach.
  • When using this approach, it must be borne in mind that you are using an object as a Map key. This should not be an issue, but if you override the hashCode function such that the hash value could change during the lifetime of a row object (for example if a row property changed), you would need to take account of this. Note also that rows in the map will only correctly match the exact same objects. If you refetch the same domain objects again from your entity manager, you cannot expect them to match any existing objects in the map. Again, this should not normally be an issue, as you would expect to rebuild the map from scratch when refetching domain objects from persistent storage.

 

4/ Using a Smart Map Interface to implement a selected property by storing the selected row objects

This approach is similar to the previous one, in that it uses a Map interface to hold a selected flag. The difference is that we implement our own class implementing the Map interface and hide a 'real' map behind it. Our Map class simply stores each selected row in the 'real' map. When the caller reads the selected flag for a row by calling map.get on our map class, we perform a contains call on the 'real' map and return true if the row is present, or false if it is not. Similarly, when asked to set selected=true for a row, we just add the row to the 'real' map; when asked to set selected=false for a row, we remove it from the 'real' map. A minimal sketch of such a class is shown after the list below. The pros and cons of this approach are:-

  • We only need to store the selected rows in the map, so we are storing fewer map entries.
  • The expressions to check the selected flag are a bit simpler, requiring something of the form mainBean.selectedMap[row].
  • Conversely, we need to implement the smart map – this needs quite a number of dummy methods when implementing the Map interface. When implementing this, I typically create an abstract base class which dummies out most of the java.util.Map methods which I don’t need, and then extend this for a desired implementation. This project in the repository makes use of this technique – see the classes uk.co.salientsoft.view.JSFValueMap and uk.co.salientsoft.view.SelectedMap.
  • The other downside is that this method only allows storage of a simple boolean based on the presence or absence of a row. If we need more additional row state, we need to look to one of the other methods.
  • As with the previous method, we are using a row object itself as a map key, so the same caveats apply as in the previous case.
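A minimal sketch of the smart map idea (this is not the actual repository code – java.util.AbstractMap is used here to stub out the bulk of the Map interface, whereas the repository uses its own JSFValueMap base class):

import java.util.AbstractMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class SelectedMap<T> extends AbstractMap<T, Boolean> {

    // The 'real' store: a row's presence means it is selected.
    private final Set<T> selectedRows = new HashSet<T>();

    @Override
    public Boolean get(Object row) {
        // Reading #{mainBean.selectedMap[row]} in EL ends up here.
        return selectedRows.contains(row);
    }

    @Override
    public Boolean put(T row, Boolean selected) {
        // Writing selected=true adds the row; selected=false removes it.
        Boolean previous = selectedRows.contains(row);
        if (Boolean.TRUE.equals(selected)) {
            selectedRows.add(row);
        } else {
            selectedRows.remove(row);
        }
        return previous;
    }

    @Override
    public Set<Map.Entry<T, Boolean>> entrySet() {
        // Not needed for the EL get/put usage above.
        throw new UnsupportedOperationException();
    }
}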

 

Conclusion

As is often the case, there is no clear winner – you take your pick depending on individual requirements and the various pros and cons. Overall, my current favourite in general is to use the additional Map. It is simple to implement (you don’t need to dummy a load of Map methods like you do with the smart Map). Unlike the smart Map, you can add any number of additional properties to a row. The domain row properties can be accessed exactly as before, and you don’t need to wrap every row (however you do need to create the row state object and add it to the map for every row). Whilst the access to the additional state requires a slightly more complex expression, this is likely to be needed less often than expressions to read the domain properties.


March 5th, 2011
7:16 pm
JPA entities – field vs method annotation

Posted under JPA

Having read a number of technical blogs on this there appeared to be no clear winner either way, until I hit a requirement to use a mapped superclass for an entity. Then the following issues became very important:-

  1. Annotating fields gives you less flexibility – methods can be overridden by a subclass, whereas fields cannot.
  2. My particular example where I hit this was when designing an implementation to use for hierarchical/tree structured data, to be used in a number of places in an application. I wanted to make all the tree usage polymorphic from the same base class implementation, as then various other areas of the code could be reused for any tree. As entities were passed all the way up to the (JSF) UI, I was using certain design patterns for handling hierarchies at the UI level and making them able to handle any tree was desirable.
  3. Some of the functionality for handling trees – in my case path enumerated trees as described here in this excellent article on slideshare.net – could be placed in a base class. Examples of this were methods to compress node Ids in the path enumerated string using Base64, and building tree path strings for use at the UI etc.
  4. A key gotcha was that I wanted different implementations of trees to be annotated differently, for example to use different sequence Id generation. If the tree Id was in the mapped superclass and I was using field annotation, I was stuck. This issue is discussed in these articles here (see the comments at the end) and here.

My solution was to take the decision to always annotate (getter) methods from now on. This allowed the following implementation:-

  1. My mapped superclass contained getters annotated @Transient where I wanted to change their behaviour in a subclass. For example, my treeId primary key was in the superclass but marked @Transient so that it would not be mapped there. Then, in each subclass, I added an override getter/setter pair for the field, and annotated the getter with the desired ID generation. This allowed every subclass to have its own sequence generation (in my case), but to still allow polymorphic usage in other code.
  2. When doing this, a classic gotcha was to forget to mark the base class getters @Transient. This caused them to be mapped in both the mapped superclass and the subclass, resulting in multiple writable mappings which Eclipselink complained about loudly! It feels counter-intuitive to set them as transient, but the important point is that they will be mapped by the subclass.
  3. When doing this, I had to explicitly annotate the mapped superclass for property annotation using @Access(AccessType.PROPERTY). This is because normally the default type of access (field or property) is derived automatically from where the primary key is annotated. In this case, there was no such annotation on the mapped superclass so it needed telling – again, strange and loud Eclipselink errors resulted without this as my @Transient annotations on getters in the superclass were being ignored, as it defaulted to field annotation!
  4. Another issue was that I wanted a name field in the base superclass, but wanted to override this in the subclasses. In fact one tree implementation had node classes with an associated keyword class, where the name was stored on the keyword class and delegated from the getter on the node class, which in turn overrode the getter in the mapped superclass. Again, I marked the name getters as @Transient in both the tree mapped superclass and in the node class, and just mapped the name in the keyword class. This allowed other code to obtain the name polymorphically using a base superclass reference, but the actual name came from 2 levels further down. JPA was kept happy as the name was mapped wherever I needed it to be.

The following sample code fragment from my mapped superclass illustrates some of these points:-

package uk.co.salientsoft.test.domain;
import javax.persistence.Access;
import javax.persistence.AccessType;
import javax.persistence.Column;
import javax.persistence.MappedSuperclass;
import javax.persistence.Transient;

import uk.co.salientsoft.jpa.ColumnMeta;

/**
* Entity implementation class for Entity: TreeNode
* This mapped superclass performs the common tree/treepath handling for all tree entities
*
*/
@MappedSuperclass
@Access(AccessType.PROPERTY)
public abstract class TreeNode {
   
    private long TreeNodeId;
    private String treePath;
    private int nodeLevel;

    @Transient public long getTreeNodeId() {return this.TreeNodeId;}
    @Transient public void setTreeNodeId(long TreeNodeId) {this.TreeNodeId = TreeNodeId;}
   
    @Transient public abstract String getName();
    @Transient public abstract void setName(String name);
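A hedged sketch of what a concrete subclass might then look like (the entity, sequence and field names here are illustrative assumptions, not the original project code):

package uk.co.salientsoft.test.domain;

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.SequenceGenerator;

@Entity
@SequenceGenerator(name = "CategorySeq", sequenceName = "CATEGORY_SEQ", allocationSize = 1)
public class CategoryNode extends TreeNode {

    private String name;

    // Override the @Transient getter from the mapped superclass and map it here,
    // giving this subclass its own sequence-based Id generation.
    @Override
    @Id @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "CategorySeq")
    public long getTreeNodeId() { return super.getTreeNodeId(); }

    @Override
    public void setTreeNodeId(long treeNodeId) { super.setTreeNodeId(treeNodeId); }

    // The abstract @Transient name property from the superclass is mapped here.
    @Override
    public String getName() { return this.name; }

    @Override
    public void setName(String name) { this.name = name; }
}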


March 4th, 2011
5:48 pm
Breadcrumb Composite Component using ui:repeat

Posted under JSF

Update 16/1/2012

A bug was found in the crumb styling. max-width was correctly set on the crumbs to truncate them, but when white space is present in a crumb, the default is to wrap at the max-width. This causes the breadcrumb to increase in height and add a second line. The fix was to turn off the wrapping by setting “white-space:nowrap” in the css for the crumb anchor tags (see Mantis issue 99 for details) :-

.ss-breadcrumb ul li,
.ss-breadcrumb ul li a {
    overflow:hidden!important;
    display:inline-block;
    white-space:nowrap;
    text-decoration: none;
}

 

Original Post

This follows on from my previous posts on the Primefaces breadcrumb here  and here.

I found a couple of further important issues I hit when using the standard breadcrumb:-

  1. In order to add and remove crumbs dynamically, it uses the Primefaces MenuModel. Methods of the MenuItem class are used to add actions and listeners.
  2. The mechanism seems somewhat cumbersome. In particular, it does not appear to be easy to pass a particular target object as an argument to an action or listener method, as the binding appears to be created as a string. Ideally what I would want is for the breadcrumb entries to be created by iterating a list using a var attribute like a table, and then to pass the var object containing the current crumb in the list as an argument in a method call to an action when that crumb was clicked.
  3. The only way I could see to do it involved passing an identifier for an object (such as the primary key for an entity). Whilst I certainly did not bottom it out completely, the mechanism seemed somewhat complex to use and lacking in flexibility.

Adding these issues to the ones already highlighted in the other posts (in particular the slowness of the jQuery operations when initially collapsing a breadcrumb in preview mode), I decided to take the step of rolling my own custom breadcrumb. The following approach was taken:-

  1. I decided to use a composite component rather than create a full blown custom component in code, as this looked more straightforward, and I had experience of composite component creation.
  2. The key factor which made this possible was that I could use ui:repeat to build the breadcrumbs dynamically from a list.
  3. Conditionally rendered xhtml tags could be handled using ui:fragment within the ui:repeat loop. This allowed the use of the rendered attribute on ui:fragment to optionally render sections even when standard non-JSF html tags such as <li> were used. For example, the root crumb of the component is rendered differently to the others, using an icon only. Also, full width crumbs are rendered differently to truncated crumbs (when truncating the component to fit a target width).

The following are the key features and design points for the breadcrumb:-

  1. The component is designed specifically for dynamic use, being driven from a list much like a table. It is not possible to build it statically on the facelets page (static use is covered by the standard control so this was specifically a non-goal/non-requirement).
  2. It is rendered entirely statically, so there is no (potentially slow) javascript/jQuery stuff going on at render time or during use.
  3. Compression to fit a target width is done by passing the desired target width in to the control. This is then used to set max-width css properties on the crumbs. This allows the crumbs to take the width they need and to only be truncated when they hit the max value. The max value is calculated on the server at render time by dividing the max target width by the number of crumbs to be rendered (allowing for the root crumb as a special case).
  4. The full text for a crumb can be viewed at any time by hovering over the crumb – it is displayed using the HTML title attribute. This is one annoyance with the standard Primefaces control – it does not allow use of the title attribute. Using the title  attribute allows the control to be static on the page for speed but still allows truncated crumbs to be easily read.
  5. The component supports the concept of stale crumbs. The idea is that when moving back up the crumb path (leftwards, or navigating up a tree path), the crumbs to the right which were previously visited are not immediately removed. They are highlighted differently as non bold and italic to make them less important compared to the ‘fresh’ crumbs on the path. In addition, the current crumb is underlined in order to make it completely clear where the current position is. Without stale crumbs present, this would normally be the rightmost crumb, but this will now no longer be the case if stale crumbs are present.
  6. Stale crumbs are only removed if the user navigates down a different tree path which invalidates them, in which case they disappear. Provided the user remains on the same tree path, the crumbs will be left in place.
  7. The stale crumbs may be clicked for navigation just like the fresh ones, to allow the user to revisit old ‘stale’ locations. When a stale crumb is revisited, it changes back to a fresh crumb again (as do all other stale crumbs to its left) and it becomes the current crumb – it is therefore rendered in bold, non-italic, and is underlined. The crumbs to the right of it if any still remain stale as they were before.
  8. To assist navigation up a tree structured hierarchy, it is common to include an ‘up’ button somewhere. This is taken a step further by including both up (= left pointing) and down (= right pointing) buttons on the left hand side of the breadcrumb component. When combined with stale crumb support, this feature allows easy navigation both up and down a given tree path. The right button is only enabled when stale crumbs are present, and clicking it moves down the stale crumb list just as if the next stale crumb was clicked. Navigating to the root of the tree (leftmost crumb) disables the left button.
  9. The component is styled in a similar way to the standard Primefaces component, and uses the same CSS classes for the styling.
  10. Styling may be modified, and features such as up/down support turned on or off.
  11. The component is styled using an inner div which holds the component itself – this can be floating if desired in which case it resizes the width to hold its contents. The inner div is contained in an outer div which is typically fixed in size to suit the page, allowing the inner div to scale up to that size. The width used for crumb truncation calculation is passed separately into the control – this will normally be slightly less than the max width set for the divs.
  12. The component is backed by a controller backing bean, BreadcrumbCtrl/BreadcrumbCtrlImpl, which may be subclassed to allow further customisation. The crumbs are passed into the bean as a simple generic list, and a property name used for the crumb text display is passed as a string attribute to the composite component. The controller fields action and listener events from the component, and passes them on using a BreadcrumbEvent interface – an object implementing this interface is passed into the controller bean at init time. When the user clicks on a crumb, the actual crumb in the list plus its list index are simply passed to its action method. When multiple components are used on a page, each will have its own controller object. These will typically be injected into the page backing bean e.g. by CDI as dependent beans. When the action and listener events are called, the actual controller object is also passed. This would allow, for example, the page backing bean to implement BreadcrumbEvent and manage actions for a number of dependent breadcrumb controllers, using the passed controller object to determine which one was clicked. A sketch of this interface is shown after this list.
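As an illustration of the shape of that callback interface (the method name and signature here are assumptions – see the repository for the actual code):

// Hypothetical event interface: implemented e.g. by the page backing bean
// and passed into each BreadcrumbCtrl at init time.
public interface BreadcrumbEvent {
    // Fired when a crumb is clicked: the actual crumb object, its list index,
    // and the controller that fired the event are passed, so that one bean
    // can manage several breadcrumb controllers.
    void crumbAction(Object crumb, int listIndex, BreadcrumbCtrl controller);
}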

The following are some of the key technical issues which arose during development of the component:-

  • ui:repeat is used for iterative xhtml generation of the crumbs. This uses a var attribute like a table to represent the current object (crumb) in the list. It also supports a varstatus attribute which allows access to various loop properties within the loop, such as the current loop index, first and last boolean properties and others. The index property in particular is used to determine styling for the current crumb and stale crumbs etc.
  • In addition, the varstatus object (com.sun.faces.facelets.tag.jstl.core.IterationStatus) can be accessed from the controller bean using code similar to the following, where varStatusKey is the name given in the varstatus attribute to ui:repeat:-

Map<String, Object> requestMap = FacesContext.getCurrentInstance().getExternalContext().getRequestMap();
IterationStatus varStatus = (IterationStatus) requestMap.get(varStatusKey);

  • It is important to note that whilst it is tempting to cache the varstatus object in the controller bean, this does not work – a new object must be obtained each time using the above code.
  • ui:fragment can be used directly in the ui:repeat loop within the composite component implementation – it is not immediately obvious from the documentation that this is the case.
  • When declaring the attributes for the composite component using <composite:attribute>, note that the type attribute allows specification of a particular object type, the required attribute makes an attribute mandatory, and the default attribute sets a default.

The source code and a usage example of the breadcrumb control may be found in the repository here.


February 11th, 2011
5:49 pm
Further h:outputStylesheet issues

Posted under JSF

I needed to load a custom style sheet for a composite component.

Primefaces has problems loading theme images correctly when using h:outputStylesheet as per here, but using a raw link tag was no good as I wanted to develop a custom component which loaded its own styles, i.e. it had no <head> section.

This post here indicates that if you don’t use the library attribute you can get h:outputStylesheet to work even with Primefaces themes.

In my case, I was not loading themes/images etc. so this was not an issue, but I still used the following directive and avoided use of the library attribute for simplicity:-

<h:outputStylesheet name="uk.co.salientsoft/util/SSBreadcrumb.css"/>

This version worked. My initial attempts (which failed with a resource loading error) were as follows :-

<h:outputStylesheet name="resources/uk.co.salientsoft/util/SSBreadcrumb.css"/>

<h:outputStylesheet name="#{request.contextPath}/resources/uk.co.salientsoft/util/SSBreadcrumb.css"/>

In other words, with a CSS file stored in Eclipse under WebContent/resources/uk.co.salientsoft/util/SSBreadcrumb.css, the above working example was correct, i.e. to load correctly, omit the context root and the resources level and use a relative reference without the preceding slash.

I did try to debug this by visiting the logger settings in Glassfish admin, then under the Log Levels tab, setting everything with JSF in it to finest! This did display an error when trying to load the resource:-

[#|2011-02-11T17:21:13.331+0000|FINE|glassfish3.0.1|javax.enterprise.resource.webcontainer.jsf.application|_ThreadID=31;_ThreadName=Thread-1;
ClassName=com.sun.faces.application.resource.ResourceHandlerImpl;MethodName=logMissingResource;
|JSF1064: Unable to find or serve resource, /SSBreadcrumb/resources/uk.co.salientsoft/util/SSBreadcrumb.css.|#]

However the error was not particularly informative as it just spat back to me the path I gave it to load. It would have been nice for it to tell me how it was mapping the path as then I could have seen the problem more easily.

Note however that using this page in Glassfish is a very convenient way to turn on logging in Glassfish, as it lists the common logger paths for you and gives a combo to select the level for each. You don’t have to go googling for the logger paths and edit a log properties file yourself. Note also that when you do this you must restart Glassfish for the log level changes to actually take effect.


February 2nd, 2011
7:56 pm
Primefaces Table Sorting–Common Gotchas

Posted under JSF

Update 2/2/2011

According to this post here by Cagatay Civici, the Primefaces Project Leader, as at Primefaces 2.2M1, client side datatable is not supported and the dynamic attribute is deprecated from then on.
Seemingly it was becoming too hard to support all the table features on both client and server sides.
It is a shame that this was not documented better and more widely – the user manual still documents the dynamic attribute, and there are a number of posts about client side sorting (though in fairness they probably predate 2.2M1).

This behaviour exactly ties up with what I am seeing in my testing – the dynamic attribute does nothing, sortFunction is ignored, and all sorting is done via Ajax server side.

Original Post 2/2/2011

Having successfully got sorting working I was frustrated to find that a new simple example failed to work. Primefaces table sorting is pretty much transparent, requiring minimal effort and working out of the box. The lessons learned on this one are instructive so I summarise them below.

Details of Example and Problem

  1. The example fetched a list using JPA, stored the list in a view backing bean, then displayed it in a table with a sortable header.
  2. When the header was clicked, the up/down arrows were correctly displayed, but the data was not sorted – it was as if the request was ignored.
  3. It was not immediately obvious how the header click was handled – there was no obvious click event on the HTML element, and I could not see any Javascript running after a quick test with Firefox/Firebug. I gave up this line of attack but may look into it later as to how it all works. Some dynamic CSS pseudo classes/jQuery trickery may be involved.
  4. I was not certain as to whether the table was client side or server side/Ajax, even though it should have been client side by default. To diagnose the problem, I changed the value of the sortBy attribute in the column header to be an invalid bean property. The symptoms did not change, but an error was logged in the Glassfish server log for each header click – “[#|2011-02-02T18:22:14.557+0000|SEVERE|glassfish3.0.1|org.primefaces.model.BeanPropertyComparator|_ThreadID=24;_ThreadName=Thread-1;|Error in sorting|#]”.
  5. This showed that the sort was being attempted but was failing due to the invalid property, which indicates that the sort was attempted server side. According to the Primefaces forum, for client side sorting, the sortBy property is not used as the displayed data is sorted.
  6. Interestingly, I would have expected Primefaces to use a client side table by default, but clearly it was not. Changing the dynamic property on the table tag to false explicitly had no effect and the behaviour was unchanged, so I will need to investigate client side tables further as for small data sets they are faster – I will update this post when I have done this.
  7. I turned on FINEST logging with Eclipselink in persistence.xml. Use <property name="eclipselink.logging.level" value="FINEST" /> to do this, and note that you need to bounce Glassfish for a logging level change to be picked up. I noticed after this that the JPA query seemed to be rerunning very often, perhaps for every click of the sort header. This led me to wonder if the backing bean was being recreated each time.

 

Cause and Solution

In the end, it was simply that I had not annotated the JSF backing bean as @SessionScoped (the example used CDI throughout, and did not use JSF managed beans). This meant the bean was request scoped by default, and was indeed being recreated – the sort was trying to do the correct thing, but the data was getting reloaded from the database in the original sort order each time!

Adding the correct annotation as above (and making a couple of the beans correctly implement Serializable to satisfy CDI/Weld) solved the problem.

This is a good example of where a problem cause is not immediately apparent from the symptoms.
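For illustration, the corrected bean declaration would look something like this (the class name is assumed from the taxonomyBrowser EL name in the fragment below):

import java.io.Serializable;
import javax.enterprise.context.SessionScoped;
import javax.inject.Named;

@Named("taxonomyBrowser")
@SessionScoped // the default was request scope, so the bean was recreated per request
public class TaxonomyBrowser implements Serializable {
    // taxonomy nodes fetched via JPA and exposed as getTaxonomyNodes()...
}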

Sample JSF page fragment showing the form containing the table

<h:form id="frmTest">   
    <util:themeSwitcher form="frmTest" outerStyle="clear: both; float: left; margin-bottom:20px;" innerStyleClass="ui-widget"/>
    <h:panelGroup layout="block" style="width:300px;">   
        <p:dataTable var="taxonomyNodes" value="#{taxonomyBrowser.taxonomyNodes}" scrollable="true"
             styleClass="clear #{fn:length(taxonomyBrowser.taxonomyNodes) lt 20 ? 'ss-tableheader-scrolladjust-nobars' : 'ss-tableheader-scrolladjust-bars'}"
             height="460" > 
             <f:facet name="header">
                Client Side Sorting
             </f:facet>
             <p:column headerText="Category List"  sortBy="#{taxonomyNodes.text}">
                 <h:outputText value="#{taxonomyNodes.text}" />
             </p:column>           
        </p:dataTable>
    </h:panelGroup>
</h:form>


January 26th, 2011
11:48 pm
Oracle Sql*Plus Nested Scripting, Parameters and Exit Codes

Posted under Oracle

Update 29/3/2012 – SP2-0310 error due to backslash “\” in parameter

SQLplus has a bug whereby if a call to a subscript in the same directory has a command line parameter with a backslash in it, SQLplus messes up the default directory processing and cannot find the called script. It seems to think that the backslash in the parameter is part of the called script file path.

To get around this and avoid hard coding the subdirectory in this script, I pass the subdirectory in from the caller.

Note that this issue may well be the same issue listed below in connection with the use of forward slash “/” causing default directory issues.

A second bug is also present, whereby &1 only works the first time. When used a second time, it incorrectly replaces it with the &1 parameter for the last called script, rather than using the one passed in from our caller. It may be that these &1-&n parameters are global in nature, rather than per level. To get around this, I use an explicit named define for the &1 and use that instead.

The following illustrates the problems:-

--This works ok assuming AddDomain.sql is present in the same directory as this script
@@AddDomain '1' '!'

--This will fail with an SP2-0310 error
@@AddDomain '1' '\'

The following fix resolves the problem:-

Calling script Main.sql

--We pass the script its own subfolder to get around an SqlPlus SP2-0310 bug, see Lookups.sql
@@Lookups\Lookups 'Lookups'

 

Called Script Lookups.sql

/*
* SQLplus has a bug whereby if a parameter has a backslash in it,
* SQLplus messes up the default directory processing and cannot find the called script
* It seems to think that the backslash in the parameter is part of the called script file path
* To get around this and avoid hard coding the subdirectory in this script,
* we pass it in from the caller.
*
* A second bug is also present, whereby &1 only works the first time.
* When used a second time, it incorrectly replaces it with the &1 parameter
* for the last called script, rather than using the one passed in from our caller
* It may be that these &1-&n parameters are global in nature, rather than per level.
* To get around this, we use an explicit named define for the &1 and use that instead.
*
* These 2 bugs are awful – I’m not impressed!
*/

define dir=&1
 
@@&dir\AddDomain '1' '\'
@@&dir\AddDomain '2' '\reports'

These fixes resolve the problem.

 

Update – Parameter passing, use of “/”  and default directory issues

These issues came about when switching to the command line sqlplus utility rather than using Toad as I had before.

  1. If a parameter passed to a called script (in this case via @@) contains a slash “/”, Sql*Plus gives an SP2-0310 file open error. The error is nothing to do with opening the file, but is due to the slash. Escaping it was no help. I was using the slash as a date delimiter for a constant date string passed to the called script, so I just switched this to a dash “-“ instead and changed the to_date string in the called script to ‘YYYY-MM-DD HH24:MI:SS’. This solved the problem.
  2. If a script called via @@ is followed by a "/", then the script will be called twice. Whilst it is normal to follow an SQL command with the slash, this must not be done when calling a script.
  3. When a relative path is used to call a script via the “@@” command, the default directory for the relative path is not the directory of the calling script, as you might expect, but rather the top level default directory. This is rather nasty and inconsistent in my view, but is simple to fix once you know.

Update – Return Codes from Sql*Plus

Sql*Plus can be configured to return an exit code to the calling operating system, to indicate either an OS error (such as a file open error), or an SQL error. This is extremely useful for detecting the success or failure of a script run – for example, I use it during JUnit testing when running scripts to set up a test, as there is then no need to scan log files and you can then just look at the test outcome (which is a central goal of JUnit).

To do this, just add the following at the head of your top level script. This will cause the script or any of its called scripts to abort with an exit code when it fails for any reason:-

WHENEVER OSERROR  EXIT 1
WHENEVER SQLERROR EXIT SQL.SQLCODE

The OSERROR unsurprisingly deals with OS related errors. Unfortunately, it does not appear to be possible to return the actual OS error, so in this case I have returned a value of 1, which is a Windows 'illegal function' error (also what is returned when you copy or rename an invalid filename, for example). The SQLERROR returns the actual Oracle error code. These codes are returned to the operating system, and for example under Windows may be detected by testing the %ERRORLEVEL% value as per normal with Windows batch files. I have yet to try this under Linux, but the mechanism is similar.
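As an illustration of consuming the exit code from Java (e.g. in a JUnit fixture as described above), a hedged sketch using ProcessBuilder; the connect string and script locations are hypothetical:

import java.io.File;

public class ScriptRunner {

    // Runs a top-level script via the sqlplus command line utility and throws
    // if the script signals failure through a non-zero exit code.
    public static void runScript(String connectString, File script) throws Exception {
        ProcessBuilder pb = new ProcessBuilder(
                "sqlplus", "-L", connectString, "@" + script.getName());
        pb.directory(script.getParentFile()); // run from the script's own directory
        pb.inheritIO();                       // show the script output in our log
        int exitCode = pb.start().waitFor();
        if (exitCode != 0) {
            throw new RuntimeException("Script " + script + " failed with exit code " + exitCode);
        }
    }
}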

Update – Verify and echo

When debugging a script, it is often useful to log each statement in the output (echo), and also to check how parameter substitution was performed (verify). To do this, just add the following to the head of your top level script. To disable them, just use off instead of on:-

set verify on
set echo on

 

Original Post

This post details some common pointers and gotchas when using nested scripts in Sql*Plus, and passing parameters to a nested script.
The examples were all executed with Toad, which is subset compatible with Sql*Plus, so the points here should work with either.

  1. Use @filepath to call a nested script with a full file path. The default file type is .sql.
  2. A common technique is to place all related scripts and sub scripts in the same directory. If you do this, use @@filename instead with just the filename, and SQL*Plus/Toad will assume that the called script is in the same directory as the executing one. This allows a collection of scripts to be moved to any desired directory and still call each other correctly with no changes.
  3. Parameters may be defined with the define command as in the example below. Defined parameters are global and visible to a called script even if defined in the calling one.
  4. Parameters are referred to using &name for a defined parameter, or &1, &2 for the first or second parameter passed to a script.
  5. When passing parameters to a script you must delimit them with spaces and must have single quotes around the parameter values, which will allow the parameter value to contain spaces. Note however that you cannot pass an empty string using two quotes '' as this gives a missing parameter error. Oracle treats an empty string as a null anyway, so you should pass the null value differently, e.g. using null.
  6. If you use &name for a parameter you have not previously defined, you will be prompted for the value each time you refer to &name. If you use &&name you will only be prompted for the value the first time, and it will stick from then on and you will not be prompted. If a variable has been previously defined you just use &name all the time and the defined value is used – you will not be prompted.
  7. When defining a parameter or passing one in a call to a sub script, put single quotes around the value. These will be stripped off when you refer to the parameter, so you will need to add them back when referring to the parameter if you are using a quoted string, for example '&1' or '&param'. Double quotes did not work correctly with Toad (not sure about Sql*Plus) so are best avoided.
  8. Due to the above quote handling/stripping, it is therefore not straightforward/possible to add your own secondary quotes to pass a quoted string with the quotes already present. You must add the quotes in the called script, which means that you must decide whether you are passing a constant string (which will have quotes added by the called script), or an expression (which will not have quotes added). There is not a way I could find to pass a value which can either be a quoted string or an expression. In practice, this tends not to be an issue and you can work within the limitation, unless you are trying to make scripts fly to the moon – don’t – they are not programs!
  9. If you want to append alphanumerics immediately after a parameter expansion, place a dot after the parameter name, and the dot will be removed, for example &param.xxxx will substitute the definition of param and concatenate it directly with xxxx. Set Concat lets you define the concatenation character to be used, period (dot) is the default.

The documentation for Sql*Plus may be found here. In addition, this is a useful site detailing a number of the features (although it gets the meaning of & and && the wrong way around!)

Simple Example of Nested Scripts with Parameter passing

Test1.sql

define Table = 'dual'
@@test2 'dummy' 'X'

 

Test2.sql

select * from &Table where &1 like '&2'
/


January 23rd, 2011
1:41 pm
Eclipse JPA table generation fails with EclipseLink 2.1.2, Glassfish V3.0.1

Posted under JPA

I was attempting to create a database for a new project which just contained entities – I had imported the entity classes from Enterprise Architect and added the required annotations, getters & setters etc.

My first attempt used the following persistence.xml (I have removed most of the classes for the example):-

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.0" xmlns="http://java.sun.com/xml/ns/persistence"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">

    <persistence-unit name="Test" transaction-type="RESOURCE_LOCAL">
      <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>

      <non-jta-data-source>jdbc/TestPool</non-jta-data-source>
      <class>uk.co.salientsoft.Test.domain.Address</class>
      <class>uk.co.salientsoft.Test.domain.AddressLine</class>
      <class>uk.co.salientsoft.Test.domain.Campaign</class>
      <class>uk.co.salientsoft.Test.domain.CampaignText</class>
      <exclude-unlisted-classes>false</exclude-unlisted-classes>
      <properties>
       <property name="eclipselink.logging.level" value="INFO" />
       <property name="eclipselink.target-server" value="None" />
       <!-- <property name="eclipselink.target-server" value="SunAS9" /> -->
       <property name="eclipselink.target-database" value="Oracle" />
       <property name="eclipselink.ddl-generation.output-mode" value="database" />
      </properties>
    </persistence-unit>
</persistence>

 Having created the database and validated the connection/datasource etc., I attempted to create the tables from  Eclipse by right clicking the project and selecting JPA Tools/Create Tables from Entities…

This resulted in the following error:-

[EL Config]: Connection(5226838)--connecting(DatabaseLogin(
    platform=>OraclePlatform
    user name=> "Test"
    connector=>JNDIConnector datasource name=>jdbc/TestPool
))
[EL Severe]: Local Exception Stack:
Exception [EclipseLink-7060] (Eclipse Persistence Services - 2.0.1.v20100213-r6600): org.eclipse.persistence.exceptions.ValidationException
Exception Description: Cannot acquire data source [jdbc/TestPool].
Internal Exception: javax.naming.NoInitialContextException: Need to specify class name in environment or system property, or as an applet parameter, or in an application resource file:  java.naming.factory.initial

The error occurred even though the data source was being used outside the container as a non-jta data source (this worked previously for JPA1/Glassfish V2, allowing JPA to be used outside the container but still using a datasource defined in Glassfish). A previous post here (in the section Notes on Persistence.xml) also reports that when running outside the container, it is vital to have eclipselink.target-server defaulted or set to none. In this case, the setting made no difference either way. After a fair bit of research/Googling, I could not find the cause of the problem, so I simply reverted to specifying the connection explicitly in persistence.xml, without using the connection defined in Glassfish at all.

This worked correctly. I will not need to create the database very often, and it only involved tweaking a few lines in the file, so I’m happy to live with it. Here is my modified persistence.xml which uses a direct connection specified in the file. This version created the database correctly. Note that, as detailed in this post here (near the bottom of the post), the property names for the JDBC url, user, password and driver have now been standardised and are no longer EclipseLink-specific.

<?xml version="1.0" encoding="UTF-8"?>
<persistence version="2.0" xmlns="http://java.sun.com/xml/ns/persistence" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/persistence http://java.sun.com/xml/ns/persistence/persistence_2_0.xsd">

    <persistence-unit name="Test" transaction-type="RESOURCE_LOCAL">
    <!-- <persistence-unit name="Test" transaction-type="JTA"> -->

      <provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>

      <!-- <jta-data-source>jdbc/TestPool</jta-data-source> -->
      <!-- <non-jta-data-source>jdbc/TestPool</non-jta-data-source> -->

      <class>uk.co.salientsoft.Test.domain.Address</class>
      <class>uk.co.salientsoft.Test.domain.AddressLine</class>
      <class>uk.co.salientsoft.Test.domain.Campaign</class>
      <class>uk.co.salientsoft.Test.domain.CampaignText</class>
      <exclude-unlisted-classes>false</exclude-unlisted-classes>
      <properties>

        <property name="javax.persistence.jdbc.url" value="jdbc:oracle:thin:@localhost:1521:xe"/>
        <property name="javax.persistence.jdbc.user" value="Test"/>
        <property name="javax.persistence.jdbc.password" value="Test"/>
        <property name="javax.persistence.jdbc.driver" value="oracle.jdbc.OracleDriver"/>

        <property name="eclipselink.logging.level" value="INFO" />
        <property name="eclipselink.target-server" value="None" />
        <!-- <property name="eclipselink.target-server" value="SunAS9" /> -->
        <property name="eclipselink.target-database" value="Oracle" />
        <property name="eclipselink.ddl-generation.output-mode" value="database" />
      </properties>
    </persistence-unit>
</persistence>
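For reference, a minimal sketch of exercising such a RESOURCE_LOCAL unit outside the container programmatically (rather than via the Eclipse JPA Tools menu). Note this is an assumption-laden illustration: a programmatic run would also need the eclipselink.ddl-generation property set to create-tables, which the post leaves to the Eclipse tooling:

import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class CreateTables {
    public static void main(String[] args) {
        // Uses the "Test" unit above; the javax.persistence.jdbc.* properties
        // supply the connection, so no container datasource is involved.
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("Test");
        emf.createEntityManager().close(); // deploying the unit triggers DDL generation
        emf.close();
    }
}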


January 19th, 2011
7:39 pm
Using a Command Pattern for EJB method access

Posted under EJB

Further Update 26/1/2012 – Return Status vs Exceptions in EJB methods

A further design point has come up re the choice between returning a status enum or throwing exceptions in EJB methods. This also impacts the use of Command Patterns, and in particular the status enum idea below. I have posted on this issue here.

 

Further Update 26/1/2012

  1. Another nice feature would be the ability to check the status of a command generically after it has executed, and then take a decision on whether to continue with the rest of a CommandList.
  2. If this check could be done in the CommandList execute() loop, it would avoid the need for custom initialisers just to abort after a status return. This would avoid the need for the anonymous inner initialiser classes, finals for all the Commands, and constructor injection for all the final Commands.
  3. The initialiser feature is a nice to have, but messy just for simple generic status checking.
  4. A flexible way to do this would be to add a status enum to each command, and then, in the base results class, have an EnumSet of values from that enum. Adding it to the base results ensures it gets copied back along with everything else.
  5. If a command succeeded with no issues, it would just return null for the status. If there were any ‘issues’ with the result, it would return a status code from the enum saying what happened. This would be done in the actual command subclass, just after calling the actual SSB method, when building the results object to be returned.
  6. When using a Command in a list, you would then pass in an EnumSet of status codes for which execution is allowed to continue, to the actual command object generically (via a generic method of Command). Execution would continue either if the status was null, or if it was contained in the set. If non null and not in the set, then a CommandList would stop execution at that point and return what it had done so far.
  7. A further enhancement to 6. could be to also pass in a ‘default action’ enum value to indicate the default action (abort or continue) when a non null status is present. Then this enum value defines the default action, and the EnumSet defines the exceptions to the default, i.e. “abort on all non null statuses except  continue for any in this set” or “continue on all non null statuses except abort for any in this set”.
  8. If the feature was not needed, it could default to null status returns and a CommandList would always carry on (subject to what any initialisers do).
  9. The feature adds flexible decision making for a CommandList about continuing, allowing checking for multiple status codes via the EnumSet, but in an entirely generic way which does not involve initialisers (see the sketch after this list).
  10. An EnumSet is also efficient and compact, as it is implemented as  a bit mask, and typically a Command will only have a few statuses in its enum.
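A hedged sketch of the status idea (class and enum names are illustrative; in the real pattern this would extend the abstract Command base class, and the canContinue check would sit in the CommandList execute() loop):

import java.util.EnumSet;

// Hypothetical command carrying a status enum as described above.
public class UpdateAccountCommand /* extends Command */ {

    public enum Status { DUPLICATE, NOT_FOUND, STALE_DATA }

    private Status status; // null means the command succeeded with no issues

    // Non-null statuses on which a containing CommandList may still continue.
    private EnumSet<Status> continueStatuses = EnumSet.noneOf(Status.class);

    public UpdateAccountCommand continueOn(EnumSet<Status> statuses) {
        this.continueStatuses = statuses;
        return this;
    }

    // Checked generically by the CommandList after each command executes;
    // if false, the list stops and returns what it has done so far.
    public boolean canContinue() {
        return status == null || continueStatuses.contains(status);
    }
}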

 

Update

  1. A better technique for fetching the results would be to do it automatically in the Invoker after the results are returned. This has the following advantages:-
  2. The caller has no work to do – for a command list, the results would be immediately available in the original passed in Command/CommandList. If a CommandList was passed, then the individual Commands used to build it would be populated and would be referred to directly so that the correct result subclass could be referenced.
  3. The best way would be for a Command/CommandList to do this generically by passing the original command list in rather than the current varargs mechanism. There is no point in just fetching some of the results as they will all be needed.
  4. Doing it this way presents even less risk of mismatching the command objects to their results – it is automatic and the caller cannot get it wrong. In particular it is not possible to fetch results into the ‘wrong’ command object.
  5. To do this, the fetching needs to be done in the caller’s context, not that of the Invoker (which is an SSB). This could be done by calling a static invoke method on the invoker and passing the invoker SSB and command/command list to it. The static method would then do the ‘real’ invoke, and then do the fetching directly afterwards, directly into the command/CommandList passed by the caller. This static method would therefore not return anything :-

Invoker.invoke(invoker, myCommandList);
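A minimal sketch of that static helper (the execute and fetch method names are assumptions about the implementation):

// Hypothetical static helper placed alongside the Invoker stateless session bean.
public static void invoke(Invoker invoker, Command command) {
    // The 'real' invoke: a call into the EJB container, establishing the
    // transactional context. A new command instance holding the results is
    // returned, as results cannot come back via the passed parameter.
    Command executed = invoker.execute(command);

    // Back in the caller's context: copy the results into the caller's original
    // command (for a CommandList, into each constituent command), so the caller
    // reads results from the objects it already holds and nothing is returned.
    command.fetch(executed);
}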

Original Post

A command pattern provides a number of advantages for EJB method calling to stateless session beans, at the expense of having to wrap the EJB methods in command classes :-

  1. It provides a convenient and tidy way of performing a number of EJB method calls, specifically to stateless session beans, in a single transaction, by wrapping them in an outer Command List which is itself a command (a macro concept). This then means that only a single call is needed to the EJB container to establish transactional context, by calling a standard command invoker which is itself an EJB.
  2. When wrapping a number of EJB calls in this way, the interface is much cleaner than trying to roll your own higher level method call which calls all the others. When rolling your own, passing in parameters cleanly for a number of EJB calls can be complex, and packing the returned data for convenient retrieval is also a more complex issue. With a command list, each command in the list has its own call parameters, set up and stored in the command object when it is created. Each also has its own return object, retrieved using a convenient standard mechanism.
  3. Ad hoc lists of commands can easily be created, as the list does not need to be hard coded.
  4. All the underlying EJB calls can of course still be used, so commands can be used on an as needed basis if it is deemed overly complex to use them throughout. However, there are advantages if they are used consistently, as other parts of a design can then rely on command usage.
  5. A command list can be used as a ‘drop box’ of commands, decoupling different client objects of the drop box from each other. This allows a single transactional context call to be used to execute all the commands, but the client objects need have no knowledge of each other or of the other commands in the drop box. An example of this might be different sub areas of a page which post commands into the drop box command list, and the list is executed by a submit button for the whole page. The sub areas of the page need have no knowledge of each other but can still share the same outer EJB call to the invoker of the command list, such that a single EJB container call results (indirectly) from the submit button.
  6. An initialisation mechanism allows initialiser objects to be attached to each command. This allows later commands in an ad hoc list to have inputs which depend on the output of earlier commands, but still having everything running as a result of a single EJB call to the container. The initialisers are typically anonymous inner classes passed in to a command object when it is set up. Each initialiser is called just prior to execution of the command it is attached to.
  7. The command objects will typically be called from a JSF model bean in a JSF environment.

The main disadvantages are as follows :-

  1. The EJB method calls need to be wrapped in Command classes, so extra coding is needed. However, this  is straightforward and follows a standard pattern.
  2. Some extra complexity arises when command lists are used, in order to unpack the results afterwards. However, this all follows a standard mechanism and importantly does not rely on any error prone use of hard coded list indexes to unpack results.
  3. If custom initialisers are used, the anonymous inner classes which are typically used make the code somewhat more complex. However, if certain lists of commands are commonly used they can and should be combined into a hard coded command list/macro. If this is done, the initialisation can be performed inline as the commands are called, rather than having to be passed in as in the case of an ad hoc list with initialisation.

A sample implementation of this may be found in the repository here.

Key design points for the implementation are as follows :-

  1. The command objects themselves are stateful (as they hold any passed parameters and results, and for a command list, the list of commands). They are not EJBs themselves, but they do refer to the actual EJBs being wrapped. The Invoker is an EJB, albeit a standard one, and is passed the commands to be invoked. Creation of the transactional context happens when the invoker is first invoked. All calls to the underlying EJBs (from the execute methods of the command objects being invoked) are then calls made within the same transactional context already established in the container, and so require much less overhead, and are of course part of the same transaction.
  2. CDI is used throughout for dependency injection, both on the EJBs and on all the commands. The commands must be injected with CDI as they themselves must also have EJB references injected into them – this will not happen correctly if a command object is just created via new, as it will not be properly under CDI control to allow the injection to take place. However, as Command Lists are purely containers for commands, and do not themselves have any CDI dependencies/any references injected, they can freely be created with new if desired. For example, A Command List may be created with new, have all its constituent commands passed into its constructor, and be created directly in the argument list of a call to the invoker, as in the code example above.
  3. A command list is literally a list of commands executed in list order.
  4. As a command list is also a command, lists may be nested – it is entirely possible for a command in a list to itself be another command list.
  5. When a command (or a command list, as it is also a command) is executed, it returns a new instance of its own class containing the results object as the return value. This is necessary as when calling into an EJB container, modifications to properties of parameter objects passed in are not normally visible on return from an EJB container. The only means of returning data is via the method return value. Whilst this behaviour can be overridden in some cases with EJB container settings, this was deemed highly undesirable, and is particularly an issue when the call is to a remote interface. It is important to understand the distinction between the methods of Command objects and Command lists which are called only in the client environment, and calls made into the EJB container (possibly remotely). Calls made in the client environment do allow changes to passed object properties – this is necessary for example to store the listIndex in a passed Command object when it is added to a Command List (see later in point 12), and also when fetching the results back via the fetch method after invoking a Command list (also in point 12). This behaviour is also used by a number of the Java APIs – for example, Collections.sort() sorts the passed list argument, but does not have a return value from the method. Conversely, when the Invoker object is used to invoke (execute) a passed command, the invoker is an EJB (stateless session bean), so the client is calling into the EJB container and establishing transactional context (the call may also be a remote one over RMI if a remote interface is used). In this case, the story is very different, and changes are only passed back via the method return value.
  6. Note that the client environment will often also be within an EJB container – Command objects will typically be invoked for example from a stateful JSF model bean, which may often be in the same Java EE container as the EJBs. However, they are not created in transactional EJB context, so whilst for example they may be created and hosted in Glassfish (which will also be responsible for providing CDI services among many other things), they are not created in its EJB container – they are passed into the EJB container via the Invoker which is itself an EJB (stateless session bean).
  7. Commands and Command lists are implemented using a Composite pattern based on an abstract Command class. This design choice was adopted as it became necessary for the abstract Command base class to have knowledge of command lists. For example, a command must have a listIndex property which is used to hold its position in a command list if it is added to one, even though this has no relevance to a standalone command. Also, when command lists are themselves nested inside other command lists, the structure genuinely becomes composite, so it is easier to declare all member types generically using the standard abstract base Command class.
  8. A command class typically wraps all the overloaded calls for an EJB method. This is convenient as a command must have a single return type, and encapsulating the overloads with their different parameters in the same command class reduces the number of command classes and eliminates unnecessary clutter. The command class has a set of methods which mimic the calls to the underlying EJB; the difference is that rather than calling the EJB, these methods just store the parameters as properties of the command for later use when the command is invoked by the Invoker. An enum (CallType) defines which overloaded method is being used; its value is set when a particular overloaded method is called, and later, when the command is invoked, the enum determines which corresponding overloaded EJB method to call. This mechanism avoids arcane decoding of null parameters to decide which call is in use, and is completely transparent. When these methods on the command class are called, they typically return a reference to the command object itself, as this is often useful – for example, to add the command directly to a new command list. (See the example command sketch after this list.)
  9. When creating a fixed command list to be reused, it is easier and clearer just to extend Command and embed the commands in a simple outer ‘macro’ command. Whilst it is possible to create a fixed list by either extending a CommandList or composing one, this turns out to be slightly longer than just extending a Command, and the code is significantly harder to understand. A comparison was done in an early version of the Command pattern implementation here. (Note that this is an old version and should only be used for reference; the link given further up is the latest version, which is an EJB sample and should be used as the basis for other code.)
  10. Each command has an inner class containing its results to be returned, together with a reference property to an instance of the inner class. The results are unique to the command class and are only used once, so the inner class avoids clutter. The inner class extends a base CommandResult inner class in the base Command class. This allows the results for a command to be reassigned polymorphically to the result property of another object – a trick which is used when accessing the results of commands from a command list. Note that a command class should use delegated property access for the results – i.e. the command class itself has property getters for all the result properties, but gets the values directly from the result object members. This saves the client of the command object another level of nested reference when accessing results. Note also that when accessing result object members from within the command class, they are accessed directly without using getters; this saves code, and it is reasonable that the inner class result object is fully exposed to its parent command object.
  11. In the case of a command list, its result object contains a list of the results for its constituent commands, stored in the same sequence as the original command list.
  12. Each command, when added to a list, has its own internal listIndex property which stores its position in the list. After the command list has been executed, a fetch method is called on the returned result command list, passing in the original command(s) stored in the list. The fetch method uses the listIndex property from each command to locate its results, as the results are stored in a list in the same sequence as the original command list. The result object from the list is then polymorphically assigned to the result property of the object passed into fetch. The beauty of this is that the caller has no knowledge of the list index – they simply pass in the same object that they originally added to the command list, and that object holds its own index in the list. This avoids errors which could easily occur if list indexes were explicitly hard coded – refactoring, for example, could change the indexes of existing commands and break the code, which is completely avoided with this mechanism. Also, as the results are passed back into the original object, that object may then be used to access the contents of the results, as it knows the exact type of the results class – the results are its own! In this way, the results may be fetched polymorphically but accessed via the correct subclass. (See the client-side sketch after this list.) At present, the error handling in the example is incomplete. Ideally a type check should be performed in fetch to confirm that the object passed in really is an instance of the same type as the one in the command list; if it is not, run time errors will occur when the result object is accessed, as it will be the wrong result subclass. One way this can happen is if the caller passes in a command object which was added to a different list, i.e. the command object is not for the list being accessed. A command object can be reused across multiple command lists, but can only be used with a new list after the results have been fetched into it (and accessed) from the list it was last added to and executed in. Another problem could arise if a caller passed a new instance of a command object into fetch – this would have a null listIndex and would therefore give an error. Error handling could also be improved by assigning a unique ID to each command list and storing this ID in each command object as well as the listIndex; the fetch method could then check this ID for a match as well, and the ID could be reset when the command list is cleared for reuse. This kind of error cannot be detected at compile time with Generics, as the list contains a mix of different Command subclasses. The polymorphic casting/assignment of the result object does not give any unchecked warnings, as Generics have not been used here (even with Java 5 and later, you won’t necessarily get unchecked warnings when using your own non-generic classes).
  13. If an anonymous inner class is used as an initialiser, the anonymous inner class has access to the properties of its creator, i.e. the client object creating and invoking the commands. In particular, it can access the command object references which have been added to the command list. An initialiser is passed both the original command list and the results so far up to the point reached in the list. It can then use the standard fetch method on one of the object references added to the list in order to fetch details of previous results in the list, and use them to initialise the passed parameters for the command it is initialising (or indeed a later one in the list if required). Note that an anonymous inner class can only access final objects in its parent/creator, so any commands added to the list which are referenced in this way must be declared final. This is not the problem it might seem – for a final object, only the object reference cannot change; properties of the object can still be modified. Note that as CDI is normally used to inject all the object references, CDI constructor injection must be used to inject these references in order to allow them to be final but still injectable by CDI. This is very straightforward (see the example code) – just define a constructor with all the required final objects as parameters, assign them in the constructor, and prefix the constructor with @Inject. (A sketch of this is given after the list.)
  14. Each initialiser returns a boolean. If true, the command and subsequent ones are executed normally. If an initialiser returns false, all command execution is aborted at that point, and the results of any commands not executed are set to null. This is convenient, for example, where subsequent commands depend on the result of an initial one and the initial one returns null – the whole sequence can then be aborted, as there is no point in executing the dependent commands; they would all also return null.
  15. Each command has an enabled property which defaults to true. An initialiser can use this to disable any subsequent commands in the list (while leaving others enabled), as each initialiser is passed the original command list as well as the results so far. An initialiser is also passed the index of its own command in the command list so it is fully aware of the point reached in the list.
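
To make the above concrete, a few minimal sketches follow. These are illustrative only – apart from the names taken from the description above (Command, CommandList, CommandResult, Invoker, listIndex, fetch), all class names, method names and signatures are assumptions for illustration rather than the exact repository API, and error handling is omitted. First, the abstract Command base class and the Invoker session bean (points 1, 5, 7, 10 and 15), shown as two separate source files :-

import java.io.Serializable;
import javax.ejb.Stateless;

// Base class for the composite pattern (point 7). Commands must be
// serializable as they may be passed over a remote EJB interface.
public abstract class Command implements Serializable {

    // Position in a command list; null until added to one (points 7 and 12).
    protected Integer listIndex;

    // An initialiser may disable later commands in a list (point 15).
    protected boolean enabled = true;

    // Base inner result class, extended by each command's own inner
    // result class (point 10).
    public class CommandResult implements Serializable {
    }

    // Executes the wrapped EJB call(s) and returns a NEW instance of the
    // same command class carrying the results (point 5).
    public abstract Command execute();
}

// The Invoker is a plain stateless session bean. Calling invoke() from the
// client establishes the transactional context once; all EJB calls made from
// the command execute() methods then share that context (point 1).
@Stateless
public class Invoker {

    public Command invoke(Command command) {
        return command.execute();
    }
}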
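
Next, a typical command class, showing the CallType enum used to select between the wrapped EJB overloads (point 8), the inner result class with delegated result access (point 10), and execute() returning a new instance carrying the results (point 5). CustomerService, Customer and the finder methods are assumed names :-

import javax.inject.Inject;

// Wraps the overloaded find methods of an assumed CustomerService EJB.
public class FindCustomerCommand extends Command {

    // Records which overload was requested (point 8).
    private enum CallType { BY_ID, BY_NAME }

    private CallType callType;
    private long id;
    private String name;

    // The wrapped EJB reference, injected by CDI (point 2).
    @Inject
    private CustomerService customerService;

    // These methods mimic the EJB overloads but only store the parameters;
    // the real call happens later, inside the Invoker's transaction. Returning
    // 'this' allows the command to be added to a command list directly.
    public FindCustomerCommand byId(long id) {
        callType = CallType.BY_ID;
        this.id = id;
        return this;
    }

    public FindCustomerCommand byName(String name) {
        callType = CallType.BY_NAME;
        this.name = name;
        return this;
    }

    // Inner result class, unique to this command (point 10).
    public class Result extends CommandResult {
        Customer customer;
    }

    private Result result;

    // Delegated result access (point 10): the client calls getCustomer()
    // directly on the command, and result members are accessed directly
    // within the command class without getters.
    public Customer getCustomer() {
        return result.customer;
    }

    @Override
    public Command execute() {
        // A new instance is returned carrying the results (point 5). It makes
        // no EJB calls itself, so creating it with plain 'new' is acceptable.
        FindCustomerCommand executed = new FindCustomerCommand();
        executed.result = executed.new Result();
        executed.result.customer = (callType == CallType.BY_ID)
                ? customerService.findById(id)
                : customerService.findByName(name);
        return executed;
    }
}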
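
The client-side flow for a command list, including the fetch mechanism of point 12, then looks something like this (the CommandList varargs constructor and the fetch signature are assumptions) :-

import javax.inject.Inject;

// A client bean - for example a stateful JSF model bean (point 6).
public class CustomerAction {

    @Inject
    private Invoker invoker;

    // Commands are CDI-injected so that their own EJB references are
    // injected correctly (point 2).
    @Inject
    private FindCustomerCommand findCustomer;

    public Customer lookUp(long id) {
        // A Command List has no CDI dependencies of its own, so it may be
        // created with plain 'new' (point 2).
        CommandList list = new CommandList(findCustomer.byId(id));

        // Calling into the Invoker EJB establishes the transaction; the
        // results come back as a new command list (points 1 and 5).
        CommandList results = (CommandList) invoker.invoke(list);

        // Pass the ORIGINAL command to fetch(); its stored listIndex locates
        // its result, which is assigned back into it polymorphically, so the
        // caller never handles list indexes explicitly (point 12).
        results.fetch(findCustomer);
        return findCustomer.getCustomer();
    }
}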
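
Finally, a sketch of an ad hoc list with an anonymous inner class initialiser, showing CDI constructor injection so that the command references can be final (point 13) and the boolean abort mechanism (point 14). The Initialiser callback, its three-argument signature and the two-argument add() method are assumptions based on the description above :-

import javax.inject.Inject;

public class PlaceOrderAction {

    // Final so that the anonymous inner class below can reference them (point 13).
    private final FindCustomerCommand findCustomer;
    private final CreateOrderCommand createOrder;   // another assumed command class
    private final Invoker invoker;

    // CDI constructor injection allows the injected fields to be final (point 13).
    @Inject
    public PlaceOrderAction(FindCustomerCommand findCustomer,
                            CreateOrderCommand createOrder,
                            Invoker invoker) {
        this.findCustomer = findCustomer;
        this.createOrder = createOrder;
        this.invoker = invoker;
    }

    public void placeOrder(long customerId) {
        CommandList list = new CommandList(findCustomer.byId(customerId));

        // The initialiser is passed the original list, the results so far and
        // the index of its own command (point 15), and runs just before its
        // command executes.
        list.add(createOrder, new Initialiser() {
            @Override
            public boolean initialise(CommandList original, CommandList resultsSoFar, int index) {
                // Use the standard fetch mechanism to obtain an earlier result (point 13).
                resultsSoFar.fetch(findCustomer);
                Customer customer = findCustomer.getCustomer();
                if (customer == null) {
                    return false;   // abort the remaining commands (point 14)
                }
                createOrder.forCustomer(customer);  // initialise this command's parameters
                return true;
            }
        });

        invoker.invoke(list);
    }
}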

No Comments »

January 6th, 2011
12:59 pm
Replacement PSU for Shuttle SD11G5

Posted under PC
Tags , ,

It’s a Delta Electronics SADP-220DB B.
A similar Delta model is here on their web site.
It appears to be the same as, or similar to, one sold by Delta for Cisco kit, as here in this auction selling them in the USA.

Excerpt here:-

Original Cisco Adapter by Delta Electronics

Model: EADP-220AB B    P/N: 341-0222-01

Input: 100-240V   Output: 12V (18A) 220W

compatible to DELL Optiplex SX280 GX620 USFF adapter other Compatible P/N MK394 Y2515 M8811 Y2515 D3860 D220P-01, ADP-220AB B ZVC220HD12S1
It occasionally comes up on eBay, and it may be possible to get a compatible one elsewhere – Shuttle don’t seem to be of any help; I have tried them before.
A picture of the PSU is here on Silent PC Review in their review of the SD11G5.

No Comments »

January 6th, 2011
10:57 am
Nebula DigiTV Issues

Posted under DigiTV
Tags , , ,

There are two main issues. The latest version (3.7.12) is needed to auto-tune correctly, as it deals with split NIT, a new Freeview broadcast feature.

However, there may be other issues with 3.7.12, so you can use an older version (e.g. 3.7.06) to get a working EPG and for normal watching etc.

This post has the details of the split NIT and EPG problems. An extract from the relevant comment is as follows:-

Yes. Use 05. It will work.
Please read the other threads if you want details. There are way too many threads being started on this same question as it is.
Use 3.7.05 to watch TV, make records, and have an EPG that works.
Use 3.7.12 ONLY to do a fresh Auto-tune as this deals with the “Split NIT” issue. (Not sure if that is relevant in Germany, so just stick with 3.7.05)


As at 1/1/2011, I had EPG issues. I switched to the latest V4 release, but it crashed during recording and proved rather a resource hog.

I then rolled back to 3.7.06 (I needed to restore a restore point, as it appears the registry settings in V4 blanked all the button labels in V3 – the registry settings are not removed even on an uninstall).

I then tried 3.7.10 and then 3.7.12 on top of that, and this seems to work fine, so I am sticking with it at present.

No Comments »