Archive for February, 2012

February 27th, 2012
9:32 am
Passing the “this” reference for a subclass from a self-referential generic superclass

Posted under Java

Update 7/12/22

Previously I implemented a composite iterator for iterating a tree of tags from a search on a PropertyTree implementation. Note that in that case the abstract base class for the composites was TagTree, and the concrete subclasses were TagList and Tag. As a Tag did not need a list, the list of children lived in the concrete TagList rather than in TagTree. TagTree therefore did not need generic references to self-referential subtypes in the way the example below does – in fact it did not need to be generic at all – and so it did not suffer from this problem. For further clarity, compare the example from Angelika Langer here. In her code the list is in the abstract superclass, which is why it does suffer from the problem and potentially needs the infamous getThis trick described below.

Original Post

Some generic classes are recursively self-referential, such as the following abstract class declaration for a tree node:-

public abstract class Node<N extends Node<N>> {

}

In this situation it is sometimes necessary, for example, to pass a reference to this from the superclass to a method which expects an argument of type N:-

public void methodA(N node) {
//do something with the passed node
}

public void methodB(N node) {
node.methodA(this); // error – incompatible types
}

In the above fragment, the error arises because this in the superclass has the type Node<N>, which does not match N.

Another related use case is a fluent API which returns a ‘this’ reference from each of its fluent methods. If a fluent method in an abstract base class needs to return ‘this’, the reference again needs to be of the concrete subclass type, so the same problem occurs.

The topic is discussed in detail in Java Generics and Collections, section 9.4 on the Strategy pattern. The section from the book may be viewed online at Flylib here.

Angelika Langer also discusses it in more detail in her Generics FAQ here.

In the above case, the problem can be solved with the so-called getThis trick. An abstract getThis method is defined in the superclass, and implemented in each subclass:-

Superclass

protected abstract N getThis();

Subclass

protected Subtype getThis() { return this; } // where Subtype is the concrete subtype of N

Now the superclass can obtain a type correct reference to this and pass it without error:-

public void methodB(N node) {
node.methodA(getThis()); // this time it works
}

As Angelika Langer points out in her examples, there can be other ways around this kind of problem depending on the situation, but sometimes you do need the getThis trick.
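
Pulling the fragments above together, here is a minimal self-contained sketch; Node, TreeNode and the fluent name method are illustrative rather than taken from the original TagTree code:-

public abstract class Node<N extends Node<N>> {

    private String name;

    // each concrete subclass returns a correctly typed reference to itself
    protected abstract N getThis();

    public void methodA(N node) {
        // do something with the type-correct node reference
    }

    public void methodB(N node) {
        node.methodA(getThis()); // compiles: getThis() has type N, unlike this
    }

    // a fluent method in the base class can also return the subtype
    public N name(String name) {
        this.name = name;
        return getThis();
    }
}

class TreeNode extends Node<TreeNode> {

    @Override
    protected TreeNode getThis() {
        return this; // legal here: this really is a TreeNode
    }
}

A caller can then write TreeNode root = new TreeNode().name("root"); and the fluent call keeps the concrete TreeNode type.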

No Comments »

February 21st, 2012
3:57 pm
CDI Interceptors not called on local call to service layer EJB method

Posted under CDI

When a service layer EJB method is called locally from within the same class, any CDI interceptors on the method are not invoked. This is standard CDI behaviour: interception only applies to business method invocations made through the bean reference (the container proxy), so a direct local call on this bypasses it.

I hit this issue when my OptimisticLockInterceptor (which converts OptimisticLockException to ChangeCollisionException) was not being called, and OptimisticLockExceptions were being propagated all the way up to the view layer – Mantis issue 120 details this.

The solution to this is to ensure that any required interceptors are annotated on methods at the point of call from an external client. If they then call another EJB method locally, any interceptors on the second method will not be called, but this is not a problem if the first method is correctly annotated. If the second method is also used externally, then its interceptors will be correctly invoked.
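
As an illustrative sketch of this (the @ConvertsOptimisticLock binding and the OrderService bean are made-up names – the real OptimisticLockInterceptor would be bound to whatever binding the project defines), the interceptor fires for the method the external client calls, while the local call it makes is not intercepted:-

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import javax.ejb.Stateless;
import javax.interceptor.InterceptorBinding;

// illustrative interceptor binding, assumed to be bound to OptimisticLockInterceptor
@InterceptorBinding
@Target({ElementType.TYPE, ElementType.METHOD})
@Retention(RetentionPolicy.RUNTIME)
@interface ConvertsOptimisticLock {
}

@Stateless
public class OrderService {

    @ConvertsOptimisticLock
    public void saveOrder() {
        // entry point called by an external client - the interceptor wraps this call
        updateStock(); // plain local call on 'this': the interceptor on updateStock() is NOT invoked
    }

    @ConvertsOptimisticLock
    public void updateStock() {
        // applied only when an external client calls this method directly
    }
}

If clients also call updateStock() directly, its own annotation ensures the interceptor is applied on that path.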

No Comments »

February 14th, 2012
3:11 pm
Using a Message Bundle to make Dialog sizes locale dependent

Posted under JSF

A common issue I have found is setting confirmation dialog widths correctly to hold the target message neatly.

With a Primefaces p:confirmDialog for example, the height auto adjusts but the width defaults to 300 pixels, and is set via an integer width attribute.

When supporting multiple languages, message sizes and sentence structure obviously differ, and it can be desirable to change the dialog width accordingly. Whilst a large width could be chosen which is likely to accommodate any language, this can look less than perfect due to the extra unwanted space.

One simple solution which I came up with is just to store the dialog width in the Message Bundle along with the message. This way, the dialog width will automatically adjust to the value set for the target locale when switching locales. The same idea could be used to set a width etc. via CSS as well.

Whilst I am in no way suggesting turning a message bundle into a style sheet, this is a clean solution for this particular simple case and others like it, especially as the width is set as a component attribute and not in CSS, so style sheet localisation is not a possibility. Style sheet/resource localisation is available in JSF 2, but is somewhat non-intuitive. (Core JSF 3 p113-114 opines that it is an unappealing solution, and hopes for a better implementation in a future JSF release.)
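
As a hedged sketch of one way to wire this up (the bean, the bundle var name msgs and the key confirmDelete.dialogWidth are all illustrative, and assume the message bundle is registered in faces-config.xml), a backing bean can expose the locale-dependent width so the dialog can use width="#{dialogSizes.confirmDeleteWidth}":-

import java.util.ResourceBundle;
import javax.enterprise.context.RequestScoped;
import javax.faces.context.FacesContext;
import javax.inject.Named;

@Named
@RequestScoped
public class DialogSizes {

    // e.g. messages_en.properties:  confirmDelete.dialogWidth=300
    //      messages_fr.properties:  confirmDelete.dialogWidth=380
    public int getConfirmDeleteWidth() {
        FacesContext ctx = FacesContext.getCurrentInstance();
        ResourceBundle bundle = ctx.getApplication().getResourceBundle(ctx, "msgs");
        return Integer.parseInt(bundle.getString("confirmDelete.dialogWidth"));
    }
}

Alternatively, if the bundle is exposed to EL, the width value could be referenced directly in the facelet, e.g. width="#{msgs['confirmDelete.dialogWidth']}".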

No Comments »

February 13th, 2012
2:42 pm
CDI/Weld fails to inject a reference with nested parameterised types

Posted under CDI

Weld 1.0 was failing with “WELD-001408 Injection point has unsatisfied dependencies” when nested parameterised types were present.

The referenced bean was correctly present and annotated.

I found that I could work around the problem by simplifying/removing some of the generics. This allowed the beans to inject, but also gave warnings about the use of raw types. For example:-

//Original code
    private @Inject TableCtrl<TreeNodePath<TaxonomyNode>, RowMetadata> selectionTable;

//was replaced with the following
    private @Inject TableCtrl<TreeNodePath, RowMetadata> selectionTable;

Attempts to use a CDI extension to find out more about what was happening did not reveal any more insight. This post discusses Weld performance and debugging and refers to a Stack Overflow post on the subject.

My solution to the problem (which I have also logged on Mantis) is as follows:-

  1. Injecting the bean into a raw type does not suffer from the problem.
  2. I therefore inject into a temporary raw type, and cast that to the correct type.
  3. Wrapping this in a method annotated with @Inject neatly solves the problem, as the method can take the raw types as arguments and assign them to the correctly parameterised fields inside the method.
  4. As all this is done in a method, warnings can be suppressed for the method. This is a tidy solution, as the method only has this specific purpose, and no other warnings in the class are incorrectly suppressed.
  5. This is far better than the original workaround, which involved hacking the generics back to non-nested parameterised types throughout – this meant hacking a number of classes. The current solution entirely isolates the issue and allows the desired generics to be used correctly everywhere.

An example of the correct generic declarations and the method used follows :-

public abstract class TreeBrowser<N extends TreeNode<T>, T extends Tree<N>, P extends TreeNodePath<N>>
                            implements Serializable, BreadcrumbCtrlEvent<N> {

    private CrumbComparator<N> crumbComparator;
    private BreadcrumbCtrl<N> breadcrumb;
    private TableCtrl<N, RowMetadata> treeNodeTable;
    private TableCtrl<P, RowMetadataStatusMap> trayTable;

    // The parameters are declared as raw types so that Weld can satisfy the injection points;
    // the unchecked assignments to the correctly parameterised fields are confined to this
    // one method, so the warnings are suppressed here and nowhere else.
    @Inject
    @SuppressWarnings({"rawtypes", "unchecked"})
    void injectWeldUnsatisfiedGenericBeans(CrumbComparator crumbComparator, BreadcrumbCtrl breadcrumb,
                           TableCtrl treeNodeTable, TableCtrl trayTable) {
        this.crumbComparator = crumbComparator;
        this.breadcrumb = breadcrumb;
        this.treeNodeTable = treeNodeTable;
        this.trayTable = trayTable;
    }

This solved the problem.

No Comments »

February 9th, 2012
12:01 pm
EJB methods – returning status codes vs throwing exceptions

Posted under EJB

Exceptions are hotly debated online, as is the question of whether to return a status code or throw an exception. There is no single right answer for all cases, but for the projects I am working on I am adopting the following.

  1. Many EJB methods return data, such as lists of entities, and returning a status code as well is a nuisance as you need to build a wrapper class for it.
  2. My present way forward is therefore to throw exceptions in the EJB methods, for example when you attempt to delete a non-existent row.
  3. Even though some methods like delete have no return and could return a status, others do return data and could not easily do it this way. I prefer to be consistent in my approach.
  4. Therefore, I will throw checked exceptions for these ‘important’ cases, such as non-existent data, that the caller should check for.
  5. If I am using a command pattern as per my post on this here, I will then catch these in the command objects and convert them to one of the status enum values I have proposed for my Command Pattern implementation (see the sketch after this list). This is an ideal place to do it: exceptions are not friendly to the command pattern mechanism if you want to continue from one, and I have already added the generic status enum idea.
  6. For cases when I do not use command patterns, a similar approach could be used by catching the exceptions in the model layer and converting to a status enum there. My model layer is stateful, and typically my fetch methods in the model populate result properties and then return void. These would be ideally placed to catch any exceptions from the service layer and return a status enum instead (or just return the enum status from a command if that was used).
  7. Coupling and isolation of the status enum need to be considered. Where the enum is defined in the model, the view/controller layer has no dependency on the service layer. However, if the enum were defined in the service layer, the view would then have a dependency on it as it uses that enum. This is (perhaps to a lesser extent) also true if the enum is defined in a command class, as these are also service layer classes. I may therefore decide to map them to a model-specific enum (which may be similar).
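
As a minimal sketch of point 5 (Command, CustomerService, NoSuchEntityException and the enum values are illustrative stand-ins for the real project classes, not the actual implementation):-

// illustrative stand-ins for the real interfaces and checked exception
interface Command {
    void execute();
}

class NoSuchEntityException extends Exception {
}

interface CustomerService {
    void deleteCustomer(long id) throws NoSuchEntityException; // service layer EJB method
}

public class DeleteCustomerCommand implements Command {

    public enum CommandStatus { OK, NOT_FOUND }

    private final CustomerService customerService;
    private final long customerId;
    private CommandStatus status;

    public DeleteCustomerCommand(CustomerService customerService, long customerId) {
        this.customerService = customerService;
        this.customerId = customerId;
    }

    @Override
    public void execute() {
        try {
            customerService.deleteCustomer(customerId);
            status = CommandStatus.OK;
        } catch (NoSuchEntityException e) {
            // convert the checked exception to a status enum so command processing can continue
            status = CommandStatus.NOT_FOUND;
        }
    }

    public CommandStatus getStatus() {
        return status;
    }
}

The caller can then branch on getStatus() rather than handling the checked exception itself.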

No Comments »

February 8th, 2012
8:21 am
Event Viewer slow on Windows 7

Posted under Windows 7

My Event Viewer is slow to open – it feels like 20-30 seconds or so.

This post here indicates that others have found the same thing.

The post has a workaround – copy eventvwr.msc from Windows\System32 on an XP system, and it will run the XP-style Event Viewer on Windows 7. This starts almost instantly, but of course it does not provide the nice error summary of the new version, so you can’t have it both ways!

Unlike the advice in the post, I copied the file to another folder rather than System32 (in my case one called System Management), as it is a separate addition, and renamed it EventVwrClassic.msc.

You can just make a shortcut to it to run it. Unlike the standard Windows 7 one, it will pop the UAC dialog asking you to approve the privilege increase, even if you set the shortcut to run as administrator.

No Comments »

February 3rd, 2012
1:11 pm
OCZ Vertex / ASUS P6T Flash Upgrade process & Issues

Posted under 64 Bit

This was done in order to resolve ongoing reliability issues with the OCZ Vertex and SATA port errors from the P6T as detailed here.

This post also follows on from my earlier post here concerning hot swap issues, where I went into some detail of the P6T flashing process. Reference should be made to that post for more details of P6T flashing.

The following steps were performed:-

  1. I identified the current firmware version of my OCZ Vertex. To do this, start Device Manager, find the disk under Disk drives, and open its properties. Then select the Hardware Ids property and the version number of the firmware will be shown. There were hints in the forums of a more detailed revision number, but I could not find it. My initial version was 1.5.
  2. I also tried to run the OCZ Toolbox, which allows identification and upgrade of the firmware, and also auto-downloads the correct new version. A nice idea if it had worked! In my case it failed to find the OCZ Vertex either before or after the upgrade process.
  3. I did note that a number of posts suggested switching the SATA mode to AHCI in both the BIOS and Windows. This OCZ post details the process, which (for the Windows part) simply involves changing the value of HKLM\System\CurrentControlSet\Services\msahci\Start in the registry from 3 (= IDE mode) to 0 (= AHCI mode). However, when I tried this at this stage, the system froze at boot time. Fortunately, I was able to switch the setting back in the BIOS, boot into safe mode, and set the registry value back to 3, to restore the system to a working state.
  4. The latest version for my Vertex was 1.7, but it is not possible to upgrade directly from 1.5 to 1.7 – it is necessary to upgrade to 1.6 first. Finding all the historical versions of the firmware is not easy on the OCZ site – the main download area only has the most recent version. Older versions may be found in a forum post here which has all the historical downloads.
  5. For my Vertex, here is the 1.6 upgrade and here is the 1.7 upgrade.
  6. Note that as per the 1.6 upgrade, I had the ‘filename error’, which meant that I had an older version needing a destructive upgrade to 1.6 due to a change in the NAND/wear-levelling algorithm. This would mean loss of all data on the drive and a restore afterwards, and the use of a different kit for upgrading. The upgrade also could not be done with the drive in use as the system disk. The links in the previous step give all the instructions and the alternative versions of the upgrade.
  7. I then made 2 full backups with Acronis, plus another daily file backup, to be sure I would be able to restore the drive.
  8. As per the instructions, I jumpered the drive to set it into standalone/upgrade mode, and booted from the previous version of Windows 7 which I still had available on a standard hard drive. I kept this available just in case, and on this occasion I was very glad I did.
  9. The destructive upgrade is available as 16_win_vertex.zip. Unfortunately the zip contains 2 versions of the update, 641102VTX.exe and 661102VTXR.exe. There are no release notes to say which version to use or what they are(!). I wondered if they related to the internal firmware/drive version from which you were upgrading, but I didn’t know that either, as I have said earlier. Another forum post hinted that, as might be expected, 661102VTXR.exe was a later version and the ’R’ indicated a revised release. I decided to try this one.
  10. After I had jumpered the drive and rebooted, it correctly identified itself as YATAPDONG BAREFOOT, and I proceeded with the upgrade, which completed successfully.
  11. I then immediately upgraded from 1.6 to 1.7 using 17_updaters_1.zip. This time, a standard non-destructive upgrade could be done (not that it mattered in my case), which meant burning an ISO to a CD and booting that to do the upgrade. The zip contained ISOs for a number of OCZ products plus some release notes this time, so I burnt the ISO for the Vertex. I booted it and did the upgrade, which went with no problems. One point is that I cannot recall whether I removed the jumper on the drive before or after this final upgrade. I am fairly sure that it was after, in which case the upgrade does not mind whether the jumper is present, but if there are any issues it should be tried without the jumper present, as would be normal for a non-destructive upgrade.
  12. Following this, I booted the system. Initially it could not see the Vertex, so I rebooted into the bios, found it was then visible, and did a bios save and another reboot. This time the drive was visible but not formatted.
  13. I then booted Acronis from its rescue CD and restored the backup, plus the Master Boot Record. This proceeded normally.
  14. After rebooting, the system came up and ran fine; however, it presented the boot manager from the other disk, and the system had to be selected from there. To prevent the need for this, I copied the boot files back to the new SSD using bcdboot as per this post here. It was also necessary to reset the boot order in the bios so that the SSD was chosen first, otherwise the boot manager is still presented even after running bcdboot.

 

Now that I had a working system with the updated SSD, I decided to reflash the P6T as well. Previously I had had hot swap and SATA issues with later bios versions, as detailed here. However, since then I had disabled the onboard JMicron controller which was giving problems, and replaced it with a Startech controller, as detailed in the update to this post here. Therefore, the original issues which were preventing the use of a later bios were no longer present, so I opted to flash to the latest version, which at the time of upgrade was version 1408:-

  1. I used the in-bios EZ-Flash, and the process and precautions are detailed in the original section of this post here.
  2. I re-applied the required motherboard settings as per the process, and rebooted successfully.
  3. I then retried switching into AHCI mode, as detailed above when I tried it prior to the upgrade. This time it worked correctly with no problems. In the bios, I just set the onboard SATA to AHCI. The JMicron controller was disabled, and the Startech controller was not touched as this was an add-in card just used for SATA backups. It is possible that now either the onboard ICH10 SATA or JMicron SATA might work and hot swap reliably in eSATA mode with AHCI enabled, but as I was happy enough with the Startech I decided to leave this issue well alone for now.
  4. As a final test, I re-ran the Windows Experience tests to see if AHCI had improved performance, and indeed the primary hard disk figure had increased from 7.1 to 7.3, which is good considering that my original OCZ Vertex is now old technology which has been set to End-Of-Life by OCZ. The system certainly felt snappier, but this is highly subjective as I had not run before and after benchmarks – the goal of the whole exercise was reliability rather than speed.

No Comments »