Blog Archives

April 16th, 2010
2:28 pm
Sage Instant Accounts – “Username in use” error

Posted under Sage Accounts

On login to Sage Instant, you may get the following error:-

username is in use - the program cannot connect you at this time

 

Sage then exits and will not let you log in.

To fix this problem, open the ACCDATA subdirectory under the Sage Instant installation directory and delete the file QUEUE.DTA. You should then be able to log in correctly.
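For example, from a command prompt, something like the following (the installation path is an assumption – a typical default – and may well differ on your system). It is probably wise to make sure Sage is closed everywhere before deleting the file :-

rem Path below is a typical default install location - adjust to your own Sage Instant installation
del "C:\Program Files\Sage\Accounts\ACCDATA\QUEUE.DTA"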


March 9th, 2010
5:28 pm
Asus P6T eSATA Hot Swap issue

Posted under 64 Bit

Update 4/4/2010

Whilst hot detection has been working, I was unable to run a backup as Acronis gave errors. A disk check with Windows intermittently failed to complete on one of my drives. When I disk checked the same drive afterwards via USB, it checked out with no problem.

I’ve finally taken the decision to dump the internal eSATA on the P6T. Whilst my PC is an Arbico one under return to base warranty, I will not be returning it. I may have a hardware fault, but so many other people are reporting problems with the P6T and eSATA that it is impossible to be sure. I do not wish to be without my PC for a week or 2 only to have it returned with the problem still present.

I have disabled the JMicron controller in the Bios, and installed a StarTech 2 Port PCI Express eSATA card (Silicon Image 3132 chip). I bought mine from CCL Computers for £20.78 including VAT and postage, so not much more than it would have cost me to courier my PC back to have it looked at under warranty, and it took me 10 minutes to install. So far this is performing flawlessly. When installing on Windows 7, Windows Update found the driver and auto installed it – the bios on the card was 7.4.05, and Windows installed version 1.0.15.3 of the driver. This version did not appear to be available when I searched the Silicon Image site – the CD with the card, and StarTech’s site, both had version 1.0.15.0 of the driver, so I stuck with what Windows installed and had no issues with it.

Having already used a card with a Silicon Image chip, I immediately installed HotSwap! as this allows safe removal of the drive. As installed, the driver does not enable safe removal in Windows, but does enable write caching on the drive, so HotSwap! is recommended. After installing HotSwap!, remember to ctrl/click the systray icon – this will allow you to specify the devices which are to be excluded from hotswapping, such as your fixed hard drives. HotSwap! is designed for Silicon Image chipsets and can optionally spin down an eSATA drive on removal. It can also scan for hardware changes if you have hot detection problems (I did not). The only slight issue is that it will not autostart in the tray under Windows 7, as it gets a UAC prompt to continue when it starts. You can get around this by making it start as a scheduled task with high privilege, as sketched below, but I just leave it on the quick launch bar or superbar and start it manually when I need it, and put up with the extra click.
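As a sketch, such a high-privilege logon task can be created from an elevated command prompt along the following lines (the task name and the HotSwap! install path are assumptions – adjust them to your own setup). I have not used this myself as I start it manually, so treat it as a starting point only :-

rem Task name and HotSwap! path below are examples only - adjust to your own installation
schtasks /Create /TN "HotSwap" /TR "C:\Program Files\HotSwap!\HotSwap!.exe" /SC ONLOGON /RL HIGHEST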

Update 10/3/2010

After reverting the Bios back to 1004 as described at the end of this post, a couple of reboots later the PC failed to detect a hot insertion again. I reverted the JMicron driver back to 1.17.53.0, the version used by the poster referred to below, as this was cited as stable, but still no joy. I tweaked the Bios to disable the floppy drive (on by default) as I don’t have one, resaved and booted, and then it all worked again. After 5-10 more warm and cold boots, it is still hot detecting fine. I don’t think the floppy controller itself was the issue, but rather the fact that I went in, changed and resaved the Bios settings, as this has kicked it into life in the past.

My final conclusion – it does run stably now, but I wouldn’t call it rock solid – it still won’t detect an eSATA drive present at boot time, and I’ll need to keep an eye on things to see if the detection problem resurfaces. If it does, I suspect just entering Bios setup and resaving the Bios settings after a small change (or even no change) will sort it!

Original Post 9/3/2010

My P6T was delivered with bios revision 0904, and as delivered would not hot-recognise or safely remove an eSATA drive (safe removal was off, and write caching was off, which obviously impacted performance). I installed the latest JMicron jmb36x controller driver (1.17.55.0, 27/01/2010), and this enabled write caching and safe removal via the system tray. Note that whilst the P6T has both jmb363 and jmb322 controllers listed in the spec, the controller under Windows lists as a jmb363, and this driver appears to handle both of the above chips, therefore the jmb36x driver is the correct one. Upgrading the driver in my case enabled eSATA hotswapping/safe removal etc. on both the rear eSATA port on the P6T (jmb363) and my front port connected to SATA_E1 (jmb322). However, hot recognition was very intermittent (even with a device rescan) and the only way to consistently recognise the drive was to have it plugged in at boot time, enter the bios, exit the bios and boot. It would then recognise the drive at boot time once only, and could be safely removed (but would need booting to recognise it again).

This post states that bios version 1004, plus the JMicron driver I was using, sorted the problem. As there was now also a later bios version, 1201, I reflashed the P6T to this version. I used the flash utility in the Bios itself as this was convenient, and it recognises USB flash drives to read the new bios from. There is also a recovery process if it crashes during flashing, which involves booting with the target Bios in the root of a USB flash drive, in which case it auto flashes (see the manual). However, this recovery process only works with small FAT32/FAT16 flash drives of less than 8GB, so I prepared a 2GB flash drive especially in case this was needed. Note also that the Bios flash utility does not do long filenames and so converts them to DOS 8.3 format – if you have multiple Bios file versions on the drive this may mean you won’t be able to tell which is which, so rename the files to sensible short names before you start, and save yourself another reboot! Also be sure to unzip them.

It turned out that version 1201 did not work at all with eSATA, so it appears that Asus fixed this in version 1004 and then broke it again in 1201 – doh! I even tried a CMOS reset with 1201 just to be sure, but it still did not work. With the drive present at boot time, it just froze during the JMicron controller drive recognition phase, just prior to starting windows.

I reflashed back to 1004 and hotswapping all worked fine, both recognition and safe removal, including multiple times. However, the drive would not be recognised at boot time if present, and would still freeze as before. As my real goal was true hotswapping, I was happy to live with this issue as hotswapping was fine – I just plugged the drive in after booting.

Prior to clearing the CMOS, I took shots of all the Bios screens with the old settings in, to be sure I did not miss any custom settings, as the PC had been built by a custom builder, Arbico. Whilst there is a Bios feature to allow saving and restoring of CMOS settings to a flash drive, there was the concern that reloading old settings could introduce old data which did not match the new Bios, so I wanted to be sure to load default settings and tweak from there.

The screen shots of my old settings may be downloaded in this zip file. Afterwards, I manually made the following changes after clearing the CMOS and loading new defaults :-

Menu: AI Tweaker, Setting: Ai Overclock Tuner, Value: X.M.P.

Menu: Power options, Setting: boot via keyboard, Value: set to Ctrl/esc

Menu: Power options, Setting: Power On via PCIE devices, Value: enabled
(This setting was needed to allow Wake On Lan to work correctly)

Menu: Boot/Boot Device Priority :-
1 SATA: PM-SAMSUNG HD
2 CDROM:SM-Optiarc D]
3 DISABLED

Menu: Boot/Boot Configuration settings, Setting: Full Screen Logo, Value: disabled

Menu: Advanced/On Board Devices Configuration, Setting: High Definition Audio: Value: disabled
(Onboard audio was disabled as I was using a separate sound card)

Menu: Advanced/USB Configuration, Setting: Legacy USB Support, Value: Enabled

Menu: Tools, Setting: Asus Express Gate, Value: Disabled


March 1st, 2010
11:40 am
Multiple Versions of Office Under Windows 7 64 bit

Posted under 64 Bit

According to Microsoft here, you can do this if you install them in increasing order, i.e. you must install the earliest version first.

In my case, Office 2000 would not play ball at all, as Word 2000 froze on startup.

Office XP ran OK, and does seem to coexist with Office 2007, but there are some issues.
I had to mess around with compatibility mode for XP, but ended up taking it off as it appeared to try to run Word XP in administrator mode even though this was not enabled – strange. I ended up not needing it, and Word XP started OK but was initially a bit flaky, as mentioned.

Two issues remain outstanding, but they are possible to live with:-

1/ Double clicking a .doc document always opens Word 2007, even if you try to change the file association. I had it working correctly for a little while (loading up Word XP for .doc files and Word 2007 for .docx files), but Word 2007 ended up taking over. This may have had something to do with the next point – ‘re-registering’.

2/ Both versions wanted to reconfigure themselves each time you switched from running one version to the other. Even on a fast PC this takes 10-20 seconds of thrashing with a configuration dialog displayed – not nice, although Microsoft say that this is normal behaviour if you have multiple versions (Word XP did its configuration a lot quicker). This post describes the way around this, by telling both versions of Word not to ‘rereg’ themselves, by setting the following registry keys :-

HKEY_CURRENT_USER\Software\Microsoft\Office\12.0\Word\Options\NoReReg = 1 (DWord) – for Word 2007
HKEY_CURRENT_USER\Software\Microsoft\Office\10.0\Word\Options\NoReReg = 1 (DWord) – for Word XP
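For convenience, both keys can be applied in one go from a .reg file along the following lines (a sketch of the same two settings listed above) :-

Windows Registry Editor Version 5.00

; Stops Word 2007 and Word XP re-registering themselves (same keys as listed above)
[HKEY_CURRENT_USER\Software\Microsoft\Office\12.0\Word\Options]
"NoReReg"=dword:00000001

[HKEY_CURRENT_USER\Software\Microsoft\Office\10.0\Word\Options]
"NoReReg"=dword:00000001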

This fixed the problem, although it is not entirely recommended as it may cause issues during fixes/updates that may need the ReReg to be run. I would imagine that removing the keys temporarily and re-adding them would solve this.

If I was in this situation again, I would certainly try to avoid installing multiple versions on the same OS – it is clearly not supported well. It would be best to use another machine or a VM.


February 20th, 2010
1:33 pm
Imedia Embedded Linux – xorg.conf configuration issues

Posted under Linux

These issues came about when trying to change the screen resolution/refresh rate on my diskless LinITX Jupiter PC.
This simple exercise became a real nightmare with a really steep learning curve. However, as they say, experience is what you get when you are looking for something else! The full details of this saga are as follows. Motherboard documentation and sample xorg.conf files are just below.

Resources

CVCM2 Motherboard Manual
my xorg.conf for ADI Microscan G910

my xorg.conf for Samsung Syncmaster 2443BW

Hardware Configuration

LinITX Jupiter Fanless Diskless PC
Boot from LAN, USB or internal IDE socket.
Motherboard: CV7CM2
CPU: VIA C7 1Ghz Processor
Chipset: VIA CN700 Northbridge and VT8237R Southbridge
Memory: 1GB DDR2 RAM Installed
LAN: VT 6103 10/100 Fast Ethernet
IDE: One 40pin IDE connector – suitable for use with a 40pin FDM module
USB: Four Rear USB ports and Two Front USB ports
Audio: AC’97 Compliant Audio
BIOS: Award BIOS
Power: Internal DC-DC power supplied from an external AC/DC Adapter (12V 5A DC)
Dimensions: 213mm x 200mm x 45mm

EDC 4000 2GB 40 pin IDE flash drive

Monitor :-
ADI Microscan G910 (original configuration)
Samsung Syncmaster 2443BW (later replacement)

Software Configuration

Imedia embedded Linux 6.0.4
Kiosk build selected
All packages selected (everything installed in addition to Kiosk build)

Configuration Issues

1/ Debugging/usage Tips :-

a/ Details of all the login accounts are posted in the library.

b/ The startup can be interrupted via ctrl/C when the display says “entering run level 3”.
Note that continued pressing of ctrl/C is required to be sure it catches it.
This will then give you an mms prompt.

c/ mms is the autologin user. This is set up in /etc/inittab via the --autologin=mms switch.
XFCE is then fired up via the user's profile ".profile" in /home/mms.
Note that there are a number of such setup files in the home directory, but the ".xxx" syntax indicates that they are hidden.
You must turn on viewing hidden files to see them.

d/ You can exit from XFCE and restart it without a reboot, using "startx". Note that you might expect to be able to just say "X",
but this does not work – I presume it does not use the same startup script!

e/ You can switch to another login (e.g. root) from any terminal prompt by saying e.g. "su root".
You can then start XFCE as root (but BE CAREFUL – Linux is easier to trash than Windows when fully privileged).

f/ For iMedia Linux with flash, the log files are under a ram disk called /var/log.
The logs are rotating files so that they do not fill the disk.
To read them you must use cat (NOT cp) and/or copy them to another file via cat, e.g.
cat Xorg.0.log > X.log
(followed by a ctrl/d or ctrl/c to terminate as cat also reads from standard input)
Then the resulting file can be copied with cp or viewed with nano/leaf.

g/ For viewing/editing, nano is good in a terminal window as it is modeless and notepad-like;
leaf is good when in XFCE.

2/ After installation, the system would boot to a shell prompt only and would not start XFCE.
The solution for this is documented on the iMedia forums here: http://forums.imedialinux.com/index.php?topic=62.0

We had many reports and complaints about graphical mode not working.
Sorry guys for that, we are trying to maintain configuration files for each and every architecture, but in time this proved to be a real pain in the ..
Because of hardware differences, many times users could not use the X graphical mode at all.
That’s why, in the latest release I created a simple bash script that will force X to auto detect its settings, overwriting everything we had in the xorg.conf file.
For those of you who are not so lucky and are prompted with a shell prompt, stopping autologin is pretty easy. As soon as you see “Entering level 3” start pressing the CTRL+C key till you get a shell prompt.
Then go to Console 2 or su as root and run xorg-auto-config
NB This causes xorg.conf to be overwritten so save it first!
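For example, assuming the configuration lives in the usual /etc/X11 location (an assumption – check where your build keeps it), something like :-

# Back up the existing file first, then let the script regenerate it
cp /etc/X11/xorg.conf /etc/X11/xorg.conf.bak
xorg-auto-config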

This auto configuration allowed XFCE to start, but with a low refresh rate.

3/ VBEModes was enabled in the "Device" section, to enable the Unichrome hardware acceleration :-

Option      "VBEModes" "true"

This improved the resolution, but the refresh rate was still low (56Hz or 60Hz).

4/ The correct monitor definition section was then added. The HorizSync and VertRefresh values are important here.
I had to google them for my monitor – the defaults are very low.
(Note that turning off DDC with…

Option  "noddc"

or similar had no effect on increasing the refresh rate.)

Section "Monitor"
Identifier   "Monitor0"
VendorName   "ADI"
ModelName    "ADI Microscan G910"
HorizSync    30-110
VertRefresh  50-160
Option       "DPMS" "true"
EndSection

5/ I added display modes (and a DefaultDepth to set the colour depth that XFCE uses) to the Screen section as follows.
Note that these modes are names which are known internally. I think that it uses a set of VESA modes,
and allows additional ones to be added here. I’m not quite sure what is going on, but according to the log file,
XFCE needed at least one of these to stop it ‘tripping up’, as it did not find a decent VESA mode/resolution for 1280×1024.
Some forums use e.g. "1280x1024@85" instead of "1280x1024_85" – it looks like both syntaxes work, but I have not confirmed this.

Section "Screen"
Identifier "Screen0"
Device     "Card0"
Monitor    "Monitor0"
DefaultDepth 32
SubSection "Display"
Viewport   0 0
Depth     8
Modes     "1280x1024_85" "1152x864_85" "1024x768_100" "800x600_100"
EndSubSection
SubSection "Display"
Viewport   0 0
Depth     16
Modes     "1280x1024_85" "1152x864_85" "1024x768_100" "800x600_100"
EndSubSection
SubSection "Display"
Viewport   0 0
Depth     24
Modes     "1280x1024_85" "1152x864_85" "1024x768_100" "800x600_100"
EndSubSection
SubSection "Display"
Viewport   0 0
Depth     32
Modes     "1280x1024_85" "1152x864_85" "1024x768_100" "800x600_100"
EndSubSection
EndSection

Following this, I had a reasonable choice of resolutions and refresh rates within XFCE.

6/ My final choice (in XFCE) was 1280×1024 @ 86Hz, with a default depth of 32 set in xorg.conf.
I chose 96DPI in the XFCE settings and this gave an excellent display on a 19″ monitor.
75DPI gave more detail on the screen but as I intended to use a CAT5 KVM remote control (i.e. somewhat fuzzy),
I stuck with 96DPI.

7/ Keyboard internationalisation was a pain – I first set the correct keyboard settings here :-

Section "ServerLayout"
Identifier     "X.org Configured"
Screen      0  "Screen0" 0 0
InputDevice    "Mouse0" "CorePointer"
InputDevice    "Keyboard0" "CoreKeyboard"
EndSection

… then this later

Section "InputDevice"
Identifier  "Keyboard0"
Driver      "kbd"
Option      "CoreKeyboard"
Option      "XkbRules" "xorg"
Option      "XkbModel" "pc105"
Option      "XkbLayout" "gb"
EndSection

However, this appeared to be ignored and I still had a US keyboard with some keys crossed over etc.
I then discovered that by default, “hotplugging” was enabled, which caused the above settings to be ignored!
Whilst there appeared to be ways of telling hotplugging what the keyboard was, I elected to turn it off as follows :-
(This went right at the top of xorg.conf)

Section "ServerFlags"
Option "AutoAddDevices" "false"
EndSection

As a bonus, I noticed that with hotplugging off, I had less trouble with the mouse after KVM switching to the box.
This was presumably because, with hotplugging on, it tried to reconfigure devices as it thought I had hotplugged something.

8/ The keyboard is still US when in a terminal window, either inside or outside XFCE.
It appears that this can be set up here :-
/etc/sysconfig/console
This may also feed into xorg.conf hotplugging automatically, should you want to turn that on, but I am not sure.
I have yet to try this out.
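On some other distributions this type of file takes an entry along the following lines – I have not verified the exact variable name that iMedia uses, so treat this purely as a guess to investigate :-

# Guess only - variable name not verified on iMedia
KEYMAP=uk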

9/ Since my original configuration, the monitor has been upgraded, so I have reconfigured xorg.conf. As the CVCM2 motherboard graphics chipset will not do 1920 x 1200, I set the resolution to the maximum I could that looked good, which was 1680 x 1050. For this version, I needed to take the refresh rates out of the Modes option, thus :-

Section "Screen"
Identifier "Screen0"
Device     "Card0"
Monitor    "Monitor0"
DefaultDepth 32
SubSection "Display"
Viewport   0 0
Depth     8
Modes     "1680x1050" "1440x900" "1600x1200" "1280x1024" "1152x864" "1024x768" "800x600"
EndSubSection
SubSection "Display"
Viewport   0 0
Depth     16
Modes     "1680x1050" "1440x900" "1600x1200" "1280x1024" "1152x864" "1024x768" "800x600"
EndSubSection
SubSection "Display"
Viewport   0 0
Depth     24
Modes     "1680x1050" "1440x900" "1600x1200" "1280x1024" "1152x864" "1024x768" "800x600"
EndSubSection
SubSection "Display"
Viewport   0 0
Depth     32
Modes     "1680x1050" "1440x900" "1600x1200" "1280x1024" "1152x864" "1024x768" "800x600"
EndSubSection
EndSection

Taking out the refresh rates may have worked better in the original file with the G910, as it may have auto detected the highest refresh rate it could do based on the given horizontal and vertical frequencies. However I have not tried this.
Links to the xorg.conf files (together with the motherboard manual, or at least the best I could find) are at the top of this post. Note that my original xorg.conf for the G910 has an incorrect monitor name in it – this did not appear to cause a problem so I presume the field is just there for documentation purposes.


January 11th, 2010
9:52 am
How to Pin shortcuts to taskbar/enable quick launch bar in Windows 7

Posted under Windows

Some desktop shortcuts cannot be pinned to the new taskbar/superbar in Windows 7, for example a desktop shortcut to a url.
You can enable the old quick launch toolbar to do this as follows :-

On the taskbar, right click and pick Toolbars/new toolbar…

  1. In the Folder box at the bottom, paste in the following :-
    	%userprofile%\AppData\Roaming\Microsoft\Internet Explorer\Quick Launch
  2. Click Select Folder to save the change
  3. Note that the quick launch bar may end up to the right of the superbar. Dragging it to the left seems to be a fine art! I googled for a while and could not find out how to move it. In the end I discovered that the speed of dragging is very important here. It should be dragged by left clicking the dotted lines, or by left clicking just to the right of them so that the cross arrow appears. Click, hold and drag with a swift motion over the other bar, then drop, and this should work. If you are too slow with the drag it will not work and will just hit the adjacent bar. In my case, dragging the dotted lines seemed to work in either direction (i.e. you can either drag the superbar over the quick launch bar or vice versa), but dragging with the crossed arrow only seemed to work rightwards. I did read a blog post that advocated unpinning everything from the superbar first, but this was not required – in the end it all comes down to the dragging technique. Once you have tried a few times and got it working it becomes second nature, but it is certainly very counter-intuitive!
  4. You may also find that the quick launch bar disappears on reboot. You can re-add it as in step 1 above, and you should find that its contents are still there as before. This post details how to modify the registry to remedy the disappearance problem, but this post on sevenforums states that you only need to make sure that the current theme is saved (right click the desktop, select Personalize, then right click and save the current theme). I’ve tried saving the theme and will see if it still disappears, and then try the registry mod.

This will also give access to the hidden 3-D Window Switcher that was in Windows Vista.

Full details of this may be found here.


January 8th, 2010
9:30 pm
JSF 1.2/Facelets event & state handling

Posted under JSF

The issue here came about when trying to propagate click events from a child table pair tag to its parent facelet page. The table pair contained buttons to move rows between the tables, and to reorder rows on the target table – a typical design pattern used in this case to assign roles and rights to a user. The table pair tag is responsible for all the buttons for this function – move row selection left/right, move all left/right, reorder target row selection up, reorder target row selection down.

My general design in this situation is for the backing bean structure to mirror the objects on the page. There is therefore an overall page bean which has a child table pair bean for the table pair facelet tag, which in turn has 2 child table beans for its subsidiary facelet table tags. Whilst the parent facelet page does not need to know about the above button logic (it just pulls the result rows out of the child beans e.g. when a save is done), it does need to know when the child table pair has caused the page to be dirty, i.e. to have unsaved changes, which comes about when one of the above buttons is clicked.

My ideal solution to this would be to add an additional action listener on the table pair buttons – the main one is handled by the table pair bean, and an additional one on each button calls a single method on the overall page bean to notify that the page has changed. The parent does not need to know which button was clicked or why, but merely to know that a change has occurred. However this idea is scuppered in JSF 1.2 by the fact that a secondary f:actionListener tag on a button does not accept a method binding like the primary actionListener, but takes a fully qualified class name which must implement the ActionListener interface. This is a right pain as it is a class and not even an instance – you cannot tie the event to a particular bean.

My next attempt was to simply inject a parent reference in the TablePair bean in faces-config.xml, however this fails because JSF does not allow cyclic references in faces-config.

My final solution was therefore to pass a value binding for my own ChangeListener interface on the parent bean to the TablePair tag as a parameter, and declare this on each button using f:attribute. The TablePair bean then fetches the bean reference from the attribute when its actionListener is fired and calls the notify method on the reference’s Change Listener implementation. (I could also have used setPropertyActionListener to simplify this but an ICEfaces bug prevented this working – setPropertyActionListener was being called after the actionListener so the latter only ever saw a null value). The following code fragment illustrates what was used to pull the reference out of the attribute (note that this is incomplete and imports etc. are missing) :-

// Calling code (fragment - this sits in the TablePair bean's actionListener method)
  private static final String CHANGE_LISTENER_ATTRIBUTE = "changeListener";
  ...
  ChangeListener changeListener =
    JSFUtil.fetchComponentAttribute(actionEvent, CHANGE_LISTENER_ATTRIBUTE);
  if (changeListener != null)
    changeListener.notify(this);
...
// Utility Class
import java.util.Map;
import javax.faces.component.UIComponent;
import javax.faces.event.ActionEvent;

public class JSFUtil {
  // Fetches a named attribute from the component that fired the event,
  // cast to whatever type the caller expects.
  @SuppressWarnings("unchecked")
  public static <F> F fetchComponentAttribute(ActionEvent actionEvent, String attribute) {
    UIComponent component = actionEvent.getComponent();
    Map<String, Object> attributes = component.getAttributes();
    return (F) attributes.get(attribute);
  }
}
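For completeness, the button declaration inside the table pair tag might look something like the following sketch – the bean names, the button label and the exact EL are assumptions rather than the actual project code. The parent page passes its ChangeListener implementation into the tag as a parameter, and the tag attaches it to each button via f:attribute :-

<!-- Sketch only: tablePairBean and pageBean are hypothetical names -->
<ice:commandButton value="Move selected" actionListener="#{tablePairBean.moveSelectedToTarget}">
    <!-- changeListener is the parameter passed in from the parent facelet page -->
    <f:attribute name="changeListener" value="#{pageBean}"/>
</ice:commandButton>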

When propagating changes like this, it is important that the design does not cause recursive notifications, for example where a change gets passed up from one child to a parent, back down from the parent to all its children, who send it back up and so on. To this end, I therefore distinguish between the following concepts to avoid this happening:-

  1. Action Notification Events are only propagated upwards from children to parents, and are only raised in respect of external actions,  for example when a button click in a child tag causes a change which the parent needs to know about. ActionNotifications are never raised as a result of a property change on a component which could have been initiated by a parent. For clarity, if an external action causes a property change and needs to propagate this upwards, it is the fact that an external action occurred which is being propagated, and not the fact that a property has changed, i.e. in JSF terms it is the result of an action event and not a value change event. This means that whilst such an external action will pass an event up as well as changing a property, if a parent directly changes the same property, no event will be propagated up.  A child cannot directly change properties in its parent so it needs this mechanism to tell the parent “something external such as a button or link click has happened in me which you need to be aware of”. Typically this will be because a click action on the child has made the whole page dirty and the parent therefore needs to change component state throughout the page to reflect this (for example to enable the save and discard buttons and disable navigation from the page). As notifications only travel upwards, recursive notifications cannot therefore occur.
  2. Property Changes are only performed by a component on itself or on its child components. On itself, it can change any of its internal properties. On a child component, it can change any of the publicly changeable properties of the child. As stated above, such changes will never give rise to action notification event propagation. However, property changes can certainly propagate downwards as required.

The state determination of a JSF page or a custom component can become complex when a number of components need to be, for example, selectively enabled and disabled depending on various property combinations. If the logic to determine the state is scattered throughout the component’s managed bean, or even worse, is coded in value expressions on a JSF page, it is hard to make sense of the state determination logic, easy to miss logic that is required, and easy to break the state determination when the code is changed. Therefore, I always use a private changeState method on a bean to perform all the state determination and changes. Typically this will involve a ladder of if..else based logic which checks properties on the bean, and then for example sets or clears various enable/disable properties for the JSF components. In State Machine terms, the various properties of the bean are inputs to the changeState method, and the method uses these inputs to determine a new state for the bean, setting various properties for the new state.

I am not fussy about the precise implementation of this – in complex cases, it may be worth using a state machine/state table driven approach to determine state, and maintain a ‘current state’ for the bean which is independent of the bean’s properties, but for most normal cases this is overkill. What is important is to use an implementation which is clear, correct and easy to maintain, and where the mechanism is easy to understand and flows naturally from the state change requirements.

This method should be called whenever a change happens to any of the inputs which the method uses to determine state, and therefore will normally be called many times throughout the bean. If AOP is in use, this might be a good candidate for using a pointcut and advice to dynamically insert the calls, for example on various property changes. In typical cases, I end up with a dirty flag on the bean which indicates that something has changed, and the changeState method checks the flag and sets various enable/disable properties as required. It is therefore often worth having an overloaded version of changeState which allows new values of such inputs to be passed in and set, and then the state change logic performed, in a single call. A minimal sketch of this pattern follows.
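As an illustration only, a skeleton of this pattern might look something like the following – the property and method names are assumptions rather than code from the actual project :-

// Sketch of the changeState pattern described above (names are hypothetical)
public class TablePairBean {

  private boolean dirty;                        // input: has anything changed on the page?
  private boolean saveDisabled = true;          // output: state of the save/discard buttons
  private boolean moveSelectedDisabled = true;  // output: state of the 'move selected rows' button

  // Overloaded version - set an input and recalculate the state in a single call
  private void changeState(boolean dirty) {
    this.dirty = dirty;
    changeState();
  }

  // The single place where all component state is determined from the bean's properties
  private void changeState() {
    saveDisabled = !dirty;
    moveSelectedDisabled = !anyRowsSelected();
  }

  private boolean anyRowsSelected() {
    return false; // would interrogate the source table's selection in the real bean
  }

  // Properties referred to directly by disabled attributes on the page
  public boolean isSaveDisabled() { return saveDisabled; }
  public boolean isMoveSelectedDisabled() { return moveSelectedDisabled; }
}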

As mentioned earlier, I specifically do not perform any such logic on a JSF page. The presence of logic in value expressions on a JSF page is a warning bell as it may indicate that either MVC controller logic or worse,  business logic has been placed on the page. As an example, my TablePair bean allows the ‘move selected rows to destination’ button to be clicked if any rows have been selected in the source table – otherwise the button is disabled. It would be possible to code an ‘anyRowsSelected’ property on the bean, and call this property directly from the disable attribute on the JSF page. Whilst this is a simple example, I would contend that this effectively places a controller logic decision in the MVC view. The correct way would be for the disable property on the button to directly refer to an enable/disable flag on the bean, and for the changeState method in the bean to call anyRowsSelected as a method, and set the enable/disable flag as required. This removes all evidence of the logic from the view, and also means that if the logic changes, for example if other conditions can occur on the bean which affect the button state, the change is made in the correct place – only in the changeState method on the bean. The JSF page would not need to be touched and has no knowledge of this.
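On the page itself, the difference is simply which expression the disable attribute points at – a sketch, assuming an ICEfaces command button and the hypothetical bean above (in the first variant, anyRowsSelected would have to be exposed as a bean property) :-

<!-- Warning bell: controller logic placed in the view -->
<ice:commandButton value="Move selected" disabled="#{!tablePairBean.anyRowsSelected}"/>

<!-- Preferred: the view refers only to a flag maintained by changeState() -->
<ice:commandButton value="Move selected" disabled="#{tablePairBean.moveSelectedDisabled}"/>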


January 8th, 2010
9:28 pm
ICEfaces components missing from Eclipse Palette

Posted under Eclipse

I had an Eclipse Galileo workspace containing several projects, and for some reason some of the projects did not show a full palette of components when opening a jspx file in the Web Page Editor. The main culprit was the ICEfaces component set, which was missing. The workspace as a whole was clearly OK, but some projects had become broken. I now believe that some changes I made to user library locations may have caused the problem, but I am not sure about this.

Firstly, I visited the JSF project facets screen by right clicking the project in the explorer and selecting Properties to open the project properties page, then opening the  Project Facets entry in the left hand pane. I removed (unticked) support for ICEfaces and saved the changes. Then I added it back and saved again. This did not fix the problem.

I looked at the JSF Tag Registry view which lists all the Tag libraries registered for a project. This can be displayed by clicking menu option Window/Show View/Other and then entering Tag in the search box at the top. Select Tag Registry under JSF and the view will open. You must select the desired project in the view as it does not default to the current project. You can then expand the entries to see the Tag libraries registered. If you select one of the top level nodes on the registry tree (in my case these were labelled Facelet Tag Registry and JSP Tag registry) you can click the refresh button adjacent to the project selection button. This refreshes the entries, and you can elect whether or not to flush the cache in the resulting dialog. Unfortunately it was of no help to me in fixing the above problem, and refreshing/clearing the cache did not clear the issue. However, this view could be useful to browse Tag Library registrations.

I was aware that the palette entries are driven by the available Tag libraries on the project Build path, so still felt that it was library related. I then revisited the project properties page, and this time expanded the Project Facets entry in the left pane rather than clicking on it. I then selected Java Server Faces. This listed the user libraries for the project in the right hand pane, including all of the ICEfaces and JSF ones. I deselected all but one of them (you cannot deselect all of them), leaving the JSF 1.2 (Sun RI) library selected. I then applied/saved the changes and closed the screen. I then re-opened it, selected all the originally selected libraries and applied/saved again. This time, when I opened a jspx file in the project with the Web Page Editor, the ICEfaces components were present on the palette.

One last glitch was that after doing this, the palette entries for ICEfaces only had icons present, no descriptions. I then  closed the web page editor window, and closed Eclipse down normally, and re-opened the same workspace in Eclipse again. I then re-opened the same jspx in the Web Page Editor, and this time the ICEfaces components were all present with both the Icons and the descriptions. Problem solved!


January 8th, 2010
5:35 pm
ICEfaces ice:form tag adds extra auto margin in IE7

Posted under JSF

The ice:form tag renders a stateSavingMarker div which declares a couple of hidden fields for use by ICEfaces.

In IE7 (or IE8 in IE7 compatibility mode), this div has its margin set to auto, which can cause an extra 10px or so of margin to be added between any elements outside the form and those inside.

This can be eliminated by adding a style to the form, either by adding a styleClass attribute to refer to a CSS class, or by using the style attribute to code an inline style. As this is a ‘one off’ just to clear a margin, the following example uses an inline style :-

<ice:form style="margin:0">
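Alternatively, the same thing can be done with a stylesheet class (the class name here is just an example) – in the CSS :-

/* Class name is an example only */
.formNoMargin { margin: 0; }

… and on the form tag :-

<ice:form styleClass="formNoMargin">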


January 8th, 2010
4:22 pm
Unable to execute JSF Lifecycle – Duplicate ID tags in JSF

Posted under JSF

A common gotcha, this one, which has had me scratching my head a few times. The id attributes on components must be unique, and if you duplicate one, as I did on an ICEfaces table, you get a nice big stack trace saying “Unable to execute JSF lifecycle” amongst many other things. It is not obvious that this is the cause, and the IDE does not appear to pick it up even if you do a validate, so it is worth remembering and checking for, as it gives the impression that something much nastier has happened! (Some duplicates do not appear to cause the problem, but should be weeded out anyway.)
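As a trivial illustration (the component and bean names are invented for the example), something like the following pair of tables on the same page would trigger it :-

<!-- Sketch: both tables are given the same id, which triggers the "Unable to execute JSF lifecycle" error -->
<ice:dataTable id="resultsTable" value="#{sourceBean.rows}" var="row">
    <ice:column><ice:outputText value="#{row}"/></ice:column>
</ice:dataTable>
<ice:dataTable id="resultsTable" value="#{targetBean.rows}" var="row">
    <ice:column><ice:outputText value="#{row}"/></ice:column>
</ice:dataTable>

Giving the second table a different id (or placing the tables inside different naming containers) clears the error.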


January 8th, 2010
3:25 pm
JSF: #{…} is not allowed in template text

Posted under JSF

This error can occur when loading a JSF/Facelets page :-

org.apache.jasper.JasperException: /home.jspx(16,17)
   #{...} is not allowed in template text

One reason for this, described on JavaRanch here, is that the JSF page is not being routed properly via the servlet mapping in web.xml. The following extract from web.xml shows a mapping for .iface, to route urls ending in .iface to the ICEfaces Persistent Faces Servlet:-

    <servlet-mapping>
        <servlet-name>Persistent Faces Servlet</servlet-name>
        <url-pattern>*.iface</url-pattern>
    </servlet-mapping>

Even though the page itself may be a .jspx file, using .jspx rather than .iface in the url would not route the page to the above servlet, causing it to be parsed incorrectly. This post describes the cause in more detail – the #{…} syntax is unified EL, which is not allowed in template text in a JSP (hence the error). In the above case the page was being treated as a JSP rather than a Facelet due to the incorrect routing.
