Blog Archives

February 21st, 2012
3:57 pm
CDI Interceptors not called on local call to service layer EJB method

Posted under CDI

When a service layer EJB method is called locally from within the same class, any CDI interceptors on the method are not invoked. This is standard CDI behaviour, as interceptors are only designed to be called when a method is called from an external client.

I hit this issue when my OptimisticLockInterceptor (which converts OptimisticLockException to ChangeCollisionException) was not being called, and OptimisticLockExceptions were being propagated all the way up to the view layer – Mantis issue 120 details this.

The solution is to ensure that any required interceptors are annotated on methods at the point of call from an external client. If such a method then calls another EJB method locally, any interceptors on the second method will not be invoked, but this does not matter provided the first method is correctly annotated. If the second method is also called externally, its interceptors will be invoked correctly on that path. The sketch below illustrates the pattern.
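As a minimal illustration (the binding annotation and service class here are hypothetical stand-ins for the real OptimisticLockInterceptor and service bean), the interceptor only fires when the method is entered through the container proxy:

import static java.lang.annotation.ElementType.METHOD;
import static java.lang.annotation.ElementType.TYPE;
import static java.lang.annotation.RetentionPolicy.RUNTIME;

import java.lang.annotation.Retention;
import java.lang.annotation.Target;
import javax.ejb.Stateless;
import javax.interceptor.InterceptorBinding;

// Hypothetical CDI interceptor binding for the lock-conversion interceptor
@InterceptorBinding
@Target({TYPE, METHOD})
@Retention(RUNTIME)
@interface ConvertOptimisticLock {}

@Stateless
public class NodeService {

    // Annotated at the external entry point: the interceptor fires here
    @ConvertOptimisticLock
    public void saveNode(Object node) {
        // Local call on 'this' bypasses the proxy, so no interceptors run
        // for updatePaths() on this path - which is fine, because this
        // entry point is itself intercepted
        updatePaths(node);
    }

    // Also annotated, so the interceptor still fires when an external
    // client calls this method directly
    @ConvertOptimisticLock
    public void updatePaths(Object node) {
        // ...
    }
}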


February 13th, 2012
2:42 pm
CDI/Weld fails to inject a reference with nested parameterised types

Posted under CDI

Weld 1.0 was failing with “WELD-001408 Injection point has unsatisfied dependencies” when nested parameterised types were present.

The referenced bean was correctly present and annotated.

I found that I could work around the problem by simplifying/removing some of the generics. This allowed the beans to inject, but also gave warnings about the use of raw types. For example:-

// Original code - injection fails with WELD-001408
    private @Inject TableCtrl<TreeNodePath<TaxonomyNode>, RowMetadata> selectionTable;

// Replaced with the following, which injects but warns about raw types
    private @Inject TableCtrl<TreeNodePath, RowMetadata> selectionTable;

Attempts to use a CDI extension to find out more about what was happening did not reveal any more insight. This post discusses Weld performance and debugging and refers to a Stack Overflow post on the subject.
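For reference, a minimal sketch of the kind of extension probe tried (the class name is mine, and the extension must also be registered in META-INF/services/javax.enterprise.inject.spi.Extension). It simply logs the required type of each injection point as Weld processes the beans:

import javax.enterprise.event.Observes;
import javax.enterprise.inject.spi.Extension;
import javax.enterprise.inject.spi.InjectionPoint;
import javax.enterprise.inject.spi.ProcessInjectionTarget;

public class InjectionDebugExtension implements Extension {

    // Called once per bean class as its injection target is processed
    <T> void logInjectionPoints(@Observes ProcessInjectionTarget<T> pit) {
        for (InjectionPoint ip : pit.getInjectionTarget().getInjectionPoints()) {
            System.out.println("Injection point required type: " + ip.getType());
        }
    }
}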

My solution to the problem (which I have also logged on Mantis) is as follows:-

  1. Injecting the bean into a raw type does not suffer from the problem.
  2. I therefore inject into a temporary raw type, and cast that to the correct type.
  3. Wrapping this in a method annotated with @Inject neatly solves the problem: the method takes the raw types as arguments, and assigns them, with a cast, to the correctly parameterised fields.
  4. As all this is done in one method, warnings can be suppressed for that method alone. This is a tidy solution, as the method has only this specific purpose, and no other warnings in the class are incorrectly suppressed.
  5. This is far better than the original workaround, which involved hacking the generics back to non-nested parameterised types throughout – this meant changing a number of classes. The current solution entirely isolates the issue and allows the desired generics to be used correctly everywhere.

An example of the correct generic declarations and the method used follows:-

public abstract class TreeBrowser<N extends TreeNode<T>, T extends Tree<N>, P extends TreeNodePath<N>>
                            implements Serializable, BreadcrumbCtrlEvent<N> {

    private CrumbComparator<N> crumbComparator;
    private BreadcrumbCtrl<N> breadcrumb;
    private TableCtrl<N, RowMetadata> treeNodeTable;
    private TableCtrl<P, RowMetadataStatusMap> trayTable;

    // Weld injects the raw-typed arguments without complaint; the unchecked
    // assignments to the correctly parameterised fields are isolated here,
    // so warnings are suppressed for this method only
    @Inject
    @SuppressWarnings({"rawtypes", "unchecked"})
    void injectWeldUnsatisfiedGenericBeans(CrumbComparator crumbComparator, BreadcrumbCtrl breadcrumb,
                           TableCtrl treeNodeTable, TableCtrl trayTable) {
        this.crumbComparator = crumbComparator;
        this.breadcrumb = breadcrumb;
        this.treeNodeTable = treeNodeTable;
        this.trayTable = trayTable;
    }


This solved the problem.


February 3rd, 2012
1:11 pm
OCZ Vertex / ASUS P6T Flash Upgrade process & Issues

Posted under 64 Bit

This was done in order to resolve ongoing reliability issues with the OCZ Vertex and SATA port errors from the P6T as detailed here.

This post also follows on from my earlier post here concerning hot swap issues, where I went into some detail of the P6T flashing process. Reference should be made to that post for more details of P6T flashing.

The following steps were performed:-

  1. I identified the current firmware version of my OCZ Vertex. To do this, start Device Manager, find the disk under Disk drives, and open its properties. Then select the Hardware Ids property, and the firmware version number will be shown. There were hints in the forums of a more detailed revision number, but I could not find it. My initial version was 1.5.
  2. I also tried to run the OCZ Toolbox, which allows identification and upgrade of the firmware, and also auto-downloads the correct new version. A nice idea if it had worked! In my case it failed to find the OCZ Vertex either before or after the upgrade process.
  3. I did note that a number of posts suggested switching the SATA mode to AHCI in both the BIOS and Windows. This OCZ post details the process, which (for the Windows part) simply involves changing the value of HKLM\System\CurrentControlSet\Services\msahci\Start in the registry from 3 (= IDE mode) to 0 (= AHCI mode). However, when I tried this at this stage, the system froze at boot time. Fortunately, I was able to switch the setting back in the BIOS, boot into safe mode, and set the registry value back to 3 to restore the system to a working state.
  4. The latest version for my Vertex was 1.7, but it is not possible to upgrade directly from 1.5 to 1.7; it is necessary to upgrade to 1.6 first. Finding all the historical versions of the firmware is not easy on the OCZ site – the main download area only has the most recent version. Older versions may be found in a forum post here which has all the historical downloads.
  5. For my Vertex, here is the 1.6 upgrade and here is the 1.7 upgrade.
  6. Note that as per the 1.6 upgrade, I had the ‘filename error’, which meant that I had an older version needing a destructive upgrade to 1.6 due to a change in the NAND/wear-levelling algorithm. This would mean loss of all data on the drive and a restore afterwards, and the use of a different kit for upgrading. The upgrade also could not be done with the drive in use as the system disk. The links in the previous paragraph give all the instructions and the alternative versions of the upgrade.
  7. I then made 2 full backups with Acronis, plus another daily file backup, to be sure I would be able to restore the drive.
  8. As per the instructions, I jumpered the drive to set it into standalone/upgrade mode, and booted from the previous version of Windows 7 which I still had available on a standard hard drive. I kept this available just in case, and on this occasion I was very glad I did.
  9. The destructive upgrade is available as 16_win_vertex.zip. Unfortunately the zip contains 2 versions of the update, 641102VTX.exe and 661102VTXR.exe, and there are no release notes to say which version to use or what they are(!). I wondered if they related to the internal firmware/drive version from which you were upgrading, but as noted earlier I did not know that either. Another forum post hinted that, as might be expected, 661102VTXR.exe was the later version and the ‘R’ indicated ‘revised’. I decided to try this one.
  10. Having jumpered and rebooted, the drive correctly identified itself as YATAPDONG BAREFOOT, and I proceeded with the upgrade, which went successfully.
  11. I then immediately upgraded from 1.6 to 1.7 using 17_updaters_1.zip. This time a standard non-destructive upgrade could be done (not that it mattered in my case), which meant burning an ISO to a CD and booting it to do the upgrade. The zip contained ISOs for a number of OCZ products plus some release notes this time, so I burnt the ISO for the Vertex, booted it, and did the upgrade, which went with no problems. One point is that I cannot recall whether I removed the jumper on the drive before or after this final upgrade. I am fairly sure it was after, in which case the upgrade does not mind whether the jumper is present; but if there are any issues, it should be tried without the jumper, as would be normal for a non-destructive upgrade.
  12. Following this, I booted the system. Initially it could not see the Vertex, so I rebooted into the BIOS, found the drive was then visible, and did a BIOS save and another reboot. This time the drive was visible but not formatted.
  13. I then booted Acronis from its rescue CD and restored the backup, plus the Master Boot Record. This proceeded normally.
  14. After rebooting, the system came up and ran fine; however, it presented the boot manager from the other disk, and the system had to be selected from there. To prevent the need for this, I copied the boot files back to the new SSD using bcdboot, as per this post here. It was also necessary to reset the boot order in the BIOS so that the SSD was chosen first, otherwise the boot manager is still presented even after you have run bcdboot.


Now that I had a working system with the updated SSD, I decided to reflash the P6T as well. Previously I had had hot-swap and SATA issues with later BIOS versions, as detailed here. However, since then I had disabled the onboard JMicron controller which was giving problems, and replaced it with a Startech controller, as detailed in the update to this post here. Therefore the original issues which were preventing the use of a later BIOS were no longer present, so I opted to flash to the latest version, which at the time of upgrade was version 1408:-

  1. I used the in-BIOS EZ-Flash; the process and precautions are detailed in the original section of this post here.
  2. I re-applied the required motherboard settings as per the process, and rebooted successfully.
  3. I then retried switching into AHCI mode, as detailed above when I tried it prior to the upgrade. This time it worked correctly with no problems. In the BIOS, I just set the onboard SATA to AHCI. The JMicron controller was disabled, and the Startech controller was not touched, as this was an add-in card just used for SATA backups. It is possible that now either the onboard ICH10 SATA or the JMicron SATA might work and hot swap reliably in eSATA mode with AHCI enabled, but as I was happy enough with the Startech I decided to leave this issue well alone for now.
  4. As a final test, I re-ran the Windows Experience tests to see if AHCI had improved performance, and indeed the primary hard disk figure had increased from 7.1 to 7.3, which is good considering that my original OCZ Vertex is now old technology which has been set to End-Of-Life by OCZ. The system certainly felt snappier, but this is highly subjective, as I had not run before-and-after benchmarks – the goal of the whole exercise was reliability rather than speed.


January 29th, 2012
3:26 pm
Cannot obtain a class literal for a concrete parameterised type

Posted under Java

Update 30/1/2012

After further testing, I have found more issues when trying to use a parameterised concrete class in a JPA constructor expression. EclipseLink was throwing MethodNotFoundException, complaining that it could not find the constructor for the class, due to the use of the parameterised type in one of the constructor arguments. The precise circumstances in which this does and does not work are not clear.

My solution has been to avoid using a parameterised class at all in a JPA constructor expression. To avoid most of the code duplication, I subclassed the parameterised class for each concrete type I was using, i.e. I created a concrete subclass containing only a constructor that calls the superclass constructor. This then allowed me to use a typed query in JPA, as the returned class was now non-parameterised and so I could create a class literal for it. A minimal sketch follows.
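The class names here are hypothetical stand-ins for the real constructor-expression classes:

// Generic constructor-expression class: there is no class literal for
// NodeWithChildCount<TaxonomyNode>, so it cannot be passed to createQuery()
public class NodeWithChildCount<N> {

    private final N node;
    private final long childCount;

    public NodeWithChildCount(N node, long childCount) {
        this.node = node;
        this.childCount = childCount;
    }
}

// Concrete subclass: nothing but a pass-through constructor, but now
// TaxonomyNodeWithChildCount.class is an ordinary class literal, usable
// both in the JPQL constructor expression and in em.createQuery(jpql, ...)
public class TaxonomyNodeWithChildCount extends NodeWithChildCount<TaxonomyNode> {

    public TaxonomyNodeWithChildCount(TaxonomyNode node, long childCount) {
        super(node, childCount);
    }
}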


Original Post

Whilst in some situations this may be an issue, it is also a statement of fact!

This is due to type erasure at run time, and cannot be done.

It means, for example, that in some cases it is not possible to completely eliminate unchecked warnings.

One example of mine was the use of a parameterised type in a constructor expression class in JPA. I had a requirement to do fetches which involved returning an entity plus another column that could not be handled as a relationship/JPA entity property. The property in question was the count of children for a tree node entity. As I was using path enumeration to describe the tree, I could not use JPA relationships to handle it, and so had to return the child count separately. I had more than one type of tree, all handled polymorphically, and so wanted to re-use the constructor class by making it parameterised.

This all worked fine, but the class could not have its class literal passed when creating a TypedQuery to fetch the results back. The solution was to use some casting isolated in a specific method, and to suppress the generics warnings via @SuppressWarnings("unchecked"), as in the sketch below.
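A minimal sketch of the kind of helper used (the names are mine): the unavoidable unchecked cast is confined to one method, so the warning suppression stays local:

public final class ClassLiterals {

    private ClassLiterals() {
    }

    // The cast cannot be checked at runtime due to erasure, but isolating
    // it here keeps @SuppressWarnings off the calling code
    @SuppressWarnings("unchecked")
    public static <T> Class<T> cast(Class<?> rawClass) {
        return (Class<T>) rawClass;
    }
}

// Usage (types hypothetical):
//   Class<NodeWithChildCount<TaxonomyNode>> cls = ClassLiterals.cast(NodeWithChildCount.class);
//   TypedQuery<NodeWithChildCount<TaxonomyNode>> query = em.createQuery(jpql, cls);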

The following posts detail this issue:-

http://stackoverflow.com/questions/2390662/java-how-do-i-get-a-class-literal-from-a-generic-type

http://www.angelikalanger.com/GenericsFAQ/FAQSections/ParameterizedTypes.html#Why is there no class literal for the concrete instantiation of a parameterized type?


January 27th, 2012
7:01 pm
JSF session sharing across multiple Browser Windows

Posted under JSF

It is a design feature of servlet sessions that if you open the same application in multiple windows in the same browser type, the session data is shared across them all. This post on Stack Overflow discusses the issue.

Therefore, if for example you need multiple sessions present in order to test optimistic lock collisions, you need a way around this. There are two solutions:-

  1. Use a different browser type in each window. For example, pair Firefox with Chrome, or Firefox with Opera: you will then get separate sessions, as there is no sharing of cookies and other related data across different browser types.
  2. Log in from another computer, for example by opening an RDP session to another computer on the same desktop and running the application from there. In this case, both browser types can be the same.


January 27th, 2012
6:09 pm
h:link navigation fails when Javascript confirm function called in click event

Posted under JSF

I use the click event to validate a navigation, returning true to allow it and false to cancel it.
On some occasions, the toolbar buttons (which are based internally on h:link, with pre-emptive GET-style navigation based on an outcome) do not navigate even when the click event correctly returns true.

Some toolbar buttons always did this, others always worked. No explanation could be found for the failure to navigate, even after extensive testing and online searching for answers.

The solution was to always cancel the default navigation in the click event, but to have the click event explicitly perform the navigation itself via Javascript. The following code fragment illustrates the solution:-

<h:link id="link" disabled="#{disabled}" outcome="#{outcome}" href="#{href}" title="#{title}" tabindex="#{tabIndex}"
      onclick="if (#{empty onNavigate ? ‘true’ : el:format1(onNavigate, el:concat3(‘\”, component.clientId, ‘\”))})
      window.location.href=this.href; return false;">

The if test checks whether the navigation is required; if so, window.location.href is set to the target href in the component. False is always returned, so that standard navigation is disabled.

This solved the problem consistently. (This has also been logged as Mantis issue 104).


January 27th, 2012
11:30 am
Controlling CDI conversation propagation

Posted under JSF

This is a tricky issue to control correctly. The following points were found during testing (Mojarra 2.0.4 FCS / Primefaces 2.2.1 / Glassfish 3.0.1):-

  1. One key issue is that to start a new conversation on a page, the navigation to that page must not propagate any previously active conversation. This will then start a new transient (i.e. not long-running/not started) conversation on the new page, and conversation.begin() can be used to make it long-running (i.e. start it) – see the sketch after this list.
  2. When doing a standard JSF Ajax postback, the current conversation is always propagated, and it does not appear to be possible to prevent this or to start a new conversation from a postback. JSF always adds the current cid as a querystring parameter.
  3. To prevent propagation and to allow a new page to start a new conversation, GET navigation needs to be performed. This fits in well with my main toolbar options – I use pre-emptive GET navigation for the main pages with full page refreshes.
  4. Primefaces p:button allows the new JSF 2 outcome-style GET navigation, but cannot be stopped from propagating a conversation – it always adds a cid parameter. If you add one with an empty or null value, you still get a “cid=” parameter, which CDI then spits out as an invalid conversation on the new page.
  5. h:link can be stopped from propagating by using an explicit f:param for the cid, provided the param has no value or an explicit null value (an empty string will not work).
  6. Trying to refer to another inactive conversation, e.g. by storing conversation-scoped bean references in a session-level map and looking them up, seriously screws up CDI with an extremely long/possibly recursive stack trace. (See Mantis issue xxx for the stack dump.) This is likely to be due to the proxying mechanism used for conversations.
  7. Therefore, it is not possible to invent, say, an ‘onPageLeave’ event for a previous page and call it after a new page has started and had its conversation activated.
  8. Therefore, if I need to refresh all the pages in the application, for example when an OptimisticLockException has occurred, I use a central workspace session bean to manage all the pages. I set a refresh-pending flag in a map for each page, and when a page is next visited, the workspace bean calls a refresh event of my own for that page and then clears the pending flag. When processing the current page after an OptimisticLockException, I can refresh directly if the user requests it from a lock dialog.
  9. My workspace bean holds the cid for each page against the page name in a map, and uses this when rendering the toolbar, adding the appropriate cid= parameter to each toolbar option. In fact I use an outer map keyed on page group name (where a page group corresponds to a top-level toolbar option), with an inner map containing all the cids and refresh-pending flags for the individual pages in that group. This allows, for example, a report toolbar option to be a menu of currently active reports, and allows new reports to be created dynamically as new conversations and added to the list under the report menu on the toolbar.
  10. Conversations are started and ended via calls to the workspace bean, which can then add/remove entries from the maps to keep track of which pages/cids are currently active.
  11. Regarding navigation and cid propagation, another option is to call JSF directly to obtain the navigation case for an outcome. ConfigurableNavigationHandler allows you to pass an outcome and get the actual navigation case, from which you can then get the URL. This gives full control of the cid= propagation parameter on the querystring, and allows control for components which do not support JSF 2.0-style pre-emptive GET navigation directly from an outcome. The URL can be passed to any component which accepts one, and should then still perform the correct navigation and conversation control. See Core JSF 3, page 333, for a code example. This method could prove useful, for example, with a Primefaces MenuItem, which does either JSF postback-style navigation or URL navigation, but does not directly do pre-emptive GET navigation from an outcome – see the second sketch below.


January 27th, 2012
10:16 am
Method call in rendered attribute is called again after navigating to new form

Posted under JSF

Initially this looked very strange. When navigating to a new form, the rendered= methods for the old form were being called again, and were failing due to scoping issues – a new request-scoped bean was being created, causing a null pointer exception in the method as the required state had been lost.

This post, and also this post on Stack Overflow, explain the problem. Apparently JSF re-checks the rendered conditions after the submit as part of a safeguard against tampered requests. This is what caused the creation of a new bean and the failure. A sketch of the failure mode follows.
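The bean and type names here are hypothetical, but they illustrate the shape of the failure: the backing state does not survive into the fresh request-scoped instance whose rendered= method JSF re-evaluates:

import javax.enterprise.context.RequestScoped;
import javax.inject.Named;

@Named
@RequestScoped
public class OldFormCtrl {

    // Populated while the old form was rendered; a freshly created
    // request-scoped instance no longer has it after the submit
    private TreeNode selectedNode;

    public boolean isDetailRendered() {
        // Re-evaluated by JSF after the submit as part of the safeguard:
        // NPE here once selectedNode has been lost with the old bean
        return selectedNode.hasDetail();
    }
}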

This behaviour was certainly unexpected, and I have not seen it documented ‘up front’ previously – so it is certainly one to be aware of! This is not the first time I have had issues with the rendered= attribute.

In my case, resolving issues in the creation of conversations solved the problem – once I had conversations propagating properly, everything worked correctly and the state was present for the extra calls to the rendered= methods.


November 8th, 2011
8:02 pm
iMedia Linux – configuration/application install issues

Posted under Linux

This post brings together a number of past issues which I have not posted previously, discovered while setting up and using iMedia embedded Linux on my diskless LinITX Jupiter test box.

A zip of all the customised scripts and various config files (such as the infamous xorg.conf) is in the repository.

Shell Script Usage

1/ The startup scripts for applications, drivers etc. are in /etc/rcS.d/. You just add scripts there and they are automatically run; they are all run by /etc/rcS in alphabetical order. They are grouped by a prefix of M05, M10, M20, M30 etc. – this seems just to be an arbitrary grouping that determines the order for dependencies etc. Note that all of rcS.d/ is run at run level 3, as set in /etc/inittab.

The general order of startup is therefore (see /etc/inittab):-

  • /etc/rc.init runs, and calls rc.sysinit directly
  • /etc/rc.sysinit iterates init.d and runs everything in it. The order is partly determined by an initial grouping number – note that the first to run is 00createvar, which creates the /var tree on startup (see later)
  • /etc/rcS runs as above, iterating /etc/rcS.d/

2/ /usr/local/bin/ has the extra shared shell scripts – rlog, rlog-leafpad, sx (start X), sxr (start X as root), plus start/stop scripts for Tomcat and Glassfish.

In Thunar, you can add file associations for scripts by right-clicking on the file and selecting ‘open with other application’ – for each app you can add a custom launch command below. (I added rlog and rlog-leafpad.) If you add an incorrect one (I added beaver by mistake), Thunar may not let you remove it from the list; just edit ~/.local/share/applications/defaults.list and remove it from there. Note that /usr/local/bin/ is already on the path, so all the scripts in it are visible – a good place to put other global scripts.

Note that you can add scripts to e.g. a program launcher on the top panel bar – right-click and create a program launcher, then add items to it – useful for scripts which run without parameters (unlike rlog and rlog-leafpad above, which act on files and therefore sit in Thunar’s context menu for the files they operate on). When adding a script as an item to a program launcher, the ‘startup notification’ just gives a wait cursor while the script is running (useful). Running in an explicit terminal window is WEIRD – for example, for Glassfish I set the middle icon on the RHS pane to “terminal”, and set the command to be “aterm -e start-glassfish”. I did not tick “Run in Terminal”, as this did not work!

Logging under iMedia Linux

1/ Logs are stored under the /var tree, which is recreated at startup each time (see above), so log files are purged – we are running on a small flash-based distro. Therefore Tomcat and Glassfish, for example, are set up with symlinks pointing their log directories from their normal install locations to directories under /var/log/. There is some trickery to this – for Glassfish I make sure that the *normal* log folder is not present (renaming it with a date/time stamp if required), else the symlink is set up wrongly.

2/ Some logs (but by no means all) in /var/log/ are circular buffers handled by emlog – see here for details. They show in Thunar as ‘character type devices’ and need copying to be visible – /usr/local/bin/rlog does this using nbcat, as per the emlog site. I also added /usr/local/bin/rlog-leafpad, which does the copy and fires up leafpad on the result. Note that, as the emlog site states, you cannot list such a log when active using cat, as a non-blocking read is required – hence the need for nbcat as used by rlog and rlog-leafpad.

Installing new Applications

1/ Installing new apps is a pain, as rpm installs don’t appear to be supported. I tried installing various zipped tarballs of VNC/RDP alternatives for remote desktop, but they all needed dependent libraries I did not have, so I had to give up. I think you can build from source to add things, but I did not go that far! The apps I did install successfully – Java 1.6.0_29, Tomcat 6.0.20 and Glassfish 3.0.1 – were all installed in folders under /opt, as this appeared to be the correct place. Note that for Tomcat I placed the config files under /etc/apache-tomcat-6.0.20/conf/ and symlinked them from the ‘expected’ place, /opt/apache-tomcat-6.0.20/conf/, as this appeared to be the correct protocol – other apps placed their config under /etc/ and linked to it from their install directory under /opt/. However, I did not go to the trouble of doing this for Glassfish – it would be possible (I did do it for the log directory) but the directory structure is more complex.

2/ I did have success with Java – I just ran the .bin install kit of the latest Linux version of Sun Java SE (jdk6u29), and it all unzipped properly and ran fine. Thankfully there were no library problems – probably because an earlier working JDK 6 had already been installed by default. I added a shell script, /etc/profile.d/java-jdk1.6.0_29.sh, which created JAVA_HOME and added the bin directory to the path; note, however, that this did not take effect for init-time shell scripts for either Tomcat or Glassfish, so their startup scripts had to add their own definitions to work.

3/ I successfully installed Glassfish 3.0.1 (from the zip kit download). To get it to work I had to upgrade Java as above, as I got errors otherwise and the old JDK was below the minimum certified version; it worked fine after upgrading Java. Note that to stop it checking for updates on startup, I just renamed /opt/glassfish-3.0.1-web/glassfish/modules/console-updatecenter-plugin.jar by adding “.disabled” on the end. You can achieve the same thing using the updatetool, but this failed to install properly due to a python error/dependency. I also removed the default jsf-api.jar and jsf-impl.jar from the same folder, and added the Mojarra 2.0.4 FCS jars instead, as my apps needed this version. Note that this also speeds up the startup of the Glassfish admin tool. See my notes here about clearing the OSGi cache if you do this.

4/ For Tomcat and Glassfish I separated out the log directory creation from the actual startup, using separate scripts. The log creation is always run, and so sits under /etc/rcS.d/; the auto-startup is optional, and may sit under /etc/rcS.disabled/ as a placeholder if the server is not autostarted. However, as the log directories under /var/ are always set up, you can then manually start the servers if desired and they will still work. This is done with the start-glassfish, stop-glassfish, start-tomcat and stop-tomcat scripts in /usr/local/bin/. As this directory is on the path, the scripts can be run directly from a terminal window or from a remote SSH session, but you need to be logged in as root to do so.

Graphical Remote Desktop

Regarding remote desktop, it is in theory possible to use a Windows-based X server and connect in that way. To do this, you need to run an X display manager on the Linux box to handle the XDMCP protocol etc. iMedia Linux does have /usr/bin/xdm, which is an old display manager. I did manage to run it from the initial login prompt before typing startx, just by typing xdm, and it popped up a login screen. However, it would not connect to Xfce: it recognised a valid login and tried to do something, failed, and redisplayed itself, while an invalid login gave a login failure. I also managed to run the launcher from Xming under Windows, and got as far as connecting to xdm. To do this I edited /etc/X11/xdm/Xaccess to enable the “*” entry to allow all comers. I also experimented with editing /etc/X11/xdm/xdm-config, setting DisplayManager*authorize to false, and tried commenting out the last line as follows:-

! SECURITY: do not listen for XDMCP or Chooser requests
! Comment out this line if you want to manage X terminals with xdm
!DisplayManager.requestPort:    0

I managed to get Xming to display the same xdm login screen that I got locally, which was a good start – I knew I had a connection working. However, I realised in the end that Xfce does not appear to use a display manager – it simplifies things and bypasses it – and setting up xdm to link properly to Xfce and work remotely looked like a great deal of configuration with no instructions on how to do it properly.

I therefore sadly gave up on this and admitted defeat on installing any kind of graphical remote desktop!

Samba

I got the Samba server to share folders successfully – I added the shares in /etc/samba/smb.conf. Security was set to user (security = user). I tried setting “valid users=” to a list of users, to allow restricting access e.g. to the Glassfish/Tomcat log folders, but could not get this to work, so I left it out.

Note that I did need to use “smbpasswd -a username” to add an existing Linux user to the Samba user list and give it a password. You need to run this as root, otherwise it merely allows you to change your own password. smbusers allows you to map external users (used by Windows clients) to Linux users – see here. Note that you don’t need to do this – you can just use the Linux names. Note also that when adding/setting passwords with smbpasswd, you *always* use the Linux name, not the Samba alias.

Ftp, Telnet and SSH

1/ FTP is enabled by default via vsftpd. The config file is /etc/vsftpd/vsftpd.conf. The default settings are pretty sensible in opening it up for use on a secure local LAN. FTP works via all the local Linux accounts, including root. The log is in /var/log/vsftpd.log.

2/ Telnet works but does not allow root access (presumably as it is too insecure); utelnetd is the daemon, but I could find no documentation for it on the net. SSH works fine via sshd and allows root access, but I could not find any config files or logging for it. Although sshd is documented on the net – the man pages are here – the config files mentioned for the daemon do not appear to be present under iMedia Linux. No problem: it is set up fine for LAN use and works well from PuTTY in SSH mode.


October 4th, 2011
12:10 pm
Glassfish 3.0.1 – Slow login due to checking for updates

Posted under Glassfish

I did a new install of GF 3.0.1 on a laptop, and elected not to install the update tool as this was a fixed demo environment.

I found that when connecting to Glassfish admin, login was very slow, easily taking a minute or more longer than normal. After login, the main screen displayed a count of available updates at the top left, which I had not seen in the past. This looked suspicious and I wondered if the slowness was due to checking for available updates.

It turned out that this was indeed the case. This post discusses the issue and various workarounds, including disabling network access for Glassfish and removing a jar – neither of which I particularly liked. The post indicated that you could set preferences in the update tool, but as I had not installed it I was stuck.

I then tried reinstalling GF from scratch, this time selecting the update tool as well during the install, but not ticking the checkbox for ‘checking for updates’. After doing this, I discovered that the problem was no longer present – login was quicker, and the count of available updates was not displayed. It was also possible to use the update tool to configure preferences, but in my case automatic update checking was off anyway.

The solution, somewhat paradoxically, is therefore: if you want a fixed installation with no update checking, still select the update tool when installing, but do not enable checking for updates. This solves the problem, and also allows you to configure update preferences later if you wish.
