Blog Archives

February 8th, 2012
8:21 am
Event Viewer slow on Windows 7

Posted under Windows 7

My event viewer is slow to open – it takes around 20-30 seconds.

This post here indicates that others have found the same thing.

The post has a workaround – copy eventvwr.msc from Windows\System32 on an XP system, and it will run the XP-style event viewer on Windows 7. This starts almost instantly, but of course it does not provide the nice error summary of the new version, so you can’t have it both ways!

Unlike the advice in the post, I copied the file to a separate folder (such as System Management) rather than System32, as it is a separate addition, and renamed it EventVwrClassic.msc.

You can just make a shortcut to it to run it. Unlike the standard Windows 7 one, it will pop up the UAC dialog asking you to approve the privilege increase, even if you set the shortcut to run as administrator.
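For reference, the copy boils down to something like this (paths hypothetical – the source is the System32 folder of an XP machine or installation):

rem Copy the XP console under a new name into a separate folder
copy "X:\Windows\System32\eventvwr.msc" "C:\System Management\EventVwrClassic.msc"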


February 3rd, 2012
1:11 pm
OCZ Vertex / ASUS P6T Flash Upgrade process & Issues

Posted under 64 Bit

This was done in order to resolve ongoing reliability issues with the OCZ Vertex and SATA port errors from the P6T as detailed here.

This post also follows on from my earlier post here concerning hot swap issues, where I went into some detail of the P6T flashing process. Reference should be made to that post for more details of P6T flashing.

The following steps were performed:-

  1. I identified the current firmware version of my OCZ Vertex. To do this, start Device Manager, find the disk under Disk drives, and open its properties. Then select the Hardware Ids property, and the version number of the firmware will be shown. There were hints in the forums of a more detailed revision number, but I could not find it. My initial version was 1.5.
  2. I also tried to run the OCZ Toolbox, which allows identification and upgrade of the firmware, and also auto-downloads the correct new version. A nice idea if it had worked! In my case it failed to find the OCZ Vertex either before or after the upgrade process.
  3. I did note that a number of posts suggested switching the SATA mode to AHCI in both the bios and Windows. This OCZ post details the process, which (for the Windows part) simply involves changing the value of HKLM\System\CurrentControlSet\Services\msahci\Start in the registry from 3 (= IDE mode) to 0 (= AHCI mode) – see the sketch after this list. However, when I tried this at this stage, the system froze at boot time. Fortunately, I was able to switch the setting back in the BIOS, boot into safe mode, and set the registry setting back to 3, restoring the system to a working state.
  4. The latest version for my Vertex was 1.7, but it is not possible to upgrade directly from 1.5 to 1.7 – it is necessary to upgrade to 1.6 first. Finding all the historical versions of the firmware is not easy on the OCZ site – the main download area only has the most recent version. Older versions may be found in a forum post here which has all the historical downloads.
  5. For my Vertex, here is the 1.6 upgrade and here is the 1.7 upgrade.
  6. Note that as per the 1.6 upgrade, I had the ‘filename error’ which meant that I had an older version which needed a destructive upgrade to 1.6 due to a change in the NAND/wear levelling algorithm. This would mean loss of all data on the drive and a restore afterwards, and the use of a different kit for upgrading. The upgrade also could not be done with the drive in use as the system disk. The links in the previous paragraph give all the instructions and the alternative versions of the upgrade.
  7. I then made 2 full backups with Acronis, plus another daily file backup, to be sure I would be able to restore the drive.
  8. As per the instructions, I jumpered the drive to set it into standalone/upgrade mode, and booted from the previous version of Windows 7 which I still had available on a standard hard drive. I kept this available just in case, and on this occasion I was very glad I did.
  9. The destructive upgrade is available as 16_win_vertex.zip. Unfortunately the zip contains 2 versions of the update, 641102VTX.exe and 661102VTXR.exe, and there are no release notes to say which version to use or what they are(!). I wondered if they related to the internal firmware/drive version from which you were upgrading, but I did not know that either, as I have said earlier. Another forum post hinted that, as might be expected, 661102VTXR.exe was a later version and the ’R’ indicated ‘revised’. I decided to try this one.
  10. Having jumpered and rebooted, the drive correctly identified itself as YATAPDONG BAREFOOT, and I proceeded with the upgrade, which went successfully.
  11. I then immediately upgraded from 1.6 to 1.7 using 17_updaters_1.zip. This time a standard non-destructive upgrade could be done (not that it mattered in my case), which meant burning an ISO to a CD and booting that to do the upgrade. The zip contained ISOs for a number of OCZ products, plus some release notes this time, so I burnt the ISO for the Vertex, booted it and did the upgrade, which went with no problems. One point is that I cannot recall whether I removed the jumper on the drive before or after this final upgrade. I am fairly sure that it was after, in which case the upgrade does not mind whether the jumper is present; but if there are any issues, it should be tried without the jumper present, as would be normal for a non-destructive upgrade.
  12. Following this, I booted the system. Initially it could not see the Vertex, so I rebooted into the bios, found it was then visible, and did a bios save and another reboot. This time the drive was visible but not formatted.
  13. I then booted Acronis from its rescue CD and restored the backup, plus the Master Boot Record. This proceeded normally.
  14. After rebooting, the system came up and ran fine; however, it presented the boot manager from the other disk, and the system had to be selected from there. To prevent the need for this, I copied the boot files back to the new SSD using bcdboot, as per this post here (see the sketch after this list). It was also necessary to reset the boot order in the bios so that the SSD was chosen first, otherwise the boot manager is still presented even after you have run bcdboot.
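For reference, the registry change from step 3 boils down to a single value. A minimal sketch using reg.exe from an elevated command prompt (the same command with /d 3 reverts to IDE mode):

reg add "HKLM\System\CurrentControlSet\Services\msahci" /v Start /t REG_DWORD /d 0 /f

Similarly, the boot-file copy from step 14 is essentially a one-liner – this sketch assumes the SSD’s Windows installation is on C: when run:

rem Copy the boot environment files so the SSD can boot standalone
bcdboot C:\Windows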


Now that I had a working system with the updated SSD, I decided to reflash the P6T as well. Previously I had had hot swap and SATA issues with later bios versions, as detailed here. However, since then I had disabled the onboard JMicron controller which was giving problems, and replaced it with a Startech controller, as detailed in the update to this post here. Therefore, the original issues which were preventing the use of a later bios were no longer present, so I opted to flash to the latest version, which at the time of upgrade was version 1408 :-

  1. I used the in-bios EZ-Flash, and the process and precautions are detailed in the original section of this post here.
  2. I re-applied the required motherboard settings as per the process, and rebooted successfully.
  3. I then retried switching into AHCI mode, as detailed above, where I had tried it prior to the upgrade. This time it worked correctly with no problems. In the bios, I just set the onboard SATA to AHCI. The JMicron controller was disabled, and the Startech controller was not touched, as this was an add-in card just used for SATA backups. It is possible that either the onboard ICH10 SATA or JMicron SATA might now work and hot swap reliably in eSATA mode with AHCI enabled, but as I was happy enough with the Startech I decided to leave this issue well alone for now.
  4. As a final test, I re-ran the Windows Experience tests to see if AHCI had improved performance, and indeed the primary hard disk figure had increased from 7.1 to 7.3, which is good considering that my original OCZ Vertex is now an old technology which has been set to End-Of-Life by OCZ. The system certainly felt snappier, but this is highly subjective, as I had not run before-and-after benchmarks – the goal of the whole exercise was reliability rather than speed.


January 29th, 2012
3:26 pm
Cannot obtain a class literal for a concrete parameterised type

Posted under Java

Update 30/1/2012

After further testing, I have found more issues trying to use a parameterised concrete class in a JPA constructor expression. EclipseLink was throwing MethodNotFoundException complaining that it could not find the constructor method for the class, due to the use of the parameterised type in one of the constructor arguments. The precise circumstances where this does and does not work are not clear.

My solution has been not to use a parameterised class at all in a JPA constructor expression. To avoid most of the code duplication, I subclassed the parameterised class for each concrete type I was using, i.e. I created a concrete subclass which only contains a constructor calling the superclass constructor. This then allowed me to use a typed query in JPA, as the returned class was now non-parameterised, so I could create a class literal for it (see the sketch below).
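As a sketch of this approach (all names hypothetical – FolderNode stands in for one of my tree entities, and the JPQL is illustrative only, as the real child-count expression depends on the path-enumeration scheme):

// Parameterised result wrapper - fine as a class, but there is no class
// literal for e.g. NodeWithChildCount<FolderNode>, and using it directly
// in a constructor expression caused the problems described above.
public class NodeWithChildCount<T> {
    private final T node;
    private final long childCount;

    public NodeWithChildCount(T node, long childCount) {
        this.node = node;
        this.childCount = childCount;
    }

    public T getNode() { return node; }
    public long getChildCount() { return childCount; }
}

// Concrete subclass containing nothing but a pass-through constructor,
// giving JPA a non-parameterised class that does have a class literal.
public class FolderNodeWithChildCount extends NodeWithChildCount<FolderNode> {
    public FolderNodeWithChildCount(FolderNode node, long childCount) {
        super(node, childCount);
    }
}

// Given an EntityManager em, a typed query is now possible:
TypedQuery<FolderNodeWithChildCount> query = em.createQuery(
    "SELECT NEW com.example.FolderNodeWithChildCount(n, COUNT(c)) "
    + "FROM FolderNode n, FolderNode c "
    + "WHERE c.path LIKE CONCAT(n.path, '%') GROUP BY n",
    FolderNodeWithChildCount.class);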


Original Post

Whilst in some situations this may be an issue, it is also a statement of fact!

This is due to type erasure at run time, and cannot be done.

It means, for example, that in some cases it is not possible to completely eliminate unchecked warnings.

One example of mine was the use of a parameterised type in a constructor expression class in JPA. I had a requirement to do fetches which involved returning an entity plus another column that could not be handled as a relationship/JPA entity property. The property in question was the count of children for a tree node entity. As I was using path enumeration to describe the tree, I could not use JPA relationships to handle it, and so had to return the child count separately. I had more than one type of tree, all handled polymorphically, and so wanted to re-use the constructor class by making it parameterised.

This all worked fine, but the class could not have its class literal passed when creating a TypedQuery to fetch the results back. The solution was to use some casting isolated in a specific method, and to suppress the generics warnings via @SuppressWarnings("unchecked"), as sketched below.
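A sketch of isolating the cast (names hypothetical, matching the sketch in the update above):

// Type erasure means there is no class literal for NodeWithChildCount<T>,
// so the raw literal is cast once here and the warning suppressed in
// this one method rather than at every call site.
@SuppressWarnings("unchecked")
private static <T> Class<NodeWithChildCount<T>> nodeWithChildCountClass() {
    return (Class<NodeWithChildCount<T>>) (Class<?>) NodeWithChildCount.class;
}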

The following posts detail this issue:-

http://stackoverflow.com/questions/2390662/java-how-do-i-get-a-class-literal-from-a-generic-type

http://www.angelikalanger.com/GenericsFAQ/FAQSections/ParameterizedTypes.html#Why is there no class literal for the concrete instantiation of a parameterized type?


January 27th, 2012
11:30 am
Controlling CDI conversation propagation

Posted under JSF

This is a tricky issue to control correctly. The following points were found during testing (Mojarra 2.0.4 FCS / Primefaces 2.2.1 / Glassfish 3.0.1):-

  1. One key issue is that to start a new conversation on a page, the navigation to that page must not propagate any previously active conversation. This will then start a new transient (i.e. not long-running/not started) conversation on the new page, and conversation.begin() can be used to make it long-running (i.e. start it).
  2. When doing a standard JSF Ajax postback, the current conversation is always propagated, and it does not appear to be possible to prevent this or start a new conversation from a postback. JSF always adds the current cid as a querystring parameter.
  3. To prevent propagation and to allow a new page to start a new conversation, GET navigation needs to be performed. This fits in well with my main toolbar options – I use pre-emptive GET navigation for the main pages with full page refreshes.
  4. Primefaces p:button allows the new JSF 2 outcome-style GET navigation, but cannot be stopped from propagating a conversation – it always adds a cid parameter. If you add one with an empty or null value, you still get a “cid=” parameter, which CDI then spits out as an invalid conversation on the new page.
  5. h:link can be stopped from propagating by using an explicit f:param for the cid, provided the param has no value or an explicit null value – an empty string will not work (see the sketch after this list).
  6. Trying to refer to another inactive conversation, e.g. by storing conversation scoped bean references in a session level map and looking them up, seriously screws up CDI with an extremely long/possibly recursive stack trace. (See Mantis issue xxx for the stack dump). This is likely to be due to the proxying mechanism used for conversations.
  7. Therefore, it is not possible to invent, say, an ‘onPageLeave’ event for a previous page and call it after a new page has started and had its conversation activated.
  8. If, therefore, I need to refresh all pages for the application, for example after an OptimisticLockException, I use a central workspace session bean to manage all the pages. I set a refresh-pending flag in a map for each page; when a page is next visited, the workspace bean calls a refresh event of my own for the page and then clears the pending flag. When processing the current page after an OptimisticLockException, I can refresh directly if the user requests it from a lock dialog.
  9. My workspace bean holds the CID for each page against the page name in a map, and uses this when rendering the Toolbar, adding the appropriate cid= parameter for each toolbar option. In fact I use an outer map keyed on page group name (where the page group corresponds to a top level toolbar option), with an inner map containing all the cids and refresh pending flags for the individual pages in that group. This allows for example a report toolbar option to be a menu of currently active reports, and allows new reports to be created dynamically as new conversations and added to the list under the report menu on the toolbar.
  10. Conversations are started and ended via calls to the workspace bean, which can then add/remove entries from the maps to keep track of what pages/cids are currently active.
  11. Regarding navigation and cid propagation, another option is to call JSF directly to obtain the navigation case for an outcome. ConfigurableNavigationHandler allows you to pass an outcome and get the actual navigation case, from which you can then get the URL. This gives full control of the cid= propagation parameter on the querystring, and allows control for components which do not support JSF 2.0-style pre-emptive GET navigation directly from an outcome. The URL can be passed to any component which accepts a URL, and should then still perform the correct navigation and conversation control. See Core JSF 3 page 333 for a code example, and the sketch after this list. This method could prove useful, for example, with a Primefaces MenuItem, which does either JSF postback-style navigation or URL navigation, but does not directly do pre-emptive GET navigation from an outcome.
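As a sketch of point 5, GET navigation that does not propagate the current conversation (outcome name hypothetical):

<h:link outcome="reports" value="Reports">
    <!-- An f:param for cid with no value (or an explicit null) suppresses
         propagation; an empty string will not work -->
    <f:param name="cid"/>
</h:link>

And as a sketch of point 11 (an assumption-laden fragment of my own, not taken from the book – the classes are javax.faces.application.ConfigurableNavigationHandler and NavigationCase, and it must run during a JSF request):

public String urlForOutcome(FacesContext context, String outcome) {
    ConfigurableNavigationHandler handler = (ConfigurableNavigationHandler)
            context.getApplication().getNavigationHandler();
    // Resolve the outcome to its navigation case, then to a view id
    NavigationCase navCase = handler.getNavigationCase(context, null, outcome);
    String viewId = navCase.getToViewId(context);
    // The parameter map gives explicit control over the query string,
    // including whether any cid parameter appears at all
    Map<String, List<String>> params = new HashMap<String, List<String>>();
    return context.getApplication().getViewHandler()
            .getBookmarkableURL(context, viewId, params, false);
}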


January 20th, 2012
4:08 pm
Using EL expressions to specify custom Validators and Converters

Posted under JSF

This post here discusses using an EL reference to specify a converter. This gets around the fact that you cannot inject CDI beans into a converter class.

You can also do this for a validator, for example using the validator attribute on h:inputText. This is useful if you need access to other instance variables in order to do the validation. In my case, I needed access to the backing list for a table in order to verify uniqueness of string names in the list (see the sketch at the end of this post).

This is mentioned in Core JSF 3 on page 294:-

In the preceding section, you saw how to implement a validation class. However, you can also add the validation method to an existing class and invoke it through a method expression, like this:
<h:inputText id="card" value="#{payment.card}"
    required="true" validator="#{payment.luhnCheck}"/>
The payment bean must then have a method with the exact same signature as the validate method of the Validator interface:

public class PaymentBean {
    public void luhnCheck(FacesContext context, UIComponent component, Object value) {
        … // same code as in the preceding example
    }
}

Why would you want to do this? There is one major advantage. The validation method can access other instance variables of the class. You saw an example in the section, “Supplying Attributes to Converters” on page 289.
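For the uniqueness check mentioned at the start of this post, the shape of the bean is sketched below (names hypothetical; the point is that the validation method can see the table’s backing list, which a standalone Validator class could not):

import java.io.Serializable;
import java.util.List;
import javax.enterprise.context.ConversationScoped;
import javax.faces.application.FacesMessage;
import javax.faces.component.UIComponent;
import javax.faces.context.FacesContext;
import javax.faces.validator.ValidatorException;
import javax.inject.Named;

@Named
@ConversationScoped
public class NameListBean implements Serializable {

    private List<String> existingNames; // backing list for the table

    // Referenced from the page as validator="#{nameListBean.validateUniqueName}"
    public void validateUniqueName(FacesContext context, UIComponent component, Object value) {
        for (String existing : existingNames) {
            if (existing.equalsIgnoreCase((String) value)) {
                throw new ValidatorException(new FacesMessage(
                        FacesMessage.SEVERITY_ERROR, "Name must be unique", null));
            }
        }
    }
}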


December 18th, 2011
7:58 pm
Eclipse – saving jar creation settings / automating jar export

Posted under Eclipse

Eclipse allows you to export Jars via the Export/Java/Jar File right click option from the package or project explorer on a project.

The export wizard has a number of options. I generally create one jar containing the class files and a second containing the sources.

It is obviously somewhat tedious and error-prone to select all the options each time, so fortunately, on the Jar Packaging Options page of the export wizard, you can select the option to save the description of this jar in the workspace.

This will then create a .jardesc file containing all the options. Later you can simply right click a previously saved .jardesc and select Create Jar to recreate the jar with the same options that you saved.


November 14th, 2011
3:29 pm
Using jQuery/ThemeRoller based themes with WordPress

Posted under Wordpress

It appears that there are widgets and themes which do this, and which allow dynamic theme switching of the jQuery themes. The attraction of this is that it would allow closer integration of the look and feel when using a JSF/Primefaces Java based web app called from an enclosing WordPress based site. One interesting possibility would be to tie the theme selection used for the WordPress site in with the theme selection for the Primefaces based web app.

This will need more investigation, but as a starter here are a few interesting links found from initial searches:-


November 8th, 2011
8:02 pm
iMedia Linux – configuration/application install issues

Posted under Linux

This post brings together a number of past issues which I have not posted previously, discovered while setting up and using iMedia embedded Linux on my diskless LinITX Jupiter test box.

A zip of all the customised scripts and various config files (such as the infamous xorg.conf) is in the repository.

Shell Script Usage

1/ The startup scripts for applications, drivers etc. are in /etc/rcS.d/. You just add scripts to that directory and they are automatically run – they are all run by /etc/rcS in alpha order. They are grouped by a prefix of M05, M10, M20, M30 etc. – this seems to be just an arbitrary grouping that determines the order for dependencies etc. Note that all of rcS.d/ is run at run level 3, as set in /etc/inittab. A sketch of such a script follows the list below.

The general order of startup is therefore (see /etc/inittab):-

  • /etc/rc.init runs, and calls rc.sysinit directly
  • /etc/rc.sysinit iterates init.d and runs everything in it. The order is partly determined by an initial grouping number – note the first to run is 00createvar which creates the /var tree on startup (see later)
  • /etc/rcS runs as above, iterating /etc/rcS.d/
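A minimal sketch of such a startup script (name and application hypothetical):

#!/bin/sh
# /etc/rcS.d/M30myapp - run automatically by /etc/rcS in alpha order,
# so the M05/M10/M20... prefix determines where it sits in the sequence
echo "Starting myapp"
/opt/myapp/bin/myapp &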

2/ /usr/local/bin/ has the extra shared shell scripts – rlog, rlog-leafpad, sx (start X), sxr (start X as root), plus start/stop scripts for Tomcat and Glassfish.

In Thunar, you can add file associations for scripts by right clicking on a file and selecting ‘open with other application’ – for each app you can add a custom launch command below. (I added rlog and rlog-leafpad.) If you add an incorrect one (I added beaver by mistake), Thunar may not let you remove it from the list – just edit ~/.local/share/applications/defaults.list and remove it from there. Note that /usr/local/bin/ is already on the path, so all the scripts in it are visible – a good place to put other global scripts.

Note that you can add scripts to e.g. a program launcher on the top panel bar – right click and create a program launcher, then add items to it. This is useful for scripts which run without parameters (unlike rlog and rlog-leafpad above, which act on files and are therefore on Thunar’s context menu for the files they operate on). When adding a script as an item to a program launcher, the ‘startup notification’ just gives a wait cursor while the script is running (useful). Running in an explicit terminal window is WEIRD – for example, for glassfish I set the middle icon on the RHS pane to “terminal”, and set the command to be “aterm -e start-glassfish”. I did not tick “Run in Terminal”, as this did not work!

Logging under iMedia Linux

1/ Logs are stored under the /var tree, which is recreated at each startup (see above), so log files are purged – we are running on a small flash-based distro. Therefore Tomcat and Glassfish, for example, are set to use sym links, so that their log directories point from their normal install directories to locations under /var/log/. There is some trickery to this – for Glassfish I make sure that the *normal* log folder is not present (renaming it with a date/time stamp if required), else the sym link is set up wrongly. A sketch follows the next paragraph.

2/ Some logs (but by no means all) in /var/log/ are circular buffers handled by emlog – see here for details. They show in Thunar as ‘character type devices’ and need copying to be visible – /usr/local/bin/rlog does this using nbcat, as per the emlog site. I also added /usr/local/bin/rlog-leafpad, which does the copy and fires up leafpad on the result. Note that, as the emlog site states, you cannot list such a log while it is active using cat, as a non-blocking read is required – hence the need for nbcat, as used by rlog and rlog-leafpad.
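A sketch of the log-directory trick from 1/ above (the Glassfish domain path is assumed):

#!/bin/sh
# Link Glassfish's log folder into the volatile /var tree
LOGDIR=/opt/glassfish-3.0.1-web/glassfish/domains/domain1/logs
mkdir -p /var/log/glassfish
# If a *real* log folder is present the link would be set up wrongly,
# so move it aside with a date/time stamp first
if [ -d "$LOGDIR" ] && [ ! -L "$LOGDIR" ]; then
    mv "$LOGDIR" "$LOGDIR.$(date +%Y%m%d%H%M%S)"
fi
[ -L "$LOGDIR" ] || ln -s /var/log/glassfish "$LOGDIR"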

Installing new Applications

1/ Installing new apps is a pain, as rpm installs don’t appear to be supported. I tried installing various zipped tarballs of VNC/RDP alternatives for remote desktop, but they all needed dependent libraries I did not have, so I had to give up. I think you can build from source to add things, but I did not go that far! The apps I did install successfully – Java 1.6.0_29, Tomcat 6.0.29 and Glassfish 3.0.1 – were all installed in folders under /opt, as this appeared to be the correct place. Note that for Tomcat, I placed the config files under /etc/apache-tomcat-6.0.20/conf/ and sym-linked them from the ‘expected’ place, /opt/apache-tomcat-6.0.20/conf/, as this appeared to be the correct protocol – other apps placed their config under /etc/ and linked to it from their install directory under /opt/. However, I did not go to the trouble of doing this for Glassfish – it would be possible (I did do it for the log directory) but the directory structure is more complex.

2/ I did have success with Java – I just ran the .bin install kit of the latest Linux version of Sun Java SE (jdk6u29), and it all unzipped properly and ran fine. Thankfully there were no library problems – probably because an earlier working jdk6 had already been installed by default. I added a shell script, /etc/profile.d/java-jdk1.6.0_29.sh, which set JAVA_HOME and added the bin directory to the path (sketched below), but note that this did not take effect for init-time shell scripts for either Tomcat or Glassfish, so their startup scripts had to add their own definitions.
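The profile script amounts to something like this (a sketch – the JDK install path is assumed):

# /etc/profile.d/java-jdk1.6.0_29.sh
JAVA_HOME=/opt/jdk1.6.0_29
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME PATH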

3/ I successfully installed Glassfish 3.0.1 (from the zip kit download). To get it to work I had to upgrade Java as above, as I got errors otherwise – the old jdk was below the minimum certified version. It worked fine after upgrading Java. Note that to stop it checking for updates on startup, I just renamed /opt/glassfish-3.0.1-web/glassfish/modules/console-updatecenter-plugin.jar by adding “.disabled” on the end. You can achieve the same thing using the updatetool, but this failed to install properly due to a python error/dependency. I also removed the default jsf-api.jar and jsf-impl.jar from the same folder, and added the Mojarra 2.0.4 FCS jars instead, as my apps needed this version. Note that this also speeds up the startup of the Glassfish admin tool. See my notes here about clearing the OSGi cache if you do this.

4/ For Tomcat and Glassfish I separated the log directory creation from the actual startup, using separate scripts. The log creation is always run, and so sits under /etc/rcS.d/; the auto startup is optional, and may sit under /etc/rcS.disabled/ as a placeholder if it is not autostarted. However, as the log directories under /var/ are always set up, you can then manually start the servers if desired and they will still work. This is done with the start-glassfish, stop-glassfish, start-tomcat and stop-tomcat scripts in /usr/local/bin/. As this directory is on the path, the scripts can be run directly from a terminal window or from a remote SSH session, but you need to be logged in as root to do so.

Graphical Remote Desktop

Re remote desktop: it is in theory possible to use a Windows-based X server and connect that way. To do this, you need to run an X display manager on the Linux box to handle the XDMCP protocol etc. iMedia Linux does have /usr/bin/xdm, which is an old display manager. I did manage to run it from the initial login prompt (before typing startx) just by typing xdm, and it popped up a login screen. However, it would not connect to xfce – it recognised a valid login, tried to do something, failed, and redisplayed itself; an invalid login gave a login failure. I also managed to run the launcher from XMING under Windows, and got as far as connecting to xdm. To do this I edited /etc/X11/xdm/Xaccess to enable the “*” entry to allow all comers. I also experimented with editing /etc/X11/xdm/xdm-config, setting DisplayManager*authorize to false, and tried commenting out the last line as follows :-

! SECURITY: do not listen for XDMCP or Chooser requests
! Comment out this line if you want to manage X terminals with xdm
!DisplayManager.requestPort:    0

I managed to get XMING to display the same xdm login screen that I got locally, which was a good start – I knew I had a working connection. However, I realised in the end that xfce does not appear to use a display manager – it simplifies things and bypasses it – and setting xdm up to link properly to xfce and work remotely looked like a lot of setting up with no instructions on how to do it properly.

I therefore sadly gave up on this and admitted defeat on installing any kind of graphical remote desktop!

Samba

I got the Samba server to share folders successfully – I added the shares in /etc/samba/smb.conf (sketched below). Security was set to user (security = user). I tried setting “valid users =” to a list of users, to allow access to be restricted, e.g. for the glassfish/tomcat log folders, but could not get this to work, so I left it out.
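A sketch of the kind of share definition added to /etc/samba/smb.conf (share name, comment and path hypothetical):

[global]
   security = user

[logs]
   comment = Application logs
   path = /var/log
   read only = no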

Note that I did need to use “smbpasswd -a username” to add an existing Linux user to the Samba user list and give it a password. You need to run this as root, or it merely allows you to change your own password. smbusers allows you to map external users (used by Windows clients) to Linux users – see here. Note that you don’t need to do this – you can just use the Linux names. Note also that when adding/setting passwords with smbpasswd, you *always* use the Linux name, not the Samba alias. Some other useful links for Samba follow:-

Ftp, Telnet and SSH

1/ ftp is enabled by default via vsftpd. The config file is /etc/vsftpd/vsftpd.conf. The default settings are pretty sensible in opening it up for use on a secure local LAN. ftp works for all the local Linux accounts, including root. The log is in /var/log/vsftpd.log.

2/ Telnet works but does not allow root access (presumably as it is too insecure). utelnetd is the daemon, but I could find no documentation for it on the net. SSH works fine via sshd and allows root access, but I could not find any config files or logging for it – although the daemon is documented on the net (the man pages are here), the config files mentioned do not appear to be present under iMedia Linux. No problem – it is set up fine for LAN use and works well from putty in SSH mode.


September 23rd, 2011
3:17 pm
Preventing Overflow with long strings in table cells

Posted under JSF

I hit this issue when displaying email addresses in a p:dataTable.

The problem is that for long strings without spaces, the browser tries to enlarge the table cell width, causing the table to stretch beyond its correct bounds and messing up the layout. A column will wrap and enlarge vertically, but only where it can break on a space. The widths set for the columns are technically just advisory to the browser, and may be overridden if it sees fit.

There are various solutions to this, and this post here in Stack Overflow discusses them.

My chosen solution in this case was the following (sketched after the list):-

  • Place the text in the cells in <span> elements, to allow them to be styled with CSS. You could also style the column <td> cells by applying style/styleClass attributes to the p:column tag, but this limits your flexibility – for example, if you do this, p:column does not expose the title attribute on the <td> (see below)
  • Style the <span> elements with display:inline-block;overflow:hidden. This allows a width to be set for the span (I used the same width as set for the column) and hides any overflow (which would otherwise bleed into the next column)
  • Text will still wrap on spaces where they are present, but long strings without spaces, such as email addresses or file paths, no longer cause a problem – they are truncated and so will not be fully visible by default
  • I include a title attribute on the span containing the full text. This allows the full text to be displayed on hover for any long strings that are truncated in the cell
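A sketch of the markup and CSS (class name, column and width hypothetical):

<p:column headerText="Email">
    <span class="clipCell" title="#{row.email}">#{row.email}</span>
</p:column>

.clipCell {
    display: inline-block;
    width: 180px;      /* same width as set for the column */
    overflow: hidden;  /* truncate rather than bleed into the next column */
}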

This neatly solves the problem, and prevents the table layout messing up.


September 23rd, 2011
1:21 pm
JPA – Using a mapped superclass to hold Ids and Version Numbers for all Entities

Posted under JPA

All my entities have a numeric ID as a primary key, except in rare cases. In particular, I never use business data as part of the primary key, nor do I ever use a primary key for any business purpose (such as an invoice number). I consider that primary and foreign keys are only used to define relationships, i.e. they are metadata as far as the domain model is concerned.

The problem with using business data in primary keys is that if the business data format for a primary key changes in any way, all the foreign keys referring to that primary key column must also be changed. This can result in massive changes throughout the data model for what started as a simple business change. Taking the above invoice number as an example, the business may decide to add an alphanumeric branch prefix to the invoice number.

If instead all primary keys are kept as separate numeric IDs, changes to the format of business data should typically only require changes to one table.

With that rant out of the way, the main point of this post is that, given the above, plus the need to hold version numbers on all entities to support optimistic locking, it is desirable to hold all the primary keys and version numbers in a mapped abstract superclass. This has the following benefits:-

  • the entities are simplified and the key/version number implementation is encapsulated and centralised, which is a good design principle
  • it provides a standard mechanism to get the primary key/version of any entity polymorphically via the mapped superclass. This is especially useful when you need to do comparisons between entities and you do not want to override equals on the entities (see the sketch at the end of this post). It is also useful in other polymorphic situations – in my case I also have a polymorphic tree implementation which makes use of this

Note the following points about the implementation:-

  • As I have discussed here, I always annotate getters rather than fields, to permit inheritance. In this case, I provide fields plus polymorphic getters/setters in the mapped superclass, getEntityId and setEntityId, which are marked transient and are not annotated for mapping. They are there purely to allow polymorphic access to the primary key.
  • Each entity defines its own getters/setters with a mnemonic name for the primary key; these call the base polymorphic ones. The reason for this is that, as well as improving the key naming, it allows each entity to annotate its primary key differently, for example to use a specific generator for sequences etc.
  • As the primary keys are annotated in the subclasses, the base superclass IDs can be generic. In my case, this caters for both Long and Integer primary keys using one generic base superclass. Note that it would not be possible to annotate the ID in the superclass, as it is a parameterised type – JPA spits this out and will not allow it.
  • Due to the above generic issue, the type for the version number must be fixed, as this is annotated in the superclass. In my case I settled for an int for version numbers.
  • Note that as the primary key is generic, it must be a reference type rather than a primitive type (e.g. Long rather than long). In my case, primary keys are normally Long.
  • In some cases, I have other mapped superclasses, such as Tree and TreeNode classes. In this case, Tree and TreeNode extend the EntityBase class, and Tree and TreeNode subclasses then just extend Tree and TreeNode respectively.

The following code illustrates a simple example of the superclass and an entity subclass:-


EntityBase

/**
* Entity implementation class for Entity: EntityBase
* This mapped superclass defines the primary key and locking version number for all entities.
* The getters/setters are marked transient so they are not mapped here. This allows polymorphic access to primary key (and version),
* but still allows the subclass to define its own (mapped) getter/setter names for the primary key, and to annotate for specific generators etc.
* This allows meaningful names for primary keys, normally <tableName>Id
*/
@MappedSuperclass
@Access(AccessType.PROPERTY)
public abstract class EntityBase<C> implements Serializable {
    private static final long serialVersionUID = 8075127067609241095L;

    private C entityId;
    private int version;
   
    @Transient public C getEntityId() {
        return entityId;
    }
    @Transient public void setEntityId(C entityId) {
        this.entityId = entityId;
    }
    /*
     * The version cannot be generic as it is annotated here (unlike the primary key)
     * If you do, Eclipselink throws a tantrum. In practice, an int is plenty –
     * it would allow continuous version changes once a second for over 68 years for example.
     */
    @Version
    public int getVersion() {
        return version;
    }
    public void setVersion(int version) {
        this.version = version;
    }
}


Address

@Entity
@Access(AccessType.PROPERTY)
public class Address extends EntityBase<Long> implements Serializable {
    private static final long serialVersionUID = 3206799789679218177L;
   
    private int latitude;
    private int longitude;   
    private String streetAddress1;
    private String streetAddress2;
    private String locality;
    private String postTown;
    private String county;
    private String postCode;
       
    public Address(){}

    @Id
    @GeneratedValue(generator="AddressId")   
    public Long getAddressId() {return super.getEntityId();}
    public void setAddressId(Long addressId) {super.setEntityId(addressId);}
   
    public int getLatitude() {return latitude;}
    public void setLatitude(int latitude) {this.latitude = latitude;}
    public int getLongitude() {return longitude;}
    public void setLongitude(int longitude) {this.longitude = longitude;}
   
    public String getStreetAddress1() {return streetAddress1;}
    public void setStreetAddress1(String streetAddress1) {this.streetAddress1 = streetAddress1;}
    public String getStreetAddress2() {return streetAddress2;}
    public void setStreetAddress2(String streetAddress2) {this.streetAddress2 = streetAddress2;}
    public String getLocality() {return locality;}
    public void setLocality(String locality) {this.locality = locality;}
    public String getPostTown() {return postTown;}
    public void setPostTown(String postTown) {this.postTown = postTown;}
    public String getCounty() {return county;}
    public void setCounty(String county) {this.county = county;}
    public String getPostCode() {return postCode;}
    public void setPostCode(String postCode) {this.postCode = postCode;}   
}
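As a usage sketch of the polymorphic accessors, entities can be compared by identity without overriding equals (assuming both have been persisted and so have non-null ids):

// Compare any two entities by concrete type and primary key
public static boolean isSameEntity(EntityBase<?> a, EntityBase<?> b) {
    return a.getClass() == b.getClass()
            && a.getEntityId() != null
            && a.getEntityId().equals(b.getEntityId());
}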
