Blog Archives

February 23rd, 2017
1:03 pm
Angular2 Tour of Heroes Tutorial – Issues and Learning Points

Posted under Angular

Whilst adding the changes for two-way data binding by adding the new file src/app/app.module.ts as detailed here, VS Code reported the following errors:-

[ts] Cannot find module '@angular/core'. (1,24)
[ts] Cannot find module '@angular/platform-browser'. (2,31)
[ts] Cannot find module '@angular/forms'. (3,31)

The compilation was not affected and the app worked.

The notes state that you should add it to the NgModule decorator's imports array; however, this decorator and indeed the above file were not even present in the project.
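For reference, a minimal sketch of what src/app/app.module.ts should contain at this point in the tutorial – the three imports correspond to the three unresolved modules above, and FormsModule is what enables the two-way [(ngModel)] binding:

// src/app/app.module.ts - minimal sketch as per this stage of the tutorial
import { NgModule }      from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { FormsModule }   from '@angular/forms'; // needed for [(ngModel)]

import { AppComponent } from './app.component';

@NgModule({
  imports:      [ BrowserModule, FormsModule ],
  declarations: [ AppComponent ],
  bootstrap:    [ AppComponent ]
})
export class AppModule { }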

Having looked into it, it appears that the available Git repos do not match the tutorial notes! Annoying! There might be a VS Code issue, but it looks more likely to be a repo issue.

The standard repo is here. This appears to already have the changes listed above for app.module.ts, so it is not suitable for starting the tutorial.

The version I used from Rob Lourens, which contains the launch.json configurations for VS Code, is here. This is different again, and still does not match the tutorial start!


February 22nd, 2017
10:28 pm
Angular2 Tour of Heroes Tutorial – IDE/Debugging Setup

Posted under Angular

The aim of this was to get the tutorial up and running in a suitable IDE, such that I could debug and step through the code whilst understanding it. The tutorial site is here. The standard repository on Git is here.

A number of different IDE options were tried, and most of them hit problems and were therefore discounted. The options tried were as follows:-

Eclipse with the Angular 2 Eclipse plugin

The following links document how to set this up:-

http://fcorti.com/2016/08/22/angular-2-eclipse-how-to/

http://stackoverflow.com/questions/35890887/how-to-get-angular2-to-work-in-eclipse-with-typescript

When loading the Tour of Heroes tutorial, I tried to import it as a file system project, as the cloned Git repo was not an Eclipse project.

The problems I hit with this were as follows:-

  1. After importing into Eclipse, there were compile errors on import and export statements in rollup.js.
  2. The core problem appeared to be Eclipse bug 496348, which was listed as resolved but, judging by the comments at the end, was still a problem.
  3. I tried various ways around this, such as upgrading to a dev release of Eclipse Neon, trying a prerelease of Eclipse Oxygen, and upgrading the Eclipse plugins. None of these solved the problem.
  4. As this is not a very commonly used IDE, there was not much in the way of online help. In particular, no-one appeared to have posted online about getting the Tour of Heroes demo working with it, so support going forward would likely be a problem.

Webclipse

  1. This is a paid-for Eclipse plugin, also available as a standalone IDE called Angular IDE, detailed here.
  2. I tried this with both Eclipse Neon and Oxygen, and received similar TypeScript-related errors that I could not eliminate when trying to load the Tour of Heroes demo. I therefore dumped this too.
  3. Whilst I did not try IntelliJ/WebStorm at this stage, which was the JetBrains paid-for option, one Stack Overflow comment (which I now cannot find) cited that there were still TypeScript issues with it, even in IntelliJ. (With hindsight, I wish I had, as in the end I have standardised on it: it works both for this tutorial, which uses SystemJS, and for applications generated via angular-cli, which use webpack. Details on this stage of the journey here and here.)

Visual Studio Code

  1. This is the IDE I settled on at this stage. Whilst it meant learning another IDE, it had a number of overriding benefits.
  2. Basically it just works, with no problems and no wacky TypeScript errors! The basic point here is that Microsoft originated TypeScript, and they are also the authors of VS Code, so unsurprisingly it has the best compatibility.
  3. As it is probably the most commonly used IDE for Angular, unlike the other IDE solutions there are many examples online for it, including for the Tour of Heroes tutorial, which needs a custom launch.json file for running the demo.


The setup steps for it were as follows:-

  1. Follow the initial steps, including installation of the latest Node.js and npm, from the Angular tutorial site here.
  2. Rob Lourens has forked the Tour of Heroes Git repo here, with a custom launch.json file for launching in VS Code. Clone that repo and follow his instructions.
  3. Note that this uses the VS Code Debugger for Chrome extension, in order to allow the Angular code running in the browser to be debugged inside VS Code instead of in the browser.
  4. In order to get this working, I had to start the app from the command line first, via npm i (to install the dependencies) then npm start.
  5. Also, initially I could not get VS Code to connect to Chrome, either by running up a new Chrome via Rob's launch.json configuration "Launch localhost with sourcemaps", or by connecting to an existing Chrome using "Attach with sourcemaps". The issue was that Chrome needed starting with a command line option to allow debug connections via its debug port. This is detailed here (comment by auchenberg) – the command line is "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --remote-debugging-port=9222
  6. Once the above was done, I could connect from VS Code provided I attached to the existing Chrome using "Attach with sourcemaps" – otherwise, a new Chrome was started which did not allow debug connections!
  7. The steps needed were therefore 1/ run (having built) using npm start, 2/ run up a debug Chrome, then 3/ click on the debug icon in VS Code, select the "Attach with sourcemaps" launch.json configuration from the dropdown, then hit the green go button just to the left of it. (A sketch of what such an attach configuration looks like follows after this list.) This successfully fired up the demo.
  8. Note that I was able to set breakpoints in VS Code either in the TypeScript (.ts) files or the equivalent transpiled .js files. It appears that the compiled code is interspersed in the same folders as the TypeScript, which does not seem cool at all – something to look into. I would expect the build artifacts to be kept separate, as in Java.
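For illustration, an "Attach with sourcemaps" entry in launch.json looks roughly like the following – a sketch assuming the Debugger for Chrome extension and the tutorial's default dev server port of 3000; Rob's repo has the authoritative version:

{
    "name": "Attach with sourcemaps",
    "type": "chrome",
    "request": "attach",
    "port": 9222,
    "url": "http://localhost:3000/*",
    "sourceMaps": true,
    "webRoot": "${workspaceRoot}"
}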


May 15th, 2012
7:22 pm
Oracle XE 11g R2 installation issue on Windows 7 64 bit

Posted under Oracle

During installation, I received the following error message:-

“The installer is unable to instantiate the file …{…}\KEY_XE.REG”

The installation was mostly correct but I was unable to access the getting started/home page.

This post here details a workaround for the issue, but it did not work for me.

In the end, I recreated the Get_Started.url file under (in my case) E:\oraclexe\app\oracle\product\11.2.0\server.

This contained a parameter placeholder for the HTTP port, and I just edited the correct port directly into the file. Note that for some reason it was not possible to change the existing shortcut; I had to delete it and create a new one of the same name, which then worked fine.

Whilst doing this, I also changed the HTTP port that the web interface listens on. This is done by running a stored procedure, and is detailed in both this post and also this alternative post; both detail the same method.
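The call in question is along these lines – a sketch run from SQL*Plus as SYSTEM, assuming the port is being moved to 8081:

-- Show the current HTTP port, then change it.
SELECT DBMS_XDB.GETHTTPPORT FROM DUAL;
EXEC DBMS_XDB.SETHTTPPORT(8081);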

Note also for XE 11g R2 that the getting started page (which corresponds to the old 10g home page) no longer has the means to create schemas or to run SQL queries. Schemas/users can be created via SQL*Plus, or via TOAD or SQL Developer.
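For example, from SQL*Plus (the user name, password and roles here are placeholders):

-- Create a schema owner and grant it the basic development roles.
CREATE USER demo_user IDENTIFIED BY demo_password;
GRANT CONNECT, RESOURCE TO demo_user;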

Note that TOAD 8 is not compatible with 11g. The latest free TOAD now lasts for a year and is excellent, with plenty of functionality, albeit with an occasional nag/advert for the paid version. The paid version is around $800 per seat, which is very high for a SQL development tool, even if it is a good one. Note that the free TOAD is available from Toad World here, but not from the main Quest Software site – this appears to be a marketing decision to keep the free version out of sight of people who might be persuaded to part with money for the paid one. I should not complain too much, as they are after all providing an excellent free tool.


May 15th, 2012
6:58 pm
Automatically disabling Sleep during Acronis True Image Home Backups

Posted under Windows 7

Previously this had been an issue as the PC would sleep during the validation of a Backup, so I manually switched to an Always On power plan.

With the current PC build I was using, the PC would hang and not shut down cleanly if it was put into sleep and woken whilst an eSATA drive was attached to the 2-port StarTech card I was using. Provided the PC did not sleep, all was fine.

I therefore looked for a means of automatically switching to Always On for backup runs. Rather than switch back to the normal power plan automatically after the backup, I elected to leave Always On set and switch back to normal on every startup. This way, I would be sure that if I still had the eSATA drive connected after a backup, the PC would not sleep.

This Acronis forum post, which refers to this instructional PDF, details a workaround to the problem, which involves coding a batch file to switch power plans and calling it as a pre/post custom command in the Acronis backup settings.

In addition, as I wanted to set the plan back to normal at boot time, I used the technique from this Stack Overflow post, which worked correctly. Originally I had tried a scheduled task triggered at boot time, but it refused to run and did not give a reason why.

My registry settings file, batch files, and VBS script are listed below for reference. Note that the GUIDs correspond to the particular power plans; they are unique for each plan and different on each PC – see the above PDF for details of how to list the power plans with their corresponding GUIDs.
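(For reference, the listing itself is a single command; the currently active plan is marked with an asterisk.)

:: List all power plans with their GUIDs.
POWERCFG -LIST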

This fixed the problem, but I remain unimpressed that Acronis does not handle this automatically or have a setting for it in a backup definition. You can tell it to prevent sleep for scheduled backup runs, but not for manually initiated ones.


SetPowerPlanNormal.reg

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run]
"SetPowerPlanNormal"="wscript.exe \"C:\\System Management\\RunInvisibleBatchFile.vbs\" \"C:\\System Management\\SetPowerPlanNormal.bat\""

RunInvisibleBatchFile.vbs

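' Run the batch file named in the first argument in a hidden window (style 0), without waiting.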
CreateObject("Wscript.Shell").Run """" & WScript.Arguments(0) & """", 0, False

SetPowerPlanNormal.bat

:: Change Power Plan to Normal.
::
@ECHO OFF
POWERCFG -SETACTIVE ccaec46d-cbf8-42af-9e8f-ab66182942f7
::
@EXIT

SetPowerPlanAlwaysOn.bat

:: Change Power Plan to AlwaysOn.
::
@ECHO OFF
POWERCFG -SETACTIVE 499ab33e-0735-4605-8ccc-98211478164b
::
@EXIT


May 5th, 2012
1:30 pm
Sage Instant Accounts V11 install hangs on Windows 7 64bit

Posted under Sage Accounts

Update 15/5/2012

Further to the issues below, whilst I could run Sage satisfactorily for the most part, some features still needed ODBC to be working, such as Invoice preview and VAT return printing.

After a couple of attempts I was able to set up ODBC after the event by running E:\Salient Soft\Accounts\Sage\Instant Accounts\ODBC32\Disk1\Setup.exe. This allowed ODBC to be set up/installed even after Sage had been installed.

Sage was happy even though there was no mention of the ODBC driver or connection for Sage under Administrative Tools/Data Sources (ODBC). The real underlying issue was not discovered, but the problem was finally resolved.

I was then able to successfully print a VAT return and preview invoices, which previously had failed.


Original Post

This was a reinstall using a fresh Windows install, but the same location for Sage on a data disk. The following issues were hit:-

  1. On install, it hung at the point where it said “Setup is configuring the ODBC”.
  2. I tried installing from a shortcut to the CD, with administrator mode set and Windows XP compatibility of various flavours, but the same error persisted.
  3. I then tried removing the “ODBC files” option from the (custom) install. This changed the problem – the install now hung at the point where it said “configuring MIS files”.
  4. I then did another custom install removing everything that I did not need, including ODBC, MIS support, E-Banking, web/email templates etc.
  5. This time, the install finished successfully. Furthermore, Sage ran fine even though I had not told it to “configure the ODBC”.

Note that it is possible that the previous duff installs did configure ODBC, and that the final install depended on the previous ones to work. This should be borne in mind if trying again. However, the installed files for Sage were all there anyway, as this was a reinstall – I was only doing it to install the Windows/registry settings/shortcuts etc. that Sage needed.

I did not try a new install to a new directory as I did not need to – this may have worked, but I cannot say. If so, it may be possible to import or copy the old company data into a new install. A quick look at the menus and a quick Google did not reveal any obvious way to do this, but a judicious copy may work.

Note also that this issue did not occur the first time I installed on Windows 7 64-bit. That first time was also an install over an existing Sage installation copied from Windows XP.


February 13th, 2012
2:42 pm
CDI/Weld fails to inject a reference with nested parameterised types

Posted under CDI

Weld 1.0 was failing with “WELD-001408 Injection point has unsatisfied dependencies” when nested parameterised types were present.

The referenced bean was correctly present and annotated.

I found that I could work around the problem by simplifying/removing some of the generics. This allowed the beans to inject, but also gave warnings about the use of raw types. For example:-

//Original code
    private @Inject TableCtrl<TreeNodePath<TaxonomyNode>, RowMetadata> selectionTable;

//was replaced with the following
    private @Inject TableCtrl<TreeNodePath, RowMetadata> selectionTable;

Attempts to use a CDI extension to find out more about what was happening did not reveal any more insight. This post discusses Weld performance and debugging and refers to a Stack Overflow post on the subject.

My solution to the problem (which I have also logged on Mantis) is as follows:-

  1. Injecting the bean into a raw type does not suffer from the problem.
  2. I therefore inject into a temporary raw type, and cast that to the correct type.
  3. Wrapping this in a method annotated with @Inject neatly solves the problem, as the method can take the raw types as arguments, and cast to the correctly parameterised fields in the method.
  4. As all this is done in a method, warnings can be suppressed for the method. This is a tidy solution, as the method only has this specific purpose, and no other warnings in the class are incorrectly suppressed.
  5. This is far better than the original workaround, which involved hacking the generics back to non-nested parameterised types throughout – this meant hacking a number of classes. The current solution entirely isolates the issue and allows the desired generics to be correctly used everywhere.

An example of the correct generic declarations and the method used follows:-

public abstract class TreeBrowser<N extends TreeNode<T>, T extends Tree<N>, P extends TreeNodePath<N>>
                            implements Serializable, BreadcrumbCtrlEvent<N> {

    private CrumbComparator<N> crumbComparator;
    private BreadcrumbCtrl<N> breadcrumb;   
    private TableCtrl<N, RowMetadata> treeNodeTable;
    private TableCtrl<P, RowMetadataStatusMap> trayTable;

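    // The raw-typed parameters below are what Weld can resolve; the assignments
    // are then the unchecked casts back to the parameterised fields, with the
    // warnings suppressed for this one method only.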
    @Inject
    @SuppressWarnings({"rawtypes", "unchecked"})
    void injectWeldUnsatisfiedGenericBeans(CrumbComparator crumbComparator, BreadcrumbCtrl breadcrumb,
                           TableCtrl treeNodeTable, TableCtrl trayTable) {
        this.crumbComparator = crumbComparator;
        this.breadcrumb = breadcrumb;
        this.treeNodeTable = treeNodeTable;
        this.trayTable = trayTable;
    }


This solved the problem.


February 3rd, 2012
1:11 pm
OCZ Vertex / ASUS P6T Flash Upgrade process & Issues

Posted under 64 Bit

This was done in order to resolve ongoing reliability issues with the OCZ Vertex and SATA port errors from the P6T as detailed here.

This post also follows on from my earlier post here concerning hot swap issues, where I went into some detail of the P6T flashing process. Reference should be made to that post for more details of P6T flashing.

The following steps were performed:-

  1. I identified the current firmware version of my OCZ Vertex. To do this, start Device Manager, find the disk under Disk drives, and open its properties. Then select the Hardware Ids property, and the version number of the firmware will be shown. There were hints in the forums of a more detailed revision number, but I could not find it. My initial version was 1.5.
  2. I also tried to run the OCZ Toolbox, which allows identification and upgrade of the firmware, and also auto-downloads the correct new version. A nice idea, if it had worked! In my case it failed to find the OCZ Vertex either before or after the upgrade process.
  3. I did note that a number of posts suggested switching the SATA mode to AHCI in both the BIOS and Windows. This OCZ post details the process, which (for the Windows part) simply involves changing the value of HKLM\System\CurrentControlSet\Services\msahci\Start in the registry from 3 (= IDE mode) to 0 (= AHCI mode) – a .reg sketch of this follows after this list. However, when I tried this at this stage, the system froze at boot time. Fortunately, I was able to switch the setting back in the BIOS, boot into safe mode, and set the registry value back to 3 to restore the system to a working state.
  4. The latest version for my Vertex was 1.7, but it is not possible to upgrade directly from 1.5 to 1.7 – it is necessary to upgrade to 1.6 first. Finding all the historical versions of the firmware is not easy on the OCZ site, as the main download area only has the most recent version. Older versions may be found in a forum post here which has all the historical downloads.
  5. For my Vertex, here is the 1.6 upgrade and here is the 1.7 upgrade.
  6. Note that as per the 1.6 upgrade, I had the ‘filename error’, which meant that I had an older version needing a destructive upgrade to 1.6 due to a change in the NAND/wear levelling algorithm. This would mean loss of all data on the drive and a restore afterwards, and the use of a different kit for upgrading. The upgrade also could not be done with the drive in use as the system disk. The links in the previous step give all the instructions and the alternative versions of the upgrade.
  7. I then made 2 full backups with Acronis, plus another daily file backup, to be sure I would be able to restore the drive.
  8. As per the instructions, I jumpered the drive to set it into standalone/upgrade mode, and booted from the previous version of Windows 7 which I still had available on a standard Hard drive. I kept this available just in case, and on this occasion I was very glad I did.
  9. The destructive upgrade is available as 16_win_vertex.zip. Unfortunately the zip contains 2 versions of the update, 641102VTX.exe and 661102VTXR.exe. There are no release notes to say which version to use or what they are(!) I wondered if they related to the internal firmware/drive version from which you were upgrading, but I did not know that either, as I said earlier. Another forum post hinted that, as might be expected, 661102VTXR.exe was a later version and the ‘R’ indicated revised. I decided to try this one.
  10. Having jumpered and rebooted, the drive correctly identified itself as YATAPDONG BAREFOOT, and I proceeded with the upgrade, which went successfully.
  11. I then immediately upgraded from 1.6 to 1.7 using 17_updaters_1.zip. This time, a standard non-destructive upgrade could be done (not that it mattered in my case), which meant burning an ISO to a CD and booting that to do the upgrade. The zip contained ISOs for a number of OCZ products, plus some release notes this time, so I burnt the ISO for the Vertex. I booted it and did the upgrade, which went with no problems. One point is that I cannot recall whether I removed the jumper on the drive before or after this final upgrade. I am fairly sure that it was after, in which case the upgrade does not mind whether the jumper is present; but if there are any issues, it should be tried without the jumper present, as would be normal for a non-destructive upgrade.
  12. Following this, I booted the system. Initially it could not see the Vertex, so I rebooted into the BIOS, found it was then visible, and did a BIOS save and another reboot. This time the drive was visible but not formatted.
  13. I then booted Acronis from its rescue CD and restored the backup, plus the Master Boot Record. This proceeded normally.
  14. After rebooting, the system came up and ran fine; however, it presented the boot manager from the other disk, and the system had to be selected from there. To prevent the need for this, I copied the boot files back to the new SSD using bcdboot, as per this post here. It was also necessary to reset the boot order in the BIOS so that the SSD was chosen first, otherwise the boot manager is still presented even after you have run bcdboot.
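As mentioned in step 3, the Windows half of the AHCI switch is a single registry value; a sketch in .reg form follows (set the value back to dword:00000003 to revert to IDE mode):

Windows Registry Editor Version 5.00

; Start: 0 = AHCI mode, 3 = IDE mode
[HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\msahci]
"Start"=dword:00000000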


Now I had a working system with the updated SSD, I decided to reflash the P6T as well. Previously I had had hot swap and SATA issues with later BIOS versions, as detailed here. However, since then I had disabled the onboard JMicron controller which was giving problems, and replaced it with a StarTech controller, as detailed in the update to this post here. Therefore, the original issues which were preventing the use of a later BIOS were no longer present, so I opted to flash to the latest version, which at the time of upgrade was version 1408:-

  1. I used the in-BIOS EZ Flash, and the process and precautions are detailed in the original section of this post here.
  2. I re-applied the required motherboard settings as per the process, and rebooted successfully.
  3. I then retried switching into AHCI mode, as detailed above when I tried it prior to the upgrade. This time it worked correctly with no problems. In the BIOS, I just set the onboard SATA to AHCI. The JMicron controller was disabled, and the StarTech controller was not touched, as this was an add-in card just used for SATA backups. It is possible that either the onboard ICH10 SATA or the JMicron SATA might now work and hot swap reliably in eSATA mode with AHCI enabled, but as I was happy enough with the StarTech I decided to leave this issue well alone for now.
  4. As a final test, I re-ran the Windows Experience tests to see if AHCI had improved performance, and indeed the primary hard disk figure had increased from 7.1 to 7.3, which is good considering that my original OCZ Vertex is now an old technology which has been set to End-Of-Life by OCZ. The system certainly felt snappier, but this is highly subjective as I had not run before-and-after benchmarks – the goal of the whole exercise was reliability rather than speed.


January 27th, 2012
6:09 pm
h:link navigation fails when Javascript confirm function called in click event

Posted under JSF

I use the click event to validate a navigation, returning true to allow it and false to cancel it.
On some occasions, the toolbar buttons (which are based internally on h:link, with pre-emptive GET-style navigation based on an outcome) do not navigate even when the click event correctly returns true.

Some toolbar buttons always did this, others always worked. No explanation could be found for the failure to navigate, even after extensive testing and online searching for answers.

The solution to this was to always return false from the click event, but to have the click event explicitly do the navigation itself via JavaScript. The following code fragment illustrates the solution:-

<h:link id="link" disabled="#{disabled}" outcome="#{outcome}" href="#{href}" title="#{title}" tabindex="#{tabIndex}"
      onclick="if (#{empty onNavigate ? 'true' : el:format1(onNavigate, el:concat3('\'', component.clientId, '\''))})
      window.location.href=this.href; return false;">

The if test checks whether the navigation is required; if so, window.location.href is set to the target href in the component. False is always returned, so that standard navigation is disabled.

This solved the problem consistently. (This has also been logged as Mantis issue 104).


January 27th, 2012
10:16 am
Method call in rendered attribute is called again after navigating to new form

Posted under JSF

Initially this looked very strange. When navigating to a new form, the rendered= methods for the old form were being called again, and were failing due to scoping issues – a new request scope bean was being created, causing a null pointer exception in the method, as required state had been lost.

This post and also this post on Stack Overflow explain the problem. Apparently JSF re-checks the rendered conditions after the submit as part of a safeguard against tampered requests. This is what caused the creation of a new bean and the failure.
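To make the behaviour concrete: with markup along these lines (names hypothetical), the isNodeSelected() getter is evaluated again on the submit that navigates away, so it cannot assume its request scope state still exists:

<!-- The rendered= expression is re-evaluated on postback as part of the
     attack safeguard, even when the submit navigates to another form. -->
<h:panelGroup rendered="#{treeBrowser.nodeSelected}">
    ... old form content ...
</h:panelGroup>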

This behaviour was certainly unexpected, and I have not seen it documented ‘up front’ previously – so it is certainly one to be aware of! This is not the first time I have had issues with the rendered= attribute.

In my case, resolving issues in creation of conversations solved the problem – once I had conversations propagating properly, everything worked correctly and the state was present for the extra calls to the rendered= methods.


July 27th, 2011
7:34 am
Primefaces p:panel – main panel bar/title bar does not render at all

Posted under JSF

This issue was encountered under Primefaces 2.2.1 – other versions have not been tested for this. In certain cases it is possible for the panel component to not render the title bar at all.

When this happens, the content is rendered as if the panel were expanded, but the panel bar itself does not render; crucially, there is therefore no access to the expand/collapse icon, as it is not rendered. Other components can still expand/collapse the panel via its widget, however.

This is caused when neither a header attribute nor a header facet is present. If one or other (or both) are present, the panel title bar does render. Note that the behaviour occurs even if toggleable is set to true.

This behaviour is undocumented and is rather a gotcha when you hit it, as it seems indicative of something more serious.

The solution is straightforward however – you must provide either the header attribute or a header facet in order for the panel component itself to render correctly.
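For illustration (hypothetical markup), either of the following is sufficient for the title bar to render:

<!-- Either the header attribute... -->
<p:panel header="Details" toggleable="true">
    ...
</p:panel>

<!-- ...or a header facet. -->
<p:panel toggleable="true">
    <f:facet name="header">Details</f:facet>
    ...
</p:panel>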
