Wednesday, October 10, 2012

Internet Explorer Is A Very Naughty Boy

It has been three months since I last felt the urge to post anything on my blog. There isn't any particular reason for the hiatus, with the possible exception that the sun was shining (sometimes) and I was out and about rather than being tied to my desk.

But the days are shorter than the nights now. There is definitely a nip in the air. Being "out and about" isn't quite as enjoyable as it was just a few weeks ago.

And so here I am... ready and willing to commit some more of my findings to this blog.

So what shall I tell you? Well, what about the fact that Internet Explorer is a very naughty boy! Hardly a startling revelation. Those of us working in the web world already know how often we come across situations where Firefox, Chrome, Safari and Opera can all render a page correctly, but Internet Explorer fails to do so! It gets rather tedious after a while, right?

This week, I had the joys of diagnosing why a page protected by WebSEAL wouldn't render in Internet Explorer. Capturing the HTTP Headers whizzing back and forth in Internet Explorer and Firefox provided the answer quite quickly: Internet Explorer would sometimes not bother to send the session cookie back to WebSEAL.

Why would it "sometimes" just not bother to do this? Well, there is some well documented evidence that Internet Explorer (up to version 8) treats cookies in a rather unexpected fashion. Internet Explorer can start dropping in-memory cookies as it has a finite limit on the number of in-memory cookies it can handle!

Those clever people in the development labs of IBM, however, have come across this before and the problem can be alleviated by setting the resend-webseal-cookies parameter to yes in the WebSEAL configuration file. This ensures that the session cookie gets re-sent with every response!
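For reference, the parameter lives in the WebSEAL configuration file (webseald-<instance>.conf). As a sketch - do check the stanza name against the documentation for your own version - the setting looks something like this:

```
[session]
# Re-send the session cookie with every response, to counter
# browsers (such as IE <= 8) that silently drop in-memory cookies
resend-webseal-cookies = yes
```

A restart of the WebSEAL instance is needed for the change to take effect.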

For many of you, you will have come across this quirk before. Many times, potentially. For those just getting started with your WebSEAL deployment, though, make sure you have the ability to take a grab of the HTTP Headers from within your browser. It's amazing what you can see inside them!


I promise to blog more... now that winter is almost upon us!

Friday, June 29, 2012

TDI and MQTT to RSMB

That's far too many acronyms, really. What do they mean? Well, readers of this blog will understand that TDI has got nothing to do with diesel engines but is, in fact, Tivoli Directory Integrator.



MQTT? MQ Telemetry Transport - "a machine-to-machine (M2M)/"Internet of Things" connectivity protocol".

RSMB? Really Small Message Broker - "a very small messaging server that uses the lightweight MQTT publish/subscribe protocol to distribute messages between applications".

So what do I want to do with this stuff? Well, you will now know that I got myself a Raspberry Pi and I was scratching around thinking of things I'd like my Pi to do. I came across an excellent video showing how Andy Stanford-Clark is utilising his Pi to monitor and control devices around his home - it is definitely worth a look.

I have no intention (yet) of trying to copy Andy's achievements as I'm quite sure I don't have the spare hours in the day! However, I was intrigued to see if I could use my favourite tool (TDI) to ping messages to RSMB using MQTT.

Step 1 - Download RSMB
https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?source=AW-0U9

Step 2 - Startup RSMB



Step 3 - Fire up a Subscriber listening to the TDI topic
 


Step 4 - Write an Assembly Line to use the MQTT Publisher Connector
 


Step 5 - Populate the output map of my connector and run the Assembly Line.
The result will be a message published to RSMB which I can see in my subscriber utility:

I can also see the RSMB log shows the connections to the server:

Of course, TDI doesn't have an MQTT Publisher Connector - I had to write one. The good news is that this was possibly the simplest connector of all time to write. That said, it is extraordinarily basic and is missing a myriad of features. For example - it does not support authentication to RSMB. Its error handling is what I can only describe as flaky. It is a publisher only - I haven't provided subscriber functions within the connector. But it shows how TDI could be used to ping very simple lightweight messages to a message broker using MQTT.

So what? Sounds like an intellectual exercise, right? Well, maybe. But MQTT is a great way of pushing information to mobile devices (as demonstrated by Dale Lane) so what I have is a means of publishing information from my running assembly lines to multiple mobile devices in real-time - potentially.

At this point, though, it is worth pointing out that the development of a connector is complete overkill for this exercise (though it does look pretty).

Dropping the wmqtt.jar file that can be found in the IA92 package into {TDI_HOME}/jars/3rdparty will allow you to publish to RSMB using the following few lines of TDI scripting:

// Let's create a MQTT client instance
var mqttpersistence = null;
var mqttclient = Packages.com.ibm.mqtt.MqttClient.createMqttClient("tcp://rsmb-server:1883", mqttpersistence);

// Let's connect to the RSMB server and provide a ClientID
var mqttclientid = "tdi-server";
var mqttcleanstart = true;
var mqttkeepalive = 0;
mqttclient.connect(mqttclientid, mqttcleanstart, mqttkeepalive);

// Let's publish
var mqtttopic = "TDI";
var mqttmessage = "This is a sample message!";
var mqttqos = 0;
var mqttretained = false;
mqttclient.publish(mqtttopic, mqttmessage.getBytes("ISO-8859-1"), mqttqos, mqttretained);

Not very complicated. In fact, very simple indeed! (The connector I developed doesn't get very much more complicated than this!)

Wednesday, June 13, 2012

Raspberry Pi, XBMC and Android

Well this is completely off-topic for me... Pi, XBMC and Android really doesn't fit with the myriad of previous posts on Identity & Access Management and IBM Tivoli security software. So for those of you who are only interested in the latter, you might as well stop reading now.

For those of you interested in how you can build your own media centre for just a few pennies, read on.

The Raspberry Pi is a credit-card sized computer that plugs into your television. Having grown up with a ZX Spectrum in my younger days, I was keen to recreate the feelings I had as a teenager fiddling with Sir Clive's version of BASIC.

The Pi has an ethernet port, an HDMI slot, two USB ports, an SD card reader, a power slot and additional audio/video outputs as well as a GPIO for adding your own hardware attachments.

For my project, I just needed an ethernet cable, an HDMI cable, some form of power and an SD card to host my Operating System - Debian6.

Following the instructions provided at raspberrypi.org, I was able to flash my debian6-19-04-2012 image to my 8GB SD card. I then resized the partitions using a GParted LiveCD, as attempting to deploy anything onto the image as provided is an exercise in futility - it is TINY!

I slotted the SD card into my Pi, attached the HDMI and ethernet cables and then attached my Amazon Kindle power cable and watched the lights come on and the boot sequence start.


Unfortunately, the image provided doesn't have SSH enabled by default which meant hooking up an old keyboard to the Pi and sorting that out - as such:

sudo mv /boot/boot_enable_ssh.rc /boot/boot.rc
sudo reboot


The keyboard was disconnected and it was time for some PuTTY action back on my home desktop.

Next up, I decided to install XBMC and thankfully this is a path well trodden so far. In fact, detailed instructions can be found online and it is worth following this thread on the Raspberry Pi Forum.


The following instructions are slightly more detailed and describe the approach I took. NOTE: I have placed the actual files I downloaded on to my own website so I can recreate this procedure should the images "disappear" from the original sources.


First step, install XBMC:

cd /home/pi
wget http://www.stephen-swann.co.uk/downloads/xbmc-bcm.tar.gz
gunzip xbmc-bcm.tar.gz
tar -xvf xbmc-bcm.tar
sudo mv xbmc-bcm /opt
rm xbmc-bcm.tar

Next up, update the Raspberry Pi firmware:

cd /home/pi
wget {host}/raspberrypi-firmware-0c3566c.zip
unzip raspberrypi-firmware-0c3566c.zip
rm raspberrypi-firmware-0c3566c.zip

cd raspberrypi-firmware-0c3566c
sudo cp boot/* /boot
sudo cp -R opt/vc/* /opt/vc/
sudo cp -R lib/modules/3.1.9+/* /lib/modules/3.1.9+/
sudo reboot


So far, so good. Now to install some dependencies which takes a not inconsiderable amount of time:

sudo apt-get -y install autoconf libboost-dev libass-dev libmpeg2-4-dev libmad0-dev libjpeg-dev libsamplerate0-dev libogg-dev libvorbis-dev libmodplug-dev libcurl4-gnutls-dev libflac-dev libmysqlclient-dev libbz2-dev libtiff4-dev libssl-dev libssh-dev libsmbclient-dev libyajl-dev libfribidi-dev libsqlite3-dev libpng12-dev libpcre3-dev libpcrecpp0 libcdio-dev libiso9660-dev libfreetype6-dev libjasper-dev libmicrohttpd-dev python-dev python-sqlite libplist-dev libavahi-client-dev

Next? Install the Raspberry Tools:

cd /home/pi
wget {host}/raspberrypi-tools-772201f.zip
unzip raspberrypi-tools-772201f.zip
rm raspberrypi-tools-772201f.zip

cd raspberrypi-tools-772201f
sudo cp -R arm-bcm2708/linux-x86/arm-bcm2708-linux-gnueabi/sys-root/lib/libstdc++.so.6.0.14 /usr/lib
sudo ldconfig


At this stage, we are almost finished. I wanted to make sure that everything was bang up-to-date though and threw in some reboots, just to be sure:

sudo reboot
sudo apt-get update
sudo apt-get upgrade
sudo reboot

And finally, it was time to run XBMC:

sudo LD_LIBRARY_PATH=/opt/xbmc-bcm/xbmc-bin/lib /opt/xbmc-bcm/xbmc-bin/lib/xbmc/xbmc.bin

Configuring the XBMC settings through the XBMC interface required my old keyboard to be hooked up once again, unfortunately, but this was a "one-off". I enabled the XBMC webserver and added my ReadyNAS Duo as a UPNP source for video, music and pictures.


I then downloaded the XBMC Remote software for Android to my phone, punched in the webserver details for my new XBMC installation, and hey presto - I was able to control the XBMC from my phone and enjoy watching a movie on a television that is already DLNA capable :-)

Now, I just need to move the Pi to the bedroom where there is an old television that couldn't even spell DLNA if it had the power of speech.

Cool? I think so...

Thursday, May 24, 2012

Re-Routing ITIM Approvals

The ITIM workflow engine can be really quite powerful once you have mastered it but getting the very best of it can require some out-of-the-box thinking.

Take, for example, the ability to re-route approvals. Why would you ever want to do that, you might ask. Well, we might provide a mechanism for an end user to select a particular user population for approving requests which would, of course, mean that the end user will habitually select the wrong user population!

Let's take an example:

Massive Corporation provides a suite of services which are accessed by staff and 3rd parties. Of the 3rd parties, many are users requiring read/write access to data but some are journalists who require access to press releases. Read/Write users should be approved by Read/Write approvers; Journalists should be approved by the Press Release approvers.

Now, let's consider that a journalist attempts to register for access but accidentally selects the Read/Write access. When his request is routed to the Read/Write approvers, the approvers have the ability to either Approve or Reject. What if they wanted to route the request to the Press Release approvers instead?

One solution to this problem might be to embed the approval process in a loop and capture rejections within the loop with an "Are You Sure?" RFI node:
So what's going on here? The Approval node should be self-explanatory, as should the connection from that node to the script marked APPROVED - this route assumes that approval has been granted.

The rejection route, however, connects to an RFI node. The RFI node could be configured to route the request to some "central" team who has responsibility for validating rejections and routing requests appropriately. The RFI could ask this team to confirm the rejection or select a new approver and, in effect, try the process again.

Rejection confirmation would cause the LOOP to exit, but the selection of a new approver would cause us to go around the LOOP once more. And we can LOOP ad infinitum, if required.

Of course, an RFI is a Request For Information which requires data attributes to be populated on the entity object for the workflow. We would have to attach 2 new attributes to the entity:
  • reject
  • approver

Reject could be a boolean attribute whereas approver is a container for a role within the system - a role which can be selected using a Search Match.

Now, we have a simple and robust mechanism for re-routing approvals following a rejection. With an RFI node after a rejection, we can even capture Reject Reasons and "sanitise" them just in case we publish the reason back to the user!

Of course, this is just one solution to the problem. There are others. How would you tackle the problem?

Tuesday, April 17, 2012

TDI Reconnect Rules

Network connections have a habit of dropping whether it be because of network issues, server load or application faults.

TDI, fortunately, has some in-built mechanisms to help the TDI developer cope with such situations - Reconnect Rules. The theory is that should a connection to a data source "disappear" for some reason, TDI can attempt to reconnect to the data source and continue processing. This reconnection can be attempted a number of times and can also be configured to wait a certain amount of time before attempting to reconnect. But of course, I'm not really educating you seasoned TDI professionals by telling you this. You already understand the concepts and where to configure the rules, right?






So all is well in the world again. Our JDBC, LDAP and HTTP connectors can all be configured to be robust enough to withstand network glitches.

What about our Function Components? What about, for example, the Axis Easy Invoke Soap Web Service Function Component? There is certainly a tab for Connection Errors available which would suggest that we can avail of the same technology to ensure we have a robust Assembly Line.

Alas, it would seem not. I was recently confronted with a badly behaving Web Service data source that every now and again wouldn't bother to respond to my requests. I would get a java.net.SocketTimeoutException much to my frustration. I didn't want to set the Function Component's timeout unreasonably high - I merely wanted to retry my operation so turned to my Connection Errors tab. Behaviourally? No change!

So what was going on here? Can it be as simple as Reconnect Rules are only appropriate for connectors? I simulated my issue to test the theory. I used an HTTP Client connector to connect to an HTTP Server connector that would never respond and configured my HTTP Client connector with just a 2 second timeout. I got my java.net.SocketTimeoutException after 2 seconds, unsurprisingly. I also saw my connector wait for 2 seconds and attempt a reconnection (as per the configuration in the above diagram).

That said, I killed my client Assembly Line after a minute or so to find the following in my log file:

08:40:54,615 INFO  - CTGDIS087I Iterating.
08:40:54,615 INFO  - CTGDIS086I No iterator in AssemblyLine, will run single pass only.
08:40:54,615 INFO  - CTGDIS092I Using runtime provided entry as working entry (first pass only).
08:41:54,011 INFO  - CTGDIS100I Printing the Connector statistics.
08:41:54,161 INFO  -  [HTTPClientConnector] Reconnect attempts:13, Errors:14
08:41:54,161 INFO  - CTGDIS104I Total: Reconnect attempts:13, Errors:14.
08:41:54,161 INFO  - CTGDIS101I Finished printing the Connector statistics.
08:41:54,171 INFO  - CTGDIS080I Terminated successfully (14 errors).

13 reconnection attempts? Can that be right? I only configured my Connection Errors tab to reconnect once! Odd. Maybe it has something to do with the fact that I'm receiving a java.net.SocketTimeoutException? I shut down my HTTP Server component and re-ran my client and got the following:

08:48:35,298 INFO  - [HTTPClientConnector] CTGDJP201I Finding entries with URL set to http://localhost:8091.
08:48:36,309 INFO  - [HTTPClientConnector] CTGDIS495I handleException , lookup, java.net.ConnectException: Connection refused: connect
08:48:38,322 INFO  - [HTTPClientConnector] CTGDJP201I Finding entries with URL set to http://localhost:8091.
08:48:39,304 INFO  - [HTTPClientConnector] CTGDIS495I handleException , lookup, java.net.ConnectException: Connection refused: connect
08:48:41,316 INFO  - [HTTPClientConnector] CTGDJP201I Finding entries with URL set to http://localhost:8091.
08:48:42,698 INFO  - [HTTPClientConnector] CTGDIS495I handleException , lookup, java.net.ConnectException: Connection refused: connect
08:48:44,821 INFO  - [HTTPClientConnector] CTGDJP201I Finding entries with URL set to http://localhost:8091.
08:48:45,813 INFO  - [HTTPClientConnector] CTGDIS495I handleException , lookup, java.net.ConnectException: Connection refused: connect

Reconnecting furiously on java.net.ConnectException now and continually! Maybe that's an issue for another day though as I'm really interested in what happens with my Function Component as no matter what I put into the Connection Errors tab, I always get an aborting Assembly Line.

The answer to the problem is probably as I suggested - Connection Errors are inappropriate for Function Components. I can also tell that this is the case because if I examine the underlying XML for my Assembly Line I do not see the Reconnect tag as I do in my HTTP Client connector:

<Reconnect>
   <InheritFrom>[parent]</InheritFrom>
   <parameter name="autoreconnect">true</parameter>
   <parameter name="initreconnect">true</parameter>
   <parameter name="numberOfRetries">1</parameter>
   <parameter name="retryDelay">2</parameter>
   <ReconnectRules/>
</Reconnect>


If I'm being brutally honest, I always knew that the reconnect rules wouldn't work on Function Components as the tab wasn't even present in TDI v6.1.1 for Function Components. It seems that it has appeared at some point without me really noticing - everything above was performed using TDI v7.1.5. So, a tab has appeared which is misleading. Ah well, the only way forward now is to handle the failure myself in my hooks. A simple catch for java.net.SocketTimeoutException, a counter, a sleep and a system.skipTo() will do the trick nicely.
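That hook logic can be sketched in plain JavaScript. This is a hypothetical stand-alone sketch rather than actual TDI hook code: callWebService is a stand-in for the Function Component invocation, and in a real Assembly Line the catch would live in the component's error hook, with java.lang.Thread.sleep() for the delay and system.skipTo() to resume processing.

```javascript
// Hypothetical retry wrapper mirroring the hook logic described above.
// callWebService: a function standing in for the Function Component call.
// maxRetries: how many times to retry after the first failure.
function callWithRetry(callWebService, maxRetries) {
  var lastError;
  for (var attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return callWebService(); // success: return the response
    } catch (e) {
      lastError = e; // e.g. the SocketTimeoutException in TDI
      // In a TDI hook you would pause here before retrying, e.g.
      // java.lang.Thread.sleep(2000);
    }
  }
  throw lastError; // retries exhausted: let the Assembly Line abort
}
```

The counter and the sleep are the important parts; without a cap on retries you end up with exactly the furious-reconnect behaviour seen in the logs above.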

Monday, February 20, 2012

Copy, Paste And ITIM Workflows

The astute amongst you will have seen that developing ITIM Workflows got a little trickier about a year or so ago when copying and pasting into and out of the script editor just stopped working.

Of course, this isn't an ITIM issue - but rather a "security" loophole plugging issue given to us by the powers that be at Java HQ.

The very astute amongst you will know how to get around this hassle and re-enable clipboard access, but for the rest of you, read on.

Shut down your browser and any Java based applications. Open your favourite editor - if you are a Windows user, you may want to run it as Administrator! Locate your java.policy file, which you may find under {JAVA_HOME}\lib\security, and add the following line within the grant { } section:

permission java.awt.AWTPermission "accessClipboard";

Restart your browser and you ought to have full copy/paste functionality once again.
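For context, the line sits inside the grant block, so the relevant part of java.policy ends up looking something like this (a sketch - your copy will already contain other permissions, which should be left in place):

```
grant {
    // ...existing permissions...
    permission java.awt.AWTPermission "accessClipboard";
};
```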

Thursday, February 09, 2012

Tivoli Directory Integrator and Long Objects

Those of you who use Tivoli Directory Integrator to interface with Tivoli Identity Manager using the ITIM API may have found yourselves wanting to grab the Process ID of requests that you make to ITIM. If so, you may have come across a peculiarity within TDI when it comes to handling Long objects.

Of course, the peculiarity is not limited to those interactions with ITIM - it's just that ITIM returns Process IDs in java.lang.Long format. All handling of java.lang.Long objects should be treated suspiciously.

Take for example the following logic:

var myString = "6746449726097972618";
var myLong = new java.lang.Long(myString);

task.logmsg("INFO", "STRING is: " + myString);
task.logmsg("INFO", "LONG is: " + myLong.longValue);

One might reasonably assume that the resulting log file will display the long number 6746449726097972618 twice. But you would be wrong. In fact, the log will look a lot like this:

08:51:23,047 INFO  - CTGDIS087I Iterating.
08:51:23,047 INFO  - CTGDIS086I No iterator in AssemblyLine, will run single pass only.
08:51:23,047 INFO  - CTGDIS092I Using runtime provided entry as working entry (first pass only).
08:51:23,057 INFO - STRING is: 6746449726097972618
08:51:23,057 INFO - LONG is: 6746449726097972224
08:51:23,057 INFO  - CTGDIS088I Finished iterating.
08:51:23,057 INFO  - CTGDIS100I Printing the Connector statistics.
08:51:23,057 INFO  -  [EmptyScript] Calls: 1
08:51:23,057 INFO  - CTGDIS104I Total: Not used.
08:51:23,057 INFO  - CTGDIS101I Finished printing the Connector statistics.
08:51:23,057 INFO  - CTGDIS080I Terminated successfully (0 errors).

So why the lack of precision when we try to display our Long object? A clue to what is going on here can be found by adding the following line of code:

task.logmsg(myLong.longValue);

Now, our log file looks like this:

08:51:29,237 INFO - STRING is: 6746449726097972618
08:51:29,237 INFO - LONG is: 6746449726097972224
08:51:29,237 INFO  - 6746449726097972618

So, we have now been able to display our Long object correctly. But the only difference was that I passed my Long object as a long object type rather than as a string. ("LONG is: " + myLong.longValue will convert myLong.longValue to a string prior to appending it to the string "LONG is: ")

It would therefore seem that the toString() conversion within Javascript when operated on a Long object is incapable of retaining precision. In fact, there is plenty of documentary evidence out there explaining Javascript's limitations when it comes to java.lang.Long handling. But why did that final task.logmsg(myLong.longValue) statement work?

The reason is that we are calling a proper Java method at that point and Java is more than capable of handling java.lang.Long objects (as you might expect).

If you really must play with java.lang.Long objects within your script nodes in TDI and you want to write the object to any target, you'll probably be better to have a think about performing a true Java conversion of the object in order to retain precision, like so...

var myConvertedString = java.lang.String.valueOf(myLong.longValue);

myConvertedString will now contain 6746449726097972618, as expected, and can be displayed quite happily as a result of calling the task.logmsg() method regardless of how you construct the arguments to the method.

NOTE: In the case of playing with the ITIM API, you will have to wrap the String.valueOf() method inside a Java class rather than rely on performing this in Javascript. This is because ITIM will return Request objects from which the Process ID can be retrieved using the getID() method. However, Javascript being Javascript means that a non-precise conversion will have already taken place during a getID() call. Wrap it in a Java class to be sure to get the correct string representation of your Process ID!


Tuesday, January 24, 2012

TDI Connector Initialisation

Tivoli Directory Integrator can be a great tool for quickly building routines that can move data from one repository to another. It can also be the perfect answer for building robust data synchronisation routines in the enterprise environment where failure can be catastrophic. But let me tell you about a "gotcha".

The default initialisation sequence for a TDI Assembly Line's connectors is to initialise on Assembly Line startup. That might not sound too unreasonable but let's consider the following scenario.

Two connectors, in iterator mode, connecting to Active Directory instances are instantiated on Assembly Line startup. Connector A trundles through its Active Directory container, retrieving data and passing its work entry to a myriad of other connectors and scripts which perform some logic on the data before committing it to another repository.

10 minutes later, Connector A gets to the end of its cycle and control passes to Connector B which similarly retrieves data from its Active Directory container and logic and commitment are performed.

All is well and good. Unless, of course, Connector A has a lot of data to process and more time is taken to do so.

If Connector A takes longer than 15 minutes, Connector B will keel over unceremoniously. Why? Well, it is all to do with the MaxConnIdleTime setting within Active Directory which has a default setting of 900 seconds (or 15 minutes). Microsoft's website gives an explanation for this setting as:

"The maximum time in seconds that the client can be idle before the LDAP server closes the connection. If a connection is idle for more than this time, the LDAP server returns an LDAP disconnect notification."

The TDI documentation states that when a connector in Iterator mode initializes, however, "the Prolog – Before Selection Hook is processed, and the Connector performs the entry selection; this triggers a data source specific call, like performing an SQL SELECT or an LDAP search."

So if the LDAP search was invoked at initialisation time, how can the MaxConnIdleTime have any impact on our processing? The answer, my friend, is that search requests are paged. On initialization, the LDAP search is indeed invoked, but only the first page of data is returned - 500 entries, for example. Only when Connector A has finished iterating through its data will control pass to Connector B at which point Connector B can request the next page of data. At this point, however, the MaxConnIdleTime has been surpassed and Active Directory refuses to play ball.

To get around the problem, though, is really quite straightforward. Instead of initialising the connector when the Assembly Line starts, initialise it when it is required: click on the little more button and select only when used for the initialize parameter, as such:


Next time you write an Assembly Line, have a think about how connectors should be initialised. It might save you from a whole world of pain!

NOTE: Just because a connector in your test environment can process data inside 15 minutes, doesn't mean you will get the same result in your production environment ;-)