Thursday, June 05, 2014

LDAP Load Balancing And Timeouts

It's been ages since I posted anything despite the fact I have lots to say. The last few months have been extraordinarily busy and challenging (and not necessarily in a technical way).

Some colleagues have pointed out that I have aged significantly in recent times. It is true that the hair on my chin has gone from grey to white and that the hair around my temples is severely lacking in any kind of hue. It seems that my work-life balance has gone slightly askew.

Balancing brings me rather neatly on to the topic of LDAP Load Balancing. I say Load Balancing, though what I really mean is providing a mechanism for assuring the availability of Directory Servers for IBM Security Identity Manager. My preference is for load to go to one directory server, which replicates to a second directory server that can be used should something go awry on the primary.

So, what's the best way to ensure traffic routes to the correct directory server and stays stuck to that directory server until something happens? Well, that's the domain of the F5 BIG-IP beast. Or is it?

There is plenty of documentation around the web stating that one should tread carefully when attempting to use these kinds of tools to front a directory server (and IBM Tivoli Directory Server in particular). In recent dealings, I've observed the following, which is worth sharing:

Point 1
Be careful of the BIG-IP idle timeout default of 300 seconds. Any connection that BIG-IP thinks is stale after 300 seconds will be torn down. (You should see how a TDI Assembly Line in iterator mode behaves when that happens without auto-reconnect and skip-forward enabled!)

Point 2
Be careful of the TCP connection setup and seriously consider using Performance Layer 4 as the connection profile on the BIG-IP. A 50% increase in throughput was not atypical in some of my recent tests.
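
By way of illustration, here is a hypothetical tmsh sketch (the profile, virtual server and pool names, the address and the 1800-second timeout are all invented for the example) that creates a Fast L4 profile with a raised idle timeout and attaches it to an LDAP virtual server:

(tmos)# create ltm profile fastl4 fastl4_ldap defaults-from fastl4 idle-timeout 1800
(tmos)# create ltm virtual vs_ldap destination 10.0.0.50:389 profiles add { fastl4_ldap } pool pool_ldap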

Point 3
Ensure that the default settings for LDAP pool handling are updated in ISIM's enRole.properties file. In particular, pay attention to the following:

enrole.connectionpool.timeout
enrole.connectionpool.retryCountForSUException

The timeout should be set just a little below the BIG-IP's idle timeout. The retryCountForSUException should be set to ensure that ISIM reconnects should the BIG-IP device tear down the connection regardless of the timeout.
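
As a concrete illustration, and assuming the timeout property is expressed in seconds, values in line with that guidance might look like this (the retry count of 3 is an arbitrary example):

enrole.connectionpool.timeout=290
enrole.connectionpool.retryCountForSUException=3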

And with those tips, you should have a highly available infrastructure with a decent level of fault tolerance.

Thursday, March 06, 2014

Understanding ISIM Reconciliations

Most ITIM/ISIM gurus will understand what goes on during Service reconciliation. Let's be honest, most gurus have had to write countless adapters and perform countless bouts of debugging problems when they arise.

What happens, though, if you have one ISIM environment that can reconcile a service successfully but a second ISIM environment which cannot reconcile the exact same service? And let us, for a moment, assume that both ISIM environments are configured identically.

Of course, if something works in one environment and doesn't work in another, there must be a difference somewhere. But the ISIM version is the same, the service profile is the same, the service definition is the same, it's talking to the same TDI instance which performs the grunt work of retrieving all the data from the target service. On the face of it - there is no reason for one environment to behave any differently than the other.

Yet it can happen! In fact, I recently saw an ISIM environment get itself into a hung state trying to reconcile a service even though all the reconcilable objects had made their way into ISIM.

When ISIM reconciles a service, the result is that supporting data objects are stored in the LDAP under the service container (erglobalid=123,ou=services,erglobalid=00000000000000000000,ou=org,dc=com) and ultimately accounts are stored under the ou=accounts and ou=orphans containers. I say ultimately for a reason. The accounts are actually temporarily stored under the service container too before being moved to a more appropriate container.

And therein lay the difference in my two environments. The working environment had no accounts stored under the service container prior to executing the reconciliation. Somehow, the non-working environment did have some accounts left hanging about under the service container.

A ruthless LDAPDELETE on all objects subordinate to the service was all that was required for the reconciliation to complete successfully.

Next time you have a misbehaving reconciliation process, why not check how previous reconciliations have been left by having a look under your service container? You might be surprised at what you find.
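
A quick way to look is an ldapsearch scoped to the service container, using the illustrative DN from above and erAccountItem as the account objectclass (adjust the host, credentials and DN to suit your environment):

ldapsearch -h ldaphost -p 389 -D cn=root -w secret -s sub -b "erglobalid=123,ou=services,erglobalid=00000000000000000000,ou=org,dc=com" "(objectclass=erAccountItem)" dn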

Thursday, September 26, 2013

ITIM Best Practices For Workflow Design

It sounds rather arrogant to call this blog entry "Best Practices" when it is merely my meandering thoughts that I'm about to spew forth. But a number of recent experiences of other people's work have almost moved me to tears.

So here is my list of "best practices".

Workflow Start Nodes

The Start Node (and the End Node for that matter) is not somewhere that code should be placed. Code can be placed in these nodes, of course. But it shouldn't ever be. For example, how could I know that the following Workflow has been customised:



If you have code that needs to be run as soon as the workflow starts, place a Script Node immediately after the Start Node and name it appropriately. This is MUCH better:



Script Node Size

If I open a Script Node and find that there are 1,000 lines of code in there, I will cry. If you have 1,000 lines of code you are probably doing something wrong. I would much rather see multiple Script Nodes with a sensible name describing their function laid out in an organised manner that gives me a good visual representation of the process rather than having to wade through hundreds or thousands of lines of code.

Workflow Extensions

If you have written some Javascript that looks cool and funky and re-usable, then make it cool and funky and re-usable by creating a Workflow Extension. Also, feel free to give extensions back to the community! (I shall publish some of my favourites soon!)

Hardcoding!

If I see a hardcoded erglobalid reference in a Script Node, I will, in all probability, hunt the developer down and do very un-Tivolian things to him or her. The assumption that the code will work when promoted through various environments is deeply flawed, and it is plain laziness. Do Not Hardcode!
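
To illustrate, here is the sort of thing I mean. The sketch is illustrative only: the DN and role name are invented, and RoleSearch and its searchByName method are the workflow JavaScript API names as I recall them, so check the JSAPI documentation for your version:

// Avoid: a hardcoded erglobalid that will be different in every environment
// var roleDN = "erglobalid=1234567890123456789,ou=roles,erglobalid=00000000000000000000,ou=org,dc=com";

// Prefer: resolving the object by name at runtime
var roles = (new RoleSearch()).searchByName("Payroll Approvers");
if (roles.length > 0) {
    var roleDN = roles[0].dn;
}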

Commenting & Logging
You might think your code is readable, but the chances are that it could still do with some comments here and there. Even if the code is understandable, the reasoning behind performing the function may not be so clear and requirements documents and design documents have a habit of disappearing!

When it comes to logging, log appropriately and update the status of the workflow throughout the workflow process. The following statements are great to see in Script Nodes because we can successfully handle any problems and ensure that we always navigate our way to the End Node.

activity.setResult(activity.SUCCESS, "Attributes Successfully Verified");
activity.setResult(activity.FAILED, "Attribute Verification Failed due to " + errMsg);
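
Putting those together, a Script Node might be structured along these lines. This is only a sketch: verifyAttributes() is a hypothetical helper standing in for whatever work the node actually performs:

try {
    // The real work of the Script Node goes here
    verifyAttributes();   // hypothetical helper
    activity.setResult(activity.SUCCESS, "Attributes Successfully Verified");
} catch (errMsg) {
    // Record the failure but still navigate cleanly towards the End Node
    activity.setResult(activity.FAILED, "Attribute Verification Failed due to " + errMsg);
}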

And Finally

The chances are that someone will come along after you have done all your hard work and they will look at your workflow. Do you want them to cry? Or do you want them to be delighted in what they have seen? Make sure you delight people!

Friday, June 07, 2013

Careful Where You Point That Finger

Over the last 9 months, I have managed to shed almost 50lbs in weight - that's in excess of 20kg for those living in the metric world. As part of the process of losing weight, I took to tracking my walking and cycling habits using Endomondo on my phone - and what an excellent piece of software that is.

I did have a problem, though!

When I started the application and enabled GPS, I would wait quite a considerable amount of time before my location could be determined. Indeed, I had completed the bulk of some of my journeys by the time I got the "GPS Excellent" notification.

Initially, my thoughts went along these lines:

"Endomondo have coded their freebie application to not pick up my location in the hope that I will purchase the Pro version of their app"

I did purchase the Pro version - mostly because I thought the app was terrific with only a slight nod towards the hope that my GPS issues would be resolved. My GPS issues weren't resolved.

So I then thought:

"I must have dropped my phone at some point and 'disturbed' its ability to perform it's GPS magic"

This seemed like a reasonable thought as all my friends and colleagues had perfectly acceptable Endomondo experiences on their phones. Indeed, my Samsung Galaxy II seemed to be really struggling when compared to a colleague's Nexus - and they were both Android phones.

I figured it must be time for a new phone. And proceeded to spend the next couple of weeks evaluating my options. And then... I woke up one morning to discover that my phone wanted to apply an update to my Android operating system.

An hour or so later, I had a shiny new OS. Admittedly, performance was awful initially as all my apps needed updating too. And all my icons had been re-sorted alphabetically (for which I could quite happily exact some kind of torturous revenge on the developer of the upgrade process). Apart from the initial dodgy performance and the tears that followed the pain of re-organising my icons, I noticed two things:
  • Playing 7x7 was super-fast - 7x7 is one of those simple yet addictive games that can keep me going for hours
  • GPS performance was perfect!

Wonderful! When starting Endomondo Pro now, I have to wait no longer than 2 or 3 seconds before I have my "GPS Excellent" notification. And to think I had been blaming the app and the phone.

So, what's the point of this story, I hear you ask? Well, it is far too easy to make a judgement about an app, or a product, or almost anything you might encounter during the course of your life. My problems had nothing to do with Endomondo's developers' coding ability; nor had they anything to do with how Samsung's hardware technicians had put my phone together. Yet, even for a techie like me, my thought processes led me to think bad thoughts about both!

I have seen senior managers in enterprises make some very strange decisions in the past. They may decide that they no longer need "Huge Software Developer's Amazing IT Solution" because it costs too much and doesn't perform the way they expected, and instead buy "Enormous Software Developer's Stupendous IT Solution" in the anticipation that it will cost less and perform perfectly.

And frequently, I don't see the cost benefit and the performance problems still exist.

Sometimes the problems we are faced with in the IT world are not being caused by the applications we are using. Sometimes we need to dig a little deeper to find out what the real cause of a problem is - whether it be the hardware platform we are using; the operating system in use; the networking setup; the interfaces; and of course, the users and their expectations.

And finally... maybe some of the big boys can learn from the developers of the apps we now use on a daily basis on our smartphones. Endomondo does exactly what I need in a manner which makes sense for me. Can we say the same thing for enterprise software?

Tuesday, April 16, 2013

Tempus Fugit

I remember being a follower of a blogger who wrote about IBM Tivoli security solutions and becoming quite concerned for his well-being when he "stopped" blogging for a while. When he had gone fully six months without blogging, I had myself convinced that something terrible had happened to him personally.

And now I find that I have done the same thing. My periodical briefings have stopped. But fear not - it's not because of any ill health. It's not because of a change in career. It's not even to do with boredom. It has everything to do with being far too busy and that's the best reason of all for the temporary blip in my output.

But, time flies... and too much time has passed since I last committed my thoughts to writing.

So what has happened in the last six months? Well, the IBM Tivoli re-branding exercise is in full swing, with IBM Security Identity Manager and IBM Security Access Manager products having been released.

And what has changed in those products?

ISAM has a brand new deployment process which is greatly simplified. However, for TAMeb users who have deployed their software on Windows, beware! The upgrade process might not behave as you expect. Why? The move to a 64-bit architecture, that's why. Think seriously about any attempt to perform an in-situ upgrade!

You might like to also check compression settings on your WebSEALs - a respected colleague of mine has already encountered some fun and games with those!

And ISIM? Is it just a pure re-branding exercise? Not at all. Some functional additions are definitely welcome, like the controls added to services offering retry operations on failed resources. Account ownership and role attributes look interesting (despite how they have been implemented). Privileged Identity Management is a great addition, as is the inclusion of a supported web services API.

But core processing that ITIM administrators will know and love is still there!

And what of my work recently? Well, it's a matter of spending a lot of time concentrating on federated security and environmental upgrades. Working on pre-sales; sprucing up designs; sizing projects; and helping those around me get the best use out of their IBM Tivoli/Security solutions.

So much has happened in recent months, though, that I hardly know where to start in documenting it all. There has been fun with SAML v2. Flirtations with FESI extension re-writes. Dalliances with web services and APIs. Encounters with business process re-engineering.

My next article, however, will likely be an IBM Tivoli Directory Integrator article centred on best practice for collaborative development. That sounds like a tricky one!

Wednesday, October 10, 2012

Internet Explorer Is A Very Naughty Boy

It has been three months since I last felt the urge to post anything on my blog. There is no particular reason for the hiatus, with the possible exception that the sun was shining (sometimes) and I was out and about rather than being tied to my desk.

But the days are shorter than the nights now. There is definitely a nip in the air. Being "out and about" isn't quite as enjoyable as it was just a few weeks ago.

And so here I am... ready and willing to commit some more of my findings to this blog.

So what shall I tell you? Well, what about the fact that Internet Explorer is a very naughty boy! Hardly a startling revelation. Those of us working in the web world know how often we come across situations whereby Firefox, Chrome, Safari and Opera can all render a page correctly, but Internet Explorer fails to do so! It gets rather tedious after a while, right?

This week, I had the joys of diagnosing why a page protected by WebSEAL wouldn't render in Internet Explorer. Capturing the HTTP Headers whizzing back and forth in Internet Explorer and Firefox provided the answer quite quickly: Internet Explorer would sometimes not bother to send the session cookie back to WebSEAL.

Why would it "sometimes" just not bother to do this? Well, there is some well-documented evidence that Internet Explorer (up to version 8) treats cookies in a rather unexpected fashion. Internet Explorer can start dropping in-memory cookies as it has a finite limit on the number of in-memory cookies it can handle!

Those clever people in the development labs of IBM, however, have come across this before and the problem can be alleviated by setting the resend-webseal-cookies parameter to yes in the WebSEAL configuration file. This ensures that the cookie gets set with every request!
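
For reference, the parameter lives in the [session] stanza of the WebSEAL configuration file:

[session]
resend-webseal-cookies = yes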

Many of you will have come across this quirk before - many times, potentially. For those just starting out with your WebSEAL deployment, though, make sure you have the ability to capture the HTTP Headers from within your browser. It's amazing what you can see inside them!

I promise to blog more... now that winter is almost upon us!

Friday, June 29, 2012

TDI and MQTT to RSMB

That's far too many acronyms, really. What do they mean? Well, readers of this blog will understand that TDI has got nothing to do with diesel engines but is, in fact, Tivoli Directory Integrator.



MQTT? MQ Telemetry Transport - "a machine-to-machine (M2M)/"Internet of Things" connectivity protocol".

RSMB? Really Small Message Broker - "a very small messaging server that uses the lightweight MQTT publish/subscribe protocol to distribute messages between applications".

So what do I want to do with this stuff? Well, you may already know that I got myself a Raspberry Pi, and I was scratching around thinking of things I'd like my Pi to do. I came across an excellent video showing how Andy Stanford-Clark is utilising his Pi to monitor and control devices around his home - it is definitely worth a look.

I have no intention (yet) of trying to copy Andy's achievements as I'm quite sure I don't have the spare hours in the day! However, I was intrigued to see if I could use my favourite tool (TDI) to ping messages to RSMB using MQTT.

Step 1 - Download RSMB
https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?source=AW-0U9

Step 2 - Startup RSMB
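
For the record, and assuming the Linux build of RSMB in which the executable is simply named broker (it listens on the standard MQTT port of 1883 by default and takes an optional configuration file):

./broker broker.cfg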



Step 3 - Fire up a Subscriber listening to the TDI topic
 


Step 4 - Write an Assembly Line to use the MQTT Publisher Connector
 


Step 5 - Populate the output map of my connector and run the Assembly Line.
The result will be a message published to RSMB which I can see in my subscriber utility:

I can also see that the RSMB log shows the connections to the server:

Of course, TDI doesn't have an MQTT Publisher Connector - I had to write one. The good news is that this was possibly the simplest connector of all time to write. That said, it is extraordinarily basic and is missing a myriad of features. For example - it does not support authentication to RSMB. Its error handling is what I can only describe as flaky. It is a publisher only - I haven't provided subscriber functions within the connector. But it shows how TDI could be used to ping very simple lightweight messages to a message broker using MQTT.

So what? Sounds like an intellectual exercise, right? Well, maybe. But MQTT is a great way of pushing information to mobile devices (as demonstrated by Dale Lane) so what I have is a means of publishing information from my running assembly lines to multiple mobile devices in real-time - potentially.

At this point, though, it is worth pointing out that the development of a connector is complete overkill for this exercise (though it does look pretty).

Dropping the wmqtt.jar file that can be found in the IA92 package into {TDI_HOME}/jars/3rdparty will allow you to publish to RSMB using the following few lines of TDI scripting:

// Let's create an MQTT client instance
var mqttpersistence = null;
var mqttclient = Packages.com.ibm.mqtt.MqttClient.createMqttClient("tcp://rsmb-server:1883", mqttpersistence);

// Let's connect to the RSMB server and provide a ClientID
var mqttclientid = "tdi-server";
var mqttcleanstart = true;  // discard any state from a previous session
var mqttkeepalive = 0;      // keepalive interval in seconds; 0 disables the keepalive ping
mqttclient.connect(mqttclientid, mqttcleanstart, mqttkeepalive);

// Let's publish
var mqtttopic = "TDI";
var mqttmessage = "This is a sample message!";
var mqttqos = 0;
var mqttretained = false;
mqttclient.publish(mqtttopic, mqttmessage.getBytes("ISO-8859-1"), mqttqos, mqttretained);

Not very complicated. In fact, very simple indeed! (The connector I developed doesn't get very much more complicated than this!)
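
For completeness, since the connector (and the snippet above) is publish-only, the same IA92 client can be scripted as a subscriber too. The sketch below is from memory rather than tested code: MqttSimpleCallback and registerSimpleHandler are the IA92 names as I recall them, and the explicit Java arrays are needed because subscribe() expects String[] and int[]:

// Let's implement the IA92 callback interface with a script object
var callback = new Packages.com.ibm.mqtt.MqttSimpleCallback({
    connectionLost: function() {
        task.logmsg("Lost connection to RSMB");
    },
    publishArrived: function(topic, payload, qos, retained) {
        task.logmsg("Received on " + topic + ": " + new java.lang.String(payload, "ISO-8859-1"));
    }
});

// Let's connect to the RSMB server and register the callback
var subclient = Packages.com.ibm.mqtt.MqttClient.createMqttClient("tcp://rsmb-server:1883", null);
subclient.registerSimpleHandler(callback);
subclient.connect("tdi-subscriber", true, 0);

// Let's subscribe - subscribe() expects Java arrays, so build them explicitly
var topics = java.lang.reflect.Array.newInstance(java.lang.String, 1);
topics[0] = "TDI";
var qos = java.lang.reflect.Array.newInstance(java.lang.Integer.TYPE, 1);
qos[0] = 0;
subclient.subscribe(topics, qos);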