Monday, February 09, 2015

Giving Away Secrets

I've often been asked why I 'blog' (not that I update this blog anywhere near as often as I would like). I've had people say that I'm "giving away secrets" and that I won't be in demand in the job marketplace if I continually tell other people how to do what I do.

That may be so, but surely that's a good thing, right? I want to be able to move on and learn new things on a daily basis, and if I can get others to do the things that I traditionally do, then that creates the opportunity for me to move on?

Then there is the fact that I'm not getting any younger and retaining information in my head isn't as easy these days as it used to be. It's like creaking limbs and deteriorating eyesight... I find I'm becoming more forgetful (which my wife will more than happily corroborate). As such, writing all these things down is actually as much for my own private use as it is to help others.

And finally, altruism feels good. Now, I don't for one minute think that I'm the most altruistic person I know. Giving away information on IBM Tivoli security software can barely be described as altruistic really. But nonetheless, it feels like I'm doing a good thing. Last week, I received an email from a very kind individual which reminded me that it is a good thing. The email said:

"You are a rock star and a gentleman. Thanks so much for all the helpful material you have put together! I owe you at least a suitcase of beer. Feel free to cash in anytime."

I don't get many emails of that type. Mostly, I get emails asking for free consultancy so it brightened my day when I saw the above. So, to the sender (Tim), I say thank you for the kind offer of beer but it's really not necessary. Acknowledgement that the material is worthwhile is quite enough for me.

Friday, December 12, 2014

The Who And When Of Account Recertification

We all know that there are rules and laws in place that mean that organisations must demonstrate that they have control over access to their IT systems. We've all heard about the concepts around recertification of access rights. But what is the best way to certify that the access rights in place are still appropriate and who should undertake that certification task?


A starting point, at least, is to identify the potential candidates for certifying access rights. They are, in no particular order:

  • The account owner themselves
  • The account owner's line manager
  • The application owner
  • The service owner
  • The person who defined the role that granted the entitlements

Of course, some of these people may or may not actually exist. In some cases, the person might not be appropriate for undertaking certification tasks. Line Managers these days are often merely the person who you see once or twice a year for appraisals and to be given the good news about your promotion/bonus (or otherwise). Functional Managers or Project Managers may be more appropriate, but can they be identified? Is that relationship with the account owner held anywhere? Unlikely. And definitely unlikely in the context of an identity management platform!

And what about service owners? Maybe this person is the person who has responsibility for ensuring the lights stay on for a particular system and maybe they don't care what the 500,000 accounts on that system are doing? Maybe her system is being used by multiple applications and she would prefer the application owners to take responsibility for the accounts?

And what of the account owner themselves? Self-certifying is more than likely going to merely yield a "yes, I still need that account and all those entitlements" response! But is that better than nothing? Maybe! Then again, maybe the account owner is unaware of what the account gives them. What do they do when they are told they have an AD account which is a member of the PC-CB-100 group, will they understand what that means? For many users, the answer will be no!

Ultimately, the decision as to who is best placed to perform the certification will come down to a number of factors:

  • Can a person be identified?
  • Can the person recognise the information they will be presented with and correlate it with a day-to-day job function?
  • Is the person empowered to make the certification?

There will be a high chance that only one person fits the bill given these criteria!


Some organisations like to perform regular recertifications... it's not unusual to see quarterly returns being required. For most people, however, their job function doesn't change too often and a lot of effort will have gone into gathering information which (for the most part) is unchanged from the previous round of recertifications.

Is there a better way?

Merely lengthening the amount of time between cycles isn't necessarily the answer. But maybe recertification can be more "targeted". Rather than a time-based approach, recertification should probably target those people who have undergone some form of change or other life-cycle event, such as:

  • Change of Job Title
  • Change of Line Manager
  • Change of Organisational Unit
  • Return from long-term leave
  • Lack of use on the account

All of these events can be used as useful recertification triggers rather than waiting for a quarterly review. The benefits are easy to articulate to any CISO - immediate review and more control of access rights.

Of course, these triggers can be used in conjunction with a time-based approach - but maybe that time-based approach should be based on the last time a recertification was performed for that user/account/entitlement rather than a fixed date in the calendar.
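As a thought experiment, the decision logic for event-driven recertification can be sketched in a few lines of Java. The event names and the 90-day inactivity threshold below are illustrative assumptions, not ISIM constructs:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.Set;

// Sketch of event-driven recertification triggering. Event names and the
// inactivity threshold are illustrative choices, not part of any ISIM API.
public class RecertTrigger {

    // Lifecycle events that should prompt an immediate review.
    private static final Set<String> TRIGGER_EVENTS = Set.of(
        "JOB_TITLE_CHANGE",
        "LINE_MANAGER_CHANGE",
        "ORG_UNIT_CHANGE",
        "RETURN_FROM_LONG_TERM_LEAVE");

    // An account unused for this long is also a candidate for review.
    private static final Duration INACTIVITY_THRESHOLD = Duration.ofDays(90);

    public static boolean needsRecertification(String event, Instant lastUsed, Instant now) {
        if (TRIGGER_EVENTS.contains(event)) {
            return true;
        }
        // The "lack of use on the account" trigger.
        return Duration.between(lastUsed, now).compareTo(INACTIVITY_THRESHOLD) > 0;
    }
}
```

In practice, a check like this would sit in whatever feed processes HR events, launching a recertification workflow for the affected user rather than returning a boolean.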

Thursday, December 04, 2014

ISIM Workflow Extensions

During recent ramblings, I mentioned that I would publish some of my favourite ISIM workflow extensions. The workflow extensions that are provided "out of the box" cover tasks such as adding, modifying and deleting accounts and identities as well as enforcing policy and other standard operations. However, the "out of the box" list could do with having a number of useful extensions added.

One of my favourites (and something I pretty much insist on being installed as soon as I've deployed ISIM) is my version of a sendmail. ISIM has the ability to send emails, but only to those identities who have an email address. However, what if you wanted to send an email to an address which was not attached to any identity? What if you wanted to email a user during a self-registration workflow at which point the identity has not yet been created?

My sendmail extension is just a handful of lines long, uses standard ISIM methods and takes just the following arguments: eMail Address, Subject and Body.

The code is very simple indeed:

public ActivityResult sendMail(String mailAddress, String mailSubject, String mailBody) {
  try {
    List<String> addresses = new ArrayList<String>();
    addresses.add(mailAddress);

    NotificationMessage nMessage = new NotificationMessage(addresses, mailSubject, mailBody, mailBody);

    // ... dispatch nMessage using ISIM's standard mail facilities ...

    return new ActivityResult(ActivityResult.STATUS_COMPLETE,
      ActivityResult.SUCCESS, "eMail Sent", null);
  } catch (Exception e) {
    return new ActivityResult(ActivityResult.STATUS_COMPLETE,
      ActivityResult.FAILED, "eMail Not Sent: " + e.getMessage(), null);
  }
}
Registering the JAR and registering the extension in the workflowextensions.xml file is all that is required to make the extension available.

    <ACTIVITY ACTIVITYID="sendmail" LIMIT="600000">
      <IMPLEMENTATION_TYPE>
        <APPLICATION CLASS_NAME="com.sswann.sendMail" METHOD_NAME="sendMail" />
      </IMPLEMENTATION_TYPE>
      <PARAMETERS>
        <IN_PARAMETERS PARAM_ID="recipient" TYPE="String" />
        <IN_PARAMETERS PARAM_ID="subject" TYPE="String" />
        <IN_PARAMETERS PARAM_ID="body" TYPE="String" />
      </PARAMETERS>
    </ACTIVITY>

On-screen, the extension will be rendered as such:

From here, it's merely a matter of constructing your relevant data and handling the activity.resultSummary correctly in your subsequent nodes.

Previously, I stated that workflow extensions should be created when you find you are repeating Javascript over and over again. If you need to check the status of an account owner before invoking an account operation, that would sound like a great candidate for creating a workflow extension and having a solid reusable component. A list of useful extensions might include:
  • Check Account Already Exists
  • Check Owner Status
  • Check Manager Status
  • Read/Update LDAP Object (not an identity or account)
  • Read/Update Database Object
  • Read/Update Message on ESB (MQ)
  • Call Web Service
  • Create Helpdesk Ticket/Send Alert

Have you any good candidates for an extension?

Thursday, June 05, 2014

LDAP Load Balancing And Timeouts

It's been ages since I posted anything despite the fact I have lots to say. The last few months have been extraordinarily busy and challenging (and not necessarily in a technical way).

Some colleagues have pointed out that I have aged significantly in recent times. It is true that the hair on my chin has gone from grey to white and that the hair around my temples is severely lacking in any kind of hue. It seems that my work-life balance has gone slightly askew.

Balancing brings me rather neatly on to the topic of LDAP Load Balancing. I say Load Balancing though what I really mean to say is providing a mechanism for assuring availability of Directory Servers for IBM Security Identity Manager. My preference is for load to go to one directory server which replicates to a second directory server which can be used should something go awry on the primary.

So, what's the best way to ensure traffic routes to the correct directory server and stays stuck to that directory server until something happens? Well, that's the domain for the F5 BIG-IP beast. Or is it?

There is plenty of documentation around the web that states that one should tread carefully when attempting to use these kind of tools to front a directory server (and IBM Tivoli Directory Server in particular). In recent dealings, I've observed the following which is worth sharing:

Point 1
Be careful of the BIG-IP idle timeout default of 300 seconds. Any connection that BIG-IP thinks is stale after 300 seconds will be torn down. (You should see how a TDI AssemblyLine in iterator mode behaves in that scenario with auto-reconnect and skip-forward disabled!)

Point 2
Be careful of the TCP connection setup and seriously consider using Performance Layer 4 as the connection profile on the BIG-IP. A 50% increase in throughput was not atypical in some of my recent tests.

Point 3
Ensure that the default settings for LDAP pool handling are updated in ISIM's properties file. In particular, pay attention to the following:


The timeout should be set just a little below the BIG-IP's idle timeout. The retryCountForSUException should be set to ensure that ISIM reconnects should the BIG-IP device tear-down the connection regardless of the timeout.
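The relationship between the two timeouts can be captured in a trivial helper. The 300-second figure is BIG-IP's default idle timeout; the 10-second safety margin is an illustrative choice on my part, not a documented requirement:

```java
// Sanity-check helper for the timeout relationship described above.
public class LdapPoolTimeouts {

    // Recommend a pool timeout just below the BIG-IP idle timeout;
    // the 10-second margin is an illustrative assumption.
    public static int recommendedPoolTimeoutSeconds(int bigIpIdleTimeoutSeconds) {
        return Math.max(1, bigIpIdleTimeoutSeconds - 10);
    }

    // The pool must give up on an idle connection before BIG-IP tears it down.
    public static boolean isSafe(int poolTimeoutSeconds, int bigIpIdleTimeoutSeconds) {
        return poolTimeoutSeconds < bigIpIdleTimeoutSeconds;
    }
}
```

With the default 300-second idle timeout, that would suggest a pool timeout of around 290 seconds, with the retry count there to catch any connections that get torn down regardless.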

And with those tips, you should have a highly available infrastructure with a level of tolerance.

Thursday, March 06, 2014

Understanding ISIM Reconciliations

Most ITIM/ISIM gurus will understand what goes on during Service reconciliation. Let's be honest, most gurus have had to write countless adapters and perform countless bouts of debugging problems when they arise.

What happens, though, if you have one ISIM environment that can reconcile a service successfully but a second ISIM environment which cannot reconcile the exact same service? And let us, for a moment, assume that both ISIM environments are configured identically.

Of course, if something works in one environment and doesn't work in another, there must be a difference somewhere. But the ISIM version is the same, the service profile is the same, the service definition is the same, it's talking to the same TDI instance which performs the grunt work of retrieving all the data from the target service. On the face of it - there is no reason for one environment to behave any differently than the other.

Yet it can happen! In fact, I recently saw an ISIM environment get itself into a hung state trying to reconcile a service even though all the reconcilable objects had made their way into ISIM.

When ISIM reconciles a service, the result is that supporting data objects are stored in the LDAP under the service container (erglobalid=123,ou=services,erglobalid=00000000000000000000,ou=org,dc=com) and ultimately accounts are stored under the ou=accounts and ou=orphans containers. I say ultimately for a reason. The accounts are actually temporarily stored under the service container too before being moved to a more appropriate container.

And therein lay the difference in my two environments. The working environment had no accounts stored under the service container prior to executing the reconciliation. Somehow, the non-working environment did have some accounts left hanging about under the service container.

A ruthless LDAPDELETE on all objects sub-ordinate to the service was all that was required for the reconciliation to complete successfully.

Next time you have a misbehaving reconciliation process, why not check to see how previous reconciliations have been left by having a look under your service container. You might be surprised at what you will find.
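That pre-reconciliation check is a one-liner with ldapsearch, but it can also be scripted; the sketch below uses plain JNDI. The LDAP URL is a placeholder, and the dc=com suffix is an assumption lifted from the example DN above:

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.InitialDirContext;
import javax.naming.directory.SearchControls;
import javax.naming.directory.SearchResult;

public class ServiceContainerCheck {

    // Build the service container DN from its erglobalid, mirroring the
    // structure shown above (the suffix is an assumption for illustration).
    public static String serviceContainerDn(String erGlobalId) {
        return "erglobalid=" + erGlobalId
            + ",ou=services,erglobalid=00000000000000000000,ou=org,dc=com";
    }

    // List any objects sitting one level beneath the service container;
    // a non-empty result before a reconciliation is the warning sign.
    public static void listLeftovers(String ldapUrl, String erGlobalId) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, ldapUrl);

        InitialDirContext ctx = new InitialDirContext(env);
        try {
            SearchControls sc = new SearchControls();
            sc.setSearchScope(SearchControls.ONELEVEL_SCOPE);
            NamingEnumeration<SearchResult> results =
                ctx.search(serviceContainerDn(erGlobalId), "(objectClass=*)", sc);
            while (results.hasMore()) {
                System.out.println(results.next().getNameInNamespace());
            }
        } finally {
            ctx.close();
        }
    }
}
```

From there, deleting each returned entry (ctx.destroySubcontext) replicates the ruthless LDAPDELETE fix described above.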

Thursday, September 26, 2013

ITIM Best Practices For Workflow Design

It sounds rather arrogant to call this blog entry "Best Practices" when it is merely my meandering thoughts that I'm about to spew forth. But a number of experiences of other people's work recently has almost moved me to tears.

So here is my list of "best practices".

Workflow Start Nodes

The Start Node (and the End Node for that matter) is not somewhere that code should be placed. Code can be placed in these nodes, of course. But it shouldn't ever be placed in here. For example, how could I know that the following Workflow has been customised:

If you have code that needs to be run as soon as the workflow starts, place a Script Node immediately after the Start Node and name it appropriately. This is MUCH better:

Script Node Size

If I open a Script Node and find that there are 1,000 lines of code in there, I will cry. If you have 1,000 lines of code you are probably doing something wrong. I would much rather see multiple Script Nodes with a sensible name describing their function laid out in an organised manner that gives me a good visual representation of the process rather than having to wade through hundreds or thousands of lines of code.

Workflow Extensions

If you have written some Javascript that looks cool and funky and re-usable, then make it cool and funky and re-usable by creating a Workflow Extension. Also, feel free to give extensions back to the community! (I shall publish some of my favourites soon!)


Hardcoding

If I see a hardcoded erglobalid reference in a Script Node, I will, in all probability, hunt the developer down and do very un-Tivolian things to him or her. The assumption that such code will work when promoted through various environments is very flawed and they are being lazy. Do Not Hardcode!

Commenting & Logging
You might think your code is readable, but the chances are that it could still do with some comments here and there. Even if the code is understandable, the reasoning behind performing the function may not be so clear and requirements documents and design documents have a habit of disappearing!

When it comes to logging, log appropriately and update the status of the workflow throughout the workflow process. The following statements are great to see in Script Nodes because we can successfully handle any problems and ensure that we always navigate our way to the End Node.

activity.setResult(activity.SUCCESS, "Attributes Successfully Verified");
activity.setResult(activity.FAILED, "Attribute Verification Failed due to " + errMsg);

And Finally

The chances are that someone will come along after you have done all your hard work and they will look at your workflow. Do you want them to cry? Or do you want them to be delighted in what they have seen? Make sure you delight people!

Friday, June 07, 2013

Careful Where You Point That Finger

Over the last 9 months, I have managed to shed almost 50lbs in weight - that's in excess of 20kg for those living in the metric world. As part of the process of losing weight, I took to tracking my walking and cycling habits using Endomondo on my phone - and what an excellent piece of software that is.

I did have a problem, though!

When I started the application and enabled GPS, I would wait quite a considerable amount of time before my location could be determined. Indeed, I had completed the bulk of some of my journeys by the time I got the "GPS Excellent" notification.

Initially, my thoughts went along these lines:

"Endomondo have coded their freebie application to not pick up my location in the hope that I will purchase the Pro version of their app"

I did purchase the Pro version - mostly because I thought the app was terrific with only a slight nod towards the hope that my GPS issues would be resolved. My GPS issues weren't resolved.

So I then thought:

"I must have dropped my phone at some point and 'disturbed' its ability to perform its GPS magic"

This seemed like a reasonable thought as all my friends and colleagues had perfectly acceptable Endomondo experiences on their phones. Indeed, my Samsung Galaxy II seemed to be really struggling when compared to a colleague's Nexus - and they were both Android phones.

I figured it must be time for a new phone. And proceeded to spend the next couple of weeks evaluating my options. And then... I woke up one morning to discover that my phone wanted to apply an update to my Android operating system.

An hour or so later, I had a shiny new OS. Admittedly, performance was awful initially as all my apps needed updating too. And, all my icons had been resorted alphabetically (for which I could quite happily exact some kind of tortuous revenge on the developer of the upgrade process). Apart from the initial dodgy performance and the tears that followed the pain of re-organising my icons, I noticed two things:
  • Playing 7x7 was super-fast - 7x7 is one of those simple yet addictive games that can keep me going for hours
  • GPS performance was perfect!

Wonderful! When starting Endomondo Pro now, I have to wait no longer than 2 or 3 seconds before I have my "GPS Excellent" notification. And to think I had been blaming the app and the phone.

So. What's the point of this story I hear you ask! Well, it is far too easy to make a judgement about an app, or a product or almost anything you might encounter during the course of your life. My problems had nothing to do with Endomondo's developers' coding ability; nor had it anything to do with how Samsung hardware technicians had put my phone together. Yet, even for a techie like me, my thought processes led me to think bad thoughts about both!

I have seen senior managers in enterprises make some very strange decisions in the past. They may decide that they no longer need "Huge Software Developer's Amazing IT Solution" because it costs too much and doesn't perform the way they expected and instead buy "Enormous Software Developer's Stupendous IT Solution" in the anticipation that it will cost less and perform perfectly.

And frequently, I don't see the cost benefit and the performance problems still exist.

Sometimes the problems we are faced with in the IT world are not being caused by the applications we are using. Sometimes we need to dig a little deeper to find out what the real cause of a problem is - whether it be the hardware platform we are using; the operating system in use; the networking setup; the interfaces; and of course, the users and their expectations.

And finally... maybe some of the big boys can learn from the developers of the apps we now use on a daily basis on our smartphones. Endomondo does exactly what I need in a manner which makes sense for me. Can we say the same thing for enterprise software?