Tuesday, May 26, 2015


I was recently asked how to successfully perform an HTTP POST request using TDI's HTTP Client Connector. The web server kept receiving NULL values for the posted parameters, and there seemed to be no obviously documented way to perform the task.

On the face of it, this seems like a straightforward piece of functionality, but the truth is that it is not quite as simple as it looks.

To understand how to pass parameters to a web server via HTTP POST using TDI, it would help to understand what is actually happening when using a standard web form.

Consider the following:

<form action="result.php" method="post">
  User Name<br />
  <input type="text" name="uname" />
  <br />
  Password<br />
  <input type="password" name="pass" />
  <br />
  <input type="submit" value="submit" />
</form>

The submit button on this form will cause the browser to send an HTTP request to the web server with a CONTENT-TYPE of: "application/x-www-form-urlencoded". This is the key to unlocking our problem!

The values for the attributes requested in the form are passed in the BODY of the request, rather like the query string you might see in an HTTP GET request:

uname=fred&pass=secret

So to mimic form submission using the POST method via TDI, all you need to do is follow these steps (a plain Java equivalent is sketched after the list):

  • Set the Mode to CallReply (if you want to see what the web server has done with your request)
  • Set the Request method to POST
  • Set the http.Content-Type to "application/x-www-form-urlencoded"
  • Set the http.body to name/value pairs in query string format
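
For anyone curious about what this produces on the wire, here is a plain-Java equivalent of those four steps. It is only a sketch: the URL and the uname/pass values are illustrative, matching the form above.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class FormPost {
  public static void main(String[] args) throws Exception {
    // Build the body exactly as a browser would for the form above
    String body = "uname=" + URLEncoder.encode("fred", "UTF-8")
                + "&pass=" + URLEncoder.encode("secret", "UTF-8");

    HttpURLConnection conn = (HttpURLConnection)
        new URL("http://example.com/result.php").openConnection();
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
    conn.setDoOutput(true);                  // we are sending a request body
    try (OutputStream out = conn.getOutputStream()) {
      out.write(body.getBytes("UTF-8"));
    }

    // The equivalent of CallReply mode: see what the server did with the request
    System.out.println("HTTP status: " + conn.getResponseCode());
  }
}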

Hopefully that will see your TDI Assembly Lines behaving themselves when acting as an HTTP client and attempting to use the POST method to transfer information.

Friday, May 15, 2015

LDAP Schema Issues

It's annoying when a basic task consumes too much of your time!

Getting an LDAP Operations Error when attempting to perform an ldapmodify on an object can be irksome. It is especially irksome if the change you are making is trivial!

Consider the following object that already exists in my LDAP Server:

dn: myattr=ABC,dc=com
objectclass: top
objectclass: mycustomobject
myattr: ABC
mytrivialattribute: Z

Now consider changing that object to the following:

dn: myattr=ABC,dc=com
objectclass: top
objectclass: mycustomobject
myattr: ABC
mytrivialattribute: Y

One might reasonably expect the LDAP modify operation to be successful bearing in mind how trivial the change is. But when an Operations Error is thrown back at you by a hissy-fitting Directory Server, you might start to scratch your head.

The V3.modifiedschema file looked perfect, as mytrivialattribute was defined with a syntax length of {1024}. But looking inside DB2 revealed something a little more sinister.

I followed these steps:
db2 connect to idsldap
db2 describe table idsldap.mytrivialattribute

And what I got back was:

EID with column length 4
MYTRIVIALATTRIBUTE with column length 240
RMYTRIVIALATTRIBUTE with column length 240

That didn't look right! Somehow, mytrivialattribute had been created using default parameters and the V3.modifiedschema file manually updated at a later date. As such, the database flatly refused to act upon any requests to add or modify mytrivialattribute values!

Getting around the problem is simple and can be done in a number of ways. I like the brutal approach, though (the commands are sketched after these two steps):

  • Drop the DB2 table idsldap.mytrivialattribute
  • Restart the LDAP server
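
In command-line terms, the brutal approach amounts to something like this. It assumes the idsldap database from above and a directory instance named idsinst; adjust both names to suit your environment.

db2 connect to idsldap
db2 drop table idsldap.mytrivialattribute
db2 connect reset
ibmslapd -I idsinst -k
ibmslapd -I idsinst

Stopping the instance (-k) and starting it again allows the directory server to recreate the table from the schema definition.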

Describing the table now returns:

EID with column length 4
MYTRIVIALATTRIBUTE with column length 1024
MYTRIVIALATTRIBUTE_T with column length 240
RMYTRIVIALATTRIBUTE_T with column length 240

So what was going on? Well, setting a length of 1024 on the schema definition meant that the LDAP Server wanted to put the full string into the column named MYTRIVIALATTRIBUTE and to put a truncated version of the string into MYTRIVIALATTRIBUTE_T. But the _T column didn't exist so the server couldn't perform the operation.

Dropping the table and allowing the directory server to recreate it properly on startup resolved the issue.

Of course, the question remains as to why there was a mismatch in the first place, but at least the problem has been diagnosed and rectified.

Monday, February 09, 2015

Giving Away Secrets

I've often been asked why I 'blog' (not that I update this blog anywhere near as often as I would like). I've had people say that I'm "giving away secrets" and that I won't be in demand in the job marketplace if I continually tell other people how to do what I do.

That may be so, but surely that's a good thing, right? I want to be able to move on and learn new things on a daily basis, and if I can get others to do the things that I traditionally do, then that creates the opportunity for me to move on.
Then there is the fact that I'm not getting any younger and retaining information in my head isn't as easy these days as it used to be. It's like creaking limbs and deteriorating eyesight... I find I'm becoming more forgetful (which my wife will more than happily corroborate). As such, writing all these things down is actually as much for my own private use as it is to help others.
And finally, altruism feels good. Now, I don't for one minute think that I'm the most altruistic person I know. Giving away information on IBM Tivoli security software can barely be described as altruistic really. But nonetheless, it feels like I'm doing a good thing. Last week, I received an email from a very kind individual which reminded me that it is a good thing. The email said:

"You are a rock star and a gentleman. Thanks so much for all the helpful material you have put together! I owe you at least a suitcase of beer. Feel free to cash in anytime."
I don't get many emails of that type. Mostly, I get emails asking for free consultancy so it brightened my day when I saw the above. So, to the sender (Tim), I say thank you for the kind offer of beer but it's really not necessary. Acknowledgement that the material is worthwhile is quite enough for me.

Friday, December 12, 2014

The Who And When Of Account Recertification

We all know that there are rules and laws in place that mean that organisations must demonstrate that they have control over access to their IT systems. We've all heard about the concepts around recertification of access rights. But what is the best way to certify that the access rights in place are still appropriate and who should undertake that certification task?


A starting point, at least, is to identify the potential candidates for certifying access rights. They are, in no particular order:

  • The account owner themselves
  • The account owner's line manager
  • The application owner
  • The service owner
  • The person who defined the role that granted the entitlements

Of course, some of these people may or may not actually exist. In some cases, the person might not be appropriate for undertaking certification tasks. Line Managers these days are often merely the person you see once or twice a year for appraisals and to be given the good news about your promotion/bonus (or otherwise). Functional Managers or Project Managers may be more appropriate, but can they be identified? Is that relationship with the account owner held anywhere? Unlikely. And definitely unlikely in the context of an identity management platform!

And what about service owners? Maybe this is the person with responsibility for ensuring the lights stay on for a particular system, and maybe they don't care what the 500,000 accounts on that system are doing? Maybe their system is being used by multiple applications and they would prefer the application owners to take responsibility for the accounts?

And what of the account owners themselves? Self-certification is more than likely going to merely yield a "yes, I still need that account and all those entitlements" response! But is that better than nothing? Maybe! Then again, maybe the account owner is unaware of what the account gives them. What do they do when they are told they have an AD account which is a member of the PC-CB-100 group? Will they understand what that means? For many users, the answer will be no!

Ultimately, the decision as to who is best placed to perform the certification will come down to a number of factors:

  • Can a person be identified?
  • Can the person recognise the information they will be presented with and correlate it with a day-to-day job function?
  • Is the person empowered to make the certification?

There is a high chance that only one person will fit the bill given these criteria!


Some organisations like to perform regular recertifications... it's not unusual to see quarterly returns being required. For most people, however, their job function doesn't change too often and a lot of effort will have gone into gathering information which (for the most part) is unchanged from the previous round of recertifications.

Is there a better way?

Merely lengthening the amount of time between cycles isn't necessarily the answer. But maybe recertification can be more "targeted". Rather than a time-based approach, recertification should probably target those people who have undergone some form of change or other life-cycle event, such as:

  • Change of Job Title
  • Change of Line Manager
  • Change of Organisational Unit
  • Return from long-term leave
  • Lack of use on the account

All of these events can serve as recertification triggers, removing the need to wait for a quarterly review. The benefits are easy to articulate to any CISO - immediate review and tighter control of access rights.

Of course, these triggers can be used in conjunction with a time-based approach - but maybe that time-based approach should be based on the last time a recertification was performed for that user/account/entitlement rather than a fixed date in the calendar.
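
To make that combined policy concrete, here is a minimal sketch. No identity management product API is assumed; all names are hypothetical.

import java.time.Duration;
import java.time.Instant;

public class RecertPolicy {
  // Rolling fallback: recertify anything not reviewed within the last year
  private static final Duration MAX_AGE = Duration.ofDays(365);

  // Due when a life-cycle event has fired, or when the last recertification
  // for this user/account/entitlement is older than the cycle length
  static boolean recertDue(boolean lifecycleEventSeen, Instant lastRecertified) {
    if (lifecycleEventSeen) {
      return true; // targeted trigger: change of manager, job title, etc.
    }
    return Duration.between(lastRecertified, Instant.now()).compareTo(MAX_AGE) > 0;
  }

  public static void main(String[] args) {
    Instant reviewedLastYear = Instant.now().minus(Duration.ofDays(400));
    System.out.println(recertDue(false, reviewedLastYear)); // true: rolling fallback
    System.out.println(recertDue(true, Instant.now()));     // true: life-cycle trigger
  }
}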

Thursday, December 04, 2014

ISIM Workflow Extensions

During recent ramblings, I mentioned that I would publish some of my favourite ISIM workflow extensions. The workflow extensions that are provided "out of the box" cover tasks such as adding, modifying and deleting accounts and identities as well as enforcing policy and other standard operations. However, the "out of the box" list could do with having a number of useful extensions added.

One of my favourites (and something I pretty much insist on being installed as soon as I've deployed ISIM) is my version of a sendmail. ISIM has the ability to send emails, but only to those identities who have an email address. However, what if you wanted to send an email to an address which was not attached to any identity? What if you wanted to email a user during a self-registration workflow at which point the identity has not yet been created?

My sendmail extension is just a handful of lines long, uses standard ISIM methods and takes just the following arguments: eMail Address, Subject and Body.

The code is very simple indeed:

public ActivityResult sendMail(String mailAddress, String mailSubject, String mailBody) {
  try {
    List<String> addresses = new ArrayList<String>();
    addresses.add(mailAddress);
    // The last two arguments are the plain-text and XHTML renderings of the body
    NotificationMessage nMessage = new NotificationMessage(addresses, mailSubject, mailBody, mailBody);

    // ... dispatch nMessage via the standard ISIM mail facility here (the exact
    // call varies by release, so it is not reproduced) ...

    return new ActivityResult(ActivityResult.STATUS_COMPLETE, ActivityResult.SUCCESS,
      "eMail Sent", null);
  } catch (Exception e) {
    return new ActivityResult(ActivityResult.STATUS_COMPLETE, ActivityResult.FAILED,
      "eMail Not Sent: " + e.getMessage(), null);
  }
}
Registering the JAR on the classpath and declaring the extension in the workflowextensions.xml file is all that is required to make the extension available:

    <ACTIVITY ACTIVITYID="sendMail" LIMIT="600000">
      <IMPLEMENTATION_TYPE>
        <APPLICATION CLASS_NAME="com.sswann.sendMail" METHOD_NAME="sendMail" />
      </IMPLEMENTATION_TYPE>
      <PARAMETERS>
        <IN_PARAMETERS PARAM_ID="recipient" TYPE="String" />
        <IN_PARAMETERS PARAM_ID="subject" TYPE="String" />
        <IN_PARAMETERS PARAM_ID="body" TYPE="String" />
      </PARAMETERS>
    </ACTIVITY>

From here, it's merely a matter of constructing your relevant data and handling the activity.resultSummary correctly in your subsequent nodes.

Previously, I stated that workflow extensions should be created when you find you are repeating the same JavaScript over and over again. If you need to check the status of an account owner before invoking an account operation, that sounds like a great candidate for a workflow extension: a solid, reusable component (a skeleton is sketched after the list below). A list of useful extensions might include:
  • Check Account Already Exists
  • Check Owner Status
  • Check Manager Status
  • Read/Update LDAP Object (not an identity or account)
  • Read/Update Database Object
  • Read/Update Message on ESB (MQ)
  • Call Web Service
  • Create Helpdesk Ticket/Send Alert
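
By way of illustration, a "Check Owner Status" extension would follow exactly the same shape as sendMail above. This is just a sketch: lookupPersonStatus is a hypothetical helper standing in for whatever directory lookup suits your deployment.

public ActivityResult checkOwnerStatus(String ownerDN) {
  try {
    // Resolve the owner's status (active, suspended, ...) from the directory
    String status = lookupPersonStatus(ownerDN);
    return new ActivityResult(ActivityResult.STATUS_COMPLETE, ActivityResult.SUCCESS,
      status, null);
  } catch (Exception e) {
    return new ActivityResult(ActivityResult.STATUS_COMPLETE, ActivityResult.FAILED,
      e.getMessage(), null);
  }
}

Subsequent workflow nodes can then branch on the result rather than re-implementing the lookup in JavaScript each time.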

Have you any good candidates for an extension?

Thursday, June 05, 2014

LDAP Load Balancing And Timeouts

It's been ages since I posted anything despite the fact I have lots to say. The last few months have been extraordinarily busy and challenging (and not necessarily in a technical way).

Some colleagues have pointed out that I have aged significantly in recent times. It is true that the hair on my chin has gone from grey to white and that the hair around my temples is severely lacking in any kind of hue. It seems that my work-life balance has gone slightly askew.

Balancing brings me rather neatly on to the topic of LDAP Load Balancing. I say Load Balancing, though what I really mean is providing a mechanism for assuring the availability of Directory Servers for IBM Security Identity Manager. My preference is for all load to go to one directory server, which replicates to a second that can be used should something go awry on the primary.

So, what's the best way to ensure traffic routes to the correct directory server and stays stuck to that directory server until something happens? Well, that's the domain of the F5 BIG-IP beast. Or is it?

There is plenty of documentation around the web stating that one should tread carefully when attempting to use these kinds of tools to front a directory server (and IBM Tivoli Directory Server in particular). From recent dealings, I've observed the following, which is worth sharing:

Point 1
Be careful of the BIG-IP idle timeout default of 300 seconds. Any connection that BIG-IP considers stale after 300 seconds will be torn down. (You should see how a TDI Assembly Line in Iterator mode behaves when that happens and auto-reconnect with skip-forward has not been enabled!)

Point 2
Be careful of the TCP connection setup and seriously consider using Performance Layer 4 as the connection profile on the BIG-IP. A 50% increase in throughput was not atypical in some of my recent tests.

Point 3
Ensure that the default settings for LDAP pool handling are updated in ISIM's enRole.properties file. In particular, pay attention to the following:

enrole.connectionpool.timeout
enrole.connectionpool.retryCountForSUException

The timeout should be set just a little below the BIG-IP's idle timeout. The retryCountForSUException should be set to ensure that ISIM reconnects should the BIG-IP device tear-down the connection regardless of the timeout.
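
By way of a hedged illustration, with the BIG-IP left at its 300-second default, the properties might end up looking like this (the values are illustrative; tune them to your environment):

enrole.connectionpool.timeout=290
enrole.connectionpool.retryCountForSUException=3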

And with those tips, you should have a highly available infrastructure with a decent level of fault tolerance.

Thursday, March 06, 2014

Understanding ISIM Reconciliations

Most ITIM/ISIM gurus will understand what goes on during Service reconciliation. Let's be honest, most gurus have had to write countless adapters and debug countless problems when they arise.

What happens, though, if you have one ISIM environment that can reconcile a service successfully but a second ISIM environment which cannot reconcile the exact same service? And let us, for a moment, assume that both ISIM environments are configured identically.

Of course, if something works in one environment and doesn't work in another, there must be a difference somewhere. But the ISIM version is the same, the service profile is the same, the service definition is the same, and it's talking to the same TDI instance which performs the grunt work of retrieving all the data from the target service. On the face of it, there is no reason for one environment to behave any differently from the other.

Yet it can happen! In fact, I recently saw an ISIM environment get itself into a hung state trying to reconcile a service even though all the reconcilable objects had made their way into ISIM.

When ISIM reconciles a service, the result is that supporting data objects are stored in the LDAP under the service container (erglobalid=123,ou=services,erglobalid=00000000000000000000,ou=org,dc=com) and ultimately accounts are stored under the ou=accounts and ou=orphans containers. I say ultimately for a reason. The accounts are actually temporarily stored under the service container too before being moved to a more appropriate container.

And therein lay the difference in my two environments. The working environment had no accounts stored under the service container prior to executing the reconciliation. Somehow, the non-working environment did have some accounts left hanging about under the service container.

A ruthless LDAPDELETE of all objects subordinate to the service entry was all that was required for the reconciliation to complete successfully.
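
For the record, a sketch using the standard LDAP command-line tools; the host, credentials and erglobalid values are illustrative, following the example DN above:

# List anything still parked one level beneath the service entry
ldapsearch -h ldaphost -p 389 -D cn=root -w secret \
  -b "erglobalid=123,ou=services,erglobalid=00000000000000000000,ou=org,dc=com" \
  -s one "(objectclass=*)" dn

# Then remove each stray entry before re-running the reconciliation
ldapdelete -h ldaphost -p 389 -D cn=root -w secret \
  "erglobalid=456,erglobalid=123,ou=services,erglobalid=00000000000000000000,ou=org,dc=com"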

Next time you have a misbehaving reconciliation process, why not check how previous reconciliations have been left by having a look under your service container? You might be surprised at what you find.