
Monday, January 15, 2018

Déjà Vu - IGI Handling of AD Dormant Accounts

I know I had this problem years ago with ISIM. Why do I have this problem again with IGI?

Years ago, the ISIM adapter for Active Directory would reconcile the Last Logon Date that was specific to whichever Domain Controller the adapter connected to. That meant, of course, that the REAL last logon date for a user was not being reconciled, and any life-cycle rule built to query that attribute and act upon the value contained therein would, in all probability, cause a major "hoo-ha".

IBM resolved the issue by adding erADLastLogonTimeStamp, which returned the domain-replicated last logon date (albeit the date could be +/- 14 days out in accuracy, for reasons which Bill Gates would be best placed to explain).

IGI is a new product but uses the ISIM adapters. However, the internal IGI mapping for Last Logon uses the old erADLastLogon date rather than the slightly more reliable erADLastLogonTimeStamp.

Doh!

What are these crazy kids thinking? Did the ISIM guys not talk to the IGI guys? (That said, I've checked the latest resource.def file for the ISIM adapter and I'm disappointed to report that the erLastAccessDate mapping is set to erADLastLogon!).

In any case... if you want to run any kind of dormancy rule on Active Directory accounts in IGI, make sure you do this BEFORE you reconcile your service:

update itimuser.entity_schema_mapping set custom_attribute_name='eradlastlogontimestamp' where custom_class_name='eradaccount' and system_attribute_name='erlastaccessdate';

This little gem will ensure that the correct AD attribute is used as the last access date rather than the pitiful erADLastLogon attribute which is borderline useless.
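If you want to sanity-check the mapping before and after making that change, a quick query against the same table will show which AD attribute is feeding erlastaccessdate. A minimal sketch, assuming the same itimuser schema used above:

select custom_attribute_name from itimuser.entity_schema_mapping where custom_class_name='eradaccount' and system_attribute_name='erlastaccessdate';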

Maybe IBM will update their configuration and resource.def file and documentation at some point (again).

Sunday, August 13, 2017

IGI v ISIM v ISAM v ITDI

I've been writing this blog for many years now and I typically use it as a means of recording snippets of information that I'd ideally like to refer back to, and that I think others would be interested in.

So, what are you guys interested in?

Well, the beauty of running a blog like this for so long is that I get to see which posts are popular and which are not! I get to see which products you guys are using and abusing. And I can tell you that the popularity of these IBM security products can be easily ranked. In reverse order (by popularity):

IBM Security Identity Governance & Intelligence (IGI) is the new kid on the block so it is understandable that not many people are interested in what I have to say about IGI. Maybe, over time, this will change. Fingers crossed!

IBM Security Access Manager (ISAM) has been around forever but it seems that nobody is interested anymore. That's probably a sign that organisations have shifted their focus elsewhere, that federated security (which ISAM handles really well) is the way forward, and that freebie tools which are SAML and OpenID Connect ready are preferable?

IBM Security Identity Manager (ISIM) has been around forever too, but it has always been a heavyweight beast of a product that requires significant investment and therefore is used by huge organisations only. I don't know how many ISIM customers there are out there, but I'm guessing it is a dwindling list given that IGI is seen as the long-term replacement.

And then we come to IBM Tivoli Directory Integrator (ITDI). Of all the things I've ever blogged about, it seems that TDI generates the most interest - and by a country mile! As an example, the last time I mentioned TDI, it managed to gather more than 300 times as many page requests as everything to do with IGI put together! One single post!

TDI is still a tool I go to on a daily basis. It is truly wonderful and flexible and easy to get to grips with. Maybe I should focus more on TDI topics?

In any case, I'd be interested to hear from whoever out there reads the stuff that I write. What topics would you like me to cover? What products do you think deserve focus? Do you still find any of this information worthwhile?

I still hope there are plenty of you IBM Security specialists out there helping to deliver a smarter, more secure planet.

Tuesday, August 30, 2016

Javadoc Updates

It has been a while, but I've finally got round to uploading the latest Javadocs for IBM Security Identity Manager v7.0 and IBM Security Access Manager v9.0.

These can be found by following the links from here: https://www.stephen-swann.co.uk/links-and-tools/

Enjoy - if it's possible to enjoy Javadocs!

Friday, December 12, 2014

The Who And When Of Account Recertification

We all know that there are rules and laws in place that mean that organisations must demonstrate that they have control over access to their IT systems. We've all heard about the concepts around recertification of access rights. But what is the best way to certify that the access rights in place are still appropriate and who should undertake that certification task?

Who

A starting point, at least, is to identify the potential candidates for certifying access rights. They are, in no particular order:

  • The account owner themselves
  • The account owner's line manager
  • The application owner
  • The service owner
  • The person who defined the role that granted the entitlements


Of course, some of these people may or may not actually exist. In some cases, the person might not be appropriate for undertaking certification tasks. Line Managers these days are often merely the person who you see once or twice a year for appraisals and to be given the good news about your promotion/bonus (or otherwise). Functional Managers or Project Managers may be more appropriate, but can they be identified? Is that relationship with the account owner held anywhere? Unlikely. And definitely unlikely in the context of an identity management platform!

And what about service owners? Maybe the service owner is the person who has responsibility for ensuring the lights stay on for a particular system, and maybe she doesn't care what the 500,000 accounts on that system are doing? Maybe her system is being used by multiple applications and she would prefer the application owners to take responsibility for the accounts?

And what of the account owner themselves? Self-certifying is more than likely going to merely yield a "yes, I still need that account and all those entitlements" response! But is that better than nothing? Maybe! Then again, maybe the account owner is unaware of what the account gives them. When they are told they have an AD account which is a member of the PC-CB-100 group, will they understand what that means? For many users, the answer will be no!

Ultimately, the decision as to who is best placed to perform the certification will come down to a number of factors:

  • Can a person be identified?
  • Can the person recognise the information they will be presented with and correlate that with a day-to-day job function?
  • Is the person empowered to make the certification?


There is a high chance that only one person fits the bill given these criteria!

When

Some organisations like to perform regular recertifications... it's not unusual to see quarterly returns being required. For most people, however, their job function doesn't change too often and a lot of effort will have gone into gathering information which (for the most part) is unchanged from the previous round of recertifications.

Is there a better way?

Merely lengthening the amount of time between cycles isn't necessarily the answer. But maybe recertification can be more "targeted". Rather than a time-based approach, recertification should probably target those people who have undergone some form of change or other life-cycle event, such as:

  • Change of Job Title
  • Change of Line Manager
  • Change of Organisational Unit
  • Return from long-term leave
  • Lack of use on the account


All of these events can be used as useful recertification triggers rather than waiting for a quarterly review. The benefits are easy to articulate to any CISO - immediate review and more control of access rights.
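As a rough illustration only (the person attributes and the 90-day threshold below are hypothetical and not any product API), the trigger decision boils down to something like this:

// Hypothetical sketch: decide whether a change to a person should trigger a
// targeted recertification instead of waiting for the next scheduled cycle.
function shouldTriggerRecert(oldPerson, newPerson, daysSinceLastUse) {
  if (oldPerson.title != newPerson.title) return true;       // change of job title
  if (oldPerson.manager != newPerson.manager) return true;   // change of line manager
  if (oldPerson.ou != newPerson.ou) return true;             // change of organisational unit
  if (newPerson.returnedFromLongTermLeave) return true;      // return from long-term leave
  if (daysSinceLastUse > 90) return true;                    // lack of use on the account
  return false;                                              // otherwise, leave it for the scheduled review
}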

Of course, these triggers can be used in conjunction with a time-based approach - but maybe that time-based approach should be based on the last time a recertification was performed for that user/account/entitlement rather than a fixed date in the calendar.

Thursday, December 04, 2014

ISIM Workflow Extensions

During recent ramblings, I mentioned that I would publish some of my favourite ISIM workflow extensions. The workflow extensions that are provided "out of the box" cover tasks such as adding, modifying and deleting accounts and identities as well as enforcing policy and other standard operations. However, the "out of the box" list could do with a number of useful additions.

One of my favourites (and something I pretty much insist on being installed as soon as I've deployed ISIM) is my version of a sendmail. ISIM has the ability to send emails, but only to those identities who have an email address. However, what if you wanted to send an email to an address which was not attached to any identity? What if you wanted to email a user during a self-registration workflow at which point the identity has not yet been created?

My sendmail extension is just a handful of lines long, uses standard ISIM methods and takes just the following arguments: eMail Address, Subject and Body.

The code is very simple indeed:

public ActivityResult sendMail(String mailAddress, String mailSubject, String mailBody) {
  try {
    // Build the recipient list - a single, arbitrary address with no ISIM identity required
    List<String> addresses = new ArrayList<String>();
    addresses.add(mailAddress);

    // Build the notification message for that recipient, subject and body
    NotificationMessage nMessage = new NotificationMessage(addresses, mailSubject, mailBody, mailBody);

    // Hand the message to ISIM's own mail manager for delivery
    MailManager.notify(nMessage);

    // Report success back to the workflow engine
    return new ActivityResult(ActivityResult.STATUS_COMPLETE,
      ActivityResult.SUCCESS,
      "eMail Sent",
      null);
  } catch (Exception e) {
    // Report failure so that subsequent workflow nodes can branch on the result
    return new ActivityResult(ActivityResult.STATUS_COMPLETE,
      ActivityResult.FAILED,
      "eMail Not Sent",
      null);
  }
}

Deploying the JAR and registering the extension in the workflowextensions.xml file is all that is required to make the extension available.

<ACTIVITY ACTIVITYID="sendMail" LIMIT="600000">
  <IMPLEMENTATION_TYPE>
    <APPLICATION CLASS_NAME="com.sswann.sendMail" METHOD_NAME="sendMail" />
  </IMPLEMENTATION_TYPE>
  <TRANSITION_RESTRICTION SPLIT="XOR" />
  <PARAMETERS>
    <IN_PARAMETERS PARAM_ID="recipient" TYPE="String" />
    <IN_PARAMETERS PARAM_ID="subject" TYPE="String" />
    <IN_PARAMETERS PARAM_ID="body" TYPE="String" />
  </PARAMETERS>
</ACTIVITY>


From here, it's merely a matter of constructing your relevant data and handling the activity.resultSummary correctly in your subsequent nodes.
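As a minimal sketch (assuming the extension node is registered as sendMail, as above, and that the standard activity object is available on the transition), the transition line out of that node can simply test the summary code that the extension set:

// Transition condition leaving the sendMail extension node
activity.resultSummary == activity.SUCCESS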

Previously, I stated that workflow extensions should be created when you find you are repeating Javascript over and over again. If you need to check the status of an account owner before invoking an account operation, that would sound like a great candidate for creating a workflow extension and having a solid reusable component. A list of useful extensions might include:
  • Check Account Already Exists
  • Check Owner Status
  • Check Manager Status
  • Read/Update LDAP Object (not an identity or account)
  • Read/Update Database Object
  • Read/Update Message on ESB (MQ)
  • Call Web Service
  • Create Helpdesk Ticket/Send Alert

Have you any good candidates for an extension?

Thursday, June 05, 2014

LDAP Load Balancing And Timeouts

It's been ages since I posted anything despite the fact I have lots to say. The last few months have been extraordinarily busy and challenging (and not necessarily in a technical way).

Some colleagues have pointed out that I have aged significantly in recent times. It is true that the hair on my chin has gone from grey to white and that the hair around my temples is severely lacking in any kind of hue. It seems that my work-life balance has gone slightly askew.

Balancing brings me rather neatly on to the topic of LDAP Load Balancing. I say Load Balancing, though what I really mean is providing a mechanism for assuring availability of Directory Servers for IBM Security Identity Manager. My preference is for load to go to one directory server, which replicates to a second directory server that can be used should something go awry on the primary.

So, what's the best way to ensure traffic routes to the correct directory server and stays stuck to that directory server until something happens? Well, that's the domain of the F5 BIG-IP beast. Or is it?

There is plenty of documentation around the web that states that one should tread carefully when attempting to use these kinds of tools to front a directory server (and IBM Tivoli Directory Server in particular). In recent dealings, I've observed the following points, which are worth sharing:

Point 1
Be careful of the BIG-IP idle timeout default of 300 seconds. Any connection that BIG-IP thinks is stale after 300 seconds will be torn down. (You should see how a TDI Assembly Line in iterator mode behaves with that without auto-reconnect and skip-forward disabled!)

Point 2
Be careful of the TCP connection setup and seriously consider using Performance Layer 4 as the connection profile on the BIG-IP. A 50% increase in throughput was not atypical in some of my recent tests.

Point 3
Ensure that the default settings for LDAP pool handling are updated in ISIM's enRole.properties file. In particular, pay attention to the following:

enrole.connectionpool.timeout
enrole.connectionpool.retryCountForSUException

The timeout should be set just a little below the BIG-IP's idle timeout. The retryCountForSUException should be set to ensure that ISIM reconnects should the BIG-IP device tear down the connection regardless of the timeout.
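For example, the end result might look something like this (the values are illustrative only and should be tuned against your own BIG-IP idle timeout):

# enRole.properties - illustrative values only, assuming the default 300 second BIG-IP idle timeout
enrole.connectionpool.timeout=280
enrole.connectionpool.retryCountForSUException=3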

And with those tips, you should have a highly available infrastructure with a level of tolerance.

Thursday, March 06, 2014

Understanding ISIM Reconciliations

Most ITIM/ISIM gurus will understand what goes on during Service reconciliation. Let's be honest, most gurus have had to write countless adapters and perform countless bouts of debugging problems when they arise.

What happens, though, if you have one ISIM environment that can reconcile a service successfully but a second ISIM environment which cannot reconcile the exact same service? And let us, for a moment, assume that both ISIM environments are configured identically.

Of course, if something works in one environment and doesn't work in another, there must be a difference somewhere. But the ISIM version is the same, the service profile is the same, the service definition is the same, it's talking to the same TDI instance which performs the grunt work of retrieving all the data from the target service. On the face of it - there is no reason for one environment to behave any differently than the other.

Yet it can happen! In fact, I recently saw an ISIM environment get itself into a hung state trying to reconcile a service yet all the reconcilable objects had made their way into ISIM.

When ISIM reconciles a service, the result is that supporting data objects are stored in the LDAP under the service container (erglobalid=123,ou=services,erglobalid=00000000000000000000,ou=org,dc=com) and ultimately accounts are stored under the ou=accounts and ou=orphans containers. I say ultimately for a reason. The accounts are actually temporarily stored under the service container too before being moved to a more appropriate container.

And therein lay the difference in my two environments. The working environment had no accounts stored under the service container prior to executing the reconciliation. Somehow, the non-working environment did have some accounts left hanging about under the service container.

A ruthless LDAPDELETE on all objects sub-ordinate to the service was all that was required for the reconciliation to complete successfully.

Next time you have a misbehaving reconciliation process, why not check to see how previous reconciliations have been left by having a look under your service container. You might be surprised at what you will find.
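A rough sketch of that check (the host, credentials and service DN below are placeholders for your own environment, and the trailing 1.1 simply suppresses attribute values so that only the DNs are returned):

ldapsearch -h ldaphost -p 389 -D "cn=root" -w password \
  -b "erglobalid=123,ou=services,erglobalid=00000000000000000000,ou=org,dc=com" \
  -s one "(objectclass=*)" 1.1

Anything listed beyond the supporting data objects you expect to see is a candidate for investigation (and, potentially, for that ruthless LDAPDELETE).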

Thursday, September 26, 2013

ITIM Best Practices For Workflow Design

It sounds rather arrogant to call this blog entry "Best Practices" when it is merely my meandering thoughts that I'm about to spew forth. But a number of experiences of other people's work recently have almost moved me to tears.

So here is my list of "best practices".

Workflow Start Nodes

The Start Node (and the End Node, for that matter) is not somewhere that code should be placed. Code can be placed in these nodes, of course, but it shouldn't ever be. For example, how could I know that the following Workflow has been customised:

[Screenshot: a workflow with customisation code hidden inside the Start Node]

If you have code that needs to be run as soon as the workflow starts, place a Script Node immediately after the Start Node and name it appropriately. This is MUCH better:

[Screenshot: the same workflow with a clearly named Script Node immediately after the Start Node]

Script Node Size

If I open a Script Node and find that there are 1,000 lines of code in there, I will cry. If you have 1,000 lines of code you are probably doing something wrong. I would much rather see multiple Script Nodes with a sensible name describing their function laid out in an organised manner that gives me a good visual representation of the process rather than having to wade through hundreds or thousands of lines of code.

Workflow Extensions

If you have written some Javascript that looks cool and funky and re-usable, then make it cool and funky and re-usable by creating a Workflow Extension. Also, feel free to give extensions back to the community! (I shall publish some of my favourites soon!)

Hardcoding!

If I see a hardcoded erglobalid reference in a Script Node, I will, in all probability, hunt the developer down and do very un-Tivolian things to him or her. Their assumption that their code will work when promoted through various environments is very flawed and they are being lazy. Do Not Hardcode!

Commenting & Logging

You might think your code is readable, but the chances are that it could still do with some comments here and there. Even if the code is understandable, the reasoning behind performing the function may not be so clear, and requirements documents and design documents have a habit of disappearing!

When it comes to logging, log appropriately and update the status of the workflow throughout the workflow process. The following statements are great to see in Script Nodes because we can successfully handle any problems and ensure that we always navigate our way to the End Node.

activity.setResult(activity.SUCCESS, "Attributes Successfully Verified");
activity.setResult(activity.FAILED, "Attribute Verification Failed due to " + errMsg);

And Finally

The chances are that someone will come along after you have done all your hard work and they will look at your workflow. Do you want them to cry? Or do you want them to be delighted in what they have seen? Make sure you delight people!

Tuesday, April 16, 2013

Tempus Fugit

I remember being a follower of a blogger who wrote about IBM Tivoli security solutions and becoming quite concerned for his well-being when he "stopped" blogging for a while. When he had gone fully six months without blogging, I had myself convinced that something terrible had happened to him personally.

And now I find that I have done the same thing. My periodical briefings have stopped. But fear not - it's not because of any ill health. It's not because of a change in career. It's not even to do with boredom. It has everything to do with being far too busy and that's the best reason of all for the temporary blip in my output.

But, time flies... and too much time has passed since I last committed my thoughts to writing.

So what has happened in the last six months? Well, the IBM Tivoli re-branding exercise is in full swing, with the IBM Security Identity Manager and IBM Security Access Manager products having been released.

And what has changed in those products?

ISAM has a brand new deployment process which is greatly simplified. However, for TAMeb users who have deployed their software on Windows, beware: the upgrade process might not behave as you expect. Why? The move to a 64-bit architecture, that's why. Think seriously about any attempt to perform an in-situ upgrade!

You might like to also check compression settings on your WebSEALs - a respected colleague of mine has already encountered some fun and games with those!

And ISIM? Is it just a pure re-branding exercise? Not at all. Some functional additions are definitely welcome, like the controls added to the services offering retry operations on failed resources. Account ownership and role attributes look interesting (despite how they have been implemented). Privileged Identity Management is a great addition, as is the inclusion of a supported web services API.

But core processing that ITIM administrators will know and love is still there!

And what of my work recently? Well, I've been spending a lot of time concentrating on federated security and environment upgrades: working on pre-sales, sprucing up designs, sizing projects, and helping those around me get the best use out of their IBM Tivoli/Security solutions.

So much has happened in recent months, though, that I hardly know where to start in documenting it all. There has been fun with SAML v2. Flirtations with FESI extension re-writes. Dalliances with web services and APIs. Encounters with business process re-engineering.

My next article, however, will likely be an IBM Tivoli Directory Integrator article centred on best practice for collaborative development. That sounds like a tricky one!