Friday, December 31, 2010
2010 is drawing to a close. Indeed, it is not long to go before I can open the Champagne and welcome in 2011! So it should be a good time to reflect on the past twelve months. The pagans used to burn the yule log for 12 days and some like to think that each day was devoted to reflecting on one of the preceding months. The first day of yule would be spent reflecting on January; the second on February and so on.
Fortunately for those of you who happen upon this blog, my memory isn't so good and I am incapable of remembering what happened yesterday, never mind dreaming up something interesting to remark upon for each month of the year!
What I do remember, however, is this:
January
I felt very alone in January as I managed to get stuck in Oxford during some very heavy falls of snow. That said, I used the time wisely by visiting Raul's cocktail bar for something to warm the tummy!
February
My time in Oxford drew to a close. In a way, I miss the friends I made there. Matt and Fizzy-pop will be hard to forget!
March
I managed to finalise a new adapter for IBM Tivoli Identity Manager with some considerable input from RACF guru, Alan Harrison. The adapter is an extension of the IBM RACF Adapter but it reconciles additional information from the zSecure suite of utilities to give a complete view of RACF data from within ITIM. ITIM was enhanced to provide reporting functionality which allowed ITIM users to see vital RACF information and security breaches.
April
I know it was my birthday during this month. But for the life of me, I can't seem to remember too much about it. I do know that I was working on an HR Feeder mechanism for a Tier One bank. But, let's be honest, in the world of Identity Management, this is pretty much step 1! Been there. Done that. Countless times, even.
May
Nothing to say about May. Did it happen? Actually, I did knock together a website for a friend of mine one weekend.
June
I spent some time extending the OPAL-based APIScript for IBM Tivoli Identity Manager with functionality that should make the building of future ITIM environments a lot simpler!
July
My first forays into the world of IBM Tivoli Directory Integrator and Twitter integration. I was mildly excited by the possibilities even if the enterprise world is not!
August
Continuing my extra-curricular activities with TDI, I decided to see how ActiveMQ could be used to pull together a high-availability solution for TDI messaging. Results were positive, though ActiveMQ isn't something I've actually come across in use by the customers I deal with. I got into some bother with some ladies this month! Owls Ladies Hockey Club wanted their website updated and I said I'd do it for free (as I do have a connection with Owls, albeit tenuous). So, another weekend lost, but the result seemed to please the girls: http://www.owlsladies.com/
September
I finalised TDI connectors for Google and Salesforce and even wrapped them into ITIM Adapters. These are now being actively marketed by my employers, Pirean, and further details can be found on their website. My wife also coaxed me into putting together a simple website for her one weekend... she wanted to showcase some of her artwork. Here's a plug for her work: http://www.jackiespence.com/
October
October was spent enjoying the sights and sounds of London on an engagement with one of the big universities in the city. Productivity seemed to be at an all-time high as integration guides for Tivoli Access Manager seemed to materialise with great rapidity! Lotus Connections, Quickr, MySource Matrix and Moodle integration were defined, tested and deployed at such a rate that I got called a Legend. I like that name. I like customers calling me Legend. I didn't realise how much I'd like it until it happened. It may never happen again, so I need to make sure that this particular episode is recorded for posterity! For reference, it was dilftechnical who called me that.
November
I finally got round to posting an article about my Twitter connector for IBM Tivoli Directory Integrator based on an open-source OAuth library and JTwitter. I can only imagine that it was this article that prompted the great Eddie Hartman to phone me and also send me a Metamerge pen as a thank you. I shall treasure the pen always - or at least until it runs out of ink.
December
More snow. Thankfully, I spent most of the month working from home and rarely had to get out of my slippers never mind brave the 10 inches of snow outside my window!
So that was the year that was. Here's looking forward to a fabulous 2011. Happy New Year.
Friday, December 17, 2010
The Perils Of Privileged Identities
It seems that Privileged Identity Management is the long-slumbering beast that has finally awoken, and organisations are now scratching their heads wondering what it means to them and (more crucially) how they are going to address the problems posed by it.
The issue of privileged accounts is certainly not a new one. Operating Systems have always had the concept of a root or administrator account, for example. These super accounts are not the kind of accounts that people should be using on a business as usual basis (though I suspect this "rule" is a case of Do As I Say, Not As I Do). Sometimes, though, there is just no way of getting around the need to access a system or application with a privileged account.
Rogue insider employees are on the increase, if press reports are to be believed. Indeed, it is probably a fair comment given that staff turnover has increased dramatically in recent years! So it makes a lot of sense to ensure that employees don't have permanent access to credentials that can cause damage to your IT infrastructure. And so... Privileged Identity Management has finally come of age.
Or has it?
IBM, earlier this year, announced an initiative based on the integration of Tivoli Access Manager for Enterprise Single Sign On and Tivoli Identity Manager to address the needs of privileged access and details of their solution can be found in their Redguide publication: "Centrally Managing and Auditing Privileged User Identities by Using the IBM Integration Services for Privileged Identity Management".
It works on the basis of being able to elevate your privileges upon request and having TAM ESSO check-out the elevated privileges from an ITIM vault and inject the credentials into the target platform/application login sequence without the end user ever having to know the privileged account's password. Sounds like genius, right? Maybe. It's certainly a neat way of taking the best bits of two fairly solid applications to cater for a gap in both products. And in the main it ought to work... at least for those applications that TAM ESSO can integrate with.
Most software applications that claim to provide a solution to the privileged access rights problem only seem to do so for either the Windows administrator accounts or the Unix administrator accounts and almost forget about those other special accounts that reside inside of applications. TAM ESSO can therefore help resolve that. However, this is only true for accessing a system with elevated privileges. What about changing passwords on a frequent basis and changing passwords after account use?
One of the reasons why privileged accounts have been causing so much pain is because organisations are scared to change the passwords for these accounts for fear that their systems will break! That's certainly a fair comment, in my opinion. We all know that applications shouldn't bind to a Directory Server with the cn=root account, but I bet they do. And when that account's password is changed, what will happen? Where within the application is the password for cn=root stored? In a properties file? In a database? How should the password be updated? Do we have to stop the application first? What if there are multiple applications using that account? Do we need a major outage of our systems? Maybe we should do it at a weekend, what do you think?
So the solution described in the Redguide publication goes a long way to plug a gap. The supposed industry leading solutions for Privileged Identity Management also go a long way to help alleviate the problem. But it seems to me that there are still issues that need addressing!
Ultimately, the right answer for delivering a solution will be dependent on:
- the customer's appetite for solving the problem completely
- the customer's appetite for risk
- the customer's financial muscle when it comes to licensing software solutions
Thursday, November 25, 2010
Twitter and TDI - Part 3
A while back, I wrote myself a Twitter connector for Tivoli Directory Integrator. I was bored one weekend and it seemed like an interesting exercise. After all, it was a good opportunity to find out about Twitter's migration to OAuth.
I didn't want a complicated Twitter connector. It only had to perform a handful of functions:
- Send a tweet
- Iterate through my tweets
- Iterate through my friends' tweets
- Iterate through the public timeline
- Iterate through a specific person's tweets (other than mine)
I didn't want to bother with direct messaging or retweeting or geo-location or anything fancy. After all, my intention was to show how I could get TDI to respond to events posted on Twitter and respond to other events by posting an update on Twitter. (In short, use Twitter as a less enterprise-like MQ system!)
I'm glad to say that the experiment worked and I provide it to the populace at large to take a look at. For a sneak preview, here's the main connector:
In the example above, I'm processing Twitter Object Types of "Tweets: Other" and looking at Stephen Fry's tweets. The Consumer Key/Secret and Access Key/Secret are not displayed for obvious reasons! Indeed, obtaining the key/secret combinations is a little fiddly though there is plenty of documentation out there to help you obtain that info from Twitter.
Running an AL with the connector in iterator mode pointed at "stephenfry" results in the following work entries being dumped in my log:
CTGDIS087I Iterating.
CTGDIS003I *** Start dumping Entry
Operation: generic
Entry attributes:
date (replace): 'Thu Nov 25 09:24:58 GMT 2010'
tweet (replace): 'Damnably chilly on the Sherlock Holmes set this morning. Frost forming on the supporting artists' moustaches...'
CTGDIS004I *** Finished dumping Entry
CTGDIS003I *** Start dumping Entry
Operation: generic
Entry attributes:
date (replace): 'Thu Nov 25 07:06:52 GMT 2010'
tweet (replace): 'Lordy. 260 all out. Knew we shouldn't write Australia off. A wounded wallaby us a dangerous thing. Ho hum. Business as usual. #theashes'
CTGDIS004I *** Finished dumping Entry
CTGDIS003I *** Start dumping Entry
Operation: generic
Entry attributes:
date (replace): 'Wed Nov 24 17:40:50 GMT 2010'
tweet (replace): '@AWOLTom Already have...'
CTGDIS004I *** Finished dumping Entry
CTGDIS003I *** Start dumping Entry
Operation: generic
Entry attributes:
date (replace): 'Wed Nov 24 16:55:24 GMT 2010'
tweet (replace): 'The fine is taken care of, but there is a fighting fund @TwJokeTrialFund http://bit.ly/cTK2Li A fiver from you to help the appeal?'
CTGDIS004I *** Finished dumping Entry
Wow, you may be thinking. But why would I want to do that?
Indeed. I guess, as I have alluded to in this blog before, you could have your assembly lines "tweet" upon component failure so that an out-of-hours support person can respond. After all, tweets can easily be delivered to smartphones at no cost to the organisation.
Alternatively, you could remotely control assembly lines using this mechanism. Just think, I could tweet "Start HR Feed" to my personal twitter-stream and I could have an assembly line iterating through my twitter-stream just waiting for the instruction to start processing that feed! (NOTE: I wouldn't necessarily advocate that this is a great way of managing your production schedule!)
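To make the remote-control idea slightly more concrete, here's a sketch of the kind of script component that could sit after the connector in an iterating Assembly Line. The tweet attribute name comes from the log dump above; the HRFeed Assembly Line name is made up, and I'm assuming the usual main.startAL() scripting helper is available in your TDI release, so check the scripting reference for your version before relying on it.
// Script component placed after the Twitter connector (iterator mode).
// 'work' carries the tweet attribute shown in the log dump above.
var tweet = work.getString("tweet");
if (tweet != null && tweet.indexOf("Start HR Feed") != -1) {
    main.logmsg("Twitter trigger received - kicking off the HR Feed");
    // Hypothetical Assembly Line name; verify the startAL() signature for your TDI version.
    main.startAL("HRFeed", null);
}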
The connector and supporting JAR files can be retrieved from here:
NOTE: jtwitter.jar and signpost-core.1.2.1.1.jar are open source code provided elsewhere on the net. I've added these versions here as they are known to work with my connector.
Drop the twitterConnector.jar into TDI_HOME/jars/connectors. Drop the other JARs into TDI_HOME/jars/3rdparty.
If you need help getting your keys/secrets, I may be able to sort you out, though you will probably appreciate figuring it out for yourself in the long run.
For those interested in the underlying code, it is really very simple. A bind to Twitter using OAuth can be achieved in two lines of code:
// Make an oauth client
OAuthSignpostClient oauthClient = new OAuthSignpostClient(
this.twitterConsumerKey,
this.twitterConsumerSecret,
this.twitterAccessKey,
this.twitterAccessSecret
);
// Make a Twitter object
this.twitterBind = new Twitter(this.twitterUser, oauthClient);
Sending a tweet is a single line of code:
this.twitterBind.setStatus(tweet);
And iterating through tweets is a matter of invoking one of four methods, like this:
this.tweetList = this.twitterBind.getUserTimeline(this.twitterUser);
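For completeness, the four iteration modes described above map onto JTwitter calls roughly as follows. The method names below are taken from the JTwitter builds I have been using, so double-check them against the jtwitter.jar you actually deploy:
// Pick the timeline based on the Twitter Object Type selected in the connector config.
this.tweetList = this.twitterBind.getUserTimeline();                 // my own tweets
this.tweetList = this.twitterBind.getFriendsTimeline();              // my friends' tweets
this.tweetList = this.twitterBind.getPublicTimeline();               // the public timeline
this.tweetList = this.twitterBind.getUserTimeline(this.twitterUser); // a specific user's tweets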
You can look through TwitterConnector.java to get a feel for the full source code.
I've had fun building the connector and it can certainly ease the pain of putting together Assembly Lines that need to make calls to Twitter by having such a neat interface. I hope you have fun too.
Wednesday, November 17, 2010
ITIM - Java Version, Password Reset and SAP JCo
It's not very often that I wrapper a combination of topics together into a single post, but these are short snippets that would look a bit strange in a posting all of their own. They are trivial to the point of being unworthy of their own posting!
Java Versions
In the good old days, the version of Java you had installed on your client machine would play havoc with ITIM's applets: Workflow Designer; Form Designer; Join Rules Definition. It used to be that an upgrade of Java would immediately wreck your ability to use these applets. That all changed, though, and for quite some time I've enjoyed the ability to upgrade Java as and when I saw fit and everything still worked. Until yesterday.
Java 1.6.0_18 afforded me the luxury of using the ITIM applets. Java 1.6.0_22 does not! At least, 1.6.0_22 won't allow me to save my workflow! At least, not using ITIM v5.1 FP1.
You have been warned!
Password Resets
This is more of a reminder than anything else. Sometimes, it isn't enough to change a password in ITIM. What happens if the account on the target system has been locked due to authentication failures? A change password may not unlock the account and indeed this is the case when it comes to many systems - SAP springs to mind as my most recent example.
What can be done? Well, in simplistic terms, the changePassword operation for the account type could be updated to perform an account restore function after the password has been changed. The resulting workflow could look like this:
Of course, you may want to put a significant amount of logic around this restore process. You may want to invoke that only if the requestor of the change password operation is a specific person, for example.
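As a rough illustration, the transition condition guarding the restore extension could be a small piece of workflow JavaScript along these lines. This is a sketch only: it assumes the standard process.requestorName attribute is available in your workflow script context (check the ITIM JavaScript extensions documentation for your version), and the administrator ID shown is just a placeholder.
// Follow the restore path only when the password change was requested
// by a specific (placeholder) helpdesk administrator.
process.requestorName == "helpdesk.admin"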
SAP JCo
The SAP Java Connector that is used by the TDI SAP NetWeaver connector can, periodically, throw an error message like this: "max no of 100 conversations exceeded".
The "fix", apparently, is to set an environment variable called CPIC_MAX_CONV and set the variable to a value of at least 500. I'm sure you can figure out how to set the environment variable for your Operating System and I'm sure you can work out that your TDI service will need to be restarted for the variable to have any effect.
And so ends today's collection of snippets. I told you they were trivial. I do hope you aren't too bored as a result of reading the above. Until the next adventure!
Wednesday, October 27, 2010
ITIM Spooky Password Behaviour
TIM and TAM experts will already be aware of how to provision TAM accounts from TIM and you will probably already be aware of how to provision GSO credentials from TAM to TIM. If so, this article may bore you but I did come across some rather odd behaviour that I wasn't really expecting.
In my provisioning policy for my TAM account, I was attempting to set my GSO credentials using Javascript and for some reason I decided to use the ersynchpassword attribute.
All was well when creating TAM accounts for existing PERSON objects in TIM. However, when I created a new PERSON object, I was presented with a failure in the provisioning of my TAM account:
CTGIMA617E The account {account} cannot be created either because the account is disallowed for the user or one or more attributes are not compliant with provisioning policy.
Odd. Because when I manually requested the account to be created... it appeared without fuss.
My suspicions were that the ersynchpassword was not "available" at provisioning time so I dropped the following code into the provisioning policy:
Enrole.log("SSO", "uid is " + subject.getProperty("uid")[0]);
Enrole.log("SSO", "ersynchpassword is " + subject.getProperty("ersynchpassword"));
Enrole.log("SSO", "personpassword is " + subject.getAndDecryptPersonPassword());
The result wasn't terribly surprising in that ersynchpassword was null or empty. At least, that's what it seemed like at first glance when I noticed the following log messages:
Error: uid is account01
Error: ersynchpassword is
Error: personpassword is passw0rd
The real surprise, however, came when I read on through the log. Within milliseconds of the above messages, the following messages were presented:
Error: uid is account01
Error: ersynchpassword is passw0rd
Error: personpassword is passw0rd
CTGIMA617E The account account01 cannot be created either because the account is disallowed for the user or one or more attributes are not compliant with provisioning policy.
Error: ersynchpassword is passw0rd
Error: personpassword is passw0rd
CTGIMA617E The account account01 cannot be created either because the account is disallowed for the user or one or more attributes are not compliant with provisioning policy.
So what do I make of this?
Well, it's best to use the getAndDecryptPersonPassword() method within this particular provisioning policy, that's for sure. But ONLY on account creation. Password changes need to be evaluated using the ersynchpassword. Luckily, there is a catch-all:
if (subject.getProperty("ersynchpassword")[0] == null) {
return "sapGSO (Web Resource)"
+ "|" + subject.getProperty("uid")[0]
+ "|{clear}" + subject.getAndDecryptPersonPassword();
} else {
return "sapGSO (Web Resource)"
+ "|" + subject.getProperty("uid")[0]
+ "|{clear}" + subject.getProperty("ersynchpassword")[0];
}
There are still some questions left unanswered here, though. Why was the policy evaluated TWICE, and why did the first failure drive the CTGIMA617E message (rather than the second, successful evaluation)? Maybe someone in the land of the development team can explain it. And also explain why the ersynchpassword didn't manage to appear until the second evaluation, just 100 milliseconds after the first.
Then again... maybe it's because it is almost Halloween and it's the time of year for strangeness!
NOTES:
The above scenario was produced using ITIM v5.1 (Fix Pack 1) running on WebSphere 6.1 on a Windows 2003 Server. I'm quite sure I've used ersynchpassword in the past on ITIM v5.0 instances and did not see this behaviour!
Tuesday, October 12, 2010
MASSL Between WebSEAL And Apache On Windows
Every now and again I get faced with Windows infrastructure. As I've previously stated on this blog, I'm not "religious" about my Operating Systems but Windows does present one or two slightly different challenges and SSL enabling Apache (though very easy) does have a gotcha!
Here's how to generate the certificates necessary to create a Mutually Authenticated SSL junction between a WebSEAL and an Apache instance on Windows.
Let's assume that Apache (v2.2) has been installed at c:\Apache2.2 and that OpenSSL (v1.0.0a) has been installed at c:\openssl-win32 using the Win32 binary installer.
Setting Up The Environment
The environment setup for OpenSSL requires some directories to be created:
cd c:\openssl-win32\bin
mkdir TAM
mkdir TAM\keys
mkdir TAM\requests
mkdir TAM\certificates
mkdir TAM\newcerts
type NUL > TAM\index.txt
echo "01" > TAM\serial
The openssl.cfg file should be updated so that the "dir" directive within the CA_default stanza reads:
dir = ./TAM
Within the req_distinguished_name stanza, add the following:
stateOrProvinceName_default = xx
xx can be set to anything you want, but it is important that a stateOrProvinceName is provided during certificate generation.
Creating a Certificate Authority
openssl genrsa -out TAM\cacert.key 1024
openssl req -new -key TAM\cacert.key -out TAM\cacert.csr
openssl x509 -req -days 365 -in TAM\cacert.csr -out TAM\cacert.crt -signkey TAM\cacert.key
openssl x509 -in TAM\cacert.crt -text
That was painless, wasn't it? We now have a certificate authority which can be used for the signing of other certificates. Our next step is to create certificates for the Apache and WebSEAL instances. Note, we have no need for Apache or WebSEAL to generate the certificate requests, we can do that with OpenSSL:
Create Apache Certificate
This is a four step process which involves the generation of a key, a request, a certificate and a conversion of the key to unencrypted format because Apache on Windows "hissy-fits" when attempting to use an encrypted key.
openssl genrsa -des3 -out TAM\keys\apache.key 1024
openssl req -new -key TAM\keys\apache.key -out TAM\requests\apache.csr
openssl ca -days 365 -in TAM\requests\apache.csr -cert TAM\cacert.crt -keyfile TAM\cacert.key -out TAM\certificates\apache.crt -config openssl.cfg
openssl rsa -in TAM\keys\apache.key -out TAM\keys\apache_unenc.key
We can now move these certificates to the correct location:
mkdir c:\Apache2.2\certs
copy TAM\cacert.crt c:\Apache2.2\certs\cacert.pem
copy TAM\certificates\apache.crt c:\Apache2.2\certs\apache.crt
copy TAM\keys\apache_unenc.key c:\Apache2.2\certs\apache.key
Configuring Apache
The c:\Apache2.2\conf\httpd.conf file should have the handful of references to SSL uncommented:
LoadModule ssl_module modules/mod_ssl.so
Include conf/extra/httpd-ssl.conf
The c:\Apache2.2\conf\extras\httpd-ssl.conf should be updated as such:
- Update SSLCertificateFile to point at c:\Apache2.2\certs\apache.crt
- Update SSLCertificateKeyFile to point at c:\Apache2.2\certs\apache.key
- Update SSLCACertificateFile to point at c:\Apache2.2\certs\cacert.pem
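Pulling those bullet points together, the SSL section of httpd-ssl.conf ends up looking roughly like this. Note that the SSLVerifyClient and SSLVerifyDepth lines are my own addition here - the stock file usually ships with them commented out, and unless SSLVerifyClient is set to require, Apache will never actually demand WebSEAL's client certificate and the junction won't be mutually authenticated:
SSLEngine on
SSLCertificateFile "c:/Apache2.2/certs/apache.crt"
SSLCertificateKeyFile "c:/Apache2.2/certs/apache.key"
SSLCACertificateFile "c:/Apache2.2/certs/cacert.pem"
SSLVerifyClient require
SSLVerifyDepth 1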
Create WebSEAL Certificate
The generation of the key, request and certificate for WebSEAL is also a four step process though the final step isn't the conversion of the key to unencrypted format but rather the generation of a #PKCS12 format certificate:
openssl genrsa -des3 -out TAM\keys\webseald.key 1024
openssl req -new -key TAM\keys\webseald.key -out TAM\requests\webseald.csr
openssl ca -days 365 -in TAM\requests\webseald.csr -cert TAM\cacert.crt -keyfile TAM\cacert.key -out TAM\certificates\webseald.crt -config openssl.cfg
openssl pkcs12 -export -clcerts -in TAM\certificates\webseald.crt -inkey TAM\keys\webseald.key -out TAM\certificates\webseald.p12
Importing the webseald.p12 file into the pdsrv.kdb keystore can be tricky unless your Java Policy Files allow the level of encryption that will have been applied to the p12. So, update the Java Policy Files on the WebSEAL by visiting https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?source=jcesdk. Download the Policy Files for Java 1.4.2 or above and copy the two policy files to {java_home}/jre/lib/security
The #PKCS12 file can now be imported with the following commands:
java -cp {gsk7_location}/classes/gsk7cls.jar;{gsk7_location}/classes/cfwk.zip com.ibm.gsk.ikeyman.ikeycmd -cert -add -file cacert.pem -format ascii -db pdsrv.kdb -pw pdsrv -type cms -label TAMCA -trust enable
java -cp {gsk7_location}/classes/gsk7cls.jar;{gsk7_location}/classes/cfwk.zip com.ibm.gsk.ikeyman.ikeycmd -cert -import -file webseald.p12 -type pkcs12 -target pdsrv.kdb -target_pw pdsrv -target_type cms -pw {p12 password}
You should now determine the label that has been assigned to the certificate:
java -cp {gsk7_location}/classes/gsk7cls.jar;{gsk7_location}/classes/cfwk.zip com.ibm.gsk.ikeyman.ikeycmd -cert -list -db pdsrv.kdb -pw pdsrv -type cms
The label will look something like this: "2cn=webseald, o=x,st=x,c=x, etc"
Create a WebSEAL Junction
The WebSEAL junction can now be created with the -K option (plus the above label) which should result in a "Created junction" message with no other warnings.
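For reference, the junction creation command would look something like the line below - the WebSEAL instance name, backend port and junction point (/apache) are placeholders for your own values, and the -K value is the label reported by the ikeycmd listing above:
pdadmin> server task default-webseald-{webseal-host} create -t ssl -h {apache-host} -p 443 -K "{certificate label}" /apache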
Notes & Observations
The certificates used to create a MASSL connection between WebSEAL and Apache won't ever be seen by any client - remember, WebSEAL acts as the client to the Apache server. As such, there is no strong need for these certificates to be generated by the likes of a Verisign. There is no need for these certificates to make any reference to the actual host names at all - the names Apache and WebSEAL seem like good names to me for the Common Name of the certificate. In reality, using the above method for generating the certificates is as good as any other (if not better, as it has been tried and tested).
Friday, October 08, 2010
Securing Lotus Connections With WebSEAL
There are a few documents on the web that try to explain how to integrate Lotus Connections with WebSEAL and they do actually work to a certain extent. However, there are holes in their explanations that prevent the full rich experience of a Lotus Connections environment when fronted with a WebSEAL.
Here's how to "plug" those holes for a Lotus Connections v2.5 and Tivoli Access Manager v6.1 infrastructure.
Firstly... I should point out that the integration guide written by En Hui Chen is excellent and was used as the basis for this guide. His guide can be found at http://www-10.lotus.com/ldd/lcwiki.nsf/page.xsp?documentId=65A226C20BEC2302852576B100410A04&action=openDocument
Lotus Connections Data Flow
To understand why WebSEAL and Connections should be configured the way they are, it is important to understand how the components communicate with and through each other. Fundamentally, a user's experience with Connections is not constrained to the HTTP traffic bouncing between a browser and (ultimately) the Connections applications. Instead, we need to be mindful of Ajax components being rather chatty with the back end as well as inter-service communications across Connections applications. When we introduce WebSEAL, we need to ensure that the traffic is being routed appropriately as such:
The Holes
Hole 1
The connectionsAdmin account must be an account known to TAM. The following pdadmin commands should therefore be called to ensure that the account is imported correctly and valid:
pdadmin> user import connectionsAdmin {connectionsAdmin dn}
pdadmin> user modify connectionsAdmin account-valid yes
Hole 2
This might not necessarily be called a hole, but rather an amalgamation of information on the Lotus Wiki and the information on the IBM Infocenter sites. However, the following objects required the Connections ACL to be applied to them:
Hole 3
This probably isn't a hole either, to be honest. Instead, it's best to see it as a re-iteration or re-clarification.
The LotusConnections-config.xml file should be updated to contain the following:
Also, ensure that the static href, static ssl_href and interService URLs for all services are pointing at the WebSEAL cluster:
Note, the fully-qualified-host-name MUST be set to the host name where the WebSEAL is to be found.
Hole 4
Lotus Connections applications will attempt to open server to server communications with other Lotus Connections applications via Tivoli Access Manager. If forms-auth has been set to https in the webseald-.conf file, then the signer certificate for WebSEAL client-side SSL communications should be added to the WebSphere trust stores. In addition, the LotusConnections-config.xml file should be updated to contain the following:
Following En Hui Chen's guide and applying the "plugs" above should get you a perfectly working TAM-Lotus Connections environment. If not, drop me a line... I may be able to help. Failing that, have a chat with @dilftechnical who provided some invaluable Lotus Connections insight while diagnosing the issues we faced during integration.
Here's how to "plug" those holes for a Lotus Connections v2.5 and Tivoli Access Manager v6.1 infrastructure.
Firstly... I should point out that the integration guide written by En Hui Chen is excellent and was used as the basis for this guide. His guide can be found at http://www-10.lotus.com/ldd/lcwiki.nsf/page.xsp?documentId=65A226C20BEC2302852576B100410A04&action=openDocument
Lotus Connections Data Flow
To understand why WebSEAL and Connections should be configured the way they are, it is important to understand how the components communicate with and through each other. Fundamentally, a user's experience with Connections is not constrained to the HTTP traffic bouncing between a browser and (ultimately) the Connections applications. Instead, we need to be mindful of Ajax components being rather chatty with the back end as well as inter-service communications across Connections applications. When we introduce WebSEAL, we need to ensure that the traffic is being routed appropriately as such:
The Holes
Hole 1
The connectionsAdmin account must be an account known to TAM. The following pdadmin commands should therefore be called to ensure that the account is imported correctly and valid:
pdadmin> user import connectionsAdmin {connectionsAdmin dn}
pdadmin> user modify connectionsAdmin account-valid yes
Hole 2
This might not necessarily be called a hole, but rather an amalgamation of information on the Lotus Wiki and the information on the IBM Infocenter sites. However, the following objects required the Connections ACL to be applied to them:
acl attach /WebSEAL/{webseal}/profiles/dsx {connections_acl}
acl attach /WebSEAL/{webseal}/communities/dsx {connections_acl}
acl attach /WebSEAL/{webseal}/blogs/blogsapi {connections_acl}
acl attach /WebSEAL/{webseal}/blogs/blogsfeed {connections_acl}
acl attach /WebSEAL/{webseal}/files/basic/anonymous/atom {connections_acl}
acl attach /WebSEAL/{webseal}/files/form/anonymous/atom {connections_acl}
acl attach /WebSEAL/{webseal}/files/wl {connections_acl}
acl attach /WebSEAL/{webseal}/activities/images {connections_acl}
Hole 3
This probably isn't a hole either, to be honest. Instead, it's best to see it as a re-iteration or re-clarification.
The LotusConnections-config.xml file should be updated to contain the following:
<dynamicHosts enabled="true">
<host href="http://fully-qualified-host-name" ssl_href="https://fully-qualified-host-name" />
</dynamicHosts>
Also, ensure that the static href, static ssl_href and interService URLs for all services are pointing at the WebSEAL cluster:
<sloc:static href="http://fully-qualified-host-name" ssl_href="https://fully-qualified-host-name" />
<sloc:interService href="https://fully-qualified-host-name" />
Note, the fully-qualified-host-name MUST be set to the host name where the WebSEAL is to be found.
Hole 4
Lotus Connections applications will attempt to open server to server communications with other Lotus Connections applications via Tivoli Access Manager. If forms-auth has been set to https in the webseald-{instance}.conf file, then the signer certificate for WebSEAL client-side SSL communications should be added to the WebSphere trust stores. In addition, the LotusConnections-config.xml file should be updated to contain the following:
<forceConfidentialCommunications enabled="true" />
Following En Hui Chen's guide and applying the "plugs" above should get you a perfectly working TAM-Lotus Connections environment. If not, drop me a line... I may be able to help. Failing that, have a chat with @dilftechnical who provided some invaluable Lotus Connections insight while diagnosing the issues we faced during integration.
Tuesday, October 05, 2010
WebSEAL Javascript Toolkit
Every good systems integrator needs to have ready access to their essential toolkits. You know the ones I mean? Those little snippets of code that absolutely must be dropped into every deployment you ever complete.
In the WebSEAL world, I find that the following get my vote over and over again.
Frame Busting
Isn't it horrible when a WebSEAL login form appears in a frame? Aren't frames horrible in the first place? Anyway, I like the brutal approach in destroying those frames by dropping this piece of code into the login page:
if (self.location != top.location) {
top.location = self.location;
}
And, if you want to swap from http to https?
if (window.location.href.indexOf("https") == -1) {
var uri = window.location.href.substring(4);
window.location = "https" + uri;
}
The Cookie Crumbler
This is another favourite of mine. Upon logout, let's be brutal in trashing all cookies for the domain. Of course, the key word here is brutal. You may not want to do this. In fact, there are any number of reasons why this might be an incredibly bad idea for your environment. But if this is the case, then the code can be tailored to leave those "special" cookies intact. The rest? Crumble away.
var warningString = "WARNING: To maintain your login session, make sure that your browser is configured to accept Cookies.";
document.cookie = 'acceptsCookies=yes';
if(document.cookie == '') {
document.write(warningString);
} else {
// Cookie Crumbler
var strSeparator1 = " ";
var strSeparator2 = "=";
var strCookie = document.cookie;
var strCookieName = null;
var intCount;
var intStart = 0;
var intEnd = 0;
for (intCount = 1; intCount < strCookie.length; intCount++) {
if (strCookie.charAt(intCount) == strSeparator2) {
intEnd = intCount;
strCookieName = strCookie.substring(intStart, intEnd);
document.cookie = strCookieName + "=yes; expires=Fri, 13-Apr-1970 00:00:00 GMT";
strCookieName = null;
}
if (strCookie.charAt(intCount) == strSeparator1) {
intStart = intCount + 1;
}
}
}
Cache Handling
Amazingly, the vanilla/default pages for login and logout pages will get cached by browsers which can cause confusion to users. Am I authenticated? Am I not? Maybe it would be best to instruct the browser to not cache these pages (and probably others). So we can drop the following meta-tags into our pages:
content="No-Cache" http-equiv="Pragma"
content="No-Store" http-equiv="Cache-Control"
content="No-Cache" http-equiv="Cache-Control"
http-equiv="Cache-Control", "private"
content="0" http-equiv="Expires"
Why so many statements? Well, as we all know, not all browsers behave in accordance with agreed standards. Enough said?
Conclusion
This isn't an exhaustive list of must-do tasks for a vanilla WebSEAL installation and it certainly isn't even accurate for all installations. But they are certainly a good starting point for putting good navigational structure around your WebSEAL protected environment.
Thursday, September 09, 2010
User Provisioning Basics
I had the pleasure of spending some time at the IBM Innovation Labs in Hursley yesterday. The idea: bring Pirean expertise in the Tivoli Security space to a retail environment.
The intention was to have IBM Tivoli Identity Manager provision to a SAP Point of Sale system, a Motorola CA50 VOIP system and Lotus Domino, with ID cards generated as a result. The outcome is a slick demonstration of the power and effect that an automatic provisioning tool can have in an environment with a high turnover of staff. Let's face it, retail outlets have a tendency to hire and fire (particularly over the Christmas and holiday periods) that just wouldn't happen in most other vertical markets.
The result is a retail environment that is (at last) fully joined-up and I even managed to get a souvenir from the experience. A Zebra Quickcard generated ID card with a barcode allowing me to logon to the Motorola CA50 VOIP system (see right).
But what would make the experience even better? Well, enterprise applications should adhere to a few fundamental principles when it comes to exposing an API for user management. That is:
- Ability to add a user to the system
- Ability to modify the user details on the system
- Ability to suspend the user
- Ability to un-suspend the user
- Ability to delete the user
- Ability to perform a lookup of all users on the system
Unfortunately, not every "enterprise" application provides these fundamental abilities. Typical problem areas are the inability to perform a lookup of all users; the inability to suspend access rights and the real heartache that is the lack of a proper delete mechanism! One nameless Cloud based provider doesn't provide a delete mechanism at all and instead the account must be made inactive (although quite how you tell the difference between an account that you want to suspend and an account that you want to delete is left up to your imagination/ingenuity).
Identity Management principles have been around for quite some time now and vendors of enterprise applications have had plenty of notice when it comes to providing either a sensible API for user management or adopting an industry recognised external user repository which can be easily managed.
It would seem, however, that while progress has been made, we're still not quite there.
Tuesday, August 03, 2010
Tivoli Directory Integrator, Availability and ActiveMQ
I previously tackled how we can get IBM Tivoli Directory Integrator (TDI) to interface with Twitter and use the power of Twitter as a simplistic Message Queue mechanism. Today, I thought I would cover having TDI put messages onto and receive messages from an MQ queue with a view that it could be a useful means of introducing a number of concepts which could be critical in the enterprise:
- High Availability
- Abstraction
High Availability can be achieved with queues because I can have multiple TDI instances reading from a queue and performing actions as dictated by messages in the queue. I could have a queue called "actionQueue" which contains messages with actions embedded; a queue called "responseQueue" which could contain the status return code for each action and a queue called "errorQueue" for all the nasty things that may happen to us. I could have many TDI instances pick up a message, perform the action, respond and I would sleep soundly in bed each night safe in the knowledge that the way in which MQ queues operate would mean that I would not get duplication of effort and all messages would be processed even if a TDI instance were to crash.
Abstraction, of course, means that I can send a message to another TDI Server or Assembly Line without having any real need of understanding what or where that process is. I can merely put a message on a queue (in a format previously agreed) and proceed with my normal processing safe in the knowledge that my message will be delivered - eventually.
Environment Setup
If you want to follow this article, you will need:
- IBM Tivoli Directory Integrator v7
- Apache ActiveMQ 5.3.2
Installing ActiveMQ is really very straightforward. Grab it from the Apache website and unzip it somewhere on your file system - I chose c:\apache-activemq-5.3.2 as my location on a Windows 2003 Server host.
As I like my Windows Services, I installed ActiveMQ as a service by running the c:\apache-activemq-5.3.2\bin\win32\InstallService.bat batch file. After a few minutes, I could see ActiveMQ listed as a service though not, at this stage, running. I started the service!
The activemq-all-5.3.2.jar file (found at c:\apache-activemq-5.3.2) should be made available to TDI. A simple way of achieving this is to drop the file into the TDI\V7.0\jars\3rdparty directory.
We're now ready to create an Assembly Line which will write messages to an ActiveMQ queue!
MQ Connector
The first step we should take when we start the TDI GUI and create an Assembly Line is to drop in a JMS Connector (in Add mode) and populate the form as follows:
The JMS Driver Script needs to be populated with the following code so that the connector understands how to communicate with the ActiveMQ instance:
var brokerURL = env.get("jms.broker").trim();
var connectionFactory = new org.apache.activemq.ActiveMQConnectionFactory();
connectionFactory.setBrokerURL(brokerURL);
ret.queueConnectionFactory = connectionFactory;
ret.topicConnectionFactory = connectionFactory;
I used the Simple XML Parser on this connector to parse my output in XML format:
As I decided I'd write to an errorQueue, I updated my Output Map to include an errorCode attribute and an errorMessage attribute:
NOTE: Although not shown, my work entry was populated by an iterator which read through an input file of error codes and messages!
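If you want to test the producer without building a file iterator, a Script component placed ahead of the JMS Connector could populate the work entry by hand. This is only a minimal sketch, using the same attribute names as my Output Map; the values are made up:
// Hypothetical test values only - the real Assembly Line reads these from an input file
work.setAttribute("errorCode", "1");
work.setAttribute("errorMessage", "Tweet me a test message");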
Running the Assembly Line, which is always the exciting bit, resulted in my ActiveMQ errorQueue queue being populated with 9 messages. Queues can be "browsed" with a standard browser by pointing the browser at http://localhost:8161/demo/queueBrowse/{queue-name}:
9 messages. As expected.
The Consumer
Now that I have populated my queue, I need "something" to consume these messages. A JMS Connector in iterator mode can be used to act as a mechanism to constantly poll the queue for new messages:
This will certainly read the messages and with an appropriately configured Parser and Input Map I will end up with a work entry containing the errorCode and errorMessage attributes I expect. At this point, I can take decisions on what I want to do with these "errors". I could merely write them to a file. I could send an email to someone. I could even send an alert to the world via Twitter (see previous article). And I can base these decisions on the contents of my message. For example, if my errorMessage contained the word Tweet, I could send the message to Twitter, right?
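As a minimal sketch of that idea, the IF branch condition could look something like the script below, assuming the errorMessage attribute has been mapped into the work entry by the Input Map:
// Route messages mentioning "Tweet" to the Twitter alert branch
var msg = work.getString("errorMessage");
if (msg != null && msg.indexOf("Tweet") >= 0) {
    return true;    // take the Twitter branch
} else {
    return false;   // fall through to file/email handling
}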
The End Result
For simplicity's sake, I wrote each errorCode and errorMessage to a flat file (along with a timestamp) and the result was this:
dateTime|errorCode|errorMessage
"2010-08-02 21:02:18"|"1"|"Msg1"
"2010-08-02 21:02:18"|"2"|"Msg2"
"2010-08-02 21:02:18"|"3"|"Msg3"
"2010-08-02 21:02:18"|"4"|"Msg4"
"2010-08-02 21:02:18"|"5"|"Msg5"
"2010-08-02 21:02:18"|"6"|"eMail me a message 6"
"2010-08-02 21:02:18"|"7"|"Tweet me a message 7"
"2010-08-02 21:02:18"|"8"|"Tweet me a message 8"
"2010-08-02 21:02:18"|"9"|"Msg9"
Within my IF branch, I included an AssemblyLine Function call to my Twitter Status Update AL (from my previous tutorial) and saw messages 7 and 8 appear on my Twitter timeline.
Conclusion
ActiveMQ is a very simple product to deploy and get working. Of course, it would be prudent to secure the queues. It would also be prudent to introduce some serious error handling in the Assembly Lines - for example, reading a message from a queue removes it from the queue immediately which may or may not be desirable before the message has actually been processed!
Hopefully, however, this will give you a taster of how TDI can interact with MQ. Good luck "queueing".
Tuesday, July 13, 2010
Twitter and ITDI - Part 2
In my "Twitter and ITDI" article I showed how we can use ITDI to send a tweet and I stated that getting ITDI to read tweets might be a good next step in the process of fully Twitter-enabling our ITDI Assembly Lines. This article will hopefully shed some light on how we could go about doing just that.
The first thing we need to do is create ourselves an ITDI Assembly Line and drop in a FOR-EACH Connector Loop of type HTTP Client. I configured my HTTP Client connector to perform a GET on the following URL:
http://api.twitter.com/1/statuses/user_timeline.xml?screen_name=stephenjswann&count=5
I selected the Simple XML Parser as the parser for this connector and assigned the Root Tag and Entry Tag as "statuses" and "status" respectively, as such:
I performed a connection on my Input Map tab, clicked on next and dragged the created_at and text schema attributes to my map.
Fantastic! I now have a connector which will loop around the tweets it receives as a result of performing a GET request on the user_timeline.xml resource on Twitter. So what next? Well, within the loop, I would probably want to drop in an IF branch with the following script in place:
a = work.getString("text");
if (a.contains("#tditest")) {
    return true;
} else {
    return false;
}
This script will query the contents of each tweet looking for the hashtag #tditest. Any tweet containing this hashtag will invoke the contents of our IF branch. Of course, you could put whatever you want into the IF branch but for the purposes of my test, I dropped in the sendTweet function used in yesterday's article to post a response back to Twitter stating that a tweet with the #tditest hashtag had been found. I'm quite sure there are many more practical actions which could be performed at this stage!
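Something along these lines could prepare that response ahead of the AssemblyLine Function call. It is only a sketch and assumes the sendTweet Assembly Line picks its message up from a status attribute in the entry passed to it:
// Log the match and build the reply that the sendTweet AL will post
task.logmsg("INFO", "Matched tweet: " + work.getString("text"));
work.setAttribute("status", "TDI found a #tditest tweet posted at " + work.getString("created_at"));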
Now the intelligentsia reading this will be wondering what use this loop connector is in its present form. After all, once we have read through the tweets that we have requested, the Assembly Line will stop. And if we run it again, it will just process the same tweets again, right?
Right. Unless we expand our solution with a couple of minor tweaks. We can enable the delta processing in our HTTP Client Connector which ensures that we will only ever process NEW tweets. Any unique attribute of the tweet should suffice (and there are a few) but I settled on created_at, for no particular reason:
Now I can run my Assembly Line over and over again safe in the knowledge that it won't trigger my IF branch at all unless there is a new Tweet that it hasn't ever seen before. The final step for me was to wrap this Assembly Line in a timer controlled Assembly Line. I created a readTweetsController Assembly Line with a timer function as an iterator and all time attributes set to * (which means the AL will fire every minute). In my data flow, I placed an Assembly Line Function component to call the AL I built above.
The result? A mechanism which polls Twitter every minute looking for new tweets with a particular hashtag and "does stuff" if a matching tweet is found.
So what are the real world applications for such a mechanism? Theoretically, I could:
- post a tweet "TDI: getStatistics" and my TDI AL could return "TDI RESPONSE: 20,000 records processed"
- post a tweet "TDI: startSync" and my TDI AL would kick off a synchronisation process and return "TDI RESPONSE: synchronisation started" followed some time later by "TDI RESPONSE: synchronisation complete"
- post a tweet "TDI: switchOnLights" and my TDI AL could switch on the lights in my house
- post a tweet "TDI: exec "su - shutdown now"" and my TDI AL would ignore me as a fool
Hopefully this article will give you some food for thought.
Monday, July 12, 2010
Twitter and ITDI
It is becoming clear that Twitter isn't just a platform for telling the world what you are having for your dinner. Twitter is used as a marketing tool, a news information service and a tool for comedians to test new jokes. Its robustness and message persistence make it an excellent candidate for creating a Message Queue service for those who haven't the inclination to deploy a "proper" MQ service, and it can also be used as an alerting mechanism.
For example, I would be quite happy to receive a tweet stating that something had happened to my enterprise application during the night with relevant information like an error code. After all, it is a free way to send the equivalent of an SMS.
Twitter offers a great API and a simple single command can update your Twitter status as such:
curl -u user:password -d "status=This is my first tweet via the Command Line" http://api.twitter.com/1/statuses/update.xml
But what if curl isn't available on your system? What if you have a suite of ITDI Assembly Lines which you would like to interface with Twitter? Well, nothing could be simpler (until Twitter disable their Basic Authentication mechanism).
An HTTP Client Connector should be created in LookUp mode within your Assembly Line and the connection details updated as such:
The Link Criteria should be updated to include a status parameter:
And the returning http.body attribute can be converted into a string for further analysis such as checking that the tweet was sent successfully:
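By way of a sketch (and assuming the http.body attribute has been mapped into the work entry), the check could be as simple as this:
// Convert the HTTP response body to a string and look for an error marker
var bodyAttr = work.getAttribute("http.body");
var bodyStr = (bodyAttr != null) ? "" + bodyAttr.getValue(0) : "";
if (bodyStr.indexOf("error") >= 0) {
    task.logmsg("ERROR", "Tweet may not have been sent: " + bodyStr);
} else {
    task.logmsg("INFO", "Tweet appears to have been accepted");
}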
Our sendTweet connector would be a great addition to our arsenal of alerting mechanisms which should already include sendMail, sendSNMPAlert and updateLog.
Next Steps
Building a Twitter Assembly Line which can perform the OAuth authentication would be a natural next step as would a connector which can read a "twitter stream" (as the equivalent of an MQ Consumer).
NOTE: The above screenshots were taken from a TDI v7.0 instance running on Windows 2003 but the concepts can be used in older versions of TDI with little (or no) modification.
Monday, July 05, 2010
How To Provision TAM GSO Credentials
Most of the time, getting IBM Tivoli Identity Manager to provision accounts successfully in target systems is a breeze. And you could be forgiven for thinking that provisioning to IBM Tivoli Access Manager (another stalwart of the IBM Tivoli portfolio) would be the simplest of the lot.
You would, of course, be wrong. At least, if you want to make use of GSO credentials, you would be wrong. Maybe. Possibly.
You see, the provisioning of GSO credentials just doesn't seem to be that easy!
This article assumes the following:
- the reader has a basic understanding of IBM Tivoli Identity Manager
- the reader has a basic understanding of IBM Tivoli Access Manager
- the reader understands GSO credentials within TAM
- the version of ITIM being used is v5.0 (or higher)
- the version of TAM being used is v6.0 (or higher)
- the TAM Combo Adapter is being used (v5.0.9)
For an explanation of the problem, let us assume that a resource has been created within the TAM domain called backend. In pdadmin, we can create this resource as such:
rsrc create backend
In ITIM, we have a provisioning policy created with all the standard default TAM attributes being populated (such as cn, sn, our TAM groups, etc.). We check the box for Single Sign On Capability which leaves us with the Resource Credentials to populate.
In our environment, we have password synchronisation enabled across all systems and the GSO credentials will be no different. In other words, should a user change their password in ITIM, their TAM password and their GSO resource credential passwords will also be updated.
The "Constant Value" option for the Resource Credentials on our Provisioning Policy form is of no help for a User ID and Password that will be different for each user:
Which leaves us with the option of scripting the credentials. The documentation provided with the TAM Combo Adapter at least tells us that the TAM Resource Credentials must be provided in the following format:
RESOURCE NAME|USER ID|PASSWORD
In our case, that means that our javascript will look something like this:
"backend (Web Resource)" + "|" + parameters.eruid[0] + "|" + {something}
We have a problem. We've always known that we can use parameters.eruid[0] for our User ID but what do we use for our password? A little bit of crawling around the documentation tells us that when password synchronisation is enabled, the ersynchpassword can be used to retrieve the password. So our {something} becomes ersynchpassword, right?
Wrong!
In fact, when we do this and provision an account (with a password of auiTNG85), our secAuthnData attribute in LDAP contains the following:
secAuthnData:: IUAja0BAQEBAQEDJKioqKioqKqgh5CMjIyMjIyMjIyMjgiMjIyMjIyMjIyMjniMjIyMjIyMjIyMjNyQk
How do I know that this is wrong? I know because I manually created the Resource Credential via the pdadmin command prompt and know that the result I'm looking for is:
secAuthnData:: IUAjZyoqKioqKip6JiYmJiYmJiYmJiZpIVQhT0BAQEBAQEBIQEBAQEBAQDghOSUlJSUlJSUlJSUlJSUA
Indeed, if I hardcode my Provisioning Policy to set {something} to my password (auiTNG85), I get the same incorrect result. Odd? Not really. If you dig further into the documentation you will see that there is some funny business going on with the TAM Combo Adapter and the password in the Resource Credential attribute on the Provisioning Policy MUST be prefixed with {clear}. In our hardcoded entitlement, we would therefore have:
"backend (Web Resource)" + "|" + parameters.eruid[0] + "|" + "{clear}auiTNG85"
Now, when we create our account, the secAuthnData is set correctly! Of course, I'm still hardcoding the password so I need to do something about that! We need to re-introduce our ersynchpassword attribute but ensure we are prefixing it with {clear}. And the reason we do that is because ITIM is just too clever by half and has already Base64 Decoded the attribute on our behalf!
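Pulling that together, the scripted Resource Credentials entry ends up looking something like the line below. Note that this is only a sketch: it assumes ersynchpassword is exposed on the parameters object in the same way as eruid.
// Hypothetical final entitlement script - resource name, user id, then the {clear}-prefixed password
"backend (Web Resource)" + "|" + parameters.eruid[0] + "|" + "{clear}" + parameters.ersynchpassword[0]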
The resulting secAuthnData is now set to:
secAuthnData:: IUAjZyoqKioqKip6JiYmJiYmJiYmJiZpIVQhT0BAQEBAQEBIQEBAQEBAQDghOSUlJSUlJSUlJSUlJSUA
Perfection. I'm sure the documentation contains all the information required to get this result but it isn't instantly clear that this is how you should be going about the GSO Credential Password Synchronisation problem.
Sunday, July 04, 2010
IBM Tivoli Identity Manager Data Object Relationships
IBM Tivoli Identity Manager can be a beast at times. On the face of it, having a tool that can manage your account objects in various data repositories/systems doesn't sound like it ought to be complicated. However, the reality can be quite tricky. Person objects, account objects, role objects, organisational hierarchy objects, service objects, provisioning policies, identity policies, adoption policies, password policies, entitlements, accesses... that's a lot of data and the relationships these data objects have with each other can get confusing for some.
Person objects own account objects which are provisioned by virtue of an access request or a provisioning policy which contains entitlements granted by role membership for specific services or service types and the accounts' User ID is governed by an Identity Policy, etc.
There are some excellent technical documents available on the IBM website which attempt to explain these objects but I've rarely found a visual description of the objects which works - thus my attempt using Visio:
Now, it should be pointed out that this visual representation is incomplete. How could I possibly have shown ALL the relationship lines without them criss-crossing in a way which would make the diagram "unviewable"? For example, almost every object gets "placed" in a Business Unit yet I've only shown a person object belonging to a business unit! However, I hope it helps explain the basic relationships.
If you want the Visio 2007 version of this diagram, you can get it from http://www.stephen-swann.co.uk/downloads/itim-object-relationships.vsd. Enjoy.
NOTE: This diagram refers to IBM Tivoli Identity Manager v5.1
Wednesday, June 16, 2010
The IBM Tivoli Identity Manager API - Jythonised
I wanted to build a new ITIM environment recently and figured it was about time I started to put together a proper set of Jython scripts to help me automate the process. Below, I've detailed my thinking behind a Jython script which is capable of taking a delimited file defining an Organisational Structure and loading it into ITIM using the ITIM APIs and the Jython scripting framework.
My virtual environment for this purpose was:
- Windows 2003 R2 Standard Edition Service Pack 2
- IBM WebSphere Application Server v7.0.0.9
- IBM Tivoli Identity Manager v5.1.0.0
- APISCRIPT v5.0.0.4 (available from OPAL)
The Setup
The apiscript tool (v5.0.0.4) should be downloaded from the OPAL site and deployed as per the instructions. For example, the following files need to be configured to match the ITIM environment being managed:
- etc/host/{hostname}.properties
- bin/env_master.bat
But because I'm using an ITIM v5.1 system, I need to also make a "tweak" to the apiscript.bat (or apiscript.ksh) file as the structure of the extensions directory has been updated in this release. I need to include a 5.1 directory between extensions and examples as such:
set APISCRIPT_LOGIN_CONFIG=%APISCRIPT_ITIM_HOME%\extensions\5.1\examples\apps\bin\jaas_login_was.conf
Input File
The following input file is to be used to define the organisational structure:
ParentOrgUnit|OrgUnit|
|internal|
|external|
internal|unit1|
internal|unit2|
external|unit3|
unit2|unit4|
unit2|unit5|
unit10|unit7|
unit4|unit6|
external|unit4|
external|unit8|
external|unit8|
unit8|unit9|
unit9|unit8|
Each line was terminated with a | character because early in the testing process I noticed that carriage return/line feed characters were making their way into the ITIM environment.
The Code
The code required to process such an input file should be broken down into a number of sections:
Section 1: Process The Command Line Arguments
In order to ensure that the script can process a variety of input files, the file to be processed should be passed as a command line argument. An additional argument is being processed here to enable verbose logging to take place:
try:
    opts, args = getopt.getopt(sys.argv[1:], "qi:", ["inputfile="])
except getopt.GetoptError, err:
    print str(err)
    usage()
    sys.exit(2)
inputfile = ""
quietmode = "false"
for opt, arg in opts:
    if opt == "-q":
        print "Quiet mode enabled"
        quietmode = "true"
    elif opt == "-i":
        inputfile = arg
Section 2: Read the Input File
The processing of the input file should be wrapped in a while loop and each line processed should be split into its constituent parts using the split method on the line object:
infile = open(inputfile,"r")
while infile:
    line = infile.readline()
    if not line: break
    items=line.split("|")
    parentorg=items[0]
    org=items[1]
At this point, we now have an Organisational Unit Name and a Parent Organisational Unit Name ready for placement in the ITIM data model.
Section 3: Process Each Entry
For each record retrieved above, we need to determine that the Parent Organisational Unit Name actually exists in the data model. If it does not, we cannot process the entry. If it does exist, we need to check that the child OU doesn't already exist. If it also exists, then there is no point in continuing processing of this entry, otherwise we need to get an OrganizationalUnitMO object for the Parent OU to enable us to create an OU using the child Organisational Unit Name.
if parentorg == 'ParentOrgUnit':
    # Do Nothing - it's the header
    logit('Processing Header')
else:
    logit('**********************************************')
    logit('Processing ' + org)
    if org_exists(parentorg) != 'False':
        if org_exists(org) != 'False':
            logit('Child OU already exists:' + org)
        else:
            # Create OU
            logit('Child OU does not exist:' + org)
            parent_org_mo = get_org_mo(parentorg)
            orgchart.do_create_ou_from_args(org, parent_org_mo)
        # end if loop
    else:
        logit('Parent Org Unit does not exist:' + parentorg)
    # end if loop
# end if
Section 4: Check That An OU Exists
To check that an OU exists, we need to perform a search using a SearchMO object for an ORGUNIT object using a filter based on the OU name provided in the data file. If an object is found, we should return TRUE, else return FALSE:
def org_exists(orgunit):
    foundit = 'False'
    if orgunit == "":
        foundit = 'true'
    else:
        def_org_cont_mo = orgchart.get_default_org_mo()
        myPlatform = apiscript.util.get_default_platform_ctx_and_subject()
        search_mo = SearchMO(myPlatform)
        search_mo.setCategory(ObjectProfileCategoryConstant.ORGUNIT)
        myFilter = "(ou=" + orgunit + ")"
        search_mo.setFilter(myFilter)
        search_mo.setScope(SearchParameters.SUBTREE_SCOPE)
        search_mo.setContext(def_org_cont_mo)
        results_mo = search_mo.execute()
        for result in results_mo.getResults():
            if orgunit == result.name:
                foundit = 'True'
    return foundit
# end org_exists
An OrganizationalUnitMO object can be generated after searching for an OU by performing two additional functions:
- getDistinguishedName on the search result
- create_org_container_mo using the DN returned from the above method
def get_org_mo(orgunit):
    if orgunit == "":
        org_mo = orgchart.get_default_org_mo()
    else:
        def_org_cont_mo = orgchart.get_default_org_mo()
        search_mo = SearchMO(*apiscript.util.get_default_platform_ctx_and_subject())
        search_mo.setCategory(ObjectProfileCategoryConstant.ORGUNIT)
        myFilter = "(ou=" + orgunit + ")"
        search_mo.setFilter(myFilter)
        search_mo.setScope(SearchParameters.SUBTREE_SCOPE)
        search_mo.setContext(def_org_cont_mo)
        results_mo = search_mo.execute()
        for result in results_mo.getResults():
            if orgunit == result.name:
                mydn = result.getDistinguishedName()
                org_mo = orgchart.create_org_container_mo(mydn)
    return org_mo
# End get_org_mo
The Result
The output from the script is:
C:\work\apiscript-5.0.0.4\apiscript>c:\work\apiscript-5.0.0.4\apiscript\bin\apiscript.bat -f c:\work\apiscript-5.0.0.4\apiscript\py\orgs.py -z -i c:\\work\\apiscript-5.0.0.4\\apiscript\\data\\orgs.dat
Using master environment: "C:\work\apiscript-5.0.0.4\apiscript\bin\env_master.bat"
Using custom BIN_HOSTNAME: "stephen-w0ckd5b"
Using custom ETC_HOSTNAME: "stephen-w0ckd5b"
Using host properties: "C:\work\apiscript-5.0.0.4\apiscript\etc\host\stephen-w0ckd5b.properties"
Using APISCRIPT_WAS_HOME: "C:\Program Files\IBM\WebSphere\AppServer"
Using APISCRIPT_ITIM_HOME: "C:\Program Files\IBM\itim"
WASX7357I: By request, this scripting client is not connected to any server process. Certain configuration and application operations will be available in local mode.
Welcome to IBM Tivoli Identity Manager API Scripting Tool (apiscript) version: 5.0.0.4
Setting system property: java.security.auth.login.config
Setting com.ibm.CORBA properties: loginSource, loginUserid, loginPassword
WASX7303I: The following options are passed to the scripting environment and are available as arguments that are stored in the argv variable: "[-z, -i, c:\\work\\apiscript-5.0.0.4\\apiscript\\data\\orgs.dat]"
Logging configuration file is not found. All the logging information will be sent to the console.
**********************************************
Input file selected is c:\work\apiscript-5.0.0.4\apiscript\data\orgs.dat
Processing Header
**********************************************
Processing internal
Child OU does not exist:internal
**********************************************
Processing external
Child OU does not exist:external
**********************************************
Processing unit1
Child OU does not exist:unit1
**********************************************
Processing unit2
Child OU does not exist:unit2
**********************************************
Processing unit3
Child OU does not exist:unit3
**********************************************
Processing unit4
Child OU does not exist:unit4
**********************************************
Processing unit5
Child OU does not exist:unit5
**********************************************
Processing unit7
Parent Org Unit does not exist:unit10
**********************************************
Processing unit6
Child OU does not exist:unit6
**********************************************
Processing unit4
Child OU already exists:unit4
**********************************************
Processing unit8
Child OU does not exist:unit8
**********************************************
Processing unit8
Child OU already exists:unit8
**********************************************
Processing unit9
Child OU does not exist:unit9
**********************************************
Processing unit8
Child OU already exists:unit8
C:\work\apiscript-5.0.0.4\apiscript>
Visually, this gets represented in ITIM as:
In conclusion, I'm not a Jython/Python scripting guru. Indeed, this was my first foray into Python. It is, however, a relatively straightforward language to learn and can be a very powerful tool in your ITIM Administrative toolset.
The full script can be downloaded at downloads/loadous.zip.