Monday, November 14, 2011

WebSEAL and WebSphere Portal - An Integration Pattern

Thanks to Niall, I've been encouraged to provide my thoughts on how Single Sign On from WebSEAL to WebSphere Portal can be achieved.

In reality, the integration pattern is one of the simpler patterns to adopt. I say simple but, as Niall quite rightly pointed out when asking me to provide my thoughts on the issue, there are at least two approaches to the problem - the LTPA approach and the TAI approach.

Which way is the right way? It's rather like asking why do some airplanes have wings over the fuselage and others have wings under. Surely one of them is better than the other and I really don't want to find myself in a plane that has its wings in a sub-optimal position, right? (Thanks to Mr. Gunning for the analogy!)

To re-use a well-worn phrase... it's more a matter of "horses for courses"... but here is my view.

WebSphere Portal cries out for its own WebSphere instance. I wouldn't host other applications alongside Portal. Not, mind you, because you can't! But because future upgrade paths may make it more prudent to ensure that off-the-shelf applications are isolated from one another.

Given this isolation, the decision as to whether you adopt LTPA or TAI becomes a lot simpler. Here, the path of least resistance would suggest that LTPA is the way forward. It is a well-trodden path that is well documented and requires virtually no effort on the part of the system administrators.

To give an example, a recent engagement with a client on a separate issue took an interesting turn when they declared an interest in SSO to their Portal. "Have you got a test environment with TAM and Portal in it?" was met with a "Yes". "Walk this way my dear friends!"

In just under 10 minutes, SSO from WebSEAL to Portal had been achieved. Jaws thumped tables! The approach? LTPA.

So, the approach:

Firstly, ensure that the Realm Name in the Federated Repositories section of the WebSphere Application Server console uses the same value as the LDAP name (including port number).

Step 1: Generate the LTPA key on the Application Server. Navigate to Security/Secure Administration/Applications & Infrastructure/Authentication Mechanisms & Expiration/Cross Cell Single Sign On. (Keep hold of the exported key file and the password you specify here - both are needed on the WebSEAL server.)

Step 2: Enable "Use Available Authentication Data When An Unprotected URI Is Accessed" after navigating to Security/Secure Administration/Applications & Infrastructure/Web Security/General Settings.

Step 3: Copy the key file generated in Step 1 to the WebSEAL server and generate some WebSEAL junctions:
   /wps
   /searchfeed
   /ibmjsfres
   /wps_semanticTag

The command for generating the junctions should include the following directives:
   -A -2 -F (the key file generated in Step 1) -Z (the password specified in Step 1)
   -x
   -j
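
To pull those directives together, here is a minimal sketch of what one of the junction creation commands might look like from pdadmin. The WebSEAL instance name, Portal host, port, key file path and password below are all assumptions - substitute your own values and repeat the command for each of the junction points listed above:

pdadmin> server task default-webseald-webseal.example.com create -t tcp -h portal.internal.example.com -p 10039 -A -2 -F /var/pdweb/ltpa/portal-ltpa.key -Z Passw0rd -x -j /wps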

Gotchas
When integrating WebSEAL and IBM Portal, it is necessary to understand the implications of the individual session caches that each component uses.

Portal, by default, has a 30-minute idle time-out and 2-hour session time-out on its user sessions. These values must be changed to accommodate the session times configured for WebSEAL (in that they should be slightly longer than the WebSEAL session). This recommendation helps to ensure that a consistent user experience is provided for a user. If a back-end HTTP application timed out its session before the WebSEAL session expired, for example, the user might experience errors from the back-end application.
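
To make the relationship concrete, here is a purely illustrative sketch (the numbers are examples rather than recommendations). The WebSEAL values live in the [session] stanza of the webseald.conf file for your instance, and the corresponding Portal time-outs are then set slightly longer:

[session]
# WebSEAL session cache settings (webseald.conf), in seconds
timeout = 7200              # maximum session lifetime - 2 hours
inactive-timeout = 1800     # idle time-out - 30 minutes

# Portal should then be configured with marginally longer values, e.g.
# a 125-minute session time-out and a 35-minute idle time-out.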

In addition, a timeout.resume.session parameter can be added to automatically resume a session.

Log in to the WebSphere admin console and select Resources -> Resource Environment Providers -> WP ConfigService -> Custom properties. Add a new parameter named "timeout.resume.session" with the value "true" and the type "Boolean". Restart WebSphere Application Server.

Determine how you want the system to behave when users log out of portal. By default, when users click the Log out button in the SSO environment, they are not fully logged out of portal. Similarly, clicking the Log out button in the Portal environment does not mean that their SSO session has ended.

To accommodate a clean logout of Portal followed by a clean logout from the SSO environment, the IBM HTTP Server can be configured to perform a redirect to the WebSEAL logout page after the Portal Logout function has been requested. To achieve this, you need to edit the IBM HTTP Server configuration file, httpd.conf, to implement the post-logout behaviour.

To capture requests to /ibm_security_logout and redirect them to /pkmslogout, add the following rewrite rules to the httpd.conf file:

RewriteEngine On
RewriteCond %{REQUEST_URI} /(.*)/ibm_security_logout(.*)
RewriteRule ^/(.*) /pkmslogout [noescape,L,R]

Note: You must add these rules to both the HTTP and HTTPS entries.

Ensure that the line that enables mod_rewrite is not commented out by removing the preceding # symbol. For example:

LoadModule rewrite_module modules/mod_rewrite.so

After all of that, you should have a perfectly operational WebSEAL/Portal integration.

Monday, October 17, 2011

Tivoli Directory Integrator Web Services Revisited

I really wanted to talk about two issues that have presented themselves to me recently when hacking my way around Tivoli Directory Integrator's implementation of web services functionality using the Apache Axis framework.

It has struck me, however, that the issues weren't really TDI issues at all. They were pure Axis issues that have been documented elsewhere on the web. That said, it can be difficult to find the information, and even more difficult to use the information in a TDI context. So, despite repeating what others have already found, I thought I'd share the solutions with you.

Integrated Windows Authentication
I like the idea that the WWW is fairly platform agnostic and encourages the use of open standards. But things don't always work out that way. Finding that a WSDL is being served by some Microsoft technology isn't necessarily a bad thing but when the server providing the service insists on the client using Integrated Windows Authentication to authenticate, then I start to scratch my head in a "why would someone do that" manner.

They do, though. And the Axis implementation within TDI won't perform Integrated Windows Authentication without some "tweakage".

That "tweakage" requires Axis to make use of the CommonsHTTPSender method as an HTTP transport rather than the default HTTPSender method. How did I find this out? I Googled it! Great news. Hurrah. But how do we tell Axis to make use of the CommonsHTTPSender method?

Axis 1.4 will search for a file called client-config.wsdd in the current working directory and this file contains the necessary parameters which tell Axis how to behave and more importantly, what transport to use. I figured that placing such a file somewhere in the Classpath would suffice, but I only managed to get Axis to load up the contents of my client-config.wsdd by placing it in the TDI home directory. In my case, that was:

c:\Program Files\IBM\TDI\V7.1\client-config.wsdd

And what were the contents of this file you might ask? Thankfully, I found one already on my system hidden in the bowels of the WebSphere file structure. A change of HTTPSender to CommonsHTTPSender resulted in this:

<?xml version="1.0" encoding="UTF-8"?>
<deployment
    name="defaultClientConfig"
    xmlns="http://xml.apache.org/axis/wsdd/"
    xmlns:java="http://xml.apache.org/axis/wsdd/providers/java">
  <globalConfiguration>
    <parameter name="disablePrettyXML" value="true"/>
    <parameter name="enableNamespacePrefixOptimization" value="false"/>
  </globalConfiguration>
  <handler name="WSSResponseConsumerHandler" type="java:org.eclipse.higgins.sts.binding.axis1x.security.WSSResponseConsumerHandler"/>
  <service name="Trust" provider="java:RPC" style="document" use="literal">
    <responseFlow>
      <handler type="WSSResponseConsumerHandler"/>
    </responseFlow>
  </service>
  <transport name="http" pivot="java:org.apache.axis.transport.http.CommonsHTTPSender"/>
  <transport name="local" pivot="java:org.apache.axis.transport.local.LocalSender"/>
  <transport name="java" pivot="java:org.apache.axis.transport.java.JavaSender"/>
</deployment>

A restart of TDI, and hey-presto! Without any further code changes, I was able to connect to my target web-service and authenticate using Integrated Windows Authentication.

Java Object Clashes
Another problem I had, however, was an issue with Complex Types. I've already spoken at length on this blog about Complex Types and how easy they are to handle and I still maintain that this is true despite the issue I faced.

The target namespace listed in the WSDL I was pointing at had a URL style construction like such:


https://LARGECOMPANY.APPLICATION.SecurityManager/WSDL

Now, generating complex types was throwing an error within my TDI console and the root cause was the namespace above and the fact that it was attempting to create a package called LARGECOMPANY.APPLICATION.SecurityManager.

Some head-scratching and a tweak of the WSDL so that SecurityManager read securitymanager instead resulted in my complex type jar file being created successfully. Why would this be the case, you might ask? I think it may have something to do with java.lang.SecurityManager and a clash of names!

Tweaking the WSDL is fine for solving the problem, but it didn't seem like a sensible approach to me. Asking the owner of the WSDL if they would mind changing the namespace resulted in a "Computer Says No" response. (Actually this is unfair - it resulted in a perfectly valid response explaining that many other clients were using the web-services in this state!)

Looking at the TDI console and the Complex Type Generator function reveals the WSDL2Java Options attribute and a blank value! Using the -p option here got me around the problem quite nicely as such:

-p bigcorpsecurity

The -p option overrides the namespace to package mappings and ensures that Axis will use the value assigned to this option as the package name. Details on all the WSDL2Java Options can be found on the Apache Axis website and it's worth a read if you ever come unstuck attempting to use the Complex Types Generator.
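
For what it's worth, the same override can also be exercised outside of the TDI console by running the Axis WSDL2Java tool directly (with the Axis 1.4 jars on the classpath). The output directory below is an illustrative assumption; -p and -o are standard WSDL2Java options:

java org.apache.axis.wsdl.WSDL2Java -p bigcorpsecurity -o generated-src https://LARGECOMPANY.APPLICATION.SecurityManager/WSDL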

Friday, September 30, 2011

Tivoli Security for Dummies

I've had the pleasure to work with a very entertaining IT consultant from the Land of Dracula recently. He's as self-deprecating as anyone I think I've ever met and constantly refers to himself as a Dummy. To be fair, he refers to himself as a Dummy when he is working with technology that is new to him - I don't for one minute believe that he is actually a Dummy!

When discussing the Tivoli security products, he would constantly over-state his Dummy credentials and ask me questions such as "How do dummies find out how to set up replication on these LDAPs?"

The standard response of reading the manual would be met by "I can't find the manual, or the manual isn't written for dummies like me. Why can't it be written in a format that dummies can understand."

On this particular issue - he's probably right. There are manuals, there are presentations, there are all sorts of documents scattered across the web but very few of them show a simple step-by-step approach to enabling replication for dummies.

"How does the dummy enable Single Sign On from WebSEAL to ITIM?" he would ask.

Again, there are plenty of documents around, but they haven't really been written for dummies and they are rarely complete!

It got me thinking... Maybe I should write a series of Dummies Guides covering some of the basics of the Tivoli Security suite of software. Of course, I'm sure the publishers of that well known series of guide books would take exception to my use of the word Dummies, so I'd need something else. Maybe "Tivoli Security for Newbies"?

Regardless of the title for the series, it seems like a worthwhile thing to do as I'm conscious that my blog has become ever more technical as time passes with an assumption that my readers grow up alongside me. Refreshers for those readers would be useful and it may encourage newbies to get involved as well.

So, with that in mind, I'd like to call upon my readers to suggest topics that could be covered (taking the above two as givens for the start of the series). All topics will be considered!

Friday, August 19, 2011

Tivoli Directory Integrator Delta Processing

Tivoli Directory Integrator seems to be a favourite of mine when it comes to writing snippets of information on this blog. I can only assume that the reason for this is that it is so powerful and so capable in a wide variety of situations and that it never ceases to surprise.

It certainly surprised me last week when playing around with the Delta engine. Delta processing wasn't necessarily a new experience for me but requiring delta processing within a single Assembly Line for multiple data sources did present a challenge.

It is straightforward to write an Assembly Line and have the connection details for a connector be programmatically assigned at runtime. Take, for example, the following code:

LDAPConnector.connector.terminate();
LDAPConnector.connector.setParam("ldapSearchFilter", work.SearchFilter);
LDAPConnector.connector.setParam("ldapSearchBase", work.SearchBase);
LDAPConnector.connector.initialize(null);

This code will re-initialize my LDAPConnector using new connection parameters that I retrieve from my work entry.

However, I cannot use the same mechanism when attempting to set the Delta details! The reason was explained to me by Eddie Hartman who writes the excellent TDIing Out Loud. The Delta engine is initialised when the Assembly Line starts, not when the connector is initialised. So any attempt to programmatically set a DeltaDB parameter and re-initialise the connector is doomed to failure. (At least, this is true up to and including TDI v7.1 Fix Pack 4 though Eddie hinted that this "gap" may be "plugged" in the near future.)

So... what to do? The simple answer is to write two Assembly Lines! One of the Assembly Lines will determine the Delta DB table to be assigned (and presumably the target data source for your delta enabled connector). The other will be your main Assembly Line with the delta enabled connector.

The first Assembly Line - which I like to call the driver - would contain code like this:
java.lang.System.setProperty("deltastore", work.DeltaStore);

This would be followed by a call to the second Assembly Line which would have the Delta Store on the connector set as Advanced (Javascript) as such:

return java.lang.System.getProperty("deltastore");

Now, when my delta enabled connector starts up, a properly assigned Delta Store appropriate for the data source is assigned.

One word of warning though. The work.DeltaStore attribute used in the assignment of the Java property should NOT contain any funny characters of any shape or form. For example, if you wanted to assign a Delta Store name which matched your data source name (which sounds like a mighty fine plan), you may find you have to translate one to the other as such:

Domino Address Book Name >>> Delta Store Name
Scotland.nsf >>> scotland
Northern-Ireland.nsf >>> northernireland
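
One way of performing that translation in the driver Assembly Line is sketched below. It is only a sketch - it assumes the Domino file name arrives in the work entry as DeltaStore and simply strips the extension and any non-alphanumeric characters before the system property is set:

// Normalise the Domino file name into a safe Delta Store name
var rawName = "" + work.getString("DeltaStore");          // e.g. "Northern-Ireland.nsf"
var deltaName = rawName.replace(/\.nsf$/i, "")            // drop the .nsf extension
                       .toLowerCase()                     // lower case throughout
                       .replace(/[^a-z0-9]/g, "");        // strip hyphens, spaces, etc.
java.lang.System.setProperty("deltastore", deltaName);    // e.g. "northernireland"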

Thursday, July 14, 2011

The End Is Nigh

The end may be nigh for all I know and I'm quite sure that will please the rapturists. But my posting title doesn't actually refer to that "end".

The end I refer to is the end of official support for some IBM Tivoli security products. In some cases, nigh isn't very far away at all.

If you are running Tivoli Access Manager for e-Business v5.1, nigh is the end of September. In other words, you have a little over 2 months to upgrade! Trying to get support for v5.1 after September may result in a "Computer Says No" response!

If you are running Tivoli Identity Manager v4.6, you are in the exact same boat as our TAMeb friends above. In fact, I would wager that there are still a number of customers out there running TAM v5.1 and TIM v4.6!

Knowing what I know, I'd rather be in a position where I had to upgrade TAMeb than having to upgrade TIM. You TIM guys better get your skates on!

If you are running Tivoli Directory Server v6.0 - you're beat already. Support ended last year. For those on v6.1, you have a stay of execution until next April (2012).

This may sound crazy, but a programme of work to upgrade your TAMeb and underlying TDS infrastructure (and TIM if you have it) sounds like a good idea if you aren't on the latest versions. And by the way, check out the state of your DB2 while you are at it.

For more information on Tivoli Software Lifecycle Dates, keep an eye on the IBM Software Support site.

Thursday, June 30, 2011

WebSEAL Virtual Host

Tivoli Access Manager for e-business' WebSEAL supports the concept of virtual hosting but the documentation surrounding the setup of virtual hosting can sometimes be a little unclear. The diagrams available on Infocenter only help to muddy the waters from what I can tell.

Yet it doesn't have to be so. The concept is actually very simple indeed and can be encapsulated quite easily within a single diagram.

So without over-using words... here's a pretty picture:

All explained I hope? Of course it is!

Wednesday, June 29, 2011

What Fix Pack Are You On

In the world of commercial software, patches and fix packs can come thick and fast and it is often difficult to stay "current" despite the protestations of customer support.

How often have you had the following conversation?

User: "I'd like to report a problem with this software you sold me."

Customer Support: "What version and fix pack are you running?"

User: "Version 5.1, Fix Pack 1"

Customer Support: "Oh. I don't like the sound of that. You should really be on Fix Pack 6 you know. Why don't you apply Fix Pack 6 and see if the problem goes away. If it doesn't, ring me back."

User: "Does Fix Pack 6 address the issue I described?"

Customer Support: "No. But it might be fixed as a result of some other fix."

Familiar?

It's not necessarily possible to constantly apply fix packs that are released every quarter. The thought of regression testing a fundamental component in your architecture may send shivers down your spine. So maybe applying fix packs once every six months or once a year would suffice?

In my TIM and TAM world, I have a view on what is and isn't acceptable as far as fix packs are concerned. I prefer the latest and greatest but am happy to accept certain fix packs as being relatively fit for purpose.

IBM Tivoli Identity Manager
If you aren't on v5.1, what are you waiting for? Life in the world of v5.1 is a much happier experience and is further enhanced once you are on at least Fix Pack 1 (though preferably Fix Pack 5 or later).

IBM Tivoli Directory Integrator
If you aren't on v7.1, what are you waiting for? At last, TDI gets the interface it deserves and the functionality it promised. Apply Fix Pack 3 at least, though, as most of the niggles had been ironed out by then.

RMI Dispatcher
If you are using agentless adapters, you need to ensure you are running at least v5.1.3 of the RMI Dispatcher - v5.1.2 had a memory leak! Feel free to deploy v5.1.7 though!

ITIM Adapters
Ah - keep a close eye on these bad boys. Try to keep up with them because they tend to only get an update if there are performance problems. It pays to stay current!

IBM Tivoli Access Manager
If you aren't on v6.1.1, what are you waiting for? Upgrades aren't that complicated! Of course, apply Fix Pack 1, but you will find a life of stability and tranquility with this veteran piece of software.

IBM Tivoli Directory Server
It's got to be at least v6.2 but there's not a lot wrong with v6.3 either. If you have a five in your version, you need to sort out your life!

DB2
Like TDS, your life will need sorting out if your version is antiquated. Anything pre v9.x needs replacing. Honestly. Now. Just do it. Give v9.7 your blessing. It will be good to you for a while.

Take the time to stay up-to-date as much as possible. You may find that the customer support conversation doesn't happen in the first place!

Thursday, May 26, 2011

ITIM Custom Participants Explained

One of IBM Tivoli Identity Manager's strengths is in its workflow engine. Visually defining workflows by adding actions, scripts and approval "nodes" can actually be fun and the visual results can often be a thing of beauty.

That said, the visual beauty can often be regarded as ugly compared to the elegance and simplicity within our scripting!

An approval node, for example, allows a workflow designer to select an approver as a particular ITIM user; a group of users associated with an ITIM role; or users with a specific relationship to the entity being operated on (such as a supervisor or service owner). But this can be extended using "Custom Participants".

CUSTOM PARTICIPANTS

Custom participants allow us to make use of the power of scripting. Instead of defining the participant for an approval (for example), we could use scripting to find an approver based on more obscure relationships.

All we need to do within our scripting is return an array of DNs representing each user who will act as an approver for the activity.

USE CASE & SOLUTION

Why would we want to do this? Let's walk through an example scenario:

Big Corp has lots of departments filled with lots of people who are very busy indeed. Supervisors have been identified but are too busy to deal with approval requests coming from ITIM because they operate in a highly volatile world when it comes to access rights. Big Corp has therefore decided that a role called "Departmental Approver" will be created and each department can assign multiple people to the role.

Big Corp, however, has insisted that users' access requests will only be approved by those Departmental Approvers who exist within the same department as the requestor.

Native ITIM wizards allow you to select the "Departmental Approver" role as the participant of an approval activity but the link to the department that the approver is in won't exist! Using this mechanism, ALL departmental approvers across the entire organisation would be used as approvers. This is where our Custom Participant script can come into play and provide us with a means of selecting Departmental Approvers who exist in the same department as the requestor.

The solution to the problem would look a little like this:

// Let's search for the role first - we need its DN
var roleSearch = new RoleSearch();
var roleResult = roleSearch.searchByName("Departmental Approver");

if (roleResult.length < 1) {
    // This is a disaster - the role doesn't exist
    // You should handle this by whatever means suits you
} else {
    var supervisordn = roleResult[0].dn;

    // Let's search for Departmental Approvers within our department
    var myFilter = "(&(erparent=" + container.get().dn + ")(erroles=" + supervisordn + "))";
    var personSearch = new PersonSearch();
    var personResult = personSearch.searchByFilter("person", myFilter, 2);

    // For each user found, let's add them to an array
    var myParticipants = new Array();
    for (var i = 0; i < personResult.length; i++) {
        myParticipants[i] = new Participant(ParticipantType.USER, personResult[i].dn);
    }
    return myParticipants;
}


Of course, we should handle the situation whereby no approvers were found! Hopefully, however, there is enough information above to help you build a robust custom participant solution.
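
By way of illustration only, a guard such as the following could sit just before the return statement; the "Fallback Approvers" role name is entirely made up and the search objects are the same ones used in the script above:

// If nobody matched, fall back to members of a catch-all role (hypothetical name)
if (myParticipants.length == 0) {
    var fallbackRole = roleSearch.searchByName("Fallback Approvers");
    if (fallbackRole.length > 0) {
        var fallbackFilter = "(erroles=" + fallbackRole[0].dn + ")";
        var fallbackResult = personSearch.searchByFilter("person", fallbackFilter, 2);
        for (var j = 0; j < fallbackResult.length; j++) {
            myParticipants[j] = new Participant(ParticipantType.USER, fallbackResult[j].dn);
        }
    }
}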

Sunday, May 08, 2011

Tivoli Directory Integrator Web Services

Tivoli Directory Integrator has supported web services for quite some time and the AxisEasyInvokeSoapWebServiceFunctionComponent sounds like it should be a straightforward case of drag, drop and point at a WSDL to enable TDI to call a web service without any knowledge of how web services work.

Of course, the reality is quite different and the word Easy in the middle of that component string is a tad misleading.

Let's construct a TDI web service client to retrieve a stock quote. Those nice people at webservicex.net have made a stockquote service available at http://www.webservicex.net/stockquote.asmx?wsdl.

Complex Types
Invoking a web service normally means that some data needs to be provided to the service and the service will respond with some data. The data involved is normally wrapped up in what is called a complex type. In other words, a data object which can have one or more data elements.

Supplying complex types to a Function Component is straightforward, but not in the traditional way of building an input or output map the way we do for most other components.

Thankfully, TDI comes with a "Complex Types Generator". Drop a Complex Types Generator function component into your assembly line, point it at the http://www.webservicex.net/stockquote.asmx?wsdl WSDL, provide a JAR file to collect the necessary Java code that will construct the Complex Types as such:


Click on "Generate Complex Types" and you should get the following result:

The JAR file should now be copied to your {TDI_HOME}\jars directory. I create a WSDL directory under 3rdparty for such JAR files - you might like to do something similar.

To make use of the JAR file, TDI should be restarted!

We can disable the Complex Types Generator component as it won't be required at runtime. Now, we can make use of our JAR file and call the web service.

I use the excellent Java Decompiler to look inside the JAR file which helps me determine the full names of the Complex Types and the methods I can use on them:

Now, with a JAR file and the knowledge of what's inside the JAR file, we can build a web service call. For ease of understanding, I'm going to create three components/connectors:

Component/Connector 1: Script
First, we create a script component which will build our complex type. The code is:

var GQ = NET.webserviceX.www.GetQuote();
GQ.setSymbol("MSFT");
work.setAttribute("GetQuote", GQ);

What this is doing is creating a complex type called GetQuote containing a Symbol attribute with a value of MSFT.

Component/Connector 2: AxisEasyInvokeSoapWebServiceFunctionComponent
Now we can make the call to the service with our next connector. Pointing the AxisEasyInvoke.... component at  http://www.webservicex.net/stockquote.asmx?wsdl WSDL we can select the GetQuote operation by clicking on Operations. The parameter to be supplied to the component (Operation Parameters) will be GetQuote and we specify our input and output complex types in the Advanced Pane as NET.webserviceX.www.GetQuote and NET.webserviceX.www.GetQuoteResponse as such:

Our Output Map should map the GetQuote work object as such:

Our Input Map should map the supplied Return object as such:

Component/Connector 3: The Result
Finally, we are going to insert a script to unpack the information retrieved from the AxisEasyInvoke.... component:

var myReturn = work.getAttribute("return").getValue(0);
task.logmsg("INFO", myReturn.getGetQuoteResult().toString());

And when we run the Assembly Line, this is what we should get:

23:19:40,789 INFO  - CTGDIS087I Iterating.
23:19:40,790 INFO  - CTGDIS086I No iterator in AssemblyLine, will run single pass only.
23:19:40,790 INFO  - CTGDIS092I Using runtime provided entry as working entry (first pass only).
23:19:41,242 INFO  - [AxisEasyInvokeSoapWebServiceFunctionComponent] CTGDIZ601I Web service called successfully.
23:19:41,246 INFO  - <StockQuotes><Stock><Symbol>MSFT</Symbol><Last>25.87</Last><Date>5/6/2011</Date><Time>4:00pm</Time><Change>+0.08</Change><Open>26.01</Open><High>26.22</High><Low>25.75</Low><Volume>55993640</Volume><MktCap>218.2B</MktCap><PreviousClose>25.79</PreviousClose><PercentageChange>+0.31%</PercentageChange><AnnRange>22.73 - 29.73</AnnRange><Earns>2.517</Earns><P-E>10.25</P-E><Name>Microsoft Corpora</Name></Stock></StockQuotes>
23:19:41,247 INFO  - CTGDIS088I Finished iterating.
23:19:41,247 INFO  - CTGDIS100I Printing the Connector statistics.
23:19:41,248 INFO  -  [BuildComplexType] Calls: 1
23:19:41,249 INFO  -  [AxisEasyInvokeSoapWebServiceFunctionComponent] CallReply:1
23:19:41,250 INFO  -  [DecodeReturn] Calls: 1
23:19:41,250 INFO  - CTGDIS104I Total: CallReply:1.
23:19:41,251 INFO  - CTGDIS101I Finished printing the Connector statistics.
23:19:41,252 INFO  - CTGDIS080I Terminated successfully (0 errors).


I leave the parsing of the result to you and wish you all the best with your future Web Services adventures.

Tuesday, March 29, 2011

TAMeb Naughty Installer - SMS Part 2

Previously, I described the pain and heartache that is the Tivoli Access Manager for e-business SMS Server installer routine when another administrator has installed the WebSphere component [see http://blog.stephen-swann.co.uk/2011/03/tameb-naughty-installer-sms-part-1.html].

The title of that blog post claimed that it was merely Part 1 of the saga. Here's part 2.

It's not unusual for customers to want to install software somewhere other than the suggested location. In a Windows environment, it seems quite normal for the base operating system to occupy the C: drive and for additional components to be deployed on the D: drive. When an installer gives you the option to change the destination for your application, it's reasonable to assume that it is safe to do so.

With the TAMeb SMS installer, however, this assumption would be incorrect. While the PDSMS package can be deployed on to the D: drive, running the smscfg utility to configure the SMS product will fail claiming that c:\Program Files\Tivoli\PDSMS cannot be found on the system!

The simple way to resolve this issue is as you would expect. Temporarily copy the contents of your PDSMS directory from wherever you deployed it to the location above and smscfg will get you further. In fact, you should now be in a state where you can successfully deploy and configure the SMS Server and associated components.
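
On the off-chance it helps, the temporary copy can be as crude as the following (the source path is an assumption based on a D: drive deployment - adjust it to wherever you actually installed PDSMS):

xcopy "D:\Tivoli\PDSMS" "C:\Program Files\Tivoli\PDSMS" /E /I
rem ... run smscfg, then tidy up the temporary copy ...
rmdir /S /Q "C:\Program Files\Tivoli\PDSMS"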

For a comprehensive set of instructions on how to do that, follow the guide at IBM Tivoli Access Manager Session Management Server Deployment Architectures.

Sunday, March 27, 2011

TAMeb Naughty Installer - SMS Part 1

I've long been an admirer of IBM Tivoli security software. The components mostly do exactly what you would expect.

However, I've always been a bit confused by the means and mechanisms used to install the software. This week, I had a brilliant example of pure laziness on the part of the developer within IBM who had responsibility for coding the installer for the Tivoli Access Manager for e-business SMS component!

Consider the following facts:
  • Windows 2008 Server (64-bit) platform
  • WebSphere 7 already deployed
  • Tivoli Access Manager for e-business Policy Server already deployed and configured

Installing the SMS component should be as simple as running the installer from the TAMeb Base package and following the on-screen instructions. Unfortunately, on my system, the installer failed to recognise that WebSphere was installed and insisted on installing its own version of WebSphere.

And the reason for this failure to detect the WebSphere installation? I could plainly see that it was deployed at d:\IBM\WebSphere\AppServer. I could plainly see that WebSphere was running by checking the list of Windows Services. The installer, however, could not.

It was time for some code-hacking to determine what was going on and after a little bit of digging around the installer, I found that it was checking for a WebSphere installation by looking through the Windows registry.

Interesting, I thought. Surely there must be a reference in the registry for WebSphere? Well, that may indeed have been the case, but it certainly wasn't where the installer was looking for it!

A colleague of mine had installed WebSphere using his credentials. I was installing SMS using my credentials. The SMS installer was looking for WebSphere registry keys under the current user's hive (HKEY_CURRENT_USER). And they didn't exist there because I didn't install WebSphere!



The addition of the following registry keys (under my session) allowed the installer to recognise the WebSphere instance:

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\IBM]

[HKEY_CURRENT_USER\Software\IBM\WebSphere Application Server Network Deployment]

[HKEY_CURRENT_USER\Software\IBM\WebSphere Application Server Network Deployment\7.0.0.0]
"BinPath"="D:\\IBM\\WebSphere\\AppServer\\bin"
"InstallLocation"="D:\\IBM\\WebSphere\\AppServer"
"LibPath"="D:\\IBM\\WebSphere\\AppServer\\lib"
"MajorVersion"="7"


That said, I wanted to install the SMS components on to my D: drive. Do you think that would work? Do you think if I installed the software there that the smscfg routine would work? If you think positively about these questions, then it's time to think again! Check back for the next thrilling episode in the SMS Installation Series!

Wednesday, March 23, 2011

A Proxy For Google

I was recently asked why I write down my thoughts on Identity and Access Management in a blog. In fact, I was recently asked why I give away all our secrets and didn't I know that my actions were damaging to my long-term job prospects. In effect, educating others means more competition in the job pool.

I have some answers to these questions:

1) I enjoy writing down my thoughts and it helps solidify the concepts in my own head. It also allows me to refer back to past experiences

2) I like the idea that others read my blog and are maybe inspired to take the thoughts and improve them

3) Educating others relieves me of the responsibility of being the custodian of a certain piece of information and allows me to concentrate on learning new things. After all, we should always aspire to learn new things and stretch our imaginations

Having said all that, my time is precious. When viewers of my blog request help, I will try my best to provide guidance and pointers but I may not respond immediately. I don't actually provide this as a service and therefore there is no SLA! Requests should also be thought through - I don't like being a proxy for Google, for example. (You may get a response including a Let Me Google That For You link!)

In short, I enjoy writing my blog and I enjoy helping people but I prefer to help people who have demonstrated that they have already made a good attempt at addressing their problem.

Wednesday, March 09, 2011

How To Externalise JavaScript in ITIM Workflow

Every now and again, I seem to hit on a problem which means my JavaScript veers towards being environment specific. As we all know, we want our ITIM workflows to be consistent between our development, test and production environments and workflow which contains environment specific code has got to be a bad thing.

But, it happens.

What if, however, we could maintain the same workflow between the environments but externalise the JavaScript to a file? The pain point of altering the workflow between environments would just go away!

Achieving this is really quite simple and has been alluded to in other forums - though rarely in a way that gives the reader all the information they require to complete the task. The following method works with ITIM v5.1.

JavaScript Node
To read an external JavaScript file, we require the following code:

// Open the external JavaScript file
var myScriptFile = new java.io.BufferedReader(new java.io.FileReader("/path/to/file"));
var myLineOfCode = null;
var myCode = "";

// Read the file and add each line of code we find to a variable
while ((myLineOfCode = myScriptFile.readLine()) != null) {
   myCode += myLineOfCode + "\n";
}

// Close the file
myScriptFile.close();

// Execute the code that we have gathered together
eval(myCode);

Our external JavaScript file can contain any JavaScript code we require for our node:

Enrole.log("external", "Hello world - I'm an external script!");

scriptframework.properties
Of course, to expose the java.io methods to our JavaScript, we need to configure this ability in our scriptframework.properties file (and restart ITIM). Adding the following property to scriptframework.properties will do the trick:

ITIM.java.access.io=java.io.*

And there you go... from within our script nodes, we have access to the java.io classes. We can externalise our scripts. We can read files. We can write files. Why not let me know how you put your java.io capability to use in the script nodes.

Wednesday, February 16, 2011

WebSEAL Landing Page Personalisation

Creating a Landing Page for WebSEAL authenticated users can be a useful technique for ensuring a consistent user experience, providing a means of delivering messages to end users and providing a personalised experience. This does not mean that you have to invest in a heavy-weight portal product to provide this functionality, though.

Simple ASPs, JSPs, PHP scripts, PERL scripts and any number of other scripting technologies can be used to greet the user and personalise the landing page without resorting to performing a lookup in the credential store for information. How can we do that? Well, to reuse a well-known modern-day philosopher's phrase, it's simples!

WebSEAL can pass the User ID of the user to the protected landing page in an HTTP header called IV_USER. We can pick this up as follows:

PERL
#!c:/perl/bin/perl
my $user = $ENV{"HTTP_IV_USER"};

JSP
<%
String user = request.getHeader("iv-user");
%>

ASP
<%
user = Request.ServerVariables("HTTP_IV_USER")
%>

PHP
<?php
$user = $_SERVER['HTTP_IV_USER'];
?>

Note: The names of the HTTP Header will vary depending on the scripting technology being used.

So now I have my User ID, I can make use of this information in my page to say something like "Welcome sswann". A nice touch, I'm sure you will agree.

But of course, WebSEAL can be so much more powerful than that. It can also send IV_GROUPS out of the box which will be the groups that the user is a member of. With this information, we could build a list of hyperlinks that are available to that user. In code/pseudo code, that could look like this:

String groups = request.getHeader("iv-groups");
if (groups.indexOf("administrators") >-1) {
   // Show a link to the administrator's application
}
if (groups.indexOf("auditors") >-1) {
   // Show a link to the auditor's application
}

Wonderful, you might think, with the obvious next question being "what else can I do?"

Well, we could add any attribute assigned to the user object in the TAM LDAP as a similar HTTP header object. To do so, though, is a two-step process:

Step 1: WebSEAL Configuration
Let's assume that we want to make the forename and surname for our user available to our landing page. We need to configure WebSEAL to retrieve these attributes from the LDAP and make them available within the credential. To do so, the WebSEAL configuration file needs updated as such:

[aznapi-entitlement-services]
TAM_CRED_ATTRS_SVC = azn_ent_cred_attrs

[aznapi-configuration]
cred-attribute-entitlement-services = TAM_CRED_ATTRS_SVC

[TAM_CRED_ATTRS_SVC]
person = azn_cred_registry_id

[TAM_CRED_ATTRS_SVC:person]
tagvalue_credattrs_sn = sn
tagvalue_credattrs_givenname = givenname

Step 2: Junction Configuration
Next we need to ensure that we pass these attributes to our landing page. On the WebSEAL junction hosting this "personalised" landing page, we would perform the following pdadmin commands:

pdadmin> object modify /WebSEAL/webseal_instance/junction_name set attribute HTTP-Tag-Value credattrs_sn=surname
pdadmin> object modify /WebSEAL/webseal_instance/junction_name set attribute HTTP-Tag-Value credattrs_givenname=forename

Now, we can extract the HTTP header variables for forename and surname and provide a "Welcome Stephen Swann" message because these header attributes will be passed to our landing page process:

String surname = request.getHeader("surname");
String forename = request.getHeader("forename");

We haven't had to perform any lookups in a data repository and our landing page can be kept very simple indeed with just a couple of lines of scripting.

Tuesday, February 15, 2011

ITIM Accesses and SoD

Just like all enterprise applications, IBM Tivoli Identity Manager has evolved over the years. New releases bring new functionality or even a new look and feel (as per v5.0's move away from the Ice Cream Parlour look of v4.x). The new functionality is typically a response to customer needs, competitor functionality or a general shift in focus.

Recent functional additions have included Accesses, Separation of Duties and Recertification. Fabulous you might think. However, I can't help thinking that some of these functions could be improved to make them actually work in a real world scenario.

Accesses
Accesses can be applied to roles and groups and allow end users to "request" these accesses through the self-service screens. This is great as it now provides us with a means of allowing users to request access to a File Share, for example. Or is it great? If all file shares were made accessible you might find that there are a lot of accesses that the end user needs to wade through. And what if the user isn't even entitled to an AD account which is a pre-requisite to getting access to a File Share? Well, in the ITIM world, the File Share access would still be displayed on screen!

What if you manage external users inside your ITIM? Can these external users (who may only access a web portal page, for example) now see the structure of your AD Groups because they are now accesses?

In short... why are accesses not locked down by ACIs in the same manner as almost all other ITIM data objects?

Separation of Duties
SoD has become a big issue in recent years and it is terrific that it is being addressed in ITIM. However, the implementation is restricted to Static Role clashes only. If, for example, I have a role in ITIM assigned to me by virtue of some attribute on my person record (ie. a Dynamic Role) then this cannot be used in the evaluation of SoD rules.

Take this example: I'm assigned the role of Approver because my Job Title is Manager (via a Dynamic role). A superior of mine assigns the role Auditor to me (via a static role). Ideally, I want to create a rule that states that Auditors cannot be Approvers. Unfortunately, I cannot create this rule in ITIM as the role of Approver is a Dynamic role.

I have no doubt that these issues are already well understood within IBM and that the next version of ITIM will address them. If that is not the case, that would be a terrible shame and a missed opportunity. Maybe my thoughts can help steer the product owners in the right direction? Do I have that amount of clout?

Thursday, February 03, 2011

WebSphere And IHS - What Gives

As regular readers will be aware, the majority of my professional work is in the field of Identity and Access Management. Indeed, it is with IBM Tivoli products that I play with day-in and day-out.

IBM Tivoli Identity Manager (ITIM) is a J2EE application which runs within a WebSphere Application Server instance or cluster and I often-times develop mini Java apps as either a helper for ITIM or as an extended authentication service for Tivoli Access Manager for e-business. The curious thing about ALL of these Java applications, however, is that from a user session perspective, they are short-running (typically) and don't really require any state to be maintained during that short interaction with the user.

In the WebSphere world, it seems to be good practice to deploy the IBM HTTP Server (which is based on the Apache HTTP Server). There are a number of reasons as to why this is a good idea including:
  • it adds a layer in front of your application server
  • it can perform load balancing
  • it can provide caching services
  • it can provide port translation
  • it can help maintain state within a clustered environment

All of this is fine and well, but remember... I work in a field where I typically have a WebSEAL reverse-proxy in place which can provide almost all of the above. Where it might fall down is when session state becomes crucial.

That said, in a world where the applications purring away inside that WebSphere cluster will only ever see short running sessions (ie. log in, change password or log in, approve a request) then state isn't an issue.

In effect, surely it is quite acceptable to build your infrastructure without the added overhead of the IHS (where WebSEAL is already in place).

I've had this conversation a number of times down the years and the conclusion I've always come to is that the IHS in my configuration is pretty superfluous. However, it is "good practice" and sometimes it is easier to deploy it than have the political bun-fight with the powers that be who point at these architectural best-practices and tell me I can't deviate from the infrastructure design.

So this begs the question. Who writes these best practices/guidelines and can they please word them in such a way that allows those out in the field a certain amount of flexibility when the occasion demands?

NOTE: Should the application hosted within the application server be a long-running enterprise-scale application that is mission critical... please, please, please deploy the IHS. In other words... make sure you deploy components for the right reasons and not just out of fear because of some design pattern dreamt up in some lab by someone who has never deployed this stuff in a production environment!

Thursday, January 13, 2011

Let Me Introduce Myself

I get a lot of emails each day (as I'm sure most people do nowadays). Most of it is rubbish and can be binned immediately, some of it is from people I already know quite well and then some of it is from people who are introducing themselves to me. These new people tend to want to hook up on LinkedIn or Facebook, and either have job offers for me or are looking for a job themselves.

My first reaction these days is to ask my good friend Google to do a background check on these people. That way, I'll get to maybe see a photo of them - putting a face to a name is always a good thing - and I'll get to see what they are up to, what their interests are and whether there is any point in responding to their introduction.

And so it came to pass that I did precisely this yesterday but with the result that I found someone's Twitter account and their last tweet was asking for payment from another Twitterer via bank transfer. Astonishingly, this tweet also contained my new acquaintance's account details - Sort Code & Account Number.

I pondered this information for only a short time. I had this person's name, address, sort code, account number, list of friends and interests within just a couple of minutes. If I was so inclined, I could have some fun with the information as I'm sure others would dearly like to do.

We should always be mindful that the information we release onto our computers and then into the WWW is accessible by others. Not only can the information be damaging to our reputation but, in the case above, our bank accounts could come under threat. That's not to say that we should stop releasing information, of course. We just need to be a little more careful about what we say and to whom, and choose the medium we use to communicate wisely!