Tuesday, November 22, 2016

IGI Attribute Hierarchies

IBM Security Identity Governance & Intelligence (or IGI for short) has a very neat feature whereby hierarchies can be constructed using any attribute associated with an identity. Now, instead of identities being firmly placed within a rigid organizational hierarchy, additional hierarchies can be created to help model entitlements more accurately.

For example, it could be useful for everyone who reports to a specific manager to be automatically assigned a suite of entitlements. Additionally, we could find that rights should be assigned based on regional or office location.

Fundamentally, the assignment of these rights is somewhat akin to how IBM Security Identity Manager could be configured with dynamic roles - except IGI's approach is so much more powerful.

For example, let's consider identity records that contain the following attributes:
  • Country
  • City
  • Address

It could be interesting to model those attributes as a hierarchy and "virtually" place identities within it, like this:

World
- United Kingdom
- - Belfast
- - - 1 Main Street
- - London
- - - 2 High Street
- - - 3 Oxford Street
- - - 4 Piccadilly
- France
- - Paris
- - - 5 Rue de Provence

etc.

Modelling the hierarchy is simple. As an IGI administrator, one merely needs to navigate to Access Governance Core > Configure > Rules. Here, we can create a Rules Sequence called LOCATION_HIERARCHY of type Hierarchy.

Now, within the Rules tab, we can select the rule class Hierarchy and rule flow LOCATION_HIERARCHY (ignoring the rather annoying naming mismatch between "sequence" and "flow").

Within the package imports section, we would place the following:

import com.engiweb.profilemanager.common.bean.UserBean
import com.crossideas.certification.common.bean.data.ResultBean
import com.engiweb.profilemanager.common.ruleengine.action.UtilAction
import java.util.ArrayList

global com.engiweb.pm.dao.db.DAO sql
global com.engiweb.logger.impl.Log4JImpl logger

This suite of imports exposes the classes and methods we will need. Our next step is to create a Rules Package with the following code:

when
    userBean : UserBean(  )  
    resultBean : ResultBean(  )
then
/* Country>City>Office */
    String country = userBean.getCountry();
    String city = userBean.getLocality();
    String office = userBean.getAddress();

    if (country != null && city != null && office != null) {
        resultBean.setResultString("World;" + country + ";" + city + ";" + office); 
    } else {
        resultBean.setResultString("World"); 
    }

What does this code do? The rule is evaluated for every identity in the platform and, for each one, constructs a string of the form World;Country;City;Office which is passed back to the core platform so that a hierarchy can be built. For example, an identity with a Country of United Kingdom, a City (locality) of London and an Address of 2 High Street yields World;United Kingdom;London;2 High Street. Give the package a "sensible" name (like Country>City>Office) and assign it to the LOCATION_HIERARCHY rule flow.

The next step is to navigate to Access Governance Core > Configure > Hierarchy and create a hierarchy that uses our rule flow. Under the Actions link, click on Add. Populate the blank form with the following details:

Name: Location Hierarchy
Configuration Type: Advanced
Rule: LOCATION_HIERARCHY
Value: Hierarchy
Separator Char: Semi-Colon (;)

Save the hierarchy, re-select it and under Actions, click on BUILD. Now the system will build an appropriate hierarchy which can be viewed under Access Governance Core > Manage > Groups.

So what can we do now?

Well... let's assume we want everyone in the United Kingdom to be assigned a role. Create and publish a role called "United Kingdom Users". Now configure the role by updating the Org Units it is assigned to (ignoring the fact that this is still labelled Org Units - something that will no doubt be resolved in a future fix pack!). Add an "Org Unit" of type Location Hierarchy, navigate down through the hierarchy to find United Kingdom, click on OK and complete the following:

Default: Yes, and align users
Visibility Violation: No
Enabled: Yes
Hierarchy: Checked

That's it... every user under the United Kingdom hierarchy will automatically be assigned the United Kingdom Users role.

Tuesday, August 30, 2016

Javadoc Updates

It has been a while, but I've finally got round to uploading the latest Javadocs for IBM Security Identity Manager v7.0 and IBM Security Access Manager v9.0.

These can be found by following the links from here: https://www.stephen-swann.co.uk/links-and-tools/

Enjoy - if it's possible to enjoy Javadocs!

Wednesday, March 23, 2016

Property Changes In TDI - On The Fly

This week, I was asked if it was possible to update a TDI property while the TDI server was still running and, if so, how to go about doing it.

The reason for wanting to do so was to inject a property into TDI at run-time so that an Assembly Line could "safely" shut itself down at a sensible point in its processing. Now, there are many ways to address that particular requirement, but the fundamental question of how to inject a property into TDI is certainly something that can be explained easily.

tdisrvctl
The tdisrvctl command is a terrific tool for communicating with a running TDI server. You merely need to supply some key information, such as the port number that the TDI Server API is listening on, some means of identifying yourself using the TDI keystores and, finally, an "operation" for the TDI server to perform. Properties are manipulated with the "prop" operation. In summary, tdisrvctl needs the following parameters:

  • -p {port}
  • -K {keystore}
  • -P {keystore password}
  • -T {trust store}
  • -W {trust store password}
  • -op prop


I set up a project in TDI called propertyhandling, with an assembly line called propertyhandling, and a property called status (with a value of run) in the propertyhandling properties file.

In my AL, I created a conditional WHILE loop with this code:

if (system.getExternalProperty("status") == "run") {
      return true;
} else {
      return false;
}

I ran the AL and it trundled along nicely, doing nothing but looping and consuming all available CPU. You've got to love never-ending loops!
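
As an aside, if you'd rather the loop polled without chewing up every available cycle, a minimal variation on the condition script above (my own sketch, not part of the original project) simply pauses between checks:

if (system.getExternalProperty("status") == "run") {
      java.lang.Thread.sleep(5000);   // pause for five seconds before the next check
      return true;
} else {
      return false;
}

The loop still reacts to a property change within a few seconds, but the CPU gets to take a breather.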

I then ran this command:

tdisrvctl.bat -p 1091 -T C:\TDISOL\testserver.jks -W server -K C:\TDISOL\serverapi\testadmin.jks -P administrator -op prop -c propertyhandling -o propertyhandling -g all

The -c, -o and -g options after the prop operation need a little explaining:

  • -c: the Solution Name for my configuration - in this case, propertyhandling
  • -o: the name of the properties collection - in this case, propertyhandling
  • -g: tells the TDI Server to return the value of the named property - in this case, all means return the values of all the properties in the properties file


When I run the command, the result I get on-screen is this:

--- propertyhandling ---

status=run

This is excellent news: I'm able to query the current properties held in this particular properties file. Swapping that -g argument for a -s argument means I can now manipulate the status property like so:

tdisrvctl.bat -p 1091 -T C:\TDISOL\testserver.jks -W server -K C:\TDISOL\serverapi\testadmin.jks -P administrator -op prop -c propertyhandling -o propertyhandling -s status=stop

The -s option requires a property/value pair to be supplied. In this case, -s is telling the TDI Server to set the value of the property status to stop. When I run this command, I get this result:

CTGDJB070I The property status has been set and committed.

That looks positive, and when I check whether my assembly line is still running, I find that it has indeed come to an end - as expected. Thank goodness, says my CPU!

Of course, there are myriad use cases for injecting properties at run time... safe shutdown of an assembly line is just one.

Tuesday, May 26, 2015

TDI and HTTP POST

I was recently asked how to successfully perform an HTTP POST request using TDI's HTTP Client Connector. NULL values kept being processed by the web server and there seemed to be no obviously documented way to perform the task.

On the face of it, this seems like a straightforward piece of functionality, but the truth is that it isn't quite as simple as it looks.

To understand how to pass parameters to a web server via HTTP POST using TDI, it helps to know what actually happens when a standard web form is submitted.

Consider the following:

<html>
<head />
<body>
<form action="result.php" method="post">
User Name<br />
<input type="text" name="uname">
<br />
Password<br />
<input type="password" name="pass">
<br />
<input type="submit" value="submit">
</form>
</body>
</html>

The submit button on this form will cause the browser to send an HTTP request to the web server with a Content-Type of "application/x-www-form-urlencoded". This is the key to unlocking our problem!

The values for the attributes requested in the form will be passed in the BODY of the request rather like the query string you might see in HTTP GET requests:

uname=x&pass=y

So to mimic form submission using the POST method via TDI, all you need to do is follow these steps (there's a small script sketch after the list):
  • Set the Mode to CallReply (if you want to see what the web server has done with your request)
  • Set the Request method to POST
  • Set the http.Content-Type to "application/x-www-form-urlencoded"
  • Set the http.body to name/value pairs in query string format
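
By way of illustration, here is a minimal sketch of the last two steps as a script component placed just before the HTTP Client Connector. It assumes the connector's Output Map simply passes these work attributes straight through, and the uname and pass values are just the hypothetical form fields from the example above:

// Prepare the POST body and content type before the HTTP Client Connector fires.
// Assumes an Output Map that maps http.Content-Type and http.body through unchanged.
var uname = "x";
var pass = "y";
var body = "uname=" + java.net.URLEncoder.encode(uname, "UTF-8")
         + "&pass=" + java.net.URLEncoder.encode(pass, "UTF-8");

work.setAttribute("http.Content-Type", "application/x-www-form-urlencoded");
work.setAttribute("http.body", body);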

Hopefully that will see your TDI Assembly Lines behaving themselves when acting as an HTTP client and attempting to use the POST method to transfer information.

Friday, May 15, 2015

LDAP Schema Issues

It's annoying when a basic task consumes too much of your time!

Getting an LDAP Operations Error when attempting to perform an ldapmodify on an object can be irksome. It is especially irksome if the change you are making is trivial!

Consider the following object that already exists in my LDAP Server:

dn: myattr=ABC,dc=com
objectclass: top
objectclass: mycustomobject
myattr: ABC
mytrivialattribute: Z

Now consider changing that object to the following:

dn: myattr=ABC,dc=com
objectclass: top
objectclass: mycustomobject
myattr: ABC
mytrivialattribute: Y

One might reasonably expect the LDAP modify operation to be successful bearing in mind how trivial the change is. But when an Operations Error is thrown back at you by a hissy-fitting Directory Server, you might start to scratch your head.

The V3.modifiedschema file looked perfect, as mytrivialattribute was defined with a syntax of 1.3.6.1.4.1.1466.115.121.1.15{1024}. But looking inside DB2 revealed something a little more sinister.

I followed these steps:
db2 connect to idsldap
db2 describe table idsldap.mytrivialattribute

And what I got back was:

EID with column length 4
MYTRIVIALATTRIBUTE with column length 240
RMYTRIVIALATTRIBUTE with column length 240

That didn't look right! Somehow, mytrivialattribute had been created using default parameters and the V3.modifiedschema file had been manually updated at a later date. As such, the database flatly refused to act upon any request to add or modify mytrivialattribute values!

Getting around the problem is simple and can be done in a number of ways. I like the brutal approach though (sketched below):

  • Drop the DB2 table idsldap.mytrivialattribute
  • Restart the LDAP server
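
For reference, the drop is just another couple of DB2 commands in the same vein as the describe above (the idsldap database and table names are, of course, specific to my environment):

db2 connect to idsldap
db2 drop table idsldap.mytrivialattribute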

Describing the table now returns:

EID with column length 4
MYTRIVIALATTRIBUTE with column length 1024
MYTRIVIALATTRIBUTE_T with column length 240
RMYTRIVIALATTRIBUTE_T with column length 240

So what was going on? Well, setting a length of 1024 on the schema definition meant that the LDAP Server wanted to put the full string into the column named MYTRIVIALATTRIBUTE and to put a truncated version of the string into MYTRIVIALATTRIBUTE_T. But the _T column didn't exist so the server couldn't perform the operation.

Dropping the table and allowing the directory server to recreate it properly on startup resolved the issue.

Of course, questions now arise as to why there was a mismatch in the first place, but at least the problem has been diagnosed and rectified.

Monday, February 09, 2015

Giving Away Secrets

I've often been asked why I 'blog' (not that I update this blog anywhere near as often as I would like). I've had people say that I'm "giving away secrets" and that I won't be in demand in the job marketplace if I continually tell other people how to do what I do.

That may be so, but surely that's a good thing, right? I want to be able to learn new things on a daily basis, and if I can get others to do the things that I traditionally do, then that creates the opportunity for me to move on.
Then there is the fact that I'm not getting any younger and retaining information in my head isn't as easy these days as it used to be. It's like creaking limbs and deteriorating eyesight... I find I'm becoming more forgetful (which my wife will more than happily corroborate). As such, writing all these things down is actually as much for my own private use as it is to help others.
And finally, altruism feels good. Now, I don't for one minute think that I'm the most altruistic person I know. Giving away information on IBM Tivoli security software can barely be described as altruistic really. But nonetheless, it feels like I'm doing a good thing. Last week, I received an email from a very kind individual which reminded me that it is a good thing. The email said:

"You are a rock star and a gentleman. Thanks so much for all the helpful material you have put together! I owe you at least a suitcase of beer. Feel free to cash in anytime."
I don't get many emails of that type. Mostly, I get emails asking for free consultancy, so it brightened my day when I saw the above. So, to the sender (Tim), I say thank you for the kind offer of beer but it's really not necessary. Acknowledgement that the material is worthwhile is quite enough for me.

Friday, December 12, 2014

The Who And When Of Account Recertification

We all know that there are rules and laws in place that mean that organisations must demonstrate that they have control over access to their IT systems. We've all heard about the concepts around recertification of access rights. But what is the best way to certify that the access rights in place are still appropriate and who should undertake that certification task?

Who

A starting point, at least, is to identify the potential candidates for certifying access rights. They are, in no particular order:

  • The account owner themselves
  • The account owner's line manager
  • The application owner
  • The service owner
  • The person who defined the role that granted the entitlements


Of course, some of these people may or may not actually exist. In some cases, the person might not be appropriate for undertaking certification tasks. Line Managers these days are often merely the person who you see once or twice a year for appraisals and to be given the good news about your promotion/bonus (or otherwise). Functional Managers or Project Managers may be more appropriate, but can they be identified? Is that relationship with the account owner held anywhere? Unlikely. And definitely unlikely in the context of an identity management platform!

And what about service owners? Maybe the service owner is the person responsible for ensuring the lights stay on for a particular system, and maybe she doesn't care what the 500,000 accounts on that system are doing? Maybe her system is being used by multiple applications and she would prefer the application owners to take responsibility for the accounts?

And what of the account owner themselves? Self-certifying is more than likely going to merely yield a "yes, I still need that account and all those entitlements" response! But is that better than nothing? Maybe! Then again, maybe the account owner is unaware of what the account gives them. When they are told they have an AD account which is a member of the PC-CB-100 group, will they understand what that means? For many users, the answer will be no!

Ultimately, the decision as to who is best placed to perform the certification will come down to a number of factors:

  • Can a person be identified?
  • Can the person recognise the information they will be presented with and correlate that with a day-to-day job function?
  • Is the person empowered to make the certification?


There will be a high chance that only one person fits the bill given these criteria!

When

Some organisations like to perform regular recertifications... it's not unusual to see quarterly returns being required. For most people, however, their job function doesn't change too often and a lot of effort will have gone into gathering information which (for the most part) is unchanged from the previous round of recertifications.

Is there a better way?

Merely lengthening the amount of time between cycles isn't necessarily the answer. But maybe recertification can be more "targeted". Rather than a time-based approach, recertification should probably target those people who have undergone some form of change or other life-cycle event, such as:

  • Change of Job Title
  • Change of Line Manager
  • Change of Organisational Unit
  • Return from long-term leave
  • Lack of use on the account


All of these events can serve as useful recertification triggers rather than waiting for a quarterly review. The benefits are easy to articulate to any CISO - immediate review and greater control of access rights.

Of course, these triggers can be used in conjunction with a time-based approach - but maybe that time-based approach should be based on the last time a recertification was performed for that user/account/entitlement rather than a fixed date in the calendar.