Wednesday, December 02, 2009

The Power Of Twitter & God's IT Usage

When I posted my musings on "Identity & Access Management In The Cloud" the other day, I did something I don't normally do. I advertised the fact that I had posted something via Twitter.

Now, my blog is mainly a way of recording my own thoughts as I travel through space and time and I treat it like an online diary that I can look back on with fondness. I don't really expect anyone to read the stuff. I certainly don't expect anyone to agree with my thoughts. And the notion that people would even take the time to comment on the ramblings never entered my head. But then there was Twitter!

My "tweet" mentioned the words identity, access, management and cloud and seems to have been picked up by quite a large number of people - comparatively speaking! I had three times more visitors in one day than I normally get in a month!

If anything, this turn of events impresses upon me the following:
  • People are interested in the Cloud
  • People are interested in security when it comes to the Cloud
  • If people are interested in what I have to say, I need to be very careful what I say!
That last one might seem strange, but I've always been careful with my online persona - I think. I don't use bad language whether it be within my blog entries, on Twitter, on Facebook or wherever. There's no need for it and we should remember that it's permanent! I'm also a little nervy about writing anything that is controversial. (I guess I just wanna be loved and can't bear the thought of upsetting anyone?)  In other words, my reputation is obviously very important to me.

Facebook & Twitter
There has been a lot of online discussion surrounding the management of identity with regard to online services such as Facebook & Twitter. While enterprises won't be too impressed with this notion, it is quite understandable that the likes of Facebook & Twitter could emerge as identity provider kings! I can't afford to have my Facebook account suspended and I certainly don't want my Twitter feed to suffer any kind of service interruption. As such, behaving appropriately when using these services is important to me. And, of course, because I'm a well behaved boy on these services, there's a good chance that they could be used to assert my identity quite faithfully.

Think about it. Would I be keen to authenticate myself to a dubious website using my reputable Facebook credentials? Reputation management, for me, is just as important as identity management (if not more so).

God
DISCLAIMER: If Pope Benedict and Richard Dawkins were lined up in the school playground pulling together their "gangs", I'd line up behind Dawkins. Sorry Benny.

Someone told me today that they doubted whether they would make it to heaven because they reckoned that God's choice of IT components would be akin to how governments go about purchasing IT components. It got me thinking...

  • Would God choose Oracle, DB2, MS SQL Server or MySQL? Nobody ever got fired for buying IBM, but who could fire God?
  • Would God choose Windows, AIX, Solaris or Linux for his servers?
  • Would God go Mac?
  • Would God deploy IIS or WebSphere?
  • Would God embrace open-source?

And what about Dawkins? Presumably he would prefer to select IT services based on the survival of the fittest model?

I'm having a laugh, of course. But the selection of any IT component can't possibly be determined to be right or wrong based on the component itself. It can be determined to be right or wrong based on how it interacts with the user and other IT components but I can't tell you that Macs are better than PCs. I can't tell you that Apache HTTP Server is better than Sun's offering. I can't tell you that PHP is better than Python which is better than COBOL which is better than C#, etc.

And the point? Well, I was asked yesterday whether I could help a customer select a database vendor and the options were Oracle and IBM. My answer? Technically, I come from the "a DBMS is a DBMS" school of thought. The real questions are:
  • Do you have in-house skills in one of the technologies?
  • Do you have existing relationships with either vendor?
  • What is the cost to you - TCO-wise?


Technically? Maybe I'm past caring. The "religious" questions are so much more important!

NOTE: The answer is DB2. No. Oracle. No. MySQL. Yeah. That's the one. Oh. Maybe not :-)

Monday, November 30, 2009

Identity & Access Management In The Cloud

Last week I was asked to give a presentation at the IBM Tivoli User Group on Identity & Access Management In The Cloud to IBM employees, IBM Business Partners and customers of IBM Tivoli Security products. I soon realised that my first problem was going to be defining The Cloud. Not everyone I spoke to in advance of the presentation knew what The Cloud was!

So What Is The Cloud?
The Cloud seems to be a term bandied about all too readily these days and for many people it merely represents everything that happens on the Internet. Others, however, are a little more strict with their definition:

"For me, cloud computing is a commercial extension of utility computing that enables scalable, elastic, highly available deployment of software applications while minimizing the level of detailed interaction with the underlying technology stack itself."

"Computing on tap - you get what you want literally from a socket in the wall."

"Cloud computing is just a virtual datacenter."

Wikipedia, naturally, has its own definition.
Cloud computing is Internet based development and use of computer technology. In concept, it is a paradigm shift whereby details are abstracted from the users who no longer need knowledge of, expertise in, or control over the technology infrastructure "in the cloud" that supports them.

Of course, there are different levels of computing that a provider in the Cloud can offer. The usage of a particular software application (eg Google Docs) is just one such offering. Another would be akin to a software development platform (think Google App Engine, Microsoft Azure and Salesforce's force.com). Then, of course, there are the raw infrastructure services - servers provisioned "on-tap" for end-user usage (eg Amazon EC2).

We are probably all users of Cloud services if we think about it. A quick look inside my Password Safe vault reveals almost 300 different User ID & Password combinations for services on the net including:
  • Blogger [blogging platforms]
  • Twitter [divulging incoherent thoughts]
  • Facebook [staying in touch]
  • LinkedIn [professional networking]
  • Google Docs [MS Office Alternative]
  • Gmail [eMail]
  • Screenr [screencasting]
  • ChartGo [charting application]

The Enterprise Model
While it is easy to see how personal usage of Cloud applications has grown over recent years, it may come as more of a surprise to learn how the Enterprise is adopting Cloud usage.

According to EDL Consulting, 38% of enterprises will be using a SaaS based eMail service by December 2010. Incisive Media report that 12% of Financial Services firms have already adopted SaaS, mainly in the CRM, ERP & HR fields. And our friends at Gartner reckon that one-third of ALL new software will be delivered via the SaaS model by 2010.

My guess? SaaS is already happening in the enterprise. It's here and it's here to stay.

With any change to the enterprise operating model there will be implications - some real and, just as critical, some perceived.

In the Perceived Risks category, I'd place risks such as loss of control; storing business critical data in the Cloud; reliability of the Cloud provider; longevity of the Cloud provider. Of course, these are only perceived risks. Who is to say that storing business critical data in the Cloud is any less risky than storing it in the enterprise's own data centre? There may be different attack vectors that need to be mitigated, but that doesn't mean the data is any less secure, does it? And who says the enterprise has to lose control?

Real risks, however, would include things like the proliferation of employee identities across multiple providers; compliance to company policies; the new attack vectors (already described); privacy management; the legislative impact of data storage locations; and, of course, user management!

Cloud Standards
As with any new IT delivery methodology, a raft of "standards" seems to appear. This is great as long as there is widespread adoption of the standards and the big suppliers can settle on a specific standard. Thank goodness, then, for the standards bodies. These guys, at least, are attempting to address the standards issue and I am particularly pleased to see CSA's Domain 13 on Identity & Access Management insisting on the use of SAML, WS-Federation and Liberty ID-FF.

Access Control
And on that point, the various Cloud providers should be congratulated on their adoption of security federation. Security Assertion Markup Language (SAML) has been around for over 6 years now and is an excellent way of providing a Single Sign On solution across the enterprise firewall. OpenID, according to Kim Cameron, is now supported by 50,000 sites and 500 million people have an OpenID (even if the majority don't realise it!)
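For a flavour of what the plumbing looks like, here is a minimal sketch (in Python, purely for brevity) of the SP side of SAML's HTTP-Redirect binding: the AuthnRequest is raw-DEFLATE-compressed, base64-encoded and URL-encoded into the SAMLRequest query parameter. The request body and IdP URL below are placeholders, not anything from a real deployment.

```python
import base64
import urllib.parse
import zlib

def to_saml_request_param(authn_request_xml: str) -> str:
    """Encode an AuthnRequest for the SAML HTTP-Redirect binding:
    raw DEFLATE (no zlib header/checksum), base64, then URL-encode."""
    compressor = zlib.compressobj(9, zlib.DEFLATED, -15)  # -15 => raw DEFLATE
    deflated = compressor.compress(authn_request_xml.encode("utf-8")) + compressor.flush()
    return urllib.parse.quote_plus(base64.b64encode(deflated).decode("ascii"))

# Placeholder request body; a real one carries an issuer, ACS URL, etc.
param = to_saml_request_param('<samlp:AuthnRequest ID="_example"/>')
redirect_url = "https://idp.example.com/sso?SAMLRequest=" + param
```

The IdP simply reverses the three steps to recover the XML, which is why the binding works with nothing more exotic than a browser redirect.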

The problem, historically, has been one of identity ownership. All major providers want to be the Identity Provider in the "federation" and Relying Parties were few and far between. Thankfully, there has been a marked shift in this stance over the last 12 months (as Kim Cameron's figures support).

Then there are the "brokers" - those companies designed to make the "federation" process a lot less painful. The idea is that a single authentication to the broker will allow wider access to the SaaS community:


Symplified (http://www.symplified.com/) and Ping Identity (http://www.pingidentity.com/) seem to be the thought leaders in this space and their marketing blurb comes across as comprehensive and impressive. They certainly tick the boxes marked "Speed To Market" and "Usability" but again those perceived risks may be troublesome for the wary enterprise. The "Keys To The Kingdom" issue rears its ugly head once more!

Identity Management
SPML is to identity management as SAML is to access management. Right? Well, almost. Service Provisioning Markup Language (SPML) was first ratified in October 2003 with v2.0 ratified in April 2006. My guess? We need another round of ratification! Let's examine the evidence. Who is currently using it? A Google search returns precious little. Google Apps uses proprietary APIs. Salesforce uses proprietary APIs. Zoho uses proprietary APIs. What is the point of a standard if nobody uses it?
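The irony is that SPML itself is simple enough to produce. The sketch below (Python, for brevity) builds a bare-bones SPML v2 addRequest; the target ID and attribute names are invented for illustration, and a real request would carry rather more than this.

```python
import xml.etree.ElementTree as ET

# The OASIS SPML 2.0 core namespace; "saas-crm" and the attribute
# names below are made up for the example.
SPML_NS = "urn:oasis:names:tc:SPML:2:0"
ET.register_namespace("spml", SPML_NS)

def build_add_request(target_id: str, attrs: dict) -> str:
    """Build a minimal SPML addRequest asking a provider to create
    an account on the given target with the given attributes."""
    req = ET.Element(f"{{{SPML_NS}}}addRequest", {"targetID": target_id})
    data = ET.SubElement(req, f"{{{SPML_NS}}}data")
    for name, value in attrs.items():
        ET.SubElement(data, name).text = value
    return ET.tostring(req, encoding="unicode")

request_xml = build_add_request("saas-crm", {"uid": "jbloggs", "cn": "Joe Bloggs"})
```

Nothing scary there, which makes the proprietary-API-everywhere situation all the more frustrating.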

Compliance & Audit
Apparently, forty times more information will be generated during 2009 than during 2008 AND the "digital universe" will be ten times bigger in 2011 than it was in 2006! Those are staggering figures, aren't they? And the bulk of that data will be quite unstructured - like this blog or my tweets!

The need for auditing the information we put out into the digital universe is greater than ever but there is no standards based approach to Compliance & Audit in the Cloud!

Service Providers are the current custodians of the Compliance & Audit process and will likely remain so for the time being. Actually, the Service Providers are quite good at this as they already have to comply with many different regulations across many different legislative jurisdictions. Typically, however, they present Compliance & Audit dashboards tailored to vertical markets only.

It's understandable, I guess, that for a multi-tenancy service there will be complications separating out relevant data for the enterprise compliance check.

Moving To The Cloud
There are providers out there who claim to be capable of providing an Identity Management as a Service (IDaaS) which sounds great, doesn't it? Take away all that pain of delivering an enterprise robust IdM solution? In practice, however, it works well for enterprises who operate purely in the Cloud. These solutions already understand the provisioning requirements of the big SaaS operators. What they can't do quite as well, though, is the provisioning back into our enterprise systems! It's not enough to assume that an enterprise runs everything from their Active Directory instance, after all. Also, we have to remember that using an IDaaS is akin to giving away the "Keys To The Kingdom". Remember our perceived risks?

An alternative is to move the enterprise IdM solution into the Cloud. Existing installations of IBM Tivoli Identity Manager or Sun Identity Manager or {insert your favourite vendor here} Identity Manager could be moved to the cloud using the IaaS model - Amazon EC2. The investment in existing solutions would be retained with the added benefit of scalability, flexibility and cost-reduction. Is this a model that can be adopted easily? Most certainly, as long as the enterprise in question can get its head around the notion of moving the "Keys To The Kingdom" beyond its firewall.

Conclusion
The next generation of user is already web-aware - SaaS is here to stay - and SSO is finally within our grasp with only a handful of big players dragging their heels when it comes to implementing standards such as SAML v2.0. It was also intriguing to play with Chrome OS last week (albeit an early prototype version). Integrating desktop sign on with the web just tightens things that bit further (in a Google way, of course).

Provisioning (whether it is Just-In-Time or Pre-Populated) is still the pain-point. Nobody seems to be using SPML and proprietary APIs abound. Nailing this is going to be critical for mass adoption of SaaS solutions.

While Provisioning is the current pain-point, however, Governance, Risk & Compliance will be the next big-ticket agenda item. The lack of standards and proliferation of point solutions will surely start to hurt. Here, though, I run out of ideas.... for now. Seems to me that there is an opportunity for a thought leader in this space!

Tuesday, October 06, 2009

Learning Authentication

Securing mission critical applications is vitally important. Everyone can agree on that. Securing access to personal information is also vitally important. I'm sure there'll be no arguments with that one either. And I'm also quite sure that there will be no arguing with the fact that not all applications carry the same risk, and that they can therefore be secured in various manners - ranging from pretty insecure to so secure that, even with all the necessary credentials, it is still hard to get access.

When I was young, I had a ZX Spectrum 48K. When I switched it on (because in those days I had no concept of “booting”), I was presented with a fairly blank screen and a cursor. At this point, I would normally type LOAD “” and insert a cassette into my attached cassette player. A program would load (normally a game, to be fair) and I would enjoy the pleasure of using the program (or playing the game) for quite some time thereafter. No authentication necessary.

These days, our children need to be a lot more careful as they typically use computers in a state of always being connected to the internet. But is it reasonable to expect a young child to enter a User ID and Password when they want to play “Dora The Explorer” on the Cbeebies website? Maybe they should have to authenticate at an early age – after all, it is teaching them good practice, yes?

Well, there's probably no harm in having a child click on a photograph of themselves when it comes to authenticating at the OS level and then getting access to a customised desktop with all their favourite links on it (such as the now infamous Dora) but what about a password?

Would it be fair to ask a child of 10 to construct an eight character password which had to contain alphabetic, numeric and symbol characters? They might be able to do that and remember the password. What about a child of 6? What about a child with learning difficulties? I would suggest that a hard to remember password would be inappropriate – especially if all they were trying to access is the aforementioned Dora in her quest to defeat the naughty Swiper!
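To put that in concrete terms, this is roughly what such an adult-grade policy demands (a sketch in Python, assuming the common at-least-one-of-each-character-class interpretation):

```python
import string

def meets_policy(password: str) -> bool:
    """The adult-grade policy discussed above: at least eight
    characters, with at least one letter, one digit and one symbol."""
    return (len(password) >= 8
            and any(c.isalpha() for c in password)
            and any(c.isdigit() for c in password)
            and any(c in string.punctuation for c in password))
```

Three simultaneous rules plus a length requirement - a lot to ask of someone who may still be learning to find the letters on the keyboard.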

So what would be an appropriate means for this demographic to learn the beauty of authenticating themselves? Retina recognition? Fingerprints? Visual cues? Let's examine...

Biometrics may seem like a fun way of authenticating and it certainly has merit, except that not every PC or laptop comes equipped with the necessary hardware to enable it. Should we all rush out and buy fingerprint readers? I'm thinking not.

Passwords aren't necessarily an option even if we make the passwords very weak indeed. Maybe the children aren't at a stage in their development where they can recognise the characters on the keyboard let alone use the keyboard appropriately.

Visual cues? Consider the scenario whereby the child in question clicks on their own photograph to authenticate and is then presented with four images of animals from which they have to select their favourite. Now, consider the scenario whereby the child is presented with four colours from which they have to select their favourite. All of a sudden, we are actually verifying that there is a good chance that the child is who they say they are. The child is learning the beauty of IT security in a safe environment, with visual cues protecting non-critical services. The authentication process is probably one of the least secure mechanisms I can think of, short of no authentication whatsoever. However, in an environment where security doesn't matter, and for the benefit of educating the young and getting them used to the concepts of identifying themselves and verifying that they are who they say they are, it probably has merit. (And no doubt it has already been done many times before.)
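As a sketch of how the two-step scheme above might hang together (everything here - the names, the enrolled profiles - is invented for illustration):

```python
import secrets

# Hypothetical enrolment data: each child's "secret" is simply their
# declared favourite animal and favourite colour.
PROFILES = {
    "robbie": {"animal": "rabbit", "colour": "blue"},
}

def shuffled_choices(options: list) -> list:
    """Present the candidate images/colours in a random order so the
    correct answer isn't always in the same position on screen."""
    shuffled = list(options)
    secrets.SystemRandom().shuffle(shuffled)
    return shuffled

def authenticate(child: str, animal_pick: str, colour_pick: str) -> bool:
    """Step 1 identified the child (they clicked their photo); this
    step verifies them with two weak visual cues. Deliberately low
    security - fine for Dora, useless for a bank."""
    profile = PROFILES.get(child)
    return bool(profile
                and animal_pick == profile["animal"]
                and colour_pick == profile["colour"])
```

Two independent cues with four options each gives a 1-in-16 chance of guessing - hopeless as real security, but the identify-then-verify shape is exactly the one they will meet later in life.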

I spend my life working with IBM Tivoli security products, focussing on Tivoli Access Manager and Tivoli Identity Manager. I haven't come across an EAI for such an authentication mechanism but would imagine it would be easy to implement and could be very useful within learning environments. Maybe I'll write one in my spare time!

So is there a drawback? Of course there is. And it is the same drawback that exists for all user credentials: setup, user registration, the provisioning process. That issue seems (to me, at least) to rest with the educators, who can talk the child through the process and explain the reasons behind it.

Thursday, October 01, 2009

Vista v Ubuntu

I had enough of Vista recently. Watching that little blue circle circling and circling and circling. And all the while the hard disk light would flash and flash again and flash once more. But nothing seemed to be happening.

I felt like exacting violence on my laptop and then finally decided that it was time for my workhorse laptop to get the Ubuntu treatment. After all, I'd done it on other machines, so why not the machine I work with almost all the time?

The long and short of it is that Ubuntu and Vista are living happily on my ThinkPad and I can boot into either. They both have almost an identical list of applications installed - Apache Directory Studio, MySQL Workbench, Tweetdeck, Filezilla, Password Safe, Firefox, Thunderbird, Picasa, Google Earth, Skype, Open Proj, VMWare Player. I have Open Office rather than Microsoft Office and Kivio rather than Visio; GIMP rather than Photoshop and Pidgin in place of Live Messenger.

Today I was sent a spreadsheet that wouldn't play ball in Open Office so I booted into Vista. What an eye-opener - I hadn't done that in a while and I was so annoyed that I thought I would do it again and run time trials!

Boot-time to logon prompt
Vista: 30 seconds
Ubuntu: 20 seconds

Logon, launch Excel/Openoffice and open spreadsheet
Vista: 53 seconds
Ubuntu: 29 seconds

Shutdown
Vista: 81 seconds
Ubuntu: 9 seconds

Those times are dramatic (especially the shutdown time). But it is even worse when multi-tasking: launching just a couple of applications within Vista renders it almost impossible to use. I ran CCleaner just 2 weeks ago because the slowness of the machine was so bad. CCleaner made a massive improvement, but still not enough to stop the push towards Linux.

I am looking forward to Windows 7, to be fair. But it would have to be absolutely amazing to convince me to ditch my sleek Ubuntu setup.

Wednesday, September 23, 2009

SSO Still Hard After All These Years

Web Single Sign On isn't a new technology - it has been around for many years. You would think that the technology community would have this particular challenge "in the bag" yet the evidence of a frustrating day at work today would suggest that this may not be the case.

Authenticating once and accessing multiple web based applications without further authentication challenges is something that most people should be used to by now. Authenticating to Google allows you to access their Mail, Calendar, Docs and Blogger applications without the annoyance of constantly inputting your User ID and Password, right? Similarly, the big boys of the services world have provided us with Siteminder, Tivoli Access Manager for e-Business, Sun Access Manager and various others. In short - techies know what they are doing when it comes to minimising authentication challenges.

WRONG!

It seems that web based applications are still being developed in a manner which overly complicates the SSO capability. Worse still, this is not limited to the part-time developer knocking out PHP based apps in his spare time. The big services companies are not blameless here!

Today, I had to provide a mechanism whereby users could authenticate using a WebSEAL instance (part of the Tivoli Access Manager for e-Business suite) and then SSO their way to the Microsoft Live service for access to Mail and Calendar applications.

An IBM product fronting a Microsoft product? Two big companies committed to the notion of SSO and federation?

Microsoft make the Windows LiveID SSO Kit documentation available online which is great, but the SSO Kit itself is only available upon request via the Microsoft Connect website. That was a bad start but was resolved soon enough by pulling some strings. The documentation details three integration scenarios, two of which assume that the authentication provider will be an IIS based application. As you will have guessed from the requirement (above), IIS plays no part in the solution which leaves us with the third scenario.

The third scenario requires the authentication provider to make a SOAP call to the Windows Live Service in order to retrieve a Short Lived Token which is then passed to the Windows Live Login service.

No big deal, right?

WRONG!

Where is the SOAP service hosted? Where is the WSDL? What parameters are passed to the SOAP service? These things aren't documented! It's as if Microsoft have assumed that nobody in their right mind would try to do this and that the IIS scenarios are the way to go!

The real answer to these questions, of course, is that the information is available - if you are prepared to hunt for it. Hidden in the document is a sentence which states that the implementer should refer to the partner4.xml document for the SOAP service URL. And partner4.xml is where? Well, it's not in the kit! Again, you have to go hunting online for it (and it isn't called partner4.xml either).

To be fair, the documentation does say that if you want to adopt this scenario, you should deconstruct the C# source files provided for the IIS scenarios and write your own routines. The C# source files have limited comments (as you would expect) and there are a number of them. Trawling through source code is now required (and I'm not a C# expert).

While I'm sure that this will eventually work, it seems to me to be too difficult to even gather the information required for the build never mind performing the build. The information is available, but it is certainly not intuitive. So today was an expensive day spent hunting for information and very little progress was made. Maybe tomorrow will be more productive!
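For what it's worth, once the endpoint and message shape are finally known, the call itself is mundane. The sketch below (Python rather than C#, purely for brevity) shows only the general shape: the operation name, namespace and parameters are placeholders, not the real Windows Live contract, which has to be dug out of the kit.

```python
def build_slt_request(site_id: str, user_token: str) -> str:
    """Wrap a Short Lived Token request in a SOAP 1.1 envelope, ready
    to POST with Content-Type: text/xml. Element names and the
    namespace are placeholders, not the real service contract."""
    return f"""<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetShortLivedToken xmlns="urn:example:placeholder">
      <siteId>{site_id}</siteId>
      <userToken>{user_token}</userToken>
    </GetShortLivedToken>
  </soap:Body>
</soap:Envelope>"""

envelope = build_slt_request("12345", "user-token-here")
```

Ten minutes of code; a full day of finding out what to put in it.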

For info, the following C# source files are the interesting ones:
  • LIVE_SLT.cs
  • CredentialServiceAPISOAPServer.cs

Saturday, July 18, 2009

No Excuse For Ignorance

We should feel privileged to be alive but feel disappointed that we are going to miss out on so much.

Admittedly, some people's lives are very tough indeed and for many people around the world, hardship, suffering, poverty and hunger dominate.

But I am privileged. I live in the UK with a reasonable job and I have access to things that my parents couldn't have dreamed of. I live on the information super-highway; I have access to knowledge at my finger-tips which means I no longer need to retain the information in my own head. I just need to know where to look for the information.

As an example... I have been watching the latest Virgin Mobile advertisement on television in awe. I say watching, but I really mean listening. The accompanying song is one of the most beautiful sounds I think I've ever heard. I "need" to hear it again. And by the power that I have at my finger tips, I can search for "Virgin Mobile train advertisement song" and the first hit I get back from Google tells me that Mazzy Star performed the song which is called "Into Dust". I fired up Spotify and searched for Mazzy Star and within ~2 minutes of the advert being aired on television, I'm listening to the track.

Just a handful of years ago, the track would've been lost. I would never have found it. I went from having no knowledge of Mazzy Star to elevating "Into Dust" into my Top 10 songs of all time within minutes. The internet has allowed me to no longer feel ignorant.

All of this leads me to my next point. There is no excuse for ignorance anymore. All the information that the lay-person could ever hope to acquire is available. When someone asks me a "how do I" type of question, I'm more inclined to ask them why don't they already know the answer - especially if they are asking me the question over some form of instant messaging tool. Surely asking the question of Google would've been just as easy as asking me the question?

However, I guess the fact that people aren't capable of finding the information they are seeking is the thing that keeps me (and other IT consultants) employed.

BTW: I really do recommend "Into Dust" by Mazzy Star. It's a joy.

Wednesday, July 01, 2009

ADSI Guru

So the past few days were spent struggling to write a binary attribute into my Active Directory instance. Java isn't too clever when it comes to binary objects yet I seemed to be capable of generating a perfect binary object which I could write into ANY other LDAP compliant repository. Of course, Active Directory is merely LDAPpy - as I christened it yesterday.

Most of my efforts were probably in vain as the IBM Tivoli Identity Manager Active Directory Adapter does not support binary objects. Generating such an object and assigning it to a person record within ITIM would have been futile, as would trying to generate the object on the fly during workflow.

The fact still remains that I absolutely must get this binary attribute into Active Directory as part of the provisioning process. And to that end, I wrote my first complete ADSI script today. I've spent years working on unix boxes and working with "real" LDAPs. To be scripting in VBScript and attempting to update AD was rather alien. I learned a thing or two along the way. VBScript desperately needs to know the precise size of your arrays, for example. Java, as we know, is fairly tolerant of lazy coding. VBScript desperately needs object types to be precisely as it expects, whereas Java is quite tolerant when it comes to determining the difference between 1, "1" and "one"!

Design
I decided that I could write an ADSI script that would commit these binary objects to my accounts after they had been created with the AD Adapter. But I don't merely want to call this process as part of workflow. I've decided to take it a step further and create a separate service and adapter which will perform this function: ITIM calls ITDI to write these attributes (which are based on attributes assigned to person objects as strings anyway), and ITDI calls a VBScript to commit the write.

Args
My ADSI script takes command line arguments, of course. Things like the bind DN & password; the target AD instance; the target user; the raw data to be converted into binary. I'm pleased with the args processing. Not quite the way I would do it in shell scripting, but easy enough:

Dim args
Dim sBindUID

' Named arguments arrive as /name:value pairs on the command line
Set args = WScript.Arguments.Named
sBindUID = args.Item("bindUID")


I can now call the script as such:

cscript myscript.vbs /bindUID:Administrator

Binding
Next, I bound to the AD instance using scripting methods I found by Googling:

Dim oDS
Dim oAuth
Dim oConn

' Bind to AD first (&H0200 is ADS_SERVER_BIND - the path names a
' specific server), then open an ADO connection for searching via
' the ADSI OLE DB provider
Set oDS = GetObject("LDAP:")
Set oAuth = oDS.OpenDSObject(sServer, sBindDN, sPassword, &H0200)
Set oConn = CreateObject("ADODB.Connection")
oConn.Provider = "ADsDSOObject"
oConn.Open "Active Directory Provider", sBindDN, sPassword


And then attempted to find the target for my update:

sSearchObject = "<" & sServer & ">;(" & sTarget & ");name,ADsPath;subtree"
set oRS = oConn.Execute(sSearchObject)
Set oUser = GetObject(oRS.Fields(1).Value)
oUser.GetInfo


Updating
Then came the tricky bit. I had a multi-valued attribute which required each value to be converted into a binary stream. I shan't bore you with the binary conversion as it is convoluted in the extreme. However, the multi-valued issue required use of the PutEx method:

' ADSI constants aren't predefined in standalone scripts
Const ADS_PROPERTY_UPDATE = 2
oUser.PutEx ADS_PROPERTY_UPDATE, "mybinaryattribute", aEntityGUIDs

And, of course, my aEntityGUIDs object needs to be an array dimensioned to exactly the number of values being written. Time for some Redim. Redim, of course, is not something I've ever had to do in Java! A goodly two hours were spent pondering my failure to commit my values to AD before it dawned on me that the size of the array may have an impact.

I tarted up the code to add a logging mechanism. ERROR, FATAL, WARN, INFO and DEBUG messages are written to a log file by calling a little function that includes this code:

Stuff = dateStamp & ": " & loggedString
Set myFSO = CreateObject("Scripting.FileSystemObject")
' 8 = ForAppending; True = create the file if it doesn't exist
Set WriteStuff = myFSO.OpenTextFile("myvbs.log", 8, True)
WriteStuff.WriteLine(Stuff)
WriteStuff.Close

I'm not 100% sure, but I'm fairly convinced that others who had got this far would be fit to declare themselves VBScript/ADSI gurus. I shan't do likewise, but I now have a better understanding of the pitfalls of scripting AD access with VBScript.

NOTES
I haven't shown all the Dim statements for the objects defined in the code above - I'm sure you can work that out for yourself.

Tuesday, June 30, 2009

Active Directory Hell

I've spent a number of years playing at knowing a thing or two about LDAP but I've managed to avoid spending any worthwhile time playing with Active Directory.

Now, the observant amongst you will notice that I succeeded in writing a sentence that included LDAP and Active Directory. Active Directory is merely LDAPpy, for want of a better word (though now I've written it, I'm quite pleased with the way it looks and sounds).

Today, I had to work out how to write an attribute into an Active Directory instance. Trivial. At least, I thought it would be trivial. The attribute is a schema extension and is used to store a binary representation of a GUID. GUIDs are things I can handle... a lengthy string of HEXish characters! What could be easier?

Well, things are never straightforward. I have some VBScript which details how the GUID should be "manipulated" by taking the two characters starting at position 7, then the two characters starting at position 5, etc., etc. The resulting string of two character hex codes is still quite lengthy but quite jumbled from the original GUID. But here is where the fun begins. This attribute is of type java.lang.String (according to the schema) but is actually a binary object! The VBScript opens an ADODB.Stream object (of type text) into which it places the ChrB representations of the HEX codes. It then strips off the UTF-8 marker and rereads the stream as a binary object before putting it into the directory.
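If I've read that manipulation correctly, it is the standard mixed-endian GUID byte layout: the first three dash-separated groups are byte-swapped, the last two left alone. A sketch in Python (the GUID value is just an example; Python's uuid module happens to expose the same ordering as bytes_le, which makes for a handy cross-check):

```python
import uuid

def guid_to_ad_bytes(guid_str: str) -> bytes:
    """Reorder a textual GUID into the mixed-endian binary layout:
    byte-swap the 4-byte and two 2-byte leading groups, keep the
    trailing 8 bytes in string order."""
    b = uuid.UUID(guid_str).bytes
    return b[3::-1] + b[5:3:-1] + b[7:5:-1] + b[8:]

guid = "01020304-0506-0708-090a-0b0c0d0e0f10"
ad_bytes = guid_to_ad_bytes(guid)
```

Useful as a sanity check: if the bytes coming back out of the directory don't match this ordering for the source GUID, something upstream is mangling them.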

Why? I have no idea other than someone said their application performed better if it was done that way!

Now... how do you create a stream object of type text, strip off the UTF-8 marker, then commit the resulting stream as a binary object from within Java?

I struggled, I can tell you. And my dear old friend Google wasn't being much help. In exasperation, I decided to test that I wasn't banging my head off a brick wall. My AD instance already had examples of accounts with these particular attributes populated by the VBScript routine. I was able to extract this data using IBM Tivoli Directory Integrator and inspect each byte. I was then able to determine exactly how the binary value was being created and recreated the object in code.

However, committing this object to AD failed with some kind of attribute constraint. I was mystified. After much scratching of the head, I decided to create an Assembly Line with the following 2 connectors:
  1. Lookup AD for a particular user entry
  2. Update the same user entry in AD with the attribute values retrieved in step 1

In other words, try to update the AD object with the same values that it already has.

It failed. Attribute Constraint! So, by merely reading some data and writing it directly back to the source, I managed to generate an attribute constraint error.

I may give up... I'm not happy with the way AD behaves and I'm certainly unhappy with the VBScript. I suspect the fact that the attribute is defined as a String but is storing a binary object is the root of all the evil. So today has ended on a low... no resolution as yet to a problem which I suspect may not be solved by conventional methods.

Thursday, June 04, 2009

DB2 Stored Procedure Hell

I'm a systems integrator and have no desire to understand everything at the lowest level of detail possible. I understand things conceptually and know where to find manuals to help me complete tasks but I have not got the time to understand every technology going.

Today, I had my first need to create a DB2 Stored Procedure and use it. In 20 years, I've never had to do that - which seems strange even to me. I've always known about them and understood their power but it has always been "someone else" who has created them and consumed them in their applications. Remember... I integrate systems?

So - how hard can it be? Well, my good friend Google helped when I asked for information on "creating a stored procedure". A tonne of links - most of which assumed that I would be using some heavyweight IDE. I'm a command-line kind of guy though!

It was at that point that I thought that even though this is a simple task and I'm fairly convinced I'll be able to do this within minutes rather than hours, I still figured I should maybe record the process I went through. The reason? I'm fed up being sent down blind alleys by trash on the internet.

I create a myfirstproc.sql file with the following contents:

CREATE PROCEDURE MYFIRSTPROC
(IN username CHAR(99), IN disclaimer INT)
LANGUAGE SQL
BEGIN
IF disclaimer = 1 THEN
INSERT INTO USER (USERID, DISCLAIMER) VALUES (username, 'Y');
ELSE
INSERT INTO USER (USERID, DISCLAIMER) VALUES (username, 'N');
END IF;
END @
Straightforward, eh?

I applied it (eventually) using the db2 -td@ -vf myfirstproc.sql command.

Next, I wanted to use IBM Tivoli Directory Integrator to invoke the stored procedure. I created a passive JDBC connector called manageDB and created a script as such:

var con = manageDB.connector.connection;
command = "{call MYFIRSTPROC(?,?)}";
try {
cstmt = con.prepareCall(command);
cstmt.setString(1, "H12345678");
cstmt.setString(2, 1);
cstmt.execute();
cstmt.close();
}
catch (e) {
task.logmsg(e);
}
Now, that all looks quite neat but there is a bug in the code. When I ran the code I received a "Method Not Yet Supported" message. Now, what can you imagine would cause such a message? My first thought was that maybe I was using an out-of-date driver so I decided to get the latest one. This in itself is a major undertaking as anyone who has tried to find anything on the IBM website will testify! Certainly searching for db2jcc.jar (as I did) did not take me anywhere from which I could download it!

The new JAR file was put in place and the routine executed again. "Method Not Yet Supported" again. Now - that's quite frustrating! Searching for "Method Not Yet Supported" yields information on how the method I'm calling isn't supported. In other words, a useless message!

Trawling through the code, it wasn't immediately obvious what the issue could've been. Not obvious because the code looked syntactically correct (and in any case, if there was a coding error, surely I would've been presented with an appropriate error message).

Well, the eagle eyed amongst you will notice that the original code above was attempting to set parameter 2 to a value of 1 as a string. Converting this to a setInt statement rectified the problem! The resulting code:

var con = manageDB.connector.connection;
command = "{call MYFIRSTPROC(?,?)}";
try {
cstmt = con.prepareCall(command);
cstmt.setString(1, "H12345678");
cstmt.setInt(2, 1);
cstmt.execute();
cstmt.close();
}
catch (e) {
task.logmsg(e);
}
So, why share this with you? Well... I'm not, I guess. I've written this more as a reminder to myself. The following important lessons have been learned:
  • Don't trust the information on the internet - it's typically out-of-date (just as this article will be as soon as I hit the publish button?)
  • Don't trust error messages - they've been constructed by developers and could be meaningless
  • Good luck with searching that IBM website
  • Everything is possible with a smidgin' of perseverance

Tuesday, May 19, 2009

The Problem With The Web....

... is currency!

I have a Windows 2003 Server VMWare image within which I build demos and test environments. (I do have a SUSE Linux v10 demo environment for real stuff but sometimes customers want to see applications running happily inside Windows). The latest installation I attempted chucked a hissy-fit when it came to calculating disk space. It wanted 10GB and I only had 7GB on my C:!

Resolution 1
I thought, no problem... I'll add a new virtual disk, give it 20GB and call it my D: drive. After just a few moments, my disk was available and I restarted the installation. But guess what... it refuses to install anywhere other than C:

Resolution 2
Disappointed, I figured I'll just resize my primary partition. And the fun began...

VMWare Workstation 6.5 comes with vmware-vdiskmanager.exe which allowed me to resize the virtual disk. (For information, I took it from 15GB up to 30GB). But, of course, that doesn't help unless I resize the partition as well.

Time to boot Windows 2003, bring up a command prompt and type diskpart in order to resize. But diskpart will refuse to resize a bootable partition! Doh!

That's OK though - I have a copy of Easeus Partition Manager! I tried to install it and it said "You've got Windows 2003 Server! Please purchase the Server Edition of EPM".

Hmm.... it seems my EPM version is for non-server Windows installations. Off to the Easeus website then, where I found that the server edition will set me back $150!

Google Time
I'm not paying $150 for a one-off resize! Someone must've done this before so let's give Google a bash.

It seems that people have had this problem before and I study their techniques for resolving the problem. I find 3 possible options:

Option 1 - Knoppix with QTParted
I download a 700MB Knoppix Live CD (as instructed) and boot my VM using the ISO image. But... the latest version of Knoppix doesn't ship with qtparted any more. The instructions I found via Google are, sadly, out-of-date.

Option 2 - Knoppix with ntfsresize
Fortunately, my version of Knoppix does have ntfsresize so I give it a go. It says that it will resize my C: but only if the partition has been resized first, so I have to use fdisk. I launch fdisk and tell it to increase the number of cylinders to be used on that partition and it point-blank refuses. More googling tells me to delete the partition and recreate it - but that merely destroys all my data (I know - I did it - but only after I'd backed up my partition - phew!)

Option 3 - vmware converter
Next, I follow the procedure for VMware Converter. I say follow... I did download the converter (which took a while), installed it (which took longer) and then ran it. The screens didn't offer up the options that were described by my Google search! It seems that my version of Converter is a lot more recent than the one described in the web article and the functionality I'm looking for no longer exists.

Option 4 - GParted
My final option was GParted - a live bootable ISO image that claims to do the job. I searched for it, found it on Sourceforge, hit the download button and..... NOTHING. Doesn't exist or at least it's offline for the time-being.

Time to give up and go to bed

Next morning, though, I tried to retrieve GParted again and thankfully it was now available. Downloaded it, booted, clicked a couple of buttons and my partition was resized perfectly.

The Moral
This was a very simple procedure and it did not require too much effort to achieve... in the end. The problem is that this is just the latest example of the web sending me off on tangents because the information that I found is out-of-date and no longer relevant. Unfortunately, that information has been around long enough to find itself high up in the search results, yet the up-to-date, relevant stuff was actually tricky to find.

Don't get me wrong, I'm a big fan of Google but as the web clogs up with more irrelevant information, I'm finding it more and more difficult to get the information that I need.

It would be great if the custodians of information would clean-up their act. Maybe a "Best Before" date ;-)

Wednesday, May 13, 2009

Self Promotion

I had the pleasure of attending a wonderful wedding at The Manoir last weekend.

I was fortunate to be asked to be "Best Man" at the event. Of course, I had to give a speech which was quite nerve-racking but it went down a storm.

Speaking at such an event is a great way of getting introduced to people. Everyone came to me after I had spoken to congratulate me and tell me how much they enjoyed what I had to say. Would they have been so eager to speak to me if I had been a mere mortal at the event?

So lots of strangers spoke to me and the usual conversation ensued: "How do you do?"; "Nice weather, isn't it?"; "What do you do for a living?".

Normal run of the mill stuff you might think and you'd be right. However, I did get some interesting questions:
  • How do you get business and how do you promote yourself?
  • How do you keep on top of your reputation?
  • Would you be my friend on Facebook?

I guess the answers to these questions differ depending on the business that you are in, but for me, getting business and self-promotion is all about the following:
  • Reputational enhancement through constant delivery
  • Ensuring the right people are made aware of the delivery success
  • Promotion through social networking (LinkedIn, Twitter, Website, Blog, etc.) and being careful what I say on each medium
  • Standing up in front of people and speaking - getting noticed

Indeed, giving a Best Man's speech, while important for the recently married couple in question, is another means of self-promotion I guess - unless you make a mess of it!

So how do I keep on top of my reputation? Time... it might just take a few minutes each day to post to Twitter; maybe 15 minutes to write a blog entry (like this?); and just a few moments each month to check that my website is still relevant.

It doesn't take much and there really is no excuse for people allowing their reputation to waver!

As for being a friend on Facebook? Again, it might be reputationally damaging for me to be friends with certain people - I don't do too many randoms! Gain my trust first please.

Friday, May 01, 2009

Identity Mapping

I got to thinking the other day about my online "presence". I do the Facebook thing, the Twitter thing, the LinkedIn thing and I have a .tel domain now!

Some of these "things" talk to each other. Twitter feeds Facebook and Plaxo, for example. I thought it would be quite cool to try to map these services to show the linkages (and it was more difficult than I thought). I haven't included Flickr, Trip IT, Friends Re-United and probably a whole host of other services that I use but here is the current map:


I pulled together this map not by merely recalling the services that I use (although I could've done that quite easily with this particular map) but rather by taking a look at my Password Safe database and going through the various accounts I have. My Password Safe now has 257 items in it and I know there are some accounts missing!

257 account details. Whatever way you cut it, that's a lot of accounts. Thankfully, I only know the password to a couple of services (and have never known, and probably will never know my Facebook password, for example). I rely almost entirely on Password Safe to access my online accounts.

And here's the issue... So paranoid am I about losing my Password Safe database that I have it copied from my desktop PC to my Mac Mini (on a nightly backup). It is synchronised with my 8GB Freecom USB disk. It is then synchronised with my two laptops (one personal and one work) and it is copied to a secure location on a server I have in a data centre.
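With all that copying around, a quick sanity check that the replicas really are identical is to compare a digest of each copy. A minimal Python sketch (the file paths would obviously be your own):

```python
import hashlib

def file_digest(path):
    """SHA-256 of a file, read in chunks so a large database is fine."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def replicas_in_sync(paths):
    """True if every copy of the database file is byte-for-byte identical."""
    return len({file_digest(p) for p in paths}) == 1
```

Run it over the desktop, Mac Mini, USB and laptop copies and any stale replica shows up immediately.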

So, my precious information is stored in a number of locations. That's a few opportunities for the baddies to try to get it from me. What are the options, though?

Well, of the 257 accounts that I have, hardly any of them support some kind of federated security model. It is true that I can log in to some services using my Google ID or my Yahoo ID, but not many. OpenID? Again, hardly any of my service providers support this. In fact, it seems that I have THREE Amazon accounts - one for purchasing; one for Affiliation and one for Amazon Advantage! (I may have an Amazon developer account for their API, but can't remember!)

So managing my identity is a fairly manual process just now. Not the case, necessarily, for big corporations who can throw a Sun, Oracle or IBM Identity Management solution at their various data repositories. Could these tools be used "in the cloud" for web users? Would I want to pay for that? Could I host IBM Tivoli Identity Manager on a server on the net and build some connectors to the major websites (such as Facebook, Twitter, Google & Yahoo) for managing accounts? Could I host a reverse-proxy on this internet-facing server which would provide me with a web-based single sign-on solution to these services?

Technically? Everything is possible. Is it likely? Not a chance... well... not yet. Too many companies are trying to gear themselves towards offering this terrific opportunity to be the master of identity related data but you've got to question why any organisation would want to do it. For your benefit? Not likely.

Maybe I'll build an IdM service just for me :-)

Sunday, February 08, 2009

Socialising

So I have my Facebook account (which I actually like using); a Bebo account (which I never look near); a Twitter account (which I've only just started using in order to find out what the fuss was about); a LinkedIn account (which is useful for my career); a Blogger account (thus this posting); a Plaxo account (in an attempt to synchronise my contact details across my various client machines); a Flickr account (which I rarely use and may be tempted to ditch in favour of Picasa); a Friends Re-United account (which doesn't seem to be a school friend hook-up tool anymore).

What I have created, however, is a social network which is difficult to maintain! I want to be able to find out what my friends are doing and tell them what I am doing. Facebook seems to fit the bill in that regard, though I guess Twitter would probably achieve the same thing.

I'm not into the Facebook applications to be honest. "What's Your Real Age" and "What Lord Of The Rings Character Are You" may seem like fun, but they are fairly trivial and quite frankly a waste of time. So I find myself updating my status and writing on friends' walls (though mainly updating my status).

I've managed to get Twitter to update my Facebook status automatically which is great and I've installed a Twitter addon to Firefox which allows me to update my status through the address bar.

All fine and dandy but...

When I signed up to Twitter, I managed to get a "follower" immediately. A pretty young girl from somewhere I've never been to. Why was she interested in me? My first posting said something like "This is my first posting" so it can't be for the intellectual stimulation I provide. Ulterior motives, for sure.

I get friend requests through Bebo from people I've never met. Friend requests from people who were in my year at school (though I never spoke to them then and can't think why they feel the need to speak to me now).

Do social networking sites actually have a negative impact on our sociability? I'm guessing if I write on someone's wall, then I can feel that I've "connected" with them to an extent which removes any obligation to actually go and visit!

I'm also guessing that my "school chums" want to connect with me in order to get their friends number as high as possible? (For the record, I have about 20 friends on Facebook which I think is a lot bearing in mind that I probably only have 2 or 3 friends and they don't even use Facebook!)

The really concerning thing for me, however, is that these applications communicate with each other and share my user details. If one of these applications gets compromised, I may be in bother! A Google search of my name yields some very disturbing results. Some results are links to pages I have created either on my personal website, this blog or LinkedIn. Some, however, have been created automatically by sites that have skimmed information from my primary sites without my permission. Even though I only have a couple of friends and just a handful of acquaintances on Facebook, it seems that I am a fairly popular guy net-wise.

Herein lies the problem. I want to use these tools to connect with a select few people and while these tools manage to do that, I can't help but worry that too many of my personal details are now public knowledge.

Right... I'm off to tell Twitter that I've blogged some old nonsense in the hope that Twitter will update Facebook with a link to this post!

Monday, January 12, 2009

Joined Up Contacts

So my new Blackberry turned up today and I can't sync my contacts and calendar entries from Thunderbird to that bad boy.

Disappointing... but time for a bit of googling to see how to get around the problem.

And the answer? I'm still disappointed.

I have managed to sort my email accounts in Thunderbird with the help of Folderpanes and I now have access to Hotmail from within Thunderbird using the webmail add-on. I've even sync'd my Google contacts with Plaxo (although that only works one way.... WHY!!!!).

I've downloaded Sunbird (though this gives me no benefit above and beyond the Lightning add-on for Thunderbird).

So... I have all this software, but no way to sync to the Blackberry unless I get radical, as follows:
  • Use Outlook (which is too expensive)
  • Use Outlook Express (which is too nasty)
  • Use Live Mail (which I just don't like)
  • Buy some additional software
I don't mind paying for software, but I've just spent a load on my Blackberry with the idea that I would be able to sync up simple things like contact lists and calendar entries. To find that I can't do it out of the box is.... disappointing.

Maybe I should write my own routine!

How To Search

So... I have a home network and a Mac Mini which is being used as a Media Server. It will stream video around my home network and I have a Pinnacle Show Center 200 which is a hardware based media player.

However... my Show Center isn't behaving itself. The picture is a bit 'slanty' and the sound is a bit tinny. My understanding is that this is the first sign of a power supply problem! At least, that's according to the horror stories online.

Replacement media players are going to cost anywhere between £100 and £200 and I can't justify that kind of expenditure just now. It would be simpler for me to reuse one of the old laptops with a software based media player installed and stream the output to a television.

What could be simpler?

Well, it is simple - once you have found the necessary software. A bit of googling and the job's a good one. Except googling for "upnp media client software" wasn't that helpful.

The On2Share plugin for WinAMP sounded promising so I downloaded and installed WinAMP. I downloaded and installed the On2Share plugin and the result? It doesn't stream videos at all. In fact, it downloaded the entire video before playing. Rubbish!

Kinsky and 4U2Stream appeared in the search results. I looked at their websites and the terminology used was alien to me. I downloaded them, installed them and.... rubbish! They aren't clients at all - they are remote control clients for UPNP servers.

VideoLAN Network Client also appeared in the search results and after it was downloaded and installed, I found that, once again, the software didn't work.

At this stage, it seemed that too much time and effort had been spent trying to solve what I figured would be an easy problem. Windows Media Center can't play streaming video from a UPNP Media Server; Cyberlink's Media Center wouldn't bother to play streaming video either.

I promised myself just one final attempt and downloaded the XBMC Media Center - the software behind the XBOX Media Center. A quick download, an easy installation and guess what? IT WORKED! It actually worked and it worked brilliantly.

Why oh why was it so difficult to find it? I like to think that I'm skilled in the use of search engines but I'm finding that it is becoming more difficult over time. I recently had a requirement to relocate a DB2 database from one drive to another and my db2relocatedb command was failing. Could I find an answer to my problem? Not a chance.

The problem is that there is just too much information on the web and a tremendous amount of it is just garbage. And this tendency to end up at a site which is merely a site devoted to hosting paid-for banner advertisements is really winding me up.

What am I to do? Maybe the answer lies in storing the decent information when I find it and getting some trusted companions to also store decent information. I'm not talking DIGG here though. I'm not talking blogs. I merely want a repository of information that was difficult to find but was incredibly useful.

Anyone got an answer to that problem?