Sunday, May 30, 2010

The Computer That Never Dies

I've been reminded of the halcyon days a number of times in recent weeks. Meeting old colleagues and working on the integration of middleware components with legacy systems have taken me back to my days as a mainframe developer.

My mainframe of choice was a Bull DPS8000/DPS9000 running the GCOS8 operating system. Time-Sharing, JCL and EBCDIC were all terms I grew up with and I really enjoyed my time working on the platform. The code I wrote 20 years ago still exists, which is reassuring and precisely the point of this article. It was designed and written in a manner that meant it would stand the test of time.

Indeed, the code for that particular environment was so good that it required no modification in the run-up to the Year 2000 anti-climax.

In my early days, the mainframe took up almost an entire floor of an office block in the centre of Belfast. To be honest, this was at a time when my PC housed an enormous 8088 processor and 12" long expansion cards, and was painful to move about because of its sheer bulk. After a while, the 8088 was replaced with the 80286, then the 80386, then the 80486, then Pentium processors and so on. The mainframe underwent a similar evolution, getting quicker and smaller to the point where it no longer looked like a mainframe and took up little more space than the other servers littered around the data centre.

Of course, during that period, most commentators expected the mainframe to finally die and be replaced by the modern array of UNIX based servers from HP, IBM and Sun. Why spend all that money on a mainframe when a couple of RS6000s could do the job just as well? Except they couldn't. The middleware boxes did middleware very well; the mainframe did mainframe very well.

But the mainframe hasn't died. It's still alive and well in most of the big data crunching data centres and some would say that it has a new lease of life. Why? Because mainframe can now do middleware, that's why. The mainframe is no longer limited to long-running COBOL based batch processes invoked via JCL put together by Sys-Progs who look like dinosaurs. Mainframes can happily host our web servers, application servers and message buses as well as our databases and batch routines.

But there is a problem! Mainframes aren't sexy and aren't attracting young talent into the arena. Sys-Progs are a dying breed. Guys who know their mainframes are in short supply. But at least we have tools to help the mainframe newbies administer their system.

Which brings me on to an exciting new initiative I've been privileged to be involved with. In conjunction with System Z and RACF guru, Alan Harrison (of Practically Secure fame), I've been working on an extension to the IBM Tivoli Identity Manager interface that provides slick integration with zSecure Audit, delivering a complete web-based RACF management solution which plugs into an enterprise-wide provisioning system. The result is now available from Pirean (where both Alan and I currently ply our trade).

Mainframes are suddenly sexy again and it almost feels like a home-coming for me.

Friday, May 14, 2010

Apply That Patch....

What should you do when system performance starts to deteriorate?

More specifically, if password changes are taking upwards of 12 minutes to complete, what should you do?

Even more specifically, what if password changes invoked from IBM Tivoli Identity Manager through the TAM Combo Adapter to Tivoli Access Manager are taking upwards of 12 minutes to complete yet manually changing the passwords via TAM's pdadmin command line tool completes sub-second?

This was the dilemma I faced yesterday and it seemed to happen "all of a sudden".

The password change via pdadmin confirmed that we weren't talking about a DB2 or LDAP issue. They had recently been tuned and everything was performing as expected. So I did what any decent IT professional would do - the on/off approach.

I stopped and restarted the TAM Policy Server - for no real reason, to be honest. I then stopped and restarted the TDI RMI Dispatcher that was "hosting" the TAM Combo Adapter.

The next password change that came through the system took approximately 6 minutes to complete. A 50% improvement, but nowhere near good enough in an environment hosting 300,000 users eager to change their passwords - the system would still be a bottleneck.

The TAM Combo Adapter was v5.0.5 and the Assembly Line for the password change seemed to be as straightforward as an Assembly Line can be. At no point could I find:
if (systemAgeInMonths > 12) {
   sleep(600);   // doze for 600 seconds - a good ten minutes
}

Google was no help either. Nobody on the planet had experienced this sudden slowness and documented it!
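
In hindsight, a crude timing trace dropped into one of the Assembly Line's script hooks would at least have shown whether the delay sat inside the Assembly Line itself or somewhere further downstream. Something along these lines would do - the work and task objects are the standard TDI script objects, and the attribute name and log message here are purely illustrative:

// Rough sketch only - not the adapter's actual code.
var start = java.lang.System.currentTimeMillis();

// ... the password change step does its thing here ...

var elapsed = java.lang.System.currentTimeMillis() - start;
task.logmsg("Password change for " + work.getString("eruid") + " took " + elapsed + " ms");

Not pretty, but it narrows the search considerably.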

Running out of ideas, I decided to upgrade the adapter to v5.0.9. A quick download, install, copy of the TAMComboUtils.jar file to the TDI directory structure and a restart of the TDI RMI Dispatcher was all it took.

Moments later, a password change came into the system. How long would it take? I guessed it would be around 6 minutes again but I was wrong. A handful of milliseconds!

I quickly looked through the supporting documentation for the adapter to see what fixes were incorporated. No mention of password change "slowness". And no mention in the v5.0.6, v5.0.7 and v5.0.8 release notes!

So what's the moral of the story?

Developers don't just fix bugs when a new release of code is coming out. They frequently "tidy" things in a non-functional way, and that tidying can have positive impacts!

For those of you still running your ITIM v5.0 environments with an old TAM Combo Adapter - upgrade now. You won't regret it. Maybe. Unless one of those "tidy" code changes has a negative impact on your environment. In that case - forget you ever read this post!

Of course, the approach adopted for this particular scenario can also be applied to all software environments. Keeping up-to-date with the latest patches is a good thing to do even though it can be time-consuming. But how do you do that in a manner which meets all your Change Management processes? Can you really patch WebSphere without having to perform a full regression test of all your J2EE applications? Can you upgrade your LDAP without a full regression test of all applications that make use of its services?

The answer is that you can, but you have to be convincing when it comes to getting the authorities to take that particular leap of faith. And therein lies the problem. Far too often, sensible environment management is put into the "too hard" bucket, not for technical reasons but for political ones.

When will we learn?

NOTE: As my experience yesterday can attest, sometimes the best way to get patches applied is in a Sev 1, emergency scenario. Don't go creating those scenarios though!

Sunday, May 02, 2010

Infosec 2010 Review

I managed to spend a little bit of time this week taking in the spectacle that is Infosec at Earls Court, London.

The first thing that struck me about the event was the vastness of it. The number of exhibitors was really quite staggering and the quality of some of the stands was very impressive indeed.

If I had a particular interest in anti-virus, one-time password generation via SMS and hardened USB storage devices, I would've been in heaven as these particular products were over-represented at the event. But how do anti-virus vendors differentiate themselves at an event like Infosec? Well, by giving away an Apple iPad each day! That did the trick for Symantec.

It was interesting to see the various approaches that vendors took to attracting visitors to their stand. The guys at Qualys found a great way of attracting large numbers of visitors by giving away free beer from mid-afternoon onwards. Others tackled the marketing problem by using scantily-clad girls. I'm not sure what the link between scantily-clad girls and security software is but then again, Grolsch and Qualys don't seem to have a natural partnership either.

Wandering around the event invites the sales men and women to accost you. Free pens, stress balls and T-shirts will be thrust into your possession, along with white papers and brochures, of course. These freebies have already been passed to my daughters and the white papers and brochures still haven't been read four days later!

And here's the crux of Infosec. For many people, I'm guessing the event has got very little to do with sales leads and more to do with CISSP CPEs and networking.

Of course, Infosec isn't just about vendors trying to showcase their wares. There were plenty of seminars, speeches, workshops and other types of get-togethers. But again, there seemed to be little by way of new or innovative ideas being discussed. A discussion on "Mash-Ups" reminded me of a similar discussion five years ago on "Process Orchestration"! A discussion on "The Cloud" reminded me of a similar discussion five years ago on "Application Service Providers". In other words, the terms may have changed but the concepts have not.

Of course, this doesn't mean the experience wasn't fruitful. It is still a great event, well worth attending, and a fine opportunity to meet other vendors and suppliers and catch up with what they are getting up to. I, for one, am already looking forward to Infosec 2011.