Managing IT Projects in a Sea of Change

If the world would just slow down for a minute, maybe we could get something done. The biggest problem with IT projects is that nothing in the project is ever stable. In other industries, there are elements which are stable enough that they can be counted on for the duration of the project. Carpentry skills and carpentry tools have had very few changes in years. Yet, in a major IT project, almost every significant aspect of the project is likely to change during the execution of the project.

The design that never ends
Over the 12-24 months that a major IT project is likely to take, many elements are likely to change. The design of the target application itself is almost always impossible to freeze.

In a commercial application, the choice of target functionality is often driven by a combination of factors, including feedback from current clients, beta test users and focus groups. The target platforms may change from both a hardware and a software perspective. Imagine how many applications in the last two years have had major platform target changes as marketing directors ran into the development department and declared that unless it's in-the-cloud-enabled there's no point in writing it.

In an internal system, the demands of the client change as they watch the world change around them. Thanks to the Internet, never before in the history of this industry have so many people had so much access to so much information about software. While the IT department is trying to pin down the exact requirements, users at virtually any level of the organization can find similar commercial applications or articles about such applications on their Web browser.

Hardware changes
Only a few years ago it seemed quite reasonable to amortize PC hardware over a three-year period. Some accounting firms still have formulas for amortizing PCs over five years (I'm not kidding; I ran into this only last week). As everyone knows, computer hardware is obsolete almost before it's delivered. Over an 18-month period, the hardware a project started on is invariably not the hardware it will be installed on. Since most users are upgrading their hardware to something larger, faster and better, this is often not a critical concern. It becomes a problem when the programmers' hardware is upgraded first and the software ends up working only on machines like theirs.

Operating system changes
The switch from client Windows applications to browser-based ones was a traumatic experience for most developers, and not everyone has recovered from it yet. Whenever a new version of an operating system is released and users across the enterprise begin to upgrade, programmers suddenly find themselves scrambling for compatibility. Any operating-system-provided layer becomes an issue, including data connectivity and display-based modules.

Programming platform changes
While everything else is changing, the language the development is being done in is changing too. We suffered a 60-day delay in a new version of one of our projects earlier this year when our development environment received an upgrade which we deemed essential to fixing some known problems, but which required a rewrite of our installation scripts and a whole new testing cycle. This kind of thing can happen at any time, and it needs to be managed.

Database upgrades
With so many IT projects now counting on commercially supplied databases, any upgrade to these products can have a dramatic effect. Do the drivers exist for all the other parts of the project to connect properly to this database? Should you stay backward compatible with earlier versions? Should you forgo the upgrade and risk missing the new features and, worse, the fixes that are surely in this version?

Third-party products
If your environment depends on any third-party products, then the problem is compounded geometrically with each third-party product that's included.

Remember, everything needs to come together at one time. The operating system needs to be compatible with the hardware (noticed the PCs that carry "We're not quite ready for that version of Windows" notices?). The database needs to have the features required to support the project and needs drivers which can be accessed by the hardware, operating system, programming environment and third-party tools. The third-party tools need to be compatible with the hardware, operating system, programming environment and database. The programming tools need to have the drivers for all of the other components working simultaneously. Finally, the design needs to be something that can be delivered by the tools selected in the first place.
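To make those dependencies explicit, one option is to keep a machine-checkable compatibility matrix: each component records which versions of its neighbours it is known to work with, and a proposed stack passes only if every declared requirement is satisfied at once. A minimal sketch in Python, with purely illustrative component names and version numbers:

```python
# Hypothetical compatibility matrix: each known-good (component, version)
# pair lists the other pairs it has been verified against.
KNOWN_GOOD = {
    ("os", "11"): {("db", "5.2"), ("tools", "3.1")},
    ("db", "5.2"): {("tools", "3.1")},
}

def stack_is_consistent(stack):
    """Check every component's declared requirements against the rest
    of the proposed stack; one mismatch anywhere fails the whole stack."""
    chosen = set(stack.items())
    for component, needed in KNOWN_GOOD.items():
        if component in chosen and not needed <= chosen:
            return False
    return True
```

The point is not the four-line checker but the discipline: when any one element changes, the matrix tells you immediately which of the others must be revalidated.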

It's a wonder that the combination ever comes together, and when one element changes, all the others might have to change as well.

If you're managing a major IT project, keeping the ship moving in this sea of potential mines is no trivial task. Still, there are some things you can do and some methods of operating which can mitigate the risk.

For hardware changes, the biggest problem comes when the developers have their hardware upgraded first and the software is then written to fit inside their premium-sized boxes. When the application is deployed, smaller machines suddenly need to be upgraded in order to function with the new software. One way to mitigate this risk is through the QA process early in development. Let the programmers write and compile software on the fastest machine in the building, but make them test it on the slowest. There's nothing like making a programmer stare at a slow screen to give them incentive to speed things up.
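One way to make the slowest-machine rule enforceable rather than aspirational is a performance budget in the QA suite. A minimal Python sketch, where the budget figure is hypothetical and would be calibrated by timing key operations on the lowest-spec machine the application must support:

```python
import time

# Hypothetical budget, calibrated on the slowest supported machine; a
# developer's fast box must still land within it for the test to pass.
SLOWEST_MACHINE_BUDGET_SECONDS = 2.0

def within_budget(task, budget=SLOWEST_MACHINE_BUDGET_SECONDS):
    """Time a task and report whether it fits the budget measured on
    the slowest supported hardware."""
    start = time.perf_counter()
    task()
    elapsed = time.perf_counter() - start
    return elapsed <= budget, elapsed
```

Wired into automated QA early in development, checks like this surface "works on my machine" performance regressions long before deployment.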

For operating system changes, the best recommendation is "go slow". Each change, even when it seems minor, needs to be tested in the field, and every change can have a cascading effect. In our office, for example, we found while testing the latest release of Windows that it wouldn't support some of our older but still perfectly functional monitors. Rather than upgrade those machines, we've left them on earlier versions of Windows until the monitors can be upgraded or the hardware retired. The cost of upgrading hardware to make the latest operating system work can be prohibitive, so the decision isn't to be taken lightly. Establish when the project starts just where the client operating systems can reasonably be expected to be over the coming year, and write to the lowest common denominator.

For a programming platform, the problems can be more severe. If fixes inside the programming platform mean that an upgrade is required, run a test development platform first to ensure your existing code will port to the new version. Earlier this year we were obliged to move to an upgraded programming environment only to find that some of our existing code had run afoul of an unreported bug in the new version. Since the new version was not 100% backward compatible for this particular feature, we were obliged to spend several days writing workarounds. If you're sure you can upgrade the development environment successfully, assess the effort required, if any, to upgrade essential components for legacy users in the field. Your other option is to continue with the older version for some time, with workarounds where required.
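One way to keep platform upgrades deliberate is to gate builds on an allow-list of vetted versions, so a new release joins the list only after the separate test platform has passed a full regression cycle. A Python sketch, with illustrative version numbers standing in for whatever environment you actually develop in:

```python
import sys

# Hypothetical allow-list: versions the code base has been
# regression-tested against on the test development platform.
TESTED_VERSIONS = {"3.10", "3.11", "3.12"}

def platform_is_vetted(version=None):
    """Return True if the running platform version has been through the
    regression cycle; builds on unvetted versions should fail loudly."""
    if version is None:
        version = f"{sys.version_info.major}.{sys.version_info.minor}"
    return version in TESTED_VERSIONS
```

A check like this at the top of the build script turns a silent, accidental upgrade into an explicit decision someone has to sign off on.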

Database upgrades should also be approached with caution. Database providers are regularly upgrading existing versions and releasing new ones. In one recent experience, we moved to an upgraded version of a database product which fixed several minor problems and introduced a far worse one that we had to struggle with. If possible, patches, upgrades and, in particular, new versions should be run first on a mirror testing site to ensure they won't cause any problems; in particular, the availability of new drivers for all the other components is often not assured. Ideally, choose a database platform at the beginning of the project and stick with it until the first version is released; if you must upgrade the database, make it coincide with a new version of the application.
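Part of the mirror-site check can be automated by running the same smoke queries against both the current database and the upgraded mirror and comparing the results. A minimal sketch using Python's built-in sqlite3 module as a stand-in for whatever database product you actually use:

```python
import sqlite3

def results_match(current_conn, mirror_conn, query):
    """Run the same smoke query against the current database and the
    upgraded mirror; any disagreement blocks the cutover."""
    return (current_conn.execute(query).fetchall()
            == mirror_conn.execute(query).fetchall())
```

In practice you would run a battery of such queries covering the features the application depends on, against real connection objects for both sites, before any upgrade reaches production.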

For third-party products, the rule is to use the fewest sources possible. If third-party products are essential to the application, then access to the source code is something that should be considered. The worst-case scenario is that a small third-party supplier goes out of business just before the application can be released, leaving you with outstanding technical problems, no upgrade path and no support.

Finally, the most ephemeral aspect of any IT project: the design. I'd love to tell you to firm up the design before you start, carve it in stone and bring it like Moses down from the mountain to be seen by all, but it's just not realistic. With a world that changes as fast as ours does, the chances that even the best-written design specs (and most of them aren't) could survive the 18 months or so the project will last are almost nil. The best way we've found to mitigate this risk is to modularize the design; ITIL processes have come to adopt this kind of thinking. IT projects are almost always design/build projects anyway, meaning the building goes on while the designing is still occurring. If you can modularize the coding of the system by function, and then use basic good programming practices for standards like object classes (dialog boxes, field types, etc.), then tossing functionality into or out of the design becomes much more palatable. Around here we often say "it's better to look integrated than be integrated", and if only for the design problem alone, this is very true.
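The modularize-by-function idea can be sketched as a registry of feature modules that all follow the same base class, so adding or dropping a piece of functionality touches one registration rather than rippling through the whole system. All names here are illustrative:

```python
# Registry of feature modules; the application shell only knows about
# this dictionary, never about individual features.
REGISTRY = {}

class Module:
    """Base class every functional module follows, so shared standards
    (dialogs, field types, etc.) live in one place."""
    name = "base"

    def run(self):
        raise NotImplementedError

def register(cls):
    """Adding or removing a feature is one registration line, not a
    change scattered across the rest of the design."""
    REGISTRY[cls.name] = cls
    return cls

@register
class ReportsModule(Module):
    name = "reports"

    def run(self):
        return "reports module ran"
```

When the spec shifts mid-project, a module built this way can be pulled from or added to the release without the rest of the application noticing, which is most of what "looking integrated" requires.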

Regardless of whether your project is a commercial or an internal application, working in a sea of change makes the project risky. Keep yourself open to change when it occurs, but if you're to be successful, start working on contingency plans to mitigate the risk from the first day of the project.